WorldWideScience

Sample records for region-of-interest based methods

  1. AN ITERATIVE SEGMENTATION METHOD FOR REGION OF INTEREST EXTRACTION

    Directory of Open Access Journals (Sweden)

    Volkan CETIN

    2013-01-01

    In this paper, a method is presented for applications that include mammographic image segmentation and region-of-interest extraction. Segmentation is a critical and difficult stage in computer-aided detection systems. Although the presented segmentation method was developed for mammographic images, it can be used for any medical image that shares the statistical characteristics of mammograms. Fundamentally, the method consists of iterative automatic thresholding and masking operations applied to the original or enhanced mammograms. The effect of image enhancement on the segmentation process was also examined, using a version of histogram equalization for enhancement. The results show that the enhanced version of the proposed segmentation method is preferable because of its higher success rate.
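
    The abstract does not give the exact update rule, but iterative automatic thresholding of this kind is commonly implemented as an isodata-style loop. The sketch below is a generic Python illustration under that assumption; the function name, tolerance, and convergence criterion are ours, not the authors'.

```python
import numpy as np

def isodata_threshold(image, tol=0.5, max_iter=100):
    """Generic iterative (isodata-style) thresholding: split pixels at t,
    then move t to the midpoint of the two class means until it stabilizes."""
    t = float(image.mean())
    for _ in range(max_iter):
        fg, bg = image[image > t], image[image <= t]
        if fg.size == 0 or bg.size == 0:
            break
        t_new = 0.5 * (fg.mean() + bg.mean())
        if abs(t_new - t) < tol:
            return float(t_new)
        t = t_new
    return float(t)

# Masking step: keep only pixels above the converged threshold.
# img = ...                      # 2-D array, e.g. an (enhanced) mammogram
# roi = np.where(img > isodata_threshold(img), img, 0)
```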

  2. Diffusion weighted imaging for differentiating benign from malignant orbital tumors: Diagnostic performance of the apparent diffusion coefficient based on region of interest selection method

    Energy Technology Data Exchange (ETDEWEB)

    Xu, Xiao Quan; Hu, Hao Hu; Su, Guo Yi; Liu, Hu; Shi, Hai Bin; Wu, Fei Yun [First Affiliated Hospital of Nanjing Medical University, Nanjing (China)

    2016-09-15

    To evaluate the differences in apparent diffusion coefficient (ADC) measurements based on three different region-of-interest (ROI) selection methods, and to compare their diagnostic performance in differentiating benign from malignant orbital tumors. Diffusion-weighted imaging data of sixty-four patients with orbital tumors (33 benign and 31 malignant) were retrospectively analyzed. Two readers independently measured the ADC values using three ROI selection methods: whole-tumor (WT), single-slice (SS), and reader-defined small sample (RDSS). The differences in ADC values (ADC-ROI_WT, ADC-ROI_SS, and ADC-ROI_RDSS) between the benign and malignant groups were compared using unpaired t tests. Receiver operating characteristic curves were used to determine and compare diagnostic ability. The ADC measurement times were compared using ANOVA, and measurement reproducibility was assessed using the Bland-Altman method and the intra-class correlation coefficient (ICC). The malignant group showed significantly lower ADC-ROI_WT, ADC-ROI_SS, and ADC-ROI_RDSS than the benign group (all p < 0.05). The areas under the curve showed no significant difference when ADC-ROI_WT, ADC-ROI_SS, and ADC-ROI_RDSS were used as the differentiating index (all p > 0.05). ROI_SS and ROI_RDSS required comparable measurement times (p > 0.05), both significantly shorter than ROI_WT (p < 0.05). ROI_SS showed the best reproducibility of the three ROI methods (mean difference ± limits of agreement between the two readers, 0.022 [-0.080 to 0.123] × 10⁻³ mm²/s; ICC, 0.997). ADC values based on all three ROI selection methods can help to differentiate benign from malignant orbital tumors. The measurement-time, reproducibility, and diagnostic-ability results suggest that the ROI_SS method is potentially useful for clinical practice.
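
    As a side note, the Bland-Altman agreement statistics used here reduce to a few lines. This minimal sketch (our own, with illustrative names) computes the bias and 95% limits of agreement between the two readers' measurements.

```python
import numpy as np

def bland_altman(reader1, reader2):
    """Bias and 95% limits of agreement between two raters' measurements."""
    d = np.asarray(reader1, dtype=float) - np.asarray(reader2, dtype=float)
    bias = d.mean()
    half_width = 1.96 * d.std(ddof=1)  # 95% limits assume ~normal differences
    return bias, (bias - half_width, bias + half_width)
```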

  3. Region of interest evaluation of SPECT image reconstruction methods using a realistic brain phantom

    International Nuclear Information System (INIS)

    Xia, Weishi; Glick, S.J.; Soares, E.J.

    1996-01-01

    A realistic numerical brain phantom, developed by Zubal et al., was used for a region-of-interest evaluation of the accuracy and noise variance of the following SPECT reconstruction methods: (1) Maximum-Likelihood reconstruction using the Expectation-Maximization (ML-EM) algorithm; (2) an EM algorithm using ordered subsets (OS-EM); (3) a re-scaled block-iterative EM algorithm (RBI-EM); and (4) a filtered backprojection algorithm that combines the Bellini method for attenuation compensation with an iterative spatial blurring correction based on the frequency-distance principle (FDP). The Zubal phantom was made from segmented MRI slices of the brain, so that neuro-anatomical structures are well defined and indexed. Small regions of interest (ROIs) from the white matter, from grey matter in the center of the brain, and from grey matter in the peripheral area of the brain were selected for the evaluation. Photon attenuation and distance-dependent collimator blurring were modeled. Multiple independent noise realizations were generated for two different count levels. The simulation study showed that the ROI bias measured for the EM-based algorithms decreased as the iteration number increased, and that the OS-EM and RBI-EM algorithms (16 and 64 subsets were used) achieved accuracy equivalent to the ML-EM algorithm at about the same noise variance, with a much smaller number of iterations. The Bellini-FDP restoration algorithm converged quickly and required less computation per iteration. The ML-EM algorithm had a slightly better ROI bias vs. variance trade-off than the other algorithms.
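
    For reference, the EM variants compared here share the standard ML-EM multiplicative update (OS-EM and RBI-EM apply it over subsets of the projection data). With activity estimate λ_j in voxel j, system matrix a_ij, and measured counts y_i, the update is:

```latex
\lambda_j^{(k+1)} \;=\; \frac{\lambda_j^{(k)}}{\sum_i a_{ij}}
\sum_i a_{ij}\,\frac{y_i}{\sum_{j'} a_{ij'}\,\lambda_{j'}^{(k)}}
```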

  4. Shifting from region of interest (ROI) to voxel-based analysis in human brain mapping

    International Nuclear Information System (INIS)

    Astrakas, Loukas G.; Argyropoulou, Maria I.

    2010-01-01

    Current clinical studies involve multidimensional high-resolution images containing an overwhelming amount of structural and functional information. Analyzing such a wealth of information is becoming increasingly difficult, yet it is necessary in order to improve diagnosis, treatment and healthcare. Voxel-wise analysis is a class of modern image-processing methods that has gained popularity in the medical field. It has replaced manual region of interest (ROI) analysis and provides tools for making statistical inferences at the voxel level. The introduction of voxel-based analysis software on all modern commercial scanners allows clinical use of these techniques. This review explains the main principles, advantages and disadvantages of these methods of image analysis. (orig.)

  5. Mesh-to-raster region-of-interest-based nonrigid registration of multimodal images.

    Science.gov (United States)

    Tatano, Rosalia; Berkels, Benjamin; Deserno, Thomas M

    2017-10-01

    Region of interest (RoI) alignment in medical images plays a crucial role in diagnostics, procedure planning, treatment, and follow-up. Frequently, a model is represented as a triangulated mesh, while the patient data are provided by computed axial tomography scanners as pixel or voxel data. Previously, we presented a 2-D method for curve-to-pixel registration. This paper contributes (i) a general mesh-to-raster framework to register RoIs in multimodal images; (ii) a 3-D surface-to-voxel application; and (iii) a comprehensive quantitative evaluation in 2-D using ground truth (GT) provided by the simultaneous truth and performance level estimation (STAPLE) method. The registration is formulated as a minimization problem whose objective consists of a data term, which involves the signed distance function of the RoI from the reference image, and a higher-order elastic regularizer for the deformation. The evaluation is based on quantitative light-induced fluoroscopy (QLF) and digital photography (DP) of decalcified teeth. STAPLE is computed on 150 image pairs from 32 subjects, each showing one corresponding tooth in both modalities. The RoI in each image is manually marked by three experts (900 curves in total). In the QLF-DP setting, our approach significantly outperforms the mutual information-based registration algorithm implemented with the Insight Segmentation and Registration Toolkit and Elastix.

  6. Introducing Alternative-Based Thresholding for Defining Functional Regions of Interest in fMRI

    Directory of Open Access Journals (Sweden)

    Jasper Degryse

    2017-04-01

    In fMRI research, one often aims to examine activation in specific functional regions of interest (fROIs). Current statistical methods tend to localize fROIs inconsistently, because they focus on avoiding the detection of false activation. In this context, however, not missing true activation is equally important. In this study, we explored the potential of an alternative-based thresholding (ABT) procedure, in which evidence against the null hypothesis of no effect and evidence against a prespecified alternative hypothesis are both measured, to control false positives and false negatives directly. The procedure was validated in the context of localizer tasks on simulated brain images and on a real data set of 100 runs per subject. Voxels categorized as active with ABT can be confidently included in the definition of the fROI, while inactive voxels can be confidently excluded. Additionally, the ABT method complements classic null hypothesis significance testing with valuable information by distinguishing voxels that show evidence against both the null and the alternative from voxels for which the alternative hypothesis cannot be rejected despite a lack of evidence against the null.
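
    A minimal sketch of the two-test logic described above, assuming voxelwise effect estimates with known standard errors and large degrees of freedom (z-tests); the effect size delta, the alpha level, and the labels are illustrative choices of ours, not the authors' exact procedure.

```python
import numpy as np
from scipy import stats

def abt_classify(effect, se, delta, alpha=0.05):
    """Test each voxel against H0: effect = 0 and against a prespecified
    alternative H1: effect = delta, then label the voxel accordingly."""
    p0 = stats.norm.sf(effect / se)              # evidence against H0
    p1 = stats.norm.cdf((effect - delta) / se)   # evidence against H1
    reject0, reject1 = p0 < alpha, p1 < alpha
    labels = np.full(np.shape(effect), "uncertain", dtype=object)
    labels[reject0 & ~reject1] = "active"        # include in the fROI
    labels[~reject0 & reject1] = "inactive"      # exclude from the fROI
    labels[reject0 & reject1] = "nonnull-but-small"  # real effect below delta
    return labels
```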

  7. A filtering method for signal equalization in region-of-interest fluoroscopy

    International Nuclear Information System (INIS)

    Robert, Normand; Komljenovic, Philip T; Rowlands, J. A.

    2002-01-01

    A method to significantly reduce the exposure-area product in fluoroscopy using a pre-patient region-of-interest (ROI) attenuator is presented. The attenuator has a thin central region and a gradually increasing thickness away from the center. It is shown that the unwanted brightening artifact caused by the attenuator can be eliminated by attenuating the low spatial frequencies in the detected image using digital image processing techniques. An investigation of the best image processing method to correct for the presence of the attenuator is undertaken. The correction procedure selected is suitable for use with real-time image processors, and the ROI attenuator can be permitted to move during image acquisition. Images of an anthropomorphic chest phantom acquired in the presence of the ROI attenuator using an x-ray image intensifier/video chain are corrected to illustrate the clinical feasibility of our approach.
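
    The paper compares several correction filters; purely as a rough illustration of the general idea (suppressing the low-spatial-frequency brightening while keeping detail), here is a homomorphic-style sketch of our own, not the authors' selected procedure. The Gaussian width is an arbitrary assumption.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def equalize_roi_brightening(img, sigma=40.0):
    """Estimate the smooth gain field left by the ROI attenuator with a
    heavy Gaussian blur, then divide it out; high-frequency (diagnostic)
    content passes through, while the low-frequency brightening flattens."""
    low = gaussian_filter(img.astype(float), sigma)
    return img / np.maximum(low, 1e-6) * low.mean()
```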

  8. Quantitative sacroiliac scintigraphy. The effect of method of selection of region of interest

    International Nuclear Information System (INIS)

    Davis, M.C.; Turner, D.A.; Charters, J.R.; Golden, H.E.; Ali, A.; Fordham, E.W.

    1984-01-01

    Various authors have advocated quantitative methods of evaluating bone scintigrams to detect sacroiliitis, while others have not found such methods useful. Many explanations for this disagreement have been offered, including differences in the method of case selection, ethnicity, gender, and previous drug therapy. It would appear that one of the most important impediments to consistent results is variability in the selection of sacroiliac joint and reference regions of interest (ROIs). The effect of ROI selection would seem particularly important because of the normal variability of radioactivity within the reference regions that have been used (sacrum, spine, iliac wing) and the inhomogeneity of activity in the SI joints. We have investigated the effect of ROI selection using five different methods representative of, though not necessarily identical to, those found in the literature. Each method produced unique mean indices that differed between patients with ankylosing spondylitis (AS) and controls. The method of Ayres (19) proved superior (largest mean difference, smallest variance), but none worked well as a diagnostic tool because of substantial overlap of the distributions of indices of the patient and control groups. We conclude that ROI selection is important in determining results, and that quantitative scintigraphic methods in general are not effective tools for diagnosing AS. Among the possible factors limiting success, difficulty in selecting a stable reference area seems of particular importance.

  9. A Local Region of Interest Imaging Method for Electrical Impedance Tomography with Internal Electrodes

    Directory of Open Access Journals (Sweden)

    Hyeuknam Kwon

    2013-01-01

    Electrical impedance tomography (EIT) is a very attractive functional imaging method despite its low sensitivity and resolution. The use of internal electrodes with conventional reconstruction algorithms has not been enough to enhance image resolution and accuracy in the region of interest (ROI). We propose a local ROI imaging method with internal electrodes, developed from a careful analysis of the sensitivity matrix, that is designed to reduce the sensitivity of the voxels outside the local region and optimize the sensitivity of the voxels inside it. We perform numerical simulations and physical measurements to demonstrate the localized EIT imaging method. In preliminary results with multiple objects, we show the benefits of using an internal electrode and the improved resolution due to the local ROI image reconstruction method. The sensitivity is further increased by allowing the surface electrodes to be unevenly spaced, with a higher density of surface electrodes near the ROI. We also analyse how much the image quality is improved, using several performance parameters for comparison. Although these aspects have not yet been studied in depth, the results convincingly show an improvement in local sensitivity in images obtained with an internal electrode compared with a standard reconstruction method.

  10. Medical Image Compression Based on Region of Interest, With Application to Colon CT Images

    National Research Council Canada - National Science Library

    Gokturk, Salih

    2001-01-01

    ...., in diagnostically important regions. This paper discusses a hybrid model of lossless compression in the region of interest, with high-rate, motion-compensated, lossy compression in other regions...

  11. Sliding Window-Based Region of Interest Extraction for Finger Vein Images

    Science.gov (United States)

    Yang, Lu; Yang, Gongping; Yin, Yilong; Xiao, Rongyang

    2013-01-01

    Region of Interest (ROI) extraction is a crucial step in an automatic finger vein recognition system. The aim of ROI extraction is to decide which part of the image is suitable for finger vein feature extraction. This paper proposes a finger vein ROI extraction method that is robust to finger displacement and rotation. First, we determine the middle line of the finger, which is used to correct the image skew. Then, a sliding window is used to detect the phalangeal joints and thereby ascertain the height of the ROI. Finally, for the skew-corrected image with this height, we obtain the ROI by using the internal tangents of the finger edges as the left and right boundaries. The experimental results show that the proposed method extracts the ROI more accurately and effectively than other methods, and thus improves the performance of the finger vein identification system. In addition, to acquire high-quality finger vein images during the capture process, we propose eight criteria for finger vein capture from different aspects; these criteria should be helpful to some extent for finger vein capture. PMID:23507824
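
    To make the sliding-window step concrete: phalangeal joints typically transmit more near-infrared light, so they appear as bright horizontal bands. A toy Python version of that detection (the window size and single-peak assumption are ours) might look like:

```python
import numpy as np

def detect_joint_row(image, win=21):
    """Slide a window down the rows of a skew-corrected finger image and
    return the row index of maximal mean brightness (a candidate joint)."""
    rows = image.mean(axis=1)                            # row-wise brightness
    smooth = np.convolve(rows, np.ones(win) / win, mode="same")
    return int(np.argmax(smooth))
```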

  12. NEBNext Direct: A Novel, Rapid, Hybridization-Based Approach for the Capture and Library Conversion of Genomic Regions of Interest.

    Science.gov (United States)

    Emerman, Amy B; Bowman, Sarah K; Barry, Andrew; Henig, Noa; Patel, Kruti M; Gardner, Andrew F; Hendrickson, Cynthia L

    2017-07-05

    Next-generation sequencing (NGS) is a powerful tool for genomic studies, translational research, and clinical diagnostics that enables the detection of single nucleotide polymorphisms, insertions and deletions, copy number variations, and other genetic variations. Target enrichment technologies improve the efficiency of NGS by only sequencing regions of interest, which reduces sequencing costs while increasing coverage of the selected targets. Here we present NEBNext Direct®, a hybridization-based, target-enrichment approach that addresses many of the shortcomings of traditional target-enrichment methods. This approach features a simple, 7-hr workflow that uses enzymatic removal of off-target sequences to achieve a high specificity for regions of interest. Additionally, unique molecular identifiers are incorporated for the identification and filtering of PCR duplicates. The same protocol can be used across a wide range of input amounts, input types, and panel sizes, enabling NEBNext Direct to be broadly applicable across a wide variety of research and diagnostic needs. © 2017 John Wiley & Sons, Inc.

  13. Repeatability and variation of region-of-interest methods using quantitative diffusion tensor MR imaging of the brain

    International Nuclear Information System (INIS)

    Hakulinen, Ullamari; Brander, Antti; Ryymin, Pertti; Öhman, Juha; Soimakallio, Seppo; Helminen, Mika; Dastidar, Prasun; Eskola, Hannu

    2012-01-01

    Diffusion tensor imaging (DTI) is increasingly used in various diseases as a clinical tool for assessing the integrity of the brain's white matter. Reduced fractional anisotropy (FA) and an increased apparent diffusion coefficient (ADC) are nonspecific findings in most pathological processes affecting the brain's parenchyma. At present, there is no gold standard for validating diffusion measures, which depend on the scanning protocols, the software methods and the observers. Therefore, the normal variation and repeatability effects on commonly derived measures should be carefully examined. Thirty healthy volunteers (mean age 37.8 years, SD 11.4) underwent DTI of the brain with 3T MRI. Region-of-interest (ROI) -based measurements were calculated at eleven anatomical locations in the pyramidal tracts, corpus callosum and frontobasal area. Two ROI-based methods, the circular method (CM) and the freehand method (FM), were compared. Both methods were also compared by performing measurements on a DTI phantom. The intra- and inter-observer variability (coefficient of variation, or CV%) and repeatability (intra-class correlation coefficient, or ICC) were assessed for the FA and ADC values obtained with both ROI methods. The mean FA value across all regions was 0.663 with the CM and 0.621 with the FM. For both methods, FA was highest in the splenium of the corpus callosum. The mean ADC value was 0.727 × 10⁻³ mm²/s with the CM and 0.747 × 10⁻³ mm²/s with the FM, and both methods found the ADC to be lowest in the corona radiata. The CV percentages of the derived measures were < 13% with the CM and < 10% with the FM. In most of the regions, the ICCs were excellent or moderate for both methods. With the CM, the highest ICC for FA was in the posterior limb of the internal capsule (0.90), and with the FM, it was in the corona radiata (0.86). For ADC, the highest ICC was found in the genu of the corpus callosum (0.93) with the CM and in the uncinate

  14. Video-based respiration monitoring with automatic region of interest detection

    NARCIS (Netherlands)

    Janssen, R.J.M.; Wang, Wenjin; Moço, A.; de Haan, G.

    2016-01-01

    Vital signs monitoring is ubiquitous in clinical environments and emerging in home-based healthcare applications. Still, since current monitoring methods require uncomfortable sensors, respiration rate remains the least measured vital sign. In this paper, we propose a video-based respiration

  15. Cloud solution for histopathological image analysis using region of interest based compression.

    Science.gov (United States)

    Kanakatte, Aparna; Subramanya, Rakshith; Delampady, Ashik; Nayak, Rajarama; Purushothaman, Balamuralidhar; Gubbi, Jayavardhana

    2017-07-01

    Recent technological gains have led to the adoption of innovative cloud-based solutions in the medical imaging field. Once a medical image is acquired, it can be viewed, modified, annotated and shared on many devices. This advancement is mainly due to the introduction of cloud computing in the medical domain. Tissue pathology images are complex and are normally collected at different focal lengths using a microscope. A single whole-slide image contains many multi-resolution images stored in a pyramidal structure, with the highest-resolution image at the base and the smallest thumbnail image at the top of the pyramid. The highest-resolution image is used for tissue pathology diagnosis and analysis. Transferring and storing such huge images is a big challenge. Compression is a very useful and effective technique to reduce the size of these images. As pathology images are used for diagnosis, no information can be lost during compression (lossless compression). A novel method of extracting the tissue region and applying lossless compression to this region and lossy compression to the empty regions is proposed in this paper. The resulting compression ratio, together with lossless compression of the tissue region, is in an acceptable range, allowing efficient storage and transmission to and from the cloud.
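
    Given a tissue mask, the split-compression idea reduces to a few lines. The following Python sketch with Pillow illustrates the concept only (PNG for the lossless part, JPEG for the lossy part); it is not the paper's codec, and the mask is assumed to come from the tissue-extraction step.

```python
import numpy as np
from PIL import Image

def hybrid_compress(img, tissue_mask, out_prefix):
    """Store the tissue ROI losslessly and the empty background lossily.
    Assumes a 2-D uint8 grayscale array and a boolean mask of equal shape."""
    img = img.astype(np.uint8)
    roi = np.where(tissue_mask, img, 0).astype(np.uint8)
    background = np.where(tissue_mask, 0, img).astype(np.uint8)
    Image.fromarray(roi).save(out_prefix + "_roi.png")                   # lossless
    Image.fromarray(background).save(out_prefix + "_bg.jpg", quality=30)  # lossy
```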

  16. Region of interest based robust watermarking scheme for adaptation in small displays

    Science.gov (United States)

    Vivekanandhan, Sapthagirivasan; K. B., Kishore Mohan; Vemula, Krishna Manohar

    2010-02-01

    Nowadays, multimedia data can be easily replicated and its copyright is not legally protected. Cryptography does not allow the use of digital data in its original form, and once the data are decrypted, they are no longer protected. Here we propose a new doubly protected digital image watermarking algorithm that embeds the watermark image blocks into adjacent regions of the host image itself, based on their block-similarity coefficients, and that is robust to various noise effects such as Poisson noise, Gaussian noise and random noise, thereby providing double security against noise and hackers. As instrumentation applications require highly accurate data, the watermark image that is extracted back from the watermarked image must be immune to various noise effects. Our results provide a better extracted image compared with existing techniques, and in addition we have resized the watermarked images for various displays. Adaptive resizing for displays of various sizes has been experimented with, wherein we crop the required information in a frame and zoom it for a large display, or resize it for a small display using a threshold value; in either case the background is not given much importance, and it is only the foreground object that gains importance, which will surely be helpful in performing surgeries.

  17. Region of interest and windowing-based progressive medical image delivery using JPEG2000

    Science.gov (United States)

    Nagaraj, Nithin; Mukhopadhyay, Sudipta; Wheeler, Frederick W.; Avila, Ricardo S.

    2003-05-01

    An important telemedicine application is the perusal of CT scans (in digital format) from a central server housed in a healthcare enterprise, across a bandwidth-constrained network, by radiologists situated at remote locations for medical diagnostic purposes. It is generally expected that a viewing station respond to an image request by displaying the image within 1-2 seconds. Owing to limited bandwidth, it may not be possible to deliver the complete image in such a short period of time with traditional techniques. In this paper, we investigate progressive image delivery solutions using JPEG 2000. An estimate of the time taken at different network bandwidths is performed to compare their relative merits. We further make use of the fact that most medical images are 12-16 bits, but are ultimately converted to an 8-bit image via windowing for display on the monitor. We propose a windowing progressive RoI technique to exploit this, and investigate JPEG 2000 RoI-based compression after applying a favorite or default window setting to the original image. Subsequent requests for different RoIs and window settings would then be processed at the server. For the windowing progressive RoI mode, we report a 50% reduction in transmission time.
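
    For readers unfamiliar with windowing, the 12-16 bit to 8-bit conversion the scheme exploits is a simple linear clamp. A Python sketch follows; the center/width values in the usage line are illustrative, not from the paper.

```python
import numpy as np

def apply_window(img, center, width):
    """Map a 12-16 bit image to 8 bits for display: values within
    [center - width/2, center + width/2] span 0..255, the rest clip."""
    lo = center - width / 2.0
    scaled = (img.astype(float) - lo) / float(width) * 255.0
    return np.clip(scaled, 0, 255).astype(np.uint8)

# e.g. display = apply_window(ct_slice, center=1064, width=400)  # illustrative
```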

  18. Quantification of carotid artery plaque stability with multiple region of interest based ultrasound strain indices and relationship with cognition

    Science.gov (United States)

    Meshram, N. H.; Varghese, T.; Mitchell, C. C.; Jackson, D. C.; Wilbrand, S. M.; Hermann, B. P.; Dempsey, R. J.

    2017-08-01

    Vulnerability and instability of carotid artery plaque have been assessed based on strain variations using noninvasive ultrasound imaging. We previously demonstrated that carotid plaques with higher strain indices in a region of interest (ROI) correlated with lower cognition in patients, probably due to cerebrovascular emboli arising from these unstable plaques. This work attempts to characterize the strain distribution throughout the entire plaque region instead of being restricted to a single localized ROI. Multiple ROIs are selected within the entire plaque region, based on thresholds determined by the maximum and average strains in the entire plaque, enabling the generation of additional relevant strain indices. Ultrasound strain imaging of carotid plaques was performed on 60 human patients using an 18L6 transducer coupled to a Siemens Acuson S2000 system to acquire radiofrequency data over several cardiac cycles. Patients also underwent a battery of neuropsychological tests under a protocol based on National Institute of Neurological Disorders and Stroke and Canadian Stroke Network guidelines. Correlation of strain indices with a composite cognitive index of executive function revealed a negative association relating high strain to poor cognition. Patients grouped into high- and low-cognition groups were then classified using these additional strain indices. One of our newer indices, the average L-1 norm with plaque (AL1NWP), presented significantly improved correlation with executive function when compared with our previously reported maximum accumulated strain indices. An optimal combination of three of the new indices generated classifiers of patient cognition with an area under the curve (AUC) of 0.880, 0.921 and 0.905 for all (n = 60), symptomatic (n = 33) and asymptomatic patients (n = 27), whereas classifiers using maximum accumulated strain indices alone provided AUC values of 0.817, 0.815 and 0

  19. MRI-determined liver proton density fat fraction, with MRS validation: Comparison of regions of interest sampling methods in patients with type 2 diabetes.

    Science.gov (United States)

    Vu, Kim-Nhien; Gilbert, Guillaume; Chalut, Marianne; Chagnon, Miguel; Chartrand, Gabriel; Tang, An

    2016-05-01

    To assess the agreement between published magnetic resonance imaging (MRI)-based region-of-interest (ROI) sampling methods, using liver mean proton density fat fraction (PDFF) as the reference standard. This retrospective, internal review board-approved study was conducted in 35 patients with type 2 diabetes. Liver PDFF was measured by magnetic resonance spectroscopy (MRS) using a stimulated-echo acquisition mode sequence and by MRI using a multiecho spoiled gradient-recalled echo sequence at 3.0T. ROI sampling methods reported in the literature were reproduced, and the liver mean PDFF obtained by whole-liver segmentation was used as the reference standard. Intraclass correlation coefficients (ICCs), Bland-Altman analysis, repeated-measures analysis of variance (ANOVA), and paired t-tests were performed. The ICC between MRS and MRI-PDFF was 0.916. Bland-Altman analysis showed excellent intermethod agreement, with a bias of -1.5 ± 2.8%. The repeated-measures ANOVA found no systematic variation of PDFF among the nine liver segments. The correlation between liver mean PDFF and the ROI sampling methods was very good to excellent (0.873 to 0.975). Paired t-tests revealed significant differences (P < 0.05) for sampling methods that exclusively or predominantly sampled the right lobe. Significant correlations with mean PDFF were found for sampling methods that included a higher number of segments, a total area equal to or larger than 5 cm², or both lobes (P = 0.001, 0.023, and 0.002, respectively). MRI-PDFF quantification methods should sample each liver segment in both lobes and include a total surface area equal to or larger than 5 cm² to provide a close estimate of the liver mean PDFF. © 2015 Wiley Periodicals, Inc.

  20. Region-of-interest volumetric visual hull refinement

    KAUST Repository

    Knoblauch, Daniel; Kuester, Falko

    2010-01-01

    This paper introduces a region-of-interest visual hull refinement technique, based on flexible voxel grids for volumetric visual hull reconstructions. Region-of-interest refinement is based on a multipass process, beginning with a focussed visual

  1. Half-Fan-Based Intensity-Weighted Region-of-Interest Imaging for Low-Dose Cone-Beam CT in Image-Guided Radiation Therapy.

    Science.gov (United States)

    Yoo, Boyeol; Son, Kihong; Pua, Rizza; Kim, Jinsung; Solodov, Alexander; Cho, Seungryong

    2016-10-01

    With the increased use of computed tomography (CT) in clinics, dose reduction is the most important feature people seek when considering new CT techniques or applications. We developed an intensity-weighted region-of-interest (IWROI) imaging method in an exact half-fan geometry to reduce the imaging radiation dose to patients in cone-beam CT (CBCT) for image-guided radiation therapy (IGRT). While dose reduction is highly desirable, preserving the high-quality images of the ROI is also important for target localization in IGRT. An intensity-weighting (IW) filter made of copper was mounted in place of a bowtie filter on the X-ray tube unit of an on-board imager (OBI) system such that the filter can substantially reduce radiation exposure to the outer ROI. In addition to mounting the IW filter, the lead-blade collimation of the OBI was adjusted to produce an exact half-fan scanning geometry for a further reduction of the radiation dose. The chord-based rebinned backprojection-filtration (BPF) algorithm in circular CBCT was implemented for image reconstruction, and a humanoid pelvis phantom was used for the IWROI imaging experiment. The IWROI image of the phantom was successfully reconstructed after beam-quality correction, and it was registered to the reference image within an acceptable level of tolerance. Dosimetric measurements revealed that the dose is reduced by approximately 61% in the inner ROI and by 73% in the outer ROI compared to the conventional bowtie filter-based half-fan scan. The IWROI method substantially reduces the imaging radiation dose and provides reconstructed images with an acceptable level of quality for patient setup and target localization. The proposed half-fan-based IWROI imaging technique can add a valuable option to CBCT in IGRT applications.

  2. Apparent diffusion coefficient measurement in glioma: Influence of region-of-interest determination methods on apparent diffusion coefficient values, interobserver variability, time efficiency, and diagnostic ability.

    Science.gov (United States)

    Han, Xu; Suo, Shiteng; Sun, Yawen; Zu, Jinyan; Qu, Jianxun; Zhou, Yan; Chen, Zengai; Xu, Jianrong

    2017-03-01

    To compare four methods of region-of-interest (ROI) placement for apparent diffusion coefficient (ADC) measurements in distinguishing low-grade gliomas (LGGs) from high-grade gliomas (HGGs). Two independent readers measured ADC parameters using four ROI methods (single-slice [single-round, five-round and freehand] and whole-volume) in 43 patients (20 LGGs, 23 HGGs) who had undergone 3.0 Tesla diffusion-weighted imaging, and the time required for each method of ADC measurement was recorded. Intraclass correlation coefficients (ICCs) were used to assess the interobserver variability of the ADC measurements. Mean and minimum ADC values and the time required were compared using paired Student's t-tests. All ADC parameters (the mean/minimum ADC values of the three single-slice methods; the mean, minimum, standard deviation, skewness, kurtosis, 10th and 25th percentiles, median and maximum of the whole-volume method) were correlated with tumor grade (low versus high) by unpaired Student's t-tests. Discriminative ability was determined by receiver operating characteristic curves. All ADC measurements except the minimum, skewness, and kurtosis of the whole-volume ROI differed significantly between LGGs and HGGs (all P < 0.05) across the ROI determination methods. Whole-volume histogram analysis did not yield better results than the single-slice methods and took longer. The mean ADC value derived from the single-round ROI is the most optimal parameter for differentiating LGGs from HGGs. Level of Evidence: 3. J. Magn. Reson. Imaging 2017;45:722-730. © 2016 International Society for Magnetic Resonance in Medicine.

  3. First clinical experience with a multiple region of interest registration and correction method in radiotherapy of head-and-neck cancer patients

    International Nuclear Information System (INIS)

    Beek, Suzanne van; Kranen, Simon van; Mencarelli, Angelo; Remeijer, Peter; Rasch, Coen; Herk, Marcel van; Sonke, Jan-Jakob

    2010-01-01

    Purpose: To discuss the first clinical experience with a multiple region of interest (mROI) registration and correction method for high-precision radiotherapy of head-and-neck cancer patients. Materials and methods: 12-13 3D rectangular ROIs were automatically placed around bony structures on the planning CT scans (n = 50 patients) and individually registered to subsequent CBCT scans. mROI registration was used to quantify global and local setup errors. The time required to perform mROI registration was compared with that of a previously used single-ROI method. The number of scans with residual local setup errors exceeding 5 mm/5 deg. (warnings) was scored, together with the frequency of ROIs exceeding these limits for three or more consecutive imaging fractions (systematic errors). Results: In 40% of the CBCT scans, one or more ROI registrations exceeded 5 mm/5 deg. Most warnings were seen in the ROI 'hyoid': 31% of the rotation warnings and 14% of the translation warnings. Systematic errors led to 52 consultations with the treating physician. The preparation and registration time was similar for both registration methods. Conclusions: The mROI registration method is easy to use with little extra workload, provides additional information on local setup errors, and helps to select patients for re-planning.

  4. Biomedical image representation approach using visualness and spatial information in a concept feature space for interactive region-of-interest-based retrieval.

    Science.gov (United States)

    Rahman, Md Mahmudur; Antani, Sameer K; Demner-Fushman, Dina; Thoma, George R

    2015-10-01

    This article presents an approach to biomedical image retrieval that maps image regions to local concepts and represents images in a weighted entropy-based concept feature space. The term "concept" refers to perceptually distinguishable visual patches that are identified locally in image regions and can be mapped to a glossary of imaging terms. The visual significance (e.g., visualness) of a concept is measured as the Shannon entropy of the pixel values in its image patches and is used to refine the feature vector. Moreover, the system can assist the user in interactively selecting a region of interest (ROI) and searching for similar image ROIs. Finally, a spatial verification step is used in postprocessing to improve retrieval results based on location information. The hypothesis that such approaches improve biomedical image retrieval is validated through experiments on two different data sets, which were collected from open access biomedical literature.
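
    The entropy-based "visualness" weight is straightforward to compute; a small Python sketch follows (the bin count is an arbitrary choice of ours).

```python
import numpy as np

def patch_visualness(patch, bins=32):
    """Shannon entropy (in bits) of the pixel-value distribution of a patch,
    used as a weight for the concept the patch maps to."""
    hist, _ = np.histogram(patch, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]                      # drop empty bins (0 log 0 := 0)
    return float(-(p * np.log2(p)).sum())
```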

  5. Simple and efficient method for region of interest value extraction from picture archiving and communication system viewer with optical character recognition software and macro program.

    Science.gov (United States)

    Lee, Young Han; Park, Eun Hae; Suh, Jin-Suck

    2015-01-01

    The objectives are: 1) to introduce a simple and efficient method for extracting region of interest (ROI) values from a Picture Archiving and Communication System (PACS) viewer using optical character recognition (OCR) software and a macro program, and 2) to evaluate the accuracy of this method with a PACS workstation. This module was designed to extract the ROI values on the images of the PACS, and created as a development tool by using open-source OCR software and an open-source macro program. The principal processes are as follows: (1) capture a region of the ROI values as a graphic file for OCR, (2) recognize the text from the captured image by OCR software, (3) perform error-correction, (4) extract the values including area, average, standard deviation, max, and min values from the text, (5) reformat the values into temporary strings with tabs, and (6) paste the temporary strings into the spreadsheet. This principal process was repeated for the number of ROIs. The accuracy of this module was evaluated on 1040 recognitions from 280 randomly selected ROIs of the magnetic resonance images. The input times of ROIs were compared between conventional manual method and this extraction module-assisted input method. The module for extracting ROI values operated successfully using the OCR and macro programs. The values of the area, average, standard deviation, maximum, and minimum could be recognized and error-corrected with AutoHotkey-coded module. The average input times using the conventional method and the proposed module-assisted method were 34.97 seconds and 7.87 seconds, respectively. A simple and efficient method for ROI value extraction was developed with open-source OCR and a macro program. Accurate inputs of various numbers from ROIs can be extracted with this module. The proposed module could be applied to the next generation of PACS or existing PACS that have not yet been upgraded. Copyright © 2015 AUR. Published by Elsevier Inc. All rights reserved.
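
    The authors built their module in AutoHotkey; purely for illustration, the same capture-OCR-parse loop can be sketched in Python. Everything here (pytesseract/Pillow, the bounding box, the regex, and the assumed order of the statistics) is our assumption, not the published module.

```python
import re
from PIL import ImageGrab      # screen capture (Windows/macOS)
import pytesseract             # wrapper around the Tesseract OCR engine

def grab_roi_values(bbox):
    """OCR the PACS viewer's ROI-statistics panel and parse the numbers.
    bbox = (left, top, right, bottom) in screen pixels."""
    text = pytesseract.image_to_string(ImageGrab.grab(bbox))
    numbers = [float(x) for x in re.findall(r"-?\d+(?:\.\d+)?", text)]
    # assumed display order: area, average, SD, max, min
    return dict(zip(["area", "average", "sd", "max", "min"], numbers))

# Tab-separated output, ready to paste into a spreadsheet row:
# print("\t".join(str(v) for v in grab_roi_values((100, 200, 400, 320)).values()))
```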

  6. A Low-Power Wireless Image Sensor Node with Noise-Robust Moving Object Detection and a Region-of-Interest Based Rate Controller

    Science.gov (United States)

    2017-03-01

    …from both environment and hardware further reduces the transmission energy with negligible computation and memory overhead. The rate controller… [Keywords:] moving object detection, region-of-interest, rate control. [Introduction:] In wireless image sensor nodes for moving object surveillance, energy efficiency can be… …noise, reliable moving object detection is required to avoid unnecessary transmission of background scenes [1]. Transmission energy can be further…

  7. Influence of parameter settings in voxel-based morphometry 8 using DARTEL and region-of-interest on reproducibility in gray matter volumetry.

    Science.gov (United States)

    Goto, M; Abe, O; Aoki, S; Hayashi, N; Miyati, T; Takao, H; Matsuda, H; Yamashita, F; Iwatsubo, T; Mori, H; Kunimatsu, A; Ino, K; Yano, K; Ohtomo, K

    2015-01-01

    To investigate whether the reproducibility of gray matter volumetry is influenced by the parameter settings of VBM 8 using Diffeomorphic Anatomical Registration Through Exponentiated Lie Algebra (DARTEL), with region-of-interest (ROI) analyses. We prepared three-dimensional T1-weighted magnetic resonance images (3D-T1WIs) of 21 healthy subjects. All subjects were imaged on each of five MRI systems. Voxel-based morphometry 8 (VBM 8) and the WFU PickAtlas software were used for gray matter volumetry. The bilateral ROI labels used were those provided as default settings with the software: Frontal Lobe, Hippocampus, Occipital Lobe, Orbital Gyrus, Parietal Lobe, Putamen, and Temporal Lobe. All 3D-T1WIs were segmented to gray matter with six parameters of VBM 8, each parameter having between three and eight selectable levels. Reproducibility was evaluated as the standard deviation (mm³) of the values measured across the five MRI systems. Reproducibility was influenced by the 'Bias regularization (BiasR)', 'Bias FWHM', and 'De-noising filter' settings, but not by the 'MRF weighting', 'Sampling distance', or 'Warping regularization' settings. Reproducibility under BiasR was influenced by the ROI: superior reproducibility was observed in Frontal Lobe with the BiasR1 setting, and in Hippocampus, Parietal Lobe, and Putamen with the BiasR3*, BiasR1, and BiasR5 settings, respectively. The reproducibility of gray matter volumetry was influenced by the parameter settings in VBM 8 using DARTEL and by the ROI. In multi-center studies, the use of appropriate settings in VBM 8 with DARTEL results in a reduced scanner effect.

  8. Estimation of kidneys and urinary bladder doses based on the region of interest in 18fluorine-fluorodeoxyglucose positron emission tomography/computed tomography examination: a preliminary study.

    Science.gov (United States)

    Mustapha, Farida Aimi; Bashah, Farahnaz Ahmad Anwar; Yassin, Ihsan M; Fathinul Fikri, Ahmad Saad; Nordin, Abdul Jalil; Abdul Razak, Hairil Rashmizal

    2017-06-01

    The kidneys and urinary bladder are common sites of physiologic uptake of 18-fluorine-fluorodeoxyglucose (18F-FDG), which increases the exposure of these organs to low-energy ionizing radiation. Accurate measurement of organ dose is vital, as 18F-FDG is directly exposed to the organs. Organ dose from 18F-FDG PET is conventionally calculated from the injected 18F-FDG activity with the application of dose coefficients established by the International Commission on Radiological Protection (ICRP). This dose calculation technique is not a direct measurement on the organs; rather, the dose is calculated from the total injected activity of the radiotracer prior to scanning. This study estimated the 18F-FDG dose to the kidneys and urinary bladder in whole-body positron emission tomography/computed tomography (PET/CT) examinations by comparing the dose derived from the total injected activity of 18F-FDG (calculated dose) with the dose derived from the organ activity within a region of interest (ROI) (measured dose). Nine subjects were injected intravenously with a mean 18F-FDG activity of 292.42 MBq prior to whole-body PET/CT scanning. Kidney and urinary bladder doses were estimated using two approaches, the total injected activity of 18F-FDG and the organ activity concentration of 18F-FDG within the drawn ROIs, applying the dose coefficients for 18F-FDG recommended in ICRP 80 and ICRP 106. The mean percentage difference between calculated and measured dose ranged from 98.95% to 99.29% for the kidneys based on ICRP 80, and from 98.96% to 99.32% based on ICRP 106. For the urinary bladder, the mean percentage difference between calculated and measured dose was 97.08% and 97.27% based on ICRP 80, and 96.99% and 97.28% based on ICRP 106. The range of the mean percentage difference between calculated and measured organ doses derived from ICRP 106 and ICRP 80 was 17.00% to 40.00% for the kidney dose and 18.46% to 18.75% for the urinary bladder dose. There is a significant
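
    The "calculated dose" arm of the comparison is simple arithmetic: injected activity multiplied by an ICRP organ dose coefficient. A tiny Python sketch follows; the coefficient values are placeholders for illustration only and must be taken from the actual ICRP 80/106 tables.

```python
# Illustrative placeholder coefficients; look up the real ICRP 80/106 values.
DOSE_COEFF_mGy_per_MBq = {
    "kidneys": 0.017,
    "urinary_bladder_wall": 0.130,
}

def calculated_organ_dose_mGy(injected_MBq, organ):
    """Organ dose = injected activity x ICRP dose coefficient."""
    return injected_MBq * DOSE_COEFF_mGy_per_MBq[organ]

print(calculated_organ_dose_mGy(292.42, "kidneys"))  # mean activity in the study
```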

  9. Searching Trajectories by Regions of Interest

    KAUST Repository

    Shang, Shuo

    2017-03-22

    With the increasing availability of moving-object tracking data, trajectory search is increasingly important. We propose and investigate a novel query type named trajectory search by regions of interest (TSR query). Given an argument set of trajectories, a TSR query takes a set of regions of interest as a parameter and returns the trajectory in the argument set with the highest spatial-density correlation to the query regions. This type of query is useful in many popular applications such as trip planning and recommendation, and location based services in general. TSR query processing faces three challenges: how to model the spatial-density correlation between query regions and data trajectories, how to effectively prune the search space, and how to effectively schedule multiple so-called query sources. To tackle these challenges, a series of new metrics are defined to model spatial-density correlations. An efficient trajectory search algorithm is developed that exploits upper and lower bounds to prune the search space and that adopts a query-source selection strategy, as well as integrates a heuristic search strategy based on priority ranking to schedule multiple query sources. The performance of TSR query processing is studied in extensive experiments based on real and synthetic spatial data.

  10. Searching Trajectories by Regions of Interest

    KAUST Repository

    Shang, Shuo; chen, Lisi; Jensen, Christian S.; Wen, Ji-Rong; Kalnis, Panos

    2017-01-01

    With the increasing availability of moving-object tracking data, trajectory search is increasingly important. We propose and investigate a novel query type named trajectory search by regions of interest (TSR query). Given an argument set of trajectories, a TSR query takes a set of regions of interest as a parameter and returns the trajectory in the argument set with the highest spatial-density correlation to the query regions. This type of query is useful in many popular applications such as trip planning and recommendation, and location based services in general. TSR query processing faces three challenges: how to model the spatial-density correlation between query regions and data trajectories, how to effectively prune the search space, and how to effectively schedule multiple so-called query sources. To tackle these challenges, a series of new metrics are defined to model spatial-density correlations. An efficient trajectory search algorithm is developed that exploits upper and lower bounds to prune the search space and that adopts a query-source selection strategy, as well as integrates a heuristic search strategy based on priority ranking to schedule multiple query sources. The performance of TSR query processing is studied in extensive experiments based on real and synthetic spatial data.

  11. First-Pass Angiography in Mice Using FDG-PET: A Simple Method of Deriving the Cardiovascular Transit Time Without the Need of Region-of-Interest Drawing.

    Science.gov (United States)

    Wu, Hsiao-Ming; Kreissl, Michael C; Schelbert, Heinrich R; Ladno, Waldemar; Prins, Mayumi; Shoghi-Jadid, Kooresh; Chatziioannou, Arion; Phelps, Michael E; Huang, Sung-Cheng

    2005-10-01

    In this study, we developed a simple and robust semi-automatic method to measure the right-ventricle to left-ventricle (RV-to-LV) transit time (TT) in mice using 2-[18F]fluoro-2-deoxy-D-glucose (FDG) positron emission tomography (PET). The accuracy of the method was first evaluated using a 4-D digital dynamic mouse phantom. The RV-to-LV TTs of twenty-nine mouse studies were then measured using the new method and compared with those obtained from the conventional ROI-drawing method. The results showed that the new method correctly separated the different structures (e.g., RV, lung, and LV) in the PET images and generated a corresponding time-activity curve (TAC) for each structure. The RV-to-LV TTs obtained from the new method and the ROI method were not statistically different (P = 0.20; r = 0.76). We expect this fast and robust method to be applicable to studies of the pathophysiology of cardiovascular diseases using small animal models such as rats and mice.

  12. The Evolution of the ATLAS Region of Interest Builder

    CERN Document Server

    Love, Jeremy; The ATLAS collaboration

    2018-01-01

    The ATLAS trigger system is deployed to reduce the event rate from the Large Hadron Collider bunch-crossing frequency of 40 MHz to 1 kHz for permanent storage, using a tiered system. In the PC trigger farm, decisions are seeded by Regions of Interest found by the custom hardware trigger system. The Regions of Interest are collected and distributed to the farm at 100 kHz by the ATLAS Region of Interest Builder. The evolution of the Region of Interest Builder from a crate of custom VME-based electronics to a commodity PC hosting a single custom PCIe card has been undertaken to increase the system's performance and flexibility and to ease maintenance. The functionality and performance of the Region of Interest Builder, previously only possible using FPGAs and a custom-backplane VME crate, has now been implemented in a multi-threaded C++ software library interfaced to a single PCIe card with one Xilinx Virtex-6 FPGA. The PC-based system was installed in the ATLAS data acquisition system between the 2015 and 2016 data taking...

  13. Filtered region of interest cone-beam rotational angiography

    International Nuclear Information System (INIS)

    Schafer, Sebastian; Noeel, Peter B.; Walczak, Alan M.; Hoffmann, Kenneth R.

    2010-01-01

    Purpose: Cone-beam rotational angiography (CBRA) is widely used in modern clinical settings. In a number of procedures, the area of interest is considerably smaller than the field of view (FOV) of the detector, subjecting the patient to potentially unnecessary x-ray dose. The authors therefore propose a filter-based method to reduce the dose in the regions of low interest while supplying high image quality in the region of interest (ROI). Methods: For such procedures, the authors propose a method of filtered region of interest (FROI)-CBRA. In the authors' approach, a gadolinium filter with a circular central opening is placed into the x-ray beam during image acquisition. The central region is imaged with high contrast, while the peripheral regions are subjected to a substantially lower intensity and dose through beam filtering. The resulting images contain a high-contrast/intensity ROI, a low-contrast/intensity peripheral region, and a transition region in between. To equalize the intensities of the two regions, the first projection of the acquisition is performed both with and without the filter in place. The equalization relationship, based on Beer's law, is established through linear regression using corresponding filtered and nonfiltered data. The transition region is equalized based on radial profiles. Results: Evaluations in 2D and 3D show no visible difference between conventional and FROI-CBRA projection images and reconstructions in the ROI. CNR evaluations show similar image quality in the ROI, with a reduced CNR in the reconstructed peripheral region. In all filtered projection images, the scatter fraction inside the ROI was reduced. Theoretical and experimental dose evaluations show a considerable dose reduction; using a ROI half the original FOV reduces the dose by 60% for a filter thickness of 1.29 mm. Conclusions: These results indicate the potential of FROI-CBRA to reduce the dose to the patient while supplying the physician with the desired
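
    The Beer's-law equalization described in the Methods amounts to a linear fit in the log-intensity domain between the filtered and unfiltered versions of the first projection. A Python sketch under that reading (the function names and the positive-intensity assumption are ours, not the authors' code):

```python
import numpy as np

def fit_equalization(filtered, unfiltered, peripheral_mask):
    """Fit log(unfiltered) ~ a*log(filtered) + b on peripheral pixels of the
    first projection, then return a function that rescales later filtered
    projections. Intensities must be strictly positive."""
    x = np.log(filtered[peripheral_mask]).ravel()
    y = np.log(unfiltered[peripheral_mask]).ravel()
    a, b = np.polyfit(x, y, 1)          # linear regression in log domain
    return lambda img: np.exp(a * np.log(img) + b)
```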

  14. Embedding the shapes of regions of interest into a Clinical Document Architecture document.

    Science.gov (United States)

    Minh, Nguyen Hai; Yi, Byoung-Kee; Kim, Il Kon; Song, Joon Hyun; Binh, Pham Viet

    2015-03-01

    Sharing a medical image visually annotated by a region of interest with a remotely located specialist for consultation is a good practice. It may, however, require a special-purpose (and most likely expensive) system to send and view them, which is an unfeasible solution in developing countries such as Vietnam. In this study, we design and implement interoperable methods based on the HL7 Clinical Document Architecture and the eXtensible Markup Language Stylesheet Language for Transformation standards to seamlessly exchange and visually present the shapes of regions of interest using web browsers. We also propose a new integration architecture for a Clinical Document Architecture generator that enables embedding of regions of interest and simultaneous auto-generation of corresponding style sheets. Using the Clinical Document Architecture document and style sheet, a sender can transmit clinical documents and medical images together with coordinate values of regions of interest to recipients. Recipients can easily view the documents and display embedded regions of interest by rendering them in their web browser of choice. © The Author(s) 2014.

  15. Region-of-interest volumetric visual hull refinement

    KAUST Repository

    Knoblauch, Daniel

    2010-01-01

    This paper introduces a region-of-interest visual hull refinement technique based on flexible voxel grids for volumetric visual hull reconstructions. Region-of-interest refinement is based on a multipass process, beginning with a focussed visual hull reconstruction that yields a first 3D approximation of the target, followed by a region-of-interest estimation tasked with identifying features of interest, which in turn are used to locally refine the voxel grid and extract a higher-resolution surface representation for those regions. This approach is illustrated for the reconstruction of avatars for use in tele-immersion environments, where the head and hand regions are of higher interest. To allow reproducibility and direct comparison, a publicly available data set for human visual hull reconstruction is used. This paper shows that region-of-interest reconstruction of the target is faster than, and visually comparable to, higher-resolution focussed visual hull reconstructions. The approach reduces the amount of data generated by the reconstruction, allowing faster post-processing, such as rendering or networking of the surface voxels. Reconstruction speeds support smooth interactions between the avatar and the virtual environment, while the improved resolution of the facial region and hands creates a higher degree of immersion and potentially impacts the perception of body language, facial expressions and eye-to-eye contact. Copyright © 2010 by the Association for Computing Machinery, Inc.

  16. Dynamic magnetic resonance imaging of cervical lymph nodes in patients with oral cancer. Utility of the small region of interest method in evaluating the architecture of cervical lymph nodes

    International Nuclear Information System (INIS)

    Oomori, Miwako; Fukunari, Fumiko; Kagawa, Toyohiro; Okamura, Kazuhiko; Yuasa, Kenji

    2008-01-01

    Our purpose was to evaluate the utility of the small region of interest (ROI) method to detect the architecture of cervical lymph nodes and the specificity of time-intensity curves for tissue present in cervical lymph nodes. Specimens were taken from 17 lymph nodes of eight patients (ten sides of the neck) with oral squamous cell carcinoma who underwent dynamic contrast-enhanced magnetic resonance imaging (MRI) and neck dissection between 2005 and 2007 at our hospital. Two methods of constructing time-intensity curves were compared: the conventional method that uses relatively large ROIs, and a new method that uses small ROIs. Curves made with the small ROI method were then compared to histopathological findings for dissected lymph nodes. The small ROI method allowed differences in signal intensity to be discerned at the tissue level, which was not possible with the conventional large ROI method. Curves for normal lymphoid tissue tended to be type I, those for tumor cells tended to be type II, and those for keratinization/necrosis tended to be types III and IV, indicating that time-intensity curves can be specific to tissue type within lymph nodes. The small ROI method was useful for evaluation of the architecture of cervical lymph nodes. (author)

  17. Region of interest-based versus whole-lung segmentation-based approach for MR lung perfusion quantification in 2-year-old children after congenital diaphragmatic hernia repair

    Energy Technology Data Exchange (ETDEWEB)

    Weis, M.; Sommer, V.; Hagelstein, C.; Schoenberg, S.O.; Neff, K.W. [Heidelberg University, Institute of Clinical Radiology and Nuclear Medicine, University Medical Center Mannheim, Medical Faculty Mannheim, Mannheim (Germany); Zoellner, F.G. [Heidelberg University, Computer Assisted Clinical Medicine, Medical Faculty Mannheim, Mannheim (Germany); Zahn, K. [University of Heidelberg, Department of Paediatric Surgery, University Medical Center Mannheim, Medical Faculty Mannheim, Mannheim (Germany); Schaible, T. [Heidelberg University, Department of Paediatrics, University Medical Center Mannheim, Medical Faculty Mannheim, Mannheim (Germany)

    2016-12-15

    With a region of interest (ROI)-based approach, 2-year-old children who have undergone congenital diaphragmatic hernia (CDH) repair show reduced MR lung perfusion values on the ipsilateral side compared with the contralateral side. This study evaluates whether these results can be reproduced with whole-lung segmentation, and whether there are differences between the ROI-based and whole-lung measurements. Using dynamic contrast-enhanced (DCE) MRI, pulmonary blood flow (PBF), pulmonary blood volume (PBV) and mean transit time (MTT) were quantified in 30 children after CDH repair. The quantification results of an ROI-based approach (six cylindrical ROIs generated on five adjacent slices per lung side) and a whole-lung segmentation approach were compared. In both approaches, PBF and PBV were significantly reduced on the ipsilateral side (p always < 0.0001). In the ipsilateral lungs, the PBF of the ROI-based and the whole-lung segmentation-based approach was equal (p = 0.50). In the contralateral lungs, the ROI-based approach significantly overestimated PBF in comparison with the whole-lung segmentation approach, by approximately 9.5% (p = 0.0013). MR lung perfusion in 2-year-old children after CDH repair is significantly reduced ipsilaterally. In the contralateral lung, the ROI-based approach significantly overestimates perfusion, which can be explained by the exclusion of the most ventral parts of the lung. Whole-lung segmentation should therefore be preferred. (orig.)

  18. Region of interest-based versus whole-lung segmentation-based approach for MR lung perfusion quantification in 2-year-old children after congenital diaphragmatic hernia repair

    International Nuclear Information System (INIS)

    Weis, M.; Sommer, V.; Hagelstein, C.; Schoenberg, S.O.; Neff, K.W.; Zoellner, F.G.; Zahn, K.; Schaible, T.

    2016-01-01

With a region of interest (ROI)-based approach 2-year-old children after congenital diaphragmatic hernia (CDH) show reduced MR lung perfusion values on the ipsilateral side compared to the contralateral. This study evaluates whether results can be reproduced by segmentation of whole-lung and whether there are differences between the ROI-based and whole-lung measurements. Using dynamic contrast-enhanced (DCE) MRI, pulmonary blood flow (PBF), pulmonary blood volume (PBV) and mean transit time (MTT) were quantified in 30 children after CDH repair. Quantification results of an ROI-based (six cylindrical ROIs generated from five adjacent slices per lung side) and a whole-lung segmentation approach were compared. In both approaches PBF and PBV were significantly reduced on the ipsilateral side (p always <0.0001). In ipsilateral lungs, PBF of the ROI-based and the whole-lung segmentation-based approach was equal (p=0.50). In contralateral lungs, the ROI-based approach significantly overestimated PBF in comparison to the whole-lung segmentation approach by approximately 9.5 % (p=0.0013). MR lung perfusion in 2-year-old children after CDH is significantly reduced ipsilaterally. In the contralateral lung, the ROI-based approach significantly overestimates perfusion, which can be explained by exclusion of the most ventral parts of the lung. Therefore whole-lung segmentation should be preferred. (orig.)

  19. Filtered region of interest cone-beam rotational angiography

    Energy Technology Data Exchange (ETDEWEB)

Schafer, Sebastian; Noeel, Peter B.; Walczak, Alan M.; Hoffmann, Kenneth R. [Department of Mechanical Engineering, Department of Neurosurgery, and Department of Computer Science, SUNY at Buffalo, 3435 Main Street, Buffalo, New York 14214 (United States); Toshiba Stroke Research Center, SUNY at Buffalo, 3435 Main Street, Buffalo, New York 14214 (United States)

    2010-02-15

Purpose: Cone-beam rotational angiography (CBRA) is widely used in modern clinical settings. In a number of procedures, the area of interest is often considerably smaller than the field of view (FOV) of the detector, subjecting the patient to potentially unnecessary x-ray dose. The authors therefore propose a filter-based method to reduce the dose in the regions of low interest, while supplying high image quality in the region of interest (ROI). Methods: For such procedures, the authors propose a method of filtered region of interest (FROI)-CBRA. In the authors' approach, a gadolinium filter with a circular central opening is placed into the x-ray beam during image acquisition. The central region is imaged with high contrast, while peripheral regions are subjected to a substantially lower intensity and dose through beam filtering. The resulting images contain a high contrast/intensity ROI, as well as a low contrast/intensity peripheral region, and a transition region in between. To equalize the two regions' intensities, the first projection of the acquisition is performed with and without the filter in place. The equalization relationship, based on Beer's law, is established through linear regression using corresponding filtered and nonfiltered data. The transition region is equalized based on radial profiles. Results: Evaluations in 2D and 3D show no visible difference between conventional and FROI-CBRA projection images and reconstructions in the ROI. CNR evaluations show similar image quality in the ROI, with a reduced CNR in the reconstructed peripheral region. In all filtered projection images, the scatter fraction inside the ROI was reduced. Theoretical and experimental dose evaluations show a considerable dose reduction; using a ROI half the original FOV reduces the dose by 60% for a filter thickness of 1.29 mm. Conclusions: These results indicate the potential of FROI-CBRA to reduce the dose to the patient while supplying the physician with high image quality in the ROI.
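    The equalization step described above lends itself to a compact sketch: under Beer's law, filtered and unfiltered intensities of the same projection are approximately linearly related in the log domain, so a regression fitted on the first (doubly acquired) projection can rescale the filtered periphery. Function and variable names are illustrative, not the authors' implementation.

```python
import numpy as np

def fit_equalization(filtered, unfiltered, periphery_mask):
    """Linear regression between log-intensities of the same projection
    acquired with and without the filter; Beer's law makes this relation
    approximately linear in the log domain."""
    x = np.log(np.clip(filtered[periphery_mask], 1e-6, None))
    y = np.log(np.clip(unfiltered[periphery_mask], 1e-6, None))
    slope, intercept = np.polyfit(x, y, 1)
    return slope, intercept

def equalize(projection, periphery_mask, slope, intercept):
    """Scale the low-intensity periphery of a filtered projection up to
    the unfiltered intensity range using the fitted relation."""
    out = projection.astype(float).copy()
    p = np.clip(out[periphery_mask], 1e-6, None)
    out[periphery_mask] = np.exp(slope * np.log(p) + intercept)
    return out
```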

  20. Apparent diffusion coefficient measurements in diffusion-weighted magnetic resonance imaging of the anterior mediastinum: inter-observer reproducibility of five different methods of region-of-interest positioning

    Energy Technology Data Exchange (ETDEWEB)

Priola, Adriano Massimiliano; Priola, Sandro Massimo; Parlatano, Daniela; Gned, Dario; Veltri, Andrea [San Luigi Gonzaga University Hospital, Department of Diagnostic Imaging, Regione Gonzole 10, Orbassano, Torino (Italy); Giraudo, Maria Teresa [University of Torino, Department of Mathematics ''Giuseppe Peano'', Torino (Italy); Giardino, Roberto; Ardissone, Francesco [San Luigi Gonzaga University Hospital, Department of Thoracic Surgery, Regione Gonzole 10, Orbassano, Torino (Italy); Ferrero, Bruno [San Luigi Gonzaga University Hospital, Department of Neurology, Regione Gonzole 10, Orbassano, Torino (Italy)

    2017-04-15

To investigate inter-reader reproducibility of five different region-of-interest (ROI) protocols for apparent diffusion coefficient (ADC) measurements in the anterior mediastinum. In eighty-one subjects, on ADC mapping, two readers measured the ADC using five methods of ROI positioning that encompassed the entire tissue (whole tissue volume [WTV], three slices observer-defined [TSOD], single-slice [SS]) or more restricted areas (one small round ROI [OSR], multiple small round ROIs [MSR]). Inter-observer variability was assessed with the intraclass correlation coefficient (ICC), coefficient of variation (CoV), and Bland-Altman analysis. Nonparametric tests were performed to compare the ADC between ROI methods. The measurement time was recorded and compared between ROI methods. All methods showed excellent inter-reader agreement, with the best and worst reproducibility in WTV and OSR, respectively (ICC, 0.937/0.874; CoV, 7.3 %/16.8 %; limits of agreement, ±0.44/±0.77 x 10{sup -3} mm{sup 2}/s). ADC values of OSR and MSR were significantly lower compared to the other methods for both readers (p < 0.001). The SS and OSR methods required less measurement time (14 ± 2 s) compared to the others (p < 0.0001), while the WTV method required the longest measurement time (90 ± 56 and 77 ± 49 s for each reader) (p < 0.0001). All methods demonstrate excellent inter-observer reproducibility, with the best agreement in WTV, although it requires the longest measurement time. (orig.)
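    A minimal sketch of the agreement statistics used above (CoV, Bland-Altman limits of agreement, and a two-way random single-measure ICC(2,1)), assuming one ADC value per subject per reader; the exact ICC variant is an assumption, since the abstract does not specify it.

```python
import numpy as np

def reproducibility(r1, r2):
    """Inter-reader agreement for paired measurements: within-subject
    coefficient of variation, Bland-Altman 95% limits of agreement, and
    ICC(2,1) from the two-way ANOVA mean squares."""
    r1, r2 = np.asarray(r1, float), np.asarray(r2, float)
    diff = r1 - r2
    mean = (r1 + r2) / 2.0
    cov = np.sqrt(np.mean(diff ** 2) / 2.0) / mean.mean()
    loa = (diff.mean() - 1.96 * diff.std(ddof=1),
           diff.mean() + 1.96 * diff.std(ddof=1))
    data = np.stack([r1, r2], axis=1)                 # n subjects x k raters
    n, k = data.shape
    grand = data.mean()
    msr = k * ((data.mean(1) - grand) ** 2).sum() / (n - 1)   # subjects
    msc = n * ((data.mean(0) - grand) ** 2).sum() / (k - 1)   # raters
    mse = (((data - data.mean(1, keepdims=True)
                  - data.mean(0, keepdims=True) + grand) ** 2).sum()
           / ((n - 1) * (k - 1)))
    icc = (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)
    return cov, loa, icc
```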

  1. Cortical region of interest definition on SPECT brain images using X-ray CT registration

    Energy Technology Data Exchange (ETDEWEB)

Tzourio, N.; Sutton, D. (Commissariat a l'Energie Atomique, Orsay (France). Service Hospitalier Frederic Joliot); Joliot, M. (Commissariat a l'Energie Atomique, Orsay (France). Service Hospitalier Frederic Joliot; INSERM, Orsay (France)); Mazoyer, B.M. (Commissariat a l'Energie Atomique, Orsay (France). Service Hospitalier Frederic Joliot; Antenne d'Information Medicale, C.H.U. Bichat, Paris (France)); Charlot, V. (Hopital Louis Mourier, Colombes (France). Service de Psychiatrie); Salamon, G. (CHU La Timone, Marseille (France). Service de Neuroradiologie)

    1992-11-01

We present a method for brain single photon emission computed tomography (SPECT) analysis based on individual registration of anatomical (CT) and functional (133Xe regional cerebral blood flow) images and on the definition of three-dimensional functional regions of interest. Registration of CT and SPECT is performed through adjustment of CT-defined cortex limits to the SPECT image. Regions are defined by sectioning a cortical ribbon on the CT images, copied over the SPECT images and pooled through slices to give 3D cortical regions of interest. The proposed method shows good intra- and interobserver reproducibility (regional intraclass correlation coefficient ≈0.98), and good accuracy in terms of repositioning (≈3.5 mm) as compared to the SPECT image resolution (14 mm). The method should be particularly useful for analysing SPECT studies when variations in brain anatomy (normal or abnormal) must be accounted for. (orig.)

  2. Region-of-interest imaging in cone beam computerized tomography

    International Nuclear Information System (INIS)

    Tam, K.C.

    1996-01-01

    Imaging a sectional region within an object with a detector just big enough to cover the sectional region-of-interest is analyzed. We show that with some suitable choice of scanning configuration and with an innovative method of data combination, all the Radon data can be obtained accurately. The algorithm is mathematically exact, and requires no iterations and no additional measurements. The method can be applied to inspect portions of large industrial objects in industrial imaging, as well as to image portions of human bodies in medical diagnosis

  3. Diffusion tensor imaging of nigral degeneration in Parkinson's disease: A region-of-interest and voxel-based study at 3 T and systematic review with meta-analysis

    Science.gov (United States)

    Schwarz, Stefan T.; Abaei, Maryam; Gontu, Vamsi; Morgan, Paul S.; Bajaj, Nin; Auer, Dorothee P.

    2013-01-01

There is increasing interest in developing a reliable, affordable and accessible disease biomarker of Parkinson's disease (PD) to facilitate disease-modifying PD trials. Imaging biomarkers using magnetic resonance imaging (MRI) and diffusion tensor imaging (DTI) can describe parameters such as fractional anisotropy (FA), mean diffusivity (MD) or apparent diffusion coefficient (ADC). These parameters, when measured in the substantia nigra (SN), have shown not only promising but also varying and controversial results. To clarify the potential diagnostic value of nigral DTI in PD and its dependency on the selection of region-of-interest, we undertook a high-resolution DTI study at 3 T. 59 subjects (32 PD patients, 27 age- and sex-matched healthy controls) were analysed using manual outlining of the SN and substructures, and voxel-based analysis (VBA). We also performed a systematic literature review and meta-analysis to estimate the effect size (DES) of disease-related nigral DTI changes. We found a regional increase in nigral mean diffusivity in PD (mean ± SD, PD 0.80 ± 0.10 vs. controls 0.73 ± 0.06 × 10⁻³ mm²/s, p = 0.002), but no difference using a voxel-based approach. No significant disease effect was seen using meta-analysis of nigral MD changes (10 studies, DES = +0.26, p = 0.17, I² = 30%). None of the nigral regional or voxel-based analyses of this study showed altered fractional anisotropy. Meta-analysis of 11 studies on nigral FA changes revealed a significant PD-induced FA decrease. There was, however, a very large variation in results (I² = 86%) comparing all studies. After exclusion of five studies with unusually high values of nigral FA in the control group, an acceptable heterogeneity was reached, but there was a non-significant disease effect (DES = −0.5, p = 0.22, I² = 28%). The small PD-related nigral MD changes in conjunction with the negative findings on VBA and meta-analysis limit the usefulness of nigral MD measures as a diagnostic biomarker of PD.
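    The meta-analytic quantities quoted above (pooled effect size and I²) can be reproduced with a standard DerSimonian-Laird random-effects model; a minimal sketch, taking per-study effect sizes and their variances as inputs.

```python
import numpy as np

def random_effects_meta(effects, variances):
    """DerSimonian-Laird random-effects pooling: returns the pooled
    effect, its z-score, and the heterogeneity statistic I^2 (%)."""
    d = np.asarray(effects, float)
    v = np.asarray(variances, float)
    w = 1.0 / v                                   # fixed-effect weights
    d_fixed = (w * d).sum() / w.sum()
    q = (w * (d - d_fixed) ** 2).sum()            # Cochran's Q
    df = len(d) - 1
    i2 = max(0.0, (q - df) / q) * 100.0 if q > 0 else 0.0
    tau2 = max(0.0, (q - df) / (w.sum() - (w ** 2).sum() / w.sum()))
    w_star = 1.0 / (v + tau2)                     # random-effects weights
    d_pooled = (w_star * d).sum() / w_star.sum()
    se = np.sqrt(1.0 / w_star.sum())
    return d_pooled, d_pooled / se, i2
```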

  4. Automatic detection of regions of interest in mammographic images

    Science.gov (United States)

    Cheng, Erkang; Ling, Haibin; Bakic, Predrag R.; Maidment, Andrew D. A.; Megalooikonomou, Vasileios

    2011-03-01

This work is a part of our ongoing study aimed at comparing the topology of anatomical branching structures with the underlying image texture. Detection of regions of interest (ROIs) in clinical breast images serves as the first step in the development of an automated system for image analysis and breast cancer diagnosis. In this paper, we have investigated machine learning approaches for the task of identifying ROIs with visible breast ductal trees in a given galactographic image. Specifically, we have developed a boosting-based framework using the AdaBoost algorithm in combination with Haar wavelet features for ROI detection. Twenty-eight clinical galactograms with expert-annotated ROIs were used for training. Positive samples were generated by resampling near the annotated ROIs, and negative samples were generated randomly by image decomposition. Each detected ROI candidate was given a confidence score. Candidate ROIs with spatial overlap were merged and their confidence scores combined. We have compared three strategies for the elimination of false positives. The strategies differed in their approach to combining confidence scores: by summation, averaging, or selecting the maximum score. The strategies were compared based upon the spatial overlap with annotated ROIs. Using a 4-fold cross-validation with the annotated clinical galactographic images, the summation strategy showed the best performance with a 75% detection rate. When combining the top two candidates, selection of the maximum score showed the best performance with a 96% detection rate.
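    A sketch of the candidate-merging step described above: overlapping ROI candidates are grouped and their confidence scores combined by summation, averaging, or maximum. The IoU threshold, greedy grouping and box averaging are assumptions for illustration, not details from the paper.

```python
import numpy as np

def merge_candidates(boxes, scores, strategy="sum", iou_thresh=0.3):
    """Greedily merge spatially overlapping candidates (boxes given as
    (x0, y0, x1, y1)); the merged confidence combines member scores by
    the chosen strategy."""
    def iou(a, b):
        x1, y1 = max(a[0], b[0]), max(a[1], b[1])
        x2, y2 = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0, x2 - x1) * max(0, y2 - y1)
        area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
        return inter / float(area(a) + area(b) - inter)

    combine = {"sum": np.sum, "avg": np.mean, "max": np.max}[strategy]
    order = np.argsort(scores)[::-1]
    used, merged = set(), []
    for i in order:
        if i in used:
            continue
        group = [j for j in order
                 if j not in used and iou(boxes[i], boxes[j]) >= iou_thresh]
        used.update(group)
        gb = np.array([boxes[j] for j in group], float)
        # merged box is the coordinate mean of the group (simplification)
        merged.append((gb.mean(axis=0), combine([scores[j] for j in group])))
    return merged
```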

  5. Region of Interest Selection Interface for Wide-Angle Arthroscope

    Directory of Open Access Journals (Sweden)

    Jung Kyunghwa

    2015-01-01

Full Text Available We have proposed a new interface for a wide-angle endoscope for solo surgery. The wide-angle arthroscopic view and a magnified region of interest (ROI) within the wide view were shown simultaneously. With a camera affixed to the surgical instrument, the position of the ROI could be determined by manipulating the instrument. Image features acquired by the A-KAZE approach were used to estimate the change in position of the surgical instrument by tracking the features every time the camera moved. We examined the accuracy of ROI selection using three different images consisting of different-sized square arrays, and tested the method in phantom experiments. The success rate was best when the number of ROIs was twelve, and the rate diminished as the size of the ROIs decreased. The experimental results showed that the method, using a camera without additional sensors, satisfied the accuracy required for ROI selection, and that this interface was helpful in performing surgery with fewer assistants.
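    A minimal OpenCV sketch of the feature-tracking idea: match A-KAZE features between consecutive frames and take the median keypoint displacement as the camera shift. This is a generic reconstruction under that assumption, not the authors' implementation.

```python
import cv2
import numpy as np

def estimate_shift(prev_gray, curr_gray):
    """Estimate the in-plane camera shift between two 8-bit grayscale
    frames by matching A-KAZE binary descriptors (Hamming distance) and
    taking the median keypoint displacement."""
    akaze = cv2.AKAZE_create()
    kp1, des1 = akaze.detectAndCompute(prev_gray, None)
    kp2, des2 = akaze.detectAndCompute(curr_gray, None)
    if des1 is None or des2 is None:
        return np.zeros(2)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des1, des2)
    if not matches:
        return np.zeros(2)
    shifts = np.array([np.subtract(kp2[m.trainIdx].pt, kp1[m.queryIdx].pt)
                       for m in matches])
    return np.median(shifts, axis=0)      # (dx, dy) in pixels
```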

  6. High-Performance Region-of-Interest Image Error Concealment with Hiding Technique

    Directory of Open Access Journals (Sweden)

    Shih-Chang Hsia

    2010-01-01

Full Text Available Recently, region-of-interest (ROI) based image coding has become a popular topic. Since the ROI area contains the most important information in an image, it must be protected against decoding errors caused by channel loss or unexpected attack. This paper presents an efficient error concealment method to recover ROI information with a hiding technique. Based on the progressive transformation, the low-frequency components of the ROI are encoded to disperse its information into the high-frequency bands of the original image. Protection is achieved by extracting the ROI coefficients from the damaged image without adding extra information. Simulation results show that the proposed method can efficiently reconstruct the ROI when the ROI bit-stream contains errors, and the PSNR results outperform conventional error concealment techniques by 2 to 5 dB.

  7. An Algorithm to Detect the Retinal Region of Interest

    Science.gov (United States)

    Şehirli, E.; Turan, M. K.; Demiral, E.

    2017-11-01

Retina is one of the important layers of the eye, which includes cells sensitive to colour and light as well as nerve fibers. The retina can be displayed using medical devices such as a fundus camera or an ophthalmoscope. Hence, lesions such as microaneurysms, haemorrhages and exudates associated with many diseases of the eye can be detected by examining the images taken by these devices. In the computer vision and biomedical areas, studies to detect lesions of the eye automatically have been conducted for a long time. In order to make automated detections, the concept of ROI may be utilized. ROI, which stands for region of interest, generally serves the purpose of focusing on particular targets. The main concentration of this paper is an algorithm to automatically detect the retinal region of interest in different retinal images within a software application. The algorithm consists of three stages: pre-processing, detecting the ROI on the processed images, and overlapping the input image with the obtained ROI.
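    A toy version of the three-stage pipeline (pre-processing, ROI detection, overlap) for a fundus photograph; the median filter, global threshold and largest-component rule are illustrative simplifications, not the paper's exact steps.

```python
import numpy as np
from scipy import ndimage

def retinal_roi(rgb):
    """Stage 1: median-filter the red channel (the fundus disc is
    brightest there). Stage 2: threshold and keep the largest connected
    component as the retinal ROI. Stage 3: overlap it with the input."""
    red = ndimage.median_filter(rgb[..., 0].astype(float), size=5)
    mask = red > red.mean()                      # crude global threshold
    labels, n = ndimage.label(mask)
    if n == 0:
        return np.zeros(red.shape, bool), rgb * 0
    sizes = ndimage.sum(mask, labels, range(1, n + 1))
    roi = labels == (1 + int(np.argmax(sizes)))  # largest component
    return roi, rgb * roi[..., None]             # mask and masked image
```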

  8. Diffuse intrinsic pontine glioma: is MRI surveillance improved by region of interest volumetry?

    Science.gov (United States)

    Riley, Garan T; Armitage, Paul A; Batty, Ruth; Griffiths, Paul D; Lee, Vicki; McMullan, John; Connolly, Daniel J A

    2015-02-01

Paediatric diffuse intrinsic pontine glioma (DIPG) is noteworthy for its fibrillary infiltration through neuroparenchyma and its resultant irregular shape. Conventional volumetry methods aim to approximate such irregular tumours to a regular ellipsoid, which could be less accurate when assessing treatment response on surveillance MRI. Region-of-interest (ROI) volumetry methods, using manually traced tumour profiles on contiguous imaging slices and subsequent computer-aided calculations, may prove more reliable. To evaluate whether the reliability of MRI surveillance of DIPGs can be improved by the use of ROI-based volumetry. We investigated the use of ROI- and ellipsoid-based methods of volumetry for paediatric DIPGs in a retrospective review of 22 MRI examinations. We assessed the inter- and intraobserver variability of the two methods when performed by four observers. ROI- and ellipsoid-based methods strongly correlated for all four observers. The ROI-based volumes showed slightly better agreement both between and within observers than the ellipsoid-based volumes (inter-[intra-]observer agreement 89.8% [92.3%] and 83.1% [88.2%], respectively). Bland-Altman plots show tighter limits of agreement for the ROI-based method. Both methods are reproducible and transferrable among observers. ROI-based volumetry appears to perform better with greater intra- and interobserver agreement for complex-shaped DIPG.
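    The two volumetry methods compared in this record reduce to simple formulas; a sketch, with the slice areas assumed to come from manual tracing and the three diameters from conventional measurement.

```python
import numpy as np

def roi_volume(slice_areas_mm2, slice_thickness_mm):
    """ROI-based volume: manually traced tumour areas on contiguous
    slices, summed and multiplied by the slice thickness."""
    return float(np.sum(slice_areas_mm2) * slice_thickness_mm)

def ellipsoid_volume(a_mm, b_mm, c_mm):
    """Conventional approximation from three orthogonal diameters:
    V = (pi / 6) * a * b * c."""
    return float(np.pi / 6.0 * a_mm * b_mm * c_mm)

# Toy comparison for one examination (numbers are invented)
print(roi_volume([120.0, 180.0, 210.0, 150.0], slice_thickness_mm=4.0))
print(ellipsoid_volume(32.0, 28.0, 25.0))
```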

  9. Diffuse intrinsic pontine glioma: is MRI surveillance improved by region of interest volumetry?

    International Nuclear Information System (INIS)

    Riley, Garan T.; Armitage, Paul A.; Griffiths, Paul D.; Batty, Ruth; Connolly, Daniel J.A.; Lee, Vicki; McMullan, John

    2015-01-01

Paediatric diffuse intrinsic pontine glioma (DIPG) is noteworthy for its fibrillary infiltration through neuroparenchyma and its resultant irregular shape. Conventional volumetry methods aim to approximate such irregular tumours to a regular ellipsoid, which could be less accurate when assessing treatment response on surveillance MRI. Region-of-interest (ROI) volumetry methods, using manually traced tumour profiles on contiguous imaging slices and subsequent computer-aided calculations, may prove more reliable. To evaluate whether the reliability of MRI surveillance of DIPGs can be improved by the use of ROI-based volumetry. We investigated the use of ROI- and ellipsoid-based methods of volumetry for paediatric DIPGs in a retrospective review of 22 MRI examinations. We assessed the inter- and intraobserver variability of the two methods when performed by four observers. ROI- and ellipsoid-based methods strongly correlated for all four observers. The ROI-based volumes showed slightly better agreement both between and within observers than the ellipsoid-based volumes (inter-[intra-]observer agreement 89.8% [92.3%] and 83.1% [88.2%], respectively). Bland-Altman plots show tighter limits of agreement for the ROI-based method. Both methods are reproducible and transferrable among observers. ROI-based volumetry appears to perform better with greater intra- and interobserver agreement for complex-shaped DIPG. (orig.)

  10. Diffuse intrinsic pontine glioma: is MRI surveillance improved by region of interest volumetry?

    Energy Technology Data Exchange (ETDEWEB)

Riley, Garan T. [University Hospital of North Tees, Department of General Radiology, North Tees and Hartlepool NHS Foundation Trust, Stockton-on-Tees, Cleveland (United Kingdom); Armitage, Paul A.; Griffiths, Paul D. [University of Sheffield, Academic Unit of Radiology, Sheffield (United Kingdom); Batty, Ruth; Connolly, Daniel J.A. [Sheffield Children's NHS Foundation Trust, Department of Radiology, Sheffield (United Kingdom); Lee, Vicki [Sheffield Children's NHS Foundation Trust, Department of Oncology, Sheffield (United Kingdom); McMullan, John [Sheffield Children's NHS Foundation Trust, Department of Neurosurgery, Sheffield (United Kingdom)

    2014-08-21

Paediatric diffuse intrinsic pontine glioma (DIPG) is noteworthy for its fibrillary infiltration through neuroparenchyma and its resultant irregular shape. Conventional volumetry methods aim to approximate such irregular tumours to a regular ellipsoid, which could be less accurate when assessing treatment response on surveillance MRI. Region-of-interest (ROI) volumetry methods, using manually traced tumour profiles on contiguous imaging slices and subsequent computer-aided calculations, may prove more reliable. To evaluate whether the reliability of MRI surveillance of DIPGs can be improved by the use of ROI-based volumetry. We investigated the use of ROI- and ellipsoid-based methods of volumetry for paediatric DIPGs in a retrospective review of 22 MRI examinations. We assessed the inter- and intraobserver variability of the two methods when performed by four observers. ROI- and ellipsoid-based methods strongly correlated for all four observers. The ROI-based volumes showed slightly better agreement both between and within observers than the ellipsoid-based volumes (inter-[intra-]observer agreement 89.8% [92.3%] and 83.1% [88.2%], respectively). Bland-Altman plots show tighter limits of agreement for the ROI-based method. Both methods are reproducible and transferrable among observers. ROI-based volumetry appears to perform better with greater intra- and interobserver agreement for complex-shaped DIPG. (orig.)

  11. A robust automated left ventricle region of interest localization technique using a cardiac cine MRI atlas

    Science.gov (United States)

    Ben-Zikri, Yehuda Kfir; Linte, Cristian A.

    2016-03-01

Region of interest detection is a precursor to many medical image processing and analysis applications, including segmentation, registration and other image manipulation techniques. The optimal region of interest is often selected manually, based on empirical knowledge and features of the image dataset. However, if inconsistently identified, the selected region of interest may greatly affect the subsequent image analysis or interpretation steps, in turn leading to incomplete assessment during computer-aided diagnosis or incomplete visualization or identification of the surgical targets, if employed in the context of pre-procedural planning or image-guided interventions. Therefore, the need for robust, accurate and computationally efficient region of interest localization techniques is prevalent in many modern computer-assisted diagnosis and therapy applications. Here we propose a fully automated, robust, a priori learning-based approach that provides reliable estimates of the left and right ventricle features from cine cardiac MR images. The proposed approach leverages the temporal frame-to-frame motion extracted across a range of short-axis left ventricle slice images, with a small training set generated from less than 10% of the population. This approach is based on histogram of oriented gradients features weighted by local intensities to first identify an initial region of interest, depicting the left and right ventricles, that exhibits the greatest extent of cardiac motion. This region is correlated with the homologous region of the training dataset that best matches the test image, using feature vector correlation techniques. Lastly, the optimal left ventricle region of interest of the test image is identified based on the correlation of known ground truth segmentations associated with the training dataset deemed closest to the test image. The proposed approach was tested on a population of 100 patient datasets and was validated against the ground truth.

  12. Functional connectivity analysis of fMRI data using parameterized regions-of-interest.

    NARCIS (Netherlands)

    Weeda, W.D.; Waldorp, L.J.; Grasman, R.P.P.P.; van Gaal, S.; Huizenga, H.M.

    2011-01-01

    Connectivity analysis of fMRI data requires correct specification of regions-of-interest (ROIs). Selection of ROIs based on outcomes of a GLM analysis may be hindered by conservativeness of the multiple comparison correction, while selection based on brain anatomy may be biased due to inconsistent

  13. A Divide and Conquer Strategy for Scaling Weather Simulations with Multiple Regions of Interest

    Directory of Open Access Journals (Sweden)

    Preeti Malakar

    2013-01-01

Full Text Available Accurate and timely prediction of weather phenomena, such as hurricanes and flash floods, requires high-fidelity compute-intensive simulations of multiple finer regions of interest within a coarse simulation domain. Current weather applications execute these nested simulations sequentially using all the available processors, which is sub-optimal due to their sub-linear scalability. In this work, we present a strategy for parallel execution of multiple nested domain simulations based on partitioning the 2-D processor grid into disjoint rectangular regions associated with each domain. We propose a novel combination of performance prediction, processor allocation methods and topology-aware mapping of the regions on torus interconnects. Experiments on IBM Blue Gene systems using WRF show that the proposed strategies result in a performance improvement of up to 33% with topology-oblivious mapping and up to an additional 7% with topology-aware mapping over the default sequential strategy.
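    A toy sketch of the core allocation idea: partition a 2-D processor grid into disjoint rectangles with sizes proportional to each nested domain's predicted load. The strip-wise split and rounding rule are illustrative simplifications; topology-aware torus mapping is beyond this sketch.

```python
def partition_grid(px, py, predicted_loads):
    """Split a px-by-py processor grid into disjoint vertical rectangles,
    one per nested domain, with widths proportional to predicted load."""
    total = sum(predicted_loads)
    regions, x0 = [], 0
    for i, load in enumerate(predicted_loads):
        # the last region absorbs any rounding remainder
        if i == len(predicted_loads) - 1:
            w = px - x0
        else:
            w = max(1, round(px * load / total))
        regions.append({"x": (x0, x0 + w), "y": (0, py), "procs": w * py})
        x0 += w
    return regions

# Three nested domains, one predicted to be 3x as expensive as the others
print(partition_grid(16, 8, [3.0, 1.0, 1.0]))
```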

  14. Refining a region-of-interest within an available CT image

    International Nuclear Information System (INIS)

    Enjilela, Esmaeil; Hussein, Esam M.A.

    2013-01-01

    This paper describes a numerical method for refining the image of a region-of-interest (RoI) within an existing tomographic slice, provided that projection data are stored along with the image. Using the attributes of the image, projection values (ray-sums) are adjusted to compensate for the material outside the RoI. Advantage is taken of the high degree of overdetermination of common computed tomography systems to reconstruct an RoI image over smaller pixels. The smaller size of a region-of-interest enables the use of iterative methods for RoI image reconstruction, which are less prone to error propagation. Simulation results are shown for an anthropomorphic head phantom, demonstrating that the introduced approach enhances both the spatial resolution and material contrast of RoI images; without the need to acquire any additional measurements or to alter existing imaging setups and systems. - Highlights: ► A method for refining the image of a region-of-interest within an existing tomographic image. ► Refined spatial-resolution within the region-of-interest, due to high redundancy of CT data. ► Enhancement in image contrast by the use of iterative image reconstruction, made possible by the smaller problem size. ► No need for additional measurements, no alteration of existing imaging setups and systems
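    A simplified stand-in for the ray-sum adjustment described above, using skimage's parallel-beam radon/iradon in place of the paper's system model and iterative solver: the forward projection of the material outside the RoI (taken from the available image) is subtracted from the stored sinogram, leaving projection data that correspond to the RoI alone.

```python
import numpy as np
from skimage.transform import radon, iradon

def refine_roi(image, sinogram, theta, roi_mask):
    """Adjust stored ray-sums to compensate for material outside the RoI
    and reconstruct the RoI from the adjusted sinogram. The analytic
    iradon here stands in for the paper's iterative reconstruction over
    smaller pixels."""
    outside = image * (~roi_mask)                 # attributes outside the RoI
    sino_roi = sinogram - radon(outside, theta=theta)
    return iradon(sino_roi, theta=theta, filter_name="ramp")
```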

  15. The ATLAS high level trigger region of interest builder

    International Nuclear Information System (INIS)

    Blair, R.; Dawson, J.; Drake, G.; Haberichter, W.; Schlereth, J.; Zhang, J.; Ermoline, Y.; Pope, B.; Aboline, M.; High Energy Physics; Michigan State Univ.

    2008-01-01

This article describes the design, testing and production of the ATLAS Region of Interest Builder (RoIB). This device acts as an interface between the Level 1 trigger and the high level trigger (HLT) farm for the ATLAS LHC detector. It distributes all of the Level 1 data for a subset of events to a small number (16 or fewer) of individual commodity processors. These processors in turn provide this information to the HLT. This allows the HLT to use the Level 1 information to narrow data requests to areas of the detector where Level 1 has identified interesting objects.

  16. AN ALGORITHM TO DETECT THE RETINAL REGION OF INTEREST

    Directory of Open Access Journals (Sweden)

    E. Şehirli

    2017-11-01

Full Text Available Retina is one of the important layers of the eye, which includes cells sensitive to colour and light as well as nerve fibers. The retina can be displayed using medical devices such as a fundus camera or an ophthalmoscope. Hence, lesions such as microaneurysms, haemorrhages and exudates associated with many diseases of the eye can be detected by examining the images taken by these devices. In the computer vision and biomedical areas, studies to detect lesions of the eye automatically have been conducted for a long time. In order to make automated detections, the concept of ROI may be utilized. ROI, which stands for region of interest, generally serves the purpose of focusing on particular targets. The main concentration of this paper is an algorithm to automatically detect the retinal region of interest in different retinal images within a software application. The algorithm consists of three stages: pre-processing, detecting the ROI on the processed images, and overlapping the input image with the obtained ROI.

  17. Lightweight Object Tracking in Compressed Video Streams Demonstrated in Region-of-Interest Coding

    Directory of Open Access Journals (Sweden)

    Lerouge Sam

    2007-01-01

Full Text Available Video scalability is a recent video coding technology that allows content providers to offer multiple quality versions from a single encoded video file in order to target different kinds of end-user devices and networks. One form of scalability utilizes the region-of-interest concept, that is, the possibility to mark objects or zones within the video as more important than the surrounding area. The scalable video coder ensures that these regions-of-interest are received by an end-user device before the surrounding area and preferably in higher quality. In this paper, novel algorithms are presented making it possible to automatically track the marked objects in the regions of interest. Our methods detect the overall motion of a designated object by retrieving the motion vectors calculated during the motion estimation step of the video encoder. Using this knowledge, the region-of-interest is translated, thus following the objects within. Furthermore, the proposed algorithms allow adequate resizing of the region-of-interest. By using the available information from the video encoder, object tracking can be done in the compressed domain and is suitable for real-time and streaming applications. A time-complexity analysis is given for the algorithms proving the low complexity thereof and the usability for real-time applications. The proposed object tracking methods are generic and can be applied to any codec that calculates the motion vector field. In this paper, the algorithms are implemented within MPEG-4 fine-granularity scalability codec. Different tests on different video sequences are performed to evaluate the accuracy of the methods. Our novel algorithms achieve a precision up to 96.4%.
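    A sketch of the compressed-domain tracking step: translate the ROI by the dominant (median) motion vector of the macroblocks it contains. The clamping and the median as a robust aggregate are illustrative choices; the papers' resizing logic is omitted.

```python
import numpy as np

def update_roi(roi, motion_vectors, frame_shape):
    """Translate an ROI (x0, y0, x1, y1) by the dominant motion of the
    object inside it; `motion_vectors` are the encoder's per-macroblock
    (dx, dy) vectors that fall within the current ROI."""
    dx, dy = np.median(np.asarray(motion_vectors, float), axis=0)
    x0, y0, x1, y1 = roi
    h, w = frame_shape
    # clamp to the frame so the ROI stays valid after translation
    x0, x1 = np.clip([x0 + dx, x1 + dx], 0, w - 1)
    y0, y1 = np.clip([y0 + dy, y1 + dy], 0, h - 1)
    return (x0, y0, x1, y1)
```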

  18. Lightweight Object Tracking in Compressed Video Streams Demonstrated in Region-of-Interest Coding

    Directory of Open Access Journals (Sweden)

    Rik Van de Walle

    2007-01-01

    Full Text Available Video scalability is a recent video coding technology that allows content providers to offer multiple quality versions from a single encoded video file in order to target different kinds of end-user devices and networks. One form of scalability utilizes the region-of-interest concept, that is, the possibility to mark objects or zones within the video as more important than the surrounding area. The scalable video coder ensures that these regions-of-interest are received by an end-user device before the surrounding area and preferably in higher quality. In this paper, novel algorithms are presented making it possible to automatically track the marked objects in the regions of interest. Our methods detect the overall motion of a designated object by retrieving the motion vectors calculated during the motion estimation step of the video encoder. Using this knowledge, the region-of-interest is translated, thus following the objects within. Furthermore, the proposed algorithms allow adequate resizing of the region-of-interest. By using the available information from the video encoder, object tracking can be done in the compressed domain and is suitable for real-time and streaming applications. A time-complexity analysis is given for the algorithms proving the low complexity thereof and the usability for real-time applications. The proposed object tracking methods are generic and can be applied to any codec that calculates the motion vector field. In this paper, the algorithms are implemented within MPEG-4 fine-granularity scalability codec. Different tests on different video sequences are performed to evaluate the accuracy of the methods. Our novel algorithms achieve a precision up to 96.4%.

  19. Automated Region of Interest Retrieval of Metallographic Images for Quality Classification in Industry

    Directory of Open Access Journals (Sweden)

    Petr Kotas

    2012-01-01

Full Text Available The aim of the research is the development and testing of new methods to classify the quality of metallographic samples of steels with high added value (for example, grade X70 according to API). In this paper, we address the development of methods to classify the quality of slab sample images, with the main emphasis on the quality of the image center, called the segregation area. For this reason, we introduce an alternative method for automated retrieval of the region of interest. In the first step, the metallographic image is segmented using both a spectral method and thresholding. Then, the extracted macrostructure of the metallographic image is automatically analyzed by statistical methods. Finally, the automatically extracted regions of interest are compared with the results of human experts. Practical experience with retrieval of non-homogeneous, noisy digital images in an industrial environment is discussed as well.

  20. Abnormal responses of ejection fraction to exercise, in healthy subjects, caused by region-of-interest selection

    International Nuclear Information System (INIS)

    Sorenson, S.G.; Caldwell, J.; Ritchie, J.; Hamilton, G.

    1981-01-01

We performed serial exercise equilibrium radionuclide angiography in eight normal subjects, with each subject executing three tests: control, after nitroglycerin, and after propranolol. The left-ventricular ejection fraction (EF) was calculated by two methods: (a) fixed region-of-interest (FROI), using a single end-diastolic ROI, and (b) variable region-of-interest (VROI), where end-diastolic and end-systolic regions of interest were used. Abnormal maximal EF responses occurred in five of eight subjects during control using FROI but in zero of eight employing VROI (p < 0.05). After nitroglycerin, three of eight subjects had abnormal responses by FROI, but zero of eight were abnormal by VROI (p < 0.05). After propranolol, blunted EF responses occurred in three of seven by both methods. Falsely abnormal EF responses to exercise RNA may occur due to the method of region-of-interest selection in normal subjects with normal or high ejection fractions.
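    A count-based EF sketch contrasting the two ROI methods described above; treating the first frame as end-diastolic, taking the minimum-count frame as end-systolic, and omitting background subtraction are simplifying assumptions.

```python
def ejection_fraction(frames, roi_ed, roi_es=None):
    """Count-based EF from a gated blood-pool study. With only `roi_ed`
    (fixed-ROI method), end-systolic counts are read inside the
    end-diastolic ROI; with `roi_es` given (variable-ROI method), the
    end-systolic phase uses its own region. `frames` is a sequence of 2-D
    count arrays; the ROIs are boolean masks of the same shape."""
    ed_counts = frames[0][roi_ed].sum()                 # end-diastolic frame
    es_roi = roi_ed if roi_es is None else roi_es
    es_counts = min(f[es_roi].sum() for f in frames)    # end-systolic frame
    return (ed_counts - es_counts) / ed_counts
```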

  1. CubeIndexer: Indexer for regions of interest in data cubes

    Science.gov (United States)

    Chilean Virtual Observatory; Araya, Mauricio; Candia, Gabriel; Gregorio, Rodrigo; Mendoza, Marcelo; Solar, Mauricio

    2015-12-01

    CubeIndexer indexes regions of interest (ROIs) in data cubes reducing the necessary storage space. The software can process data cubes containing megabytes of data in fractions of a second without human supervision, thus allowing it to be incorporated into a production line for displaying objects in a virtual observatory. The software forms part of the Chilean Virtual Observatory (ChiVO) and provides the capability of content-based searches on data cubes to the astronomical community.
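    A minimal sketch of the indexing idea: threshold the cube, label connected voxel regions, and keep only their bounding boxes as a compact index instead of the full cube. The 3-sigma threshold and connectivity rule are illustrative choices, not CubeIndexer's actual algorithm.

```python
import numpy as np
from scipy import ndimage

def index_cube(cube, sigma_thresh=3.0):
    """Index ROIs in a spectral data cube: threshold a few RMS above the
    background, label connected voxels, and return bounding boxes as
    ((z0, z1), (y0, y1), (x0, x1)) tuples."""
    rms = cube.std()
    labels, n = ndimage.label(cube > sigma_thresh * rms)
    boxes = ndimage.find_objects(labels)
    return [tuple((s.start, s.stop) for s in box) for box in boxes]

# Toy cube with one synthetic emission region
cube = np.random.default_rng(1).normal(size=(32, 64, 64))
cube[10:14, 20:30, 40:48] += 10.0
print(index_cube(cube))
```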

  2. Finding regions of interest in pathological images: an attentional model approach

    Science.gov (United States)

    Gómez, Francisco; Villalón, Julio; Gutierrez, Ricardo; Romero, Eduardo

    2009-02-01

This paper introduces an automated method for finding diagnostic regions-of-interest (RoIs) in histopathological images. This method is based on the cognitive process of visual selective attention that arises during a pathologist's image examination. Specifically, it emulates the first examination phase, which consists in a coarse search for tissue structures at a "low zoom" to separate the image into relevant regions. The pathologist's cognitive performance depends on inherent image visual cues - bottom-up information - and on acquired clinical medicine knowledge - top-down mechanisms. Our pathologist's visual attention model integrates these two components. The selected bottom-up information includes local low-level features such as intensity, color, orientation and texture information. Top-down information is related to the anatomical and pathological structures known by the expert. A coarse approximation to these structures is achieved by an oversegmentation algorithm, inspired by psychological grouping theories. The algorithm parameters are learned from an expert pathologist's segmentation. Top-down and bottom-up integration is achieved by calculating a unique index for each of the low-level characteristics inside the region. Relevancy is estimated as a simple average of these indexes. Finally, a binary decision rule defines whether or not a region is interesting. The method was evaluated on a set of 49 images using a perceptually-weighted evaluation criterion, finding a quality gain of 3 dB when compared to a classical bottom-up model of attention.

  3. Treatment planning for prostate brachytherapy using region of interest adjoint functions and a greedy heuristic

    International Nuclear Information System (INIS)

    Yoo, Sua; Kowalok, Michael E; Thomadsen, Bruce R; Henderson, Douglass L

    2003-01-01

    We have developed an efficient treatment-planning algorithm for prostate implants that is based on region of interest (ROI) adjoint functions and a greedy heuristic. For this work, we define the adjoint function for an ROI as the sensitivity of the average dose in the ROI to a unit-strength brachytherapy source at any seed position. The greedy heuristic uses a ratio of target and critical structure adjoint functions to rank seed positions according to their ability to irradiate the target ROI while sparing critical structure ROIs. This ratio is computed once for each seed position prior to the optimization process. Optimization is performed by a greedy heuristic that selects seed positions according to their ratio values. With this method, clinically acceptable treatment plans are obtained in less than 2 s. For comparison, a branch-and-bound method to solve a mixed integer-programming model took more than 50 min to arrive at a feasible solution. Both methods achieved good treatment plans, but the speedup provided by the greedy heuristic was a factor of approximately 1500. This attribute makes this algorithm suitable for intra-operative real-time treatment planning
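    A sketch of the greedy step described above, with the adjoint-function ratio precomputed once per candidate seed position; `dose_from_seed` is an assumed dosimetry callable, not part of the paper's published interface.

```python
import numpy as np

def greedy_plan(ratio, dose_from_seed, target_mask, rx_dose, n_max=100):
    """Greedy heuristic: rank candidate seed positions once by the
    precomputed target-to-critical-structure adjoint ratio, then add
    seeds in that order until the target mean dose meets the
    prescription. `dose_from_seed(pos)` is an assumed callable returning
    the dose grid contributed by a unit-strength seed at `pos`."""
    order = np.argsort(ratio.ravel())[::-1]          # best ratio first
    dose = np.zeros(ratio.shape)
    chosen = []
    for flat in order[:n_max]:
        pos = np.unravel_index(flat, ratio.shape)
        dose += dose_from_seed(pos)
        chosen.append(pos)
        if dose[target_mask].mean() >= rx_dose:      # prescription met
            break
    return chosen, dose
```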

  4. Analysis of iterative region-of-interest image reconstruction for x-ray computed tomography

    Science.gov (United States)

    Sidky, Emil Y.; Kraemer, David N.; Roth, Erin G.; Ullberg, Christer; Reiser, Ingrid S.; Pan, Xiaochuan

    2014-01-01

One of the challenges for iterative image reconstruction (IIR) is that such algorithms solve an imaging model implicitly, requiring a complete representation of the scanned subject within the viewing domain of the scanner. This requirement can place a prohibitively high computational burden for IIR applied to x-ray computed tomography (CT), especially when high-resolution tomographic volumes are required. In this work, we aim to develop an IIR algorithm for direct region-of-interest (ROI) image reconstruction. The proposed class of IIR algorithms is based on an optimization problem that incorporates a data fidelity term, which compares a derivative of the estimated data with the available projection data. In order to characterize this optimization problem, we apply it to computer-simulated two-dimensional fan-beam CT data, using both ideal noiseless data and realistic data containing a level of noise comparable to that of the breast CT application. The proposed method is demonstrated for both complete field-of-view and ROI imaging. To demonstrate the potential utility of the proposed ROI imaging method, it is applied to actual CT scanner data. PMID:25685824

  5. Analysis of iterative region-of-interest image reconstruction for x-ray computed tomography.

    Science.gov (United States)

    Sidky, Emil Y; Kraemer, David N; Roth, Erin G; Ullberg, Christer; Reiser, Ingrid S; Pan, Xiaochuan

    2014-10-03

    One of the challenges for iterative image reconstruction (IIR) is that such algorithms solve an imaging model implicitly, requiring a complete representation of the scanned subject within the viewing domain of the scanner. This requirement can place a prohibitively high computational burden for IIR applied to x-ray computed tomography (CT), especially when high-resolution tomographic volumes are required. In this work, we aim to develop an IIR algorithm for direct region-of-interest (ROI) image reconstruction. The proposed class of IIR algorithms is based on an optimization problem that incorporates a data fidelity term, which compares a derivative of the estimated data with the available projection data. In order to characterize this optimization problem, we apply it to computer-simulated two-dimensional fan-beam CT data, using both ideal noiseless data and realistic data containing a level of noise comparable to that of the breast CT application. The proposed method is demonstrated for both complete field-of-view and ROI imaging. To demonstrate the potential utility of the proposed ROI imaging method, it is applied to actual CT scanner data.

  6. Theoretical Study of Penalized-Likelihood Image Reconstruction for Region of Interest Quantification

    International Nuclear Information System (INIS)

    Qi, Jinyi; Huesman, Ronald H.

    2006-01-01

    Region of interest (ROI) quantification is an important task in emission tomography (e.g., positron emission tomography and single photon emission computed tomography). It is essential for exploring clinical factors such as tumor activity, growth rate, and the efficacy of therapeutic interventions. Statistical image reconstruction methods based on the penalized maximum-likelihood (PML) or maximum a posteriori principle have been developed for emission tomography to deal with the low signal-to-noise ratio of the emission data. Similar to the filter cut-off frequency in the filtered backprojection method, the regularization parameter in PML reconstruction controls the resolution and noise tradeoff and, hence, affects ROI quantification. In this paper, we theoretically analyze the performance of ROI quantification in PML reconstructions. Building on previous work, we derive simplified theoretical expressions for the bias, variance, and ensemble mean-squared-error (EMSE) of the estimated total activity in an ROI that is surrounded by a uniform background. When the mean and covariance matrix of the activity inside the ROI are known, the theoretical expressions are readily computable and allow for fast evaluation of image quality for ROI quantification with different regularization parameters. The optimum regularization parameter can then be selected to minimize the EMSE. Computer simulations are conducted for small ROIs with variable uniform uptake. The results show that the theoretical predictions match the Monte Carlo results reasonably well
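    The parameter selection described above reduces to minimizing EMSE(beta) = bias(beta)^2 + variance(beta) over a grid of regularization parameters; a sketch in which the paper's closed-form theoretical expressions are stood in by assumed callables, with a toy bias/variance trade-off for illustration.

```python
import numpy as np

def pick_beta(betas, bias_fn, var_fn):
    """Choose the regularization parameter minimizing the ensemble
    mean-squared error EMSE(beta) = bias(beta)**2 + variance(beta).
    `bias_fn` and `var_fn` are assumed callables standing in for the
    paper's readily computable theoretical expressions."""
    emse = np.array([bias_fn(b) ** 2 + var_fn(b) for b in betas])
    i = int(np.argmin(emse))
    return betas[i], emse[i]

# Toy trade-off: smoothing raises bias but suppresses variance
betas = np.linspace(0.01, 10.0, 200)
best, _ = pick_beta(betas, bias_fn=lambda b: 0.2 * b, var_fn=lambda b: 1.0 / b)
print(best)   # the beta balancing the two terms
```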

  7. Automated detection of regions of interest for tissue microarray experiments: an image texture analysis

    International Nuclear Information System (INIS)

    Karaçali, Bilge; Tözeren, Aydin

    2007-01-01

Recent research with tissue microarrays has led to rapid progress toward quantifying the expressions of large sets of biomarkers in normal and diseased tissue. However, standard procedures for sampling tissue for molecular profiling have not yet been established. This study presents a high-throughput analysis of texture heterogeneity on breast tissue images for the purpose of identifying regions of interest in the tissue for molecular profiling via tissue microarray technology. Image texture of breast histology slides was described in terms of three parameters: the percentage of area occupied in an image block by chromatin (B), the percentage occupied by stroma-like regions (P), and a statistical heterogeneity index H commonly used in image analysis. Texture parameters were defined and computed for each of the thousands of image blocks in our dataset, using both gray-scale and color segmentation. The image blocks were then classified into three categories using the texture feature parameters in a novel statistical learning algorithm. These categories are as follows: image blocks specific to normal breast tissue, blocks specific to cancerous tissue, and those image blocks that are non-specific to normal and disease states. Gray-scale and color segmentation techniques led to identification of the same regions in histology slides as cancer-specific. Moreover, the image blocks identified as cancer-specific belonged to those cell-crowded regions in whole-section image slides that were marked by two pathologists as regions of interest for further histological studies. These results indicate the high efficiency of our automated method for identifying pathologic regions of interest on histology slides. Automation of critical region identification will help minimize the inter-rater variability among different raters (pathologists), as hundreds of tumors that are used to develop an array have typically been evaluated (graded) by different pathologists. The region of interest
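    A sketch of the three block-level texture parameters, with Shannon entropy standing in for the unspecified heterogeneity index H and illustrative thresholds for the chromatin and stroma fractions.

```python
import numpy as np

def block_features(gray_block, chromatin_thresh=0.35, stroma_thresh=0.75):
    """Texture descriptors of one image block: B = fraction of (dark)
    chromatin-like pixels, P = fraction of (bright) stroma-like pixels,
    and H = Shannon entropy of the gray-level histogram (an assumed
    choice of heterogeneity index). Thresholds are illustrative."""
    g = gray_block.astype(float)
    g = (g - g.min()) / (np.ptp(g) + 1e-12)          # normalize to [0, 1]
    b = float((g < chromatin_thresh).mean())
    p = float((g > stroma_thresh).mean())
    hist, _ = np.histogram(g, bins=32, range=(0.0, 1.0))
    prob = hist / hist.sum()
    h = float(-np.sum(prob[prob > 0] * np.log2(prob[prob > 0])))
    return b, p, h
```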

  8. Multi-Task Vehicle Detection with Region-of-Interest Voting.

    Science.gov (United States)

    Chu, Wenqing; Liu, Yao; Shen, Chen; Cai, Deng; Hua, Xian-Sheng

    2017-10-12

Vehicle detection is a challenging problem in autonomous driving systems, due to its large structural and appearance variations. In this paper, we propose a novel vehicle detection scheme based on multi-task deep convolutional neural networks (CNN) and region-of-interest (RoI) voting. In the design of the CNN architecture, we enrich the supervised information with the subcategory, region overlap, bounding-box regression and category of each training RoI as a multi-task learning framework. This design allows the CNN model to share visual knowledge among different vehicle attributes simultaneously, so that detection robustness can be effectively improved. In addition, most existing methods consider each RoI independently, ignoring the clues from its neighboring RoIs. In our approach, we utilize the CNN model to predict the offset direction of each RoI boundary towards the corresponding ground truth. Each RoI can then vote for those suitable adjacent bounding boxes that are consistent with this additional information. The voting results are combined with the score of each RoI itself to find a more accurate location from a large number of candidates. Experimental results on the real-world computer vision benchmarks KITTI and the PASCAL2007 vehicle dataset show that our approach achieves superior performance in vehicle detection compared with other existing published works.

  9. Optimization of Bayesian Emission tomographic reconstruction for region of interest quantitation

    International Nuclear Information System (INIS)

    Qi, Jinyi

    2003-01-01

Region of interest (ROI) quantitation is an important task in emission tomography (e.g., positron emission tomography and single photon emission computed tomography). It is essential for exploring clinical factors such as tumor activity, growth rate, and the efficacy of therapeutic interventions. Bayesian methods based on the maximum a posteriori principle (also called penalized maximum-likelihood methods) have been developed for emission image reconstruction to deal with the low signal-to-noise ratio of the emission data. Similar to the filter cut-off frequency in the filtered backprojection method, the smoothing parameter of the image prior in Bayesian reconstruction controls the resolution and noise trade-off and hence affects ROI quantitation. In this paper we present an approach for choosing the optimum smoothing parameter in Bayesian reconstruction for ROI quantitation. Bayesian reconstructions are difficult to analyze because the resolution and noise properties are nonlinear and object-dependent. Building on the recent progress on deriving the approximate expressions for the local impulse response function and the covariance matrix, we derived simplified theoretical expressions for the bias, the variance, and the ensemble mean squared error (EMSE) of the ROI quantitation. One problem in evaluating ROI quantitation is that the truth is often required for calculating the bias. This is overcome by using the ensemble distribution of the activity inside the ROI and computing the average EMSE. The resulting expressions allow fast evaluation of the image quality for different smoothing parameters. The optimum smoothing parameter of the image prior can then be selected to minimize the EMSE

  10. Region-of-interest reconstruction for a cone-beam dental CT with a circular trajectory

    International Nuclear Information System (INIS)

    Hu, Zhanli; Zou, Jing; Gui, Jianbao; Zheng, Hairong; Xia, Dan

    2013-01-01

Dental CT is the most appropriate and accurate device for preoperative evaluation of dental implantation. It can demonstrate the quantity of bone in three dimensions (3D), the location of important adjacent anatomic structures and the quality of available bone with minimal geometric distortion. Nevertheless, with the rapid increase of dental CT examinations, we are facing the problem of dose reduction without loss of image quality. In this work, the backprojection-filtration (BPF) and Feldkamp–Davis–Kress (FDK) algorithms were applied to reconstruct the 3D full image and region-of-interest (ROI) image from complete and truncated circular cone-beam data, respectively, by computer simulation. In addition, the BPF algorithm was evaluated based on 3D ROI-image reconstruction from real data, which were acquired from our circular cone-beam prototype dental CT system. The results demonstrated that the ROI-image quality reconstructed from truncated data using the BPF algorithm was comparable to that reconstructed from complete data. The FDK algorithm, however, created artifacts when reconstructing the ROI image. Thus, for circular cone-beam dental CT, reducing the scanning angular range with the BPF algorithm for ROI-image reconstruction helps reduce the radiation dose and scanning time. Finally, an analytical method was developed for estimation of the ROI projection area on the detector before CT scanning, which would help doctors to roughly estimate the total radiation dose before the CT examination. -- Highlights: ► The BPF algorithm was applied to dental CT for the first time. ► A method was developed for estimation of the projection region before CT scanning. ► Rough prediction of the total radiation dose before CT scans. ► Potential to reduce imaging radiation dose, scatter, and scanning time

  11. Interior region-of-interest reconstruction using a small, nearly piecewise constant subregion

    International Nuclear Information System (INIS)

    Taguchi, Katsuyuki; Xu Jingyan; Srivastava, Somesh; Tsui, Benjamin M. W.; Cammin, Jochen; Tang Qiulin

    2011-01-01

Purpose: To develop a method to reconstruct an interior region-of-interest (ROI) image with sufficient accuracy that uses differentiated backprojection (DBP) projection onto convex sets (POCS) [H. Kudo et al., ''Tiny a priori knowledge solves the interior problem in computed tomography'', Phys. Med. Biol. 53, 2207-2231 (2008)] and the tiny knowledge that there exists a nearly piecewise constant subregion. Methods: The proposed method first employs filtered backprojection to reconstruct an image on which a tiny region P with a small variation in the pixel values is identified inside the ROI. Total variation minimization [H. Yu and G. Wang, ''Compressed sensing based interior tomography'', Phys. Med. Biol. 54, 2791-2805 (2009); W. Han et al., ''A general total variation minimization theorem for compressed sensing based interior tomography'', Int. J. Biomed. Imaging 2009, Article 125871 (2009)] is then employed to obtain pixel values in the subregion P, which serve as a priori knowledge in the next step. Finally, DBP-POCS is performed to reconstruct f(x,y) inside the ROI. Clinical data and the reconstructed image obtained by an x-ray computed tomography system (SOMATOM Definition; Siemens Healthcare) were used to validate the proposed method. The detector covers an object with a diameter of ∼500 mm. The projection data were truncated either moderately, to limit the detector coverage to a 350-mm diameter of the object, or severely, to cover a 199-mm diameter. Images were reconstructed using the proposed method. Results: The proposed method provided ROI images with correct pixel values in all areas except near the edge of the ROI. The coefficient of variation, i.e., the root mean square error divided by the mean pixel values, was less than 2.0% or 4.5% for the moderate or severe truncation cases, respectively, except near the boundary of the ROI. Conclusions: The proposed method allows for reconstructing interior ROI images with sufficient accuracy using only the tiny knowledge that there exists a nearly piecewise constant subregion.

  12. The evolution of the region of interest builder for the ATLAS experiment at CERN

    International Nuclear Information System (INIS)

    Abbott, B.; Rifki, O.; Blair, R.; Love, J.; Proudfoot, J.; Zhang, J.; Crone, G.; Green, B.; Vazquez, W.P.; Vandelli, W.

    2016-01-01

The ATLAS detector uses a real-time selective triggering system to reduce the high interaction rate from 40 MHz to its data storage capacity of 1 kHz. A hardware first level (L1) trigger limits the rate to 100 kHz and a software high level trigger (HLT) selects events for offline analysis. The HLT uses the Regions of Interest (RoIs) identified by L1 and provided by the Region of Interest Builder (RoIB). The current RoIB is a custom VMEbus-based system that has operated reliably since the first run of the LHC. Since the LHC will reach higher luminosity and ATLAS will increase the complexity and number of L1 triggers, it is desirable to have a more flexible and more operationally maintainable RoIB in the future. In this regard, the functionality of the multi-card VMEbus-based RoIB is being migrated to a PC-based RoIB with a PCI-Express card. Testing has produced a system that achieved the targeted rate of 100 kHz.

  13. Joint Labeling Of Multiple Regions of Interest (Rois) By Enhanced Auto Context Models.

    Science.gov (United States)

    Kim, Minjeong; Wu, Guorong; Guo, Yanrong; Shen, Dinggang

    2015-04-01

    Accurate segmentation of a set of regions of interest (ROIs) in brain images is a key step in many neuroscience studies. Due to the complexity of image patterns, many learning-based segmentation methods have been proposed, including the auto context model (ACM), which can capture high-level contextual information for guiding segmentation. However, since the current ACM can only handle one ROI at a time, neighboring ROIs have to be labeled separately with different ACMs that are trained independently, without communicating with each other. To address this, we enhance the current single-ROI learning ACM to a multi-ROI learning ACM for joint labeling of multiple neighboring ROIs (called eACM). First, we extend the current independently trained single-ROI ACMs to a set of jointly trained cross-ROI ACMs, by simultaneously training the ACMs of all spatially connected ROIs and letting them share their respective intermediate outputs for coordinated labeling of each image point. The context features in each ACM can then capture cross-ROI dependence information from the outputs of the other ACMs designed for neighboring ROIs. Second, we upgrade the output labeling map of each ACM with a multi-scale representation, so that both local and global context information can be used effectively to increase robustness in characterizing the geometric relationship among neighboring ROIs. Third, we integrate the ACM into a multi-atlas segmentation paradigm to encompass high variation among subjects. Experiments on the LONI LPBA40 dataset show much better performance by our eACM, compared to the conventional ACM.

  14. Multi-resolution analysis for region of interest extraction in thermographic nondestructive evaluation

    Science.gov (United States)

    Ortiz-Jaramillo, B.; Fandiño Toro, H. A.; Benitez-Restrepo, H. D.; Orjuela-Vargas, S. A.; Castellanos-Domínguez, G.; Philips, W.

    2012-03-01

    Infrared Non-Destructive Testing (INDT) is known as an effective and rapid method for nondestructive inspection. It can detect a broad range of near-surface structural flaws in metallic and composite components. Those flaws are modeled as a smooth contour centered at peaks of stored thermal energy, termed Regions of Interest (ROIs), and dedicated methodologies must detect their presence. In this paper, we present a methodology for ROI extraction in INDT tasks based on multi-resolution analysis, which is robust to low ROI contrast and non-uniform heating. Non-uniform heating affects low spatial frequencies and hinders the detection of relevant points in the image. The proposed methodology includes local correlation, Gaussian scale analysis and local edge detection. Local correlation between the image and a Gaussian window provides interest points related to ROIs; a Gaussian window is used because the thermal behavior is well modeled by Gaussian smooth contours. The Gaussian scale is used to analyze details in the image through multi-resolution analysis, avoiding problems of low contrast and non-uniform heating as well as the selection of the Gaussian window size. Finally, local edge detection provides a good estimate of the ROI boundaries. The resulting methodology for ROI extraction based on multi-resolution analysis performs as well as or better than other dedicated algorithms proposed in the state of the art.
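
    A minimal sketch of two of the ingredients named above, local correlation with a Gaussian window for interest points and local edge detection for boundaries, might look as follows in Python; the sigma values, the neighborhood size and the peak-selection rule are illustrative assumptions.

        import numpy as np
        from scipy.ndimage import gaussian_filter, maximum_filter, sobel

        def roi_candidates(thermogram, sigma=5.0, n_peaks=10):
            """Score each pixel by (unnormalized) correlation with a Gaussian
            window, then return candidate ROI centers as local maxima."""
            response = gaussian_filter(thermogram.astype(float), sigma)
            size = int(6 * sigma) | 1                      # odd neighborhood size
            peaks = response == maximum_filter(response, size=size)
            ys, xs = np.nonzero(peaks)
            order = np.argsort(response[ys, xs])[::-1][:n_peaks]
            return list(zip(ys[order], xs[order]))

        def roi_boundary_strength(thermogram, sigma=2.0):
            """Local edge detection: gradient magnitude of the smoothed image."""
            smooth = gaussian_filter(thermogram.astype(float), sigma)
            return np.hypot(sobel(smooth, axis=1), sobel(smooth, axis=0))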

  15. The Evolution of the Region of Interest Builder for the ATLAS Experiment at CERN

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00060668; Blair, Robert; Crone, Gordon Jeremy; Green, Barry; Love, Jeremy; Proudfoot, James; Rifki, Othmane; Panduro Vazquez, William; Vandelli, Wainer; Zhang, Jinlong

    2016-01-01

    ATLAS is a general purpose particle detector, at the Large Hadron Collider (LHC) at CERN, designed to measure the products of proton collisions. Given the high interaction rate (40 MHz), selective triggering in real time is required to reduce the rate to the experiment's data storage capacity (1 kHz). To meet this requirement, ATLAS employs a hardware trigger that reduces the rate to 100 kHz and software based triggers that select interesting interactions for physics analysis. The Region of Interest Builder (RoIB) is an essential part of the ATLAS detector Trigger and Data Acquisition (TDAQ) chain, where the coordinates of the regions of interest (RoIs) identified by the first level trigger (L1) are collected and passed to the High Level Trigger (HLT) to make a decision. While the current custom VME based RoIB operated reliably during the first run of the LHC, a more flexible and more operationally maintainable RoIB is desirable in the future, as the LHC reaches higher luminosity and ATLAS increases t...

  16. Finding of region of interest in radioisotope scintigraphy's images

    International Nuclear Information System (INIS)

    Glazs, A.; Lubans, A.

    2003-01-01

    This paper addresses problems that arise when physicians make diagnoses using images obtained from radioisotope scintigraphy. The algorithm for obtaining the image sets (called GFR) is described, along with possible sources of diagnostic error. One such source is incorrect detection of the investigated organ's location. A new method is proposed for detecting the organ's location in radioisotope scintigraphic image sets, based on the dynamic curves of pixel intensities. It is shown why the maxima of such curves cannot be used to find the investigated organ's location in these image sets; an integral expression is proposed instead. The suggested method allows the investigated organ's location to be found and selected in image sequences (a correction not available in existing methods). Results of applying the method are presented. The method can work fully automatically or with a manually set threshold. (authors)
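
    A minimal sketch of the integral criterion described above, with a hypothetical automatic threshold rule (the record does not specify one):

        import numpy as np

        def organ_roi_from_dynamics(frames, threshold=None):
            """Locate the investigated organ in a dynamic scintigraphic series.

            `frames` is a (T, H, W) array of time-ordered images. Instead of the
            per-pixel maximum of the time-activity curve (argued above to be
            unreliable), each pixel is scored by the integral of its intensity
            curve, approximated with the trapezoidal rule."""
            score = np.trapz(frames.astype(float), axis=0)     # (H, W) integral map
            if threshold is None:                              # automatic mode
                threshold = score.mean() + 2.0 * score.std()   # hypothetical rule
            return score >= threshold                          # boolean organ mask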

  17. Using Symmetrical Regions of Interest to Improve Visual SLAM

    NARCIS (Netherlands)

    Kootstra, Geert; Schomaker, Lambertus

    2009-01-01

    Simultaneous Localization and Mapping (SLAM) based on visual information is a challenging problem. One of the main problems with visual SLAM is to find good quality landmarks that can be detected despite noise and small changes in viewpoint. Many approaches use SIFT interest points as visual

  18. A voxelwise approach to determine consensus regions-of-interest for the study of brain network plasticity

    Directory of Open Access Journals (Sweden)

    Sarah M. Rajtmajer

    2015-07-01

    Full Text Available Despite exciting advances in the functional imaging of the brain, it remains a challenge to define regions of interest (ROIs) that do not require investigator supervision and permit examination of change in networks over time (or plasticity). Plasticity is most readily examined by maintaining ROIs constant via seed-based and anatomical-atlas based techniques, but these approaches are not data-driven, requiring definition based on prior experience (e.g., choice of seed-region, anatomical landmarks). These approaches are limiting especially when functional connectivity may evolve over time in areas that are finer than known anatomical landmarks or in areas outside predetermined seeded regions. An ideal method would permit investigators to study network plasticity due to learning, maturation effects, or clinical recovery via multiple time point data that can be compared to one another in the same ROI while also preserving the voxel-level data in those ROIs at each time point. Data-driven approaches (e.g., whole-brain voxelwise approaches) ameliorate concerns regarding investigator bias, but the fundamental problem of comparing the results between distinct data sets remains. In this paper we propose an approach, aggregate-initialized label propagation (AILP), which allows for data at separate time points to be compared for examining developmental processes resulting in network change (plasticity). To do so, we use a whole-brain modularity approach to parcellate the brain into anatomically constrained functional modules at separate time points and then apply the AILP algorithm to form a consensus set of ROIs for examining change over time. To demonstrate its utility, we make use of a known dataset of individuals with traumatic brain injury sampled at two time points during the first year of recovery and show how the AILP procedure can be applied to select regions of interest to be used in a graph theoretical analysis of plasticity.

  19. A new fast algorithm for the evaluation of regions of interest and statistical uncertainty in computed tomography

    International Nuclear Information System (INIS)

    Huesman, R.H.

    1984-01-01

    A new algorithm for region of interest evaluation in computed tomography is described. Region of interest evaluation is a technique used to improve quantitation of the tomographic imaging process by summing (or averaging) the reconstructed quantity throughout a volume of particular significance. An important application of this procedure arises in the analysis of dynamic emission computed tomographic data, in which the uptake and clearance of radiotracers are used to determine the blood flow and/or physiological function of tissue within the significant volume. The new algorithm replaces the conventional technique of repeated image reconstructions with one in which projected regions are convolved and then used to form multiple vector inner products with the raw tomographic data sets. Quantitation of regions of interest is made without the need for reconstruction of tomographic images. The computational advantage of the new algorithm over conventional methods is between factors of 20 and 500 for typical applications encountered in medical science studies. The greatest benefit is the ease with which the statistical uncertainty of the result is computed: the entire covariance matrix for the evaluation of regions of interest can be calculated with relatively few operations. (author)
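
    Because filtered backprojection is linear, the ROI sum over a reconstructed image equals a single inner product in projection space, which is the essence of the algorithm described. A minimal Python sketch under that reading follows; the forward_project and ramp_filter callables are assumed to be supplied by the reconstruction code in use, not by the original work.

        import numpy as np

        def roi_value_from_projections(sinogram, roi_mask, forward_project, ramp_filter):
            """Evaluate an ROI sum directly from raw projection data: forward-project
            the ROI indicator, convolve it with the reconstruction filter, and take
            the inner product with the measured sinogram (no image is reconstructed)."""
            weights = ramp_filter(forward_project(roi_mask.astype(float)))
            return float(np.sum(weights * sinogram))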

  20. Novel region of interest interrogation technique for diffusion tensor imaging analysis in the canine brain.

    Science.gov (United States)

    Li, Jonathan Y; Middleton, Dana M; Chen, Steven; White, Leonard; Ellinwood, N Matthew; Dickson, Patricia; Vite, Charles; Bradbury, Allison; Provenzale, James M

    2017-08-01

    Purpose We describe a novel technique for measuring diffusion tensor imaging metrics in the canine brain. We hypothesized that a standard method for region of interest placement could be developed that is highly reproducible, with less than 10% difference in measurements between raters. Methods Two sets of canine brains (three seven-week-old full-brains and two 17-week-old single hemispheres) were scanned ex-vivo on a 7T small-animal magnetic resonance imaging system. Strict region of interest placement criteria were developed and then used by two raters to independently measure diffusion tensor imaging metrics within four different white-matter regions within each specimen. Average values of fractional anisotropy, radial diffusivity, and the three eigenvalues (λ1, λ2, and λ3) within each region in each specimen overall and within each individual image slice were compared between raters by calculating the percentage difference between raters for each metric. Results The mean percentage difference between raters for all diffusion tensor imaging metrics when pooled by each region and specimen was 1.44% (range: 0.01-5.17%). The mean percentage difference between raters for all diffusion tensor imaging metrics when compared by individual image slice was 2.23% (range: 0.75-4.58%) per hemisphere. Conclusion Our results indicate that the technique described is highly reproducible, even when applied to canine specimens of differing age, morphology, and image resolution. We propose this technique for future studies of diffusion tensor imaging analysis in canine brains and for cross-sectional and longitudinal studies of canine brain models of human central nervous system disease.
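
    The reproducibility criterion used here, the percentage difference between raters, has a simple closed form; the sketch below uses one common convention (absolute difference divided by the two-rater mean) with made-up example values.

        import numpy as np

        def percent_difference(rater_a, rater_b):
            """Absolute difference divided by the two-rater mean, in percent."""
            a, b = np.asarray(rater_a, float), np.asarray(rater_b, float)
            return 100.0 * np.abs(a - b) / ((a + b) / 2.0)

        # e.g. fractional anisotropy in one region as measured by two raters
        print(percent_difference([0.52], [0.53]))   # -> [1.90...]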

  1. Digital assessment of disturbances of ventilation distribution by defined regions of interest

    International Nuclear Information System (INIS)

    Reuter, T.D.; Kirchhuebel, H.; Dahlgruen, H.D.

    1976-01-01

    Pulmonary distribution of ventilation was assessed in ten patients with COPD on the basis of defined regions of interest. Areas of hypoventilation are demarcated on the basis of the trapped-air scintigram corrected for lung volume. After the demarcations are transferred to the scintigram of fractional exchange of air, the regional ventilation index (VI) is computed and compared with normal values. The detectability of regional ventilation disturbances was found to be improved compared to a subdivision scheme of six regions of interest.

  2. Emotion Discrimination using spatially Compact Regions of Interest extracted from Imaging EEG Activity

    Directory of Open Access Journals (Sweden)

    Jorge Ivan Padilla-Buritica

    2016-07-01

    Full Text Available Lately, research on computational models of emotion has been receiving much attention due to their potential for understanding the mechanisms of emotion and their promising broad range of applications, which could bridge the gap between human and machine interaction. We propose a new method for emotion classification that relies on features extracted from those active brain areas that are most likely related to emotions. To this end, we carry out the selection of spatially compact regions of interest that are computed using the brain neural activity reconstructed from electroencephalography data. Throughout this study, we consider three representative feature extraction methods widely applied to emotion detection tasks: power spectral density, wavelets, and Hjorth parameters. Further feature selection is carried out using principal component analysis. For validation purposes, these features are used to feed a support vector machine classifier that is trained under the leave-one-out cross-validation strategy. Results obtained on real affective data show that incorporating the proposed training method, in combination with the enhanced spatial resolution provided by the source estimation, improves the discrimination accuracy for most of the considered emotions, namely dominance, valence, and liking.
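
    Of the three feature families named, the Hjorth parameters have a compact closed form; a minimal Python sketch follows (the PSD and wavelet features, the PCA step and the SVM classifier are omitted).

        import numpy as np

        def hjorth_parameters(x):
            """Hjorth activity, mobility and complexity of a 1-D EEG segment."""
            x = np.asarray(x, float)
            dx, ddx = np.diff(x), np.diff(np.diff(x))
            var_x, var_dx, var_ddx = x.var(), dx.var(), ddx.var()
            activity = var_x                                  # signal variance
            mobility = np.sqrt(var_dx / var_x)                # mean frequency proxy
            complexity = np.sqrt(var_ddx / var_dx) / mobility # bandwidth proxy
            return activity, mobility, complexity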

  3. Comparing brain graphs in which nodes are regions of interest or independent components: A simulation study.

    Science.gov (United States)

    Yu, Qingbao; Du, Yuhui; Chen, Jiayu; He, Hao; Sui, Jing; Pearlson, Godfrey; Calhoun, Vince D

    2017-11-01

    A key challenge in building a brain graph using fMRI data is how to define the nodes. Spatial brain components estimated by independent component analysis (ICA) and regions of interest (ROIs) determined by a brain atlas are two popular methods to define nodes in brain graphs, and it is difficult to evaluate which method is better in real fMRI data. Here we perform a simulation study and evaluate the accuracies of several graph metrics in graphs with nodes of ICA components, ROIs, or modified ROIs in four simulation scenarios. Graph measures with ICA nodes are more accurate than graphs with ROI nodes in all cases, while graph measures with modified ROI nodes are modulated by artifacts. The correlations of graph metrics across subjects between graphs with ICA nodes and the ground truth are higher than the corresponding correlations for graphs with ROI nodes in scenarios with largely overlapping spatial sources. Moreover, moving the location of the ROIs largely decreases the correlations in all scenarios. Evaluating graphs with different nodes is more feasible in simulated data than in real data, because different scenarios can be simulated and the measures of different graphs can be compared against a known ground truth. Since ROIs defined using a brain atlas may not correspond well to real functional boundaries, the overall findings of this work suggest that it is more appropriate to define nodes using data-driven ICA than ROI approaches in real fMRI data. Copyright © 2017 Elsevier B.V. All rights reserved.

  4. Evaluation of auto region of interest settings for single photon emission computed tomography findings

    International Nuclear Information System (INIS)

    Mizuno, Takashi; Takahashi, Masaaki; Yoshioka, Katsunori

    2008-01-01

    Manual setting of regions of interest (ROIs) on single photon emission computed tomography (SPECT) slices has been noted to lack objectivity when quantitative values of regional cerebral blood flow (rCBF) are measured. Therefore, we jointly developed the software 'Brain ROI' with Daiichi Radioisotope Laboratories, Ltd. (present name: FUJIFILM RI Pharma Co., Ltd.). The software normalizes an individual brain to a standard brain template by using Statistical Parametric Mapping 2 (SPM 2) of the easy Z-score Imaging System ver. 3.0 (eZIS Ver. 3.0), and the ROI template is set on a specific slice. In this study, we evaluated the accuracy of this software, with an ROI template of suitable size and shape, on several clinical samples. The objective was automatic setting of ROIs; however, the shape of the ROI template should be used in a way that is not influenced by brain atrophy, and the normalization of each individual brain should be inspected to confirm its accuracy. When normalization fails, the operator should partially correct the ROI or set everything by manual operation. Nevertheless, since examples of failure were few, this software was considered useful provided that this tendency is understood. (author)

  5. Reconstruction for interior region-of-interest inverse geometry computed tomography: preliminary study

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Dong Su; Kim, Tae Ho; Kim, Kyeong Hyeon; Yoon, Do Kun; Suh, Tae Suk [Dept. of Biomedical Engineering, Research Institute of Biomedical Engineering, College of Medicine, The Catholic University of Korea, Seoul (Korea, Republic of); Kang, Seong Hee [Dept. of Radiation Oncology, Seoul National University Hospital, Seoul (Korea, Republic of); Cho, Min Seok [Dept. of Radiation Oncology, Asan Medical Center, Seoul (Korea, Republic of); Noh, Yu Yoon [Dept. of Radiation Oncology, Eulji University Hospital, Daejeon (Korea, Republic of)

    2017-04-15

    The inverse geometry computed tomography (IGCT) system, composed of multiple sources and a small detector, has several merits compared to conventional cone-beam computed tomography (CBCT), such as reduction of scatter effects and large volumetric imaging within one rotation without cone-beam artifacts. Using these multi-source characteristics, we present a selective and multiple interior region-of-interest (ROI) imaging method based on a designed source on-off sequence of IGCT. ROI-IGCT showed image quality comparable to CBCT and has the capability to provide multiple ROI images within one rotation. Because projection in ROI-IGCT is performed by selective irradiation, unnecessary imaging dose to non-interest regions can be reduced. In this regard, it seems to be useful for diagnostics or for image guidance in radiotherapy.

  6. Multiple regions-of-interest analysis of setup uncertainties for head-and-neck cancer radiotherapy

    International Nuclear Information System (INIS)

    Zhang Lifei; Garden, Adam S.; Lo, Justin; Ang, K. Kian; Ahamad, Anesa; Morrison, William H.; Rosenthal, David I.; Chambers, Mark S.; Zhu, X. Ronald; Mohan, Radhe; Dong Lei

    2006-01-01

    Purpose: To analyze three-dimensional setup uncertainties for multiple regions of interest (ROIs) in the head-and-neck region. Methods and Materials: In-room computed tomography (CT) scans were acquired using a CT-on-rails system for 14 patients. Three separate bony ROIs were defined: the C2 and C6 vertebral bodies and the palatine process of the maxilla. Translational shifts of the 3 ROIs were calculated relative to the marked isocenter on the immobilization mask. Results: The shifts for all 3 ROIs were highly correlated. However, noticeable differences on the order of 2-6 mm existed between any 2 ROIs, indicating flexibility and/or rotational effects in the head-and-neck region. The palatine process of the maxilla had the smallest right-left shifts because of the tight lateral fit in the face mask, but the largest superior-inferior movement because of in-plane rotation and variations in jaw position. The neck region (C6) had the largest right-left shifts. The positioning mouthpiece was found effective in reducing variations in the superior-inferior direction. There was no statistically significant improvement for using the S-board (8 out of 14 patients) vs. the short face mask. Conclusions: We found variability in setup corrections for different regions of head-and-neck anatomy. These relative positional variations should be considered when making setup corrections or designing treatment margins.

  7. RCAS: an RNA centric annotation system for transcriptome-wide regions of interest.

    Science.gov (United States)

    Uyar, Bora; Yusuf, Dilmurat; Wurmus, Ricardo; Rajewsky, Nikolaus; Ohler, Uwe; Akalin, Altuna

    2017-06-02

    In the field of RNA, the technologies for studying the transcriptome have created tremendous potential for deciphering the puzzles of RNA biology. Along with the excitement, the unprecedented volume of RNA-related omics data is creating great challenges in bioinformatics analyses. Here, we present the RNA Centric Annotation System (RCAS), an R package designed to ease the process of creating gene-centric annotations and analyses for genomic regions of interest obtained from various RNA-based omics technologies. The design of RCAS is modular, which enables flexible usage and convenient integration with other bioinformatics workflows. RCAS is an R/Bioconductor package, but we also created graphical user interfaces, including a Galaxy wrapper and a stand-alone web service. The application of RCAS to published datasets shows that RCAS is not only able to reproduce published findings but also helps generate novel knowledge and hypotheses. The meta-gene profiles, gene-centric annotation, motif analysis and gene-set analysis provided by RCAS supply the contextual knowledge necessary for understanding the functional aspects of different biological events that involve RNAs. In addition, the array of different interfaces and deployment options adds convenience for different levels of users. RCAS is available at http://bioconductor.org/packages/release/bioc/html/RCAS.html and http://rcas.mdc-berlin.de. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.

  8. BPF-type region-of-interest reconstruction for parallel translational computed tomography.

    Science.gov (United States)

    Wu, Weiwen; Yu, Hengyong; Wang, Shaoyu; Liu, Fenglin

    2017-01-01

    The objective of this study is to present and test a new ultra-low-cost linear-scan-based tomography architecture. Similar to linear tomosynthesis, the source and detector are translated in opposite directions, and the data acquisition system targets a region-of-interest (ROI) to acquire data for image reconstruction. This kind of tomographic architecture has been named parallel translational computed tomography (PTCT). In previous studies, filtered backprojection (FBP)-type algorithms were developed to reconstruct images from PTCT, but ROI images reconstructed from truncated projections have severe truncation artefacts. In order to overcome this limitation, in this study we propose two backprojection filtering (BPF)-type algorithms, named MP-BPF and MZ-BPF, to reconstruct ROI images from truncated PTCT data. A weight function is constructed to deal with data redundancy for multi-linear translation modes. Extensive numerical simulations are performed to evaluate the proposed MP-BPF and MZ-BPF algorithms for PTCT in fan-beam geometry. Qualitative and quantitative results demonstrate that the proposed BPF-type algorithms can not only more accurately reconstruct ROI images from truncated projections but also generate high-quality images for the entire image support in some circumstances.

  9. Performance of shear-wave elastography for breast masses using different region-of-interest (ROI) settings.

    Science.gov (United States)

    Youk, Ji Hyun; Son, Eun Ju; Han, Kyunghwa; Gweon, Hye Mi; Kim, Jeong-Ah

    2018-07-01

    Background Regions of interest (ROIs) of various sizes and shapes can be applied for shear-wave elastography (SWE). Purpose To investigate the diagnostic performance of SWE according to ROI settings for breast masses. Material and Methods To measure elasticity for 142 lesions, ROIs were set as follows: circular ROIs 1 mm (ROI-1), 2 mm (ROI-2), and 3 mm (ROI-3) in diameter placed over the stiffest part of the mass; freehand ROIs drawn by tracing the border of the mass (ROI-M) and the area of peritumoral increased stiffness (ROI-MR); and circular ROIs placed within the mass (ROI-C) and encompassing the area of peritumoral increased stiffness (ROI-CR). The mean (Emean), maximum (Emax), and standard deviation (ESD) of the elasticity values and their areas under the receiver operating characteristic (ROC) curve (AUCs) for diagnostic performance were compared. Results The means of Emean and ESD differed significantly between ROI-1, ROI-2, and ROI-3. Conclusion Shear-wave elasticity values and their diagnostic performance vary based on ROI settings and elasticity indices. Emax is recommended for ROIs over the stiffest part of the mass, and an ROI encompassing the peritumoral area of increased stiffness is recommended for elastic heterogeneity of the mass.

  10. Region of interest methylation analysis: a comparison of MSP with MS-HRM and direct BSP.

    Science.gov (United States)

    Akika, Reem; Awada, Zainab; Mogharbil, Nahed; Zgheib, Nathalie K

    2017-07-01

    The aim of this study was to compare and contrast three DNA methylation methods for a specific region of interest (ROI): methylation-specific PCR (MSP), methylation-sensitive high resolution melting (MS-HRM) and direct bisulfite sequencing (BSP). The methylation of a CpG area in the promoter region of estrogen receptor alpha (ESR1) was evaluated by these three methods with samples and standards of different methylation percentages. MSP data were neither reproducible nor sensitive, and the assay was not specific due to non-specific binding of primers. MS-HRM was highly reproducible and a step forward in categorizing the methylation status of the samples as percentage ranges. Direct BSP was the most informative method regarding the methylation percentage of each CpG site; though not perfect, it was reproducible and sensitive. We recommend the use of either method depending on the research question and target amplicon, provided that the designed primers and expected amplicons are within recommendations. If the research question targets a limited number of CpG sites and simple yes/no results are enough, MSP may be attempted. For short amplicons that are crowded with CpG sites and have a single melting domain, MS-HRM may be the method of choice, though it only indicates the overall methylation percentage of the entire amplicon; although the assay is highly reproducible, being semi-quantitative makes it less suitable for studying ROI methylation in samples with small methylation differences. Direct BSP is a step forward as it gives information about the methylation percentage at each CpG site.

  11. MRI measurements of water diffusion: impact of region of interest selection on ischemic quantification

    International Nuclear Information System (INIS)

    Ozsunar, Yelda; Koseoglu, Kutsi; Huisman, Thierry A.G.M.; Koroshetz, Walter; Sorensen, A. Gregory

    2004-01-01

    Objective: To investigate the effect of ADC heterogeneity on region of interest (ROI) measurement of isotropic and anisotropic water diffusion in acute (<12 h) cerebral infarctions. Methods and materials: Full diffusion tensor images were retrospectively analyzed in 32 patients with acute cerebral infarction. Fractional anisotropy (FA) and apparent diffusion coefficient (ADC) values were measured in ischemic lesions and in the corresponding contralateral, normal-appearing brain by using four ROIs for each patient. The 2x2 pixel square ROIs were placed in the center, the lateral rim and the medial rim of the infarction; in addition, the whole volume of the infarction was measured using a freehand method. Each ROI value obtained from the ischemic lesion was normalized using the contralateral normal ROI values. Results: The localization of the ROIs in relation to the ischemic lesion significantly affected the ADC measurement (P<0.01, Friedman test), but not the FA measurement (P=0.25). Significant differences were found between ADC values of the center of the infarction versus the whole volume (P<0.01), and the medial rim versus the whole volume of the infarction (P<0.001), with variation of relative ADC values up to 11%. The differences in absolute ADC for these groups were 22 and 23%, respectively. The lowest ADC was found in the center, followed by the medial rim, lateral rim and whole volume of the infarction. Conclusion: ADC quantification may give variable results depending on the ROI method. The ADC and FA values obtained from the center of the infarction tend to be lower compared to the periphery. Researchers who compare studies or work on ischemic quantification should be aware of these differences and effects.

  12. Cerebral perfusion computerized tomography: influence of reference vessels, regions of interest and interobserver variability

    International Nuclear Information System (INIS)

    Soustiel, Jean F.; Mor, Nadav; Zaaroor, Menashe; Goldsher, Dorith

    2006-01-01

    There are still no standardized guidelines for perfusion computerized tomography (PCT) analysis. A total of 61 PCT studies were analyzed using either the anterior cerebral artery (ACA) or the middle cerebral artery (MCA) as the arterial reference, and the superior sagittal sinus (SSS) or the vein of Galen (VG) as the venous reference. The effect of region-of-interest (ROI) size was investigated by comparing PCT results obtained using a hemispheric ROI combined with vascular pixel elimination with those obtained using five smaller ROIs located over the cortex and basal ganglia. In addition, interobserver variations were explored using a standardized protocol. MCA-based measurements of cerebral blood flow (CBF) and blood volume (CBV) were in accordance with those obtained with the ACA, except in 16 patients with ischemic stroke, in whom CBF was overestimated by the ipsilateral MCA. Venous maximal intensity was significantly lower with the VG than with the SSS, resulting in overestimation of CBF and CBV. However, in 13.3% of patients the VG ROI yielded higher maximal intensities than the SSS ROI. There was no difference in PCT results between the hemispheric ROI and the averaged separate ROIs when vascular pixel elimination was used. Finally, interobserver variations were as high as 11% for CBF and 12% for CBV. The present results suggest that pathological rather than anatomical considerations should dictate the choice of the arterial ROI. For the venous ROI, although the SSS seems adequate in most instances, deep cerebral veins may occasionally generate higher maximal intensities and should then be selected. Importantly, significant user-dependency should be taken into account. (orig.)

  13. Testing of Haar-Like Feature in Region of Interest Detection for Automated Target Recognition (ATR) System

    Science.gov (United States)

    Zhang, Yuhan; Lu, Dr. Thomas

    2010-01-01

    The objectives of this project were to develop an ROI (Region of Interest) detector using Haar-like features, similar to the face detection in Intel's OpenCV library, implement it in Matlab code, and test the performance of the new ROI detector against the existing ROI detector based on the Optimal Trade-off Maximum Average Correlation Height (OTMACH) filter. The ROI detector comprises three parts: (1) automated Haar-like feature selection, finding a small set of the most relevant Haar-like features for detecting ROIs that contain a target; (2) a neural network trained to recognize ROIs with targets, taking the selected Haar-like features as inputs; and (3) a filtering method that processes the neural network responses into a small set of regions of interest. All three parts were coded in Matlab, and the parameters of the detector were trained by machine learning and tested with specific datasets. Since the OpenCV library and Haar-like features were not available in Matlab, the Haar-like feature calculation had to be implemented in Matlab; code for Adaptive Boosting and max/min filters could be found on the Internet but needed to be integrated to serve the purpose of this project. The performance of the new detector was tested by comparing its accuracy and speed against the existing OTMACH detector. Speed was measured as the average time to find the regions of interest in an image, and accuracy by the number of false positives (false alarms) at the same detection rate for the two detectors.
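
    The core of the Haar-like feature calculation that had to be re-implemented is the integral image (summed-area table), which turns any rectangle sum into an O(1) lookup. The project itself used Matlab; the following Python sketch of one two-rectangle feature is an illustrative translation, not the project code.

        import numpy as np

        def integral_image(img):
            """Summed-area table with a zero row and column prepended."""
            ii = np.cumsum(np.cumsum(img.astype(float), axis=0), axis=1)
            return np.pad(ii, ((1, 0), (1, 0)))

        def box_sum(ii, r, c, h, w):
            """Sum of the h-by-w box whose top-left pixel is (r, c)."""
            return ii[r + h, c + w] - ii[r, c + w] - ii[r + h, c] + ii[r, c]

        def haar_two_rect(ii, r, c, h, w):
            """Two-rectangle Haar-like feature: left half minus right half."""
            return box_sum(ii, r, c, h, w // 2) - box_sum(ii, r, c + w // 2, h, w // 2)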

  15. SU-F-J-183: Interior Region-Of-Interest Tomography by Using Inverse Geometry System

    Energy Technology Data Exchange (ETDEWEB)

    Kim, K; Kim, D; Kang, S; Kim, T; Shin, D; Cho, M; Noh, Y; Suh, T [Department of Biomedical Engineering and Research Institute of Biomedical Engineering, College of Medicine, The Catholic University of Korea, Seoul (Korea, Republic of)

    2016-06-15

    Purpose: The inverse geometry computed tomography (IGCT) system, composed of multiple sources and a small detector, has several merits compared to conventional cone-beam computed tomography (CBCT), such as reduction of scatter effects and large volumetric imaging within one rotation without cone-beam artifacts. Using these multi-source characteristics, we present a selective and multiple interior region-of-interest (ROI) imaging method based on a designed source on-off sequence of IGCT. Methods: All IGCT sources are operated sequentially, one by one, and each projection, in the shape of a narrow cone beam, covers its own partial volume of the full field of view (FOV) determined by the system geometry. Thus, by controlling the multi-source operation, irradiation can be limited to the ROI, and selective radon space data for ROI imaging can be acquired without additional X-ray filtration. With this feature, we designed a source on-off sequence for multi-ROI IGCT imaging, and projections of ROI-IGCT were generated using this sequence. Multi-ROI IGCT images were reconstructed using a filtered backprojection algorithm. The whole imaging process in this study was performed using a digital phantom and patient CT data. ROI-IGCT images of the phantom were compared to the CBCT image and the phantom data for image quality evaluation. Results: The image quality of ROI-IGCT was comparable to that of CBCT. Moreover, in axial planes distal from the FOV center (the large cone-angle region), ROI-IGCT showed uniform image quality without the significant cone-beam artifacts seen in CBCT. Conclusion: ROI-IGCT showed comparable image quality and has the capability to provide multi-ROI images within one rotation. Projection in ROI-IGCT is performed by selective irradiation, hence unnecessary imaging dose to non-interest regions can be reduced. In this regard, it seems to be useful for diagnostic or image guidance purposes in radiotherapy such as low dose target localization and

  17. Comparison of manual and semi-automated delineation of regions of interest for radioligand PET imaging analysis

    International Nuclear Information System (INIS)

    Chow, Tiffany W; Verhoeff, Nicolaas PLG; Takeshita, Shinichiro; Honjo, Kie; Pataky, Christina E; St Jacques, Peggy L; Kusano, Maggie L; Caldwell, Curtis B; Ramirez, Joel; Black, Sandra

    2007-01-01

    As imaging centers produce higher resolution research scans, the number of man-hours required to process regional data has become a major concern. Comparison of automated vs. manual methodology has not been reported for functional imaging. We explored validation of using automation to delineate regions of interest on positron emission tomography (PET) scans. The purpose of this study was to ascertain improvements in image processing time and reproducibility of a semi-automated brain region extraction (SABRE) method over manual delineation of regions of interest (ROIs). We compared 2 sets of partial volume corrected serotonin 1a receptor binding potentials (BPs) resulting from manual vs. semi-automated methods. BPs were obtained from subjects meeting consensus criteria for frontotemporal degeneration and from age- and gender-matched healthy controls. Two trained raters provided each set of data to conduct comparisons of inter-rater mean image processing time, rank order of BPs for 9 PET scans, intra- and inter-rater intraclass correlation coefficients (ICC), repeatability coefficients (RC), percentages of the average parameter value (RM%), and effect sizes of either method. SABRE saved approximately 3 hours of processing time per PET subject over manual delineation (p < .001). Quality of the SABRE BP results was preserved relative to the rank order of subjects by manual methods. Intra- and inter-rater ICC were high (>0.8) for both methods. RC and RM% were lower for the manual method across all ROIs, indicating less intra-rater variance across PET subjects' BPs. SABRE demonstrated significant time savings and no significant difference in reproducibility over manual methods, justifying the use of SABRE in serotonin 1a receptor radioligand PET imaging analysis. This implies that semi-automated ROI delineation is a valid methodology for future PET imaging analysis

  18. The Evolution of the Region of Interest Builder in the ATLAS Experiment at CERN

    CERN Document Server

    Rifki, Othmane; The ATLAS collaboration; Crone, Gordon Jeremy; Green, Barry; Love, Jeremy; Proudfoot, James; Panduro Vazquez, William; Vandelli, Wainer; Zhang, Jinlong

    2015-01-01

    ATLAS is a general purpose particle detector at the Large Hadron Collider (LHC) at CERN designed to measure the products of proton collisions. Given the high interaction rate (1 GHz), selective triggering in real time is required to reduce the rate to the experiment's data storage capacity (1 kHz). To meet this requirement, ATLAS employs a combination of hardware and software triggers to select interesting collisions for physics analysis. The Region of Interest Builder (RoIB) is an integral part of the ATLAS detector Trigger and Data Acquisition (TDAQ) chain, where the coordinates of the regions of interest (RoIs) identified by the first level trigger (L1) are collected and passed to the High Level Trigger (HLT) to make a decision. While the current custom RoIB operated reliably during the first run of the LHC, it is desirable to make the RoIB more operationally maintainable in the new run, which will reach higher luminosities with an increased complexity of L1 triggers. We are responsible for migrating the ...

  19. The Evolution of the Region of Interest Builder in the ATLAS Experiment

    CERN Document Server

    Blair, Robert; The ATLAS collaboration; Green, Barry; Love, Jeremy; Proudfoot, James; Rifki, Othmane; Panduro Vazquez, Jose Guillermo; Zhang, Jinlong

    2015-01-01

    ATLAS is a general purpose particle detector at the Large Hadron Collider (LHC) at CERN designed to measure the products of proton collisions. Given the high interaction rate (1 GHz), selective triggering in real time is required to reduce the rate to the experiment's data storage capacity (1 kHz). To meet this requirement, ATLAS employs a combination of hardware and software triggers to select interesting collisions for physics analysis. The Region of Interest Builder (RoIB) is an integral part of the ATLAS detector Trigger and Data Acquisition (TDAQ) chain, where the coordinates of the regions of interest (RoIs) identified by the first level trigger (L1) are collected and passed to the High Level Trigger (HLT) to make a decision. While the current custom RoIB operated reliably during the first run of the LHC, it is desirable to make the RoIB more operationally maintainable in the new run, which will reach higher luminosities with an increased complexity of L1 triggers. We are responsible for migrating the ...

  20. Single-shot full resolution region-of-interest (ROI) reconstruction in image plane digital holographic microscopy

    Science.gov (United States)

    Singh, Mandeep; Khare, Kedar

    2018-05-01

    We describe a numerical processing technique that allows single-shot region-of-interest (ROI) reconstruction in image plane digital holographic microscopy with full pixel resolution. The ROI reconstruction is modelled as an optimization problem whose cost function consists of an L2-norm squared data-fitting term and a modified Huber penalty term, which are minimized alternately in an adaptive fashion. The technique can provide full-pixel-resolution complex-valued images of the selected ROI, which is not possible with the commonly used Fourier transform method, and it can facilitate holographic reconstruction of individual cells of interest from large field-of-view digital holographic microscopy data. The complementary phase information, in addition to the absorption information already available from bright field microscopy, can make the methodology attractive to the biomedical user community.
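
    A minimal sketch of the kind of cost function described, an L2 data term plus a Huber penalty on the image gradient, minimized here by plain gradient descent rather than the paper's adaptive alternating scheme; the operators A/At (hologram formation restricted to the ROI, and its adjoint) are assumed given, and the image is treated as real-valued for brevity.

        import numpy as np

        def huber_grad(t, delta):
            """Derivative of the Huber penalty: quadratic core, linear tails."""
            return np.where(np.abs(t) <= delta, t, delta * np.sign(t))

        def roi_reconstruct(A, At, b, shape, lam=0.1, delta=0.05, n_iter=300, step=1e-3):
            """Gradient descent on ||A u - b||^2 + lam * Huber(grad u)."""
            u = np.zeros(shape)
            for _ in range(n_iter):
                grad_data = 2.0 * At(A(u) - b)                 # L2 data term
                gx = np.diff(u, axis=1, append=u[:, -1:])      # forward differences
                gy = np.diff(u, axis=0, append=u[-1:, :])
                px, py = huber_grad(gx, delta), huber_grad(gy, delta)
                div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
                u -= step * (grad_data - lam * div)            # penalty gradient = -div
            return u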

  1. Personal dose assessment using region of interest analysis compared with Harshaw TLD WinREMS software evaluation

    International Nuclear Information System (INIS)

    Adjei, D.

    2010-06-01

    Personal dose equivalents, Hp(10), were evaluated manually using Region of Interest (ROI) analysis and compared with the automated computerized WinREMS software for occupationally exposed workers in medical, industrial and research/teaching applications for 2008 and 2009. The mean annual effective doses estimated by the WinREMS software for medical, industrial and research/teaching applications over the study period are 0.459 mSv, 0.549 mSv and 0.447 mSv, respectively, compared with 0.424 mSv, 0.520 mSv and 0.407 mSv, respectively, for the ROI analysis. The mean annual collective doses evaluated by the WinREMS software for medical, industrial and research/teaching applications for the two-year study period are 0.258 man-Sv, 0.084 man-Sv and 0.032 man-Sv, respectively, compared with 0.238 man-Sv, 0.080 man-Sv and 0.029 man-Sv, respectively, for the ROI analysis. The individual doses for occupationally exposed workers in Ghana fall within the typical range of individual doses in the UNSCEAR 2008 report. In calibration mode, the WinREMS method overestimated the personal dose equivalent by 51.3% for doses below 1 mSv and by 12.0% above 1 mSv; the corresponding values for the Region of Interest analysis method are 13.2% and 6.5%. The results of the study indicate that ROI analysis provides a better alternative for estimating personal doses. (au)

  2. A simple approach to spectrally resolved fluorescence and bright field microscopy over select regions of interest.

    Science.gov (United States)

    Dahlberg, Peter D; Boughter, Christopher T; Faruk, Nabil F; Hong, Lu; Koh, Young Hoon; Reyer, Matthew A; Shaiber, Alon; Sherani, Aiman; Zhang, Jiacheng; Jureller, Justin E; Hammond, Adam T

    2016-11-01

    A standard wide field inverted microscope was converted to a spatially selective spectrally resolved microscope through the addition of a polarizing beam splitter, a pair of polarizers, an amplitude-mode liquid crystal-spatial light modulator, and a USB spectrometer. The instrument is capable of simultaneously imaging and acquiring spectra over user defined regions of interest. The microscope can also be operated in a bright-field mode to acquire absorption spectra of micron scale objects. The utility of the instrument is demonstrated on three different samples. First, the instrument is used to resolve three differently labeled fluorescent beads in vitro. Second, the instrument is used to recover time dependent bleaching dynamics that have distinct spectral changes in the cyanobacteria, Synechococcus leopoliensis UTEX 625. Lastly, the technique is used to acquire the absorption spectra of CH3NH3PbBr3 perovskites and measure differences between nanocrystal films and micron scale crystals.

  3. Identifying regions of interest in medical images using self-organizing maps.

    Science.gov (United States)

    Teng, Wei-Guang; Chang, Ping-Lin

    2012-10-01

    Advances in data acquisition, processing and visualization techniques have had a tremendous impact on medical imaging in recent years. However, the interpretation of medical images is still almost always performed by radiologists. Developments in artificial intelligence and image processing have shown the increasingly great potential of computer-aided diagnosis (CAD). Nevertheless, it has remained challenging to develop a general approach that can process the various commonly used types of medical images (e.g., X-ray, MRI, and ultrasound images). To facilitate diagnosis, we recommend the use of image segmentation to discover regions of interest (ROIs) using self-organizing maps (SOMs). We devise a two-stage SOM approach that precisely identifies the dominant colors of a medical image and then segments it into several small regions. In addition, by appropriately conducting recursive merging steps that merge smaller regions into larger ones, radiologists can usually identify one or more ROIs within a medical image.
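
    A minimal sketch of the two stages, a small 1-D SOM that learns the dominant colors and a per-pixel assignment to the nearest node to form regions (the recursive merging stage is omitted); the node count and the learning-rate and neighborhood schedules are illustrative assumptions.

        import numpy as np

        def train_som(pixels, n_nodes=8, n_iter=5000, lr0=0.5, radius0=2.0, seed=0):
            """Train a 1-D SOM on (N, 3) RGB pixels to learn dominant colors."""
            rng = np.random.default_rng(seed)
            nodes = pixels[rng.integers(0, len(pixels), n_nodes)].astype(float)
            for t in range(n_iter):
                x = pixels[rng.integers(len(pixels))]
                bmu = np.argmin(((nodes - x) ** 2).sum(axis=1))    # best-matching unit
                lr = lr0 * (1.0 - t / n_iter)                      # decaying learning rate
                radius = max(radius0 * (1.0 - t / n_iter), 0.5)    # shrinking neighborhood
                d = np.abs(np.arange(n_nodes) - bmu)               # distance on the node grid
                h = np.exp(-(d ** 2) / (2.0 * radius ** 2))
                nodes += lr * h[:, None] * (x - nodes)
            return nodes

        def segment(image, nodes):
            """Label each pixel with its nearest SOM node (dominant color)."""
            flat = image.reshape(-1, 3).astype(float)
            dists = ((flat[:, None, :] - nodes[None, :, :]) ** 2).sum(axis=2)
            return dists.argmin(axis=1).reshape(image.shape[:2])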

  5. An Android-Based Omega-3 Chicken Egg Image Identification Application Using Region-of-Interest Segmentation

    Directory of Open Access Journals (Sweden)

    Ahmad Muzami

    2016-04-01

    Full Text Available Chicken eggs are the second most important source of animal protein after fish. Their affordable price and high nutritional value make eggs one of the foods most frequently consumed by the public. However, engineered eggs with higher nutritional value have now appeared, namely eggs containing omega-3. The part that distinguishes an ordinary egg from an omega-3 egg is the yolk: the yolk of an omega-3 egg is somewhat reddish, while an ordinary yolk is yellow. The aim of this research is to produce software that can visually identify whether an egg is an ordinary egg or an omega-3 egg. Egg type detection is performed by matching the eggshell texture against research data. The research produced a method, or algorithm, for digital image identification consisting of preprocessing, region-of-interest segmentation, and image texture analysis using first-order statistics, namely the mean and standard deviation. The results show that the Omega-3 Egg Image Detection application can distinguish images of ordinary chicken eggs from images of omega-3 chicken eggs.

  6. A Mobile-Based Meat Quality Detection Application Using Region-of-Interest Segmentation

    Directory of Open Access Journals (Sweden)

    Rismawan Fajril Falah

    2016-04-01

    Full Text Available The increasing demand for beef in Indonesia is currently being exploited by dishonest traders seeking large profits. The sale of poor-quality beef causes public concern because of its harmful content. Good beef quality can be determined from the color, odor, texture and appearance of the meat. The public generally relies on visual inspection with the naked eye to judge beef quality, but this approach is not very effective because the eye has difficulty examining an object in detail. This research aims to design and build an application for detecting beef quality using image processing. The application was built on the Android operating system, integrated with the Android SDK, the OpenCV library and Eclipse. Detection is performed by capturing an image of the beef and processing it through several digital image processing stages: grayscale preprocessing, region-of-interest segmentation, histogram equalization, and analysis of statistical feature extraction values. Beef quality can be determined more effectively from the results of this image processing analysis. This research shows that the reading accuracy of the application is 90%.
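
    A minimal sketch of the processing stages listed above using OpenCV's Python bindings; the ROI coordinates, file name and any decision thresholds are hypothetical, since the record reports only the pipeline and the 90% accuracy figure.

        import cv2

        def first_order_features(image_bgr, roi):
            """Grayscale conversion, ROI cropping, histogram equalization, then
            first-order statistics (mean and standard deviation) of the ROI."""
            x, y, w, h = roi                                    # ROI assumed given
            gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
            eq = cv2.equalizeHist(gray[y:y + h, x:x + w])
            return float(eq.mean()), float(eq.std())

        # hypothetical usage: compare the features against calibrated thresholds
        mean, std = first_order_features(cv2.imread("beef.jpg"), roi=(100, 100, 200, 200))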

  7. Investigation of Five Algorithms for Selection of the Optimal Region of Interest in Smartphone Photoplethysmography

    Directory of Open Access Journals (Sweden)

    Rong-Chao Peng

    2016-01-01

    Full Text Available Smartphone photoplethysmography is a newly developed technique that can detect several physiological parameters from the photoplethysmographic signal obtained by the built-in camera of a smartphone. It is simple, low-cost, and easy to use, with great potential to be used in remote medicine and home healthcare services. However, the determination of the optimal region of interest (ROI), which is an important issue for extracting photoplethysmographic signals from the camera video, has not been well studied. We herein propose five algorithms for ROI selection: variance (VAR), spectral energy ratio (SER), template matching (TM), temporal difference (TD), and gradient (GRAD). Their performances were evaluated in a 50-subject experiment comparing the heart rates measured from the electrocardiogram with those obtained from the smartphone using the five algorithms. The results revealed that the TM and TD algorithms outperformed the other three, as they had a smaller standard error of estimate (<1.5 bpm) and narrower limits of agreement (<3 bpm). The TD algorithm was slightly better than the TM algorithm and more suitable for smartphone applications. These results may help improve the accuracy of physiological parameter measurement and make the smartphone photoplethysmography technique more practical.
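
    A minimal sketch of one of the five candidates, the temporal difference (TD) score, under the assumption that TD favors blocks whose block-averaged green-channel signal varies most from frame to frame; the block size and the number of retained blocks are illustrative choices, not the paper's parameters.

        import numpy as np

        def select_roi_td(frames, block=32, n_best=4):
            """Pick candidate ROI blocks by temporal-difference score.

            `frames` is a (T, H, W, 3) RGB video array from the camera."""
            T, H, W, _ = frames.shape
            ny, nx = H // block, W // block
            g = frames[:, :ny * block, :nx * block, 1].astype(float)
            # block-averaged green-channel time series, shape (T, ny, nx)
            series = g.reshape(T, ny, block, nx, block).mean(axis=(2, 4))
            score = np.abs(np.diff(series, axis=0)).mean(axis=0)   # TD score per block
            best = np.argsort(score.ravel())[::-1][:n_best]
            return [(int(i) // nx, int(i) % nx) for i in best]     # (row, col) block indices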

  8. Accurate Region-of-Interest Recovery Improves the Measurement of the Cell Migration Rate in the In Vitro Wound Healing Assay.

    Science.gov (United States)

    Bedoya, Cesar; Cardona, Andrés; Galeano, July; Cortés-Mancera, Fabián; Sandoz, Patrick; Zarzycki, Artur

    2017-12-01

    The wound healing assay is widely used for the quantitative analysis of highly regulated cellular events. In this assay, a wound is deliberately produced on a confluent cell monolayer, and the rate of wound reduction (WR) is then characterized by processing images of the same regions of interest (ROIs) recorded at different time intervals. In this method, sharp-image ROI recovery is indispensable to compensate for displacements of the cell cultures due either to the exploration of multiple sites of the same culture or to transfers from the microscope stage to a cell incubator. ROI recovery is usually done manually and, although a low-magnification microscope objective is generally used (10x), repositioning imperfections constitute a major source of errors detrimental to the accuracy of the WR measurement. We address this ROI recovery issue by fixing pseudoperiodic patterns onto the cell culture dishes, allowing easy localization of the ROIs and accurate quantification of positioning errors. The method is applied to a tumor-derived cell line, and the WR rates are measured with two different image-processing software packages. Sharp ROI recovery based on the proposed method is found to significantly improve the accuracy of the WR measurement and of the positioning under the microscope.

  9. Using the standard deviation of a region of interest in an image to estimate camera to emitter distance.

    Science.gov (United States)

    Cano-García, Angel E; Lazaro, José Luis; Infante, Arturo; Fernández, Pedro; Pompa-Chacón, Yamilet; Espinoza, Felipe

    2012-01-01

    In this study, a camera to infrared diode (IRED) distance estimation problem was analyzed. The main objective was to define an alternative way to measure depth using only the information extracted from the pixel grey levels of the IRED image to estimate the distance between the camera and the IRED. In this paper, the standard deviation of the pixel grey level in the region of interest containing the IRED image is proposed as an empirical parameter for a model that estimates camera to emitter distance. This model includes the camera exposure time, the IRED radiant intensity and the distance between the camera and the IRED. An expression for the standard deviation model relating these magnitudes was also derived and calibrated using different images taken under different conditions. From this analysis, we determined the optimum parameters to ensure the best accuracy provided by this alternative. Once the model calibration had been carried out, a differential method to estimate the distance between the camera and the IRED was defined and applied, considering that the camera was aligned with the IRED. The results indicate that this method represents a useful alternative for determining the depth information.

  10. Using the Standard Deviation of a Region of Interest in an Image to Estimate Camera to Emitter Distance

    Directory of Open Access Journals (Sweden)

    Felipe Espinoza

    2012-05-01

    Full Text Available In this study, a camera to infrared diode (IRED) distance estimation problem was analyzed. The main objective was to define an alternative way to measure depth using only the information extracted from the pixel grey levels of the IRED image to estimate the distance between the camera and the IRED. In this paper, the standard deviation of the pixel grey level in the region of interest containing the IRED image is proposed as an empirical parameter for a model that estimates camera to emitter distance. This model includes the camera exposure time, the IRED radiant intensity and the distance between the camera and the IRED. An expression for the standard deviation model relating these magnitudes was also derived and calibrated using different images taken under different conditions. From this analysis, we determined the optimum parameters to ensure the best accuracy provided by this alternative. Once the model calibration had been carried out, a differential method to estimate the distance between the camera and the IRED was defined and applied, considering that the camera was aligned with the IRED. The results indicate that this method represents a useful alternative for determining the depth information.
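
    The core of the approach can be illustrated with a small sketch: the grey-level standard deviation is computed inside the ROI, and a calibrated model linking that statistic to exposure time, radiant intensity and distance is inverted for depth. The power-law form, the constants k and alpha, and the function names below are assumptions for illustration only; the paper derives its own model expression.

        import numpy as np

        def roi_grey_std(image, box):
            """Standard deviation of pixel grey levels inside an ROI box."""
            r0, r1, c0, c1 = box
            return float(image[r0:r1, c0:c1].std())

        def estimate_distance(sigma, exposure_t, radiant_i, k=1.0, alpha=2.0):
            """Invert an assumed calibrated model sigma = k * I * t / d**alpha.

            k and alpha would be fitted from calibration images taken at known
            distances; the functional form here is illustrative only.
            """
            return (k * radiant_i * exposure_t / sigma) ** (1.0 / alpha)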

  11. Use of X-ray CT-defined regions of interest for the determination of SPECT recovery coefficients

    International Nuclear Information System (INIS)

    Tang, H.R.; Brown, J.K.; Hasegawa, B.H.

    1996-01-01

    For accurate activity per unit volume measurements in SPECT, recovery coefficients are usually applied based on the size and shape of the objects being imaged to properly account for the resolution limitations of the gamma camera. Because of noise and limited spatial resolution, determination of object sizes and boundaries can be difficult using the SPECT images alone. We therefore have developed a technique which determines activity concentrations for SPECT using regions of interest (ROIs) obtained from coregistered X-ray CT images. In this study, experimental phantoms containing cylindrical and spherical objects were imaged on a combined X-ray CT/SPECT system and the reconstructed data volumes were registered using the known geometry of the system. ROIs were defined on the registered CT images and used to help quantify activity concentration in localized regions and to measure object volumes. We have derived the recovery curves for these objects and scan technique. We have also tested a technique that demonstrates activity quantitation without the need for object and size dependent recovery coefficients in the case of low background
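
    In this setting a recovery coefficient is simply the ratio of measured to true activity concentration for a given object; a minimal sketch, assuming the CT-derived ROI has already been resampled onto the SPECT grid as a boolean mask:

        import numpy as np

        def recovery_coefficient(spect, ct_roi_mask, true_concentration):
            """Measured-to-true concentration ratio for one phantom object.

            spect             : reconstructed SPECT volume (activity per voxel)
            ct_roi_mask       : boolean ROI defined on the coregistered CT
            true_concentration: known activity concentration in the object
            """
            measured = spect[ct_roi_mask].mean()
            return measured / true_concentration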

  12. A simple derivation and analysis of a helical cone beam tomographic algorithm for long object imaging via a novel definition of region of interest

    International Nuclear Information System (INIS)

    Hu Jicun; Tam, Kwok; Johnson, Roger H

    2004-01-01

    We derive and analyse a simple algorithm first proposed by Kudo et al (2001 Proc. 2001 Meeting on Fully 3D Image Reconstruction in Radiology and Nuclear Medicine (Pacific Grove, CA) pp 7-10) for long object imaging from truncated helical cone beam data via a novel definition of region of interest (ROI). Our approach is based on the theory of short object imaging by Kudo et al (1998 Phys. Med. Biol. 43 2885-909). One of the key findings in their work is that filtering of the truncated projection can be divided into two parts: one, finite in the axial direction, results from ramp filtering the data within the Tam window. The other, infinite in the z direction, results from unbounded filtering of ray sums over PI lines only. We show that for an ROI defined by PI lines emanating from the initial and final source positions on a helical segment, the boundary data which would otherwise contaminate the reconstruction of the ROI can be completely excluded. This novel definition of the ROI leads to a simple algorithm for long object imaging. The overscan of the algorithm is analytically calculated and it is the same as that of the zero boundary method. The reconstructed ROI can be divided into two regions: one is minimally contaminated by the portion outside the ROI, while the other is reconstructed free of contamination. We validate the algorithm with a 3D Shepp-Logan phantom and a disc phantom

  13. Optimization of Region-of-Interest Sampling Strategies for Hepatic MRI Proton Density Fat Fraction Quantification

    Science.gov (United States)

    Hong, Cheng William; Wolfson, Tanya; Sy, Ethan Z.; Schlein, Alexandra N.; Hooker, Jonathan C.; Dehkordy, Soudabeh Fazeli; Hamilton, Gavin; Reeder, Scott B.; Loomba, Rohit; Sirlin, Claude B.

    2017-01-01

    BACKGROUND Clinical trials utilizing proton density fat fraction (PDFF) as an imaging biomarker for hepatic steatosis have used a laborious region-of-interest (ROI) sampling strategy of placing an ROI in each hepatic segment. PURPOSE To identify a strategy with the fewest ROIs that consistently achieves close agreement with the nine-ROI strategy. STUDY TYPE Retrospective secondary analysis of prospectively acquired clinical research data. POPULATION A total of 391 adults (173 men, 218 women) with known or suspected NAFLD. FIELD STRENGTH/SEQUENCE Confounder-corrected chemical-shift-encoded 3T MRI using a 2D multiecho gradient-recalled echo technique. ASSESSMENT An ROI was placed in each hepatic segment. Mean nine-ROI PDFF and segmental PDFF standard deviation were computed. Segmental and lobar PDFF were compared. PDFF was estimated using every combinatorial subset of ROIs and compared to the nine-ROI average. STATISTICAL TESTING Mean nine-ROI PDFF and segmental PDFF standard deviation were summarized descriptively. Segmental PDFF was compared using a one-way analysis of variance, and lobar PDFF was compared using a paired t-test and a Bland–Altman analysis. The PDFF estimated by every subset of ROIs was informally compared to the nine-ROI average using median intraclass correlation coefficients (ICCs) and Bland–Altman analyses. RESULTS The study population’s mean whole-liver PDFF was 10.1±8.9% (range: 1.1–44.1%). Although there was no significant difference in average segmental (P=0.452) or lobar (P=0.154) PDFF, left and right lobe PDFF differed by at least 1.5 percentage points in 25.1% (98/391) of patients. Any strategy with ≥ 4 ROIs had ICC >0.995, and 115 of 126 four-ROI strategies (91%) had limits of agreement (LOA) <1.5%, whereas only 2/36 (6%) of two-ROI strategies and 46/84 (55%) of three-ROI strategies had LOA <1.5%. DATA CONCLUSION Four-ROI sampling strategies with two ROIs in the left and right lobes achieve close agreement with nine-ROI PDFF. Level of Evidence: 3.

  14. Optimization of region-of-interest sampling strategies for hepatic MRI proton density fat fraction quantification.

    Science.gov (United States)

    Hong, Cheng William; Wolfson, Tanya; Sy, Ethan Z; Schlein, Alexandra N; Hooker, Jonathan C; Fazeli Dehkordy, Soudabeh; Hamilton, Gavin; Reeder, Scott B; Loomba, Rohit; Sirlin, Claude B

    2018-04-01

    Clinical trials utilizing proton density fat fraction (PDFF) as an imaging biomarker for hepatic steatosis have used a laborious region-of-interest (ROI) sampling strategy of placing an ROI in each hepatic segment. To identify a strategy with the fewest ROIs that consistently achieves close agreement with the nine-ROI strategy. Retrospective secondary analysis of prospectively acquired clinical research data. A total of 391 adults (173 men, 218 women) with known or suspected NAFLD. Confounder-corrected chemical-shift-encoded 3T MRI using a 2D multiecho gradient-recalled echo technique. An ROI was placed in each hepatic segment. Mean nine-ROI PDFF and segmental PDFF standard deviation were computed. Segmental and lobar PDFF were compared. PDFF was estimated using every combinatorial subset of ROIs and compared to the nine-ROI average. Mean nine-ROI PDFF and segmental PDFF standard deviation were summarized descriptively. Segmental PDFF was compared using a one-way analysis of variance, and lobar PDFF was compared using a paired t-test and a Bland-Altman analysis. The PDFF estimated by every subset of ROIs was informally compared to the nine-ROI average using median intraclass correlation coefficients (ICCs) and Bland-Altman analyses. The study population's mean whole-liver PDFF was 10.1 ± 8.9% (range: 1.1-44.1%). Although there was no significant difference in average segmental (P = 0.452) or lobar (P = 0.154) PDFF, left and right lobe PDFF differed by at least 1.5 percentage points in 25.1% (98/391) of patients. Any strategy with ≥4 ROIs had ICC >0.995, and 115 of 126 four-ROI strategies (91%) had limits of agreement (LOA) <1.5%, whereas only 2/36 (6%) of two-ROI strategies and 46/84 (55%) of three-ROI strategies had LOA <1.5%. Four-ROI sampling strategies with two ROIs in the left and right lobes achieve close agreement with nine-ROI PDFF. Level of Evidence: 3. Technical Efficacy: Stage 2. J. Magn. Reson. Imaging 2018;47:988-994. © 2017 International Society for Magnetic Resonance in Medicine.
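
    The subset search described in both versions of this record is easy to reproduce: every combination of ROIs is averaged and compared with the nine-ROI mean, here scored by Bland-Altman limits of agreement. This is a minimal sketch assuming segmental PDFF values are already tabulated per patient; the ICC computation of the paper is omitted.

        import numpy as np
        from itertools import combinations

        def best_roi_subsets(pdff, max_rois=4):
            """Find, per subset size, the ROI subset agreeing best with 9-ROI PDFF.

            pdff: array (n_patients, 9) of segmental PDFF values (percent).
            Agreement is scored by the half-width of the Bland-Altman limits
            of agreement (bias is ignored in this simplified version).
            """
            reference = pdff.mean(axis=1)                  # nine-ROI average
            best = {}
            for k in range(1, max_rois + 1):
                for subset in combinations(range(9), k):
                    diff = pdff[:, list(subset)].mean(axis=1) - reference
                    loa = 1.96 * diff.std()
                    if k not in best or loa < best[k][1]:
                        best[k] = (subset, loa)
            return best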

  15. Efficient random access high resolution region-of-interest (ROI) image retrieval using backward coding of wavelet trees (BCWT)

    Science.gov (United States)

    Corona, Enrique; Nutter, Brian; Mitra, Sunanda; Guo, Jiangling; Karp, Tanja

    2008-03-01

    Efficient retrieval of high quality Regions-Of-Interest (ROI) from high resolution medical images is essential for reliable interpretation and accurate diagnosis. Random access to high quality ROI from codestreams is becoming an essential feature in many still image compression applications, particularly in viewing diseased areas from large medical images. This feature is easier to implement in block-based codecs because of the inherent spatial independency of the code blocks. This independency implies that the decoding order of the blocks is unimportant as long as the position of each is properly identified. In contrast, wavelet-tree based codecs naturally use some interdependency that exploits the decaying spectrum model of the wavelet coefficients. Thus one must keep track of the decoding order from level to level with such codecs. We have developed an innovative multi-rate image subband coding scheme using "Backward Coding of Wavelet Trees (BCWT)" which is fast, memory efficient, and resolution scalable. It offers far less complexity than many other existing codecs, including both wavelet-tree and block-based algorithms. The ROI feature in BCWT is implemented through a transcoder stage that generates a new BCWT codestream containing only the information associated with the user-defined ROI. This paper presents an efficient technique that locates a particular ROI within the BCWT coded domain and decodes it back to the spatial domain. This technique allows better access and proper identification of pathologies in high resolution images, since only a small fraction of the codestream is required to be transmitted and analyzed.

  16. Long-term, repeated measurements of mouse cortical microflow at the same region of interest with high spatial resolution.

    Science.gov (United States)

    Tomita, Yutaka; Pinard, Elisabeth; Tran-Dinh, Alexy; Schiszler, Istvan; Kubis, Nathalie; Tomita, Minoru; Suzuki, Norihiro; Seylaz, Jacques

    2011-02-04

    A method for long-term, repeated, semi-quantitative measurements of cerebral microflow at the same region of interest (ROI) with high spatial resolution was developed and applied to mice subjected to focal arterial occlusion. A closed cranial window was chronically implanted over the left parieto-occipital cortex. The anesthetized mouse was placed several times, e.g., weekly, under a dynamic confocal microscope, and Rhodamine B-isothiocyanate-dextran was injected intravenously as a bolus each time, while microflow images were video recorded. Left and right tail veins were sequentially catheterized in a mouse a maximum of three times over a 1.5-month observation period. Smearing of the input function resulting from the use of intravenous injection was shown to be sufficiently small. The distal middle cerebral artery (MCA) was thermocoagulated through the cranial window in six mice, and five sham-operated mice were studied in parallel. Dye injection and video recording were conducted four times in this series, i.e., before and at 10 min, 7 and 30 days after sham operation or MCA occlusion. Pixel-wise microflow values (1/MTT) in a matrix of approximately 50×50 pixels were displayed on a two-dimensional (2-D) map, and the frequency distribution of the flow values was also calculated. No significant changes in microflow values over time were detected in sham-operated mice, while the time course of flow changes in the ischemic penumbral area in operated mice was similar to that reported in the literature. This method provides a powerful tool to investigate long-term changes in mouse cortical microflow under physiological and pathological conditions. Copyright © 2010 Elsevier B.V. All rights reserved.

  17. Functional connectivity and structural covariance between regions of interest can be measured more accurately using multivariate distance correlation.

    Science.gov (United States)

    Geerligs, Linda; Cam-Can; Henson, Richard N

    2016-07-15

    Studies of brain-wide functional connectivity or structural covariance typically use measures like the Pearson correlation coefficient, applied to data that have been averaged across voxels within regions of interest (ROIs). However, averaging across voxels may result in biased connectivity estimates when there is inhomogeneity within those ROIs, e.g., sub-regions that exhibit different patterns of functional connectivity or structural covariance. Here, we propose a new measure based on "distance correlation"; a test of multivariate dependence of high dimensional vectors, which allows for both linear and non-linear dependencies. We used simulations to show how distance correlation out-performs Pearson correlation in the face of inhomogeneous ROIs. To evaluate this new measure on real data, we use resting-state fMRI scans and T1 structural scans from 2 sessions on each of 214 participants from the Cambridge Centre for Ageing & Neuroscience (Cam-CAN) project. Pearson correlation and distance correlation showed similar average connectivity patterns, for both functional connectivity and structural covariance. Nevertheless, distance correlation was shown to be 1) more reliable across sessions, 2) more similar across participants, and 3) more robust to different sets of ROIs. Moreover, we found that the similarity between functional connectivity and structural covariance estimates was higher for distance correlation compared to Pearson correlation. We also explored the relative effects of different preprocessing options and motion artefacts on functional connectivity. Because distance correlation is easy to implement and fast to compute, it is a promising alternative to Pearson correlations for investigating ROI-based brain-wide connectivity patterns, for functional as well as structural data. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.
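
    For readers who want to try the measure, the biased sample distance correlation between two multivoxel ROI patterns can be computed directly from pairwise Euclidean distances; this sketch follows the standard Szekely-style definition rather than any project-specific code:

        import numpy as np

        def distance_correlation(X, Y):
            """Sample distance correlation between (n, p) and (n, q) data matrices.

            X, Y: one row per time point, one column per voxel in each ROI.
            """
            def doubly_centered_dist(A):
                d = np.sqrt(((A[:, None, :] - A[None, :, :]) ** 2).sum(-1))
                return d - d.mean(0) - d.mean(1)[:, None] + d.mean()

            a = doubly_centered_dist(X)
            b = doubly_centered_dist(Y)
            dcov2 = (a * b).mean()                 # squared distance covariance
            dvar = np.sqrt((a * a).mean() * (b * b).mean())
            return np.sqrt(dcov2 / dvar) if dvar > 0 else 0.0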

  18. Influence of region of interest size and ultrasound lesion size on the performance of 2D shear wave elastography (SWE) in solid breast masses

    International Nuclear Information System (INIS)

    Skerl, K.; Vinnicombe, S.; Giannotti, E.; Thomson, K.; Evans, A.

    2015-01-01

    Aim: To evaluate the influence of the region of interest (ROI) size and lesion diameter on the diagnostic performance of 2D shear wave elastography (SWE) of solid breast lesions. Materials and methods: A study group of 206 consecutive patients (age range 21–92 years) with 210 solid breast lesions (70 benign, 140 malignant) who underwent core biopsy or surgical excision was evaluated. Lesions were divided into small (diameter <15 mm, n=112) and large lesions (diameter ≥15 mm, n=98). An ROI with a diameter of 1, 2, and 3 mm was positioned over the stiffest part of the lesion. The maximum elasticity (Emax), mean elasticity (Emean) and standard deviation (SD) for each ROI size were compared to the pathological outcome. Statistical analysis was undertaken using the chi-square test and receiver operating characteristic (ROC) analysis. Results: The ROI size used has a significant impact on the performance of Emean and SD but not on Emax. Youden's indices show a correlation with the ROI size and lesion size: generally, the benign/malignant threshold is lower with increasing ROI size but higher with increasing lesion size. Conclusions: No single SWE parameter has superior performance. Lesion size and ROI size influence diagnostic performance. - Highlights: • Optimal cut-off for benign/malignant differentiation depends on lesion size. • Region of interest size influences measurements of mean elasticity and standard deviation. • Large lesions are stiffer than small lesions. • Optimal cut-off for benign/malignant differentiation should increase with increasing lesion size. • A region of interest of 2 mm achieved the best compromise in diagnostic performance for all SWE parameters.
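
    Cut-offs like these are typically chosen by maximizing Youden's index (J = sensitivity + specificity - 1) along the ROC curve; a minimal scikit-learn sketch with hypothetical variable names:

        from sklearn.metrics import roc_curve

        def youden_cutoff(stiffness, malignant):
            """Threshold on an SWE parameter (e.g. Emean) maximizing Youden's J.

            stiffness: per-lesion elasticity values
            malignant: boolean pathology labels (True = malignant)
            """
            fpr, tpr, thresholds = roc_curve(malignant, stiffness)
            j = tpr - fpr                  # Youden's index at each threshold
            return thresholds[j.argmax()], j.max()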

  19. SOFTWARE FOR REGIONS OF INTEREST RETRIEVAL ON MEDICAL 3D IMAGES

    Directory of Open Access Journals (Sweden)

    G. G. Stromov

    2014-01-01

    Full Text Available Background. The implementation of software for the retrieval of regions of interest in 3D medical images is described in this article. It has been tested against a large volume of model MRIs. Material and methods. We tested the software against normal and pathological (severe multiple sclerosis) model MRIs from the BrainWeb resource. The technology stack is based on open-source cross-platform solutions. We implemented the storage system on MariaDB (an open-source fork of MySQL) with P/SQL extensions. Python 2.7 scripting was used to automate extract-transform-load operations. The computational core is written in Java 7 with the Spring framework 3. MongoDB was used as a cache in the cluster of workstations. Maven 3 was chosen as the dependency manager and build system, and the project is hosted on GitHub. Results. Testing on SSMU's LAN showed that the developed software quite efficiently retrieves ROIs matching the morphological substratum on pathological MRIs. Conclusion. Automating the diagnostic process in medical imaging reduces the subjective component in decision making and increases the availability of high-tech medicine. The software presented in this article is a complete solution for ROI retrieval and segmentation on model medical images in fully automated mode. We would like to thank Robert Vincent for his great help with the use of the BrainWeb resource.

  20. Fully automated quantification of regional cerebral blood flow with three-dimensional stereotaxic region of interest template. Validation using magnetic resonance imaging. Technical note

    Energy Technology Data Exchange (ETDEWEB)

    Takeuchi, Ryo; Katayama, Shigenori; Takeda, Naoya; Fujita, Katsuzo [Nishi-Kobe Medical Center (Japan); Yonekura, Yoshiharu [Fukui Medical Univ., Matsuoka (Japan); Konishi, Junji [Kyoto Univ. (Japan). Graduate School of Medicine

    2003-03-01

    The previously reported three-dimensional stereotaxic region of interest (ROI) template (3DSRT-t) for the analysis of anatomically standardized technetium-99m-L,L-ethyl cysteinate dimer (99mTc-ECD) single photon emission computed tomography (SPECT) images was modified for use in a fully automated regional cerebral blood flow (rCBF) quantification software, 3DSRT, incorporating an anatomical standardization engine transplanted from statistical parametric mapping 99 and ROIs for quantification based on 3DSRT-t. Three-dimensional T2-weighted magnetic resonance images of 10 patients with localized infarcted areas were compared with the ROI contour of 3DSRT, and the positions of the central sulcus in the primary sensorimotor area were also estimated. All positions of the 20 lesions were in strict accordance with the ROI delineation of 3DSRT. The central sulcus was identified on at least one side of 210 paired ROIs and in the middle of 192 (91.4%) of these 210 paired ROIs among the 273 paired ROIs of the primary sensorimotor area. The central sulcus was recognized in the middle of more than 71.4% of the ROIs in which the central sulcus was identifiable in the respective 28 slices of the primary sensorimotor area. Fully automated accurate ROI delineation on anatomically standardized images is possible with 3DSRT, which enables objective quantification of rCBF and vascular reserve in only a few minutes using 99mTc-ECD SPECT images obtained by the resting and vascular reserve (RVR) method. (author)

  1. Fully automated quantification of regional cerebral blood flow with three-dimensional stereotaxic region of interest template. Validation using magnetic resonance imaging. Technical note

    International Nuclear Information System (INIS)

    Takeuchi, Ryo; Katayama, Shigenori; Takeda, Naoya; Fujita, Katsuzo; Yonekura, Yoshiharu; Konishi, Junji

    2003-01-01

    The previously reported three-dimensional stereotaxic region of interest (ROI) template (3DSRT-t) for the analysis of anatomically standardized technetium-99m-L,L-ethyl cysteinate dimer (99mTc-ECD) single photon emission computed tomography (SPECT) images was modified for use in a fully automated regional cerebral blood flow (rCBF) quantification software, 3DSRT, incorporating an anatomical standardization engine transplanted from statistical parametric mapping 99 and ROIs for quantification based on 3DSRT-t. Three-dimensional T2-weighted magnetic resonance images of 10 patients with localized infarcted areas were compared with the ROI contour of 3DSRT, and the positions of the central sulcus in the primary sensorimotor area were also estimated. All positions of the 20 lesions were in strict accordance with the ROI delineation of 3DSRT. The central sulcus was identified on at least one side of 210 paired ROIs and in the middle of 192 (91.4%) of these 210 paired ROIs among the 273 paired ROIs of the primary sensorimotor area. The central sulcus was recognized in the middle of more than 71.4% of the ROIs in which the central sulcus was identifiable in the respective 28 slices of the primary sensorimotor area. Fully automated accurate ROI delineation on anatomically standardized images is possible with 3DSRT, which enables objective quantification of rCBF and vascular reserve in only a few minutes using 99mTc-ECD SPECT images obtained by the resting and vascular reserve (RVR) method. (author)

  2. Improved quantification for local regions of interest in preclinical PET imaging

    Science.gov (United States)

    Cal-González, J.; Moore, S. C.; Park, M.-A.; Herraiz, J. L.; Vaquero, J. J.; Desco, M.; Udias, J. M.

    2015-09-01

    In Positron Emission Tomography, there are several causes of quantitative inaccuracy, such as partial volume or spillover effects. The impact of these effects is greater when using radionuclides that have a large positron range, e.g. 68Ga and 124I, which have been increasingly used in the clinic. We have implemented and evaluated a local projection algorithm (LPA), originally evaluated for SPECT, to compensate for both partial-volume and spillover effects in PET. This method is based on the use of a high-resolution CT or MR image, co-registered with a PET image, which permits a high-resolution segmentation of a few tissues within a volume of interest (VOI) centered on a region within which tissue-activity values need to be estimated. The additional boundary information is used to obtain improved activity estimates for each tissue within the VOI, by solving a simple inversion problem. We implemented this algorithm for the preclinical Argus PET/CT scanner and assessed its performance using the radionuclides 18F, 68Ga and 124I. We also evaluated and compared the results obtained when it was applied during the iterative reconstruction, as well as after the reconstruction as a postprocessing procedure. In addition, we studied how LPA can help to reduce the ‘spillover contamination’, which causes inaccurate quantification of lesions in the immediate neighborhood of large, ‘hot’ sources. Quantification was significantly improved by using LPA, which provided more accurate ratios of lesion-to-background activity concentration for hot and cold regions. For 18F, the contrast was improved from 3.0 to 4.0 in hot lesions (when the true ratio was 4.0) and from 0.16 to 0.06 in cold lesions (true ratio  =  0.0), when using the LPA postprocessing. Furthermore, activity values estimated within the VOI using LPA during reconstruction were slightly more accurate than those obtained by post-processing, while also visually improving the image contrast and uniformity
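
    The "simple inversion problem" at the heart of LPA can be sketched as a small least-squares system: each tissue segmented on the CT/MR image is assumed to carry a constant activity, the tissue masks are blurred by the scanner response, and the measured VOI values are fitted as a linear mix. The psf_blur callable and the function names are illustrative assumptions, not the authors' implementation:

        import numpy as np

        def lpa_tissue_activities(pet_voi, tissue_masks, psf_blur):
            """Least-squares estimate of per-tissue activity inside a VOI.

            pet_voi     : measured PET values over the VOI (any shape)
            tissue_masks: list of boolean masks from the hi-res CT/MR segmentation
            psf_blur    : callable applying the scanner PSF to a volume (assumed)
            """
            # Each column: what a unit-activity tissue would look like in PET.
            A = np.stack([psf_blur(m.astype(float)).ravel() for m in tissue_masks],
                         axis=1)
            x, *_ = np.linalg.lstsq(A, np.asarray(pet_voi).ravel(), rcond=None)
            return x          # one activity estimate per segmented tissue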

  3. Automatic detection system for multiple region of interest registration to account for posture changes in head and neck radiotherapy

    Science.gov (United States)

    Mencarelli, A.; van Beek, S.; Zijp, L. J.; Rasch, C.; van Herk, M.; Sonke, J.-J.

    2014-04-01

    Despite immobilization of head and neck (H and N) cancer patients, considerable posture changes occur over the course of radiotherapy (RT). To account for the posture changes, we previously implemented a multiple regions of interest (mROIs) registration system tailored to the H and N region for image-guided RT correction strategies. This paper is focused on the automatic segmentation of the ROIs in the H and N region. We developed a fast and robust automatic detection system suitable for an online image-guided application and quantified its performance. The system was developed to segment nine high contrast structures from the planning CT, including the cervical vertebrae, mandible, hyoid, manubrium of sternum, larynx and occipital bone. It generates nine 3D rectangular-shaped ROIs and informs the user in case of ambiguities. Two observers evaluated the robustness of the segmentation on 188 H and N cancer patients. Bland-Altman analysis was applied to a sub-group of 50 patients to compare the registration results using only the automatically generated ROIs and those manually set by two independent experts. Finally the time performance and workload were evaluated. Automatic detection of individual anatomical ROIs had a success rate of 97%/53% with/without user notifications, respectively. Following the notifications, one or more structures were manually adjusted for 38% of the patients. The processing time was on average 5 s. The limits of agreement between the local registrations of manually and automatically set ROIs were within ±1.4 mm, except for the manubrium of sternum (-1.71 mm and 1.67 mm), and were similar to the limits of agreement between the two experts. The workload to place the nine ROIs was reduced from 141 s (±20 s) with the manual procedure to 59 s (±17 s) using the automatic method. An efficient detection system to segment multiple ROIs was developed for Cone-Beam CT image-guided applications in the H and N region and is clinically implemented in

  4. SU-F-J-19: Robust Region-Of-Interest (ROI) for Consistent Registration On Deteriorated Surface Images

    Energy Technology Data Exchange (ETDEWEB)

    Kang, H; Malin, M; Chmura, S; Hasan, Y; Al-Hallaq, H [The Department of Radiation and Cellular Oncology, The University of Chicago Medicine, Chicago, IL (United States)

    2016-06-15

    Purpose: For African-American patients receiving breast radiotherapy with a bolus, skin darkening can affect the surface visualization when using optical imaging for daily positioning and gating at deep-inspiration breath holds (DIBH). Our goal is to identify a region-of-interest (ROI) that is robust against deteriorating surface image quality due to skin darkening. Methods: We study four patients whose post-mastectomy surfaces are imaged daily with AlignRT (VisionRT, UK) for DIBH radiotherapy and whose surface image quality is degraded toward the end of treatment. To simulate the effects of skin darkening, surfaces from the first ten fractions of each patient are systematically degraded by 25–35%, 40–50% and 65–75% of the total area of the clinically used ROI-ipsilateral-chestwall. The degraded surfaces are registered to the reference surface in six degrees-of-freedom. To identify a robust ROI, three additional reference ROIs — ROI-chest+abdomen, ROI-bilateral-chest and ROI-extended-ipsilateral-chestwall are created and registered to the degraded surfaces. Differences in registration using these ROIs are compared to that using ROI-ipsilateral-chestwall. Results: For three patients, the deviations in the registrations to ROI-ipsilateral-chestwall are > 2.0, 3.1 and 7.9mm on average for 25–35%, 40–50% and 65–75% degraded surfaces, respectively. Rotational deviations reach 11.1° in pitch. For the last patient, registration is consistent to within 2.6mm even on the 65–75% degraded surfaces, possibly because the surface topography has more distinct features. For ROI-bilateral-chest and ROI-extended-ipsilateral-chest registrations deviate in a similar pattern. However, registration on ROI-chest+abdomen is robust to deteriorating image qualities to within 4.2mm for all four patients. Conclusion: Registration deviations using ROI-ipsilateral-chestwall can reach 9.8mm on the 40–50% degraded surfaces. Caution is required when using AlignRT for patients

  5. Value of the region of interest technique in the scintigraphic diagnosis of primary bone tumors

    International Nuclear Information System (INIS)

    Buell, U.; Keyl, W.; Meister, P.; Pfeifer, J.P.; Hartel, P.; Muenchen Univ.

    1981-01-01

    Employing the ROI technique, a ratio Q was obtained by relating the accumulation of 99mTc-MDP at the site of the bone lesion (n = 150) to that of contralateral non-involved osseous areas. Values of Q were correlated with the histologic tumor diagnosis, its dignity and frequency. Values of Q greater than 3.0 were found in 95% of all sarcomas, in 100% of the osteosarcomas, but in only 3.8% of all benign bone tumors. Values ranging from 1.0 to 1.2 were exclusively measured in benign tumors (e.g., in 52% of juvenile bone cysts and in 67% of non-ossifying fibromas). Since the threshold separating benign from malignant lesions at Q = 3.0 was blurred by tumorlike lesions, metastases and especially by Paget's disease, this method does not precisely predict dignity. However, this method may complement radiographic evaluation, with low values supporting the diagnosis of a benign lesion. The combined findings of radiography and these ratios gained by nuclear imaging may help determine the pathway of a patient through further diagnosis and treatment. (orig.)
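
    Computationally the ratio is trivial: mean counts in the lesion ROI divided by mean counts in the mirrored contralateral ROI, with the study's Q thresholds applied afterwards. A minimal sketch assuming numpy count arrays and a hypothetical function name:

        import numpy as np

        def uptake_ratio_q(lesion_counts, contralateral_counts):
            """Ratio Q of 99mTc-MDP uptake: lesion ROI vs contralateral bone ROI."""
            return float(np.mean(lesion_counts) / np.mean(contralateral_counts))

        # Interpretation per the cited series: Q > 3.0 was seen in 95% of
        # sarcomas, while Q between 1.0 and 1.2 occurred only in benign tumors.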

  6. Photogrammetric Documentation of Regions of Interest at Autopsy—A Pilot Study

    DEFF Research Database (Denmark)

    Slot, Liselott Kristina; Larsen, Peter Kastmand; Lynnerup, Niels

    2014-01-01

    In this pilot study, the authors tested whether photogrammetry can replace or supplement physical measurements made during autopsies and, based on such measurements, whether virtual computer models may be applicable in forensic reconstructions. Photogrammetric and physical measurements of markers denoting wounds on five volunteers were compared. Virtual models of the volunteers were made, and the precision of the markers' locations on the models was tested. Twelve of 13 mean differences between photogrammetric and physical measurements were below 1 cm, which indicates that the photogrammetric...

  7. TH-A-18C-10: Dynamic Intensity Weighted Region of Interest Imaging

    International Nuclear Information System (INIS)

    Pearson, E; Pan, X; Pelizzari, C

    2014-01-01

    Purpose: For image guidance tasks full image quality is not required throughout the entire image. With dynamic filtration of the kV imaging beam, the noise properties of the CT image can be locally controlled, providing a high quality image around the target volume and a lower quality surrounding region, while providing substantial dose sparing to the patient as well as reduced scatter fluence on the detector. Methods: A dynamic collimation device with 3 mm copper blades has been designed to mount in place of the bowtie filter on the On-Board Imager (Varian Medical Systems). The beam intensity is reduced by 95% behind the copper filters and the aperture is controlled dynamically to conformally illuminate a given ROI during a standard cone-beam CT scan. A data correction framework to account for the physical effects of the collimator prior to reconstruction was developed. Furthermore, to determine the dose savings and scatter reduction, a Monte Carlo model was built in BEAMnrc with specifics from the Varian Monte Carlo Data Package. The Monte Carlo model was validated with Gafchromic film. Results: The reconstructed image shows image quality comparable to a standard scan in the specified ROI, with higher noise and streaks in the outer region but still sufficient information for alignment to high contrast structures. The Monte Carlo modeling showed that the scatter-to-primary ratio was reduced from 1.26 for an unfiltered scan to 0.45 for an intensity weighted scan, suggesting that image quality may be improved in the inner ROI. Dose in the inner region was reduced 10–15% due to reduced scatter and by as much as 75% in the outer region. Conclusion: Dynamic intensity-weighted ROI imaging allows reduction of imaging dose to sensitive organs away from the target region while providing images that retain their utility for patient setup and procedure guidance. Funding was provided in part by Varian Medical Systems and NIH Grants 1RO1CA120540, T32EB002103, S10 RR021039 and P30 CA

  8. Segmentation of Multi-Isotope Imaging Mass Spectrometry Data for Semi-Automatic Detection of Regions of Interest

    Science.gov (United States)

    Poczatek, J. Collin; Turck, Christoph W.; Lechene, Claude

    2012-01-01

    Multi-isotope imaging mass spectrometry (MIMS) associates secondary ion mass spectrometry (SIMS) with detection of several atomic masses, the use of stable isotopes as labels, and affiliated quantitative image-analysis software. By associating image and measurement, MIMS allows one to obtain quantitative information about biological processes in sub-cellular domains. MIMS can be applied to a wide range of biomedical problems, in particular metabolism and cell fate [1], [2], [3]. In order to obtain morphologically pertinent data from MIMS images, we have to define regions of interest (ROIs). ROIs are drawn by hand, a tedious and time-consuming process. We have developed and successfully applied a support vector machine (SVM) for segmentation of MIMS images that allows fast, semi-automatic boundary detection of regions of interest. Using the SVM, high-quality ROIs (as compared to an expert's manual delineation) were obtained for 2 types of images derived from unrelated data sets. This automation simplifies, accelerates and improves the post-processing analysis of MIMS images. This approach has been integrated into “Open MIMS,” an ImageJ plugin for comprehensive analysis of MIMS images that is available online at http://www.nrims.hms.harvard.edu/NRIMS_ImageJ.php. PMID:22347386
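
    The general pattern of such an SVM-based segmenter is to train on pixels inside and outside expert-drawn ROIs and then classify every pixel of new images. A minimal scikit-learn sketch under those assumptions (the feature choice, kernel and function names are illustrative, not taken from Open MIMS):

        import numpy as np
        from sklearn.svm import SVC

        def train_roi_classifier(features, labels):
            """Train a per-pixel SVM separating ROI pixels from background.

            features: (n_pixels, n_channels) stack of MIMS mass-channel values
            labels  : 1 for pixels inside expert-drawn ROIs, 0 otherwise
            """
            clf = SVC(kernel="rbf", C=1.0, gamma="scale")
            clf.fit(features, labels)
            return clf

        def segment_image(clf, image_stack):
            """Classify every pixel of an (H, W, C) multi-channel MIMS image."""
            h, w, c = image_stack.shape
            return clf.predict(image_stack.reshape(-1, c)).reshape(h, w).astype(bool)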

  9. Optimization of Region of Interest Drawing for Quantitative Analysis: Differentiation Between Benign and Malignant Breast Lesions on Contrast-Enhanced Sonography.

    Science.gov (United States)

    Nakata, Norio; Ohta, Tomoyuki; Nishioka, Makiko; Takeyama, Hiroshi; Toriumi, Yasuo; Kato, Kumiko; Nogi, Hiroko; Kamio, Makiko; Fukuda, Kunihiko

    2015-11-01

    This study was performed to evaluate the diagnostic utility of quantitative analysis of benign and malignant breast lesions using contrast-enhanced sonography. Contrast-enhanced sonography using the perflubutane-based contrast agent Sonazoid (Daiichi Sankyo, Tokyo, Japan) was performed in 94 pathologically proven palpable breast mass lesions, which could be depicted with B-mode sonography. Quantitative analyses using the time-intensity curve on contrast-enhanced sonography were performed in 5 region of interest (ROI) types (manually traced ROI and circular ROIs of 5, 10, 15, and 20 mm in diameter). The peak signal intensity, initial slope, time to peak, positive enhancement integral, and wash-out ratio were investigated in each ROI. There were significant differences in the time to peak between benign and malignant lesions, suggesting that quantitative analysis of the time-intensity curve on contrast-enhanced sonography may be useful for differentiating benign and malignant breast lesions. © 2015 by the American Institute of Ultrasound in Medicine.

  10. A local region of interest image reconstruction via filtered backprojection for fan-beam differential phase-contrast computed tomography

    International Nuclear Information System (INIS)

    Qi Zhihua; Chen Guanghong

    2007-01-01

    Recently, x-ray differential phase contrast computed tomography (DPC-CT) has been experimentally implemented using a conventional source combined with several gratings. Images were reconstructed using a parallel-beam reconstruction formula. However, parallel-beam reconstruction formulae are not directly applicable for a large image object where the parallel-beam approximation fails. In this note, we present a new image reconstruction formula for fan-beam DPC-CT. There are two major features in this algorithm: (1) it enables the reconstruction of a local region of interest (ROI) using data acquired from an angular interval shorter than 180° + fan angle and (2) it still preserves the filtered backprojection structure. Numerical simulations have been conducted to validate the image reconstruction algorithm. (note)

  11. Development of intelligent surveillance system (ISS) in region of interest (ROI) using Kalman filter and camshift on Raspberry Pi 2

    Science.gov (United States)

    Park, Junghun; Hong, Kicheon

    2017-06-01

    Due to the improvement of the picture quality of closed-circuit television (CCTV), the demand for CCTV has increased rapidly and its market size has also increased. The current system structure of CCTV transfers compressed images without analysis received from CCTV to a control center. The compressed images are suitable for the evidence required for a criminal arrest, but they cannot prevent crime in real time, which has been considered a limitation. Thus, the present paper proposes a system implementation that can prevent crimes by applying a situation awareness system at the back end of the CCTV cameras for image acquisition to prevent crimes efficiently. In the system implemented in the present paper, the region of interest (ROI) is set virtually within the image data when a barrier, such as fence, cannot be installed in actual sites and unauthorized intruders are tracked constantly through data analysis and recognized in the ROI via the developed algorithm. Additionally, a searchlight or alarm sound is activated to prevent crime in real time and the urgent information is transferred to the control center. The system was implemented in the Raspberry Pi 2 board to be run in real time. The experiment results showed that the recognition success rate was 85% or higher and the track accuracy was 90% or higher. By utilizing the system, crime prevention can be achieved by implementing a social safety network.
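
    As a rough illustration of the ROI-based tracking stage, the OpenCV CamShift primitive can follow a moving object inside a virtual region once an initial window is available; in the cited system that window would come from the detection/Kalman stage, while in this sketch it is simply passed in by the caller:

        import cv2

        def track_in_roi(video_path, init_window):
            """Follow an object with CamShift, starting from init_window (x, y, w, h)."""
            cap = cv2.VideoCapture(video_path)
            ok, frame = cap.read()
            x, y, w, h = init_window
            hsv_roi = cv2.cvtColor(frame[y:y + h, x:x + w], cv2.COLOR_BGR2HSV)
            hist = cv2.calcHist([hsv_roi], [0], None, [180], [0, 180])
            cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)
            term = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)
            window = init_window
            while True:
                ok, frame = cap.read()
                if not ok:
                    break
                hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
                back_proj = cv2.calcBackProject([hsv], [0], hist, [0, 180], 1)
                rot_rect, window = cv2.CamShift(back_proj, window, term)
                # window now holds the tracked ROI; an alarm could fire here
                # whenever it overlaps the virtual restricted region.
            cap.release()
            return window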

  12. Delayed Gadolinium-Enhanced MRI of Cartilage (dGEMRIC): Intra- and Interobserver Variability in Standardized Drawing of Regions of Interest

    International Nuclear Information System (INIS)

    Tiderius, C.J.; Tjoernstrand, J.; Aakeson, P.; Soedersten, K.; Dahlberg, L.; Leander, P.

    2004-01-01

    Purpose: To establish the reproducibility of a standardized region of interest (ROI) drawing procedure in delayed gadolinium-enhanced magnetic resonance imaging (MRI) of cartilage (dGEMRIC). Material and Methods: A large ROI in lateral and medial femoral weight-bearing cartilage was drawn in images of 12 healthy male volunteers by 6 investigators with different skills in MRI. The procedure was done twice, with a 1-week interval. Calculated T1-values were evaluated for intra- and interobserver variability. Results: The mean interobserver variability for both compartments ranged between 1.3% and 2.3% for the 6 different investigators without correlation to their experience in MRI. Post-contrast intra-observer variability was low in both the lateral and the medial femoral cartilage, 2.6% and 1.5%, respectively. The larger variability in lateral than in medial cartilage was related to slightly longer and thinner ROIs. Conclusion: Intra-observer variability and interobserver variability are both low when a large standardized ROI is used in dGEMRIC. The experience of the investigator does not affect the variability, which further supports a clinical applicability of the method

  13. Precision and accuracy in CT attenuation measurement of vascular wall using region-of-interest supported by differentiation curve

    International Nuclear Information System (INIS)

    Suzuki, Shigeru; Kidouchi, Takashi; Kuwahara, Sadatoshi; Vembar, Mani; Takei, Ryoji; Yamamoto, Asako

    2012-01-01

    Objectives: To evaluate the precision and accuracy in CT attenuation measurement of vascular wall using region-of-interest (ROI) supported by differentiation curves. Study design: We used vascular models (actual attenuation value of the wall: 87 HU) with wall thicknesses of 1.5, 1.0, or 0.5 mm, filled with contrast material of 250, 348, or 436 HU. The nine vascular models were scanned with a 64-detector CT. The wall attenuation values were measured using three sizes (diameter: 0.5, 1.0, and 1.5 mm) of ROIs without differentiation curves. Sixteen measurements were repeated for each vascular model by each of two operators. Measurements supported by differentiation curves were also performed. We used analyses of variance with repeated measures for the measured attenuations for each size of the ROI. Results: Without differentiation curves, there were significant differences in the attenuation values of the wall among the three densities of contrast material, and the attenuation values tended to be overestimated more as the contrast material density increased. Operator dependencies were also found in measurements for 0.5- and 1.5-mm thickness models. With differentiation curves, measurements were not possible for 0.5- and 1.0-mm thickness models. Using differentiation curves for 1.5-mm thickness models with a ROI of 1.0- or 1.5-mm diameter, the wall attenuations were not affected by the contrast material densities and were operator independent, measuring between 75 and 103 HU. Conclusions: The use of differentiation curves can improve the precision and accuracy in wall attenuation measurement using a ROI technique, while measurements for walls of ≤1.0 mm thickness are difficult.

  14. Definition and visualisation of regions of interest in post-prostatectomy image-guided intensity modulated radiotherapy

    Energy Technology Data Exchange (ETDEWEB)

    Bell, Linda J, E-mail: linda.bell1@health.nsw.gov.au; Cox, Jennifer [Radiation Oncology Department, Northern Sydney Cancer Centre, Royal North Shore Hospital, St Leonards, New South Wales (Australia); Faculty of Health Sciences, University of Sydney, Lidcombe, New South Wales (Australia); Eade, Thomas; Rinks, Marianne; Kneebone, Andrew [Radiation Oncology Department, Northern Sydney Cancer Centre, Royal North Shore Hospital, St Leonards, New South Wales (Australia)

    2014-09-15

    Standard post-prostatectomy radiotherapy (PPRT) image verification uses bony anatomy alignment. However, the prostate bed (PB) moves independently of bony anatomy. Cone beam computed tomography (CBCT) can be used to soft tissue match, so radiation therapists (RTs) must understand pelvic anatomy and PPRT clinical target volumes (CTV). The aims of this study are to define regions of interest (ROI) to be used in soft tissue matching image guidance and determine their visibility on planning CT (PCT) and CBCT. Published CTV guidelines were used to select ROIs. The PCT scans (n = 23) and CBCT scans (n = 105) of 23 post-prostatectomy patients were reviewed. Details on ROI identification were recorded. Eighteen patients had surgical clips. All ROIs were identified on PCTs at least 90% of the time apart from mesorectal fascia (MF) (87%) due to superior image quality. When surgical clips are present, the seminal vesicle bed (SVB) was only seen in 2.3% of CBCTs and MF was unidentifiable. Most other structures were well identified on CBCT. The anterior rectal wall (ARW) was identified in 81.4% of images and penile bulb (PB) in 68.6%. In the absence of surgical clips, the MF and SVB were always identified; the ARW was identified in 89.5% of CBCTs and PB in 73.7%. Surgical clips should be used as ROIs when present to define SVB and MF. In the absence of clips, SVB, MF and ARW can be used. RTs must have a strong knowledge of soft tissue anatomy and PPRT CTV to ensure coverage and enable soft tissue matching.

  15. Definition and visualisation of regions of interest in post-prostatectomy image-guided intensity modulated radiotherapy

    International Nuclear Information System (INIS)

    Bell, Linda J; Cox, Jennifer; Eade, Thomas; Rinks, Marianne; Kneebone, Andrew

    2014-01-01

    Standard post-prostatectomy radiotherapy (PPRT) image verification uses bony anatomy alignment. However, the prostate bed (PB) moves independently of bony anatomy. Cone beam computed tomography (CBCT) can be used to soft tissue match, so radiation therapists (RTs) must understand pelvic anatomy and PPRT clinical target volumes (CTV). The aims of this study are to define regions of interest (ROI) to be used in soft tissue matching image guidance and determine their visibility on planning CT (PCT) and CBCT. Published CTV guidelines were used to select ROIs. The PCT scans (n = 23) and CBCT scans (n = 105) of 23 post-prostatectomy patients were reviewed. Details on ROI identification were recorded. Eighteen patients had surgical clips. All ROIs were identified on PCTs at least 90% of the time apart from mesorectal fascia (MF) (87%) due to superior image quality. When surgical clips are present, the seminal vesicle bed (SVB) was only seen in 2.3% of CBCTs and MF was unidentifiable. Most other structures were well identified on CBCT. The anterior rectal wall (ARW) was identified in 81.4% of images and penile bulb (PB) in 68.6%. In the absence of surgical clips, the MF and SVB were always identified; the ARW was identified in 89.5% of CBCTs and PB in 73.7%. Surgical clips should be used as ROIs when present to define SVB and MF. In the absence of clips, SVB, MF and ARW can be used. RTs must have a strong knowledge of soft tissue anatomy and PPRT CTV to ensure coverage and enable soft tissue matching

  16. Multi-site study of diffusion metric variability: effects of site, vendor, field strength, and echo time on regions-of-interest and histogram-bin analyses.

    Science.gov (United States)

    Helmer, K G; Chou, M-C; Preciado, R I; Gimi, B; Rollins, N K; Song, A; Turner, J; Mori, S

    2016-02-27

    It is now common for magnetic-resonance-imaging (MRI) based multi-site trials to include diffusion-weighted imaging (DWI) as part of the protocol. It is also common for these sites to possess MR scanners of different manufacturers, different software and hardware, and different software licenses. These differences mean that scanners may not be able to acquire data with the same number of gradient amplitude values and number of available gradient directions. Variability can also occur in achievable b-values and minimum echo times. The challenge of a multi-site study then, is to create a common protocol by understanding and then minimizing the effects of scanner variability and identifying reliable and accurate diffusion metrics. This study describes the effect of site, scanner vendor, field strength, and TE on two diffusion metrics: the first moment of the diffusion tensor field (mean diffusivity, MD), and the fractional anisotropy (FA) using two common analyses (region-of-interest and mean-bin value of whole brain histograms). The goal of the study was to identify sources of variability in diffusion-sensitized imaging and their influence on commonly reported metrics. The results demonstrate that the site, vendor, field strength, and echo time all contribute to variability in FA and MD, though to different extents. We conclude that characterization of the variability of DTI metrics due to site, vendor, field strength, and echo time is a worthwhile step in the construction of multi-center trials.
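
    For reference, both metrics follow directly from the diffusion-tensor eigenvalues; a short sketch of the standard formulas:

        import numpy as np

        def md_fa(eigvals):
            """Mean diffusivity and fractional anisotropy per voxel.

            eigvals: array (..., 3) of diffusion-tensor eigenvalues.
            """
            md = eigvals.mean(axis=-1)
            num = np.sqrt(((eigvals - md[..., None]) ** 2).sum(axis=-1))
            den = np.sqrt((eigvals ** 2).sum(axis=-1))     # assumed nonzero
            fa = np.sqrt(1.5) * num / den
            return md, fa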

  17. Region-of-interest analyses of one-dimensional biomechanical trajectories: bridging 0D and 1D theory, augmenting statistical power

    Directory of Open Access Journals (Sweden)

    Todd C. Pataky

    2016-11-01

    Full Text Available One-dimensional (1D) kinematic, force, and EMG trajectories are often analyzed using zero-dimensional (0D) metrics like local extrema. Recently whole-trajectory 1D methods have emerged in the literature as alternatives. Since 0D and 1D methods can yield qualitatively different results, the two approaches may appear to be theoretically distinct. The purposes of this paper were (a) to clarify that 0D and 1D approaches are actually just special cases of a more general region-of-interest (ROI) analysis framework, and (b) to demonstrate how ROIs can augment statistical power. We first simulated millions of smooth, random 1D datasets to validate theoretical predictions of the 0D, 1D and ROI approaches and to emphasize how ROIs provide a continuous bridge between 0D and 1D results. We then analyzed a variety of public datasets to demonstrate potential effects of ROIs on biomechanical conclusions. Results showed, first, that a priori ROI particulars can qualitatively affect the biomechanical conclusions that emerge from analyses and, second, that ROIs derived from exploratory/pilot analyses can detect smaller biomechanical effects than are detectable using full 1D methods. We recommend regarding ROIs, like data filtering particulars and Type I error rate, as parameters which can affect hypothesis testing results, and thus as sensitivity analysis tools to ensure arbitrary decisions do not influence scientific interpretations. Last, we describe open-source Python and MATLAB implementations of 1D ROI analysis for arbitrary experimental designs ranging from one-sample t tests to MANOVA.
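
    The idea of an ROI as a bridge between 0D and 1D testing can be shown in a few lines: averaging each trajectory over a time window reduces it to a scalar, which is then tested conventionally. A minimal sketch with hypothetical variable names (the paper describes its own open-source implementations, which this does not reproduce):

        import numpy as np
        from scipy import stats

        def roi_t_test(traj_a, traj_b, roi=slice(40, 60)):
            """0D-style test on an ROI of 1D trajectories.

            traj_a, traj_b: (n_subjects, n_nodes) trajectories for two groups
            roi           : slice over the time axis defining the region
            """
            a = traj_a[:, roi].mean(axis=1)    # collapse ROI to one scalar each
            b = traj_b[:, roi].mean(axis=1)
            return stats.ttest_ind(a, b)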

  18. Comprehensive Investigation of White Matter Tracts in Professional Chess Players and Relation to Expertise: Region of Interest and DMRI Connectometry.

    Science.gov (United States)

    Mayeli, Mahsa; Rahmani, Farzaneh; Aarabi, Mohammad Hadi

    2018-01-01

    Purpose: Expertise is the product of training. Few studies have used functional connectivity or conventional diffusometric methods to identify neural underpinnings of chess expertise. Diffusometric variables of white matter might reflect these adaptive changes, along with changes in structural connectivity, which is a sensitive measure of microstructural changes. Method: Diffusometric variables of 29 professional chess players and 29 age-sex matched controls were extracted for white matter regions based on John Hopkin's Mori white matter atlas and partially correlated against professional training time and level of chess proficiency. Diffusion MRI connectometry was implemented to identify changes in structural connectivity in professional players compared to novices. Result: Compared to novices, higher planar anisotropy (CP) was observed in inferior longitudinal fasciculus (ILF), superior longitudinal fasciculus (SLF) and cingulate gyrus, in professional chess players, which correlated with higher RPM score in this group. Higher fractional anisotropy (FA) was observed in ILF, uncinate fasciculus (UF) and hippocampus and correlated with better scores in Raven's progressive matrices (RPM) score and longer duration of chess training in professional players. Consistently, radial diffusivity in bilateral IFOF, bilateral ILF and bilateral SLF was inversely correlated with level of training in professional players. DMRI connectometry analysis identified increased connectivity in bilateral UF, bilateral IFOF, bilateral cingulum, and corpus callosum in chess player's compared to controls. Conclusion: Structural connectivity of major associational subcortical white matter fibers are increased in professional chess players. FA and CP of ILF, SLF and UF directly correlates with duration of professional training and RPM score, in professional chess players.

  19. Diagnosis of angiomyolipoma using computed tomography-region of interest ≤-10 HU or 4 adjacent pixels ≤-10 HU are recommended as the diagnostic thresholds

    International Nuclear Information System (INIS)

    Simpson, E.; Patel, U.

    2006-01-01

    AIM: To study and compare the diagnostic accuracy of region of interest (ROI) density measurement and pixel mapping [computed tomography (CT) density of individual pixels] for the diagnosis of renal angiomyolipoma (AML) using CT. MATERIALS AND METHODS: A study group of histologically proven AMLs was compared with a control group of histologically proven renal cell cancers, normal renal parenchyma, and simple renal cysts. The mean tissue density (ROI circle) and a pixel density map were recorded. The diagnostic accuracy of various thresholds of ROI and pixel mapping values was compared using receiver operating characteristic curves. RESULTS: Twenty-two AMLs, 16 renal cell carcinomas (RCCs), 30 simple cysts, and 30 sites of renal parenchyma were evaluated. The mean (±1 SD) density of the AMLs was significantly lower [-15.2(20.8) units] than the three control groups [+36.0(8.1) units, +5.4(3.4) units and +22.2(46.5) units for RCC, renal cyst and parenchyma respectively; p<0.001 (analysis of variance)]. The sensitivities and specificities of the ROI diagnostic thresholds of ≤0 units, ≤-10 units and ≤-20 units were 77 and 97%, 73 and 100% and 50 and 100%, respectively. Using pixel mapping [diagnostic thresholds of either a line of 4 pixels ≤-10 units or a square of 4 pixels ≤-10 units] the sensitivity improves to 86% with a specificity of 97%. CONCLUSION: Although an ROI threshold value of ≤-10 units has a very high specificity (100% in the present study), the sensitivity is modest at only 73%. Pixel mapping is more sensitive for recognizing small clusters of fat. In practice, both methods can be recommended for the analysis of suspected AMLs. ROI density measurement is convenient when analysing large areas of suspected fat and ≤-10 units should be used as the diagnostic threshold. When faced with small lucent areas or indeterminate values after ROI analysis, pixel mapping is recommended using a line of 4 pixels ≤-10 units or a square of 4 pixels ≤-10 units.
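
    The pixel-mapping criterion is straightforward to implement: flag fat when any horizontal or vertical line of 4 pixels, or any 2x2 square, lies entirely at or below -10 HU. A minimal numpy sketch, with a hypothetical function name:

        import numpy as np

        def aml_pixel_map_positive(hu_roi, thresh=-10):
            """Apply the '4 adjacent pixels <= -10 HU' diagnostic criterion.

            hu_roi: 2D array of CT attenuation values (HU) over the lesion.
            Returns True if a line of 4 pixels (horizontal or vertical) or a
            2x2 square of pixels all lie at or below the threshold.
            """
            fat = hu_roi <= thresh
            line_h = (fat[:, :-3] & fat[:, 1:-2] & fat[:, 2:-1] & fat[:, 3:]).any()
            line_v = (fat[:-3, :] & fat[1:-2, :] & fat[2:-1, :] & fat[3:, :]).any()
            square = (fat[:-1, :-1] & fat[1:, :-1] & fat[:-1, 1:] & fat[1:, 1:]).any()
            return bool(line_h or line_v or square)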

  20. Comparison of features response in texture-based iris segmentation

    CSIR Research Space (South Africa)

    Bachoo, A

    2009-03-01

    Full Text Available the Fisher linear discriminant and the iris region of interest is extracted. Four texture description methods are compared for segmenting iris texture using a region based pattern classification approach: Grey Level Co-occurrence Matrix (GLCM), Discrete...
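
    Of the texture descriptors named here, the GLCM is the most common; a short sketch of extracting a few GLCM features for a region-based classifier, using the current scikit-image names (graycomatrix/graycoprops; the distances, angles and chosen properties are illustrative):

        import numpy as np
        from skimage.feature import graycomatrix, graycoprops

        def glcm_features(patch):
            """GLCM contrast, homogeneity and energy for one grey-level patch.

            patch: 2D uint8 image block; features like these can feed a
            region-based classifier such as a Fisher linear discriminant.
            """
            glcm = graycomatrix(patch, distances=[1], angles=[0, np.pi / 2],
                                levels=256, symmetric=True, normed=True)
            return np.array([graycoprops(glcm, p).mean()
                             for p in ("contrast", "homogeneity", "energy")])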

  1. SU-E-CAMPUS-I-04: Automatic Skin-Dose Mapping for An Angiographic System with a Region-Of-Interest, High-Resolution Detector

    Energy Technology Data Exchange (ETDEWEB)

    Vijayan, S; Rana, V [Department of Physiology and Biophysics, Toshiba Stroke and Vascular Research Center (United States); Setlur Nagesh, S [Toshiba Stroke and Vascular Research Center (United States); Ionita, C [Department of Biomedical Engineering, University at Buffalo (State University of New York), Buffalo, NY (United States); Rudin, S [Department of Radiology, Department of Physiology and Biophysics, Toshiba Stroke and Vascular Research Center, Department of Biomedical Engineering, University at Buffalo (State University of New York), Buffalo, NY (United States); Bednarek, D [Department of Radiology, Department of Physiology and Biophysics, Toshiba Stroke and Vascular Research Center (United States)

    2014-06-15

    Purpose: Our real-time skin dose tracking system (DTS) has been upgraded to monitor dose for the micro-angiographic fluoroscope (MAF), a high-resolution, small field-of-view x-ray detector. Methods: The MAF has been mounted on a changer on a clinical C-Arm gantry so it can be used interchangeably with the standard flat-panel detector (FPD) during neuro-interventional procedures when high resolution is needed in a region-of-interest. To monitor patient skin dose when using the MAF, our DTS has been modified to automatically account for the change in scatter for the very small MAF FOV and to provide separated dose distributions for each detector. The DTS is able to provide a color-coded mapping of the cumulative skin dose on a 3D graphic model of the patient. To determine the correct entrance skin exposure to be applied by the DTS, a correction factor was determined by measuring the exposure at the entrance surface of a skull phantom with an ionization chamber as a function of entrance beam size for various beam filters and kVps. Entrance exposure measurements included primary radiation, patient backscatter and table forward scatter. To allow separation of the dose from each detector, a parameter log is kept that allows a replay of the procedure exposure events and recalculation of the dose components. The graphic display can then be constructed showing the dose distribution from the MAF and FPD separately or together. Results: The DTS is able to provide separate displays of dose for the MAF and FPD with field-size specific scatter corrections. These measured corrections change from about 49% down to 10% when changing from the FPD to the MAF. Conclusion: The upgraded DTS allows identification of the patient skin dose delivered when using each detector in order to achieve improved dose management as well as to facilitate peak skin-dose reduction through dose spreading. Research supported in part by Toshiba Medical Systems Corporation and NIH Grants R43FD0158401, R44FD
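    The field-size-specific correction described above lends itself to a simple lookup-and-scale step. The sketch below is a hypothetical outline, not the DTS implementation; the calibration table, event-log format and all numbers are invented for illustration (the factors merely echo the reported ~49% to ~10% scatter range).

```python
import numpy as np

# Hypothetical calibration table: total correction factor (primary plus
# backscatter and table forward scatter) vs. entrance beam size in cm.
FIELD_SIZE_CM = np.array([2.0, 5.0, 10.0, 20.0])   # MAF-like to FPD-like
CORRECTION    = np.array([1.10, 1.18, 1.35, 1.49])

def entrance_dose(free_in_air_mgy, field_size_cm):
    """Scale free-in-air exposure by an interpolated field-size factor."""
    return free_in_air_mgy * np.interp(field_size_cm, FIELD_SIZE_CM, CORRECTION)

# Replay a (hypothetical) parameter log and accumulate dose per detector.
cumulative = {"MAF": 0.0, "FPD": 0.0}
for event in [{"det": "FPD", "mGy": 1.2, "fs": 20.0},
              {"det": "MAF", "mGy": 0.8, "fs": 3.5}]:
    cumulative[event["det"]] += entrance_dose(event["mGy"], event["fs"])
```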

  2. A theoretical and experimental evaluation of the microangiographic fluoroscope: A high-resolution region-of-interest x-ray imager

    International Nuclear Information System (INIS)

    Jain, Amit; Bednarek, D. R.; Ionita, Ciprian; Rudin, S.

    2011-01-01

    Purpose: The increasing need for better image quality and high spatial resolution for successful endovascular image-guided interventions (EIGIs) and the inherent limitations of the state-of-the-art detectors provide motivation to develop a detector system tailored to the specific, demanding requirements of neurointerventional applications. Method: A microangiographic fluoroscope (MAF) was developed to serve as a high-resolution, region-of-interest (ROI) x-ray imaging detector in conjunction with large lower-resolution full field-of-view (FOV) state-of-the-art x-ray detectors. The newly developed MAF is an indirect x-ray imaging detector capable of providing real-time images (30 frames per second) with high resolution, high sensitivity, no lag and low instrumentation noise. It consists of a CCD camera coupled to a Gen 2 dual-stage microchannel plate light image intensifier (LII) through a fiber-optic taper. A 300 μm thick CsI(Tl) phosphor serving as the front end is coupled to the LII. The LII is the key component of the MAF, and the large variable gain it provides enables the MAF to operate as a quantum-noise-limited detector for both fluoroscopy and angiography. Results: The linear cascade model was used to predict the theoretical performance of the MAF, and the theoretical prediction showed close agreement with experimental findings. Linear system metrics such as MTF and DQE were used to gauge the detector performance up to 10 cycles/mm. The measured zero-frequency DQE(0) was 0.55 for an RQA5 spectrum. A total of 21 stages were identified for the whole imaging chain and each stage was characterized individually. Conclusions: The linear cascade model analysis provides insight into the imaging chain and may be useful for further development of the MAF detector. The preclinical testing of the prototype detector in animal procedures is showing encouraging results and points to the potential for significant impact on EIGIs when used in conjunction with a state-of-the-art detector.
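    To make the linear cascade idea concrete, here is a minimal zero-frequency sketch (not the authors' 21-stage model): signal mean and variance are propagated through serial gain stages with the Burgess variance theorem, and DQE(0) is the ratio of output to input squared SNR. The stage gains and gain variances below are invented placeholders.

```python
def propagate_gain_stage(mean_in, var_in, g, var_g):
    """Burgess variance theorem for a stage with mean gain g, variance var_g."""
    mean_out = g * mean_in
    var_out = g**2 * var_in + var_g * mean_in
    return mean_out, var_out

q = 1000.0                 # incident quanta per unit area
mean, var = q, q           # Poisson input: variance = mean
# Hypothetical stages (mean gain, gain variance): phosphor absorption
# (binomial), optical conversion, MCP-LII gain, fiber-optic/CCD coupling.
stages = [(0.8, 0.8 * 0.2), (55.0, 55.0), (3000.0, 3000.0**2), (0.1, 0.1 * 0.9)]
for g, vg in stages:
    mean, var = propagate_gain_stage(mean, var, g, vg)

dqe0 = (mean**2 / var) / (q**2 / q)   # SNR_out^2 / SNR_in^2
print(round(dqe0, 3))
```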

  3. Video digitizer (real time-frame grabber) with region of interest suitable for quantitative data analysis used on the infrared and H alpha cameras installed on the DIII-D experiment

    International Nuclear Information System (INIS)

    Ferguson, S.W.; Kevan, D.K.; Hill, D.N.; Allen, S.L.

    1987-01-01

    This paper describes a CAMAC based video digitizer with region of interest (ROI) capability that was designed for use with the infrared and H alpha cameras installed by Lawrence Livermore Laboratory on the DIII-D experiment at G.A. Technologies in San Diego, California. The video digitizer uses a custom built CAMAC video synchronizer module to clock data into a CAMAC transient recorder on a line-by-line basis starting at the beginning of a field. The number of fields that are recorded is limited only by the available transient recorder memory. In order to conserve memory, the CAMAC video synchronizer module provides for the alternative selection of a specific region of interest in each successive field to be recorded. Memory conservation can be optimized by specifying lines in the field, start time, stop time, and the number of data samples per line. This video frame grabber has proved versatile for capturing video in such diverse applications as recording video fields from a video tape recorder played in slow motion or recording video fields in real time during a DIII-D shot. In other cases, one or more lines of video are recorded per frame to give a cross sectional slice of the plasma. Since all the data in the digitizer memory is synchronized to video fields and lines, the data can be read directly into the control computer in the proper matrix format to facilitate rapid processing, display, and permanent storage.
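    The memory-conservation argument is simple arithmetic; the sketch below works through hypothetical numbers (the field rate, line and sample counts are illustrative, not DIII-D settings) to show how restricting capture to an ROI stretches a fixed transient-recorder memory.

```python
# Hypothetical acquisition: full field vs. region of interest.
fields_per_s = 60        # interlaced video field rate
lines_full, samples_full = 240, 512
lines_roi, samples_roi = 32, 256   # only the lines covering the plasma slice

full = fields_per_s * lines_full * samples_full   # samples/s, full field
roi = fields_per_s * lines_roi * samples_roi      # samples/s, ROI only
print(full, roi, full / roi)  # same memory holds ~15x more recording time
```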

  4. Spine segmentation from C-arm CT data sets: application to region-of-interest volumes for spinal interventions

    Science.gov (United States)

    Buerger, C.; Lorenz, C.; Babic, D.; Hoppenbrouwers, J.; Homan, R.; Nachabe, R.; Racadio, J. M.; Grass, M.

    2017-03-01

    Spinal fusion is a common procedure to stabilize the spinal column by fixating parts of the spine. In such procedures, metal screws are inserted through the patient's back into a vertebra, and the screws of adjacent vertebrae are connected by metal rods to generate a fixed bridge. In these procedures, 3D image guidance for intervention planning and outcome control is required. Here, for anatomical guidance, an automated approach for vertebra segmentation from C-arm CT images of the spine is introduced and evaluated. As a prerequisite, 3D C-arm CT images are acquired covering the vertebrae of interest. An automatic model-based segmentation approach is applied to delineate the outline of the vertebrae of interest. The segmentation approach is based on 24 partial models of the cervical, thoracic and lumbar vertebrae which aggregate information about (i) the basic shape itself, (ii) trained features for image-based adaptation, and (iii) potential shape variations. Since the volume data sets generated by the C-arm system are limited to a certain region of the spine, the target vertebra, and hence the initial model position, is assigned interactively. The approach was trained and tested on 21 human cadaver scans. A 3-fold cross validation against ground truth annotations yields overall mean segmentation errors of 0.5 mm for T1 to 1.1 mm for C6. The results are promising and show potential to support the clinician in pedicle screw path and rod planning to allow accurate and reproducible insertions.

  5. Downscaling of coarse resolution LAI products to achieve both high spatial and temporal resolution for regions of interest

    KAUST Repository

    Houborg, Rasmus; McCabe, Matthew; Gao, Feng

    2015-01-01

    This paper presents a flexible tool for spatio-temporal enhancement of coarse resolution leaf area index (LAI) products, which is readily adaptable to different land cover types, landscape heterogeneities and cloud cover conditions. The framework integrates a rule-based regression tree approach for estimating Landsat-scale LAI from existing 1 km resolution LAI products, and the Spatial and Temporal Adaptive Reflectance Fusion Model (STARFM) to intelligently interpolate the downscaled LAI between Landsat acquisitions. Comparisons against in-situ records of LAI measured over corn and soybean highlight its utility for resolving sub-field LAI dynamics occurring over a range of plant development stages.
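    As a conceptual sketch of the downscaling step (the paper's rule-based regression tree is Cubist-style; a scikit-learn decision tree stands in here), coarse-scale LAI is regressed on reflectance predictors aggregated to the coarse grid, and the fitted rules are then applied at Landsat scale. All arrays are synthetic placeholders.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
# Hypothetical inputs: Landsat reflectance aggregated to the 1 km grid
# (X_coarse) paired with the 1 km LAI product (y_coarse), and the native
# 30 m reflectance (X_fine) at which LAI is to be predicted.
X_coarse = rng.random((500, 4))                       # e.g. red, NIR, SWIR1/2
y_coarse = 6 * X_coarse[:, 1] - 2 * X_coarse[:, 0]    # synthetic LAI signal
X_fine = rng.random((10000, 4))

tree = DecisionTreeRegressor(max_depth=6).fit(X_coarse, y_coarse)
lai_fine = tree.predict(X_fine)   # one Landsat-scale LAI snapshot;
# STARFM-style fusion would then interpolate such snapshots in time.
```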

  6. Downscaling of coarse resolution LAI products to achieve both high spatial and temporal resolution for regions of interest

    KAUST Repository

    Houborg, Rasmus

    2015-11-12

    This paper presents a flexible tool for spatio-temporal enhancement of coarse resolution leaf area index (LAI) products, which is readily adaptable to different land cover types, landscape heterogeneities and cloud cover conditions. The framework integrates a rule-based regression tree approach for estimating Landsat-scale LAI from existing 1 km resolution LAI products, and the Spatial and Temporal Adaptive Reflectance Fusion Model (STARFM) to intelligently interpolate the downscaled LAI between Landsat acquisitions. Comparisons against in-situ records of LAI measured over corn and soybean highlight its utility for resolving sub-field LAI dynamics occurring over a range of plant development stages.

  7. In situ genomic DNA extraction for PCR analysis of regions of interest in four plant species and one filamentous fungus

    Directory of Open Access Journals (Sweden)

    Luis E. Rojas

    2014-07-01

    Full Text Available The extraction methods of genomic DNA are usually laborious and hazardous to human health and the environment owing to the use of organic solvents (chloroform and phenol). In this work, a protocol for in situ extraction of genomic DNA by alkaline lysis is validated. It was used to amplify DNA regions of interest in four plant species and one filamentous fungus by polymerase chain reaction (PCR). From plant material of Saccharum officinarum L., Carica papaya L. and Digitalis purpurea L. it was possible to amplify different regions of the genome by PCR. Furthermore, it was possible to amplify a fragment of the avr-4 gene from DNA purified from lyophilized mycelium of Mycosphaerella fijiensis. Additionally, it was possible to amplify the region of the ap24 transgene inserted into the genome of banana cv. 'Grande naine' (Musa AAA). Key words: alkaline lysis, Carica papaya L., Digitalis purpurea L., Musa, Saccharum officinarum L.

  8. Selection of the regions of interest (SRI) in the SPECT semi-quantitative analysis of central dopaminergic receptors

    International Nuclear Information System (INIS)

    Baulieu, J.L.; Prunier-Levilion, C.; Tranquart, F.; Ribeiro, M.J.; Chartier, J.R.; Guilloteau, D.; Autret, A.; Besnard, J.C.; Bekhechi, D.; Chossat, F.

    1997-01-01

    The aim of this work was to compare different types of SRIs used in the SPECT semi-quantitative analysis of central dopaminergic receptors. SPECT with 123I-iodolisuride (Cis bio international) was carried out in the same centre with a Helix-Elscint double-head camera with fan-beam collimation, one hour after injection of 123I-iodolisuride (190 ± 31 MBq). In 8 patients with Parkinson's disease (group 1) and 9 patients presenting an extrapyramidal syndrome with striatal involvement (group 2), two approaches to SRI tracing were undertaken: 1. geometrical, standard SRIs (circles, ellipses, rectangles); 2. anatomical, individual SRIs based on CT and perfusion scintigraphy. The SRIs were placed on the entire striatum, the head of the caudate nucleus, the putamen, the thalamus, the frontal and occipital cortex, and the cerebellum. In total, for each patient, 31 ratios were calculated between the striatal activity and the activity of a reference zone. The discriminative value of the ratios was evaluated by the p value of the comparison between groups 1 and 2. Correlations were sought between the ratios taken two by two. The most discriminative ratios were caudate/occipital, caudate/frontal and striatum/occipital based on geometrical standard SRIs (p < 0.001, p = 0.002 and p = 0.003, respectively). A close correlation was found between the ratios with occipital and cerebellar references (r² > 0.71), but not between the ratios with frontal and occipital references, or frontal and cerebellar references. Under the conditions employed, geometrical tracing of the SRIs is preferable to anatomical tracing. The occipital cortex is the best reference, while frontal activity cannot be retained as a reference. The caudate/occipital ratios allow very good discrimination between Parkinson's disease and the other extrapyramidal syndromes investigated by 123I-iodolisuride SPECT.
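    A minimal sketch of the semi-quantitative ratio itself (illustrative, not the study's software): with SRIs expressed as boolean masks, the ratio is the mean count density in the target region over the mean in the reference region.

```python
import numpy as np

def uptake_ratio(spect_slice, target_mask, reference_mask):
    """Semi-quantitative ratio, e.g. head of caudate / occipital cortex."""
    return spect_slice[target_mask].mean() / spect_slice[reference_mask].mean()

# ratio = uptake_ratio(img, caudate_roi, occipital_roi)
# Lower striatal ratios are expected when striatal D2-receptor binding is
# reduced, which is what separates the two patient groups here.
```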

  9. A usefulness and evaluation of setting region of interest automatically, using NEURO FLEXER ver. 1.0

    International Nuclear Information System (INIS)

    Mizuno, Takashi; Takahashi, Masaaki

    2010-01-01

    The software packages NEURO FLEXER ver. 1.0 (FLEX), Brain ROI (BROI) and 3D Stereotactic ROI Template ver. 3.1 (3DSRT) were compared with the authors' manual method (MM) of ROI setting for estimating cerebral blood flow (CBF) in single photon emission computed tomography (SPECT), to identify the most efficient automatic setting. The subjects analyzed were 123I-IMP SPECT autoradiography (ARG) images of 52 patients (M/F 24/28, mean age 69 y), comprising group I: cerebral infarction and ischemia (INF, 26 cases), Alzheimer's disease and other dementias (AD, 14), post-operative subarachnoid hemorrhage (10) and 2 other cases; and group II: 10 cases each of AD with, and INF without, atrophy on MRI. The machine was a Toshiba SPECT GCA9300A equipped with a high-resolution low-energy fan-beam collimator and the GMS5500/PI processor. ARG acquisition was conducted with 8 rotations/20 min and 3.44 mm thick slices. ROIs were set by MM with the GMS5500/PI or automatically with FLEX (Fuji Mediphysics), BROI and 3DSRT (both Fuji Film RI Pharma). Anatomical standardization was done with iSSP5 (NEUROSTAT STEREO) and eZIS (SPM2 spatial normalize). Five experts shared the manual ROI setting and one finally checked all. In group I (all lesions) and group II (according to disease type), MM and each automatic ROI setting were compared for rCBF (mL/min/100 g) by regression. The automatic ROI setting with the FLEX software was found to be closest to the authors' MM for estimating CBF in SPECT, as its standardization and volume-of-interest (VOI) template application remained flexible even with reduced blood flow and atrophic lesions. For the automation, however, the user should understand the features of the standardization and know how to handle such situations. (T.T.)

  10. Mobile NBM - Android medical mobile application designed to help in learning how to identify the different regions of interest in the brain's white matter.

    Science.gov (United States)

    Sánchez-Rola, Iskander; Zapirain, Begoña García

    2014-07-18

    One of the most critical tasks when conducting neurological studies is identifying the different regions of interest in the brain's white matter. Currently, few programs or applications are available that serve as an interactive guide in this process. This is why a mobile application has been designed and developed to teach users how to identify the referred regions of the brain. It also enables users to share the results obtained and take an examination on the knowledge thus learnt. In order to provide direct user-user or user-developer contact, the project includes a website and a Twitter account. The application has been designed with a basic, minimalist look, which anyone can access easily in order to learn to identify a specific region in the brain's white matter. A survey conducted among its users has shown that the application is attractive in both the student (final mean satisfaction of 4.2/5) and the professional (final mean satisfaction of 4.3/5) environment. The response obtained in the online part of the project reflects the high practical value and quality of the application, as shown by the large number of visitors to the website (over 1000) and the high number of followers of the Twitter account (over 280). Mobile NBM is the first mobile application to be used as a guide in the process of identifying a region of interest in the brain's white matter. Although initially not many areas are available in the application, new ones can be added as required by users in their respective studies. Apart from the application itself, the online resources provided (website and Twitter account) significantly enhance users' experience.

  11. Comparison between PET template-based method and MRI-based method for cortical quantification of florbetapir (AV-45) uptake in vivo

    Energy Technology Data Exchange (ETDEWEB)

    Saint-Aubert, L.; Nemmi, F.; Peran, P. [Inserm, Imagerie Cerebrale et Handicaps neurologiques UMR 825, Centre Hospitalier Universitaire de Toulouse, Toulouse (France); Centre Hospitalier Universitaire de Toulouse, Universite de Toulouse, UPS, Imagerie Cerebrale et Handicaps Neurologiques UMR 825, Toulouse (France); Barbeau, E.J. [Universite de Toulouse, UPS, Centre de Recherche Cerveau et Cognition, France, CNRS, CerCo, Toulouse (France); Service de Neurologie, Pole Neurosciences, Centre Hospitalier Universitaire de Toulouse, Toulouse (France); Payoux, P. [Inserm, Imagerie Cerebrale et Handicaps neurologiques UMR 825, Centre Hospitalier Universitaire de Toulouse, Toulouse (France); Centre Hospitalier Universitaire de Toulouse, Universite de Toulouse, UPS, Imagerie Cerebrale et Handicaps Neurologiques UMR 825, Toulouse (France); Service de Medecine Nucleaire, Pole Imagerie, Centre Hospitalier Universitaire de Toulouse, Toulouse (France); Chollet, F.; Pariente, J. [Inserm, Imagerie Cerebrale et Handicaps neurologiques UMR 825, Centre Hospitalier Universitaire de Toulouse, Toulouse (France); Centre Hospitalier Universitaire de Toulouse, Universite de Toulouse, UPS, Imagerie Cerebrale et Handicaps Neurologiques UMR 825, Toulouse (France); Service de Neurologie, Pole Neurosciences, Centre Hospitalier Universitaire de Toulouse, Toulouse (France)

    2014-05-15

    Florbetapir (AV-45) has been shown to be a reliable tool for assessing in vivo amyloid load in patients with Alzheimer's disease from the early stages. However, nonspecific white matter binding has been reported in healthy subjects as well as in patients with Alzheimer's disease. To avoid this issue, cortical quantification might increase the reliability of AV-45 PET analyses. In this study, we compared two quantification methods for AV-45 binding: a classical method relying on PET template registration (route 1), and an MRI-based method (route 2) for cortical quantification. We recruited 22 patients at the prodromal stage of Alzheimer's disease and 17 matched controls. AV-45 binding was assessed using both methods, and target-to-cerebellum standardized uptake value ratios (SUVr) were obtained for each of them, both globally and in specific regions of interest. Quantification using the two routes was compared between the clinical groups (intragroup comparison), and between groups for each route (intergroup comparison). Discriminant analysis was performed. In the intragroup comparison, differences in uptake values were observed between route 1 and route 2 in both groups. In the intergroup comparison, AV-45 uptake was higher in patients than controls in all regions of interest using both methods, but the effect size of this difference was larger using route 2. In the discriminant analysis, route 2 showed a higher specificity (94.1 % versus 70.6 %), despite a lower sensitivity (77.3 % versus 86.4 %), and D-prime values were higher for route 2. These findings suggest that, although both quantification methods enabled patients at early stages of Alzheimer's disease to be well discriminated from controls, PET template-based quantification seems adequate for clinical use, while the MRI-based cortical quantification method led to greater intergroup differences and may be more suitable for use in current clinical research. (orig.)
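    A minimal sketch of the SUVr computation common to both routes (illustrative only; the two pipelines differ in how the ROI masks are brought into register with the PET volume):

```python
import numpy as np

def suvr(pet, roi_masks, cerebellum_mask):
    """Target-to-cerebellum SUV ratios per ROI, plus a global value over
    the union of the cortical ROIs. Masks are boolean arrays assumed to be
    co-registered with the PET volume."""
    ref = pet[cerebellum_mask].mean()
    ratios = {name: pet[mask].mean() / ref for name, mask in roi_masks.items()}
    union = np.zeros(pet.shape, dtype=bool)
    for mask in roi_masks.values():
        union |= mask
    ratios["global"] = pet[union].mean() / ref
    return ratios
```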

  12. MRI language dominance assessment in epilepsy patients at 1.0 T: region of interest analysis and comparison with intracarotid amytal testing

    International Nuclear Information System (INIS)

    Deblaere, K.; Vandemaele, P.; Tieleman, A.; Achten, E.; Boon, P.A.; Vonck, K.; Vingerhoets, G.; Backes, W.; Defreyne, L.

    2004-01-01

    The primary goal of this study was to test the reliability of presurgical language lateralization in epilepsy patients with functional magnetic resonance imaging (fMRI) with a 1.0-T MR scanner using a simple word generation paradigm and conventional equipment. In addition, hemispherical fMRI language lateralization analysis and region of interest (ROI) analysis in the frontal and temporo-parietal regions were compared with the intracarotid amytal test (IAT). Twenty epilepsy patients under presurgical evaluation were prospectively examined by both fMRI and IAT. The fMRI experiment consisted of a word chain task (WCT) using the conventional headphone set and a sparse sequence. In 17 of the 20 patients, data were available for comparison between the two procedures. Fifteen of these 17 patients were categorized as left hemispheric dominant, and 2 patients demonstrated bilateral language representation by both fMRI and IAT. The highest reliability for lateralization was obtained using frontal ROI analysis. Hemispherical analysis was less powerful and reliable in all cases but one, while temporo-parietal ROI analysis was unreliable as a stand-alone analysis when compared with IAT. The effect of statistical threshold on language lateralization prompted the use of t-value-dependent lateralization index plots. This study illustrates that fMRI-determined language lateralization can be performed reliably in a clinical MR setting operating at a low field strength of 1 T without expensive stimulus presentation systems. (orig.)
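    The t-value-dependent lateralization index mentioned above is conventionally LI = (L - R)/(L + R) over suprathreshold voxels; a small sketch, with the ROI masks and threshold grid as assumed inputs:

```python
import numpy as np

def lateralization_index(tmap, left_roi, right_roi, thresholds):
    """LI(t) = (L - R)/(L + R) from suprathreshold voxel counts in
    homologous ROIs, evaluated over a range of t thresholds."""
    lis = []
    for t in thresholds:
        left = int((tmap[left_roi] > t).sum())
        right = int((tmap[right_roi] > t).sum())
        lis.append((left - right) / (left + right) if left + right else np.nan)
    return np.array(lis)   # LI > 0: left dominance; LI < 0: right
```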

  13. A hands-free region-of-interest selection interface for solo surgery with a wide-angle endoscope: preclinical proof of concept.

    Science.gov (United States)

    Jung, Kyunghwa; Choi, Hyunseok; Hong, Hanpyo; Adikrishna, Arnold; Jeon, In-Ho; Hong, Jaesung

    2017-02-01

    A hands-free region-of-interest (ROI) selection interface is proposed for solo surgery using a wide-angle endoscope. A wide-angle endoscope provides images with a larger field of view than a conventional endoscope. With an appropriate selection interface for an ROI, surgeons can also obtain a detailed local view as if they had moved a conventional endoscope to a specific position and direction. To manipulate the endoscope without releasing the surgical instrument in hand, a mini-camera is attached to the instrument, and the images taken by the attached camera are analyzed. When a surgeon moves the instrument, the instrument orientation is calculated by image processing. Surgeons can select the ROI with this instrument movement after switching from 'task mode' to 'selection mode.' The accelerated KAZE (AKAZE) algorithm is used to track the features of the camera images once the instrument is moved. Both the wide-angle and detailed local views are displayed simultaneously, and a surgeon can move the local view area by moving the mini-camera attached to the surgical instrument. Local view selection for a solo surgery was performed without releasing the instrument. The accuracy of camera pose estimation was not significantly different between camera resolutions, but it was significantly different between background camera images with different numbers of features. The proposed interface enables solo surgeries without a camera assistant.
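    As a sketch of the feature-tracking step (the paper's implementation details are not given here, so this is a generic OpenCV equivalent using its AKAZE detector), matched keypoints between consecutive mini-camera frames give a crude estimate of instrument motion:

```python
import cv2
import numpy as np

akaze = cv2.AKAZE_create()
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def estimate_shift(prev_gray, curr_gray):
    """Match AKAZE features between consecutive frames and return the
    median (dx, dy) keypoint displacement as a crude motion estimate."""
    kp1, des1 = akaze.detectAndCompute(prev_gray, None)
    kp2, des2 = akaze.detectAndCompute(curr_gray, None)
    if des1 is None or des2 is None:
        return None
    matches = matcher.match(des1, des2)
    if not matches:
        return None
    shifts = np.array([np.subtract(kp2[m.trainIdx].pt, kp1[m.queryIdx].pt)
                       for m in matches])
    return np.median(shifts, axis=0)  # could steer the local view in
                                      # 'selection mode'
```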

  14. Lateralisation with magnetic resonance spectroscopic imaging in temporal lobe epilepsy: an evaluation of visual and region-of-interest analysis of metabolite concentration images

    Energy Technology Data Exchange (ETDEWEB)

    Vikhoff-Baaz, B. [Sahlgrenska University Hospital, Goeteborg (Sweden); Div. of Medical Physics and Biomedical Engineering, Goeteborg Univ. (Sweden); Goeteborg Univ. (Sweden). Dept. of Radiation Physics; Malmgren, K. [Dept. of Neurology, Goeteborg Univ. (Sweden); Joensson, L.; Ekholm, S. [Dept. of Radiology, Goeteborg Univ. (Sweden); Starck, G. [Div. of Medical Physics and Biomedical Engineering, Goeteborg Univ. (Sweden); Ljungberg, M.; Forssell-Aronsson, E. [Goeteborg Univ. (Sweden). Dept. of Radiation Physics; Uvebrant, P. [Dept. of Paediatrics, Goeteborg Univ. (Sweden)

    2001-09-01

    We carried out spectroscopic imaging (MRSI) on nine consecutive patients with temporal lobe epilepsy being assessed for epilepsy surgery, and nine neurologically healthy, age-matched volunteers. A volume of interest (VOI) was angled along the temporal horns on axial and sagittal images, and symmetrically over the temporal lobes on coronal images. Images showing the concentrations of N-acetylaspartate (NAA) and of choline-containing compounds plus creatine and phosphocreatine (Cho + Cr) were used for lateralisation. We compared assessment by visual inspection and by signal analysis from regions of interest (ROI) in different positions, where side-to-side differences in NAA/(Cho + Cr) ratio were used for lateralisation. The NAA/(Cho + Cr) ratio from the different ROI was also compared with that in the brain stem to assess if the latter could be used as an internal reference, e.g., for identification of bilateral changes. The metabolite concentration images were found useful for lateralisation of temporal lobe abnormalities related to epilepsy. Visual analysis can, with high accuracy, be used routinely. ROI analysis is useful for quantifying changes, giving more quantitative information about spatial distribution and the degree of signal loss. There was a large variation in NAA/(Cho + Cr) values in both patients and volunteers. The brain stem may be used as a reference for identification of bilateral changes. (orig.)

  15. Diffusion-weighted imaging of breast lesions: Region-of-interest placement and different ADC parameters influence apparent diffusion coefficient values

    Energy Technology Data Exchange (ETDEWEB)

    Bickel, Hubert; Pinker, Katja; Polanec, Stephan; Magometschnigg, Heinrich; Wengert, Georg; Spick, Claudio; Helbich, Thomas H.; Baltzer, Pascal [Medical University Vienna, Division of Molecular and Gender Imaging, Department of Biomedical Imaging and Image-Guided Therapy, Vienna (Austria); Bogner, Wolfgang [Medical University Vienna - MR Center of Excellence, Department of Biomedical Imaging and Image-Guided Therapy, Vienna (Austria); Bago-Horvath, Zsuzsanna [Medical University Vienna, Department of Pathology, Vienna (Austria)

    2017-05-15

    To investigate the influence of region-of-interest (ROI) placement and different apparent diffusion coefficient (ADC) parameters on ADC values, diagnostic performance, reproducibility and measurement time in breast tumours. In this IRB-approved, retrospective study, 149 histopathologically proven breast tumours (109 malignant, 40 benign) in 147 women (mean age 53.2 years) were investigated. Three radiologists independently measured minimum, mean and maximum ADC, each using three ROI placement approaches: (1) a small 2D ROI, (2) a large 2D ROI and (3) a 3D ROI covering the whole lesion. One reader performed all measurements twice. Median ADC values, diagnostic performance, reproducibility, and measurement time were calculated and compared between all combinations of ROI placement approaches and ADC parameters. Median ADC values differed significantly between the ROI placement approaches (p < .001). Minimum ADC showed the best diagnostic performance (AUC .928-.956), followed by mean ADC obtained from 2D ROIs (AUC .926-.94). Minimum and mean ADC showed high intra-reader (ICC .85-.94) and inter-reader reproducibility (ICC .74-.94). Median measurement time was significantly shorter for the 2D ROIs (p < .001). ROI placement significantly influences ADC values measured in breast tumours. Minimum and mean ADC acquired from 2D ROIs are useful for the differentiation of benign and malignant breast lesions, and are highly reproducible, with rapid measurement. (orig.)

  16. In situ study of the impact of inter- and intra-reader variability on region of interest (ROI) analysis in preclinical molecular imaging.

    Science.gov (United States)

    Habte, Frezghi; Budhiraja, Shradha; Keren, Shay; Doyle, Timothy C; Levin, Craig S; Paik, David S

    2013-01-01

    We estimated reader-dependent variability of region of interest (ROI) analysis and evaluated its impact on preclinical quantitative molecular imaging. To estimate reader variability, we used five independent image datasets acquired each using microPET and multispectral fluorescence imaging (MSFI). We also selected ten experienced researchers who utilize molecular imaging in the same environment in which they typically perform their own studies. Nine investigators blinded to the data type completed the ROI analysis by manually drawing ROIs that delineate the tumor regions to the best of their knowledge, and repeated the measurements three times, non-consecutively. Extracted mean intensities of voxels within each ROI were used to compute the coefficient of variation (CV) and characterize the inter- and intra-reader variability. The impact of variability was assessed through random samples iterated from normal distributions for control and experimental groups in hypothesis testing, computing statistical power while varying subject size, measured difference between groups and CV. The results indicate that inter-reader variability was 22.5% for microPET and 72.2% for MSFI. Additionally, mean intra-reader variability was 10.1% for microPET and 26.4% for MSFI. Repeated statistical testing showed that this level of total variability can adversely affect statistical power and erroneously lead to negative study outcomes; the variability was observed mainly due to differences in ROI placement and geometry drawn between readers.
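    A minimal sketch of the CV bookkeeping (illustrative numbers, not the study's data): each reader's repeats give an intra-reader CV, and the spread of the reader means gives the inter-reader CV.

```python
import numpy as np

def coefficient_of_variation(values):
    """CV in percent: sample standard deviation over the mean."""
    values = np.asarray(values, dtype=float)
    return values.std(ddof=1) / values.mean() * 100.0

# Mean ROI intensities for one dataset: rows = readers, cols = repeats.
roi_means = np.array([[4.1, 4.3, 4.0],
                      [5.2, 5.0, 5.1],
                      [3.8, 4.0, 3.9]])
intra = [coefficient_of_variation(r) for r in roi_means]   # per reader
inter = coefficient_of_variation(roi_means.mean(axis=1))   # across readers
```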

  17. Lateralisation with magnetic resonance spectroscopic imaging in temporal lobe epilepsy: an evaluation of visual and region-of-interest analysis of metabolite concentration images

    International Nuclear Information System (INIS)

    Vikhoff-Baaz, B.; Joensson, L.; Ekholm, S.; Starck, G.

    2001-01-01

    We carried out spectroscopic imaging (MRSI) on nine consecutive patients with temporal lobe epilepsy being assessed for epilepsy surgery, and nine neurologically healthy, age-matched volunteers. A volume of interest (VOI) was angled along the temporal horns on axial and sagittal images, and symmetrically over the temporal lobes on coronal images. Images showing the concentrations of N-acetylaspartate (NAA) and of choline-containing compounds plus creatine and phosphocreatine (Cho + Cr) were used for lateralisation. We compared assessment by visual inspection and by signal analysis from regions of interest (ROI) in different positions, where side-to-side differences in NAA/(Cho + Cr) ratio were used for lateralisation. The NAA/(Cho + Cr) ratio from the different ROI was also compared with that in the brain stem to assess if the latter could be used as an internal reference, e.g., for identification of bilateral changes. The metabolite concentration images were found useful for lateralisation of temporal lobe abnormalities related to epilepsy. Visual analysis can, with high accuracy, be used routinely. ROI analysis is useful for quantifying changes, giving more quantitative information about spatial distribution and the degree of signal loss. There was a large variation in NAA/(Cho + Cr) values in both patients and volunteers. The brain stem may be used as a reference for identification of bilateral changes. (orig.)

  18. MRI language dominance assessment in epilepsy patients at 1.0 T: region of interest analysis and comparison with intracarotid amytal testing

    Energy Technology Data Exchange (ETDEWEB)

    Deblaere, K.; Vandemaele, P.; Tieleman, A.; Achten, E. [Department of Neuroradiology, Ghent University Hospital, De Pintelaan 185, 9000, Ghent (Belgium); Boon, P.A.; Vonck, K. [Reference Center for Refractory Epilepsy of the Department of Neurology, Ghent University Hospital, Ghent (Belgium); Vingerhoets, G. [Labaratory for Neuropsychology, Neurology Section of the Department of Internal Medicine, Ghent University, Ghent (Belgium); Backes, W. [Department of Neuroradiology, University Hospital Maastricht, Maastricht (Netherlands); Defreyne, L. [Department of Interventional Radiology, Ghent University Hospital, Ghent (Belgium)

    2004-06-01

    The primary goal of this study was to test the reliability of presurgical language lateralization in epilepsy patients with functional magnetic resonance imaging (fMRI) with a 1.0-T MR scanner using a simple word generation paradigm and conventional equipment. In addition, hemispherical fMRI language lateralization analysis and region of interest (ROI) analysis in the frontal and temporo-parietal regions were compared with the intracarotid amytal test (IAT). Twenty epilepsy patients under presurgical evaluation were prospectively examined by both fMRI and IAT. The fMRI experiment consisted of a word chain task (WCT) using the conventional headphone set and a sparse sequence. In 17 of the 20 patients, data were available for comparison between the two procedures. Fifteen of these 17 patients were categorized as left hemispheric dominant, and 2 patients demonstrated bilateral language representation by both fMRI and IAT. The highest reliability for lateralization was obtained using frontal ROI analysis. Hemispherical analysis was less powerful and reliable in all cases but one, while temporo-parietal ROI analysis was unreliable as a stand-alone analysis when compared with IAT. The effect of statistical threshold on language lateralization prompted the use of t-value-dependent lateralization index plots. This study illustrates that fMRI-determined language lateralization can be performed reliably in a clinical MR setting operating at a low field strength of 1 T without expensive stimulus presentation systems. (orig.)

  19. Influence of region of interest size and ultrasound lesion size on the performance of 2D shear wave elastography (SWE) in solid breast masses.

    Science.gov (United States)

    Skerl, K; Vinnicombe, S; Giannotti, E; Thomson, K; Evans, A

    2015-12-01

    To evaluate the influence of the region of interest (ROI) size and lesion diameter on the diagnostic performance of 2D shear wave elastography (SWE) of solid breast lesions. A study group of 206 consecutive patients (age range 21-92 years) with 210 solid breast lesions (70 benign, 140 malignant) who underwent core biopsy or surgical excision was evaluated. Lesions were divided into small (diameter <15 mm, n=112) and large lesions (diameter ≥15 mm, n=98). An ROI with a diameter of 1, 2, and 3 mm was positioned over the stiffest part of the lesion. The maximum elasticity (Emax), mean elasticity (Emean) and standard deviation (SD) for each ROI size were compared to the pathological outcome. Statistical analysis was undertaken using the chi-square test and receiver operating characteristic (ROC) analysis. The ROI size used has a significant impact on the performance of Emean and SD but not on Emax. Youden's indices show a correlation with the ROI size and lesion size: generally, the benign/malignant threshold is lower with increasing ROI size but higher with increasing lesion size. No single SWE parameter has superior performance. Lesion size and ROI size influence diagnostic performance.
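    Youden's index, used above to set benign/malignant stiffness cut-offs, can be sketched as follows (scikit-learn supplies the ROC; per-lesion stiffness values and pathology labels are assumed inputs):

```python
import numpy as np
from sklearn.metrics import roc_curve

def youden_threshold(labels, stiffness_kpa):
    """Cut-off maximizing Youden's J = sensitivity + specificity - 1.
    labels: 1 = malignant, 0 = benign; stiffness from the chosen ROI size."""
    fpr, tpr, thresholds = roc_curve(labels, stiffness_kpa)
    best = np.argmax(tpr - fpr)
    return thresholds[best], (tpr - fpr)[best]
```

    Recomputing this threshold per ROI size and per lesion-size stratum reproduces the kind of comparison the study reports.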

  20. Photoplethysmography Signal Analysis for Optimal Region-of-Interest Determination in Video Imaging on a Built-In Smartphone under Different Conditions

    Directory of Open Access Journals (Sweden)

    Yunyoung Nam

    2017-10-01

    Full Text Available Smartphones and tablets are widely used in medical fields, which can improve healthcare and reduce healthcare costs. Many medical applications for smartphones and tablets have already been developed and are widely used by both health professionals and patients. Specifically, video recordings of fingertips made using a smartphone camera contain a pulsatile component caused by the cardiac pulse, equivalent to that present in a photoplethysmographic (PPG) signal. By performing peak detection on the pulsatile signal, it is possible to estimate a continuous heart rate and a respiratory rate. To estimate the heart rate and respiratory rate accurately, it should be investigated which pixel regions of the color bands give the optimal signal quality. In this paper, we investigate signal quality, determined by the largest amplitude values, for three different smartphones under different conditions. We conducted several experiments to obtain reliable PPG signals and compared the PPG signal strength in the three color bands with the flashlight both on and off. We also evaluated the intensity changes of PPG signals obtained from the smartphones with motion artifacts and fingertip pressure force. Furthermore, we compared the PSNR of PPG signals of the full-size images with that of the regions of interest (ROIs).
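    A sketch of the basic extraction step (illustrative, not the paper's pipeline): the mean channel intensity per frame, optionally restricted to an ROI, forms the PPG waveform, and peak spacing gives the heart rate.

```python
import numpy as np
from scipy.signal import find_peaks

def heart_rate_from_frames(frames, fps, roi=None):
    """frames: (N, H, W, 3) RGB array from fingertip video. Mean green-channel
    intensity per frame forms the PPG signal; peak spacing gives heart rate."""
    if roi is not None:
        r0, r1, c0, c1 = roi               # restrict to a region of interest
        frames = frames[:, r0:r1, c0:c1, :]
    ppg = frames[..., 1].mean(axis=(1, 2)) # green channel per frame
    ppg = ppg - ppg.mean()
    peaks, _ = find_peaks(ppg, distance=int(0.4 * fps))  # cap at ~150 bpm
    if len(peaks) < 2:
        return None
    return 60.0 * fps / np.diff(peaks).mean()            # beats per minute
```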

  1. Attribute-Based Methods

    Science.gov (United States)

    Thomas P. Holmes; Wiktor L. Adamowicz

    2003-01-01

    Stated preference methods of environmental valuation have been used by economists for decades where behavioral data have limitations. The contingent valuation method (Chapter 5) is the oldest stated preference approach, and hundreds of contingent valuation studies have been conducted. More recently, and especially over the last decade, a class of stated preference...

  2. The prognostic value of quantified MRI at an early stage of Bell's palsy; Der prognostische Wert der dynamischen, kontrastmittelverstaerkten Region-of-Interest-MRT in der Akutphase der idiopathischen Fazialisparese

    Energy Technology Data Exchange (ETDEWEB)

    Kress, B.P.J.; Efinger, K.; Gottschalk, A.; Nissen, S.; Solbach, T.; Baehren, W. [Abt. fuer Radiologie, Bundeswehrkrankenhaus Ulm (Germany); Griesbeck, F.; Goriup, A.; Kornhuber, A.W. [Abt. fuer Neurologie und Psychiatrie, Bundeswehrkrankenhaus Ulm (Germany)

    2002-04-01

    DTPA/kg body weight. The increase in signal intensity was evaluated quantitatively by means of regions of interest (ROI). The results were compared with the clinical course and the results of electrophysiology. Results: Quantitative evaluation was possible in all 30 patients. The three patients who went on to develop chronic facial palsy were identified by MRI on the day of admission. The patients whose MRI showed signs of an unfavourable course had a highly pathological compound muscle action potential on electrophysiology. Instead of complicated measurement algorithms, a single measurement in the internal auditory canal yielded a sufficiently reliable prognostic statement. Conclusion: Region-of-interest MRI has reliable prognostic value at an early stage of the disease. With a quantitative measurement that is simple to perform in routine clinical practice, the prognosis can be estimated at a stage at which causal therapy is still possible. (orig.)
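    The quantitative readout here is a simple relative enhancement within the ROI; a sketch with illustrative signal intensities (not values from the study):

```python
def percent_enhancement(si_pre, si_post):
    """Relative signal-intensity increase in an ROI after contrast, e.g.
    an ROI placed in the internal auditory canal."""
    return 100.0 * (si_post - si_pre) / si_pre

print(percent_enhancement(210.0, 340.0))  # ~61.9 %, illustrative values only
```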

  3. Methods in Logic Based Control

    DEFF Research Database (Denmark)

    Christensen, Georg Kronborg

    1999-01-01

    Design and theory of logic-based control systems: Boolean algebra, Karnaugh maps, the Quine-McCluskey algorithm. Sequential control design. Logic Based Control Method, Cascade Control Method. Implementation techniques: relay, pneumatic, TTL/CMOS, PAL, and PLC and Soft-PLC implementation. PLC

  4. Activity based costing (ABC Method

    Directory of Open Access Journals (Sweden)

    Prof. Ph.D. Saveta Tudorache

    2008-05-01

    Full Text Available This paper presents the need for and advantages of using the Activity Based Costing method, a need arising from the problem of information pertinence. This issue has occurred due to the limitations of classic methods in this field, limitations also reflected in the disadvantages of such classic methods in establishing complete costs.

  5. MO-FG-CAMPUS-JeP3-02: A Novel Setup Approach to Improve C-Spine Curvature Reproducibility for Head and Neck Radiotherapy Using Optical Surface Imaging with Two Regions of Interest

    Energy Technology Data Exchange (ETDEWEB)

    Ryan, K; Gil, M; Li, G [Memorial Sloan Kettering Cancer Center, New York, NY (United States); Della Biancia, C [Memorial Sloan-Kettering Cancer Center, New York, NY (United States)

    2016-06-15

    Purpose: To develop a novel approach to improve cervical spine (c-spine) curvature reproducibility for head and neck (HN) patients using optical surface imaging (OSI) with two regions of interest (ROIs). Methods: The OSI-guided, two-step setup procedure requires two ROIs: ROI-1 of the shoulders and ROI-2 of the face. The neck can be stretched or squeezed in the superior-inferior (SI) direction using a specially-designed sliding head support. We hypothesize that when these two ROIs are aligned, the c-spine should fall into a naturally reproducible position under the same setup conditions. An anthropomorphic phantom test was performed to compare neck pitch angles with the calculated angles. Three volunteers participated in the experiments, which started with conventional HN setup using skin markers and room lasers. An OSI image and a lateral photograph were acquired as the references. In each of the three replicate tests, conventional setup was first applied after the volunteer got on the couch. ROI-1 was aligned by moving the body, followed by ROI-2 alignment via adjusting head position and orientation under real-time OSI guidance. A final static OSI image and lateral photograph were taken to evaluate both anterior and posterior surface alignment. Three degrees of freedom can be adjusted if an open-face mask is applied, including head SI shift using the sliding head support and pitch-and-roll rotations using a commercial couch extension. Surface alignment was analyzed in comparison with conventional setup. Results: The neck pitch angle measured by OSI is consistent with the calculated angle (0.2±0.6°). The volunteer study illustrated improved c-spine setup reproducibility using OSI compared with conventional setup. ROI alignments with 2 mm/1° tolerance were achieved within 3 minutes. An identical knee support is important to achieve ROI-1 pitch alignment. Conclusion: The feasibility of this novel approach has been demonstrated for c-spine curvature setup reproducibility. Further

  6. Use of Relative vs Fixed Offset Distance to Define Region of Interest at the Distal Radius and Tibia in High-Resolution Peripheral Quantitative Computed Tomography

    DEFF Research Database (Denmark)

    Shanbhogue, Vikram V; Hansen, Stinus; Halekoh, Ulrich

    2015-01-01

    adjacent to the measurement site. This study aimed to compare the morphologic variation in measurements using the standard fixed offset distance to define the distal starting slice against those obtained using a relative measurement position scaled to the individual bone length at the distal radius... defined by, first, the standard measurement protocol, where the most distal CT slice was 9.5 mm and 22.5 mm from the end plate of the radius and tibia, respectively, and second, the relative measurement method, where the most distal CT slice was at 4% and 7% of the radial and tibial lengths, respectively... Volumetric densities and microarchitectural parameters were compared between the 2 methods. Measurements of the total and cortical volumetric density and cortical thickness at the radius and tibia, and cortical porosity, trabecular volumetric density, and trabecular number at the tibia, were significantly...
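    The relative method is plain arithmetic; a sketch with illustrative bone lengths chosen so that the relative and fixed offsets coincide (the point of the method is that they diverge for individuals with shorter or longer bones):

```python
def distal_slice_offset_mm(bone_length_mm, fraction):
    """Relative method: most distal CT slice at a fixed fraction of bone
    length (4 % at the radius, 7 % at the tibia)."""
    return fraction * bone_length_mm

# Illustrative lengths only:
print(distal_slice_offset_mm(237.5, 0.04))  # 9.5 mm, the fixed radius offset
print(distal_slice_offset_mm(321.4, 0.07))  # ~22.5 mm, the fixed tibia offset
```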

  7. Whole-organ and segmental stiffness measured with liver magnetic resonance elastography in healthy adults: significance of the region of interest.

    Science.gov (United States)

    Rusak, Grażyna; Zawada, Elżbieta; Lemanowicz, Adam; Serafin, Zbigniew

    2015-04-01

    MR elastography (MRE) is a recent non-invasive technique that provides in vivo data on the viscoelasticity of the liver. Since the method is not well established, several different protocols have been proposed that differ in their results. The aim of the study was to analyze the variability of stiffness measurements in different regions of the liver. Twenty healthy adults aged 24-45 years were recruited. The examination was performed using a mechanical excitation of 64 Hz. MRE images were fused with axial T2WI breath-hold images (thickness 10 mm, spacing 10 mm). Stiffness was measured as a mean value of each cross section of the whole liver, on a single largest cross section, in the right lobe, and in ROIs (50 pix.) placed in the center of the left lobe, segments 5/6, 7, 8, and the parahilar region. Whole-liver stiffness ranged from 1.56 to 2.75 kPa. Mean segmental stiffness differed significantly between the tested regions (range from 1.55 ± 0.28 to 2.37 ± 0.32 kPa; P < 0.0001, ANOVA). Within-method variability of measurements ranged from 14% (whole liver and segment 8) to 26% (segment 7). Within-subject variability ranged from 13 to 31%. Results of measurement within segment 8 were closest to the whole-liver method (ICC, 0.84). Stiffness of the liver presented significant variability depending on the region of measurement. The most reproducible method is averaging of cross sections of the whole liver. There was significant variability in stiffness among subjects considered healthy, which requires further investigation.

  8. The effect of region of interest strategies on apparent diffusion coefficient assessment in patients treated with palliative radiation therapy to brain metastases

    DEFF Research Database (Denmark)

    Mahmood, Faisal; Johannesen, Helle H; Geertsen, Poul

    2015-01-01

    scans, before the start of RT (pre-RT) and at the 9th/10th fraction (end-RT). The following ROI strategies were applied. ROIb800 and ROIb0: entire tumor volume visible on DW (b = 800 s/mm²) and DW (b = 0 s/mm²) images, respectively. ROIb800vi: viable tumor volume based on DW (b = 800 s/mm²). ROIb800rep

  9. In situ genomic DNA extraction for PCR analysis of regions of interest in four plant species and one filamentous fungus

    OpenAIRE

    Luis E. Rojas; Maritza Reyes; Naivy Pérez-Alonso; María I. Olóriz; Laisyn Posada-Pérez; Bárbara Ocaña; Orelvis Portal; Borys Chong-Pérez; Jorge L. Pérez Pérez

    2014-01-01

    The extraction methods of genomic DNA are usually laborious and hazardous to human health and the environment owing to the use of organic solvents (chloroform and phenol). In this work, a protocol for in situ extraction of genomic DNA by alkaline lysis is validated. It was used to amplify DNA regions of interest in four plant species and one filamentous fungus by polymerase chain reaction (PCR). From plant material of Saccharum officinarum L., Carica papaya L. and Digitalis purpurea L. it was possible to amplify ...

  10. Analyzing three-dimensional position of region of interest using an image of contrast media using unilateral X-ray exposure

    International Nuclear Information System (INIS)

    Harauchi, Hajime; Gotou, Hiroshi; Tanooka, Masao

    1994-01-01

    Analysis of the three-dimensional internal structure of an object in an X-ray study is usually performed using two or more X-ray incidence directions. In this report, we analyzed the three-dimensional position of tubes in a phantom using contrast media and imaging from a single direction. The iodine concentration of the contrast medium can be determined from the log-subtraction image of the single-direction X-ray exposure, and the diameter of a tube filled with contrast medium can then be calculated from the iodine concentration. The three-dimensional position of the tubes can therefore be determined geometrically from the tube diameter and the values measured on the film. We verified this method by an experiment conducted according to the theory. (author)
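    A sketch of the underlying relations (reconstructed from the abstract rather than the paper, so the helper names and geometry conventions are assumptions): the log-subtraction value along a ray through contrast medium equals mu * t under Beer-Lambert attenuation, giving the tube's true diameter, and comparing the true and projected diameters gives the magnification and hence the depth.

```python
def tube_diameter_mm(log_subtraction, mu_per_mm):
    """Beer-Lambert: I = I0 * exp(-mu * t) along a ray through contrast
    medium, so the log-subtraction value equals mu * t and t = value / mu."""
    return log_subtraction / mu_per_mm

def source_object_distance_mm(sid_mm, d_image_mm, d_true_mm):
    """Cone-beam magnification M = SID / SOD = d_image / d_true, so the
    depth coordinate follows from SOD = SID * d_true / d_image."""
    return sid_mm * d_true_mm / d_image_mm
```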

  11. Entropy-based benchmarking methods

    NARCIS (Netherlands)

    Temurshoev, Umed

    2012-01-01

    We argue that benchmarking sign-volatile series should be based on the principle of movement and sign preservation, which states that a benchmarked series should reproduce the movement and signs in the original series. We show that the widely used variants of the Denton (1971) method and the growth

  12. Assessment of cerebral perfusion with single-photon emission tomography in normal subjects and in patients with Alzheimer's disease: effects of region of interest selection

    Energy Technology Data Exchange (ETDEWEB)

    Claus, J J [Dept. of Neurology, Univ. Hospital, Rotterdam (Netherlands) Dept. of Epidemiology and Biostatistics, Erasmus Univ. Medical School, Rotterdam (Netherlands); Harskamp, F van [Dept. of Neurology, Univ. Hospital, Rotterdam (Netherlands); Breteler, M M.B. [Dept. of Epidemiology and Biostatistics, Erasmus Univ. Medical School, Rotterdam (Netherlands); Krenning, E P [Dept. of Nuclear Medicine, Univ. Hospital, Rotterdam (Netherlands); Cammen, T.J.M. van der (Dept. of Geriatric Medicine, Univ. Hospital, Rotterdam (Netherlands)); Hofman, A [Dept. of Epidemiology and Biostatistics, Erasmus Univ. Medical School, Rotterdam (Netherlands); Hasan, D [Dept. of Neurology, Univ. Hospital, Rotterdam (Netherlands)

    1994-10-01

    We compared three different ROIs in a SPET study with 60 controls and 48 patients with probable Alzheimer's disease diagnosed according to the NINCDS-ADRDA criteria. Regional cerebral blood flow (rCBF) was assessed with SPET using technetium-99m d,l-hexamethylpropylene amine oxime (99mTc-HMPAO), normalized to the mean activity in a cerebellar reference slice. The three different ROIs were: a multi-slice and a single-slice ROI with reference to the normal brain anatomy (using an anatomical atlas), and a rectangular (2x4 pixels) ROI in the frontal, temporal, temporoparietal and occipital cortices. No differences were observed in the mean rCBF values between the single-slice and multi-slice ROIs with reference to the normal anatomy, but some variability was present for individual comparisons. In contrast, significantly higher mean rCBF values were obtained with the single-slice rectangular ROIs in all four regions for both patients and controls, and considerable variability was shown for individual subjects. After analysis with multivariate logistic regression and receiver operating characteristic curves, the ability of SPET to discriminate between controls and Alzheimer patients was similar for the three methods in mild and moderate Alzheimer patients (Global Deterioration Scale [GDS] 3 and 4). However, with increasing dementia severity (GDS > 4) the rectangular ROIs showed a lower ability to discriminate between groups compared with the single-slice and multi-slice anatomically defined ROIs. This study suggests that results of rCBF assessment with SPET using 99mTc-HMPAO in patients with severe Alzheimer's disease are influenced by the shape and size of the ROI. (orig.)

  13. Assessment of cerebral perfusion with single-photon emission tomography in normal subjects and in patients with Alzheimer's disease: effects of region of interest selection

    International Nuclear Information System (INIS)

    Claus, J.J.; Harskamp, F. van; Breteler, M.M.B.; Krenning, E.P.; Cammen, T.J.M. van der; Hofman, A.; Hasan, D.

    1994-01-01

    We compared three different ROIs in a SPET study with 60 controls and 48 patients with probable Alzheimer's disease diagnosed according to the NINCDS-ADRDA criteria. Regional cerebral blood flow (rCBF) was assessed with SPET using technetium-99m d,l-hexamethylpropylene amine oxime (99mTc-HMPAO), normalized to the mean activity in a cerebellar reference slice. The three different ROIs were: a multi-slice and a single-slice ROI with reference to the normal brain anatomy (using an anatomical atlas), and a rectangular (2x4 pixels) ROI in the frontal, temporal, temporoparietal and occipital cortices. No differences were observed in the mean rCBF values between the single-slice and multi-slice ROIs with reference to the normal anatomy, but some variability was present for individual comparisons. In contrast, significantly higher mean rCBF values were obtained with the single-slice rectangular ROIs in all four regions for both patients and controls, and considerable variability was shown for individual subjects. After analysis with multivariate logistic regression and receiver operating characteristic curves, the ability of SPET to discriminate between controls and Alzheimer patients was similar for the three methods in mild and moderate Alzheimer patients (Global Deterioration Scale [GDS] 3 and 4). However, with increasing dementia severity (GDS > 4) the rectangular ROIs showed a lower ability to discriminate between groups compared with the single-slice and multi-slice anatomically defined ROIs. This study suggests that results of rCBF assessment with SPET using 99mTc-HMPAO in patients with severe Alzheimer's disease are influenced by the shape and size of the ROI. (orig.)

  14. Diffeomorphic Anatomical Registration Through Exponentiated Lie Algebra provides reduced effect of scanner for cortex volumetry with atlas-based method in healthy subjects.

    Science.gov (United States)

    Goto, Masami; Abe, Osamu; Aoki, Shigeki; Hayashi, Naoto; Miyati, Tosiaki; Takao, Hidemasa; Iwatsubo, Takeshi; Yamashita, Fumio; Matsuda, Hiroshi; Mori, Harushi; Kunimatsu, Akira; Ino, Kenji; Yano, Keiichi; Ohtomo, Kuni

    2013-07-01

    This study aimed to investigate whether the effect of scanner on cortex volumetry with an atlas-based method is reduced using Diffeomorphic Anatomical Registration Through Exponentiated Lie Algebra (DARTEL) normalization compared with standard normalization. Three-dimensional T1-weighted magnetic resonance images (3D-T1WIs) of 21 healthy subjects were obtained and evaluated for the effect of scanner on cortex volumetry. The 3D-T1WIs of the 21 subjects were obtained with five MRI systems, imaging of each subject being performed on each of the five different MRI scanners. We used the Voxel-Based Morphometry 8 tool implemented in Statistical Parametric Mapping 8 and WFU PickAtlas software (Talairach brain atlas theory). The following software default settings were used as bilateral region-of-interest labels: "Frontal Lobe," "Hippocampus," "Occipital Lobe," "Orbital Gyrus," "Parietal Lobe," "Putamen," and "Temporal Lobe." The effect of scanner on cortex volumetry with the atlas-based method was reduced with DARTEL normalization compared with standard normalization in the Frontal Lobe, Occipital Lobe, Orbital Gyrus, Putamen, and Temporal Lobe; was the same in the Hippocampus and Parietal Lobe; and showed no increase with DARTEL normalization for any region of interest (ROI). DARTEL normalization reduces the effect of scanner, which is a major problem in multicenter studies.
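    As a generic sketch of atlas-based ROI volumetry (not the VBM8/PickAtlas pipeline itself): after normalization, the grey-matter probability is summed over the voxels carrying a given atlas label and scaled by voxel volume.

```python
import numpy as np

def roi_volume_ml(gm_probability, atlas_labels, roi_label, voxel_mm3):
    """Grey-matter volume within one atlas ROI: sum of GM probabilities
    over voxels carrying the ROI label, scaled by voxel volume."""
    mask = atlas_labels == roi_label
    return float(gm_probability[mask].sum()) * voxel_mm3 / 1000.0

# volume = roi_volume_ml(gm_map, atlas, labels["Hippocampus"], 1.5**3)
```

    Comparing such per-ROI volumes for the same subject across scanners gives the "effect of scanner" the study quantifies.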

  15. Diffeomorphic Anatomical Registration Through Exponentiated Lie Algebra provides reduced effect of scanner for cortex volumetry with atlas-based method in healthy subjects

    Energy Technology Data Exchange (ETDEWEB)

    Goto, Masami; Ino, Kenji; Yano, Keiichi [University of Tokyo Hospital, Department of Radiological Technology, Bunkyo-ku, Tokyo (Japan); Abe, Osamu [Nihon University School of Medicine, Department of Radiology, Itabashi-ku, Tokyo (Japan); Aoki, Shigeki [Juntendo University, Department of Radiology, Bunkyo-ku, Tokyo (Japan); Hayashi, Naoto [University of Tokyo Hospital, Department of Computational Diagnostic Radiology and Preventive Medicine, Bunkyo-ku, Tokyo (Japan); Miyati, Tosiaki [Kanazawa University, Graduate School of Medical Science, Kanazawa (Japan); Takao, Hidemasa; Mori, Harushi; Kunimatsu, Akira; Ohtomo, Kuni [University of Tokyo Hospital, Department of Radiology and Department of Computational Diagnostic Radiology and Preventive Medicine, Bunkyo-ku, Tokyo (Japan); Iwatsubo, Takeshi [University of Tokyo, Department of Neuropathology, Bunkyo-ku, Tokyo (Japan); Yamashita, Fumio [Iwate Medical University, Department of Radiology, Yahaba, Iwate (Japan); Matsuda, Hiroshi [Integrative Brain Imaging Center National Center of Neurology and Psychiatry, Department of Nuclear Medicine, Kodaira, Tokyo (Japan); Collaboration: Japanese Alzheimer' s Disease Neuroimaging Initiative

    2013-07-15

    This study aimed to investigate whether the effect of scanner on cortex volumetry with an atlas-based method is reduced by using Diffeomorphic Anatomical Registration Through Exponentiated Lie Algebra (DARTEL) normalization compared with standard normalization. Three-dimensional T1-weighted magnetic resonance images (3D-T1WIs) of 21 healthy subjects were obtained and evaluated for the effect of scanner on cortex volumetry; each subject was imaged on each of five different MRI scanners. We used the Voxel-Based Morphometry 8 tool implemented in Statistical Parametric Mapping 8 and the WFU PickAtlas software (based on the Talairach brain atlas). The following software default settings were used as bilateral region-of-interest labels: "Frontal Lobe," "Hippocampus," "Occipital Lobe," "Orbital Gyrus," "Parietal Lobe," "Putamen," and "Temporal Lobe." The effect of scanner on cortex volumetry with the atlas-based method was reduced with DARTEL normalization compared with standard normalization in the Frontal Lobe, Occipital Lobe, Orbital Gyrus, Putamen, and Temporal Lobe; was unchanged in the Hippocampus and Parietal Lobe; and showed no increase with DARTEL normalization for any region of interest (ROI). DARTEL normalization thus reduces the effect of scanner, which is a major problem in multicenter studies. (orig.)

  16. Diffeomorphic Anatomical Registration Through Exponentiated Lie Algebra provides reduced effect of scanner for cortex volumetry with atlas-based method in healthy subjects

    International Nuclear Information System (INIS)

    Goto, Masami; Ino, Kenji; Yano, Keiichi; Abe, Osamu; Aoki, Shigeki; Hayashi, Naoto; Miyati, Tosiaki; Takao, Hidemasa; Mori, Harushi; Kunimatsu, Akira; Ohtomo, Kuni; Iwatsubo, Takeshi; Yamashita, Fumio; Matsuda, Hiroshi

    2013-01-01

    This study aimed to investigate whether the effect of scanner on cortex volumetry with an atlas-based method is reduced by using Diffeomorphic Anatomical Registration Through Exponentiated Lie Algebra (DARTEL) normalization compared with standard normalization. Three-dimensional T1-weighted magnetic resonance images (3D-T1WIs) of 21 healthy subjects were obtained and evaluated for the effect of scanner on cortex volumetry; each subject was imaged on each of five different MRI scanners. We used the Voxel-Based Morphometry 8 tool implemented in Statistical Parametric Mapping 8 and the WFU PickAtlas software (based on the Talairach brain atlas). The following software default settings were used as bilateral region-of-interest labels: "Frontal Lobe," "Hippocampus," "Occipital Lobe," "Orbital Gyrus," "Parietal Lobe," "Putamen," and "Temporal Lobe." The effect of scanner on cortex volumetry with the atlas-based method was reduced with DARTEL normalization compared with standard normalization in the Frontal Lobe, Occipital Lobe, Orbital Gyrus, Putamen, and Temporal Lobe; was unchanged in the Hippocampus and Parietal Lobe; and showed no increase with DARTEL normalization for any region of interest (ROI). DARTEL normalization thus reduces the effect of scanner, which is a major problem in multicenter studies. (orig.)

  17. Assessment of automatic segmentation of teeth using a watershed-based method.

    Science.gov (United States)

    Galibourg, Antoine; Dumoncel, Jean; Telmon, Norbert; Calvet, Adèle; Michetti, Jérôme; Maret, Delphine

    2018-01-01

    Tooth 3D automatic segmentation (AS) is being actively developed in research and clinical fields. Here, we assess the effect of automatic segmentation using a watershed-based method on the accuracy and reproducibility of 3D reconstructions in volumetric measurements by comparing it with a semi-automatic segmentation (SAS) method that has already been validated. The study sample comprised 52 teeth, scanned with micro-CT (41 µm voxel size) and CBCT (76, 200, and 300 µm voxel sizes). Each tooth was segmented by AS based on a watershed method and by SAS. For all surface reconstructions, volumetric measurements were obtained and analysed statistically. Surfaces were then aligned using the SAS surfaces as the reference. The topography of the geometric discrepancies was displayed with a colour map, allowing the maximum differences to be located. AS reconstructions showed tooth volumes similar to SAS for the 41 µm voxel size; a difference in volumes was observed for CBCT data and increased with voxel size. The maximum differences were found mainly at the cervical margins and incisal edges, but the general form was preserved. Micro-CT, a modality used in dental research, provides data that can be segmented automatically, which is time-saving. AS with CBCT data enables the general form of the region of interest to be displayed. However, our AS method can still be used for metrically reliable measurements in clinical dentistry if some manual refinements are applied.
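
    To make the watershed idea above concrete, the sketch below shows a generic marker-based watershed segmentation with scikit-image. The abstract does not give the paper's actual pipeline, so the distance transform, the peak markers, the `min_distance` value, and the function name `watershed_segment` are illustrative assumptions, not the authors' implementation.

```python
# Minimal marker-based watershed sketch (generic stand-in; the paper's
# exact pipeline is not specified in the abstract).
import numpy as np
from scipy import ndimage as ndi
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

def watershed_segment(binary_volume):
    """Split a binary tooth mask into catchment basins via watershed."""
    # Distance to background: bright ridges at object centers.
    distance = ndi.distance_transform_edt(binary_volume)
    # One marker per local maximum of the distance map.
    peaks = peak_local_max(distance, labels=binary_volume.astype(int),
                           min_distance=5)
    markers = np.zeros(binary_volume.shape, dtype=int)
    markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
    # Flood the inverted distance map from the markers.
    return watershed(-distance, markers, mask=binary_volume)
```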

  18. A novel method based on learning automata for automatic lesion detection in breast magnetic resonance imaging.

    Science.gov (United States)

    Salehi, Leila; Azmi, Reza

    2014-07-01

    Breast cancer continues to be a significant public health problem in the world, and early detection is the key to improving its prognosis. Magnetic resonance imaging (MRI) is emerging as a powerful tool for the detection of breast cancer, but breast MRI presently has two major challenges. First, its specificity is relatively poor: it produces many false positives (FPs). Second, the method involves acquiring several high-resolution image volumes before, during, and after the injection of a contrast agent, and this large volume of data makes interpretation by the radiologist both complex and time-consuming. These challenges have led to the development of computer-aided detection systems to improve the efficiency and accuracy of the interpretation process. Detection of suspicious regions of interest (ROIs) is a critical preprocessing step in dynamic contrast-enhanced (DCE)-MRI data evaluation. This paper therefore introduces a new automatic method, based on region growing, to detect suspicious ROIs in breast DCE-MRI. The results indicate that the proposed method reliably identifies suspicious regions (accuracy of 75.39 ± 3.37 on the PIDER breast MRI dataset). Furthermore, the method averages 7.92 FPs per image, a considerable improvement over other methods such as ROI hunter.

  19. Activity-based costing method

    Directory of Open Access Journals (Sweden)

    Èuchranová Katarína

    2001-06-01

    Full Text Available Activity-based costing is a method of identifying and tracking the operating costs directly associated with processing items. It is the practice of focusing on some unit of output, such as a purchase order or an assembled automobile, and attempting to determine its total cost as precisely as possible based on the fixed and variable costs of the inputs. ABC is used to identify, quantify, and analyze the various cost drivers (such as labor, materials, administrative overhead, and rework) and to determine which ones are candidates for reduction. A process is any activity that accepts inputs, adds value to those inputs for customers, and produces outputs for those customers; the customer may be either internal or external to the organization. Every activity within an organization comprises one or more processes, to which inputs, controls, and resources are supplied. A process owner is the person responsible for performing and/or controlling the activity. Tracing costs to individual activities and processes in this way is a modern theme, and the introduction of the method is connected with very important changes in the firm's processes. The ABC method is an instrument that brings a competitive advantage to the firm.
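
    The allocation arithmetic behind ABC can be shown in a few lines. The sketch below is a minimal illustration, not the author's procedure: the activity pools, driver names, and all figures are invented.

```python
# Illustrative activity-based costing allocation (all figures invented).
# Each activity pool is spread over products in proportion to how much
# of the pool's cost driver each product consumes.
activity_pools = {           # activity -> (total cost, driver units per product)
    "purchasing":  (20_000, {"A": 150, "B": 50}),    # driver: purchase orders
    "assembly":    (60_000, {"A": 400, "B": 600}),   # driver: labor hours
    "rework":      (10_000, {"A": 30,  "B": 70}),    # driver: rework jobs
}

product_cost = {"A": 0.0, "B": 0.0}
for activity, (pool_cost, drivers) in activity_pools.items():
    rate = pool_cost / sum(drivers.values())         # cost per driver unit
    for product, units in drivers.items():
        product_cost[product] += rate * units

print(product_cost)  # overhead traced to each product by activity
```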

  20. Comparison of Hounsfield units by changing in size of physical area and setting size of region of interest by using the CT phantom made with a 3D printer

    International Nuclear Information System (INIS)

    Seung, Youl Hun

    2015-01-01

    In this study, we observed the change in Hounsfield units (HU) caused by varying the size of the physical area measured and the size of the region of interest (ROI), with a focus on kVp and mAs. Four-channel multi-detector computed tomography was used to obtain transverse axial images and HU values. A three-dimensional printer of the fused deposition modeling (FDM) type was used to produce the phantom, which was designed as a cylinder containing symmetrically located circular holes of 33 mm, 24 mm, 19 mm, 16 mm, and 9 mm. The holes were filled with a mixture of iodine contrast agent and distilled water. Images were acquired at 90 kVp, 120 kVp, and 140 kVp and at 50 mAs, 100 mAs, and 150 mAs, respectively. ImageJ was used to measure the HU of the ROIs in the acquired images. As a result, it was confirmed that kVp affects HU more than mAs. The results also suggest that the smaller the physical area, the lower the HU, even in a material of uniform density, and the smaller the ROI setting, the higher the HU. Therefore, setting the largest ROI that keeps the variation within 5 HU is the best way to minimize the changes caused by the size of the physical area and the ROI setting.

  1. Comparison of Hounsfield units by changing in size of physical area and setting size of region of interest by using the CT phantom made with a 3D printer

    Energy Technology Data Exchange (ETDEWEB)

    Seung, Youl Hun [Dept. of Radiological Science, Cheongju University, Cheongju (Korea, Republic of)

    2015-12-15

    In this study, we observed the change in Hounsfield units (HU) caused by varying the size of the physical area measured and the size of the region of interest (ROI), with a focus on kVp and mAs. Four-channel multi-detector computed tomography was used to obtain transverse axial images and HU values. A three-dimensional printer of the fused deposition modeling (FDM) type was used to produce the phantom, which was designed as a cylinder containing symmetrically located circular holes of 33 mm, 24 mm, 19 mm, 16 mm, and 9 mm. The holes were filled with a mixture of iodine contrast agent and distilled water. Images were acquired at 90 kVp, 120 kVp, and 140 kVp and at 50 mAs, 100 mAs, and 150 mAs, respectively. ImageJ was used to measure the HU of the ROIs in the acquired images. As a result, it was confirmed that kVp affects HU more than mAs. The results also suggest that the smaller the physical area, the lower the HU, even in a material of uniform density, and the smaller the ROI setting, the higher the HU. Therefore, setting the largest ROI that keeps the variation within 5 HU is the best way to minimize the changes caused by the size of the physical area and the ROI setting.
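
    The ROI measurement underlying both records is simple to state in code. The sketch below computes the mean value inside circular ROIs of decreasing radius placed at a fixed center; the synthetic slice, center, and radii are invented stand-ins for the phantom data (a real study would load the CT image, e.g. via pydicom).

```python
# Mean value inside circular ROIs of varying radius at one center.
import numpy as np

def mean_in_circular_roi(image, center, radius):
    yy, xx = np.indices(image.shape)
    mask = (yy - center[0]) ** 2 + (xx - center[1]) ** 2 <= radius ** 2
    return image[mask].mean()

rng = np.random.default_rng(0)
slice_hu = 100 + 20 * rng.standard_normal((256, 256))  # fake uniform insert
for r in (40, 20, 10, 5):
    print(r, round(mean_in_circular_roi(slice_hu, (128, 128), r), 2))
```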

  2. The method for detecting small lesions in medical image based on sliding window

    Science.gov (United States)

    Han, Guilai; Jiao, Yuan

    2016-10-01

    Current research on computer-aided diagnosis typically involves segmenting sample images, extracting visual features, learning a classification model, and classifying inspected images according to that model. However, this approach is computationally heavy and slow, and because medical images usually have low contrast, traditional image segmentation methods often fail completely on them. To find the region of interest as quickly as possible and improve detection speed, this work introduces the currently popular visual attention model into small-lesion detection. The Itti model, however, was designed mainly for natural images and performs poorly on medical images, which are usually grayscale. In the early stages of some cancers in particular, the lesion is not the most salient region of the whole image and can be very difficult to find, even though it is prominent locally. This paper therefore proposes a visual attention mechanism based on a sliding window, using the window to compute the saliency of each local area. Combined with the characteristics of lesions, gray-level, entropy, corner, and edge features are selected to generate a saliency map, from which the salient region is segmented and classified. This method reduces the difficulty of image segmentation, improves the detection accuracy for small lesions, and is of great significance for the early discovery, diagnosis, and treatment of cancers.
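
    A minimal version of the sliding-window saliency computation might look as follows. The abstract names gray-level, entropy, corner, and edge features; this sketch uses only local entropy and mean edge strength, and the window size, step, bin count, and equal weighting are assumptions of the sketch.

```python
# Sliding-window saliency sketch using two of the cues named above
# (local entropy and edge density) on an 8-bit grayscale image.
import numpy as np

def local_saliency(gray, win=16, step=8):
    gy, gx = np.gradient(gray.astype(float))
    edge = np.hypot(gx, gy)                           # edge-strength map
    h, w = gray.shape
    sal = np.zeros(((h - win) // step + 1, (w - win) // step + 1))
    for i in range(sal.shape[0]):
        for j in range(sal.shape[1]):
            patch = gray[i*step:i*step+win, j*step:j*step+win]
            hist, _ = np.histogram(patch, bins=32, range=(0, 255))
            p = hist / hist.sum()
            p = p[p > 0]
            entropy = -(p * np.log2(p)).sum()         # texture cue
            edges = edge[i*step:i*step+win, j*step:j*step+win].mean()
            sal[i, j] = entropy + edges               # naive equal weighting
    return sal                                        # coarse saliency map
```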

  3. LSHSIM: A Locality Sensitive Hashing based method for multiple-point geostatistics

    Science.gov (United States)

    Moura, Pedro; Laber, Eduardo; Lopes, Hélio; Mesejo, Daniel; Pavanelli, Lucas; Jardim, João; Thiesen, Francisco; Pujol, Gabriel

    2017-10-01

    Reservoir modeling is a very important task that permits the representation of a geological region of interest, so as to generate a considerable number of possible scenarios. Since its inception, many methodologies have been proposed and, in the last two decades, multiple-point geostatistics (MPS) has been the dominant one. This methodology is strongly based on the concept of a training image (TI) and the use of its characteristics, which are called patterns. In this paper, we propose a new MPS method that combines a technique called Locality Sensitive Hashing (LSH), which accelerates the search for patterns similar to a target one, with a Run-Length Encoding (RLE) compression technique that speeds up the calculation of the Hamming similarity. Experiments with both categorical and continuous images show that LSHSIM is computationally efficient and produces good-quality realizations. In particular, for categorical data, the results suggest that LSHSIM is faster than MS-CCSIM, one of the state-of-the-art methods.
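
    The core LSH trick — hashing patterns so that similar ones collide — can be sketched briefly. The bit-sampling scheme below is a textbook LSH family for Hamming distance, not LSHSIM's actual tables; the pattern size, number of sampled bits, and distance threshold are invented.

```python
# Minimal bit-sampling LSH for binary patterns under Hamming distance
# (illustrative; LSHSIM's actual tables/parameters are not given here).
import numpy as np
from collections import defaultdict

rng = np.random.default_rng(1)
patterns = rng.integers(0, 2, size=(10_000, 64))   # flattened 8x8 patterns
bits = rng.choice(64, size=12, replace=False)      # one hash: 12 sampled bits

buckets = defaultdict(list)
for idx, p in enumerate(patterns):
    buckets[tuple(p[bits])].append(idx)            # similar patterns collide

def near_matches(target, max_dist=8):
    """Candidate indices in the target's bucket, verified by exact Hamming."""
    cand = buckets.get(tuple(target[bits]), [])
    return [i for i in cand
            if np.count_nonzero(patterns[i] != target) <= max_dist]
```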

  4. A multilevel-ROI-features-based machine learning method for detection of morphometric biomarkers in Parkinson's disease.

    Science.gov (United States)

    Peng, Bo; Wang, Suhong; Zhou, Zhiyong; Liu, Yan; Tong, Baotong; Zhang, Tao; Dai, Yakang

    2017-06-09

    Machine learning methods have been widely used in recent years for detecting neuroimaging biomarkers in regions of interest (ROIs) and assisting diagnosis of neurodegenerative diseases. The innovation of this study is to use a multilevel-ROI-features-based machine learning method to detect sensitive morphometric biomarkers in Parkinson's disease (PD). Specifically, low-level ROI features (gray matter volume, cortical thickness, etc.) and high-level correlative features (connectivity between ROIs) are integrated to construct the multilevel ROI features. Filter- and wrapper-based feature selection and a multi-kernel support vector machine (SVM) are used in the classification algorithm. T1-weighted brain magnetic resonance (MR) images of 69 PD patients and 103 normal controls from the Parkinson's Progression Markers Initiative (PPMI) dataset are included in the study. The method performs well in classifying PD patients versus normal controls, with an accuracy of 85.78%, a specificity of 87.79%, and a sensitivity of 87.64%. The most sensitive biomarkers between PD patients and normal controls are mainly distributed in the frontal lobe, parietal lobe, limbic lobe, temporal lobe, and central region. The classification performance of our method with multilevel ROI features is significantly better than that of other classification methods using single-level features. The proposed method shows promising identification ability for detecting morphometric biomarkers in PD, thus confirming its potential for assisting diagnosis of the disease. Copyright © 2017 Elsevier B.V. All rights reserved.

  5. Delineation of wetland areas from high resolution WorldView-2 data by object-based method

    International Nuclear Information System (INIS)

    Hassan, N; Hamid, J R A; Adnan, N A; Jaafar, M

    2014-01-01

    Various classification methods are available for delineating land cover types; the object-based method is one of them. This paper focuses on the digital image processing aspects of discriminating wetland areas via an object-based method using high-resolution multispectral WorldView-2 satellite data taken over part of the Penang Island region. This research attempts to improve wetland delineation by examining a range of classification techniques applicable to satellite data of high spatial and spectral resolution such as WorldView-2, with the intent of determining a suitable approach to delineate and map these wetland areas more appropriately. Two parameters pivotal to the object-based method must be taken into account: the spatial resolution and the range of spectral channels of the imaging sensor system. The preliminary results of the study showed that object-based analysis is capable of delineating the wetland region of interest with an accuracy acceptable to the required tolerance for land cover classification

  6. A nodal method based on matrix-response method

    International Nuclear Information System (INIS)

    Rocamora Junior, F.D.; Menezes, A.

    1982-01-01

    A nodal method based on the matrix-response method is presented, and its application to spatial gradient problems, such as those that exist in fast reactors near the core-blanket interface, is investigated. (E.G.) [pt

  7. [Bases and methods of suturing].

    Science.gov (United States)

    Vogt, P M; Altintas, M A; Radtke, C; Meyer-Marcotty, M

    2009-05-01

    If pharmaceutical modulation of scar formation does not improve the quality of the healing process over conventional healing, the surgeon must rely on personal skill and experience. Therefore, a profound knowledge of wound healing based on experimental and clinical studies, supplemented by postsurgical means of scar management and basic techniques of planning incisions, careful tissue handling, and thorough knowledge of suturing, remains the most important way to avoid abnormal scarring. This review summarizes the current experimental and clinical bases of surgical scar management.

  8. Based on Penalty Function Method

    Directory of Open Access Journals (Sweden)

    Ishaq Baba

    2015-01-01

    Full Text Available The dual response surface approach of optimizing the mean and variance models as separate functions suffers from deficiencies in handling the trade-off between the bias and variance components of the mean squared error (MSE). In this paper, the accuracy of the predicted response is given serious attention in determining the optimum setting conditions. We consider four different objective functions for the dual response surface optimization approach. The essence of the proposed method is to reduce the influence of the variance of the predicted response by minimizing the variability relative to the quality characteristics of interest while achieving the specified target output. The basic idea is to convert the constrained optimization problem into an unconstrained one by adding the constraint to the original objective function. Numerical examples and a simulation study are carried out to compare the performance of the proposed method with some existing procedures. Numerical results show that the performance of the proposed method is encouraging and exhibits clear improvement over the existing approaches.
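
    The conversion described — folding the constraint into the objective — is the classical quadratic-penalty idea, sketched below on a generic problem. The objective, constraint, and penalty schedule are invented stand-ins, not the paper's dual-response functions.

```python
# Quadratic-penalty sketch: minimize f subject to g(x) = 0 by minimizing
# f(x) + r * g(x)^2 with growing r (generic example only).
import numpy as np
from scipy.optimize import minimize

f = lambda x: (x[0] - 2) ** 2 + (x[1] - 1) ** 2   # objective
g = lambda x: x[0] + x[1] - 1                     # constraint g(x) = 0

x = np.zeros(2)
for r in (1.0, 10.0, 100.0, 1000.0):
    # Warm-start each solve from the previous penalty level's optimum.
    x = minimize(lambda x: f(x) + r * g(x) ** 2, x).x

print(x, g(x))  # approaches the constrained optimum as r grows
```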

  9. COMPANY VALUATION METHODS BASED ON PATRIMONY

    Directory of Open Access Journals (Sweden)

    SUCIU GHEORGHE

    2013-02-01

    Full Text Available The methods used for company valuation can be divided into three main groups: methods based on patrimony, methods based on financial performance, and methods based on both patrimony and performance. Company valuation methods based on patrimony are implemented taking into account the balance sheet or the financial statement. The financial statement refers to the type of balance in which assets are arranged according to liquidity and liabilities according to their financial maturity date. The patrimonial methods rest on the principle that the value of the company equals that of the patrimony it owns; from a legal point of view, the patrimony refers to all the rights and obligations of a company. Valuation based on financial performance can be done in three ways: the return value, the yield value, and the present value of cash flows. The mixed methods depend on both patrimony and financial performance, or can make use of other methods.

  10. Defect inspection in hot slab surface: multi-source CCD imaging based fuzzy-rough sets method

    Science.gov (United States)

    Zhao, Liming; Zhang, Yi; Xu, Xiaodong; Xiao, Hong; Huang, Chao

    2016-09-01

    To provide an accurate surface defect inspection method and make automated, robust delineation of image regions of interest (ROIs) a reality on the production line, a multi-source CCD imaging based fuzzy-rough sets method is proposed for hot slab surface quality assessment. The applicability of the presented method and the devised system extends to surface quality inspection for strips, billets, slabs, and similar products. In this work we exploit the complementary advantages of two common machine vision (MV) systems: line-array CCD traditional scanning imaging (LS-imaging) and area-array CCD laser three-dimensional (3D) scanning imaging (AL-imaging). By establishing a fuzzy-rough sets model in the detection system, the seeds for relative fuzzy connectedness (RFC) delineation of ROIs can be placed adaptively: the model introduces upper and lower approximation sets for ROI definition, and the boundary region is delineated by the RFC region-competitive classification mechanism. For the first time, a multi-source CCD imaging based fuzzy-rough sets strategy is attempted for continuous-casting slab surface defect inspection, allowing automatic AI algorithms and powerful ROI delineation strategies to be applied in the MV inspection field.

  11. 3D CSEM inversion based on goal-oriented adaptive finite element method

    Science.gov (United States)

    Zhang, Y.; Key, K.

    2016-12-01

    We present a parallel 3D frequency-domain controlled-source electromagnetic inversion code named MARE3DEM. Non-linear inversion of observed data is performed with the Occam variant of regularized Gauss-Newton optimization. The forward operator is based on the goal-oriented finite element method, which efficiently calculates the responses and sensitivity kernels in parallel using a data decomposition scheme where independent modeling tasks contain different frequencies and subsets of the transmitters and receivers. To accommodate complex 3D conductivity variation with high flexibility and precision, we adopt a dual-grid approach in which the forward mesh conforms to the inversion parameter grid and is adaptively refined until the forward solution converges to the desired accuracy. This dual-grid approach is memory efficient, since the inverse parameter grid remains independent of the fine meshing generated around the transmitters and receivers by the adaptive finite element method. Moreover, the unstructured inverse mesh efficiently handles multi-scale structures and allows for fine-scale model parameters within the region of interest. Our mesh generation engine keeps track of the refinement hierarchy so that the mapping of conductivity and sensitivity kernels between the forward and inverse meshes is retained. We employ the adjoint-reciprocity method to calculate the sensitivity kernels, which establish a linear relationship between changes in the conductivity model and changes in the modeled responses. Our code uses a direct solver for the linear systems, so the adjoint problem is efficiently computed by re-using the factorization from the primary problem. Further computational efficiency and scalability are obtained in the regularized Gauss-Newton portion of the inversion using parallel dense matrix-matrix multiplication and matrix factorization routines implemented with the ScaLAPACK library. We show the scalability, reliability and the potential of the algorithm to deal with
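
    A single regularized Gauss-Newton model update of the kind used in Occam-style inversion can be written compactly. The sketch below uses a toy nonlinear forward model in place of the CSEM operator; the Tikhonov term, step count, and all data are invented for illustration.

```python
# One Tikhonov-regularized Gauss-Newton step for a generic forward model
# F(m) (toy stand-in for the CSEM forward operator described above).
import numpy as np

def gauss_newton_step(m, F, J, d_obs, mu):
    """m: model; F(m): predicted data; J: Jacobian at m; mu: regularization."""
    r = d_obs - F(m)                       # data residual
    A = J.T @ J + mu * np.eye(m.size)      # regularized normal equations
    return m + np.linalg.solve(A, J.T @ r - mu * m)

# Toy example: mildly nonlinear forward model F(m) = G @ m**2.
rng = np.random.default_rng(0)
G = rng.standard_normal((20, 5))
m_true = rng.uniform(0.5, 1.5, 5)
d_obs = G @ m_true**2

m = np.ones(5)
for _ in range(20):
    J = G * (2 * m)                        # dF/dm for F = G @ m**2
    m = gauss_newton_step(m, lambda m: G @ m**2, J, d_obs, mu=1e-3)
```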

  12. Automated artery-venous classification of retinal blood vessels based on structural mapping method

    Science.gov (United States)

    Joshi, Vinayak S.; Garvin, Mona K.; Reinhardt, Joseph M.; Abramoff, Michael D.

    2012-03-01

    Retinal blood vessels show morphologic modifications in response to various retinopathies. However, the specific responses exhibited by arteries and veins may provide more precise diagnostic information; for example, diabetic retinopathy may be detected more accurately from venous dilation than from average vessel dilation. In order to analyze vessel-type-specific morphologic modifications, the vessel network must be classified into arteries and veins. We previously described a method for identifying and separating retinal vessel trees, i.e., structural mapping. Here we propose artery-venous classification based on structural mapping and on the color properties characteristic of each vessel type. The mean and standard deviation of the green-channel intensity and of the hue-channel intensity are analyzed in a region of interest around each centerline pixel of a vessel. Using the vector of color properties extracted from each centerline pixel, the pixel is assigned to one of two clusters (artery and vein) obtained by fuzzy c-means clustering. According to the proportion of clustered centerline pixels in a particular vessel, and utilizing the artery-venous crossing property of retinal vessels, each vessel is assigned the label of artery or vein. The classification results are compared with a manually annotated ground truth (gold standard). We applied the proposed method to a dataset of 15 retinal color fundus images, achieving an accuracy of 88.28% correctly classified vessel pixels. The automated classification results match the gold standard well, suggesting the method's potential for artery-venous classification and the respective morphology analysis.
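
    The fuzzy c-means step can be sketched in plain NumPy. The fuzzifier m = 2, iteration count, and feature layout below are common defaults rather than the paper's settings.

```python
# Minimal fuzzy c-means (fuzzifier m = 2) for 2-class clustering of
# per-pixel color feature vectors; parameters are common defaults.
import numpy as np

def fuzzy_cmeans(X, c=2, m=2.0, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)          # fuzzy memberships
    for _ in range(iters):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None], axis=2) + 1e-12
        U = 1.0 / d ** (2.0 / (m - 1.0))       # inverse-distance memberships
        U /= U.sum(axis=1, keepdims=True)
    return U, centers

# X: rows of (mean_green, std_green, mean_hue, std_hue) per centerline pixel.
```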

  13. A nodal method based on the response-matrix method

    International Nuclear Information System (INIS)

    Cunha Menezes Filho, A. da; Rocamora Junior, F.D.

    1983-02-01

    A nodal approach based on the response-matrix method is presented with the purpose of investigating the possibility of mixing two different allocations in the same problem. It is found that combining the allocation of albedo with the allocation of direct reflection produces good results for homogeneous fast reactor configurations. (Author) [pt

  14. Integration tests of prototype LVL1 calorimeter trigger CP/JEP ROD and LVL2 trigger Region-of-Interest Builder. Also visible in the photo are two further racks containing the demonstrator prototypes of the LVL1 CTP and the MUCTPI.

    CERN Multimedia

    Gee, N

    2001-01-01

    Integration tests of prototype LVL1 calorimeter trigger CP/JEP ROD and LVL2 trigger Region-of-Interest Builder. Also visible in the photo are two further racks containing the demonstrator prototypes of the LVL1 CTP and the MUCTPI.

  15. Color image definition evaluation method based on deep learning method

    Science.gov (United States)

    Liu, Di; Li, YingChun

    2018-01-01

    In order to evaluate different blur levels of color images and improve image definition evaluation, this paper proposes a no-reference color image clarity evaluation method based on a deep learning framework and a BP neural network classification model. First, VGG16 is used as the feature extractor to obtain 4,096-dimensional features from the images; the extracted features of labeled images are then used to train the BP neural network, which finally performs the color image definition evaluation. The method is tested on images from the CSIQ database, blurred at different levels to produce 4,000 images in total, divided into three categories, each representing one blur level. Of the high-dimensional feature samples, 300 of every 400 are used to train VGG16 and the BP neural network, and the remaining 100 are used for testing. The experimental results show that the method takes full advantage of the learning and characterization capability of deep learning. Unlike most existing image clarity evaluation methods, which rely on manually designed and extracted features, this method extracts image features automatically and achieves excellent image quality classification accuracy on the test set: 96%. Moreover, the predicted quality levels of the original color images are similar to the perception of the human visual system.
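
    In outline, the feature-extraction stage might be set up as below with Keras, whose bundled VGG16 exposes a 4,096-dimensional layer named 'fc2'. The small dense network standing in for the BP classifier, its layer sizes, and the commented training call are assumptions of this sketch.

```python
# Sketch: 4,096-D VGG16 'fc2' features feeding a small dense classifier
# standing in for the BP network (three blur-level classes, as above).
import tensorflow as tf

base = tf.keras.applications.VGG16(weights="imagenet", include_top=True)
extractor = tf.keras.Model(base.input, base.get_layer("fc2").output)

classifier = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(4096,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(3, activation="softmax"),   # three blur levels
])
classifier.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

# images: float tensor (N, 224, 224, 3); labels: ints in {0, 1, 2}
# feats = extractor.predict(tf.keras.applications.vgg16.preprocess_input(images))
# classifier.fit(feats, labels, epochs=10)
```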

  16. Computer aided diagnosis based on medical image processing and artificial intelligence methods

    Science.gov (United States)

    Stoitsis, John; Valavanis, Ioannis; Mougiakakou, Stavroula G.; Golemati, Spyretta; Nikita, Alexandra; Nikita, Konstantina S.

    2006-12-01

    Advances in imaging technology and computer science have greatly enhanced interpretation of medical images, and contributed to early diagnosis. The typical architecture of a Computer Aided Diagnosis (CAD) system includes image pre-processing, definition of region(s) of interest, features extraction and selection, and classification. In this paper, the principles of CAD systems design and development are demonstrated by means of two examples. The first one focuses on the differentiation between symptomatic and asymptomatic carotid atheromatous plaques. For each plaque, a vector of texture and motion features was estimated, which was then reduced to the most robust ones by means of ANalysis of VAriance (ANOVA). Using fuzzy c-means, the features were then clustered into two classes. Clustering performances of 74%, 79%, and 84% were achieved for texture only, motion only, and combinations of texture and motion features, respectively. The second CAD system presented in this paper supports the diagnosis of focal liver lesions and is able to characterize liver tissue from Computed Tomography (CT) images as normal, hepatic cyst, hemangioma, and hepatocellular carcinoma. Five texture feature sets were extracted for each lesion, while a genetic algorithm based feature selection method was applied to identify the most robust features. The selected feature set was fed into an ensemble of neural network classifiers. The achieved classification performance was 100%, 93.75% and 90.63% in the training, validation and testing set, respectively. It is concluded that computerized analysis of medical images in combination with artificial intelligence can be used in clinical practice and may contribute to more efficient diagnosis.
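
    The ANOVA-based feature reduction step of the first example is easy to sketch with scikit-learn; the synthetic data, labels, and the choice of k below are placeholders.

```python
# ANOVA-based feature reduction as in the plaque example: keep the
# features whose class-wise means differ most (k is a placeholder).
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 40))   # 100 plaques x 40 texture/motion features
y = rng.integers(0, 2, 100)          # symptomatic vs asymptomatic (fake labels)

selector = SelectKBest(score_func=f_classif, k=10).fit(X, y)
X_robust = selector.transform(X)     # the 10 most discriminative features
print(selector.get_support(indices=True))
```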

  17. Computer aided diagnosis based on medical image processing and artificial intelligence methods

    International Nuclear Information System (INIS)

    Stoitsis, John; Valavanis, Ioannis; Mougiakakou, Stavroula G.; Golemati, Spyretta; Nikita, Alexandra; Nikita, Konstantina S.

    2006-01-01

    Advances in imaging technology and computer science have greatly enhanced interpretation of medical images, and contributed to early diagnosis. The typical architecture of a Computer Aided Diagnosis (CAD) system includes image pre-processing, definition of region(s) of interest, features extraction and selection, and classification. In this paper, the principles of CAD systems design and development are demonstrated by means of two examples. The first one focuses on the differentiation between symptomatic and asymptomatic carotid atheromatous plaques. For each plaque, a vector of texture and motion features was estimated, which was then reduced to the most robust ones by means of ANalysis of VAriance (ANOVA). Using fuzzy c-means, the features were then clustered into two classes. Clustering performances of 74%, 79%, and 84% were achieved for texture only, motion only, and combinations of texture and motion features, respectively. The second CAD system presented in this paper supports the diagnosis of focal liver lesions and is able to characterize liver tissue from Computed Tomography (CT) images as normal, hepatic cyst, hemangioma, and hepatocellular carcinoma. Five texture feature sets were extracted for each lesion, while a genetic algorithm based feature selection method was applied to identify the most robust features. The selected feature set was fed into an ensemble of neural network classifiers. The achieved classification performance was 100%, 93.75% and 90.63% in the training, validation and testing set, respectively. It is concluded that computerized analysis of medical images in combination with artificial intelligence can be used in clinical practice and may contribute to more efficient diagnosis

  18. Computer aided diagnosis based on medical image processing and artificial intelligence methods

    Energy Technology Data Exchange (ETDEWEB)

    Stoitsis, John [National Technical University of Athens, School of Electrical and Computer Engineering, Athens 157 71 (Greece)]. E-mail: stoitsis@biosim.ntua.gr; Valavanis, Ioannis [National Technical University of Athens, School of Electrical and Computer Engineering, Athens 157 71 (Greece); Mougiakakou, Stavroula G. [National Technical University of Athens, School of Electrical and Computer Engineering, Athens 157 71 (Greece); Golemati, Spyretta [National Technical University of Athens, School of Electrical and Computer Engineering, Athens 157 71 (Greece); Nikita, Alexandra [University of Athens, Medical School 152 28 Athens (Greece); Nikita, Konstantina S. [National Technical University of Athens, School of Electrical and Computer Engineering, Athens 157 71 (Greece)

    2006-12-20

    Advances in imaging technology and computer science have greatly enhanced interpretation of medical images, and contributed to early diagnosis. The typical architecture of a Computer Aided Diagnosis (CAD) system includes image pre-processing, definition of region(s) of interest, features extraction and selection, and classification. In this paper, the principles of CAD systems design and development are demonstrated by means of two examples. The first one focuses on the differentiation between symptomatic and asymptomatic carotid atheromatous plaques. For each plaque, a vector of texture and motion features was estimated, which was then reduced to the most robust ones by means of ANalysis of VAriance (ANOVA). Using fuzzy c-means, the features were then clustered into two classes. Clustering performances of 74%, 79%, and 84% were achieved for texture only, motion only, and combinations of texture and motion features, respectively. The second CAD system presented in this paper supports the diagnosis of focal liver lesions and is able to characterize liver tissue from Computed Tomography (CT) images as normal, hepatic cyst, hemangioma, and hepatocellular carcinoma. Five texture feature sets were extracted for each lesion, while a genetic algorithm based feature selection method was applied to identify the most robust features. The selected feature set was fed into an ensemble of neural network classifiers. The achieved classification performance was 100%, 93.75% and 90.63% in the training, validation and testing set, respectively. It is concluded that computerized analysis of medical images in combination with artificial intelligence can be used in clinical practice and may contribute to more efficient diagnosis.

  19. History based batch method preserving tally means

    International Nuclear Information System (INIS)

    Shim, Hyung Jin; Choi, Sung Hoon

    2012-01-01

    In Monte Carlo (MC) eigenvalue calculations, the sample variance of a tally mean calculated from its cycle-wise estimates is biased because of the inter-cycle correlations of the fission source distribution (FSD). Recently, we proposed a new real variance estimation method, named the history-based batch method, in which a MC run is treated as multiple runs with a small number of histories per cycle in order to generate independent tally estimates. In this paper, a history-based batch method based on weight correction is presented that preserves the tally mean of the original MC run. The effectiveness of the new method is examined for the weakly coupled fissile array problem as a function of the dominance ratio and the batch size, in comparison with other available schemes

  20. Model-Based Method for Sensor Validation

    Science.gov (United States)

    Vatan, Farrokh

    2012-01-01

    Fault detection, diagnosis, and prognosis are essential tasks in the operation of autonomous spacecraft, instruments, and in situ platforms. One of NASA's key mission requirements is robust state estimation. Sensing, using a wide range of sensors and sensor fusion approaches, plays a central role in robust state estimation, and there is a need to diagnose sensor failure as well as component failure. Sensor validation can be considered part of the larger effort of improving reliability and safety. The standard methods for solving the sensor validation problem are based on probabilistic analysis of the system, of which the method based on Bayesian networks is the most popular. These methods can therefore only predict the most probable faulty sensors, subject to the initial probabilities defined for the failures. The method developed in this work takes a model-based approach instead and identifies the faulty sensors (if any) that can be logically inferred from the model of the system and the sensor readings (observations). It is also more suitable for systems in which it is hard, or even impossible, to find the probability functions of the system. The method starts with a new mathematical description of the problem and develops a very efficient and systematic algorithm for its solution, building on the concept of analytical redundancy relations (ARRs).

  1. Spectrum estimation method based on marginal spectrum

    International Nuclear Information System (INIS)

    Cai Jianhua; Hu Weiwen; Wang Xianchun

    2011-01-01

    The FFT method cannot meet the basic requirements of power spectrum estimation for non-stationary and short signals. A new spectrum estimation method based on the marginal spectrum of the Hilbert-Huang transform (HHT) was therefore proposed. The procedure for obtaining the marginal spectrum in the HHT method is given and the linear property of the marginal spectrum is demonstrated. Compared with the FFT method, the physical meaning and the frequency resolution of the marginal spectrum are further analyzed. The Hilbert spectrum estimation algorithm is then discussed in detail, and simulation results are given. Theory and simulation show that, for short and non-stationary signals, the frequency resolution and estimation precision of the HHT method are better than those of the FFT method. (authors)
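
    A marginal spectrum of the kind described can be sketched as follows, assuming the third-party PyEMD package for the empirical mode decomposition (the record names no software, so this dependency is an assumption); the bin count and normalization are choices of this sketch.

```python
# Marginal-spectrum sketch: EMD (via the third-party PyEMD package,
# an assumption) -> Hilbert transform per IMF -> accumulate amplitude
# per instantaneous-frequency bin.
import numpy as np
from scipy.signal import hilbert
from PyEMD import EMD  # pip install EMD-signal

def marginal_spectrum(x, fs, nbins=128):
    imfs = EMD()(x)                               # intrinsic mode functions
    freqs = np.linspace(0, fs / 2, nbins + 1)
    spec = np.zeros(nbins)
    for imf in imfs:
        z = hilbert(imf)                          # analytic signal
        amp = np.abs(z)[1:]
        inst_f = np.diff(np.unwrap(np.angle(z))) * fs / (2 * np.pi)
        idx = np.clip(np.digitize(inst_f, freqs) - 1, 0, nbins - 1)
        np.add.at(spec, idx, amp)                 # time-integrate amplitude
    return freqs[:-1], spec / len(x)
```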

  2. Interchange Recognition Method Based on CNN

    Directory of Open Access Journals (Sweden)

    HE Haiwei

    2018-03-01

    Full Text Available The identification and classification of interchange structures in OSM data can provide important information for the construction of multi-scale models, navigation and location services, congestion analysis, etc. Traditional interchange identification methods rely on hand-designed low-level features and cannot effectively distinguish complex interchange structures that contain interfering road sections. In this paper, a new method based on a convolutional neural network is proposed for interchange identification. The method combines vector data with raster images, uses the neural network to learn the fuzzy characteristics of interchanges, and classifies the complex interchange structures in OSM. Experiments show that this method is strongly resistant to interference and achieves good results in classifying complex interchange shapes, with room for further improvement as the case base is expanded and the neural network model is optimized.

  3. Recommendation advertising method based on behavior retargeting

    Science.gov (United States)

    Zhao, Yao; YIN, Xin-Chun; CHEN, Zhi-Min

    2011-10-01

    Online advertising has become an important business in e-commerce, and the ad recommendation algorithm is the most critical part of a recommendation system. We propose a recommendation advertising method based on behavior retargeting that can avoid the loss of advertising clicks due to objective causes and can track changes in the user's interests in a timely manner. Experiments show that our new method has a significant effect and can further be applied to online systems.

  4. Personnel Selection Based on Fuzzy Methods

    Directory of Open Access Journals (Sweden)

    Lourdes Cañós

    2011-03-01

    Full Text Available The decisions managers make regarding the selection of staff strongly determine the success of the company; a correct choice of employees is a source of competitive advantage. We propose a fuzzy method for staff selection based on competence management and on comparison with the valuation the company considers the best in each competence (the ideal candidate). Our method is based on the Hamming distance and a Matching Level Index. The algorithms, implemented in the software StaffDesigner, allow us to rank the candidates even when the competences of the ideal candidate have been evaluated only in part. Our approach is applied in a numerical example.
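
    The distance-based ranking idea can be illustrated with a crisp (non-fuzzy) sketch; the competences, valuations, and normalized Hamming distance below are invented, and the paper's fuzzy intervals and Matching Level Index are omitted.

```python
# Ranking by (normalized) Hamming distance to an ideal competence profile;
# crisp sketch only, all names and numbers invented.
ideal = {"leadership": 0.9, "analysis": 0.8, "teamwork": 1.0}

candidates = {
    "Ana":  {"leadership": 0.7, "analysis": 0.9, "teamwork": 0.8},
    "Luis": {"leadership": 0.9, "analysis": 0.5, "teamwork": 0.9},
}

def hamming(profile, ideal):
    # Mean absolute deviation of each competence from the ideal valuation.
    return sum(abs(profile[c] - ideal[c]) for c in ideal) / len(ideal)

ranking = sorted(candidates, key=lambda name: hamming(candidates[name], ideal))
print(ranking)  # closest candidate first
```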

  5. Examining Brain Morphometry Associated with Self-Esteem in Young Adults Using Multilevel-ROI-Features-Based Classification Method

    Directory of Open Access Journals (Sweden)

    Bo Peng

    2017-05-01

    Full Text Available Purpose: This study examines self-esteem-related brain morphometry on brain magnetic resonance (MR) images using a multilevel-features-based classification method. Method: The multilevel region-of-interest (ROI) features consist of two types: (i) ROI features, which include gray matter volume, white matter volume, cerebrospinal fluid volume, cortical thickness, and cortical surface area, and (ii) similarity features, which are based on the similarity of cortical thickness between ROIs. For each feature type, a hybrid feature selection method comprising filter-based and wrapper-based algorithms is used to select the most discriminating features. ROI features and similarity features are integrated using multi-kernel support vector machines (SVMs) with appropriate weighting factors. Results: The classification performance is improved by using multilevel ROI features, with an accuracy of 96.66%, a specificity of 96.62%, and a sensitivity of 95.67%. The most discriminating ROI features related to self-esteem spread over the occipital lobe, frontal lobe, parietal lobe, limbic lobe, temporal lobe, and central region, mainly involving white matter and cortical thickness. The most discriminating similarity features are distributed in both the right and left hemispheres, including the frontal lobe, occipital lobe, limbic lobe, parietal lobe, and central region, and convey information about structural connections between different brain regions. Conclusion: By using ROI features and similarity features to examine self-esteem-related brain morphometry, this paper provides pilot evidence that self-esteem is linked to specific ROIs and to structural connections between different brain regions.

  6. A spray based method for biofilm removal

    NARCIS (Netherlands)

    Cense, A.W.

    2005-01-01

    Biofilm growth on human teeth is the cause of oral diseases such as caries (tooth decay), gingivitis (inflammation of the gums) and periodontitis (inflammation of the tooth bone). In this thesis, a water-based cleaning method is designed for the removal of oral biofilms, or dental plaque. The first part

  7. Arts-Based Methods in Education

    DEFF Research Database (Denmark)

    Chemi, Tatiana; Du, Xiangyun

    2017-01-01

    This chapter introduces the field of arts-based methods in education with a general theoretical perspective, reviewing the journey of learning in connection to the arts, and the contribution of the arts to societies from an educational perspective. Also presented is the rationale and structure...

  8. Computer Animation Based on Particle Methods

    Directory of Open Access Journals (Sweden)

    Rafal Wcislo

    1999-01-01

    Full Text Available The paper presents the main issues of computer animation of a set of elastic macroscopic objects based on the particle method. The main goal of the generated animations is to achieve very realistic movements in the scene observed on the computer display. The objects (solid bodies) interact mechanically with each other. The movements and deformations of the solids are calculated using the particle method, simulating phenomena connected with the behaviour of solids in the gravitational field, their deformations caused by collisions, and their interactions with an optional liquid medium. The simulation of the liquid is performed using the cellular automata method. The paper presents both simulation schemes (the particle method and the cellular automata rules) and the method of combining them in a single animation program. In order to speed up the execution of the program, a parallel version based on a network of workstations was developed. The paper describes the parallelization methods and considers the problems of load balancing, collision detection, process synchronization, and distributed control of the animation.
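
    The basic particle-method update for an elastic body — damped springs between particles, explicit integration — can be sketched as below; all constants, the unit particle mass, and the crude velocity damping are invented for illustration.

```python
# Minimal particle-method update for an elastic body: particles joined
# by damped springs, explicit Euler integration (constants invented).
import numpy as np

def step(pos, vel, springs, rest, k=50.0, c=0.5, g=9.81, dt=1e-3):
    """pos, vel: (N, 2) arrays; springs: (i, j) index pairs; rest: lengths."""
    force = np.zeros_like(pos)
    force[:, 1] -= g                               # gravity (unit mass)
    for (i, j), L0 in zip(springs, rest):
        d = pos[j] - pos[i]
        L = np.linalg.norm(d) + 1e-12
        f = (k * (L - L0)) * (d / L)               # Hooke spring force
        force[i] += f
        force[j] -= f
    vel = (vel + dt * force) * (1.0 - c * dt)      # crude velocity damping
    return pos + dt * vel, vel
```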

  9. New technology-based recruitment methods

    OpenAIRE

    Oksanen, Reija

    2018-01-01

    The transformation that recruitment might encounter due to big data analytics and artificial intelligence (AI) is particularly fascinating which is why this thesis focuses on the changes recruitment processes are and will be facing as new technological solutions are emerging. The aim and main objective of this study is to widen knowledge about new technology-based recruitment methods, focusing on how they are utilized by Finnish recruitment professionals and how the opportunities and risks th...

  10. A multicore based parallel image registration method.

    Science.gov (United States)

    Yang, Lin; Gong, Leiguang; Zhang, Hong; Nosher, John L; Foran, David J

    2009-01-01

    Image registration is a crucial step for many image-assisted clinical applications such as surgery planning and treatment evaluation. In this paper we propose a landmark-based nonlinear image registration algorithm for matching 2D image pairs. The algorithm is shown to be effective and robust under conditions of large deformations. In landmark-based registration, the most important step is establishing the correspondence among the selected landmark points, which usually requires an extensive and often computationally expensive search. We introduce a non-regular data partition algorithm that uses K-means clustering to group the landmarks according to the number of available processing cores; this step optimizes memory usage and data transfer. We have tested our method on the IBM Cell Broadband Engine (Cell/B.E.) platform.
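
    The core-count-driven landmark partition can be sketched with scikit-learn's KMeans; the landmark coordinates and the fallback cluster count below are placeholders.

```python
# Partition landmark points into one workgroup per available core with
# k-means, mirroring the non-regular data partition described above.
import os
import numpy as np
from sklearn.cluster import KMeans

landmarks = np.random.default_rng(0).uniform(0, 512, size=(300, 2))
n_cores = os.cpu_count() or 4

labels = KMeans(n_clusters=n_cores, n_init=10,
                random_state=0).fit_predict(landmarks)
partitions = [landmarks[labels == c] for c in range(n_cores)]
# Each partition can now be matched independently on its own core.
```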

  11. Lagrangian based methods for coherent structure detection

    Energy Technology Data Exchange (ETDEWEB)

    Allshouse, Michael R., E-mail: mallshouse@chaos.utexas.edu [Center for Nonlinear Dynamics and Department of Physics, University of Texas at Austin, Austin, Texas 78712 (United States); Peacock, Thomas, E-mail: tomp@mit.edu [Department of Mechanical Engineering, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139 (United States)

    2015-09-15

    There has been a proliferation in the development of Lagrangian analytical methods for detecting coherent structures in fluid flow transport, yielding a variety of qualitatively different approaches. We present a review of four approaches and demonstrate the utility of these methods via their application to the same sample analytic model, the canonical double-gyre flow, highlighting the pros and cons of each approach. Two of the methods, the geometric and probabilistic approaches, are well established and require velocity field data over the time interval of interest to identify particularly important material lines and surfaces, and influential regions, respectively. The other two approaches, implementing tools from cluster and braid theory, seek coherent structures based on limited trajectory data, attempting to partition the flow transport into distinct regions. All four of these approaches share the common trait that they are objective methods, meaning that their results do not depend on the frame of reference used. For each method, we also present a number of example applications ranging from blood flow and chemical reactions to ocean and atmospheric flows.

  12. Chapter 11. Community analysis-based methods

    Energy Technology Data Exchange (ETDEWEB)

    Cao, Y.; Wu, C.H.; Andersen, G.L.; Holden, P.A.

    2010-05-01

    Microbial communities are each a composite of populations whose presence and relative abundance in water or other environmental samples are a direct manifestation of environmental conditions, including the introduction of microbe-rich fecal material and factors promoting persistence of the microbes therein. As shown by culture-independent methods, different animal-host fecal microbial communities appear distinctive, suggesting that their community profiles can be used to differentiate fecal samples and to potentially reveal the presence of host fecal material in environmental waters. Cross-comparisons of microbial communities from different hosts also reveal relative abundances of genetic groups that can be used to distinguish sources. In increasing order of their information richness, several community analysis methods hold promise for MST applications: phospholipid fatty acid (PLFA) analysis, denaturing gradient gel electrophoresis (DGGE), terminal restriction fragment length polymorphism (TRFLP), cloning/sequencing, and PhyloChip. Specific case studies involving TRFLP and PhyloChip approaches demonstrate the ability of community-based analyses of contaminated waters to confirm a diagnosis of water quality based on host-specific marker(s). The success of community-based MST for comprehensively confirming fecal sources relies extensively upon using appropriate multivariate statistical approaches. While community-based MST is still under evaluation and development as a primary diagnostic tool, results presented herein demonstrate its promise. Coupled with its inherently comprehensive ability to capture an unprecedented amount of microbiological data that is relevant to water quality, the tools for microbial community analysis are increasingly accessible, and community-based approaches have unparalleled potential for translation into rapid, perhaps real-time, monitoring platforms.

  13. Trade off between variable and fixed size normalization in orthogonal polynomials based iris recognition system.

    Science.gov (United States)

    Krishnamoorthi, R; Anna Poorani, G

    2016-01-01

    Iris normalization is an important stage in any iris biometric, as it reduces the effects of iris distortion. To compensate for the variation in iris size caused by stretching or enlargement of the pupil during iris acquisition and by camera-to-eyeball distance, two normalization schemes are proposed in this work. In the first method, the iris region of interest is normalized by converting the iris into a variable-size rectangular model, avoiding undersampling near the limbus border. In the second method, the iris region of interest is normalized by converting it into a fixed-size rectangular model, avoiding dimensional discrepancies between eye images. The performance of the proposed normalization methods is evaluated with orthogonal-polynomials-based iris recognition in terms of FAR, FRR, GAR, CRR and EER.
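
    The second (fixed-size) scheme is essentially a polar unwrapping of the iris annulus. The sketch below assumes the pupil/iris center and radii have already been detected and uses nearest-neighbor sampling; the output size H x W is a choice of this sketch, not the paper's.

```python
# Fixed-size polar unwrapping of the iris annulus: sample along rays
# between the pupil and limbus boundaries into an H x W rectangle.
import numpy as np

def normalize_iris(eye, center, r_pupil, r_iris, H=64, W=256):
    """eye: 2D grayscale image; center: (row, col); radii in pixels."""
    theta = np.linspace(0, 2 * np.pi, W, endpoint=False)
    radii = np.linspace(r_pupil, r_iris, H)
    r, t = np.meshgrid(radii, theta, indexing="ij")
    y = np.clip((center[0] + r * np.sin(t)).astype(int), 0, eye.shape[0] - 1)
    x = np.clip((center[1] + r * np.cos(t)).astype(int), 0, eye.shape[1] - 1)
    return eye[y, x]            # H x W normalized iris strip
```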

  14. Bus Based Synchronization Method for CHIPPER Based NoC

    Directory of Open Access Journals (Sweden)

    D. Muralidharan

    2016-01-01

    Full Text Available Network on Chip (NoC) reduces the communication delay of System on Chip (SoC). The main limitations of NoC are power consumption and area overhead. A bufferless NoC reduces area complexity and power consumption by eliminating the buffers of traditional routers, but its design must guarantee livelock freedom because it uses hot-potato routing, which increases design complexity. Among the available propositions for reducing this complexity, the CHIPPER-based bufferless NoC is considered one of the best options. Livelock freedom is provided in CHIPPER through the golden epoch and the golden packet, and all routers follow some synchronization method to identify a golden packet. A clock-based method is intuitively followed for synchronization in CHIPPER-based NoCs. It is shown in this work that the worst-case latency of packets is unbearably high when that synchronization is followed. To alleviate this problem, a broadcast bus NoC (BBus NoC) approach is proposed, which decreases the worst-case latency of packets by increasing the golden epoch rate of CHIPPER.

  15. METHODICAL BASES OF MANAGEMENT OF INSURANCE PORTFOLIO

    Directory of Open Access Journals (Sweden)

    Serdechna Yulia

    2018-01-01

    Full Text Available Introduction. Despite a considerable arsenal of developments, the assessment of insurance portfolio management remains unresolved. To detail, specify, and further systematize the indicators for such an evaluation, publications by other researchers are analyzed. The purpose of the study is to analyze existing methods for forming and managing an insurance portfolio so as to achieve its balance, which will help ensure the financial reliability of the insurance company. Results. The essence of the concept of "insurance portfolio management" is described as the application of actuarial methods and techniques to the combination of various insurance risks offered for insurance or already part of the portfolio, allowing the size and structure of the portfolio to be adjusted in order to ensure its financial stability, achieve the maximum level of income for the insurance organization, preserve the value of its equity, and provide financial security for insurance liabilities. The main methods by which the insurer's portfolio can be formed and managed are the selection of risks; reinsurance operations, which ensure the diversification of risks; and the formation and placement of insurance reserves, which form the financial basis of insurance activity. The method of managing an insurance portfolio, which can be either active or passive, is considered. Conclusions. The insurance portfolio is the basis on which all the activities of the insurer rest and which determines its financial stability. The combination of methods and technologies applied to the insurance portfolio constitutes a management method that can be either active or passive and comprises a number of specific techniques through which the insurer's portfolio can be formed and managed. It is substantiated that each insurance company aims to form an efficient and

  16. Cut Based Method for Comparing Complex Networks.

    Science.gov (United States)

    Liu, Qun; Dong, Zhishan; Wang, En

    2018-03-23

    Revealing the underlying similarity of various complex networks has become both a popular and an interdisciplinary topic, with a plethora of relevant application domains. The essence of similarity here is that network features of the same network type are highly similar, while the features of different kinds of networks show low similarity. In this paper, we introduce and explore a new method for comparing various complex networks based on the cut distance. We show a correspondence between the cut distance and the similarity of two networks, which allows us to consider a broad range of complex networks and explicitly compare them with high accuracy. Various machine learning technologies, such as genetic algorithms, nearest-neighbor classification, and model selection, are employed during the comparison process. Our cut-based method is shown to be suited to comparisons of undirected, directed, and weighted networks. In the model selection process, the results demonstrate that our approach outperforms other state-of-the-art methods with respect to accuracy.

  17. A flocking based method for brain tractography.

    Science.gov (United States)

    Aranda, Ramon; Rivera, Mariano; Ramirez-Manzanares, Alonso

    2014-04-01

    We propose a new method to estimate axonal fiber pathways from Multiple Intra-Voxel Diffusion Orientations. Our method uses the multiple local orientation information for leading stochastic walks of particles. These stochastic particles are modeled with mass and thus they are subject to gravitational and inertial forces. As a result, we obtain smooth, filtered and compact trajectory bundles. This gravitational interaction can be seen as a flocking behavior among particles that promotes better and more robust axon fiber estimations because the particles use collective information to move. However, the stochastic walks may generate paths with low support (outliers), generally associated with incorrect brain connections. In order to eliminate the outlier pathways, we propose a filtering procedure based on principal component analysis and spectral clustering. The performance of the proposal is evaluated on Multiple Intra-Voxel Diffusion Orientations from two realistic numeric diffusion phantoms and a physical diffusion phantom. Additionally, we qualitatively demonstrate the performance on in vivo human brain data. Copyright © 2014 Elsevier B.V. All rights reserved.
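
    A minimal sketch of the kind of mass-bearing particle update described above; the attraction-to-centroid rule and all constants are illustrative assumptions, not the authors' actual model:

        import numpy as np

        def step_particles(pos, vel, local_dir, dt=0.5, g=0.05, inertia=0.9):
            """pos, vel: (N, 3) particle positions and velocities.
            local_dir: function mapping positions to unit diffusion
            orientations sampled from the multi-orientation field."""
            n = len(pos)
            # Gravity-like pull toward the other particles produces the
            # flocking/bundling behavior described in the abstract.
            attraction = g * (pos.mean(axis=0) - pos)
            # Follow the local fiber orientation with stochastic jitter.
            drive = local_dir(pos) + 0.1 * np.random.randn(n, 3)
            vel = inertia * vel + (1 - inertia) * drive + attraction
            return pos + dt * vel, vel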

  18. Forced Ignition Study Based On Wavelet Method

    Science.gov (United States)

    Martelli, E.; Valorani, M.; Paolucci, S.; Zikoski, Z.

    2011-05-01

    The control of ignition in a rocket engine is a critical problem for combustion chamber design. Therefore it is essential to fully understand the mechanism of ignition during its earliest stages. In this paper the characteristics of flame kernel formation and initial propagation in a hydrogen-argon-oxygen mixing layer are studied using 2D direct numerical simulations with detailed chemistry and transport properties. The flame kernel is initiated by adding an energy deposition source term in the energy equation. The effect of unsteady strain rate is studied by imposing a 2D turbulence velocity field, which is initialized by means of a synthetic field. An adaptive wavelet method, based on interpolating wavelets, is used in this study to solve the compressible reactive Navier-Stokes equations. This method provides an alternative means to refine the computational grid points according to local demands of the physical solution. The present simulations show that in the very early instants the kernel perturbed by the turbulent field is characterized by an increased burning area and a slightly increased radical formation. In addition, the calculations show that the wavelet technique yields a significant reduction in the number of degrees of freedom necessary to achieve a prescribed solution accuracy.

  19. Math-Based Simulation Tools and Methods

    National Research Council Canada - National Science Library

    Arepally, Sudhakar

    2007-01-01

    .... The following methods are reviewed: matrix operations, ordinary and partial differential system of equations, Lagrangian operations, Fourier transforms, Taylor Series, Finite Difference Methods, implicit and explicit finite element...

  20. Filter-based reconstruction methods for tomography

    NARCIS (Netherlands)

    Pelt, D.M.

    2016-01-01

    In X-ray tomography, a three-dimensional image of the interior of an object is computed from multiple X-ray images, acquired over a range of angles. Two types of methods are commonly used to compute such an image: analytical methods and iterative methods. Analytical methods are computationally

  1. DNA-based methods of geochemical prospecting

    Energy Technology Data Exchange (ETDEWEB)

    Ashby, Matthew [Mill Valley, CA

    2011-12-06

    The present invention relates to methods for performing surveys of the genetic diversity of a population. The invention also relates to methods for performing genetic analyses of a population. The invention further relates to methods for the creation of databases comprising the survey information and the databases created by these methods. The invention also relates to methods for analyzing the information to correlate the presence of nucleic acid markers with desired parameters in a sample. These methods have application in the fields of geochemical exploration, agriculture, bioremediation, environmental analysis, clinical microbiology, forensic science and medicine.

  2. Triptycene-based ladder monomers and polymers, methods of making each, and methods of use

    KAUST Repository

    Pinnau, Ingo

    2015-02-05

    Embodiments of the present disclosure provide for a triptycene-based A-B monomer, a method of making a triptycene-based A-B monomer, a triptycene-based ladder polymer, a method of making triptycene-based ladder polymers, a method of using triptycene-based ladder polymers, a structure incorporating triptycene-based ladder polymers, a method of gas separation, and the like.

  3. Alternative methods of flexible base compaction acceptance.

    Science.gov (United States)

    2012-11-01

    "This report presents the results from the second year of research work investigating issues with flexible base acceptance testing within the Texas Department of Transportation. This second year of work focused on shadow testing non-density-based acc...

  4. Math-Based Simulation Tools and Methods

    National Research Council Canada - National Science Library

    Arepally, Sudhakar

    2007-01-01

    ...: HMMWV 30-mph Rollover Test, Soldier Gear Effects, Occupant Performance in Blast Effects, Anthropomorphic Test Device, Human Models, Rigid Body Modeling, Finite Element Methods, Injury Criteria...

  5. Techniques for virtual lung nodule insertion: volumetric and morphometric comparison of projection-based and image-based methods for quantitative CT

    Science.gov (United States)

    Robins, Marthony; Solomon, Justin; Sahbaee, Pooyan; Sedlmair, Martin; Choudhury, Kingshuk Roy; Pezeshk, Aria; Sahiner, Berkman; Samei, Ehsan

    2017-09-01

    Virtual nodule insertion paves the way towards the development of standardized databases of hybrid CT images with known lesions. The purpose of this study was to assess three methods (an established and two newly developed techniques) for inserting virtual lung nodules into CT images. Assessment was done by comparing virtual nodule volume and shape to the CT-derived volume and shape of synthetic nodules. 24 synthetic nodules (three sizes, four morphologies, two repeats) were physically inserted into the lung cavity of an anthropomorphic chest phantom (KYOTO KAGAKU). The phantom was imaged with and without nodules on a commercial CT scanner (SOMATOM Definition Flash, Siemens) using a standard thoracic CT protocol at two dose levels (1.4 and 22 mGy CTDIvol). Raw projection data were saved and reconstructed with filtered back-projection and sinogram affirmed iterative reconstruction (SAFIRE, strength 5) at 0.6 mm slice thickness. Corresponding 3D idealized, virtual nodule models were co-registered with the CT images to determine each nodule’s location and orientation. Virtual nodules were voxelized, partial volume corrected, and inserted into nodule-free CT data (accounting for system imaging physics) using two methods: projection-based Technique A, and image-based Technique B. Also, a third Technique C, based on cropping a region of interest from the acquired image of the real nodule and blending it into the nodule-free image, was tested. Nodule volumes were measured using a commercial segmentation tool (iNtuition, TeraRecon, Inc.) and deformation was assessed using the Hausdorff distance. Nodule volumes and deformations were compared between the idealized, CT-derived and virtual nodules using a linear mixed effects regression model which utilized the mean, standard deviation, and coefficient of variation (Mean_RHD, STD_RHD, and CV_RHD) of the regional Hausdorff distance. Overall, there was a close concordance between the volumes of the CT-derived and

  6. A novel method for unsteady flow field segmentation based on stochastic similarity of direction

    Science.gov (United States)

    Omata, Noriyasu; Shirayama, Susumu

    2018-04-01

    Recent developments in fluid dynamics research have opened up the possibility for the detailed quantitative understanding of unsteady flow fields. However, the visualization techniques currently in use generally provide only qualitative insights. A method for dividing the flow field into physically relevant regions of interest can help researchers quantify unsteady fluid behaviors. Most methods at present compare the trajectories of virtual Lagrangian particles. The time-invariant features of an unsteady flow are also frequently of interest, but the Lagrangian specification only reveals time-variant features. To address these challenges, we propose a novel method for the time-invariant spatial segmentation of an unsteady flow field. This segmentation method does not require Lagrangian particle tracking but instead quantitatively compares the stochastic models of the direction of the flow at each observed point. The proposed method is validated with several clustering tests for 3D flows past a sphere. Results show that the proposed method reveals the time-invariant, physically relevant structures of an unsteady flow.
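
    A toy sketch of this idea, under the assumption that the stochastic model of direction can be approximated by a per-point histogram of flow angles over time (shown in 2D for brevity; the paper's actual model may differ):

        import numpy as np
        from scipy.cluster.vq import kmeans2

        def segment_by_direction(u, v, n_bins=16, n_regions=4):
            """u, v: (T, H, W) velocity components observed over time.
            Build a per-point empirical distribution of flow direction,
            then cluster points whose distributions are similar."""
            T, H, W = u.shape
            angles = np.arctan2(v, u)                        # (T, H, W)
            edges = np.linspace(-np.pi, np.pi, n_bins + 1)
            feats = np.empty((H * W, n_bins))
            for i, series in enumerate(angles.reshape(T, -1).T):
                hist, _ = np.histogram(series, bins=edges)
                feats[i] = hist / T                          # normalize
            _, labels = kmeans2(feats, n_regions, minit='++')
            return labels.reshape(H, W)          # time-invariant regions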

  7. Neurophysiological Based Methods of Guided Image Search

    National Research Council Canada - National Science Library

    Marchak, Frank

    2003-01-01

    .... We developed a model of visual feature detection, the Neuronal Synchrony Model, based on neurophysiological models of temporal neuronal processing, to improve the accuracy of automatic detection...

  8. Triptycene-based ladder monomers and polymers, methods of making each, and methods of use

    KAUST Repository

    Pinnau, Ingo; Ghanem, Bader; Swaidan, Raja

    2015-01-01

    Embodiments of the present disclosure provide for a triptycene-based A-B monomer, a method of making a triptycene-based A-B monomer, a triptycene-based ladder polymer, a method of making triptycene-based ladder polymers, a method of using

  9. portfolio optimization based on nonparametric estimation methods

    Directory of Open Access Journals (Sweden)

    mahsa ghandehari

    2017-03-01

    Full Text Available One of the major issues investors face in capital markets is decision making: choosing appropriate stocks for investment and selecting an optimal portfolio. This process is carried out through the assessment of risk and expected return. In the portfolio selection problem, if the assets' expected returns are normally distributed, variance and standard deviation are used as the risk measure. But the expected returns on assets are not necessarily normal and sometimes differ dramatically from the normal distribution. This paper, introducing conditional value at risk (CVaR) as a measure of risk in a nonparametric framework, offers the optimal portfolio for a given expected return, and this method is compared with the linear programming method. The data used in this study consist of the monthly returns of 15 companies selected from the top 50 companies in the Tehran Stock Exchange, collected during the winter of 1392 and covering April of 1388 to June of 1393. The results of this study show the superiority of the nonparametric method over the linear programming method; the nonparametric method is also much faster than the linear programming method.
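
    For reference, the historical (nonparametric) CVaR used as the risk measure can be estimated directly from the empirical return distribution; a minimal sketch with made-up numbers:

        import numpy as np

        def historical_cvar(returns, alpha=0.95):
            """Nonparametric CVaR: the mean loss in the worst (1 - alpha)
            tail of the empirical distribution of returns."""
            losses = -np.asarray(returns)
            var = np.quantile(losses, alpha)     # historical VaR
            return losses[losses >= var].mean()  # expected loss beyond VaR

        monthly_returns = np.array([0.02, -0.05, 0.01, 0.03, -0.08, 0.04])
        print(historical_cvar(monthly_returns, alpha=0.75))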

  10. Enhanced Deforestation Mapping in North Korea using Spatial-temporal Image Fusion Method and Phenology-based Index

    Science.gov (United States)

    Jin, Y.; Lee, D.

    2017-12-01

    North Korea (the Democratic People's Republic of Korea, DPRK) is known to have some of the most degraded forest in the world. The forest landscape in North Korea is complex and heterogeneous; the major vegetation cover types in the forest are hillside farms, unstocked forest, natural forest, and plateau vegetation. Better classification of types in deforested areas at high spatial resolution could provide essential information for decisions about forest management priorities and the restoration of deforested areas. For mapping heterogeneous vegetation covers, phenology-based indices are helpful to overcome the confusion of reflectance values that occurs when using single-season images. Coarse-spatial-resolution images can be acquired at a high repetition rate, which is useful for analyzing phenological characteristics, but they may not capture the spatial detail of the land cover mosaic of the region of interest. Previous spatial-temporal fusion methods either captured only the temporal change, or addressed both temporal and spatial change but with low accuracy in heterogeneous landscapes and small patches. In this study, a new spatial-temporal image fusion method focused on heterogeneous landscapes is proposed to produce images at both fine spatial and fine temporal resolution. We classified pixels into three types according to the change between the base image and the target image: the first type has only reflectance changes caused by phenology, and such pixels supply reflectance, shape, and texture information; the second type has both reflectance and spectrum changes in some bands caused by phenology, as in rice paddies or farmland, and such pixels supply only shape and texture information; the third type has reflectance and spectrum changes caused by land cover change, and such pixels provide no information, because how the land cover changed in the target image cannot be known. A different prediction method was applied to each type of pixel.

  11. Topology-Based Methods in Visualization 2015

    CERN Document Server

    Garth, Christoph; Weinkauf, Tino

    2017-01-01

    This book presents contributions on topics ranging from novel applications of topological analysis for particular problems, through studies of the effectiveness of modern topological methods, algorithmic improvements on existing methods, and parallel computation of topological structures, all the way to mathematical topologies not previously applied to data analysis. Topological methods are broadly recognized as valuable tools for analyzing the ever-increasing flood of data generated by simulation or acquisition. This is particularly the case in scientific visualization, where the data sets have long since surpassed the ability of the human mind to absorb every single byte of data. The biannual TopoInVis workshop has supported researchers in this area for a decade, and continues to serve as a vital forum for the presentation and discussion of novel results in applications in the area, creating a platform to disseminate knowledge about such implementations throughout and beyond the community. The present volum...

  12. A Tomographic method based on genetic algorithms

    International Nuclear Information System (INIS)

    Turcanu, C.; Alecu, L.; Craciunescu, T.; Niculae, C.

    1997-01-01

    Computerized tomography, being a non-destructive and non-invasive technique, is frequently used in medical applications to generate three-dimensional images of objects. Genetic algorithms are efficient and domain-independent tools for a large variety of problems. The proposed method produces good quality reconstructions even in the case of a very small number of projection angles. It requires no a priori knowledge about the solution and takes into account the statistical uncertainties. The main drawback of the method is the amount of computer memory and time needed. (author)
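
    A schematic genetic-algorithm reconstruction loop of the kind described, assuming a linear forward projector A and measured projections b (the population size, mutation rate, and fitness function are illustrative choices, not the authors'):

        import numpy as np

        rng = np.random.default_rng(0)

        def ga_reconstruct(A, b, pop_size=60, n_gen=300, mut=0.05):
            """Evolve candidate images until their forward projections
            A @ x match the measured projection data b."""
            n_pix = A.shape[1]
            fitness = lambda x: -np.linalg.norm(A @ x - b)   # data misfit
            pop = rng.random((pop_size, n_pix))              # random images
            for _ in range(n_gen):
                scores = np.array([fitness(p) for p in pop])
                elite = pop[np.argsort(scores)[::-1][:pop_size // 2]]
                pa = elite[rng.integers(len(elite), size=pop_size)]
                pb = elite[rng.integers(len(elite), size=pop_size)]
                mask = rng.random((pop_size, n_pix)) < 0.5
                pop = np.where(mask, pa, pb)                 # uniform crossover
                pop += mut * rng.standard_normal(pop.shape)  # mutation
                pop = np.clip(pop, 0.0, 1.0)
            return max(pop, key=fitness)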

  13. Scope-Based Method Cache Analysis

    DEFF Research Database (Denmark)

    Huber, Benedikt; Hepp, Stefan; Schoeberl, Martin

    2014-01-01

    The quest for time-predictable systems has led to the exploration of new hardware architectures that simplify analysis and reasoning in the temporal domain, while still providing competitive performance. For the instruction memory, the method cache is a conceptually attractive solution, as it req...

  14. Valuing Convertible Bonds Based on LSRQM Method

    Directory of Open Access Journals (Sweden)

    Jian Liu

    2014-01-01

    Full Text Available Convertible bonds are one of the essential financial products for corporate finance, and pricing theory is the key problem in the theoretical research on convertible bonds. This paper demonstrates how to price convertible bonds with call and put provisions using the Least-Squares Randomized Quasi-Monte Carlo (LSRQM) method. We consider a financial market with stochastic interest rates and credit risk and present a detailed description of the calculation steps for the convertible bond value. The empirical results show that the model fits the market prices of convertible bonds in China’s market well and that the LSRQM method is effective.
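
    The least-squares regression core of such methods can be illustrated with the classic Longstaff-Schwartz algorithm; the sketch below prices a plain American put under a constant rate, leaving out the stochastic interest rates, credit risk, call/put provisions, and quasi-random sampling that LSRQM adds on top:

        import numpy as np

        def lsm_american_put(S0=100.0, K=100.0, r=0.05, sigma=0.2, T=1.0,
                             n_steps=50, n_paths=20000, seed=1):
            rng = np.random.default_rng(seed)
            dt = T / n_steps
            z = rng.standard_normal((n_steps, n_paths))
            S = S0 * np.exp(np.cumsum((r - 0.5 * sigma**2) * dt
                                      + sigma * np.sqrt(dt) * z, axis=0))
            cash = np.maximum(K - S[-1], 0.0)        # exercise at maturity
            for t in range(n_steps - 2, -1, -1):
                cash *= np.exp(-r * dt)              # discount one step
                itm = (K - S[t]) > 0                 # in-the-money paths
                if not itm.any():
                    continue
                coef = np.polyfit(S[t, itm], cash[itm], 2)   # regression
                cont = np.polyval(coef, S[t, itm])   # continuation value
                exercise = (K - S[t, itm]) > cont
                idx = np.flatnonzero(itm)[exercise]
                cash[idx] = K - S[t, idx]            # early exercise
            return np.exp(-r * dt) * cash.mean()     # discount to time 0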

  15. HMM-Based Gene Annotation Methods

    Energy Technology Data Exchange (ETDEWEB)

    Haussler, David; Hughey, Richard; Karplus, Keven

    1999-09-20

    Development of new statistical methods and computational tools to identify genes in human genomic DNA, and to provide clues to their functions by identifying features such as transcription factor binding sites, tissue-specific expression and splicing patterns, and remote homologies at the protein level with genes of known function.

  16. A new method based on PAVAN

    International Nuclear Information System (INIS)

    Ding Sizhong; Jiang Haoyu

    2010-01-01

    In order to get more precise results, this paper changes the input data of PAVAN from joint frequency data to hourly meteorological data. Although the sampled meteorological diffusion factors of this new method are more conservative than PAVAN's, the resulting short-term meteorological diffusion factor is not necessarily more conservative than that of PAVAN. (authors)

  17. Arts-based Methods and Organizational Learning

    DEFF Research Database (Denmark)

    This thematic volume explores the relationship between the arts and learning in various educational contexts and across cultures, but with a focus on higher education and organizational learning. Arts-based interventions are at the heart of this volume, which addresses how they are conceived, des...

  18. DTI analysis methods : Voxel-based analysis

    NARCIS (Netherlands)

    Van Hecke, Wim; Leemans, Alexander; Emsell, Louise

    2016-01-01

    Voxel-based analysis (VBA) of diffusion tensor imaging (DTI) data permits the investigation of voxel-wise differences or changes in DTI metrics in every voxel of a brain dataset. It is applied primarily in the exploratory analysis of hypothesized group-level alterations in DTI parameters, as it does

  19. Hepatic fat quantification using the two-point Dixon method and fat color maps based on non-alcoholic fatty liver disease activity score.

    Science.gov (United States)

    Hayashi, Tatsuya; Saitoh, Satoshi; Takahashi, Junji; Tsuji, Yoshinori; Ikeda, Kenji; Kobayashi, Masahiro; Kawamura, Yusuke; Fujii, Takeshi; Inoue, Masafumi; Miyati, Tosiaki; Kumada, Hiromitsu

    2017-04-01

    The two-point Dixon method for magnetic resonance imaging (MRI) is commonly used to non-invasively measure fat deposition in the liver. The aim of the present study was to assess the usefulness of the MRI-fat fraction (MRI-FF) using the two-point Dixon method based on the non-alcoholic fatty liver disease activity score. This retrospective study included 106 patients who underwent liver MRI and MR spectroscopy, and 201 patients who underwent liver MRI and histological assessment. The relationship between MRI-FF and the MR spectroscopy fat fraction was used to estimate the corrected MRI-FF for the multiple spectral peaks of hepatic fat. Then, a color FF map was generated with the corrected MRI-FF based on the non-alcoholic fatty liver disease activity score. We defined FF variability as the standard deviation of FF in regions of interest. Uniformity of hepatic fat was visually graded on a three-point scale using both gray-scale and color FF maps. Confounding effects of histology (iron, inflammation and fibrosis) on corrected MRI-FF were assessed by multiple linear regression. The linear correlations between MRI-FF and the MR spectroscopy fat fraction, and between corrected MRI-FF and histological steatosis, were strong (R² = 0.90 and R² = 0.88, respectively). Liver fat variability significantly increased with visual fat uniformity grade using both of the maps (ρ = 0.67-0.69, both P < 0.05). Hepatic iron, inflammation and fibrosis had no significant confounding effects on the corrected MRI-FF (all P > 0.05). The two-point Dixon method and the gray-scale or color FF maps based on the non-alcoholic fatty liver disease activity score were useful for fat quantification in the liver of patients without severe iron deposition. © 2016 The Japan Society of Hepatology.
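
    The two-point Dixon arithmetic behind MRI-FF is straightforward; a minimal sketch, with the spectroscopy-derived multi-peak correction represented as a hypothetical linear recalibration (the study's actual regression coefficients are not given in this abstract):

        import numpy as np

        def dixon_fat_fraction(in_phase, opposed_phase):
            """Two-point Dixon: in-phase = W + F, opposed-phase = W - F,
            so FF = F / (W + F), expressed here in percent."""
            fat = 0.5 * (in_phase - opposed_phase)
            return 100.0 * fat / np.maximum(in_phase, 1e-9)

        def corrected_ff(ff, slope=1.0, intercept=0.0):
            """Hypothetical multi-peak correction: rescale MRI-FF to the
            MR spectroscopy fat fraction via a fitted line."""
            return slope * ff + intercept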

  20. Application of different Scheimpflug-based lens densitometry methods in phacodynamics prediction

    Directory of Open Access Journals (Sweden)

    Faria-Correia F

    2016-04-01

    Full Text Available Fernando Faria-Correia,1–5 Bernardo T Lopes,5,6 Isaac C Ramos,5,6 Tiago Monteiro,1,2 Nuno Franqueira,1 Renato Ambrósio Jr5–8 1Ophthalmology Department, Hospital de Braga, Braga, Portugal; 2Life and Health Sciences Research Institute (ICVS), School of Health Sciences, University of Minho, Braga, Portugal; 3ICVS/3B’s - PT Government Associate Laboratory, Braga, Portugal; 4ICVS/3B’s - PT Government Associate Laboratory, Guimarães, Portugal; 5Rio de Janeiro Corneal Tomography and Biomechanics Study Group, Rio de Janeiro, Brazil; 6Instituto de Olhos Renato Ambrósio, Rio de Janeiro, Brazil; 7VisareRio, Rio de Janeiro, Brazil; 8Department of Ophthalmology and Visual Sciences, Federal University of São Paulo, São Paulo, Brazil Purpose: To evaluate the correlations between preoperative Scheimpflug-based lens densitometry metrics and phacodynamics. Methods: The Lens Opacities Classification System III (LOCS III) was used to grade nuclear opalescence (NO), along with different methods of lens densitometry evaluation (absolute scale from 0% to 100%): three-dimensional (3D), linear, and region of interest (ROI) modes. Cumulative dissipated energy (CDE) and total ultrasound (US) time were recorded and correlated with the different methods of cataract grading. Significant correlations were evaluated using Pearson or Spearman correlation coefficients according to data normality. Results: A positive correlation was detected between the NO score and the average density and the maximum density derived from the 3D mode (r=0.624, P<0.001; r=0.619, P<0.001, respectively) and the ROI mode (r=0.600, P<0.001; r=0.642, P<0.001, respectively). Regarding the linear mode, only the average density parameter presented a significant relationship with the NO score (r=0.569, P<0.001). The 3D-derived average density and maximum density were positively correlated with CDE (rho =0.682, P<0.001; rho =0.683, P<0.001, respectively) and total US time (rho =0.631 and rho =0

  1. Knowledge-based methods for control systems

    International Nuclear Information System (INIS)

    Larsson, J.E.

    1992-01-01

    This thesis consists of three projects which combine artificial intelligence and control. The first part describes an expert system interface for system identification, using the interactive identification program Idpac. The interface works as an intelligent help system, using the command spy strategy. It contains a multitude of help system ideas. The concept of scripts is introduced as a data structure used to describe the procedural part of the knowledge in the interface. Production rules are used to represent diagnostic knowledge. A small knowledge database of scripts and rules has been developed and an example run is shown. The second part describes an expert system for frequency response analysis. This is one of the oldest and most widely used methods to determine the dynamics of a stable linear system. Though quite simple, it requires knowledge and experience of the user, in order to produce reliable results. The expert system is designed to help the user in performing the analysis. It checks whether the system is linear, finds the frequency and amplitude ranges, verifies the results, and, if errors should occur, tries to give explanation and remedies for them. The third part describes three diagnostic methods for use with industrial processes. They are measurement validation, i.e., consistency checking of sensor and measurement values using any redundancy of instrumentation; alarm analysis, i.e. analysis of multiple alarm situations to find which alarms are directly connected to primary faults and which alarms are consequential effects of the primary ones; and fault diagnosis, i.e., a search for the causes of and remedies for faults. The three methods use multilevel flow models, (MFM), to describe the target process. They have been implemented in the programming tool G2, and successfully tested on two small processes. (164 refs.) (au)

  2. Limitations of correlation-based redatuming methods

    Science.gov (United States)

    Barrera P, D. F.; Schleicher, J.; van der Neut, J.

    2017-12-01

    Redatuming aims to correct seismic data for the consequences of an acquisition far from the target. That includes the effects of an irregular acquisition surface and of complex geological structures in the overburden such as strong lateral heterogeneities or layers with low or very high velocity. Interferometric techniques can be used to relocate sources to positions where only receivers are available and have been used to move acquisition geometries to the ocean bottom or transform data between surface-seismic and vertical seismic profiles. Even if no receivers are available at the new datum, the acquisition system can be relocated to any datum in the subsurface to which the propagation of waves can be modeled with sufficient accuracy. By correlating the modeled wavefield with seismic surface data, one can carry the seismic acquisition geometry from the surface closer to geologic horizons of interest. Specifically, we show the derivation and approximation of the one-sided seismic interferometry equation for surface-data redatuming, conveniently using Green’s theorem for the Helmholtz equation with density variation. Our numerical examples demonstrate that correlation-based single-boundary redatuming works perfectly in a homogeneous overburden. If the overburden is inhomogeneous, primary reflections from deeper interfaces are still repositioned with satisfactory accuracy. However, in this case artifacts are generated as a consequence of incorrectly redatumed overburden multiples. These artifacts get even worse if the complete wavefield is used instead of the direct wavefield. Therefore, we conclude that correlation-based interferometric redatuming of surface-seismic data should always be applied using direct waves only, which can be approximated with sufficient quality if a smooth velocity model for the overburden is available.

  3. Triptycene-based dianhydrides, polyimides, methods of making each, and methods of use

    KAUST Repository

    Ghanem, Bader; Pinnau, Ingo; Swaidan, Raja

    2015-01-01

    A triptycene-based monomer, a method of making a triptycene-based monomer, a triptycene-based aromatic polyimide, a method of making a triptycene-based aromatic polyimide, methods of using triptycene-based aromatic polyimides, structures incorporating triptycene-based aromatic polyimides, and methods of gas separation are provided. Embodiments of the triptycene-based monomers and triptycene-based aromatic polyimides have high permeabilities and excellent selectivities. Embodiments of the triptycene-based aromatic polyimides have one or more of the following characteristics: intrinsic microporosity, good thermal stability, and enhanced solubility. In an exemplary embodiment, the triptycene-based aromatic polyimides are microporous and have a high BET surface area. In an exemplary embodiment, the triptycene-based aromatic polyimides can be used to form a gas separation membrane.

  4. Triptycene-based dianhydrides, polyimides, methods of making each, and methods of use

    KAUST Repository

    Ghanem, Bader

    2015-12-30

    A triptycene-based monomer, a method of making a triptycene-based monomer, a triptycene-based aromatic polyimide, a method of making a triptycene-based aromatic polyimide, methods of using triptycene-based aromatic polyimides, structures incorporating triptycene-based aromatic polyimides, and methods of gas separation are provided. Embodiments of the triptycene-based monomers and triptycene-based aromatic polyimides have high permeabilities and excellent selectivities. Embodiments of the triptycene-based aromatic polyimides have one or more of the following characteristics: intrinsic microporosity, good thermal stability, and enhanced solubility. In an exemplary embodiment, the triptycene-based aromatic polyimides are microporous and have a high BET surface area. In an exemplary embodiment, the triptycene-based aromatic polyimides can be used to form a gas separation membrane.

  5. Comparison of demons deformable registration-based methods for texture analysis of serial thoracic CT scans

    Science.gov (United States)

    Cunliffe, Alexandra R.; Al-Hallaq, Hania A.; Fei, Xianhan M.; Tuohy, Rachel E.; Armato, Samuel G.

    2013-02-01

    To determine how 19 image texture features may be altered by three image registration methods, "normal" baseline and follow-up computed tomography (CT) scans from 27 patients were analyzed. Nineteen texture feature values were calculated in over 1,000 32x32-pixel regions of interest (ROIs) randomly placed in each baseline scan. All three methods used demons registration to map baseline scan ROIs to anatomically matched locations in the corresponding transformed follow-up scan. For the first method, the follow-up scan transformation was subsampled to achieve a voxel size identical to that of the baseline scan. For the second method, the follow-up scan was transformed through affine registration to achieve global alignment with the baseline scan. For the third method, the follow-up scan was directly deformed to the baseline scan using demons deformable registration. Feature values in matched ROIs were compared using Bland-Altman 95% limits of agreement. For each feature, the range spanned by the 95% limits was normalized to the mean feature value to obtain the normalized range of agreement, nRoA. Wilcoxon signed-rank tests were used to compare nRoA values across features for the three methods. Significance for individual tests was adjusted using the Bonferroni method. nRoA was significantly smaller for affine-registered scans than for the resampled scans (p=0.003), indicating lower feature value variability between baseline and follow-up scan ROIs using this method. For both of these methods, however, nRoA was significantly higher than when feature values were calculated directly on demons-deformed follow-up scans (p<0.001). Across features and methods, nRoA values remained below 26%.
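
    A sketch of the agreement statistic described above: Bland-Altman 95% limits of agreement between matched baseline and follow-up feature values, with the span of the limits normalized to the mean feature value (nRoA):

        import numpy as np

        def normalized_range_of_agreement(baseline, followup):
            """nRoA in percent: width of the Bland-Altman 95% limits of
            agreement divided by the mean feature value."""
            baseline, followup = np.asarray(baseline), np.asarray(followup)
            diff = followup - baseline
            span = 2 * 1.96 * diff.std(ddof=1)   # upper minus lower limit
            mean_value = np.mean((baseline + followup) / 2.0)
            return 100.0 * span / mean_value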

  6. Algebraic Verification Method for SEREs Properties via Groebner Bases Approaches

    Directory of Open Access Journals (Sweden)

    Ning Zhou

    2013-01-01

    Full Text Available This work presents an efficient solution using a computer algebra system to perform linear temporal property verification for synchronous digital systems. The method is essentially based on Groebner bases approaches and symbolic simulation. A mechanism for constructing canonical polynomial-set-based symbolic representations for both circuit descriptions and assertions is studied. We then present a complete checking algorithm framework based on these algebraic representations using Groebner bases. The computational experience reported in this work shows that the algebraic approach is a quite competitive checking method and will be a useful supplement to the existing verification methods based on simulation.
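
    The Groebner-basis ideal-membership test at the heart of such verification can be sketched with SymPy; the polynomial system below is a made-up stand-in for a circuit description plus a property, not an example from the paper:

        from sympy import symbols, groebner

        x, y, z = symbols('x y z')

        # Toy "circuit" constraints: x**2 = 1, y = x**2, z = x*y.
        system = [x**2 - 1, y - x**2, z - x*y]
        G = groebner(system, x, y, z, order='lex')

        # Property polynomial: z - x. A zero remainder under reduction by G
        # certifies that the property follows from the constraints.
        _, remainder = G.reduce(z - x)
        print(remainder == 0)   # True: z = x*y = x*x**2 = x here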

  7. Comparison of gas dehydration methods based on energy ...

    African Journals Online (AJOL)

    This study compares three conventional methods of natural gas (Associated Natural Gas) dehydration to carry out ...

  8. Systems and methods for interpolation-based dynamic programming

    KAUST Repository

    Rockwood, Alyn

    2013-01-03

    Embodiments of systems and methods for interpolation-based dynamic programming. In one embodiment, the method includes receiving an objective function and a set of constraints associated with the objective function. The method may also include identifying a solution on the objective function corresponding to intersections of the constraints. Additionally, the method may include generating an interpolated surface that is in constant contact with the solution. The method may also include generating a vector field in response to the interpolated surface.

  9. Systems and methods for interpolation-based dynamic programming

    KAUST Repository

    Rockwood, Alyn

    2013-01-01

    Embodiments of systems and methods for interpolation-based dynamic programming. In one embodiment, the method includes receiving an objective function and a set of constraints associated with the objective function. The method may also include identifying a solution on the objective function corresponding to intersections of the constraints. Additionally, the method may include generating an interpolated surface that is in constant contact with the solution. The method may also include generating a vector field in response to the interpolated surface.

  10. An alternative to γ histograms for ROI-based quantitative dose comparisons

    International Nuclear Information System (INIS)

    Dvorak, P

    2009-01-01

    An alternative to gamma (γ) histograms for ROI-based quantitative comparisons of dose distributions using the γ concept is proposed. The method provides minimum values of dose difference and distance-to-agreement such that a pre-set fraction of the region of interest passes the γ test. Compared to standard γ histograms, the method provides more information in terms of pass rate per γ calculation. This is achieved at negligible additional calculation cost and without loss of accuracy. The presented method is proposed as a useful and complementary alternative to standard γ histograms, increasing both the quantity and quality of information for use in acceptance or rejection decisions. (note)

  11. Qualitative Comparison of Contraction-Based Curve Skeletonization Methods

    NARCIS (Netherlands)

    Sobiecki, André; Yasan, Haluk C.; Jalba, Andrei C.; Telea, Alexandru C.

    2013-01-01

    In recent years, many new methods have been proposed for extracting curve skeletons of 3D shapes, using a mesh-contraction principle. However, it is still unclear how these methods perform with respect to each other, and with respect to earlier voxel-based skeletonization methods, from the viewpoint

  12. Power quality events recognition using a SVM-based method

    Energy Technology Data Exchange (ETDEWEB)

    Cerqueira, Augusto Santiago; Ferreira, Danton Diego; Ribeiro, Moises Vidal; Duque, Carlos Augusto [Department of Electrical Circuits, Federal University of Juiz de Fora, Campus Universitario, 36036 900, Juiz de Fora MG (Brazil)

    2008-09-15

    In this paper, a novel SVM-based method for power quality event classification is proposed. A simple approach for feature extraction is introduced, based on the subtraction of the fundamental component from the acquired voltage signal. The resulting signal is presented to a support vector machine for event classification. Results from simulation are presented and compared with two other methods, the OTFR and the LCEC. The proposed method showed improved performance at a reasonable computational cost. (author)
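
    A sketch of this feature-extraction idea, with the fundamental removed by least-squares projection onto a sine/cosine pair at the mains frequency; the sampling rate, frequency, feature set, and classifier settings are assumptions for illustration:

        import numpy as np
        from sklearn.svm import SVC

        FS, F0 = 3200, 60.0   # sampling rate (Hz) and mains frequency (Hz)

        def remove_fundamental(v, fs=FS, f0=F0):
            """Subtract the fundamental component by projecting the signal
            onto sin/cos at f0; the residual carries the disturbance."""
            t = np.arange(len(v)) / fs
            basis = np.column_stack([np.sin(2 * np.pi * f0 * t),
                                     np.cos(2 * np.pi * f0 * t)])
            coef, *_ = np.linalg.lstsq(basis, v, rcond=None)
            return v - basis @ coef

        def features(residual):
            """Simple residual statistics as an illustrative feature vector."""
            return [residual.std(), np.abs(residual).max(),
                    np.mean(residual**2)]

        # X = [features(remove_fundamental(v)) for v in windows]
        # clf = SVC(kernel='rbf').fit(X, labels)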

  13. Droplet-based microfluidic method for synthesis of microparticles

    CSIR Research Space (South Africa)

    Mbanjwa, MB

    2012-10-01

    Full Text Available Droplet-based microfluidics has, in recent years, received increased attention as an important tool for performing numerous methods in modern-day chemistry and biology, such as the synthesis of hydrogel microparticles. Hydrogels have been used in many... CONCLUSION AND OUTLOOK The droplet-based microfluidic method offers...

  14. Kernel based eigenvalue-decomposition methods for analysing ham

    DEFF Research Database (Denmark)

    Christiansen, Asger Nyman; Nielsen, Allan Aasbjerg; Møller, Flemming

    2010-01-01

    methods, such as PCA, MAF or MNF. We therefore investigated the applicability of kernel-based versions of these transformations. This meant implementing the kernel-based methods and developing new theory, since kernel-based MAF and MNF are not yet described in the literature. The traditional methods only ... have two factors that are useful for segmentation, and none of them can be used to segment the two types of meat. The kernel-based methods have many useful factors and are able to capture the subtle differences in the images. This is illustrated in Figure 1. You can see a comparison of the most ... useful factor of PCA and kernel-based PCA, respectively, in Figure 2. The factor of the kernel-based PCA turned out to be able to segment the two types of meat, and in general that factor is much more distinct compared to the traditional factor. After the orthogonal transformation a simple thresholding...

  15. Improved initial guess with semi-subpixel level accuracy in digital image correlation by feature-based method

    Science.gov (United States)

    Zhang, Yunlu; Yan, Lei; Liou, Frank

    2018-05-01

    The quality of the initial guess of deformation parameters in digital image correlation (DIC) has a serious impact on the convergence, robustness, and efficiency of the subsequent subpixel-level searching stage. In this work, an improved feature-based initial guess (FB-IG) scheme is presented to provide initial guesses for points of interest (POIs) inside a large region. Oriented FAST and Rotated BRIEF (ORB) features are semi-uniformly extracted from the region of interest (ROI) and matched to provide initial deformation information. False matched pairs are eliminated by the novel feature-guided Gaussian mixture model (FG-GMM) point set registration algorithm, and nonuniform deformation parameters of the versatile reproducing kernel Hilbert space (RKHS) function are calculated simultaneously. Validations on simulated images and a real-world mini tensile test verify that this scheme can robustly and accurately compute initial guesses with semi-subpixel-level accuracy in cases with small or large translation, deformation, or rotation.
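
    The ORB extraction-and-matching front end can be sketched with OpenCV; the FG-GMM outlier rejection and RKHS deformation model described above are the paper's contributions and are not reproduced here:

        import cv2
        import numpy as np

        def orb_initial_matches(ref_img, def_img, n_features=2000):
            """Match ORB features between the reference and deformed
            images; the matched coordinates seed the initial guess."""
            orb = cv2.ORB_create(nfeatures=n_features)
            k1, d1 = orb.detectAndCompute(ref_img, None)
            k2, d2 = orb.detectAndCompute(def_img, None)
            matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
            matches = sorted(matcher.match(d1, d2), key=lambda m: m.distance)
            src = np.float32([k1[m.queryIdx].pt for m in matches])
            dst = np.float32([k2[m.trainIdx].pt for m in matches])
            return src, dst   # corresponding points, reference -> deformed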

  16. Exploration of mineral resource deposits based on analysis of aerial and satellite image data employing artificial intelligence methods

    Science.gov (United States)

    Osipov, Gennady

    2013-04-01

    We propose a solution to the problem of exploration of various mineral resource deposits and the determination of their forms / classification of types (oil, gas, minerals, gold, etc.) with the help of satellite photography of the region of interest. Images received from the satellite are processed and analyzed to reveal the presence of specific signs of deposits of various minerals. The course of data processing and forecasting can be divided into several stages: Pre-processing of images: normalization of color and luminosity characteristics, determination of the necessary contrast level, and integration of a great number of separate photos into a single map of the region. Construction of a semantic map image: recognition of the bitmapped image and allocation of objects and primitives known to the system. Intelligent analysis: at this stage the acquired information is analyzed with the help of a knowledge base, which contains the so-called "attention landscapes" of experts. Methods used for recognition and identification of images: a) a combined method of image recognition, b) semantic analysis of posterized images, c) reconstruction of three-dimensional objects from bitmapped images, d) cognitive technology of processing and interpretation of images. This stage is fundamentally new and distinguishes the suggested technology from all others. Automatic registration of the allocation of experts' attention - the registration of the so-called "attention landscapes" of experts - is the basis of the technology. Landscapes of attention are, essentially, highly effective filters that cut off unnecessary information and emphasize exactly the factors used by an expert for making a decision. The technology based on the denoted principles involves the following stages, which are implemented in corresponding program agents: Training mode -> Creation of a base of ophthalmologic images (OI) -> Processing and making generalized OI (GOI) -> Mode of recognition and interpretation of unknown images. Training mode

  17. FUSION SEGMENTATION METHOD BASED ON FUZZY THEORY FOR COLOR IMAGES

    Directory of Open Access Journals (Sweden)

    J. Zhao

    2017-09-01

    Full Text Available The image segmentation method based on the two-dimensional histogram segments the image according to thresholds on the intensity of the target pixel and the average intensity of its neighborhood. This method is essentially a hard-decision method. Due to the uncertainties when labeling the pixels around the threshold, the hard-decision method can easily produce wrong segmentation results. Therefore, a fusion segmentation method based on fuzzy theory is proposed in this paper. We use a membership function to model the uncertainties on each color channel of the color image. Then, we segment the color image according to fuzzy reasoning. The experimental results show that our proposed method achieves better segmentation results on both natural scene images and optical remote sensing images compared with the traditional thresholding method. The fusion method in this paper can provide new ideas for the information extraction of optical remote sensing images and polarization SAR images.

  18. Conceptual bases of the brand valuation by cost method

    Directory of Open Access Journals (Sweden)

    G.Y. Studinska

    2015-03-01

    Full Text Available The necessity of valuing intangible assets in accordance with international trends is substantiated. The brand is seen as the most important component of intangible assets and as an effective management tool of the company. The benefits and uses of brand evaluation results are investigated. A system of single-criterion cost-based brand valuation methods is analyzed, in particular methods distinguished by the time factor (current and forecast methods) and by the comparison base (relative and absolute methods). The cost method of brand valuation through market transactions, in accordance with J. Common's classification, is considered in detail. An explanation of the difference between the summation-of-all-costs method and the method of brand valuation through market transactions is provided. The advantages and disadvantages of the considered cost methods of brand valuation are investigated. The cost method as a relative-predictive brand valuation, «the method of determining the proportion of the brand from the discounted total costs», is substantiated.

  19. Language Practitioners' Reflections on Method-Based and Post-Method Pedagogies

    Science.gov (United States)

    Soomro, Abdul Fattah; Almalki, Mansoor S.

    2017-01-01

    Method-based pedagogies are commonly applied in teaching English as a foreign language all over the world. However, in the last quarter of the 20th century, the concept of such pedagogies based on the application of a single best method in EFL started to be viewed with concerns by some scholars. In response to the growing concern against the…

  20. Changes in neural resting state activity in primary and higher-order motor areas induced by a short sensorimotor intervention based on the Feldenkrais method

    Directory of Open Access Journals (Sweden)

    Julius eVerrel

    2015-04-01

    Full Text Available We use functional magnetic resonance imaging to investigate short-term neural effects of a brief sensorimotor intervention adapted from the Feldenkrais method, a movement-based learning method. Twenty-one participants (10 men, 19-30 years) took part in the study. Participants were in a supine position in the scanner with extended legs while an experienced Feldenkrais practitioner used a planar board to touch and apply minimal force to different parts of the sole and toes of their left foot under two experimental conditions. In the local condition, the practitioner explored movement within foot and ankle. In the global condition, the practitioner focused on the connection and support from the foot to the rest of the body. Before (baseline) and after each intervention (post-local, post-global), we measured brain activity during intermittent pushing/releasing with the left leg and during resting state. Independent localizer tasks were used to identify regions of interest (ROI). Brain activity during left-foot pushing did not significantly differ between conditions in sensorimotor areas. Resting state activity (regional homogeneity, ReHo) increased from baseline to post-local in medial right motor cortex, and from baseline to post-global in the left supplementary/cingulate motor area. Contrasting post-global to post-local showed higher ReHo in right lateral motor cortex. ROI analyses showed significant increases in ReHo in pushing-related areas from baseline to both post-local and post-global, and this increase tended to be more pronounced post-local. The results of this exploratory study show that a short, non-intrusive sensorimotor intervention can have short-term effects on spontaneous cortical activity in functionally related brain regions. Increased resting state activity in higher-order motor areas supports the hypothesis that the global intervention engages action-related neural processes.

  1. Quantitative MR thermometry based on phase-drift correction PRF shift method at 0.35 T.

    Science.gov (United States)

    Chen, Yuping; Ge, Mengke; Ali, Rizwan; Jiang, Hejun; Huang, Xiaoyan; Qiu, Bensheng

    2018-04-10

    Noninvasive magnetic resonance thermometry (MRT) at low field using the proton resonance frequency shift (PRFS) is a promising technique for monitoring ablation temperature, since low-field MR scanners with an open configuration are more suitable for interventional procedures than closed systems. In this study, phase-drift correction PRFS with a first-order polynomial fitting method was proposed to investigate the feasibility and accuracy of quantitative MR thermography during hyperthermia procedures in a 0.35 T open MR scanner. Unheated phantom and ex vivo porcine liver experiments were performed to evaluate the optimal polynomial order for phase-drift correction PRFS. The temperature estimation approach was tested in brain temperature experiments on three healthy volunteers at room temperature, and in ex vivo porcine liver microwave ablation experiments. The output power of the microwave generator was set at 40 W for 330 s. In the unheated experiments, the temperature root mean square error (RMSE) in the inner region of interest was calculated to assess the best-fitting order for the polynomial fit. For the ablation experiments, the relative temperature difference profile measured by phase-drift correction PRFS was compared with the temperature changes recorded by a fiber optic temperature probe around the microwave ablation antenna within the target thermal region. The phase-drift correction PRFS using first-order polynomial fitting could achieve the smallest temperature RMSE in the unheated phantom, ex vivo porcine liver and in vivo human brain experiments. In the ex vivo porcine liver microwave ablation procedure, the temperature error between MRT and the fiber optic probe was less than 2 °C for all but six temperature points. Overall, the RMSE of all temperature points was 1.49 °C. Both in vivo and ex vivo experiments showed that MR thermometry based on the phase-drift correction PRFS with first-order polynomial fitting could be applied to monitor temperature changes during
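
    A sketch of PRFS temperature mapping with a first-order polynomial drift correction; here the polynomial is fitted spatially over an unheated reference region, which is one plausible reading of the method, and the echo time is an assumed value:

        import numpy as np

        GAMMA = 42.58e6      # 1H gyromagnetic ratio, Hz/T
        ALPHA = -0.01e-6     # PRF thermal coefficient, about -0.01 ppm/degC
        B0, TE = 0.35, 0.02  # field strength (T); echo time (s) is assumed

        def prfs_temperature_change(phase_now, phase_base, drift_mask):
            """Temperature change map from a phase difference image, after
            removing a first-order (planar) phase drift estimated from
            unheated pixels selected by drift_mask."""
            dphi = phase_now - phase_base
            ys, xs = np.nonzero(drift_mask)
            A = np.column_stack([xs, ys, np.ones_like(xs)])
            coef, *_ = np.linalg.lstsq(A, dphi[ys, xs], rcond=None)
            gx, gy = np.meshgrid(np.arange(dphi.shape[1]),
                                 np.arange(dphi.shape[0]))
            drift = coef[0] * gx + coef[1] * gy + coef[2]
            return (dphi - drift) / (2 * np.pi * GAMMA * ALPHA * B0 * TE)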

  2. Sourceless efficiency calibration for HPGe detector based on medical images

    International Nuclear Information System (INIS)

    Chen Chaobin; She Ruogu; Xiao Gang; Zuo Li

    2012-01-01

    Digital phantoms of the patient and the region of interest (supposed to be filled with an isotropic volume source) are built from medical CT images. They are used to calculate the detection efficiency of HPGe detectors located outside the human body by a sourceless calibration method, based on a fast integral technique and the MCNP code respectively, and the results from the two codes are in good accord, with a maximum difference of about 5% in the intermediate energy region. The software produced in this work behaves better than the Monte Carlo code, not only in time consumption but also in the complexity of problems it can solve. (authors)

  3. Data Mining and Knowledge Discovery via Logic-Based Methods

    CERN Document Server

    Triantaphyllou, Evangelos

    2010-01-01

    There are many approaches to data mining and knowledge discovery (DM&KD), including neural networks, closest neighbor methods, and various statistical methods. This monograph, however, focuses on the development and use of a novel approach, based on mathematical logic, that the author and his research associates have worked on over the last 20 years. The methods presented in the book deal with key DM&KD issues in an intuitive manner and in a natural sequence. Compared to other DM&KD methods, those based on mathematical logic offer a direct and often intuitive approach for extracting easily int

  4. Kontexte qualitativer Sozialforschung: Arts-Based Research, Mixed Methods und Emergent Methods

    OpenAIRE

    Schreier, Margrit

    2017-01-01

    This contribution presents in more detail three contexts of qualitative social research that have gained increasing importance in recent years: Arts-Based Research, Mixed Methods, and Emergent Methods. Various approaches and variants of Arts-Informed and Arts-Based Research are described in more detail, and it is argued that Arts-Based Research constitutes an independent research tradition that can give important impetus to qualitative social research...

  5. Topology Optimization of Passive Micromixers Based on Lagrangian Mapping Method

    Directory of Open Access Journals (Sweden)

    Yuchen Guo

    2018-03-01

    Full Text Available This paper presents an optimization-based design method for passive micromixers for immiscible fluids, meaning that the Peclet number is infinitely large. Based on the topology optimization method, an optimization model is constructed to find the optimal layout of the passive micromixers. Unlike topology optimization methods with a Eulerian description of the convection-diffusion dynamics, the proposed method considers the extreme case, where mixing is dominated completely by convection and diffusion is negligible. In this method, the mixing dynamics is modeled by the mapping method, a Lagrangian description that can deal with convection-dominated cases. Several numerical examples are presented to demonstrate the validity of the proposed method.

  6. Enhancements to Graph based methods for Multi Document Summarization

    Directory of Open Access Journals (Sweden)

    Rengaramanujam Srinivasan

    2009-01-01

    Full Text Available This paper focuses its attention on extractive summarization using popular graph based approaches. Graph based methods can be broadly classified into two categories: non-PageRank type and PageRank type methods. Of the methods already proposed, the Centrality Degree method belongs to the former category while the LexRank and Continuous LexRank methods belong to the latter category. The paper goes on to suggest two enhancements to both PageRank type and non-PageRank type methods. The first modification is that of recursively discounting the selected sentences, i.e. if a sentence is selected it is removed from further consideration and the next sentence is selected based upon the contributions of the remaining sentences only. Next the paper suggests a method of incorporating position weight into these schemes. In all, 14 methods - six of non-PageRank type and eight of PageRank type - have been investigated. To clearly distinguish between the various schemes, we call the methods incorporating the discounting and position weight enhancements over Lexical Rank schemes Sentence Rank (SR) methods. Intrinsic evaluation of all 14 graph based methods was done using the conventional Precision metric and metrics earlier proposed by us - Effectiveness1 (E1) and Effectiveness2 (E2). Experimental study brings out that the proposed SR methods are superior to all the other methods.
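
    A compact sketch of a Sentence Rank style scheme combining the two enhancements: PageRank-type scoring by power iteration, an optional position weight, and recursive discounting in which each selected sentence is removed before the remaining ones are re-ranked (details are illustrative):

        import numpy as np

        def sentence_rank(sim, k, position_weight=None, damping=0.85):
            """sim: (n, n) symmetric sentence-similarity matrix.
            Returns the indices of k sentences chosen greedily."""
            selected, active = [], list(range(len(sim)))
            for _ in range(k):
                S = sim[np.ix_(active, active)]
                P = S / S.sum(axis=1, keepdims=True)     # row-stochastic
                r = np.ones(len(active)) / len(active)
                for _ in range(50):                      # power iteration
                    r = (1 - damping) / len(active) + damping * (P.T @ r)
                if position_weight is not None:          # favor early sentences
                    r = r * np.asarray(position_weight)[active]
                best = active[int(np.argmax(r))]
                selected.append(best)
                active.remove(best)   # discounting: drop it, re-rank the rest
            return selected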

  7. Base oils and methods for making the same

    Science.gov (United States)

    Ohler, Nicholas; Fisher, Karl; Tirmizi, Shakeel

    2018-01-09

    Provided herein are isoparaffins derived from hydrocarbon terpenes such as myrcene, ocimene and farnesene, and methods for making the same. In certain variations, the isoparaffins have utility as lubricant base stocks.

  8. New LSB-based colour image steganography method to enhance ...

    Indian Academy of Sciences (India)

    Mustafa Cem kasapbaşi

    2018-04-27

    ... evaluate the proposed method, comparative performance tests are carried out against different spatial image ... image steganography applications based on LSB are ... worst case scenario could occur when having highest ...
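
    For context, plain LSB substitution, the baseline that such colour-image methods enhance, amounts to overwriting the least significant bit of each cover byte (a generic textbook sketch, not the proposed method):

        import numpy as np

        def lsb_embed(cover, message_bits):
            """cover: uint8 image array; message_bits: sequence of 0/1.
            Writes one message bit into the LSB of each leading byte."""
            flat = cover.flatten()                 # copy; cover untouched
            bits = np.asarray(message_bits, dtype=np.uint8)
            flat[:len(bits)] = (flat[:len(bits)] & 0xFE) | bits
            return flat.reshape(cover.shape)

        def lsb_extract(stego, n_bits):
            return stego.flatten()[:n_bits] & 1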

  9. EEG feature selection method based on decision tree.

    Science.gov (United States)

    Duan, Lijuan; Ge, Hui; Ma, Wei; Miao, Jun

    2015-01-01

    This paper aims to solve the automated feature selection problem in brain computer interfaces (BCI). In order to automate the feature selection process, we proposed a novel EEG feature selection method based on a decision tree (DT). During the electroencephalogram (EEG) signal processing, a feature extraction method based on principal component analysis (PCA) was used, and the selection process based on the decision tree was performed by searching the feature space and automatically selecting optimal features. Considering that EEG signals are a series of non-linear signals, a generalized linear classifier named support vector machine (SVM) was chosen. In order to test the validity of the proposed method, we applied the EEG feature selection method based on the decision tree to BCI Competition II dataset Ia, and the experiment showed encouraging results.
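
    An illustrative pipeline in scikit-learn terms, with the decision tree's feature importances standing in for the paper's tree-driven search of the feature space (that substitution is an assumption on our part):

        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.tree import DecisionTreeClassifier
        from sklearn.svm import SVC

        def select_and_classify(X_train, y_train, X_test,
                                n_components=30, n_keep=10):
            """PCA feature extraction, decision-tree-based selection of the
            most informative components, then SVM classification."""
            pca = PCA(n_components=n_components).fit(X_train)
            F_train, F_test = pca.transform(X_train), pca.transform(X_test)
            tree = DecisionTreeClassifier(random_state=0).fit(F_train, y_train)
            keep = np.argsort(tree.feature_importances_)[::-1][:n_keep]
            clf = SVC(kernel='linear').fit(F_train[:, keep], y_train)
            return clf.predict(F_test[:, keep])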

  10. Agile Service Development: A Rule-Based Method Engineering Approach

    NARCIS (Netherlands)

    dr. Martijn Zoet; Stijn Hoppenbrouwers; Inge van de Weerd; Johan Versendaal

    2011-01-01

    Agile software development has evolved into an increasingly mature software development approach and has been applied successfully in many software vendors’ development departments. In this position paper, we address the broader agile service development. Based on method engineering principles we

  11. Multivariate Methods Based Soft Measurement for Wine Quality Evaluation

    Directory of Open Access Journals (Sweden)

    Shen Yin

    2014-01-01

    a decision. However, since the physicochemical indexes of wine can to some extent reflect the quality of the wine, soft measurement based on multivariate statistical methods can help the oenologist in wine evaluation.

  12. Convergence of a residual based artificial viscosity finite element method

    KAUST Repository

    Nazarov, Murtazo

    2013-02-01

    We present a residual-based artificial viscosity finite element method to solve conservation laws. The Galerkin approximation is stabilized only by residual-based artificial viscosity, without any least-squares, SUPG, or streamline diffusion terms. We prove convergence of the method, applied to a scalar conservation law in two space dimensions, toward a unique entropy solution for implicit time stepping schemes. © 2012 Elsevier B.V. All rights reserved.

  13. A Rapid Aeroelasticity Optimization Method Based on the Stiffness characteristics

    OpenAIRE

    Yuan, Zhe; Huo, Shihui; Ren, Jianting

    2018-01-01

    A rapid aeroelasticity optimization method based on the stiffness characteristics was proposed in the present study. It overcomes the large time expense of static aeroelasticity analysis based on the traditional time-domain aeroelasticity method. Elastic axis location and torsional stiffness are discussed first. Both the torsional stiffness and the distance between the stiffness center and the aerodynamic center have a direct impact on the divergence velocity. The divergence velocity can be adjusted by changing the cor...
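
    The dependence described here is the classical typical-section result: with torsional stiffness $K_\theta$ (per unit span) and offset $e$ between the aerodynamic center and the elastic axis, the divergence dynamic pressure is (a standard textbook relation, stated for orientation rather than taken from the paper)

    ```latex
    q_D = \frac{K_\theta}{e\, c\, C_{L_\alpha}}, \qquad V_D = \sqrt{\frac{2 q_D}{\rho}},
    ```

    so stiffening the wing in torsion, or moving the elastic axis toward the aerodynamic center, raises the divergence speed, which is exactly the lever the stiffness-based optimization adjusts.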

  14. Repeatability of Brain Volume Measurements Made with the Atlas-based Method from T1-weighted Images Acquired Using a 0.4 Tesla Low Field MR Scanner.

    Science.gov (United States)

    Goto, Masami; Suzuki, Makoto; Mizukami, Shinya; Abe, Osamu; Aoki, Shigeki; Miyati, Tosiaki; Fukuda, Michinari; Gomi, Tsutomu; Takeda, Tohoru

    2016-10-11

    An understanding of the repeatability of measured results is important for both the atlas-based and voxel-based morphometry (VBM) methods of magnetic resonance (MR) brain volumetry. However, many recent studies that have investigated the repeatability of brain volume measurements have been performed using static magnetic fields of 1-4 tesla, and no study has used a low-strength static magnetic field. The aim of this study was to investigate the repeatability of measured volumes using the atlas-based method and a low-strength static magnetic field (0.4 tesla). Ten healthy volunteers participated in this study. Using a 0.4 tesla magnetic resonance imaging (MRI) scanner and a quadrature head coil, three-dimensional T1-weighted images (3D-T1WIs) were obtained from each subject, twice on the same day. VBM8 software was used to construct segmented normalized images [gray matter (GM), white matter (WM), and cerebrospinal fluid (CSF) images]. The regions-of-interest (ROIs) of GM, WM, CSF, hippocampus (HC), orbital gyrus (OG), and cerebellum posterior lobe (CPL) were generated using WFU PickAtlas. The percentage change was defined as [100 × (measured volume with first segmented image − mean volume in each subject) / (mean volume in each subject)]. The average percentage change was calculated as the percentage change in the 6 ROIs of the 10 subjects. The mean of the average percentage changes for each ROI was as follows: GM, 0.556%; WM, 0.324%; CSF, 0.573%; HC, 0.645%; OG, 1.74%; and CPL, 0.471%. The average percentage change was higher for the orbital gyrus than for the other ROIs. We consider that repeatability of the atlas-based method is similar between 0.4 and 1.5 tesla MR scanners. To our knowledge, this is the first report to show that the level of repeatability with a 0.4 tesla MR scanner is adequate for the estimation of brain volume change by the atlas-based method.
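
    The repeatability metric reduces to a few lines of arithmetic. A hypothetical sketch follows (variable names and volumes are mine, with the absolute value taken so that signs do not cancel across subjects):

    ```python
    import numpy as np

    # volumes[i, j] = ROI volume for subject i at scan j (two same-day scans)
    volumes = np.array([[412.3, 410.8],
                        [398.1, 401.0],
                        [455.6, 454.9]])          # made-up hippocampus volumes

    subject_mean = volumes.mean(axis=1, keepdims=True)
    pct_change = 100 * (volumes[:, :1] - subject_mean) / subject_mean
    average_pct_change = np.abs(pct_change).mean()
    print(f"average percentage change: {average_pct_change:.3f}%")
    ```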

  15. Implementation of an office-based semen preparation method (SEP ...

    African Journals Online (AJOL)

    Implementation of an office-based semen preparation method (SEP-D Kit) for intra-uterine insemination (IUI): A controlled randomised study to compare the IUI pregnancy outcome between a routine (swim-up) and the SEP-D Kit method.

  16. A Hybrid Positioning Method Based on Hypothesis Testing

    DEFF Research Database (Denmark)

    Amiot, Nicolas; Pedersen, Troels; Laaraiedh, Mohamed

    2012-01-01

    maxima. We propose to first estimate the support region of the two peaks of the likelihood function using a set membership method, and then decide between the two regions using a rule based on the less reliable observations. Monte Carlo simulations show that the performance of the proposed method...

  17. A Channelization-Based DOA Estimation Method for Wideband Signals

    Directory of Open Access Journals (Sweden)

    Rui Guo

    2016-07-01

    Full Text Available In this paper, we propose a novel direction of arrival (DOA) estimation method for wideband signals with sensor arrays. The proposed method splits the wideband array output into multiple frequency sub-channels and estimates the signal parameters using a digital channelization receiver. Based on the output sub-channels, a channelization-based incoherent signal subspace method (Channelization-ISM) and a channelization-based test of orthogonality of projected subspaces method (Channelization-TOPS) are proposed. Channelization-ISM applies narrowband signal subspace methods on each sub-channel independently. Then the arithmetic mean or geometric mean of the estimated DOAs from each sub-channel gives the final result. Channelization-TOPS measures the orthogonality between the signal and the noise subspaces of the output sub-channels to estimate DOAs. The proposed channelization-based method isolates signals in different bandwidths reasonably and improves the output SNR. It outperforms the conventional ISM and TOPS methods in estimation accuracy and dynamic range, especially in real environments. Besides, the parallel processing architecture makes it easy to implement on hardware. A wideband digital array radar (DAR) using direct wideband radio frequency (RF) digitization is presented. Experiments carried out in a microwave anechoic chamber with the wideband DAR are presented to demonstrate the performance. The results verify the effectiveness of the proposed method.

  18. Tomographs based on non-conventional radiation sources and methods

    International Nuclear Information System (INIS)

    Barbuzza, R.; Fresno, M. del; Venere, Marcelo J.; Clausse, Alejandro; Moreno, C.

    2000-01-01

    Computer techniques for tomographic reconstruction of objects X-rayed with a compact plasma focus (PF) are presented. The implemented reconstruction algorithms are based on stochastic searching for solutions of the Radon equation, using genetic algorithms and Monte Carlo methods. Numerical experiments using actual projections were performed, demonstrating the feasibility of applying both methods to the tomographic reconstruction problem. (author)

  19. The afforestation problem: a heuristic method based on simulated annealing

    DEFF Research Database (Denmark)

    Vidal, Rene Victor Valqui

    1992-01-01

    This paper presents the afforestation problem, that is the location and design of new forest compartments to be planted in a given area. This optimization problem is solved by a two-step heuristic method based on simulated annealing. Tests and experiences with this method are also presented....
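
    The record does not spell the two-step heuristic out, but the simulated-annealing core it rests on has a standard shape. The sketch below is a generic SA skeleton with a toy objective standing in for the real compartment-layout cost; everything here is illustrative.

    ```python
    import math
    import random

    def simulated_annealing(init, energy, neighbour, t0=1.0, cooling=0.995, steps=20000):
        """Generic SA loop: accept improvements, accept worse moves with Boltzmann probability."""
        state, e = init, energy(init)
        t = t0
        for _ in range(steps):
            cand = neighbour(state)
            e_cand = energy(cand)
            if e_cand < e or random.random() < math.exp((e - e_cand) / t):
                state, e = cand, e_cand
            t *= cooling                          # geometric cooling schedule
        return state, e

    # toy stand-in: pick cells to plant on a strip, trading target area vs. fragmentation
    def energy(s):
        return abs(sum(s) - 10) + sum(abs(s[i] - s[i - 1]) for i in range(1, len(s)))

    def neighbour(s):
        out = s.copy()
        out[random.randrange(len(out))] ^= 1      # plant or clear one random cell
        return out

    best, cost = simulated_annealing([0] * 30, energy, neighbour)
    ```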

  20. Qualitative Assessment of Inquiry-Based Teaching Methods

    Science.gov (United States)

    Briggs, Michael; Long, George; Owens, Katrina

    2011-01-01

    A new approach to teaching method assessment using student focused qualitative studies and the theoretical framework of mental models is proposed. The methodology is considered specifically for the advantages it offers when applied to the assessment of inquiry-based teaching methods. The theoretical foundation of mental models is discussed, and…

  1. DNA based methods used for characterization and detection of food ...

    African Journals Online (AJOL)

    Detection of food borne pathogen is of outmost importance in the food industries and related agencies. For the last few decades conventional methods were used to detect food borne pathogens based on phenotypic characters. At the advent of complementary base pairing and amplification of DNA, the diagnosis of food ...

  2. Human Detection System by Fusing Depth Map-Based Method and Convolutional Neural Network-Based Method

    Directory of Open Access Journals (Sweden)

    Anh Vu Le

    2017-01-01

    Full Text Available In this paper, the depth images and the colour images provided by Kinect sensors are used to enhance the accuracy of human detection. The depth-based human detection method is fast but less accurate. On the other hand, the faster region-based convolutional neural network (Faster R-CNN) human detection method is accurate but requires a rather complex hardware configuration. To simultaneously leverage the advantages and relieve the drawbacks of each method, a system with one master and one client is proposed. The final goal is to make a novel Robot Operating System (ROS)-based Perception Sensor Network (PSN) system, which is more accurate and ready for real-time applications. The experimental results demonstrate that the proposed method outperforms other conventional methods in the challenging scenarios.

  3. Horizontal and Vertical Rule Bases Method in Fuzzy Controllers

    OpenAIRE

    Aminifar, Sadegh; bin Marzuki, Arjuna

    2013-01-01

    The concept of horizontal and vertical rule bases is introduced. Using this method enables designers to look for the main behaviors of a system and describe them with coarse approximations. The rules which describe the system in the first stage are called the horizontal rule base. In the second stage, the designer modulates the obtained surface by describing the changes needed on the first surface to handle the real behaviors of the system. The rules used in the second stage are called the vertical rule base. Horizontal...

  4. Arts-Based Methods in Education Around the World

    DEFF Research Database (Denmark)

    Arts-Based Methods in Education Around the World aims to investigate arts-based encounters in educational settings in response to a global need for studies that connect the cultural, inter-cultural, cross-cultural, and global elements of arts-based methods in education. In this extraordinary collection, contributions are collected from experts all over the world and involve a multiplicity of arts genres and traditions. These contributions bring together diverse cultural and educational perspectives and include a large variety of artistic genres and research methodologies. The topics covered...

  5. Optimizing distance-based methods for large data sets

    Science.gov (United States)

    Scholl, Tobias; Brenner, Thomas

    2015-10-01

    Distance-based methods for measuring spatial concentration of industries have received increasing popularity in the spatial econometrics community. However, a limiting factor for using these methods is their computational complexity, since both their memory requirements and running times are in O(n^2). In this paper, we present an algorithm with constant memory requirements and shorter running time, enabling distance-based methods to deal with large data sets. We discuss three recent distance-based methods in spatial econometrics: the D&O-Index by Duranton and Overman (Rev Econ Stud 72(4):1077-1106, 2005), the M-function by Marcon and Puech (J Econ Geogr 10(5):745-762, 2010) and the Cluster-Index by Scholl and Brenner (Reg Stud (ahead-of-print):1-15, 2014). Finally, we present an alternative calculation for the latter index that allows the use of data sets with millions of firms.

  6. Blind compressed sensing image reconstruction based on alternating direction method

    Science.gov (United States)

    Liu, Qinan; Guo, Shuxu

    2018-04-01

    In order to solve the problem of how to reconstruct the original image under the condition of an unknown sparse basis, this paper proposes an image reconstruction method based on a blind compressed sensing model. In this model, the image signal is regarded as the product of a sparse coefficient matrix and a dictionary matrix. Based on the existing blind compressed sensing theory, the optimal solution is obtained by the alternating minimization method. The proposed method solves the problem that the sparse basis in compressed sensing is difficult to represent, which suppresses the noise and improves the quality of the reconstructed image. This method ensures that the blind compressed sensing theory has a unique solution and can recover the reconstructed original image signal from a complex environment with stronger self-adaptability. The experimental results show that the image reconstruction algorithm based on blind compressed sensing proposed in this paper can recover high quality image signals under the condition of under-sampling.
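
    A toy version of the alternating scheme (a sketch of the general blind-CS/dictionary-learning recipe, not the paper's exact algorithm): with Y ≈ DX, D an unknown dictionary and X sparse, alternate an ISTA sparse-coding step with a ridge-regularised least-squares dictionary update.

    ```python
    import numpy as np

    def alternating_min(Y, n_atoms=32, lam=0.1, outer=20, inner=50, seed=0):
        """Toy blind recovery: Y ≈ D @ X with D unknown and X sparse."""
        rng = np.random.default_rng(seed)
        D = rng.normal(size=(Y.shape[0], n_atoms))
        D /= np.linalg.norm(D, axis=0)
        X = np.zeros((n_atoms, Y.shape[1]))
        for _ in range(outer):
            # sparse-coding step: ISTA on 0.5*||Y - DX||^2 + lam*||X||_1
            L = np.linalg.norm(D, 2) ** 2            # Lipschitz constant of the gradient
            for _ in range(inner):
                X = X - (D.T @ (D @ X - Y)) / L
                X = np.sign(X) * np.maximum(np.abs(X) - lam / L, 0.0)
            # dictionary step: ridge-regularised least squares, then renormalise
            D = Y @ X.T @ np.linalg.inv(X @ X.T + 1e-6 * np.eye(n_atoms))
            D /= np.linalg.norm(D, axis=0) + 1e-12
        return D, X

    D, X = alternating_min(np.random.randn(16, 200))  # synthetic 16-dim signals
    ```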

  7. Distance Based Method for Outlier Detection of Body Sensor Networks

    Directory of Open Access Journals (Sweden)

    Haibin Zhang

    2016-01-01

    Full Text Available We propose a distance based method for the outlier detection of body sensor networks. Firstly, we use a Kernel Density Estimation (KDE) to calculate the probability of the distance to the k nearest neighbors for diagnosed data. If the probability is less than a threshold, and the distance of this data to its left and right neighbors is greater than a pre-defined value, the diagnosed data is flagged as an outlier. Further, we formalize a sliding window based method to improve the outlier detection performance. Finally, to estimate the KDE from training sensor readings that contain errors, we introduce a Hidden Markov Model (HMM) based method to estimate the most probable ground truth values, which have the maximum probability of producing the training data. Simulation results show that the proposed method possesses good detection accuracy with a low false alarm rate.
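
    A rough sketch of the first stage (generic, with assumed thresholds rather than the paper's values): estimate the density of k-NN distances with a Gaussian KDE, then flag readings whose distance is improbable and whose gap to the immediate neighbours is large.

    ```python
    import numpy as np
    from scipy.stats import gaussian_kde
    from sklearn.neighbors import NearestNeighbors

    def knn_distance(series, k=5):
        """Distance of each reading to its k-th nearest neighbour (1-D sensor data)."""
        x = series.reshape(-1, 1)
        nn = NearestNeighbors(n_neighbors=k + 1).fit(x)
        dist, _ = nn.kneighbors(x)
        return dist[:, -1]

    def detect_outliers(series, k=5, density_thresh=0.05, gap_thresh=3.0):
        d = knn_distance(series, k)
        kde = gaussian_kde(d)                       # density of k-NN distances
        density = kde(d)
        # combined distance to the left and right neighbours in the time series
        gaps = np.abs(np.diff(series, prepend=series[0])) \
             + np.abs(np.diff(series, append=series[-1]))
        return (density < density_thresh) & (gaps > gap_thresh)

    readings = np.concatenate([np.random.normal(36.8, 0.1, 200), [41.5]])  # temp + spike
    print(np.where(detect_outliers(readings))[0])
    ```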

  8. An Entropy-Based Network Anomaly Detection Method

    Directory of Open Access Journals (Sweden)

    Przemysław Bereziński

    2015-04-01

    Full Text Available Data mining is an interdisciplinary subfield of computer science involving methods at the intersection of artificial intelligence, machine learning and statistics. One of the data mining tasks is anomaly detection, which is the analysis of large quantities of data to identify items, events or observations which do not conform to an expected pattern. Anomaly detection is applicable in a variety of domains, e.g., fraud detection, fault detection, system health monitoring, but this article focuses on the application of anomaly detection in the field of network intrusion detection. The main goal of the article is to prove that an entropy-based approach is suitable to detect modern botnet-like malware based on anomalous patterns in the network. This aim is achieved by realization of the following points: (i) preparation of a concept of an original entropy-based network anomaly detection method, (ii) implementation of the method, (iii) preparation of an original dataset, (iv) evaluation of the method.
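
    Entropy-based detection in this setting usually means tracking the Shannon entropy of header-field distributions per time window and alerting on deviations from a baseline. A minimal sketch follows; the field name, baseline parameters, and threshold are assumptions, not the article's configuration.

    ```python
    import math
    from collections import Counter

    def shannon_entropy(values):
        """Shannon entropy (bits) of the empirical distribution of a field."""
        counts = Counter(values)
        total = sum(counts.values())
        return -sum(c / total * math.log2(c / total) for c in counts.values())

    def window_alerts(windows, baseline_mu, baseline_sigma, z=3.0):
        """Flag windows whose destination-port entropy leaves the baseline band."""
        alerts = []
        for i, flows in enumerate(windows):
            h = shannon_entropy([f["dst_port"] for f in flows])
            if abs(h - baseline_mu) > z * baseline_sigma:
                alerts.append((i, h))           # e.g. port scans raise entropy,
        return alerts                           # botnet C&C traffic may lower it
    ```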

  9. Deterministic and fuzzy-based methods to evaluate community resilience

    Science.gov (United States)

    Kammouh, Omar; Noori, Ali Zamani; Taurino, Veronica; Mahin, Stephen A.; Cimellaro, Gian Paolo

    2018-04-01

    Community resilience is becoming a growing concern for authorities and decision makers. This paper introduces two indicator-based methods to evaluate the resilience of communities based on the PEOPLES framework. PEOPLES is a multi-layered framework that defines community resilience using seven dimensions. Each of the dimensions is described through a set of resilience indicators collected from literature and they are linked to a measure allowing the analytical computation of the indicator's performance. The first method proposed in this paper requires data on previous disasters as an input and returns as output a performance function for each indicator and a performance function for the whole community. The second method exploits a knowledge-based fuzzy modeling for its implementation. This method allows a quantitative evaluation of the PEOPLES indicators using descriptive knowledge rather than deterministic data including the uncertainty involved in the analysis. The output of the fuzzy-based method is a resilience index for each indicator as well as a resilience index for the community. The paper also introduces an open source online tool in which the first method is implemented. A case study illustrating the application of the first method and the usage of the tool is also provided in the paper.

  10. Correction of Misclassifications Using a Proximity-Based Estimation Method

    Directory of Open Access Journals (Sweden)

    Shmulevich Ilya

    2004-01-01

    Full Text Available An estimation method for correcting misclassifications in signal and image processing is presented. The method is based on the use of context-based (temporal or spatial) information in a sliding-window fashion. The classes can be purely nominal, that is, an ordering of the classes is not required. The method employs nonlinear operations based on class proximities defined by a proximity matrix. Two case studies are presented. In the first, the proposed method is applied to one-dimensional signals for processing data that are obtained by a musical key-finding algorithm. In the second, the estimation method is applied to two-dimensional signals for correction of misclassifications in images. In the first case study, the proximity matrix employed by the estimation method follows directly from music perception studies, whereas in the second case study, the optimal proximity matrix is obtained with genetic algorithms as the learning rule in a training-based optimization framework. Simulation results are presented in both case studies and the degree of improvement in classification accuracy that is obtained by the proposed method is assessed statistically using Kappa analysis.

  11. An overview of modal-based damage identification methods

    Energy Technology Data Exchange (ETDEWEB)

    Farrar, C.R.; Doebling, S.W. [Los Alamos National Lab., NM (United States). Engineering Analysis Group

    1997-09-01

    This paper provides an overview of methods that examine changes in measured vibration response to detect, locate, and characterize damage in structural and mechanical systems. The basic idea behind this technology is that modal parameters (notably frequencies, mode shapes, and modal damping) are functions of the physical properties of the structure (mass, damping, and stiffness). Therefore, changes in the physical properties will cause detectable changes in the modal properties. The motivation for the development of this technology is first provided. The methods are then categorized according to various criteria such as the level of damage detection provided, model-based vs. non-model-based methods and linear vs. nonlinear methods. This overview is limited to methods that can be adapted to a wide range of structures (i.e., methods that do not depend on a particular assumed model form for the system, such as beam-bending behavior) and that are not based on updating finite element models. Next, the methods are described in general terms including difficulties associated with their implementation and their fidelity. Past, current and future-planned applications of this technology to actual engineering systems are summarized. The paper concludes with a discussion of critical issues for future research in the area of modal-based damage identification.

  12. [Reconstituting evaluation methods based on both qualitative and quantitative paradigms].

    Science.gov (United States)

    Miyata, Hiroaki; Okubo, Suguru; Yoshie, Satoru; Kai, Ichiro

    2011-01-01

    Debate about the relationship between quantitative and qualitative paradigms is often muddled and confusing, and the clutter of terms and arguments has resulted in the concepts becoming obscure and unrecognizable. In this study we conducted content analysis regarding evaluation methods of qualitative healthcare research. We extracted descriptions of four types of evaluation paradigm (validity/credibility, reliability/credibility, objectivity/confirmability, and generalizability/transferability), and classified them into subcategories. In quantitative research, there have been many evaluation methods based on qualitative paradigms, and vice versa. Thus, it might not be useful to consider evaluation methods of the qualitative paradigm as isolated from those of quantitative methods. Choosing practical evaluation methods based on the situation and prior conditions of each study is an important approach for researchers.

  13. Improving the safety of a body composition analyser based on the PGNAA method

    Energy Technology Data Exchange (ETDEWEB)

    Miri-Hakimabad, Hashem; Izadi-Najafabadi, Reza; Vejdani-Noghreiyan, Alireza; Panjeh, Hamed [FUM Radiation Detection And Measurement Laboratory, Ferdowsi University of Mashhad (Iran, Islamic Republic of)

    2007-12-15

    The {sup 252}Cf radioisotope and {sup 241}Am-Be are intense neutron emitters that are readily encapsulated in compact, portable and sealed sources. Some features such as high flux of neutron emission and reliable neutron spectrum of these sources make them suitable for the prompt gamma neutron activation analysis (PGNAA) method. The PGNAA method can be used in medicine for neutron radiography and body chemical composition analysis. {sup 252}Cf and {sup 241}Am-Be sources generate not only neutrons but also are intense gamma emitters. Furthermore, the sample in medical treatments is a human body, so it may be exposed to the bombardments of these gamma-rays. Moreover, accumulations of these high-rate gamma-rays in the detector volume cause simultaneous pulses that can be piled up and distort the spectra in the region of interest (ROI). In order to remove these disadvantages in a practical way without being concerned about losing the thermal neutron flux, a gamma-ray filter made of Pb must be employed. The paper suggests a relatively safe body chemical composition analyser (BCCA) machine that uses a spherical Pb shield, enclosing the neutron source. Gamma-ray shielding effects and the optimum radius of the spherical Pb shield have been investigated, using the MCNP-4C code, and compared with the unfiltered case, the bare source. Finally, experimental results demonstrate that an optimised gamma-ray shield for the neutron source in a BCCA can reduce effectively the risk of exposure to the {sup 252}Cf and {sup 241}Am-Be sources.

  14. Energy-Based Acoustic Source Localization Methods: A Survey

    Directory of Open Access Journals (Sweden)

    Wei Meng

    2017-02-01

    Full Text Available Energy-based source localization is an important problem in wireless sensor networks (WSNs), which has been studied actively in the literature. Numerous localization algorithms, e.g., maximum likelihood estimation (MLE) and nonlinear-least-squares (NLS) methods, have been reported. In the literature, there are relevant review papers for localization in WSNs, e.g., for distance-based localization. However, not much work related to energy-based source localization is covered in the existing review papers. Energy-based methods are proposed and specially designed for a WSN due to its limited sensor capabilities. This paper aims to give a comprehensive review of these different algorithms for energy-based single and multiple source localization problems, their merits and demerits, and to point out possible future research directions.
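
    The standard energy-decay model behind these methods is y_i ≈ g P / d_i^α, with d_i the sensor-source distance. A small NLS sketch using SciPy follows; the sensor layout, gain, and decay exponent are assumed values for illustration.

    ```python
    import numpy as np
    from scipy.optimize import least_squares

    sensors = np.array([[0, 0], [10, 0], [0, 10], [10, 10]], dtype=float)
    source, power, alpha = np.array([3.0, 7.0]), 100.0, 2.0

    d = np.linalg.norm(sensors - source, axis=1)
    y = power / d**alpha + np.random.normal(0, 0.01, len(sensors))  # noisy energies

    def residuals(theta):
        # theta = [x, y, P]; residual between modelled and measured energy
        pos, p = theta[:2], theta[2]
        dist = np.linalg.norm(sensors - pos, axis=1)
        return p / dist**alpha - y

    fit = least_squares(residuals, x0=[5.0, 5.0, 50.0])
    print(fit.x)   # estimated source position and power
    ```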

  15. System and method for deriving a process-based specification

    Science.gov (United States)

    Hinchey, Michael Gerard (Inventor); Rash, James Larry (Inventor); Rouff, Christopher A. (Inventor)

    2009-01-01

    A system and method for deriving a process-based specification for a system is disclosed. The process-based specification is mathematically inferred from a trace-based specification. The trace-based specification is derived from a non-empty set of traces or natural language scenarios. The process-based specification is mathematically equivalent to the trace-based specification. Code is generated, if applicable, from the process-based specification. A process, or phases of a process, using the features disclosed can be reversed and repeated to allow for an interactive development and modification of legacy systems. The process is applicable to any class of system, including, but not limited to, biological and physical systems, electrical and electro-mechanical systems in addition to software, hardware and hybrid hardware-software systems.

  16. Constructing financial network based on PMFG and threshold method

    Science.gov (United States)

    Nie, Chun-Xiao; Song, Fu-Tie

    2018-04-01

    Based on planar maximally filtered graph (PMFG) and threshold method, we introduced a correlation-based network named PMFG-based threshold network (PTN). We studied the community structure of PTN and applied ISOMAP algorithm to represent PTN in low-dimensional Euclidean space. The results show that the community corresponds well to the cluster in the Euclidean space. Further, we studied the dynamics of the community structure and constructed the normalized mutual information (NMI) matrix. Based on the real data in the market, we found that the volatility of the market can lead to dramatic changes in the community structure, and the structure is more stable during the financial crisis.
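
    The threshold half of the construction is straightforward to sketch (the PMFG half needs planarity testing and is omitted here). Everything below, including the threshold value and the synthetic one-factor returns, is an illustrative assumption rather than the paper's configuration.

    ```python
    import numpy as np
    import networkx as nx
    from networkx.algorithms.community import greedy_modularity_communities

    def threshold_network(returns: np.ndarray, tau: float = 0.2) -> nx.Graph:
        """Connect assets whose absolute return correlation exceeds tau."""
        corr = np.corrcoef(returns)
        n = corr.shape[0]
        g = nx.Graph()
        g.add_nodes_from(range(n))
        for i in range(n):
            for j in range(i + 1, n):
                if abs(corr[i, j]) >= tau:
                    g.add_edge(i, j, weight=corr[i, j])
        return g

    factor = np.random.normal(size=500)                   # common market mode
    returns = 0.7 * factor + np.random.normal(size=(30, 500))
    g = threshold_network(returns)
    communities = greedy_modularity_communities(g)        # cluster structure of the PTN
    ```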

  17. Therapy Decision Support Based on Recommender System Methods.

    Science.gov (United States)

    Gräßer, Felix; Beckert, Stefanie; Küster, Denise; Schmitt, Jochen; Abraham, Susanne; Malberg, Hagen; Zaunseder, Sebastian

    2017-01-01

    We present a system for data-driven therapy decision support based on techniques from the field of recommender systems. Two methods for therapy recommendation, namely, Collaborative Recommender and Demographic-based Recommender, are proposed. Both algorithms aim to predict the individual response to different therapy options using diverse patient data and recommend the therapy which is assumed to provide the best outcome for a specific patient and time, that is, consultation. The proposed methods are evaluated using a clinical database incorporating patients suffering from the autoimmune skin disease psoriasis. The Collaborative Recommender proves to generate both better outcome predictions and recommendation quality. However, due to sparsity in the data, this approach cannot provide recommendations for the entire database. In contrast, the Demographic-based Recommender performs worse on average but covers more consultations. Consequently, both methods profit from a combination into an overall recommender system.
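
    The Collaborative Recommender can be pictured as neighbourhood-based collaborative filtering over patient profiles: predict a patient's response to a therapy as the similarity-weighted mean of outcomes observed in similar patients. A schematic sketch follows (all names and the fallback rule are my assumptions, not the authors' implementation):

    ```python
    import numpy as np

    def predict_outcome(profile, profiles, outcomes, therapy, k=10):
        """Neighbourhood CF: weight known outcomes for `therapy` by patient similarity."""
        has_outcome = ~np.isnan(outcomes[:, therapy])
        neigh_profiles = profiles[has_outcome]
        neigh_outcomes = outcomes[has_outcome, therapy]
        # cosine similarity between the query patient and candidate neighbours
        sim = neigh_profiles @ profile / (
            np.linalg.norm(neigh_profiles, axis=1) * np.linalg.norm(profile) + 1e-12)
        top = np.argsort(sim)[-k:]
        w = np.clip(sim[top], 0, None)
        if w.sum() == 0:                      # sparsity: no similar patients found,
            return None                       # fall back to the demographic recommender
        return float(w @ neigh_outcomes[top] / w.sum())

    def recommend(profile, profiles, outcomes):
        preds = [predict_outcome(profile, profiles, outcomes, t)
                 for t in range(outcomes.shape[1])]
        scores = np.array([p if p is not None else np.nan for p in preds], dtype=float)
        return int(np.nanargmax(scores))      # therapy with best predicted outcome
    ```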

  19. Phase Difference Measurement Method Based on Progressive Phase Shift

    Directory of Open Access Journals (Sweden)

    Min Zhang

    2018-06-01

    Full Text Available This paper proposes a method for phase difference measurement based on the principle of progressive phase shift (PPS). A phase difference measurement system based on PPS and implemented in an FPGA chip is proposed and tested. In the realized system, a fully programmable delay line (PDL) is constructed, which provides accurate and stable delay, benefitting from the feedback structure of the control module. The control module calibrates the delay against process, voltage and temperature (PVT) variations. Furthermore, a modified method based on double PPS is incorporated, yielding a resolution of 25 ps. To improve the resolution further, the proposed method is implemented on the 20 nm Xilinx Kintex UltraScale platform, and test results indicate that the obtained measurement error and clock synchronization error are within the range of ±5 ps.

  20. International Conference on Robust Rank-Based and Nonparametric Methods

    CERN Document Server

    McKean, Joseph

    2016-01-01

    The contributors to this volume include many of the distinguished researchers in this area. Many of these scholars have collaborated with Joseph McKean to develop underlying theory for these methods, obtain small sample corrections, and develop efficient algorithms for their computation. The papers cover the scope of the area, including robust nonparametric rank-based procedures through Bayesian and big data rank-based analyses. Areas of application include biostatistics and spatial areas. Over the last 30 years, robust rank-based and nonparametric methods have developed considerably. These procedures generalize traditional Wilcoxon-type methods for one- and two-sample location problems. Research into these procedures has culminated in complete analyses for many of the models used in practice including linear, generalized linear, mixed, and nonlinear models. Settings are both multivariate and univariate. With the development of R packages in these areas, computation of these procedures is easily shared with r...

  1. A novel method of S-box design based on chaotic map and composition method

    International Nuclear Information System (INIS)

    Lambić, Dragan

    2014-01-01

    Highlights: • Novel chaotic S-box generation method is presented. • Presented S-box has better cryptographic properties than other examples of chaotic S-boxes. • The advantages of the proposed method are the low complexity and large key space. -- Abstract: An efficient algorithm for obtaining random bijective S-boxes based on chaotic maps and composition method is presented. The proposed method is based on compositions of S-boxes from a fixed starting set. The sequence of the indices of starting S-boxes used is obtained by using chaotic maps. The results of performance test show that the S-box presented in this paper has good cryptographic properties. The advantages of the proposed method are the low complexity and the possibility to achieve large key space
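
    Schematically, the construction can be sketched as follows (a toy illustration, not the paper's algorithm: the logistic map is an assumed stand-in for the chaotic maps, and the starting set is random). The key point is that composing bijections keeps the result bijective.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)
    # fixed starting set of bijective 8-bit S-boxes (random permutations for the demo)
    START = [rng.permutation(256) for _ in range(8)]

    def chaotic_indices(x0, r=3.99, n=16):
        """Logistic-map trajectory mapped to indices into the starting set."""
        idx, x = [], x0
        for _ in range(n):
            x = r * x * (1 - x)
            idx.append(int(x * len(START)) % len(START))
        return idx

    def build_sbox(key=0.6123):
        """Compose permutations in the key-dependent order; the result stays bijective."""
        s = np.arange(256)
        for i in chaotic_indices(key):
            s = START[i][s]                   # composition of two permutations
        return s

    sbox = build_sbox()
    assert sorted(sbox) == list(range(256))   # bijectivity check
    ```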

  2. Logic-based aggregation methods for ranking student applicants

    Directory of Open Access Journals (Sweden)

    Milošević Pavle

    2017-01-01

    Full Text Available In this paper, we present logic-based aggregation models used for ranking student applicants, and we compare them with a number of existing aggregation methods, each more complex than the previous one. The proposed models aim to include dependencies in the data using logical aggregation (LA). LA is an aggregation method based on interpolative Boolean algebra (IBA), a consistent multi-valued realization of Boolean algebra. This technique is used for a Boolean-consistent aggregation of attributes that are logically dependent. The comparison is performed on the case of student applicants for master programs at the University of Belgrade. We have shown that LA has some advantages over the other presented aggregation methods. The software realization of all applied aggregation methods is also provided. This paper may be of interest not only for student ranking, but also for similar problems of ranking people, e.g., employees, team members, etc.

  3. Image Mosaic Method Based on SIFT Features of Line Segment

    Directory of Open Access Journals (Sweden)

    Jun Zhu

    2014-01-01

    Full Text Available This paper proposes a novel image mosaic method based on the SIFT (Scale Invariant Feature Transform) features of line segments, aiming to cope with scaling, rotation, changes in lighting conditions, and so on between two images in the panoramic image mosaic process. This method firstly uses the Harris corner detection operator to detect key points. Secondly, it constructs directed line segments, describes them with SIFT features, and matches those directed segments to acquire a rough point matching. Finally, the RANSAC method is used to eliminate wrong pairs in order to accomplish the image mosaic. The results from experiments based on four pairs of images show that our method has strong robustness to resolution, lighting, rotation, and scaling.
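
    For orientation, here is the conventional point-based version of the pipeline in OpenCV (the paper's directed-line-segment descriptor is its own construction; this sketch substitutes plain SIFT keypoints): detect, match with a ratio test, remove wrong pairs with RANSAC, then warp.

    ```python
    import cv2
    import numpy as np

    def mosaic(img1, img2):
        sift = cv2.SIFT_create()
        k1, d1 = sift.detectAndCompute(img1, None)
        k2, d2 = sift.detectAndCompute(img2, None)
        # ratio-test matching to get rough point correspondences
        raw = cv2.BFMatcher().knnMatch(d1, d2, k=2)
        good = [m for m, n in raw if m.distance < 0.75 * n.distance]
        src = np.float32([k1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
        dst = np.float32([k2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
        # RANSAC eliminates wrong pairs while estimating the homography
        H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
        h, w = img2.shape[:2]
        warped = cv2.warpPerspective(img1, H, (w * 2, h))
        warped[0:h, 0:w] = img2              # naive overlay; real mosaics blend seams
        return warped
    ```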

  4. Model based methods and tools for process systems engineering

    DEFF Research Database (Denmark)

    Gani, Rafiqul

    Process systems engineering (PSE) provides means to solve a wide range of problems in a systematic and efficient manner. This presentation will give a perspective on model based methods and tools needed to solve a wide range of problems in product-process synthesis-design. These methods and tools need to be integrated with work-flows and data-flows for specific product-process synthesis-design problems within a computer-aided framework. The framework therefore should be able to manage knowledge-data, models and the associated methods and tools needed by specific synthesis-design work... of model based methods and tools within a computer aided framework for product-process synthesis-design will be highlighted.

  5. Managerial Methods Based on Analysis, Recommended to a Boarding House

    Directory of Open Access Journals (Sweden)

    Solomia Andreş

    2015-06-01

    Full Text Available The paper presents a few theoretical and practical contributions regarding the implementation of analysis-based methods, namely a SWOT analysis and an economic analysis, from the perspective and demands of the management of a firm that operates profitably through the activity of a boarding house. The two types of managerial methods recommended to the firm offer the real and complex information necessary for knowing the firm's status and for elaborating predictions that maintain the viability of the business.

  6. Validation of some FM-based fitness for purpose methods

    Energy Technology Data Exchange (ETDEWEB)

    Broekhoven, M J.G. [Ministry of Social Affairs, The Hague (Netherlands)

    1988-12-31

    The reliability of several FM-based fitness-for-purpose methods has been investigated on a number of objects for which accurate fracture data were available from experiments or from practice, viz. 23 wide plates of 30 mm thickness (surface and through-thickness cracks, cracks at holes, with and without welds), 45 pipeline sections with cracks, pressure vessels and a T-joint. The methods applied mainly comprise ASME XI, PD 6493 and R6. This contribution reviews the results. (author). 11 refs.

  7. Supplier selection based on multi-criterial AHP method

    Directory of Open Access Journals (Sweden)

    Jana Pócsová

    2010-03-01

    Full Text Available This paper describes a case study of supplier selection based on the multi-criteria Analytic Hierarchy Process (AHP) method. It is demonstrated that using an adequate mathematical method can bring an "unprejudiced" conclusion, even if the alternatives (supplier companies) are very similar in the given selection criteria. The result is the best possible supplier company from the viewpoint of the chosen criteria and the price of the product.
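
    The core AHP computation is small enough to show in full: criteria weights are the principal eigenvector of the pairwise comparison matrix, with a consistency ratio as a sanity check. The matrix values below are made up; the random-index table is Saaty's standard one.

    ```python
    import numpy as np

    # pairwise comparisons of 3 criteria on Saaty's 1-9 scale (illustrative values)
    A = np.array([[1.0, 3.0, 5.0],
                  [1/3, 1.0, 2.0],
                  [1/5, 1/2, 1.0]])

    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)
    w = np.abs(eigvecs[:, k].real)
    w /= w.sum()                              # criteria weights

    n = A.shape[0]
    ci = (eigvals.real[k] - n) / (n - 1)      # consistency index
    ri = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12}[n]  # Saaty's random index
    print("weights:", w, "consistency ratio:", ci / ri)
    ```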

  8. Local coding based matching kernel method for image classification.

    Directory of Open Access Journals (Sweden)

    Yan Song

    Full Text Available This paper mainly focuses on how to effectively and efficiently measure visual similarity for local feature based representation. Among existing methods, metrics based on Bag of Visual Word (BoV) techniques are efficient and conceptually simple, at the expense of effectiveness. By contrast, kernel based metrics are more effective, but at the cost of greater computational complexity and increased storage requirements. We show that a unified visual matching framework can be developed to encompass both BoV and kernel based metrics, in which the local kernel plays an important role between feature pairs or between features and their reconstruction. Generally, local kernels are defined using Euclidean distance or its derivatives, based either explicitly or implicitly on an assumption of Gaussian noise. However, local features such as SIFT and HoG often follow a heavy-tailed distribution, which tends to undermine the motivation behind Euclidean metrics. Motivated by recent advances in feature coding techniques, a novel efficient local coding based matching kernel (LCMK) method is proposed. This exploits the manifold structures in Hilbert space derived from local kernels. The proposed method combines advantages of both BoV and kernel based metrics, and achieves a linear computational complexity. This enables efficient and scalable visual matching to be performed on large scale image sets. To evaluate the effectiveness of the proposed LCMK method, we conduct extensive experiments with widely used benchmark datasets, including 15-Scenes, Caltech101/256, PASCAL VOC 2007 and 2011 datasets. Experimental results confirm the effectiveness of the relatively efficient LCMK method.

  9. Control method for biped locomotion robots based on ZMP information

    International Nuclear Information System (INIS)

    Kume, Etsuo

    1994-01-01

    The Human Acts Simulation Program (HASP) started as a ten-year program of the Computing and Information Systems Center (CISC) at the Japan Atomic Energy Research Institute (JAERI) in 1987. A mechanical design study of biped locomotion robots for patrol and inspection in nuclear facilities is being performed as an item of the research scope. One of the goals of our research is to design a biped locomotion robot for practical use in nuclear facilities. So far, we have been studying several dynamic walking patterns. In conventional control methods for biped locomotion robots, program control is used based on preset walking patterns, so it does not have robustness to, for example, a dynamic change of walking pattern. Therefore, a real-time control method based on dynamic information of the robot states is necessary for high walking performance. In this study a new control method based on Zero Moment Point (ZMP) information is proposed as one of the real-time control methods. The proposed method is discussed and validated based on numerical simulation. (author)
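
    For reference, the quantity such a controller monitors has a standard closed form (the textbook ZMP definition, not specific to this report): for point masses $m_i$ at positions $(x_i, z_i)$ with gravity $g$,

    ```latex
    x_{\mathrm{ZMP}} =
    \frac{\sum_i m_i \,(\ddot{z}_i + g)\, x_i - \sum_i m_i\, \ddot{x}_i\, z_i}
         {\sum_i m_i \,(\ddot{z}_i + g)},
    ```

    and dynamic balance is maintained by steering the ZMP so that it stays inside the support polygon of the stance foot.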

  10. An Image Encryption Method Based on Bit Plane Hiding Technology

    Institute of Scientific and Technical Information of China (English)

    LIU Bin; LI Zhitang; TU Hao

    2006-01-01

    A novel image hiding method based on the correlation analysis of bit planes is described in this paper. Firstly, based on the correlation analysis, different bit planes of a secret image are hidden in different bit planes of several different open images. Then a new hiding image is acquired by a nesting "Exclusive-OR" operation on those images obtained from the first step. At last, by employing image fusion techniques, the final hiding result is achieved. The experimental results show that the method proposed in this paper is effective.

  11. Quartet-based methods to reconstruct phylogenetic networks.

    Science.gov (United States)

    Yang, Jialiang; Grünewald, Stefan; Xu, Yifei; Wan, Xiu-Feng

    2014-02-20

    Phylogenetic networks are employed to visualize evolutionary relationships among a group of nucleotide sequences, genes or species when reticulate events like hybridization, recombination, reassortment and horizontal gene transfer are believed to be involved. In comparison to traditional distance-based methods, quartet-based methods consider more information in the reconstruction process and thus have the potential to be more accurate. We introduce QuartetSuite, which includes a set of new quartet-based methods, namely QuartetS, QuartetA, and QuartetM, to reconstruct phylogenetic networks from nucleotide sequences. We tested their performances and compared them with other popular methods on two simulated nucleotide sequence data sets: one generated from a tree topology and the other from a complicated evolutionary history containing three reticulate events. We further validated these methods on two real data sets: a bacterial data set consisting of seven concatenated genes of 36 bacterial species and an influenza data set related to recently emerging H7N9 low pathogenic avian influenza viruses in China. QuartetS, QuartetA, and QuartetM have the potential to accurately reconstruct evolutionary scenarios from simple branching trees to complicated networks containing many reticulate events. These methods could provide insights into the understanding of complicated biological evolutionary processes such as bacterial taxonomy and reassortment of influenza viruses.

  12. Ontology-Based Method for Fault Diagnosis of Loaders.

    Science.gov (United States)

    Xu, Feixiang; Liu, Xinhui; Chen, Wei; Zhou, Chen; Cao, Bingwei

    2018-02-28

    This paper proposes an ontology-based fault diagnosis method which overcomes the difficulty of understanding complex fault diagnosis knowledge of loaders and offers a universal approach for fault diagnosis of all loaders. This method contains the following components: (1) An ontology-based fault diagnosis model is proposed to achieve the integrating, sharing and reusing of fault diagnosis knowledge for loaders; (2) combined with ontology, CBR (case-based reasoning) is introduced to realize effective and accurate fault diagnoses following four steps (feature selection, case-retrieval, case-matching and case-updating); and (3) in order to cover the shortages of the CBR method due to the lack of concerned cases, ontology based RBR (rule-based reasoning) is put forward through building SWRL (Semantic Web Rule Language) rules. An application program is also developed to implement the above methods to assist in finding the fault causes, fault locations and maintenance measures of loaders. In addition, the program is validated through analyzing a case study.

  13. Connecting clinical and actuarial prediction with rule-based methods.

    Science.gov (United States)

    Fokkema, Marjolein; Smits, Niels; Kelderman, Henk; Penninx, Brenda W J H

    2015-06-01

    Meta-analyses comparing the accuracy of clinical versus actuarial prediction have shown actuarial methods to outperform clinical methods, on average. However, actuarial methods are still not widely used in clinical practice, and there has been a call for the development of actuarial prediction methods for clinical practice. We argue that rule-based methods may be more useful than the linear main effect models usually employed in prediction studies, from a data and decision analytic as well as a practical perspective. In addition, decision rules derived with rule-based methods can be represented as fast and frugal trees, which, unlike main effects models, can be used in a sequential fashion, reducing the number of cues that have to be evaluated before making a prediction. We illustrate the usability of rule-based methods by applying RuleFit, an algorithm for deriving decision rules for classification and regression problems, to a dataset on prediction of the course of depressive and anxiety disorders from Penninx et al. (2011). The RuleFit algorithm provided a model consisting of 2 simple decision rules, requiring evaluation of only 2 to 4 cues. Predictive accuracy of the 2-rule model was very similar to that of a logistic regression model incorporating 20 predictor variables, originally applied to the dataset. In addition, the 2-rule model required, on average, evaluation of only 3 cues. Therefore, the RuleFit algorithm appears to be a promising method for creating decision tools that are less time consuming and easier to apply in psychological practice, and with accuracy comparable to traditional actuarial methods. (c) 2015 APA, all rights reserved.

  14. An online credit evaluation method based on AHP and SPA

    Science.gov (United States)

    Xu, Yingtao; Zhang, Ying

    2009-07-01

    Online credit evaluation is the foundation for the establishment of trust and for the management of risk between buyers and sellers in e-commerce. In this paper, a new credit evaluation method based on the analytic hierarchy process (AHP) and set pair analysis (SPA) is presented to determine the credibility of electronic commerce participants. It addresses some of the drawbacks found in classical credit evaluation methods and broadens the scope of current approaches. Both qualitative and quantitative indicators are considered in the proposed method, and then an overall credit score is derived from the optimal perspective. In the end, a case analysis of China Garment Network is provided for illustrative purposes.

  15. Study on UPF Harmonic Current Detection Method Based on DSP

    Energy Technology Data Exchange (ETDEWEB)

    Zhao, H J [Northwestern Polytechnical University, Xi'an 710072 (China); Pang, Y F [Xi'an University of Technology, Xi'an 710048 (China); Qiu, Z M [Xi'an University of Technology, Xi'an 710048 (China); Chen, M [Northwestern Polytechnical University, Xi'an 710072 (China)

    2006-10-15

    A unity power factor (UPF) harmonic current detection method applied to active power filters (APF) is presented in this paper. The intention of this method is to make the nonlinear load and the active power filter, connected in parallel, behave as an equivalent resistance. After compensation, the source current is sinusoidal and has the same shape as the source voltage; there are no harmonics in the source current, and the power factor becomes one. The mathematical model of the proposed method and the optimum design of the equivalent low-pass filter used in the measurement are presented. Finally, an experimental shunt active power filter prototype based on the DSP TMS320F2812, applying the proposed detection method, is developed. Simulation and experimental results indicate that the method is simple and easy to implement, and can obtain an exact real-time calculation of the harmonic current.

  16. Measurement of unattached radon progeny based on the electrostatic deposition method

    International Nuclear Information System (INIS)

    Canoba, A.C.; Lopez, F.O.

    1999-01-01

    A method for the measurement of unattached radon progeny, based on its electrostatic deposition onto wire screens and using only one pump, has been implemented and calibrated. The importance of this method stems from the special radiological significance of the unattached fraction of the short-lived radon progeny: because of it, the assessment of exposure can be directly related to dose with far greater accuracy than before. The advantages of this method are its simplicity, both in the tools needed for sample collection and in the measurement instruments used. Its suitability is further enhanced by the fact that it can effectively be used with a simple measuring procedure such as the Kusnetz method. (author)

  17. Innovative design method of automobile profile based on Fourier descriptor

    Science.gov (United States)

    Gao, Shuyong; Fu, Chaoxing; Xia, Fan; Shen, Wei

    2017-10-01

    Aiming at innovation in the contours of the automobile side, this paper presents an innovative design method for the vehicle side profile based on Fourier descriptors. The design flow of this method is: pre-processing, coordinate extraction, standardization, discrete Fourier transform, simplified Fourier descriptor, descriptor-exchange innovation, and inverse Fourier transform to obtain the innovated outline. Innovation by exchanging descriptor "genes" within a species and across different species is presented, and the corresponding innovative contours are obtained separately. A three-dimensional model of a car is obtained by referring to the profile curve obtained by exchanging xenogeneic genes. The feasibility of the method proposed in this paper is verified from various aspects.
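
    The Fourier-descriptor machinery underneath is compact (a generic sketch; the paper's simplification and gene-exchange rules are its own): treat the closed profile as complex numbers, take the DFT, keep or swap low-order coefficients, and invert.

    ```python
    import numpy as np

    def fourier_descriptor(contour_xy: np.ndarray) -> np.ndarray:
        """DFT of a closed contour encoded as x + iy (one complex sample per point)."""
        z = contour_xy[:, 0] + 1j * contour_xy[:, 1]
        return np.fft.fft(z)

    def reconstruct(desc: np.ndarray, keep: int) -> np.ndarray:
        """Simplified descriptor: zero out all but the `keep` lowest harmonics."""
        simplified = np.zeros_like(desc)
        simplified[:keep] = desc[:keep]
        simplified[-keep:] = desc[-keep:]
        z = np.fft.ifft(simplified)
        return np.stack([z.real, z.imag], axis=1)

    def exchange(desc_a: np.ndarray, desc_b: np.ndarray, lo=2, hi=10) -> np.ndarray:
        """'Gene exchange': swap a band of harmonics between two profiles."""
        out = desc_a.copy()
        out[lo:hi] = desc_b[lo:hi]
        return out
    ```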

  18. Congestion management of electric distribution networks through market based methods

    DEFF Research Database (Denmark)

    Huang, Shaojun

    Rapidly increasing share of intermittent renewable energy production poses a great challenge to the management and operation of modern power systems. Deployment of a large number of flexible demands, such as electrical vehicles (EVs) and heat pumps (HPs), is believed to be a promising solution. Market-based congestion management methods are the focus of the thesis. They handle the potential congestion at the energy planning stage; therefore, the aggregators can optimally plan the energy consumption and have the least impact on the customers. After reviewing and identifying the shortcomings of the existing methods, the thesis fully studies and improves the dynamic tariff (DT) method, and proposes two new market-based congestion management methods, namely the dynamic subsidy (DS) method and the flexible demand swap method. The thesis improves the DT method from four aspects...

  19. Level set method for image segmentation based on moment competition

    Science.gov (United States)

    Min, Hai; Wang, Xiao-Feng; Huang, De-Shuang; Jin, Jing; Wang, Hong-Zhi; Li, Hai

    2015-05-01

    We propose a level set method for image segmentation which introduces the moment competition and weakly supervised information into the energy functional construction. Different from the region-based level set methods which use force competition, the moment competition is adopted to drive the contour evolution. Here, a so-called three-point labeling scheme is proposed to manually label three independent points (weakly supervised information) on the image. Then the intensity differences between the three points and the unlabeled pixels are used to construct the force arms for each image pixel. The corresponding force is generated from the global statistical information of a region-based method and weighted by the force arm. As a result, the moment can be constructed and incorporated into the energy functional to drive the evolving contour to approach the object boundary. In our method, the force arm can take full advantage of the three-point labeling scheme to constrain the moment competition. Additionally, the global statistical information and weakly supervised information are successfully integrated, which makes the proposed method more robust than traditional methods for initial contour placement and parameter setting. Experimental results with performance analysis also show the superiority of the proposed method on segmenting different types of complicated images, such as noisy images, three-phase images, images with intensity inhomogeneity, and texture images.

  20. Continuous energy Monte Carlo method based lattice homogenization

    International Nuclear Information System (INIS)

    Li Mancang; Yao Dong; Wang Kan

    2014-01-01

    Based on the Monte Carlo code MCNP, the continuous energy Monte Carlo multi-group constants generation code MCMC has been developed. The track length scheme has been used as the foundation of cross section generation. The scattering matrix and Legendre components require special techniques, and the scattering event method has been proposed to solve this problem. Three methods have been developed to calculate the diffusion coefficients for diffusion reactor core codes, and the Legendre method has been applied in MCMC. To satisfy the equivalence theory, the general equivalence theory (GET) and the superhomogenization method (SPH) have been applied to the Monte Carlo method based group constants. The super equivalence method (SPE) has been proposed to improve the equivalence. GET, SPH and SPE have been implemented into MCMC. The numerical results showed that generating the homogenized multi-group constants via the Monte Carlo method overcomes the difficulties in geometry and treats energy as a continuum, thus providing more accurate parameters. Besides, the same code and data library can be used for a wide range of applications due to this versatility. The MCMC scheme can be seen as a potential alternative to the widely used deterministic lattice codes. (authors)

  1. Microscope image based fully automated stomata detection and pore measurement method for grapevines

    Directory of Open Access Journals (Sweden)

    Hiranya Jayakody

    2017-11-01

    Full Text Available Background: Stomatal behavior in grapevines has been identified as a good indicator of the water stress level and overall health of the plant. Microscope images are often used to analyze stomatal behavior in plants. However, most of the current approaches involve manual measurement of stomatal features. The main aim of this research is to develop a fully automated stomata detection and pore measurement method for grapevines, taking microscope images as the input. The proposed approach, which employs machine learning and image processing techniques, can outperform available manual and semi-automatic methods used to identify and estimate stomatal morphological features. Results: First, a cascade object detection learning algorithm is developed to correctly identify multiple stomata in a large microscopic image. Once the regions of interest which contain stomata are identified and extracted, a combination of image processing techniques is applied to estimate the pore dimensions of the stomata. The stomata detection approach was compared with an existing fully automated template matching technique and a semi-automatic maximum stable extremal regions approach, with the proposed method clearly surpassing the performance of the existing techniques with a precision of 91.68% and an F1-score of 0.85. Next, the morphological features of the detected stomata were measured. Contrary to existing approaches, the proposed image segmentation and skeletonization method allows us to estimate the pore dimensions even in cases where the stomatal pore boundary is only partially visible in the microscope image. A test conducted using 1267 images of stomata showed that the segmentation and skeletonization approach was able to correctly identify the stoma opening 86.27% of the time. Further comparisons made with manually traced stoma openings indicated that the proposed method is able to estimate stomata morphological features with accuracies of 89.03% for area

  2. Method of coating an iron-based article

    Science.gov (United States)

    Magdefrau, Neal; Beals, James T.; Sun, Ellen Y.; Yamanis, Jean

    2016-11-29

    A method of coating an iron-based article includes a first heating step of heating a substrate that includes an iron-based material in the presence of an aluminum source material and halide diffusion activator. The heating is conducted in a substantially non-oxidizing environment, to cause the formation of an aluminum-rich layer in the iron-based material. In a second heating step, the substrate that has the aluminum-rich layer is heated in an oxidizing environment to oxidize the aluminum in the aluminum-rich layer.

  3. Statistical methods for mass spectrometry-based clinical proteomics

    NARCIS (Netherlands)

    Kakourou, A.

    2018-01-01

    The work presented in this thesis focuses on methods for the construction of diagnostic rules based on clinical mass spectrometry proteomic data. Mass spectrometry has become one of the key technologies for jointly measuring the expression of thousands of proteins in biological samples.

  4. Bead Collage: An Arts-Based Research Method

    Science.gov (United States)

    Kay, Lisa

    2013-01-01

    In this paper, "bead collage," an arts-based research method that invites participants to reflect, communicate and construct their experience through the manipulation of beads and found objects is explained. Emphasizing the significance of one's personal biography and experiences as a researcher, I discuss how my background as an…

  5. Preparing Students for Flipped or Team-Based Learning Methods

    Science.gov (United States)

    Balan, Peter; Clark, Michele; Restall, Gregory

    2015-01-01

    Purpose: Teaching methods such as Flipped Learning and Team-Based Learning require students to pre-learn course materials before a teaching session, because classroom exercises rely on students using self-gained knowledge. This is the reverse to "traditional" teaching when course materials are presented during a lecture, and students are…

  6. Dealing with defaulting suppliers using behavioral based governance methods

    DEFF Research Database (Denmark)

    Prosman, Ernst Johannes; Scholten, Kirstin; Power, Damien

    2016-01-01

    Purpose: The aim of this paper is to explore factors influencing the effectiveness of buyer initiated Behavioral Based Governance Methods (BBGMs). The ability of BBGMs to improve supplier performance is assessed considering power imbalances and the resource intensiveness of the BBGM. Agency Theory...

  7. Bioanalytical method transfer considerations of chromatographic-based assays.

    Science.gov (United States)

    Williard, Clark V

    2016-07-01

    Bioanalysis is an important part of the modern drug development process. The business practice of outsourcing and transferring bioanalytical methods from laboratory to laboratory has increasingly become a crucial strategy for successful and efficient delivery of therapies to the market. This chapter discusses important considerations when transferring various types of chromatographic-based assays in today's pharmaceutical research and development environment.

  8. Community Based Distribution of Child Spacing Methods at ...

    African Journals Online (AJOL)

    Uses volunteer CBD agents. Community Based Distribution of Child Spacing Methods ... than us at the Hospital; male motivators, by talking to their male counterparts, help them to accept that their ...

  9. Heart rate-based lactate minimum test: a reproducible method.

    NARCIS (Netherlands)

    Strupler, M.; Muller, G.; Perret, C.

    2009-01-01

    OBJECTIVE: To find the individual intensity for aerobic endurance training, the lactate minimum test (LMT) seems to be a promising method. LMTs described in the literature consist of speed or work rate-based protocols, but for training prescription in daily practice mostly heart rate is used. The

  10. Dynamic Frames Based Verification Method for Concurrent Java Programs

    NARCIS (Netherlands)

    Mostowski, Wojciech

    2016-01-01

    In this paper we discuss a verification method for concurrent Java programs based on the concept of dynamic frames. We build on our earlier work that proposes a new, symbolic permission system for concurrent reasoning and we provide the following new contributions. First, we describe our approach

  11. Optimization-based Method for Automated Road Network Extraction

    International Nuclear Information System (INIS)

    Xiong, D

    2001-01-01

    Automated road information extraction has significant applicability in transportation. It provides a means for creating, maintaining, and updating transportation network databases that are needed for purposes ranging from traffic management to automated vehicle navigation and guidance. This paper reviews the literature on the subject of road extraction and describes a study of an optimization-based method for automated road network extraction

  12. A fast method for linear waves based on geometrical optics

    NARCIS (Netherlands)

    Stolk, C.C.

    2009-01-01

    We develop a fast method for solving the one-dimensional wave equation based on geometrical optics. From geometrical optics (e.g., Fourier integral operator theory or WKB approximation) it is known that high-frequency waves split into forward and backward propagating parts, each propagating with the

  13. How to Reach Evidence-Based Usability Evaluation Methods

    NARCIS (Netherlands)

    Marcilly, Romaric; Peute, Linda

    2017-01-01

    This paper discusses how and why to build evidence-based knowledge on usability evaluation methods. At each step of building evidence, requisites and difficulties to achieve it are highlighted. Specifically, the paper presents how usability evaluation studies should be designed to allow capitalizing

  14. Effective Teaching Methods--Project-based Learning in Physics

    Science.gov (United States)

    Holubova, Renata

    2008-01-01

    The paper presents results of the research of new effective teaching methods in physics and science. It is found out that it is necessary to educate pre-service teachers in approaches stressing the importance of the own activity of students, in competences how to create an interdisciplinary project. Project-based physics teaching and learning…

  15. Planning of operation & maintenance using risk and reliability based methods

    DEFF Research Database (Denmark)

    Florian, Mihai; Sørensen, John Dalsgaard

    2015-01-01

    Operation and maintenance (OM) of offshore wind turbines contributes with a substantial part of the total levelized cost of energy (LCOE). The objective of this paper is to present an application of risk- and reliability-based methods for planning of OM. The theoretical basis is presented...

  16. Horizontal and Vertical Rule Bases Method in Fuzzy Controllers

    Directory of Open Access Journals (Sweden)

    Sadegh Aminifar

    2013-01-01

    Full Text Available The concept of horizontal and vertical rule bases is introduced. Using this method enables designers to look for the main behaviors of a system and describe them with greater approximation. The rules which describe the system in the first stage are called the horizontal rule base. In the second stage, the designer modulates the obtained surface by describing the changes needed on the first surface to handle the real behaviors of the system. The rules used in the second stage are called the vertical rule base. The horizontal and vertical rule bases method plays a great role in easing the extraction of the optimum control surface, using far fewer rules than traditional fuzzy systems. This research involves the control of a system with high nonlinearity that is difficult to model with classical methods. As a case study for testing the proposed method under real conditions, the designed controller is applied to a steaming room with uncertain data and variable parameters. A comparison between a PID controller, a traditional fuzzy counterpart, and our proposed system shows that our system outperforms both in terms of the number of valve switchings and better surface following. The evaluations were done both with model simulation and DSP implementation.

  17. Lesson learned - CGID based on the Method 1 and Method 2 for digital equipment

    International Nuclear Information System (INIS)

    Hwang, Wonil; Sohn, Kwang Young; Cho, Chang Hwan; Kim, Sung Jong

    2015-01-01

    The acceptance methods associated with commercial-grade dedication are the following: 1) special tests and inspections (Method 1); 2) commercial-grade surveys (Method 2); 3) source verification (Method 3); 4) an acceptable item and supplier performance record (Method 4). Special tests and inspections, often referred to as Method 1, are performed by the dedicating entity after the item is received, to verify selected critical characteristics. Conducting a commercial-grade survey of a supplier is often referred to as Method 2. Supplier audits to verify compliance with a nuclear QA program do not meet the intent of a commercial-grade survey. Source verification, often referred to as Method 3, entails verification of critical characteristics during manufacture and testing of the item being procured. The performance history (good or bad) of the item and supplier is a consideration when determining the use of the other acceptance methods and the rigor with which they are used on a case-by-case basis. Some digital equipment systems have delivery references and operating histories for nuclear power plants, as far as surveyed. However, it was found that there is difficulty in collecting the supporting data sheets, so suppliers usually decide to conduct the CGID based on Method 1 and Method 2, as in the initial qualification. It is conceived that Method 4 might be a better approach for CGID (Commercial Grade Item Dedication), even if there are some difficulties in assembling the data package justifying CGID from the vendor and operating organization. This paper presents the lessons learned from consulting on Method 1 and Method 2 for digital equipment dedication. Considering all the information above, there are a couple of issues to keep in mind in order to perform the CGID with Method 2. In doing a commercial-grade survey based on Method 2, quality personnel as well as technical engineers shall be involved for integral dedication. Other than this, the review of critical

  18. Numerical methods for characterization of synchrotron radiation based on the Wigner function method

    Directory of Open Access Journals (Sweden)

    Takashi Tanaka

    2014-06-01

    Full Text Available Numerical characterization of synchrotron radiation based on the Wigner function method is explored in order to accurately evaluate the light source performance. A number of numerical methods to compute the Wigner functions for typical synchrotron radiation sources such as bending magnets, undulators and wigglers, are presented, which significantly improve the computation efficiency and reduce the total computation time. As a practical example of the numerical characterization, optimization of betatron functions to maximize the brilliance of undulator radiation is discussed.
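
    For reference, one commonly used form of the Wigner function for the transverse electric field E of synchrotron radiation is shown below; normalization conventions vary between authors, so this should be read as a sketch rather than the paper's exact definition:

        W(x, \theta) = \frac{k}{2\pi} \int E\left(x + \frac{\xi}{2}\right) E^{*}\left(x - \frac{\xi}{2}\right) e^{-ik\theta\xi} \, d\xi

    The brilliance is then read off as the peak of W over the transverse phase space (x, theta), which is the quantity the betatron-function optimization targets.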

  19. Fast Reduction Method in Dominance-Based Information Systems

    Science.gov (United States)

    Li, Yan; Zhou, Qinghua; Wen, Yongchuan

    2018-01-01

    In real world applications, there are often data with continuous values or preference-ordered values. Rough sets based on dominance relations can effectively deal with these kinds of data. Attribute reduction can be done in the framework of the dominance-relation based approach to better extract decision rules. However, the computational cost of the dominance classes greatly affects the efficiency of attribute reduction and rule extraction. This paper presents an efficient method of computing dominance classes, and further compares it with the traditional method as the numbers of attributes and samples increase. Experiments on UCI data sets show that the proposed algorithm clearly improves the efficiency of the traditional method, especially for large-scale data.
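
    As a point of comparison for the efficiency claim, a naive baseline for computing dominance classes is sketched below; the decision table is hypothetical, and the paper's contribution is precisely a faster computation than this O(n^2 * m) scan.

        def dominating_set(x, universe):
            # Objects that dominate x: at least as good on every criterion
            # (criteria values are assumed to be preference-ordered numbers).
            return {i for i, y in universe.items()
                    if all(a >= b for a, b in zip(y, x))}

        objects = {1: (3, 2, 5), 2: (3, 3, 5), 3: (1, 2, 2)}   # toy table
        for i, x in objects.items():
            print(i, sorted(dominating_set(x, objects)))       # dominance classes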

  20. Biogas slurry pricing method based on nutrient content

    Science.gov (United States)

    Zhang, Chang-ai; Guo, Honghai; Yang, Zhengtao; Xin, Shurong

    2017-11-01

    In order to promote biogas-slurry commercialization, a method was put forward to value biogas slurry based on its nutrient contents. First, the element contents of the biogas slurry were measured; second, each element was valued at its market price; transport cost, application cost, and market effects were then taken into account to obtain the final pricing method for biogas slurry. This method could be useful in practical production. Taking biogas slurry from cattle manure and from corn stalk as examples, their prices were 38.50 yuan RMB per ton and 28.80 yuan RMB per ton, respectively. This paper will be useful for recognizing the value of biogas projects, ensuring the operation of biogas projects, and guiding the cyclic utilization of biomass resources in China.
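
    The pricing logic reduces to simple arithmetic; the sketch below uses made-up nutrient contents, unit prices, and adjustment factors purely for illustration and does not reproduce the paper's actual coefficients.

        # Hypothetical nutrient contents (kg per ton of slurry) and market
        # prices of the pure nutrients (yuan RMB per kg).
        nutrients = {"N": 1.5, "P2O5": 0.4, "K2O": 1.2}
        unit_price = {"N": 8.0, "P2O5": 9.0, "K2O": 7.0}

        gross_value = sum(nutrients[k] * unit_price[k] for k in nutrients)
        transport_cost = 5.0     # yuan per ton, assumed
        using_cost = 3.0         # yuan per ton, assumed
        market_factor = 0.8      # assumed discount for market acceptance

        price = (gross_value - transport_cost - using_cost) * market_factor
        print(f"slurry price: {price:.2f} yuan RMB per ton")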

  1. Matrix-based image reconstruction methods for tomography

    International Nuclear Information System (INIS)

    Llacer, J.; Meng, J.D.

    1984-10-01

    Matrix methods of image reconstruction have not been used, in general, because of the large size of practical matrices, ill-conditioning upon inversion, and the success of Fourier-based techniques. An exception is the work that has been done at the Lawrence Berkeley Laboratory for imaging with accelerated radioactive ions. An extension of that work into more general imaging problems shows that, with a correct formulation of the problem, positron tomography with ring geometries results in well-behaved matrices which can be used for image reconstruction with no distortion of the point response in the field of view and flexibility in the design of the instrument. Maximum Likelihood Estimator methods of reconstruction, which use the system matrices tailored to specific instruments and do not need matrix inversion, are shown to result in good preliminary images. A parallel processing computer structure based on multiple inexpensive microprocessors is proposed as a system to implement the matrix-MLE methods. 14 references, 7 figures
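
    A generic sketch of the matrix-based ML-EM update is shown below; it uses a dense NumPy system matrix and is not the Lawrence Berkeley Laboratory implementation, but it is the standard multiplicative update that avoids matrix inversion.

        import numpy as np

        def mlem(A, y, n_iter=50):
            # A: system matrix (detector bins x image pixels); y: measured counts.
            y = np.asarray(y, dtype=float)
            x = np.ones(A.shape[1])               # flat initial image
            sens = A.sum(axis=0)                  # sensitivity image, A^T 1
            for _ in range(n_iter):
                proj = A @ x                      # forward projection
                ratio = np.divide(y, proj, out=np.zeros_like(y), where=proj > 0)
                x *= (A.T @ ratio) / np.maximum(sens, 1e-12)   # multiplicative update
            return x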

  2. Fast Pedestrian Recognition Based on Multisensor Fusion

    Directory of Open Access Journals (Sweden)

    Hongyu Hu

    2012-01-01

    Full Text Available A fast pedestrian recognition algorithm based on multisensor fusion is presented in this paper. Firstly, potential pedestrian locations are estimated by laser radar scanning in world coordinates, and their corresponding candidate regions in the image are then located by camera calibration and the perspective mapping model. To avoid the time-consuming training and recognition caused by the large number of feature vector dimensions, a region-of-interest-based integral histograms of oriented gradients (ROI-IHOG) feature extraction method is then proposed. A support vector machine (SVM) classifier is trained on a novel pedestrian sample dataset adapted to the urban road environment for online recognition. Finally, we test the validity of the proposed approach with several video sequences from realistic urban road scenarios. Reliable and time-efficient performance is shown based on our multisensor fusion method.

  3. NIM: A Node Influence Based Method for Cancer Classification

    Directory of Open Access Journals (Sweden)

    Yiwen Wang

    2014-01-01

    Full Text Available The classification of different cancer types is of great significance in the medical field. However, the great majority of existing cancer classification methods are clinical-based and have relatively weak diagnostic ability. With the rapid development of gene expression technology, it has become possible to classify different kinds of cancers using DNA microarrays. Our main idea is to confront the problem of cancer classification using gene expression data from a graph-based view. Based on a new node influence model we propose, this paper presents a novel high-accuracy method for cancer classification, which is composed of four parts: the first is to calculate the similarity matrix of all samples, the second is to compute the node influence of the training samples, the third is to obtain the similarity between every test sample and each class using a weighted sum of node influence and the similarity matrix, and the last is to classify each test sample based on its similarity to every class. The data sets used in our experiments are breast cancer, central nervous system, colon tumor, prostate cancer, acute lymphoblastic leukemia, and lung cancer. Experimental results showed that our node influence based method (NIM) is more efficient and robust than the support vector machine, K-nearest neighbor, C4.5, naive Bayes, and CART.
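
    The four steps can be sketched as follows; the radial-basis similarity and the simplified influence model (within-class similarity mass) are assumptions for illustration, not the node influence model actually proposed in the paper.

        import numpy as np

        def nim_classify(X_train, y_train, X_test, gamma=1.0):
            def rbf(A, B):                        # similarity helper
                d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
                return np.exp(-gamma * d2)
            S = rbf(X_train, X_train)             # step 1: similarity matrix
            classes = np.unique(y_train)
            influence = np.array([S[i, y_train == y_train[i]].sum()
                                  for i in range(len(y_train))])   # step 2 (simplified)
            T = rbf(X_test, X_train)
            scores = np.stack([(T[:, y_train == c] * influence[y_train == c]).sum(1)
                               for c in classes], axis=1)          # step 3
            return classes[scores.argmax(1)]                       # step 4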

  4. Data assimilation method based on the constraints of confidence region

    Science.gov (United States)

    Li, Yong; Li, Siming; Sheng, Yao; Wang, Luheng

    2018-03-01

    The ensemble Kalman filter (EnKF) is a distinguished data assimilation method that is widely used and studied in various fields, including meteorology and oceanography. However, due to the limited sample size or an imprecise dynamics model, the forecast error variance is usually easily underestimated, which further leads to the phenomenon of filter divergence. Additionally, the assimilation results of the initial stage are poor if the initial condition settings differ greatly from the true initial state. To address these problems, a variance inflation procedure is usually adopted. In this paper, we propose a new method based on the constraints of a confidence region constructed from the observations, called EnCR, to estimate the inflation parameter of the forecast error variance in the EnKF method. In the new method, the state estimate is more robust to both inaccurate forecast models and initial condition settings. The new method is compared with other adaptive data assimilation methods in the Lorenz-63 and Lorenz-96 models under various model parameter settings. The simulation results show that the new method performs better than the competing methods.

  5. Genomic comparisons of Brucella spp. and closely related bacteria using base compositional and proteome based methods

    DEFF Research Database (Denmark)

    Bohlin, Jon; Snipen, Lars; Cloeckaert, Axel

    2010-01-01

    BACKGROUND: Classification of bacteria within the genus Brucella has been difficult due in part to considerable genomic homogeneity between the different species and biovars, in spite of clear differences in phenotypes. Therefore, many different methods have been used to assess Brucella taxonomy....... In the current work, we examine 32 sequenced genomes from genus Brucella representing the six classical species, as well as more recently described species, using bioinformatical methods. Comparisons were made at the level of genomic DNA using oligonucleotide based methods (Markov chain based genomic signatures...... between the oligonucleotide based methods used. Whilst the Markov chain based genomic signatures grouped the different species in genus Brucella according to host preference, the codon and amino acid frequencies based methods reflected small differences between the Brucella species. Only minor differences...

  6. A geometrically based method for automated radiosurgery planning

    International Nuclear Information System (INIS)

    Wagner, Thomas H.; Yi Taeil; Meeks, Sanford L.; Bova, Francis J.; Brechner, Beverly L.; Chen Yunmei; Buatti, John M.; Friedman, William A.; Foote, Kelly D.; Bouchet, Lionel G.

    2000-01-01

    Purpose: A geometrically based method of multiple isocenter linear accelerator radiosurgery treatment planning optimization was developed, based on a target's solid shape. Methods and Materials: Our method uses an edge detection process to determine the optimal sphere packing arrangement with which to cover the planning target. The sphere packing arrangement is converted into a radiosurgery treatment plan by substituting the isocenter locations and collimator sizes for the spheres. Results: This method is demonstrated on a set of 5 irregularly shaped phantom targets, as well as a set of 10 clinical example cases ranging from simple to very complex in planning difficulty. Using a prototype implementation of the method and standard dosimetric radiosurgery treatment planning tools, feasible treatment plans were developed for each target. The treatment plans generated for the phantom targets showed excellent dose conformity and acceptable dose homogeneity within the target volume. The algorithm was able to generate a radiosurgery plan conforming to the Radiation Therapy Oncology Group (RTOG) guidelines on radiosurgery for every clinical and phantom target examined. Conclusions: This automated planning method can serve as a valuable tool to assist treatment planners in rapidly and consistently designing conformal multiple isocenter radiosurgery treatment plans.

  7. [Galaxy/quasar classification based on nearest neighbor method].

    Science.gov (United States)

    Li, Xiang-Ru; Lu, Yu; Zhou, Jian-Ming; Wang, Yong-Jun

    2011-09-01

    With the wide application of high-quality CCDs in celestial spectrum imaging and the implementation of many large sky survey programs (e.g., the Sloan Digital Sky Survey (SDSS), the Two-degree-Field Galaxy Redshift Survey (2dF), the Spectroscopic Survey Telescope (SST), the Large Sky Area Multi-Object Fiber Spectroscopic Telescope (LAMOST) program, and the Large Synoptic Survey Telescope (LSST) program), celestial observational data are coming into the world like torrential rain. Therefore, to utilize them effectively and fully, research on automated processing methods for celestial data is imperative. In the present work, we investigated how to recognize galaxies and quasars from spectra based on the nearest neighbor method. Galaxies and quasars are extragalactic objects; they are far away from Earth, and their spectra are usually contaminated by various noise. Therefore, recognizing these two types of spectra is a typical problem in automatic spectra classification. Furthermore, the utilized method, nearest neighbor, is one of the most typical, classic, and mature algorithms in pattern recognition and data mining, and is often used as a benchmark in developing novel algorithms. Regarding applicability in practice, it is shown that the recognition ratio of the nearest neighbor method (NN) is comparable to the best results reported in the literature based on more complicated methods, and the superiority of NN is that this method does not need to be trained, which is useful in incremental learning and parallel computation in mass spectral data processing. In conclusion, the results in this work are helpful for studying galaxy and quasar spectra classification.
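
    Since NN requires no training phase, the whole classifier reduces to a distance scan; a minimal sketch, assuming spectra resampled onto a common wavelength grid, is:

        import numpy as np

        def nn_classify(train_spectra, train_labels, spectrum):
            # 1-NN by Euclidean distance over flux vectors.
            d = np.linalg.norm(train_spectra - spectrum, axis=1)
            return train_labels[int(np.argmin(d))]

    New labeled spectra can simply be appended to the training set, which is the incremental-learning property the abstract highlights; the distance scan also parallelizes trivially.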

  8. Research on Fault Diagnosis Method Based on Rule Base Neural Network

    Directory of Open Access Journals (Sweden)

    Zheng Ni

    2017-01-01

    Full Text Available The relationship between fault phenomena and fault causes is always nonlinear, which influences the accuracy of fault location, and neural networks are effective in dealing with nonlinear problems. In order to improve the efficiency of uncertain fault diagnosis based on neural networks, a neural network fault diagnosis method based on a rule base is put forward. At first, the structure of the BP neural network is built and the learning rule is given. Then, the rule base is built using fuzzy theory. An improved fuzzy neural construction model is designed, in which the calculation methods of the node function and membership function are also given. Simulation results confirm the effectiveness of this method.

  9. Photonic arbitrary waveform generator based on Taylor synthesis method

    DEFF Research Database (Denmark)

    Liao, Shasha; Ding, Yunhong; Dong, Jianji

    2016-01-01

    Arbitrary waveform generation has been widely used in optical communication, radar systems and many other applications. We propose and experimentally demonstrate an on-chip silicon-on-insulator (SOI) optical arbitrary waveform generator, which is based on the Taylor synthesis method. In our scheme......, a Gaussian pulse is launched to some cascaded microrings to obtain first-, second- and third-order differentiations. By controlling the amplitude and phase of the initial pulse and successive differentiations, we can realize an arbitrary waveform generator according to the Taylor expansion. We obtain several typical...... waveforms such as square waveform, triangular waveform, flat-top waveform, sawtooth waveform, Gaussian waveform and so on. Unlike other schemes based on Fourier synthesis or frequency-to-time mapping, our scheme is based on the Taylor synthesis method. Our scheme does not require any spectral disperser or large
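
    Numerically, the scheme amounts to truncating the Taylor series f(t0 + tau) = f(t0) + f'(t0) tau + f''(t0) tau^2/2 + f'''(t0) tau^3/6 + ... and realizing each derivative with a cascaded microring; the sketch below uses finite differences, and the weights are illustrative rather than the experimental settings.

        import numpy as np

        t = np.linspace(-5, 5, 1001)
        g = np.exp(-t ** 2)            # input Gaussian pulse
        d1 = np.gradient(g, t)         # first-order differentiation
        d2 = np.gradient(d1, t)        # second-order
        d3 = np.gradient(d2, t)        # third-order
        tau = 1.2                      # assumed Taylor shift parameter
        # Weighted sum of the pulse and its derivatives approximates g(t + tau).
        waveform = g + tau * d1 + (tau ** 2 / 2) * d2 + (tau ** 3 / 6) * d3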

  10. Method of plasma etching Ga-based compound semiconductors

    Science.gov (United States)

    Qiu, Weibin; Goddard, Lynford L.

    2012-12-25

    A method of plasma etching Ga-based compound semiconductors includes providing a process chamber and a source electrode adjacent to the process chamber. The process chamber contains a sample comprising a Ga-based compound semiconductor. The sample is in contact with a platen which is electrically connected to a first power supply, and the source electrode is electrically connected to a second power supply. The method includes flowing SiCl.sub.4 gas into the chamber, flowing Ar gas into the chamber, and flowing H.sub.2 gas into the chamber. RF power is supplied independently to the source electrode and the platen. A plasma is generated based on the gases in the process chamber, and regions of a surface of the sample adjacent to one or more masked portions of the surface are etched to create a substantially smooth etched surface including features having substantially vertical walls beneath the masked portions.

  11. A Novel Assembly Line Balancing Method Based on PSO Algorithm

    Directory of Open Access Journals (Sweden)

    Xiaomei Hu

    2014-01-01

    Full Text Available Assembly lines are widely used in manufacturing systems. The assembly line balancing problem is crucial during the design and management of assembly lines, since it directly affects the productivity of the whole manufacturing system. A model of the assembly line balancing problem is put forward and a general optimization method is proposed. The key data of the assembly line balancing problem are confirmed, and the precedence relations diagram is described. A double-objective optimization model based on takt time and the smoothness index is built, and a balance optimization scheme based on the PSO algorithm is proposed. Through simulation experiments on examples, the feasibility and validity of the PSO-based assembly line balancing method are proved.

  12. Gradient-based methods for production optimization of oil reservoirs

    Energy Technology Data Exchange (ETDEWEB)

    Suwartadi, Eka

    2012-07-01

    Production optimization for water flooding in the secondary phase of oil recovery is the main topic in this thesis. The emphasis has been on numerical optimization algorithms, tested on case examples using simple hypothetical oil reservoirs. Gradient-based optimization, which utilizes adjoint-based gradient computation, is used to solve the optimization problems. The first contribution of this thesis is to address output constraint problems. These kinds of constraints are natural in production optimization. Limiting total water production and water cut at producer wells are examples of such constraints. To maintain the feasibility of an optimization solution, a Lagrangian barrier method is proposed to handle the output constraints. This method incorporates the output constraints into the objective function, thus avoiding additional computations for the constraints gradient (Jacobian) which may be detrimental to the efficiency of the adjoint method. The second contribution is the study of the use of second-order adjoint-gradient information for production optimization. In order to speed up the convergence rate in the optimization, one usually uses quasi-Newton approaches such as the BFGS and SR1 methods. These methods compute an approximation of the inverse of the Hessian matrix given the first-order gradient from the adjoint method. The methods may not give significant speedup if the Hessian is ill-conditioned. We have developed and implemented the Hessian matrix computation using the adjoint method. Due to the high computational cost of the Newton method itself, we instead compute the Hessian-times-vector product which is used in a conjugate gradient algorithm. Finally, the last contribution of this thesis is on surrogate optimization for water flooding in the presence of the output constraints. Two kinds of model order reduction techniques are applied to build surrogate models. These are proper orthogonal decomposition (POD) and the discrete empirical interpolation method (DEIM

  13. A particle-based method for granular flow simulation

    KAUST Repository

    Chang, Yuanzhang; Bao, Kai; Zhu, Jian; Wu, Enhua

    2012-01-01

    We present a new particle-based method for granular flow simulation. In the method, a new elastic stress term, which is derived from a modified form of Hooke's law, is included in the momentum governing equation to handle the friction of granular materials. Viscosity force is also added to simulate the dynamic friction for the purpose of smoothing the velocity field and further maintaining the simulation stability. Benefiting from the Lagrangian nature of the SPH method, large flow deformation can be handled easily and naturally. In addition, a signed distance field is also employed to enforce the solid boundary condition. The experimental results show that the proposed method is effective and efficient for handling the flow of granular materials, and different kinds of granular behaviors can be well simulated by adjusting just one parameter. © 2012 Science China Press and Springer-Verlag Berlin Heidelberg.

  14. Evaluation of proxy-based millennial reconstruction methods

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Terry C.K.; Tsao, Min [University of Victoria, Department of Mathematics and Statistics, Victoria, BC (Canada); Zwiers, Francis W. [Environment Canada, Climate Research Division, Toronto, ON (Canada)

    2008-08-15

    A range of existing statistical approaches for reconstructing historical temperature variations from proxy data are compared using both climate model data and real-world paleoclimate proxy data. We also propose a new method for reconstruction that is based on a state-space time series model and Kalman filter algorithm. The state-space modelling approach and the recently developed RegEM method generally perform better than their competitors when reconstructing interannual variations in Northern Hemispheric mean surface air temperature. On the other hand, a variety of methods are seen to perform well when reconstructing surface air temperature variability on decadal time scales. An advantage of the new method is that it can incorporate additional, non-temperature, information into the reconstruction, such as the estimated response to external forcing, thereby permitting a simultaneous reconstruction and detection analysis as well as future projection. An application of these extensions is also demonstrated in the paper. (orig.)

  15. A Web service substitution method based on service cluster nets

    Science.gov (United States)

    Du, YuYue; Gai, JunJing; Zhou, MengChu

    2017-11-01

    Service substitution is an important research topic in the fields of Web services and service-oriented computing. This work presents a novel method to analyse and substitute Web services. A new concept, called a Service Cluster Net Unit, is proposed based on Web service clusters. A service cluster is converted into a Service Cluster Net Unit. Then it is used to analyse whether the services in the cluster can satisfy some service requests. Meanwhile, the substitution methods of an atomic service and a composite service are proposed. The correctness of the proposed method is proved, and the effectiveness is shown and compared with the state-of-the-art method via an experiment. It can be readily applied to e-commerce service substitution to meet the business automation needs.

  16. Novel welding image processing method based on fractal theory

    Institute of Scientific and Technical Information of China (English)

    陈强; 孙振国; 肖勇; 路井荣

    2002-01-01

    Computer vision has come into use in the fields of welding process control and automation. In order to improve the precision and speed of welding image processing, a novel method based on fractal theory is put forward in this paper. Compared with traditional methods, in the new method the image is preliminarily processed in the macroscopic regions and then thoroughly analyzed in the microscopic regions. With this method, an image is divided into regions according to the different fractal characteristics of the image edges, and the fuzzy regions containing image edges are detected; the image edges are then identified with the Sobel operator and fitted by the least squares method (LSM). Since the amount of data to be processed is decreased and the image noise is reduced, experiments have verified that the edges of the weld seam or weld pool can be recognized correctly and quickly.

  17. Distributed Research Project Scheduling Based on Multi-Agent Methods

    Directory of Open Access Journals (Sweden)

    Constanta Nicoleta Bodea

    2011-01-01

    Full Text Available Different project planning and scheduling approaches have been developed. Operational Research (OR) provides two major planning techniques: CPM (Critical Path Method) and PERT (Program Evaluation and Review Technique). Due to project complexity and the difficulty of using classical methods, new approaches were developed. Artificial Intelligence (AI) initially promoted the automatic planner concept, but model-based planning and scheduling methods emerged later on. The paper addresses the project scheduling optimization problem, when projects are seen as Complex Adaptive Systems (CAS). Taking into consideration two different approaches for project scheduling optimization, TCPSP (Time-Constrained Project Scheduling) and RCPSP (Resource-Constrained Project Scheduling), the paper focuses on a multi-agent implementation in MATLAB for TCPSP. Using a research project as a case study, the paper includes a comparison between two multi-agent methods: the Genetic Algorithm (GA) and the Ant Colony Algorithm (ACO).

  18. A particle-based method for granular flow simulation

    KAUST Repository

    Chang, Yuanzhang

    2012-03-16

    We present a new particle-based method for granular flow simulation. In the method, a new elastic stress term, which is derived from a modified form of Hooke's law, is included in the momentum governing equation to handle the friction of granular materials. Viscosity force is also added to simulate the dynamic friction for the purpose of smoothing the velocity field and further maintaining the simulation stability. Benefiting from the Lagrangian nature of the SPH method, large flow deformation can be handled easily and naturally. In addition, a signed distance field is also employed to enforce the solid boundary condition. The experimental results show that the proposed method is effective and efficient for handling the flow of granular materials, and different kinds of granular behaviors can be well simulated by adjusting just one parameter. © 2012 Science China Press and Springer-Verlag Berlin Heidelberg.

  19. Training Methods to Improve Evidence-Based Medicine Skills

    Directory of Open Access Journals (Sweden)

    Filiz Ozyigit

    2010-06-01

    Full Text Available Evidence based medicine (EBM) is the conscientious, explicit and judicious use of current best evidence in making decisions about the care of individual patients. It is estimated that only 15% of medical interventions are evidence-based. Increasing demand, new technological developments, malpractice legislation, and a very rapid increase in knowledge and knowledge sources push physicians toward EBM, but at the same time increase their load by giving them the responsibility to improve their skills. Clinical appraisal skills are needed all the more as the number of clinical trials and observational studies increases. However, many of the physicians who are in the front line of patient care do not use this growing body of evidence. There are several examples of different training methods intended to improve physicians' skills in evidence based practice. There are many training methods to improve EBM skills, and these trainings might be given during medical school, during residency, or as continuing training to practitioners in the field. It is important to discuss these different training methods in our country as well and to encourage the dissemination of feasible and effective methods. [TAF Prev Med Bull 2010; 9(3): 245-254]

  20. Digital Resonant Controller based on Modified Tustin Discretization Method

    Directory of Open Access Journals (Sweden)

    STOJIC, D.

    2016-11-01

    Full Text Available Resonant controllers are used in power converter voltage and current control due to their simplicity and accuracy. However, digital implementation of resonant controllers introduces problems related to zero and pole mapping from the continuous to the discrete time domain. Namely, some discretization methods introduce significant errors in the digital controller resonant frequency, resulting in the loss of the asymptotic AC reference tracking, especially at high resonant frequencies. The delay compensation typical for resonant controllers can also be compromised. Based on the existing analysis, it can be concluded that the Tustin discretization with frequency prewarping represents a preferable choice from the point of view of the resonant frequency accuracy. However, this discretization method has a shortcoming in applications that require real-time frequency adaptation, since complex trigonometric evaluation is required for each frequency change. In order to overcome this problem, in this paper the modified Tustin discretization method is proposed based on the Taylor series approximation of the frequency prewarping function. By comparing the novel discretization method with commonly used two-integrator-based proportional-resonant (PR digital controllers, it is shown that the resulting digital controller resonant frequency and time delay compensation errors are significantly reduced for the novel controller.
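
    The idea can be illustrated with the prewarping map itself: exact prewarping needs a tangent evaluation at every frequency update, while a truncated Taylor series of tan avoids it. The sketch below uses the plain tan series; the paper's exact approximation order and coefficients may differ.

        import math

        def prewarped(w0, T):
            # Exact Tustin prewarping: digital resonance lands exactly at w0.
            return (2.0 / T) * math.tan(w0 * T / 2.0)

        def prewarped_taylor(w0, T):
            # tan(x) ~ x + x^3/3 + 2x^5/15 -- no trig call, suitable for
            # real-time resonant-frequency adaptation.
            x = w0 * T / 2.0
            return (2.0 / T) * (x + x ** 3 / 3.0 + 2.0 * x ** 5 / 15.0)

        w0, T = 2 * math.pi * 50, 1e-4     # 50 Hz resonance, 10 kHz sampling
        print(prewarped(w0, T), prewarped_taylor(w0, T))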

  1. Estimation of pump operational state with model-based methods

    International Nuclear Information System (INIS)

    Ahonen, Tero; Tamminen, Jussi; Ahola, Jero; Viholainen, Juha; Aranto, Niina; Kestilae, Juha

    2010-01-01

    Pumps are widely used in industry, and they account for 20% of the industrial electricity consumption. Since the speed variation is often the most energy-efficient method to control the head and flow rate of a centrifugal pump, frequency converters are used with induction motor-driven pumps. Although a frequency converter can estimate the operational state of an induction motor without external measurements, the state of a centrifugal pump or other load machine is not typically considered. The pump is, however, usually controlled on the basis of the required flow rate or output pressure. As the pump operational state can be estimated with a general model having adjustable parameters, external flow rate or pressure measurements are not necessary to determine the pump flow rate or output pressure. Hence, external measurements could be replaced with an adjustable model for the pump that uses estimates of the motor operational state. Besides control purposes, modelling the pump operation can provide useful information for energy auditing and optimization purposes. In this paper, two model-based methods for pump operation estimation are presented. Factors affecting the accuracy of the estimation methods are analyzed. The applicability of the methods is verified by laboratory measurements and tests in two pilot installations. Test results indicate that the estimation methods can be applied to the analysis and control of pump operation. The accuracy of the methods is sufficient for auditing purposes, and the methods can inform the user if the pump is driven inefficiently.

  2. Statistical Bayesian method for reliability evaluation based on ADT data

    Science.gov (United States)

    Lu, Dawei; Wang, Lizhi; Sun, Yusheng; Wang, Xiaohong

    2018-05-01

    Accelerated degradation testing (ADT) is frequently conducted in the laboratory to predict a product's reliability under normal operating conditions. Two kinds of methods, degradation path models and stochastic process models, are utilized to analyze degradation data, and the latter is the more popular. However, some limitations still exist, such as an imprecise solution process and imprecise estimation of the degradation ratio, which may affect the accuracy of the acceleration model and the extrapolated value. Moreover, the usual solution to this problem, the Bayesian method, loses key information when unifying the degradation data. In this paper, a new data processing and parameter inference method based on the Bayesian method is proposed to handle degradation data and solve the problems above. First, the Wiener process and an acceleration model are chosen; second, the initial values of the degradation model and the parameters of the prior and posterior distributions under each level are calculated, with updating and iteration of the estimated values; third, the lifetime and reliability values are estimated on the basis of the estimated parameters; finally, a case study is provided to demonstrate the validity of the proposed method. The results illustrate that the proposed method is quite effective and accurate in estimating the lifetime and reliability of a product.

  3. High viscosity fluid simulation using particle-based method

    KAUST Repository

    Chang, Yuanzhang

    2011-03-01

    We present a new particle-based method for high viscosity fluid simulation. In the method, a new elastic stress term, which is derived from a modified form of Hooke's law, is included in the traditional Navier-Stokes equation to simulate the movements of high viscosity fluids. Benefiting from the Lagrangian nature of the Smoothed Particle Hydrodynamics method, large flow deformation can be handled easily and naturally. In addition, in order to eliminate the particle deficiency problem near the boundary, ghost particles are employed to enforce the solid boundary condition. Compared with Finite Element Methods with complicated and time-consuming remeshing operations, our method is much more straightforward to implement. Moreover, our method doesn't need to store and compare to an initial rest state. The experimental results show that the proposed method is effective and efficient in handling the movements of highly viscous flows, and a large variety of different kinds of fluid behaviors can be well simulated by adjusting just one parameter. © 2011 IEEE.

  4. An Automated Baseline Correction Method Based on Iterative Morphological Operations.

    Science.gov (United States)

    Chen, Yunliang; Dai, Liankui

    2018-05-01

    Raman spectra usually suffer from baseline drift caused by fluorescence or other reasons. Therefore, baseline correction is a necessary and crucial step that must be performed before subsequent processing and analysis of Raman spectra. An automated baseline correction method based on iterative morphological operations is proposed in this work. The method can adaptively determine the structuring element first and then gradually remove the spectral peaks during iteration to get an estimated baseline. Experiments on simulated data and real-world Raman data show that the proposed method is accurate, fast, and flexible for handling different kinds of baselines in various practical situations. The comparison of the proposed method with some state-of-the-art baseline correction methods demonstrates its advantages over the existing methods in terms of accuracy, adaptability, and flexibility. Although only Raman spectra are investigated in this paper, the proposed method is hopefully to be used for the baseline correction of other analytical instrumental signals, such as IR spectra and chromatograms.
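
    A loose sketch of the idea, assuming SciPy is available: grey openings with a growing flat structuring element are compared until the estimate stabilizes, which stands in for the paper's adaptive structuring-element selection rather than reproducing it exactly.

        import numpy as np
        from scipy.ndimage import grey_opening

        def morphological_baseline(y, max_window=301, tol=1e-3):
            baseline = grey_opening(y, size=3)
            for w in range(5, max_window, 2):        # grow the flat element
                opened = grey_opening(y, size=w)
                if np.abs(opened - baseline).sum() <= tol * np.abs(baseline).sum():
                    return opened                    # estimate has stabilized
                baseline = opened
            return baseline

        # corrected = spectrum - morphological_baseline(spectrum)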

  5. Multiple Beta Spectrum Analysis Method Based on Spectrum Fitting

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Uk Jae; Jung, Yun Song; Kim, Hee Reyoung [UNIST, Ulsan (Korea, Republic of)

    2016-05-15

    When a sample of several mixed radioactive nuclides is measured, it is difficult to separate the nuclides due to the overlapping of their spectra. For this reason, a simple mathematical analysis method for spectrum analysis of mixed beta-ray sources has been studied. However, existing research was in need of a more accurate spectral analysis method, as it had a problem of accuracy. This study describes methods for separating mixed beta-ray sources through analysis of the beta spectrum slope based on curve fitting, to resolve the existing problem. Among the fitting methods considered, namely Fourier, polynomial, Gaussian, and sum of sines, the sum-of-sines fit was understood to be the best for obtaining an equation for the distribution of the mixed beta spectrum. It was shown to be the most appropriate for the analysis of spectra with various ratios of mixed nuclides. It is thought that this method could be applied to rapid spectrum analysis of mixed beta-ray sources.
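
    A two-term sum-of-sines fit with SciPy illustrates the approach; the synthetic "measured" spectrum and the number of sine terms are assumptions made for the sketch, not the paper's data or model order.

        import numpy as np
        from scipy.optimize import curve_fit

        def sum_of_sines(e, a1, b1, c1, a2, b2, c2):
            return a1 * np.sin(b1 * e + c1) + a2 * np.sin(b2 * e + c2)

        rng = np.random.default_rng(1)
        energy = np.linspace(0.05, 2.0, 200)                    # MeV, illustrative
        counts = (sum_of_sines(energy, 120, 0.9, 1.6, 40, 2.5, 0.4)
                  + rng.normal(0, 2, energy.size))              # stand-in spectrum

        p0 = [100, 1.0, 1.5, 30, 2.0, 0.5]                      # rough guesses
        params, _ = curve_fit(sum_of_sines, energy, counts, p0=p0)
        # The fitted curve's slope can then be analyzed to separate the
        # contributions of the mixed nuclides.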

  6. Ratio-based vs. model-based methods to correct for urinary creatinine concentrations.

    Science.gov (United States)

    Jain, Ram B

    2016-08-01

    Creatinine-corrected urinary analyte concentration is usually computed as the ratio of the observed analyte concentration divided by the observed urinary creatinine concentration (UCR). This ratio-based method is flawed since it implicitly assumes that hydration is the only factor that affects urinary creatinine concentrations. On the contrary, it has been shown in the literature that age, gender, race/ethnicity, and other factors also affect UCR. Consequently, an optimal method to correct for UCR should correct for hydration as well as other factors like age, gender, and race/ethnicity that affect UCR. Model-based creatinine correction, in which observed UCRs are used as an independent variable in regression models, has been proposed. This study was conducted to evaluate the performance of ratio-based and model-based creatinine correction methods when the effects of gender, age, and race/ethnicity are evaluated one factor at a time for selected urinary analytes and metabolites. It was observed that the ratio-based method leads to statistically significant pairwise differences, for example, between males and females or between non-Hispanic whites (NHW) and non-Hispanic blacks (NHB), more often than the model-based method. However, depending upon the analyte of interest, the reverse is also possible. The estimated ratios of geometric means (GM), for example, male to female or NHW to NHB, were also compared for the two methods. When estimated UCRs were higher for the group (for example, males) in the numerator of this ratio, these ratios were higher for the model-based method, for example, the male to female ratio of GMs. When estimated UCRs were lower for the group (for example, NHW) in the numerator of this ratio, these ratios were higher for the ratio-based method, for example, the NHW to NHB ratio of GMs. The model-based method is the method of choice if all factors that affect UCR are to be accounted for.
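
    The contrast between the two corrections can be written in a few lines; the synthetic data and coefficients below are invented for illustration and carry no epidemiological meaning.

        import numpy as np

        rng = np.random.default_rng(0)
        n = 200
        log_ucr = rng.normal(0.0, 0.5, n)              # log urinary creatinine
        age = rng.uniform(20, 70, n)
        male = rng.integers(0, 2, n).astype(float)
        log_analyte = (1.0 + 0.8 * log_ucr + 0.01 * age + 0.2 * male
                       + rng.normal(0, 0.3, n))

        # Ratio-based correction: subtract log UCR outright (slope forced to 1).
        ratio_corrected = log_analyte - log_ucr

        # Model-based correction: UCR enters as a covariate with an estimated
        # slope, so age/gender effects on UCR are not forced into the ratio.
        X = np.column_stack([np.ones(n), log_ucr, age, male])
        beta, *_ = np.linalg.lstsq(X, log_analyte, rcond=None)
        model_corrected = log_analyte - beta[1] * log_ucr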

  7. Segmentation of Hyperacute Cerebral Infarcts Based on Sparse Representation of Diffusion Weighted Imaging

    Directory of Open Access Journals (Sweden)

    Xiaodong Zhang

    2016-01-01

    Full Text Available Segmentation of infarcts at the hyperacute stage is challenging as they exhibit substantial variability which may even be hard for experts to delineate manually. In this paper, a sparse representation based classification method is explored. For each patient, four volumetric data items including three volumes of diffusion weighted imaging and a computed asymmetry map are employed to extract patch features which are then fed to dictionary learning and classification based on sparse representation. Elastic net is adopted to replace the traditional L0-norm/L1-norm constraints on sparse representation to stabilize the sparse code. To decrease computation cost and to reduce false positives, regions-of-interest are determined to confine candidate infarct voxels. The proposed method has been validated on 98 consecutive patients recruited within 6 hours from onset. It is shown that the proposed method could handle well infarcts with intensity variability and ill-defined edges to yield a significantly higher Dice coefficient (0.755 ± 0.118) than the other two methods and their enhanced versions by confining their segmentations within the regions-of-interest (average Dice coefficient less than 0.610). The proposed method could provide a potential tool to quantify infarcts from diffusion weighted imaging at the hyperacute stage with accuracy and speed to assist decision making, especially for thrombolytic therapy.

  8. Protein-Based Nanoparticle Preparation via Nanoprecipitation Method

    Directory of Open Access Journals (Sweden)

    Mohamad Tarhini

    2018-03-01

    Full Text Available Nanoparticles are nowadays largely investigated in the field of drug delivery. Among nanoparticles, protein-based particles are of paramount importance since they are natural, biodegradable, biocompatible, and nontoxic. There are several methods to prepare protein-containing nanoparticles, but only a few studies have been dedicated to the preparation of protein-based nanoparticles. The aim of this work was therefore to report on the preparation of bovine serum albumin (BSA)-based nanoparticles using a well-defined nanoprecipitation process. Special attention has been dedicated to a systematic study in order to understand separately the effect of each operating parameter of the method (such as protein concentration, solvent/non-solvent volume ratio, non-solvent injection rate, ionic strength of the buffer solution, pH, and cross-linking) on the colloidal properties of the obtained nanoparticles. In addition, the mixing processes (batch or drop-wise) were also investigated. Using a well-defined formulation, submicron protein-based nanoparticles have been obtained. All prepared particles have been characterized in terms of size, size distribution, morphology, and electrokinetic properties. In addition, the stability of the nanoparticles was investigated using Ultraviolet (UV) scans and electrophoresis, and the optimal conditions for preparing BSA nanoparticles by the nanoprecipitation method were concluded.

  9. AN OBJECT-BASED METHOD FOR CHINESE LANDFORM TYPES CLASSIFICATION

    Directory of Open Access Journals (Sweden)

    H. Ding

    2016-06-01

    Full Text Available Landform classification is a necessary task for various fields of landscape and regional planning, for example landscape evaluation, erosion studies, and hazard prediction. This study proposes an improved object-based classification for Chinese landform types using the factor importance analysis of random forest and the gray-level co-occurrence matrix (GLCM). In this research, based on a 1 km DEM of China, the combination of terrain factors extracted from the DEM is selected by correlation analysis and Sheffield's entropy method. A random forest classification tree is applied to evaluate the importance of the terrain factors, which are used as multi-scale segmentation thresholds. Then the GLCM is computed to form the knowledge base for classification. The classification result was checked using the 1:4,000,000 Chinese Geomorphological Map as reference, and the overall classification accuracy of the proposed method is 5.7% higher than ISODATA unsupervised classification and 15.7% higher than the traditional object-based classification method.
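
    The factor-importance step can be sketched with scikit-learn; the factor names, stand-in data, and class labels below are invented so the snippet runs, and do not reproduce the paper's inputs.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier

        rng = np.random.default_rng(0)
        factor_names = ["slope", "relief", "roughness", "curvature"]
        X = rng.normal(size=(300, len(factor_names)))          # fake terrain factors
        y = (X[:, 0] + 0.5 * X[:, 2] > 0).astype(int)          # fake landform label

        rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
        # Top-ranked factors would set the multi-scale segmentation thresholds.
        for name, imp in sorted(zip(factor_names, rf.feature_importances_),
                                key=lambda p: -p[1]):
            print(f"{name:10s} {imp:.3f}")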

  10. AN IMPROVED INTERFEROMETRIC CALIBRATION METHOD BASED ON INDEPENDENT PARAMETER DECOMPOSITION

    Directory of Open Access Journals (Sweden)

    J. Fan

    2018-04-01

    Full Text Available Interferometric SAR is sensitive to earth surface undulation. The accuracy of interferometric parameters plays a significant role in a precise digital elevation model (DEM). The purpose of interferometric calibration is to obtain a high-precision global DEM by calculating the interferometric parameters using ground control points (GCPs). However, interferometric parameters are always calculated jointly, making them difficult to decompose precisely. In this paper, we propose an interferometric calibration method based on independent parameter decomposition (IPD). Firstly, the parameters related to the interferometric SAR measurement are determined based on the three-dimensional reconstruction model. Secondly, the sensitivity of the interferometric parameters is quantitatively analyzed after the geometric parameters are completely decomposed. Finally, each interferometric parameter is calculated based on IPD and the interferometric calibration model is established. We take Weinan of Shanxi province as an example and choose 4 TerraDEM-X image pairs to carry out the interferometric calibration experiment. The results show that the elevation accuracy of all SAR images is better than 2.54 m after interferometric calibration. Furthermore, the proposed method can obtain an accuracy of DEM products better than 2.43 m in the flat area and 6.97 m in the mountainous area, which proves the correctness and effectiveness of the proposed IPD based interferometric calibration method. The results provide a technical basis for topographic mapping at 1 : 50000 and even larger scales in flat and mountainous areas.

  11. An Improved Interferometric Calibration Method Based on Independent Parameter Decomposition

    Science.gov (United States)

    Fan, J.; Zuo, X.; Li, T.; Chen, Q.; Geng, X.

    2018-04-01

    Interferometric SAR is sensitive to earth surface undulation. The accuracy of interferometric parameters plays a significant role in precise digital elevation model (DEM). The interferometric calibration is to obtain high-precision global DEM by calculating the interferometric parameters using ground control points (GCPs). However, interferometric parameters are always calculated jointly, making them difficult to decompose precisely. In this paper, we propose an interferometric calibration method based on independent parameter decomposition (IPD). Firstly, the parameters related to the interferometric SAR measurement are determined based on the three-dimensional reconstruction model. Secondly, the sensitivity of interferometric parameters is quantitatively analyzed after the geometric parameters are completely decomposed. Finally, each interferometric parameter is calculated based on IPD and interferometric calibration model is established. We take Weinan of Shanxi province as an example and choose 4 TerraDEM-X image pairs to carry out interferometric calibration experiment. The results show that the elevation accuracy of all SAR images is better than 2.54 m after interferometric calibration. Furthermore, the proposed method can obtain the accuracy of DEM products better than 2.43 m in the flat area and 6.97 m in the mountainous area, which can prove the correctness and effectiveness of the proposed IPD based interferometric calibration method. The results provide a technical basis for topographic mapping of 1 : 50000 and even larger scale in the flat area and mountainous area.

  12. Springback Compensation Based on FDM-DTF Method

    International Nuclear Information System (INIS)

    Liu Qiang; Kang Lan

    2010-01-01

    Stamping part error caused by springback is usually considered to be a tooling defect in the sheet metal forming process. This problem can be corrected by adjusting the tooling shape to an appropriate shape. In this paper, springback compensation based on the FDM-DTF method is proposed for the design and modification of the tooling shape. Firstly, based on the FDM method, the tooling shape is designed by reversing the direction of the internal forces at the end of the forming simulation; the required tooling shape can be obtained through a few iterations. Secondly, the actual tooling is produced based on the results of the first step. When the discrete data of the tooling and part surfaces are investigated, the transfer function between the numerical springback error and the real springback error can be calculated from wavelet transform results, which can be used to predict the tooling shape for the desired product. Finally, the FDM-DTF method is shown to control springback effectively when applied to springback control of a 2D irregular product.

  13. Development of redesign method of production system based on QFD

    Science.gov (United States)

    Kondoh, Shinsuke; Umeda, Yasusi; Togawa, Hisashi

    In order to catch up with a rapidly changing market environment, rapid and flexible redesign of production systems is quite important. For effective and rapid redesign of a production system, a redesign support system is urgently needed. To this end, this paper proposes a redesign method for production systems based on Quality Function Deployment (QFD). This method represents a designer's intention in the form of QFD, collects experts' knowledge as "Production Method (PM) modules," and formulates redesign guidelines as seven redesign operations, so as to support a designer in finding improvement ideas in a systematic manner. This paper also illustrates a redesign support tool for production systems that we have developed based on this method, and demonstrates its feasibility with a practical example of a production system for a contact probe. A result from this example shows that a novice designer can achieve cost reductions comparable to those of veteran designers. From this result, we conclude that our redesign method is effective and feasible for supporting the redesign of a production system.

  14. MFAM: Multiple Frequency Adaptive Model-Based Indoor Localization Method.

    Science.gov (United States)

    Tuta, Jure; Juric, Matjaz B

    2018-03-24

    This paper presents MFAM (Multiple Frequency Adaptive Model-based localization method), a novel model-based indoor localization method that is capable of using multiple wireless signal frequencies simultaneously. It utilizes an indoor architectural model and the physical properties of wireless signal propagation through objects and space. The motivation for developing a multiple frequency localization method lies in the future Wi-Fi standards (e.g., 802.11ah) and the growing number of various wireless signals present in buildings (e.g., Wi-Fi, Bluetooth, ZigBee, etc.). Current indoor localization methods mostly rely on a single wireless signal type and often require many devices to achieve the necessary accuracy. MFAM utilizes multiple wireless signal types and improves the localization accuracy over the usage of a single frequency. It continuously monitors signal propagation through space and adapts the model according to the changes indoors. Using multiple signal sources lowers the required number of access points for a specific signal type while utilizing signals already present indoors. Due to the unavailability of 802.11ah hardware, we have evaluated the proposed method with similar signals; we have used 2.4 GHz Wi-Fi and 868 MHz HomeMatic home automation signals. We have performed the evaluation in a modern two-bedroom apartment and measured a mean localization error of 2.0 to 2.3 m and a median error of 2.0 to 2.2 m. Based on our evaluation results, using two different signals improves the localization accuracy by 18% in comparison to the 2.4 GHz Wi-Fi-only approach. Additional signals would improve the accuracy even further. We have shown that MFAM provides better accuracy than competing methods, while having several advantages for real-world usage.

  15. MFAM: Multiple Frequency Adaptive Model-Based Indoor Localization Method

    Directory of Open Access Journals (Sweden)

    Jure Tuta

    2018-03-01

    Full Text Available This paper presents MFAM (Multiple Frequency Adaptive Model-based localization method), a novel model-based indoor localization method that is capable of using multiple wireless signal frequencies simultaneously. It utilizes an indoor architectural model and the physical properties of wireless signal propagation through objects and space. The motivation for developing a multiple frequency localization method lies in future Wi-Fi standards (e.g., 802.11ah) and the growing number of wireless signals present in buildings (e.g., Wi-Fi, Bluetooth, ZigBee, etc.). Current indoor localization methods mostly rely on a single wireless signal type and often require many devices to achieve the necessary accuracy. MFAM utilizes multiple wireless signal types and improves localization accuracy over the use of a single frequency. It continuously monitors signal propagation through space and adapts the model according to changes indoors. Using multiple signal sources lowers the required number of access points for a specific signal type while utilizing signals already present indoors. Due to the unavailability of 802.11ah hardware, we have evaluated the proposed method with similar signals: 2.4 GHz Wi-Fi and 868 MHz HomeMatic home automation signals. We performed the evaluation in a modern two-bedroom apartment and measured a mean localization error of 2.0 to 2.3 m and a median error of 2.0 to 2.2 m. Based on our evaluation results, using two different signals improves the localization accuracy by 18% in comparison to the 2.4 GHz Wi-Fi-only approach. Additional signals would improve the accuracy even further. We have shown that MFAM provides better accuracy than competing methods, while having several advantages for real-world usage.
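
    The model-based, multi-frequency idea above can be illustrated with a toy sketch: predict the received signal strength at every candidate position with a log-distance path-loss model for each carrier frequency, and pick the position that best explains all measurements. This is only a minimal stand-in for MFAM, which additionally uses an indoor architectural model and adaptive updating; the anchor layout, transmit power, and propagation exponent below are illustrative assumptions.

```python
import numpy as np

def path_loss_db(d, f_mhz, n=2.5, d0=1.0):
    """Log-distance path loss (dB) at distance d (m) for carrier f_mhz.

    Free-space loss at the reference distance d0 plus a distance term;
    the exponent n lumps together indoor propagation effects.
    """
    fspl_d0 = 20 * np.log10(d0) + 20 * np.log10(f_mhz) - 27.55
    return fspl_d0 + 10 * n * np.log10(np.maximum(d, d0) / d0)

def localize(anchors, rssi, tx_power, extent=(10.0, 8.0), step=0.1):
    """Grid search for the position whose predicted RSSI (over all
    anchors at both frequencies) best matches the measurements."""
    best, best_err = None, np.inf
    for x in np.arange(0, extent[0], step):
        for y in np.arange(0, extent[1], step):
            err = 0.0
            for (ax, ay, f), r in zip(anchors, rssi):
                d = np.hypot(x - ax, y - ay)
                err += (tx_power - path_loss_db(d, f) - r) ** 2
            if err < best_err:
                best, best_err = (x, y), err
    return best

# two 2.4 GHz Wi-Fi anchors and two 868 MHz anchors (positions in metres)
anchors = [(0.0, 0.0, 2400.0), (10.0, 0.0, 2400.0),
           (0.0, 8.0, 868.0), (10.0, 8.0, 868.0)]
rssi = [-55.0, -70.0, -48.0, -60.0]   # measured signal strengths, dBm
print(localize(anchors, rssi, tx_power=0.0))
```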

  16. [A retrieval method of drug molecules based on graph collapsing].

    Science.gov (United States)

    Qu, J W; Lv, X Q; Liu, Z M; Liao, Y; Sun, P H; Wang, B; Tang, Z

    2018-04-18

    To establish a compact and efficient hypergraph representation and a graph-similarity-based retrieval method for molecules, in order to achieve effective and efficient medicine information retrieval. The chemical structural formula (CSF) is a primary search target in the field of medicine information retrieval, as a unique and precise identifier of each compound at the molecular level. To retrieve medicine information effectively and efficiently, a complete workflow of a graph-based CSF retrieval system was introduced. This system accepted photos taken with smartphones and sketches drawn on tablet personal computers as CSF inputs, and formalized the CSFs as graphs. This paper then proposed a compact and efficient hypergraph representation for molecules, based on an analysis of the factors that directly affect the efficiency of graph matching. According to the characteristics of CSFs, a hierarchical collapsing method combining graph isomorphism and frequent subgraph mining was adopted. A fundamental challenge remained: subgraph overlapping during the collapsing procedure, which hindered the method from establishing the correct compact hypergraph of an original CSF graph. Therefore, a graph-isomorphism-based algorithm was proposed to select dominant acyclic subgraphs on the basis of overlapping analysis. Finally, the spatial similarity among graphical CSFs was evaluated by multi-dimensional measures of similarity. To evaluate the performance of the proposed method, the system was first compared with Wikipedia Chemical Structure Explorer (WCSE), the state-of-the-art system that allows CSF similarity searching within the Wikipedia molecules dataset, on retrieval accuracy. The system achieved higher values of mean average precision, discounted cumulative gain, rank-biased precision, and expected reciprocal rank than WCSE from the top-2 to the top-10 retrieved results. Specifically, the system achieved 10%, 1.41, 6.42%, and 1

  17. Vision-based method for tracking meat cuts in slaughterhouses

    DEFF Research Database (Denmark)

    Larsen, Anders Boesen Lindbo; Hviid, Marchen Sonja; Engbo Jørgensen, Mikkel

    2014-01-01

    Meat traceability is important for linking process and quality parameters from individual meat cuts back to the production data from the farmer who produced the animal. Current tracking systems rely on physical tagging, which is too intrusive for individual meat cuts in a slaughterhouse environment... (hanging, rough treatment and incorrect trimming), and our method is able to handle these perturbations gracefully. This study shows that the suggested vision-based approach to tracking is a promising alternative to the more intrusive methods currently available...

  18. Optimisation of test and maintenance based on probabilistic methods

    International Nuclear Information System (INIS)

    Cepin, M.

    2001-01-01

    This paper presents a method which, based on the models and results of probabilistic safety assessment, minimises nuclear power plant risk by optimising the arrangement of safety equipment outages. The test and maintenance activities of the safety equipment are arranged in time, so the classical static fault tree models are extended with time requirements to be capable of modelling real plant states. A house event matrix is used, which enables modelling of the equipment arrangements through discrete points in time. The result of the method is the configuration of equipment outages that yields the minimal risk, where risk is represented by system unavailability. (authors)

  19. Multi-band Image Registration Method Based on Fourier Transform

    Institute of Scientific and Technical Information of China (English)

    庹红娅; 刘允才

    2004-01-01

    This paper presents a registration method based on the Fourier transform for multi-band images involving translation and small rotation. Although images from different bands differ considerably in intensity and features, they contain common information that can be exploited. A model is given in which the multi-band images have linear correlations in the least-squares sense, and it is proved that the correlation coefficients have no effect on the registration process if the two images are linearly correlated. Finally, the steps of the registration method are given. Experiments show that the model is reasonable and the results are satisfactory.
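
    The record does not give the algorithmic details, but the standard Fourier-domain tool for recovering a translation between two images is phase correlation. A minimal sketch follows, assuming a pure integer translation; the paper's linear-correlation model and small-rotation handling are not reproduced.

```python
import numpy as np

def phase_correlation(a, b):
    """Estimate the integer translation between images a and b.

    The normalized cross-power spectrum of two translated images is a
    pure phase ramp; its inverse FFT is a delta at the shift.
    """
    A = np.fft.fft2(a)
    B = np.fft.fft2(b)
    cross = A * np.conj(B)
    cross /= np.abs(cross) + 1e-12          # keep phase only
    corr = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # wrap shifts larger than half the image size to negative values
    if dy > a.shape[0] // 2: dy -= a.shape[0]
    if dx > a.shape[1] // 2: dx -= a.shape[1]
    return dy, dx

rng = np.random.default_rng(0)
img = rng.random((128, 128))
shifted = np.roll(img, (5, -9), axis=(0, 1))
print(phase_correlation(shifted, img))      # -> (5, -9)
```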

  20. A SAR IMAGE REGISTRATION METHOD BASED ON SIFT ALGORITHM

    Directory of Open Access Journals (Sweden)

    W. Lu

    2017-09-01

    Full Text Available In order to improve the stability and speed of synthetic aperture radar (SAR) image matching, an effective method is presented. Firstly, adaptive smoothing based on Wallis filtering is employed for image denoising, so that noise is not amplified in subsequent processing. Secondly, feature points are extracted by a simplified SIFT algorithm. Finally, exact matching of the images is achieved with these points. Compared with existing methods, the approach not only maintains the richness of features but also reduces image noise. Simulation results show that the proposed algorithm achieves a better matching effect.
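
    For orientation, a minimal sketch of SIFT-based matching with OpenCV is shown below; it uses the full SIFT implementation and a Lowe ratio test rather than the paper's simplified SIFT and Wallis-based pre-filtering, and the file names are hypothetical.

```python
import cv2
import numpy as np

def register_sift(img_ref, img_mov):
    """Detect SIFT keypoints in both images, match them with a Lowe
    ratio test, and estimate a similarity transform with RANSAC."""
    sift = cv2.SIFT_create()
    k1, d1 = sift.detectAndCompute(img_ref, None)
    k2, d2 = sift.detectAndCompute(img_mov, None)
    matches = cv2.BFMatcher(cv2.NORM_L2).knnMatch(d2, d1, k=2)
    good = [m[0] for m in matches
            if len(m) == 2 and m[0].distance < 0.75 * m[1].distance]
    src = np.float32([k2[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([k1[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    M, inliers = cv2.estimateAffinePartial2D(src, dst, method=cv2.RANSAC)
    n_inliers = int(inliers.sum()) if inliers is not None else 0
    return M, n_inliers

# hypothetical file names; any roughly co-registered SAR pair would do
ref = cv2.imread("sar_ref.png", cv2.IMREAD_GRAYSCALE)
mov = cv2.imread("sar_mov.png", cv2.IMREAD_GRAYSCALE)
M, n_inliers = register_sift(ref, mov)
warped = cv2.warpAffine(mov, M, (ref.shape[1], ref.shape[0]))
```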

  1. Real reproduction and evaluation of color based on BRDF method

    Science.gov (United States)

    Qin, Feng; Yang, Weiping; Yang, Jia; Li, Hongning; Luo, Yanlin; Long, Hongli

    2013-12-01

    It is difficult to faithfully reproduce the original color of targets under different illumination environments using traditional methods, so a function that can reconstruct the reflection characteristics of every point on the surface of a target is urgently required to improve the authenticity of color reproduction; this function is known as the Bidirectional Reflectance Distribution Function (BRDF). A method of color reproduction based on BRDF measurement is introduced in this paper. Radiometry is combined with colorimetric theory to measure the irradiance and radiance of a GretagMacbeth 24-patch ColorChecker using a PR-715 Radiation Spectrophotometer (PHOTO RESEARCH, Inc., USA). The BRDF and BRF (Bidirectional Reflectance Factor) values of every color patch relative to the reference area are calculated from the irradiance and radiance, and thus the color tristimulus values of the 24 ColorChecker patches are reconstructed. The results reconstructed by the BRDF method are compared with values calculated from the reflectance measured by the PR-715, and the chromaticity coordinates in color space and the color differences between the two are analyzed. The experimental results show that the average color difference and sample standard deviation between the method proposed in this paper and the traditional reflectance-based reconstruction are 2.567 and 1.3049, respectively. Theoretical and experimental analysis indicates that color reproduction based on the BRDF describes the color information of an object in hemispherical space more completely than reflectance alone. The method proposed in this paper is effective and feasible for research on chromaticity reproduction.

  2. MO-C-17A-02: A Novel Method for Evaluating Hepatic Stiffness Based On 4D-MRI and Deformable Image Registration

    Energy Technology Data Exchange (ETDEWEB)

    Cui, T [Duke University, Durham, NC (United States); Liang, X [Duke Unversity, Durham, NC (United States); Czito, B; Palta, M; Bashir, M; Yin, F; Cai, J [Duke University Medical Center, Durham, NC (United States)

    2014-06-15

    Purpose: Quantitative imaging of hepatic stiffness has significant potential in radiation therapy, ranging from treatment planning to response assessment. This study aims to develop a novel, noninvasive method to quantify liver stiffness with 3D strain maps of the liver using 4D-MRI and deformable image registration (DIR). Methods: Five patients with liver cancer were imaged with an institutionally developed 4D-MRI technique under an IRB-approved protocol. Displacement vector fields (DVFs) across the liver were generated via DIR of the different phases of 4D-MRI. The strain tensor at each voxel of interest (VOI) was computed from the relative displacements between the VOI and each of its six adjacent voxels. The three principal strains (E{sub 1}, E{sub 2} and E{sub 3}) of the VOI were derived as the eigenvalues of the strain tensor, which represent the magnitudes of the maximum and minimum stretches. Strain tensors for two regions of interest (ROIs) were calculated and compared for each patient, one within the tumor (ROI{sub 1}) and the other in normal liver distant from the heart (ROI{sub 2}). Results: 3D strain maps were successfully generated for each respiratory phase of 4D-MRI for all patients. Liver deformations induced by both respiration and cardiac motion were observed. Differences in strain values between regions adjacent to and distant from the heart indicate significant deformation caused by cardiac expansion during diastole. The large E{sub 1}/E{sub 2} (∼2) and E{sub 1}/E{sub 3} (∼10) ratios reflect the predominance of liver deformation in the superior-inferior direction. The mean E{sub 1} in ROI{sub 1} (0.12±0.10) was smaller than in ROI{sub 2} (0.15±0.12), reflecting a higher degree of stiffness in the cirrhotic tumor. Conclusion: We have successfully developed a novel method for quantitatively evaluating regional hepatic stiffness based on DIR of 4D-MRI. Our initial findings indicate that liver strain is heterogeneous, and liver tumors may have lower principal strain values than normal liver.

  3. MO-C-17A-02: A Novel Method for Evaluating Hepatic Stiffness Based On 4D-MRI and Deformable Image Registration

    International Nuclear Information System (INIS)

    Cui, T; Liang, X; Czito, B; Palta, M; Bashir, M; Yin, F; Cai, J

    2014-01-01

    Purpose: Quantitative imaging of hepatic stiffness has significant potential in radiation therapy, ranging from treatment planning to response assessment. This study aims to develop a novel, noninvasive method to quantify liver stiffness with 3D strain maps of the liver using 4D-MRI and deformable image registration (DIR). Methods: Five patients with liver cancer were imaged with an institutionally developed 4D-MRI technique under an IRB-approved protocol. Displacement vector fields (DVFs) across the liver were generated via DIR of the different phases of 4D-MRI. The strain tensor at each voxel of interest (VOI) was computed from the relative displacements between the VOI and each of its six adjacent voxels. The three principal strains (E1, E2 and E3) of the VOI were derived as the eigenvalues of the strain tensor, which represent the magnitudes of the maximum and minimum stretches. Strain tensors for two regions of interest (ROIs) were calculated and compared for each patient, one within the tumor (ROI1) and the other in normal liver distant from the heart (ROI2). Results: 3D strain maps were successfully generated for each respiratory phase of 4D-MRI for all patients. Liver deformations induced by both respiration and cardiac motion were observed. Differences in strain values between regions adjacent to and distant from the heart indicate significant deformation caused by cardiac expansion during diastole. The large E1/E2 (∼2) and E1/E3 (∼10) ratios reflect the predominance of liver deformation in the superior-inferior direction. The mean E1 in ROI1 (0.12±0.10) was smaller than in ROI2 (0.15±0.12), reflecting a higher degree of stiffness in the cirrhotic tumor. Conclusion: We have successfully developed a novel method for quantitatively evaluating regional hepatic stiffness based on DIR of 4D-MRI. Our initial findings indicate that liver strain is heterogeneous, and liver tumors may have lower principal strain values than normal liver. Thorough validation of our method is
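
    The strain computation described in both records can be sketched compactly with NumPy: differentiate the displacement vector field, symmetrize the displacement gradient into a strain tensor, and take its eigenvalues as the principal strains. The sketch below uses central differences over the whole grid rather than the authors' six-neighbour formulation, and the synthetic DVF and ROI are placeholders.

```python
import numpy as np

def principal_strains(dvf, spacing=(1.0, 1.0, 1.0)):
    """Principal strains from a displacement vector field.

    dvf: array (3, nz, ny, nx) of displacements along z, y, x.
    Returns an array (3, nz, ny, nx) of the eigenvalues of the
    infinitesimal strain tensor E = (J + J^T) / 2, where J is the
    displacement gradient, sorted in descending order per voxel.
    """
    grads = [np.gradient(dvf[i], *spacing) for i in range(3)]  # rows of J
    J = np.stack([np.stack(g, axis=-1) for g in grads], axis=-2)
    E = 0.5 * (J + np.swapaxes(J, -1, -2))     # symmetric strain tensor
    eig = np.linalg.eigvalsh(E)                # ascending eigenvalues
    return np.moveaxis(eig[..., ::-1], -1, 0)  # E1 >= E2 >= E3

rng = np.random.default_rng(1)
dvf = rng.normal(scale=0.5, size=(3, 20, 32, 32))   # synthetic DVF (mm)
E1, E2, E3 = principal_strains(dvf)
roi = (slice(5, 10), slice(10, 20), slice(10, 20))  # hypothetical ROI
print(E1[roi].mean(), E2[roi].mean(), E3[roi].mean())
```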

  4. An Extended Role Based Access Control Method for XML Documents

    Institute of Scientific and Technical Information of China (English)

    MENG Xiao-feng; LUO Dao-feng; OU Jian-bo

    2004-01-01

    As XML has become increasingly important as a data-exchange format for the Internet and intranets, access control on XML properties arises as a new issue. Role-based access control (RBAC) is an access control method that has been widely used on the Internet, in operating systems, and in relational databases over the past ten years. Though RBAC is relatively mature in those fields, new problems occur when it is applied to XML properties. This paper proposes an integrated model to resolve these problems, based on a full analysis of the features of XML and RBAC.

  5. Register-based statistics statistical methods for administrative data

    CERN Document Server

    Wallgren, Anders

    2014-01-01

    This book provides a comprehensive and up-to-date treatment of theory and practical implementation in register-based statistics. It begins by defining the area, before explaining how to structure such systems, as well as detailing alternative approaches. It explains how to create statistical registers, how to implement quality assurance, and the use of IT systems for register-based statistics. Further to this, clear details are given about the practicalities of implementing such statistical methods, such as protection of privacy and the coordination and coherence of such an undertaking.

  6. Linear feature selection in texture analysis - A PLS based method

    DEFF Research Database (Denmark)

    Marques, Joselene; Igel, Christian; Lillholm, Martin

    2013-01-01

    We present a texture analysis methodology that combines uncommitted machine-learning techniques and partial least squares (PLS) in a fully automatic framework. Our approach introduces a robust PLS-based dimensionality reduction (DR) step to specifically address outliers and high-dimensional feature... and considering all CV groups, the methods selected 36% of the original features available. The diagnosis evaluation reached a generalization area under the ROC curve of 0.92, which was higher than established cartilage-based markers known to relate to OA diagnosis.

  7. An Efficient Evolutionary Based Method For Image Segmentation

    OpenAIRE

    Aslanzadeh, Roohollah; Qazanfari, Kazem; Rahmati, Mohammad

    2017-01-01

    The goal of this paper is to present a new, efficient image segmentation method based on evolutionary computation, a model inspired by human behavior. Based on this model, a four-layer process for image segmentation is proposed using the split/merge approach. In the first layer, an image is split into numerous regions using the watershed algorithm. In the second layer, a co-evolutionary process is applied to form the centers of the final segments by merging similar primary regions. In the t...

  8. Accurate measurement of imaging photoplethysmographic signals based camera using weighted average

    Science.gov (United States)

    Pang, Zongguang; Kong, Lingqin; Zhao, Yuejin; Sun, Huijuan; Dong, Liquan; Hui, Mei; Liu, Ming; Liu, Xiaohua; Liu, Lingling; Li, Xiaohui; Li, Rongji

    2018-01-01

    Imaging photoplethysmography (IPPG) is an emerging technique for extracting the vital signs of human beings from video recordings. With advantages such as non-contact measurement, low cost, and easy operation, IPPG has become a research hot spot in biomedicine. However, noise from non-microarterial areas cannot simply be removed, because the micro-arterial distribution is uneven and the signal strength differs between regions, which results in a low signal-to-noise ratio of IPPG signals and low heart-rate accuracy. In this paper, we propose a method for improving the signal-to-noise ratio of camera-based IPPG signals using a weighted average over sub-regions of the face. Firstly, we obtain the region of interest (ROI) of a subject's face from the camera. Secondly, each ROI is tracked and feature-matched in each frame of the video, and each tracked region of the face is divided into 60x60 pixel blocks. Thirdly, the weight of the PPG signal of each sub-region is calculated from the sub-region's signal-to-noise ratio. Finally, we combine the IPPG signals from all tracked ROIs using the weighted average. Compared with existing approaches, the results show that the proposed method yields a modest but significant improvement in the signal-to-noise ratio of the camera-based PPG estimate and in the accuracy of heart rate measurement.
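
    The weighting step can be sketched as follows, assuming the per-region traces have already been extracted and tracked; the band-power SNR definition below is a common choice, not necessarily the authors' exact one, and the simulated traces are synthetic.

```python
import numpy as np

def snr(signal, fs, band=(0.7, 3.0)):
    """Ratio of spectral power inside the heart-rate band to the power
    outside it -- a simple proxy for the quality of a PPG trace."""
    freqs = np.fft.rfftfreq(len(signal), 1.0 / fs)
    power = np.abs(np.fft.rfft(signal - signal.mean())) ** 2
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    return power[in_band].sum() / (power[~in_band].sum() + 1e-12)

def fuse_regions(traces, fs):
    """SNR-weighted average of the per-region PPG traces (rows)."""
    weights = np.array([snr(t, fs) for t in traces])
    weights /= weights.sum()
    return weights @ traces

fs = 30.0                                   # camera frame rate, Hz
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(2)
pulse = np.sin(2 * np.pi * 1.2 * t)         # 72 bpm ground truth
# simulated traces from face sub-regions with different noise levels
traces = np.stack([pulse + rng.normal(scale=s, size=t.size)
                   for s in (0.3, 1.0, 3.0)])
fused = fuse_regions(traces, fs)
print(snr(fused, fs) > snr(traces[1], fs))  # fused trace is typically cleaner
```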

  9. a Modeling Method of Fluttering Leaves Based on Point Cloud

    Science.gov (United States)

    Tang, J.; Wang, Y.; Zhao, Y.; Hao, W.; Ning, X.; Lv, K.; Shi, Z.; Zhao, M.

    2017-09-01

    Leaves falling gently or fluttering are a common phenomenon in natural scenes. The authenticity of falling leaves plays an important part in the dynamic modeling of natural scenes, and the falling-leaf model has wide applications in animation and virtual reality. We propose a novel modeling method for fluttering leaves based on point clouds. According to the shape and weight of the leaves and the wind speed, three basic falling trajectories are defined: rotation falling, roll falling, and screw-roll falling. At the same time, a parallel algorithm based on OpenMP is implemented to satisfy real-time requirements in practical applications. Experimental results demonstrate that the proposed method is amenable to the incorporation of a variety of desirable effects.

  10. A MODELING METHOD OF FLUTTERING LEAVES BASED ON POINT CLOUD

    Directory of Open Access Journals (Sweden)

    J. Tang

    2017-09-01

    Full Text Available Leaves falling gently or fluttering are a common phenomenon in natural scenes. The authenticity of falling leaves plays an important part in the dynamic modeling of natural scenes, and the falling-leaf model has wide applications in animation and virtual reality. We propose a novel modeling method for fluttering leaves based on point clouds. According to the shape and weight of the leaves and the wind speed, three basic falling trajectories are defined: rotation falling, roll falling, and screw-roll falling. At the same time, a parallel algorithm based on OpenMP is implemented to satisfy real-time requirements in practical applications. Experimental results demonstrate that the proposed method is amenable to the incorporation of a variety of desirable effects.

  11. Arts-based methods for storylistening and storytelling with prisoners

    DEFF Research Database (Denmark)

    Frølunde, Lisbeth

    2017-01-01

    , Wordquake in Prison. The texts were published in an edited book (Frølunde, Søgaard, and Weise 2016). The analysis of the texts and reflexive narrative interviews is inspired by arts-based, dialogic, narrative methods on the arts and storytelling (Cole and Knowles 2008; Reiter 2014; Boje 2001), storylistening... The presentation concerns applying dialogic, arts-based methods, which respect multiple voices, collaboration, and difference. In the presentation, I focus on how storytelling and listening to stories are integral to a dialogic process. In a dialogic perspective, meaning-making is unfinalizable... in narrative medicine (DasGupta 2014), and aesthetic reflection on artistic expression in arts therapy and education. In my analysis, I explore active listening in terms of reflection and revision of stories with the young prisoners. I reflect on the tensions involved in listening in a sensitive prison...

  12. A sediment graph model based on SCS-CN method

    Science.gov (United States)

    Singh, P. K.; Bhunya, P. K.; Mishra, S. K.; Chaube, U. C.

    2008-01-01

    This paper proposes new conceptual sediment graph models based on a coupling of popular and extensively used methods, viz., the Nash-model-based instantaneous unit sediment graph (IUSG), the Soil Conservation Service curve number (SCS-CN) method, and the power law. The models vary in complexity, and the paper tests their performance using data from the Nagwan watershed (area = 92.46 km²) in India. The sensitivity of the total sediment yield and peak sediment flow rate computations to model parameterisation is analysed; the exponent of the power law, β, is more sensitive than the other model parameters. The models are found to have substantial potential for computing sediment graphs (temporal sediment flow rate distributions) as well as total sediment yield.
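
    The SCS-CN core that the sediment graph models build on is a closed-form rainfall-runoff relation; a minimal sketch (metric units, the common initial abstraction ratio of 0.2) is shown below. The coupling with the IUSG and the power law is not reproduced here, and the storm and curve number are illustrative.

```python
def scs_cn_runoff(p_mm, cn, lambda_=0.2):
    """Direct runoff Q (mm) from rainfall P (mm) by the SCS-CN method.

    S  = 25400 / CN - 254          (potential maximum retention, mm)
    Ia = lambda * S                (initial abstraction, commonly 0.2 S)
    Q  = (P - Ia)^2 / (P - Ia + S)   for P > Ia, else 0
    """
    s = 25400.0 / cn - 254.0
    ia = lambda_ * s
    if p_mm <= ia:
        return 0.0
    return (p_mm - ia) ** 2 / (p_mm - ia + s)

# 60 mm storm on a watershed with curve number 75
print(f"{scs_cn_runoff(60.0, 75):.1f} mm of direct runoff")
```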

  13. A Case-Based Reasoning Method with Rank Aggregation

    Science.gov (United States)

    Sun, Jinhua; Du, Jiao; Hu, Jian

    2018-03-01

    In order to improve the accuracy of case-based reasoning (CBR), this paper proposes a new CBR framework based on the principle of rank aggregation. First, ranking methods are set up in each attribute subspace of a case, giving the ordering relation between cases on each attribute, from which a ranking matrix is obtained. Second, the retrieval of similar cases from the ranking matrix is transformed into a rank aggregation optimization problem, solved using the Kemeny optimal aggregation. On this basis, a rank aggregation case-based reasoning algorithm, named RA-CBR, is designed. Experimental results on UCI data sets show that the case retrieval accuracy of RA-CBR is higher than that of Euclidean-distance and Mahalanobis-distance CBR, so we can conclude that the RA-CBR method increases the performance and efficiency of CBR.

  14. Mutton Traceability Method Based on Internet of Things

    Directory of Open Access Journals (Sweden)

    Wu Min-Ning

    2014-01-01

    Full Text Available In order to improve the efficiency of mutton traceability in the Internet of Things and to solve the problem of data transmission, this paper analyzes existing tracking algorithms and proposes a food traceability application model, a Petri net model of food traceability, and an improved K-means algorithm for traceability time-series data based on the Internet of Things. The food traceability application model converts, integrates, and mines heterogeneous information to implement food safety traceability information management. The state transitions of the Petri net model for food traceability are analyzed and simulated, providing a theoretical basis for studying the behavior and structural design of food traceability systems. Experiments on simulated data show that the proposed traceability method based on the Internet of Things is more effective on mutton traceability data than the traditional K-means method.

  15. A novel method for human age group classification based on Correlation Fractal Dimension

    Directory of Open Access Journals (Sweden)

    Anuradha Yarlagadda

    2015-10-01

    Full Text Available In the computer vision community, the categorization of a person's facial image into an age group is rarely precise and has not been pursued effectively. To address this problem, which is an important area of research, the present paper proposes an innovative age group classification system based on the Correlation Fractal Dimension of complex facial images. Wrinkles appear on the face with aging, thereby changing the facial edges of the image. The proposed method is rotation- and pose-invariant. The present paper concentrates on developing an innovative technique that classifies facial images into four categories, i.e. child image (0–15), young adult image (15–30), middle-aged adult image (31–50), and senior adult image (>50), based on the correlation FD value of the facial edge image.

  16. Harbourscape Aalborg - Design Based Methods in Waterfront Development

    DEFF Research Database (Denmark)

    Kiib, Hans

    2012-01-01

    How can city planners and developers gain knowledge and develop new sustainable concepts for waterfront developments? The waterfront is far too often threatened by privatisation, lack of public access, and bad architecture, and in a time where low growth rates and crises in the building... industry are leaving great parts of the harbour as urban voids, planners are in search of new tools for bridging the time gap until new projects can become reality. This chapter presents the development of waterfront regeneration concepts that resulted from design-based workshops, Harbourscape Aalborg in 2005... and Performative Architecture Workshop in 2008, and evaluates the method and the thinking behind it. The design workshops provide different design-based development methods which can be tested with the purpose of developing new concepts for the relationship between the city and its harbour, and in addition

  17. Novel crystal timing calibration method based on total variation

    Science.gov (United States)

    Yu, Xingjian; Isobe, Takashi; Watanabe, Mitsuo; Liu, Huafeng

    2016-11-01

    A novel crystal timing calibration method based on total variation (TV), abbreviated as ‘TV merge’, has been developed for a high-resolution positron emission tomography (PET) system. The proposed method was developed for a system with a large number of crystals and can provide timing calibration at the crystal level. In the proposed method, the timing calibration process is formulated as a linear problem, and to robustly optimize the timing resolution, a TV constraint is added to the linear equation. Moreover, to solve the computer-memory problem associated with calculating the timing calibration factors for systems with a large number of crystals, a merge component is used to obtain the crystal-level timing calibration values. Compared with other conventional methods, data measured from a standard cylindrical phantom filled with a radioisotope solution are sufficient for performing a high-precision crystal-level timing calibration. In this paper, both simulation and experimental studies are performed to demonstrate the effectiveness and robustness of the TV merge method. We compare the timing resolutions of a 22Na point source, located in the field of view (FOV) of the brain PET system, under various calibration techniques. After implementing the TV merge method, the timing resolution improved from 3.34 ns at full width at half maximum (FWHM) to 2.31 ns FWHM.
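
    The formulation "linear problem plus TV constraint" can be illustrated on a toy problem: per-crystal timing offsets observed through noisy pairwise timing differences, recovered by least squares with a smoothed total-variation penalty. The measurement model, problem sizes, and regularization weight below are illustrative assumptions, a generic L-BFGS solver stands in for the paper's optimizer, and the merge component for large systems is omitted.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)
n = 40                                  # number of crystals (toy size)
true = np.repeat([0.0, 0.8, -0.5, 0.3], n // 4)  # piecewise-constant offsets (ns)

# coincidence measurements: timing differences between random crystal pairs
pairs = rng.integers(0, n, size=(500, 2))
pairs = pairs[pairs[:, 0] != pairs[:, 1]]
d = true[pairs[:, 0]] - true[pairs[:, 1]] + rng.normal(scale=0.2, size=len(pairs))

A = np.zeros((len(pairs), n))           # linear model: d ~ A @ t
A[np.arange(len(pairs)), pairs[:, 0]] = 1.0
A[np.arange(len(pairs)), pairs[:, 1]] = -1.0

def objective(t, lam=1.0, eps=1e-6):
    data = np.sum((A @ t - d) ** 2)
    tv = np.sum(np.sqrt(np.diff(t) ** 2 + eps))  # smoothed |t[i+1] - t[i]|
    return data + lam * tv

res = minimize(objective, np.zeros(n), method="L-BFGS-B")
est = res.x - res.x.mean()              # offsets are defined up to a constant
print(np.abs(est - (true - true.mean())).max())
```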

  18. Edge detection methods based on generalized type-2 fuzzy logic

    CERN Document Server

    Gonzalez, Claudia I; Castro, Juan R; Castillo, Oscar

    2017-01-01

    In this book four new methods are proposed. In the first method, generalized type-2 fuzzy logic is combined with the morphological gradient technique. The second method combines general type-2 fuzzy systems (GT2 FSs) and the Sobel operator; in the third approach, the methodology based on the Sobel operator and GT2 FSs is improved for application to color images. In the fourth approach, a novel edge detection method is proposed in which a digital image is converted to a generalized type-2 fuzzy image. The book also includes a comparative study of type-1, interval type-2, and generalized type-2 fuzzy systems as tools to enhance edge detection in digital images when used in conjunction with the morphological gradient and the Sobel operator. The proposed generalized type-2 fuzzy edge detection methods were tested with benchmark images and synthetic images, in grayscale and color formats. Another contribution of this book is that the generalized type-2 fuzzy edge detector method is applied in the preproc...

  19. A MODELING METHOD OF FLUTTERING LEAVES BASED ON POINT CLOUD

    OpenAIRE

    J. Tang; Y. Wang; Y. Zhao; Y. Zhao; W. Hao; X. Ning; K. Lv; Z. Shi; M. Zhao

    2017-01-01

    Leaves falling gently or fluttering are a common phenomenon in natural scenes. The authenticity of falling leaves plays an important part in the dynamic modeling of natural scenes, and the falling-leaf model has wide applications in animation and virtual reality. We propose a novel modeling method for fluttering leaves based on point clouds. According to the shape and weight of the leaves and the wind speed, three basic falling trajectories are defined, which ar...

  20. Array processors based on Gaussian fraction-free method

    Energy Technology Data Exchange (ETDEWEB)

    Peng, S; Sedukhin, S [Aizu Univ., Aizuwakamatsu, Fukushima (Japan); Sedukhin, I

    1998-03-01

    The design of algorithmic array processors for solving linear systems of equations using the fraction-free Gaussian elimination method is presented. The design is based on a formal approach which systematically constructs a family of planar array processors. These array processors are synthesized and analyzed; it is shown that some of them are optimal, in the framework of linear allocation of computations, in terms of the number of processing elements and the computing time. (author)

  1. Sensitivity Analysis of the Proximal-Based Parallel Decomposition Methods

    Directory of Open Access Journals (Sweden)

    Feng Ma

    2014-01-01

    Full Text Available The proximal-based parallel decomposition methods were recently proposed to solve structured convex optimization problems. These algorithms are suitable for parallel computation and can be used efficiently for solving large-scale separable problems. In this paper, compared with the previous theoretical results, we show that the range of the involved parameters can be enlarged while convergence can still be established. Preliminary numerical tests on the stable principal component pursuit problem testify to the advantages of the enlargement.

  2. Towards risk-based structural integrity methods for PWRs

    International Nuclear Information System (INIS)

    Chapman, O.J.V.; Lloyd, R.B.

    1992-01-01

    This paper describes the development of risk-based structural integrity assurance methods and their application to Pressurized Water Reactor (PWR) plant. In-service inspection is introduced as a way of reducing the failure probability of high risk sites and the latter are identified using reliability analysis; the extent and interval of inspection can also be optimized. The methodology is illustrated by reference to the aspect of reliability of weldments in PWR systems. (author)

  3. Personnel Selection Method Based on Personnel-Job Matching

    OpenAIRE

    Li Wang; Xilin Hou; Lili Zhang

    2013-01-01

    The existing personnel selection decisions in practice are based on the evaluation of a job seeker's human capital, which makes it difficult to achieve person-job matching that satisfies both parties. Therefore, this paper puts forward a new personnel selection method based on bilateral matching. Starting from the employment notion of "satisfaction", satisfaction evaluation indicator systems for each party are constructed. The multi-objective optimization model is given according to ...

  4. Towards Automatic Testing of Reference Point Based Interactive Methods

    OpenAIRE

    Ojalehto, Vesa; Podkopaev, Dmitry; Miettinen, Kaisa

    2016-01-01

    In order to understand strengths and weaknesses of optimization algorithms, it is important to have access to different types of test problems, well defined performance indicators and analysis tools. Such tools are widely available for testing evolutionary multiobjective optimization algorithms. To our knowledge, there do not exist tools for analyzing the performance of interactive multiobjective optimization methods based on the reference point approach to communicating ...

  5. A model based security testing method for protocol implementation.

    Science.gov (United States)

    Fu, Yu Long; Xin, Xiao Long

    2014-01-01

    The security of protocol implementations is important and hard to verify. Since penetration testing is usually based on the experience of the security tester and the specific protocol specifications, a formal and automatic verification method is required. In this paper, we propose an extended model of IOLTS to describe the legal roles and intruders of security protocol implementations, and then combine them to generate suitable test cases to verify the security of a protocol implementation.

  6. Geophysics-based method of locating a stationary earth object

    Science.gov (United States)

    Daily, Michael R [Albuquerque, NM; Rohde, Steven B [Corrales, NM; Novak, James L [Albuquerque, NM

    2008-05-20

    A geophysics-based method for determining the position of a stationary earth object uses the periodic changes in the earth's gravity vector caused by the orbits of the sun and moon. Because the local gravity field is highly irregular on a global scale, a model of local tidal accelerations can be compared to actual accelerometer measurements to determine the latitude and longitude of the stationary object.

  7. An Intelligent Fleet Condition-Based Maintenance Decision Making Method Based on Multi-Agent

    OpenAIRE

    Bo Sun; Qiang Feng; Songjie Li

    2012-01-01

    To meet the demand for online condition-based maintenance decision making in a mission-oriented fleet, an intelligent maintenance decision making method based on multi-agent systems and heuristic rules is proposed. The process of condition-based maintenance within an aircraft fleet (each aircraft containing one or more Line Replaceable Modules) based on multiple maintenance thresholds is analyzed. The process is then abstracted into a multi-agent model, a 2-layer model structure containing host negoti...

  8. PROMETHEE II: A knowledge-driven method for copper exploration

    Science.gov (United States)

    Abedi, Maysam; Ali Torabi, S.; Norouzi, Gholam-Hossain; Hamzeh, Mohammad; Elyasi, Gholam-Reza

    2012-09-01

    This paper describes the application of a well-known Multi Criteria Decision Making (MCDM) technique called Preference Ranking Organization METHod for Enrichment Evaluation (PROMETHEE II) to explore porphyry copper deposits. Various raster-based evidential layers involving geological, geophysical, and geochemical geo-datasets are integrated to prepare a mineral prospectivity mapping (MPM). In a case study, thirteen layers of the Now Chun copper deposit located in the Kerman province of Iran are used to explore the region of interest. The PROMETHEE II technique is applied to produce the desired MPM, and the outputs are validated using twenty-one boreholes that have been classified into five classes. This proposed method shows a high performance when providing the MPM while reducing the cost of exploratory drilling in the study area.
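
    For orientation, a compact PROMETHEE II implementation with the "usual" (step) preference function is sketched below on made-up evidential scores; the paper's actual layers, weights, and preference functions are not reproduced.

```python
import numpy as np

def promethee_ii(X, weights, maximize):
    """Net outranking flows for alternatives (rows) over criteria (cols),
    using the 'usual' preference function P(d) = 1 if d > 0 else 0."""
    X = np.where(maximize, X, -X)        # make every criterion "larger is better"
    n = len(X)
    phi_plus = np.zeros(n)
    phi_minus = np.zeros(n)
    for a in range(n):
        for b in range(n):
            if a == b:
                continue
            # aggregated preference of a over b, weighted across criteria
            pref_ab = weights @ (X[a] > X[b]).astype(float)
            phi_plus[a] += pref_ab
            phi_minus[b] += pref_ab
    return (phi_plus - phi_minus) / (n - 1)   # net flow, higher is better

# four hypothetical exploration cells scored on three evidential layers
X = np.array([[0.8, 0.3, 120.0],
              [0.5, 0.9,  80.0],
              [0.6, 0.6, 200.0],
              [0.2, 0.1,  40.0]])
weights = np.array([0.5, 0.3, 0.2])           # must sum to 1
maximize = np.array([True, True, True])
phi = promethee_ii(X, weights, maximize)
print(np.argsort(phi)[::-1])                  # cells ranked most to least favourable
```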

  9. A method of orbital analysis for large-scale first-principles simulations

    International Nuclear Information System (INIS)

    Ohwaki, Tsukuru; Otani, Minoru; Ozaki, Taisuke

    2014-01-01

    An efficient method of calculating the natural bond orbitals (NBOs) based on a truncation of the entire density matrix of a whole system is presented for large-scale density functional theory calculations. The method recovers an orbital picture for O(N) electronic structure methods which directly evaluate the density matrix without using Kohn-Sham orbitals, thus enabling quantitative analysis of chemical reactions in large-scale systems in the language of localized Lewis-type chemical bonds. With the density matrix calculated by either an exact diagonalization or O(N) method, the computational cost is O(1) for the calculation of NBOs associated with a local region where a chemical reaction takes place. As an illustration of the method, we demonstrate how an electronic structure in a local region of interest can be analyzed by NBOs in a large-scale first-principles molecular dynamics simulation for a liquid electrolyte bulk model (propylene carbonate + LiBF4).

  10. Dim target detection method based on salient graph fusion

    Science.gov (United States)

    Hu, Ruo-lan; Shen, Yi-yan; Jiang, Jun

    2018-02-01

    Dim target detection is a key problem in digital image processing. With the development of multi-spectral imaging sensors, fusing information from different spectral images has become a promising way to improve detection performance. In this paper, a dim target detection method based on salient graph fusion is proposed. Multi-direction Gabor filters and multi-scale contrast filters are combined to construct a salient graph from each digital image, and a maximum-salience fusion strategy is designed to fuse the salient graphs from the different spectral images. A top-hat filter is used to detect dim targets in the fused salient graph. Experimental results show that the proposed method improves the probability of target detection and reduces the probability of false alarm on images with cluttered backgrounds.
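
    A simplified version of the pipeline (per-band saliency, pixelwise maximum fusion, then top-hat filtering and thresholding) can be sketched with SciPy. The multi-scale contrast term below stands in for the paper's combined Gabor/contrast construction, and all sizes and thresholds are illustrative.

```python
import numpy as np
from scipy import ndimage

def contrast_saliency(img, scales=(2, 4, 8)):
    """Multi-scale contrast map: |img - local mean| summed over scales."""
    sal = np.zeros_like(img, dtype=float)
    for s in scales:
        sal += np.abs(img - ndimage.uniform_filter(img, size=2 * s + 1))
    return sal / sal.max()

def detect_dim_targets(bands, struct_size=5, k=4.0):
    """Fuse per-band saliency maps with a pixelwise maximum, then apply
    a white top-hat to suppress slowly varying clutter and threshold."""
    fused = np.max([contrast_saliency(b) for b in bands], axis=0)
    tophat = ndimage.white_tophat(fused, size=struct_size)
    thresh = tophat.mean() + k * tophat.std()
    return tophat > thresh

rng = np.random.default_rng(4)
bands = [rng.normal(size=(128, 128)) * 0.1 for _ in range(2)]
bands[0][64, 64] += 1.5          # dim point target in the first band
mask = detect_dim_targets([ndimage.gaussian_filter(b, 1) for b in bands])
print(np.argwhere(mask))         # detections cluster around (64, 64)
```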

  11. Research on image complexity evaluation method based on color information

    Science.gov (United States)

    Wang, Hao; Duan, Jin; Han, Xue-hui; Xiao, Bo

    2017-11-01

    In order to evaluate the complexity of a color image more effectively and find the connection between image complexity and image information, this paper presents a method to compute the complexity of image based on color information.Under the complexity ,the theoretical analysis first divides the complexity from the subjective level, divides into three levels: low complexity, medium complexity and high complexity, and then carries on the image feature extraction, finally establishes the function between the complexity value and the color characteristic model. The experimental results show that this kind of evaluation method can objectively reconstruct the complexity of the image from the image feature research. The experimental results obtained by the method of this paper are in good agreement with the results of human visual perception complexity,Color image complexity has a certain reference value.

  12. A Swarm-Based Learning Method Inspired by Social Insects

    Science.gov (United States)

    He, Xiaoxian; Zhu, Yunlong; Hu, Kunyuan; Niu, Ben

    Inspired by the cooperative transport behavior of ants and built on Q-learning, a new learning method, the Neighbor-Information-Reference (NIR) learning method, is presented in this paper. This is a swarm-based learning method that strictly complies with the principles of swarm intelligence. In NIR learning, the i-interval neighbor's information, namely its discounted reward, is referenced when an individual selects its next state, so that it can make the best decision within a computable local neighborhood. In application, different NIR learning policies are recommended by controlling the parameters according to the time-relativity of concrete tasks. NIR learning can remarkably improve individual efficiency and make the swarm more "intelligent".
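
    The abstract leaves the exact update rule unspecified, so the sketch below is one plausible reading: standard Q-learning per agent, with the neighbor's discounted value blended into the action-selection score. The chain MDP, blending weight, and ring topology are illustrative assumptions, not the authors' setup.

```python
import numpy as np

rng = np.random.default_rng(5)
n_states, n_actions, n_agents = 8, 2, 6     # toy chain world, a few agents
alpha, gamma, eps, beta = 0.1, 0.9, 0.1, 0.5

def step(s, a):
    """Chain MDP: action 1 moves right, action 0 moves left;
    the rightmost state pays reward 1."""
    s2 = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
    return s2, float(s2 == n_states - 1)

Q = np.zeros((n_agents, n_states, n_actions))
states = rng.integers(0, n_states, n_agents)

for t in range(5000):
    for i in range(n_agents):
        neighbor = (i + 1) % n_agents        # 1-interval neighbor on a ring
        s = states[i]
        # NIR-style selection: blend own value with the neighbor's
        # discounted value estimate before choosing greedily
        blended = Q[i, s] + beta * gamma * Q[neighbor, s]
        a = rng.integers(n_actions) if rng.random() < eps else int(np.argmax(blended))
        s2, r = step(s, a)
        # the individual's own Q-update is standard Q-learning
        Q[i, s, a] += alpha * (r + gamma * Q[i, s2].max() - Q[i, s, a])
        states[i] = s2 if r == 0 else rng.integers(0, n_states)

print(np.argmax(Q.mean(axis=0), axis=1))     # learned policy: mostly "move right"
```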

  13. Face Recognition Method Based on Fuzzy 2DPCA

    Directory of Open Access Journals (Sweden)

    Xiaodong Li

    2014-01-01

    Full Text Available 2DPCA, one of the most important face recognition methods, is relatively sensitive to substantial variations in light direction, face pose, and facial expression. In order to improve the recognition performance of traditional 2DPCA, a new 2DPCA algorithm based on fuzzy theory is proposed in this paper, namely the fuzzy 2DPCA (F2DPCA). In this method, the membership degree matrix of the training samples is calculated by applying fuzzy K-nearest neighbor (FKNN), and from it the fuzzy mean of each class is obtained. The average of the fuzzy means is then incorporated into the definition of the general scatter matrix, with the anticipation that this can improve classification results. Comprehensive experiments on the ORL, YALE, and FERET face databases show that the proposed method improves classification rates and reduces the sensitivity to variations between face images caused by changes in illumination, facial expression, and face pose.

  14. A rule-based automatic sleep staging method.

    Science.gov (United States)

    Liang, Sheng-Fu; Kuo, Chin-En; Hu, Yu-Han; Cheng, Yu-Shian

    2012-03-30

    In this paper, a rule-based automatic sleep staging method is proposed. Twelve features, including temporal and spectral analyses of the EEG, EOG, and EMG signals, are utilized. Normalization is applied to each feature to eliminate individual differences. A hierarchical decision tree with fourteen rules is constructed for sleep stage classification, and finally a smoothing process considering temporal contextual information is applied for continuity. The overall agreement and kappa coefficient of the proposed method, applied to the all-night polysomnography (PSG) of seventeen healthy subjects and compared with manual scoring by R&K rules, reach 86.68% and 0.79, respectively. The method can be integrated with portable PSG systems for at-home sleep evaluation in the near future. Copyright © 2012 Elsevier B.V. All rights reserved.

  15. THE CPA QUALIFICATION METHOD BASED ON THE GAUSSIAN CURVE FITTING

    Directory of Open Access Journals (Sweden)

    M.T. Adithia

    2015-01-01

    Full Text Available The Correlation Power Analysis (CPA) attack is an attack on cryptographic devices, especially smart cards. The results of the attack are correlation traces, which are evaluated to observe whether significant peaks appear. This evaluation is done manually, by experts; if significant peaks appear, the smart card is not considered secure, since it is assumed that the secret key is revealed. We develop a method that objectively detects peaks and decides which peaks are significant. We conclude that by using the Gaussian curve fitting method, the subjective qualification of peak significance can be made objective, so that better decisions can be taken by security experts. We also conclude that the Gaussian curve fitting method is able to show the influence of peak size, especially width and height, on the significance of a particular peak.
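
    A minimal sketch of the fitting step with SciPy follows: fit a Gaussian plus offset around a candidate peak in a correlation trace and score the peak by its fitted height relative to the residual noise. The significance score and the synthetic trace are illustrative, not the paper's exact criterion.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, a, mu, sigma, c):
    return a * np.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) + c

def peak_significance(trace, guess_mu):
    """Fit a Gaussian around a candidate peak in a correlation trace and
    report its amplitude relative to the residual noise level."""
    x = np.arange(len(trace))
    p0 = (trace.max() - trace.mean(), guess_mu, 5.0, trace.mean())
    popt, _ = curve_fit(gaussian, x, trace, p0=p0)
    a, mu, sigma, c = popt
    noise = np.std(trace - gaussian(x, *popt))
    return abs(a) / noise, abs(sigma)        # height-to-noise ratio and width

rng = np.random.default_rng(6)
trace = rng.normal(scale=0.01, size=500)     # correlation trace, noise only
trace[240:260] += gaussian(np.arange(240, 260), 0.08, 250, 4.0, 0.0)
score, width = peak_significance(trace, guess_mu=250)
print(f"peak height = {score:.1f} sigma, width = {width:.1f} samples")
```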

  16. Auto correct method of AD converters precision based on ethernet

    Directory of Open Access Journals (Sweden)

    NI Jifeng

    2013-10-01

    Full Text Available An ideal AD conversion characteristic is a straight line through the origin in the Cartesian coordinate system, while in practical engineering the signal processing circuit, chip performance, and other factors affect the conversion accuracy. Therefore, a linear fitting method is adopted to improve the conversion accuracy, and an automatic correction of AD converter precision based on Ethernet, using both software and hardware, is presented. With a few mouse clicks, the linearity correction of all AD converter channels can be completed automatically, and the error, SNR, and ENOB (effective number of bits) are calculated. The coefficients of the linear correction are then loaded into the EEPROM of the on-board AD converter card. Compared with traditional methods, this method is more convenient, accurate, and efficient, and has broad application prospects.
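
    The per-channel linearity correction reduces to fitting a straight line between the raw codes and a known reference sweep; a sketch follows, with a deliberately rough ENOB estimate that treats the post-correction residual as the only noise source. The gain/offset values and the sweep are synthetic.

```python
import numpy as np

def calibrate_channel(raw_codes, reference):
    """Least-squares line through (raw, reference) pairs; returns the
    gain and offset to load into the board's correction table."""
    gain, offset = np.polyfit(raw_codes, reference, deg=1)
    return gain, offset

def enob(signal, residual):
    """Rough effective number of bits from the SINAD of the corrected
    channel (assumes the residual is the only noise source)."""
    sinad = 10 * np.log10(np.mean(signal ** 2) / np.mean(residual ** 2))
    return (sinad - 1.76) / 6.02

# sweep a precise reference voltage and record the raw ADC output
reference = np.linspace(-5.0, 5.0, 101)                   # volts
rng = np.random.default_rng(7)
raw = (reference - 0.012) / 1.003 + rng.normal(scale=2e-3, size=reference.size)

gain, offset = calibrate_channel(raw, reference)
corrected = gain * raw + offset
print(f"gain={gain:.4f} offset={offset:.4f} "
      f"ENOB={enob(reference, corrected - reference):.1f} bits")
```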

  17. Traffic Speed Data Imputation Method Based on Tensor Completion

    Directory of Open Access Journals (Sweden)

    Bin Ran

    2015-01-01

    Full Text Available Traffic speed data plays a key role in Intelligent Transportation Systems (ITS); however, missing traffic data affects the performance of ITS as well as of Advanced Traveler Information Systems (ATIS). In this paper, we handle this issue with a novel tensor-based imputation approach. Specifically, a tensor pattern is adopted for modeling traffic speed data, and then High-accuracy Low-Rank Tensor Completion (HaLRTC), an efficient tensor completion method, is employed to estimate the missing traffic speed data. The proposed method is able to recover missing entries from the given entries, which may be noisy, considering the severe fluctuation of traffic speed data compared with traffic volume. The proposed method is evaluated on the Performance Measurement System (PeMS) database, and the experimental results show the superiority of the proposed approach over state-of-the-art baseline approaches.

  18. Traffic speed data imputation method based on tensor completion.

    Science.gov (United States)

    Ran, Bin; Tan, Huachun; Feng, Jianshuai; Liu, Ying; Wang, Wuhong

    2015-01-01

    Traffic speed data plays a key role in Intelligent Transportation Systems (ITS); however, missing traffic data affects the performance of ITS as well as of Advanced Traveler Information Systems (ATIS). In this paper, we handle this issue with a novel tensor-based imputation approach. Specifically, a tensor pattern is adopted for modeling traffic speed data, and then High-accuracy Low-Rank Tensor Completion (HaLRTC), an efficient tensor completion method, is employed to estimate the missing traffic speed data. The proposed method is able to recover missing entries from the given entries, which may be noisy, considering the severe fluctuation of traffic speed data compared with traffic volume. The proposed method is evaluated on the Performance Measurement System (PeMS) database, and the experimental results show the superiority of the proposed approach over state-of-the-art baseline approaches.
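
    A compact HaLRTC implementation is sketched below for orientation: ADMM over the three mode unfoldings with singular value thresholding, re-imposing the observed entries at each iteration. The tensor layout (road segment x day x time-of-day), the weights, and the penalty parameter are illustrative, with no claim that they match the authors' settings.

```python
import numpy as np

def unfold(T, mode):
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def fold(M, mode, shape):
    full = [shape[mode]] + [s for i, s in enumerate(shape) if i != mode]
    return np.moveaxis(M.reshape(full), 0, mode)

def svt(M, tau):
    """Singular value thresholding: shrink singular values by tau."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U * np.maximum(s - tau, 0)) @ Vt

def halrtc(T, mask, alpha=(1/3, 1/3, 1/3), rho=1e-2, n_iter=200):
    """Minimal HaLRTC: minimize the weighted sum of nuclear norms of the
    mode unfoldings subject to agreement with the observed entries."""
    X = np.where(mask, T, T[mask].mean())    # initialize missing with the mean
    Y = [np.zeros_like(X) for _ in range(3)]
    for _ in range(3 and n_iter):
        M = [fold(svt(unfold(X + Y[i] / rho, i), alpha[i] / rho), i, X.shape)
             for i in range(3)]
        X = sum(M[i] - Y[i] / rho for i in range(3)) / 3.0
        X[mask] = T[mask]                    # enforce observed data
        for i in range(3):
            Y[i] += rho * (X - M[i])
    return X

rng = np.random.default_rng(8)
# synthetic low-rank speed tensor: road segment x day x time-of-day
A, B, C = rng.random((30, 3)), rng.random((7, 3)), rng.random((48, 3))
T = np.einsum('ir,jr,kr->ijk', A, B, C)
mask = rng.random(T.shape) > 0.4             # 60% of entries observed
X = halrtc(T, mask)
print(np.abs(X - T)[~mask].mean())           # imputation error on missing entries
```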

  19. A Pansharpening Method Based on HCT and Joint Sparse Model

    Directory of Open Access Journals (Sweden)

    XU Ning

    2016-04-01

    Full Text Available A novel fusion method based on the hyperspherical color transformation (HCT) and a joint sparsity model is proposed to further decrease the spectral distortion of the fused image. In the method, an intensity component and the angles of each band of the multispectral image are first obtained by the HCT, and then the intensity component is fused with the panchromatic image through the wavelet transform and the joint sparsity model. In the joint sparsity model, the redundant and complementary information of the different images can be efficiently extracted and employed to yield a high quality result. Finally, the fused multispectral image is obtained by the inverse wavelet and HCT transforms on the new lower-frequency image and the angle components, respectively. Experimental results on Pleiades-1 and WorldView-2 imagery indicate that the proposed method achieves remarkable results.

  20. Deviation-based spam-filtering method via stochastic approach

    Science.gov (United States)

    Lee, Daekyung; Lee, Mi Jin; Kim, Beom Jun

    2018-03-01

    In the presence of a huge number of possible purchase choices, ranks or ratings of items by others often play a very important role in a buyer's final purchase decision. Perfectly objective rating is impossible to achieve, so we often use an average rating built on how previous buyers estimated the quality of the product. The problem with a simple average rating is that it can easily be polluted by careless users whose evaluations cannot be trusted, and by malicious spammers who try to bias the rating result on purpose. In this letter we suggest how the trustworthiness of individual users can be systematically and quantitatively reflected to build a more reliable rating system. We compute a suitably defined reliability for each user based on the user's rating pattern across all products she evaluated. We call our proposed method deviation-based ranking, since the statistical significance of each user's rating pattern with respect to the average rating pattern is the key ingredient. We find that our deviation-based ranking method outperforms existing methods in filtering out careless random evaluators as well as malicious spammers.
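
    The letter defines reliability through the statistical significance of each user's rating pattern; the sketch below uses a simpler z-score variant of the same idea, down-weighting users whose ratings sit far from the item-wise consensus, to show how a deviation-weighted average resists constant-score spammers. All data are synthetic.

```python
import numpy as np

def reliability_weights(R, observed):
    """Per-user reliability from how far each user's ratings deviate
    from the item means, measured in units of the item-wise spread."""
    item_mean = np.nanmean(np.where(observed, R, np.nan), axis=0)
    item_std = np.nanstd(np.where(observed, R, np.nan), axis=0) + 1e-9
    z = np.abs((R - item_mean) / item_std)
    mean_dev = np.where(observed, z, 0).sum(axis=1) / observed.sum(axis=1)
    return 1.0 / (1.0 + mean_dev)            # large deviation -> low weight

def weighted_ratings(R, observed):
    """Reliability-weighted average rating per item."""
    w = reliability_weights(R, observed)
    wr = (w[:, None] * np.where(observed, R, 0)).sum(axis=0)
    return wr / (w[:, None] * observed).sum(axis=0)

rng = np.random.default_rng(9)
true_quality = rng.uniform(1, 5, size=10)                # 10 products
R = true_quality + rng.normal(scale=0.3, size=(50, 10))  # 50 honest users
R[:5] = 5.0                                 # 5 spammers push every score up
observed = np.ones_like(R, dtype=bool)
plain = R.mean(axis=0)
robust = weighted_ratings(R, observed)
# the weighted average tracks the true quality more closely
print(np.abs(plain - true_quality).mean(), np.abs(robust - true_quality).mean())
```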

  1. Evolutionary game theory using agent-based methods.

    Science.gov (United States)

    Adami, Christoph; Schossau, Jory; Hintze, Arend

    2016-12-01

    Evolutionary game theory is a successful mathematical framework geared towards understanding the selective pressures that affect the evolution of the strategies of agents engaged in interactions with potential conflicts. While a mathematical treatment of the costs and benefits of decisions can predict the optimal strategy in simple settings, more realistic settings such as finite populations, non-vanishing mutation rates, stochastic decisions, communication between agents, and spatial interactions require agent-based methods, where each agent is modeled as an individual, carries its own genes that determine its decisions, and where the evolutionary outcome can only be ascertained by evolving the population of agents forward in time. While highlighting standard mathematical results, we compare them to agent-based methods that can go beyond the limitations of equations and simulate the complexity of heterogeneous populations and an ever-changing set of interactors. We conclude that agent-based methods can predict evolutionary outcomes where purely mathematical treatments cannot tread (for example in the weak-selection, strong-mutation limit), but that mathematics is crucial to validate the computational simulations. Copyright © 2016 Elsevier B.V. All rights reserved.

  2. Distant Supervision for Relation Extraction with Ranking-Based Methods

    Directory of Open Access Journals (Sweden)

    Yang Xiang

    2016-05-01

    Full Text Available Relation extraction has benefited from distant supervision in recent years with the development of natural language processing techniques and the explosion of data. However, distant supervision is still greatly limited by the quality of its training data, a natural consequence of its motivation to greatly reduce the heavy cost of data annotation. In this paper, we construct an architecture called MIML-sort (Multi-instance Multi-label Learning with Sorting Strategies), built on the well-known MIML framework. Based on MIML-sort, we propose three ranking-based methods for sample selection, with which we identify relation extractors from a subset of the training data. Experiments are set up on the KBP (Knowledge Base Population) corpus, one of the benchmark datasets for distant supervision, which is large and noisy. Compared with previous work, the proposed methods produce considerably better results. Furthermore, the three methods together achieve the best F1 on the official testing set, with an optimal improvement in F1 from 27.3% to 29.98%.

  3. Global positioning method based on polarized light compass system

    Science.gov (United States)

    Liu, Jun; Yang, Jiangtao; Wang, Yubo; Tang, Jun; Shen, Chong

    2018-05-01

    This paper presents a global positioning method based on a polarized light compass system. A main limitation of polarization positioning is the environment, such as weak or locally destroyed polarization conditions; the solution given in this paper is polarization image de-noising and segmentation, for which a pulse-coupled neural network is employed to enhance positioning performance. The prominent advantages of the present positioning technique are as follows: (i) compared to existing polarized-light positioning methods, better sun-tracking accuracy can be achieved, and (ii) the robustness and accuracy of positioning under weak or locally destroyed polarization environments, such as cloud cover or building shielding, are improved significantly. Finally, field experiments are given to demonstrate the effectiveness and applicability of the proposed global positioning technique. The experiments show that our proposed method outperforms the conventional polarization positioning method, estimating longitude and latitude in real time with accuracies up to 0.0461° and 0.0911°, respectively.

  4. A novel word spotting method based on recurrent neural networks.

    Science.gov (United States)

    Frinken, Volkmar; Fischer, Andreas; Manmatha, R; Bunke, Horst

    2012-02-01

    Keyword spotting refers to the process of retrieving all instances of a given keyword from a document. In the present paper, a novel keyword spotting method for handwritten documents is described. It is derived from a neural network-based system for unconstrained handwriting recognition. As such it performs template-free spotting, i.e., it is not necessary for a keyword to appear in the training set. The keyword spotting is done using a modification of the CTC Token Passing algorithm in conjunction with a recurrent neural network. We demonstrate that the proposed systems outperform not only a classical dynamic time warping-based approach but also a modern keyword spotting system, based on hidden Markov models. Furthermore, we analyze the performance of the underlying neural networks when using them in a recognition task followed by keyword spotting on the produced transcription. We point out the advantages of keyword spotting when compared to classic text line recognition.

  5. Fully Digital Chaotic Differential Equation-based Systems And Methods

    KAUST Repository

    Radwan, Ahmed Gomaa Ahmed

    2012-09-06

    Various embodiments are provided for fully digital chaotic differential equation-based systems and methods. In one embodiment, among others, a digital circuit includes digital state registers and one or more digital logic modules configured to obtain a first value from two or more of the digital state registers; determine a second value based upon the obtained first values and a chaotic differential equation; and provide the second value to set a state of one of the plurality of digital state registers. In another embodiment, a digital circuit includes digital state registers, digital logic modules configured to obtain outputs from a subset of the digital shift registers and to provide the input based upon a chaotic differential equation for setting a state of at least one of the subset of digital shift registers, and a digital clock configured to provide a clock signal for operating the digital shift registers.

  6. Fully Digital Chaotic Differential Equation-based Systems And Methods

    KAUST Repository

    Radwan, Ahmed Gomaa Ahmed; Zidan, Mohammed A.; Salama, Khaled N.

    2012-01-01

    Various embodiments are provided for fully digital chaotic differential equation-based systems and methods. In one embodiment, among others, a digital circuit includes digital state registers and one or more digital logic modules configured to obtain a first value from two or more of the digital state registers; determine a second value based upon the obtained first values and a chaotic differential equation; and provide the second value to set a state of one of the plurality of digital state registers. In another embodiment, a digital circuit includes digital state registers, digital logic modules configured to obtain outputs from a subset of the digital shift registers and to provide the input based upon a chaotic differential equation for setting a state of at least one of the subset of digital shift registers, and a digital clock configured to provide a clock signal for operating the digital shift registers.

  7. Knowledge and method base for shape memory alloys

    Energy Technology Data Exchange (ETDEWEB)

    Welp, E.G.; Breidert, J. [Ruhr-University Bochum, Institute of Engineering Design, 44780 Bochum (Germany)

    2004-05-01

    It is often impossible for design engineers to decide whether it is possible to use shape memory alloys (SMA) for a particular task. In case of a decision to use SMA for product development, design engineers normally do not know in detail how to proceed in a correct and beneficial way. In order to support design engineers who have no previous knowledge about SMA, and to assist in the transfer of results from basic research to industrial practice, an essential knowledge and method base has been developed. Through carefully conducted literature studies and patent analysis, material and design information was collected. All information is implemented into a computer-supported knowledge and method base that provides design information with a particular focus on the conceptual and embodiment design phases. The knowledge and method base contains solution principles and data about effects, material and manufacturing, as well as design guidelines and calculation methods for dimensioning and optimization. A browser-based user interface ensures that design engineers have immediate access to the latest version of the knowledge and method base. In order to ensure a user-friendly application, an evaluation with several test users has been carried out. Reactions of design engineers from the industrial sector underline the need for support related to knowledge on SMA. (Abstract Copyright [2004], Wiley Periodicals, Inc.)

  8. A fuzzy logic based PROMETHEE method for material selection problems

    Directory of Open Access Journals (Sweden)

    Muhammet Gul

    2018-03-01

    Full Text Available Material selection is a complex problem in the design and development of products for diverse engineering applications. This paper presents a fuzzy PROMETHEE (Preference Ranking Organization Method for Enrichment Evaluation) method based on trapezoidal fuzzy interval numbers that can be applied to the selection of materials for an automotive instrument panel. It also contributes to the literature by applying a fuzzy decision-making approach to material selection problems. The method is illustrated, validated, and compared against three different fuzzy MCDM methods (fuzzy VIKOR, fuzzy TOPSIS, and fuzzy ELECTRE) in terms of its ranking performance. In addition, the relationships between the compared methods and the proposed scenarios for fuzzy PROMETHEE are evaluated via Spearman's correlation coefficient. Styrene Maleic Anhydride and Polypropylene are identified as suitable candidate materials for the automotive instrument panel case. We propose a generic fuzzy MCDM methodology that can be practically applied to material selection problems. The main advantages of the methodology are its consideration of the vagueness, uncertainty, and fuzziness of the decision-making environment.
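
    The paper works with trapezoidal fuzzy numbers; the sketch below shows only the crisp PROMETHEE II core (pairwise preferences aggregated into net outranking flows) so the ranking mechanics are visible. The weights, decision matrix, and linear preference function are illustrative assumptions.

    ```python
    import numpy as np

    def promethee_net_flows(X, weights, p=1.0):
        """Crisp PROMETHEE II sketch: X is an (alternatives x criteria) matrix of
        benefit criteria; `p` is a linear-preference threshold (assumed value)."""
        m = X.shape[0]
        pref = np.zeros((m, m))
        for i in range(m):
            for j in range(m):
                d = X[i] - X[j]                    # criterion-wise advantage of i over j
                pij = np.clip(d / p, 0.0, 1.0)     # linear preference function
                pref[i, j] = np.dot(weights, pij)  # weighted aggregated preference
        phi_plus = pref.sum(axis=1) / (m - 1)      # positive (leaving) flow
        phi_minus = pref.sum(axis=0) / (m - 1)     # negative (entering) flow
        return phi_plus - phi_minus                # net flow: higher ranks better

    X = np.array([[0.7, 0.4, 0.9], [0.6, 0.8, 0.5], [0.9, 0.5, 0.6]])
    print(promethee_net_flows(X, weights=np.array([0.5, 0.3, 0.2])))
    ```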

  9. A Statistic-Based Calibration Method for TIADC System

    Directory of Open Access Journals (Sweden)

    Kuojun Yang

    2015-01-01

    Full Text Available The time-interleaved technique is widely used to increase the sampling rate of an analog-to-digital converter (ADC). However, channel mismatches degrade the performance of a time-interleaved ADC (TIADC). Therefore, a statistic-based calibration method for TIADC is proposed in this paper. The average value of sampling points is utilized to calculate the offset error, and the summation of sampling points is used to calculate the gain error. After the offset and gain errors are obtained, they are calibrated by offset and gain adjustment elements in the ADC. Timing skew is calibrated by an iterative method in which the product of sampling points of two adjacent sub-channels is used as the calibration metric. The proposed method is employed to calibrate mismatches in a four-channel 5 GS/s TIADC system. Simulation results show that the proposed method can estimate mismatches accurately over a wide frequency range, and that an accurate estimate can be obtained even if the signal-to-noise ratio (SNR) of the input signal is as low as 20 dB. Furthermore, the results obtained from a real four-channel 5 GS/s TIADC system demonstrate the effectiveness of the proposed method: the spectral spurs due to mismatches are effectively eliminated after calibration.
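
    A toy numpy illustration of the statistic-based idea is given below: each sub-channel's offset is estimated from its average and its gain from its mean magnitude. The test tone, mismatch values and channel count are invented, and timing-skew calibration is omitted.

    ```python
    import numpy as np

    fs, f_in, n = 5e9, 97e6, 1 << 14          # sample rate, test tone, length (assumed)
    t = np.arange(n) / fs
    ideal = np.sin(2 * np.pi * f_in * t)      # single-tone calibration input

    # Emulate a 4-channel TIADC with per-channel offset and gain mismatch.
    offsets = np.array([0.010, -0.020, 0.005, 0.015])
    gains = np.array([1.00, 1.03, 0.98, 1.01])
    x = np.empty(n)
    for ch in range(4):
        x[ch::4] = gains[ch] * ideal[ch::4] + offsets[ch]

    # Statistic-based estimation: each sub-channel's mean estimates its offset;
    # its mean magnitude (offset removed) estimates the relative gain.
    subs = [x[ch::4] for ch in range(4)]
    off = [s.mean() for s in subs]
    mag = [np.abs(s - o).mean() for s, o in zip(subs, off)]
    ref = np.mean(mag)                        # common reference level for all channels
    for ch in range(4):
        x[ch::4] = (subs[ch] - off[ch]) * (ref / mag[ch])   # apply corrections
    ```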

  10. A Blade Tip Timing Method Based on a Microwave Sensor

    Directory of Open Access Journals (Sweden)

    Jilong Zhang

    2017-05-01

    Full Text Available Blade tip timing is an effective method for blade vibration measurements in turbomachinery. The method is increasing in popularity because it is non-intrusive and has several advantages over the conventional strain gauge method. Different kinds of sensors have been developed for blade tip timing, including optical, eddy current and capacitance sensors. However, these sensors are unsuitable in environments with contaminants or high temperatures. Microwave sensors offer a promising potential solution to overcome these limitations. In this article, a microwave sensor-based blade tip timing measurement system is proposed. A patch antenna probe is used to transmit and receive the microwave signals. The signal model and processing method are analyzed. A zero intermediate frequency structure is employed to maintain timing accuracy and dynamic performance, and the received signal can also be used to measure tip clearance. The timing method uses the rising and falling edges of the signal and an auto-gain control circuit to reduce the effect of tip clearance change. To validate the accuracy of the system, it is compared experimentally with a fiber optic tip timing system. The results show that the microwave tip timing system achieves good accuracy.

  11. An adjoint sensitivity-based data assimilation method and its comparison with existing variational methods

    Directory of Open Access Journals (Sweden)

    Yonghan Choi

    2014-01-01

    Full Text Available An adjoint sensitivity-based data assimilation (ASDA method is proposed and applied to a heavy rainfall case over the Korean Peninsula. The heavy rainfall case, which occurred on 26 July 2006, caused torrential rainfall over the central part of the Korean Peninsula. The mesoscale convective system (MCS related to the heavy rainfall was classified as training line/adjoining stratiform (TL/AS-type for the earlier period, and back building (BB-type for the later period. In the ASDA method, an adjoint model is run backwards with forecast-error gradient as input, and the adjoint sensitivity of the forecast error to the initial condition is scaled by an optimal scaling factor. The optimal scaling factor is determined by minimising the observational cost function of the four-dimensional variational (4D-Var method, and the scaled sensitivity is added to the original first guess. Finally, the observations at the analysis time are assimilated using a 3D-Var method with the improved first guess. The simulated rainfall distribution is shifted northeastward compared to the observations when no radar data are assimilated or when radar data are assimilated using the 3D-Var method. The rainfall forecasts are improved when radar data are assimilated using the 4D-Var or ASDA method. Simulated atmospheric fields such as horizontal winds, temperature, and water vapour mixing ratio are also improved via the 4D-Var or ASDA method. Due to the improvement in the analysis, subsequent forecasts appropriately simulate the observed features of the TL/AS- and BB-type MCSs and the corresponding heavy rainfall. The computational cost associated with the ASDA method is significantly lower than that of the 4D-Var method.
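
    In compact notation (ours, not necessarily the authors'), the ASDA update described above can be written as

    \[
    \mathbf{x}_0^{\mathrm{new}} = \mathbf{x}_0 + \lambda^{*}\,\mathbf{M}^{\mathrm{T}}\nabla_{\mathbf{x}_f} J_f,
    \qquad
    \lambda^{*} = \arg\min_{\lambda}\, J_o\!\left(\mathbf{x}_0 + \lambda\,\mathbf{M}^{\mathrm{T}}\nabla_{\mathbf{x}_f} J_f\right),
    \]

    where \(\mathbf{M}^{\mathrm{T}}\nabla_{\mathbf{x}_f} J_f\) is the adjoint sensitivity of the forecast error \(J_f\) to the initial condition and \(J_o\) is the 4D-Var observational cost function; the improved first guess \(\mathbf{x}_0^{\mathrm{new}}\) is then used in the final 3D-Var analysis.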

  12. Application of model-based and knowledge-based measuring methods as analytical redundancy

    International Nuclear Information System (INIS)

    Hampel, R.; Kaestner, W.; Chaker, N.; Vandreier, B.

    1997-01-01

    The safe operation of nuclear power plants requires the application of modern and intelligent methods of signal processing for normal operation as well as for the management of accident conditions. Such modern and intelligent methods are model-based and knowledge-based ones, founded on analytical knowledge (mathematical models) as well as experience (fuzzy information). In addition to the existing hardware redundancies, analytical redundancies are established with the help of these modern methods. These analytical redundancies support the operating staff during decision-making. The design of a hybrid model-based and knowledge-based measuring method is demonstrated by the example of a fuzzy-supported observer, in which a classical linear observer is coupled with a fuzzy-supported adaptation of the model matrices of the observer model. This application is realized for the estimation of non-measurable variables such as steam content and mixture level within pressure vessels containing a water-steam mixture during accidental depressurizations. For this example, the existing non-linearities are classified and the verification of the model is explained. The advantages of the hybrid method in comparison to classical model-based measuring methods are demonstrated by the estimation results. The consideration of the parameters which have an important influence on the non-linearities requires the inclusion of high-dimensional structures of fuzzy logic within the model-based measuring methods. Therefore, methods are presented which allow the conversion of these high-dimensional structures into two-dimensional structures of fuzzy logic. As an efficient solution to this problem, a method based on cascaded fuzzy controllers is presented. (author). 2 refs, 12 figs, 5 tabs

  13. Research on Automotive Dynamic Weighing Method Based on Piezoelectric Sensor

    Directory of Open Access Journals (Sweden)

    Zhang Wei

    2017-01-01

    Full Text Available In order to effectively measure the dynamic axle load of vehicles in motion, a dynamic vehicle weighing method based on a piezoelectric sensor was studied. Firstly, the factors influencing measurement accuracy in the dynamic weighing process were analyzed systematically, and the impacts of road irregularities and of vibration of the dynamic weighing system on measurement error were discussed. On the basis of this analysis, an arithmetic mean filter was used in the software algorithm to remove the periodic interference superimposed on the sensor signal, and the most suitable window length n was selected by simulation comparison to obtain the best filtering result. Then, the dynamic axle load calculation model for high-speed vehicles was studied in depth: based on the theoretical response curve of the sensor, a dynamic axle load calculation method based on frequency reconstruction was established from the actual measured sensor signals and their analysis in the time and frequency domains, and the least squares method was used to identify the temperature correction coefficient. A large amount of data covering the usual vehicle weighing range was collected by experiment. The results show that at a fixed temperature the system identification error of the dynamic weighing signal was controlled within 10%, with 60% of the vehicle data within 7%. The temperature correction coefficient and the correction formulas for different temperature ranges are well adapted, ensuring that the error at different temperatures is also controlled within 10%, with 70% of the vehicle data within 7%. Furthermore, the weighing results remain stable regardless of vehicle speed, which meets the requirements for high-speed dynamic weighing.
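
    As a minimal illustration of the arithmetic mean filtering step described above, the sketch below applies a moving-average window to a synthetic axle-load pulse; the window length n, which the authors select by simulation comparison, is assumed here.

    ```python
    import numpy as np

    def arithmetic_mean_filter(signal, n=8):
        """Moving-average filter: each output is the mean of n neighbouring samples."""
        kernel = np.ones(n) / n
        return np.convolve(signal, kernel, mode="same")

    t = np.linspace(0.0, 1.0, 1000)
    axle_pulse = np.exp(-((t - 0.5) ** 2) / 0.002)            # idealised axle-load pulse
    noisy = axle_pulse + 0.1 * np.sin(2 * np.pi * 200 * t)    # periodic interference
    clean = arithmetic_mean_filter(noisy, n=8)                # n = 8 is an assumption
    ```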

  14. Spectral radiative property control method based on filling solution

    International Nuclear Information System (INIS)

    Jiao, Y.; Liu, L.H.; Hsu, P.-F.

    2014-01-01

    Controlling thermal radiation by tailoring the spectral properties of microstructures is a promising approach that can be applied in many industrial systems and has been widely researched recently. Among the various property tailoring schemes, geometry design of microstructures is a commonly used method. However, existing radiative property tailoring is limited by the adjustability of the processed microstructures; in other words, the spectral radiative properties of microscale structures cannot be changed after the gratings are fabricated. In this paper, we propose a method that adjusts the grating spectral properties by means of injecting a filling solution, which can modify the thermal radiation in a fabricated microstructure and therefore overcomes the limitation mentioned above. Both mercury and water are adopted as the filling solution in this study. Aluminum and silver are selected as the grating materials to investigate the generality and limitations of this control method. The rigorous coupled-wave analysis is used to investigate the spectral radiative properties of these filling solution grating structures. A magnetic polaritons mechanism identification method is proposed based on the LC circuit model principle. It is found that this control method can be used with different grating materials, and that different filling solutions shift the high absorption peak to longer or shorter wavelength bands. The results show that filling solution grating structures are promising for active control of spectral radiative properties. -- Highlights: • A filling solution grating structure is designed to adjust spectral radiative properties. • The mechanism of radiative property control is studied for engineering utilization. • Different grating materials are studied to find multi-functions for grating

  15. OWL-based reasoning methods for validating archetypes.

    Science.gov (United States)

    Menárguez-Tortosa, Marcos; Fernández-Breis, Jesualdo Tomás

    2013-04-01

    Some modern Electronic Healthcare Record (EHR) architectures and standards are based on the dual model-based architecture, which defines two conceptual levels: the reference model and the archetype model. Such architectures represent EHR domain knowledge by means of archetypes, which are considered by many researchers to play a fundamental role in the achievement of semantic interoperability in healthcare. Consequently, formal methods for validating archetypes are necessary. In recent years, there has been an increasing interest in exploring how semantic web technologies in general, and ontologies in particular, can facilitate the representation and management of archetypes, including binding to terminologies, but no solution based on such technologies has been provided to date to validate archetypes. Our approach represents archetypes by means of OWL ontologies. This makes it possible to combine the two levels of the dual model-based architecture in one modeling framework, which can also integrate terminologies available in OWL format. The validation method consists of reasoning on those ontologies to find modeling errors in archetypes: incorrect restrictions over the reference model, non-conformant archetype specializations and inconsistent terminological bindings. The archetypes available in the repositories supported by the openEHR Foundation and the NHS Connecting for Health Program, which are the two largest publicly available ones, have been analyzed with our validation method. For this purpose, we have implemented a software tool called Archeck. Our results show that around one fifth of archetype specializations contain modeling errors, the most common mistakes being related to coded terms and terminological bindings. The analysis of each repository reveals that different patterns of errors are found in both repositories. This result reinforces the need for making serious efforts in improving archetype design processes. Copyright © 2012 Elsevier Inc. All rights reserved.

  16. A concern-based method to prioritize spill response activities

    International Nuclear Information System (INIS)

    Lamarche, A.; Bart, H.

    2002-01-01

    The Shoreline Cleanup Assessment Team (SCAT) of the Emergencies Division of Environment Canada in the Ontario Region developed a computerized method to help rank segments of shoreline according to levels of concern in the event of an oil spill. The original SCAT approach was designed to allow survey teams to acquire information about the state of shoreline spills so that assessments of oilings would be comparable in time and space. The SCAT method, which allows several decision makers to obtain an unbiased evaluation of the oiling situation, has been recognized in both Canada and the United States as a method that ensures the consistency of data gathering and processing for prioritization purposes. The concern-based prioritization system was integrated within the computerized response tools used by the SCAT team using tools such as the Great Lakes Electronic Environmental Sensitivities Atlas (GLEESA), a geographic information system (GIS) of environmental data, and Shore Assess, a GIS based computerized system used to provide support during a response phase of a spill. It was noted that this method is considered to be a practical response tool designed around the principles of performance support and cybernetics to help decision makers set priorities. It is not designed for pre-impact assessment. Instead, it ensures that existing knowledge of the spill characteristics and environmental conditions are used in a consistent and logical method to prioritize contingency plans. The factors used to evaluate concern for oiling, shoreline type and land use were described. Factors for concern assessment of biological organisms include the status of organisms as being either endangered, threatened, vulnerable, special concern, or not at risk. Characteristics of the species, potential effect of the pollutant and potential effect from response activities are other factors for concern. The method evaluates the concern for every category using a simple algorithm which is

  17. A genetic algorithm based method for neutron spectrum unfolding

    International Nuclear Information System (INIS)

    Suman, Vitisha; Sarkar, P.K.

    2013-03-01

    An approach to neutron spectrum unfolding based on a stochastic evolutionary search mechanism, the Genetic Algorithm (GA), is presented. It is tested by unfolding a set of simulated spectra, and the unfolded spectra are compared to the output of the standard code FERDOR. The method was then applied to measured pulse height spectra of neutrons from an AmBe source, as well as of neutrons emitted from Li(p,n) and Ag(C,n) nuclear reactions carried out in an accelerator environment. The unfolded spectra, compared to the output of FERDOR, show good agreement for the AmBe and Li(p,n) spectra. In the case of the Ag(C,n) spectra, the GA method results in some fluctuations. The necessity of smoothing the obtained solution is also studied, leading to an approximation that finally yields an appropriate solution. A few smoothing techniques are examined as well, including second-difference smoothing, Monte Carlo averaging, a combination of both, and Gaussian-based smoothing. The unfolded results obtained after inclusion of the smoothing criteria are in close agreement with the output of the FERDOR code. The present method is also tested on a set of underdetermined problems; the outputs, compared to the spectra obtained from FERDOR applied to a completely determined problem, show a good match. The distribution of the unfolded spectra is also studied. Uncertainty propagation into the unfolded spectra, due to errors present in the measurement as well as in the response function, is also carried out. The method appears promising for unfolding both completely determined and underdetermined problems, and it includes provisions for uncertainty analysis. (author)
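
    A toy sketch of GA-based unfolding is given below: candidate spectra evolve so that, when folded with a response matrix R, they reproduce the measured pulse-height distribution m. All GA settings are assumptions, and the smoothing steps discussed above are omitted.

    ```python
    import numpy as np
    rng = np.random.default_rng(0)

    def ga_unfold(R, m, pop=60, gens=300, sigma=0.05):
        """Toy GA spectrum-unfolding sketch: find non-negative phi with R @ phi ≈ m.
        R: (channels x energy-bins) response matrix; all GA settings are assumed."""
        nbins = R.shape[1]
        P = rng.random((pop, nbins))                       # random initial spectra
        for _ in range(gens):
            fit = -np.linalg.norm(P @ R.T - m, axis=1)     # fitness: negative residual
            elite = P[np.argsort(fit)[::-1][: pop // 2]]   # truncation selection
            mates = elite[rng.integers(0, len(elite), size=(pop - len(elite), 2))]
            alpha = rng.random((pop - len(elite), 1))
            children = alpha * mates[:, 0] + (1 - alpha) * mates[:, 1]  # blend crossover
            children += sigma * rng.normal(size=children.shape)         # mutation
            P = np.vstack([elite, np.clip(children, 0.0, None)])        # keep non-negative
        return P[np.argmax(-np.linalg.norm(P @ R.T - m, axis=1))]
    ```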

  18. A CT-based method for fully quantitative TI SPECT

    International Nuclear Information System (INIS)

    Willowson, Kathy; Bailey, Dale; Baldock, Clive

    2009-01-01

    Full text: Objectives: To develop and validate a method for quantitative ²⁰¹Tl SPECT based on corrections derived from X-ray CT data, and to apply the method in the clinic for quantitative determination of recurrence of brain tumours. Method: A previously developed method for achieving quantitative SPECT with ⁹⁹ᵐTc based on corrections derived from X-ray CT data was extended to apply to ²⁰¹Tl. Experimental validation was performed on a cylindrical phantom by comparing known injected activity and measured concentration to quantitative calculations. Further evaluation was performed on an RSI Striatal Brain Phantom containing three 'lesions' with activity-to-background ratios of 1:1, 1.5:1 and 2:1. The method was subsequently applied to a series of scans from patients with suspected recurrence of brain tumours (principally glioma) to determine an SUV-like measure (Standardised Uptake Value). Results: The total activity and concentration in the phantom were calculated to within 3% and 1% of the true values, respectively. The calculated values for the concentration of activity in the background and the corresponding lesions of the brain phantom (in increasing ratios) were found to be within 2%, 10%, 1% and 2%, respectively, of the true concentrations. Patient studies showed that an initial SUV greater than 1.5 corresponded to a 56% mortality rate in the first 12 months, as opposed to a 14% mortality rate for those with an SUV less than 1.5. Conclusion: The quantitative technique produces accurate results for the radionuclide ²⁰¹Tl. Initial investigation in clinical brain SPECT suggests a correlation between quantitative uptake and survival.

  19. Hyperspectral image compressing using wavelet-based method

    Science.gov (United States)

    Yu, Hui; Zhang, Zhi-jie; Lei, Bo; Wang, Chen-sheng

    2017-10-01

    Hyperspectral imaging sensors can acquire images in hundreds of continuous narrow spectral bands, so each object present in the image can be identified from its spectral response. However, this kind of imaging produces a huge amount of data, which demands transmission, processing, and storage resources for both airborne and spaceborne imaging. Due to the high volume of hyperspectral image data, the exploration of compression strategies has received a lot of attention in recent years, and compression of hyperspectral data cubes is an effective solution to these problems. Lossless compression of hyperspectral data usually results in a low compression ratio, which may not meet the available resources; on the other hand, lossy compression may give the desired ratio, but with a significant degradation effect on the object identification performance of the hyperspectral data. Moreover, most hyperspectral data compression techniques exploit the similarities in the spectral dimension, which requires band reordering or regrouping to make use of the spectral redundancy. In this paper, we explore the spectral cross-correlation between different bands and propose an adaptive band selection method to obtain the spectral bands which contain most of the information of the acquired hyperspectral data cube. The proposed method consists of three main steps. First, the algorithm decomposes the original hyperspectral imagery into a series of subspaces based on the correlation matrix of the hyperspectral images between different bands. Then a wavelet-based algorithm is applied to each subspace. Finally, the PCA method is applied to the wavelet coefficients to produce the chosen number of components. The performance of the proposed method was tested using the ISODATA classification method.
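
    A compressed-to-essentials sketch of the wavelet-plus-PCA stage is shown below; the correlation-based subspace grouping is omitted for brevity, and the wavelet family, decomposition level and component count are assumptions.

    ```python
    import numpy as np
    import pywt
    from sklearn.decomposition import PCA

    def compress(cube, n_components=10, wavelet="db2", level=2):
        """Sketch of a wavelet-plus-PCA hyperspectral compressor.
        cube: (rows, cols, bands) array; n_components must not exceed the band count.
        The paper's correlation-based band grouping is omitted here."""
        feats = []
        for b in range(cube.shape[2]):
            coeffs = pywt.wavedec2(cube[:, :, b], wavelet, level=level)
            feats.append(coeffs[0].ravel())   # keep only the coarse approximation
        F = np.stack(feats)                   # (bands, coarse-coefficients)
        pca = PCA(n_components=n_components)
        return pca.fit_transform(F), pca      # compressed representation + model

    compressed, model = compress(np.random.rand(64, 64, 32))
    ```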

  20. Failure Probability Calculation Method Using Kriging Metamodel-based Importance Sampling Method

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Seunggyu [Korea Aerospace Research Institue, Daejeon (Korea, Republic of); Kim, Jae Hoon [Chungnam Nat’l Univ., Daejeon (Korea, Republic of)

    2017-05-15

    The kernel density was determined based on sampling points obtained in a Markov chain simulation and was used as the importance sampling function. A Kriging metamodel was constructed in greater detail in the vicinity of the limit state. The failure probability was then calculated by importance sampling performed on the Kriging metamodel. A pre-existing method was modified to obtain more sampling points for the kernel density in the vicinity of the limit state, and a stable numerical method was proposed to find a parameter of the kernel density. To assess the completeness of the Kriging metamodel, the possibility of changes in the calculated failure probability due to the uncertainty of the Kriging metamodel was calculated.
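
    In standard importance-sampling notation (ours, not the paper's), the failure probability evaluated on the Kriging metamodel \(\hat{g}\) is estimated as

    \[
    P_f = \int I_{\hat{g}(\mathbf{x})\le 0}\,\frac{f(\mathbf{x})}{h(\mathbf{x})}\,h(\mathbf{x})\,\mathrm{d}\mathbf{x}
    \;\approx\; \frac{1}{N}\sum_{i=1}^{N} I_{\hat{g}(\mathbf{x}_i)\le 0}\,\frac{f(\mathbf{x}_i)}{h(\mathbf{x}_i)},
    \qquad \mathbf{x}_i \sim h,
    \]

    where \(f\) is the original probability density and \(h\) is the kernel density built from the Markov chain samples.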

  1. Hybrid perturbation methods based on statistical time series models

    Science.gov (United States)

    San-Juan, Juan Félix; San-Martín, Montserrat; Pérez, Iván; López, Rosario

    2016-04-01

    In this work we present a new methodology for orbit propagation, the hybrid perturbation theory, based on the combination of an integration method and a prediction technique. The former, which can be a numerical, analytical or semianalytical theory, generates an initial approximation that contains some inaccuracies derived from the fact that, in order to simplify the expressions and subsequent computations, not all the involved forces are taken into account and only low-order terms are considered, not to mention the fact that mathematical models of perturbations do not always reproduce physical phenomena with absolute precision. The prediction technique, which can be based on either statistical time series models or computational intelligence methods, is aimed at modelling and reproducing missing dynamics in the previously integrated approximation. This combination improves the precision of conventional numerical, analytical and semianalytical theories for determining the position and velocity of any artificial satellite or space debris object. In order to validate this methodology, we present a family of three hybrid orbit propagators formed by the combination of three different orders of approximation of an analytical theory and a statistical time series model, and analyse their capability to process the effect produced by the flattening of the Earth. The three considered analytical components are the integration of the Kepler problem, a first-order and a second-order analytical theory, whereas the prediction technique is the same in all three cases, namely an additive Holt-Winters method.
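
    For reference, a compact additive Holt-Winters recursion, the prediction component named above, is sketched below; the smoothing constants, season length and initialisation are assumptions.

    ```python
    def holt_winters_additive(y, m, alpha=0.3, beta=0.05, gamma=0.1, horizon=10):
        """Additive Holt-Winters sketch: level + trend + season of length m.
        y: list of observations; smoothing constants are assumed values."""
        level, trend = y[0], y[1] - y[0]
        season = [y[i] - level for i in range(m)]     # crude seasonal initialisation
        for t in range(m, len(y)):
            s = season[t % m]
            new_level = alpha * (y[t] - s) + (1 - alpha) * (level + trend)
            trend = beta * (new_level - level) + (1 - beta) * trend
            season[t % m] = gamma * (y[t] - new_level) + (1 - gamma) * s
            level = new_level
        n = len(y)
        # h-step-ahead forecasts: extrapolate trend, reuse the matching season slot
        return [level + (h + 1) * trend + season[(n + h) % m] for h in range(horizon)]
    ```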

  2. 2-D tiles declustering method based on virtual devices

    Science.gov (United States)

    Li, Zhongmin; Gao, Lu

    2009-10-01

    Generally, 2-D spatial data are divided into a series of tiles according to a plane grid. To achieve a satisfactory visual effect, the tiles in the query window containing the viewpoint must be displayed quickly on the screen. To address the performance differences among real storage devices, we propose a 2-D tile declustering method based on virtual devices. Firstly, we construct a group of virtual devices with identical storage performance and unlimited capacity, and distribute the tiles over M virtual devices according to the query window of the 2-D tiles. Secondly, we uniformly map the tiles in the M virtual devices into M equidistant intervals in [0, 1) using a pseudo-random number generator. Finally, we divide [0, 1) into intervals according to the tile distribution percentage of every real storage device, and distribute the tiles in each interval to the corresponding real storage device. We have designed and implemented a prototype, GlobeSIGht, and give some related test results. The results show that the average response time for each tile in the query window containing the viewpoint is lower with the virtual-device-based 2-D tile declustering method than with other methods.
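
    A condensed sketch of the mapping chain (tiles to virtual devices, virtual devices to [0, 1), and [0, 1) to real devices) is given below; the round-robin assignment, device shares and seed are illustrative assumptions.

    ```python
    import numpy as np

    def decluster(tile_ids, m_virtual, device_shares, seed=7):
        """Virtual-device declustering sketch: tiles go round-robin to M virtual
        devices, each virtual device owns an equidistant interval of [0, 1), and
        [0, 1) is cut by each real device's share (assumed inputs)."""
        rng = np.random.default_rng(seed)
        virtual = np.arange(len(tile_ids)) % m_virtual   # round-robin assignment
        # place each tile uniformly inside its virtual device's interval of [0, 1)
        u = (virtual + rng.random(len(tile_ids))) / m_virtual
        bounds = np.cumsum(device_shares)                # real-device cut points
        return np.searchsorted(bounds, u)                # index of the real device

    print(decluster(range(20), m_virtual=4, device_shares=[0.5, 0.3, 0.2]))
    ```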

  3. Dynamic model based on Bayesian method for energy security assessment

    International Nuclear Information System (INIS)

    Augutis, Juozas; Krikštolaitis, Ričardas; Pečiulytė, Sigita; Žutautaitė, Inga

    2015-01-01

    Highlights: • Methodology for dynamic indicator model construction and forecasting of indicators. • Application of dynamic indicator model for energy system development scenarios. • Expert judgement involvement using Bayesian method. - Abstract: The methodology for dynamic indicator model construction and the forecasting of indicators for the assessment of energy security level is presented in this article. An indicator is a special index which provides numerical values for factors that are important to the investigated area. In real life, models of different processes take into account various factors that are time-dependent and dependent on each other. Thus, it is advisable to construct a dynamic model in order to describe these dependences. The energy security indicators are used as factors in the dynamic model. Usually, the values of indicators are obtained from statistical data. The developed dynamic model enables forecasting of indicator variation, taking into account changes in system configuration. Energy system development is usually based on the construction of a new object. Since the parameters of changes to the new system are not exactly known, information about their influence on the indicators cannot be incorporated into the model by deterministic methods. Thus, the dynamic indicator model based on historical data is adjusted by a probabilistic model that incorporates the influence of new factors on the indicators using the Bayesian method

  4. A multiparameter chaos control method based on OGY approach

    International Nuclear Information System (INIS)

    Souza de Paula, Aline; Amorim Savi, Marcelo

    2009-01-01

    Chaos control exploits the richness of responses of chaotic behavior and may be understood as the use of tiny perturbations for the stabilization of an unstable periodic orbit (UPO) embedded in a chaotic attractor. Since one of these UPOs can provide better performance than others in a particular situation, chaos control can make this kind of behavior desirable in a variety of applications. The OGY method is a discrete technique that applies small perturbations in the neighborhood of the desired orbit when the trajectory crosses a specific surface, such as a Poincaré section. This contribution proposes a multiparameter semi-continuous method based on the OGY approach in order to control chaotic behavior. Two different approaches are possible with this method: a coupled approach, where all control parameters influence the system dynamics even when they are not active; and an uncoupled approach, a particular case in which control parameters return to the reference value when they become passive. As an application of the general formulation, a two-parameter actuation of a nonlinear pendulum control is investigated employing the coupled and uncoupled approaches. Analyses are carried out considering signals generated by numerical integration of the mathematical model using experimentally identified parameters. Results show that the procedure can be a good alternative for chaos control, since it provides more effective UPO stabilization than the classical single-parameter approach.

  5. An Improved Information Hiding Method Based on Sparse Representation

    Directory of Open Access Journals (Sweden)

    Minghai Yao

    2015-01-01

    Full Text Available A novel biometric authentication information hiding method based on sparse representation is proposed for enhancing the security of biometric information transmitted over the network. In order to make good use of the abundant information in the cover image, the sparse representation method is adopted to exploit the correlation between the cover and biometric images. The biometric image is divided into two parts: a reconstructed image and a residual image. The biometric authentication image cannot be restored from either part alone. The residual image and the sparse representation coefficients are embedded into the cover image. Then, to attract less attention from attackers, a visual attention mechanism is employed to select the embedding location and embedding sequence of the secret information. Finally, a reversible watermarking algorithm based on histograms is utilized for embedding the secret information. To verify the validity of the algorithm, the PolyU multispectral palmprint and CASIA iris databases are used as biometric information. The experimental results show that the proposed method exhibits good security, invisibility, and high capacity.

  6. TUNNEL POINT CLOUD FILTERING METHOD BASED ON ELLIPTIC CYLINDRICAL MODEL

    Directory of Open Access Journals (Sweden)

    N. Zhu

    2016-06-01

    Full Text Available The large number of bolts and screws attached to the subway shield ring plates, along with the great number of metal stent accessories and electrical equipment mounted on the tunnel walls, cause laser point cloud data to include many non-tunnel-section points (hereinafter referred to as non-points), which affect the accuracy of modeling and deformation monitoring. This paper proposes a filtering method for the point cloud based on an elliptic cylindrical model. The original laser point cloud data are first projected onto a horizontal plane, and a search algorithm is given to extract the edge points of both sides, which are then used to fit the tunnel central axis. Along the axis the point cloud is segmented into regions and then iteratively fitted as a smooth elliptic cylindrical surface. This processing enables the automatic filtering of the non-points on the inner wall. Two groups of experiments showed consistent results: the elliptic cylindrical model-based method can effectively filter out the non-points and meets the accuracy requirements for subway deformation monitoring. The method provides a new mode for the periodic monitoring of all-around tunnel section deformation in routine subway operation and maintenance.

  7. Outline-based morphometrics, an overlooked method in arthropod studies?

    Science.gov (United States)

    Dujardin, Jean-Pierre; Kaba, D; Solano, P; Dupraz, M; McCoy, K D; Jaramillo-O, N

    2014-12-01

    Modern methods allow a geometric representation of forms, separating size and shape. In entomology, as well as in many other fields involving arthropod studies, shape variation has proved useful for species identification and population characterization. In medical entomology, it has been applied to very specific questions such as population structure, reinfestation of insecticide-treated areas and cryptic species recognition. For shape comparisons, great importance is given to the quality of landmarks in terms of comparability. Two conceptually and statistically separate approaches are: (i) landmark-based morphometrics, based on the relative position of a few anatomical "true" or "traditional" landmarks, and (ii) outline-based morphometrics, which captures the contour of forms through a sequence of close "pseudo-landmarks". Most of the studies on insects of medical, veterinary or economic importance make use of the landmark approach. The present survey makes a case for the outline method, here based on elliptic Fourier analysis. The collection of pseudo-landmarks may require the manual digitization of many points and, for this reason, might appear less attractive. It does, however, have the ability to compare homologous organs or structures having no landmarks at all. This strength offers the possibility of studying a wider range of anatomical structures and thus a larger range of arthropods. We present a few examples highlighting its interest for separating close or cryptic species, or characterizing conspecific geographic populations, in a series of different vector organisms. In this simple application, i.e. the recognition of close or cryptic forms, the outline approach provided scores similar to those obtained by the landmark-based approach. Copyright © 2014 Elsevier B.V. All rights reserved.
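
    As a simplified stand-in for full elliptic Fourier analysis, the sketch below computes complex Fourier descriptors of a closed outline and normalises them for translation, size and rotation; the normalisation choices are assumptions.

    ```python
    import numpy as np

    def fourier_descriptors(contour, n_harmonics=10):
        """Simplified outline-descriptor sketch: the closed contour (N x 2 array of
        x, y points) is treated as a complex signal and its low-order Fourier
        coefficients are kept. This stands in for full elliptic Fourier analysis."""
        z = contour[:, 0] + 1j * contour[:, 1]
        coeffs = np.fft.fft(z) / len(z)
        coeffs[0] = 0.0                       # drop position (translation invariance)
        coeffs /= np.abs(coeffs[1])           # scale by first harmonic (size invariance)
        return np.abs(coeffs[1:n_harmonics + 1])   # magnitudes: rotation invariant

    theta = np.linspace(0.0, 2 * np.pi, 200, endpoint=False)
    outline = np.c_[2.0 * np.cos(theta), np.sin(theta)]   # an ellipse as a toy outline
    print(fourier_descriptors(outline, 5))
    ```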

  8. Design of time interval generator based on hybrid counting method

    Energy Technology Data Exchange (ETDEWEB)

    Yao, Yuan [State Key Laboratory of Particle Detection and Electronics, University of Science and Technology of China, Hefei, Anhui 230026 (China); Institute of Plasma Physics, Chinese Academy of Sciences, Hefei 230031 (China); Wang, Zhaoqi [State Key Laboratory of Particle Detection and Electronics, University of Science and Technology of China, Hefei, Anhui 230026 (China); Lu, Houbing [State Key Laboratory of Particle Detection and Electronics, University of Science and Technology of China, Hefei, Anhui 230026 (China); Hefei Electronic Engineering Institute, Hefei 230037 (China); Chen, Lian [State Key Laboratory of Particle Detection and Electronics, University of Science and Technology of China, Hefei, Anhui 230026 (China); Jin, Ge, E-mail: goldjin@ustc.edu.cn [State Key Laboratory of Particle Detection and Electronics, University of Science and Technology of China, Hefei, Anhui 230026 (China)

    2016-10-01

    Time Interval Generators (TIGs) are frequently used for the characterization or timing operations of instruments in particle physics experiments. Though some “off-the-shelf” TIGs can be employed, the need for a custom test or control system makes TIGs implemented in a programmable device desirable. Nowadays, the feasibility of using Field Programmable Gate Arrays (FPGAs) to implement particle physics instrumentation has been validated in the design of Time-to-Digital Converters (TDCs) for precise time measurement. The FPGA-TDC technique is based on Tapped Delay Line (TDL) architectures, whose delay cells are down to a few tens of picoseconds. FPGA-based TIGs with a fine delay step are therefore preferable, allowing the implementation of customized particle physics instrumentation and other utilities on the same FPGA device. A hybrid counting method for designing TIGs with both high resolution and wide range is presented in this paper. The combination of two different counting methods realizing an integratable TIG is described in detail. A specially designed multiplexer for tap selection is introduced; its special structure is devised to minimize the differing additional delays caused by the unpredictable routings from different taps to the output. A Kintex-7 FPGA is used for the hybrid counting-based implementation of a TIG, providing a resolution of 11 ps and an interval range of up to 8 s.
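
    The arithmetic of hybrid counting can be illustrated in a few lines: a coarse counter contributes whole clock periods while the tapped delay line interpolates the fractional parts at the start and stop edges. The clock frequency, tap delay and sign convention below are illustrative assumptions (the paper reports an 11 ps resolution).

    ```python
    def hybrid_interval(n_coarse, taps_start, taps_stop, f_clk=500e6, t_tap=11e-12):
        """Hybrid-counting sketch: whole clock periods from the coarse counter plus
        tapped-delay-line fractions at the start and stop edges (assumed values)."""
        t_clk = 1.0 / f_clk
        return n_coarse * t_clk + (taps_start - taps_stop) * t_tap

    # e.g. 1234 whole periods; start edge 37 taps into a period, stop edge 12 taps in
    print(hybrid_interval(1234, taps_start=37, taps_stop=12))
    ```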

  9. Design of time interval generator based on hybrid counting method

    International Nuclear Information System (INIS)

    Yao, Yuan; Wang, Zhaoqi; Lu, Houbing; Chen, Lian; Jin, Ge

    2016-01-01

    Time Interval Generators (TIGs) are frequently used for the characterization or timing operations of instruments in particle physics experiments. Though some “off-the-shelf” TIGs can be employed, the need for a custom test or control system makes TIGs implemented in a programmable device desirable. Nowadays, the feasibility of using Field Programmable Gate Arrays (FPGAs) to implement particle physics instrumentation has been validated in the design of Time-to-Digital Converters (TDCs) for precise time measurement. The FPGA-TDC technique is based on Tapped Delay Line (TDL) architectures, whose delay cells are down to a few tens of picoseconds. FPGA-based TIGs with a fine delay step are therefore preferable, allowing the implementation of customized particle physics instrumentation and other utilities on the same FPGA device. A hybrid counting method for designing TIGs with both high resolution and wide range is presented in this paper. The combination of two different counting methods realizing an integratable TIG is described in detail. A specially designed multiplexer for tap selection is introduced; its special structure is devised to minimize the differing additional delays caused by the unpredictable routings from different taps to the output. A Kintex-7 FPGA is used for the hybrid counting-based implementation of a TIG, providing a resolution of 11 ps and an interval range of up to 8 s.

  10. Integrated method for the measurement of trace nitrogenous atmospheric bases

    Directory of Open Access Journals (Sweden)

    D. Key

    2011-12-01

    Full Text Available Nitrogenous atmospheric bases are thought to play a key role in the global nitrogen cycle, but their sources, transport, and sinks remain poorly understood. Of the many methods available to measure such compounds in ambient air, few meet the current need of being applicable to the complete range of potential analytes, and fewer still are convenient to implement using instrumentation that is standard to most laboratories. In this work, an integrated approach to measuring trace, atmospheric, gaseous nitrogenous bases has been developed and validated. The method uses a simple acid scrubbing step to capture and concentrate the bases as their phosphite salts, which then are derivatized and analyzed using GC/MS and/or LC/MS. The advantages of both techniques in the context of the present measurements are discussed. The approach is sensitive, selective, reproducible, and convenient to implement, and has been validated for different sampling strategies. The limits of detection for the families of tested compounds are suitable for ambient measurement applications (e.g., methylamine, 1 pptv; ethylamine, 2 pptv; morpholine, 1 pptv; aniline, 1 pptv; hydrazine, 0.1 pptv; methylhydrazine, 2 pptv), as supported by field measurements in an urban park and in the exhaust of on-road vehicles.

  11. Convex-based void filling method for CAD-based Monte Carlo geometry modeling

    International Nuclear Information System (INIS)

    Yu, Shengpeng; Cheng, Mengyun; Song, Jing; Long, Pengcheng; Hu, Liqin

    2015-01-01

    Highlights: • We present a new void filling method named CVF for CAD-based MC geometry modeling. • We describe convex-based void description and quality-based space subdivision. • The results show the improvements provided by CVF in both modeling and MC calculation efficiency. - Abstract: CAD-based automatic geometry modeling tools have been widely applied to generate Monte Carlo (MC) calculation geometry for complex systems according to CAD models. Automatic void filling is one of the main functions in CAD-based MC geometry modeling tools, because the void space between parts in CAD models is traditionally not modeled, while MC codes such as MCNP need all of the problem space to be described. A dedicated void filling method, named Convex-based Void Filling (CVF), is proposed in this study for efficient void filling and concise void descriptions. The method subdivides the problem space into disjoint regions using Quality-based Subdivision (QS) and describes the void space in each region with complementary descriptions of the convex volumes intersecting that region. It has been implemented in SuperMC/MCAM, the Multiple-Physics Coupling Analysis Modeling Program, and tested on the International Thermonuclear Experimental Reactor (ITER) Alite model. The results showed that the new method reduced both automatic modeling time and MC calculation time

  12. Topology optimization based on the harmony search method

    International Nuclear Information System (INIS)

    Lee, Seung-Min; Han, Seog-Young

    2017-01-01

    A new topology optimization scheme based on a Harmony search (HS) as a metaheuristic method was proposed and applied to static stiffness topology optimization problems. To apply the HS to topology optimization, the variables in HS were transformed to those in topology optimization. Compliance was used as an objective function, and harmony memory was defined as the set of the optimized topology. Also, a parametric study for Harmony memory considering rate (HMCR), Pitch adjusting rate (PAR), and Bandwidth (BW) was performed to find the appropriate range for topology optimization. Various techniques were employed such as a filtering scheme, simple average scheme and harmony rate. To provide a robust optimized topology, the concept of the harmony rate update rule was also implemented. Numerical examples are provided to verify the effectiveness of the HS by comparing the optimal layouts of the HS with those of Bidirectional evolutionary structural optimization (BESO) and Artificial bee colony algorithm (ABCA). The following conclusions could be made: (1) The proposed topology scheme is very effective for static stiffness topology optimization problems in terms of stability, robustness and convergence rate. (2) The suggested method provides a symmetric optimized topology despite the fact that the HS is a stochastic method like the ABCA. (3) The proposed scheme is applicable and practical in manufacturing since it produces a solid-void design of the optimized topology. (4) The suggested method appears to be very effective for large scale problems like topology optimization.
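
    A minimal harmony search loop is sketched below on a toy continuous objective; in the topology-optimization setting the objective would be the compliance of a candidate layout. The HMCR, PAR and BW values are assumptions.

    ```python
    import numpy as np
    rng = np.random.default_rng(1)

    def harmony_search(f, dim, lo, hi, hms=20, hmcr=0.9, par=0.3, bw=0.05, iters=2000):
        """Minimal harmony search sketch (HMCR/PAR/BW values are assumptions)."""
        HM = rng.uniform(lo, hi, size=(hms, dim))      # harmony memory
        cost = np.array([f(h) for h in HM])
        for _ in range(iters):
            new = np.empty(dim)
            for d in range(dim):
                if rng.random() < hmcr:                # memory consideration
                    new[d] = HM[rng.integers(hms), d]
                    if rng.random() < par:             # pitch adjustment
                        new[d] += bw * rng.uniform(-1, 1) * (hi - lo)
                else:                                  # random selection
                    new[d] = rng.uniform(lo, hi)
            new = np.clip(new, lo, hi)
            c = f(new)
            worst = np.argmax(cost)
            if c < cost[worst]:                        # replace the worst harmony
                HM[worst], cost[worst] = new, c
        return HM[np.argmin(cost)]

    best = harmony_search(lambda x: np.sum((x - 0.3) ** 2), dim=5, lo=-1.0, hi=1.0)
    ```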

  13. The Software Cost Estimation Method Based on Fuzzy Ontology

    Directory of Open Access Journals (Sweden)

    Plecka Przemysław

    2014-12-01

    Full Text Available In the course of the sales process for Enterprise Resource Planning (ERP) systems, it often turns out that the standard system must be extended or changed (modified) according to specific customer requirements. Therefore, suppliers face the problem of determining the cost of the additional work. Most methods of cost estimation bring satisfactory results only at the stage of pre-implementation analysis. However, suppliers need to know the estimated cost as early as the stage of trade talks. During contract negotiations, they expect not only information about the cost of the work, but also about the risk of exceeding this cost, or the margin of safety. One method that gives more accurate results at the stage of trade talks is based on an ontology of implementation costs. This paper proposes a modification of that method involving the use of fuzzy attributes, classes, instances and relations in the ontology. The result provides not only information about the value of the work, but also about the minimum and maximum expected cost and the most likely range of costs. This solution allows suppliers to negotiate the contract effectively and increases the chances of successful completion of the project.

  14. Topology optimization based on the harmony search method

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Seung-Min; Han, Seog-Young [Hanyang University, Seoul (Korea, Republic of)

    2017-06-15

    A new topology optimization scheme based on a Harmony search (HS) as a metaheuristic method was proposed and applied to static stiffness topology optimization problems. To apply the HS to topology optimization, the variables in HS were transformed to those in topology optimization. Compliance was used as an objective function, and harmony memory was defined as the set of the optimized topology. Also, a parametric study for Harmony memory considering rate (HMCR), Pitch adjusting rate (PAR), and Bandwidth (BW) was performed to find the appropriate range for topology optimization. Various techniques were employed such as a filtering scheme, simple average scheme and harmony rate. To provide a robust optimized topology, the concept of the harmony rate update rule was also implemented. Numerical examples are provided to verify the effectiveness of the HS by comparing the optimal layouts of the HS with those of Bidirectional evolutionary structural optimization (BESO) and Artificial bee colony algorithm (ABCA). The following conclusions could be made: (1) The proposed topology scheme is very effective for static stiffness topology optimization problems in terms of stability, robustness and convergence rate. (2) The suggested method provides a symmetric optimized topology despite the fact that the HS is a stochastic method like the ABCA. (3) The proposed scheme is applicable and practical in manufacturing since it produces a solid-void design of the optimized topology. (4) The suggested method appears to be very effective for large scale problems like topology optimization.

  15. A Method to Measure the Bracelet Based on Feature Energy

    Science.gov (United States)

    Liu, Hongmin; Li, Lu; Wang, Zhiheng; Huo, Zhanqiang

    2017-12-01

    To measure bracelets automatically, a novel method based on feature energy is proposed. Firstly, a morphological method is utilized to preprocess the image, and the contour, consisting of concentric circles, is extracted. Then, a feature energy function, which depends on the distances from one pixel to the edge points, is defined taking into account the geometric properties of the concentric circles. The input image is subsequently transformed into a feature energy distribution map (FEDM) by computing the feature energy of each pixel. The center of the concentric circles is thus located by detecting the maximum on the FEDM; meanwhile, the radii of the concentric circles are determined according to the feature energy function of the center pixel. Finally, with the use of a calibration template, the internal diameter and thickness of the bracelet are measured. The experimental results show that the proposed method can measure the true sizes of a bracelet accurately, with simplicity, directness and robustness compared to existing methods.

  16. Transit Traffic Analysis Zone Delineating Method Based on Thiessen Polygon

    Directory of Open Access Journals (Sweden)

    Shuwei Wang

    2014-04-01

    Full Text Available A green transportation system composed of transit, buses and bicycles could be significant in alleviating traffic congestion. However, the inaccuracy of current transit ridership forecasting methods is imposing a negative impact on the development of urban transit systems. Traffic Analysis Zone (TAZ) delineation is a fundamental and essential step in ridership forecasting, but the existing delineation method in four-step models has problems reflecting the travel characteristics of urban transit. This paper aims to develop a Transit Traffic Analysis Zone delineation method as a supplement to traditional TAZs in transit service analysis. The deficiencies of current TAZ delineation methods were analyzed, and the requirements of Transit Traffic Analysis Zones (TTAZs) were summarized. Considering these requirements, Thiessen polygons were introduced into TTAZ delineation. In order to validate its feasibility, Beijing was then taken as an example to delineate TTAZs, followed by a spatial analysis of office buildings within a TTAZ and of transit station departure passengers. The analysis results show that TTAZs based on Thiessen polygons can reflect transit travel characteristics and merit further research.
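
    A minimal sketch of the Thiessen-polygon construction with scipy is given below; the station coordinates are invented, and in practice each unbounded cell would be clipped to the study area.

    ```python
    import numpy as np
    from scipy.spatial import Voronoi

    # Toy transit-station coordinates (invented); each Thiessen (Voronoi) polygon
    # around a station becomes a candidate Transit Traffic Analysis Zone (TTAZ).
    stations = np.array([[0.0, 0.0], [2.0, 0.5], [1.0, 2.0], [3.0, 2.5], [0.5, 3.0]])
    vor = Voronoi(stations)

    for i, region_idx in enumerate(vor.point_region):
        region = vor.regions[region_idx]
        if -1 in region:   # unbounded cell: clip to the study-area boundary in practice
            print(f"station {i}: unbounded Thiessen polygon")
        else:
            print(f"station {i}: vertices {vor.vertices[region]}")
    ```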

  17. Evaluation and Comparison of Extremal Hypothesis-Based Regime Methods

    Directory of Open Access Journals (Sweden)

    Ishwar Joshi

    2018-03-01

    Full Text Available Regime channels are important for stable canal design and for determining river response to environmental changes, e.g., due to the construction of a dam, land use change, or climate shifts. A plethora of methods is available describing the hydraulic geometry of alluvial rivers in regime. However, a comparison of these methods using the same set of data seems to be lacking. In this study, we evaluate and compare four different extremal hypothesis-based regime methods, namely minimization of Froude number (MFN), maximum entropy and minimum energy dissipation rate (ME and MEDR), maximum flow efficiency (MFE), and Millar’s method, by dividing regime channel data into sand and gravel beds. The results show that for sand bed channels MFN gives a very high accuracy of prediction for regime channel width and depth. For gravel bed channels we find that MFN and ‘ME and MEDR’ give a very high accuracy of prediction for width and depth. Therefore the notion that extremal hypotheses which do not contain bank stability criteria are inappropriate for use is shown to be false, as both MFN and ‘ME and MEDR’ lack bank stability criteria. We also find that bank vegetation has a significant influence on the prediction of hydraulic geometry by MFN and ‘ME and MEDR’.

  18. Hybrid Fundamental Solution Based Finite Element Method: Theory and Applications

    Directory of Open Access Journals (Sweden)

    Changyong Cao

    2015-01-01

    Full Text Available An overview of the development of the hybrid fundamental solution based finite element method (HFS-FEM) and its application to engineering problems is presented in this paper. The framework and formulations of HFS-FEM for the potential problem, plane elasticity, three-dimensional elasticity, thermoelasticity, anisotropic elasticity, and plane piezoelectricity are presented. In this method, two independent assumed fields (an intra-element field and an auxiliary frame field) are employed. The formulations for all cases are derived from modified variational functionals and the fundamental solutions to a given problem. The generation of elemental stiffness equations from the modified variational principle is also described. Typical numerical examples are given to demonstrate the validity and performance of the HFS-FEM. Finally, a brief summary of the approach is provided and future trends in this field are identified.

  19. An Aerial Video Stabilization Method Based on SURF Feature

    Directory of Open Access Journals (Sweden)

    Wu Hao

    2016-01-01

    Full Text Available Video captured by a Micro Aerial Vehicle is often degraded by unexpected random trembling and jitter caused by wind and the shake of the aerial platform. An approach for stabilizing aerial video based on SURF features and a Kalman filter is proposed. SURF feature points are extracted in each frame, and the feature points between adjacent frames are matched using the Fast Library for Approximate Nearest Neighbors search method. Then the Random Sample Consensus matching algorithm and the Least Squares Method are used to remove mismatched point pairs and estimate the transformation between adjacent images. Finally, a Kalman filter is applied to smooth the motion parameters and separate intentional motion from unwanted motion to stabilize the aerial video. Experimental results show that the approach can stabilize aerial video efficiently with high accuracy, and that it is robust to translation, rotation and zooming motion of the camera.
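
    A condensed sketch of the pipeline is given below, with ORB standing in for SURF (SURF is non-free in many OpenCV builds) and a scalar constant-position Kalman filter for the smoothing step; all parameter values are assumptions.

    ```python
    import cv2
    import numpy as np

    detector = cv2.ORB_create(1000)                        # ORB stands in for SURF
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

    def estimate_shift(prev_gray, gray):
        """Estimate inter-frame translation and rotation from matched features;
        assumes enough good matches exist between the two frames."""
        k1, d1 = detector.detectAndCompute(prev_gray, None)
        k2, d2 = detector.detectAndCompute(gray, None)
        matches = matcher.match(d1, d2)
        p1 = np.float32([k1[m.queryIdx].pt for m in matches])
        p2 = np.float32([k2[m.trainIdx].pt for m in matches])
        M, _ = cv2.estimateAffinePartial2D(p1, p2, method=cv2.RANSAC)  # RANSAC rejects outliers
        return M[0, 2], M[1, 2], np.arctan2(M[1, 0], M[0, 0])          # dx, dy, da

    class Kalman1D:
        """Scalar constant-position Kalman filter to smooth one motion parameter."""
        def __init__(self, q=1e-3, r=1e-1):
            self.x, self.p, self.q, self.r = 0.0, 1.0, q, r
        def update(self, z):
            self.p += self.q                    # predict
            k = self.p / (self.p + self.r)      # Kalman gain
            self.x += k * (z - self.x)          # correct with the new measurement
            self.p *= (1.0 - k)
            return self.x
    ```

    A full stabilizer would accumulate (dx, dy, da) into a trajectory, smooth each component with one such filter, and warp every frame by the difference between the raw and smoothed trajectories (e.g., with cv2.warpAffine).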

  20. Image based method for aberration measurement of lithographic tools

    Science.gov (United States)

    Xu, Shuang; Tao, Bo; Guo, Yongxing; Li, Gongfa

    2018-01-01

    Information about the lens aberration of lithographic tools is important, as it directly affects the intensity distribution in the image plane. Zernike polynomials are commonly used for a mathematical description of lens aberrations. Due to their lower cost and easier implementation, image based measurement techniques have been widely used. Lithographic tools are typically partially coherent systems that can be described by a bilinear model, which entails time consuming calculations and does not yield a simple and intuitive relationship between lens aberrations and the resulting images. Previous methods for retrieving lens aberrations in such partially coherent systems involve through-focus image measurements and time-consuming iterative algorithms. In this work, we propose a method for aberration measurement in lithographic tools which only requires measuring two images of the intensity distribution. Two linear formulations are derived in matrix form that directly relate the measured images to the unknown Zernike coefficients. Consequently, an efficient non-iterative solution is obtained.
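
    To make the Zernike description concrete, the sketch below evaluates a few low-order Zernike polynomials on the unit disk and least-squares fits their coefficients to a synthetic wavefront; the ordering, normalisation and sample wavefront are assumptions, not the paper's formulation.

    ```python
    import numpy as np

    def zernike_basis(rho, phi):
        """A few low-order Zernike polynomials on the unit disk (illustrative
        ordering and normalisation, not necessarily the paper's convention)."""
        return np.stack([
            np.ones_like(rho),                        # piston
            2 * rho * np.cos(phi),                    # tilt x
            2 * rho * np.sin(phi),                    # tilt y
            np.sqrt(3) * (2 * rho**2 - 1),            # defocus
            np.sqrt(6) * rho**2 * np.cos(2 * phi),    # astigmatism 0/90
            np.sqrt(6) * rho**2 * np.sin(2 * phi),    # astigmatism 45
        ], axis=-1)

    # Least-squares fit of Zernike coefficients to a sampled wavefront.
    n = 64
    y, x = np.mgrid[-1:1:n * 1j, -1:1:n * 1j]
    rho, phi = np.hypot(x, y), np.arctan2(y, x)
    mask = rho <= 1.0
    A = zernike_basis(rho[mask], phi[mask])                    # design matrix on pupil
    wavefront = 0.05 * np.sqrt(3) * (2 * rho[mask]**2 - 1)     # synthetic pure defocus
    coeffs, *_ = np.linalg.lstsq(A, wavefront, rcond=None)
    print(coeffs.round(3))                                     # defocus term ≈ 0.05
    ```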

  1. Improved artificial bee colony algorithm based gravity matching navigation method.

    Science.gov (United States)

    Gao, Wei; Zhao, Bo; Zhou, Guang Tao; Wang, Qiu Ying; Yu, Chun Yang

    2014-07-18

Gravity matching navigation algorithm is one of the key technologies for gravity aided inertial navigation systems. With the development of intelligent algorithms, the powerful search ability of the Artificial Bee Colony (ABC) algorithm makes it applicable to the gravity matching navigation field. However, the existing search mechanisms of basic ABC algorithms cannot meet the need for high accuracy in gravity aided navigation. Firstly, proper modifications are proposed to improve the performance of the basic ABC algorithm. Secondly, a new search mechanism is presented in this paper which is based on an improved ABC algorithm using external speed information. Finally, the modified Hausdorff distance is introduced to screen the possible matching results. Both simulations and ocean experiments verify the feasibility of the method, and the results show that the matching rate of the method is high enough to obtain a precise matching position.
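
    For orientation, a generic ABC minimizer (employed, onlooker and scout phases) is sketched below. The paper's modified search mechanism, external speed information and gravity-map matching cost are not reproduced; the cost function and parameters are illustrative.

```python
# Generic Artificial Bee Colony minimizer, as a hedged sketch.
import numpy as np

def abc_minimize(cost, bounds, n_food=20, limit=30, iters=200, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds).T
    dim = len(lo)
    foods = rng.uniform(lo, hi, (n_food, dim))
    fit = np.array([cost(x) for x in foods])
    trials = np.zeros(n_food)
    for _ in range(iters):
        # Onlookers prefer better food sources (lower cost -> higher weight).
        prob = fit.max() - fit + 1e-12
        prob /= prob.sum()
        for i in list(range(n_food)) + list(rng.choice(n_food, n_food, p=prob)):
            k = rng.integers(n_food)                 # random peer
            j = rng.integers(dim)                    # perturb one dimension
            cand = foods[i].copy()
            cand[j] += rng.uniform(-1, 1) * (foods[i][j] - foods[k][j])
            cand = np.clip(cand, lo, hi)
            c = cost(cand)
            if c < fit[i]:
                foods[i], fit[i], trials[i] = cand, c, 0
            else:
                trials[i] += 1
        # Scout phase: abandon exhausted sources.
        for i in np.where(trials > limit)[0]:
            foods[i] = rng.uniform(lo, hi)
            fit[i], trials[i] = cost(foods[i]), 0
    best = fit.argmin()
    return foods[best], fit[best]
```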

  2. A Learning-Based Steganalytic Method against LSB Matching Steganography

    Directory of Open Access Journals (Sweden)

    Z. Xia

    2011-04-01

Full Text Available This paper considers the detection of spatial domain least significant bit (LSB) matching steganography in gray images. Natural images hold some inherent properties, such as histogram, dependence between neighboring pixels, and dependence among pixels that are not adjacent to each other. These properties are likely to be disturbed by LSB matching. Firstly, the histogram will become smoother after LSB matching. Secondly, the two kinds of dependence will be weakened by the message embedding. Accordingly, three features, which are respectively based on the image histogram, neighborhood degree histogram and run-length histogram, are extracted at first. Then, a support vector machine is utilized to learn and discriminate the difference of features between cover and stego images. Experimental results prove that the proposed method possesses reliable detection ability and outperforms the two previous state-of-the-art methods. Furthermore, conclusions are drawn by analyzing the individual performance of the three features and their fused feature.
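
    A simplified sketch of such a detector, assuming scikit-learn: only the histogram-based feature family is illustrated (the neighborhood degree and run-length histogram features are analogous), and the particular feature definitions are ours, not the paper's.

```python
# Simplified learning-based steganalysis: histogram features + SVM.
import numpy as np
from sklearn.svm import SVC

def histogram_features(img, k=8):
    """LSB matching smooths the histogram; summarize that with a few stats."""
    h, _ = np.histogram(img, bins=256, range=(0, 256), density=True)
    d1 = np.abs(np.diff(h))            # local roughness of the histogram
    H = np.abs(np.fft.rfft(h))         # high frequencies shrink after embedding
    return np.concatenate([[d1.sum(), d1.var()], H[-k:]])

def train_detector(cover_imgs, stego_imgs):
    X = np.array([histogram_features(im) for im in cover_imgs + stego_imgs])
    y = np.array([0] * len(cover_imgs) + [1] * len(stego_imgs))
    clf = SVC(kernel="rbf", C=10.0, gamma="scale").fit(X, y)
    return clf   # clf.predict(histogram_features(img)[None]) labels new images
```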

  3. Analysis of equivalent antenna based on FDTD method

    Directory of Open Access Journals (Sweden)

    Yun-xing Yang

    2014-09-01

Full Text Available An equivalent microstrip antenna used in a radio proximity fuse is presented. The design of this antenna is based on a multilayer multi-permittivity dielectric substrate which is analyzed by the finite difference time domain (FDTD) method. The equivalent iterative formula is modified for the cylindrical coordinate system. A mixed substrate which contains two kinds of media (one of them is air) takes the place of the original single substrate. The results of the equivalent antenna simulation show that the resonant frequency of the equivalent antenna is similar to that of the original antenna. The validity of the analysis can be validated by means of the antenna resonant frequency formula. The two antennas have the same radiation pattern and similar gain. This method can be used to reduce the weight of the antenna, which is significant for the design of missile-borne antennas.
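
    The paper's cylindrical, multi-layer formulation is involved, but the underlying leapfrog update is standard. A textbook one-dimensional FDTD loop in normalized units (Courant number 1) is sketched below purely for orientation.

```python
# Textbook 1D FDTD update loop (Yee leapfrog scheme), normalized units.
import numpy as np

n, steps = 400, 1000
eps_r = np.ones(n)
eps_r[200:] = 2.2            # a dielectric half-space, as an example
ez = np.zeros(n)             # electric field on integer grid points
hy = np.zeros(n - 1)         # magnetic field, staggered half a cell

for t in range(steps):
    hy += np.diff(ez)                          # H update
    ez[1:-1] += np.diff(hy) / eps_r[1:-1]      # E update scaled by permittivity
    ez[50] += np.exp(-((t - 60) / 15.0) ** 2)  # soft Gaussian source
```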

  4. Application of DNA-based methods in forensic entomology.

    Science.gov (United States)

    Wells, Jeffrey D; Stevens, Jamie R

    2008-01-01

    A forensic entomological investigation can benefit from a variety of widely practiced molecular genotyping methods. The most commonly used is DNA-based specimen identification. Other applications include the identification of insect gut contents and the characterization of the population genetic structure of a forensically important insect species. The proper application of these procedures demands that the analyst be technically expert. However, one must also be aware of the extensive list of standards and expectations that many legal systems have developed for forensic DNA analysis. We summarize the DNA techniques that are currently used in, or have been proposed for, forensic entomology and review established genetic analyses from other scientific fields that address questions similar to those in forensic entomology. We describe how accepted standards for forensic DNA practice and method validation are likely to apply to insect evidence used in a death or other forensic entomological investigation.

  5. HAM-Based Adaptive Multiscale Meshless Method for Burgers Equation

    Directory of Open Access Journals (Sweden)

    Shu-Li Mei

    2013-01-01

Full Text Available Based on the multilevel interpolation theory, we constructed a meshless adaptive multiscale interpolation operator (MAMIO) with the radial basis function. Using this operator, any nonlinear partial differential equation such as Burgers equation can be discretized adaptively in physical space as a nonlinear matrix ordinary differential equation. In order to obtain the analytical solution of the system of ODEs, the homotopy analysis method (HAM) proposed by Shijun Liao was developed to solve the system of ODEs by combining it with the precise integration method (PIM), which can be employed to get the analytical solution of a linear system of ODEs. The numerical experiments show that HAM is not sensitive to the time step, and so the arithmetic error mainly derives from the discretization in physical space.
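
    The step this rests on can be shown compactly: spatial discretization turns Burgers' equation u_t + u u_x = nu u_xx into a nonlinear ODE system du/dt = F(u). The sketch below uses a plain uniform grid and a standard stiff integrator in place of the adaptive multiscale operator and HAM.

```python
# Method-of-lines discretization of Burgers' equation, as a hedged sketch.
import numpy as np
from scipy.integrate import solve_ivp

n, nu = 200, 0.01
x = np.linspace(0.0, 1.0, n)
dx = x[1] - x[0]

def rhs(t, u):
    du = np.zeros_like(u)
    ux = (u[2:] - u[:-2]) / (2 * dx)                  # central first derivative
    uxx = (u[2:] - 2 * u[1:-1] + u[:-2]) / dx**2      # second derivative
    du[1:-1] = -u[1:-1] * ux + nu * uxx               # interior; u=0 at ends
    return du

sol = solve_ivp(rhs, (0.0, 0.5), np.sin(np.pi * x), method="BDF", rtol=1e-6)
```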

  6. Improved GIS-based Methods for Traffic Noise Impact Assessment

    DEFF Research Database (Denmark)

    Nielsen, Otto Anker; Bloch, Karsten Sand

    1996-01-01

When vector-based GIS-packages are used for traffic noise impact assessments, the buffer-technique is usually employed for the study: 1. For each road segment, buffer-zones representing different noise-intervals are generated; 2. The buffers from all road segments are smoothed together; and 3. The number of buildings within the buffers is enumerated. This technique provides an inaccurate assessment of the noise diffusion since it does not correct for the barrier and reflection effects of buildings on noise. The paper presents the results from a research project where the traditional noise buffer technique was compared with a new method which includes these corrections. Both methods follow the Common Nordic Noise Calculation Model, although the traditional buffer technique ignores parts of the model. The basis for the work was a digital map of roads and building polygons, combined with a traffic- and road…

  7. An FPGA-based heterogeneous image fusion system design method

    Science.gov (United States)

    Song, Le; Lin, Yu-chi; Chen, Yan-hua; Zhao, Mei-rong

    2011-08-01

Taking advantage of the FPGA's low cost and compact structure, an FPGA-based heterogeneous image fusion platform is established in this study. Altera's Cyclone IV series FPGA is adopted as the core processor of the platform, and a visible light CCD camera and an infrared thermal imager are used as the image-capturing devices in order to obtain dual-channel heterogeneous video images. Tailor-made image fusion algorithms such as gray-scale weighted averaging, maximum selection and minimum selection methods are analyzed and compared. VHDL and the synchronous design method are utilized to produce a reliable RTL-level description. Altera's Quartus II 9.0 software is applied to simulate and implement the algorithm modules. The contrast experiments of various fusion algorithms show that preferable image quality of heterogeneous image fusion can be obtained with the proposed system. The applicable range of the different fusion algorithms is also discussed.
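
    The three pixel-level fusion rules compared in the paper are easy to state precisely. A NumPy sketch for registered, equally sized grayscale frames follows; the FPGA/VHDL implementation details are not reproduced.

```python
# The three pixel-level fusion rules, sketched for registered frames.
import numpy as np

def fuse(visible, infrared, mode="weighted", w=0.5):
    a = visible.astype(np.float32)
    b = infrared.astype(np.float32)
    if mode == "weighted":      # gray-scale weighted averaging
        out = w * a + (1.0 - w) * b
    elif mode == "max":         # maximum selection
        out = np.maximum(a, b)
    else:                       # minimum selection
        out = np.minimum(a, b)
    return np.clip(out, 0, 255).astype(np.uint8)
```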

  8. Method to implement the CCD timing generator based on FPGA

    Science.gov (United States)

    Li, Binhua; Song, Qian; He, Chun; Jin, Jianhui; He, Lin

    2010-07-01

With the advance of FPGA technology, the design methodology of digital systems is changing. In recent years we have developed a method to implement the CCD timing generator based on FPGA and VHDL. This paper presents the principles and implementation skills of the method. Taking a developed camera as an example, we introduce the structure and the input and output clocks/signals of a timing generator implemented in the camera. The generator is composed of a top module and a bottom module. The bottom one is made up of 4 sub-modules which correspond to 4 different operation modes. The modules are implemented by 5 VHDL programs. Frame charts of the architecture of these programs are shown in the paper. We also describe the implementation steps of the timing generator in Quartus II, and the interconnections between the generator and a Nios soft core processor which is the controller of this generator. Some test results are presented at the end.

  9. Comparison of three sensory profiling methods based on consumer perception

    DEFF Research Database (Denmark)

    Reinbach, Helene Christine; Giacalone, Davide; Ribeiro, Letícia Machado

    2014-01-01

The present study compares three profiling methods based on consumer perceptions in their ability to discriminate and describe eight beers. Consumers (N=135) evaluated eight different beers using Check-All-That-Apply (CATA) methodology in two variations, with (n=63) and without (n=73) rating the intensity of the checked descriptors. With CATA, consumers rated 38 descriptors grouped in 7 overall categories (berries, floral, hoppy, nutty, roasted, spicy/herbal and woody). Additionally, 40 of the consumers evaluated the same samples by partial Napping® followed by Ultra Flash Profiling (UFP). ANOVA … In comparisons, the RV coefficients varied between 0.90 and 0.97, indicating a very high similarity between all three methods. These results show that the precision and reproducibility of sensory information obtained by consumers by CATA is comparable to that of Napping. The choice of methodology for consumer…

  10. STAR-BASED METHODS FOR PLEIADES HR COMMISSIONING

    Directory of Open Access Journals (Sweden)

    S. Fourest

    2012-07-01

Full Text Available PLEIADES is the highest resolution civilian earth observing system ever developed in Europe. This imagery program is conducted by the French National Space Agency, CNES. Since 2012 it has operated a first satellite, PLEIADES-HR, launched on 17 December 2011; a second one should be launched by the end of the year. Each satellite is designed to provide optical 70 cm resolution colored images to civilian and defense users. Thanks to the extreme agility of the satellite, new calibration methods have been tested, based on the observation of celestial bodies, and stars in particular. It has thus been made possible to perform MTF measurement, re-focusing, geometrical bias and focal plane assessment, absolute calibration, ghost image localization, micro-vibration measurement, etc. Starting from an overview of the star acquisition process, this paper discusses the methods and presents the results obtained during the first four months of the commissioning phase.

  11. Dominant partition method. [based on a wave function formalism

    Science.gov (United States)

    Dixon, R. M.; Redish, E. F.

    1979-01-01

    By use of the L'Huillier, Redish, and Tandy (LRT) wave function formalism, a partially connected method, the dominant partition method (DPM) is developed for obtaining few body reductions of the many body problem in the LRT and Bencze, Redish, and Sloan (BRS) formalisms. The DPM maps the many body problem to a fewer body one by using the criterion that the truncated formalism must be such that consistency with the full Schroedinger equation is preserved. The DPM is based on a class of new forms for the irreducible cluster potential, which is introduced in the LRT formalism. Connectivity is maintained with respect to all partitions containing a given partition, which is referred to as the dominant partition. Degrees of freedom corresponding to the breakup of one or more of the clusters of the dominant partition are treated in a disconnected manner. This approach for simplifying the complicated BRS equations is appropriate for physical problems where a few body reaction mechanism prevails.

  12. A GPU-based mipmapping method for water surface visualization

    Science.gov (United States)

    Li, Hua; Quan, Wei; Xu, Chao; Wu, Yan

    2018-03-01

Visualization of water surfaces is a hot topic in computer graphics. In this paper, we present a fast method to generate a wide range of water surface with good image quality both near and far from the viewpoint. This method utilizes a uniform mesh and fractal Perlin noise to model the water surface. Mipmapping technology is applied to the surface textures, which adjusts the resolution with respect to the distance from the viewpoint and reduces the computing cost. The lighting effect is computed based on shadow mapping technology, Snell's law and the Fresnel term. The render pipeline utilizes a CPU-GPU shared memory structure, which improves the rendering efficiency. Experimental results show that our approach visualizes the water surface with good image quality at real-time frame rates.
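
    A hedged sketch of the mipmapping idea: build a pyramid of half-resolution textures by 2x2 averaging and pick a level from viewer distance. The level-selection rule and constants below are illustrative; real GPU pipelines select levels from screen-space derivatives.

```python
# Mipmap pyramid construction and a toy level-of-detail selection rule.
import numpy as np

def build_mipmaps(tex):
    levels = [tex.astype(np.float32)]
    while min(levels[-1].shape[:2]) > 1:
        t = levels[-1]
        h, w = t.shape[0] // 2 * 2, t.shape[1] // 2 * 2   # crop to even size
        levels.append(0.25 * (t[0:h:2, 0:w:2] + t[1:h:2, 0:w:2] +
                              t[0:h:2, 1:w:2] + t[1:h:2, 1:w:2]))
    return levels

def select_level(distance, base_distance=10.0, n_levels=10):
    lod = max(0.0, np.log2(max(distance, 1e-6) / base_distance))
    return min(int(lod), n_levels - 1)
```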

  13. Utility of Combining a Simulation-Based Method With a Lecture-Based Method for Fundoscopy Training in Neurology Residency.

    Science.gov (United States)

    Gupta, Deepak K; Khandker, Namir; Stacy, Kristin; Tatsuoka, Curtis M; Preston, David C

    2017-10-01

    Fundoscopic examination is an essential component of the neurologic examination. Competence in its performance is mandated as a required clinical skill for neurology residents by the American Council of Graduate Medical Education. Government and private insurance agencies require its performance and documentation for moderate- and high-level neurologic evaluations. Traditionally, assessment and teaching of this key clinical examination technique have been difficult in neurology residency training. To evaluate the utility of a simulation-based method and the traditional lecture-based method for assessment and teaching of fundoscopy to neurology residents. This study was a prospective, single-blinded, education research study of 48 neurology residents recruited from July 1, 2015, through June 30, 2016, at a large neurology residency training program. Participants were equally divided into control and intervention groups after stratification by training year. Baseline and postintervention assessments were performed using questionnaire, survey, and fundoscopy simulators. After baseline assessment, both groups initially received lecture-based training, which covered fundamental knowledge on the components of fundoscopy and key neurologic findings observed on fundoscopic examination. The intervention group additionally received simulation-based training, which consisted of an instructor-led, hands-on workshop that covered practical skills of performing fundoscopic examination and identifying neurologically relevant findings on another fundoscopy simulator. The primary outcome measures were the postintervention changes in fundoscopy knowledge, skills, and total scores. A total of 30 men and 18 women were equally distributed between the 2 groups. The intervention group had significantly higher mean (SD) increases in skills (2.5 [2.3] vs 0.8 [1.8], P = .01) and total (9.3 [4.3] vs 5.3 [5.8], P = .02) scores compared with the control group. Knowledge scores (6.8 [3

  14. Hybrid method based on embedded coupled simulation of vortex particles in grid based solution

    Science.gov (United States)

    Kornev, Nikolai

    2017-09-01

The paper presents a novel hybrid approach developed to improve the resolution of concentrated vortices in computational fluid mechanics. The method is based on a combination of a grid-based method and the grid-free computational vortex method (CVM). The large scale flow structures are simulated on the grid whereas the concentrated structures are modeled using CVM. Due to this combination the advantages of both methods are strengthened whereas the disadvantages are diminished. The procedure for separating small concentrated vortices from the large scale ones is based on the LES filtering idea. The flow dynamics is governed by two coupled transport equations taking the two-way interaction between large and fine structures into account. The fine structures are mapped back to the grid if their size grows due to diffusion. Algorithmic aspects of the hybrid method are discussed. Advantages of the new approach are illustrated on some simple two-dimensional canonical flows containing concentrated vortices.

  15. A New Method Based on TOPSIS and Response Surface Method for MCDM Problems with Interval Numbers

    Directory of Open Access Journals (Sweden)

    Peng Wang

    2015-01-01

Full Text Available As the preference of the decision maker (DM) is always ambiguous, we have to face many multiple criteria decision-making (MCDM) problems with interval numbers in our daily life. Though there have been some methods applied to solve this sort of problem, they are often complex to comprehend and sometimes difficult to implement. The calculation processes are inefficient when a new alternative is added or removed. In view of weaknesses like these, this paper presents a new method based on TOPSIS and the response surface method (RSM) for MCDM problems with interval numbers, RSM-TOPSIS-IN for short. The key point of this approach is the application of a deviation degree matrix, which ensures that the DM can get a simple response surface (RS) model to rank the alternatives. In order to demonstrate the feasibility and effectiveness of the proposed method, three illustrative MCDM problems with interval numbers are analysed, including (a) selection of an investment program, (b) selection of a right partner, and (c) assessment of road transport technologies. The comparison of ranking results shows that the RSM-TOPSIS-IN method is in good agreement with those derived by earlier researchers, indicating that it is suitable for solving MCDM problems with interval numbers.
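
    For reference, the crisp TOPSIS core that RSM-TOPSIS-IN extends is sketched below; interval numbers, the deviation degree matrix and the RS model are omitted, and the example data are invented.

```python
# Core crisp TOPSIS ranking (the base the RSM-TOPSIS-IN method extends).
import numpy as np

def topsis(X, weights, benefit):
    """X: alternatives x criteria; weights sum to 1; benefit[j] is True if
    criterion j is to be maximized."""
    V = X / np.linalg.norm(X, axis=0) * weights          # weighted, normalized
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    anti  = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_pos = np.linalg.norm(V - ideal, axis=1)
    d_neg = np.linalg.norm(V - anti, axis=1)
    closeness = d_neg / (d_pos + d_neg)
    return np.argsort(-closeness), closeness             # best alternative first

# Invented example: 3 alternatives, criteria = (quality+, cost-, risk+).
scores = np.array([[7.0, 300, 0.4], [8.5, 450, 0.3], [6.0, 250, 0.5]])
rank, c = topsis(scores, np.array([0.5, 0.3, 0.2]),
                 np.array([True, False, True]))
```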

  16. Method-Based Higher Education in Sustainability: The Potential of the Scenario Method

    Directory of Open Access Journals (Sweden)

    Richard Beecroft

    2014-05-01

Full Text Available Both sustainability and education are challenging process-oriented objectives. When the aim is to combine both notions, as in Higher Education in Sustainability (HES), it is indispensable to first establish a common ground between them. In this paper, we characterise this common ground in terms of four aspects: future orientation, normativity, global perspective, and theory engaged in practice. Based on an analysis of the common ground, one method that is well-established in a wide range of sustainability sciences shows high potential for use in HES because it covers all four aspects in detail: the scenario method. We argue that a didactical reconstruction of the scenario method is necessary to utilise its potential and develop adequate forms of teaching in higher education. The scenario method is used to construct and analyse a set of alternative future developments to support decisions that have to be made in the present. Didactical reconstruction reveals a spectrum of objectives for which the scenario method can be employed: (1) projection; (2) teleological planning; and (3) an explorative search for possibilities not yet considered. By studying and experimenting with this spectrum of objectives, students in HES can develop fundamental reflexive competencies in addressing the future in different ways that are relevant for both sustainability and education.

  17. Future Perspectives for Arts-Based Methods in Higher Education

    DEFF Research Database (Denmark)

    Chemi, Tatiana; Du, Xiangyun

    2018-01-01

practices around the world while, on the other, addressing the challenges that these practices meet. Disruptive strategies must be given opportunities for reflection and reflexive spaces, opportunities for learning and teaching the artistic languages. The chapters show that long-term, systematic… conversations between scholars and educators are needed, and that artists have a central role in the future developments of this field. Whether the artists are professional or amateur does not matter, but the craft and creativity of art practices in the flesh must lead any future direction of arts-based methods…

  18. Analysis advanced methods of data bases of industrial experience return

    International Nuclear Information System (INIS)

    Lannoy, A.; Procaccia, H.

    1994-05-01

This is a presentation, through different conceptions of databases on industrial experience feedback, of the principal methods for treatment and analysis of the collected data, ranging from frequency statistics and factorial analysis to Bayesian statistical decision theory, which is a real decision-support tool for managers, designers and operators. Examples in various fields are given (OREDA: Offshore REliability DAta bank for marine drilling platforms, CEDB: Component Event Data Bank for the european electric power industry, RDF 93: reliability of electronic components of ''France Telecom'', EVT: failure EVenTs data bank in the french nuclear power plants by ''EDF''). (A.B.). refs., figs., tabs

  19. An assembly sequence planning method based on composite algorithm

    Directory of Open Access Journals (Sweden)

    Enfu LIU

    2016-02-01

Full Text Available To solve the combination explosion problem and the blind searching problem in assembly sequence planning of complex products, an assembly sequence planning method based on a composite algorithm is proposed. In the composite algorithm, a sufficient number of feasible assembly sequences are generated using a formalized reasoning algorithm as the initial population of a genetic algorithm. Then fuzzy assembly knowledge is integrated into the planning process of the genetic algorithm and an ant algorithm to get the accurate solution. Finally, an example is given to verify the feasibility of the composite algorithm.
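
    A minimal permutation GA for sequencing is sketched below to fix ideas. The cost function is a stand-in, and the formalized-reasoning seeding, fuzzy assembly knowledge and ant-algorithm stage of the composite algorithm are not reproduced.

```python
# Minimal permutation GA for assembly sequencing, as a hedged sketch.
import random

def order_crossover(p1, p2):
    """Order crossover (OX): keep a slice of p1, fill the rest from p2."""
    a, b = sorted(random.sample(range(len(p1)), 2))
    child = [None] * len(p1)
    child[a:b] = p1[a:b]
    rest = [g for g in p2 if g not in child]
    for i in range(len(p1)):
        if child[i] is None:
            child[i] = rest.pop(0)
    return child

def ga_sequence(parts, cost, pop_size=50, gens=200):
    pop = [random.sample(parts, len(parts)) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=cost)                       # elitist selection
        survivors = pop[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(survivors):
            c = order_crossover(*random.sample(survivors, 2))
            if random.random() < 0.2:            # swap mutation
                i, j = random.sample(range(len(c)), 2)
                c[i], c[j] = c[j], c[i]
            children.append(c)
        pop = survivors + children
    return min(pop, key=cost)
```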

  20. Terahertz spectral unmixing based method for identifying gastric cancer

    Science.gov (United States)

    Cao, Yuqi; Huang, Pingjie; Li, Xian; Ge, Weiting; Hou, Dibo; Zhang, Guangxin

    2018-02-01

    At present, many researchers are exploring biological tissue inspection using terahertz time-domain spectroscopy (THz-TDS) techniques. In this study, based on a modified hard modeling factor analysis method, terahertz spectral unmixing was applied to investigate the relationships between the absorption spectra in THz-TDS and certain biomarkers of gastric cancer in order to systematically identify gastric cancer. A probability distribution and box plot were used to extract the distinctive peaks that indicate carcinogenesis, and the corresponding weight distributions were used to discriminate the tissue types. The results of this work indicate that terahertz techniques have the potential to detect different levels of cancer, including benign tumors and polyps.
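
    As a hedged illustration of linear spectral unmixing (not the paper's modified hard modeling factor analysis, which also estimates the component spectra themselves), abundances can be recovered by non-negative least squares when the component spectra are known:

```python
# Linear spectral unmixing of a THz absorption spectrum by NNLS.
import numpy as np
from scipy.optimize import nnls

def unmix(spectrum, components):
    """spectrum: (n_freqs,); components: (n_freqs, n_components)."""
    weights, residual = nnls(components, spectrum)
    return weights / weights.sum(), residual    # relative abundances

# Tissue types could then be discriminated by thresholding the weight
# attached to a cancer-associated biomarker spectrum.
```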

  1. Process identification method based on the Z transformation

    International Nuclear Information System (INIS)

    Zwingelstein, G.

    1968-01-01

A simple method is described for identifying the transfer function of a linear delay-free system, based on the inversion of the Z transformation of the transmittance using a computer. It is assumed in this study that the signals at the input and at the output of the circuit considered are of the deterministic type. The study includes: the theoretical principle of the inversion of the Z transformation, details of the programming of the simulation, and the identification of filters whose orders vary from first to fifth. (authors) [fr
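
    As a modern analogue of this identification task, a discrete transfer function can be fitted to deterministic input/output records by least squares (an ARX fit). The sketch below only shows the flavor of the problem, not the 1968 inversion procedure.

```python
# Fit  H(z) = (b0 + b1 z^-1) / (1 + a1 z^-1 + a2 z^-2)  by least squares.
import numpy as np

def fit_arx(u, y, na=2, nb=2):
    """u, y: input/output sample sequences; na, nb: model orders."""
    rows, rhs = [], []
    start = max(na, nb)
    for k in range(start, len(y)):
        # y[k] = -a1*y[k-1] - ... + b0*u[k] + b1*u[k-1] + ...
        rows.append(np.concatenate([-y[k - na:k][::-1],
                                    u[k - nb + 1:k + 1][::-1]]))
        rhs.append(y[k])
    theta, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    return theta[:na], theta[na:]   # denominator a_i, numerator b_i
```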

  2. A synthetic method of solar spectrum based on LED

    Science.gov (United States)

    Wang, Ji-qiang; Su, Shi; Zhang, Guo-yu; Zhang, Jian

    2017-10-01

A method for synthesizing the solar spectrum, based on the spectral characteristics of the solar spectrum and of LEDs and on the principle of arbitrary spectral synthesis, was studied using 14 kinds of LEDs with different central wavelengths. The LED and solar spectrum data were first selected with Origin software; then the total number of LEDs for each central band was calculated using the transformation relation between brightness and illuminance and a least squares curve fit in Matlab. Finally, the spectrum curve of the AM1.5 standard solar spectrum was obtained. The results met the technical indexes of solar spectrum matching within ±20% and a solar constant >0.5.
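
    The core computation amounts to fitting the AM1.5 target with a non-negative combination of LED spectra (LED counts cannot be negative). In the sketch below the Gaussian LED profiles and the mock target are illustrative placeholders, not the paper's data.

```python
# Non-negative least-squares fit of a target spectrum from LED spectra.
import numpy as np
from scipy.optimize import nnls

wl = np.linspace(350, 1100, 500)                 # wavelength grid, nm
centers = np.linspace(380, 1050, 14)             # 14 LED central wavelengths
leds = np.stack([np.exp(-((wl - c) / 20.0) ** 2) for c in centers], axis=1)

target = np.interp(wl, [350, 500, 700, 1100], [0.6, 1.0, 0.9, 0.4])  # mock AM1.5
counts, err = nnls(leds, target)                 # LED counts per band, >= 0
synth = leds @ counts
print(np.max(np.abs(synth - target) / target))   # check the ±20% criterion
```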

  3. Hybrid Modeling Method for a DEP Based Particle Manipulation

    Directory of Open Access Journals (Sweden)

    Mohamad Sawan

    2013-01-01

Full Text Available In this paper, a new modeling approach for Dielectrophoresis (DEP) based particle manipulation is presented. The proposed method fills missing links in finite element modeling between the multiphysics simulation and the biological behavior. This technique is among the first steps toward developing a more complex platform covering several types of manipulations such as magnetophoresis and optics. The modeling approach is based on a hybrid interface using both ANSYS and MATLAB to link the propagation of the electrical field in the micro-channel to the particle motion. ANSYS is used to simulate the electrical propagation while MATLAB interprets the results to calculate cell displacement and sends the new information to ANSYS for the next iteration. The beta version of the proposed technique takes into account particle shape, weight and electrical properties. The first results obtained are consistent with experimental results.
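
    For orientation, the classical dipole-approximation DEP force is the kind of relation the MATLAB side of such a coupling evaluates on the field exported from ANSYS. The one-dimensional field profile below is purely illustrative, and the real Clausius-Mossotti factor uses complex permittivities.

```python
# Classical dipole-approximation DEP force on a spherical particle:
#   F = 2*pi*r^3 * eps_m * Re(CM) * grad(|E_rms|^2)
import numpy as np

def clausius_mossotti(eps_p, eps_m):
    """Real-valued CM factor (complex permittivities in the full model)."""
    return (eps_p - eps_m) / (eps_p + 2 * eps_m)

def dep_force(r, eps_m, cm_real, e_rms_sq, dx):
    """r: particle radius (m); e_rms_sq: |E_rms|^2 sampled along one axis
    with spacing dx; returns force (N) per sample point."""
    grad = np.gradient(e_rms_sq, dx)
    return 2 * np.pi * r**3 * eps_m * cm_real * grad
```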

  4. Big data mining analysis method based on cloud computing

    Science.gov (United States)

    Cai, Qing Qiu; Cui, Hong Gang; Tang, Hao

    2017-08-01

In the era of information explosion, the super-large scale, discreteness and non- or semi-structured nature of big data have gone far beyond what traditional data management methods can handle. With the arrival of the cloud computing era, cloud computing provides a new technical way to analyze massive data mining, which can effectively solve the problem that traditional data mining methods cannot adapt to massive data. This paper introduces the meaning and characteristics of cloud computing, analyzes the advantages of using cloud computing technology to realize data mining, designs a mining algorithm for association rules based on the MapReduce parallel processing architecture, and carries out experimental verification. The parallel association rule mining algorithm based on a cloud computing platform can greatly improve the execution speed of data mining.
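
    The MapReduce decomposition behind parallel association-rule mining can be sketched in a few lines: mappers emit (itemset, 1) pairs per transaction and reducers sum the supports (the first pass of Apriori-style mining). A pure-Python simulation stands in for an actual cluster framework.

```python
# MapReduce-style support counting for candidate itemsets.
from itertools import combinations
from collections import defaultdict

def map_phase(transactions, k=2):
    """Mapper: emit (itemset, 1) for every k-itemset in each transaction."""
    for t in transactions:
        for itemset in combinations(sorted(t), k):
            yield itemset, 1

def reduce_phase(pairs):
    """Reducer: sum counts per itemset key."""
    support = defaultdict(int)
    for itemset, count in pairs:
        support[itemset] += count
    return support

tx = [{"milk", "bread"}, {"milk", "eggs"}, {"milk", "bread", "eggs"}]
print(reduce_phase(map_phase(tx)))   # e.g. ('bread', 'milk'): 2
```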

  5. Storm surge model based on variational data assimilation method

    Directory of Open Access Journals (Sweden)

    Shi-li Huang

    2010-06-01

    Full Text Available By combining computation and observation information, the variational data assimilation method has the ability to eliminate errors caused by the uncertainty of parameters in practical forecasting. It was applied to a storm surge model based on unstructured grids with high spatial resolution meant for improving the forecasting accuracy of the storm surge. By controlling the wind stress drag coefficient, the variation-based model was developed and validated through data assimilation tests in an actual storm surge induced by a typhoon. In the data assimilation tests, the model accurately identified the wind stress drag coefficient and obtained results close to the true state. Then, the actual storm surge induced by Typhoon 0515 was forecast by the developed model, and the results demonstrate its efficiency in practical application.
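
    A toy of the assimilation idea: choose the wind stress drag coefficient so that a (drastically simplified) surge model best matches observations. A real variational scheme minimizes the same kind of cost with an adjoint model over the unstructured grid; everything below is a stand-in.

```python
# Recovering a drag coefficient by minimizing a model-observation misfit.
import numpy as np
from scipy.optimize import minimize_scalar

wind = np.abs(np.sin(np.linspace(0, 3, 50))) * 25.0   # wind speed record, m/s

def surge_model(cd):
    """Toy surge setup proportional to Cd * U^2 (not a real surge model)."""
    return cd * 1.2 * wind**2 / 9.81

cd_true = 1.5e-3
obs = surge_model(cd_true) + np.random.default_rng(1).normal(0, 0.02, 50)

cost = lambda cd: np.sum((surge_model(cd) - obs) ** 2)
res = minimize_scalar(cost, bounds=(5e-4, 5e-3), method="bounded")
print(res.x)   # recovered Cd, close to 1.5e-3
```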

  6. Transistor-based particle detection systems and methods

    Science.gov (United States)

    Jain, Ankit; Nair, Pradeep R.; Alam, Muhammad Ashraful

    2015-06-09

Transistor-based particle detection systems and methods may be configured to detect charged and non-charged particles. Such systems may include a supporting structure contacting a gate of a transistor and separating the gate from a dielectric of the transistor, and the transistor may have a near pull-in bias and a sub-threshold region bias to facilitate particle detection. The transistor may be configured to change current flow through the transistor in response to a change in stiffness of the gate caused by securing of a particle to the gate, and the transistor-based particle detection system may be configured to detect the non-charged particle at least from the change in current flow.

  7. Sparse data structure design for wavelet-based methods

    Directory of Open Access Journals (Sweden)

    Latu Guillaume

    2011-12-01

Full Text Available This course gives an introduction to the design of efficient datatypes for adaptive wavelet-based applications. It presents some code fragments and benchmark techniques useful for learning about the design of sparse data structures and adaptive algorithms. Material and practical examples are given, and they provide a good introduction for anyone involved in the development of adaptive applications. An answer will be given to the question: how to implement and efficiently use the discrete wavelet transform in computer applications? A focus will be made on time-evolution problems, and the use of wavelet-based schemes for adaptively solving partial differential equations (PDEs). One crucial issue is that the benefits of the adaptive method in terms of algorithmic cost reduction must not be wasted by the overheads associated with sparse data management.
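
    The design question in miniature, as a hedged sketch: after a Haar wavelet transform most detail coefficients are negligible, so only the significant ones need to be stored, here in a dict keyed by (level, index).

```python
# One-level-at-a-time Haar transform plus a sparse coefficient store.
import numpy as np

def haar_decompose(signal, levels=3):
    coeffs, approx = [], signal.astype(float)
    for lev in range(levels):
        a = (approx[0::2] + approx[1::2]) / np.sqrt(2)   # approximations
        d = (approx[0::2] - approx[1::2]) / np.sqrt(2)   # details
        coeffs.append(d)
        approx = a
    return approx, coeffs

def sparsify(coeffs, eps=1e-3):
    table = {}
    for lev, d in enumerate(coeffs):
        for i, v in enumerate(d):
            if abs(v) > eps:
                table[(lev, i)] = v      # only significant details are kept
    return table

x = np.sin(np.linspace(0, 2 * np.pi, 64)) + (np.arange(64) == 40) * 0.5
approx, details = haar_decompose(x)
print(len(sparsify(details)), "of", sum(len(d) for d in details), "kept")
```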

  8. OCL-BASED TEST CASE GENERATION USING CATEGORY PARTITIONING METHOD

    Directory of Open Access Journals (Sweden)

    A. Jalila

    2015-10-01

Full Text Available The adoption of fault detection techniques during the initial stages of the software development life cycle helps to improve the reliability of a software product. Specification-based testing is one of the major approaches for detecting faults in the requirement specification or design of a software system. However, due to the non-availability of implementation details, test case generation from formal specifications becomes a challenging task. As a novel approach, the proposed work presents a methodology to generate test cases from OCL (Object Constraint Language) formal specifications using the Category Partitioning Method (CPM). The experimental results indicate that the proposed methodology is more effective in revealing specification-based faults. Furthermore, it has been observed that OCL and CPM form an excellent combination for performing functional testing at the earliest stage to improve software quality with reduced cost.
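
    The combinatorial heart of the Category Partition Method is the Cartesian product of category choices filtered by constraints. The categories below, for a hypothetical withdraw operation specified in OCL, are invented for illustration.

```python
# Enumerating CPM test frames as a constrained Cartesian product.
from itertools import product

categories = {
    "balance": ["zero", "positive"],
    "amount":  ["negative", "zero", "within balance", "exceeds balance"],
    "account": ["active", "frozen"],
}

def feasible(frame):
    # Constraint: a zero balance can never make an amount 'within balance'.
    return not (frame["balance"] == "zero" and
                frame["amount"] == "within balance")

frames = [dict(zip(categories, combo)) for combo in product(*categories.values())]
test_frames = [f for f in frames if feasible(f)]
print(len(test_frames), "test frames")   # 14 of 16 raw combinations survive
```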

  9. A PBOM configuration and management method based on templates

    Science.gov (United States)

    Guo, Kai; Qiao, Lihong; Qie, Yifan

    2018-03-01

The design of the Process Bill of Materials (PBOM) holds a pivotal position in the process of product development. The requirements of PBOM configuration design and management for complex products are analysed in this paper, including the reuse of configuration procedures and the pressing need to manage large quantities of product family PBOM data. Based on this analysis, a function framework for PBOM configuration and management has been established. Configuration templates and modules are defined in the framework to support the customization and reuse of the configuration process. The configuration process of a detection sensor PBOM is given as an illustrative case at the end. Rapid and agile PBOM configuration and management can be achieved using the template-based method, which is of vital significance for improving development efficiency for complex products.

  10. Biosensor method and system based on feature vector extraction

    Science.gov (United States)

Greenbaum, Elias [Knoxville, TN]; Rodriguez, Jr., Miguel; Qi, Hairong [Knoxville, TN]; Wang, Xiaoling [San Jose, CA]

    2012-04-17

    A method of biosensor-based detection of toxins comprises the steps of providing at least one time-dependent control signal generated by a biosensor in a gas or liquid medium, and obtaining a time-dependent biosensor signal from the biosensor in the gas or liquid medium to be monitored or analyzed for the presence of one or more toxins selected from chemical, biological or radiological agents. The time-dependent biosensor signal is processed to obtain a plurality of feature vectors using at least one of amplitude statistics and a time-frequency analysis. At least one parameter relating to toxicity of the gas or liquid medium is then determined from the feature vectors based on reference to the control signal.
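
    A hedged sketch of the feature-vector step, assuming NumPy/SciPy: summarize the monitored time-dependent signal with amplitude statistics and coarse time-frequency energies, then score its deviation from the control signal's features. The window sizes and the deviation score are ours, not the patent's.

```python
# Feature vectors from amplitude statistics and a coarse spectrogram.
import numpy as np
from scipy.signal import spectrogram

def feature_vector(sig, fs):
    amp = [sig.mean(), sig.std(), np.abs(np.diff(sig)).mean()]  # amplitude stats
    f, t, S = spectrogram(sig, fs=fs, nperseg=256)
    bands = np.array_split(S.mean(axis=1), 4)     # mean energy in 4 bands
    return np.array(amp + [b.mean() for b in bands])

def toxicity_score(sample, control, fs):
    """Deviation of the monitored signal's features from the control's."""
    a, b = feature_vector(sample, fs), feature_vector(control, fs)
    return np.linalg.norm((a - b) / (np.abs(b) + 1e-12))
```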

  11. Energy demand forecasting method based on international statistical data

    International Nuclear Information System (INIS)

    Glanc, Z.; Kerner, A.

    1997-01-01

Poland is in a transition phase from a centrally planned to a market economy; data collected under former economic conditions do not reflect a market economy. Final energy demand forecasts are based on the assumption that the economic transformation in Poland will gradually lead the Polish economy, technologies and modes of energy use, to the same conditions as mature market economy countries. The starting point has a significant influence on the future energy demand and supply structure: final energy consumption per capita in 1992 was almost half the average of OECD countries; energy intensity, based on Purchasing Power Parities (PPP) and referred to GDP, is more than 3 times higher in Poland. A method of final energy demand forecasting based on regression analysis is described in this paper. The input data are: output of macroeconomic and population growth forecast; time series 1970-1992 of OECD countries concerning both macroeconomic characteristics and energy consumption; and energy balance of Poland for the base year of the forecast horizon. (author). 1 ref., 19 figs, 4 tabs

  13. A novel method for EMG decomposition based on matched filters

    Directory of Open Access Journals (Sweden)

    Ailton Luiz Dias Siqueira Júnior

Full Text Available Introduction Decomposition of electromyography (EMG) signals into the constituent motor unit action potentials (MUAPs) can allow for deeper insights into the underlying processes associated with the neuromuscular system. The vast majority of the methods for EMG decomposition found in the literature depend on complex algorithms and specific instrumentation. As an attempt to contribute to solving these issues, we propose a method based on a bank of matched filters for the decomposition of EMG signals. Methods Four main units comprise our method: a bank of matched filters, a peak detector, a motor unit classifier and an overlapping resolution module. The system’s performance was evaluated with simulated and real EMG data. Classification accuracy was measured by comparing the responses of the system with known data from the simulator and with the annotations of a human expert. Results The results show that decomposition of non-overlapping MUAPs can be achieved with up to 99% accuracy for signals with up to 10 active motor units and a signal-to-noise ratio (SNR) of 10 dB. For overlapping MUAPs with up to 10 motor units per signal and an SNR of 20 dB, the technique allows for correct classification of approximately 71% of the MUAPs. The method is capable of processing, decomposing and classifying a 50 ms window of data in less than 5 ms using a standard desktop computer. Conclusion This article contributes to the ongoing research on EMG decomposition by describing a novel technique capable of delivering high rates of success by means of a fast algorithm, suggesting its possible use in future real-time embedded applications, such as myoelectric prostheses control and biofeedback systems.
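
    A minimal sketch of the matched-filter bank, assuming SciPy: correlate the EMG record with each MUAP template, detect correlation peaks, and label each peak with the unit that fired. The overlapping resolution module, the hardest part, is not reproduced.

```python
# Matched-filter bank for MUAP detection, as a hedged sketch.
import numpy as np
from scipy.signal import correlate, find_peaks

def decompose(emg, templates, threshold=0.7):
    """emg: 1-D signal; templates: list of MUAP waveforms (one per unit)."""
    firings = []
    for unit, tpl in enumerate(templates):
        tpl = (tpl - tpl.mean()) / (np.linalg.norm(tpl) + 1e-12)
        r = correlate(emg, tpl, mode="same")
        r /= np.max(np.abs(r)) + 1e-12             # normalize responses
        peaks, _ = find_peaks(r, height=threshold,
                              distance=len(tpl))   # refractory spacing
        firings += [(int(p), unit) for p in peaks]
    return sorted(firings)                          # (sample index, motor unit)
```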

  14. Filmless versus film-based systems in radiographic examination costs: an activity-based costing method

    Directory of Open Access Journals (Sweden)

    Sase Yuji

    2011-09-01

Full Text Available Abstract Background Since the shift from a radiographic film-based system to a filmless system, the change in radiographic examination costs and cost structure has been undetermined. The activity-based costing (ABC) method measures the cost and performance of activities, resources, and cost objects. The purpose of this study is to identify the cost structure of a radiographic examination, comparing a filmless system to a film-based system using the ABC method. Methods We calculated the costs of radiographic examinations for both a filmless and a film-based system, and assessed the costs or cost components by simulating radiographic examinations in a health clinic. The cost objects of the radiographic examinations included lumbar (six views), knee (three views), wrist (two views), and other. Indirect costs were allocated to cost objects using the ABC method. Results The costs of a radiographic examination using a filmless system are as follows: lumbar 2,085 yen; knee 1,599 yen; wrist 1,165 yen; and other 1,641 yen. The costs for a film-based system are: lumbar 3,407 yen; knee 2,257 yen; wrist 1,602 yen; and other 2,521 yen. The primary activities were "calling patient," "explanation of scan," "take photographs," and "aftercare" for both filmless and film-based systems. The cost of these activities represented 36.0% of the total cost for a filmless system and 23.6% for a film-based system. Conclusions The costs of radiographic examinations using a filmless system and a film-based system were calculated using the ABC method. Our results provide clear evidence that the filmless system is more effective than the film-based system in providing greater value services directly to patients.

  15. Detection algorithm for glass bottle mouth defect by continuous wavelet transform based on machine vision

    Science.gov (United States)

    Qian, Jinfang; Zhang, Changjiang

    2014-11-01

An efficient algorithm based on the continuous wavelet transform combined with prior knowledge, which can be used to detect defects of the glass bottle mouth, is proposed. Firstly, under the condition of a ball integral light source, a perfect glass bottle mouth image is obtained by a Japanese Computar camera through an IEEE-1394b interface. A single threshold method based on the gray level histogram is used to obtain the binary image of the glass bottle mouth. In order to efficiently suppress noise, a moving average filter is employed to smooth the histogram of the original glass bottle mouth image. Then the continuous wavelet transform is applied to accurately determine the segmentation threshold. Mathematical morphology operations are used to get the normal binary bottle mouth mask. A glass bottle to be detected is moved to the detection zone by a conveyor belt. Both the bottle mouth image and the binary image are obtained by the above method. The binary image is multiplied with the normal bottle mask and a region of interest is obtained. Four parameters (number of connected regions, coordinate of centroid position, diameter of inner circle, and area of annular region) can be computed based on the region of interest. Glass bottle mouth detection rules are designed from the above four parameters so as to accurately detect and identify the defect conditions of the glass bottle. Finally, glass bottles from the Coca-Cola Company are used to verify the proposed algorithm. The experimental results show that the proposed algorithm can accurately detect the defect conditions of the glass bottles with 98% detection accuracy.
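
    A hedged OpenCV sketch of the inspection chain follows. Otsu's method stands in for the CWT-refined threshold selection, the inner-diameter estimate is a rough distance-transform trick, and the decision-rule limits are illustrative.

```python
# Sketch of the bottle-mouth inspection chain: threshold -> morphology ->
# masked region of interest -> four decision parameters -> defect rule.
import cv2
import numpy as np

def inspect(img_gray, normal_mask):
    _, binary = cv2.threshold(img_gray, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    binary = cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)
    roi = cv2.bitwise_and(binary, normal_mask)      # region of interest
    n, labels, stats, centroids = cv2.connectedComponentsWithStats(roi)
    area = int(np.count_nonzero(roi))               # area of annular region
    cx, cy = centroids[1] if n > 1 else (np.nan, np.nan)
    # Rough inner-circle diameter: largest empty disc inside the ring.
    dist = cv2.distanceTransform(cv2.bitwise_not(roi), cv2.DIST_L2, 5)
    inner_d = 2.0 * dist.max()
    # Illustrative rule: one connected ring with near-nominal area.
    ratio = area / max(np.count_nonzero(normal_mask), 1)
    defect = not (n - 1 == 1 and 0.9 < ratio < 1.1)
    return defect, (n - 1, (cx, cy), inner_d, area)
```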

  16. Accurate position estimation methods based on electrical impedance tomography measurements

    Science.gov (United States)

    Vergara, Samuel; Sbarbaro, Daniel; Johansen, T. A.

    2017-08-01

Electrical impedance tomography (EIT) is a technology that estimates the electrical properties of a body or a cross section. Its main advantages are its non-invasiveness, low cost and operation free of radiation. The estimation of the conductivity field leads to low resolution images compared with other technologies, and high computational cost. However, in many applications the target information lies in a low intrinsic dimensionality of the conductivity field. The estimation of this low-dimensional information is addressed in this work. It proposes optimization-based and data-driven approaches for estimating this low-dimensional information. The accuracy of the results obtained with these approaches depends on modelling and experimental conditions. Optimization approaches are sensitive to model discretization, the type of cost function and the search algorithm. Data-driven methods are sensitive to the assumed model structure and the data set used for parameter estimation. The system configuration and experimental conditions, such as the number of electrodes and the signal-to-noise ratio (SNR), also have an impact on the results. In order to illustrate the effects of all these factors, the position estimation of a circular anomaly is addressed. Optimization methods based on weighted error cost functions and derivative-free optimization algorithms provided the best results. Data-driven approaches based on linear models provided, in this case, good estimates, but the use of nonlinear models enhanced the estimation accuracy. The results obtained by optimization-based algorithms were less sensitive to experimental conditions, such as the number of electrodes and SNR, than the data-driven approaches. Position estimation mean squared errors for simulation and experimental conditions were more than twice for the optimization-based approaches compared with the data-driven ones. The experimental position estimation mean squared error of the data-driven models using a 16-electrode setup was less
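
    A toy of the optimization-based estimator: find the anomaly position minimizing a weighted misfit between measured boundary data and a forward model, using a derivative-free search. The analytic forward model below is a crude stand-in for a real EIT solver.

```python
# Derivative-free position estimation against a toy EIT forward model.
import numpy as np
from scipy.optimize import minimize

electrodes = np.exp(2j * np.pi * np.arange(16) / 16)   # 16 electrodes, unit disc

def forward(pos):
    """Toy boundary response of a small anomaly at position (x, y)."""
    return 1.0 / np.abs(electrodes - complex(*pos)) ** 2

true_pos = (0.3, -0.2)
rng = np.random.default_rng(0)
measured = forward(true_pos) + rng.normal(0, 0.01, 16)  # finite SNR
w = 1.0 / (np.abs(measured) + 1e-9)                     # weighted error cost

cost = lambda p: np.sum((w * (forward(p) - measured)) ** 2)
res = minimize(cost, x0=[0.0, 0.0], method="Nelder-Mead")
print(res.x)   # estimated (x, y), close to (0.3, -0.2)
```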

  17. A physically based catchment partitioning method for hydrological analysis

    Science.gov (United States)

    Menduni, Giovanni; Riboni, Vittoria

    2000-07-01

    We propose a partitioning method for the topographic surface, which is particularly suitable for hydrological distributed modelling and shallow-landslide distributed modelling. The model provides variable mesh size and appears to be a natural evolution of contour-based digital terrain models. The proposed method allows the drainage network to be derived from the contour lines. The single channels are calculated via a search for the steepest downslope lines. Then, for each network node, the contributing area is determined by means of a search for both steepest upslope and downslope lines. This leads to the basin being partitioned into physically based finite elements delimited by irregular polygons. In particular, the distributed computation of local geomorphological parameters (i.e. aspect, average slope and elevation, main stream length, concentration time, etc.) can be performed easily for each single element. The contributing area system, together with the information on the distribution of geomorphological parameters provide a useful tool for distributed hydrological modelling and simulation of environmental processes such as erosion, sediment transport and shallow landslides.

  18. Novel Verification Method for Timing Optimization Based on DPSO

    Directory of Open Access Journals (Sweden)

    Chuandong Chen

    2018-01-01

Full Text Available Timing optimization for logic circuits is one of the key steps in logic synthesis. Existing results are mainly based on various intelligence algorithms. Hence, they are neither comparable with timing optimization data collected by the mainstream electronic design automation (EDA) tool nor able to verify the superiority of intelligence algorithms over the EDA tool in terms of optimization ability. To address these shortcomings, a novel verification method is proposed in this study. First, a discrete particle swarm optimization (DPSO) algorithm was applied to optimize the timing of the mixed polarity Reed-Muller (MPRM) logic circuit. Second, the Design Compiler (DC) algorithm was used to optimize the timing of the same MPRM logic circuit through special settings and constraints. Finally, the timing optimization results of the two algorithms were compared based on MCNC benchmark circuits. The timing optimization results obtained using DPSO are compared with those obtained from DC, and DPSO demonstrates an average reduction of 9.7% in the timing delays of critical paths for a number of MCNC benchmark circuits. The proposed verification method directly ascertains whether the intelligence algorithm has a better timing optimization ability than DC.
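
    For orientation, a minimal binary DPSO of the kind used to search polarity vectors is sketched below, with velocities mapped to flip probabilities through a sigmoid; the circuit delay evaluator is a stand-in for a real timing engine.

```python
# Minimal binary discrete PSO over polarity vectors, as a hedged sketch.
import numpy as np

def dpso(delay_of, n_bits, swarm=30, iters=100, seed=0):
    """delay_of: callable scoring a 0/1 polarity vector (lower is better)."""
    rng = np.random.default_rng(seed)
    x = rng.integers(0, 2, (swarm, n_bits))
    v = np.zeros((swarm, n_bits))
    pbest, pcost = x.copy(), np.array([delay_of(p) for p in x])
    gbest = pbest[pcost.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, swarm, n_bits))
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
        # Sigmoid maps velocity to the probability that a bit is set.
        x = (rng.random((swarm, n_bits)) < 1 / (1 + np.exp(-v))).astype(int)
        cost = np.array([delay_of(p) for p in x])
        better = cost < pcost
        pbest[better], pcost[better] = x[better], cost[better]
        gbest = pbest[pcost.argmin()].copy()
    return gbest, pcost.min()
```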

  19. Interrogation of an autofluorescence-based method for protein fingerprinting.

    Science.gov (United States)

    Siddaramaiah, Manjunath; Rao, Bola Sadashiva S; Joshi, Manjunath B; Datta, Anirbit; Sandya, S; Vishnumurthy, Vasudha; Chandra, Subhash; Nayak, Subramanya G; Satyamoorthy, Kapaettu; Mahato, Krishna K

    2018-03-14

In the present study, we have designed laser-induced fluorescence (LIF) based instrumentation and developed a sensitive methodology for the effective separation, visualization, identification and analysis of proteins on a single platform. In this method, the intrinsic fluorescence spectra of proteins were detected after separation on 1- or 2-dimensional Sodium Dodecyl Sulfate-Tris(2-carboxyethyl)phosphine (SDS-TCEP) polyacrylamide gel electrophoresis (PAGE), and the data were analyzed. MATLAB-assisted software was designed to develop PAGE fingerprints for the visualization of proteins after 1- and 2-dimensional protein separation. These provided the objective parameters of intrinsic fluorescence intensity, emission peak, molecular weight and isoelectric point on a single platform. Further, the current architecture could differentiate overlapping proteins in the PAGE gels which otherwise were not identifiable by conventional staining, imaging and tagging methods. Categorization of the proteins based on the presence or absence of tyrosine or tryptophan residues, and assignment of the corresponding emission peaks (309-356 nm) with pseudo colors, allowed the detection of the proportion of proteins within a given spectrum. The present methodology does not use stains or tags, and hence is amenable to coupling with mass spectrometric measurements. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  20. THE FLUORBOARD A STATISTICALLY BASED DASHBOARD METHOD FOR IMPROVING SAFETY

    International Nuclear Information System (INIS)

    PREVETTE, S.S.

    2005-01-01

The FluorBoard is a statistically based dashboard method for improving safety. Fluor Hanford has achieved significant safety improvements, including more than an 80% reduction in OSHA cases per 200,000 hours, during its work at the US Department of Energy's Hanford Site in Washington state. The massive project on the former nuclear materials production site is considered one of the largest environmental cleanup projects in the world. Fluor Hanford's safety improvements were achieved by a committed partnering of workers, managers, and statistical methodology. Safety achievements at the site have been due to a systematic approach to safety. This includes excellent cooperation between the field workers, the safety professionals, and management through OSHA Voluntary Protection Program principles. Fluor corporate values are centered around safety, and safety excellence is important for every manager in every project. In addition, Fluor Hanford has utilized a rigorous approach to using its safety statistics, based upon Dr. Shewhart's control charts and Dr. Deming's management and quality methods.
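
    The statistical core of such a dashboard can be sketched briefly: a Shewhart individuals control chart flags periods whose rates fall outside the mean plus or minus three sigma, with sigma estimated from the average moving range. The data below are invented for illustration.

```python
# Shewhart individuals control chart limits for a safety-rate series.
import numpy as np

def control_limits(rates):
    rates = np.asarray(rates, dtype=float)
    center = rates.mean()
    mr_bar = np.abs(np.diff(rates)).mean()     # average moving range
    sigma = mr_bar / 1.128                     # d2 constant for subgroups of 2
    return center, center - 3 * sigma, center + 3 * sigma

monthly_osha_rate = [2.1, 1.8, 2.3, 1.9, 2.0, 1.7, 3.4, 1.6]  # invented
c, lcl, ucl = control_limits(monthly_osha_rate)
signals = [i for i, r in enumerate(monthly_osha_rate) if not lcl <= r <= ucl]
print(signals)   # indices showing special-cause variation worth investigating
```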