Sample records for automatic multiscale enhancement

  1. Multiscale based adaptive contrast enhancement (United States)

    Abir, Muhammad; Islam, Fahima; Wachs, Daniel; Lee, Hyoung


    A contrast enhancement algorithm for x-ray images is developed. The algorithm is based on the Laplacian pyramid image processing technique. The image is decomposed into three frequency sub-bands: low, medium, and high. Each sub-band contains different frequency information of the image. The detail structure of the image lies in the high-frequency sub-band and the overall structure lies in the low-frequency sub-band. It is generally difficult to extract detail structure from the high-frequency sub-bands. Enhancement of the detail structures is necessary in order to find calcifications in mammograms, cracks in objects such as fuel plates, etc. In our proposed method, contrast enhancement is achieved from the high- and medium-frequency sub-band images by decomposing the image with a multi-scale Laplacian pyramid and enhancing contrast by suitable image processing. A Standard Deviation-based Modified Adaptive Contrast Enhancement (SDMACE) technique is applied to enhance the low-contrast information in the sub-bands without overshooting noise. An alpha-trimmed mean filter is used in SDMACE for sharpness enhancement. After modifying all sub-band images, the final image is derived by reconstructing the sub-band images from the lower resolution level to the upper resolution level, including the residual image. To demonstrate the effectiveness of the algorithm, an x-ray of a fuel plate and two mammograms are analyzed. A subjective evaluation is performed, and the proposed algorithm is compared with the well-known contrast limited adaptive histogram equalization (CLAHE) algorithm. Experimental results show that the proposed algorithm offers improved contrast of the x-ray images.
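The alpha-trimmed mean filter mentioned above sits between a mean and a median filter: within each window the extreme samples are discarded before averaging, which suppresses impulse noise without the blurring of a plain mean. A minimal sketch (the function name, window size, and trim fraction are illustrative assumptions, not the authors' SDMACE code):

```python
import numpy as np

def alpha_trimmed_mean_filter(img, size=3, alpha=0.2):
    """Alpha-trimmed mean: sort each size*size window, drop the alpha
    fraction of lowest and highest samples, then average the rest."""
    pad = size // 2
    padded = np.pad(img.astype(float), pad, mode="edge")
    out = np.empty(img.shape, dtype=float)
    n = size * size
    trim = int(alpha * n)  # samples trimmed from each end of the sorted window
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            window = np.sort(padded[i:i + size, j:j + size], axis=None)
            out[i, j] = window[trim:n - trim].mean()
    return out
```

On a flat region containing one impulse outlier, the trimmed mean discards the outlier entirely, while a plain mean would smear it across the window.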

  2. Automatic Multi-Scale Calibration Procedure for Nested Hydrological-Hydrogeological Regional Models (United States)

    Labarthe, B.; Abasq, L.; Flipo, N.; de Fouquet, C. D.


    Modelling and understanding large hydrosystems is a complex process that depends on regional and local processes. A nested interface concept has been implemented in a hydrosystem modelling platform for a large alluvial plain model (300 km2), part of an 11000 km2 multi-layer aquifer system included in the Seine basin (65000 km2, France). The platform couples hydrological and hydrogeological processes through four spatially distributed modules (mass balance, unsaturated zone, river, and groundwater). An automatic multi-scale calibration procedure is proposed. Using different data sets, from the regional scale (117 gauging stations and 183 piezometers over the 65000 km2) to the intermediate scale (a dense past piezometric snapshot), it permits the calibration and homogenization of model parameters across scales. The stepwise procedure starts with the optimisation of the water mass balance parameters at the regional scale, using a conceptual seven-parameter bucket model coupled with the inverse modelling tool PEST. The multi-objective function is derived from river discharges and their decomposition by hydrograph separation. The separation is performed at each gauging station using an automatic procedure based on the Chapman filter. Then, the model is run at the regional scale to provide recharge estimates and regional fluxes to the local groundwater model. Another inversion method is then used to determine the local hydrodynamic parameters. This procedure uses an initial kriged transmissivity field which is successively updated until the simulated hydraulic head distribution equals a reference one obtained by kriging. Then, the local parameters are upscaled to the regional model by a renormalisation procedure. This multi-scale automatic calibration procedure enhances the representation of both local and regional processes. Indeed, it permits a better description of local heterogeneities and of the associated processes, which are transposed into the regional model, improving the overall performance.
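The hydrograph separation step relies on the Chapman filter, a one-parameter recursive digital filter that splits streamflow into quickflow and baseflow. A sketch of one common statement of that filter (the recession parameter value and the zero-quickflow initialisation are assumptions, not the study's settings):

```python
def chapman_baseflow(q, a=0.925):
    """Chapman-style single-parameter recursive digital filter.
    q : sequence of streamflow values; a : recession parameter.
    Returns (quickflow, baseflow); quickflow is clamped to [0, q]."""
    qf = [0.0]  # assume the record starts on pure baseflow
    for i in range(1, len(q)):
        f = ((3 * a - 1) / (3 - a)) * qf[-1] + (2 / (3 - a)) * (q[i] - a * q[i - 1])
        qf.append(min(max(f, 0.0), q[i]))  # physical constraint: 0 <= quickflow <= q
    baseflow = [qi - fi for qi, fi in zip(q, qf)]
    return qf, baseflow
```

The clamp enforces the physical constraint that baseflow stays non-negative and never exceeds total streamflow.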

  3. Automatic psoriasis lesion segmentation in two-dimensional skin images using multiscale superpixel clustering. (United States)

    George, Yasmeen; Aldeen, Mohammad; Garnavi, Rahil


    Psoriasis is a chronic skin disease that is assessed visually by dermatologists. The Psoriasis Area and Severity Index (PASI) is the current gold standard used to measure lesion severity by evaluating four parameters, namely, area, erythema, scaliness, and thickness. In this context, psoriasis skin lesion segmentation is required as the basis for PASI scoring. An automatic lesion segmentation method leveraging multiscale superpixels and k-means clustering is outlined. Specifically, we apply a superpixel segmentation strategy in the CIE-L*a*b* color space using different scales. Also, we suppress the superpixels that belong to nonskin areas. Once similar regions at different scales are obtained, the k-means algorithm is used to cluster each superpixel scale separately into normal and lesion skin areas. Features from both the a* and b* color bands are used in the clustering process. Furthermore, majority voting is performed to fuse the segmentation results from different scales to obtain the final output. The proposed method is extensively evaluated on a set of 457 psoriasis digital images acquired from the Royal Melbourne Hospital, Melbourne, Australia. Experimental results have shown that the method is very effective and efficient, even when applied to images containing hairy skin and lesions of diverse size, shape, and severity. It has also been ascertained that CIE-L*a*b* outperforms other color spaces for psoriasis lesion analysis and segmentation. In addition, we use three evaluation metrics, namely, the Dice coefficient, the Jaccard index, and pixel accuracy, where scores of 0.783, 0.698, and 86.99%, respectively, have been achieved by the proposed method. Finally, compared with existing methods that employ either skin decomposition and a support vector machine classifier or Euclidean distance in the hue-chroma plane, our multiscale superpixel
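The clustering-and-fusion idea above can be sketched as a minimal two-class k-means on per-superpixel feature vectors plus per-pixel majority voting across the per-scale label maps (a generic illustration; the feature choice, initialisation, and superpixel step are not reproduced from the paper):

```python
import numpy as np

def kmeans2(X, iters=50, seed=0):
    """Minimal k=2 k-means (Lloyd's algorithm) on an (n, d) feature array."""
    X = np.asarray(X, dtype=float)
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=2, replace=False)].copy()
    for _ in range(iters):
        dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)        # assign each sample to nearest center
        for k in (0, 1):
            if (labels == k).any():
                centers[k] = X[labels == k].mean(axis=0)  # recompute centers
    return labels, centers

def majority_vote(label_maps):
    """Fuse per-scale binary label maps by per-pixel majority voting."""
    stacked = np.stack(label_maps)
    return (2 * stacked.sum(axis=0) > len(label_maps)).astype(int)
```

Each scale votes independently; a pixel is labelled "lesion" only if more than half the scales agree, which suppresses single-scale segmentation errors.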

  4. Automatic Matching of Multi-scale Road Networks under the Constraints of Smaller Scale Road Meshes

    Directory of Open Access Journals (Sweden)

    PEI Hongxing


    A new method is proposed to achieve automatic matching of multi-scale roads under the constraints of the smaller-scale data. Firstly, meshes are extracted from the road data at the two different scales. Secondly, several basic meshes in the larger-scale road network are merged into a composite one, which is matched with one mesh from the smaller-scale road network, so that meshes with many-to-one and one-to-one matching relationships are matched. Thirdly, meshes from the two road datasets with many-to-many matching relationships are matched. Finally, roads are classified into two categories under the constraints of meshes, mesh border roads and mesh internal roads, and matching is then done within each category according to the matching relationships between the meshes at the two scales. The results show that roads from different scales are matched more precisely.
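The many-to-one merging step can be illustrated with a toy abstraction: each mesh is a set of elementary cell ids, and every larger-scale mesh is assigned to the smaller-scale mesh it overlaps most, so meshes mapped to the same target form a composite (a simplified sketch; real mesh matching works on polygon geometry, and all names here are hypothetical):

```python
def match_meshes(large, small):
    """Toy many-to-one mesh matching. `large` and `small` map mesh ids to
    sets of elementary cell ids. Each larger-scale mesh is assigned to the
    smaller-scale mesh covering most of it; same-target meshes are merged."""
    composites = {}
    for lid, lcells in large.items():
        best, best_overlap = None, 0
        for sid, scells in small.items():
            overlap = len(lcells & scells)   # shared-cell count as overlap proxy
            if overlap > best_overlap:
                best, best_overlap = sid, overlap
        composites.setdefault(best, set()).update(lcells)
    return composites
```

With this abstraction, the composite built for each smaller-scale mesh should reproduce its cell set when the two scales are consistent.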

  5. Multiscale Retinex

    Directory of Open Access Journals (Sweden)

    Ana Belén Petro


    While the retinex theory aimed at explaining human color perception, its derivations have led to efficient algorithms enhancing local image contrast, thus permitting, among other features, one to "see in the shadows". Among these derived algorithms, Multiscale Retinex is probably the most successful center-surround image filter. In this paper, we offer an analysis and implementation of Multiscale Retinex. We point out and resolve some ambiguities of the method. In particular, we show that the important color correction final step of the method can be seriously improved. This analysis permits us to come up with an automatic implementation of Multiscale Retinex which is as faithful as possible to the one described in the original paper. Overall, this implementation delivers excellent results and confirms the validity of Multiscale Retinex for image color restoration and contrast enhancement. Nevertheless, while the method parameters can be fixed, we show that a crucial choice must be left to the user, depending on the lighting conditions of the image: the method must either be applied to each color channel independently if a color balance is required, or to the luminance only if the goal is to achieve local contrast enhancement. Thus, we propose two slightly different algorithms to deal with both cases.
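At its core, the single-scale retinex output is log I − log(G_σ ∗ I), and Multiscale Retinex averages this over several Gaussian scales. A compact grayscale sketch (the scale set, log offset, and Gaussian truncation radius are assumptions; the color-handling and gain/offset steps discussed in the paper are omitted):

```python
import numpy as np

def gaussian_blur(img, sigma):
    """Separable Gaussian blur via 1-D convolution along each axis."""
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2.0 * sigma**2))
    k /= k.sum()
    smooth = lambda row: np.convolve(np.pad(row, radius, mode="edge"), k, mode="valid")
    out = np.apply_along_axis(smooth, 0, img)   # blur columns
    out = np.apply_along_axis(smooth, 1, out)   # then rows
    return out

def multiscale_retinex(img, sigmas=(15, 80, 250)):
    """MSR: average of single-scale outputs log(I) - log(G_sigma * I)."""
    img = img.astype(float) + 1.0               # offset avoids log(0)
    out = np.zeros_like(img)
    for s in sigmas:
        out += np.log(img) - np.log(gaussian_blur(img, s) + 1e-12)
    return out / len(sigmas)
```

A useful sanity check: on a constant image the center-surround difference vanishes, so the MSR response is zero everywhere, while an isolated bright spot gets a positive response.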

  6. Automatic Nuclear Segmentation Using Multiscale Radial Line Scanning With Dynamic Programming. (United States)

    Xu, Hongming; Lu, Cheng; Berendt, Richard; Jha, Naresh; Mandal, Mrinal


    In the diagnosis of various cancers by analyzing histological images, automatic nuclear segmentation is an important step. However, nuclear segmentation is a difficult problem because of overlapping nuclei, inhomogeneous staining, and the presence of noisy pixels and other tissue components. In this paper, we present an automatic technique for nuclear segmentation in skin histological images. The proposed technique first applies a bank of generalized Laplacian of Gaussian kernels to detect nuclear seeds. Based on the detected nuclear seeds, a multiscale radial line scanning method combined with dynamic programming is applied to extract a set of candidate nuclear boundaries. The gradient, intensity, and shape information are then integrated to determine the optimal boundary for each nucleus in the image. A nuclear overlap limitation is finally imposed, based on a Dice coefficient measure, such that the obtained nuclear contours do not severely intersect with each other. Experiments have been thoroughly performed on two datasets with H&E and Ki-67 stained images, which show that the proposed technique is superior to conventional nuclear segmentation schemes.

  7. Automatic facial pore analysis system using multi-scale pore detection. (United States)

    Sun, J Y; Kim, S W; Lee, S H; Choi, J E; Ko, S J


    As facial pore widening and its treatments have become common concerns in the beauty care field, the need for an objective pore-analyzing system has increased. Conventional apparatuses lack usability, requiring strong light sources and a cumbersome photographing process, and they often yield unsatisfactory analysis results. This study was conducted to develop an image processing technique for automatic facial pore analysis. The proposed method detects facial pores using a multi-scale detection and optimal scale selection scheme and then extracts pore-related features such as total area, average size, depth, and the number of pores. Facial photographs of 50 subjects were graded by two expert dermatologists, and correlation analyses between the features and clinical grading were conducted. We also compared our analysis results with those of conventional pore-analyzing devices. The number of large pores and the average pore size were highly correlated with the severity of pore enlargement. In comparison with the conventional devices, the proposed analysis system achieved better performance, showing stronger correlation with the clinical grading. The proposed system is highly accurate and reliable for measuring the severity of skin pore enlargement. It can be suitably used for objective assessment of pore tightening treatments. © 2016 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
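Once pores have been detected as a binary mask, features such as pore count, total area, and average size reduce to connected-component statistics. A generic sketch (4-connectivity, the feature set, and the function name are assumptions; the paper's depth feature needs intensity data and is omitted):

```python
import numpy as np

def pore_features(mask):
    """Connected-component labelling (4-connectivity, iterative flood fill)
    on a binary pore mask; returns count, total area, and mean pore size."""
    mask = np.asarray(mask, dtype=bool)
    seen = np.zeros_like(mask)
    h, w = mask.shape
    sizes = []
    for i in range(h):
        for j in range(w):
            if mask[i, j] and not seen[i, j]:
                stack, size = [(i, j)], 0
                seen[i, j] = True
                while stack:                      # flood-fill one component
                    y, x = stack.pop()
                    size += 1
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and mask[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            stack.append((ny, nx))
                sizes.append(size)
    total = sum(sizes)
    return {"count": len(sizes), "total_area": total,
            "mean_size": total / len(sizes) if sizes else 0.0}
```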

  8. Enhanced water repellency of surfaces coated with multiscale carbon structures (United States)

    Marchalot, Julien; Ramos, Stella. M. M.; Pirat, Christophe; Journet, Catherine


    Low cost and well characterized superhydrophobic surfaces are frequently required for industrial applications. Materials are commonly structured at the micro or nano scale. Surfaces decorated with nanotube derivatives synthesized by plasma enhanced chemical vapor deposition (PECVD) are of particular interest, since suitable modifications of the growth parameters can lead to numerous designs. In this article, we present surfaces selected for their specific wetting features, with patterns ranging from dense forests to jungles with concave (re-entrant) surfaces such as flake-like multiscale roughness. Once these surfaces are adequately functionalized, their wetting properties are investigated. Their ability to sustain a superhydrophobic state for sessile water drops is examined. Finally, we propose a design to achieve a robust so-called "Fakir" state, even for micrometer-sized drops, which is not achievable with classic nanotube forests. Thus, the drop remains on the apex of the protrusions with a high contact angle and low contact angle hysteresis, while the surface features demonstrate good mechanical resistance against capillary forces.
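The "Fakir" (Cassie-Baxter) state mentioned above has a standard apparent-contact-angle relation, cos θ* = f (cos θ + 1) − 1, where θ is the intrinsic contact angle on the flat material and f the wetted solid fraction. A small numeric sketch of that textbook formula (not the article's measurements):

```python
import math

def cassie_baxter_angle(theta_deg, solid_fraction):
    """Apparent contact angle (degrees) in the Cassie-Baxter "Fakir" state:
    cos(theta*) = f * (cos(theta) + 1) - 1, f = wetted solid fraction."""
    c = solid_fraction * (math.cos(math.radians(theta_deg)) + 1.0) - 1.0
    return math.degrees(math.acos(c))
```

Lowering the solid fraction raises the apparent angle: a modestly hydrophobic flat material (θ ≈ 110°) on a sparse pillar top (f = 0.1) already lands in the superhydrophobic regime.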

  9. A fully automatic multiscale 3-dimensional Hessian-based algorithm for vessel detection in breast DCE-MRI. (United States)

    Vignati, Anna; Giannini, Valentina; Bert, Alberto; Borrelli, Pasquale; De Luca, Massimo; Martincich, Laura; Sardanelli, Francesco; Regge, Daniele


    The objectives of this study were to develop a fully automatic method for detecting blood vessels in dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) of the breast on the basis of a multiscale 3-dimensional Hessian-based algorithm, and to evaluate the improvement in reducing the number of vessel voxels incorrectly classified as parenchymal lesions by a computer-aided diagnosis (CAD) system. The algorithm has been conceived to work on images obtained with different sequences, different acquisition parameters, such as the use of fat saturation, and different contrast agents. The analysis was performed on 28 DCE-MRI examinations, with 39 malignant (28 principal and 11 satellite) and 8 benign lesions, acquired at 2 centers using 2 different 1.5-T magnetic resonance scanners, radiofrequency coils, and contrast agents (14 studies from group A and 14 studies from group B). The method consists of 2 main steps: (a) the detection of linear structures in 3-dimensional images, with a multiscale analysis based on the second-order image derivatives, and (b) the exclusion of non-vessel enhancements based on their morphological properties, through the evaluation of the covariance matrix eigenvalues. To evaluate the algorithm's performance, the identified vessels were converted into a 2-dimensional vasculature skeleton and then compared with manual tracking performed by an expert radiologist. When assessing the algorithm's performance in identifying vascular structures, the following terms must be considered: the correct-detection rate refers to pixels identified by both the algorithm and the radiologist, the missed-detection rate refers to pixels detected only by the radiologist, and the incorrect-detection rate refers to pixels detected only by the algorithm. The Wilcoxon rank sum test was used to assess differences between the performances on the 2 subgroups of images obtained from the different scanners.
For the testing
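The Hessian-based idea can be illustrated in 2-D: the eigenvalues of the local Hessian separate tubular structures (one near-zero and one strongly negative eigenvalue for bright vessels) from blobs and background. A Frangi-style single-scale sketch using finite differences (parameter values are assumptions; the paper works in 3-D with a full multiscale analysis):

```python
import numpy as np

def hessian_eigen(img):
    """Per-pixel eigenvalues of the 2-D Hessian via finite differences,
    sorted so that |l1| <= |l2|."""
    gy, gx = np.gradient(img.astype(float))
    hyy, hyx = np.gradient(gy)
    hxy, hxx = np.gradient(gx)
    tr = hxx + hyy
    det = hxx * hyy - hxy * hyx
    disc = np.sqrt(np.maximum(tr**2 / 4.0 - det, 0.0))
    e1, e2 = tr / 2.0 - disc, tr / 2.0 + disc
    swap = np.abs(e1) > np.abs(e2)
    return np.where(swap, e2, e1), np.where(swap, e1, e2)

def vesselness(img, beta=0.5, c=15.0):
    """Frangi-style line filter for bright tubular structures (l2 << 0)."""
    l1, l2 = hessian_eigen(img)
    rb2 = (l1 / (l2 + 1e-12))**2          # blob-vs-line measure
    s2 = l1**2 + l2**2                    # second-order "structureness"
    v = np.exp(-rb2 / (2 * beta**2)) * (1.0 - np.exp(-s2 / (2 * c**2)))
    v[l2 > 0] = 0.0                       # bright structures only
    return v
```

On a synthetic bright line the response concentrates on the line itself and vanishes on flat background.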

  10. Color Image Enhancement Using Multiscale Retinex Based on Particle Swarm Optimization Method (United States)

    Matin, F.; Jeong, Y.; Kim, K.; Park, K.


    This paper introduces a novel method for image enhancement using multiscale retinex and particle swarm optimization (PSO). Multiscale retinex is a widely used image enhancement technique that depends heavily on parameters such as the Gaussian scales, gain, and offset. To achieve the desired effect, the parameters need to be tuned manually for each image. To handle this matter, a retinex algorithm based on PSO has been developed. The PSO method adjusts the parameters for multiscale retinex with chromaticity preservation (MSRCP), attaining better outcomes than other existing methods. The experimental results indicate that the proposed algorithm is efficient and not only provides true color fidelity in low-light conditions but also avoids color distortion at the same time.
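Particle swarm optimization itself is simple to sketch: particles track their personal best and are pulled toward the swarm's global best. Here it minimises a toy quadratic standing in for the paper's retinex image-quality objective (all hyperparameter values are assumptions):

```python
import numpy as np

def pso(objective, bounds, n_particles=20, iters=60, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimiser over box bounds [(lo, hi), ...]."""
    rng = np.random.default_rng(seed)
    lo = np.array([b[0] for b in bounds], float)
    hi = np.array([b[1] for b in bounds], float)
    x = rng.uniform(lo, hi, (n_particles, len(bounds)))   # positions
    v = np.zeros_like(x)                                  # velocities
    pbest, pbest_val = x.copy(), np.array([objective(p) for p in x])
    g = pbest[pbest_val.argmin()].copy()                  # global best
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        vals = np.array([objective(p) for p in x])
        better = vals < pbest_val
        pbest[better], pbest_val[better] = x[better], vals[better]
        g = pbest[pbest_val.argmin()].copy()
    return g, pbest_val.min()
```

In the MSR setting, `objective` would score an enhanced image (e.g. by entropy or contrast) for a candidate set of Gaussian scales, gain, and offset.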


  11. Automatic Detection and Tracking of Coronal Mass Ejections. II. Multiscale Filtering of Coronagraph Images

    International Nuclear Information System (INIS)

    Byrne, Jason P.; Morgan, Huw; Habbal, Shadia R.; Gallagher, Peter T.


    Studying coronal mass ejections (CMEs) in coronagraph data can be challenging due to their diffuse structure and transient nature, and user-specific biases may be introduced through visual inspection of the images. The large amount of data available from the Solar and Heliospheric Observatory (SOHO), Solar TErrestrial RElations Observatory (STEREO), and future coronagraph missions also makes manual cataloging of CMEs tedious, and so a robust method of detection and analysis is required. This has led to the development of automated CME detection and cataloging packages such as CACTus, SEEDS, and ARTEMIS. Here, we present the development of a new CORIMP (coronal image processing) CME detection and tracking technique that overcomes many of the drawbacks of current catalogs. It works by first employing the dynamic CME separation technique outlined in a companion paper, and then characterizing CME structure via a multiscale edge-detection algorithm. The detections are chained through time to determine the CME kinematics and morphological changes as it propagates across the plane of sky. The effectiveness of the method is demonstrated by its application to a selection of SOHO/LASCO and STEREO/SECCHI images, as well as to synthetic coronagraph images created from a model corona with a variety of CMEs. The algorithms described in this article are being applied to the whole LASCO and SECCHI data sets, and a catalog of results will soon be available to the public.

  12. Automatic ultrasound image enhancement for 2D semi-automatic breast-lesion segmentation (United States)

    Lu, Kongkuo; Hall, Christopher S.


    Breast cancer is the fastest growing cancer, accounting for 29% of new cases in 2012, and the second leading cause of cancer death among women in the United States and worldwide. Ultrasound (US) has been used as an indispensable tool for breast cancer detection/diagnosis and treatment. In computer-aided assistance, lesion segmentation is a preliminary but vital step, yet the task is quite challenging in US images, due to imaging artifacts that complicate detection and measurement of the suspect lesions. The lesions usually present with poor boundary features and vary significantly in size, shape, and intensity distribution between cases. Automatic methods are highly application dependent, while manual tracing methods are extremely time consuming and suffer a great deal of intra- and inter-observer variability. Semi-automatic approaches are designed to balance the advantages and drawbacks of the automatic and manual methods. However, considerable user interaction might be necessary to ensure reasonable segmentation for a wide range of lesions. This work proposes an automatic enhancement approach that improves the boundary-searching ability of the live-wire method, reducing the necessary user interaction while maintaining segmentation performance. Based on segmentation results for 50 2D breast lesions in US images, less user interaction is required for live-wire segmentation to achieve the desired accuracy.

  13. Automatic vertebral bodies detection of x-ray images using invariant multiscale template matching (United States)

    Sharifi Sarabi, Mona; Villaroman, Diane; Beckett, Joel; Attiah, Mark; Marcus, Logan; Ahn, Christine; Babayan, Diana; Gaonkar, Bilwaj; Macyszyn, Luke; Raghavendra, Cauligi


    Lower back pain and related pathologies are among the most common reasons for referral to a neurosurgical clinic in both the developed and the developing world. Quantitative evaluation of these pathologies is a challenge. Image-based measurements of angles, vertebral heights, and disks could provide a potential quantitative biomarker for tracking and measuring these pathologies. Detection of vertebral bodies is a key element and is the focus of the current work. Among the variety of medical imaging techniques, MRI and CT scans have typically been used for developing image segmentation methods. However, CT scans are known to deliver a large x-ray dose, increasing cancer risk [8]. MRI can be substituted for CT when the risk is high [8], but it is difficult to obtain in smaller facilities due to cost and lack of expertise in the field [2]. X-rays provide another option, given the ability to control the x-ray dosage, especially for young people, and their accessibility for smaller facilities. Hence, the ability to create quantitative biomarkers from x-ray data is especially valuable. Here, we develop a multiscale template matching approach, inspired by [9], to detect centers of vertebral bodies from x-ray data. The immediate application of such detection lies in developing quantitative biomarkers and in querying similar images in a database. Previously, shape similarity classification methods have been used to address this problem, but these are challenging to use in the presence of variation due to gross pathology and even subtle effects [1].
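Multiscale template matching can be sketched as normalized cross-correlation (NCC) evaluated for several resampled template sizes, keeping the scale with the highest score (a brute-force illustration with nearest-neighbour resampling; the scale set and function names are assumptions, not the paper's pipeline):

```python
import numpy as np

def ncc_match(image, template):
    """Exhaustive normalized cross-correlation; returns the best score
    and the top-left position of the best match."""
    ih, iw = image.shape
    th, tw = template.shape
    t = template - template.mean()
    tnorm = np.sqrt((t**2).sum()) + 1e-12
    best, pos = -2.0, (0, 0)
    for y in range(ih - th + 1):
        for x in range(iw - tw + 1):
            win = image[y:y + th, x:x + tw]
            wz = win - win.mean()
            score = (wz * t).sum() / ((np.sqrt((wz**2).sum()) + 1e-12) * tnorm)
            if score > best:
                best, pos = score, (y, x)
    return best, pos

def multiscale_match(image, template, scales=(0.5, 1.0, 2.0)):
    """Match the template at several sizes; keep the highest-scoring scale."""
    results = []
    for s in scales:
        th = max(1, int(round(template.shape[0] * s)))
        tw = max(1, int(round(template.shape[1] * s)))
        ys = (np.arange(th) / s).astype(int).clip(0, template.shape[0] - 1)
        xs = (np.arange(tw) / s).astype(int).clip(0, template.shape[1] - 1)
        scaled = template[np.ix_(ys, xs)]     # nearest-neighbour resample
        if scaled.shape[0] <= image.shape[0] and scaled.shape[1] <= image.shape[1]:
            score, pos = ncc_match(image, scaled)
            results.append((score, s, pos))
    return max(results)
```

NCC is invariant to local brightness and contrast shifts, which matters for x-ray data with variable exposure.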

  14. Remote Sensing Images Super Resolution Reconstruction Based on Multi-scale Detail Enhancement

    Directory of Open Access Journals (Sweden)

    ZHU Hong


    Existing methods struggle to highlight details after super-resolution reconstruction, so a super-resolution framework is proposed to enhance multi-scale details. Firstly, the sequence images are decomposed at multiple scales to preserve the edge structure, and the decomposed multi-scale image information is differenced. Then, the smoothed information and detail information are interpolated, and a texture detail enhancement function is built to improve the visibility of small details. Finally, the coarse-scale image information and the small- and medium-scale information are fused to obtain the preliminary super-resolution reconstruction result, and a local optimization model is built to further improve the preliminary image quality. Experiments on remote sensing images from the same and different periods show that the objective evaluation indices are greatly improved compared with the interpolation method, the traditional total variation (TV) method, and the maximum a posteriori (MAP) method. The details of the reconstructed image are improved distinctly. The image reconstructed using the proposed method provides more high-frequency details, and the method proves to be robust and universal for different kinds of satellite remote sensing images.

  15. Automatic Classification of Normal and Cancer Lung CT Images Using Multiscale AM-FM Features

    Directory of Open Access Journals (Sweden)

    Eman Magdy


    Computer-aided diagnostic (CAD) systems provide fast and reliable diagnosis for medical images. In this paper, a CAD system is proposed to analyze and automatically segment the lungs and classify each lung as normal or cancerous. Using a CT dataset of 70 different patients' lungs, Wiener filtering on the original CT images is first applied as a preprocessing step. Secondly, we combine histogram analysis with thresholding and morphological operations to segment the lung regions and extract each lung separately. Thirdly, the Amplitude-Modulation Frequency-Modulation (AM-FM) method is used to extract features for the regions of interest (ROIs). Then, the significant AM-FM features are selected using Partial Least Squares Regression (PLSR) for the classification step. Finally, k-nearest neighbour (KNN), support vector machine (SVM), naïve Bayes, and linear classifiers are used with the selected AM-FM features. The performance of each classifier is evaluated in terms of accuracy, sensitivity, and specificity. The results indicate that our proposed CAD system succeeds in differentiating between normal and cancerous lungs and achieves 95% accuracy in the case of the linear classifier.

  16. Automatic Classification of Normal and Cancer Lung CT Images Using Multiscale AM-FM Features (United States)

    Zayed, Nourhan; Fakhr, Mahmoud


    Computer-aided diagnostic (CAD) systems provide fast and reliable diagnosis for medical images. In this paper, a CAD system is proposed to analyze and automatically segment the lungs and classify each lung as normal or cancerous. Using a CT dataset of 70 different patients' lungs, Wiener filtering on the original CT images is first applied as a preprocessing step. Secondly, we combine histogram analysis with thresholding and morphological operations to segment the lung regions and extract each lung separately. Thirdly, the Amplitude-Modulation Frequency-Modulation (AM-FM) method is used to extract features for the regions of interest (ROIs). Then, the significant AM-FM features are selected using Partial Least Squares Regression (PLSR) for the classification step. Finally, k-nearest neighbour (KNN), support vector machine (SVM), naïve Bayes, and linear classifiers are used with the selected AM-FM features. The performance of each classifier is evaluated in terms of accuracy, sensitivity, and specificity. The results indicate that our proposed CAD system succeeds in differentiating between normal and cancerous lungs and achieves 95% accuracy in the case of the linear classifier. PMID:26451137

  17. Automatic grade classification of Barrett's Esophagus through feature enhancement (United States)

    Ghatwary, Noha; Ahmed, Amr; Ye, Xujiong; Jalab, Hamid


    Barrett's Esophagus (BE) is a precancerous condition that affects the esophagus tube and carries the risk of developing into esophageal adenocarcinoma. BE is the process of developing metaplastic intestinal epithelium that replaces the normal cells in the esophageal area. The detection of BE is considered difficult due to its appearance and properties. The diagnosis is usually made through both endoscopy and biopsy. Recently, Computer Aided Diagnosis systems have been developed to support physicians' opinions when detection/classification is difficult in different types of diseases. In this paper, an automatic classification of the Barrett's Esophagus condition is introduced. The presented method enhances the internal features of a Confocal Laser Endomicroscopy (CLE) image by utilizing a proposed enhancement filter. This filter depends on fractional differentiation and integration, which improve the features in the discrete wavelet transform of an image. Subsequently, various features are extracted from each enhanced image at different levels for the multi-classification process. Our approach is validated on a dataset of 32 patients with 262 images of different histology grades. The experimental results demonstrate the efficiency of the proposed technique. Our method helps clinicians achieve more accurate classification. This potentially helps to reduce the number of biopsies needed for diagnosis, facilitates the regular monitoring of treatment and of the development of the patient's case, and can help train doctors with the new endoscopy technology. Accurate automatic classification is particularly important for the Intestinal Metaplasia (IM) type, which can develop into deadly cancer. Hence, this work contributes to automatic classification that facilitates early intervention/treatment and decreases the number of biopsy samples needed.

  18. Fractional Directional Differentiation and Its Application for Multiscale Texture Enhancement

    Directory of Open Access Journals (Sweden)

    Chaobang Gao


    This paper derives the directional derivative expression of the Taylor formula for a two-variable function from the Taylor formula of a one-variable function. Further, it proposes a new concept, fractional directional differentiation (FDD), and the corresponding theory. To enable numerical calculation, the paper deduces the power series expression of FDD. Moreover, the paper discusses the construction of the FDD mask in each of the four quadrants for digital images. The differential coefficients differ along the eight directions in the four quadrants, which is the biggest difference compared with general fractional differentiation; it can reflect different fractional change rates along different directions, and this helps enlarge the differences among the image textures. Experiments show that, for texture-rich digital images, the capability of nonlinearly enhancing comprehensive texture details by FDD is better than that of general fractional differentiation and the Butterworth filter. Quantitative analysis shows that a state-of-the-art texture-enhancement effect is obtained by FDD.
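Fractional differentiation of order v is typically discretised with Grünwald-Letnikov coefficients, c_0 = 1 and c_k = c_{k−1}(k − 1 − v)/k. A 1-D sketch of that standard construction (mask length and edge handling are assumptions; the paper's directional 2-D masks are omitted):

```python
import numpy as np

def gl_coefficients(v, n):
    """First n Grünwald-Letnikov coefficients of fractional order v:
    c_0 = 1, c_k = c_{k-1} * (k - 1 - v) / k."""
    c = [1.0]
    for k in range(1, n):
        c.append(c[-1] * (k - 1 - v) / k)
    return np.array(c)

def fractional_diff(signal, v, n=8):
    """Order-v fractional derivative of a 1-D signal: D^v f(m) ~ sum c_k f(m-k)."""
    c = gl_coefficients(v, n)
    padded = np.pad(np.asarray(signal, float), (n - 1, 0), mode="edge")
    return np.convolve(padded, c, mode="valid")
```

Sanity checks follow from the coefficients: v = 0 gives the identity, and v = 1 reduces to the backward first difference, with 0 < v < 1 interpolating between the two.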

  19. Feature and Contrast Enhancement of Mammographic Image Based on Multiscale Analysis and Morphology

    Directory of Open Access Journals (Sweden)

    Shibin Wu


    A new algorithm for feature and contrast enhancement of mammographic images is proposed in this paper. The approach is based on multiscale transform and mathematical morphology. First of all, the Laplacian Gaussian pyramid operator is applied to transform the mammogram into subband images at different scales. The detail, or high-frequency, subimages are then equalized by contrast limited adaptive histogram equalization (CLAHE), and the low-pass subimages are processed by mathematical morphology. Finally, the feature- and contrast-enhanced image is reconstructed from the Laplacian Gaussian pyramid coefficients modified at one or more levels by contrast limited adaptive histogram equalization and mathematical morphology, respectively. The enhanced image is then processed by a global nonlinear operator. The experimental results show that the presented algorithm is effective for feature and contrast enhancement of mammograms. The performance of the proposed algorithm is measured by a contrast evaluation criterion for images, the signal-to-noise ratio (SNR), and the contrast improvement index (CII).

  20. Feature and contrast enhancement of mammographic image based on multiscale analysis and morphology. (United States)

    Wu, Shibin; Yu, Shaode; Yang, Yuhan; Xie, Yaoqin


    A new algorithm for feature and contrast enhancement of mammographic images is proposed in this paper. The approach is based on multiscale transform and mathematical morphology. First of all, the Laplacian Gaussian pyramid operator is applied to transform the mammogram into subband images at different scales. The detail, or high-frequency, subimages are then equalized by contrast limited adaptive histogram equalization (CLAHE), and the low-pass subimages are processed by mathematical morphology. Finally, the feature- and contrast-enhanced image is reconstructed from the Laplacian Gaussian pyramid coefficients modified at one or more levels by contrast limited adaptive histogram equalization and mathematical morphology, respectively. The enhanced image is then processed by a global nonlinear operator. The experimental results show that the presented algorithm is effective for feature and contrast enhancement of mammograms. The performance of the proposed algorithm is measured by a contrast evaluation criterion for images, the signal-to-noise ratio (SNR), and the contrast improvement index (CII).

  1. Enhanced inertia from lossy effective fluids using multi-scale sonic crystals

    Directory of Open Access Journals (Sweden)

    Matthew D. Guild


    In this work, a recently predicted phenomenon of enhanced permittivity for electromagnetic waves in lossy materials is investigated for the analogous case of mass density and acoustic waves, which represents inertial enhancement. Starting from fundamental relationships for the homogenized quasi-static effective density of a fluid host with fluid inclusions, theoretical expressions are developed for the conditions on the real and imaginary parts of the constitutive fluids required for inertial enhancement, and these are verified with numerical simulations. Realizable structures are designed to demonstrate this phenomenon using multi-scale sonic crystals, which are fabricated with a 3D printer and tested in an acoustic impedance tube, yielding good agreement with the theoretical predictions and demonstrating enhanced inertia.

  2. Fault Detection Enhancement in Rolling Element Bearings via Peak-Based Multiscale Decomposition and Envelope Demodulation

    Directory of Open Access Journals (Sweden)

    Hua-Qing Wang


    Full Text Available Vibration signals of rolling element bearing faults are usually immersed in background noise, which makes the faults difficult to detect. Commonly used wavelet-based methods can reduce some types of noise, but there is still considerable room for improvement because vibration signals are insufficiently sparse in the wavelet domain. In this work, to eliminate noise and enhance weak fault detection, a new peak-based approach combining multiscale decomposition and envelope demodulation is developed. First, to preserve effective middle- and low-frequency signals while making high-frequency noise more conspicuous, a peak-based piecewise recombination is utilized to convert middle-frequency components into low-frequency ones. The newly generated signal is smoother and therefore has a sparser representation in the wavelet domain. A noise threshold is then applied after wavelet multiscale decomposition, followed by the inverse wavelet transform and the backward peak-based piecewise transform. Finally, the amplitude at the fault characteristic frequency is enhanced by means of envelope demodulation. The effectiveness of the proposed method is validated by rolling bearing fault experiments. Compared with traditional wavelet-based analysis, experimental results show that fault features can be significantly enhanced and easily detected by the proposed method.
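
    The envelope-demodulation step can be illustrated on a synthetic bearing signal; the sampling rate, resonance frequency, and fault characteristic frequency below are hypothetical values chosen for the sketch, not taken from the paper's experiments:

```python
import numpy as np
from scipy.signal import hilbert

# Synthetic bearing signal (hypothetical values): impacts at the fault
# characteristic frequency excite a high-frequency structural resonance.
fs, f_fault, f_res = 20000, 107.0, 3000.0
t = np.arange(0, 1.0, 1 / fs)
impacts = (np.sin(2 * np.pi * f_fault * t) > 0.999).astype(float)
ringing = np.exp(-400 * t[:200])                 # decaying transient kernel
x = np.convolve(impacts, ringing, mode="same") * np.sin(2 * np.pi * f_res * t)
x += 0.1 * np.random.default_rng(1).standard_normal(len(t))

# Envelope demodulation: analytic signal -> magnitude -> spectrum of envelope
env = np.abs(hilbert(x))
spec = np.abs(np.fft.rfft(env - env.mean()))
freqs = np.fft.rfftfreq(len(env), 1 / fs)
peak = freqs[np.argmax(spec[freqs < 500])]       # dominant line below 500 Hz
print("dominant envelope frequency: %.0f Hz" % peak)
```

    The envelope spectrum reveals the low-frequency impact rate even though the raw spectrum is dominated by the resonance carrier, which is why demodulation is the standard final step in bearing diagnostics.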

  3. Enhanced method for multiscale wind simulations over complex terrain for wind resource assessment (United States)

    Flores-Maradiaga, A.; Benoit, R.; Masson, C.


    Due to the natural variability of the wind, thorough wind resource assessments are necessary to determine how much energy can be extracted at a given site. Recently, important advances have been made in the numerical methods of multiscale models used for high-resolution wind simulations over steep topography. As a contribution to this effort, an enhanced numerical method was devised in the mesoscale compressible community (MC2) model of the Meteorological Service of Canada, adapting a new semi-implicit scheme with its embedded large-eddy simulation (LES) capability for mountainous terrain. This implementation was verified by simulating the neutrally stratified atmospheric boundary layer (ABL) over flat terrain and over a Gaussian ridge. These preliminary results indicate that the enhanced MC2-LES model efficiently reproduces the results reported by other researchers who use similar models with more sophisticated subgrid-scale turbulence schemes. The proposed multiscale method also provides a new wind initialization scheme and additional utilities that improve numerical accuracy and stability. The resulting model can be used to assess the wind resource at meso- and microscales, significantly reducing the wind-speed overestimation in mountainous areas.

  4. Automatic image equalization and contrast enhancement using Gaussian mixture modeling. (United States)

    Celik, Turgay; Tjahjadi, Tardi


    In this paper, we propose an adaptive image equalization algorithm that automatically enhances the contrast in an input image. The algorithm uses the Gaussian mixture model to model the image gray-level distribution, and the intersection points of the Gaussian components in the model are used to partition the dynamic range of the image into input gray-level intervals. The contrast-equalized image is generated by transforming the pixels' gray levels in each input interval to the appropriate output gray-level interval according to the dominant Gaussian component and the cumulative distribution function of the input interval. To account for the hypothesis that homogeneous regions in the image correspond to homogeneous modes (or sets of Gaussian components) in the image histogram, the Gaussian components with small variances are weighted with smaller values than those with larger variances, and the gray-level distribution is also used to weight the components in the mapping of the input interval to the output interval. Experimental results show that the proposed algorithm produces enhanced images that are better than or comparable to those of several state-of-the-art algorithms. Unlike the other algorithms, the proposed algorithm is free of parameter setting for a given dynamic range of the enhanced image and can be applied to a wide range of image types.
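
    The core idea of partitioning the dynamic range at the intersection points of fitted Gaussian components can be sketched with a minimal two-component EM fit; the initialization and the synthetic data are illustrative assumptions, not the authors' procedure:

```python
import numpy as np

def em_gmm_1d(x, iters=100):
    """Minimal EM for a 2-component 1-D Gaussian mixture (a sketch only;
    the paper's fitting procedure may differ)."""
    mu = np.percentile(x, [25, 75])          # deterministic, well-separated init
    var = np.full(2, x.var())
    w = np.full(2, 0.5)
    for _ in range(iters):
        dens = np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
        resp = w * dens
        resp /= resp.sum(axis=1, keepdims=True)          # E-step
        n = resp.sum(axis=0)                             # M-step
        w, mu = n / len(x), (resp * x[:, None]).sum(axis=0) / n
        var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / n
    return w, mu, var

# Synthetic bimodal gray-level population (dark and bright regions)
rng = np.random.default_rng(2)
x = np.concatenate([rng.normal(60, 10, 5000), rng.normal(180, 15, 5000)])
w, mu, var = em_gmm_1d(x)

# The crossing point of the two weighted Gaussians partitions the dynamic range
g = np.linspace(0, 255, 2561)
pdf = w * np.exp(-0.5 * (g[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
lo, hi = np.sort(mu)
mask = (g > lo) & (g < hi)
split = g[mask][np.argmin(np.abs(pdf[mask, 0] - pdf[mask, 1]))]
print("estimated means:", np.round(mu, 1), "split gray level:", split)
```

    Each resulting interval would then get its own output range and mapping, weighted by the dominant component as the abstract describes.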

  5. Plasmonic amplifiers: engineering giant light enhancements by tuning resonances in multiscale plasmonic nanostructures. (United States)

    Chen, Aiqing; Miller, Ryan L; DePrince, A Eugene; Joshi-Imre, Alexandra; Shevchenko, Elena; Ocola, Leonidas E; Gray, Stephen K; Welp, Ulrich; Vlasko-Vlasov, Vitalii K


    The unique ability of plasmonic nanostructures to guide, enhance, and manipulate subwavelength light offers multiple novel applications in chemical and biological sensing, imaging, and photonic microcircuitry. Here the reproducible, giant light amplification in multiscale plasmonic structures is demonstrated. These structures combine strongly coupled components of different dimensions and topologies that resonate at the same optical frequency. A light amplifier is constructed using a silver mirror carrying light-enhancing surface plasmons, dielectric gratings forming distributed Bragg cavities on top of the mirror, and gold nanoparticle arrays self-assembled into the grating grooves. By tuning the resonances of the individual components to the same frequency, multiple enhancement of the light intensity in the nanometer gaps between the particles is achieved. Using a monolayer of benzenethiol molecules on this structure, an average SERS enhancement factor ∼10⁸ is obtained, and the maximum enhancement in the interparticle hot-spots is ∼3 × 10¹⁰, in good agreement with FDTD calculations. The high enhancement factor, large density of well-ordered hot-spots, and good fidelity of the SERS signal make this design a promising platform for quantitative SERS sensing, optical detection, efficient solid state lighting, advanced photovoltaics, and other emerging photonic applications.

  6. A Multiscale Study on the Penetration Enhancement Mechanism of Menthol to Osthole. (United States)

    Yang, Shufang; Wang, Ran; Wan, Guang; Wu, Zhimin; Guo, Shujuan; Dai, Xingxing; Shi, Xinyuan; Qiao, Yanjiang


    Menthol is a widely used penetration enhancer in clinical medicine due to its high efficiency and relative safety. However, the details of its penetration enhancement mechanism at the molecular level are rarely discussed. In this work, the penetration enhancement (PE) mechanism of menthol is explored by a multiscale method combining molecular dynamics simulations, in vitro penetration experiments, and transmission electron microscopy. Osthole is chosen as the test drug because of its common use in external preparations, in which it often accompanies menthol as a PE. The results show that menthol at each tested concentration can impair the lipid packing of the stratum corneum (SC) and promote osthole permeation into the SC, and that the penetration-promoting effect has an optimal concentration. At a low concentration, menthol causes the bilayer to relax, with a reduction in thickness and an increase in the lipid headgroup area. At a high concentration, menthol destroys the bilayer structure of the SC and causes the lipids to form a reversed micelle structure. The penetration enhancement mechanism of menthol is thus characterized mainly by disruption of the highly ordered SC lipids at low concentrations and by improved partitioning of drugs into the SC at high concentrations. These results can assist further studies and applications of menthol as a penetration enhancer.

  7. Enhancing Automaticity through Task-Based Language Learning (United States)

    De Ridder, Isabelle; Vangehuchten, Lieve; Gomez, Marta Sesena


    In general terms automaticity could be defined as the subconscious condition wherein "we perform a complex series of tasks very quickly and efficiently, without having to think about the various components and subcomponents of action involved" (DeKeyser 2001: 125). For language learning, Segalowitz (2003) characterised automaticity as a…

  8. Multiscale Modeling of Plasmon-Enhanced Power Conversion Efficiency in Nanostructured Solar Cells. (United States)

    Meng, Lingyi; Yam, ChiYung; Zhang, Yu; Wang, Rulin; Chen, GuanHua


    The unique optical properties of nanometallic structures can be exploited to confine light at subwavelength scales. This excellent light trapping is critical to improve light absorption efficiency in nanoscale photovoltaic devices. Here, we apply a multiscale quantum mechanics/electromagnetics (QM/EM) method to model the current-voltage characteristics and optical properties of plasmonic nanowire-based solar cells. The QM/EM method features a combination of first-principles quantum mechanical treatment of the photoactive component and classical description of electromagnetic environment. The coupled optical-electrical QM/EM simulations demonstrate a dramatic enhancement for power conversion efficiency of nanowire solar cells due to the surface plasmon effect of nanometallic structures. The improvement is attributed to the enhanced scattering of light into the photoactive layer. We further investigate the optimal configuration of the nanostructured solar cell. Our QM/EM simulation result demonstrates that a further increase of internal quantum efficiency can be achieved by scattering light into the n-doped region of the device.

  9. SU-E-J-16: Automatic Image Contrast Enhancement Based On Automatic Parameter Optimization for Radiation Therapy Setup Verification

    International Nuclear Information System (INIS)

    Qiu, J; Li, H. Harold; Zhang, T; Yang, D; Ma, F


    Purpose: In 2D RT patient setup images, tissues often cannot be seen well due to the lack of image contrast. Contrast enhancement features provided by image reviewing software, e.g. Mosaiq and ARIA, require manual selection of the image processing filters and parameters, which is inefficient and cannot be automated. In this work, we developed a novel method to automatically enhance 2D RT image contrast to allow automatic verification of patient daily setups as a prerequisite step of automatic patient safety assurance. Methods: The new method is based on contrast limited adaptive histogram equalization (CLAHE) and high-pass filtering algorithms. The most important innovation is the automatic selection of the optimal parameters by optimizing the image contrast. The image processing procedure includes the following steps: 1) background and noise removal, 2) high-pass filtering by subtracting the Gaussian-smoothed result, and 3) histogram equalization using the CLAHE algorithm. Three parameters were determined through an iterative optimization based on the interior-point constrained optimization algorithm: the Gaussian smoothing weighting factor, and the CLAHE block size and clip limit. The goal of the optimization is to maximize the entropy of the processed result. Results: A total of 42 RT images were processed. The results were visually evaluated by RT physicians and physicists. About 48% of the images processed by the new method were ranked as excellent. In comparison, only 29% and 18% of the images processed by the basic CLAHE algorithm and by basic window-level adjustment, respectively, were ranked as excellent. Conclusion: This new image contrast enhancement method is robust and automatic, and significantly outperforms the basic CLAHE algorithm and the manual window-level adjustment process currently used in clinical 2D image review software tools.
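
    The key innovation, choosing enhancement parameters automatically by maximizing the entropy of the processed result, can be sketched as follows; a simple unsharp mask stands in for the paper's CLAHE pipeline and a grid search stands in for the interior-point optimizer, so all parameters here are illustrative:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def entropy(img, bins=64):
    """Shannon entropy (bits) of the gray-level histogram."""
    p, _ = np.histogram(img, bins=bins, range=(0.0, 1.0))
    p = p[p > 0] / p.sum()
    return float(-(p * np.log2(p)).sum())

def enhance(img, weight, sigma):
    """Unsharp-mask stand-in for the paper's high-pass + CLAHE pipeline."""
    return np.clip(img + weight * (img - gaussian_filter(img, sigma)), 0.0, 1.0)

# Low-contrast synthetic test image
rng = np.random.default_rng(3)
img = np.clip((gaussian_filter(rng.random((128, 128)), 3) - 0.5) * 4 + 0.5, 0.0, 1.0)

# Automatic parameter selection: maximize output entropy over a small grid
grid = [(w, s) for w in (0.0, 0.5, 1.0, 2.0, 4.0) for s in (1.0, 2.0, 4.0)]
best = max(grid, key=lambda ws: entropy(enhance(img, *ws)))
print("selected (weight, sigma):", best)
```

    The objective (output entropy) is the same as in the abstract; only the enhancement operator and the search strategy are simplified here.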

  10. Restrained eaters show enhanced automatic approach tendencies towards food

    NARCIS (Netherlands)

    Veenstra, Esther M.; de Jong, Peter J.

    Although restrained eaters intend to limit their caloric intake, they nevertheless frequently fail and indulge in exactly the foods they want to avoid. Because automatic food-relevant approach tendencies and affective associations may both (independently) contribute to the dysregulation of food

  11. An enhanced model for automatically extracting topic phrase from ...

    African Journals Online (AJOL)

    The key benefit foreseen from this automatic document classification is not only related to search engines, but also to many other fields like, document organization, text filtering and semantic index managing. Key words: Keyphrase extraction, machine learning, search engine snippet, document classification, topic tracking ...

  12. Automatic Ship Detection in Remote Sensing Images from Google Earth of Complex Scenes Based on Multiscale Rotation Dense Feature Pyramid Networks

    Directory of Open Access Journals (Sweden)

    Xue Yang


    Full Text Available Ship detection has long played a significant role in the field of remote sensing, but it is still full of challenges. The main limitations of traditional ship detection methods usually lie in the complexity of application scenarios, the difficulty of dense object detection, and the redundancy of the detection region. To solve these problems, we propose a framework called Rotation Dense Feature Pyramid Networks (R-DFPN), which can effectively detect ships in different scenes, including ocean and port. Specifically, we put forward the Dense Feature Pyramid Network (DFPN), which is aimed at solving problems resulting from the narrow width of the ship. Compared with previous multiscale detectors such as Feature Pyramid Network (FPN), DFPN builds high-level semantic feature maps for all scales by means of dense connections, through which feature propagation is enhanced and feature reuse is encouraged. Additionally, for the case of ship rotation and dense arrangement, we design a rotation anchor strategy to predict the minimum circumscribed rectangle of the object so as to reduce the redundant detection region and improve the recall. Furthermore, we propose multiscale region of interest (ROI) Align to maintain the completeness of semantic and spatial information. Experiments on remote sensing images from Google Earth for ship detection show that our detection method based on the R-DFPN representation achieves state-of-the-art performance.

  13. Spatio-Temporal Super-Resolution Reconstruction of Remote-Sensing Images Based on Adaptive Multi-Scale Detail Enhancement (United States)

    Zhu, Hong; Tang, Xinming; Xie, Junfeng; Song, Weidong; Mo, Fan; Gao, Xiaoming


    There are many problems in existing reconstruction-based super-resolution algorithms, such as the lack of texture-feature representation and of high-frequency details. Multi-scale detail enhancement can produce more texture information and high-frequency information. Therefore, super-resolution reconstruction of remote-sensing images based on adaptive multi-scale detail enhancement (AMDE-SR) is proposed in this paper. First, the information entropy of each remote-sensing image is calculated, and the image with the maximum entropy value is regarded as the reference image. Subsequently, spatio-temporal remote-sensing images are processed using phase normalization, which reduces the time-phase difference of the image data and enhances the complementarity of information. The multi-scale image information is then decomposed using the L0 gradient minimization model, and the non-redundant information is processed by difference calculation and by expanding the non-redundant layers and the redundant layer with the iterative back-projection (IBP) technique. The non-redundant information at different scales is adaptively weighted and fused using cross-entropy. Finally, a nonlinear texture-detail-enhancement function is built to improve the scope of small details, with the peak signal-to-noise ratio (PSNR) used as an iterative constraint. Ultimately, high-resolution remote-sensing images with abundant texture information are obtained by iterative optimization. Real-data results show an average entropy gain of up to 0.42 dB and a significant gain in the enhancement-measure evaluation for an up-scaling factor of 2. The experimental results show that the performance of the AMDE-SR method is better than that of existing super-resolution reconstruction methods in terms of visual quality and accuracy. PMID:29414893
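
    The iterative back-projection (IBP) step mentioned above is a classic technique that can be sketched independently of the AMDE-SR pipeline; the blur model, scale factor, and step size below are illustrative assumptions:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def iterative_back_projection(lr, scale=2, iters=20, blur=1.0, step=0.5):
    """Classic IBP sketch: refine an upsampled estimate until its simulated
    low-resolution version matches the observed LR image."""
    hr = zoom(lr, scale, order=1)                        # initial HR estimate
    for _ in range(iters):
        simulated = zoom(gaussian_filter(hr, blur), 1.0 / scale, order=1)
        err_lr = lr - simulated                          # LR-domain residual
        hr += step * gaussian_filter(zoom(err_lr, scale, order=1), blur)
    return hr

def lr_residual(hr, lr, blur=1.0, scale=2):
    """Mean absolute mismatch between a HR estimate's simulated LR and the data."""
    return np.abs(lr - zoom(gaussian_filter(hr, blur), 1.0 / scale, order=1)).mean()

rng = np.random.default_rng(4)
truth = gaussian_filter(rng.random((64, 64)), 2)
lr = zoom(gaussian_filter(truth, 1.0), 0.5, order=1)     # simulated degradation
hr0 = zoom(lr, 2, order=1)
hr = iterative_back_projection(lr)
print("residual before/after:", lr_residual(hr0, lr), lr_residual(hr, lr))
```

    Each iteration back-projects the low-resolution residual into the high-resolution estimate, so consistency with the observed image improves monotonically for a mild blur model.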

  14. Spatio-Temporal Super-Resolution Reconstruction of Remote-Sensing Images Based on Adaptive Multi-Scale Detail Enhancement. (United States)

    Zhu, Hong; Tang, Xinming; Xie, Junfeng; Song, Weidong; Mo, Fan; Gao, Xiaoming


    There are many problems in existing reconstruction-based super-resolution algorithms, such as the lack of texture-feature representation and of high-frequency details. Multi-scale detail enhancement can produce more texture information and high-frequency information. Therefore, super-resolution reconstruction of remote-sensing images based on adaptive multi-scale detail enhancement (AMDE-SR) is proposed in this paper. First, the information entropy of each remote-sensing image is calculated, and the image with the maximum entropy value is regarded as the reference image. Subsequently, spatio-temporal remote-sensing images are processed using phase normalization, which reduces the time-phase difference of the image data and enhances the complementarity of information. The multi-scale image information is then decomposed using the L0 gradient minimization model, and the non-redundant information is processed by difference calculation and by expanding the non-redundant layers and the redundant layer with the iterative back-projection (IBP) technique. The non-redundant information at different scales is adaptively weighted and fused using cross-entropy. Finally, a nonlinear texture-detail-enhancement function is built to improve the scope of small details, with the peak signal-to-noise ratio (PSNR) used as an iterative constraint. Ultimately, high-resolution remote-sensing images with abundant texture information are obtained by iterative optimization. Real-data results show an average entropy gain of up to 0.42 dB and a significant gain in the enhancement-measure evaluation for an up-scaling factor of 2. The experimental results show that the performance of the AMDE-SR method is better than that of existing super-resolution reconstruction methods in terms of visual quality and accuracy.

  15. Computer-aided detection of microcalcification clusters on full-field digital mammograms: multiscale pyramid enhancement and false positive reduction using an artificial neural network (United States)

    Ge, Jun; Wei, Jun; Hadjiiski, Lubomir M.; Sahiner, Berkman; Chan, Heang-Ping; Helvie, Mark A.; Zhou, Chuan


    We are developing a computer-aided detection (CAD) system to detect microcalcification clusters automatically on full-field digital mammograms (FFDMs). The CAD system includes five stages: preprocessing; image enhancement and/or box-rim filtering; segmentation of microcalcification candidates; false positive (FP) reduction; and clustering. In this study, we investigated the performance of a nonlinear multiscale Laplacian pyramid enhancement method in comparison with a box-rim filter at the image enhancement stage, and the use of a new error metric to improve the efficiency and robustness of training a convolution neural network (CNN) at the FP reduction stage of our CAD system. A data set of 96 cases with 200 images was collected at the University of Michigan. This data set contained 215 microcalcification clusters, of which 64 were proven by biopsy to be malignant and 151 to be benign. The data set was separated into two independent subsets: one was used to train and validate the CNN, and the other to evaluate detection performance. For this data set, Laplacian pyramid multiscale enhancement did not improve the performance of the microcalcification detection system in comparison with our box-rim filter previously optimized for digitized screen-film mammograms. With the new error metric, the training of the CNN could be accelerated and the classification performance in validation improved from an Az value of 0.94 to 0.97 on average. The CNN in combination with rule-based classifiers could reduce FPs with a small tradeoff in sensitivity. Using the free-response receiver operating characteristic (FROC) methodology, our CAD system achieved a cluster-based sensitivity of 70%, 80%, and 88% at 0.23, 0.39, and 0.71 FP marks/image, respectively. For case-based evaluation, a sensitivity of 80%, 90%, and 98% was achieved at 0.17, 0.27, and 0.51 FP marks/image, respectively.

  16. Enhanced electromagnetic interference shielding properties of carbon fiber veil/Fe3O4 nanoparticles/epoxy multiscale composites (United States)

    Chen, Wei; Wang, Jun; Zhang, Bin; Wu, Qilei; Su, Xiaogang


    The multiscale approach has been adopted to enhance the electromagnetic interference (EMI) shielding properties of carbon fiber (CF) veil epoxy-based composites. Fe3O4 nanoparticles (NPs) were homogeneously dispersed in the epoxy matrix after surface modification with a silane coupling agent. The CF veil/Fe3O4 NPs/epoxy multiscale composites were manufactured by impregnating the CF veils with the Fe3O4 NPs/epoxy mixture to prepare prepregs, followed by a vacuum bagging process. The EMI shielding properties, together with the complex permittivity and complex permeability of the composites, were investigated in the X-band (8.2–12.4 GHz) range. The total shielding effectiveness (SET) increases with increasing Fe3O4 NP loading, reaching a maximum SET of 51.5 dB at a low thickness of 1 mm. The incorporation of Fe3O4 NPs enhances the complex permittivity and complex permeability, and thus the electromagnetic wave absorption capability. The increase in SET, dominated by the absorption loss SEA, is attributed to the enhanced magnetic and dielectric losses generated by the Fe3O4 NPs and the multilayer construction of the composites. The microwave conductivity increases and the skin depth decreases with increasing Fe3O4 NP loading.
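
    For context, the reported quantities SET, SER (reflection), and SEA (absorption) are conventionally computed from measured scattering parameters; a minimal sketch with illustrative (not measured) values:

```python
import numpy as np

def shielding_effectiveness(S11, S21):
    """Total, reflection, and absorption shielding effectiveness (dB) from
    scattering parameters, per the standard shielding-measurement relations."""
    T = np.abs(S21) ** 2          # transmitted power fraction
    R = np.abs(S11) ** 2          # reflected power fraction
    SE_T = -10 * np.log10(T)
    SE_R = -10 * np.log10(1 - R)
    SE_A = SE_T - SE_R            # absorption accounts for the remainder
    return SE_T, SE_R, SE_A

# Illustrative (hypothetical) X-band values, not the paper's data
SE_T, SE_R, SE_A = shielding_effectiveness(S11=0.6, S21=0.005)
print("SE_T=%.1f dB, SE_R=%.1f dB, SE_A=%.1f dB" % (SE_T, SE_R, SE_A))
```

    This decomposition is how a statement like "SET dominated by absorption loss SEA" is quantified from vector network analyzer data.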

  17. Automatic Adjustment of Keyboard Settings Can Enhance Typing. (United States)

    Koester, Heidi Horstmann; Mankowski, Jennifer


    We developed and evaluated a software tool for the automatic configuration of Windows keyboard settings. The software is intended to accommodate the needs of people with physical impairments, with a goal of improved productivity and comfort during typing. The prototype software, called AutoIDA, monitors user activity during performance of regular computer tasks and recommends the Sticky Keys and key repeat settings to meet the user's specific needs. The evaluation study included fourteen individuals with upper extremity impairments. AutoIDA recommended changes to the default keyboard settings for 10 of the 14 participants. For these individuals, average typing speed was essentially the same whether users typed with the default keyboard settings (5.5 wpm) or the AutoIDA-recommended settings (5.3 wpm). Average typing errors decreased with use of the recommended settings, from 17.6% to 13.3%, but this was not quite statistically significant (p = .10). On an individual basis, four participants appeared to improve their overall typing performance with AutoIDA-recommended settings. For more specific metrics, AutoIDA prevented about 90% of inadvertent key repeats (with a revised algorithm) and increased the efficiency and accuracy of entering modified (shifted) characters. Participants agreed that software like AutoIDA would be useful to them (average rating 4.1, where 5 = strongly agree).

  18. Automatic Attribute Threshold Selection for Blood Vessel Enhancement

    NARCIS (Netherlands)

    Kiwanuka, Fred N.; Wilkinson, Michael H.F.


    Attribute filters allow enhancement and extraction of features without distorting their borders, and never introduce new image features. These are highly desirable properties in biomedical imaging, where accurate shape analysis is paramount. However, setting the attribute-threshold parameters has to

  19. Automatic switching between noise classification and speech enhancement for hearing aid devices. (United States)

    Saki, Fatemeh; Kehtarnavaz, Nasser


    This paper presents a voice activity detector (VAD) for automatic switching between a noise classifier and a speech enhancer as part of the signal processing pipeline of hearing aid devices. The developed VAD consists of a computationally efficient feature extractor and a random forest classifier. Previously used signal features as well as two newly introduced signal features are extracted and fed into the classifier to perform automatic switching. This switching approach is compared to two popular VADs. The results obtained indicate the introduced approach outperforms these existing approaches in terms of both detection rate and processing time.
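
    The switching logic can be sketched with two classic frame-level features; the features, thresholds, and signals below are illustrative stand-ins (a simple energy threshold replaces the random forest classifier):

```python
import numpy as np

def frame_features(x, frame=256):
    """Two classic frame-level features (illustrative stand-ins for the
    paper's feature set): log energy and zero-crossing rate."""
    n = len(x) // frame
    frames = x[: n * frame].reshape(n, frame)
    log_e = np.log10((frames ** 2).mean(axis=1) + 1e-12)
    zcr = (np.diff(np.sign(frames), axis=1) != 0).mean(axis=1)
    return np.column_stack([log_e, zcr])

# Simulated switching: speech-like frames route to the enhancer branch,
# noise-only frames to the noise-classifier branch.
rng = np.random.default_rng(8)
noise = 0.05 * rng.standard_normal(4096)
t = np.arange(4096) / 8000.0
speech_like = np.sin(2 * np.pi * 220 * t) * (0.5 + 0.5 * np.sin(2 * np.pi * 3 * t))
feats_noise = frame_features(noise)
feats_speech = frame_features(speech_like)
threshold = -2.0                                   # illustrative energy threshold
route = lambda f: "enhancer" if f[0] > threshold else "classifier"
print(route(feats_speech[0]), route(feats_noise[0]))
```

    In the paper this per-frame decision is made by a trained random forest over a richer feature set; the branch structure is the same.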

  20. Automatic exact histogram specification for contrast enhancement and visual system based quantitative evaluation. (United States)

    Sen, Debashis; Pal, Sankar K


    Histogram equalization, which aims at information maximization, is widely used in different ways to perform contrast enhancement in images. In this paper, an automatic exact histogram specification technique is proposed and used for global and local contrast enhancement of images. The desired histogram is obtained by first subjecting the image histogram to a modification process and then maximizing a measure that represents the increase in information and the decrease in ambiguity. A new method of measuring image contrast, based on a local band-limited approach and a center-surround retinal receptive field model, is also devised in this paper. This method works at multiple scales (frequency bands) and combines the contrast measures obtained at different scales using the L(p)-norm. In comparison with a few existing methods, the effectiveness of the proposed automatic exact histogram specification technique in enhancing image contrast is demonstrated through qualitative analysis and through quantitative analysis based on the proposed image contrast measure.
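
    The exact-specification core, ranking every pixel and handing out gray levels according to the target histogram, can be sketched as follows; simple pixel-order tie-breaking stands in for the paper's band-limited strict ordering:

```python
import numpy as np

def exact_histogram_specification(img, target_hist):
    """Assign gray levels so the output histogram matches `target_hist`
    exactly. Ties among equal pixels are broken by array order here; the
    paper derives a more principled strict ordering."""
    flat = img.ravel()
    order = np.argsort(flat, kind="stable")            # pixel ranking
    levels = np.repeat(np.arange(len(target_hist)), target_hist)
    out = np.empty_like(flat)
    out[order] = levels                                # darkest -> lowest levels
    return out.reshape(img.shape)

rng = np.random.default_rng(5)
img = rng.integers(0, 256, (32, 32))
# Target: uniform histogram (histogram equalization as a special case)
target = np.full(256, img.size // 256)
out = exact_histogram_specification(img, target)
hist, _ = np.histogram(out, bins=256, range=(0, 256))
```

    Unlike classical equalization, which only approximates the target, this assignment matches it bin-for-bin, which is why the tie-breaking rule matters.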

  1. Automatic Side-Scan Sonar Image Enhancement in Curvelet Transform Domain

    Directory of Open Access Journals (Sweden)

    Yan Zhou


    Full Text Available We propose a novel automatic side-scan sonar image enhancement algorithm based on curvelet transform. The proposed algorithm uses the curvelet transform to construct a multichannel enhancement structure based on human visual system (HVS and adopts a new adaptive nonlinear mapping scheme to modify the curvelet transform coefficients in each channel independently and automatically. Firstly, the noisy and low-contrast sonar image is decomposed into a low frequency channel and a series of high frequency channels by using curvelet transform. Secondly, a new nonlinear mapping scheme, which coincides with the logarithmic nonlinear enhancement characteristic of the HVS perception, is designed without any parameter tuning to adjust the curvelet transform coefficients in each channel. Finally, the enhanced image can be reconstructed with the modified coefficients via inverse curvelet transform. The enhancement is achieved by amplifying subtle features, improving contrast, and eliminating noise simultaneously. Experiment results show that the proposed algorithm produces better enhanced results than state-of-the-art algorithms.

  2. Automatic detection of arterial input function in dynamic contrast enhanced MRI based on affinity propagation clustering. (United States)

    Shi, Lin; Wang, Defeng; Liu, Wen; Fang, Kui; Wang, Yi-Xiang J; Huang, Wenhua; King, Ann D; Heng, Pheng Ann; Ahuja, Anil T


    This study aims to detect the arterial input function (AIF) automatically and robustly, with high detection accuracy and low computational cost, in dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI). We developed an automatic AIF detection method using an accelerated version (Fast-AP) of affinity propagation (AP) clustering. The validity of this Fast-AP-based method was demonstrated on two DCE-MRI datasets, i.e., rat kidney and human head and neck. The AIF detection performance of the proposed method was assessed in comparison with other clustering-based methods, namely original AP and K-means, as well as with the manual AIF detection method. Both the AP- and Fast-AP-based methods achieved satisfactory AIF detection accuracy, but the computational cost of Fast-AP was reduced by 64.37-92.10% on the rat dataset and by 73.18-90.18% on the human dataset compared with that of AP. K-means yielded the lowest computational cost but also the lowest AIF detection accuracy. The experimental results demonstrated that both the AP- and Fast-AP-based methods were insensitive to the initialization of cluster centers and more robust than the K-means method. The Fast-AP-based method thus enables automatic AIF detection with high accuracy and efficiency.

  3. Automatic segmentation of canine retinal OCT using adaptive gradient enhancement and region growing (United States)

    He, Yufan; Sun, Yankui; Chen, Min; Zheng, Yuanjie; Liu, Hui; Leon, Cecilia; Beltran, William; Gee, James C.


    In recent years, several studies have shown that the canine retina model offers important insight for our understanding of human retinal diseases. Several therapies developed to treat blindness in such models have already moved onto human clinical trials, with more currently under development [1]. Optical coherence tomography (OCT) offers a high resolution imaging modality for performing in-vivo analysis of the retinal layers. However, existing algorithms for automatically segmenting and analyzing such data have been mostly focused on the human retina. As a result, canine retinal images are often still being analyzed using manual segmentations, which is a slow and laborious task. In this work, we propose a method for automatically segmenting 5 boundaries in canine retinal OCT. The algorithm employs the position relationships between different boundaries to adaptively enhance the gradient map. A region growing algorithm is then used on the enhanced gradient maps to find the five boundaries separately. The automatic segmentation was compared against manual segmentations showing an average absolute error of 5.82 +/- 4.02 microns.

  4. Design and FPGA implementation of real-time automatic image enhancement algorithm (United States)

    Dong, GuoWei; Hou, ZuoXun; Tang, Qi; Pan, Zheng; Li, Xin


    In order to improve image processing quality and boost the processing rate, this paper proposes a real-time automatic image enhancement algorithm. It is based on the histogram equalization algorithm and the piecewise linear enhancement algorithm: the relationship between the histogram and the piecewise linear function is calculated by analyzing the histogram distribution, yielding adaptive image enhancement. Corresponding FPGA processing modules are designed to implement the method; in particular, high-performance parallel pipelining and the inherent parallelism of the modules are exploited to ensure real-time processing by the complete system. Simulations and experiments show that the FPGA hardware implementation of the algorithm has low hardware cost, high real-time performance, and good processing performance in different scenes. The algorithm can effectively improve image quality and has broad prospects in the image processing field.

  5. Quadrant Dynamic with Automatic Plateau Limit Histogram Equalization for Image Enhancement

    Directory of Open Access Journals (Sweden)

    P. Jagatheeswari


    The fundamental and important preprocessing stage in image processing is the image contrast enhancement technique. Histogram equalization is an effective contrast enhancement technique. In this paper, a histogram equalization based technique called quadrant dynamic with automatic plateau limit histogram equalization (QDAPLHE) is introduced. In this method, a hybrid of dynamic and clipped histogram equalization methods is used to increase brightness preservation and to reduce over-enhancement. Initially, the proposed QDAPLHE algorithm passes the input image through a median filter to remove noise present in the image. Then the histogram of the filtered image is divided into four sub-histograms while maintaining the second separation point at the mean brightness. Then the clipping process is implemented by automatically calculating the plateau limit as the clipping level. The clipped portion of the histogram is modified to reduce the loss of image intensity values. Finally the clipped portion is redistributed uniformly to the entire dynamic range and conventional histogram equalization is executed in each sub-histogram independently. Based on qualitative and quantitative analysis, the QDAPLHE method outperforms some existing methods in the literature.
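    The clipping-and-redistribution core of such plateau-limited equalization can be sketched as follows. This is a generic clipped histogram equalization, not QDAPLHE itself: the quadrant splitting and median prefilter are omitted, and the mean-of-nonempty-bins plateau used here is an illustrative assumption, not the paper's exact rule.

```python
import numpy as np

def plateau_clipped_equalize(img, levels=256):
    """Clipped histogram equalization: clip the histogram at an
    automatically chosen plateau (here the mean of non-empty bins),
    redistribute the excess uniformly, then equalize as usual."""
    hist = np.bincount(img.ravel(), minlength=levels).astype(float)
    plateau = hist[hist > 0].mean()            # automatic plateau limit
    excess = np.maximum(hist - plateau, 0).sum()
    hist = np.minimum(hist, plateau) + excess / levels
    cdf = hist.cumsum() / hist.sum()
    lut = np.round(cdf * (levels - 1)).astype(img.dtype)
    return lut[img]

img = np.full((8, 8), 10, dtype=np.uint8)      # dominant grey level
img[0, :4] = 200                               # small bright detail
out = plateau_clipped_equalize(img, levels=256)
```

    Clipping the dominant bin keeps the background from being stretched to the extreme of the range, which is how the method limits over-enhancement while the bright detail still maps above it.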

  6. Multi-scale Morphological Image Enhancement of Chest Radiographs by a Hybrid Scheme. (United States)

    Alavijeh, Fatemeh Shahsavari; Mahdavi-Nasab, Homayoun


    Chest radiography is a common diagnostic imaging test which contains an enormous amount of information about a patient. However, its interpretation is highly challenging. The accuracy of the diagnostic process is greatly influenced by image processing algorithms; hence enhancement of the images is indispensable to improve visibility of the details. This paper aims at improving radiograph parameters such as contrast, sharpness, noise level, and brightness to enhance chest radiographs, making use of a triangulation method. Here, the contrast limited adaptive histogram equalization technique and noise suppression are simultaneously performed in the wavelet domain in a new scheme, followed by morphological top-hat and bottom-hat filtering. A unique implementation of morphological filters allows adjustment of the image brightness and significant enhancement of the contrast. The proposed method is tested on chest radiographs from the Japanese Society of Radiological Technology database. The results are compared with conventional enhancement techniques such as histogram equalization, contrast limited adaptive histogram equalization, Retinex, and some recently proposed methods to show its strengths. The experimental results reveal that the proposed method can remarkably improve the image contrast while keeping the sensitive chest tissue information, so that radiologists may have a more precise interpretation.
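    The top-hat/bottom-hat stage can be sketched with SciPy's grey-scale morphology. This is the standard textbook formulation, not the paper's specific implementation; the structuring-element size is an arbitrary choice for the toy image.

```python
import numpy as np
from scipy import ndimage

def tophat_enhance(img, size=15):
    """Morphological contrast enhancement: add the white top-hat
    (bright details) and subtract the black top-hat (dark details)."""
    img = img.astype(float)
    bright = ndimage.white_tophat(img, size=size)
    dark = ndimage.black_tophat(img, size=size)
    return np.clip(img + bright - dark, 0, 255)

# toy radiograph: flat background with one bright and one dark detail
img = np.full((32, 32), 128.0)
img[8, 8] = 180.0      # bright detail
img[20, 20] = 80.0     # dark detail
out = tophat_enhance(img, size=5)
```

    Bright details smaller than the structuring element are pushed brighter and dark details darker, while the flat background is left untouched, which is the contrast-stretching effect the paper exploits after the wavelet-domain stage.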

  7. Spatial enhancement of ECG using diagnostic similarity score based lead selective multi-scale linear model. (United States)

    Nallikuzhy, Jiss J; Dandapat, S


    In this work, a new patient-specific approach to enhance the spatial resolution of the ECG is proposed and evaluated. The proposed model transforms a three-lead ECG into a standard twelve-lead ECG, thereby enhancing its spatial resolution. The three leads used for prediction are obtained from the standard twelve-lead ECG. The proposed model takes advantage of the improved inter-lead correlation in the wavelet domain. Since the model is patient-specific, it also selects the optimal predictor leads for a given patient using a lead selection algorithm. The lead selection algorithm is based on a new diagnostic similarity score which computes the diagnostic closeness between the original and the spatially enhanced leads. Standard closeness measures are used to assess the performance of the model. The similarity in diagnostic information between the original and the spatially enhanced leads is evaluated using various diagnostic measures. Repeatability and diagnosability analyses are performed to quantify the applicability of the model. A comparison of the proposed model is performed with existing models that transform a subset of the standard twelve-lead ECG into the standard twelve-lead ECG. From the analysis of the results, it is evident that the proposed model preserves diagnostic information better than other models. Copyright © 2017 Elsevier Ltd. All rights reserved.
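    Stripped of the wavelet transform and the lead selection step, the core of such models is a patient-specific linear map from three predictor leads to the twelve standard leads. A least-squares sketch on synthetic data follows; the signals and the mixing matrix are synthetic assumptions, not the paper's model.

```python
import numpy as np

def fit_lead_model(X3, Y12):
    """Least-squares linear map from 3 predictor leads to 12 target
    leads: W = argmin ||X3 W - Y12||_F, fitted per patient."""
    W, *_ = np.linalg.lstsq(X3, Y12, rcond=None)
    return W

rng = np.random.default_rng(0)
n = 500                                 # samples per lead
X3 = rng.standard_normal((n, 3))        # three recorded leads
A = rng.standard_normal((3, 12))        # hypothetical true lead mixing
Y12 = X3 @ A                            # synthetic 12-lead target
W = fit_lead_model(X3, Y12)             # learned on a training record
Y_hat = X3 @ W                          # spatially "enhanced" leads
```

    In the actual method the fit is done on wavelet coefficients and the three predictor leads are themselves chosen by the diagnostic similarity score; the sketch only shows the linear-reconstruction backbone.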

  8. Chest CT window settings with multiscale adaptive histogram equalization: pilot study. (United States)

    Fayad, Laura M; Jin, Yinpeng; Laine, Andrew F; Berkmen, Yahya M; Pearson, Gregory D; Freedman, Benjamin; Van Heertum, Ronald


    Multiscale adaptive histogram equalization (MAHE), a wavelet-based algorithm, was investigated as a method of automatic simultaneous display of the full dynamic contrast range of a computed tomographic image. Interpretation times were significantly lower for MAHE-enhanced images compared with those for conventionally displayed images. Diagnostic accuracy, however, was insufficient in this pilot study to allow recommendation of MAHE as a replacement for conventional window display.

  9. Enhancement of the automatic ultrasonic signal processing system using digital technology

    International Nuclear Information System (INIS)

    Koo, In Soo; Park, H. Y.; Suh, Y. S.; Kim, D. Hoon; Huh, S.; Sung, S. H.; Jang, G. S.; Ryoo, S. G.; Choi, J. H.; Kim, Y. H.; Lee, J. C.; Kim, D. Hyun; Park, H. J.; Kim, Y. C.; Lee, J. P.; Park, C. H.; Kim, M. S.


    The objective of this study is to develop an automatic ultrasonic signal processing system that can be used in inspection equipment to assess the integrity of the reactor vessel, by enhancing the performance of the ultrasonic signal processing system. The main activities of this study were divided into three categories: the development of circuits for generating the ultrasonic signal and receiving the signal from the inspection equipment, the development of the signal processing algorithm and hardware of the data processing system, and the development of the specifications for the application programs and system software of the analysis and evaluation computer. The results of the main activities are as follows: 1) the design of the ultrasonic detector and the automatic ultrasonic signal processing system, based on an investigation of the state-of-the-art technology at home and abroad; 2) the development of the hardware and software of the data processing system based on these results. In particular, the hardware of the data processing system, which has the advantages of both digital and analog controls through real-time digital signal processing, was developed using a DSP that can process digital signals in real time; in addition, both the firmware of the data processing system for the peripherals and the specimen test algorithm for calibration were developed. The application programs and the system software of the analysis/evaluation computer were also developed. The developed equipment was verified by a performance test. Based on the developed prototype of the automatic ultrasonic signal processing system, localization of the overall ultrasonic inspection equipment for nuclear industries is expected through further studies on establishing the hardware for real applications and developing the software specification of the analysis computer. (author)

  10. Automatic fringe enhancement with novel bidimensional sinusoids-assisted empirical mode decomposition. (United States)

    Wang, Chenxing; Kemao, Qian; Da, Feipeng


    Fringe-based optical measurement techniques require reliable fringe analysis methods, among which empirical mode decomposition (EMD) is an outstanding one due to its ability to analyze complex signals and the merit of being data-driven. However, two challenging issues hinder the application of EMD in practical measurement. One is the tricky mode mixing problem (MMP), which leaves the decomposed intrinsic mode functions (IMFs) with equivocal physical meaning; the other is the automatic and accurate extraction of the sinusoidal fringe from the IMFs when unpredictable and unavoidable background and noise exist in real measurements. Accordingly, in this paper, a novel bidimensional sinusoids-assisted EMD (BSEMD) is proposed to decompose a fringe pattern into mono-component bidimensional IMFs (BIMFs), with the MMP solved; properties of the resulting BIMFs are then analyzed to recognize and enhance the useful fringe component. The decomposition and the fringe recognition are integrated, with the latter providing feedback to the former and helping to stop the decomposition automatically, which makes the algorithm simpler and more reliable. A series of experiments shows that the proposed method is accurate, efficient and robust to various fringe patterns, even those with poor quality, rendering it a potential tool for practical use.

  11. Automatic x-ray image contrast enhancement based on parameter auto-optimization. (United States)

    Qiu, Jianfeng; Harold Li, H; Zhang, Tiezhi; Ma, Fangfang; Yang, Deshan


    Insufficient image contrast in radiation therapy daily setup x-ray images can negatively affect accurate patient treatment setup. We developed a method to perform automatic and user-independent contrast enhancement on 2D kilovoltage (kV) and megavoltage (MV) x-ray images. The goal was to provide tissue contrast optimized for each treatment site in order to support accurate patient daily treatment setup and the subsequent offline review. The proposed method processes the 2D x-ray images with an optimized image processing filter chain, which consists of a noise reduction filter and a high-pass filter followed by a contrast limited adaptive histogram equalization (CLAHE) filter. The most important innovation is to optimize the image processing parameters automatically to determine the required image contrast settings per disease site and imaging modality. Three major parameters controlling the image processing chain, i.e., the Gaussian smoothing weighting factor for the high-pass filter, and the block size and clip limiting parameter for the CLAHE filter, were determined automatically using an interior-point constrained optimization algorithm. Fifty-two kV and MV x-ray images were included in this study. The results were manually evaluated and ranked with scores from 1 (worst, unacceptable) to 5 (significantly better than adequate and visually praiseworthy) by physicians and physicists. The average scores for the images processed by the proposed method, CLAHE alone, and the best window-level adjustment were 3.92, 2.83, and 2.27, respectively. The percentages of processed images that received a score of 5 were 48%, 29%, and 18%, respectively. The proposed method is able to outperform the standard image contrast adjustment procedures that are currently used in commercial clinical systems. When the proposed method is implemented in the clinical systems as an automatic image processing filter, it could be useful for allowing quicker and potentially more
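    The three-stage filter chain can be sketched as below. This is a simplified stand-in, not the published pipeline: a global clipped histogram equalization replaces the per-tile CLAHE step, and the parameters are fixed by hand rather than found by the interior-point optimization the paper describes.

```python
import numpy as np
from scipy import ndimage

def enhance_chain(img, sigma_noise=1.0, alpha=0.7, clip=2.0):
    """Sketch of the chain: denoise -> high-pass sharpening
    (Gaussian-weighted unsharp mask) -> clipped histogram equalization
    as a global stand-in for the per-tile CLAHE step.  `alpha` plays
    the role of the Gaussian smoothing weighting factor and `clip` the
    clip limit; the real method tunes such parameters automatically."""
    img = img.astype(float)
    smooth = ndimage.gaussian_filter(img, sigma_noise)     # noise reduction
    blur = ndimage.gaussian_filter(smooth, 3.0)
    sharp = smooth + alpha * (smooth - blur)               # high-pass boost
    sharp = np.clip(sharp, 0, 255).astype(np.uint8)
    hist = np.bincount(sharp.ravel(), minlength=256).astype(float)
    hist = np.minimum(hist, clip * hist.mean())            # clip histogram
    cdf = hist.cumsum() / hist.sum()
    return (cdf[sharp] * 255).astype(np.uint8)

rng = np.random.default_rng(1)
img = rng.normal(120, 10, (64, 64)).clip(0, 255).astype(np.uint8)
out = enhance_chain(img)
```

    Even this crude chain stretches a narrow intensity distribution across the display range; the clinical method additionally searches the parameter space per site and modality so the same chain works for both kV and MV images.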

  12. Multi-scale Control and Enhancement of Reactor Boiling Heat Flux by Reagents and Nanoparticles

    Energy Technology Data Exchange (ETDEWEB)

    Manglik, R M; Athavale, A; Kalaikadal, D S; Deodhar, A; Verma, U


    The phenomenological characterization of the use of non-invasive and passive techniques to enhance the boiling heat transfer in water has been carried out in this extended study. It provides fundamental enhanced heat transfer data for nucleate boiling and discusses the associated physics with the aim of addressing future and next-generation reactor thermal-hydraulic management. It essentially addresses the hypothesis that in phase-change processes during boiling, the primary mechanisms can be related to the liquid-vapor interfacial tension and surface wetting at the solid-liquid interface. These interfacial characteristics can be significantly altered and decoupled by introducing small quantities of additives in water, such as surface-active polymers, surfactants, and nanoparticles. The changes are fundamentally caused at a molecular-scale by the relative bulk molecular dynamics and adsorption-desorption of the additive at the liquid-vapor interface, and its physisorption and electrokinetics at the liquid-solid interface. At the micro-scale, the transient transport mechanisms at the solid-liquid-vapor interface during nucleation and bubble growth can be attributed to thin-film spreading, surface-micro-cavity activation, and micro-layer evaporation. Furthermore at the macro-scale, the heat transport is in turn governed by the bubble growth and distribution, macro-layer heat transfer, bubble dynamics (bubble coalescence, collapse, break-up, and translation), and liquid rheology. Some of these behaviors and processes are measured and characterized in this study, the outcomes of which advance the concomitant fundamental physics, as well as provide insights for developing control strategies for the molecular-scale manipulation of interfacial tension and surface wetting in boiling by means of polymeric reagents, surfactants, and other soluble surface-active additives.

  13. Multi-scale enhancement of climate prediction over land by improving the model sensitivity to vegetation variability (United States)

    Alessandri, A.; Catalano, F.; De Felice, M.; Hurk, B. V. D.; Doblas-Reyes, F. J.; Boussetta, S.; Balsamo, G.; Miller, P. A.


    Here we demonstrate, for the first time, that the implementation of a realistic representation of vegetation in Earth System Models (ESMs) can significantly improve climate simulation and prediction across multiple time-scales. The effective sub-grid vegetation fractional coverage varies seasonally and at interannual time-scales in response to leaf-canopy growth, phenology and senescence. It therefore affects biophysical parameters such as the surface resistance to evapotranspiration, albedo, roughness length, and soil field capacity. To adequately represent this effect in the EC-Earth ESM, we included an exponential dependence of the vegetation cover on the Leaf Area Index. By comparing two sets of simulations performed with and without the new variable fractional-coverage parameterization, spanning from centennial (20th Century) simulations and retrospective predictions to the decadal (5-year), seasonal (2-4 month) and weather (4 day) time-scales, we show for the first time a significant multi-scale enhancement of vegetation impacts in climate simulation and prediction over land. Particularly large effects at multiple time scales are shown over boreal winter middle-to-high latitudes over Canada, the western US, Eastern Europe, Russia and eastern Siberia, due to the implemented time-varying shadowing effect of tree vegetation on snow surfaces. Over Northern Hemisphere boreal forest regions the improved representation of vegetation cover consistently corrects the winter warm biases and improves the climate change sensitivity, the decadal potential predictability, and the skill of forecasts at seasonal and weather time-scales. Significant improvements in the prediction of 2m temperature and rainfall are also shown over transitional land surface hot spots. Both the potential predictability at the decadal time-scale and seasonal-forecast skill are enhanced over the Sahel, the North American Great Plains, Nordeste Brazil and South East Asia, mainly related to improved performance in
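    The "exponential dependence of the vegetation cover on the Leaf Area Index" is commonly written in Lambert-Beer form; a one-line sketch follows. The extinction coefficient k = 0.5 is a typical literature value, an assumption here, not a number taken from the paper.

```python
import math

def veg_cover(lai, k=0.5):
    """Effective fractional vegetation cover as an exponential
    (Lambert-Beer) function of Leaf Area Index; k is a canopy
    extinction coefficient (0.5 is a common assumed value)."""
    return 1.0 - math.exp(-k * lai)

# cover grows quickly for sparse canopies and saturates as they fill in
sparse, dense = veg_cover(0.5), veg_cover(5.0)
```

    Because cover saturates at high LAI, seasonal LAI swings translate into the largest cover (and hence albedo and evapotranspiration) changes over sparsely vegetated regions, consistent with the transitional hot spots emphasized above.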

  14. Knickzone Extraction Tool (KET) - A new ArcGIS toolset for automatic extraction of knickzones from a DEM based on multi-scale stream gradients (United States)

    Zahra, Tuba; Paudel, Uttam; Hayakawa, Yuichi S.; Oguchi, Takashi


    Extraction of knickpoints or knickzones from a Digital Elevation Model (DEM) has gained immense significance owing to the increasing implications of knickzones for landform development. However, existing methods for knickzone extraction tend to be subjective or require time-intensive data processing. This paper describes the proposed Knickzone Extraction Tool (KET), a new raster-based Python script deployed in the form of an ArcGIS toolset that automates the process of knickzone extraction and is both fast and user-friendly. The KET is based on multi-scale analysis of slope gradients along a river course, where any locally steep segment (knickzone) can be extracted as an anomalously high local gradient. We also conduct a comparative analysis of the KET and other contemporary knickzone identification techniques. The relationship between knickzone distribution and morphometric characteristics is also examined through a case study of a mountainous watershed in Japan.
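    The multi-scale gradient-anomaly idea can be sketched on a 1D longitudinal stream profile in pure NumPy. The window sizes and the median + k*MAD anomaly rule below are illustrative assumptions, not the KET's actual criteria (which operate on raster DEM data).

```python
import numpy as np

def knickzones(elev, dist, windows=(5, 11, 21), k=2.0):
    """Flag anomalously steep stream segments: at each window scale,
    compute the local downstream gradient and mark points whose
    gradient exceeds median + k*MAD for that scale; a point flagged
    at any scale becomes a knickzone candidate."""
    flags = np.zeros(len(elev), dtype=bool)
    for w in windows:
        half = w // 2
        grad = np.full(len(elev), np.nan)
        for i in range(half, len(elev) - half):
            drop = elev[i - half] - elev[i + half]   # downstream drop
            grad[i] = drop / (dist[i + half] - dist[i - half])
        g = grad[~np.isnan(grad)]
        med = np.median(g)
        mad = np.median(np.abs(g - med)) + 1e-12
        flags |= np.nan_to_num(grad) > med + k * mad
    return flags

# synthetic profile: uniform gentle slope with one steep step (knickzone)
dist = np.arange(200.0)
elev = 100.0 - 0.125 * dist
elev[100:] -= 5.0                    # 5 m step at x = 100
flags = knickzones(elev, dist)
```

    Combining several window scales is what makes the detection multi-scale: short windows localize sharp knickpoints while long windows pick up broader, gentler knickzones.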

  15. Multi-Scale Computational Enzymology: Enhancing Our Understanding of Enzymatic Catalysis

    Directory of Open Access Journals (Sweden)

    Rami Gherib


    Elucidating the origin of enzymatic catalysis stands as one of the great challenges of contemporary biochemistry and biophysics. The recent emergence of computational enzymology has enhanced our atomistic-level description of biocatalysis, as well as of the kinetic and thermodynamic properties of enzymatic mechanisms. There exists a diversity of computational methods allowing the investigation of specific enzymatic properties. Small or large density functional theory models allow the comparison of a plethora of mechanistic reactive species and divergent catalytic pathways. Molecular docking can model different substrate conformations embedded within enzyme active sites and determine those with optimal binding affinities. Molecular dynamics simulations provide insights into the dynamics and roles of active site components as well as the interactions between substrates and enzymes. Hybrid quantum mechanical/molecular mechanical (QM/MM) methods can model reactions in active sites while considering steric and electrostatic contributions provided by the surrounding environment. Using previous studies done within our group on the OvoA, EgtB, ThrRS, LuxS and MsrA enzymatic systems, we review how these methods can be used either independently or cooperatively to gain insights into enzymatic catalysis.

  16. Multiscale simulations of defect dipole-enhanced electromechanical coupling at dilute defect concentrations (United States)

    Liu, Shi; Cohen, R. E.


    The role of defects in solids of mixed ionic-covalent bonds such as ferroelectric oxides is complex. Current understanding of the influence of defects on ferroelectric properties at the single-defect level remains mostly empirical, and the detailed atomistic mechanisms for many defect-mediated polarization-switching processes have not been convincingly revealed quantum mechanically. We simulate the polarization-electric field (P-E) and strain-electric field (ɛ-E) hysteresis loops for BaTiO3 in the presence of generic defect dipoles with large-scale molecular dynamics and provide a detailed atomistic picture of the defect dipole-enhanced electromechanical coupling. We develop a general first-principles-based atomistic model, enabling a quantitative understanding of the relationship between macroscopic ferroelectric properties and dipolar impurities of different orientations, concentrations, and dipole moments. We find that the collective orientation of dipolar defects relative to the external field is the key microscopic structural feature that strongly affects materials hardening/softening and electromechanical coupling. We show that a small concentration (≈0.1 at. %) of defect dipoles dramatically improves electromechanical responses. This offers the opportunity to improve the performance of inexpensive polycrystalline ferroelectric ceramics through defect dipole engineering for a range of applications including piezoelectric sensors, actuators, and transducers.

  17. Automatic segmentation of myocardium at risk from contrast enhanced SSFP CMR: validation against expert readers and SPECT

    International Nuclear Information System (INIS)

    Tufvesson, Jane; Carlsson, Marcus; Aletras, Anthony H.; Engblom, Henrik; Deux, Jean-François; Koul, Sasha; Sörensson, Peder; Pernow, John; Atar, Dan; Erlinge, David; Arheden, Håkan; Heiberg, Einar


    Efficacy of reperfusion therapy can be assessed as the myocardial salvage index (MSI) by determining the size of the myocardium at risk (MaR) and myocardial infarction (MI) (MSI = 1 - MI/MaR). Cardiovascular magnetic resonance (CMR) can be used to assess MI by late gadolinium enhancement (LGE) and MaR by either T2-weighted imaging or contrast enhanced SSFP (CE-SSFP). Automatic segmentation algorithms have been developed and validated for MI by LGE as well as for MaR by T2-weighted imaging. There are, however, no algorithms available for CE-SSFP. Therefore, the aim of this study was to develop and validate automatic segmentation of MaR in CE-SSFP. The automatic algorithm applies surface coil intensity correction and classifies myocardial intensities by Expectation Maximization to define a MaR region based on a priori regional criteria, and an infarct region from LGE. Automatic segmentation was validated against manual delineation by expert readers in 183 patients with reperfused acute MI from two multi-center randomized clinical trials (RCTs) (CHILL-MI and MITOCARE) and against myocardial perfusion SPECT in an additional set (n = 16). Endocardial and epicardial borders were manually delineated at end-diastole and end-systole. Manual delineation of MaR was used as reference, and inter-observer variability was assessed for both manual delineation and automatic segmentation of MaR in a subset of patients (n = 15). MaR was expressed as percent of left ventricular mass (%LVM) and analyzed by bias (mean ± standard deviation). Regional agreement was analyzed by the Dice Similarity Coefficient (DSC) (mean ± standard deviation). MaR assessed by manual and automatic segmentation was 36 ± 10 %LVM and 37 ± 11 %LVM respectively, with a bias of 1 ± 6 %LVM and regional agreement DSC 0.85 ± 0.08 (n = 183). MaR assessed by SPECT and CE-SSFP automatic segmentation was 27 ± 10 %LVM and 29 ± 7 %LVM respectively, with a bias of 2 ± 7 %LVM. Inter-observer variability was 0 ± 3 %LVM for manual delineation and
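    The Expectation Maximization intensity classification at the core of such algorithms can be sketched as a two-component 1D Gaussian mixture fitted by EM. This is an illustrative stand-in, not the published implementation (which adds surface coil intensity correction and a priori regional criteria); the intensity values below are synthetic.

```python
import numpy as np

def em_two_class(x, iters=50):
    """Two-component 1D Gaussian mixture fitted by Expectation
    Maximization; returns the posterior probability that each sample
    belongs to the brighter (higher-mean) component."""
    mu = np.percentile(x, [25, 75]).astype(float)
    sig = np.array([x.std(), x.std()]) + 1e-6
    pi = np.array([0.5, 0.5])
    for _ in range(iters):
        # E-step: responsibilities under the current parameters
        d = (x[:, None] - mu) / sig
        p = pi * np.exp(-0.5 * d**2) / (sig * np.sqrt(2 * np.pi))
        r = p / p.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, and variances
        n = r.sum(axis=0)
        pi = n / len(x)
        mu = (r * x[:, None]).sum(axis=0) / n
        sig = np.sqrt((r * (x[:, None] - mu)**2).sum(axis=0) / n) + 1e-6
    return r[:, np.argmax(mu)]

rng = np.random.default_rng(2)
x = np.concatenate([rng.normal(40, 5, 300),    # darker remote myocardium
                    rng.normal(90, 8, 100)])   # brighter affected region
post = em_two_class(x)
```

    Thresholding the posterior then labels each myocardial pixel as hyper-enhanced (MaR candidate) or remote, after which the a priori regional criteria constrain the final region.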

  18. A highly versatile automatized setup for quantitative measurements of PHIP enhancements (United States)

    Kiryutin, Alexey S.; Sauer, Grit; Hadjiali, Sara; Yurkovskaya, Alexandra V.; Breitzke, Hergen; Buntkowsky, Gerd


    The design and application of a versatile and inexpensive experimental extension to NMR spectrometers is described that allows highly reproducible PHIP experiments to be carried out directly in the NMR sample tube, i.e. under PASADENA conditions, followed by the detection of the NMR spectra of hyperpolarized products with high spectral resolution. Employing this high resolution, it is feasible to study kinetic processes in solution with high accuracy. As a practical example, the dissolution of hydrogen gas in the liquid and the PHIP kinetics during the hydrogenation reaction of Fmoc-O-propargyl-L-tyrosine in acetone-d6 are monitored. The timing of the setup is fully controlled by the pulse programmer of the NMR spectrometer. By flushing with an inert gas it is possible to efficiently quench the hydrogenation reaction in a controlled fashion and to detect the relaxation of hyperpolarization without a background reaction. The proposed design makes it possible to carry out PHIP experiments in an automatic mode and reliably determine the enhancement of polarized signals.

  19. Multiscale Hessian fracture filtering for the enhancement and segmentation of narrow fractures in 3D image data (United States)

    Voorn, Maarten; Exner, Ulrike; Rath, Alexander


    Narrow fractures, or more generally narrow planar features, can be difficult to extract from 3D image datasets, and available methods are often unsuitable or inapplicable. A proper extraction is, however, in many cases required for visualisation or further processing steps. We use the example of 3D X-ray micro-Computed Tomography (µCT) data of narrow fractures through core samples from a dolomitic hydrocarbon reservoir (Hauptdolomit below the Vienna Basin, Austria). The extraction and eventual binary segmentation of the fractures in these datasets is required for porosity determination and permeability modelling. In this paper, we present the multiscale Hessian fracture filtering technique for extracting narrow fractures from a 3D image dataset. The second-order information in the Hessian matrix is used to distinguish planar features in the dataset. Different results are obtained for different scales of analysis in the calculation of the Hessian matrix. By combining these various scales of analysis, the final output is multiscale; i.e. narrow fractures of different apertures are detected. The presented technique is implemented and made available as macro code for the multi-platform public domain image processing software ImageJ. Serial processing of blocks of data ensures that full 3D processing of relatively large datasets (example dataset: 1670×1670×1546 voxels) is possible on a desktop computer. Several hours of processing time are required, but user interaction is only needed at the beginning. Various post-processing steps (calibration, connectivity filtering, and binarisation) can be applied, depending on the goals of the research. The multiscale Hessian fracture filtering technique provides very good results for extracting the narrow fractures in our example dataset, despite several drawbacks inherent to the use of the Hessian matrix. Although we apply the technique to a specific example, the general implementation makes the filter suitable for different
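    The scale-wise Hessian analysis can be sketched in 2D, where the analogue of a 3D planar feature is a narrow bright ridge. The scales, the s² normalization, and the eigenvalue-ratio test below are illustrative choices, not the ImageJ macro's exact criteria.

```python
import numpy as np
from scipy import ndimage

def hessian_line_response(img, sigmas=(1.0, 2.0, 4.0)):
    """Multiscale Hessian filter (2D sketch of the 3D planar-feature
    idea): at each scale, build the Hessian from Gaussian second
    derivatives and respond where one eigenvalue is strongly negative
    (bright narrow ridge) while the other stays small; the final map
    takes the maximum response over scales."""
    img = img.astype(float)
    out = np.zeros_like(img)
    for s in sigmas:
        hxx = ndimage.gaussian_filter(img, s, order=(0, 2)) * s**2
        hyy = ndimage.gaussian_filter(img, s, order=(2, 0)) * s**2
        hxy = ndimage.gaussian_filter(img, s, order=(1, 1)) * s**2
        # eigenvalues of the symmetric 2x2 Hessian at every pixel
        tr, det = hxx + hyy, hxx * hyy - hxy**2
        tmp = np.sqrt(np.maximum(tr**2 / 4 - det, 0))
        l1, l2 = tr / 2 + tmp, tr / 2 - tmp        # l1 >= l2
        resp = np.where((l2 < 0) & (np.abs(l1) < 0.5 * np.abs(l2)),
                        -l2, 0.0)
        out = np.maximum(out, resp)
    return out

# toy slice: a thin bright "fracture" trace on a dark background
img = np.zeros((64, 64))
img[32, :] = 1.0
resp = hessian_line_response(img)
```

    Taking the per-pixel maximum over the scale responses is what makes the output multiscale: thin fractures respond at small sigma, wider ones at large sigma, and both survive in the combined map.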

  20. Multiscale CNNs for Brain Tumor Segmentation and Diagnosis

    Directory of Open Access Journals (Sweden)

    Liya Zhao


    Early brain tumor detection and diagnosis are critical to clinics. Thus segmentation of focused tumor area needs to be accurate, efficient, and robust. In this paper, we propose an automatic brain tumor segmentation method based on Convolutional Neural Networks (CNNs). Traditional CNNs focus only on local features and ignore global region features, which are both important for pixel classification and recognition. Besides, brain tumor can appear in any place of the brain and be any size and shape in patients. We design a three-stream framework named as multiscale CNNs which could automatically detect the optimum top-three scales of the image sizes and combine information from different scales of the regions around that pixel. Datasets provided by Multimodal Brain Tumor Image Segmentation Benchmark (BRATS) organized by MICCAI 2013 are utilized for both training and testing. The designed multiscale CNNs framework also combines multimodal features from T1, T1-enhanced, T2, and FLAIR MRI images. By comparison with traditional CNNs and the best two methods in BRATS 2012 and 2013, our framework shows advances in brain tumor segmentation accuracy and robustness.

  1. Multiscale CNNs for Brain Tumor Segmentation and Diagnosis. (United States)

    Zhao, Liya; Jia, Kebin


    Early brain tumor detection and diagnosis are critical to clinics. Thus segmentation of focused tumor area needs to be accurate, efficient, and robust. In this paper, we propose an automatic brain tumor segmentation method based on Convolutional Neural Networks (CNNs). Traditional CNNs focus only on local features and ignore global region features, which are both important for pixel classification and recognition. Besides, brain tumor can appear in any place of the brain and be any size and shape in patients. We design a three-stream framework named as multiscale CNNs which could automatically detect the optimum top-three scales of the image sizes and combine information from different scales of the regions around that pixel. Datasets provided by Multimodal Brain Tumor Image Segmentation Benchmark (BRATS) organized by MICCAI 2013 are utilized for both training and testing. The designed multiscale CNNs framework also combines multimodal features from T1, T1-enhanced, T2, and FLAIR MRI images. By comparison with traditional CNNs and the best two methods in BRATS 2012 and 2013, our framework shows advances in brain tumor segmentation accuracy and robustness.

  2. Diagnostic accuracy of three-dimensional contrast-enhanced automatic moving-table MR angiography in patients with peripheral arterial occlusive disease in comparison with digital subtraction angiography

    Directory of Open Access Journals (Sweden)

    Hazem Soliman


    Conclusion: Our prospective comparison shows that three-dimensional contrast-enhanced automatic moving-table MRA is a noninvasive imaging modality that has a diagnostic accuracy comparable to DSA for the assessment of peripheral arterial occlusive disease.

  3. Two Methods of Automatic Evaluation of Speech Signal Enhancement Recorded in the Open-Air MRI Environment (United States)

    Přibil, Jiří; Přibilová, Anna; Frollo, Ivan


    The paper focuses on two methods for evaluating the success of speech signal enhancement recorded in an open-air magnetic resonance imager during phonation for 3D human vocal tract modeling. The first approach enables a comparison based on statistical analysis with ANOVA and hypothesis tests. The second method is based on classification by Gaussian mixture models (GMMs). The performed experiments have confirmed that the proposed ANOVA and GMM classifiers for automatic evaluation of speech quality are functional and produce results fully comparable with the standard evaluation based on the listening test method.

  4. Enhanced Representation of Soil NO Emissions in the Community Multiscale Air Quality (CMAQ) Model Version 5.0.2 (United States)

    Rasool, Quazi Z.; Zhang, Rui; Lash, Benjamin; Cohan, Daniel S.; Cooter, Ellen J.; Bash, Jesse O.; Lamsal, Lok N.


    Modeling of soil nitric oxide (NO) emissions is highly uncertain and may misrepresent its spatial and temporal distribution. This study builds upon a recently introduced parameterization to improve the timing and spatial distribution of soil NO emission estimates in the Community Multiscale Air Quality (CMAQ) model. The parameterization considers soil parameters, meteorology, land use, and mineral nitrogen (N) availability to estimate NO emissions. We incorporate daily year-specific fertilizer data from the Environmental Policy Integrated Climate (EPIC) agricultural model to replace the annual generic data of the initial parameterization, and use a 12 km resolution soil biome map over the continental USA. CMAQ modeling for July 2011 shows slight differences in model performance in simulating fine particulate matter and ozone from Interagency Monitoring of Protected Visual Environments (IMPROVE) and Clean Air Status and Trends Network (CASTNET) sites and NO2 columns from Ozone Monitoring Instrument (OMI) satellite retrievals. We also simulate how the change in soil NO emissions scheme affects the expected O3 response to projected emissions reductions.

  5. Hierarchical multiscale modeling for flows in fractured media using generalized multiscale finite element method

    KAUST Repository

    Efendiev, Yalchin R.


    In this paper, we develop a multiscale finite element method for solving flows in fractured media. Our approach is based on the generalized multiscale finite element method (GMsFEM), where we represent the fracture effects on a coarse grid via multiscale basis functions. These multiscale basis functions are constructed in the offline stage via local spectral problems, following GMsFEM. To represent the fractures on the fine grid, we consider two approaches, (1) the discrete fracture model (DFM) and (2) the embedded fracture model (EFM), and their combination. In DFM, the fractures are resolved via the fine grid, while in EFM the fracture and fine-grid block interaction is represented as a source term. In the proposed multiscale method, additional multiscale basis functions are used to represent the long fractures, while short fractures are collectively represented by a single basis function. The procedure is done automatically via local spectral problems. In this regard, our approach shares common concepts with several approaches proposed in the literature, as we discuss. We would like to emphasize that our goal is not to compare DFM with EFM, but rather to develop a GMsFEM framework which uses these (DFM or EFM) fine-grid discretization techniques. Numerical results are presented, where we demonstrate how one can adaptively add basis functions in the regions of interest based on error indicators. We also discuss the use of randomized snapshots (Calo et al., Randomized oversampling for generalized multiscale finite element methods, 2014), which reduces the offline computational cost.

  6. Enhancing interpretability of automatically extracted machine learning features: application to a RBM-Random Forest system on brain lesion segmentation. (United States)

    Pereira, Sérgio; Meier, Raphael; McKinley, Richard; Wiest, Roland; Alves, Victor; Silva, Carlos A; Reyes, Mauricio


    Machine learning systems are achieving better performance at the cost of becoming increasingly complex. However, because of that, they become less interpretable, which may cause some distrust from the end-user of the system. This is especially important as these systems are pervasively being introduced to critical domains, such as the medical field. Representation learning techniques are general methods for automatic feature computation. Nevertheless, these techniques are regarded as uninterpretable "black boxes". In this paper, we propose a methodology to enhance the interpretability of automatically extracted machine learning features. The proposed system is composed of a Restricted Boltzmann Machine for unsupervised feature learning and a Random Forest classifier, which are combined to jointly consider existing correlations between imaging data, features, and target variables. We define two levels of interpretation: global and local. The former is devoted to understanding whether the system learned the relevant relations in the data correctly, while the latter focuses on predictions performed at the voxel and patient level. In addition, we propose a novel feature importance strategy that considers both imaging data and target variables, and we demonstrate the ability of the approach to leverage the interpretability of the obtained representation for the task at hand. We evaluated the proposed methodology in brain tumor segmentation and penumbra estimation in ischemic stroke lesions. We show the ability of the proposed methodology to unveil information regarding relationships between imaging modalities and extracted features and their usefulness for the task at hand. In both clinical scenarios, we demonstrate that the proposed methodology enhances the interpretability of automatically learned features, highlighting specific learning patterns that resemble how an expert extracts relevant data from medical images. Copyright © 2017 Elsevier B.V. All rights reserved.
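
    The pipeline described (unsupervised RBM features feeding a Random Forest, with importances traced back to the input) can be sketched with scikit-learn; the toy data, layer sizes, and the back-projection through the RBM weights are illustrative assumptions, not the paper's configuration:

```python
import numpy as np
from sklearn.neural_network import BernoulliRBM
from sklearn.ensemble import RandomForestClassifier

rng = np.random.RandomState(0)
# Toy binary "imaging" data: 200 samples, 64 voxels
X = (rng.rand(200, 64) > 0.5).astype(float)
y = (X[:, :8].sum(axis=1) > 4).astype(int)  # target depends on the first 8 voxels

rbm = BernoulliRBM(n_components=16, learning_rate=0.05, n_iter=20, random_state=0)
H = rbm.fit_transform(X)                    # unsupervised feature learning
rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(H, y)

# Global interpretation (assumed scheme): RF importance of each learned
# feature, back-projected to voxels through the RBM weight matrix
voxel_relevance = np.abs(rbm.components_).T @ rf.feature_importances_
print(voxel_relevance.shape)  # one relevance value per voxel
```

    Back-projecting the forest's importances through `rbm.components_` gives one rough per-voxel relevance map, in the spirit of the paper's global interpretation level.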

  7. Carbon nanotube integrated multifunctional multiscale composites (United States)

    Qiu, Jingjing; Zhang, Chuck; Wang, Ben; Liang, Richard


    Carbon nanotubes (CNTs) demonstrate extraordinary properties and show great promise in enhancing the out-of-plane properties of traditional polymer composites and enabling functionality, but current manufacturing challenges hinder the realization of their potential. This paper presents a method to fabricate multifunctional multiscale composites through an effective infiltration-based vacuum-assisted resin transfer moulding (VARTM) process. Multi-walled carbon nanotubes (MWNTs) were infused through and between glass-fibre tows along the through-thickness direction. Both pristine and functionalized MWNTs were used in fabricating multiscale glass-fibre-reinforced epoxy composites. The mechanical properties of the multiscale composites were remarkably enhanced, especially for the functionalized MWNT composites. With only 1 wt% loading of functionalized MWNTs, tensile strength was increased by 14% and Young's modulus by 20% in comparison with conventional fibre-reinforced composites. Moreover, the shear strength and short-beam modulus were increased by 5% and 8%, respectively, indicating improved inter-laminar properties. The stress-strain tests also suggested a noticeable enhancement in toughness. Scanning electron microscopy (SEM) characterization confirmed enhanced interfacial bonding when functionalized MWNTs were integrated into epoxy/glass-fibre composites. The coefficient of thermal expansion (CTE) of the functionalized nanocomposites was reduced by 25.2% compared with epoxy/glass-fibre composites. The desired improvement in electrical conductivity was also achieved. These multiscale composites indicate a way to leverage the benefits of CNTs and open up new opportunities for high-performance multifunctional multiscale composites.

  8. Retinex enhancement of infrared images. (United States)

    Li, Ying; He, Renjie; Xu, Guizhi; Hou, Changzhi; Sun, Yunyan; Guo, Lei; Rao, Liyun; Yan, Weili


    With the ability to image the temperature distribution of the body, infrared imaging is promising for the diagnosis and prognosis of diseases. However, the poor quality of raw infrared images has hindered applications, and one of the essential problems is the low contrast of the imaged object. In this paper, image enhancement techniques based on the Retinex theory are studied; Retinex is a process that automatically restores visual realism to images. The algorithms, including the Frankle-McCann algorithm, the McCann99 algorithm, the single-scale Retinex (SSR) algorithm, the multi-scale Retinex (MSR) algorithm, and multi-scale Retinex with color restoration (MSRCR), are applied to the enhancement of infrared images. Entropy measurements along with visual inspection were compared, and the results show that algorithms based on the Retinex theory are able to enhance infrared images. Of the algorithms compared, MSRCR demonstrated the best performance.
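
    A minimal sketch of the single-scale and multi-scale Retinex algorithms compared above; the scale set, the +1 offset before the logarithm, and the synthetic test image are assumptions for illustration:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def single_scale_retinex(img, sigma):
    """SSR: R = log(I) - log(G_sigma * I); the +1 offset avoids log(0)."""
    img = img.astype(float) + 1.0
    return np.log(img) - np.log(gaussian_filter(img, sigma))

def multi_scale_retinex(img, sigmas=(15, 80, 250)):
    """MSR: equal-weight average of SSR outputs at several scales (assumed scales)."""
    return sum(single_scale_retinex(img, s) for s in sigmas) / len(sigmas)

# Synthetic low-contrast "infrared" image: a weak warm blob on a bright bias
y, x = np.mgrid[0:128, 0:128]
img = 200.0 + 5.0 * np.exp(-((x - 64) ** 2 + (y - 64) ** 2) / 200.0)
out = multi_scale_retinex(img)
# MSR removes the global bias, so the blob's relative contrast is amplified
print(out.shape)
```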

  9. Adult attachment anxiety is associated with enhanced automatic neural response to positive facial expression. (United States)

    Donges, Uta-Susan; Kugel, Harald; Stuhrmann, Anja; Grotegerd, Dominik; Redlich, Ronny; Lichev, Vladimir; Rosenberg, Nicole; Ihme, Klas; Suslow, Thomas; Dannlowski, Udo


    According to social psychology models of adult attachment, a fundamental dimension of attachment is anxiety. Individuals who are high in attachment anxiety are motivated to achieve intimacy in relationships, but are mistrustful of others and their availability. Behavioral research has shown that anxiously attached persons are vigilant for emotional facial expression, but the neural substrates underlying this perceptual sensitivity remain largely unknown. In the present study, functional magnetic resonance imaging was used to examine automatic brain reactivity to approach-related facial emotions as a function of attachment anxiety in a sample of 109 healthy adults. Pictures of sad and happy faces were presented masked by neutral faces. The Relationship Scales Questionnaire (RSQ) was used to assess attachment style. Attachment anxiety was correlated with depressivity, trait anxiety, and attachment avoidance. Controlling for these variables, attachment-related anxiety was positively related to responses in left inferior, middle, and medial prefrontal areas, globus pallidus, claustrum, and right cerebellum to masked happy facial expression. Attachment anxiety was not found to be associated with brain activation in response to masked sad faces. Our findings suggest that anxiously attached adults are automatically more responsive to positive approach-related facial expression in brain areas that are involved in the perception of facial emotion, facial mimicry, or the assessment of affective value and social distance. Copyright © 2012 IBRO. Published by Elsevier Ltd. All rights reserved.

  10. Multifunctional multiscale composites: Processing, modeling and characterization (United States)

    Qiu, Jingjing

    Carbon nanotubes (CNTs) demonstrate extraordinary properties and show great promise in enhancing out-of-plane properties of traditional polymer/fiber composites and enabling functionality. However, current manufacturing challenges hinder the realization of their potential. In this dissertation research, both experimental and computational efforts were conducted to investigate effective manufacturing techniques for CNT-integrated multiscale composites. The fabricated composites demonstrated significant improvements in physical properties, such as tensile strength, tensile modulus, inter-laminar shear strength, thermal dimension stability, and electrical conductivity. Such multiscale composites were truly multifunctional with the addition of CNTs. Furthermore, a novel hierarchical multiscale modeling method was developed in this research. Molecular dynamics (MD) simulation offered a reasonable explanation of CNT dispersion and motion in polymer solution. Bi-mode finite-extensible-nonlinear-elastic (FENE) dumbbell simulation was used to analyze the influence of the CNT length distribution on the stress tensor and shear-rate-dependent viscosity. Based on the simulated viscosity profile and empirical equations from experiments, a macroscale flow simulation model based on the finite element method (FEM) was developed and validated to predict resin flow behavior in the processing of CNT-enhanced multiscale composites. The proposed multiscale modeling method provided a comprehensive understanding of micro/nano flow in both atomistic detail and at the mesoscale. The simulation model can be used to optimize process design and control of the mold-filling process in multiscale composite manufacturing. This research provided systematic investigations into CNT-based multiscale composites. The results from this study may be used to leverage the benefits of CNTs and open up new application opportunities for high-performance multifunctional multiscale composites. Keywords: Carbon

  11. Automatic assessment of coronary artery calcium score from contrast-enhanced 256-row coronary computed tomography angiography. (United States)

    Rubinshtein, Ronen; Halon, David A; Gaspar, Tamar; Lewis, Basil S; Peled, Nathan


    The coronary artery calcium score (CS), an independent predictor of cardiovascular events, can be obtained from a stand-alone nonenhanced computed tomography (CT) scan (CSCT) or as an additional nonenhanced procedure before contrast-enhanced coronary CT angiography (CCTA). We evaluated the accuracy of a novel fully automatic tool for computing CS from the CCTA examination. One hundred thirty-six consecutive symptomatic patients (aged 59 ± 11 years, 40% female) without known coronary artery disease who underwent both 256-row CSCT and CCTA were studied. Original scan reconstruction (slice thickness) was maintained (3 mm for CSCT and 0.67 mm for CCTA). CS was computed from CCTA by an automatic tool (COR Analyzer, rcadia Medical Imaging, Haifa, Israel) and compared with CS results obtained by standard assessment of nonenhanced CSCT (HeartBeat CS, Philips, Cleveland, Ohio). We also compared both methods for classification into 5 commonly used CS categories (0, 1 to 10, 11 to 100, 101 to 400, >400 Agatston units). All scans were of diagnostic quality. CS obtained by the COR Analyzer from CCTA classified 111 of 136 patients (82%) into categories identical to those from CS by CSCT, and 24 of the remaining 25 into an adjacent category. Overall, CS values from CCTA showed high correlation with CS values from CSCT (Spearman rank correlation = 0.95). In conclusion, CS automatically computed from 256-row CCTA correlated highly with standard CS values obtained from nonenhanced CSCT. CS obtained directly from CCTA may obviate the need for an additional scan and attendant radiation. Copyright © 2014 Elsevier Inc. All rights reserved.
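
    The five-score-category classification used in the comparison can be expressed as a small mapping; the paired scores below are hypothetical, not the study's data:

```python
def cs_category(score):
    """Map an Agatston calcium score to the five categories used in the study."""
    if score == 0:
        return "0"
    if score <= 10:
        return "1-10"
    if score <= 100:
        return "11-100"
    if score <= 400:
        return "101-400"
    return ">400"

# Hypothetical (CCTA, CSCT) score pairs: count identical-category classifications
pairs = [(0, 0), (5, 12), (250, 300), (900, 450)]
agreement = sum(cs_category(a) == cs_category(b) for a, b in pairs)
print(agreement)  # → 3
```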

  12. Multiscale principal component analysis

    International Nuclear Information System (INIS)

    Akinduko, A A; Gorban, A N


    Principal component analysis (PCA) is an important tool for exploring data. The conventional approach to PCA leads to a solution that favours structures with large variances; this is sensitive to outliers and can obscure interesting underlying structures. One of the equivalent definitions of PCA is that it seeks the subspaces that maximize the sum of squared pairwise distances between data projections. This definition opens up more flexibility in the analysis of principal components, which is useful for enhancing PCA. In this paper we introduce scales into PCA by maximizing only the sum of squared pairwise distances between projections for pairs of data points whose distances lie within a chosen interval [l,u]. The resulting principal component decompositions in multiscale PCA depend on the point (l,u) in the plane, and for each point we define projectors onto the principal components. Cluster analysis of these projectors reveals the structures in the data at various scales, with each structure described by the eigenvectors at the medoid point of its cluster. We also use the distortion of projections as a criterion for choosing an appropriate scale, especially for data with outliers. The method was tested on both artificial and real data. For data with multiscale structure, the method was able to reveal the different structures in the data and to reduce the effect of outliers on the principal component analysis.
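
    A sketch of the core idea, restricting PCA's pairwise-distance objective to pairs whose distance falls in [l,u]; the synthetic data set with a single outlier is illustrative:

```python
import numpy as np

def multiscale_pca(X, l, u):
    """First principal direction maximizing the sum of squared pairwise
    projection distances, restricted to pairs with distance in [l, u]."""
    dim = X.shape[1]
    S = np.zeros((dim, dim))
    for i in range(len(X)):
        for j in range(i + 1, len(X)):
            d = X[i] - X[j]
            r = np.linalg.norm(d)
            if l <= r <= u:
                S += np.outer(d, d)
    w, V = np.linalg.eigh(S)
    return V[:, -1]  # eigenvector of the largest eigenvalue

rng = np.random.RandomState(0)
# Data elongated along x, plus one distant outlier along y
X = np.column_stack([3 * rng.randn(100), 0.3 * rng.randn(100)])
X = np.vstack([X, [0.0, 50.0]])
v_all = multiscale_pca(X, 0, np.inf)   # all pairs: the outlier dominates
v_band = multiscale_pca(X, 0, 20)      # band excludes outlier pairs
print(abs(v_all[1]), abs(v_band[0]))   # v_all tilts to y; v_band stays on x
```

    Restricting the pair distances to a band removes the outlier's long-range pairs from the objective, so the banded component recovers the true elongated structure.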

  13. Automatic untargeted metabolic profiling analysis coupled with Chemometrics for improving metabolite identification quality to enhance geographical origin discrimination capability. (United States)

    Han, Lu; Zhang, Yue-Ming; Song, Jing-Jing; Fan, Mei-Juan; Yu, Yong-Jie; Liu, Ping-Ping; Zheng, Qing-Xia; Chen, Qian-Si; Bai, Chang-Cai; Sun, Tao; She, Yuan-Bin


    Untargeted metabolic profiling analysis is employed to screen metabolites for specific purposes, such as geographical origin discrimination; however, the data analysis remains a challenging task. In this work, a new automatic untargeted metabolic profiling analysis coupled with a chemometric strategy was developed to improve metabolite identification and to enhance geographical origin discrimination capability. Automatic untargeted metabolic profiling analysis with chemometrics (AuMPAC) was used to screen the total ion chromatographic (TIC) peaks that showed significant differences among the geographical regions. A chemometric peak resolution strategy was then employed for the screened TIC peaks. The retrieved components were further analyzed using ANOVA, and those that showed significant differences were used to build a geographical origin discrimination model using two-way encoding partial least squares. To demonstrate its performance, geographical origin discrimination of flaxseed samples from six regions in China was conducted, and 18 TIC peaks were screened. A total of 19 significantly different metabolites were obtained after peak resolution. The accuracy of the geographical origin discrimination was up to 98%. A comparison of AuMPAC, AMDIS, and XCMS indicated that AuMPAC obtained the best geographical origin discrimination results. In conclusion, AuMPAC provides an effective alternative for such data analysis. Copyright © 2018 Elsevier B.V. All rights reserved.
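
    The ANOVA screening step (retain components that differ significantly across regions) can be sketched with SciPy; the region means, spreads, and sample sizes are invented for illustration:

```python
import numpy as np
from scipy.stats import f_oneway

rng = np.random.RandomState(1)
# Hypothetical TIC peak areas for one peak measured in three regions
region_a = 100 + 5 * rng.randn(10)
region_b = 130 + 5 * rng.randn(10)   # shifted mean: a discriminating peak
region_c = 100 + 5 * rng.randn(10)

F, p = f_oneway(region_a, region_b, region_c)
keep_peak = p < 0.05   # screening rule: retain peaks that differ among regions
print(keep_peak)
```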

  14. Multi-scale enhancement of climate prediction over land by increasing the model sensitivity to vegetation variability in EC-Earth (United States)

    Alessandri, Andrea; Catalano, Franco; De Felice, Matteo; Van Den Hurk, Bart; Doblas Reyes, Francisco; Boussetta, Souhail; Balsamo, Gianpaolo; Miller, Paul A.


    The EC-Earth earth system model has recently been extended to include the dynamics of vegetation. In its original formulation, vegetation variability is represented simply by the Leaf Area Index (LAI), which affects climate basically by changing the vegetation physiological resistance to evapotranspiration. This coupling has been found to have only a weak effect on the surface climate modeled by EC-Earth. In reality, the effective sub-grid vegetation fractional coverage varies seasonally and at interannual time-scales in response to leaf-canopy growth, phenology, and senescence, and therefore affects biophysical parameters such as the albedo, surface roughness, and soil field capacity. To adequately represent this effect in EC-Earth, we included an exponential dependence of the vegetation cover on the LAI. By comparing two sets of simulations performed with and without the new variable fractional-coverage parameterization, spanning from centennial (twentieth century) simulations and retrospective predictions to decadal (5-year), seasonal, and weather time-scales, we show for the first time a significant multi-scale enhancement of vegetation impacts in climate simulation and prediction over land. Particularly large effects at multiple time scales are shown over boreal winter middle-to-high latitudes over Canada, the western US, Eastern Europe, Russia, and eastern Siberia, due to the implemented time-varying shadowing effect of tree vegetation on snow surfaces. Over Northern Hemisphere boreal forest regions, the improved representation of vegetation cover tends to correct the winter warm biases and improves the climate change sensitivity, the decadal potential predictability, and the skill of forecasts at seasonal and weather time-scales. Significant improvements in the prediction of 2 m temperature and rainfall are also shown over transitional land surface hot spots. Both the potential predictability at the decadal time-scale and seasonal-forecast skill are enhanced over
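
    The exponential dependence of effective vegetation cover on LAI can be illustrated with a Lambert-Beer style relation; the functional form and the extinction coefficient k = 0.5 are assumptions, as the abstract does not give the exact formula:

```python
import math

def veg_cover(lai, k=0.5):
    """Effective vegetation fractional cover from LAI via an assumed
    Lambert-Beer style exponential: cover = 1 - exp(-k * LAI)."""
    return 1.0 - math.exp(-k * lai)

# Cover saturates as the canopy closes: bare ground at LAI = 0,
# near-full cover for dense canopies
for lai in (0.0, 1.0, 3.0, 6.0):
    print(lai, round(veg_cover(lai), 3))
```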

  15. Technology-Enhanced Assessment of Math Fact Automaticity: Patterns of Performance for Low- and Typically Achieving Students (United States)

    Stickney, Eric M.; Sharp, Lindsay B.; Kenyon, Amanda S.


    Because math fact automaticity has been identified as a key barrier for students struggling with mathematics, we examined how initial math achievement levels influenced the path to automaticity (e.g., variation in number of attempts, speed of retrieval, and skill maintenance over time) and the relation between attainment of automaticity and gains…

  16. Fatigue of multiscale composites with secondary nanoplatelet reinforcement: 3D computational analysis

    DEFF Research Database (Denmark)

    Dai, Gaoming; Mishnaevsky, Leon, Jr.


    3D numerical simulations of fatigue damage of multiscale fiber-reinforced polymer composites with secondary nanoclay reinforcement are carried out. Macro-micro FE models of the multiscale composites are generated automatically using Python-based software. The effect of the nanoclay reinforcement...

  17. A new combined technique for automatic contrast enhancement of digital images

    Directory of Open Access Journals (Sweden)

    Ismail A. Humied


    Some low-contrast images have characteristics that make it difficult to improve them with traditional methods; for example, the amplitudes of the histogram components may be very high at one location on the gray scale and very small over the rest of it. In the present paper, a new method is described that can deal with such cases. The proposed method is a combination of Histogram Equalization (HE) and Fast Gray-Level Grouping (FGLG). The basic procedure is to segment the original histogram of a low-contrast image into two sub-histograms according to the location of the highest amplitude of the histogram components, and to achieve contrast enhancement by equalizing the left segment of the histogram using the HE technique and the right segment using the FGLG technique. The results have shown that the proposed method not only produces better results than each individual contrast enhancement technique, but is also fully automated. Moreover, it is applicable to a broad variety of images that satisfy the properties mentioned above and suffer from low contrast.
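
    A rough sketch of the split-histogram idea: segment the histogram at its dominant peak and equalize each side independently. Plain HE is used on both segments here for brevity (the paper applies FGLG to the right segment), so this is a simplification:

```python
import numpy as np

def split_equalize(img):
    """Split the histogram at its highest peak and equalize each segment
    with plain HE (the paper's FGLG step on the right side is simplified)."""
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    split = int(np.argmax(hist))            # location of the dominant peak
    out = img.astype(float).copy()
    for lo, hi in ((0, split + 1), (split + 1, 256)):
        mask = (img >= lo) & (img < hi)
        if mask.sum() == 0 or hi - lo < 2:
            continue
        h, _ = np.histogram(img[mask], bins=hi - lo, range=(lo, hi))
        cdf = np.cumsum(h) / mask.sum()
        out[mask] = lo + cdf[img[mask] - lo] * (hi - lo - 1)
    return out.astype(np.uint8)

rng = np.random.RandomState(0)
# Low-contrast image: intensities bunched in one narrow histogram spike
img = np.clip(rng.normal(40, 3, (64, 64)), 0, 255).astype(np.uint8)
out = split_equalize(img)
print(img.std(), out.std())  # spread (contrast) increases after equalization
```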

  18. Using Adaptive Tone Mapping to Enhance Edge-Preserving Color Image Automatically

    Directory of Open Access Journals (Sweden)

    Lu Min-Yao


    One common characteristic of most high-contrast images is the coexistence of dark shadows and a bright light source in one scene. It is very difficult to present details in both dark and bright areas simultaneously on most display devices. To resolve this problem, a new method combining a bilateral filter with adaptive tone mapping is proposed to improve image quality. First, the bilateral filter is used to decompose the image into two layers: a large-scale layer and a detail layer. Then, the large-scale layer is divided into three regions: bright, mid-tone, and dark. Finally, an appropriate tone-mapping method is chosen to process each region according to its individual properties. Only the large-scale layer is enhanced by adaptive tone mapping; therefore, the details of the original image are preserved. The experimental results demonstrate the success of the proposed method. Furthermore, the proposed method also avoids the posterization produced by methods using histogram equalization.
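
    A minimal sketch of the decomposition step: a brute-force bilateral filter splits the log image into large-scale and detail layers, and only the large-scale layer is compressed. The compression factor, filter parameters, and test scene are assumptions; the paper's region-wise adaptive tone mapping is reduced here to a single global factor:

```python
import numpy as np

def bilateral(img, sigma_s=3.0, sigma_r=0.4, radius=4):
    """Naive brute-force bilateral filter (adequate for small images)."""
    h, w = img.shape
    out = np.zeros_like(img)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs**2 + ys**2) / (2 * sigma_s**2))
    pad = np.pad(img, radius, mode="edge")
    for i in range(h):
        for j in range(w):
            patch = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            wgt = spatial * np.exp(-(patch - img[i, j])**2 / (2 * sigma_r**2))
            out[i, j] = (wgt * patch).sum() / wgt.sum()
    return out

rng = np.random.RandomState(0)
# High-contrast scene: a dark half and a bright half, plus fine texture
img = np.where(np.arange(64) < 32, 1.0, 100.0)[None, :] * np.ones((32, 1))
img *= 1.0 + 0.05 * rng.randn(32, 64)
log_l = np.log(img)
base = bilateral(log_l)                    # large-scale layer
detail = log_l - base                      # detail layer, kept untouched
compressed = np.exp(0.5 * base + detail)   # compress only the base (factor assumed)
print(compressed.max() / compressed.min() < img.max() / img.min())
```

    Because the range kernel prevents averaging across the dark/bright edge, the edge survives in the base layer while the dynamic range shrinks and the texture detail is preserved.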

  19. Resistance training exercise program for intervention to enhance gait function in elderly chronically ill patients: multivariate multiscale entropy for center of pressure signal analysis. (United States)

    Chen, Ming-Shu; Jiang, Bernard C


    Falls are unpredictable accidents, and the resulting injuries can be serious in the elderly, particularly those with chronic diseases. Regular exercise is recommended to prevent and treat hypertension and other chronic diseases by reducing clinical blood pressure. The "complexity index" (CI), based on the multiscale entropy (MSE) algorithm, has been applied in recent studies to characterize a person's adaptability to intrinsic and external perturbations and is a widely used measure of postural sway or stability. The multivariate multiscale entropy (MMSE) is an advanced algorithm used to calculate CI values of center of pressure (COP) data. In this study, we applied MSE and MMSE to analyze the gait function of 24 elderly, chronically ill patients (44% female; 56% male; mean age, 67.56 ± 10.70 years) with cardiovascular disease, diabetes mellitus, or osteoporosis. After a 12-week training program, postural stability measurements showed significant improvements. Our results showed beneficial effects of resistance training, which can be used to improve postural stability in the elderly, and indicated that MMSE-based CI values computed from COP data were superior to the univariate MSE algorithm in identifying the sense of balance in the elderly.
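
    A sketch of the univariate MSE-based complexity index: coarse-grain the series at several scales and sum the sample entropies. Parameters m = 2 and r = 0.15·SD are common defaults assumed here; the multivariate MMSE extension is not shown:

```python
import numpy as np

def sample_entropy(x, m=2, r=0.15):
    """SampEn(m, r), with r given as a fraction of the series' standard deviation."""
    x = np.asarray(x, float)
    tol = r * x.std()
    def count(mm):
        templ = np.array([x[i:i + mm] for i in range(len(x) - mm)])
        d = np.max(np.abs(templ[:, None] - templ[None, :]), axis=2)
        return (np.sum(d <= tol) - len(templ)) / 2   # exclude self-matches
    b, a = count(m), count(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

def multiscale_entropy(x, scales=(1, 2, 3, 4)):
    """Coarse-grain by non-overlapping averaging, then SampEn per scale."""
    out = []
    for s in scales:
        n = (len(x) // s) * s
        coarse = np.asarray(x[:n], float).reshape(-1, s).mean(axis=1)
        out.append(sample_entropy(coarse))
    return out

rng = np.random.RandomState(0)
noise = rng.randn(600)                 # stand-in for a COP displacement series
ci = sum(multiscale_entropy(noise))    # "complexity index": sum over scales
print(ci > 0)
```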

  20. Resistance Training Exercise Program for Intervention to Enhance Gait Function in Elderly Chronically Ill Patients: Multivariate Multiscale Entropy for Center of Pressure Signal Analysis

    Directory of Open Access Journals (Sweden)

    Ming-Shu Chen


    Falls are unpredictable accidents, and the resulting injuries can be serious in the elderly, particularly those with chronic diseases. Regular exercise is recommended to prevent and treat hypertension and other chronic diseases by reducing clinical blood pressure. The "complexity index" (CI), based on the multiscale entropy (MSE) algorithm, has been applied in recent studies to characterize a person's adaptability to intrinsic and external perturbations and is a widely used measure of postural sway or stability. The multivariate multiscale entropy (MMSE) is an advanced algorithm used to calculate CI values of center of pressure (COP) data. In this study, we applied MSE and MMSE to analyze the gait function of 24 elderly, chronically ill patients (44% female; 56% male; mean age, 67.56 ± 10.70 years) with cardiovascular disease, diabetes mellitus, or osteoporosis. After a 12-week training program, postural stability measurements showed significant improvements. Our results showed beneficial effects of resistance training, which can be used to improve postural stability in the elderly, and indicated that MMSE-based CI values computed from COP data were superior to the univariate MSE algorithm in identifying the sense of balance in the elderly.

  1. Modeling of time-lapse multi-scale seismic monitoring of CO2 injected into a fault zone to enhance the characterization of permeability in enhanced geothermal systems (United States)

    Zhang, R.; Borgia, A.; Daley, T. M.; Oldenburg, C. M.; Jung, Y.; Lee, K. J.; Doughty, C.; Altundas, B.; Chugunov, N.; Ramakrishnan, T. S.


    Subsurface permeable faults and fracture networks play a critical role for enhanced geothermal systems (EGS) by providing conduits for fluid flow. Characterization of the permeable flow paths before and after stimulation is necessary to evaluate and optimize energy extraction. To provide insight into the feasibility of using CO2 as a contrast agent to enhance fault characterization by seismic methods, we model seismic monitoring of supercritical CO2 (scCO2) injected into a fault. During the CO2 injection, the original brine is replaced by scCO2, which leads to variations in geophysical properties of the formation. To explore the technical feasibility of the approach, we present modeling results for different time-lapse seismic methods including surface seismic, vertical seismic profiling (VSP), and a cross-well survey. We simulate the injection and production of CO2 into a normal fault in a system based on the Brady's geothermal field and model pressure and saturation variations in the fault zone using TOUGH2-ECO2N. The simulation results provide changing fluid properties during the injection, such as saturation and salinity changes, which allow us to estimate corresponding changes in seismic properties of the fault and the formation. We model the response of the system to active seismic monitoring in time-lapse mode using an anisotropic finite difference method with modifications for fracture compliance. Results to date show that even narrow fault and fracture zones filled with CO2 can be better detected using the VSP and cross-well survey geometry, while it would be difficult to image the CO2 plume by using surface seismic methods.

  2. Multiscale Cancer Modeling (United States)

    Macklin, Paul; Cristini, Vittorio


    Simulating cancer behavior across multiple biological scales in space and time, i.e., multiscale cancer modeling, is increasingly being recognized as a powerful tool to refine hypotheses, focus experiments, and enable more accurate predictions. A growing number of examples illustrate the value of this approach in providing quantitative insight on the initiation, progression, and treatment of cancer. In this review, we introduce the most recent and important multiscale cancer modeling works that have successfully established a mechanistic link between different biological scales. Biophysical, biochemical, and biomechanical factors are considered in these models. We also discuss innovative, cutting-edge modeling methods that are moving predictive multiscale cancer modeling toward clinical application. Furthermore, because the development of multiscale cancer models requires a new level of collaboration among scientists from a variety of fields such as biology, medicine, physics, mathematics, engineering, and computer science, an innovative Web-based infrastructure is needed to support this growing community. PMID:21529163

  3. Automatic gallbladder segmentation using combined 2D and 3D shape features to perform volumetric analysis in native and secretin-enhanced MRCP sequences. (United States)

    Gloger, Oliver; Bülow, Robin; Tönnies, Klaus; Völzke, Henry


    We aimed to develop the first fully automated 3D gallbladder segmentation approach to perform volumetric analysis in volume data of magnetic resonance (MR) cholangiopancreatography (MRCP) sequences. Volumetric gallbladder analysis is performed for non-contrast-enhanced and secretin-enhanced MRCP sequences. Native and secretin-enhanced MRCP volume data were produced with a 1.5-T MR system. Images of coronal maximum intensity projections (MIP) are used to automatically compute 2D characteristic shape features of the gallbladder in the MIP images. A gallbladder shape space is generated to derive 3D gallbladder shape features, which are then combined with 2D gallbladder shape features in a support vector machine approach to detect gallbladder regions in MRCP volume data. A region-based level set approach is used for fine segmentation. Volumetric analysis is performed for both sequences to calculate gallbladder volume differences between both sequences. The approach presented achieves segmentation results with mean Dice coefficients of 0.917 in non-contrast-enhanced sequences and 0.904 in secretin-enhanced sequences. This is the first approach developed to detect and segment gallbladders in MR-based volume data automatically in both sequences. It can be used to perform gallbladder volume determination in epidemiological studies and to detect abnormal gallbladder volumes or shapes. The positive volume differences between both sequences may indicate the quantity of the pancreatobiliary reflux.
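
    The Dice coefficient used to report the segmentation results is straightforward to compute; the two small masks below are illustrative:

```python
import numpy as np

def dice(a, b):
    """Dice coefficient between two binary masks: 2|A ∩ B| / (|A| + |B|)."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

auto = np.zeros((8, 8), int); auto[2:6, 2:6] = 1   # automatic segmentation
ref  = np.zeros((8, 8), int); ref[3:7, 2:6] = 1    # reference annotation
print(round(dice(auto, ref), 3))  # → 0.75
```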

  4. Multiscale analysis of MR-mammography data

    International Nuclear Information System (INIS)

    Lessmann, B.; Nattkemper, T.W.; Kessar, P.; Pointon, L.; Khazen, M.; Leach, M.O.; Degenhard, A.


    In this work we propose a method for automatically discriminating between different types of tissue in MR mammography datasets. This is accomplished by employing a wavelet-based multiscale analysis. After the data have been wavelet-transformed, unsupervised machine learning methods are employed to identify typical patterns in the wavelet domain. To demonstrate the potential of the proposed approach, we apply a filtering procedure that extracts the wavelet-based image information related to tumour tissue. In this way we obtain a robust segmentation of suspicious tissue in the MR image. (orig.)
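
    The wavelet decomposition step can be sketched with a one-level 2D Haar transform (the paper does not specify the wavelet; Haar is assumed here for simplicity):

```python
import numpy as np

def haar2d(img):
    """One level of the 2D Haar transform: approximation (LL) plus
    horizontal (LH), vertical (HL) and diagonal (HH) detail sub-bands."""
    a = (img[0::2] + img[1::2]) / 2.0          # row pairs: average
    d = (img[0::2] - img[1::2]) / 2.0          # row pairs: difference
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return ll, lh, hl, hh

rng = np.random.RandomState(0)
img = rng.rand(64, 64)                         # stand-in for an MR image patch
ll, lh, hl, hh = haar2d(img)
# The detail sub-bands carry the local texture a tissue-type filter would
# keep or suppress; LL is the coarse approximation passed to the next level
print(ll.shape, lh.shape, hl.shape, hh.shape)
```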

  5. Dynamic contrast-enhanced MRI for automatic detection of foci of residual or recurrent disease after prostatectomy

    Energy Technology Data Exchange (ETDEWEB)

    Parra, N.A.; Orman, Amber; Abramowitz, Matthew; Pollack, Alan; Stoyanova, Radka [University of Miami Miller School of Medicine, Department of Radiation Oncology, Miami, FL (United States); Padgett, Kyle [University of Miami Miller School of Medicine, Department of Radiation Oncology, Miami, FL (United States); University of Miami Miller School of Medicine, Department of Radiology, Miami, FL (United States); Casillas, Victor [University of Miami Miller School of Medicine, Department of Radiology, Miami, FL (United States); Punnen, Sanoj [University of Miami Miller School of Medicine, Department of Urology, Miami, FL (United States)


    This study aimed to develop an automated procedure for identifying suspicious foci of residual/recurrent disease in the prostate bed using dynamic contrast-enhanced MRI (DCE-MRI) in prostate cancer patients after prostatectomy. Data from 22 patients presenting for salvage radiotherapy (RT) with an identified gross tumor volume (GTV) in the prostate bed were analyzed retrospectively. An unsupervised pattern recognition method was used to analyze DCE-MRI curves from the prostate bed. Data were represented as a product of a number of signal-vs.-time patterns and their weights. The temporal pattern characterized by fast wash-in and gradual wash-out was considered the "tumor" pattern. The corresponding weights were thresholded based on the number (1, 1.5, 2, 2.5) of standard deviations away from the mean, denoted DCE1.0, ..., DCE2.5, and displayed on the T2-weighted MRI. The resulting four volumes were compared with the GTV and the maximum pre-RT prostate-specific antigen (PSA) level. Pharmacokinetic modeling was also carried out. Principal component analysis determined 2-4 significant patterns in the patients' DCE-MRI. Analysis and display of the identified suspicious foci were performed in commercial software (MIM Corporation, Cleveland, OH, USA). In general, DCE1.0/DCE1.5 highlighted larger areas than the GTV. DCE2.0 and the GTV were significantly correlated (r = 0.60, p < 0.05). DCE2.0/DCE2.5 were also significantly correlated with PSA (r = 0.52, 0.67; p < 0.05). Ktrans for DCE2.5 was statistically higher than the GTV's Ktrans (p < 0.05), indicating that the automatic volume better captures areas of malignancy. A software tool was developed for identification and visualization of the suspicious foci in DCE-MRI from post-prostatectomy patients and was integrated into the treatment planning system. (orig.) [German] Development of an automatic analysis procedure to identify, after prostatectomy, by means of dynamic contrast-enhanced

  6. Modeling of heterogeneous elastic materials by the multiscale hp-adaptive finite element method (United States)

    Klimczak, Marek; Cecot, Witold


    We present an enhancement of the multiscale finite element method (MsFEM) obtained by combining it with the hp-adaptive FEM. Such a discretization-based homogenization technique is a versatile tool for modeling heterogeneous materials with fast-oscillating elasticity coefficients. No assumption on periodicity of the domain is required. In order to avoid direct, so-called overkill mesh computations, a coarse mesh with effective stiffness matrices is used, and special shape functions are constructed to account for the local heterogeneities at the micro resolution. The automatic adaptivity (hp-type at the macro resolution and h-type at the micro resolution) increases the efficiency of the computation. In this paper, details of the modified MsFEM are presented, and a numerical test performed on a Fichera corner domain validates the proposed approach.

  7. Automatic sequences

    CERN Document Server

    Haeseler, Friedrich


    Automatic sequences are sequences produced by a finite automaton. Although they are not random, they may appear random. They are complicated in the sense of not being ultimately periodic; it may not be easy to name the rule by which a given sequence is generated, yet such a rule always exists. The concept of automatic sequences has applications in algebra, number theory, finite automata and formal languages, and combinatorics on words. The text deals with different aspects of automatic sequences, in particular: a general introduction to automatic sequences; the basic (combinatorial) properties of automatic sequences; the algebraic approach to automatic sequences; and geometric objects related to automatic sequences.
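
    A concrete example is the Thue-Morse sequence, generated by a two-state automaton that reads the binary digits of the index. A short illustrative sketch:

```python
def thue_morse(n):
    """First n terms of the Thue-Morse sequence: term i is the state a
    2-state automaton ends in after reading the binary digits of i,
    i.e. the parity of the number of 1-bits in i."""
    return [bin(i).count("1") % 2 for i in range(n)]
```

    The resulting sequence 0, 1, 1, 0, 1, 0, 0, 1, ... is not ultimately periodic, yet it is produced by this very simple rule, which is exactly the situation the abstract describes.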

  8. Open Quotient Measurements Based on Multiscale Product of Speech Signal Wavelet Transform

    Directory of Open Access Journals (Sweden)

    Aïcha Bouzid


    Full Text Available This paper describes a multiscale product method (MPM) for open quotient measurement in voiced speech. The method is based on determining the glottal closing and opening instants. The proposed approach consists of multiplying the wavelet transforms of the speech signal at different scales in order to enhance edge detection and parameter estimation. We show that the proposed method is effective and robust for detecting speech singularities. Accurate estimation of glottal closing instants (GCIs) and glottal opening instants (GOIs) is important in a wide range of speech processing tasks. In this paper, accurate estimation of GCIs and GOIs is used to measure the local open quotient (Oq), which is the ratio of the open time to the pitch period. The multiscale product operates automatically on the speech signal; a reference electroglottogram (EGG) signal is used for performance evaluation. The rate of correct GCI detection is 95.5% and that of GOI detection is 76%. The pitch period relative error is 2.6% and the open phase relative error is 5.6%. The relative error measured on the open quotient reaches 3% for the whole Keele database.
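
    The core idea of the multiscale product, reinforcing singularities that persist across scales while suppressing noise, can be sketched with a smoothed finite difference standing in for the wavelet transform; the scales and windows below are illustrative, not the quadratic-spline wavelet analysis used in the paper:

```python
def smoothed_derivative(x, s):
    """Derivative of x smoothed over window s: a crude stand-in for a
    wavelet transform of the signal at scale s."""
    n = len(x)
    d = [0.0] * n
    for i in range(s, n - s):
        right = sum(x[i + 1:i + 1 + s]) / s
        left = sum(x[i - s:i]) / s
        d[i] = right - left
    return d

def multiscale_product(x, scales=(1, 2, 4)):
    """Point-wise product of responses across scales: true singularities
    (present at every scale) are reinforced, isolated noise is damped."""
    prod = [1.0] * len(x)
    for s in scales:
        d = smoothed_derivative(x, s)
        prod = [p * v for p, v in zip(prod, d)]
    return prod

# Noisy step: the product peaks at the discontinuity, not at the blip.
signal = [0.0] * 16 + [1.0] * 16
signal[3] += 0.2  # small noise blip
p = multiscale_product(signal)
edge = max(range(len(p)), key=lambda i: abs(p[i]))
```

    In the paper the detected singularities are glottal closing/opening instants; here the same mechanism localizes a simple amplitude step.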

  9. Fully automatic segmentation and objective assessment of atrial scars for long-standing persistent atrial fibrillation patients using late gadolinium-enhanced MRI. (United States)

    Yang, Guang; Zhuang, Xiahai; Khan, Habib; Haldar, Shouvik; Nyktari, Eva; Li, Lei; Wage, Ricardo; Ye, Xujiong; Slabaugh, Greg; Mohiaddin, Raad; Wong, Tom; Keegan, Jennifer; Firmin, David


    Atrial fibrillation (AF) is the most common heart rhythm disorder and causes considerable morbidity and mortality, resulting in a large public health burden that is increasing as the population ages. It is associated with atrial fibrosis, the amount and distribution of which can be used to stratify patients and to guide subsequent electrophysiology ablation treatment. Atrial fibrosis may be assessed noninvasively using late gadolinium-enhanced (LGE) magnetic resonance imaging (MRI) where scar tissue is visualized as a region of signal enhancement. However, manual segmentation of the heart chambers and of the atrial scar tissue is time consuming and subject to interoperator variability, particularly as image quality in AF is often poor. In this study, we propose a novel fully automatic pipeline to achieve accurate and objective segmentation of the heart (from MRI Roadmap data) and of scar tissue within the heart (from LGE MRI data) acquired in patients with AF. Our fully automatic pipeline uniquely combines: (a) a multiatlas-based whole heart segmentation (MA-WHS) to determine the cardiac anatomy from an MRI Roadmap acquisition which is then mapped to LGE MRI, and (b) a super-pixel and supervised learning based approach to delineate the distribution and extent of atrial scarring in LGE MRI. We compared the accuracy of the automatic analysis to manual ground truth segmentations in 37 patients with persistent long-standing AF. Both our MA-WHS and atrial scarring segmentations showed accurate delineations of cardiac anatomy (mean Dice = 89%) and atrial scarring (mean Dice = 79%), respectively, compared to the established ground truth from manual segmentation. In addition, compared to the ground truth, we obtained 88% segmentation accuracy, with 90% sensitivity and 79% specificity. Receiver operating characteristic analysis achieved an average area under the curve of 0.91. Compared with previously studied methods with manual interventions, our innovative pipeline

  10. Towards distributed multiscale computing for the VPH

    NARCIS (Netherlands)

    Hoekstra, A.G.; Coveney, P.


    Multiscale modeling is fundamental to the Virtual Physiological Human (VPH) initiative. Most detailed three-dimensional multiscale models lead to prohibitive computational demands. As a possible solution we present MAPPER, a computational science infrastructure for Distributed Multiscale Computing

  11. Automatic dosage of hydrogen peroxide in solar photo-Fenton plants: Development of a control strategy for efficiency enhancement

    Energy Technology Data Exchange (ETDEWEB)

    Ortega-Gomez, E. [Department of Chemical Engineering, University of Almeria, 04120 Almeria (Spain); CIESOL, Joint Centre of the University of Almeria-CIEMAT, 04120 Almeria (Spain); Moreno Ubeda, J.C. [Department of Language and Computation, University of Almeria, 04120 Almeria (Spain); Alvarez Hervas, J.D. [Department of Language and Computation, University of Almeria, 04120 Almeria (Spain); Department of Language and Computation, University of Sevilla, 41092 Sevilla (Spain); Casas Lopez, J.L.; Santos-Juanes Jorda, L. [Department of Chemical Engineering, University of Almeria, 04120 Almeria (Spain); CIESOL, Joint Centre of the University of Almeria-CIEMAT, 04120 Almeria (Spain); Sanchez Perez, J.A., E-mail: [Department of Chemical Engineering, University of Almeria, 04120 Almeria (Spain); CIESOL, Joint Centre of the University of Almeria-CIEMAT, 04120 Almeria (Spain)


    Highlights: • Dissolved oxygen monitoring is used for automatic dosage of H{sub 2}O{sub 2} in photo-Fenton. • A PI controller with anti-windup minimises H{sub 2}O{sub 2} consumption. • H{sub 2}O{sub 2} consumption was reduced by up to 50% with respect to manual addition strategies. • Appropriate H{sub 2}O{sub 2} dosage is achieved by the PI controller with anti-windup under disturbances. - Abstract: The solar photo-Fenton process is widely used for the elimination of pollutants in aqueous effluents and, as such, is amply cited in the literature. In this process, hydrogen peroxide represents the highest operational cost. Until now, manual dosing of H{sub 2}O{sub 2} has led to low process performance; consequently, there is a need to automate the hydrogen peroxide dosage for industrial applications. As a relationship has been demonstrated between dissolved oxygen (DO) concentration and hydrogen peroxide consumption, DO can be used as a variable in optimising the hydrogen peroxide dosage. For this purpose, a model linking the dynamic behaviour of DO to hydrogen peroxide consumption was obtained experimentally. A control system was then developed based on this model. This control system - a proportional-integral (PI) controller with an anti-windup mechanism - has been tested experimentally. The assays were carried out in a pilot plant under sunlight conditions, with paracetamol used as the model pollutant. In comparison with non-assisted addition methods (a sole initial addition or continuous addition), a 50% decrease in hydrogen peroxide consumption was achieved when the automatic controller was used, yielding an economic saving and an improvement in process efficiency.
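
    The behaviour of a PI controller with anti-windup under actuator saturation can be sketched as follows (back-calculation variant; the gains, limits and first-order DO-like model are illustrative toy values, not the tuned pilot-plant controller):

```python
class PIAntiWindup:
    """PI controller with back-calculation anti-windup: when the dosing
    actuator saturates, the saturation excess is fed back into the
    integrator so it does not wind up."""
    def __init__(self, kp, ki, dt, u_min, u_max, kaw):
        self.kp, self.ki, self.dt = kp, ki, dt
        self.u_min, self.u_max, self.kaw = u_min, u_max, kaw
        self.integral = 0.0

    def step(self, error):
        u_raw = self.kp * error + self.ki * self.integral
        u = min(max(u_raw, self.u_min), self.u_max)  # actuator limits
        # Back-calculation: saturation excess (u - u_raw) drains the integrator.
        self.integral += (error + self.kaw * (u - u_raw)) * self.dt
        return u

# Toy first-order plant with an unreachable setpoint: the actuator
# saturates, yet the integral stays bounded instead of winding up.
pi = PIAntiWindup(kp=1.0, ki=0.5, dt=0.1, u_min=0.0, u_max=6.0, kaw=1.0)
do = 0.0  # dissolved-oxygen-like state
for _ in range(1000):
    u = pi.step(8.0 - do)               # setpoint 8 is not reachable
    do += 0.1 * (-0.5 * do + 0.5 * u)   # plant: steady state equals u
```

    Without the back-calculation term the integral would grow without bound for as long as the setpoint is unreachable; here it settles at a finite value while the plant settles at the saturated steady state.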

  12. Multiscale Simulations Using Particles

    DEFF Research Database (Denmark)

    Walther, Jens Honore

    We are developing particle methods as a general framework for large-scale simulations of discrete and continuous systems in science and engineering. The specific application and research areas include: discrete element simulations of granular flow, smoothed particle hydrodynamics and particle...... vortex methods for problems in continuum fluid dynamics, dissipative particle dynamics for flow at the meso scale, and atomistic molecular dynamics simulations of nanofluidic systems. We employ multiscale techniques to bridge the atomistic and continuum scales to study fundamental problems in fluid...

  13. Multiscale singularity trees

    DEFF Research Database (Denmark)

    Somchaipeng, Kerawit; Sporring, Jon; Johansen, Peter


    We propose MultiScale Singularity Trees (MSSTs) as a structure to represent images, and we propose an algorithm for image comparison based on comparing MSSTs. The algorithm is tested on 3 public image databases and compared to 2 state-of-the-art methods. We conclude that the computational complexity...... of our algorithm only allows for the comparison of small trees, and that the results of our method are comparable with the state-of-the-art using far fewer parameters for image representation....

  14. Multiscale modelling of nanostructures

    International Nuclear Information System (INIS)

    Vvedensky, Dimitri D


    Most materials phenomena are manifestations of processes that are operative over a vast range of length and time scales. A complete understanding of the behaviour of materials thereby requires theoretical and computational tools that span the atomic-scale detail of first-principles methods and the more coarse-grained description provided by continuum equations. Recent efforts have focused on combining traditional methodologies-density functional theory, molecular dynamics, Monte Carlo methods and continuum descriptions-within a unified multiscale framework. This review covers the techniques that have been developed to model various aspects of materials behaviour with the ultimate aim of systematically coupling the atomistic to the continuum descriptions. The approaches described typically have been motivated by particular applications but can often be applied in wider contexts. The self-assembly of quantum dot ensembles will be used as a case study for the issues that arise and the methods used for all nanostructures. Although quantum dots can be obtained with all the standard growth methods and for a variety of material systems, their appearance is a quite selective process, involving the competition between equilibrium and kinetic effects, and the interplay between atomistic and long-range interactions. Most theoretical models have addressed particular aspects of the ordering kinetics of quantum dot ensembles, with far fewer attempts at a comprehensive synthesis of this inherently multiscale phenomenon. We conclude with an assessment of the current status of multiscale modelling strategies and highlight the main outstanding issues. (topical review)

  15. Self-organizing neural networks for automatic detection and classification of contrast-enhancing lesions in dynamic MR-mammography

    International Nuclear Information System (INIS)

    Vomweg, T.W.; Teifke, A.; Kauczor, H.U.; Achenbach, T.; Rieker, O.; Schreiber, W.G.; Heitmann, K.R.; Beier, T.; Thelen, M.


    Purpose: Investigation and statistical evaluation of 'Self-Organizing Maps', a special type of neural network in the field of artificial intelligence, for classifying contrast-enhancing lesions in dynamic MR-mammography. Material and Methods: 176 investigations with proven histology after core biopsy or operation were randomly divided into two groups. Several Self-Organizing Maps were trained on investigations from the first group to detect and classify contrast-enhancing lesions in dynamic MR-mammography. Each single pixel's signal/time curve of all patients within the second group was analyzed by the Self-Organizing Maps. The likelihood of malignancy was visualized by color overlays on the MR images. Finally, the assessment of contrast-enhancing lesions by each network was rated visually and evaluated statistically. Results: A well-balanced neural network achieved a sensitivity of 90.5% and a specificity of 72.2% in predicting malignancy of 88 enhancing lesions. Detailed analysis of false-positive results revealed that every second fibroadenoma showed a 'typical malignant' signal/time curve, with no chance of differentiating between fibroadenomas and malignant tissue on the basis of contrast enhancement alone; however, this special group of lesions was represented by a well-defined area of the Self-Organizing Map. Discussion: Self-Organizing Maps are capable of classifying a dynamic signal/time curve as 'typical benign' or 'typical malignant' and can therefore be used as a second opinion. Given the now known localization on the Self-Organizing Map of fibroadenomas that enhance like malignant tumors, these lesions could in the future be passed to further analysis by additional post-processing elements (e.g., based on T2-weighted series or morphology analysis). (orig.)
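
    The principle of letting a self-organizing map cluster signal/time curves can be sketched in a few lines. Everything below is a toy illustration, not the networks evaluated in the study: the curves are made up, the map has only two units, the initialization is deterministic, and the fixed small neighborhood radius makes the update almost purely competitive.

```python
import math

def train_som(samples, n_units=2, epochs=30, lr=0.5, radius=0.1):
    """Minimal 1-D self-organizing map over signal/time curves.
    Prototypes are initialised from the first samples (deterministic)."""
    protos = [list(samples[i]) for i in range(n_units)]
    for _ in range(epochs):
        for x in samples:
            dist = lambda k: sum((p - a) ** 2 for p, a in zip(protos[k], x))
            bmu = min(range(n_units), key=dist)  # best matching unit
            for k in range(n_units):
                # Gaussian neighborhood around the BMU on the 1-D map.
                h = math.exp(-((k - bmu) ** 2) / (2 * radius ** 2))
                protos[k] = [p + lr * h * (a - p) for p, a in zip(protos[k], x)]
    return protos

def best_unit(protos, x):
    return min(range(len(protos)),
               key=lambda k: sum((p - a) ** 2 for p, a in zip(protos[k], x)))

# Toy curves: slow benign-like enhancement vs fast wash-in/wash-out.
benign = [[0.0, 0.2, 0.4, 0.5], [0.0, 0.25, 0.35, 0.5]]
malignant = [[0.0, 0.9, 0.8, 0.6], [0.0, 0.85, 0.75, 0.55]]
protos = train_som([benign[0], malignant[0], benign[1], malignant[1]])
```

    After training, curves of the two types map to different units; assigning a color to each unit is the basis for overlays like those described above.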

  16. Semi-automatic motion compensation of contrast-enhanced ultrasound images from abdominal organs for perfusion analysis

    Czech Academy of Sciences Publication Activity Database

    Schafer, S.; Nylund, K.; Saevik, F.; Engjom, T.; Mézl, M.; Jiřík, Radovan; Dimcevski, G.; Gilja, O.H.; Tönnies, K.


    Roč. 63, AUG 1 (2015), s. 229-237 ISSN 0010-4825 R&D Projects: GA ČR GAP102/12/2380 Institutional support: RVO:68081731 Keywords : ultrasonography * motion analysis * motion compensation * registration * CEUS * contrast-enhanced ultrasound * perfusion * perfusion modeling Subject RIV: FS - Medical Facilities ; Equipment Impact factor: 1.521, year: 2015

  17. Multiscale Signal Analysis and Modeling

    CERN Document Server

    Zayed, Ahmed


    Multiscale Signal Analysis and Modeling presents recent advances in multiscale analysis and modeling using wavelets and other systems. This book also presents applications in digital signal processing using sampling theory and techniques from various function spaces, filter design, feature extraction and classification, signal and image representation/transmission, coding, nonparametric statistical signal processing, and statistical learning theory. This book also: Discusses recently developed signal modeling techniques, such as the multiscale method for complex time series modeling, multiscale positive density estimation, Bayesian shrinkage strategies, and algorithms for data-adaptive statistics; Introduces new sampling algorithms for multidimensional signal processing; Provides comprehensive coverage of wavelets, with presentations on waveform design and modeling, wavelet analysis of ECG signals, and wavelet filters; Reviews feature extraction and classification algorithms for multiscale signal and image proce...

  18. Multiscale computing in the exascale era

    NARCIS (Netherlands)

    Alowayyed, S.; Groen, D.; Coveney, P.V.; Hoekstra, A.G.

    We expect that multiscale simulations will be one of the main high performance computing workloads in the exascale era. We propose multiscale computing patterns as a generic vehicle to realise load balanced, fault tolerant and energy aware high performance multiscale computing. Multiscale computing

  19. Change Detection in Synthetic Aperture Radar Images Using a Multiscale-Driven Approach

    Directory of Open Access Journals (Sweden)

    Olaniyi A. Ajadi


    Full Text Available Despite the significant progress achieved in recent years, to this day, automatic change detection and classification from synthetic aperture radar (SAR) images remains a difficult task. This is, in large part, due to (a) the high level of speckle noise that is inherent to SAR data; (b) the complex scattering response of SAR even for rather homogeneous targets; (c) the low temporal sampling that is often achieved with SAR systems, since sequential images do not always have the same radar geometry (incident angle, orbit path, etc.); and (d) the typically limited performance of SAR in delineating the exact boundary of changed regions. With this paper we present a promising change detection method that utilizes SAR images and provides solutions to these difficulties. We show that the presented approach enables automatic and high-performance change detection across a wide range of spatial scales (resolution levels). The developed method follows a three-step approach of (i) initial pre-processing; (ii) data enhancement/filtering; and (iii) wavelet-based, multi-scale change detection. A distinguishing property of our approach is its high flexibility, allowing it to be applied to a wide range of change detection problems. The performance of the developed approach is demonstrated using synthetic data as well as a real-data application to wildfire progression near Fairbanks, Alaska.
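
    A toy version of the multi-scale step can be sketched with a log-ratio change indicator (a common choice for multiplicative speckle) fused over box-filter scales. The operators, scales and threshold are illustrative simplifications, not the authors' wavelet pipeline:

```python
import math

def log_ratio(img1, img2, eps=1e-6):
    """Absolute log-ratio: a change indicator robust to multiplicative
    speckle noise in SAR."""
    return [[abs(math.log((a + eps) / (b + eps))) for a, b in zip(r1, r2)]
            for r1, r2 in zip(img1, img2)]

def box_smooth(img, s):
    """Mean filter over a (2s+1)^2 window, clipped at the borders (a
    crude stand-in for one wavelet approximation level)."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            vals = [img[ii][jj]
                    for ii in range(max(0, i - s), min(h, i + s + 1))
                    for jj in range(max(0, j - s), min(w, j + s + 1))]
            out[i][j] = sum(vals) / len(vals)
    return out

def multiscale_change(img1, img2, scales=(0, 1, 2), thresh=0.5):
    """Average the change indicator across scales, then threshold."""
    lr = log_ratio(img1, img2)
    fused = [[0.0] * len(lr[0]) for _ in lr]
    for s in scales:
        sm = box_smooth(lr, s) if s else lr
        fused = [[f + v for f, v in zip(fr, vr)] for fr, vr in zip(fused, sm)]
    n = len(scales)
    return [[1 if v / n > thresh else 0 for v in row] for row in fused]

# Synthetic pair: a 3x3 changed patch in an otherwise unchanged scene.
img1 = [[1.0] * 8 for _ in range(8)]
img2 = [[1.0] * 8 for _ in range(8)]
for i in range(3, 6):
    for j in range(3, 6):
        img2[i][j] = 5.0
mask = multiscale_change(img1, img2)
```

    Fusing across scales rewards changes that persist at several resolutions, which is the intuition behind the wavelet-based step (iii) above.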

  20. Automatized spleen segmentation in non-contrast-enhanced MR volume data using subject-specific shape priors (United States)

    Gloger, Oliver; Tönnies, Klaus; Bülow, Robin; Völzke, Henry


    To develop the first fully automated 3D spleen segmentation framework derived from T1-weighted magnetic resonance (MR) imaging data and to verify its performance for spleen delineation and volumetry. This approach addresses the issue of low contrast between the spleen and adjacent tissue in non-contrast-enhanced MR images. Native T1-weighted MR volume data were acquired on a 1.5 T MR system in an epidemiological study. We analyzed random subsamples of MR examinations without pathologies to develop and verify the spleen segmentation framework. The framework is modularized to include different kinds of prior knowledge in the segmentation pipeline. Classification by support vector machines differentiates between five different shape types in computed foreground probability maps and recognizes characteristic spleen regions in axial slices of MR volume data. A spleen-shape space generated by training provides subject-specific prior shape knowledge that is then incorporated into a final 3D level set segmentation method. Individually adapted shape-driven forces, as well as image-driven forces resulting from refined foreground probability maps, steer the level set successfully to segment the spleen. The framework achieves promising segmentation results with mean Dice coefficients of nearly 0.91 and low volumetric mean errors of 6.3%. The presented spleen segmentation approach can delineate spleen tissue in native MR volume data. Several kinds of prior shape knowledge, including subject-specific 3D prior shape knowledge, can be used to guide segmentation processes, achieving promising results.

  1. The Magnetospheric Multiscale Constellation (United States)

    Tooley, C. R.; Black, R. K.; Robertson, B. P.; Stone, J. M.; Pope, S. E.; Davis, G. T.


    The Magnetospheric Multiscale (MMS) mission is the fourth mission of the Solar Terrestrial Probe (STP) program of the National Aeronautics and Space Administration (NASA). The MMS mission was launched on March 12, 2015. The MMS mission consists of four identically instrumented spin-stabilized observatories which are flown in formation to perform the first definitive study of magnetic reconnection in space. The MMS mission was presented with numerous technical challenges, including the simultaneous construction and launch of four identical large spacecraft with 100 instruments total, stringent electromagnetic cleanliness requirements, closed-loop precision maneuvering and pointing of spinning flexible spacecraft, on-board GPS based orbit determination far above the GPS constellation, and a flight dynamics design that enables formation flying with separation distances as small as 10 km. This paper describes the overall mission design and presents an overview of the design, testing, and early on-orbit operation of the spacecraft systems and instrument suite.

  2. Toward combining thematic information with hierarchical multiscale segmentations using tree Markov random field model (United States)

    Zhang, Xueliang; Xiao, Pengfeng; Feng, Xuezhi


    It has been a common idea to produce multiscale segmentations to represent the various geographic objects in high-spatial-resolution remote sensing (HR) images. However, it remains a great challenge to automatically select the proper segmentation scale(s) solely from the image information. In this study, we propose a novel way of information fusion at the object level by combining hierarchical multiscale segmentations with existing thematic information produced by classification or recognition. The tree Markov random field (T-MRF) model is designed for the multiscale combination framework, through which the object type is determined to agree as closely as possible with the existing thematic information. At the same time, the object boundary is jointly determined by the thematic labels and the multiscale segments through minimization of the energy function. The benefits of the proposed T-MRF combination model include: (1) reducing the dependence on segmentation scale selection when utilizing multiscale segmentations; and (2) exploiting the hierarchical context naturally embedded in the multiscale segmentations. HR images of both urban and rural areas are used in the experiments to show the effectiveness of the proposed combination framework on these two aspects.

  3. Multiscale modeling in nanomaterials science

    Energy Technology Data Exchange (ETDEWEB)

    Karakasidis, T.E. [Department of Civil Engineering, University of Thessaly, Pedion Areos, GR-38834 Volos (Greece)], E-mail:; Charitidis, C.A. [National Technical University of Athens, School of Chemical Engineering, 9 Heroon, Polytechniou st., Zografos, GR-157 80 Athens (Greece)


    Nanoscience is an area of increasing interest, both in the physicochemical phenomena involved and in potential applications such as silicon carbide films, carbon nanotubes, quantum dots, MEMS, etc. These materials exhibit very interesting properties (electronic, optical, mechanical) at various length/time scales, necessitating better insight. Modern fabrication techniques, such as CVD, also require better understanding over a wide range of length/time scales in order to achieve better process control. Multiscale modeling is a new, fast-developing and challenging scientific field with contributions from many scientific disciplines in an effort to assure materials simulation across length/time scales. In this paper we present a brief review of recent advances in multiscale materials modeling. First, a classification of existing simulation methods based on time and length scales is presented, along with the basic principles of the multiscale approach. More specifically, we focus on electronic structure calculations, classical atomistic simulation with molecular dynamics or Monte Carlo methods at the nano/micro scale, kinetic Monte Carlo for larger system/time scales, and finite elements for very large scales. Then, we present the hierarchical and hybrid strategies of multiscale modeling used to couple these methods. Finally, we deal with selected applications concerning thin-film CVD deposition and the mechanical behavior of carbon nanotubes, and we conclude by presenting an overview of future trends in multiscale modeling.

  4. Multi-scale simulation for homogenization of cement media

    International Nuclear Information System (INIS)

    Abballe, T.


    To solve diffusion problems on cement media, two scales must be taken into account: a fine scale, which describes the micrometre-wide microstructures present in the media, and a work scale, which is usually a few metres long. Direct numerical simulations are almost impossible because of the huge computational resources (memory, CPU time) required to resolve both scales at the same time. To overcome this problem, we present in this thesis multi-scale resolution methods using both Finite Volumes and Finite Elements, along with their efficient implementations. More precisely, we developed a multi-scale simulation tool which uses the SALOME platform to mesh domains and post-process data, and the parallel calculation code MPCube to solve problems. This SALOME/MPCube tool can perform multi-scale simulations automatically and efficiently. The parallel structure of computer clusters can be used to dispatch the most time-consuming tasks. We optimized most functions to account for the specificities of cement media. We present numerical experiments on various cement media samples, e.g. mortar and cement paste. From these results, we manage to compute a numerical effective diffusivity of our cement media and to reconstruct a fine-scale solution. (author) [fr
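
    The flavour of such homogenization can be illustrated with the classical closed-form result for a layered medium: the effective diffusivity is the volume-weighted harmonic mean for layers in series and the arithmetic mean for layers in parallel. This is a textbook sketch, not the SALOME/MPCube solver itself:

```python
def effective_diffusivity_series(d, phi):
    """Layers perpendicular to the flux (in series): weighted harmonic
    mean of the layer diffusivities d with volume fractions phi."""
    return 1.0 / sum(p / di for p, di in zip(phi, d))

def effective_diffusivity_parallel(d, phi):
    """Layers parallel to the flux: weighted arithmetic mean."""
    return sum(p * di for p, di in zip(phi, d))

# Two-phase medium: a diffusive phase and a nearly blocking phase.
d, phi = [1.0, 0.01], [0.5, 0.5]
d_series = effective_diffusivity_series(d, phi)
d_parallel = effective_diffusivity_parallel(d, phi)
```

    Any true effective diffusivity of such a two-phase medium lies between these two values (the Wiener bounds), which makes them a quick sanity check on a numerical homogenization result.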

  5. Multi-technology Integration Based on Low-contrast Microscopic Image Enhancement

    Directory of Open Access Journals (Sweden)

    Haoge Ma


    Full Text Available Microscopic image enhancement is an important topic in image processing, used to improve the visual quality of images. This paper describes a novel multi-resolution image segmentation algorithm for low depth-of-field (DOF) images. The algorithm is designed to separate a sharply focused object of interest from other foreground or background objects. The algorithm is fully automatic in that all parameters are image-independent. A multiscale approach based on high-frequency wavelet coefficients and their statistics is used to perform context-dependent classification of individual blocks of the image. Compared with state-of-the-art algorithms, this new algorithm provides better accuracy at higher speed.
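
    The block-classification idea, that in-focus regions carry more high-frequency energy than blurred ones, can be sketched with first differences as a cheap surrogate for high-frequency wavelet coefficients. Block size, threshold and the toy image are illustrative:

```python
def block_hf_energy(img, bs):
    """Per-block high-frequency energy: sum of squared horizontal first
    differences inside each bs x bs block."""
    h, w = len(img), len(img[0])
    feats = {}
    for bi in range(0, h, bs):
        for bj in range(0, w, bs):
            e = 0.0
            for i in range(bi, min(bi + bs, h)):
                for j in range(bj, min(bj + bs, w) - 1):
                    e += (img[i][j + 1] - img[i][j]) ** 2
            feats[(bi // bs, bj // bs)] = e
    return feats

def focused_blocks(img, bs, thresh):
    """Blocks whose high-frequency energy exceeds the threshold."""
    return {k for k, e in block_hf_energy(img, bs).items() if e > thresh}

# Toy image: textured (alternating) left half, flat right half.
img = [[(j % 2) if j < 4 else 0 for j in range(8)] for _ in range(8)]
focused = focused_blocks(img, 4, 1.0)
```

    In the paper the per-block statistics of wavelet coefficients drive a context-dependent classifier; this sketch keeps only the energy-thresholding core.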

  6. Multiscale Fractal Characterization of Hierarchical Heterogeneity in Sandstone Reservoirs (United States)

    Liu, Yanfeng; Liu, Yuetian; Sun, Lu; Liu, Jian


    Heterogeneities affecting reservoirs often develop at different scales. Previous studies have described these heterogeneities using different parameters depending on their size, and there is no single comprehensive method of reservoir evaluation that considers every scale. This paper introduces a multiscale fractal approach to quantify consistently the hierarchical heterogeneities of sandstone reservoirs. Materials taken from typical depositional patterns and aerial photography are used to represent the three main types of sandstone reservoir: turbidite, braided, and meandering river systems. Subsequent multiscale fractal dimension analysis using the Bouligand-Minkowski method characterizes well the hierarchical heterogeneity of the sandstone reservoirs. The multiscale fractal dimension provides a curve function that describes the heterogeneity at different scales. The heterogeneity of a reservoir’s internal structure decreases as the observational scale increases. The shape of a deposit’s facies is vital for quantitative determination of the sedimentation type, and thus for enhanced oil recovery. Characterization of hierarchical heterogeneity by the multiscale fractal dimension can assist reservoir evaluation, geological modeling, and even the design of well patterns.
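
    A box-counting estimate, a close relative of the Bouligand-Minkowski dimension used above, can be sketched in a few lines; the point set and box sizes are illustrative:

```python
import math

def box_count_dimension(points, sizes):
    """Box-counting estimate of fractal dimension: least-squares slope
    of log N(s) versus log(1/s), where N(s) is the number of boxes of
    side s that the point set touches."""
    xs, ys = [], []
    for s in sizes:
        boxes = {(int(x // s), int(y // s)) for x, y in points}
        xs.append(math.log(1.0 / s))
        ys.append(math.log(len(boxes)))
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((a - mx) * (b - my) for a, b in zip(xs, ys))
            / sum((a - mx) ** 2 for a in xs))

# Sanity check: a densely sampled line segment has dimension ~1.
line = [(i / 1000.0, 0.0) for i in range(1000)]
dim = box_count_dimension(line, [0.25, 0.125, 0.0625, 0.03125])
```

    Evaluating the slope over different ranges of box sizes yields a dimension-versus-scale curve, which is the kind of multiscale descriptor the paper uses to characterize hierarchical heterogeneity.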

  7. Enhancement of the efficiency of the automatic control system to control the thermal load of steam boilers fired with fuels of several types (United States)

    Ismatkhodzhaev, S. K.; Kuzishchin, V. F.


    An automatic control system (ACS) to control the thermal load in a drum-type boiler under random fluctuations in the blast-furnace and coke-oven gas consumption rates, with the control action applied to the natural gas consumption, is considered. The system provides for the use of a compensator for the basic disturbance, the blast-furnace gas consumption rate. To enhance the performance of the system, it is proposed to use more accurate second-order delay mathematical models of the channels of the controlled object, in combination with frequency-domain calculation of the controller parameters, as well as determination of the structure and parameters of the compensator considering the statistical characteristics of the disturbances and using simulation. The statistical characteristics of the random blast-furnace gas consumption signal, based on experimental data, are provided. The random signal is represented as the sum of low-frequency (LF) and high-frequency (HF) components. Models of the correlation functions and spectral densities are developed. The article presents the results of calculating the optimal settings of the control loop with the controlled variable in the form of the "heat" signal with a restricted frequency variation index, using three variants of control performance criteria, viz., the linear and quadratic integral indices under a step disturbance and the control error variance under a random disturbance of the blast-furnace gas consumption rate. It is recommended to design the compensator as a series connection of two parts, one of which corresponds to the operator inverse to the transfer function of the PI controller, i.e., in the form of a real differentiating element. This facilitates the realization of the second part of the compensator by the invariance condition, similar to transmitting the compensating signal to the object input. The results of simulation under random disturbance of the blast-furnace gas consumption are reported.

  8. An Automatic Cloud Detection Method for ZY-3 Satellite

    Directory of Open Access Journals (Sweden)

    CHEN Zhenwei


    Full Text Available Automatic cloud detection for optical satellite remote sensing images is a significant step in the production system of satellite products. For the browse images cataloged by the ZY-3 satellite, a tree-structured discriminant approach is adopted to carry out cloud detection. The image is divided into sub-images, and their features are extracted to perform classification between clouds and ground. However, due to the high complexity of clouds and surfaces and the low resolution of browse images, traditional classification algorithms based on image features have significant limitations. In view of this problem, this paper proposes an enhancement preprocessing step applied to the original sub-images before classification, in order to widen the texture difference between clouds and surfaces. Afterwards, using the second moment and first difference of the images, the feature vectors are extended in multi-scale space, and the cloud proportion in the image is then estimated through comprehensive analysis. The presented cloud detection algorithm has already been applied to the ZY-3 application system project, and practical experiment results indicate that the algorithm is capable of improving the accuracy of cloud detection significantly.

  9. Single Image Super-Resolution Based on Multi-Scale Competitive Convolutional Neural Network. (United States)

    Du, Xiaofeng; Qu, Xiaobo; He, Yifan; Guo, Di


    Deep convolutional neural networks (CNNs) are successful in single-image super-resolution. Traditional CNNs are limited to exploit multi-scale contextual information for image reconstruction due to the fixed convolutional kernel in their building modules. To restore various scales of image details, we enhance the multi-scale inference capability of CNNs by introducing competition among multi-scale convolutional filters, and build up a shallow network under limited computational resources. The proposed network has the following two advantages: (1) the multi-scale convolutional kernel provides the multi-context for image super-resolution, and (2) the maximum competitive strategy adaptively chooses the optimal scale of information for image reconstruction. Our experimental results on image super-resolution show that the performance of the proposed network outperforms the state-of-the-art methods.
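
    The maximum competitive strategy, letting the best scale win at each position, can be sketched outside a CNN with plain moving-average filters standing in for the multi-scale convolutional kernels. The scales and data are illustrative, not the trained network:

```python
def moving_average(x, k):
    """Filter response at scale k (window 2k+1), clipped at the edges:
    a stand-in for a convolutional filter of one kernel size."""
    n = len(x)
    out = []
    for i in range(n):
        lo, hi = max(0, i - k), min(n, i + k + 1)
        out.append(sum(x[lo:hi]) / (hi - lo))
    return out

def competitive_multiscale(x, scales=(1, 2, 4)):
    """Point-wise maximum over filter responses at several scales: at
    each position the scale with the strongest response wins."""
    responses = [moving_average(x, k) for k in scales]
    return [max(vals) for vals in zip(*responses)]

# A lone spike: near it the narrow filter wins; farther away only the
# wide filters still see it.
x = [0.0] * 8
x[2] = 4.0
y = competitive_multiscale(x)
```

    In the paper the competing responses come from learned convolutional kernels of different sizes and the maximum is taken inside the network; this sketch keeps only the adaptive scale-selection mechanism.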

  10. Heat and mass transfer intensification and shape optimization a multi-scale approach

    CERN Document Server


    Is heat and mass transfer intensification a new paradigm of process engineering, or just a common and old idea renamed to suit current tastes? Where might intensification occur? How can intensification be achieved? How does the shape optimization of thermal and fluidic devices lead to intensified heat and mass transfer? To answer these questions, Heat & Mass Transfer Intensification and Shape Optimization: A Multi-scale Approach clarifies the definition of intensification by highlighting the potential role of multi-scale structures, the specific interfacial area, the distribution of the driving force, the modes of energy supply, and the temporal aspects of processes. A reflection on methods of process intensification or heat and mass transfer enhancement in multi-scale structures is provided, covering porous media, heat exchangers, fluid distributors, mixers, and reactors. A multi-scale approach to achieving intensification and shape optimization is developed and clearly explained.

  11. Automatic Evaluation Of Interferograms (United States)

    Becker, Friedhelm; Meier, Gerd E. A.; Wegner, Horst


    A system for the automatic evaluation of interference patterns has been developed. After digitizing the interferograms from classical and holographic interferometers with a television digitizer and performing various picture enhancement operations, the fringe loci are extracted using a floating-threshold method. The fringes are numbered using a special scheme after the removal of any fringe disconnections, which can appear when there is insufficient contrast in the interferograms. The reconstruction of the object function from the numbered fringe field is achieved by a local polynomial least-squares approximation. Applications are given demonstrating the evaluation of interferograms of supersonic flow fields and the analysis of holographic interferograms of car tyres.

  12. Multiscale expansions in discrete world

    Indian Academy of Sciences (India)

    Multiscale expansions in discrete world. ÖMER ÜNSAL, FILIZ TASCAN∗ and MEHMET NACI ÖZER. Eskisehir Osmangazi University, Art-Science Faculty, Department of Mathematics and Computer Sciences, Eskisehir, Türkiye. ∗Corresponding author. E-mail: MS received 12 April 2013; accepted 16 ...

  13. Multiscale expansions in discrete world

    Indian Academy of Sciences (India)

    ... multiscale expansions discretely. The power of this manageable method is confirmed by applying it to two selected nonlinear Schrödinger evolution equations. This approach can also be applied to other nonlinear discrete evolution equations. All the computations have been made with the Maple computer algebra package.

  14. Multiscale expansions in discrete world

    Indian Academy of Sciences (India)

    This approach can also be applied to other nonlinear discrete evolution equations. All the computations have been made with the Maple computer algebra package. Keywords. Multiscale expansion; discrete evolution equation; modified nonlinear Schrödinger equation; third-order nonlinear Schrödinger equation; KdV equation.

  15. Multiscale Multifunctional Progressive Fracture of Composite Structures (United States)

    Chamis, C. C.; Minnetyan, L.


    A new approach is described for evaluating fracture in composite structures. This approach is independent of classical fracture-mechanics parameters such as fracture toughness. It relies on computational simulation and is programmed in a stand-alone integrated computer code. It is multiscale and multifunctional because it includes composite mechanics for the composite behavior and finite element analysis for predicting the structural response. It contains seven modules: layered composite mechanics (micro, macro, laminate), finite element analysis, an updating scheme, local fracture, global fracture, stress-based failure modes, and fracture progression. The computer code is called CODSTRAN (Composite Durability Structural ANalysis). It is used in the present paper to evaluate the global fracture of four composite shell problems and one composite built-up structure. Results show that, for the composite shells, global fracture is enhanced when internal pressure is combined with shear loads.

  16. Multiscale Thermohydrologic Model

    Energy Technology Data Exchange (ETDEWEB)

    T. Buscheck


    The purpose of the multiscale thermohydrologic model (MSTHM) is to predict the possible range of thermal-hydrologic conditions, resulting from uncertainty and variability, in the repository emplacement drifts, including the invert, and in the adjoining host rock for the repository at Yucca Mountain. Thus, the goal is to predict the range of possible thermal-hydrologic conditions across the repository; this is quite different from predicting a single expected thermal-hydrologic response. The MSTHM calculates the following thermal-hydrologic parameters: temperature, relative humidity, liquid-phase saturation, evaporation rate, air-mass fraction, gas-phase pressure, capillary pressure, and liquid- and gas-phase fluxes (Table 1-1). These thermal-hydrologic parameters are required to support "Total System Performance Assessment (TSPA) Model/Analysis for the License Application" (BSC 2004 [DIRS 168504]). The thermal-hydrologic parameters are determined as a function of position along each of the emplacement drifts and as a function of waste package type. These parameters are determined at various reference locations within the emplacement drifts, including the waste package and drip-shield surfaces and in the invert. The parameters are also determined at various defined locations in the adjoining host rock. The MSTHM uses data obtained from the data tracking numbers (DTNs) listed in Table 4.1-1. The majority of those DTNs were generated from the following analyses and model reports: (1) "UZ Flow Model and Submodels" (BSC 2004 [DIRS 169861]); (2) "Development of Numerical Grids for UZ Flow and Transport Modeling" (BSC 2004); (3) "Calibrated Properties Model" (BSC 2004 [DIRS 169857]); (4) "Thermal Conductivity of the Potential Repository Horizon" (BSC 2004 [DIRS 169854]); (5) "Thermal Conductivity of the Non-Repository Lithostratigraphic Layers"

  17. Multi-scale Mexican spotted owl (Strix occidentalis lucida) nest/roost habitat selection in Arizona and a comparison with single-scale modeling results (United States)

    Brad C. Timm; Kevin McGarigal; Samuel A. Cushman; Joseph L. Ganey


    Future habitat-selection studies will benefit from taking a multi-scale approach. In addition to potentially providing increased explanatory power and predictive capacity, multi-scale habitat models enhance our understanding of the scales at which species respond to their environment, which is critical knowledge required to implement effective...

  18. Semi-automatic fluoroscope

    International Nuclear Information System (INIS)

    Tarpley, M.W.


    Extruded aluminum-clad uranium-aluminum alloy fuel tubes must pass many quality control tests before irradiation in Savannah River Plant nuclear reactors. Nondestructive test equipment has been built to automatically detect high and low density areas in the fuel tubes using x-ray absorption techniques with a video analysis system. The equipment detects areas as small as 0.060-in. dia with 2 percent penetrameter sensitivity. These areas are graded as to size and density by an operator using electronic gages. Video image enhancement techniques permit inspection of ribbed cylindrical tubes and make possible the testing of areas under the ribs. Operation of the testing machine, the special low light level television camera, and analysis and enhancement techniques are discussed

  19. Enhancing automatic closed-loop glucose control in type 1 diabetes with an adaptive meal bolus calculator - in silico evaluation under intra-day variability. (United States)

    Herrero, Pau; Bondia, Jorge; Adewuyi, Oloruntoba; Pesl, Peter; El-Sharkawy, Mohamed; Reddy, Monika; Toumazou, Chris; Oliver, Nick; Georgiou, Pantelis


    Current prototypes of closed-loop systems for glucose control in type 1 diabetes mellitus, also referred to as artificial pancreas systems, require a pre-meal insulin bolus to compensate for delays in subcutaneous insulin absorption and avoid initial post-prandial hyperglycemia. Computing such a meal bolus is a challenging task due to the high intra-subject variability of insulin requirements. Most closed-loop systems compute this pre-meal insulin dose with a standard bolus calculation, as is commonly found in insulin pumps. However, the performance of these calculators is limited by their lack of adaptiveness in the face of dynamic changes in insulin requirements. Despite some initial attempts to include adaptation within these calculators, challenges remain. In this paper we present a new technique to automatically adapt the meal-priming bolus within an artificial pancreas. The technique consists of a novel adaptive bolus calculator based on Case-Based Reasoning and Run-To-Run control, operating within a closed-loop controller. Coordination between the adaptive bolus calculator and the controller was required to achieve the desired performance. For testing purposes, the clinically validated Imperial College Artificial Pancreas controller was employed. The proposed system was evaluated against the same system without bolus adaptation. The UVa-Padova T1DM v3.2 system was used to carry out a three-month in silico study on 11 adult and 11 adolescent virtual subjects, taking into account inter- and intra-subject variability of insulin requirements and uncertainty in carbohydrate intake. Overall, the closed-loop controller enhanced by an adaptive bolus calculator improves glycemic control when compared to its non-adaptive counterpart. In particular, the following statistically significant improvements were found (non-adaptive vs. adaptive). Adults: mean glucose 142.2 ± 9.4 vs. 131.8 ± 4.2 mg/dl; percentage of time in target [70, 180] mg/dl, 82.0 ± 7.0 vs. 89.5 ± 4
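
The run-to-run half of the adaptation can be sketched as a simple iterative dose update: after each "day", the next meal bolus is nudged in proportion to the post-meal glucose error. The gain and target values below are illustrative, not the paper's, and the case-based-reasoning component is omitted:

```python
def run_to_run_update(bolus, postprandial_glucose, target=110.0, gain=0.01):
    """One run-to-run iteration: nudge the next meal bolus toward the dose
    that would have met the post-meal glucose target.
    Units: bolus in insulin units, glucose in mg/dl; gain is illustrative."""
    error = postprandial_glucose - target        # positive -> underdosed
    return max(0.0, bolus + gain * error)        # never a negative bolus

# Three simulated "days": persistent hyperglycemia grows the meal bolus.
bolus = 4.0
for measured in (180.0, 160.0, 140.0):
    bolus = run_to_run_update(bolus, measured)
```

A real implementation would add safety constraints on the step size and, as in the paper, coordinate the update with the closed-loop controller so the two do not fight each other.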

  20. Multiscale Thermohydrologic Model

    Energy Technology Data Exchange (ETDEWEB)

    T. Buscheck


    The intended purpose of the multiscale thermohydrologic model (MSTHM) is to predict the possible range of thermal-hydrologic conditions, resulting from uncertainty and variability, in the repository emplacement drifts, including the invert, and in the adjoining host rock for the repository at Yucca Mountain. The goal of the MSTHM is to predict a reasonable range of possible thermal-hydrologic conditions within the emplacement drift. To be reasonable, this range includes the influence of waste-package-to-waste-package heat output variability relevant to the license application design, as well as the influence of uncertainty and variability in the geologic and hydrologic conditions relevant to predicting the thermal-hydrologic response in emplacement drifts. This goal is quite different from the goal of a model to predict a single expected thermal-hydrologic response. As a result, the development and validation of the MSTHM and the associated analyses using this model are focused on the goal of predicting a reasonable range of thermal-hydrologic conditions resulting from parametric uncertainty and waste-package-to-waste-package heat-output variability. Thermal-hydrologic conditions within emplacement drifts depend primarily on thermal-hydrologic conditions in the host rock at the drift wall and on the temperature difference between the drift wall and the drip-shield and waste-package surfaces. Thus, the ability to predict a reasonable range of relevant in-drift MSTHM output parameters (e.g., temperature and relative humidity) is based on valid predictions of thermal-hydrologic processes in the host rock, as well as valid predictions of heat-transfer processes between the drift wall and the drip-shield and waste-package surfaces. Because the invert contains crushed gravel derived from the host rock, the invert is, in effect, an extension of the host rock, with thermal and hydrologic properties that have been modified by virtue of the crushing (and the resulting

  1. The Magnetospheric Multiscale Mission (United States)

    Burch, James

    Magnetospheric Multiscale (MMS), a NASA four-spacecraft mission scheduled for launch in November 2014, will investigate magnetic reconnection in the boundary regions of the Earth’s magnetosphere, particularly along its dayside boundary with the solar wind and the neutral sheet in the magnetic tail. Among the important questions about reconnection that will be addressed are the following: Under what conditions can magnetic-field energy be converted to plasma energy by the annihilation of magnetic field through reconnection? How does reconnection vary with time, and what factors influence its temporal behavior? What microscale processes are responsible for reconnection? What determines the rate of reconnection? In order to accomplish its goals the MMS spacecraft must probe both those regions in which the magnetic fields are very nearly antiparallel and regions where a significant guide field exists. From previous missions we know the approximate speeds with which reconnection layers move through space to be from tens to hundreds of km/s. For electron skin depths of 5 to 10 km, the full 3D electron population (10 eV to above 20 keV) has to be sampled at rates greater than 10/s. The MMS Fast-Plasma Instrument (FPI) will sample electrons at greater than 30/s. Because the ion skin depth is larger, FPI will make full ion measurements at rates of greater than 6/s. 3D E-field measurements will be made by MMS once every ms. MMS will use an Active Spacecraft Potential Control device (ASPOC), which emits indium ions to neutralize the photoelectron current and keep the spacecraft from charging to more than +4 V. Because ion dynamics in Hall reconnection depend sensitively on ion mass, MMS includes a new-generation Hot Plasma Composition Analyzer (HPCA) that corrects problems with high proton fluxes that have prevented accurate ion-composition measurements near the dayside magnetospheric boundary. Finally, Energetic Particle Detector (EPD) measurements of electrons and

  2. On enhancing energy harvesting performance of the photovoltaic modules using an automatic cooling system and assessing its economic benefits of mitigating greenhouse effects on the environment (United States)

    Wang, Jen-Cheng; Liao, Min-Sheng; Lee, Yeun-Chung; Liu, Cheng-Yue; Kuo, Kun-Chang; Chou, Cheng-Ying; Huang, Chen-Kang; Jiang, Joe-Air


    The performance of photovoltaic (PV) modules under outdoor operation is greatly affected by their location and environmental conditions. The temperature of a PV module gradually increases as it is exposed to solar irradiation, degrading its electrical characteristics and power generation efficiency. This study adopts wireless sensor network (WSN) technology to develop an automatic water-cooling system for PV modules in order to improve their power generation efficiency. A temperature estimation method is developed to quickly and accurately estimate PV module temperatures based on weather data provided by the WSN monitoring system. An estimation method is also proposed for evaluating the electrical characteristics and output power of the PV modules, which is performed remotely via a control platform. The automatic WSN-based water-cooling mechanism is designed to prevent the PV module temperature from reaching saturation. With each PV module equipped with the WSN-based cooling system, ambient conditions are monitored automatically and the temperature of the PV module is controlled by sprinkling water on the panel surface. The field-test experiment results show an increase of approximately 17.75% in the energy harvested by the PV modules when using the proposed WSN-based cooling system.
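
The sprinkler logic implied by the abstract can be sketched as a threshold rule with hysteresis, so the pump does not chatter around a single set point. The temperature thresholds here are assumptions for illustration only:

```python
def cooling_controller(module_temp, sprinkler_on, t_on=45.0, t_off=38.0):
    """Hysteresis rule: start sprinkling above t_on (deg C), stop below t_off.
    Thresholds are illustrative, not taken from the paper."""
    if module_temp >= t_on:
        return True
    if module_temp <= t_off:
        return False
    return sprinkler_on   # inside the band: keep the current state

# A short temperature trace: the sprinkler switches on once and off once.
state = False
history = []
for temp in (30, 44, 46, 40, 37):
    state = cooling_controller(temp, state)
    history.append(state)
```

The dead band between `t_off` and `t_on` is what prevents rapid on/off cycling when the module temperature hovers near a single threshold.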

  3. Integrated Multiscale Latent Variable Regression and Application to Distillation Columns

    Directory of Open Access Journals (Sweden)

    Muddu Madakyaru


    Proper control of distillation columns requires estimating some key variables that are challenging to measure online (such as compositions), which are usually estimated using inferential models. Commonly used inferential models include latent variable regression (LVR) techniques, such as principal component regression (PCR), partial least squares (PLS), and regularized canonical correlation analysis (RCCA). Unfortunately, practical measured data are usually contaminated with errors, which degrade the prediction abilities of inferential models. Therefore, noisy measurements need to be filtered to enhance the prediction accuracy of these models. Multiscale filtering has been shown to be a powerful feature extraction tool. In this work, the advantages of multiscale filtering are utilized to enhance the prediction accuracy of LVR models by developing an integrated multiscale LVR (IMSLVR) modeling algorithm that integrates modeling and feature extraction. The idea behind the IMSLVR modeling algorithm is to filter the process data at different decomposition levels, model the filtered data from each level, and then select the LVR model that optimizes a model selection criterion. The performance of the developed IMSLVR algorithm is illustrated using three examples: one using synthetic data, one using simulated distillation column data, and one using experimental packed-bed distillation column data. All examples clearly demonstrate the effectiveness of the IMSLVR algorithm over the conventional methods.
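
The IMSLVR selection loop can be sketched with stand-ins: moving averages of increasing width play the role of the multiscale decomposition levels, a one-variable least-squares fit plays the role of the LVR model, and the level whose model yields the lowest squared error is selected. Everything below is a simplification of the paper's scheme:

```python
def moving_average(x, width):
    """Centered moving average; windows are clipped at the edges."""
    half = width // 2
    return [sum(x[max(0, i - half): i + half + 1]) /
            len(x[max(0, i - half): i + half + 1])
            for i in range(len(x))]

def fit_line(x, y):
    """Ordinary least squares for y = slope * x + intercept."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((a - mx) * (b - my) for a, b in zip(x, y)) /
             sum((a - mx) ** 2 for a in x))
    return slope, my - slope * mx

def sse(x, y, slope, intercept):
    return sum((b - (slope * a + intercept)) ** 2 for a, b in zip(x, y))

# Noisy linear data with a deterministic alternating "noise" term.
x = list(range(20))
y = [2.0 * a + 1.0 + ((-1) ** a) * 2.0 for a in x]

# Filter at several "levels", fit a model per level, keep the best fit.
best_width, best_err = None, float("inf")
for width in (1, 3, 5):
    y_filtered = moving_average(y, width)
    slope, intercept = fit_line(x, y_filtered)
    err = sse(x, y_filtered, slope, intercept)
    if err < best_err:
        best_width, best_err = width, err
```

The actual algorithm uses wavelet decompositions and multivariate LVR models, but the structure is the same: one candidate model per filtering level, then selection by a criterion.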

  4. Multiscale NTP Fuel Element Materials Simulation (United States)

    National Aeronautics and Space Administration — Project will leverage a multiscale modeling approach pioneered for light water reactor (LWR) fuels to simulate performance in a prototypical environment. The...

  5. Numerical Analysis of Multiscale Computations

    CERN Document Server

    Engquist, Björn; Tsai, Yen-Hsi R


    This book is a snapshot of current research in multiscale modeling, computations and applications. It covers fundamental mathematical theory, numerical algorithms and practical computational advice for analysing single-physics and multiphysics models containing a variety of scales in time and space. Complex fluids, porous media flow and oscillatory dynamical systems are treated in some extra depth, as are tools such as analytical and numerical homogenization and the fast multipole method.

  6. Rolling bearing fault detection and diagnosis based on composite multiscale fuzzy entropy and ensemble support vector machines (United States)

    Zheng, Jinde; Pan, Haiyang; Cheng, Junsheng


    To detect the incipient failure of rolling bearings in time and find the accurate fault location, a novel rolling bearing fault diagnosis method is proposed based on composite multiscale fuzzy entropy (CMFE) and ensemble support vector machines (ESVMs). Fuzzy entropy (FuzzyEn), an improvement of sample entropy (SampEn), is a new nonlinear method for measuring the complexity of time series. Since FuzzyEn (or SampEn) at a single scale cannot reflect complexity effectively, multiscale fuzzy entropy (MFE) is developed by defining the FuzzyEns of coarse-grained time series, which represent the system dynamics at different scales. However, MFE values are affected by the data length, especially when the data are not long enough. By combining the information of multiple coarse-grained time series at the same scale, the CMFE algorithm is proposed in this paper to enhance MFE, as well as FuzzyEn. Compared with MFE, as the scale factor increases, CMFE obtains much more stable and consistent values for a short-term time series. In this paper CMFE is employed to measure the complexity of vibration signals of rolling bearings and is applied to extract the nonlinear features hidden in the vibration signals. The physical reasons that make CMFE suitable for rolling bearing fault diagnosis are also explored. On this basis, to fulfill automatic fault diagnosis, an ensemble-SVM-based multi-classifier is constructed for the intelligent classification of fault features. Finally, the proposed fault diagnosis method is applied to experimental data analysis, and the results indicate that the proposed method can effectively distinguish different fault categories and severities of rolling bearings.
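
The composite step that distinguishes CMFE from plain MFE is easy to isolate: at scale τ, the complexity measure is averaged over all τ coarse-grained series obtained from different starting offsets. The sketch below uses a trivial stand-in measure (mean absolute successive difference) instead of fuzzy entropy to keep it short:

```python
def coarse_grain(series, scale, offset=0):
    """Non-overlapping means of length `scale`, starting at `offset`."""
    return [sum(series[i:i + scale]) / scale
            for i in range(offset, len(series) - scale + 1, scale)]

def composite_measure(series, scale, measure):
    """CMFE-style averaging: apply `measure` to the coarse-grained series
    of every starting offset, then average the results."""
    values = [measure(coarse_grain(series, scale, k)) for k in range(scale)]
    return sum(values) / len(values)

def mean_abs_diff(x):
    """Stand-in complexity measure (NOT fuzzy entropy): mean |x[i+1]-x[i]|."""
    return sum(abs(b - a) for a, b in zip(x, x[1:])) / (len(x) - 1)

series = [0, 1, 0, 1, 0, 1, 0, 1]                      # fast alternation
scale1 = composite_measure(series, 1, mean_abs_diff)   # -> 1.0
scale2 = composite_measure(series, 2, mean_abs_diff)   # -> 0.0
```

On the alternating series the measure drops to zero at scale 2 because coarse-graining averages out the fast oscillation, illustrating the scale-dependence that MFE-family methods exploit; averaging over all offsets is what stabilizes the estimate for short records.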

  7. Community effort endorsing multiscale modelling, multiscale data science and multiscale computing for systems medicine. (United States)

    Zanin, Massimiliano; Chorbev, Ivan; Stres, Blaz; Stalidzans, Egils; Vera, Julio; Tieri, Paolo; Castiglione, Filippo; Groen, Derek; Zheng, Huiru; Baumbach, Jan; Schmid, Johannes A; Basilio, José; Klimek, Peter; Debeljak, Nataša; Rozman, Damjana; Schmidt, Harald H H W


    Systems medicine holds many promises, but has so far provided only a limited number of proofs of principle. To address this roadblock, possible barriers and challenges of translating systems medicine into clinical practice need to be identified and addressed. The members of the European Cooperation in Science and Technology (COST) Action CA15120 Open Multiscale Systems Medicine (OpenMultiMed) wish to engage the scientific community of systems medicine and multiscale modelling, data science and computing to provide their feedback in a structured manner. This will result in follow-up white papers and open access resources to accelerate the clinical translation of systems medicine. © The Author 2017. Published by Oxford University Press.

  8. Differential geometry based multiscale models. (United States)

    Wei, Guo-Wei


    Large chemical and biological systems such as fuel cells, ion channels, molecular motors, and viruses are of great importance to the scientific community and public health. Typically, these complex systems in conjunction with their aquatic environment pose a fabulous challenge to theoretical description, simulation, and prediction. In this work, we propose a differential geometry based multiscale paradigm to model complex macromolecular systems, and to put macroscopic and microscopic descriptions on an equal footing. In our approach, the differential geometry theory of surfaces and geometric measure theory are employed as a natural means to couple the macroscopic continuum mechanical description of the aquatic environment with the microscopic discrete atomistic description of the macromolecule. Multiscale free energy functionals, or multiscale action functionals are constructed as a unified framework to derive the governing equations for the dynamics of different scales and different descriptions. Two types of aqueous macromolecular complexes, ones that are near equilibrium and others that are far from equilibrium, are considered in our formulations. We show that generalized Navier-Stokes equations for the fluid dynamics, generalized Poisson equations or generalized Poisson-Boltzmann equations for electrostatic interactions, and Newton's equation for the molecular dynamics can be derived by the least action principle. These equations are coupled through the continuum-discrete interface whose dynamics is governed by potential driven geometric flows. Comparison is given to classical descriptions of the fluid and electrostatic interactions without geometric flow based micro-macro interfaces. The detailed balance of forces is emphasized in the present work. We further extend the proposed multiscale paradigm to micro-macro analysis of electrohydrodynamics, electrophoresis, fuel cells, and ion channels. We derive generalized Poisson-Nernst-Planck equations that are

  9. Differential Geometry Based Multiscale Models (United States)

    Wei, Guo-Wei


    Large chemical and biological systems such as fuel cells, ion channels, molecular motors, and viruses are of great importance to the scientific community and public health. Typically, these complex systems in conjunction with their aquatic environment pose a fabulous challenge to theoretical description, simulation, and prediction. In this work, we propose a differential geometry based multiscale paradigm to model complex macromolecular systems, and to put macroscopic and microscopic descriptions on an equal footing. In our approach, the differential geometry theory of surfaces and geometric measure theory are employed as a natural means to couple the macroscopic continuum mechanical description of the aquatic environment with the microscopic discrete atomistic description of the macromolecule. Multiscale free energy functionals, or multiscale action functionals are constructed as a unified framework to derive the governing equations for the dynamics of different scales and different descriptions. Two types of aqueous macromolecular complexes, ones that are near equilibrium and others that are far from equilibrium, are considered in our formulations. We show that generalized Navier–Stokes equations for the fluid dynamics, generalized Poisson equations or generalized Poisson–Boltzmann equations for electrostatic interactions, and Newton's equation for the molecular dynamics can be derived by the least action principle. These equations are coupled through the continuum-discrete interface whose dynamics is governed by potential driven geometric flows. Comparison is given to classical descriptions of the fluid and electrostatic interactions without geometric flow based micro-macro interfaces. The detailed balance of forces is emphasized in the present work. We further extend the proposed multiscale paradigm to micro-macro analysis of electrohydrodynamics, electrophoresis, fuel cells, and ion channels. We derive generalized Poisson–Nernst–Planck equations that

  10. Mixed Generalized Multiscale Finite Element Methods and Applications

    KAUST Repository

    Chung, Eric T.


    In this paper, we present a mixed generalized multiscale finite element method (GMsFEM) for solving flow in heterogeneous media. Our approach constructs multiscale basis functions following a GMsFEM framework and couples these basis functions using a mixed finite element method, which allows us to obtain a mass conservative velocity field. To construct multiscale basis functions for each coarse edge, we design a snapshot space that consists of fine-scale velocity fields supported in a union of two coarse regions that share the common interface. The snapshot vectors have zero Neumann boundary conditions on the outer boundaries, and we prescribe their values on the common interface. We describe several spectral decompositions in the snapshot space motivated by the analysis. In the paper, we also study oversampling approaches that enhance the accuracy of mixed GMsFEM. A main idea of oversampling techniques is to introduce a small dimensional snapshot space. We present numerical results for two-phase flow and transport, without updating basis functions in time. Our numerical results show that one can achieve good accuracy with a few basis functions per coarse edge if one selects appropriate offline spaces. © 2015 Society for Industrial and Applied Mathematics.

  11. Multivariate refined composite multiscale entropy analysis

    Energy Technology Data Exchange (ETDEWEB)

    Humeau-Heurtier, Anne, E-mail:


    Multiscale entropy (MSE) has become a prevailing method to quantify the complexity of signals. MSE relies on sample entropy. However, MSE may yield imprecise complexity estimates at large scales, because sample entropy does not give precise estimates of entropy when short signals are processed. A refined composite multiscale entropy (RCMSE) has therefore recently been proposed. Nevertheless, RCMSE applies to univariate signals only. The simultaneous analysis of multi-channel (multivariate) data often outperforms studies based on univariate signals. We therefore introduce an extension of RCMSE to multivariate data. Applications of multivariate RCMSE to simulated processes reveal its better performance over the standard multivariate MSE. - Highlights: • Multiscale entropy quantifies data complexity but may be inaccurate at large scales. • A refined composite multiscale entropy (RCMSE) has therefore recently been proposed. • Nevertheless, RCMSE is adapted to univariate time series only. • We herein introduce an extension of RCMSE to multivariate data. • It shows better performance than the standard multivariate multiscale entropy.

  12. Multi-scale lines and edges in V1 and beyond: brightness, object categorization and recognition, and consciousness. (United States)

    Rodrigues, João; du Buf, J M Hans


    In this paper we present an improved model for line and edge detection in cortical area V1. This model is based on responses of simple and complex cells, and it is multi-scale with no free parameters. We illustrate the use of the multi-scale line/edge representation in different processes: visual reconstruction or brightness perception, automatic scale selection and object segregation. A two-level object categorization scenario is tested in which pre-categorization is based on coarse scales only and final categorization on coarse plus fine scales. We also present a multi-scale object and face recognition model. Processing schemes are discussed in the framework of a complete cortical architecture. The fact that brightness perception and object recognition may be based on the same symbolic image representation is an indication that the entire (visual) cortex is involved in consciousness.

  13. Laser Writing of Multiscale Chiral Polymer Metamaterials

    Directory of Open Access Journals (Sweden)

    E. P. Furlani


    A new approach to metamaterials is presented that involves laser-based patterning of novel chiral polymer media, wherein chirality is realized at two distinct length scales: intrinsically at the molecular level, and geometrically at a length scale on the order of the wavelength of the incident field. In this approach, femtosecond-pulsed laser-induced two-photon lithography (TPL) is used to pattern a photoresist-chiral polymer mixture into planar chiral shapes. Enhanced bulk chirality can be realized by tuning the wavelength-dependent chiral response at both the molecular and geometric levels to ensure an overlap of their respective spectra. The approach is demonstrated via the fabrication of a metamaterial consisting of a two-dimensional array of chiral polymer-based L-structures. The fabrication process is described, and modeling is performed to demonstrate the distinction between molecular and planar geometric chirality and the effects of the enhanced multiscale chirality on the optical response of such media. This new approach to metamaterials holds promise for the development of tunable, polymer-based optical metamaterials with low loss.

  14. Multiscale wavelet representations for mammographic feature analysis (United States)

    Laine, Andrew F.; Song, Shuwu


    This paper introduces a novel approach for accomplishing mammographic feature analysis through multiresolution representations. We show that efficient (nonredundant) representations may be identified from digital mammography and used to enhance specific mammographic features within a continuum of scale space. The multiresolution decomposition of wavelet transforms provides a natural hierarchy in which to embed an interactive paradigm for accomplishing scale-space feature analysis. Choosing wavelets (or analyzing functions) that are simultaneously localized in both space and frequency results in a powerful methodology for image analysis. Multiresolution and orientation selectivity, known biological mechanisms in primate vision, are ingrained in wavelet representations and inspire the techniques presented in this paper. Our approach includes local analysis of complete multiscale representations. Mammograms are reconstructed from wavelet coefficients enhanced by linear, exponential, and constant weight functions localized in scale space. By improving the visualization of breast pathology, we can improve the chances of early detection of breast cancers (improved quality) while requiring less time to evaluate mammograms for most patients (lower cost).
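
The enhancement scheme (reconstruct from coefficients with amplified detail weights) can be sketched in one dimension with a single low-pass/detail split standing in for the wavelet decomposition; a constant gain plays the role of the weight function:

```python
def smooth(x):
    """3-tap moving average as a crude low-pass; edge samples are kept."""
    inner = [(x[i - 1] + x[i] + x[i + 1]) / 3 for i in range(1, len(x) - 1)]
    return [x[0]] + inner + [x[-1]]

def enhance(x, gain=2.0):
    """Split into low-pass + detail, amplify the detail, recombine."""
    low = smooth(x)
    detail = [a - b for a, b in zip(x, low)]
    return [b + gain * d for b, d in zip(low, detail)]

# An isolated spike: its detail component is amplified, the flat part is not.
enhanced = enhance([0, 0, 9, 0, 0], gain=2.0)   # -> [0.0, -3.0, 15.0, -3.0, 0.0]
```

With `gain=1.0` the decomposition reconstructs the input exactly; the paper applies scale-localized linear, exponential, and constant weights to wavelet detail bands instead of this single constant gain.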

  15. Chinese license plate character segmentation using multiscale template matching (United States)

    Tian, Jiangmin; Wang, Guoyou; Liu, Jianguo; Xia, Yuanchun


    Character segmentation (CS) plays an important role in automatic license plate recognition and has been studied for decades. A method using multiscale template matching is proposed to address the problem of CS for Chinese license plates. It is carried out on a binary image integrated from maximally stable extremal region detection and Otsu thresholding. Afterward, a uniform harrow-shaped template with variable length is designed, by virtue of which a three-dimensional matching space is constructed for searching candidate segmentations. These segmentations are detected at matches with local minimum responses. Finally, the vertical boundaries of each single character are located for subsequent recognition. Experiments on a data set including 2349 license plate images of different quality levels show that the proposed method can achieve a higher accuracy at comparable time cost and is robust to images in poor conditions.
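    The idea of searching a three-dimensional (row, column, scale) matching space can be sketched generically. The following is an illustrative simplification, not the authors' harrow-shaped template or response measure: it rescales a template (here with `scipy.ndimage.zoom`, an assumption) and exhaustively scores sum-of-squared-differences matches, keeping the minimum response.

```python
import numpy as np
from scipy.ndimage import zoom

def multiscale_match(image, template, scales=(0.5, 1.0, 2.0)):
    # exhaustive SSD matching of rescaled templates over a
    # (row, column, scale) search space; returns the best (row, col, scale)
    best = (None, np.inf)
    for s in scales:
        t = zoom(template.astype(float), s, order=1)
        th, tw = t.shape
        H, W = image.shape
        if th > H or tw > W:
            continue  # rescaled template no longer fits
        for r in range(H - th + 1):
            for c in range(W - tw + 1):
                ssd = ((image[r:r + th, c:c + tw] - t) ** 2).sum()
                if ssd < best[1]:
                    best = ((r, c, s), ssd)
    return best[0]
```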

  16. A Multi-Scale Settlement Matching Algorithm Based on ARG

    Directory of Open Access Journals (Sweden)

    H. Yue


    Full Text Available Homonymous entity matching is an important part of multi-source spatial data integration, automatic updating and change detection. Considering the low accuracy of existing matching methods in dealing with matching multi-scale settlement data, an algorithm based on Attributed Relational Graph (ARG) is proposed. The algorithm first divides two settlement scenes at different scales into blocks by small-scale road network and constructs local ARGs in each block. Then it ascertains candidate sets by merging procedures and obtains the optimal matching pairs by comparing the similarity of ARGs iteratively. Finally, the corresponding relations between settlements at large and small scales are identified. At the end of this article, a demonstration is presented and the results indicate that the proposed algorithm is capable of handling sophisticated cases.
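    As a toy illustration of comparing attributed relational graphs (this is not the paper's block division or iterative matching procedure, and all names are hypothetical), a candidate node mapping can be scored by mixing node-attribute agreement with edge-structure preservation:

```python
import numpy as np

def arg_similarity(attrs_a, adj_a, attrs_b, adj_b, mapping, w=0.5):
    # score a candidate node mapping between two attributed relational graphs:
    # a weighted mix of node-attribute agreement (attributes in [0, 1])
    # and the fraction of edges preserved under the mapping
    node_sims = [1.0 - abs(attrs_a[i] - attrs_b[j]) for i, j in mapping.items()]
    node_score = float(np.mean(node_sims))
    edges_a = {(i, k) for i in adj_a for k in adj_a[i]}
    preserved = sum(1 for (i, k) in edges_a
                    if mapping.get(k) in adj_b.get(mapping.get(i), ()))
    edge_score = preserved / max(len(edges_a), 1)
    return w * node_score + (1 - w) * edge_score
```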

  17. Multiscale empirical interpolation for solving nonlinear PDEs

    KAUST Repository

    Calo, Victor M.


    In this paper, we propose a multiscale empirical interpolation method for solving nonlinear multiscale partial differential equations. The proposed method combines empirical interpolation techniques and local multiscale methods, such as the Generalized Multiscale Finite Element Method (GMsFEM). To solve nonlinear equations, the GMsFEM is used to represent the solution on a coarse grid with multiscale basis functions computed offline. Computing the GMsFEM solution involves calculating the system residuals and Jacobians on the fine grid. We use empirical interpolation concepts to evaluate these residuals and Jacobians of the multiscale system with a computational cost which is proportional to the size of the coarse-scale problem rather than the fully-resolved fine scale one. The empirical interpolation method uses basis functions which are built by sampling the nonlinear function we want to approximate a limited number of times. The coefficients needed for this approximation are computed in the offline stage by inverting an inexpensive linear system. The proposed multiscale empirical interpolation techniques: (1) divide the computation of the nonlinear function into coarse regions; (2) evaluate contributions of nonlinear functions in each coarse region taking advantage of a reduced-order representation of the solution; and (3) introduce multiscale proper-orthogonal-decomposition techniques to find appropriate interpolation vectors. We demonstrate the effectiveness of the proposed methods on several nonlinear multiscale PDEs that are solved with Newton's method and fully-implicit time marching schemes. Our numerical results show that the proposed methods provide a robust framework for solving nonlinear multiscale PDEs on a coarse grid with bounded error and significant computational cost reduction.
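    The empirical interpolation step can be sketched with the closely related discrete empirical interpolation method (DEIM): build a POD basis from snapshots of the nonlinear function, greedily select interpolation points, and then reconstruct the function everywhere from a few sampled entries. This is an illustrative numpy sketch, not the GMsFEM implementation.

```python
import numpy as np

def deim_indices(U):
    # greedy DEIM point selection for an orthonormal basis U (n x m)
    p = [int(np.argmax(np.abs(U[:, 0])))]
    for j in range(1, U.shape[1]):
        # residual of column j after interpolating it at the points chosen so far
        c = np.linalg.solve(U[p, :j], U[p, j])
        r = U[:, j] - U[:, :j] @ c
        p.append(int(np.argmax(np.abs(r))))
    return p

def deim_approximate(U, p, f):
    # reconstruct f on the whole grid from its values at the m selected points
    return U @ np.linalg.solve(U[p, :], f[p])
```

The cost of evaluating the nonlinearity then scales with the number of interpolation points, mirroring the coarse-scale cost argument in the abstract.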

  18. A multiscale dataset for understanding complex eco-hydrological processes in a heterogeneous oasis system (United States)

    Li, Xin; Liu, Shaomin; Xiao, Qin; Ma, Mingguo; Jin, Rui; Che, Tao; Wang, Weizhen; Hu, Xiaoli; Xu, Ziwei; Wen, Jianguang; Wang, Liangxu


    We introduce a multiscale dataset obtained from Heihe Watershed Allied Telemetry Experimental Research (HiWATER) in an oasis-desert area in 2012. Upscaling of eco-hydrological processes on a heterogeneous surface is a grand challenge. Progress in this field is hindered by the poor availability of multiscale observations. HiWATER is an experiment designed to address this challenge through instrumentation on hierarchically nested scales to obtain multiscale and multidisciplinary data. The HiWATER observation system consists of a flux observation matrix of eddy covariance towers, large aperture scintillometers, and automatic meteorological stations; an eco-hydrological sensor network of soil moisture and leaf area index; hyper-resolution airborne remote sensing using LiDAR, imaging spectrometer, multi-angle thermal imager, and L-band microwave radiometer; and synchronous ground measurements of vegetation dynamics and photosynthesis processes. All observational data were carefully quality controlled throughout sensor calibration, data collection, data processing, and dataset generation. The data are freely available at figshare and the Cold and Arid Regions Science Data Centre. The data should be useful for elucidating multiscale eco-hydrological processes and developing upscaling methods.

  19. Peridynamic Multiscale Finite Element Methods

    Energy Technology Data Exchange (ETDEWEB)

    Costa, Timothy [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Bond, Stephen D. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Littlewood, David John [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Moore, Stan Gerald [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)


    The problem of computing quantum-accurate design-scale solutions to mechanics problems is rich with applications and serves as the background to modern multiscale science research. The problem can be broken into component problems comprised of communicating across adjacent scales, which when strung together create a pipeline for information to travel from quantum scales to design scales. Traditionally, this involves connections between a) quantum electronic structure calculations and molecular dynamics and between b) molecular dynamics and local partial differential equation models at the design scale. The second step, b), is particularly challenging since the appropriate scales of molecular dynamics and local partial differential equation models do not overlap. The peridynamic model for continuum mechanics provides an advantage in this endeavor, as the basic equations of peridynamics are valid at a wide range of scales limiting from the classical partial differential equation models valid at the design scale to the scale of molecular dynamics. In this work we focus on the development of multiscale finite element methods for the peridynamic model, in an effort to create a mathematically consistent channel for microscale information to travel from the upper limits of the molecular dynamics scale to the design scale. In particular, we first develop a Nonlocal Multiscale Finite Element Method which solves the peridynamic model at multiple scales to include microscale information at the coarse-scale. We then consider a method that solves a fine-scale peridynamic model to build element-support basis functions for a coarse-scale local partial differential equation model, called the Mixed Locality Multiscale Finite Element Method.
Given decades of research and development into finite element codes for the local partial differential equation models of continuum mechanics there is a strong desire to couple local and nonlocal models to leverage the speed and state of the

  20. Capillary electrophoresis enhanced by automatic two-way background correction using cubic smoothing splines and multivariate data analysis applied to the characterisation of mixtures of surfactants. (United States)

    Bernabé-Zafón, Virginia; Torres-Lapasió, José R; Ortega-Gadea, Silvia; Simó-Alfonso, Ernesto F; Ramis-Ramos, Guillermo


    Mixtures of the surfactant classes coconut diethanolamide, cocamido propyl betaine and alkylbenzene sulfonate were separated by capillary electrophoresis in several media containing organic solvents and anionic solvophobic agents. Good resolution between both the surfactant classes and the homologues within the classes was achieved in a BGE containing 80 mM borate buffer of pH 8.5, 20% n-propanol and 40 mM sodium deoxycholate. Full resolution, assistance in peak assignment to the classes (including the recognition of solutes not belonging to the classes), and improvement of the signal-to-noise ratio were achieved by multivariate data analysis of the time-wavelength electropherograms. Cubic smoothing splines were used to develop an algorithm capable of automatically modelling the two-way background, which increased the sensitivity and reliability of the multivariate analysis of the corrected signal. The exclusion of significant signals from the background model was guaranteed by the conservativeness of the criteria used and the safeguards adopted all along the point selection process, where the CSS algorithm supported the addition of new points to the initially reduced background sample. Efficient background modelling made the application of multivariate deconvolution within extensive time windows possible. This increased the probability of finding quality spectra for each solute class by the orthogonal projection approach. The concentration profiles of the classes were improved by subsequent application of alternating least squares. The two-way electropherograms were automatically processed, with minimal supervision by the user, in less than 2 min. The procedure was successfully applied to the identification and quantification of the surfactants in household cleaners.
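    The core background-modelling idea (fit a heavily smoothed spline as the baseline, then subtract it) can be sketched in one dimension as follows. This is a minimal illustration using `scipy.interpolate.UnivariateSpline` (an assumption), not the paper's two-way, iteratively point-selected CSS algorithm.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

def subtract_background(x, y, smoothing):
    # fit a heavily smoothed cubic spline as the background estimate;
    # a large smoothing factor keeps narrow analyte peaks out of the baseline
    baseline = UnivariateSpline(x, y, k=3, s=smoothing)
    return y - baseline(x)
```

In the paper's setting this would be applied along both the time and wavelength directions of the data matrix, with careful exclusion of signal points from the baseline fit.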

  1. Multiscale Clock Ensembling Using Wavelets (United States)


    allows an energy decomposition of the signal as well, referred to as the wavelet variance. This variance is defined by ν²_X(τ_l) = var(W_l) (11), and it can be shown that for a very wide class of signals and for an appropriately chosen wavelet, Σ_l ν²_X(τ_l) = var(X). One such... [42nd Annual Precise Time and Time Interval (PTTI) Meeting: Multiscale Clock Ensembling Using Wavelets, Ken Senior, Naval Center

  2. Automatic fluid dispenser (United States)

    Sakellaris, P. C. (Inventor)


    Fluid automatically flows to individual dispensing units at predetermined times from a fluid supply and is available only for a predetermined interval of time after which an automatic control causes the fluid to drain from the individual dispensing units. Fluid deprivation continues until the beginning of a new cycle when the fluid is once again automatically made available at the individual dispensing units.

  3. Rapid automatic segmentation of abnormal tissue in late gadolinium enhancement cardiovascular magnetic resonance images for improved management of long-standing persistent atrial fibrillation. (United States)

    Giannakidis, Archontis; Nyktari, Eva; Keegan, Jennifer; Pierce, Iain; Suman Horduna, Irina; Haldar, Shouvik; Pennell, Dudley J; Mohiaddin, Raad; Wong, Tom; Firmin, David N


    Atrial fibrillation (AF) is the most common heart rhythm disorder. In order for late Gd enhancement cardiovascular magnetic resonance (LGE CMR) to improve AF management, accurate enhancement segmentation must be readily available. However, the computer-aided segmentation of enhancement in LGE CMR of AF is still an open question. Additionally, the number of centres that have reported successful application of LGE CMR to guide clinical AF strategies remains low, and the debate on LGE CMR's diagnostic ability for AF continues. The aim of this study is to propose a method that reliably distinguishes enhanced (abnormal) from non-enhanced (healthy) tissue within the left atrial wall of (pre-ablation and 3 months post-ablation) LGE CMR data-sets from long-standing persistent AF patients studied at our centre. Enhancement segmentation was achieved by employing thresholds benchmarked against the statistics of the whole left atrial blood-pool (LABP). The test-set cross-validation mechanism was applied to determine the input feature representation and algorithm that best predict enhancement threshold levels. Global normalized intensity threshold levels T_PRE = 1.25 and T_POST = 1.625 were found to segment enhancement in data-sets acquired pre-ablation and at 3 months post-ablation, respectively. The segmentation results were corroborated by visual inspection of LGE CMR brightness levels and one endocardial bipolar voltage map. The measured extent of pre-ablation fibrosis fell within the normal range for the specific arrhythmia phenotype. 3D volume renderings of segmented post-ablation enhancement emulated the expected ablation lesion patterns. By comparing our technique with other related approaches that proposed different threshold levels (although they also relied on reference regions from within the LABP) for segmenting enhancement in LGE CMR data-sets of AF patients, we illustrated that the cut-off levels employed by other centres
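    The thresholding rule (label a wall voxel as enhanced when its intensity, normalized against blood-pool statistics, exceeds a global cut-off) can be sketched as follows. This is a hypothetical simplification that uses the blood-pool mean as the normalizer; the paper benchmarks thresholds against LABP statistics more generally.

```python
import numpy as np

def segment_enhancement(wall, blood_pool, threshold=1.25):
    # normalize wall-voxel intensities by the mean blood-pool intensity and
    # mark voxels above a global normalized cut-off as enhanced
    return wall / blood_pool.mean() > threshold
```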

  4. A rate-dependent multi-scale crack model for concrete

    NARCIS (Netherlands)

    Karamnejad, A.; Nguyen, V.P.; Sluys, L.J.


    A multi-scale numerical approach for modeling cracking in heterogeneous quasi-brittle materials under dynamic loading is presented. In the model, a discontinuous crack model is used at macro-scale to simulate fracture and a gradient-enhanced damage model has been used at meso-scale to simulate

  5. Expected Navigation Flight Performance for the Magnetospheric Multiscale (MMS) Mission (United States)

    Olson, Corwin; Wright, Cinnamon; Long, Anne


    The Magnetospheric Multiscale (MMS) mission consists of four formation-flying spacecraft placed in highly eccentric elliptical orbits about the Earth. The primary scientific mission objective is to study magnetic reconnection within the Earth's magnetosphere. The baseline navigation concept is the independent estimation of each spacecraft state using GPS pseudorange measurements (referenced to an onboard Ultra Stable Oscillator) and accelerometer measurements during maneuvers. State estimation for the MMS spacecraft is performed onboard each vehicle using the Goddard Enhanced Onboard Navigation System, which is embedded in the Navigator GPS receiver. This paper describes the latest efforts to characterize expected navigation flight performance using upgraded simulation models derived from recent analyses.

  6. An automated vessel segmentation of retinal images using multiscale vesselness

    International Nuclear Information System (INIS)

    Ben Abdallah, M.; Malek, J.; Tourki, R.; Krissian, K.


    The ocular fundus image can provide information on pathological changes caused by local ocular diseases and early signs of certain systemic diseases, such as diabetes and hypertension. Automated analysis and interpretation of fundus images has become a necessary and important diagnostic procedure in ophthalmology. The extraction of blood vessels from retinal images is an important and challenging task in medical analysis and diagnosis. In this paper, we introduce an implementation of anisotropic diffusion which reduces noise while better preserving small structures such as vessels in 2D images. A vessel detection filter, based on a multi-scale vesselness function, is then applied to enhance vascular structures.
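    A Frangi-style multiscale vesselness filter of the kind referenced above can be sketched as follows. This is an illustrative simplification (Hessian by finite differences, parameter values assumed), not the authors' implementation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def vesselness_2d(image, sigmas=(1.0, 2.0, 3.0), beta=0.5, c=15.0):
    # multiscale Frangi-style vesselness for bright tubular structures:
    # take the maximum response over a set of Gaussian scales
    image = image.astype(float)
    best = np.zeros_like(image)
    for sig in sigmas:
        sm = gaussian_filter(image, sig)
        gy, gx = np.gradient(sm)
        hyy, _ = np.gradient(gy)
        hxy, hxx = np.gradient(gx)
        # scale-normalized Hessian entries
        hxx, hxy, hyy = sig**2 * hxx, sig**2 * hxy, sig**2 * hyy
        # eigenvalues of the symmetric Hessian [[hxx, hxy], [hxy, hyy]]
        mu = (hxx + hyy) / 2.0
        tmp = np.sqrt(((hxx - hyy) / 2.0) ** 2 + hxy ** 2)
        l1, l2 = mu + tmp, mu - tmp
        lam1 = np.where(np.abs(l1) <= np.abs(l2), l1, l2)  # smaller magnitude
        lam2 = np.where(np.abs(l1) > np.abs(l2), l1, l2)   # larger magnitude
        rb2 = (lam1 / (np.abs(lam2) + 1e-12)) ** 2          # blobness measure
        s2 = lam1 ** 2 + lam2 ** 2                          # structureness
        v = np.exp(-rb2 / (2 * beta ** 2)) * (1.0 - np.exp(-s2 / (2 * c ** 2)))
        v[lam2 > 0] = 0.0  # keep only bright-on-dark ridges
        best = np.maximum(best, v)
    return best
```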

  7. A Tensor-Product-Kernel Framework for Multiscale Neural Activity Decoding and Control (United States)

    Li, Lin; Brockmeier, Austin J.; Choi, John S.; Francis, Joseph T.; Sanchez, Justin C.; Príncipe, José C.


    Brain machine interfaces (BMIs) have attracted intense attention as a promising technology for directly interfacing computers or prostheses with the brain's motor and sensory areas, thereby bypassing the body. The availability of multiscale neural recordings including spike trains and local field potentials (LFPs) brings potential opportunities to enhance computational modeling by enriching the characterization of the neural system state. However, heterogeneity in data type (spike timing versus continuous amplitude signals) and spatiotemporal scale complicates the model integration of multiscale neural activity. In this paper, we propose a tensor-product-kernel-based framework to integrate the multiscale activity and exploit the complementary information available in multiscale neural activity. This provides a common mathematical framework for incorporating signals from different domains. The approach is applied to the problem of neural decoding and control. For neural decoding, the framework is able to identify the nonlinear functional relationship between the multiscale neural responses and the stimuli using general purpose kernel adaptive filtering. In a sensory stimulation experiment, the tensor-product-kernel decoder outperforms decoders that use only a single neural data type. In addition, an adaptive inverse controller for delivering electrical microstimulation patterns that utilizes the tensor-product kernel achieves promising results in emulating the responses to natural stimulation. PMID:24829569
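    The core construction, a tensor-product kernel formed as the elementwise (Schur) product of per-modality Gram matrices, can be sketched as follows. Feature layouts and parameters are hypothetical, and plain kernel ridge regression stands in for the paper's kernel adaptive filtering.

```python
import numpy as np

def rbf_gram(A, B, gamma):
    # Gaussian kernel Gram matrix between row-wise sample arrays A and B
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-gamma * d2)

def tensor_product_gram(Ka, Kb):
    # elementwise product of per-modality Gram matrices: by the Schur product
    # theorem this is again a valid kernel on the joint (spike, LFP) space
    return Ka * Kb

def kernel_ridge_fit(K, y, lam=1e-6):
    # train kernel ridge regression coefficients on the joint kernel
    return np.linalg.solve(K + lam * np.eye(len(y)), y)
```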

  8. Multi-scale surface topography to minimize adherence and viability of nosocomial drug-resistant bacteria. (United States)

    Hasan, Jafar; Jain, Shubham; Padmarajan, Rinsha; Purighalla, Swathi; Sambandamurthy, Vasan K; Chatterjee, Kaushik


    Toward minimizing bacterial colonization of surfaces, we present a one-step etching technique that renders aluminum alloys with micro- and nano-scale roughness. Such a multi-scale surface topography exhibited enhanced antibacterial effect against a wide range of pathogens. Multi-scale topography of commercial-grade pure aluminum killed 97% of Escherichia coli and 28% of Staphylococcus aureus cells in comparison to 7% and 3%, respectively, on the smooth surfaces. Multi-scale topography on Al 5052 surface was shown to kill 94% of adhered E. coli cells. The microscale features on the etched Al 1200 alloy were not found to be significantly bactericidal, but shown to decrease the adherence of S. aureus cells by one-third. The fabrication method is easily scalable for industrial applications. Analysis of roughness parameters determined by atomic force microscopy revealed a set of significant parameters that can yield a highly bactericidal surface; thereby providing the design to make any surface bactericidal irrespective of the method of fabrication. The multi-scale roughness of Al 5052 alloy was also highly bactericidal to nosocomial isolates of E. coli, K. pneumoniae and P. aeruginosa. We envisage the potential application of engineered surfaces with multi-scale topography to minimize the spread of nosocomial infections.

  9. Navigation Operations for the Magnetospheric Multiscale Mission (United States)

    Long, Anne; Farahmand, Mitra; Carpenter, Russell


    The Magnetospheric Multiscale (MMS) mission employs four identical spinning spacecraft flying in highly elliptical Earth orbits. These spacecraft will fly in a series of tetrahedral formations with separations of less than 10 km. MMS navigation operations use onboard navigation to satisfy the mission definitive orbit and time determination requirements and, in addition, to minimize operations cost and complexity. The onboard navigation subsystem consists of the Navigator GPS receiver with Goddard Enhanced Onboard Navigation System (GEONS) software, and an Ultra-Stable Oscillator. The four MMS spacecraft are operated from a single Mission Operations Center, which includes a Flight Dynamics Operations Area (FDOA) that supports MMS navigation operations, as well as maneuver planning, conjunction assessment and attitude ground operations. The System Manager component of the FDOA automates routine operations processes. The GEONS Ground Support System component of the FDOA provides the tools needed to support MMS navigation operations. This paper provides an overview of the MMS mission and associated navigation requirements and constraints and discusses MMS navigation operations and the associated MMS ground system components built to support navigation-related operations.

  10. Processing Digital Imagery to Enhance Perceptions of Realism (United States)

    Woodell, Glenn A.; Jobson, Daniel J.; Rahman, Zia-ur


    Multi-scale retinex with color restoration (MSRCR) is a method of processing digital image data based on Edwin Land's retinex (retina + cortex) theory of human color vision. An outgrowth of basic scientific research and its application to NASA's remote-sensing mission, MSRCR is embodied in a general-purpose algorithm that greatly improves the perception of visual realism and the quantity and quality of perceived information in a digitized image. In addition, the MSRCR algorithm includes provisions for automatic corrections to accelerate and facilitate what could otherwise be a tedious image-editing process. The MSRCR algorithm has been, and is expected to continue to be, the basis for development of commercial image-enhancement software designed to extend and refine its capabilities for diverse applications.
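    The multiscale retinex core, an average over scales of the log image minus the log of its Gaussian-blurred surround, can be sketched as follows. This single-channel sketch omits MSRCR's color restoration and gain/offset steps, and the default scale values are conventional assumptions rather than NASA's published parameters.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def multiscale_retinex(channel, sigmas=(15, 80, 250)):
    # MSR: average of single-scale retinex outputs
    # log(I) - log(Gaussian surround of I), one per scale
    x = channel.astype(float) + 1.0  # avoid log(0)
    out = np.zeros_like(x)
    for s in sigmas:
        out += np.log(x) - np.log(gaussian_filter(x, s) + 1e-6)
    return out / len(sigmas)
```

A uniform image maps to (near) zero everywhere; local deviations from the smoothed surround are what survive, which is the dynamic-range-compression behaviour the abstract describes.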

  11. Multiscale soil-landscape process modeling

    NARCIS (Netherlands)

    Schoorl, J.M.; Veldkamp, A.


    The general objective of this chapter is to illustrate the role of soils and geomorphological processes in the multiscale soil-landscape context. Included in this context are the fourth (temporal) dimension and the human role (fifth dimension).

  12. Collaborating for Multi-Scale Chemical Science

    Energy Technology Data Exchange (ETDEWEB)

    William H. Green


    Advanced model reduction methods were developed and integrated into the CMCS multiscale chemical science simulation software. The new technologies were used to simulate HCCI engines and burner flames with exceptional fidelity.

  13. Multiscale Modeling of Hall Thrusters Project (United States)

    National Aeronautics and Space Administration — New multiscale modeling capability for analyzing advanced Hall thrusters is proposed. This technology offers NASA the ability to reduce development effort of new...

  14. Multiscale Modeling of Wear Degradation

    KAUST Repository

    Moraes, Alvaro


    Cylinder liners of diesel engines used for marine propulsion are naturally subjected to a wear process, and may fail when their wear exceeds a specified limit. Since failures often represent high economic costs, it is of utmost importance to predict and avoid them. In this work [4], we model the wear process using a pure jump process. Therefore, the inference goal here is to estimate the number of possible jumps, their sizes, and the coefficients and shapes of the jump intensities. We propose a multiscale approach for the inference problem that can be seen as an indirect inference scheme. We found that using a Gaussian approximation based on moment expansions, it is possible to accurately estimate the jump intensities and the jump amplitudes. We obtained results equivalent to the state of the art but using a simpler and less expensive approach.
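    A pure jump wear process of the kind described can be sketched as a compound Poisson simulation; the constant rate and exponential jump-size distribution below are hypothetical stand-ins for the inferred intensities.

```python
import numpy as np

def simulate_wear(rate, jump_mean, t_end, rng):
    # pure jump wear process: exponential waiting times between jumps,
    # exponentially distributed (non-negative) jump sizes that accumulate
    t, wear = 0.0, 0.0
    times, levels = [0.0], [0.0]
    while True:
        t += rng.exponential(1.0 / rate)
        if t > t_end:
            break
        wear += rng.exponential(jump_mean)
        times.append(t)
        levels.append(wear)
    return np.array(times), np.array(levels)
```

Failure prediction then amounts to estimating when this accumulating path first crosses the specified wear limit.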

  17. Multiscale modelling of DNA mechanics. (United States)

    Dršata, Tomáš; Lankaš, Filip


    Mechanical properties of DNA are important not only in a wide range of biological processes but also in the emerging field of DNA nanotechnology. We review some of the recent developments in modeling these properties, emphasizing the multiscale nature of the problem. Modern atomic resolution, explicit solvent molecular dynamics simulations have contributed to our understanding of DNA fine structure and conformational polymorphism. These simulations may serve as data sources to parameterize rigid base models which themselves have undergone major development. A consistent buildup of larger entities involving multiple rigid bases enables us to describe DNA at more global scales. Free energy methods to impose large strains on DNA, as well as bead models and other approaches, are also briefly discussed.

  18. Wavelets and multiscale signal processing

    CERN Document Server

    Cohen, Albert


    Since their appearance in mid-1980s, wavelets and, more generally, multiscale methods have become powerful tools in mathematical analysis and in applications to numerical analysis and signal processing. This book is based on "Ondelettes et Traitement Numerique du Signal" by Albert Cohen. It has been translated from French by Robert D. Ryan and extensively updated by both Cohen and Ryan. It studies the existing relations between filter banks and wavelet decompositions and shows how these relations can be exploited in the context of digital signal processing. Throughout, the book concentrates on the fundamentals. It begins with a chapter on the concept of multiresolution analysis, which contains complete proofs of the basic results. The description of filter banks that are related to wavelet bases is elaborated in both the orthogonal case (Chapter 2), and in the biorthogonal case (Chapter 4). The regularity of wavelets, how this is related to the properties of the filters and the importance of regularity for t...

  19. Multiscale modeling of pedestrian dynamics

    CERN Document Server

    Cristiani, Emiliano; Tosin, Andrea


    This book presents mathematical models and numerical simulations of crowd dynamics. The core topic is the development of a new multiscale paradigm, which bridges the microscopic and macroscopic scales taking the most from each of them for capturing the relevant clues of complexity of crowds. The background idea is indeed that most of the complex trends exhibited by crowds are due to an intrinsic interplay between individual and collective behaviors. The modeling approach promoted in this book pursues actively this intuition and profits from it for designing general mathematical structures susceptible of application also in fields different from the inspiring original one. The book considers also the two most traditional points of view: the microscopic one, in which pedestrians are tracked individually, and the macroscopic one, in which pedestrians are assimilated to a continuum. Selected existing models are critically analyzed. The work is addressed to researchers and graduate students.

  20. Multiphysics/multiscale multifluid computations

    International Nuclear Information System (INIS)

    Yadigaroglu, George


    Regarding experimentation, interesting examples of multi-scale approaches are found: small-scale experiments to understand the mechanisms of counter-current flow limitation (CCFL), such as the growth of instabilities on films, droplet entrainment, etc.; meso-scale experiments to quantify the CCFL conditions in typical geometries such as tubes and gaps between parallel plates; and finally full-scale experimentation in a typical reactor geometry - the UPTF tests. Another example is the mixing of the atmosphere produced by plumes and jets in a reactor containment: one first needs basic turbulence information that can be obtained at the microscopic level; then medium-scale experiments to understand the behaviour of jets and plumes; finally, reactor-scale tests can be conducted in facilities such as PANDA at PSI in Switzerland to study the phenomena at large scale.

  1. Integrated multi-scale modelling and simulation of nuclear fuels

    International Nuclear Information System (INIS)

    Valot, C.; Bertolus, M.; Masson, R.; Malerba, L.; Rachid, J.; Besmann, T.; Phillpot, S.; Stan, M.


    This chapter aims at discussing the objectives, implementation and integration of multi-scale modelling approaches applied to nuclear fuel materials. We will first show why the multi-scale modelling approach is required, due to the nature of the materials and by the phenomena involved under irradiation. We will then present the multiple facets of multi-scale modelling approach, while giving some recommendations with regard to its application. We will also show that multi-scale modelling must be coupled with appropriate multi-scale experiments and characterisation. Finally, we will demonstrate how multi-scale modelling can contribute to solving technology issues. (authors)

  2. A concurrent multiscale micromorphic molecular dynamics

    International Nuclear Information System (INIS)

    Li, Shaofan; Tong, Qi


    In this work, we have derived a multiscale micromorphic molecular dynamics (MMMD) from first principle to extend the (Andersen)-Parrinello-Rahman molecular dynamics to mesoscale and continuum scale. The multiscale micromorphic molecular dynamics is a concurrent three-scale dynamics that couples a fine scale molecular dynamics, a mesoscale micromorphic dynamics, and a macroscale nonlocal particle dynamics together. By choosing proper statistical closure conditions, we have shown that the original Andersen-Parrinello-Rahman molecular dynamics is the homogeneous and equilibrium case of the proposed multiscale micromorphic molecular dynamics. Specifically, we have shown that the Andersen-Parrinello-Rahman molecular dynamics can be rigorously formulated and justified from first principle, and that its general inhomogeneous case, i.e., the three-scale concurrent multiscale micromorphic molecular dynamics, can take into account macroscale continuum mechanics boundary conditions without the limitation of atomistic or periodic boundary conditions. The discovered multiscale structure and the corresponding multiscale dynamics reveal a seamless transition from atomistic scale to continuum scale and the intrinsic coupling mechanism among them based on a first principle formulation.

  3. Automatic Fiscal Stabilizers

    Directory of Open Access Journals (Sweden)

    Narcis Eduard Mitu


    Policies or institutions (built into an economic system) that automatically tend to dampen economic cycle fluctuations in income, employment, etc., without direct government intervention. For example, in boom times, progressive income tax automatically reduces the money supply as incomes and spending rise. Similarly, in recessionary times, payment of unemployment benefits injects more money into the system and stimulates demand. Also called automatic stabilizers or built-in stabilizers.

  4. Automatic differentiation bibliography

    Energy Technology Data Exchange (ETDEWEB)

    Corliss, G.F. (comp.)


    This is a bibliography of work related to automatic differentiation. Automatic differentiation is a technique for the fast, accurate propagation of derivative values using the chain rule. It is neither symbolic nor numeric. Automatic differentiation is a fundamental tool for scientific computation, with applications in optimization, nonlinear equations, nonlinear least squares approximation, stiff ordinary differential equations, partial differential equations, continuation methods, and sensitivity analysis. This report is an updated version of the bibliography which originally appeared in Automatic Differentiation of Algorithms: Theory, Implementation, and Application.

  5. Distributed multiscale computing with MUSCLE 2, the Multiscale Coupling Library and Environment

    NARCIS (Netherlands)

    Borgdorff, J.; Mamonski, M.; Bosak, B.; Kurowski, K.; Ben Belgacem, M.; Chopard, B.; Groen, D.; Coveney, P.V.; Hoekstra, A.G.


    We present the Multiscale Coupling Library and Environment: MUSCLE 2. This multiscale component-based execution environment has a simple-to-use API for Java, C++, C, Python and Fortran, compatible with MPI, OpenMP and threaded codes. We demonstrate its local and distributed computing capabilities and

  6. Multiscale Model Reduction with Generalized Multiscale Finite Element Methods in Geomathematics

    KAUST Repository

    Efendiev, Yalchin R.


    In this chapter, we discuss multiscale model reduction using Generalized Multiscale Finite Element Methods (GMsFEM) in a number of geomathematical applications. GMsFEM has been recently introduced (Efendiev et al. 2012) and applied to various problems. In the current chapter, we consider some of these applications and outline the basic methodological concepts.

  7. A mathematical framework for multiscale science and engineering : the variational multiscale method and interscale transfer operators.

    Energy Technology Data Exchange (ETDEWEB)

    Wagner, Gregory John (Sandia National Laboratories, Livermore, CA); Collis, Samuel Scott; Templeton, Jeremy Alan (Sandia National Laboratories, Livermore, CA); Lehoucq, Richard B.; Parks, Michael L.; Jones, Reese E. (Sandia National Laboratories, Livermore, CA); Silling, Stewart Andrew; Scovazzi, Guglielmo; Bochev, Pavel B.


    This report is a collection of documents written as part of the Laboratory Directed Research and Development (LDRD) project A Mathematical Framework for Multiscale Science and Engineering: The Variational Multiscale Method and Interscale Transfer Operators. We present developments in two categories of multiscale mathematics and analysis. The first, continuum-to-continuum (CtC) multiscale, includes problems that allow application of the same continuum model at all scales with the primary barrier to simulation being computing resources. The second, atomistic-to-continuum (AtC) multiscale, represents applications where detailed physics at the atomistic or molecular level must be simulated to resolve the small scales, but the effect on and coupling to the continuum level is frequently unclear.

  8. Automatic control systems engineering

    International Nuclear Information System (INIS)

    Shin, Yun Gi


    This book describes automatic control for electrical and electronic systems, covering the history of automatic control, the Laplace transform, block diagrams and signal flow diagrams, electrometers, linearization of systems, the state space, state-space analysis of electric systems, sensors, hydraulic control systems, stability, the time response of linear dynamic systems, the concept of the root locus, the procedure for drawing a root locus, frequency response, and the design of control systems.

  9. Neural Bases of Automaticity (United States)

    Servant, Mathieu; Cassey, Peter; Woodman, Geoffrey F.; Logan, Gordon D.


    Automaticity allows us to perform tasks in a fast, efficient, and effortless manner after sufficient practice. Theories of automaticity propose that, across practice, processing transitions from being controlled by working memory to being controlled by long-term memory retrieval. Recent event-related potential (ERP) studies have sought to test this…

  10. Focusing Automatic Code Inspections

    NARCIS (Netherlands)

    Boogerd, C.J.


    Automatic Code Inspection tools help developers in early detection of defects in software. A well-known drawback of many automatic inspection approaches is that they yield too many warnings and require a clearer focus. In this thesis, we provide such focus by proposing two methods to prioritize

  11. Automatic differentiation of functions

    International Nuclear Information System (INIS)

    Douglas, S.R.


    Automatic differentiation is a method of computing derivatives of functions to any order in any number of variables. The functions must be expressible as combinations of elementary functions. When evaluated at specific numerical points, the derivatives have no truncation error and are automatically found. The method is illustrated by simple examples. Source code in FORTRAN is provided
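    The idea described above can be sketched in a few lines of Python (rather than the FORTRAN the report provides): forward-mode automatic differentiation propagates exact derivative values through elementary operations via the chain rule. The `Dual` class below is an illustrative sketch, not the report's code.

```python
# Forward-mode automatic differentiation with dual numbers: each value
# carries its derivative, and the chain rule is applied exactly at every
# elementary operation, so there is no truncation error.
import math

class Dual:
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.der + other.der)
    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # product rule
        return Dual(self.val * other.val,
                    self.der * other.val + self.val * other.der)
    __rmul__ = __mul__

def sin(x):
    # chain rule for an elementary function
    return Dual(math.sin(x.val), math.cos(x.val) * x.der)

def derivative(f, x0):
    # seed the input's derivative with 1.0 and read off the output's
    return f(Dual(x0, 1.0)).der

# d/dx [x * sin(x)] = sin(x) + x*cos(x), evaluated at x = 2
print(derivative(lambda x: x * sin(x), 2.0))
```

    Higher orders and more variables follow the same pattern with longer derivative tuples.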


    African Journals Online (AJOL)

    Both the nursing staff shortage and the need for precise control in the administration of dangerous drugs intravenously have led to the development of various devices to achieve an automatic system. The continuous automatic control of the drip rate eliminates errors due to any physical effect such as movement of the ...

  13. Automatic Camera Control

    DEFF Research Database (Denmark)

    Burelli, Paolo; Preuss, Mike


    Automatically generating computer animations is a challenging and complex problem with applications in games and film production. In this paper, we investigate how to translate a shot list for a virtual scene into a series of virtual camera configurations, i.e., automatically controlling the virtual...

  14. Multiscale brain-machine interface decoders. (United States)

    Han-Lin Hsieh; Shanechi, Maryam M


    Brain-machine interfaces (BMI) have largely used a single scale of neural activity, e.g., spikes or electrocorticography (ECoG), as their control signal. New technology allows for simultaneous recording of multiple scales of neural activity, from spikes to local field potentials (LFP) and ECoG. These advances introduce the new challenge of modeling and decoding multiple scales of neural activity jointly. Such multiscale decoding is challenging for two reasons. First, spikes are discrete-valued and ECoG/LFP are continuous-valued, resulting in fundamental differences in statistical characteristics. Second, the time-scales of these signals are different, with spikes having a millisecond time-scale and ECoG/LFP having much slower time-scales on the order of tens of milliseconds. Here we develop a new multiscale modeling and decoding framework that addresses these challenges. Our multiscale decoder extracts information from ECoG/LFP in addition to spikes, while operating at the fast time-scale of the spikes. The multiscale decoder specializes to a Kalman filter (KF) when no spikes are available, or to a point process filter (PPF) when no ECoG/LFP is available. Using closed-loop BMI simulations, we show that compared to PPF decoding of spikes alone or KF decoding of LFP/ECoG alone, the multiscale decoder significantly improves the accuracy and error performance of BMI control and runs at the fast millisecond time-scale of the spikes. This new multiscale modeling and decoding framework has the potential to improve BMI control using simultaneous multiscale neural activity.
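    The Kalman-filter special case mentioned above (continuous-valued LFP/ECoG observations only) can be sketched as follows. This is a generic 1-D KF with an invented scalar state model and invented parameter values, not the authors' decoder.

```python
# Minimal 1-D Kalman filter: hypothetical scalar model
#   x_t = a * x_{t-1} + w,  w ~ N(0, q)   (latent kinematic state)
#   y_t = c * x_t + v,      v ~ N(0, r)   (continuous field observation)
def kalman_step(x, P, y, a=0.99, c=1.0, q=0.01, r=0.1):
    # predict
    x_pred = a * x
    P_pred = a * a * P + q
    # update with the continuous observation y
    K = P_pred * c / (c * c * P_pred + r)   # Kalman gain
    x_new = x_pred + K * (y - c * x_pred)
    P_new = (1.0 - K * c) * P_pred
    return x_new, P_new

# run a few updates on toy observations
x, P = 0.0, 1.0
for y in [0.5, 0.6, 0.55, 0.62]:
    x, P = kalman_step(x, P, y)
print(x, P)
```

    The full multiscale decoder additionally folds in point-process (spike) likelihoods at each millisecond step, which this sketch omits.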

  15. Gradient design for liquid chromatography using multi-scale optimization. (United States)

    López-Ureña, S; Torres-Lapasió, J R; Donat, R; García-Alvarez-Coque, M C


    In reversed phase-liquid chromatography, the usual solution to the "general elution problem" is the application of gradient elution with programmed changes of organic solvent (or other properties). A correct quantification of chromatographic peaks in liquid chromatography requires well resolved signals in a proper analysis time. When the complexity of the sample is high, the gradient program should be accommodated to the local resolution needs of each analyte. This makes the optimization of such situations rather troublesome, since enhancing the resolution for a given analyte may imply a collateral worsening of the resolution of other analytes. The aim of this work is to design multi-linear gradients that maximize the resolution, while fulfilling some restrictions: all peaks should be eluted before a given maximal time, the gradient should be flat or increasing, and sudden changes close to eluting peaks are penalized. Consequently, an equilibrated baseline resolution for all compounds is sought. This goal is achieved by splitting the optimization problem into a multi-scale framework. At each scale κ, an optimization problem is solved with N_κ ≈ 2^κ variables that are used to build the gradients. The N_κ variables define cubic splines written in terms of a B-spline basis. This allows expressing gradients as polygonals of M points approximating the splines. The cubic splines are built using subdivision schemes, a technique for the fast generation of smooth curves, compatible with the multi-scale framework. Owing to the nature of the problem and the presence of multiple local maxima, the algorithm used in the optimization problem at each scale κ should be "global", such as the pattern-search algorithm. The multi-scale optimization approach is successfully applied to find the best multi-linear gradient for resolving a mixture of amino acid derivatives. Copyright © 2017 Elsevier B.V. All rights reserved.
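    The coarse-to-fine strategy described above can be sketched as follows. The objective here is a toy least-squares fit of a piecewise-linear profile (standing in for chromatographic resolution), the control points double at each scale, and the pattern search is a minimal coordinate version; all names and numbers are illustrative.

```python
# Coarse-to-fine optimisation: at scale k we optimise ~2**k control points
# of a piecewise-linear profile, then upsample and refine at the next scale.
def interp(points, n):
    # linearly upsample a list of control points to n samples
    out = []
    for i in range(n):
        t = i * (len(points) - 1) / (n - 1)
        j = min(int(t), len(points) - 2)
        frac = t - j
        out.append(points[j] * (1 - frac) + points[j + 1] * frac)
    return out

def objective(points, target):
    # squared error between the interpolated profile and the target
    curve = interp(points, len(target))
    return sum((c - t) ** 2 for c, t in zip(curve, target))

def pattern_search(points, target, step=0.5, tol=1e-3):
    # minimal derivative-free coordinate search with step halving
    while step > tol:
        improved = False
        for i in range(len(points)):
            for d in (+step, -step):
                trial = points[:]
                trial[i] += d
                if objective(trial, target) < objective(points, target):
                    points, improved = trial, True
        if not improved:
            step /= 2.0
    return points

target = [0.1, 0.2, 0.4, 0.45, 0.7, 0.9, 0.95, 1.0]   # hypothetical profile
points = [0.0, 1.0]                                    # coarsest scale: 2 variables
while True:
    points = pattern_search(points, target)
    if len(points) >= len(target):
        break
    points = interp(points, 2 * len(points))           # refine to the next scale
print(objective(points, target))
```

    The paper's version additionally enforces monotonicity and elution-time constraints and builds smooth splines via subdivision; this sketch keeps only the scale-doubling skeleton.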

  16. An automatic computer-aided detection scheme for pneumoconiosis on digital chest radiographs. (United States)

    Yu, Peichun; Xu, Hao; Zhu, Ying; Yang, Chao; Sun, Xiwen; Zhao, Jun


    This paper presents an automatic computer-aided detection scheme on digital chest radiographs to detect pneumoconiosis. Firstly, the lung fields are segmented from a digital chest X-ray image by using the active shape model method. Then, the lung fields are subdivided into six non-overlapping regions, according to Chinese diagnosis criteria of pneumoconiosis. The multi-scale difference filter bank is applied to the chest image to enhance the details of the small opacities, and the texture features are calculated from each region of the original and the processed images, respectively. After extracting the most relevant ones from the feature sets, support vector machine classifiers are utilized to separate the samples into the normal and the abnormal sets. Finally, the final classification is performed by the chest-based report-out and the classification probability values of six regions. Experiments are conducted on randomly selected images from our chest database. Both the training and the testing sets have 300 normal and 125 pneumoconiosis cases. In the training phase, training models and weighting factors for each region are derived. We evaluate the scheme using the full feature vectors or the selected feature vectors of the testing set. The results show that the classification performances are high. Compared with the previous methods, our fully automated scheme has a higher accuracy and a more convenient interaction. The scheme is very helpful to mass screening of pneumoconiosis in clinic.
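    The multi-scale difference filter bank step can be illustrated in one dimension: blur with box filters of increasing width and take differences between adjacent scales, so small structures (the analogue of small opacities) survive in the fine bands. A hedged sketch, not the paper's 2-D implementation:

```python
# 1-D multi-scale difference filter bank via box blurs of growing width.
def box_blur(x, w):
    # box filter of width w with edge clamping
    half = w // 2
    out = []
    for i in range(len(x)):
        window = x[max(0, i - half):i + half + 1]
        out.append(sum(window) / len(window))
    return out

def difference_bands(x, widths=(1, 3, 7)):
    # differences between adjacent scales: fine bands keep small structures
    blurred = [box_blur(x, w) for w in widths]
    return [[a - b for a, b in zip(blurred[k], blurred[k + 1])]
            for k in range(len(widths) - 1)]

signal = [0.0] * 10 + [1.0] + [0.0] * 10   # one small "opacity"
bands = difference_bands(signal)
print(max(bands[0]))                       # the bump survives in the fine band
```

    Enhancement then amplifies the fine bands before recombination; texture features are computed on the enhanced result.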

  17. Multiscale analysis and computation for flows in heterogeneous media

    Energy Technology Data Exchange (ETDEWEB)

    Efendiev, Yalchin [Texas A & M Univ., College Station, TX (United States); Hou, T. Y. [California Inst. of Technology (CalTech), Pasadena, CA (United States); Durlofsky, L. J. [Stanford Univ., CA (United States); Tchelepi, H. [Stanford Univ., CA (United States)


    Our work in this project is aimed at making fundamental advances in multiscale methods for flow and transport in highly heterogeneous porous media. The main thrust of this research is to develop a systematic multiscale analysis and efficient coarse-scale models that can capture global effects and extend existing multiscale approaches to problems with additional physics and uncertainties. A key emphasis is on problems without an apparent scale separation. Multiscale solution methods are currently under active investigation for the simulation of subsurface flow in heterogeneous formations. These procedures capture the effects of fine-scale permeability variations through the calculation of specialized coarse-scale basis functions. Most of the multiscale techniques presented to date employ localization approximations in the calculation of these basis functions. For some highly correlated (e.g., channelized) formations, however, global effects are important and these may need to be incorporated into the multiscale basis functions. Other challenging issues facing multiscale simulations are the extension of existing multiscale techniques to problems with additional physics, such as compressibility, capillary effects, etc. In our project, we explore the improvement of multiscale methods through the incorporation of additional (single-phase flow) information and the development of a general multiscale framework for flows in the presence of uncertainties, compressible flow and heterogeneous transport, and geomechanics. We have considered (1) adaptive local-global multiscale methods, (2) multiscale methods for the transport equation, (3) operator-based multiscale methods and solvers, (4) multiscale methods in the presence of uncertainties and applications, (5) multiscale finite element methods for high contrast porous media and their generalizations, and (6) multiscale methods for geomechanics. Below, we present a brief overview of each of these contributions.

  18. Multiscale Simulation Framework for Coupled Fluid Flow and Mechanical Deformation

    Energy Technology Data Exchange (ETDEWEB)

    Hou, Thomas [California Inst. of Technology (CalTech), Pasadena, CA (United States); Efendiev, Yalchin [Stanford Univ., CA (United States); Tchelepi, Hamdi [Texas A & M Univ., College Station, TX (United States); Durlofsky, Louis [Stanford Univ., CA (United States)


    Our work in this project is aimed at making fundamental advances in multiscale methods for flow and transport in highly heterogeneous porous media. The main thrust of this research is to develop a systematic multiscale analysis and efficient coarse-scale models that can capture global effects and extend existing multiscale approaches to problems with additional physics and uncertainties. A key emphasis is on problems without an apparent scale separation. Multiscale solution methods are currently under active investigation for the simulation of subsurface flow in heterogeneous formations. These procedures capture the effects of fine-scale permeability variations through the calculation of specialized coarse-scale basis functions. Most of the multiscale techniques presented to date employ localization approximations in the calculation of these basis functions. For some highly correlated (e.g., channelized) formations, however, global effects are important and these may need to be incorporated into the multiscale basis functions. Other challenging issues facing multiscale simulations are the extension of existing multiscale techniques to problems with additional physics, such as compressibility, capillary effects, etc. In our project, we explore the improvement of multiscale methods through the incorporation of additional (single-phase flow) information and the development of a general multiscale framework for flows in the presence of uncertainties, compressible flow and heterogeneous transport, and geomechanics. We have considered (1) adaptive local-global multiscale methods, (2) multiscale methods for the transport equation, (3) operator-based multiscale methods and solvers, (4) multiscale methods in the presence of uncertainties and applications, (5) multiscale finite element methods for high contrast porous media and their generalizations, and (6) multiscale methods for geomechanics.

  19. A multiscale transport model for non-classical nanochannel electroosmosis (United States)

    Bhadauria, Ravi; Aluru, N. R.


    We present a multiscale model describing the electroosmotic flow (EOF) in nanoscale channels involving high surface charge liquid-solid interfaces. The departure of the EOF velocity profiles from classical predictions is explained by the non-classical charge distribution in the confined direction including charge inversion, reduced mobility of interfacial counter-ions, and subsequent enhancement of the local viscosity. The excess component of the local solvent viscosity is modeled by the local application of the Fuoss-Onsager theory and the Hubbard-Onsager electro-hydrodynamic equation based dielectric friction theory. The electroosmotic slip velocity is estimated from the interfacial friction coefficient, which in turn is calculated using a generalized Langevin equation based dynamical framework. The proposed model for local viscosity enhancement and EOF velocity shows good agreement of corresponding physical quantities against relevant molecular dynamics simulation results, including the cases of anomalous transport such as EOF reversal.

  20. Automatic segmentation of clinical texts. (United States)

    Apostolova, Emilia; Channin, David S; Demner-Fushman, Dina; Furst, Jacob; Lytinen, Steven; Raicu, Daniela


    Clinical narratives, such as radiology and pathology reports, are commonly available in electronic form. However, they are also commonly entered and stored as free text. Knowledge of the structure of clinical narratives is necessary for enhancing the productivity of healthcare departments and facilitating research. This study attempts to automatically segment medical reports into semantic sections. Our goal is to develop a robust and scalable medical report segmentation system requiring minimum user input for efficient retrieval and extraction of information from free-text clinical narratives. Hand-crafted rules were used to automatically identify a high-confidence training set. This automatically created training dataset was later used to develop metrics and an algorithm that determines the semantic structure of the medical reports. A word-vector cosine similarity metric combined with several heuristics was used to classify each report sentence into one of several pre-defined semantic sections. This baseline algorithm achieved 79% accuracy. A Support Vector Machine (SVM) classifier trained on additional formatting and contextual features was able to achieve 90% accuracy. Plans for future work include developing a configurable system that could accommodate various medical report formatting and content standards.
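    The word-vector cosine similarity baseline can be sketched as follows; the section prototypes here are invented toy vocabularies, not the study's data.

```python
# Score a sentence against a prototype bag of words for each semantic
# section and assign the best-matching label.
import math
from collections import Counter

def cosine(a, b):
    # cosine similarity between two bag-of-words Counters
    num = sum(a[w] * b[w] for w in a if w in b)
    den = math.sqrt(sum(v * v for v in a.values())) * \
          math.sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

# hypothetical per-section vocabularies
prototypes = {
    "FINDINGS": Counter("lung opacity effusion normal heart size".split()),
    "IMPRESSION": Counter("no acute disease recommend follow up".split()),
}

def classify(sentence):
    vec = Counter(sentence.lower().split())
    return max(prototypes, key=lambda s: cosine(vec, prototypes[s]))

print(classify("The heart size is normal"))  # → FINDINGS
```

    The study's SVM variant adds formatting and contextual features on top of this kind of lexical score.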

  1. Intelligent Fault Diagnosis of Rotary Machinery Based on Unsupervised Multiscale Representation Learning (United States)

    Jiang, Guo-Qian; Xie, Ping; Wang, Xiao; Chen, Meng; He, Qun


    The performance of traditional vibration-based fault diagnosis methods greatly depends on handcrafted features extracted using signal processing algorithms, which require significant amounts of domain knowledge and human labor and do not generalize well to new diagnosis domains. Recently, unsupervised representation learning has provided a promising alternative to feature extraction in traditional fault diagnosis due to its superior ability to learn from unlabeled data. Given that vibration signals usually contain multiple temporal structures, this paper proposes a multiscale representation learning (MSRL) framework to learn useful features directly from raw vibration signals, with the aim of capturing rich and complementary fault pattern information at different scales. In our proposed approach, a coarse-grained procedure is first employed to obtain multiple scale signals from an original vibration signal. Then, sparse filtering, a newly developed unsupervised learning algorithm, is applied to automatically learn useful features from each scale signal, and the learned features at each scale are concatenated to obtain multiscale representations. Finally, the multiscale representations are fed into a supervised classifier to obtain diagnosis results. Our proposed approach is evaluated using two different case studies: motor bearing and wind turbine gearbox fault diagnosis. Experimental results show that the proposed MSRL approach can take full advantage of the availability of unlabeled data to learn discriminative features, achieving better performance with higher accuracy and stability compared to traditional approaches.
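    The coarse-graining step described above is simple to sketch: the scale-s signal averages s consecutive non-overlapping samples, and per-scale features are concatenated. The mean-absolute-value feature below is a trivial stand-in for the learned sparse-filtering features.

```python
# Coarse-grained multiscale representation of a raw 1-D signal.
def coarse_grain(signal, scale):
    # average non-overlapping windows of `scale` consecutive samples
    n = len(signal) // scale
    return [sum(signal[i * scale:(i + 1) * scale]) / scale for i in range(n)]

def multiscale_features(signal, scales=(1, 2, 4)):
    feats = []
    for s in scales:
        cg = coarse_grain(signal, s)
        # placeholder feature: mean absolute value of the scale-s signal
        feats.append(sum(abs(x) for x in cg) / len(cg))
    return feats  # concatenated multiscale representation

sig = [0, 1, 0, -1] * 8            # toy vibration signal
print(multiscale_features(sig))    # → [0.5, 0.5, 0.0]
```

    Note how the oscillation vanishes at scale 4, the signal's period: different scales expose complementary structure, which is the motivation for concatenating them.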

  2. Development of porous structure simulator for multi-scale simulation of irregular porous catalysts

    International Nuclear Information System (INIS)

    Koyama, Michihisa; Suzuki, Ai; Sahnoun, Riadh; Tsuboi, Hideyuki; Hatakeyama, Nozomu; Endou, Akira; Takaba, Hiromitsu; Kubo, Momoji; Del Carpio, Carlos A.; Miyamoto, Akira


    Efficient development of highly functional porous materials, used as catalysts in the automobile industry, demands meticulous knowledge of the nano-scale interface at the electronic and atomistic scales. However, it is often difficult to correlate the microscopic interfacial interactions with macroscopic characteristics of the materials; for instance, the interaction between a precious metal and its support oxide with the long-term sintering properties of the catalyst. Multi-scale computational chemistry approaches can contribute to bridging the gap between micro- and macroscopic characteristics of these materials; however, this type of multi-scale simulation has been difficult to apply, especially to porous materials. To overcome this problem, we have developed a novel mesoscopic approach based on a porous structure simulator. This simulator can automatically construct irregular porous structures on a computer, enabling simulations with complex meso-scale structures. Moreover, in this work we have developed a new method to simulate the long-term sintering properties of metal particles on porous catalysts. Finally, we have applied the method to the simulation of the sintering properties of Pt on an alumina support. This newly developed method has enabled us to propose a multi-scale simulation approach for porous catalysts.

  3. Multiscale Modeling of Astrophysical Jets

    Directory of Open Access Journals (Sweden)

    James H. Beall


    We are developing the capability for a multi-scale code to model the energy deposition rate and momentum transfer rate of an astrophysical jet that generates strong plasma turbulence through its interaction with the ambient medium through which it propagates. We start with a highly parallelized version of the VH-1 hydrodynamics code (Colella and Woodward 1984; Saxton et al. 2005). We are also considering the PLUTO code (Mignone et al. 2007) to model the jet in the magnetohydrodynamic (MHD) and relativistic magnetohydrodynamic (RMHD) regimes. Particle-in-cell approaches are also being used to benchmark wave-population models of the two-stream instability and associated plasma processes in order to determine energy deposition and momentum transfer rates for these modes of jet-ambient medium interactions. We show some elements of the modeling of these jets in this paper, including energy loss and heating via plasma processes, and large-scale hydrodynamic and relativistic hydrodynamic simulations. A preliminary simulation of a jet from the galactic center region is used to lend credence to the jet as the source of the so-called Fermi Bubble (see, e.g., Su & Finkbeiner 2012). *It is with great sorrow that we acknowledge the loss of our colleague and friend of more than thirty years, Dr. John Ural Guillory, to his battle with cancer.

  4. Multiscale Processes in Magnetic Reconnection (United States)

    Surjalal Sharma, A.; Jain, Neeraj

    The characteristic scales of the plasma processes in magnetic reconnection range from the electron skin depth to the magnetohydrodynamic (MHD) scale, and cross-scale coupling among them plays a key role. Modeling these processes requires different physical models, viz. kinetic, electron-magnetohydrodynamics (EMHD), Hall-MHD, and MHD. The shortest-scale processes are at the electron scale and are modeled using an EMHD code, which provides many features of the multiscale behavior. In simulations using initial conditions consisting of perturbations with many scale sizes, reconnection takes place at many sites and the plasma flows from these sites interact with each other. This leads to thin current sheets with lengths less than 10 electron skin depths. The plasma flows also generate current sheets with multiple peaks, as observed by Cluster. The quadrupole structure of the magnetic field during reconnection starts on the electron scale, and the interaction of the inflow to the secondary sites with the outflow from the dominant site generates a nested structure. In the outflow regions, the interaction of the electron outflows generated at neighboring sites leads to the development of electron vortices. A signature of the nested structure of the Hall field is seen in Cluster observations, and more details of these features are expected from MMS.

  5. Multiscale modeling of composites subjected to high speed impact (United States)

    Lee, Minhyung; Cha, Myung S.; Shang, Shu; Kim, Nam H.


    The simulation of high speed impact into composite panels is a challenging task, partly because macro-scale simulation requires integrating the local response at various locations, i.e., integration points. If a huge number of integration points is needed for enhanced accuracy, it is often suggested to carry out the micro-scale simulation using massive parallel processing. In this paper, a multiscale modeling methodology has been applied to simulate relatively thick composite panels subjected to high speed local impact loading. Instead of massive parallel processing, we propose to use surrogate modeling to bridge the micro-scale and macro-scale. Multiscale modeling of fracture phenomena of composite materials consists of (1) micro-scale modeling of the fiber-matrix structure using the unit-volume-element technique; (2) macro-scale simulation of composite panels under high strain-rate impact using the material response calculated from micro-scale modeling; and (3) surrogate modeling to integrate the two scales. In order to validate the predictions, we first performed material-level lab experiments such as tension tests, and later field tests of bullet impact into composite panels made of 4-ply and 8-ply fibers. The impact velocity ranges from 300 to 600 m/s. Special thanks to grant UD120053GD.

  6. Multiscale Molecular Dynamics Approach to Energy Transfer in Nanomaterials. (United States)

    Espinosa-Duran, John M; Sereda, Yuriy V; Abi-Mansour, Andrew; Ortoleva, Peter


    After local transient fluctuations are dissipated, in an energy transfer process, a system evolves to a state where the energy density field varies slowly in time relative to the dynamics of atomic collisions and vibrations. Furthermore, the energy density field remains strongly coupled to the atomic scale processes (collisions and vibrations), and it can serve as the basis of a multiscale theory of energy transfer. Here, a method is introduced to capture the long scale energy density variations as they coevolve with the atomistic state in a way that yields insights into the basic physics and implies an efficient algorithm for energy transfer simulations. The approach is developed based on the N-atom Liouville equation and an interatomic force field and avoids the need for conjectured phenomenological equations for energy transfer and other processes. The theory is demonstrated for sodium chloride and silicon dioxide nanoparticles immersed in a water bath via molecular dynamics simulations of the energy transfer between a nanoparticle and its aqueous host fluid. The energy density field is computed for different sets of symmetric grid densities, and the multiscale theory holds when slowly varying energy densities at the nodes are obtained. Results strongly depend on grid density and nanoparticle constituent material. A nonuniform temperature distribution, larger thermal fluctuations in the nanoparticle than in the bath, and enhancement of fluctuations at the surface, which are expressed due to the atomic nature of the systems, are captured by this method rather than by phenomenological continuum energy transfer models.

  7. The center for multiscale plasma dynamics

    Energy Technology Data Exchange (ETDEWEB)

    Kevrekidis, Yannis G [Princeton Univ., Princeton, NJ (United States)


    This final report describes research performed at Princeton University, led by Professor Yannis G. Kevrekidis, over a period of six years (August 1, 2004 to July 31, 2010, including a one-year, no-cost extension) as part of the Center for Multiscale Plasma Dynamics led by the University of Maryland. The work resulted in the development and implementation of several multiscale algorithms based on the equation-free approach pioneered by the PI, including applications to plasma dynamics problems. These algorithms include coarse projective integration and coarse stability/bifurcation computations. In the later stages of the work, new links were made between this multiscale, coarse-graining approach and advances in data mining/machine learning algorithms.

  8. Multiscale modeling in biomechanics and mechanobiology

    CERN Document Server

    Hwang, Wonmuk; Kuhl, Ellen


    Presenting a state-of-the-art overview of theoretical and computational models that link characteristic biomechanical phenomena, this book provides guidelines and examples for creating multiscale models in representative systems and organisms. It develops the reader's understanding of and intuition for multiscale phenomena in biomechanics and mechanobiology, and introduces a mathematical framework and computational techniques paramount to creating predictive multiscale models.   Biomechanics involves the study of the interactions of physical forces with biological systems at all scales – including molecular, cellular, tissue and organ scales. The emerging field of mechanobiology focuses on the way that cells produce and respond to mechanical forces – bridging the science of mechanics with the disciplines of genetics and molecular biology. Linking disparate spatial and temporal scales using computational techniques is emerging as a key concept in investigating some of the complex problems underlying these...

  9. Multiscale Computational Fluid Dynamics: Methodology and Application to PECVD of Thin Film Solar Cells

    Directory of Open Access Journals (Sweden)

    Marquis Crose


    This work focuses on the development of a multiscale computational fluid dynamics (CFD) simulation framework with application to plasma-enhanced chemical vapor deposition (PECVD) of thin film solar cells. A macroscopic CFD model is proposed which is capable of accurately reproducing plasma chemistry and transport phenomena within a 2D axisymmetric reactor geometry. Additionally, the complex interactions that take place on the surface of a-Si:H thin films are coupled with the CFD simulation using a novel kinetic Monte Carlo scheme which describes the thin film growth, leading to a multiscale CFD model. Due to the significant computational challenges imposed by this multiscale CFD model, a parallel computation strategy is presented which allows for reduced processing time via the discretization of both the gas-phase mesh and the microscopic thin film growth processes. Finally, the multiscale CFD model has been applied to the PECVD process at industrially relevant operating conditions, revealing non-uniformities greater than 20% in the growth rate of amorphous silicon films across the radius of the wafer.

  10. Automatic Test Systems Acquisition

    National Research Council Canada - National Science Library


    We are providing this final memorandum report for your information and use. This report discusses the efforts to achieve commonality in standards among the Military Departments as part of the DoD policy for automatic test systems (ATS...

  11. Multiscale Methods for Nuclear Reactor Analysis (United States)

    Collins, Benjamin S.

    The ability to accurately predict local pin powers in nuclear reactors is necessary to understand the mechanisms that cause fuel pin failure during steady state and transient operation. In the research presented here, methods are developed to improve the local solution using high order methods with boundary conditions from a low order global solution. Several different core configurations were tested to determine the improvement in the local pin powers compared to the standard techniques, that use diffusion theory and pin power reconstruction (PPR). Two different multiscale methods were developed and analyzed; the post-refinement multiscale method and the embedded multiscale method. The post-refinement multiscale methods use the global solution to determine boundary conditions for the local solution. The local solution is solved using either a fixed boundary source or an albedo boundary condition; this solution is "post-refinement" and thus has no impact on the global solution. The embedded multiscale method allows the local solver to change the global solution to provide an improved global and local solution. The post-refinement multiscale method is assessed using three core designs. When the local solution has more energy groups, the fixed source method has some difficulties near the interface: however the albedo method works well for all cases. In order to remedy the issue with boundary condition errors for the fixed source method, a buffer region is used to act as a filter, which decreases the sensitivity of the solution to the boundary condition. Both the albedo and fixed source methods benefit from the use of a buffer region. Unlike the post-refinement method, the embedded multiscale method alters the global solution. The ability to change the global solution allows for refinement in areas where the errors in the few group nodal diffusion are typically large. 
The embedded method is shown to improve the global solution when it is applied to a MOX/LEU assembly.

  12. Multi-scale Regions from Edge Fragments

    DEFF Research Database (Denmark)

    Kazmi, Wajahat; Andersen, Hans Jørgen


    In this article we introduce a novel method for detecting multi-scale salient regions around edges using a graph based image compression algorithm. Images are recursively decomposed into triangles arranged into a binary tree using linear interpolation. The entropy of any local region of the image...... to estimate regions. Salient regions are thus formed as stable regions around edges. Tree hierarchy is then used to generate multi-scale regions. We evaluate our detector by performing image retrieval tests on our building database which shows that combined with Spin Images (Lazebnik et al., 2003...

  13. Multiscale phase inversion of seismic marine data

    KAUST Repository

    Fu, Lei


    We test the feasibility of applying multiscale phase inversion (MPI) to seismic marine data. To avoid cycle-skipping, the multiscale strategy temporally integrates the traces several times, i.e. high-order integration, to produce low-boost seismograms that are used as input data for the initial iterations of MPI. As the iterations proceed, higher frequencies in the data are boosted by using integrated traces of lower order as the input data. Tests on synthetic data and field data from the Gulf of Mexico produce robust and accurate results if the model does not contain strong velocity contrasts such as salt-sediment interfaces.
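    The "low-boost" preprocessing described above (repeated temporal integration of the traces) can be sketched in a few lines of NumPy. This is an illustrative reconstruction, not the authors' code; the test signal and integration order are assumptions.

```python
import numpy as np

def integrate_trace(trace, order, dt=1.0):
    """Integrate a trace `order` times via cumulative summation.

    Each integration scales the spectrum by roughly 1/omega, so
    higher-order integration boosts low frequencies relative to
    high ones, producing a "low-boost" seismogram.
    """
    out = np.asarray(trace, dtype=float)
    for _ in range(order):
        out = np.cumsum(out) * dt
    return out

# Equal-amplitude low (5 Hz) and high (50 Hz) components:
t = np.linspace(0.0, 1.0, 1000, endpoint=False)
sig = np.sin(2 * np.pi * 5 * t) + np.sin(2 * np.pi * 50 * t)

low_boost = integrate_trace(sig, order=2, dt=t[1] - t[0])
spec = np.abs(np.fft.rfft(low_boost))   # bin k corresponds to k Hz here
```

    After two integrations the 5 Hz component dominates the 50 Hz one, which is the property MPI exploits to avoid cycle-skipping in early iterations.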

  14. Deductive multiscale simulation using order parameters (United States)

    Ortoleva, Peter J.


    Illustrative embodiments of systems and methods for the deductive multiscale simulation of macromolecules are disclosed. In one illustrative embodiment, a deductive multiscale simulation method may include (i) constructing a set of order parameters that model one or more structural characteristics of a macromolecule, (ii) simulating an ensemble of atomistic configurations for the macromolecule using instantaneous values of the set of order parameters, (iii) simulating thermal-average forces and diffusivities for the ensemble of atomistic configurations, and (iv) evolving the set of order parameters via Langevin dynamics using the thermal-average forces and diffusivities.
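    Step (iv) above, evolving order parameters by Langevin dynamics, can be sketched for a single hypothetical order parameter as follows. The quadratic free-energy well standing in for the thermal-average force, and all parameter values, are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def evolve(phi0, n_steps, dt=1e-3, D=1.0, kT=1.0, k_well=1.0):
    """Overdamped Langevin update for one order parameter Phi:
        dPhi = (D/kT) * F(Phi) * dt + sqrt(2 D dt) * xi,  xi ~ N(0, 1)
    with F(Phi) = -k_well * Phi, a toy stand-in for the
    ensemble-averaged thermal force.
    """
    phi = float(phi0)
    traj = np.empty(n_steps)
    for i in range(n_steps):
        force = -k_well * phi          # F = -dG/dPhi for G = (k/2) Phi^2
        phi += (D / kT) * force * dt + np.sqrt(2.0 * D * dt) * rng.standard_normal()
        traj[i] = phi
    return traj

# Start far from the well minimum; the trajectory relaxes and then
# fluctuates with stationary variance ~ kT / k_well.
traj = evolve(phi0=5.0, n_steps=20000)
```

    In the patented framework the force and diffusivity would come from step (iii)'s atomistic ensemble averages rather than a closed-form well.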

  15. Multiscale Data Fusion Regulated by a Mixture-of-Experts Network (United States)

    Slatton, K. C.


    Laser altimetry (LIDAR) and interferometric synthetic aperture radar (InSAR) have emerged as important tools for remotely sensing topography at fine and medium scales, respectively. Strip-map InSAR provides large coverage areas, but at spatial resolutions that are often insufficient for many applications. Conversely, LIDAR provides higher resolution, but covering large areas can be impractical. Slatton, et al. (2001) demonstrated that digital elevation models (DEMs) derived from LIDAR and InSAR data could be fused to provide large coverage areas, while maintaining high resolution locally. A multiscale Kalman smoother (MKS) employing a fractional Brownian motion stochastic model allowed the estimation of fused elevations with uncertainty measures at every pixel. However, the standard MKS algorithm with a single stochastic model does not incorporate spatial variations in the elevation statistics. For example, rough undulating terrain yields an elevation surface with a shorter correlation length than flat smooth terrain. In this work, multiscale Kalman filters are defined in a multiple-model configuration that accommodates local variations in elevation statistics. Stochastic model realizations for long and short correlation length surfaces are blended together with a simple Mixture-of-Experts (ME) network. Implementing classical multiple-model approaches, such as a Magill filter bank, on multiscale data structures would require that a particular model be selected for every node in the quadtree. The selection of the best model at a parent node becomes potentially problematic if different models were selected as best at the children nodes. The need to explicitly map different stochastic models to the quadtree nodes of a multiscale estimator is obviated in the ME approach because the relative weighting of the individual Kalman estimates is automatically determined based on the innovation sequences to provide an adaptive estimate of the elevations.
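    The gating idea in the last sentences (weighting model-conditioned Kalman estimates by their innovation likelihoods rather than hard-selecting one model per quadtree node) can be sketched for a single pixel. The Gaussian gate, the two candidate estimates, and all numbers are illustrative, not taken from Slatton et al.

```python
import numpy as np

def gaussian_likelihood(innovation, innovation_var):
    """Likelihood of a filter's innovation under its predicted N(0, S)."""
    return np.exp(-0.5 * innovation**2 / innovation_var) / np.sqrt(2.0 * np.pi * innovation_var)

def blend_estimates(estimates, innovations, innovation_vars):
    """Soft mixture-of-experts gate: weight each expert's elevation
    estimate by how well its model explains the measurement."""
    likes = np.array([gaussian_likelihood(i, v)
                      for i, v in zip(innovations, innovation_vars)])
    weights = likes / likes.sum()
    return float(weights @ np.asarray(estimates, dtype=float)), weights

# Expert 0 (say, a "smooth terrain" model) fits the data well (small
# innovation); expert 1 (a "rough terrain" model) does not.
est, w = blend_estimates(estimates=[102.0, 98.0],
                         innovations=[0.2, 3.0],
                         innovation_vars=[1.0, 1.0])
```

    The blended estimate leans almost entirely on the better-fitting model, without ever committing to a single model per node.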

  16. Automatic requirements traceability


    Andžiulytė, Justė


    This paper focuses on automatic requirements traceability and algorithms that automatically find recommendation links for requirements. The main objective of this paper is the evaluation of these algorithms and the preparation of a method defining which algorithms should be used in different cases. This paper presents and examines probabilistic, vector space and latent semantic indexing models of information retrieval and association rule mining using the authors' own implementations of these algorithms and o...
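    Of the retrieval models listed, the vector space model is the simplest to sketch: represent each requirement as a TF-IDF vector and rank candidate trace links by cosine similarity. The corpus, tokenizer, and weighting below are illustrative assumptions, not the thesis implementation.

```python
import numpy as np
from collections import Counter

def tfidf_matrix(docs):
    """Rows are documents, columns are vocabulary terms, values are TF-IDF."""
    tokens = [d.lower().split() for d in docs]
    vocab = sorted({w for t in tokens for w in t})
    idx = {w: i for i, w in enumerate(vocab)}
    tf = np.zeros((len(docs), len(vocab)))
    for r, t in enumerate(tokens):
        for w, c in Counter(t).items():
            tf[r, idx[w]] = c
    df = (tf > 0).sum(axis=0)                  # document frequency per term
    idf = np.log(len(docs) / df)
    return tf * idf

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

# Hypothetical requirements; 0 and 2 share the "failed login" vocabulary.
reqs = [
    "log every failed login attempt",
    "encrypt stored user passwords",
    "alert on repeated failed login attempt",
]
m = tfidf_matrix(reqs)
sims = [cosine(m[0], m[1]), cosine(m[0], m[2])]
```

    A tracing tool would then recommend, for each requirement, the candidates whose similarity exceeds a tuned threshold.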

  17. Position automatic determination technology

    International Nuclear Information System (INIS)


    This book tells of methods of position determination and their characteristics, control methods for position determination and points of design, points of sensor choice for position detectors, position determination in digital control systems, application of the clutch brake in high-frequency position determination, automation techniques for position determination, position determination by electromagnetic clutch and brake, air cylinder, cam and solenoid, stop position control of automatic guided vehicles, stacker cranes and automatic transfer control.

  18. Fast randomized Hough transformation track initiation algorithm based on multi-scale clustering (United States)

    Wan, Minjie; Gu, Guohua; Chen, Qian; Qian, Weixian; Wang, Pengcheng


    A fast randomized Hough transformation track initiation algorithm based on multi-scale clustering is proposed to overcome problems in traditional infrared search and track (IRST) systems, which cannot provide movement information of the initial target or select the correlation threshold automatically with a two-dimensional track association algorithm based on bearing-only information. Movements of all targets are presumed to be uniform rectilinear motion throughout the new algorithm. Concepts of spatial random sampling, a parameter-space dynamic linking table, and convergent mapping of the image to parameter space are developed on the basis of the fast randomized Hough transformation. Because peak detection is built on a threshold-value method, peak values tend to cluster, and accuracy can only be ensured when parameter space has an obvious peak; a multi-scale idea is therefore added to the algorithm. First, a primary association is conducted to select several alternative tracks with a low threshold. Then, the alternative tracks are processed by multi-scale clustering, through which the accurate number and parameters of tracks are figured out automatically by transforming the scale parameters. The first three frames are processed by this algorithm to get the first three targets of the track, and then two slightly different gate radii are worked out, the mean of which is used as the global correlation threshold. Moreover, a new model for curvilinear equation correction is applied to the track initiation algorithm to solve the problem of shape distortion when a three-dimensional space curve is mapped to a two-dimensional bearing-only space.
Models of sideways flight, launch, and landing are built and simulated; the application of the proposed approach in simulation demonstrates its effectiveness, accuracy, and adaptivity.
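    The core of a randomized Hough transform (sampling minimal point subsets, mapping each to a parameter-space cell, and accumulating votes in a sparse dynamic table) can be sketched for plain line detection. Bin sizes, point counts, and the clutter model below are illustrative, and the multi-scale clustering stage is omitted.

```python
import numpy as np

rng = np.random.default_rng(4)

def randomized_hough_lines(points, n_samples=2000,
                           theta_bin=np.deg2rad(2), rho_bin=2.0):
    """Vote for (theta, rho) line parameters from random point pairs.

    Each pair defines a line x*cos(theta) + y*sin(theta) = rho; votes
    accumulate in a sparse dict keyed by quantized (theta, rho).
    """
    votes = {}
    pts = np.asarray(points, dtype=float)
    for _ in range(n_samples):
        i, j = rng.choice(len(pts), size=2, replace=False)
        dx, dy = pts[j] - pts[i]
        theta = (np.arctan2(dy, dx) + np.pi / 2) % np.pi   # line normal angle
        rho = pts[i, 0] * np.cos(theta) + pts[i, 1] * np.sin(theta)
        key = (round(theta / theta_bin), round(rho / rho_bin))
        votes[key] = votes.get(key, 0) + 1
    return votes

# Noisy points on the line y = x, plus uniform background clutter:
line = np.stack([np.arange(50), np.arange(50)], axis=1) + rng.normal(0, 0.1, (50, 2))
clutter = rng.uniform(0, 50, (50, 2))
votes = randomized_hough_lines(np.vstack([line, clutter]))

best = max(votes, key=votes.get)
theta_best = best[0] * np.deg2rad(2)   # y = x has normal angle 135 degrees
rho_best = best[1] * 2.0               # and rho = 0
```

    In the track-initiation setting the sampled subsets would be detections across frames and the parameter space would hold motion parameters rather than line coefficients.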

  19. 3D multi-scale FCN with random modality voxel dropout learning for Intervertebral Disc Localization and Segmentation from Multi-modality MR Images. (United States)

    Li, Xiaomeng; Dou, Qi; Chen, Hao; Fu, Chi-Wing; Qi, Xiaojuan; Belavý, Daniel L; Armbrecht, Gabriele; Felsenberg, Dieter; Zheng, Guoyan; Heng, Pheng-Ann


    Intervertebral discs (IVDs) are small joints that lie between adjacent vertebrae. The localization and segmentation of IVDs are important for spine disease diagnosis and measurement quantification. However, manual annotation is time-consuming and error-prone with limited reproducibility, particularly for volumetric data. In this work, our goal is to develop an automatic and accurate method based on fully convolutional networks (FCN) for the localization and segmentation of IVDs from multi-modality 3D MR data. Compared with single modality data, multi-modality MR images provide complementary contextual information, which contributes to better recognition performance. However, how to effectively integrate such multi-modality information to generate accurate segmentation results remains to be further explored. In this paper, we present a novel multi-scale and modality dropout learning framework to locate and segment IVDs from four-modality MR images. First, we design a 3D multi-scale context fully convolutional network, which processes the input data in multiple scales of context and then merges the high-level features to enhance the representation capability of the network for handling the scale variation of anatomical structures. Second, to harness the complementary information from different modalities, we present a random modality voxel dropout strategy which alleviates the co-adaption issue and increases the discriminative capability of the network. Our method achieved the 1st place in the MICCAI challenge on automatic localization and segmentation of IVDs from multi-modality MR images, with a mean segmentation Dice coefficient of 91.2% and a mean localization error of 0.62 mm. We further conduct extensive experiments on the extended dataset to validate our method. We demonstrate that the proposed modality dropout strategy with multi-modality images as contextual information improved the segmentation accuracy significantly. 
Furthermore, experiments conducted on

  20. Multiscale information modelling for heart morphogenesis

    International Nuclear Information System (INIS)

    Abdulla, T; Imms, R; Summers, R; Schleich, J M


    Science is made feasible by the adoption of common systems of units. As research has become more data intensive, especially in the biomedical domain, it requires the adoption of a common system of information models, to make explicit the relationship between one set of data and another, regardless of format. This is being realised through the OBO Foundry to develop a suite of reference ontologies, and NCBO Bioportal to provide services to integrate biomedical resources and functionality to visualise and create mappings between ontology terms. Biomedical experts tend to be focused at one level of spatial scale, be it biochemistry, cell biology, or anatomy. Likewise, the ontologies they use tend to be focused at a particular level of scale. There is increasing interest in a multiscale systems approach, which attempts to integrate between different levels of scale to gain understanding of emergent effects. This is a return to physiological medicine with a computational emphasis, exemplified by the worldwide Physiome initiative, and the European Union funded Network of Excellence in the Virtual Physiological Human. However, little work has been done on how information modelling itself may be tailored to a multiscale systems approach. We demonstrate how this can be done for the complex process of heart morphogenesis, which requires multiscale understanding in both time and spatial domains. Such an effort enables the integration of multiscale metrology.

  1. Multiscale phenomenology of the cosmic web

    NARCIS (Netherlands)

    Aragón-Calvo, Miguel A.; van de Weygaert, Rien; Jones, Bernard J. T.


    We analyse the structure and connectivity of the distinct morphologies that define the cosmic web. With the help of our multiscale morphology filter (MMF), we dissect the matter distribution of a cosmological Lambda cold dark matter N-body computer simulation into clusters, filaments and walls. The

  2. Multiscale information modelling for heart morphogenesis (United States)

    Abdulla, T.; Imms, R.; Schleich, J. M.; Summers, R.


    Science is made feasible by the adoption of common systems of units. As research has become more data intensive, especially in the biomedical domain, it requires the adoption of a common system of information models, to make explicit the relationship between one set of data and another, regardless of format. This is being realised through the OBO Foundry to develop a suite of reference ontologies, and NCBO Bioportal to provide services to integrate biomedical resources and functionality to visualise and create mappings between ontology terms. Biomedical experts tend to be focused at one level of spatial scale, be it biochemistry, cell biology, or anatomy. Likewise, the ontologies they use tend to be focused at a particular level of scale. There is increasing interest in a multiscale systems approach, which attempts to integrate between different levels of scale to gain understanding of emergent effects. This is a return to physiological medicine with a computational emphasis, exemplified by the worldwide Physiome initiative, and the European Union funded Network of Excellence in the Virtual Physiological Human. However, little work has been done on how information modelling itself may be tailored to a multiscale systems approach. We demonstrate how this can be done for the complex process of heart morphogenesis, which requires multiscale understanding in both time and spatial domains. Such an effort enables the integration of multiscale metrology.

  3. Towards distributed multiscale simulation of biological processes

    NARCIS (Netherlands)

    Bernsdorf, J.; Berti, G.; Chopard, B.; Hegewald, J.; Krafczyk, M.; Wang, D.; Lorenz, E.; Hoekstra, A.


    The understanding of biological processes, e.g. related to cardio-vascular disease and treatment, can be significantly improved by numerical simulation. In this paper, we present an approach for a multiscale simulation environment, applied for the prediction of in-stent re-stenosis. Our focus is on

  4. Generalized multiscale finite element methods: Oversampling strategies

    KAUST Repository

    Efendiev, Yalchin R.


    In this paper, we propose oversampling strategies in the generalized multiscale finite element method (GMsFEM) framework. The GMsFEM, which has been recently introduced in Efendiev et al. (2013b) [Generalized Multiscale Finite Element Methods, J. Comput. Phys., vol. 251, pp. 116-135, 2013], allows solving multiscale parameter-dependent problems at a reduced computational cost by constructing a reduced-order representation of the solution on a coarse grid. The main idea of the method consists of (1) the construction of the snapshot space, (2) the construction of the offline space, and (3) the construction of the online space (the latter for parameter-dependent problems). In Efendiev et al. (2013b) [Generalized Multiscale Finite Element Methods, J. Comput. Phys., vol. 251, pp. 116-135, 2013], it was shown that the GMsFEM provides a flexible tool to solve multiscale problems with a complex input space by generating appropriate snapshot, offline, and online spaces. In this paper, we develop oversampling techniques to be used in this context (see Hou and Wu (1997), where oversampling is introduced for multiscale finite element methods). It is known (see Hou and Wu (1997)) that oversampling can improve the accuracy of multiscale methods. In particular, the oversampling technique uses larger regions (larger than the target coarse block) in constructing local basis functions. Our motivation stems from the analysis presented in this paper, which shows that when using oversampling techniques in the construction of the snapshot space and offline space, GMsFEM will converge independently of small scales and high contrast under certain assumptions. We consider the use of multiple eigenvalue problems to improve the convergence and discuss their relation to single spectral problems that use oversampled regions. The oversampling procedures proposed in this paper differ from those in Hou and Wu (1997). In particular, the oversampling domains are partially used in constructing local

  5. Pricing perpetual American options under multiscale stochastic elasticity of variance

    International Nuclear Information System (INIS)

    Yoon, Ji-Hun


    Highlights: • We study the effects of the stochastic elasticity of variance on perpetual American options. • Our SEV model consists of a fast mean-reverting factor and a slow mean-reverting factor. • A slow scale factor has a very significant impact on the option price. • We analyze option price structures through the market prices of elasticity risk. - Abstract: This paper studies pricing the perpetual American options under a constant elasticity of variance type of underlying asset price model where the constant elasticity is replaced by a fast mean-reverting Ornstein–Uhlenbeck process and a slowly varying diffusion process. By using a multiscale asymptotic analysis, we find the impact of the stochastic elasticity of variance on the option prices and the optimal exercise prices with respect to model parameters. Our results enhance the existing option price structures in view of flexibility and applicability through the market prices of elasticity risk

  6. Multiscale Permutation Entropy Based Rolling Bearing Fault Diagnosis

    Directory of Open Access Journals (Sweden)

    Jinde Zheng


    A new rolling bearing fault diagnosis approach based on multiscale permutation entropy (MPE), Laplacian score (LS), and support vector machines (SVMs) is proposed in this paper. Permutation entropy (PE) was recently proposed and defined to measure the randomicity and detect dynamical changes of time series. However, given the complexity of mechanical systems, the randomicity and dynamic changes of the vibration signal will exist at different scales. Thus, the definition of MPE is introduced and employed to extract the nonlinear fault characteristics from the bearing vibration signal at different scales. Besides, the SVM is utilized to accomplish the fault feature classification and fulfill the diagnostic procedure automatically. Meanwhile, in order to avoid a high dimension of features, the Laplacian score (LS) is used to refine the feature vector by ranking the features according to their importance and correlations with the main fault information. Finally, the rolling bearing fault diagnosis method based on MPE, LS, and SVM is proposed and applied to experimental data. The analysis results indicate that the proposed method can identify the fault categories effectively.
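    The MPE feature-extraction step can be sketched directly from its definition: coarse-grain the signal at each scale, then compute the normalized permutation entropy of each coarse-grained series. The embedding order (m = 3), scale range, and test signals below are illustrative choices.

```python
import numpy as np
from math import factorial

def permutation_entropy(x, m=3, tau=1):
    """Normalized permutation entropy (order m, delay tau) of a 1D series."""
    n = len(x) - (m - 1) * tau
    counts = {}
    for i in range(n):
        pattern = tuple(np.argsort(x[i:i + m * tau:tau]))  # ordinal pattern
        counts[pattern] = counts.get(pattern, 0) + 1
    p = np.array(list(counts.values()), dtype=float) / n
    return float(-np.sum(p * np.log(p)) / np.log(factorial(m)))

def coarse_grain(x, scale):
    """Non-overlapping mean over windows of length `scale`."""
    n = len(x) // scale
    return np.asarray(x[:n * scale], dtype=float).reshape(n, scale).mean(axis=1)

def multiscale_permutation_entropy(x, m=3, max_scale=5):
    return [permutation_entropy(coarse_grain(x, s), m)
            for s in range(1, max_scale + 1)]

rng = np.random.default_rng(1)
noise = rng.standard_normal(2000)                     # irregular: PE near 1
tone = np.sin(np.linspace(0.0, 20.0 * np.pi, 2000))   # regular: low PE

mpe_noise = multiscale_permutation_entropy(noise)
mpe_tone = multiscale_permutation_entropy(tone)
```

    In the diagnosis pipeline, the vector of per-scale entropies (here, five values) is the feature that LS ranks and the SVM classifies.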

  7. Automatic text summarization

    CERN Document Server

    Torres Moreno, Juan Manuel


    This new textbook examines the motivations and the different algorithms for automatic document summarization (ADS), and surveys the recent state of the art. The book shows the main problems of ADS, the difficulties involved, and the solutions provided by the community. It presents recent advances in ADS, as well as current applications and trends. The approaches are statistical, linguistic and symbolic. Several examples are included in order to clarify the theoretical concepts.  The books currently available in the area of Automatic Document Summarization are not recent. Powerful algorithms have been develop

  8. Automatic Ultrasound Scanning

    DEFF Research Database (Denmark)

    Moshavegh, Ramin

    on the scanners, and to improve the computer-aided diagnosis (CAD) in ultrasound by introducing new quantitative measures. Thus, four major issues concerning automation of the medical ultrasound are addressed in this PhD project. They touch upon gain adjustments in ultrasound, automatic synthetic aperture image...... on the user adjustments on the scanner interface to optimize the scan settings. This explains the huge interest in the subject of this PhD project entitled “AUTOMATIC ULTRASOUND SCANNING”. The key goals of the project have been to develop automated techniques to minimize the unnecessary settings...

  9. Semi-Automatic Science Workflow Synthesis for High-End Computing on the NASA Earth Exchange (United States)

    National Aeronautics and Space Administration — Enhance capabilities for collaborative data analysis and modeling in Earth sciences. Develop components for automatic workflow capture, archiving and management....

  10. The adaptive value of habitat preferences from a multi-scale spatial perspective: insights from marsh-nesting avian species

    Directory of Open Access Journals (Sweden)

    Jan Jedlikowski


    Background Habitat selection and its adaptive outcomes are crucial features for animal life-history strategies. Nevertheless, congruence between habitat preferences and breeding success has rarely been demonstrated, which may result from the single-scale evaluation of animal choices. As habitat selection is a complex multi-scale process in many groups of animal species, investigating adaptiveness of habitat selection in a multi-scale framework is crucial. In this study, we explore whether habitat preferences acting at different spatial scales enhance the fitness of bird species, and check the appropriateness of single- vs. multi-scale models. We expected that variables found to be more important for habitat selection at individual scale(s) would coherently play a major role in affecting nest survival at the same scale(s). Methods We considered habitat preferences of two Rallidae species, little crake (Zapornia parva) and water rail (Rallus aquaticus), at three spatial scales (landscape, territory, and nest-site) and related them to nest survival. Single-scale versus multi-scale models (GLS and glmmPQL) were compared to check which model better described adaptiveness of habitat preferences. Consistency between the effect of variables on habitat selection and on nest survival was checked to investigate their adaptive value. Results In both species, multi-scale models for nest survival were more supported than single-scale ones. In little crake, the multi-scale model indicated vegetation density and water depth at the territory scale, as well as vegetation height at the nest-site scale, as the most important variables. The first two variables were among the most important for nest survival and habitat selection, and the coherent effects suggested the adaptive value of habitat preferences. In water rail, the multi-scale model of nest survival showed vegetation density at territory scale and extent of emergent vegetation within landscape scale as the most

  11. Reactor component automatic grapple

    International Nuclear Information System (INIS)

    Greenaway, P.R.


    A grapple for handling nuclear reactor components in a medium such as liquid sodium which, upon proper seating and alignment of the grapple with the component as sensed by a mechanical logic integral to the grapple, automatically seizes the component. The mechanical logic system also precludes seizure in the absence of proper seating and alignment. (author)

  12. Automatic Commercial Permit Sets

    Energy Technology Data Exchange (ETDEWEB)

    Grana, Paul [Folsom Labs, Inc., San Francisco, CA (United States)


    Final report for Folsom Labs’ Solar Permit Generator project, which has successfully completed, resulting in the development and commercialization of a software toolkit within the cloud-based HelioScope software environment that enables solar engineers to automatically generate and manage draft documents for permit submission.

  13. Automatic Complexity Analysis

    DEFF Research Database (Denmark)

    Rosendahl, Mads


    One way to analyse programs is to derive expressions for their computational behaviour. A time bound function (or worst-case complexity) gives an upper bound for the computation time as a function of the size of input. We describe a system to derive such time bounds automatically using abstrac...

  14. Multiscale Simulations for Coupled Flow and Transport Using the Generalized Multiscale Finite Element Method

    KAUST Repository

    Chung, Eric


    In this paper, we develop a mass conservative multiscale method for coupled flow and transport in heterogeneous porous media. We consider a coupled system consisting of a convection-dominated transport equation and a flow equation. We construct a coarse grid solver based on the Generalized Multiscale Finite Element Method (GMsFEM) for a coupled system. In particular, multiscale basis functions are constructed based on some snapshot spaces for the pressure and the concentration equations and some local spectral decompositions in the snapshot spaces. The resulting approach uses a few multiscale basis functions in each coarse block (for both the pressure and the concentration) to solve the coupled system. We use the mixed framework, which allows mass conservation. Our main contributions are: (1) the development of a mass conservative GMsFEM for the coupled flow and transport; (2) the development of a robust multiscale method for convection-dominated transport problems by choosing appropriate test and trial spaces within Petrov-Galerkin mixed formulation. We present numerical results and consider several heterogeneous permeability fields. Our numerical results show that with only a few basis functions per coarse block, we can achieve a good approximation.

  15. Multiscale image contrast amplification (MUSICA) (United States)

    Vuylsteke, Pieter; Schoeters, Emile P.


    This article presents a novel approach to the problem of detail contrast enhancement, based on multiresolution representation of the original image. The image is decomposed into a weighted sum of smooth, localized, 2D basis functions at multiple scales. Each transform coefficient represents the amount of local detail at some specific scale and at a specific position in the image. Detail contrast is enhanced by non-linear amplification of the transform coefficients. An inverse transform is then applied to the modified coefficients. This yields a uniformly contrast-enhanced image without artefacts. The MUSICA-algorithm is being applied routinely to computed radiography images of chest, skull, spine, shoulder, pelvis, extremities, and abdomen examinations, with excellent acceptance. It is useful for a wide range of applications in the medical, graphical, and industrial area.
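    The decompose/amplify/reconstruct scheme described above can be sketched with a toy pyramid. The nearest-neighbour decimation, the power-law gain, and its parameters below are illustrative stand-ins, not MUSICA's actual basis functions or gain curve.

```python
import numpy as np

def build_pyramid(img, levels=3):
    """Toy multiresolution decomposition: per-level detail = image minus
    an upsampled copy of its decimated version (exactly invertible)."""
    lowpass, details = img.astype(float), []
    for _ in range(levels):
        small = lowpass[::2, ::2]                                  # decimate
        up = np.repeat(np.repeat(small, 2, axis=0), 2, axis=1)
        up = up[:lowpass.shape[0], :lowpass.shape[1]]
        details.append(lowpass - up)                               # detail layer
        lowpass = small
    return details, lowpass

def reconstruct(details, lowpass):
    img = lowpass
    for d in reversed(details):
        up = np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)
        img = up[:d.shape[0], :d.shape[1]] + d
    return img

def amplify(d, m=1.0, p=0.5):
    """Non-linear gain: boosts coefficients with |d| < m, compresses larger
    ones, mimicking detail-contrast amplification."""
    return np.sign(d) * m * (np.abs(d) / m) ** p

rng = np.random.default_rng(2)
image = rng.uniform(0.0, 1.0, (64, 64))
details, base = build_pyramid(image)
exact = reconstruct(details, base)                        # lossless round trip
enhanced = reconstruct([amplify(d) for d in details], base)
```

    The round trip without amplification is exact by construction, so any change in `enhanced` comes only from the non-linear gain on the detail layers.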

  16. Multiscale mechanics of graphene oxide and graphene based composite films (United States)

    Cao, Changhong

    The mechanical behavior of graphene oxide is length-scale dependent: orders of magnitude different between the bulk forms and monolayer counterparts. Understanding the underlying mechanisms plays a significant role in their versatile application. A systematic multiscale mechanical study from monolayer to multilayer, including the interactions between layers of GO, can provide fundamental support for material engineering. In this thesis, an experimental approach coupled with simulation was used to study the multiscale mechanics of graphene oxide (GO), and the methods developed for the GO study are shown to be applicable also to the mechanical study of graphene-based composites. GO is a layered nanomaterial comprised of hierarchical units whose characteristic dimension lies between monolayer GO (0.7 nm - 1.2 nm) and bulk GO papers (≥ 1 μm). Mechanical behaviors of monolayer GO and GO nanosheets (10 nm - 100 nm) were comprehensively studied in this work. Monolayer GO was measured to have an average strength of 24.7 GPa, orders of magnitude higher than previously reported values for GO paper and approximately 50% of the 2D intrinsic strength of pristine graphene. The huge discrepancy between the strength of monolayer GO and that of bulk GO paper motivated the study of GO at the intermediate length scale (GO nanosheets). Experimental results showed that GO nanosheets possess high strength in the gigapascal range. Molecular Dynamics simulations showed that the transition in the failure behavior from interplanar fracture to intraplanar fracture was responsible for the huge strength discrepancy between nanometer-scale GO and bulk GO papers. Additionally, the interfacial shear strength between GO layers was found to be a key contributing factor to the distinct mechanical behavior among hierarchical units of GO. The understanding of the multiscale mechanics of GO is transferrable to heterogeneous layered nanomaterials, such as graphene-metal oxide based anode materials in Li

  17. A Bidirectional Coupling Procedure Applied to Multiscale Respiratory Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Kuprat, Andrew P.; Kabilan, Senthil; Carson, James P.; Corley, Richard A.; Einstein, Daniel R.


    In this study, we present a novel multiscale computational framework for efficiently linking multiple lower-dimensional models describing the distal lung mechanics to imaging-based 3D computational fluid dynamics (CFD) models of the upper pulmonary airways in order to incorporate physiologically appropriate outlet boundary conditions. The framework is an extension of the Modified Newton’s Method with nonlinear Krylov accelerator developed by Carlson and Miller [1, 2, 3]. Our extensions include the retention of subspace information over multiple timesteps, and a special correction at the end of a timestep that allows for corrections to be accepted with verified low residual with as little as a single residual evaluation per timestep on average. In the case of a single residual evaluation per timestep, the method has zero additional computational cost compared to uncoupled or unidirectionally coupled simulations. We expect these enhancements to be generally applicable to other multiscale coupling applications where timestepping occurs. In addition we have developed a “pressure-drop” residual which allows for stable coupling of flows between a 3D incompressible CFD application and another (lower-dimensional) fluid system. We expect this residual to also be useful for coupling non-respiratory incompressible fluid applications, such as multiscale simulations involving blood flow. The lower-dimensional models that are considered in this study are sets of simple ordinary differential equations (ODEs) representing the compliant mechanics of symmetric human pulmonary airway trees. To validate the method, we compare the predictions of hybrid CFD-ODE models against an ODE-only model of pulmonary airflow in an idealized geometry. Subsequently, we couple multiple sets of ODEs describing the distal lung to an imaging-based human lung geometry. Boundary conditions in these models consist of atmospheric pressure at the mouth and intrapleural pressure applied to the multiple
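    The nonlinear Krylov accelerator cited here is closely related to Anderson acceleration of a fixed-point iteration. The sketch below shows that pattern on a toy affine coupling map; the map g, the history depth, and the tolerances are stand-ins, not the CFD-ODE residual.

```python
import numpy as np

def anderson(g, x0, m=3, tol=1e-10, max_iter=100):
    """Anderson-accelerated fixed-point iteration for x = g(x).

    Keeps a short history of iterates/residuals and takes the step
    x <- x + f - (dX + dF) @ gamma, where gamma minimizes ||f - dF @ gamma||.
    """
    x = np.atleast_1d(np.asarray(x0, dtype=float))
    X, F = [], []                          # histories of iterates and residuals
    for k in range(max_iter):
        f = g(x) - x                       # fixed-point residual
        if np.linalg.norm(f) < tol:
            return x, k
        X.append(x.copy())
        F.append(f.copy())
        X, F = X[-(m + 1):], F[-(m + 1):]  # truncate histories to depth m
        if len(F) > 1:
            dF = np.stack([F[i + 1] - F[i] for i in range(len(F) - 1)], axis=1)
            dX = np.stack([X[i + 1] - X[i] for i in range(len(X) - 1)], axis=1)
            gamma, *_ = np.linalg.lstsq(dF, f, rcond=None)
            x = x + f - (dX + dF) @ gamma  # accelerated update
        else:
            x = x + f                      # plain Picard step to start
    return x, max_iter

# Toy affine "coupling" map with fixed point [1.0, 2.0]:
g = lambda x: np.array([0.5 * x[0] + 0.5, 0.8 * x[1] + 0.4])
sol, iters = anderson(g, [0.0, 0.0])
```

    On this contraction, plain Picard iteration would need roughly a hundred steps to reach the same tolerance; the accelerated version converges in a handful, which is the behaviour the coupling framework exploits to keep residual evaluations per timestep near one.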

  18. A bidirectional coupling procedure applied to multiscale respiratory modeling

    Energy Technology Data Exchange (ETDEWEB)

    Kuprat, A.P., E-mail: [Fundamental and Computational Sciences Directorate, Pacific Northwest National Laboratory, Richland, WA (United States); Kabilan, S., E-mail: [Fundamental and Computational Sciences Directorate, Pacific Northwest National Laboratory, Richland, WA (United States); Carson, J.P., E-mail: [Fundamental and Computational Sciences Directorate, Pacific Northwest National Laboratory, Richland, WA (United States); Corley, R.A., E-mail: [Fundamental and Computational Sciences Directorate, Pacific Northwest National Laboratory, Richland, WA (United States); Einstein, D.R., E-mail: [Fundamental and Computational Sciences Directorate, Pacific Northwest National Laboratory, Richland, WA (United States)


    In this study, we present a novel multiscale computational framework for efficiently linking multiple lower-dimensional models describing the distal lung mechanics to imaging-based 3D computational fluid dynamics (CFD) models of the upper pulmonary airways in order to incorporate physiologically appropriate outlet boundary conditions. The framework is an extension of the modified Newton’s method with nonlinear Krylov accelerator developed by Carlson and Miller [1], Miller [2] and Scott and Fenves [3]. Our extensions include the retention of subspace information over multiple timesteps, and a special correction at the end of a timestep that allows for corrections to be accepted with verified low residual with as little as a single residual evaluation per timestep on average. In the case of a single residual evaluation per timestep, the method has zero additional computational cost compared to uncoupled or unidirectionally coupled simulations. We expect these enhancements to be generally applicable to other multiscale coupling applications where timestepping occurs. In addition, we have developed a “pressure-drop” residual which allows for stable coupling of flows between a 3D incompressible CFD application and another (lower-dimensional) fluid system. We expect this residual to also be useful for coupling non-respiratory incompressible fluid applications, such as multiscale simulations involving blood flow. The lower-dimensional models that are considered in this study are sets of simple ordinary differential equations (ODEs) representing the compliant mechanics of symmetric human pulmonary airway trees. To validate the method, we compare the predictions of hybrid CFD–ODE models against an ODE-only model of pulmonary airflow in an idealized geometry. Subsequently, we couple multiple sets of ODEs describing the distal lung to an imaging-based human lung geometry. Boundary conditions in these models consist of atmospheric pressure at the mouth and intrapleural

  19. A bidirectional coupling procedure applied to multiscale respiratory modeling

    International Nuclear Information System (INIS)

    Kuprat, A.P.; Kabilan, S.; Carson, J.P.; Corley, R.A.; Einstein, D.R.


    In this study, we present a novel multiscale computational framework for efficiently linking multiple lower-dimensional models describing the distal lung mechanics to imaging-based 3D computational fluid dynamics (CFD) models of the upper pulmonary airways in order to incorporate physiologically appropriate outlet boundary conditions. The framework is an extension of the modified Newton’s method with nonlinear Krylov accelerator developed by Carlson and Miller [1], Miller [2] and Scott and Fenves [3]. Our extensions include the retention of subspace information over multiple timesteps, and a special correction at the end of a timestep that allows for corrections to be accepted with verified low residual with as little as a single residual evaluation per timestep on average. In the case of a single residual evaluation per timestep, the method has zero additional computational cost compared to uncoupled or unidirectionally coupled simulations. We expect these enhancements to be generally applicable to other multiscale coupling applications where timestepping occurs. In addition, we have developed a “pressure-drop” residual which allows for stable coupling of flows between a 3D incompressible CFD application and another (lower-dimensional) fluid system. We expect this residual to also be useful for coupling non-respiratory incompressible fluid applications, such as multiscale simulations involving blood flow. The lower-dimensional models that are considered in this study are sets of simple ordinary differential equations (ODEs) representing the compliant mechanics of symmetric human pulmonary airway trees. To validate the method, we compare the predictions of hybrid CFD–ODE models against an ODE-only model of pulmonary airflow in an idealized geometry. Subsequently, we couple multiple sets of ODEs describing the distal lung to an imaging-based human lung geometry. Boundary conditions in these models consist of atmospheric pressure at the mouth and intrapleural
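
    The nonlinear Krylov accelerator described above is a close relative of Anderson acceleration for fixed-point problems. The following is a minimal numpy sketch of that general idea, applied to an illustrative toy contraction map rather than the paper's CFD–ODE coupling; the map `g`, the history depth, and the tolerances are all assumptions chosen for demonstration.

```python
import numpy as np

def anderson_fixed_point(g, x0, depth=2, tol=1e-10, maxit=50):
    """Anderson-accelerated fixed-point iteration for x = g(x).

    The next iterate mixes the last few residuals (a small Krylov-like
    history), which cuts the number of expensive residual evaluations
    compared with plain Picard iteration."""
    x = np.asarray(x0, dtype=float)
    X, F = [], []                          # histories of iterates and residuals
    for _ in range(maxit):
        fx = g(x) - x                      # residual of the fixed-point map
        if np.linalg.norm(fx) < tol:
            return x
        X.append(x.copy()); F.append(fx.copy())
        mk = min(depth, len(F) - 1)
        if mk == 0:
            x = x + fx                     # plain Picard step to build history
        else:
            dF = np.column_stack([F[-1] - F[-1 - j] for j in range(1, mk + 1)])
            gamma, *_ = np.linalg.lstsq(dF, F[-1], rcond=None)
            dX = np.column_stack([X[-1] - X[-1 - j] for j in range(1, mk + 1)])
            x = (x + fx) - (dX + dF) @ gamma
    return x

# illustrative toy contraction with a unique fixed point at [1, 2]
g = lambda x: 0.5 * x + 0.5 * np.array([1.0, 2.0])
sol = anderson_fixed_point(g, np.zeros(2))
```

    For an affine map like this toy example, the accelerator recovers the fixed point exactly once one residual difference is in the history; for the coupled 3D/0D problem of the abstract, each residual evaluation would instead involve a CFD solve.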

  20. Improved anomaly detection using multi-scale PLS and generalized likelihood ratio test

    KAUST Repository

    Madakyaru, Muddu


    Process monitoring plays a central role in the process industry in enhancing productivity, efficiency, and safety, and in avoiding expensive maintenance. In this paper, a statistical approach that exploits the advantages of multiscale PLS (MSPLS) models and those of a generalized likelihood ratio (GLR) test is proposed for better anomaly detection. Specifically, to account for the multivariate and multiscale nature of process dynamics, an MSPLS algorithm combining PLS with wavelet analysis is used as the modeling framework. GLR hypothesis testing is then applied to the uncorrelated residuals obtained from the MSPLS model to further improve the anomaly-detection abilities of these latent-variable-based fault detection methods. Applications to simulated distillation column data are used to evaluate the proposed MSPLS-GLR algorithm.
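
    As a rough illustration of the GLR step, the sketch below computes a moving-window GLR statistic for a mean shift in Gaussian model residuals; the window length, noise level, and injected fault are illustrative assumptions, not the paper's MSPLS residuals.

```python
import numpy as np

def glr_mean_shift(residuals, sigma, window):
    """Moving-window GLR statistic for a mean shift in Gaussian residuals.

    Under H0 the residuals are N(0, sigma^2); maximizing the likelihood
    ratio over the unknown shift gives n * xbar^2 / (2 * sigma^2) for a
    window of length n with sample mean xbar."""
    r = np.asarray(residuals, dtype=float)
    stats = np.full(r.size, np.nan)        # undefined until a full window exists
    for t in range(window - 1, r.size):
        xbar = r[t - window + 1 : t + 1].mean()
        stats[t] = window * xbar ** 2 / (2.0 * sigma ** 2)
    return stats

rng = np.random.default_rng(0)
residuals = rng.normal(0.0, 1.0, 200)
residuals[120:] += 3.0                     # injected anomaly: a mean shift
g = glr_mean_shift(residuals, sigma=1.0, window=20)
# g stays small before the fault and rises sharply once the window
# covers the shifted samples; an alarm fires when g crosses a threshold
```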

  1. Multiscale carbon nanotube-carbon fiber reinforcement for advanced epoxy composites. (United States)

    Bekyarova, E; Thostenson, E T; Yu, A; Kim, H; Gao, J; Tang, J; Hahn, H T; Chou, T-W; Itkis, M E; Haddon, R C


    We report an approach to the development of advanced structural composites based on engineered multiscale carbon nanotube-carbon fiber reinforcement. Electrophoresis was utilized for the selective deposition of multi- and single-walled carbon nanotubes (CNTs) on woven carbon fabric. The CNT-coated carbon fabric panels were subsequently infiltrated with epoxy resin using vacuum-assisted resin transfer molding (VARTM) to fabricate multiscale hybrid composites in which the nanotubes were completely integrated into the fiber bundles and reinforced the matrix-rich regions. The carbon nanotube/carbon fabric/epoxy composites showed approximately 30% enhancement of the interlaminar shear strength as compared to that of carbon fiber/epoxy composites without carbon nanotubes and demonstrated significantly improved out-of-plane electrical conductivity.

  2. Multiscale Polymer Composites: A Review of the Interlaminar Fracture Toughness Improvement

    Directory of Open Access Journals (Sweden)

    Vishwesh Dikshit


    Composite materials are prone to delamination as they are weaker in the thickness direction. Carbon nanotubes (CNTs) are introduced as a multiscale reinforcement into fiber-reinforced polymer composites to suppress the delamination phenomenon. This review presents the detailed progress made to date by the scientific and research community in improving the Mode I and Mode II interlaminar fracture toughness (ILFT) by various methodologies, including the effect of multiscale reinforcement. Methods of measuring the Mode I and Mode II fracture toughness of the composites, along with solutions to improve them, are presented. The use of different methodologies and approaches, along with their performance in enhancing the fracture toughness of the composites, is summarized. The current state of polymer-fiber-nanotube composites and their future perspective are also deliberated.

  3. A Multiscale Time-Splitting Discrete Fracture Model of Nanoparticles Transport in Fractured Porous Media

    KAUST Repository

    El-Amin, Mohamed F.


    Recently, applications of nanoparticles have been considered in many branches of petroleum engineering, especially enhanced oil recovery. The current paper numerically investigates the problem of nanoparticle transport in fractured porous media. We employ the discrete-fracture model (DFM) to represent flow and transport in the fractured formations. The system of governing equations consists of the mass conservation law, Darcy's law, the nanoparticle concentration in water, the deposited nanoparticle concentration on the pore walls, and the entrapped nanoparticle concentration in the pore throats. The variation of porosity and permeability due to nanoparticle deposition/entrapment on/in the pores is also considered. We employ a multiscale time-splitting strategy to use different time-step sizes for different physics, such as pressure and concentration. The cell-centered finite difference (CCFD) method is used for the spatial discretization. Numerical examples are provided to demonstrate the efficiency of the proposed multiscale time-splitting approach.
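
    The time-splitting idea, advancing the slow physics (pressure) with a large step and the fast physics (concentration) with smaller substeps, can be sketched with a toy ODE pair; the equations, coefficients, and step sizes below are illustrative stand-ins for the paper's discrete-fracture system, not its actual model.

```python
import numpy as np

def time_split_solve(p0, c0, T, dt_p, n_sub, k_flow=0.5, k_dep=2.0):
    """Toy multiscale time-splitting loop: pressure advances with a large
    step dt_p, concentration with n_sub smaller substeps per pressure step.

    Illustrative placeholder ODEs (not the paper's DFM system):
        dp/dt = -k_flow * p
        dc/dt = -k_dep * c * p   (deposition proportional to local pressure)
    """
    p, c, t = p0, c0, 0.0
    dt_c = dt_p / n_sub
    while t < T - 1e-12:
        p_new = p / (1.0 + k_flow * dt_p)    # implicit Euler, slow physics
        for _ in range(n_sub):               # fast physics, small substeps
            c = c / (1.0 + k_dep * dt_c * p_new)
        p, t = p_new, t + dt_p
    return p, c

p, c = time_split_solve(p0=1.0, c0=1.0, T=1.0, dt_p=0.1, n_sub=10)
```

    The payoff of splitting is that the expensive slow-physics solve runs ten times less often than the cheap fast-physics update, while both remain stable at their own step sizes.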

  4. Multiscale Object Recognition and Feature Extraction Using Wavelet Networks

    National Research Council Canada - National Science Library

    Jaggi, Seema; Karl, W. C; Krim, Hamid; Willsky, Alan S


    In this work we present a novel method of object recognition and feature generation based on multiscale object descriptions obtained using wavelet networks in combination with morphological filtering...

  5. Bayesian learning of sparse multiscale image representations. (United States)

    Hughes, James Michael; Rockmore, Daniel N; Wang, Yang


    Multiscale representations of images have become a standard tool in image analysis. Such representations offer a number of advantages over fixed-scale methods, including the potential for improved performance in denoising and compression, and the ability to represent distinct but complementary information that exists at various scales. A variety of multiresolution transforms exist, including both orthogonal decompositions such as wavelets as well as nonorthogonal, overcomplete representations. Recently, techniques for finding adaptive, sparse representations have yielded state-of-the-art results when applied to traditional image processing problems. Attempts at developing multiscale versions of these so-called dictionary learning models have yielded modest but encouraging results. However, none of these techniques has sought to combine a rigorous statistical formulation of the multiscale dictionary learning problem with the ability to share atoms across scales. We present a model for multiscale dictionary learning that overcomes some of the drawbacks of previous approaches by first decomposing an input into a pyramid of distinct frequency bands using a recursive filtering scheme, after which we perform dictionary learning and sparse coding on the individual levels of the resulting pyramid. The associated image model allows us to use a single set of adapted dictionary atoms that is shared, and learned, across all scales in the model. The underlying statistical model of our proposed method is fully Bayesian and allows for efficient inference of parameters, including the level of additive noise for denoising applications. We apply the proposed model to several common image processing problems, including non-Gaussian and nonstationary denoising of real-world color images.
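
    The pyramid-decomposition step that precedes dictionary learning can be sketched in a few lines: a recursive low-pass filter splits a signal into band-pass layers plus a low-pass residual, and summing the layers reconstructs the input exactly. The 1-D box filter and the level count here are illustrative choices, not the authors' filtering scheme.

```python
import numpy as np

def blur(x, k=5):
    """Simple moving-average low-pass filter with reflected edges."""
    pad = k // 2
    xp = np.pad(x, pad, mode="reflect")
    return np.convolve(xp, np.ones(k) / k, mode="valid")

def band_pyramid(signal, levels=3):
    """Decompose a 1-D signal into band-pass layers plus a low-pass
    residual (Laplacian-pyramid style, without downsampling). The layers
    telescope, so their sum reconstructs the input exactly."""
    bands, low = [], np.asarray(signal, dtype=float)
    for _ in range(levels):
        smoother = blur(low)
        bands.append(low - smoother)   # detail in this frequency band
        low = smoother
    bands.append(low)                  # residual low-pass component
    return bands

x = np.sin(np.linspace(0, 8 * np.pi, 256)) \
    + 0.1 * np.random.default_rng(1).normal(size=256)
bands = band_pyramid(x, levels=3)
recon = np.sum(bands, axis=0)          # exact reconstruction
```

    In the paper's setting, sparse coding would then be run on each `bands[i]` separately while the dictionary atoms are shared across levels.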

  6. Multiscale Study of Currents Affected by Topography (United States)


    ...the effects of topography on the ocean general and regional circulation with a focus on the wide range of scales of interactions. The small-scale...details of the topography and the waves, eddies, drag, and turbulence it generates (at spatial scales ranging from meters to mesoscale) interact in the

  7. Multiscale simulation of microbe structure and dynamics. (United States)

    Joshi, Harshad; Singharoy, Abhishek; Sereda, Yuriy V; Cheluvaraja, Srinath C; Ortoleva, Peter J


    A multiscale mathematical and computational approach is developed that captures the hierarchical organization of a microbe. It is found that a natural perspective for understanding a microbe is in terms of a hierarchy of variables at various levels of resolution. This hierarchy starts with the N-atom description and terminates with order parameters characterizing a whole microbe. This conceptual framework is used to guide the analysis of the Liouville equation for the probability density of the positions and momenta of the N atoms constituting the microbe and its environment. Using multiscale mathematical techniques, we derive equations for the co-evolution of the order parameters and the probability density of the N-atom state. This approach yields a rigorous way to transfer information between variables on different space-time scales. It elucidates the interplay between equilibrium and far-from-equilibrium processes underlying microbial behavior. It also provides a framework for using coarse-grained nanocharacterization data to guide microbial simulation. It enables a methodical search for free-energy minimizing structures, many of which are typically supported by the set of macromolecules and membranes constituting a given microbe. This suite of capabilities provides a natural framework for arriving at a fundamental understanding of microbial behavior, the analysis of nanocharacterization data, and the computer-aided design of nanostructures for biotechnical and medical purposes. Selected features of the methodology are demonstrated using our multiscale bionanosystem simulator, DeductiveMultiscaleSimulator. Systems used to demonstrate the approach are structural transitions in the cowpea chlorotic mottle virus, RNA of satellite tobacco mosaic virus, virus-like particles related to human papillomavirus, and the iron-binding protein lactoferrin.

  8. Multiscale modeling of mucosal immune responses (United States)


    Computational modeling techniques are playing increasingly important roles in advancing a systems-level mechanistic understanding of biological processes. Computer simulations guide and underpin experimental and clinical efforts. This study presents the ENteric Immune Simulator (ENISI), a multiscale modeling tool for modeling mucosal immune responses. ENISI's modeling environment can simulate in silico experiments from molecular signaling pathways to tissue-level events such as tissue lesion formation. ENISI's architecture integrates multiple modeling technologies, including ABM (agent-based modeling), ODE (ordinary differential equations), SDE (stochastic differential equations), and PDE (partial differential equations). This paper focuses on the implementation and developmental challenges of ENISI. A multiscale model of mucosal immune responses during colonic inflammation, including CD4+ T cell differentiation and tissue-level cell-cell interactions, was developed to illustrate the capabilities, power, and scope of ENISI MSM. Background: Computational techniques are becoming increasingly powerful, and modeling tools for biological systems are increasingly needed. Biological systems are inherently multiscale, from molecules to tissues and from nanoseconds to a lifespan of several years or decades. ENISI MSM integrates multiple modeling technologies to understand immunological processes from signaling pathways within cells to lesion formation at the tissue level. This paper examines and summarizes the technical details of ENISI, from its initial version to its latest cutting-edge implementation. Implementation: An object-oriented programming approach is adopted to develop a suite of tools based on ENISI. Multiple modeling technologies are integrated to visualize tissues, cells, and proteins; furthermore, performance matching between the scales is addressed. Conclusion: We used ENISI MSM for developing predictive multiscale models of the mucosal immune system during gut

  9. A theoretical foundation for multi-scale regular vegetation patterns. (United States)

    Tarnita, Corina E; Bonachela, Juan A; Sheffer, Efrat; Guyton, Jennifer A; Coverdale, Tyler C; Long, Ryan A; Pringle, Robert M


    Self-organized regular vegetation patterns are widespread and thought to mediate ecosystem functions such as productivity and robustness, but the mechanisms underlying their origin and maintenance remain disputed. Particularly controversial are landscapes of overdispersed (evenly spaced) elements, such as North American Mima mounds, Brazilian murundus, South African heuweltjies, and, famously, Namibian fairy circles. Two competing hypotheses are currently debated. On the one hand, models of scale-dependent feedbacks, whereby plants facilitate neighbours while competing with distant individuals, can reproduce various regular patterns identified in satellite imagery. Owing to deep theoretical roots and apparent generality, scale-dependent feedbacks are widely viewed as a unifying and near-universal principle of regular-pattern formation despite scant empirical evidence. On the other hand, many overdispersed vegetation patterns worldwide have been attributed to subterranean ecosystem engineers such as termites, ants, and rodents. Although potentially consistent with territorial competition, this interpretation has been challenged theoretically and empirically and (unlike scale-dependent feedbacks) lacks a unifying dynamical theory, fuelling scepticism about its plausibility and generality. Here we provide a general theoretical foundation for self-organization of social-insect colonies, validated using data from four continents, which demonstrates that intraspecific competition between territorial animals can generate the large-scale hexagonal regularity of these patterns. However, this mechanism is not mutually exclusive with scale-dependent feedbacks. Using Namib Desert fairy circles as a case study, we present field data showing that these landscapes exhibit multi-scale patterning, previously undocumented in this system, that cannot be explained by either mechanism in isolation.
These multi-scale patterns and other emergent properties, such as enhanced resistance to

  10. Automated image enhancement using power law transformations

    Indian Academy of Sciences (India)

    Automatically enhancing the contrast of an image has been a challenging task, since a digital image can represent a variety of scene types. Trifonov et al. (2001) performed automatic contrast enhancement by automatically determining the measure of central tendency of the brightness histogram of an image and shifting and ...
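
    A minimal sketch of power-law (gamma) enhancement with an automatically chosen exponent, in the spirit described above: the rule `mean ** gamma == 0.5` (pulling the histogram's central tendency toward mid-grey) is an illustrative assumption, not necessarily the transformation used in the article.

```python
import numpy as np

def auto_gamma(img):
    """Automatic power-law (gamma) enhancement.

    Chooses gamma so that the image's mean brightness maps to mid-grey:
    a dark image (mean < 0.5) gets gamma < 1 (brightening), a bright one
    gets gamma > 1 (darkening). Input is assumed normalized to [0, 1]."""
    img = np.clip(np.asarray(img, dtype=float), 1e-6, 1.0)
    mean = img.mean()
    gamma = np.log(0.5) / np.log(mean)   # solves mean ** gamma == 0.5
    return img ** gamma, gamma

dark = np.full((4, 4), 0.2)              # uniformly dark toy "image"
enhanced, gamma = auto_gamma(dark)       # gamma < 1 brightens it
```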

  11. Acoustics of multiscale sorptive porous materials (United States)

    Venegas, R.; Boutin, C.; Umnova, O.


    This paper investigates sound propagation in multiscale rigid-frame porous materials that support mass transfer processes, such as sorption and different types of diffusion, in addition to the usual visco-thermo-inertial interactions. The two-scale asymptotic method of homogenization for periodic media is successively used to derive the macroscopic equations describing sound propagation through the material. This allows us to conclude that the macroscopic mass balance is significantly modified by sorption, inter-scale (micro- to/from nanopore scales) mass diffusion, and inter-scale (pore to/from micro- and nanopore scales) pressure diffusion. This modification is accounted for by the dynamic compressibility of the effective saturating fluid, which exhibits atypical properties leading to a slower speed of sound and higher sound attenuation, particularly at low frequencies. In contrast, it is shown that the physical processes occurring at the micro- and nano-scales do not affect the macroscopic fluid flow through the material. The developed theory is exemplified by introducing an analytical model for multiscale sorptive granular materials, which is experimentally validated by comparing its predictions with acoustic measurements on granular activated carbons. Furthermore, we provide empirical evidence supporting an alternative method for measuring sorption and mass diffusion properties of multiscale sorptive materials using sound waves.

  12. Control for Intelligent Manufacturing: A Multiscale Challenge

    Directory of Open Access Journals (Sweden)

    Han-Xiong Li


    The Made in China 2025 initiative will require full automation in all sectors, from customers to production. This will result in great challenges to manufacturing systems in all sectors. In the future of manufacturing, all devices and systems should have sensing and basic intelligence capabilities for control and adaptation. In this study, after discussing the multiscale dynamics of the modern manufacturing system, a five-layer functional structure is proposed for processing uncertainties. Multiscale dynamics include multi-time-scale, space-time-scale, and multi-level dynamics. Control action will differ at different scales, with more design being required at both fast and slow time scales. More quantitative action is required in low-level operations, while more qualitative action is needed for high-level supervision. Intelligent manufacturing systems should have the capabilities of flexibility, adaptability, and intelligence. These capabilities will require the control action to be distributed and integrated with different approaches, including smart sensing, optimal design, and intelligent learning. Finally, a typical jet dispensing system is taken as a real-world example for multiscale modeling and control.

  13. Multiscale Phase Inversion of Seismic Data

    KAUST Repository

    Fu, Lei


    We present a scheme for multiscale phase inversion (MPI) of seismic data that is less sensitive to the unmodeled physics of wave propagation and a poor starting model than standard full waveform inversion (FWI). To avoid cycle-skipping, the multiscale strategy temporally integrates the traces several times, i.e. high-order integration, to produce low-boost seismograms that are used as input data for the initial iterations of MPI. As the iterations proceed, higher frequencies in the data are boosted by using integrated traces of lower order as the input data. The input data are also filtered into different narrow frequency bands for the MPI implementation. At low frequencies, we show that MPI with windowed reflections approximates wave equation inversion of the reflection traveltimes, except no traveltime picking is needed. Numerical results with synthetic acoustic data show that MPI is more robust than conventional multiscale FWI when the initial model is far from the true model. Results from synthetic viscoacoustic and elastic data show that MPI is less sensitive than FWI to some of the unmodeled physics. Inversion of marine data shows that MPI is more robust and produces modestly more accurate results than FWI for this data set.
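
    The low-frequency boosting by repeated temporal integration can be illustrated directly: each integration scales the amplitude spectrum by roughly 1/(2πf), so a twice-integrated trace emphasizes its low-frequency content. The two-tone test signal below is an illustrative stand-in for a seismogram, not the paper's data.

```python
import numpy as np

def integrate_trace(trace, dt, order):
    """Integrate a trace 'order' times (cumulative sum * dt), removing the
    mean before each pass so no artificial trend builds up. Each pass
    scales the spectrum by ~1/(2*pi*f), boosting low frequencies."""
    out = np.asarray(trace, dtype=float)
    for _ in range(order):
        out = np.cumsum(out - out.mean()) * dt
    return out

def band_energy(x, f0, dt, half_width=2.0):
    """Summed spectral amplitude in a band of +/- half_width Hz around f0."""
    spec = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(x.size, dt)
    band = (freqs > f0 - half_width) & (freqs < f0 + half_width)
    return float(spec[band].sum())

dt = 0.004                                   # 4 ms sampling, 2 s trace
t = np.arange(0, 2.0, dt)
trace = np.sin(2 * np.pi * 5 * t) + np.sin(2 * np.pi * 40 * t)  # 5 Hz + 40 Hz
low_boost = integrate_trace(trace, dt, order=2)

r_raw = band_energy(trace, 5, dt) / band_energy(trace, 40, dt)
r_low = band_energy(low_boost, 5, dt) / band_energy(low_boost, 40, dt)
# after two integrations the 5 Hz component dominates far more strongly,
# which is what lets early MPI iterations see only the low-frequency data
```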

  14. Engineering Digestion: Multiscale Processes of Food Digestion. (United States)

    Bornhorst, Gail M; Gouseti, Ourania; Wickham, Martin S J; Bakalis, Serafim


    Food digestion is a complex, multiscale process that has recently become of interest to the food industry due to the developing links between food and health or disease. Food digestion can be studied by using either in vitro or in vivo models, each having certain advantages or disadvantages. The recent interest in food digestion has resulted in a large number of studies in this area, yet few have provided an in-depth, quantitative description of digestion processes. To provide a framework to develop these quantitative comparisons, a summary is given here of digestion processes and parallel unit operations in the food and chemical industry. Characterization parameters and phenomena are suggested for each step of digestion. In addition to the quantitative characterization of digestion processes, the multiscale aspect of digestion must also be considered. In both food systems and the gastrointestinal tract, multiple length scales are involved in food breakdown, mixing, and absorption. These different length scales influence digestion processes independently as well as through interrelated mechanisms. To facilitate optimized development of functional food products, a multiscale, engineering approach may be taken to describe food digestion processes. A framework for this approach is described in this review, as well as examples that demonstrate the importance of process characterization and of the multiple, interrelated length scales in the digestion process.

  15. Finite Dimensional Approximations for Continuum Multiscale Problems

    Energy Technology Data Exchange (ETDEWEB)

    Berlyand, Leonid [Pennsylvania State Univ., University Park, PA (United States)


    The completed research project concerns the development of novel computational techniques for modeling nonlinear multiscale physical and biological phenomena. Specifically, it addresses the theoretical development and applications of the homogenization theory (coarse graining) approach to calculation of the effective properties of highly heterogeneous biological and bio-inspired materials with many spatial scales and nonlinear behavior. This theory studies properties of strongly heterogeneous media in problems arising in materials science, geoscience, biology, etc. Modeling of such media raises fundamental mathematical questions, primarily in partial differential equations (PDEs) and calculus of variations, the subject of the PI’s research. The focus of the completed research was on mathematical models of biological and bio-inspired materials with the common theme of multiscale analysis and coarse-grain computational techniques. Biological and bio-inspired materials offer the unique ability to create environmentally clean functional materials used for energy conversion and storage. These materials are intrinsically complex, with hierarchical organization occurring on many nested length and time scales. The potential to rationally design and tailor the properties of these materials for broad energy applications has been hampered by the lack of computational techniques which are able to bridge from the molecular to the macroscopic scale. The project addressed the challenge of computational treatments of such complex materials by the development of a synergistic approach that combines innovative multiscale modeling/analysis techniques with high performance computing.

  16. Multivariate multiscale entropy of financial markets (United States)

    Lu, Yunfan; Wang, Jun


    In the current process of quantifying the dynamical properties of complex phenomena in the financial market system, multivariate financial time series have attracted wide attention. In this work, considering the shortcomings and limitations of univariate multiscale entropy in analyzing multivariate time series, the multivariate multiscale sample entropy (MMSE), which can evaluate the complexity in multiple data channels over different timescales, is applied to quantify the complexity of financial markets. Its effectiveness and advantages are demonstrated in numerical simulations with two well-known synthetic noise signals. For the first time, the complexity of four generated trivariate return series for each stock trading hour in the China stock markets is quantified through the interdisciplinary application of this method. We find that the complexity of the trivariate return series in each hour shows a significant decreasing trend as stock trading time progresses. Further, the shuffled multivariate return series and the absolute multivariate return series are also analyzed. As another new attempt, the complexity of the global stock markets (Asia, Europe, and America) is quantified by analyzing the multivariate returns from them. Finally, we utilize the multivariate multiscale entropy to assess the relative complexity of normalized multivariate return volatility series with different degrees.
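
    A simplified sketch of the multiscale-entropy machinery: coarse-grain the series by non-overlapping averaging, then compute sample entropy at each scale with the tolerance fixed from the original (scale-1) series, as is conventional. This univariate version is a stand-in for the multivariate MMSE used in the paper; the classic result it reproduces is that white noise loses complexity under coarse-graining.

```python
import numpy as np

def coarse_grain(x, scale):
    """Non-overlapping window averages: the coarse-graining step of
    multiscale entropy."""
    x = np.asarray(x, dtype=float)
    n = x.size // scale
    return x[: n * scale].reshape(n, scale).mean(axis=1)

def sample_entropy(x, m=2, tol=None, r=0.2):
    """Univariate sample entropy: -log of the ratio of (m+1)- to m-length
    template matches within tolerance tol (default r * std)."""
    x = np.asarray(x, dtype=float)
    if tol is None:
        tol = r * x.std()
    def matches(mm):
        templ = np.array([x[i:i + mm] for i in range(x.size - mm)])
        total = 0
        for i in range(len(templ)):
            dist = np.max(np.abs(templ - templ[i]), axis=1)
            total += int(np.sum(dist <= tol)) - 1   # drop the self-match
        return total
    a, b = matches(m + 1), matches(m)
    return -np.log(a / b) if a > 0 and b > 0 else float("inf")

rng = np.random.default_rng(2)
noise = rng.normal(size=1000)
tol = 0.2 * noise.std()                 # tolerance fixed at scale 1
entropies = [sample_entropy(coarse_grain(noise, s), tol=tol) for s in (1, 2, 4)]
# for white noise the entropy falls as the scale grows
```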

  17. On automatic visual inspection of reflective surfaces

    DEFF Research Database (Denmark)

    Kulmann, Lionel


    This thesis describes different methods to perform automatic visual inspection of reflective manufactured products, with the aim of increasing productivity, reducing cost, and improving the quality level of the production. We investigate two different systems performing automatic visual inspection. The first is the inspection of highly reflective aluminum sheets, used by the Danish company Bang & Olufsen as a part of the exterior design and general appearance of their audio and video products. The second is the inspection of IBM hard disk read/write heads for defects during manufacturing. We have ... lighting methods in a framework generally usable for inspecting reflective surfaces. Special attention has been given to the design of illumination techniques to enhance defects of highly reflective aluminum sheets. The chosen optical system setup has been used to enhance surface defects of other reflective...

  18. Automatic trend estimation

    CERN Document Server

    Vamoş, Călin


    Our book introduces a method to evaluate the accuracy of trend estimation algorithms under conditions similar to those encountered in real time series processing. This method is based on Monte Carlo experiments with artificial time series numerically generated by an original algorithm. The second part of the book contains several automatic algorithms for trend estimation and time series partitioning. The source codes of the computer programs implementing these original automatic algorithms are given in the appendix and will be freely available on the web. The book contains a clear statement of the conditions and the approximations under which the algorithms work, as well as the proper interpretation of their results. We illustrate the functioning of the analyzed algorithms by processing time series from astrophysics, finance, biophysics, and paleoclimatology. The numerical experiment method extensively used in our book is already in common use in computational and statistical physics.
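
    The book's numerical-experiment method, evaluating a trend estimator on artificial series with a known trend, can be sketched as follows; the linear-trend-plus-white-noise generator and the ordinary-least-squares estimator are illustrative stand-ins for the book's own algorithms.

```python
import numpy as np

def mc_trend_accuracy(estimator, trend_slope=0.01, n=500, trials=200, seed=0):
    """Monte Carlo check of a trend-estimation algorithm: generate many
    artificial series with a known linear trend plus white noise, run the
    estimator on each, and report the RMS error of the recovered slope."""
    rng = np.random.default_rng(seed)
    t = np.arange(n)
    errs = []
    for _ in range(trials):
        series = trend_slope * t + rng.normal(0.0, 1.0, n)
        errs.append(estimator(series) - trend_slope)
    return float(np.sqrt(np.mean(np.square(errs))))

# illustrative estimator: ordinary least-squares slope
ols_slope = lambda y: np.polyfit(np.arange(y.size), y, 1)[0]
rmse = mc_trend_accuracy(ols_slope)
# rmse quantifies how accurately this estimator recovers the known trend
```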

  19. Automatic Program Development

    DEFF Research Database (Denmark)

    Automatic Program Development is a tribute to Robert Paige (1947-1999), our accomplished and respected colleague, and moreover our good friend, whose untimely passing was a loss to our academic and research community. We have collected the revised, updated versions of the papers published in his honor in the Higher-Order and Symbolic Computation Journal in the years 2003 and 2005. Among them there are two papers by Bob: (i) a retrospective view of his research lines, and (ii) a proposal for future studies in the area of automatic program derivation. The book also includes some papers that offer a renewed stimulus for continuing and deepening Bob's research visions. A familiar touch is given to the book by some pictures kindly provided to us by his wife Nieba, the personal recollections of his brother Gary, and some of his colleagues and friends.

  20. Automaticity or active control

    DEFF Research Database (Denmark)

    Tudoran, Ana Alina; Olsen, Svein Ottar

    This study addresses the quasi-moderating role of habit strength in explaining action loyalty. A model of loyalty behaviour is proposed that extends the traditional satisfaction–intention–action loyalty network. Habit strength is conceptualised as a cognitive construct referring to the psychological aspects of the construct, such as routine, inertia, automaticity, or very little conscious deliberation. The data consist of 2962 consumers participating in a large European survey. The results show that habit strength significantly moderates the association between satisfaction and action loyalty and, respectively, between intended loyalty and action loyalty. At high levels of habit strength, consumers are more likely to free up cognitive resources and incline the balance from controlled to routine and automatic-like responses.

  1. Multiscale Persistent Functions for Biomolecular Structure Characterization

    Energy Technology Data Exchange (ETDEWEB)

    Xia, Kelin [Nanyang Technological University (Singapore). Division of Mathematical Sciences, School of Physical, Mathematical Sciences and School of Biological Sciences; Li, Zhiming [Central China Normal University, Wuhan (China). Key Laboratory of Quark and Lepton Physics (MOE) and Institute of Particle Physics; Mu, Lin [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Computer Science and Mathematics Division


    Here in this paper, we introduce multiscale persistent functions for biomolecular structure characterization. The essential idea is to combine our multiscale rigidity functions (MRFs) with persistent homology analysis, so as to construct a series of multiscale persistent functions, particularly multiscale persistent entropies, for structure characterization. To clarify the fundamental idea of our method, the multiscale persistent entropy (MPE) model is discussed in great detail. Mathematically, unlike the previous persistent entropy (Chintakunta et al. in Pattern Recognit 48(2):391–401, 2015; Merelli et al. in Entropy 17(10):6872–6892, 2015; Rucco et al. in: Proceedings of ECCS 2014, Springer, pp 117–128, 2016), a special resolution parameter is incorporated into our model. Various scales can be achieved by tuning its value. Physically, our MPE can be used in conformational entropy evaluation. More specifically, it is found that our method incorporates in it a natural classification scheme. This is achieved through a density filtration of an MRF built from angular distributions. To further validate our model, a systematical comparison with the traditional entropy evaluation model is done. Additionally, it is found that our model is able to preserve the intrinsic topological features of biomolecular data much better than traditional approaches, particularly for resolutions in the intermediate range. Moreover, by comparing with traditional entropies from various grid sizes, bond angle-based methods and a persistent homology-based support vector machine method (Cang et al. in Mol Based Math Biol 3:140–162, 2015), we find that our MPE method gives the best results in terms of average true positive rate in a classic protein structure classification test. More interestingly, all-alpha and all-beta protein classes can be clearly separated from each other with zero error only in our model. Finally, a special protein structure index (PSI) is proposed, for the first
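    For reference, the (single-scale) persistent entropy that the MPE model generalizes, in the form given by the cited papers, can be sketched as follows; the resolution parameter and the MRF-based density filtration of this paper are not reproduced here.

```python
import math

def persistent_entropy(barcode):
    """Persistent entropy of a persistence barcode.

    barcode: list of (birth, death) pairs with finite death > birth.
    Each bar contributes p_i = (death_i - birth_i) / L, where L is the
    total bar length, and the entropy is H = -sum_i p_i * ln(p_i).
    """
    lengths = [death - birth for birth, death in barcode]
    total = sum(lengths)
    return -sum((l / total) * math.log(l / total) for l in lengths)

# Equal bars maximize the entropy at ln(number of bars).
print(persistent_entropy([(0.0, 1.0), (0.0, 1.0), (0.0, 1.0)]))  # ln(3) ≈ 1.0986
```

    Introducing a resolution parameter into the underlying rigidity function, as the paper does, then yields a family of such entropies indexed by scale.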

  2. Automatic food decisions

    DEFF Research Database (Denmark)

    Mueller Loose, Simone

    Consumers' food decisions are to a large extent shaped by automatic processes, which are either internally directed through learned habits and routines or externally influenced by context factors and visual information triggers. Innovative research methods such as eye tracking, choice experiments...... and food diaries allow us to better understand the impact of unconscious processes on consumers' food choices. Simone Mueller Loose will provide an overview of recent research insights into the effects of habit and context on consumers' food choices....

  3. Automatic Language Identification (United States)


    [Abstract garbled in the source scan; recoverable fragments indicate a system for distinguishing one language from another among many supported input languages, training separate models from French, German, and Spanish speech utterances, automatically locating vowels in each utterance, and computing feature vectors normalized to be insensitive to overall amplitude and pitch.]

  4. Automatic Road Pavement Assessment with Image Processing: Review and Comparison

    Directory of Open Access Journals (Sweden)

    Sylvie Chambon


    In the field of noninvasive sensing techniques for civil infrastructure monitoring, this paper addresses the problem of crack detection, on the surface of French national roads, by automatic analysis of optical images. The first contribution is a state of the art of the image-processing tools applied to civil engineering. The second contribution concerns fine-defect detection in pavement surfaces. The approach is based on multi-scale extraction and Markovian segmentation. Third, an evaluation and comparison protocol designed for evaluating this difficult task (road pavement crack detection) is introduced. Finally, the proposed method is validated, analysed, and compared to a detection approach based on morphological tools.


    Directory of Open Access Journals (Sweden)

    M. Sakamoto


    Over the last decade, multi-scale image segmentation has attracted particular interest and is widely used for object-based image analysis. In this study, we address issues in multi-scale image segmentation, in particular improving the validity of region merging and the variety of derived region shapes. Firstly, we introduce constraints on the application of the spectral criterion that suppress excessive merging between dissimilar regions. Secondly, we extend the evaluation of the smoothness criterion by modifying the definition of the extent of an object, which controls shape diversity. Thirdly, we develop a new shape criterion called aspect ratio. This criterion improves the reproducibility of object shapes so that they match the actual objects of interest; it constrains the aspect ratio of an object's bounding box while keeping the properties controlled by conventional shape criteria. These improvements and extensions lead to more accurate, flexible, and diverse segmentation results according to the shape characteristics of the target of interest. Furthermore, we also investigate a technique for quantitative and automatic parameterization in multi-scale image segmentation. The parameters are chosen by comparing segmentation results with training areas specified in advance, either maximizing the average area of the derived objects or optimizing an evaluation index called the F-measure. This makes it possible to automate parameterization suited to the objectives, especially from the viewpoint of shape reproducibility.
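    The F-measure-based parameterization described in the last step can be sketched as follows; the pixel-overlap formulation and the `best_scale` helper are assumptions for illustration, not the authors' exact procedure.

```python
def f_measure(segment_pixels, reference_pixels):
    """Pixel-overlap F-measure between a derived object and a training area:
    the harmonic mean of precision and recall of the overlapping pixels."""
    seg, ref = set(segment_pixels), set(reference_pixels)
    tp = len(seg & ref)
    if tp == 0:
        return 0.0
    precision = tp / len(seg)
    recall = tp / len(ref)
    return 2 * precision * recall / (precision + recall)

def best_scale(candidates, reference):
    """Pick the scale parameter whose segmentation best matches the
    reference training area.

    candidates: mapping {scale_parameter: pixel set of the derived object}."""
    return max(candidates, key=lambda s: f_measure(candidates[s], reference))
```

    Sweeping the scale parameter and keeping the value that maximizes the F-measure automates the parameterization without manual tuning.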

  6. A tantalum strength model using a multiscale approach: version 2

    Energy Technology Data Exchange (ETDEWEB)

    Becker, R; Arsenlis, A; Hommes, G; Marian, J; Rhee, M; Yang, L H


    built using only the screw relations. In light of this, the screw dislocation mobility relation is chosen as the starting point for the present tantalum model. Edge dislocations are not included explicitly in the current model. A significant change from the previous models is in the functional dependence of the dislocation evolution equations. The prior multiscale models assumed that the dislocation evolution rate depended on stress as well as the dislocation velocity. This created an implicit dependence on the kinetic relation and required an iterative solution for the dislocation density. In the present model the integration scheme is simplified by casting the dislocation evolution in terms of strain rate and current dislocation density. The final notable change was in the transition relation from the thermally activated regime to phonon drag. Historically, and in the prior models, this transition had been through a harmonic average on the strain rates, making it impossible to determine the stress directly when given the plastic strain rate. After considering alternative transition relations, it was determined that an equally suitable fit to the molecular dynamics simulation data in the transition region could be constructed by averaging stresses rather than strain rates. The end result of these enhancements is a simpler (fewer parameters) and more straightforward model formulation in which the material strength can be evaluated directly when given the plastic strain rate, temperature, pressure, and the dislocation density at the beginning of the time step. This greatly improves the computational efficiency and robustness over the prior models, where additional iteration loops were required.

  7. The multiscale nature of streamers

    International Nuclear Information System (INIS)

    Ebert, U; Montijn, C; Briels, T M P; Hundsdorfer, W; Meulenbroek, B; Rocco, A; Veldhuizen, E M van


    Streamers are a generic mode of electric breakdown of large gas volumes. They play a role in the initial stages of sparks and lightning, in technical corona reactors and in high altitude sprite discharges above thunderclouds. Streamers are characterized by a self-generated field enhancement at the head of the growing discharge channel. We briefly review recent streamer experiments and sprite observations. Then we sketch our recent work on computations of growing and branching streamers, we discuss concepts and solutions of analytical model reductions and we review different branching concepts and outline a hierarchy of model reductions.

  8. Transitions of the Multi-Scale Singularity Trees

    DEFF Research Database (Denmark)

    Somchaipeng, Kerawit; Sporring, Jon; Kreiborg, Sven


    Multi-Scale Singularity Trees (MSSTs) [10] are multi-scale image descriptors aimed at representing the deep structures of images. Changes in images are directly translated to changes in the deep structures and therefore to transitions in MSSTs. Because MSSTs can be used to represent the deep structure of images...

  9. Flexible feature-space-construction architecture and its VLSI implementation for multi-scale object detection (United States)

    Luo, Aiwen; An, Fengwei; Zhang, Xiangyu; Chen, Lei; Huang, Zunkai; Jürgen Mattausch, Hans


    Feature extraction techniques are a cornerstone of object detection in computer-vision-based applications. The detection performance of vision-based detection systems is often degraded by, e.g., changes in the illumination intensity of the light source, foreground-background contrast variations or automatic gain control from the camera. In order to avoid such degradation effects, we present a block-based L1-norm-circuit architecture which is configurable for different image-cell sizes, cell-based feature descriptors and image resolutions according to customization parameters from the circuit input. The incorporated flexibility in both the image resolution and the cell size for multi-scale image pyramids leads to lower computational complexity and power consumption. Additionally, an object-detection prototype for performance evaluation in 65 nm CMOS implements the proposed L1-norm circuit together with a histogram of oriented gradients (HOG) descriptor and a support vector machine (SVM) classifier. The proposed parallel architecture with high hardware efficiency enables real-time processing, high detection robustness, small chip-core area as well as low power consumption for multi-scale object detection.
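    In software terms, the block-based L1 normalization that such a circuit implements for HOG-style descriptors looks roughly like the following sketch (the standard L1 block norm, not the paper's exact hardware datapath):

```python
def l1_normalize_block(cells, eps=1e-6):
    """L1 block normalization for cell-based descriptors such as HOG:
    every histogram entry in the block is divided by the L1 norm of the
    whole block (plus a small epsilon to avoid division by zero).

    cells: list of per-cell histograms (lists of floats)."""
    total = sum(abs(v) for cell in cells for v in cell) + eps
    return [[v / total for v in cell] for cell in cells]
```

    Because the normalization needs only additions, absolute values, and one division per block, it maps naturally onto a compact, configurable circuit.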

  10. Information Management Workflow and Tools Enabling Multiscale Modeling Within ICME Paradigm (United States)

    Arnold, Steven M.; Bednarcyk, Brett A.; Austin, Nic; Terentjev, Igor; Cebon, Dave; Marsden, Will


    With the increased emphasis on reducing the cost and time to market of new materials, the need for analytical tools that enable the virtual design and optimization of materials throughout their processing - internal structure - property - performance envelope, along with the capturing and storing of the associated material and model information across its lifecycle, has become critical. This need is also fueled by the demands for higher efficiency in material testing; consistency, quality and traceability of data; product design; engineering analysis; as well as control of access to proprietary or sensitive information. Fortunately, material information management systems and physics-based multiscale modeling methods have kept pace with the growing user demands. Herein, recent efforts to establish workflow for and demonstrate a unique set of web application tools for linking NASA GRC's Integrated Computational Materials Engineering (ICME) Granta MI database schema and NASA GRC's Integrated multiscale Micromechanics Analysis Code (ImMAC) software toolset are presented. The goal is to enable seamless coupling between both test data and simulation data, which is captured and tracked automatically within Granta MI®, with full model pedigree information. These tools, and this type of linkage, are foundational to realizing the full potential of ICME, in which materials processing, microstructure, properties, and performance are coupled to enable application-driven design and optimization of materials and structures.

  11. Mechanical characterization of epoxy composite with multiscale reinforcements: Carbon nanotubes and short carbon fibers

    International Nuclear Information System (INIS)

    Rahmanian, S.; Suraya, A.R.; Shazed, M.A.; Zahari, R.; Zainudin, E.S.


    Highlights: • Multiscale composite was prepared by incorporation of carbon nanotubes and fibers. • Carbon nanotubes were also grown on short carbon fibers to enhance stress transfer. • Significant improvements were achieved in mechanical properties of composites. • Synergic effect of carbon nanotubes and fibers was demonstrated. - Abstract: Carbon nanotubes (CNT) and short carbon fibers were incorporated into an epoxy matrix to fabricate a high performance multiscale composite. To improve the stress transfer between epoxy and carbon fibers, CNT were also grown on fibers through the chemical vapor deposition (CVD) method to produce CNT-grown short carbon fibers (CSCF). Mechanical characterization of composites was performed to investigate the synergy effects of CNT and CSCF in the epoxy matrix. The multiscale composites revealed significant improvement in elastic and storage modulus, strength as well as impact resistance in comparison to CNT–epoxy or CSCF–epoxy composites. An optimum content of CNT was found which provided the maximum stiffness and strength. The synergic reinforcing effects of combined fillers were analyzed on the fracture surface of composites through optical and scanning electron microscopy (SEM).

  12. A comparison of nanoscale and multiscale PCL/gelatin scaffolds prepared by disc-electrospinning. (United States)

    Li, Dawei; Chen, Weiming; Sun, Binbin; Li, Haoxuan; Wu, Tong; Ke, Qinfei; Huang, Chen; Ei-Hamshary, Hany; Al-Deyab, Salem S; Mo, Xiumei


    Electrospinning is a versatile and convenient technology to generate nanofibers suitable for tissue engineering. However, the low production rate of traditional needle electrospinning hinders its applications. Needleless electrospinning is a potential strategy to promote the application of electrospun nanofiber in various fields. In this study, disc-electrospinning (one kind of needleless electrospinning) was conducted to produce poly(ε-caprolactone)/gelatin (PCL/GT) scaffolds with different structures: a nanoscale structure constructed from nanofibers and a multiscale structure consisting of nanofibers and microfibers. It was found that, due to the inhomogeneity of the PCL/GT solution, the disc-electrospun PCL-GT scaffold presented a multiscale structure with larger pores than that of the acid-assisted one (PCL-GT-A). Scanning electron microscopy images indicated the PCL-GT scaffold was constructed from nanofibers and microfibers. Mouse fibroblasts and rat bone marrow stromal cells both showed higher proliferation rates on the multiscale scaffold than on the nanoscale scaffolds. It was proposed that the nanofibers bridging between the microfibers enhanced cell adhesion and spreading, while the large pores in the three-dimensional (3D) PCL-GT scaffold provided more effective space for cells to proliferate and migrate. However, the uniform nanofibers and densely packed structure of the PCL-GT-A scaffold limited the cells to the surface. This study demonstrated the potential of the disc-electrospun PCL-GT scaffold containing nanofibers and microfibers for 3D tissue regeneration. Copyright © 2016 Elsevier B.V. All rights reserved.

  13. Automatic TLI recognition system, general description

    Energy Technology Data Exchange (ETDEWEB)

    Lassahn, G.D.


    This report is a general description of an automatic target recognition system developed at the Idaho National Engineering Laboratory for the Department of Energy. A user's manual is a separate volume, Automatic TLI Recognition System, User's Guide, and a programmer's manual is Automatic TLI Recognition System, Programmer's Guide. This system was designed as an automatic target recognition system for fast screening of large amounts of multi-sensor image data, based on low-cost parallel processors. This system naturally incorporates image data fusion, and it gives uncertainty estimates. It is relatively low cost, compact, and transportable. The software is easily enhanced to expand the system's capabilities, and the hardware is easily expandable to increase the system's speed. In addition to its primary function as a trainable target recognition system, this is also a versatile, general-purpose tool for image manipulation and analysis, which can be either keyboard-driven or script-driven. This report includes descriptions of three variants of the computer hardware, a description of the mathematical basis of the training process, and a description with examples of the system's capabilities.

  14. Multi-Scale Validation of a Nanodiamond Drug Delivery System and Multi-Scale Engineering Education (United States)

    Schwalbe, Michelle Kristin


    This dissertation has two primary concerns: (i) evaluating the uncertainty and prediction capabilities of a nanodiamond drug delivery model using Bayesian calibration and bias correction, and (ii) determining conceptual difficulties of multi-scale analysis from an engineering education perspective. A Bayesian uncertainty quantification scheme…

  15. Efficient algorithms for multiscale modeling in porous media

    KAUST Repository

    Wheeler, Mary F.


    We describe multiscale mortar mixed finite element discretizations for second-order elliptic and nonlinear parabolic equations modeling Darcy flow in porous media. The continuity of flux is imposed via a mortar finite element space on a coarse grid scale, while the equations in the coarse elements (or subdomains) are discretized on a fine grid scale. We discuss the construction of multiscale mortar basis and extend this concept to nonlinear interface operators. We present a multiscale preconditioning strategy to minimize the computational cost associated with construction of the multiscale mortar basis. We also discuss the use of appropriate quadrature rules and approximation spaces to reduce the saddle point system to a cell-centered pressure scheme. In particular, we focus on multiscale mortar multipoint flux approximation method for general hexahedral grids and full tensor permeabilities. Numerical results are presented to verify the accuracy and efficiency of these approaches. © 2010 John Wiley & Sons, Ltd.

  16. A complete categorization of multiscale models of infectious disease systems. (United States)

    Garira, Winston


    Modelling of infectious disease systems has entered a new era in which disease modellers are increasingly turning to multiscale modelling to extend traditional modelling frameworks into new application areas and to achieve higher levels of detail and accuracy in characterizing infectious disease systems. In this paper we present a categorization framework for categorizing multiscale models of infectious disease systems. The categorization framework consists of five integration frameworks and five criteria. We use the categorization framework to give a complete categorization of host-level immuno-epidemiological models (HL-IEMs). This categorization framework is also shown to be applicable in categorizing other types of multiscale models of infectious diseases beyond HL-IEMs through modifying the initial categorization framework presented in this study. Categorization of multiscale models of infectious disease systems in this way is useful in bringing some order to the discussion on the structure of these multiscale models.

  17. A New Multiscale Technique for Time-Accurate Geophysics Simulations (United States)

    Omelchenko, Y. A.; Karimabadi, H.


    Large-scale geophysics systems are frequently described by multiscale reactive flow models (e.g., wildfire and climate models, multiphase flows in porous rocks, etc.). Accurate and robust simulations of such systems by traditional time-stepping techniques face a formidable computational challenge. Explicit time integration suffers from global (CFL and accuracy) timestep restrictions due to inhomogeneous convective and diffusion processes, as well as closely coupled physical and chemical reactions. Application of adaptive mesh refinement (AMR) to such systems may not be always sufficient since its success critically depends on a careful choice of domain refinement strategy. On the other hand, implicit and timestep-splitting integrations may result in a considerable loss of accuracy when fast transients in the solution become important. To address this issue, we developed an alternative explicit approach to time-accurate integration of such systems: Discrete-Event Simulation (DES). DES enables asynchronous computation by automatically adjusting the CPU resources in accordance with local timescales. This is done by encapsulating flux-conservative updates of numerical variables in the form of events, whose execution and synchronization is explicitly controlled by imposing accuracy and causality constraints. As a result, at each time step DES self-adaptively updates only a fraction of the global system state, which eliminates unnecessary computation of inactive elements. DES can be naturally combined with various mesh generation techniques. The event-driven paradigm results in robust and fast simulation codes, which can be efficiently parallelized via a new preemptive event processing (PEP) technique. We discuss applications of this novel technology to time-dependent diffusion-advection-reaction and CFD models representative of various geophysics applications.
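    A toy sketch of the event-driven idea, assuming 1-D explicit diffusion: each cell carries its own local timestep, and a priority queue executes whichever update is due next. This illustrates only the asynchronous scheduling; the paper's flux-conservative event formulation and PEP parallelization are not reproduced.

```python
import heapq

def des_diffusion(u, d, t_end, courant=0.4):
    """Asynchronous explicit diffusion on a 1-D grid (dx = 1).

    Each cell i advances with its own local timestep
    dt_i = courant / (2 * d[i]), so stiff (high-diffusivity) cells are
    updated often while slow cells stay idle; a priority queue enforces
    causal ordering by always executing the earliest pending update.
    """
    n = len(u)
    heap = [(0.0, i) for i in range(n)]
    heapq.heapify(heap)
    while heap:
        t_i, i = heapq.heappop(heap)
        if t_i >= t_end:
            continue  # past the horizon: retire this cell's events
        dt = courant / (2.0 * d[i])
        left = u[i - 1] if i > 0 else u[i]      # zero-flux boundaries
        right = u[i + 1] if i < n - 1 else u[i]
        u[i] += d[i] * dt * (left - 2.0 * u[i] + right)
        heapq.heappush(heap, (t_i + dt, i))
    return u
```

    With d = [1, 4, 1], for instance, the middle cell is updated four times as often as its neighbours, which is exactly the self-adaptive behaviour described above.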

  18. From seconds to months: an overview of multi-scale dynamics of mobile telephone calls (United States)

    Saramäki, Jari; Moro, Esteban


    Big Data on electronic records of social interactions allow approaching human behaviour and sociality from a quantitative point of view with unforeseen statistical power. Mobile telephone Call Detail Records (CDRs), automatically collected by telecom operators for billing purposes, have proven especially fruitful for understanding one-to-one communication patterns as well as the dynamics of social networks that are reflected in such patterns. We present an overview of empirical results on the multi-scale dynamics of social dynamics and networks inferred from mobile telephone calls. We begin with the shortest timescales and fastest dynamics, such as burstiness of call sequences between individuals, and "zoom out" towards longer temporal and larger structural scales, from temporal motifs formed by correlated calls between multiple individuals to long-term dynamics of social groups. We conclude this overview with a future outlook.

  19. Classification of high-resolution remote sensing images based on multi-scale superposition (United States)

    Wang, Jinliang; Gao, Wenjie; Liu, Guangjie


    Landscape structures and processes at different scales show different characteristics. In the study of specific target landmarks, the most appropriate scale for images can be attained by scale conversion, which improves the accuracy and efficiency of feature identification and classification. In this paper, the authors carried out experiments on multi-scale classification, taking the Shangri-La area in north-western Yunnan province as the research area and images from SPOT5 HRG and the GF-1 satellite as data sources. Firstly, the authors upscaled the two images by cubic convolution and calculated the optimal scale for different objects shown in the images by variation functions. Then the authors conducted multi-scale superposition classification by Maximum Likelihood and evaluated the classification accuracy. The results indicate that: (1) for most objects, the optimal scale is larger than the original one. Specifically, water has the largest optimal scale, around 25-30 m; farmland, grassland, brushwood, roads, settlements, and woodland follow at 20-24 m. The optimal scale for shade and flood land is basically the same as the original one, i.e. 8 m and 10 m, respectively. (2) Regarding the classification of the multi-scale superposed images, the overall accuracy for SPOT5 HRG and GF-1 is 12.84% and 14.76% higher, respectively, than that of the original multi-spectral images, and the Kappa coefficient is 0.1306 and 0.1419 higher, respectively. Hence, multi-scale superposition classification applied to the research area can enhance the classification accuracy of remote sensing images.
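    The "variation functions" used to locate the optimal scale are empirical variograms; since the abstract does not give the estimator, the following 1-D semivariance is a standard stand-in for illustration:

```python
def semivariogram(values, lag):
    """Empirical semivariance of a 1-D transect at a given lag h:
    gamma(h) = (1 / (2 * N(h))) * sum_i (z[i + h] - z[i])**2,
    where N(h) is the number of point pairs separated by h."""
    pairs = [(values[i + lag] - values[i]) ** 2
             for i in range(len(values) - lag)]
    return sum(pairs) / (2.0 * len(pairs))
```

    Computing gamma(h) for each upscaled image and examining where it levels off indicates the pixel size at which a given land-cover class is most coherent.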

  20. Automatic Fabric Defect Detection with a Multi-Scale Convolutional Denoising Autoencoder Network Model

    Directory of Open Access Journals (Sweden)

    Shuang Mei


    Fabric defect detection is a necessary and essential step of quality control in the textile manufacturing industry. Traditional fabric inspections are usually performed by manual visual methods, which are low in efficiency and poor in precision for long-term industrial applications. In this paper, we propose an unsupervised learning-based automated approach to detect and localize fabric defects without any manual intervention. This approach reconstructs image patches with a convolutional denoising autoencoder network at multiple Gaussian pyramid levels and synthesizes detection results from the corresponding resolution channels. The reconstruction residual of each image patch is used as the indicator for direct pixel-wise prediction. By segmenting and synthesizing the reconstruction residual map at each resolution level, the final inspection result can be generated. This newly developed method has several prominent advantages for fabric defect detection. First, it can be trained with only a small amount of defect-free samples. This is especially important for situations in which collecting large amounts of defective samples is difficult and impracticable. Second, owing to the multi-modal integration strategy, it is relatively more robust and accurate compared to general inspection methods (the results at each resolution level can be viewed as a modality). Third, according to our results, it can address multiple types of textile fabrics, from simple to more complex. Experimental results demonstrate that the proposed model is robust and yields good overall performance with high precision and acceptable recall rates.
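    The residual-based decision rule at the core of the approach can be sketched independently of the network itself; `residual_map` and `defect_mask` are illustrative names, and the autoencoder that produces the reconstruction is omitted:

```python
def residual_map(patch, reconstructed):
    """Pixel-wise reconstruction residual |x - x_hat|: regions the
    autoencoder has learned (defect-free texture) reconstruct well,
    so large residuals indicate anomalies."""
    return [[abs(a - b) for a, b in zip(row_x, row_r)]
            for row_x, row_r in zip(patch, reconstructed)]

def defect_mask(residual, thresh):
    """Segment the residual map into a binary defect mask."""
    return [[1 if v > thresh else 0 for v in row] for row in residual]
```

    In the full method this mask is computed at every Gaussian pyramid level, and the per-level masks are fused into the final inspection result.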

  1. Automatic First-Arrival Detection and Picking With Multiscale Wavelet Analysis (United States)

    Zhang, H.; Thurber, C.; Rowe, C.


    Quickly detecting and accurately picking the P-wave first arrival is of great importance in locating earthquakes and characterizing velocity structure, especially in the era of large volumes of digital and real-time seismic data. The detector should be capable of finding the onset of the P-wave arrival against the background of microseismic and cultural noise. Normally, P-wave onset is characterized by a rapid change in amplitude and/or the arrival of high-frequency energy. The wavelet transform decomposes the signal at different scales, thus adaptively characterizing its components at different resolutions. Wavelet coefficients at high resolutions show the fine structure of the signal, and those at low resolutions characterize its coarse features. The main features in the signal will be retained over several resolution scales, and irrelevant ones will decay quickly at larger scales. We move a 30 s time window from the first sample of the earthquake data and decompose the signal in the window into 3 different resolutions with the fast wavelet transform. The border effect of the wavelet transform is compensated for by overlapping neighboring time windows by 5 s at both ends. At each resolution, the Akaike Information Criterion (AIC) picker is applied to the corresponding wavelet coefficients. If no two time picks in different resolution bands are within 0.6 s, it is concluded that there is no P first arrival in this window. The window is then moved forward in time until a P first arrival is found. We test our method on regional earthquake data from the Dead Sea Rift region and find that it detects about 95% of P first arrivals correctly. It will detect a wrong P-wave onset when the time window only includes an isolated glitch. When the detector finds the P first arrival, the picker determines the onset time and its uncertainty based on the features of the time picks corresponding to the different resolutions. Compared with manual picks, our picker provides onset times and uncertainties with high confidence; 92% of the autopicks are within 0.15 seconds of analyst picks for our data set.
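    The AIC picker applied to each resolution band can be sketched as follows, using one common variance-based form of the criterion (the authors' exact variant may differ):

```python
import math

def aic_pick(x):
    """Akaike Information Criterion onset picker for a single trace.

    Each candidate split point k models the trace as two stationary
    segments; AIC(k) = k*ln(var(x[:k])) + (n-k)*ln(var(x[k:])), and the
    onset is the k minimizing AIC, i.e. the sharpest variance change
    (noise before the pick, signal after it).
    """
    def var(seg):
        m = sum(seg) / len(seg)
        return sum((v - m) ** 2 for v in seg) / len(seg) or 1e-12

    n = len(x)
    best_k, best_aic = None, float("inf")
    for k in range(2, n - 2):
        aic = k * math.log(var(x[:k])) + (n - k) * math.log(var(x[k:]))
        if aic < best_aic:
            best_aic, best_k = aic, k
    return best_k
```

    Running such a picker on the wavelet coefficients of each band and requiring two bands to agree within 0.6 s reproduces the detection logic described above.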

  2. Automatic quantitative renal scintigraphy

    International Nuclear Information System (INIS)

    Valeyre, J.; Deltour, G.; Delisle, M.J.; Bouchard, A.


    Renal scintigraphy data may be analyzed automatically by the use of a processing system coupled to an Anger camera (TRIDAC-MULTI 8 or CINE 200). The computing sequence is as follows: normalization of the images; background noise subtraction on both images; evaluation of mercury-197 uptake by the liver and spleen; calculation of the activity fractions of each kidney with respect to the injected dose, taking into account the kidney depth, with the results referred to normal values; and output of the results. Automation minimizes parameter scatter and, by its simplicity, is a great asset in routine work.
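    A minimal sketch of the per-kidney computation, assuming an exponential depth-attenuation correction; the coefficient `mu` and the exact correction used by the TRIDAC system are assumptions for illustration, not taken from the paper.

```python
import math

def kidney_uptake_fraction(counts, background, depth_cm,
                           injected_dose_counts, mu=0.12):
    """Background-subtracted, depth-corrected kidney counts expressed
    as a fraction of the injected dose. The factor exp(mu * depth)
    compensates for attenuation of the photons by overlying tissue
    (mu in 1/cm is an assumed effective attenuation coefficient)."""
    corrected = (counts - background) * math.exp(mu * depth_cm)
    return corrected / injected_dose_counts
```

    The two per-kidney fractions can then be compared against normal reference values, as the processing sequence above describes.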

  3. Multi-scale curvature for automated identification of glaciated mountain landscapes (United States)

    Prasicek, Günther; Otto, Jan-Christoph; Montgomery, David R.; Schrott, Lothar


    Erosion by glacial and fluvial processes shapes mountain landscapes in a long-recognized and characteristic way. Upland valleys incised by fluvial processes typically have a V-shaped cross-section with uniform and moderately steep slopes, whereas glacial valleys tend to have a U-shaped profile with a changing slope gradient. We present a novel regional approach to automatically differentiate between fluvial and glacial mountain landscapes based on the relation of multi-scale curvature and drainage area. Sample catchments are delineated and multiple moving window sizes are used to calculate per-cell curvature over a variety of scales ranging from the vicinity of the flow path at the valley bottom to catchment sections fully including valley sides. Single-scale curvature can take similar values for glaciated and non-glaciated catchments but a comparison of multi-scale curvature leads to different results according to the typical cross-sectional shapes. To adapt these differences for automated classification of mountain landscapes into areas with V- and U-shaped valleys, curvature values are correlated with drainage area and a new and simple morphometric parameter, the Difference of Minimum Curvature (DMC), is developed. At three study sites in the western United States the DMC thresholds determined from catchment analysis are used to automatically identify 5 × 5 km quadrats of glaciated and non-glaciated landscapes and the distinctions are validated by field-based geological and geomorphological maps. Our results demonstrate that DMC is a good predictor of glacial imprint, allowing automated delineation of glacially and fluvially incised mountain landscapes. PMID:24748703

  4. Multiscale approaches to high efficiency photovoltaics

    Directory of Open Access Journals (Sweden)

    Connolly James Patrick


    Full Text Available While renewable energies are achieving parity around the globe, efforts to reach higher solar cell efficiencies become ever more difficult as cells approach the limiting efficiency. The so-called third-generation concepts attempt to break this limit through a combination of novel physical processes and new materials and concepts in organic and inorganic systems. Some examples of semi-empirical modelling in the field are reviewed, in particular for multispectral solar cells on silicon (French ANR project MultiSolSi). Their achievements are outlined, and the limits of these approaches shown. This introduces the main topic of this contribution: the use of multiscale experimental and theoretical techniques to go beyond the semi-empirical understanding of these systems. This approach has already led to great advances in modelling and to widely known modelling software. Yet a survey of the topic reveals a fragmentation of efforts across disciplines, firstly between the organic and inorganic fields, but also between high-efficiency concepts such as hot-carrier cells and intermediate-band concepts. We show how this obstacle to the resolution of practical research problems may be lifted by interdisciplinary cooperation across length scales, across experimental and theoretical fields, and finally across materials systems. We present a European COST Action, “MultiscaleSolar”, kicking off in early 2015, which brings together experimental and theoretical partners in order to develop multiscale research in organic and inorganic materials. The goal of this defragmentation and interdisciplinary collaboration is to develop understanding across length scales, which will enable the full potential of third-generation concepts to be evaluated in practice, for societal and industrial applications.

  5. Entropic Approach to Multiscale Clustering Analysis

    Directory of Open Access Journals (Sweden)

    Antonio Insolia


    Full Text Available Recently, a novel method has been introduced to estimate the statistical significance of clustering in the direction distribution of objects. The method involves a multiscale procedure, based on the Kullback–Leibler divergence and the Gumbel statistics of extreme values, providing high discrimination power even in the presence of strong isotropic background contamination. It is shown that the method is: (i) semi-analytical, drastically reducing computation time; (ii) very sensitive to small-, medium- and large-scale clustering; (iii) not biased against the null hypothesis. Applications to the physics of ultra-high energy cosmic rays, as a cosmological probe, are presented and discussed.

  6. Structure and multiscale mechanics of carbon nanomaterials

    CERN Document Server


    This book aims to provide a broad overview of the relationship between structure and mechanical properties of carbon nanomaterials, written by world-leading scientists in the field. The main aim is an in-depth understanding of the broad range of mechanical properties of carbon materials based on their unique nanostructure and on defects of several types and at different length scales. Besides experimental work mainly based on the use of (in-situ) Raman and X-ray scattering and on nanoindentation, the book also covers some aspects of multiscale modeling of the mechanics of carbon nanomaterials.

  7. Automatic readout micrometer (United States)

    Lauritzen, T.

    A measuring system is described for surveying and very accurately positioning objects with respect to a reference line. A principal use of this surveying system is for accurately aligning the electromagnets that direct a particle beam emitted from a particle accelerator. Prior-art surveying systems require highly skilled surveyors. Prior-art systems include, for example, optical surveying systems, which are susceptible to operator reading errors, and celestial-navigation-type surveying systems, with their inherent complexities. The present invention provides an automatic readout micrometer which can very accurately measure distances. The invention has a simplicity of operation which practically eliminates the possibility of operator optical reading error, owing to the elimination of traditional optical alignments for making measurements. The invention has an extendable arm which carries a laser surveying target. The extendable arm can be continuously positioned over its entire length of travel by either a coarse or a fine adjustment, without having the fine adjustment outrun the coarse adjustment, until a reference laser beam is centered on the target as indicated by a digital readout. The length of the micrometer can then be accurately and automatically read by a computer and compared with a standardized set of alignment measurements. Due to its construction, the micrometer eliminates any errors due to temperature changes when the system is operated within a standard operating temperature range.

  8. Automatic personnel contamination monitor

    International Nuclear Information System (INIS)

    Lattin, Kenneth R.


    United Nuclear Industries, Inc. (UNI) has developed an automatic personnel contamination monitor (APCM), which uniquely combines the design features of both portal and hand-and-shoe monitors. In addition, this prototype system also has a number of new features, including: microcomputer control and readout, nineteen large-area gas flow detectors, real-time background compensation, self-checking for system failures, and card-reader identification and control. UNI's experience in operating the Hanford N Reactor, located in Richland, Washington, has shown the necessity of automatically monitoring plant personnel for contamination after they have passed through the procedurally controlled radiation zones. This final check ensures that each radiation zone worker has been properly checked before leaving company-controlled boundaries. Investigation of the commercially available portal and hand-and-shoe monitors indicated that they did not have the sensitivity or sophistication required for UNI's application; therefore, a development program was initiated, resulting in the subject monitor. Field testing shows good sensitivity to personnel contamination, with the majority of alarms showing contaminants on clothing, face and head areas. In general, the APCM has sensitivity comparable to portal survey instrumentation. The inherent stand-in, walk-on feature of the APCM not only makes it easy to use, but makes it difficult to bypass. (author)

  9. Reachability Games on Automatic Graphs (United States)

    Neider, Daniel

    In this work we study two-person reachability games on finite and infinite automatic graphs. For the finite case we empirically show that automatic game encodings are competitive with well-known symbolic techniques such as BDDs, SAT and QBF formulas. For the infinite case we present a novel algorithm utilizing algorithmic learning techniques, which makes it possible to solve large classes of automatic reachability games.
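For finite graphs, the core of solving a reachability game is the classical attractor (backward fixpoint) computation, which the automatic and symbolic encodings above are ways of accelerating. A plain explicit-state sketch, not the paper's automatic-structure implementation:

```python
def attractor(nodes, edges, owner, targets):
    # player-0 attractor: the set of nodes from which player 0 can force
    # the play into `targets`; owner[v] in {0, 1}, edges maps v -> successors
    win = set(targets)
    changed = True
    while changed:
        changed = False
        for v in nodes:
            if v in win:
                continue
            succ = edges.get(v, [])
            if owner[v] == 0:
                # player 0 needs only one successor in the winning set
                ok = any(s in win for s in succ)
            else:
                # player 1 must be unable to avoid the winning set
                ok = bool(succ) and all(s in win for s in succ)
            if ok:
                win.add(v)
                changed = True
    return win
```

The loop reaches a fixpoint after at most |nodes| rounds; player 0 wins exactly from the returned set.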

  10. Automatic reactor protection system tester

    International Nuclear Information System (INIS)

    Deliant, J.D.; Jahnke, S.; Raimondo, E.


    The object of this paper is to present the automatic tester of reactor protection systems designed and developed by EDF and Framatome. The following points are discussed in order: the necessity for reactor protection system testing; the drawbacks of manual testing; the description and use of the Framatome automatic tester; on-site installation of this system; and the positive results obtained using the Framatome automatic tester in France

  11. Development of an automatic pipeline scanning system

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Jae H.; Lee, Jae C.; Moon, Soon S.; Eom, Heung S.; Choi, Yu R


    Pressure pipe inspection in nuclear power plants is one of the mandatory regulation items. Compared to manual ultrasonic inspection, automatic inspection has the benefits of more accurate and reliable inspection results and reduced radiation exposure. The final objective of this project is to develop an automatic pipeline inspection system for pressure pipe welds in nuclear power plants. We developed a pipeline scanning robot with four magnetic wheels and a 2-axis manipulator for controlling ultrasonic transducers, and developed the robot control computer which guides the robot precisely along the inspection path. We expect our system can contribute to reduced inspection time, performance enhancement, and effective management of inspection results. The system developed by this project can be practically used for inspection work after field tests. (author)

  12. Disentangling Complexity in Bayesian Automatic Adaptive Quadrature (United States)

    Adam, Gheorghe; Adam, Sanda


    The paper describes a Bayesian automatic adaptive quadrature (BAAQ) solution for numerical integration which is simultaneously robust, reliable, and efficient. Detailed discussion is provided of three main factors which contribute to the enhancement of these features: (1) refinement of the m-panel automatic adaptive scheme through the use of integration-domain-length-scale-adapted quadrature sums; (2) fast early problem complexity assessment - enables the non-transitive choice among three execution paths: (i) immediate termination (exceptional cases); (ii) pessimistic - involves time and resource consuming Bayesian inference resulting in radical reformulation of the problem to be solved; (iii) optimistic - asks exclusively for subrange subdivision by bisection; (3) use of the weaker accuracy target from the two possible ones (the input accuracy specifications and the intrinsic integrand properties respectively) - results in maximum possible solution accuracy under minimum possible computing time.
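The "optimistic" path, subrange subdivision by bisection until two quadrature sums of different order agree, is essentially classical adaptive quadrature. A minimal adaptive-Simpson sketch of that idea, with none of the Bayesian machinery of the paper:

```python
def adaptive_quadrature(f, a, b, tol=1e-9, depth=30):
    # accept a subrange when the two-panel Simpson sum agrees with the
    # one-panel sum to within the local tolerance; otherwise bisect
    def simpson(x0, x2):
        x1 = 0.5 * (x0 + x2)
        return (x2 - x0) / 6.0 * (f(x0) + 4.0 * f(x1) + f(x2))

    def recurse(x0, x2, whole, tol, depth):
        x1 = 0.5 * (x0 + x2)
        left, right = simpson(x0, x1), simpson(x1, x2)
        if depth == 0 or abs(left + right - whole) < 15.0 * tol:
            # Richardson extrapolation of the two estimates
            return left + right + (left + right - whole) / 15.0
        return (recurse(x0, x1, left, tol / 2.0, depth - 1)
                + recurse(x1, x2, right, tol / 2.0, depth - 1))

    return recurse(a, b, simpson(a, b), tol, depth)
```

Smooth integrands terminate after few bisections, while difficult subranges (the "pessimistic" cases the paper treats with Bayesian inference) drive the recursion deep.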

  13. Automatic sets and Delone sets

    International Nuclear Information System (INIS)

    Barbe, A; Haeseler, F von


    Automatic sets D ⊂ Z^m are characterized by having a finite number of decimations. They are equivalently generated by fixed points of certain substitution systems, or by certain finite automata. As examples, two-dimensional versions of the Thue-Morse, Baum-Sweet, Rudin-Shapiro and paperfolding sequences are presented. We give a necessary and sufficient condition for an automatic set D ⊂ Z^m to be a Delone set in R^m. The result is then extended to automatic sets that are defined as fixed points of certain substitutions. The morphology of automatic sets is discussed by means of examples
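The two-dimensional Thue-Morse set mentioned above has a one-line membership test, and its decimations illustrate the finiteness property. The set-definition convention (even total bit count) is an assumption for illustration:

```python
def in_thue_morse_2d(m, n):
    # membership in a 2D Thue-Morse automatic set: (m, n) is in D iff the
    # total number of 1-bits in the binary expansions of m and n is even
    return (bin(m).count("1") + bin(n).count("1")) % 2 == 0

def decimation(a, b):
    # the decimation D_ab = {(m, n) : (2m + a, 2n + b) in D}; since
    # count1(2m + a) = count1(m) + a, each D_ab is D itself (a + b even)
    # or its complement (a + b odd), so there are finitely many decimations
    return lambda m, n: in_thue_morse_2d(2 * m + a, 2 * n + b)
```

The finite-automaton view reads the base-2 digits of (m, n) jointly and tracks only the running bit-count parity, a two-state machine.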

  14. Automatic liver contouring for radiotherapy treatment planning (United States)

    Li, Dengwang; Liu, Li; Kapp, Daniel S.; Xing, Lei


    To develop automatic and efficient liver contouring software for planning 3D-CT and four-dimensional computed tomography (4D-CT) for application in clinical radiation therapy treatment planning systems. The algorithm comprises three steps for overcoming the challenge of similar intensities between the liver region and its surrounding tissues. First, the total variation model with the L1 norm (TV-L1), which has the characteristic of multi-scale decomposition and an edge-preserving property, is used for removing the surrounding muscles and tissues. Second, an improved level set model that contains both global and local energy functions is utilized to extract liver contour information sequentially. In the global energy function, the local correlation coefficient (LCC) is constructed based on the gray level co-occurrence matrix both of the initial liver region and the background region. The LCC can calculate the correlation of a pixel with the foreground and background regions, respectively. The LCC is combined with intensity distribution models to classify pixels during the evolutionary process of the level set based method. The obtained liver contour is used as the candidate liver region for the following step. In the third step, voxel-based texture characterization is employed for refining the liver region and obtaining the final liver contours. The proposed method was validated based on the planning CT images of a group of 25 patients undergoing radiation therapy treatment planning. These included ten lung cancer patients with normal appearing livers and ten patients with hepatocellular carcinoma or liver metastases. The method was also tested on abdominal 4D-CT images of a group of five patients with hepatocellular carcinoma or liver metastases. 
The false positive volume percentage, the false negative volume percentage, and the Dice similarity coefficient between the liver contours obtained by the developed algorithm and the current standard delineated by the expert group were used to evaluate the performance of the method.
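The three evaluation metrics named above have a direct implementation on boolean masks. A sketch, assuming the false positive/negative percentages are taken relative to the reference (expert) volume:

```python
import numpy as np

def segmentation_metrics(auto_mask, ref_mask):
    # Dice similarity coefficient plus false positive / false negative
    # volume percentages, taken relative to the reference volume
    a = auto_mask.astype(bool)
    r = ref_mask.astype(bool)
    inter = np.logical_and(a, r).sum()
    dice = 2.0 * inter / (a.sum() + r.sum())
    fpvp = 100.0 * np.logical_and(a, ~r).sum() / r.sum()
    fnvp = 100.0 * np.logical_and(~a, r).sum() / r.sum()
    return dice, fpvp, fnvp
```

A Dice value of 1.0 with zero false positive/negative percentages indicates perfect agreement with the expert contour.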

  15. Multiscale coherent structures in tokamak plasma turbulence

    International Nuclear Information System (INIS)

    Xu, G. S.; Wan, B. N.; Zhang, W.; Yang, Q. W.; Wang, L.; Wen, Y. Z.


    A 12-tip poloidal probe array is used on the HT-7 superconducting tokamak [Li, Wan, and Mao, Plasma Phys. Controlled Fusion 42, 135 (2000)] to measure plasma turbulence in the edge region. Statistical analysis techniques are used to characterize the turbulence structures. It is found that the plasma turbulence is composed of multiscale coherent structures, i.e., turbulent eddies, and that there is self-similarity over a relatively short scale range. The presence of the self-similarity is attributed to the structural similarity of these eddies between different scales. These turbulent eddies constitute the basic convection cells, so the self-similar range is just the dominant scale range relevant to transport. The experimental results also indicate that the plasma turbulence is dominated by low-frequency and long-wavelength fluctuation components, and its dispersion relation shows typical electron-drift-wave characteristics. Some large-scale coherent structures intermittently burst out and exhibit a very long poloidal extent, even longer than 6 cm. It is found that these large-scale coherent structures are mainly contributed by the low-frequency and long-wavelength fluctuating components, and their presence is responsible for the observations of long-range correlations, i.e., correlation over a scale range much longer than the turbulence decorrelation scale. These experimental observations suggest that the coexistence of multiscale coherent structures results in the self-similar turbulent state.
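Long-range correlation in such probe-array data is typically quantified by the normalized cross-correlation between tip signals as a function of poloidal separation. A minimal sketch (zero-lag correlation only; the function names are illustrative, not the authors'):

```python
import numpy as np

def cross_correlation(sig_a, sig_b):
    # zero-lag normalized cross-correlation between two probe signals
    a = sig_a - sig_a.mean()
    b = sig_b - sig_b.mean()
    return float(np.dot(a, b) / (len(a) * a.std() * b.std()))

def correlation_vs_separation(signals):
    # correlation of every tip with tip 0 as a function of array index;
    # values staying high far beyond the decorrelation scale indicate
    # long-range correlation
    return [cross_correlation(signals[0], s) for s in signals]
```

A correlation profile that plateaus well above zero at large tip separations is the signature of the long poloidal extent discussed above.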

  16. Institute for Multiscale Modeling of Biological Interactions

    Energy Technology Data Exchange (ETDEWEB)

    Paulaitis, Michael E; Garcia-Moreno, Bertrand; Lenhoff, Abraham


    The Institute for Multiscale Modeling of Biological Interactions (IMMBI) has two primary goals: Foster interdisciplinary collaborations among faculty and their research laboratories that will lead to novel applications of multiscale simulation and modeling methods in the biological sciences and engineering; and Building on the unique biophysical/biology-based engineering foundations of the participating faculty, train scientists and engineers to apply computational methods that collectively span multiple time and length scales of biological organization. The success of IMMBI will be defined by the following: Size and quality of the applicant pool for pre-doctoral and post-doctoral fellows; Academic performance; Quality of the pre-doctoral and post-doctoral research; Impact of the research broadly and to the DOE (ASCR program) mission; Distinction of the next career step for pre-doctoral and post-doctoral fellows; and Faculty collaborations that result from IMMBI activities. Specific details about accomplishments during the three years of DOE support for IMMBI have been documented in Annual Progress Reports (April 2005, June 2006, and March 2007) and a Report for a National Academy of Sciences Review (October 2005) that were submitted to DOE on the dates indicated. An overview of these accomplishments is provided.

  17. Neural network based multiscale image restoration approach (United States)

    de Castro, Ana Paula A.; da Silva, José D. S.


    This paper describes a neural network based multiscale image restoration approach. Multilayer perceptrons are trained with artificial images of degraded gray-level circles, in an attempt to make the neural network learn inherent spatial relations of the degraded pixels. The present approach simulates the degradation by a low-pass Gaussian filter blurring operation and the addition of noise to the pixels at pre-established rates. The training process considers the degraded image as input and the non-degraded image as output for the supervised learning process. The neural network thus performs an inverse operation by recovering a quasi non-degraded image in the least-squares sense. The main difference of the approach from existing ones relies on the fact that the spatial relations are taken from different scales, thus providing relational spatial data to the neural network. The approach is an attempt to come up with a simple method that leads to an optimum solution to the problem. The multiscale operation is simulated by considering different window sizes around a pixel. In the generalization phase the neural network is exposed to indoor, outdoor, and satellite degraded images, following the same steps used for the artificial circle images.
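The multiscale input, different window sizes around each pixel, can be sketched as a concatenated feature vector; the window sizes here are illustrative, not the paper's:

```python
import numpy as np

def multiscale_windows(image, row, col, sizes=(3, 5, 9)):
    # pixel neighborhoods at several window sizes around one pixel,
    # concatenated into a single feature vector: the multiscale input
    # a perceptron would be trained on
    pad = max(sizes) // 2
    padded = np.pad(image, pad, mode="edge")
    r, c = row + pad, col + pad
    feats = []
    for s in sizes:
        h = s // 2
        feats.append(padded[r - h:r + h + 1, c - h:c + h + 1].ravel())
    return np.concatenate(feats)
```

Each pixel thus contributes both its immediate neighborhood and progressively wider context, which is the relational spatial data the network learns from.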

  18. A Multiscale Model for Virus Capsid Dynamics

    Directory of Open Access Journals (Sweden)

    Changjun Chen


    Full Text Available Viruses are infectious agents that can cause epidemics and pandemics. The understanding of virus formation, evolution, stability, and interaction with host cells is of great importance to the scientific community and public health. Typically, a virus complex in association with its aquatic environment poses a fabulous challenge to theoretical description and prediction. In this work, we propose a differential geometry-based multiscale paradigm to model complex biomolecule systems. In our approach, the differential geometry theory of surfaces and geometric measure theory are employed as a natural means to couple the macroscopic continuum domain of the fluid mechanical description of the aquatic environment with the microscopic discrete domain of the atomistic description of the biomolecule. A multiscale action functional is constructed as a unified framework to derive the governing equations for the dynamics of different scales. We show that the classical Navier-Stokes equation for the fluid dynamics and Newton's equation for the molecular dynamics can be derived from the least action principle. These equations are coupled through the continuum-discrete interface whose dynamics is governed by potential driven geometric flows.

  19. Multi-scale biomedical systems: measurement challenges

    International Nuclear Information System (INIS)

    Summers, R


    Multi-scale biomedical systems are those that represent interactions in materials, sensors, and systems from a holistic perspective. It is possible to view such multi-scale activity using measurement of spatial scale or time scale, though in this paper only the former is considered. The biomedical application paradigm comprises interactions that range from quantum biological phenomena at scales of 10^-12 m for one individual to epidemiological studies of disease spread in populations that in a pandemic lead to measurement at a scale of 10^7 m. It is clear that there are measurement challenges at either end of this spatial scale, but those challenges that relate to the use of new technologies that deal with big data and health service delivery at the point of care are also considered. The measurement challenges lead to the use, in many cases, of model-based measurement and the adoption of virtual engineering. It is these measurement challenges that will be uncovered in this paper. (paper)

  20. A multiscale model for virus capsid dynamics. (United States)

    Chen, Changjun; Saxena, Rishu; Wei, Guo-Wei


    Viruses are infectious agents that can cause epidemics and pandemics. The understanding of virus formation, evolution, stability, and interaction with host cells is of great importance to the scientific community and public health. Typically, a virus complex in association with its aquatic environment poses a fabulous challenge to theoretical description and prediction. In this work, we propose a differential geometry-based multiscale paradigm to model complex biomolecule systems. In our approach, the differential geometry theory of surfaces and geometric measure theory are employed as a natural means to couple the macroscopic continuum domain of the fluid mechanical description of the aquatic environment with the microscopic discrete domain of the atomistic description of the biomolecule. A multiscale action functional is constructed as a unified framework to derive the governing equations for the dynamics of different scales. We show that the classical Navier-Stokes equation for the fluid dynamics and Newton's equation for the molecular dynamics can be derived from the least action principle. These equations are coupled through the continuum-discrete interface whose dynamics is governed by potential driven geometric flows.

  1. Multiscale permutation entropy analysis of electrocardiogram (United States)

    Liu, Tiebing; Yao, Wenpo; Wu, Min; Shi, Zhaorong; Wang, Jun; Ning, Xinbao


    To make a comprehensive nonlinear analysis of the ECG, multiscale permutation entropy (MPE) was applied to ECG feature extraction. Three kinds of ECG records from the PhysioNet database are used in this paper: congestive heart failure (CHF) patients, healthy young subjects, and elderly subjects. We set the embedding dimension to 4, adjust the scale factor from 2 to 100 with a step size of 2, and compare MPE with multiscale entropy (MSE). As the scale factor increases, the MPE complexity of the three ECG signals first decreases and then increases. When the scale factor is between 10 and 32, the complexities of the three ECGs differ most: the entropy of the elderly subjects is on average 0.146 less than that of the CHF patients and 0.025 larger than that of the healthy young, in line with normal physiological characteristics. The test results show that MPE can be applied effectively to nonlinear ECG analysis and can effectively distinguish different ECG signals.

  2. Multiscale Convolutional Neural Networks for Hand Detection

    Directory of Open Access Journals (Sweden)

    Shiyang Yan


    Full Text Available Unconstrained hand detection in still images plays an important role in many hand-related vision problems, for example, hand tracking, gesture analysis, human action recognition, human-machine interaction, and sign language recognition. Although hand detection has been extensively studied for decades, it is still a challenging task with many problems to be tackled. The contributing factors for this complexity include heavy occlusion, low resolution, varying illumination conditions, different hand gestures, and the complex interactions between hands and objects or other hands. In this paper, we propose a multiscale deep learning model for unconstrained hand detection in still images. Deep learning models, and deep convolutional neural networks (CNNs) in particular, have achieved state-of-the-art performance in many vision benchmarks. Developed from the region-based CNN (R-CNN) model, we propose a hand detection scheme based on candidate regions generated by a generic region proposal algorithm, followed by multiscale information fusion from the popular VGG16 model. Two benchmark datasets were used to validate the proposed method, namely the Oxford Hand Detection Dataset and the VIVA Hand Detection Challenge. We achieved state-of-the-art results on the Oxford Hand Detection Dataset and satisfactory performance in the VIVA Hand Detection Challenge.

  3. Residual-driven online generalized multiscale finite element methods

    KAUST Repository

    Chung, Eric T.


    The construction of local reduced-order models via multiscale basis functions has been an area of active research. In this paper, we propose online multiscale basis functions which are constructed using the offline space and the current residual. Online multiscale basis functions are constructed adaptively in some selected regions based on our error indicators. We derive an error estimator which shows that one needs an offline space with certain properties to guarantee that additional online multiscale basis functions will decrease the error. This error decrease is independent of physical parameters, such as the contrast and multiple scales in the problem. The offline spaces are constructed using Generalized Multiscale Finite Element Methods (GMsFEM). We show that if one chooses a sufficient number of offline basis functions, one can guarantee that additional online multiscale basis functions will reduce the error independently of the contrast. We note that the construction of online basis functions is motivated by the fact that the offline space construction does not take into account distant effects. Using the residual information, we can incorporate the distant information provided the offline approximation satisfies certain properties. Theoretical and numerical results are presented. Our numerical results show that if the offline space is sufficiently large (in terms of the dimension) such that the coarse space contains all multiscale spectral basis functions that correspond to small eigenvalues, then the error reduction from adding online multiscale basis functions is independent of the contrast. We discuss various ways of computing online multiscale basis functions, including the use of small-dimensional offline spaces.

  4. Analysis of individual brain activation maps using hierarchical description and multiscale detection

    International Nuclear Information System (INIS)

    Poline, J.B.; Mazoyer, B.M.


    The authors propose a new method for the analysis of brain activation images that aims at detecting activated volumes rather than pixels. The method is based on Poisson process modeling, hierarchical description, and multiscale detection (MSD). Its performance has been assessed using both Monte Carlo simulated images and experimental PET brain activation data. As compared to other methods, the MSD approach shows enhanced sensitivity with a controlled overall type I error, and has the ability to provide an estimate of the spatial limits of the detected signals. It is applicable to any kind of difference image for which the spatial autocorrelation function can be approximated by a stationary Gaussian function.

  5. Automatic segmentation of the colon (United States)

    Wyatt, Christopher L.; Ge, Yaorong; Vining, David J.


    Virtual colonoscopy is a minimally invasive technique that enables detection of colorectal polyps and cancer. Normally, a patient's bowel is prepared with colonic lavage and gas insufflation prior to computed tomography (CT) scanning. An important step for 3D analysis of the image volume is segmentation of the colon. The high-contrast gas/tissue interface that exists in the colon lumen makes segmentation of the majority of the colon relatively easy; however, two factors inhibit automatic segmentation of the entire colon. First, the colon is not the only gas-filled organ in the data volume: the lungs, small bowel, and stomach also meet this criterion. User-defined seed points placed in the colon lumen have previously been required to spatially isolate only the colon. Second, portions of the colon lumen may be obstructed by peristalsis, large masses, and/or residual feces. These complicating factors require increased user interaction during the segmentation process to isolate additional colon segments. To automate the segmentation of the colon, we have developed a method to locate seed points and segment the gas-filled lumen with no user supervision. We have also developed an automated approach that improves lumen segmentation by digitally removing residual contrast-enhanced fluid resulting from a new bowel preparation that liquefies and opacifies any residual feces.
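Unsupervised seed location plus lumen growing can be sketched as threshold-based flood fill: every unvisited sub-threshold voxel seeds a new connected region, and the gas-filled organs come out as separate labels to be classified afterwards. A 2D sketch with an illustrative air threshold in Hounsfield units (not the paper's values):

```python
from collections import deque

def segment_gas_regions(image, threshold=-800):
    # label 4-connected gas regions in a 2D slice of HU values by flood
    # fill; seeds are found automatically as unvisited sub-threshold pixels
    rows, cols = len(image), len(image[0])
    label = [[0] * cols for _ in range(rows)]
    current = 0
    for r0 in range(rows):
        for c0 in range(cols):
            if image[r0][c0] < threshold and label[r0][c0] == 0:
                current += 1
                label[r0][c0] = current
                q = deque([(r0, c0)])
                while q:
                    r, c = q.popleft()
                    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        rr, cc = r + dr, c + dc
                        if (0 <= rr < rows and 0 <= cc < cols
                                and image[rr][cc] < threshold
                                and label[rr][cc] == 0):
                            label[rr][cc] = current
                            q.append((rr, cc))
    return label, current
```

In a full pipeline the lungs and stomach would then be rejected by anatomical criteria, leaving the colon lumen labels.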

  6. Multi-scale salient feature extraction on mesh models

    KAUST Repository

    Yang, Yongliang


    We present a new method of extracting multi-scale salient features on meshes. It is based on robust estimation of curvature on multiple scales. The correspondence between a salient feature and its scale of interest can be established straightforwardly: detailed features appear on small scales, and features with more global shape information show up on large scales. We demonstrate that this multi-scale description of features accords with human perception and can be further used for several applications, such as feature classification and viewpoint selection. Experiments show that our method is a very helpful multi-scale analysis tool for studying 3D shapes. © 2012 Springer-Verlag.

  7. Automatic quantitative metallography

    International Nuclear Information System (INIS)

    Barcelos, E.J.B.V.; Ambrozio Filho, F.; Cunha, R.C.


    The quantitative determination of metallographic parameters is analysed through the description of the Micro-Videomat automatic image analysis system, applied to the volumetric percentage of pearlite in nodular cast irons, porosity and average grain size in high-density sintered pellets of UO 2 , and grain size of ferritic steel. The techniques adopted are described and the results obtained are compared with the corresponding ones from the direct counting process: counting of systematic points (grid) to measure volume, and the intersection method, utilizing a circumference of known radius, for the average grain size. The technique adopted for nodular cast iron resulted from the small difference in optical reflectivity of graphite and pearlite. Porosity evaluation of sintered UO 2 pellets is also analyzed. [pt]


  8. Automatic architectural style recognition
    Directory of Open Access Journals (Sweden)

    M. Mathias


    Full Text Available Procedural modeling has proven to be a very valuable tool in the field of architecture. In the last few years, research has soared to automatically create procedural models from images. However, current algorithms for this process of inverse procedural modeling rely on the assumption that the building style is known. So far, the determination of the building style has remained a manual task. In this paper, we propose an algorithm which automates this process through classification of architectural styles from facade images. Our classifier first identifies the images containing buildings, then separates individual facades within an image and determines the building style. This information could then be used to initialize the building reconstruction process. We have trained our classifier to distinguish between several distinct architectural styles, namely Flemish Renaissance, Haussmannian and Neoclassical. Finally, we demonstrate our approach on various street-side images.

  9. Automatic surveying techniques

    International Nuclear Information System (INIS)

    Sah, R.


    In order to investigate the feasibility of automatic surveying methods in a more systematic manner, the PEP organization signed a contract in late 1975 for TRW Systems Group to undertake a feasibility study. The completion of this study resulted in TRW Report 6452.10-75-101, dated December 29, 1975, which was largely devoted to an analysis of a survey system based on an Inertial Navigation System. This PEP note is a review and, in some instances, an extension of that TRW report. A second survey system which employed an "Image Processing System" was also considered by TRW, and it will be reviewed in the last section of this note. 5 refs., 5 figs., 3 tabs

  10. Automatic alkaloid removal system. (United States)

    Yahaya, Muhammad Rizuwan; Hj Razali, Mohd Hudzari; Abu Bakar, Che Abdullah; Ismail, Wan Ishak Wan; Muda, Wan Musa Wan; Mat, Nashriyah; Zakaria, Abd


    This automated alkaloid removal machine was developed at the Instrumentation Laboratory, Universiti Sultan Zainal Abidin, Malaysia, for removing alkaloid toxicity from Dioscorea hispida (DH) tubers. DH is a poisonous plant: scientific studies have shown that its tubers contain a toxic alkaloid constituent, dioscorine, and the tubers can only be consumed after the poison is removed. In this experiment, the tubers are blended into powder form before being inserted into the machine basket. The user pushes the START button on the machine controller to switch on the water pump, creating a turbulent wave of water in the machine tank. The water drains automatically when the outlet solenoid valve is triggered. The tuber powder is washed for 10 minutes while 1 liter of water contaminated with the toxin mixture flows out. The controller then automatically triggers the inlet solenoid valve, and fresh water flows into the tank until it reaches the desired level, as determined by an ultrasonic sensor. This process is repeated for 7 h. Positive results were obtained for several biological parameters: pH, temperature, dissolved oxygen, turbidity, conductivity, and fish survival rate/time. These parameters were near or equal to those of the control water, and it was assumed that the toxin is fully removed when the pH of the DH wash water approaches that of the control water. The pH of the control water is about 5.3, while the water from this process is 6.0; before running the machine, the pH of the contaminated water is about 3.8, which is too acidic. This automated machine saves time in removing toxicity from DH compared with the traditional method, while requiring less supervision by the user.

  11. Optimal Selection of Threshold Value 'r' for Refined Multiscale Entropy. (United States)

    Marwaha, Puneeta; Sunkaria, Ramesh Kumar


    The refined multiscale entropy (RMSE) technique was introduced to evaluate the complexity of a time series over multiple scale factors 't'. Here, the threshold value 'r' is updated as 0.15 times the SD of the filtered scaled time series. The use of a fixed threshold value 'r' in RMSE sometimes assigns very similar entropy values to certain time series at certain temporal scale factors and is unable to distinguish different time series optimally. The present study evaluates the RMSE technique by varying the threshold value 'r' from 0.05 to 0.25 times the SD of the filtered scaled time series and finding, for each scale factor, the optimal 'r' values at which different time series can be distinguished most effectively. The proposed RMSE was evaluated on HRV time series of normal sinus rhythm subjects, patients suffering from sudden cardiac death, congestive heart failure, healthy adult male, healthy adult female and mid-aged female groups, as well as on a synthetic simulated database, for data lengths 'N' of 3000, 3500 and 4000. The proposed RMSE results in improved discrimination among different time series. To enhance computational capability, empirical mathematical equations have been formulated for optimal selection of the threshold value 'r' as a function of the SD of the filtered scaled time series and the data length 'N' for each scale factor 't'.
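
    To illustrate the role of the tolerance 'r' (a generic sketch, not the authors' code), the following pure-Python sample entropy — the per-scale estimator underlying RMSE — can be swept over 'r' values from 0.05 to 0.25 times the SD, as proposed above. The function name and test data are illustrative assumptions.

    ```python
    import math
    import random

    def sample_entropy(series, m=2, r=0.2):
        """Sample entropy SampEn(m, r) of a 1-D series.

        r is the absolute tolerance; callers typically pass
        r = k * SD(series) with k in [0.05, 0.25], as swept in the study above.
        """
        n = len(series)

        def count_matches(length):
            # Pairs of templates of the given length whose maximum
            # coordinate-wise distance is within the tolerance r.
            templates = [series[i:i + length] for i in range(n - m)]
            count = 0
            for i in range(len(templates)):
                for j in range(i + 1, len(templates)):
                    if max(abs(a - b) for a, b in zip(templates[i], templates[j])) <= r:
                        count += 1
            return count

        b = count_matches(m)      # matches of length m
        a = count_matches(m + 1)  # matches of length m + 1
        if a == 0 or b == 0:
            return float("inf")   # undefined for too-short or too-strict settings
        return -math.log(a / b)

    random.seed(1)
    x = [random.gauss(0, 1) for _ in range(300)]
    mean = sum(x) / len(x)
    sd = (sum((v - mean) ** 2 for v in x) / len(x)) ** 0.5
    # Sweep the tolerance factor k from 0.05 to 0.25, as in the proposed RMSE:
    for k in (0.05, 0.15, 0.25):
        print(k, sample_entropy(x, m=2, r=k * sd))
    ```

    A very tight tolerance (k = 0.05) may yield no length-(m+1) matches at all for short series, which is one reason a per-scale optimal 'r' matters.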

  12. Multiscale Analysis of the Predictability of Stock Returns

    Directory of Open Access Journals (Sweden)

    Paweł Fiedor


    Due to the strong complexity of financial markets, economics does not have a unified theory of price formation in financial markets. The most common assumption is the Efficient-Market Hypothesis, which has been attacked by a number of researchers using different tools, with varying degrees of compliance with the formal definitions of efficiency and predictability. In our earlier work, we analysed the predictability of stock returns at two time scales using the entropy rate, which can be directly linked to the mathematical definition of predictability. Nonetheless, none of the above-mentioned studies allows any general understanding of how financial markets work beyond disproving the Efficient-Market Hypothesis. In our previous study, we proposed the Maximum Entropy Production Principle, which uses the entropy rate to create a general principle underlying the price formation processes. Both of these studies show that the predictability of price changes is higher at the intraday transaction level than at the scale of daily returns, but ignore all scales in between. In this study we extend these ideas using the multiscale entropy analysis framework to enhance our understanding of the predictability of price formation processes at various time scales.

  13. Dynamical glucometry: Use of multiscale entropy analysis in diabetes (United States)

    Costa, Madalena D.; Henriques, Teresa; Munshi, Medha N.; Segal, Alissa R.; Goldberger, Ary L.


    Diabetes mellitus (DM) is one of the world's most prevalent medical conditions. Contemporary management focuses on lowering mean blood glucose values toward a normal range, but largely ignores the dynamics of glucose fluctuations. We probed analyte time series obtained from continuous glucose monitor (CGM) sensors. We show that the fluctuations in CGM values sampled every 5 min are not uncorrelated noise. Next, using multiscale entropy analysis, we quantified the complexity of the temporal structure of the CGM time series from a group of elderly subjects with type 2 DM and age-matched controls. We further probed the structure of these CGM time series using detrended fluctuation analysis. Our findings indicate that the dynamics of glucose fluctuations from control subjects are more complex than those of subjects with type 2 DM over time scales ranging from about 5 min to 5 h. These findings support consideration of a new framework, dynamical glucometry, to guide mechanistic research and to help assess and compare therapeutic interventions, which should enhance complexity of glucose fluctuations and not just lower mean and variance of blood glucose levels.
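
    The coarse-graining step at the heart of multiscale entropy analysis (applied above to CGM time series) can be sketched as follows; this is a generic illustration, not the authors' implementation. Sample entropy is then computed on each coarse-grained series, one per scale factor.

    ```python
    def coarse_grain(series, tau):
        """Coarse-grain a series for scale factor tau: average consecutive
        non-overlapping windows of length tau (step 1 of multiscale entropy)."""
        n = len(series) // tau
        return [sum(series[i * tau:(i + 1) * tau]) / tau for i in range(n)]

    cgm = list(range(12))        # stand-in for a CGM series sampled every 5 min
    print(coarse_grain(cgm, 3))  # -> [1.0, 4.0, 7.0, 10.0]
    ```

    At scale factor tau = 3, each coarse-grained point here represents a 15-minute average; the time scales of about 5 min to 5 h reported above correspond to tau ranging from 1 to roughly 60.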

  14. Magnetospheric Multiscale Mission (MMS) Phase 2B Navigation Performance (United States)

    Scaperoth, Paige Thomas; Long, Anne; Carpenter, Russell


    The Magnetospheric Multiscale (MMS) formation flying mission, which consists of four spacecraft flying in a tetrahedral formation, has challenging navigation requirements associated with determining and maintaining the relative separations required to meet the science requirements. The baseline navigation concept for MMS is for each spacecraft to independently estimate its position, velocity and clock states using GPS pseudorange data provided by the Goddard Space Flight Center-developed Navigator receiver and maneuver acceleration measurements provided by the spacecraft's attitude control subsystem. State estimation is performed onboard in real-time using the Goddard Enhanced Onboard Navigation System flight software, which is embedded in the Navigator receiver. The current concept of operations for formation maintenance consists of a sequence of two maintenance maneuvers that is performed every 2 weeks. Phase 2b of the MMS mission, in which the spacecraft are in 1.2 x 25 Earth radii orbits with nominal separations at apogee ranging from 30 km to 400 km, has the most challenging navigation requirements because, during this phase, GPS signal acquisition is restricted to less than one day of the 2.8-day orbit. This paper summarizes the results from high-fidelity simulations to determine if the MMS navigation requirements can be met between and immediately following the maintenance maneuver sequence in Phase 2b.

  15. Classifying visemes for automatic lipreading

    NARCIS (Netherlands)

    Visser, Michiel; Poel, Mannes; Nijholt, Antinus; Matousek, Vaclav; Mautner, Pavel; Ocelikovi, Jana; Sojka, Petr


    Automatic lipreading is automatic speech recognition that uses only visual information. The relevant data in a video signal is isolated and features are extracted from it. From a sequence of feature vectors, where every vector represents one video image, a sequence of higher level semantic elements

  16. A Unification of Inheritance and Automatic Program Specialization

    DEFF Research Database (Denmark)

    Schultz, Ulrik Pagh


    The object-oriented style of programming facilitates program adaptation and enhances program genericness, but at the expense of efficiency. Automatic program specialization can be used to generate specialized, efficient implementations for specific scenarios, but requires the program to be structured appropriately for specialization and is yet another new concept for the programmer to understand and apply. We have unified automatic program specialization and inheritance into a single concept, and implemented this approach in a modified version of Java named JUST. When programming in JUST, inheritance is used to control the automatic application of program specialization to class members during compilation to obtain an efficient implementation. This paper presents the language JUST, which integrates object-oriented concepts, block structure, and techniques from automatic program specialization.
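
    The idea of program specialization that JUST automates can be illustrated with a classic example (a generic sketch in Python rather than JUST/Java; all names are illustrative): when a parameter is known statically, the generic program can be specialized into a faster residual program with the known value folded in.

    ```python
    def power(x, n):
        # Generic program: the exponent is a run-time value, so it loops.
        result = 1
        for _ in range(n):
            result *= x
        return result

    def specialize_power(n):
        """'Compile-time' specialization: the exponent n is known statically,
        so the loop is unrolled into a fixed chain of multiplications
        (represented here as nested closures)."""
        if n == 0:
            return lambda x: 1
        inner = specialize_power(n - 1)
        return lambda x: x * inner(x)

    cube = specialize_power(3)   # residual program: x * x * x * 1
    print(cube(5), power(5, 3))  # -> 125 125
    ```

    In JUST, by analogy, declaring which members are specialized is driven by inheritance rather than by an explicit specializer call as sketched here.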

  17. Examining Multiscale Movement Coordination in Collaborative Problem Solving

    DEFF Research Database (Denmark)

    Wiltshire, Travis; Steffensen, Sune Vork


    During collaborative problem solving (CPS), coordination occurs at different spatial and temporal scales. This multiscale coordination should, at least on some scales, play a functional role in facilitating effective collaboration outcomes. To evaluate this, we conducted a study of computer...

  18. Multi-Scale Simulation of High Energy Density Ionic Liquids

    National Research Council Canada - National Science Library

    Voth, Gregory A


    The focus of this AFOSR project was the molecular dynamics (MD) simulation of ionic liquid structure, dynamics, and interfacial properties, as well as multi-scale descriptions of these novel liquids (e.g...

  19. Multiscale behaviour of volatility autocorrelations in a financial market


    Pasquini, Michele; Serva, Maurizio


    We perform a scaling analysis on NYSE daily returns. We show that volatility correlations are power-laws on a time range from one day to one year and, more importantly, that they exhibit multiscale behaviour.
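
    A power-law decay of volatility autocorrelations is typically estimated by a linear fit in log-log coordinates; the following is a minimal sketch on noise-free synthetic data (the exponent 0.3 and the lag range are illustrative assumptions, not the paper's estimates).

    ```python
    import math

    def fit_power_law(lags, corr):
        """Least-squares slope of log(corr) versus log(lag): if
        corr ~ lag^(-beta), the slope estimates -beta."""
        xs = [math.log(l) for l in lags]
        ys = [math.log(c) for c in corr]
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
                sum((x - mx) ** 2 for x in xs)
        return -slope  # the exponent beta

    # synthetic volatility autocorrelation decaying as lag^(-0.3)
    lags = list(range(1, 251))   # one day up to roughly one trading year
    corr = [l ** -0.3 for l in lags]
    print(round(fit_power_law(lags, corr), 3))  # -> 0.3
    ```

    Multiscale behaviour, as reported above, would show up as different fitted exponents on different lag sub-ranges rather than one global slope.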

  20. CPR-based next-generation multiscale simulators

    NARCIS (Netherlands)

    Cusini, M.; Lukyanov, A.; Natvig, J.; Hajibeygi, H.


    Unconventional reservoir simulations involve several challenges arising not only from geological heterogeneities, but also from strong nonlinear physical coupling terms. All existing upscaling and multiscale methods rely on a classical sequential formulation to treat the coupling between the

  1. HLA component based environment for distributed multiscale simulations

    NARCIS (Netherlands)

    Rycerz, K.; Bubak, M.; Sloot, P.M.A.; Getov, V.


    In this paper we present a Grid environment that supports application building based on a High Level Architecture (HLA) component model. The proposed model is particularly suitable for distributed multiscale simulations. The original HLA partly supports interoperability and composability of

  2. Four-Channel Biosignal Analysis and Feature Extraction for Automatic Emotion Recognition (United States)

    Kim, Jonghwa; André, Elisabeth

    This paper investigates the potential of physiological signals as a reliable channel for automatic recognition of the user's emotional state. For emotion recognition, little attention has been paid so far to physiological signals compared to audio-visual emotion channels such as facial expression or speech. All essential stages of an automatic recognition system using biosignals are discussed, from recording the physiological dataset to feature-based multiclass classification. Four-channel biosensors are used to measure electromyogram, electrocardiogram, skin conductivity and respiration changes. A wide range of physiological features from various analysis domains, including time/frequency, entropy, geometric analysis, subband spectra and multiscale entropy, is proposed in order to find the best emotion-relevant features and to correlate them with emotional states. The best extracted features are specified in detail and their effectiveness is demonstrated by emotion recognition results.

  3. Development of an automatic reactor inspection system

    International Nuclear Information System (INIS)

    Kim, Jae Hee; Eom, Heung Seop; Lee, Jae Cheol; Choi, Yoo Raek; Moon, Soon Seung


    Using recent technologies in mobile robotics and computer science, we developed an automatic inspection system for weld lines of the reactor vessel. The ultrasonic inspection of the reactor pressure vessel is currently performed by commercialized robot manipulators. However, since the conventional fixed-type robot manipulator is very large, heavy and expensive, it requires a long inspection time and is hard to handle and maintain. In order to resolve these problems, we developed a new automatic inspection system using a small mobile robot crawling on the vertical wall of the reactor vessel. According to our conceptual design, we developed the reactor inspection system including an underwater inspection robot, a laser position control subsystem, an ultrasonic data acquisition/analysis subsystem and a main control subsystem. We successfully carried out underwater experiments on a reactor vessel mockup and on a real reactor vessel ready for Ulchine nuclear power plant unit 6 at Dusan Heavy Industry in Korea. After this project, we plan to commercialize our inspection system. Using this system, we can expect a large reduction of inspection time, performance enhancement, automatic management of inspection history, etc. From the economic point of view, we can also expect import substitution of more than 4 million dollars. The established essential technologies for intelligent control and automation are expected to be applied comprehensively to the automation of similar systems in nuclear power plants

  4. Multiscale modelling of coupled problems in porous materials


    Carmeliet, Jan; Derluyn, Hannelore; Mertens, Stijn; Moonen, Peter


    In this paper a multiscale approach for coupled mechanical and transport phenomena in porous media is presented. It is shown that monoscale approaches have several limitations: phenomena such as nonlinear elasticity, hysteresis, stiffness recovery under compressive loading, preferential moisture uptake into cracks, and permeability changes caused by changes in the pore structure due to chemical processes are not adequately taken into account. The multiscale mechanical model is b...

  5. Numerical methods and analysis of multiscale problems

    CERN Document Server

    Madureira, Alexandre L


    This book is about numerical modeling of multiscale problems, and introduces several asymptotic analysis and numerical techniques which are necessary for a proper approximation of equations that depend on different physical scales. Aimed at advanced undergraduate and graduate students in mathematics, engineering and physics – or researchers seeking a no-nonsense approach –, it discusses examples in their simplest possible settings, removing mathematical hurdles that might hinder a clear understanding of the methods. The problems considered are given by singular perturbed reaction advection diffusion equations in one and two-dimensional domains, partial differential equations in domains with rough boundaries, and equations with oscillatory coefficients. This work shows how asymptotic analysis can be used to develop and analyze models and numerical methods that are robust and work well for a wide range of parameters.

  6. Multi-scale simulation for plasma science

    Energy Technology Data Exchange (ETDEWEB)

    Ishiguro, S; Usami, S; Horiuchi, R; Ohtani, H; Maluckov, A; Skoric, M M, E-mail:


    In order to perform computer simulations of systems with large time and spatial scales, such as fusion plasma devices and solar-terrestrial plasma, macro simulation models, in which micro physics is modeled analytically or empirically, are usually used. However, kinetic effects such as wave-particle interaction play important roles in most nonlinear plasma phenomena and result in anomalous behavior, which limits the applicability of macro simulation models. In the past few years, several attempts have been made to overcome this difficulty. Two types of multi-scale simulation methods for nonlinear plasma science are presented. The first is the Micro-Macro Interconnected Simulation method (MMIS), where micro and macro models are connected dynamically through an interface and a macro time- and space-scale simulation is performed. The second is the Equation-Free Projective Integration method (EFPI), where a macro space- and time-scale simulation is performed using only a micro simulator and a sophisticated numerical algorithm.

  7. Multi-scale modeling of composites

    DEFF Research Database (Denmark)

    Azizi, Reza

    A general method to obtain the homogenized response of metal-matrix composites is developed. It is assumed that the microscopic scale is sufficiently small compared to the macroscopic scale such that the macro response does not affect the micromechanical model. Therefore, the microscopic scale is analyzed using a Representative Volume Element (RVE), while the homogenized data are saved and used as an input to the macro scale. The dependence on fiber size is analyzed using a higher-order plasticity theory, where free energy is stored due to plastic strain gradients at the micron scale. Hill ... to plastic deformation. The macroscopic operators found can be used to model metal-matrix composites on the macroscopic scale using a hierarchical multi-scale approach. Finally, decohesion under tension and shear loading is studied using a cohesive law for the interface between matrix and fiber.

  8. Multiscale simulation approach for battery production systems

    CERN Document Server

    Schönemann, Malte


    Addressing the challenge of improving battery quality while reducing high costs and environmental impacts of the production, this book presents a multiscale simulation approach for battery production systems along with a software environment and an application procedure. Battery systems are among the most important technologies of the 21st century since they are enablers for the market success of electric vehicles and stationary energy storage solutions. However, the performance of batteries so far has limited possible applications. Addressing this challenge requires an interdisciplinary understanding of dynamic cause-effect relationships between processes, equipment, materials, and environmental conditions. The approach in this book supports the integrated evaluation of improvement measures and is usable for different planning horizons. It is applied to an exemplary battery cell production and module assembly in order to demonstrate the effectiveness and potential benefits of the simulation.

  9. Multiscale reconstruction algorithm for compressed sensing. (United States)

    Lei, Jing; Liu, Wenyi; Liu, Shi; Liu, Qibin


    The compressed sensing (CS) method has attracted increasing attention owing to the novel insight it provides for signal and image processing technology. Acquiring high-quality reconstruction results plays a crucial role in successful applications of the CS method. This paper presents a multiscale reconstruction model that simultaneously considers the inaccuracy properties of the measurement data and the measurement matrix. Based on the wavelet analysis method, the original inverse problem is decomposed into a sequence of inverse problems, which are solved successively from the largest scale to the original scale. An objective functional that integrates the advantages of the least trimmed sum of absolute deviations (LTA) estimation and combinational M-estimation is proposed. An iteration scheme that incorporates the advantages of the homotopy method and the evolutionary programming (EP) algorithm is designed for solving the proposed objective functional. Numerical simulations are implemented to validate the feasibility of the proposed reconstruction method. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.
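
    The scale-by-scale decomposition described above can be illustrated with the simplest wavelet, the Haar transform (a generic sketch; the paper's actual wavelet basis and solver are not specified here):

    ```python
    def haar_step(signal):
        """One level of the (unnormalized) Haar transform: pairwise averages
        (approximation, the coarser scale) and pairwise differences (detail)."""
        half = len(signal) // 2
        approx = [(signal[2 * i] + signal[2 * i + 1]) / 2 for i in range(half)]
        detail = [(signal[2 * i] - signal[2 * i + 1]) / 2 for i in range(half)]
        return approx, detail

    def haar_decompose(signal, levels):
        """Multiscale pyramid: repeatedly split into approximation + detail.
        The coarsest approximation comes out first, mirroring the
        coarse-to-fine sequence of inverse problems described above."""
        details = []
        approx = list(signal)
        for _ in range(levels):
            approx, d = haar_step(approx)
            details.append(d)
        return approx, details

    a, ds = haar_decompose([4, 2, 6, 8, 10, 12, 2, 0], 2)
    print(a, ds)  # -> [5.0, 6.0] [[1.0, -1.0, -1.0, 1.0], [-2.0, 5.0]]
    ```

    A multiscale reconstruction would first solve for the coarse approximation `a`, then progressively reintroduce the detail coefficients `ds` down to the original scale.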

  10. MUSIC: MUlti-Scale Initial Conditions (United States)

    Hahn, Oliver; Abel, Tom


    MUSIC generates multi-scale initial conditions with multiple levels of refinements for cosmological ‘zoom-in’ simulations. The code uses an adaptive convolution of Gaussian white noise with a real-space transfer function kernel together with an adaptive multi-grid Poisson solver to generate displacements and velocities following first- (1LPT) or second-order Lagrangian perturbation theory (2LPT). MUSIC achieves rms relative errors of the order of 10^-4 for displacements and velocities in the refinement region and thus improves in terms of errors by about two orders of magnitude over previous approaches. In addition, errors are localized at coarse-fine boundaries and do not suffer from Fourier space-induced interference ringing.

  11. Multiscale dynamics of solar magnetic structures

    International Nuclear Information System (INIS)

    Uritsky, Vadim M.; Davila, Joseph M.


    Multiscale topological complexity of the solar magnetic field is among the primary factors controlling energy release in the corona, including associated processes in the photospheric and chromospheric boundaries. We present a new approach for analyzing multiscale behavior of the photospheric magnetic flux underlying these dynamics as depicted by a sequence of high-resolution solar magnetograms. The approach involves two basic processing steps: (1) identification of timing and location of magnetic flux origin and demise events (as defined by DeForest et al.) by tracking spatiotemporal evolution of unipolar and bipolar photospheric regions, and (2) analysis of collective behavior of the detected magnetic events using a generalized version of the Grassberger-Procaccia correlation integral algorithm. The scale-free nature of the developed algorithms makes it possible to characterize the dynamics of the photospheric network across a wide range of distances and relaxation times. Three types of photospheric conditions are considered to test the method: a quiet photosphere, a solar active region (NOAA 10365) in a quiescent non-flaring state, and the same active region during a period of M-class flares. The results obtained show (1) the presence of a topologically complex asymmetrically fragmented magnetic network in the quiet photosphere driven by meso- and supergranulation, (2) the formation of non-potential magnetic structures with complex polarity separation lines inside the active region, and (3) statistical signatures of canceling bipolar magnetic structures coinciding with flaring activity in the active region. Each of these effects can represent an unstable magnetic configuration acting as an energy source for coronal dissipation and heating.
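
    The second processing step uses a generalized version of the Grassberger-Procaccia correlation integral; the classical form for a set of detected event locations can be sketched as follows (an illustrative toy example, not the authors' generalized algorithm):

    ```python
    import math

    def correlation_integral(points, r):
        """Grassberger-Procaccia correlation integral C(r): the fraction of
        distinct point pairs separated by less than r (Euclidean distance)."""
        n = len(points)
        close = sum(
            1
            for i in range(n)
            for j in range(i + 1, n)
            if math.dist(points[i], points[j]) < r
        )
        return 2 * close / (n * (n - 1))

    # toy set of detected event locations (illustrative coordinates)
    events = [(0.0, 0.0), (0.0, 1.0), (0.0, 2.0), (3.0, 0.0)]
    print(correlation_integral(events, 1.5))  # -> 0.3333333333333333
    # Scanning r over several decades and fitting log C(r) against log r
    # yields the correlation dimension, characterizing how the events
    # cluster across scales.
    ```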

  12. Hybrid stochastic simplifications for multiscale gene networks

    Directory of Open Access Journals (Sweden)

    Debussche Arnaud


    Background: Stochastic simulation of gene networks by Markov processes has important applications in molecular biology. The complexity of exact simulation algorithms scales with the number of discrete jumps to be performed; approximate schemes reduce the computational time by reducing the number of simulated discrete events. Also, answering important questions about the relation between network topology and intrinsic noise generation and propagation should be based on general mathematical results, which are difficult to obtain for exact models. Results: We propose a unified framework for hybrid simplifications of Markov models of multiscale stochastic gene network dynamics. We discuss several possible hybrid simplifications and provide algorithms to obtain them from pure jump processes. In hybrid simplifications, some components are discrete and evolve by jumps, while other components are continuous. Hybrid simplifications are obtained by partial Kramers-Moyal expansion, which is equivalent to the application of the central limit theorem to a sub-model. By averaging and variable aggregation we drastically reduce simulation time and eliminate non-critical reactions. Hybrid and averaged simplifications can be used for more effective simulation algorithms and for obtaining general design principles relating noise to topology and time scales. The simplified models reproduce with good accuracy the stochastic properties of the gene networks, including waiting times in intermittence phenomena, fluctuation amplitudes and stationary distributions. The methods are illustrated on several gene network examples. Conclusion: Hybrid simplifications can be used for onion-like (multi-layered) approaches to multi-scale biochemical systems, in which various descriptions are used at various scales. Sets of discrete and continuous variables are treated with different methods and are coupled together in a physically justified approach.
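
    For context, the pure jump processes that such hybrid simplifications approximate are simulated exactly by Gillespie's stochastic simulation algorithm; below is a minimal birth-death sketch (rates, seed, and function name are illustrative, not from the paper):

    ```python
    import random

    def gillespie_birth_death(k_on, k_off, x0, t_max, rng):
        """Exact stochastic simulation (Gillespie) of a birth-death process,
        the kind of pure jump model that hybrid simplifications start from:
        birth at constant rate k_on, death at rate k_off * x."""
        t, x = 0.0, x0
        trajectory = [(t, x)]
        while t < t_max:
            rates = [k_on, k_off * x]
            total = sum(rates)
            if total == 0:
                break
            t += rng.expovariate(total)          # waiting time to the next jump
            if rng.random() * total < rates[0]:  # choose which reaction fires
                x += 1
            else:
                x -= 1
            trajectory.append((t, x))
        return trajectory

    rng = random.Random(42)
    traj = gillespie_birth_death(k_on=10.0, k_off=1.0, x0=0, t_max=50.0, rng=rng)
    # the long-run mean should approach k_on / k_off = 10
    tail = [x for t, x in traj if t > 25.0]
    print(sum(tail) / len(tail))
    ```

    A hybrid simplification would replace the abundant, fast-switching components of such a model by continuous (diffusion) variables while keeping rare jumps discrete, cutting the number of simulated events.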

  13. Multiscale sampling model for motion integration. (United States)

    Sherbakov, Lena; Yazdanbakhsh, Arash


    Biologically plausible strategies for visual scene integration across spatial and temporal domains continue to be a challenging topic. The fundamental question we address is whether classical problems in motion integration, such as the aperture problem, can be solved in a model that samples the visual scene at multiple spatial and temporal scales in parallel. We hypothesize that fast interareal connections that allow feedback of information between cortical layers are the key processes that disambiguate motion direction. We developed a neural model showing how the aperture problem can be solved using different spatial sampling scales between LGN, V1 layer 4, V1 layer 6, and area MT. Our results suggest that multiscale sampling, rather than feedback explicitly, is the key process that gives rise to end-stopped cells in V1 and enables area MT to solve the aperture problem without the need for calculating intersecting constraints or crafting intricate patterns of spatiotemporal receptive fields. Furthermore, the model explains why end-stopped cells no longer emerge in the absence of V1 layer 6 activity (Bolz & Gilbert, 1986), why V1 layer 4 cells are significantly more end-stopped than V1 layer 6 cells (Pack, Livingstone, Duffy, & Born, 2003), and how it is possible to have a solution to the aperture problem in area MT with no solution in V1 in the presence of driving feedback. In summary, while much research in the field focuses on how a laminar architecture can give rise to complicated spatiotemporal receptive fields to solve problems in the motion domain, we show that one can reframe motion integration as an emergent property of multiscale sampling achieved concurrently within lamina and across multiple visual areas.

  14. Multiscale modelling of nucleosome core particle aggregation (United States)

    Lyubartsev, Alexander P.; Korolev, Nikolay; Fan, Yanping; Nordenskiöld, Lars


    The nucleosome core particle (NCP) is the basic building block of chromatin. Under the influence of multivalent cations, isolated mononucleosomes exhibit a rich phase behaviour, forming various columnar phases with characteristic NCP-NCP stacking. NCP stacking is also a regular element of chromatin structure in vivo. Understanding of the mechanism of nucleosome stacking and the conditions leading to self-assembly of NCPs is still incomplete. Due to the complexity of the system and the need to describe electrostatics properly by including the explicit mobile ions, novel modelling approaches based on coarse-grained (CG) methods at the multiscale level become a necessity. In this work we present a multiscale CG computer simulation approach to modelling interactions and self-assembly of solutions of NCPs induced by the presence of multivalent cations. Starting from continuum simulations including explicit three-valent cobalt(III)hexammine (CoHex3+) counterions and 20 NCPs, based on a previously developed advanced CG NCP model with one bead per amino acid and five beads per two DNA base pair unit (Fan et al 2013 PLoS One 8 e54228), we use the inverse Monte Carlo method to calculate effective interaction potentials for a ‘super-CG’ NCP model consisting of seven beads for each NCP. These interaction potentials are used in large-scale simulations of up to 5000 NCPs, modelling self-assembly induced by CoHex3+. The systems of ‘super-CG’ NCPs form a single large cluster of stacked NCPs without long-range order, in agreement with experimental data for NCPs precipitated by the three-valent polyamine, spermidine3+.

  15. Experimental evaluation of multiscale tendon mechanics. (United States)

    Fang, Fei; Lake, Spencer P


    Tendon's primary function is to serve as a mechanical link between muscle and bone. The hierarchical structure of tendon and its specific compositional constituents are believed to be critical for proper mechanical function. With increased appreciation for tendon importance and the development of various technological advances, this review paper summarizes recent experimental approaches that have been used to study multiscale tendon mechanics, includes an overview of studies that have evaluated the role of specific tissue constituents, and also proposes challenges/opportunities facing tendon research. Tendon has been demonstrated to have specific structural characteristics (e.g., multi-level hierarchy, crimp pattern, helix) and complex mechanical properties (e.g., non-linearity, anisotropy, viscoelasticity). Physical mechanisms including uncrimping, fiber sliding, and collagen reorganization have been shown to govern tendon mechanical responses under both static and dynamic loading. Several tendon constituents with relatively small quantities have been suggested to play a role in its mechanics, although some results are conflicting. Further research should be performed to understand the interplay and communication of tendon mechanical properties across levels of the hierarchical structure, and to show how each of these components contributes to tendon mechanics. The studies summarized and discussed in this review have helped elucidate important aspects of multiscale tendon mechanics, which is a prerequisite for analyzing stress/strain transfer between multiple scales and identifying key principles of mechanotransduction. This information could further facilitate interpreting the functional diversity of tendons from different species, different locations, and even different developmental stages, and thus a better understanding of fundamental concepts related to tendon degeneration, disease, and healing. © 2017 Orthopaedic Research Society. Published by Wiley Periodicals, Inc.

  16. Robust simplifications of multiscale biochemical networks

    Directory of Open Access Journals (Sweden)

    Zinovyev Andrei


    Background: Cellular processes such as metabolism, decision making in development and differentiation, signalling, etc., can be modeled as large networks of biochemical reactions. In order to understand the functioning of these systems, there is a strong need for general model reduction techniques that allow models to be simplified without losing their main properties. In systems biology we also need to compare models or to couple them as parts of larger models; in these situations, reduction to a common level of complexity is needed. Results: We propose a systematic treatment of model reduction of multiscale biochemical networks. First, we consider linear kinetic models, which appear as "pseudo-monomolecular" subsystems of multiscale nonlinear reaction networks. For such linear models, we propose a reduction algorithm based on a generalized theory of the limiting step that we have developed previously. Second, for non-linear systems we develop an algorithm based on dominant solutions of quasi-stationarity equations. For oscillating systems, quasi-stationarity and averaging are combined to eliminate time scales much faster and much slower than the period of the oscillations. In all cases, we obtain robust simplifications and also identify the critical parameters of the model. The methods are demonstrated for simple examples and for a more complex model of the NF-κB pathway. Conclusion: Our approach allows critical parameter identification and produces hierarchies of models. Hierarchical modeling is important in "middle-out" approaches when there is a need to zoom in and out over several levels of complexity. Critical parameter identification is an important issue in systems biology, with potential applications to biological control and therapeutics. Our approach also deals naturally with the presence of multiple time scales, which is a general property of systems biology models.

  17. Automatic exposure for xeromammography

    International Nuclear Information System (INIS)

    Aichinger, H.


    During mammography without intensifying screens, exposure measurements are carried out behind the film. It is, however, difficult to construct an absolutely shadow-free ionization chamber of adequate sensitivity working in the necessary range of 25 to 50 kV. Repeated attempts have been made to utilize the advantages of automatic exposure for xero-mammography. In this case also, the ionization chamber was placed behind the Xerox plate. Depending on tube filtration, object thickness and tube voltage, more than 80%, sometimes even 90%, of the radiation is absorbed by the Xerox plate. In particular, the characteristic Mo radiation at 17.4 keV and 19.6 keV is almost totally absorbed by the plate and therefore cannot be registered by the ionization chamber. This results in a considerable dependence of the exposure on tube voltage and object thickness. This dependence has been examined dosimetrically and spectroscopically with a Ge(Li) spectrometer. Finally, the successful use of a shadow-free chamber is described; this has been particularly adapted for xero-mammography and is placed in front of the plate. (orig.) [de]

  18. Historical Review and Perspective on Automatic Journalizing


    Kato, Masaki


    Contents: Introduction; 1. EDP Accounting and Automatic Journalizing; 2. Learning System of Automatic Journalizing; 3. Automatic Journalizing by Artificial Intelligence; 4. Direction of the Progress of the Accounting Information System

  19. A multi-scale integrated modeling framework to measure comprehensive impact of coastal reclamation activities in Yellow River estuary, China. (United States)

    Xu, Yan; Cai, Yanpeng; Sun, Tao; Tan, Qian


    In this paper, an improved multi-scale integrated modeling framework is established to evaluate coastal reclamation intensity (CRI). Seven indicators are considered, including ecological degradation intensity (EDI), hydrodynamic disturbance (IHD), engineering types, water quality, economic investment, population growth, and reclaimed land area. The integrated framework enhances existing methods by (a) measuring the intensity of the ecological degradation process under multi-scale impact, (b) developing the indicator system of CRI and discussing the driving forces and trends of coastal reclamation, and (c) determining fuzzy preference relations of weights and calculating the specific value of CRI for the case study area of the Yellow River estuary from 2000 to 2015. The results show that CRI has grown steadily in recent years; the total growth rate from 2000 to 2015 is about 37.97%. It is concluded that CRI has climbed to a higher intensity level over the recent 15 years. Copyright © 2017. Published by Elsevier Ltd.
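    As a sketch of how such an index can be assembled, the weighted aggregation step might look like the following; the matrix entries, indicator values, and row-mean weighting rule are illustrative assumptions, not the paper's calibrated data:

```python
# Hedged sketch of the CRI aggregation step: indicators normalized to [0, 1]
# are combined with weights derived from a fuzzy preference (pairwise) matrix.

def weights_from_preference(P):
    # fuzzy preference matrix: P[i][j] in [0, 1] with P[i][j] + P[j][i] = 1
    n = len(P)
    raw = [sum(row) / n for row in P]
    total = sum(raw)
    return [r / total for r in raw]

def cri(indicators, w):
    # CRI as the weighted sum of normalized indicator values
    return sum(x * wi for x, wi in zip(indicators, w))

P = [[0.5, 0.7, 0.6],
     [0.3, 0.5, 0.4],
     [0.4, 0.6, 0.5]]
w = weights_from_preference(P)
value = cri([0.8, 0.5, 0.3], w)   # e.g. EDI, IHD, reclaimed-area indicators
print(round(value, 3))            # → 0.553
```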

  20. Electronic amplifiers for automatic compensators

    CERN Document Server

    Polonnikov, D Ye


    Electronic Amplifiers for Automatic Compensators presents the design and operation of electronic amplifiers for use in automatic control and measuring systems. This book is composed of eight chapters that consider the problems of constructing input and output circuits of amplifiers, suppression of interference and ensuring high sensitivity. This work begins with a survey of the operating principles of electronic amplifiers in automatic compensator systems. The succeeding chapters deal with circuit selection and the calculation and determination of the principal characteristics of amplifiers, as

  1. Automatic Road Gap Detection Using Fuzzy Inference System (United States)

    Hashemi, S.; Valadan Zoej, M. J.; Mokhtarzadeh, M.


    Automatic feature extraction from aerial and satellite images is a high-level data processing task and remains one of the most important research topics in the field. Most research focuses on the early step of road detection, where road tracking methods, morphological analysis, dynamic programming and snakes, multi-scale and multi-resolution methods, stereoscopic and multi-temporal analysis, and hyperspectral experiments are among the mature methods. Although most research is focused on detection algorithms, none of them can extract the road network perfectly. On the other hand, post-processing algorithms aimed at refining road detection results are not well developed either. In this article, the main aim is to design an intelligent method to detect and compensate road gaps remaining in the early results of road detection algorithms. The proposed algorithm consists of five main steps, as follows: 1) Short gap coverage: a multi-scale morphological operator is designed that covers short gaps in a hierarchical scheme. 2) Long gap detection: the long gaps that could not be covered in the previous stage are detected using a fuzzy inference system. For this purpose, a knowledge base of expert rules is designed and fired on gap candidates from the road detection results. 3) Long gap coverage: detected long gaps are compensated by two strategies, line fitting and polynomial fitting; shorter gaps are filled by line fitting, while longer ones are compensated by polynomials. 4) Accuracy assessment: to evaluate the obtained results, several accuracy assessment criteria are proposed, obtained by comparing the results with correctly compensated ones produced by a human expert. The complete evaluation of the obtained results, with their technical discussion, is the material of the full paper.
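    Step 1, the hierarchical short-gap coverage, can be sketched with a 1-D binary road mask; the closing operator and the scale list are assumptions, and the paper of course works on 2-D imagery:

```python
# Hedged sketch of step 1 (short gap coverage): morphological closing applied
# at increasing scales bridges short gaps while leaving long gaps for the
# fuzzy inference stage.

def dilate(mask, k):
    n = len(mask)
    return [1 if any(mask[max(0, i - k):i + k + 1]) else 0 for i in range(n)]

def erode(mask, k):
    n = len(mask)
    return [1 if all(mask[max(0, i - k):min(n, i + k + 1)]) else 0 for i in range(n)]

def close_multiscale(mask, scales):
    for k in scales:              # smallest scale first: a hierarchical scheme
        mask = erode(dilate(mask, k), k)
    return mask

road = [1, 1, 1, 0, 1, 1, 0, 0, 0, 1, 1, 1]   # one short gap, one long gap
result = close_multiscale(road, [1])
print(result)  # → [1, 1, 1, 1, 1, 1, 0, 0, 0, 1, 1, 1]  (long gap remains)
```

    The long gap survives the small-scale closing, which is exactly why the abstract hands such gaps to the fuzzy inference system in step 2.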

  2. Characteristics and design improvement of AP1000 automatic depressurization system

    International Nuclear Information System (INIS)

    Jin Fei


    The automatic depressurization system, a special feature of the AP1000 design, enhances the plant's capability to mitigate design basis accidents. The advancement of the system is discussed by comparing it with traditional PWR designs and analyzing system functions such as depressurization and venting. System design improvements made during the China Project are also described. Finally, suggestions for the system in the China Project are listed. (author)

  3. Automatic analysis of multiparty meetings

    Indian Academy of Sciences (India)

    The paper covers the Augmented Multi-party Interaction (AMI) meeting corpus, the development of a meeting speech recognition system, and systems for the automatic segmentation, summarization and social processing of meetings, together with some example applications based on these systems.

  4. AVID: Automatic Visualization Interface Designer

    National Research Council Canada - National Science Library

    Chuah, Mei


    Automatic generation offers great flexibility in performing data and information analysis tasks, because new designs are generated on a case-by-case basis to suit current and changing future needs...

  5. Clothes Dryer Automatic Termination Evaluation

    Energy Technology Data Exchange (ETDEWEB)

    TeGrotenhuis, Ward E.


    Volume 2: Improved Sensor and Control Designs. Many residential clothes dryers on the market today provide automatic cycles that are intended to stop when the clothes are dry, as determined by the final remaining moisture content (RMC). However, testing of automatic termination cycles has shown that many dryers are susceptible to over-drying of loads, leading to excess energy consumption. In particular, tests performed using the DOE Test Procedure in Appendix D2 of 10 CFR 430 subpart B have shown that as much as 62% of the energy used in a cycle may be from over-drying. Volume 1 of this report shows an average of 20% excess energy from over-drying when running automatic cycles with various load compositions and dryer settings. Consequently, improving automatic termination sensors and algorithms has the potential for substantial energy savings in the U.S.

  6. An automatic image recognition approach

    Directory of Open Access Journals (Sweden)

    Tudor Barbu


    Our paper focuses on the graphical analysis domain. We propose an automatic image recognition technique. This approach consists of two main pattern recognition steps. First, it performs an image feature extraction operation on an input image set, using statistical dispersion features. Then, an unsupervised classification process is performed on the previously obtained graphical feature vectors. An automatic region-growing based clustering procedure is proposed and utilized in the classification stage.
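    A minimal sketch of the two stages, dispersion-based feature extraction followed by region-growing style clustering of feature vectors, under assumed features and tolerance (not the paper's implementation):

```python
import statistics

# Hedged sketch: per-image statistical dispersion features, then an
# unsupervised region-growing grouping of the resulting feature vectors.

def features(image):
    flat = [p for row in image for p in row]
    return (statistics.pstdev(flat), max(flat) - min(flat))  # dispersion features

def region_grow_clusters(vectors, tol):
    clusters = []
    for v in vectors:
        for c in clusters:
            # grow a cluster if v is close to any member (L1 distance)
            if any(abs(v[0] - u[0]) + abs(v[1] - u[1]) <= tol for u in c):
                c.append(v)
                break
        else:
            clusters.append([v])   # seed a new cluster
    return clusters

images = [[[0, 0], [0, 1]], [[0, 1], [0, 1]], [[0, 9], [9, 0]]]
clusters = region_grow_clusters([features(im) for im in images], tol=1.0)
print(len(clusters))  # → 2  (low-contrast vs high-contrast group)
```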

  7. Stay Focused! The Effects of Internal and External Focus of Attention on Movement Automaticity in Patients with Stroke

    NARCIS (Netherlands)

    Kal, E. C.; van der Kamp, J.; Houdijk, H.; Groet, E.; van Bennekom, C. A. M.; Scherder, E. J. A.


    Dual-task performance is often impaired after stroke. This may be resolved by enhancing patients' automaticity of movement. This study sets out to test the constrained action hypothesis, which holds that automaticity of movement is enhanced by triggering an external focus (on movement effects),

  8. Fusion of dynamic contrast-enhanced magnetic resonance mammography at 3.0 T with X-ray mammograms: Pilot study evaluation using dedicated semi-automatic registration software

    Energy Technology Data Exchange (ETDEWEB)

    Dietzel, Matthias, E-mail: [Institute of Diagnostic and Interventional Radiology, Friedrich-Schiller-University Jena, Erlanger Allee 101, D-07740 Jena (Germany); Hopp, Torsten; Ruiter, Nicole [Karlsruhe Institute of Technology (KIT), Institute for Data Processing and Electronics, Postfach 3640, D-76021 Karlsruhe (Germany); Zoubi, Ramy [Institute of Diagnostic and Interventional Radiology, Friedrich-Schiller-University Jena, Erlanger Allee 101, D-07740 Jena (Germany); Runnebaum, Ingo B. [Clinic of Gynecology and Obstetrics, Friedrich-Schiller-University Jena, Bachstrasse 18, D-07743 Jena (Germany); Kaiser, Werner A. [Institute of Diagnostic and Interventional Radiology, Friedrich-Schiller-University Jena, Erlanger Allee 101, D-07740 Jena (Germany); Medical School, University of Harvard, 25 Shattuck Street, Boston, MA 02115 (United States); Baltzer, Pascal A.T. [Institute of Diagnostic and Interventional Radiology, Friedrich-Schiller-University Jena, Erlanger Allee 101, D-07740 Jena (Germany)


    Rationale and objectives: To evaluate the semi-automatic image registration accuracy of X-ray-mammography (XR-M) with high-resolution high-field (3.0 T) MR-mammography (MR-M) in an initial pilot study. Material and methods: MR-M was acquired on a high-field clinical scanner at 3.0 T (T1-weighted 3D VIBE ± Gd). XR-M was obtained with state-of-the-art full-field digital systems. Seven patients with clearly delineable mass lesions >10 mm both in XR-M and MR-M were enrolled (exclusion criteria: previous breast surgery; surgical intervention between XR-M and MR-M). XR-M and MR-M were matched using a dedicated image-registration algorithm allowing semi-automatic non-linear deformation of MR-M based on finite-element modeling. To identify registration errors (RE) a virtual craniocaudal 2D mammogram was calculated by the software from MR-M (with and w/o Gadodiamide/Gd) and matched with corresponding XR-M. To quantify REs the geometric center of the lesions in the virtual vs. conventional mammogram were subtracted. The robustness of registration was quantified by registration of X-MRs to both MR-Ms with and w/o Gadodiamide. Results: Image registration was performed successfully for all patients. Overall RE was 8.2 mm (1 min after Gd; confidence interval/CI: 2.0-14.4 mm, standard deviation/SD: 6.7 mm) vs. 8.9 mm (no Gd; CI: 4.0-13.9 mm, SD: 5.4 mm). The mean difference between pre- vs. post-contrast was 0.7 mm (SD: 1.9 mm). Conclusion: Image registration of high-field 3.0 T MR-mammography with X-ray-mammography is feasible. For this study applying a high-resolution protocol at 3.0 T, the registration was robust and the overall registration error was sufficient for clinical application.
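    The registration-error measure described, subtracting the geometric centers of a lesion in the virtual and the conventional mammogram, can be sketched as follows; the lesion outlines and millimetre coordinates are invented for illustration:

```python
import math

# Hedged sketch of the RE measure: centroid of the lesion in the virtual 2D
# mammogram minus centroid in the conventional XR-M, reported as a distance.

def centroid(points):
    n = len(points)
    return (sum(p[0] for p in points) / n, sum(p[1] for p in points) / n)

def registration_error(lesion_virtual, lesion_xray):
    cx, cy = centroid(lesion_virtual)
    dx, dy = centroid(lesion_xray)
    return math.hypot(cx - dx, cy - dy)

virtual = [(40.0, 60.0), (44.0, 60.0), (42.0, 64.0)]  # lesion on virtual mammogram (mm)
xray = [(46.0, 66.0), (50.0, 66.0), (48.0, 70.0)]     # same lesion on XR-M (mm)
err = registration_error(virtual, xray)
print(round(err, 1))  # → 8.5 (mm), of the order of the reported overall RE
```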

  9. Fusion of dynamic contrast-enhanced magnetic resonance mammography at 3.0 T with X-ray mammograms: Pilot study evaluation using dedicated semi-automatic registration software

    International Nuclear Information System (INIS)

    Dietzel, Matthias; Hopp, Torsten; Ruiter, Nicole; Zoubi, Ramy; Runnebaum, Ingo B.; Kaiser, Werner A.; Baltzer, Pascal A.T.


    Rationale and objectives: To evaluate the semi-automatic image registration accuracy of X-ray-mammography (XR-M) with high-resolution high-field (3.0 T) MR-mammography (MR-M) in an initial pilot study. Material and methods: MR-M was acquired on a high-field clinical scanner at 3.0 T (T1-weighted 3D VIBE ± Gd). XR-M was obtained with state-of-the-art full-field digital systems. Seven patients with clearly delineable mass lesions >10 mm both in XR-M and MR-M were enrolled (exclusion criteria: previous breast surgery; surgical intervention between XR-M and MR-M). XR-M and MR-M were matched using a dedicated image-registration algorithm allowing semi-automatic non-linear deformation of MR-M based on finite-element modeling. To identify registration errors (RE) a virtual craniocaudal 2D mammogram was calculated by the software from MR-M (with and w/o Gadodiamide/Gd) and matched with corresponding XR-M. To quantify REs the geometric center of the lesions in the virtual vs. conventional mammogram were subtracted. The robustness of registration was quantified by registration of X-MRs to both MR-Ms with and w/o Gadodiamide. Results: Image registration was performed successfully for all patients. Overall RE was 8.2 mm (1 min after Gd; confidence interval/CI: 2.0-14.4 mm, standard deviation/SD: 6.7 mm) vs. 8.9 mm (no Gd; CI: 4.0-13.9 mm, SD: 5.4 mm). The mean difference between pre- vs. post-contrast was 0.7 mm (SD: 1.9 mm). Conclusion: Image registration of high-field 3.0 T MR-mammography with X-ray-mammography is feasible. For this study applying a high-resolution protocol at 3.0 T, the registration was robust and the overall registration error was sufficient for clinical application.

  10. Multiscale Modeling of Mesoscale and Interfacial Phenomena (United States)

    Petsev, Nikolai Dimitrov

    We provide a novel and general framework for multiscale modeling of systems featuring one or more dissolved species. This makes it possible to retain molecular detail for parts of the problem that require it while using a simple, continuum description for parts where high detail is unnecessary, reducing the number of degrees of freedom (i.e. number of particles) dramatically. This opens the possibility for modeling ion transport in biological processes and biomolecule assembly in ionic solution, as well as electrokinetic phenomena at interfaces such as corrosion. The number of particles in the system is further reduced through an integrated boundary approach, which we apply to colloidal suspensions. In this thesis, we describe this general framework for multiscale modeling of single- and multicomponent systems, provide several simple equilibrium and non-equilibrium case studies, and discuss future applications.

  11. Multi-scale approximation of Vlasov equation

    International Nuclear Information System (INIS)

    Mouton, A.


    One of the most important difficulties in the numerical simulation of magnetized plasmas is the existence of multiple time and space scales, which can be very different. In order to produce good simulations of these multi-scale phenomena, it is recommended to develop models and numerical methods that are adapted to these problems. Nowadays, the two-scale convergence theory introduced by G. Nguetseng and G. Allaire is one of the tools which can be used to rigorously derive multi-scale limits and to obtain new limit models which can be discretized with a usual numerical method: this procedure is called a two-scale numerical method. The purpose of this thesis is to develop a two-scale semi-Lagrangian method and to apply it to a gyrokinetic Vlasov-like model in order to simulate a plasma subjected to a large external magnetic field. However, the physical phenomena to be simulated are quite complex, and many questions remain open about the behaviour of a two-scale numerical method, especially when such a method is applied to a nonlinear model. In the first part, we develop a two-scale finite volume method and apply it to the weakly compressible 1D isentropic Euler equations. Even if this mathematical context is far from a Vlasov-like model, it is a relatively simple framework in which to study the behaviour of a two-scale numerical method applied to a nonlinear model. In the second part, we develop a two-scale semi-Lagrangian method for the two-scale model developed by E. Frenod, F. Salvarani and E. Sonnendrucker in order to simulate axisymmetric charged particle beams. Even if the studied physical phenomena are quite different from magnetic fusion experiments, the mathematical context of the one-dimensional paraxial Vlasov-Poisson model is very simple for establishing the basis of a two-scale semi-Lagrangian method. In the third part, we use the two-scale convergence theory in order to improve M. Bostan's weak-* convergence results about the finite
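    The two-scale convergence notion the thesis builds on is standard (Nguetseng, Allaire) and can be stated as follows; this formulation is supplied for context and is not quoted from the thesis:

```latex
% A bounded sequence u_eps in L^2(\Omega) two-scale converges to
% U in L^2(\Omega \times Y), with Y the periodicity cell, if for every
% smooth test function \varphi(x, y) that is Y-periodic in y:
\lim_{\varepsilon \to 0} \int_\Omega u_\varepsilon(x)\,
    \varphi\!\left(x, \tfrac{x}{\varepsilon}\right) dx
  = \int_\Omega \int_Y U(x, y)\, \varphi(x, y)\, dy\, dx .
```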

  12. Self-organizing neural networks for automatic detection and classification of contrast-enhancing lesions in dynamic MR-mammography; Selbstorganisierende neuronale Netze zur automatischen Detektion und Klassifikation von Kontrast(mittel)-verstaerkten Laesionen in der dynamischen MR-Mammographie

    Energy Technology Data Exchange (ETDEWEB)

    Vomweg, T.W.; Teifke, A.; Kauczor, H.U.; Achenbach, T.; Rieker, O.; Schreiber, W.G.; Heitmann, K.R.; Beier, T.; Thelen, M. [Klinik und Poliklinik fuer Radiologie, Klinikum der Univ. Mainz (Germany)


    Purpose: Investigation and statistical evaluation of 'Self-Organizing Maps', a special type of neural network in the field of artificial intelligence, for classifying contrast-enhancing lesions in dynamic MR-mammography. Material and Methods: 176 investigations with proven histology after core biopsy or operation were randomly divided into two groups. Several Self-Organizing Maps were trained on investigations from the first group to detect and classify contrast-enhancing lesions in dynamic MR-mammography. The signal/time curve of every pixel for all patients in the second group was then analyzed by the Self-Organizing Maps. The likelihood of malignancy was visualized by color overlays on the MR images. Finally, the assessment of contrast-enhancing lesions by each network was rated visually and evaluated statistically. Results: A well-balanced neural network achieved a sensitivity of 90.5% and a specificity of 72.2% in predicting malignancy of 88 enhancing lesions. Detailed analysis of false-positive results revealed that every second fibroadenoma showed a 'typical malignant' signal/time curve, with no chance to differentiate between fibroadenomas and malignant tissue on contrast enhancement alone; but this special group of lesions was represented by a well-defined area of the Self-Organizing Map. Discussion: Self-Organizing Maps are capable of classifying a dynamic signal/time curve as 'typical benign' or 'typical malignant' and can therefore be used as a second opinion. In view of the now known localization of fibroadenomas enhancing like malignant tumors on the Self-Organizing Map, these lesions could be passed to further analysis by additional post-processing elements (e.g., based on T2-weighted series or morphology analysis) in the future. (orig.)
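    The classification principle, mapping each pixel's signal/time curve onto a trained map whose regions correspond to benign or malignant enhancement, can be sketched as follows; the features (initial enhancement, late-phase slope), map size and training parameters are invented for illustration and are not the trained network of the study:

```python
# Minimal self-organizing map sketch: curves reduced to 2-D feature vectors
# are mapped onto a small 1-D SOM so that "typical malignant" (fast wash-in,
# washout) and "typical benign" (slow, persistent rise) land on different units.

def bmu(units, x):
    # best-matching unit: the map unit closest to the input vector
    return min(range(len(units)),
               key=lambda i: sum((units[i][d] - x[d]) ** 2 for d in range(len(x))))

def train_som(data, units, epochs=50, lr=0.3):
    for t in range(epochs):
        a = lr * (1 - t / epochs)                  # decaying learning rate
        for x in data:
            b = bmu(units, x)
            for i, u in enumerate(units):          # 1-D neighborhood function
                h = 1.0 if i == b else (0.3 if abs(i - b) == 1 else 0.0)
                for d in range(len(x)):
                    u[d] += a * h * (x[d] - u[d])
    return units

malignant = [[0.9, -0.4], [0.8, -0.3], [0.85, -0.5]]   # fast wash-in, washout
benign = [[0.3, 0.4], [0.2, 0.5], [0.25, 0.3]]         # slow, persistent rise
units = train_som(malignant + benign,
                  [[0.1, -0.5], [0.35, -0.2], [0.6, 0.1], [0.85, 0.4]])
b_mal, b_ben = bmu(units, [0.85, -0.4]), bmu(units, [0.25, 0.4])
# the two curve types are represented by different, well-separated map areas
```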

  13. A framework for evaluating automatic indexing or classification in the context of retrieval

    DEFF Research Database (Denmark)

    Golub, Korajlka; Soergel, Dagobert; Buchanan, George


    Tools for automatic subject assignment help deal with scale and sustainability in creating and enriching metadata, establishing more connections across and between resources and enhancing consistency. While some software vendors and experimental researchers claim the tools can replace manual...

  14. The Goddard multi-scale modeling system with unified physics

    Directory of Open Access Journals (Sweden)

    W.-K. Tao


    Recently, a multi-scale modeling system with unified physics was developed at NASA Goddard. It consists of (1) a cloud-resolving model (CRM), (2) a regional-scale model, the NASA unified Weather Research and Forecasting Model (WRF), and (3) a coupled CRM-GCM (general circulation model), known as the Goddard Multi-scale Modeling Framework or MMF. The same cloud-microphysical processes, long- and short-wave radiative transfer and land-surface processes are applied in all of the models to study explicit cloud-radiation and cloud-surface interactive processes in this multi-scale modeling system. This modeling system has been coupled with a multi-satellite simulator for comparison and validation with NASA high-resolution satellite data.

    This paper reviews the development and presents some applications of the multi-scale modeling system, including results from using the multi-scale modeling system to study the interactions between clouds, precipitation, and aerosols. In addition, use of the multi-satellite simulator to identify the strengths and weaknesses of the model-simulated precipitation processes will be discussed as well as future model developments and applications.

  15. Multiscale Mathematics for Biomass Conversion to Renewable Hydrogen

    Energy Technology Data Exchange (ETDEWEB)

    Plechac, Petr [Univ. of Tennessee, Knoxville, TN (United States). Dept. of Mathematics; Univ. of Delaware, Newark, DE (United States). Dept. of Mathematics; Vlachos, Dionisios [Univ. of Delaware, Newark, DE (United States). Dept. of Chemical and Biomolecular Engineering; Katsoulakis, Markos [Univ. of Massachusetts, Amherst, MA (United States). Dept. of Mathematics


    The overall objective of this project is to develop multiscale models for understanding and eventually designing complex processes for renewables. To the best of our knowledge, our work is the first attempt at modeling complex reacting systems, whose performance relies on underlying multiscale mathematics. Our specific application lies at the heart of biofuels initiatives of DOE and entails modeling of catalytic systems, to enable economic, environmentally benign, and efficient conversion of biomass into either hydrogen or valuable chemicals. Specific goals include: (i) Development of rigorous spatio-temporal coarse-grained kinetic Monte Carlo (KMC) mathematics and simulation for microscopic processes encountered in biomass transformation. (ii) Development of hybrid multiscale simulation that links stochastic simulation to a deterministic partial differential equation (PDE) model for an entire reactor. (iii) Development of hybrid multiscale simulation that links KMC simulation with quantum density functional theory (DFT) calculations. (iv) Development of parallelization of models of (i)-(iii) to take advantage of Petaflop computing and enable real world applications of complex, multiscale models. In this NCE period, we continued addressing these objectives and completed the proposed work. Main initiatives, key results, and activities are outlined.
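    The stochastic layer being coarse-grained, kinetic Monte Carlo, can be illustrated by a minimal Gillespie-style simulation of a single first-order reaction; the reaction, rate constant and seed are assumptions, and the project's spatio-temporal KMC is far richer:

```python
import math
import random

# Minimal Gillespie-style kinetic Monte Carlo sketch: one reaction A -> B
# with rate constant k, fired with exponentially distributed waiting times.

def kmc(a0, k, t_end, seed=1):
    rng = random.Random(seed)
    t, a, b = 0.0, a0, 0
    while a > 0:
        propensity = k * a                             # total reaction rate
        t += -math.log(1.0 - rng.random()) / propensity  # exponential waiting time
        if t > t_end:
            break
        a -= 1                                         # fire the reaction
        b += 1
    return a, b

a, b = kmc(a0=1000, k=1.0, t_end=1.0)
# on average about 1000 * exp(-1) ≈ 368 A molecules remain at t = 1
```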

  16. Microphysics in Multi-scale Modeling System with Unified Physics (United States)

    Tao, Wei-Kuo


    Recently, a multi-scale modeling system with unified physics was developed at NASA Goddard. It consists of (1) a cloud-resolving model (the Goddard Cumulus Ensemble model, GCE), (2) a regional-scale model (the NASA unified Weather Research and Forecasting model, WRF), (3) a coupled CRM and global model (the Goddard Multi-scale Modeling Framework, MMF), and (4) a land modeling system. The same microphysical processes, long- and short-wave radiative transfer and land processes are applied in all of the models to study explicit cloud-radiation and cloud-land surface interactive processes in this multi-scale modeling system. This modeling system has been coupled with a multi-satellite simulator to use NASA high-resolution satellite data to identify the strengths and weaknesses of cloud and precipitation processes simulated by the model. In this talk, a review of developments and applications of the multi-scale modeling system will be presented. In particular, the microphysics development and its performance within the multi-scale modeling system will be presented.

  17. Conformal-Based Surface Morphing and Multi-Scale Representation

    Directory of Open Access Journals (Sweden)

    Ka Chun Lam


    This paper presents two algorithms, based on conformal geometry, for the multi-scale representation of geometric shapes and for surface morphing. A multi-scale surface representation aims to describe a 3D shape at different levels of geometric detail, which allows analyzing or editing surfaces at global or local scales effectively. Surface morphing refers to the process of interpolating between two geometric shapes, which has been widely applied to estimate or analyze deformations in computer graphics, computer vision and medical imaging. In this work, we propose two geometric models for surface morphing and multi-scale representation of 3D surfaces. The basic idea is to represent a 3D surface by its mean curvature function, H, and conformal factor function, λ, which uniquely determine the geometry of the surface according to Riemann surface theory. Once we have the (λ, H) parameterization of the surface, post-processing can be done directly on the conformal parameter domain. In particular, the problem of multi-scale representation of shapes can be reduced to signal filtering on the λ and H parameters. On the other hand, the surface morphing problem can be transformed into an interpolation process between two sets of (λ, H) parameters. We test the proposed algorithms on 3D human face data and MRI-derived brain surfaces. Experimental results show that our proposed methods can effectively obtain multi-scale surface representations and give natural surface morphing results.
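    Since the multi-scale representation reduces to filtering the (λ, H) functions, the idea can be sketched on 1-D samples of H; the Gaussian kernel width and the synthetic curvature signal are illustrative assumptions:

```python
import math

# Hedged sketch: coarser-scale shape representations come from low-pass
# filtering the scalar functions (here H) that encode the surface.

def gaussian_kernel(sigma, radius):
    w = [math.exp(-0.5 * (i / sigma) ** 2) for i in range(-radius, radius + 1)]
    s = sum(w)
    return [x / s for x in w]

def smooth(samples, sigma=2.0, radius=4):
    k = gaussian_kernel(sigma, radius)
    n = len(samples)
    # convolution with clamped boundaries
    return [sum(k[j + radius] * samples[min(n - 1, max(0, i + j))]
                for j in range(-radius, radius + 1)) for i in range(n)]

def roughness(xs):
    # first-difference energy: a simple proxy for high-frequency detail
    return sum((xs[i] - xs[i - 1]) ** 2 for i in range(1, len(xs)))

H = [math.sin(0.2 * i) + 0.3 * math.sin(2.5 * i) for i in range(64)]
H_coarse = smooth(H)   # the high-frequency wiggle is filtered out
```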

  18. Multiscale Modeling in the Clinic: Drug Design and Development

    Energy Technology Data Exchange (ETDEWEB)

    Clancy, Colleen E.; An, Gary; Cannon, William R.; Liu, Yaling; May, Elebeoba E.; Ortoleva, Peter; Popel, Aleksander S.; Sluka, James P.; Su, Jing; Vicini, Paolo; Zhou, Xiaobo; Eckmann, David M.


    A wide range of length and time scales are relevant to pharmacology, especially in drug development, drug design and drug delivery. Therefore, multi-scale computational modeling and simulation methods and paradigms that advance the linkage of phenomena occurring at these multiple scales have become increasingly important. Multi-scale approaches present in silico opportunities to advance laboratory research to bedside clinical applications in pharmaceuticals research. This is achievable through the capability of modeling to reveal phenomena occurring across multiple spatial and temporal scales, which are not otherwise readily accessible to experimentation. The resultant models, when validated, are capable of making testable predictions to guide drug design and delivery. In this review we describe the goals, methods, and opportunities of multi-scale modeling in drug design and development. We demonstrate the impact of multiple scales of modeling in this field. We indicate the common mathematical techniques employed for multi-scale modeling approaches used in pharmacology and present several examples illustrating the current state-of-the-art regarding drug development for: Excitable Systems (Heart); Cancer (Metastasis and Differentiation); Cancer (Angiogenesis and Drug Targeting); Metabolic Disorders; and Inflammation and Sepsis. We conclude with a focus on barriers to successful clinical translation of drug development, drug design and drug delivery multi-scale models.

  19. Multi-scale surface-groundwater interactions: Processes and Implications (United States)

    Packman, A. I.; Harvey, J. W.; Worman, A.; Cardenas, M. B.; Schumer, R.; Jerolmack, D. J.; Tank, J. L.; Stonedahl, S. H.


    Site-based investigations of stream-subsurface interactions normally focus on a limited range of spatial scales - typically either very shallow subsurface flows in the hyporheic zone, or much larger scale surface-groundwater interactions - but subsurface flows are linked across this entire continuum. Broad, multi-scale surface-groundwater interactions produce complex patterns in porewater flows, and interfacial fluxes do not average in a simple fashion because of the competitive effects of flows induced at different scales. For example, reach-scale stream-groundwater interactions produce sequences of gaining and losing reaches that can either suppress or enhance local-scale hyporheic exchange. Many individual topographic features also produce long power-law tails in surface residence time distributions, and the duration of these tails is greatly extended by interactions over a wide range of spatial scales. Simultaneous sediment transport and landscape evolution further complicates the analysis of porewater flow dynamics in rivers. Finally, inhomogeneity in important biogeochemical processes, particularly microbial processes that are stimulated near the sediment-water interface, leads to a great degree of non-linearity in chemical transformation rates in stream channels. This high degree of complexity in fluvial systems requires that careful approaches be used to extend local observations of hyporheic exchange and associated nutrient, carbon, and contaminant transformations to larger spatial scales. It is important to recognize that conventional advection-dispersion models are not expected to apply, and instead anomalous transport models must be used. Unfortunately, no generally applicable model is available for stream-groundwater interactions at the present time. Alternative approaches for modeling conservative and reactive transport will be discussed, and a strategy articulated for coping with the complexity of coupled surface-subsurface dynamics in fluvial

  20. A Risk Assessment System with Automatic Extraction of Event Types (United States)

    Capet, Philippe; Delavallade, Thomas; Nakamura, Takuya; Sandor, Agnes; Tarsitano, Cedric; Voyatzi, Stavroula

    In this article we describe the joint effort of experts in linguistics, information extraction and risk assessment to integrate EventSpotter, an automatic event extraction engine, into ADAC, an automated early warning system. By detecting weak signals of emerging risks as early as possible, ADAC provides a dynamic synthetic picture of situations involving risk. The ADAC system calculates risk on the basis of fuzzy logic rules operated on a template graph whose leaves are event types. EventSpotter is based on a general-purpose natural language dependency parser, XIP, enhanced with domain-specific lexical resources (Lexicon-Grammar). Its role is to automatically feed the leaves with input data.
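    The risk calculation described, fuzzy logic rules over a template graph whose leaves are event types, might be sketched like this; the graph shape, event names, truth degrees and min/max operators are illustrative assumptions:

```python
# Hedged sketch: fuzzy truth degrees in [0, 1] at the leaves (event types)
# are combined up a template graph with fuzzy AND (min) and OR (max).

def evaluate(node, leaves):
    if isinstance(node, str):                  # a leaf event type
        return leaves[node]
    op, children = node
    vals = [evaluate(c, leaves) for c in children]
    return min(vals) if op == "AND" else max(vals)

# risk if (troop_movement AND border_incident) OR embargo
graph = ("OR", [("AND", ["troop_movement", "border_incident"]), "embargo"])
risk = evaluate(graph, {"troop_movement": 0.8,
                        "border_incident": 0.6,
                        "embargo": 0.2})
print(risk)  # → 0.6
```

    In the described architecture, EventSpotter's job would correspond to filling the leaf dictionary from text, while ADAC evaluates the graph.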

  1. Unusual multiscale mechanics of biomimetic nanoparticle hydrogels. (United States)

    Zhou, Yunlong; Damasceno, Pablo F; Somashekar, Bagganahalli S; Engel, Michael; Tian, Falin; Zhu, Jian; Huang, Rui; Johnson, Kyle; McIntyre, Carl; Sun, Kai; Yang, Ming; Green, Peter F; Ramamoorthy, Ayyalusamy; Glotzer, Sharon C; Kotov, Nicholas A


    Viscoelastic properties are central for gels and other materials. Simultaneously high storage and loss moduli are difficult to attain due to their contrarian requirements on chemical structure. Biomimetic inorganic nanoparticles offer a promising toolbox for multiscale engineering of gel mechanics, but a conceptual framework for their molecular, nanoscale, mesoscale, and microscale engineering as viscoelastic materials is absent. Here we show nanoparticle gels with simultaneously high storage and loss moduli made from CdTe nanoparticles. The viscoelastic figure of merit reaches 1.83 MPa for glutathione (GSH)-stabilized nanoparticles, exceeding that of comparable gels by 100-1000 times. The gels made from the smallest nanoparticles display the highest stiffness, which was attributed to the drastic change of GSH configurations as nanoparticles decrease in size. A computational model accounting for the difference in nanoparticle interactions for variable GSH configurations describes the unusual trends of nanoparticle gel viscoelasticity. These observations are generalizable to other nanoparticle gels interconnected by supramolecular interactions and lead to materials with high load-bearing abilities and the energy dissipation needed for multiple technologies.

  2. Multiscale physics of rubber-ice friction (United States)

    Tuononen, Ari J.; Kriston, András; Persson, Bo


    Ice friction plays an important role in many engineering applications, e.g., tires on icy roads, icebreaker ship motion, or winter sports equipment. Although numerous experiments have already been performed to understand the effect of various conditions on ice friction, revealing the fundamental frictional mechanisms remains a challenging task. This study uses in situ white light interferometry to analyze ice surface topography during linear friction testing with a rubber slider. The method helps to provide an understanding of the link between changes in the surface topography and the friction coefficient through direct visualization and quantitative measurement of the morphologies of the ice surface at different length scales. Besides surface polishing and scratching, it was found that ice melts locally even after one sweep, as evidenced by refrozen droplets. A multi-scale rubber friction theory was also applied to study the contribution of viscoelasticity to the total friction coefficient, which proved significant depending on the smoothness of the ice; furthermore, the theory also confirmed the possibility of local ice melting.

  3. Fast Plasma Investigation for Magnetospheric Multiscale (United States)

    Pollock, C.; Moore, T.; Coffey, V.; Dorelli, J.; Giles, B.; Adrian, M.; Chandler, M.; Duncan, C.; Figueroa-Vinas, A.; Garcia, K.


    The Fast Plasma Investigation (FPI) was developed for flight on the Magnetospheric Multiscale (MMS) mission to measure the differential directional flux of magnetospheric electrons and ions with unprecedented time resolution to resolve kinetic-scale plasma dynamics. This increased resolution has been accomplished by placing four dual 180-degree top hat spectrometers for electrons and four dual 180-degree top hat spectrometers for ions around the periphery of each of four MMS spacecraft. Using electrostatic field-of-view deflection, the eight spectrometers for each species together provide 4pi-sr field-of-view with, at worst, 11.25-degree sample spacing. Energy/charge sampling is provided by swept electrostatic energy/charge selection over the range from 10 eV/q to 30000 eV/q. The eight dual spectrometers on each spacecraft are controlled and interrogated by a single block redundant Instrument Data Processing Unit, which in turn interfaces to the observatory's Instrument Suite Central Instrument Data Processor. This paper describes the design of FPI, its ground and in-flight calibration, its operational concept, and its data products.

  4. Multiscale Concrete Modeling of Aging Degradation

    Energy Technology Data Exchange (ETDEWEB)

    Hammi, Yousseff [Mississippi State Univ., Mississippi State, MS (United States); Gullett, Philipp [Mississippi State Univ., Mississippi State, MS (United States); Horstemeyer, Mark F. [Mississippi State Univ., Mississippi State, MS (United States)


    In this work a numerical finite element framework is implemented to enable the integration of coupled multiscale and multiphysics transport processes. A User Element subroutine (UEL) in Abaqus is used to simultaneously solve stress equilibrium, heat conduction, and multiple diffusion equations for 2D and 3D linear and quadratic elements. Transport processes in concrete structures and their degradation mechanisms are presented along with the discretization of the governing equations. The multiphysics modeling framework is theoretically extended to linear elastic fracture mechanics (LEFM) by introducing the eXtended Finite Element Method (XFEM), building on the XFEM user element implementation of Giner et al. [2009]. A damage model that takes into account the damage contributions from the different degradation mechanisms is theoretically developed. The total damage contribution is forwarded to a Multi-Stage Fatigue (MSF) model to enable the assessment of the fatigue life and the deterioration of reinforced concrete structures in a nuclear power plant. Finally, two examples are presented to illustrate the developed multiphysics user element implementation and the XFEM implementation of Giner et al. [2009].

  5. Fast Plasma Investigation for Magnetospheric Multiscale (United States)

    Pollock, C.; Moore, T.; Jacques, A.; Burch, J.; Gliese, U.; Saito, Y.; Omoto, T.; Avanov, L.; Barrie, A.; Coffey, V.; Dorelli, J.; Gershman, D.; Giles, B.; Rosnack, T.; Salo, C.; Yokota, S.; Adrian, M.; Aoustin, C.; Auletti, C.; Aung, S.; Bigio, V.; Cao, N.; Chandler, M.; Chornay, D.; Christian, K.; Clark, G.; Collinson, G.; Corris, T.; De Los Santos, A.; Devlin, R.; Diaz, T.; Dickerson, T.; Dickson, C.; Diekmann, A.; Diggs, F.; Duncan, C.; Figueroa-Vinas, A.; Firman, C.; Freeman, M.; Galassi, N.; Garcia, K.; Goodhart, G.; Guererro, D.; Hageman, J.; Hanley, J.; Hemminger, E.; Holland, M.; Hutchins, M.; James, T.; Jones, W.; Kreisler, S.; Kujawski, J.; Lavu, V.; Lobell, J.; LeCompte, E.; Lukemire, A.; MacDonald, E.; Mariano, A.; Mukai, T.; Narayanan, K.; Nguyan, Q.; Onizuka, M.; Paterson, W.; Persyn, S.; Piepgrass, B.; Cheney, F.; Rager, A.; Raghuram, T.; Ramil, A.; Reichenthal, L.; Rodriguez, H.; Rouzaud, J.; Rucker, A.; Saito, Y.; Samara, M.; Sauvaud, J.-A.; Schuster, D.; Shappirio, M.; Shelton, K.; Sher, D.; Smith, D.; Smith, K.; Smith, S.; Steinfeld, D.; Szymkiewicz, R.; Tanimoto, K.; Taylor, J.; Tucker, C.; Tull, K.; Uhl, A.; Vloet, J.; Walpole, P.; Weidner, S.; White, D.; Winkert, G.; Yeh, P.-S.; Zeuch, M.


    The Fast Plasma Investigation (FPI) was developed for flight on the Magnetospheric Multiscale (MMS) mission to measure the differential directional flux of magnetospheric electrons and ions with unprecedented time resolution to resolve kinetic-scale plasma dynamics. This increased resolution has been accomplished by placing four dual 180-degree top hat spectrometers for electrons and four dual 180-degree top hat spectrometers for ions around the periphery of each of four MMS spacecraft. Using electrostatic field-of-view deflection, the eight spectrometers for each species together provide 4pi-sr field-of-view with, at worst, 11.25-degree sample spacing. Energy/charge sampling is provided by swept electrostatic energy/charge selection over the range from 10 eV/q to 30000 eV/q. The eight dual spectrometers on each spacecraft are controlled and interrogated by a single block redundant Instrument Data Processing Unit, which in turn interfaces to the observatory's Instrument Suite Central Instrument Data Processor. This paper describes the design of FPI, its ground and in-flight calibration, its operational concept, and its data products.

  6. Multiscale Modeling of UHTC: Thermal Conductivity (United States)

    Lawson, John W.; Murry, Daw; Squire, Thomas; Bauschlicher, Charles W.


    We are developing a multiscale framework in computational modeling for the ultra high temperature ceramics (UHTC) ZrB2 and HfB2. These materials are characterized by high melting points, good strength, and reasonable oxidation resistance. They are candidate materials for a number of applications in extreme environments, including sharp leading edges of hypersonic aircraft. In particular, we used a combination of ab initio methods, atomistic simulations and continuum computations to obtain insights into fundamental properties of these materials. Ab initio methods were used to compute basic structural, mechanical and thermal properties. From these results, a database was constructed to fit a Tersoff-style interatomic potential suitable for atomistic simulations. These potentials were used to evaluate the lattice thermal conductivity of single crystals and the thermal resistance of simple grain boundaries. Finite element method (FEM) computations using atomistic results as inputs were performed with meshes constructed on SEM images, thereby modeling the realistic microstructure. These continuum computations showed the reduction in thermal conductivity due to the grain boundary network.
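
    The effect of grain boundaries on conductivity can be illustrated with the standard series-resistance estimate (a back-of-the-envelope relation, not the paper's FEM model; all numerical values below are illustrative assumptions, not results from the abstract):

```python
# Series-resistance estimate of grain-boundary-limited conductivity:
# grains of size d with bulk conductivity k_bulk, separated by boundaries
# of interfacial (Kapitza) resistance R_gb, give
#     1/k_eff = 1/k_bulk + R_gb/d   (one boundary per grain, in series).
# Values are illustrative only.

def effective_conductivity(k_bulk, r_gb, d):
    """Effective conductivity of a polycrystal, W/(m K)."""
    return 1.0 / (1.0 / k_bulk + r_gb / d)

k_bulk = 100.0   # W/(m K), assumed single-crystal value
r_gb = 1e-8      # m^2 K / W, assumed boundary resistance
for d in (1e-7, 1e-6, 1e-5):   # grain sizes: 100 nm, 1 um, 10 um
    print(f"d = {d:.0e} m -> k_eff = "
          f"{effective_conductivity(k_bulk, r_gb, d):.1f} W/(m K)")
```

    Smaller grains mean more boundaries per unit length, so the effective conductivity drops sharply below the bulk value, which is the qualitative trend the continuum computations capture.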

  7. Magnetospheric MultiScale (MMS) System Manager (United States)

    Schiff, Conrad; Maher, Francis Alfred; Henely, Sean Philip; Rand, David


    The Magnetospheric MultiScale (MMS) mission is an ambitious NASA space science mission in which 4 spacecraft are flown in tight formation about a highly elliptical orbit. Each spacecraft has multiple instruments that measure particle and field compositions in the Earth's magnetosphere. By controlling the members' relative motion, MMS can distinguish temporal and spatial fluctuations in a way that a single spacecraft cannot. To achieve this control, 2 sets of four maneuvers, distributed evenly across the spacecraft, must be performed approximately every 14 days. Performing a single maneuver on an individual spacecraft is usually labor intensive, and the complexity clearly increases with four. As a result, the MMS flight dynamics team turned to the System Manager to put routine or error-prone activities under machine control, freeing the analysts for activities that require human judgment. The System Manager is an expert system that is capable of handling operations activities associated with performing MMS maneuvers. As an expert system, it can work off a known schedule, launching jobs based on a one-time occurrence or on a set recurring schedule. It is also able to detect situational changes and use event-driven programming to change schedules, adapt activities, or call for help.

  8. A multiscale modeling approach for biomolecular systems

    Energy Technology Data Exchange (ETDEWEB)

    Bowling, Alan; Haghshenas-Jaryani, Mahdi [The University of Texas at Arlington, Department of Mechanical and Aerospace Engineering (United States)]


    This paper presents a new multiscale molecular dynamic model for investigating the effects of external interactions, such as contact and impact, during stepping and docking of motor proteins and other biomolecular systems. The model retains the mass properties, ensuring that the result satisfies Newton’s second law. This idea is presented using a simple particle model to facilitate discussion of the rigid body model; however, the particle model does provide insights into particle dynamics at the nanoscale. The resulting three-dimensional model predicts a significant decrease in the effect of the random forces associated with Brownian motion. This conclusion runs contrary to the widely accepted notion that the motor protein’s movements are primarily the result of thermal effects. This work focuses on the mechanical aspects of protein locomotion; the effect of ATP hydrolysis is estimated as internal forces acting on the mechanical model. In addition, the proposed model can be numerically integrated in a reasonable amount of time. Herein, the differences between the motion predicted by the old and new modeling approaches are compared using a simplified model of myosin V.

  9. Multiscale modeling of integrated CCS systems (United States)

    Alhajaj, Ahmed; Shah, Nilay


    The world will continue consuming fossil fuel within the coming decades to meet its growing energy demand; however, this source must be made cleaner through implementation of carbon capture, transport and storage (CCTS). This process is complex and involves multiple phases, owned by different operational companies and stakeholders with different business models and regulatory frameworks. The objective of this work is to develop a multiscale modeling approach that links process models, a post-combustion capture plant model and network design models under an optimization framework in order to design and analyse the cost-optimal CO2 infrastructure that matches CO2 sources and sinks in capacity and time. The network comprises a number of CO2 sources at fixed locations and a number of potential CO2 storage sites. The decisions to be determined include which sources to capture CO2 from, the cost-optimal degree-of-capture (DOC) for each source, and the infrastructural layout of the CO2 transmission network.
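
    The source-selection and degree-of-capture decision can be caricatured in a few lines. This is a toy greedy sketch under assumed data (the paper couples a full optimization framework with plant and network models; plant names, emission rates, and unit costs below are hypothetical):

```python
# Toy degree-of-capture (DOC) allocation: greedily raise capture at the
# cheapest sources until a total capture target is met.  A stand-in for
# the paper's optimization model; all data are hypothetical.

def allocate_capture(sources, target):
    """sources: list of (name, emissions, unit_cost); returns {name: DOC}."""
    doc = {name: 0.0 for name, _, _ in sources}
    remaining = target
    for name, emissions, _ in sorted(sources, key=lambda s: s[2]):
        captured = min(emissions, remaining)
        doc[name] = captured / emissions
        remaining -= captured
        if remaining <= 0:
            break
    return doc

sources = [("plant_A", 5.0, 30.0),   # (name, Mt CO2/yr, $/t) -- assumed
           ("plant_B", 3.0, 20.0),
           ("plant_C", 4.0, 50.0)]
print(allocate_capture(sources, target=6.0))
```

    The real problem is far richer (pipeline layout, storage capacity, timing), but the sketch shows why DOC is decided per source: cheap sources saturate first while expensive ones may not be captured at all.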

  10. An Improved Algorithm Based on Minimum Spanning Tree for Multi-scale Segmentation of Remote Sensing Imagery

    Directory of Open Access Journals (Sweden)

    LI Hui


    As the basis of object-oriented information extraction from remote sensing imagery, image segmentation using multiple image features, exploiting spatial context information, and following a multi-scale approach is a current research focus. Using an optimization approach from graph theory, an improved multi-scale image segmentation method is proposed. In this method, the image is first processed with a coherence-enhancing anisotropic diffusion filter, followed by a minimum spanning tree segmentation approach, and the resulting segments are merged with reference to a minimum heterogeneity criterion. The heterogeneity criterion is defined as a function of the spectral characteristics and shape parameters of segments. The purpose of the merging step is to realize the multi-scale image segmentation. Tested on two images, the proposed method was visually and quantitatively compared with the segmentation method employed in the eCognition software. The results show that the proposed method is effective and outperforms the latter on areas with subtle spectral differences.
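
    The minimum-spanning-tree step can be sketched compactly. Below is a minimal Kruskal-style illustration with a plain intensity-difference threshold standing in for the paper's spectral/shape heterogeneity criterion (an assumption for brevity, not the authors' actual merge rule):

```python
# Minimal MST-based segmentation sketch: pixels are graph nodes, 4-neighbour
# edges are weighted by intensity difference, and regions merge in Kruskal
# (ascending-weight) order while the edge weight stays under a threshold.

def mst_segment(image, threshold):
    """image: 2D list of intensities; returns a 2D list of region labels."""
    h, w = len(image), len(image[0])
    parent = list(range(h * w))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]      # path halving
            i = parent[i]
        return i

    # Build 4-neighbour edges weighted by intensity difference.
    edges = []
    for y in range(h):
        for x in range(w):
            if x + 1 < w:
                edges.append((abs(image[y][x] - image[y][x+1]), y*w+x, y*w+x+1))
            if y + 1 < h:
                edges.append((abs(image[y][x] - image[y+1][x]), y*w+x, (y+1)*w+x))
    # Process edges in MST order; merge homogeneous regions only.
    for wgt, a, b in sorted(edges):
        ra, rb = find(a), find(b)
        if ra != rb and wgt <= threshold:
            parent[rb] = ra
    return [[find(y*w+x) for x in range(w)] for y in range(h)]

img = [[10, 11, 50, 52],
       [10, 12, 51, 53]]
labels = mst_segment(img, threshold=5)
# the dark left block and bright right block come out as two regions
```

    Raising the threshold merges more aggressively, which is exactly the knob a multi-scale segmentation turns to move from fine to coarse segmentations.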

  11. A multi-scale framework to link remotely sensed metrics with socioeconomic data (United States)

    Watmough, Gary; Svenning, Jens-Christian; Palm, Cheryl; Sullivan, Clare; Danylo, Olha; McCallum, Ian


    There is increasing interest in the use of remotely sensed satellite data for estimating human poverty as it can bridge data gaps that prevent fine scale monitoring of development goals across large areas. The ways in which metrics derived from satellite imagery are linked with socioeconomic data are crucial for accurate estimation of poverty. Yet, to date, approaches in the literature linking satellite metrics with socioeconomic data are poorly characterized. Typically, studies use a GIS construct such as circular buffer zones around a village or household, or an administrative boundary such as a district or census enumeration area. These polygons are then used to extract environmental data from satellite imagery, which are related to the socioeconomic data in statistical analyses. The use of a single polygon to link environmental and socioeconomic data is inappropriate in coupled human-natural systems as processes operate over multiple scales. Human interactions with the environment occur at multiple levels, from individual (household) access to agricultural plots adjacent to homes, to communal access to common pool resources (CPR) such as forests at the village level. Here, we present a multi-scale framework that explicitly considers how people use the landscape. The framework is presented along with a case study example in Kenya. The multi-scale approach could enhance the modelling of human-environment interactions, which will have important consequences for monitoring the sustainable development goals for human livelihoods and biodiversity conservation.

  12. A multiscale method for a robust detection of the default mode network (United States)

    Baquero, Katherine; Gómez, Francisco; Cifuentes, Christian; Guldenmund, Pieter; Demertzi, Athena; Vanhaudenhuyse, Audrey; Gosseries, Olivia; Tshibanda, Jean-Flory; Noirhomme, Quentin; Laureys, Steven; Soddu, Andrea; Romero, Eduardo


    The Default Mode Network (DMN) is a resting state network widely used for the analysis and diagnosis of mental disorders. It is normally detected in fMRI data, but for its detection in data corrupted by motion artefacts or low neuronal activity, the use of a robust analysis method is mandatory. In fMRI it has been shown that the signal-to-noise ratio (SNR) and the detection sensitivity of neuronal regions increase with different smoothing kernel sizes. Here we propose to use a multiscale decomposition based on a linear scale-space representation for the detection of the DMN. Three main points are proposed in this methodology: first, the use of fMRI data at different smoothing scale-spaces; second, detection of independent neuronal components of the DMN at each scale by using standard preprocessing methods and ICA decomposition at scale level; and finally, a weighted contribution of each scale by the Goodness of Fit measurement. This method was applied to a group of control subjects and was compared with a standard preprocessing baseline. The detection of the DMN was improved at single subject level and at group level. Based on these results, we suggest using this methodology to enhance the detection of the DMN in data perturbed with artefacts or applied to subjects with low neuronal activity. Furthermore, the multiscale method could be extended for the detection of other resting state neuronal networks.

  13. Friction and adhesion of hierarchical carbon nanotube structures for biomimetic dry adhesives: multiscale modeling. (United States)

    Hu, Shihao; Jiang, Haodan; Xia, Zhenhai; Gao, Xiaosheng


    With unique hierarchical fibrillar structures on their feet, gecko lizards can walk on vertical walls or even ceilings. Recent experiments have shown that strong binding along the shear direction and easy lifting in the normal direction can be achieved by forming unidirectional carbon nanotube array with laterally distributed tips similar to gecko's feet. In this study, a multiscale modeling approach was developed to analyze friction and adhesion behaviors of this hierarchical fibrillar system. Vertically aligned carbon nanotube array with laterally distributed segments at the end was simulated by coarse grained molecular dynamics. The effects of the laterally distributed segments on friction and adhesion strengths were analyzed, and further adopted as cohesive laws used in finite element analysis at device scale. The results show that the laterally distributed segments play an essential role in achieving high force anisotropy between normal and shear directions in the adhesives. Finite element analysis reveals a new friction-enhanced adhesion mechanism of the carbon nanotube array, which also exists in gecko adhesive system. The multiscale modeling provides an approach to bridge the microlevel structures of the carbon nanotube array with its macrolevel adhesive behaviors, and the predictions from this modeling give an insight into the mechanisms of gecko-mimicking dry adhesives.

  14. Generalized multiscale finite element method. Symmetric interior penalty coupling

    KAUST Repository

    Efendiev, Yalchin R.


    Motivated by applications to numerical simulations of flows in highly heterogeneous porous media, we develop multiscale finite element methods for second order elliptic equations. We discuss a multiscale model reduction technique in the framework of the discontinuous Galerkin finite element method. We propose two different finite element spaces on the coarse mesh. The first space is based on a local eigenvalue problem that uses an interior weighted L2-norm and a boundary weighted L2-norm for computing the "mass" matrix. The second choice is based on generation of a snapshot space and subsequent selection of a subspace of a reduced dimension. The approximation with these multiscale spaces is based on the discontinuous Galerkin finite element method framework. We investigate the stability and derive error estimates for the methods and further experimentally study their performance on a representative number of numerical examples. © 2013 Elsevier Inc.

  15. A Micromechanics-Based Method for Multiscale Fatigue Prediction (United States)

    Moore, John Allan

    An estimated 80% of all structural failures are due to mechanical fatigue, often resulting in catastrophic, dangerous and costly failure events. However, an accurate model to predict fatigue remains an elusive goal. One of the major challenges is that fatigue is intrinsically a multiscale process, which is dependent on a structure's geometric design as well as its material's microscale morphology. The following work begins with a microscale study of fatigue nucleation around non-metallic inclusions. Based on this analysis, a novel multiscale method for fatigue predictions is developed. This method simulates macroscale geometries explicitly while concurrently calculating the simplified response of microscale inclusions, thus providing adequate detail on multiple scales for accurate fatigue life predictions. The methods herein provide insight into the multiscale nature of fatigue, while also developing a tool to aid in geometric design and material optimization for fatigue critical devices such as biomedical stents and artificial heart valves.

  16. Algorithmic foundation of multi-scale spatial representation

    CERN Document Server

    Li, Zhilin


    With the widespread use of GIS, multi-scale representation has become an important issue in the realm of spatial data handling. However, no book to date has systematically tackled the different aspects of this discipline. Emphasizing map generalization, Algorithmic Foundation of Multi-Scale Spatial Representation addresses the mathematical basis of multi-scale representation, specifically, the algorithmic foundation. Using easy-to-understand language, the author focuses on geometric transformations, with each chapter surveying a particular spatial feature. After an introduction to the essential operations required for geometric transformations as well as some mathematical and theoretical background, the book describes algorithms for a class of point features/clusters. It then examines algorithms for individual line features, such as the reduction of data points, smoothing (filtering), and scale-driven generalization, followed by a discussion of algorithms for a class of line features including contours, hydrog...

  17. Automatic sample changers maintenance manual

    International Nuclear Information System (INIS)

    Myers, T.A.


    This manual describes and provides trouble-shooting aids for the Automatic Sample Changer electronics on the automatic beta counting system, developed by the Los Alamos Scientific Laboratory Group CNC-11. The output of a gas detector is shaped by a preamplifier, then is coupled to an amplifier. Amplifier output is discriminated and is the input to a scaler. An identification number is associated with each sample. At a predetermined count length, the identification number, scaler data and other information are punched out on a data card. The next sample to be counted is automatically selected. The beta counter uses the same electronics as the prior count did, the only difference being the sample identification number and the sample itself. This manual is intended as a step-by-step aid in trouble-shooting the electronics associated with positioning the sample, counting the sample, and getting the needed data punched on an 80-column data card.

  18. Multiscale Finite Element Methods for Flows on Rough Surfaces

    KAUST Repository

    Efendiev, Yalchin


    In this paper, we present the Multiscale Finite Element Method (MsFEM) for problems on rough heterogeneous surfaces. We consider the diffusion equation on oscillatory surfaces. Our objective is to represent small-scale features of the solution via multiscale basis functions described on a coarse grid. This problem arises in many applications where processes occur on surfaces or thin layers. We present a unified multiscale finite element framework that entails the use of transformations that map the reference surface to the deformed surface. The main ingredients of MsFEM are (1) the construction of multiscale basis functions and (2) a global coupling of these basis functions. For the construction of multiscale basis functions, our approach uses the transformation of the reference surface to a deformed surface. On the deformed surface, multiscale basis functions are defined where reduced (1D) problems are solved along the edges of coarse-grid blocks to calculate nodal multiscale basis functions. Furthermore, these basis functions are transformed back to the reference configuration. We discuss the use of appropriate transformation operators that improve the accuracy of the method. The method has an optimal convergence if the transformed surface is smooth and the image of the coarse partition in the reference configuration forms a quasiuniform partition. In this paper, we consider such transformations based on harmonic coordinates (following H. Owhadi and L. Zhang [Comm. Pure and Applied Math., LX(2007), pp. 675-723]) and discuss gridding issues in the reference configuration. Numerical results are presented where we compare the MsFEM when two types of deformations are used for multiscale basis construction. The first deformation employs local information and the second employs global information. Our numerical results show that one can improve the accuracy of the simulations when global information is used. © 2013 Global-Science Press.

  19. Generalized multiscale finite element methods (GMsFEM)

    KAUST Repository

    Efendiev, Yalchin R.


    In this paper, we propose a general approach called Generalized Multiscale Finite Element Method (GMsFEM) for performing multiscale simulations for problems without scale separation over a complex input space. As in multiscale finite element methods (MsFEMs), the main idea of the proposed approach is to construct a small dimensional local solution space that can be used to generate an efficient and accurate approximation to the multiscale solution with a potentially high dimensional input parameter space. In the proposed approach, we present a general procedure to construct the offline space that is used for a systematic enrichment of the coarse solution space in the online stage. The enrichment in the online stage is performed based on a spectral decomposition of the offline space. In the online stage, for any input parameter, a multiscale space is constructed to solve the global problem on a coarse grid. The online space is constructed via a spectral decomposition of the offline space and by choosing the eigenvectors corresponding to the largest eigenvalues. The computational saving is due to the fact that the construction of the online multiscale space for any input parameter is fast and this space can be re-used for solving the forward problem with any forcing and boundary condition. Compared with the other approaches where global snapshots are used, the local approach that we present in this paper allows us to eliminate unnecessary degrees of freedom on a coarse-grid level. We present various examples in the paper and some numerical results to demonstrate the effectiveness of our method. © 2013 Elsevier Inc.
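
    In schematic form (the notation below is generic GMsFEM bookkeeping assumed for illustration, not copied from the paper), the offline stage solves a local spectral problem on each coarse region over the snapshot space, and the online space keeps the dominant modes:

```latex
a_i(\phi_k, v) = \lambda_k \, s_i(\phi_k, v)
  \quad \forall\, v \in V^{i}_{\mathrm{snap}},
\qquad
V^{i}_{\mathrm{on}} = \operatorname{span}\{\phi_1, \dots, \phi_M\},
  \quad \lambda_1 \ge \lambda_2 \ge \cdots \ge \lambda_M .
```

    Here $a_i$ and $s_i$ stand for the local stiffness- and mass-type bilinear forms on coarse region $\omega_i$; keeping the eigenvectors associated with the largest eigenvalues matches the selection rule described in the abstract, and the resulting online space is re-used across forcings and boundary conditions.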

  20. Rough Set Approach to Incomplete Multiscale Information System (United States)

    Yang, Xibei; Qi, Yong; Yu, Dongjun; Yu, Hualong; Song, Xiaoning; Yang, Jingyu


    Multiscale information system is a new knowledge representation system for expressing knowledge with different levels of granulations. In this paper, by considering the unknown values that can be seen everywhere in real-world applications, the incomplete multiscale information system is first investigated. The descriptor technique is employed to construct rough sets at different scales for analyzing the hierarchically structured data. The problem of unravelling decision rules at different scales is also addressed. Finally, the reduct descriptors are formulated to simplify decision rules, which can be derived from different scales. Some numerical examples are employed to substantiate the conceptual arguments. PMID:25276852
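
    The core rough-set machinery for unknown values can be sketched in a few lines. This is a generic tolerance-relation illustration for incomplete information systems (the paper's per-scale descriptors and reducts are not reproduced; the data are made up):

```python
# Rough approximation in an incomplete information table: "*" marks an
# unknown value.  Two objects are tolerant if they agree wherever neither
# value is unknown; lower/upper approximations follow from the tolerance
# classes.  Toy data, not from the paper.

def tolerant(x, y):
    """Objects agree on every attribute where neither value is unknown."""
    return all(a == b or a == "*" or b == "*" for a, b in zip(x, y))

def approximations(objects, target):
    """target: set of object indices; returns (lower, upper) index sets."""
    classes = {i: {j for j, y in enumerate(objects) if tolerant(x, y)}
               for i, x in enumerate(objects)}
    lower = {i for i, cls in classes.items() if cls <= target}
    upper = {i for i, cls in classes.items() if cls & target}
    return lower, upper

objs = [("high", "yes"),
        ("high", "*"),     # unknown second attribute
        ("*",    "no")]    # unknown first attribute
lower, upper = approximations(objs, target={0, 1})
print(lower, upper)  # lower is a subset of the target, upper a superset
```

    Objects in the lower approximation certainly belong to the target concept despite missing values; the boundary between lower and upper is exactly the uncertainty the unknown entries introduce.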

  1. Multiscale Shannon entropy and its application in the stock market (United States)

    Gu, Rongbao


    In this paper, we perform a multiscale entropy analysis on the Dow Jones Industrial Average Index using the Shannon entropy. The stock index shows multi-scale entropy characteristics caused by noise in the market. The entropy is demonstrated to have significant predictive ability for the stock index in both the long term and the short term, and empirical results verify that noise does exist in the market and can affect stock prices. This has important implications for market participants such as noise traders.
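
    The basic computation is easy to sketch: coarse-grain the series at each scale, then take the Shannon entropy of the value distribution. The bin count, scales, and synthetic data below are illustrative assumptions, not the paper's settings:

```python
# Multiscale Shannon entropy sketch: at scale s, replace the series by
# non-overlapping averages of s consecutive points, discretise into bins,
# and compute the Shannon entropy of the bin frequencies.
import math
import random

def shannon_entropy(xs, bins=10):
    lo, hi = min(xs), max(xs)
    width = (hi - lo) / bins or 1.0          # guard against constant input
    counts = [0] * bins
    for x in xs:
        counts[min(int((x - lo) / width), bins - 1)] += 1
    n = len(xs)
    return -sum(c / n * math.log(c / n) for c in counts if c)

def multiscale_entropy(series, scales=(1, 2, 4)):
    out = {}
    for s in scales:
        coarse = [sum(series[i:i + s]) / s
                  for i in range(0, len(series) - s + 1, s)]
        out[s] = shannon_entropy(coarse)
    return out

# A pure-noise "return" series keeps substantial entropy at every scale.
random.seed(0)
noise = [random.gauss(0, 1) for _ in range(1024)]
print(multiscale_entropy(noise))
```

    Applied to index returns, comparing the entropy profile across scales against a shuffled surrogate is one common way to separate genuine structure from market noise.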

  2. Multiscale analysis and nonlinear dynamics from genes to the brain

    CERN Document Server

    Schuster, Heinz Georg


    Since modeling multiscale phenomena in systems biology and neuroscience is a highly interdisciplinary task, the editor of the book invited experts in bio-engineering, chemistry, cardiology, neuroscience, computer science, and applied mathematics, to provide their perspectives. Each chapter is a window into the current state of the art in the areas of research discussed and the book is intended for advanced researchers interested in recent developments in these fields. While multiscale analysis is the major integrating theme of the book, its subtitle does not call for bridging the scales from g

  3. Multiscale Mathematics for Biomass Conversion to Renewable Hydrogen

    Energy Technology Data Exchange (ETDEWEB)

    Plechac, Petr [Univ. of Delaware, Newark, DE (United States). Dept. of Mathematical Sciences


    The overall objective of this project was to develop multiscale models for understanding and eventually designing complex processes for renewables. To the best of our knowledge, our work is the first attempt at modeling complex reacting systems, whose performance relies on underlying multiscale mathematics and developing rigorous mathematical techniques and computational algorithms to study such models. Our specific application lies at the heart of biofuels initiatives of DOE and entails modeling of catalytic systems, to enable economic, environmentally benign, and efficient conversion of biomass into either hydrogen or valuable chemicals.

  4. Modeling Temporal Evolution and Multiscale Structure in Networks

    DEFF Research Database (Denmark)

    Herlau, Tue; Mørup, Morten; Schmidt, Mikkel Nørgaard


    Many real-world networks exhibit both temporal evolution and multiscale structure. We propose a model for temporally correlated multifurcating hierarchies in complex networks which jointly captures both effects. We use the Gibbs fragmentation tree as a prior over multifurcating trees and a change-point model to account for the temporal evolution of each vertex. We demonstrate that our model is able to infer time-varying multiscale structure in synthetic as well as three real-world time-evolving complex networks. Our modeling of the temporal evolution of hierarchies brings new insights

  5. Multiscale integration schemes for jump-diffusion systems

    Energy Technology Data Exchange (ETDEWEB)

    Givon, D.; Kevrekidis, I.G.


    We study a two-time-scale system of jump-diffusion stochastic differential equations. We analyze a class of multiscale integration methods for these systems, which, in the spirit of [1], consist of a hybridization between a standard solver for the slow components and short runs for the fast dynamics, which are used to estimate the effect that the fast components have on the slow ones. We obtain explicit bounds for the discrepancy between the results of the multiscale integration method and the slow components of the original system.
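
    The idea behind such schemes can be shown with a deterministic caricature (the paper's setting is jump-diffusion SDEs; the noise is dropped here so the sketch is checkable): short bursts of a fast micro-solver estimate the fast variable, whose value then drives one large step of the slow solver.

```python
# Slow/fast toy system:  slow  dx/dt = -y,   fast  dy/dt = (x - y)/eps.
# The fast variable relaxes quickly to y = x, so the effective slow
# dynamics is dx/dt = -x.  Each macro step runs a short micro burst with
# x frozen, then advances x using the resulting estimate of y.
import math

def multiscale_integrate(x0, n_slow, dt_slow, eps=1e-3, n_fast=200):
    x, y = x0, 0.0
    dt_fast = eps / 10
    for _ in range(n_slow):
        for _ in range(n_fast):          # micro burst, x frozen
            y += dt_fast * (x - y) / eps
        x += dt_slow * (-y)              # macro step uses the estimate
    return x

x = multiscale_integrate(1.0, n_slow=100, dt_slow=0.01)
# effective dynamics dx/dt = -x, so x(1) should be close to exp(-1)
print(x, math.exp(-1))
```

    The saving is that the macro step size is set by the slow dynamics, while the expensive fast solver runs only in short bursts, which is the hybridization the abstract describes.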

  6. Multiscale modeling of complex materials phenomenological, theoretical and computational aspects

    CERN Document Server

    Trovalusci, Patrizia


    The papers in this volume deal with materials science, theoretical mechanics and experimental and computational techniques at multiple scales, providing a sound base and a framework for many applications which are hitherto treated in a phenomenological sense. The basic principles are formulated of multiscale modeling strategies towards modern complex multiphase materials subjected to various types of mechanical, thermal loadings and environmental effects. The focus is on problems where mechanics is highly coupled with other concurrent physical phenomena. Attention is also focused on the historical origins of multiscale modeling and foundations of continuum mechanics currently adopted to model non-classical continua with substructure, for which internal length scales play a crucial role.

  7. Multi-scale magnetic field intermittence in the plasma sheet

    Directory of Open Access Journals (Sweden)

    Z. Vörös


    This paper demonstrates that intermittent magnetic field fluctuations in the plasma sheet exhibit transitory, localized, and multi-scale features. We propose a multifractal-based algorithm which quantifies intermittence on the basis of the statistical distribution of the "strength of burstiness", estimated within a sliding window. Interesting multi-scale phenomena observed by the Cluster spacecraft include large-scale motion of the current sheet and bursty-bulk-flow-associated turbulence, interpreted as a cross-scale coupling (CSC) process. Key words: Magnetospheric physics (magnetotail; plasma sheet); Space plasma physics (turbulence)
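
    As a rough illustration of the sliding-window idea, the sketch below scores burstiness with the excess kurtosis of the signal's increments inside a moving window. This is a deliberately simplified stand-in for the paper's multifractal measure, and the synthetic signal is invented for the example:

```python
import numpy as np

def sliding_burstiness(signal, window):
    """Excess kurtosis of the increments in a sliding window: near zero for
    Gaussian fluctuations, large where intermittent bursts occur."""
    dx = np.diff(signal)
    out = np.full(dx.size, np.nan)
    for i in range(dx.size - window + 1):
        w = dx[i:i + window]
        s = w.std()
        if s > 0:
            out[i + window // 2] = ((w - w.mean()) ** 4).mean() / s**4 - 3.0
    return out

rng = np.random.default_rng(1)
quiet = rng.normal(0.0, 1.0, 2000)            # Gaussian background fluctuations
bursty = quiet.copy()
spikes = rng.choice(1000, 20, replace=False) + 500
bursty[spikes] += rng.normal(0.0, 15.0, 20)   # localized large-amplitude bursts

b_bursty = sliding_burstiness(np.cumsum(bursty), window=200)
b_quiet = sliding_burstiness(np.cumsum(quiet), window=200)
```

    The burstiness curve stays near zero for the quiet signal and rises sharply in the windows containing bursts, which is the transitory, localized behavior the paper quantifies.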

  8. Development of an automatic scaler

    International Nuclear Information System (INIS)

    He Yuehong


    A self-designed automatic scaler is introduced. A microcontroller, the LPC936, is used as the master chip in the scaler, and a counter integrated with the microcontroller is configured to operate as an external pulse counter. The software employed in the scaler is based on an embedded real-time operating system kernel named Small RTOS. Data storage, calculation, and some other functions are also provided. The scaler is designed for applications requiring low-cost, low-power solutions. To date, the automatic scaler has been applied in a surface contamination instrument. (authors)

  9. Annual review in automatic programming

    CERN Document Server

    Goodman, Richard


    Annual Review in Automatic Programming focuses on the techniques of automatic programming used with digital computers. Topics covered range from the design of machine-independent programming languages to the use of recursive procedures in ALGOL 60. A multi-pass translation scheme for ALGOL 60 is described, along with some commercial source languages. The structure and use of the syntax-directed compiler is also considered. Comprised of 12 chapters, this volume begins with a discussion on the basic ideas involved in the description of a computing process as a program for a computer, expressed in

  10. Traduction automatique et terminologie automatique (Automatic Translation and Automatic Terminology (United States)

    Dansereau, Jules


    An exposition of reasons why a system of automatic translation could not use a terminology bank except as a source of information. The fundamental difference between the two tools is explained and examples of translation and mistranslation are given as evidence of the limits and possibilities of each process. (Text is in French.) (AMH)

  11. Wayside acoustic diagnosis of defective train bearings based on signal resampling and information enhancement (United States)

    He, Qingbo; Wang, Jun; Hu, Fei; Kong, Fanrang


    The diagnosis of train bearing defects plays a significant role in maintaining the safety of railway transport. Among various defect detection techniques, acoustic diagnosis is capable of detecting incipient defects of a train bearing and is also suitable for wayside monitoring. However, the wayside acoustic signal is corrupted by the Doppler effect and heavy surrounding noise. This paper proposes a solution to overcome these two difficulties in wayside acoustic diagnosis. In the solution, a dynamic resampling method is first presented to reduce the Doppler effect, and then an adaptive stochastic resonance (ASR) method is proposed to enhance the defective characteristic frequency automatically with the aid of noise. The resampling method is based on a frequency variation curve extracted from the time-frequency distribution (TFD) of an acoustic signal by dynamically minimizing local cost functions. For the ASR method, a genetic algorithm is introduced to adaptively select the optimal parameter of the multiscale noise tuning (MST)-based stochastic resonance (SR) method. The proposed wayside acoustic diagnostic scheme combines signal resampling and information enhancement, and is thus expected to be effective in wayside defective bearing detection. An experimental study verifies the effectiveness of the proposed solution.
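
    The resampling step can be illustrated with a minimal sketch: given an instantaneous-frequency curve (in practice extracted from the TFD; here an assumed linear chirp standing in for the Doppler-shifted tone), warp time so that equal phase increments map to equal sample steps:

```python
import numpy as np

def resample_by_frequency(signal, t, f_inst):
    """Warp time according to the cumulative phase of the instantaneous
    frequency, then interpolate the signal onto uniform phase increments,
    so the resampled signal has a near-constant frequency."""
    dphi = 0.5 * (f_inst[1:] + f_inst[:-1]) * np.diff(t)   # cycles per step
    phase = np.concatenate(([0.0], np.cumsum(dphi)))
    uniform_phase = np.linspace(0.0, phase[-1], len(t))
    warped_t = np.interp(uniform_phase, phase, t)          # equal-phase sample times
    return np.interp(warped_t, t, signal)

fs = 8000.0
t = np.arange(0.0, 1.0, 1.0 / fs)
f_inst = 80.0 + 40.0 * t                  # assumed frequency curve, 80 -> 120 Hz
x = np.sin(2.0 * np.pi * np.cumsum(f_inst) / fs)
y = resample_by_frequency(x, t, f_inst)
```

    After resampling, the smeared chirp collapses to a narrow spectral line near the mean frequency (100 Hz here), which is what makes defect frequencies detectable downstream.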

  12. Multi-scale and shape constrained localized region-based active contour segmentation of uterine fibroid ultrasound images in HIFU therapy.

    Directory of Open Access Journals (Sweden)

    Xiangyun Liao

    To overcome the severe intensity inhomogeneity and blurry boundaries in HIFU (High Intensity Focused Ultrasound) ultrasound images, an accurate and efficient multi-scale and shape constrained localized region-based active contour model (MSLCV) was developed to segment the target region in HIFU ultrasound images of uterine fibroids. We incorporated a new shape constraint into the localized region-based active contour, which constrains the contour to the desired, accurate segmentation, avoiding boundary leakage and excessive contraction. Localized region-based active contour modeling is suited to ultrasound images, but by itself it cannot achieve satisfactory segmentation of HIFU images of uterine fibroids; incorporating a shape constraint into the region-based level set framework increases segmentation accuracy. Further measures were proposed to overcome sensitivity to initialization, a multi-scale segmentation method was proposed to improve segmentation efficiency, and an adaptive localizing radius size selection function was designed to acquire better segmentation results. Experimental results demonstrated that the MSLCV model is significantly more accurate and efficient than conventional methods. The model has been quantitatively validated via experiments, obtaining an average of 0.94 for the DSC (Dice similarity coefficient) and 25.16 for the MSSD (mean sum of square distance). Moreover, with the multi-scale segmentation method, the MSLCV model's average segmentation time decreased to approximately 1/8 that of the localized region-based active contour model (the LCV model). An accurate and efficient multi-scale and shape constrained localized region-based active contour model was thus obtained for the semi-automatic segmentation of uterine fibroid ultrasound (UFUS) images in HIFU therapy. Compared with other

  13. Automatic generation of executable communication specifications from parallel applications

    Energy Technology Data Exchange (ETDEWEB)

    Pakin, Scott [Los Alamos National Laboratory]; Wu, Xing [NCSU]; Mueller, Frank [NCSU]


    Portable parallel benchmarks are widely used and highly effective for (a) the evaluation, analysis, and procurement of high-performance computing (HPC) systems and (b) quantifying the potential benefits of porting applications to new hardware platforms. Yet past techniques for synthetically parameterizing hand-coded HPC benchmarks prove insufficient for today's rapidly evolving scientific codes, particularly those subject to multi-scale science modeling or built on domain-specific libraries. To address these problems, this work contributes novel methods to automatically generate highly portable and customizable communication benchmarks from HPC applications. We utilize ScalaTrace, a lossless yet scalable parallel application tracing framework, to collect selected aspects of the run-time behavior of HPC applications, including communication operations and execution time, while abstracting away the details of the computation proper. We subsequently generate benchmarks with identical run-time behavior from the collected traces. A unique feature of our approach is that we generate benchmarks in CONCEPTUAL, a domain-specific language that enables the expression of sophisticated communication patterns using a rich and easily understandable grammar, yet compiles to ordinary C + MPI. Experimental results demonstrate that the generated benchmarks preserve the run-time behavior, including both the communication pattern and the execution time, of the original applications. Such automated benchmark generation is particularly valuable for proprietary, export-controlled, or classified application codes: when supplied to a third party, our auto-generated benchmarks ensure performance fidelity without the risks associated with releasing the original code. This ability to automatically generate performance-accurate benchmarks from parallel applications is, to our knowledge, novel and without precedent.

  14. A consideration of the operation of automatic production machines. (United States)

    Hoshi, Toshiro; Sugimoto, Noboru


    At worksites, various automatic production machines are in use to free workers from muscular labor or labor in detrimental environments. On the other hand, a large number of industrial accidents have been caused by automatic production machines. In view of this, this paper considers the operation of automatic production machines from the viewpoint of accident prevention and distinguishes two types of machine operation: operation for which quick performance is required (operation that must not be delayed), and operation for which composed performance is required (operation that must not be performed in haste). These operations are distinguished by operation buttons of suitable colors and shapes, and the paper shows that these characteristics can be evaluated as "asymmetry on the time axis". In general, for workers to accept the risk of automatic production machines, the precondition is that harm be sufficiently small or that avoidance of harm be easy. In this connection, the paper shows that acceptance of the risk of automatic production machines can be facilitated by enhancing this asymmetry on the time axis.

  15. The Potential of Automatic Word Comparison for Historical Linguistics. (United States)

    List, Johann-Mattis; Greenhill, Simon J; Gray, Russell D


    The amount of data from languages spoken all over the world is rapidly increasing. Traditional manual methods in historical linguistics need to face the challenges brought by this influx of data. Automatic approaches to word comparison could provide invaluable help in pre-analyzing data that can later be enhanced by experts. In this way, computational approaches can take care of the repetitive and schematic tasks, leaving experts to concentrate on answering interesting questions. Here we test the potential of automatic methods to detect etymologically related words (cognates) in cross-linguistic data. Using a newly compiled database of expert cognate judgments across five different language families, we compare how well different automatic approaches distinguish related from unrelated words. Our results show that automatic methods can identify cognates with a very high degree of accuracy, reaching 89% for the best-performing method, Infomap. We identify the specific strengths and weaknesses of these different methods and point to major challenges for future approaches. Current automatic approaches for cognate detection, although not perfect, could become an important component of future research in historical linguistics.
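
    A minimal baseline for the kind of word comparison being automated is the normalized edit distance; the 0.5 threshold below is an arbitrary illustrative choice, and the whole sketch is far simpler than the Infomap-based method the study found best:

```python
def edit_distance(a, b):
    """Levenshtein distance via the classic dynamic program."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def candidate_cognates(w1, w2, threshold=0.5):
    """Flag a pair as candidate cognates when the length-normalized edit
    distance falls below the threshold."""
    return edit_distance(w1, w2) / max(len(w1), len(w2)) < threshold

related = candidate_cognates("wasser", "water")     # German/English 'water'
unrelated = candidate_cognates("wasser", "mizu")    # German/Japanese 'water'
```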

  16. Automatic segmentation of relevant structures in DCE MR mammograms (United States)

    Koenig, Matthias; Laue, Hendrik; Boehler, Tobias; Peitgen, Heinz-Otto


    The automatic segmentation of relevant structures such as the skin edge, chest wall, or nipple in dynamic contrast-enhanced MR imaging (DCE MRI) of the breast provides additional information for computer-aided diagnosis (CAD) systems. Automatic reporting using BI-RADS criteria benefits from information about the location of these structures: lesion positions can be described automatically relative to such reference structures for reporting purposes. Furthermore, this information can assist data reduction for computationally expensive preprocessing such as registration, or visualization of only the segments of current interest. In this paper, a novel automatic method is presented for determining the air-breast boundary (skin edge), for approximating the chest wall, and for locating the nipples. The method consists of several steps built on top of each other: automatic threshold computation yields the air-breast boundary, which is then analyzed to determine the location of the nipple; finally, the results of both steps are the starting point for approximating the chest wall. The proposed process was evaluated on a large set of DCE MRI data acquired with T1 sequences and yielded reasonable results in all cases.
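
    The abstract only states that an automatic threshold separates air from breast tissue; Otsu's method is one standard choice for such a threshold and is used below purely as an illustrative assumption, on invented synthetic data:

```python
import numpy as np

def otsu_threshold(image, bins=256):
    """Global threshold chosen by maximizing the between-class variance
    of the intensity histogram (Otsu's method)."""
    hist, edges = np.histogram(image, bins=bins)
    p = hist / hist.sum()
    omega = np.cumsum(p)                    # cumulative class probability
    mu = np.cumsum(p * edges[:-1])          # cumulative class mean (bin left edges)
    mu_t = mu[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1.0 - omega))
    return edges[int(np.nanargmax(sigma_b))]

# Synthetic stand-in for a breast MR slice: dark air, brighter tissue.
rng = np.random.default_rng(2)
air = rng.normal(20.0, 5.0, (64, 32))
tissue = rng.normal(120.0, 15.0, (64, 32))
img = np.hstack([air, tissue])
thr = otsu_threshold(img)
mask = img > thr            # breast mask; its outline is the air-breast boundary
```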

  17. Quantum theory of multiscale coarse-graining (United States)

    Han, Yining; Jin, Jaehyeok; Wagner, Jacob W.; Voth, Gregory A.


    Coarse-grained (CG) models serve as a powerful tool to simulate molecular systems at much longer temporal and spatial scales. Previously, CG models and methods have been built upon classical statistical mechanics. The present paper develops a theory and numerical methodology for coarse-graining in quantum statistical mechanics, by generalizing the multiscale coarse-graining (MS-CG) method to quantum Boltzmann statistics. A rigorous derivation of the sufficient thermodynamic consistency condition is first presented via imaginary time Feynman path integrals. It identifies the optimal choice of CG action functional and effective quantum CG (qCG) force field to generate a quantum MS-CG (qMS-CG) description of the equilibrium system that is consistent with the quantum fine-grained model projected onto the CG variables. A variational principle then provides a class of algorithms for optimally approximating the qMS-CG force fields. Specifically, a variational method based on force matching, which was also adopted in the classical MS-CG theory, is generalized to quantum Boltzmann statistics. The qMS-CG numerical algorithms and practical issues in implementing this variational minimization procedure are also discussed. Then, two numerical examples are presented to demonstrate the method. Finally, as an alternative strategy, a quasi-classical approximation for the thermal density matrix expressed in the CG variables is derived. This approach provides an interesting physical picture for coarse-graining in quantum Boltzmann statistical mechanics in which the consistency with the quantum particle delocalization is obviously manifest, and it opens up an avenue for using path integral centroid-based effective classical force fields in a coarse-graining methodology.

  18. First results from the Magnetospheric Multiscale mission (United States)

    Lavraud, B.


    Since its launch in March 2015, NASA's Magnetospheric Multiscale mission (MMS) has provided a wealth of unprecedented high-resolution measurements of space plasma properties and dynamics in the near-Earth environment. MMS was designed primarily to study the fundamental process of collisionless magnetic reconnection. The first two results reviewed here pertain to this topic and highlight how the extremely high-resolution MMS data (electrons, in particular, with full three-dimensional measurements at 30 ms in burst mode) have made it possible to probe electron dynamics in unprecedented detail. The first result demonstrates how electrons become demagnetized and scattered near the magnetic reconnection X line as a result of increased magnetic field curvature, together with a decrease in its magnitude. The second result demonstrates that electrons form crescent-shaped, agyrotropic distribution functions very near the X line, suggestive of the existence of a perpendicular current aligned with the local electric field and consistent with the energy conversion expected in magnetic reconnection (such that J·E > 0). Aside from magnetic reconnection, we show how MMS contributes to topics such as wave properties and their interaction with particles. Thanks again to extremely high-resolution measurements, the lossless and periodic energy exchange between wave electromagnetic fields and particles, as expected for kinetic Alfvén waves, was confirmed. Although not discussed, MMS has the potential to resolve many other outstanding issues in collisionless plasma physics, for example regarding shock or turbulence acceleration, with obvious broader impacts in astrophysics in general.

  19. A multiscale problem in thermal science

    Directory of Open Access Journals (Sweden)

    Casenave Fabien


    We consider a multiscale heat problem in civil aviation: determine the temperature field in a plane in flying conditions, with air conditioning. Ventilated electronic components in the bay introduce a heat source, and with it a second scale in the problem. First, we present three levels of modelling for the physical phenomena, which are applied to the two sub-problems: the plane and the electronic component. Then, having reduced the complexity of the problem to a linear non-symmetric coercive PDE, we use the reduced basis method for the electronic component problem.

  20. Fast Particle Methods for Multiscale Phenomena Simulations (United States)

    Koumoutsakos, P.; Wray, A.; Shariff, K.; Pohorille, Andrew


    We are developing particle methods aimed at improving computational modeling capabilities for multiscale physical phenomena in: (i) high Reynolds number unsteady vortical flows, (ii) particle-laden and interfacial flows, and (iii) molecular dynamics studies of nanoscale droplets and of the structure, functions, and evolution of the earliest living cell. The unifying computational approach involves particle methods implemented on parallel computer architectures. The inherent adaptivity, robustness, and efficiency of particle methods make them a multidisciplinary computational tool capable of bridging the gap between micro-scale and continuum flow simulations. Using efficient tree data structures, multipole expansion algorithms, and improved particle-grid interpolation, particle methods allow simulations with millions of computational elements, making possible the resolution of a wide range of length and time scales of these important physical phenomena. The current challenges in these simulations are: (i) the proper formulation of particle methods at the molecular and continuum levels for the discretization of the governing equations; (ii) the resolution of the wide range of time and length scales governing the phenomena under investigation; (iii) the minimization of numerical artifacts that may interfere with the physics of the systems under consideration; and (iv) the parallelization of processes such as tree traversal and grid-particle interpolation. We are conducting simulations using vortex methods, molecular dynamics, and smoothed particle hydrodynamics, exploiting their unifying concepts, such as the solution of the N-body problem on parallel computers, highly accurate particle-particle and grid-particle interpolations, parallel FFTs, and the formulation of processes such as diffusion in the context of particle methods. This approach enables us to transcend seemingly unrelated areas of research.
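
    One of the ingredients listed above, particle-grid interpolation, can be sketched with first-order (cloud-in-cell) deposition on a 1-D periodic grid; the geometry and particle weights are invented for the example:

```python
import numpy as np

def deposit_to_grid(positions, weights, n_cells, length):
    """Cloud-in-cell deposition: each particle's weight is split linearly
    between the two nearest cell centers, conserving the total weight."""
    dx = length / n_cells
    grid = np.zeros(n_cells)
    for x, w in zip(positions, weights):
        s = x / dx - 0.5                 # position in cell-center coordinates
        i = int(np.floor(s))
        frac = s - i
        grid[i % n_cells] += w * (1.0 - frac)
        grid[(i + 1) % n_cells] += w * frac
    return grid

g = deposit_to_grid(positions=[0.5, 6.9], weights=[1.0, 2.0],
                    n_cells=8, length=8.0)
```

    The same linear weights, applied in reverse, interpolate grid values back to the particles; this two-way transfer is what couples particle methods to grid-based fast solvers such as parallel FFTs.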

  2. Multiscale mechanical modeling of soft biological tissues (United States)

    Stylianopoulos, Triantafyllos


    Soft biological tissues include both native and artificial tissues. In the human body, tissues like the articular cartilage, arterial wall, and heart valve leaflets are examples of structures composed of an underlying network of collagen fibers, cells, proteins and molecules. Artificial tissues are less complex than native tissues and mainly consist of a fiber polymer network with the intent of replacing lost or damaged tissue. Understanding of the mechanical function of these materials is essential for many clinical treatments (e.g. arterial clamping, angioplasty), diseases (e.g. arteriosclerosis) and tissue engineering applications (e.g. engineered blood vessels or heart valves). This thesis presents the derivation and application of a multiscale methodology to describe the macroscopic mechanical function of soft biological tissues incorporating directly their structural architecture. The model, which is based on volume averaging theory, accounts for structural parameters such as the network volume fraction and orientation, the realignment of the fibers in response to strain, the interactions among the fibers and the interactions between the fibers and the interstitial fluid in order to predict the overall tissue behavior. Therefore, instead of using a constitutive equation to relate strain to stress, the tissue microstructure is modeled within a representative volume element (RVE) and the macroscopic response at any point in the tissue is determined by solving a micromechanics problem in the RVE. The model was applied successfully to acellular collagen gels, native blood vessels, and electrospun polyurethane scaffolds and provided accurate predictions for permeability calculations in isotropic and oriented fiber networks. The agreement of model predictions with experimentally determined mechanical properties provided insights into the mechanics of tissues and tissue constructs, while discrepancies revealed limitations of the model framework.

  3. Impact of model complexity and multi-scale data integration on the estimation of hydrogeological parameters in a dual-porosity aquifer (United States)

    Tamayo-Mas, Elena; Bianchi, Marco; Mansour, Majdi


    This study investigates the impact of model complexity and multi-scale prior hydrogeological data on the interpretation of pumping test data in a dual-porosity aquifer (the Chalk aquifer in England, UK). In order to characterize the hydrogeological properties, different approaches are applied, ranging from a traditional analytical solution (the Theis approach) to more sophisticated numerical models with automatically calibrated input parameters. Comparisons of results from the different approaches show that neither traditional analytical solutions nor a numerical model assuming a homogeneous and isotropic aquifer can adequately explain the observed drawdowns. A better reproduction of the observed drawdowns at all seven monitoring locations is instead achieved when medium- and local-scale prior information about the vertical hydraulic conductivity (K) distribution is used to constrain the model calibration process. In particular, the integration of medium-scale vertical K variations based on flowmeter measurements led to an improvement in the goodness-of-fit of the simulated drawdowns of about 30%. Further improvements (up to 70%) were observed when a simple upscaling approach was used to integrate small-scale K data to constrain the automatic calibration process of the numerical model. Although the analysis focuses on a specific case study, these results provide insights into the representativeness of estimates of hydrogeological properties based on different interpretations of pumping test data, and promote the integration of multi-scale data for the characterization of heterogeneous aquifers in complex hydrogeological settings.
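
    The abstract does not specify the upscaling rule used; a standard choice for a layered medium (thickness-weighted harmonic mean for flow across the layers, arithmetic mean for flow along them) serves as an illustrative sketch of how small-scale K data can be folded into a calibration:

```python
import numpy as np

def upscale_k(k_layers, thickness, flow="vertical"):
    """Equivalent hydraulic conductivity of a layered column: the harmonic
    mean governs flow perpendicular to the layering, the arithmetic mean
    flow parallel to it."""
    k = np.asarray(k_layers, dtype=float)
    d = np.asarray(thickness, dtype=float)
    if flow == "vertical":               # flow across the layers
        return d.sum() / (d / k).sum()
    return (d * k).sum() / d.sum()       # flow along the layers

# A thin low-K band between two higher-K layers (values are invented):
k_v = upscale_k([1e-5, 1e-8, 1e-5], [2.0, 0.5, 2.0])
k_h = upscale_k([1e-5, 1e-8, 1e-5], [2.0, 0.5, 2.0], flow="horizontal")
```

    The thin low-K band dominates the vertical value but barely affects the horizontal one, which is one reason vertical K information can be so influential in a calibration.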

  4. Front-end vision and multi-scale image analysis multi-scale computer vision theory and applications, written in Mathematica

    CERN Document Server

    Romeny, Bart M Haar


    Front-End Vision and Multi-Scale Image Analysis is a tutorial in multi-scale methods for computer vision and image processing. It builds on the cross-fertilization between human visual perception and multi-scale computer vision ('scale-space') theory and applications. The multi-scale strategies recognized in the first stages of the human visual system are carefully examined, and taken as inspiration for the many geometric methods discussed. All chapters are written in Mathematica, a spectacular high-level language for symbolic and numerical manipulations. The book presents a new and effective

  5. Multiscale Drivers of Global Environmental Health (United States)

    Desai, Manish Anil

    In this dissertation, I motivate, develop, and demonstrate three such approaches for investigating multiscale drivers of global environmental health: (1) a metric for analyzing contributions and responses to climate change from global to sectoral scales, (2) a framework for unraveling the influence of environmental change on infectious diseases at regional to local scales, and (3) a model for informing the design and evaluation of clean cooking interventions at community to household scales. The full utility of climate debt as an analytical perspective will remain untapped without tools that can be manipulated by a wide range of analysts, including global environmental health researchers. Chapter 2 explains how international natural debt (IND) apportions global radiative forcing from fossil fuel carbon dioxide and methane, the two most significant climate-altering pollutants, to individual entities -- primarily countries, but also subnational states and economic sectors, with even finer scales possible -- as a function of unique trajectories of historical emissions, taking into account the quite different radiative efficiencies and atmospheric lifetimes of each pollutant. Owing to its straightforward and transparent derivation, IND can readily operationalize climate debt to consider issues of equity and efficiency and drive scenario exercises that explore the response to climate change at multiple scales. Collectively, the analyses presented in this chapter demonstrate how IND can inform a range of key questions on climate change mitigation at multiple scales, compelling environmental health towards an appraisal of the causes and not just the consequences of climate change. The environmental change and infectious disease (EnvID) conceptual framework of Chapter 3 builds on a rich history of prior efforts in epidemiologic theory, environmental science, and mathematical modeling by: (1) articulating a flexible and logical system specification; (2) incorporating

  6. Automatic segmentation and classification of multiple sclerosis in multichannel MRI. (United States)

    Akselrod-Ballin, Ayelet; Galun, Meirav; Gomori, John Moshe; Filippi, Massimo; Valsasina, Paola; Basri, Ronen; Brandt, Achi


    We introduce a multiscale approach that combines segmentation with classification to detect abnormal brain structures in medical imagery, and demonstrate its utility in automatically detecting multiple sclerosis (MS) lesions in 3-D multichannel magnetic resonance (MR) images. Our method uses segmentation to obtain a hierarchical decomposition of multichannel, anisotropic MR scans. It then produces a rich set of features describing the segments in terms of intensity, shape, location, neighborhood relations, and anatomical context. These features are then fed into a decision forest classifier, trained with data labeled by experts, enabling the detection of lesions at all scales. Unlike common approaches that use voxel-by-voxel analysis, our system can utilize regional properties that are often important for characterizing abnormal brain structures. We provide experiments on two types of real MR images: a multichannel proton-density-, T2-, and T1-weighted dataset of 25 MS patients and a single-channel fluid-attenuated inversion recovery (FLAIR) dataset of 16 MS patients. Comparing our results with lesion delineation by a human expert and with previously extensively validated results shows the promise of the approach.

  7. Lung image patch classification with automatic feature learning. (United States)

    Li, Qing; Cai, Weidong; Feng, David Dagan


    Image patch classification is an important task in many different medical imaging applications. The classification performance is usually highly dependent on the effectiveness of image feature vectors. While many feature descriptors have been proposed over the past years, they can be quite complicated and domain-specific. Automatic feature learning from image data has thus emerged as a different trend recently, to capture the intrinsic image features without manual feature design. In this paper, we propose to create multi-scale feature extractors based on an unsupervised learning algorithm; and obtain the image feature vectors by convolving the feature extractors with the image patches. The auto-generated image features are data-adaptive and highly descriptive. A simple classification scheme is then used to classify the image patches. The proposed method is generic in nature and can be applied to different imaging domains. For evaluation, we perform image patch classification to differentiate various lung tissue patterns commonly seen in interstitial lung disease (ILD), and demonstrate promising results.
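
    A minimal sketch of the pipeline, with spherical k-means standing in for the unspecified unsupervised learning algorithm, and with the image, patch size, and filter count all invented for the example:

```python
import numpy as np

rng = np.random.default_rng(3)

def learn_filters(patches, n_filters=8, n_iter=20):
    """Unsupervised feature learning: spherical k-means on mean-centered,
    normalized patches; the centroids serve as feature extractors."""
    X = patches - patches.mean(axis=1, keepdims=True)
    X = X / (np.linalg.norm(X, axis=1, keepdims=True) + 1e-8)
    C = X[rng.choice(len(X), n_filters, replace=False)]
    for _ in range(n_iter):
        assign = (X @ C.T).argmax(axis=1)       # nearest centroid by cosine
        for k in range(n_filters):
            members = X[assign == k]
            if len(members):
                c = members.sum(axis=0)
                C[k] = c / (np.linalg.norm(c) + 1e-8)
    return C

def extract_features(image, filters, size=5):
    """Convolve each learned filter with the patch and average-pool the
    absolute responses into one feature vector."""
    h, w = image.shape
    feats = []
    for f in filters:
        kern = f.reshape(size, size)
        resp = np.empty((h - size + 1, w - size + 1))
        for i in range(resp.shape[0]):
            for j in range(resp.shape[1]):
                resp[i, j] = (image[i:i + size, j:j + size] * kern).sum()
        feats.append(np.abs(resp).mean())
    return np.array(feats)

# Toy textured image standing in for a lung tissue patch:
img = rng.normal(size=(32, 32)) + np.sin(np.arange(32) / 2.0)
patches = np.array([img[i:i + 5, j:j + 5].ravel()
                    for i in range(0, 28, 3) for j in range(0, 28, 3)])
F = learn_filters(patches)
v = extract_features(img, F)    # data-adaptive feature vector for a classifier
```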

  8. Automatically Preparing Safe SQL Queries (United States)

    Bisht, Prithvi; Sistla, A. Prasad; Venkatakrishnan, V. N.

    We present the first sound program source transformation approach for automatically transforming the code of a legacy web application to employ PREPARE statements in place of unsafe SQL queries. Our approach therefore opens the way for eradicating the SQL injection threat vector from legacy web applications.
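
    The target idiom of the transformation, shown here with Python's sqlite3 rather than the legacy web languages such a tool would actually rewrite: the string-concatenated query admits injection, while the prepared statement binds the input as data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin'), ('bob', 'user')")

attacker_input = "' OR '1'='1"

# UNSAFE: string concatenation, the pattern the transformation eliminates.
unsafe_query = "SELECT name FROM users WHERE role = '" + attacker_input + "'"
leaked = conn.execute(unsafe_query).fetchall()   # the tautology returns every row

# SAFE: a prepared/parameterized statement; the placeholder binds the
# attacker's string as a literal value, never as SQL syntax.
safe = conn.execute("SELECT name FROM users WHERE role = ?",
                    (attacker_input,)).fetchall()
```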

  9. Automatic Validation of Protocol Narration

    DEFF Research Database (Denmark)

    Bodei, Chiara; Buchholtz, Mikael; Degano, Pierpaolo


    We perform a systematic expansion of protocol narrations into terms of a process algebra in order to make precise some of the detailed checks that need to be made in a protocol. We then apply static analysis technology to develop an automatic validation procedure for protocols. Finally, we...

  10. The Automatic Measurement of Targets

    DEFF Research Database (Denmark)

    Höhle, Joachim


    The automatic measurement of targets is demonstrated by means of a theoretical example and by an interactive measuring program for real imagery from a réseau camera. The strategy used is a combination of two methods: the maximum correlation coefficient and correlation in the subpixel range...... interactive software is also part of a computer-assisted learning program on digital photogrammetry....

  11. Automatic Error Analysis Using Intervals (United States)

    Rothwell, E. J.; Cloud, M. J.


    A technique for automatic error analysis using interval mathematics is introduced. A comparison to standard error propagation methods shows that in cases involving complicated formulas, the interval approach gives comparable error estimates with much less effort. Several examples are considered, and numerical errors are computed using the INTLAB…
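The core of interval-based error analysis can be sketched in a few lines. The toy class below propagates worst-case bounds through arithmetic; unlike INTLAB it ignores outward rounding of the floating-point endpoints, so it is an illustration only, and the function and input bounds are invented.

```python
from dataclasses import dataclass

@dataclass
class Interval:
    lo: float
    hi: float
    def __add__(self, o): return Interval(self.lo + o.lo, self.hi + o.hi)
    def __sub__(self, o): return Interval(self.lo - o.hi, self.hi - o.lo)
    def __mul__(self, o):
        # bound a product by checking all endpoint combinations
        p = [self.lo*o.lo, self.lo*o.hi, self.hi*o.lo, self.hi*o.hi]
        return Interval(min(p), max(p))
    @property
    def width(self): return self.hi - self.lo

# x = 2.0 +/- 0.1, y = 3.0 +/- 0.1: bound the range of f(x, y) = x*y + x - y
x = Interval(1.9, 2.1)
y = Interval(2.9, 3.1)
f = x * y + x - y
print(f.lo, f.hi, f.width)
```

Any formula built from these operations yields an enclosure of all possible results; here the enclosure contains the nominal value f(2, 3) = 5, with no manual derivative-based propagation needed.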

  12. Automatic female dehumanization across the menstrual cycle. (United States)

    Piccoli, Valentina; Fantoni, Carlo; Foroni, Francesco; Bianchi, Mauro; Carnaghi, Andrea


    In this study, we investigate whether hormonal shifts during the menstrual cycle contribute to the dehumanization of other women and men. Female participants with different levels of likelihood of conception (LoC) completed a semantic priming paradigm in a lexical decision task. When the word 'woman' was the prime, animal words were more accessible at high versus low LoC, whereas human words were more inhibited at high versus low LoC. When the word 'man' was used as the prime, no difference was found in accessibility between high and low LoC for either animal or human words. These results show that female dehumanization is automatically elicited by menstrual cycle-related processes and is likely associated with an enhanced activation of mate-attraction goals. © 2016 The British Psychological Society.

  13. Space-time multiscale methods for Large Eddy Simulation

    NARCIS (Netherlands)

    Munts, E.A.


    The Variational Multiscale (VMS) method has appeared as a promising new approach to the Large Eddy Simulation (LES) of turbulent flows. The key advantage of the VMS approach is that it allows different subgrid-scale (SGS) modeling assumptions to be made at different ranges of the resolved scales.

  14. Multiscale modeling and simulation of brain blood flow

    Energy Technology Data Exchange (ETDEWEB)

    Perdikaris, Paris, E-mail: [Department of Mechanical Engineering, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139 (United States); Grinberg, Leopold, E-mail: [IBM T.J Watson Research Center, 1 Rogers St, Cambridge, Massachusetts 02142 (United States); Karniadakis, George Em, E-mail: [Division of Applied Mathematics, Brown University, Providence, Rhode Island 02912 (United States)


    The aim of this work is to present an overview of recent advances in multi-scale modeling of brain blood flow. In particular, we present some approaches that enable the in silico study of multi-scale and multi-physics phenomena in the cerebral vasculature. We discuss the formulation of continuum and atomistic modeling approaches, present a consistent framework for their concurrent coupling, and list some of the challenges that one needs to overcome in achieving a seamless and scalable integration of heterogeneous numerical solvers. The effectiveness of the proposed framework is demonstrated in a realistic case involving modeling the thrombus formation process taking place on the wall of a patient-specific cerebral aneurysm. This highlights the ability of multi-scale algorithms to resolve important biophysical processes that span several spatial and temporal scales, potentially yielding new insight into the key aspects of brain blood flow in health and disease. Finally, we discuss open questions in multi-scale modeling and emerging topics of future research.

  15. Hybrid continuum–molecular modelling of multiscale internal gas flows

    International Nuclear Information System (INIS)

    Patronis, Alexander; Lockerby, Duncan A.; Borg, Matthew K.; Reese, Jason M.


    We develop and apply an efficient multiscale method for simulating a large class of low-speed internal rarefied gas flows. The method is an extension of the hybrid atomistic–continuum approach proposed by Borg et al. (2013) [28] for the simulation of micro/nano flows of high-aspect ratio. The major new extensions are: (1) incorporation of fluid compressibility; (2) implementation using the direct simulation Monte Carlo (DSMC) method for dilute rarefied gas flows, and (3) application to a broader range of geometries, including periodic, non-periodic, pressure-driven, gravity-driven and shear-driven internal flows. The multiscale method is applied to micro-scale gas flows through a periodic converging–diverging channel (driven by an external acceleration) and a non-periodic channel with a bend (driven by a pressure difference), as well as the flow between two eccentric cylinders (with the inner rotating relative to the outer). In all these cases there exists a wide variation of Knudsen number within the geometries, as well as substantial compressibility despite the Mach number being very low. For validation purposes, our multiscale simulation results are compared to those obtained from full-scale DSMC simulations: very close agreement is obtained in all cases for all flow variables considered. Our multiscale simulation is an order of magnitude more computationally efficient than the full-scale DSMC for the first and second test cases, and two orders of magnitude more efficient for the third case.

  16. Multi-scale modeling strategies in materials science—The ...

    Indian Academy of Sciences (India)


    will review the recently developed quasicontinuum method which is an attempt to bridge the length scales in a single seamless model with the aid of the finite element method. Attempts to generalize this method to finite temperatures will be outlined. Keywords. Multi-scale models; quasicontinuum method; finite elements. 1.

  17. Computer-Aided Multiscale Modelling for Chemical Process Engineering

    DEFF Research Database (Denmark)

    Morales Rodriguez, Ricardo; Gani, Rafiqul


    Chemical processes are generally modeled through monoscale approaches, which, while not always adequate, serve a useful role in product-process design. In this context, the use of a multi-dimensional and multi-scale model-based approach has importance in product-process development. A computer-aided framewor...

  18. Randomized Oversampling for Generalized Multiscale Finite Element Methods

    KAUST Repository

    Calo, Victor M.


    In this paper, we develop efficient multiscale methods for flows in heterogeneous media. We use the generalized multiscale finite element (GMsFEM) framework. GMsFEM approximates the solution space locally using a few multiscale basis functions. This approximation selects an appropriate snapshot space and a local spectral decomposition, e.g., the use of oversampled regions, in order to achieve an efficient model reduction. However, the successful construction of snapshot spaces may be costly if too many local problems need to be solved in order to obtain these spaces. We use a moderate quantity of local solutions (or snapshot vectors) with random boundary conditions on oversampled regions with zero forcing to deliver an efficient methodology. Motivated by the randomized algorithm presented in [P. G. Martinsson, V. Rokhlin, and M. Tygert, A Randomized Algorithm for the approximation of Matrices, YALEU/DCS/TR-1361, Yale University, 2006], we consider a snapshot space which consists of harmonic extensions of random boundary conditions defined in a domain larger than the target region. Furthermore, we perform an eigenvalue decomposition in this small space. We study the application of randomized sampling for GMsFEM in conjunction with adaptivity, where local multiscale spaces are adaptively enriched. Convergence analysis is provided. We present representative numerical results to validate the method proposed.
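The snapshot construction the paper builds on can be imitated on a toy Dirichlet problem. The sketch below is only an illustration under invented assumptions (grid size, snapshot count, number of retained modes, a dense solver for the local problems), and it extracts dominant modes by a plain SVD rather than the paper's local spectral decomposition: harmonic extensions of random boundary data on an oversampled region form the snapshot space, from which a few basis vectors are kept.

```python
import numpy as np

def harmonic_extension(n, boundary):
    """Solve the 5-point discrete Laplace equation on an n x n interior grid
    with given Dirichlet boundary values (a dict keyed by side), zero forcing."""
    N = n * n
    A = np.zeros((N, N))
    b = np.zeros(N)
    idx = lambda i, j: i * n + j
    for i in range(n):
        for j in range(n):
            k = idx(i, j)
            A[k, k] = 4.0
            for di, dj, side in ((-1, 0, 'top'), (1, 0, 'bottom'),
                                 (0, -1, 'left'), (0, 1, 'right')):
                ii, jj = i + di, j + dj
                if 0 <= ii < n and 0 <= jj < n:
                    A[k, idx(ii, jj)] = -1.0
                else:
                    # neighbor lies on the boundary: move its value to the RHS
                    pos = jj if side in ('top', 'bottom') else ii
                    b[k] += boundary[side][max(0, min(n - 1, pos))]
    return np.linalg.solve(A, b)

rng = np.random.default_rng(0)
n, n_snapshots = 8, 12
# snapshots: harmonic extensions of random boundary conditions, zero forcing
S = np.column_stack([
    harmonic_extension(n, {s: rng.standard_normal(n)
                           for s in ('top', 'bottom', 'left', 'right')})
    for _ in range(n_snapshots)])
# decomposition in the small snapshot space: keep a few dominant modes
U, sing, _ = np.linalg.svd(S, full_matrices=False)
basis = U[:, :4]          # a few "multiscale basis" vectors
print(S.shape, basis.shape)
```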

  19. Riparian ecosystems and buffers - multiscale structure, function, and management: introduction (United States)

    Kathleen A. Dwire; Richard R. Lowrance


    Given the importance of issues related to improved understanding and management of riparian ecosystems and buffers, the American Water Resources Association (AWRA) sponsored a Summer Specialty Conference in June 2004 at Olympic Valley, California, entitled 'Riparian Ecosystems and Buffers: Multiscale Structure, Function, and Management.' The primary objective...

  20. A multiphysics and multiscale software environment for modeling astrophysical systems

    NARCIS (Netherlands)

    Portegies Zwart, S.; McMillan, S.; Harfst, S.; Groen, D.; Fujii, M.; Ó Nualláin, B.; Glebbeek, E.; Heggie, D.; Lombardi, J.; Hut, P.; Angelou, V.; Banerjee, S.; Belkus, H.; Fragos, T.; Fregeau, J.; Gaburov, E.; Izzard, R.; Jurić, M.; Justham, S.; Sottoriva, A.; Teuben, P.; van Bever, J.; Yaron, O.; Zemp, M.


    We present MUSE, a software framework for combining existing computational tools for different astrophysical domains into a single multiphysics, multiscale application. MUSE facilitates the coupling of existing codes written in different languages by providing inter-language tools and by specifying

  1. A Liver-centric Multiscale Modeling Framework for Xenobiotics (United States)

    We describe a multi-scale framework for modeling acetaminophen-induced liver toxicity. Acetaminophen is a widely used analgesic. Overdose of acetaminophen can result in liver injury via its biotransformation into a toxic product, which further induces massive necrosis. Our study foc...

  2. Covariance, correlation matrix, and the multiscale community structure of networks. (United States)

    Shen, Hua-Wei; Cheng, Xue-Qi; Fang, Bin-Xing


    Empirical studies show that real-world networks often exhibit multiple scales of topological description. However, how to identify the intrinsic multiple scales of a network is still an open problem. In this paper, we consider detecting the multiscale community structure of a network from the perspective of dimension reduction. According to this perspective, a covariance matrix of the network is defined to uncover the multiscale community structure through translation and rotation transformations. It is proved that the covariance matrix is the unbiased version of the well-known modularity matrix. We then point out that the translation and rotation transformations fail to deal with heterogeneous networks, which are very common in nature and society. To address this problem, a correlation matrix is proposed by introducing a rescaling transformation into the covariance matrix. Extensive tests on real-world and artificial networks demonstrate that the correlation matrix significantly outperforms the covariance matrix (equivalently, the modularity matrix) in identifying the multiscale community structure of a network. This work provides a novel perspective on the identification of community structure, suggesting that various dimension reduction methods might be used for this task. Through introducing the correlation matrix, we further conclude that the rescaling transformation, together with the translation and rotation transformations, is crucial to identifying the multiscale community structure of a network.
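The link between the modularity matrix and community structure can be demonstrated on a toy graph. The snippet below is standard Newman-style spectral bisection (the graph is invented, and this shows the modularity matrix the abstract relates to its covariance matrix, not the paper's correlation-matrix refinement): form B = A - kk^T/2m and split nodes by the sign of its leading eigenvector.

```python
import numpy as np

# Two 4-node cliques joined by a single edge: an obvious two-community graph.
A = np.zeros((8, 8))
for block in (range(0, 4), range(4, 8)):
    for i in block:
        for j in block:
            if i != j:
                A[i, j] = 1.0
A[3, 4] = A[4, 3] = 1.0

k = A.sum(axis=1)                  # degrees
two_m = k.sum()                    # 2m = total degree
B = A - np.outer(k, k) / two_m     # modularity matrix

# Leading eigenvector of B: its sign pattern bisects the nodes.
w, V = np.linalg.eigh(B)
leading = V[:, np.argmax(w)]
labels = (leading > 0).astype(int)
print(labels)
```

The abstract's rescaling transformation would normalize B before this eigen-analysis so that heterogeneous degree distributions do not distort the detected scales.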

  3. Multiscale Simulation Framework for Coupled Fluid Flow and Mechanical Deformation

    Energy Technology Data Exchange (ETDEWEB)

    Tchelepi, Hamdi


    A multiscale linear-solver framework for the pressure equation associated with flow in highly heterogeneous porous formations was developed. The multiscale based approach is cast in a general algebraic form, which facilitates integration of the new scalable linear solver in existing flow simulators. The Algebraic Multiscale Solver (AMS) is employed as a preconditioner within a multi-stage strategy. The formulations investigated include the standard MultiScale Finite-Element (MSFE) andMultiScale Finite-Volume (MSFV) methods. The local-stage solvers include incomplete factorization and the so-called Correction Functions (CF) associated with the MSFV approach. Extensive testing of AMS, as an iterative linear solver, indicate excellent convergence rates and computational scalability. AMS compares favorably with advanced Algebraic MultiGrid (AMG) solvers for highly detailed three-dimensional heterogeneous models. Moreover, AMS is expected to be especially beneficial in solving time-dependent problems of coupled multiphase flow and transport in large-scale subsurface formations.

  4. Multi-Scale Pattern Recognition for Image Classification and Segmentation

    NARCIS (Netherlands)

    Li, Y.


    Scale is an important parameter of images. Different objects or image structures (e.g. edges and corners) can appear at different scales and each is meaningful only over a limited range of scales. Multi-scale analysis has been widely used in image processing and computer vision, serving as the basis

  5. On a multiscale approach for filter efficiency simulations

    KAUST Repository

    Iliev, Oleg


    Filtration in general, and the dead-end depth filtration of solid particles out of a fluid in particular, is an intrinsically multiscale problem. The deposition (capturing) of particles essentially depends on the local velocity, on the microgeometry (pore-scale geometry) of the filtering medium, and on the diameter distribution of the particles. The deposited (captured) particles change the microstructure of the porous medium, which leads to a change in permeability. The changed permeability directly influences the velocity field and pressure distribution inside the filter element. To close the loop, we note that the velocity influences the transport and deposition of particles. In certain cases one can evaluate the filtration efficiency considering only microscale or only macroscale models, but in general an accurate prediction of the filtration efficiency requires multiscale models and algorithms. This paper discusses the single-scale and multiscale models, and presents a fractional time step discretization algorithm for the multiscale problem. The velocity within the filter element is computed at the macroscale and is used as input for the solution of microscale problems at selected locations of the porous medium. The microscale problem is solved with respect to the transport and capture of individual particles, and its solution is postprocessed to provide permeability values for the macroscale computations. Results from computational experiments with an oil filter are presented and discussed.
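The macro-micro loop described above can be caricatured as a fractional-step iteration. In the toy 1D sketch below, every constitutive law (the per-segment capture fraction, the clogging law) is invented rather than taken from the paper: the macroscale step computes a Darcy-like velocity through segments in series, the microscale step captures particles segment by segment, and a postprocessing step turns deposits into reduced permeabilities that feed back into the next macroscale step.

```python
import numpy as np

# 1D dead-end filter split into segments, fixed pressure drop across it.
n_seg, dt, n_steps = 10, 0.1, 50
dp, c_in, eta = 1.0, 1.0, 0.2     # pressure drop, inlet concentration, capture fraction
K = np.full(n_seg, 1.0)           # segment permeabilities (macroscale input)
deposit = np.zeros(n_seg)         # deposited particle mass (microscale state)

history = []
for _ in range(n_steps):
    # macroscale step: resistances in series give the superficial velocity
    u = dp / np.sum(1.0 / K)
    # microscale step: each segment captures a fraction of incoming particles
    c = c_in
    for s in range(n_seg):
        captured = eta * c
        deposit[s] += captured * u * dt
        c -= captured                 # downstream segments see fewer particles
    # postprocessing step: deposits clog pores and reduce permeability
    K = 1.0 / (1.0 + deposit)
    history.append(u)

print(history[0], history[-1])
```

Running it shows the closed loop the abstract describes: deposits grow fastest near the inlet, permeability falls, and the through-flow velocity decays over time.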

  6. Fast 2D Simulation of Superconductors: a Multiscale Approach

    DEFF Research Database (Denmark)

    Rodriguez Zermeno, Victor Manuel; Sørensen, Mads Peter; Pedersen, Niels Falsig


    This work presents a method to calculate AC losses in thin conductors such as the commercially available second generation superconducting wires through a multiscale meshing technique. The main idea is to use large aspect ratio elements to accurately simulate thin material layers. For a single th...

  7. Multiscale analysis of structure development in expanded starch snacks (United States)

    van der Sman, R. G. M.; Broeze, J.


    In this paper we perform a multiscale analysis of the food structuring process during the expansion of starchy snack foods like keropok, which develop a solid foam structure. In particular, we investigate the validity of the hypothesis of Kokini and coworkers that expansion is optimal at the moisture content where the glass transition line and the boiling line intersect. In our analysis we make use of several tools: (1) time scale analysis from the field of physical transport phenomena, (2) the scale separation map (SSM) developed within a multiscale simulation framework of complex automata, (3) the supplemented state diagram (SSD), depicting phase transition and glass transition lines, and (4) a multiscale simulation model for bubble expansion. Results of the time scale analysis are plotted in the SSD, and give insight into the dominant physical processes involved in expansion. Furthermore, the results of the time scale analysis are used to construct the SSM, which has aided us in the construction of the multiscale simulation model. Simulation results are plotted in the SSD. This clearly shows that the hypothesis of Kokini is qualitatively true, but has to be refined. Our results show that bubble expansion is optimal at the moisture content where the boiling line for a gas pressure of 4 bar intersects the isoviscosity line of the critical viscosity 10^6 Pa·s, which runs parallel to the glass transition line.

  8. Uncertainty Quantification and Management for Multi-scale Nuclear Materials Modeling

    International Nuclear Information System (INIS)

    McDowell, David; Deo, Chaitanya; Zhu, Ting; Wang, Yan


    Understanding and improving microstructural mechanical stability in metals and alloys is central to the development of high-strength and high-ductility materials for cladding and core structures in advanced fast reactors. Design and enhancement of radiation-induced damage tolerant alloys are facilitated by a better understanding of the connection of various unit processes to collective responses in a multiscale model chain, including: dislocation nucleation, absorption and desorption at interfaces; vacancy production and radiation-induced segregation of Cr and Ni at defect clusters (point defect sinks) in BCC Fe-Cr ferritic/martensitic steels; investigation of the interaction of interstitials and vacancies with impurities (V, Nb, Ta, Mo, W, Al, Si, P, S); time evolution of swelling (cluster growth) phenomena in irradiated materials; and energetics and kinetics of dislocation bypass of defects formed by interstitial clustering and formation of prismatic loops, informing statistical models of continuum character with regard to processes of dislocation glide, vacancy agglomeration and swelling, climb and cross slip.

  9. Uncertainty Quantification and Management for Multi-scale Nuclear Materials Modeling

    Energy Technology Data Exchange (ETDEWEB)

    McDowell, David [Georgia Inst. of Technology, Atlanta, GA (United States); Deo, Chaitanya [Georgia Inst. of Technology, Atlanta, GA (United States); Zhu, Ting [Georgia Inst. of Technology, Atlanta, GA (United States); Wang, Yan [Georgia Inst. of Technology, Atlanta, GA (United States)


    Understanding and improving microstructural mechanical stability in metals and alloys is central to the development of high-strength and high-ductility materials for cladding and core structures in advanced fast reactors. Design and enhancement of radiation-induced damage tolerant alloys are facilitated by a better understanding of the connection of various unit processes to collective responses in a multiscale model chain, including: dislocation nucleation, absorption and desorption at interfaces; vacancy production and radiation-induced segregation of Cr and Ni at defect clusters (point defect sinks) in BCC Fe-Cr ferritic/martensitic steels; investigation of the interaction of interstitials and vacancies with impurities (V, Nb, Ta, Mo, W, Al, Si, P, S); time evolution of swelling (cluster growth) phenomena in irradiated materials; and energetics and kinetics of dislocation bypass of defects formed by interstitial clustering and formation of prismatic loops, informing statistical models of continuum character with regard to processes of dislocation glide, vacancy agglomeration and swelling, climb and cross slip.

  10. Study on Tensile Damage Constitutive Model for Multiscale Polypropylene Fiber Concrete

    Directory of Open Access Journals (Sweden)

    Ninghui Liang


    Polypropylene fibers perform well in roughness enhancement and corrosion resistance, and they can dissipate energy when cracks occur in concrete. Furthermore, they can improve the tensile properties of concrete by working synergistically with it. To study the tensile properties of multiscale polypropylene fiber concrete, the uniaxial tensile strength of 18 fiber-reinforced and 3 plain concrete specimens was experimentally tested using the paste steel method. The test results indicate that both the strength and the peak strain can be substantially improved. Based on the results, a tensile damage constitutive model was proposed and implemented in FLAC3D for numerical experimentation. The numerical results are consistent with the experimental observations in general, and some discrepancies are discussed.

  11. Depth estimation from multi-scale SLIC superpixels using non-parametric learning (United States)

    Jiang, Yifeng; Zhu, Yuesheng; Qing, Yin; Yang, Fan


    This study introduces a novel depth estimation method that can automatically generate a plausible depth map from a single image of an unstructured environment. Our goal is to extrapolate a depth map with a more correct, rich, and distinct depth order, one that is both quantitatively accurate and visually pleasing. Building on the preexisting DepthTransfer algorithm, our approach primarily transfers depth information at the level of superpixels from the most photometrically similar retrieval images under the framework of non-parametric learning. Subsequently, we propose to concurrently warp the corresponding superpixels at multiple scale levels, where we employ an improved SLIC technique to segment the RGBD images from coarse to fine. A modified cross bilateral filter is then leveraged to refine the final depth field. For training and evaluation, we perform our experiment on the popular Make3D dataset and demonstrate that our method outperforms the state-of-the-art in both efficacy and computational efficiency. In particular, the final results show that, in qualitative evaluation, our results are visually superior in realism and more immersive.

  12. Epileptic Seizure Classification of EEGs Using Time-Frequency Analysis Based Multiscale Radial Basis Functions. (United States)

    Li, Yang; Wang, Xu-Dong; Luo, Mei-Lin; Li, Ke; Yang, Xiao-Feng; Guo, Qi


    The automatic detection of epileptic seizures from electroencephalography (EEG) signals is crucial for the localization and classification of epileptic seizure activity. However, seizure processes are typically dynamic and nonstationary, so distinguishing rhythmic discharges from nonstationary processes is one of the challenging problems. In this paper, an adaptive and localized time-frequency representation of EEG signals is proposed by means of multiscale radial basis functions (MRBF) and a modified particle swarm optimization (MPSO) to improve both time and frequency resolution simultaneously: a novel MRBF-MPSO framework for time-frequency feature extraction from epileptic EEG signals. The dimensionality of the extracted features is greatly reduced by the principal component analysis algorithm before the most discriminative features selected are fed into a support vector machine (SVM) classifier with a radial basis function (RBF) kernel in order to separate epileptic seizure from seizure-free EEG signals. The classification performance of the proposed method has been evaluated against several state-of-the-art feature extraction algorithms and five other classifiers, such as linear discriminant analysis and logistic regression. The experimental results indicate that the proposed MRBF-MPSO-SVM classification method outperforms competing techniques in terms of classification accuracy, and show the effectiveness of the proposed method for classification of seizure epochs and seizure-free epochs.
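The dimensionality-reduction-then-classification stage can be sketched independently of the MRBF-MPSO feature extraction. Below, a NumPy PCA (via SVD) compresses toy stand-ins for time-frequency feature vectors, and a nearest-centroid rule stands in for the SVM classifier; all data, dimensions, and the classifier choice are invented for illustration.

```python
import numpy as np

def pca_reduce(X, n_components):
    """Project feature vectors onto the top principal components (via SVD)."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T

rng = np.random.default_rng(0)
# toy stand-ins for time-frequency feature vectors of two classes of epochs
seizure      = rng.standard_normal((50, 20)) + 2.0
seizure_free = rng.standard_normal((50, 20)) - 2.0
X = np.vstack([seizure, seizure_free])
y = np.array([1] * 50 + [0] * 50)

Z = pca_reduce(X, n_components=3)          # discriminative low-dim features

# nearest-centroid rule as a stand-in for the SVM stage
c1, c0 = Z[y == 1].mean(axis=0), Z[y == 0].mean(axis=0)
pred = (np.linalg.norm(Z - c1, axis=1) < np.linalg.norm(Z - c0, axis=1)).astype(int)
acc = (pred == y).mean()
print(acc)
```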

  13. Multiscale Geoscene Segmentation for Extracting Urban Functional Zones from VHR Satellite Images

    Directory of Open Access Journals (Sweden)

    Xiuyuan Zhang


    Urban functional zones, such as commercial, residential, and industrial zones, are basic units of urban planning and play an important role in monitoring urbanization. However, historical functional-zone maps are rarely available for cities in developing countries, as traditional urban investigations focus on geographic objects rather than functional zones. Recent studies have sought to extract functional zones automatically from very-high-resolution (VHR) satellite images, but they mainly concentrate on classification techniques and ignore zone segmentation, which delineates functional-zone boundaries and is fundamental to functional-zone analysis. To resolve this issue, this study presents a novel segmentation method, geoscene segmentation, which can identify functional zones at multiple scales by aggregating diverse urban objects according to their features and spatial patterns. In experiments, we applied this method to three Chinese cities—Beijing, Putian, and Zhuhai—and generated detailed functional-zone maps with diverse functional categories. These experimental results indicate that our method effectively delineates urban functional zones with VHR imagery; that different categories of functional zones are extracted by using different scale parameters; and that spatial patterns are more important than the features of individual objects in extracting functional zones. Accordingly, the presented multiscale geoscene segmentation method is important for urban-functional-zone analysis and can provide valuable data for city planners.

  14. Domain Decomposition Preconditioners for Multiscale Flows in High-Contrast Media

    KAUST Repository

    Galvis, Juan


    In this paper, we study domain decomposition preconditioners for multiscale flows in high-contrast media. We consider flow equations governed by elliptic equations in heterogeneous media with a large contrast in the coefficients. Our main goal is to develop domain decomposition preconditioners with the condition number that is independent of the contrast when there are variations within coarse regions. This is accomplished by designing coarse-scale spaces and interpolators that represent important features of the solution within each coarse region. The important features are characterized by the connectivities of high-conductivity regions. To detect these connectivities, we introduce an eigenvalue problem that automatically detects high-conductivity regions via a large gap in the spectrum. A main observation is that this eigenvalue problem has a few small, asymptotically vanishing eigenvalues. The number of these small eigenvalues is the same as the number of connected high-conductivity regions. The coarse spaces are constructed such that they span eigenfunctions corresponding to these small eigenvalues. These spaces are used within two-level additive Schwarz preconditioners as well as overlapping methods for the Schur complement to design preconditioners. We show that the condition number of the preconditioned systems is independent of the contrast. More detailed studies are performed for the case when the high-conductivity region is connected within coarse block neighborhoods. Our numerical experiments confirm the theoretical results presented in this paper. © 2010 Society for Industrial and Applied Mathematics.

  15. Unsupervised Transfer Learning via Multi-Scale Convolutional Sparse Coding for Biomedical Applications. (United States)

    Chang, Hang; Han, Ju; Zhong, Cheng; Snijders, Antoine M; Mao, Jian-Hua


    The capabilities of (I) learning transferable knowledge across domains and (II) fine-tuning pre-learned base knowledge towards tasks with a considerably smaller data scale are extremely important. Many existing transfer learning techniques are supervised approaches, among which deep learning has demonstrated the power of learning domain-transferable knowledge with large-scale networks trained on massive amounts of labeled data. However, in many biomedical tasks, both the data and the corresponding labels can be very limited, and the unsupervised transfer learning capability is urgently needed. In this paper, we propose a novel multi-scale convolutional sparse coding (MSCSC) method that (I) automatically learns filter banks at different scales in a joint fashion with enforced scale-specificity of learned patterns; and (II) provides an unsupervised solution for learning transferable base knowledge and fine-tuning it towards target tasks. Extensive experimental evaluation demonstrates the effectiveness of MSCSC in both regular and transfer learning tasks in various biomedical domains.

  16. Automatisms: bridging clinical neurology with criminal law. (United States)

    Rolnick, Joshua; Parvizi, Josef


    The law, like neurology, grapples with the relationship between disease states and behavior. Sometimes, the two disciplines share the same terminology, such as automatism. In law, the "automatism defense" is a claim that action was involuntary or performed while unconscious. Someone charged with a serious crime can acknowledge committing the act and yet may go free if, relying on the expert testimony of clinicians, the court determines that the act of crime was committed in a state of automatism. In this review, we explore the relationship between the use of automatism in the legal and clinical literature. We close by addressing several issues raised by the automatism defense: semantic ambiguity surrounding the term automatism, the presence or absence of consciousness during automatisms, and the methodological obstacles that have hindered the study of cognition during automatisms. Copyright © 2010 Elsevier Inc. All rights reserved.

  17. Mixing in 3D Sparse Multi-Scale Grid Generated Turbulence (United States)

    Usama, Syed; Kopec, Jacek; Tellez, Jackson; Kwiatkowski, Kamil; Redondo, Jose; Malik, Nadeem


    Flat 2D fractal grids are known to alter turbulence characteristics downstream of the grid compared to regular grids with the same blockage ratio and the same mass inflow rates [1]. This has excited interest in the turbulence community in possible exploitation for enhanced mixing and related applications. Recently, a new 3D multi-scale grid design has been proposed [2] in which each generation of length scales of the turbulence grid elements is held in its own frame; the overall effect is a 3D co-planar arrangement of grid elements. This produces a 'sparse' grid system whereby each generation of grid elements produces a turbulent wake pattern that interacts with the other wake patterns downstream. A critical motivation here is that the effective blockage ratio in the 3D Sparse Grid Turbulence (3DSGT) design is significantly lower than in the flat 2D counterpart: typically, the blockage ratio could be reduced from, say, 20% in 2D down to 4% in the 3DSGT. If this idea can be realized in practice, it could greatly enhance the efficiency of turbulent mixing and transfer processes, with many possible applications. Work has begun on the 3DSGT experimentally, using Surface Flow Image Velocimetry (SFIV) [3] at the European facility in the Max Planck Institute for Dynamics and Self-Organization in Gottingen, Germany and at the Technical University of Catalonia (UPC) in Spain, and numerically, using Direct Numerical Simulation (DNS) at King Fahd University of Petroleum & Minerals (KFUPM) in Saudi Arabia and at the University of Warsaw in Poland. DNS is the most useful method against which to compare the experimental results, and we are studying different codes such as Incompact3d and OpenFOAM. Many variables will eventually be investigated for optimal mixing conditions, for example the number of scale generations, the spacing between frames, the size ratio of grid elements, and the inflow conditions. We will report upon the first set of findings.

  18. Foundations for a multiscale collaborative Earth model

    KAUST Repository

    Afanasiev, M.


    of the CSEM development, the broad global updates mostly act to remove artefacts from the assembly of the initial CSEM. During the future evolution of the CSEM, the reference data set will be used to account for the influence of small-scale refinements on large-scale global structure. The CSEM as a computational framework is intended to help bridge the gap between local, regional and global tomography, and to contribute to the development of a global multiscale Earth model. While the current construction serves as a first proof of concept, future refinements and additions will require community involvement, which is welcome already at this stage.

  19. Multiscale decomposition for heterogeneous land-atmosphere systems (United States)

    Liu, Shaofeng; Shao, Yaping; Hintz, Michael; Lennartz-Sassinek, Sabine


    The land-atmosphere system is characterized by pronounced land surface heterogeneity and vigorous atmospheric turbulence, both covering a wide range of scales. The multiscale surface heterogeneities and multiscale turbulent eddies interact nonlinearly with each other. Understanding these multiscale processes quantitatively is essential for the subgrid parameterizations of weather and climate models. In this paper, we propose a method for surface heterogeneity quantification and turbulence structure identification. The first part of the method is an orthogonal transform in the probability density function (PDF) domain, in contrast to orthogonal wavelet transforms, which are performed in physical space. As the basis of the whole method, the orthogonal PDF transform (OPT) is used to asymptotically reconstruct the original signals by representing the signal values with multilevel approximations. The "patch" idea is then applied to these reconstructed fields in order to recognize areas at the land surface or in turbulent flows that share the same characteristics. A patch here is a connected area with the same approximation. For each recognized patch, a length scale is then defined to build the energy spectrum. The OPT and the related energy spectrum analysis, referred to as a whole as the orthogonal PDF decomposition (OPD), are applied to two-dimensional heterogeneous land surfaces and atmospheric turbulence fields as a test. The results show that, compared to wavelet transforms, the OPD can reconstruct the original signal more effectively, and accordingly, its energy spectrum represents the signal's multiscale variation more accurately. The method we propose in this paper is of a general nature and can therefore be of interest for problems of multiscale process description in other geophysical disciplines.
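    As a toy illustration of the patch idea described above (a minimal 1D sketch; the function names and two-level example are our own, and the actual OPT uses orthogonal multilevel approximations in the PDF domain):

```python
def quantile_levels(signal, n_levels):
    """Pick one representative value per quantile bin of the signal's PDF."""
    s = sorted(signal)
    step = len(s) / n_levels
    return [s[int((k + 0.5) * step)] for k in range(n_levels)]

def approximate(signal, levels):
    """Multilevel approximation: map each value to its nearest level."""
    return [min(levels, key=lambda v: abs(v - x)) for x in signal]

def patches(approx):
    """A patch is a maximal run of identical approximations; return (value, length)."""
    runs = []
    for x in approx:
        if runs and runs[-1][0] == x:
            runs[-1] = (x, runs[-1][1] + 1)
        else:
            runs.append((x, 1))
    return runs

signal = [0.1, 0.2, 0.15, 2.0, 2.1, 1.9, 0.1, 0.05]
levels = quantile_levels(signal, 2)
print(patches(approximate(signal, levels)))  # three patches, length scales 3, 3, 2
```

    The patch lengths returned here are the length scales from which the method's energy spectrum would be built.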

  20. The Multiscale Robin Coupled Method for flows in porous media (United States)

    Guiraldello, Rafael T.; Ausas, Roberto F.; Sousa, Fabricio S.; Pereira, Felipe; Buscaglia, Gustavo C.


    A multiscale mixed method aiming at the accurate approximation of velocity and pressure fields in heterogeneous porous media is proposed. The procedure is based on a new domain decomposition method in which the local problems are subject to Robin boundary conditions. The domain decomposition procedure is defined in terms of two independent spaces on the skeleton of the decomposition, corresponding to interface pressures and fluxes, that can be chosen with great flexibility to accommodate local features of the underlying permeability fields. The well-posedness of the new domain decomposition procedure is established, and its connection with the method of Douglas et al. (1993) [12] is identified, also allowing us to reinterpret the known procedure as an optimized Schwarz (or Two-Lagrange-Multiplier) method. The multiscale property of the new domain decomposition method is indicated, and its relation with the Multiscale Mortar Mixed Finite Element Method (MMMFEM) and the Multiscale Hybrid-Mixed (MHM) Finite Element Method is discussed. Numerical simulations are presented to illustrate several features of the new method. Initially we illustrate the possibility of switching from the MMMFEM to the MHM by suitably varying the Robin condition parameter in the new multiscale method. Then we turn our attention to realistic flows in high-contrast, channelized porous formations. We show that for a range of values of the Robin condition parameter our method provides better approximations for pressure and velocity than those computed with either the MMMFEM or the MHM. This is an indication that our method has the potential to produce more accurate velocity fields in the presence of rough, realistic permeability fields of petroleum reservoirs.

  1. Multiscale model reduction for shale gas transport in fractured media

    KAUST Repository

    Akkutlu, I. Y.


    In this paper, we develop a multiscale model reduction technique that describes shale gas transport in fractured media. Due to the pore-scale heterogeneities and processes, we use upscaled models to describe the matrix. We follow our previous work (Akkutlu et al. Transp. Porous Media 107(1), 235–260, 2015), where we derived an upscaled model in the form of a generalized nonlinear diffusion model to describe the effects of kerogen. To model the interaction between the matrix and the fractures, we use the Generalized Multiscale Finite Element Method (Efendiev et al. J. Comput. Phys. 251, 116–135, 2013, 2015). In this approach, the matrix and fracture interaction is modeled via local multiscale basis functions. In Efendiev et al. (2015), we developed the GMsFEM and applied it to linear flows with horizontal or vertical fracture orientations aligned with a Cartesian fine grid. The approach in Efendiev et al. (2015) does not allow handling arbitrary fracture distributions. In this paper, we (1) consider arbitrary fracture distributions on an unstructured grid; (2) develop GMsFEM for nonlinear flows; and (3) develop online basis function strategies to adaptively improve the convergence. The number of multiscale basis functions in each coarse region represents the degrees of freedom needed to achieve a certain error threshold. Our approach is adaptive in the sense that the multiscale basis functions can be added in the regions of interest. Numerical results for a two-dimensional problem are presented to demonstrate the efficiency of the proposed approach. © 2016 Springer International Publishing Switzerland

  2. Multiscale stabilization for convection-dominated diffusion in heterogeneous media

    KAUST Repository

    Calo, Victor M.


    We develop a Petrov-Galerkin stabilization method for multiscale convection-diffusion transport systems. Existing stabilization techniques add a limited number of degrees of freedom in the form of bubble functions or a modified diffusion, which may not be sufficient to stabilize multiscale systems. We seek a local reduced-order model for this kind of multiscale transport problem and thus develop a systematic approach for finding reduced-order approximations of the solution. We start from a Petrov-Galerkin framework using optimal weighting functions. We introduce an auxiliary variable to a mixed formulation of the problem. The auxiliary variable stands for the optimal weighting function. The problem reduces to finding a test space (a dimensionally reduced space for this auxiliary variable), which guarantees that the error in the primal variable (representing the solution) is close to the projection error of the full solution on the dimensionally reduced space that approximates the solution. To find the test space, we reformulate some recent mixed Generalized Multiscale Finite Element Methods. We introduce snapshots and local spectral problems that appropriately define local weight and trial spaces. In particular, we use energy-minimizing snapshots and local spectral decompositions in the natural norm associated with the auxiliary variable. The resulting spectral decomposition adaptively identifies and builds the optimal multiscale space to stabilize the system. We discuss the stability and its relation to the approximation property of the test space. We design online basis functions, which accelerate convergence in the test space and, consequently, improve stability. We present several numerical examples and show that one needs only a few test functions to achieve an error similar to the projection error in the primal variable, irrespective of the Péclet number.

  3. Automatic detection of spiculation of pulmonary nodules in computed tomography images (United States)

    Ciompi, F.; Jacobs, C.; Scholten, E. T.; van Riel, S. J.; W. Wille, M. M.; Prokop, M.; van Ginneken, B.


    We present a fully automatic method for the assessment of spiculation of pulmonary nodules in low-dose Computed Tomography (CT) images. Spiculation is considered one of the indicators of nodule malignancy and an important feature to assess in order to decide on a patient-tailored follow-up procedure. For this reason, a lung cancer screening scenario would benefit from a fully automatic system for the assessment of spiculation. The presented framework relies on the fact that spiculated nodules differ from non-spiculated ones mainly in their morphology. In order to discriminate the two categories, information on morphology is captured by sampling intensity profiles along circular patterns on spherical surfaces centered on the nodule, in a multi-scale fashion. Each intensity profile is interpreted as a periodic signal, to which the Fourier transform is applied to obtain a spectrum. A library of spectra is created by clustering data via unsupervised learning. The centroids of the clusters are used to label each spectrum in the sampling pattern. A compact descriptor encoding the nodule morphology is obtained as the histogram of labels over all the spherical surfaces and used to classify spiculated nodules via supervised learning. We tested our approach on a set of nodules from the Danish Lung Cancer Screening Trial (DLCST) dataset. Our results show that the proposed method outperforms other 3-D descriptors of morphology in the automatic assessment of spiculation.
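    The spectral step of such a descriptor can be sketched in miniature (our own illustration only; the actual pipeline samples profiles on spherical surfaces and builds a clustered spectrum library): a spiculated margin injects high-frequency energy into the profile's spectrum, while a smooth margin concentrates energy in low harmonics.

```python
import math

def spectrum(profile):
    """Magnitude spectrum of an intensity profile treated as one period (naive DFT)."""
    n = len(profile)
    mags = []
    for k in range(n // 2 + 1):
        re = sum(x * math.cos(2 * math.pi * k * i / n) for i, x in enumerate(profile))
        im = sum(x * math.sin(2 * math.pi * k * i / n) for i, x in enumerate(profile))
        mags.append(math.hypot(re, im) / n)
    return mags

def high_freq_fraction(profile, cutoff=2):
    """Share of non-DC spectral energy above the cutoff harmonic."""
    m = spectrum(profile)
    return sum(m[cutoff + 1:]) / (sum(m[1:]) or 1.0)

smooth = [math.cos(2 * math.pi * i / 36) for i in range(36)]  # round, smooth margin
spiky = [1.0 if i % 6 == 0 else 0.0 for i in range(36)]       # spicule-like spikes
```

    In this toy example nearly all of the smooth profile's energy sits in the first harmonic, while the spiky profile's energy lands in higher harmonics, which is the kind of contrast the clustered spectra encode.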

  4. Multiscale Roughness Influencing on Transport Behavior of Passive Solute through a Single Self-affine Fracture (United States)

    Dou, Z.


    In this study, the influence of multi-scale roughness on the transport behavior of a passive solute through a self-affine fracture was investigated. The single self-affine fracture was constructed by successive random additions (SRA), and the fracture roughness was decomposed into two scales (large-scale primary roughness and small-scale secondary roughness) by wavelet analysis. The fluid flow in the fractures, which was characterized by Forchheimer's law, showed non-linear flow behaviors such as eddies and tortuous streamlines. The results indicated that the small-scale secondary roughness was primarily responsible for the non-linear flow behaviors. Direct simulations of asymptotic passive solute transport reproduced the non-Fickian transport characteristics (early arrivals and long tails) in breakthrough curves (BTCs) and residence time distributions (RTDs) both with and without the secondary roughness. Analysis of the multiscale BTCs and RTDs showed that the small-scale secondary roughness played a significant role in enhancing the non-Fickian transport characteristics: removing it lengthened the arrival time and shortened the tail. The peak concentration in the BTCs decreased as the secondary roughness was removed, implying that the secondary roughness could also enhance solute dilution. BTCs estimated by the Fickian advection-dispersion equation (ADE) yielded errors, which decreased as the small-scale secondary roughness was removed. The mobile-immobile model (MIM) was implemented as an alternative to characterize the non-Fickian transport and proved more capable of estimating non-Fickian BTCs. The small-scale secondary roughness resulted in a decreasing mobile domain fraction and an increasing mass exchange rate between the immobile and mobile domains. The parameters estimated from the MIM could provide insight into the inherent mechanism of roughness
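    For context, the Fickian baseline against which such tailing is judged can be sketched with the leading Ogata-Banks term for continuous injection into a semi-infinite 1D column (our own sketch; the parameter values are illustrative and not from this study):

```python
import math

def ade_btc(x, t, v, D):
    """Leading Ogata-Banks term: C/C0 at position x, time t, velocity v, dispersion D."""
    return 0.5 * math.erfc((x - v * t) / (2.0 * math.sqrt(D * t)))

# Breakthrough curve observed at x = 1 for illustrative v and D:
times = [t / 10 for t in range(1, 201)]
btc = [ade_btc(1.0, t, v=1.0, D=0.05) for t in times]
```

    The Fickian BTC rises monotonically and saturates without a long tail, which is why early arrivals and persistent tails in the simulated BTCs signal non-Fickian transport and motivate the MIM.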

  5. Self-Compassion and Automatic Thoughts (United States)

    Akin, Ahmet


    The aim of this research is to examine the relationships between self-compassion and automatic thoughts. Participants were 299 university students. In this study, the Self-compassion Scale and the Automatic Thoughts Questionnaire were used. The relationships between self-compassion and automatic thoughts were examined using correlation analysis…

  6. Automatic TLI recognition system. Part 1: System description

    Energy Technology Data Exchange (ETDEWEB)

    Partin, J.K.; Lassahn, G.D.; Davidson, J.R.


    This report describes an automatic target recognition system for fast screening of large amounts of multi-sensor image data, based on low-cost parallel processors. This system uses image data fusion and gives uncertainty estimates. It is relatively low cost, compact, and transportable. The software is easily enhanced to expand the system's capabilities, and the hardware is easily expandable to increase the system's speed. This volume gives a general description of the ATR system.

  7. Multi-scale evolution of a derecho-producing MCS (United States)

    Bernardet, Ligia Ribeiro


    In this dissertation we address one type of severe weather: strong straight-line winds. In particular, we focus on derechos, a type of wind storm caused by a convective system and characterized by its long duration and by the large area it covers. One interesting characteristic of these storms is that they develop at night, on the cold side of a thermal boundary. This region is not characterized by large convective instability; in fact, surface parcels are generally stable with respect to vertical displacements. To gain understanding of the physical processes involved in these storms, we focused on the case of an MCS that developed in eastern Colorado on 12-13 May 1985. The system formed in the afternoon, was active until early morning, and caused strong winds during the night. A multi-scale full-physics simulation of this case was performed using a non-hydrostatic mesoscale model. Four telescopically nested grids covering from the synoptic scale down to cloud-scale circulations were used. A Lagrangian model was used to follow trajectories of parcels that took part in the updraft and in the downdraft, and force balances were computed along the trajectories. Our results show that the synoptic and mesoscale environment of the storm largely influences convective organization and cloud-scale circulations. During the day, when the boundary layer is well mixed, the source of air for the clouds is located within the boundary layer. At night, when the boundary layer becomes stable, the source of air shifts to the top of the boundary layer. It is composed of warm, moist air that is brought by the nocturnal low-level jet. The downdraft structure also changes from day to night. During the day, parcels acquire negative buoyancy because of cooling due to evaporation and melting. As they sink, they remain colder than the environment, and end up at the surface constituting the cold pool. During the night, downdrafts are stronger, generating the strong surface winds.
The most

  8. Automatic schema evolution in Root

    International Nuclear Information System (INIS)

    Brun, R.; Rademakers, F.


    ROOT version 3 (spring 2001) supports automatic class schema evolution. In addition, this version produces files that are self-describing. This is achieved by storing in each file a record with the description of all the persistent classes in the file. Being self-describing guarantees that a file can always be read later, its structure browsed and its objects inspected, even when the library with the compiled code of these classes is missing. The schema evolution mechanism supports the frequent case in which multiple data sets generated with many different class versions must be analyzed in the same session. ROOT also supports the automatic generation of C++ code describing the data objects in a file.

  9. Automatic design of magazine covers (United States)

    Jahanian, Ali; Liu, Jerry; Tretter, Daniel R.; Lin, Qian; Damera-Venkata, Niranjan; O'Brien-Strain, Eamonn; Lee, Seungyon; Fan, Jian; Allebach, Jan P.


    In this paper, we propose a system for automatic design of magazine covers that quantifies a number of concepts from art and aesthetics. Our solution to automatic design of this type of media has been shaped by input from professional designers, magazine art directors and editorial boards, and journalists. Consequently, a number of principles in design and rules in designing magazine covers are delineated. Several techniques are derived and employed in order to quantify and implement these principles and rules in the format of a software framework. At this stage, our framework divides the task of design into three main modules: layout of magazine cover elements, choice of color for masthead and cover lines, and typography of cover lines. Feedback from professional designers on our designs suggests that our results are congruent with their intuition.

  10. Automatic computation of radioimmunoassay data

    International Nuclear Information System (INIS)

    Toyota, Takayoshi; Kudo, Mikihiko; Abe, Kanji; Kawamata, Fumiaki; Uehata, Shigeru.


    Radioimmunoassay provides dose-response curves that become linear under the logistic transformation (Rodbard). This transformation, which is applicable to radioimmunoassay, should be useful for the computer processing of insulin and C-peptide assays. In the present studies, standard curves were analysed by testing the fit of analytic functions to radioimmunoassays of insulin and C-peptides. A program for use in combination with the double antibody technique was written by Dr. Kawamata. This approach was shown to be useful for allowing automatic computation of data derived from double antibody assays of insulin and C-peptides. Automatic corrected calculation of radioimmunoassay data for insulin was found to be satisfactory. (auth.)

  11. Physics of Automatic Target Recognition

    CERN Document Server

    Sadjadi, Firooz


    Physics of Automatic Target Recognition addresses the fundamental physical bases of sensing and information extraction in the state-of-the-art automatic target recognition field. It explores both passive and active multispectral sensing, polarimetric diversity, complex signature exploitation, sensor and processing adaptation, transformation of electromagnetic and acoustic waves in their interactions with targets, background clutter, transmission media, and sensing elements. The general inverse scattering and advanced signal processing techniques and scientific evaluation methodologies used in this multidisciplinary field are part of this exposition. The issues of modeling target signatures in various spectral modalities (LADAR, IR, SAR, high-resolution radar, acoustic, seismic, visible, hyperspectral) and in diverse geometric aspects are addressed. The methods for signal processing and classification cover concepts such as sensor-adaptive and artificial neural networks, time reversal filt...

  12. MOS voltage automatic tuning circuit


    李, 田茂; 中田, 辰則; 松本, 寛樹


    An automatic tuning circuit adjusts frequency performance to compensate for process variation. A phase-locked loop (PLL) is a suitable oscillator for an integrated circuit: it is a feedback system that compares the input phase with the output phase and can make the output frequency equal to the input frequency. In this paper, a PLL formed of MOSFETs is presented. The presented circuit consists of an XOR circuit, a low-pass filter and a relaxation oscillator. On PSPICE simulation...

  13. Automatic Detection of Fake News


    Pérez-Rosas, Verónica; Kleinberg, Bennett; Lefevre, Alexandra; Mihalcea, Rada


    The proliferation of misleading information in everyday access media outlets such as social media feeds, news blogs, and online newspapers has made it challenging to identify trustworthy news sources, thus increasing the need for computational tools able to provide insights into the reliability of online content. In this paper, we focus on the automatic identification of fake content in online news. Our contribution is twofold. First, we introduce two novel datasets for the task of fake news...

  14. Automatic controller at associated memory

    International Nuclear Information System (INIS)

    Courty, P.


    Organized around an A2-type controller, this CAMAC device allows the associated computer, on command, to start reading 64K 16-bit words into an external memory. This memory is fully controlled by the computer. In the automatic mode, which works at 10^6 words/sec, the computer can access any other module of the same crate by cycle stealing.

  15. Automatic Guidance for Remote Manipulator (United States)

    Johnston, A. R.


    Position sensor and mirror guides manipulator toward object. Grasping becomes automatic when sensor begins to receive signal from reflector on object to be manipulated. Light-emitting diodes on manipulator produce light signals for reflector, which is composite of plane and corner reflectors. Proposed scheme especially useful when manipulator arm tends to flex or when object is moving. Sensor and microprocessor designed to compensate for manipulator-arm oscillation.

  16. Annual review in automatic programming

    CERN Document Server

    Halpern, Mark I; Bolliet, Louis


    Computer Science and Technology and their Application is an eight-chapter book that first presents a tutorial on database organization. Subsequent chapters describe the general concepts of the Simula 67 programming language; incremental compilation and conversational interpretation; dynamic syntax; and ALGOL 68. Other chapters discuss a general-purpose conversational system for graphical programming and automatic theorem proving based on resolution. A survey of extensible programming languages is also presented.

  17. Automatically-Programed Machine Tools (United States)

    Purves, L.; Clerman, N.


    Software produces cutter location files for numerically-controlled machine tools. APT, an acronym for Automatically Programmed Tools, is among the most widely used software systems for computerized machine tools. APT was developed for the explicit purpose of providing an effective software system for programming NC machine tools. The APT system includes the specification of the APT programming language and a language processor, which executes APT statements and generates the NC machine-tool motions they specify.

  18. Automatic computation of transfer functions (United States)

    Atcitty, Stanley; Watson, Luke Dale


    Technologies pertaining to the automatic computation of transfer functions for a physical system are described herein. The physical system is one of an electrical system, a mechanical system, an electromechanical system, an electrochemical system, or an electromagnetic system. A netlist in the form of a matrix comprises data that is indicative of elements in the physical system, values for the elements in the physical system, and structure of the physical system. Transfer functions for the physical system are computed based upon the netlist.
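    As a toy illustration of computing a transfer function from a nodal description (our own sketch, not the method described in this record): for a single-node RC low-pass filter, the nodal admittance equation solves directly for H(s).

```python
def rc_lowpass_tf(R, C, s):
    """Nodal (admittance) formulation: (G + sC) * v = G * vin with G = 1/R,
    so H(s) = v/vin = G / (G + sC) = 1 / (1 + sRC)."""
    G = 1.0 / R
    return G / (G + s * C)

R, C = 1e3, 1e-6                 # 1 kOhm, 1 uF -> corner at w_c = 1/(RC) = 1000 rad/s
w_c = 1.0 / (R * C)
H = rc_lowpass_tf(R, C, 1j * w_c)
print(abs(H))                    # ~0.707, i.e. -3 dB at the corner frequency
```

    In a full netlist-based approach the scalar admittance here generalizes to the system matrix assembled from the netlist, and the transfer function is obtained by solving that linear system at each s of interest.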

  19. Mathematical and Numerical Analyses of Peridynamics for Multiscale Materials Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Du, Qiang [Pennsylvania State Univ., State College, PA (United States)


    The rational design of materials, the development of accurate and efficient material simulation algorithms, and the determination of the response of materials to environments and loads occurring in practice all require an understanding of mechanics at disparate spatial and temporal scales. The project addresses mathematical and numerical analyses for material problems for which relevant scales range from those usually treated by molecular dynamics all the way up to those most often treated by classical elasticity. The prevalent approach towards developing a multiscale material model couples two or more well known models, e.g., molecular dynamics and classical elasticity, each of which is useful at a different scale, creating a multiscale multi-model. However, the challenges behind such a coupling are formidable and largely arise because the atomistic and continuum models employ nonlocal and local models of force, respectively. The project focuses on a multiscale analysis of the peridynamics materials model. Peridynamics can be used as a transition between molecular dynamics and classical elasticity so that the difficulties encountered when directly coupling those two models are mitigated. In addition, in some situations, peridynamics can be used all by itself as a material model that accurately and efficiently captures the behavior of materials over a wide range of spatial and temporal scales. Peridynamics is well suited to these purposes because it employs a nonlocal model of force, analogous to that of molecular dynamics; furthermore, at sufficiently large length scales and assuming smooth deformation, peridynamics can be approximated by classical elasticity. The project will extend the emerging mathematical and numerical analysis of peridynamics. One goal is to develop a peridynamics-enabled multiscale multi-model that potentially provides a new and more extensive mathematical basis for coupling classical elasticity and molecular dynamics, thus enabling next

  20. Multiscale interpretation of taut string estimation and its connection to Unbalanced Haar wavelets


    Cho, Haeran; Fryzlewicz, Piotr


    We compare two state-of-the-art non-linear techniques for nonparametric function estimation via piecewise constant approximation: the taut string and the Unbalanced Haar methods. While it is well-known that the latter is multiscale, it is not obvious that the former can also be interpreted as multiscale. We provide a unified multiscale representation for both methods, which offers an insight into the relationship between them as well as suggesting lessons both methods can learn from each other.
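    The Unbalanced Haar ingredient can be conveyed with a minimal sketch (our own illustration; the function name and data are invented): the first UH basis vector places its breakpoint where the unit-norm-weighted mean contrast (the UH coefficient) is largest, and the multiscale construction then recurses on each segment.

```python
import math

def best_uh_split(x):
    """Return (b, coeff): the breakpoint of the Unbalanced Haar vector with the
    largest coefficient, i.e. the strongest mean contrast between x[:b] and x[b:]."""
    n = len(x)
    best_b, best_c = 1, 0.0
    for b in range(1, n):
        weight = math.sqrt(b * (n - b) / n)     # makes the UH vector unit-norm
        contrast = sum(x[:b]) / b - sum(x[b:]) / (n - b)
        if abs(weight * contrast) > abs(best_c):
            best_b, best_c = b, weight * contrast
    return best_b, best_c

x = [0.0, 0.1, -0.1, 5.0, 5.2, 4.9]
b, coeff = best_uh_split(x)
print(b)  # 3: the breakpoint lands on the true jump
```

    Unlike classical Haar wavelets, the breakpoint need not sit at a dyadic midpoint, which is what makes the basis "unbalanced" and well suited to piecewise-constant estimation.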

  1. A multiscale dataset for understanding complex eco-hydrological processes in a heterogeneous oasis system


    Li, Xin; Liu, Shaomin; Xiao, Qin; Ma, Mingguo; Jin, Rui; Che, Tao; Wang, Weizhen; Hu, Xiaoli; Xu, Ziwei; Wen, Jianguang; Wang, Liangxu


    We introduce a multiscale dataset obtained from Heihe Watershed Allied Telemetry Experimental Research (HiWATER) in an oasis-desert area in 2012. Upscaling of eco-hydrological processes on a heterogeneous surface is a grand challenge. Progress in this field is hindered by the poor availability of multiscale observations. HiWATER is an experiment designed to address this challenge through instrumentation on hierarchically nested scales to obtain multiscale and multidisciplinary data. The HiWAT...

  2. Data Services and Transnational Access for European Geosciences Multi-Scale Laboratories (United States)

    Funiciello, Francesca; Rosenau, Matthias; Sagnotti, Leonardo; Scarlato, Piergiorgio; Tesei, Telemaco; Trippanera, Daniele; Spires, Chris; Drury, Martyn; Kan-Parker, Mirjam; Lange, Otto; Willingshofer, Ernst


    EC research policy in the new millennium supports the development of European-scale research infrastructures. In this perspective, existing research infrastructures are being integrated with the objective of increasing their accessibility and enhancing the usability of their multidisciplinary data. Building integrated Earth Sciences infrastructures in Europe is the mission of the Implementation Phase (IP) of the European Plate Observing System (EPOS) project (2015-2019). The integration of European multiscale laboratories - analytical, experimental petrology and volcanology, magnetic and analogue laboratories - plays a key role in this context and represents a specific task of EPOS IP. In the frame of EPOS IP work package 16 (WP16), European geosciences multiscale laboratories are to be linked, merging local infrastructures into a coherent and collaborative network. In particular, WP16 task 4, "Data services", aims to standardize data and data products, both already existing and newly produced by the participating laboratories, and to make them available through a new digital platform.
The following data and repositories have been selected for the purpose: 1) analytical and properties data a) on volcanic ash from explosive eruptions, of interest to the aviation industry, meteorological and government institutes, b) on magmas in the context of eruption and lava flow hazard evaluation, and c) on rock systems of key importance in mineral exploration and mining operations; 2) experimental data describing: a) rock and fault properties of importance for modelling and forecasting natural and induced subsidence, seismicity and associated hazards, b) rock and fault properties relevant for modelling the containment capacity of rock systems for CO2, energy sources and wastes, c) crustal and upper mantle rheology as needed for modelling sedimentary basin formation and crustal stress distributions, d) the composition, porosity, permeability, and

  3. Current progress in tissue engineering of heart valves: multiscale problems, multiscale solutions. (United States)

    Cheung, Daniel Y; Duan, Bin; Butcher, Jonathan T


    Heart valve disease is an increasingly prevalent and clinically serious condition. There are no clinically effective biological diagnostics or treatment strategies. The only recourse available is replacement with a prosthetic valve, but the inability of these devices to grow or respond biologically to their environments necessitates multiple resizing surgeries and life-long coagulation treatment, especially in children. Tissue engineering has a unique opportunity to impact heart valve disease by providing a living valve conduit, capable of growth and biological integration. This review will cover current tissue engineering strategies in fabricating heart valves and their progress towards the clinic, including molded scaffolds using naturally derived or synthetic polymers, decellularization, electrospinning, 3D bioprinting, hybrid techniques, and in vivo engineering. Whereas much progress has been made to create functional living heart valves, a clinically viable product is not yet realized. The next leap in engineered living heart valves will require a deeper understanding of how the natural multi-scale structural and biological heterogeneity of the tissue ensures its efficient function. Related, improved fabrication strategies must be developed that can replicate this de novo complexity, which is likely instructive for appropriate cell differentiation and remodeling whether seeded with autologous stem cells in vitro or endogenously recruited cells.

  4. Multivariate Multiscale Entropy Applied to Center of Pressure Signals Analysis: An Effect of Vibration Stimulation of Shoes

    Directory of Open Access Journals (Sweden)

    Jiann-Shing Shieh


    Full Text Available Falls are unpredictable accidents, and the resulting injuries can be serious for the elderly. A preventative solution can be the use of a white-noise vibration stimulus to improve the sense of balance. In this work, a pair of vibration shoes was developed, controlled by a touch-type switch, which can generate mechanical vibration noise to stimulate the patient's feet while the shoes are worn. In order to evaluate the balance stability and treatment effect of the vibrating insoles in these shoes, the multivariate multiscale entropy (MMSE) algorithm is applied to calculate the relative complexity index of center of pressure (COP) signals reconstructed in the antero-posterior and medio-lateral directions by multivariate empirical mode decomposition (MEMD). The results show that the balance stability of 61.5% of the elderly subjects improved after wearing the developed shoes, compared with 30.8% when assessed using multiscale entropy. In conclusion, MEMD-enhanced MMSE is able to distinguish the smaller differences between before and after the use of the vibration shoes in both directions, and is thus more powerful than empirical mode decomposition (EMD)-enhanced MSE applied to each individual direction.
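    The univariate building block of such multiscale entropy analyses, coarse-graining followed by sample entropy, can be sketched as follows (our own illustration; MMSE extends this pattern across the MEMD-reconstructed COP components, and the parameters m and r here are illustrative):

```python
import math

def coarse_grain(signal, scale):
    """Average non-overlapping windows of length `scale` (the 'multiscale' step)."""
    return [sum(signal[i:i + scale]) / scale
            for i in range(0, len(signal) - scale + 1, scale)]

def sample_entropy(signal, m=2, r=0.2):
    """SampEn = -ln(A/B), where A and B count template matches of length
    m+1 and m under a Chebyshev tolerance r."""
    def matches(length):
        count = 0
        n = len(signal) - length + 1
        for i in range(n):
            for j in range(i + 1, n):
                if all(abs(signal[i + k] - signal[j + k]) <= r for k in range(length)):
                    count += 1
        return count
    b, a = matches(m), matches(m + 1)
    return float("inf") if a == 0 or b == 0 else -math.log(a / b)

periodic = [float(i % 2) for i in range(200)]   # perfectly regular signal
x, chaotic = 0.4, []
for _ in range(200):                            # logistic map: irregular signal
    x = 3.9 * x * (1.0 - x)
    chaotic.append(x)
```

    A regular signal yields near-zero sample entropy while an irregular one yields a clearly positive value; plotting this across coarse-graining scales gives the complexity profile that the MMSE index summarizes.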

  5. Reducing the computational requirements for simulating tunnel fires by combining multiscale modelling and multiple processor calculation

    DEFF Research Database (Denmark)

    Vermesi, Izabella; Rein, Guillermo; Colella, Francesco


    in FDS version 6.0, a widely used fire-specific, open-source CFD software. Furthermore, it compares the reduction in simulation time given by multiscale modelling with the one given by the use of multiple processor calculation. This was done using a 1200 m long tunnel with a rectangular cross...... processor calculation (97% faster when using a single mesh and multiscale modelling; only 46% faster when using the full tunnel and multiple meshes). In summary, it was found that multiscale modelling with FDS v.6.0 is feasible, and the combination of multiple meshes and multiscale modelling was established...


    Directory of Open Access Journals (Sweden)

    Y. Jia


    Full Text Available Change detection of remote sensing images means quantitatively analysing the change information and recognizing the change types of the surface coverage data in different time phases. With the advent of high resolution remote sensing images, object-oriented change detection methods have emerged. In this paper, we investigate a multi-scale approach for high resolution images, which includes multi-scale segmentation, multi-scale feature selection and multi-scale classification. Experimental results show that this method has a clear advantage over the traditional single-scale method for high resolution remote sensing image change detection.

  7. Guided extracellular matrix formation from fibroblast cells cultured on bio-inspired configurable multiscale substrata

    Directory of Open Access Journals (Sweden)

    Won-Gyu Bae


    Full Text Available Engineering complex extracellular matrix (ECM) is an important challenge for cell and tissue engineering applications as well as for understanding fundamental cell biology. We developed a methodology for the fabrication of precisely controllable multiscale hierarchical structures using capillary force lithography in combination with an original wrinkling technique for the generation of well-defined native ECM-like platforms by culturing fibroblast cells on the multiscale substrata [1]. This paper provides information on the detailed characteristics of the polyethylene glycol-diacrylate multiscale substrata. In addition, a possible model for guided extracellular matrix formation from fibroblast cells cultured on bio-inspired configurable multiscale substrata is proposed.

  8. Unification of automatic target tracking and automatic target recognition (United States)

    Schachter, Bruce J.


    The subject being addressed is how an automatic target tracker (ATT) and an automatic target recognizer (ATR) can be fused together so tightly and so well that their distinctiveness becomes lost in the merger. This has historically not been the case outside of biology and a few academic papers. The biological model of ATT∪ATR arises from dynamic patterns of activity distributed across many neural circuits and structures (including the retina). The information that the brain receives from the eyes is "old news" at the time that it receives it. The eyes and brain forecast a tracked object's future position, rather than relying on received retinal position. Anticipation of the next moment - building up a consistent perception - is accomplished under difficult conditions: motion (eyes, head, body, scene background, target) and processing limitations (neural noise, delays, eye jitter, distractions). Not only does the human vision system surmount these problems, but it has innate mechanisms to exploit motion in support of target detection and classification. Biological vision doesn't normally operate on snapshots. Feature extraction, detection and recognition are spatiotemporal. When vision is viewed as a spatiotemporal process, target detection, recognition, tracking, event detection and activity recognition do not seem as distinct as they are in current ATT and ATR designs. They appear as similar mechanisms taking place at varying time scales. A framework is provided for unifying ATT and ATR.

  9. Multiscale Modeling of Composites: Toward Virtual Testing … and Beyond (United States)

    LLorca, J.; González, C.; Molina-Aldareguía, J. M.; Lópes, C. S.


    Recent developments in the area of multiscale modeling of fiber-reinforced polymers are presented. The overall strategy takes advantage of the separation of length scales between different entities (ply, laminate, and component) found in composite structures. This allows us to carry out multiscale modeling by computing the properties of one entity (e.g., individual plies) at the relevant length scale, homogenizing the results into a constitutive model, and passing this information to the next length scale to determine the mechanical behavior of the larger entity (e.g., laminate). As a result, high-fidelity numerical simulations of the mechanical behavior of composite coupons and small components are nowadays feasible starting from the matrix, fiber, and interface properties and spatial distribution. Finally, the roadmap is outlined for extending the current strategy to include functional properties and processing into the simulation scheme.

  10. Multiscale High-Level Feature Fusion for Histopathological Image Classification

    Directory of Open Access Journals (Sweden)

    ZhiFei Lai


    Full Text Available Histopathological image classification is one of the most important steps in disease diagnosis. We propose a method for multiclass histopathological image classification based on a deep convolutional neural network, referred to as the coding network. The main process is to train the deep convolutional neural network to extract high-level features and to fuse the high-level features of two convolutional layers into a multiscale high-level feature, which yields a better representation of the histopathological image than the coding network alone. In order to gain better performance and higher efficiency, we employ a sparse autoencoder (SAE) and principal components analysis (PCA) to reduce the dimensionality of the multiscale high-level feature. We evaluate the proposed method on a real histopathological image dataset. Our results suggest that the proposed method is effective and outperforms the coding network.
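    As a sketch of the fusion-plus-reduction step, the snippet below concatenates two hypothetical convolutional-layer feature matrices and projects the result onto its top principal components via SVD. This stands in for the paper's SAE/PCA pipeline; the layer widths, sample count, and target dimensionality are illustrative assumptions, not values from the paper.

```python
import numpy as np

def pca_reduce(features, k):
    """Project feature vectors onto the top-k principal components via SVD."""
    centered = features - features.mean(axis=0)
    # rows of vt are the principal directions, sorted by explained variance
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:k].T

rng = np.random.default_rng(1)
# hypothetical high-level features from two convolutional layers (100 images)
layer_a = rng.standard_normal((100, 256))
layer_b = rng.standard_normal((100, 512))
fused = np.concatenate([layer_a, layer_b], axis=1)   # multiscale fusion
reduced = pca_reduce(fused, k=64)                    # 768 -> 64 dimensions
```

The reduced vectors would then feed a standard classifier; in the paper, an SAE provides an alternative learned reduction alongside PCA.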

  11. Hierarchical multiscale model for biomechanics analysis of microfilament networks (United States)

    Li, Tong; Gu, Y. T.; Feng, Xi-Qiao; Yarlagadda, Prasad K. D. V.; Oloyede, Adekunle


    The mechanisms of force generation and transference via microfilament networks are crucial to the understanding of the mechanobiology of cellular processes in living cells. However, all-atom physics simulation of real-size microfilament networks poses an enormous challenge due to the scale limitations of molecular simulation techniques. Following biophysical investigations of constitutive relations between adjacent globular actin monomers on filamentous actin, a hierarchical multiscale model was developed to investigate the biomechanical properties of microfilament networks. This model was validated by previous experimental studies of axial tension and transverse vibration of single F-actin. The biomechanics of microfilament networks can be investigated at the scale of a real eukaryotic cell (10 μm). This multiscale approach provides a powerful modeling tool which can contribute to the understanding of actin-related cellular processes in living cells.

  12. Multiscale analysis of surface morphologies by curvelet and contourlet transforms

    International Nuclear Information System (INIS)

    Li, Linfu; Zhang, Xiangchao; Zhang, Hao; He, Xiaoying; Xu, Min


    The surface topographies of precision components are critical to their functionalities. However, it is challenging to characterize the topographies of complex surfaces, especially structured surfaces. The wavelet families are promising for the multiscale geometry analysis of nonstochastic surfaces. The second-generation curvelet transform provides a sparse representation and good multiscale decomposition for curve singularities. The contourlet expansion, composed of bases oriented along various directions in multiple scales with a smaller redundancy rate, has a remarkable capability of representing borderlines. In this paper both are adopted for the characterization of surface topographies. Different components can be extracted according to their scales and morphological characteristics; as a result, the corresponding manufacturing processes and functionalities can be analyzed specifically. Numerical experiments are given to demonstrate the capabilities of these methods in sparse representation and effective extraction of the geometry features of different nonstochastic surfaces. (paper)

  13. Multi-scale atmospheric environment modelling for urban areas

    Directory of Open Access Journals (Sweden)

    A. A. Baklanov


    Full Text Available Modern supercomputers make it possible to realise multi-scale systems for the assessment and forecasting of urban meteorology, air pollution and emergency preparedness, including nesting with obstacle-resolved models. A multi-scale modelling system with downscaling from the regional to the city scale with the Environment – HIgh Resolution Limited Area Model (Enviro-HIRLAM) and to the micro-scale with the obstacle-resolved Micro-scale Model for Urban Environment (M2UE) is suggested and demonstrated. The M2UE validation results versus the Mock Urban Setting Trial (MUST) experiment indicate satisfactory model quality. Necessary conditions for the choice of nested models, building descriptions, and the areas and resolutions of nested models are analysed. Two-way nesting (up- and down-scaling), in which scale interactions act in both directions (from the meso-scale on the micro-scale and from the micro-scale on the meso-scale), is also discussed.

  14. RBF Multiscale Collocation for Second Order Elliptic Boundary Value Problems

    KAUST Repository

    Farrell, Patricio


    In this paper, we discuss multiscale radial basis function collocation methods for solving elliptic partial differential equations on bounded domains. The approximate solution is constructed in a multilevel fashion, each level using compactly supported radial basis functions of smaller scale on an increasingly fine mesh. On each level, standard symmetric collocation is employed. A convergence theory is given, which builds on recent theoretical advances for multiscale approximation using compactly supported radial basis functions. We are able to show that the convergence is linear in the number of levels. We also discuss the condition numbers of the arising systems and the effect of simple, diagonal preconditioners, now proving rigorously previous numerical observations. © 2013 Society for Industrial and Applied Mathematics.
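    A minimal one-dimensional sketch of the multilevel idea, using interpolation rather than full PDE collocation: each level fits the residual of the previous one with compactly supported Wendland functions of smaller scale. The kernel choice, node counts, and support radii below are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def wendland_c2(r):
    """Wendland C2 compactly supported RBF: (1 - r)^4_+ (4r + 1)."""
    return np.where(r < 1, (1 - r) ** 4 * (4 * r + 1), 0.0)

def fit_level(centers, values, delta):
    """Solve the symmetric RBF interpolation system at one level of scale delta."""
    A = wendland_c2(np.abs(centers[:, None] - centers[None, :]) / delta)
    coeffs = np.linalg.solve(A, values)
    return lambda x: wendland_c2(np.abs(x[:, None] - centers[None, :]) / delta) @ coeffs

f = lambda x: np.sin(2 * np.pi * x)
x_eval = np.linspace(0, 1, 200)

# level 1: coarse centers, wide support
c1 = np.linspace(0, 1, 9)
s1 = fit_level(c1, f(c1), delta=0.5)

# level 2: finer centers, halved support, fit the level-1 residual
c2 = np.linspace(0, 1, 33)
s2 = fit_level(c2, f(c2) - s1(c2), delta=0.25)

err_coarse = np.max(np.abs(f(x_eval) - s1(x_eval)))
err_two_level = np.max(np.abs(f(x_eval) - (s1(x_eval) + s2(x_eval))))
```

The paper's setting replaces interpolation of function values with symmetric collocation of the PDE operator, but the level-by-level residual correction with shrinking support radii is the same multiscale mechanism.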

  15. Global sensitivity analysis of multiscale properties of porous materials (United States)

    Um, Kimoon; Zhang, Xuan; Katsoulakis, Markos; Plechac, Petr; Tartakovsky, Daniel M.


    Ubiquitous uncertainty about pore geometry inevitably undermines the veracity of pore- and multi-scale simulations of transport phenomena in porous media. It raises two fundamental issues: sensitivity of effective material properties to pore-scale parameters, and statistical parameterization of Darcy-scale models that accounts for pore-scale uncertainty. Homogenization-based maps of pore-scale parameters onto their Darcy-scale counterparts facilitate both sensitivity analysis (SA) and uncertainty quantification. We treat uncertain geometric characteristics of a hierarchical porous medium as random variables to conduct global SA and to derive probabilistic descriptors of effective diffusion coefficients and effective sorption rates. Our analysis is formulated in terms of a solute diffusing through a fluid-filled pore space while sorbing to the solid matrix. Yet it is sufficiently general to be applied to other multiscale porous media phenomena that are amenable to homogenization.

  16. Multiscale singular value manifold for rotating machinery fault diagnosis

    Energy Technology Data Exchange (ETDEWEB)

    Feng, Yi; Lu, BaoChun; Zhang, Deng Feng [School of Mechanical Engineering, Nanjing University of Science and Technology, Nanjing (China)


    The time-frequency distribution of a vibration signal can be considered as an image that contains more information than the signal in the time domain. Manifold learning is a novel theory for image recognition that can also be applied to rotating machinery fault pattern recognition based on time-frequency distributions. However, the vibration signal of rotating machinery in fault condition contains cyclical transient impulses with different phases, which are detrimental to image recognition for time-frequency distributions. To eliminate the effects of phase differences and extract the inherent features of time-frequency distributions, a multiscale singular value manifold method is proposed. The obtained low-dimensional multiscale singular value manifold features can reveal the differences between fault patterns and are applicable to classification and diagnosis. Experimental verification proves that the performance of the proposed method is superior in rotating machinery fault diagnosis.

  17. Multiscale correlations in highly resolved Large Eddy Simulations (United States)

    Biferale, Luca; Buzzicotti, Michele; Linkmann, Moritz


    Understanding multiscale turbulent statistics is one of the key challenges for many modern applied and fundamental problems in fluid dynamics. One of the main obstacles is the existence of anomalously strong non-Gaussian fluctuations, which become more and more important with increasing Reynolds number. In order to assess the performance of LES models in reproducing these extreme events with reasonable accuracy, it is helpful to further understand the statistical properties of the coupling between the resolved and the subgrid scales. We present analytical and numerical results focusing on the multiscale correlations between the subgrid stress and the resolved velocity field obtained both from LES and filtered DNS data. Furthermore, a comparison is carried out between LES and DNS results concerning the scaling behaviour of higher-order structure functions using either Smagorinsky or self-similar Fourier sub-grid models. ERC AdG Grant No 339032 NewTURB.

  18. Information theory and stochastics for multiscale nonlinear systems

    CERN Document Server

    Majda, Andrew J; Grote, Marcus J


    This book introduces mathematicians to the fascinating emerging mathematical interplay between ideas from stochastics and information theory and important practical issues in studying complex multiscale nonlinear systems. It emphasizes the serendipity between modern applied mathematics and applications where rigorous analysis, the development of qualitative and/or asymptotic models, and numerical modeling all interact to explain complex phenomena. After a brief introduction to the emerging issues in multiscale modeling, the book has three main chapters. The first chapter is an introduction to information theory with novel applications to statistical mechanics, predictability, and Jupiter's Red Spot for geophysical flows. The second chapter discusses new mathematical issues regarding fluctuation-dissipation theorems for complex nonlinear systems including information flow, various approximations, and illustrates applications to various mathematical models. The third chapter discusses stochastic modeling of com...

  19. Multiscale Modeling and Uncertainty Quantification for Nuclear Fuel Performance

    Energy Technology Data Exchange (ETDEWEB)

    Estep, Donald [Colorado State Univ., Fort Collins, CO (United States); El-Azab, Anter [Florida State Univ., Tallahassee, FL (United States); Pernice, Michael [Idaho National Lab. (INL), Idaho Falls, ID (United States); Peterson, John W. [Idaho National Lab. (INL), Idaho Falls, ID (United States); Polyakov, Peter [Univ. of Wyoming, Laramie, WY (United States); Tavener, Simon [Colorado State Univ., Fort Collins, CO (United States); Xiu, Dongbin [Purdue Univ., West Lafayette, IN (United States); Univ. of Utah, Salt Lake City, UT (United States)


    In this project, we will address the challenges associated with constructing high fidelity multiscale models of nuclear fuel performance. We (i) propose a novel approach for coupling mesoscale and macroscale models, (ii) devise efficient numerical methods for simulating the coupled system, and (iii) devise and analyze effective numerical approaches for error and uncertainty quantification for the coupled multiscale system. As an integral part of the project, we will carry out analysis of the effects of upscaling and downscaling, investigate efficient methods for stochastic sensitivity analysis of the individual macroscale and mesoscale models, and carry out a posteriori error analysis for computed results. We will pursue development and implementation of solutions in software used at Idaho National Laboratory on models of interest to the Nuclear Energy Advanced Modeling and Simulation (NEAMS) program.

  20. Annual review in automatic programming

    CERN Document Server

    Goodman, Richard


    Annual Review in Automatic Programming, Volume 4 is a collection of papers that deals with the GIER ALGOL compiler, a parameterized compiler based on mechanical linguistics, and the JOVIAL language. A couple of papers describe a commercial use of stacks, an IBM system, and what an ideal computer program support system should be. One paper reviews the system of compilation, the development of a more advanced language, programming techniques, machine independence, and program transfer to other machines. Another paper describes the ALGOL 60 system for the GIER machine, including running ALGOL pro

  1. On automatic machine translation evaluation

    Directory of Open Access Journals (Sweden)

    Darinka Verdonik


    Full Text Available An important task in developing machine translation (MT) is evaluating system performance. Automatic measures are most commonly used for this task, as manual evaluation is time-consuming and costly. However, performing an objective evaluation is not a trivial task. Automatic measures, such as BLEU, TER, NIST, METEOR etc., have their own weaknesses, while manual evaluations are also problematic since they are always to some extent subjective. In this paper we test the influence of the test set on the results of automatic MT evaluation for the subtitling domain. Translating subtitles is a rather specific task for MT, since subtitles are a sort of summarization of spoken text rather than a direct translation of (written) text. An additional problem when translating a language pair that does not include English, in our example Slovene-Serbian, is that the translations are commonly done from English to Serbian and from English to Slovene, and not directly, since most TV production is originally filmed in English. All this poses additional challenges to MT and consequently to MT evaluation. Automatic evaluation is based on a reference translation, which is usually taken from an existing parallel corpus and marked as a test set. In our experiments, we compare the evaluation results for the same MT system output using three types of test set. In the first round, the test set is 4000 subtitles from the SUMAT parallel corpus of subtitles. These subtitles are not direct translations from Serbian to Slovene or vice versa, but are based on an English original. In the second round, the test set is 1000 subtitles randomly extracted from the first test set and translated anew, from Serbian to Slovene, based solely on the Serbian written subtitles. In the third round, the test set is the same 1000 subtitles, however this time the Slovene translations were obtained by manually correcting the Slovene MT outputs so that they are correct translations of the
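    The edit-rate idea behind metrics such as TER can be illustrated with a word-level Levenshtein distance normalized by reference length. Real TER additionally allows block-shift operations and multiple references, so this simplified version is only a sketch of the core computation:

```python
def edit_distance(hyp, ref):
    """Word-level Levenshtein distance (insertions, deletions, substitutions;
    full TER also counts block shifts, omitted here)."""
    m, n = len(hyp), len(ref)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if hyp[i - 1] == ref[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution / match
    return d[m][n]

def ter(hyp, ref):
    """Simplified translation edit rate against a single reference."""
    return edit_distance(hyp.split(), ref.split()) / len(ref.split())

# one missing word out of a six-word reference -> 1/6
score = ter("the cat sat on mat", "the cat sat on the mat")
```

Lower is better: a score of 0 means the hypothesis matches the reference exactly, which is why the choice of reference translation (the subject of this paper) directly shapes the score.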

  2. Automatic Evaluation of Machine Translation

    DEFF Research Database (Denmark)

    Martinez, Mercedes Garcia; Koglin, Arlene; Mesa-Lao, Bartolomé


    of quality criteria in as few edits as possible. The quality of MT systems is generally measured by automatic metrics, producing scores that should correlate with human evaluation.In this study, we investigate correlations between one of such metrics, i.e. Translation Edit Rate (TER), and actual post...... of post-editing effort, namely i) temporal (time), ii) cognitive (mental processes) and iii) technical (keyboard activity). For the purposes of this research, TER scores were correlated with two different indicators of post-editing effort as computed in the CRITT Translation Process Database (TPR...

  3. Annual review in automatic programming

    CERN Document Server

    Goodman, Richard


    Annual Review in Automatic Programming, Volume 2 is a collection of papers that discusses the controversy about the suitability of COBOL as a common business-oriented language, and the development of different common languages for scientific computation. A couple of papers describe the use of the Genie system in numerical calculation and analyze Mercury autocode in terms of a phrase structure language, such as the source language, the target language, the order structure of ATLAS, and the meta-syntactical language of the assembly program. Other papers explain interference or an "intermediate

  4. Coordinated hybrid automatic repeat request

    KAUST Repository

    Makki, Behrooz


    We develop a coordinated hybrid automatic repeat request (HARQ) approach. With the proposed scheme, if a user message is correctly decoded in the first HARQ rounds, its spectrum is allocated to other users, to improve the network outage probability and the users' fairness. The results, which are obtained for single- and multiple-antenna setups, demonstrate the efficiency of the proposed approach in different conditions. For instance, with a maximum of M retransmissions and single transmit/receive antennas, the diversity gain of a user increases from M to (J+1)(M-1)+1, where J is the number of users helping that user.
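    The quoted diversity gain can be checked numerically; the helper function below is mine, not from the paper.

```python
def harq_diversity_gain(M, J):
    """Diversity gain of a user under the coordinated HARQ scheme: (J+1)(M-1)+1.

    M: maximum number of (re)transmission rounds; J: number of helping users.
    With J = 0 this reduces to the conventional HARQ diversity gain of M.
    """
    return (J + 1) * (M - 1) + 1

print(harq_diversity_gain(3, 0))  # → 3, plain HARQ with M = 3
print(harq_diversity_gain(3, 2))  # → 7, two helpers more than double the gain
```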

  5. Multiscale Methods for Accurate, Efficient, and Scale-Aware Models

    Energy Technology Data Exchange (ETDEWEB)

    Larson, Vincent [Univ. of Wisconsin, Milwaukee, WI (United States)


    The goal of UWM’s portion of the Multiscale project was to develop a unified cloud parameterization that could simulate all cloud types --- including stratocumulus, shallow cumulus, and deep cumulus --- using the single equation set implemented in CLUBB. An advantage of a unified parameterization methodology is that it avoids the difficult task of interfacing different cloud parameterizations for different cloud types. To interface CLUBB’s clouds to the microphysics, a Monte Carlo interface, SILHS, was further developed.

  6. Lifetime statistics of quantum chaos studied by a multiscale analysis

    KAUST Repository

    Di Falco, A.


    In a series of pump and probe experiments, we study the lifetime statistics of a quantum chaotic resonator when the number of open channels is greater than one. Our design embeds a stadium billiard into a two dimensional photonic crystal realized on a silicon-on-insulator substrate. We calculate resonances through a multiscale procedure that combines energy landscape analysis and wavelet transforms. Experimental data is found to follow the universal predictions arising from random matrix theory with an excellent level of agreement.

  7. Final Technical Report "Multiscale Simulation Algorithms for Biochemical Systems"

    Energy Technology Data Exchange (ETDEWEB)

    Petzold, Linda R.


    Biochemical systems are inherently multiscale and stochastic. In microscopic systems formed by living cells, the small numbers of reactant molecules can result in dynamical behavior that is discrete and stochastic rather than continuous and deterministic. An analysis tool that respects these dynamical characteristics is the stochastic simulation algorithm (SSA, Gillespie, 1976), a numerical simulation procedure that is essentially exact for chemical systems that are spatially homogeneous or well stirred. Despite recent improvements, as a procedure that simulates every reaction event, the SSA is necessarily inefficient for most realistic problems. There are two main reasons for this, both arising from the multiscale nature of the underlying problem: (1) stiffness, i.e. the presence of multiple timescales, the fastest of which are stable; and (2) the need to include in the simulation both species that are present in relatively small quantities and should be modeled by a discrete stochastic process, and species that are present in larger quantities and are more efficiently modeled by a deterministic differential equation (or at some scale in between). This project has focused on the development of fast and adaptive algorithms, and the fundamental theory upon which they must be based, for the multiscale simulation of biochemical systems. Areas addressed by this project include: (1) Theoretical and practical foundations for accelerated discrete stochastic simulation (tau-leaping); (2) Dealing with stiffness (fast reactions) in an efficient and well-justified manner in discrete stochastic simulation; (3) Development of adaptive multiscale algorithms for spatially homogeneous discrete stochastic simulation; (4) Development of high-performance SSA algorithms.
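    The SSA referenced above can be sketched in a few lines. This is the textbook direct method, not the project's accelerated tau-leaping variants, and the decay reaction is an illustrative example of mine:

```python
import numpy as np

def ssa(x0, rates, stoich, propensity, t_end, rng):
    """Gillespie direct method: exact trajectory of a well-stirred system."""
    t, x = 0.0, np.array(x0, dtype=float)
    while True:
        a = propensity(x, rates)          # propensity of each reaction channel
        a0 = a.sum()
        if a0 == 0:
            break                         # no reaction can fire any more
        tau = rng.exponential(1.0 / a0)   # exponentially distributed waiting time
        if t + tau > t_end:
            break
        t += tau
        j = rng.choice(len(a), p=a / a0)  # channel chosen proportional to propensity
        x += stoich[j]                    # apply that channel's stoichiometry
    return x

# illustrative example: irreversible decay A -> B with rate constant c = 1
rng = np.random.default_rng(42)
stoich = np.array([[-1.0, 1.0]])          # the single channel: A -= 1, B += 1
propensity = lambda x, c: np.array([c[0] * x[0]])
final = ssa([100, 0], [1.0], stoich, propensity, t_end=10.0, rng=rng)
```

Because every single reaction event draws its own waiting time, cost scales with the number of events, which is exactly the inefficiency that tau-leaping and the hybrid discrete/continuous methods in this project address.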

  8. Multiscale Monte Carlo equilibration: Pure Yang-Mills theory (United States)

    Endres, Michael G.; Brower, Richard C.; Detmold, William; Orginos, Kostas; Pochinsky, Andrew V.


    We present a multiscale thermalization algorithm for lattice gauge theory, which enables efficient parallel generation of uncorrelated gauge field configurations. The algorithm combines standard Monte Carlo techniques with ideas drawn from real space renormalization group and multigrid methods. We demonstrate the viability of the algorithm for pure Yang-Mills gauge theory for both heat bath and hybrid Monte Carlo evolution, and show that it ameliorates the problem of topological freezing up to controllable lattice spacing artifacts.

  9. Hybrid multiscale simulation of a mixing-controlled reaction (United States)

    Scheibe, Timothy D.; Schuchardt, Karen; Agarwal, Khushbu; Chase, Jared; Yang, Xiaofan; Palmer, Bruce J.; Tartakovsky, Alexandre M.; Elsethagen, Todd; Redden, George


    Continuum-scale models, which employ a porous medium conceptualization to represent properties and processes averaged over a large number of solid grains and pore spaces, are widely used to study subsurface flow and reactive transport. Recently, pore-scale models, which explicitly resolve individual soil grains and pores, have been developed to more accurately model and study pore-scale phenomena, such as mineral precipitation and dissolution reactions, microbially-mediated surface reactions, and other complex processes. However, these highly-resolved models are prohibitively expensive for modeling domains of sizes relevant to practical problems. To broaden the utility of pore-scale models for larger domains, we developed a hybrid multiscale model that initially simulates the full domain at the continuum scale and applies a pore-scale model only to areas of high reactivity. Since the location and number of pore-scale model regions in the model varies as the reactions proceed, an adaptive script defines the number and location of pore regions within each continuum iteration and initializes pore-scale simulations from macroscale information. Another script communicates information from the pore-scale simulation results back to the continuum scale. These components provide loose coupling between the pore- and continuum-scale codes into a single hybrid multiscale model implemented within the SWIFT workflow environment. In this paper, we consider an irreversible homogeneous bimolecular reaction (two solutes reacting to form a third solute) in a 2D test problem. This paper is focused on the approach used for multiscale coupling between pore- and continuum-scale models, application to a realistic test problem, and implications of the results for predictive simulation of mixing-controlled reactions in porous media. Our results and analysis demonstrate that the hybrid multiscale method provides a feasible approach for increasing the accuracy of subsurface reactive transport

  10. Multiscale modelling of trabecular bone: from micro to macroscale


    Levrero Florencio, Francesc


    Trabecular bone has a complex and porous microstructure. This study develops approaches to determine the mechanical behaviour of this material at the macroscopic level through the use of homogenisation-based multiscale methods using micro-finite element simulations. In homogenisation-based finite element methods, a simulation involving a representative volume element of the microstructure of the considered material is performed with a specific set of boundary conditions. The ma...

  11. An empirical analysis of dynamic multiscale hedging using wavelet decomposition


    Conlon, Thomas; Cotter, John


    This paper investigates the hedging effectiveness of a dynamic moving window OLS hedging model, formed using wavelet decomposed time-series. The wavelet transform is applied to calculate the appropriate dynamic minimum-variance hedge ratio for various hedging horizons for a number of assets. The effectiveness of the dynamic multiscale hedging strategy is then tested, both in- and out-of-sample, using standard variance reduction and expanded to include a downside risk metric, the time horizon ...
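    The per-horizon calculation reduces to the classic OLS minimum-variance hedge ratio, h* = Cov(r_s, r_f)/Var(r_f), applied to each wavelet-reconstructed series. Below is a single-scale sketch with synthetic returns; the coefficients and sample size are illustrative assumptions, and the paper applies the same estimator scale by scale on wavelet-decomposed series.

```python
import numpy as np

def min_variance_hedge_ratio(spot_returns, futures_returns):
    """OLS minimum-variance hedge ratio: h* = Cov(s, f) / Var(f)."""
    cov = np.cov(spot_returns, futures_returns)
    return cov[0, 1] / cov[1, 1]

def variance_reduction(spot, futures, h):
    """Fraction of spot variance removed by the hedged position s - h*f."""
    hedged = spot - h * futures
    return 1.0 - np.var(hedged) / np.var(spot)

rng = np.random.default_rng(7)
f = rng.standard_normal(500)                   # synthetic futures returns
s = 0.8 * f + 0.3 * rng.standard_normal(500)   # correlated synthetic spot returns
h = min_variance_hedge_ratio(s, f)             # close to the true slope 0.8
vr = variance_reduction(s, f, h)               # in-sample variance reduction
```

In the multiscale version, h is estimated separately on each wavelet detail level, so the hedge ratio can differ across investment horizons.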

  12. Evaluation of the Community Multiscale Air Quality (CMAQ) ... (United States)

    This work evaluates particle size-composition distributions simulated by the Community Multiscale Air Quality (CMAQ) model using Micro-Orifice Uniform Deposit Impactor (MOUDI) measurements at 18 sites across North America. Size-resolved measurements of particulate SO4+, with the model ranging from an underestimation to overestimation of both the peak diameter and peak particle concentration across the sites. Computing PM2.5 from the modeled size distribution parameters rather than by summing the masses in the Aitken and a

  13. Hybrid multiscale modeling and prediction of cancer cell behavior.

    Directory of Open Access Journals (Sweden)

    Mohammad Hossein Zangooei

    Full Text Available Understanding cancer development across several spatial-temporal scales is of great practical significance for better understanding and treating cancers. It is difficult to tackle this challenge with pure biological means. Hybrid modeling techniques have therefore been proposed that combine the advantages of continuum and discrete methods to model multiscale problems. In light of these problems, we have proposed a new hybrid vascular model to facilitate the multiscale modeling and simulation of cancer development with respect to agent-based, cellular automata and machine learning methods. The purpose of this simulation is to create a dataset that can be used for the prediction of cell phenotypes. By using a proposed Q-learning method based on SVR-NSGA-II, the cells have the capability to predict their phenotypes autonomously, that is, to act on their own without external direction in response to situations they encounter. Computational simulations of the model were performed in order to analyze its performance. The most striking feature of our results is that each cell can select its phenotype at each time step according to its condition. We provide evidence that the prediction of cell phenotypes is reliable. Our proposed model, which we term a hybrid multiscale model of cancer cell behavior, has the potential to combine the best features of both continuum and discrete models. The in silico results indicate that the 3D model can represent key features of cancer growth, angiogenesis, and the related micro-environment, and show that the findings are in good agreement with biological tumor behavior. To the best of our knowledge, this paper presents the first hybrid vascular multiscale model of cancer cell behavior with the capability to predict cell phenotypes individually from a self-generated dataset.

  14. Multiscale eddy simulation for moist atmospheric convection: Preliminary investigation

    International Nuclear Information System (INIS)

    Stechmann, Samuel N.


A multiscale computational framework is designed for simulating atmospheric convection and clouds. In this multiscale framework, large eddy simulation (LES) is used to model the coarse scales of 100 m and larger, and a stochastic, one-dimensional turbulence (ODT) model is used to represent the fine scales of 100 m and smaller. Coupled and evolving together, these two components provide a multiscale eddy simulation (MES). Through its fine-scale turbulence and moist thermodynamics, MES allows coarse grid cells to be partially cloudy and to encompass cloudy–clear air mixing on scales down to 1 m; in contrast, in typical LES such fine-scale processes are not represented or are parameterized using bulk deterministic closures. To illustrate MES and investigate its multiscale dynamics, a shallow cumulus cloud field is simulated. The fine-scale variability is seen to take a plausible form, with partially cloudy grid cells prominent near cloud edges and cloud top. From earlier theoretical work, this mixing of cloudy and clear air is believed to have an important impact on buoyancy. However, contrary to expectations based on earlier theoretical studies, the mean statistics of the bulk cloud field are essentially the same in MES and LES; possible reasons for this are discussed, including possible limitations in the present formulation of MES. One difference between LES and MES is seen in the coarse-scale turbulent kinetic energy, which appears to grow slowly in time due to incoherent stochastic fluctuations in the buoyancy. This and other considerations suggest the need for some type of spatial and/or temporal filtering to attenuate undersampling of the stochastic fine-scale processes.

  15. Generalization of mixed multiscale finite element methods with applications

    Energy Technology Data Exchange (ETDEWEB)

    Lee, C S [Texas A & M Univ., College Station, TX (United States)]


Many science and engineering problems exhibit scale disparity and high contrast. The small-scale features cannot be omitted from the physical models because they can affect the macroscopic behavior of the problem. However, resolving all the scales in these problems can be prohibitively expensive. As a consequence, some type of model reduction technique is required to design efficient solution algorithms. For practical purposes, we are interested in mixed finite element problems as they produce solutions with certain conservation properties. Existing multiscale methods for such problems include the mixed multiscale finite element methods. We show that for complicated problems, the mixed multiscale finite element methods may not be able to produce reliable approximations. This motivates the need for enrichment of the coarse spaces. Two enrichment approaches are proposed: one is based on the generalized multiscale finite element method (GMsFEM), while the other is based on spectral element-based algebraic multigrid (rAMGe). The former, called mixed GMsFEM, is developed for both Darcy's flow and linear elasticity, and its application in two-phase flow simulations is demonstrated. For linear elasticity, the algorithm is subtly modified due to the symmetry requirement of the stress tensor. The latter enrichment approach, based on rAMGe, differs from GMsFEM in that both the velocity and pressure spaces are coarsened. Due to the multigrid nature of the algorithm, recursive application is available, which results in an efficient multilevel construction of the coarse spaces. Stability, convergence analysis, and exhaustive numerical experiments are carried out to validate the proposed enrichment approaches.

  16. Multiscale Universal Interface: A concurrent framework for coupling heterogeneous solvers (United States)

    Tang, Yu-Hang; Kudo, Shuhei; Bian, Xin; Li, Zhen; Karniadakis, George Em


Concurrently coupled numerical simulations using heterogeneous solvers are powerful tools for modeling multiscale phenomena. However, major modifications to existing codes are often required to enable such simulations, posing significant difficulties in practice. In this paper we present a C++ library, the Multiscale Universal Interface (MUI), which is capable of facilitating the coupling effort for a wide range of multiscale simulations. The library adopts a header-only form with minimal external dependencies and hence can easily be dropped into existing codes. A data sampler concept is introduced, combined with a hybrid dynamic/static typing mechanism, to create an easily customizable framework for solver-independent data interpretation. The library integrates MPI MPMD support and an asynchronous communication protocol to handle inter-solver information exchange irrespective of the solvers' own MPI awareness. Template metaprogramming is heavily employed to simultaneously improve runtime performance and code flexibility. We validated the library by solving three different multiscale problems, which also serve to demonstrate the flexibility of the framework in handling heterogeneous models and solvers. In the first example, a Couette flow was simulated using two concurrently coupled Smoothed Particle Hydrodynamics (SPH) simulations of different spatial resolutions. In the second example, we coupled the deterministic SPH method with the stochastic Dissipative Particle Dynamics (DPD) method to study the effect of surface grafting on the hydrodynamic properties of the surface. In the third example, we considered conjugate heat transfer between a solid domain and a fluid domain by coupling the particle-based energy-conserving DPD (eDPD) method with the Finite Element Method (FEM).

  17. Multiscale computer modeling in biomechanics and biomedical engineering

    CERN Document Server


    This book reviews the state-of-the-art in multiscale computer modeling, in terms of both accomplishments and challenges. The information in the book is particularly useful for biomedical engineers, medical physicists and researchers in systems biology, mathematical biology, micro-biomechanics and biomaterials who are interested in how to bridge between traditional biomedical engineering work at the organ and tissue scales, and the newer arenas of cellular and molecular bioengineering.

  18. Motor automaticity in Parkinson’s disease (United States)

    Wu, Tao; Hallett, Mark; Chan, Piu


Bradykinesia is the most important feature contributing to motor difficulties in Parkinson's disease (PD). However, the pathophysiology underlying bradykinesia is not fully understood. One important aspect is that PD patients have difficulty performing learned motor skills automatically, but this problem has been generally overlooked. Here we review motor deficits in PD associated with impaired motor automaticity, such as reduced arm swing, decreased stride length, freezing of gait, micrographia and reduced facial expression. Recent neuroimaging studies have revealed some neural mechanisms underlying impaired motor automaticity in PD, including less efficient neural coding of movement, failure to shift automated motor skills to the sensorimotor striatum, instability of the automatic mode within the striatum, and use of attentional control and/or compensatory efforts to execute movements usually performed automatically in healthy people. PD patients lose previously acquired automatic skills due to their impaired sensorimotor striatum, and have difficulty acquiring new automatic skills or restoring lost ones. Further investigation of the pathophysiology of motor automaticity, of the effect of L-dopa or surgical treatments on automaticity, and of the potential role of measures of automaticity in the early diagnosis of PD would be valuable. PMID:26102020

  19. Introduction and application of the multiscale coefficient of variation analysis. (United States)

    Abney, Drew H; Kello, Christopher T; Balasubramaniam, Ramesh


Quantifying how patterns of behavior relate across multiple levels of measurement typically requires long time series for reliable parameter estimation. We describe a novel analysis that estimates patterns of variability across multiple scales of analysis and is suitable for time series of short duration. The multiscale coefficient of variation (MSCV) measures the distance between local coefficient of variation estimates within particular time windows and the overall coefficient of variation across all time samples. We first describe the MSCV analysis and provide an example analytical protocol with a corresponding MATLAB implementation. Next, we present a simulation study testing the new analysis on time series generated by ARFIMA models that span white noise and short- and long-term correlations. The MSCV analysis was observed to be sensitive to specific parameters of ARFIMA models varying in the type of temporal structure and time series length. We then apply the MSCV analysis to short time series of speech phrases and musical themes to show commonalities in multiscale structure. The simulation and application studies provide evidence that the MSCV analysis can discriminate between time series varying in multiscale structure and length.
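
    The computation described above, comparing local coefficient-of-variation estimates within windows against the overall coefficient of variation, can be sketched in a few lines. This is an illustrative reading of the analysis, not the authors' MATLAB code: the non-overlapping window scheme and the mean absolute distance used here are simplifying assumptions.

    ```python
    import numpy as np

    def mscv(x, window_sizes):
        """Toy multiscale coefficient of variation (MSCV): for each window
        size, average the absolute distance between local CV estimates and
        the overall CV of the series."""
        x = np.asarray(x, dtype=float)
        overall_cv = np.std(x) / np.mean(x)
        result = {}
        for w in window_sizes:
            n = len(x) // w                      # number of non-overlapping windows
            segs = x[:n * w].reshape(n, w)
            local_cv = segs.std(axis=1) / segs.mean(axis=1)
            result[w] = np.mean(np.abs(local_cv - overall_cv))
        return result
    ```

    For a strictly periodic series, the local and overall CVs coincide at window sizes that are multiples of the period, so this toy MSCV vanishes there.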

  20. A multiscale mortar multipoint flux mixed finite element method

    KAUST Repository

    Wheeler, Mary Fanett


    In this paper, we develop a multiscale mortar multipoint flux mixed finite element method for second order elliptic problems. The equations in the coarse elements (or subdomains) are discretized on a fine grid scale by a multipoint flux mixed finite element method that reduces to cell-centered finite differences on irregular grids. The subdomain grids do not have to match across the interfaces. Continuity of flux between coarse elements is imposed via a mortar finite element space on a coarse grid scale. With an appropriate choice of polynomial degree of the mortar space, we derive optimal order convergence on the fine scale for both the multiscale pressure and velocity, as well as the coarse scale mortar pressure. Some superconvergence results are also derived. The algebraic system is reduced via a non-overlapping domain decomposition to a coarse scale mortar interface problem that is solved using a multiscale flux basis. Numerical experiments are presented to confirm the theory and illustrate the efficiency and flexibility of the method. © EDP Sciences, SMAI, 2012.

  1. Multiscale methods in turbulent combustion: strategies and computational challenges

    International Nuclear Information System (INIS)

    Echekki, Tarek


    A principal challenge in modeling turbulent combustion flows is associated with their complex, multiscale nature. Traditional paradigms in the modeling of these flows have attempted to address this nature through different strategies, including exploiting the separation of turbulence and combustion scales and a reduced description of the composition space. The resulting moment-based methods often yield reasonable predictions of flow and reactive scalars' statistics under certain conditions. However, these methods must constantly evolve to address combustion at different regimes, modes or with dominant chemistries. In recent years, alternative multiscale strategies have emerged, which although in part inspired by the traditional approaches, also draw upon basic tools from computational science, applied mathematics and the increasing availability of powerful computational resources. This review presents a general overview of different strategies adopted for multiscale solutions of turbulent combustion flows. Within these strategies, some specific models are discussed or outlined to illustrate their capabilities and underlying assumptions. These strategies may be classified under four different classes, including (i) closure models for atomistic processes, (ii) multigrid and multiresolution strategies, (iii) flame-embedding strategies and (iv) hybrid large-eddy simulation-low-dimensional strategies. A combination of these strategies and models can potentially represent a robust alternative strategy to moment-based models; but a significant challenge remains in the development of computational frameworks for these approaches as well as their underlying theories. (topical review)

  2. Generalized multiscale finite element method for elasticity equations

    KAUST Repository

    Chung, Eric T.


In this paper, we discuss the application of the generalized multiscale finite element method (GMsFEM) to the elasticity equation in heterogeneous media. We consider steady-state elasticity equations, though some of our applications are motivated by elastic wave propagation in the subsurface, where the subsurface properties can be highly heterogeneous and of high contrast. We present the construction of the main ingredients of GMsFEM, namely the snapshot and offline spaces. The latter is constructed using a local spectral decomposition in the snapshot space, based on the analysis provided in the paper. We consider both continuous Galerkin and discontinuous Galerkin coupling of basis functions; both approaches have their pros and cons. Continuous Galerkin methods avoid penalty parameters, though they involve partition-of-unity functions which can alter the properties of the multiscale basis functions. Discontinuous Galerkin techniques, on the other hand, allow gluing multiscale basis functions without any modification; because the basis functions are constructed independently of each other, this approach provides an advantage. We discuss the use of oversampling techniques that use snapshots in larger regions to construct the offline space. We provide numerical results to show that one can accurately approximate the solution using a reduced number of degrees of freedom.
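
    The local spectral decomposition used to build the offline space can be illustrated in a toy form: given a local stiffness-like matrix A and a mass-like matrix S assembled on a snapshot space, the coarse basis is spanned by the generalized eigenvectors with the smallest eigenvalues. The reduction below, via a Cholesky factor of S, is a generic numerical sketch under these assumptions, not the paper's discretization; the function name is ours.

    ```python
    import numpy as np

    def offline_basis(A, S, n_basis):
        """Solve the local generalized eigenproblem A v = lam * S v
        (A symmetric, S symmetric positive definite) and keep the
        eigenvectors with the smallest eigenvalues as the coarse basis."""
        L = np.linalg.cholesky(S)
        Linv = np.linalg.inv(L)
        M = Linv @ A @ Linv.T          # equivalent standard symmetric eigenproblem
        lam, Y = np.linalg.eigh(M)     # eigenvalues returned in ascending order
        V = Linv.T @ Y                 # back-transform to generalized eigenvectors
        return V[:, :n_basis]
    ```

    In GMsFEM-type methods, keeping only the low-eigenvalue modes is what controls the coarse-space dimension while retaining the dominant local features.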

  3. A Multiscale Enrichment Procedure for Nonlinear Monotone Operators

    KAUST Repository

    Efendiev, Yalchin R.


    In this paper, multiscale finite element methods (MsFEMs) and domain decomposition techniques are developed for a class of nonlinear elliptic problems with high-contrast coefficients. In the process, existing work on linear problems [Y. Efendiev, J. Galvis, R. Lazarov, S. Margenov and J. Ren, Robust two-level domain decomposition preconditioners for high-contrast anisotropic flows in multiscale media. Submitted.; Y. Efendiev, J. Galvis and X. Wu, J. Comput. Phys. 230 (2011) 937–955; J. Galvis and Y. Efendiev, SIAM Multiscale Model. Simul. 8 (2010) 1461–1483.] is extended to treat a class of nonlinear elliptic operators. The proposed method requires the solutions of (small dimension and local) nonlinear eigenvalue problems in order to systematically enrich the coarse solution space. Convergence of the method is shown to relate to the dimension of the coarse space (due to the enrichment procedure) as well as the coarse mesh size. In addition, it is shown that the coarse mesh spaces can be effectively used in two-level domain decomposition preconditioners. A number of numerical results are presented to complement the analysis.

  4. Standard Model in multiscale theories and observational constraints (United States)

    Calcagni, Gianluca; Nardelli, Giuseppe; Rodríguez-Fernández, David


We construct and analyze the Standard Model of electroweak and strong interactions in multiscale spacetimes with (i) weighted derivatives and (ii) q-derivatives. Both theories can be formulated in two different frames, called the fractional and the integer picture. By definition, the fractional picture is where physical predictions should be made. (i) In the theory with weighted derivatives, it is shown that gauge invariance and the requirement of having constant masses in all reference frames make the Standard Model in the integer picture indistinguishable from the ordinary one. Experiments involving only weak and strong forces are insensitive to a change of spacetime dimensionality also in the fractional picture, and only the electromagnetic and gravitational sectors can break the degeneracy. For the simplest multiscale measures with only one characteristic time, length and energy scale t*, ℓ* and E*, we compute the Lamb shift in the hydrogen atom and constrain the multiscale correction to the ordinary result, obtaining absolute bounds on these scales at the 28 TeV level. Stronger bounds are obtained from the measurement of the fine-structure constant. (ii) In the theory with q-derivatives, considering the muon decay rate and the Lamb shift in light atoms, we obtain independent absolute bounds at the 35 MeV level. For α₀ = 1/2, the Lamb shift alone yields a bound at the 450 GeV level.

  5. Multi-scale symbolic transfer entropy analysis of EEG (United States)

    Yao, Wenpo; Wang, Jun


From both global and local perspectives, we symbolize two kinds of EEG and analyze their dynamic and asymmetric information using multi-scale transfer entropy. A multi-scale process with scale factors from 1 to 199 in steps of 2 is applied to the EEG of healthy people and of epileptic patients, and the sequences are then symbolized either by permutation with embedding dimension 3 or by a global approach. The forward and reverse symbol sequences are taken as the inputs of the transfer entropy. The scale-factor intervals over which the two kinds of EEG show satisfactory entropy distinctions are (37, 57) for the permutation approach and (65, 85) for the global approach. At scale factor 67, the transfer entropies of the healthy and epileptic subjects under permutation symbolization, 0.1137 and 0.1028, show the largest difference; the corresponding values for global symbolization are 0.0641 and 0.0601, at scale factor 165. The results show that permutation symbolization, which incorporates local information, discriminates better and is more effectively applied to our multi-scale transfer entropy analysis of EEG.
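
    The two symbolization ingredients mentioned above, coarse-graining by a scale factor and ordinal (permutation) symbolization with embedding dimension 3, can be sketched as follows. This is a generic illustration of the standard procedures, not the authors' code; the function names are ours.

    ```python
    from itertools import permutations
    import numpy as np

    def coarse_grain(x, scale):
        """Average consecutive non-overlapping blocks of length `scale`
        (the usual multi-scale coarse-graining step)."""
        x = np.asarray(x, dtype=float)
        n = len(x) // scale
        return x[:n * scale].reshape(n, scale).mean(axis=1)

    def permutation_symbols(x, dim=3):
        """Map each window of `dim` samples to the index of its ordinal
        pattern among the dim! possible orderings."""
        index = {p: i for i, p in enumerate(permutations(range(dim)))}
        x = np.asarray(x, dtype=float)
        return [index[tuple(np.argsort(x[i:i + dim]))]
                for i in range(len(x) - dim + 1)]
    ```

    The symbol sequences produced this way, together with their reversals, would then be fed to a transfer-entropy estimator.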

  6. An automatic holographic adaptive phoropter (United States)

    Amirsolaimani, Babak; Peyghambarian, N.; Schwiegerling, Jim; Bablumyan, Arkady; Savidis, Nickolaos; Peyman, Gholam


Phoropters are the most common instrument used to detect refractive errors. During a refractive exam, lenses are flipped in front of the patient, who looks at the eye chart and tries to read the symbols. The procedure is fully dependent on the cooperation of the patient to read the eye chart, provides only a subjective measurement of visual acuity, and can at best provide a rough estimate of the patient's vision. Phoropters require a skilled examiner, are difficult to use for mass screenings, and are hard to apply to young children and the elderly. We have developed a simplified, lightweight automatic phoropter that can measure the optical error of the eye objectively without requiring the patient's input. The automatic holographic adaptive phoropter is based on a Shack-Hartmann wavefront sensor and three computer-controlled fluidic lenses. The fluidic lens system is designed to provide power and astigmatic corrections over a large range, in less than 20 seconds, without the need for verbal feedback from the patient.

  7. Automatic welding machine for piping

    International Nuclear Information System (INIS)

    Yoshida, Kazuhiro; Koyama, Takaichi; Iizuka, Tomio; Ito, Yoshitoshi; Takami, Katsumi.


A remotely controlled automatic welding machine for piping was developed. This machine can be used effectively for long-distance pipelines, chemical plants, thermal power plants and nuclear power plants from the viewpoint of quality control, labor reduction and controllability. The functions of this welding machine are to inspect the shape and dimensions of the edge preparation by tactile sensing before welding; to detect the temperature of the melt pool, inspect the bead form by tactile sensing, and monitor the welding state by ITV during welding; and to grind the bead surface and inspect the weld metal by ultrasonic testing automatically after welding. The construction of the welding system, the main specifications of the apparatus, the detailed welding procedure, the electrical power source, the cooling system, the structure and handling of the guide ring, the central control system and the operating characteristics are explained. The working procedure, the benefits of using this welding machine, and its application to nuclear power plants and other industrial fields are outlined. The HIDIC 08 is used as the controlling computer. This welding machine is useful for welding stainless steel (SUS) piping as well as carbon steel piping. (Nakai, Y.)

  8. SRV-automatic handling device

    International Nuclear Information System (INIS)

    Yamada, Koji


An automatic handling device for the steam relief valves (SRVs) was developed in order to decrease worker exposure, increase the availability factor, improve reliability and operational safety, and save labor. A survey of actual SRV handling operations was made during a periodical inspection. The SRV automatic handling device consists of four components: a conveyor, an armed conveyor, a lifting machine, and a control/monitoring system. The conveyor is designed so that the existing I-rail installed in the containment vessel can be used without modification; it conveys an SRV along the rail. The armed conveyor, designed for a box rail, is used for an SRV installed away from the rail. The lifting machine brings an SRV installed away from the I-rail to a spot just below the rail so that the SRV can be transferred by the conveyor. The control/monitoring system consists of a control computer, an operation panel, a TV monitor and an annunciator. The SRV handling device is operated by remote control from a control room. Trial equipment was constructed and performance/function testing was carried out using actual SRVs. As a result, it is shown that the SRV handling device can be operated satisfactorily by only two operators. The time required for removal and replacement of one SRV is about 10 minutes. (Nogami, K.)

  9. Automatic scanning of emulsion films

    International Nuclear Information System (INIS)

    D'Ambrosio, N.; Mandrioli, G.; Sirrib, G.


The use of nuclear emulsions in recent large neutrino experiments is largely due to significant developments in this detection technique. In emulsion films, the trajectories of through-going particles are permanently recorded: the emulsion target can thus be considered not only a tracking but also a storage device. If the data readout is performed by automatic scanning systems interfaced to an acquisition computer equipped with a fast frame grabber, nuclear emulsions can be used as a very large target detector and quickly analyzed in particle physics experiments. Techniques for automatic scanning of nuclear emulsions have been under development for decades; the effort was initiated by Niwa at Nagoya (Japan) in the late 1970s. The first large-scale application was the CHORUS experiment; emulsions were then used to search for tau neutrinos in a high-track-density environment in DONUT. In order to measure with high accuracy and high speed, very strict constraints must be satisfied in terms of mechanical precision, camera speed and image-processing power. Recent improvements in this technique are briefly reported.

  10. Automatic generation of tourist brochures

    KAUST Repository

    Birsak, Michael


    We present a novel framework for the automatic generation of tourist brochures that include routing instructions and additional information presented in the form of so-called detail lenses. The first contribution of this paper is the automatic creation of layouts for the brochures. Our approach is based on the minimization of an energy function that combines multiple goals: positioning of the lenses as close as possible to the corresponding region shown in an overview map, keeping the number of lenses low, and an efficient numbering of the lenses. The second contribution is a route-aware simplification of the graph of streets used for traveling between the points of interest (POIs). This is done by reducing the graph consisting of all shortest paths through the minimization of an energy function. The output is a subset of street segments that enable traveling between all the POIs without considerable detours, while at the same time guaranteeing a clutter-free visualization. © 2014 The Author(s) Computer Graphics Forum © 2014 The Eurographics Association and John Wiley & Sons Ltd. Published by John Wiley & Sons Ltd.

  11. Multiscale modeling for materials design: Molecular square catalysts (United States)

    Majumder, Debarshi

    In a wide variety of materials, including a number of heterogeneous catalysts, the properties manifested at the process scale are a consequence of phenomena that occur at different time and length scales. Recent experimental developments allow materials to be designed precisely at the nanometer scale. However, the optimum design of such materials requires capabilities to predict the properties at the process scale based on the phenomena occurring at the relevant scales. The thesis research reported here addresses this need to develop multiscale modeling strategies for the design of new materials. As a model system, a new system of materials called molecular squares was studied in this research. Both serial and parallel multiscale strategies and their components were developed as parts of this work. As a serial component, a parameter estimation tool was developed that uses a hierarchical protocol and consists of two different search elements: a global search method implemented using a genetic algorithm that is capable of exploring large parametric space, and a local search method using gradient search techniques that accurately finds the optimum in a localized space. As an essential component of parallel multiscale modeling, different standard as well as specialized computational fluid dynamics (CFD) techniques were explored and developed in order to identify a technique that is best suited to solve a membrane reactor model employing layered films of molecular squares as the heterogeneous catalyst. The coupled set of non-linear partial differential equations (PDEs) representing the continuum model was solved numerically using three different classes of methods: a split-step method using finite difference (FD); domain decomposition in two different forms, one involving three overlapping subdomains and the other involving a gap-tooth scheme; and the multiple-timestep method that was developed in this research. The parallel multiscale approach coupled continuum

  12. An Automatic Cognitive Graph-Based Segmentation for Detection of Blood Vessels in Retinal Images

    Directory of Open Access Journals (Sweden)

    Rasha Al Shehhi


Full Text Available This paper presents a hierarchical graph-based segmentation for blood vessel detection in digital retinal images. The segmentation employs several perceptual Gestalt principles (similarity, closure, continuity and proximity) to merge segments into coherent, connected vessel-like patterns. The integration of the Gestalt principles relies on object-based features (e.g., color, black top-hat (BTH) morphology and context) and graph-analysis algorithms (e.g., Dijkstra paths). The segmentation framework consists of two main steps: preprocessing and multiscale graph-based segmentation. Preprocessing enhances the lighting conditions, compensating for low illumination contrast, and constructs the features needed to enhance the vessel structure, given the sensitivity of vessel patterns to multiscale/multiorientation structure. Graph-based segmentation reduces the computational processing required by restricting the region of interest to the most semantically meaningful objects. The segmentation was evaluated on three publicly available datasets. Experimental results show that the preprocessing stage achieves better results than state-of-the-art enhancement methods, and the performance of the proposed graph-based segmentation is found to be consistent and comparable to other existing methods, with improved capability of detecting small/thin vessels.

  13. Automatic Fault Characterization via Abnormality-Enhanced Classification

    Energy Technology Data Exchange (ETDEWEB)

    Bronevetsky, G; Laguna, I; de Supinski, B R


    Enterprise and high-performance computing systems are growing extremely large and complex, employing hundreds to hundreds of thousands of processors and software/hardware stacks built by many people across many organizations. As the growing scale of these machines increases the frequency of faults, system complexity makes these faults difficult to detect and to diagnose. Current system management techniques, which focus primarily on efficient data access and query mechanisms, require system administrators to examine the behavior of various system services manually. Growing system complexity is making this manual process unmanageable: administrators require more effective management tools that can detect faults and help to identify their root causes. System administrators need timely notification when a fault is manifested that includes the type of fault, the time period in which it occurred and the processor on which it originated. Statistical modeling approaches can accurately characterize system behavior. However, the complex effects of system faults make these tools difficult to apply effectively. This paper investigates the application of classification and clustering algorithms to fault detection and characterization. We show experimentally that naively applying these methods achieves poor accuracy. Further, we design novel techniques that combine classification algorithms with information on the abnormality of application behavior to improve detection and characterization accuracy. Our experiments demonstrate that these techniques can detect and characterize faults with 65% accuracy, compared to just 5% accuracy for naive approaches.
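
    The abnormality information that the authors combine with classification can be given a minimal illustration: score how far a run's metrics deviate from fault-free baseline runs, then let that score gate or weight the classifier's output. The per-metric z-score summary below is a hypothetical stand-in for whatever abnormality measure the paper actually uses; the function name is ours.

    ```python
    import numpy as np

    def abnormality_score(sample, baseline):
        """Mean absolute z-score of a run's metrics against a matrix of
        fault-free baseline runs (rows = runs, columns = metrics)."""
        mu = baseline.mean(axis=0)
        sigma = baseline.std(axis=0)
        sigma = np.where(sigma > 0, sigma, 1.0)   # guard degenerate metrics
        return float(np.mean(np.abs((sample - mu) / sigma)))
    ```

    A fault characterizer in this spirit might only trust class labels for samples whose abnormality score exceeds a threshold calibrated on the baseline runs.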

  14. Automaticity in reading isiZulu


    Sandra Land


Automaticity, or instant recognition of combinations of letters as units of language, is essential for proficient reading in any language. The article explores automaticity amongst competent adult first-language readers of isiZulu, and the factors associated with it or its opposite, active decoding. Whilst the transparent spelling patterns of isiZulu aid learner readers, some of its orthographical features may militate against their gaining automaticity. These features are agglutination; a c...

  15. Multiscale time-splitting strategy for multiscale multiphysics processes of two-phase flow in fractured media

    KAUST Repository

    Sun, S.


The temporal discretization scheme is one important ingredient of an efficient simulator for two-phase flow in fractured porous media. The applicability of single-scale temporal schemes is restricted by the rapid changes of pressure and saturation in the fracture system with capillarity. In this paper, we propose a multi-scale time-splitting strategy to simulate multi-scale multi-physics processes of two-phase flow in fractured porous media. We use multi-scale time schemes for both the pressure and saturation equations; that is, a large time-step size is employed for the matrix domain, while a small time-step size is applied in the fractures. The total time interval is partitioned into four temporal levels: the first level is used for the pressure in the entire domain; the second level matches the rapid changes of the pressure in the fractures; the third level treats the response gap between the pressure and the saturation; and the fourth level is applied to the saturation in the fractures. This method can reduce the computational cost arising from the implicit solution of the pressure equation. Numerical examples are provided to demonstrate the efficiency of the proposed method.
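
    The matrix/fracture splitting can be caricatured with a two-rate explicit step: the matrix unknowns advance once per large step while the fracture unknowns take several small sub-steps within it. The forward-Euler coupling below is purely illustrative under that simplification; the paper's actual scheme treats the pressure implicitly and uses four temporal levels rather than two.

    ```python
    def multirate_step(u_mat, u_frac, dt, n_sub, f_mat, f_frac):
        """One large time step: the slowly varying matrix unknown takes a
        single step of size dt; the rapidly varying fracture unknown takes
        n_sub sub-steps of size dt / n_sub, seeing the updated matrix state."""
        u_mat = u_mat + dt * f_mat(u_mat, u_frac)
        dt_sub = dt / n_sub
        for _ in range(n_sub):
            u_frac = u_frac + dt_sub * f_frac(u_mat, u_frac)
        return u_mat, u_frac
    ```

    The saving comes from evaluating the expensive matrix update once per large step while only the cheap, localized fracture update runs at the fine rate.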

  16. A new design of automatic vertical drilling tool

    Directory of Open Access Journals (Sweden)

    Yanfeng Ma


    Full Text Available In order to effectively improve penetration rates and enhance wellbore quality for vertical wells, a new Automatic Vertical Drilling Tool (AVDT) based on an Eccentric Braced Structure (EBS) is designed. Applying the operating principle of rotary steerable drilling, the AVDT adds an offset gravity block automatic inclination-sensing mechanism. When the hole deviates, the tool uses the eccentric moment produced by the gravity of the offset gravity block to control the bearing of the guide force, so that well straightening is achieved. The nominal size of the AVDT is designed as 215.9 mm; the other major components' sizes, including the offset angle of the EBS, are determined by theoretical analysis. This paper introduces the structure, operating principle and theoretical analysis of the AVDT, and describes the parameter settings of its key components.

  17. An automatic beam focusing system for MeV protons (United States)

    Udalagama, C. N. B.; Bettiol, A. A.; van Kan, J. A.; Teo, E. J.; Breese, M. B. H.; Osipowicz, T.; Watt, F.


    An automatic focusing system for MeV protons has been developed. The focusing system utilises rapid real-time proton-induced secondary electron imaging of a calibration grid, coupled with a modified Gaussian fit in order to take into account the enhanced secondary electron signal from the calibration grid edge. The focusing system has been successfully applied to MeV protons focused using a coupled triplet configuration of magnetic quadrupole lenses (Oxford triplet). Automatic beam focusing of a coarse beamspot of approximately (5 × 3.5) micrometres in the X and Y directions to a sub-micrometre beamspot of approximately (0.7 × 0.6) micrometres was achieved at a beam current of about 50 pA.
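    A minimal sketch of the edge-aware fitting idea: model the secondary-electron line scan as a Gaussian beam response plus a narrow extra peak from the enhanced yield at the grid edge (the reason a plain Gaussian misfits), then search for the beam width. The model form, amplitudes and coarse grid search below are assumptions for illustration; a real implementation would use a proper least-squares fitter:

```python
import math

# Hypothetical model of a secondary-electron scan across a grid edge.
def edge_enhanced_gaussian(x, amp, sigma, edge_amp, edge_sigma):
    return (amp * math.exp(-x * x / (2 * sigma ** 2))
            + edge_amp * math.exp(-x * x / (2 * edge_sigma ** 2)))

# Synthetic "measurement" with known beam width sigma = 0.35 (arb. units).
xs = [i * 0.05 - 1.0 for i in range(41)]
ys = [edge_enhanced_gaussian(x, 1.0, 0.35, 0.4, 0.08) for x in xs]

# Coarse grid search for the beam width -- the quantity an autofocus
# routine would minimise while adjusting the quadrupole lens currents.
best_sigma, best_err = None, float("inf")
for k in range(10, 81):
    sigma = k * 0.01
    err = sum((y - edge_enhanced_gaussian(x, 1.0, sigma, 0.4, 0.08)) ** 2
              for x, y in zip(xs, ys))
    if err < best_err:
        best_sigma, best_err = sigma, err
print(round(best_sigma, 2))
```

    Keeping the edge term in the model prevents the bright edge signal from inflating the fitted beam width, which is the point of the modified fit described in the abstract.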

  18. Automatic Capture Verification in Pacemakers (Autocapture) – Utility and Problems

    Directory of Open Access Journals (Sweden)

    Ruth Kam


    Full Text Available The concept of a closed-loop feedback system that would automatically assess the pacing threshold and self-adjust the pacing output to ensure consistent myocardial capture has much appeal. Enhancing patient safety in the case of an unexpected rise in threshold, reducing current drain and hence prolonging battery longevity, and reducing the amount of physician intervention required are just some of the advantages. Autocapture (AC) is a proprietary algorithm developed by St Jude Medical CRMD, Sylmar, CA, USA (SJM), which was the first to commercially provide these automatic functions in a single-chamber pacemaker (Microny and Regency), and subsequently in a dual-chamber pacemaker (the Affinity, Entity and Identity families of pacemakers). This article reviews the conditions necessary for AC verification and performance, and the problems encountered in clinical practice.
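    The generic closed-loop idea (not SJM's proprietary AC algorithm) can be sketched as a simple threshold search: step the pacing output down until capture is lost, then pace at the measured threshold plus a safety margin. The step size, margin and capture model below are invented for illustration:

```python
STEP_V = 0.25        # assumed decrement per test pulse, in volts
MARGIN_V = 0.25      # assumed working margin above the found threshold

def capture(output_v, true_threshold_v):
    """Stand-in for evoked-response sensing after a pace."""
    return output_v >= true_threshold_v

def find_threshold(start_v, true_threshold_v):
    v = start_v
    while v > 0 and capture(v, true_threshold_v):
        v -= STEP_V
    # capture was lost at v, so the last capturing output is the threshold
    return v + STEP_V

threshold = find_threshold(start_v=3.0, true_threshold_v=1.9)
print(threshold, threshold + MARGIN_V)
```

    In a real device each test pulse that fails to capture is followed by a high-output backup pulse, which is what makes running such a search safe on a pacemaker-dependent patient.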

  19. An automatic beam focusing system for MeV protons

    International Nuclear Information System (INIS)

    Udalagama, C.N.B.; Bettiol, A.A.; Kan, J.A. van; Teo, E.J.; Breese, M.B.H.; Osipowicz, T.; Watt, F.


    An automatic focusing system for MeV protons has been developed. The focusing system utilises rapid real-time proton-induced secondary electron imaging of a calibration grid, coupled with a modified Gaussian fit in order to take into account the enhanced secondary electron signal from the calibration grid edge. The focusing system has been successfully applied to MeV protons focused using a coupled triplet configuration of magnetic quadrupole lenses (Oxford triplet). Automatic beam focusing of a coarse beamspot of approximately (5 × 3.5) micrometres in the X and Y directions to a sub-micrometre beamspot of approximately (0.7 × 0.6) micrometres was achieved at a beam current of about 50 pA.

  20. Automatic Multimedia Creation Enriched with Dynamic Conceptual Data

    Directory of Open Access Journals (Sweden)

    Angel Martín


    Full Text Available There is a growing gap between multimedia production and context-centric multimedia services. The main problem is the under-exploitation of the content creation design. The idea is to support dynamic content generation adapted to the user or display profile. Our work is an implementation of a web platform for automatic generation of multimedia presentations based on the SMIL (Synchronized Multimedia Integration Language) standard. The system is able to produce rich media with dynamic multimedia content retrieved automatically from different content databases matching the semantic context. For this purpose, we extend the standard interpretation of SMIL tags in order to accomplish a semantic translation of multimedia objects into database queries. This permits services to take advantage of the production process to create customized content enhanced with real-time information fed from databases. The described system has been successfully deployed to create advanced context-centric weather forecasts.
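    The semantic-translation idea can be sketched with the standard library's XML tools: a SMIL fragment whose media references are semantic queries, resolved against a content store at generation time. The tag names, query keys and URLs below are invented for illustration, not the paper's platform:

```python
import xml.etree.ElementTree as ET

# Stand-in for the content databases queried at generation time.
content_db = {
    "weather/today/map": "http://example.org/media/map.png",
    "weather/today/audio": "http://example.org/media/forecast.mp3",
}

def resolve(query):
    """Translate a semantic reference into a concrete media URL."""
    return content_db[query]

# Build a minimal SMIL body: an image and its narration play in parallel.
smil = ET.Element("smil")
body = ET.SubElement(smil, "body")
par = ET.SubElement(body, "par")
ET.SubElement(par, "img", src=resolve("weather/today/map"), dur="10s")
ET.SubElement(par, "audio", src=resolve("weather/today/audio"))

print(ET.tostring(smil, encoding="unicode"))
```

    Because the queries rather than the URLs live in the template, the same presentation design can be re-rendered whenever the databases change, which is how a platform of this kind keeps content such as weather forecasts current.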