WorldWideScience

Sample records for automatic multiscale enhancement

  1. Adaptive multiscale processing for contrast enhancement

    Science.gov (United States)

    Laine, Andrew F.; Song, Shuwu; Fan, Jian; Huda, Walter; Honeyman, Janice C.; Steinbach, Barbara G.

    1993-07-01

    This paper introduces a novel approach for accomplishing mammographic feature analysis through overcomplete multiresolution representations. We show that efficient representations may be identified from digital mammograms within a continuum of scale space and used to enhance features of importance to mammography. Choosing analyzing functions that are well localized in both space and frequency results in a powerful methodology for image analysis. We describe methods of contrast enhancement based on two overcomplete (redundant) multiscale representations: (1) the dyadic wavelet transform and (2) the φ-transform. Mammograms are reconstructed from transform coefficients modified at one or more levels by non-linear, logarithmic and constant scale-space weight functions. Multiscale edges identified within distinct levels of transform space provide a local support for enhancement throughout each decomposition. We demonstrate that features extracted from wavelet spaces can provide an adaptive mechanism for accomplishing local contrast enhancement. We suggest that multiscale detection and local enhancement of singularities may be effectively employed for the visualization of breast pathology without excessive noise amplification.
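
    The core operation described in this record (apply an overcomplete multiscale transform, modify detail coefficients with a nonlinear gain, then reconstruct) can be illustrated with an undecimated wavelet transform. The sketch below is a minimal illustration under stated assumptions, not the authors' exact operators: the db2 wavelet, the tanh-shaped gain, and the per-level normalization by the coefficient standard deviation are all illustrative choices.

```python
import numpy as np
import pywt  # PyWavelets

def multiscale_enhance(img, wavelet="db2", levels=3, gain=2.0):
    """Overcomplete (stationary) wavelet contrast enhancement sketch.

    img: 2D float array whose sides are divisible by 2**levels (swt2 requirement).
    A soft, tanh-shaped gain boosts small detail coefficients more than large
    ones, which limits noise amplification; this is an illustrative operator,
    not the paper's logarithmic/constant weight functions.
    """
    coeffs = pywt.swt2(img, wavelet, level=levels)        # redundant (undecimated) transform
    enhanced = []
    for approx, details in coeffs:
        boosted = []
        for d in details:
            t = np.std(d) + 1e-8                          # scale-adaptive reference level
            boosted.append(np.sign(d) * t * gain * np.tanh(np.abs(d) / t))
        enhanced.append((approx, tuple(boosted)))
    return pywt.iswt2(enhanced, wavelet)                  # reconstruct the enhanced image
```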

  2. Mammographic feature enhancement by multiscale analysis

    International Nuclear Information System (INIS)

    Laine, A.F.; Schuler, S.; Fan, J.; Huda, W.

    1994-01-01

    This paper introduces a novel approach for accomplishing mammographic feature analysis by overcomplete multiresolution representations. The authors show that efficient representations may be identified within a continuum of scale-space and used to enhance features of importance to mammography. Methods of contrast enhancement are described based on three overcomplete multiscale representations: (1) the dyadic wavelet transform (separable), (2) the φ-transform (nonseparable, nonorthogonal), and (3) the hexagonal wavelet transform (nonseparable). Multiscale edges identified within distinct levels of transform space provide local support for image enhancement. Mammograms are reconstructed from wavelet coefficients modified at one or more levels by local and global nonlinear operators. In each case, edges and gain parameters are identified adaptively by a measure of energy within each level of scale-space. The authors show quantitatively that transform coefficients, modified by adaptive nonlinear operators, can make unseen or barely seen mammographic features more obvious without requiring additional radiation. The results are compared with traditional image enhancement techniques by measuring the local contrast of known mammographic features. The authors demonstrate that features extracted from multiresolution representations can provide an adaptive mechanism for accomplishing local contrast enhancement. By improving the visualization of breast pathology, they can improve chances of early detection while requiring less time to evaluate mammograms for most patients.

  3. Multiscale Retinex

    Directory of Open Access Journals (Sweden)

    Ana Belén Petro

    2014-04-01

    While the retinex theory aimed at explaining human color perception, its derivations have led to efficient algorithms enhancing local image contrast, thus permitting, among other features, to "see in the shadows". Among these derived algorithms, Multiscale Retinex is probably the most successful center-surround image filter. In this paper, we offer an analysis and implementation of Multiscale Retinex. We point out and resolve some ambiguities of the method. In particular, we show that the important color correction final step of the method can be seriously improved. This analysis makes it possible to come up with an automatic implementation of Multiscale Retinex which is as faithful as possible to the one described in the original paper. Overall, this implementation delivers excellent results and confirms the validity of Multiscale Retinex for image color restoration and contrast enhancement. Nevertheless, while the method parameters can be fixed, we show that a crucial choice must be left to the user, depending on the lighting conditions of the image: the method must either be applied to each color channel independently if a color balance is required, or to the luminance only if the goal is to achieve local contrast enhancement. Thus, we propose two slightly different algorithms to deal with both cases.
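
    A minimal single-channel Multiscale Retinex, close in spirit to the center-surround formulation analyzed in this record, can be sketched as follows. The three Gaussian scales are the commonly used defaults; the uniform weights, the final min-max stretch, and the omission of the color-restoration step are simplifications, not the reference implementation described in the paper.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def multiscale_retinex(channel, sigmas=(15, 80, 250), eps=1.0):
    """Multiscale Retinex on one plane (a color channel or the luminance).

    channel: 2D array of non-negative intensities. Apply per channel for color
    balance, or to the luminance only for pure local contrast enhancement, as
    the record distinguishes.
    """
    channel = channel.astype(np.float64) + eps            # avoid log(0)
    msr = np.zeros_like(channel)
    for sigma in sigmas:
        surround = gaussian_filter(channel, sigma)        # center-surround at one scale
        msr += np.log(channel) - np.log(surround + eps)   # single-scale retinex
    msr /= len(sigmas)                                    # uniform multiscale weights
    msr = (msr - msr.min()) / (msr.max() - msr.min() + 1e-12)  # simple gain/offset stretch
    return (255 * msr).astype(np.uint8)
```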

  4. Morphological rational multi-scale algorithm for color contrast enhancement

    Science.gov (United States)

    Peregrina-Barreto, Hayde; Terol-Villalobos, Iván R.

    2010-01-01

    The main goal of contrast enhancement is to improve the visual appearance of an image, but it is also used to provide a transformed image suitable for segmentation. In mathematical morphology, several works have been derived from the framework theory for contrast enhancement proposed by Meyer and Serra. However, when working with images with a wide range of scene brightness, for example when strong highlights and deep shadows appear in the same image, the proposed morphological methods do not achieve the enhancement. In this work, a rational multi-scale method, which uses a class of morphological connected filters called filters by reconstruction, is proposed. Granulometry is used to find the most significant scales for the filters and to avoid the use of less significant scales. The CIE u'v'Y' space was used to present our results since it takes Weber's law into account and, by avoiding the creation of new colors, permits modifying the luminance values without affecting the hue. The luminance component (Y') is enhanced separately using the proposed method, and it is then used to enhance the chromatic components (u', v') by means of the center-of-gravity law of color mixing.
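
    The building block of this record, a morphological filter by reconstruction, is available in common image libraries, so the multi-scale idea can be sketched directly. The radii and weights below are illustrative, and the simple weighted sum of residues stands in for the paper's rational combination and granulometry-driven scale selection.

```python
import numpy as np
from skimage.morphology import erosion, disk, reconstruction

def opening_by_reconstruction(lum, radius):
    """Connected (reconstruction) opening: removes bright structures smaller
    than the structuring element without creating new contours."""
    seed = erosion(lum, disk(radius))
    return reconstruction(seed, lum, method="dilation")

def multiscale_reconstruction_enhance(lum, radii=(2, 8, 32), weights=(0.5, 0.3, 0.2)):
    """Sketch of multi-scale enhancement from filters by reconstruction.

    lum: 2D luminance image (float, 0-255). Each residue holds the bright
    details removed at one scale; adding weighted residues back boosts them.
    """
    lum = lum.astype(np.float64)
    out = lum.copy()
    for r, w in zip(radii, weights):
        residue = lum - opening_by_reconstruction(lum, r)   # bright details at scale r
        out += w * residue
    return np.clip(out, 0, 255)
```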

  5. Automatic facial pore analysis system using multi-scale pore detection.

    Science.gov (United States)

    Sun, J Y; Kim, S W; Lee, S H; Choi, J E; Ko, S J

    2017-08-01

    As facial pore widening and its treatments have become common concerns in the beauty care field, the necessity for an objective pore-analyzing system has increased. Conventional apparatuses lack usability, requiring strong light sources and a cumbersome photographing process, and they often yield unsatisfactory analysis results. This study was conducted to develop an image processing technique for automatic facial pore analysis. The proposed method detects facial pores using a multi-scale detection and optimal scale selection scheme and then extracts pore-related features such as total area, average size, depth, and the number of pores. Facial photographs of 50 subjects were graded by two expert dermatologists, and correlation analyses between the features and clinical grading were conducted. We also compared our analysis results with those of conventional pore-analyzing devices. The number of large pores and the average pore size were highly correlated with the severity of pore enlargement. In comparison with the conventional devices, the proposed analysis system achieved better performance, showing stronger correlation with the clinical grading. The proposed system is highly accurate and reliable for measuring the severity of skin pore enlargement. It can be suitably used for objective assessment of pore tightening treatments. © 2016 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
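
    The paper's multi-scale pore detector is specific to the study, but the general idea (detect small dark blobs across a range of scales, then aggregate count/size features) can be sketched with a standard Laplacian-of-Gaussian blob detector. The scale range, threshold, and feature definitions below are assumptions for illustration.

```python
import numpy as np
from skimage.feature import blob_log

def detect_pores(gray, min_sigma=1, max_sigma=4, threshold=0.02):
    """Stand-in multi-scale pore detection via Laplacian-of-Gaussian blobs.

    gray: 2D skin image scaled to [0, 1], with pores darker than the skin.
    Returns per-pore (row, col, sigma) and simple aggregate features in the
    spirit of the record (count, average size, total area).
    """
    blobs = blob_log(1.0 - gray, min_sigma=min_sigma, max_sigma=max_sigma,
                     num_sigma=8, threshold=threshold)     # invert so pores are bright blobs
    radii = blobs[:, 2] * np.sqrt(2)                        # LoG sigma -> approximate radius
    features = {
        "count": int(len(blobs)),
        "avg_radius": float(radii.mean()) if len(blobs) else 0.0,
        "total_area": float(np.pi * (radii ** 2).sum()),
    }
    return blobs, features
```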

  6. Color Image Enhancement Using Multiscale Retinex Based on Particle Swarm Optimization Method

    Science.gov (United States)

    Matin, F.; Jeong, Y.; Kim, K.; Park, K.

    2018-01-01

    This paper introduces a novel method for image enhancement using multiscale retinex and particle swarm optimization (PSO). Multiscale retinex is a widely used image enhancement technique that depends heavily on parameters such as the Gaussian scales, gain, and offset. To achieve the best effect, the parameters need to be tuned manually for each image. In order to handle this matter, a retinex algorithm based on PSO has been developed. The PSO method adjusts the parameters of multiscale retinex with chromaticity preservation (MSRCP) and attains better outcomes compared with other existing methods. The experimental results indicate that the proposed algorithm is efficient: it not only provides true color fidelity in low light conditions but also avoids color distortion at the same time.

  7. Automatic Craniomaxillofacial Landmark Digitization via Segmentation-guided Partially-joint Regression Forest Model and Multi-scale Statistical Features

    Science.gov (United States)

    Zhang, Jun; Gao, Yaozong; Wang, Li; Tang, Zhen; Xia, James J.; Shen, Dinggang

    2016-01-01

    Objective: The goal of this paper is to automatically digitize craniomaxillofacial (CMF) landmarks efficiently and accurately from cone-beam computed tomography (CBCT) images, by addressing the challenge caused by large morphological variations across patients and image artifacts of CBCT images. Methods: We propose a Segmentation-guided Partially-joint Regression Forest (S-PRF) model to automatically digitize CMF landmarks. In this model, a regression voting strategy is first adopted to localize each landmark by aggregating evidences from context locations, thus potentially relieving the problem caused by image artifacts near the landmark. Second, CBCT image segmentation is utilized to remove uninformative voxels caused by morphological variations across patients. Third, a partially-joint model is further proposed to separately localize landmarks based on the coherence of landmark positions to improve the digitization reliability. In addition, we propose a fast vector quantization (VQ) method to extract high-level multi-scale statistical features to describe a voxel's appearance, which has low dimensionality, high efficiency, and is also invariant to the local inhomogeneity caused by artifacts. Results: Mean digitization errors for 15 landmarks, in comparison to the ground truth, are all less than 2 mm. Conclusion: Our model has addressed the challenges of both inter-patient morphological variations and imaging artifacts. Experiments on a CBCT dataset show that our approach achieves clinically acceptable accuracy for landmark digitization. Significance: Our automatic landmark digitization method can be used clinically to reduce labor cost and improve digitization consistency. PMID:26625402

  8. Automatic ultrasound image enhancement for 2D semi-automatic breast-lesion segmentation

    Science.gov (United States)

    Lu, Kongkuo; Hall, Christopher S.

    2014-03-01

    Breast cancer is the fastest growing cancer, accounting for 29% of new cases in 2012, and the second leading cause of cancer death among women in the United States and worldwide. Ultrasound (US) has been used as an indispensable tool for breast cancer detection/diagnosis and treatment. In computer-aided assistance, lesion segmentation is a preliminary but vital step, but the task is quite challenging in US images, due to imaging artifacts that complicate detection and measurement of the suspect lesions. The lesions usually present with poor boundary features and vary significantly in size, shape, and intensity distribution between cases. Automatic methods are highly application dependent, while manual tracing methods are extremely time consuming and have a great deal of intra- and inter-observer variability. Semi-automatic approaches are designed to balance the advantages and drawbacks of the automatic and manual methods. However, considerable user interaction might be necessary to ensure reasonable segmentation for a wide range of lesions. This work proposes an automatic enhancement approach to improve the boundary searching ability of the live wire method to reduce necessary user interaction while keeping the segmentation performance. Based on the results of segmentation of 50 2D breast lesions in US images, less user interaction is required to achieve the desired accuracy, i.e. < 80%, when auto-enhancement is applied for live-wire segmentation.

  9. Automatic anatomically selective image enhancement in digital chest radiography

    International Nuclear Information System (INIS)

    Sezan, M.I.; Minerbo, G.N.; Schaetzing, R.

    1989-01-01

    The authors develop a technique for automatic anatomically selective enhancement of digital chest radiographs. Anatomically selective enhancement is motivated by the desire to simultaneously meet the different enhancement requirements of the lung field and the mediastinum. A recent peak detection algorithm and a set of rules are applied to the image histogram to automatically determine a gray-level threshold between the lung field and mediastinum. The gray-level threshold facilitates anatomically selective gray-scale modification and/or unsharp masking. Further, in an attempt to suppress possible white-band or black-band artifacts due to unsharp masking at sharp edges, local-contrast adaptivity is incorporated into anatomically selective unsharp masking by designing an anatomy-sensitive emphasis parameter which varies asymmetrically with positive and negative values of the local image contrast.
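
    The anatomy-selective scheme described here amounts to splitting the histogram into lung and mediastinum ranges and applying unsharp masking with a region-dependent emphasis. A minimal sketch follows; Otsu's threshold stands in for the rule-based peak detection, the gains are illustrative, and the code assumes the lung field occupies the brighter gray levels (swap the gains if the convention is inverted).

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage.filters import threshold_otsu

def selective_unsharp(chest, lung_gain=0.5, mediastinum_gain=1.5, sigma=8):
    """Anatomically selective unsharp masking sketch for a chest radiograph."""
    chest = chest.astype(np.float64)
    thr = threshold_otsu(chest)                      # stand-in for histogram peak-detection rules
    detail = chest - gaussian_filter(chest, sigma)   # unsharp-mask detail signal
    gain = np.where(chest > thr, lung_gain, mediastinum_gain)  # region-dependent emphasis
    return chest + gain * detail
```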

  10. Automatic Classification of Normal and Cancer Lung CT Images Using Multiscale AM-FM Features

    Directory of Open Access Journals (Sweden)

    Eman Magdy

    2015-01-01

    Computer-aided diagnostic (CAD) systems provide fast and reliable diagnosis for medical images. In this paper, a CAD system is proposed to analyze and automatically segment the lungs and classify each lung as normal or cancerous. Using a lung CT dataset of 70 different patients, Wiener filtering is first applied to the original CT images as a preprocessing step. Secondly, we combine histogram analysis with thresholding and morphological operations to segment the lung regions and extract each lung separately. Thirdly, the amplitude-modulation frequency-modulation (AM-FM) method has been used to extract features from the ROIs. Then, the significant AM-FM features have been selected using partial least squares regression (PLSR) for the classification step. Finally, K-nearest neighbour (KNN), support vector machine (SVM), naïve Bayes, and linear classifiers have been used with the selected AM-FM features. The performance of each classifier in terms of accuracy, sensitivity, and specificity is evaluated. The results indicate that our proposed CAD system succeeded in differentiating between normal and cancerous lungs and achieved 95% accuracy with the linear classifier.
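
    The first two stages of this pipeline (Wiener denoising, then threshold-based lung extraction with morphology) can be sketched with standard library calls; the AM-FM feature extraction, PLSR selection, and classification stages are not reproduced. The window size, threshold choice, and minimum object size are illustrative assumptions.

```python
import numpy as np
from scipy.signal import wiener
from skimage.filters import threshold_otsu
from skimage.morphology import binary_closing, remove_small_objects, disk
from skimage.segmentation import clear_border
from skimage.measure import label

def segment_lungs(ct_slice):
    """Sketch of lung-region extraction from a single CT slice."""
    denoised = wiener(ct_slice.astype(np.float64), (5, 5))   # preprocessing (Wiener filter)
    mask = denoised < threshold_otsu(denoised)               # air/lung is dark on CT
    mask = clear_border(mask)                                # drop the air outside the body
    mask = binary_closing(mask, disk(3))                     # smooth the lung borders
    mask = remove_small_objects(mask, min_size=500)          # remove noise and small airways
    return label(mask)                                       # separate the left and right lungs
```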

  11. Feature and Contrast Enhancement of Mammographic Image Based on Multiscale Analysis and Morphology

    Directory of Open Access Journals (Sweden)

    Shibin Wu

    2013-01-01

    A new algorithm for feature and contrast enhancement of mammographic images is proposed in this paper. The approach is based on multiscale transforms and mathematical morphology. First of all, the Laplacian-Gaussian pyramid operator is applied to decompose the mammogram into subband images at different scales. Then, the detail (high-frequency) subimages are equalized by contrast limited adaptive histogram equalization (CLAHE), and the low-pass subimages are processed by mathematical morphology. Finally, the feature- and contrast-enhanced image is reconstructed from the Laplacian-Gaussian pyramid coefficients modified at one or more levels by contrast limited adaptive histogram equalization and mathematical morphology, respectively. The enhanced image is then processed by a global nonlinear operator. The experimental results show that the presented algorithm is effective for feature and contrast enhancement of mammograms. The performance of the proposed algorithm is measured by a contrast evaluation criterion for images, the signal-to-noise ratio (SNR), and the contrast improvement index (CII).
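
    The decomposition-equalization-reconstruction loop described above can be sketched with a hand-rolled Laplacian pyramid and CLAHE on the detail layers. The morphological processing of the low-pass layer and the final global nonlinear operator are omitted here; the level count, smoothing sigma, and clip limit are illustrative.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage.transform import resize
from skimage.exposure import equalize_adapthist

def build_laplacian_pyramid(img, levels=3, sigma=2.0):
    """Burt-Adelson style pyramid: band-pass detail layers plus a low-pass residual."""
    details, current = [], img
    for _ in range(levels):
        low = resize(gaussian_filter(current, sigma),
                     [s // 2 for s in current.shape], order=1, anti_aliasing=False)
        up = resize(low, current.shape, order=1, anti_aliasing=False)
        details.append(current - up)                    # detail at this scale
        current = low
    return details, current                             # current = low-pass residual

def pyramid_clahe_enhance(img, levels=3, clip=0.01):
    """Equalize the detail layers with CLAHE and rebuild the image."""
    img = img.astype(np.float64)
    img = (img - img.min()) / (img.max() - img.min() + 1e-12)   # CLAHE expects [0, 1]
    details, residual = build_laplacian_pyramid(img, levels)
    enhanced = []
    for d in details:
        lo, hi = d.min(), d.max()
        eq = equalize_adapthist((d - lo) / (hi - lo + 1e-12), clip_limit=clip)
        enhanced.append(eq * (hi - lo) + lo)            # CLAHE on the re-centered detail layer
    out = residual
    for d in reversed(enhanced):
        out = resize(out, d.shape, order=1, anti_aliasing=False) + d   # rebuild from coarse to fine
    return np.clip(out, 0, 1)
```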

  12. Enhanced inertia from lossy effective fluids using multi-scale sonic crystals

    Directory of Open Access Journals (Sweden)

    Matthew D. Guild

    2014-12-01

    In this work, a recent theoretically predicted phenomenon of enhanced permittivity with electromagnetic waves using lossy materials is investigated for the analogous case of mass density and acoustic waves, which represents inertial enhancement. Starting from fundamental relationships for the homogenized quasi-static effective density of a fluid host with fluid inclusions, theoretical expressions are developed for the conditions on the real and imaginary parts of the constitutive fluids to have inertial enhancement, which are verified with numerical simulations. Realizable structures are designed to demonstrate this phenomenon using multi-scale sonic crystals, which are fabricated using a 3D printer and tested in an acoustic impedance tube, yielding good agreement with the theoretical predictions and demonstrating enhanced inertia.

  13. Accessories for Enhancement of the Semi-Automatic Welding Processes

    National Research Council Canada - National Science Library

    Wheeler, Douglas M; Sawhill, James M

    2000-01-01

    The project's objective is to identify specific areas of the semi-automatic welding operation that is performed with the major semi-automatic processes, which would be more productive if a suitable...

  14. Automatic tools for enhancing the collaborative experience in large projects

    International Nuclear Information System (INIS)

    Bourilkov, D; Rodriquez, J L

    2014-01-01

    With the explosion of big data in many fields, the efficient management of knowledge about all aspects of the data analysis gains in importance. A key feature of collaboration in large scale projects is keeping a log of what is being done and how - for private use, reuse, and for sharing selected parts with collaborators and peers, often distributed geographically on an increasingly global scale. Even better if the log is automatically created on the fly while the scientist or software developer is working in a habitual way, without the need for extra efforts. This saves time and enables a team to do more with the same resources. The CODESH - COllaborative DEvelopment SHell - and CAVES - Collaborative Analysis Versioning Environment System projects address this problem in a novel way. They build on the concepts of virtual states and transitions to enhance the collaborative experience by providing automatic persistent virtual logbooks. CAVES is designed for sessions of distributed data analysis using the popular ROOT framework, while CODESH generalizes the approach for any type of work on the command line in typical UNIX shells like bash or tcsh. Repositories of sessions can be configured dynamically to record and make available the knowledge accumulated in the course of a scientific or software endeavor. Access can be controlled to define logbooks of private sessions or sessions shared within or between collaborating groups. A typical use case is building working scalable systems for analysis of Petascale volumes of data as encountered in the LHC experiments. Our approach is general enough to find applications in many fields.

  15. Energy landscape of all-atom protein-protein interactions revealed by multiscale enhanced sampling.

    Directory of Open Access Journals (Sweden)

    Kei Moritsugu

    2014-10-01

    Protein-protein interactions are regulated by a subtle balance of complicated atomic interactions and solvation at the interface. To understand such an elusive phenomenon, it is necessary to thoroughly survey the large configurational space from the stable complex structure to the dissociated states using the all-atom model in explicit solvent and to delineate the energy landscape of protein-protein interactions. In this study, we carried out a multiscale enhanced sampling (MSES) simulation of the formation of a barnase-barstar complex, which is a protein complex characterized by extraordinarily tight and fast binding, to determine the energy landscape of atomistic protein-protein interactions. The MSES adopts a multicopy and multiscale scheme to enable enhanced sampling of the all-atom model of large proteins including explicit solvent. During the 100-ns MSES simulation of the barnase-barstar system, we observed the association-dissociation processes of the atomistic protein complex in solution several times, which contained not only the native complex structure but also fully non-native configurations. The sampled distributions suggest that a large variety of non-native states went downhill to the stable complex structure, like a fast folding on a funnel-like potential. This funnel landscape is attributed to dominant configurations in the early stage of the association process characterized by near-native orientations, which will accelerate the native inter-molecular interactions. These configurations are guided mostly by the shape complementarity between barnase and barstar, and lead to the fast formation of the final complex structure along the downhill energy landscape.

  16. Enhancement tuning and control for high dynamic range images in multi-scale locally adaptive contrast enhancement algorithms

    Science.gov (United States)

    Cvetkovic, Sascha D.; Schirris, Johan; de With, Peter H. N.

    2009-01-01

    For real-time imaging in surveillance applications, visibility of details is of primary importance to ensure customer confidence. If we display High-Dynamic-Range (HDR) scenes, whose contrast spans four or more orders of magnitude, on a conventional monitor without additional processing, the results are unacceptable. Compression of the dynamic range is therefore a compulsory part of any high-end video processing chain, because standard monitors are inherently Low-Dynamic-Range (LDR) devices with at most two orders of magnitude of display dynamic range. In real-time camera processing, many complex scenes are improved with local contrast enhancements, bringing details to the best possible visibility. In this paper, we show how a multi-scale high-frequency enhancement scheme, in which gain is a non-linear function of the detail energy, can be used for the dynamic range compression of HDR real-time video camera signals. We also show the connection of our enhancement scheme to the processing performed by the Human Visual System (HVS). Our algorithm simultaneously controls perceived sharpness, ringing ("halo") artifacts (contrast) and noise, resulting in a good balance between visibility of details and non-disturbance of artifacts. The overall quality enhancement, suitable for both HDR and LDR scenes, is based on a careful selection of the filter types for the multi-band decomposition and a detailed analysis of the signal per frequency band.
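
    The key idea here, a detail gain that is a non-linear function of the local detail energy, can be sketched as below. The band sigmas, the log-domain processing, the base compression factor, and the specific gain law are illustrative assumptions, not the camera pipeline of the paper.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def compress_and_enhance(y, sigmas=(2, 8, 32), base_gamma=0.5, max_gain=3.0):
    """Multi-band dynamic-range compression with energy-dependent detail gain.

    y: positive linear luminance, possibly spanning several decades.
    Strong local detail energy lowers the gain, which limits halo/ringing
    around high-contrast edges while still boosting faint texture.
    """
    y = np.log1p(y.astype(np.float64))                  # work in a log-like domain
    bands, current = [], y
    for s in sigmas:
        low = gaussian_filter(current, s)
        bands.append(current - low)                     # band-pass detail
        current = low
    out = base_gamma * current                          # compress the base (low-pass) layer
    for b in bands:
        energy = gaussian_filter(b * b, sigmas[-1])     # local detail energy estimate
        gain = 1.0 + (max_gain - 1.0) / (1.0 + 10.0 * energy)   # large energy -> smaller gain
        out += gain * b
    return np.expm1(out)                                # back to linear luminance
```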

  17. SU-E-J-16: Automatic Image Contrast Enhancement Based On Automatic Parameter Optimization for Radiation Therapy Setup Verification

    Energy Technology Data Exchange (ETDEWEB)

    Qiu, J [Taishan Medical University, Taian, Shandong (China); Washington University in St Louis, St Louis, MO (United States); Li, H. Harlod; Zhang, T; Yang, D [Washington University in St Louis, St Louis, MO (United States); Ma, F [Taishan Medical University, Taian, Shandong (China)

    2015-06-15

    Purpose: In RT patient setup 2D images, tissues often cannot be seen well due to the lack of image contrast. Contrast enhancement features provided by image reviewing software, e.g. Mosaiq and ARIA, require manual selection of the image processing filters and parameters, and are thus inefficient and cannot be automated. In this work, we developed a novel method to automatically enhance the 2D RT image contrast to allow automatic verification of patient daily setups as a prerequisite step of automatic patient safety assurance. Methods: The new method is based on contrast limited adaptive histogram equalization (CLAHE) and high-pass filtering algorithms. The most important innovation is to automatically select the optimal parameters by optimizing the image contrast. The image processing procedure includes the following steps: 1) background and noise removal, 2) high-pass filtering by subtracting the Gaussian-smoothed result, and 3) histogram equalization using the CLAHE algorithm. Three parameters were determined through an iterative optimization based on the interior-point constrained optimization algorithm: the Gaussian smoothing weighting factor, the CLAHE block size, and the CLAHE clip limit. The goal of the optimization is to maximize the entropy of the processed result. Results: A total of 42 RT images were processed. The results were visually evaluated by RT physicians and physicists. About 48% of the images processed by the new method were ranked as excellent. In comparison, only 29% and 18% of the images processed by the basic CLAHE algorithm and by the basic window-level adjustment process, respectively, were ranked as excellent. Conclusion: This new image contrast enhancement method is robust and automatic, and is able to significantly outperform the basic CLAHE algorithm and the manual window-level adjustment process that are currently used in clinical 2D image review software tools.
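
    The optimization loop described in the Methods (pick the high-pass weight and CLAHE parameters that maximize the entropy of the processed image) can be sketched with a coarse grid search standing in for the interior-point optimizer. The parameter grids, the fixed Gaussian sigma, and the normalization steps are illustrative assumptions.

```python
import numpy as np
from itertools import product
from scipy.ndimage import gaussian_filter
from skimage.exposure import equalize_adapthist

def entropy(img, bins=256):
    """Shannon entropy of the gray-level histogram (the optimization objective)."""
    hist, _ = np.histogram(img, bins=bins, range=(0.0, 1.0))
    p = hist[hist > 0] / hist.sum()
    return float(-(p * np.log2(p)).sum())

def auto_enhance(img, weights=(0.3, 0.6, 0.9), blocks=(4, 8, 16), clips=(0.005, 0.01, 0.02)):
    """Entropy-maximizing contrast enhancement sketch (high-pass + CLAHE)."""
    img = img.astype(np.float64)
    img = (img - img.min()) / (img.max() - img.min() + 1e-12)
    best, best_h = img, -np.inf
    for w, b, c in product(weights, blocks, clips):
        hp = img - w * gaussian_filter(img, sigma=8)            # high-pass: subtract smoothed image
        hp = (hp - hp.min()) / (hp.max() - hp.min() + 1e-12)    # renormalize before CLAHE
        out = equalize_adapthist(hp, kernel_size=max(8, img.shape[0] // b), clip_limit=c)
        h = entropy(out)
        if h > best_h:                                          # keep the highest-entropy result
            best, best_h = out, h
    return best, best_h
```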

  18. SU-E-J-16: Automatic Image Contrast Enhancement Based On Automatic Parameter Optimization for Radiation Therapy Setup Verification

    International Nuclear Information System (INIS)

    Qiu, J; Li, H. Harlod; Zhang, T; Yang, D; Ma, F

    2015-01-01

    Purpose: In RT patient setup 2D images, tissues often cannot be seen well due to the lack of image contrast. Contrast enhancement features provided by image reviewing software, e.g. Mosaiq and ARIA, require manual selection of the image processing filters and parameters, and are thus inefficient and cannot be automated. In this work, we developed a novel method to automatically enhance the 2D RT image contrast to allow automatic verification of patient daily setups as a prerequisite step of automatic patient safety assurance. Methods: The new method is based on contrast limited adaptive histogram equalization (CLAHE) and high-pass filtering algorithms. The most important innovation is to automatically select the optimal parameters by optimizing the image contrast. The image processing procedure includes the following steps: 1) background and noise removal, 2) high-pass filtering by subtracting the Gaussian-smoothed result, and 3) histogram equalization using the CLAHE algorithm. Three parameters were determined through an iterative optimization based on the interior-point constrained optimization algorithm: the Gaussian smoothing weighting factor, the CLAHE block size, and the CLAHE clip limit. The goal of the optimization is to maximize the entropy of the processed result. Results: A total of 42 RT images were processed. The results were visually evaluated by RT physicians and physicists. About 48% of the images processed by the new method were ranked as excellent. In comparison, only 29% and 18% of the images processed by the basic CLAHE algorithm and by the basic window-level adjustment process, respectively, were ranked as excellent. Conclusion: This new image contrast enhancement method is robust and automatic, and is able to significantly outperform the basic CLAHE algorithm and the manual window-level adjustment process that are currently used in clinical 2D image review software tools.

  19. Multiscale mechanics of the lateral pressure effect on enhancing the load transfer between polymer coated CNTs.

    Science.gov (United States)

    Yazdandoost, Fatemeh; Mirzaeifar, Reza; Qin, Zhao; Buehler, Markus J

    2017-05-04

    While individual carbon nanotubes (CNTs) are among the strongest fibers known, even the strongest fabricated macroscale CNT yarns and fibers are still significantly weaker than individual nanotubes. The loss in mechanical properties is mainly because the deformation mechanism of CNT fibers is highly governed by the weak shear strength corresponding to sliding of nanotubes on each other. Adding a polymer coating to the bundles and twisting the CNT yarns to enhance the intertube interactions are both efficient methods to improve the mechanical properties of macroscale yarns. Here, we perform molecular dynamics (MD) simulations to unravel the unknown deformation mechanism in the intertube polymer chains and also the local deformations of the CNTs at the atomistic scale. Our results show that lateral pressure can have both beneficial and adverse effects on the shear strength of polymer coated CNTs, depending on the local deformations at the atomistic scale. In this paper we also introduce a bottom-up bridging strategy between a full atomistic model and a coarse-grained (CG) model. Our trained CG model is capable of incorporating the atomistic-scale local deformations of each CNT into the larger-scale collective behavior of bundles, which enables the model to accurately predict the effect of lateral pressure on larger CNT bundles and yarns. The developed multiscale CG model is implemented to study the effect of lateral pressure on the shear strength of straight polymer coated CNT yarns, and also the effect of twisting on the pull-out force of bundles in spun CNT yarns.

  20. Enhancing the Automatic Generation of Hints with Expert Seeding

    Science.gov (United States)

    Stamper, John; Barnes, Tiffany; Croy, Marvin

    2011-01-01

    The Hint Factory is an implementation of our novel method to automatically generate hints using past student data for a logic tutor. One disadvantage of the Hint Factory is the time needed to gather enough data on new problems in order to provide hints. In this paper we describe the use of expert sample solutions to "seed" the hint generation…

  1. An enhanced model for automatically extracting topic phrase from ...

    African Journals Online (AJOL)

    The key benefit foreseen from this automatic document classification is not only related to search engines, but also to many other fields like, document organization, text filtering and semantic index managing. Key words: Keyphrase extraction, machine learning, search engine snippet, document classification, topic tracking ...

  2. Automatic Ship Detection in Remote Sensing Images from Google Earth of Complex Scenes Based on Multiscale Rotation Dense Feature Pyramid Networks

    Directory of Open Access Journals (Sweden)

    Xue Yang

    2018-01-01

    Ship detection has been playing a significant role in the field of remote sensing for a long time, but it is still full of challenges. The main limitations of traditional ship detection methods usually lie in the complexity of application scenarios, the difficulty of intensive object detection, and the redundancy of the detection region. In order to solve these problems, we propose a framework called Rotation Dense Feature Pyramid Networks (R-DFPN) which can effectively detect ships in different scenes including ocean and port. Specifically, we put forward the Dense Feature Pyramid Network (DFPN), which is aimed at solving problems resulting from the narrow width of the ship. Compared with previous multiscale detectors such as Feature Pyramid Network (FPN), DFPN builds high-level semantic feature-maps for all scales by means of dense connections, through which feature propagation is enhanced and feature reuse is encouraged. Additionally, in the case of ship rotation and dense arrangement, we design a rotation anchor strategy to predict the minimum circumscribed rectangle of the object so as to reduce the redundant detection region and improve the recall. Furthermore, we also propose multiscale region of interest (ROI) Align for the purpose of maintaining the completeness of the semantic and spatial information. Experiments based on remote sensing images from Google Earth for ship detection show that our detection method based on the R-DFPN representation has state-of-the-art performance.

  3. Spatio-Temporal Super-Resolution Reconstruction of Remote-Sensing Images Based on Adaptive Multi-Scale Detail Enhancement.

    Science.gov (United States)

    Zhu, Hong; Tang, Xinming; Xie, Junfeng; Song, Weidong; Mo, Fan; Gao, Xiaoming

    2018-02-07

    There are many problems in existing reconstruction-based super-resolution algorithms, such as the lack of texture-feature representation and of high-frequency details. Multi-scale detail enhancement can produce more texture information and high-frequency information. Therefore, super-resolution reconstruction of remote-sensing images based on adaptive multi-scale detail enhancement (AMDE-SR) is proposed in this paper. First, the information entropy of each remote-sensing image is calculated, and the image with the maximum entropy value is regarded as the reference image. Subsequently, spatio-temporal remote-sensing images are processed using phase normalization, which reduces the time-phase difference of the image data and enhances the complementarity of information. The multi-scale image information is then decomposed using the L0 gradient minimization model, and the non-redundant information is processed by difference calculation and by expanding the non-redundant layers and the redundant layer with the iterative back-projection (IBP) technique. The different-scale non-redundant information is adaptively weighted and fused using cross-entropy. Finally, a nonlinear texture-detail-enhancement function is built to improve the scope of small details, and the peak signal-to-noise ratio (PSNR) is used as an iterative constraint. Ultimately, high-resolution remote-sensing images with abundant texture information are obtained by iterative optimization. Results on real data show an average gain in entropy of up to 0.42 dB for an up-scaling of 2 and a significant gain in the enhancement measure evaluation for an up-scaling of 2. The experimental results show that the performance of the AMDE-SR method is better than that of existing super-resolution reconstruction methods in terms of visual and accuracy improvements.
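
    The first step of the pipeline, selecting the maximum-entropy image of the spatio-temporal stack as the reference, is simple enough to sketch directly; the later steps (phase normalization, L0-gradient decomposition, IBP, cross-entropy fusion) are not reproduced here.

```python
import numpy as np

def image_entropy(img, bins=256):
    """Shannon information entropy of an 8-bit-range intensity image."""
    hist, _ = np.histogram(img, bins=bins, range=(0, 256))
    p = hist[hist > 0] / hist.sum()
    return float(-(p * np.log2(p)).sum())

def select_reference(images):
    """Pick the maximum-entropy image of a spatio-temporal stack as the reference."""
    entropies = [image_entropy(im) for im in images]
    ref_index = int(np.argmax(entropies))    # the richest-information image becomes the reference
    return ref_index, entropies
```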

  4. Spatio-Temporal Super-Resolution Reconstruction of Remote-Sensing Images Based on Adaptive Multi-Scale Detail Enhancement

    Science.gov (United States)

    Zhu, Hong; Tang, Xinming; Xie, Junfeng; Song, Weidong; Mo, Fan; Gao, Xiaoming

    2018-01-01

    There are many problems in existing reconstruction-based super-resolution algorithms, such as the lack of texture-feature representation and of high-frequency details. Multi-scale detail enhancement can produce more texture information and high-frequency information. Therefore, super-resolution reconstruction of remote-sensing images based on adaptive multi-scale detail enhancement (AMDE-SR) is proposed in this paper. First, the information entropy of each remote-sensing image is calculated, and the image with the maximum entropy value is regarded as the reference image. Subsequently, spatio-temporal remote-sensing images are processed using phase normalization, which reduces the time-phase difference of the image data and enhances the complementarity of information. The multi-scale image information is then decomposed using the L0 gradient minimization model, and the non-redundant information is processed by difference calculation and by expanding the non-redundant layers and the redundant layer with the iterative back-projection (IBP) technique. The different-scale non-redundant information is adaptively weighted and fused using cross-entropy. Finally, a nonlinear texture-detail-enhancement function is built to improve the scope of small details, and the peak signal-to-noise ratio (PSNR) is used as an iterative constraint. Ultimately, high-resolution remote-sensing images with abundant texture information are obtained by iterative optimization. Results on real data show an average gain in entropy of up to 0.42 dB for an up-scaling of 2 and a significant gain in the enhancement measure evaluation for an up-scaling of 2. The experimental results show that the performance of the AMDE-SR method is better than that of existing super-resolution reconstruction methods in terms of visual and accuracy improvements. PMID:29414893

  5. Automatic, anatomically selective, artifact-free enhancement of digital chest radiographs

    International Nuclear Information System (INIS)

    Sezan, M.I.; Tekalp, A.M.; Schaetzing, R.

    1988-01-01

    The authors propose a technique for automatic, anatomically selective, artifact-free enhancement of digital chest radiographs. Anatomically selective enhancement is motivated by the different enhancement requirements of the lung field and the mediastinum. A recent peak detection algorithm is applied to the image histogram to automatically determine a gray-level threshold between the lung and mediastinum fields. The gray-level threshold facilitates anatomically selective gray-scale modification and unsharp masking. Further, in an attempt to suppress possible white-band artifacts due to unsharp masking at sharp edges, local-contrast adaptivity is incorporated into anatomically selective unsharp masking by designing an anatomy-sensitive emphasis parameter that varies asymmetrically with positive and negative values of the local image contrast.

  6. Comparison of liver volumetry on contrast-enhanced CT images: one semiautomatic and two automatic approaches.

    Science.gov (United States)

    Cai, Wei; He, Baochun; Fan, Yingfang; Fang, Chihua; Jia, Fucang

    2016-11-08

    This study aimed to evaluate the accuracy, consistency, and efficiency of three liver volumetry methods on clinical contrast-enhanced CT images: an interactive, in-house-developed 3D Medical Image Analysis (3DMIA) system; an automatic active shape model (ASM)-based segmentation; and an automatic probabilistic atlas (PA)-guided segmentation method. Forty-two datasets, including 27 normal liver and 15 space-occupying liver lesion patients, were retrospectively included in this study. The three methods (the semiautomatic 3DMIA, the automatic ASM-based, and the automatic PA-based liver volumetry) achieved an accuracy with VD (volume difference) of -1.69%, -2.75%, and 3.06% in the normal group, respectively, and with VD of -3.20%, -3.35%, and 4.14% in the space-occupying lesion group, respectively. However, the three methods achieved an efficiency of 27.63 mins, 1.26 mins, and 1.18 mins on average, respectively, compared with the manual volumetry, which took 43.98 mins. The high intraclass correlation coefficient between the three methods and the manual method indicated an excellent agreement on liver volumetry. Significant differences in segmentation time were observed between the three methods (3DMIA, ASM, and PA) and the manual volumetry, as well as between the two automatic volumetries (ASM and PA) and the semiautomatic volumetry (3DMIA) (p < 0.001). The semiautomatic interactive 3DMIA, automatic ASM-based, and automatic PA-based liver volumetry agreed well with the manual gold standard in both the normal liver group and the space-occupying lesion group. The ASM- and PA-based automatic segmentations have better efficiency in clinical use. © 2016 The Authors.

  7. Multiscale Tensor Anisotropic Filtering of Fluorescence Microscopy for Denoising Microvasculature.

    Science.gov (United States)

    Prasath, V B S; Pelapur, R; Glinskii, O V; Glinsky, V V; Huxley, V H; Palaniappan, K

    2015-04-01

    Fluorescence microscopy images are contaminated by noise, and improving image quality without blurring vascular structures by filtering is an important step in automatic image analysis. The application of interest here is to automatically and accurately extract the structural components of the microvascular system from images acquired by fluorescence microscopy. A robust denoising process is necessary in order to extract accurate vascular morphology information. For this purpose, we propose a multiscale tensor anisotropic diffusion model which progressively and adaptively updates the amount of smoothing while preserving vessel boundaries accurately. Based on a coherency-enhancing flow with a planar confidence measure and fused 3D structure information, our method integrates multiple scales for microvasculature preservation and noise removal in membrane structures. Experimental results on simulated synthetic images and epifluorescence images show the advantage of our improvement over other related diffusion filters. We further show that the proposed multiscale integration approach improves the denoising accuracy of different tensor diffusion methods to obtain better microvasculature segmentation.
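
    As a much-simplified stand-in for the multiscale tensor diffusion above, the classic scalar Perona-Malik scheme shows the core mechanism: the conductance drops where gradients are strong, so vessel boundaries are preserved while flat regions are smoothed. The edge-stopping constant, step size, and iteration count are illustrative, and the wrap-around border handling of np.roll is accepted for brevity.

```python
import numpy as np

def perona_malik(img, n_iter=20, kappa=0.05, step=0.2):
    """Scalar anisotropic (Perona-Malik) diffusion sketch for a 2D image in [0, 1]."""
    u = img.astype(np.float64).copy()
    for _ in range(n_iter):
        # differences to the four neighbors (np.roll wraps at the border)
        dn = np.roll(u, -1, axis=0) - u
        ds = np.roll(u, 1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        # edge-stopping conductance: small where the local gradient is large
        cn, cs = np.exp(-(dn / kappa) ** 2), np.exp(-(ds / kappa) ** 2)
        ce, cw = np.exp(-(de / kappa) ** 2), np.exp(-(dw / kappa) ** 2)
        u += step * (cn * dn + cs * ds + ce * de + cw * dw)
    return u
```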

  8. Adaptive Multiscale Noise Control Enhanced Stochastic Resonance Method Based on Modified EEMD with Its Application in Bearing Fault Diagnosis

    Directory of Open Access Journals (Sweden)

    Jimeng Li

    2016-01-01

    The structure of mechanical equipment is becoming increasingly complex, and the tough environments under which it works often make bearings and gears subject to failure. However, effective extraction of useful feature information submerged in strong noise that is indicative of structural defects has remained a major challenge. Therefore, an adaptive multiscale noise control enhanced stochastic resonance (SR) method based on modified ensemble empirical mode decomposition (EEMD) for mechanical fault diagnosis is proposed in this paper. According to the oscillation characteristics of the signal itself, the modified EEMD algorithm can adaptively decompose the fault signals into different scales, and it reduces the decomposition levels to improve the calculation efficiency of the proposed method. Through filter processing with the constructed filters, the orthogonality of adjacent intrinsic mode functions (IMFs) can be improved, which is conducive to enhancing the extraction of weak features from strong noise. The constructed signal obtained by using the IMFs is input into the SR system, and the noise control parameter of different scales is optimized and selected with the help of the genetic algorithm, thus achieving enhanced extraction of weak features. Finally, simulation experiments and an engineering application of bearing fault diagnosis demonstrate the effectiveness and feasibility of the proposed method.

  9. Automatic detection of arterial input function in dynamic contrast enhanced MRI based on affinity propagation clustering.

    Science.gov (United States)

    Shi, Lin; Wang, Defeng; Liu, Wen; Fang, Kui; Wang, Yi-Xiang J; Huang, Wenhua; King, Ann D; Heng, Pheng Ann; Ahuja, Anil T

    2014-05-01

    To automatically and robustly detect the arterial input function (AIF) with high detection accuracy and low computational cost in dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI). In this study, we developed an automatic AIF detection method using an accelerated version (Fast-AP) of affinity propagation (AP) clustering. The validity of this Fast-AP-based method was proved on two DCE-MRI datasets, i.e., rat kidney and human head and neck. The detailed AIF detection performance of this proposed method was assessed in comparison with other clustering-based methods, namely original AP and K-means, as well as the manual AIF detection method. Both the automatic AP- and Fast-AP-based methods achieved satisfactory AIF detection accuracy, but the computational cost of Fast-AP could be reduced by 64.37-92.10% on rat dataset and 73.18-90.18% on human dataset compared with the cost of AP. The K-means yielded the lowest computational cost, but resulted in the lowest AIF detection accuracy. The experimental results demonstrated that both the AP- and Fast-AP-based methods were insensitive to the initialization of cluster centers, and had superior robustness compared with K-means method. The Fast-AP-based method enables automatic AIF detection with high accuracy and efficiency. Copyright © 2013 Wiley Periodicals, Inc.
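
    A clustering-based AIF search in the spirit of this record can be sketched with scikit-learn's standard affinity propagation (not the accelerated Fast-AP variant of the paper): cluster the voxel time-intensity curves and keep the cluster whose mean curve has the earliest, tallest first-pass peak. The arterial score below is a simple heuristic assumption.

```python
import numpy as np
from sklearn.cluster import AffinityPropagation

def detect_aif(curves, damping=0.9):
    """Clustering-based AIF detection sketch.

    curves: array of shape (n_voxels, n_timepoints) of contrast-enhancement
    time courses. Returns the mean curve of the most arterial-looking cluster.
    """
    curves = np.asarray(curves, dtype=np.float64)
    labels = AffinityPropagation(damping=damping, random_state=0).fit_predict(curves)
    best_label, best_score = None, -np.inf
    for lab in np.unique(labels):
        mean_curve = curves[labels == lab].mean(axis=0)
        score = mean_curve.max() / (np.argmax(mean_curve) + 1)   # favor high and early peaks
        if score > best_score:
            best_label, best_score = lab, score
    return curves[labels == best_label].mean(axis=0), best_label
```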

  10. Enhanced Electrochemical and Thermal Transport Properties of Graphene/MoS2 Heterostructures for Energy Storage: Insights from Multiscale Modeling.

    Science.gov (United States)

    Gong, Feng; Ding, Zhiwei; Fang, Yin; Tong, Chuan-Jia; Xia, Dawei; Lv, Yingying; Wang, Bin; Papavassiliou, Dimitrios V; Liao, Jiaxuan; Wu, Mengqiang

    2018-05-02

    Graphene has been combined with molybdenum disulfide (MoS2) to ameliorate the poor cycling stability and rate performance of MoS2 in lithium ion batteries, yet the underlying mechanisms remain less explored. Here, we develop multiscale modeling to investigate the enhanced electrochemical and thermal transport properties of graphene/MoS2 heterostructures (GM-Hs) with a complex morphology. The calculated electronic structures demonstrate the greatly improved electrical conductivity of GM-Hs compared to MoS2. Increasing the number of graphene layers in GM-Hs not only improves the electrical conductivity but also stabilizes the intercalated Li atoms in GM-Hs. It is also found that GM-Hs with three graphene layers could achieve and maintain a high thermal conductivity of 85.5 W/(m·K) over a large temperature range (100-500 K), nearly 6 times that of pure MoS2 [∼15 W/(m·K)], which may accelerate heat conduction from the electrodes to the ambient. Our quantitative findings may shed light on the enhanced battery performance of various graphene/transition-metal chalcogenide composites in energy storage devices.

  11. Enhanced Automatic Action Imitation and Intact Imitation-Inhibition in Schizophrenia.

    Science.gov (United States)

    Simonsen, Arndis; Fusaroli, Riccardo; Skewes, Joshua Charles; Roepstorff, Andreas; Campbell-Meiklejohn, Daniel; Mors, Ole; Bliksted, Vibeke

    2018-02-21

    Imitation plays a key role in social learning and in facilitating social interactions and likely constitutes a basic building block of social cognition that supports higher-level social abilities. Recent findings suggest that patients with schizophrenia have imitation impairments that could contribute to the social impairments associated with the disorder. However, extant studies have specifically assessed voluntary imitation or automatic imitation of emotional stimuli without controlling for potential confounders. The imitation impairments seen might therefore be secondary to other cognitive, motoric, or emotional deficits associated with the disorder. To overcome this issue, we used an automatic imitation paradigm with nonemotional stimuli to assess automatic imitation and the top-down modulation of imitation where participants were required to lift one of 2 fingers according to a number shown on the screen while observing the same or the other finger movement. In addition, we used a control task with a visual cue in place of a moving finger, to isolate the effect of observing finger movement from other visual cueing effects. Data from 33 patients (31 medicated) and 40 matched healthy controls were analyzed. Patients displayed enhanced imitation and intact top-down modulation of imitation. The enhanced imitation seen in patients may have been medication induced as larger effects were seen in patients receiving higher antipsychotic doses. In sum, we did not find an imitation impairment in schizophrenia. The results suggest that previous findings of impaired imitation in schizophrenia might have been due to other cognitive, motoric, and/or emotional deficits.

  12. Comparison of liver volumetry on contrast‐enhanced CT images: one semiautomatic and two automatic approaches

    Science.gov (United States)

    Cai, Wei; He, Baochun; Fang, Chihua

    2016-01-01

    This study aimed to evaluate the accuracy, consistency, and efficiency of three liver volumetry methods on clinical contrast-enhanced CT images: an interactive, in-house-developed 3D Medical Image Analysis (3DMIA) system; an automatic active shape model (ASM)-based segmentation; and an automatic probabilistic atlas (PA)-guided segmentation method. Forty-two datasets, including 27 normal liver and 15 space-occupying liver lesion patients, were retrospectively included in this study. The three methods (the semiautomatic 3DMIA, the automatic ASM-based, and the automatic PA-based liver volumetry) achieved an accuracy with VD (volume difference) of -1.69%, -2.75%, and 3.06% in the normal group, respectively, and with VD of -3.20%, -3.35%, and 4.14% in the space-occupying lesion group, respectively. However, the three methods achieved an efficiency of 27.63 mins, 1.26 mins, and 1.18 mins on average, respectively, compared with the manual volumetry, which took 43.98 mins. The high intraclass correlation coefficient between the three methods and the manual method indicated an excellent agreement on liver volumetry. Significant differences in segmentation time were observed between the three methods (3DMIA, ASM, and PA) and the manual volumetry, as well as between the two automatic volumetries (ASM and PA) and the semiautomatic volumetry (3DMIA). The semiautomatic interactive 3DMIA, automatic ASM-based, and automatic PA-based liver volumetry agreed well with the manual gold standard in both the normal liver group and the space-occupying lesion group. The ASM- and PA-based automatic segmentations have better efficiency in clinical use. PACS number(s): 87.55.-x PMID:27929487

  13. The Multiscale Bowler-Hat Transform for Vessel Enhancement in 3D Biomedical Images

    OpenAIRE

    Sazak, Cigdem; Nelson, Carl J.; Obara, Boguslaw

    2018-01-01

    Enhancement and detection of 3D vessel-like structures have long been an open problem, as most existing image processing methods fail in many aspects, including a lack of uniform enhancement between vessels of different radii and a lack of enhancement at the junctions. Here, we propose a method based on mathematical morphology to enhance 3D vessel-like structures in biomedical images. The proposed method, the 3D bowler-hat transform, combines sphere and line structuring elements to enhance vessel-like structures.

  14. Automatic method for selective enhancement of different tissue densities at digital chest radiography

    International Nuclear Information System (INIS)

    McNitt-Gray, M.F.; Taira, R.K.; Eldredge, S.L.; Razavi, M.

    1991-01-01

    This paper reports that digital chest radiographs often are too bright and/or lack contrast when viewed on a video display. The authors have developed a method that can automatically provide a series of look-up tables that selectively enhance the radiographically soft or dense tissues on a digital chest radiograph. This reduces viewer interaction and improves displayed image quality. On the basis of a histogram analysis, gray-level ranges are approximated for the patient background, radiographically soft tissues, and radiographically dense tissues. A series of look-up tables is automatically created by varying the contrast in each range to achieve a level of enhancement for a selected tissue range. This is repeated for differing amounts of enhancement and for each tissue range. This allows the viewer to interactively select a tissue density range and degree of enhancement at the time of display via precalculated look-up tables. Preclinical trials in pediatric radiology using computed radiography images show that this method reduces viewer interaction and improves or maintains the displayed image quality

  15. Automatic segmentation of the choroid in enhanced depth imaging optical coherence tomography images.

    Science.gov (United States)

    Tian, Jing; Marziliano, Pina; Baskaran, Mani; Tun, Tin Aung; Aung, Tin

    2013-03-01

    Enhanced Depth Imaging (EDI) optical coherence tomography (OCT) provides high-definition cross-sectional images of the choroid in vivo, and hence is used in many clinical studies. However, the quantification of the choroid depends on the manual labeling of two boundaries, Bruch's membrane and the choroidal-scleral interface. This labeling process is tedious and subject to inter-observer differences; hence, automatic segmentation of the choroid layer is highly desirable. In this paper, we present a fast and accurate algorithm that can segment the choroid automatically. Bruch's membrane is detected by searching for the pixel with the largest gradient value above the retinal pigment epithelium (RPE), and the choroidal-scleral interface is delineated by finding the shortest path through the graph formed by valley pixels using Dijkstra's algorithm. Experiments comparing the automatic segmentation results with the manual labelings were conducted on 45 EDI-OCT images, and the average Dice coefficient is 90.5%, which shows good consistency of the algorithm with the manual labelings. The processing time for each image is about 1.25 seconds.
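
    The choroidal-scleral interface step (a shortest path through valley pixels) can be sketched with a Dijkstra-style minimum-cost path across the B-scan, where dark (valley) pixels are cheap to traverse. The fixed start/end rows and the cost definition are illustrative assumptions, not the paper's exact graph construction.

```python
import numpy as np
from skimage.graph import route_through_array

def choroid_scleral_path(bscan, cost_floor=1e-6):
    """Minimum-cost left-to-right path through dark (valley) pixels of a B-scan."""
    b = bscan.astype(np.float64)
    b = (b - b.min()) / (b.max() - b.min() + 1e-12)
    cost = b + cost_floor                       # dark valley pixels become cheap to traverse
    mid = b.shape[0] // 2                       # illustrative start/end rows at mid-height
    path, _ = route_through_array(cost, start=(mid, 0), end=(mid, b.shape[1] - 1),
                                  fully_connected=True, geometric=True)
    return np.asarray(path)                     # (row, col) pixels of the detected interface
```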

  16. Quadrant Dynamic with Automatic Plateau Limit Histogram Equalization for Image Enhancement

    Directory of Open Access Journals (Sweden)

    P. Jagatheeswari

    2014-01-01

    The fundamental and important preprocessing stage in image processing is the image contrast enhancement technique. Histogram equalization is an effective contrast enhancement technique. In this paper, a histogram equalization based technique called quadrant dynamic with automatic plateau limit histogram equalization (QDAPLHE) is introduced. In this method, a hybrid of dynamic and clipped histogram equalization methods is used to increase brightness preservation and to reduce over-enhancement. Initially, the proposed QDAPLHE algorithm passes the input image through a median filter to remove the noise present in the image. Then the histogram of the filtered image is divided into four subhistograms while setting the second separation point at the mean brightness. Then the clipping process is implemented by automatically calculating the plateau limit as the clipping level. The clipped portion of the histogram is modified to reduce the loss of image intensity values. Finally the clipped portion is redistributed uniformly to the entire dynamic range and conventional histogram equalization is executed in each subhistogram independently. Based on the qualitative and quantitative analysis, the QDAPLHE method outperforms some existing methods in the literature.
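
    The clipping-and-redistribution step at the heart of QDAPLHE can be sketched on a single (sub)histogram as below; the quadrant split around the mean brightness, the median pre-filtering, and the paper's automatic plateau formula are not reproduced, and the mean bin count is used as a simple stand-in plateau.

```python
import numpy as np

def clipped_histogram_equalization(img, bins=256, plateau=None):
    """Plateau-limited (clipped) histogram equalization sketch for one sub-image."""
    img = img.astype(np.float64)
    hist, edges = np.histogram(img, bins=bins)
    if plateau is None:
        plateau = hist.mean()                    # simple automatic plateau limit (stand-in)
    clipped = np.minimum(hist, plateau)          # clip the histogram at the plateau
    excess = hist.sum() - clipped.sum()
    clipped = clipped + excess / bins            # redistribute the clipped mass uniformly
    cdf = np.cumsum(clipped) / clipped.sum()     # equalization mapping
    lo, hi = img.min(), img.max()
    out = np.interp(img.ravel(), edges[:-1], lo + cdf * (hi - lo))
    return out.reshape(img.shape)
```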

  17. Automatic Enhancement of the Reference Set for Multi-Criteria Sorting in The Frame of Theseus Method

    Directory of Open Access Journals (Sweden)

    Fernandez Eduardo

    2014-05-01

    Some recent works have established the importance of handling abundant reference information in multi-criteria sorting problems. More valid information allows a better characterization of the agent's assignment policy, which can lead to improved decision support. However, sometimes information for enhancing the reference set may not be available, or may be too expensive. This paper explores an automatic mode of enhancing the reference set in the framework of the THESEUS multi-criteria sorting method. Some performance measures are defined in order to test the results of the enhancement. Several theoretical arguments and practical experiments are provided here, supporting a basic advantage of the automatic enhancement: a reduction of the vagueness measure that improves the THESEUS accuracy, without additional effort from the decision agent. The experiments suggest that the errors coming from inadequate automatic assignments can be kept at a manageable level.

  18. Distributed multiscale computing

    NARCIS (Netherlands)

    Borgdorff, J.

    2014-01-01

    Multiscale models combine knowledge, data, and hypotheses from different scales. Simulating a multiscale model often requires extensive computation. This thesis evaluates distributing these computations, an approach termed distributed multiscale computing (DMC). First, the process of multiscale

  19. Multi-Scale Computational Enzymology: Enhancing Our Understanding of Enzymatic Catalysis

    OpenAIRE

    Rami Gherib; Hisham M. Dokainish; James W. Gauld

    2013-01-01

    Elucidating the origin of enzymatic catalysis stands as one of the great challenges of contemporary biochemistry and biophysics. The recent emergence of computational enzymology has enhanced our atomistic-level description of biocatalysis as well as the kinetic and thermodynamic properties of their mechanisms. There exists a diversity of computational methods allowing the investigation of specific enzymatic properties. Small or large density functional theory models allow the comparison of a pleth...

  20. A Multi-scale Approach for CO2 Accounting and Risk Analysis in CO2 Enhanced Oil Recovery Sites

    Science.gov (United States)

    Dai, Z.; Viswanathan, H. S.; Middleton, R. S.; Pan, F.; Ampomah, W.; Yang, C.; Jia, W.; Lee, S. Y.; McPherson, B. J. O. L.; Grigg, R.; White, M. D.

    2015-12-01

    Using carbon dioxide in enhanced oil recovery (CO2-EOR) is a promising technology for emissions management because CO2-EOR can dramatically reduce carbon sequestration costs in the absence of greenhouse gas emissions policies that include incentives for carbon capture and storage. This study develops a multi-scale approach to perform CO2 accounting and risk analysis for understanding CO2 storage potential within an EOR environment at the Farnsworth Unit of the Anadarko Basin in northern Texas. A set of geostatistical-based Monte Carlo simulations of CO2-oil-water flow and transport in the Marrow formation are conducted for global sensitivity and statistical analysis of the major risk metrics: CO2 injection rate, CO2 first breakthrough time, CO2 production rate, cumulative net CO2 storage, cumulative oil and CH4 production, and water injection and production rates. A global sensitivity analysis indicates that reservoir permeability, porosity, and thickness are the major intrinsic reservoir parameters that control net CO2 injection/storage and oil/CH4 recovery rates. The well spacing (the distance between the injection and production wells) and the sequence of alternating CO2 and water injection are the major operational parameters for designing an effective five-spot CO2-EOR pattern. The response surface analysis shows that net CO2 injection rate increases with the increasing reservoir thickness, permeability, and porosity. The oil/CH4 production rates are positively correlated to reservoir permeability, porosity and thickness, but negatively correlated to the initial water saturation. The mean and confidence intervals are estimated for quantifying the uncertainty ranges of the risk metrics. The results from this study provide useful insights for understanding the CO2 storage potential and the corresponding risks of commercial-scale CO2-EOR fields.

  1. Enhancement of the automatic ultrasonic signal processing system using digital technology

    International Nuclear Information System (INIS)

    Koo, In Soo; Park, H. Y.; Suh, Y. S.; Kim, D. Hoon; Huh, S.; Sung, S. H.; Jang, G. S.; Ryoo, S. G.; Choi, J. H.; Kim, Y. H.; Lee, J. C.; Kim, D. Hyun; Park, H. J.; Kim, Y. C.; Lee, J. P.; Park, C. H.; Kim, M. S.

    1999-12-01

    The objective of this study is to develop an automatic ultrasonic signal processing system that can be used in the inspection equipment for assessing the integrity of the reactor vessel, by enhancing the performance of the ultrasonic signal processing system. The main activities of this study fall into three categories: the development of the circuits for generating the ultrasonic signal and receiving the signal from the inspection equipment, the development of the signal processing algorithm and the H/W of the data processing system, and the development of the specifications for the application programs and the system S/W of the analysis and evaluation computer. The results of the main activities are as follows: 1) the design of the ultrasonic detector and the automatic ultrasonic signal processing system, based on a survey of the state-of-the-art technology at home and abroad; 2) the development of the H/W and S/W of the data processing system based on these results. In particular, the H/W of the data processing system, which combines the advantages of digital and analog control through real-time digital signal processing, was developed using a DSP that can process digital signals in real time; in addition, the firmware of the data processing system for the peripherals and the test algorithm of the calibration specimen were developed. The application programs and the system S/W of the analysis/evaluation computer were also developed. The developed equipment was verified by a performance test. Based on the developed prototype of the automatic ultrasonic signal processing system, localization of the overall ultrasonic inspection equipment for the nuclear industry can be expected through further studies on establishing the H/W for real applications and developing the S/W specification of the analysis computer. (author)

  2. Resolution enhancement of lung 4D-CT data using multiscale interphase iterative nonlocal means

    International Nuclear Information System (INIS)

    Zhang Yu; Yap, Pew-Thian; Wu Guorong; Feng Qianjin; Chen Wufan; Lian Jun; Shen Dinggang

    2013-01-01

    Purpose: Four-dimensional computer tomography (4D-CT) has been widely used in lung cancer radiotherapy due to its capability in providing important tumor motion information. However, the prolonged scanning duration required by 4D-CT causes a considerable increase in radiation dose. To minimize the radiation-related health risk, radiation dose is often reduced at the expense of interslice spatial resolution. However, inadequate resolution in 4D-CT causes artifacts and increases uncertainty in tumor localization, which eventually results in extra damage to healthy tissues during radiotherapy. In this paper, the authors propose a novel postprocessing algorithm to enhance the resolution of lung 4D-CT data. Methods: The authors' premise is that anatomical information missing in one phase can be recovered from the complementary information embedded in other phases. The authors employ a patch-based mechanism to propagate information across phases for the reconstruction of intermediate slices in the longitudinal direction, where resolution is normally the lowest. Specifically, the structurally matching and spatially nearby patches are combined for reconstruction of each patch. For greater sensitivity to anatomical details, the authors employ a quad-tree technique to adaptively partition the image for more fine-grained refinement. The authors further devise an iterative strategy for significant enhancement of anatomical details. Results: The authors evaluated their algorithm using a publicly available lung dataset that consists of 10 4D-CT cases. The authors' algorithm gives very promising results with significantly enhanced image structures and far fewer artifacts. Quantitative analysis shows that the authors' algorithm increases the peak signal-to-noise ratio by 3-4 dB and the structural similarity index by 3%-5% when compared with standard interpolation-based algorithms. Conclusions: The authors have developed a new algorithm to improve the resolution of 4D-CT. It

  3. Automatic fringe enhancement with novel bidimensional sinusoids-assisted empirical mode decomposition.

    Science.gov (United States)

    Wang, Chenxing; Kemao, Qian; Da, Feipeng

    2017-10-02

    Fringe-based optical measurement techniques require reliable fringe analysis methods, where empirical mode decomposition (EMD) is an outstanding one due to its ability to analyze complex signals and the merit of being data-driven. However, two challenging issues hinder the application of EMD in practical measurement. One is the tricky mode mixing problem (MMP), making the decomposed intrinsic mode functions (IMFs) have equivocal physical meaning; the other is the automatic and accurate extraction of the sinusoidal fringe from the IMFs when unpredictable and unavoidable background and noise exist in real measurements. Accordingly, in this paper, a novel bidimensional sinusoids-assisted EMD (BSEMD) is proposed to decompose a fringe pattern into mono-component bidimensional IMFs (BIMFs), with the MMP solved; properties of the resulting BIMFs are then analyzed to recognize and enhance the useful fringe component. The decomposition and the fringe recognition are integrated and the latter provides feedback to the former, helping to automatically stop the decomposition to make the algorithm simpler and more reliable. A series of experiments show that the proposed method is accurate, efficient and robust to various fringe patterns even with poor quality, rendering it a potential tool for practical use.

  4. Multi-scale Control and Enhancement of Reactor Boiling Heat Flux by Reagents and Nanoparticles

    Energy Technology Data Exchange (ETDEWEB)

    Manglik, R M; Athavale, A; Kalaikadal, D S; Deodhar, A; Verma, U

    2011-09-02

    The phenomenological characterization of the use of non-invasive and passive techniques to enhance the boiling heat transfer in water has been carried out in this extended study. It provides fundamental enhanced heat transfer data for nucleate boiling and discusses the associated physics with the aim of addressing future and next-generation reactor thermal-hydraulic management. It essentially addresses the hypothesis that in phase-change processes during boiling, the primary mechanisms can be related to the liquid-vapor interfacial tension and surface wetting at the solid-liquid interface. These interfacial characteristics can be significantly altered and decoupled by introducing small quantities of additives in water, such as surface-active polymers, surfactants, and nanoparticles. The changes are fundamentally caused at a molecular scale by the relative bulk molecular dynamics and adsorption-desorption of the additive at the liquid-vapor interface, and its physisorption and electrokinetics at the liquid-solid interface. At the micro-scale, the transient transport mechanisms at the solid-liquid-vapor interface during nucleation and bubble growth can be attributed to thin-film spreading, surface-micro-cavity activation, and micro-layer evaporation. Furthermore, at the macro-scale, the heat transport is in turn governed by the bubble growth and distribution, macro-layer heat transfer, bubble dynamics (bubble coalescence, collapse, break-up, and translation), and liquid rheology. Some of these behaviors and processes are measured and characterized in this study, the outcomes of which advance the concomitant fundamental physics, as well as provide insights for developing control strategies for the molecular-scale manipulation of interfacial tension and surface wetting in boiling by means of polymeric reagents, surfactants, and other soluble surface-active additives.

  5. Multi-scale Control and Enhancement of Reactor Boiling Heat Flux by Reagents and Nanoparticles

    International Nuclear Information System (INIS)

    Manglik, R.M.; Athavale, A.; Kalaikadal, D.S.; Deodhar, A.; Verma, U.

    2011-01-01

    The phenomenological characterization of the use of non-invasive and passive techniques to enhance the boiling heat transfer in water has been carried out in this extended study. It provides fundamental enhanced heat transfer data for nucleate boiling and discusses the associated physics with the aim of addressing future and next-generation reactor thermal-hydraulic management. It essentially addresses the hypothesis that in phase-change processes during boiling, the primary mechanisms can be related to the liquid-vapor interfacial tension and surface wetting at the solid-liquid interface. These interfacial characteristics can be significantly altered and decoupled by introducing small quantities of additives in water, such as surface-active polymers, surfactants, and nanoparticles. The changes are fundamentally caused at a molecular scale by the relative bulk molecular dynamics and adsorption-desorption of the additive at the liquid-vapor interface, and its physisorption and electrokinetics at the liquid-solid interface. At the micro-scale, the transient transport mechanisms at the solid-liquid-vapor interface during nucleation and bubble growth can be attributed to thin-film spreading, surface-micro-cavity activation, and micro-layer evaporation. Furthermore, at the macro-scale, the heat transport is in turn governed by the bubble growth and distribution, macro-layer heat transfer, bubble dynamics (bubble coalescence, collapse, break-up, and translation), and liquid rheology. Some of these behaviors and processes are measured and characterized in this study, the outcomes of which advance the concomitant fundamental physics, as well as provide insights for developing control strategies for the molecular-scale manipulation of interfacial tension and surface wetting in boiling by means of polymeric reagents, surfactants, and other soluble surface-active additives.

  6. An automatic registration method for frameless stereotaxy, image guided surgery, and enhanced reality visualization

    International Nuclear Information System (INIS)

    Grimson, W.E.L.; Lozano-Perez, T.; White, S.J.; Wells, W.M. III; Kikinis, R.

    1996-01-01

    There is a need for frameless guidance systems to help surgeons plan the exact location for incisions, to define the margins of tumors, and to precisely identify locations of neighboring critical structures. The authors have developed an automatic technique for registering clinical data, such as segmented magnetic resonance imaging (MRI) or computed tomography (CT) reconstructions, with any view of the patient on the operating table. They demonstrate on the specific example of neurosurgery. The method enables a visual mix of live video of the patient and the segmented three-dimensional (3-D) MRI or CT model. This supports enhanced reality techniques for planning and guiding neurosurgical procedures and allows them to interactively view extracranial or intracranial structures nonintrusively. Extensions of the method include image guided biopsies, focused therapeutic procedures, and clinical studies involving change detection over time sequences of images

  7. Multi-scale enhancement of climate prediction over land by improving the model sensitivity to vegetation variability

    Science.gov (United States)

    Alessandri, A.; Catalano, F.; De Felice, M.; Hurk, B. V. D.; Doblas-Reyes, F. J.; Boussetta, S.; Balsamo, G.; Miller, P. A.

    2017-12-01

    Here we demonstrate, for the first time, that the implementation of a realistic representation of vegetation in Earth System Models (ESMs) can significantly improve climate simulation and prediction across multiple time-scales. The effective sub-grid vegetation fractional coverage varies seasonally and at interannual time-scales in response to leaf-canopy growth, phenology and senescence. It therefore affects biophysical parameters such as the surface resistance to evapotranspiration, albedo, roughness length, and soil field capacity. To adequately represent this effect in the EC-Earth ESM, we included an exponential dependence of the vegetation cover on the Leaf Area Index. By comparing two sets of simulations performed with and without the new variable fractional-coverage parameterization, spanning from centennial (20th Century) simulations and retrospective predictions to the decadal (5-year), seasonal (2-4 months) and weather (4 days) time-scales, we show for the first time a significant multi-scale enhancement of vegetation impacts in climate simulation and prediction over land. Particularly large effects at multiple time scales are shown over boreal winter middle-to-high latitudes over Canada, West US, Eastern Europe, Russia and eastern Siberia, due to the implemented time-varying shadowing effect by tree-vegetation on snow surfaces. Over Northern Hemisphere boreal forest regions the improved representation of vegetation cover consistently corrects the winter warm biases and improves the climate change sensitivity, the decadal potential predictability, as well as the skill of forecasts at seasonal and weather time-scales. Significant improvements of the prediction of 2 m temperature and rainfall are also shown over transitional land surface hot spots. Both the potential predictability at decadal time-scale and seasonal-forecast skill are enhanced over the Sahel, the North American Great Plains, Nordeste Brazil and South East Asia, mainly related to improved performance in
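
    For orientation, the "exponential dependence of the vegetation cover on the Leaf Area Index" mentioned in this record is usually written in land-surface modelling with a Lambert-Beer-type form; the record does not give the exact expression or coefficient used in EC-Earth, so the extinction factor k below is a placeholder.

```latex
% Illustrative form of the effective vegetation fractional cover (not the exact EC-Earth formula):
% C_{v,max} is the maximum cover of the vegetation type in the grid cell, k an extinction-like factor.
C_v(t) \;=\; C_{v,\max}\,\bigl(1 - e^{-k\,\mathrm{LAI}(t)}\bigr)
```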

  8. Multi-Scale Computational Enzymology: Enhancing Our Understanding of Enzymatic Catalysis

    Science.gov (United States)

    Gherib, Rami; Dokainish, Hisham M.; Gauld, James W.

    2014-01-01

    Elucidating the origin of enzymatic catalysis stands as one of the great challenges of contemporary biochemistry and biophysics. The recent emergence of computational enzymology has enhanced our atomistic-level description of biocatalysis as well as the kinetic and thermodynamic properties of their mechanisms. There exists a diversity of computational methods allowing the investigation of specific enzymatic properties. Small or large density functional theory models allow the comparison of a plethora of mechanistic reactive species and divergent catalytic pathways. Molecular docking can model different substrate conformations embedded within enzyme active sites and determine those with optimal binding affinities. Molecular dynamics simulations provide insights into the dynamics and roles of active site components as well as the interactions between substrate and enzymes. Hybrid quantum mechanical/molecular mechanical (QM/MM) can model reactions in active sites while considering steric and electrostatic contributions provided by the surrounding environment. Using previous studies done within our group, on OvoA, EgtB, ThrRS, LuxS and MsrA enzymatic systems, we will review how these methods can be used either independently or cooperatively to get insights into enzymatic catalysis. PMID:24384841

  9. Multiscale Interfacial Strategy to Engineer Mixed Metal-Oxide Anodes toward Enhanced Cycling Efficiency.

    Science.gov (United States)

    Ma, Yue; Tai, Cheuk-Wai; Li, Shaowen; Edström, Kristina; Wei, Bingqing

    2018-06-13

    Interconnected macro/mesoporous structures of mixed metal oxide (MMO) are developed on nickel foam as freestanding anodes for Li-ion batteries. The sustainable production is realized via a wet chemical etching process with bio-friendly chemicals. By means of divalent iron doping during an in situ recrystallization process, the as-developed MMO anodes exhibit enhanced levels of cycling efficiency. Furthermore, this atomic-scale modification coherently synergizes with the encapsulation layer across a micrometer scale. During this step, we develop a quasi-gel-state tri-copolymer, i.e., F127-resorcinol-melamine, as the N-doped carbon source to regulate the interfacial chemistry of the MMO electrodes. Electrochemical tests of the modified FexNi1-xO@NC-NiF anode in both half-cell and full-cell configurations unravel the favorable suppression of the irreversible capacity loss and satisfactory cyclability at the high rates. This study highlights a proof-of-concept modification strategy across multiple scales to govern the interfacial chemical process of the electrodes toward better reversibility.

  10. Multi-Scale Computational Enzymology: Enhancing Our Understanding of Enzymatic Catalysis

    Directory of Open Access Journals (Sweden)

    Rami Gherib

    2013-12-01

    Full Text Available Elucidating the origin of enzymatic catalysis stands as one of the great challenges of contemporary biochemistry and biophysics. The recent emergence of computational enzymology has enhanced our atomistic-level description of biocatalysis as well as the kinetic and thermodynamic properties of their mechanisms. There exists a diversity of computational methods allowing the investigation of specific enzymatic properties. Small or large density functional theory models allow the comparison of a plethora of mechanistic reactive species and divergent catalytic pathways. Molecular docking can model different substrate conformations embedded within enzyme active sites and determine those with optimal binding affinities. Molecular dynamics simulations provide insights into the dynamics and roles of active site components as well as the interactions between substrate and enzymes. Hybrid quantum mechanical/molecular mechanical (QM/MM) can model reactions in active sites while considering steric and electrostatic contributions provided by the surrounding environment. Using previous studies done within our group, on OvoA, EgtB, ThrRS, LuxS and MsrA enzymatic systems, we will review how these methods can be used either independently or cooperatively to get insights into enzymatic catalysis.

  11. Knickzone Extraction Tool (KET – A new ArcGIS toolset for automatic extraction of knickzones from a DEM based on multi-scale stream gradients

    Directory of Open Access Journals (Sweden)

    Zahra Tuba

    2017-04-01

    Full Text Available Extraction of knickpoints or knickzones from a Digital Elevation Model (DEM) has gained immense significance owing to the increasing implications of knickzones on landform development. However, existing methods for knickzone extraction tend to be subjective or require time-intensive data processing. This paper describes the proposed Knickzone Extraction Tool (KET), a new raster-based Python script deployed in the form of an ArcGIS toolset that automates the process of knickzone extraction and is both faster and more user-friendly. The KET is based on multi-scale analysis of slope gradients along a river course, where any locally steep segment (knickzone) can be extracted as an anomalously high local gradient. We also conducted a comparative analysis of the KET and other contemporary knickzone identification techniques. The relationship between knickzone distribution and its morphometric characteristics is also examined through a case study of a mountainous watershed in Japan.
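
    To make the "anomalously high local gradient" criterion of the KET record concrete, the NumPy sketch below compares short-window and long-window slopes along a single longitudinal river profile and flags points whose local steepening is a statistical outlier. The window lengths, the z-score threshold, and the synthetic profile are arbitrary assumptions; the actual toolset operates on a DEM-derived stream network inside ArcGIS.

```python
import numpy as np

def knickzone_candidates(dist, elev, short_win=5, long_win=51, z_thresh=2.0):
    """Flag profile points where the short-window slope is anomalously steep
    compared with the long-window (reach-scale) slope trend."""
    def windowed_slope(win):
        half = win // 2
        slopes = np.full(len(dist), np.nan)
        for i in range(half, len(dist) - half):
            d = dist[i + half] - dist[i - half]
            slopes[i] = (elev[i - half] - elev[i + half]) / d if d > 0 else np.nan
        return slopes

    s_short = windowed_slope(short_win)
    s_long = windowed_slope(long_win)
    anomaly = s_short - s_long                       # local steepening relative to the reach trend
    z = (anomaly - np.nanmean(anomaly)) / np.nanstd(anomaly)
    return np.where(z > z_thresh)[0]                 # indices of candidate knickzone points

# Example: synthetic profile with one artificial 15 m step near the 6 km mark.
dist = np.linspace(0, 10_000, 500)
elev = 1000 - 0.05 * dist - 15 * (dist > 6000)
print(knickzone_candidates(dist, elev))
```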

  12. Automatic segmentation of myocardium at risk from contrast enhanced SSFP CMR: validation against expert readers and SPECT

    International Nuclear Information System (INIS)

    Tufvesson, Jane; Carlsson, Marcus; Aletras, Anthony H.; Engblom, Henrik; Deux, Jean-François; Koul, Sasha; Sörensson, Peder; Pernow, John; Atar, Dan; Erlinge, David; Arheden, Håkan; Heiberg, Einar

    2016-01-01

    Efficacy of reperfusion therapy can be assessed as myocardial salvage index (MSI) by determining the size of myocardium at risk (MaR) and myocardial infarction (MI), (MSI = 1-MI/MaR). Cardiovascular magnetic resonance (CMR) can be used to assess MI by late gadolinium enhancement (LGE) and MaR by either T2-weighted imaging or contrast enhanced SSFP (CE-SSFP). Automatic segmentation algorithms have been developed and validated for MI by LGE as well as for MaR by T2-weighted imaging. There are, however, no algorithms available for CE-SSFP. Therefore, the aim of this study was to develop and validate automatic segmentation of MaR in CE-SSFP. The automatic algorithm applies surface coil intensity correction and classifies myocardial intensities by Expectation Maximization to define a MaR region based on a priori regional criteria, and infarct region from LGE. Automatic segmentation was validated against manual delineation by expert readers in 183 patients with reperfused acute MI from two multi-center randomized clinical trials (RCT) (CHILL-MI and MITOCARE) and against myocardial perfusion SPECT in an additional set (n = 16). Endocardial and epicardial borders were manually delineated at end-diastole and end-systole. Manual delineation of MaR was used as reference and inter-observer variability was assessed for both manual delineation and automatic segmentation of MaR in a subset of patients (n = 15). MaR was expressed as percent of left ventricular mass (%LVM) and analyzed by bias (mean ± standard deviation). Regional agreement was analyzed by Dice Similarity Coefficient (DSC) (mean ± standard deviation). MaR assessed by manual and automatic segmentation were 36 ± 10 % and 37 ± 11 %LVM respectively with bias 1 ± 6 %LVM and regional agreement DSC 0.85 ± 0.08 (n = 183). MaR assessed by SPECT and CE-SSFP automatic segmentation were 27 ± 10 %LVM and 29 ± 7 %LVM respectively with bias 2 ± 7 %LVM. Inter-observer variability was 0 ± 3 %LVM for manual delineation and
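
    As a side note to this record, the two summary quantities it relies on, the myocardial salvage index and the Dice similarity coefficient, reduce to simple mask arithmetic once the segmentations exist; a generic NumPy illustration (not the authors' code) is given below.

```python
import numpy as np

def myocardial_salvage_index(mi_mask, mar_mask):
    """MSI = 1 - MI/MaR, with MI and MaR expressed as mask volumes (voxel counts)."""
    mar = mar_mask.sum()
    return 1.0 - mi_mask.sum() / mar if mar > 0 else np.nan

def dice(a, b):
    """Dice Similarity Coefficient between two binary masks: 2|A intersect B| / (|A| + |B|)."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom > 0 else 1.0
```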

  13. 3D automatic segmentation method for retinal optical coherence tomography volume data using boundary surface enhancement

    Directory of Open Access Journals (Sweden)

    Yankui Sun

    2016-03-01

    Full Text Available With the introduction of spectral-domain optical coherence tomography (SD-OCT), much larger image datasets are routinely acquired compared to what was possible using the previous generation of time-domain OCT. Thus, there is a critical need for the development of three-dimensional (3D) segmentation methods for processing these data. We present here a novel 3D automatic segmentation method for retinal OCT volume data. Briefly, to segment a boundary surface, two OCT volume datasets are obtained by using a 3D smoothing filter and a 3D differential filter. Their linear combination is then calculated to generate new volume data with an enhanced boundary surface, where pixel intensity, boundary position information, and intensity changes on both sides of the boundary surface are used simultaneously. Next, preliminary discrete boundary points are detected from the A-Scans of the volume data. Finally, surface smoothness constraints and a dynamic threshold are applied to obtain a smoothed boundary surface by correcting a small number of error points. Our method can extract retinal layer boundary surfaces sequentially with a decreasing search region of volume data. We performed automatic segmentation on eight human OCT volume datasets acquired from a commercial Spectralis OCT system, where each volume contains 97 OCT B-Scan images with a resolution of 496×512 (each B-Scan comprising 512 A-Scans containing 496 pixels); experimental results show that this method can accurately segment seven layer boundary surfaces in normal as well as some abnormal eyes.
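
    The enhancement step described above, a linear combination of a 3D-smoothed volume and a 3D differential volume so that a layer boundary stands out, can be mimicked with standard filters. The sketch below uses SciPy's Gaussian and Sobel filters with an assumed mixing weight; it illustrates the general idea, not the authors' exact filter pair or parameters.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def enhance_boundary_surface(volume, sigma=1.5, alpha=0.6, axis=0):
    """Combine a 3D-smoothed volume with a 3D axial derivative so that a layer
    boundary (an intensity transition along `axis`) stands out."""
    smoothed = gaussian_filter(volume.astype(np.float32), sigma=sigma)
    derivative = sobel(smoothed, axis=axis)          # emphasizes intensity changes across the boundary
    return alpha * smoothed + (1.0 - alpha) * derivative

def preliminary_boundary(volume_enh, axis=0):
    """Pick, for each A-scan, the depth index with the strongest enhanced response."""
    return np.argmax(volume_enh, axis=axis)
```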

  14. Fully automatic segmentation of left atrium and pulmonary veins in late gadolinium-enhanced MRI: Towards objective atrial scar assessment.

    Science.gov (United States)

    Tao, Qian; Ipek, Esra Gucuk; Shahzad, Rahil; Berendsen, Floris F; Nazarian, Saman; van der Geest, Rob J

    2016-08-01

    To realize objective atrial scar assessment, this study aimed to develop a fully automatic method to segment the left atrium (LA) and pulmonary veins (PV) from late gadolinium-enhanced (LGE) magnetic resonance imaging (MRI). The extent and distribution of atrial scar, visualized by LGE-MRI, provides important information for the clinical treatment of atrial fibrillation (AF) patients. Forty-six AF patients (age 62 ± 8, 14 female) who underwent cardiac MRI prior to RF ablation were included. A contrast-enhanced MR angiography (MRA) sequence was acquired for anatomy assessment, followed by an LGE sequence for LA scar assessment. A fully automatic segmentation method was proposed consisting of two stages: 1) global segmentation by multiatlas registration; and 2) local refinement by 3D level-set. These automatic segmentation results were compared with manual segmentation. The LA and PVs were automatically segmented in all subjects. Compared with manual segmentation, the method yielded a surface-to-surface distance of 1.49 ± 0.65 mm in the LA region when using both MRA and LGE, and 1.80 ± 0.93 mm when using LGE alone. The difference between automatic and manual segmentation was comparable to the interobserver difference (P = 0.8 in LA region and P = 0.7 in PV region). We developed a fully automatic method for LA and PV segmentation from LGE-MRI, with comparable performance to a human observer. Inclusion of an MRA sequence further improves the segmentation accuracy. The method leads to automatic generation of a patient-specific model, and potentially enables objective atrial scar assessment for AF patients. J. Magn. Reson. Imaging 2016;44:346-354. © 2016 Wiley Periodicals, Inc.

  15. Bio-stimuli-responsive multi-scale hyaluronic acid nanoparticles for deepened tumor penetration and enhanced therapy.

    Science.gov (United States)

    Huo, Mengmeng; Li, Wenyan; Chaudhuri, Arka Sen; Fan, Yuchao; Han, Xiu; Yang, Chen; Wu, Zhenghong; Qi, Xiaole

    2017-09-01

    In this study, we developed bio-stimuli-responsive multi-scale hyaluronic acid (HA) nanoparticles encapsulated with polyamidoamine (PAMAM) dendrimers as the subunits. These HA/PAMAM nanoparticles of large scale (197.10 ± 3.00 nm) were stable during systemic circulation and then enriched at the tumor sites; however, they were prone to be degraded by the highly expressed hyaluronidase (HAase) to release the inner PAMAM dendrimers and regain a small scale (5.77 ± 0.25 nm) with positive charge. After employing a tumor spheroid penetration assay on A549 3D tumor spheroids for 8 h, the fluorescein isothiocyanate (FITC) labeled multi-scale HA/PAMAM-FITC nanoparticles could penetrate deeply into these tumor spheroids upon degradation by HAase. Moreover, small animal imaging in male nude mice bearing H22 tumors showed that HA/PAMAM-FITC nanoparticles possess more prolonged systemic circulation compared with both PAMAM-FITC nanoparticles and free FITC. In addition, after intravenous administration in mice bearing H22 tumors, methotrexate (MTX) loaded multi-scale HA/PAMAM-MTX nanoparticles exhibited a 2.68-fold greater antitumor activity. Copyright © 2017 Elsevier Ltd. All rights reserved.

  16. A highly versatile automatized setup for quantitative measurements of PHIP enhancements

    Science.gov (United States)

    Kiryutin, Alexey S.; Sauer, Grit; Hadjiali, Sara; Yurkovskaya, Alexandra V.; Breitzke, Hergen; Buntkowsky, Gerd

    2017-12-01

    The design and application of a versatile and inexpensive experimental extension to NMR spectrometers is described that allows highly reproducible PHIP experiments to be carried out directly in the NMR sample tube, i.e. under PASADENA conditions, followed by the detection of the NMR spectra of the hyperpolarized products with high spectral resolution. Employing this high resolution, it is feasible to study kinetic processes in the solution with high accuracy. As a practical example, the dissolution of hydrogen gas in the liquid and the PHIP kinetics during the hydrogenation reaction of Fmoc-O-propargyl-L-tyrosine in acetone-d6 are monitored. The timing of the setup is fully controlled by the pulse-programmer of the NMR spectrometer. By flushing with an inert gas it is possible to efficiently quench the hydrogenation reaction in a controlled fashion and to detect the relaxation of hyperpolarization without a background reaction. The proposed design makes it possible to carry out PHIP experiments in an automatic mode and to reliably determine the enhancement of the polarized signals.

  17. Automatic three-dimensional rib centerline extraction from CT scans for enhanced visualization and anatomical context

    Science.gov (United States)

    Ramakrishnan, Sowmya; Alvino, Christopher; Grady, Leo; Kiraly, Atilla

    2011-03-01

    We present a complete automatic system to extract 3D centerlines of ribs from thoracic CT scans. Our rib centerline system determines the positional information for the rib cage consisting of extracted rib centerlines, spinal canal centerline, pairing and labeling of ribs. We show an application of this output to produce an enhanced visualization of the rib cage by the method of Kiraly et al., in which the ribs are digitally unfolded along their centerlines. The centerline extraction consists of three stages: (a) pre-trace processing for rib localization, (b) rib centerline tracing, and (c) post-trace processing to merge the rib traces. Then we classify ribs from non-ribs and determine anatomical rib labeling. Our novel centerline tracing technique uses the Random Walker algorithm to segment the structural boundary of the rib in successive 2D cross sections orthogonal to the longitudinal direction of the ribs. Then the rib centerline is progressively traced along the rib using a 3D Kalman filter. The rib centerline extraction framework was evaluated on 149 CT datasets with varying slice spacing, dose, and under a variety of reconstruction kernels. The results of the evaluation are presented. The extraction takes approximately 20 seconds on a modern radiology workstation and performs robustly even in the presence of partial volume effects or rib pathologies such as bone metastases or fractures, making the system suitable for assisting clinicians in expediting routine rib reading for oncology and trauma applications.
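
    The tracing stage described above advances a centerline point from one cross-section to the next with a 3D Kalman filter. Below is a generic constant-velocity Kalman predictor/corrector in NumPy to illustrate that style of progressive tracking; the state layout, noise levels, and measurement model are assumptions for illustration, not the parameters of the published system.

```python
import numpy as np

class ConstantVelocityKalman3D:
    """State x = [px, py, pz, vx, vy, vz]; measurements are 3D centerline points."""
    def __init__(self, p0, q=1e-2, r=1.0):
        self.x = np.hstack([np.asarray(p0, dtype=float), np.zeros(3)])
        self.P = np.eye(6)
        self.F = np.eye(6); self.F[:3, 3:] = np.eye(3)     # position += velocity per step
        self.H = np.zeros((3, 6)); self.H[:, :3] = np.eye(3)
        self.Q = q * np.eye(6)
        self.R = r * np.eye(3)

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:3]                                  # predicted next centerline point

    def update(self, z):
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ (np.asarray(z, dtype=float) - self.H @ self.x)
        self.P = (np.eye(6) - K @ self.H) @ self.P
        return self.x[:3]                                  # corrected centerline point
```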

  18. A study on design enhancement of automatic depressurization system in a passive PWR

    International Nuclear Information System (INIS)

    Yu, Sung Sik

    1993-02-01

    In a Passive PWR, the successful actuation of the Automatic Depressurization System is essentially required so that no core damage occurs following a small LOCA. However, previous studies have shown that the Core Damage Frequency from small LOCA is significantly driven by the unavailability of the ADS. In this study, the design vulnerabilities impacting the ADS unavailability are identified through a reliability assessment using the fault tree methodology, and design enhancements towards improving the system reliability are then developed. A series of small LOCA analyses using the RELAP5 code are performed to validate the system requirements for successful depressurization and to study the thermal-hydraulic feasibility of the proposed design enhancements. The impact on CDF of changes in the system unavailability is also analyzed. In addition, a qualitative analysis is performed to reduce the inadvertent opening of the ADS valves. From the results of the analyses, the ADS is found to gain little reliability improvement from system simplification alone. It is found that, based on system characteristics, the major contributor to the system unavailability is the first stage. A series-parallel configuration with two trains of eight valves, which shows a higher reliability compared to the base ADS design, is recommended as an alternative first stage of the ADS. In addition, establishment of an appropriate ADS operation strategy is proposed, such as allowing manual operation of the first stage and allowing forced depressurization using the normal residual heat removal system connected to the RCS following successful depressurization up to the 3rd stage and failure of the 4th stage.
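
    As a back-of-the-envelope illustration of the recommended series-parallel first stage (two trains of eight valves), the sketch below computes stage unavailability under the simplifying assumptions that a train fails if any one of its valves fails to open and that the stage succeeds if at least one train opens; the per-valve failure probability is a placeholder, since the record gives no failure data or exact success criterion.

```python
def train_unavailability(q_valve: float, n_valves: int) -> float:
    """A train is unavailable if any one of its n valves fails to open (series logic)."""
    return 1.0 - (1.0 - q_valve) ** n_valves

def stage_unavailability(q_valve: float, n_valves: int = 8, n_trains: int = 2) -> float:
    """The stage is unavailable only if every redundant train is unavailable (parallel logic)."""
    return train_unavailability(q_valve, n_valves) ** n_trains

# Placeholder demand-failure probability per valve, for illustration only.
print(stage_unavailability(q_valve=1e-3))   # ~6.4e-05 for 2 trains of 8 valves
```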

  19. Multiscale CNNs for Brain Tumor Segmentation and Diagnosis

    Directory of Open Access Journals (Sweden)

    Liya Zhao

    2016-01-01

    Full Text Available Early brain tumor detection and diagnosis are critical to clinics. Thus segmentation of the focused tumor area needs to be accurate, efficient, and robust. In this paper, we propose an automatic brain tumor segmentation method based on Convolutional Neural Networks (CNNs). Traditional CNNs focus only on local features and ignore global region features, which are both important for pixel classification and recognition. Besides, a brain tumor can appear in any place in the brain and be of any size and shape in patients. We design a three-stream framework named multiscale CNNs, which can automatically detect the optimum top-three scales of the image sizes and combine information from different scales of the regions around that pixel. Datasets provided by the Multimodal Brain Tumor Image Segmentation Benchmark (BRATS) organized by MICCAI 2013 are utilized for both training and testing. The designed multiscale CNNs framework also combines multimodal features from T1, T1-enhanced, T2, and FLAIR MRI images. By comparison with traditional CNNs and the best two methods in BRATS 2012 and 2013, our framework shows advances in brain tumor segmentation accuracy and robustness.
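
    To make the three-stream idea concrete, the PyTorch sketch below feeds three differently sized patches centered on the same pixel through parallel convolutional streams and fuses their features for a per-pixel class prediction. The layer sizes, patch scales, and concatenation-based fusion are illustrative assumptions, not the BRATS-tuned architecture of the record.

```python
import torch
import torch.nn as nn

class Stream(nn.Module):
    """One convolutional stream operating on patches of a single scale."""
    def __init__(self, in_ch=4, feat=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, feat, 3, padding=1), nn.ReLU(),
            nn.Conv2d(feat, feat, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),               # scale-independent summary of the patch
        )

    def forward(self, x):
        return self.net(x).flatten(1)

class MultiScaleCNN(nn.Module):
    """Three streams for three patch scales around the same pixel, fused by concatenation."""
    def __init__(self, in_ch=4, feat=32, n_classes=5):
        super().__init__()
        self.streams = nn.ModuleList([Stream(in_ch, feat) for _ in range(3)])
        self.classifier = nn.Linear(3 * feat, n_classes)

    def forward(self, patches):                    # patches: list of 3 tensors, one per scale
        feats = [s(p) for s, p in zip(self.streams, patches)]
        return self.classifier(torch.cat(feats, dim=1))

# Example: multimodal (T1, T1c, T2, FLAIR) patches at three scales for a batch of 8 pixels.
model = MultiScaleCNN()
patches = [torch.randn(8, 4, s, s) for s in (17, 33, 65)]
print(model(patches).shape)                        # torch.Size([8, 5])
```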

  20. Multiscale CNNs for Brain Tumor Segmentation and Diagnosis.

    Science.gov (United States)

    Zhao, Liya; Jia, Kebin

    2016-01-01

    Early brain tumor detection and diagnosis are critical to clinics. Thus segmentation of the focused tumor area needs to be accurate, efficient, and robust. In this paper, we propose an automatic brain tumor segmentation method based on Convolutional Neural Networks (CNNs). Traditional CNNs focus only on local features and ignore global region features, which are both important for pixel classification and recognition. Besides, a brain tumor can appear in any place in the brain and be of any size and shape in patients. We design a three-stream framework named multiscale CNNs, which can automatically detect the optimum top-three scales of the image sizes and combine information from different scales of the regions around that pixel. Datasets provided by Multimodal Brain Tumor Image Segmentation Benchmark (BRATS) organized by MICCAI 2013 are utilized for both training and testing. The designed multiscale CNNs framework also combines multimodal features from T1, T1-enhanced, T2, and FLAIR MRI images. By comparison with traditional CNNs and the best two methods in BRATS 2012 and 2013, our framework shows advances in brain tumor segmentation accuracy and robustness.

  1. Enhancing Thermoelectric Performances of Bismuth Antimony Telluride via Synergistic Combination of Multiscale Structuring and Band Alignment by FeTe2 Incorporation.

    Science.gov (United States)

    Shin, Weon Ho; Roh, Jong Wook; Ryu, Byungki; Chang, Hye Jung; Kim, Hyun Sik; Lee, Soonil; Seo, Won Seon; Ahn, Kyunghan

    2018-01-31

    It has been difficult to form well-distributed nano- and meso-sized inclusions in a Bi2Te3-based matrix while avoiding degradation of carrier mobility at the interfaces between matrix and inclusions, both of which are needed for high thermoelectric performance. Herein, we successfully synthesize multistructured thermoelectric Bi0.4Sb1.6Te3 materials with Fe-rich nanoprecipitates and sub-micron FeTe2 inclusions by a conventional solid-state reaction followed by melt-spinning and spark plasma sintering, which could be a facile preparation route for scale-up production. This study presents a bismuth antimony telluride based thermoelectric material with a multiscale structure whose lattice thermal conductivity is drastically reduced with minimal degradation of its carrier mobility. This is possible because the carefully chosen FeTe2 incorporated in the matrix allows its interfacial valence band to be aligned with that of the matrix, leading to a significantly improved p-type thermoelectric power factor. Consequently, an impressively high thermoelectric figure of merit ZT of 1.52 is achieved at 396 K for p-type Bi0.4Sb1.6Te3-8 mol % FeTe2, a 43% enhancement in ZT compared to the pristine Bi0.4Sb1.6Te3. This work demonstrates not only the effectiveness of multiscale structuring for lowering lattice thermal conductivities, but also the importance of interfacial band alignment between matrix and inclusions for maintaining high carrier mobilities when designing high-performance thermoelectric materials.
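
    For context on the figure of merit quoted in this record, ZT combines the Seebeck coefficient, the electrical conductivity, and the total thermal conductivity; the standard textbook definition is reproduced below (it is general background, not something specific to this work).

```latex
% Dimensionless thermoelectric figure of merit:
% S: Seebeck coefficient, \sigma: electrical conductivity, T: absolute temperature,
% \kappa_e, \kappa_L: electronic and lattice contributions to the thermal conductivity.
ZT \;=\; \frac{S^{2}\,\sigma\,T}{\kappa}, \qquad \kappa = \kappa_{e} + \kappa_{L}
```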

  2. APT - NASA ENHANCED VERSION OF AUTOMATICALLY PROGRAMMED TOOL SOFTWARE - STAND-ALONE VERSION

    Science.gov (United States)

    Premo, D. A.

    1994-01-01

    The APT code is one of the most widely used software tools for complex numerically controlled (N/C) machining. APT is an acronym for Automatically Programmed Tools and is used to denote both a language and the computer software that processes that language. Development of the APT language and software system was begun over twenty years ago as a U. S. government sponsored industry and university research effort. APT is a "problem oriented" language that was developed for the explicit purpose of aiding the N/C machine tools. Machine-tool instructions and geometry definitions are written in the APT language to constitute a "part program." The APT part program is processed by the APT software to produce a cutter location (CL) file. This CL file may then be processed by user supplied post processors to convert the CL data into a form suitable for a particular N/C machine tool. This June, 1989 offering of the APT system represents an adaptation, with enhancements, of the public domain version of APT IV/SSX8 to the DEC VAX-11/780 for use by the Engineering Services Division of the NASA Goddard Space Flight Center. Enhancements include the super pocket feature which allows concave and convex polygon shapes of up to 40 points including shapes that overlap, that leave islands of material within the pocket, and that have one or more arcs as part of the pocket boundary. Recent modifications to APT include a rework of the POCKET subroutine and correction of an error that prevented the use within a macro of a macro variable cutter move statement combined with macro variable double check surfaces. Former modifications included the expansion of array and buffer sizes to accommodate larger part programs, and the insertion of a few user friendly error messages. The APT system software on the DEC VAX-11/780 is organized into two separate programs: the load complex and the APT processor. The load complex handles the table initiation phase and is usually only run when changes to the

  3. Enhanced Representation of Soil NO Emissions in the Community Multiscale Air Quality (CMAQ) Model Version 5.0.2

    Science.gov (United States)

    Rasool, Quazi Z.; Zhang, Rui; Lash, Benjamin; Cohan, Daniel S.; Cooter, Ellen J.; Bash, Jesse O.; Lamsal, Lok N.

    2016-01-01

    Modeling of soil nitric oxide (NO) emissions is highly uncertain and may misrepresent its spatial and temporal distribution. This study builds upon a recently introduced parameterization to improve the timing and spatial distribution of soil NO emission estimates in the Community Multiscale Air Quality (CMAQ) model. The parameterization considers soil parameters, meteorology, land use, and mineral nitrogen (N) availability to estimate NO emissions. We incorporate daily year-specific fertilizer data from the Environmental Policy Integrated Climate (EPIC) agricultural model to replace the annual generic data of the initial parameterization, and use a 12km resolution soil biome map over the continental USA. CMAQ modeling for July 2011 shows slight differences in model performance in simulating fine particulate matter and ozone from Interagency Monitoring of Protected Visual Environments (IMPROVE) and Clean Air Status and Trends Network (CASTNET) sites and NO2 columns from Ozone Monitoring Instrument (OMI) satellite retrievals. We also simulate how the change in soil NO emissions scheme affects the expected O3 response to projected emissions reductions.

  4. Two Methods of Automatic Evaluation of Speech Signal Enhancement Recorded in the Open-Air MRI Environment

    Science.gov (United States)

    Přibil, Jiří; Přibilová, Anna; Frollo, Ivan

    2017-12-01

    The paper focuses on two methods for evaluating the success of speech signal enhancement recorded in an open-air magnetic resonance imager during phonation for 3D human vocal tract modeling. The first approach enables a comparison based on statistical analysis by ANOVA and hypothesis tests. The second method is based on classification by Gaussian mixture models (GMM). The performed experiments have confirmed that the proposed ANOVA and GMM classifiers for automatic evaluation of speech quality are functional and produce results that are fully comparable with the standard evaluation based on the listening test method.
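
    As an illustration of the GMM-based evaluation idea in this record (not the authors' feature set or training data), the scikit-learn sketch below fits one Gaussian mixture per class of recordings and assigns an unseen recording to the class with the higher average log-likelihood; the feature matrices and mixture size are placeholders.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Placeholder feature matrices (rows = frames, columns = e.g. cepstral features).
rng = np.random.default_rng(0)
feats_clean = rng.normal(0.0, 1.0, size=(500, 13))
feats_noisy = rng.normal(0.5, 1.5, size=(500, 13))

gmm_clean = GaussianMixture(n_components=8, covariance_type="diag", random_state=0).fit(feats_clean)
gmm_noisy = GaussianMixture(n_components=8, covariance_type="diag", random_state=0).fit(feats_noisy)

def classify(frames):
    """Assign the recording to the class whose GMM gives the higher average log-likelihood."""
    scores = {"clean": gmm_clean.score(frames), "noisy": gmm_noisy.score(frames)}
    return max(scores, key=scores.get), scores

print(classify(rng.normal(0.0, 1.0, size=(100, 13))))
```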

  5. Hierarchical multiscale modeling for flows in fractured media using generalized multiscale finite element method

    KAUST Repository

    Efendiev, Yalchin R.

    2015-06-05

    In this paper, we develop a multiscale finite element method for solving flows in fractured media. Our approach is based on the generalized multiscale finite element method (GMsFEM), where we represent the fracture effects on a coarse grid via multiscale basis functions. These multiscale basis functions are constructed in the offline stage via local spectral problems following GMsFEM. To represent the fractures on the fine grid, we consider two approaches, (1) the discrete fracture model (DFM) and (2) the embedded fracture model (EFM), as well as their combination. In DFM, the fractures are resolved via the fine grid, while in EFM the fracture and fine-grid block interaction is represented as a source term. In the proposed multiscale method, additional multiscale basis functions are used to represent the long fractures, while short fractures are collectively represented by a single basis function. The procedure is done automatically via local spectral problems. In this regard, our approach shares common concepts with several approaches proposed in the literature, as we discuss. We would like to emphasize that our goal is not to compare DFM with EFM, but rather to develop a GMsFEM framework which uses these (DFM or EFM) fine-grid discretization techniques. Numerical results are presented, where we demonstrate how one can adaptively add basis functions in the regions of interest based on error indicators. We also discuss the use of randomized snapshots (Calo et al. Randomized oversampling for generalized multiscale finite element methods, 2014), which reduces the offline computational cost.
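
    A brief note on the construction mentioned above: in GMsFEM the offline multiscale basis functions come from local spectral problems posed on coarse neighborhoods. A common generic form is sketched below with assumed notation (the record itself does not spell out the weight or the partition-of-unity functions).

```latex
% Local spectral problem on the coarse neighborhood \omega_i of coarse node x_i
% (\kappa is the heterogeneous coefficient, \tilde{\kappa} a related weight, v a test function):
\int_{\omega_i} \kappa \, \nabla \phi_k \cdot \nabla v \, dx
  \;=\; \lambda_k \int_{\omega_i} \tilde{\kappa} \, \phi_k \, v \, dx ,
% keep the eigenfunctions with the smallest eigenvalues and multiply by a
% partition-of-unity function \chi_i to obtain the multiscale basis \psi_{i,k} = \chi_i \phi_k.
```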

  6. Enhancing interpretability of automatically extracted machine learning features: application to a RBM-Random Forest system on brain lesion segmentation.

    Science.gov (United States)

    Pereira, Sérgio; Meier, Raphael; McKinley, Richard; Wiest, Roland; Alves, Victor; Silva, Carlos A; Reyes, Mauricio

    2018-02-01

    Machine learning systems are achieving better performances at the cost of becoming increasingly complex. However, because of that, they become less interpretable, which may cause some distrust by the end-user of the system. This is especially important as these systems are pervasively being introduced to critical domains, such as the medical field. Representation Learning techniques are general methods for automatic feature computation. Nevertheless, these techniques are regarded as uninterpretable "black boxes". In this paper, we propose a methodology to enhance the interpretability of automatically extracted machine learning features. The proposed system is composed of a Restricted Boltzmann Machine for unsupervised feature learning, and a Random Forest classifier, which are combined to jointly consider existing correlations between imaging data, features, and target variables. We define two levels of interpretation: global and local. The former is devoted to understanding if the system learned the relevant relations in the data correctly, while the latter is focused on predictions performed on a voxel- and patient-level. In addition, we propose a novel feature importance strategy that considers both imaging data and target variables, and we demonstrate the ability of the approach to leverage the interpretability of the obtained representation for the task at hand. We evaluated the proposed methodology in brain tumor segmentation and penumbra estimation in ischemic stroke lesions. We show the ability of the proposed methodology to unveil information regarding relationships between imaging modalities and extracted features and their usefulness for the task at hand. In both clinical scenarios, we demonstrate that the proposed methodology enhances the interpretability of automatically learned features, highlighting specific learning patterns that resemble how an expert extracts relevant data from medical images. Copyright © 2017 Elsevier B.V. All rights reserved.
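
    The pairing described in this record, unsupervised RBM feature learning followed by a Random Forest, can be reproduced in miniature with scikit-learn; in the sketch below the imaging data are replaced by random placeholders, and the per-feature forest importances stand in loosely for the global level of interpretation the paper defines.

```python
import numpy as np
from sklearn.neural_network import BernoulliRBM
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.random((200, 64))          # placeholder voxel/patch intensities scaled to [0, 1]
y = rng.integers(0, 2, size=200)   # placeholder lesion / non-lesion labels

rbm = BernoulliRBM(n_components=16, learning_rate=0.05, n_iter=20, random_state=0)
features = rbm.fit_transform(X)    # unsupervised feature computation

rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(features, y)

# A crude global-level view: how much each learned feature contributes to the forest.
for k, imp in enumerate(rf.feature_importances_):
    print(f"RBM feature {k:2d}: importance {imp:.3f}")
```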

  7. Joint Multi-scale Convolution Neural Network for Scene Classification of High Resolution Remote Sensing Imagery

    Directory of Open Access Journals (Sweden)

    ZHENG Zhuo

    2018-05-01

    Full Text Available High resolution remote sensing imagery scene classification is important for automatic complex scene recognition, which is a key technology for military and disaster-relief applications, among others. In this paper, we propose a novel joint multi-scale convolution neural network (JMCNN) method using a limited amount of image data for high resolution remote sensing imagery scene classification. Different from a traditional convolutional neural network, the proposed JMCNN is an end-to-end training model with joint enhanced high-level feature representation, which includes a multi-channel feature extractor, joint multi-scale feature fusion and a Softmax classifier. First, multi-channel and multi-scale convolutional extractors are used to extract middle-level scene features. Then, in order to achieve enhanced high-level feature representation on a limited dataset, joint multi-scale feature fusion is proposed to combine multi-channel and multi-scale features using two feature fusions. Finally, the enhanced high-level feature representation is used for classification by Softmax. Experiments were conducted using the two limited public UCM and SIRI datasets. Compared to state-of-the-art methods, the JMCNN achieved improved performance and great robustness with average accuracies of 89.3% and 88.3% on the two datasets.

  8. Retinex enhancement of infrared images.

    Science.gov (United States)

    Li, Ying; He, Renjie; Xu, Guizhi; Hou, Changzhi; Sun, Yunyan; Guo, Lei; Rao, Liyun; Yan, Weili

    2008-01-01

    With the ability to image the temperature distribution of the body, infrared imaging is promising for the diagnosis and prognosis of diseases. However, the poor quality of raw infrared images has prevented wider application, and one of the essential problems is the low contrast appearance of the imaged object. In this paper, image enhancement techniques based on the Retinex theory are studied; Retinex is a process that automatically restores visual realism to images. The algorithms, including the Frankle-McCann algorithm, the McCann99 algorithm, the single-scale Retinex algorithm, the multi-scale Retinex algorithm and the multi-scale Retinex algorithm with color restoration (MSRCR), are applied to the enhancement of infrared images. Entropy measurements along with visual inspection were compared, and the results show that the algorithms based on Retinex theory have the ability to enhance infrared images. Of the algorithms compared, MSRCR demonstrated the best performance.
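
    For reference alongside this record, a compact NumPy/SciPy rendering of the single-scale and multi-scale Retinex operations on a single-channel infrared image is given below; the surround scales and equal weights are common choices, not necessarily those used by the authors.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def single_scale_retinex(img, sigma):
    """log(image) minus log of its Gaussian-blurred surround."""
    img = img.astype(np.float64) + 1.0           # avoid log(0)
    return np.log(img) - np.log(gaussian_filter(img, sigma))

def multi_scale_retinex(img, sigmas=(15, 80, 250), weights=None):
    """Weighted sum of single-scale outputs; equal weights by default."""
    if weights is None:
        weights = [1.0 / len(sigmas)] * len(sigmas)
    out = sum(w * single_scale_retinex(img, s) for w, s in zip(weights, sigmas))
    # Stretch the result back to a displayable 8-bit range.
    out = (out - out.min()) / (out.max() - out.min() + 1e-12)
    return (255 * out).astype(np.uint8)
```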

  9. Multifunctional multiscale composites: Processing, modeling and characterization

    Science.gov (United States)

    Qiu, Jingjing

    Carbon nanotubes (CNTs) demonstrate extraordinary properties and show great promise in enhancing out-of-plane properties of traditional polymer/fiber composites and enabling functionality. However, current manufacturing challenges hinder the realization of their potential. In the dissertation research, both experimental and computational efforts have been conducted to investigate effective manufacturing techniques of CNT integrated multiscale composites. The fabricated composites demonstrated significant improvements in physical properties, such as tensile strength, tensile modulus, inter-laminar shear strength, thermal dimension stability and electrical conductivity. Such multiscale composites were truly multifunctional with the addition of CNTs. Furthermore, a novel hierarchical multiscale modeling method was developed in this research. Molecular dynamics (MD) simulation offered a reasonable explanation of CNT dispersion and motion in polymer solution. Bi-mode finite-extensible-nonlinear-elastic (FENE) dumbbell simulation was used to analyze the influence of CNT length distribution on the stress tensor and shear-rate-dependent viscosity. Based on the simulated viscosity profile and empirical equations from experiments, a macroscale flow simulation model based on the finite element method (FEM) was developed and validated to predict resin flow behavior in the processing of CNT-enhanced multiscale composites. The proposed multiscale modeling method provided a comprehensive understanding of micro/nano flow in both atomistic detail and at the mesoscale. The simulation model can be used to optimize process design and control of the mold-filling process in multiscale composite manufacturing. This research provided systematic investigations into CNT-based multiscale composites. The results from this study may be used to leverage the benefits of CNTs and open up new application opportunities for high-performance multifunctional multiscale composites. Keywords. Carbon

  10. Enhanced thermoelectric properties in p-type Bi{sub 0.4}Sb{sub 1.6}Te{sub 3} alloy by combining incorporation and doping using multi-scale CuAlO{sub 2} particles

    Energy Technology Data Exchange (ETDEWEB)

    Song, Zijun; Liu, Yuan; Zhou, Zhenxing; Lu, Xiaofang; Wang, Lianjun [State Key Laboratory for Modification of Chemical Fibers and Polymer Materials, College of Materials Science and Engineering, Donghua University, Shanghai (China); Institute of Functional Materials, Donghua University, Shanghai (China); Zhang, Qihao [State Key Laboratory of High Performance Ceramics and Superfine Microstructure, Shanghai Institute of Ceramics, Chinese Academy of Sciences, Shanghai (China); University of Chinese Academy of Sciences, Beijing (China); Jiang, Wan [State Key Laboratory for Modification of Chemical Fibers and Polymer Materials, College of Materials Science and Engineering, Donghua University, Shanghai (China); Institute of Functional Materials, Donghua University, Shanghai (China); School of Material Science and Engineering, Jingdezhen Ceramic Institute, Jingdezhen (China); Chen, Lidong [State Key Laboratory of High Performance Ceramics and Superfine Microstructure, Shanghai Institute of Ceramics, Chinese Academy of Sciences, Shanghai (China)

    2017-01-15

    Multi-scale CuAlO{sub 2} particles are introduced into the Bi{sub 0.4}Sb{sub 1.6}Te{sub 3} matrix to synergistically optimize the electrical conductivity, Seebeck coefficient, and lattice thermal conductivity. Cu originating from the fine CuAlO{sub 2} grains diffuses into the Bi{sub 0.4}Sb{sub 1.6}Te{sub 3} matrix and tunes the carrier concentration, while the coarse CuAlO{sub 2} particles survive as a second phase within the matrix. The power factor is improved over the whole temperature range due to the low-energy electron filtering effect on the Seebeck coefficient and the enhanced electrical transport property from mild Cu doping. Meanwhile, the remaining CuAlO{sub 2} inclusions give rise to additional boundary and newly built interface scattering of heat-carrying phonons, resulting in a reduced lattice thermal conductivity. Consequently, the maximum ZT is found to be enhanced by 150% through this multi-scale microstructure regulation when the CuAlO{sub 2} content reaches 0.6 vol.%. Moreover, the ZT curves flatten over the whole temperature range after introducing the multi-scale CuAlO{sub 2} particles, which leads to a remarkable increase in the average ZT. (copyright 2016 WILEY-VCH Verlag GmbH and Co. KGaA, Weinheim)

  11. Multiscale principal component analysis

    International Nuclear Information System (INIS)

    Akinduko, A A; Gorban, A N

    2014-01-01

    Principal component analysis (PCA) is an important tool in exploring data. The conventional approach to PCA leads to a solution which favours the structures with large variances. This is sensitive to outliers and could obfuscate interesting underlying structures. One of the equivalent definitions of PCA is that it seeks the subspaces that maximize the sum of squared pairwise distances between data projections. This definition opens up more flexibility in the analysis of principal components, which is useful in enhancing PCA. In this paper we introduce scales into PCA by maximizing only the sum of pairwise distances between projections for pairs of datapoints with distances within a chosen interval of values [l,u]. The resulting principal component decompositions in Multiscale PCA depend on the point (l,u) in the plane, and for each point we define projectors onto principal components. Cluster analysis of these projectors reveals the structures in the data at various scales. Each structure is described by the eigenvectors at the medoid point of the cluster which represents the structure. We also use the distortion of projections as a criterion for choosing an appropriate scale, especially for data with outliers. This method was tested on both artificial data distributions and real data. For data with multiscale structures, the method was able to reveal the different structures of the data and also to reduce the effect of outliers in the principal component analysis.
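
    The scale-restricted definition above, maximizing the summed squared distances between projections only over pairs whose original distances fall in [l,u], translates into an eigenproblem on a pair-restricted scatter matrix. The NumPy sketch below is one plausible reading of that definition, not the authors' implementation.

```python
import numpy as np

def multiscale_pca(X, l, u, n_components=2):
    """Principal directions that maximize the summed squared distances between
    projections, restricted to pairs whose original distances fall in [l, u]."""
    n, d = X.shape
    S = np.zeros((d, d))
    for i in range(n):
        diffs = X[i + 1:] - X[i]                       # pairwise differences to later points
        dist = np.linalg.norm(diffs, axis=1)
        sel = diffs[(dist >= l) & (dist <= u)]
        S += sel.T @ sel                               # accumulate the restricted scatter
    vals, vecs = np.linalg.eigh(S)                     # eigenvalues in ascending order
    return vecs[:, ::-1][:, :n_components]             # leading directions

# Example: directions sensitive only to small-scale (local) structure.
X = np.random.default_rng(0).normal(size=(300, 5))
W = multiscale_pca(X, l=0.0, u=1.0)
print(W.shape)                                         # (5, 2)
```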

  12. Automatic assessment of coronary artery calcium score from contrast-enhanced 256-row coronary computed tomography angiography.

    Science.gov (United States)

    Rubinshtein, Ronen; Halon, David A; Gaspar, Tamar; Lewis, Basil S; Peled, Nathan

    2014-01-01

    The coronary artery calcium score (CS), an independent predictor of cardiovascular events, can be obtained from a stand-alone nonenhanced computed tomography (CT) scan (CSCT) or as an additional nonenhanced procedure before contrast-enhanced coronary CT angiography (CCTA). We evaluated the accuracy of a novel fully automatic tool for computing CS from the CCTA examination. One hundred thirty-six consecutive symptomatic patients (aged 59 ± 11 years, 40% female) without known coronary artery disease who underwent both 256-row CSCT and CCTA were studied. Original scan reconstruction (slice thickness) was maintained (3 mm for CSCT and 0.67 mm for CCTA). CS was computed from CCTA by an automatic tool (COR Analyzer, rcadia Medical Imaging, Haifa, Israel) and compared with CS results obtained by standard assessment of nonenhanced CSCT (HeartBeat CS, Philips, Cleveland, Ohio). We also compared both methods for classification into 5 commonly used CS categories (0, 1 to 10, 11 to 100, 101 to 400, >400 Agatston units). All scans were of diagnostic quality. CS obtained by the COR Analyzer from CCTA classified 111 of 136 patients (82%) into categories identical to those obtained by CS from CSCT, and 24 of the remaining 25 into an adjacent category. Overall, CS values from CCTA showed high correlation with CS values from CSCT (Spearman rank correlation = 0.95). In conclusion, CS values automatically computed from 256-row CCTA correlated highly with standard CS values obtained from nonenhanced CSCT. CS obtained directly from CCTA may obviate the need for an additional scan and attendant radiation. Copyright © 2014 Elsevier Inc. All rights reserved.
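
    For readers unfamiliar with the score discussed in this record, the standard Agatston convention weights each calcified lesion's area per slice by its peak attenuation (1 for 130-199 HU, 2 for 200-299, 3 for 300-399, 4 for 400 HU and above). The NumPy sketch below implements that textbook rule on pre-identified lesion masks; it is not the COR Analyzer's proprietary pipeline, and the 1 mm² minimum-area filter is the commonly used convention.

```python
import numpy as np

def agatston_weight(peak_hu):
    """Standard density weighting: 1 (130-199 HU), 2 (200-299), 3 (300-399), 4 (>=400)."""
    return 1 + min(int(peak_hu // 100) - 1, 3) if peak_hu >= 130 else 0

def agatston_score(slice_hu, lesion_labels, pixel_area_mm2):
    """Sum of area x density weight over labeled calcified lesions in one axial slice."""
    score = 0.0
    for label in np.unique(lesion_labels):
        if label == 0:                      # background
            continue
        mask = lesion_labels == label
        area = mask.sum() * pixel_area_mm2
        if area >= 1.0:                     # lesions smaller than 1 mm^2 are ignored
            score += area * agatston_weight(slice_hu[mask].max())
    return score
```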

  13. Multi-scale enhancement of climate prediction over land by increasing the model sensitivity to vegetation variability in EC-Earth

    Science.gov (United States)

    Alessandri, Andrea; Catalano, Franco; De Felice, Matteo; Van Den Hurk, Bart; Doblas Reyes, Francisco; Boussetta, Souhail; Balsamo, Gianpaolo; Miller, Paul A.

    2017-08-01

    The EC-Earth earth system model has recently been developed to include the dynamics of vegetation. In its original formulation, vegetation variability is represented simply by the Leaf Area Index (LAI), which affects climate basically by changing the vegetation physiological resistance to evapotranspiration. This coupling has been found to have only a weak effect on the surface climate modeled by EC-Earth. In reality, the effective sub-grid vegetation fractional coverage will vary seasonally and at interannual time-scales in response to leaf-canopy growth, phenology and senescence, and it therefore affects biophysical parameters such as the albedo, surface roughness and soil field capacity. To adequately represent this effect in EC-Earth, we included an exponential dependence of the vegetation cover on the LAI. By comparing two sets of simulations performed with and without the new variable fractional-coverage parameterization, spanning from centennial (twentieth century) simulations and retrospective predictions to the decadal (5-year), seasonal and weather time-scales, we show for the first time a significant multi-scale enhancement of vegetation impacts in climate simulation and prediction over land. Particularly large effects at multiple time scales are shown in boreal winter over middle-to-high latitudes in Canada, the western US, Eastern Europe, Russia and eastern Siberia, due to the implemented time-varying shadowing effect of tree vegetation on snow surfaces. Over Northern Hemisphere boreal forest regions, the improved representation of vegetation cover tends to correct the winter warm biases and improves the climate change sensitivity, the decadal potential predictability, and the skill of forecasts at seasonal and weather time-scales. Significant improvements in the prediction of 2 m temperature and rainfall are also shown over transitional land surface hot spots. Both the potential predictability at the decadal time-scale and seasonal-forecast skill are enhanced over

  14. Fatigue of multiscale composites with secondary nanoplatelet reinforcement: 3D computational analysis

    DEFF Research Database (Denmark)

    Dai, Gaoming; Mishnaevsky, Leon, Jr.

    2014-01-01

    3D numerical simulations of fatigue damage of multiscale fiber reinforced polymer composites with secondary nanoclay reinforcement are carried out. Macro–micro FE models of the multiscale composites are generated automatically using Python based software. The effect of the nanoclay reinforcement....... Multiscale composites with exfoliated nanoreinforcement and aligned nanoplatelets ensure the better fatigue resistance than those with intercalated/clustered and randomly oriented nanoreinforcement....

  15. Technology-Enhanced Assessment of Math Fact Automaticity: Patterns of Performance for Low- and Typically Achieving Students

    Science.gov (United States)

    Stickney, Eric M.; Sharp, Lindsay B.; Kenyon, Amanda S.

    2012-01-01

    Because math fact automaticity has been identified as a key barrier for students struggling with mathematics, we examined how initial math achievement levels influenced the path to automaticity (e.g., variation in number of attempts, speed of retrieval, and skill maintenance over time) and the relation between attainment of automaticity and gains…

  16. A new combined technique for automatic contrast enhancement of digital images

    Directory of Open Access Journals (Sweden)

    Ismail A. Humied

    2012-03-01

    Full Text Available Some low-contrast images have characteristics that make it difficult to improve them with traditional methods. One example is a histogram whose component amplitudes are very high at one location on the gray scale and very small over the rest of the gray scale. In the present paper, a new method that can deal with such cases is described. The proposed method is a combination of Histogram Equalization (HE) and Fast Gray-Level Grouping (FGLG). The basic procedure is to segment the original histogram of a low-contrast image into two sub-histograms according to the location of the highest-amplitude histogram component, and to achieve contrast enhancement by equalizing the left segment with the HE technique and the right segment with the FGLG technique. The results show that the proposed method not only produces better results than each individual contrast enhancement technique, but is also fully automated. Moreover, it is applicable to a broad variety of images that satisfy the properties mentioned above and suffer from low contrast.
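    As a rough Python sketch of the splitting step described above, the histogram is cut at its highest peak and each side is equalized separately. Plain histogram equalization stands in for FGLG on the right segment, so this illustrates only the split-at-the-peak idea; the function names are hypothetical.

        import numpy as np

        def split_and_equalize(img):
            """Split the gray-level histogram at its dominant peak and equalize each
            segment independently (the original method applies FGLG, not plain HE,
            to the right segment)."""
            hist, _ = np.histogram(img, bins=256, range=(0, 256))
            peak = int(np.argmax(hist))                   # location of the dominant peak
            out = np.zeros_like(img)

            def equalize(values, lo, hi):
                # map the sub-histogram of `values` onto the gray range [lo, hi]
                h, _ = np.histogram(values, bins=hi - lo + 1, range=(lo, hi + 1))
                cdf = np.cumsum(h).astype(float)
                cdf /= cdf[-1] if cdf[-1] > 0 else 1.0
                idx = values.astype(int) - lo
                return lo + np.round(cdf[idx] * (hi - lo)).astype(img.dtype)

            left, right = img <= peak, img > peak
            if left.any():
                out[left] = equalize(img[left], 0, peak)
            if right.any():
                out[right] = equalize(img[right], min(peak + 1, 255), 255)
            return out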

  17. Automatic control of the effluent turbidity from a chemically enhanced primary treatment with microsieving.

    Science.gov (United States)

    Väänänen, J; Memet, S; Günther, T; Lilja, M; Cimbritz, M; la Cour Jansen, J

    2017-10-01

    For chemically enhanced primary treatment (CEPT) with microsieving, a feedback proportional-integral controller combined with a feedforward compensator was used at large pilot scale to control effluent water turbidity to desired set points. The effluent water turbidity from the microsieve was maintained at various set points in the range of 12-80 NTU, essentially independent of the studied variations in influent flow rate and influent wastewater composition. Effluent turbidity was highly correlated with effluent chemical oxygen demand (COD). Thus, for CEPT based on microsieving, controlling the removal of COD was possible. Thereby, incoming carbon can be optimally distributed between biological nitrogen removal and anaerobic digestion for biogas production. The presented method is based on common automation and control strategies; therefore, fine-tuning and optimization for specific requirements are simplified compared to model-based dosing control.

  18. Using Adaptive Tone Mapping to Enhance Edge-Preserving Color Image Automatically

    Directory of Open Access Journals (Sweden)

    Lu Min-Yao

    2010-01-01

    Full Text Available One common characteristic of most high-contrast images is the coexistence of dark shadows and a bright light source in one scene. It is very difficult to present details in both dark and bright areas simultaneously on most display devices. To resolve this problem, a new method combining a bilateral filter with adaptive tone mapping is proposed to improve image quality. First, a bilateral filter is used to decompose the image into two layers: a large-scale layer and a detail layer. Then, the large-scale layer image is divided into three regions: bright, mid-tone, and dark. Finally, an appropriate tone-mapping method is chosen to process each region according to its individual properties. Only the large-scale layer is enhanced by adaptive tone mapping; therefore, the details of the original image are preserved. The experimental results demonstrate the success of the proposed method. Furthermore, the proposed method avoids the posterization produced by methods using histogram equalization.
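    As an illustration of the two-layer idea, the sketch below uses OpenCV's bilateral filter for the base/detail decomposition and a single gamma curve as a stand-in for the region-wise adaptive tone mapping of the paper; the function name and parameter values are assumptions.

        import cv2
        import numpy as np

        def tone_map_gray(img_gray, gamma=0.6):
            """Decompose a grayscale image into a large-scale layer (bilateral filter)
            and a detail layer, compress only the large-scale layer, then recombine."""
            f = img_gray.astype(np.float32) / 255.0
            log_f = np.log1p(f)
            base = cv2.bilateralFilter(log_f, 9, 0.3, 7.0)   # edge-preserving smoothing
            detail = log_f - base                            # detail layer kept untouched
            out = np.expm1(gamma * base + detail)            # compress the base layer only
            return np.clip(out * 255.0, 0, 255).astype(np.uint8)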

  19. Musical Instrument Identification using Multiscale Mel-frequency Cepstral Coefficients

    DEFF Research Database (Denmark)

    Sturm, Bob L.; Morvidone, Marcela; Daudet, Laurent

    2010-01-01

    We investigate the benefits of evaluating Mel-frequency cepstral coefficients (MFCCs) over several time scales in the context of automatic musical instrument identification for signals that are monophonic but derived from real musical settings. We define several sets of features derived from MFCC...... multiscale decompositions perform significantly better than features computed using a single time-resolution....
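    A minimal sketch of the general idea of MFCCs evaluated over several time scales, assuming the librosa library is available; the window lengths, summary statistics and feature sets here are illustrative, not the exact features of the paper.

        import numpy as np
        import librosa

        def multiscale_mfcc(y, sr, frame_lengths=(512, 1024, 2048, 4096), n_mfcc=13):
            """Compute MFCCs at several analysis window lengths and summarize each
            scale by its mean and standard deviation, giving one fixed-length vector."""
            feats = []
            for n_fft in frame_lengths:
                mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc,
                                            n_fft=n_fft, hop_length=n_fft // 2)
                feats.append(np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)]))
            return np.concatenate(feats)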

  20. Modeling of time-lapse multi-scale seismic monitoring of CO2 injected into a fault zone to enhance the characterization of permeability in enhanced geothermal systems

    Science.gov (United States)

    Zhang, R.; Borgia, A.; Daley, T. M.; Oldenburg, C. M.; Jung, Y.; Lee, K. J.; Doughty, C.; Altundas, B.; Chugunov, N.; Ramakrishnan, T. S.

    2017-12-01

    Subsurface permeable faults and fracture networks play a critical role for enhanced geothermal systems (EGS) by providing conduits for fluid flow. Characterization of the permeable flow paths before and after stimulation is necessary to evaluate and optimize energy extraction. To provide insight into the feasibility of using CO2 as a contrast agent to enhance fault characterization by seismic methods, we model seismic monitoring of supercritical CO2 (scCO2) injected into a fault. During the CO2 injection, the original brine is replaced by scCO2, which leads to variations in geophysical properties of the formation. To explore the technical feasibility of the approach, we present modeling results for different time-lapse seismic methods including surface seismic, vertical seismic profiling (VSP), and a cross-well survey. We simulate the injection and production of CO2 into a normal fault in a system based on the Brady's geothermal field and model pressure and saturation variations in the fault zone using TOUGH2-ECO2N. The simulation results provide changing fluid properties during the injection, such as saturation and salinity changes, which allow us to estimate corresponding changes in seismic properties of the fault and the formation. We model the response of the system to active seismic monitoring in time-lapse mode using an anisotropic finite difference method with modifications for fracture compliance. Results to date show that even narrow fault and fracture zones filled with CO2 can be better detected using the VSP and cross-well survey geometry, while it would be difficult to image the CO2 plume by using surface seismic methods.

  1. Resistance Training Exercise Program for Intervention to Enhance Gait Function in Elderly Chronically Ill Patients: Multivariate Multiscale Entropy for Center of Pressure Signal Analysis

    Directory of Open Access Journals (Sweden)

    Ming-Shu Chen

    2014-01-01

    Full Text Available Falls are unpredictable accidents, and the resulting injuries can be serious in the elderly, particularly those with chronic diseases. Regular exercise is recommended to prevent and treat hypertension and other chronic diseases by reducing clinical blood pressure. The "complexity index" (CI), based on the multiscale entropy (MSE) algorithm, has been applied in recent studies to characterize a person's adaptability to intrinsic and external perturbations and is a widely used measure of postural sway or stability. The multivariate multiscale entropy (MMSE) is an advanced algorithm used to calculate the complexity index (CI) values of center of pressure (COP) data. In this study, we applied MSE and MMSE to analyze the gait function of 24 elderly, chronically ill patients (44% female; 56% male; mean age, 67.56±10.70 years) with either cardiovascular disease, diabetes mellitus, or osteoporosis. After a 12-week training program, postural stability measurements showed significant improvements. Our results show the beneficial effects of resistance training, which can be used to improve postural stability in the elderly, and indicate that the MMSE algorithm for calculating the CI of COP data is superior to the univariate MSE algorithm in identifying the sense of balance in the elderly.
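    The sketch below illustrates only the univariate MSE-based complexity index (coarse-graining followed by sample entropy, summed over scales); the multivariate MMSE variant used in the study is not reproduced, and the function names and parameter defaults are assumptions.

        import numpy as np

        def sample_entropy(x, m=2, r_factor=0.2):
            """Naive O(N^2) sample entropy SampEn(m, r) of a 1D series."""
            x = np.asarray(x, dtype=float)
            r = r_factor * np.std(x)

            def matches(mm):
                t = np.array([x[i:i + mm] for i in range(len(x) - mm)])
                d = np.max(np.abs(t[:, None, :] - t[None, :, :]), axis=-1)
                return np.sum(d <= r) - len(t)          # exclude self-matches

            B, A = matches(m), matches(m + 1)
            return -np.log(A / B) if A > 0 and B > 0 else np.inf

        def complexity_index(x, max_scale=10):
            """Sum of sample entropies of the coarse-grained series at scales 1..max_scale."""
            x = np.asarray(x, dtype=float)
            ci = 0.0
            for tau in range(1, max_scale + 1):
                n = len(x) // tau
                coarse = x[:n * tau].reshape(n, tau).mean(axis=1)
                ci += sample_entropy(coarse)
            return ci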

  2. Multiscale Cancer Modeling

    Science.gov (United States)

    Macklin, Paul; Cristini, Vittorio

    2013-01-01

    Simulating cancer behavior across multiple biological scales in space and time, i.e., multiscale cancer modeling, is increasingly being recognized as a powerful tool to refine hypotheses, focus experiments, and enable more accurate predictions. A growing number of examples illustrate the value of this approach in providing quantitative insight on the initiation, progression, and treatment of cancer. In this review, we introduce the most recent and important multiscale cancer modeling works that have successfully established a mechanistic link between different biological scales. Biophysical, biochemical, and biomechanical factors are considered in these models. We also discuss innovative, cutting-edge modeling methods that are moving predictive multiscale cancer modeling toward clinical application. Furthermore, because the development of multiscale cancer models requires a new level of collaboration among scientists from a variety of fields such as biology, medicine, physics, mathematics, engineering, and computer science, an innovative Web-based infrastructure is needed to support this growing community. PMID:21529163

  3. Multiscale Representations Phase II

    National Research Council Canada - National Science Library

    Bar-Yam, Yaneer

    2004-01-01

    .... Multiscale analysis provides an analytic tool that can be applied to evaluating force capabilities as well as the relevance of designs for technological innovations to support force structures and their modernization...

  4. Multiscale System Theory

    Science.gov (United States)

    1990-02-21

    LIDS-P-1953, Multiscale System Theory. Albert Benveniste (IRISA-INRIA, Campus de Beaulieu, 35042 Rennes Cedex, France); Ramine Nikoukhah (INRIA). ... the development of a corresponding system theory and a theory of stochastic processes and their estimation. The research presented in this and several

  5. Multiscale Simulations Using Particles

    DEFF Research Database (Denmark)

    Walther, Jens Honore

    vortex methods for problems in continuum fluid dynamics, dissipative particle dynamics for flow at the meso scale, and atomistic molecular dynamics simulations of nanofluidic systems. We employ multiscale techniques to bridge the atomistic and continuum scales to study fundamental problems in fluid...... dynamics. Recent work on the thermophoretic motion of water nanodroplets confined inside carbon nanotubes, and multiscale techniques for polar liquids will be discussed in detail at the symposium....

  6. Automatic gallbladder segmentation using combined 2D and 3D shape features to perform volumetric analysis in native and secretin-enhanced MRCP sequences.

    Science.gov (United States)

    Gloger, Oliver; Bülow, Robin; Tönnies, Klaus; Völzke, Henry

    2017-11-24

    We aimed to develop the first fully automated 3D gallbladder segmentation approach to perform volumetric analysis in volume data of magnetic resonance (MR) cholangiopancreatography (MRCP) sequences. Volumetric gallbladder analysis is performed for non-contrast-enhanced and secretin-enhanced MRCP sequences. Native and secretin-enhanced MRCP volume data were produced with a 1.5-T MR system. Images of coronal maximum intensity projections (MIP) are used to automatically compute 2D characteristic shape features of the gallbladder in the MIP images. A gallbladder shape space is generated to derive 3D gallbladder shape features, which are then combined with 2D gallbladder shape features in a support vector machine approach to detect gallbladder regions in MRCP volume data. A region-based level set approach is used for fine segmentation. Volumetric analysis is performed for both sequences to calculate gallbladder volume differences between both sequences. The approach presented achieves segmentation results with mean Dice coefficients of 0.917 in non-contrast-enhanced sequences and 0.904 in secretin-enhanced sequences. This is the first approach developed to detect and segment gallbladders in MR-based volume data automatically in both sequences. It can be used to perform gallbladder volume determination in epidemiological studies and to detect abnormal gallbladder volumes or shapes. The positive volume differences between both sequences may indicate the quantity of the pancreatobiliary reflux.

  7. Multiscale Computing with the Multiscale Modeling Library and Runtime Environment

    NARCIS (Netherlands)

    Borgdorff, J.; Mamonski, M.; Bosak, B.; Groen, D.; Ben Belgacem, M.; Kurowski, K.; Hoekstra, A.G.

    2013-01-01

    We introduce a software tool to simulate multiscale models: the Multiscale Coupling Library and Environment 2 (MUSCLE 2). MUSCLE 2 is a component-based modeling tool inspired by the multiscale modeling and simulation framework, with an easy-to-use API which supports Java, C++, C, and Fortran. We

  8. Dynamic contrast-enhanced MRI for automatic detection of foci of residual or recurrent disease after prostatectomy

    Energy Technology Data Exchange (ETDEWEB)

    Parra, N.A.; Orman, Amber; Abramowitz, Matthew; Pollack, Alan; Stoyanova, Radka [University of Miami Miller School of Medicine, Department of Radiation Oncology, Miami, FL (United States); Padgett, Kyle [University of Miami Miller School of Medicine, Department of Radiation Oncology, Miami, FL (United States); University of Miami Miller School of Medicine, Department of Radiology, Miami, FL (United States); Casillas, Victor [University of Miami Miller School of Medicine, Department of Radiology, Miami, FL (United States); Punnen, Sanoj [University of Miami Miller School of Medicine, Department of Urology, Miami, FL (United States)

    2017-01-15

    This study aimed to develop an automated procedure for identifying suspicious foci of residual/recurrent disease in the prostate bed using dynamic contrast-enhanced MRI (DCE-MRI) in prostate cancer patients after prostatectomy. Data of 22 patients presenting for salvage radiotherapy (RT) with an identified gross tumor volume (GTV) in the prostate bed were analyzed retrospectively. An unsupervised pattern recognition method was used to analyze DCE-MRI curves from the prostate bed. Data were represented as a product of a number of signal-vs.-time patterns and their weights. The temporal pattern characterized by fast wash-in and gradual wash-out was considered the "tumor" pattern. The corresponding weights were thresholded based on the number (1, 1.5, 2, 2.5) of standard deviations away from the mean, denoted as DCE1.0, ..., DCE2.5, and displayed on the T2-weighted MRI. The resulting four volumes were compared with the GTV and the maximum pre-RT prostate-specific antigen (PSA) level. Pharmacokinetic modeling was also carried out. Principal component analysis determined 2-4 significant patterns in the patients' DCE-MRI. Analysis and display of the identified suspicious foci were performed in commercial software (MIM Corporation, Cleveland, OH, USA). In general, DCE1.0/DCE1.5 highlighted larger areas than the GTV. DCE2.0 and GTV were significantly correlated (r = 0.60, p < 0.05). DCE2.0/DCE2.5 were also significantly correlated with PSA (r = 0.52 and 0.67, p < 0.05). K{sup trans} for DCE2.5 was statistically higher than the GTV's K{sup trans} (p < 0.05), indicating that the automatic volume better captures areas of malignancy. A software tool was developed for identification and visualization of suspicious foci in DCE-MRI from post-prostatectomy patients and was integrated into the treatment planning system. (orig.) [German] Development of an automatic analysis procedure for use after prostatectomy with dynamic contrast-enhanced

  9. Automatic Imitation

    Science.gov (United States)

    Heyes, Cecilia

    2011-01-01

    "Automatic imitation" is a type of stimulus-response compatibility effect in which the topographical features of task-irrelevant action stimuli facilitate similar, and interfere with dissimilar, responses. This article reviews behavioral, neurophysiological, and neuroimaging research on automatic imitation, asking in what sense it is "automatic"…

  10. Modeling of heterogeneous elastic materials by the multiscale hp-adaptive finite element method

    Science.gov (United States)

    Klimczak, Marek; Cecot, Witold

    2018-01-01

    We present an enhancement of the multiscale finite element method (MsFEM) by combining it with the hp-adaptive FEM. Such a discretization-based homogenization technique is a versatile tool for modeling heterogeneous materials with fast oscillating elasticity coefficients. No assumption on periodicity of the domain is required. In order to avoid direct, so-called overkill mesh computations, a coarse mesh with effective stiffness matrices is used and special shape functions are constructed to account for the local heterogeneities at the micro resolution. The automatic adaptivity (hp-type at the macro resolution and h-type at the micro resolution) increases efficiency of computation. In this paper details of the modified MsFEM are presented and a numerical test performed on a Fichera corner domain is presented in order to validate the proposed approach.

  11. Multiscale Biological Materials

    DEFF Research Database (Denmark)

    Frølich, Simon

    of multiscale biological systems have been investigated and new research methods for automated Rietveld refinement and diffraction scattering computed tomography developed. The composite nature of biological materials was investigated at the atomic scale by looking at the consequences of interactions between...

  12. Automatic Vessel Segmentation on Retinal Images

    Institute of Scientific and Technical Information of China (English)

    Chun-Yuan Yu; Chia-Jen Chang; Yen-Ju Yao; Shyr-Shen Yu

    2014-01-01

    Several features of retinal vessels can be used to monitor the progression of diseases. Changes in vascular structures, for example, vessel caliber, branching angle, and tortuosity, are portents of many diseases such as diabetic retinopathy and arterial hypertension. This paper proposes an automatic retinal vessel segmentation method based on morphological closing and multi-scale line detection. First, an illumination correction is performed on the green-band retinal image. Next, morphological closing and subtraction are applied to obtain a crude retinal vessel image. Then, multi-scale line detection is used to refine the vessel image. Finally, the binary vasculature is extracted by the Otsu algorithm. In this paper, to mitigate the drawbacks of multi-scale line detection, only line detectors at 4 scales are used. The experimental results show that the accuracy is 0.939 for the DRIVE (digital retinal images for vessel extraction) retinal database, which is much better than other methods.
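    A minimal OpenCV/scikit-image sketch of the crude-vessel stage (morphological closing, subtraction and Otsu thresholding); the illumination correction and the multi-scale line-detection refinement of the paper are omitted, and the kernel size is an assumption.

        import cv2
        import numpy as np
        from skimage.filters import threshold_otsu

        def crude_vessels(green_band):
            """Closing minus the original highlights dark vessels; Otsu then binarizes."""
            kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (15, 15))
            closed = cv2.morphologyEx(green_band, cv2.MORPH_CLOSE, kernel)
            vessel_map = cv2.subtract(closed, green_band)   # vessels are darker than background
            binary = vessel_map > threshold_otsu(vessel_map)
            return binary.astype(np.uint8)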

  13. Development of new techniques and enhancement of automatic capability of neutron activation analysis at the Dalat Research Reactor

    International Nuclear Information System (INIS)

    Ho Manh Dung; Ho Van Doanh; Tran Quang Thien; Pham Ngoc Tuan; Pham Ngoc Son; Tran Quoc Duong; Nguyen Van Cuong; Nguyen Minh Tuan; Nguyen Giang; Nguyen Thi Sy

    2017-01-01

    The techniques of neutron activation analysis (NAA), including cyclic, epithermal and prompt-gamma NAA (CNAA, ENAA and PGNAA, respectively), have been developed at the Dalat research reactor (DRR). In addition, efforts have been made to improve the automation of irradiation, measurement and data processing in NAA. The renewal of the necessary devices/tools for sample preparation has also been carried out. As a result, the performance and utility, in terms of sensitivity, accuracy and stability of the analytical results generated by NAA at the DRR, have been significantly improved. The main results of the project are: 1) Upgrading of the fast irradiation system on Channel 13-2/TC to allow cyclic irradiations; 2) Development of CNAA; 3) Development of ENAA; 4) Application of the k0-method for PGNAA; 5) Investigation of the automatic sample changer (ASC2); 6) Upgrading of the Ko-DALAT software for ENAA and modification of the k0-IAEA software for CNAA and PGNAA; and 7) Optimization of irradiation and measurement facilities as well as sample preparation devices/tools. A set of procedures for the techniques developed in the project was established. The procedures have been evaluated by analysis of reference materials, for which they meet the requirements of multi-element analysis for the intended applications. (author)

  14. Towards distributed multiscale computing for the VPH

    NARCIS (Netherlands)

    Hoekstra, A.G.; Coveney, P.

    2010-01-01

    Multiscale modeling is fundamental to the Virtual Physiological Human (VPH) initiative. Most detailed three-dimensional multiscale models lead to prohibitive computational demands. As a possible solution we present MAPPER, a computational science infrastructure for Distributed Multiscale Computing

  15. Automatic dosage of hydrogen peroxide in solar photo-Fenton plants: Development of a control strategy for efficiency enhancement

    Energy Technology Data Exchange (ETDEWEB)

    Ortega-Gomez, E. [Department of Chemical Engineering, University of Almeria, 04120 Almeria (Spain); CIESOL, Joint Centre of the University of Almeria-CIEMAT, 04120 Almeria (Spain); Moreno Ubeda, J.C. [Department of Language and Computation, University of Almeria, 04120 Almeria (Spain); Alvarez Hervas, J.D. [Department of Language and Computation, University of Almeria, 04120 Almeria (Spain); Department of Language and Computation, University of Sevilla, 41092 Sevilla (Spain); Casas Lopez, J.L.; Santos-Juanes Jorda, L. [Department of Chemical Engineering, University of Almeria, 04120 Almeria (Spain); CIESOL, Joint Centre of the University of Almeria-CIEMAT, 04120 Almeria (Spain); Sanchez Perez, J.A., E-mail: jsanchez@ual.es [Department of Chemical Engineering, University of Almeria, 04120 Almeria (Spain); CIESOL, Joint Centre of the University of Almeria-CIEMAT, 04120 Almeria (Spain)

    2012-10-30

    Highlights: Dissolved oxygen monitoring is used for automatic dosage of H{sub 2}O{sub 2} in photo-Fenton. A PI controller with anti-windup minimises H{sub 2}O{sub 2} consumption. H{sub 2}O{sub 2} consumption was reduced by up to 50% with respect to manual addition strategies. Appropriate H{sub 2}O{sub 2} dosage is achieved by the PI controller with anti-windup under disturbances. - Abstract: The solar photo-Fenton process is widely used for the elimination of pollutants in aqueous effluents and, as such, is amply cited in the literature. In this process, hydrogen peroxide represents the highest operational cost. Up until now, manual dosing of H{sub 2}O{sub 2} has led to low process performance. Consequently, there is a need to automate the hydrogen peroxide dosage for industrial applications. As a relationship has been demonstrated between dissolved oxygen (DO) concentration and hydrogen peroxide consumption, DO can be used as a variable for optimising the hydrogen peroxide dosage. For this purpose, a model linking the dynamic behaviour of DO to hydrogen peroxide consumption was obtained experimentally. A control system was then developed based on this model. This control system - a proportional and integral (PI) controller with an anti-windup mechanism - has been tested experimentally. The assays were carried out in a pilot plant under sunlight conditions with paracetamol used as the model pollutant. In comparison with non-assisted addition methods (a single initial addition or continuous addition), a decrease of 50% in hydrogen peroxide consumption was achieved when the automatic controller was used, yielding an economic saving and an improvement in process efficiency.
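    A generic sketch of a discrete PI controller with back-calculation anti-windup, the control structure named in the highlights; the gains, limits and sample time are illustrative, not the plant-specific tuning of the study.

        class PIAntiWindup:
            """Discrete PI controller with back-calculation anti-windup."""
            def __init__(self, kp, ki, dt, u_min, u_max, kaw=1.0):
                self.kp, self.ki, self.dt = kp, ki, dt
                self.u_min, self.u_max, self.kaw = u_min, u_max, kaw
                self.integral = 0.0

            def step(self, setpoint, measurement):
                error = setpoint - measurement
                u_unsat = self.kp * error + self.ki * self.integral
                u = min(max(u_unsat, self.u_min), self.u_max)   # actuator saturation
                # back-calculation: bleed the integrator when the output saturates
                self.integral += (error + self.kaw * (u - u_unsat)) * self.dt
                return u

        # usage: H2O2 dose rate driven by the dissolved-oxygen error (illustrative values)
        controller = PIAntiWindup(kp=0.8, ki=0.05, dt=1.0, u_min=0.0, u_max=10.0)
        dose = controller.step(setpoint=6.0, measurement=5.2)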

  16. Multiscale Cues Drive Collective Cell Migration

    Science.gov (United States)

    Nam, Ki-Hwan; Kim, Peter; Wood, David K.; Kwon, Sunghoon; Provenzano, Paolo P.; Kim, Deok-Ho

    2016-07-01

    To investigate complex biophysical relationships driving directed cell migration, we developed a biomimetic platform that allows perturbation of microscale geometric constraints with concomitant nanoscale contact guidance architectures. This permits us to elucidate the influence, and parse out the relative contribution, of multiscale features, and to define how these physical inputs are jointly processed with oncogenic signaling. We demonstrate that collective cell migration is profoundly enhanced by the addition of contact guidance cues when not otherwise constrained. However, while nanoscale cues promoted migration in all cases, microscale directed migration cues are dominant as the geometric constraint narrows, a behavior that is well explained by stochastic diffusion anisotropy modeling. Further, oncogene activation (i.e., mutant PIK3CA) resulted in profoundly increased migration, where extracellular multiscale directed migration cues and intrinsic signaling synergistically conspire to greatly outperform normal cells or any extracellular guidance cues in isolation.

  17. Multiscale singularity trees

    DEFF Research Database (Denmark)

    Somchaipeng, Kerawit; Sporring, Jon; Johansen, Peter

    2007-01-01

    We propose MultiScale Singularity Trees (MSSTs) as a structure to represent images, and we propose an algorithm for image comparison based on comparing MSSTs. The algorithm is tested on 3 public image databases and compared to 2 state-of-the-art methods. We conclude that the computational complexity...... of our algorithm only allows for the comparison of small trees, and that the results of our method are comparable with state-of-the-art using much fewer parameters for image representation....

  18. Multiscale modelling of nanostructures

    International Nuclear Information System (INIS)

    Vvedensky, Dimitri D

    2004-01-01

    Most materials phenomena are manifestations of processes that are operative over a vast range of length and time scales. A complete understanding of the behaviour of materials thereby requires theoretical and computational tools that span the atomic-scale detail of first-principles methods and the more coarse-grained description provided by continuum equations. Recent efforts have focused on combining traditional methodologies-density functional theory, molecular dynamics, Monte Carlo methods and continuum descriptions-within a unified multiscale framework. This review covers the techniques that have been developed to model various aspects of materials behaviour with the ultimate aim of systematically coupling the atomistic to the continuum descriptions. The approaches described typically have been motivated by particular applications but can often be applied in wider contexts. The self-assembly of quantum dot ensembles will be used as a case study for the issues that arise and the methods used for all nanostructures. Although quantum dots can be obtained with all the standard growth methods and for a variety of material systems, their appearance is a quite selective process, involving the competition between equilibrium and kinetic effects, and the interplay between atomistic and long-range interactions. Most theoretical models have addressed particular aspects of the ordering kinetics of quantum dot ensembles, with far fewer attempts at a comprehensive synthesis of this inherently multiscale phenomenon. We conclude with an assessment of the current status of multiscale modelling strategies and highlight the main outstanding issues. (topical review)

  19. Self-organizing neural networks for automatic detection and classification of contrast-enhancing lesions in dynamic MR-mammography

    International Nuclear Information System (INIS)

    Vomweg, T.W.; Teifke, A.; Kauczor, H.U.; Achenbach, T.; Rieker, O.; Schreiber, W.G.; Heitmann, K.R.; Beier, T.; Thelen, M.

    2005-01-01

    Purpose: Investigation and statistical evaluation of 'Self-Organizing Maps', a special type of neural network in the field of artificial intelligence, for classifying contrast-enhancing lesions in dynamic MR-mammography. Material and Methods: 176 investigations with proven histology after core biopsy or operation were randomly divided into two groups. Several Self-Organizing Maps were trained on investigations of the first group to detect and classify contrast-enhancing lesions in dynamic MR-mammography. Each single pixel's signal/time curve of all patients within the second group was analyzed by the Self-Organizing Maps. The likelihood of malignancy was visualized by color overlays on the MR images. Finally, the assessment of contrast-enhancing lesions by each network was rated visually and evaluated statistically. Results: A well-balanced neural network achieved a sensitivity of 90.5% and a specificity of 72.2% in predicting malignancy of 88 enhancing lesions. Detailed analysis of false-positive results revealed that every second fibroadenoma showed a 'typical malignant' signal/time curve, with no possibility of differentiating between fibroadenomas and malignant tissue on the basis of contrast enhancement alone; however, this special group of lesions was represented by a well-defined area of the Self-Organizing Map. Discussion: Self-Organizing Maps are capable of classifying a dynamic signal/time curve as 'typical benign' or 'typical malignant' and can therefore be used as a second opinion. Given the now known localization on the Self-Organizing Map of fibroadenomas enhancing like malignant tumors, these lesions could be passed to further analysis by additional post-processing elements (e.g., based on T2-weighted series or morphology analysis) in the future. (orig.)
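    As a self-contained illustration of training a self-organizing map on normalized signal/time curves, the tiny from-scratch SOM below can be trained on curves from a first group and then used to map each pixel's curve of a second group to its best-matching unit, whose map position can be colored by likelihood of malignancy; it is not the network or parameterization of the study.

        import numpy as np

        def train_som(curves, grid=(10, 10), epochs=2000, lr0=0.5, sigma0=3.0, seed=0):
            """Train a small 2D self-organizing map on signal/time curves (one curve per row)."""
            rng = np.random.default_rng(seed)
            h, w = grid
            weights = rng.normal(size=(h, w, curves.shape[1]))
            yy, xx = np.mgrid[0:h, 0:w]                            # grid coordinates for the neighborhood
            for t in range(epochs):
                lr = lr0 * np.exp(-t / epochs)
                sigma = sigma0 * np.exp(-t / epochs)
                x = curves[rng.integers(len(curves))]
                d = np.linalg.norm(weights - x, axis=-1)
                bi, bj = np.unravel_index(np.argmin(d), d.shape)   # best-matching unit
                g = np.exp(-((yy - bi) ** 2 + (xx - bj) ** 2) / (2 * sigma ** 2))
                weights += lr * g[..., None] * (x - weights)
            return weights

        def best_matching_unit(weights, curve):
            d = np.linalg.norm(weights - curve, axis=-1)
            return np.unravel_index(np.argmin(d), d.shape)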

  20. Semi-automatic motion compensation of contrast-enhanced ultrasound images from abdominal organs for perfusion analysis

    Czech Academy of Sciences Publication Activity Database

    Schafer, S.; Nylund, K.; Saevik, F.; Engjom, T.; Mézl, M.; Jiřík, Radovan; Dimcevski, G.; Gilja, O.H.; Tönnies, K.

    2015-01-01

    Roč. 63, AUG 1 (2015), s. 229-237 ISSN 0010-4825 R&D Projects: GA ČR GAP102/12/2380 Institutional support: RVO:68081731 Keywords : ultrasonography * motion analysis * motion compensation * registration * CEUS * contrast-enhanced ultrasound * perfusion * perfusion modeling Subject RIV: FS - Medical Facilities ; Equipment Impact factor: 1.521, year: 2015

  1. SU-E-J-182: A Feasibility Study Evaluating Automatic Identification of Gross Tumor Volume for Breast Cancer Radiotherapy Using Dynamic Contrast-Enhanced MR Imaging

    International Nuclear Information System (INIS)

    Wang, C; Horton, J; Yin, F; Blitzblau, R; Palta, M; Chang, Z

    2014-01-01

    Purpose: To develop a computerized pharmacokinetic model-free Gross Tumor Volume (GTV) segmentation method based on dynamic contrast-enhanced MRI (DCE-MRI) data that can improve physician GTV contouring efficiency. Methods: 12 patients with biopsy-proven early stage breast cancer with post-contrast enhanced DCE-MRI images were analyzed in this study. A fuzzy c-means (FCM) clustering-based method was applied to segment the 3D GTV from pre-operative DCE-MRI data. A region of interest (ROI) was selected by a clinician/physicist, and normalized signal evolution curves were calculated by dividing the signal intensity enhancement value at each voxel by the pre-contrast signal intensity value at the corresponding voxel. Three semi-quantitative metrics were analyzed based on the normalized signal evolution curves: initial Area Under signal evolution Curve (iAUC), Immediate Enhancement Ratio (IER), and Variance of Enhancement Slope (VES). The FCM algorithm was applied to partition ROI voxels into GTV voxels and non-GTV voxels using the three analyzed metrics. The partition map for the smaller cluster was then generated and binarized with an automatically calculated threshold. To reduce spurious structures resulting from background, a labeling operation was performed to keep the largest three-dimensional connected component as the identified target. Basic morphological operations including hole-filling and spur removal were utilized to improve the target smoothness. Each segmented GTV was compared to that drawn by experienced radiation oncologists. An agreement index was proposed to quantify the overlap between the GTVs identified using the two approaches, and a threshold value of 0.4 was regarded as acceptable. Results: The GTVs identified by the proposed method overlapped with the ones drawn by radiation oncologists in all cases, and in 10 out of 12 cases the agreement indices were above the threshold of 0.4. Conclusion: The proposed automatic segmentation method was shown to

  2. SU-E-J-182: A Feasibility Study Evaluating Automatic Identification of Gross Tumor Volume for Breast Cancer Radiotherapy Using Dynamic Contrast-Enhanced MR Imaging

    Energy Technology Data Exchange (ETDEWEB)

    Wang, C; Horton, J; Yin, F; Blitzblau, R; Palta, M; Chang, Z [Duke University Medical Center, Durham, NC (United States)

    2014-06-01

    Purpose: To develop a computerized pharmacokinetic model-free Gross Tumor Volume (GTV) segmentation method based on dynamic contrast-enhanced MRI (DCE-MRI) data that can improve physician GTV contouring efficiency. Methods: 12 patients with biopsy-proven early stage breast cancer with post-contrast enhanced DCE-MRI images were analyzed in this study. A fuzzy c-means (FCM) clustering-based method was applied to segment the 3D GTV from pre-operative DCE-MRI data. A region of interest (ROI) was selected by a clinician/physicist, and normalized signal evolution curves were calculated by dividing the signal intensity enhancement value at each voxel by the pre-contrast signal intensity value at the corresponding voxel. Three semi-quantitative metrics were analyzed based on the normalized signal evolution curves: initial Area Under signal evolution Curve (iAUC), Immediate Enhancement Ratio (IER), and Variance of Enhancement Slope (VES). The FCM algorithm was applied to partition ROI voxels into GTV voxels and non-GTV voxels using the three analyzed metrics. The partition map for the smaller cluster was then generated and binarized with an automatically calculated threshold. To reduce spurious structures resulting from background, a labeling operation was performed to keep the largest three-dimensional connected component as the identified target. Basic morphological operations including hole-filling and spur removal were utilized to improve the target smoothness. Each segmented GTV was compared to that drawn by experienced radiation oncologists. An agreement index was proposed to quantify the overlap between the GTVs identified using the two approaches, and a threshold value of 0.4 was regarded as acceptable. Results: The GTVs identified by the proposed method overlapped with the ones drawn by radiation oncologists in all cases, and in 10 out of 12 cases the agreement indices were above the threshold of 0.4. Conclusion: The proposed automatic segmentation method was shown to
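    A Python sketch of the clustering stage described in the two records above: simplified, assumed definitions of the three semi-quantitative metrics are computed per voxel and a plain fuzzy c-means partition separates the smaller, tumor-like cluster; the connected-component and morphological post-processing steps are omitted.

        import numpy as np

        def fuzzy_cmeans(X, c=2, m=2.0, iters=100, eps=1e-5, seed=0):
            """Plain fuzzy c-means: returns the membership matrix U (n x c) and the centers."""
            rng = np.random.default_rng(seed)
            U = rng.random((len(X), c))
            U /= U.sum(axis=1, keepdims=True)
            for _ in range(iters):
                centers = (U ** m).T @ X / (U ** m).sum(axis=0)[:, None]
                d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=-1) + 1e-12
                U_new = 1.0 / (d ** (2 / (m - 1)) * np.sum(d ** (-2 / (m - 1)), axis=1, keepdims=True))
                done = np.abs(U_new - U).max() < eps
                U = U_new
                if done:
                    break
            return U, centers

        def gtv_candidates(curves, t_early=5):
            """Voxelwise metrics (simplified stand-ins for iAUC, IER and VES) plus FCM labels."""
            iauc = curves[:, :t_early].sum(axis=1)            # early area under the curve
            ier = curves[:, 1]                                # first post-contrast enhancement ratio
            ves = np.diff(curves, axis=1).var(axis=1)         # variance of point-to-point slopes
            feats = np.column_stack([iauc, ier, ves])
            feats = (feats - feats.mean(0)) / (feats.std(0) + 1e-9)
            U, _ = fuzzy_cmeans(feats, c=2)
            labels = U.argmax(axis=1)
            tumor_label = np.argmin(np.bincount(labels, minlength=2))  # smaller cluster = GTV candidate
            return labels == tumor_label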

  3. Multiscale Signal Analysis and Modeling

    CERN Document Server

    Zayed, Ahmed

    2013-01-01

    Multiscale Signal Analysis and Modeling presents recent advances in multiscale analysis and modeling using wavelets and other systems. This book also presents applications in digital signal processing using sampling theory and techniques from various function spaces, filter design, feature extraction and classification, signal and image representation/transmission, coding, nonparametric statistical signal processing, and statistical learning theory. This book also: Discusses recently developed signal modeling techniques, such as the multiscale method for complex time series modeling, multiscale positive density estimations, Bayesian Shrinkage Strategies, and algorithms for data adaptive statistics Introduces new sampling algorithms for multidimensional signal processing Provides comprehensive coverage of wavelets with presentations on waveform design and modeling, wavelet analysis of ECG signals and wavelet filters Reviews features extraction and classification algorithms for multiscale signal and image proce...

  4. Multiscale computing in the exascale era

    NARCIS (Netherlands)

    Alowayyed, S.; Groen, D.; Coveney, P.V.; Hoekstra, A.G.

    We expect that multiscale simulations will be one of the main high performance computing workloads in the exascale era. We propose multiscale computing patterns as a generic vehicle to realise load balanced, fault tolerant and energy aware high performance multiscale computing. Multiscale computing

  5. Multi-scale analysis of lung computed tomography images

    CERN Document Server

    Gori, I; Fantacci, M E; Preite Martinez, A; Retico, A; De Mitri, I; Donadio, S; Fulcheri, C

    2007-01-01

    A computer-aided detection (CAD) system for the identification of lung internal nodules in low-dose multi-detector helical Computed Tomography (CT) images was developed in the framework of the MAGIC-5 project. The three modules of our lung CAD system, a segmentation algorithm for lung internal region identification, a multi-scale dot-enhancement filter for nodule candidate selection and a multi-scale neural technique for false positive finding reduction, are described. The results obtained on a dataset of low-dose and thin-slice CT scans are shown in terms of free response receiver operating characteristic (FROC) curves and discussed.

  6. Automatized spleen segmentation in non-contrast-enhanced MR volume data using subject-specific shape priors

    Science.gov (United States)

    Gloger, Oliver; Tönnies, Klaus; Bülow, Robin; Völzke, Henry

    2017-07-01

    To develop the first fully automated 3D spleen segmentation framework derived from T1-weighted magnetic resonance (MR) imaging data and to verify its performance for spleen delineation and volumetry. This approach addresses the issue of low contrast between the spleen and adjacent tissue in non-contrast-enhanced MR images. Native T1-weighted MR volume data were acquired on a 1.5 T MR system in an epidemiological study. We analyzed random subsamples of MR examinations without pathologies to develop and verify the spleen segmentation framework. The framework is modularized to include different kinds of prior knowledge in the segmentation pipeline. Classification by support vector machines differentiates between five different shape types in computed foreground probability maps and recognizes characteristic spleen regions in axial slices of MR volume data. A spleen-shape space generated by training produces subject-specific prior shape knowledge that is then incorporated into a final 3D level set segmentation method. Individually adapted shape-driven forces, as well as image-driven forces resulting from refined foreground probability maps, steer the level set to successfully segment the spleen. The framework achieves promising segmentation results with mean Dice coefficients of nearly 0.91 and low volumetric mean errors of 6.3%. The presented spleen segmentation approach can delineate spleen tissue in native MR volume data. Several kinds of prior shape knowledge, including subject-specific 3D prior shape knowledge, can be used to guide segmentation processes, achieving promising results.

  7. Infrared variation reduction by simultaneous background suppression and target contrast enhancement for deep convolutional neural network-based automatic target recognition

    Science.gov (United States)

    Kim, Sungho

    2017-06-01

    Automatic target recognition (ATR) is a traditionally challenging problem in military applications because of the wide range of infrared (IR) image variations and the limited number of training images. IR variations are caused by various three-dimensional target poses, noncooperative weather conditions (fog and rain), and difficult target acquisition environments. Recently, deep convolutional neural network-based approaches for RGB images (RGB-CNN) showed breakthrough performance in computer vision problems, such as object detection and classification. The direct use of RGB-CNN to the IR ATR problem fails to work because of the IR database problems (limited database size and IR image variations). An IR variation-reduced deep CNN (IVR-CNN) to cope with the problems is presented. The problem of limited IR database size is solved by a commercial thermal simulator (OKTAL-SE). The second problem of IR variations is mitigated by the proposed shifted ramp function-based intensity transformation. This can suppress the background and enhance the target contrast simultaneously. The experimental results on the synthesized IR images generated by the thermal simulator (OKTAL-SE) validated the feasibility of IVR-CNN for military ATR applications.
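    A minimal NumPy sketch of a shifted-ramp style intensity transformation: values below a lower breakpoint are clipped toward zero (background suppression) and the range up to an upper breakpoint is stretched (target contrast enhancement). The exact functional form and parameters used in the paper are not given in the abstract, so the piecewise-linear ramp and its breakpoints here are assumptions.

        import numpy as np

        def shifted_ramp(img, low, high):
            """Piecewise-linear intensity transform: 0 below `low`, linear ramp on
            [low, high], saturation at 1 above `high`."""
            out = (img.astype(np.float32) - low) / float(high - low)
            return np.clip(out, 0.0, 1.0)

        # usage on an 8-bit IR frame, with illustrative breakpoints
        # enhanced = shifted_ramp(ir_frame, low=90, high=200)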

  8. 3D periodic multiscale TiO_2 architecture: a platform decorated with graphene quantum dots for enhanced photoelectrochemical water splitting

    International Nuclear Information System (INIS)

    Xu, Zhen; Yin, Min; Lu, Linfeng; Chen, Xiaoyuan; Li, Dongdong; Sun, Jing; Ding, Guqiao; Chang, Paichun

    2016-01-01

    Micropatterned TiO_2 nanorods (TiO_2NRs) via three-dimensional (3D) geometry engineering in both microscale and nanoscale decorated with graphene quantum dots (GQDs) have been demonstrated successfully. First, micropillar (MP) and microcave (MC) arrays of anatase TiO_2 films are obtained through the sol–gel based thermal nanoimprinting method. Then they are employed as seed layers in hydrothermal growth to fabricate the 3D micropillar/microcave arrays of rutile TiO_2NRs (NR), which show much-improved photoelectrochemical water-splitting performance than the TiO_2NRs grown on flat seed layer. The zero-dimensional GQDs are sequentially deposited onto the surfaces of the microscale patterned nanorods. Owing to the fast charge separation that resulted from the favorable band alignment of the GQDs and rutile TiO_2, the MP-NR-GQDs electrode achieves a photocurrent density up to 2.92 mA cm"−"2 under simulated one-sun illumination. The incident-photon-to-current-conversion efficiency (IPCE) value up to 72% at 370 nm was achieved on the MP-NR-GQDs electrode, which outperforms the flat-NR counterpart by 69%. The IPCE results also imply that the improved photocurrent mainly benefits from the distinctly enhanced ultraviolet response. The work provides a cost-effective and flexible pathway to develop periodic 3D micropatterned photoelectrodes and is promising for the future deployment of high performance optoelectronic devices. (paper)

  9. Toward combining thematic information with hierarchical multiscale segmentations using tree Markov random field model

    Science.gov (United States)

    Zhang, Xueliang; Xiao, Pengfeng; Feng, Xuezhi

    2017-09-01

    It has been a common idea to produce multiscale segmentations to represent the various geographic objects in high-spatial-resolution remote sensing (HR) images. However, it remains a great challenge to automatically select the proper segmentation scale(s) from the image information alone. In this study, we propose a novel way of information fusion at the object level by combining hierarchical multiscale segmentations with existing thematic information produced by classification or recognition. The tree Markov random field (T-MRF) model is designed for the multiscale combination framework, through which the object type is determined to agree as closely as possible with the existing thematic information. At the same time, the object boundary is jointly determined by the thematic labels and the multiscale segments through the minimization of the energy function. The benefits of the proposed T-MRF combination model include: (1) reducing the dependence on segmentation scale selection when utilizing multiscale segmentations; (2) exploiting the hierarchical context naturally embedded in the multiscale segmentations. HR images of both urban and rural areas are used in the experiments to show the effectiveness of the proposed combination framework in these two respects.

  10. Fully automated segmentation of oncological PET volumes using a combined multiscale and statistical model

    International Nuclear Information System (INIS)

    Montgomery, David W. G.; Amira, Abbes; Zaidi, Habib

    2007-01-01

    The widespread application of positron emission tomography (PET) in clinical oncology has driven this imaging technology into a number of new research and clinical arenas. Increasing numbers of patient scans have led to an urgent need for efficient data handling and the development of new image analysis techniques to aid clinicians in the diagnosis of disease and planning of treatment. Automatic quantitative assessment of metabolic PET data is attractive and will certainly revolutionize the practice of functional imaging since it can lower variability across institutions and may enhance the consistency of image interpretation independent of reader experience. In this paper, a novel automated system for the segmentation of oncological PET data aiming at providing an accurate quantitative analysis tool is proposed. The initial step involves expectation maximization (EM)-based mixture modeling using a k-means clustering procedure, which varies voxel order for initialization. A multiscale Markov model is then used to refine this segmentation by modeling spatial correlations between neighboring image voxels. An experimental study using an anthropomorphic thorax phantom was conducted for quantitative evaluation of the performance of the proposed segmentation algorithm. The comparison of actual tumor volumes to the volumes calculated using different segmentation methodologies including standard k-means, spatial domain Markov Random Field Model (MRFM), and the new multiscale MRFM proposed in this paper showed that the latter dramatically reduces the relative error to less than 8% for small lesions (7 mm radii) and less than 3.5% for larger lesions (9 mm radii). The analysis of the resulting segmentations of clinical oncologic PET data seems to confirm that this methodology shows promise and can successfully segment patient lesions. For problematic images, this technique enables the identification of tumors situated very close to nearby high normal physiologic uptake. The
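    The sketch below covers only the first stage described above: EM-based mixture modeling of voxel intensities with k-means initialization, here via scikit-learn; the multiscale Markov random field refinement is not reproduced, and the two-class setup is an assumption.

        import numpy as np
        from sklearn.mixture import GaussianMixture

        def initial_pet_labels(pet_volume, n_classes=2):
            """Fit a Gaussian mixture to PET voxel intensities (k-means initialization)
            and return a binary mask of the highest-uptake component."""
            voxels = pet_volume.reshape(-1, 1).astype(np.float64)
            gmm = GaussianMixture(n_components=n_classes, covariance_type='full',
                                  init_params='kmeans', random_state=0)
            labels = gmm.fit_predict(voxels)
            lesion_class = int(np.argmax(gmm.means_.ravel()))   # highest mean uptake
            return (labels == lesion_class).reshape(pet_volume.shape)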

  11. Statistical CT noise reduction with multiscale decomposition and penalized weighted least squares in the projection domain

    International Nuclear Information System (INIS)

    Tang Shaojie; Tang Xiangyang

    2012-01-01

    Purposes: The suppression of noise in x-ray computed tomography (CT) imaging is of clinical relevance for diagnostic image quality and the potential for radiation dose saving. Toward this purpose, statistical noise reduction methods in either the image or projection domain have been proposed, which employ a multiscale decomposition to enhance the performance of noise suppression while maintaining image sharpness. Recognizing the advantages of noise suppression in the projection domain, the authors propose a projection domain multiscale penalized weighted least squares (PWLS) method, in which the angular sampling rate is explicitly taken into consideration to account for the possible variation of interview sampling rate in advanced clinical or preclinical applications. Methods: The projection domain multiscale PWLS method is derived by converting an isotropic diffusion partial differential equation in the image domain into the projection domain, wherein a multiscale decomposition is carried out. With adoption of the Markov random field or soft thresholding objective function, the projection domain multiscale PWLS method deals with noise at each scale. To compensate for the degradation in image sharpness caused by the projection domain multiscale PWLS method, an edge enhancement is carried out following the noise reduction. The performance of the proposed method is experimentally evaluated and verified using the projection data simulated by computer and acquired by a CT scanner. Results: The preliminary results show that the proposed projection domain multiscale PWLS method outperforms the projection domain single-scale PWLS method and the image domain multiscale anisotropic diffusion method in noise reduction. In addition, the proposed method can preserve image sharpness very well while the occurrence of “salt-and-pepper” noise and mosaic artifacts can be avoided. Conclusions: Since the interview sampling rate is taken into account in the projection domain

  12. The Magnetospheric Multiscale Magnetometers

    Science.gov (United States)

    Russell, C. T.; Anderson, B. J.; Baumjohann, W.; Bromund, K. R.; Dearborn, D.; Fischer, D.; Le, G.; Leinweber, H. K.; Leneman, D.; Magnes, W.

    2014-01-01

    The success of the Magnetospheric Multiscale mission depends on the accurate measurement of the magnetic field on all four spacecraft. To ensure this success, two independently designed and built fluxgate magnetometers were developed, avoiding single-point failures. The magnetometers were dubbed the digital fluxgate (DFG), which uses an ASIC implementation and was supplied by the Space Research Institute of the Austrian Academy of Sciences, and the analogue magnetometer (AFG), with a more traditional circuit-board design, supplied by the University of California, Los Angeles. A stringent magnetic cleanliness program was executed under the supervision of the Johns Hopkins University's Applied Physics Laboratory. To achieve mission objectives, the calibration determined on the ground will be refined in space to ensure all eight magnetometers are precisely inter-calibrated. Near real-time data plays a key role in the transmission of high-resolution observations stored onboard, so rapid processing of the low-resolution data is required. This article describes these instruments, the magnetic cleanliness program, the instrument pre-launch calibrations, the planned in-flight calibration program, and the information flow that provides the data on the rapid time scale needed for mission success.

  13. Multi-scale simulation for homogenization of cement media

    International Nuclear Information System (INIS)

    Abballe, T.

    2011-01-01

    To solve diffusion problems in cement media, two scales must be taken into account: a fine scale, which describes the micrometer-wide microstructures present in the media, and a work scale, which is usually a few meters long. Direct numerical simulations are almost impossible because of the huge computational resources (memory, CPU time) required to resolve both scales at the same time. To overcome this problem, we present in this thesis multi-scale resolution methods using both Finite Volumes and Finite Elements, along with their efficient implementations. More precisely, we developed a multi-scale simulation tool which uses the SALOME platform to mesh domains and post-process data, and the parallel calculation code MPCube to solve problems. This SALOME/MPCube tool can run multi-scale simulations automatically and efficiently. The parallel structure of computer clusters can be used to dispatch the most time-consuming tasks. We optimized most functions to account for the specificities of cement media. We present numerical experiments on various cement media samples, e.g. mortar and cement paste. From these results, we are able to compute a numerical effective diffusivity of our cement media and to reconstruct a fine-scale solution. (author) [fr]

  14. Learning multiscale and deep representations for classifying remotely sensed imagery

    Science.gov (United States)

    Zhao, Wenzhi; Du, Shihong

    2016-03-01

    It is widely agreed that spatial features can be combined with spectral properties to improve interpretation performance on very-high-resolution (VHR) images of urban areas. However, many existing methods for extracting spatial features can only generate low-level features and consider limited scales, leading to unsatisfactory classification results. In this study, a multiscale convolutional neural network (MCNN) algorithm is presented to learn spatially related deep features for hyperspectral remote sensing imagery classification. Unlike traditional methods for extracting spatial features, the MCNN first transforms the original data sets into a pyramid structure containing spatial information at multiple scales, and then automatically extracts high-level spatial features using multiscale training data sets. Specifically, the MCNN has two merits: (1) high-level spatial features can be effectively learned by using the hierarchical learning structure and (2) the multiscale learning scheme can capture contextual information at different scales. To evaluate the effectiveness of the proposed approach, the MCNN was applied to classify well-known hyperspectral data sets and compared with traditional methods. The experimental results showed a significant increase in classification accuracy, especially for urban areas.

  15. Multi-technology Integration Based on Low-contrast Microscopic Image Enhancement

    Directory of Open Access Journals (Sweden)

    Haoge Ma

    2014-01-01

    Full Text Available Microscopic image enhancement is an important image processing task used to improve the visual quality of images. This paper describes a novel multi-resolution image segmentation algorithm for low depth-of-field (DOF) images. The algorithm is designed to separate a sharply focused object of interest from other foreground or background objects. The algorithm is fully automatic in that all parameters are image independent. A multiscale approach based on high-frequency wavelet coefficients and their statistics is used to perform context-dependent classification of individual blocks of the image. Compared with state-of-the-art algorithms, this new algorithm provides better accuracy at higher speed.
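
    The block classification described above can be illustrated with an undecimated wavelet transform, whose detail sub-bands keep the image size and make block-wise high-frequency statistics easy to pool. This is only a rough sketch of the general idea, not the paper's algorithm; the wavelet, block size and percentile threshold are arbitrary choices.

        import numpy as np
        import pywt

        def focus_mask(image, block=16, wavelet="db2", level=2):
            """Block-wise high-frequency wavelet energy as a crude in-focus/out-of-focus cue."""
            img = image.astype(float)
            energy = np.zeros_like(img)
            for _, details in pywt.swt2(img, wavelet, level=level):  # undecimated transform
                for band in details:                     # cH, cV, cD at each level
                    energy += band ** 2
            h, w = img.shape
            hb, wb = (h // block) * block, (w // block) * block
            blocks = energy[:hb, :wb].reshape(hb // block, block, wb // block, block)
            feat = blocks.mean(axis=(1, 3))              # one energy value per block
            return feat > np.percentile(feat, 60)        # True = likely sharply focused block

        mask = focus_mask(np.random.rand(128, 128))      # image side must be divisible by 2**level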

  16. Multiscale time-dependent density functional theory: Demonstration for plasmons.

    Science.gov (United States)

    Jiang, Jiajian; Abi Mansour, Andrew; Ortoleva, Peter J

    2017-08-07

    Plasmon properties are of significant interest in pure and applied nanoscience. While time-dependent density functional theory (TDDFT) can be used to study plasmons, it becomes impractical for elucidating the effect of size, geometric arrangement, and dimensionality in complex nanosystems. In this study, a new multiscale formalism that addresses this challenge is proposed. This formalism is based on Trotter factorization and the explicit introduction of a coarse-grained (CG) structure function constructed as the Weierstrass transform of the electron wavefunction. This CG structure function is shown to vary on a time scale much longer than that of the wavefunction itself. A multiscale propagator that coevolves both the CG structure function and the electron wavefunction is shown to bring substantial efficiency gains over classical propagators used in TDDFT. This efficiency follows from the enhanced numerical stability of the multiscale method and the larger time steps it permits in a discrete time evolution. The multiscale algorithm is demonstrated for plasmons in a group of interacting sodium nanoparticles (15-240 atoms), and it achieves improved efficiency over TDDFT without significant loss of accuracy or space-time resolution.
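
    The Weierstrass transform mentioned above is simply convolution with a Gaussian. The toy sketch below is not taken from the paper (the 1-D wavefunction and the smoothing width beta are invented); it only shows how the transform produces a much more slowly varying coarse-grained field than the underlying rapidly oscillating wavefunction.

        import numpy as np
        from scipy.ndimage import gaussian_filter1d

        # Toy 1-D "wavefunction": fast oscillation under a slowly varying envelope
        x = np.linspace(-10.0, 10.0, 2001)
        dx = x[1] - x[0]
        psi = np.exp(-x**2 / 8.0) * np.exp(1j * 40.0 * x)

        # Weierstrass transform = Gaussian convolution; it is linear, so apply it
        # to the real and imaginary parts separately
        beta = 1.0                                        # coarse-graining length (arbitrary)
        sigma_pts = beta / dx
        Phi = (gaussian_filter1d(psi.real, sigma_pts)
               + 1j * gaussian_filter1d(psi.imag, sigma_pts))

        # The coarse-grained field varies far more slowly than psi itself
        print(np.max(np.abs(np.gradient(psi, dx))), np.max(np.abs(np.gradient(Phi, dx))))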

  17. Enhancement of the efficiency of the automatic control system to control the thermal load of steam boilers fired with fuels of several types

    Science.gov (United States)

    Ismatkhodzhaev, S. K.; Kuzishchin, V. F.

    2017-05-01

    An automatic control system (ACS) for the thermal load of a drum-type boiler under random fluctuations in the blast-furnace and coke-oven gas consumption rates, with control action on the natural gas consumption, is considered. The system provides for the use of a compensator for the basic disturbance, the blast-furnace gas consumption rate. To enhance the performance of the system, it is proposed to use more accurate second-order-plus-delay mathematical models of the channels of the controlled object, in combination with frequency-domain calculation of the controller parameters, as well as determination of the structure and parameters of the compensator, taking into account the statistical characteristics of the disturbances and using simulation. The statistical characteristics of the random blast-furnace gas consumption signal, based on experimental data, are provided. The random signal is presented in the form of low-frequency (LF) and high-frequency (HF) components. The models of the correlation functions and spectral densities are developed. The article presents the results of calculating the optimal settings of the control loop with the controlled variable in the form of the "heat" signal with a restricted frequency-variation index, using three variants of the control performance criteria, viz. the linear and quadratic integral indices under a step disturbance and the control error variance under a random disturbance in the blast-furnace gas consumption rate. It is recommended to select a compensator designed as a series connection of two parts, one of which corresponds to the operator inverse to the transfer function of the PI controller, i.e., a real (lagged) differentiating element. This facilitates the realization of the second part of the compensator from the invariance condition, similar to transmitting the compensating signal to the object input. The results of simulation under a random disturbance in the blast-furnace gas consumption are reported.
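
    The compensation scheme described above can be illustrated with a toy discrete-time simulation: a PI loop on the "heat" signal plus a static feedforward term that cancels the measured blast-furnace gas disturbance through the fuel channel. All plant gains, time constants, delays and controller settings below are invented for illustration and are not the values obtained in the article.

        import numpy as np

        dt, T = 1.0, 2000
        Ku, Tu, Lu = 1.0, 60.0, 10       # gain, time constant [s], delay [steps]: fuel -> heat
        Kd, Td, Ld = 0.6, 40.0, 10       # the same for the blast-furnace gas disturbance
        Kp, Ki = 2.0, 0.05               # PI settings (illustrative only)

        rng = np.random.default_rng(0)
        d = np.cumsum(rng.normal(0, 0.02, T))        # slowly drifting disturbance signal
        y = np.zeros(T); u = np.zeros(T)
        yu = yd = integ = 0.0

        def lag(state, inp, K, tau):
            """One Euler step of a first-order lag: tau*x' + x = K*inp."""
            return state + dt * (K * inp - state) / tau

        for k in range(1, T):
            e = 0.0 - y[k - 1]                       # setpoint 0 for the heat signal
            integ += Ki * e * dt
            u_ff = -(Kd / Ku) * d[k - 1]             # static feedforward from the disturbance
            u[k] = Kp * e + integ + u_ff             # (a lead-lag term would refine this)
            yu = lag(yu, u[k - Lu] if k >= Lu else 0.0, Ku, Tu)
            yd = lag(yd, d[k - Ld] if k >= Ld else 0.0, Kd, Td)
            y[k] = yu + yd

        print("heat-signal variance with feedforward:", np.var(y[500:]))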

  18. An automatic respiratory gating method for the improvement of microcirculation evaluation: application to contrast-enhanced ultrasound studies of focal liver lesions

    Energy Technology Data Exchange (ETDEWEB)

    Mule, S; Kachenoura, N; Lucidarme, O; De Oliveira, A; Pellot-Barakat, C; Herment, A; Frouin, F, E-mail: Sebastien.Mule@gmail.com [INSERM UMR-S 678, 75634 Paris Cedex 13 (France)

    2011-08-21

    Contrast-enhanced ultrasound (CEUS), with the recent development of both contrast-specific imaging modalities and microbubble-based contrast agents, allows noninvasive quantification of microcirculation in vivo. Nevertheless, functional parameters obtained by modeling contrast uptake kinetics could be impaired by respiratory motion. Accordingly, we developed an automatic respiratory gating method and tested it on 35 CEUS hepatic datasets with focal lesions. Each dataset included fundamental mode and cadence contrast pulse sequencing (CPS) mode sequences acquired simultaneously. The developed method consisted of (1) the estimation of the respiratory kinetics as a linear combination of the first components provided by a principal components analysis constrained by prior knowledge of the respiratory rate in the frequency domain, and (2) the automated generation of two respiratory-gated subsequences from the CPS mode sequence by detecting end-of-inspiration and end-of-expiration phases from the respiratory kinetics. The fundamental mode enabled a more reliable estimation of the respiratory kinetics than the CPS mode. The k-means algorithm was applied to both the original CPS mode sequences and the respiratory-gated subsequences, resulting in clustering maps and associated mean kinetics. Our respiratory gating process allowed better superimposition of manually drawn lesion contours on k-means clustering maps as well as substantial improvement of the quality of contrast uptake kinetics. While the quality of maps and kinetics was satisfactory in only 11/35 datasets before gating, it was satisfactory in 34/35 datasets after gating. Moreover, noise amplitude estimated within the delineated lesions was reduced from 62 ± 21 to 40 ± 10 (p < 0.01) after gating. These findings were supported by the low residual horizontal (0.44 ± 0.29 mm) and vertical (0.15 ± 0.16 mm) shifts found during manual motion correction of each respiratory-gated subsequence. The developed
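
    The first step of the gating method (estimating a respiratory trace from the leading principal components, constrained to a plausible respiratory frequency band) can be sketched as follows on synthetic frames. The frame generator, band limits and peak-picking step are assumptions for illustration, not the authors' implementation.

        import numpy as np
        from sklearn.decomposition import PCA
        from scipy.signal import find_peaks

        rng = np.random.default_rng(1)
        n, fps = 300, 10.0
        t = np.arange(n) / fps
        breath = np.sin(2 * np.pi * 0.25 * t)                 # ~15 cycles per minute
        frames = rng.normal(0, 0.05, (n, 32, 32))             # stand-in fundamental-mode frames
        frames[:, 8:24, :] += breath[:, None, None]           # band of tissue moving with breathing

        X = frames.reshape(n, -1)
        pc = PCA(n_components=3).fit_transform(X)             # leading temporal components

        # keep the component whose spectrum peaks in a plausible respiratory band (0.1-0.5 Hz)
        freqs = np.fft.rfftfreq(n, d=1 / fps)
        band = (freqs > 0.1) & (freqs < 0.5)
        power = [np.abs(np.fft.rfft(pc[:, i]))[band].max() for i in range(pc.shape[1])]
        kin = pc[:, int(np.argmax(power))]                    # estimated respiratory kinetics

        # gate the contrast sequence near end-inspiration / end-expiration extrema
        peaks, _ = find_peaks(kin, distance=int(2 * fps))
        troughs, _ = find_peaks(-kin, distance=int(2 * fps))
        print(len(peaks), len(troughs))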

  19. Heat and mass transfer intensification and shape optimization a multi-scale approach

    CERN Document Server

    2013-01-01

    Is heat and mass transfer intensification defined as a new paradigm of process engineering, or is it just a common and old idea, renamed and given a contemporary flavour? Where might intensification occur? How can intensification be achieved? How does the shape optimization of thermal and fluidic devices lead to intensified heat and mass transfer? To answer these questions, Heat & Mass Transfer Intensification and Shape Optimization: A Multi-scale Approach clarifies the definition of intensification by highlighting the potential role of multi-scale structures, the specific interfacial area, the distribution of driving force, the modes of energy supply and the temporal aspects of processes. A reflection on the methods of process intensification or heat and mass transfer enhancement in multi-scale structures is provided, including porous media, heat exchangers, fluid distributors, mixers and reactors. A multi-scale approach to achieve intensification and shape optimization is developed and clearly expla...

  20. Single Image Super-Resolution Based on Multi-Scale Competitive Convolutional Neural Network.

    Science.gov (United States)

    Du, Xiaofeng; Qu, Xiaobo; He, Yifan; Guo, Di

    2018-03-06

    Deep convolutional neural networks (CNNs) are successful in single-image super-resolution. Traditional CNNs are limited in exploiting multi-scale contextual information for image reconstruction due to the fixed convolutional kernels in their building modules. To restore various scales of image details, we enhance the multi-scale inference capability of CNNs by introducing competition among multi-scale convolutional filters, and build up a shallow network under limited computational resources. The proposed network has the following two advantages: (1) the multi-scale convolutional kernels provide multiple contexts for image super-resolution, and (2) the maximum competitive strategy adaptively chooses the optimal scale of information for image reconstruction. Our experimental results on image super-resolution show that the proposed network outperforms state-of-the-art methods.

  1. An Automatic Cloud Detection Method for ZY-3 Satellite

    Directory of Open Access Journals (Sweden)

    CHEN Zhenwei

    2015-03-01

    Full Text Available Automatic cloud detection for optical satellite remote sensing images is a significant step in the production system of satellite products. For the browse images cataloged by the ZY-3 satellite, a tree discriminant structure is adopted to carry out cloud detection. The image was divided into sub-images and their features were extracted to perform classification between clouds and ground. However, due to the high complexity of clouds and surfaces and the low resolution of browse images, traditional classification algorithms based on image features have serious limitations. In view of this problem, a prior enhancement of the original sub-images before classification is put forward in this paper to widen the texture difference between clouds and surfaces. Afterwards, using the second moment and first difference of the images, the feature vectors were extended in multi-scale space, and then the cloud proportion in the image was estimated through comprehensive analysis. The presented cloud detection algorithm has already been applied to the ZY-3 application system project, and the practical experiment results indicate that this algorithm is capable of improving the accuracy of cloud detection significantly.
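
    A rough illustration of multi-scale block features of the kind mentioned above (block mean, second moment and mean absolute first difference, computed on a resampled image pyramid) is sketched below. The block size, scales and feature set are assumptions; the actual ZY-3 processing chain is not reproduced here.

        import numpy as np
        from scipy.ndimage import zoom

        def block_features(img, block=32, scales=(1.0, 0.5, 0.25)):
            """Per-block mean, variance (second central moment) and mean absolute first
            difference at several scales -- crude texture cues for cloud vs. ground."""
            img = img.astype(float)
            feats = []
            for s in scales:
                im = zoom(img, s, order=1)
                b = int(round(block * s))
                h, w = (im.shape[0] // b) * b, (im.shape[1] // b) * b
                diff = np.abs(np.diff(im, axis=1, prepend=im[:, :1]))
                def blocks(a):
                    return a[:h, :w].reshape(h // b, b, w // b, b).swapaxes(1, 2)
                feats += [blocks(im).mean(axis=(2, 3)),
                          blocks(im).var(axis=(2, 3)),
                          blocks(diff).mean(axis=(2, 3))]
            return np.stack(feats, axis=-1)           # (rows, cols, 3 * number of scales)

        feats = block_features(np.random.rand(256, 256))
        print(feats.shape)                            # (8, 8, 9) with the defaults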

  2. Fast and Automatic Ultrasound Simulation from CT Images

    Directory of Open Access Journals (Sweden)

    Weijian Cong

    2013-01-01

    Full Text Available Ultrasound is currently widely used in clinical diagnosis because of its fast and safe imaging principles. As the anatomical structures present in an ultrasound image are not as clear as in CT or MRI, physicians usually need advanced clinical knowledge and experience to distinguish diseased tissues. Fast simulation of ultrasound provides a cost-effective way to train physicians and to correlate ultrasound with the underlying anatomical structures. In this paper, a novel method is proposed for fast simulation of ultrasound from a CT image. A multiscale method is developed to enhance tubular structures so as to simulate the blood flow. The acoustic response of common tissues is generated by weighted integration of adjacent regions on the ultrasound propagation path in the CT image, from which parameters, including attenuation, reflection, scattering, and noise, are estimated simultaneously. The thin-plate spline interpolation method is employed to transform the simulation image between polar and rectangular coordinate systems. The Kaiser window function is utilized to produce the integration and radial blurring effects of multiple transducer elements. Experimental results show that the developed method is very fast and effective, allowing realistic ultrasound images to be generated quickly. Given that the developed method is fully automatic, it can be utilized for ultrasound-guided navigation in clinical practice and for training purposes.

  3. Multiscale scenarios for nature futures

    CSIR Research Space (South Africa)

    Rosa, IMD

    2017-09-01

    Full Text Available Nature Ecology & Evolution, vol. 1: 1416-1419. Multiscale scenarios for nature futures. Rosa IMD, Pereira HM, Ferrier S, Alkemade R, Acosta LA, Akcakaya HR, den Belder E, Fazel AM, Fujimori S, Sitas NE. ABSTRACT: Targets for human development are increasingly...

  4. Multiscale mechanics of dynamical metamaterials

    NARCIS (Netherlands)

    Geers, M.G.D.; Kouznetsova, V.; Sridhar, A.; Krushynska, A.; Kleiber, M.; Burczynski, T.; Wilde, K.; Gorski, J.; Winkelmann, K.; Smakosz, L.

    2016-01-01

    This contribution focuses on the computational multi-scale solution of wave propagation phenomena in dynamic metamaterials. Taking the Bloch-Floquet solution for the standard elastic case as a point of departure, an extended scheme is presented to solve for heterogeneous visco-elastic materials. The

  5. Multiscale Thermohydrologic Model

    Energy Technology Data Exchange (ETDEWEB)

    T. Buscheck

    2004-10-12

    The purpose of the multiscale thermohydrologic model (MSTHM) is to predict the possible range of thermal-hydrologic conditions, resulting from uncertainty and variability, in the repository emplacement drifts, including the invert, and in the adjoining host rock for the repository at Yucca Mountain. Thus, the goal is to predict the range of possible thermal-hydrologic conditions across the repository; this is quite different from predicting a single expected thermal-hydrologic response. The MSTHM calculates the following thermal-hydrologic parameters: temperature, relative humidity, liquid-phase saturation, evaporation rate, air-mass fraction, gas-phase pressure, capillary pressure, and liquid- and gas-phase fluxes (Table 1-1). These thermal-hydrologic parameters are required to support ''Total System Performance Assessment (TSPA) Model/Analysis for the License Application'' (BSC 2004 [DIRS 168504]). The thermal-hydrologic parameters are determined as a function of position along each of the emplacement drifts and as a function of waste package type. These parameters are determined at various reference locations within the emplacement drifts, including the waste package and drip-shield surfaces and in the invert. The parameters are also determined at various defined locations in the adjoining host rock. The MSTHM uses data obtained from the data tracking numbers (DTNs) listed in Table 4.1-1. The majority of those DTNs were generated from the following analyses and model reports: (1) ''UZ Flow Model and Submodels'' (BSC 2004 [DIRS 169861]); (2) ''Development of Numerical Grids for UZ Flow and Transport Modeling'' (BSC 2004); (3) ''Calibrated Properties Model'' (BSC 2004 [DIRS 169857]); (4) ''Thermal Conductivity of the Potential Repository Horizon'' (BSC 2004 [DIRS 169854]); (5) ''Thermal Conductivity of the Non-Repository Lithostratigraphic Layers

  6. Multi-scale Mexican spotted owl (Strix occidentalis lucida) nest/roost habitat selection in Arizona and a comparison with single-scale modeling results

    Science.gov (United States)

    Brad C. Timm; Kevin McGarigal; Samuel A. Cushman; Joseph L. Ganey

    2016-01-01

    Future habitat selection studies will benefit from taking a multi-scale approach. In addition to potentially providing increased explanatory power and predictive capacity, multi-scale habitat models enhance our understanding of the scales at which species respond to their environment, which is critical knowledge required to implement effective...

  7. Enhancing automatic closed-loop glucose control in type 1 diabetes with an adaptive meal bolus calculator - in silico evaluation under intra-day variability.

    Science.gov (United States)

    Herrero, Pau; Bondia, Jorge; Adewuyi, Oloruntoba; Pesl, Peter; El-Sharkawy, Mohamed; Reddy, Monika; Toumazou, Chris; Oliver, Nick; Georgiou, Pantelis

    2017-07-01

    Current prototypes of closed-loop systems for glucose control in type 1 diabetes mellitus, also referred to as artificial pancreas systems, require a pre-meal insulin bolus to compensate for delays in subcutaneous insulin absorption in order to avoid initial post-prandial hyperglycemia. Computing such a meal bolus is a challenging task due to the high intra-subject variability of insulin requirements. Most closed-loop systems compute this pre-meal insulin dose by a standard bolus calculation, as is commonly found in insulin pumps. However, the performance of these calculators is limited due to a lack of adaptiveness in the face of dynamic changes in insulin requirements. Despite some initial attempts to include adaptation within these calculators, challenges remain. In this paper we present a new technique to automatically adapt the meal-priming bolus within an artificial pancreas. The technique consists of using a novel adaptive bolus calculator based on Case-Based Reasoning and Run-To-Run control, within a closed-loop controller. Coordination between the adaptive bolus calculator and the controller was required to achieve the desired performance. For testing purposes, the clinically validated Imperial College Artificial Pancreas controller was employed. The proposed system was evaluated against itself but without bolus adaptation. The UVa-Padova T1DM v3.2 system was used to carry out a three-month in silico study on 11 adult and 11 adolescent virtual subjects taking into account inter- and intra-subject variability of insulin requirements and uncertainty in carbohydrate intake. Overall, the closed-loop controller enhanced by an adaptive bolus calculator improves glycemic control when compared to its non-adaptive counterpart. In particular, the following statistically significant improvements were found (non-adaptive vs. adaptive). Adults: mean glucose 142.2 ± 9.4 vs. 131.8 ± 4.2 mg/dl; percentage time in target [70, 180] mg/dl, 82.0 ± 7.0 vs. 89.5 ± 4
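
    The run-to-run part of such an adaptive calculator can be caricatured with a very small sketch: after each day, nudge the carbohydrate ratio so that the postprandial peak moves toward a target. The update law, gains, limits and glucose numbers below are purely illustrative and are not the Case-Based Reasoning scheme evaluated in the paper.

        # A minimal run-to-run sketch (not the authors' controller): after each day, nudge
        # the meal carbohydrate ratio CR so the postprandial peak approaches a target.
        def r2r_update(cr, postprandial_peak, target=140.0, gain=0.01, lo=5.0, hi=25.0):
            """Run-to-run law: if the peak was too high, give more insulin (smaller CR)."""
            error = postprandial_peak - target          # mg/dl above or below target
            cr_new = cr * (1.0 - gain * error / 100.0)  # gentle multiplicative correction
            return min(max(cr_new, lo), hi)             # keep CR in a plausible range [g/U]

        cr = 12.0                                       # grams of carbohydrate per unit
        for day, peak in enumerate([210, 195, 176, 162, 150], start=1):
            bolus = 60.0 / cr                           # 60 g meal -> units of insulin
            cr = r2r_update(cr, peak)
            print(f"day {day}: bolus {bolus:.1f} U, next CR {cr:.1f} g/U")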

  8. MULTISCALE THERMOHYDROLOGIC MODEL

    International Nuclear Information System (INIS)

    T. Buscheck

    2005-01-01

    The intended purpose of the multiscale thermohydrologic model (MSTHM) is to predict the possible range of thermal-hydrologic conditions, resulting from uncertainty and variability, in the repository emplacement drifts, including the invert, and in the adjoining host rock for the repository at Yucca Mountain. The goal of the MSTHM is to predict a reasonable range of possible thermal-hydrologic conditions within the emplacement drift. To be reasonable, this range includes the influence of waste-package-to-waste-package heat output variability relevant to the license application design, as well as the influence of uncertainty and variability in the geologic and hydrologic conditions relevant to predicting the thermal-hydrologic response in emplacement drifts. This goal is quite different from the goal of a model to predict a single expected thermal-hydrologic response. As a result, the development and validation of the MSTHM and the associated analyses using this model are focused on the goal of predicting a reasonable range of thermal-hydrologic conditions resulting from parametric uncertainty and waste-package-to-waste-package heat-output variability. Thermal-hydrologic conditions within emplacement drifts depend primarily on thermal-hydrologic conditions in the host rock at the drift wall and on the temperature difference between the drift wall and the drip-shield and waste-package surfaces. Thus, the ability to predict a reasonable range of relevant in-drift MSTHM output parameters (e.g., temperature and relative humidity) is based on valid predictions of thermal-hydrologic processes in the host rock, as well as valid predictions of heat-transfer processes between the drift wall and the drip-shield and waste-package surfaces. Because the invert contains crushed gravel derived from the host rock, the invert is, in effect, an extension of the host rock, with thermal and hydrologic properties that have been modified by virtue of the crushing (and the resulting

  9. The Magnetospheric Multiscale Mission

    Science.gov (United States)

    Burch, James

    Magnetospheric Multiscale (MMS), a NASA four-spacecraft mission scheduled for launch in November 2014, will investigate magnetic reconnection in the boundary regions of the Earth’s magnetosphere, particularly along its dayside boundary with the solar wind and the neutral sheet in the magnetic tail. Among the important questions about reconnection that will be addressed are the following: Under what conditions can magnetic-field energy be converted to plasma energy by the annihilation of magnetic field through reconnection? How does reconnection vary with time, and what factors influence its temporal behavior? What microscale processes are responsible for reconnection? What determines the rate of reconnection? In order to accomplish its goals the MMS spacecraft must probe both those regions in which the magnetic fields are very nearly antiparallel and regions where a significant guide field exists. From previous missions we know the approximate speeds with which reconnection layers move through space to be from tens to hundreds of km/s. For electron skin depths of 5 to 10 km, the full 3D electron population (10 eV to above 20 keV) has to be sampled at rates greater than 10/s. The MMS Fast-Plasma Instrument (FPI) will sample electrons at greater than 30/s. Because the ion skin depth is larger, FPI will make full ion measurements at rates of greater than 6/s. 3D E-field measurements will be made by MMS once every ms. MMS will use an Active Spacecraft Potential Control device (ASPOC), which emits indium ions to neutralize the photoelectron current and keep the spacecraft from charging to more than +4 V. Because ion dynamics in Hall reconnection depend sensitively on ion mass, MMS includes a new-generation Hot Plasma Composition Analyzer (HPCA) that corrects problems with high proton fluxes that have prevented accurate ion-composition measurements near the dayside magnetospheric boundary. Finally, Energetic Particle Detector (EPD) measurements of electrons and

  10. MULTISCALE THERMOHYDROLOGIC MODEL

    Energy Technology Data Exchange (ETDEWEB)

    T. Buscheck

    2005-07-07

    The intended purpose of the multiscale thermohydrologic model (MSTHM) is to predict the possible range of thermal-hydrologic conditions, resulting from uncertainty and variability, in the repository emplacement drifts, including the invert, and in the adjoining host rock for the repository at Yucca Mountain. The goal of the MSTHM is to predict a reasonable range of possible thermal-hydrologic conditions within the emplacement drift. To be reasonable, this range includes the influence of waste-package-to-waste-package heat output variability relevant to the license application design, as well as the influence of uncertainty and variability in the geologic and hydrologic conditions relevant to predicting the thermal-hydrologic response in emplacement drifts. This goal is quite different from the goal of a model to predict a single expected thermal-hydrologic response. As a result, the development and validation of the MSTHM and the associated analyses using this model are focused on the goal of predicting a reasonable range of thermal-hydrologic conditions resulting from parametric uncertainty and waste-package-to-waste-package heat-output variability. Thermal-hydrologic conditions within emplacement drifts depend primarily on thermal-hydrologic conditions in the host rock at the drift wall and on the temperature difference between the drift wall and the drip-shield and waste-package surfaces. Thus, the ability to predict a reasonable range of relevant in-drift MSTHM output parameters (e.g., temperature and relative humidity) is based on valid predictions of thermal-hydrologic processes in the host rock, as well as valid predictions of heat-transfer processes between the drift wall and the drip-shield and waste-package surfaces. Because the invert contains crushed gravel derived from the host rock, the invert is, in effect, an extension of the host rock, with thermal and hydrologic properties that have been modified by virtue of the crushing (and the resulting

  11. Semi-automatic fluoroscope

    International Nuclear Information System (INIS)

    Tarpley, M.W.

    1976-10-01

    Extruded aluminum-clad uranium-aluminum alloy fuel tubes must pass many quality control tests before irradiation in Savannah River Plant nuclear reactors. Nondestructive test equipment has been built to automatically detect high and low density areas in the fuel tubes using x-ray absorption techniques with a video analysis system. The equipment detects areas as small as 0.060-in. dia with 2 percent penetrameter sensitivity. These areas are graded as to size and density by an operator using electronic gages. Video image enhancement techniques permit inspection of ribbed cylindrical tubes and make possible the testing of areas under the ribs. Operation of the testing machine, the special low light level television camera, and analysis and enhancement techniques are discussed

  12. Integrated Multiscale Latent Variable Regression and Application to Distillation Columns

    Directory of Open Access Journals (Sweden)

    Muddu Madakyaru

    2013-01-01

    Full Text Available Proper control of distillation columns requires estimating some key variables that are challenging to measure online (such as compositions), which are usually estimated using inferential models. Commonly used inferential models include latent variable regression (LVR) techniques, such as principal component regression (PCR), partial least squares (PLS), and regularized canonical correlation analysis (RCCA). Unfortunately, measured practical data are usually contaminated with errors, which degrade the prediction abilities of inferential models. Therefore, noisy measurements need to be filtered to enhance the prediction accuracy of these models. Multiscale filtering has been shown to be a powerful feature extraction tool. In this work, the advantages of multiscale filtering are utilized to enhance the prediction accuracy of LVR models by developing an integrated multiscale LVR (IMSLVR) modeling algorithm that integrates modeling and feature extraction. The idea behind the IMSLVR modeling algorithm is to filter the process data at different decomposition levels, model the filtered data from each level, and then select the LVR model that optimizes a model selection criterion. The performance of the developed IMSLVR algorithm is illustrated using three examples, one using synthetic data, one using simulated distillation column data, and one using experimental packed bed distillation column data. All examples clearly demonstrate the effectiveness of the IMSLVR algorithm over the conventional methods.
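
    The loop described above (filter the data at several wavelet decomposition levels, fit an LVR model to each filtered set, keep the best one) can be sketched with PyWavelets and scikit-learn as below. This is a simplified stand-in for the IMSLVR algorithm: the wavelet, the hold-out model selection and the use of PLS as the only LVR technique are assumptions.

        import numpy as np
        import pywt
        from sklearn.cross_decomposition import PLSRegression

        def lowpass(x, level, wavelet="db4"):
            """Keep only the wavelet approximation at the given decomposition level."""
            if level == 0:
                return np.asarray(x, float)
            coeffs = pywt.wavedec(x, wavelet, level=level, mode="periodization")
            coeffs[1:] = [np.zeros_like(c) for c in coeffs[1:]]
            return pywt.waverec(coeffs, wavelet, mode="periodization")[: len(x)]

        def imslvr_like(X, y, max_level=4, n_components=3, train_frac=0.7):
            """Fit a PLS model at each filtering level; keep the level with the best hold-out MSE."""
            split, best = int(train_frac * len(y)), None
            for lev in range(max_level + 1):
                Xf = np.column_stack([lowpass(X[:, j], lev) for j in range(X.shape[1])])
                yf = lowpass(y, lev)
                model = PLSRegression(n_components=n_components).fit(Xf[:split], yf[:split])
                mse = np.mean((yf[split:] - model.predict(Xf[split:]).ravel()) ** 2)
                if best is None or mse < best[0]:
                    best = (mse, lev, model)
            return best                               # (hold-out MSE, chosen level, fitted model)

        rng = np.random.default_rng(0)
        X = rng.normal(size=(256, 6))
        y = X @ rng.normal(size=6) + rng.normal(0, 0.5, size=256)
        mse, level, model = imslvr_like(X, y)
        print(level, round(mse, 3))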

  13. On enhancing energy harvesting performance of the photovoltaic modules using an automatic cooling system and assessing its economic benefits of mitigating greenhouse effects on the environment

    Science.gov (United States)

    Wang, Jen-Cheng; Liao, Min-Sheng; Lee, Yeun-Chung; Liu, Cheng-Yue; Kuo, Kun-Chang; Chou, Cheng-Ying; Huang, Chen-Kang; Jiang, Joe-Air

    2018-02-01

    The performance of photovoltaic (PV) modules under outdoor operation is greatly affected by their location and environmental conditions. The temperature of a PV module gradually increases as it is exposed to solar irradiation, resulting in degradation of its electrical characteristics and power generation efficiency. This study adopts wireless sensor network (WSN) technology to develop an automatic water-cooling system for PV modules in order to improve their power generation efficiency. A temperature estimation method is developed to quickly and accurately estimate the PV module temperatures based on weather data provided by the WSN monitoring system. Further, an estimation method is also proposed for evaluation of the electrical characteristics and output power of the PV modules, which is performed remotely via a control platform. The automatic WSN-based water-cooling mechanism is designed to prevent the PV module temperature from reaching saturation. With each PV module equipped with the WSN-based cooling system, the ambient conditions are monitored automatically so that the temperature of the PV module is controlled by sprinkling water on the panel surface. The field-test experiment results show an increase in the energy harvested by the PV modules of approximately 17.75% when using the proposed WSN-based cooling system.
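
    As a back-of-envelope companion to the abstract, the sketch below estimates cell temperature with the standard NOCT formula, derates module power with a linear temperature coefficient, and uses a temperature threshold to decide when sprinkling would pay off. The module parameters, the threshold and the assumption that sprinkling holds the cell near ambient are illustrative; this is not the WSN-based estimator developed in the study.

        def cell_temperature(t_amb_c, irradiance_w_m2, noct_c=45.0):
            """Standard NOCT model: T_cell = T_amb + (NOCT - 20) / 800 * G."""
            return t_amb_c + (noct_c - 20.0) / 800.0 * irradiance_w_m2

        def module_power(irradiance_w_m2, t_cell_c, p_stc_w=250.0, gamma=-0.0045):
            """Linear derating around standard test conditions (25 degC, 1000 W/m^2)."""
            return p_stc_w * irradiance_w_m2 / 1000.0 * (1.0 + gamma * (t_cell_c - 25.0))

        t_amb, g = 32.0, 900.0                         # hypothetical ambient conditions
        t_cell = cell_temperature(t_amb, g)
        p_hot = module_power(g, t_cell)
        p_cooled = module_power(g, t_amb + 5.0)        # assume sprinkling keeps the cell near ambient
        start_sprinkler = t_cell > 45.0                # simple trigger threshold
        print(f"T_cell {t_cell:.1f} C, power {p_hot:.0f} W -> {p_cooled:.0f} W after cooling")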

  14. Numerical Analysis of Multiscale Computations

    CERN Document Server

    Engquist, Björn; Tsai, Yen-Hsi R

    2012-01-01

    This book is a snapshot of current research in multiscale modeling, computations and applications. It covers fundamental mathematical theory, numerical algorithms as well as practical computational advice for analysing single and multiphysics models containing a variety of scales in time and space. Complex fluids, porous media flow and oscillatory dynamical systems are treated in some extra depth, as well as tools like analytical and numerical homogenization, and fast multipole method.

  15. Community effort endorsing multiscale modelling, multiscale data science and multiscale computing for systems medicine.

    Science.gov (United States)

    Zanin, Massimiliano; Chorbev, Ivan; Stres, Blaz; Stalidzans, Egils; Vera, Julio; Tieri, Paolo; Castiglione, Filippo; Groen, Derek; Zheng, Huiru; Baumbach, Jan; Schmid, Johannes A; Basilio, José; Klimek, Peter; Debeljak, Nataša; Rozman, Damjana; Schmidt, Harald H H W

    2017-12-05

    Systems medicine holds many promises, but has so far provided only a limited number of proofs of principle. To address this road block, possible barriers and challenges of translating systems medicine into clinical practice need to be identified and addressed. The members of the European Cooperation in Science and Technology (COST) Action CA15120 Open Multiscale Systems Medicine (OpenMultiMed) wish to engage the scientific community of systems medicine and multiscale modelling, data science and computing, to provide their feedback in a structured manner. This will result in follow-up white papers and open access resources to accelerate the clinical translation of systems medicine. © The Author 2017. Published by Oxford University Press.

  16. Differential Geometry Based Multiscale Models

    Science.gov (United States)

    Wei, Guo-Wei

    2010-01-01

    Large chemical and biological systems such as fuel cells, ion channels, molecular motors, and viruses are of great importance to the scientific community and public health. Typically, these complex systems in conjunction with their aquatic environment pose a fabulous challenge to theoretical description, simulation, and prediction. In this work, we propose a differential geometry based multiscale paradigm to model complex macromolecular systems, and to put macroscopic and microscopic descriptions on an equal footing. In our approach, the differential geometry theory of surfaces and geometric measure theory are employed as a natural means to couple the macroscopic continuum mechanical description of the aquatic environment with the microscopic discrete atomistic description of the macromolecule. Multiscale free energy functionals, or multiscale action functionals are constructed as a unified framework to derive the governing equations for the dynamics of different scales and different descriptions. Two types of aqueous macromolecular complexes, ones that are near equilibrium and others that are far from equilibrium, are considered in our formulations. We show that generalized Navier–Stokes equations for the fluid dynamics, generalized Poisson equations or generalized Poisson–Boltzmann equations for electrostatic interactions, and Newton's equation for the molecular dynamics can be derived by the least action principle. These equations are coupled through the continuum-discrete interface whose dynamics is governed by potential driven geometric flows. Comparison is given to classical descriptions of the fluid and electrostatic interactions without geometric flow based micro-macro interfaces. The detailed balance of forces is emphasized in the present work. We further extend the proposed multiscale paradigm to micro-macro analysis of electrohydrodynamics, electrophoresis, fuel cells, and ion channels. We derive generalized Poisson–Nernst–Planck equations that

  17. Differential geometry based multiscale models.

    Science.gov (United States)

    Wei, Guo-Wei

    2010-08-01

    Large chemical and biological systems such as fuel cells, ion channels, molecular motors, and viruses are of great importance to the scientific community and public health. Typically, these complex systems in conjunction with their aquatic environment pose a fabulous challenge to theoretical description, simulation, and prediction. In this work, we propose a differential geometry based multiscale paradigm to model complex macromolecular systems, and to put macroscopic and microscopic descriptions on an equal footing. In our approach, the differential geometry theory of surfaces and geometric measure theory are employed as a natural means to couple the macroscopic continuum mechanical description of the aquatic environment with the microscopic discrete atomistic description of the macromolecule. Multiscale free energy functionals, or multiscale action functionals are constructed as a unified framework to derive the governing equations for the dynamics of different scales and different descriptions. Two types of aqueous macromolecular complexes, ones that are near equilibrium and others that are far from equilibrium, are considered in our formulations. We show that generalized Navier-Stokes equations for the fluid dynamics, generalized Poisson equations or generalized Poisson-Boltzmann equations for electrostatic interactions, and Newton's equation for the molecular dynamics can be derived by the least action principle. These equations are coupled through the continuum-discrete interface whose dynamics is governed by potential driven geometric flows. Comparison is given to classical descriptions of the fluid and electrostatic interactions without geometric flow based micro-macro interfaces. The detailed balance of forces is emphasized in the present work. We further extend the proposed multiscale paradigm to micro-macro analysis of electrohydrodynamics, electrophoresis, fuel cells, and ion channels. We derive generalized Poisson-Nernst-Planck equations that are

  18. Multiscale Simulation of Breaking Wave Impacts

    DEFF Research Database (Denmark)

    Lindberg, Ole

    compare reasonably well. The incompressible and inviscid ALE-WLS model is coupled with the potential flow model of Engsig-Karup et al. [2009], to perform multiscale calculation of breaking wave impacts on a vertical breakwater. The potential flow model provides accurate calculation of the wave...... with a potential flow model to provide multiscale calculation of forces from breaking wave impacts on structures....

  19. Mixed Generalized Multiscale Finite Element Methods and Applications

    KAUST Repository

    Chung, Eric T.

    2015-03-03

    In this paper, we present a mixed generalized multiscale finite element method (GMsFEM) for solving flow in heterogeneous media. Our approach constructs multiscale basis functions following a GMsFEM framework and couples these basis functions using a mixed finite element method, which allows us to obtain a mass conservative velocity field. To construct multiscale basis functions for each coarse edge, we design a snapshot space that consists of fine-scale velocity fields supported in a union of two coarse regions that share the common interface. The snapshot vectors have zero Neumann boundary conditions on the outer boundaries, and we prescribe their values on the common interface. We describe several spectral decompositions in the snapshot space motivated by the analysis. In the paper, we also study oversampling approaches that enhance the accuracy of mixed GMsFEM. A main idea of oversampling techniques is to introduce a small dimensional snapshot space. We present numerical results for two-phase flow and transport, without updating basis functions in time. Our numerical results show that one can achieve good accuracy with a few basis functions per coarse edge if one selects appropriate offline spaces. © 2015 Society for Industrial and Applied Mathematics.

  20. Multivariate refined composite multiscale entropy analysis

    International Nuclear Information System (INIS)

    Humeau-Heurtier, Anne

    2016-01-01

    Multiscale entropy (MSE) has become a prevailing method to quantify the complexity of signals. MSE relies on sample entropy. However, MSE may yield imprecise complexity estimation at large scales, because sample entropy does not give precise estimation of entropy when short signals are processed. A refined composite multiscale entropy (RCMSE) has therefore recently been proposed. Nevertheless, RCMSE is for univariate signals only. The simultaneous analysis of multi-channel (multivariate) data often outperforms studies based on univariate signals. We therefore introduce an extension of RCMSE to multivariate data. Applications of multivariate RCMSE to simulated processes reveal its better performance over the standard multivariate MSE. - Highlights: • Multiscale entropy quantifies data complexity but may be inaccurate at large scale. • A refined composite multiscale entropy (RCMSE) has therefore recently been proposed. • Nevertheless, RCMSE is adapted to univariate time series only. • We herein introduce an extension of RCMSE to multivariate data. • It shows better performance than the standard multivariate multiscale entropy.
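
    For reference, a univariate sketch of refined composite multiscale entropy is given below; the multivariate extension proposed in the paper additionally builds composite embedding vectors across channels, which is omitted here. The embedding dimension m = 2 and tolerance r = 0.15 x standard deviation follow common practice.

        import numpy as np

        def _match_counts(x, m, r):
            """Counts of (m+1)- and m-length template matches under Chebyshev distance r."""
            n = len(x)
            templ = np.array([x[i:i + m + 1] for i in range(n - m)])
            a = b = 0
            for i in range(len(templ)):
                d = np.max(np.abs(templ[i + 1:] - templ[i]), axis=1)          # (m+1)-dim
                dm = np.max(np.abs(templ[i + 1:, :m] - templ[i, :m]), axis=1)  # m-dim
                a += np.sum(d <= r)
                b += np.sum(dm <= r)
            return a, b

        def rcmse(x, scales=range(1, 6), m=2, r_factor=0.15):
            """Refined composite multiscale entropy of a 1-D signal."""
            x = np.asarray(x, float)
            r = r_factor * np.std(x)
            out = []
            for tau in scales:
                A = B = 0
                for k in range(tau):                      # all tau coarse-grainings
                    cg = x[k:k + (len(x) - k) // tau * tau].reshape(-1, tau).mean(axis=1)
                    a, b = _match_counts(cg, m, r)
                    A, B = A + a, B + b
                out.append(-np.log(A / B) if A > 0 and B > 0 else np.nan)
            return np.array(out)

        print(rcmse(np.random.default_rng(0).normal(size=2000)))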

  1. The Adaptive Multi-scale Simulation Infrastructure

    Energy Technology Data Exchange (ETDEWEB)

    Tobin, William R. [Rensselaer Polytechnic Inst., Troy, NY (United States)

    2015-09-01

    The Adaptive Multi-scale Simulation Infrastructure (AMSI) is a set of libraries and tools developed to support the development, implementation, and execution of general multimodel simulations. Using a minimal set of simulation meta-data, AMSI allows for minimally intrusive work to adapt existing single-scale simulations for use in multi-scale simulations. Support for dynamic runtime operations such as single- and multi-scale adaptive properties is a key focus of AMSI. Particular effort has been spent on the development of scale-sensitive load-balancing operations to allow single-scale simulations incorporated into a multi-scale simulation using AMSI to use standard load-balancing operations without affecting the integrity of the overall multi-scale simulation.

  2. Laser Writing of Multiscale Chiral Polymer Metamaterials

    Directory of Open Access Journals (Sweden)

    E. P. Furlani

    2012-01-01

    Full Text Available A new approach to metamaterials is presented that involves laser-based patterning of novel chiral polymer media, wherein chirality is realized at two distinct length scales, intrinsically at the molecular level and geometrically at a length scale on the order of the wavelength of the incident field. In this approach, femtosecond-pulsed laser-induced two-photon lithography (TPL) is used to pattern a photoresist-chiral polymer mixture into planar chiral shapes. Enhanced bulk chirality can be realized by tuning the wavelength-dependent chiral response at both the molecular and geometric level to ensure an overlap of their respective spectra. The approach is demonstrated via the fabrication of a metamaterial consisting of a two-dimensional array of chiral polymer-based L-structures. The fabrication process is described and modeling is performed to demonstrate the distinction between molecular and planar geometric-based chirality and the effects of the enhanced multiscale chirality on the optical response of such media. This new approach to metamaterials holds promise for the development of tunable, polymer-based optical metamaterials with low loss.

  3. Multiscale wavelet representations for mammographic feature analysis

    Science.gov (United States)

    Laine, Andrew F.; Song, Shuwu

    1992-12-01

    This paper introduces a novel approach for accomplishing mammographic feature analysis through multiresolution representations. We show that efficient (nonredundant) representations may be identified from digital mammography and used to enhance specific mammographic features within a continuum of scale space. The multiresolution decomposition of wavelet transforms provides a natural hierarchy in which to embed an interactive paradigm for accomplishing scale space feature analysis. Choosing wavelets (or analyzing functions) that are simultaneously localized in both space and frequency results in a powerful methodology for image analysis. Multiresolution and orientation selectivity, known biological mechanisms in primate vision, are ingrained in wavelet representations and inspire the techniques presented in this paper. Our approach includes local analysis of complete multiscale representations. Mammograms are reconstructed from wavelet coefficients, enhanced by linear, exponential and constant weight functions localized in scale space. By improving the visualization of breast pathology we can improve the chances of early detection of breast cancers (improve quality) while requiring less time to evaluate mammograms for most patients (lower costs).

  4. Multiscale empirical interpolation for solving nonlinear PDEs

    KAUST Repository

    Calo, Victor M.

    2014-12-01

    In this paper, we propose a multiscale empirical interpolation method for solving nonlinear multiscale partial differential equations. The proposed method combines empirical interpolation techniques and local multiscale methods, such as the Generalized Multiscale Finite Element Method (GMsFEM). To solve nonlinear equations, the GMsFEM is used to represent the solution on a coarse grid with multiscale basis functions computed offline. Computing the GMsFEM solution involves calculating the system residuals and Jacobians on the fine grid. We use empirical interpolation concepts to evaluate these residuals and Jacobians of the multiscale system with a computational cost which is proportional to the size of the coarse-scale problem rather than the fully-resolved fine scale one. The empirical interpolation method uses basis functions which are built by sampling the nonlinear function we want to approximate a limited number of times. The coefficients needed for this approximation are computed in the offline stage by inverting an inexpensive linear system. The proposed multiscale empirical interpolation techniques: (1) divide computing the nonlinear function into coarse regions; (2) evaluate contributions of nonlinear functions in each coarse region taking advantage of a reduced-order representation of the solution; and (3) introduce multiscale proper-orthogonal-decomposition techniques to find appropriate interpolation vectors. We demonstrate the effectiveness of the proposed methods on several nonlinear multiscale PDEs that are solved with Newton's method and fully-implicit time marching schemes. Our numerical results show that the proposed methods provide a robust framework for solving nonlinear multiscale PDEs on a coarse grid with bounded error and significant computational cost reduction.
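
    The core empirical interpolation ingredient (select a few sample points at which the nonlinear function must be evaluated, then reconstruct it from those samples with a POD basis) can be shown in isolation with the classic DEIM point-selection loop below. The snapshot function, basis size and test parameter are arbitrary; the multiscale/GMsFEM coupling of the paper is not reproduced.

        import numpy as np

        def deim(U):
            """DEIM point selection for a basis U (columns = POD modes).
            Returns the sample indices and the reconstruction operator."""
            n, m = U.shape
            idx = [int(np.argmax(np.abs(U[:, 0])))]
            for j in range(1, m):
                c = np.linalg.solve(U[idx, :j], U[idx, j])   # interpolate mode j on current points
                r = U[:, j] - U[:, :j] @ c                    # its interpolation residual
                idx.append(int(np.argmax(np.abs(r))))         # next point = largest residual entry
            P = np.zeros((n, m)); P[idx, np.arange(m)] = 1.0
            return np.array(idx), U @ np.linalg.inv(P.T @ U)  # f is approximated by recon @ f[idx]

        # snapshots of a smooth parameter-dependent nonlinear function
        x = np.linspace(-1.0, 1.0, 400)
        mus = np.linspace(1.0, np.pi, 51)
        F = np.column_stack([(1 - x) * np.cos(3 * np.pi * mu * (x + 1)) * np.exp(-(1 + x) * mu)
                             for mu in mus])
        U = np.linalg.svd(F, full_matrices=False)[0][:, :12]  # truncated POD basis
        idx, recon = deim(U)

        mu = 2.3                                              # unseen parameter value
        f_new = (1 - x) * np.cos(3 * np.pi * mu * (x + 1)) * np.exp(-(1 + x) * mu)
        err = np.linalg.norm(f_new - recon @ f_new[idx]) / np.linalg.norm(f_new)
        print(len(idx), "sample points, relative error", err)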

  5. A Multi-Scale Settlement Matching Algorithm Based on ARG

    Science.gov (United States)

    Yue, Han; Zhu, Xinyan; Chen, Di; Liu, Lingjia

    2016-06-01

    Homonymous entity matching is an important part of multi-source spatial data integration, automatic updating and change detection. Considering the low accuracy of existing matching methods in dealing with multi-scale settlement data, an algorithm based on Attributed Relational Graph (ARG) is proposed. The algorithm firstly divides two settlement scenes at different scales into blocks by a small-scale road network and constructs local ARGs in each block. Then, it ascertains candidate sets by merging procedures and obtains the optimal matching pairs by comparing the similarity of ARGs iteratively. Finally, the corresponding relations between settlements at large and small scales are identified. At the end of this article, a demonstration is presented and the results indicate that the proposed algorithm is capable of handling sophisticated cases.

  6. Detecting fine scratches on smooth surfaces with multiscale wavelet representation

    International Nuclear Information System (INIS)

    Yao, Li; Wan, Yan; Yao, Ming; Xu, Bugao

    2012-01-01

    This paper presents a set of image-processing algorithms for automatic detection of fine scratches on smooth surfaces, such as automobile paint surfaces. The scratches to be detected have random directions and inconspicuous gray levels, and are embedded in background noise. The multiscale wavelet transform was used to extract texture features, and a controlled edge fusion model was employed to merge the detailed (horizontal, vertical and diagonal) wavelet coefficient maps. Based on the fused detail map, multivariate statistics were applied to synthesize features in multiple scales and directions, and an optimal threshold was set to separate scratches from the background. The experimental results on 24 automobile paint surfaces showed that the presented algorithms can effectively suppress background noise and detect scratches accurately. (paper)

  7. A Multi-Scale Settlement Matching Algorithm Based on ARG

    Directory of Open Access Journals (Sweden)

    H. Yue

    2016-06-01

    Full Text Available Homonymous entity matching is an important part of multi-source spatial data integration, automatic updating and change detection. Considering the low accuracy of existing matching methods in dealing with multi-scale settlement data, an algorithm based on Attributed Relational Graph (ARG) is proposed. The algorithm firstly divides two settlement scenes at different scales into blocks by a small-scale road network and constructs local ARGs in each block. Then, it ascertains candidate sets by merging procedures and obtains the optimal matching pairs by comparing the similarity of ARGs iteratively. Finally, the corresponding relations between settlements at large and small scales are identified. At the end of this article, a demonstration is presented and the results indicate that the proposed algorithm is capable of handling sophisticated cases.

  8. Peridynamic Multiscale Finite Element Methods

    Energy Technology Data Exchange (ETDEWEB)

    Costa, Timothy [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Bond, Stephen D. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Littlewood, David John [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Moore, Stan Gerald [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-12-01

    The problem of computing quantum-accurate design-scale solutions to mechanics problems is rich with applications and serves as the background to modern multiscale science research. The problem can be broken into component problems involving communication across adjacent scales, which when strung together create a pipeline for information to travel from quantum scales to design scales. Traditionally, this involves connections between a) quantum electronic structure calculations and molecular dynamics and between b) molecular dynamics and local partial differential equation models at the design scale. The second step, b), is particularly challenging since the appropriate scales of molecular dynamics and local partial differential equation models do not overlap. The peridynamic model for continuum mechanics provides an advantage in this endeavor, as the basic equations of peridynamics are valid at a wide range of scales limiting from the classical partial differential equation models valid at the design scale to the scale of molecular dynamics. In this work we focus on the development of multiscale finite element methods for the peridynamic model, in an effort to create a mathematically consistent channel for microscale information to travel from the upper limits of the molecular dynamics scale to the design scale. In particular, we first develop a Nonlocal Multiscale Finite Element Method which solves the peridynamic model at multiple scales to include microscale information at the coarse-scale. We then consider a method that solves a fine-scale peridynamic model to build element-support basis functions for a coarse-scale local partial differential equation model, called the Mixed Locality Multiscale Finite Element Method. Given decades of research and development into finite element codes for the local partial differential equation models of continuum mechanics there is a strong desire to couple local and nonlocal models to leverage the speed and state of the

  9. A rate-dependent multi-scale crack model for concrete

    NARCIS (Netherlands)

    Karamnejad, A.; Nguyen, V.P.; Sluys, L.J.

    2013-01-01

    A multi-scale numerical approach for modeling cracking in heterogeneous quasi-brittle materials under dynamic loading is presented. In the model, a discontinuous crack model is used at macro-scale to simulate fracture and a gradient-enhanced damage model has been used at meso-scale to simulate

  10. Multiscale modelling for tokamak pedestals

    Science.gov (United States)

    Abel, I. G.

    2018-04-01

    Pedestal modelling is crucial to predict the performance of future fusion devices. Current modelling efforts suffer either from a lack of kinetic physics or from an excess of computational complexity. To ameliorate these problems, we take a first-principles multiscale approach to the pedestal. We will present three separate sets of equations, covering the dynamics of edge localised modes (ELMs), the inter-ELM pedestal and pedestal turbulence, respectively. Precisely how these equations should be coupled to each other is covered in detail. This framework is completely self-consistent; it is derived from first principles by means of an asymptotic expansion of the fundamental Vlasov-Landau-Maxwell system in appropriate small parameters. The derivation exploits the narrowness of the pedestal region, the smallness of the thermal gyroradius and the low plasma beta (the ratio of thermal to magnetic pressures) typical of current pedestal operation to achieve its simplifications. The relationship between this framework and gyrokinetics is analysed, and possibilities to directly match our systems of equations onto multiscale gyrokinetics are explored. A detailed comparison between our model and other models in the literature is performed. Finally, the potential for matching this framework onto an open-field-line region is briefly discussed.

  11. Finding weak points automatically

    International Nuclear Information System (INIS)

    Archinger, P.; Wassenberg, M.

    1999-01-01

    Operators of nuclear power stations have to carry out material tests on selected components at regular intervals. Therefore a fully automated test, which achieves clearly higher reproducibility than partly automated variants, would provide a solution. In addition, the fully automated test reduces the radiation dose for the test personnel. (orig.) [de]

  12. An automated vessel segmentation of retinal images using multiscale vesselness

    International Nuclear Information System (INIS)

    Ben Abdallah, M.; Malek, J.; Tourki, R.; Krissian, K.

    2011-01-01

    The ocular fundus image can provide information on pathological changes caused by local ocular diseases and early signs of certain systemic diseases, such as diabetes and hypertension. Automated analysis and interpretation of fundus images has become a necessary and important diagnostic procedure in ophthalmology. The extraction of blood vessels from retinal images is an important and challenging task in medical analysis and diagnosis. In this paper, we introduce an implementation of anisotropic diffusion that reduces noise while better preserving small structures such as vessels in 2D images. A vessel detection filter, based on a multi-scale vesselness function, is then applied to enhance vascular structures.
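
    The two-stage pipeline (edge-preserving smoothing followed by a multi-scale vesselness filter) can be sketched with scikit-image. Total-variation denoising is used here merely as a convenient stand-in for the anisotropic diffusion implementation introduced in the paper; the synthetic image, scale range and threshold are illustrative.

        import numpy as np
        from skimage.restoration import denoise_tv_chambolle   # stand-in for anisotropic diffusion
        from skimage.filters import frangi

        # synthetic fundus-like image: one bright curved "vessel" plus noise
        yy, xx = np.mgrid[0:256, 0:256]
        img = np.exp(-((yy - 128 - 30 * np.sin(xx / 40.0)) ** 2) / (2 * 3.0 ** 2))
        img += np.random.default_rng(0).normal(0, 0.1, img.shape)

        smooth = denoise_tv_chambolle(img, weight=0.1)          # edge-preserving smoothing
        vesselness = frangi(smooth, sigmas=range(1, 6),          # multi-scale vesselness response
                            black_ridges=False)                  # our toy vessel is bright
        mask = vesselness > 0.5 * vesselness.max()
        print(mask.sum(), "pixels flagged as vessel")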

  13. Expected Navigation Flight Performance for the Magnetospheric Multiscale (MMS) Mission

    Science.gov (United States)

    Olson, Corwin; Wright, Cinnamon; Long, Anne

    2012-01-01

    The Magnetospheric Multiscale (MMS) mission consists of four formation-flying spacecraft placed in highly eccentric elliptical orbits about the Earth. The primary scientific mission objective is to study magnetic reconnection within the Earth s magnetosphere. The baseline navigation concept is the independent estimation of each spacecraft state using GPS pseudorange measurements (referenced to an onboard Ultra Stable Oscillator) and accelerometer measurements during maneuvers. State estimation for the MMS spacecraft is performed onboard each vehicle using the Goddard Enhanced Onboard Navigation System, which is embedded in the Navigator GPS receiver. This paper describes the latest efforts to characterize expected navigation flight performance using upgraded simulation models derived from recent analyses.

  14. A Tensor-Product-Kernel Framework for Multiscale Neural Activity Decoding and Control

    Science.gov (United States)

    Li, Lin; Brockmeier, Austin J.; Choi, John S.; Francis, Joseph T.; Sanchez, Justin C.; Príncipe, José C.

    2014-01-01

    Brain machine interfaces (BMIs) have attracted intense attention as a promising technology for directly interfacing computers or prostheses with the brain's motor and sensory areas, thereby bypassing the body. The availability of multiscale neural recordings including spike trains and local field potentials (LFPs) brings potential opportunities to enhance computational modeling by enriching the characterization of the neural system state. However, heterogeneity on data type (spike timing versus continuous amplitude signals) and spatiotemporal scale complicates the model integration of multiscale neural activity. In this paper, we propose a tensor-product-kernel-based framework to integrate the multiscale activity and exploit the complementary information available in multiscale neural activity. This provides a common mathematical framework for incorporating signals from different domains. The approach is applied to the problem of neural decoding and control. For neural decoding, the framework is able to identify the nonlinear functional relationship between the multiscale neural responses and the stimuli using general purpose kernel adaptive filtering. In a sensory stimulation experiment, the tensor-product-kernel decoder outperforms decoders that use only a single neural data type. In addition, an adaptive inverse controller for delivering electrical microstimulation patterns that utilizes the tensor-product kernel achieves promising results in emulating the responses to natural stimulation. PMID:24829569
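
    The central construction (a tensor-product kernel that multiplies a similarity defined on spike activity with one defined on LFP features, giving a single positive-definite kernel over the joint observation) can be written in a few lines. Representing spikes as binned counts and using Gaussian kernels on both factors are simplifications of the spike-train kernels used in the paper.

        import numpy as np

        def rbf(a, b, sigma):
            """Gaussian kernel between two feature vectors."""
            return np.exp(-np.sum((a - b) ** 2) / (2 * sigma ** 2))

        def tensor_product_kernel(spike_a, spike_b, lfp_a, lfp_b, sig_spike=1.0, sig_lfp=1.0):
            """Joint similarity of two multiscale observations: the product of a kernel on
            binned spike counts and a kernel on LFP features. The product of two
            positive-definite kernels is positive definite, so standard kernel methods
            (e.g. kernel adaptive filters) can use it directly."""
            return rbf(spike_a, spike_b, sig_spike) * rbf(lfp_a, lfp_b, sig_lfp)

        rng = np.random.default_rng(0)
        spikes = rng.poisson(2.0, (5, 20)).astype(float)   # 5 samples x 20 binned spike counts
        lfps = rng.normal(0.0, 1.0, (5, 32))               # 5 samples x 32 LFP features
        K = np.array([[tensor_product_kernel(spikes[i], spikes[j], lfps[i], lfps[j])
                       for j in range(5)] for i in range(5)])
        print(np.round(K, 3))                              # joint Gram matrix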

  15. A case study on the influence of multiscale modelling in design and structural analysis

    DEFF Research Database (Denmark)

    Nicholas, Paul; Zwierzycki, Mateusz; La Magna, Riccardo

    2017-01-01

    . To illustrate the concept of multi-scale modelling, the prototype of a bridge structure that was realised making use of this information transfer between models will be presented. The prototype primarily takes advantage of the geometric and material stiffening effect of incremental metal forming. The local......The current paper discusses the role of multi-scale modelling within the context of design and structural analysis. Depending on the level of detail, a design model may retain, lose or enhance key information. The term multi-scale refers to the break-down of a design and analysis task into multiple...... levels of detail and the transfer of this information between models. Focusing on the influence that different models have on the analysed performance of the structure, the paper will discuss the advantages and trade-offs of coupling multiple levels of abstraction in terms of design and structure...

  16. Image classification using multiscale information fusion based on saliency driven nonlinear diffusion filtering.

    Science.gov (United States)

    Hu, Weiming; Hu, Ruiguang; Xie, Nianhua; Ling, Haibin; Maybank, Stephen

    2014-04-01

    In this paper, we propose saliency driven image multiscale nonlinear diffusion filtering. The resulting scale space in general preserves or even enhances semantically important structures such as edges, lines, or flow-like structures in the foreground, and inhibits and smoothes clutter in the background. The image is classified using multiscale information fusion based on the original image, the image at the final scale at which the diffusion process converges, and the image at a midscale. Our algorithm emphasizes the foreground features, which are important for image classification. The background image regions, whether considered as contexts of the foreground or noise to the foreground, can be globally handled by fusing information from different scales. Experimental tests of the effectiveness of the multiscale space for the image classification are conducted on the following publicly available datasets: 1) the PASCAL 2005 dataset; 2) the Oxford 102 flowers dataset; and 3) the Oxford 17 flowers dataset, with high classification rates.
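
    The nonlinear diffusion at the core of this scale space is, in its simplest form, Perona-Malik filtering: smoothing is suppressed where image gradients are large, so edges survive while background clutter is smoothed. The sketch below implements only that standard building block; the saliency weighting and the multiscale fusion step described in the paper are not reproduced, and all parameters are illustrative.

      # Minimal sketch of Perona-Malik (nonlinear) diffusion filtering.
      # Periodic boundary handling via np.roll keeps the sketch short.
      import numpy as np

      def perona_malik(image, n_iter=20, kappa=0.1, step=0.2):
          """Edge-preserving smoothing: diffusion is suppressed at large gradients."""
          u = image.astype(float).copy()
          g = lambda d: np.exp(-(d / kappa) ** 2)   # diffusivity g(|grad u|)
          for _ in range(n_iter):
              dn = np.roll(u, -1, axis=0) - u       # forward differences, four directions
              ds = np.roll(u, 1, axis=0) - u
              de = np.roll(u, -1, axis=1) - u
              dw = np.roll(u, 1, axis=1) - u
              u = u + step * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
          return u

      smoothed = perona_malik(np.random.rand(64, 64))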

  17. Spontaneous Facial Mimicry Is Enhanced by the Goal of Inferring Emotional States: Evidence for Moderation of "Automatic" Mimicry by Higher Cognitive Processes.

    Science.gov (United States)

    Murata, Aiko; Saito, Hisamichi; Schug, Joanna; Ogawa, Kenji; Kameda, Tatsuya

    2016-01-01

    A number of studies have shown that individuals often spontaneously mimic the facial expressions of others, a tendency known as facial mimicry. This tendency has generally been considered a reflex-like "automatic" response, but several recent studies have shown that the degree of mimicry may be moderated by contextual information. However, the cognitive and motivational factors underlying the contextual moderation of facial mimicry require further empirical investigation. In this study, we present evidence that the degree to which participants spontaneously mimic a target's facial expressions depends on whether participants are motivated to infer the target's emotional state. In the first study we show that facial mimicry, assessed by facial electromyography, occurs more frequently when participants are specifically instructed to infer a target's emotional state than when given no instruction. In the second study, we replicate this effect using the Facial Action Coding System to show that participants are more likely to mimic facial expressions of emotion when they are asked to infer the target's emotional state, rather than make inferences about a physical trait unrelated to emotion. These results provide convergent evidence that the explicit goal of understanding a target's emotional state affects the degree of facial mimicry shown by the perceiver, suggesting moderation of reflex-like motor activities by higher cognitive processes.

  18. Processing Digital Imagery to Enhance Perceptions of Realism

    Science.gov (United States)

    Woodell, Glenn A.; Jobson, Daniel J.; Rahman, Zia-ur

    2003-01-01

    Multi-scale retinex with color restoration (MSRCR) is a method of processing digital image data based on Edwin Land's retinex (retina + cortex) theory of human color vision. An outgrowth of basic scientific research and its application to NASA's remote-sensing mission, MSRCR is embodied in a general-purpose algorithm that greatly improves the perception of visual realism and the quantity and quality of perceived information in a digitized image. In addition, the MSRCR algorithm includes provisions for automatic corrections to accelerate and facilitate what could otherwise be a tedious image-editing process. The MSRCR algorithm has been, and is expected to continue to be, the basis for development of commercial image-enhancement software designed to extend and refine its capabilities for diverse applications.
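
    As a rough sketch of the multi-scale retinex core (without the color restoration and gain/offset steps of MSRCR), each channel is compared, in the log domain, against Gaussian-blurred surrounds at several scales and the log-ratios are averaged. The scales and weights below are illustrative, not the values used in the NASA software.

      # Sketch of the multi-scale retinex core: weighted sum of log-ratios between the
      # image and Gaussian surrounds at several scales. Color restoration omitted.
      import numpy as np
      from scipy.ndimage import gaussian_filter

      def multiscale_retinex(channel, sigmas=(15, 80, 250), eps=1e-6):
          channel = channel.astype(float) + eps
          out = np.zeros_like(channel)
          for sigma in sigmas:
              surround = gaussian_filter(channel, sigma) + eps   # center-surround at this scale
              out += (np.log(channel) - np.log(surround)) / len(sigmas)
          return out

      enhanced = multiscale_retinex(np.random.rand(128, 128) * 255.0)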

  19. Multiscale modelling in immunology: a review.

    Science.gov (United States)

    Cappuccio, Antonio; Tieri, Paolo; Castiglione, Filippo

    2016-05-01

    One of the greatest challenges in biomedicine is to get a unified view of observations made from the molecular up to the organism scale. Towards this goal, multiscale models have been highly instrumental in contexts such as the cardiovascular field, angiogenesis, neurosciences and tumour biology. More recently, such models are becoming an increasingly important resource to address immunological questions as well. Systematic mining of the literature in multiscale modelling led us to identify three main fields of immunological applications: host-virus interactions, inflammatory diseases and their treatment and development of multiscale simulation platforms for immunological research and for educational purposes. Here, we review the current developments in these directions, which illustrate that multiscale models can consistently integrate immunological data generated at several scales, and can be used to describe and optimize therapeutic treatments of complex immune diseases. © The Author 2015. Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.

  20. Foundations for a multiscale collaborative Earth model

    KAUST Repository

    Afanasiev, M.; Peter, Daniel; Sager, K.; Simut, S.; Ermert, L.; Krischer, L.; Fichtner, A.

    2015-01-01

    The CSEM as a computational framework is intended to help bridge the gap between local, regional and global tomography, and to contribute to the development of a global multiscale Earth model. While the current construction serves as a first proof

  1. Collaborating for Multi-Scale Chemical Science

    Energy Technology Data Exchange (ETDEWEB)

    William H. Green

    2006-07-14

    Advanced model reduction methods were developed and integrated into the CMCS multiscale chemical science simulation software. The new technologies were used to simulate HCCI engines and burner flames with exceptional fidelity.

  2. Multiscale Modeling of Wear Degradation

    KAUST Repository

    Moraes, Alvaro; Ruggeri, Fabrizio; Tempone, Raul; Vilanova, Pedro

    2016-01-01

    Cylinder liners of diesel engines used for marine propulsion are naturally subjected to a wear process, and may fail when their wear exceeds a specified limit. Since failures often represent high economic costs, it is of utmost importance to predict and avoid them. In this work [4], we model the wear process using a pure jump process. Therefore, the inference goal here is to estimate the number of possible jumps, their sizes, and the coefficients and shapes of the jump intensities. We propose a multiscale approach for the inference problem that can be seen as an indirect inference scheme. We found that, using a Gaussian approximation based on moment expansions, it is possible to accurately estimate the jump intensities and the jump amplitudes. We obtained results equivalent to the state of the art but using a simpler and less expensive approach.
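
    As a rough illustration of the kind of pure-jump wear model described above, the sketch below forward-simulates a compound Poisson wear path until it crosses a failure limit. The jump rate and jump-size distribution are invented for the example; they are not the intensities inferred in the cited work, and the Gaussian moment-expansion inference itself is not shown.

      # Illustrative forward simulation of a pure-jump wear process and its failure time.
      import numpy as np

      def simulate_wear(rate=0.5, mean_jump=0.02, wear_limit=1.0, horizon=100.0, seed=0):
          """Compound Poisson wear: exponential waiting times, exponential jump sizes."""
          rng = np.random.default_rng(seed)
          t, wear = 0.0, 0.0
          while t < horizon:
              t += rng.exponential(1.0 / rate)       # waiting time to the next jump
              wear += rng.exponential(mean_jump)     # wear increment (jump size)
              if wear >= wear_limit:
                  return t                           # failure time
          return np.inf                              # liner survived the horizon

      failure_times = [simulate_wear(seed=s) for s in range(1000)]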

  3. Multiscale Modeling of Wear Degradation

    KAUST Repository

    Moraes, Alvaro; Ruggeri, Fabrizio; Tempone, Raul; Vilanova, Pedro

    2015-01-01

    Cylinder liners of diesel engines used for marine propulsion are naturally subjected to a wear process, and may fail when their wear exceeds a specified limit. Since failures often represent high economic costs, it is of utmost importance to predict and avoid them. In this work [4], we model the wear process using a pure jump process. Therefore, the inference goal here is to estimate the number of possible jumps, their sizes, and the coefficients and shapes of the jump intensities. We propose a multiscale approach for the inference problem that can be seen as an indirect inference scheme. We found that, using a Gaussian approximation based on moment expansions, it is possible to accurately estimate the jump intensities and the jump amplitudes. We obtained results equivalent to the state of the art but using a simpler and less expensive approach.

  4. Multiscale Modeling of Wear Degradation

    KAUST Repository

    Moraes, Alvaro

    2015-01-07

    Cylinder liners of diesel engines used for marine propulsion are naturally subjected to a wear process, and may fail when their wear exceeds a specified limit. Since failures often represent high economic costs, it is of utmost importance to predict and avoid them. In this work [4], we model the wear process using a pure jump process. Therefore, the inference goal here is to estimate the number of possible jumps, their sizes, and the coefficients and shapes of the jump intensities. We propose a multiscale approach for the inference problem that can be seen as an indirect inference scheme. We found that, using a Gaussian approximation based on moment expansions, it is possible to accurately estimate the jump intensities and the jump amplitudes. We obtained results equivalent to the state of the art but using a simpler and less expensive approach.

  5. Multiscale Modeling of Wear Degradation

    KAUST Repository

    Moraes, Alvaro

    2014-01-06

    Cylinder liners of diesel engines used for marine propulsion are naturally subjected to a wear process, and may fail when their wear exceeds a specified limit. Since failures often represent high economic costs, it is of utmost importance to predict and avoid them. In this work [4], we model the wear process using a pure jump process. Therefore, the inference goal here is to estimate the number of possible jumps, their sizes, and the coefficients and shapes of the jump intensities. We propose a multiscale approach for the inference problem that can be seen as an indirect inference scheme. We found that, using a Gaussian approximation based on moment expansions, it is possible to accurately estimate the jump intensities and the jump amplitudes. We obtained results equivalent to the state of the art but using a simpler and less expensive approach.

  6. Multiscale Modeling of Wear Degradation

    KAUST Repository

    Moraes, Alvaro

    2016-01-06

    Cylinder liners of diesel engines used for marine propulsion are naturally subjected to a wear process, and may fail when their wear exceeds a specified limit. Since failures often represent high economic costs, it is of utmost importance to predict and avoid them. In this work [4], we model the wear process using a pure jump process. Therefore, the inference goal here is to estimate the number of possible jumps, their sizes, and the coefficients and shapes of the jump intensities. We propose a multiscale approach for the inference problem that can be seen as an indirect inference scheme. We found that, using a Gaussian approximation based on moment expansions, it is possible to accurately estimate the jump intensities and the jump amplitudes. We obtained results equivalent to the state of the art but using a simpler and less expensive approach.

  7. Wavelets and multiscale signal processing

    CERN Document Server

    Cohen, Albert

    1995-01-01

    Since their appearance in mid-1980s, wavelets and, more generally, multiscale methods have become powerful tools in mathematical analysis and in applications to numerical analysis and signal processing. This book is based on "Ondelettes et Traitement Numerique du Signal" by Albert Cohen. It has been translated from French by Robert D. Ryan and extensively updated by both Cohen and Ryan. It studies the existing relations between filter banks and wavelet decompositions and shows how these relations can be exploited in the context of digital signal processing. Throughout, the book concentrates on the fundamentals. It begins with a chapter on the concept of multiresolution analysis, which contains complete proofs of the basic results. The description of filter banks that are related to wavelet bases is elaborated in both the orthogonal case (Chapter 2), and in the biorthogonal case (Chapter 4). The regularity of wavelets, how this is related to the properties of the filters and the importance of regularity for t...

  8. Multiphysics/multiscale multifluid computations

    International Nuclear Information System (INIS)

    Yadigaroglu, George

    2014-01-01

    Regarding experimentation, interesting examples of multi-scale approaches are found: small-scale experiments to understand the mechanisms of counter-current flow limitations (CCFL), such as the growth of instabilities on films, droplet entrainment, etc.; meso-scale experiments to quantify the CCFL conditions in typical geometries such as tubes and gaps between parallel plates; and finally full-scale experimentation in a typical reactor geometry - the UPTF tests. Another example is the mixing of the atmosphere produced by plumes and jets in a reactor containment: one first needs basic turbulence information that can be obtained at the microscopic level; this is followed by medium-scale experiments to understand the behaviour of jets and plumes; finally, reactor-scale tests can be conducted in facilities such as PANDA at PSI in Switzerland to study the phenomena at large scale.

  9. Multiscale modelling of DNA mechanics

    International Nuclear Information System (INIS)

    Dršata, Tomáš; Lankaš, Filip

    2015-01-01

    Mechanical properties of DNA are important not only in a wide range of biological processes but also in the emerging field of DNA nanotechnology. We review some of the recent developments in modeling these properties, emphasizing the multiscale nature of the problem. Modern atomic resolution, explicit solvent molecular dynamics simulations have contributed to our understanding of DNA fine structure and conformational polymorphism. These simulations may serve as data sources to parameterize rigid base models which themselves have undergone major development. A consistent buildup of larger entities involving multiple rigid bases enables us to describe DNA at more global scales. Free energy methods to impose large strains on DNA, as well as bead models and other approaches, are also briefly discussed. (topical review)

  10. Multiscale modeling of pedestrian dynamics

    CERN Document Server

    Cristiani, Emiliano; Tosin, Andrea

    2014-01-01

    This book presents mathematical models and numerical simulations of crowd dynamics. The core topic is the development of a new multiscale paradigm, which bridges the microscopic and macroscopic scales taking the most from each of them for capturing the relevant clues of complexity of crowds. The background idea is indeed that most of the complex trends exhibited by crowds are due to an intrinsic interplay between individual and collective behaviors. The modeling approach promoted in this book pursues actively this intuition and profits from it for designing general mathematical structures susceptible of application also in fields different from the inspiring original one. The book considers also the two most traditional points of view: the microscopic one, in which pedestrians are tracked individually, and the macroscopic one, in which pedestrians are assimilated to a continuum. Selected existing models are critically analyzed. The work is addressed to researchers and graduate students.

  11. Integrated multi-scale modelling and simulation of nuclear fuels

    International Nuclear Information System (INIS)

    Valot, C.; Bertolus, M.; Masson, R.; Malerba, L.; Rachid, J.; Besmann, T.; Phillpot, S.; Stan, M.

    2015-01-01

    This chapter aims at discussing the objectives, implementation and integration of multi-scale modelling approaches applied to nuclear fuel materials. We will first show why the multi-scale modelling approach is required, due to the nature of the materials and by the phenomena involved under irradiation. We will then present the multiple facets of multi-scale modelling approach, while giving some recommendations with regard to its application. We will also show that multi-scale modelling must be coupled with appropriate multi-scale experiments and characterisation. Finally, we will demonstrate how multi-scale modelling can contribute to solving technology issues. (authors)

  12. A concurrent multiscale micromorphic molecular dynamics

    International Nuclear Information System (INIS)

    Li, Shaofan; Tong, Qi

    2015-01-01

    In this work, we have derived a multiscale micromorphic molecular dynamics (MMMD) from first principles to extend the (Andersen)-Parrinello-Rahman molecular dynamics to the mesoscale and continuum scale. The multiscale micromorphic molecular dynamics is a concurrent three-scale dynamics that couples a fine-scale molecular dynamics, a mesoscale micromorphic dynamics, and a macroscale nonlocal particle dynamics. By choosing proper statistical closure conditions, we have shown that the original Andersen-Parrinello-Rahman molecular dynamics is the homogeneous and equilibrium case of the proposed multiscale micromorphic molecular dynamics. Specifically, we have shown that the Andersen-Parrinello-Rahman molecular dynamics can be rigorously formulated and justified from first principles, and that its general inhomogeneous case, i.e., the three-scale concurrent multiscale micromorphic molecular dynamics, can take into account macroscale continuum mechanics boundary conditions without the limitation of atomistic or periodic boundary conditions. The discovered multiscale structure and the corresponding multiscale dynamics reveal a seamless transition from the atomistic scale to the continuum scale and the intrinsic coupling mechanism among them, based on a first-principles formulation.

  13. Multiscale Model Reduction with Generalized Multiscale Finite Element Methods in Geomathematics

    KAUST Repository

    Efendiev, Yalchin R.; Presho, Michael

    2015-01-01

    In this chapter, we discuss multiscale model reduction using Generalized Multiscale Finite Element Methods (GMsFEM) in a number of geomathematical applications. GMsFEM has been recently introduced (Efendiev et al. 2012) and applied to various problems. In the current chapter, we consider some of these applications and outline the basic methodological concepts.

  14. Distributed multiscale computing with MUSCLE 2, the Multiscale Coupling Library and Environment

    NARCIS (Netherlands)

    Borgdorff, J.; Mamonski, M.; Bosak, B.; Kurowski, K.; Ben Belgacem, M.; Chopard, B.; Groen, D.; Coveney, P.V.; Hoekstra, A.G.

    2014-01-01

    We present the Multiscale Coupling Library and Environment: MUSCLE 2. This multiscale component-based execution environment has a simple to use Java, C++, C, Python and Fortran API, compatible with MPI, OpenMP and threading codes. We demonstrate its local and distributed computing capabilities and

  15. A distributed multiscale computation of a tightly coupled model using the Multiscale Modeling Language

    NARCIS (Netherlands)

    Borgdorff, J.; Bona-Casas, C.; Mamonski, M.; Kurowski, K.; Piontek, T.; Bosak, B.; Rycerz, K.; Ciepiela, E.; Gubala, T.; Harezlak, D.; Bubak, M.; Lorenz, E.; Hoekstra, A.G.

    2012-01-01

    Nature is observed at all scales; with multiscale modeling, scientists bring together several scales for a holistic analysis of a phenomenon. The models on these different scales may require significant but also heterogeneous computational resources, creating the need for distributed multiscale

  16. Multiscale Model Reduction with Generalized Multiscale Finite Element Methods in Geomathematics

    KAUST Repository

    Efendiev, Yalchin R.

    2015-09-02

    In this chapter, we discuss multiscale model reduction using Generalized Multiscale Finite Element Methods (GMsFEM) in a number of geomathematical applications. GMsFEM has been recently introduced (Efendiev et al. 2012) and applied to various problems. In the current chapter, we consider some of these applications and outline the basic methodological concepts.

  17. Automatic Photoelectric Telescope Service

    International Nuclear Information System (INIS)

    Genet, R.M.; Boyd, L.J.; Kissell, K.E.; Crawford, D.L.; Hall, D.S.; BDM Corp., McLean, VA; Kitt Peak National Observatory, Tucson, AZ; Dyer Observatory, Nashville, TN)

    1987-01-01

    Automatic observatories have the potential of gathering sizable amounts of high-quality astronomical data at low cost. The Automatic Photoelectric Telescope Service (APT Service) has realized this potential and is routinely making photometric observations of a large number of variable stars. However, without observers to provide on-site monitoring, it was necessary to incorporate special quality checks into the operation of the APT Service at its multiple automatic telescope installation on Mount Hopkins. 18 references

  18. Automatic Fiscal Stabilizers

    Directory of Open Access Journals (Sweden)

    Narcis Eduard Mitu

    2013-11-01

    Full Text Available Policies or institutions (built into an economic system) that automatically tend to dampen economic cycle fluctuations in income, employment, etc., without direct government intervention. For example, in boom times, a progressive income tax automatically reduces the money supply as incomes and spending rise. Similarly, in recessionary times, the payment of unemployment benefits injects more money into the system and stimulates demand. Also called automatic stabilizers or built-in stabilizers.

  19. Automatic differentiation bibliography

    Energy Technology Data Exchange (ETDEWEB)

    Corliss, G.F. [comp.

    1992-07-01

    This is a bibliography of work related to automatic differentiation. Automatic differentiation is a technique for the fast, accurate propagation of derivative values using the chain rule. It is neither symbolic nor numeric. Automatic differentiation is a fundamental tool for scientific computation, with applications in optimization, nonlinear equations, nonlinear least squares approximation, stiff ordinary differential equations, partial differential equations, continuation methods, and sensitivity analysis. This report is an updated version of the bibliography which originally appeared in Automatic Differentiation of Algorithms: Theory, Implementation, and Application.

  20. Multiscale Simulation Framework for Coupled Fluid Flow and Mechanical Deformation

    Energy Technology Data Exchange (ETDEWEB)

    Hou, Thomas [California Inst. of Technology (CalTech), Pasadena, CA (United States); Efendiev, Yalchin [Stanford Univ., CA (United States); Tchelepi, Hamdi [Texas A & M Univ., College Station, TX (United States); Durlofsky, Louis [Stanford Univ., CA (United States)

    2016-05-24

    Our work in this project is aimed at making fundamental advances in multiscale methods for flow and transport in highly heterogeneous porous media. The main thrust of this research is to develop a systematic multiscale analysis and efficient coarse-scale models that can capture global effects and extend existing multiscale approaches to problems with additional physics and uncertainties. A key emphasis is on problems without an apparent scale separation. Multiscale solution methods are currently under active investigation for the simulation of subsurface flow in heterogeneous formations. These procedures capture the effects of fine-scale permeability variations through the calculation of specialized coarse-scale basis functions. Most of the multiscale techniques presented to date employ localization approximations in the calculation of these basis functions. For some highly correlated (e.g., channelized) formations, however, global effects are important and these may need to be incorporated into the multiscale basis functions. Other challenging issues facing multiscale simulations are the extension of existing multiscale techniques to problems with additional physics, such as compressibility, capillary effects, etc. In our project, we explore the improvement of multiscale methods through the incorporation of additional (single-phase flow) information and the development of a general multiscale framework for flows in the presence of uncertainties, compressible flow and heterogeneous transport, and geomechanics. We have considered (1) adaptive local-global multiscale methods, (2) multiscale methods for the transport equation, (3) operator-based multiscale methods and solvers, (4) multiscale methods in the presence of uncertainties and applications, (5) multiscale finite element methods for high contrast porous media and their generalizations, and (6) multiscale methods for geomechanics.
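
    The notion of a coarse basis function that "sees" fine-scale heterogeneity can be illustrated with the textbook one-dimensional case: on each coarse element one solves (a(x) u')' = 0 on the fine grid with boundary values 0 and 1, which reduces to cumulative sums of h/a. The sketch below is a teaching example with an arbitrary oscillatory coefficient, not the project's solver.

      # Textbook 1-D illustration of a multiscale finite-element basis function.
      import numpy as np

      def msfem_basis_on_element(a_edges, h):
          """Solution of (a u')' = 0 with u(0)=0, u(L)=1 on one coarse element.

          a_edges: fine-grid coefficient values inside the element.
          Returns u at the fine nodes of the element (including both endpoints).
          """
          # The flux a*u' is constant in 1-D, so u(x_k) is proportional to the
          # cumulative sum of h / a to the left of x_k (a "harmonic" profile).
          resistances = h / a_edges
          u = np.concatenate(([0.0], np.cumsum(resistances)))
          return u / u[-1]

      # One coarse element resolved by 20 fine cells with an oscillatory coefficient.
      n_fine = 20
      h = 1.0 / n_fine
      x_mid = (np.arange(n_fine) + 0.5) * h
      a = 1.0 + 0.9 * np.sin(40 * np.pi * x_mid)     # fine-scale heterogeneity
      basis_right = msfem_basis_on_element(a, h)      # rises from 0 to 1 across the element
      basis_left = basis_right[::-1]                  # mirrored basis for the other coarse node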

  1. Multiscale analysis and computation for flows in heterogeneous media

    Energy Technology Data Exchange (ETDEWEB)

    Efendiev, Yalchin [Texas A & M Univ., College Station, TX (United States); Hou, T. Y. [California Inst. of Technology (CalTech), Pasadena, CA (United States); Durlofsky, L. J. [Stanford Univ., CA (United States); Tchelepi, H. [Stanford Univ., CA (United States)

    2016-08-04

    Our work in this project is aimed at making fundamental advances in multiscale methods for flow and transport in highly heterogeneous porous media. The main thrust of this research is to develop a systematic multiscale analysis and efficient coarse-scale models that can capture global effects and extend existing multiscale approaches to problems with additional physics and uncertainties. A key emphasis is on problems without an apparent scale separation. Multiscale solution methods are currently under active investigation for the simulation of subsurface flow in heterogeneous formations. These procedures capture the effects of fine-scale permeability variations through the calculation of specialized coarse-scale basis functions. Most of the multiscale techniques presented to date employ localization approximations in the calculation of these basis functions. For some highly correlated (e.g., channelized) formations, however, global effects are important and these may need to be incorporated into the multiscale basis functions. Other challenging issues facing multiscale simulations are the extension of existing multiscale techniques to problems with additional physics, such as compressibility, capillary effects, etc. In our project, we explore the improvement of multiscale methods through the incorporation of additional (single-phase flow) information and the development of a general multiscale framework for flows in the presence of uncertainties, compressible flow and heterogeneous transport, and geomechanics. We have considered (1) adaptive local-global multiscale methods, (2) multiscale methods for the transport equation, (3) operator-based multiscale methods and solvers, (4) multiscale methods in the presence of uncertainties and applications, (5) multiscale finite element methods for high contrast porous media and their generalizations, and (6) multiscale methods for geomechanics. Below, we present a brief overview of each of these contributions.

  2. Novel Multiscale Modeling Tool Applied to Pseudomonas aeruginosa Biofilm Formation

    OpenAIRE

    Biggs, Matthew B.; Papin, Jason A.

    2013-01-01

    Multiscale modeling is used to represent biological systems with increasing frequency and success. Multiscale models are often hybrids of different modeling frameworks and programming languages. We present the MATLAB-NetLogo extension (MatNet) as a novel tool for multiscale modeling. We demonstrate the utility of the tool with a multiscale model of Pseudomonas aeruginosa biofilm formation that incorporates both an agent-based model (ABM) and constraint-based metabolic modeling. The hybrid mod...

  3. Multiscale study of metal nanoparticles

    Science.gov (United States)

    Lee, Byeongchan

    Extremely small structures with reduced dimensionality have emerged as a scientific motif for their interesting properties. In particular, metal nanoparticles have been identified as a fundamental material in many catalytic activities; as a consequence, a better understanding of the structure-function relationship of nanoparticles has become crucial. The functional analysis of nanoparticles, reactivity for example, requires an accurate method at the electronic structure level, whereas the structural analysis to find energetically stable local minima is beyond the scope of quantum mechanical methods, as the computational cost becomes prohibitively high. The challenge is that the inherent length scale and accuracy associated with any single method hardly covers the broad scale range spanned by both structural and functional analyses. In order to address this, and effectively explore the energetics and reactivity of metal nanoparticles, a hierarchical multiscale modeling approach is developed, where methodologies at different length scales, i.e. first principles density functional theory, atomistic calculations, and continuum modeling, are utilized in a sequential fashion. This work has focused on identifying the essential information that bridges two different methods so that a successive use of different methods is seamless. The bond characteristics of low coordination systems have been obtained with first principles calculations and incorporated into the atomistic simulation. This also rectifies the deficiency of conventional interatomic potentials fitted to bulk properties, and improves the accuracy of atomistic calculations for nanoparticles. For the systematic shape selection of nanoparticles, we have improved the Wulff-type construction using a semi-continuum approach, in which atomistic surface energetics and the crystallinity of materials are added onto the continuum framework. The developed multiscale modeling scheme is applied to the rational design of platinum

  4. Neural Bases of Automaticity

    Science.gov (United States)

    Servant, Mathieu; Cassey, Peter; Woodman, Geoffrey F.; Logan, Gordon D.

    2018-01-01

    Automaticity allows us to perform tasks in a fast, efficient, and effortless manner after sufficient practice. Theories of automaticity propose that across practice processing transitions from being controlled by working memory to being controlled by long-term memory retrieval. Recent event-related potential (ERP) studies have sought to test this…

  5. Automatic control systems engineering

    International Nuclear Information System (INIS)

    Shin, Yun Gi

    2004-01-01

    This book describes automatic control for electrical and electronic systems, covering the history of automatic control, the Laplace transform, block diagrams and signal flow diagrams, electrometers, linearization of systems, the state space, state-space analysis of electrical systems, sensors, hydraulic control systems, stability, the time response of linear dynamic systems, the concept of the root locus, procedures for drawing the root locus, frequency response, and the design of control systems.

  6. Automatic Camera Control

    DEFF Research Database (Denmark)

    Burelli, Paolo; Preuss, Mike

    2014-01-01

    Automatically generating computer animations is a challenging and complex problem with applications in games and film production. In this paper, we investigate how to translate a shot list for a virtual scene into a series of virtual camera configurations, i.e., automatically controlling the virtual...

  7. Automatic differentiation of functions

    International Nuclear Information System (INIS)

    Douglas, S.R.

    1990-06-01

    Automatic differentiation is a method of computing derivatives of functions to any order in any number of variables. The functions must be expressible as combinations of elementary functions. When evaluated at specific numerical points, the derivatives have no truncation error and are automatically found. The method is illustrated by simple examples. Source code in FORTRAN is provided

  8. Multiscale Processes in Magnetic Reconnection

    Science.gov (United States)

    Surjalal Sharma, A.; Jain, Neeraj

    The characteristic scales of the plasma processes in magnetic reconnection range from the electron skin depth to the magnetohydrodynamic (MHD) scale, and cross-scale coupling among them plays a key role. Modeling these processes requires different physical models, viz. kinetic, electron-magnetohydrodynamics (EMHD), Hall-MHD, and MHD. The shortest-scale processes are at the electron scale and these are modeled using an EMHD code, which provides many features of the multiscale behavior. In simulations using initial conditions consisting of perturbations with many scale sizes, the reconnection takes place at many sites and the plasma flows from these interact with each other. This leads to thin current sheets with length less than 10 electron skin depths. The plasma flows also generate current sheets with multiple peaks, as observed by Cluster. The quadrupole structure of the magnetic field during reconnection starts on the electron scale, and the interaction of the inflow to the secondary sites and the outflow from the dominant site generates a nested structure. In the outflow regions, the interaction of the electron outflows generated at the neighboring sites leads to the development of electron vortices. A signature of the nested structure of the Hall field is seen in Cluster observations, and more details of these features are expected from MMS.

  9. Multiscale reconstruction for MR fingerprinting.

    Science.gov (United States)

    Pierre, Eric Y; Ma, Dan; Chen, Yong; Badve, Chaitra; Griswold, Mark A

    2016-06-01

    The aim was to reduce the acquisition time needed to obtain reliable parametric maps with Magnetic Resonance Fingerprinting (MRF). An iterative-denoising algorithm is initialized by reconstructing the MRF image series at low image resolution. For subsequent iterations, the method enforces pixel-wise fidelity to the best-matching dictionary template, then enforces fidelity to the acquired data at slightly higher spatial resolution. After convergence, parametric maps with the desired spatial resolution are obtained through template matching of the final image series. The proposed method was evaluated on phantom and in vivo data using a highly undersampled, variable-density spiral trajectory and compared with the original MRF method. The benefits of additional sparsity constraints were also evaluated. When available, gold standard parameter maps were used to quantify the performance of each method. The proposed approach allowed convergence to accurate parametric maps with as few as 300 time points of acquisition, as compared to 1000 in the original MRF work. Simultaneous quantification of T1, T2, proton density (PD), and B0 field variations in the brain was achieved in vivo for a 256 × 256 matrix for a total acquisition time of 10.2 s, representing a three-fold reduction in acquisition time. The proposed iterative multiscale reconstruction reliably increases MRF acquisition speed and accuracy. Magn Reson Med 75:2481-2492, 2016. © 2015 Wiley Periodicals, Inc.
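
    The template-matching step common to MRF reconstructions can be sketched in a few lines: each pixel's time course is matched to the dictionary entry with the largest normalized inner product, and that entry's parameters are reported. The dictionary, signals and (T1, T2) ranges below are synthetic placeholders rather than a Bloch-simulated MRF dictionary, and the iterative multiscale denoising of the paper is not reproduced.

      # Sketch of MRF-style dictionary (template) matching on synthetic data.
      import numpy as np

      def match_to_dictionary(signals, dictionary, params):
          """signals: (n_pixels, T); dictionary: (n_entries, T); params: (n_entries, 2)."""
          d_norm = dictionary / np.linalg.norm(dictionary, axis=1, keepdims=True)
          s_norm = signals / np.linalg.norm(signals, axis=1, keepdims=True)
          best = np.argmax(np.abs(s_norm @ d_norm.T), axis=1)   # best-matching template per pixel
          return params[best]

      rng = np.random.default_rng(1)
      dictionary = rng.standard_normal((500, 300))           # 500 fake fingerprints, 300 time points
      params = rng.uniform([300, 20], [2000, 200], (500, 2)) # fake (T1, T2) pair per entry
      signals = dictionary[rng.integers(0, 500, 64)] + 0.1 * rng.standard_normal((64, 300))
      t1_t2 = match_to_dictionary(signals, dictionary, params)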

  10. Intelligent Fault Diagnosis of Rotary Machinery Based on Unsupervised Multiscale Representation Learning

    Science.gov (United States)

    Jiang, Guo-Qian; Xie, Ping; Wang, Xiao; Chen, Meng; He, Qun

    2017-11-01

    The performance of traditional vibration-based fault diagnosis methods greatly depends on handcrafted features extracted using signal processing algorithms, which require significant amounts of domain knowledge and human labor and do not generalize well to new diagnosis domains. Recently, unsupervised representation learning has provided a promising alternative for feature extraction in traditional fault diagnosis, owing to its superior ability to learn from unlabeled data. Given that vibration signals usually contain multiple temporal structures, this paper proposes a multiscale representation learning (MSRL) framework to learn useful features directly from raw vibration signals, with the aim of capturing rich and complementary fault pattern information at different scales. In our proposed approach, a coarse-graining procedure is first employed to obtain multiple scale signals from an original vibration signal. Then, sparse filtering, a newly developed unsupervised learning algorithm, is applied to automatically learn useful features from each scale signal, and the learned features at each scale are concatenated to obtain multiscale representations. Finally, the multiscale representations are fed into a supervised classifier to produce diagnosis results. Our proposed approach is evaluated using two different case studies: motor bearing and wind turbine gearbox fault diagnosis. Experimental results show that the proposed MSRL approach can take full advantage of the availability of unlabeled data to learn discriminative features, and achieves better performance with higher accuracy and stability than the traditional approaches.
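
    The front end of the MSRL pipeline can be sketched as follows: the raw signal is coarse-grained by non-overlapping averaging at several scales, per-scale features are learned without labels, and the features are concatenated. In the sketch, PCA via SVD stands in for the sparse-filtering step used in the paper, and all sizes are illustrative.

      # Sketch of multiscale feature learning from a raw vibration signal.
      import numpy as np

      def coarse_grain(signal, scale):
          """Non-overlapping moving average: one point per window of length `scale`."""
          n = len(signal) // scale
          return signal[:n * scale].reshape(n, scale).mean(axis=1)

      def multiscale_features(signal, scales=(1, 2, 4, 8), n_components=4, frame=64):
          feats = []
          for s in scales:
              cg = coarse_grain(signal, s)
              n_frames = len(cg) // frame
              frames = cg[:n_frames * frame].reshape(n_frames, frame)
              frames = frames - frames.mean(axis=0)
              # PCA via SVD as a simple unsupervised learner (stand-in for sparse filtering)
              _, _, vt = np.linalg.svd(frames, full_matrices=False)
              feats.append(frames @ vt[:n_components].T)   # per-frame features at this scale
          n_min = min(f.shape[0] for f in feats)
          return np.hstack([f[:n_min] for f in feats])      # concatenated multiscale representation

      features = multiscale_features(np.random.randn(16384))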

  11. Improvement and Extension of Shape Evaluation Criteria in Multi-Scale Image Segmentation

    Science.gov (United States)

    Sakamoto, M.; Honda, Y.; Kondo, A.

    2016-06-01

    Over the last decade, multi-scale image segmentation has attracted particular interest and is being used in practice for object-based image analysis. In this study, we address issues in multi-scale image segmentation, in particular improving the validity of merging and the variety of derived region shapes. Firstly, we introduce constraints on the application of the spectral criterion that suppress excessive merging between dissimilar regions. Secondly, we extend the evaluation of the smoothness criterion by modifying the definition of the extent of the object, which controls the diversity of shapes. Thirdly, we develop a new shape criterion called aspect ratio. This criterion improves the reproducibility of object shapes matched to the actual objects of interest; it constrains the aspect ratio of the object's bounding box while keeping the properties controlled by conventional shape criteria. These improvements and extensions lead to more accurate, flexible, and diverse segmentation results according to the shape characteristics of the targets of interest. Furthermore, we also investigate a technique for quantitative and automatic parameterization in multi-scale image segmentation, achieved by comparing the segmentation result with a training area specified in advance, either maximizing the average area of the derived objects or optimizing the evaluation index called F-measure. This makes it possible to automate parameterization suited to the objectives, especially with regard to shape reproducibility.

  12. Development of porous structure simulator for multi-scale simulation of irregular porous catalysts

    International Nuclear Information System (INIS)

    Koyama, Michihisa; Suzuki, Ai; Sahnoun, Riadh; Tsuboi, Hideyuki; Hatakeyama, Nozomu; Endou, Akira; Takaba, Hiromitsu; Kubo, Momoji; Del Carpio, Carlos A.; Miyamoto, Akira

    2008-01-01

    Efficient development of highly functional porous materials, used as catalysts in the automobile industry, demands meticulous knowledge of the nano-scale interface at the electronic and atomistic scale. However, it is often difficult to correlate microscopic interfacial interactions with macroscopic characteristics of the materials; for instance, the interaction between a precious metal and its support oxide with the long-term sintering properties of the catalyst. Multi-scale computational chemistry approaches can contribute to bridging the gap between micro- and macroscopic characteristics of these materials; however, this type of multi-scale simulation has been difficult to apply, especially to porous materials. To overcome this problem, we have developed a novel mesoscopic approach based on a porous structure simulator. This simulator can automatically construct irregular porous structures on a computer, enabling simulations with complex meso-scale structures. Moreover, in this work we have developed a new method to simulate the long-term sintering properties of metal particles on porous catalysts. Finally, we have applied the method to the simulation of the sintering properties of Pt on an alumina support. This newly developed method has enabled us to propose a multi-scale simulation approach for porous catalysts.

  13. Multiscale modeling in biomechanics and mechanobiology

    CERN Document Server

    Hwang, Wonmuk; Kuhl, Ellen

    2015-01-01

    Presenting a state-of-the-art overview of theoretical and computational models that link characteristic biomechanical phenomena, this book provides guidelines and examples for creating multiscale models in representative systems and organisms. It develops the reader's understanding of and intuition for multiscale phenomena in biomechanics and mechanobiology, and introduces a mathematical framework and computational techniques paramount to creating predictive multiscale models.   Biomechanics involves the study of the interactions of physical forces with biological systems at all scales – including molecular, cellular, tissue and organ scales. The emerging field of mechanobiology focuses on the way that cells produce and respond to mechanical forces – bridging the science of mechanics with the disciplines of genetics and molecular biology. Linking disparate spatial and temporal scales using computational techniques is emerging as a key concept in investigating some of the complex problems underlying these...

  14. Deductive multiscale simulation using order parameters

    Science.gov (United States)

    Ortoleva, Peter J.

    2017-05-16

    Illustrative embodiments of systems and methods for the deductive multiscale simulation of macromolecules are disclosed. In one illustrative embodiment, a deductive multiscale simulation method may include (i) constructing a set of order parameters that model one or more structural characteristics of a macromolecule, (ii) simulating an ensemble of atomistic configurations for the macromolecule using instantaneous values of the set of order parameters, (iii) simulating thermal-average forces and diffusivities for the ensemble of atomistic configurations, and (iv) evolving the set of order parameters via Langevin dynamics using the thermal-average forces and diffusivities.

  15. Multiscale phase inversion of seismic marine data

    KAUST Repository

    Fu, Lei

    2017-08-17

    We test the feasibility of applying multiscale phase inversion (MPI) to seismic marine data. To avoid cycle-skipping, the multiscale strategy temporally integrates the traces several times, i.e. high-order integration, to produce low-boost seismograms that are used as input data for the initial iterations of MPI. As the iterations proceed, higher frequencies in the data are boosted by using integrated traces of lower order as the input data. Tests with synthetic data and field data from the Gulf of Mexico produce robust and accurate results, provided the model does not contain strong velocity contrasts such as salt-sediment interfaces.
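
    The "low-boost" preprocessing can be pictured as repeated temporal integration: each cumulative sum damps the high frequencies of a trace, so early inversion iterations see smooth data and later ones see progressively broader-band data. The sketch below shows only this integration schedule; the order sequence, scaling and demeaning are illustrative choices, not those of the published workflow.

      # Sketch of high-order temporal integration to produce "low-boost" traces.
      import numpy as np

      def low_boost(trace, dt, order):
          """Integrate the trace `order` times in time (cumulative sums scaled by dt)."""
          out = trace.astype(float)
          for _ in range(order):
              out = np.cumsum(out) * dt
              out -= out.mean()          # remove the mean to limit integration drift
          return out

      dt = 0.004                                      # 4 ms sampling, illustrative
      trace = np.random.randn(1000)
      stages = [low_boost(trace, dt, order) for order in (3, 2, 1, 0)]  # coarse-to-fine inputs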

  16. Multiscale Modeling of Ceramic Matrix Composites

    Science.gov (United States)

    Bednarcyk, Brett A.; Mital, Subodh K.; Pineda, Evan J.; Arnold, Steven M.

    2015-01-01

    Results of multiscale modeling simulations of the nonlinear response of SiC/SiC ceramic matrix composites are reported, wherein the microstructure of the ceramic matrix is captured. This microscale architecture, which contains free Si material as well as the SiC ceramic, is responsible for residual stresses that play an important role in the subsequent thermo-mechanical behavior of the SiC/SiC composite. Using the novel Multiscale Generalized Method of Cells recursive micromechanics theory, the microstructure of the matrix, as well as the microstructure of the composite (fiber and matrix), can be captured.

  17. Multiscale Computational Fluid Dynamics: Methodology and Application to PECVD of Thin Film Solar Cells

    Directory of Open Access Journals (Sweden)

    Marquis Crose

    2017-02-01

    Full Text Available This work focuses on the development of a multiscale computational fluid dynamics (CFD) simulation framework with application to plasma-enhanced chemical vapor deposition of thin film solar cells. A macroscopic CFD model is proposed which is capable of accurately reproducing plasma chemistry and transport phenomena within a 2D axisymmetric reactor geometry. Additionally, the complex interactions that take place on the surface of a-Si:H thin films are coupled with the CFD simulation using a novel kinetic Monte Carlo scheme which describes the thin film growth, leading to a multiscale CFD model. Due to the significant computational challenges imposed by this multiscale CFD model, a parallel computation strategy is presented which allows for reduced processing time via the discretization of both the gas-phase mesh and microscopic thin film growth processes. Finally, the multiscale CFD model has been applied to the PECVD process at industrially relevant operating conditions, revealing non-uniformities greater than 20% in the growth rate of amorphous silicon films across the radius of the wafer.

  18. A new automatic algorithm for quantification of myocardial infarction imaged by late gadolinium enhancement cardiovascular magnetic resonance: experimental validation and comparison to expert delineations in multi-center, multi-vendor patient data.

    Science.gov (United States)

    Engblom, Henrik; Tufvesson, Jane; Jablonowski, Robert; Carlsson, Marcus; Aletras, Anthony H; Hoffmann, Pavel; Jacquier, Alexis; Kober, Frank; Metzler, Bernhard; Erlinge, David; Atar, Dan; Arheden, Håkan; Heiberg, Einar

    2016-05-04

    Late gadolinium enhancement (LGE) cardiovascular magnetic resonance (CMR) using magnitude inversion recovery (IR) or phase sensitive inversion recovery (PSIR) has become the clinical standard for assessment of myocardial infarction (MI). However, there is no clinical standard for quantification of MI even though multiple methods have been proposed. Simple thresholds have yielded varying results and advanced algorithms have only been validated in single center studies. Therefore, the aim of this study was to develop an automatic algorithm for MI quantification in IR and PSIR LGE images and to validate the new algorithm experimentally and compare it to expert delineations in multi-center, multi-vendor patient data. The new automatic algorithm, EWA (Expectation Maximization, weighted intensity, a priori information), was implemented using an intensity threshold by Expectation Maximization (EM) and a weighted summation to account for partial volume effects. The EWA algorithm was validated in vivo against triphenyltetrazolium-chloride (TTC) staining (n = 7 pigs with paired IR and PSIR images) and against ex-vivo high resolution T1-weighted images (n = 23 IR and n = 13 PSIR images). The EWA algorithm was also compared to expert delineation in 124 patients from multi-center, multi-vendor clinical trials 2-6 days following first time ST-elevation myocardial infarction (STEMI) treated with percutaneous coronary intervention (PCI) (n = 124 IR and n = 49 PSIR images). Infarct size by the EWA algorithm in vivo in pigs showed a bias to ex-vivo TTC of -1 ± 4%LVM (R = 0.84) in IR and -2 ± 3%LVM (R = 0.92) in PSIR images and a bias to ex-vivo T1-weighted images of 0 ± 4%LVM (R = 0.94) in IR and 0 ± 5%LVM (R = 0.79) in PSIR images. In multi-center patient studies, infarct size by the EWA algorithm showed a bias to expert delineation of -2 ± 6 %LVM (R = 0.81) in IR images (n = 124) and 0 ± 5%LVM (R = 0.89) in
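
    The intensity-thresholding idea at the heart of EWA can be sketched with a two-component Gaussian mixture fitted by EM to myocardial pixel intensities, after which a threshold separates remote from hyperenhanced tissue. The sketch below is a heavily simplified stand-in: the partial-volume weighting, a priori terms and validated threshold placement of the published algorithm are omitted, and the data are synthetic.

      # Minimal EM fit of a two-component 1-D Gaussian mixture to pixel intensities.
      import numpy as np

      def em_two_gaussians(x, n_iter=100):
          mu = np.array([x.min(), x.max()], float)
          sigma = np.array([x.std(), x.std()], float) + 1e-6
          w = np.array([0.5, 0.5])
          for _ in range(n_iter):
              # E-step: responsibilities of the "remote" and "hyperenhanced" components
              pdf = w * np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
              resp = pdf / pdf.sum(axis=1, keepdims=True)
              # M-step: update weights, means and standard deviations
              nk = resp.sum(axis=0)
              w = nk / len(x)
              mu = (resp * x[:, None]).sum(axis=0) / nk
              sigma = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk) + 1e-6
          return w, mu, sigma

      rng = np.random.default_rng(2)
      intensities = np.concatenate([rng.normal(300, 40, 800), rng.normal(900, 80, 200)])
      w, mu, sigma = em_two_gaussians(intensities)
      threshold = mu.mean()                           # crude stand-in for a density crossing point
      infarct_fraction = (intensities > threshold).mean()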

  19. Multivariate Generalized Multiscale Entropy Analysis

    Directory of Open Access Journals (Sweden)

    Anne Humeau-Heurtier

    2016-11-01

    Full Text Available Multiscale entropy (MSE) was introduced in the 2000s to quantify systems' complexity. MSE relies on (i) a coarse-graining procedure to derive a set of time series representing the system dynamics on different time scales; (ii) the computation of the sample entropy for each coarse-grained time series. A refined composite MSE (rcMSE), based on the same steps as MSE, also exists. Compared to MSE, rcMSE increases the accuracy of entropy estimation and reduces the probability of inducing undefined entropy for short time series. The multivariate versions of MSE (MMSE) and rcMSE (MrcMSE) have also been introduced. In the coarse-graining step used in MSE, rcMSE, MMSE, and MrcMSE, the mean value is used to derive representations of the original data at different resolutions. A generalization of MSE was recently published, using the computation of different moments in the coarse-graining procedure. However, so far, this generalization only exists for univariate signals. We therefore herein propose an extension of this generalized MSE to multivariate data. The multivariate generalized algorithms of MMSE and MrcMSE presented herein (MGMSE and MGrcMSE, respectively) are first analyzed through the processing of synthetic signals. We reveal that MGrcMSE shows better performance than MGMSE for short multivariate data. We then study the performance of MGrcMSE on two sets of short multivariate electroencephalograms (EEG) available in the public domain. We report that MGrcMSE may show better performance than MrcMSE in distinguishing different types of multivariate EEG data. MGrcMSE could therefore supplement MMSE or MrcMSE in the processing of multivariate datasets.
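
    The two ingredients of (generalized) multiscale entropy are easy to state in code: a coarse-graining step, here using either the mean (classic MSE) or the variance (one of the generalized moments), followed by the sample entropy of each coarse-grained series. The sketch below is univariate only and uses a compact sample-entropy variant; the multivariate embedding of MMSE/MGrcMSE is not shown and all parameters are illustrative.

      # Sketch of (generalized) coarse-graining plus sample entropy for one channel.
      import numpy as np

      def coarse_grain(x, scale, moment="mean"):
          n = len(x) // scale
          windows = x[:n * scale].reshape(n, scale)
          return windows.mean(axis=1) if moment == "mean" else windows.var(axis=1)

      def sample_entropy(x, m=2, r_factor=0.15):
          """SampEn(m, r): -log of the ratio of (m+1)-matches to m-matches (compact variant)."""
          r = r_factor * np.std(x)

          def count_matches(dim):
              templates = np.array([x[i:i + dim] for i in range(len(x) - dim)])
              dists = np.max(np.abs(templates[:, None, :] - templates[None, :, :]), axis=2)
              return (dists <= r).sum() - len(templates)     # exclude self-matches

          a, b = count_matches(m + 1), count_matches(m)
          return -np.log(a / b) if a > 0 and b > 0 else np.inf

      signal = np.random.randn(1000)
      mse_curve = [sample_entropy(coarse_grain(signal, s, moment="mean")) for s in range(1, 6)]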

  20. Multiscale modeling of transdermal drug delivery

    Science.gov (United States)

    Rim, Jee Eun

    2006-04-01

    This study addresses the modeling of transdermal diffusion of drugs, to better understand the permeation of molecules through the skin, and especially the stratum corneum, which forms the main permeation barrier of the skin. In transdermal delivery of systemic drugs, the drugs diffuse from a patch placed on the skin through the epidermis to the underlying blood vessels. The epidermis is the outermost layer of the skin and can be further divided into the stratum corneum (SC) and the viable epidermis layers. The SC consists of keratinous cells (corneocytes) embedded in the lipid multi-bilayers of the intercellular space. It is widely accepted that the barrier properties of the skin mostly arises from the ordered structure of the lipid bilayers. The diffusion path, at least for lipophilic molecules, seems to be mainly through the lipid bilayers. Despite the advantages of transdermal drug delivery compared to other drug delivery routes such as oral dosing and injections, the low percutaneous permeability of most compounds is a major difficulty in the wide application of transdermal drug delivery. In fact, many transdermal drug formulations include one or more permeation enhancers that increase the permeation of the drug significantly. During the last two decades, many researchers have studied percutaneous absorption of drugs both experimentally and theoretically. However, many are based on pharmacokinetic compartmental models, in which steady or pseudo-steady state conditions are assumed, with constant diffusivity and partitioning for single component systems. This study presents a framework for studying the multi-component diffusion of drugs coupled with enhancers through the skin by considering the microstructure of the stratum corneum (SC). A multiscale framework of modeling the transdermal diffusion of molecules is presented, by first calculating the microscopic diffusion coefficient in the lipid bilayers of the SC using molecular dynamics (MD). Then a

  1. Enhanced

    Directory of Open Access Journals (Sweden)

    Martin I. Bayala

    2014-06-01

    Full Text Available Land Surface Temperature (LST) is a key parameter in the energy balance model. However, the spatial resolution of the retrieved LST from sensors with high temporal resolution is not accurate enough to be used in local-scale studies. To explore the LST–Normalised Difference Vegetation Index relationship potential and obtain thermal images with high spatial resolution, six enhanced image sharpening techniques were assessed: the disaggregation procedure for radiometric surface temperatures (TsHARP), the Dry Edge Quadratic Function, the Difference of Edges (Ts∗DL) and three models supported by the relationship of surface temperature and water stress of vegetation (Normalised Difference Water Index, Normalised Difference Infrared Index and Soil wetness index). Energy Balance Station data and in situ measurements were used to validate the enhanced LST images over a mixed agricultural landscape in the sub-humid Pampean Region of Argentina (PRA), during 2006–2010. Landsat Thematic Mapper (TM) and Moderate Resolution Imaging Spectroradiometer (EOS-MODIS) thermal datasets were assessed for different spatial resolutions (e.g., 960, 720 and 240 m) and the performances were compared with global and local TsHARP procedures. Results suggest that the Ts∗DL technique is the most adequate for simulating LST to high spatial resolution over the heterogeneous landscape of a sub-humid region, showing an average root mean square error of less than 1 K.

  2. An infrared small target detection method based on multiscale local homogeneity measure

    Science.gov (United States)

    Nie, Jinyan; Qu, Shaocheng; Wei, Yantao; Zhang, Liming; Deng, Lizhen

    2018-05-01

    Infrared (IR) small target detection plays an important role in the field of image detection owing to its intrinsic characteristics. This paper presents a multiscale local homogeneity measure (MLHM) for infrared small target detection, which can enhance the performance of an IR small target detection system. Firstly, the intra-patch homogeneity of the target itself and the inter-patch heterogeneity between the target and the local background regions are integrated to enhance the significance of the small target. Secondly, a multiscale measure based on local regions is proposed to obtain the most appropriate response. Finally, an adaptive threshold method is applied to small target segmentation. Experimental results on three different scenarios indicate that the MLHM achieves good performance under the interference of strong noise.
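
    A generic multiscale local-contrast map in the spirit of MLHM can be sketched by comparing, at each scale, the mean of a centre patch with the mean of its larger neighbourhood, maximising the response over scales and thresholding. This is not the exact MLHM formulation; the patch sizes, the 3x rule for the background window and the threshold below are illustrative.

      # Sketch of a multiscale local-contrast detector for small bright targets.
      import numpy as np
      from scipy.ndimage import uniform_filter

      def multiscale_local_contrast(image, scales=(3, 5, 9)):
          image = image.astype(float)
          response = np.zeros_like(image)
          for k in scales:
              center = uniform_filter(image, size=k)          # mean of the k x k centre patch
              background = uniform_filter(image, size=3 * k)  # mean of the larger neighbourhood
              response = np.maximum(response, center - background)
          return response

      frame = np.random.rand(128, 128)
      frame[60:63, 60:63] += 2.0                               # implanted "small target"
      resp = multiscale_local_contrast(frame)
      detections = resp > resp.mean() + 4 * resp.std()         # simple adaptive threshold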

  3. Thai Automatic Speech Recognition

    National Research Council Canada - National Science Library

    Suebvisai, Sinaporn; Charoenpornsawat, Paisarn; Black, Alan; Woszczyna, Monika; Schultz, Tanja

    2005-01-01

    .... We focus on the discussion of the rapid deployment of ASR for Thai under limited time and data resources, including rapid data collection issues, acoustic model bootstrap, and automatic generation of pronunciations...

  4. Automatic Payroll Deposit System.

    Science.gov (United States)

    Davidson, D. B.

    1979-01-01

    The Automatic Payroll Deposit System in Yakima, Washington's Public School District No. 7, directly transmits each employee's salary amount for each pay period to a bank or other financial institution. (Author/MLF)

  5. Automatic Test Systems Aquisition

    National Research Council Canada - National Science Library

    1994-01-01

    We are providing this final memorandum report for your information and use. This report discusses the efforts to achieve commonality in standards among the Military Departments as part of the DoD policy for automatic test systems (ATS...

  6. Brand and automaticity

    OpenAIRE

    Liu, J.

    2008-01-01

    A presumption of most consumer research is that consumers endeavor to maximize the utility of their choices and are in complete control of their purchasing and consumption behavior. However, everyday life experience suggests that many of our choices are not all that reasoned or conscious. Indeed, automaticity, one facet of behavior, is indispensable to complete the portrait of consumers. Despite its importance, little attention is paid to how the automatic side of behavior can be captured and...

  7. Position automatic determination technology

    International Nuclear Information System (INIS)

    1985-10-01

    This book describes methods of position determination and their characteristics, control methods for position determination and key design points, sensor selection for position detectors, position determination in digital control systems, the application of clutch brakes in high-frequency position determination, automation techniques for position determination, position determination by electromagnetic clutches and brakes, air cylinders, cams and solenoids, stop position control of automatic guided vehicles, stacker cranes and automatic transfer control.

  8. Automatic intelligent cruise control

    OpenAIRE

    Stanton, NA; Young, MS

    2006-01-01

    This paper reports a study on the evaluation of automatic intelligent cruise control (AICC) from a psychological perspective. It was anticipated that AICC would have an effect upon the psychology of driving: namely, it would make the driver feel that they have less control, reduce the level of trust in the vehicle, and make drivers less situationally aware, but it might also reduce workload and make driving less stressful. Drivers were asked to drive in a driving simulator under manual and automatic inte...

  9. A multiscale approach to mutual information matching

    NARCIS (Netherlands)

    Pluim, J.P.W.; Maintz, J.B.A.; Viergever, M.A.; Hanson, K.M.

    1998-01-01

    Methods based on mutual information have shown promising results for matching of multimodal brain images. This paper discusses a multiscale approach to mutual information matching, aiming for an acceleration of the matching process while considering the accuracy and robustness of the method. Scaling

  10. Multiscale Lyapunov exponent for 2-microlocal functions

    International Nuclear Information System (INIS)

    Dhifaoui, Zouhaier; Kortas, Hedi; Ammou, Samir Ben

    2009-01-01

    The Lyapunov exponent is an important indicator of chaotic dynamics. Using wavelet analysis, we define a multiscale representation of this exponent and demonstrate its scale-wise dependence for functions belonging to $C^{s,s'}_{x_0}$ spaces. An empirical study involving simulated processes and financial time series corroborates the theoretical findings.

  11. Multiscale phenomenology of the cosmic web

    NARCIS (Netherlands)

    Aragón-Calvo, Miguel A.; van de Weygaert, Rien; Jones, Bernard J. T.

    2010-01-01

    We analyse the structure and connectivity of the distinct morphologies that define the cosmic web. With the help of our multiscale morphology filter (MMF), we dissect the matter distribution of a cosmological Lambda cold dark matter N-body computer simulation into cluster, filaments and walls. The

  12. Multiscale Phase Inversion of Seismic Data

    KAUST Repository

    Fu, Lei; Guo, Bowen; Sun, Yonghe; Schuster, Gerard T.

    2017-01-01

    -skipping, the multiscale strategy temporally integrates the traces several times, i.e. high-order integration, to produce low-boost seismograms that are used as input data for the initial iterations of MPI. As the iterations proceed, higher frequencies in the data

  13. Multiscale Modeling of Poromechanics in Geologic Media

    Science.gov (United States)

    Castelletto, N.; Hajibeygi, H.; Klevtsov, S.; Tchelepi, H.

    2017-12-01

    We describe a hybrid MultiScale Finite Element-Finite Volume (h-MSFE-FV) framework for the simulation of single-phase Darcy flow through deformable porous media that exhibit highly heterogeneous poromechanical properties over a wide range of length scales. In such systems, high resolution characterizations are a key requirement to obtain reliable modeling predictions and motivate the development of multiscale solution strategies to cope with the computational burden. A coupled two-field fine-scale mixed FE-FV discretization of the governing equations, namely the conservation laws of linear momentum and mass, is first implemented based on a displacement-pressure formulation. After imposing a coarse-scale grid on the given fine-scale problem, for the MSFE displacement stage, the coarse-scale basis functions are obtained by solving local equilibrium problems within coarse elements. This MSFE stage is then coupled with the MSFV method for flow, in which a dual-coarse grid is introduced to obtain approximate but conservative multiscale solutions. Robustness and accuracy of the proposed multiscale framework are demonstrated using a variety of challenging test problems.

  14. Multiscale empirical interpolation for solving nonlinear PDEs

    KAUST Repository

    Calo, Victor M.; Efendiev, Yalchin R.; Galvis, Juan; Ghommem, Mehdi

    2014-01-01

    residuals and Jacobians on the fine grid. We use empirical interpolation concepts to evaluate these residuals and Jacobians of the multiscale system with a computational cost which is proportional to the size of the coarse-scale problem rather than the fully

  15. Multiscale information modelling for heart morphogenesis

    Energy Technology Data Exchange (ETDEWEB)

    Abdulla, T; Imms, R; Summers, R [Department of Electronic and Electrical Engineering, Loughborough University, Loughborough (United Kingdom); Schleich, J M, E-mail: T.Abdulla@lboro.ac.u [LTSI Signal and Image Processing Laboratory, University of Rennes 1, Rennes (France)

    2010-07-01

    Science is made feasible by the adoption of common systems of units. As research has become more data intensive, especially in the biomedical domain, it requires the adoption of a common system of information models, to make explicit the relationship between one set of data and another, regardless of format. This is being realised through the OBO Foundry to develop a suite of reference ontologies, and NCBO Bioportal to provide services to integrate biomedical resources and functionality to visualise and create mappings between ontology terms. Biomedical experts tend to be focused at one level of spatial scale, be it biochemistry, cell biology, or anatomy. Likewise, the ontologies they use tend to be focused at a particular level of scale. There is increasing interest in a multiscale systems approach, which attempts to integrate between different levels of scale to gain understanding of emergent effects. This is a return to physiological medicine with a computational emphasis, exemplified by the worldwide Physiome initiative, and the European Union funded Network of Excellence in the Virtual Physiological Human. However, little work has been done on how information modelling itself may be tailored to a multiscale systems approach. We demonstrate how this can be done for the complex process of heart morphogenesis, which requires multiscale understanding in both time and spatial domains. Such an effort enables the integration of multiscale metrology.

  16. Multiscale information modelling for heart morphogenesis

    International Nuclear Information System (INIS)

    Abdulla, T; Imms, R; Summers, R; Schleich, J M

    2010-01-01

    Science is made feasible by the adoption of common systems of units. As research has become more data intensive, especially in the biomedical domain, it requires the adoption of a common system of information models, to make explicit the relationship between one set of data and another, regardless of format. This is being realised through the OBO Foundry to develop a suite of reference ontologies, and NCBO Bioportal to provide services to integrate biomedical resources and functionality to visualise and create mappings between ontology terms. Biomedical experts tend to be focused at one level of spatial scale, be it biochemistry, cell biology, or anatomy. Likewise, the ontologies they use tend to be focused at a particular level of scale. There is increasing interest in a multiscale systems approach, which attempts to integrate between different levels of scale to gain understanding of emergent effects. This is a return to physiological medicine with a computational emphasis, exemplified by the worldwide Physiome initiative, and the European Union funded Network of Excellence in the Virtual Physiological Human. However, little work has been done on how information modelling itself may be tailored to a multiscale systems approach. We demonstrate how this can be done for the complex process of heart morphogenesis, which requires multiscale understanding in both time and spatial domains. Such an effort enables the integration of multiscale metrology.

  17. Multiscale approach to equilibrating model polymer melts

    DEFF Research Database (Denmark)

    Svaneborg, Carsten; Ali Karimi-Varzaneh, Hossein; Hojdis, Nils

    2016-01-01

    We present an effective and simple multiscale method for equilibrating Kremer Grest model polymer melts of varying stiffness. In our approach, we progressively equilibrate the melt structure above the tube scale, inside the tube and finally at the monomeric scale. We make use of models designed...

  18. Multiscale optimization of saturated poroelastic actuators

    DEFF Research Database (Denmark)

    Andreasen, Casper Schousboe; Sigmund, Ole

    A multiscale method for optimizing the material micro structure in a macroscopically heterogeneous saturated poroelastic media with respect to macro properties is presented. The method is based on topology optimization using the homogenization technique, here applied to the optimization of a bi...

  19. Generalized multiscale finite element methods: Oversampling strategies

    KAUST Repository

    Efendiev, Yalchin R.; Galvis, Juan; Li, Guanglian; Presho, Michael

    2014-01-01

    In this paper, we propose oversampling strategies in the generalized multiscale finite element method (GMsFEM) framework. The GMsFEM, which has been recently introduced in Efendiev et al. (2013b) [Generalized Multiscale Finite Element Methods, J. Comput. Phys., vol. 251, pp. 116-135, 2013], allows solving multiscale parameter-dependent problems at a reduced computational cost by constructing a reduced-order representation of the solution on a coarse grid. The main idea of the method consists of (1) the construction of the snapshot space, (2) the construction of the offline space, and (3) the construction of the online space (the latter for parameter-dependent problems). In Efendiev et al. (2013b) [Generalized Multiscale Finite Element Methods, J. Comput. Phys., vol. 251, pp. 116-135, 2013], it was shown that the GMsFEM provides a flexible tool to solve multiscale problems with a complex input space by generating appropriate snapshot, offline, and online spaces. In this paper, we develop oversampling techniques to be used in this context (see Hou and Wu (1997), where oversampling is introduced for multiscale finite element methods). It is known (see Hou and Wu (1997)) that oversampling can improve the accuracy of multiscale methods. In particular, the oversampling technique uses larger regions (larger than the target coarse block) in constructing local basis functions. Our motivation stems from the analysis presented in this paper, which shows that when using oversampling techniques in the construction of the snapshot space and offline space, GMsFEM will converge independently of small scales and high contrast under certain assumptions. We consider the use of multiple eigenvalue problems to improve the convergence and discuss their relation to single spectral problems that use oversampled regions. The oversampling procedures proposed in this paper differ from those in Hou and Wu (1997). In particular, the oversampling domains are partially used in constructing local

  20. Pricing perpetual American options under multiscale stochastic elasticity of variance

    International Nuclear Information System (INIS)

    Yoon, Ji-Hun

    2015-01-01

    Highlights: • We study the effects of the stochastic elasticity of variance on perpetual American options. • Our SEV model consists of a fast mean-reverting factor and a slow mean-reverting factor. • A slow scale factor has a very significant impact on the option price. • We analyze option price structures through the market prices of elasticity risk. - Abstract: This paper studies the pricing of perpetual American options under a constant elasticity of variance type of underlying asset price model where the constant elasticity is replaced by a fast mean-reverting Ornstein–Uhlenbeck process and a slowly varying diffusion process. By using a multiscale asymptotic analysis, we find the impact of the stochastic elasticity of variance on the option prices and the optimal exercise prices with respect to model parameters. Our results enhance the existing option price structures in view of flexibility and applicability through the market prices of elasticity risk.

  1. Multiscale Permutation Entropy Based Rolling Bearing Fault Diagnosis

    Directory of Open Access Journals (Sweden)

    Jinde Zheng

    2014-01-01

    A new rolling bearing fault diagnosis approach based on multiscale permutation entropy (MPE), Laplacian score (LS), and support vector machines (SVMs) is proposed in this paper. Permutation entropy (PE) was recently proposed and defined to measure the randomicity and detect dynamical changes of time series. However, for the complexity of mechanical systems, the randomicity and dynamic changes of the vibration signal will exist in different scales. Thus, the definition of MPE is introduced and employed to extract the nonlinear fault characteristics from the bearing vibration signal in different scales. Besides, the SVM is utilized to accomplish the fault feature classification to fulfill diagnostic procedure automatically. Meanwhile, in order to avoid a high dimension of features, the Laplacian score (LS) is used to refine the feature vector by ranking the features according to their importance and correlations with the main fault information. Finally, the rolling bearing fault diagnosis method based on MPE, LS, and SVM is proposed and applied to the experimental data. The experimental data analysis results indicate that the proposed method could identify the fault categories effectively.
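
    The feature-extraction step above can be sketched compactly. The code below is a simplified stand-in, not the authors' implementation: it computes multiscale permutation entropy by coarse-graining the signal and feeds the resulting feature vectors to an SVM; the Laplacian-score selection step is omitted. It assumes NumPy and scikit-learn, and names such as `mpe_features` are illustrative.

    ```python
    import math
    from itertools import permutations

    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import SVC

    def permutation_entropy(x, m=3, delay=1):
        """Normalized permutation entropy of a 1-D signal."""
        n = len(x) - (m - 1) * delay
        counts = {p: 0 for p in permutations(range(m))}
        for i in range(n):
            counts[tuple(np.argsort(x[i:i + m * delay:delay]))] += 1
        p = np.array([c for c in counts.values() if c > 0], dtype=float) / n
        return -np.sum(p * np.log(p)) / math.log(math.factorial(m))

    def mpe_features(x, m=3, max_scale=5):
        """Permutation entropy of coarse-grained copies of x (scales 1..max_scale)."""
        feats = []
        for tau in range(1, max_scale + 1):
            g = x[: (len(x) // tau) * tau].reshape(-1, tau).mean(axis=1)  # coarse-graining
            feats.append(permutation_entropy(g, m))
        return np.array(feats)

    # Toy demo: separate broadband noise from a noisy periodic "fault" signature.
    rng = np.random.default_rng(0)
    t = np.arange(1024)
    X = [mpe_features(rng.normal(size=1024)) for _ in range(30)] + \
        [mpe_features(np.sin(0.3 * t) + 0.5 * rng.normal(size=1024)) for _ in range(30)]
    y = [0] * 30 + [1] * 30
    print("10-fold CV accuracy:", cross_val_score(SVC(), np.array(X), np.array(y), cv=10).mean())
    ```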

  2. Fighter/Attack Automatic Collision Avoidance Systems Business Case

    National Research Council Canada - National Science Library

    Mapes, Peter B

    2006-01-01

    .... This study concludes that implementation of Automatic Collision Avoidance Systems (Auto-CAS) in F-16, F/A-18, F/A-22, and F-35 aircraft would save aircrew lives and preserve and enhance combat capability.

  3. The automatic component of habit in health behavior: habit as cue-contingent automaticity.

    Science.gov (United States)

    Orbell, Sheina; Verplanken, Bas

    2010-07-01

    Habit might be usefully characterized as a form of automaticity that involves the association of a cue and a response. Three studies examined habitual automaticity in regard to different aspects of the cue-response relationship characteristic of unhealthy and healthy habits. In each study, habitual automaticity was assessed by the Self-Report Habit Index (SRHI). In Study 1 SRHI scores correlated with attentional bias to smoking cues in a Stroop task. Study 2 examined the ability of a habit cue to elicit an unwanted habit response. In a prospective field study, habitual automaticity in relation to smoking when drinking alcohol in a licensed public house (pub) predicted the likelihood of cigarette-related action slips 2 months later after smoking in pubs had become illegal. In Study 3 experimental group participants formed an implementation intention to floss in response to a specified situational cue. Habitual automaticity of dental flossing was rapidly enhanced compared to controls. The studies provided three different demonstrations of the importance of cues in the automatic operation of habits. Habitual automaticity assessed by the SRHI captured aspects of a habit that go beyond mere frequency or consistency of the behavior. PsycINFO Database Record (c) 2010 APA, all rights reserved.

  4. Toward comprehensive detection of sight threatening retinal disease using a multiscale AM-FM methodology

    Science.gov (United States)

    Agurto, C.; Barriga, S.; Murray, V.; Murillo, S.; Zamora, G.; Bauman, W.; Pattichis, M.; Soliz, P.

    2011-03-01

    In the United States and most of the western world, the leading causes of vision impairment and blindness are age-related macular degeneration (AMD), diabetic retinopathy (DR), and glaucoma. In the last decade, research in automatic detection of retinal lesions associated with eye diseases has produced several automatic systems for detection and screening of AMD, DR, and glaucoma. However, advanced, sight-threatening stages of DR and AMD can present with lesions not commonly addressed by current approaches to automatic screening. In this paper we present an automatic eye screening system based on multiscale Amplitude Modulation-Frequency Modulation (AM-FM) decompositions that addresses not only the early stages, but also advanced stages of retinal and optic nerve disease. Ten different experiments were performed in which abnormal features such as neovascularization, drusen, exudates, pigmentation abnormalities, geographic atrophy (GA), and glaucoma were classified. The algorithm achieved detection accuracies ranging from 0.77 to 0.98 in terms of area under the ROC curve for a set of 810 images. When set to a specificity value of 0.60, the sensitivity of the algorithm to the detection of abnormal features ranged between 0.88 and 1.00. Our system demonstrates that, given an appropriate training set, it is possible to use a unique algorithm to detect a broad range of eye diseases.

  5. The adaptive value of habitat preferences from a multi-scale spatial perspective: insights from marsh-nesting avian species

    Directory of Open Access Journals (Sweden)

    Jan Jedlikowski

    2017-03-01

    Background: Habitat selection and its adaptive outcomes are crucial features for animal life-history strategies. Nevertheless, congruence between habitat preferences and breeding success has been rarely demonstrated, which may result from the single-scale evaluation of animal choices. As habitat selection is a complex multi-scale process in many groups of animal species, investigating adaptiveness of habitat selection in a multi-scale framework is crucial. In this study, we explore whether habitat preferences acting at different spatial scales enhance the fitness of bird species, and check the appropriateness of single vs. multi-scale models. We expected that variables found to be more important for habitat selection at individual scale(s), would coherently play a major role in affecting nest survival at the same scale(s). Methods: We considered habitat preferences of two Rallidae species, little crake (Zapornia parva) and water rail (Rallus aquaticus), at three spatial scales (landscape, territory, and nest-site) and related them to nest survival. Single-scale versus multi-scale models (GLS and glmmPQL) were compared to check which model better described adaptiveness of habitat preferences. Consistency between the effect of variables on habitat selection and on nest survival was checked to investigate their adaptive value. Results: In both species, multi-scale models for nest survival were more supported than single-scale ones. In little crake, the multi-scale model indicated vegetation density and water depth at the territory scale, as well as vegetation height at nest-site scale, as the most important variables. The first two variables were among the most important for nest survival and habitat selection, and the coherent effects suggested the adaptive value of habitat preferences. In water rail, the multi-scale model of nest survival showed vegetation density at territory scale and extent of emergent vegetation within landscape scale as the most

  6. Automatic Program Development

    DEFF Research Database (Denmark)

    Automatic Program Development is a tribute to Robert Paige (1947-1999), our accomplished and respected colleague, and moreover our good friend, whose untimely passing was a loss to our academic and research community. We have collected the revised, updated versions of the papers published in his...... honor in the Higher-Order and Symbolic Computation Journal in the years 2003 and 2005. Among them there are two papers by Bob: (i) a retrospective view of his research lines, and (ii) a proposal for future studies in the area of automatic program derivation. The book also includes some papers...... by members of the IFIP Working Group 2.1, of which Bob was an active member. All papers are related to some of the research interests of Bob and, in particular, to the transformational development of programs and their algorithmic derivation from formal specifications. Automatic Program Development offers...

  7. Semi-Automatic Science Workflow Synthesis for High-End Computing on the NASA Earth Exchange

    Data.gov (United States)

    National Aeronautics and Space Administration — Enhance capabilities for collaborative data analysis and modeling in Earth sciences. Develop components for automatic workflow capture, archiving and management....

  8. Automatic text summarization

    CERN Document Server

    Torres Moreno, Juan Manuel

    2014-01-01

    This new textbook examines the motivations and the different algorithms for automatic document summarization (ADS). We present a recent state of the art. The book shows the main problems of ADS, difficulties and the solutions provided by the community. It presents recent advances in ADS, as well as current applications and trends. The approaches are statistical, linguistic and symbolic. Several examples are included in order to clarify the theoretical concepts. The books currently available in the area of Automatic Document Summarization are not recent. Powerful algorithms have been develop

  9. Automatic Ultrasound Scanning

    DEFF Research Database (Denmark)

    Moshavegh, Ramin

    on the user adjustments on the scanner interface to optimize the scan settings. This explains the huge interest in the subject of this PhD project entitled “AUTOMATIC ULTRASOUND SCANNING”. The key goals of the project have been to develop automated techniques to minimize the unnecessary settings...... on the scanners, and to improve the computer-aided diagnosis (CAD) in ultrasound by introducing new quantitative measures. Thus, four major issues concerning automation of the medical ultrasound are addressed in this PhD project. They touch upon gain adjustments in ultrasound, automatic synthetic aperture image...

  10. Automatic NAA. Saturation activities

    International Nuclear Information System (INIS)

    Westphal, G.P.; Grass, F.; Kuhnert, M.

    2008-01-01

    A system for Automatic NAA is based on a list of specific saturation activities determined for one irradiation position at a given neutron flux and a single detector geometry. Originally compiled from measurements of standard reference materials, the list may be extended also by the calculation of saturation activities from k0 and Q0 factors, and f and α values of the irradiation position. A systematic improvement of the SRM approach is currently being performed by pseudo-cyclic activation analysis, to reduce counting errors. From these measurements, the list of saturation activities is recalculated in an automatic procedure. (author)
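
    As a rough illustration of how a single entry in such a list can be used, the sketch below converts a measured gamma-peak area into a specific saturation activity using the standard saturation, decay, and counting-loss factors. This is a textbook-style simplification, not the authors' system; all parameter names are illustrative, and detector- and nuclide-specific constants are folded into `efficiency` and `gamma_yield`.

    ```python
    import math

    def saturation_specific_activity(peak_counts, t_irr, t_decay, t_count,
                                     half_life, mass_g,
                                     efficiency=1.0, gamma_yield=1.0):
        """Specific saturation activity (counts/s/g) from a measured gamma peak,
        using the usual saturation (S), decay (D) and counting (C) factors."""
        lam = math.log(2.0) / half_life
        S = 1.0 - math.exp(-lam * t_irr)                         # build-up during irradiation
        D = math.exp(-lam * t_decay)                             # decay before counting
        C = (1.0 - math.exp(-lam * t_count)) / (lam * t_count)   # decay during counting
        count_rate = peak_counts / t_count
        return count_rate / (S * D * C * mass_g * efficiency * gamma_yield)

    # Example with invented numbers: 10 min irradiation, 5 min decay, 10 min count,
    # a nuclide with a 15 min half-life, and a 0.1 g sample.
    print(saturation_specific_activity(peak_counts=5.0e4, t_irr=600, t_decay=300,
                                       t_count=600, half_life=900, mass_g=0.1))
    ```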

  11. A mathematical framework for multiscale science and engineering: the variational multiscale method and interscale transfer operators

    International Nuclear Information System (INIS)

    Shadid, John Nicolas; Lehoucq, Richard B.; Christon, Mark Allen; Slepoy, Alexander; Bochev, Pavel Blagoveston; Collis, Samuel Scott; Wagner, Gregory John

    2004-01-01

    Existing approaches in multiscale science and engineering have evolved from a range of ideas and solutions that are reflective of their original problem domains. As a result, research in multiscale science has followed widely diverse and disjoint paths, which presents a barrier to cross pollination of ideas and application of methods outside their application domains. The status of the research environment calls for an abstract mathematical framework that can provide a common language to formulate and analyze multiscale problems across a range of scientific and engineering disciplines. In such a framework, critical common issues arising in multiscale problems can be identified, explored and characterized in an abstract setting. This type of overarching approach would allow categorization and clarification of existing models and approximations in a landscape of seemingly disjoint, mutually exclusive and ad hoc methods. More importantly, such an approach can provide context for both the development of new techniques and their critical examination. As with any new mathematical framework, it is necessary to demonstrate its viability on problems of practical importance. At Sandia, lab-centric, prototype application problems in fluid mechanics, reacting flows, magnetohydrodynamics (MHD), shock hydrodynamics and materials science span an important subset of DOE Office of Science applications and form an ideal proving ground for new approaches in multiscale science.

  12. Multiscale Simulations for Coupled Flow and Transport Using the Generalized Multiscale Finite Element Method

    KAUST Repository

    Chung, Eric

    2015-12-11

    In this paper, we develop a mass conservative multiscale method for coupled flow and transport in heterogeneous porous media. We consider a coupled system consisting of a convection-dominated transport equation and a flow equation. We construct a coarse grid solver based on the Generalized Multiscale Finite Element Method (GMsFEM) for a coupled system. In particular, multiscale basis functions are constructed based on some snapshot spaces for the pressure and the concentration equations and some local spectral decompositions in the snapshot spaces. The resulting approach uses a few multiscale basis functions in each coarse block (for both the pressure and the concentration) to solve the coupled system. We use the mixed framework, which allows mass conservation. Our main contributions are: (1) the development of a mass conservative GMsFEM for the coupled flow and transport; (2) the development of a robust multiscale method for convection-dominated transport problems by choosing appropriate test and trial spaces within Petrov-Galerkin mixed formulation. We present numerical results and consider several heterogeneous permeability fields. Our numerical results show that with only a few basis functions per coarse block, we can achieve a good approximation.

  13. Multiscale Simulations for Coupled Flow and Transport Using the Generalized Multiscale Finite Element Method

    KAUST Repository

    Chung, Eric; Efendiev, Yalchin R.; Leung, Wing; Ren, Jun

    2015-01-01

    In this paper, we develop a mass conservative multiscale method for coupled flow and transport in heterogeneous porous media. We consider a coupled system consisting of a convection-dominated transport equation and a flow equation. We construct a coarse grid solver based on the Generalized Multiscale Finite Element Method (GMsFEM) for a coupled system. In particular, multiscale basis functions are constructed based on some snapshot spaces for the pressure and the concentration equations and some local spectral decompositions in the snapshot spaces. The resulting approach uses a few multiscale basis functions in each coarse block (for both the pressure and the concentration) to solve the coupled system. We use the mixed framework, which allows mass conservation. Our main contributions are: (1) the development of a mass conservative GMsFEM for the coupled flow and transport; (2) the development of a robust multiscale method for convection-dominated transport problems by choosing appropriate test and trial spaces within Petrov-Galerkin mixed formulation. We present numerical results and consider several heterogeneous permeability fields. Our numerical results show that with only a few basis functions per coarse block, we can achieve a good approximation.

  14. Multiscale image contrast amplification (MUSICA)

    Science.gov (United States)

    Vuylsteke, Pieter; Schoeters, Emile P.

    1994-05-01

    This article presents a novel approach to the problem of detail contrast enhancement, based on multiresolution representation of the original image. The image is decomposed into a weighted sum of smooth, localized, 2D basis functions at multiple scales. Each transform coefficient represents the amount of local detail at some specific scale and at a specific position in the image. Detail contrast is enhanced by non-linear amplification of the transform coefficients. An inverse transform is then applied to the modified coefficients. This yields a uniformly contrast-enhanced image without artefacts. The MUSICA-algorithm is being applied routinely to computed radiography images of chest, skull, spine, shoulder, pelvis, extremities, and abdomen examinations, with excellent acceptance. It is useful for a wide range of applications in the medical, graphical, and industrial area.
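
    The decompose / amplify / reconstruct idea can be illustrated in a few lines. The sketch below is in the spirit of MUSICA but is not the published algorithm: it builds a simple Gaussian band-pass stack, applies a non-linear gain |d|^p to each detail band, and re-sums. It assumes NumPy/SciPy, and `gain` and `p` are illustrative parameters.

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def enhance_contrast(img, levels=4, gain=2.0, p=0.7):
        """Multiscale contrast amplification sketch: split the image into detail
        bands, amplify small details more than large ones (p < 1), and re-sum."""
        img = img.astype(float)
        bands, smooth = [], img
        for _ in range(levels):
            blurred = gaussian_filter(smooth, sigma=2.0)
            bands.append(smooth - blurred)      # detail band at this scale
            smooth = blurred                    # residual low-pass image
        out = smooth.copy()
        for d in bands:
            amp = np.max(np.abs(d)) + 1e-12
            out += gain * amp * np.sign(d) * (np.abs(d) / amp) ** p
        return out

    if __name__ == "__main__":
        rng = np.random.default_rng(1)
        test = gaussian_filter(rng.normal(size=(256, 256)), 3)
        print("std before/after:", test.std(), enhance_contrast(test).std())
    ```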

  15. International Conference on Multiscale Methods and Partial Differential Equations.

    Energy Technology Data Exchange (ETDEWEB)

    Thomas Hou

    2006-12-12

    The International Conference on Multiscale Methods and Partial Differential Equations (ICMMPDE for short) was held at IPAM, UCLA on August 26-27, 2005. The conference brought together researchers, students and practitioners with interest in the theoretical, computational and practical aspects of multiscale problems and related partial differential equations. The conference provided a forum to exchange and stimulate new ideas from different disciplines, and to formulate new challenging multiscale problems that will have impact in applications.

  16. Residual-driven online generalized multiscale finite element methods

    KAUST Repository

    Chung, Eric T.; Efendiev, Yalchin R.; Leung, Wing Tat

    2015-01-01

    In the paper, theoretical and numerical results are presented. Our numerical results show that if the offline space is sufficiently large (in terms of the dimension) such that the coarse space contains all multiscale spectral basis functions that correspond to small eigenvalues, then the error reduction by adding online multiscale basis function is independent of the contrast. We discuss various ways computing online multiscale basis functions which include a use of small dimensional offline spaces.

  17. A multi-scale correlative investigation of ductile fracture

    International Nuclear Information System (INIS)

    Daly, M.; Burnett, T.L.; Pickering, E.J.; Tuck, O.C.G.; Léonard, F.; Kelley, R.; Withers, P.J.; Sherry, A.H.

    2017-01-01

    The use of novel multi-scale correlative methods, which involve the coordinated characterisation of matter across a range of length scales, is becoming of increasing value to materials scientists. Here, we describe for the first time how a multi-scale correlative approach can be used to investigate the nature of ductile fracture in metals. Specimens of a nuclear pressure vessel steel, SA508 Grade 3, are examined following ductile fracture using medium and high-resolution 3D X-ray computed tomography (CT) analyses, and a site-specific analysis using a dual beam plasma focused ion beam scanning electron microscope (PFIB-SEM). The methods are employed sequentially to characterise damage by void nucleation and growth in one volume of interest, allowing for the imaging of voids that ranged in size from less than 100 nm to over 100 μm. This enables voids initiated at carbide particles to be detected and examined, as well as the large voids initiated at inclusions. We demonstrate that this multi-scale correlative approach is a powerful tool, which not only enhances our understanding of ductile failure through detailed characterisation of microstructure, but also provides quantitative information about the size, volume fractions and spatial distributions of voids that can be used to inform models of failure. It is found that the vast majority of large voids nucleated at MnS inclusions, and that the volume of a void varied according to the volume of its initiating inclusion raised to the power 3/2. The most severe voiding was concentrated within 500 μm of the fracture surface, but measurable damage was found to extend to a depth of at least 3 mm. Microvoids associated with carbides (carbide-initiated voids) were found to be concentrated around larger inclusion-initiated voids at depths of at least 400 μm. Methods for quantifying X-ray CT void data are discussed, and a procedure for using this data to calibrate parameters in the Gurson-Tvergaard Needleman (GTN

  18. A bidirectional coupling procedure applied to multiscale respiratory modeling

    Energy Technology Data Exchange (ETDEWEB)

    Kuprat, A.P., E-mail: andrew.kuprat@pnnl.gov [Fundamental and Computational Sciences Directorate, Pacific Northwest National Laboratory, Richland, WA (United States); Kabilan, S., E-mail: senthil.kabilan@pnnl.gov [Fundamental and Computational Sciences Directorate, Pacific Northwest National Laboratory, Richland, WA (United States); Carson, J.P., E-mail: james.carson@pnnl.gov [Fundamental and Computational Sciences Directorate, Pacific Northwest National Laboratory, Richland, WA (United States); Corley, R.A., E-mail: rick.corley@pnnl.gov [Fundamental and Computational Sciences Directorate, Pacific Northwest National Laboratory, Richland, WA (United States); Einstein, D.R., E-mail: daniel.einstein@pnnl.gov [Fundamental and Computational Sciences Directorate, Pacific Northwest National Laboratory, Richland, WA (United States)

    2013-07-01

    In this study, we present a novel multiscale computational framework for efficiently linking multiple lower-dimensional models describing the distal lung mechanics to imaging-based 3D computational fluid dynamics (CFDs) models of the upper pulmonary airways in order to incorporate physiologically appropriate outlet boundary conditions. The framework is an extension of the modified Newton’s method with nonlinear Krylov accelerator developed by Carlson and Miller [1], Miller [2] and Scott and Fenves [3]. Our extensions include the retention of subspace information over multiple timesteps, and a special correction at the end of a timestep that allows for corrections to be accepted with verified low residual with as little as a single residual evaluation per timestep on average. In the case of a single residual evaluation per timestep, the method has zero additional computational cost compared to uncoupled or unidirectionally coupled simulations. We expect these enhancements to be generally applicable to other multiscale coupling applications where timestepping occurs. In addition we have developed a “pressure-drop” residual which allows for stable coupling of flows between a 3D incompressible CFD application and another (lower-dimensional) fluid system. We expect this residual to also be useful for coupling non-respiratory incompressible fluid applications, such as multiscale simulations involving blood flow. The lower-dimensional models that are considered in this study are sets of simple ordinary differential equations (ODEs) representing the compliant mechanics of symmetric human pulmonary airway trees. To validate the method, we compare the predictions of hybrid CFD–ODE models against an ODE-only model of pulmonary airflow in an idealized geometry. Subsequently, we couple multiple sets of ODEs describing the distal lung to an imaging-based human lung geometry. Boundary conditions in these models consist of atmospheric pressure at the mouth and intrapleural
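
    The coupling idea (a shared outlet where a flow model and a lower-dimensional compliance model must agree) can be illustrated with a 0-D stand-in. The sketch below is not the Newton method with nonlinear Krylov acceleration used in the paper; it simply solves a pressure-drop/flow mismatch residual at each time step with a bracketing root finder, and all constants are arbitrary illustrative values.

    ```python
    import numpy as np
    from scipy.optimize import brentq

    # 0-D stand-in: a proximal airway modelled as a linear resistance (the "CFD"
    # side) and a distal compartment modelled as a compliance (the "ODE" side),
    # matched at the shared outlet through a flow residual.
    R, C = 0.5, 2.0                  # airway resistance, distal compliance (arbitrary units)
    p_mouth = 0.0                    # atmospheric pressure at the mouth
    dt, n_steps = 0.01, 200
    p_pleural = lambda t: -2.0 - 1.5 * np.sin(2 * np.pi * t)   # driving (intrapleural) pressure

    def mismatch(p_out, V, t):
        q_cfd = (p_mouth - p_out) / R                   # flow predicted by the airway model
        q_ode = (C * (p_out - p_pleural(t)) - V) / dt   # flow required by the compliance model
        return q_cfd - q_ode                            # "pressure-drop" style coupling residual

    V = 4.0                           # distal volume
    for k in range(n_steps):
        t = k * dt
        p_out = brentq(lambda p: mismatch(p, V, t), -50.0, 50.0)   # enforce agreement at outlet
        V += dt * (p_mouth - p_out) / R                            # advance the distal volume
    print("final distal volume:", round(V, 3))
    ```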

  19. A bidirectional coupling procedure applied to multiscale respiratory modeling

    Science.gov (United States)

    Kuprat, A. P.; Kabilan, S.; Carson, J. P.; Corley, R. A.; Einstein, D. R.

    2013-07-01

    In this study, we present a novel multiscale computational framework for efficiently linking multiple lower-dimensional models describing the distal lung mechanics to imaging-based 3D computational fluid dynamics (CFDs) models of the upper pulmonary airways in order to incorporate physiologically appropriate outlet boundary conditions. The framework is an extension of the modified Newton's method with nonlinear Krylov accelerator developed by Carlson and Miller [1], Miller [2] and Scott and Fenves [3]. Our extensions include the retention of subspace information over multiple timesteps, and a special correction at the end of a timestep that allows for corrections to be accepted with verified low residual with as little as a single residual evaluation per timestep on average. In the case of a single residual evaluation per timestep, the method has zero additional computational cost compared to uncoupled or unidirectionally coupled simulations. We expect these enhancements to be generally applicable to other multiscale coupling applications where timestepping occurs. In addition we have developed a "pressure-drop" residual which allows for stable coupling of flows between a 3D incompressible CFD application and another (lower-dimensional) fluid system. We expect this residual to also be useful for coupling non-respiratory incompressible fluid applications, such as multiscale simulations involving blood flow. The lower-dimensional models that are considered in this study are sets of simple ordinary differential equations (ODEs) representing the compliant mechanics of symmetric human pulmonary airway trees. To validate the method, we compare the predictions of hybrid CFD-ODE models against an ODE-only model of pulmonary airflow in an idealized geometry. Subsequently, we couple multiple sets of ODEs describing the distal lung to an imaging-based human lung geometry. Boundary conditions in these models consist of atmospheric pressure at the mouth and intrapleural

  20. A bidirectional coupling procedure applied to multiscale respiratory modeling

    International Nuclear Information System (INIS)

    Kuprat, A.P.; Kabilan, S.; Carson, J.P.; Corley, R.A.; Einstein, D.R.

    2013-01-01

    In this study, we present a novel multiscale computational framework for efficiently linking multiple lower-dimensional models describing the distal lung mechanics to imaging-based 3D computational fluid dynamics (CFDs) models of the upper pulmonary airways in order to incorporate physiologically appropriate outlet boundary conditions. The framework is an extension of the modified Newton’s method with nonlinear Krylov accelerator developed by Carlson and Miller [1], Miller [2] and Scott and Fenves [3]. Our extensions include the retention of subspace information over multiple timesteps, and a special correction at the end of a timestep that allows for corrections to be accepted with verified low residual with as little as a single residual evaluation per timestep on average. In the case of a single residual evaluation per timestep, the method has zero additional computational cost compared to uncoupled or unidirectionally coupled simulations. We expect these enhancements to be generally applicable to other multiscale coupling applications where timestepping occurs. In addition we have developed a “pressure-drop” residual which allows for stable coupling of flows between a 3D incompressible CFD application and another (lower-dimensional) fluid system. We expect this residual to also be useful for coupling non-respiratory incompressible fluid applications, such as multiscale simulations involving blood flow. The lower-dimensional models that are considered in this study are sets of simple ordinary differential equations (ODEs) representing the compliant mechanics of symmetric human pulmonary airway trees. To validate the method, we compare the predictions of hybrid CFD–ODE models against an ODE-only model of pulmonary airflow in an idealized geometry. Subsequently, we couple multiple sets of ODEs describing the distal lung to an imaging-based human lung geometry. Boundary conditions in these models consist of atmospheric pressure at the mouth and intrapleural

  1. A Multiscale Time-Splitting Discrete Fracture Model of Nanoparticles Transport in Fractured Porous Media

    KAUST Repository

    El-Amin, Mohamed F.; Kou, Jisheng; Sun, Shuyu

    2017-01-01

    Recently, applications of nanoparticles have been considered in many branches of petroleum engineering, especially enhanced oil recovery. The current paper is devoted to the numerical investigation of nanoparticle transport in fractured porous media. We employed the discrete-fracture model (DFM) to represent the flow and transport in the fractured formations. The system of the governing equations consists of the mass conservation law, Darcy's law, nanoparticles concentration in water, deposited nanoparticles concentration on the pore-wall, and entrapped nanoparticles concentration in the pore-throat. The variation of porosity and permeability due to the nanoparticles deposition/entrapment on/in the pores is also considered. We employ the multiscale time-splitting strategy to control different time-step sizes for different physics, such as pressure and concentration. The cell-centered finite difference (CCFD) method is used for the spatial discretization. Numerical examples are provided to demonstrate the efficiency of the proposed multiscale time splitting approach.
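
    The time-splitting idea (a coarse time step for pressure, several finer sub-steps for concentration) is easy to illustrate in one dimension. The sketch below is not the paper's discrete-fracture/CCFD scheme; it uses a trivial uniform-permeability Darcy velocity and explicit upwind advection, with all parameters chosen only for illustration.

    ```python
    import numpy as np

    nx, L, phi = 100, 1.0, 0.2             # cells, domain length, porosity
    dx = L / nx
    k = np.ones(nx)                        # permeability field (uniform here)
    p_left, p_right = 2.0, 1.0             # boundary pressures driving the flow
    c = np.zeros(nx); c[0] = 1.0           # injected nanoparticle concentration at the inlet

    dt_p, n_pressure_steps, n_substeps = 0.01, 10, 10
    dt_c = dt_p / n_substeps               # finer step for the transport physics

    for _ in range(n_pressure_steps):
        # Coarse "pressure step": steady Darcy velocity for the current permeability.
        u = k.mean() * (p_left - p_right) / L
        # Fine "concentration sub-steps": explicit upwind advection of c.
        for _ in range(n_substeps):
            c[1:] -= (u * dt_c / (phi * dx)) * (c[1:] - c[:-1])
            c[0] = 1.0
    print("approximate front position:", float(np.argmax(c < 0.5)) * dx)
    ```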

  2. Multiscale Polymer Composites: A Review of the Interlaminar Fracture Toughness Improvement

    Directory of Open Access Journals (Sweden)

    Vishwesh Dikshit

    2017-10-01

    Composite materials are prone to delamination as they are weaker in the thickness direction. Carbon nanotubes (CNTs) are introduced as a multiscale reinforcement into the fiber reinforced polymer composites to suppress the delamination phenomenon. This review paper presents the detailed progress made by the scientific and research community to-date in improving the Mode I and Mode II interlaminar fracture toughness (ILFT) by various methodologies including the effect of multiscale reinforcement. Methods of measuring the Mode I and Mode II fracture toughness of the composites along with the solutions to improve them are presented. The use of different methodologies and approaches along with their performance in enhancing the fracture toughness of the composites is summarized. The current state of polymer-fiber-nanotube composites and their future perspective are also deliberated.

  3. A multiscale decomposition approach to detect abnormal vasculature in the optic disc.

    Science.gov (United States)

    Agurto, Carla; Yu, Honggang; Murray, Victor; Pattichis, Marios S; Nemeth, Sheila; Barriga, Simon; Soliz, Peter

    2015-07-01

    This paper presents a multiscale method to detect neovascularization in the optic disc (NVD) using fundus images. Our method is applied to a manually selected region of interest (ROI) containing the optic disc. All the vessels in the ROI are segmented by adaptively combining contrast enhancement methods with a vessel segmentation technique. Textural features extracted using multiscale amplitude-modulation frequency-modulation, morphological granulometry, and fractal dimension are used. A linear SVM is used to perform the classification, which is tested by means of 10-fold cross-validation. The performance is evaluated using 300 images achieving an AUC of 0.93 with maximum accuracy of 88%. Copyright © 2015 Elsevier Ltd. All rights reserved.
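
    A stripped-down version of the classification stage is sketched below. It is not the authors' pipeline: instead of AM-FM decompositions, granulometry and fractal dimension, it uses mean/std statistics of Gaussian-smoothed intensity and gradient magnitude at several scales as stand-in multiscale texture features, classified with a linear SVM under 10-fold cross-validation. It assumes NumPy, SciPy and scikit-learn, and the synthetic images are purely illustrative.

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter, gaussian_gradient_magnitude
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import SVC

    def multiscale_texture_features(img, sigmas=(1, 2, 4, 8)):
        """Mean/std of smoothed intensity and gradient magnitude at several scales."""
        feats = []
        for s in sigmas:
            smooth = gaussian_filter(img, s)
            grad = gaussian_gradient_magnitude(img, s)
            feats += [smooth.mean(), smooth.std(), grad.mean(), grad.std()]
        return np.array(feats)

    # Synthetic demo: "normal" = smooth noise, "abnormal" = added thin bright curve.
    rng = np.random.default_rng(1)
    X, y = [], []
    for label in (0, 1):
        for _ in range(40):
            img = gaussian_filter(rng.normal(size=(64, 64)), 2)
            if label:
                rr = np.linspace(5, 58, 200).astype(int)
                cc = (32 + 20 * np.sin(np.linspace(0, 3, 200))).astype(int)
                img[rr, cc] += 1.5           # crude "vessel-like" structure
            X.append(multiscale_texture_features(img)); y.append(label)
    scores = cross_val_score(SVC(kernel="linear"), np.array(X), np.array(y), cv=10)
    print("10-fold accuracy: %.2f" % scores.mean())
    ```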

  4. A Multiscale Time-Splitting Discrete Fracture Model of Nanoparticles Transport in Fractured Porous Media

    KAUST Repository

    El-Amin, Mohamed F.

    2017-06-06

    Recently, applications of nanoparticles have been considered in many branches of petroleum engineering, especially enhanced oil recovery. The current paper is devoted to the numerical investigation of nanoparticle transport in fractured porous media. We employed the discrete-fracture model (DFM) to represent the flow and transport in the fractured formations. The system of the governing equations consists of the mass conservation law, Darcy's law, nanoparticles concentration in water, deposited nanoparticles concentration on the pore-wall, and entrapped nanoparticles concentration in the pore-throat. The variation of porosity and permeability due to the nanoparticles deposition/entrapment on/in the pores is also considered. We employ the multiscale time-splitting strategy to control different time-step sizes for different physics, such as pressure and concentration. The cell-centered finite difference (CCFD) method is used for the spatial discretization. Numerical examples are provided to demonstrate the efficiency of the proposed multiscale time splitting approach.

  5. Improved anomaly detection using multi-scale PLS and generalized likelihood ratio test

    KAUST Repository

    Madakyaru, Muddu

    2017-02-16

    Process monitoring has a central role in the process industry to enhance productivity, efficiency, and safety, and to avoid expensive maintenance. In this paper, a statistical approach that exploits the advantages of multiscale PLS models (MSPLS) and those of a generalized likelihood ratio (GLR) test to better detect anomalies is proposed. Specifically, to consider the multivariate and multi-scale nature of process dynamics, a MSPLS algorithm combining PLS and wavelet analysis is used as the modeling framework. Then, GLR hypothesis testing is applied using the uncorrelated residuals obtained from the MSPLS model to improve the anomaly detection abilities of these latent variable based fault detection methods even further. Applications to simulated distillation column data are used to evaluate the proposed MSPLS-GLR algorithm.
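
    The ingredients of this approach (multiscale filtering of the variables, a latent-variable regression model, and a GLR-type test on its residuals) can be wired together in a few lines. The sketch below is a simplification, not the MSPLS-GLR algorithm itself; it assumes PyWavelets and scikit-learn, and the window length, thresholds and simulated fault are illustrative.

    ```python
    import numpy as np
    import pywt
    from scipy.stats import chi2
    from sklearn.cross_decomposition import PLSRegression

    def wavelet_denoise(x, wavelet="db4", level=3):
        """Multiscale (wavelet) filtering of one variable with soft thresholding."""
        coeffs = pywt.wavedec(x, wavelet, level=level)
        sigma = np.median(np.abs(coeffs[-1])) / 0.6745
        thr = sigma * np.sqrt(2 * np.log(len(x)))
        coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
        return pywt.waverec(coeffs, wavelet)[: len(x)]

    rng = np.random.default_rng(0)
    n = 512
    X = rng.normal(size=(n, 4))
    y = X @ np.array([1.0, -0.5, 0.3, 0.0]) + 0.1 * rng.normal(size=n)
    y[400:] += 0.8                                   # simulated anomaly (mean shift)

    Xf = np.column_stack([wavelet_denoise(X[:, j]) for j in range(X.shape[1])])
    pls = PLSRegression(n_components=2).fit(Xf[:300], y[:300])   # train on fault-free part
    resid = y - pls.predict(Xf).ravel()
    sigma2 = resid[:300].var()

    w = 20                                            # moving window for the GLR statistic
    glr = np.array([w * resid[i - w:i].mean() ** 2 / sigma2 for i in range(w, n)])
    alarm = glr > chi2.ppf(0.99, df=1)                # 2*logGLR ~ chi2(1) under H0
    print("first alarm at sample:", w + int(np.argmax(alarm)))
    ```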

  6. Improved anomaly detection using multi-scale PLS and generalized likelihood ratio test

    KAUST Repository

    Madakyaru, Muddu; Harrou, Fouzi; Sun, Ying

    2017-01-01

    Process monitoring has a central role in the process industry to enhance productivity, efficiency, and safety, and to avoid expensive maintenance. In this paper, a statistical approach that exploits the advantages of multiscale PLS models (MSPLS) and those of a generalized likelihood ratio (GLR) test to better detect anomalies is proposed. Specifically, to consider the multivariate and multi-scale nature of process dynamics, a MSPLS algorithm combining PLS and wavelet analysis is used as the modeling framework. Then, GLR hypothesis testing is applied using the uncorrelated residuals obtained from the MSPLS model to improve the anomaly detection abilities of these latent variable based fault detection methods even further. Applications to simulated distillation column data are used to evaluate the proposed MSPLS-GLR algorithm.

  7. Multiscale Modeling of Point and Line Defects in Cubic Lattices

    National Research Council Canada - National Science Library

    Chung, P. W; Clayton, J. D

    2007-01-01

    .... This multiscale theory explicitly captures heterogeneity in microscopic atomic motion in crystalline materials, attributed, for example, to the presence of various point and line lattice defects...

  8. Towards practical multiscale approach for analysis of reinforced concrete structures

    Science.gov (United States)

    Moyeda, Arturo; Fish, Jacob

    2017-12-01

    We present a novel multiscale approach for analysis of reinforced concrete structural elements that overcomes two major hurdles in utilization of multiscale technologies in practice: (1) coupling between material and structural scales due to consideration of large representative volume elements (RVE), and (2) computational complexity of solving complex nonlinear multiscale problems. The former is accomplished using a variant of computational continua framework that accounts for sizeable reinforced concrete RVEs by adjusting the location of quadrature points. The latter is accomplished by means of reduced order homogenization customized for structural elements. The proposed multiscale approach has been verified against direct numerical simulations and validated against experimental results.

  9. Cliff : the automatized zipper

    NARCIS (Netherlands)

    Baharom, M.Z.; Toeters, M.J.; Delbressine, F.L.M.; Bangaru, C.; Feijs, L.M.G.

    2016-01-01

    It is our strong belief that fashion - more specifically apparel - can support us so much more in our daily life than it currently does. The Cliff project takes the opportunity to create a generic automatized zipper. It is a response to the struggle by the elderly, people with physical disability, and

  10. Automatic Complexity Analysis

    DEFF Research Database (Denmark)

    Rosendahl, Mads

    1989-01-01

    One way to analyse programs is to derive expressions for their computational behaviour. A time bound function (or worst-case complexity) gives an upper bound for the computation time as a function of the size of the input. We describe a system to derive such time bounds automatically using abstract...
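
    The notion of a time bound function can be made concrete with a small, hedged example (not from the book): instrument a program with a cost counter and check it empirically against a hand-derived worst-case bound.

    ```python
    import math

    def merge_sort(xs, cost):
        """Merge sort instrumented with a comparison counter (cost[0])."""
        if len(xs) <= 1:
            return xs
        mid = len(xs) // 2
        left, right = merge_sort(xs[:mid], cost), merge_sort(xs[mid:], cost)
        merged, i, j = [], 0, 0
        while i < len(left) and j < len(right):
            cost[0] += 1                      # one comparison = one unit of cost
            if left[i] <= right[j]:
                merged.append(left[i]); i += 1
            else:
                merged.append(right[j]); j += 1
        return merged + left[i:] + right[j:]

    def time_bound(n):
        """Derived worst-case bound on comparisons: n * ceil(log2 n)."""
        return n * math.ceil(math.log2(n)) if n > 1 else 0

    for n in (16, 128, 1024):
        cost = [0]
        merge_sort(list(range(n, 0, -1)), cost)
        assert cost[0] <= time_bound(n)
        print(n, cost[0], time_bound(n))
    ```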

  11. Automatic Oscillating Turret.

    Science.gov (United States)

    1981-03-01

    Final report (February 1978 to September 1980) on the Automatic Oscillating Turret System, including an appendix on the oscillating bumper turret. Other criteria requirements were: 1. Turret controls inside the cab. 2. Automatic oscillation with fixed elevation to range from 20° below the horizontal to

  12. Reactor component automatic grapple

    International Nuclear Information System (INIS)

    Greenaway, P.R.

    1982-01-01

    A grapple for handling nuclear reactor components in a medium such as liquid sodium which, upon proper seating and alignment of the grapple with the component as sensed by a mechanical logic integral to the grapple, automatically seizes the component. The mechanical logic system also precludes seizure in the absence of proper seating and alignment. (author)

  13. Automatic sweep circuit

    International Nuclear Information System (INIS)

    Keefe, D.J.

    1980-01-01

    An automatically sweeping circuit for searching for an evoked response in an output signal in time with respect to a trigger input is described. Digital counters are used to activate a detector at precise intervals, and monitoring is repeated for statistical accuracy. If the response is not found then a different time window is examined until the signal is found.
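
    A software analogue of the sweeping idea is sketched below. The record describes a hardware circuit, so this is only a conceptual illustration: repeated trials are averaged for statistical accuracy inside a candidate time window after the trigger, and the window is moved until the evoked response is found. It assumes NumPy, and all signal parameters are invented for the demo.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    fs, n_trials, n_samples = 1000, 200, 500       # sampling rate (Hz), repetitions, samples
    t = np.arange(n_samples) / fs
    evoked = 0.5 * np.exp(-0.5 * ((t - 0.23) / 0.02) ** 2)   # true response at 230 ms
    trials = rng.normal(0.0, 1.0, size=(n_trials, n_samples)) + evoked

    window = int(0.05 * fs)                         # 50 ms analysis window
    baseline = trials[:, :window].mean(axis=0)      # averaged pre-response reference
    for start in range(0, n_samples - window, window):            # sweep the window position
        segment = trials[:, start:start + window].mean(axis=0)    # average trials for accuracy
        if segment.max() > baseline.mean() + 5.0 * baseline.std():
            print("response detected in window starting at %d ms" % (1000 * start // fs))
            break
    else:
        print("no response found")
    ```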

  14. Automatic sweep circuit

    Science.gov (United States)

    Keefe, Donald J.

    1980-01-01

    An automatically sweeping circuit for searching for an evoked response in an output signal in time with respect to a trigger input. Digital counters are used to activate a detector at precise intervals, and monitoring is repeated for statistical accuracy. If the response is not found then a different time window is examined until the signal is found.

  15. Recursive automatic classification algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Bauman, E V; Dorofeyuk, A A

    1982-03-01

    A variational statement of the automatic classification problem is given. The dependence of the form of the optimal partition surface on the form of the classification objective functional is investigated. A recursive algorithm is proposed for maximising a functional of reasonably general form. The convergence problem is analysed in connection with the proposed algorithm. 8 references.

  16. Automatic Commercial Permit Sets

    Energy Technology Data Exchange (ETDEWEB)

    Grana, Paul [Folsom Labs, Inc., San Francisco, CA (United States)

    2017-12-21

    Final report for Folsom Labs’ Solar Permit Generator project, which was successfully completed, resulting in the development and commercialization of a software toolkit within the cloud-based HelioScope software environment that enables solar engineers to automatically generate and manage draft documents for permit submission.

  17. Multiscale Study of Currents Affected by Topography

    Science.gov (United States)

    2015-09-30

    ...the effects of topography on the ocean general and regional circulation with a focus on the wide range of scales of interactions. The small-scale...details of the topography and the waves, eddies, drag, and turbulence it generates (at spatial scales ranging from meters to mesoscale) interact in the

  18. A multiscale approach to Brownian motors

    International Nuclear Information System (INIS)

    Pavliotis, G.A.

    2005-01-01

    The problem of Brownian motion in a periodic potential, under the influence of external forcing, which is either random or periodic in time, is studied in this Letter. Multiscale techniques are used to derive general formulae for the steady state particle current and the effective diffusion tensor. These formulae are then applied to calculate the effective diffusion coefficient for a Brownian particle in a periodic potential driven simultaneously by additive Gaussian white and colored noise. Our theoretical findings are supported by numerical simulations
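
    The quantities mentioned above (steady-state current and effective diffusion) can be estimated numerically for a concrete case. The sketch below is not the multiscale derivation of the Letter; it is a plain Euler-Maruyama simulation of an overdamped particle in a tilted periodic potential, with the drift and effective diffusion read off from an ensemble of trajectories (the transient is ignored, so the numbers are only indicative).

    ```python
    import numpy as np

    rng = np.random.default_rng(42)
    V0, f, D = 1.0, 0.5, 0.5            # potential depth, constant tilt, bare diffusion
    # V(x) = V0*cos(2*pi*x), so the force is -dV/dx + f = 2*pi*V0*sin(2*pi*x) + f.
    force = lambda x: 2 * np.pi * V0 * np.sin(2 * np.pi * x) + f
    dt, n_steps, n_particles = 1e-3, 20000, 2000

    x = np.zeros(n_particles)
    for _ in range(n_steps):
        x += force(x) * dt + np.sqrt(2 * D * dt) * rng.normal(size=n_particles)

    T = n_steps * dt
    print("drift velocity :", x.mean() / T)
    print("effective D    :", x.var() / (2 * T))
    ```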

  19. Multiscale modeling of mucosal immune responses

    Science.gov (United States)

    2015-01-01

    Computational modeling techniques are playing increasingly important roles in advancing a systems-level mechanistic understanding of biological processes. Computer simulations guide and underpin experimental and clinical efforts. This study presents the ENteric Immune Simulator (ENISI), a multiscale modeling tool for modeling mucosal immune responses. ENISI's modeling environment can simulate in silico experiments from molecular signaling pathways to tissue level events such as tissue lesion formation. ENISI's architecture integrates multiple modeling technologies including ABM (agent-based modeling), ODE (ordinary differential equations), SDE (stochastic modeling equations), and PDE (partial differential equations). This paper focuses on the implementation and developmental challenges of ENISI. A multiscale model of mucosal immune responses during colonic inflammation, including CD4+ T cell differentiation and tissue level cell-cell interactions, was developed to illustrate the capabilities, power and scope of ENISI MSM. Background: Computational techniques are becoming increasingly powerful, and modeling tools for biological systems are increasingly needed. Biological systems are inherently multiscale, from molecules to tissues and from nano-seconds to a lifespan of several years or decades. ENISI MSM integrates multiple modeling technologies to understand immunological processes from signaling pathways within cells to lesion formation at the tissue level. This paper examines and summarizes the technical details of ENISI, from its initial version to its latest cutting-edge implementation. Implementation: An object-oriented programming approach is adopted to develop a suite of tools based on ENISI. Multiple modeling technologies are integrated to visualize tissues, cells as well as proteins; furthermore, performance matching between the scales is addressed. Conclusion: We used ENISI MSM for developing predictive multiscale models of the mucosal immune system during gut

  20. Multiscale modeling of mucosal immune responses.

    Science.gov (United States)

    Mei, Yongguo; Abedi, Vida; Carbo, Adria; Zhang, Xiaoying; Lu, Pinyi; Philipson, Casandra; Hontecillas, Raquel; Hoops, Stefan; Liles, Nathan; Bassaganya-Riera, Josep

    2015-01-01

    Computational techniques are becoming increasingly powerful, and modeling tools for biological systems are increasingly needed. Biological systems are inherently multiscale, from molecules to tissues and from nano-seconds to a lifespan of several years or decades. ENISI MSM integrates multiple modeling technologies to understand immunological processes from signaling pathways within cells to lesion formation at the tissue level. This paper examines and summarizes the technical details of ENISI, from its initial version to its latest cutting-edge implementation. An object-oriented programming approach is adopted to develop a suite of tools based on ENISI. Multiple modeling technologies are integrated to visualize tissues, cells as well as proteins; furthermore, performance matching between the scales is addressed. We used ENISI MSM for developing predictive multiscale models of the mucosal immune system during gut inflammation. Our modeling predictions dissect the mechanisms by which effector CD4+ T cell responses contribute to tissue damage in the gut mucosa following immune dysregulation. Computational modeling techniques are playing increasingly important roles in advancing a systems-level mechanistic understanding of biological processes. Computer simulations guide and underpin experimental and clinical efforts. This study presents the ENteric Immune Simulator (ENISI), a multiscale modeling tool for modeling mucosal immune responses. ENISI's modeling environment can simulate in silico experiments from molecular signaling pathways to tissue level events such as tissue lesion formation. ENISI's architecture integrates multiple modeling technologies including ABM (agent-based modeling), ODE (ordinary differential equations), SDE (stochastic modeling equations), and PDE (partial differential equations). This paper focuses on the implementation and developmental challenges of ENISI. A multiscale model of mucosal immune responses during colonic inflammation, including CD4+ T

  1. Multi-scale Regions from Edge Fragments

    DEFF Research Database (Denmark)

    Kazmi, Wajahat; Andersen, Hans Jørgen

    2014-01-01

    In this article we introduce a novel method for detecting multi-scale salient regions around edges using a graph-based image compression algorithm. Images are recursively decomposed into triangles arranged into a binary tree using linear interpolation. The entropy of any local region of the image ... their performance is comparable to SIFT (Lowe, 2004). We also show that when they are used together with MSERs (Matas et al., 2002), the performance of MSERs is boosted.

  2. Multiscale estimation of excess mass from gravity data

    Science.gov (United States)

    Castaldo, Raffaele; Fedi, Maurizio; Florio, Giovanni

    2014-06-01

    We describe a multiscale method to estimate the excess mass of gravity anomaly sources, based on the theory of source moments. Using a multipole expansion of the potential field and considering only the data along the vertical direction, a system of linear equations is obtained. The choice of inverting data along a vertical profile can help us to reduce the interference effects due to nearby anomalies and will allow a local estimate of the source parameters. A criterion is established allowing the selection of the optimal highest altitude of the vertical profile data and truncation order of the series expansion. The inversion provides an estimate of the total anomalous mass and of the depth to the centre of mass. The method has several advantages with respect to classical methods, such as Gauss' method: (i) we need just a 1-D inversion to obtain our estimates, since the inverted data are sampled along a single vertical profile; (ii) the resolution may be straightforwardly enhanced by using vertical derivatives; (iii) the centre of mass is also estimated, besides the excess mass; (iv) the method is very robust versus noise; (v) the profile may be chosen in such a way as to minimize the effects from interfering anomalies or from side effects due to a limited area extension. The multiscale estimation of excess mass method can be successfully used in various fields of application. Here, we analyse the gravity anomaly generated by a sulphide body in the Skelleftea ore district, North Sweden, obtaining source mass and volume estimates in agreement with the known information. We show also that these estimates are substantially improved with respect to those obtained with the classical approach.
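
    A minimal Python sketch of the vertical-profile inversion idea described above, assuming a simple point-mass (monopole) source instead of the full multipole expansion; the synthetic values, function names and noise level are illustrative, not the authors' method:

    # Sketch: estimate excess mass and depth to the centre of mass from gravity
    # data sampled along a vertical profile above the source, using a point-mass
    # (monopole) approximation. Illustrative only, not the paper's full method.
    import numpy as np
    from scipy.optimize import curve_fit

    G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

    def gz_point_mass(h, mass, z0):
        """Vertical gravity (m/s^2) at height h above ground, source at depth z0."""
        return G * mass / (h + z0) ** 2

    # Synthetic "observed" profile: 1e9 kg excess mass buried 150 m deep.
    heights = np.linspace(0.0, 400.0, 40)          # observation heights (m)
    true_mass, true_depth = 1.0e9, 150.0
    gz_obs = gz_point_mass(heights, true_mass, true_depth)
    gz_obs += 1e-9 * np.random.default_rng(0).normal(size=heights.size)  # noise

    # Least-squares fit for (mass, depth); p0 is a rough initial guess.
    popt, _ = curve_fit(gz_point_mass, heights, gz_obs, p0=(1e8, 100.0))
    print(f"estimated mass ~ {popt[0]:.3e} kg, depth to centre ~ {popt[1]:.1f} m")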

  3. Engineering Digestion: Multiscale Processes of Food Digestion.

    Science.gov (United States)

    Bornhorst, Gail M; Gouseti, Ourania; Wickham, Martin S J; Bakalis, Serafim

    2016-03-01

    Food digestion is a complex, multiscale process that has recently become of interest to the food industry due to the developing links between food and health or disease. Food digestion can be studied by using either in vitro or in vivo models, each having certain advantages or disadvantages. The recent interest in food digestion has resulted in a large number of studies in this area, yet few have provided an in-depth, quantitative description of digestion processes. To provide a framework to develop these quantitative comparisons, a comparison is given here between digestion processes and parallel unit operations in the food and chemical industry. Characterization parameters and phenomena are suggested for each step of digestion. In addition to the quantitative characterization of digestion processes, the multiscale aspect of digestion must also be considered. In both food systems and the gastrointestinal tract, multiple length scales are involved in food breakdown, mixing, and absorption. These different length scales influence digestion processes independently as well as through interrelated mechanisms. To facilitate optimized development of functional food products, a multiscale, engineering approach may be taken to describe food digestion processes. A framework for this approach is described in this review, as well as examples that demonstrate the importance of process characterization as well as the multiple, interrelated length scales in the digestion process. © 2016 Institute of Food Technologists®

  4. Acoustics of multiscale sorptive porous materials

    Science.gov (United States)

    Venegas, R.; Boutin, C.; Umnova, O.

    2017-08-01

    This paper investigates sound propagation in multiscale rigid-frame porous materials that support mass transfer processes, such as sorption and different types of diffusion, in addition to the usual visco-thermo-inertial interactions. The two-scale asymptotic method of homogenization for periodic media is successively used to derive the macroscopic equations describing sound propagation through the material. This allowed us to conclude that the macroscopic mass balance is significantly modified by sorption, inter-scale (micro- to/from nanopore scales) mass diffusion, and inter-scale (pore to/from micro- and nanopore scales) pressure diffusion. This modification is accounted for by the dynamic compressibility of the effective saturating fluid that presents atypical properties that lead to slower speed of sound and higher sound attenuation, particularly at low frequencies. In contrast, it is shown that the physical processes occurring at the micro-nano-scale do not affect the macroscopic fluid flow through the material. The developed theory is exemplified by introducing an analytical model for multiscale sorptive granular materials, which is experimentally validated by comparing its predictions with acoustic measurements on granular activated carbons. Furthermore, we provide empirical evidence supporting an alternative method for measuring sorption and mass diffusion properties of multiscale sorptive materials using sound waves.

  5. Multivariate multiscale entropy of financial markets

    Science.gov (United States)

    Lu, Yunfan; Wang, Jun

    2017-11-01

    In the current effort to quantify the dynamical properties of complex phenomena in financial market systems, multivariate financial time series are of wide concern. In this work, considering the shortcomings and limitations of univariate multiscale entropy in analyzing multivariate time series, the multivariate multiscale sample entropy (MMSE), which can evaluate the complexity in multiple data channels over different timescales, is applied to quantify the complexity of financial markets. Its effectiveness and advantages have been demonstrated with numerical simulations on two well-known synthetic noise signals. For the first time, the complexity of four generated trivariate return series for each stock trading hour in the Chinese stock markets is quantified thanks to the interdisciplinary application of this method. We find that the complexity of the trivariate return series in each hour shows a significant decreasing trend as the trading day progresses. Further, the shuffled multivariate return series and the absolute multivariate return series are also analyzed. As another new attempt, the complexity of global stock markets (Asia, Europe and America) is quantified by analyzing their multivariate returns. Finally, we utilize the multivariate multiscale entropy to assess the relative complexity of normalized multivariate return volatility series with different degrees.
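
    To make the coarse-graining idea concrete, the following sketch computes a univariate multiscale sample entropy on a synthetic series; the full multivariate MMSE additionally builds composite template vectors across data channels, which is omitted here, and all parameter values are illustrative:

    # Sketch of (univariate) multiscale sample entropy: coarse-grain the series
    # at each scale, then compute the sample entropy of the coarse-grained series.
    import numpy as np

    def coarse_grain(x, scale):
        """Average consecutive, non-overlapping blocks of length `scale`."""
        n = len(x) // scale
        return x[: n * scale].reshape(n, scale).mean(axis=1)

    def sample_entropy(x, m=2, r_factor=0.15):
        """Plain univariate sample entropy with Chebyshev distance (simplified)."""
        x = np.asarray(x, dtype=float)
        r = r_factor * x.std()

        def match_count(length):
            tpl = np.array([x[i:i + length] for i in range(len(x) - length)])
            d = np.max(np.abs(tpl[:, None, :] - tpl[None, :, :]), axis=-1)
            return np.sum(d <= r) - len(tpl)      # exclude self-matches

        b, a = match_count(m), match_count(m + 1)
        return -np.log(a / b) if a > 0 and b > 0 else float("inf")

    rng = np.random.default_rng(1)
    series = rng.normal(size=600)                 # stand-in for a return series
    mse = [sample_entropy(coarse_grain(series, s)) for s in range(1, 6)]
    print(np.round(mse, 3))                       # decreases with scale for noise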

  6. Multiscale Phase Inversion of Seismic Data

    KAUST Repository

    Fu, Lei

    2017-12-02

    We present a scheme for multiscale phase inversion (MPI) of seismic data that is less sensitive to the unmodeled physics of wave propagation and a poor starting model than standard full waveform inversion (FWI). To avoid cycle-skipping, the multiscale strategy temporally integrates the traces several times, i.e. high-order integration, to produce low-boost seismograms that are used as input data for the initial iterations of MPI. As the iterations proceed, higher frequencies in the data are boosted by using integrated traces of lower order as the input data. The input data are also filtered into different narrow frequency bands for the MPI implementation. At low frequencies, we show that MPI with windowed reflections approximates wave equation inversion of the reflection traveltimes, except no traveltime picking is needed. Numerical results with synthetic acoustic data show that MPI is more robust than conventional multiscale FWI when the initial model is far from the true model. Results from synthetic viscoacoustic and elastic data show that MPI is less sensitive than FWI to some of the unmodeled physics. Inversion of marine data shows that MPI is more robust and produces modestly more accurate results than FWI for this data set.
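
    The sketch below illustrates only the trace-integration step that produces the low-boost input data (each temporal integration scales the spectrum roughly by 1/omega, so low frequencies are emphasized); the filtering, windowing and inversion machinery of MPI itself is not reproduced, and the synthetic trace is illustrative:

    # Sketch: repeated temporal integration of a trace to emphasize low frequencies.
    import numpy as np

    def integrate_trace(trace, dt, order):
        """Integrate a trace `order` times (mean removed before each pass)."""
        out = np.asarray(trace, dtype=float)
        for _ in range(order):
            out = np.cumsum(out - out.mean()) * dt
        return out

    dt = 0.002                                    # 2 ms sampling interval
    t = np.arange(0, 2.0, dt)
    trace = 0.2 * np.sin(2 * np.pi * 5 * t) + 1.0 * np.sin(2 * np.pi * 40 * t)

    for order in (0, 1, 2):
        tr = integrate_trace(trace, dt, order)
        spec = np.abs(np.fft.rfft(tr))
        freqs = np.fft.rfftfreq(len(tr), dt)
        peak = freqs[1:][np.argmax(spec[1:])]     # skip the DC bin
        print(f"integration order {order}: dominant frequency ~ {peak:.1f} Hz")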

  7. Multiscale Persistent Functions for Biomolecular Structure Characterization

    Energy Technology Data Exchange (ETDEWEB)

    Xia, Kelin [Nanyang Technological University (Singapore). Division of Mathematical Sciences, School of Physical, Mathematical Sciences and School of Biological Sciences; Li, Zhiming [Central China Normal University, Wuhan (China). Key Laboratory of Quark and Lepton Physics (MOE) and Institute of Particle Physics; Mu, Lin [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Computer Science and Mathematics Division

    2017-11-02

    Here in this paper, we introduce multiscale persistent functions for biomolecular structure characterization. The essential idea is to combine our multiscale rigidity functions (MRFs) with persistent homology analysis, so as to construct a series of multiscale persistent functions, particularly multiscale persistent entropies, for structure characterization. To clarify the fundamental idea of our method, the multiscale persistent entropy (MPE) model is discussed in great detail. Mathematically, unlike the previous persistent entropy (Chintakunta et al. in Pattern Recognit 48(2):391–401, 2015; Merelli et al. in Entropy 17(10):6872–6892, 2015; Rucco et al. in: Proceedings of ECCS 2014, Springer, pp 117–128, 2016), a special resolution parameter is incorporated into our model. Various scales can be achieved by tuning its value. Physically, our MPE can be used in conformational entropy evaluation. More specifically, it is found that our method incorporates in it a natural classification scheme. This is achieved through a density filtration of an MRF built from angular distributions. To further validate our model, a systematical comparison with the traditional entropy evaluation model is done. Additionally, it is found that our model is able to preserve the intrinsic topological features of biomolecular data much better than traditional approaches, particularly for resolutions in the intermediate range. Moreover, by comparing with traditional entropies from various grid sizes, bond angle-based methods and a persistent homology-based support vector machine method (Cang et al. in Mol Based Math Biol 3:140–162, 2015), we find that our MPE method gives the best results in terms of average true positive rate in a classic protein structure classification test. More interestingly, all-alpha and all-beta protein classes can be clearly separated from each other with zero error only in our model. Finally, a special protein structure index (PSI) is proposed, for the first
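
    For orientation, a minimal sketch of plain persistent entropy computed from a toy persistence barcode; the paper's multiscale persistent entropy additionally introduces a resolution parameter through multiscale rigidity functions, which is not reproduced here:

    # Sketch: persistent entropy of a persistence barcode. Each bar's length is
    # normalized to a probability and a Shannon entropy is taken over the bars.
    import numpy as np

    def persistent_entropy(barcode):
        """barcode: iterable of (birth, death) pairs with death > birth."""
        lengths = np.array([d - b for b, d in barcode], dtype=float)
        p = lengths / lengths.sum()
        return float(-np.sum(p * np.log(p)))

    bars = [(0.0, 1.2), (0.1, 0.4), (0.3, 2.0), (0.5, 0.8)]   # toy barcode
    print(f"persistent entropy: {persistent_entropy(bars):.4f}")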

  8. Multi-scale Material Parameter Identification Using LS-DYNA® and LS-OPT®

    Energy Technology Data Exchange (ETDEWEB)

    Stander, Nielen; Basudhar, Anirban; Basu, Ushnish; Gandikota, Imtiaz; Savic, Vesna; Sun, Xin; Choi, Kyoo Sil; Hu, Xiaohua; Pourboghrat, F.; Park, Taejoon; Mapar, Aboozar; Kumar, Shavan; Ghassemi-Armaki, Hassan; Abu-Farha, Fadi

    2015-09-14

    Test Ban Treaty of 1996 which banned surface testing of nuclear devices [1]. This had the effect that experimental work was reduced from large scale tests to multiscale experiments to provide material models with validation at different length scales. In the subsequent years industry realized that multi-scale modeling and simulation-based design were transferable to the design optimization of any structural system. Horstemeyer [1] lists a number of advantages of the use of multiscale modeling. Among these are: the reduction of product development time by alleviating costly trial-and-error iterations as well as the reduction of product costs through innovations in material, product and process designs. Multi-scale modeling can reduce the number of costly large scale experiments and can increase product quality by providing more accurate predictions. Research tends to be focussed on each particular length scale, which enhances accuracy in the long term. This paper serves as an introduction to the LS-OPT and LS-DYNA methodology for multi-scale modeling. It mainly focuses on an approach to integrate material identification using material models of different length scales. As an example, a multi-scale material identification strategy, consisting of a Crystal Plasticity (CP) material model and a homogenized State Variable (SV) model, is discussed and the parameter identification of the individual material models of different length scales is demonstrated. The paper concludes with thoughts on integrating the multi-scale methodology into the overall vehicle design.

  9. Automatic indexing, compiling and classification

    International Nuclear Information System (INIS)

    Andreewsky, Alexandre; Fluhr, Christian.

    1975-06-01

    A review of the principles of automatic indexing is followed by a comparison and summing-up of work by the authors and by a Soviet team from the Moscow INFORM-ELECTRO Institute. The mathematical and linguistic problems of the automatic building of thesauri and automatic classification are examined. [fr]

  10. The multiscale nature of streamers

    International Nuclear Information System (INIS)

    Ebert, U; Montijn, C; Briels, T M P; Hundsdorfer, W; Meulenbroek, B; Rocco, A; Veldhuizen, E M van

    2006-01-01

    Streamers are a generic mode of electric breakdown of large gas volumes. They play a role in the initial stages of sparks and lightning, in technical corona reactors and in high altitude sprite discharges above thunderclouds. Streamers are characterized by a self-generated field enhancement at the head of the growing discharge channel. We briefly review recent streamer experiments and sprite observations. Then we sketch our recent work on computations of growing and branching streamers, we discuss concepts and solutions of analytical model reductions and we review different branching concepts and outline a hierarchy of model reductions.

  11. The multiscale nature of streamers

    Energy Technology Data Exchange (ETDEWEB)

    Ebert, U [Centrum voor Wiskunde en Informatica (CWI), PO Box 94079, 1090GB Amsterdam (Netherlands); Faculty of Physics, Eindhoven University of Technology, PO Box 513, 5600MB Eindhoven (Netherlands); Montijn, C [Centrum voor Wiskunde en Informatica (CWI), PO Box 94079, 1090GB Amsterdam (Netherlands); Briels, T M P [Faculty of Physics, Eindhoven University of Technology, PO Box 513, 5600MB Eindhoven (Netherlands); Hundsdorfer, W [Centrum voor Wiskunde en Informatica (CWI), PO Box 94079, 1090GB Amsterdam (Netherlands); Meulenbroek, B [Centrum voor Wiskunde en Informatica (CWI), PO Box 94079, 1090GB Amsterdam (Netherlands); Rocco, A [Centrum voor Wiskunde en Informatica (CWI), PO Box 94079, 1090GB Amsterdam (Netherlands); University of Oxford, Department of Statistics, 1 South Parks Road, Oxford OX1 3TG (United Kingdom); Veldhuizen, E M van [Faculty of Physics, Eindhoven University of Technology, PO Box 513, 5600MB Eindhoven (Netherlands)

    2006-05-15

    Streamers are a generic mode of electric breakdown of large gas volumes. They play a role in the initial stages of sparks and lightning, in technical corona reactors and in high altitude sprite discharges above thunderclouds. Streamers are characterized by a self-generated field enhancement at the head of the growing discharge channel. We briefly review recent streamer experiments and sprite observations. Then we sketch our recent work on computations of growing and branching streamers, we discuss concepts and solutions of analytical model reductions and we review different branching concepts and outline a hierarchy of model reductions.

  12. IMPROVEMENT AND EXTENSION OF SHAPE EVALUATION CRITERIA IN MULTI-SCALE IMAGE SEGMENTATION

    Directory of Open Access Journals (Sweden)

    M. Sakamoto

    2016-06-01

    Full Text Available Over the last decade, multi-scale image segmentation has attracted particular interest and has been used in practice for object-based image analysis. In this study, we address issues in multi-scale image segmentation, especially improving the validity of merging and the variety of the derived regions' shapes. Firstly, we introduce constraints on the application of the spectral criterion, which suppress excessive merging between dissimilar regions. Secondly, we extend the evaluation of the smoothness criterion by modifying the definition of the extent of the object, which controls the diversity of shapes. Thirdly, we develop a new shape criterion called the aspect ratio. This criterion helps to improve the reproducibility of object shapes so that they match the actual objects of interest. It constrains the aspect ratio of the object's bounding box while keeping the properties controlled by the conventional shape criteria. These improvements and extensions lead to more accurate, flexible, and diverse segmentation results according to the shape characteristics of the target of interest. Furthermore, we also investigate a technique for quantitative and automatic parameterization in multi-scale image segmentation. This is achieved by comparing the segmentation result with a training area specified in advance, either maximizing the average area of the derived objects or satisfying the evaluation index called the F-measure. Thus, it becomes possible to automate parameterization suited to the objectives, especially from the viewpoint of shape reproducibility.
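
    A toy illustration of an aspect-ratio shape criterion of this kind, measuring the bounding-box aspect ratio of a region mask; the function name, example threshold and exact definition are illustrative assumptions rather than the paper's implementation:

    # Toy bounding-box aspect-ratio criterion for a segmented region given as a
    # boolean mask (names and the threshold in the comment are illustrative).
    import numpy as np

    def aspect_ratio_criterion(mask):
        """Return the bounding-box aspect ratio (>= 1) of a boolean region mask."""
        rr = np.flatnonzero(mask.any(axis=1))
        cc = np.flatnonzero(mask.any(axis=0))
        height = rr.max() - rr.min() + 1
        width = cc.max() - cc.min() + 1
        return max(height, width) / min(height, width)

    region = np.zeros((60, 60), dtype=bool)
    region[10:20, 5:55] = True                   # an elongated region
    print(f"aspect ratio: {aspect_ratio_criterion(region):.2f}")   # -> 5.00

    # A candidate merge could then be rejected when the merged region's aspect
    # ratio exceeds a user-defined limit, e.g. aspect_ratio_criterion(m) > 4.0.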

  13. Automatization of welding

    International Nuclear Information System (INIS)

    Iwabuchi, Masashi; Tomita, Jinji; Nishihara, Katsunori.

    1978-01-01

    Automatization of welding is one of the effective measures for securing a high degree of quality in nuclear power equipment, as well as for adapting to the working environment at the plant site. The latest automatic welders in practical use for welding nuclear power equipment in the factories of Toshiba and IHI, namely those for pipes and lining tanks, are described here. The pipe welder performs buttering welding on the inside of the pipe end as the so-called IGSCC countermeasure and the succeeding butt welding through the same controller. The lining tank welder is able to perform simultaneous welding of two parallel weld lines on a large thin-plate lining tank. Both types of welders are demonstrating excellent performance in the shops as well as at the plant site. (author)

  14. Automatic structural scene digitalization.

    Science.gov (United States)

    Tang, Rui; Wang, Yuhan; Cosker, Darren; Li, Wenbin

    2017-01-01

    In this paper, we present an automatic system for the analysis and labeling of structural scenes, namely floor plan drawings in Computer-Aided Design (CAD) format. The proposed system applies a fusion strategy to detect and recognize various components of CAD floor plans, such as walls, doors, windows and other ambiguous assets. Technically, a general rule-based filter parsing method is first adopted to extract effective information from the original floor plan. Then, an image-processing-based recovery method is employed to correct the information extracted in the first step. Our proposed method is fully automatic and real-time. The analysis system provides high accuracy and has also been evaluated on a public website that, on average, achieves more than ten thousand effective uses per day and reaches a relatively high satisfaction rate.

  15. Automatic trend estimation

    CERN Document Server

    Vamoş, Călin

    2013-01-01

    Our book introduces a method to evaluate the accuracy of trend estimation algorithms under conditions similar to those encountered in real time series processing. This method is based on Monte Carlo experiments with artificial time series numerically generated by an original algorithm. The second part of the book contains several automatic algorithms for trend estimation and time series partitioning. The source codes of the computer programs implementing these original automatic algorithms are given in the appendix and will be freely available on the web. The book contains a clear statement of the conditions and the approximations under which the algorithms work, as well as the proper interpretation of their results. We illustrate the functioning of the analyzed algorithms by processing time series from astrophysics, finance, biophysics, and paleoclimatology. The numerical experiment method extensively used in our book is already in common use in computational and statistical physics.

  16. Automatic food decisions

    DEFF Research Database (Denmark)

    Mueller Loose, Simone

    Consumers' food decisions are to a large extent shaped by automatic processes, which are either internally directed through learned habits and routines or externally influenced by context factors and visual information triggers. Innovative research methods such as eye tracking, choice experiments...... and food diaries allow us to better understand the impact of unconscious processes on consumers' food choices. Simone Mueller Loose will provide an overview of recent research insights into the effects of habit and context on consumers' food choices....

  17. Automatic LOD selection

    OpenAIRE

    Forsman, Isabelle

    2017-01-01

    In this paper a method to automatically generate transition distances for LOD, improving image stability and performance, is presented. Three different methods were tested, all measuring the change between two levels of detail using the spatial frequency. The methods were implemented as an optional pre-processing step in order to determine the transition distances from multiple view directions. During run-time, both view-direction-based selection and the furthest distance for each direction was ...

  18. Multiscale methods in computational fluid and solid mechanics

    NARCIS (Netherlands)

    Borst, de R.; Hulshoff, S.J.; Lenz, S.; Munts, E.A.; Brummelen, van E.H.; Wall, W.; Wesseling, P.; Onate, E.; Periaux, J.

    2006-01-01

    First, an attempt is made towards gaining a more systematic understanding of recent progress in multiscale modelling in computational solid and fluid mechanics. Subsequently, the discussion is focused on variational multiscale methods for the compressible and incompressible Navier-Stokes equations.

  19. Transitions of the Multi-Scale Singularity Trees

    DEFF Research Database (Denmark)

    Somchaipeng, Kerawit; Sporring, Jon; Kreiborg, Sven

    2005-01-01

    Multi-Scale Singularity Trees (MSSTs) [10] are multi-scale image descriptors aimed at representing the deep structures of images. Changes in images are directly translated to changes in the deep structures and therefore to transitions in MSSTs. Because MSSTs can be used to represent the deep structure...

  20. A multi-scale tensor voting approach for small retinal vessel segmentation in high resolution fundus images.

    Science.gov (United States)

    Christodoulidis, Argyrios; Hurtut, Thomas; Tahar, Houssem Ben; Cheriet, Farida

    2016-09-01

    Segmenting the retinal vessels from fundus images is a prerequisite for many CAD systems for the automatic detection of diabetic retinopathy lesions. So far, research efforts have concentrated mainly on the accurate localization of the large to medium diameter vessels. However, failure to detect the smallest vessels at the segmentation step can lead to false positive lesion detection counts in a subsequent lesion analysis stage. In this study, a new hybrid method for the segmentation of the smallest vessels is proposed. Line detection and perceptual organization techniques are combined in a multi-scale scheme. Small vessels are reconstructed from the perceptual-based approach via tracking and pixel painting. The segmentation was validated in a high resolution fundus image database including healthy and diabetic subjects using pixel-based as well as perceptual-based measures. The proposed method achieves 85.06% sensitivity rate, while the original multi-scale line detection method achieves 81.06% sensitivity rate for the corresponding images (p<0.05). The improvement in the sensitivity rate for the database is 6.47% when only the smallest vessels are considered (p<0.05). For the perceptual-based measure, the proposed method improves the detection of the vasculature by 7.8% against the original multi-scale line detection method (p<0.05). Copyright © 2016 Elsevier Ltd. All rights reserved.
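
    A sketch of the kind of basic multi-scale line-detector response the authors build on (mean intensity along an oriented line minus the local window mean, maximized over orientations and averaged over line lengths); the tensor voting, tracking and pixel-painting stages are not shown, and all parameter values are illustrative:

    # Basic multi-scale line detector for vessel-like structures (sketch only).
    import numpy as np
    from scipy import ndimage

    def line_kernel(length, angle_deg, size):
        """Normalized kernel sampling a line of given length and orientation."""
        k = np.zeros((size, size))
        c = size // 2
        t = np.linspace(-length / 2, length / 2, length)
        rad = np.deg2rad(angle_deg)
        rr = np.clip(np.round(c + t * np.sin(rad)).astype(int), 0, size - 1)
        cc = np.clip(np.round(c + t * np.cos(rad)).astype(int), 0, size - 1)
        k[rr, cc] = 1.0
        return k / k.sum()

    def multiscale_line_response(img, lengths=(5, 9, 13), window=15):
        img = img.astype(float)
        local_mean = ndimage.uniform_filter(img, size=window)
        responses = []
        for L in lengths:
            best = None
            for angle in range(0, 180, 15):
                line_mean = ndimage.convolve(img, line_kernel(L, angle, window))
                r = line_mean - local_mean
                best = r if best is None else np.maximum(best, r)
            responses.append(best)
        return np.mean(responses, axis=0)

    rng = np.random.default_rng(0)
    img = rng.normal(0.5, 0.05, (64, 64))
    img[30:32, 5:60] -= 0.3                      # a dark, thin "vessel"
    resp = multiscale_line_response(1.0 - img)   # invert so vessels respond brightly
    print(resp.max(), resp.mean())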

  1. Automatic Road Pavement Assessment with Image Processing: Review and Comparison

    Directory of Open Access Journals (Sweden)

    Sylvie Chambon

    2011-01-01

    Full Text Available In the field of noninvasive sensing techniques for civil infrastructure monitoring, this paper addresses the problem of crack detection, on the surface of French national roads, by automatic analysis of optical images. The first contribution is a state of the art of the image-processing tools applied to civil engineering. The second contribution is about fine-defect detection in pavement surfaces. The approach is based on a multi-scale extraction and a Markovian segmentation. Third, an evaluation and comparison protocol designed for this difficult task, road pavement crack detection, is introduced. Finally, the proposed method is validated, analysed, and compared to a detection approach based on morphological tools.

  2. Multi-Scale Validation of a Nanodiamond Drug Delivery System and Multi-Scale Engineering Education

    Science.gov (United States)

    Schwalbe, Michelle Kristin

    2010-01-01

    This dissertation has two primary concerns: (i) evaluating the uncertainty and prediction capabilities of a nanodiamond drug delivery model using Bayesian calibration and bias correction, and (ii) determining conceptual difficulties of multi-scale analysis from an engineering education perspective. A Bayesian uncertainty quantification scheme…

  3. Multi-Scale Residual Convolutional Neural Network for Haze Removal of Remote Sensing Images

    Directory of Open Access Journals (Sweden)

    Hou Jiang

    2018-06-01

    Full Text Available Haze removal is a pre-processing step that operates on at-sensor radiance data prior to the physically based image correction step to enhance hazy imagery visually. Most current haze removal methods focus on point-to-point operations and utilize information in the spectral domain, without taking into consideration the multi-scale spatial information of haze. In this paper, we propose a multi-scale residual convolutional neural network (MRCNN) for haze removal of remote sensing images. MRCNN utilizes 3D convolutional kernels to extract spatial–spectral correlation information and abstract features from surrounding neighborhoods for haze transmission estimation. It takes advantage of dilated convolution to aggregate multi-scale contextual information for the purpose of improving its prediction accuracy. Meanwhile, residual learning is utilized to avoid the loss of weak information while deepening the network. Our experiments indicate that MRCNN performs accurately, achieving an extremely low validation error and testing error. The haze removal results of several scenes of Landsat 8 Operational Land Imager (OLI) data show that the visibility of the dehazed images is significantly improved, and the color of the recovered surface is consistent with the actual scene. Quantitative analysis proves that the dehazed results of MRCNN are superior to those of the traditional methods and other networks. Additionally, a comparison to haze-free data illustrates the spectral consistency after haze removal and reveals the changes in the vegetation index.

  4. Mechanical characterization of epoxy composite with multiscale reinforcements: Carbon nanotubes and short carbon fibers

    International Nuclear Information System (INIS)

    Rahmanian, S.; Suraya, A.R.; Shazed, M.A.; Zahari, R.; Zainudin, E.S.

    2014-01-01

    Highlights: • Multiscale composite was prepared by incorporation of carbon nanotubes and fibers. • Carbon nanotubes were also grown on short carbon fibers to enhance stress transfer. • Significant improvements were achieved in mechanical properties of composites. • Synergic effect of carbon nanotubes and fibers was demonstrated. - Abstract: Carbon nanotubes (CNT) and short carbon fibers were incorporated into an epoxy matrix to fabricate a high performance multiscale composite. To improve the stress transfer between epoxy and carbon fibers, CNT were also grown on fibers through chemical vapor deposition (CVD) method to produce CNT grown short carbon fibers (CSCF). Mechanical characterization of composites was performed to investigate the synergy effects of CNT and CSCF in the epoxy matrix. The multiscale composites revealed significant improvement in elastic and storage modulus, strength as well as impact resistance in comparison to CNT–epoxy or CSCF–epoxy composites. An optimum content of CNT was found which provided the maximum stiffness and strength. The synergic reinforcing effects of combined fillers were analyzed on the fracture surface of composites through optical and scanning electron microscopy (SEM)

  5. Efficient algorithms for multiscale modeling in porous media

    KAUST Repository

    Wheeler, Mary F.; Wildey, Tim; Xue, Guangri

    2010-01-01

    We describe multiscale mortar mixed finite element discretizations for second-order elliptic and nonlinear parabolic equations modeling Darcy flow in porous media. The continuity of flux is imposed via a mortar finite element space on a coarse grid scale, while the equations in the coarse elements (or subdomains) are discretized on a fine grid scale. We discuss the construction of multiscale mortar basis and extend this concept to nonlinear interface operators. We present a multiscale preconditioning strategy to minimize the computational cost associated with construction of the multiscale mortar basis. We also discuss the use of appropriate quadrature rules and approximation spaces to reduce the saddle point system to a cell-centered pressure scheme. In particular, we focus on multiscale mortar multipoint flux approximation method for general hexahedral grids and full tensor permeabilities. Numerical results are presented to verify the accuracy and efficiency of these approaches. © 2010 John Wiley & Sons, Ltd.

  6. A complete categorization of multiscale models of infectious disease systems.

    Science.gov (United States)

    Garira, Winston

    2017-12-01

    Modelling of infectious disease systems has entered a new era in which disease modellers are increasingly turning to multiscale modelling to extend traditional modelling frameworks into new application areas and to achieve higher levels of detail and accuracy in characterizing infectious disease systems. In this paper we present a categorization framework for categorizing multiscale models of infectious disease systems. The categorization framework consists of five integration frameworks and five criteria. We use the categorization framework to give a complete categorization of host-level immuno-epidemiological models (HL-IEMs). This categorization framework is also shown to be applicable in categorizing other types of multiscale models of infectious diseases beyond HL-IEMs through modifying the initial categorization framework presented in this study. Categorization of multiscale models of infectious disease systems in this way is useful in bringing some order to the discussion on the structure of these multiscale models.

  7. Efficient algorithms for multiscale modeling in porous media

    KAUST Repository

    Wheeler, Mary F.

    2010-09-26

    We describe multiscale mortar mixed finite element discretizations for second-order elliptic and nonlinear parabolic equations modeling Darcy flow in porous media. The continuity of flux is imposed via a mortar finite element space on a coarse grid scale, while the equations in the coarse elements (or subdomains) are discretized on a fine grid scale. We discuss the construction of multiscale mortar basis and extend this concept to nonlinear interface operators. We present a multiscale preconditioning strategy to minimize the computational cost associated with construction of the multiscale mortar basis. We also discuss the use of appropriate quadrature rules and approximation spaces to reduce the saddle point system to a cell-centered pressure scheme. In particular, we focus on multiscale mortar multipoint flux approximation method for general hexahedral grids and full tensor permeabilities. Numerical results are presented to verify the accuracy and efficiency of these approaches. © 2010 John Wiley & Sons, Ltd.

  8. Information Management Workflow and Tools Enabling Multiscale Modeling Within ICME Paradigm

    Science.gov (United States)

    Arnold, Steven M.; Bednarcyk, Brett A.; Austin, Nic; Terentjev, Igor; Cebon, Dave; Marsden, Will

    2016-01-01

    With the increased emphasis on reducing the cost and time to market of new materials, the need for analytical tools that enable the virtual design and optimization of materials throughout their processing - internal structure - property - performance envelope, along with the capturing and storing of the associated material and model information across its lifecycle, has become critical. This need is also fueled by the demands for higher efficiency in material testing; consistency, quality and traceability of data; product design; engineering analysis; as well as control of access to proprietary or sensitive information. Fortunately, material information management systems and physics-based multiscale modeling methods have kept pace with the growing user demands. Herein, recent efforts to establish workflow for and demonstrate a unique set of web application tools for linking NASA GRC's Integrated Computational Materials Engineering (ICME) Granta MI database schema and NASA GRC's Integrated multiscale Micromechanics Analysis Code (ImMAC) software toolset are presented. The goal is to enable seamless coupling between both test data and simulation data, which is captured and tracked automatically within Granta MI®, with full model pedigree information. These tools, and this type of linkage, are foundational to realizing the full potential of ICME, in which materials processing, microstructure, properties, and performance are coupled to enable application-driven design and optimization of materials and structures.

  9. Flexible feature-space-construction architecture and its VLSI implementation for multi-scale object detection

    Science.gov (United States)

    Luo, Aiwen; An, Fengwei; Zhang, Xiangyu; Chen, Lei; Huang, Zunkai; Jürgen Mattausch, Hans

    2018-04-01

    Feature extraction techniques are a cornerstone of object detection in computer-vision-based applications. The detection performance of vision-based detection systems is often degraded by, e.g., changes in the illumination intensity of the light source, foreground-background contrast variations or automatic gain control from the camera. In order to avoid such degradation effects, we present a block-based L1-norm-circuit architecture which is configurable for different image-cell sizes, cell-based feature descriptors and image resolutions according to customization parameters from the circuit input. The incorporated flexibility in both the image resolution and the cell size for multi-scale image pyramids leads to lower computational complexity and power consumption. Additionally, an object-detection prototype for performance evaluation in 65 nm CMOS implements the proposed L1-norm circuit together with a histogram of oriented gradients (HOG) descriptor and a support vector machine (SVM) classifier. The proposed parallel architecture with high hardware efficiency enables real-time processing, high detection robustness, small chip-core area as well as low power consumption for multi-scale object detection.
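
    A software-level sketch of the block-based L1 normalization that such a circuit computes for HOG-style descriptors, normalizing blocks of cell histograms by their L1 norm; the block size, bin count and epsilon are illustrative assumptions:

    # Block-based L1 normalization of HOG-style cell histograms (sketch).
    import numpy as np

    def l1_normalize_blocks(cell_hists, block=2, eps=1e-6):
        """cell_hists: (rows, cols, bins) array of per-cell gradient histograms."""
        rows, cols, bins = cell_hists.shape
        features = []
        for r in range(rows - block + 1):
            for c in range(cols - block + 1):
                v = cell_hists[r:r + block, c:c + block].ravel()
                features.append(v / (np.abs(v).sum() + eps))   # L1 normalization
        return np.concatenate(features)

    rng = np.random.default_rng(0)
    cells = rng.random((8, 8, 9))                 # 8x8 cells, 9 orientation bins
    descriptor = l1_normalize_blocks(cells)
    print(descriptor.shape)                       # (7*7*2*2*9,) = (1764,)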

  10. A New Multiscale Technique for Time-Accurate Geophysics Simulations

    Science.gov (United States)

    Omelchenko, Y. A.; Karimabadi, H.

    2006-12-01

    Large-scale geophysics systems are frequently described by multiscale reactive flow models (e.g., wildfire and climate models, multiphase flows in porous rocks, etc.). Accurate and robust simulations of such systems by traditional time-stepping techniques face a formidable computational challenge. Explicit time integration suffers from global (CFL and accuracy) timestep restrictions due to inhomogeneous convective and diffusion processes, as well as closely coupled physical and chemical reactions. Application of adaptive mesh refinement (AMR) to such systems may not be always sufficient since its success critically depends on a careful choice of domain refinement strategy. On the other hand, implicit and timestep-splitting integrations may result in a considerable loss of accuracy when fast transients in the solution become important. To address this issue, we developed an alternative explicit approach to time-accurate integration of such systems: Discrete-Event Simulation (DES). DES enables asynchronous computation by automatically adjusting the CPU resources in accordance with local timescales. This is done by encapsulating flux-conservative updates of numerical variables in the form of events, whose execution and synchronization is explicitly controlled by imposing accuracy and causality constraints. As a result, at each time step DES self-adaptively updates only a fraction of the global system state, which eliminates unnecessary computation of inactive elements. DES can be naturally combined with various mesh generation techniques. The event-driven paradigm results in robust and fast simulation codes, which can be efficiently parallelized via a new preemptive event processing (PEP) technique. We discuss applications of this novel technology to time-dependent diffusion-advection-reaction and CFD models representative of various geophysics applications.
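
    A toy sketch of the event-driven scheduling idea only: each element schedules its own next update at a locally chosen timestep and a priority queue executes events in causal (time) order, so only a fraction of the state is touched per step; the decay model, rates and update rule are illustrative, not the DES framework itself:

    # Event-driven (asynchronous) updating with per-cell local timesteps.
    import heapq

    rates = [10.0, 1.0, 0.1]                     # per-cell decay rates (fast..slow)
    state = [1.0, 1.0, 1.0]
    events = [(1.0 / r, i) for i, r in enumerate(rates)]   # (time, cell) pairs
    heapq.heapify(events)

    t_end, n_events = 5.0, 0
    while events:
        t, i = heapq.heappop(events)             # earliest pending event
        if t > t_end:
            break
        dt = 1.0 / rates[i]                      # local timestep ~ local timescale
        state[i] *= (1.0 - 0.5 * rates[i] * dt)  # crude local update (toy model)
        heapq.heappush(events, (t + dt, i))      # schedule this cell's next event
        n_events += 1

    # Fast cells are visited often, slow cells rarely or not at all.
    print(f"{n_events} events processed, state = {[round(s, 3) for s in state]}")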

  11. Classification of high-resolution remote sensing images based on multi-scale superposition

    Science.gov (United States)

    Wang, Jinliang; Gao, Wenjie; Liu, Guangjie

    2017-07-01

    Landscape structures and processes at different scales show different characteristics. In the study of specific target landmarks, the most appropriate scale for images can be attained by scale conversion, which improves the accuracy and efficiency of feature identification and classification. In this paper, the authors carried out experiments on multi-scale classification by taking the Shangri-la area in north-western Yunnan province as the research area and images from SPOT5 HRG and the GF-1 satellite as data sources. Firstly, the authors upscaled the two images by cubic convolution and calculated the optimal scale for the different ground objects shown in the images using variation functions. Then the authors conducted multi-scale superposition classification by Maximum Likelihood and evaluated the classification accuracy. The results indicate that: (1) for most ground objects, the optimal scale is larger than the original one. To be specific, water has the largest optimal scale, i.e. around 25-30 m; farmland, grassland, brushwood, roads, settlement places and woodland follow with 20-24 m. The optimal scale for shades and flood land is basically the same as the original one, i.e. 8 m and 10 m, respectively. (2) Regarding the classification of the multi-scale superposed images, the overall accuracy of the ones from SPOT5 HRG and the GF-1 satellite is 12.84% and 14.76% higher than that of the original multi-spectral images, respectively, and the Kappa coefficient is 0.1306 and 0.1419 higher, respectively. Hence, the multi-scale superposition classification applied in the research area can enhance the classification accuracy of remote sensing images.
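
    A sketch of the variation-function step used to suggest an optimal scale, computing an empirical semivariogram of an image band; the toy image and the reading of the lag at which the curve levels off are illustrative assumptions rather than the authors' exact procedure:

    # Empirical semivariogram of an image band along the x direction:
    # gamma(h) = 0.5 * mean((z(x+h) - z(x))^2). The lag where gamma flattens
    # toward its sill is one way to suggest a characteristic (optimal) scale.
    import numpy as np

    def semivariogram(img, max_lag=30):
        img = img.astype(float)
        gammas = []
        for h in range(1, max_lag + 1):
            diff = img[:, h:] - img[:, :-h]
            gammas.append(0.5 * np.mean(diff ** 2))
        return np.array(gammas)

    rng = np.random.default_rng(0)
    # Toy band: smooth structure (about 20-pixel patches) plus noise.
    base = rng.random((8, 8))
    band = np.kron(base, np.ones((20, 20))) + 0.05 * rng.normal(size=(160, 160))
    gamma = semivariogram(band)
    print(np.round(gamma[:10], 4))               # rises with lag, then flattens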

  12. SegAN: Adversarial Network with Multi-scale L1 Loss for Medical Image Segmentation.

    Science.gov (United States)

    Xue, Yuan; Xu, Tao; Zhang, Han; Long, L Rodney; Huang, Xiaolei

    2018-05-03

    Inspired by classic Generative Adversarial Networks (GANs), we propose a novel end-to-end adversarial neural network, called SegAN, for the task of medical image segmentation. Since image segmentation requires dense, pixel-level labeling, the single scalar real/fake output of a classic GAN's discriminator may be ineffective in producing stable and sufficient gradient feedback to the networks. Instead, we use a fully convolutional neural network as the segmentor to generate segmentation label maps, and propose a novel adversarial critic network with a multi-scale L1 loss function to force the critic and segmentor to learn both global and local features that capture long- and short-range spatial relationships between pixels. In our SegAN framework, the segmentor and critic networks are trained in an alternating fashion in a min-max game: The critic is trained by maximizing a multi-scale loss function, while the segmentor is trained with only gradients passed along by the critic, with the aim to minimize the multi-scale loss function. We show that such a SegAN framework is more effective and stable for the segmentation task, and it leads to better performance than the state-of-the-art U-net segmentation method. We tested our SegAN method using datasets from the MICCAI BRATS brain tumor segmentation challenge. Extensive experimental results demonstrate the effectiveness of the proposed SegAN with multi-scale loss: on BRATS 2013 SegAN gives performance comparable to the state-of-the-art for whole tumor and tumor core segmentation while achieving better precision and sensitivity for Gd-enhanced tumor core segmentation; on BRATS 2015 SegAN achieves better performance than the state-of-the-art in both Dice score and precision.
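
    A sketch of one plausible form of a multi-scale L1 loss between images masked by predicted and ground-truth label maps, using average-pooled copies as the multi-scale "features"; SegAN's critic uses learned hierarchical features instead, so this shows only the loss structure, not the published network:

    # Multi-scale L1 loss over masked images at several pooling scales (sketch).
    import torch
    import torch.nn.functional as F

    def multiscale_l1_loss(image, pred_mask, gt_mask, scales=(1, 2, 4)):
        masked_pred = image * pred_mask
        masked_gt = image * gt_mask
        loss = 0.0
        for s in scales:
            a = F.avg_pool2d(masked_pred, kernel_size=s) if s > 1 else masked_pred
            b = F.avg_pool2d(masked_gt, kernel_size=s) if s > 1 else masked_gt
            loss = loss + F.l1_loss(a, b)        # L1 difference at this scale
        return loss / len(scales)

    image = torch.rand(2, 1, 64, 64)             # batch of single-channel images
    gt = (torch.rand(2, 1, 64, 64) > 0.5).float()
    pred = torch.rand(2, 1, 64, 64)              # e.g. segmentor output in [0, 1]
    print(multiscale_l1_loss(image, pred, gt).item())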

  13. Multiscale approaches to high efficiency photovoltaics

    Directory of Open Access Journals (Sweden)

    Connolly James Patrick

    2016-01-01

    Full Text Available While renewable energies are achieving parity around the globe, efforts to reach higher solar cell efficiencies become ever more difficult as they approach the limiting efficiency. The so-called third-generation concepts attempt to break this limit through a combination of novel physical processes and new materials and concepts in organic and inorganic systems. Some examples of semi-empirical modelling in the field are reviewed, in particular for multispectral solar cells on silicon (the French ANR project MultiSolSi). Their achievements are outlined, and the limits of these approaches shown. This introduces the main topic of this contribution, which is the use of multiscale experimental and theoretical techniques to go beyond the semi-empirical understanding of these systems. This approach has already led to great advances in modelling, which have resulted in widely known modelling software. Yet, a survey of the topic reveals a fragmentation of efforts across disciplines, firstly between fields such as the organic and inorganic ones, but also between high-efficiency concepts such as hot carrier cells and intermediate band concepts. We show how this obstacle to the resolution of practical research problems may be lifted by interdisciplinary cooperation across length scales, across experimental and theoretical fields, and across materials systems. We present a European COST Action “MultiscaleSolar”, kicking off in early 2015, which brings together experimental and theoretical partners in order to develop multiscale research in organic and inorganic materials. The goal of this defragmentation and interdisciplinary collaboration is to develop understanding across length scales, which will enable the full potential of third-generation concepts to be evaluated in practice, for societal and industrial applications.

  14. Automatic Fabric Defect Detection with a Multi-Scale Convolutional Denoising Autoencoder Network Model.

    Science.gov (United States)

    Mei, Shuang; Wang, Yudan; Wen, Guojun

    2018-04-02

    Fabric defect detection is a necessary and essential step of quality control in the textile manufacturing industry. Traditional fabric inspections are usually performed by manual visual methods, which are low in efficiency and poor in precision for long-term industrial applications. In this paper, we propose an unsupervised learning-based automated approach to detect and localize fabric defects without any manual intervention. This approach is used to reconstruct image patches with a convolutional denoising autoencoder network at multiple Gaussian pyramid levels and to synthesize detection results from the corresponding resolution channels. The reconstruction residual of each image patch is used as the indicator for direct pixel-wise prediction. By segmenting and synthesizing the reconstruction residual map at each resolution level, the final inspection result can be generated. This newly developed method has several prominent advantages for fabric defect detection. First, it can be trained with only a small amount of defect-free samples. This is especially important for situations in which collecting large amounts of defective samples is difficult and impracticable. Second, owing to the multi-modal integration strategy, it is relatively more robust and accurate compared to general inspection methods (the results at each resolution level can be viewed as a modality). Third, according to our results, it can address multiple types of textile fabrics, from simple to more complex. Experimental results demonstrate that the proposed model is robust and yields good overall performance with high precision and acceptable recall rates.
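
    A rough sketch of the residual-based synthesis across Gaussian pyramid levels: residuals are thresholded per level and combined at full resolution; here `reconstruct` is a hypothetical stand-in (a simple blur) for the trained convolutional denoising autoencoder, and the threshold rule is an illustrative assumption:

    # Combine per-level reconstruction residuals into a full-resolution defect map.
    import numpy as np
    from scipy import ndimage

    def gaussian_pyramid(img, levels=3):
        pyr = [img.astype(float)]
        for _ in range(1, levels):
            pyr.append(ndimage.gaussian_filter(pyr[-1], sigma=1.0)[::2, ::2])
        return pyr

    def defect_map(img, reconstruct, levels=3, k=3.0):
        combined = np.zeros(img.shape, dtype=bool)
        for level_img in gaussian_pyramid(img, levels):
            residual = np.abs(level_img - reconstruct(level_img))
            mask = residual > residual.mean() + k * residual.std()
            zoom = (img.shape[0] / mask.shape[0], img.shape[1] / mask.shape[1])
            combined |= ndimage.zoom(mask.astype(float), zoom, order=0) > 0.5
        return combined

    # Hypothetical "autoencoder": a blur that cannot reproduce sharp defects.
    reconstruct = lambda x: ndimage.gaussian_filter(x, sigma=2.0)
    fabric = np.random.default_rng(0).normal(0.5, 0.02, (128, 128))
    fabric[60:64, 60:64] += 0.5                  # a small synthetic defect
    print(defect_map(fabric, reconstruct).sum(), "defect pixels flagged")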

  15. Automatic Fabric Defect Detection with a Multi-Scale Convolutional Denoising Autoencoder Network Model

    Directory of Open Access Journals (Sweden)

    Shuang Mei

    2018-04-01

    Full Text Available Fabric defect detection is a necessary and essential step of quality control in the textile manufacturing industry. Traditional fabric inspections are usually performed by manual visual methods, which are low in efficiency and poor in precision for long-term industrial applications. In this paper, we propose an unsupervised learning-based automated approach to detect and localize fabric defects without any manual intervention. This approach is used to reconstruct image patches with a convolutional denoising autoencoder network at multiple Gaussian pyramid levels and to synthesize detection results from the corresponding resolution channels. The reconstruction residual of each image patch is used as the indicator for direct pixel-wise prediction. By segmenting and synthesizing the reconstruction residual map at each resolution level, the final inspection result can be generated. This newly developed method has several prominent advantages for fabric defect detection. First, it can be trained with only a small amount of defect-free samples. This is especially important for situations in which collecting large amounts of defective samples is difficult and impracticable. Second, owing to the multi-modal integration strategy, it is relatively more robust and accurate compared to general inspection methods (the results at each resolution level can be viewed as a modality). Third, according to our results, it can address multiple types of textile fabrics, from simple to more complex. Experimental results demonstrate that the proposed model is robust and yields good overall performance with high precision and acceptable recall rates.

  16. Structure and multiscale mechanics of carbon nanomaterials

    CERN Document Server

    2016-01-01

    This book aims at providing a broad overview on the relationship between structure and mechanical properties of carbon nanomaterials from world-leading scientists in the field. The main aim is to get an in-depth understanding of the broad range of mechanical properties of carbon materials based on their unique nanostructure and on defects of several types and at different length scales. Besides experimental work mainly based on the use of (in-situ) Raman and X-ray scattering and on nanoindentation, the book also covers some aspects of multiscale modeling of the mechanics of carbon nanomaterials.

  17. Multiscale agent-based cancer modeling.

    Science.gov (United States)

    Zhang, Le; Wang, Zhihui; Sagotsky, Jonathan A; Deisboeck, Thomas S

    2009-04-01

    Agent-based modeling (ABM) is an in silico technique that is being used in a variety of research areas such as in social sciences, economics and increasingly in biomedicine as an interdisciplinary tool to study the dynamics of complex systems. Here, we describe its applicability to integrative tumor biology research by introducing a multi-scale tumor modeling platform that understands brain cancer as a complex dynamic biosystem. We summarize significant findings of this work, and discuss both challenges and future directions for ABM in the field of cancer research.

  18. Multi-scale Modelling of Segmentation

    DEFF Research Database (Denmark)

    Hartmann, Martin; Lartillot, Olivier; Toiviainen, Petri

    2016-01-01

    While listening to music, people often unwittingly break down musical pieces into constituent chunks such as verses and choruses. Music segmentation studies have suggested that some consensus regarding boundary perception exists, despite individual differences. However, neither the effects ... pieces. In a second experiment on non-real-time segmentation, musicians indicated boundaries and their strength for six examples. Kernel density estimation was used to develop multi-scale segmentation models. Contrary to previous research, no relationship was found between boundary strength and boundary ...

  19. Automatic quantitative renal scintigraphy

    International Nuclear Information System (INIS)

    Valeyre, J.; Deltour, G.; Delisle, M.J.; Bouchard, A.

    1976-01-01

    Renal scintigraphy data may be analyzed automatically by the use of a processing system coupled to an Anger camera (TRIDAC-MULTI 8 or CINE 200). The computing sequence is as follows: normalization of the images; background noise subtraction on both images; evaluation of mercury-197 uptake by the liver and spleen; calculation of the activity fraction of each kidney with respect to the injected dose, taking into account the kidney depth, with the results referred to normal values; and editing of the results. Automation minimizes the scatter in the parameters and, by its simplification, is a great asset in routine work. [fr]

  20. AUTOMATIC FREQUENCY CONTROL SYSTEM

    Science.gov (United States)

    Hansen, C.F.; Salisbury, J.D.

    1961-01-10

    A control is described for automatically matching the frequency of a resonant cavity to that of a driving oscillator. The driving oscillator is disconnected from the cavity and a secondary oscillator is actuated in which the cavity is the frequency-determining element. A low frequency is mixed with the output of the driving oscillator and the resultant lower and upper sidebands are separately derived. The frequencies of the sidebands are compared with the secondary oscillator frequency, deriving a servo control signal to adjust a tuning element in the cavity and match the cavity frequency to that of the driving oscillator. The driving oscillator may then be connected to the cavity.

  1. Automatic dipole subtraction

    International Nuclear Information System (INIS)

    Hasegawa, K.

    2008-01-01

    The Catani-Seymour dipole subtraction is a general procedure to treat infrared divergences in real emission processes at next-to-leading order in QCD. We have automated the procedure in a computer code. The code is useful especially for processes with many parton legs. In this talk, we first explain the algorithm of the dipole subtraction and the whole structure of our code. After that we show the results for some processes where the infrared divergences of real emission processes are subtracted. (author)

  2. Automatic programmable air ozonizer

    International Nuclear Information System (INIS)

    Gubarev, S.P.; Klosovsky, A.V.; Opaleva, G.P.; Taran, V.S.; Zolototrubova, M.I.

    2015-01-01

    In this paper we describe a compact, economical, easy-to-operate automatic air ozonizer developed at the Institute of Plasma Physics of the NSC KIPT. It is designed for sanitation, disinfection of premises and cleaning the air of foreign odors. A distinctive feature of the developed device is the generation of a given concentration of ozone, approximately 0.7 of the maximum allowable concentration (MAC), and the automatic maintenance of this level. This allows people to remain inside the treated premises during operation. A microprocessor controller was developed to control the operation of the ozonizer.

  3. Updating sea spray aerosol emissions in the Community Multiscale Air Quality (CMAQ) model version 5.0.2

    OpenAIRE

    Gantt, B.; Kelly, J. T.; Bash, J. O.

    2015-01-01

    Sea spray aerosols (SSAs) impact the particle mass concentration and gas-particle partitioning in coastal environments, with implications for human and ecosystem health. Model evaluations of SSA emissions have mainly focused on the global scale, but regional-scale evaluations are also important due to the localized impact of SSAs on atmospheric chemistry near the coast. In this study, SSA emissions in the Community Multiscale Air Quality (CMAQ) model were updated to enhance the...

  4. Automatic personnel contamination monitor

    International Nuclear Information System (INIS)

    Lattin, Kenneth R.

    1978-01-01

    United Nuclear Industries, Inc. (UNI) has developed an automatic personnel contamination monitor (APCM), which uniquely combines the design features of both portal and hand and shoe monitors. In addition, this prototype system also has a number of new features, including: microcomputer control and readout, nineteen large-area gas-flow detectors, real-time background compensation, self-checking for system failures, and card reader identification and control. UNI's experience in operating the Hanford N Reactor, located in Richland, Washington, has shown the necessity of automatically monitoring plant personnel for contamination after they have passed through the procedurally controlled radiation zones. This final check ensures that each radiation zone worker has been properly checked before leaving company-controlled boundaries. Investigation of the commercially available portal and hand and shoe monitors indicated that they did not have the sensitivity or sophistication required for UNI's application; therefore, a development program was initiated, resulting in the subject monitor. Field testing shows good sensitivity to personnel contamination, with the majority of alarms showing contaminants on clothing, face and head areas. In general, the APCM has sensitivity comparable to portal survey instrumentation. The inherent stand-in, walk-on feature of the APCM not only makes it easy to use, but also makes it difficult to bypass. (author)

  5. A multiscale model for virus capsid dynamics.

    Science.gov (United States)

    Chen, Changjun; Saxena, Rishu; Wei, Guo-Wei

    2010-01-01

    Viruses are infectious agents that can cause epidemics and pandemics. The understanding of virus formation, evolution, stability, and interaction with host cells is of great importance to the scientific community and public health. Typically, a virus complex in association with its aquatic environment poses a fabulous challenge to theoretical description and prediction. In this work, we propose a differential geometry-based multiscale paradigm to model complex biomolecule systems. In our approach, the differential geometry theory of surfaces and geometric measure theory are employed as a natural means to couple the macroscopic continuum domain of the fluid mechanical description of the aquatic environment from the microscopic discrete domain of the atomistic description of the biomolecule. A multiscale action functional is constructed as a unified framework to derive the governing equations for the dynamics of different scales. We show that the classical Navier-Stokes equation for the fluid dynamics and Newton's equation for the molecular dynamics can be derived from the least action principle. These equations are coupled through the continuum-discrete interface whose dynamics is governed by potential driven geometric flows.

  6. Neural network based multiscale image restoration approach

    Science.gov (United States)

    de Castro, Ana Paula A.; da Silva, José D. S.

    2007-02-01

    This paper describes a neural network based multiscale image restoration approach. Multilayer perceptrons are trained with artificial images of degraded gray level circles, in an attempt to make the neural network learn inherent spatial relations of the degraded pixels. The present approach simulates the degradation by a low-pass Gaussian filter blurring operation and the addition of noise to the pixels at pre-established rates. The training process considers the degraded image as input and the non-degraded image as output for the supervised learning process. The neural network thus performs an inverse operation by recovering a quasi non-degraded image in the least-squares sense. The main difference from existing approaches is that the spatial relations are taken from different scales, thus providing relational spatial data to the neural network. The approach is an attempt to come up with a simple method that leads to an optimum solution to the problem. The multiscale operation is simulated by considering different window sizes around a pixel. In the generalization phase the neural network is exposed to indoor, outdoor, and satellite degraded images following the same steps used for the artificial circle images.
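
    As a rough illustration of the multiscale training scheme described above, the following sketch degrades a synthetic circle image and trains a multilayer perceptron to predict the clean centre pixel from degraded windows of two sizes. The window sizes, blur width, noise rate and network size are illustrative assumptions, not the settings of the paper.

```python
# Illustrative sketch (not the authors' code): train an MLP to restore a pixel
# from degraded neighbourhoods taken at two window sizes (the "multiscale" input).
import numpy as np
from scipy.ndimage import gaussian_filter
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

def circle_image(n=64, radius=20):
    y, x = np.mgrid[:n, :n]
    return ((x - n / 2) ** 2 + (y - n / 2) ** 2 < radius ** 2).astype(float)

def degrade(img, sigma=1.5, noise=0.05):           # blur + additive noise (assumed rates)
    return gaussian_filter(img, sigma) + noise * rng.standard_normal(img.shape)

def multiscale_patches(img, sizes=(3, 7)):
    """Concatenate flattened windows of several sizes around each interior pixel."""
    pad = max(sizes) // 2
    feats = []
    for i in range(pad, img.shape[0] - pad):
        for j in range(pad, img.shape[1] - pad):
            row = [img[i - s // 2:i + s // 2 + 1, j - s // 2:j + s // 2 + 1].ravel()
                   for s in sizes]
            feats.append(np.concatenate(row))
    return np.array(feats)

clean = circle_image()
noisy = degrade(clean)
pad = 3                                            # = max window size // 2
X = multiscale_patches(noisy)                      # degraded multiscale input
y = clean[pad:-pad, pad:-pad].ravel()              # clean centre pixel as target

net = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0)
net.fit(X, y)
restored = net.predict(X).reshape(clean.shape[0] - 2 * pad, -1)
print("MSE degraded:", np.mean((noisy[pad:-pad, pad:-pad] - clean[pad:-pad, pad:-pad]) ** 2))
print("MSE restored:", np.mean((restored - clean[pad:-pad, pad:-pad]) ** 2))
```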

  7. Multiscale coherent structures in tokamak plasma turbulence

    International Nuclear Information System (INIS)

    Xu, G. S.; Wan, B. N.; Zhang, W.; Yang, Q. W.; Wang, L.; Wen, Y. Z.

    2006-01-01

    A 12-tip poloidal probe array is used on the HT-7 superconducting tokamak [Li, Wan, and Mao, Plasma Phys. Controlled Fusion 42, 135 (2000)] to measure plasma turbulence in the edge region. Some statistical analysis techniques are used to characterize the turbulence structures. It is found that the plasma turbulence is composed of multiscale coherent structures, i.e., turbulent eddies, and that there is self-similarity in a relatively short scale range. The self-similarity arises from the structural similarity of these eddies across different scales. These turbulent eddies constitute the basic convection cells, so the self-similar range is just the dominant scale range relevant to transport. The experimental results also indicate that the plasma turbulence is dominated by low-frequency and long-wavelength fluctuation components and its dispersion relation shows typical electron-drift-wave characteristics. Some large-scale coherent structures intermittently burst out and exhibit a very long poloidal extent, even longer than 6 cm. It is found that these large-scale coherent structures are mainly contributed by the low-frequency and long-wavelength fluctuating components and that their presence is responsible for the observations of long-range correlations, i.e., correlations over scale ranges much longer than the turbulence decorrelation scale. These experimental observations suggest that the coexistence of multiscale coherent structures results in the self-similar turbulent state.

  8. Multiscale structure in eco-evolutionary dynamics

    Science.gov (United States)

    Stacey, Blake C.

    In a complex system, the individual components are neither so tightly coupled or correlated that they can all be treated as a single unit, nor so uncorrelated that they can be approximated as independent entities. Instead, patterns of interdependency lead to structure at multiple scales of organization. Evolution excels at producing such complex structures. In turn, the existence of these complex interrelationships within a biological system affects the evolutionary dynamics of that system. I present a mathematical formalism for multiscale structure, grounded in information theory, which makes these intuitions quantitative, and I show how dynamics defined in terms of population genetics or evolutionary game theory can lead to multiscale organization. For complex systems, "more is different," and I address this from several perspectives. Spatial host--consumer models demonstrate the importance of the structures which can arise due to dynamical pattern formation. Evolutionary game theory reveals the novel effects which can result from multiplayer games, nonlinear payoffs and ecological stochasticity. Replicator dynamics in an environment with mesoscale structure relates to generalized conditionalization rules in probability theory. The idea of natural selection "acting at multiple levels" has been mathematized in a variety of ways, not all of which are equivalent. We will face down the confusion, using the experience developed over the course of this thesis to clarify the situation.

  9. Institute for Multiscale Modeling of Biological Interactions

    Energy Technology Data Exchange (ETDEWEB)

    Paulaitis, Michael E; Garcia-Moreno, Bertrand; Lenhoff, Abraham

    2009-12-26

    The Institute for Multiscale Modeling of Biological Interactions (IMMBI) has two primary goals: Foster interdisciplinary collaborations among faculty and their research laboratories that will lead to novel applications of multiscale simulation and modeling methods in the biological sciences and engineering; and Building on the unique biophysical/biology-based engineering foundations of the participating faculty, train scientists and engineers to apply computational methods that collectively span multiple time and length scales of biological organization. The success of IMMBI will be defined by the following: Size and quality of the applicant pool for pre-doctoral and post-doctoral fellows; Academic performance; Quality of the pre-doctoral and post-doctoral research; Impact of the research broadly and to the DOE (ASCR program) mission; Distinction of the next career step for pre-doctoral and post-doctoral fellows; and Faculty collaborations that result from IMMBI activities. Specific details about accomplishments during the three years of DOE support for IMMBI have been documented in Annual Progress Reports (April 2005, June 2006, and March 2007) and a Report for a National Academy of Sciences Review (October 2005) that were submitted to DOE on the dates indicated. An overview of these accomplishments is provided.

  10. Multi-scale biomedical systems: measurement challenges

    International Nuclear Information System (INIS)

    Summers, R

    2016-01-01

    Multi-scale biomedical systems are those that represent interactions in materials, sensors, and systems from a holistic perspective. It is possible to view such multi-scale activity using measurement of spatial scale or time scale, though in this paper only the former is considered. The biomedical application paradigm comprises interactions that range from quantum biological phenomena at scales of 10^-12 for one individual to epidemiological studies of disease spread in populations that in a pandemic lead to measurement at a scale of 10^+7. It is clear that there are measurement challenges at either end of this spatial scale, but those challenges that relate to the use of new technologies that deal with big data and health service delivery at the point of care are also considered. The measurement challenges lead to the use, in many cases, of model-based measurement and the adoption of virtual engineering. It is these measurement challenges that will be uncovered in this paper. (paper)

  11. Multiscale permutation entropy analysis of electrocardiogram

    Science.gov (United States)

    Liu, Tiebing; Yao, Wenpo; Wu, Min; Shi, Zhaorong; Wang, Jun; Ning, Xinbao

    2017-04-01

    To make a comprehensive nonlinear analysis of ECG, multiscale permutation entropy (MPE) was applied to ECG feature extraction. Three kinds of ECG from the PhysioNet database, from congestive heart failure (CHF) patients, healthy young subjects and elderly subjects, are used in this paper. We set the embedding dimension to 4, adjust the scale factor from 2 to 100 with a step size of 2, and compare MPE with multiscale entropy (MSE). As the scale factor increases, the MPE complexity of the three ECG signals first decreases and then increases. When the scale factor is between 10 and 32, the complexities of the three ECG signals differ most: on average, the entropy of the elderly is 0.146 lower than that of the CHF patients and 0.025 higher than that of the healthy young, in line with normal physiological characteristics. The results show that MPE can be applied effectively to ECG nonlinear analysis and can effectively distinguish different ECG signals.
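
    The coarse-graining plus permutation-entropy procedure described above is simple enough to sketch directly. The following is a minimal illustration (embedding dimension 4 as in the paper; the synthetic signal and the reduced scale range are assumptions made for brevity).

```python
# Minimal sketch of multiscale permutation entropy: coarse-grain the series at
# each scale factor, then compute normalised permutation entropy of order m.
import math
from itertools import permutations
import numpy as np

def coarse_grain(x, tau):
    n = len(x) // tau
    return x[:n * tau].reshape(n, tau).mean(axis=1)

def permutation_entropy(x, m=4):
    patterns = list(permutations(range(m)))
    counts = np.zeros(len(patterns))
    for i in range(len(x) - m + 1):
        order = tuple(int(v) for v in np.argsort(x[i:i + m]))
        counts[patterns.index(order)] += 1
    p = counts[counts > 0] / counts.sum()
    return -(p * np.log(p)).sum() / math.log(math.factorial(m))

def mpe(x, m=4, scales=range(2, 21, 2)):           # the paper scans scales 2..100, step 2
    return {tau: permutation_entropy(coarse_grain(x, tau), m) for tau in scales}

# synthetic stand-in for an ECG record
rng = np.random.default_rng(1)
signal = np.sin(np.linspace(0, 200 * np.pi, 20000)) + 0.5 * rng.standard_normal(20000)
print(mpe(signal))
```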

  12. A Multiscale Model for Virus Capsid Dynamics

    Directory of Open Access Journals (Sweden)

    Changjun Chen

    2010-01-01

    Full Text Available Viruses are infectious agents that can cause epidemics and pandemics. The understanding of virus formation, evolution, stability, and interaction with host cells is of great importance to the scientific community and public health. Typically, a virus complex in association with its aquatic environment poses a fabulous challenge to theoretical description and prediction. In this work, we propose a differential geometry-based multiscale paradigm to model complex biomolecule systems. In our approach, the differential geometry theory of surfaces and geometric measure theory are employed as a natural means to couple the macroscopic continuum domain of the fluid mechanical description of the aquatic environment with the microscopic discrete domain of the atomistic description of the biomolecule. A multiscale action functional is constructed as a unified framework to derive the governing equations for the dynamics of different scales. We show that the classical Navier-Stokes equation for the fluid dynamics and Newton's equation for the molecular dynamics can be derived from the least action principle. These equations are coupled through the continuum-discrete interface whose dynamics is governed by potential driven geometric flows.

  13. Multiscale Convolutional Neural Networks for Hand Detection

    Directory of Open Access Journals (Sweden)

    Shiyang Yan

    2017-01-01

    Full Text Available Unconstrained hand detection in still images plays an important role in many hand-related vision problems, for example, hand tracking, gesture analysis, human action recognition and human-machine interaction, and sign language recognition. Although hand detection has been extensively studied for decades, it is still a challenging task with many problems to be tackled. The contributing factors for this complexity include heavy occlusion, low resolution, varying illumination conditions, different hand gestures, and the complex interactions between hands and objects or other hands. In this paper, we propose a multiscale deep learning model for unconstrained hand detection in still images. Deep learning models, and deep convolutional neural networks (CNNs in particular, have achieved state-of-the-art performances in many vision benchmarks. Developed from the region-based CNN (R-CNN model, we propose a hand detection scheme based on candidate regions generated by a generic region proposal algorithm, followed by multiscale information fusion from the popular VGG16 model. Two benchmark datasets were applied to validate the proposed method, namely, the Oxford Hand Detection Dataset and the VIVA Hand Detection Challenge. We achieved state-of-the-art results on the Oxford Hand Detection Dataset and had satisfactory performance in the VIVA Hand Detection Challenge.

  14. Residual-driven online generalized multiscale finite element methods

    KAUST Repository

    Chung, Eric T.

    2015-09-08

    The construction of local reduced-order models via multiscale basis functions has been an area of active research. In this paper, we propose online multiscale basis functions which are constructed using the offline space and the current residual. Online multiscale basis functions are constructed adaptively in some selected regions based on our error indicators. We derive an error estimator which shows that one needs to have an offline space with certain properties to guarantee that additional online multiscale basis functions will decrease the error. This error decrease is independent of physical parameters, such as the contrast and multiple scales in the problem. The offline spaces are constructed using Generalized Multiscale Finite Element Methods (GMsFEM). We show that if one chooses a sufficient number of offline basis functions, one can guarantee that additional online multiscale basis functions will reduce the error independent of contrast. We note that the construction of online basis functions is motivated by the fact that the offline space construction does not take into account distant effects. Using the residual information, we can incorporate the distant information provided the offline approximation satisfies certain properties. In the paper, theoretical and numerical results are presented. Our numerical results show that if the offline space is sufficiently large (in terms of the dimension) such that the coarse space contains all multiscale spectral basis functions that correspond to small eigenvalues, then the error reduction by adding online multiscale basis functions is independent of the contrast. We discuss various ways of computing online multiscale basis functions, including the use of small-dimensional offline spaces.

  15. On automatic visual inspection of reflective surfaces

    DEFF Research Database (Denmark)

    Kulmann, Lionel

    1995-01-01

    This thesis describes different methods to perform automatic visual inspection of reflective manufactured products, with the aim of increasing productivity, reducing cost and improving the quality level of the production. We investigate two different systems performing automatic visual inspection. The first is the inspection of highly reflective aluminum sheets, used by the Danish company Bang & Olufsen as a part of the exterior design and general appearance of their audio and video products. The second is the inspection of IBM hard disk read/write heads for defects during manufacturing. We have ... surfaces, providing new and exciting applications subject to automated visual inspection. Several contextual features have been surveyed, along with the introduction of novel methods to perform data-dependent enhancement of local surface appearance. Morphological methods have been described and utilized ...

  16. Disentangling Complexity in Bayesian Automatic Adaptive Quadrature

    Science.gov (United States)

    Adam, Gheorghe; Adam, Sanda

    2018-02-01

    The paper describes a Bayesian automatic adaptive quadrature (BAAQ) solution for numerical integration which is simultaneously robust, reliable, and efficient. Detailed discussion is provided of three main factors which contribute to the enhancement of these features: (1) refinement of the m-panel automatic adaptive scheme through the use of integration-domain-length-scale-adapted quadrature sums; (2) fast early problem complexity assessment - enables the non-transitive choice among three execution paths: (i) immediate termination (exceptional cases); (ii) pessimistic - involves time- and resource-consuming Bayesian inference resulting in radical reformulation of the problem to be solved; (iii) optimistic - asks exclusively for subrange subdivision by bisection; (3) use of the weaker accuracy target of the two possible ones (the input accuracy specifications and the intrinsic integrand properties, respectively) - results in maximum possible solution accuracy within minimum possible computing time.
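
    For context, the "optimistic" execution path amounts to classical adaptive quadrature by recursive bisection. The sketch below shows only that path, using an adaptive Simpson rule; the Bayesian inference and complexity-assessment machinery of BAAQ is not reproduced here.

```python
# Sketch of the "optimistic" execution path only: recursive bisection of the
# integration range until a local error estimate meets the accuracy target.
import math

def adaptive_simpson(f, a, b, tol=1e-10):
    def simpson(fa, fm, fb, a, b):
        return (b - a) / 6.0 * (fa + 4.0 * fm + fb)

    def recurse(a, b, fa, fm, fb, whole, tol):
        m = 0.5 * (a + b)
        lm, rm = 0.5 * (a + m), 0.5 * (m + b)
        flm, frm = f(lm), f(rm)
        left = simpson(fa, flm, fm, a, m)
        right = simpson(fm, frm, fb, m, b)
        err = (left + right - whole) / 15.0        # Richardson-type error estimate
        if abs(err) <= tol:
            return left + right + err
        return (recurse(a, m, fa, flm, fm, left, tol / 2)
                + recurse(m, b, fm, frm, fb, right, tol / 2))

    fa, fb, fm = f(a), f(b), f(0.5 * (a + b))
    return recurse(a, b, fa, fm, fb, simpson(fa, fm, fb, a, b), tol)

print(adaptive_simpson(math.sin, 0.0, math.pi))    # ~2.0
```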

  17. Analysis of individual brain activation maps using hierarchical description and multiscale detection

    International Nuclear Information System (INIS)

    Poline, J.B.; Mazoyer, B.M.

    1994-01-01

    The authors propose a new method for the analysis of brain activation images that aims at detecting activated volumes rather than pixels. The method is based on Poisson process modeling, hierarchical description, and multiscale detection (MSD). Its performances have been assessed using both Monte Carlo simulated images and experimental PET brain activation data. As compared to other methods, the MSD approach shows enhanced sensitivity with a controlled overall type I error, and has the ability to provide an estimate of the spatial limits of the detected signals. It is applicable to any kind of difference image for which the spatial autocorrelation function can be approximated by a stationary Gaussian function

  18. Sensitivity of Magnetospheric Multi-Scale (MMS) Mission Navigation Accuracy to Major Error Sources

    Science.gov (United States)

    Olson, Corwin; Long, Anne; Carpenter, J. Russell

    2011-01-01

    The Magnetospheric Multiscale (MMS) mission consists of four satellites flying in formation in highly elliptical orbits about the Earth, with a primary objective of studying magnetic reconnection. The baseline navigation concept is independent estimation of each spacecraft state using GPS pseudorange measurements referenced to an Ultra Stable Oscillator (USO) with accelerometer measurements included during maneuvers. MMS state estimation is performed onboard each spacecraft using the Goddard Enhanced Onboard Navigation System (GEONS), which is embedded in the Navigator GPS receiver. This paper describes the sensitivity of MMS navigation performance to two major error sources: USO clock errors and thrust acceleration knowledge errors.

  19. Sensitivity of Magnetospheric Multi-Scale (MMS) Mission Navigation Accuracy to Major Error Sources

    Science.gov (United States)

    Olson, Corwin; Long, Anne; Carpenter, J. Russell

    2011-01-01

    The Magnetospheric Multiscale (MMS) mission consists of four satellites flying in formation in highly elliptical orbits about the Earth, with a primary objective of studying magnetic reconnection. The baseline navigation concept is independent estimation of each spacecraft state using GPS pseudorange measurements referenced to an Ultra Stable Oscillator (USO) with accelerometer measurements included during maneuvers. MMS state estimation is performed onboard each spacecraft using the Goddard Enhanced Onboard Navigation System (GEONS), which is embedded in the Navigator GPS receiver. This paper describes the sensitivity of MMS navigation performance to two major error sources: USO clock errors and thrust acceleration knowledge errors.

  20. Multi-scale MHD analysis of heliotron plasma in change of background field

    International Nuclear Information System (INIS)

    Ichiguchi, K.; Sakakibara, S.; Ohdachi, S.; Carreras, B.A.

    2012-11-01

    A partial collapse observed in the Large Helical Device (LHD) experiments shifting the magnetic axis inwardly with a real time control of the background field is analyzed with a magnetohydrodynamics (MHD) numerical simulation. The simulation is carried out with a multi-scale simulation scheme. In the simulation, the equilibrium also evolves including the change of the pressure and the rotational transform due to the perturbation dynamics. The simulation result agrees with the experiments qualitatively, which shows that the mechanism is attributed to the destabilization of an infernal-like mode. The destabilization is caused by the change of the background field through the enhancement of the magnetic hill. (author)

  1. Automatic segmentation of coronary angiograms based on fuzzy inferring and probabilistic tracking

    Directory of Open Access Journals (Sweden)

    Shoujun Zhou

    2010-08-01

    Full Text Available Abstract Background Segmentation of the coronary angiogram is important in computer-assisted artery motion analysis or reconstruction of 3D vascular structures from a single-plane or biplane angiographic system. Developing fully automated and accurate vessel segmentation algorithms is highly challenging, especially when extracting vascular structures with large variations in image intensities and noise, as well as with variable cross-sections or vascular lesions. Methods This paper presents a novel tracking method for automatic segmentation of the coronary artery tree in X-ray angiographic images, based on probabilistic vessel tracking and fuzzy structure pattern inferring. The method is composed of two main steps: preprocessing and tracking. In preprocessing, multiscale Gabor filtering and Hessian matrix analysis were used to enhance and extract vessel features from the original angiographic image, leading to a vessel feature map as well as a vessel direction map. In tracking, a seed point was first automatically detected by analyzing the vessel feature map. Subsequently, two operators [i.e., a probabilistic tracking operator (PTO) and a vessel structure pattern detector (SPD)] worked together based on the detected seed point to extract vessel segments or branches one at a time. The local structure pattern was inferred by a multi-feature based fuzzy inferring function employed in the SPD. The identified structure pattern, such as crossing or bifurcation, was used to control the tracking process, for example, to keep tracking the current segment or start tracking a new one, depending on the detected pattern. Results By appropriate integration of these advanced preprocessing and tracking steps, our tracking algorithm is able to extract both vessel axis lines and edge points, as well as measure the arterial diameters in various complicated cases. For example, it can walk across gaps along the longitudinal vessel direction, manage varying vessel
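
    The Hessian-based part of the preprocessing step can be illustrated with a Frangi-style multiscale vesselness map, taking the maximum response over several Gaussian scales. The scale list and the parameters beta and c below are conventional assumptions; the Gabor filtering and the probabilistic tracking stages of the paper are not reproduced.

```python
# Sketch of the multiscale Hessian part of the preprocessing step: a Frangi-style
# vesselness response taken as the maximum over several Gaussian scales.
# beta, c and the scale list are conventional assumptions for 8-bit images.
import numpy as np
from scipy.ndimage import gaussian_filter

def hessian_eigenvalues(img, sigma):
    Hxx = gaussian_filter(img, sigma, order=(0, 2))
    Hyy = gaussian_filter(img, sigma, order=(2, 0))
    Hxy = gaussian_filter(img, sigma, order=(1, 1))
    tmp = np.sqrt((Hxx - Hyy) ** 2 + 4.0 * Hxy ** 2)
    l1 = 0.5 * (Hxx + Hyy + tmp)
    l2 = 0.5 * (Hxx + Hyy - tmp)
    swap = np.abs(l1) > np.abs(l2)                 # order so that |l1| <= |l2|
    l1[swap], l2[swap] = l2[swap], l1[swap]
    return l1, l2

def vesselness(img, sigmas=(1, 2, 3, 4), beta=0.5, c=15.0):
    img = img.astype(float)
    out = np.zeros_like(img)
    for s in sigmas:
        l1, l2 = hessian_eigenvalues(img, s)
        l1, l2 = l1 * s ** 2, l2 * s ** 2          # scale-normalised eigenvalues
        rb = np.abs(l1) / (np.abs(l2) + 1e-12)     # deviation from a tube-like shape
        s2 = np.sqrt(l1 ** 2 + l2 ** 2)            # overall second-order strength
        v = np.exp(-rb ** 2 / (2 * beta ** 2)) * (1.0 - np.exp(-s2 ** 2 / (2 * c ** 2)))
        v[l2 < 0] = 0.0                            # keep dark vessels on a bright background
        out = np.maximum(out, v)                   # best response over all scales
    return out

# usage: feature_map = vesselness(angiogram)   # angiogram: 2-D grey-level array
```

    The resulting response map can serve as a vessel feature map of the kind described above, from which a seed point could be chosen, for instance at the location of maximum response.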

  2. Analysis of complex time series using refined composite multiscale entropy

    International Nuclear Information System (INIS)

    Wu, Shuen-De; Wu, Chiu-Wen; Lin, Shiou-Gwo; Lee, Kung-Yen; Peng, Chung-Kang

    2014-01-01

    Multiscale entropy (MSE) is an effective algorithm for measuring the complexity of a time series that has been applied in many fields successfully. However, MSE may yield an inaccurate estimation of entropy or induce undefined entropy because the coarse-graining procedure reduces the length of a time series considerably at large scales. Composite multiscale entropy (CMSE) was recently proposed to improve the accuracy of MSE, but it does not resolve undefined entropy. Here we propose a refined composite multiscale entropy (RCMSE) to improve CMSE. For short time series analyses, we demonstrate that RCMSE increases the accuracy of entropy estimation and reduces the probability of inducing undefined entropy.
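
    A compact sketch of the refined composite coarse-graining makes the difference from plain MSE concrete: the template-match counts of all tau offset coarse-grained series are averaged before the logarithm is taken, which is what reduces the chance of an undefined value. The embedding dimension, tolerance and scale range below are conventional choices, not prescriptions from the paper.

```python
# Sketch of refined composite multiscale entropy (RCMSE): at scale tau, all tau
# offset coarse-grained series are used, and their template-match counts are
# averaged *before* the logarithm is taken.
import numpy as np

def _match_counts(x, m, r):
    """Template-pair counts within Chebyshev tolerance r, for lengths m and m+1."""
    n = len(x)
    counts = []
    for mm in (m, m + 1):
        templ = np.array([x[i:i + mm] for i in range(n - m)])
        c = 0
        for i in range(len(templ)):
            d = np.max(np.abs(templ[i + 1:] - templ[i]), axis=1)
            c += np.sum(d <= r)
        counts.append(c)
    return counts                                  # [n_m, n_{m+1}]

def rcmse(x, m=2, r_factor=0.15, scales=range(1, 21)):
    r = r_factor * np.std(x)
    out = {}
    for tau in scales:
        n_m = n_m1 = 0
        for k in range(tau):                       # the tau offset coarse-grainings
            n = (len(x) - k) // tau
            y = x[k:k + n * tau].reshape(n, tau).mean(axis=1)
            a, b = _match_counts(y, m, r)
            n_m, n_m1 = n_m + a, n_m1 + b
        out[tau] = -np.log(n_m1 / n_m) if n_m > 0 and n_m1 > 0 else np.nan
    return out

rng = np.random.default_rng(0)
print(rcmse(rng.standard_normal(1000), scales=range(1, 6)))
```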

  3. Multi-Scale Scattering Transform in Music Similarity Measuring

    Science.gov (United States)

    Wang, Ruobai

    The scattering transform is a Mel-frequency-spectrum-based, time-deformation-stable method that can be used in evaluating music similarity. Compared with dynamic time warping, it has better performance in detecting similar audio signals under local time-frequency deformation. Multi-scale scattering means combining scattering transforms of different window lengths. This paper argues that the multi-scale scattering transform is a good alternative to dynamic time warping in music similarity measuring. We tested the performance of the multi-scale scattering transform against other popular methods, with data designed to represent different conditions.

  4. Generalization Performance of Regularized Ranking With Multiscale Kernels.

    Science.gov (United States)

    Zhou, Yicong; Chen, Hong; Lan, Rushi; Pan, Zhibin

    2016-05-01

    The regularized kernel method for the ranking problem has attracted increasing attention in machine learning. Previous regularized ranking algorithms are usually based on reproducing kernel Hilbert spaces with a single kernel. In this paper, we go beyond this framework by investigating the generalization performance of regularized ranking with multiscale kernels. A novel ranking algorithm with multiscale kernels is proposed and its representer theorem is proved. We establish the upper bound of the generalization error in terms of the complexity of hypothesis spaces. It shows that the multiscale ranking algorithm can achieve satisfactory learning rates under mild conditions. Experiments demonstrate the effectiveness of the proposed method for drug discovery and recommendation tasks.
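
    The flavour of such an approach can be illustrated with a least-squares ranking objective over pairwise preferences, where the scoring function lives in the span of a multiscale kernel built as a sum of Gaussian kernels of several widths. This is only an illustration of the general idea, not the algorithm analysed in the paper; the kernel widths, margin and regularization parameter are assumptions.

```python
# Illustrative sketch (not the paper's algorithm): regularized least-squares
# ranking with a multiscale kernel built as a sum of Gaussian kernels.
import numpy as np

def multiscale_kernel(A, B, widths=(0.5, 1.0, 2.0)):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return sum(np.exp(-d2 / (2 * w ** 2)) for w in widths)

def fit_ranker(X, prefs, lam=1e-2, widths=(0.5, 1.0, 2.0)):
    """prefs: list of (i, j) pairs meaning item i should score higher than item j."""
    n = len(X)
    K = multiscale_kernel(X, X, widths)
    D = np.zeros((len(prefs), n))
    for row, (i, j) in enumerate(prefs):
        D[row, i], D[row, j] = 1.0, -1.0
    # minimise ||D K a - 1||^2 + lam * a^T K a   (margin of 1 on each preference)
    A = K @ D.T @ D @ K + lam * K
    b = K @ D.T @ np.ones(len(prefs))
    alpha = np.linalg.solve(A + 1e-10 * np.eye(n), b)
    return lambda Xq: multiscale_kernel(Xq, X, widths) @ alpha

# toy check: rank 1-D points by their value
rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(40, 1))
prefs = [(i, j) for i in range(40) for j in range(40) if X[i, 0] > X[j, 0] + 0.1]
score = fit_ranker(X, prefs)
order = np.argsort(-score(X).ravel())
print("top-ranked x values:", X[order[:5], 0])
```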

  5. Multi-scale salient feature extraction on mesh models

    KAUST Repository

    Yang, Yongliang; Shen, ChaoHui

    2012-01-01

    We present a new method of extracting multi-scale salient features on meshes. It is based on robust estimation of curvature on multiple scales. The correspondence between a salient feature and the scale of interest can be established straightforwardly: detailed features appear on small scales, while features carrying more global shape information show up on large scales. We demonstrate that this multi-scale description of features accords with human perception and can be further used for several applications, such as feature classification and viewpoint selection. Experiments show that our method is very helpful as a multi-scale analysis tool for studying 3D shapes. © 2012 Springer-Verlag.

  6. Automatic sets and Delone sets

    International Nuclear Information System (INIS)

    Barbe, A; Haeseler, F von

    2004-01-01

    Automatic sets D ⊂ Z^m are characterized by having a finite number of decimations. They are equivalently generated by fixed points of certain substitution systems, or by certain finite automata. As examples, two-dimensional versions of the Thue-Morse, Baum-Sweet, Rudin-Shapiro and paperfolding sequences are presented. We give a necessary and sufficient condition for an automatic set D ⊂ Z^m to be a Delone set in R^m. The result is then extended to automatic sets that are defined as fixed points of certain substitutions. The morphology of automatic sets is discussed by means of examples.

  7. Multiscale Modeling of Microbial Communities

    Science.gov (United States)

    Blanchard, Andrew

    Although bacteria are single-celled organisms, they exist in nature primarily in the form of complex communities, participating in a vast array of social interactions through regulatory gene networks. The social interactions between individual cells drive the emergence of community structures, resulting in an intricate relationship across multiple spatiotemporal scales. Here, I present my work towards developing and applying the tools necessary to model the complex dynamics of bacterial communities. In Chapter 2, I utilize a reaction-diffusion model to determine the population dynamics for a population with two species. One species (CDI+) utilizes contact dependent inhibition to kill the other sensitive species (CDI-). The competition can produce diverse patterns, including extinction, coexistence, and localized aggregation. The emergence, relative abundance, and characteristic features of these patterns are collectively determined by the competitive benefit of CDI and its growth disadvantage for a given rate of population diffusion. The results provide a systematic and statistical view of CDI-based bacterial population competition, expanding the spectrum of our knowledge about CDI systems and possibly facilitating new experimental tests for a deeper understanding of bacterial interactions. In the following chapter, I present a systematic computational survey on the relationship between social interaction types and population structures for two-species communities by developing and utilizing a hybrid computational framework that combines discrete element techniques with reaction-diffusion equations. The impact of deleterious and beneficial interactions on the community are quantified. Deleterious interactions generate an increased variance in relative abundance, a drastic decrease in surviving lineages, and a rough expanding front. In contrast, beneficial interactions contribute to a reduced variance in relative abundance, an enhancement in lineage number, and a
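
    A minimal reaction-diffusion sketch of the two-species competition conveys the kind of model referred to above: a CDI+ population that pays a growth cost but kills the sensitive CDI- population on contact. The rates, the cost term and the contact-killing term are illustrative assumptions, not the equations of the thesis.

```python
# Minimal two-species reaction-diffusion sketch (assumed rates, not the thesis's
# model): CDI+ pays a growth cost but kills CDI- in proportion to local contact.
import numpy as np

def laplacian(u):                                  # 5-point stencil, periodic boundaries
    return (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
            np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4 * u)

def step(p, s, D=0.1, r=1.0, K=1.0, kill=0.6, cost=0.2, dt=0.05):
    total = p + s
    dp = D * laplacian(p) + (r - cost) * p * (1 - total / K)        # CDI+ growth with cost
    ds = D * laplacian(s) + r * s * (1 - total / K) - kill * p * s  # contact-dependent killing
    return p + dt * dp, s + dt * ds

n = 128
rng = np.random.default_rng(2)
p = 0.1 * (rng.random((n, n)) < 0.05)              # sparse CDI+ inoculum
s = 0.1 * (rng.random((n, n)) < 0.05)              # sparse CDI- inoculum
for _ in range(2000):
    p, s = step(p, s)
print("CDI+ fraction:", p.sum() / (p.sum() + s.sum()))
```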

  8. Toward multiscale modelings of grain-fluid systems

    Science.gov (United States)

    Chareyre, Bruno; Yuan, Chao; Montella, Eduard P.; Salager, Simon

    2017-06-01

    Computationally efficient methods have been developed for simulating partially saturated granular materials in the pendular regime. In contrast, one can hardly avoid expensive direct resolution of two-phase fluid dynamics problems for mixed pendular-funicular situations or even saturated regimes. Following previous developments for single-phase flow, a pore-network approach to the coupling problems is described. The geometry and movements of phases and interfaces are described on the basis of a tetrahedrization of the pore space, introducing elementary objects such as bridge, meniscus, pore body and pore throat, together with local rules of evolution. As firmly established local rules are still missing for some aspects (entry capillary pressure and pore-scale pressure-saturation relations, forces on the grains, or kinetics of transfers in mixed situations), a multi-scale numerical framework is introduced, enhancing the pore-network approach with the help of direct simulations. Small subsets of a granular system are extracted, in which multiphase scenarios are solved using the lattice Boltzmann method (LBM). In turn, a global problem is assembled and solved at the network scale, as illustrated by a simulated primary drainage.

  9. Multiscale Analysis of the Predictability of Stock Returns

    Directory of Open Access Journals (Sweden)

    Paweł Fiedor

    2015-06-01

    Full Text Available Due to the strong complexity of financial markets, economics does not have a unified theory of price formation in financial markets. The most common assumption is the Efficient-Market Hypothesis, which has been attacked by a number of researchers, using different tools. There were varying degrees to which these tools complied with the formal definitions of efficiency and predictability. In our earlier work, we analysed the predictability of stock returns at two time scales using the entropy rate, which can be directly linked to the mathematical definition of predictability. Nonetheless, none of the above-mentioned studies allow any general understanding of how the financial markets work, beyond disproving the Efficient-Market Hypothesis. In our previous study, we proposed the Maximum Entropy Production Principle, which uses the entropy rate to create a general principle underlying the price formation processes. Both of these studies show that the predictability of price changes is higher at the transaction level intraday scale than the scale of daily returns, but ignore all scales in between. In this study we extend these ideas using the multiscale entropy analysis framework to enhance our understanding of the predictability of price formation processes at various time scales.

  10. A multiscale crack-bridging model of cellulose nanopaper

    Science.gov (United States)

    Meng, Qinghua; Li, Bo; Li, Teng; Feng, Xi-Qiao

    2017-06-01

    The conflict between strength and toughness is a long-standing challenge in advanced materials design. Recently, a fundamental bottom-up material design strategy has been demonstrated using cellulose nanopaper to achieve significant simultaneous increase in both strength and toughness. Fertile opportunities of such a design strategy aside, mechanistic understanding is much needed to thoroughly explore its full potential. To this end, here we establish a multiscale crack-bridging model to reveal the toughening mechanisms in cellulose nanopaper. A cohesive law is developed to characterize the interfacial properties between cellulose nanofibrils by considering their hydrogen bonding nature. In the crack-bridging zone, the hydrogen bonds between neighboring cellulose nanofibrils may break and reform at the molecular scale, rendering a superior toughness at the macroscopic scale. It is found that cellulose nanofibrils exhibit a distinct size-dependence in enhancing the fracture toughness of cellulose nanopaper. An optimal range of the length-to-radius ratio of nanofibrils is required to achieve higher fracture toughness of cellulose nanopaper. A unified law is proposed to correlate the fracture toughness of cellulose nanopaper with its microstructure and material parameters. The results obtained from this model agree well with relevant experiments. This work not only helps decipher the fundamental mechanisms underlying the remarkable mechanical properties of cellulose nanopaper but also provides a guide to design a wide range of advanced functional materials.

  11. Theoretical insights into multiscale electronic processes in organic photovoltaics

    Science.gov (United States)

    Tretiak, Sergei

    Present-day electronic devices are enabled by the design and implementation of precise interfaces that control the flow of charge carriers. This requires robust and predictive multiscale approaches for the theoretical description of the underlying complex phenomena. Combined with thorough experimental studies, such approaches provide a reliable estimate of the physical properties of nanostructured materials and enable a rational design of devices. From this perspective I will discuss first-principles modeling of small-molecule bulk-heterojunction organic solar cells and push-pull chromophores for tunable-color organic light emitters. The emphasis is on electronic processes involving intra- and intermolecular energy or charge transfer driven by the strong electron-phonon coupling inherent to pi-conjugated systems. Finally I will describe how precise manipulation and control of organic-organic interfaces in a photovoltaic device can increase its power conversion efficiency by 2-5 times in a model bilayer system. Applications of these design principles to practical architectures like bulk heterojunction devices lead to an enhancement in power conversion efficiency from 4.0% to 7.0%. These interface manipulation strategies are universally applicable to any donor-acceptor interface, making them both fundamentally interesting and technologically important for achieving high-efficiency organic electronic devices.

  12. Dynamical glucometry: Use of multiscale entropy analysis in diabetes

    Science.gov (United States)

    Costa, Madalena D.; Henriques, Teresa; Munshi, Medha N.; Segal, Alissa R.; Goldberger, Ary L.

    2014-09-01

    Diabetes mellitus (DM) is one of the world's most prevalent medical conditions. Contemporary management focuses on lowering mean blood glucose values toward a normal range, but largely ignores the dynamics of glucose fluctuations. We probed analyte time series obtained from continuous glucose monitor (CGM) sensors. We show that the fluctuations in CGM values sampled every 5 min are not uncorrelated noise. Next, using multiscale entropy analysis, we quantified the complexity of the temporal structure of the CGM time series from a group of elderly subjects with type 2 DM and age-matched controls. We further probed the structure of these CGM time series using detrended fluctuation analysis. Our findings indicate that the dynamics of glucose fluctuations from control subjects are more complex than those of subjects with type 2 DM over time scales ranging from about 5 min to 5 h. These findings support consideration of a new framework, dynamical glucometry, to guide mechanistic research and to help assess and compare therapeutic interventions, which should enhance complexity of glucose fluctuations and not just lower mean and variance of blood glucose levels.
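
    Detrended fluctuation analysis, used above alongside multiscale entropy, is compact enough to sketch. The window sizes below are illustrative; a scaling exponent near 0.5 indicates uncorrelated fluctuations, which is precisely what the CGM series are reported not to be.

```python
# Compact detrended fluctuation analysis (DFA) sketch: the scaling of the
# detrended fluctuation F(n) with window size n characterises correlations
# in a series (alpha ~ 0.5 for uncorrelated noise).
import numpy as np

def dfa(x, windows=(4, 8, 16, 32, 64, 128)):
    y = np.cumsum(x - np.mean(x))                  # integrated (profile) series
    F = []
    for n in windows:
        m = len(y) // n
        segs = y[:m * n].reshape(m, n)
        t = np.arange(n)
        f2 = []
        for seg in segs:
            coef = np.polyfit(t, seg, 1)           # local linear trend
            f2.append(np.mean((seg - np.polyval(coef, t)) ** 2))
        F.append(np.sqrt(np.mean(f2)))
    alpha = np.polyfit(np.log(windows), np.log(F), 1)[0]
    return alpha, dict(zip(windows, F))

rng = np.random.default_rng(0)
alpha, _ = dfa(rng.standard_normal(4096))
print("white-noise alpha ~ 0.5:", round(alpha, 2))
```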

  13. Automatic identification in mining

    Energy Technology Data Exchange (ETDEWEB)

    Puckett, D; Patrick, C [Mine Computers and Electronics Inc., Morehead, KY (United States)

    1998-06-01

    The feasibility of monitoring the locations and vital statistics of equipment and personnel in surface and underground mining operations has increased with advancements in radio frequency identification (RFID) technology. This paper addresses the use of RFID technology, which is relatively new to the mining industry, to track surface equipment in mine pits, loading points and processing facilities. Specific applications are discussed, including both simplified and complex truck tracking systems and an automatic pit ticket system. This paper concludes with a discussion of the future possibilities of using RFID technology in mining including monitoring heart and respiration rates, body temperatures and exertion levels; monitoring repetitious movements for the study of work habits; and logging air quality via personnel sensors. 10 refs., 5 figs.

  14. Automatic quantitative metallography

    International Nuclear Information System (INIS)

    Barcelos, E.J.B.V.; Ambrozio Filho, F.; Cunha, R.C.

    1976-01-01

    The quantitative determination of metallographic parameters is analysed through a description of the Micro-Videomat automatic image analysis system, applied to the volumetric percentage of perlite in nodular cast irons, porosity and average grain size in high-density sintered pellets of UO2, and grain size of ferritic steel. The techniques adopted are described and the results obtained are compared with the corresponding ones from direct counting processes: counting of systematic points (grid) to measure volume fractions, and the intersections method, utilizing a circumference of known radius, for the average grain size. The technique adopted for nodular cast iron follows from the small difference in optical reflectivity between graphite and perlite. Porosity evaluation of sintered UO2 pellets is also analyzed [pt

  15. Automatic surveying techniques

    International Nuclear Information System (INIS)

    Sah, R.

    1976-01-01

    In order to investigate the feasibility of automatic surveying methods in a more systematic manner, the PEP organization signed a contract in late 1975 for TRW Systems Group to undertake a feasibility study. The completion of this study resulted in TRW Report 6452.10-75-101, dated December 29, 1975, which was largely devoted to an analysis of a survey system based on an Inertial Navigation System. This PEP note is a review and, in some instances, an extension of that TRW report. A second survey system which employed an ''Image Processing System'' was also considered by TRW, and it will be reviewed in the last section of this note. 5 refs., 5 figs., 3 tabs

  16. AUTOMATIC ARCHITECTURAL STYLE RECOGNITION

    Directory of Open Access Journals (Sweden)

    M. Mathias

    2012-09-01

    Full Text Available Procedural modeling has proven to be a very valuable tool in the field of architecture. In the last few years, research has soared to automatically create procedural models from images. However, current algorithms for this process of inverse procedural modeling rely on the assumption that the building style is known. So far, the determination of the building style has remained a manual task. In this paper, we propose an algorithm which automates this process through classification of architectural styles from facade images. Our classifier first identifies the images containing buildings, then separates individual facades within an image and determines the building style. This information could then be used to initialize the building reconstruction process. We have trained our classifier to distinguish between several distinct architectural styles, namely Flemish Renaissance, Haussmannian and Neoclassical. Finally, we demonstrate our approach on various street-side images.
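
    The paper's own features and facade-separation step are not described here in enough detail to reproduce, so the following is only a generic baseline sketch for the final classification stage: HOG features from resized facade crops fed to a linear SVM over the three styles named above. The image size, HOG parameters, and the variables facade_crops and style_labels are hypothetical.

```python
# Generic baseline sketch (not the paper's classifier): HOG features from resized
# facade crops fed to a linear SVM over the three styles named in the abstract.
import numpy as np
from skimage.feature import hog
from skimage.transform import resize
from sklearn.svm import LinearSVC

STYLES = ["Flemish Renaissance", "Haussmannian", "Neoclassical"]

def facade_features(image, size=(128, 128)):
    """image: 2-D grey-level array of a single, already-separated facade."""
    img = resize(image, size, anti_aliasing=True)
    return hog(img, orientations=9, pixels_per_cell=(16, 16), cells_per_block=(2, 2))

def train_style_classifier(facades, labels):
    """labels: integer indices into STYLES (hypothetical training data)."""
    X = np.stack([facade_features(f) for f in facades])
    clf = LinearSVC()
    clf.fit(X, labels)
    return clf

# usage (hypothetical data):
#   clf = train_style_classifier(facade_crops, style_labels)
#   print(STYLES[clf.predict([facade_features(new_facade)])[0]])
```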

  17. Variational multiscale models for charge transport.

    Science.gov (United States)

    Wei, Guo-Wei; Zheng, Qiong; Chen, Zhan; Xia, Kelin

    2012-01-01

    This work presents a few variational multiscale models for charge transport in complex physical, chemical and biological systems and engineering devices, such as fuel cells, solar cells, battery cells, nanofluidics, transistors and ion channels. An essential ingredient of the present models, introduced in an earlier paper (Bulletin of Mathematical Biology, 72, 1562-1622, 2010), is the use of differential geometry theory of surfaces as a natural means to geometrically separate the macroscopic domain from the microscopic domain, meanwhile, dynamically couple discrete and continuum descriptions. Our main strategy is to construct the total energy functional of a charge transport system to encompass the polar and nonpolar free energies of solvation, and chemical potential related energy. By using the Euler-Lagrange variation, coupled Laplace-Beltrami and Poisson-Nernst-Planck (LB-PNP) equations are derived. The solution of the LB-PNP equations leads to the minimization of the total free energy, and explicit profiles of electrostatic potential and densities of charge species. To further reduce the computational complexity, the Boltzmann distribution obtained from the Poisson-Boltzmann (PB) equation is utilized to represent the densities of certain charge species so as to avoid the computationally expensive solution of some Nernst-Planck (NP) equations. Consequently, the coupled Laplace-Beltrami and Poisson-Boltzmann-Nernst-Planck (LB-PBNP) equations are proposed for charge transport in heterogeneous systems. A major emphasis of the present formulation is the consistency between equilibrium LB-PB theory and non-equilibrium LB-PNP theory at equilibrium. Another major emphasis is the capability of the reduced LB-PBNP model to fully recover the prediction of the LB-PNP model at non-equilibrium settings. To account for the fluid impact on the charge transport, we derive coupled Laplace-Beltrami, Poisson-Nernst-Planck and Navier-Stokes equations from the variational principle

  19. RBF Multiscale Collocation for Second Order Elliptic Boundary Value Problems

    KAUST Repository

    Farrell, Patricio; Wendland, Holger

    2013-01-01

    In this paper, we discuss multiscale radial basis function collocation methods for solving elliptic partial differential equations on bounded domains. The approximate solution is constructed in a multilevel fashion, each level using compactly

  20. A multiscale mortar multipoint flux mixed finite element method

    KAUST Repository

    Wheeler, Mary Fanett; Xue, Guangri; Yotov, Ivan

    2012-01-01

    In this paper, we develop a multiscale mortar multipoint flux mixed finite element method for second order elliptic problems. The equations in the coarse elements (or subdomains) are discretized on a fine grid scale by a multipoint flux mixed finite

  1. Fast Multiscale Reservoir Simulations using POD-DEIM Model Reduction

    KAUST Repository

    Ghasemi, Mohammadreza; Yang, Yanfang; Gildin, Eduardo; Efendiev, Yalchin R.; Calo, Victor M.

    2015-01-01

    snapshots are inexpensively computed using local model reduction techniques based on Generalized Multiscale Finite Element Method (GMsFEM) which provides (1) a hierarchical approximation of snapshot vectors (2) adaptive computations by using coarse grids (3

  2. Multi-Scale Simulation of High Energy Density Ionic Liquids

    National Research Council Canada - National Science Library

    Voth, Gregory A

    2007-01-01

    The focus of this AFOSR project was the molecular dynamics (MD) simulation of ionic liquid structure, dynamics, and interfacial properties, as well as multi-scale descriptions of these novel liquids (e.g...

  3. Multi-scale modeling strategies in materials science—The ...

    Indian Academy of Sciences (India)

    Unknown

    Keywords: multi-scale models; quasicontinuum method; finite elements.

  4. Lifetime statistics of quantum chaos studied by a multiscale analysis

    KAUST Repository

    Di Falco, A.; Krauss, T. F.; Fratalocchi, Andrea

    2012-01-01

    on a silicon-on-insulator substrate. We calculate resonances through a multiscale procedure that combines energy landscape analysis and wavelet transforms. Experimental data is found to follow the universal predictions arising from random matrix theory

  5. Randomized Oversampling for Generalized Multiscale Finite Element Methods

    KAUST Repository

    Calo, Victor M.; Efendiev, Yalchin R.; Galvis, Juan; Li, Guanglian

    2016-01-01

    boundary conditions defined in a domain larger than the target region. Furthermore, we perform an eigenvalue decomposition in this small space. We study the application of randomized sampling for GMsFEM in conjunction with adaptivity, where local multiscale

  6. Toward the multiscale nature of stress corrosion cracking

    Directory of Open Access Journals (Sweden)

    Xiaolong Liu

    2018-02-01

    Full Text Available This article reviews the multiscale nature of stress corrosion cracking (SCC observed by high-resolution characterizations in austenite stainless steels and Ni-base superalloys in light water reactors (including boiling water reactors, pressurized water reactors, and supercritical water reactors with related opinions. A new statistical summary and comparison of observed degradation phenomena at different length scales is included. The intrinsic causes of this multiscale nature of SCC are discussed based on existing evidence and related opinions, ranging from materials theory to practical processing technologies. Questions of interest are then discussed to improve bottom-up understanding of the intrinsic causes. Last, a multiscale modeling and simulation methodology is proposed as a promising interdisciplinary solution to understand the intrinsic causes of the multiscale nature of SCC in light water reactors, based on a review of related supporting application evidence.

  7. Multiscale model reduction for shale gas transport in fractured media

    KAUST Repository

    Akkutlu, I. Y.; Efendiev, Yalchin R.; Vasilyeva, Maria

    2016-01-01

    fracture distributions on an unstructured grid; (2) develop GMsFEM for nonlinear flows; and (3) develop online basis function strategies to adaptively improve the convergence. The number of multiscale basis functions in each coarse region represents

  8. Distributed Multiscale Data Analysis and Processing for Sensor Networks

    National Research Council Canada - National Science Library

    Wagner, Raymond; Sarvotham, Shriram; Choi, Hyeokho; Baraniuk, Richard

    2005-01-01

    .... Second, the communication overhead of multiscale algorithms can become prohibitive. In this paper, we take a first step in addressing both shortcomings by introducing two new distributed multiresolution transforms...

  9. Examining Multiscale Movement Coordination in Collaborative Problem Solving

    DEFF Research Database (Denmark)

    Wiltshire, Travis; Steffensen, Sune Vork

    2017-01-01

    During collaborative problem solving (CPS), coordination occurs at different spatial and temporal scales. This multiscale coordination should, at least on some scales, play a functional role in facilitating effective collaboration outcomes. To evaluate this, we conducted a study of computer...

  10. Novel multiscale modeling tool applied to Pseudomonas aeruginosa biofilm formation.

    Directory of Open Access Journals (Sweden)

    Matthew B Biggs

    Full Text Available Multiscale modeling is used to represent biological systems with increasing frequency and success. Multiscale models are often hybrids of different modeling frameworks and programming languages. We present the MATLAB-NetLogo extension (MatNet as a novel tool for multiscale modeling. We demonstrate the utility of the tool with a multiscale model of Pseudomonas aeruginosa biofilm formation that incorporates both an agent-based model (ABM and constraint-based metabolic modeling. The hybrid model correctly recapitulates oxygen-limited biofilm metabolic activity and predicts increased growth rate via anaerobic respiration with the addition of nitrate to the growth media. In addition, a genome-wide survey of metabolic mutants and biofilm formation exemplifies the powerful analyses that are enabled by this computational modeling tool.

  11. Novel multiscale modeling tool applied to Pseudomonas aeruginosa biofilm formation.

    Science.gov (United States)

    Biggs, Matthew B; Papin, Jason A

    2013-01-01

    Multiscale modeling is used to represent biological systems with increasing frequency and success. Multiscale models are often hybrids of different modeling frameworks and programming languages. We present the MATLAB-NetLogo extension (MatNet) as a novel tool for multiscale modeling. We demonstrate the utility of the tool with a multiscale model of Pseudomonas aeruginosa biofilm formation that incorporates both an agent-based model (ABM) and constraint-based metabolic modeling. The hybrid model correctly recapitulates oxygen-limited biofilm metabolic activity and predicts increased growth rate via anaerobic respiration with the addition of nitrate to the growth media. In addition, a genome-wide survey of metabolic mutants and biofilm formation exemplifies the powerful analyses that are enabled by this computational modeling tool.

  12. A Unification of Inheritance and Automatic Program Specialization

    DEFF Research Database (Denmark)

    Schultz, Ulrik Pagh

    2004-01-01

    The object-oriented style of programming facilitates program adaptation and enhances program genericness, but at the expense of efficiency. Automatic program specialization can be used to generate specialized, efficient implementations for specific scenarios, but requires the program ... to be structured appropriately for specialization and is yet another new concept for the programmer to understand and apply. We have unified automatic program specialization and inheritance into a single concept, and implemented this approach in a modified version of Java named JUST. When programming in JUST ..., inheritance is used to control the automatic application of program specialization to class members during compilation to obtain an efficient implementation. This paper presents the language JUST, which integrates object-oriented concepts, block structure, and techniques from automatic program specialization ...

  13. An approach to multiscale modelling with graph grammars.

    Science.gov (United States)

    Ong, Yongzhi; Streit, Katarína; Henke, Michael; Kurth, Winfried

    2014-09-01

    Functional-structural plant models (FSPMs) simulate biological processes at different spatial scales. Methods exist for multiscale data representation and modification, but the advantages of using multiple scales in the dynamic aspects of FSPMs remain unclear. Results from multiscale models in various other areas of science that share fundamental modelling issues with FSPMs suggest that potential advantages do exist, and this study therefore aims to introduce an approach to multiscale modelling in FSPMs. A three-part graph data structure and grammar is revisited, and presented with a conceptual framework for multiscale modelling. The framework is used for identifying roles, categorizing and describing scale-to-scale interactions, thus allowing alternative approaches to model development as opposed to correlation-based modelling at a single scale. Reverse information flow (from macro- to micro-scale) is catered for in the framework. The methods are implemented within the programming language XL. Three example models are implemented using the proposed multiscale graph model and framework. The first illustrates the fundamental usage of the graph data structure and grammar, the second uses probabilistic modelling for organs at the fine scale in order to derive crown growth, and the third combines multiscale plant topology with ozone trends and metabolic network simulations in order to model juvenile beech stands under exposure to a toxic trace gas. The graph data structure supports data representation and grammar operations at multiple scales. The results demonstrate that multiscale modelling is a viable method in FSPM and an alternative to correlation-based modelling. Advantages and disadvantages of multiscale modelling are illustrated by comparisons with single-scale implementations, leading to motivations for further research in sensitivity analysis and run-time efficiency for these models.

  14. Long-term Stable Conservative Multiscale Methods for Vortex Flows

    Science.gov (United States)

    2017-10-31

    Final report (1 August 2014 to 31 July 2017): Long-term Stable Conservative Multiscale Methods for Vortex Flows. The material has been given an OPSEC review and determined to be non-sensitive; distribution is unlimited.

  15. RFP for the Auroral Multiscale Midex (AMM) Mission star tracker

    DEFF Research Database (Denmark)

    Riis, Troels; Betto, Maurizio; Jørgensen, John Leif

    1999-01-01

    This document is in response to the Johns Hopkins University Applied Physics Laboratory RFP for the Auroral Multiscale Midex (AMM) Mission star tracker. It describes the functionality, the requirements and the performance of the ASC Star Tracker.

  16. MUSIC: MUlti-Scale Initial Conditions

    Science.gov (United States)

    Hahn, Oliver; Abel, Tom

    2013-11-01

    MUSIC generates multi-scale initial conditions with multiple levels of refinements for cosmological ‘zoom-in’ simulations. The code uses an adaptive convolution of Gaussian white noise with a real-space transfer function kernel together with an adaptive multi-grid Poisson solver to generate displacements and velocities following first- (1LPT) or second-order Lagrangian perturbation theory (2LPT). MUSIC achieves rms relative errors of the order of 10^-4 for displacements and velocities in the refinement region and thus improves in terms of errors by about two orders of magnitude over previous approaches. In addition, errors are localized at coarse-fine boundaries and do not suffer from Fourier space-induced interference ringing.
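
    Stripped of the adaptive multi-grid refinement and the Lagrangian perturbation theory steps, the core operation described above is a convolution of Gaussian white noise with a transfer-function kernel in Fourier space. The sketch below uses a placeholder power-law spectrum on a single uniform grid; it is not MUSIC's algorithm, only the basic single-level idea.

```python
# Single-level illustration of the core idea (MUSIC's adaptive multi-grid and
# 2LPT machinery are not reproduced): white noise convolved in Fourier space
# with a transfer-function kernel, here a placeholder power-law spectrum P(k) ~ k^n.
import numpy as np

def gaussian_field(n=128, spectral_index=-2.0, boxsize=1.0, seed=0):
    rng = np.random.default_rng(seed)
    noise = rng.standard_normal((n, n, n))
    k = 2 * np.pi * np.fft.fftfreq(n, d=boxsize / n)
    kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
    kmag = np.sqrt(kx ** 2 + ky ** 2 + kz ** 2)
    kmag[0, 0, 0] = 1.0                        # avoid division by zero at k = 0
    transfer = kmag ** (spectral_index / 2.0)  # amplitude ~ sqrt(P(k))
    transfer[0, 0, 0] = 0.0                    # remove the mean mode
    delta_k = np.fft.fftn(noise) * transfer
    return np.real(np.fft.ifftn(delta_k))

delta = gaussian_field()
print("density contrast rms:", delta.std())
```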

  17. Multi-scale modeling of composites

    DEFF Research Database (Denmark)

    Azizi, Reza

    A general method to obtain the homogenized response of metal-matrix composites is developed. It is assumed that the microscopic scale is sufficiently small compared to the macroscopic scale such that the macro response does not affect the micromechanical model. Therefore, the microscopic scale......-Mandel’s energy principle is used to find macroscopic operators based on micro-mechanical analyses using the finite element method under generalized plane strain condition. A phenomenologically macroscopic model for metal matrix composites is developed based on constitutive operators describing the elastic...... to plastic deformation. The macroscopic operators found, can be used to model metal matrix composites on the macroscopic scale using a hierarchical multi-scale approach. Finally, decohesion under tension and shear loading is studied using a cohesive law for the interface between matrix and fiber....

  18. Quantifying multiscale inefficiency in electricity markets

    Energy Technology Data Exchange (ETDEWEB)

    Uritskaya, Olga Y. [Department of Economics, University of Calgary, Calgary, Alberta T2N 1N4, and Department of Economics and Management, St. Petersburg Polytechnic University, St. Petersburg (Russian Federation); Serletis, Apostolos [Department of Economics, University of Calgary, Calgary, Alberta (Canada)

    2008-11-15

    One of the basic features of efficient markets is the absence of correlations between price increments over any time scale leading to random walk-type behavior of prices. In this paper, we propose a new approach for measuring deviations from the efficient market state based on an analysis of scale-dependent fractal exponent characterizing correlations at different time scales. The approach is applied to two electricity markets, Alberta and Mid Columbia (Mid-C), as well as to the AECO Alberta natural gas market (for purposes of providing a comparison between storable and non-storable commodities). We show that price fluctuations in all studied markets are not efficient, with electricity prices exhibiting complex multiscale correlated behavior not captured by monofractal methods used in previous studies. (author)
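
As a concrete illustration of a scale-dependent scaling exponent for price increments, the hedged sketch below uses detrended fluctuation analysis (DFA). DFA is one common estimator of this kind; the abstract does not specify the authors' exact method, so treat this only as an illustration of the concept.

```python
# Scale-dependent scaling exponent of a price-increment series via first-order
# DFA. For uncorrelated (efficient-market-like) increments the local exponent
# is close to 0.5 at all scales; deviations indicate correlations.
import numpy as np

def dfa_fluctuation(x, scales):
    profile = np.cumsum(x - np.mean(x))        # integrated series
    F = []
    for s in scales:
        n_seg = len(profile) // s
        rms = []
        for i in range(n_seg):
            seg = profile[i * s:(i + 1) * s]
            t = np.arange(s)
            trend = np.polyval(np.polyfit(t, seg, 1), t)
            rms.append(np.sqrt(np.mean((seg - trend) ** 2)))
        F.append(np.mean(rms))
    return np.array(F)

rng = np.random.default_rng(1)
increments = rng.standard_normal(5000)          # stand-in "efficient" increments
scales = np.array([16, 32, 64, 128, 256])
F = dfa_fluctuation(increments, scales)
# local (scale-dependent) exponent from consecutive scales
alpha_local = np.diff(np.log(F)) / np.diff(np.log(scales))
print(alpha_local)
```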

  19. On multiscale moving contact line theory.

    Science.gov (United States)

    Li, Shaofan; Fan, Houfu

    2015-07-08

    In this paper, a multiscale moving contact line (MMCL) theory is presented and employed to simulate liquid droplet spreading and capillary motion. The proposed MMCL theory combines a coarse-grained adhesive contact model with a fluid interface membrane theory, so that it can couple molecular scale adhesive interaction and surface tension with hydrodynamics of microscale flow. By doing so, the intermolecular force, the van der Waals or double layer force, separates and levitates the liquid droplet from the supporting solid substrate, which avoids the shear stress singularity caused by the no-slip condition in conventional hydrodynamics theory of moving contact line. Thus, the MMCL allows the difference of the surface energies and surface stresses to drive droplet spreading naturally. To validate the proposed MMCL theory, we have employed it to simulate droplet spreading over various elastic substrates. The numerical simulation results obtained by using MMCL are in good agreement with the molecular dynamics results reported in the literature.

  20. Numerical methods and analysis of multiscale problems

    CERN Document Server

    Madureira, Alexandre L

    2017-01-01

This book is about numerical modeling of multiscale problems, and introduces several asymptotic analysis and numerical techniques which are necessary for a proper approximation of equations that depend on different physical scales. Aimed at advanced undergraduate and graduate students in mathematics, engineering and physics, or researchers seeking a no-nonsense approach, it discusses examples in their simplest possible settings, removing mathematical hurdles that might hinder a clear understanding of the methods. The problems considered are given by singularly perturbed reaction-advection-diffusion equations in one- and two-dimensional domains, partial differential equations in domains with rough boundaries, and equations with oscillatory coefficients. This work shows how asymptotic analysis can be used to develop and analyze models and numerical methods that are robust and work well for a wide range of parameters.

  1. Quantifying multiscale inefficiency in electricity markets

    International Nuclear Information System (INIS)

    Uritskaya, Olga Y.; Serletis, Apostolos

    2008-01-01

    One of the basic features of efficient markets is the absence of correlations between price increments over any time scale leading to random walk-type behavior of prices. In this paper, we propose a new approach for measuring deviations from the efficient market state based on an analysis of scale-dependent fractal exponent characterizing correlations at different time scales. The approach is applied to two electricity markets, Alberta and Mid Columbia (Mid-C), as well as to the AECO Alberta natural gas market (for purposes of providing a comparison between storable and non-storable commodities). We show that price fluctuations in all studied markets are not efficient, with electricity prices exhibiting complex multiscale correlated behavior not captured by monofractal methods used in previous studies. (author)

  2. Multiscale modeling of three-dimensional genome

    Science.gov (United States)

    Zhang, Bin; Wolynes, Peter

The genome, the blueprint of life, contains nearly all the information needed to build and maintain an entire organism. A comprehensive understanding of the genome is of paramount interest to human health and will advance progress in many areas, including life sciences, medicine, and biotechnology. The overarching goal of my research is to understand the structure-dynamics-function relationships of the human genome. In this talk, I will be presenting our efforts in moving towards that goal, with a particular emphasis on studying the three-dimensional organization of the genome with multi-scale approaches. Specifically, I will discuss the reconstruction of genome structures at both interphase and metaphase by making use of data from chromosome conformation capture experiments. Computational modeling of the chromatin fiber at the atomistic level from first principles will also be presented as part of our effort to study the genome structure from the bottom up.

  3. Multiscale simulation approach for battery production systems

    CERN Document Server

    Schönemann, Malte

    2017-01-01

    Addressing the challenge of improving battery quality while reducing high costs and environmental impacts of the production, this book presents a multiscale simulation approach for battery production systems along with a software environment and an application procedure. Battery systems are among the most important technologies of the 21st century since they are enablers for the market success of electric vehicles and stationary energy storage solutions. However, the performance of batteries so far has limited possible applications. Addressing this challenge requires an interdisciplinary understanding of dynamic cause-effect relationships between processes, equipment, materials, and environmental conditions. The approach in this book supports the integrated evaluation of improvement measures and is usable for different planning horizons. It is applied to an exemplary battery cell production and module assembly in order to demonstrate the effectiveness and potential benefits of the simulation.

  4. Hybrid stochastic simplifications for multiscale gene networks

    Directory of Open Access Journals (Sweden)

    Debussche Arnaud

    2009-09-01

Background: Stochastic simulation of gene networks by Markov processes has important applications in molecular biology. The complexity of exact simulation algorithms scales with the number of discrete jumps to be performed. Approximate schemes reduce the computational time by reducing the number of simulated discrete events. Also, answering important questions about the relation between network topology and intrinsic noise generation and propagation should be based on general mathematical results. These general results are difficult to obtain for exact models. Results: We propose a unified framework for hybrid simplifications of Markov models of multiscale stochastic gene network dynamics. We discuss several possible hybrid simplifications, and provide algorithms to obtain them from pure jump processes. In hybrid simplifications, some components are discrete and evolve by jumps, while other components are continuous. Hybrid simplifications are obtained by partial Kramers-Moyal expansion, which is equivalent to the application of the central limit theorem to a sub-model. By averaging and variable aggregation we drastically reduce simulation time and eliminate non-critical reactions. Hybrid and averaged simplifications can be used for more effective simulation algorithms and for obtaining general design principles relating noise to topology and time scales. The simplified models reproduce with good accuracy the stochastic properties of the gene networks, including waiting times in intermittence phenomena, fluctuation amplitudes and stationary distributions. The methods are illustrated on several gene network examples. Conclusion: Hybrid simplifications can be used for onion-like (multi-layered) approaches to multi-scale biochemical systems, in which various descriptions are used at various scales. Sets of discrete and continuous variables are treated with different methods and are coupled together in a physically justified approach.
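
The toy sketch below conveys the flavor of such a hybrid simplification: a low-copy promoter state is kept as a discrete jump variable, while the abundant protein is propagated with a chemical Langevin (diffusion) approximation of the kind obtained from a partial Kramers-Moyal expansion. The model and all rate constants are invented for the example and are not taken from the paper.

```python
# Toy hybrid jump/Langevin simulation: discrete gene switching plus a
# continuous Euler-Maruyama step for protein copy number.
import numpy as np

rng = np.random.default_rng(0)
k_on, k_off = 0.05, 0.02        # promoter switching rates (discrete jumps)
k_prod, k_deg = 20.0, 0.1       # protein production / degradation
dt, T = 0.01, 200.0

gene_on, protein = 0, 0.0
t, traj = 0.0, []
while t < T:
    # discrete part: promoter switches with probability rate * dt
    rate = k_off if gene_on else k_on
    if rng.random() < rate * dt:
        gene_on = 1 - gene_on
    # continuous part: chemical Langevin step (Euler-Maruyama)
    prod = k_prod * gene_on
    deg = k_deg * protein
    drift = prod - deg
    noise = np.sqrt(max(prod + deg, 0.0) * dt) * rng.standard_normal()
    protein = max(protein + drift * dt + noise, 0.0)
    traj.append(protein)
    t += dt

print("mean protein level:", float(np.mean(traj)))
```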

  5. MULTISCALE DYNAMICS OF SOLAR MAGNETIC STRUCTURES

    International Nuclear Information System (INIS)

    Uritsky, Vadim M.; Davila, Joseph M.

    2012-01-01

    Multiscale topological complexity of the solar magnetic field is among the primary factors controlling energy release in the corona, including associated processes in the photospheric and chromospheric boundaries. We present a new approach for analyzing multiscale behavior of the photospheric magnetic flux underlying these dynamics as depicted by a sequence of high-resolution solar magnetograms. The approach involves two basic processing steps: (1) identification of timing and location of magnetic flux origin and demise events (as defined by DeForest et al.) by tracking spatiotemporal evolution of unipolar and bipolar photospheric regions, and (2) analysis of collective behavior of the detected magnetic events using a generalized version of the Grassberger-Procaccia correlation integral algorithm. The scale-free nature of the developed algorithms makes it possible to characterize the dynamics of the photospheric network across a wide range of distances and relaxation times. Three types of photospheric conditions are considered to test the method: a quiet photosphere, a solar active region (NOAA 10365) in a quiescent non-flaring state, and the same active region during a period of M-class flares. The results obtained show (1) the presence of a topologically complex asymmetrically fragmented magnetic network in the quiet photosphere driven by meso- and supergranulation, (2) the formation of non-potential magnetic structures with complex polarity separation lines inside the active region, and (3) statistical signatures of canceling bipolar magnetic structures coinciding with flaring activity in the active region. Each of these effects can represent an unstable magnetic configuration acting as an energy source for coronal dissipation and heating.
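
The pairwise correlation integral at the heart of the Grassberger-Procaccia analysis mentioned above is simple to state: C(r) is the fraction of event pairs separated by less than r. The minimal sketch below illustrates it on synthetic 2D points; the paper's generalized, scale-free version applied to detected magnetic flux events is not reproduced.

```python
# Minimal correlation-integral estimate; the slope of log C(r) vs log r over
# the scaling range estimates the correlation dimension.
import numpy as np

def correlation_integral(points, radii):
    n = len(points)
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    iu = np.triu_indices(n, k=1)                # distinct pairs only
    pair_d = d[iu]
    return np.array([(pair_d < r).mean() for r in radii])

rng = np.random.default_rng(2)
events = rng.random((500, 2))                   # stand-in event locations
radii = np.logspace(-2, 0, 10)
C = correlation_integral(events, radii)
slope = np.polyfit(np.log(radii[:6]), np.log(C[:6]), 1)[0]
print(float(slope))                             # close to 2 for uniform 2D points
```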

  6. Parallel multiscale simulations of a brain aneurysm

    Energy Technology Data Exchange (ETDEWEB)

    Grinberg, Leopold [Division of Applied Mathematics, Brown University, Providence, RI 02912 (United States); Fedosov, Dmitry A. [Institute of Complex Systems and Institute for Advanced Simulation, Forschungszentrum Jülich, Jülich 52425 (Germany); Karniadakis, George Em, E-mail: george_karniadakis@brown.edu [Division of Applied Mathematics, Brown University, Providence, RI 02912 (United States)

    2013-07-01

    Cardiovascular pathologies, such as a brain aneurysm, are affected by the global blood circulation as well as by the local microrheology. Hence, developing computational models for such cases requires the coupling of disparate spatial and temporal scales often governed by diverse mathematical descriptions, e.g., by partial differential equations (continuum) and ordinary differential equations for discrete particles (atomistic). However, interfacing atomistic-based with continuum-based domain discretizations is a challenging problem that requires both mathematical and computational advances. We present here a hybrid methodology that enabled us to perform the first multiscale simulations of platelet depositions on the wall of a brain aneurysm. The large scale flow features in the intracranial network are accurately resolved by using the high-order spectral element Navier–Stokes solver NεκTαr. The blood rheology inside the aneurysm is modeled using a coarse-grained stochastic molecular dynamics approach (the dissipative particle dynamics method) implemented in the parallel code LAMMPS. The continuum and atomistic domains overlap with interface conditions provided by effective forces computed adaptively to ensure continuity of states across the interface boundary. A two-way interaction is allowed with the time-evolving boundary of the (deposited) platelet clusters tracked by an immersed boundary method. The corresponding heterogeneous solvers (NεκTαr and LAMMPS) are linked together by a computational multilevel message passing interface that facilitates modularity and high parallel efficiency. Results of multiscale simulations of clot formation inside the aneurysm in a patient-specific arterial tree are presented. We also discuss the computational challenges involved and present scalability results of our coupled solver on up to 300 K computer processors. Validation of such coupled atomistic-continuum models is a main open issue that has to be addressed in

  7. Multiscale sampling model for motion integration.

    Science.gov (United States)

    Sherbakov, Lena; Yazdanbakhsh, Arash

    2013-09-30

Biologically plausible strategies for visual scene integration across spatial and temporal domains continue to be a challenging topic. The fundamental question we address is whether classical problems in motion integration, such as the aperture problem, can be solved in a model that samples the visual scene at multiple spatial and temporal scales in parallel. We hypothesize that fast interareal connections that allow feedback of information between cortical layers are the key processes that disambiguate motion direction. We developed a neural model showing how the aperture problem can be solved using different spatial sampling scales between LGN, V1 layer 4, V1 layer 6, and area MT. Our results suggest that multiscale sampling, rather than feedback explicitly, is the key process that gives rise to end-stopped cells in V1 and enables area MT to solve the aperture problem without the need for calculating intersecting constraints or crafting intricate patterns of spatiotemporal receptive fields. Furthermore, the model explains why end-stopped cells no longer emerge in the absence of V1 layer 6 activity (Bolz & Gilbert, 1986), why V1 layer 4 cells are significantly more end-stopped than V1 layer 6 cells (Pack, Livingstone, Duffy, & Born, 2003), and how it is possible to have a solution to the aperture problem in area MT with no solution in V1 in the presence of driving feedback. In summary, while much research in the field focuses on how a laminar architecture can give rise to complicated spatiotemporal receptive fields to solve problems in the motion domain, we show that one can reframe motion integration as an emergent property of multiscale sampling achieved concurrently within lamina and across multiple visual areas.

  8. Parallel multiscale simulations of a brain aneurysm

    International Nuclear Information System (INIS)

    Grinberg, Leopold; Fedosov, Dmitry A.; Karniadakis, George Em

    2013-01-01

    Cardiovascular pathologies, such as a brain aneurysm, are affected by the global blood circulation as well as by the local microrheology. Hence, developing computational models for such cases requires the coupling of disparate spatial and temporal scales often governed by diverse mathematical descriptions, e.g., by partial differential equations (continuum) and ordinary differential equations for discrete particles (atomistic). However, interfacing atomistic-based with continuum-based domain discretizations is a challenging problem that requires both mathematical and computational advances. We present here a hybrid methodology that enabled us to perform the first multiscale simulations of platelet depositions on the wall of a brain aneurysm. The large scale flow features in the intracranial network are accurately resolved by using the high-order spectral element Navier–Stokes solver NεκTαr. The blood rheology inside the aneurysm is modeled using a coarse-grained stochastic molecular dynamics approach (the dissipative particle dynamics method) implemented in the parallel code LAMMPS. The continuum and atomistic domains overlap with interface conditions provided by effective forces computed adaptively to ensure continuity of states across the interface boundary. A two-way interaction is allowed with the time-evolving boundary of the (deposited) platelet clusters tracked by an immersed boundary method. The corresponding heterogeneous solvers (NεκTαr and LAMMPS) are linked together by a computational multilevel message passing interface that facilitates modularity and high parallel efficiency. Results of multiscale simulations of clot formation inside the aneurysm in a patient-specific arterial tree are presented. We also discuss the computational challenges involved and present scalability results of our coupled solver on up to 300 K computer processors. Validation of such coupled atomistic-continuum models is a main open issue that has to be addressed in

  9. Four-Channel Biosignal Analysis and Feature Extraction for Automatic Emotion Recognition

    Science.gov (United States)

    Kim, Jonghwa; André, Elisabeth

This paper investigates the potential of physiological signals as a reliable channel for automatic recognition of a user's emotional state. For emotion recognition, little attention has so far been paid to physiological signals compared to audio-visual emotion channels such as facial expression or speech. All essential stages of an automatic recognition system using biosignals are discussed, from recording the physiological dataset up to feature-based multiclass classification. Four-channel biosensors are used to measure electromyogram, electrocardiogram, skin conductivity and respiration changes. A wide range of physiological features from various analysis domains, including time/frequency, entropy, geometric analysis, subband spectra, multiscale entropy, etc., is proposed in order to identify the best emotion-relevant features and to correlate them with emotional states. The best features extracted are specified in detail and their effectiveness is proven by emotion recognition results.
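
One of the feature families mentioned above, multiscale entropy, coarse-grains the signal at increasing scales and computes sample entropy at each scale. The sketch below is a common simplified implementation; the parameter choices (m=2, r=0.2*std) are standard defaults, not values taken from the paper, and the synthetic signal is a placeholder for a recorded biosignal.

```python
# Multiscale entropy: coarse-graining followed by sample entropy per scale.
import numpy as np

def sample_entropy(x, m=2, r=0.2):
    x = np.asarray(x, dtype=float)
    tol = r * np.std(x)

    def similar_pairs(mm):
        # count pairs of length-mm templates whose Chebyshev distance <= tol
        templates = np.array([x[i:i + mm] for i in range(len(x) - mm)])
        d = np.max(np.abs(templates[:, None, :] - templates[None, :, :]), axis=-1)
        iu = np.triu_indices(len(templates), k=1)
        return np.sum(d[iu] <= tol)

    B, A = similar_pairs(m), similar_pairs(m + 1)
    return -np.log(A / B) if A > 0 and B > 0 else np.inf

def multiscale_entropy(x, max_scale=5):
    values = []
    for tau in range(1, max_scale + 1):
        n = len(x) // tau
        coarse = np.asarray(x[:n * tau], dtype=float).reshape(n, tau).mean(axis=1)
        values.append(sample_entropy(coarse))
    return values

rng = np.random.default_rng(3)
biosignal = np.cumsum(rng.standard_normal(600))   # stand-in biosignal
print(multiscale_entropy(biosignal))
```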

  10. Development of an automatic reactor inspection system

    International Nuclear Information System (INIS)

    Kim, Jae Hee; Eom, Heung Seop; Lee, Jae Cheol; Choi, Yoo Raek; Moon, Soon Seung

    2002-02-01

Using recent technologies in mobile robotics and computer science, we developed an automatic inspection system for weld lines of the reactor vessel. The ultrasonic inspection of the reactor pressure vessel is currently performed by commercialized robot manipulators. However, since the conventional fixed-type robot manipulator is very large, heavy and expensive, it needs a long inspection time and is hard to handle and maintain. In order to resolve these problems, we developed a new automatic inspection system using a small mobile robot crawling on the vertical wall of the reactor vessel. According to our conceptual design, we developed the reactor inspection system including an underwater inspection robot, a laser position control subsystem, an ultrasonic data acquisition/analysis subsystem and a main control subsystem. We successfully carried out underwater experiments on the reactor vessel mockup and on a real reactor vessel prepared for Ulchin nuclear power plant unit 6 at Doosan Heavy Industries in Korea. After this project, we plan to commercialize our inspection system. Using this system, we can expect a considerable reduction of the inspection time, performance enhancement, automatic management of inspection history, etc. From an economic point of view, we can also expect import substitution of more than 4 million dollars. The established essential technologies for intelligent control and automation are expected to be applied to the automation of similar systems in nuclear power plants

  11. Local contrast-enhanced MR images via high dynamic range processing.

    Science.gov (United States)

    Chandra, Shekhar S; Engstrom, Craig; Fripp, Jurgen; Neubert, Ales; Jin, Jin; Walker, Duncan; Salvado, Olivier; Ho, Charles; Crozier, Stuart

    2018-09-01

To develop a local contrast-enhancing and feature-preserving high dynamic range (HDR) image processing algorithm for multichannel and multisequence MR images of multiple body regions and tissues, and to evaluate its performance for structure visualization, bias field (correction) mitigation, and automated tissue segmentation. A multiscale-shape and detail-enhancement HDR-MRI algorithm is applied to data sets of multichannel and multisequence MR images of the brain, knee, breast, and hip. In multisequence 3T hip images, agreement between automatic cartilage segmentations and corresponding synthesized HDR-MRI series was computed for mean voxel overlap established from manual segmentations for a series of cases. Qualitative comparisons between the developed HDR-MRI and standard synthesis methods were performed on multichannel 7T brain and knee data, and multisequence 3T breast and knee data. The synthesized HDR-MRI series provided excellent enhancement of fine-scale structure from multiple scales and contrasts, while substantially reducing bias field effects in 7T brain gradient echo, T1 and T2 breast images and 7T knee multichannel images. Evaluation of the HDR-MRI approach on 3T hip multisequence images showed superior outcomes for automatic cartilage segmentations with respect to manual segmentation, particularly around regions with hyperintense synovial fluid, across a set of 3D sequences. The successful combination of multichannel/sequence MR images into a single-fused HDR-MR image format provided consolidated visualization of tissues within one omnibus image, enhanced definition of thin, complex anatomical structures in the presence of variable or hyperintense signals, and improved tissue (cartilage) segmentation outcomes. © 2018 International Society for Magnetic Resonance in Medicine.

  12. Multiscale bilateral filtering for improving image quality in digital breast tomosynthesis

    Science.gov (United States)

    Lu, Yao; Chan, Heang-Ping; Wei, Jun; Hadjiiski, Lubomir M.; Samala, Ravi K.

    2015-01-01

    Purpose: Detection of subtle microcalcifications in digital breast tomosynthesis (DBT) is a challenging task because of the large, noisy DBT volume. It is important to enhance the contrast-to-noise ratio (CNR) of microcalcifications in DBT reconstruction. Most regularization methods depend on local gradient and may treat the ill-defined margins or subtle spiculations of masses and subtle microcalcifications as noise because of their small gradient. The authors developed a new multiscale bilateral filtering (MSBF) regularization method for the simultaneous algebraic reconstruction technique (SART) to improve the CNR of microcalcifications without compromising the quality of masses. Methods: The MSBF exploits a multiscale structure of DBT images to suppress noise and selectively enhance high frequency structures. At the end of each SART iteration, every DBT slice is decomposed into several frequency bands via Laplacian pyramid decomposition. No regularization is applied to the low frequency bands so that subtle edges of masses and structured background are preserved. Bilateral filtering is applied to the high frequency bands to enhance microcalcifications while suppressing noise. The regularized DBT images are used for updating in the next SART iteration. The new MSBF method was compared with the nonconvex total p-variation (TpV) method for noise regularization with SART. A GE GEN2 prototype DBT system was used for acquisition of projections at 21 angles in 3° increments over a ±30° range. The reconstruction image quality with no regularization (NR) and that with the two regularization methods were compared using the DBT scans of a heterogeneous breast phantom and several human subjects with masses and microcalcifications. The CNR and the full width at half maximum (FWHM) of the line profiles of microcalcifications and across the spiculations within their in-focus DBT slices were used as image quality measures. Results: The MSBF method reduced contouring artifacts
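
The regularization structure described above can be sketched compactly: decompose a slice into frequency bands, leave the low-frequency residual untouched, bilateral-filter the high-frequency bands, and recombine. The simplified sketch below uses a difference-of-Gaussians band decomposition in place of a true Laplacian pyramid, and omits the SART update loop, the DBT geometry and the paper's actual parameters.

```python
# Simplified multiscale bilateral filtering of a single 2D slice.
import numpy as np
from scipy.ndimage import gaussian_filter

def bilateral(img, sigma_s=2.0, sigma_r=0.1, radius=3):
    # brute-force bilateral filter with periodic boundaries (illustration only)
    acc = np.zeros_like(img)
    wsum = np.zeros_like(img)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            shifted = np.roll(np.roll(img, dy, axis=0), dx, axis=1)
            w = (np.exp(-(dx**2 + dy**2) / (2 * sigma_s**2))
                 * np.exp(-((shifted - img) ** 2) / (2 * sigma_r**2)))
            acc += w * shifted
            wsum += w
    return acc / wsum

def msbf_regularize(slice_img, levels=3):
    # band decomposition: successive Gaussian smoothing, differences kept as bands
    bands, current = [], slice_img
    for _ in range(levels):
        low = gaussian_filter(current, sigma=2.0)
        bands.append(current - low)          # high-frequency band
        current = low
    # filter only the high-frequency bands, keep the low band unchanged
    filtered = [bilateral(b) for b in bands]
    out = current
    for b in filtered:
        out = out + b
    return out

slice_img = np.random.default_rng(4).random((128, 128))   # stand-in DBT slice
print(msbf_regularize(slice_img).shape)
```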

  13. Automatic EEG spike detection.

    Science.gov (United States)

    Harner, Richard

    2009-10-01

    Since the 1970s advances in science and technology during each succeeding decade have renewed the expectation of efficient, reliable automatic epileptiform spike detection (AESD). But even when reinforced with better, faster tools, clinically reliable unsupervised spike detection remains beyond our reach. Expert-selected spike parameters were the first and still most widely used for AESD. Thresholds for amplitude, duration, sharpness, rise-time, fall-time, after-coming slow waves, background frequency, and more have been used. It is still unclear which of these wave parameters are essential, beyond peak-peak amplitude and duration. Wavelet parameters are very appropriate to AESD but need to be combined with other parameters to achieve desired levels of spike detection efficiency. Artificial Neural Network (ANN) and expert-system methods may have reached peak efficiency. Support Vector Machine (SVM) technology focuses on outliers rather than centroids of spike and nonspike data clusters and should improve AESD efficiency. An exemplary spike/nonspike database is suggested as a tool for assessing parameters and methods for AESD and is available in CSV or Matlab formats from the author at brainvue@gmail.com. Exploratory Data Analysis (EDA) is presented as a graphic method for finding better spike parameters and for the step-wise evaluation of the spike detection process.

  14. AUTOMATIC ROAD GAP DETECTION USING FUZZY INFERENCE SYSTEM

    Directory of Open Access Journals (Sweden)

    S. Hashemi

    2012-09-01

Automatic feature extraction from aerial and satellite images is a high-level data processing task which is still one of the most important research topics of the field. In this area, most of the research is focused on the early step of road detection, where road tracking methods, morphological analysis, dynamic programming and snakes, multi-scale and multi-resolution methods, stereoscopic and multi-temporal analysis, and hyperspectral experiments are some of the mature methods in this field. Although most research focuses on detection algorithms, none of them can extract the road network perfectly. On the other hand, post-processing algorithms aimed at refining road detection results are not well developed either. In this article, the main aim is to design an intelligent method to detect and compensate road gaps remaining in the early result of road detection algorithms. The proposed algorithm consists of the following main steps: 1) Short gap coverage: a multi-scale morphological operator is designed that covers short gaps in a hierarchical scheme. 2) Long gap detection: the long gaps that could not be covered in the previous stage are detected using a fuzzy inference system; for this purpose, a knowledge base consisting of expert rules is designed and fired on gap candidates from the road detection results. 3) Long gap coverage: detected long gaps are compensated by two strategies, linear and polynomial; shorter gaps are filled by line fitting, while longer ones are compensated by polynomials. 4) Accuracy assessment: in order to evaluate the obtained results, some accuracy assessment criteria are proposed. These criteria are obtained by comparing the obtained results with truly compensated ones produced by a human expert. The complete evaluation of the obtained results with their technical discussion is given in the full paper.

  15. Automatic Road Gap Detection Using Fuzzy Inference System

    Science.gov (United States)

    Hashemi, S.; Valadan Zoej, M. J.; Mokhtarzadeh, M.

    2011-09-01

Automatic feature extraction from aerial and satellite images is a high-level data processing task which is still one of the most important research topics of the field. In this area, most of the research is focused on the early step of road detection, where road tracking methods, morphological analysis, dynamic programming and snakes, multi-scale and multi-resolution methods, stereoscopic and multi-temporal analysis, and hyperspectral experiments are some of the mature methods in this field. Although most research focuses on detection algorithms, none of them can extract the road network perfectly. On the other hand, post-processing algorithms aimed at refining road detection results are not well developed either. In this article, the main aim is to design an intelligent method to detect and compensate road gaps remaining in the early result of road detection algorithms. The proposed algorithm consists of the following main steps: 1) Short gap coverage: a multi-scale morphological operator is designed that covers short gaps in a hierarchical scheme. 2) Long gap detection: the long gaps that could not be covered in the previous stage are detected using a fuzzy inference system; for this purpose, a knowledge base consisting of expert rules is designed and fired on gap candidates from the road detection results. 3) Long gap coverage: detected long gaps are compensated by two strategies, linear and polynomial; shorter gaps are filled by line fitting, while longer ones are compensated by polynomials. 4) Accuracy assessment: in order to evaluate the obtained results, some accuracy assessment criteria are proposed. These criteria are obtained by comparing the obtained results with truly compensated ones produced by a human expert. The complete evaluation of the obtained results with their technical discussion is given in the full paper.
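
The "short gap coverage" step can be illustrated with a hedged sketch, assuming the road detection result is a binary mask: morphological closing is applied with structuring elements of increasing size, so progressively longer gaps are bridged at coarser scales. The fuzzy inference stage for long gaps is not shown, and the mask and scale choices are invented for the example.

```python
# Multi-scale morphological closing of short gaps in a binary road mask.
import numpy as np
from scipy.ndimage import binary_closing

def multiscale_gap_closing(road_mask, scales=(3, 5, 9)):
    result = road_mask.copy()
    for s in scales:                          # fine to coarse
        structure = np.ones((s, s), dtype=bool)
        result = binary_closing(result, structure=structure)
    return result

# toy mask: a horizontal road with a short break
mask = np.zeros((50, 50), dtype=bool)
mask[25, :20] = True
mask[25, 24:] = True                          # 4-pixel gap
closed = multiscale_gap_closing(mask)
print("gap filled:", bool(closed[25, 20:24].all()))
```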

  16. Expanded Mixed Multiscale Finite Element Methods and Their Applications for Flows in Porous Media

    KAUST Repository

    Jiang, L.; Copeland, D.; Moulton, J. D.

    2012-01-01

    We develop a family of expanded mixed multiscale finite element methods (MsFEMs) and their hybridizations for second-order elliptic equations. This formulation expands the standard mixed multiscale finite element formulation in the sense that four

  17. Characteristics and design improvement of AP1000 automatic depressurization system

    International Nuclear Information System (INIS)

    Jin Fei

    2012-01-01

The automatic depressurization system, a distinctive feature of the AP1000 design, enhances the plant's capability to mitigate design basis accidents. The advancement of the system is discussed by comparing it with the traditional PWR design and by analyzing system functions such as depressurization and venting. System design improvements made during the performance of the China Project are also described. Finally, suggestions for the system in the China Project are listed. (author)

  18. Multi-scale approximation of Vlasov equation

    International Nuclear Information System (INIS)

    Mouton, A.

    2009-09-01

One of the most important difficulties of numerical simulation of magnetized plasmas is the existence of multiple time and space scales, which can be very different. In order to produce good simulations of these multi-scale phenomena, it is recommended to develop models and numerical methods which are adapted to these problems. Nowadays, the two-scale convergence theory introduced by G. Nguetseng and G. Allaire is one of the tools which can be used to rigorously derive multi-scale limits and to obtain new limit models which can be discretized with a usual numerical method: this procedure is called a two-scale numerical method. The purpose of this thesis is to develop a two-scale semi-Lagrangian method and to apply it to a gyrokinetic Vlasov-like model in order to simulate a plasma subjected to a large external magnetic field. However, the physical phenomena we have to simulate are quite complex and there are many open questions about the behaviour of a two-scale numerical method, especially when such a method is applied to a nonlinear model. In a first part, we develop a two-scale finite volume method and apply it to the weakly compressible 1D isentropic Euler equations. Even if this mathematical context is far from a Vlasov-like model, it is a relatively simple framework in which to study the behaviour of a two-scale numerical method when facing a nonlinear model. In a second part, we develop a two-scale semi-Lagrangian method for the two-scale model developed by E. Frenod, F. Salvarani and E. Sonnendrucker in order to simulate axisymmetric charged particle beams. Even if the studied physical phenomena are quite different from magnetic fusion experiments, the mathematical context of the one-dimensional paraxial Vlasov-Poisson model is very simple for establishing the basis of a two-scale semi-Lagrangian method. In a third part, we use the two-scale convergence theory in order to improve M. Bostan's weak-* convergence results about the finite

  19. Multiscale Modeling of Mesoscale and Interfacial Phenomena

    Science.gov (United States)

    Petsev, Nikolai Dimitrov

    we provide a novel and general framework for multiscale modeling of systems featuring one or more dissolved species. This makes it possible to retain molecular detail for parts of the problem that require it while using a simple, continuum description for parts where high detail is unnecessary, reducing the number of degrees of freedom (i.e. number of particles) dramatically. This opens the possibility for modeling ion transport in biological processes and biomolecule assembly in ionic solution, as well as electrokinetic phenomena at interfaces such as corrosion. The number of particles in the system is further reduced through an integrated boundary approach, which we apply to colloidal suspensions. In this thesis, we describe this general framework for multiscale modeling single- and multicomponent systems, provide several simple equilibrium and non-equilibrium case studies, and discuss future applications.

  20. Fusion of dynamic contrast-enhanced magnetic resonance mammography at 3.0 T with X-ray mammograms: Pilot study evaluation using dedicated semi-automatic registration software

    Energy Technology Data Exchange (ETDEWEB)

    Dietzel, Matthias, E-mail: dietzelmatthias2@hotmail.com [Institute of Diagnostic and Interventional Radiology, Friedrich-Schiller-University Jena, Erlanger Allee 101, D-07740 Jena (Germany); Hopp, Torsten; Ruiter, Nicole [Karlsruhe Institute of Technology (KIT), Institute for Data Processing and Electronics, Postfach 3640, D-76021 Karlsruhe (Germany); Zoubi, Ramy [Institute of Diagnostic and Interventional Radiology, Friedrich-Schiller-University Jena, Erlanger Allee 101, D-07740 Jena (Germany); Runnebaum, Ingo B. [Clinic of Gynecology and Obstetrics, Friedrich-Schiller-University Jena, Bachstrasse 18, D-07743 Jena (Germany); Kaiser, Werner A. [Institute of Diagnostic and Interventional Radiology, Friedrich-Schiller-University Jena, Erlanger Allee 101, D-07740 Jena (Germany); Medical School, University of Harvard, 25 Shattuck Street, Boston, MA 02115 (United States); Baltzer, Pascal A.T. [Institute of Diagnostic and Interventional Radiology, Friedrich-Schiller-University Jena, Erlanger Allee 101, D-07740 Jena (Germany)

    2011-08-15

Rationale and objectives: To evaluate the semi-automatic image registration accuracy of X-ray-mammography (XR-M) with high-resolution high-field (3.0 T) MR-mammography (MR-M) in an initial pilot study. Material and methods: MR-M was acquired on a high-field clinical scanner at 3.0 T (T1-weighted 3D VIBE ± Gd). XR-M was obtained with state-of-the-art full-field digital systems. Seven patients with clearly delineable mass lesions >10 mm both in XR-M and MR-M were enrolled (exclusion criteria: previous breast surgery; surgical intervention between XR-M and MR-M). XR-M and MR-M were matched using a dedicated image-registration algorithm allowing semi-automatic non-linear deformation of MR-M based on finite-element modeling. To identify registration errors (RE) a virtual craniocaudal 2D mammogram was calculated by the software from MR-M (with and w/o Gadodiamide/Gd) and matched with corresponding XR-M. To quantify REs the geometric center of the lesions in the virtual vs. conventional mammogram were subtracted. The robustness of registration was quantified by registration of X-MRs to both MR-Ms with and w/o Gadodiamide. Results: Image registration was performed successfully for all patients. Overall RE was 8.2 mm (1 min after Gd; confidence interval/CI: 2.0-14.4 mm, standard deviation/SD: 6.7 mm) vs. 8.9 mm (no Gd; CI: 4.0-13.9 mm, SD: 5.4 mm). The mean difference between pre- vs. post-contrast was 0.7 mm (SD: 1.9 mm). Conclusion: Image registration of high-field 3.0 T MR-mammography with X-ray-mammography is feasible. For this study applying a high-resolution protocol at 3.0 T, the registration was robust and the overall registration error was sufficient for clinical application.

  1. Fusion of dynamic contrast-enhanced magnetic resonance mammography at 3.0 T with X-ray mammograms: Pilot study evaluation using dedicated semi-automatic registration software

    International Nuclear Information System (INIS)

    Dietzel, Matthias; Hopp, Torsten; Ruiter, Nicole; Zoubi, Ramy; Runnebaum, Ingo B.; Kaiser, Werner A.; Baltzer, Pascal A.T.

    2011-01-01

    Rationale and objectives: To evaluate the semi-automatic image registration accuracy of X-ray-mammography (XR-M) with high-resolution high-field (3.0 T) MR-mammography (MR-M) in an initial pilot study. Material and methods: MR-M was acquired on a high-field clinical scanner at 3.0 T (T1-weighted 3D VIBE ± Gd). XR-M was obtained with state-of-the-art full-field digital systems. Seven patients with clearly delineable mass lesions >10 mm both in XR-M and MR-M were enrolled (exclusion criteria: previous breast surgery; surgical intervention between XR-M and MR-M). XR-M and MR-M were matched using a dedicated image-registration algorithm allowing semi-automatic non-linear deformation of MR-M based on finite-element modeling. To identify registration errors (RE) a virtual craniocaudal 2D mammogram was calculated by the software from MR-M (with and w/o Gadodiamide/Gd) and matched with corresponding XR-M. To quantify REs the geometric center of the lesions in the virtual vs. conventional mammogram were subtracted. The robustness of registration was quantified by registration of X-MRs to both MR-Ms with and w/o Gadodiamide. Results: Image registration was performed successfully for all patients. Overall RE was 8.2 mm (1 min after Gd; confidence interval/CI: 2.0-14.4 mm, standard deviation/SD: 6.7 mm) vs. 8.9 mm (no Gd; CI: 4.0-13.9 mm, SD: 5.4 mm). The mean difference between pre- vs. post-contrast was 0.7 mm (SD: 1.9 mm). Conclusion: Image registration of high-field 3.0 T MR-mammography with X-ray-mammography is feasible. For this study applying a high-resolution protocol at 3.0 T, the registration was robust and the overall registration error was sufficient for clinical application.

  2. Towards automatic exchange of information

    OpenAIRE

    Oberson, Xavier

    2015-01-01

This article describes the various steps that led towards automatic exchange of information as the global standard and the issues that remain to be solved. First, the various competing models of information exchange, such as Double Tax Treaties (DTT), TIEAs, FATCA or EU Directives, are described with a view to showing how they interact with each other. Second, the so-called Rubik Strategy is summarized and compared with automatic exchange of information (AEOI). The third part then describes ...

  3. Stay Focused! The Effects of Internal and External Focus of Attention on Movement Automaticity in Patients with Stroke

    NARCIS (Netherlands)

    Kal, E. C.; van der Kamp, J.; Houdijk, H.; Groet, E.; van Bennekom, C. A. M.; Scherder, E. J. A.

    2015-01-01

    Dual-task performance is often impaired after stroke. This may be resolved by enhancing patients' automaticity of movement. This study sets out to test the constrained action hypothesis, which holds that automaticity of movement is enhanced by triggering an external focus (on movement effects),

  4. Self-organizing neural networks for automatic detection and classification of contrast-enhancing lesions in dynamic MR-mammography; Selbstorganisierende neuronale Netze zur automatischen Detektion und Klassifikation von Kontrast(mittel)-verstaerkten Laesionen in der dynamischen MR-Mammographie

    Energy Technology Data Exchange (ETDEWEB)

    Vomweg, T.W.; Teifke, A.; Kauczor, H.U.; Achenbach, T.; Rieker, O.; Schreiber, W.G.; Heitmann, K.R.; Beier, T.; Thelen, M. [Klinik und Poliklinik fuer Radiologie, Klinikum der Univ. Mainz (Germany)

    2005-05-01

Purpose: Investigation and statistical evaluation of 'Self-Organizing Maps', a special type of neural network in the field of artificial intelligence, for classifying contrast-enhancing lesions in dynamic MR-mammography. Material and Methods: 176 investigations with proven histology after core biopsy or operation were randomly divided into two groups. Several Self-Organizing Maps were trained on investigations of the first group to detect and classify contrast-enhancing lesions in dynamic MR-mammography. Each single pixel's signal/time curve of all patients within the second group was analyzed by the Self-Organizing Maps. The likelihood of malignancy was visualized by color overlays on the MR images. Finally, the assessment of contrast-enhancing lesions by each network was rated visually and evaluated statistically. Results: A well-balanced neural network achieved a sensitivity of 90.5% and a specificity of 72.2% in predicting malignancy of 88 enhancing lesions. Detailed analysis of false-positive results revealed that every second fibroadenoma showed a 'typical malignant' signal/time curve, with no chance to differentiate between fibroadenomas and malignant tissue on the basis of contrast enhancement alone; but this special group of lesions was represented by a well-defined area of the Self-Organizing Map. Discussion: Self-Organizing Maps are capable of classifying a dynamic signal/time curve as 'typical benign' or 'typical malignant'. Therefore, they can be used as a second opinion. In view of the now known localization on the Self-Organizing Map of fibroadenomas enhancing like malignant tumors, these lesions could be passed to further analysis by additional post-processing elements (e.g., based on T2-weighted series or morphology analysis) in the future. (orig.)
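
To make the classifier type concrete, the following minimal self-organizing map is trained on synthetic signal/time curves. The curve shapes, map size and training schedule are invented for the illustration and are not the parameters used in the study.

```python
# Minimal numpy SOM trained on synthetic enhancement curves.
import numpy as np

rng = np.random.default_rng(5)

def synthetic_curve(kind, n_t=6):
    t = np.arange(n_t, dtype=float)
    if kind == "benign":            # slow, persistent enhancement
        curve = 0.15 * t
    else:                           # fast initial rise, then washout
        curve = np.minimum(1.0, 0.8 * t) - 0.1 * np.maximum(t - 1, 0)
    return curve + 0.05 * rng.standard_normal(n_t)

X = np.array([synthetic_curve(k) for k in ["benign", "malignant"] * 100])

grid = 6                                        # 6x6 map of prototype curves
W = rng.random((grid, grid, X.shape[1]))
coords = np.stack(np.meshgrid(np.arange(grid), np.arange(grid), indexing="ij"), -1)

for epoch in range(20):
    lr = 0.5 * (1 - epoch / 20)
    sigma = 3.0 * (1 - epoch / 20) + 0.5
    for x in rng.permutation(X):
        d = np.linalg.norm(W - x, axis=-1)                    # best matching unit
        bmu = np.unravel_index(np.argmin(d), d.shape)
        dist2 = np.sum((coords - np.array(bmu)) ** 2, axis=-1)
        h = np.exp(-dist2 / (2 * sigma ** 2))[..., None]      # neighbourhood
        W += lr * h * (x - W)

query = synthetic_curve("malignant")
d = np.linalg.norm(W - query, axis=-1)
print("BMU of a washout-type curve:", np.unravel_index(np.argmin(d), d.shape))
```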

  5. Multiscale Mathematics for Biomass Conversion to Renewable Hydrogen

    Energy Technology Data Exchange (ETDEWEB)

    Plechac, Petr [Univ. of Tennessee, Knoxville, TN (United States). Dept. of Mathematics; Univ. of Delaware, Newark, DE (United States). Dept. of Mathematics; Vlachos, Dionisios [Univ. of Delaware, Newark, DE (United States). Dept. of Chemical and Biomolecular Engineering; Katsoulakis, Markos [Univ. of Massachusetts, Amherst, MA (United States). Dept. of Mathematics

    2013-09-05

    The overall objective of this project is to develop multiscale models for understanding and eventually designing complex processes for renewables. To the best of our knowledge, our work is the first attempt at modeling complex reacting systems, whose performance relies on underlying multiscale mathematics. Our specific application lies at the heart of biofuels initiatives of DOE and entails modeling of catalytic systems, to enable economic, environmentally benign, and efficient conversion of biomass into either hydrogen or valuable chemicals. Specific goals include: (i) Development of rigorous spatio-temporal coarse-grained kinetic Monte Carlo (KMC) mathematics and simulation for microscopic processes encountered in biomass transformation. (ii) Development of hybrid multiscale simulation that links stochastic simulation to a deterministic partial differential equation (PDE) model for an entire reactor. (iii) Development of hybrid multiscale simulation that links KMC simulation with quantum density functional theory (DFT) calculations. (iv) Development of parallelization of models of (i)-(iii) to take advantage of Petaflop computing and enable real world applications of complex, multiscale models. In this NCE period, we continued addressing these objectives and completed the proposed work. Main initiatives, key results, and activities are outlined.

  6. The Goddard multi-scale modeling system with unified physics

    Directory of Open Access Journals (Sweden)

    W.-K. Tao

    2009-08-01

Recently, a multi-scale modeling system with unified physics was developed at NASA Goddard. It consists of (1) a cloud-resolving model (CRM), (2) a regional-scale model, the NASA unified Weather Research and Forecasting Model (WRF), and (3) a coupled CRM-GCM (general circulation model), known as the Goddard Multi-scale Modeling Framework or MMF. The same cloud-microphysical processes, long- and short-wave radiative transfer and land-surface processes are applied in all of the models to study explicit cloud-radiation and cloud-surface interactive processes in this multi-scale modeling system. This modeling system has been coupled with a multi-satellite simulator for comparison and validation with NASA high-resolution satellite data.

    This paper reviews the development and presents some applications of the multi-scale modeling system, including results from using the multi-scale modeling system to study the interactions between clouds, precipitation, and aerosols. In addition, use of the multi-satellite simulator to identify the strengths and weaknesses of the model-simulated precipitation processes will be discussed as well as future model developments and applications.

  7. 2D deblending using the multi-scale shaping scheme

    Science.gov (United States)

    Li, Qun; Ban, Xingan; Gong, Renbin; Li, Jinnuo; Ge, Qiang; Zu, Shaohuan

    2018-01-01

Deblending can be posed as an inversion problem, which is ill-posed and requires constraints to obtain a unique and stable solution. In a blended record, signal is coherent, whereas interference is incoherent in some domains (e.g., the common receiver domain and the common offset domain). Due to their different sparsity, the coefficients of signal and interference are located in different curvelet scale domains and have different amplitudes. Taking these two differences into account, we propose a 2D multi-scale shaping scheme that constrains sparsity in order to separate the blended record. In the domain where the signal concentrates, the multi-scale scheme passes all the coefficients representing signal, while, in the domain where the interference focuses, the multi-scale scheme suppresses the coefficients representing interference. Because the interference is suppressed evidently at each iteration, the constraints of the multi-scale shaping operator in all scale domains are weak, which guarantees the convergence of the algorithm. We evaluate the performance of the multi-scale shaping scheme and the traditional global shaping scheme using two synthetic examples and one field data example.
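
A heavily simplified, 1D sketch of the scale-selective shaping idea follows: wavelet scales where the coherent signal is expected to live are passed through unchanged, while scales dominated by incoherent interference are soft-thresholded. PyWavelets is used here as a stand-in multi-scale transform; the curvelet transform, the blending operator and the full deblending iteration of the paper are not reproduced, and all signal and threshold choices are invented for the example.

```python
# Scale-dependent shaping: pass some wavelet scales, soft-threshold others.
import numpy as np
import pywt

def multiscale_shaping(trace, wavelet="db4", level=4, thresholds=None):
    coeffs = pywt.wavedec(trace, wavelet, level=level)
    if thresholds is None:
        thresholds = [None] * level           # None = pass scale through unchanged
    shaped = [coeffs[0]]                      # keep the coarse approximation
    for c, thr in zip(coeffs[1:], thresholds):
        shaped.append(c if thr is None else pywt.threshold(c, thr, mode="soft"))
    return pywt.waverec(shaped, wavelet)[: len(trace)]

rng = np.random.default_rng(8)
t = np.linspace(0, 1, 512)
signal = np.sin(2 * np.pi * 12 * t)                     # coherent "signal"
blended = signal + 0.8 * rng.standard_normal(t.size)    # incoherent interference
# suppress only the finest two scales, where the interference concentrates
clean = multiscale_shaping(blended, thresholds=[None, None, 0.5, 0.5])
print(np.linalg.norm(clean - signal) < np.linalg.norm(blended - signal))
```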

  8. Conformal-Based Surface Morphing and Multi-Scale Representation

    Directory of Open Access Journals (Sweden)

    Ka Chun Lam

    2014-05-01

This paper presents two algorithms, based on conformal geometry, for the multi-scale representation of geometric shapes and surface morphing. A multi-scale surface representation aims to describe a 3D shape at different levels of geometric detail, which allows analyzing or editing surfaces at the global or local scales effectively. Surface morphing refers to the process of interpolating between two geometric shapes, which has been widely applied to estimate or analyze deformations in computer graphics, computer vision and medical imaging. In this work, we propose two geometric models for surface morphing and multi-scale representation of 3D surfaces. The basic idea is to represent a 3D surface by its mean curvature function, H, and conformal factor function, λ, which uniquely determine the geometry of the surface according to Riemann surface theory. Once we have the (λ, H) parameterization of the surface, post-processing of the surface can be done directly on the conformal parameter domain. In particular, the problem of multi-scale representation of shapes can be reduced to signal filtering on the λ and H parameters. On the other hand, the surface morphing problem can be transformed into an interpolation process between two sets of (λ, H) parameters. We test the proposed algorithms on 3D human face data and MRI-derived brain surfaces. Experimental results show that our proposed methods can effectively obtain multi-scale surface representations and give natural surface morphing results.

  9. Multiscale Modeling in the Clinic: Drug Design and Development

    Energy Technology Data Exchange (ETDEWEB)

    Clancy, Colleen E.; An, Gary; Cannon, William R.; Liu, Yaling; May, Elebeoba E.; Ortoleva, Peter; Popel, Aleksander S.; Sluka, James P.; Su, Jing; Vicini, Paolo; Zhou, Xiaobo; Eckmann, David M.

    2016-02-17

    A wide range of length and time scales are relevant to pharmacology, especially in drug development, drug design and drug delivery. Therefore, multi-scale computational modeling and simulation methods and paradigms that advance the linkage of phenomena occurring at these multiple scales have become increasingly important. Multi-scale approaches present in silico opportunities to advance laboratory research to bedside clinical applications in pharmaceuticals research. This is achievable through the capability of modeling to reveal phenomena occurring across multiple spatial and temporal scales, which are not otherwise readily accessible to experimentation. The resultant models, when validated, are capable of making testable predictions to guide drug design and delivery. In this review we describe the goals, methods, and opportunities of multi-scale modeling in drug design and development. We demonstrate the impact of multiple scales of modeling in this field. We indicate the common mathematical techniques employed for multi-scale modeling approaches used in pharmacology and present several examples illustrating the current state-of-the-art regarding drug development for: Excitable Systems (Heart); Cancer (Metastasis and Differentiation); Cancer (Angiogenesis and Drug Targeting); Metabolic Disorders; and Inflammation and Sepsis. We conclude with a focus on barriers to successful clinical translation of drug development, drug design and drug delivery multi-scale models.

  10. Microphysics in Multi-scale Modeling System with Unified Physics

    Science.gov (United States)

    Tao, Wei-Kuo

    2012-01-01

Recently, a multi-scale modeling system with unified physics was developed at NASA Goddard. It consists of (1) a cloud-resolving model (the Goddard Cumulus Ensemble model, GCE), (2) a regional-scale model (the NASA unified Weather Research and Forecasting model, WRF), (3) a coupled CRM and global model (the Goddard Multi-scale Modeling Framework, MMF), and (4) a land modeling system. The same microphysical processes, long- and short-wave radiative transfer and land processes, and the explicit cloud-radiation and cloud-land surface interactive processes are applied in this multi-scale modeling system. This modeling system has been coupled with a multi-satellite simulator to use NASA high-resolution satellite data to identify the strengths and weaknesses of cloud and precipitation processes simulated by the model. In this talk, a review of developments and applications of the multi-scale modeling system will be presented. In particular, the microphysics development and its performance for the multi-scale modeling system will be presented.

  11. Multiscale image restoration in nuclear medicine

    International Nuclear Information System (INIS)

    Jammal, G.

    2001-01-01

This work develops, analyzes and validates a new multiscale restoration framework for denoising and deconvolution in photon-limited imagery. Denoising means the estimation of the intensity of a Poisson process from a single observation of the counts, whereas deconvolution refers to the recovery of an object related through a linear system of equations to the intensity function of the Poisson data. The developed framework has been named DeQuant, in analogy to denoising when the noise is of quantum nature. DeQuant works according to the following scheme. (1) It starts by testing the statistical significance of the wavelet coefficients of the Poisson process, based on the knowledge of their probability density function. (2) A regularization constraint assigns a new value to the non-significant coefficients, enabling it to reduce artifacts and incorporate realistic prior information into the estimation process. Finally, (3) the application of the inverse wavelet transform yields the restored object. The whole procedure is iterated before obtaining the final estimate. The validation of DeQuant on nuclear medicine images showed excellent results. The obtained estimates enable greater diagnostic confidence in clinical nuclear medicine since they give the physician access to the diagnostically relevant information with a measure of the significance of the detected structures
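
For context, wavelet-domain restoration of Poisson counts can be sketched with a simpler, classical stand-in: DeQuant tests the significance of the Poisson wavelet coefficients directly, whereas the sketch below applies an Anscombe variance-stabilizing transform followed by soft thresholding of the detail coefficients with PyWavelets, which is a related but different approach. The step-image test scene is invented for the example.

```python
# Anscombe transform + wavelet soft thresholding for Poisson-count images.
import numpy as np
import pywt

def denoise_poisson(counts, wavelet="haar", level=3, k=3.0):
    stabilized = 2.0 * np.sqrt(counts + 3.0 / 8.0)       # Anscombe: noise std ~ 1
    coeffs = pywt.wavedec2(stabilized, wavelet, level=level)
    thr = k * 1.0                                        # threshold in noise-std units
    new_coeffs = [coeffs[0]] + [
        tuple(pywt.threshold(c, thr, mode="soft") for c in detail)
        for detail in coeffs[1:]
    ]
    smooth = pywt.waverec2(new_coeffs, wavelet)
    return (smooth / 2.0) ** 2 - 3.0 / 8.0               # approximate inverse transform

rng = np.random.default_rng(6)
intensity = 5.0 + 20.0 * (np.indices((64, 64)).sum(axis=0) > 64)  # step "object"
noisy = rng.poisson(intensity).astype(float)
estimate = denoise_poisson(noisy)
print("MSE denoised:", float(np.mean((estimate - intensity) ** 2)),
      "MSE raw:", float(np.mean((noisy - intensity) ** 2)))
```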

  12. Multiscale Reconstruction for Magnetic Resonance Fingerprinting

    Science.gov (United States)

    Pierre, Eric Y.; Ma, Dan; Chen, Yong; Badve, Chaitra; Griswold, Mark A.

    2015-01-01

Purpose: To reduce the acquisition time needed to obtain reliable parametric maps with Magnetic Resonance Fingerprinting. Methods: An iterative-denoising algorithm is initialized by reconstructing the MRF image series at low image resolution. For subsequent iterations, the method enforces pixel-wise fidelity to the best-matching dictionary template, then enforces fidelity to the acquired data at slightly higher spatial resolution. After convergence, parametric maps with the desired spatial resolution are obtained through template matching of the final image series. The proposed method was evaluated on phantom and in-vivo data using the highly-undersampled, variable-density spiral trajectory and compared with the original MRF method. The benefits of additional sparsity constraints were also evaluated. When available, gold standard parameter maps were used to quantify the performance of each method. Results: The proposed approach allowed convergence to accurate parametric maps with as few as 300 time points of acquisition, as compared to 1000 in the original MRF work. Simultaneous quantification of T1, T2, proton density (PD) and B0 field variations in the brain was achieved in vivo for a 256×256 matrix with a total acquisition time of 10.2 s, representing a 3-fold reduction in acquisition time. Conclusions: The proposed iterative multiscale reconstruction reliably increases MRF acquisition speed and accuracy. PMID:26132462
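
The template-matching step that closes every MRF reconstruction can be illustrated as follows: each measured pixel signal evolution is matched to the dictionary entry with the largest normalized inner product, and the corresponding (T1, T2) values are read off. The dictionary below is a toy exponential model, not a Bloch-simulated MRF dictionary, and the iterative multiscale reconstruction itself is not reproduced.

```python
# Toy MRF dictionary matching via normalized inner products.
import numpy as np

def build_toy_dictionary(t, T1_values, T2_values):
    entries, params = [], []
    for T1 in T1_values:
        for T2 in T2_values:
            sig = (1 - np.exp(-t / T1)) * np.exp(-t / T2)   # toy signal evolution
            entries.append(sig / np.linalg.norm(sig))
            params.append((T1, T2))
    return np.array(entries), params

def match(signals, dictionary, params):
    # normalized inner product between each pixel signal and all templates
    norm = signals / np.linalg.norm(signals, axis=1, keepdims=True)
    best = np.argmax(norm @ dictionary.T, axis=1)
    return [params[i] for i in best]

t = np.linspace(0.01, 3.0, 300)                  # 300 time points
D, params = build_toy_dictionary(t, T1_values=[0.5, 1.0, 1.5],
                                 T2_values=[0.05, 0.1, 0.2])
true_sig = (1 - np.exp(-t / 1.0)) * np.exp(-t / 0.1)
noisy = true_sig + 0.01 * np.random.default_rng(7).standard_normal(t.size)
print(match(noisy[None, :], D, params))          # expect (1.0, 0.1)
```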

  13. Predicting FLDs Using a Multiscale Modeling Scheme

    Science.gov (United States)

    Wu, Z.; Loy, C.; Wang, E.; Hegadekatte, V.

    2017-09-01

    The measurement of a single forming limit diagram (FLD) requires significant resources and is time consuming. We have developed a multiscale modeling scheme to predict FLDs using a combination of limited laboratory testing, crystal plasticity (VPSC) modeling, and dual sequential-stage finite element (ABAQUS/Explicit) modeling with the Marciniak-Kuczynski (M-K) criterion to determine the limit strain. We have established a means to work around existing limitations in ABAQUS/Explicit by using an anisotropic yield locus (e.g., BBC2008) in combination with the M-K criterion. We further apply a VPSC model to reduce the number of laboratory tests required to characterize the anisotropic yield locus. In the present work, we show that the predicted FLD is in excellent agreement with the measured FLD for AA5182 in the O temper. Instead of 13 different tests as for a traditional FLD determination within Novelis, our technique uses just four measurements: tensile properties in three orientations; plane strain tension; biaxial bulge; and the sheet crystallographic texture. The turnaround time is consequently far less than for the traditional laboratory measurement of the FLD.

  14. A multiscale modeling approach for biomolecular systems

    Energy Technology Data Exchange (ETDEWEB)

    Bowling, Alan, E-mail: bowling@uta.edu; Haghshenas-Jaryani, Mahdi, E-mail: mahdi.haghshenasjaryani@mavs.uta.edu [The University of Texas at Arlington, Department of Mechanical and Aerospace Engineering (United States)

    2015-04-15

    This paper presents a new multiscale molecular dynamic model for investigating the effects of external interactions, such as contact and impact, during stepping and docking of motor proteins and other biomolecular systems. The model retains the mass properties, ensuring that the result satisfies Newton’s second law. This idea is presented using a simple particle model to facilitate discussion of the rigid body model; however, the particle model does provide insights into particle dynamics at the nanoscale. The resulting three-dimensional model predicts a significant decrease in the effect of the random forces associated with Brownian motion. This conclusion runs contrary to the widely accepted notion that the motor protein’s movements are primarily the result of thermal effects. This work focuses on the mechanical aspects of protein locomotion; the effect of ATP hydrolysis is estimated as internal forces acting on the mechanical model. In addition, the proposed model can be numerically integrated in a reasonable amount of time. Herein, the differences between the motion predicted by the old and new modeling approaches are compared using a simplified model of myosin V.

  15. Multiscale modeling of polyisoprene on graphite

    International Nuclear Information System (INIS)

    Pandey, Yogendra Narayan; Brayton, Alexander; Doxastakis, Manolis; Burkhart, Craig; Papakonstantopoulos, George J.

    2014-01-01

    The local dynamics and the conformational properties of polyisoprene next to a smooth graphite surface constructed by graphene layers are studied by a multiscale methodology. First, fully atomistic molecular dynamics simulations of oligomers next to the surface are performed. Subsequently, Monte Carlo simulations of a systematically derived coarse-grained model generate numerous uncorrelated structures for polymer systems. A new reverse backmapping strategy is presented that reintroduces atomistic detail. Finally, multiple extensive fully atomistic simulations with large systems of long macromolecules are employed to examine local dynamics in proximity to graphite. Polyisoprene repeat units arrange close to a parallel configuration with chains exhibiting a distribution of contact lengths. Efficient Monte Carlo algorithms with the coarse-grain model are capable of sampling these distributions for any molecular weight in quantitative agreement with predictions from atomistic models. Furthermore, molecular dynamics simulations with well-equilibrated systems at all length-scales support an increased dynamic heterogeneity that is emerging from both intermolecular interactions with the flat surface and intramolecular cooperativity. This study provides a detailed comprehensive picture of polyisoprene on a flat surface and constitutes an effort to characterize such systems in atomistic detail.

  16. Multiscale Modeling of UHTC: Thermal Conductivity

    Science.gov (United States)

    Lawson, John W.; Murry, Daw; Squire, Thomas; Bauschlicher, Charles W.

    2012-01-01

    We are developing a multiscale framework in computational modeling for the ultra high temperature ceramics (UHTC) ZrB2 and HfB2. These materials are characterized by high melting point, good strength, and reasonable oxidation resistance. They are candidate materials for a number of applications in extreme environments including sharp leading edges of hypersonic aircraft. In particular, we used a combination of ab initio methods, atomistic simulations and continuum computations to obtain insights into fundamental properties of these materials. Ab initio methods were used to compute basic structural, mechanical and thermal properties. From these results, a database was constructed to fit a Tersoff style interatomic potential suitable for atomistic simulations. These potentials were used to evaluate the lattice thermal conductivity of single crystals and the thermal resistance of simple grain boundaries. Finite element method (FEM) computations using atomistic results as inputs were performed with meshes constructed on SEM images thereby modeling the realistic microstructure. These continuum computations showed the reduction in thermal conductivity due to the grain boundary network.

  17. Fast Plasma Investigation for Magnetospheric Multiscale

    Science.gov (United States)

    Pollock, C.; Moore, T.; Coffey, V.; Dorelli, J.; Giles, B.; Adrian, M.; Chandler, M.; Duncan, C.; Figueroa-Vinas, A.; Garcia, K.

    2016-01-01

    The Fast Plasma Investigation (FPI) was developed for flight on the Magnetospheric Multiscale (MMS) mission to measure the differential directional flux of magnetospheric electrons and ions with unprecedented time resolution to resolve kinetic-scale plasma dynamics. This increased resolution has been accomplished by placing four dual 180-degree top hat spectrometers for electrons and four dual 180-degree top hat spectrometers for ions around the periphery of each of four MMS spacecraft. Using electrostatic field-of-view deflection, the eight spectrometers for each species together provide a 4pi-sr field of view with, at worst, 11.25-degree sample spacing. Energy/charge sampling is provided by swept electrostatic energy/charge selection over the range from 10 eV/q to 30000 eV/q. The eight dual spectrometers on each spacecraft are controlled and interrogated by a single block-redundant Instrument Data Processing Unit, which in turn interfaces to the observatory's Instrument Suite Central Instrument Data Processor. This paper describes the design of FPI, its ground and in-flight calibration, its operational concept, and its data products.

  18. Multiscale Concrete Modeling of Aging Degradation

    Energy Technology Data Exchange (ETDEWEB)

    Hammi, Yousseff [Mississippi State Univ., Mississippi State, MS (United States); Gullett, Philipp [Mississippi State Univ., Mississippi State, MS (United States); Horstemeyer, Mark F. [Mississippi State Univ., Mississippi State, MS (United States)

    2015-07-31

    In this work a numerical finite element framework is implemented to enable the integration of coupled multiscale and multiphysics transport processes. A User Element subroutine (UEL) in Abaqus is used to simultaneously solve stress equilibrium, heat conduction, and multiple diffusion equations for 2D and 3D linear and quadratic elements. Transport processes in concrete structures and their degradation mechanisms are presented along with the discretization of the governing equations. The multiphysics modeling framework is theoretically extended to the linear elastic fracture mechanics (LEFM) by introducing the eXtended Finite Element Method (XFEM) and based on the XFEM user element implementation of Giner et al. [2009]. A damage model that takes into account the damage contribution from the different degradation mechanisms is theoretically developed. The total contribution of damage is forwarded to a Multi-Stage Fatigue (MSF) model to enable the assessment of the fatigue life and the deterioration of reinforced concrete structures in a nuclear power plant. Finally, two examples are presented to illustrate the developed multiphysics user element implementation and the XFEM implementation of Giner et al. [2009].

  19. Magnetospheric MultiScale (MMS) System Manager

    Science.gov (United States)

    Schiff, Conrad; Maher, Francis Alfred; Henely, Sean Philip; Rand, David

    2014-01-01

    The Magnetospheric MultiScale (MMS) mission is an ambitious NASA space science mission in which four spacecraft are flown in tight formation about a highly elliptical orbit. Each spacecraft has multiple instruments that measure particle and field compositions in the Earth's magnetosphere. By controlling the members' relative motion, MMS can distinguish temporal and spatial fluctuations in a way that a single spacecraft cannot. To achieve this control, two sets of four maneuvers, distributed evenly across the spacecraft, must be performed approximately every 14 days. Performing a single maneuver on an individual spacecraft is usually labor intensive, and the complexity clearly increases with four. As a result, the MMS flight dynamics team turned to the System Manager to put routine or error-prone activities under machine control, freeing the analysts for activities that require human judgment. The System Manager is an expert system capable of handling operations activities associated with performing MMS maneuvers. As an expert system, it can work off a known schedule, launching jobs based on a one-time occurrence or a set recurring schedule. It is also able to detect situational changes and use event-driven programming to change schedules, adapt activities, or call for help.

  20. An Improved Algorithm Based on Minimum Spanning Tree for Multi-scale Segmentation of Remote Sensing Imagery

    Directory of Open Access Journals (Sweden)

    LI Hui

    2015-07-01

    As the basis of object-oriented information extraction from remote sensing imagery, image segmentation using multiple image features, exploiting spatial context information, and adopting a multi-scale approach are current research focuses. Using an optimization approach from graph theory, an improved multi-scale image segmentation method is proposed. In this method, the image is first processed with a coherence-enhancing anisotropic diffusion filter, then segmented with a minimum spanning tree approach, and the resulting segments are merged with reference to a minimum heterogeneity criterion. The heterogeneity criterion is defined as a function of the spectral characteristics and shape parameters of segments. The purpose of the merging step is to realize the multi-scale image segmentation. Tested on two images, the proposed method was visually and quantitatively compared with the segmentation method employed in the eCognition software. The results show that the proposed method is effective and outperforms the latter in areas with subtle spectral differences.
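
    A minimal sketch of the minimum-spanning-tree segmentation step is given below, assuming SciPy is available. The coherence-enhancing diffusion pre-filter and the heterogeneity-based merging of the record are omitted; a simple threshold on MST edge weights stands in for the scale parameter, and all names are illustrative.

```python
# Minimal sketch of minimum-spanning-tree image segmentation: build a 4-connected
# grid graph weighted by intensity differences, compute the MST, cut edges whose
# weight exceeds a threshold, and label the resulting connected components.
import numpy as np
from scipy.sparse import coo_matrix
from scipy.sparse.csgraph import minimum_spanning_tree, connected_components

def mst_segment(image, cut=0.1):
    h, w = image.shape
    idx = np.arange(h * w).reshape(h, w)
    # Horizontal and vertical 4-neighbour edges with intensity-difference weights.
    rows = np.concatenate([idx[:, :-1].ravel(), idx[:-1, :].ravel()])
    cols = np.concatenate([idx[:, 1:].ravel(), idx[1:, :].ravel()])
    weights = np.abs(image.ravel()[rows] - image.ravel()[cols]) + 1e-12
    graph = coo_matrix((weights, (rows, cols)), shape=(h * w, h * w))
    mst = minimum_spanning_tree(graph).tocoo()
    # Keep only MST edges below the cut threshold, then label components.
    keep = mst.data < cut
    pruned = coo_matrix((mst.data[keep], (mst.row[keep], mst.col[keep])),
                        shape=(h * w, h * w))
    n_labels, labels = connected_components(pruned, directed=False)
    return labels.reshape(h, w), n_labels

image = np.tri(64) + 0.01 * np.random.rand(64, 64)   # two flat regions plus slight noise
labels, n_segments = mst_segment(image, cut=0.5)      # n_segments should be 2
```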

  1. A Risk Assessment System with Automatic Extraction of Event Types

    Science.gov (United States)

    Capet, Philippe; Delavallade, Thomas; Nakamura, Takuya; Sandor, Agnes; Tarsitano, Cedric; Voyatzi, Stavroula

    In this article we describe the joint effort of experts in linguistics, information extraction and risk assessment to integrate EventSpotter, an automatic event extraction engine, into ADAC, an automated early warning system. By detecting as early as possible weak signals of emerging risks ADAC provides a dynamic synthetic picture of situations involving risk. The ADAC system calculates risk on the basis of fuzzy logic rules operated on a template graph whose leaves are event types. EventSpotter is based on a general purpose natural language dependency parser, XIP, enhanced with domain-specific lexical resources (Lexicon-Grammar). Its role is to automatically feed the leaves with input data.

  2. Automatic dam concrete placing system; Dam concrete dasetsu sagyo no jidoka system

    Energy Technology Data Exchange (ETDEWEB)

    Yoneda, Y; Hori, Y; Nakayama, T; Yoshihara, K; Hironaka, T [Okumura Corp., Osaka (Japan)

    1994-11-15

    An automatic concrete placing system was developed for concrete dam construction. This system consists of the following five subsystems: a wireless data transmission system, an automatic dam concrete mixing system, a consistency determination system, an automatic dam concrete loading and transporting system, and a remote concrete bucket opening and closing system. The system includes the following features: mixing amount by mixing ratio and mixing intervals can be instructed from a concrete placing site by using a wireless handy terminal; concrete is mixed automatically in a batcher plant; a transfer car is started, and concrete is charged into a bucket automatically; the properties of the mixed concrete are determined automatically; labor cost can be reduced, the work efficiency improved, and the safety enhanced; and the system introduction has resulted in unattended operation from the aggregate draw-out to a bunker line, manpower saving of five persons, and reduction in cycle time by 10%. 11 figs., 2 tabs.

  3. Algorithmic foundation of multi-scale spatial representation

    CERN Document Server

    Li, Zhilin

    2006-01-01

    With the widespread use of GIS, multi-scale representation has become an important issue in the realm of spatial data handling. However, no book to date has systematically tackled the different aspects of this discipline. Emphasizing map generalization, Algorithmic Foundation of Multi-Scale Spatial Representation addresses the mathematical basis of multi-scale representation, specifically, the algorithmic foundation.Using easy-to-understand language, the author focuses on geometric transformations, with each chapter surveying a particular spatial feature. After an introduction to the essential operations required for geometric transformations as well as some mathematical and theoretical background, the book describes algorithms for a class of point features/clusters. It then examines algorithms for individual line features, such as the reduction of data points, smoothing (filtering), and scale-driven generalization, followed by a discussion of algorithms for a class of line features including contours, hydrog...

  4. Generalized multiscale finite element method. Symmetric interior penalty coupling

    KAUST Repository

    Efendiev, Yalchin R.; Galvis, Juan; Lazarov, Raytcho D.; Moon, M.; Sarkis, Marcus V.

    2013-01-01

    Motivated by applications to numerical simulations of flows in highly heterogeneous porous media, we develop multiscale finite element methods for second order elliptic equations. We discuss a multiscale model reduction technique in the framework of the discontinuous Galerkin finite element method. We propose two different finite element spaces on the coarse mesh. The first space is based on a local eigenvalue problem that uses an interior weighted L2-norm and a boundary weighted L2-norm for computing the "mass" matrix. The second choice is based on generation of a snapshot space and subsequent selection of a subspace of a reduced dimension. The approximation with these multiscale spaces is based on the discontinuous Galerkin finite element method framework. We investigate the stability and derive error estimates for the methods and further experimentally study their performance on a representative number of numerical examples. © 2013 Elsevier Inc.

  5. Generalized multiscale finite element method. Symmetric interior penalty coupling

    KAUST Repository

    Efendiev, Yalchin R.

    2013-12-01

    Motivated by applications to numerical simulations of flows in highly heterogeneous porous media, we develop multiscale finite element methods for second order elliptic equations. We discuss a multiscale model reduction technique in the framework of the discontinuous Galerkin finite element method. We propose two different finite element spaces on the coarse mesh. The first space is based on a local eigenvalue problem that uses an interior weighted L2-norm and a boundary weighted L2-norm for computing the "mass" matrix. The second choice is based on generation of a snapshot space and subsequent selection of a subspace of a reduced dimension. The approximation with these multiscale spaces is based on the discontinuous Galerkin finite element method framework. We investigate the stability and derive error estimates for the methods and further experimentally study their performance on a representative number of numerical examples. © 2013 Elsevier Inc.

  6. Integrated multiscale biomaterials experiment and modelling: a perspective

    Science.gov (United States)

    Buehler, Markus J.; Genin, Guy M.

    2016-01-01

    Advances in multiscale models and computational power have enabled a broad toolset to predict how molecules, cells, tissues and organs behave and develop. A key theme in biological systems is the emergence of macroscale behaviour from collective behaviours across a range of length and timescales, and a key element of these models is therefore hierarchical simulation. However, this predictive capacity has far outstripped our ability to validate predictions experimentally, particularly when multiple hierarchical levels are involved. The state of the art represents careful integration of multiscale experiment and modelling, and yields not only validation, but also insights into deformation and relaxation mechanisms across scales. We present here a sampling of key results that highlight both challenges and opportunities for integrated multiscale experiment and modelling in biological systems. PMID:28981126

  7. Complexity multiscale asynchrony measure and behavior for interacting financial dynamics

    Science.gov (United States)

    Yang, Ge; Wang, Jun; Niu, Hongli

    2016-08-01

    A stochastic financial price process is proposed and investigated by the finite-range multitype contact dynamical system, in an attempt to study the nonlinear behaviors of real asset markets. The viruses spreading process in a finite-range multitype system is used to imitate the interacting behaviors of diverse investment attitudes in a financial market, and the empirical research on descriptive statistics and autocorrelation behaviors of return time series is performed for different values of propagation rates. Then the multiscale entropy analysis is adopted to study several different shuffled return series, including the original return series, the corresponding reversal series, the random shuffled series, the volatility shuffled series and the Zipf-type shuffled series. Furthermore, we propose and compare the multiscale cross-sample entropy and its modification algorithm called composite multiscale cross-sample entropy. We apply them to study the asynchrony of pairs of time series under different time scales.

  8. Study on high density multi-scale calculation technique

    International Nuclear Information System (INIS)

    Sekiguchi, S.; Tanaka, Y.; Nakada, H.; Nishikawa, T.; Yamamoto, N.; Yokokawa, M.

    2004-01-01

    To understand the degradation of nuclear materials under irradiation, it is essential to examine each observed phenomenon from multiple scales: the micro (atomic) scale, the macro (structural) scale, and intermediate scales. In this study, aimed at meso-scale materials (100 Å to 2 μm), computational approaches bridging the micro and macro scales were developed, including modeling and applications based on computational science and technology methods. A grid-computing environment for multi-scale calculation was also prepared. The software and MD (molecular dynamics) stencil for verifying the multi-scale calculation were improved and their operation was confirmed. (A. Hishinuma)

  9. Coherent multiscale image processing using dual-tree quaternion wavelets.

    Science.gov (United States)

    Chan, Wai Lam; Choi, Hyeokho; Baraniuk, Richard G

    2008-07-01

    The dual-tree quaternion wavelet transform (QWT) is a new multiscale analysis tool for geometric image features. The QWT is a near shift-invariant tight frame representation whose coefficients sport a magnitude and three phases: two phases encode local image shifts while the third contains image texture information. The QWT is based on an alternative theory for the 2-D Hilbert transform and can be computed using a dual-tree filter bank with linear computational complexity. To demonstrate the properties of the QWT's coherent magnitude/phase representation, we develop an efficient and accurate procedure for estimating the local geometrical structure of an image. We also develop a new multiscale algorithm for estimating the disparity between a pair of images that is promising for image registration and flow estimation applications. The algorithm features multiscale phase unwrapping, linear complexity, and sub-pixel estimation accuracy.
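
    The phase-shift principle that underlies the QWT disparity estimator can be illustrated in one dimension with the analytic signal from a Hilbert transform, as in the hedged sketch below. This is an analogue of the idea only, not the dual-tree QWT algorithm; the test signals and the phase_shift_estimate helper are invented for illustration.

```python
# 1-D analogue of phase-based disparity estimation: like the QWT phases, the
# phase of the analytic signal encodes a local shift, which can be read off by
# dividing the phase difference by the local angular frequency.
import numpy as np
from scipy.signal import hilbert

def phase_shift_estimate(sig_a, sig_b, dt=1.0):
    za, zb = hilbert(sig_a), hilbert(sig_b)
    dphi = np.angle(zb * np.conj(za))                        # wrapped local phase difference
    omega = np.gradient(np.unwrap(np.angle(za)), dt)         # local angular frequency
    good = np.abs(omega) > 1e-6
    return np.median(-dphi[good] / omega[good])              # delay of sig_b relative to sig_a

t = np.linspace(0.0, 1.0, 500)
a = np.sin(2 * np.pi * 12 * t)
b = np.sin(2 * np.pi * 12 * (t - 0.01))                      # b lags a by 0.01
print(phase_shift_estimate(a, b, dt=t[1] - t[0]))            # approximately 0.01
```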

  10. Optimization Approach for Multi-scale Segmentation of Remotely Sensed Imagery under k-means Clustering Guidance

    Directory of Open Access Journals (Sweden)

    WANG Huixian

    2015-05-01

    In order to adapt to land cover segmentation at different scales, an optimized multi-scale segmentation approach guided by k-means clustering is proposed. First, small-scale segmentation and k-means clustering are applied to the original images; then the k-means clustering result is used to guide the object merging procedure, in which the Otsu threshold method automatically selects the impact factor of the k-means clustering; finally, segmentation results applicable to objects at different scales are obtained. The FNEA method is taken as an example, and segmentation experiments are performed on a simulated image and a real remote sensing image from the GeoEye-1 satellite. Qualitative and quantitative evaluation demonstrates that the proposed method can obtain high-quality segmentation results.
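
    The sketch below illustrates, under simplifying assumptions, how k-means cluster labels and an Otsu-selected threshold can guide the merging of an initial over-segmentation (produced here by SLIC, assuming a recent scikit-image). It is not the FNEA-based implementation of the record: the heterogeneity criterion is reduced to a mean-intensity test and all parameter values are illustrative.

```python
# Simplified sketch of k-means-guided merging: over-segment at a small scale,
# cluster segment mean intensities with k-means, and merge adjacent segments
# that share a cluster and whose mean difference is below an Otsu threshold.
import numpy as np
from skimage.segmentation import slic
from skimage.filters import threshold_otsu
from sklearn.cluster import KMeans

def kmeans_guided_merge(image, n_segments=400, n_clusters=6):
    labels = slic(image, n_segments=n_segments, start_label=0, channel_axis=None)
    n = labels.max() + 1
    means = np.array([image[labels == i].mean() for i in range(n)])
    clusters = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(means.reshape(-1, 1))

    # Adjacent label pairs (right and down neighbours).
    pairs = np.unique(np.vstack([
        np.c_[labels[:, :-1].ravel(), labels[:, 1:].ravel()],
        np.c_[labels[:-1, :].ravel(), labels[1:, :].ravel()]]), axis=0)
    pairs = pairs[pairs[:, 0] != pairs[:, 1]]
    diffs = np.abs(means[pairs[:, 0]] - means[pairs[:, 1]])
    tau = threshold_otsu(diffs)                      # automatically chosen "impact factor"

    parent = np.arange(n)                            # union-find for merging
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for (a, b), d in zip(pairs, diffs):
        if clusters[a] == clusters[b] and d < tau:
            parent[find(a)] = find(b)
    merged = np.array([find(i) for i in range(n)])
    return merged[labels]

from skimage import data
merged_labels = kmeans_guided_merge(data.camera() / 255.0)
```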

  11. Generalized multiscale finite element methods (GMsFEM)

    KAUST Repository

    Efendiev, Yalchin R.; Galvis, Juan; Hou, Thomas Yizhao

    2013-01-01

    In this paper, we propose a general approach called Generalized Multiscale Finite Element Method (GMsFEM) for performing multiscale simulations for problems without scale separation over a complex input space. As in multiscale finite element methods (MsFEMs), the main idea of the proposed approach is to construct a small dimensional local solution space that can be used to generate an efficient and accurate approximation to the multiscale solution with a potentially high dimensional input parameter space. In the proposed approach, we present a general procedure to construct the offline space that is used for a systematic enrichment of the coarse solution space in the online stage. The enrichment in the online stage is performed based on a spectral decomposition of the offline space. In the online stage, for any input parameter, a multiscale space is constructed to solve the global problem on a coarse grid. The online space is constructed via a spectral decomposition of the offline space and by choosing the eigenvectors corresponding to the largest eigenvalues. The computational saving is due to the fact that the construction of the online multiscale space for any input parameter is fast and this space can be re-used for solving the forward problem with any forcing and boundary condition. Compared with the other approaches where global snapshots are used, the local approach that we present in this paper allows us to eliminate unnecessary degrees of freedom on a coarse-grid level. We present various examples in the paper and some numerical results to demonstrate the effectiveness of our method. © 2013 Elsevier Inc.

  12. Generalized multiscale finite element methods (GMsFEM)

    KAUST Repository

    Efendiev, Yalchin R.

    2013-10-01

    In this paper, we propose a general approach called Generalized Multiscale Finite Element Method (GMsFEM) for performing multiscale simulations for problems without scale separation over a complex input space. As in multiscale finite element methods (MsFEMs), the main idea of the proposed approach is to construct a small dimensional local solution space that can be used to generate an efficient and accurate approximation to the multiscale solution with a potentially high dimensional input parameter space. In the proposed approach, we present a general procedure to construct the offline space that is used for a systematic enrichment of the coarse solution space in the online stage. The enrichment in the online stage is performed based on a spectral decomposition of the offline space. In the online stage, for any input parameter, a multiscale space is constructed to solve the global problem on a coarse grid. The online space is constructed via a spectral decomposition of the offline space and by choosing the eigenvectors corresponding to the largest eigenvalues. The computational saving is due to the fact that the construction of the online multiscale space for any input parameter is fast and this space can be re-used for solving the forward problem with any forcing and boundary condition. Compared with the other approaches where global snapshots are used, the local approach that we present in this paper allows us to eliminate unnecessary degrees of freedom on a coarse-grid level. We present various examples in the paper and some numerical results to demonstrate the effectiveness of our method. © 2013 Elsevier Inc.

  13. Multiscale Finite Element Methods for Flows on Rough Surfaces

    KAUST Repository

    Efendiev, Yalchin

    2013-01-01

    In this paper, we present the Multiscale Finite Element Method (MsFEM) for problems on rough heterogeneous surfaces. We consider the diffusion equation on oscillatory surfaces. Our objective is to represent small-scale features of the solution via multiscale basis functions described on a coarse grid. This problem arises in many applications where processes occur on surfaces or thin layers. We present a unified multiscale finite element framework that entails the use of transformations that map the reference surface to the deformed surface. The main ingredients of MsFEM are (1) the construction of multiscale basis functions and (2) a global coupling of these basis functions. For the construction of multiscale basis functions, our approach uses the transformation of the reference surface to a deformed surface. On the deformed surface, multiscale basis functions are defined where reduced (1D) problems are solved along the edges of coarse-grid blocks to calculate nodal multiscale basis functions. Furthermore, these basis functions are transformed back to the reference configuration. We discuss the use of appropriate transformation operators that improve the accuracy of the method. The method has an optimal convergence if the transformed surface is smooth and the image of the coarse partition in the reference configuration forms a quasiuniform partition. In this paper, we consider such transformations based on harmonic coordinates (following H. Owhadi and L. Zhang [Comm. Pure and Applied Math., LX(2007), pp. 675-723]) and discuss gridding issues in the reference configuration. Numerical results are presented where we compare the MsFEM when two types of deformations are used for multiscale basis construction. The first deformation employs local information and the second deformation employs global information. Our numerical results show that one can improve the accuracy of the simulations when global information is used. © 2013 Global-Science Press.

  14. Automatic liver contouring for radiotherapy treatment planning

    International Nuclear Information System (INIS)

    Li, Dengwang; Kapp, Daniel S; Xing, Lei; Liu, Li

    2015-01-01

    To develop automatic and efficient liver contouring software for planning 3D-CT and four-dimensional computed tomography (4D-CT) for application in clinical radiation therapy treatment planning systems. The algorithm comprises three steps for overcoming the challenge of similar intensities between the liver region and its surrounding tissues. First, the total variation model with the L1 norm (TV-L1), which has the characteristic of multi-scale decomposition and an edge-preserving property, is used for removing the surrounding muscles and tissues. Second, an improved level set model that contains both global and local energy functions is utilized to extract liver contour information sequentially. In the global energy function, the local correlation coefficient (LCC) is constructed based on the gray level co-occurrence matrix of both the initial liver region and the background region. The LCC can calculate the correlation of a pixel with the foreground and background regions, respectively. The LCC is combined with intensity distribution models to classify pixels during the evolutionary process of the level set based method. The obtained liver contour is used as the candidate liver region for the following step. In the third step, voxel-based texture characterization is employed for refining the liver region and obtaining the final liver contours. The proposed method was validated based on the planning CT images of a group of 25 patients undergoing radiation therapy treatment planning. These included ten lung cancer patients with normal appearing livers and ten patients with hepatocellular carcinoma or liver metastases. The method was also tested on abdominal 4D-CT images of a group of five patients with hepatocellular carcinoma or liver metastases. The false positive volume percentage, the false negative volume percentage, and the Dice similarity coefficient between liver contours obtained by the developed algorithm and a current standard delineated by the expert group
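
    As a highly simplified stand-in for the described pipeline, the sketch below chains a total-variation pre-filter with a region-based level set from scikit-image. The TV-L1 decomposition, the LCC-augmented energy and the texture refinement of the published method are not reproduced; the function and parameter choices are assumptions.

```python
# Highly simplified stand-in for the record's pipeline: total-variation
# pre-filtering followed by a region-based (Chan-Vese style) level set.
import numpy as np
from skimage.restoration import denoise_tv_chambolle
from skimage.segmentation import morphological_chan_vese

def rough_liver_mask(ct_slice, tv_weight=0.1, n_iter=150):
    smooth = denoise_tv_chambolle(ct_slice, weight=tv_weight)   # suppress surrounding texture
    mask = morphological_chan_vese(smooth, n_iter,              # region-based level set
                                   init_level_set="checkerboard", smoothing=2)
    return mask.astype(bool)

ct_slice = np.random.rand(256, 256)            # placeholder for a planning-CT slice
mask = rough_liver_mask(ct_slice)
```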

  15. Multiscale approach to the physics of radiation damage with ions

    Energy Technology Data Exchange (ETDEWEB)

    Surdutovich, Eugene [Physics Department, Oakland University, 2200 N. Squirrel Rd., Rochester MI 48309 (United States); Solov'yov, Andrey V. [Frankfurt Institute for Advanced Studies, Goethe University, Ruth-Moufang-Str. 1, Frankfurt am Main 60438 (Germany)

    2013-04-19

    We review a multiscale approach to the physics of ion-beam cancer therapy, an approach suggested in order to understand the interplay of a large number of phenomena involved in the radiation damage scenario occurring on a range of temporal, spatial, and energy scales. We briefly overview its history and present the current stage of its development. The differences of the multiscale approach from other methods of understanding and assessment of radiation damage are discussed, as well as its relationship to other branches of physics, chemistry and biology.

  16. Multiscale integration schemes for jump-diffusion systems

    Energy Technology Data Exchange (ETDEWEB)

    Givon, D.; Kevrekidis, I.G.

    2008-12-09

    We study a two-time-scale system of jump-diffusion stochastic differential equations. We analyze a class of multiscale integration methods for these systems, which, in the spirit of [1], consist of a hybridization between a standard solver for the slow components and short runs for the fast dynamics, which are used to estimate the effect that the fast components have on the slow ones. We obtain explicit bounds for the discrepancy between the results of the multiscale integration method and the slow components of the original system.
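
    The idea of closing the slow dynamics with short bursts of the fast jump-diffusion can be sketched as below. The toy system (an Ornstein-Uhlenbeck-like fast variable with Poisson jumps driving a slow ODE) and all parameter values are illustrative assumptions, not the system or estimator analyzed in the record.

```python
# Schematic slow/fast integrator: at each macro step the fast jump-diffusion is
# simulated only for a short burst, and its time average is used to close the
# drift of the slow variable.
import numpy as np

rng = np.random.default_rng(0)

def fast_burst(x, y0, eps, dt_f, n_f, jump_rate=1.0, jump_size=0.5, sigma=1.0):
    # Short run of dy = (x - y)/eps dt + sigma/sqrt(eps) dW + compound Poisson jumps
    y, acc = y0, 0.0
    for _ in range(n_f):
        jumps = rng.poisson(jump_rate * dt_f) * jump_size
        y += (x - y) / eps * dt_f + sigma / np.sqrt(eps) * np.sqrt(dt_f) * rng.normal() + jumps
        acc += y
    return y, acc / n_f                      # final fast state and its burst time-average

def multiscale_integrate(x0=1.0, y0=0.0, eps=1e-3, T=2.0, dt=0.01, dt_f=1e-5, n_f=200):
    x, y = x0, y0
    for _ in range(int(T / dt)):
        y, y_bar = fast_burst(x, y, eps, dt_f, n_f)
        x += dt * (-x + y_bar)               # slow drift closed with the burst average
    return x

print(multiscale_integrate())
```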

  17. Multiscale simulation of molecular processes in cellular environments.

    Science.gov (United States)

    Chiricotto, Mara; Sterpone, Fabio; Derreumaux, Philippe; Melchionna, Simone

    2016-11-13

    We describe the recent advances in studying biological systems via multiscale simulations. Our scheme is based on a coarse-grained representation of the macromolecules and a mesoscopic description of the solvent. The dual technique handles particles, the aqueous solvent and their mutual exchange of forces, resulting in a stable and accurate methodology allowing biosystems of unprecedented size to be simulated. This article is part of the themed issue 'Multiscale modelling at the physics-chemistry-biology interface'. © 2016 The Author(s).

  18. Multiscale entropy based study of the pathological time series

    International Nuclear Information System (INIS)

    Wang Jun; Ma Qianli

    2008-01-01

    This paper studies the multiscale entropy (MSE) of the electrocardiogram's ST segment and, for the first time, compares the MSE results of the ST segment with those of the full electrocardiogram. The changing characteristics of electrocardiogram complexity have important clinical significance for early diagnosis. The study shows that the average MSE values and the fluctuation of their range are more effective in revealing heart health status. In particular, the fluctuation range of the multiscale entropy values is a more sensitive parameter for early heart disease detection and has clinical diagnostic significance. (general)
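
    For reference, a minimal multiscale entropy computation follows: the series is coarse-grained by non-overlapping averaging at each scale and the sample entropy of the coarse series is computed. The defaults m = 2 and r = 0.15 times the standard deviation are common choices assumed here, not necessarily those of the cited study; the toy series stands in for an ST-segment recording.

```python
# Minimal multiscale entropy (MSE): coarse-grain by non-overlapping averaging at
# each scale and compute the sample entropy of the coarse-grained series.
import numpy as np

def sample_entropy(x, m=2, r=0.15):
    x = np.asarray(x, dtype=float)
    r *= x.std()
    def count(mm):
        templates = np.array([x[i:i + mm] for i in range(len(x) - mm)])
        d = np.max(np.abs(templates[:, None, :] - templates[None, :, :]), axis=2)
        return (np.sum(d <= r) - len(templates)) / 2.0   # matching pairs, self-matches excluded
    b, a = count(m), count(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

def multiscale_entropy(x, max_scale=10, m=2, r=0.15):
    x = np.asarray(x, dtype=float)
    mse = []
    for tau in range(1, max_scale + 1):
        n = len(x) // tau
        coarse = x[:n * tau].reshape(n, tau).mean(axis=1)   # coarse-graining at scale tau
        mse.append(sample_entropy(coarse, m, r))
    return np.array(mse)

signal = np.cumsum(np.random.randn(1000))       # toy series standing in for an ST segment
print(multiscale_entropy(signal, max_scale=5))
```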

  19. Multiscale Shannon entropy and its application in the stock market

    Science.gov (United States)

    Gu, Rongbao

    2017-10-01

    In this paper, we perform a multiscale entropy analysis on the Dow Jones Industrial Average Index using the Shannon entropy. The stock index shows multi-scale entropy characteristics caused by noise in the market. The entropy is demonstrated to have significant predictive ability for the stock index over both the long term and the short term, and empirical results verify that noise does exist in the market and can affect the stock price. This has important implications for market participants such as noise traders.
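
    A compact sketch of a multiscale Shannon entropy for returns is shown below: returns are aggregated over increasing horizons, discretized into bins, and the Shannon entropy of the empirical distribution is computed at each scale. The bin count, scales and toy price path are illustrative assumptions, not the settings of the cited study.

```python
# Multiscale Shannon entropy of a return series: aggregate returns over
# increasing horizons, then compute the entropy of the binned distribution.
import numpy as np

def shannon_entropy(x, bins=30):
    p, _ = np.histogram(x, bins=bins)
    p = p[p > 0] / p.sum()
    return -np.sum(p * np.log2(p))

def multiscale_shannon(returns, scales=(1, 2, 5, 10, 20)):
    returns = np.asarray(returns, dtype=float)
    out = {}
    for tau in scales:
        n = len(returns) // tau
        aggregated = returns[:n * tau].reshape(n, tau).sum(axis=1)  # tau-step returns
        out[tau] = shannon_entropy(aggregated)
    return out

prices = np.cumprod(1 + 0.01 * np.random.randn(5000))   # toy price path
log_returns = np.diff(np.log(prices))
print(multiscale_shannon(log_returns))
```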

  20. Multiscale modeling of complex materials phenomenological, theoretical and computational aspects

    CERN Document Server

    Trovalusci, Patrizia

    2014-01-01

    The papers in this volume deal with materials science, theoretical mechanics and experimental and computational techniques at multiple scales, providing a sound base and a framework for many applications which are hitherto treated in a phenomenological sense. The basic principles of multiscale modeling strategies are formulated for modern complex multiphase materials subjected to various types of mechanical and thermal loading and to environmental effects. The focus is on problems where mechanics is highly coupled with other concurrent physical phenomena. Attention is also focused on the historical origins of multiscale modeling and the foundations of continuum mechanics currently adopted to model non-classical continua with substructure, for which internal length scales play a crucial role.

  1. Multi-scale magnetic field intermittence in the plasma sheet

    Directory of Open Access Journals (Sweden)

    Z. Vörös

    2003-09-01

    This paper demonstrates that intermittent magnetic field fluctuations in the plasma sheet exhibit transitory, localized, and multi-scale features. We propose a multifractal-based algorithm, which quantifies intermittence on the basis of the statistical distribution of the "strength of burstiness", estimated within a sliding window. Interesting multi-scale phenomena observed by the Cluster spacecraft include large-scale motion of the current sheet and bursty bulk flow associated turbulence, interpreted as a cross-scale coupling (CSC) process. Key words: Magnetospheric physics (magnetotail; plasma sheet); Space plasma physics (turbulence).

  2. Modeling Temporal Evolution and Multiscale Structure in Networks

    DEFF Research Database (Denmark)

    Herlau, Tue; Mørup, Morten; Schmidt, Mikkel Nørgaard

    2013-01-01

    Many real-world networks exhibit both temporal evolution and multiscale structure. We propose a model for temporally correlated multifurcating hierarchies in complex networks which jointly capture both effects. We use the Gibbs fragmentation tree as prior over multifurcating trees and a change-point model to account for the temporal evolution of each vertex. We demonstrate that our model is able to infer time-varying multiscale structure in synthetic as well as three real world time-evolving complex networks. Our modeling of the temporal evolution of hierarchies brings new insights...

  3. Multiscale modeling of emergent materials: biological and soft matter

    DEFF Research Database (Denmark)

    Murtola, Teemu; Bunker, Alex; Vattulainen, Ilpo

    2009-01-01

    In this review, we focus on four current related issues in multiscale modeling of soft and biological matter. First, we discuss how to use structural information from detailed models (or experiments) to construct coarse-grained ones in a hierarchical and systematic way. This is discussed...

  4. Multiscale simulation of water flow past a C540 fullerene

    DEFF Research Database (Denmark)

    Walther, Jens Honore; Praprotnik, Matej; Kotsalis, Evangelos M.

    2012-01-01

    We present a novel, three-dimensional, multiscale algorithm for simulations of water flow past a fullerene. We employ the Schwarz alternating overlapping domain method to couple molecular dynamics (MD) of liquid water around the C540 buckyball with a Lattice–Boltzmann (LB) description...

  5. Rough Set Approach to Incomplete Multiscale Information System

    Science.gov (United States)

    Yang, Xibei; Qi, Yong; Yu, Dongjun; Yu, Hualong; Song, Xiaoning; Yang, Jingyu

    2014-01-01

    Multiscale information system is a new knowledge representation system for expressing the knowledge with different levels of granulations. In this paper, by considering the unknown values, which can be seen everywhere in real world applications, the incomplete multiscale information system is firstly investigated. The descriptor technique is employed to construct rough sets at different scales for analyzing the hierarchically structured data. The problem of unravelling decision rules at different scales is also addressed. Finally, the reduct descriptors are formulated to simplify decision rules, which can be derived from different scales. Some numerical examples are employed to substantiate the conceptual arguments. PMID:25276852

  6. Extended phase-space methods for enhanced sampling in molecular simulations: a review

    Directory of Open Access Journals (Sweden)

    Hiroshi Fujisaki

    2015-09-01

    Molecular Dynamics simulations are a powerful approach to study biomolecular conformational changes or protein-ligand, protein-protein and protein-DNA/RNA interactions. Straightforward applications, however, are often hampered by incomplete sampling, since in a typical simulated trajectory the system will spend most of its time trapped by high energy barriers in restricted regions of the configuration space. Over the years, several techniques have been designed to overcome this problem and enhance space sampling. Here, we review a class of methods that rely on the idea of extending the set of dynamical variables of the system by adding extra ones associated with functions describing the process under study. In particular, we illustrate the Temperature Accelerated Molecular Dynamics (TAMD), Logarithmic Mean Force Dynamics (LogMFD), and Multiscale Enhanced Sampling (MSES) algorithms. We also discuss combinations with techniques for searching reaction paths. We show the advantages presented by this approach and how it allows important regions of the free energy landscape to be sampled quickly via automatic exploration.

  7. Integrated multiscale modeling of molecular computing devices

    International Nuclear Information System (INIS)

    Cummings, Peter T; Leng Yongsheng

    2005-01-01

    Molecular electronics, in which single organic molecules are designed to perform the functions of transistors, diodes, switches and other circuit elements used in current silicon-based microelectronics, is drawing wide interest as a potential replacement technology for conventional silicon-based lithographically etched microelectronic devices. In addition to their nanoscopic scale, the additional advantage of molecular electronics devices compared to silicon-based lithographically etched devices is the promise of being able to produce them cheaply on an industrial scale using wet chemistry methods (i.e., self-assembly from solution). The design of molecular electronics devices, and the processes to make them on an industrial scale, will require a thorough theoretical understanding of the molecular and higher level processes involved. Hence, the development of modeling techniques for molecular electronics devices is a high priority from both a basic science point of view (to understand the experimental studies in this field) and from an applied nanotechnology (manufacturing) point of view. Modeling molecular electronics devices requires computational methods at all length scales - electronic structure methods for calculating electron transport through organic molecules bonded to inorganic surfaces, molecular simulation methods for determining the structure of self-assembled films of organic molecules on inorganic surfaces, mesoscale methods to understand and predict the formation of mesoscale patterns on surfaces (including interconnect architecture), and macroscopic scale methods (including finite element methods) for simulating the behavior of molecular electronic circuit elements in a larger integrated device. Here we describe a large Department of Energy project involving six universities and one national laboratory aimed at developing integrated multiscale methods for modeling molecular electronics devices. The project is funded equally by the Office of Basic

  8. A multiscale approach to mapping seabed sediments.

    Directory of Open Access Journals (Sweden)

    Benjamin Misiuk

    Benthic habitat maps, including maps of seabed sediments, have become critical spatial-decision support tools for marine ecological management and conservation. Despite the increasing recognition that environmental variables should be considered at multiple spatial scales, variables used in habitat mapping are often implemented at a single scale. The objective of this study was to evaluate the potential for using environmental variables at multiple scales for modelling and mapping seabed sediments. Sixteen environmental variables were derived from multibeam echosounder data collected near Qikiqtarjuaq, Nunavut, Canada at eight spatial scales ranging from 5 to 275 m, and were tested as predictor variables for modelling seabed sediment distributions. Using grain size data obtained from grab samples, we tested which scales of each predictor variable contributed most to sediment models. Results showed that the default scale was often not the best. Out of 129 potential scale-dependent variables, 11 were selected to model the additive log-ratio of mud and sand at five different scales, and 15 were selected to model the additive log-ratio of gravel and sand, also at five different scales. Boosted Regression Tree models that explained between 46.4 and 56.3% of statistical deviance produced multiscale predictions of mud, sand, and gravel that were correlated with cross-validated test data (Spearman's ρ: mud = 0.77, sand = 0.71, gravel = 0.58). Predictions of individual size fractions were classified to produce a map of seabed sediments that is useful for marine spatial planning. Based on the scale-dependence of variables in this study, we concluded that spatial scale consideration is at least as important as variable selection in seabed mapping.
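
    The multiscale-predictor idea can be sketched as follows: derive the same terrain variable over several window sizes, sample the resulting stack at observation points, and fit a boosted regression model. The synthetic bathymetry, the two derived variables and scikit-learn's gradient boosting are stand-ins for the 16 multibeam-derived variables and the Boosted Regression Trees of the study.

```python
# Sketch of multiscale terrain predictors feeding a boosted regression model.
import numpy as np
from scipy.ndimage import uniform_filter
from sklearn.ensemble import GradientBoostingRegressor

def multiscale_stack(bathy, scales=(3, 9, 27)):
    layers = []
    for s in scales:
        smooth = uniform_filter(bathy, size=s)          # mean depth at this window scale
        gy, gx = np.gradient(smooth)
        layers += [smooth, np.hypot(gx, gy)]            # depth and a simple slope proxy
    return np.stack(layers, axis=-1)                    # (H, W, n_features)

rng = np.random.default_rng(1)
bathy = rng.normal(size=(200, 200)).cumsum(axis=0).cumsum(axis=1)  # synthetic bathymetry
features = multiscale_stack(bathy)

rows = rng.integers(0, 200, 150)                        # 150 "grab sample" locations
cols = rng.integers(0, 200, 150)
X = features[rows, cols, :]
y = rng.random(150)                                     # stand-in for a sediment log-ratio

model = GradientBoostingRegressor().fit(X, y)
predicted_map = model.predict(features.reshape(-1, X.shape[1])).reshape(200, 200)
```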

  9. Quantum theory of multiscale coarse-graining.

    Science.gov (United States)

    Han, Yining; Jin, Jaehyeok; Wagner, Jacob W; Voth, Gregory A

    2018-03-14

    Coarse-grained (CG) models serve as a powerful tool to simulate molecular systems at much longer temporal and spatial scales. Previously, CG models and methods have been built upon classical statistical mechanics. The present paper develops a theory and numerical methodology for coarse-graining in quantum statistical mechanics, by generalizing the multiscale coarse-graining (MS-CG) method to quantum Boltzmann statistics. A rigorous derivation of the sufficient thermodynamic consistency condition is first presented via imaginary time Feynman path integrals. It identifies the optimal choice of CG action functional and effective quantum CG (qCG) force field to generate a quantum MS-CG (qMS-CG) description of the equilibrium system that is consistent with the quantum fine-grained model projected onto the CG variables. A variational principle then provides a class of algorithms for optimally approximating the qMS-CG force fields. Specifically, a variational method based on force matching, which was also adopted in the classical MS-CG theory, is generalized to quantum Boltzmann statistics. The qMS-CG numerical algorithms and practical issues in implementing this variational minimization procedure are also discussed. Then, two numerical examples are presented to demonstrate the method. Finally, as an alternative strategy, a quasi-classical approximation for the thermal density matrix expressed in the CG variables is derived. This approach provides an interesting physical picture for coarse-graining in quantum Boltzmann statistical mechanics in which the consistency with the quantum particle delocalization is obviously manifest, and it opens up an avenue for using path integral centroid-based effective classical force fields in a coarse-graining methodology.

  10. First results from the Magnetospheric Multiscale mission

    Science.gov (United States)

    Lavraud, B.

    2017-12-01

    Since its launch in March 2015, NASA's Magnetospheric Multiscale mission (MMS) has provided a wealth of unprecedented high resolution measurements of space plasma properties and dynamics in the near-Earth environment. MMS was designed in the first place to study the fundamental process of collision-less magnetic reconnection. The first two results reviewed here pertain to this topic and highlight how the extremely high resolution MMS data (electrons, in particular, with full three-dimensional measurements at 30 ms in burst mode) have made it possible to tackle electron dynamics in unprecedented detail. The first result demonstrates how electrons become demagnetized and scattered near the magnetic reconnection X line as a result of increased magnetic field curvature, together with a decrease in its magnitude. The second result demonstrates that electrons form crescent-shaped, agyrotropic distribution functions very near the X line, suggestive of the existence of a perpendicular current aligned with the local electric field and consistent with the energy conversion expected in magnetic reconnection (such that J·E > 0). Aside from magnetic reconnection, we show how MMS contributes to topics such as wave properties and their interaction with particles. Thanks again to extremely high resolution measurements, the lossless and periodic energy exchange between wave electromagnetic fields and particles, as expected in the case of kinetic Alfvén waves, was confirmed. Although not discussed, MMS has the potential to solve many other outstanding issues in collision-less plasma physics, for example regarding shock or turbulence acceleration, with obvious broader impacts in astrophysics in general.

  11. Quantum theory of multiscale coarse-graining

    Science.gov (United States)

    Han, Yining; Jin, Jaehyeok; Wagner, Jacob W.; Voth, Gregory A.

    2018-03-01

    Coarse-grained (CG) models serve as a powerful tool to simulate molecular systems at much longer temporal and spatial scales. Previously, CG models and methods have been built upon classical statistical mechanics. The present paper develops a theory and numerical methodology for coarse-graining in quantum statistical mechanics, by generalizing the multiscale coarse-graining (MS-CG) method to quantum Boltzmann statistics. A rigorous derivation of the sufficient thermodynamic consistency condition is first presented via imaginary time Feynman path integrals. It identifies the optimal choice of CG action functional and effective quantum CG (qCG) force field to generate a quantum MS-CG (qMS-CG) description of the equilibrium system that is consistent with the quantum fine-grained model projected onto the CG variables. A variational principle then provides a class of algorithms for optimally approximating the qMS-CG force fields. Specifically, a variational method based on force matching, which was also adopted in the classical MS-CG theory, is generalized to quantum Boltzmann statistics. The qMS-CG numerical algorithms and practical issues in implementing this variational minimization procedure are also discussed. Then, two numerical examples are presented to demonstrate the method. Finally, as an alternative strategy, a quasi-classical approximation for the thermal density matrix expressed in the CG variables is derived. This approach provides an interesting physical picture for coarse-graining in quantum Boltzmann statistical mechanics in which the consistency with the quantum particle delocalization is obviously manifest, and it opens up an avenue for using path integral centroid-based effective classical force fields in a coarse-graining methodology.

  12. Multiscale peak detection in wavelet space.

    Science.gov (United States)

    Zhang, Zhi-Min; Tong, Xia; Peng, Ying; Ma, Pan; Zhang, Ming-Jin; Lu, Hong-Mei; Chen, Xiao-Qing; Liang, Yi-Zeng

    2015-12-07

    Accurate peak detection is essential for analyzing high-throughput datasets generated by analytical instruments. Derivatives with noise reduction and matched filtration are frequently used, but they are sensitive to baseline variations, random noise and deviations in the peak shape. A continuous wavelet transform (CWT)-based method is more practical and popular in this situation, which can increase the accuracy and reliability by identifying peaks across scales in wavelet space and implicitly removing noise as well as the baseline. However, its computational load is relatively high and the estimated features of peaks may not be accurate in the case of peaks that are overlapping, dense or weak. In this study, we present multi-scale peak detection (MSPD), which takes full advantage of additional information in wavelet space including ridges, valleys, and zero-crossings. It can achieve a high accuracy by thresholding each detected peak with the maximum of its ridge. It has been comprehensively evaluated with MALDI-TOF spectra in proteomics, the CAMDA 2006 SELDI dataset as well as the Romanian database of Raman spectra, which makes it particularly suitable for detecting peaks in high-throughput analytical signals. Receiver operating characteristic (ROC) curves show that MSPD can detect more true peaks while keeping the false discovery rate lower than the MassSpecWavelet and MALDIquant methods. Superior results on Raman spectra suggest that MSPD seems to be a more universal method for peak detection. MSPD has been designed and implemented efficiently in Python and Cython, and is available as an open source package.
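
    MSPD itself is distributed as a separate Python/Cython package; the snippet below only illustrates the same wavelet-space idea with SciPy's CWT-based peak finder, which identifies peaks from ridges across a range of widths. The synthetic spectrum and width range are illustrative.

```python
# Wavelet-space peak picking with SciPy's CWT-based detector (not MSPD itself).
import numpy as np
from scipy.signal import find_peaks_cwt

x = np.linspace(0, 100, 2000)
spectrum = (np.exp(-(x - 30) ** 2 / 0.5) + 0.6 * np.exp(-(x - 55) ** 2 / 2.0)
            + 0.05 * np.random.randn(x.size) + 0.2)            # two peaks + noise + baseline

peak_indices = find_peaks_cwt(spectrum, widths=np.arange(1, 30))
print(x[peak_indices])    # detected positions; the true peaks sit near 30 and 55
```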

  13. Automatic validation of numerical solutions

    DEFF Research Database (Denmark)

    Stauning, Ole

    1997-01-01

    This thesis is concerned with "Automatic Validation of Numerical Solutions". The basic theory of interval analysis and self-validating methods is introduced. The mean value enclosure is applied to discrete mappings for obtaining narrow enclosures of the iterates when applying these mappings...... differential equations, but in this thesis, we describe how to use the methods for enclosing iterates of discrete mappings, and then later use them for discretizing solutions of ordinary differential equations. The theory of automatic differentiation is introduced, and three methods for obtaining derivatives...... are described: the forward, the backward, and the Taylor expansion methods. The three methods have been implemented in the C++ program packages FADBAD/TADIFF. Some examples showing how to use the three methods are presented. A feature of FADBAD/TADIFF not present in other automatic differentiation packages...
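
    The forward method mentioned above can be illustrated with a few lines of dual-number arithmetic, as in the hedged sketch below. FADBAD/TADIFF are C++ packages; this standalone Python fragment only mirrors the idea of propagating values and derivatives together and is not their API.

```python
# Minimal forward-mode automatic differentiation with dual numbers.
import math

class Dual:
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der
    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.der + other.der)
    __radd__ = __add__
    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val * other.val,
                    self.der * other.val + self.val * other.der)   # product rule
    __rmul__ = __mul__

def sin(x):
    return Dual(math.sin(x.val), math.cos(x.val) * x.der)          # chain rule for sin

def f(x):
    return x * x + 3.0 * sin(x)       # f(x) = x^2 + 3 sin(x)

x = Dual(2.0, 1.0)                    # seed derivative dx/dx = 1
y = f(x)
print(y.val, y.der)                   # f(2) and f'(2) = 2*2 + 3*cos(2)
```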

  14. Automatic sample changers maintenance manual

    International Nuclear Information System (INIS)

    Myers, T.A.

    1978-10-01

    This manual describes and provides trouble-shooting aids for the Automatic Sample Changer electronics on the automatic beta counting system, developed by the Los Alamos Scientific Laboratory Group CNC-11. The output of a gas detector is shaped by a preamplifier, then is coupled to an amplifier. Amplifier output is discriminated and is the input to a scaler. An identification number is associated with each sample. At a predetermined count length, the identification number, scaler data plus other information is punched out on a data card. The next sample to be counted is automatically selected. The beta counter uses the same electronics as the prior count did, the only difference being the sample identification number and sample itself. This manual is intended as a step-by-step aid in trouble-shooting the electronics associated with positioning the sample, counting the sample, and getting the needed data punched on an 80-column data card

  15. Automatic analysis of microscopic images of red blood cell aggregates

    Science.gov (United States)

    Menichini, Pablo A.; Larese, Mónica G.; Riquelme, Bibiana D.

    2015-06-01

    Red blood cell aggregation is one of the most important factors in blood viscosity at stasis or at very low rates of flow. The basic structure of aggregates is a linear array of cells commonly termed rouleaux. Enhanced or abnormal aggregation is seen in clinical conditions such as diabetes and hypertension, producing alterations in the microcirculation, some of which can be analyzed through the characterization of aggregated cells. Frequently, image processing and analysis for the characterization of RBC aggregation have been done manually or semi-automatically using interactive tools. We propose a system that processes images of RBC aggregation and automatically obtains the characterization and quantification of the different types of RBC aggregates. The present technique could be adopted as a routine in hemorheological and clinical biochemistry laboratories because this automatic method is rapid, efficient and economical, and at the same time independent of the user performing the analysis (ensuring repeatability of the analysis).

  16. Hierarchical assembly strategy and multiscale structural origin of exceptional mechanical performance in nacre

    Science.gov (United States)

    Huang, Zaiwang

    Nacre (mother of pearl) is a self-assembled hierarchical nanocomposite in possession of exquisite multiscale architecture and exceptional mechanical properties. Previous work has shown that the highly-ordered brick-mortar-like structure in nacre is assembled via epitaxial growth and the aragonite platelets are pure single-crystals. Our results challenge this conclusion and propose that nacre's individual aragonite platelets are constructed from highly-aligned aragonite nanoparticles mediated by screw dislocation and amorphous aggregation. The underlying physical mechanism by which the aragonite nanoparticles choose highly-oriented attachment as their crystallization pathway is rationalized in terms of thermodynamics. The aragonite nanoparticle order-disorder transformation can be triggered by high temperature and mechanical deformation, which in turn confirms that the aragonite nanoparticles are basic building blocks for aragonite platelets. Particularly fascinating is the fracture toughness enhancement of nacre through exquisitely collecting mechanically inferior calcium carbonate (CaCO3) and biomolecules. The sandwich-like microarchitecture with a geometrically staggered arrangement can induce crack deflection along its biopolymer interface, thus significantly enhancing nacre's fracture toughness. Our new findings unambiguously demonstrate that, aside from crack deflection, the advancing crack can invade the aragonite platelets, leaving a zigzag crack propagation pathway. These unexpected experimental observations disclose, for the first time, the inevitable structural role of aragonite platelets in enhancing nacre's fracture toughness. Simultaneously, the findings that the crack propagates in a zigzag manner within individual aragonite platelets overturn the previously well-established wisdom that considers aragonite platelets as brittle single-crystals. Moreover, we investigated the dynamical mechanical response of nacre under uniaxial compression. Our results show that the

  17. Detecting Multi-scale Structures in Chandra Images of Centaurus A

    Science.gov (United States)

    Karovska, M.; Fabbiano, G.; Elvis, M. S.; Evans, I. N.; Kim, D. W.; Prestwich, A. H.; Schwartz, D. A.; Murray, S. S.; Forman, W.; Jones, C.; Kraft, R. P.; Isobe, T.; Cui, W.; Schreier, E. J.

    1999-12-01

    Centaurus A (NGC 5128) is a giant early-type galaxy with a merger history, containing the nearest radio-bright AGN. Recent Chandra High Resolution Camera (HRC) observations of Cen A reveal X-ray multi-scale structures in this object with unprecedented detail and clarity. We show the results of an analysis of the Chandra data with smoothing and edge enhancement techniques that allow us to enhance and quantify the multi-scale structures present in the HRC images. These techniques include an adaptive smoothing algorithm (Ebeling et al 1999), and a multi-directional gradient detection algorithm (Karovska et al 1994). The Ebeling et al adaptive smoothing algorithm, which is incorporated in the CXC analysis software package, is a powerful tool for smoothing images containing complex structures at various spatial scales. The adaptively smoothed images of Centaurus A show simultaneously the high-angular-resolution bright structures at scales as small as an arcsecond and the extended faint structures as large as several arcminutes. The large-scale structures suggest complex symmetry, including a component possibly associated with the inner radio lobes (as suggested by the ROSAT HRI data, Dobereiner et al 1996), and a separate component with an orthogonal symmetry that may be associated with the galaxy as a whole. The dust lane and the X-ray ridges are very clearly visible. The adaptively smoothed images and the edge-enhanced images also suggest several filamentary features, including a large filament-like structure extending as far as about 5 arcminutes to the north-west.

  18. Multiscale Drivers of Global Environmental Health

    Science.gov (United States)

    Desai, Manish Anil

    In this dissertation, I motivate, develop, and demonstrate three such approaches for investigating multiscale drivers of global environmental health: (1) a metric for analyzing contributions and responses to climate change from global to sectoral scales, (2) a framework for unraveling the influence of environmental change on infectious diseases at regional to local scales, and (3) a model for informing the design and evaluation of clean cooking interventions at community to household scales. The full utility of climate debt as an analytical perspective will remain untapped without tools that can be manipulated by a wide range of analysts, including global environmental health researchers. Chapter 2 explains how international natural debt (IND) apportions global radiative forcing from fossil fuel carbon dioxide and methane, the two most significant climate-altering pollutants, to individual entities -- primarily countries but also subnational states and economic sectors, with even finer scales possible -- as a function of unique trajectories of historical emissions, taking into account the quite different radiative efficiencies and atmospheric lifetimes of each pollutant. Owing to its straightforward and transparent derivation, IND can readily operationalize climate debt to consider issues of equity and efficiency and drive scenario exercises that explore the response to climate change at multiple scales. Collectively, the analyses presented in this chapter demonstrate how IND can inform a range of key questions on climate change mitigation at multiple scales, compelling environmental health towards an appraisal of the causes and not just the consequences of climate change. The environmental change and infectious disease (EnvID) conceptual framework of Chapter 3 builds on a rich history of prior efforts in epidemiologic theory, environmental science, and mathematical modeling by: (1) articulating a flexible and logical system specification; (2) incorporating
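
    As a rough, hedged illustration of the accounting idea behind IND (not the metric as defined in the dissertation), the sketch below attributes present-day forcing to an emitter by weighting each year's emissions with a single-exponential airborne fraction and a constant radiative efficiency; real CO2 decay is multi-exponential, and all numbers are made up.

```python
# Hedged sketch: attributing present-day radiative forcing to a historical
# emission trajectory with a single-exponential decay per pollutant. This only
# illustrates the general accounting idea, not the IND metric itself.
import numpy as np

def forcing_contribution(emissions, radiative_efficiency, lifetime_years):
    """emissions: annual emissions, oldest year first; returns the forcing
    attributable to this trajectory in the final year of the series."""
    years_ago = np.arange(len(emissions))[::-1]       # age of each annual pulse
    airborne = np.exp(-years_ago / lifetime_years)    # fraction still in the atmosphere
    return radiative_efficiency * np.sum(emissions * airborne)

# Two emitters with equal cumulative emissions but different timing (made-up numbers):
early = forcing_contribution(np.linspace(10, 1, 50), radiative_efficiency=1.0, lifetime_years=100.0)
late = forcing_contribution(np.linspace(1, 10, 50), radiative_efficiency=1.0, lifetime_years=100.0)
print(early, late)   # the later-emitting trajectory contributes more forcing today
```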

  19. Front-end vision and multi-scale image analysis multi-scale computer vision theory and applications, written in Mathematica

    CERN Document Server

    Romeny, Bart M Haar

    2008-01-01

    Front-End Vision and Multi-Scale Image Analysis is a tutorial in multi-scale methods for computer vision and image processing. It builds on the cross fertilization between human visual perception and multi-scale computer vision (`scale-space') theory and applications. The multi-scale strategies recognized in the first stages of the human visual system are carefully examined, and taken as inspiration for the many geometric methods discussed. All chapters are written in Mathematica, a spectacular high-level language for symbolic and numerical manipulations. The book presents a new and effective

  20. Automatic Construction of Finite Algebras

    Institute of Scientific and Technical Information of China (English)

    张健

    1995-01-01

    This paper deals with model generation for equational theories, i.e., automatically generating (finite) models of a given set of (logical) equations. Our method of finite model generation and a tool for the automatic construction of finite algebras are described. Some examples are given to show the applications of our program. We argue that the combination of model generators and theorem provers enables us to get a better understanding of logical theories. A brief comparison between our tool and other similar tools is also presented.

  1. Development of an automatic scaler

    International Nuclear Information System (INIS)

    He Yuehong

    2009-04-01

    A self-designed automatic scaler is introduced. A microcontroller LPC936 is used as the master chip in the scaler. A counter integrated with the microcontroller is configured to operate as an external pulse counter. The software employed in the scaler is based on an embedded real-time operating system kernel named Small RTOS. Data storage, calculation and some other functions are also provided. The scaler is designed for low-cost, low-power applications. To date, the automatic scaler has been applied in a surface contamination instrument. (authors)

  2. Annual review in automatic programming

    CERN Document Server

    Goodman, Richard

    2014-01-01

    Annual Review in Automatic Programming focuses on the techniques of automatic programming used with digital computers. Topics covered range from the design of machine-independent programming languages to the use of recursive procedures in ALGOL 60. A multi-pass translation scheme for ALGOL 60 is described, along with some commercial source languages. The structure and use of the syntax-directed compiler is also considered.Comprised of 12 chapters, this volume begins with a discussion on the basic ideas involved in the description of a computing process as a program for a computer, expressed in

  3. Grinding Parts For Automatic Welding

    Science.gov (United States)

    Burley, Richard K.; Hoult, William S.

    1989-01-01

    Rollers guide grinding tool along prospective welding path. Skatelike fixture holds rotary grinder or file for machining large-diameter rings or ring segments in preparation for welding. Operator grasps handles to push rolling fixture along part. Rollers maintain precise dimensional relationship so grinding wheel cuts precise depth. Fixture-mounted grinder machines surface to quality sufficient for automatic welding; manual welding with attendant variations and distortion not necessary. Developed to enable automatic welding of parts, manual welding of which resulted in weld bead permeated with microscopic fissures.

  4. Impact of model complexity and multi-scale data integration on the estimation of hydrogeological parameters in a dual-porosity aquifer

    Science.gov (United States)

    Tamayo-Mas, Elena; Bianchi, Marco; Mansour, Majdi

    2018-03-01

    This study investigates the impact of model complexity and multi-scale prior hydrogeological data on the interpretation of pumping test data in a dual-porosity aquifer (the Chalk aquifer in England, UK). In order to characterize the hydrogeological properties, different approaches ranging from a traditional analytical solution (the Theis approach) to more sophisticated numerical models with automatically calibrated input parameters are applied. Comparisons of results from the different approaches show that neither traditional analytical solutions nor a numerical model assuming a homogeneous and isotropic aquifer can adequately explain the observed drawdowns. A better reproduction of the observed drawdowns at all seven monitoring locations is instead achieved when medium- and local-scale prior information about the vertical hydraulic conductivity (K) distribution is used to constrain the model calibration process. In particular, the integration of medium-scale vertical K variations based on flowmeter measurements led to an improvement in the goodness-of-fit of the simulated drawdowns of about 30%. Further improvements (up to 70%) were observed when a simple upscaling approach was used to integrate small-scale K data to constrain the automatic calibration process of the numerical model. Although the analysis focuses on a specific case study, these results provide insights into the representativeness of hydrogeological property estimates based on different interpretations of pumping test data, and promote the integration of multi-scale data for the characterization of heterogeneous aquifers in complex hydrogeological settings.
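
    For readers unfamiliar with the analytical baseline mentioned above, the following minimal sketch evaluates the Theis drawdown for a confined, homogeneous, isotropic aquifer; the parameter values are illustrative, not those of the Chalk aquifer study.

```python
# Minimal sketch of the Theis (1935) drawdown solution used as the analytical
# baseline in pumping-test interpretation; parameter values are illustrative.
import numpy as np
from scipy.special import exp1   # the Theis well function W(u) = E1(u)

def theis_drawdown(r, t, Q, T, S):
    """Drawdown [m] at radius r [m] and time t [s] for pumping rate Q [m^3/s],
    transmissivity T [m^2/s] and storativity S [-]."""
    u = r**2 * S / (4.0 * T * t)
    return Q / (4.0 * np.pi * T) * exp1(u)

t = np.logspace(2, 6, 5)                       # from 100 s to about 11.6 days
print(theis_drawdown(r=50.0, t=t, Q=0.02, T=5e-3, S=1e-4))
```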

  5. The Potential of Automatic Word Comparison for Historical Linguistics.

    Science.gov (United States)

    List, Johann-Mattis; Greenhill, Simon J; Gray, Russell D

    2017-01-01

    The amount of data from languages spoken all over the world is rapidly increasing. Traditional manual methods in historical linguistics need to face the challenges brought by this influx of data. Automatic approaches to word comparison could provide invaluable help to pre-analyze data which can be later enhanced by experts. In this way, computational approaches can take care of the repetitive and schematic tasks leaving experts to concentrate on answering interesting questions. Here we test the potential of automatic methods to detect etymologically related words (cognates) in cross-linguistic data. Using a newly compiled database of expert cognate judgments across five different language families, we compare how well different automatic approaches distinguish related from unrelated words. Our results show that automatic methods can identify cognates with a very high degree of accuracy, reaching 89% for the best-performing method Infomap. We identify the specific strengths and weaknesses of these different methods and point to major challenges for future approaches. Current automatic approaches for cognate detection-although not perfect-could become an important component of future research in historical linguistics.
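
    As a hedged illustration of what automatic word comparison can mean in its simplest form, the sketch below flags candidate cognates by surface string similarity. This naive baseline is not the Infomap-based method evaluated in the study, which relies on sound-correspondence information; the similarity threshold is an arbitrary assumption.

```python
# Hedged sketch: a naive cognate-detection baseline using surface string
# similarity only. Real cognate detection uses sound correspondences; this is
# just a simple stand-in, and the threshold below is an illustrative assumption.
from difflib import SequenceMatcher

def similarity(a, b):
    """Crude string similarity in [0, 1] (higher = more alike)."""
    return SequenceMatcher(None, a, b).ratio()

def naive_cognate_pairs(words, threshold=0.7):
    """Return word pairs whose surface similarity exceeds the threshold."""
    pairs = []
    for i, w1 in enumerate(words):
        for w2 in words[i + 1:]:
            if similarity(w1, w2) >= threshold:
                pairs.append((w1, w2))
    return pairs

print(naive_cognate_pairs(["hand", "hant", "mano", "main"]))
```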

  6. Evolutionary game dynamics of controlled and automatic decision-making.

    Science.gov (United States)

    Toupo, Danielle F P; Strogatz, Steven H; Cohen, Jonathan D; Rand, David G

    2015-07-01

    We integrate dual-process theories of human cognition with evolutionary game theory to study the evolution of automatic and controlled decision-making processes. We introduce a model in which agents who make decisions using either automatic or controlled processing compete with each other for survival. Agents using automatic processing act quickly and so are more likely to acquire resources, but agents using controlled processing are better planners and so make more effective use of the resources they have. Using the replicator equation, we characterize the conditions under which automatic or controlled agents dominate, when coexistence is possible and when bistability occurs. We then extend the replicator equation to consider feedback between the state of the population and the environment. Under conditions in which having a greater proportion of controlled agents either enriches the environment or enhances the competitive advantage of automatic agents, we find that limit cycles can occur, leading to persistent oscillations in the population dynamics. Critically, however, these limit cycles only emerge when feedback occurs on a sufficiently long time scale. Our results shed light on the connection between evolution and human cognition and suggest necessary conditions for the rise and fall of rationality.
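
    The replicator dynamics referred to above can be written down in a few lines for a two-type population. The sketch below integrates the equation for the fraction of automatic agents; the payoff matrix is an illustrative placeholder, not the one analyzed in the paper.

```python
# Minimal sketch of two-type replicator dynamics (automatic vs. controlled agents).
# The payoff matrix is an illustrative placeholder, not the paper's model.
import numpy as np
from scipy.integrate import solve_ivp

# payoff[i][j]: payoff to an i-type agent meeting a j-type agent
# (0 = automatic, 1 = controlled)
payoff = np.array([[2.0, 1.0],
                   [3.0, 0.5]])

def replicator(t, y):
    x = y[0]                                   # fraction of automatic agents
    f_auto = payoff[0, 0] * x + payoff[0, 1] * (1 - x)
    f_ctrl = payoff[1, 0] * x + payoff[1, 1] * (1 - x)
    return [x * (1 - x) * (f_auto - f_ctrl)]   # replicator equation

sol = solve_ivp(replicator, (0, 50), [0.1])
print(sol.y[0, -1])   # long-run fraction of automatic agents (coexistence here)
```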

  7. A consideration of the operation of automatic production machines.

    Science.gov (United States)

    Hoshi, Toshiro; Sugimoto, Noboru

    2015-01-01

    At worksites, various automatic production machines are in use to release workers from muscular labor or labor in detrimental environments. On the other hand, a large number of industrial accidents have been caused by automatic production machines. In view of this, this paper considers the operation of automatic production machines from the viewpoint of accident prevention, and points out two types of machine operation: operation for which quick performance is required (operation that is not permitted to be delayed), and operation for which composed performance is required (operation that is not permitted to be performed in haste). These operations are distinguished by operation buttons of suitable colors and shapes. This paper shows that these characteristics are evaluated as "asymmetric on the time-axis". Here, in order for workers to accept the risk of automatic production machines, it is generally a precondition that harm should be sufficiently small or that avoidance of harm is easy. In this connection, this paper shows the possibility of facilitating the acceptance of the risk of automatic production machines by enhancing the asymmetry on the time-axis.

  8. Evolutionary game dynamics of controlled and automatic decision-making

    Science.gov (United States)

    Toupo, Danielle F. P.; Strogatz, Steven H.; Cohen, Jonathan D.; Rand, David G.

    2015-07-01

    We integrate dual-process theories of human cognition with evolutionary game theory to study the evolution of automatic and controlled decision-making processes. We introduce a model in which agents who make decisions using either automatic or controlled processing compete with each other for survival. Agents using automatic processing act quickly and so are more likely to acquire resources, but agents using controlled processing are better planners and so make more effective use of the resources they have. Using the replicator equation, we characterize the conditions under which automatic or controlled agents dominate, when coexistence is possible and when bistability occurs. We then extend the replicator equation to consider feedback between the state of the population and the environment. Under conditions in which having a greater proportion of controlled agents either enriches the environment or enhances the competitive advantage of automatic agents, we find that limit cycles can occur, leading to persistent oscillations in the population dynamics. Critically, however, these limit cycles only emerge when feedback occurs on a sufficiently long time scale. Our results shed light on the connection between evolution and human cognition and suggest necessary conditions for the rise and fall of rationality.

  9. Riparian ecosystems and buffers - multiscale structure, function, and management: introduction

    Science.gov (United States)

    Kathleen A. Dwire; Richard R. Lowrance

    2006-01-01

    Given the importance of issues related to improved understanding and management of riparian ecosystems and buffers, the American Water Resources Association (AWRA) sponsored a Summer Specialty Conference in June 2004 at Olympic Valley, California, entitled 'Riparian Ecosystems and Buffers: Multiscale Structure, Function, and Management.' The primary objective...

  10. A Multiscale Enrichment Procedure for Nonlinear Monotone Operators

    KAUST Repository

    Efendiev, Yalchin R.; Galvis, J.; Presho, M.; Zhou, J.

    2014-01-01

    . Galvis, R. Lazarov, S. Margenov and J. Ren, Robust two-level domain decomposition preconditioners for high-contrast anisotropic flows in multiscale media. Submitted.; Y. Efendiev, J. Galvis and X. Wu, J. Comput. Phys. 230 (2011) 937–955; J. Galvis and Y

  11. Multiscale Modeling of Wear Degradation in Cylinder Liners

    KAUST Repository

    Moraes, Alvaro; Ruggeri, Fabrizio; Tempone, Raul; Vilanova, Pedro

    2014-01-01

    both to predict and to avoid them. To achieve this, a monitoring system of the wear level should be implemented to decrease the risk of failure. In this work, we take a first step into the development of a multiscale indirect inference methodology

  12. Multiscale approach to the physics of radiation damage with ions

    International Nuclear Information System (INIS)

    Surdutovich, E.; Solov'yov, A.

    2014-01-01

    The multiscale approach to the assessment of bio-damage resulting from irradiation of biological media with ions is reviewed, explained and compared to other approaches. The processes of ion propagation in the medium concurrent with ionization and excitation of molecules, transport of secondary products, dynamics of the medium, and biological damage take place on a number of different temporal, spatial and energy scales. The multiscale approach, a physical phenomenon-based analysis of the scenario that leads to radiation damage, has been designed to consider all relevant effects on a variety of scales and to develop an approach to the quantitative assessment of biological damage as a result of irradiation with ions. Presently, physical and chemical effects are included in the scenario while the biological effects such as DNA repair are only mentioned. This paper explains the scenario of radiation damage with ions, overviews its major parts, and applies the multiscale approach to different experimental conditions. On the basis of this experience, the recipe for application of the multiscale approach is formulated. The recipe leads to the calculation of relative biological effectiveness. (authors)

  13. Multiscale modeling and simulation of brain blood flow

    Energy Technology Data Exchange (ETDEWEB)

    Perdikaris, Paris, E-mail: parisp@mit.edu [Department of Mechanical Engineering, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139 (United States); Grinberg, Leopold, E-mail: leopoldgrinberg@us.ibm.com [IBM T.J Watson Research Center, 1 Rogers St, Cambridge, Massachusetts 02142 (United States); Karniadakis, George Em, E-mail: george-karniadakis@brown.edu [Division of Applied Mathematics, Brown University, Providence, Rhode Island 02912 (United States)

    2016-02-15

    The aim of this work is to present an overview of recent advances in multi-scale modeling of brain blood flow. In particular, we present some approaches that enable the in silico study of multi-scale and multi-physics phenomena in the cerebral vasculature. We discuss the formulation of continuum and atomistic modeling approaches, present a consistent framework for their concurrent coupling, and list some of the challenges that one needs to overcome in achieving a seamless and scalable integration of heterogeneous numerical solvers. The effectiveness of the proposed framework is demonstrated in a realistic case involving modeling the thrombus formation process taking place on the wall of a patient-specific cerebral aneurysm. This highlights the ability of multi-scale algorithms to resolve important biophysical processes that span several spatial and temporal scales, potentially yielding new insight into the key aspects of brain blood flow in health and disease. Finally, we discuss open questions in multi-scale modeling and emerging topics of future research.

  14. Multiscale Path Metrics for the Analysis of Discrete Geometric Structures

    Science.gov (United States)

    2017-11-30

    Report: Multiscale Path Metrics for the Analysis of Discrete Geometric Structures. The views, opinions and/or findings contained in this report are those of the author(s). Email: tomasi@cs.duke.edu. Distribution Statement: Approved for public release.

  15. Multiscale analysis of structure development in expanded starch snacks

    Science.gov (United States)

    van der Sman, R. G. M.; Broeze, J.

    2014-11-01

    In this paper we perform a multiscale analysis of the food structuring process of the expansion of starchy snack foods like keropok, which obtains a solid foam structure. In particular, we want to investigate the validity of the hypothesis of Kokini and coworkers, that expansion is optimal at the moisture content, where the glass transition and the boiling line intersect. In our analysis we make use of several tools, (1) time scale analysis from the field of physical transport phenomena, (2) the scale separation map (SSM) developed within a multiscale simulation framework of complex automata, (3) the supplemented state diagram (SSD), depicting phase transition and glass transition lines, and (4) a multiscale simulation model for the bubble expansion. Results of the time scale analysis are plotted in the SSD, and give insight into the dominant physical processes involved in expansion. Furthermore, the results of the time scale analysis are used to construct the SSM, which has aided us in the construction of the multiscale simulation model. Simulation results are plotted in the SSD. This clearly shows that the hypothesis of Kokini is qualitatively true, but has to be refined. Our results show that bubble expansion is optimal for moisture content, where the boiling line for gas pressure of 4 bars intersects the isoviscosity line of the critical viscosity 10^6 Pa·s, which runs parallel to the glass transition line.

  16. On a multiscale approach for filter efficiency simulations

    KAUST Repository

    Iliev, Oleg

    2014-07-01

    Filtration in general, and the dead-end depth filtration of solid particles out of a fluid in particular, is an intrinsically multiscale problem. The deposition (capturing of particles) essentially depends on the local velocity, on the microgeometry (pore-scale geometry) of the filtering medium and on the diameter distribution of the particles. The deposited (captured) particles change the microstructure of the porous medium, which leads to a change of permeability. The changed permeability directly influences the velocity field and pressure distribution inside the filter element. To close the loop, we note that the velocity influences the transport and deposition of particles. In certain cases one can evaluate the filtration efficiency considering only microscale or only macroscale models, but in general an accurate prediction of the filtration efficiency requires multiscale models and algorithms. This paper discusses the single-scale and the multiscale models, and presents a fractional time step discretization algorithm for the multiscale problem. The velocity within the filter element is computed at the macroscale, and is used as input for the solution of microscale problems at selected locations of the porous medium. The microscale problem is solved with respect to transport and capturing of individual particles, and its solution is postprocessed to provide permeability values for macroscale computations. Results from computational experiments with an oil filter are presented and discussed.
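
    The coupling loop described above can be caricatured in a runnable one-dimensional toy: a macroscale Darcy solve gives the velocity, a crude per-layer capture law stands in for the microscale particle simulations, and the captured mass is upscaled into reduced layer permeabilities. All constitutive choices and numbers below are illustrative assumptions, not the authors' model.

```python
# Hedged sketch: a 1D toy of the macro-micro coupling loop. The macroscale step
# computes a Darcy velocity through a layered filter; the "microscale" step is a
# simple capture law per layer; captured mass degrades that layer's permeability.
import numpy as np

n_layers = 10
k = np.full(n_layers, 1.0)          # layer permeabilities (arbitrary units)
deposit = np.zeros(n_layers)        # captured particle mass per layer
dp, mu, L = 1.0, 1.0, 1.0           # pressure drop, viscosity, filter thickness

for step in range(100):
    # Macroscale: layers in series -> harmonic-mean permeability, Darcy velocity.
    k_eff = n_layers / np.sum(1.0 / k)
    u = k_eff * dp / (mu * L)

    # "Microscale": capture in each layer proportional to incoming concentration;
    # the concentration decays as the fluid passes through successive layers.
    c = 1.0
    for i in range(n_layers):
        eta = 0.1 / (1.0 + u)       # capture efficiency drops at high velocity
        captured = eta * c * u
        deposit[i] += captured
        c -= eta * c                # captured particles no longer reach deeper layers

    # Upscaling: clogging reduces each layer's permeability for the next macro solve.
    k = 1.0 / (1.0 + deposit)

print("final velocity:", u, "deposit profile:", np.round(deposit, 3))
```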

  17. Multi-scale and multi-orientation medical image analysis

    NARCIS (Netherlands)

    Haar Romenij, ter B.M.; Deserno, T.M.

    2011-01-01

    Inspired by multi-scale and multi-orientation mechanisms recognized in the first stages of our visual system, this chapter gives a tutorial overview of the basic principles. Images are discrete, measured data. The optimal aperture for an observation with as few artefacts as possible is derived

  18. Multiscale Modeling of Fracture Processes in Cementitious Materials

    NARCIS (Netherlands)

    Qian, Z.

    2012-01-01

    Concrete is a composite construction material, which is composed primarily of coarse aggregates, sands and cement paste. The fracture processes in concrete are complicated, because of the multiscale and multiphase nature of the material. In the past decades, comprehensive effort has been put to

  19. Covariance, correlation matrix, and the multiscale community structure of networks.

    Science.gov (United States)

    Shen, Hua-Wei; Cheng, Xue-Qi; Fang, Bin-Xing

    2010-07-01

    Empirical studies show that real-world networks often exhibit multiple scales of topological description. However, it is still an open problem how to identify the intrinsic multiple scales of a network. In this paper, we consider detecting the multiscale community structure of a network from the perspective of dimension reduction. According to this perspective, a covariance matrix of the network is defined to uncover the multiscale community structure through translation and rotation transformations. It is proved that the covariance matrix is the unbiased version of the well-known modularity matrix. We then point out that the translation and rotation transformations fail to deal with heterogeneous networks, which are very common in nature and society. To address this problem, a correlation matrix is proposed by introducing a rescaling transformation into the covariance matrix. Extensive tests on real-world and artificial networks demonstrate that the correlation matrix significantly outperforms the covariance matrix (equivalently, the modularity matrix) in identifying the multiscale community structure of a network. This work provides a novel perspective on the identification of community structure, and thus various dimension reduction methods might be used for this purpose. Through introducing the correlation matrix, we further conclude that the rescaling transformation is crucial to identifying the multiscale community structure of a network, in addition to the translation and rotation transformations.
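
    For context, the modularity matrix that the covariance matrix is shown to generalize, together with the standard leading-eigenvector bisection, can be sketched as follows; the rescaled correlation matrix proposed in the paper for heterogeneous networks is not reproduced here.

```python
# Minimal sketch: the modularity matrix B = A - k k^T / (2m) and Newman's
# leading-eigenvector bisection, shown on a toy graph. This illustrates the
# baseline object only, not the paper's multiscale correlation-matrix method.
import numpy as np

# Toy graph: two triangles (nodes 0-2 and 3-5) joined by a single edge.
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
n = 6
A = np.zeros((n, n))
for i, j in edges:
    A[i, j] = A[j, i] = 1.0

k = A.sum(axis=1)                      # node degrees
m = A.sum() / 2.0                      # number of edges
B = A - np.outer(k, k) / (2 * m)       # modularity matrix

eigvals, eigvecs = np.linalg.eigh(B)
leading = eigvecs[:, np.argmax(eigvals)]
labels = (leading > 0).astype(int)     # sign of the leading eigenvector -> two communities
print(labels)                          # expected split: the two triangles
```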

  20. Hybrid continuum–molecular modelling of multiscale internal gas flows

    International Nuclear Information System (INIS)

    Patronis, Alexander; Lockerby, Duncan A.; Borg, Matthew K.; Reese, Jason M.

    2013-01-01

    We develop and apply an efficient multiscale method for simulating a large class of low-speed internal rarefied gas flows. The method is an extension of the hybrid atomistic–continuum approach proposed by Borg et al. (2013) [28] for the simulation of micro/nano flows of high-aspect ratio. The major new extensions are: (1) incorporation of fluid compressibility; (2) implementation using the direct simulation Monte Carlo (DSMC) method for dilute rarefied gas flows, and (3) application to a broader range of geometries, including periodic, non-periodic, pressure-driven, gravity-driven and shear-driven internal flows. The multiscale method is applied to micro-scale gas flows through a periodic converging–diverging channel (driven by an external acceleration) and a non-periodic channel with a bend (driven by a pressure difference), as well as the flow between two eccentric cylinders (with the inner rotating relative to the outer). In all these cases there exists a wide variation of Knudsen number within the geometries, as well as substantial compressibility despite the Mach number being very low. For validation purposes, our multiscale simulation results are compared to those obtained from full-scale DSMC simulations: very close agreement is obtained in all cases for all flow variables considered. Our multiscale simulation is an order of magnitude more computationally efficient than the full-scale DSMC for the first and second test cases, and two orders of magnitude more efficient for the third case

  1. Randomized Oversampling for Generalized Multiscale Finite Element Methods

    KAUST Repository

    Calo, Victor M.

    2016-03-23

    In this paper, we develop efficient multiscale methods for flows in heterogeneous media. We use the generalized multiscale finite element (GMsFEM) framework. GMsFEM approximates the solution space locally using a few multiscale basis functions. This approximation selects an appropriate snapshot space and a local spectral decomposition, e.g., the use of oversampled regions, in order to achieve an efficient model reduction. However, the successful construction of snapshot spaces may be costly if too many local problems need to be solved in order to obtain these spaces. We use a moderate quantity of local solutions (or snapshot vectors) with random boundary conditions on oversampled regions with zero forcing to deliver an efficient methodology. Motivated by the randomized algorithm presented in [P. G. Martinsson, V. Rokhlin, and M. Tygert, A Randomized Algorithm for the approximation of Matrices, YALEU/DCS/TR-1361, Yale University, 2006], we consider a snapshot space which consists of harmonic extensions of random boundary conditions defined in a domain larger than the target region. Furthermore, we perform an eigenvalue decomposition in this small space. We study the application of randomized sampling for GMsFEM in conjunction with adaptivity, where local multiscale spaces are adaptively enriched. Convergence analysis is provided. We present representative numerical results to validate the method proposed.

  2. Multi-Scale Pattern Recognition for Image Classification and Segmentation

    NARCIS (Netherlands)

    Li, Y.

    2013-01-01

    Scale is an important parameter of images. Different objects or image structures (e.g. edges and corners) can appear at different scales and each is meaningful only over a limited range of scales. Multi-scale analysis has been widely used in image processing and computer vision, serving as the basis

  3. A Liver-centric Multiscale Modeling Framework for Xenobiotics

    Science.gov (United States)

    We describe a multi-scale framework for modeling acetaminophen-induced liver toxicity. Acetaminophen is a widely used analgesic. Overdose of acetaminophen can result in liver injury via its biotransformation into a toxic product, which further induces massive necrosis. Our study foc...

  4. Adaptive Multiscale Finite Element Method for Subsurface Flow Simulation

    NARCIS (Netherlands)

    Van Esch, J.M.

    2010-01-01

    Natural geological formations generally show multiscale structural and functional heterogeneity evolving over many orders of magnitude in space and time. In subsurface hydrological simulations the geological model focuses on the structural hierarchy of physical sub units and the flow model addresses

  5. Multiscale topology optimization of solid and fluid structures

    DEFF Research Database (Denmark)

    Andreasen, Casper Schousboe

    This thesis considers the application of the topology optimization method to multiscale problems, specifically the fluid-structure interaction problem. By multiple-scale methods the governing equations, the Navier-Cauchy and the incompressible Navier-Stokes equations are expanded and separated...

  6. A practical multiscale approach for optimization of structural damping

    DEFF Research Database (Denmark)

    Andreassen, Erik; Jensen, Jakob Søndergaard

    2016-01-01

    A simple and practical multiscale approach suitable for topology optimization of structural damping in a component ready for additive manufacturing is presented.The approach consists of two steps: First, the homogenized loss factor of a two-phase material is maximized. This is done in order...

  7. Fast 2D Simulation of Superconductors: a Multiscale Approach

    DEFF Research Database (Denmark)

    Rodriguez Zermeno, Victor Manuel; Sørensen, Mads Peter; Pedersen, Niels Falsig

    2009-01-01

    This work presents a method to calculate AC losses in thin conductors such as the commercially available second generation superconducting wires through a multiscale meshing technique. The main idea is to use large aspect ratio elements to accurately simulate thin material layers. For a single thin...

  8. Control algorithm for multiscale flow simulations of water

    DEFF Research Database (Denmark)

    Kotsalis, E. M.; Walther, Jens Honore; Kaxiras, E.

    2009-01-01

    We present a multiscale algorithm to couple atomistic water models with continuum incompressible flow simulations via a Schwarz domain decomposition approach. The coupling introduces an inhomogeneity in the description of the atomistic domain and prevents the use of periodic boundary conditions...

  9. Hypoglycemia-Related Electroencephalogram Changes Assessed by Multiscale Entropy

    DEFF Research Database (Denmark)

    Fabris, C.; Sparacino, G.; Sejling, A. S.

    2014-01-01

    derivation in the two glycemic intervals was assessed using the multiscale entropy (MSE) approach, obtaining measures of sample entropy (SampEn) at various temporal scales. The comparison of how signal irregularity measured by SampEn varies as the temporal scale increases in the two glycemic states provides...
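
    A plain reference sketch of the multiscale entropy computation mentioned above (coarse-grain the series at each scale, then compute sample entropy) is given below; the parameters m = 2 and r = 0.2 times the standard deviation of the original series are common defaults assumed here, not necessarily those used in the study.

```python
# Hedged sketch of multiscale entropy (MSE): coarse-grain the signal at each
# scale, then compute sample entropy. Defaults m=2, r=0.2*std are assumptions.
import numpy as np

def sample_entropy(x, m=2, r=0.2):
    x = np.asarray(x, dtype=float)
    N = len(x)
    def count_matches(length):
        # use N - m templates for both lengths so the two counts are comparable
        templates = np.array([x[i:i + length] for i in range(N - m)])
        count = 0
        for i in range(len(templates)):
            dist = np.max(np.abs(templates - templates[i]), axis=1)  # Chebyshev distance
            count += np.sum(dist <= r) - 1                           # exclude the self-match
        return count
    B = count_matches(m)
    A = count_matches(m + 1)
    return -np.log(A / B) if A > 0 and B > 0 else np.inf

def multiscale_entropy(x, max_scale=5, m=2):
    x = np.asarray(x, dtype=float)
    r = 0.2 * np.std(x)          # tolerance fixed from the original series, per convention
    mse = []
    for tau in range(1, max_scale + 1):
        n = len(x) // tau
        coarse = x[:n * tau].reshape(n, tau).mean(axis=1)   # coarse-graining at scale tau
        mse.append(sample_entropy(coarse, m, r))
    return mse

rng = np.random.default_rng(0)
print(multiscale_entropy(rng.standard_normal(1000)))
```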

  10. Elimination of intermediate species in multiscale stochastic reaction networks

    DEFF Research Database (Denmark)

    Cappelletti, Daniele; Wiuf, Carsten

    2016-01-01

    such as the substrate-enzyme complex in the Michaelis-Menten mechanism. Such species are virtually in all real-world networks, they are typically short-lived, degraded at a fast rate and hard to observe experimentally. We provide conditions under which the Markov process of a multiscale reaction network...

  11. Efficient topology optimisation of multiscale and multiphysics problems

    DEFF Research Database (Denmark)

    Alexandersen, Joe

    The aim of this Thesis is to present efficient methods for optimising high-resolution problems of a multiscale and multiphysics nature. The Thesis consists of two parts: one treating topology optimisation of microstructural details and the other treating topology optimisation of conjugate heat...

  12. Computer-Aided Multiscale Modelling for Chemical Process Engineering

    DEFF Research Database (Denmark)

    Morales Rodriguez, Ricardo; Gani, Rafiqul

    2007-01-01

    Chemical processes are generally modeled through monoscale approaches, which, while not adequate, satisfy a useful role in product-process design. In this case, use of a multi-dimensional and multi-scale model-based approach has importance in product-process development. A computer-aided framework...

  13. Cyclic Matching Pursuits with Multiscale Time-frequency Dictionaries

    DEFF Research Database (Denmark)

    Sturm, Bob L.; Christensen, Mads Græsbøll

    2010-01-01

    We generalize cyclic matching pursuit (CMP), propose an orthogonal variant, and examine their performance using multiscale time-frequency dictionaries in the sparse approximation of signals. Overall, we find that the cyclic approach of CMP produces signal models that have a much lower approximation...

  14. Multiscale perspectives of species richness in East Africa

    NARCIS (Netherlands)

    Said, M.

    2003-01-01

    This dissertation describes and analyses animal species richness in East Africa from a multi-scale perspective. We studied diversity patterns at sub-continental, national and sub-national level. The study demonstrated that species diversity patterns were scale-dependent. Diversity patterns varied

  15. Multiscale equation-free algorithms for molecular dynamics

    Science.gov (United States)

    Abi Mansour, Andrew

    Molecular dynamics is a physics-based computational tool that has been widely employed to study the dynamics and structure of macromolecules and their assemblies at the atomic scale. However, the efficiency of molecular dynamics simulation is limited because of the broad spectrum of timescales involved. To overcome this limitation, an equation-free algorithm is presented for simulating these systems using a multiscale model cast in terms of atomistic and coarse-grained variables. Both variables are evolved in time in such a way that the cross-talk between short and long scales is preserved. In this way, the coarse-grained variables guide the evolution of the atom-resolved states, while the latter provide the Newtonian physics for the former. While the atomistic variables are evolved using short molecular dynamics runs, time advancement at the coarse-grained level is achieved with a scheme that uses information from past and future states of the system while accounting for both the stochastic and deterministic features of the coarse-grained dynamics. To complete the multiscale cycle, an atom-resolved state consistent with the updated coarse-grained variables is recovered using algorithms from mathematical optimization. This multiscale paradigm is extended to nanofluidics using concepts from hydrodynamics, and it is demonstrated for macromolecular and nanofluidic systems. A toolkit is developed for prototyping these algorithms, which are then implemented within the GROMACS simulation package and released as an open source multiscale simulator.
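
    The equation-free cycle described above (lift to an atom-resolved state, run short fine-scale bursts, restrict to coarse variables, take a projective step in coarse time) can be illustrated with a toy surrogate in place of molecular dynamics; everything below is an illustrative stand-in, not the GROMACS-based toolkit.

```python
# Hedged sketch of coarse projective integration, with a toy fine-scale model
# (an ensemble of noisy particles relaxing toward zero) standing in for MD.
import numpy as np

rng = np.random.default_rng(1)

def fine_scale_burst(particles, n_steps=20, dt=0.01):
    """Short 'MD' burst: overdamped relaxation toward 0 plus thermal noise."""
    for _ in range(n_steps):
        particles += -particles * dt + 0.05 * np.sqrt(dt) * rng.standard_normal(particles.size)
    return particles

def restrict(particles):
    """Coarse-grained variable: the ensemble mean."""
    return particles.mean()

def lift(coarse_value, n_particles=500):
    """Recover an atom-resolved state consistent with the coarse variable."""
    return coarse_value + 0.05 * rng.standard_normal(n_particles)

coarse = 1.0
for cycle in range(10):
    particles = lift(coarse)
    particles = fine_scale_burst(particles)        # first short fine-scale run
    c1 = restrict(particles)
    particles = fine_scale_burst(particles)        # second short run to estimate the drift
    c2 = restrict(particles)
    slope = (c2 - c1) / (20 * 0.01)                # coarse time-derivative estimate
    coarse = c2 + slope * 1.0                      # projective jump over a long time step
    print(f"cycle {cycle}: coarse variable = {coarse:.4f}")
```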

  16. The automatic lumber planing mill

    Science.gov (United States)

    Peter Koch

    1957-01-01

    It is probable that a truly automatic planning operation could be devised if some of the variables commonly present in the mill-run lumber were eliminated and the remaining variables kept under close control. This paper will deal with the more general situation faced by mostl umber manufacturing plants. In other words, it will be assumed that the incoming lumber has...

  17. Automatic Validation of Protocol Narration

    DEFF Research Database (Denmark)

    Bodei, Chiara; Buchholtz, Mikael; Degano, Pierpablo

    2003-01-01

    We perform a systematic expansion of protocol narrations into terms of a process algebra in order to make precise some of the detailed checks that need to be made in a protocol. We then apply static analysis technology to develop an automatic validation procedure for protocols. Finally, we...

  18. Automatically Preparing Safe SQL Queries

    Science.gov (United States)

    Bisht, Prithvi; Sistla, A. Prasad; Venkatakrishnan, V. N.

    We present the first sound program source transformation approach for automatically transforming the code of a legacy web application to employ PREPARE statements in place of unsafe SQL queries. Our approach therefore opens the way for eradicating the SQL injection threat vector from legacy web applications.
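
    A minimal before/after illustration of the transformation target (string-concatenated SQL versus a prepared, parameterized statement) is sketched below using Python's sqlite3 for brevity; the paper itself performs this rewriting automatically on legacy web-application source code.

```python
# Minimal illustration: unsafe string-concatenated SQL versus a parameterized
# (prepared) query. sqlite3 is used here only for brevity.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin'), ('bob', 'user')")

user_input = "alice' OR '1'='1"   # a classic injection payload

# Unsafe: the input is concatenated into the SQL string and changes its structure.
unsafe = conn.execute("SELECT role FROM users WHERE name = '" + user_input + "'").fetchall()

# Safe: the statement is prepared with a placeholder and the input is bound as data.
safe = conn.execute("SELECT role FROM users WHERE name = ?", (user_input,)).fetchall()

print("unsafe query returned:", unsafe)     # leaks every row
print("prepared query returned:", safe)     # returns nothing, as intended
```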

  19. The Automatic Measurement of Targets

    DEFF Research Database (Denmark)

    Höhle, Joachim

    1997-01-01

    The automatic measurement of targets is demonstrated by means of a theoretical example and by an interactive measuring program for real imagery from a réseau camera. The used strategy is a combination of two methods: the maximum correlation coefficient and the correlation in the subpixel range...... interactive software is also part of a computer-assisted learning program on digital photogrammetry....

  20. Automatic analysis of ultrasonic data

    International Nuclear Information System (INIS)

    Horteur, P.; Colin, J.; Benoist, P.; Bonis, M.; Paradis, L.

    1986-10-01

    This paper describes an automatic and self-contained data processing system, transportable on site, able to produce displays such as A-scan and B-scan images and to present the inspection results very quickly. It can be used for pressure vessel inspection [fr]

  1. Upgradation of automatic liquid scintillation counting system

    International Nuclear Information System (INIS)

    Bhattacharya, Sadhana; Behere, Anita; Sonalkar, S.Y.; Vaidya, P.P.

    2001-01-01

    This paper describes the upgradation of the Microprocessor-based Automatic Liquid Scintillation Counting (MLSC) system. This system was developed in the 1980s and subsequently many systems were manufactured and supplied to Environment Survey labs at various Nuclear Power Plants. Recently this system has been upgraded to a more sophisticated one by using PC add-on hardware and developing Windows-based software. The software implements a more intuitive graphical user interface and also enhances the features, making it comparable with commercially available systems. It implements data processing using full-spectrum analysis, as against the channel-ratio method adopted earlier, improving the accuracy of the results. It also facilitates qualitative as well as quantitative analysis of the β-spectrum. It is possible to analyze a sample containing an unknown β-source. (author)

  2. Uncertainty Quantification and Management for Multi-scale Nuclear Materials Modeling

    International Nuclear Information System (INIS)

    McDowell, David; Deo, Chaitanya; Zhu, Ting; Wang, Yan

    2015-01-01

    Understanding and improving microstructural mechanical stability in metals and alloys is central to the development of high strength and high ductility materials for cladding and core structures in advanced fast reactors. Design and enhancement of radiation-induced damage tolerant alloys are facilitated by better understanding the connection of various unit processes to collective responses in a multiscale model chain, including: dislocation nucleation, absorption and desorption at interfaces; vacancy production, radiation-induced segregation of Cr and Ni at defect clusters (point defect sinks) in BCC Fe-Cr ferritic/martensitic steels; investigation of interaction of interstitials and vacancies with impurities (V, Nb, Ta, Mo, W, Al, Si, P, S); time evolution of swelling (cluster growth) phenomena of irradiated materials; and energetics and kinetics of dislocation bypass of defects formed by interstitial clustering and formation of prismatic loops, informing statistical models of continuum character with regard to processes of dislocation glide, vacancy agglomeration and swelling, climb and cross slip.

  3. Uncertainty Quantification and Management for Multi-scale Nuclear Materials Modeling

    Energy Technology Data Exchange (ETDEWEB)

    McDowell, David [Georgia Inst. of Technology, Atlanta, GA (United States); Deo, Chaitanya [Georgia Inst. of Technology, Atlanta, GA (United States); Zhu, Ting [Georgia Inst. of Technology, Atlanta, GA (United States); Wang, Yan [Georgia Inst. of Technology, Atlanta, GA (United States)

    2015-10-21

    Understanding and improving microstructural mechanical stability in metals and alloys is central to the development of high strength and high ductility materials for cladding and core structures in advanced fast reactors. Design and enhancement of radiation-induced damage tolerant alloys are facilitated by better understanding the connection of various unit processes to collective responses in a multiscale model chain, including: dislocation nucleation, absorption and desorption at interfaces; vacancy production, radiation-induced segregation of Cr and Ni at defect clusters (point defect sinks) in BCC Fe-Cr ferritic/martensitic steels; investigation of interaction of interstitials and vacancies with impurities (V, Nb, Ta, Mo, W, Al, Si, P, S); time evolution of swelling (cluster growth) phenomena of irradiated materials; and energetics and kinetics of dislocation bypass of defects formed by interstitial clustering and formation of prismatic loops, informing statistical models of continuum character with regard to processes of dislocation glide, vacancy agglomeration and swelling, climb and cross slip.

  4. Microstructure-based multiscale modeling of elevated temperature deformation in aluminum alloys

    International Nuclear Information System (INIS)

    Krajewski, Paul E.; Hector, Louis G.; Du Ningning; Bower, Allan F.

    2010-01-01

    A multiscale model for predicting elevated temperature deformation in Al-Mg alloys is presented. Constitutive models are generated from a theoretical methodology and used to investigate the effects of grain size on formability. Flow data are computed with a polycrystalline, microstructure-based model which accounts for grain boundary sliding, stress-induced diffusion, and dislocation creep. Favorable agreement is found between the computed flow data and elevated temperature tensile measurements. A creep constitutive model is then fit to the computed flow data and used in finite-element simulations of two simple gas pressure forming processes, where favorable results are observed. These results are fully consistent with gas pressure forming experiments, and suggest a greater role for constitutive models, derived largely from theoretical methodologies, in the design of Al alloys with enhanced elevated temperature formability. The methodology detailed herein provides a framework for incorporation of results from atomistic-scale models of dislocation creep and diffusion.

  5. Foundations for a multiscale collaborative Earth model

    KAUST Repository

    Afanasiev, M.

    2015-11-11

    of the CSEM development, the broad global updates mostly act to remove artefacts from the assembly of the initial CSEM. During the future evolution of the CSEM, the reference data set will be used to account for the influence of small-scale refinements on large-scale global structure. The CSEM as a computational framework is intended to help bridging the gap between local, regional and global tomography, and to contribute to the development of a global multiscale Earth model. While the current construction serves as a first proof of concept, future refinements and additions will require community involvement, which is welcome at this stage already.

  6. Classification of high resolution imagery based on fusion of multiscale texture features

    International Nuclear Information System (INIS)

    Liu, Jinxiu; Liu, Huiping; Lv, Ying; Xue, Xiaojuan

    2014-01-01

    In high resolution data classification, combining texture features with spectral bands can effectively improve the classification accuracy. However, the window size, which is difficult to choose, is an important factor influencing overall accuracy in textural classification, and current approaches to image texture analysis rely on a single moving window, which ignores the different scale features of various land cover types. In this paper, we propose a new method based on the fusion of multiscale texture features to overcome these problems. The main steps of the new method are the classification of spectral/textural images with fixed window sizes from 3×3 to 15×15 and the comparison of the posterior probability values for every pixel; the class with the largest probability is then assigned to the pixel automatically. The proposed approach is tested on University of Pavia ROSIS data. The results indicate that the new method improves the classification accuracy compared to methods based on fixed-window-size textural classification.
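
    The fusion rule described above reduces to a per-pixel argmax over the posterior maps produced at each window size. The sketch below illustrates that step only; the posterior arrays are random placeholders standing in for the per-scale classifier outputs.

```python
# Hedged sketch of the fusion rule: for each pixel, take the class with the
# largest posterior across all window sizes. The posteriors are placeholders.
import numpy as np

rng = np.random.default_rng(0)
n_scales, n_classes, h, w = 7, 4, 64, 64     # window sizes 3x3 ... 15x15 -> 7 scales

# posteriors[s, c, i, j]: P(class c | pixel (i, j)) from the classifier at scale s
posteriors = rng.random((n_scales, n_classes, h, w))
posteriors /= posteriors.sum(axis=1, keepdims=True)        # normalize per scale and pixel

# Fusion: for every pixel, find the single largest posterior over (scale, class)
# and assign its class label.
flat = posteriors.transpose(2, 3, 0, 1).reshape(h, w, -1)  # (i, j, scale*class)
best = flat.argmax(axis=-1)
labels = best % n_classes                                  # recover the class index
print(labels.shape, labels[:2, :2])
```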

  7. Domain Decomposition Preconditioners for Multiscale Flows in High-Contrast Media

    KAUST Repository

    Galvis, Juan; Efendiev, Yalchin

    2010-01-01

    In this paper, we study domain decomposition preconditioners for multiscale flows in high-contrast media. We consider flow equations governed by elliptic equations in heterogeneous media with a large contrast in the coefficients. Our main goal is to develop domain decomposition preconditioners with the condition number that is independent of the contrast when there are variations within coarse regions. This is accomplished by designing coarse-scale spaces and interpolators that represent important features of the solution within each coarse region. The important features are characterized by the connectivities of high-conductivity regions. To detect these connectivities, we introduce an eigenvalue problem that automatically detects high-conductivity regions via a large gap in the spectrum. A main observation is that this eigenvalue problem has a few small, asymptotically vanishing eigenvalues. The number of these small eigenvalues is the same as the number of connected high-conductivity regions. The coarse spaces are constructed such that they span eigenfunctions corresponding to these small eigenvalues. These spaces are used within two-level additive Schwarz preconditioners as well as overlapping methods for the Schur complement to design preconditioners. We show that the condition number of the preconditioned systems is independent of the contrast. More detailed studies are performed for the case when the high-conductivity region is connected within coarse block neighborhoods. Our numerical experiments confirm the theoretical results presented in this paper. © 2010 Society for Industrial and Applied Mathematics.

  8. Multiscale Geoscene Segmentation for Extracting Urban Functional Zones from VHR Satellite Images

    Directory of Open Access Journals (Sweden)

    Xiuyuan Zhang

    2018-02-01

    Urban functional zones, such as commercial, residential, and industrial zones, are basic units of urban planning, and play an important role in monitoring urbanization. However, historical functional-zone maps are rarely available for cities in developing countries, as traditional urban investigations focus on geographic objects rather than functional zones. Recent studies have sought to extract functional zones automatically from very-high-resolution (VHR) satellite images, and they mainly concentrate on classification techniques but ignore zone segmentation, which delineates functional-zone boundaries and is fundamental to functional-zone analysis. To resolve the issue, this study presents a novel segmentation method, geoscene segmentation, which can identify functional zones at multiple scales by aggregating diverse urban objects considering their features and spatial patterns. In experiments, we applied this method to three Chinese cities (Beijing, Putian, and Zhuhai) and generated detailed functional-zone maps with diverse functional categories. These experimental results indicate that our method effectively delineates urban functional zones from VHR imagery; that different categories of functional zones are extracted by using different scale parameters; and that spatial patterns are more important than the features of individual objects in extracting functional zones. Accordingly, the presented multiscale geoscene segmentation method is important for urban-functional-zone analysis, and can provide valuable data for city planners.

  9. Mixing in 3D Sparse Multi-Scale Grid Generated Turbulence

    Science.gov (United States)

    Usama, Syed; Kopec, Jacek; Tellez, Jackson; Kwiatkowski, Kamil; Redondo, Jose; Malik, Nadeem

    2017-04-01

    Flat 2D fractal grids are known to alter turbulence characteristics downstream of the grid as compared to regular grids with the same blockage ratio and the same mass inflow rates [1]. This has excited interest in the turbulence community for possible exploitation for enhanced mixing and related applications. Recently, a new 3D multi-scale grid design has been proposed [2] such that each generation of grid-element length scales is held in its own frame; the overall effect is a 3D co-planar arrangement of grid elements. This produces a 'sparse' grid system whereby each generation of grid elements produces a turbulent wake pattern that interacts with the other wake patterns downstream. A critical motivation here is that the effective blockage ratio in the 3D Sparse Grid Turbulence (3DSGT) design is significantly lower than in the flat 2D counterpart; typically the blockage ratio could be reduced from say 20% in 2D down to 4% in the 3DSGT. If this idea can be realized in practice, it could potentially greatly enhance the efficiency of turbulent mixing and transfer processes, with many possible applications. Work has begun on the 3DSGT experimentally using Surface Flow Image Velocimetry (SFIV) [3] at the European facility in the Max Planck Institute for Dynamics and Self-Organization located in Gottingen, Germany and also at the Technical University of Catalonia (UPC) in Spain, and numerically using Direct Numerical Simulation (DNS) at King Fahd University of Petroleum & Minerals (KFUPM) in Saudi Arabia and at the University of Warsaw in Poland. DNS is the most useful method to compare the experimental results with, and we are studying different types of codes such as Incompact3d and OpenFOAM. Many variables will eventually be investigated for optimal mixing conditions, for example the number of scale generations, the spacing between frames, the size ratio of grid elements, inflow conditions, etc. We will report upon the first set of findings.

  10. Computational multiscale modeling of intergranular cracking

    International Nuclear Information System (INIS)

    Simonovski, Igor; Cizelj, Leon

    2011-01-01

    A novel computational approach for simulation of intergranular cracks in a polycrystalline aggregate is proposed in this paper. The computational model includes a topological model of the experimentally determined microstructure of a 400 μm diameter stainless steel wire and automatic finite element discretization of the grains and grain boundaries. The microstructure was spatially characterized by X-ray diffraction contrast tomography and contains 362 grains and some 1600 grain boundaries. Available constitutive models currently include isotropic elasticity for the grain interior and cohesive behavior with damage for the grain boundaries. The experimentally determined lattice orientations are employed to distinguish between resistant low energy and susceptible high energy grain boundaries in the model. The feasibility and performance of the proposed computational approach is demonstrated by simulating the onset and propagation of intergranular cracking. The preliminary numerical results are outlined and discussed.

  11. Multiscale model reduction for shale gas transport in fractured media

    KAUST Repository

    Akkutlu, I. Y.

    2016-05-18

    In this paper, we develop a multiscale model reduction technique that describes shale gas transport in fractured media. Due to the pore-scale heterogeneities and processes, we use upscaled models to describe the matrix. We follow our previous work (Akkutlu et al. Transp. Porous Media 107(1), 235–260, 2015), where we derived an upscaled model in the form of a generalized nonlinear diffusion model to describe the effects of kerogen. To model the interaction between the matrix and the fractures, we use the Generalized Multiscale Finite Element Method (Efendiev et al. J. Comput. Phys. 251, 116–135, 2013, 2015). In this approach, the matrix-fracture interaction is modeled via local multiscale basis functions. In Efendiev et al. (2015), we developed the GMsFEM and applied it to linear flows with horizontal or vertical fracture orientations aligned with a Cartesian fine grid. The approach in Efendiev et al. (2015) does not allow handling arbitrary fracture distributions. In this paper, we (1) consider arbitrary fracture distributions on an unstructured grid; (2) develop GMsFEM for nonlinear flows; and (3) develop online basis function strategies to adaptively improve the convergence. The number of multiscale basis functions in each coarse region represents the degrees of freedom needed to achieve a certain error threshold. Our approach is adaptive in the sense that the multiscale basis functions can be added in the regions of interest. Numerical results for a two-dimensional problem are presented to demonstrate the efficiency of the proposed approach. © 2016 Springer International Publishing Switzerland

  12. Multiscale decomposition for heterogeneous land-atmosphere systems

    Science.gov (United States)

    Liu, Shaofeng; Shao, Yaping; Hintz, Michael; Lennartz-Sassinek, Sabine

    2015-02-01

    The land-atmosphere system is characterized by pronounced land surface heterogeneity and vigorous atmospheric turbulence both covering a wide range of scales. The multiscale surface heterogeneities and multiscale turbulent eddies interact nonlinearly with each other. Understanding these multiscale processes quantitatively is essential to the subgrid parameterizations for weather and climate models. In this paper, we propose a method for surface heterogeneity quantification and turbulence structure identification. The first part of the method is an orthogonal transform in the probability density function (PDF) domain, in contrast to the orthogonal wavelet transforms which are performed in the physical space. As the basis of the whole method, the orthogonal PDF transform (OPT) is used to asymptotically reconstruct the original signals by representing the signal values with multilevel approximations. The "patch" idea is then applied to these reconstructed fields in order to recognize areas at the land surface or in turbulent flows that are of the same characteristics. A patch here is a connected area with the same approximation. For each recognized patch, a length scale is then defined to build the energy spectrum. The OPT and related energy spectrum analysis, as a whole referred to as the orthogonal PDF decomposition (OPD), is applied to two-dimensional heterogeneous land surfaces and atmospheric turbulence fields for test. The results show that compared to the wavelet transforms, the OPD can reconstruct the original signal more effectively, and accordingly, its energy spectrum represents the signal's multiscale variation more accurately. The method we propose in this paper is of general nature and therefore can be of interest for problems of multiscale process description in other geophysical disciplines.

  13. Multiscale stabilization for convection-dominated diffusion in heterogeneous media

    KAUST Repository

    Calo, Victor M.

    2016-02-23

    We develop a Petrov-Galerkin stabilization method for multiscale convection-diffusion transport systems. Existing stabilization techniques add a limited number of degrees of freedom in the form of bubble functions or a modified diffusion, which may not be sufficient to stabilize multiscale systems. We seek a local reduced-order model for this kind of multiscale transport problem and thus develop a systematic approach for finding reduced-order approximations of the solution. We start from a Petrov-Galerkin framework using optimal weighting functions. We introduce an auxiliary variable to a mixed formulation of the problem. The auxiliary variable stands for the optimal weighting function. The problem reduces to finding a test space (a dimensionally reduced space for this auxiliary variable) which guarantees that the error in the primal variable (representing the solution) is close to the projection error of the full solution on the dimensionally reduced space that approximates the solution. To find the test space, we reformulate some recent mixed Generalized Multiscale Finite Element Methods. We introduce snapshots and local spectral problems that appropriately define local weight and trial spaces. In particular, we use energy-minimizing snapshots and local spectral decompositions in the natural norm associated with the auxiliary variable. The resulting spectral decomposition adaptively identifies and builds the optimal multiscale space to stabilize the system. We discuss the stability and its relation to the approximation property of the test space. We design online basis functions, which accelerate convergence in the test space and, consequently, improve stability. We present several numerical examples and show that one needs only a few test functions to achieve an error similar to the projection error in the primal variable, irrespective of the Peclet number.
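
    The construction above relies on local spectral problems to select a reduced (test) space. As a generic, hypothetical sketch of that selection step only — a generalized eigenvalue problem on locally assembled matrices, keeping a few modes — and not a reproduction of the paper's snapshot spaces, bilinear forms, or norms:

    import numpy as np
    from scipy.linalg import eigh

    def reduced_test_space(A_snap, M_snap, n_modes):
        """Select modes of a local spectral problem A v = lambda M v.

        A_snap and M_snap stand for symmetric matrices assembled on a local
        snapshot space; the eigenvectors with the smallest eigenvalues are
        kept here (a common GMsFEM-style choice) and span the reduced space.
        """
        eigvals, eigvecs = eigh(A_snap, M_snap)           # ascending eigenvalues
        return eigvals[:n_modes], eigvecs[:, :n_modes]    # keep the selected modes

    # Toy example: random symmetric positive-definite matrices standing in
    # for locally assembled snapshot matrices.
    rng = np.random.default_rng(1)
    X = rng.standard_normal((20, 20))
    Y = rng.standard_normal((20, 20))
    A = X @ X.T + 20 * np.eye(20)
    M = Y @ Y.T + 20 * np.eye(20)
    vals, basis = reduced_test_space(A, M, n_modes=4)
    print(vals)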

  14. Multiscale Roughness Influencing on Transport Behavior of Passive Solute through a Single Self-affine Fracture

    Science.gov (United States)

    Dou, Z.

    2017-12-01

    In this study, the influence of multiscale roughness on the transport behavior of a passive solute through a self-affine fracture was investigated. The single self-affine fracture was constructed by successive random additions (SRA), and the fracture roughness was decomposed into two different scales (i.e., large-scale primary roughness and small-scale secondary roughness) by a wavelet analysis technique. The fluid flow in the fracture, which was characterized by Forchheimer's law, showed non-linear flow behaviors such as eddies and tortuous streamlines. The results indicated that the small-scale secondary roughness was primarily responsible for these non-linear flow behaviors. Direct simulations of asymptotic passive solute transport exhibited Non-Fickian transport characteristics (i.e., early arrivals and long tails) in the breakthrough curves (BTCs) and residence time distributions (RTDs) both with and without consideration of the secondary roughness. Analysis of the multiscale BTCs and RTDs showed that the small-scale secondary roughness played a significant role in enhancing the Non-Fickian transport characteristics: removing it led to later arrivals and shorter tails. The peak concentration in the BTCs decreased as the secondary roughness was removed, implying that the secondary roughness could also enhance solute dilution. BTCs estimated by the Fickian advection-dispersion equation (ADE) yielded errors that decreased as the small-scale secondary roughness was removed. The mobile-immobile model (MIM) was implemented as an alternative to characterize the Non-Fickian transport and was found to be more capable of estimating the Non-Fickian BTCs. The small-scale secondary roughness resulted in a decreasing mobile domain fraction and an increasing mass exchange rate between the immobile and mobile domains. The estimated parameters from the MIM could provide insight into the inherent mechanism of roughness
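
    The record does not state which wavelet or decomposition depth was used; the sketch below only illustrates the general idea of splitting a rough profile into large-scale (primary) and small-scale (secondary) components with a discrete wavelet transform (PyWavelets). The wavelet name and the number of discarded detail levels are arbitrary choices for illustration.

    import numpy as np
    import pywt

    def split_roughness(profile, wavelet="db4", fine_levels=3):
        """Split a 1-D fracture wall profile into large-scale (primary) and
        small-scale (secondary) roughness with a discrete wavelet transform.

        The finest `fine_levels` detail bands are treated as secondary roughness.
        """
        coeffs = pywt.wavedec(profile, wavelet)
        primary_coeffs = [c.copy() for c in coeffs]
        for i in range(len(coeffs) - fine_levels, len(coeffs)):
            primary_coeffs[i] = np.zeros_like(coeffs[i])      # drop finest detail bands
        primary = pywt.waverec(primary_coeffs, wavelet)[: len(profile)]
        secondary = profile - primary
        return primary, secondary

    # Toy rough profile: cumulative sum of random increments.
    rng = np.random.default_rng(2)
    profile = np.cumsum(rng.standard_normal(1024)) * 0.01
    primary, secondary = split_roughness(profile)
    print(primary.std(), secondary.std())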

  15. Automatic differentiation algorithms in model analysis

    NARCIS (Netherlands)

    Huiskes, M.J.

    2002-01-01

    Title: Automatic differentiation algorithms in model analysis
    Author: M.J. Huiskes
    Date: 19 March, 2002

    In this thesis, automatic differentiation algorithms and derivative-based methods
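
    The abstract is truncated in this record. As a generic illustration of forward-mode automatic differentiation with dual numbers — not necessarily the algorithms studied in the thesis — a minimal sketch:

    import math

    class Dual:
        """Minimal forward-mode automatic differentiation with dual numbers."""
        def __init__(self, value, deriv=0.0):
            self.value, self.deriv = value, deriv

        def __add__(self, other):
            other = other if isinstance(other, Dual) else Dual(other)
            return Dual(self.value + other.value, self.deriv + other.deriv)
        __radd__ = __add__

        def __mul__(self, other):
            other = other if isinstance(other, Dual) else Dual(other)
            return Dual(self.value * other.value,
                        self.deriv * other.value + self.value * other.deriv)
        __rmul__ = __mul__

    def sin(x):
        # Chain rule applied alongside the function evaluation.
        return Dual(math.sin(x.value), math.cos(x.value) * x.deriv)

    # d/dx [x*sin(x) + 3x] at x = 2, computed to machine precision.
    x = Dual(2.0, 1.0)        # seed the derivative of the independent variable
    y = x * sin(x) + 3 * x
    print(y.value, y.deriv)   # derivative equals sin(2) + 2*cos(2) + 3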

  16. Automatisms: bridging clinical neurology with criminal law.

    Science.gov (United States)

    Rolnick, Joshua; Parvizi, Josef

    2011-03-01

    The law, like neurology, grapples with the relationship between disease states and behavior. Sometimes, the two disciplines share the same terminology, such as automatism. In law, the "automatism defense" is a claim that action was involuntary or performed while unconscious. Someone charged with a serious crime can acknowledge committing the act and yet may go free if, relying on the expert testimony of clinicians, the court determines that the act of crime was committed in a state of automatism. In this review, we explore the relationship between the use of automatism in the legal and clinical literature. We close by addressing several issues raised by the automatism defense: semantic ambiguity surrounding the term automatism, the presence or absence of consciousness during automatisms, and the methodological obstacles that have hindered the study of cognition during automatisms. Copyright © 2010 Elsevier Inc. All rights reserved.

  17. Automatic terrain modeling using transfinite element analysis

    KAUST Repository

    Collier, Nathan; Calo, Victor M.

    2010-01-01

    An automatic procedure for modeling terrain is developed based on L2 projection-based interpolation of discrete terrain data onto transfinite function spaces. The function space is refined automatically by the use of image processing techniques
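
    The record describes L2 projection-based interpolation of discrete terrain data onto transfinite function spaces. The sketch below shows only the discrete L2 (least-squares) projection idea, using a hypothetical tensor-product polynomial basis rather than the transfinite spaces of the paper; the function name and degree are illustrative assumptions.

    import numpy as np

    def l2_fit_terrain(x, y, z, degree=3):
        """Least-squares (discrete L2) projection of scattered elevation samples
        onto a tensor-product polynomial basis on the unit square."""
        powers = [(i, j) for i in range(degree + 1) for j in range(degree + 1)]
        V = np.column_stack([x**i * y**j for i, j in powers])   # Vandermonde-type matrix
        coeffs, *_ = np.linalg.lstsq(V, z, rcond=None)
        def surface(xq, yq):
            return sum(c * xq**i * yq**j for c, (i, j) in zip(coeffs, powers))
        return surface

    # Toy terrain samples.
    rng = np.random.default_rng(3)
    x, y = rng.random(500), rng.random(500)
    z = np.sin(3 * x) * np.cos(2 * y) + 0.01 * rng.standard_normal(500)
    surface = l2_fit_terrain(x, y, z)
    print(surface(0.5, 0.5))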

  18. Automatic localization of the da Vinci surgical instrument tips in 3-D transrectal ultrasound.

    Science.gov (United States)

    Mohareri, Omid; Ramezani, Mahdi; Adebar, Troy K; Abolmaesumi, Purang; Salcudean, Septimiu E

    2013-09-01

    Robot-assisted laparoscopic radical prostatectomy (RALRP) using the da Vinci surgical system is the current state-of-the-art treatment option for clinically confined prostate cancer. Given the limited field of view of the surgical site in RALRP, several groups have proposed the integration of transrectal ultrasound (TRUS) imaging in the surgical workflow to assist with accurate resection of the prostate and the sparing of the neurovascular bundles (NVBs). We previously introduced a robotic TRUS manipulator and a method for automatically tracking da Vinci surgical instruments with the TRUS imaging plane, in order to facilitate the integration of intraoperative TRUS in RALRP. Rapid and automatic registration of the kinematic frames of the da Vinci surgical system and the robotic TRUS probe manipulator is a critical component of the instrument tracking system. In this paper, we propose a fully automatic registration technique based on automatic 3-D TRUS localization of robot instrument tips pressed against the air-tissue boundary anterior to the prostate. The detection approach uses a multiscale filtering technique to identify and localize surgical instrument tips in the TRUS volume, and could also be used to detect other surface fiducials in 3-D ultrasound. Experiments have been performed using a tissue phantom and two ex vivo tissue samples to show the feasibility of the proposed methods. Also, an initial in vivo evaluation of the system has been carried out on a live anaesthetized dog with a da Vinci Si surgical system and a target registration error (defined as the root mean square distance of corresponding points after registration) of 2.68 mm has been achieved. Results show this method's accuracy and consistency for automatic registration of TRUS images to the da Vinci surgical system.
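
    The record mentions a multiscale filtering technique for tip localization without specifying the filters. As a generic, hypothetical stand-in — not the authors' detector — the sketch below runs a scale-normalized Laplacian-of-Gaussian filter bank over a 3-D volume and takes the strongest response as the candidate tip location; the scales and the use of scipy.ndimage are illustrative assumptions.

    import numpy as np
    from scipy import ndimage

    def localize_tip(volume, scales=(1.0, 2.0, 4.0)):
        """Multiscale blob detection: scale-normalized Laplacian-of-Gaussian
        responses are maximized over scale and the strongest response gives
        the candidate tip location (voxel indices)."""
        best = None
        for sigma in scales:
            # Negative LoG peaks at the center of bright blobs; sigma**2 is
            # the usual scale normalization.
            response = -(sigma ** 2) * ndimage.gaussian_laplace(volume.astype(float), sigma)
            best = response if best is None else np.maximum(best, response)
        return np.unravel_index(np.argmax(best), volume.shape)

    # Toy volume with a bright blob near voxel (20, 30, 15).
    vol = np.zeros((64, 64, 32))
    vol[20, 30, 15] = 1.0
    vol = ndimage.gaussian_filter(vol, sigma=2.0)
    print(localize_tip(vol))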

  19. [Automatic adjustment control system for DC glow discharge plasma source].

    Science.gov (United States)

    Wan, Zhen-zhen; Wang, Yong-qing; Li, Xiao-jia; Wang, Hai-zhou; Shi, Ning

    2011-03-01

    There are three important parameters in the DC glow discharge process: the discharge current, the discharge voltage, and the argon pressure in the discharge source. These parameters influence one another during the glow discharge process. This paper presents an automatic control system for a DC glow discharge plasma source. The system acquires and controls the discharge voltage automatically by adjusting the source pressure while the discharge current is held constant during the glow discharge process. The design concept, circuit principle, and control program of this automatic control system are described. Accuracy is improved by reducing complex manual operations and the associated control errors. The system enhances the control accuracy of the glow discharge voltage and reduces the time needed to reach a stable discharge voltage. Voltage stability test results with the automatic control system are provided as well: the accuracy is better than 1% FS, improved from 4% FS under manual control. The time to reach a stable discharge voltage has been shortened from more than 90 s under manual control to within 30 s under automatic control. Standard samples such as middle-low alloy steel and tin bronze have been tested with the automatic control system, and the precision of the concentration analysis has been significantly improved: the RSDs of all test results are better than 3.5%. For the middle-low alloy steel standard sample, the RSD range of the concentration results for Ti, Co and Mn is reduced from 3.0%-4.3% under manual control to 1.7%-2.4% under automatic control, and that for S and Mo from 5.2%-5.9% to 3.3%-3.5%. For the tin bronze standard sample, the RSD range for Sn, Zn and Al is reduced from 2.6%-4.4% to 1.0%-2.4%, and that for Si, Ni and Fe from 6.6%-13.9% to 2.6%-3.5%. The test data are also presented in this paper.
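
    The paper does not state the control law it uses. As a purely hypothetical sketch of the closed loop described — holding the current constant and steering the discharge voltage to a setpoint by adjusting the source pressure — a toy proportional-integral regulator with an assumed monotone pressure-voltage relation might look like this; all constants and the plant model are invented for illustration.

    def regulate_voltage(v_set=800.0, steps=300, dt=0.1, kp=0.01, ki=0.001):
        """Toy closed loop: adjust the argon pressure so the discharge voltage
        settles at v_set while the current is held constant by the supply."""
        pressure, integral = 5.0, 0.0                 # arbitrary initial pressure
        voltage_of = lambda p: 1200.0 - 60.0 * p      # assumed monotone plant model
        for _ in range(steps):
            v = voltage_of(pressure)
            error = v - v_set
            integral += error * dt
            # Higher pressure lowers the voltage in this toy model, so a positive
            # error (voltage too high) should increase the pressure.
            pressure += (kp * error + ki * integral) * dt
        return pressure, voltage_of(pressure)

    print(regulate_voltage())   # pressure approaches the value giving v_set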

  20. Multiscale Observation System for Sea Ice Drift and Deformation

    Science.gov (United States)

    Lensu, M.; Haapala, J. J.; Heiler, I.; Karvonen, J.; Suominen, M.

    2011-12-01

    The drift and deformation of the sea ice cover is most commonly followed from successive SAR images. The time interval between the images is seldom less than one day, which provides a rather crude approximation of the motion fields, as ice can move tens of kilometers per day. This is particularly so from the viewpoint of operational services, which seek to provide real-time information for ice-navigating ships and other end users, as leads are closed and opened or ridge fields created on time scales of one hour or less. The ice forecast models also need better temporal resolution in the ice motion data. We present experiences from a multiscale monitoring system set up in the Bay of Bothnia, the northernmost basin of the Baltic Sea. The basin generates difficult ice conditions every winter, while the ports are kept open with the help of an icebreaker fleet. The key addition to SAR imagery is the use of coastal radars for monitoring coastal ice fields. An independent server is used to tap the radar signal and process it to suit ice monitoring purposes. This is done without interfering with the basic use of the radars, ship traffic monitoring. About 20 images per minute are captured and sent to the headquarters for motion field extraction, website animation and distribution. This provides a very detailed real-time picture of ice movement and deformation within a 20 km range. The real-time movements are additionally followed with ice drifter arrays and with AIS ship identification data, from which the translation of ship channels due to ice drift can be determined. An extensive research effort is associated with the operational setup and uses the data for ice drift model enhancement. The Baltic ice models seek to forecast conditions relevant to ship traffic, especially hazardous ones such as severe ice compression. The main missing link here is downscaling, i.e., relating local-scale ice dynamics and kinematics to the behaviour at the ice model scale. The data flow when
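
    The motion-field extraction method of the operational system is not described in the record. As a generic, hypothetical building block, the sketch below estimates the displacement between two successive frames by phase correlation; applied per sub-window, such estimates would yield a coarse motion field. The function name and the toy test are illustrative only.

    import numpy as np

    def estimate_shift(frame_a, frame_b):
        """Return the integer-pixel shift d such that frame_a is approximately
        frame_b translated (rolled) by d, via FFT phase correlation."""
        F_a, F_b = np.fft.fft2(frame_a), np.fft.fft2(frame_b)
        cross_power = F_a * np.conj(F_b)
        cross_power /= np.abs(cross_power) + 1e-12      # keep only the phase
        corr = np.fft.ifft2(cross_power).real
        peak = np.unravel_index(np.argmax(corr), corr.shape)
        # Map peak indices to signed shifts.
        return tuple(p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape))

    # Toy test: the second frame is the first one shifted by (5, -3) pixels.
    rng = np.random.default_rng(4)
    frame_a = rng.random((128, 128))
    frame_b = np.roll(frame_a, shift=(5, -3), axis=(0, 1))
    print(estimate_shift(frame_b, frame_a))   # recovers (5, -3)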