WorldWideScience

Sample records for limits multiresolution analyses

  1. Adaptive multi-resolution Modularity for detecting communities in networks

    Science.gov (United States)

    Chen, Shi; Wang, Zhi-Zhong; Bao, Mei-Hua; Tang, Liang; Zhou, Ji; Xiang, Ju; Li, Jian-Ming; Yi, Chen-He

    2018-02-01

    Community structure is a common topological property of complex networks that has attracted much attention from various fields. Optimizing quality functions for community structures is a popular strategy for community detection, Modularity optimization being the classic example. Here, we introduce a general definition of Modularity from which several classical (multi-resolution) Modularities can be derived, and then propose an adaptive (multi-resolution) Modularity that combines the advantages of the different variants. By applying these Modularities to various synthetic and real-world networks, we study the behavior of the methods, showing the validity and advantages of multi-resolution Modularity in community detection. The adaptive Modularity, as a multi-resolution method, naturally overcomes the first type of limit of Modularity and detects communities at different scales; it quickens the disconnection of communities and delays their breakup in heterogeneous networks; it is thus expected to generate stable community structures in networks more effectively and to have stronger tolerance against the second type of limit of Modularity.
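
    The resolution-parameter idea in this abstract is easy to make concrete. Below is a minimal sketch, assuming a Reichardt-Bornholdt-style resolution parameter gamma rather than the paper's exact adaptive formulation; the helper name and toy graph are invented for illustration.

```python
# Sketch: modularity with a resolution parameter gamma (Reichardt-Bornholdt style),
# illustrating the idea behind multi-resolution Modularity. Hypothetical helper,
# not the paper's exact adaptive formulation.
from collections import defaultdict

def multiresolution_modularity(edges, communities, gamma=1.0):
    """edges: list of (u, v) pairs; communities: dict node -> community id."""
    m = len(edges)
    internal = defaultdict(int)   # edges inside each community
    degree = defaultdict(int)     # total degree of each community
    for u, v in edges:
        degree[communities[u]] += 1
        degree[communities[v]] += 1
        if communities[u] == communities[v]:
            internal[communities[u]] += 1
    return sum(internal[c] / m - gamma * (degree[c] / (2 * m)) ** 2
               for c in degree)

edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
partition = {0: "A", 1: "A", 2: "A", 3: "B", 4: "B", 5: "B"}
for g in (0.5, 1.0, 2.0):   # sweeping gamma probes different community scales
    print(g, round(multiresolution_modularity(edges, partition, g), 3))
```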

  2. On frame multiresolution analysis

    DEFF Research Database (Denmark)

    Christensen, Ole

    2003-01-01

    We use the freedom in frame multiresolution analysis to construct tight wavelet frames (even in the case where the refinable function does not generate a tight frame). In cases where a frame multiresolution does not lead to a construction of a wavelet frame, we show how one can nevertheless...

  3. A multi-resolution approach to heat kernels on discrete surfaces

    KAUST Repository

    Vaxman, Amir; Ben-Chen, Mirela; Gotsman, Craig

    2010-01-01

    process - limits this type of analysis to 3D models of modest resolution. We show how to use the unique properties of the heat kernel of a discrete two dimensional manifold to overcome these limitations. Combining a multi-resolution approach with a novel

  4. Multiresolution Analysis Adapted to Irregularly Spaced Data

    Directory of Open Access Journals (Sweden)

    Anissa Mokraoui

    2009-01-01

    This paper investigates the mathematical background of multiresolution analysis in the specific context where the signal is represented by irregularly sampled data at known locations. The study concerns the construction of nested piecewise polynomial multiresolution spaces represented by their corresponding orthonormal bases. Simple spline basis orthonormalization procedures yield a large family of orthonormal spline scaling bases defined on consecutive bounded intervals. However, if no conditions beyond those coming from multiresolution are imposed on each bounded interval, the orthonormal basis consists of discontinuous scaling functions, and the spline wavelet basis has the same problem. Moreover, the dimension of the corresponding wavelet basis increases with the spline degree. An appropriate orthonormalization procedure for the basic spline space basis, whatever the degree of the spline, allows us to (i) provide continuous scaling and wavelet functions, (ii) reduce the number of wavelets to only one, and (iii) reduce the complexity of the filter bank. Example implementations illustrate that the main features of traditional multiresolution analysis are preserved.
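
    As a rough illustration of the orthonormalization step described above (not the paper's continuity-preserving spline construction), one can orthonormalize a sampled polynomial basis against the discrete inner product induced by the irregular locations:

```python
# Sketch: orthonormalizing a polynomial scaling basis with respect to the
# discrete inner product induced by irregular sample locations. Generic step
# only; the paper's continuity-preserving spline construction is not shown.
import numpy as np

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0.0, 1.0, size=50))       # irregular sample locations
degree = 3
V = np.vander(x, degree + 1, increasing=True)     # polynomial basis sampled at x

# QR makes the columns orthonormal for <f, g> = sum_k f(x_k) g(x_k)
Q, _ = np.linalg.qr(V)
print(np.allclose(Q.T @ Q, np.eye(degree + 1)))   # True: discrete orthonormal basis
```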

  5. Large-Scale Multi-Resolution Representations for Accurate Interactive Image and Volume Operations

    KAUST Repository

    Sicat, Ronell B.

    2015-11-25

    The resolutions of acquired image and volume data are ever increasing. However, the resolutions of commodity display devices remain limited. This leads to an increasing gap between data and display resolutions. To bridge this gap, the standard approach is to employ output-sensitive operations on multi-resolution data representations. Output-sensitive operations facilitate interactive applications since their required computations are proportional only to the size of the data that is visible, i.e., the output, and not the full size of the input. Multi-resolution representations, such as image mipmaps and volume octrees, are crucial in providing these operations direct access to any subset of the data at any resolution corresponding to the output. Despite its widespread use, this standard approach has some shortcomings in three important application areas, namely non-linear image operations, multi-resolution volume rendering, and large-scale image exploration. This dissertation presents new multi-resolution representations for large-scale images and volumes that address these shortcomings. Standard multi-resolution representations require low-pass pre-filtering for anti-aliasing. However, linear pre-filters do not commute with non-linear operations. This becomes problematic when applying non-linear operations directly to any coarse resolution levels in standard representations. Particularly, this leads to inaccurate output when applying non-linear image operations, e.g., color mapping and detail-aware filters, to multi-resolution images. Similarly, in multi-resolution volume rendering, this leads to inconsistency artifacts which manifest as erroneous differences in rendering outputs across resolution levels. To address these issues, we introduce the sparse pdf maps and sparse pdf volumes representations for large-scale images and volumes, respectively. These representations sparsely encode continuous probability density functions (pdfs) of multi-resolution pixel
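
    The non-commutation problem described here can be demonstrated in a few lines. The sketch below, with squaring as a hypothetical stand-in for a color map, shows that filtering-then-mapping and mapping-then-filtering disagree:

```python
# Sketch: why linear pre-filtering does not commute with non-linear operations.
# Applying a non-linear map (here, squaring as a stand-in for color mapping)
# before vs. after 2x down-sampling gives different coarse images.
import numpy as np

rng = np.random.default_rng(1)
img = rng.uniform(size=(8, 8))

def downsample2(a):                       # 2x2 box filter (linear pre-filter)
    return 0.25 * (a[0::2, 0::2] + a[1::2, 0::2] + a[0::2, 1::2] + a[1::2, 1::2])

nonlinear = lambda a: a ** 2              # stand-in for a non-linear operator

coarse_then_op = nonlinear(downsample2(img))
op_then_coarse = downsample2(nonlinear(img))
print(np.max(np.abs(coarse_then_op - op_then_coarse)))  # non-zero discrepancy
```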

  6. Signal and image multiresolution analysis

    CERN Document Server

    Ouahabi, Abdeldjalil

    2012-01-01

    Multiresolution analysis using the wavelet transform has received considerable attention in recent years from researchers in various fields. It is a powerful tool for efficiently representing signals and images at multiple levels of detail, with many inherent advantages, including compression, level-of-detail display, progressive transmission, level-of-detail editing, filtering, modeling, fractals and multifractals, etc. This book aims to provide a simple formalization and new clarity on multiresolution analysis, rendering accessible obscure techniques, and merging, unifying or completing...

  7. Sparse PDF Volumes for Consistent Multi-Resolution Volume Rendering

    KAUST Repository

    Sicat, Ronell Barrera; Kruger, Jens; Moller, Torsten; Hadwiger, Markus

    2014-01-01

    This paper presents a new multi-resolution volume representation called sparse pdf volumes, which enables consistent multi-resolution volume rendering based on probability density functions (pdfs) of voxel neighborhoods. These pdfs are defined

  8. Adaptive Multiresolution Methods: Practical issues on Data Structures, Implementation and Parallelization*

    Directory of Open Access Journals (Sweden)

    Bachmann M.

    2011-12-01

    The concept of fully adaptive multiresolution finite volume schemes has been developed and investigated during the past decade. Here grid adaptation is realized by performing a multiscale decomposition of the discrete data at hand. By means of hard thresholding, the resulting multiscale data are compressed. From the remaining data a locally refined grid is constructed. The aim of the present work is to give a self-contained overview of the construction of an appropriate multiresolution analysis using biorthogonal wavelets, its efficient realization by means of hash maps using global cell identifiers, and the parallelization of the multiresolution-based grid adaptation via MPI using space-filling curves.
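
    A minimal sketch of the hash-map-with-global-cell-identifiers idea, assuming a Morton (Z-order) key layout; the identifier scheme here is illustrative, not the paper's exact encoding:

```python
# Sketch: a hash map keyed by global cell identifiers for an adaptive grid,
# with a Morton (Z-order) key so that cells close in space map to nearby keys
# along a space-filling curve. Identifier layout is an assumption.
def morton2d(i, j, level):
    """Interleave the bits of (i, j) and tag the refinement level."""
    key = 0
    for b in range(level):
        key |= ((i >> b) & 1) << (2 * b) | ((j >> b) & 1) << (2 * b + 1)
    return (level, key)

cells = {}                                   # hash map: global id -> cell data
for i in range(4):
    for j in range(4):
        cells[morton2d(i, j, level=2)] = {"avg": 0.0, "flagged": False}

print(morton2d(2, 3, 2))                     # (2, 14): level tag + Z-order key
```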

  9. A multiresolution method for solving the Poisson equation using high order regularization

    DEFF Research Database (Denmark)

    Hejlesen, Mads Mølholm; Walther, Jens Honore

    2016-01-01

    We present a novel high order multiresolution Poisson solver based on regularized Green's function solutions to obtain exact free-space boundary conditions while using fast Fourier transforms for computational efficiency. Multiresolution is achieved through local refinement patches and regularized Green's functions corresponding to the difference in spatial resolution between the patches. The full solution is obtained by utilizing the linearity of the Poisson equation, enabling superposition of solutions. We show that the multiresolution Poisson solver produces convergence rates...
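
    For orientation, the FFT building block of such a solver can be sketched as follows. This minimal version solves the Poisson equation on a periodic domain; the paper's regularized free-space Green's functions and refinement patches are not reproduced here.

```python
# Sketch: the FFT building block of a Poisson solver, on a periodic 2D domain.
# The paper instead convolves with regularized free-space Green's functions;
# this minimal version only shows how the Fourier solve works.
import numpy as np

n, L = 64, 2 * np.pi
x = np.linspace(0, L, n, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")
rhs = np.sin(X) * np.cos(2 * Y)                # solve -Laplacian(phi) = rhs

k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)     # integer wavenumbers here
KX, KY = np.meshgrid(k, k, indexing="ij")
k2 = KX**2 + KY**2
k2[0, 0] = 1.0                                 # avoid dividing the zero mode

phi_hat = np.fft.fft2(rhs) / k2
phi_hat[0, 0] = 0.0                            # pin the undetermined mean to zero
phi = np.real(np.fft.ifft2(phi_hat))

exact = rhs / 5.0                              # since -Laplacian(rhs/5) = rhs
print(np.max(np.abs(phi - exact)))             # machine-precision agreement
```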

  10. Multiresolution analysis applied to text-independent phone segmentation

    International Nuclear Information System (INIS)

    Cherniz, Analía S; Torres, María E; Rufiner, Hugo L; Esposito, Anna

    2007-01-01

    Automatic speech segmentation is of fundamental importance in different speech applications. The most common implementations are based on hidden Markov models. They use a statistical modelling of the phonetic units to align the data along a known transcription. This is an expensive and time-consuming process, because of the huge amount of data needed to train the system. Text-independent speech segmentation procedures have been developed to overcome some of these problems. These methods detect transitions in the evolution of the time-varying features that represent the speech signal. Speech representation plays a central role in the segmentation task. In this work, two new speech parameterizations based on the continuous multiresolution entropy, using Shannon entropy, and the continuous multiresolution divergence, using the Kullback-Leibler distance, are proposed. These approaches have been compared with the classical Melbank parameterization. The proposed encodings significantly increase segmentation performance. The parameterization based on the continuous multiresolution divergence shows the best results, increasing the number of correctly detected boundaries and decreasing the number of erroneously inserted points. This suggests that parameterizations based on multiresolution information measures provide information related to acoustic features that takes phonemic transitions into account.
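
    A rough sketch of a multiresolution entropy feature in the spirit of this abstract, assuming a Haar decomposition and illustrative frame lengths rather than the authors' continuous wavelet formulation:

```python
# Sketch: a "multiresolution entropy" in the spirit of the paper: Shannon
# entropy of the energy distribution across Haar wavelet scales, computed in
# sliding frames of a signal. Frame length and scales are illustrative choices.
import numpy as np

def haar_detail_energies(frame, levels=4):
    a = np.asarray(frame, dtype=float)
    energies = []
    for _ in range(levels):
        d = (a[0::2] - a[1::2]) / np.sqrt(2)   # detail coefficients at this scale
        a = (a[0::2] + a[1::2]) / np.sqrt(2)   # approximation passed down
        energies.append(np.sum(d ** 2))
    return np.array(energies)

def multiresolution_entropy(signal, frame_len=64, levels=4):
    out = []
    for start in range(0, len(signal) - frame_len + 1, frame_len // 2):
        e = haar_detail_energies(signal[start:start + frame_len], levels)
        p = e / e.sum()                        # energy distribution across scales
        out.append(float(-np.sum(p * np.log2(p + 1e-12))))
    return np.array(out)

t = np.linspace(0, 1, 1024)
sig = np.sin(2 * np.pi * 30 * t) + (t > 0.5) * np.sin(2 * np.pi * 120 * t)
print(multiresolution_entropy(sig).round(2))   # entropy jumps near the transition
```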

  11. Multiresolution signal decomposition schemes

    NARCIS (Netherlands)

    J. Goutsias (John); H.J.A.M. Heijmans (Henk)

    1998-01-01

    [PNA-R9810] Interest in multiresolution techniques for signal processing and analysis is increasing steadily. An important instance of such a technique is the so-called pyramid decomposition scheme. This report proposes a general axiomatic pyramid decomposition scheme for signal analysis

  12. A new class of morphological pyramids for multiresolution image analysis

    NARCIS (Netherlands)

    Roerdink, Jos B.T.M.; Asano, T; Klette, R; Ronse, C

    2003-01-01

    We study nonlinear multiresolution signal decomposition based on morphological pyramids. Motivated by a problem arising in multiresolution volume visualization, we introduce a new class of morphological pyramids. In this class the pyramidal synthesis operator always has the same form, i.e. a

  13. Interactive indirect illumination using adaptive multiresolution splatting.

    Science.gov (United States)

    Nichols, Greg; Wyman, Chris

    2010-01-01

    Global illumination provides a visual richness not achievable with the direct illumination models used by most interactive applications. To generate global effects, numerous approximations attempt to reduce global illumination costs to levels feasible in interactive contexts. One such approximation, reflective shadow maps, samples a shadow map to identify secondary light sources whose contributions are splatted into eye space. This splatting introduces significant overdraw that is usually reduced by artificially shrinking each splat's radius of influence. This paper introduces a new multiresolution approach for interactively splatting indirect illumination. Instead of reducing GPU fill rate by reducing splat size, we reduce fill rate by rendering splats into a multiresolution buffer. This takes advantage of the low-frequency nature of diffuse and glossy indirect lighting, allowing rendering of indirect contributions at low resolution where lighting changes slowly and at high resolution near discontinuities. Because this multiresolution rendering occurs on a per-splat basis, we can significantly reduce fill rate without arbitrarily clipping splat contributions below a given threshold; those regions are simply rendered at a coarse resolution.

  14. A multiresolution approach to iterative reconstruction algorithms in X-ray computed tomography.

    Science.gov (United States)

    De Witte, Yoni; Vlassenbroeck, Jelle; Van Hoorebeke, Luc

    2010-09-01

    In computed tomography, the application of iterative reconstruction methods in practical situations is impeded by their high computational demands. Especially in high-resolution X-ray computed tomography, where reconstruction volumes contain a high number of volume elements (several gigavoxels), this computational burden prevents their breakthrough in practice. Besides the large number of calculations, iterative algorithms require the entire volume to be kept in memory during reconstruction, which quickly becomes cumbersome for large data sets. To overcome this obstacle, we present a novel multiresolution reconstruction, which greatly reduces the required amount of memory without significantly affecting the reconstructed image quality. It is shown that, combined with an efficient implementation on a graphics processing unit, the multiresolution approach enables the application of iterative algorithms in the reconstruction of large volumes at an acceptable speed using only limited resources.

  15. An ROI multi-resolution compression method for 3D-HEVC

    Science.gov (United States)

    Ti, Chunli; Guan, Yudong; Xu, Guodong; Teng, Yidan; Miao, Xinyuan

    2017-09-01

    3D High Efficiency Video Coding (3D-HEVC) offers significant potential for increasing the compression ratio of multi-view RGB-D videos. However, the bit rate still rises dramatically as video resolution improves, which challenges the transmission network, especially mobile networks. This paper proposes an ROI multi-resolution compression method for 3D-HEVC to better preserve the information in the region of interest (ROI) under limited bandwidth. This is realized primarily through ROI extraction and through compressing multi-resolution preprocessed videos as alternative data according to network conditions. First, semantic contours are detected by modified structured forests to suppress the color textures inside objects. The ROI is then determined from the contour neighborhood together with the face region and the foreground area of the scene. Second, the RGB-D videos are divided into slices and compressed via 3D-HEVC at different resolutions, for selection by audiences and applications. Afterwards, the reconstructed low-resolution videos from the 3D-HEVC encoder are directly up-sampled via Laplace transformation and used to replace the non-ROI areas of the high-resolution videos. Finally, the ROI multi-resolution compressed slices are obtained by compressing the ROI-preprocessed videos with 3D-HEVC. The temporal and spatial details of non-ROI areas are reduced in the low-resolution videos, so the encoder automatically preserves the ROI better. Experiments indicate that the proposed method keeps the key high-frequency information of subjective significance while reducing the bit rate.

  16. Traffic Multiresolution Modeling and Consistency Analysis of Urban Expressway Based on Asynchronous Integration Strategy

    Directory of Open Access Journals (Sweden)

    Liyan Zhang

    2017-01-01

    The paper studies a multiresolution traffic flow simulation model of an urban expressway. First, a three-level multiresolution hybrid model was chosen over a two-level hybrid model. Then, the multiresolution simulation framework and integration strategies are introduced. Third, the paper proposes an urban expressway multiresolution traffic simulation model with an asynchronous integration strategy based on set theory, which includes three submodels: a macromodel, a mesomodel, and a micromodel. After that, the applicable conditions and derivation process of the three submodels are discussed in detail. In addition, in order to simulate and evaluate the multiresolution model, a "simple simulation scenario" of the North-South Elevated Expressway in Shanghai was established. The simulation results showed the following. (1) The volume-density relationships of the three submodels agree with detector data. (2) When traffic density is high, the macromodel has high precision, smaller error, and less dispersion in its results; compared with the macromodel, the simulation accuracies of the micromodel and mesomodel are lower and their errors are bigger. (3) The multiresolution model can simulate characteristics of traffic flow, capture traffic waves, and keep the consistency of traffic state transitions. Finally, the results showed that the novel multiresolution model achieves higher simulation accuracy and is feasible and effective in a real traffic simulation scenario.

  17. Sparse PDF Volumes for Consistent Multi-Resolution Volume Rendering

    KAUST Repository

    Sicat, Ronell Barrera

    2014-12-31

    This paper presents a new multi-resolution volume representation called sparse pdf volumes, which enables consistent multi-resolution volume rendering based on probability density functions (pdfs) of voxel neighborhoods. These pdfs are defined in the 4D domain jointly comprising the 3D volume and its 1D intensity range. Crucially, the computation of sparse pdf volumes exploits data coherence in 4D, resulting in a sparse representation with surprisingly low storage requirements. At run time, we dynamically apply transfer functions to the pdfs using simple and fast convolutions. Whereas standard low-pass filtering and down-sampling incur visible differences between resolution levels, the use of pdfs facilitates consistent results independent of the resolution level used. We describe the efficient out-of-core computation of large-scale sparse pdf volumes, using a novel iterative simplification procedure of a mixture of 4D Gaussians. Finally, our data structure is optimized to facilitate interactive multi-resolution volume rendering on GPUs.
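
    The core consistency argument (apply the transfer function as an expectation over a block's pdf rather than to an averaged voxel) can be sketched directly. Block size, bin count, and the transfer function below are illustrative assumptions:

```python
# Sketch: the core idea of pdf-based down-sampling: store a histogram (pdf)
# of each voxel block and apply the transfer function as an expectation over
# that pdf, instead of applying it to a single averaged value.
import numpy as np

rng = np.random.default_rng(2)
volume = rng.uniform(size=(32, 32, 32))
bins = np.linspace(0.0, 1.0, 17)                     # 16 intensity bins
centers = 0.5 * (bins[:-1] + bins[1:])
transfer = lambda v: np.clip(4.0 * (v - 0.5), 0.0, 1.0)  # a non-linear transfer fn

block = volume[:8, :8, :8]                           # one voxel neighborhood
pdf, _ = np.histogram(block, bins=bins)
pdf = pdf / pdf.sum()

consistent = np.sum(pdf * transfer(centers))         # E[T(v)] from the block's pdf
naive = transfer(block.mean())                       # T(E[v]) from one averaged voxel
print(consistent, naive)                             # the two generally differ
```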

  18. A multiresolution model of rhythmic expectancy

    NARCIS (Netherlands)

    Smith, L.M.; Honing, H.; Miyazaki, K.; Hiraga, Y.; Adachi, M.; Nakajima, Y.; Tsuzaki, M.

    2008-01-01

    We describe a computational model of rhythmic cognition that predicts expected onset times. A dynamic representation of musical rhythm, the multiresolution analysis using the continuous wavelet transform, is used. This representation decomposes the temporal structure of a musical rhythm into time

  19. Information Extraction of High-Resolution Remotely Sensed Image Based on Multiresolution Segmentation

    Directory of Open Access Journals (Sweden)

    Peng Shao

    2014-08-01

    The principle of multiresolution segmentation is presented in detail in this study, and the Canny algorithm is applied for edge detection of a remotely sensed image based on this principle. The target image was divided into regions based on object-oriented multiresolution segmentation and edge detection. Furthermore, an object hierarchy was created, and a series of features (water bodies, vegetation, roads, residential areas, bare land) and other information were extracted using spectral and geometrical features. The results indicate that edge detection has a positive effect on multiresolution segmentation, and the overall accuracy of information extraction reaches 94.6% by the confusion matrix.

  20. Morphological pyramids in multiresolution MIP rendering of large volume data : Survey and new results

    NARCIS (Netherlands)

    Roerdink, J.B.T.M.

    We survey and extend nonlinear signal decompositions based on morphological pyramids, and their application to multiresolution maximum intensity projection (MIP) volume rendering with progressive refinement and perfect reconstruction. The structure of the resulting multiresolution rendering

  1. Homogeneous hierarchies: A discrete analogue to the wavelet-based multiresolution approximation

    Energy Technology Data Exchange (ETDEWEB)

    Mirkin, B. [Rutgers Univ., Piscataway, NJ (United States)]

    1996-12-31

    A correspondence between discrete binary hierarchies and some orthonormal bases of the n-dimensional Euclidean space can be applied to such problems as clustering, ordering, identifying/testing in very large databases, or multiresolution image/signal processing. The latter issue is considered in the paper. The binary-hierarchy-based multiresolution theory is expected to lead to effective methods for data processing because it relaxes the regularity restrictions of the classical theory.

  2. Multiresolution persistent homology for excessively large biomolecular datasets

    Energy Technology Data Exchange (ETDEWEB)

    Xia, Kelin; Zhao, Zhixiong [Department of Mathematics, Michigan State University, East Lansing, Michigan 48824 (United States); Wei, Guo-Wei, E-mail: wei@math.msu.edu [Department of Mathematics, Michigan State University, East Lansing, Michigan 48824 (United States); Department of Electrical and Computer Engineering, Michigan State University, East Lansing, Michigan 48824 (United States); Department of Biochemistry and Molecular Biology, Michigan State University, East Lansing, Michigan 48824 (United States)]

    2015-10-07

    Although persistent homology has emerged as a promising tool for the topological simplification of complex data, it is computationally intractable for large datasets. We introduce multiresolution persistent homology to handle excessively large datasets. We match the resolution with the scale of interest so as to represent large scale datasets with appropriate resolution. We utilize the flexibility-rigidity index to assess the topological connectivity of the data set and define a rigidity density for the filtration analysis. By appropriately tuning the resolution of the rigidity density, we are able to focus the topological lens on the scale of interest. The proposed multiresolution topological analysis is validated by a hexagonal fractal image which has three distinct scales. We further demonstrate the proposed method for extracting topological fingerprints from DNA molecules. In particular, the topological persistence of a virus capsid with 273 780 atoms is successfully analyzed, which would otherwise be inaccessible to the normal point cloud method and unreliable by using coarse-grained multiscale persistent homology. The proposed method has also been successfully applied to protein domain classification, which is, to our knowledge, the first time that persistent homology has been used for practical protein domain analysis. The proposed multiresolution topological method has potential applications in arbitrary data sets, such as social networks, biological networks, and graphs.
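
    A minimal sketch of the rigidity-density field that the paper filters over, assuming a Gaussian kernel form of the flexibility-rigidity index; the kernel choice and resolution parameter eta are illustrative:

```python
# Sketch: a rigidity-density field from the flexibility-rigidity index (FRI),
# the quantity the paper builds its filtration on. Gaussian kernel is an
# assumption; tuning eta moves the "topological lens" across scales.
import numpy as np

def rigidity_density(points, grid, eta=1.0):
    """Sum of Gaussian correlations from each atom/point onto grid locations."""
    d2 = ((grid[:, None, :] - points[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / eta ** 2).sum(axis=1)

rng = np.random.default_rng(3)
points = rng.uniform(0, 10, size=(100, 3))          # point cloud (e.g., atoms)
g = np.linspace(0, 10, 20)
grid = np.stack(np.meshgrid(g, g, g, indexing="ij"), axis=-1).reshape(-1, 3)

for eta in (0.5, 2.0):                              # fine vs. coarse resolution
    rho = rigidity_density(points, grid, eta)
    print(eta, rho.min().round(3), rho.max().round(3))
```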

  3. EFFECTIVE MULTI-RESOLUTION TRANSFORM IDENTIFICATION FOR CHARACTERIZATION AND CLASSIFICATION OF TEXTURE GROUPS

    Directory of Open Access Journals (Sweden)

    S. Arivazhagan

    2011-11-01

    Texture classification is important in applications of computer image analysis for the characterization or classification of images based on local spatial variations of intensity or color. Texture can be defined as consisting of mutually related elements. This paper proposes an experimental approach for identifying a suitable multi-resolution transform for the characterization and classification of different texture groups based on statistical and co-occurrence features derived from multi-resolution transformed subbands. The statistical and co-occurrence feature sets are extracted for various multi-resolution transforms, such as the Discrete Wavelet Transform (DWT), Stationary Wavelet Transform (SWT), Double Density Wavelet Transform (DDWT), and Dual Tree Complex Wavelet Transform (DTCWT), and then the transform that maximizes the texture classification performance for the particular texture group is identified.

  4. Static multiresolution grids with inline hierarchy information for cosmic ray propagation

    Energy Technology Data Exchange (ETDEWEB)

    Müller, Gero, E-mail: gero.mueller@physik.rwth-aachen.de [III. Physikalisches Institut A, RWTH Aachen University, D-52056 Aachen (Germany)]

    2016-08-01

    For numerical simulations of cosmic-ray propagation fast access to static magnetic field data is required. We present a data structure for multiresolution vector grids which is optimized for fast access, low overhead and shared memory use. The hierarchy information is encoded into the grid itself, reducing the memory overhead. Benchmarks show that in certain scenarios the differences in deflections introduced by sampling the magnetic field model can be significantly reduced when using the multiresolution approach.

  5. Multiresolution with Hierarchical Modulations for Long Term Evolution of UMTS

    Directory of Open Access Journals (Sweden)

    Soares Armando

    2009-01-01

    In the Long Term Evolution (LTE) of UMTS, the interactive mobile TV scenario is expected to be a popular service. By using multiresolution with hierarchical modulations, this service is expected to be broadcast to larger groups, achieving significant reductions in transmission power or increases in average throughput. Interactivity in the uplink direction will not be affected by multiresolution in the downlink channels, since it will be supported by dedicated uplink channels. The presence of interactivity will allow a certain amount of link-quality feedback for groups or individuals. As a result, an optimization of the achieved throughput will be possible. In this paper, system-level simulations of multi-cellular networks considering broadcast/multicast transmissions using the OFDM/OFDMA-based LTE technology are presented to evaluate the capacity, in terms of number of TV channels with given bit rates or total spectral efficiency and coverage. Multiresolution with hierarchical modulations is evaluated for its achievable throughput gain compared to the single-resolution Multimedia Broadcast/Multicast Service (MBMS) standardised in Release 6.

  6. LOD map--A visual interface for navigating multiresolution volume visualization.

    Science.gov (United States)

    Wang, Chaoli; Shen, Han-Wei

    2006-01-01

    In multiresolution volume visualization, a visual representation of level-of-detail (LOD) quality is important for us to examine, compare, and validate different LOD selection algorithms. While traditional methods rely on ultimate images for quality measurement, we introduce the LOD map--an alternative representation of LOD quality and a visual interface for navigating multiresolution data exploration. Our measure for LOD quality is based on the formulation of entropy from information theory. The measure takes into account the distortion and contribution of multiresolution data blocks. A LOD map is generated through the mapping of key LOD ingredients to a treemap representation. The ordered treemap layout is used for relatively stable updates of the LOD map when the view or LOD changes. This visual interface not only indicates the quality of LODs in an intuitive way, but also provides immediate suggestions for possible LOD improvement through visually striking features. It also allows us to compare different views and perform rendering budget control. A set of interactive techniques is proposed to make the LOD adjustment a simple and easy task. We demonstrate the effectiveness and efficiency of our approach on large scientific and medical data sets.

  7. Multiresolution analysis of Bursa Malaysia KLCI time series

    Science.gov (United States)

    Ismail, Mohd Tahir; Dghais, Amel Abdoullah Ahmed

    2017-05-01

    In general, a time series is simply a sequence of numbers collected at regular intervals over a period. Financial time series data processing is concerned with the theory and practice of processing asset prices over time, such as currency, commodity, and stock market data. The primary aim of this study is to understand the fundamental characteristics of selected financial time series using both time and frequency domain analysis, after which prediction can be executed for in-sample forecasting. In this study, multiresolution analysis with the aid of the discrete wavelet transform (DWT) and the maximal overlap discrete wavelet transform (MODWT) is used to pinpoint special characteristics of the Bursa Malaysia KLCI (Kuala Lumpur Composite Index) daily closing prices and return values. In addition, further case study discussions include modeling the Bursa Malaysia KLCI using linear ARIMA with wavelets to address how the multiresolution approach improves fitting and forecasting results.

  8. A multi-resolution approach to heat kernels on discrete surfaces

    KAUST Repository

    Vaxman, Amir

    2010-07-26

    Studying the behavior of the heat diffusion process on a manifold is emerging as an important tool for analyzing the geometry of the manifold. Unfortunately, the high complexity of the computation of the heat kernel - the key to the diffusion process - limits this type of analysis to 3D models of modest resolution. We show how to use the unique properties of the heat kernel of a discrete two dimensional manifold to overcome these limitations. Combining a multi-resolution approach with a novel approximation method for the heat kernel at short times results in an efficient and robust algorithm for computing the heat kernels of detailed models. We show experimentally that our method can achieve good approximations in a fraction of the time required by traditional algorithms. Finally, we demonstrate how these heat kernels can be used to improve a diffusion-based feature extraction algorithm. © 2010 ACM.
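
    For scale, the brute-force computation that the paper improves on can be sketched with a graph Laplacian eigendecomposition; this works only for small meshes, which is exactly the limitation being addressed. The toy graph below stands in for a triangle mesh with cotangent weights:

```python
# Sketch: heat kernel on a small discrete mesh via the graph Laplacian,
# K_t = U exp(-t * Lambda) U^T. Full eigendecomposition only scales to modest
# model sizes, which is the bottleneck the multi-resolution scheme targets.
import numpy as np

# a tiny graph standing in for a mesh; real meshes would use cotangent weights
edges = [(0, 1), (1, 2), (0, 2), (2, 3), (1, 3), (3, 4), (2, 4)]
n = 5
W = np.zeros((n, n))
for i, j in edges:
    W[i, j] = W[j, i] = 1.0
L = np.diag(W.sum(axis=1)) - W                    # combinatorial Laplacian

lam, U = np.linalg.eigh(L)
def heat_kernel(t):
    return U @ np.diag(np.exp(-t * lam)) @ U.T    # K_t(x, y)

for t in (0.1, 1.0, 10.0):
    K = heat_kernel(t)
    print(t, round(K[0, 4], 4))                   # diffusion from vertex 0 to 4
```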

  9. Terascale Visualization: Multi-resolution Aspirin for Big-Data Headaches

    Science.gov (United States)

    Duchaineau, Mark

    2001-06-01

    Recent experience on the Accelerated Strategic Computing Initiative (ASCI) computers shows that computational physicists are successfully producing a prodigious collection of numbers on several thousand processors. But with this wealth of numbers comes an unprecedented difficulty in processing and moving them to provide useful insight and analysis. In this talk, a few simulations are highlighted where recent advancements in multiple-resolution mathematical representations and algorithms have provided some hope of seeing most of the physics of interest while keeping within the practical limits of the post-simulation storage and interactive data-exploration resources. A whole host of visualization research activities was spawned by the 1999 Gordon Bell Prize-winning computation of a shock-tube experiment showing Richtmyer-Meshkov turbulent instabilities. This includes efforts for the entire data pipeline from running simulation to interactive display: wavelet compression of field data, multi-resolution volume rendering and slice planes, out-of-core extraction and simplification of mixing-interface surfaces, shrink-wrapping to semi-regularize the surfaces, semi-structured surface wavelet compression, and view-dependent display-mesh optimization. More recently on the 12 TeraOps ASCI platform, initial results from a 5120-processor, billion-atom molecular dynamics simulation showed that 30-to-1 reductions in storage size can be achieved with no human-observable errors for the analysis required in simulations of supersonic crack propagation. This made it possible to store the 25 trillion bytes worth of simulation numbers in the available storage, which was under 1 trillion bytes. While multi-resolution methods and related systems are still in their infancy, for the largest-scale simulations there is often no other choice should the science require detailed exploration of the results.

  10. Adaptive multiresolution method for MAP reconstruction in electron tomography

    Energy Technology Data Exchange (ETDEWEB)

    Acar, Erman, E-mail: erman.acar@tut.fi [Department of Signal Processing, Tampere University of Technology, P.O. Box 553, FI-33101 Tampere (Finland); BioMediTech, Tampere University of Technology, Biokatu 10, 33520 Tampere (Finland); Peltonen, Sari; Ruotsalainen, Ulla [Department of Signal Processing, Tampere University of Technology, P.O. Box 553, FI-33101 Tampere (Finland); BioMediTech, Tampere University of Technology, Biokatu 10, 33520 Tampere (Finland)]

    2016-11-15

    3D image reconstruction with electron tomography is hampered by the severely limited range of projection angles and the low signal-to-noise ratio of the acquired projection images. Maximum a posteriori (MAP) reconstruction methods have been successful in compensating for the missing information and suppressing noise with their intrinsic regularization techniques. There are two major problems in MAP reconstruction methods: (1) selection of the regularization parameter that controls the balance between the data fidelity and the prior information, and (2) long computation time. One aim of this study is to provide an adaptive solution to the regularization parameter selection problem without additional knowledge about the imaging environment and the sample. The other aim is to realize the reconstruction using sequences of resolution levels to shorten the computation time. The reconstructions were analyzed in terms of accuracy and computational efficiency using a simulated biological phantom and publicly available experimental datasets of electron tomography. The numerical and visual evaluations of the experiments show that the adaptive multiresolution method can provide more accurate results than weighted back projection (WBP), the simultaneous iterative reconstruction technique (SIRT), and the sequential MAP expectation maximization (sMAPEM) method. The method is superior to sMAPEM also in terms of computation time and usability, since it can reconstruct 3D images significantly faster without requiring any parameter to be set by the user. - Highlights: • An adaptive multiresolution reconstruction method is introduced for electron tomography. • The method provides more accurate results than conventional reconstruction methods. • The missing wedge and noise problems can be compensated for efficiently by the method.

  11. Stain Deconvolution Using Statistical Analysis of Multi-Resolution Stain Colour Representation.

    Directory of Open Access Journals (Sweden)

    Najah Alsubaie

    Stain colour estimation is a prominent factor of the analysis pipeline in most histology image processing algorithms, and providing a reliable and efficient stain colour deconvolution approach is fundamental for a robust algorithm. In this paper, we propose a novel method for stain colour deconvolution of histology images. The approach statistically analyses the multi-resolution representation of the image to separate the independent observations from the correlated ones. We then estimate the stain mixing matrix using the filtered uncorrelated data. We conducted an extensive set of experiments comparing the proposed method to recent state-of-the-art methods, and demonstrate the robustness of the approach using three different datasets of scanned slides, prepared in different labs using different scanners.

  12. A multiresolution remeshed Vortex-In-Cell algorithm using patches

    DEFF Research Database (Denmark)

    Rasmussen, Johannes Tophøj; Cottet, Georges-Henri; Walther, Jens Honore

    2011-01-01

    We present a novel multiresolution Vortex-In-Cell algorithm using patches of varying resolution. The Poisson equation relating the fluid vorticity and velocity is solved using Fast Fourier Transforms subject to free space boundary conditions. Solid boundaries are implemented using the semi...

  13. W-transform method for feature-oriented multiresolution image retrieval

    Energy Technology Data Exchange (ETDEWEB)

    Kwong, M.K.; Lin, B. [Argonne National Lab., IL (United States). Mathematics and Computer Science Div.]

    1995-07-01

    Image database management is important in the development of multimedia technology, since an enormous number of digital images is likely to be generated within the next few decades as computers, television, VCRs, cable, telephone, and various imaging devices become integrated. Effective image indexing and retrieval systems are urgently needed so that images can be easily organized, searched, transmitted, and presented. Here, the authors present a local-feature-oriented image indexing and retrieval method based on Kwong and Tang's W-transform. Multiresolution histogram comparison is an effective method for content-based image indexing and retrieval. However, most recent approaches perform multiresolution analysis on whole images and do not exploit the local features present in the images. Since the W-transform is characterized by its ability to handle images of arbitrary size, with no periodicity assumptions, it provides a natural tool for analyzing local image features and building indexing systems based on such features. In this approach, the histograms of the local features of images are used in the indexing system. The system not only can retrieve images that are similar or identical to the query images but also can retrieve images that contain features specified in the query images, even if the retrieved images as a whole might be very different from the query images. The local-feature-oriented method also provides a speed advantage over the global multiresolution histogram comparison method. The feature-oriented approach is expected to be applicable in managing large-scale image systems such as video databases and medical image databases.
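
    A minimal sketch of local-feature-oriented histogram indexing, with raw intensities standing in for the W-transform subband coefficients (which are not reproduced here); block size and bin count are illustrative:

```python
# Sketch: per-block histograms rather than one global histogram, compared with
# histogram intersection. The W-transform front end is omitted; raw intensities
# stand in for its subband coefficients.
import numpy as np

def block_histograms(img, block=16, bins=8):
    h, w = img.shape
    feats = []
    for i in range(0, h - block + 1, block):
        for j in range(0, w - block + 1, block):
            hist, _ = np.histogram(img[i:i + block, j:j + block],
                                   bins=bins, range=(0.0, 1.0))
            feats.append(hist / hist.sum())
    return np.array(feats)

def best_local_match(query_feats, db_feats):
    # histogram intersection, maximized over database blocks per query block
    inter = np.minimum(query_feats[:, None, :], db_feats[None, :, :]).sum(axis=2)
    return inter.max(axis=1).mean()   # high if query features appear somewhere

rng = np.random.default_rng(4)
db_img = rng.uniform(size=(64, 64))
query = db_img[16:48, 16:48]          # a local region of the database image
print(best_local_match(block_histograms(query), block_histograms(db_img)))
```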

  14. A multi-resolution envelope-power based model for speech intelligibility

    DEFF Research Database (Denmark)

    Jørgensen, Søren; Ewert, Stephan D.; Dau, Torsten

    2013-01-01

    The speech-based envelope power spectrum model (sEPSM) presented by Jørgensen and Dau [(2011). J. Acoust. Soc. Am. 130, 1475-1487] estimates the envelope power signal-to-noise ratio (SNRenv) after modulation-frequency selective processing. Changes in this metric were shown to account well for changes in speech intelligibility. However, the model is restricted to conditions with stationary interferers, due to its long-term integration of the envelope power, and cannot account for the increased intelligibility typically obtained with fluctuating maskers. Here, a multi-resolution version of the sEPSM is presented, where the SNRenv is estimated in temporal segments with a modulation-filter dependent duration. The multi-resolution sEPSM is demonstrated to account for intelligibility obtained in conditions with stationary and fluctuating interferers, and with noisy speech distorted by reverberation or spectral subtraction. The results support the hypothesis that the SNRenv...

  15. Multiresolution and Explicit Methods for Vector Field Analysis and Visualization

    Science.gov (United States)

    Nielson, Gregory M.

    1997-01-01

    This is a request for a second renewal (third year of funding) of a research project on the topic of multiresolution and explicit methods for vector field analysis and visualization. In this report, we describe the progress made on this research project during the second year and give a statement of the planned research for the third year. There are two aspects to this research project. The first is concerned with the development of techniques for computing tangent curves for use in visualizing flow fields. The second aspect of the research project is concerned with the development of multiresolution methods for curvilinear grids and their use as tools for visualization, analysis and archiving of flow data. We report first on our work on the development of numerical methods for tangent curve computation.

  16. Combining nonlinear multiresolution system and vector quantization for still image compression

    Energy Technology Data Exchange (ETDEWEB)

    Wong, Y.

    1993-12-17

    It is popular to use multiresolution systems for image coding and compression. However, general-purpose techniques such as filter banks and wavelets are linear. While these systems are rigorous, nonlinear features in the signals cannot be utilized in a single entity for compression. Linear filters are known to blur edges; thus the low-resolution images are typically blurred, carrying little information. We propose and demonstrate that edge-preserving filters such as median filters can be used to generate a multiresolution system using the Laplacian pyramid. The signals in the detail images are small and localized to the edge areas. Principal component vector quantization (PCVQ) is used to encode the detail images. PCVQ is a tree-structured VQ which allows fast codebook design and encoding/decoding. In encoding, the quantization error at each level is fed back through the pyramid to the previous level so that ultimately all the error is confined to the first level. With simple coding methods, we demonstrate that images with a PSNR of 33 dB can be obtained at 0.66 bpp without the use of entropy coding. When the rate is decreased to 0.25 bpp, a PSNR of 30 dB can still be achieved. Combined with an earlier result, our work demonstrates that nonlinear filters can be used for multiresolution systems and image coding.
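
    The median-filter pyramid idea can be sketched briefly, assuming scipy is available; this is a simplified stand-in for the paper's pipeline and omits the PCVQ coding stage:

```python
# Sketch: a nonlinear (median-filter) Laplacian pyramid. Edge-preserving median
# filtering replaces the usual linear low-pass filter before down-sampling;
# details are residuals against the up-sampled coarse level, so the pyramid
# reconstructs exactly (detail + upsampled coarse = previous level).
import numpy as np
from scipy.ndimage import median_filter

def median_pyramid(img, levels=3):
    approx, details = np.asarray(img, dtype=float), []
    for _ in range(levels):
        coarse = median_filter(approx, size=3)[::2, ::2]   # nonlinear + decimate
        up = np.repeat(np.repeat(coarse, 2, axis=0), 2, axis=1)
        details.append(approx - up[:approx.shape[0], :approx.shape[1]])
        approx = coarse
    return approx, details

rng = np.random.default_rng(5)
img = np.zeros((64, 64))
img[:, 32:] = 1.0                       # a sharp edge the pyramid should preserve
img += 0.05 * rng.standard_normal(img.shape)
base, details = median_pyramid(img)
print([float(np.abs(d).mean().round(4)) for d in details])  # energy stays at the edge
```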

  17. Evolved Multiresolution Transforms for Optimized Image Compression and Reconstruction Under Quantization

    National Research Council Canada - National Science Library

    Moore, Frank

    2005-01-01

    ...) First, this research demonstrates that a GA can evolve a single set of coefficients describing a single matched forward and inverse transform pair that can be used at each level of a multiresolution...

  18. High Order Wavelet-Based Multiresolution Technology for Airframe Noise Prediction, Phase II

    Data.gov (United States)

    National Aeronautics and Space Administration — We propose to develop a novel, high-accuracy, high-fidelity, multiresolution (MRES), wavelet-based framework for efficient prediction of airframe noise sources and...

  19. Multi-Resolution Multimedia QoE Models for IPTV Applications

    Directory of Open Access Journals (Sweden)

    Prasad Calyam

    2012-01-01

    Internet television (IPTV) is rapidly gaining popularity and is being widely deployed in content delivery networks on the Internet. In order to proactively deliver optimum user quality of experience (QoE) for IPTV, service providers need to identify network bottlenecks in real time. In this paper, we develop psycho-acoustic-visual models that can predict user QoE of multimedia applications in real time based on online network status measurements. Our models are neural network based and cater to multi-resolution IPTV applications that include QCIF, QVGA, SD, and HD resolutions encoded using popular audio and video codec combinations. On the network side, our models account for jitter and loss levels, as well as router queuing disciplines: packet-ordered and time-ordered FIFO. We evaluate the performance of our multi-resolution multimedia QoE models in terms of prediction characteristics, accuracy, speed, and consistency. Our evaluation results demonstrate that the models are pertinent for real-time QoE monitoring and resource adaptation in IPTV content delivery networks.

  20. Identifying Spatial Units of Human Occupation in the Brazilian Amazon Using Landsat and CBERS Multi-Resolution Imagery

    OpenAIRE

    Dal’Asta, Ana Paula; Brigatti, Newton; Amaral, Silvana; Escada, Maria Isabel Sobral; Monteiro, Antonio Miguel Vieira

    2012-01-01

    Every spatial unit of human occupation is part of a network structuring an extensive process of urbanization in the Amazon territory. Multi-resolution remote sensing data were used to identify and map human presence and activities in the Sustainable Forest District of Cuiabá-Santarém highway (BR-163), west of Pará, Brazil. The limits of spatial units of human occupation were mapped based on digital classification of Landsat-TM5 (Thematic Mapper 5) image (30m spatial resolution). High-spatial-...

  1. Layout Optimization of Structures with Finite-size Features using Multiresolution Analysis

    DEFF Research Database (Denmark)

    Chellappa, S.; Diaz, A. R.; Bendsøe, Martin P.

    2004-01-01

    A scheme for layout optimization in structures with multiple finite-sized heterogeneities is presented. Multiresolution analysis is used to compute reduced operators (stiffness matrices) representing the elastic behavior of material distributions with heterogeneities of sizes that are comparable...

  2. A Quantitative Analysis of an EEG Epileptic Record Based on Multiresolution Wavelet Coefficients

    Directory of Open Access Journals (Sweden)

    Mariel Rosenblatt

    2014-11-01

    The characterization of the dynamics associated with the electroencephalogram (EEG) signal, combining an orthogonal discrete wavelet transform analysis with quantifiers originating from information theory, is reviewed. In addition, an extension of this methodology based on multiresolution quantities, called wavelet leaders, is presented. In particular, the temporal evolution of Shannon entropy and the statistical complexity evaluated with different sets of multiresolution wavelet coefficients are considered. Both methodologies are applied to the quantitative EEG time-series analysis of a tonic-clonic epileptic seizure, and comparative results are presented. In particular, even when both methods describe the dynamical changes of the EEG time series, the one based on wavelet leaders presents better time resolution.

  3. Multiresolution signal decomposition transforms, subbands, and wavelets

    CERN Document Server

    Akansu, Ali N

    1992-01-01

    This book provides an in-depth, integrated, and up-to-date exposition of the topic of signal decomposition techniques. Application areas of these techniques include speech and image processing, machine vision, information engineering, High-Definition Television, and telecommunications. The book will serve as the major reference for those entering the field, instructors teaching some or all of the topics in an advanced graduate course, and researchers needing to consult an authoritative source. It is the first book to give a unified and coherent exposition of multiresolution signal decomposition...

  4. Efficient Human Action and Gait Analysis Using Multiresolution Motion Energy Histogram

    Directory of Open Access Journals (Sweden)

    Kuo-Chin Fan

    2010-01-01

    The Average Motion Energy (AME) image is a good way to describe human motions. However, it faces a computation efficiency problem as the number of database templates increases. In this paper, we propose a histogram-based approach to improve computation efficiency. We convert the human action/gait recognition problem into a histogram matching problem. In order to speed up the recognition process, we adopt a multiresolution structure on the Motion Energy Histogram (MEH). To utilize the multiresolution structure more efficiently, we propose an automated uneven partitioning method, achieved by utilizing the quadtree decomposition results of the MEH. In that case, the computation time depends only on the number of partitioned histogram bins, which is much smaller than in the AME method. Two applications, action recognition and gait classification, are conducted in the experiments to demonstrate the feasibility and validity of the proposed approach.

  5. Large-Scale Multi-Resolution Representations for Accurate Interactive Image and Volume Operations

    KAUST Repository

    Sicat, Ronell Barrera

    2015-01-01

    approach is to employ output-sensitive operations on multi-resolution data representations. Output-sensitive operations facilitate interactive applications since their required computations are proportional only to the size of the data that is visible, i

  6. A multi-resolution HEALPix data structure for spherically mapped point data

    Directory of Open Access Journals (Sweden)

    Robert W. Youngren

    2017-06-01

    Data describing entities with locations that are points on a sphere are described as spherically mapped. Several data structures designed for spherically mapped data have been developed. One of them, known as Hierarchical Equal Area iso-Latitude Pixelization (HEALPix), partitions the sphere into twelve diamond-shaped equal-area base cells and then recursively subdivides each cell into four diamond-shaped subcells, continuing to the desired level of resolution. Twelve quadtrees, one associated with each base cell, store the data records associated with that cell and its subcells. HEALPix has been used successfully for numerous applications, notably including cosmic microwave background data analysis. However, for applications involving sparse point data, HEALPix has possible drawbacks, including inefficient memory utilization, overwriting of proximate points, and return of spurious points for certain queries. A multi-resolution variant of HEALPix specifically optimized for sparse point data was developed. The new data structure allows different areas of the sphere to be subdivided at different levels of resolution. It combines HEALPix's positive features with the advantages of multi-resolution, including reduced memory requirements and improved query performance. An implementation of the new Multi-Resolution HEALPix (MRH) data structure was tested using spherically mapped data from four different scientific applications (warhead fragmentation trajectories, weather station locations, galaxy locations, and synthetic locations). Four types of range queries were applied to each data structure for each dataset. Compared to HEALPix, MRH used two to four orders of magnitude less memory for the same data, and on average its queries executed 72% faster.
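
    A minimal sketch of the multi-resolution quadtree idea, with a unit square standing in for a HEALPix base cell; the (level, index) keying with children at index*4+k mirrors the hierarchy-in-the-key idea, but the spherical geometry is omitted:

```python
# Sketch: an adaptive quadtree in the spirit of MRH: different regions are
# subdivided to different depths, and cells are keyed by (level, index) with
# children at index*4+k, so the hierarchy is encoded in the key itself.
import random

class MRQuadtree:
    """Adaptive quadtree over the unit square, keyed by (level, index)."""
    def __init__(self, capacity=4, max_level=12):
        self.leaves = {(0, 0): []}             # (level, index) -> point records
        self.capacity, self.max_level = capacity, max_level

    def insert(self, x, y):
        level, index, x0, y0, size = 0, 0, 0.0, 0.0, 1.0
        while (level, index) not in self.leaves:      # descend to the leaf cell
            size /= 2
            qx, qy = int(x >= x0 + size), int(y >= y0 + size)
            x0, y0 = x0 + qx * size, y0 + qy * size
            level, index = level + 1, index * 4 + 2 * qy + qx
        bucket = self.leaves[(level, index)]
        bucket.append((x, y))
        if len(bucket) > self.capacity and level < self.max_level:
            del self.leaves[(level, index)]           # split into four children
            for k in range(4):
                self.leaves[(level + 1, index * 4 + k)] = []
            for px, py in bucket:                     # redistribute the points
                self.insert(px, py)

random.seed(6)
tree = MRQuadtree()
for _ in range(200):                 # points clustered toward the left edge
    tree.insert(random.random() ** 2, random.random())
depths = [level for level, _ in tree.leaves]
print(min(depths), max(depths))      # shallow and deep leaf cells coexist
```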

  7. Adaptive multiresolution Hermite-Binomial filters for image edge and texture analysis

    NARCIS (Netherlands)

    Gu, Y.H.; Katsaggelos, A.K.

    1994-01-01

    A new multiresolution image analysis approach using adaptive Hermite-Binomial filters is presented in this paper. According to the local image structural and textural properties, the analysis filter kernels are made adaptive both in their scales and orders. Applications of such an adaptive filtering

  8. MR-CDF: Managing multi-resolution scientific data

    Science.gov (United States)

    Salem, Kenneth

    1993-01-01

    MR-CDF is a system for managing multi-resolution scientific data sets. It is an extension of the popular CDF (Common Data Format) system. MR-CDF provides a simple functional interface to client programs for storage and retrieval of data. Data is stored so that low resolution versions of the data can be provided quickly. Higher resolutions are also available, but not as quickly. By managing data with MR-CDF, an application can be relieved of the low-level details of data management, and can easily trade data resolution for improved access time.
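
    A minimal sketch of the access pattern described here; the class and method names are invented for illustration and do not reflect the real MR-CDF API:

```python
# Sketch: store a 1D variable at several resolutions and serve the cheapest
# version that meets a caller's size budget, trading resolution for access time.
import numpy as np

class MultiResVariable:
    def __init__(self, data, levels=4):
        self.versions = [np.asarray(data, dtype=float)]
        for _ in range(levels - 1):                  # halve resolution each level
            prev = self.versions[-1]
            self.versions.append(0.5 * (prev[0::2] + prev[1::2]))

    def read(self, max_points):
        # finest version first; fall back to the coarsest if nothing fits
        for v in self.versions:
            if v.size <= max_points:
                return v
        return self.versions[-1]

var = MultiResVariable(np.sin(np.linspace(0, 10, 1024)))
print([var.read(n).size for n in (2000, 300, 100)])   # [1024, 256, 128]
```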

  9. Multiresolution Computation of Conformal Structures of Surfaces

    Directory of Open Access Journals (Sweden)

    Xianfeng Gu

    2003-10-01

    An efficient multiresolution method to compute global conformal structures of nonzero-genus triangle meshes is introduced. The homology and cohomology groups of meshes are computed explicitly, then a basis of harmonic one-forms and a basis of holomorphic one-forms are constructed. A progressive mesh is generated to represent the original surface at different resolutions. The conformal structure is computed for the coarse level first, then used as the estimate for that of the finer level; using the conjugate gradient method, it is refined to the conformal structure of the finer level.

  10. An efficient multi-resolution GA approach to dental image alignment

    Science.gov (United States)

    Nassar, Diaa Eldin; Ogirala, Mythili; Adjeroh, Donald; Ammar, Hany

    2006-02-01

    Automating the process of postmortem identification of individuals using dental records is receiving increased attention in forensic science, especially with the large volume of victims encountered in mass disasters. Dental radiograph alignment is a key step required for automating the dental identification process. In this paper, we address the problem of dental radiograph alignment using a Multi-Resolution Genetic Algorithm (MR-GA) approach. We use location and orientation information of edge points as features; we assume that affine transformations suffice to restore geometric discrepancies between two images of a tooth; we efficiently search the 6D space of affine parameters using a GA progressively across multi-resolution image versions; and we use a Hausdorff distance measure to compute the similarity between a reference tooth and a query tooth subject to a possible alignment transform. Testing results based on 52 teeth-pair images suggest that our algorithm converges to reasonable solutions in more than 85% of the test cases, with most of the error in the remaining cases due to excessive misalignments.

  11. Adaptive multi-resolution 3D Hartree-Fock-Bogoliubov solver for nuclear structure

    Science.gov (United States)

    Pei, J. C.; Fann, G. I.; Harrison, R. J.; Nazarewicz, W.; Shi, Yue; Thornton, S.

    2014-08-01

    Background: Complex many-body systems, such as triaxial and reflection-asymmetric nuclei, weakly bound halo states, cluster configurations, nuclear fragments produced in heavy-ion fusion reactions, cold Fermi gases, and pasta phases in neutron star crust, are all characterized by large sizes and complex topologies in which many geometrical symmetries characteristic of ground-state configurations are broken. A tool of choice to study such complex forms of matter is an adaptive multi-resolution wavelet analysis. This method has generated much excitement since it provides a common framework linking many diversified methodologies across different fields, including signal processing, data compression, harmonic analysis and operator theory, fractals, and quantum field theory. Purpose: To describe complex superfluid many-fermion systems, we introduce an adaptive pseudospectral method for solving self-consistent equations of nuclear density functional theory in three dimensions, without symmetry restrictions. Methods: The numerical method is based on the multi-resolution and computational harmonic analysis techniques with a multi-wavelet basis. The application of state-of-the-art parallel programming techniques includes sophisticated object-oriented templates which parse the high-level code into distributed parallel tasks with a multi-thread task queue scheduler for each multi-core node. The internode communications are asynchronous. The algorithm is variational and is capable of solving coupled complex-geometric systems of equations adaptively, with functional and boundary constraints, in a finite spatial domain of very large size, limited by existing parallel computer memory. For smooth functions, user-defined finite precision is guaranteed. Results: The new adaptive multi-resolution Hartree-Fock-Bogoliubov (HFB) solver madness-hfb is benchmarked against a two-dimensional coordinate-space solver hfb-ax that is based on the B-spline technique and a three-dimensional solver

  12. Multisensor multiresolution data fusion for improvement in classification

    Science.gov (United States)

    Rubeena, V.; Tiwari, K. C.

    2016-04-01

    The rapid advancements in technology have facilitated the easy availability of multisensor and multiresolution remote sensing data. Multisensor, multiresolution data contain complementary information, and fusion of such data may yield application-dependent information that would otherwise remain trapped within the individual datasets. The present work aims at improving classification by fusing features of coarse-resolution hyperspectral (1 m) LWIR and fine-resolution (20 cm) RGB data. The classification map comprises eight classes: Road, Trees, Red Roof, Grey Roof, Concrete Roof, Vegetation, Bare Soil, and Unclassified. The processing methodology for the hyperspectral LWIR data comprises dimensionality reduction, resampling by interpolation to register the two images at the same spatial resolution, and extraction of spatial features to improve classification accuracy. For the fine-resolution RGB data, a vegetation index is computed for classifying the vegetation class, and a morphological building index is calculated for buildings. To extract textural features, occurrence and co-occurrence statistics are considered, and the features are extracted from all three bands of the RGB data. After feature extraction, a Support Vector Machine (SVM) is used for training and classification. To increase the classification accuracy, post-processing steps are applied: spurious noise such as salt-and-pepper noise is removed, followed by majority-vote filtering within objects for better object classification.
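
    A minimal sketch of the texture-feature and SVM stage described above, using scikit-image's gray-level co-occurrence functions (named graycomatrix/graycoprops in recent versions) and scikit-learn. The patch size, quantization levels, and the synthetic data are assumptions, not the study's settings.

    ```python
    import numpy as np
    from skimage.feature import graycomatrix, graycoprops
    from sklearn.model_selection import train_test_split
    from sklearn.svm import SVC

    def glcm_features(band, levels=32):
        """Occurrence/co-occurrence texture statistics for one image band."""
        q = (band / (band.max() + 1e-9) * (levels - 1)).astype(np.uint8)  # quantize gray levels
        glcm = graycomatrix(q, distances=[1], angles=[0, np.pi / 2],
                            levels=levels, symmetric=True, normed=True)
        return np.hstack([graycoprops(glcm, p).ravel()
                          for p in ('contrast', 'homogeneity', 'energy', 'correlation')])

    # Stand-in data: 200 RGB patches with one of the eight class labels each.
    rng = np.random.default_rng(0)
    patches = rng.integers(0, 255, size=(200, 32, 32, 3)).astype(float)
    labels = rng.integers(0, 8, size=200)

    # Texture features from all three RGB bands, as the abstract describes.
    X = np.array([np.hstack([glcm_features(p[..., b]) for b in range(3)]) for p in patches])
    X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.3, random_state=0)
    clf = SVC(kernel='rbf').fit(X_tr, y_tr)
    print('held-out accuracy:', clf.score(X_te, y_te))
    ```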

  13. Deep learning for classification of islanding and grid disturbance based on multi-resolution singular spectrum entropy

    Science.gov (United States)

    Li, Tie; He, Xiaoyang; Tang, Junci; Zeng, Hui; Zhou, Chunying; Zhang, Nan; Liu, Hui; Lu, Zhuoxin; Kong, Xiangrui; Yan, Zheng

    2018-02-01

    Because the signature of islanding is easily confounded by grid disturbances, an island detection device may misjudge events, taking photovoltaic systems out of service unnecessarily. The detection device must therefore be able to distinguish islanding from grid disturbance. In this paper, the concept of deep learning is introduced into the classification of islanding and grid disturbance for the first time. A novel deep learning framework is proposed to detect and classify islanding or grid disturbance. The framework is a hybrid of wavelet transformation, multi-resolution singular spectrum entropy, and a deep learning architecture. As a signal processing step after the wavelet transformation, multi-resolution singular spectrum entropy combines multi-resolution analysis and spectrum analysis with entropy as the output, from which the intrinsically different features of islanding and grid disturbance can be extracted. With the features extracted, deep learning is utilized to classify islanding and grid disturbance. Simulation results indicate that the method achieves its goal with high accuracy, so photovoltaic systems mistakenly withdrawing from the power grid can be avoided.
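
    The multi-resolution singular spectrum entropy feature can be sketched as follows: decompose the signal with a discrete wavelet transform, embed each band in a Hankel (trajectory) matrix, and take the Shannon entropy of its normalized singular values. The wavelet choice, embedding dimension, and the toy signals below are assumptions; the resulting feature vectors are what the deep architecture would learn to separate.

    ```python
    import numpy as np
    import pywt

    def singular_spectrum_entropy(x, embed=20):
        """Shannon entropy of the normalized singular values of the trajectory matrix."""
        H = np.lib.stride_tricks.sliding_window_view(x, embed)   # Hankel embedding
        s = np.linalg.svd(H, compute_uv=False)
        p = s / s.sum()
        p = p[p > 0]
        return float(-np.sum(p * np.log(p)))

    def mrsse_features(signal, wavelet='db4', levels=5):
        """One entropy value per wavelet band: the multi-resolution SSE feature vector."""
        coeffs = pywt.wavedec(signal, wavelet, level=levels)
        return np.array([singular_spectrum_entropy(c) for c in coeffs])

    # Hypothetical point-of-common-coupling voltage snippets (not simulation data).
    fs = 3200
    t = np.arange(0, 0.5, 1 / fs)
    islanding = np.sin(2 * np.pi * 49.3 * t) * np.exp(-2 * t)       # frequency drift and decay
    disturbance = np.sin(2 * np.pi * 50 * t) + 0.3 * np.sin(2 * np.pi * 250 * t)  # harmonic event
    print(mrsse_features(islanding))
    print(mrsse_features(disturbance))
    ```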

  14. Multi-resolution analysis for region of interest extraction in thermographic nondestructive evaluation

    Science.gov (United States)

    Ortiz-Jaramillo, B.; Fandiño Toro, H. A.; Benitez-Restrepo, H. D.; Orjuela-Vargas, S. A.; Castellanos-Domínguez, G.; Philips, W.

    2012-03-01

    Infrared Non-Destructive Testing (INDT) is known as an effective and rapid method for nondestructive inspection. It can detect a broad range of near-surface structural flaws in metallic and composite components. Those flaws are modeled as smooth contours centered at peaks of stored thermal energy, termed Regions of Interest (ROIs), and dedicated methodologies must detect their presence. In this paper, we present a methodology for ROI extraction in INDT tasks based on multi-resolution analysis, which is robust to low ROI contrast and to non-uniform heating; non-uniform heating affects low spatial frequencies and hinders the detection of relevant points in the image. The methodology combines local correlation, Gaussian scale analysis, and local edge detection. Local correlation between the image and a Gaussian window provides interest points related to ROIs; a Gaussian window is used because the thermal behavior is well modeled by smooth Gaussian contours. The Gaussian scale is then used to analyze image details in a multi-resolution fashion, which mitigates low contrast and non-uniform heating and avoids having to select the Gaussian window size. Finally, local edge detection provides a good estimate of the ROI boundaries. The resulting methodology performs as well as or better than other dedicated algorithms proposed in the state of the art.

  15. Multiresolution Network Temporal and Spatial Scheduling Model of Scenic Spot

    Directory of Open Access Journals (Sweden)

    Peng Ge

    2013-01-01

    Full Text Available Tourism is one of the pillar industries of the world economy. Low-carbon tourism will be the mainstream direction of scenic spots' development, and the path of low-carbon tourism development is to develop the economy and protect the environment simultaneously. However, as tourist numbers increase, the loads on scenic spots run out of control, and instantaneous overload at some spots creates the false impression that the whole scenic spot is at full capacity. Therefore, realizing real-time scheduling becomes the primary purpose of scenic spot management. This paper divides the tourism distribution system into several logically related subsystems and constructs a temporal and spatial multiresolution network scheduling model according to the regularity, in time and space, of scenic spots' overload phenomena. It also defines a dynamic distribution probability and an equivalent dynamic demand to realize real-time prediction. A gravitational function between fields is defined and taken as the utility of the schedule; after solving the transportation model at each resolution, hierarchical balance between the demand and the capacity of the system is achieved. The last part of the paper analyzes the time complexity of constructing a multiresolution distribution system.

  16. Multi-resolution simulation of focused ultrasound propagation through ovine skull from a single-element transducer

    Science.gov (United States)

    Yoon, Kyungho; Lee, Wonhye; Croce, Phillip; Cammalleri, Amanda; Yoo, Seung-Schik

    2018-05-01

    Transcranial focused ultrasound (tFUS) is emerging as a non-invasive brain stimulation modality. Complicated interactions between acoustic pressure waves and osseous tissue introduce many challenges in the accurate targeting of an acoustic focus through the cranium. Image-guidance accompanied by a numerical simulation is desired to predict the intracranial acoustic propagation through the skull; however, such simulations typically demand heavy computation, which warrants an expedited processing method to provide on-site feedback for the user in guiding the acoustic focus to a particular brain region. In this paper, we present a multi-resolution simulation method based on the finite-difference time-domain formulation to model the transcranial propagation of acoustic waves from a single-element transducer (250 kHz). The multi-resolution approach improved computational efficiency by providing flexibility in adjusting the spatial resolution. The simulation was also accelerated by utilizing parallelized computation through the graphics processing unit. To evaluate the accuracy of the method, we measured the actual acoustic fields through ex vivo sheep skulls with different sonication incident angles. The measured acoustic fields were compared to the simulation results in terms of focal location, dimensions, and pressure levels. The computational efficiency of the presented method was also assessed by comparing simulation speeds at various combinations of resolution grid settings. The multi-resolution grids consisting of 0.5 and 1.0 mm resolutions gave acceptable accuracy (under 3 mm in terms of focal position and dimension, less than 5% difference in peak pressure ratio) with a speed compatible with semi real-time user feedback (within 30 s). The proposed multi-resolution approach may serve as a novel tool for simulation-based guidance for tFUS applications.
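
    For orientation, a minimal single-grid 1D analogue of the finite-difference time-domain update is sketched below; the paper's solver is 3D, multi-resolution, and GPU-accelerated, and the material constants here are generic water-like values, not the study's skull model.

    ```python
    import numpy as np

    # Minimal 1D acoustic FDTD: pressure p on cell centers, velocity v on faces.
    c, rho = 1500.0, 1000.0        # water-like sound speed (m/s) and density (kg/m^3)
    dx = 0.5e-3                    # 0.5 mm grid, the finer of the two resolutions tested
    dt = 0.9 * dx / c              # CFL-limited time step
    nx, nt = 2000, 4000
    f0 = 250e3                     # single-element transducer frequency from the abstract

    p = np.zeros(nx)
    v = np.zeros(nx + 1)
    for n in range(nt):
        # leapfrog: velocity from the pressure gradient, pressure from velocity divergence
        v[1:-1] -= (dt / (rho * dx)) * (p[1:] - p[:-1])
        p -= (rho * c ** 2 * dt / dx) * (v[1:] - v[:-1])
        p[0] += np.sin(2 * np.pi * f0 * n * dt)   # source injected at the left boundary
    ```

    In the multi-resolution version, a fine (0.5 mm) subgrid around the skull and focus would be coupled to a coarse (1.0 mm) grid elsewhere, exchanging boundary values each step; that coupling and the GPU parallelization are the substance of the paper and are omitted here.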

  17. Accuracy assessment of tree crown detection using local maxima and multi-resolution segmentation

    International Nuclear Information System (INIS)

    Khalid, N; Hamid, J R A; Latif, Z A

    2014-01-01

    Diversity of trees forms an important component of forest ecosystems and needs proper inventories to assist forest personnel in their daily activities. However, tree parameter measurements are often constrained by physical inaccessibility of site locations, high costs, and time. With the advancement of remote sensing technology, such as the provision of imagery of higher spatial and spectral resolution, a number of developed algorithms fulfil the need for accurate tree inventory information in a cost-effective and timely manner over large forest areas. This study generates a tree distribution map of the Ampang Forest Reserve using the Local Maxima and Multi-Resolution image segmentation algorithms. The utilization of recent WorldView-2 imagery with Local Maxima and Multi-Resolution image segmentation proves capable of detecting and delineating tree crowns in their accurate standing positions.
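
    The local-maxima step reduces to a moving-window filter; the sketch below uses scipy and a synthetic height surface, with the window size and height threshold as placeholder values. The multi-resolution segmentation step would then grow a crown region around each detected maximum.

    ```python
    import numpy as np
    from scipy import ndimage

    def detect_crowns(surface, window=9, min_height=15.0):
        """Treetop candidates: pixels that equal the maximum of their local window
        and stand above a background threshold."""
        is_peak = ndimage.maximum_filter(surface, size=window) == surface
        rows, cols = np.nonzero(is_peak & (surface > min_height))
        return list(zip(rows, cols))

    # Smooth synthetic surface whose bumps stand in for crowns in the imagery.
    gen = np.random.default_rng(3)
    surface = ndimage.gaussian_filter(gen.random((200, 200)), sigma=6) * 30
    print(len(detect_crowns(surface)))
    ```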

  18. Long-range force and moment calculations in multiresolution simulations of molecular systems

    International Nuclear Information System (INIS)

    Poursina, Mohammad; Anderson, Kurt S.

    2012-01-01

    Multiresolution simulations of molecular systems such as DNAs, RNAs, and proteins are implemented using models with different resolutions ranging from a fully atomistic model to coarse-grained molecules, or even to continuum level system descriptions. For such simulations, pairwise force calculation is a serious bottleneck which can impose a prohibitive amount of computational load on the simulation if not performed wisely. Herein, we approximate the resultant force due to long-range particle-body and body-body interactions applicable to multiresolution simulations. Since the resultant force does not necessarily act through the center of mass of the body, it creates a moment about the mass center. Although this potentially important torque is neglected in many coarse-grained models which only use particle dynamics to formulate the dynamics of the system, it should be calculated and used when coarse-grained simulations are performed in a multibody scheme. Herein, the approximation for this moment due to far-field particle-body and body-body interactions is also provided.
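
    The quantities in question reduce to two sums over the interacting particles; a minimal sketch with hypothetical positions, forces, and masses follows (the paper's contribution is the far-field approximation of these sums, which is omitted here).

    ```python
    import numpy as np

    def resultant_force_and_moment(positions, forces, masses):
        """Resultant of long-range forces on a coarse-grained body, plus the moment
        those forces create about the body's center of mass."""
        com = masses @ positions / masses.sum()            # center of mass
        F = forces.sum(axis=0)                             # resultant force
        M = np.cross(positions - com, forces).sum(axis=0)  # moment about the mass center
        return F, M

    # Hypothetical long-range forces acting on the four atoms of one rigid body.
    pos = np.array([[0.0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]])
    frc = np.array([[0.10, 0, 0], [0, 0.20, 0], [0, 0, 0.10], [0.05, 0.05, 0]])
    m = np.array([12.0, 1.0, 1.0, 16.0])
    F, M = resultant_force_and_moment(pos, frc, m)  # M is nonzero: the torque a
                                                    # particle-only model would drop
    ```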

  19. Network coding for multi-resolution multicast

    DEFF Research Database (Denmark)

    2013-01-01

    A method, apparatus and computer program product for utilizing network coding for multi-resolution multicast is presented. A network source partitions source content into a base layer and one or more refinement layers. The network source receives a respective one or more push-back messages from one or more network destination receivers, the push-back messages identifying the one or more refinement layers suited for each one of the one or more network destination receivers. The network source computes a network code involving the base layer and the one or more refinement layers for at least one of the one or more network destination receivers, and transmits the network code to the one or more network destination receivers in accordance with the push-back messages.

  20. Video Classification and Adaptive QoP/QoS Control for Multiresolution Video Applications on IPTV

    Directory of Open Access Journals (Sweden)

    Huang Shyh-Fang

    2012-01-01

    Full Text Available With the development of heterogeneous networks and video coding standards, multiresolution video applications over networks have become important. It is critical to ensure the service quality of the network for time-sensitive video services. Worldwide Interoperability for Microwave Access (WiMAX) is a good candidate for delivering video signals because, through WiMAX, the delivery quality based on the quality-of-service (QoS) setting can be guaranteed. The selection of suitable QoS parameters is, however, not trivial for service users. What a video service user is really concerned with is the video quality of presentation (QoP), which includes the video resolution, the fidelity, and the frame rate. In this paper, we present a quality control mechanism for multiresolution video coding structures over WiMAX networks and also investigate the relationship between QoP and QoS in end-to-end connections. Consequently, the video presentation quality can be simply mapped to the network requirements by a mapping table, and the end-to-end QoS is thereby achieved. We performed experiments with multiresolution MPEG coding over WiMAX networks. In addition to the QoP parameters, video characteristics such as picture activity and video mobility also affect the QoS significantly.

  1. Pathfinder: multiresolution region-based searching of pathology images using IRM.

    OpenAIRE

    Wang, J. Z.

    2000-01-01

    The fast growth of digitized pathology slides has created great challenges in research on image database retrieval. The prevalent retrieval technique involves human-supplied text annotations to describe slide contents. These pathology images typically have very high resolution, making it difficult to search based on image content. In this paper, we present Pathfinder, an efficient multiresolution region-based searching system for high-resolution pathology image libraries. The system uses wave...

  2. Multi-resolution voxel phantom modeling: a high-resolution eye model for computational dosimetry.

    Science.gov (United States)

    Caracappa, Peter F; Rhodes, Ashley; Fiedler, Derek

    2014-09-21

    Voxel models of the human body are commonly used for simulating radiation dose with a Monte Carlo radiation transport code. Due to memory limitations, the voxel resolution of these computational phantoms is typically too large to accurately represent the dimensions of small features such as the eye. Recently reduced recommended dose limits for the lens of the eye, which is a radiosensitive tissue with a significant concern for cataract formation, have lent increased importance to understanding the dose to this tissue. A high-resolution eye model is constructed using physiological data for the dimensions of radiosensitive tissues, and combined with an existing set of whole-body models to form a multi-resolution voxel phantom, which is used with the MCNPX code to calculate radiation dose from various exposure types. This phantom provides an accurate representation of the radiation transport through the structures of the eye. Two alternate methods of including a high-resolution eye model within an existing whole-body model are developed. The accuracy and performance of each method are compared against existing computational phantoms.

  3. A multiresolution spatial parametrization for the estimation of fossil-fuel carbon dioxide emissions via atmospheric inversions.

    Energy Technology Data Exchange (ETDEWEB)

    Ray, Jaideep; Lee, Jina; Lefantzi, Sophia; Yadav, Vineet [Carnegie Institution for Science, Stanford, CA; Michalak, Anna M. [Carnegie Institution for Science, Stanford, CA; van Bloemen Waanders, Bart Gustaaf [Sandia National Laboratories, Albuquerque, NM; McKenna, Sean Andrew [IBM Research, Mulhuddart, Dublin 15, Ireland

    2013-04-01

    The estimation of fossil-fuel CO2 emissions (ffCO2) from limited ground-based and satellite measurements of CO2 concentrations will form a key component of the monitoring of treaties aimed at the abatement of greenhouse gas emissions. To that end, we construct a multiresolution spatial parametrization for fossil-fuel CO2 emissions (ffCO2), to be used in atmospheric inversions. Such a parametrization does not currently exist. The parametrization uses wavelets to accurately capture the multiscale, nonstationary nature of ffCO2 emissions and employs proxies of human habitation, e.g., images of lights at night and maps of built-up areas, to reduce the dimensionality of the multiresolution parametrization. The parametrization is used in a synthetic-data inversion to test its suitability for use in atmospheric inverse problems. This linear inverse problem is predicated on observations of ffCO2 concentrations collected at measurement towers. We adapt a convex optimization technique, commonly used in the reconstruction of compressively sensed images, to perform sparse reconstruction of the time-variant ffCO2 emission field. We also borrow concepts from compressive sensing to impose boundary conditions, i.e., to limit ffCO2 emissions to within an irregularly shaped region (the United States, in our case). We find that the optimization algorithm performs a data-driven sparsification of the spatial parametrization and retains only those wavelets whose weights could be estimated from the observations. Further, our method for the imposition of boundary conditions leads to an approximately tenfold computational saving over conventional means of doing so. We conclude with a discussion of the accuracy of the estimated emissions and the suitability of the spatial parametrization for use in inverse problems with a significant degree of regularization.
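
    The sparse-reconstruction step can be illustrated with iterative soft-thresholding (ISTA), a simple convex l1 solver; the abstract does not name the exact algorithm the authors adapted, and the operator, sizes, and noise level below are hypothetical.

    ```python
    import numpy as np

    def ista(A, y, lam=1.0, n_iter=500):
        """Iterative soft-thresholding for min ||Ax - y||^2 + lam * ||x||_1."""
        L = np.linalg.norm(A, 2) ** 2               # Lipschitz constant of the gradient
        x = np.zeros(A.shape[1])
        for _ in range(n_iter):
            g = x - A.T @ (A @ x - y) / L           # gradient step on the data misfit
            x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)   # soft threshold
        return x

    # Hypothetical setup: 50 tower observations, 400 wavelet weights, 15 truly active.
    gen = np.random.default_rng(7)
    A = gen.standard_normal((50, 400))              # observation operator in the wavelet basis
    x_true = np.zeros(400)
    x_true[gen.choice(400, 15, replace=False)] = gen.standard_normal(15)
    y = A @ x_true + 0.01 * gen.standard_normal(50)
    x_hat = ista(A, y)                              # lam is problem-dependent
    print('wavelets retained:', int(np.sum(np.abs(x_hat) > 1e-3)))
    ```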

  4. Crack Identification in CFRP Laminated Beams Using Multi-Resolution Modal Teager–Kaiser Energy under Noisy Environments

    Science.gov (United States)

    Xu, Wei; Cao, Maosen; Ding, Keqin; Radzieński, Maciej; Ostachowicz, Wiesław

    2017-01-01

    Carbon fiber reinforced polymer laminates are increasingly used in the aerospace and civil engineering fields. Identifying cracks in carbon fiber reinforced polymer laminated beam components is of considerable significance for ensuring the integrity and safety of the whole structures. With the development of high-resolution measurement technologies, mode-shape-based crack identification in such laminated beam components has become an active research focus. Despite its sensitivity to cracks, however, this method is susceptible to noise. To address this deficiency, this study proposes a new concept of multi-resolution modal Teager–Kaiser energy, which is the Teager–Kaiser energy of a mode shape represented in multi-resolution, for identifying cracks in carbon fiber reinforced polymer laminated beams. The efficacy of this concept is analytically demonstrated by identifying cracks in Timoshenko beams with general boundary conditions; and its applicability is validated by diagnosing cracks in a carbon fiber reinforced polymer laminated beam, whose mode shapes are precisely acquired via non-contact measurement using a scanning laser vibrometer. The analytical and experimental results show that multi-resolution modal Teager–Kaiser energy is capable of designating the presence and location of cracks in these beams under noisy environments. This proposed method holds promise for developing crack identification systems for carbon fiber reinforced polymer laminates. PMID:28773016
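
    The Teager-Kaiser energy operator itself is a three-point formula, ψ[n] = x[n]² − x[n−1]·x[n+1]; below is a sketch of computing it on successively smoothed (multi-resolution) versions of a mode shape. The wavelet, level count, and the toy crack signature are assumptions, not the paper's settings.

    ```python
    import numpy as np
    import pywt

    def teager_kaiser(x):
        """Discrete Teager-Kaiser energy: psi[n] = x[n]**2 - x[n-1]*x[n+1]."""
        return x[1:-1] ** 2 - x[:-2] * x[2:]

    def multiresolution_tke(mode_shape, wavelet='db4', levels=4):
        """TKE of a mode shape at several resolutions, from coarse (approximation
        only) to fine (more detail bands retained)."""
        coeffs = pywt.wavedec(mode_shape, wavelet, level=levels)
        out = []
        for j in range(levels):
            kept = [coeffs[0]] + [c if k < j else np.zeros_like(c)
                                  for k, c in enumerate(coeffs[1:])]
            smooth = pywt.waverec(kept, wavelet)[:len(mode_shape)]
            out.append(teager_kaiser(smooth))
        return out

    # Toy mode shape: a fundamental bending mode with a small local perturbation
    # standing in for the crack-induced feature, plus measurement noise.
    x = np.linspace(0, 1, 512)
    mode = np.sin(np.pi * x) + 0.01 * np.exp(-((x - 0.55) / 0.005) ** 2)
    mode += 1e-4 * np.random.default_rng(0).standard_normal(512)
    for tke in multiresolution_tke(mode):
        print(np.argmax(np.abs(tke[5:-5])) + 5)   # peak index localizes the damage
    ```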

  5. A multiresolution approach for the convergence acceleration of multivariate curve resolution methods.

    Science.gov (United States)

    Sawall, Mathias; Kubis, Christoph; Börner, Armin; Selent, Detlef; Neymeyr, Klaus

    2015-09-03

    Modern computerized spectroscopic instrumentation can result in high volumes of spectroscopic data. Such accurate measurements raise special computational challenges for multivariate curve resolution techniques, since pure component factorizations are often solved via constrained minimization problems. The computational costs for these calculations rapidly grow with an increased time or frequency resolution of the spectral measurements. The key idea of this paper is to define, for the given high-dimensional spectroscopic data, a sequence of coarsened subproblems with reduced resolutions. The multiresolution algorithm first computes a pure component factorization for the coarsest problem with the lowest resolution. Then the factorization results are used as initial values for the next problem with a higher resolution. Good initial values result in a fast solution on the next refined level. This procedure is repeated, and finally a factorization is determined for the highest level of resolution. The described multiresolution approach allows a considerable convergence acceleration. The computational procedure is analyzed and tested on experimental spectroscopic data from the rhodium-catalyzed hydroformylation together with various soft and hard models. Copyright © 2015 Elsevier B.V. All rights reserved.
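
    The coarsen-solve-refine idea can be sketched with non-negative matrix factorization standing in for the pure-component factorization (the authors' constrained MCR solver is not available in scikit-learn); the time-axis subsampling and the synthetic two-component data are assumptions.

    ```python
    import numpy as np
    from sklearn.decomposition import NMF

    def multiresolution_nmf(D, k, levels=3, seed=0):
        """Solve the coarsest time-subsampled factorization first, then reuse the
        factors as the warm start on each finer level."""
        W = H = None
        for level in reversed(range(levels + 1)):
            Dc = D[::2 ** level]                   # coarsen along the time axis
            if W is None:                          # coarsest level: random start
                model = NMF(n_components=k, init='random', random_state=seed, max_iter=500)
                W = model.fit_transform(Dc)
            else:                                  # finer level: warm start
                W0 = np.repeat(W, 2, axis=0)[:Dc.shape[0]]   # upsample concentration profiles
                model = NMF(n_components=k, init='custom', max_iter=200)
                W = model.fit_transform(Dc, W=np.ascontiguousarray(W0), H=H)
            H = model.components_
        return W, H

    # Hypothetical two-component kinetic data: time x wavelength, strictly positive.
    t = np.linspace(0, 1, 512)[:, None]
    wl = np.linspace(0, 1, 200)
    spectra = np.array([np.exp(-(wl - m) ** 2 / 0.01) for m in (0.3, 0.7)])
    conc = np.hstack([np.exp(-3 * t), 1 - np.exp(-3 * t)])
    D = conc @ spectra + 1e-3
    W, H = multiresolution_nmf(D, k=2)
    ```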

  6. Multiresolution strategies for the numerical solution of optimal control problems

    Science.gov (United States)

    Jain, Sachin

    Many numerical techniques exist for solving optimal control problems, but less work has been done on making these algorithms faster and more robust. The main motivation of this work is to solve optimal control problems accurately, quickly, and efficiently. Optimal control problems are often characterized by discontinuities or switchings in the control variables. One way of accurately capturing the irregularities in the solution is to use a high-resolution (dense) uniform grid, which requires a large amount of computational resources, both in terms of CPU time and memory. Hence, in order to accurately capture any irregularities in the solution using few computational resources, one can refine the mesh locally in the region close to an irregularity instead of refining it uniformly over the whole domain. Therefore, a novel multiresolution scheme for data compression has been designed, which is shown to outperform similar data compression schemes; specifically, we have shown that the proposed approach results in fewer grid points than a common multiresolution data compression scheme. The validity of the proposed mesh refinement algorithm has been verified by solving several challenging initial-boundary value problems for evolution equations in 1D. The examples demonstrate the stability and robustness of the proposed algorithm, which adapts dynamically to any existing or emerging irregularities in the solution by automatically allocating more grid points to regions where the solution exhibits sharp features and fewer to regions where it is smooth. Thereby, the computational time and memory usage are reduced significantly, while maintaining an accuracy equivalent to that obtained using a fine uniform mesh. Next, a direct multiresolution-based approach for solving trajectory optimization problems is developed. The original optimal control problem is transcribed into a

  7. Time-Frequency Feature Representation Using Multi-Resolution Texture Analysis and Acoustic Activity Detector for Real-Life Speech Emotion Recognition

    Directory of Open Access Journals (Sweden)

    Kun-Ching Wang

    2015-01-01

    Full Text Available The classification of emotional speech is widely studied in speech-related research on human-computer interaction (HCI). In this paper, the purpose is to present a novel feature extraction based on multi-resolution texture image information (MRTII). The MRTII feature set is derived from multi-resolution texture analysis for the characterization and classification of different emotions in a speech signal. The motivation is that emotions have different intensity values in different frequency bands. In terms of human visual perception, the texture properties of the multi-resolution spectrogram of emotional speech should form a good feature set for emotion classification in speech. Furthermore, multi-resolution texture analysis can give clearer discrimination between emotions than uniform-resolution texture analysis. In order to provide high accuracy of emotional discrimination, especially in real life, an acoustic activity detection (AAD) algorithm must be applied in the MRTII-based feature extraction. Considering the presence of many blended emotions in real life, this paper makes use of two corpora of naturally occurring dialogs recorded in real-life call centers. Compared with the traditional Mel-scale Frequency Cepstral Coefficients (MFCC) and state-of-the-art features, the MRTII features improve the correct classification rates of the proposed systems across different language databases. Experimental results show that the proposed MRTII-based feature information, inspired by human visual perception of the spectrogram image, can provide significant classification performance for real-life emotion recognition in speech.

  8. Spatial Quality of Manually Geocoded Multispectral and Multiresolution Mosaics

    Directory of Open Access Journals (Sweden)

    Andrija Krtalić

    2008-05-01

    Full Text Available The digital airborne multisensor and multiresolution system for the collection of information (images) about mine-suspected areas was created within the European Commission project Airborne Minefield Area Reduction (ARC, EC IST-2000-25300, http://www.arc.vub.ac.be) to gain a better perspective on mine-suspected areas (MSPs) in the Republic of Croatia. The system consists of a matrix camera (visible and near-infrared range of the electromagnetic spectrum, 0.4-1.1 µm), a thermal camera (thermal range, 8-14 µm), and a hyperspectral line scanner. Because of the specific purpose and the objects sought in the scene, the flights for collecting the images took place at heights from 130 m to 900 m above the ground. The result of the small relative flight heights and the large MSPs was a large number of images covering each MSP, which created the need to merge the images into larger mosaics, both for a better perspective on whole MSPs and for assessing the interactions of detected objects within the scene. The system did not include a module for automatic mosaicking and geocoding, so mosaicking and subsequent geocoding were done manually. This process made classification of the scene (better distinguishing of objects in the scene) and subsequent fusion of multispectral and multiresolution images possible. Classification and image fusion can thus be performed even with manual mosaicking and geocoding, as this article demonstrates.

  9. Investigations of homologous disaccharides by elastic incoherent neutron scattering and wavelet multiresolution analysis

    Energy Technology Data Exchange (ETDEWEB)

    Magazù, S.; Migliardo, F. [Dipartimento di Fisica e di Scienze della Terra dell’, Università degli Studi di Messina, Viale F. S. D’Alcontres 31, 98166 Messina (Italy); Vertessy, B.G. [Institute of Enzymology, Hungarian Academy of Science, Budapest (Hungary); Caccamo, M.T., E-mail: maccamo@unime.it [Dipartimento di Fisica e di Scienze della Terra dell’, Università degli Studi di Messina, Viale F. S. D’Alcontres 31, 98166 Messina (Italy)

    2013-10-16

    Highlights: • Innovative multiresolution wavelet analysis of elastic incoherent neutron scattering. • Elastic incoherent neutron scattering measurements on homologous disaccharides. • EINS wavevector analysis. • EINS temperature analysis. - Abstract: In the present paper, the results of a wavevector and thermal analysis of Elastic Incoherent Neutron Scattering (EINS) data, collected on water mixtures of three homologous disaccharides through a wavelet approach, are reported. The wavelet analysis allows the spatial properties of the three systems to be compared in the wavevector range Q = 0.27 Å⁻¹ to 4.27 Å⁻¹. It emerges that, differently from previous analyses, for trehalose the scalograms are consistently lower and sharper with respect to maltose and sucrose, giving rise to a global spectral density along the wavevector range that is markedly less extended. As far as the thermal analysis is concerned, the global scattered intensity profiles suggest a higher thermal restraint of trehalose with respect to the other two homologous disaccharides.

  10. A morphologically preserved multi-resolution TIN surface modeling and visualization method for virtual globes

    Science.gov (United States)

    Zheng, Xianwei; Xiong, Hanjiang; Gong, Jianya; Yue, Linwei

    2017-07-01

    Virtual globes play an important role in representing three-dimensional models of the Earth. To extend the functioning of a virtual globe beyond that of a "geobrowser", the accuracy of the geospatial data in the processing and representation should be of special concern for the scientific analysis and evaluation. In this study, we propose a method for the processing of large-scale terrain data for virtual globe visualization and analysis. The proposed method aims to construct a morphologically preserved multi-resolution triangulated irregular network (TIN) pyramid for virtual globes to accurately represent the landscape surface and simultaneously satisfy the demands of applications at different scales. By introducing cartographic principles, the TIN model in each layer is controlled with a data quality standard to formulize its level of detail generation. A point-additive algorithm is used to iteratively construct the multi-resolution TIN pyramid. The extracted landscape features are also incorporated to constrain the TIN structure, thus preserving the basic morphological shapes of the terrain surface at different levels. During the iterative construction process, the TIN in each layer is seamlessly partitioned based on a virtual node structure, and tiled with a global quadtree structure. Finally, an adaptive tessellation approach is adopted to eliminate terrain cracks in the real-time out-of-core spherical terrain rendering. The experiments undertaken in this study confirmed that the proposed method performs well in multi-resolution terrain representation, and produces high-quality underlying data that satisfy the demands of scientific analysis and evaluation.

  11. Identifying Spatial Units of Human Occupation in the Brazilian Amazon Using Landsat and CBERS Multi-Resolution Imagery

    Directory of Open Access Journals (Sweden)

    Maria Isabel Sobral Escada

    2012-01-01

    Full Text Available Every spatial unit of human occupation is part of a network structuring an extensive process of urbanization in the Amazon territory. Multi-resolution remote sensing data were used to identify and map human presence and activities in the Sustainable Forest District of the Cuiabá-Santarém highway (BR-163), west of Pará, Brazil. The limits of spatial units of human occupation were mapped based on digital classification of a Landsat TM5 (Thematic Mapper 5) image (30 m spatial resolution). High-spatial-resolution CBERS-HRC (China-Brazil Earth Resources Satellite, High-Resolution Camera) images (5 m), merged with CBERS-CCD (Charge-Coupled Device) images (20 m), were used to map spatial arrangements inside each populated unit, describing intra-urban characteristics. Fieldwork data validated and refined the classification maps that supported the categorization of the units. A total of 133 spatial units were individualized, comprising population centers such as municipal seats, villages and communities, and units of human activities, such as sawmills, farmhouses, landing strips, etc. From the high-resolution analysis, 32 population centers were grouped into four categories, described according to their level of urbanization and spatial organization as structured, recent, established, and dependent on connectivity. This multi-resolution approach provided spatial information about the urbanization process and the organization of the territory. It may be extended to other areas or be further used to devise a monitoring system, contributing to the discussion of public policy priorities for sustainable development in the Amazon.

  12. Inferring species richness and turnover by statistical multiresolution texture analysis of satellite imagery.

    Directory of Open Access Journals (Sweden)

    Matteo Convertino

    Full Text Available BACKGROUND: The quantification of species richness and species turnover is essential to effective monitoring of ecosystems. Wetland ecosystems are particularly in need of such monitoring due to their sensitivity to rainfall, water management and other external factors that affect hydrology, soil, and species patterns. A key challenge for environmental scientists is determining the linkage between natural and human stressors, and the effect of that linkage at the species level in space and time. We propose pixel-intensity-based Shannon entropy for estimating species richness, and introduce a method based on statistical wavelet multiresolution texture analysis to quantitatively assess interseasonal and interannual species turnover. METHODOLOGY/PRINCIPAL FINDINGS: We model satellite images of regions of interest as textures. We define a texture in an image as a spatial domain where the variations in pixel intensity across the image are both stochastic and multiscale. To compare two textures quantitatively, we first obtain a multiresolution wavelet decomposition of each. Either an appropriate probability density function (pdf) model for the coefficients at each subband is selected and its parameters estimated, or a non-parametric approach using histograms is adopted. We choose the former, where the wavelet coefficients of the multiresolution decomposition at each subband are modeled as samples from the generalized Gaussian pdf. We then obtain the joint pdf for the coefficients for all subbands, assuming independence across subbands, an approximation that simplifies the computational burden significantly without sacrificing the ability to statistically distinguish textures. We measure the difference between two textures' representative pdfs via the Kullback-Leibler divergence (KL). Species turnover, or β diversity, is estimated using both this KL divergence and the difference in Shannon entropy. Additionally, we predict species
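
    A sketch of the subband-pdf comparison follows. Here histograms (the non-parametric alternative the abstract mentions) replace the generalized Gaussian fit, and per-subband KL divergences are summed under the independence assumption; the wavelet, level count, bin range, and images are placeholders.

    ```python
    import numpy as np
    import pywt
    from scipy.stats import entropy

    def subband_pdfs(img, wavelet='db2', levels=3, bins=64, value_range=(-8, 8)):
        """Histogram pdfs of the detail coefficients in every subband of a 2D DWT."""
        coeffs = pywt.wavedec2(img, wavelet, level=levels)
        pdfs = []
        for detail in coeffs[1:]:                 # skip the approximation band
            for band in detail:                   # horizontal, vertical, diagonal
                h, _ = np.histogram(band, bins=bins, range=value_range, density=True)
                pdfs.append(h + 1e-12)            # avoid zero bins inside the KL divergence
        return pdfs

    def texture_divergence(img1, img2):
        """Sum of per-subband KL divergences, assuming independence across subbands."""
        return sum(entropy(p, q) for p, q in zip(subband_pdfs(img1), subband_pdfs(img2)))

    # Stand-in season-1/season-2 tiles of the same region: same size, different texture.
    gen = np.random.default_rng(0)
    a = gen.normal(size=(256, 256))
    b = gen.normal(scale=1.5, size=(256, 256))
    print(texture_divergence(a, a.copy()))   # ~0: identical textures
    print(texture_divergence(a, b))          # > 0: a measurable turnover signal
    ```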

  13. Real-time Multiresolution Crosswalk Detection with Walk Light Recognition for the Blind

    Directory of Open Access Journals (Sweden)

    ROMIC, K.

    2018-02-01

    Full Text Available Real-time image processing and object detection techniques have great potential for application in digital assistive tools for blind and visually impaired persons. In this paper, an algorithm for crosswalk detection and walk-light recognition is proposed, with the main aim of helping blind persons cross the road. The proposed algorithm is optimized to work in real time on portable devices using standard cameras. Images captured by the camera are processed while the person is moving, and a decision about the detected crosswalk is provided as output, along with information about the walk light if one is present. The crosswalk detection method is based on multiresolution morphological image processing, while the walk-light recognition is performed by a proposed 6-stage algorithm. The main contributions of this paper are accurate crosswalk detection with a small processing time, due to the multiresolution processing, and the recognition of walk lights covering only a small number of pixels in the image. The experiment is conducted using images from video sequences captured in realistic situations at crossings. The results show 98.3% correct crosswalk detections and 89.5% correct walk-light recognitions, with an average processing speed of about 16 frames per second.

  14. Application of multi-scale wavelet entropy and multi-resolution Volterra models for climatic downscaling

    Science.gov (United States)

    Sehgal, V.; Lakhanpal, A.; Maheswaran, R.; Khosa, R.; Sridhar, Venkataramana

    2018-01-01

    This study proposes a wavelet-based multi-resolution modeling approach for statistical downscaling of GCM variables to mean monthly precipitation for five locations in the Krishna Basin, India. The climatic dataset from NCEP is used for training the proposed models (Jan. '69 to Dec. '94), which are then applied to the corresponding CanCM4 GCM variables to simulate precipitation for the validation (Jan. '95-Dec. '05) and forecast (Jan. '06-Dec. '35) periods. The observed precipitation data are obtained from the India Meteorological Department (IMD) gridded precipitation product at 0.25 degree spatial resolution. This paper proposes a novel Multi-Scale Wavelet Entropy (MWE) based approach for grouping climatic variables into suitable clusters using the k-means methodology. Principal Component Analysis (PCA) is used to obtain representative Principal Components (PCs) explaining 90-95% of the variance for each cluster. A multi-resolution non-linear approach combining Discrete Wavelet Transform (DWT) and Second Order Volterra (SoV) models is used to model the representative PCs and obtain the downscaled precipitation for each location (the W-P-SoV model). The results establish that the wavelet-based multi-resolution SoV models perform significantly better than traditional Multiple Linear Regression (MLR) and Artificial Neural Network (ANN) based frameworks. It is observed that the proposed MWE-based clustering and subsequent PCA help reduce the dimensionality of the input climatic variables while capturing more variability than stand-alone k-means (without MWE). The proposed models perform better in estimating the number of precipitation events during the non-monsoon periods, whereas the models with clustering but without MWE over-estimate the rainfall during the dry season.

  15. A MULTIRESOLUTION METHOD FOR THE SIMULATION OF SEDIMENTATION IN INCLINED CHANNELS

    OpenAIRE

    Buerger, Raimund; Ruiz-Baier, Ricardo; Schneider, Kai; Torres, Hector

    2012-01-01

    An adaptive multiresolution scheme is proposed for the numerical solution of a spatially two-dimensional model of sedimentation of suspensions of small solid particles dispersed in a viscous fluid. This model consists of a version of the Stokes equations for incompressible fluid flow coupled with a hyperbolic conservation law for the local solids concentration. We study the process in an inclined, rectangular closed vessel, a configuration that gives rise to a well-known increase of the settling rate...

  16. Detection of pulmonary nodules on lung X-ray images. Studies on multi-resolutional filter and energy subtraction images

    International Nuclear Information System (INIS)

    Sawada, Akira; Sato, Yoshinobu; Kido, Shoji; Tamura, Shinichi

    1999-01-01

    The purpose of this work is to prove the effectiveness of energy subtraction images for the detection of pulmonary nodules, and the effectiveness of a multi-resolutional filter applied to an energy subtraction image for detecting pulmonary nodules. We also study factors influencing the accuracy of pulmonary nodule detection from the viewpoints of image type, digital filter type, and evaluation method. As one type of image, we select the energy subtraction image, which removes bones such as ribs from the conventional X-ray image by exploiting the difference in X-ray absorption ratios, at different energies, between bone and soft tissue. Ribs and vessels are major causes of CAD errors in the detection of pulmonary nodules, and many studies have tried to solve this problem. We therefore select conventional X-ray images and energy subtraction X-ray images as the image types, and, at the same time, the ∇²G (Laplacian of Gaussian) filter, the Min-DD (minimum directional difference) filter, and our multi-resolutional filter as the digital filter types. We also select two evaluation methods and prove the effectiveness of the energy subtraction image, the effectiveness of the Min-DD filter on a conventional X-ray image, and the effectiveness of the multi-resolutional filter on an energy subtraction image. (author)

  17. Statistical Projections for Multi-resolution, Multi-dimensional Visual Data Exploration and Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Nguyen, Hoa T. [Univ. of Utah, Salt Lake City, UT (United States); Stone, Daithi [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Bethel, E. Wes [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2016-01-01

    An ongoing challenge in the visual exploration and analysis of large, multi-dimensional datasets is how to present useful, concise information to a user for specific visualization tasks. Typical approaches to this problem have proposed either reduced-resolution versions of the data, or projections of the data, or both. These approaches still have limitations, such as high computational cost or loss of fidelity. In this work, we explore the use of a statistical metric as the basis for both projections and reduced-resolution versions of data, with a particular focus on preserving one key trait in the data, namely variation. We use two different case studies to explore this idea: one that uses a synthetic dataset, and another that uses a large ensemble collection produced by an atmospheric modeling code to study long-term changes in global precipitation. The primary finding of our work is that, in terms of preserving the variation signal inherent in the data, a statistical measure more faithfully preserves this key characteristic across both multi-dimensional projections and multi-resolution representations than a methodology based upon averaging.
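
    The core idea, using a statistical measure instead of an average when reducing resolution, can be shown in a few lines; the block size and the synthetic field below are arbitrary.

    ```python
    import numpy as np

    def reduce_mean(field, block):
        """Conventional reduced-resolution version: blockwise mean."""
        h, w = (field.shape[0] // block) * block, (field.shape[1] // block) * block
        return field[:h, :w].reshape(h // block, block, w // block, block).mean(axis=(1, 3))

    def reduce_std(field, block):
        """Variation-preserving version: blockwise standard deviation."""
        h, w = (field.shape[0] // block) * block, (field.shape[1] // block) * block
        return field[:h, :w].reshape(h // block, block, w // block, block).std(axis=(1, 3))

    # A field whose mean is flat but which hides a pocket of high variability:
    # averaging erases the signal, the statistical reduction keeps it visible.
    gen = np.random.default_rng(0)
    f = np.zeros((256, 256))
    f[100:140, 100:140] = gen.normal(scale=5.0, size=(40, 40))
    print(reduce_mean(f, 16)[6:9, 6:9].round(2))
    print(reduce_std(f, 16)[6:9, 6:9].round(2))
    ```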

  18. A fast multi-resolution approach to tomographic PIV

    Science.gov (United States)

    Discetti, Stefano; Astarita, Tommaso

    2012-03-01

    Tomographic particle image velocimetry (Tomo-PIV) is a recently developed three-component, three-dimensional anemometric non-intrusive measurement technique, based on an optical tomographic reconstruction applied to simultaneously recorded images of the distribution of light intensity scattered by seeding particles immersed into the flow. Nowadays, the reconstruction process is carried out mainly by iterative algebraic reconstruction techniques, well suited to handle the problem of limited number of views, but computationally intensive and memory demanding. The adoption of the multiplicative algebraic reconstruction technique (MART) has become more and more accepted. In the present work, a novel multi-resolution approach is proposed, relying on the adoption of a coarser grid in the first step of the reconstruction to obtain a fast estimation of a reliable and accurate first guess. A performance assessment, carried out on three-dimensional computer-generated distributions of particles, shows a substantial acceleration of the reconstruction process for all the tested seeding densities with respect to the standard method based on 5 MART iterations; a relevant reduction in the memory storage is also achieved. Furthermore, a slight accuracy improvement is noticed. A modified version, improved by a multiplicative line of sight estimation of the first guess on the compressed configuration, is also tested, exhibiting a further remarkable decrease in both memory storage and computational effort, mostly at the lowest tested seeding densities, while retaining the same performances in terms of accuracy.
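
    For reference, the MART update at the heart of such solvers multiplies each voxel by a power of the measured-to-predicted ray ratio. The sketch below runs it on a tiny dense system (real Tomo-PIV operators are huge and sparse); the paper's multi-resolution acceleration would replace the uniform first guess with one interpolated from a coarse-grid solution.

    ```python
    import numpy as np

    def mart(A, y, n_iter=5, mu=1.0):
        """Multiplicative ART: scale every voxel seen by ray i by
        (measured / predicted) ** (mu * A[i, j]); positivity is automatic."""
        x = np.ones(A.shape[1])                     # uniform positive first guess
        for _ in range(n_iter):
            for i in range(A.shape[0]):
                pred = A[i] @ x
                if y[i] == 0:
                    x[A[i] > 0] = 0.0               # rays seeing no light zero their voxels
                elif pred > 0:
                    x *= (y[i] / pred) ** (mu * A[i])
        return x

    # Tiny synthetic problem: 16 pixel rays viewing a 16-voxel volume.
    gen = np.random.default_rng(2)
    A = (gen.random((16, 16)) < 0.3).astype(float)  # sparse 0/1 ray-voxel weights
    x_true = np.zeros(16)
    x_true[[2, 7, 9]] = 1.0                         # three "particles"
    x_rec = mart(A, A @ x_true)
    ```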

  19. Decompositions of bubbly flow PIV velocity fields using discrete wavelets multi-resolution and multi-section image method

    International Nuclear Information System (INIS)

    Choi, Je-Eun; Takei, Masahiro; Doh, Deog-Hee; Jo, Hyo-Jae; Hassan, Yassin A.; Ortiz-Villafuerte, Javier

    2008-01-01

    Currently, wavelet transforms are widely used for the analyses of particle image velocimetry (PIV) velocity vector fields. This is because the wavelet provides not only spatial information of the velocity vectors, but also of the time and frequency domains. In this study, a discrete wavelet transform is applied to real PIV images of bubbly flows. The vector fields obtained by a self-made cross-correlation PIV algorithm were used for the discrete wavelet transform. The performances of the discrete wavelet transforms were investigated by changing the level of power of discretization. The images decomposed by wavelet multi-resolution showed conspicuous characteristics of the bubbly flows for the different levels. A high spatial bubble concentrated area could be evaluated by the constructed discrete wavelet transform algorithm, in which high-leveled wavelets play dominant roles in revealing the flow characteristics

  20. Multiresolution 3-D reconstruction from side-scan sonar images.

    Science.gov (United States)

    Coiras, Enrique; Petillot, Yvan; Lane, David M

    2007-02-01

    In this paper, a new method for the estimation of seabed elevation maps from side-scan sonar images is presented. The side-scan image formation process is represented by a Lambertian diffuse model, which is then inverted by a multiresolution optimization procedure inspired by expectation-maximization to account for the characteristics of the imaged seafloor region. On convergence of the model, approximations for seabed reflectivity, side-scan beam pattern, and seabed altitude are obtained. The performance of the system is evaluated against a real structure of known dimensions. Reconstruction results for images acquired by different sonar sensors are presented. Applications to augmented reality for the simulation of targets in sonar imagery are also discussed.

  1. A VIRTUAL GLOBE-BASED MULTI-RESOLUTION TIN SURFACE MODELING AND VISUALIZATION METHOD

    Directory of Open Access Journals (Sweden)

    X. Zheng

    2016-06-01

    Full Text Available The integration and visualization of geospatial data on a virtual globe play a significant role in understanding and analysis of Earth surface processes. However, current virtual globes often sacrifice accuracy to ensure efficiency in global data processing and visualization, which devalues their functionality for scientific applications. In this article, we propose a high-accuracy multi-resolution TIN pyramid construction and visualization method for virtual globes. Firstly, we introduce cartographic principles to formulize the level of detail (LOD) generation, so that the TIN model in each layer is controlled with a data quality standard. A maximum z-tolerance algorithm is then used to iteratively construct the multi-resolution TIN pyramid. Moreover, the extracted landscape features are incorporated into each layer's TIN, thus preserving the topological structure of the terrain surface at different levels. In the proposed framework, a virtual node (VN) based approach is developed to seamlessly partition and discretize each triangulation layer into tiles, which can be organized and stored with a global quad-tree index. Finally, real-time out-of-core spherical terrain rendering is realized in the virtual globe system VirtualWorld1.0. The experimental results show that the proposed method achieves a high-fidelity terrain representation, while producing high-quality underlying data that satisfy the demands of scientific analysis.
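
    A maximum z-tolerance construction can be sketched as greedy point insertion: keep adding the worst-fitting input point to a Delaunay TIN until the largest vertical error drops below the tolerance. The seeding, tolerance, and synthetic terrain below are assumptions, and the feature constraints and tiling of the paper are omitted.

    ```python
    import numpy as np
    from scipy.spatial import Delaunay
    from scipy.interpolate import LinearNDInterpolator

    def max_z_tolerance_tin(xy, z, tol):
        """Greedy TIN construction: seed with the extreme points, then repeatedly
        insert the point with the largest vertical error until all errors <= tol."""
        seed = [int(np.argmin(xy[:, 0])), int(np.argmax(xy[:, 0])),
                int(np.argmin(xy[:, 1])), int(np.argmax(xy[:, 1]))]
        selected = list(dict.fromkeys(seed))
        while True:
            interp = LinearNDInterpolator(xy[selected], z[selected],
                                          fill_value=z[selected].mean())
            err = np.abs(interp(xy) - z)
            err[selected] = 0.0
            worst = int(np.argmax(err))
            if err[worst] <= tol:
                return selected, Delaunay(xy[selected])
            selected.append(worst)

    # Hypothetical terrain samples.
    gen = np.random.default_rng(4)
    xy = gen.uniform(0, 1000, size=(2000, 2))
    z = 50 * np.sin(xy[:, 0] / 200) + 30 * np.cos(xy[:, 1] / 150) + gen.normal(0, 0.5, 2000)
    nodes, tin = max_z_tolerance_tin(xy, z, tol=5.0)
    print(len(nodes), 'of', len(xy), 'points kept')
    ```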

  2. A multiresolution image based approach for correction of partial volume effects in emission tomography

    International Nuclear Information System (INIS)

    Boussion, N; Hatt, M; Lamare, F; Bizais, Y; Turzo, A; Rest, C Cheze-Le; Visvikis, D

    2006-01-01

    Partial volume effects (PVEs) are consequences of the limited spatial resolution in emission tomography. They lead to a loss of signal in tissues of size similar to the point spread function and induce activity spillover between regions. Although PVE can be corrected for by using algorithms that provide the correct radioactivity concentration in a series of regions of interest (ROIs), so far little attention has been given to the possibility of creating improved images as a result of PVE correction. Potential advantages of PVE-corrected images include the ability to accurately delineate functional volumes as well as improving tumour-to-background ratio, resulting in an associated improvement in the analysis of response to therapy studies and diagnostic examinations, respectively. The objective of our study was therefore to develop a methodology for PVE correction not only to enable the accurate recuperation of activity concentrations, but also to generate PVE-corrected images. In the multiresolution analysis that we define here, details of a high-resolution image H (MRI or CT) are extracted, transformed and integrated in a low-resolution image L (PET or SPECT). A discrete wavelet transform of both H and L images is performed by using the 'à trous' algorithm, which allows the spatial frequencies (details, edges, textures) to be obtained easily at a level of resolution common to H and L. A model is then inferred to build the lacking details of L from the high-frequency details in H. The process was successfully tested on synthetic and simulated data, proving the ability to obtain accurately corrected images. Quantitative PVE correction was found to be comparable with a method considered as a reference but limited to ROI analyses. Visual improvement and quantitative correction were also obtained in two examples of clinical images, the first using a combined PET/CT scanner with a lymphoma patient and the second using a FDG brain PET and corresponding T1-weighted MRI in
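
    The 'à trous' decomposition referred to above is an undecimated wavelet transform: each level smooths with a kernel whose taps are spread apart by inserting zeros, and the detail planes are differences of successive smoothings. A minimal 2D sketch (B3-spline kernel and reflect boundaries are assumptions); the correction step would then model the missing high-frequency planes of the PET image from the corresponding planes of the anatomical image.

    ```python
    import numpy as np
    from scipy.ndimage import convolve1d

    def a_trous(image, n_levels=3):
        """Undecimated ('a trous') wavelet transform: each level smooths with a
        B3-spline kernel whose taps are spread by inserting 2**level - 1 zeros;
        the detail planes are differences of successive smoothings."""
        base = np.array([1.0, 4.0, 6.0, 4.0, 1.0]) / 16.0
        details, smooth = [], image.astype(float)
        for level in range(n_levels):
            k = np.zeros(4 * 2 ** level + 1)
            k[::2 ** level] = base
            s = convolve1d(convolve1d(smooth, k, axis=0, mode='reflect'),
                           k, axis=1, mode='reflect')
            details.append(smooth - s)     # wavelet plane at this scale
            smooth = s
        return details, smooth             # image == sum(details) + smooth

    img = np.random.default_rng(0).random((128, 128))
    d, s = a_trous(img)
    print(np.allclose(img, sum(d) + s))    # exact reconstruction by construction
    ```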

  3. Classification and Compression of Multi-Resolution Vectors: A Tree Structured Vector Quantizer Approach

    Science.gov (United States)

    2002-01-01

    their expression profile and for classification of cells into tumorous and non-tumorous classes. Then we will present a parallel tree method for ... cancerous cells. We will use the same dataset and use tree-structured classifiers with multi-resolution analysis for classifying cancerous from non-cancerous cells. We have the expressions of 4096 genes from 98 different cell types. Of these 98, 72 are cancerous while 26 are non-cancerous. We are interested

  4. Multiresolution molecular mechanics: Surface effects in nanoscale materials

    Energy Technology Data Exchange (ETDEWEB)

    Yang, Qingcheng, E-mail: qiy9@pitt.edu; To, Albert C., E-mail: albertto@pitt.edu

    2017-05-01

    Surface effects have been observed to contribute significantly to the mechanical response of nanoscale structures. The newly proposed energy-based coarse-grained atomistic method Multiresolution Molecular Mechanics (MMM) (Yang and To, 2015) is applied to capture surface effects for nanosized structures by designing a surface summation rule SR^S within the framework of MMM. Combined with the previously proposed bulk summation rule SR^B, the MMM summation rule SR^MMM is completed. SR^S and SR^B are consistently formed within SR^MMM for general finite element shape functions. Analogous to quadrature rules in the finite element method (FEM), the key to the good performance of SR^MMM lies in the fact that the order or distribution of energy for the coarse-grained atomistic model is mathematically derived, such that the number, position and weight of quadrature-type (sampling) atoms can be determined. Mathematically, the derived energy distribution of the surface region differs from that of the bulk region; physically, the difference is due to the fact that surface atoms lack neighboring bonds. As such, SR^S and SR^B are employed for the surface and bulk domains, respectively. Two- and three-dimensional numerical examples using the respective 4-node bilinear quadrilateral, 8-node quadratic quadrilateral and 8-node hexahedral meshes are employed to verify and validate the proposed approach. It is shown that MMM with SR^MMM accurately captures corner, edge and surface effects with less than 0.3% of the degrees of freedom of the original atomistic system, compared against full atomistic simulation. The effectiveness of SR^MMM with respect to high-order elements is also demonstrated by employing the 8-node quadratic quadrilateral to solve a beam bending problem considering surface effects. In addition, the sampling error introduced with SR^MMM, which is analogous to the numerical integration error with quadrature rules in FEM, is very small.

  5. Study on spillover effect of copper futures between LME and SHFE using wavelet multiresolution analysis

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    Research on information spillover effects between financial markets remains active in the economic community. A Granger-type model has recently been used to investigate the spillover between the London Metal Exchange (LME) and the Shanghai Futures Exchange (SHFE); however, possible correlations between the futures price and return on different time scales have been ignored. In this paper, wavelet multiresolution decomposition is used to investigate the spillover effects of copper futures returns between the two markets. The daily return time series are decomposed into 2^n (n=1, ..., 6) frequency bands through wavelet multiresolution analysis, and the correlation between the two markets is studied with the decomposed data. It is shown that the high-frequency detail components carry much more energy than the low-frequency smooth components. The relation between copper futures daily returns on the LME and those on the SHFE differs across time scales: fluctuations of copper futures daily returns on the LME have a large effect on the SHFE at the 32-day scale, but a small effect at high-frequency scales. There is also evidence of strong effects between the LME and the SHFE for monthly responses of the copper futures, but not for daily responses.
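
    Per-scale correlation can be sketched by reconstructing one detail band at a time and correlating the aligned components; the wavelet, level count, and the synthetic lead-lag returns below are assumptions.

    ```python
    import numpy as np
    import pywt

    def scale_components(returns, wavelet='db4', levels=6):
        """Split a return series into per-scale detail components (~2-day up to
        ~64-day bands) by zeroing all other coefficients before reconstruction."""
        coeffs = pywt.wavedec(returns, wavelet, level=levels)
        comps = []
        for j in range(1, levels + 1):
            kept = [c if i == j else np.zeros_like(c) for i, c in enumerate(coeffs)]
            comps.append(pywt.waverec(kept, wavelet)[:len(returns)])
        return comps                        # comps[0] is the coarsest detail band

    # Hypothetical daily copper-futures returns with a one-day lead-lag structure.
    gen = np.random.default_rng(5)
    common = gen.standard_normal(1024)
    lme = common + 0.5 * gen.standard_normal(1024)
    shfe = np.roll(common, 1) + 0.8 * gen.standard_normal(1024)
    for c_l, c_s in zip(scale_components(lme), scale_components(shfe)):
        print(round(float(np.corrcoef(c_l, c_s)[0, 1]), 3))   # correlation per scale
    ```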

  6. Multiresolution forecasting for futures trading using wavelet decompositions.

    Science.gov (United States)

    Zhang, B L; Coggins, R; Jabri, M A; Dersch, D; Flower, B

    2001-01-01

    We investigate the effectiveness of a financial time-series forecasting strategy that exploits the multiresolution property of the wavelet transform. A financial series is decomposed into an overcomplete, shift-invariant, scale-related representation. In transform space, each individual wavelet series is modeled by a separate multilayer perceptron (MLP). We apply the Bayesian method of automatic relevance determination to choose short past windows (short-term history) for the inputs to the MLPs at lower scales and long past windows (long-term history) at higher scales. To form the overall forecast, the individual forecasts are then recombined by the linear reconstruction property of the inverse transform with the chosen autocorrelation shell representation, or by another perceptron which learns the weight of each scale in the prediction of the original time series. The forecast results are then passed to a money management system to generate trades.

  7. Multiresolution Motion Estimation for Low-Rate Video Frame Interpolation

    Directory of Open Access Journals (Sweden)

    Hezerul Abdul Karim

    2004-09-01

    Full Text Available Interpolation of video frames with the purpose of increasing the frame rate requires the estimation of motion in the image so as to interpolate pixels along the path of the objects. In this paper, the specific challenges of low-rate video frame interpolation are illustrated by choosing one well-performing algorithm for high-frame-rate interpolation (Castagno, 1996) and applying it to low frame rates. The degradation of performance is illustrated by comparing the original algorithm, the algorithm adapted to low frame rates, and simple averaging. To overcome the particular challenges of low-frame-rate interpolation, two algorithms based on multiresolution motion estimation are developed, compared on an objective and subjective basis, and shown to provide an elegant solution to the specific challenges of low-frame-rate video interpolation.

  8. A multi-resolution approach for an automated fusion of different low-cost 3D sensors.

    Science.gov (United States)

    Dupuis, Jan; Paulus, Stefan; Behmann, Jan; Plümer, Lutz; Kuhlmann, Heiner

    2014-04-24

    The 3D acquisition of object structures has become a common technique in many fields of work, e.g., industrial quality management, cultural heritage, or crime scene documentation. The requirements on the measuring devices are versatile, because spacious scenes have to be imaged along with a high level of detail for selected objects. Thus, the measuring systems used are expensive and require an experienced operator. With the rise of low-cost 3D imaging systems, their integration into the digital documentation process becomes possible. However, common low-cost sensors have the limitation of a trade-off between range and accuracy, providing either a low resolution of single objects or a limited imaging field. Therefore, the use of multiple sensors is desirable. We show the combined use of two low-cost sensors, the Microsoft Kinect and the David laserscanning system, to achieve low-resolution scans of the whole scene and a high level of detail for selected objects, respectively. Afterwards, the high-resolution David objects are automatically assigned to their corresponding Kinect objects by the use of surface feature histograms and SVM classification. The corresponding objects are fitted using an ICP implementation to produce a multi-resolution map. The applicability is shown for a fictional crime scene and the reconstruction of a ballistic trajectory.

  9. Automatic segmentation of fluorescence lifetime microscopy images of cells using multiresolution community detection--a first study.

    Science.gov (United States)

    Hu, D; Sarder, P; Ronhovde, P; Orthaus, S; Achilefu, S; Nussinov, Z

    2014-01-01

    Inspired by a multiresolution community detection based network segmentation method, we suggest an automatic method for segmenting fluorescence lifetime (FLT) imaging microscopy (FLIM) images of cells in a first pilot investigation on two selected images. The image processing problem is framed as identifying segments with respective average FLTs against the background in FLIM images. The proposed method segments a FLIM image for a given resolution of the network defined using image pixels as the nodes and similarity between the FLTs of the pixels as the edges. In the resulting segmentation, low network resolution leads to larger segments, and high network resolution leads to smaller segments. Furthermore, using the proposed method, the mean-square error in estimating the FLT segments in a FLIM image was found to decrease consistently with increasing resolution of the corresponding network. The multiresolution community detection method appeared to perform better than a popular spectral clustering-based method at FLIM image segmentation. At high resolution, the spectral segmentation method introduced noisy segments in its output, and it was unable to achieve a consistent decrease in mean-square error with increasing resolution. © 2013 The Authors. Journal of Microscopy © 2013 Royal Microscopical Society.
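
    The construction the abstract describes, with pixels as nodes, FLT similarity as edge weights, and a resolution parameter setting segment granularity, can be mocked up with networkx. The sketch below assumes networkx >= 2.8 for louvain_communities; the Gaussian similarity, sigma and gamma values are illustrative, and this is not the authors' community detection code.

        import numpy as np
        import networkx as nx

        def segment_flim(flt, gamma=1.0, sigma=0.1):
            # Segment a 2-D fluorescence-lifetime image; higher `gamma`
            # (network resolution) yields smaller segments.
            h, w = flt.shape
            G = nx.grid_2d_graph(h, w)                    # 4-connected pixel lattice
            for u, v in G.edges():
                diff = flt[u] - flt[v]
                G[u][v]["weight"] = np.exp(-diff ** 2 / (2 * sigma ** 2))
            comms = nx.community.louvain_communities(G, weight="weight",
                                                     resolution=gamma)
            labels = np.zeros((h, w), dtype=int)
            for k, comm in enumerate(comms):
                for i, j in comm:
                    labels[i, j] = k
            return labels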

  10. A general CFD framework for fault-resilient simulations based on multi-resolution information fusion

    Science.gov (United States)

    Lee, Seungjoon; Kevrekidis, Ioannis G.; Karniadakis, George Em

    2017-10-01

    We develop a general CFD framework for multi-resolution simulations targeting multiscale problems as well as resilience in exascale simulations, where faulty processors may lead to gappy, in space-time, simulated fields. We combine approximation theory and domain decomposition together with statistical learning techniques, e.g., coKriging, to estimate boundary conditions and minimize communications by performing independent parallel runs. To demonstrate this new simulation approach, we consider two benchmark problems. First, we solve the heat equation (a) on a small number of spatial "patches" distributed across the domain, simulated by finite differences at fine resolution, and (b) on the entire domain simulated at very low resolution, thus fusing multi-resolution models to obtain the final answer. Second, we simulate the flow in a lid-driven cavity in an analogous fashion, by fusing finite difference solutions obtained at fine and low resolution assuming gappy data sets. We investigate the influence of various parameters of this framework, including the correlation kernel, the size of a buffer employed in estimating boundary conditions, the coarseness of the resolution of auxiliary data, and the communication frequency across different patches in fusing the information at different resolution levels. In addition to its robustness and resilience, the new framework can be employed to generalize previous multiscale approaches involving heterogeneous discretizations or even fundamentally different flow descriptions, e.g., in continuum-atomistic simulations.
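
    The statistical step at the heart of the framework, inferring patch boundary conditions from a gappy coarse field, can be illustrated with ordinary Gaussian-process regression, a single-fidelity stand-in for the coKriging the paper employs. Everything below (the RBF kernel, the synthetic field, the patch end points) is an illustrative assumption.

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF

        # Coarse global solution surviving on a gappy set of locations
        x_obs = np.random.rand(40, 1)
        u_obs = np.sin(2 * np.pi * x_obs).ravel()        # stand-in coarse field

        gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.2)).fit(x_obs, u_obs)

        # Estimate Dirichlet boundary values for an independent fine-resolution patch
        x_bc = np.array([[0.30], [0.45]])                # patch end points (assumed)
        u_bc, std = gp.predict(x_bc, return_std=True)    # fused estimate + uncertainty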

  11. Multiresolution wavelet analysis of heartbeat intervals discriminates healthy patients from those with cardiac pathology

    OpenAIRE

    Thurner, Stefan; Feurstein, Markus C.; Teich, Malvin C.

    1997-01-01

    We applied multiresolution wavelet analysis to the sequence of times between human heartbeats (R-R intervals) and have found a scale window, between 16 and 32 heartbeats, over which the widths of the R-R wavelet coefficients fall into disjoint sets for normal and heart-failure patients. This has enabled us to correctly classify every patient in a standard data set as either belonging to the heart-failure or normal group with 100% accuracy, thereby providing a clinically significant measure of...

  12. A DTM MULTI-RESOLUTION COMPRESSED MODEL FOR EFFICIENT DATA STORAGE AND NETWORK TRANSFER

    Directory of Open Access Journals (Sweden)

    L. Biagi

    2012-08-01

    Full Text Available In recent years the technological evolution of terrestrial, aerial and satellite surveying has considerably increased the measurement accuracy and, consequently, the quality of the derived information. At the same time, the smaller and smaller limitations on data storage devices, in terms of capacity and cost, have allowed the storage and elaboration of a larger number of instrumental observations. A significant example is the terrain height surveyed by LIDAR (LIght Detection And Ranging) technology, where several height measurements for each square meter of land can be obtained. The availability of such a large quantity of observations is an essential requisite for an in-depth knowledge of the phenomena under study. But, at the same time, the most common Geographical Information Systems (GISs) show latency in visualizing and analyzing this kind of data. This problem becomes more evident in the case of Internet GIS. These systems are based on a very frequent flow of geographical information over the internet and, for this reason, the bandwidth of the network and the size of the data to be transmitted are two fundamental factors to be considered in order to guarantee the actual usability of these technologies. In this paper we focus our attention on digital terrain models (DTMs) and we briefly analyse the problem of defining the minimal information necessary to store and transmit DTMs over a network, with a fixed tolerance, starting from a huge number of observations. Then we propose an innovative compression approach for sparse observations by means of multi-resolution spline function approximation. The method is able to provide metrical accuracy at least comparable to that provided by the most common deterministic interpolation algorithms (inverse distance weighting, local polynomial, radial basis functions). At the same time it dramatically reduces the amount of information required for storing or transmitting and rebuilding a DTM.
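
    The storage saving comes from replacing raw observations with spline coefficients that reproduce the surface within a tolerance. The sketch below uses a single-level smoothing spline from scipy as a stand-in for the paper's multi-resolution spline approximation; the synthetic terrain and the smoothing factor are assumptions.

        import numpy as np
        from scipy.interpolate import SmoothBivariateSpline

        rng = np.random.default_rng(0)
        x, y = rng.uniform(0, 1, 5000), rng.uniform(0, 1, 5000)   # scattered LIDAR hits
        z = np.sin(3 * x) * np.cos(2 * y) + 0.01 * rng.standard_normal(5000)

        # Store only the spline coefficients instead of the raw observations.
        spl = SmoothBivariateSpline(x, y, z, kx=3, ky=3, s=1.0)
        print("observations:", z.size, "-> stored coefficients:", spl.get_coeffs().size)

        zr = spl.ev(x, y)                                 # rebuild heights on demand
        print("max abs rebuild error:", np.abs(zr - z).max())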

  13. Exploring a Multiresolution Modeling Approach within the Shallow-Water Equations

    Energy Technology Data Exchange (ETDEWEB)

    Ringler, Todd D.; Jacobsen, Doug; Gunzburger, Max; Ju, Lili; Duda, Michael; Skamarock, William

    2011-11-01

    The ability to solve the global shallow-water equations with a conforming, variable-resolution mesh is evaluated using standard shallow-water test cases. While the long-term motivation for this study is the creation of a global climate modeling framework capable of resolving different spatial and temporal scales in different regions, the process begins with an analysis of the shallow-water system in order to better understand the strengths and weaknesses of the approach developed herein. The multiresolution meshes are spherical centroidal Voronoi tessellations where a single, user-supplied density function determines the region(s) of fine- and coarse-mesh resolution. The shallow-water system is explored with a suite of meshes ranging from quasi-uniform resolution meshes, where the grid spacing is globally uniform, to highly variable resolution meshes, where the grid spacing varies by a factor of 16 between the fine and coarse regions. The potential vorticity is found to be conserved to within machine precision and the total available energy is conserved to within a time-truncation error. This result holds for the full suite of meshes, from quasi-uniform to highly variable resolution. Based on shallow-water test cases 2 and 5, the primary conclusion of this study is that solution error is controlled primarily by the grid resolution in the coarsest part of the model domain. This conclusion is consistent with results obtained by others. When these variable-resolution meshes are used for the simulation of an unstable zonal jet, the core features of the growing instability are found to be largely unchanged as the variation in the mesh resolution increases. The main differences between the simulations occur outside the region of mesh refinement, and these differences are attributed to the additional truncation error that accompanies increases in grid spacing. Overall, the results demonstrate support for this approach as a path toward multi-resolution global climate modeling.
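
    For reference, in a spherical centroidal Voronoi tessellation the local grid spacing is tied to the user-supplied density function; a commonly quoted relation for such tessellations (an assumption here, not stated in the abstract) is

        \frac{h(\mathbf{x}_1)}{h(\mathbf{x}_2)} \approx \left( \frac{\rho(\mathbf{x}_2)}{\rho(\mathbf{x}_1)} \right)^{1/4},

    so a density contrast of 16^4 between coarse and fine regions would yield the factor-16 variation in grid spacing mentioned above.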

  14. Sparse PDF maps for non-linear multi-resolution image operations

    KAUST Repository

    Hadwiger, Markus

    2012-11-01

    We introduce a new type of multi-resolution image pyramid for high-resolution images called sparse pdf maps (sPDF-maps). Each pyramid level consists of a sparse encoding of continuous probability density functions (pdfs) of pixel neighborhoods in the original image. The encoded pdfs enable the accurate computation of non-linear image operations directly in any pyramid level with proper pre-filtering for anti-aliasing, without accessing higher or lower resolutions. The sparsity of sPDF-maps makes them feasible for gigapixel images, while enabling direct evaluation of a variety of non-linear operators from the same representation. We illustrate this versatility for antialiased color mapping, O(n) local Laplacian filters, smoothed local histogram filters (e.g., median or mode filters), and bilateral filters. © 2012 ACM.

  15. An improved cone-beam filtered backprojection reconstruction algorithm based on x-ray angular correction and multiresolution analysis

    International Nuclear Information System (INIS)

    Sun, Y.; Hou, Y.; Yan, Y.

    2004-01-01

    With the extensive application of industrial computed tomography in the field of non-destructive testing, how to improve the quality of the reconstructed image is receiving more and more attention. It is well known that in the existing cone-beam filtered backprojection reconstruction algorithms the cone angle is restricted to a narrow range. The reason for this limitation is the incompleteness of the projection data when the cone angle increases, which in turn limits the size of the tested workpiece. Considering the characteristics of the X-ray cone angle, an improved cone-beam filtered backprojection reconstruction algorithm that takes angular correction into account is proposed in this paper. The aim of our algorithm is to correct the cone-angle effect resulting from the incompleteness of projection data in the conventional algorithm. The basis of the correction is the angular relationship among the X-ray source, the tested workpiece and the detector. Thus the cone angle is not strictly limited and the algorithm may be used to inspect larger workpieces. Furthermore, an adaptive wavelet filter is used for multiresolution analysis, which can adaptively modify the number of wavelet decomposition levels according to the resolution demanded of the locally reconstructed area. Therefore the computation and the reconstruction time can be reduced, and the quality of the reconstructed image can also be improved. (author)

  16. Applying multi-resolution numerical methods to geodynamics

    Science.gov (United States)

    Davies, David Rhodri

    structured grid solution strategies, the unstructured techniques utilized in 2-D would throw away the regular grid and, with it, the major benefits of the current solution algorithms. Alternative avenues towards multi-resolution must therefore be sought. A non-uniform structured method that produces similar advantages to unstructured grids is introduced here, in the context of the pre-existing 3-D spherical mantle dynamics code, TERRA. The method, based upon the multigrid refinement techniques employed in the field of computational engineering, is used to refine and solve on a radially non-uniform grid. It maintains the key benefits of TERRA's current configuration, whilst also overcoming many of its limitations. Highly efficient solutions to non-uniform problems are obtained. The scheme is highly economical in its use of RAM, meaning that one can attempt calculations that would otherwise be impractical. In addition, the solution algorithm reduces the CPU time needed to solve a given problem. Validation tests illustrate that the approach is accurate and robust. Furthermore, by being conceptually simple and straightforward to implement, the method negates the need to reformulate large sections of code. The technique is applied to highly advanced 3-D spherical mantle convection models. Due to its economical use of RAM, the modified code allows one to efficiently resolve thermal boundary layers at the dynamical regime of Earth's mantle. The simulations presented are therefore of superior vigor to the highest attained to date in 3-D spherical geometry, achieving Rayleigh numbers of order 10^9. Upwelling structures are examined, focussing upon the nature of deep mantle plumes. Previous studies have shown long-lived, anchored, coherent upwelling plumes to be a feature of low to moderate vigor convection. Since more vigorous convection traditionally shows greater time-dependence, the fixity of upwellings would not logically be expected for non-layered convection at higher

  17. Single-resolution and multiresolution extended-Kalman-filter-based reconstruction approaches to optical refraction tomography.

    Science.gov (United States)

    Naik, Naren; Vasu, R M; Ananthasayanam, M R

    2010-02-20

    The problem of reconstruction of a refractive-index distribution (RID) in optical refraction tomography (ORT) with optical path-length difference (OPD) data is solved using two adaptive-estimation-based extended-Kalman-filter (EKF) approaches. First, a basic single-resolution EKF (SR-EKF) is applied to a state variable model describing the tomographic process, to estimate the RID of an optically transparent refracting object from noisy OPD data. The initialization of the biases and covariances corresponding to the state and measurement noise is discussed. The state and measurement noise biases and covariances are adaptively estimated. An EKF is then applied to the wavelet-transformed state variable model to yield a wavelet-based multiresolution EKF (MR-EKF) solution approach. To numerically validate the adaptive EKF approaches, we evaluate them with benchmark studies of standard stationary cases, where comparative results with commonly used efficient deterministic approaches can be obtained. Detailed reconstruction studies for the SR-EKF and two versions of the MR-EKF (with Haar and Daubechies-4 wavelets) compare well with those obtained from a typically used variant of the (deterministic) algebraic reconstruction technique, the average correction per projection method, thus establishing the capability of the EKF for ORT. To the best of our knowledge, the present work contains unique reconstruction studies encompassing the use of EKF for ORT in single-resolution and multiresolution formulations, as well as the adaptive estimation of the EKF's noise covariances.
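
    For readers unfamiliar with the filter itself, one predict/update cycle of a textbook extended Kalman filter is sketched below; the paper's contribution layers adaptive noise-statistics estimation and a wavelet-transformed state model on top of this basic cycle, neither of which is shown here.

        import numpy as np

        def ekf_step(x, P, z, f, F, h, H, Q, R):
            # f, h: nonlinear process/measurement maps; F, H: their Jacobians.
            x_pred = f(x)                                  # predict state
            P_pred = F(x) @ P @ F(x).T + Q                 # predict covariance
            y = z - h(x_pred)                              # innovation
            S = H(x_pred) @ P_pred @ H(x_pred).T + R       # innovation covariance
            K = P_pred @ H(x_pred).T @ np.linalg.inv(S)    # Kalman gain
            x_new = x_pred + K @ y
            P_new = (np.eye(len(x)) - K @ H(x_pred)) @ P_pred
            return x_new, P_new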

  18. Plastic Limit Loads for Slanted Circumferential Through-Wall Cracked Pipes Using 3D Finite-Element Limit Analyses

    International Nuclear Information System (INIS)

    Jang, Hyun Min; Cho, Doo Ho; Kim, Young Jin; Huh, Nam Su; Shim, Do Jun; Choi, Young Hwan; Park, Jung Soon

    2011-01-01

    On the basis of detailed 3D finite-element (FE) limit analyses, plastic limit load solutions for pipes with slanted circumferential through-wall cracks (TWCs) subjected to axial tension, global bending, and internal pressure are reported. The FE model and analysis procedure employed in the present numerical study were validated by comparing the present FE results with existing solutions for the plastic limit loads of pipes with idealized TWCs. To quantify the effect of a slanted crack on the plastic limit load, slant correction factors for calculating the plastic limit loads of pipes with slanted TWCs from those of pipes with idealized TWCs are newly proposed from extensive 3D FE calculations. These slant correction factors are presented in tabulated form for practical ranges of geometry and for each set of loading conditions.

  19. Amdel on-line analyser at Rooiberg Tin Limited

    International Nuclear Information System (INIS)

    Owen, T.V.

    1987-01-01

    An Amdel on-line analysis system was installed on the 'A' mine tin flotation plant at Rooiberg in April 1984. The installation was motivated by the large variations in the feed grade to the plant and the resulting need for rapid operational adjustments to control concentrate grades, thereby maximising the financial returns. An on-line analyser system presented itself as a suitable alternative to the existing control method based on smaller laboratory X-ray fluorescence analysers. In the system as installed at Rooiberg, two probes were fitted in each analysis zone, viz. a density probe using high-energy gamma radiation from a cesium-137 source and a specific-element absorption probe using low-energy gamma radiation from an americium-241 source. The signals received from the probes are fed to a line-receiver unit in the control room, where a microcomputer processes them and prints out the information as required. Several advantages of this type of installation were gained at Rooiberg Tin Limited.

  20. Knowledge Guided Disambiguation for Large-Scale Scene Classification With Multi-Resolution CNNs

    Science.gov (United States)

    Wang, Limin; Guo, Sheng; Huang, Weilin; Xiong, Yuanjun; Qiao, Yu

    2017-04-01

    Convolutional Neural Networks (CNNs) have made remarkable progress on scene recognition, partially due to recent large-scale scene datasets such as Places and Places2. Scene categories are often defined by multi-level information, including local objects, global layout, and background environment, leading to large intra-class variations. In addition, with the increasing number of scene categories, label ambiguity has become another crucial issue in large-scale classification. This paper focuses on large-scale scene recognition and makes two major contributions to tackle these issues. First, we propose a multi-resolution CNN architecture that captures visual content and structure at multiple levels. The multi-resolution CNNs are composed of coarse-resolution CNNs and fine-resolution CNNs, which are complementary to each other. Second, we design two knowledge-guided disambiguation techniques to deal with the problem of label ambiguity: (i) we exploit the knowledge from the confusion matrix computed on validation data to merge ambiguous classes into a super category, and (ii) we utilize the knowledge of extra networks to produce a soft label for each image. The super categories or soft labels are then employed to guide CNN training on Places2. We conduct extensive experiments on three large-scale image datasets (ImageNet, Places, and Places2), demonstrating the effectiveness of our approach. Furthermore, our method took part in two major scene recognition challenges, achieving second place in the Places2 challenge at ILSVRC 2015 and first place in the LSUN challenge at CVPR 2016. Finally, we directly test the learned representations on other scene benchmarks, and obtain new state-of-the-art results on MIT Indoor67 (86.7%) and SUN397 (72.0%). We release the code and models at https://github.com/wanglimin/MRCNN-Scene-Recognition.
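
    The first disambiguation technique, merging ambiguous classes using the validation confusion matrix, amounts to clustering classes by how often they are confused with each other. A hedged sketch using hierarchical clustering is given below; the symmetrisation, linkage method and cluster count are illustrative choices, not the authors' exact procedure.

        import numpy as np
        from scipy.cluster.hierarchy import linkage, fcluster

        def merge_ambiguous(conf, n_super):
            # Group classes into `n_super` super categories from a confusion matrix.
            sim = (conf + conf.T) / 2.0                    # symmetrised confusion
            np.fill_diagonal(sim, 0.0)
            dist = sim.max() - sim                         # more confusion -> closer
            cond = dist[np.triu_indices_from(dist, 1)]     # condensed form
            Z = linkage(cond, method="average")
            return fcluster(Z, t=n_super, criterion="maxclust")

        conf = np.random.rand(10, 10)                      # stand-in validation matrix
        print(merge_ambiguous(conf, 3))                    # super-category label per class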

  1. Global Multi-Resolution Topography (GMRT) Synthesis - Recent Updates and Developments

    Science.gov (United States)

    Ferrini, V. L.; Morton, J. J.; Celnick, M.; McLain, K.; Nitsche, F. O.; Carbotte, S. M.; O'hara, S. H.

    2017-12-01

    The Global Multi-Resolution Topography (GMRT, http://gmrt.marine-geo.org) synthesis is a multi-resolution compilation of elevation data that is maintained in Mercator, South Polar, and North Polar projections. GMRT consists of four independently curated elevation components: (1) quality-controlled multibeam data (~100 m res.), (2) contributed high-resolution gridded bathymetric data (0.5-200 m res.), (3) ocean basemap data (~500 m res.), and (4) variable-resolution land elevation data (to 10-30 m res. in places). Each component is managed and updated as new content becomes available, with two scheduled releases each year. The ocean basemap content for GMRT includes the International Bathymetric Chart of the Arctic Ocean (IBCAO), the International Bathymetric Chart of the Southern Ocean (IBCSO), and GEBCO 2014. Most curatorial effort for GMRT is focused on the swath bathymetry component, with an emphasis on data from the US Academic Research Fleet. As of July 2017, GMRT includes data processed and curated by the GMRT Team from 974 research cruises, covering over 29 million square kilometers (~8%) of the seafloor at 100 m resolution. The curated swath bathymetry data from GMRT are routinely contributed to international data synthesis efforts including GEBCO and IBCSO. Additional curatorial effort is associated with gridded data contributions from the international community and ensures that these data are well blended in the synthesis. Significant new additions to the gridded data component this year include the recently released data from the search for MH370 (Geoscience Australia) as well as a large high-resolution grid from the Gulf of Mexico derived from 3D seismic data (US Bureau of Ocean Energy Management). Recent developments in functionality include the deployment of a new Polar GMRT MapTool, which enables users to export custom grids and map images in polar projection for their selected area of interest at the resolution of their choosing. Available for both

  2. Telescopic multi-resolution augmented reality

    Science.gov (United States)

    Jenkins, Jeffrey; Frenchi, Christopher; Szu, Harold

    2014-05-01

    To ensure a self-consistent scaling approximation, the underlying microscopic fluctuation components can naturally influence macroscopic means, which may give rise to emergent observable phenomena. In this paper, we describe a consistent macroscopic (cm-scale), mesoscopic (micron-scale), and microscopic (nano-scale) approach to introduce Telescopic Multi-Resolution (TMR) into current Augmented Reality (AR) visualization technology. We propose to couple TMR-AR by introducing an energy-matter interaction engine framework that is based on known principles of physics, biology, and chemistry. An immediate payoff of TMR-AR is a self-consistent approximation of the interaction between microscopic observables and their direct effect on the macroscopic system that is driven by real-world measurements. Such an interdisciplinary approach enables us not only to achieve multi-scale, telescopic visualization of real and virtual information, but also to conduct thought experiments through AR. As a result of this consistency, the framework allows us to explore a large-dimensionality parameter space of measured and unmeasured regions. Towards this direction, we explore how to build learnable libraries of biological, physical, and chemical mechanisms. Fusing analytical sensors with TMR-AR libraries provides a robust framework to optimize testing and evaluation through data-driven or virtual synthetic simulations. Visualizing mechanisms of interactions requires identification of observable image features that can indicate the presence of information in multiple spatial and temporal scales of analog data. The AR methodology was originally developed to enhance pilot training as well as 'make believe' entertainment industries in a user-friendly digital environment. We believe TMR-AR can someday help us conduct thought experiments scientifically, to be pedagogically visualized in zoom-in-and-out, consistent, multi-scale approximations.

  3. Safety assessment of historical masonry churches based on pre-assigned kinematic limit analysis, FE limit and pushover analyses

    International Nuclear Information System (INIS)

    Milani, Gabriele; Valente, Marco

    2014-01-01

    This study presents some results of a comprehensive numerical analysis of three masonry churches damaged by the recent Emilia-Romagna (Italy) seismic events that occurred in May 2012. The numerical study comprises: (a) pushover analyses conducted with a commercial code, standard nonlinear material models and two different horizontal load distributions; (b) FE kinematic limit analyses performed using non-commercial software based on a preliminary homogenization of the masonry materials and a subsequent limit analysis with triangular elements and interfaces; (c) kinematic limit analyses conducted in agreement with the Italian code and based on the a-priori assumption of preassigned failure mechanisms, where the masonry material is considered unable to withstand tensile stresses. All models are capable of giving information on the active failure mechanism and the base shear at failure, which, if properly made non-dimensional with the weight of the structure, also gives an indication of the horizontal peak ground acceleration causing the collapse of the church. The results obtained from all three models indicate that the collapse is usually due to the activation of partial mechanisms (apse, façade, lateral walls, etc.). Moreover, the horizontal peak ground acceleration associated with the collapse is considerably lower than that required in that seismic zone by the Italian code for ordinary buildings. These outcomes highlight that structural upgrading interventions would be extremely beneficial for a considerable reduction of the seismic vulnerability of this kind of historical structure.

  4. Safety assessment of historical masonry churches based on pre-assigned kinematic limit analysis, FE limit and pushover analyses

    Energy Technology Data Exchange (ETDEWEB)

    Milani, Gabriele, E-mail: milani@stru.polimi.it; Valente, Marco, E-mail: milani@stru.polimi.it [Department of Architecture, Built Environment and Construction Engineering (ABC), Politecnico di Milano, Piazza Leonardo da Vinci 32, 20133 Milan (Italy)

    2014-10-06

    This study presents some results of a comprehensive numerical analysis of three masonry churches damaged by the recent Emilia-Romagna (Italy) seismic events that occurred in May 2012. The numerical study comprises: (a) pushover analyses conducted with a commercial code, standard nonlinear material models and two different horizontal load distributions; (b) FE kinematic limit analyses performed using non-commercial software based on a preliminary homogenization of the masonry materials and a subsequent limit analysis with triangular elements and interfaces; (c) kinematic limit analyses conducted in agreement with the Italian code and based on the a-priori assumption of preassigned failure mechanisms, where the masonry material is considered unable to withstand tensile stresses. All models are capable of giving information on the active failure mechanism and the base shear at failure, which, if properly made non-dimensional with the weight of the structure, also gives an indication of the horizontal peak ground acceleration causing the collapse of the church. The results obtained from all three models indicate that the collapse is usually due to the activation of partial mechanisms (apse, façade, lateral walls, etc.). Moreover, the horizontal peak ground acceleration associated with the collapse is considerably lower than that required in that seismic zone by the Italian code for ordinary buildings. These outcomes highlight that structural upgrading interventions would be extremely beneficial for a considerable reduction of the seismic vulnerability of this kind of historical structure.

  5. Multi-resolution Shape Analysis via Non-Euclidean Wavelets: Applications to Mesh Segmentation and Surface Alignment Problems.

    Science.gov (United States)

    Kim, Won Hwa; Chung, Moo K; Singh, Vikas

    2013-01-01

    The analysis of 3-D shape meshes is a fundamental problem in computer vision, graphics, and medical imaging. Frequently, the needs of the application require that our analysis take a multi-resolution view of the shape's local and global topology, and that the solution is consistent across multiple scales. Unfortunately, the preferred mathematical construct which offers this behavior in classical image/signal processing, Wavelets, is no longer applicable in this general setting (data with non-uniform topology). In particular, the traditional definition does not allow writing out an expansion for graphs that do not correspond to the uniformly sampled lattice (e.g., images). In this paper, we adapt recent results in harmonic analysis, to derive Non-Euclidean Wavelets based algorithms for a range of shape analysis problems in vision and medical imaging. We show how descriptors derived from the dual domain representation offer native multi-resolution behavior for characterizing local/global topology around vertices. With only minor modifications, the framework yields a method for extracting interest/key points from shapes, a surprisingly simple algorithm for 3-D shape segmentation (competitive with state of the art), and a method for surface alignment (without landmarks). We give an extensive set of comparison results on a large shape segmentation benchmark and derive a uniqueness theorem for the surface alignment problem.

  6. Multiresolution approach to processing images for different applications interaction of lower processing with higher vision

    CERN Document Server

    Vujović, Igor

    2015-01-01

    This book presents theoretical and practical aspects of the interaction between low and high level image processing. Multiresolution analysis owes its popularity mostly to wavelets and is widely used in a variety of applications. Low level image processing is important for the performance of many high level applications. The book includes examples from different research fields, i.e. video surveillance; biomedical applications (EMG and X-ray); improved communication, namely teleoperation, telemedicine, animation, augmented/virtual reality and robot vision; monitoring of the condition of ship systems and image quality control.

  7. A Biologically Motivated Multiresolution Approach to Contour Detection

    Directory of Open Access Journals (Sweden)

    Alessandro Neri

    2007-01-01

    Full Text Available Standard edge detectors react to all local luminance changes, irrespective of whether they are due to the contours of the objects represented in a scene or due to natural textures like grass, foliage, water, and so forth. Moreover, edges due to texture are often stronger than edges due to object contours. This implies that further processing is needed to discriminate object contours from texture edges. In this paper, we propose a biologically motivated multiresolution contour detection method using Bayesian denoising and a surround inhibition technique. Specifically, the proposed approach deploys computation of the gradient at different resolutions, followed by Bayesian denoising of the edge image. Then, a biologically motivated surround inhibition step is applied in order to suppress edges that are due to texture. We propose an improvement of the surround suppression used in previous works. Finally, a contour-oriented binarization algorithm is used, relying on the observation that object contours lead to long connected components rather than to short rods obtained from textures. Experimental results show that our contour detection method outperforms standard edge detectors as well as other methods that deploy inhibition.

  8. Characterizing and understanding the climatic determinism of high- to low-frequency variations in precipitation in northwestern France using a coupled wavelet multiresolution/statistical downscaling approach

    Science.gov (United States)

    Massei, Nicolas; Dieppois, Bastien; Hannah, David; Lavers, David; Fossa, Manuel; Laignel, Benoit; Debret, Maxime

    2017-04-01

    Geophysical signals oscillate over several time-scales that explain different amounts of their overall variability and may be related to different physical processes. Characterizing and understanding such variability in hydrological variations and investigating its determinism is one important issue in the context of climate change, as these variabilities can occasionally be superimposed on a long-term trend possibly due to climate change. It is also important to refine our understanding of time-scale-dependent linkages between large-scale climatic variations and hydrological responses on the regional or local scale. Here we investigate such links by conducting a wavelet multiresolution statistical downscaling approach of precipitation in northwestern France (Seine river catchment) over 1950-2016, using sea level pressure (SLP) and sea surface temperature (SST) as indicators of atmospheric and oceanic circulations, respectively. Previous results demonstrated that including multiresolution decomposition in a statistical downscaling model (within a so-called multiresolution ESD model) using SLP as the large-scale predictor greatly improved simulation of the low-frequency, i.e., interannual to interdecadal, fluctuations observed in precipitation. Building on these results, continuous wavelet transform of precipitation simulated using multiresolution ESD confirmed the good performance of the model in explaining variability at all time-scales. A sensitivity analysis of the model to the choice of the scale and wavelet function used was also conducted. It appeared that whatever the wavelet used, the model performed similarly. The spatial patterns of SLP found as the best predictors for all time-scales, which resulted from the wavelet decomposition, revealed different structures according to time-scale, pointing to possibly different determinisms. More particularly, some low-frequency components (~3.2-yr and ~19.3-yr) showed a much more widespread spatial extension across the Atlantic

  9. a Web-Based Interactive Tool for Multi-Resolution 3d Models of a Maya Archaeological Site

    Science.gov (United States)

    Agugiaro, G.; Remondino, F.; Girardi, G.; von Schwerin, J.; Richards-Rissetto, H.; De Amicis, R.

    2011-09-01

    Continuous technological advances in surveying, computing and digital-content delivery are strongly contributing to a change in the way Cultural Heritage is "perceived": new tools and methodologies for documentation, reconstruction and research are being created to assist not only scholars, but also to reach more potential users (e.g. students and tourists) willing to access more detailed information about art history and archaeology. 3D computer-simulated models, sometimes set in virtual landscapes, offer for example the chance to explore possible hypothetical reconstructions, while on-line GIS resources can support interactive analyses of relationships and change over space and time. While for some research purposes a traditional 2D approach may suffice, this is not the case for more complex analyses concerning spatial and temporal features of architecture, such as the relationship of architecture and landscape, visibility studies, etc. The project therefore aims at creating a tool, called "QueryArch3D", which enables the web-based visualisation and querying of an interactive, multi-resolution 3D model in the framework of Cultural Heritage. More specifically, a complete Maya archaeological site, located in Copan (Honduras), has been chosen as a case study to test and demonstrate the platform's capabilities. Much of the site has been surveyed and modelled at different levels of detail (LoD) and the geometric model has been semantically segmented and integrated with attribute data gathered from several external data sources. The paper describes the characteristics of the research work, along with its implementation issues and the initial results of the developed prototype.

  10. On analysis of electroencephalogram by multiresolution-based energetic approach

    Science.gov (United States)

    Sevindir, Hulya Kodal; Yazici, Cuneyt; Siddiqi, A. H.; Aslan, Zafer

    2013-10-01

    Epilepsy is a common brain disorder in which normal neuronal activity is affected. Electroencephalography (EEG) is the recording of electrical activity along the scalp produced by the firing of neurons within the brain. The main application of EEG is in the case of epilepsy: on a standard EEG, certain abnormalities indicate epileptic activity. EEG signals, like many biomedical signals, are highly non-stationary by nature. For the investigation of biomedical signals, in particular EEG signals, wavelet analysis has found a prominent position owing to its ability to analyze such signals. The wavelet transform is capable of separating the signal energy among different frequency scales, and a good compromise between temporal and frequency resolution is obtained. The present study is an attempt at a better understanding of the mechanism causing the epileptic disorder and at accurate prediction of the occurrence of seizures. In the present paper, following Magosso's work [12], we identify typical patterns of energy redistribution before and during the seizure using multiresolution wavelet analysis on Kocaeli University Medical School's data.
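
    The energy-redistribution analysis the study builds on reduces, in its simplest form, to measuring how signal energy splits across wavelet sub-bands. A minimal sketch with PyWavelets follows; the wavelet family and decomposition depth are assumptions, not the study's settings.

        import numpy as np
        import pywt

        def scale_energies(eeg, wavelet="db4", level=5):
            # Relative energy per wavelet sub-band (A5, D5, ..., D1) of an EEG trace.
            coeffs = pywt.wavedec(eeg, wavelet, level=level)
            energies = np.array([np.sum(c ** 2) for c in coeffs])
            return energies / energies.sum()

        eeg = np.random.randn(4096)                        # stand-in EEG segment
        print(scale_energies(eeg))                         # compare pre- vs mid-seizure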

  11. Extended generalized Lagrangian multipliers for magnetohydrodynamics using adaptive multiresolution methods

    Directory of Open Access Journals (Sweden)

    Domingues M. O.

    2013-12-01

    Full Text Available We present a new adaptive multiresolution method for the numerical simulation of ideal magnetohydrodynamics. The governing equations, i.e., the compressible Euler equations coupled with the Maxwell equations, are discretized using a finite volume scheme on a two-dimensional Cartesian mesh. Adaptivity in space is obtained via Harten's cell-average multiresolution analysis, which allows the reliable introduction of a locally refined mesh while controlling the error. The explicit time discretization uses a compact Runge-Kutta method for local time stepping and an embedded Runge-Kutta scheme for automatic time step control. An extended generalized Lagrangian multiplier approach with the mixed hyperbolic-parabolic correction type is used to control the divergence of the magnetic field. Applications to a two-dimensional problem illustrate the properties of the method. Memory savings and the numerical divergence of the magnetic field are reported, and the accuracy of the adaptive computations is assessed by comparison with the available exact solution.

  12. The multi-resolution capability of Tchebichef moments and its applications to the analysis of fluorescence excitation-emission spectra

    Science.gov (United States)

    Li, Bao Qiong; Wang, Xue; Li Xu, Min; Zhai, Hong Lin; Chen, Jing; Liu, Jin Jin

    2018-01-01

    Fluorescence spectroscopy with an excitation-emission matrix (EEM) is a fast and inexpensive technique and has been applied to the detection of a very wide range of analytes. However, serious scattering and overlapping signals hinder the applications of EEM spectra. In this contribution, the multi-resolution capability of Tchebichef moments was investigated in depth and applied to the analysis of two EEM data sets (data set 1 consisted of valine-tyrosine-valine, tryptophan-glycine and phenylalanine, and data set 2 included vitamin B1, vitamin B2 and vitamin B6) for the first time. By means of Tchebichef moments of different orders, different information in the EEM spectra can be represented. It is owing to this multi-resolution capability that the overlapping problem was solved, and the information of chemicals and scatterings was separated. The results obtained demonstrate that the Tchebichef moment method is very effective, providing a promising tool for the analysis of EEM spectra. It is expected that the applications of the Tchebichef moment method could be developed and extended to complex systems such as biological fluids, food and the environment, to deal with practical problems (overlapped peaks, unknown interferences, baseline drifts, and so on) in other spectra.

  13. 4D-CT Lung registration using anatomy-based multi-level multi-resolution optical flow analysis and thin-plate splines.

    Science.gov (United States)

    Min, Yugang; Neylon, John; Shah, Amish; Meeks, Sanford; Lee, Percy; Kupelian, Patrick; Santhanam, Anand P

    2014-09-01

    The accuracy of 4D-CT registration is limited by inconsistent Hounsfield unit (HU) values in the 4D-CT data from one respiratory phase to another and by lower image contrast for lung substructures. This paper presents an optical flow and thin-plate spline (TPS)-based 4D-CT registration method to account for these limitations. The use of unified HU values on multiple anatomy levels (e.g., the lung contour, blood vessels, and parenchyma) accounts for registration errors caused by inconsistent landmark HU values. While 3D multi-resolution optical flow analysis registers each anatomical level, TPS is employed to propagate the results from one anatomical level to another, ultimately leading to the 4D-CT registration. The 4D-CT registration was validated using target registration error (TRE) and inverse consistency error (ICE) metrics, and a statistical image comparison using a Gamma criterion of 1% intensity difference in a 2 mm³ window range. Validation results showed that the proposed method was able to register CT lung datasets with TRE and ICE values <3 mm. In addition, the average number of voxels that failed the Gamma criterion was <3%, which supports the clinical applicability of the proposed registration mechanism. The proposed 4D-CT registration computes the volumetric lung deformations within clinically viable accuracy.

  14. Multiresolution Wavelet Analysis of Heartbeat Intervals Discriminates Healthy Patients from Those with Cardiac Pathology

    Science.gov (United States)

    Thurner, Stefan; Feurstein, Markus C.; Teich, Malvin C.

    1998-02-01

    We applied multiresolution wavelet analysis to the sequence of times between human heartbeats (R-R intervals) and have found a scale window, between 16 and 32 heartbeat intervals, over which the widths of the R-R wavelet coefficients fall into disjoint sets for normal and heart-failure patients. This has enabled us to correctly classify every patient in a standard data set as belonging either to the heart-failure or normal group with 100% accuracy, thereby providing a clinically significant measure of the presence of heart failure from the R-R intervals alone. Comparison is made with previous approaches, which have provided only statistically significant measures.
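
    The discriminating statistic is simply the standard deviation of the wavelet coefficients at each scale, inspected in the 16-32-interval window. A sketch with PyWavelets is given below; the db2 wavelet and the decomposition depth are illustrative stand-ins for the Daubechies analysis used in the paper.

        import numpy as np
        import pywt

        def rr_wavelet_sigma(rr, wavelet="db2", level=5):
            # Std of R-R wavelet coefficients per scale; scales 4 and 5
            # correspond to the 16- and 32-heartbeat windows.
            coeffs = pywt.wavedec(rr, wavelet, level=level)
            details = coeffs[1:]                           # [D5, D4, ..., D1]
            return {level - k: np.std(d) for k, d in enumerate(details)}

        rr = 0.8 + 0.05 * np.random.randn(1024)            # stand-in R-R series (s)
        print(rr_wavelet_sigma(rr))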

  15. Multimodal fusion framework: a multiresolution approach for emotion classification and recognition from physiological signals.

    Science.gov (United States)

    Verma, Gyanendra K; Tiwary, Uma Shanker

    2014-11-15

    The purpose of this paper is twofold: (i) to investigate emotion representation models and find out the possibility of a model with a minimum number of continuous dimensions, and (ii) to recognize and predict emotion from measured physiological signals using a multiresolution approach. The multimodal physiological signals are: electroencephalogram (EEG) (32 channels) and peripheral (8 channels: galvanic skin response (GSR), blood volume pressure, respiration pattern, skin temperature, electromyogram (EMG) and electrooculogram (EOG)) as given in the DEAP database. We discuss the theories of emotion modeling based on (i) basic emotions, (ii) the cognitive appraisal and physiological response approach and (iii) the dimensional approach, and propose a three-continuous-dimensional representation model for emotions. A clustering experiment on the given valence, arousal and dominance values of various emotions has been done to validate the proposed model. A novel approach for multimodal fusion of information from a large number of channels to classify and predict emotions has also been proposed. The Discrete Wavelet Transform, a classical transform for multiresolution analysis of signals, has been used in this study. Experiments are performed to classify different emotions using four classifiers. The average accuracies are 81.45%, 74.37%, 57.74% and 75.94% for the SVM, MLP, KNN and MMC classifiers, respectively. The best accuracy is for 'Depressing', with 85.46% using SVM. The 32 EEG channels are considered as independent modes and features from each channel are considered with equal importance. Some of the channel data may be correlated, but they may also contain supplementary information. In comparison with results given by others, the high accuracy of 85% with 13 emotions and 32 subjects obtained by our proposed method clearly proves the potential of our multimodal fusion approach. Copyright © 2013 Elsevier Inc. All rights reserved.

  16. Multiresolution molecular mechanics: Implementation and efficiency

    Energy Technology Data Exchange (ETDEWEB)

    Biyikli, Emre; To, Albert C., E-mail: albertto@pitt.edu

    2017-01-01

    Atomistic/continuum coupling methods combine accurate atomistic methods and efficient continuum methods to simulate the behavior of highly ordered crystalline systems. Coupled methods utilize the advantages of both approaches to simulate systems at a lower computational cost, while retaining the accuracy associated with atomistic methods. Many concurrent atomistic/continuum coupling methods have been proposed in the past; however, their true computational efficiency has not been demonstrated. The present work presents an efficient implementation of a concurrent coupling method called Multiresolution Molecular Mechanics (MMM) for serial, parallel, and adaptive analysis. First, we present the features of the implemented software along with the associated technologies. The scalability of the software implementation is demonstrated, and the competing effects of multiscale modeling and parallelization are discussed. Then, the algorithms contributing to the efficiency of the software are presented. These include algorithms for eliminating latent ghost atoms from calculations and measurement-based dynamic balancing of the parallel workload. The efficiency improvements made by these algorithms are demonstrated by benchmark tests. The efficiency of the software is found to be on par with LAMMPS, a state-of-the-art Molecular Dynamics (MD) simulation code, when performing full atomistic simulations. The speed-up of the MMM method is shown to be directly proportional to the reduction in the number of atoms visited in the force computation. Finally, an adaptive MMM analysis of a nanoindentation problem, containing over a million atoms, is performed, yielding an improvement of 6.3-8.5 times in efficiency over the full atomistic MD method. For the first time, the efficiency of a concurrent atomistic/continuum coupling method is comprehensively investigated and demonstrated.

  17. A multiresolution hierarchical classification algorithm for filtering airborne LiDAR data

    Science.gov (United States)

    Chen, Chuanfa; Li, Yanyan; Li, Wei; Dai, Honglei

    2013-08-01

    We presented a multiresolution hierarchical classification (MHC) algorithm for differentiating ground from non-ground points in LiDAR point clouds based on point residuals from an interpolated raster surface. MHC includes three levels of hierarchy, with a simultaneous increase of cell resolution and residual threshold from the low to the high level of the hierarchy. At each level, the surface is iteratively interpolated towards the ground using thin plate splines (TPS) until no further ground points are classified, and the classified ground points are used to update the surface in the next iteration. Fifteen groups of benchmark datasets, provided by the International Society for Photogrammetry and Remote Sensing (ISPRS) commission, were used to compare the performance of MHC with those of 17 other published filtering methods. Results indicated that MHC, with an average total error of 4.11% and an average Cohen's kappa coefficient of 86.27%, performs better than all the other filtering methods.
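
    The core loop (interpolate a surface from the current ground points, accept points whose residuals fall below the level's threshold, then repeat with tighter settings) can be sketched with scipy's thin-plate-spline interpolator. This is a simplified stand-in, not the authors' algorithm: the percentile seed and thresholds are assumptions, and the real MHC also coarsens the raster cell size across levels, which is omitted here.

        import numpy as np
        from scipy.interpolate import RBFInterpolator

        def mhc_like_filter(xy, z, thresholds=(0.5, 0.3, 0.2)):
            # xy: (N,2) point coordinates, z: (N,) heights; returns ground mask.
            ground = z <= np.percentile(z, 10)             # crude low-point seed
            for thresh in thresholds:
                tps = RBFInterpolator(xy[ground], z[ground], neighbors=50,
                                      kernel="thin_plate_spline", smoothing=1.0)
                resid = z - tps(xy)                        # residuals to the surface
                ground = ground | (np.abs(resid) < thresh)
            return ground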

  18. Hierarchical graphical-based human pose estimation via local multi-resolution convolutional neural network

    Science.gov (United States)

    Zhu, Aichun; Wang, Tian; Snoussi, Hichem

    2018-03-01

    This paper addresses the problems of graphical-based human pose estimation in still images, including the diversity of appearances and confounding background clutter. We present a new architecture for estimating human pose using a Convolutional Neural Network (CNN). Firstly, a Relative Mixture Deformable Model (RMDM) is defined for each pair of connected parts to compute the relative spatial information in the graphical model. Secondly, a Local Multi-Resolution Convolutional Neural Network (LMR-CNN) is proposed to train and learn the multi-scale representation of each body part by combining different levels of part context. Thirdly, an LMR-CNN-based hierarchical model is defined to explore the context information of limb parts. Finally, the experimental results demonstrate the effectiveness of the proposed deep learning approach for human pose estimation.

  19. Hierarchical graphical-based human pose estimation via local multi-resolution convolutional neural network

    Directory of Open Access Journals (Sweden)

    Aichun Zhu

    2018-03-01

    Full Text Available This paper addresses the problems of graphical-based human pose estimation in still images, including the diversity of appearances and confounding background clutter. We present a new architecture for estimating human pose using a Convolutional Neural Network (CNN). Firstly, a Relative Mixture Deformable Model (RMDM) is defined for each pair of connected parts to compute the relative spatial information in the graphical model. Secondly, a Local Multi-Resolution Convolutional Neural Network (LMR-CNN) is proposed to train and learn the multi-scale representation of each body part by combining different levels of part context. Thirdly, an LMR-CNN-based hierarchical model is defined to explore the context information of limb parts. Finally, the experimental results demonstrate the effectiveness of the proposed deep learning approach for human pose estimation.

  20. Accurate convolution/superposition for multi-resolution dose calculation using cumulative tabulated kernels

    International Nuclear Information System (INIS)

    Lu Weiguo; Olivera, Gustavo H; Chen Mingli; Reckwerdt, Paul J; Mackie, Thomas R

    2005-01-01

    Convolution/superposition (C/S) is regarded as the standard dose calculation method in most modern radiotherapy treatment planning systems. Different implementations of C/S can result in significantly different dose distributions. This paper addresses two major implementation issues associated with collapsed-cone C/S: one is how to utilize tabulated kernels instead of analytical parametrizations, and the other is how to deal with voxel size effects. Three methods that utilize the tabulated kernels are presented in this paper. These methods differ in the effective kernels used: the differential kernel (DK), the cumulative kernel (CK) or the cumulative-cumulative kernel (CCK). They result in slightly different computation times but significantly different voxel size effects. Both simulated and real multi-resolution dose calculations are presented. For the simulation tests, we use arbitrary kernels and various voxel sizes with a homogeneous phantom, and assume forward energy transport only. Simulations with voxel sizes up to 1 cm show that the CCK algorithm has errors within 0.1% of the maximum gold-standard dose. Real dose calculations use a heterogeneous slab phantom and both the 'broad' (5 × 5 cm²) and the 'narrow' (1.2 × 1.2 cm²) tomotherapy beams. Various voxel sizes (0.5 mm, 1 mm, 2 mm, 4 mm and 8 mm) are used for the dose calculations. The results show that all three algorithms have a negligible difference (0.1%) for the dose calculation at fine resolution (0.5 mm voxels), but the differences become significant when the voxel size increases. For the DK or CK algorithm in the broad (narrow) beam dose calculation, the dose differences between the 0.5 mm voxels and voxels up to 8 mm (4 mm) are around 10% (7%) of the maximum dose. For the broad (narrow) beam dose calculation using the CCK algorithm, the dose differences between the 0.5 mm voxels and voxels up to 8 mm (4 mm) are around 1% of the maximum dose. Among all three methods, the CCK algorithm is thus the least sensitive to voxel size.
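
    The relationship among the three effective kernels can be written compactly. Assuming the definitions below (a sketch consistent with the abstract, not the paper's exact notation), with DK the tabulated differential kernel,

        CK(r) = \int_0^r DK(t)\,dt, \qquad CCK(r) = \int_0^r CK(t)\,dt,

    the kernel integrated over a destination voxel [r_1, r_2] is CK(r_2) - CK(r_1), and averaging additionally over a source voxel of width \Delta gives

        \frac{1}{\Delta}\left[ CCK(r_2) - CCK(r_2-\Delta) - CCK(r_1) + CCK(r_1-\Delta) \right],

    which suggests why the CCK tabulation keeps its accuracy as the voxel size grows: voxel-size effects enter only through exact finite differences of a pre-integrated table.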

  1. Multi-resolution anisotropy studies of ultrahigh-energy cosmic rays detected at the Pierre Auger Observatory

    Energy Technology Data Exchange (ETDEWEB)

    Aab, A.; Abreu, P.; Aglietta, M.; Samarai, I. Al; Albuquerque, I. F. M.; Allekotte, I.; Almela, A.; Castillo, J. Alvarez; Alvarez-Muñiz, J.; Anastasi, G. A.; Anchordoqui, L.; Andrada, B.; Andringa, S.; Aramo, C.; Arqueros, F.; Arsene, N.; Asorey, H.; Assis, P.; Aublin, J.; Avila, G.; Badescu, A. M.; Balaceanu, A.; Luz, R. J. Barreira; Baus, C.; Beatty, J. J.; Becker, K. H.; Bellido, J. A.; Berat, C.; Bertaina, M. E.; Bertou, X.; Biermann, P. L.; Billoir, P.; Biteau, J.; Blaess, S. G.; Blanco, A.; Blazek, J.; Bleve, C.; Boháčová, M.; Boncioli, D.; Bonifazi, C.; Borodai, N.; Botti, A. M.; Brack, J.; Brancus, I.; Bretz, T.; Bridgeman, A.; Briechle, F. L.; Buchholz, P.; Bueno, A.; Buitink, S.; Buscemi, M.; Caballero-Mora, K. S.; Caccianiga, L.; Cancio, A.; Canfora, F.; Caramete, L.; Caruso, R.; Castellina, A.; Cataldi, G.; Cazon, L.; Chavez, A. G.; Chinellato, J. A.; Chudoba, J.; Clay, R. W.; Colalillo, R.; Coleman, A.; Collica, L.; Coluccia, M. R.; Conceição, R.; Contreras, F.; Cooper, M. J.; Coutu, S.; Covault, C. E.; Cronin, J.; D' Amico, S.; Daniel, B.; Dasso, S.; Daumiller, K.; Dawson, B. R.; de Almeida, R. M.; de Jong, S. J.; Mauro, G. De; Neto, J. R. T. de Mello; Mitri, I. De; de Oliveira, J.; de Souza, V.; Debatin, J.; Deligny, O.; Giulio, C. Di; Matteo, A. Di; Castro, M. L. Díaz; Diogo, F.; Dobrigkeit, C.; D' Olivo, J. C.; Anjos, R. C. dos; Dova, M. T.; Dundovic, A.; Ebr, J.; Engel, R.; Erdmann, M.; Erfani, M.; Escobar, C. O.; Espadanal, J.; Etchegoyen, A.; Falcke, H.; Farrar, G.; Fauth, A. C.; Fazzini, N.; Fick, B.; Figueira, J. M.; Filipčič, A.; Fratu, O.; Freire, M. M.; Fujii, T.; Fuster, A.; Gaior, R.; García, B.; Garcia-Pinto, D.; Gaté, F.; Gemmeke, H.; Gherghel-Lascu, A.; Ghia, P. L.; Giaccari, U.; Giammarchi, M.; Giller, M.; Głas, D.; Glaser, C.; Golup, G.; Berisso, M. Gómez; Vitale, P. F. Gómez; González, N.; Gorgi, A.; Gorham, P.; Gouffon, P.; Grillo, A. F.; Grubb, T. D.; Guarino, F.; Guedes, G. P.; Hampel, M. R.; Hansen, P.; Harari, D.; Harrison, T. A.; Harton, J. L.; Hasankiadeh, Q.; Haungs, A.; Hebbeker, T.; Heck, D.; Heimann, P.; Herve, A. E.; Hill, G. C.; Hojvat, C.; Holt, E.; Homola, P.; Hörandel, J. R.; Horvath, P.; Hrabovský, M.; Huege, T.; Hulsman, J.; Insolia, A.; Isar, P. G.; Jandt, I.; Jansen, S.; Johnsen, J. A.; Josebachuili, M.; Kääpä, A.; Kambeitz, O.; Kampert, K. H.; Katkov, I.; Keilhauer, B.; Kemp, E.; Kemp, J.; Kieckhafer, R. M.; Klages, H. O.; Kleifges, M.; Kleinfeller, J.; Krause, R.; Krohm, N.; Kuempel, D.; Mezek, G. Kukec; Kunka, N.; Awad, A. Kuotb; LaHurd, D.; Lauscher, M.; Legumina, R.; de Oliveira, M. A. Leigui; Letessier-Selvon, A.; Lhenry-Yvon, I.; Link, K.; Lopes, L.; López, R.; Casado, A. López; Luce, Q.; Lucero, A.; Malacari, M.; Mallamaci, M.; Mandat, D.; Mantsch, P.; Mariazzi, A. G.; Mariş, I. C.; Marsella, G.; Martello, D.; Martinez, H.; Bravo, O. Martínez; Meza, J. J. Masías; Mathes, H. J.; Mathys, S.; Matthews, J.; Matthews, J. A. J.; Matthiae, G.; Mayotte, E.; Mazur, P. O.; Medina, C.; Medina-Tanco, G.; Melo, D.; Menshikov, A.; Messina, S.; Micheletti, M. I.; Middendorf, L.; Minaya, I. A.; Miramonti, L.; Mitrica, B.; Mockler, D.; Mollerach, S.; Montanet, F.; Morello, C.; Mostafá, M.; Müller, A. L.; Müller, G.; Muller, M. A.; Müller, S.; Mussa, R.; Naranjo, I.; Nellen, L.; Nguyen, P. H.; Niculescu-Oglinzanu, M.; Niechciol, M.; Niemietz, L.; Niggemann, T.; Nitz, D.; Nosek, D.; Novotny, V.; Nožka, H.; Núñez, L. A.; Ochilo, L.; Oikonomou, F.; Olinto, A.; Selmi-Dei, D. 
Pakk; Palatka, M.; Pallotta, J.; Papenbreer, P.; Parente, G.; Parra, A.; Paul, T.; Pech, M.; Pedreira, F.; Pȩkala, J.; Pelayo, R.; Peña-Rodriguez, J.; Pereira, L. A. S.; Perlín, M.; Perrone, L.; Peters, C.; Petrera, S.; Phuntsok, J.; Piegaia, R.; Pierog, T.; Pieroni, P.; Pimenta, M.; Pirronello, V.; Platino, M.; Plum, M.; Porowski, C.; Prado, R. R.; Privitera, P.; Prouza, M.; Quel, E. J.; Querchfeld, S.; Quinn, S.; Ramos-Pollan, R.; Rautenberg, J.; Ravignani, D.; Revenu, B.; Ridky, J.; Risse, M.; Ristori, P.; Rizi, V.; de Carvalho, W. Rodrigues; Fernandez, G. Rodriguez; Rojo, J. Rodriguez; Rogozin, D.; Roncoroni, M. J.; Roth, M.; Roulet, E.; Rovero, A. C.; Ruehl, P.; Saffi, S. J.; Saftoiu, A.; Salazar, H.; Saleh, A.; Greus, F. Salesa; Salina, G.; Sánchez, F.; Sanchez-Lucas, P.; Santos, E. M.; Santos, E.; Sarazin, F.; Sarmento, R.; Sarmiento, C. A.; Sato, R.; Schauer, M.; Scherini, V.; Schieler, H.; Schimp, M.; Schmidt, D.; Scholten, O.; Schovánek, P.; Schröder, F. G.; Schulz, A.; Schulz, J.; Schumacher, J.; Sciutto, S. J.; Segreto, A.; Settimo, M.; Shadkam, A.; Shellard, R. C.; Sigl, G.; Silli, G.; Sima, O.; Śmiałkowski, A.; Šmída, R.; Snow, G. R.; Sommers, P.; Sonntag, S.; Sorokin, J.; Squartini, R.; Stanca, D.; Stanič, S.; Stasielak, J.; Stassi, P.; Strafella, F.; Suarez, F.; Durán, M. Suarez; Sudholz, T.; Suomijärvi, T.; Supanitsky, A. D.; Swain, J.; Szadkowski, Z.; Taboada, A.; Taborda, O. A.; Tapia, A.; Theodoro, V. M.; Timmermans, C.; Peixoto, C. J. Todero; Tomankova, L.; Tomé, B.; Elipe, G. Torralba; Torri, M.; Travnicek, P.; Trini, M.; Ulrich, R.; Unger, M.; Urban, M.; Galicia, J. F. Valdés; Valiño, I.; Valore, L.; Aar, G. van; Bodegom, P. van; Berg, A. M. van den; Vliet, A. van; Varela, E.; Cárdenas, B. Vargas; Varner, G.; Vázquez, J. R.; Vázquez, R. A.; Veberič, D.; Quispe, I. D. Vergara; Verzi, V.; Vicha, J.; Villaseñor, L.; Vorobiov, S.; Wahlberg, H.; Wainberg, O.; Walz, D.; Watson, A. A.; Weber, M.; Weindl, A.; Wiencke, L.; Wilczyński, H.; Winchen, T.; Wittkowski, D.; Wundheiler, B.; Yang, L.; Yelos, D.; Yushkov, A.; Zas, E.; Zavrtanik, D.; Zavrtanik, M.; Zepeda, A.; Zimmermann, B.; Ziolkowski, M.; Zong, Z.; Zuccarello, F.

    2017-06-01

    We report a multi-resolution search for anisotropies in the arrival directions of cosmic rays detected at the Pierre Auger Observatory with local zenith angles up to 80° and energies in excess of 4 EeV (4 × 10¹⁸ eV). This search is conducted by measuring the angular power spectrum and performing a needlet wavelet analysis in two independent energy ranges. Both analyses are complementary since the angular power spectrum achieves a better performance in identifying large-scale patterns while the needlet wavelet analysis, considering the parameters used in this work, presents a higher efficiency in detecting smaller-scale anisotropies, potentially providing directional information on any observed anisotropies. No deviation from isotropy is observed on any angular scale in the energy range between 4 and 8 EeV. Above 8 EeV, an indication of a dipole moment is captured, while no other deviation from isotropy is observed for moments beyond the dipole one. The corresponding p-values, obtained after accounting for searches blindly performed at several angular scales, are 1.3 × 10⁻⁵ in the case of the angular power spectrum, and 2.5 × 10⁻³ in the case of the needlet analysis. While these results are consistent with previous reports making use of the same data set, they provide extensions of the previous works through thorough scans of the angular scales.
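
    The record above describes two estimators; as a rough illustration of the first, the sketch below bins mock arrival directions onto a HEALPix map and estimates the angular power spectrum with healpy. The map resolution, event count, and isotropic mock sky are assumptions made for illustration; the actual Auger analysis additionally corrects for the observatory's non-uniform sky exposure.

```python
# Sketch: estimate the angular power spectrum C_l of a set of cosmic-ray
# arrival directions by binning them onto a HEALPix overdensity map.
# Not the Auger pipeline; exposure corrections are omitted.
import numpy as np
import healpy as hp

nside = 64                                   # illustrative map resolution
npix = hp.nside2npix(nside)

rng = np.random.default_rng(0)
n_events = 30000                             # hypothetical event count
theta = np.arccos(rng.uniform(-1.0, 1.0, n_events))   # isotropic mock sky
phi = rng.uniform(0.0, 2.0 * np.pi, n_events)

counts = np.bincount(hp.ang2pix(nside, theta, phi), minlength=npix)
delta = counts / counts.mean() - 1.0         # relative over/under-density

cl = hp.anafast(delta, lmax=20)              # angular power spectrum
print("dipole power C_1 = %.2e" % cl[1])
```

    For an isotropic sky the C_l are consistent with the Poisson noise level 4π/N; an anisotropy search compares the measured C_l against that baseline.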

  2. Multi-resolution anisotropy studies of ultrahigh-energy cosmic rays detected at the Pierre Auger Observatory

    Energy Technology Data Exchange (ETDEWEB)

    Aab, A. [Institute for Mathematics, Astrophysics and Particle Physics (IMAPP), Radboud Universiteit, Nijmegen (Netherlands); Abreu, P.; Andringa, S. [Laboratório de Instrumentação e Física Experimental de Partículas—LIP and Instituto Superior Técnico—IST, Universidade de Lisboa—UL (Portugal); Aglietta, M. [Osservatorio Astrofisico di Torino (INAF), Torino (Italy); Samarai, I. Al [Laboratoire de Physique Nucléaire et de Hautes Energies (LPNHE), Universités Paris 6 et Paris 7, CNRS-IN2P3 (France); Albuquerque, I.F.M. [Universidade de São Paulo, Inst. de Física, São Paulo (Brazil); Allekotte, I. [Centro Atómico Bariloche and Instituto Balseiro (CNEA-UNCuyo-CONICET) (Argentina); Almela, A.; Andrada, B. [Instituto de Tecnologías en Detección y Astropartículas (CNEA, CONICET, UNSAM), Centro Atómico Constituyentes, Comisión Nacional de Energía Atómica (Argentina); Castillo, J. Alvarez [Universidad Nacional Autónoma de México, México (Mexico); Alvarez-Muñiz, J. [Universidad de Santiago de Compostela (Spain); Anastasi, G.A. [Gran Sasso Science Institute (INFN), L' Aquila (Italy); Anchordoqui, L., E-mail: auger_spokespersons@fnal.gov [Department of Physics and Astronomy, Lehman College, City University of New York (United States); and others

    2017-06-01

    We report a multi-resolution search for anisotropies in the arrival directions of cosmic rays detected at the Pierre Auger Observatory with local zenith angles up to 80° and energies in excess of 4 EeV (4 × 10¹⁸ eV). This search is conducted by measuring the angular power spectrum and performing a needlet wavelet analysis in two independent energy ranges. Both analyses are complementary since the angular power spectrum achieves a better performance in identifying large-scale patterns while the needlet wavelet analysis, considering the parameters used in this work, presents a higher efficiency in detecting smaller-scale anisotropies, potentially providing directional information on any observed anisotropies. No deviation from isotropy is observed on any angular scale in the energy range between 4 and 8 EeV. Above 8 EeV, an indication of a dipole moment is captured, while no other deviation from isotropy is observed for moments beyond the dipole one. The corresponding p-values, obtained after accounting for searches blindly performed at several angular scales, are 1.3 × 10⁻⁵ in the case of the angular power spectrum, and 2.5 × 10⁻³ in the case of the needlet analysis. While these results are consistent with previous reports making use of the same data set, they provide extensions of the previous works through thorough scans of the angular scales.

  3. A multi-resolution analysis of lidar-DTMs to identify geomorphic processes from characteristic topographic length scales

    Science.gov (United States)

    Sangireddy, H.; Passalacqua, P.; Stark, C. P.

    2013-12-01

    Characteristic length scales are often present in topography, and they reflect the driving geomorphic processes. The wide availability of high resolution lidar Digital Terrain Models (DTMs) allows us to measure such characteristic scales, but new methods of topographic analysis are needed in order to do so. Here, we explore how transitions in probability density functions (pdfs) of topographic variables such as the topoindex, log(area/slope), defined by Beven and Kirkby [1979], can be measured by Multi-Resolution Analysis (MRA) of lidar DTMs [Stark and Stark, 2001; Sangireddy et al., 2012] and used to infer dominant geomorphic processes such as non-linear diffusion and critical shear. We demonstrate this link between dominant geomorphic processes and characteristic length scales by comparing results from a landscape evolution model to natural landscapes. The landscape evolution model MARSSIM [Howard, 1994] includes components for modeling rock weathering, mass wasting by non-linear creep, detachment-limited channel erosion, and bedload sediment transport. We use MARSSIM to simulate steady state landscapes for a range of hillslope diffusivities and critical shear stresses. Using the MRA approach, we estimate modal values and inter-quartile ranges of slope, curvature, and topoindex as a function of resolution. We also construct pdfs at each resolution and identify and extract characteristic scale breaks. Following the approach of Tucker et al. [2001], we measure the average length to channel from ridges, within the GeoNet framework developed by Passalacqua et al. [2010], and compute pdfs for hillslope lengths at each scale defined in the MRA. We compare the hillslope diffusivity used in MARSSIM against inter-quartile ranges of topoindex and hillslope length scales, and observe power law relationships between the compared variables for simulated landscapes at steady state. We plot similar measures for natural landscapes and are able to qualitatively infer the dominant geomorphic processes.
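
    A much-simplified stand-in for the MRA described above can be had by smoothing a DTM over a range of scales and tracking how the slope distribution changes; breaks in such curves are the characteristic scale breaks the record refers to. The terrain below is synthetic, and the Gaussian pyramid merely approximates the cited MRA.

```python
# Sketch: track the modal value / IQR of slope as a function of resolution.
# Gaussian smoothing stands in for the multi-resolution analysis, and the
# DTM is synthetic; with a real lidar DTM one would load a raster instead.
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(1)
dtm = 50.0 * ndimage.gaussian_filter(rng.normal(size=(512, 512)), 8.0)  # fake terrain [m]
cell = 1.0                                    # grid spacing [m]

for sigma in (1, 2, 4, 8, 16, 32):            # smoothing scales, in cells
    z = ndimage.gaussian_filter(dtm, sigma)
    gy, gx = np.gradient(z, cell)
    slope = np.hypot(gx, gy)
    q25, q50, q75 = np.percentile(slope, [25, 50, 75])
    print("scale %2d: median slope %.4f, IQR %.4f" % (sigma, q50, q75 - q25))
```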

  4. Including Below Detection Limit Samples in Post Decommissioning Soil Sample Analyses

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Jung Hwan; Yim, Man Sung [KAIST, Daejeon (Korea, Republic of)]

    2016-05-15

    To meet the required standards, the site owner has to show that the soil at the facility has been sufficiently cleaned up. To do this one must know the contamination of the soil at the site prior to clean up. This involves sampling that soil to identify the degree of contamination. However, there is a technical difficulty in determining how much decontamination should be done. The problem arises when measured samples are below the detection limit. Regulatory guidelines for site reuse after decommissioning are commonly challenged because the majority of the activity in the soil is at or below the limit of detection. Using additional statistical analyses of contaminated soil after decommissioning is expected to have the following advantages: a better and more reliable probabilistic exposure assessment, better economics (lower project costs), and improved communication with the public. This research will develop an approach that defines an acceptable method for demonstrating compliance of decommissioned NPP sites and validates that compliance. Soil samples from NPPs often contain censored data. Conventional methods for dealing with censored data sets are statistically biased and limited in their usefulness.

  5. Using wavelet multi-resolution nature to accelerate the identification of fractional order system

    International Nuclear Information System (INIS)

    Li Yuan-Lu; Meng Xiao; Ding Ya-Qing

    2017-01-01

    Because of the fractional order derivatives, the identification of a fractional order system (FOS) is more complex than that of an integer order system (IOS). To avoid high time consumption in the system identification, the least-squares method is used to find the remaining parameters while the fractional derivative order is held fixed; the optimal parameters of the system are then found by varying the derivative order over an interval. In addition, the operational matrix of fractional order integration combined with the multi-resolution nature of a wavelet is used to accelerate the FOS identification, which is achieved by discarding the wavelet coefficients of the high-frequency components of the input and output signals. Finally, the identification of some known fractional order systems and of an elastic torsion system is used to verify the proposed method. (paper)
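
    The following sketch illustrates the basic identification loop described above: an outer scan over the fractional derivative order with an inner least-squares fit of the remaining parameters. A direct Grünwald-Letnikov derivative is used in place of the paper's wavelet operational matrix, so this shows the idea rather than the accelerated method; the model a·D^α(y) + b·y = u and all signal values are invented for the example.

```python
# Sketch: identify the order and parameters of a*D^alpha(y) + b*y = u by
# scanning alpha and solving for (a, b) by least squares at each candidate.
import numpy as np

def gl_derivative(y, alpha, h):
    """Gruenwald-Letnikov fractional derivative of a sampled signal."""
    n = len(y)
    w = np.empty(n)
    w[0] = 1.0
    for j in range(1, n):                     # recursive GL weights
        w[j] = w[j - 1] * (1.0 - (alpha + 1.0) / j)
    d = np.array([np.dot(w[:k + 1], y[k::-1]) for k in range(n)])
    return d / h ** alpha

h = 0.01
t = np.arange(0.0, 5.0, h)
rng = np.random.default_rng(2)
y = h * np.cumsum(rng.normal(size=t.size))    # arbitrary smooth-ish output
alpha_true, a_true, b_true = 0.5, 2.0, 1.0
u = a_true * gl_derivative(y, alpha_true, h) + b_true * y  # consistent input

best = None
for alpha in np.arange(0.1, 1.0, 0.05):       # scan the derivative order
    X = np.column_stack([gl_derivative(y, alpha, h), y])
    coef, *_ = np.linalg.lstsq(X, u, rcond=None)
    err = float(np.sum((X @ coef - u) ** 2))
    if best is None or err < best[0]:
        best = (err, alpha, coef)
print("estimated order %.2f, parameters %s" % (best[1], best[2]))
```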

  6. ROBUST MOTION SEGMENTATION FOR HIGH DEFINITION VIDEO SEQUENCES USING A FAST MULTI-RESOLUTION MOTION ESTIMATION BASED ON SPATIO-TEMPORAL TUBES

    OpenAIRE

    Brouard, Olivier; Delannay, Fabrice; Ricordel, Vincent; Barba, Dominique

    2007-01-01

    Motion segmentation methods are effective for tracking video objects. However, segmentation methods based on motion need to know the global motion of the video in order to compensate for it before computing the segmentation. In this paper, we propose a method which estimates the global motion of a High Definition (HD) video shot and then segments it using the remaining motion information. First, we develop a fast method for multi-resolution motion est...

  7. OpenCL-based vicinity computation for 3D multiresolution mesh compression

    Science.gov (United States)

    Hachicha, Soumaya; Elkefi, Akram; Ben Amar, Chokri

    2017-03-01

    3D multiresolution mesh compression systems are still widely addressed in many domains. These systems increasingly require volumetric data to be processed in real-time. Therefore, the performance is becoming constrained by material resource usage and the overall computational time. In this paper, our contribution lies entirely in computing, in real-time, the triangle neighborhoods of 3D progressive meshes for a robust compression algorithm based on the scan-based wavelet transform (WT) technique. The originality of this latter algorithm is to compute the WT with minimum memory usage by processing data as they are acquired. However, with large data, this technique is considered poor in terms of computational complexity. For that reason, this work exploits the GPU to accelerate the computation, using OpenCL as a heterogeneous programming language. Experiments demonstrate that, aside from the portability across various platforms and the flexibility guaranteed by the OpenCL-based implementation, this method can achieve a speedup factor of 5 compared to the sequential CPU implementation.

  8. Surface analyses of TiC coated molybdenum limiter material exposed to high heat flux electron beam

    International Nuclear Information System (INIS)

    Onozuka, M.; Uchikawa, T.; Yamao, H.; Kawai, H.; Kousaku, A.; Nakamura, H.; Niikura, S.

    1987-01-01

    Observation and surface analyses of TiC coated molybdenum exposed to high heat flux have been performed to study the thermal damage resistance of TiC coated molybdenum limiter material. High heat loads were provided by a 120 kW electron beam facility. SEM, AES and EPMA have been applied to the surface analyses.

  9. Mis-Match Limit Load Analyses and Fracture Mechanics Assessment for Welded Pipe with Circumferential Crack at the Center of Weldment

    Energy Technology Data Exchange (ETDEWEB)

    Song, Tae Kwang; Jeon, Jun Young; Shim, Kwang Bo; Kim, Yun Jae [Korea University, Seoul (Korea, Republic of)]; Kim, Jong Sung [Sunchon University, Suncheon (Korea, Republic of)]; Jin, Tae Eun [Korea Power Engineering Company, Daejeon (Korea, Republic of)]

    2010-01-15

    In this paper, limit load analyses and fracture mechanics analyses were conducted via finite element analyses for a welded pipe with a circumferential crack at the center of the weldment. Systematic variations of the strength mismatch ratio, weldment width, crack shape, and pipe thickness ratio were considered to provide strength-mismatch limit loads. J-integral calculations based on the reference stress method were then conducted for two materials, stainless steel and ferritic steel. The reference stress defined by the proposed strength-mismatch limit load gives a much more accurate J-integral.

  10. Optimizing Energy and Modulation Selection in Multi-Resolution Modulation For Wireless Video Broadcast/Multicast

    KAUST Repository

    She, James

    2009-11-01

    Emerging technologies in Broadband Wireless Access (BWA) networks and video coding have enabled high-quality wireless video broadcast/multicast services in metropolitan areas. Joint source-channel coded wireless transmission, especially using hierarchical/superposition coded modulation at the channel, is recognized as an effective approach to increase system scalability while tackling the multi-user channel diversity problem. The power allocation and modulation selection problem, however, is subject to a high computational complexity due to the nonlinear formulation and huge solution space. This paper introduces a dynamic programming framework with conditioned parsing, which significantly reduces the search space. The optimized result is further verified with experiments using real video content. The proposed approach effectively serves as a generalized and practical optimization framework that can gauge and optimize a scalable wireless video broadcast/multicast based on multi-resolution modulation in any BWA network.
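
    As a toy illustration of the kind of search space involved, the sketch below solves a small power-allocation/modulation-selection problem exactly with a knapsack-style dynamic program. The layer structure, power costs, and utility values are all invented; the paper's framework additionally prunes the space with conditioned parsing, which is not reproduced here.

```python
# Sketch: pick one modulation level per video layer under a total power
# budget, maximizing total utility, via dynamic programming.
layers = 3                                    # base layer + 2 enhancement layers
budget = 10                                   # discretized power units
# (power cost, utility) per modulation option and per layer -- made-up numbers.
choices = [[(0, 0.0), (2, 1.0), (4, 1.6)],
           [(0, 0.0), (3, 0.8), (5, 1.2)],
           [(0, 0.0), (4, 0.5), (6, 0.9)]]

best = {0: (0.0, [])}                         # power used -> (utility, decisions)
for layer_options in choices:
    nxt = {}
    for used, (util, dec) in best.items():
        for m, (cost, gain) in enumerate(layer_options):
            q = used + cost
            if q > budget:
                continue                      # over the power budget
            cand = (util + gain, dec + [m])
            if q not in nxt or cand[0] > nxt[q][0]:
                nxt[q] = cand
    best = nxt

util, decisions = max(best.values(), key=lambda v: v[0])
print("utility %.2f with modulation indices %s" % (util, decisions))
```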

  11. Optimizing Energy and Modulation Selection in Multi-Resolution Modulation For Wireless Video Broadcast/Multicast

    KAUST Repository

    She, James; Ho, Pin-Han; Shihada, Basem

    2009-01-01

    Emerging technologies in Broadband Wireless Access (BWA) networks and video coding have enabled high-quality wireless video broadcast/multicast services in metropolitan areas. Joint source-channel coded wireless transmission, especially using hierarchical/superposition coded modulation at the channel, is recognized as an effective approach to increase system scalability while tackling the multi-user channel diversity problem. The power allocation and modulation selection problem, however, is subject to a high computational complexity due to the nonlinear formulation and huge solution space. This paper introduces a dynamic programming framework with conditioned parsing, which significantly reduces the search space. The optimized result is further verified with experiments using real video content. The proposed approach effectively serves as a generalized and practical optimization framework that can gauge and optimize a scalable wireless video broadcast/multicast based on multi-resolution modulation in any BWA network.

  12. Bayesian Multiresolution Variable Selection for Ultra-High Dimensional Neuroimaging Data.

    Science.gov (United States)

    Zhao, Yize; Kang, Jian; Long, Qi

    2018-01-01

    Ultra-high dimensional variable selection has become increasingly important in the analysis of neuroimaging data. For example, in the Autism Brain Imaging Data Exchange (ABIDE) study, neuroscientists are interested in identifying important biomarkers for early detection of autism spectrum disorder (ASD) using high resolution brain images that include hundreds of thousands of voxels. However, most existing methods are not feasible for solving this problem due to their extensive computational costs. In this work, we propose a novel multiresolution variable selection procedure under a Bayesian probit regression framework. It recursively uses posterior samples for coarser-scale variable selection to guide the posterior inference on finer-scale variable selection, leading to very efficient Markov chain Monte Carlo (MCMC) algorithms. The proposed algorithms are computationally feasible for ultra-high dimensional data. Also, our model incorporates two levels of structural information into variable selection using Ising priors: the spatial dependence between voxels and the functional connectivity between anatomical brain regions. Applied to the resting state functional magnetic resonance imaging (R-fMRI) data in the ABIDE study, our methods identify voxel-level imaging biomarkers highly predictive of the ASD, which are biologically meaningful and interpretable. Extensive simulations also show that our methods achieve better performance in variable selection compared to existing methods.

  13. Multiresolutional schemata for unsupervised learning of autonomous robots for 3D space operation

    Science.gov (United States)

    Lacaze, Alberto; Meystel, Michael; Meystel, Alex

    1994-01-01

    This paper describes a novel approach to the development of a learning control system for an autonomous space robot (ASR), which presents the ASR as a 'baby' -- that is, a system with no a priori knowledge of the world in which it operates, but with behavior acquisition techniques that allow it to build this knowledge from the experiences of actions within a particular environment (we call it an Astro-baby). The learning techniques are rooted in a recursive algorithm for inductive generation of nested schemata molded from processes of early cognitive development in humans. The algorithm extracts data from the environment and, by means of correlation and abduction, creates schemata that are used for control. This system is robust enough to deal with a constantly changing environment because such changes provoke the creation of new schemata by generalizing from experiences, while still maintaining minimal computational complexity, thanks to the system's multiresolutional nature.

  14. Multi-Resolution Wavelet-Transformed Image Analysis of Histological Sections of Breast Carcinomas

    Directory of Open Access Journals (Sweden)

    Hae-Gil Hwang

    2005-01-01

    Multi-resolution images of histological sections of breast cancer tissue were analyzed using texture features of Haar- and Daubechies-transform wavelets. Tissue samples analyzed were from ductal regions of the breast and included benign ductal hyperplasia, ductal carcinoma in situ (DCIS), and invasive ductal carcinoma (CA). To assess the correlation between computerized image analysis and visual analysis by a pathologist, we created a two-step classification system based on feature extraction and classification. In the feature extraction step, we extracted texture features from wavelet-transformed images at 10× magnification. In the classification step, we applied two types of classifiers to the extracted features, namely a statistics-based multivariate (discriminant) analysis and a neural network. Using features from second-level Haar-transform wavelet images in combination with discriminant analysis, we obtained classification accuracies of 96.67% and 87.78% for the training and testing sets (90 images each), respectively. We conclude that the best classifiers of carcinomas in histological sections of breast tissue are the texture features from the second-level Haar-transform wavelet images used in a discriminant function.
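
    A compact sketch of the two-step pipeline described above (wavelet texture features followed by a discriminant classifier) is given below, with random textures standing in for the histological images; the feature set is a simplification of the texture measures the study actually used.

```python
# Sketch: second-level Haar wavelet features + linear discriminant analysis.
import numpy as np
import pywt
from scipy.ndimage import gaussian_filter
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def wavelet_features(img, wavelet="haar", level=2):
    """Mean |coefficient| and mean energy of every subband of a 2-level DWT."""
    coeffs = pywt.wavedec2(img, wavelet, level=level)
    bands = [coeffs[0]] + [b for detail in coeffs[1:] for b in detail]
    feats = []
    for b in bands:
        feats += [np.mean(np.abs(b)), np.mean(b ** 2)]
    return np.array(feats)

rng = np.random.default_rng(3)
def texture(scale):                           # two synthetic texture classes
    return gaussian_filter(rng.normal(size=(64, 64)), scale)

X = np.array([wavelet_features(texture(s)) for s in [1.0] * 50 + [2.0] * 50])
y = np.array([0] * 50 + [1] * 50)

clf = LinearDiscriminantAnalysis().fit(X[::2], y[::2])   # even rows: training
print("test accuracy:", clf.score(X[1::2], y[1::2]))     # odd rows: testing
```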

  15. Unsupervised segmentation of lung fields in chest radiographs using multiresolution fractal feature vector and deformable models.

    Science.gov (United States)

    Lee, Wen-Li; Chang, Koyin; Hsieh, Kai-Sheng

    2016-09-01

    Segmenting lung fields in a chest radiograph is essential for automatically analyzing an image. We present an unsupervised method based on a multiresolution fractal feature vector. The feature vector characterizes the lung field region effectively. A fuzzy c-means clustering algorithm is then applied to obtain a satisfactory initial contour. The final contour is obtained by deformable models. The results show the feasibility and high performance of the proposed method. Furthermore, based on the segmentation of lung fields, the cardiothoracic ratio (CTR) can be measured. The CTR is a simple index for evaluating cardiac hypertrophy. After identifying a suspicious symptom based on the estimated CTR, a physician can suggest that the patient undergo additional extensive tests before a treatment plan is finalized.

  16. Towards multi-resolution global climate modeling with ECHAM6-FESOM. Part II: climate variability

    Science.gov (United States)

    Rackow, T.; Goessling, H. F.; Jung, T.; Sidorenko, D.; Semmler, T.; Barbi, D.; Handorf, D.

    2018-04-01

    This study forms part II of two papers describing ECHAM6-FESOM, a newly established global climate model with a unique multi-resolution sea ice-ocean component. While part I deals with the model description and the mean climate state, here we examine the internal climate variability of the model under constant present-day (1990) conditions. We (1) assess the internal variations in the model in terms of objective variability performance indices, (2) analyze variations in global mean surface temperature and put them in the context of variations in the observed record, with particular emphasis on the recent warming slowdown, (3) analyze and validate the most common atmospheric and oceanic variability patterns, (4) diagnose the potential predictability of various climate indices, and (5) put the multi-resolution approach to the test by comparing two setups that differ only in oceanic resolution in the equatorial belt, where one ocean mesh keeps the coarse 1° resolution applied in the adjacent open-ocean regions and the other mesh is gradually refined to 0.25°. Objective variability performance indices show that, in the considered setups, ECHAM6-FESOM performs overall favourably compared to five well-established climate models. Internal variations of the global mean surface temperature in the model are consistent with observed fluctuations and suggest that the recent warming slowdown can be explained as a once-in-one-hundred-years event caused by internal climate variability; periods of strong cooling in the model (`hiatus' analogs) are mainly associated with ENSO-related variability and to a lesser degree with PDO shifts, with the AMO playing a minor role. Common atmospheric and oceanic variability patterns are simulated largely consistently with their real counterparts. Typical deficits also found in other models at similar resolutions remain, in particular too weak non-seasonal variability of SSTs over large parts of the ocean and episodic periods of almost absent

  17. New Resolution Strategy for Multi-scale Reaction Waves using Time Operator Splitting and Space Adaptive Multiresolution: Application to Human Ischemic Stroke*

    Directory of Open Access Journals (Sweden)

    Louvet, Violaine

    2011-12-01

    We tackle the numerical simulation of reaction-diffusion equations modeling multi-scale reaction waves. This type of problem induces peculiar difficulties and potentially large stiffness, which stem from the broad spectrum of temporal scales in the nonlinear chemical source term as well as from the presence of large spatial gradients in the reactive fronts, which are spatially very localized. A new resolution strategy was recently introduced that combines a performing time operator splitting with high order dedicated time integration methods and space adaptive multiresolution. Based on recent theoretical studies of numerical analysis, such a strategy leads to a splitting time step which is restricted neither by the fastest scales in the source term nor by stability limits related to the diffusion problem, but only by the physics of the phenomenon. In this paper, the efficiency of the method is evaluated through 2D and 3D numerical simulations of a human ischemic stroke model, conducted on a simplified brain geometry, for which a simple parallelization strategy for shared memory architectures was implemented in order to reduce the computing costs related to the “detailed chemistry” features of the model.
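
    The splitting idea at the heart of this strategy is easy to show in one dimension. The sketch below applies symmetric (Strang) operator splitting to the scalar reaction-diffusion equation u_t = D u_xx + u(1 - u); the dedicated high-order integrators and the space adaptive multiresolution of the paper are deliberately omitted, and all parameter values are illustrative.

```python
# Sketch: Strang splitting for u_t = D*u_xx + u*(1 - u) on a 1D grid.
# Half reaction step, full diffusion step, half reaction step.
import numpy as np

D, dx, dt, steps = 1.0, 0.1, 0.004, 500       # D*dt/dx^2 = 0.4 (stable)
x = np.arange(0.0, 20.0, dx)
u = 1.0 / (1.0 + np.exp(x - 5.0))             # initial reaction front

def react(u, dt):
    # Exact solution of u' = u*(1 - u) over dt (logistic growth).
    e = np.exp(dt)
    return u * e / (1.0 - u + u * e)

def diffuse(u, dt):
    # Explicit finite-difference diffusion step, no-flux boundaries.
    up = np.pad(u, 1, mode="edge")
    return u + D * dt / dx ** 2 * (up[2:] - 2.0 * u + up[:-2])

for _ in range(steps):
    u = react(u, dt / 2.0)
    u = diffuse(u, dt)
    u = react(u, dt / 2.0)

print("front position ~ x =", x[np.argmin(np.abs(u - 0.5))])
```

    Strang splitting is second-order accurate in the splitting step; once the diffusion sub-solver is made unconditionally stable, the splitting step can indeed be chosen on physical rather than stability grounds, as the abstract describes.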

  18. A Multi-Resolution Spatial Model for Large Datasets Based on the Skew-t Distribution

    KAUST Repository

    Tagle, Felipe

    2017-12-06

    Large, non-Gaussian spatial datasets pose a considerable modeling challenge, as the dependence structure implied by the model needs to be captured at different scales while retaining feasible inference. Skew-normal and skew-t distributions have only recently begun to appear in the spatial statistics literature, without much consideration, however, for the ability to capture dependence at multiple resolutions and simultaneously achieve feasible inference for increasingly large data sets. This article presents the first multi-resolution spatial model inspired by the skew-t distribution, where a large-scale effect follows a multivariate normal distribution and the fine-scale effects follow multivariate skew-normal distributions. The resulting marginal distribution for each region is skew-t, thereby allowing for greater flexibility in capturing skewness and heavy tails characterizing many environmental datasets. Likelihood-based inference is performed using a Monte Carlo EM algorithm. The model is applied as a stochastic generator of daily wind speeds over Saudi Arabia.

  19. Multiresolution analysis (discrete wavelet transform) through Daubechies family for emotion recognition in speech.

    Science.gov (United States)

    Campo, D.; Quintero, O. L.; Bastidas, M.

    2016-04-01

    We propose a study of the mathematical properties of voice as an audio signal. This work includes signals in which the channel conditions are not ideal for emotion recognition. Multiresolution analysis (discrete wavelet transform) was performed using the Daubechies wavelet family (Db1-Haar, Db6, Db8, Db10), allowing the decomposition of the initial audio signal into sets of coefficients from which a set of features was extracted and analyzed statistically in order to differentiate emotional states. Artificial neural networks (ANNs) proved to be a system that allows an appropriate classification of such states. This study shows that the features extracted using wavelet decomposition are sufficient to analyze and extract emotional content in audio signals, yielding a high accuracy rate in the classification of emotional states without the need for other kinds of classical frequency-time features. Accordingly, this paper seeks to characterize mathematically the six basic emotions in humans: boredom, disgust, happiness, anxiety, anger and sadness, plus neutrality, for a total of seven states to identify.

  20. A multiresolution spatial parameterization for the estimation of fossil-fuel carbon dioxide emissions via atmospheric inversions

    Directory of Open Access Journals (Sweden)

    J. Ray

    2014-09-01

    The characterization of fossil-fuel CO2 (ffCO2) emissions is paramount to carbon cycle studies, but the use of atmospheric inverse modeling approaches for this purpose has been limited by the highly heterogeneous and non-Gaussian spatiotemporal variability of emissions. Here we explore the feasibility of capturing this variability using a low-dimensional parameterization that can be implemented within the context of atmospheric CO2 inverse problems aimed at constraining regional-scale emissions. We construct a multiresolution (i.e., wavelet-based) spatial parameterization for ffCO2 emissions using the Vulcan inventory, and examine whether such a parameterization can capture a realistic representation of the expected spatial variability of actual emissions. We then explore whether sub-selecting wavelets using two easily available proxies of human activity (images of lights at night and maps of built-up areas) yields a low-dimensional alternative. We finally implement this low-dimensional parameterization within an idealized inversion, where a sparse reconstruction algorithm, an extension of stagewise orthogonal matching pursuit (StOMP), is used to identify the wavelet coefficients. We find (i) that the spatial variability of fossil-fuel emissions can indeed be represented using a low-dimensional wavelet-based parameterization, (ii) that images of lights at night can be used as a proxy for sub-selecting wavelets for such analysis, and (iii) that implementing this parameterization within the described inversion framework makes it possible to quantify fossil-fuel emissions at regional scales if fossil-fuel-only CO2 observations are available.
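
    The sparse-reconstruction step can be illustrated in one dimension, with plain orthogonal matching pursuit from scikit-learn standing in for the StOMP extension used in the paper; the signal, observation operator, and sparsity level below are all invented.

```python
# Sketch: recover sparse Haar-wavelet coefficients of a 1D "emission" signal
# from a small number of noisy linear observations.
import numpy as np
import pywt
from sklearn.linear_model import OrthogonalMatchingPursuit

n = 64
rng = np.random.default_rng(4)

def synth(coef_vec):
    """Inverse Haar DWT of a flat coefficient vector of length n."""
    template = pywt.wavedec(np.zeros(n), "haar")
    parts, i = [], 0
    for a in template:
        parts.append(coef_vec[i:i + len(a)])
        i += len(a)
    return pywt.waverec(parts, "haar")

W = np.column_stack([synth(e) for e in np.eye(n)])   # wavelet synthesis matrix

x_true = np.zeros(n)
x_true[[3, 17, 40]] = [5.0, -3.0, 2.0]               # sparse coefficient truth
A = rng.normal(size=(32, n))                         # 32 random observations
y = A @ (W @ x_true) + 0.01 * rng.normal(size=32)

omp = OrthogonalMatchingPursuit(n_nonzero_coefs=3).fit(A @ W, y)
print("recovered support:", np.nonzero(omp.coef_)[0])
```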

  1. On-the-Fly Decompression and Rendering of Multiresolution Terrain

    Energy Technology Data Exchange (ETDEWEB)

    Lindstrom, P; Cohen, J D

    2009-04-02

    We present a streaming geometry compression codec for multiresolution, uniformly-gridded, triangular terrain patches that supports very fast decompression. Our method is based on linear prediction and residual coding for lossless compression of the full-resolution data. As simplified patches on coarser levels in the hierarchy already incur some data loss, we optionally allow further quantization for more lossy compression. The quantization levels are adaptive on a per-patch basis, while still permitting seamless, adaptive tessellations of the terrain. Our geometry compression on such a hierarchy achieves compression ratios of 3:1 to 12:1. Our scheme is not only suitable for fast decompression on the CPU, but also for parallel decoding on the GPU with peak throughput over 2 billion triangles per second. Each terrain patch is independently decompressed on the fly from a variable-rate bitstream by a GPU geometry program with no branches or conditionals. Thus we can store the geometry compressed on the GPU, reducing storage and bandwidth requirements throughout the system. In our rendering approach, only compressed bitstreams and the decoded height values in the view-dependent 'cut' are explicitly stored on the GPU. Normal vectors are computed in a streaming fashion, and remaining geometry and texture coordinates, as well as mesh connectivity, are shared and re-used for all patches. We demonstrate and evaluate our algorithms on a small prototype system in which all compressed geometry fits in the GPU memory and decompression occurs on the fly every rendering frame without any cache maintenance.
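
    The lossless core described above (linear prediction plus residual coding) can be sketched in a few lines. The parallelogram (Lorenzo) predictor and the entropy estimate below are generic stand-ins, not the paper's exact coder, and the terrain is synthetic.

```python
# Sketch: predict each height from already-decoded neighbours and measure
# how many bits per sample the residuals would need versus the raw heights.
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(5)
z = np.round(4000.0 * gaussian_filter(rng.normal(size=(256, 256)), 6.0)).astype(int)

pred = np.zeros_like(z)
pred[1:, 1:] = z[1:, :-1] + z[:-1, 1:] - z[:-1, :-1]  # left + top - topleft
pred[0, 1:] = z[0, :-1]                               # first row: left neighbour
pred[1:, 0] = z[:-1, 0]                               # first column: top neighbour
residual = z - pred                                   # what actually gets coded

def entropy_bits(a):
    """Empirical zeroth-order entropy in bits per sample."""
    _, counts = np.unique(a, return_counts=True)
    p = counts / a.size
    return float(-(p * np.log2(p)).sum())

print("raw      : %.2f bits/sample" % entropy_bits(z))
print("residual : %.2f bits/sample" % entropy_bits(residual))
```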

  2. Surface analyses of TiC coated molybdenum limiter material exposed to high heat flux electron beam

    International Nuclear Information System (INIS)

    Onozuka, M.; Uchikawa, T.; Yamao, H.; Kawai, H.; Kousaku, A.; Nakamura, H.; Niikura, S.

    1986-01-01

    Observation and surface analyses of TiC coated molybdenum exposed to high heat flux have been performed to study the thermal damage resistance of TiC coated molybdenum limiter material. High heat loads were provided by a 120 kW electron beam facility. (author)

  3. Automatic multiresolution age-related macular degeneration detection from fundus images

    Science.gov (United States)

    Garnier, Mickaël.; Hurtut, Thomas; Ben Tahar, Houssem; Cheriet, Farida

    2014-03-01

    Age-related Macular Degeneration (AMD) is a leading cause of legal blindness. As the disease progresses, visual loss occurs rapidly, therefore early diagnosis is required for timely treatment. Automatic, fast and robust screening of this widespread disease should allow an early detection. Most of the automatic diagnosis methods in the literature are based on a complex segmentation of the drusen, targeting a specific symptom of the disease. In this paper, we present a preliminary study for AMD detection from color fundus photographs using a multiresolution texture analysis. We analyze the texture at several scales by using a wavelet decomposition in order to identify all the relevant texture patterns. Textural information is captured using both the sign and magnitude components of the completed model of Local Binary Patterns. An image is finally described with the textural pattern distributions of the wavelet coefficient images obtained at each level of decomposition. We use Linear Discriminant Analysis for feature dimension reduction, to avoid the curse of dimensionality, and for image classification. Experiments were conducted on a dataset containing 45 images (23 healthy and 22 diseased) of variable quality and captured by different cameras. Our method achieved a recognition rate of 93.3%, with a specificity of 95.5% and a sensitivity of 91.3%. This approach shows promising results at low cost, in agreement with medical experts, as well as robustness to both image quality and fundus camera model.

  4. Variability Extraction and Synthesis via Multi-Resolution Analysis using Distribution Transformer High-Speed Power Data

    Energy Technology Data Exchange (ETDEWEB)

    Chamana, Manohar [National Renewable Energy Laboratory (NREL), Golden, CO (United States)]; Mather, Barry A. [National Renewable Energy Laboratory (NREL), Golden, CO (United States)]

    2017-10-19

    A library of load variability classes is created to produce scalable synthetic data sets using historical high-speed raw data. These data are collected from distribution monitoring units connected at the secondary side of a distribution transformer. Because of the irregular patterns and large volume of historical high-speed data sets, the utilization of current load characterization and modeling techniques is challenging. Multi-resolution analysis techniques are applied to extract the necessary components and eliminate the unnecessary components from the historical high-speed raw data to create the library of classes, which is then utilized to create new synthetic load data sets. A validation is performed to ensure that the synthesized data sets contain the same variability characteristics as the training data sets. The synthesized data sets are intended to be utilized in quasi-static time-series studies for distribution system planning on a granular scale, such as detailed PV interconnection studies.
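
    The extraction step can be pictured with a multilevel DWT that splits a high-speed trace into a slow component and its fast variability, the kind of separation from which variability classes could be built. The synthetic "load" below and the wavelet/level choices are assumptions for illustration only.

```python
# Sketch: separate a load trace into slow trend (approximation coefficients)
# and fast variability (detail coefficients) with a multilevel DWT.
import numpy as np
import pywt

rng = np.random.default_rng(6)
t = np.arange(4096)
load = 50.0 + 10.0 * np.sin(2.0 * np.pi * t / 4096.0) + rng.normal(0.0, 1.5, t.size)

coeffs = pywt.wavedec(load, "db4", level=6)
slow = pywt.waverec([coeffs[0]] + [np.zeros_like(c) for c in coeffs[1:]], "db4")
fast = load - slow[:load.size]                # the variability component

# Per-level detail energies could serve as the signature of a variability class.
energies = [float(np.sum(c ** 2)) for c in coeffs[1:]]
print("variability std %.2f, detail energies %s" % (fast.std(), np.round(energies, 1)))
```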

  5. A one-time truncate and encode multiresolution stochastic framework

    Energy Technology Data Exchange (ETDEWEB)

    Abgrall, R.; Congedo, P.M.; Geraci, G., E-mail: gianluca.geraci@inria.fr

    2014-01-15

    In this work a novel adaptive strategy for stochastic problems, inspired by the classical Harten framework, is presented. The proposed algorithm allows building, in a very general manner, stochastic numerical schemes starting from any type of deterministic scheme and handling a large class of problems, from unsteady to discontinuous solutions. Its formulation permits recovering the same results concerning the interpolation theory of the classical multiresolution approach, but with an extension to uncertainty quantification problems. The present strategy permits building numerical schemes with a higher accuracy with respect to other classical uncertainty quantification techniques, but with a strong reduction of the numerical cost and memory requirements. Moreover, the flexibility of the proposed approach allows employing any kind of probability density function, even discontinuous and time varying, without introducing further complications in the algorithm. The advantages of the present strategy are demonstrated on several numerical problems where different forms of uncertainty distributions are taken into account, such as discontinuous and unsteady custom-defined probability density functions. In addition to algebraic and ordinary differential equations, numerical results for the challenging 1D Kraichnan–Orszag problem are reported in terms of accuracy and convergence. Finally, a two degree-of-freedom aeroelastic model for a subsonic case is presented. Though quite simple, the model allows recovering some key physical aspects of the fluid/structure interaction, thanks to the quasi-steady aerodynamic approximation employed. The injection of an uncertainty is chosen in order to obtain a complete parameterization of the mass matrix. All the numerical results are compared with a classical Monte Carlo solution and with a non-intrusive Polynomial Chaos method.

  6. Genome-wide DNA polymorphism analyses using VariScan

    Directory of Open Access Journals (Sweden)

    Vilella, Albert J.

    2006-09-01

    Background: DNA sequence polymorphism analysis can provide valuable information on the evolutionary forces shaping nucleotide variation, and provides an insight into the functional significance of genomic regions. The recent ongoing genome projects will radically improve our capability to detect specific genomic regions shaped by natural selection. Currently available methods and software, however, are unsatisfactory for such genome-wide analysis. Results: We have developed methods for the analysis of DNA sequence polymorphisms at the genome-wide scale. These methods, which have been tested on coalescent-simulated and actual data files from mouse and human, have been implemented in the VariScan software package version 2.0. Additionally, we have also incorporated a graphical user interface. The main features of this software are: (i) exhaustive population-genetic analyses, including those based on coalescent theory; (ii) analysis adapted to the shallow data generated by the high-throughput genome projects; (iii) use of genome annotations to conduct comprehensive analyses separately for different functional regions; (iv) identification of relevant genomic regions by the sliding-window and wavelet-multiresolution approaches; (v) visualization of the results integrated with current genome annotations in commonly available genome browsers. Conclusion: VariScan is a powerful and flexible suite of software for the analysis of DNA polymorphisms. The current version implements new algorithms, methods, and capabilities, providing an important tool for an exhaustive exploratory analysis of genome-wide DNA polymorphism data.

  7. Developing a real-time emulation of multiresolutional control architectures for complex, discrete-event systems

    Energy Technology Data Exchange (ETDEWEB)

    Davis, W.J.; Macro, J.G.; Brook, A.L. [Univ. of Illinois, Urbana, IL (United States)] [and others]

    1996-12-31

    This paper first discusses an object-oriented control architecture and then applies the architecture to produce a real-time software emulator for the Rapid Acquisition of Manufactured Parts (RAMP) flexible manufacturing system (FMS). In specifying the control architecture, the coordinated object is first defined as the primary modeling element. These coordinated objects are then integrated into a Recursive, Object-Oriented Coordination Hierarchy. A new simulation methodology, the Hierarchical Object-Oriented Programmable Logic Simulator, is then employed to model the interactions among the coordinated objects. The final step in implementing the emulator is to distribute the models of the coordinated objects over a network of computers and to synchronize their operation to a real-time clock. The paper then introduces the Hierarchical Subsystem Controller as an intelligent controller for the coordinated object. The proposed approach to intelligent control is then compared to the concept of multiresolutional semiosis that has been developed by Dr. Alex Meystel. Finally, the plans for implementing an intelligent controller for the RAMP FMS are discussed.

  8. Multiresolution analysis over graphs for a motor imagery based online BCI game.

    Science.gov (United States)

    Asensio-Cubero, Javier; Gan, John Q; Palaniappan, Ramaswamy

    2016-01-01

    Multiresolution analysis (MRA) over graph representations of EEG data has proved to be a promising method for offline brain-computer interfacing (BCI) data analysis. For the first time we aim to prove the feasibility of the graph lifting transform in an online BCI system. Instead of developing a pointer device or a wheelchair controller as a test bed for human-machine interaction, we have designed and developed an engaging game which can be controlled by means of imaginary limb movements. Some modifications to the existing MRA analysis over graphs for BCI have also been proposed, such as the use of common spatial patterns for feature extraction at the different levels of decomposition, and sequential floating forward search as a best basis selection technique. In the online game experiment we obtained an average three-class classification rate of 63.0% for fourteen naive subjects. The application of a best basis selection method helps significantly decrease the computing resources needed. The present study allows us to further understand and assess the benefits of the use of tailored wavelet analysis for processing motor imagery data and contributes to the further development of BCI for gaming purposes.

  9. Global multi-resolution terrain elevation data 2010 (GMTED2010)

    Science.gov (United States)

    Danielson, Jeffrey J.; Gesch, Dean B.

    2011-01-01

    In 1996, the U.S. Geological Survey (USGS) developed a global topographic elevation model designated as GTOPO30 at a horizontal resolution of 30 arc-seconds for the entire Earth. Because no single source of topographic information covered the entire land surface, GTOPO30 was derived from eight raster and vector sources that included a substantial amount of U.S. Defense Mapping Agency data. The quality of the elevation data in GTOPO30 varies widely; there are no spatially-referenced metadata, and the major topographic features such as ridgelines and valleys are not well represented. Despite its coarse resolution and limited attributes, GTOPO30 has been widely used for a variety of hydrological, climatological, and geomorphological applications as well as military applications, where a regional, continental, or global scale topographic model is required. These applications have ranged from delineating drainage networks and watersheds to using digital elevation data for the extraction of topographic structure and three-dimensional (3D) visualization exercises (Jenson and Domingue, 1988; Verdin and Greenlee, 1996; Lehner and others, 2008). Many of the fundamental geophysical processes active at the Earth's surface are controlled or strongly influenced by topography, thus the critical need for high-quality terrain data (Gesch, 1994). U.S. Department of Defense requirements for mission planning, geographic registration of remotely sensed imagery, terrain visualization, and map production are similarly dependent on global topographic data. Since the time GTOPO30 was completed, the availability of higher-quality elevation data over large geographic areas has improved markedly. New data sources include global Digital Terrain Elevation Data (DTED®) from the Shuttle Radar Topography Mission (SRTM), Canadian elevation data, and data from the Ice, Cloud, and land Elevation Satellite (ICESat). Given the widespread use of GTOPO30 and the equivalent 30-arc

  10. Hybrid Multiscale Finite Volume method for multiresolution simulations of flow and reactive transport in porous media

    Science.gov (United States)

    Barajas-Solano, D. A.; Tartakovsky, A. M.

    2017-12-01

    We present a multiresolution method for the numerical simulation of flow and reactive transport in porous, heterogeneous media, based on the hybrid Multiscale Finite Volume (h-MsFV) algorithm. The h-MsFV algorithm allows us to couple high-resolution (fine scale) flow and transport models with lower resolution (coarse) models to locally refine both spatial resolution and transport models. The fine scale problem is decomposed into various "local" problems solved independently in parallel and coordinated via a "global" problem. This global problem is then coupled with the coarse model to strictly ensure domain-wide coarse-scale mass conservation. The proposed method provides an alternative to adaptive mesh refinement (AMR), due to its capacity to rapidly refine spatial resolution beyond what is possible with state-of-the-art AMR techniques, and its capability to locally swap transport models. We illustrate our method by applying it to groundwater flow and reactive transport of multiple species.

  11. Computerized mappings of the cerebral cortex: a multiresolution flattening method and a surface-based coordinate system

    Science.gov (United States)

    Drury, H. A.; Van Essen, D. C.; Anderson, C. H.; Lee, C. W.; Coogan, T. A.; Lewis, J. W.

    1996-01-01

    We present a new method for generating two-dimensional maps of the cerebral cortex. Our computerized, two-stage flattening method takes as its input any well-defined representation of a surface within the three-dimensional cortex. The first stage rapidly converts this surface to a topologically correct two-dimensional map, without regard for the amount of distortion introduced. The second stage reduces distortions using a multiresolution strategy that makes gross shape changes on a coarsely sampled map and further shape refinements on progressively finer resolution maps. We demonstrate the utility of this approach by creating flat maps of the entire cerebral cortex in the macaque monkey and by displaying various types of experimental data on such maps. We also introduce a surface-based coordinate system that has advantages over conventional stereotaxic coordinates and is relevant to studies of cortical organization in humans as well as non-human primates. Together, these methods provide an improved basis for quantitative studies of individual variability in cortical organization.

  12. The Impact of Including Below Detection Limit Samples in Post Decommissioning Soil Sample Analyses

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Jung Hwan; Yim, Man-Sung [KAIST, Daejeon (Korea, Republic of)]

    2016-10-15

    To meet the required standards, the site owner has to show that the soil at the facility has been sufficiently cleaned up. To do this one must know the contamination of the soil at the site prior to clean up. This involves sampling that soil to identify the degree of contamination. However, there is a technical difficulty in determining how much decontamination should be done. The problem arises when measured samples are below the detection limit. Regulatory guidelines for site reuse after decommissioning are commonly challenged because the majority of the activity in the soil is at or below the limit of detection. Using additional statistical analyses of contaminated soil after decommissioning is expected to have the following advantages: a better and more reliable probabilistic exposure assessment, better economics (lower project costs), and improved communication with the public. This research will develop an approach that defines an acceptable method for demonstrating compliance of decommissioned NPP sites and validates that compliance. Soil samples from NPPs often contain censored data. Conventional methods for dealing with censored data sets are statistically biased and limited in their usefulness. In this research, these additional methods are applied using real data from a monazite manufacturing factory.
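
    One standard unbiased alternative to substituting a fixed fraction of the detection limit is maximum-likelihood estimation that treats non-detects as left-censored observations. The sketch below fits a lognormal model this way; the data, detection limit, and distributional choice are assumptions for illustration.

```python
# Sketch: maximum-likelihood fit of a lognormal to left-censored activity
# data. Detects contribute a density term, non-detects a CDF term.
import numpy as np
from scipy import optimize, stats

rng = np.random.default_rng(7)
true = rng.lognormal(mean=0.0, sigma=1.0, size=60)    # "true" activities
dl = 1.0                                              # detection limit
detected = true >= dl
obs = np.where(detected, true, dl)                    # censored sample

def neg_loglik(params):
    mu, sigma = params[0], abs(params[1]) + 1e-9
    ll = stats.norm.logpdf(np.log(obs[detected]), mu, sigma).sum()
    n_cens = int(np.count_nonzero(~detected))
    ll += n_cens * stats.norm.logcdf((np.log(dl) - mu) / sigma)
    return -ll

fit = optimize.minimize(neg_loglik, x0=[0.0, 1.0], method="Nelder-Mead")
print("mu_hat %.2f (true 0.0), sigma_hat %.2f (true 1.0)"
      % (fit.x[0], abs(fit.x[1])))
```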

  13. Rule-based land cover classification from very high-resolution satellite image with multiresolution segmentation

    Science.gov (United States)

    Haque, Md. Enamul; Al-Ramadan, Baqer; Johnson, Brian A.

    2016-07-01

    Multiresolution segmentation and rule-based classification techniques are used to classify objects from very high-resolution satellite images of urban areas. Custom rules are developed using different spectral, geometric, and textural features with five scale parameters, which yield varying classification accuracies. Principal component analysis is used to select the most important features out of a total of 207 different features. In particular, seven different object types are considered for classification. The overall classification accuracy achieved for the rule-based method is 95.55% and 98.95% for seven and five classes, respectively. Other classifiers that do not use rules perform at 84.17% and 97.3% accuracy for seven and five classes, respectively. The segmentation is coarser for higher scale parameters and finer for lower scale parameters. The major contribution of this research is the development of rule sets and the identification of major features for satellite image classification, where the rule sets are transferable and the parameters are tunable for different types of imagery. Additionally, the individual object-wise classification and principal component analysis help to identify the required object from an arbitrary number of objects within images, given ground truth data for the training.

  14. In service monitoring based on fatigue analyses, possibilities and limitations

    International Nuclear Information System (INIS)

    Dittmar, S.; Binder, F.

    2004-01-01

    German LWR reactors are equipped with monitoring systems which are to enable a comparison of real transients with load case catalogues and fatigue catalogues for fatigue analyses. The accuracy of the information depends on the accuracy of the measurements, on the consideration of parameters influencing fatigue (medium, component surface, component size, etc.), and on the accuracy of the load analyses. The contribution attempts a critical evaluation, also in view of the fact that real fatigue damage is often impossible to quantify on the basis of fatigue analyses at a later stage. The effects of considering or not considering various influencing factors are discussed, as well as the consequences of the scatter of the material characteristics on which the analyses are based. Possible measures to be taken in operational monitoring are derived. (orig.) [de

  15. Risk analyses in nuclear engineering, their value in terms of information, and their limits in terms of applicability

    International Nuclear Information System (INIS)

    Heuser, F.W.

    1983-01-01

    This contribution first briefly explains the main pillars of the deterministic safety concept as developed in nuclear engineering, and some basic ideas on risk analyses in general. This is followed by an outline of the methodology and main purposes of risk analyses. The German Risk Study is taken as an example to discuss selected aspects with regard to information value and limits of risk analyses. The main conclusions state that risk analyses are a valuable instrument for quantitative safety evaluation, leading to a better understanding of safety problems and their prevention, and allowing a comparative assessment of various safety measures. They furthermore allow a refined evaluation of a variety of accident parameters and other impacts determining the risk emanating from accidents. The current state of the art in this sector still leaves numerous uncertainties so that risk analyses yield information for assessments rather than for definite predictions. However, the urge for quantifying the lack of knowledge leads to a better and more precise determination of the gaps still to be filled up by researchers and engineers. Thus risk analyses are a useful help in defining suitable approaches and setting up standards, showing the tasks to be fulfilled in safety research in general. (orig./HSCH) [de

  16. On the use of adaptive multiresolution method with time-varying tolerance for compressible fluid flows

    Science.gov (United States)

    Soni, V.; Hadjadj, A.; Roussel, O.

    2017-12-01

    In this paper, a fully adaptive multiresolution (MR) finite difference scheme with a time-varying tolerance is developed to study compressible fluid flows containing shock waves in interaction with solid obstacles. To ensure adequate resolution near rigid bodies, the MR algorithm is combined with an immersed boundary method based on a direct-forcing approach in which the solid object is represented by a continuous solid-volume fraction. The resulting algorithm forms an efficient tool capable of solving linear and nonlinear waves on arbitrary geometries. Using a one-dimensional scalar wave equation, the accuracy of the MR computation is, as expected, seen to decrease in time when using a constant MR tolerance, owing to the accumulation of error. To overcome this problem, a variable tolerance formulation is proposed, which is assessed through a new quality criterion, to ensure a time-converged solution of suitable quality. The newly developed algorithm, coupled with high-resolution spatial and temporal approximations, is successfully applied to shock-bluff body and shock-diffraction problems solving the Euler and Navier-Stokes equations. Results show excellent agreement with the available numerical and experimental data, thereby demonstrating the efficiency and performance of the proposed method.

  17. Framework for multi-resolution analyses of advanced traffic management strategies.

    Science.gov (United States)

    2016-11-01

    Demand forecasting models and simulation models have been developed, calibrated, and used in isolation of each other. However, the advancement of transportation system technologies and strategies, the increase in the availability of data, and the unc...

  18. Application des ondelettes à l'analyse de texture et à l'inspection de surface industrielle [Application of wavelets to texture analysis and industrial surface inspection]

    Science.gov (United States)

    Wolf, D.; Husson, R.

    1993-11-01

    This paper presents a method of texture analysis based on multiresolution wavelet analysis. We discuss the problem of the theoretical and experimental choice of the wavelet. Statistical modelling of the wavelet images is treated, leading us to model their statistical distribution by a generalized Gaussian law. An algorithm for texture classification is developed based on the variances of the different wavelet images. An industrial application of this algorithm illustrates its quality and demonstrates its aptitude for automating certain tasks in industrial inspection.

  19. Multi-resolution analysis using integrated microscopic configuration with local patterns for benign-malignant mass classification

    Science.gov (United States)

    Rabidas, Rinku; Midya, Abhishek; Chakraborty, Jayasree; Sadhu, Anup; Arif, Wasim

    2018-02-01

    In this paper, a Curvelet-based local attribute, the Curvelet-Local configuration pattern (C-LCP), is introduced for the characterization of mammographic masses as benign or malignant. Among the different anomalies, such as microcalcifications, bilateral asymmetry, architectural distortion, and masses, mass lesions are targeted because their variation in shape, size, and margin makes diagnosis a challenging task. Being efficient for classification, the multi-resolution property of the Curvelet transform is exploited, and local information is extracted from the coefficients of each subband using the Local configuration pattern (LCP). The microscopic measures in concatenation with the local textural information provide more discriminating capability than either alone. The measures embody the magnitude information along with the pixel-wise relationships among the neighboring pixels. The performance analysis is conducted with 200 mammograms of the DDSM database containing 100 benign and 100 malignant mass cases. The optimal set of features is acquired via a stepwise logistic regression method and the classification is carried out with Fisher linear discriminant analysis. The best area under the receiver operating characteristic curve and accuracy of 0.95 and 87.55% are achieved with the proposed method, which is further compared with some state-of-the-art competing methods.

  20. Using Controlled Landslide Initiation Experiments to Test Limit-Equilibrium Analyses of Slope Stability

    Science.gov (United States)

    Reid, M. E.; Iverson, R. M.; Brien, D. L.; Iverson, N. R.; Lahusen, R. G.; Logan, M.

    2004-12-01

    Most studies of landslide initiation employ limit-equilibrium analyses of slope stability. Owing to a lack of detailed data, however, few studies have tested limit-equilibrium predictions against physical measurements of slope failure. We have conducted a series of field-scale, highly controlled landslide initiation experiments at the USGS debris-flow flume in Oregon; these experiments provide exceptional data to test limit-equilibrium methods. In each of seven experiments, we attempted to induce failure in a 0.65 m thick, 2 m wide, 6 m³ prism of loamy sand placed behind a retaining wall in the 31° sloping flume. We systematically investigated triggering of sliding by groundwater injection, by prolonged moderate-intensity sprinkling, and by bursts of high-intensity sprinkling. We also used vibratory compaction to control soil porosity and thereby investigate differences in failure behavior of dense and loose soils. About 50 sensors were monitored at 20 Hz during the experiments, including nests of tiltmeters buried at 7 cm spacing to define subsurface failure geometry, and nests of tensiometers and pore-pressure sensors to define evolving pore-pressure fields. In addition, we performed ancillary laboratory tests to measure soil porosity, shear strength, hydraulic conductivity, and compressibility. In loose soils (porosity of 0.52 to 0.55), abrupt failure typically occurred along the flume bed after substantial soil deformation. In denser soils (porosity of 0.41 to 0.44), gradual failure occurred within the soil prism. All failure surfaces had a maximum length to depth ratio of about 7. In even denser soil (porosity of 0.39), we could not induce failure by sprinkling. The internal friction angle of the soils varied from 28° to 40° with decreasing porosity. We analyzed stability at failure, given the observed pore-pressure conditions just prior to large movement, using a 1-D infinite-slope method and a more complete 2-D Janbu method. Each method provides a static
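
    For reference, a minimal sketch of the 1-D infinite-slope factor of safety mentioned above; the parameter values below are illustrative stand-ins loosely based on the flume geometry, not the measured experiment values:

```python
# Sketch: 1-D infinite-slope factor of safety (FS < 1 implies failure).
# Parameter values are illustrative, not the experiments' measurements.
import math

def infinite_slope_fs(c_eff, phi_deg, gamma, z, beta_deg, u):
    """FS for a planar slide at depth z with pore pressure u.

    c_eff [Pa] cohesion, phi_deg [deg] friction angle, gamma [N/m^3] unit
    weight, z [m] slip depth, beta_deg [deg] slope angle, u [Pa] pore pressure.
    """
    beta, phi = math.radians(beta_deg), math.radians(phi_deg)
    resisting = c_eff + (gamma * z * math.cos(beta) ** 2 - u) * math.tan(phi)
    driving = gamma * z * math.sin(beta) * math.cos(beta)
    return resisting / driving

print(infinite_slope_fs(c_eff=0.0, phi_deg=28.0, gamma=18_000.0,
                        z=0.65, beta_deg=31.0, u=2_000.0))
```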

  1. Determining the 95% limit of detection for waterborne pathogen analyses from primary concentration to qPCR

    Science.gov (United States)

    Stokdyk, Joel P.; Firnstahl, Aaron; Spencer, Susan K.; Burch, Tucker R; Borchardt, Mark A.

    2016-01-01

    The limit of detection (LOD) for qPCR-based analyses is not consistently defined or determined in studies on waterborne pathogens. Moreover, the LODs reported often reflect the qPCR assay alone rather than the entire sample process. Our objective was to develop an approach to determine the 95% LOD (lowest concentration at which 95% of positive samples are detected) for the entire process of waterborne pathogen detection. We began by spiking the lowest concentration that was consistently positive at the qPCR step (based on its standard curve) into each procedural step working backwards (i.e., extraction, secondary concentration, primary concentration), which established a concentration that was detectable following losses of the pathogen from processing. Using the fraction of positive replicates (n = 10) at this concentration, we selected and analyzed a second, and then third, concentration. If the fraction of positive replicates equaled 1 or 0 for two concentrations, we selected another. We calculated the LOD using probit analysis. To demonstrate our approach we determined the 95% LOD for Salmonella enterica serovar Typhimurium, adenovirus 41, and vaccine-derived poliovirus Sabin 3, which were 11, 12, and 6 genomic copies (gc) per reaction (rxn), respectively (equivalent to 1.3, 1.5, and 4.0 gc L⁻¹ assuming the 1500 L tap-water sample volume prescribed in EPA Method 1615). This approach limited the number of analyses required and was amenable to testing multiple genetic targets simultaneously (i.e., spiking a single sample with multiple microorganisms). An LOD determined this way can facilitate study design, guide the number of required technical replicates, aid method evaluation, and inform data interpretation.
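
    A minimal sketch of the probit step described above: detection outcomes from spiked replicates are fit with a probit regression and the fitted curve is inverted at p = 0.95. The counts and concentrations are synthetic, and statsmodels/scipy are assumed dependencies (this is not the study's data):

```python
# Sketch: fit detection probability vs. log10 concentration by probit
# regression, then invert at p = 0.95 (synthetic replicate counts).
import numpy as np
import statsmodels.api as sm
from scipy.stats import norm

conc = np.array([2.0, 5.0, 10.0, 20.0])       # gc/reaction (assumed spikes)
detected = np.array([3, 6, 9, 10])            # positives out of 10 replicates
trials = np.full_like(detected, 10)

X = sm.add_constant(np.log10(conc))
glm = sm.GLM(np.column_stack([detected, trials - detected]), X,
             family=sm.families.Binomial(link=sm.families.links.Probit()))
res = glm.fit()
b0, b1 = res.params
lod95 = 10 ** ((norm.ppf(0.95) - b0) / b1)    # probit(p) = b0 + b1*log10(c)
print(f"95% LOD ~ {lod95:.1f} gc/reaction")
```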

  2. Automatic Segmentation of Fluorescence Lifetime Microscopy Images of Cells Using Multi-Resolution Community Detection -A First Study

    Science.gov (United States)

    Hu, Dandan; Sarder, Pinaki; Ronhovde, Peter; Orthaus, Sandra; Achilefu, Samuel; Nussinov, Zohar

    2014-01-01

    Inspired by a multi-resolution community detection (MCD) based network segmentation method, we suggest an automatic method for segmenting fluorescence lifetime (FLT) imaging microscopy (FLIM) images of cells in a first pilot investigation on two selected images. The image processing problem is framed as identifying segments with respective average FLTs against the background in FLIM images. The proposed method segments a FLIM image for a given resolution of the network defined using image pixels as the nodes and similarity between the FLTs of the pixels as the edges. In the resulting segmentation, low network resolution leads to larger segments, and high network resolution leads to smaller segments. Further, using the proposed method, the mean-square error (MSE) in estimating the FLT segments in a FLIM image was found to consistently decrease with increasing resolution of the corresponding network. The MCD method appeared to perform better than a popular spectral clustering based method in performing FLIM image segmentation. At high resolution, the spectral segmentation method introduced noisy segments in its output, and it was unable to achieve a consistent decrease in MSE with increasing resolution. PMID:24251410
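
    A toy sketch of the idea described above: pixels become nodes of a lattice graph, edge weights encode lifetime similarity, and modularity-based community detection is run at several resolutions (higher resolution yields smaller segments). NetworkX's greedy modularity routine with a resolution parameter stands in for the authors' MCD algorithm, and the "lifetime" image is synthetic:

```python
# Toy sketch: segmentation as community detection on a pixel-similarity
# graph (NetworkX stand-in for MCD; synthetic two-level FLT image).
import numpy as np
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

rng = np.random.default_rng(0)
flt = np.where(rng.random((16, 16)) < 0.5, 1.0, 3.0)  # two fake FLT levels
flt += 0.05 * rng.standard_normal(flt.shape)

G = nx.grid_2d_graph(*flt.shape)                      # 4-neighbour pixel lattice
for u, v in G.edges:
    G[u][v]["weight"] = float(np.exp(-abs(flt[u] - flt[v])))  # FLT similarity

for resolution in (0.5, 1.0, 2.0):                    # low -> coarser segments
    parts = greedy_modularity_communities(G, weight="weight",
                                          resolution=resolution)
    print(f"resolution {resolution}: {len(parts)} segments")
```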

  3. Compressed modes for variational problems in mathematical physics and compactly supported multiresolution basis for the Laplace operator

    Science.gov (United States)

    Ozolins, Vidvuds; Lai, Rongjie; Caflisch, Russel; Osher, Stanley

    2014-03-01

    We will describe a general formalism for obtaining spatially localized (``sparse'') solutions to a class of problems in mathematical physics, which can be recast as variational optimization problems, such as the important case of Schrödinger's equation in quantum mechanics. Sparsity is achieved by adding an L1 regularization term to the variational principle, which is shown to yield solutions with compact support (``compressed modes''). Linear combinations of these modes approximate the eigenvalue spectrum and eigenfunctions in a systematically improvable manner, and the localization properties of compressed modes make them an attractive choice for use with efficient numerical algorithms that scale linearly with the problem size. In addition, we introduce an L1 regularized variational framework for developing a spatially localized basis, compressed plane waves (CPWs), that spans the eigenspace of a differential operator, for instance, the Laplace operator. Our approach generalizes the concept of plane waves to an orthogonal real-space basis with multiresolution capabilities. Supported by NSF Award DMR-1106024 (VO), DOE Contract No. DE-FG02-05ER25710 (RC) and ONR Grant No. N00014-11-1-719 (SO).
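
    For concreteness, the L1-regularized variational problem described above can be written as follows (a sketch of the compressed-modes formulation; μ is the parameter trading localization against accuracy of the low-lying spectrum):

```latex
% Sketch of the L1-regularized variational problem (compressed modes).
\min_{\psi_1,\dots,\psi_N}\;
  \sum_{j=1}^{N} \langle \psi_j, \hat{H}\,\psi_j \rangle
  + \frac{1}{\mu} \sum_{j=1}^{N} \lVert \psi_j \rVert_{1}
\quad \text{subject to} \quad
\langle \psi_i, \psi_j \rangle = \delta_{ij}.
```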

  4. A scalable multi-resolution spatio-temporal model for brain activation and connectivity in fMRI data

    KAUST Repository

    Castruccio, Stefano

    2018-01-23

    Functional Magnetic Resonance Imaging (fMRI) is a primary modality for studying brain activity. Modeling spatial dependence of imaging data at different spatial scales is one of the main challenges of contemporary neuroimaging, and it could allow for accurate testing for significance in neural activity. The high dimensionality of this type of data (on the order of hundreds of thousands of voxels) poses serious modeling challenges and considerable computational constraints. For the sake of feasibility, standard models typically reduce dimensionality by modeling covariance among regions of interest (ROIs)—coarser or larger spatial units—rather than among voxels. However, ignoring spatial dependence at different scales could drastically reduce our ability to detect activation patterns in the brain and hence produce misleading results. We introduce a multi-resolution spatio-temporal model and a computationally efficient methodology to estimate cognitive control related activation and whole-brain connectivity. The proposed model allows for testing voxel-specific activation while accounting for non-stationary local spatial dependence within anatomically defined ROIs, as well as regional dependence (between-ROIs). The model is used in a motor-task fMRI study to investigate brain activation and connectivity patterns aimed at identifying associations between these patterns and regaining motor functionality following a stroke.

  5. Evaluating Climate Causation of Conflict in Darfur Using Multi-temporal, Multi-resolution Satellite Image Datasets With Novel Analyses

    Science.gov (United States)

    Brown, I.; Wennbom, M.

    2013-12-01

    Climate change, population growth and changes in traditional lifestyles have led to instabilities in traditional demarcations between neighboring ethnic and religious groups in the Sahel region. This has resulted in a number of conflicts as groups resort to arms to settle disputes. Such disputes often centre on, or are justified by, competition for resources. The conflict in Darfur has been controversially explained by resource scarcity resulting from climate change. Here we analyse established methods of using satellite imagery to assess vegetation health in Darfur. Multi-decadal time series of observations are available using low spatial resolution visible-near infrared imagery. Typically, normalized difference vegetation index (NDVI) analyses are produced to describe changes in vegetation 'greenness' or 'health'. Such approaches have been widely used to evaluate the long term development of vegetation in relation to climate variations across a wide range of environments from the Arctic to the Sahel. These datasets typically measure peak NDVI observed over a given interval and may introduce bias. It is furthermore unclear how the spatial organization of sparse vegetation may affect low resolution NDVI products. We develop and assess alternative measures of vegetation including descriptors of the growing season, wetness and resource availability. Expanding the range of parameters used in the analysis reduces our dependence on peak NDVI. Furthermore, these descriptors provide a better characterization of the growing season than the single NDVI measure. Using multi-sensor data we combine high temporal/moderate spatial resolution data with low temporal/high spatial resolution data to improve the spatial representativity of the observations and to provide improved spatial analysis of vegetation patterns. The approach places the high resolution observations in the NDVI context space using a longer time series of lower resolution imagery. The vegetation descriptors
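
    The NDVI referred to above is computed from red and near-infrared reflectance; a minimal sketch (the band arrays are placeholders):

```python
# Sketch: the standard NDVI computation (band arrays are placeholders
# for red and near-infrared reflectance).
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """Normalized difference vegetation index, in [-1, 1]."""
    nir, red = nir.astype(float), red.astype(float)
    return (nir - red) / (nir + red + eps)

nir = np.array([[0.50, 0.60], [0.40, 0.30]])
red = np.array([[0.10, 0.20], [0.20, 0.25]])
print(ndvi(nir, red))
```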

  6. Framework for multi-resolution analyses of advanced traffic management strategies [summary].

    Science.gov (United States)

    2017-01-01

    Transportation planning relies extensively on software that can simulate and predict travel behavior in response to alternative transportation networks. However, different software packages view traffic at different scales. Some programs are based on...

  8. A Multi-Resolution Mode CMOS Image Sensor with a Novel Two-Step Single-Slope ADC for Intelligent Surveillance Systems.

    Science.gov (United States)

    Kim, Daehyeok; Song, Minkyu; Choe, Byeongseong; Kim, Soo Youn

    2017-06-25

    In this paper, we present a multi-resolution mode CMOS image sensor (CIS) for intelligent surveillance system (ISS) applications. A low column fixed-pattern noise (CFPN) comparator is proposed in an 8-bit two-step single-slope analog-to-digital converter (TSSS ADC) for the CIS that supports normal, 1/2, 1/4, 1/8, 1/16, 1/32, and 1/64 modes of pixel resolution. We show that the scaled-resolution images enable the CIS to reduce total power consumption while images hold steady without events. A prototype sensor of 176 × 144 pixels has been fabricated with a 0.18 μm 1-poly 4-metal CMOS process. The area of the 4-shared 4T-active pixel sensor (APS) is 4.4 μm × 4.4 μm and the total chip size is 2.35 mm × 2.35 mm. The maximum power consumption is 10 mW (at full resolution) with supply voltages of 3.3 V (analog) and 1.8 V (digital), at a frame rate of 14 frames/s.

  9. Anatomy assisted PET image reconstruction incorporating multi-resolution joint entropy

    International Nuclear Information System (INIS)

    Tang, Jing; Rahmim, Arman

    2015-01-01

    A promising approach in PET image reconstruction is to incorporate high resolution anatomical information (measured from MR or CT) taking the anato-functional similarity measures such as mutual information or joint entropy (JE) as the prior. These similarity measures only classify voxels based on intensity values, while neglecting structural spatial information. In this work, we developed an anatomy-assisted maximum a posteriori (MAP) reconstruction algorithm wherein the JE measure is supplied by spatial information generated using wavelet multi-resolution analysis. The proposed wavelet-based JE (WJE) MAP algorithm involves calculation of derivatives of the subband JE measures with respect to individual PET image voxel intensities, which we have shown can be computed very similarly to how the inverse wavelet transform is implemented. We performed a simulation study with the BrainWeb phantom creating PET data corresponding to different noise levels. Realistically simulated T1-weighted MR images provided by BrainWeb modeling were applied in the anatomy-assisted reconstruction with the WJE-MAP algorithm and the intensity-only JE-MAP algorithm. Quantitative analysis showed that the WJE-MAP algorithm performed similarly to the JE-MAP algorithm at low noise level in the gray matter (GM) and white matter (WM) regions in terms of noise versus bias tradeoff. When noise increased to medium level in the simulated data, the WJE-MAP algorithm started to surpass the JE-MAP algorithm in the GM region, which is less uniform with smaller isolated structures compared to the WM region. In the high noise level simulation, the WJE-MAP algorithm presented clear improvement over the JE-MAP algorithm in both the GM and WM regions. In addition to the simulation study, we applied the reconstruction algorithms to real patient studies involving DPA-173 PET data and Florbetapir PET data with corresponding T1-MPRAGE MRI images. Compared to the intensity-only JE-MAP algorithm, the WJE

  11. Multiresolution analysis of the spatiotemporal variability in global radiation observed by a dense network of 99 pyranometers

    Science.gov (United States)

    Lakshmi Madhavan, Bomidi; Deneke, Hartwig; Witthuhn, Jonas; Macke, Andreas

    2017-03-01

    The time series of global radiation observed by a dense network of 99 autonomous pyranometers during the HOPE campaign around Jülich, Germany, are investigated with a multiresolution analysis based on the maximum overlap discrete wavelet transform and the Haar wavelet. For different sky conditions, typical wavelet power spectra are calculated to quantify the timescale dependence of variability in global transmittance. Distinctly higher variability is observed at all frequencies in the power spectra of global transmittance under broken-cloud conditions compared to clear, cirrus, or overcast skies. The spatial autocorrelation function including its frequency dependence is determined to quantify the degree of similarity of two time series measurements as a function of their spatial separation. Distances ranging from 100 m to 10 km are considered, and a rapid decrease of the autocorrelation function is found with increasing frequency and distance. For frequencies above 1/3 min⁻¹ and points separated by more than 1 km, variations in transmittance become completely uncorrelated. A method is introduced to estimate the deviation between a point measurement and a spatially averaged value for a surrounding domain, which takes into account domain size and averaging period, and is used to explore the representativeness of a single pyranometer observation for its surrounding region. Two distinct mechanisms are identified, which limit the representativeness; on the one hand, spatial averaging reduces variability and thus modifies the shape of the power spectrum. On the other hand, the correlation of variations of the spatially averaged field and a point measurement decreases rapidly with increasing temporal frequency. For a grid box of 10 km × 10 km and averaging periods of 1.5-3 h, the deviation of global transmittance between a point measurement and an area-averaged value depends on the prevailing sky conditions: 2.8 (clear), 1.8 (cirrus), 1.5 (overcast), and 4.2 % (broken
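
    A minimal sketch of the scale-by-scale variance (wavelet power) computation described above, using PyWavelets' stationary wavelet transform as a stand-in for the maximum overlap DWT (both are undecimated transforms, though their normalizations differ); the series is synthetic noise rather than pyranometer data:

```python
# Sketch: Haar wavelet power per scale of a transmittance time series
# (pywt.swt as a MODWT stand-in; synthetic data).
import numpy as np
import pywt

rng = np.random.default_rng(1)
transmittance = 0.6 + 0.2 * rng.standard_normal(1024)  # 1024 samples

levels = 6                                   # length must divide by 2**levels
coeffs = pywt.swt(transmittance, "haar", level=levels)
for k, (_approx, detail) in enumerate(coeffs):         # coarsest level first
    print(f"level {levels - k}: detail variance = {np.var(detail):.3e}")
```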

  12. Suitability of an MRMCE (multi-resolution minimum cross entropy) algorithm for online monitoring of a two-phase flow

    International Nuclear Information System (INIS)

    Wang, Qi; Wang, Huaxiang; Xin, Shan

    2011-01-01

    Flow regimes are important characteristics for describing two-phase flows, and measurement of two-phase flow parameters is becoming increasingly important in many industrial processes. Computerized tomography (CT) has been applied to two-phase/multi-phase flow measurement in recent years. Image reconstruction in CT often involves repeatedly solving large-dimensional matrix equations, which are computationally expensive, especially for online flow regime identification. In this paper, minimum cross entropy reconstruction based on multi-resolution processing (MRMCE) is presented for oil–gas two-phase flow regime identification. A regularized MCE solution is obtained using the simultaneous multiplicative algebraic reconstruction technique (SMART) at a coarse resolution level, where the important information of the reconstructed image is contained. The solution at the finest resolution is then obtained by inverse fast wavelet transform. Both computer simulations and static/dynamic experiments were carried out for typical flow regimes. The results indicate that the proposed method can dramatically reduce the computational time and improve the quality of the reconstructed image with suitable decomposition levels, compared with the single-resolution maximum likelihood expectation maximization (MLEM), alternating minimization (AM), Landweber, iterative least square technique (ILST), and minimum cross entropy (MCE) methods. The MRMCE method is therefore suitable for identification of dynamic two-phase flow regimes.
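
    The SMART iteration named above has a standard multiplicative form; a minimal sketch on a toy system follows (the matrix, data, and iteration count are illustrative, and the wavelet coarse-to-fine machinery of MRMCE is omitted):

```python
# Sketch: one standard form of the SMART update on a toy consistent
# system (illustrative only; omits MRMCE's multi-resolution stage).
import numpy as np

def smart(A, y, iters=200):
    """Simultaneous multiplicative ART for y = A @ x with x > 0."""
    x = np.ones(A.shape[1])
    col_sums = A.sum(axis=0)                 # per-column normalization
    for _ in range(iters):
        ratio = y / (A @ x)                  # measured vs. predicted data
        x *= np.exp((A.T @ np.log(ratio)) / col_sums)
    return x

A = np.array([[1.0, 0.5], [0.3, 1.0], [0.7, 0.7]])
y = A @ np.array([2.0, 1.0])                 # consistent synthetic data
print(smart(A, y))                           # should approach [2., 1.]
```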

  13. Preliminary scoping safety analyses of the limiting design basis protected accidents for the Fast Flux Test Facility tritium production core

    International Nuclear Information System (INIS)

    Heard, F.J.

    1997-01-01

    The SAS4A/SASSYS-1 computer code is used to perform a series of analyses for the limiting protected design basis transient events given a representative tritium and medical isotope production core design proposed for the Fast Flux Test Facility (FFTF). The FFTF tritium and isotope production mission will require a different core loading, which features higher enrichment fuel, tritium targets, and medical isotope production assemblies. Changes in several key core parameters, such as the Doppler coefficient and delayed neutron fraction, will affect the transient response of the reactor. Both reactivity insertion and reduction-of-heat-removal events were analyzed. The analysis methods and modeling assumptions are described. Results of the analyses and comparisons against fuel pin performance criteria are presented to provide quantification that the plant protection system is adequate to maintain the necessary safety margins and assure cladding integrity.

  14. Resistive Fault Current Limiter Prototypes: Mechanical and Electrical Analyses

    International Nuclear Information System (INIS)

    Martini, L; Arcos, I; Bocchi, M; Brambilla, R; Dalessandro, R; Frigerio, A; Rossi, V

    2006-01-01

    The problem of excessive short-circuit currents has become an important issue for power systems operators and there are clear indications for a growing interest in superconducting fault current limiter devices for MV and HV grids. In this work, we report on both simulation and electrical testing on single-phase SFCL prototypes developed in the framework of an Italian RTD project to be completed with a 3-phase SFCL unit by the end of 2005

  15. SeeSway - A free web-based system for analysing and exploring standing balance data.

    Science.gov (United States)

    Clark, Ross A; Pua, Yong-Hao

    2018-06-01

    Computerised posturography can be used to assess standing balance, and can predict poor functional outcomes in many clinical populations. A key limitation is the disparate signal filtering and analysis techniques, with many methods requiring custom computer programs. This paper discusses the creation of a freely available web-based software program, SeeSway (www.rehabtools.org/seesway), which was designed to provide powerful tools for pre-processing, analysing and visualising standing balance data in an easy-to-use and platform-independent website. SeeSway links an interactive web platform with file upload capability to software systems including LabVIEW, Matlab, Python and R to perform the data filtering, analysis and visualisation of standing balance data. Input data can consist of any signal that comprises an anterior-posterior and medial-lateral coordinate trace, such as center of pressure or mass displacement. This allows it to be used with systems including criterion-reference commercial force platforms and three-dimensional motion analysis, smartphones, accelerometers and low-cost technology such as the Nintendo Wii Balance Board and Microsoft Kinect. Filtering options include Butterworth, weighted and unweighted moving average, and discrete wavelet transforms. Analysis methods include standard techniques such as path length, amplitude, and root mean square in addition to less common but potentially promising methods such as sample entropy, detrended fluctuation analysis and multiresolution wavelet analysis. These data are visualised using scalograms, which chart the change in frequency content over time, scatterplots and standard line charts. This provides the user with a detailed understanding of their results, and how their different pre-processing and analysis method selections affect their findings. An example of the data analysis techniques is provided in the paper, with graphical representation of how advanced analysis methods can better discriminate
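
    Two of the standard sway measures listed above are easy to state precisely; a minimal sketch on a synthetic centre-of-pressure trace (the sampling rate and signal are assumed, and this is unrelated to SeeSway's own implementation):

```python
# Sketch: path length and RMS amplitude of an anterior-posterior /
# medial-lateral trace (synthetic random walk, not SeeSway output).
import numpy as np

def path_length(ap, ml):
    """Total length of the 2-D trajectory."""
    return float(np.sum(np.hypot(np.diff(ap), np.diff(ml))))

def rms_amplitude(x):
    """Root-mean-square displacement about the mean position."""
    return float(np.sqrt(np.mean((x - np.mean(x)) ** 2)))

rng = np.random.default_rng(2)
ap = np.cumsum(0.1 * rng.standard_normal(3000))    # ~30 s at 100 Hz, assumed
ml = np.cumsum(0.1 * rng.standard_normal(3000))
print(path_length(ap, ml), rms_amplitude(ap), rms_amplitude(ml))
```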

  16. Random error in cardiovascular meta-analyses

    DEFF Research Database (Denmark)

    Albalawi, Zaina; McAlister, Finlay A; Thorlund, Kristian

    2013-01-01

    BACKGROUND: Cochrane reviews are viewed as the gold standard in meta-analyses given their efforts to identify and limit systematic error which could cause spurious conclusions. The potential for random error to cause spurious conclusions in meta-analyses is less well appreciated. METHODS: We exam...

  17. Shoreline change after 12 years of tsunami in Banda Aceh, Indonesia: a multi-resolution, multi-temporal satellite data and GIS approach

    Science.gov (United States)

    Sugianto, S.; Heriansyah; Darusman; Rusdi, M.; Karim, A.

    2018-04-01

    The Indian Ocean tsunami of 26 December 2004 caused severe damage to some shorelines of Banda Aceh City, Indonesia. The impact can be traced using remote sensing data combined with GIS. The approach incorporates image processing to analyze the extent of shoreline change with multi-temporal data 12 years after the tsunami. This study uses multi-resolution, multi-temporal QuickBird and IKONOS satellite images to demarcate the shoreline of Banda Aceh before and after the tsunami. The research demonstrates a significant change to the shoreline in the form of abrasion between 2004 and 2005, ranging from a few meters to hundreds of meters. Between 2004 and 2011 the shoreline did not return to its pre-tsunami position, which is considered a post-tsunami impact; the abrasion ranges from 18.3 to 194.93 meters. Further, the 2009-2011 period shows slow change of the Banda Aceh shoreline, considered independent of the tsunami, e.g., abrasion caused by ocean waves that erode the coast, while in specific areas accretion occurs, caused by sediment carried by river flow into the sea near the shoreline of the study area.

  18. Hanging-wall deformation above a normal fault: sequential limit analyses

    Science.gov (United States)

    Yuan, Xiaoping; Leroy, Yves M.; Maillot, Bertrand

    2015-04-01

    The deformation in the hanging wall above a segmented normal fault is analysed with the sequential limit analysis (SLA). The method combines predictions of the dip and position of the active fault and axial surface with geometrical evolution à la Suppe (Groshong, 1989). Two problems are considered. The first follows the prototype proposed by Patton (2005), with a pre-defined convex, segmented fault. The orientation of the upper segment of the normal fault is an unknown in the second problem. The loading in both problems consists of the retreat of the back wall and sedimentation. This sedimentation starts from the lowest point of the topography and acts at a rate r_s relative to the wall retreat rate. For the first problem, the normal fault either has zero friction or a friction value set to 25° or 30° to fit the experimental results (Patton, 2005). In the zero-friction case, a hanging-wall anticline develops much like in the experiments. In the 25° friction case, slip on the upper segment is accompanied by rotation of the axial plane, producing a broad shear zone rooted at the fault bend. The same observation is made in the 30° case, but without slip on the upper segment. Experimental outcomes show a behaviour in between these two latter cases. For the second problem, mechanics predicts a concave fault bend with an upper-segment dip decreasing during extension. The axial surface rooting at the normal fault bend sees its dip increasing during extension, resulting in a curved roll-over. Softening on the normal fault leads to a stepwise rotation responsible for strain partitioning into small blocks in the hanging wall. The rotation is due to the subsidence of the topography above the hanging wall. Sedimentation in the lowest region thus reduces the rotations. Note that these rotations predicted by mechanics are not accounted for in most geometrical approaches (Xiao and Suppe, 1992) and are observed in sand box experiments (Egholm et al., 2007, referring

  19. The Multi-Resolution Land Characteristics (MRLC) Consortium: 20 years of development and integration of USA national land cover data

    Science.gov (United States)

    Wickham, James D.; Homer, Collin G.; Vogelmann, James E.; McKerrow, Alexa; Mueller, Rick; Herold, Nate; Coluston, John

    2014-01-01

    The Multi-Resolution Land Characteristics (MRLC) Consortium demonstrates the national benefits of USA Federal collaboration. Starting in the mid-1990s as a small group with the straightforward goal of compiling a comprehensive national Landsat dataset that could be used to meet agencies’ needs, MRLC has grown into a group of 10 USA Federal Agencies that coordinate the production of five different products, including the National Land Cover Database (NLCD), the Coastal Change Analysis Program (C-CAP), the Cropland Data Layer (CDL), the Gap Analysis Program (GAP), and the Landscape Fire and Resource Management Planning Tools (LANDFIRE). As a set, the products include almost every aspect of land cover from impervious surface to detailed crop and vegetation types to fire fuel classes. Some products can be used for land cover change assessments because they cover multiple time periods. The MRLC Consortium has become a collaborative forum, where members share research, methodological approaches, and data to produce products using established protocols, and we believe it is a model for the production of integrated land cover products at national to continental scales. We provide a brief overview of each of the main products produced by MRLC and examples of how each product has been used. We follow that with a discussion of the impact of the MRLC program and a brief overview of future plans.

  20. A 4.5 km resolution Arctic Ocean simulation with the global multi-resolution model FESOM 1.4

    Science.gov (United States)

    Wang, Qiang; Wekerle, Claudia; Danilov, Sergey; Wang, Xuezhu; Jung, Thomas

    2018-04-01

    In the framework of developing a global modeling system which can facilitate modeling studies on Arctic Ocean and high- to midlatitude linkage, we evaluate the Arctic Ocean simulated by the multi-resolution Finite Element Sea ice-Ocean Model (FESOM). To explore the value of using high horizontal resolution for Arctic Ocean modeling, we use two global meshes differing in the horizontal resolution only in the Arctic Ocean (24 km vs. 4.5 km). The high resolution significantly improves the model's representation of the Arctic Ocean. The most pronounced improvement is in the Arctic intermediate layer, in terms of both Atlantic Water (AW) mean state and variability. The deepening and thickening bias of the AW layer, a common issue found in coarse-resolution simulations, is significantly alleviated by using higher resolution. The topographic steering of the AW is stronger and the seasonal and interannual temperature variability along the ocean bottom topography is enhanced in the high-resolution simulation. The high resolution also improves the ocean surface circulation, mainly through a better representation of the narrow straits in the Canadian Arctic Archipelago (CAA). The representation of CAA throughflow not only influences the release of water masses through the other gateways but also the circulation pathways inside the Arctic Ocean. However, the mean state and variability of Arctic freshwater content and the variability of freshwater transport through the Arctic gateways appear not to be very sensitive to the increase in resolution employed here. By highlighting the issues that are independent of model resolution, we address that other efforts including the improvement of parameterizations are still required.

  1. The Limited Informativeness of Meta-Analyses of Media Effects.

    Science.gov (United States)

    Valkenburg, Patti M

    2015-09-01

    In this issue of Perspectives on Psychological Science, Christopher Ferguson reports on a meta-analysis examining the relationship between children's video game use and several outcome variables, including aggression and attention deficit symptoms (Ferguson, 2015, this issue). In this commentary, I compare Ferguson's nonsignificant effect sizes with earlier meta-analyses on the same topics that yielded larger, significant effect sizes. I argue that Ferguson's choice of partial effect sizes is unjustified on both methodological and theoretical grounds. I then plead for a more constructive debate on the effects of violent video games on children and adolescents. Until now, this debate has been dominated by two camps with diametrically opposed views on the effects of violent media on children. However, even the earliest media effects studies tell us that children can react quite differently to the same media content. Thus, if researchers truly want to understand how media affect children, rather than fight for the presence or absence of effects, they need to adopt a perspective that takes differential susceptibility to media effects more seriously. © The Author(s) 2015.

  2. Improvements in RIMS Isotopic Precision: Application to in situ atom-limited analyses

    International Nuclear Information System (INIS)

    Levine, J.; Stephan, T.; Savina, M.; Pellin, M.

    2009-01-01

    Resonance ionization mass spectrometry offers high sensitivity and elemental selectivity in microanalysis, but the isotopic precision attainable by this technique has been limited. Here we report instrumental modifications to improve the precision of RIMS isotope ratio measurements. Special attention must be paid to eliminating pulse-to-pulse variations in the time-of-flight mass spectrometer through which the photoions travel, and resonant excitation schemes must be chosen such that the resonance transitions can be substantially power-broadened to cover the isotope shifts. We report resonance ionization measurements of chromium isotope ratios with statistics-limited precision better than 1%.
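
    For context, the "statistics-limited" precision mentioned above is the floor set by Poisson counting statistics on the two isotope signals; a minimal sketch with assumed count totals:

```python
# Sketch: Poisson counting-statistics floor on an isotope-ratio
# measurement (count totals are assumed, purely for illustration).
import math

def ratio_rel_sd(n_major, n_minor):
    """Relative standard deviation of the ratio of two Poisson counts."""
    return math.sqrt(1.0 / n_major + 1.0 / n_minor)

# e.g. 1e6 counts of the major isotope vs. 1e5 of the minor one
print(f"{100 * ratio_rel_sd(1_000_000, 100_000):.2f} %")   # ~0.33 %
```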

  3. Multiscale geometric modeling of macromolecules II: Lagrangian representation

    Science.gov (United States)

    Feng, Xin; Xia, Kelin; Chen, Zhan; Tong, Yiying; Wei, Guo-Wei

    2013-01-01

    Geometric modeling of biomolecules plays an essential role in the conceptualization of biomolecular structure, function, dynamics and transport. Qualitatively, geometric modeling offers a basis for molecular visualization, which is crucial for the understanding of molecular structure and interactions. Quantitatively, geometric modeling bridges the gap between molecular information, such as that from X-ray, NMR and cryo-EM, and theoretical/mathematical models, such as molecular dynamics, the Poisson-Boltzmann equation and the Nernst-Planck equation. In this work, we present a family of variational multiscale geometric models for macromolecular systems. Our models are able to combine multiresolution geometric modeling with multiscale electrostatic modeling in a unified variational framework. We discuss a suite of techniques for molecular surface generation, molecular surface meshing, molecular volumetric meshing, and the estimation of Hadwiger's functionals. Emphasis is given to the multiresolution representations of biomolecules and the associated multiscale electrostatic analyses, as well as multiresolution curvature characterizations. The resulting fine-resolution representations of a biomolecular system enable the detailed analysis of solvent-solute interaction and ion channel dynamics, while our coarse-resolution representations highlight the compatibility of protein-ligand bindings and the possibility of protein-protein interactions. PMID:23813599

  4. Laser Beam Focus Analyser

    DEFF Research Database (Denmark)

    Nielsen, Peter Carøe; Hansen, Hans Nørgaard; Olsen, Flemming Ove

    2007-01-01

    The quantitative and qualitative description of laser beam characteristics is important for process implementation and optimisation. In particular, a need for quantitative characterisation of beam diameter was identified when using fibre lasers for micro manufacturing. Here the beam diameter limits the obtainable features in direct laser machining as well as heat affected zones in welding processes. This paper describes the development of a measuring unit capable of analysing beam shape and diameter of lasers to be used in manufacturing processes. The analyser is based on the principle of a rotating mechanical wire being swept through the laser beam at varying Z-heights. The reflected signal is analysed and the resulting beam profile determined. The development comprised the design of a flexible fixture capable of providing both rotation and Z-axis movement, control software including data capture

  5. Heat transfer calculations for the High Flux Isotope Reactor (HFIR). Technical specifications: bases for safety limits and limiting safety system settings

    International Nuclear Information System (INIS)

    Sims, T.M.; Swanks, J.H.

    1977-09-01

    Heat transfer analyses, in support of the preparation of the HFIR technical specifications, were made to establish the bases for the safety limits and limiting safety system settings applicable to the HFIR. The results of these analyses, along with the detailed bases, are presented

  6. The paddle move commonly used in magic tricks as a means for analysing the perceptual limits of combined motion trajectories.

    Science.gov (United States)

    Hergovich, Andreas; Gröbl, Kristian; Carbon, Claus-Christian

    2011-01-01

    Following Gustav Kuhn's inspiring technique of using magicians' acts as a source of insight into the cognitive sciences, we used the 'paddle move' for testing the psychophysics of combined movement trajectories. The paddle move is a standard technique in magic consisting of a combined rotating and tilting movement. Careful control of the mutual speed parameters of the two movements makes it possible to inhibit the perception of the rotation, letting the 'magic' effect emerge: a sudden change of the tilted object. Using 3-D animated computer graphics, we analysed the interaction of different angular speeds and the object shape/size parameters in evoking this motion disappearance effect. An angular speed of 540° s⁻¹ (1.5 rev s⁻¹) sufficed to inhibit the perception of the rotary movement, with the smallest object showing the strongest effect. 90.7% of the 172 participants were not able to perceive the rotary movement at an angular speed of 1125° s⁻¹ (3.125 rev s⁻¹). Further analysis by multiple linear regression revealed major influences of object height and object area on the effectiveness of the magic trick, demonstrating the applicability of analysing key factors of magic tricks to reveal limits of the perceptual system.

  7. Moving beyond a limited follow-up in cost-effectiveness analyses of behavioral interventions

    NARCIS (Netherlands)

    Prenger, Hendrikje Cornelia; Pieterse, Marcel E.; Braakman-Jansen, Louise Marie Antoinette; van der Palen, Jacobus Adrianus Maria; Christenhusz, Lieke C.A.; Seydel, E.R.

    2013-01-01

    Background Cost-effectiveness analyses of behavioral interventions typically use a dichotomous outcome criterion. However, achieving behavioral change is a complex process involving several steps towards a change in behavior. Delayed effects may occur after an intervention period ends, which can

  8. Individual-based analyses reveal limited functional overlap in a coral reef fish community.

    Science.gov (United States)

    Brandl, Simon J; Bellwood, David R

    2014-05-01

    Detailed knowledge of a species' functional niche is crucial for the study of ecological communities and processes. The extent of niche overlap, functional redundancy and functional complementarity is of particular importance if we are to understand ecosystem processes and their vulnerability to disturbances. Coral reefs are among the most threatened marine systems, and anthropogenic activity is changing the functional composition of reefs. The loss of herbivorous fishes is particularly concerning as the removal of algae is crucial for the growth and survival of corals. Yet, the foraging patterns of the various herbivorous fish species are poorly understood. Using a multidimensional framework, we present novel individual-based analyses of species' realized functional niches, which we apply to a herbivorous coral reef fish community. In calculating niche volumes for 21 species, based on their microhabitat utilization patterns during foraging, and computing functional overlaps, we provide a measurement of functional redundancy or complementarity. Complementarity is the inverse of redundancy and is defined as less than 50% overlap in niche volumes. The analyses reveal extensive complementarity with an average functional overlap of just 15.2%. Furthermore, the analyses divide herbivorous reef fishes into two broad groups. The first group (predominantly surgeonfishes and parrotfishes) comprises species feeding on exposed surfaces and predominantly open reef matrix or sandy substrata, resulting in small niche volumes and extensive complementarity. In contrast, the second group consists of species (predominantly rabbitfishes) that feed over a wider range of microhabitats, penetrating the reef matrix to exploit concealed surfaces of various substratum types. These species show high variation among individuals, leading to large niche volumes, more overlap and less complementarity. These results may have crucial consequences for our understanding of herbivorous processes on

  9. Multiresolution edge detection using enhanced fuzzy c-means clustering for ultrasound image speckle reduction

    Energy Technology Data Exchange (ETDEWEB)

    Tsantis, Stavros [Department of Medical Physics, School of Medicine, University of Patras, Rion, GR 26504 (Greece); Spiliopoulos, Stavros; Karnabatidis, Dimitrios [Department of Radiology, School of Medicine, University of Patras, Rion, GR 26504 (Greece); Skouroliakou, Aikaterini [Department of Energy Technology Engineering, Technological Education Institute of Athens, Athens 12210 (Greece); Hazle, John D. [Department of Imaging Physics, The University of Texas MD Anderson Cancer Center, Houston, Texas 77030 (United States); Kagadis, George C., E-mail: gkagad@gmail.com, E-mail: George.Kagadis@med.upatras.gr, E-mail: GKagadis@mdanderson.org [Department of Medical Physics, School of Medicine, University of Patras, Rion, GR 26504, Greece and Department of Imaging Physics, The University of Texas MD Anderson Cancer Center, Houston, Texas 77030 (United States)

    2014-07-15

    Purpose: Speckle suppression in ultrasound (US) images of various anatomic structures via a novel speckle noise reduction algorithm. Methods: The proposed algorithm employs an enhanced fuzzy c-means (EFCM) clustering and multiresolution wavelet analysis to distinguish edges from speckle noise in US images. The edge detection procedure involves a coarse-to-fine strategy with spatial and interscale constraints so as to classify wavelet local maxima distribution at different frequency bands. As an outcome, an edge map across scales is derived, whereas the wavelet coefficients that correspond to speckle are suppressed in the inverse wavelet transform, acquiring the denoised US image. Results: A total of 34 thyroid, liver, and breast US examinations were performed on a Logiq 9 US system. Each of these images was subjected to the proposed EFCM algorithm and, for comparison, to commercial speckle reduction imaging (SRI) software and another well-known denoising approach, Pizurica's method. The quantification of the speckle suppression performance in the selected set of US images was carried out via the Speckle Suppression Index (SSI), with results of 0.61, 0.71, and 0.73 for the EFCM, SRI, and Pizurica's methods, respectively. Peak signal-to-noise ratios of 35.12, 33.95, and 29.78 and edge preservation indices of 0.94, 0.93, and 0.86 were found for the EFCM, SRI, and Pizurica's methods, respectively, demonstrating that the proposed method achieves superior speckle reduction performance and edge preservation properties. Based on two independent radiologists' qualitative evaluation, the proposed method significantly improved image characteristics over standard baseline B-mode images and those processed with Pizurica's method. Furthermore, it yielded results similar to those for SRI for breast and thyroid images and significantly better results than SRI for liver imaging, thus improving diagnostic accuracy in both superficial and in-depth structures. Conclusions: A
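
    The PSNR figure of merit quoted above has a standard definition; a minimal sketch for two equally sized images on an assumed 8-bit scale (the arrays are synthetic):

```python
# Sketch: peak signal-to-noise ratio in dB (8-bit range assumed;
# synthetic images, not the study's US data).
import numpy as np

def psnr(reference, processed, peak=255.0):
    """Peak signal-to-noise ratio between two equally sized images."""
    mse = np.mean((reference.astype(float) - processed.astype(float)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

rng = np.random.default_rng(3)
clean = rng.integers(0, 256, (64, 64)).astype(float)
noisy = clean + 5.0 * rng.standard_normal(clean.shape)
print(f"{psnr(clean, noisy):.1f} dB")
```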

  11. Analyses and characterization of double shell tank

    Energy Technology Data Exchange (ETDEWEB)

    1994-10-04

    Evaporator candidate feed from tank 241-AP-108 (108-AP) was sampled under prescribed protocol. Physical, inorganic, and radiochemical analyses were performed on tank 108-AP. Characterization of evaporator feed tank waste is needed primarily for an evaluation of its suitability to be safely processed through the evaporator. Such analyses should provide sufficient information regarding the waste composition to confidently determine whether constituent concentrations are within not only safe operating limits, but should also be relevant to functional limits for operation of the evaporator. Characterization of tank constituent concentrations should provide data which enable a prediction of where the types and amounts of environmentally hazardous waste are likely to occur in the evaporator product streams.

  13. Current limiters

    Energy Technology Data Exchange (ETDEWEB)

    Loescher, D.H. [Sandia National Labs., Albuquerque, NM (United States). Systems Surety Assessment Dept.; Noren, K. [Univ. of Idaho, Moscow, ID (United States). Dept. of Electrical Engineering

    1996-09-01

    The current that flows between the electrical test equipment and the nuclear explosive must be limited to safe levels during electrical tests conducted on nuclear explosives at the DOE Pantex facility. The safest way to limit the current is to use batteries that can provide only acceptably low current into a short circuit; unfortunately this is not always possible. When it is not possible, current limiters, along with other design features, are used to limit the current. Three types of current limiter, the fuse blower, the resistor limiter, and the MOSFET-pass-transistor limiter, are used extensively in Pantex test equipment. Detailed failure mode and effects analyses were conducted on these limiters. Two other types of limiter were also analyzed. It was found that there is no single best type of limiter that should be used in all applications. The fuse blower has advantages when many circuits must be monitored, a low insertion voltage drop is important, and size and weight must be kept low. However, this limiter has many failure modes that can lead to the loss of overcurrent protection. The resistor limiter is simple and inexpensive, but is normally usable only on circuits for which the nominal current is less than a few tens of milliamperes. The MOSFET limiter can be used on high-current circuits, but it has a number of single-point failure modes that can lead to a loss of protective action. Because bad component placement or poor wire routing can defeat any limiter, placement and routing must be designed carefully and documented thoroughly.
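
    As a worked example of the resistor limiter's constraint described above, the series resistor is sized from the supply voltage and the maximum allowable short-circuit current (the numbers here are illustrative, not Pantex values):

```python
# Worked example: sizing a resistor limiter by Ohm's law (illustrative
# numbers only). Worst case is the output shorted directly to return.
V_SOURCE = 24.0                  # volts, assumed test-equipment supply
I_MAX = 0.010                    # amps, allowable short-circuit current
r_min = V_SOURCE / I_MAX         # lower bound on the series resistance
print(f"R >= {r_min:.0f} ohms")  # 2400 ohms
```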

  14. Analyses of gust fronts by means of limited area NWP model outputs

    Czech Academy of Sciences Publication Activity Database

    Kašpar, Marek

    67-68, - (2003), s. 559-572 ISSN 0169-8095 R&D Projects: GA ČR GA205/00/1451 Institutional research plan: CEZ:AV0Z3042911 Keywords : gust front * limited area NWP model * output Subject RIV: DG - Athmosphere Sciences, Meteorology Impact factor: 1.012, year: 2003

  15. Accelerated safety analyses - structural analyses Phase I - structural sensitivity evaluation of single- and double-shell waste storage tanks

    International Nuclear Information System (INIS)

    Becker, D.L.

    1994-11-01

    Accelerated Safety Analyses - Phase I (ASA-Phase I) have been conducted to assess the appropriateness of existing tank farm operational controls and/or limits as now stipulated in the Operational Safety Requirements (OSRs) and Operating Specification Documents, and to establish a technical basis for the waste tank operating safety envelope. Structural sensitivity analyses were performed to assess the response of the different waste tank configurations to variations in loading conditions, uncertainties in loading parameters, and uncertainties in material characteristics. Extensive documentation of the sensitivity analyses conducted and results obtained are provided in the detailed ASA-Phase I report, Structural Sensitivity Evaluation of Single- and Double-Shell Waste Tanks for Accelerated Safety Analysis - Phase I. This document provides a summary of the accelerated safety analyses sensitivity evaluations and the resulting findings

  16. Heat deposition on the partial limiter

    International Nuclear Information System (INIS)

    Itoh, Kimitaka; Itoh, Sanae-I; Nagasaki, Kazunobu.

    1990-01-01

    The effect of the partial limiter in the outermost magnetic surface of toroidal plasmas is studied. The power deposition on the partial limiter and its effect on the temperature profile are analysed. Interpretation in terms of the perpendicular heat conductivity is also discussed. (author)

  17. Finite element limit analysis based plastic limit pressure solutions for cracked pipes

    International Nuclear Information System (INIS)

    Shim, Do Jun; Huh, Nam Su; Kim, Yun Jae; Kim, Young Jin

    2002-01-01

    Based on detailed FE limit analyses, the present paper provides tractable approximations for plastic limit pressure solutions for axial through-wall cracked pipe; axial (inner) surface cracked pipe; circumferential through-wall cracked pipe; and circumferential (inner) surface cracked pipe. Comparisons with existing analytical and empirical solutions show a large discrepancy in circumferential short through-wall cracks and in surface cracks (both axial and circumferential). Being based on detailed 3-D FE limit analysis, the present solutions are believed to be the most accurate, and thus to be valuable information not only for plastic collapse analysis of pressurised piping but also for estimating non-linear fracture mechanics parameters based on the reference stress approach
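
    For context, the reference stress approach mentioned above uses the limit load directly: in its standard form, the plastic limit pressure P_L maps an applied pressure P to a reference stress through the yield stress σ_y, as sketched below:

```latex
% Standard reference-stress definition used with limit-load solutions.
\sigma_{\mathrm{ref}} = \frac{P}{P_L}\,\sigma_y
```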

  18. Fourier Transform Mass Spectrometry: The Transformation of Modern Environmental Analyses

    Science.gov (United States)

    Lim, Lucy; Yan, Fangzhi; Bach, Stephen; Pihakari, Katianna; Klein, David

    2016-01-01

    Unknown compounds in environmental samples are difficult to identify using standard mass spectrometric methods. Fourier transform mass spectrometry (FTMS) has revolutionized how environmental analyses are performed. With its unsurpassed mass accuracy, high resolution and sensitivity, researchers now have a tool for difficult and complex environmental analyses. Two features of FTMS are responsible for changing the face of how complex analyses are accomplished. The first is the ability to determine, quickly and with high mass accuracy, the presence of unknown chemical residues in samples. For years, the field was limited by mass spectrometric methods that required knowing in advance what the compounds of interest were. The second is that, by utilizing the high resolution capabilities coupled with the low detection limits of FTMS, analysts can dilute the sample sufficiently to minimize the ionization changes arising from varied matrices. PMID:26784175

  19. Legal weight truck cask model impact limiter response

    International Nuclear Information System (INIS)

    Meinert, N.M.; Shappert, L.B.

    1989-01-01

    Dynamic and quasi-static quarter-scale model testing was performed to supplement the analytical case presented in the Nuclear Assurance Corporation Legal Weight Truck (NAC LWT) cask transport licensing application. Four successive drop tests from 9.0 meters (30 feet) onto an unyielding surface, and one 1.0-meter (40-inch) drop onto a scale mild steel pin 3.8 centimeters (1.5 inches) in diameter, corroborated the impact limiter design and structural analyses presented in the licensing application. Quantitative measurements made during drop testing support the impact limiter analyses. High-speed photography of the tests confirms that only a small amount of energy is elastically stored in the aluminum honeycomb and that oblique drop slapdown is not significant. The qualitative conclusion is that the limiter-protected LWT cask will not sustain permanent structural damage and that containment will be maintained subsequent to a hypothetical accident, as shown by structural analyses.

  20. Pump limiter studies in Tore Supra

    International Nuclear Information System (INIS)

    Chatelier, M.; Bonnel, P.; Bruneau, J.L.; Gil, C.; Grisolia, C.; Loarer, T.; Martin, G.; Pegourie, B.; Rodriguez, L.

    1991-01-01

    The aim of the Tore Supra pump limiter program is to study particle exhaust with a pump limiter system in long-pulse discharges with continuous pellet fueling and strong auxiliary heating. The pump limiter system consists of six vertical modules, located at the bottom of the machine, and one horizontal module at the outer midplane. The instrumentation of the limiter includes pressure gauges, a residual gas analyser, Langmuir probes, a spectrometer and water calorimeters. Initial results in low-density discharges, with the outboard limiter only, showed a modest effect on the plasma density, while large exhaust fluxes were measured in the pump limiter without any external fueling.

  1. Hydrogeologic characterization and evolution of the 'excavation damaged zone' by statistical analyses of pressure signals: application to galleries excavated at the clay-stone sites of Mont Terri (Ga98) and Tournemire (Ga03)

    International Nuclear Information System (INIS)

    Fatmi, H.; Ababou, R.; Matray, J.M.; Joly, C.

    2010-01-01

    Document available in extended abstract form only. This paper presents methods of statistical analysis and interpretation of hydrogeological signals in clayey formations, e.g., pore water pressure and atmospheric pressure. The purpose of these analyses is to characterize the hydraulic behaviour of this type of formation in the case of a deep repository of mid-level/high-level and long-lived radioactive wastes, and to study the evolution of the geologic formation and its EDZ (Excavation Damaged Zone) during the excavation of galleries. We focus on galleries Ga98 and Ga03 at the sites of Mont Terri (Jura, Switzerland) and Tournemire (Aveyron, France), through data collected in the BPP-1 and PH2 boreholes, respectively. The Mont Terri site, crossing the Aalenian Opalinus clay-stone, is an underground laboratory managed by an international consortium, namely the Mont Terri project (Switzerland). The Tournemire site, crossing the Toarcian clay-stone, is an underground research facility managed by IRSN (France). We have analysed pore water and atmospheric pressure signals at these sites, sometimes in correlation with other data. The methods of analysis are based on the theory of stationary random signals (correlation functions, Fourier spectra, transfer functions, envelopes), and on multi-resolution wavelet analysis (adapted to nonstationary and evolutionary signals). These methods are also combined with filtering techniques, and they can be used for single signals as well as for pairs of signals (cross-analyses). The objective of this work is to exploit pressure measurements in selected boreholes from the two compacted clay sites in order to: (i) evaluate phenomena affecting the measurements (earth tides, barometric pressure, etc.); (ii) estimate hydraulic properties (specific storage, etc.) of the clay-stones prior to excavation works and compare them with those estimated by pulse or slug tests on shorter time scales; and (iii) analyze the effects of drift excavation on pore pressures.
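
    As an illustration of the multi-resolution wavelet step described in this record, the following minimal Python sketch decomposes a synthetic pore-pressure record into approximation and detail subbands with PyWavelets; the signal, sampling, wavelet choice ('db4') and decomposition level are illustrative assumptions, not the study's actual processing chain.

        import numpy as np
        import pywt

        # Hypothetical hourly pore-pressure record (kPa); a stand-in for a BPP-1/PH2 series.
        rng = np.random.default_rng(0)
        t = np.arange(4096)  # hours
        pressure = 1200.0 + 5.0 * np.sin(2 * np.pi * t / 12.42) + rng.normal(0, 0.5, t.size)

        # Multi-resolution decomposition: one approximation band + 6 detail subbands.
        coeffs = pywt.wavedec(pressure, wavelet='db4', level=6)

        # Reconstruct each subband separately to inspect variability scale by scale.
        subbands = []
        for j in range(len(coeffs)):
            kept = [c if i == j else np.zeros_like(c) for i, c in enumerate(coeffs)]
            subbands.append(pywt.waverec(kept, 'db4'))

        # The subbands sum back to the original signal (up to numerical precision).
        print(np.allclose(sum(subbands)[:pressure.size], pressure))

    Scale-by-scale reconstructions of this kind are what, in principle, allow tidal, barometric and excavation-related components of such signals to be inspected separately.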

  2. Fourier Transform Mass Spectrometry: The Transformation of Modern Environmental Analyses

    Directory of Open Access Journals (Sweden)

    Lucy Lim

    2016-01-01

    Full Text Available Unknown compounds in environmental samples are difficult to identify using standard mass spectrometric methods. Fourier transform mass spectrometry (FTMS) has revolutionized how environmental analyses are performed. With its unsurpassed mass accuracy, high resolution and sensitivity, researchers now have a tool for difficult and complex environmental analyses. Two features of FTMS are responsible for changing the face of how complex analyses are accomplished. First is the ability to determine quickly and with high mass accuracy the presence of unknown chemical residues in samples. For years, the field has been limited by mass spectrometric methods that relied on knowing in advance what the compounds of interest were. Secondly, by utilizing the high resolution capabilities coupled with the low detection limits of FTMS, analysts can also dilute the sample sufficiently to minimize the ionization changes caused by varied matrices.

  3. Microstructures, Forming Limit and Failure Analyses of Inconel 718 Sheets for Fabrication of Aerospace Components

    Science.gov (United States)

    Sajun Prasad, K.; Panda, Sushanta Kumar; Kar, Sujoy Kumar; Sen, Mainak; Murty, S. V. S. Naryana; Sharma, Sharad Chandra

    2017-04-01

    Recently, aerospace industries have shown increasing interest in the forming limits of Inconel 718 sheet metals, which can be utilised in designing tools and selecting process parameters for the successful fabrication of components. In the present work, the stress-strain response with failure strains was evaluated by uniaxial tensile tests in different orientations, and two-stage work-hardening behavior was observed. In spite of a highly preferred texture, tensile properties showed minor variations in different orientations due to the random distribution of nanoprecipitates. The forming limit strains were evaluated by deforming specimens in seven different strain paths using a limiting dome height (LDH) test facility. Mostly, the specimens failed without prior indication of localized necking. Thus, the fracture forming limit diagram (FFLD) was evaluated, and a bending correction was imposed due to the use of a sub-size hemispherical punch. The failure strains of the FFLD were converted into major-minor stress space (σ-FFLD) and effective plastic strain-stress triaxiality space (ηEPS-FFLD) as failure criteria to avoid strain path dependence. Moreover, an FE model was developed, and the LDH, strain distribution and failure location were predicted successfully using the above-mentioned failure criteria with two stages of work hardening. Fractographs were correlated with the fracture behavior and formability of the sheet metal.

  4. Methodologies for the practical determination and use of method detection limits

    International Nuclear Information System (INIS)

    Rucker, T.L.

    1995-01-01

    Method detection limits have often been misunderstood and misused. The basic definitions developed by Lloyd Currie and others have been combined with assumptions that are inappropriate for many types of radiochemical analyses. A practical way of determining detection limits based on Currie's basic definition is presented that removes the reliance on assumptions and accounts for the total measurement uncertainty. Examples of proper and improper use of detection limits are also presented, including detection limits reported by commercial software for gamma spectroscopy and neutron activation analyses. (author) 6 refs.; 2 figs
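
    For readers unfamiliar with Currie's construction, the following minimal Python sketch computes the decision threshold and detection limit from a paired-blank count; it illustrates the textbook definitions under an assumed blank of 100 counts, not the specific procedure of this paper.

        from math import sqrt

        def currie_limits(blank_counts, k=1.645):
            """Currie decision threshold L_C and detection limit L_D (in net counts)
            for a paired-blank measurement with alpha = beta = 0.05."""
            sigma0 = sqrt(2.0 * blank_counts)  # std. dev. of net counts under the null
            L_C = k * sigma0                   # report "detected" only above this
            L_D = k ** 2 + 2.0 * L_C           # true net signal detectable with power 1 - beta
            return L_C, L_D

        L_C, L_D = currie_limits(blank_counts=100.0)
        print(f"L_C = {L_C:.1f} net counts, L_D = {L_D:.1f} net counts")

    With 100 blank counts this gives an L_C of about 23 and an L_D of about 49 net counts; the paper's point is that such limits must also fold in the total measurement uncertainty, not counting statistics alone.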

  5. Determination of detection limits for a VPD ICPMS method of analysis; Determination des limites de detection d'une methode d'analyse VPD ICPMS

    Energy Technology Data Exchange (ETDEWEB)

    Badard, M.; Veillerot, M

    2007-07-01

    This training course report presents the different methods for detecting and quantifying metallic impurities in semiconductors. One of the most precise techniques is the collection of metal impurities by vapor phase decomposition (VPD) followed by their analysis by ICPMS (inductively coupled plasma mass spectrometry). The study shows the importance of detection limits in the domain of chemical analysis and the way to determine them for ICPMS analysis. The results found for the detection limits are excellent. Even if the detection limits reached with ICPMS performed after manual or automatic VPD are much higher than the detection limits of ICPMS alone, this method remains one of the most sensitive for ultra-trace analysis. (J.S.)

  6. Possibilities, shortcomings, and limits of economic analyses of a phaseout of nuclear power

    International Nuclear Information System (INIS)

    Schefold, B.

    1989-01-01

    The economic dimensions of a phaseout can be modelled only very partially. For instance, the question concerning the extent of effective competition can be answered quantitatively only within certain limits. As to the related regulatory-policy evaluation of the measures required both for boosting and for phasing out nuclear power - as well as other aspects of social compatibility - arguments can only be brought forward, without a final judgement being given. While standards do exist for this, there are questions which cannot be answered without a certain degree of subjectivity. In all likelihood, economic analysis has in fact reached its limits: our complex picture of the interaction between the economy, econometrics, energy scenarios, political preconditions and the given social conditions would be much simplified if it were possible to predict the consequences of a whole cluster of policy measures of the kind that would be associated with a decision for a phaseout. This demand on analysis is not likely to be fulfilled within a foreseeable period of time. For the time being, we must content ourselves with comparing scenarios as honestly as possible. This is bound to provide incomplete answers. (orig./HSCH) [de]

  7. US ITER limiter module design

    International Nuclear Information System (INIS)

    Mattas, R.F.; Billone, M.; Hassanein, A.

    1996-08-01

    The recent U.S. effort on the ITER (International Thermonuclear Experimental Reactor) shield has been focused on the limiter module design. This is a multi-disciplinary effort that covers design layout, fabrication, thermal hydraulics, materials evaluation, thermo-mechanical response, and predicted response during off-normal events. The results of the design analyses are presented. Conclusions and recommendations are also presented concerning the capability of the limiter modules to meet performance goals and to be fabricated within design specifications using existing technology.

  8. Advanced toroidal facility vacuum vessel stress analyses

    International Nuclear Information System (INIS)

    Hammonds, C.J.; Mayhall, J.A.

    1987-01-01

    The complex geometry of the Advanced Toroidal Facility (ATF) vacuum vessel required special analysis techniques in investigating the structural behavior of the design. The response of a large-scale finite element model was found for transportation and operational loading. Several computer codes and systems, including the National Magnetic Fusion Energy Computer Center Cray machines, were employed in accomplishing these analyses. The work combined complex methods that taxed the limits of both the codes and the computer systems involved. Use of MSC/NASTRAN cyclic-symmetry solutions made it possible to model only 1/12 of the vessel geometry while mathematically analyzing the entire vessel. This allowed the greater detail and accuracy demanded by the complex geometry of the vessel. Critical buckling-pressure analyses were performed with the same model. The development, results, and problems encountered in performing these analyses are described. 5 refs., 3 figs

  9. Vacuum physics analysis of HT-7 superconducting tokamak pump limiter

    International Nuclear Information System (INIS)

    Hu Jiansheng; Li Chengfu; He Yexi

    1998-10-01

    The pump limiter is analysed using the HT-7 superconducting tokamak parameters and the pump limiter construction. The particle exhaust efficiency of the pump limiter can reach about 7.7%. The pump limiter can therefore be applied in the HT-7 device and is expected to have a beneficial effect on plasma discharges.

  10. EU-FP7-iMARS: analysis of Mars multi-resolution images using auto-coregistration, data mining and crowd source techniques: A Mid-term Report

    Science.gov (United States)

    Muller, J.-P.; Yershov, V.; Sidiropoulos, P.; Gwinner, K.; Willner, K.; Fanara, L.; Waelisch, M.; van Gasselt, S.; Walter, S.; Ivanov, A.; Cantini, F.; Morley, J. G.; Sprinks, J.; Giordano, M.; Wardlaw, J.; Kim, J.-R.; Chen, W.-T.; Houghton, R.; Bamford, S.

    2015-10-01

    Understanding the role of different solid surface formation processes within our Solar System is one of the fundamental goals of planetary science research. There has been a revolution in planetary surface observations over the last 8 years, especially in 3D imaging of surface shape (down to resolutions of tens of centimetres) and subsequent terrain correction of imagery from orbiting spacecraft. This has led to the potential to overlay images from different epochs back to the mid-1970s. Within iMars, a processing system has been developed to generate 3D Digital Terrain Models (DTMs) and corresponding OrthoRectified Images (ORIs) fully automatically from NASA MRO HiRISE and CTX stereo-pairs, which are coregistered to corresponding HRSC ORI/DTMs. In parallel, iMars has developed a fully automated processing chain for co-registering level-1 (EDR) images from all previous NASA orbital missions to these HRSC ORIs; in the case of HiRISE, these are further co-registered to previously co-registered CTX-to-HRSC ORIs. Examples will be shown of these multi-resolution ORIs and of the application of different data mining algorithms to change detection using these co-registered images. iMars has recently launched a citizen science experiment to evaluate best practices for future citizen-scientist validation of such data mining results. An example of the iMars website will be shown, along with an embedded Version 0 prototype of a webGIS based on OGC standards.

  11. Analyses of MHD instabilities

    International Nuclear Information System (INIS)

    Takeda, Tatsuoki

    1985-01-01

    In this article, analyses of the MHD stabilities which govern the global behavior of a fusion plasma are described from the viewpoint of numerical computation. First, we describe the high accuracy calculation of the MHD equilibrium and then the analysis of the linear MHD instability. The former is the basis of the stability analysis, and the latter is closely related to the limiting beta value, which is a very important theoretical issue of tokamak research. To attain a stable tokamak plasma with good confinement properties it is necessary to control or suppress disruptive instabilities. Next, we describe the nonlinear MHD instabilities which are related to disruption phenomena. Lastly, we describe vectorization of the MHD codes. The above MHD codes for fusion plasma analyses are relatively simple though very time-consuming, and the parts of the codes which need a lot of CPU time are concentrated in a small portion of the codes; moreover, the codes are usually used by their own developers, which makes it comparatively easy to attain a high performance ratio on the vector processor. (author)

  12. Limit loads for wall-thinning feeder pipes under combined bending and internal pressure

    International Nuclear Information System (INIS)

    Je, Jin Ho; Lee, Kuk Hee; Chung, Ha Joo; Kim, Ju Hee; Han, Jae Jun; Kim, Yun Jae

    2009-01-01

    Flow Accelerated Corrosion (FAC) under in-service conditions produces local wall-thinning in the feeder pipes of CANDU reactors. Wall-thinning in the feeder pipes is a main degradation mechanism affecting the integrity of the piping system. This paper discusses the integrity assessment of wall-thinned feeder pipes using limit load analysis. Based on finite element limit analyses, this paper compares limit loads for wall-thinned feeder pipes under combined bending and internal pressure with proposed limit load solutions. The limit loads are determined from limit analyses based on rectangular wall-thinning and elastic-perfectly plastic materials using the large geometry change option.

  13. Limitations to private properties in the vicinity of nuclear power plants

    International Nuclear Information System (INIS)

    Martini, L.E.

    1978-01-01

    This study, based on Argentine legislation, analyses the limitations to private property in the surroundings of nuclear power plants that are imposed in the public interest. A nuclear power plant could demand different kinds of property limitation, which could vary from restrictions on the absolute nature of property to expropriation. Limitation of property is a different concept from restriction of property; the concept of limitation is wider. For both concepts the author analyses validity requirements, competent bodies, and jurisdiction in case of conflict. He also explains the conditions and process of expropriation. Article 14 of the Constitution constitutes the positive legal basis for limitations of property. In the hypothesis of limitations to property due to the vicinity of nuclear power plants, national legislation takes precedence over provincial law, since it is a matter of national public interest. (author)

  14. FUEL CASK IMPACT LIMITER VULNERABILITIES

    International Nuclear Information System (INIS)

    Leduc, D.; England, J.; Rothermel, R.

    2009-01-01

    Cylindrical fuel casks often have impact limiters surrounding just the ends of the cask shaft in a typical 'dumbbell' arrangement. The primary purpose of these impact limiters is to absorb energy to reduce loads on the cask structure during impacts associated with a severe accident. Impact limiters are also credited in many packages with protecting closure seals and maintaining lower peak temperatures during fire events. For this credit to be taken in safety analyses, the impact limiter attachment system must be shown to retain the impact limiter following Normal Conditions of Transport (NCT) and Hypothetical Accident Conditions (HAC) impacts. Large casks are often certified by analysis only because of the costs associated with testing. Therefore, some cask impact limiter attachment systems have not been tested in real impacts. A recent structural analysis of the T-3 Spent Fuel Containment Cask found problems with the design of the impact limiter attachment system. Assumptions in the original Safety Analysis for Packaging (SARP) concerning the loading in the attachment bolts were found to be inaccurate in certain drop orientations. This paper documents the lessons learned and their applicability to impact limiter attachment system designs.

  15. Multielement trace analyses of SINQ materials by ICP-OES

    Energy Technology Data Exchange (ETDEWEB)

    Keil, R.; Schwikowski, M. [Paul Scherrer Inst. (PSI), Villigen (Switzerland)

    1997-09-01

    Inductively Coupled Plasma Optical Emission Spectrometry was used to analyse 70 elements in various materials used for construction of the SINQ. Detection limits for individual elements depend strongly on the matrix and had to be determined separately. (author) 1 tab.

  16. Limit analysis on FRP-strengthened RC members

    Directory of Open Access Journals (Sweden)

    D. De Domenico

    2014-07-01

    Full Text Available Reinforced concrete (RC) members strengthened with externally bonded fiber-reinforced-polymer (FRP) plates are numerically investigated by a plasticity-based limit analysis approach. The key concept of the present approach is to adopt proper constitutive models for concrete, steel reinforcement bars (re-bars) and FRP strengthening plates according to a multi-yield-criteria formulation. This allows the prediction of concrete crushing, steel bar yielding and FRP rupture that may occur at the ultimate limit state. To simulate such a limit state of the analysed elements, two iterative methods performing linear elastic analyses with adaptive elastic parameters and finite element (FE) descriptions are employed. The peak loads and collapse mechanisms predicted for FRP-plated RC beams are validated by comparison with the corresponding experimental findings.

  17. A simplified technique for shakedown limit load determination

    International Nuclear Information System (INIS)

    Abdalla, Hany F.; Megahed, Mohammad M.; Younan, Maher Y.A.

    2007-01-01

    In this paper, a simplified technique is presented to determine the shakedown limit load of a structure using the finite element method. The simplified technique determines the shakedown limit load without performing lengthy, time-consuming full elastic-plastic cyclic loading simulations or conventional iterative elastic techniques. Instead, the shakedown limit load is determined by performing two analyses, namely an elastic analysis and an elastic-plastic analysis. By extracting the results of the two analyses, the shakedown limit load is determined through the calculation of the residual stresses developed within the structure. The simplified technique is applied and verified using two benchmark shakedown problems, namely the two-bar structure subjected to constant axial force and cyclic thermal loading, and the Bree cylinder subjected to constant internal pressure and cyclic high temperature variation across its wall. The results of the simplified technique showed very good correlation with the analytically determined Bree diagrams of both structures. In order to gain confidence in the simplified technique, the shakedown limit loads output by the simplified technique were used to perform full elastic-plastic cyclic loading simulations to check the shakedown behavior of both structures.
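
    The core of the two-analysis procedure - forming the residual stress field as the difference between the elastic-plastic and purely elastic solutions, then checking the combined stresses against yield - can be sketched in Python as follows; the array layout, the von Mises measure and the Melan-type check are our illustrative reading, not the authors' code.

        import numpy as np

        def von_mises(s):
            # s: (..., 6) stress components [sxx, syy, szz, sxy, syz, szx]
            sx, sy, sz, sxy, syz, szx = np.moveaxis(s, -1, 0)
            return np.sqrt(0.5 * ((sx - sy)**2 + (sy - sz)**2 + (sz - sx)**2)
                           + 3.0 * (sxy**2 + syz**2 + szx**2))

        def shakedown_check(sig_elastic, sig_elastic_plastic, sig_cyclic, yield_stress):
            """Residual stresses = elastic-plastic minus elastic solution at the same load.
            A Melan-type shakedown condition requires residual plus cyclic elastic
            stresses to remain below yield everywhere."""
            sig_residual = sig_elastic_plastic - sig_elastic
            return np.all(von_mises(sig_residual + sig_cyclic) <= yield_stress)

        # Single material point with illustrative values (MPa):
        sig_e   = np.array([100.0, 40.0, 0.0, 10.0, 0.0, 0.0])  # elastic solution
        sig_ep  = np.array([ 90.0, 45.0, 0.0,  8.0, 0.0, 0.0])  # elastic-plastic solution
        sig_cyc = np.array([ 60.0, 20.0, 0.0,  5.0, 0.0, 0.0])  # cyclic elastic stresses
        print(shakedown_check(sig_e, sig_ep, sig_cyc, yield_stress=200.0))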

  18. Guidelines to implement the license renewal technical requirements of 10CFR54 for integrated plant assessments and time-limited aging analyses. Final report

    International Nuclear Information System (INIS)

    Lehnert, G.; Philpot, L.

    1995-11-01

    This report documents the initial results of the Nuclear Energy Institute License Renewal Implementation Guideline Task Force, over the period August 1994 to July 1995, to develop guidance for complying with the technical requirements of 10CFR54. The report also provided a starting point for the development of NEI 95-10, 'Industry Guideline for Implementing the Requirements of 10CFR54 - The License Renewal Rule'. Information in this document can be used by utilities to prepare the technical material needed in an application for license renewal (LR) of a nuclear power unit. This guideline provides methods for identifying systems, structures, and components (SSCs) and their intended functions within the scope of license renewal. It identifies structures and components (SCs) requiring aging management review and methods for performing the aging management review. The guideline provides a process for identifying and evaluating time-limited aging analyses.

  19. RELAP5 analyses of overcooling transients in a pressurized water reactor

    International Nuclear Information System (INIS)

    Bolander, M.A.; Fletcher, C.D.; Ogden, D.M.; Stitt, B.D.; Waterman, M.E.

    1983-01-01

    In support of the Pressurized Thermal Shock Integration Study sponsored by the United States Nuclear Regulatory Commission, the Idaho National Engineering Laboratory has performed analyses of overcooling transients using the RELAP5/MOD1.5 computer code. These analyses were performed for Oconee Plants 1 and 3, which are pressurized water reactors of Babcock and Wilcox lowered-loop design. Results of the RELAP5 analyses are presented, including a comparison with plant data. The capabilities and limitations of the RELAP5/MOD1.5 computer code in analyzing integral plant transients are examined. These analyses require detailed thermal-hydraulic and control system computer models.

  20. Limits in the evolution of biological form: a theoretical morphologic perspective.

    Science.gov (United States)

    McGhee, George R

    2015-12-06

    Limits in the evolution of biological form can be empirically demonstrated by using theoretical morphospace analyses, and actual analytic examples are given for univalved ammonoid shell form, bivalved brachiopod shell form and helical bryozoan colony form. Limits in the evolution of form in these animal groups can be shown to be due to functional and developmental constraints on possible evolutionary trajectories in morphospace. Future evolutionary-limit research is needed to analyse the possible existence of temporal constraint in the evolution of biological form on Earth, and in the search for the possible existence of functional alien life forms on Titan and Triton that are developmentally impossible for Earth life.

  1. Clustering of France Monthly Precipitation, Temperature and Discharge Based on their Multiresolution Links with 500mb Geopotential Height from 1968 to 2008

    Science.gov (United States)

    Massei, N.; Fossa, M.; Dieppois, B.; Vidal, J. P.; Fournier, M.; Laignel, B.

    2017-12-01

    In the context of climate change and the ever growing use of water resources, identifying how the climate and watershed signatures in discharge variability change with geographic location is of prime importance. This study aims at establishing how the 1968-2008 multiresolution links between three local hydrometeorological variables (precipitation, temperature and discharge) and 500 mb geopotential height are structured over France. First, a methodology is introduced that allows the 3D geopotential height data to be encoded into its 1D conformal modulus time series. Then, for each local variable, covariations with the geopotential height are computed with cross wavelet analysis. Finally, a clustering analysis of the cross spectra of each variable is performed using bootstrap clustering. We compare the clustering results for each local variable in order to untangle the watershed from the climate drivers in the discharge of France's rivers. Additionally, we identify the areas in the geopotential height field that are responsible for the spatial structure of each local variable. The main results from this study show that clear spatial zones emerge for precipitation and discharge. Each cluster is characterized by different amplitudes and/or time scales of covariation with geopotential height. Precipitation and discharge clusterings differ, the latter being simpler, which indicates a strong low-frequency modulation by the watersheds all over France. Temperature, on the other hand, shows less clear spatial zones. For precipitation and discharge, we show that the main action path starts at the northern tropical zone and then moves up to the central North Atlantic zone, which seems to indicate that an interaction between convective cell variability and the reinforcement of the westerly jets is one of the main controls of precipitation and discharge over France. Temperature shows a main zone of action directly over France, hinting at local temperature/pressure interactions.
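
    A minimal version of the cross wavelet computation described here - the continuous wavelet transform of one series multiplied by the complex conjugate of the other's - can be sketched with PyWavelets; the synthetic series, the complex Morlet wavelet and the scale grid are illustrative assumptions, not the study's configuration.

        import numpy as np
        import pywt

        rng = np.random.default_rng(1)
        n, dt = 1024, 1.0 / 12.0                 # e.g. monthly samples, in years
        t = np.arange(n) * dt
        precip    = np.sin(2 * np.pi * t / 7.0) + rng.normal(0, 0.3, n)        # stand-ins
        discharge = np.sin(2 * np.pi * t / 7.0 + 0.5) + rng.normal(0, 0.3, n)  # for real series

        scales = np.arange(2, 128)
        W1, freqs = pywt.cwt(precip, scales, 'cmor1.5-1.0', sampling_period=dt)
        W2, _     = pywt.cwt(discharge, scales, 'cmor1.5-1.0', sampling_period=dt)

        xwt = W1 * np.conj(W2)   # cross wavelet spectrum
        power = np.abs(xwt)      # common power: the feature fed to the clustering step
        phase = np.angle(xwt)    # relative phase (lead/lag) at each scale and time

    Clustering the cross spectra then amounts to grouping stations whose power arrays are similar; one simple choice would be a bootstrap-stabilized k-means on the flattened power arrays.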

  2. Soil analyses by ICP-MS (Review)

    International Nuclear Information System (INIS)

    Yamasaki, Shin-ichi

    2000-01-01

    Soil analyses by inductively coupled plasma mass spectrometry (ICP-MS) are reviewed. The first half of the paper is devoted to the development of techniques applicable to soil analyses, where diverse analytical parameters are carefully evaluated. However, the choice of soil samples is somewhat arbitrary, and only a limited number of samples (mostly reference materials) are examined. In the second half, efforts are mostly concentrated on the introduction of reports, where a large number of samples and/or very precious samples have been analyzed. Although the analytical techniques used in these reports are not necessarily novel, valuable information concerning such topics as background levels of elements in soils, chemical forms of elements in soils and behavior of elements in soil ecosystems and the environment can be obtained. The major topics discussed are total elemental analysis, analysis of radionuclides with long half-lives, speciation, leaching techniques, and isotope ratio measurements. (author)

  3. Approaches to optimal aquifer management and intelligent control in a multiresolutional decision support system

    Science.gov (United States)

    Orr, Shlomo; Meystel, Alexander M.

    2005-03-01

    Despite remarkable new developments in stochastic hydrology and adaptations of advanced methods from operations research, stochastic control, and artificial intelligence, solutions of complex real-world problems in hydrogeology have been quite limited. The main reason is the ultimate reliance on first-principle models that lead to complex, distributed-parameter partial differential equations (PDE) on a given scale. While the addition of uncertainty, and hence stochasticity or randomness, has increased insight and highlighted important relationships between uncertainty, reliability, risk, and their effect on the cost function, it has also (a) introduced additional complexity that results in prohibitive computer power even for just a single uncertain/random parameter; and (b) led to the recognition of our inability to assess the full uncertainty even when including all uncertain parameters. A paradigm shift is introduced: an adaptation of new methods of intelligent control that will relax the dependency on rigid, computer-intensive, stochastic PDE, and will shift the emphasis to a goal-oriented, flexible, adaptive, multiresolutional decision support system (MRDS) with strong unsupervised learning (oriented towards anticipation rather than prediction) and highly efficient optimization capability, which could provide the needed solutions of real-world aquifer management problems. The article highlights the links between past developments and future optimization/planning/control of hydrogeologic systems.

  4. Limit loads for pipe bends under combined pressure and in-plane bending based on finite element limit analysis

    International Nuclear Information System (INIS)

    Oh, Chang Sik; Kim, Yun Jae

    2006-01-01

    In the present paper, approximate plastic limit load solutions for pipe bends under combined internal pressure and bending are obtained from detailed three-dimensional (3-D) FE limit analyses based on elastic-perfectly plastic materials with the small geometry change option. The present FE results show that existing limit load solutions for pipe bends are lower bounds but can be very different from the present FE results in some cases, particularly for bending. Accordingly, closed-form approximations are proposed for pipe bends under combined pressure and in-plane bending based on the present FE results. The proposed limit load solutions would provide a basis for assessing defective pipe bends and be useful for estimating non-linear fracture mechanics parameters based on the reference stress approach.

  5. Study of elasticity and limit analysis of joints and branch pipe tee connections; Etude elastique et analyse limite des piquages et des tes

    Energy Technology Data Exchange (ETDEWEB)

    Plancq, David [Nantes Univ., 44 (France)

    1997-09-24

    The industrial context of this study is the behaviour and sizing of pipe joints in PWR and fast neutron reactors. Two aspects have been approached in this framework. The first issue is the elastic behaviour of a pipe joining a plane or spherical surface, or another pipe, in order to get a better understanding of these components, which are usually modelled in classical calculations in a very simplified way. We focused our research on the bending of an intersecting pipe. In the case of the intersection with a plane surface we conducted our study on the basis of literature results. In the case of the intersection with a spherical surface we also solved the problem entirely, by using a spherical shell description different from that usually utilized. Finally, we give an approach to obtaining a simple result for the bending of branch pipe tee joints, allowing the formulation of a specific finite element. The second issue approached is limit analysis, which allows characterising the plastic failure of these structures and defining reference stresses. These stresses are used in numerous applications. We mention here the rules for pipe sizing and analysis under primary load, the mechanics of cracks and the definition of global plasticity criteria. To solve this problem we concentrated our studies on the development of a new calculation technique for the limit load, called the elastic compensation method (ECM). We have tested it on a large number of classical structures and on branch pipe tee connections. We also propose a very simple result regarding the lower bound for the limit bending load of a tee junction. 111 refs., 88 figs., 8 tabs.

  6. Global confinement characteristics of Jet limiter plasmas

    International Nuclear Information System (INIS)

    Campbell, D.J.; Christiansen, J.P.; Cordey, J.G.; Thomas, P.R.; Thomsen, K.

    1989-01-01

    Data from a wide variety of plasma pulses on JET (auxiliary heating, current, field, minority species, plasma shape, etc.) are analysed in order to assess the characteristics of global confinement. The scaling of confinement in ohmically and auxiliary heated discharges is examined. The ohmic confinement in the present new JET configuration (Belt Limiter) is essentially the same as previously. Confinement in auxiliary heated discharges presently shows a slight improvement over 1986. Both ohmic and non-ohmic data are used in a set of confinement time regression analyses, and certain constraints derived from theory are imposed.

  7. Solubility limited radionuclide transport through geologic media

    International Nuclear Information System (INIS)

    Muraoka, Susumu; Iwamoto, Fumio; Pigford, T.H.

    1980-11-01

    Prior analyses of the migration of radionuclides neglect solubility limits of the dissolved radionuclides in geologic media. In reality, however, some of the actinides may appear in chemical forms of very low solubility. In the present report we propose a migration model with no decay parents in which the radionuclide concentration is limited by its solubility in ground water. In addition, analytical solutions for the space-time-dependent concentration are presented for the cases of step release, band release and exponential release. (author)
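
    For concreteness, a standard one-dimensional statement of the kind of migration problem modified here is the advection-dispersion equation with decay, in which the solubility limit enters as a concentration cap at the release boundary; this is a generic textbook form, not necessarily the authors' exact model:

        R \frac{\partial C}{\partial t} = D \frac{\partial^2 C}{\partial x^2}
            - v \frac{\partial C}{\partial x} - \lambda R C,
        \qquad C(0, t) = \min\bigl( C_{\mathrm{release}}(t),\ C_{\mathrm{sol}} \bigr),

    where R is the retardation factor, D the dispersion coefficient, v the pore velocity, \lambda the decay constant and C_{\mathrm{sol}} the elemental solubility in ground water. The step, band and exponential release cases then differ only in the form of C_{\mathrm{release}}(t).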

  8. Can trial sequential monitoring boundaries reduce spurious inferences from meta-analyses?

    DEFF Research Database (Denmark)

    Thorlund, Kristian; Devereaux, P J; Wetterslev, Jørn

    2008-01-01

    BACKGROUND: Results from apparently conclusive meta-analyses may be false. A limited number of events from a few small trials and the associated random error may be under-recognized sources of spurious findings. The information size (IS, i.e. number of participants) required for a reliable ... meta-analyses after each included trial and evaluated their results using a conventional statistical criterion (alpha = 0.05) and two-sided Lan-DeMets monitoring boundaries. We examined the proportion of false positive results and important inaccuracies in estimates of treatment effects that resulted from the two ... approaches. RESULTS: Using the random-effects model and final data, 12 of the meta-analyses yielded P > alpha = 0.05, and 21 yielded P ≤ alpha = 0.05. The monitoring boundaries eliminated all false positives. Important inaccuracies in estimates were observed in 6 out of 21 meta-analyses using the conventional ...
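
    The Lan-DeMets monitoring boundaries referred to are generated from an alpha-spending function; the Python sketch below evaluates O'Brien-Fleming-type spending at a few information fractions. This is the standard construction, not necessarily the authors' exact software.

        from math import sqrt
        from statistics import NormalDist

        def obf_alpha_spent(t, alpha=0.05):
            """Lan-DeMets O'Brien-Fleming-type cumulative alpha spent at
            information fraction t (0 < t <= 1)."""
            z = NormalDist().inv_cdf(1 - alpha / 2)
            return 2.0 * (1.0 - NormalDist().cdf(z / sqrt(t)))

        for t in (0.25, 0.50, 0.75, 1.00):
            print(f"information fraction {t:.2f}: alpha spent = {obf_alpha_spent(t):.5f}")

    Early looks spend almost no alpha (the boundary is very strict when few participants have accrued), which is why such boundaries suppress the spurious early findings the abstract describes.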

  9. Flaking of co-deposited hydrogenated carbon layers on the TFTR limiter

    International Nuclear Information System (INIS)

    Skinner, C.H.; Gentile, C.A.; Menon, M.M.; Barry, R.E.

    1999-01-01

    Flaking of co-deposited layers on the inner limiter tiles was recently observed in TFTR. This phenomenon was unexpected and has occurred since the termination of plasma operations on 4 April 1997. Flaking affects approximately 15% of the observable tiles and appears on isotropic graphite but not on carbon fibre composite tiles. Photographic images of the flakes and precise measurements of the limiter geometry are reported. The mobilizability of tritium retained in co-deposited layers is an important factor in safety analyses of future DT reactors. A programme to analyse the flakes and tiles is underway. (author). Letter-to-the-editor

  10. The Partition of Multi-Resolution LOD Based on Qtm

    Science.gov (United States)

    Hou, M.-L.; Xing, H.-Q.; Zhao, X.-S.; Chen, J.

    2011-08-01

    The partition hierarchy of the Quaternary Triangular Mesh (QTM) determines the accuracy of spatial analysis and applications based on QTM. In order to resolve the problem that the partition hierarchy of QTM is limited by the level of the computer hardware, a new method, Multi-Resolution LOD (Level of Details) based on QTM, is discussed in this paper. This method makes the resolution of the cells vary with the viewpoint position by partitioning the cells of QTM, selecting the particular area according to the viewpoint, and dealing with the cracks caused by different subdivisions; it satisfies the requirement of locally unlimited partition.

  11. THE PARTITION OF MULTI-RESOLUTION LOD BASED ON QTM

    Directory of Open Access Journals (Sweden)

    M.-L. Hou

    2012-08-01

    Full Text Available The partition hierarchy of the Quaternary Triangular Mesh (QTM) determines the accuracy of spatial analysis and applications based on QTM. In order to resolve the problem that the partition hierarchy of QTM is limited by the level of the computer hardware, a new method, Multi-Resolution LOD (Level of Details) based on QTM, is discussed in this paper. This method makes the resolution of the cells vary with the viewpoint position by partitioning the cells of QTM, selecting the particular area according to the viewpoint, and dealing with the cracks caused by different subdivisions; it satisfies the requirement of locally unlimited partition.

  12. Climate agreements under limited participation, asymmetric information and market imperfections

    Energy Technology Data Exchange (ETDEWEB)

    Hagem, Cathrine

    1996-12-31

    This thesis relates to climate agreements and cost efficiency by analysing the formation of a system of quotas leading to a distribution of emission discharges between countries. The main fields concerned are the greenhouse effect, the political process, efficient and cost-effective climate agreements, and climate agreements under limited participation, asymmetric information and market imperfections, covering topics such as limited participation in climate agreements, limited participation and indirect impact on non-participating countries' emissions, limited participation and direct impact on non-participating countries' emissions under asymmetric information, and non-competitive markets for tradeable quotas. 166 refs., 7 tabs.

  13. Bearing Capacity Analyses for the Great Belt East Bridge Anchor Blocks

    DEFF Research Database (Denmark)

    Sørensen, Carsten Steen; Clausen, Carl J. Frimann; Andersen, Henrik

    1993-01-01

    This paper presents a comparison between different methods of bearing capacity analysis: the Upper Bound Method, Limit Equilibrium Analysis and Finite Element Analysis. For the Great Belt East Bridge anchor blocks it was concluded that these methods of calculation agree within 5%. However, for cases ...

  14. Analysis and design of flow limiter used in steam generator

    International Nuclear Information System (INIS)

    Liu Shixun; Gao Yongjun

    1995-10-01

    The flow limiter is an important safety component of the PWR steam generator. It can limit the blowdown rate of the steam generator inventory in case the main steam pipeline breaks, so that the rate of primary coolant temperature reduction can be slowed down in order to prevent fuel elements from burn-out. The venturi type flow limiter is analysed, its flow characteristics are delineated, and physical and mathematical models are defined; a detailed mathematical derivation is provided. The research lays down a theoretical basis for flow limiter design. The governing equations and formulas given can be directly applied to computer analysis of the flow limiter. (3 refs., 3 figs.)
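
    Although the paper's own derivation is not reproduced in this record, the physical principle of a venturi flow limiter is that the mass flow saturates at the choked value once the throat reaches sonic conditions. In the standard ideal-gas textbook form (not the paper's equations),

        \dot{m}_{\max} = C_d\, A_t\, p_0 \sqrt{ \frac{\gamma}{R T_0} \left( \frac{2}{\gamma + 1} \right)^{(\gamma + 1)/(\gamma - 1)} },

    where A_t is the throat area, p_0 and T_0 the upstream stagnation pressure and temperature, \gamma the specific heat ratio, R the specific gas constant and C_d a discharge coefficient; beyond this point a larger downstream break opening cannot increase the blowdown rate.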

  15. Analyses of containment structures with corrosion damage

    International Nuclear Information System (INIS)

    Cherry, J.L.

    1997-01-01

    Corrosion damage that has been found in a number of nuclear power plant containment structures can degrade the pressure capacity of the vessel. This has prompted concerns regarding the capacity of corroded containments to withstand accident loadings. To address these concerns, finite element analyses have been performed for a typical PWR Ice Condenser containment structure. Using ABAQUS, the pressure capacity was calculated for a typical vessel with no corrosion damage. Multiple analyses were then performed with the location of the corrosion and the amount of corrosion varied in each analysis. Using a strain-based failure criterion, a 'lower bound', 'best estimate', and 'upper bound' failure level was predicted for each case. These limits were established by: determining the amount of variability that exists in material properties of typical containments, estimating the amount of uncertainty associated with the level of modeling detail and modeling assumptions, and estimating the effect of corrosion on the material properties.

  16. Limitations of Species Delimitation Based on Phylogenetic Analyses: A Case Study in the Hypogymnia hypotrypa Group (Parmeliaceae, Ascomycota).

    Directory of Open Access Journals (Sweden)

    Xinli Wei

    Full Text Available Delimiting species boundaries among closely related lineages often requires a range of independent data sets and analytical approaches. Similar to other organismal groups, robust species circumscriptions in fungi are increasingly investigated within an empirical framework. Here we attempt to delimit species boundaries in a closely related clade of lichen-forming fungi endemic to Asia, the Hypogymnia hypotrypa group (Parmeliaceae). In the current classification, the Hypogymnia hypotrypa group includes two species: H. hypotrypa and H. flavida, which are separated based on distinctive reproductive modes, the former producing soredia, which are absent in the latter. We reexamined the relationship between these two species using phenotypic characters and molecular sequence data (ITS, GPD, and MCM7 sequences) to address species boundaries in this group. In addition to morphological investigations, we used Bayesian clustering to identify potential genetic groups in the H. hypotrypa/H. flavida clade. We also used a variety of empirical, sequence-based species delimitation approaches, including the "Automatic Barcode Gap Discovery" (ABGD), the Poisson tree process model (PTP), the General Mixed Yule Coalescent (GMYC), and the multispecies coalescent approach BPP. Different species delimitation scenarios were compared using Bayes factor delimitation analysis, in addition to comparisons of pairwise genetic distances and pairwise fixation indices (FST). The majority of the species delimitation analyses implemented in this study failed to support H. hypotrypa and H. flavida as distinct lineages, as did the Bayesian clustering analysis. However, strong support for the evolutionary independence of H. hypotrypa and H. flavida was inferred using BPP and further supported by Bayes factor delimitation. In spite of rigorous morphological comparisons and a wide range of sequence-based approaches to delimit species, species boundaries in the H. hypotrypa group remain uncertain.

  17. Design basis event consequence analyses for the Yucca Mountain project

    International Nuclear Information System (INIS)

    Orvis, D.D.; Haas, M.N.; Martin, J.H.

    1997-01-01

    Design basis event (DBE) definition and analysis is an ongoing and integrated activity among the design and analysis groups of the Yucca Mountain Project (YMP). DBEs are those events that potentially lead to breach of the waste package and waste form (e.g., spent fuel rods) with consequent release of radionuclides to the environment. A Preliminary Hazards Analysis (PHA) provided a systematic screening of external and internal events that were candidate DBEs to be subjected to analyses for radiological consequences. As preparation, pilot consequence analyses for the repository subsurface and surface facilities have been performed to define the methodology, data requirements, and applicable regulatory limits.

  18. Method of accounting for code safety valve setpoint drift in safety analyses

    International Nuclear Information System (INIS)

    Rousseau, K.R.; Bergeron, P.A.

    1989-01-01

    In performing the safety analyses for transients that result in a challenge to the reactor coolant system (RCS) pressure boundary, the general acceptance criterion is that the peak RCS pressure not exceed the American Society of Mechanical Engineers limit of 110% of the design pressure. Without crediting non-safety-grade pressure-mitigating systems, protection against this limit is mainly provided by the primary and secondary code safety valves. In theory, the combination of relief capacity and setpoints for these valves is designed to provide this protection. Generally, banks of valves are set at varying setpoints staggered in 15- to 20-psid increments to minimize the number of valves that would open during an overpressure challenge. In practice, however, when these valves are removed and tested (typically during a refueling outage), setpoints are sometimes found to have drifted by more than 50 psid. This drift should be accounted for in the performance of the safety analysis. This paper describes analyses performed by Yankee Atomic Electric Company (YAEC) to account for safety valve setpoint drift found in testing. The results of these analyses are used to define safety valve operability or acceptance criteria.

  19. An analyser for power plant operations

    International Nuclear Information System (INIS)

    Rogers, A.E.; Wulff, W.

    1990-01-01

    Safe and reliable operation of power plants is essential. Power plant operators need a forecast of what the plant will do when its current state is disturbed. The in-line plant analyser provides precisely this information at relatively low cost. The plant analyser scheme uses a mathematical model of the dynamic behaviour of the plant to establish a numerical simulation. Over a period of time, the simulation is calibrated with measurements from the particular plant in which it is used. The analyser then provides a reference against which to evaluate the plant's current behaviour. It can be used to alert the operator to any atypical excursions or combinations of readings that indicate malfunction or off-normal conditions that, as the Three Mile Island event suggests, are not easily recognised by operators. In a look-ahead mode, it can forecast the behaviour resulting from an intended change in settings or operating conditions. Then, when such changes are made, the plant's behaviour can be tracked against the forecast in order to assure that the plant is behaving as expected. It can be used to investigate malfunctions that have occurred and test possible adjustments in operating procedures. Finally, it can be used to consider how far from the limits of performance the elements of the plant are operating. Then by adjusting settings, the required power can be generated with as little stress as possible on the equipment. (6 figures) (Author)

  20. Construction of wavelets with composite dilations

    International Nuclear Information System (INIS)

    Wu Guochang; Li Zhiqiang; Cheng Zhengxing

    2009-01-01

    In order to overcome the shortcomings of classical wavelets in image processing problems, many generating systems have been developed, building up the wavelet family. In this paper, the notion of AB-multiresolution analysis is generalized, and the corresponding theory is developed. For an AB-multiresolution analysis associated with any expanding matrices, we deduce that there exists a single scaling function in its reducing subspace. Under some conditions, wavelets with composite dilations can be obtained from an AB-multiresolution analysis, which permits the existence of a fast implementation algorithm. We then provide an approach to designing wavelets with composite dilations from classical wavelets, covering separable and partly nonseparable cases. In each section, we construct various examples with nice properties to illustrate the theory.
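
    For reference, the affine systems with composite dilations studied in this literature take the form

        \mathcal{A}_{AB}(\Psi) = \{ D_a D_b T_k \psi^{\ell} :\ a \in A,\ b \in B,\ k \in \mathbb{Z}^n,\ \ell = 1, \dots, L \},

    with translations T_k f(x) = f(x - k) and dilations D_a f(x) = |\det a|^{-1/2} f(a^{-1} x), where A is a set of expanding matrices and B typically consists of matrices with |\det b| = 1, such as shears. The AB-multiresolution analysis generalizes the classical ladder of nested spaces to this two-family dilation structure.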

  1. Construction of Orthonormal Piecewise Polynomial Scaling and Wavelet Bases on Non-Equally Spaced Knots

    Directory of Open Access Journals (Sweden)

    Jean Pierre Astruc

    2007-01-01

    Full Text Available This paper investigates the mathematical framework of multiresolution analysis based on irregularly spaced knot sequences. Our presentation is based on the construction of nested nonuniform spline multiresolution spaces. From these spaces, we present the construction of orthonormal scaling and wavelet basis functions on bounded intervals. For any arbitrary degree of the spline function, we provide an explicit generalization allowing the construction of the scaling and wavelet bases on nontraditional sequences. We show that the orthogonal decomposition is implemented using filter banks where the coefficients depend on the location of the knots on the sequence. Examples of orthonormal spline scaling and wavelet bases are provided. This approach can be used to interpolate irregularly sampled signals in an efficient way, by keeping the multiresolution approach.

  2. Longitudinal Data Analyses Using Linear Mixed Models in SPSS: Concepts, Procedures and Illustrations

    Directory of Open Access Journals (Sweden)

    Daniel T. L. Shek

    2011-01-01

    Full Text Available Although different methods are available for the analyses of longitudinal data, analyses based on generalized linear models (GLM) are criticized as violating the assumption of independence of observations. Alternatively, linear mixed models (LMM) are commonly used to understand changes in human behavior over time. In this paper, the basic concepts surrounding LMM (or hierarchical linear models) are outlined. Although SPSS is a statistical analyses package commonly used by researchers, documentation on LMM procedures in SPSS is not thorough or user friendly. With reference to this limitation, the related procedures for performing analyses based on LMM in SPSS are described. To demonstrate the application of LMM analyses in SPSS, findings based on six waves of data collected in the Project P.A.T.H.S. (Positive Adolescent Training through Holistic Social Programmes) in Hong Kong are presented.

  3. Longitudinal data analyses using linear mixed models in SPSS: concepts, procedures and illustrations.

    Science.gov (United States)

    Shek, Daniel T L; Ma, Cecilia M S

    2011-01-05

    Although different methods are available for the analyses of longitudinal data, analyses based on generalized linear models (GLM) are criticized as violating the assumption of independence of observations. Alternatively, linear mixed models (LMM) are commonly used to understand changes in human behavior over time. In this paper, the basic concepts surrounding LMM (or hierarchical linear models) are outlined. Although SPSS is a statistical analyses package commonly used by researchers, documentation on LMM procedures in SPSS is not thorough or user friendly. With reference to this limitation, the related procedures for performing analyses based on LMM in SPSS are described. To demonstrate the application of LMM analyses in SPSS, findings based on six waves of data collected in the Project P.A.T.H.S. (Positive Adolescent Training through Holistic Social Programmes) in Hong Kong are presented.
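
    While both records above demonstrate the procedure in SPSS, the same kind of model can be sketched in Python for readers without SPSS; the variable names and data below are hypothetical illustrations, not the Project P.A.T.H.S. data or analysis.

        import pandas as pd
        import statsmodels.formula.api as smf

        # Hypothetical long-format longitudinal data: one row per subject per wave.
        df = pd.DataFrame({
            "id":    [1] * 3 + [2] * 3 + [3] * 3 + [4] * 3,
            "wave":  [0, 1, 2] * 4,
            "score": [3.1, 3.4, 3.9, 2.8, 3.0, 3.1,
                      3.5, 3.8, 4.2, 3.0, 3.3, 3.4],
        })

        # Linear mixed model: fixed effect of time (wave), random intercept per subject;
        # a random slope could be added with re_formula="~wave".
        model = smf.mixedlm("score ~ wave", df, groups=df["id"])
        result = model.fit()
        print(result.summary())

    Unlike a GLM fitted to the pooled rows, the grouping structure here makes the repeated observations within each subject explicitly dependent, which is the point made in the abstract.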

  4. Confusion-limited galaxy fields. II. Classical analyses

    International Nuclear Information System (INIS)

    Chokshi, A.; Wright, E.L.

    1989-01-01

    Chokshi and Wright presented a detailed model for simulating the angular distribution of galaxy images in fields that extend to very high redshifts. Standard tools are used to analyze these simulated galaxy fields for the Omega_0 = 0 and Omega_0 = 1 cases in order to test the discriminatory power of these tools. Classical number-magnitude diagrams and surface brightness-color-color diagrams are employed to study crowded galaxy fields. An attempt is made to separate the effects due to stellar evolution in galaxies from those due to the space-time geometry. The results show that this discrimination is maximized at near-infrared wavelengths, where the stellar photospheres are still visible but stellar evolution effects are less severe than those observed at optical wavelengths. Rapid evolution of the stars on the asymptotic giant branch is easily recognized in the simulated data for both cosmologies and serves to discriminate between the two extreme values of Omega_0. Measurements of total magnitudes of individual galaxies are not essential for studying light distribution in galaxies as a function of redshift. Calculations of the extragalactic background radiation are carried out using the simulated data and compared to integrals over the evolutionary models used. 29 refs

  5. Once again about "The Limits to Growth" (in Russian)

    DEFF Research Database (Denmark)

    Nørgaard, Jørgen; Tarasova, Natalia P.; Mustafin, D. I.

    2009-01-01

    The paper analyses the criticism of the pioneering and best-selling report "The Limits to Growth" from 1972 by D. Meadows et al., which outlined future global development options with respect to population, resource depletion, food production, pollution, etc. In the paper it is observed that nothing ... Actually, the report seems to be surprisingly right in its aggregated analyses of future options, and in the present recognition of climate change it would be wise to learn from the report and its updated version.

  6. Results of radiotherapy in craniopharyngiomas analysed by the linear quadratic model

    Energy Technology Data Exchange (ETDEWEB)

    Guerkaynak, M. [Dept. of Radiation Oncology, Hacettepe Univ., Ankara (Turkey); Oezyar, E. [Dept. of Radiation Oncology, Hacettepe Univ., Ankara (Turkey); Zorlu, F. [Dept. of Radiation Oncology, Hacettepe Univ., Ankara (Turkey); Akyol, F.H. [Dept. of Radiation Oncology, Hacettepe Univ., Ankara (Turkey); Lale Atahan, I. [Dept. of Radiation Oncology, Hacettepe Univ., Ankara (Turkey)

    1994-12-31

    In 23 craniopharyngioma patients treated by limited surgery and external radiotherapy, the results concerning local control were analysed by the linear quadratic formula. A biologically effective dose (BED) of 55 Gy, calculated with a time factor and an alpha/beta value of 10 Gy, seemed to be adequate for local control. (orig.).
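
    For context, a BED of the kind quoted is commonly computed from the linear-quadratic model with a repopulation (time) correction; in the standard textbook form, with alpha/beta = 10 Gy as in this abstract (the exact time factor used by the authors may differ),

        \mathrm{BED} = n d \left( 1 + \frac{d}{\alpha/\beta} \right) - \frac{\ln 2}{\alpha} \cdot \frac{T - T_k}{T_p},

    where n is the number of fractions, d the dose per fraction, T the overall treatment time, T_k the kick-off time for repopulation and T_p the effective doubling time.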

  7. Radioactivity analyses and detection limit problems of environmental surveillance at a gas-cooled reactor

    International Nuclear Information System (INIS)

    Johnson, J.E.; Johnson, J.A.

    1988-01-01

    The lower limit of detection (LLD) values required by the USNRC for nuclear power facilities are often difficult to attain even using state-of-the-art detection systems; e.g., the required LLD for I-131 in air is 70 fCi/m3. For a gas-cooled reactor where I-131 has never been observed in effluents, occasional false positive values occur due to counting statistics with high resolution Ge(Li) detectors, contamination from nuclear medicine releases, and systematic error in spectrum analysis. Statistically negative concentration values are often observed. These measurements must be included in the estimation of true mean values. For this and other reasons, the frequency distributions of measured values appear to be log-normal. Difficulties in stating the true means and standard deviations are discussed for these situations.
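
    The point about retaining statistically negative results can be illustrated with a small simulation: truncating negative net activities at zero biases the estimated mean upward, whereas averaging all values, negative ones included, recovers the true mean. The numbers below are made up for illustration, not surveillance data.

        import numpy as np

        rng = np.random.default_rng(42)
        true_activity = 0.5   # hypothetical true mean concentration (arbitrary units)
        noise_sd = 5.0        # counting noise much larger than the signal

        net = rng.normal(true_activity, noise_sd, 10_000)  # measured net values
        print(f"mean of all values:          {net.mean():.3f}")                    # close to 0.5
        print(f"mean with negatives zeroed:  {np.clip(net, 0, None).mean():.3f}")  # biased high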

  8. Interior tomography in microscopic CT with image reconstruction constrained by full field of view scan at low spatial resolution

    Science.gov (United States)

    Luo, Shouhua; Shen, Tao; Sun, Yi; Li, Jing; Li, Guang; Tang, Xiangyang

    2018-04-01

    In high resolution (microscopic) CT applications, the scan field of view should cover the entire specimen or sample to allow complete data acquisition and image reconstruction. However, truncation may occur in projection data and results in artifacts in reconstructed images. In this study, we propose a low resolution image constrained reconstruction algorithm (LRICR) for interior tomography in microscopic CT at high resolution. In general, multi-resolution acquisition based methods can be employed to solve the data truncation problem if the projection data acquired at low resolution are utilized to fill in the truncated projection data acquired at high resolution. However, most existing methods place quite strict restrictions on the data acquisition geometry, which greatly limits their utility in practice. In the proposed LRICR algorithm, full and partial data acquisitions (scans) at low and high resolution, respectively, are carried out. Using the image reconstructed from sparse projection data acquired at low resolution as the prior, a microscopic image at high resolution is reconstructed from the truncated projection data acquired at high resolution. Two synthesized digital phantoms, a raw bamboo culm, and a specimen of mouse femur were utilized to evaluate and verify the performance of the proposed LRICR algorithm. Compared with the conventional TV minimization based algorithm and the multi-resolution scout-reconstruction algorithm, the proposed LRICR algorithm shows significant improvement in reducing the artifacts caused by data truncation, providing a practical solution for high-quality and reliable interior tomography in microscopic CT applications.
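
The abstract does not state the LRICR objective function explicitly, but a reconstruction of this type is usually posed as a regularized least-squares problem in which the low-resolution reconstruction acts as a prior; a plausible sketch of such an objective is:

```latex
\hat{x} = \arg\min_{x} \; \tfrac{1}{2}\left\lVert A_h x - p_h \right\rVert_2^2 \;+\; \lambda\, \mathrm{TV}(x) \;+\; \tfrac{\mu}{2}\left\lVert M \left( x - x_{\mathrm{lr}} \right) \right\rVert_2^2
```

where A_h and p_h are the high-resolution (truncated) projector and projection data, x_lr is the image reconstructed from the sparse low-resolution scan, M is a mask selecting where the prior is enforced, and λ, μ are hypothetical weights; the exact constraint used in the paper may differ.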

  9. Thermal fracture and pump limit of Nd: glass

    International Nuclear Information System (INIS)

    Wang Mingzhe; Ma Wen; Tan Jichun; Zhang Yongliang; Li Mingzhong; Jing Feng

    2011-01-01

    Based on published fracture experiments and 3D transient finite-element analyses, and taking the first principal stress as the criterion and the Griffith crack theory to determine the critical fracture stress, a Weibull statistical model is established to predict the fracture probability of Nd:glass under given pump parameters. Other issues which limit the pump power are also presented. The results show that the fracture limit of the laser medium depends on the optical polishing technology. For a short-pulse, high-energy Nd:glass laser, taking American polishing technology of the 1990s as reference, pump saturation limits the pump power to 18 kW/cm² when the repetition rate is lower than 1 Hz, while thermal fracture limits the pump power when the repetition rate is higher than 10 Hz. (authors)
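
For reference, a volume-integrated Weibull fracture model built on the first-principal-stress criterion typically has the form (a standard formulation; the authors' exact expression is not given in the abstract):

```latex
P_f = 1 - \exp\!\left[ -\int_V \left( \frac{\sigma_1(\mathbf{r})}{\sigma_0} \right)^{m} \frac{\mathrm{d}V}{V_0} \right]
```

where σ₁ is the first-principal-stress field from the transient finite-element solution, m and σ₀ are the Weibull modulus and scale (calibrated to the polishing-dependent critical fracture stress), and V₀ is a reference volume.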

  10. Dose-related beneficial and harmful effects of gabapentin in postoperative pain management – post hoc analyses from a systematic review with meta-analyses and trial sequential analyses

    Directory of Open Access Journals (Sweden)

    Fabritius ML

    2017-11-01

    Twenty-seven trials reported 72 SAEs, of which 83% were reported in the >1050 mg subgroup. No systematic increase in SAEs was observed with increasing doses of gabapentin. Conclusion: Data were sparse, and the small number of trials with low risk of bias is a major limitation for firm conclusions. Taking these limitations into account, we were not able to demonstrate a clear relationship between gabapentin dosage and opioid-sparing or harmful effects. These subgroup analyses are exploratory and hypothesis-generating for future trialists. Keywords: gabapentin, 1-(aminomethyl)cyclohexaneacetic acid, analgesic, postoperative pain management, dose effect

  11. IDEA: Interactive Display for Evolutionary Analyses.

    Science.gov (United States)

    Egan, Amy; Mahurkar, Anup; Crabtree, Jonathan; Badger, Jonathan H; Carlton, Jane M; Silva, Joana C

    2008-12-08

    The availability of complete genomic sequences for hundreds of organisms promises to make obtaining genome-wide estimates of substitution rates, selective constraints and other molecular evolution variables of interest an increasingly important approach to addressing broad evolutionary questions. Two of the programs most widely used for this purpose are codeml and baseml, parts of the PAML (Phylogenetic Analysis by Maximum Likelihood) suite. A significant drawback of these programs is their lack of a graphical user interface, which can limit their user base and considerably reduce their efficiency. We have developed IDEA (Interactive Display for Evolutionary Analyses), an intuitive graphical input and output interface which interacts with PHYLIP for phylogeny reconstruction and with codeml and baseml for molecular evolution analyses. IDEA's graphical input and visualization interfaces eliminate the need to edit and parse text input and output files, reducing the likelihood of errors and improving processing time. Further, its interactive output display gives the user immediate access to results. Finally, IDEA can process data in parallel on a local machine or computing grid, allowing genome-wide analyses to be completed quickly. IDEA provides a graphical user interface that allows the user to follow a codeml or baseml analysis from parameter input through to the exploration of results. Novel options streamline the analysis process, and post-analysis visualization of phylogenies, evolutionary rates and selective constraint along protein sequences simplifies the interpretation of results. The integration of these functions into a single tool eliminates the need for lengthy data handling and parsing, significantly expediting access to global patterns in the data.
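
Since IDEA drives codeml and baseml, the files it prepares behind the scenes are PAML control files. A minimal sketch of how a wrapper in this spirit might compose and launch a codeml run is shown below; the control-file keys are standard PAML options, but the file names and the exact file IDEA writes are assumptions:

```python
# Sketch of driving codeml the way a wrapper such as IDEA might (illustrative).
import subprocess
from pathlib import Path

def write_codeml_ctl(path: Path, seqfile: str, treefile: str, outfile: str) -> None:
    """Compose a minimal codeml control file (one-ratio model, omega estimated)."""
    opts = {
        "seqfile": seqfile,    # codon alignment
        "treefile": treefile,  # phylogeny, e.g. reconstructed with PHYLIP
        "outfile": outfile,
        "seqtype": 1,          # 1 = codon sequences
        "model": 0,            # a single dN/dS ratio across branches
        "NSsites": 0,          # no among-site omega classes
        "fix_omega": 0,        # 0 = estimate omega by maximum likelihood
        "omega": 0.4,          # starting value for the optimizer
    }
    path.write_text("\n".join(f"{k} = {v}" for k, v in opts.items()) + "\n")

write_codeml_ctl(Path("codeml.ctl"), "genes.phy", "genes.nwk", "mlc")
subprocess.run(["codeml", "codeml.ctl"], check=True)  # assumes codeml is on PATH
```

Running one such control file per gene, in parallel, is essentially what makes genome-wide analyses of this kind tractable.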

  12. Sensor Pods: Multi-Resolution Surveys from a Light Aircraft

    Directory of Open Access Journals (Sweden)

    Conor Cahalane

    2017-02-01

    Full Text Available Airborne remote sensing, whether performed from conventional aerial survey platforms such as light aircraft or the more recent Remotely Piloted Airborne Systems (RPAS), has the ability to complement mapping generated using earth-orbiting satellites, particularly for areas that may experience prolonged cloud cover. Traditional aerial platforms are costly but capture spectral resolution imagery over large areas. RPAS are relatively low-cost and provide very high resolution imagery, but this is limited to small areas. We believe that we are the first group to retrofit these new, low-cost, lightweight sensors in a traditional aircraft. Unlike RPAS surveys, which have a limited payload, this is the first time that a method has been designed to operate four distinct RPAS sensors simultaneously: hyperspectral, thermal, multispectral, RGB and video. This means that imagery covering a broad range of the spectrum, captured during a single survey through different image capture techniques (frame, pushbroom, video), can be applied to investigate multiple aspects of the surrounding environment, such as soil moisture, vegetation vitality, topography or drainage. In this paper, we present the initial results validating our innovative hybrid system adapting dedicated RPAS sensors for a light aircraft sensor pod, thereby providing the benefits of both methodologies. Simultaneous image capture with a Nikon D800E SLR and a series of dedicated RPAS sensors, including a FLIR thermal imager, a four-band multispectral camera and a 100-band hyperspectral imager, was enabled by integration in a single sensor pod operating from a Cessna C172. However, to enable accurate sensor fusion for image analysis, each sensor must first be combined in a common vehicle coordinate system and a method for triggering, time-stamping and calculating the position/pose of each sensor at the time of image capture devised. Initial tests were carried out over agricultural regions with
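
The pose step described at the end of the abstract, placing each pod sensor in a common vehicle coordinate system at the instant of triggering, is conventionally a lever-arm transformation of the GNSS/IMU solution. A minimal sketch follows; the function names and the Z-Y-X Euler convention are assumptions, not the authors' implementation:

```python
import numpy as np

def rotation_zyx(roll: float, pitch: float, yaw: float) -> np.ndarray:
    """Body-to-world rotation from roll/pitch/yaw (radians), Z-Y-X convention."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    return Rz @ Ry @ Rx

def sensor_position(p_gnss: np.ndarray, rpy: tuple, lever_arm: np.ndarray) -> np.ndarray:
    """World position of a pod sensor: GNSS antenna position plus the
    body-frame antenna-to-sensor offset (lever arm) rotated into the world."""
    return p_gnss + rotation_zyx(*rpy) @ lever_arm

# Example: a sensor mounted 0.8 m behind and 0.3 m below the antenna.
p = sensor_position(np.array([521340.2, 5921034.7, 312.5]),
                    (0.01, -0.02, 1.57),
                    np.array([-0.8, 0.0, -0.3]))
```

A per-sensor boresight rotation would be composed in the same way to obtain the full pose at each time-stamped trigger.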

  13. Passive safety injection experiments and analyses (PAHKO)

    International Nuclear Information System (INIS)

    Tuunanen, J.

    1998-01-01

    The PAHKO project involved experiments on the PACTEL facility and computer simulations of selected experiments. The experiments focused on the performance of the Passive Safety Injection Systems (PSIS) of Advanced Light Water Reactors (ALWRs) in Small Break Loss-Of-Coolant Accident (SBLOCA) conditions. The PSIS consisted of a Core Make-up Tank (CMT) and two pipelines (Pressure Balancing Line, PBL, and Injection Line, IL). The examined PSIS worked efficiently in SBLOCAs, although the flow through the PSIS stopped temporarily if the break was very small and hot water filled the CMT. The experiments demonstrated the importance of the flow distributor in the CMT for limiting rapid condensation. The project included validation of three thermal-hydraulic computer codes (APROS, CATHARE and RELAP5). The analyses showed that the codes are capable of simulating the overall behaviour of the transients. The detailed analyses of the results showed that some models in the codes still need improvement. In particular, further development of the models for thermal stratification, condensation and natural circulation flow with small driving forces would be necessary for accurate simulation of the PSIS phenomena. (orig.)

  14. Thermal hydraulic analyses of LVR-15 research reactor with IRT-M fuel

    International Nuclear Information System (INIS)

    Macek, J.

    1997-01-01

    The LVR-15 pool-type research reactor has been in operation at the Nuclear Research Institute at Rez since 1955. Following a number of reconstructions and redesigns, the current reactor power is 15 MW. Thermal hydraulic analyses to demonstrate that the core heat will be safely removed during operation as well as in accident situations were performed based on a methodology which had been specifically developed for the LVR-15 research reactor. This methodology was applied to stationary thermal hydraulic computations as well as to transients, particularly reactivity failures and emergencies involving loss of the circulation pumps. The applied methodology and the core configuration as used in the Safety Report are described. The initial and boundary conditions are then considered, and a summary of the calculated failures with regard to the defined safety limits is presented. The results of the core configuration analyses are also discussed with respect to meeting the safety limits and to the applicability of the methodology for this purpose

  15. Use of gamma spectroscopy in activation analysis; Utilisation de la spectrographie gamma dans l'analyse par activation

    Energy Technology Data Exchange (ETDEWEB)

    Leveque [Commissariat a l'Energie Atomique, Saclay (France), Centre d'Etudes Nucleaires]

    1959-07-01

    Brief review of the principles of activation analysis: calculation of activities, decay curves, β absorption curves, examples of application. - Principle and description of the γ spectrograph. - Practical use of the γ spectrograph: analysis by activation, analysis by β-x fluorescence. - Sensitivity limit of the method and precision of the measurements. - Possible improvements to the method: γ spectroscopy with elimination of the Compton effect. (author)

  16. Finite element limit loads for non-idealized through-wall cracks in thick-walled pipe

    International Nuclear Information System (INIS)

    Shim, Do-Jun; Han, Tae-Song; Huh, Nam-Su

    2013-01-01

    Highlights: • The lower bound bulging factor of thin-walled pipe can be used for thick-walled pipe. • The limit loads are proposed for thick-walled, transition through-wall cracked pipe. • The correction factors are proposed for estimating limit loads of transition cracks. • The limit loads of short transition cracks are similar to those of idealized cracks. - Abstract: The present paper provides plastic limit loads for non-idealized through-wall cracks in thick-walled pipe. These solutions are based on detailed 3-dimensional finite element (FE) analyses which can be used for structural integrity assessment of nuclear piping. To cover a practical range of interest, the geometric variables and loading conditions affecting the plastic limit loads of thick-walled pipe with non-idealized through-wall cracks were systematically varied. In terms of crack orientation, both circumferential and axial through-wall cracks were considered. As for loading conditions, axial tension, global bending, and internal pressure were considered for circumferential cracks, whereas only internal pressure was considered for axial cracks. Furthermore, the values of geometric factor representing shape characteristics of non-idealized through-wall cracks were also systematically varied. In order to provide confidence in the present FE analyses results, plastic limit loads of un-cracked, thick-walled pipe resulting from the present FE analyses were compared with the theoretical solutions. Finally, correction factors to the idealized through-wall crack solutions were developed to determine the plastic limit loads of non-idealized through-wall cracks in thick-walled pipe
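
For orientation, the idealized baseline that such correction factors adjust is the classical net-section-collapse limit moment for a circumferential through-wall crack under bending (a standard thin-shell solution; the paper's thick-wall and non-idealized-crack corrections are not reproduced here):

```latex
M_L = 4\,\sigma_f\, R_m^{2}\, t \left[ \cos\!\left(\frac{\theta}{2}\right) - \frac{1}{2}\sin\theta \right]
```

where σ_f is the flow stress, R_m the mean pipe radius, t the wall thickness, and θ the crack half-angle.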

  17. The concept of physical limitations in knee osteoarthritis

    DEFF Research Database (Denmark)

    Klokker, Louise; Osborne, Richard; Wæhrens, Eva Elisabet Ejlersen

    2015-01-01

    OBJECTIVE: To comprehensively identify components of the physical limitation concept in knee osteoarthritis (OA) and to rate the clinical importance of these using perspectives of both patients and health professionals. DESIGN: Concept mapping, a structured group process, was used to identify...... and organize data in focus groups (patients) and via a global web-based survey (professionals). Ideas were elicited through a nominal group technique and then organized using multidimensional scaling, cluster analysis, participant validation, rating of clinical importance, and thematic analyses to generate...... a conceptual model of physical limitations in knee OA. RESULTS: Fifteen Danish patients and 200 international professionals contributed to generating the conceptual model. Five clusters emerged: 'Limitations/physical deficits'; 'Everyday hurdles'; 'You're not the person you used to be'; 'Need to adjust way...

  18. Analysing performance through value creation

    Directory of Open Access Journals (Sweden)

    Adrian TRIFAN

    2015-12-01

    Full Text Available This paper draws a parallel between measuring financial performance in two variants: the first using data offered by accounting, which lays emphasis on maximizing profit, and the second, which aims to create value. The traditional approach to performance is based on indicators from accounting data: ROI, ROE, EPS. Traditional management, based on analysing accounting data, has shown its limits, and a new approach is needed, based on creating value. The evaluation of value-based performance tries to avoid the errors due to accounting data by using other specific indicators: EVA, MVA, TSR, CVA. The main objective is shifted from maximizing income to maximizing the value created for shareholders. The theoretical part is accompanied by a practical analysis regarding the creation of value and an analysis of the main indicators which evaluate this concept.
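
The value-based indicators compared in the paper rest on simple textbook definitions, for example:

```latex
\mathrm{EVA} = \mathrm{NOPAT} - \mathrm{WACC}\times IC, \qquad \mathrm{MVA} = V_{\mathrm{market}} - IC
```

where NOPAT is the net operating profit after taxes, WACC the weighted average cost of capital, IC the invested capital, and V_market the market value of the firm; a positive EVA signals that operating profit exceeds the cost of the capital employed.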

  19. The Mobile Limiters of TJ-II: Power and Particle Control

    International Nuclear Information System (INIS)

    Cal, E. de la

    1998-01-01

    Four mobile limiters have been designed for the TJ-II stellarator to reduce thermal loads on the vacuum vessel and its protections at the region of the central hard core (groove) and to characterise the scrape-off layer plasma. The role of the mobile limiters in particle and thermal load control is analysed for the different operating phases of TJ-II. The task of impurity control will be treated in a future report. A simplified model has been used to estimate the thermal loads on the limiters. The conclusion is that a new design of the limiter heads will be necessary for the neutral beam injection (NBI) phase at high power density if acceptable thermal removal efficiencies are desired. The experimental measurements made in the first phase (ECH) with the temperature and Langmuir probes installed in the diagnosed limiter heads will be essential for optimising the future limiter shape. For particle control it will be absolutely necessary to use first-wall conditioning techniques (e.g. boronization), since no active pumping method is foreseen for TJ-II. Again, this point will be more critical in the NBI phase, due to the large particle fluxes to the first wall and to possible thermal gas desorption caused by local overheating of plasma-facing surfaces. The role of magnetic topology in plasma-wall interaction is finally analysed. A configuration has been found in which the limiters act as divertor plates (natural island divertor). This inherent flexibility in changing the magnetic topology of TJ-II should be exploited in order to find the most favourable operating scenarios for the high-power injection phase

  20. Movable limiter experiment on TPE-1RM15 reversed field pinch machine

    International Nuclear Information System (INIS)

    Yagi, Yasuyuki; Shimada, Toshio; Hirota, Isao; Maejima, Yoshiki; Hirano, Yoichi; Ogawa, Kiyoshi

    1989-01-01

    Two movable limiters with a graphite head (35 mm diameter × 40 mm high) were installed in the TPE-1RM15 reversed field pinch (RFP) machine. Measurements of the heat flux input to the movable limiters and of the effect of limiter insertion on plasma properties, as well as surface analyses of the graphite head after exposure, were conducted. The heat flux into the electron drift side of the limiter exceeded that into the ion drift side by a factor of 4-6 at the maximum insertion of the limiters (10 mm inward from the shadow of the fixed limiters). This factor increased as the movable limiter protruded into the plasma, and this profile is attributed to the change of the pitch profile of the magnetic field lines at the plasma periphery. At the maximum insertion of the two movable limiters, the energy input into a graphite head was about 10% of the joule input energy during the current sustainment phase. The one-turn loop voltage and plasma resistance increased when the movable limiters were inserted beyond the shadow of the fixed limiters, and the increment of the joule input power roughly correlates with the increment of the loss power into the protruded movable limiters. Unbalanced position scanning showed that the relative distance of a movable limiter from the plasma column was not affected by another movable limiter installed 180° toroidally away from the former limiter. Fundamental surface analyses of the graphite head showed that deposition of metal impurities (Fe and Cr) was higher at the corner of the ion drift side than at that of the electron drift side, and that the corner of the electron drift side was more roughened than that of the ion drift side. (orig.)

  1. Speed limiter integrated fatigue analyzer (SLIFA) for speed and fatigue control on diesel engine truck and bus

    Science.gov (United States)

    Wahyudi, Haris; Pranoto, Hadi; Leman, A. M.; Sebayang, Darwin; Baba, I.

    2017-09-01

    Every year the number of road traffic deaths increases globally, with millions more sustaining severe injuries and living with long-term adverse health consequences. In 2015, Jakarta alone recorded 556 road deaths in approximately 6,231 road accident cases. The identified major contributory factors in such events are driver fatigue and habitual speeding, especially in the driving of trucks and buses. This paper presents a system that controls the fuel input of a diesel engine's injection pump: a solenoid valve in the injection pump can lock and momentarily stop the fuel supply, enforcing a speed limit, while a heart-rate sensor provides the input for reducing that limit when driver fatigue is detected. The integrated device, the Speed Limiter Integrated Fatigue Analyser (SLIFA), was trialled on diesel engines for trucks and buses. The results of this research show that SLIFA is able to cap the speed at about 30 km/h, 60 km/h, and up to 70 km/h, and the heart-rate sensor input reduces the speed limit whenever the driver is detected to be in a fatigued condition. SLIFA also provides control and monitoring, saving a historical record of the speed, fatigue, rpm, and body temperature of the driver.
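
The control logic the abstract describes, a hard speed cap enforced by momentarily closing the injection-pump solenoid valve and tightened when the heart-rate sensor indicates fatigue, can be summarized in a short sketch. The thresholds, names and fatigue criterion below are illustrative assumptions, not the authors' firmware:

```python
SPEED_CAPS_KMH = {"normal": 70, "fatigued": 30}  # illustrative caps

def fatigued(heart_rate_bpm: float, body_temp_c: float) -> bool:
    # Hypothetical fatigue criterion from the wearable sensor readings.
    return heart_rate_bpm < 55 or body_temp_c > 38.0

def close_solenoid(speed_kmh: float, heart_rate_bpm: float, body_temp_c: float) -> bool:
    """Return True if the injection-pump solenoid valve should close,
    momentarily cutting fuel to hold the vehicle under its current cap."""
    state = "fatigued" if fatigued(heart_rate_bpm, body_temp_c) else "normal"
    return speed_kmh >= SPEED_CAPS_KMH[state]

# A fatigued driver at 45 km/h exceeds the reduced 30 km/h cap:
assert close_solenoid(45.0, 50.0, 36.8) is True
```

In the real device this decision loop would run continuously, with the same samples logged to build the speed/fatigue/rpm/temperature history the abstract mentions.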

  2. Preliminary Results of Ancillary Safety Analyses Supporting TREAT LEU Conversion Activities

    Energy Technology Data Exchange (ETDEWEB)

    Brunett, A. J. [Argonne National Lab. (ANL), Argonne, IL (United States); Fei, T. [Argonne National Lab. (ANL), Argonne, IL (United States); Strons, P. S. [Argonne National Lab. (ANL), Argonne, IL (United States); Papadias, D. D. [Argonne National Lab. (ANL), Argonne, IL (United States); Hoffman, E. A. [Argonne National Lab. (ANL), Argonne, IL (United States); Kontogeorgakos, D. C. [Argonne National Lab. (ANL), Argonne, IL (United States); Connaway, H. M. [Argonne National Lab. (ANL), Argonne, IL (United States); Wright, A. E. [Argonne National Lab. (ANL), Argonne, IL (United States)

    2015-10-01

    The Transient Reactor Test Facility (TREAT), located at Idaho National Laboratory (INL), is a test facility designed to evaluate the performance of reactor fuels and materials under transient accident conditions. The facility, an air-cooled, graphite-moderated reactor designed to utilize fuel containing high-enriched uranium (HEU), has been in non-operational standby status since 1994. Currently, in support of the missions of the Department of Energy (DOE) National Nuclear Security Administration (NNSA) Material Management and Minimization (M3) Reactor Conversion Program, a new core design is being developed for TREAT that will utilize low-enriched uranium (LEU). The primary objective of this conversion effort is to design an LEU core that is capable of meeting the performance characteristics of the existing HEU core. Minimal, if any, changes are anticipated for the supporting systems (e.g. reactor trip system, filtration/cooling system, etc.); therefore, the LEU core must also be able to function with the existing supporting systems, and must also satisfy acceptable safety limits. In support of the LEU conversion effort, a range of ancillary safety analyses are required to evaluate the LEU core operation relative to that of the existing facility. These analyses cover neutronics, shielding, and thermal hydraulic topics that have been identified as having the potential to have reduced safety margins due to conversion to LEU fuel, or are required to support the required safety analyses documentation. The majority of these ancillary tasks have been identified in [1] and [2]. The purpose of this report is to document the ancillary safety analyses that have been performed at Argonne National Laboratory during the early stages of the LEU design effort, and to describe ongoing and anticipated analyses. For all analyses presented in this report, methodologies are utilized that are consistent with, or improved from, those used in analyses for the HEU Final Safety Analysis

  3. Classification tree analyses reveal limited potential for early targeted prevention against childhood overweight.

    Science.gov (United States)

    Beyerlein, Andreas; Kusian, Dennis; Ziegler, Anette-Gabriele; Schaffrath-Rosario, Angelika; von Kries, Rüdiger

    2014-02-01

    Whether specific combinations of risk factors in very early life might allow identification of high-risk target groups for overweight prevention programs was examined. Data on n = 8981 children from the German KiGGS study were analyzed. Using a classification tree approach, predictive risk factor combinations were assessed for overweight in 3-6, 7-10, and 11-17-year-old children. In preschool children, the subgroup with the highest overweight risk comprised migrant children with at least one obese parent, with a prevalence of 36.6 (95% confidence interval or CI: 22.9, 50.4)%, compared to an overall prevalence of 10.0 (8.9, 11.2)%. The prevalence of overweight increased from 18.3 (16.8, 19.8)% to 57.9 (46.6, 69.3)% in 7-10-year-old children if at least one parent was obese and the child had been born large-for-gestational-age. In 11-17-year-olds, the overweight risk increased from 20.1 (18.9, 21.3)% to 63.0 (46.4, 79.7)% in the highest risk group. However, high prevalence ratios were found only in small subgroups, containing <10% of all overweight cases in the respective age group. Our results indicate only a limited potential for early targeted prevention against overweight in children and adolescents. Copyright © 2013 The Obesity Society.
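
A classification-tree analysis of this kind can be reproduced in outline with standard tooling; the sketch below uses placeholder data and variable names, not the KiGGS dataset:

```python
import pandas as pd
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical analysis frame: early-life risk factors and overweight status.
df = pd.DataFrame({
    "parent_obese": [1, 0, 1, 0, 1, 0, 1, 0],
    "migrant":      [1, 1, 0, 0, 1, 0, 0, 1],
    "born_lga":     [0, 0, 1, 0, 1, 1, 0, 0],  # large-for-gestational-age
    "overweight":   [1, 0, 1, 0, 1, 0, 0, 0],
})

tree = DecisionTreeClassifier(max_depth=3, min_samples_leaf=2, random_state=0)
tree.fit(df[["parent_obese", "migrant", "born_lga"]], df["overweight"])

# The fitted splits define the risk-factor combinations (subgroups); on real
# data the leaf prevalences would be compared with the overall prevalence.
print(export_text(tree, feature_names=["parent_obese", "migrant", "born_lga"]))
```

The paper's key observation corresponds to reading off leaf sizes as well as leaf prevalences: a leaf can have a very high prevalence yet cover too few cases to be a useful prevention target.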

  4. Predicting speech intelligibility in conditions with nonlinearly processed noisy speech

    DEFF Research Database (Denmark)

    Jørgensen, Søren; Dau, Torsten

    2013-01-01

    The speech-based envelope power spectrum model (sEPSM; [1]) was proposed in order to overcome the limitations of the classical speech transmission index (STI) and speech intelligibility index (SII). The sEPSM applies the signal-to-noise ratio in the envelope domain (SNRenv), which was demonstrated...... to successfully predict speech intelligibility in conditions with nonlinearly processed noisy speech, such as processing with spectral subtraction. Moreover, a multiresolution version (mr-sEPSM) was demonstrated to account for speech intelligibility in various conditions with stationary and fluctuating
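
In the published sEPSM literature, the decision metric is computed per modulation filter k and combined across filters roughly as follows (paraphrased from that literature, not from this abstract):

```latex
\mathrm{SNR}_{\mathrm{env},k} = \frac{P_{\mathrm{env},S+N,k} - P_{\mathrm{env},N,k}}{P_{\mathrm{env},N,k}}, \qquad \mathrm{SNR}_{\mathrm{env}} = \left( \sum_{k} \mathrm{SNR}_{\mathrm{env},k}^{2} \right)^{1/2}
```

where P_env,S+N,k and P_env,N,k are the envelope powers of the noisy speech and of the noise alone in modulation band k; intelligibility is then predicted from SNR_env via an ideal-observer stage.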

  5. IDEA: Interactive Display for Evolutionary Analyses

    Directory of Open Access Journals (Sweden)

    Carlton Jane M

    2008-12-01

    Full Text Available Abstract Background The availability of complete genomic sequences for hundreds of organisms promises to make obtaining genome-wide estimates of substitution rates, selective constraints and other molecular evolution variables of interest an increasingly important approach to addressing broad evolutionary questions. Two of the programs most widely used for this purpose are codeml and baseml, parts of the PAML (Phylogenetic Analysis by Maximum Likelihood suite. A significant drawback of these programs is their lack of a graphical user interface, which can limit their user base and considerably reduce their efficiency. Results We have developed IDEA (Interactive Display for Evolutionary Analyses, an intuitive graphical input and output interface which interacts with PHYLIP for phylogeny reconstruction and with codeml and baseml for molecular evolution analyses. IDEA's graphical input and visualization interfaces eliminate the need to edit and parse text input and output files, reducing the likelihood of errors and improving processing time. Further, its interactive output display gives the user immediate access to results. Finally, IDEA can process data in parallel on a local machine or computing grid, allowing genome-wide analyses to be completed quickly. Conclusion IDEA provides a graphical user interface that allows the user to follow a codeml or baseml analysis from parameter input through to the exploration of results. Novel options streamline the analysis process, and post-analysis visualization of phylogenies, evolutionary rates and selective constraint along protein sequences simplifies the interpretation of results. The integration of these functions into a single tool eliminates the need for lengthy data handling and parsing, significantly expediting access to global patterns in the data.

  6. Physiology limits commercially viable photoautotrophic production of microalgal biofuels.

    Science.gov (United States)

    Kenny, Philip; Flynn, Kevin J

    2017-01-01

    Algal biofuels have been offered as an alternative to fossil fuels, based on claims that microalgae can provide a highly productive source of compounds as feedstocks for sustainable transport fuels. Life cycle analyses identify algal productivity as a critical factor affecting commercial and environmental viability. Here, we use mechanistic modelling of the biological processes driving microalgal growth to explore optimal production scenarios in an industrial setting, enabling us to quantify limits to algal biofuels potential. We demonstrate how physiological and operational trade-offs combine to restrict the potential for solar-powered algal-biodiesel production in open ponds to a ceiling of ca. 8000 L ha⁻¹ year⁻¹. For industrial-scale operations, practical considerations limit production to ca. 6000 L ha⁻¹ year⁻¹. According to published economic models and life cycle analyses, such production rates cannot support long-term viable commercialisation of solar-powered cultivation of natural microalgae strains exclusively as feedstock for biofuels. The commercial viability of microalgal biofuels depends critically upon limitations in microalgal physiology (primarily in rates of C-fixation); we discuss the scope for addressing this bottleneck, concluding that even deployment of genetically modified microalgae with radically enhanced characteristics would leave a very significant logistical if not financial burden.
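
The order of magnitude of such a ceiling can be checked with transparent arithmetic. The figures below are illustrative assumptions (typical insolation, a realistic whole-pond photosynthetic efficiency, a plausible lipid energy fraction), not the authors' model parameters:

```python
# Back-of-envelope ceiling for open-pond algal biodiesel (illustrative numbers).
insolation_GJ_per_ha_yr = 1_700 * 3.6 * 10_000 / 1_000  # 1700 kWh/m2/yr -> 61,200 GJ/ha/yr
photosynthetic_eff = 0.015      # ~1.5% sunlight-to-biomass, realistic outdoors
lipid_fraction = 0.30           # energy fraction recoverable as biodiesel feedstock
biodiesel_MJ_per_L = 33.0       # volumetric energy density of biodiesel

biomass_GJ = insolation_GJ_per_ha_yr * photosynthetic_eff
litres = biomass_GJ * lipid_fraction * 1_000 / biodiesel_MJ_per_L
print(f"{litres:,.0f} L/ha/yr")  # about 8,300: same order as the ~8000 L ceiling
```

The point of the exercise is that the ceiling is set mainly by the photosynthetic-efficiency term, which is exactly the C-fixation bottleneck the abstract identifies.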

  7. Insights into Wilson's Warbler migration from analyses of hydrogen stable-isotope ratios

    Science.gov (United States)

    Jeffrey F. Kelly; Viorel Atudorei; Zachary D. Sharp; Deborah M. Finch

    2002-01-01

    Our ability to link the breeding locations of individual passerines to migration stopover sites and wintering locations is limited. Stable isotopes of hydrogen contained in bird feathers have recently shown potential in this regard. We measured hydrogen stable-isotope ratios (δD) of feathers from breeding, migrating, and wintering Wilson's Warblers. Analyses...

  8. Rating of roofs’ surfaces regarding their solar potential and suitability for PV systems, based on LiDAR data

    International Nuclear Information System (INIS)

    Lukač, Niko; Žlaus, Danijel; Seme, Sebastijan; Žalik, Borut; Štumberger, Gorazd

    2013-01-01

    Highlights: ► A new method for estimating and rating the solar potential of building roofs is presented. ► LiDAR geospatial data are considered together with pyranometer measurements. ► A multi-resolution shadowing model with new heuristic vegetation shadowing is used. ► High correlation between estimated solar potential and on-site measurements. -- Abstract: The roof surfaces within urban areas are constantly attracting interest regarding the installation of photovoltaic systems. These systems can improve the self-sufficiency of the electricity supply and can help to decrease the emissions of greenhouse gases throughout urban areas. Unfortunately, some roof surfaces are unsuitable for installing photovoltaic systems. The presented work deals with the rating of roof surfaces within urban areas regarding their solar potential and suitability for the installation of photovoltaic systems. The solar potential of a roof surface is determined by a new method that combines urban topography extracted from LiDAR data with pyranometer measurements of global and diffuse solar irradiance. Heuristic annual vegetation shadowing and a multi-resolution shadowing model complete the proposed method. The significance of different influential factors (e.g. shadowing) was analysed extensively. A comparison between the results obtained by the proposed method and measurements performed on an actual PV power plant showed a correlation agreement of 97.4%.
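
Combining pyranometer global and diffuse measurements with LiDAR-derived roof geometry typically passes through a transposition model; the simple isotropic-sky form reads (a standard model given for illustration; the paper's exact formulation may differ):

```latex
G_T = G_b R_b + G_d\,\frac{1 + \cos\beta}{2} + \rho\,(G_b + G_d)\,\frac{1 - \cos\beta}{2}
```

where G_b and G_d are the beam and diffuse horizontal irradiances, R_b the beam transposition factor from the sun and roof-plane geometry, β the roof tilt, and ρ the ground albedo; per time step, the shadowing model then attenuates or removes the beam term for occluded roof cells.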

  9. [Sleep duration and functional limitations in older adults].

    Science.gov (United States)

    Eumann Mesas, Arthur; López-García, Esther; Rodríguez-Artalejo, Fernando

    2011-04-30

    To examine the association between sleep duration and functional limitations in older adults from Spain. Cross-sectional study with 3,708 individuals representative of the non-institutionalized population aged ≥60 years in Spain. Sleep duration was self-reported, and functional limitations in the instrumental activities of daily living (IADL) were assessed. Functional limitations in IADL were identified in 1,424 (38.4%) participants. In analyses adjusted for sociodemographic and lifestyle variables, the percentage of participants with limitations in IADL was higher in those who slept ≤5 hours (odds ratio [OR]=1.56; 95% confidence interval [CI]=1.18-2.06) or ≥10 hours (OR=2.08; 95%CI=1.67-2.60; p for trend significant). The association between long sleep (≥10 hours) and functional limitations held even after adjustment for comorbidity and sleep quality (OR=1.77; 95%CI=1.38-2.28), while the association between short sleep (≤5 hours) and functional limitation no longer held after this adjustment (OR=1.10; 95%CI=0.80-1.50). In older adults, long sleep duration is a marker of functional limitations independent of comorbidity. Copyright © 2010 Elsevier España, S.L. All rights reserved.

  10. Multielemental analyses of tree rings by inductively coupled plasma mass spectrometry

    International Nuclear Information System (INIS)

    Hall, G.S.

    1990-01-01

    Inductively coupled plasma mass spectrometry (ICP-MS) was evaluated for major, minor, trace, and ultra-trace elemental analyses of individual tree rings. The samples were obtained from an old-growth Douglas fir growing near Mount St. Helens volcano and from trees at various other North American sites. Eighty percent of the elements from Li to U had detection limits in the solid (wood) below 8.0 ng g⁻¹. Two anomalous peaks occur in the Mount St. Helens samples at A.D. 1478 and 1490 that closely correlate with past eruptions of the volcano. These results show that ICP-MS is a rapid and sensitive analytical method for multielemental analyses of individual tree rings. (author) 16 refs.; 2 figs.; 2 tabs

  11. Development and validation of an index of musculoskeletal functional limitations

    Directory of Open Access Journals (Sweden)

    Katz Jeffrey N

    2009-06-01

    Full Text Available Abstract Background While musculoskeletal problems are leading sources of disability, there has been little research on measuring the number of functionally limiting musculoskeletal problems for use as a predictor of outcome in studies of chronic disease. This paper reports on the development and preliminary validation of a self-administered musculoskeletal functional limitations index. Methods We developed a summary musculoskeletal functional limitations index based upon a six-item self-administered questionnaire in which subjects indicate whether they are limited a lot, a little, or not at all because of problems in six anatomic regions (knees, hips, ankles and feet, back, neck, upper extremities). Responses are summed into an index score. The index was completed by a sample of total knee replacement recipients from four US states. Our analyses examined convergent validity at the item and index levels, as well as discriminant validity and the independence of the index from other correlates of quality of life. Results 782 subjects completed all items of the musculoskeletal functional limitations index and were included in the analyses. The mean age of the sample was 75 years and 64% were female. The index demonstrated the anticipated associations with self-reported quality of life, activities of daily living, WOMAC functional status score, use of walking support, frequency of usual exercise, frequency of falls, and dependence upon another person for assistance with chores. The index was strongly and independently associated with self-reported overall health. Conclusion The self-reported musculoskeletal functional limitations index appears to be a valid measure of musculoskeletal functional limitations in the aspects of validity assessed in this study. It is useful for outcome studies following TKR and shows promise as a covariate in studies of chronic disease outcomes.

  12. Automated monosegmented flow analyser. Determination of glucose, creatinine and urea.

    Science.gov (United States)

    Raimundo Júnior, I M; Pasquini, C

    1997-10-01

    An automated monosegmented flow analyser containing a sampling valve and a reagent addition module, and employing a laboratory-made photodiode-array spectrophotometer as the detection system, is described. The instrument was controlled by a 386SX IBM-compatible microcomputer through an 8255 parallel-port IC that communicates with the interface controlling the sampling valve and the reagent addition module. The spectrophotometer was controlled by the same microcomputer through an RS232 standard serial interface. The software for the instrument was written in QuickBasic 4.5. Opto-switches were employed to detect the air bubbles delimiting the monosegment, allowing precise sample localisation for reagent addition and signal reading. The main characteristics of the analyser are low reagent consumption and high sensitivity, which is independent of the sample volume. The instrument was designed to determine glucose, creatinine or urea in blood plasma and serum without hardware modification. The results were compared against those obtained by the Clinical Hospital of UNICAMP using commercial analysers. Correlation coefficients among the methods were 0.997, 0.982 and 0.996 for glucose, creatinine and urea, respectively.

  13. A database structure for radiological optimization analyses of decommissioning operations

    International Nuclear Information System (INIS)

    Zeevaert, T.; Van de Walle, B.

    1995-09-01

    The structure of a database for decommissioning experiences is described. Radiological optimization is a major radiation protection principle in practices and interventions, involving radiological protection factors, economic costs, and social factors. An important lack of knowledge with respect to these factors exists in the domain of the decommissioning of nuclear power plants, due to the low number of decommissioning operations performed to date. Moreover, decommissioning takes place only once for an installation. Tasks, techniques, and procedures are in most cases rather specific, limiting the use of past experience in the radiological optimization analyses of new decommissioning operations. Therefore, it is important that relevant data and information be acquired from decommissioning experiences. These data have to be stored in a database in such a way that they can be used efficiently in ALARA analyses of future decommissioning activities

  14. Evidence, models, conservation programs and limits to management

    Science.gov (United States)

    Nichols, J.D.

    2012-01-01

    Walsh et al. (2012) emphasized the importance of obtaining evidence to assess the effects of management actions on state variables relevant to objectives of conservation programs. They focused on malleefowl Leipoa ocellata, ground-dwelling Australian megapodes listed as vulnerable. They noted that although fox Vulpes vulpes baiting is the main management action used in malleefowl conservation throughout southern Australia, evidence of the effectiveness of this action is limited and currently debated. Walsh et al. (2012) then used data from 64 sites monitored for malleefowl and foxes over 23 years to assess key functional relationships relevant to fox control as a conservation action for malleefowl. In one set of analyses, Walsh et al. (2012) focused on two relationships: fox baiting investment versus fox presence, and fox presence versus malleefowl population size and rate of population change. Results led to the counterintuitive conclusion that increases in investments in fox control produced slight decreases in malleefowl population size and growth. In a second set of analyses, Walsh et al. (2012) directly assessed the relationship between investment in fox baiting and malleefowl population size and rate of population change. This set of analyses showed no significant relationship between investment in fox population control and malleefowl population growth. Both sets of analyses benefited from the incorporation of key environmental covariates hypothesized to influence these management relationships. Walsh et al. (2012) concluded that "in most situations, malleefowl conservation did not effectively benefit from fox baiting at current levels of investment." In this commentary, I discuss the work of Walsh et al. (2012) using the conceptual framework of structured decision making (SDM). In doing so, I accept their analytic results and associated conclusions as accurate and discuss basic ideas about evidence, conservation and limits to management.

  15. Thermal limits for passive safety of fusion reactors

    International Nuclear Information System (INIS)

    Kazimi, M.S.; Massidda, J.E.; Oshima, M.

    1989-01-01

    The thermal response of the first wall and blanket due to a power/cooling mismatch in the absence of operator action is examined. The analyses of coolant and power transients are carried out on six reference blanket designs representing a broad range of fusion first wall and blanket technology. It is concluded that the requirement of plant protection will impose sufficiently stringent peak neutron wall loading limits to avoid a serious threat to the public. It is found that for the D-T designs, the operating wall loading may have to be limited to 3-8 MW/m² for passive plant protection, depending on the plant design

  16. HLA region excluded by linkage analyses of early onset periodontitis

    Energy Technology Data Exchange (ETDEWEB)

    Sun, C.; Wang, S.; Lopez, N.

    1994-09-01

    Previous studies suggested that HLA genes may influence susceptibility to early-onset periodontitis (EOP). Segregation analyses indicate that EOP may be due to a single major gene. We conducted linkage analyses to assess possible HLA effects on EOP. Fifty families with two or more close relatives affected by EOP were ascertained in Virginia and Chile. A microsatellite polymorphism within the HLA region (at the tumor necrosis factor beta locus) was typed using PCR. Linkage analyses used a dominant model most strongly supported by previous studies. Assuming locus homogeneity, our results exclude a susceptibility gene within 10 cM on either side of our marker locus. This encompasses all of the HLA region. Analyses assuming alternative models gave qualitatively similar results. Allowing for locus heterogeneity, our data still provide no support for HLA-region involvement. However, our data do not statistically exclude (LOD < -2.0) hypotheses of disease-locus heterogeneity, including models in which up to half of our families could contain an EOP disease gene located in the HLA region. This is due to the limited power of even our relatively large collection of families and the inherent difficulties of mapping genes for disorders that have complex and heterogeneous etiologies. Additional statistical analyses, recruitment of families, and typing of flanking DNA markers are planned to address these issues more conclusively with respect to the HLA region and other candidate locations in the human genome. Additional results for markers covering most of the human genome will also be presented.
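
The exclusion criterion used here is the standard LOD score, the base-10 log likelihood ratio for linkage at recombination fraction θ against free recombination:

```latex
\mathrm{LOD}(\theta) = \log_{10} \frac{L(\theta)}{L(\theta = 1/2)}
```

Map positions with LOD(θ) < -2 are conventionally declared excluded, which is the sense in which a susceptibility gene is ruled out within 10 cM of the marker.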

  17. Growth Limits in Large Scale Networks

    DEFF Research Database (Denmark)

    Knudsen, Thomas Phillip

    The subject of large scale networks is approached from the perspective of the network planner. An analysis of the long term planning problems is presented, with the main focus on the changing requirements for large scale networks and the potential problems in meeting these requirements. The problems...... the fundamental technological resources in network technologies are analysed for scalability. Here several technological limits to continued growth are presented. The third step involves a survey of major problems in managing large scale networks given the growth of user requirements and the technological limitations. The rising complexity of network management with the convergence of communications platforms is shown as problematic for both automatic management feasibility and for manpower resource management. In the fourth step the scope is extended to include the present society with the DDN project as its......

  18. Thermoelastic analyses of spent fuel repositories in bedded and dome salt. Technical memorandum report RSI-0054

    International Nuclear Information System (INIS)

    Callahan, G.D.; Ratigan, J.L.

    1978-01-01

    Global thermoelastic analyses of bedded and dome salt models showed a slight preference for the bedded salt model through the range of thermal loading conditions. Spent fuel thermal loadings should be less than 75 kW/acre of the repository pending more accurate material modeling. One should first limit the study to one or two spent fuel thermal loading (i.e. 75 kW/acre and/or 50 kW/acre) analyses up to a maximum time of approximately 2000 years. Parametric thermoelastic type analyses could then be readily obtained to determine the influence of the thermomechanical properties. Recommendations for further study include parametric analyses, plasticity analyses, consideration of the material interfaces as joints, and possibly consideration of a global joint pattern (i.e. jointed at the same orientation everywhere) for the non-salt materials. Subsequently, the viscoelastic analyses could be performed

  19. Chromosome analyses of nuclear-power plant workers

    International Nuclear Information System (INIS)

    Bauchinger, M.; Kolin-Gerresheim, J.; Schmid, E.; Dresp, J.

    1980-01-01

    A brief report is given on chromosome aberration analyses of 57 healthy male employees of six German nuclear power plants. All had received annual doses below the maximum permissible occupational limit of 5 rem and had worked with radiation for periods ranging from 1 to 14 years. Exposure was mainly due to external sources of γ rays and high-energy X radiation. Controls were 11 healthy males with no radiation exposure other than natural background. The yields of dicentrics and acentrics were significantly higher than in the unirradiated controls, but no dose dependence was apparent. These results are compared with the dose-response dependence of dicentrics + rings found in nuclear dockyard workers by Evans et al. (1979). (U.K.)

  20. Fatigue evaluation of piping systems with limited vibration test data

    International Nuclear Information System (INIS)

    Huang, S.N.

    1990-11-01

    The safety-related piping in a nuclear power plant may be subjected to pump- or fluid-induced vibrations that, in general, affect only local areas of the piping systems. Pump- or fluid-induced vibrations typically are characterized by low levels of amplitudes and a high number of cycles over the lifetime of plant operation. Thus, the resulting fatigue damage to the piping systems could be an important safety concern. In general, tests and/or analyses are used to evaluate and qualify the piping systems. Test data, however, may be limited because of lack of instrumentation in critical piping locations and/or because of difficulty in obtaining data in inaccessible areas. This paper describes and summarizes a method to use limited pipe vibration test data, along with analytical harmonic response results from finite-element analyses, to assess the fatigue damage of nuclear power plant safety-related piping systems. 5 refs., 2 figs., 11 tabs
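
Once stress amplitudes at the critical locations are estimated from the combined test and harmonic-response results, cumulative fatigue usage is conventionally book-kept with the Palmgren-Miner rule (the standard engineering treatment; the paper's code-specific procedure is not reproduced here):

```latex
D = \sum_{i} \frac{n_i}{N_i} \le 1
```

where n_i is the number of vibration cycles accumulated at stress amplitude S_i over the plant lifetime and N_i the allowable number of cycles at that amplitude from the design fatigue curve.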

  1. Oral Chinese proprietary medicine for angina pectoris: an overview of systematic reviews/meta-analyses.

    Science.gov (United States)

    Luo, Jing; Xu, Hao; Yang, Guoyan; Qiu, Yu; Liu, Jianping; Chen, Keji

    2014-08-01

    Oral Chinese proprietary medicine (CPM) is commonly used to treat angina pectoris, and many relevant systematic reviews/meta-analyses are available. However, these reviews have not been systematically summarized and evaluated. We conducted an overview of these reviews, and explored their methodological and reporting quality to inform both practice and further research. We included systematic reviews/meta-analyses on oral CPM in treating angina until March 2013 by searching PubMed, Embase, the Cochrane Library and four Chinese databases. We extracted data according to a pre-designed form, and assessed the methodological and reporting characteristics of the reviews in terms of AMSTAR and PRISMA respectively. Most of the data analyses were descriptive. 36 systematic reviews/meta-analyses involving over 82,105 participants with angina reviewing 13 kinds of oral CPM were included. The main outcomes assessed in the reviews were surrogate outcomes (34/36, 94.4%), adverse events (31/36, 86.1%), and symptoms (30/36, 83.3%). Six reviews (6/36, 16.7%) drew definitely positive conclusions, while the others suggested potential benefits in the symptoms, electrocardiogram, and adverse events. The overall methodological and reporting quality of the reviews was limited, with many serious flaws such as the lack of review protocol and incomprehensive literature searches. Though many systematic reviews/meta-analyses on oral CPM for angina suggested potential benefits or definitely positive effects, stakeholders should interpret the findings of these reviews with caution, considering the overall limited methodological and reporting quality. We recommend further studies should be appropriately conducted and systematic reviews reported according to PRISMA standard. Copyright © 2014 Elsevier Ltd. All rights reserved.

  2. Thermal Safety Analyses for the Production of Plutonium-238 at the High Flux Isotope Reactor

    Energy Technology Data Exchange (ETDEWEB)

    Hurt, Christopher J. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Freels, James D. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Hobbs, Randy W. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Jain, Prashant K. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Maldonado, G. Ivan [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2016-08-01

    There has been a considerable effort over the previous few years to demonstrate and optimize the production of plutonium-238 (238Pu) at the High Flux Isotope Reactor (HFIR). This effort has involved resources from multiple divisions and facilities at the Oak Ridge National Laboratory (ORNL) to demonstrate the fabrication, irradiation, and chemical processing of targets containing neptunium-237 (237Np) dioxide (NpO2)/aluminum (Al) cermet pellets. A critical preliminary step to irradiation at the HFIR is to demonstrate the safety of the target under irradiation via documented experiment safety analyses. The steady-state thermal safety analyses of the target are simulated in a finite element model with the COMSOL Multiphysics code that determines, among other crucial parameters, the limiting maximum temperature in the target. Safety analysis efforts for this model discussed in the present report include: (1) initial modeling of single and reduced-length pellet capsules in order to generate an experimental knowledge base that incorporate initial non-linear contact heat transfer and fission gas equations, (2) modeling efforts for prototypical designs of partially loaded and fully loaded targets using limited available knowledge of fabrication and irradiation characteristics, and (3) the most recent and comprehensive modeling effort of a fully coupled thermo-mechanical approach over the entire fully loaded target domain incorporating burn-up dependent irradiation behavior and measured target and pellet properties, hereafter referred to as the production model. These models are used to conservatively determine several important steady-state parameters including target stresses and temperatures, the limiting condition of which is the maximum temperature with respect to the melting point. The single pellet model results provide a basis for the safety of the irradiations, followed by parametric analyses in the initial prototypical designs

  3. SOCIAL MEDIA INTELLIGENCE: OPPORTUNITIES AND LIMITATIONS

    Directory of Open Access Journals (Sweden)

    Adrian Liviu IVAN

    2015-09-01

    Full Text Available An important part of the reform of the intelligence community can be seen in its opening towards a widening spectrum of methods and spaces that can be used to collect and analyse data and information. One of these methods, which is producing major changes in the system, is connected to the world of social media, which is proving to be a huge source of information. Social Media Intelligence (SOCMINT), the newest member of the family of INTs, is undoubtedly a distinct domain, a practice rooted in the work of the intelligence community. This paper proposes a general characterization of the most important aspects of Social Media Intelligence, a brand-new way for the intelligence community to collect and analyse information for national security purposes (but not only), in the context of the current global challenges. Moreover, the work focuses on identifying the limitations and opportunities of this practice in the upcoming decade.

  4. 75 FR 18497 - Guidance on Simultaneous Transmission Import Limit Studies for the Northwest Region; Notice of...

    Science.gov (United States)

    2010-04-12

    ... updated market power analyses associated with their market based rate authorizations, which are due in... Conference ``Guidance on Simultaneous Transmission Import Limit Studies.'' To view the archive of the... Northwest region transmission owners and their pending updated market power analyses. Interested persons...

  5. Interactive volume exploration of petascale microscopy data streams using a visualization-driven virtual memory approach

    KAUST Repository

    Hadwiger, Markus; Beyer, Johanna; Jeong, Wonki; Pfister, Hanspeter

    2012-01-01

    This paper presents the first volume visualization system that scales to petascale volumes imaged as a continuous stream of high-resolution electron microscopy images. Our architecture scales to dense, anisotropic petascale volumes because it: (1) decouples construction of the 3D multi-resolution representation required for visualization from data acquisition, and (2) decouples sample access time during ray-casting from the size of the multi-resolution hierarchy. Our system is designed around a scalable multi-resolution virtual memory architecture that handles missing data naturally, does not pre-compute any 3D multi-resolution representation such as an octree, and can accept a constant stream of 2D image tiles from the microscopes. A novelty of our system design is that it is visualization-driven: we restrict most computations to the visible volume data. Leveraging the virtual memory architecture, missing data are detected during volume ray-casting as cache misses, which are propagated backwards for on-demand out-of-core processing. 3D blocks of volume data are only constructed from 2D microscope image tiles when they have actually been accessed during ray-casting. We extensively evaluate our system design choices with respect to scalability and performance, compare to previous best-of-breed systems, and illustrate the effectiveness of our system for real microscopy data from neuroscience. © 1995-2012 IEEE.

  6. Interactive volume exploration of petascale microscopy data streams using a visualization-driven virtual memory approach

    KAUST Repository

    Hadwiger, Markus

    2012-12-01

    This paper presents the first volume visualization system that scales to petascale volumes imaged as a continuous stream of high-resolution electron microscopy images. Our architecture scales to dense, anisotropic petascale volumes because it: (1) decouples construction of the 3D multi-resolution representation required for visualization from data acquisition, and (2) decouples sample access time during ray-casting from the size of the multi-resolution hierarchy. Our system is designed around a scalable multi-resolution virtual memory architecture that handles missing data naturally, does not pre-compute any 3D multi-resolution representation such as an octree, and can accept a constant stream of 2D image tiles from the microscopes. A novelty of our system design is that it is visualization-driven: we restrict most computations to the visible volume data. Leveraging the virtual memory architecture, missing data are detected during volume ray-casting as cache misses, which are propagated backwards for on-demand out-of-core processing. 3D blocks of volume data are only constructed from 2D microscope image tiles when they have actually been accessed during ray-casting. We extensively evaluate our system design choices with respect to scalability and performance, compare to previous best-of-breed systems, and illustrate the effectiveness of our system for real microscopy data from neuroscience. © 1995-2012 IEEE.
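
The visualization-driven principle (treat a sample in a non-resident block as a cache miss, record it, keep rendering, and build the 3D block from 2D tiles on demand) can be sketched in a few lines. This is a toy, single-threaded illustration of the idea, not the authors' GPU implementation:

```python
from collections import OrderedDict

class VirtualVolume:
    """Toy virtual memory for a tiled volume: 3D blocks are constructed
    from 2D image tiles only when ray-casting first touches them."""
    def __init__(self, load_block, capacity=4096):
        self.load_block = load_block   # builds a 3D block from 2D tiles
        self.cache = OrderedDict()     # block id -> voxel data, LRU ordered
        self.capacity = capacity
        self.misses = []               # miss list propagated to out-of-core I/O

    def sample(self, block_id, local_idx):
        block = self.cache.get(block_id)
        if block is None:              # cache miss during ray-casting:
            self.misses.append(block_id)  # defer; the render pass continues
            return None                # caller falls back to a coarser level
        self.cache.move_to_end(block_id)  # mark block most-recently used
        return block[local_idx]

    def service_misses(self):
        for bid in self.misses:        # on-demand out-of-core processing
            self.cache[bid] = self.load_block(bid)
            if len(self.cache) > self.capacity:
                self.cache.popitem(last=False)  # evict least-recently used
        self.misses.clear()
```

In the real system the miss list is produced on the GPU during ray-casting and the substitute sample comes from a coarser residency level of the multi-resolution hierarchy, so every frame can complete even while data are streaming in.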

  7. Analysing Scientific Collaborations of New Zealand Institutions using Scopus Bibliometric Data

    OpenAIRE

    Aref, Samin; Friggens, David; Hendy, Shaun

    2017-01-01

    Scientific collaborations are among the main enablers of development in small national science systems. Although analysing scientific collaborations is a well-established subject in scientometrics, evaluations of scientific collaborations within a country remain speculative with studies based on a limited number of fields or using data too inadequate to be representative of collaborations at a national level. This study represents a unique view on the collaborative aspect of scientific activi...

  8. Steady-state and accident analyses of PBMR with the computer code SPECTRA

    International Nuclear Information System (INIS)

    Stempniewicz, Marek M.

    2002-01-01

    The SPECTRA code is an accident analysis code developed at NRG. It is designed for thermal-hydraulic analyses of nuclear or conventional power plants. The code is capable of analysing the whole power plant, including the reactor vessel, primary system, various control and safety systems, containment and reactor building. The aim of the work presented in this paper was to prepare a preliminary thermal-hydraulic model of the PBMR for SPECTRA, and to perform steady-state and accident analyses. In order to assess SPECTRA's capability to model PBMR-type reactors, a model of the INCOGEN system was prepared first. Steady-state and accident scenarios were analysed for the INCOGEN configuration. Results were compared to the results obtained earlier with INAS and OCTOPUS/PANTHERMIX, and a good agreement was obtained. Accident analyses with the PBMR model also gave qualitatively good results. It is concluded that SPECTRA is a suitable tool for analysing High Temperature Reactors, such as INCOGEN or, for example, the PBMR (Pebble Bed Modular Reactor). Analyses of the INCOGEN and PBMR systems showed that in all analysed cases the fuel temperatures remained within the acceptable limits. Consequently there is no danger of a release of radioactivity to the environment. It may be concluded that these are promising designs for future safe industrial reactors. (author)

  9. Less is less: a systematic review of graph use in meta-analyses.

    Science.gov (United States)

    Schild, Anne H E; Voracek, Martin

    2013-09-01

    Graphs are an essential part of scientific communication. Complex datasets, of which meta-analyses are textbook examples, benefit the most from visualization. Although a number of graph options for meta-analyses exist, the extent to which these are used was hitherto unclear. A systematic review on graph use in meta-analyses in three disciplines (medicine, psychology, and business) and nine journals was conducted. Interdisciplinary differences, which are mirrored in the respective journals, were revealed, that is, graph use correlates with external factors rather than methodological considerations. There was only limited variation in graph types (with forest plots as the most important representatives), and diagnostic plots were very rare. Although an increase in graph use over time could be observed, it is unlikely that this phenomenon is specific to meta-analyses. There is a gaping discrepancy between available graphic methods and their application in meta-analyses. This may be rooted in a number of factors, namely, (i) insufficient dissemination of new developments, (ii) unsatisfactory implementation in software packages, and (iii) minor attention on graphics in meta-analysis reporting guidelines. Using visualization methods to their full capacity is a further step in using meta-analysis to its full potential. Copyright © 2013 John Wiley & Sons, Ltd.

  10. Utilization of synchrotron radiation for trace-element analyses in toxicology of metals

    International Nuclear Information System (INIS)

    Hanson, A.L.; Jones, K.W.; Kraner, H.W.; Gordon, B.M.; Chen, J.R.

    1983-01-01

    The use of SXRF will nicely complement other more widely used analytical techniques for trace elements. The experiments at CHESS showed minimum detectable limits for 1-mm thick organic matrices with monochromated photon beams to be on the order of 160 to 300 ppB for Ni to Sr, with minimal structural damage to the material being irradiated. Extrapolations to operating conditions at the NSLS, with a facility designed for XRF, indicate that MDLs of 10 to 100 ppB should be achievable. The utilization of wavelength-dispersive detectors should gain an order of magnitude in sensitivity, but with a trade-off of some flexibility in multielemental analyses

  11. Analysing, Interpreting, and Testing the Invariance of the Actor-Partner Interdependence Model

    Directory of Open Access Journals (Sweden)

    Gareau, Alexandre

    2016-09-01

    Although in recent years researchers have begun to utilize dyadic data analyses such as the actor-partner interdependence model (APIM), certain limitations to the applicability of these models still exist. Given the complexity of APIMs, most researchers will often use observed scores to estimate the model's parameters, which can significantly limit and underestimate statistical results. The aim of this article is to highlight the importance of conducting a confirmatory factor analysis (CFA) of equivalent constructs between dyad members (i.e., measurement equivalence/invariance; ME/I). Different steps for merging CFA and APIM procedures are detailed in order to shed light on new and integrative methods.

  12. Wavelet and Blend maps for texture synthesis

    OpenAIRE

    Du Jin-Lian; Wang Song; Meng Xianhai

    2011-01-01

    Texture blending is now a popular technology for large real-time texture synthesis. Nevertheless, creating the blend map during rendering is time- and computation-consuming work. In this paper, we exploit a method to create a kind of blend tile which can be tiled together seamlessly. Note that a blend map is in fact a kind of image, which is a Markov random field containing multiresolution signals, while the wavelet is a powerful way to process multiresolution signals; we use wavelets to process the traditional ble...

  13. 7 CFR 1400.204 - Limited partnerships, limited liability partnerships, limited liability companies, corporations...

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 10 2010-01-01 2010-01-01 false Limited partnerships, limited liability partnerships..., limited liability partnerships, limited liability companies, corporations, and other similar legal entities. (a) A limited partnership, limited liability partnership, limited liability company, corporation...

  14. Numerical Analyses of Earthquake Induced Liquefaction and Deformation Behaviour of an Upstream Tailings Dam

    Directory of Open Access Journals (Sweden)

    Muhammad Auchar Zardari

    2017-01-01

    Much of the seismic activity of northern Sweden consists of micro-earthquakes occurring near postglacial faults. However, larger magnitude earthquakes do occur in Sweden, and earthquake statistics indicate that a magnitude 5 event is likely to occur once every century. This paper presents dynamic analyses of the effects of larger earthquakes on an upstream tailings dam at the Aitik copper mine in northern Sweden. The analyses were performed to evaluate the potential for liquefaction and to assess stability of the dam under two specific earthquakes: a commonly occurring magnitude 3.6 event and a more extreme earthquake of magnitude 5.8. The dynamic analyses were carried out with the finite element program PLAXIS using a recently implemented constitutive model called UBCSAND. The results indicate that the magnitude 5.8 earthquake would likely induce liquefaction in a limited zone located below the ground surface near the embankment dikes. It is interpreted that stability of the dam may not be affected due to the limited extent of the liquefied zone. Both types of earthquakes are predicted to induce tolerable magnitudes of displacements. The results of the post-seismic slope stability analysis, performed for a state after a seismic event, suggest that the dam is stable under both earthquakes.

  15. Digital analyses of cartometric Fruska Gora guidelines

    Directory of Open Access Journals (Sweden)

    Živković Dragica

    2013-01-01

    Modern geomorphological research into topography uses quantitative statistical and cartographic methods to analyse topographic relief features and their mutual connections on the basis of good-quality numeric parameters. Topographic features are important for many natural processes. Particularly important morphological characteristics are the slope angle of the topography, hypsometry, topographic exposition and so on. Small and seemingly negligible relief slants can deeply affect land configuration, hypsometry, topographic exposition, etc. Expositions modify the light and heat of interconnected phenomena: soil and air temperature, soil disintegration, the length of the vegetation period, the complexity of photosynthesis, the fruitfulness of agricultural crops, the height of the snow limit, etc. [Projekat Ministarstva nauke Republike Srbije, br. 176008 i br. III44006]

  16. Cost-of-illness studies and cost-effectiveness analyses in anxiety disorders: a systematic review.

    Science.gov (United States)

    Konnopka, Alexander; Leichsenring, Falk; Leibing, Eric; König, Hans-Helmut

    2009-04-01

    To review cost-of-illness studies (COI) and cost-effectiveness analyses (CEA) conducted for anxiety disorders. Based on a database search in Pubmed, PsychINFO and NHS EED, studies were classified according to various criteria. Cost data were inflated and converted to 2005 US-$ purchasing power parities (PPP). We finally identified 20 COI and 11 CEA, of which most concentrated on panic disorder (PD) and generalized anxiety disorder (GAD). Differing inclusion of cost categories limited the comparability of the COI. PD and GAD tended to show higher direct costs per case, but lower direct costs per inhabitant, than social and specific phobias. Different measures of effectiveness severely limited the comparability of the CEA. Overall, the CEA analysed 26 therapeutic or interventional strategies, mostly compared to standard treatment, 8 of them resulting in better effectiveness and lower costs than the comparator. Anxiety disorders cause considerable costs. More research on phobias, more standardised inclusion of cost categories in COI and a wider use of comparable effectiveness measures (like QALYs) in CEA are needed.

  17. The limitations of ontogenetic data in phylogenetic analyses

    NARCIS (Netherlands)

    Koenemann, Stefan; Schram, Frederick R.

    2002-01-01

    The analysis of consecutive ontogenetic stages, or events, introduces a new class of data to phylogenetic systematics that are distinctly different from traditional morphological characters and molecular sequence data. Ontogenetic event sequences are distinguished by varying degrees of both a

  18. Transient electromagnetic and dynamic structural analyses of a blanket structure with coupling effects

    Energy Technology Data Exchange (ETDEWEB)

    Koganezawa, K. [Mitsubishi Atomic Power Industries, Inc., Yokohama (Japan); Kushiyama, M. [Mitsubishi Atomic Power Industries, Inc., Yokohama (Japan); Niikura, S. [Mitsubishi Atomic Power Industries, Inc., Yokohama (Japan); Kudough, F. [Mitsubishi Atomic Power Industries, Inc., Yokohama (Japan); Onozuka, M. [Mitsubishi Heavy Industries Ltd., Yokohama (Japan); Koizumi, K. [Japan Atomic Energy Research Inst., Ibaraki (Japan)

    1995-12-31

    Transient electromagnetic and dynamic structural analyses of a blanket structure in the fusion experimental reactor (FER) under a plasma disruption event and a vertical displacement event (VDE) have been performed to investigate the dynamic structural characteristics and the feasibility of the structure. Coupling effects between eddy currents and dynamic deflections have also been taken into account in these analyses. In this study, the inboard blanket was employed because of our computer memory limitation. A 1/192 segment model of a full torus was analyzed using the analytical code, EDDYCUFF. In the plasma disruption event, the maximum magnetic pressure caused by eddy currents and poloidal fields was 1.2MPa. The maximum stress intensity by this magnetic pressure was 114MPa. In the VDE, the maximum magnetic pressure was 2.4MPa and the maximum stress intensity was 253MPa. This stress was somewhat beyond the allowable stress limit. Therefore, the blanket structure and support design should be reviewed to reduce the stress to a suitable value. In summary, the dynamic structural characteristics and design issues of the blanket structure have been identified. (orig.).

  19. Association between Adult Height and Risk of Colorectal, Lung, and Prostate Cancer: Results from Meta-analyses of Prospective Studies and Mendelian Randomization Analyses

    Science.gov (United States)

    Khankari, Nikhil K.; Shu, Xiao-Ou; Wen, Wanqing; Kraft, Peter; Lindström, Sara; Peters, Ulrike; Schildkraut, Joellen; Schumacher, Fredrick; Bofetta, Paolo; Risch, Angela; Bickeböller, Heike; Amos, Christopher I.; Easton, Douglas; Gruber, Stephen B.; Haiman, Christopher A.; Hunter, David J.; Chanock, Stephen J.; Pierce, Brandon L.; Zheng, Wei

    2016-01-01

    Background Observational studies examining associations between adult height and risk of colorectal, prostate, and lung cancers have generated mixed results. We conducted meta-analyses using data from prospective cohort studies and further carried out Mendelian randomization analyses, using height-associated genetic variants identified in a genome-wide association study (GWAS), to evaluate the association of adult height with these cancers. Methods and Findings A systematic review of prospective studies was conducted using the PubMed, Embase, and Web of Science databases. Using meta-analyses, results obtained from 62 studies were summarized for the association of a 10-cm increase in height with cancer risk. Mendelian randomization analyses were conducted using summary statistics obtained for 423 genetic variants identified from a recent GWAS of adult height and from a cancer genetics consortium study of multiple cancers that included 47,800 cases and 81,353 controls. For a 10-cm increase in height, the summary relative risks derived from the meta-analyses of prospective studies were 1.12 (95% CI 1.10, 1.15), 1.07 (95% CI 1.05, 1.10), and 1.06 (95% CI 1.02, 1.11) for colorectal, prostate, and lung cancers, respectively. Mendelian randomization analyses showed increased risks of colorectal (odds ratio [OR] = 1.58, 95% CI 1.14, 2.18) and lung cancer (OR = 1.10, 95% CI 1.00, 1.22) associated with each 10-cm increase in genetically predicted height. No association was observed for prostate cancer (OR = 1.03, 95% CI 0.92, 1.15). Our meta-analysis was limited to published studies. The sample size for the Mendelian randomization analysis of colorectal cancer was relatively small, thus affecting the precision of the point estimate. Conclusions Our study provides evidence for a potential causal association of adult height with the risk of colorectal and lung cancers and suggests that certain genetic factors and biological pathways affecting adult height may also affect the

  20. Visualization of a Turbulent Jet Using Wavelets

    Institute of Scientific and Technical Information of China (English)

    Hui LI

    2001-01-01

    An application of multiresolution image analysis to turbulence was investigated in this paper, in order to visualize the coherent structure and the most essential scales governing turbulence. The digital imaging photograph of a jet slice was decomposed by the two-dimensional discrete wavelet transform based on Daubechies, Coifman and Beylkin bases. The best choice of orthogonal wavelet basis for analyzing the image of the turbulent structures was first discussed. It was found that orthonormal wavelet families with index N<10 were inappropriate for multiresolution image analysis of turbulent flow. The multiresolution images of turbulent structures were very similar when using wavelet bases with higher index numbers, even though the wavelet bases are different functions. From the image components in orthogonal wavelet spaces with different scales, further evidence of the multi-scale structures in the jet can be observed, and the edges of the vortices at different resolutions or scales and the coherent structure can be easily extracted.

  1. Multiscale wavelet representations for mammographic feature analysis

    Science.gov (United States)

    Laine, Andrew F.; Song, Shuwu

    1992-12-01

    This paper introduces a novel approach for accomplishing mammographic feature analysis through multiresolution representations. We show that efficient (nonredundant) representations may be identified from digital mammography and used to enhance specific mammographic features within a continuum of scale space. The multiresolution decomposition of wavelet transforms provides a natural hierarchy in which to embed an interactive paradigm for accomplishing scale-space feature analysis. Choosing wavelets (or analyzing functions) that are simultaneously localized in both space and frequency results in a powerful methodology for image analysis. Multiresolution and orientation selectivity, known biological mechanisms in primate vision, are ingrained in wavelet representations and inspire the techniques presented in this paper. Our approach includes local analysis of complete multiscale representations. Mammograms are reconstructed from wavelet coefficients, enhanced by linear, exponential and constant weight functions localized in scale space. By improving the visualization of breast pathology we can improve the chances of early detection of breast cancers (improving quality) while requiring less time to evaluate mammograms for most patients (lowering costs).
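
    The pipeline the abstract outlines — decompose, reweight detail coefficients in scale space, reconstruct — can be sketched with PyWavelets; the wavelet, depth and gain schedule below are illustrative assumptions, not the authors' choices.

```python
# Sketch of multiscale contrast enhancement: amplify wavelet detail
# coefficients with a per-scale gain, then reconstruct the image.
import numpy as np
import pywt

def enhance(image, wavelet="db4", levels=3, gains=(4.0, 2.0, 1.5)):
    coeffs = pywt.wavedec2(image, wavelet, level=levels)
    out = [coeffs[0]]  # approximation band left untouched
    for lvl, (cH, cV, cD) in enumerate(coeffs[1:]):
        g = gains[lvl]  # constant weight per scale (linear/exponential also possible)
        out.append((g * cH, g * cV, g * cD))
    return pywt.waverec2(out, wavelet)

mammo = np.random.rand(256, 256)  # stand-in for a digitized mammogram
enhanced = enhance(mammo)
```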

  2. Hydrometeorological variability on a large french catchment and its relation to large-scale circulation across temporal scales

    Science.gov (United States)

    Massei, Nicolas; Dieppois, Bastien; Fritier, Nicolas; Laignel, Benoit; Debret, Maxime; Lavers, David; Hannah, David

    2015-04-01

    In the present context of global changes, considerable efforts have been deployed by the hydrological scientific community to improve our understanding of the impacts of climate fluctuations on water resources. Both observational and modeling studies have been extensively employed to characterize hydrological changes and trends, assess the impact of climate variability or provide future scenarios of water resources. With the aim of a better understanding of hydrological changes, it is of crucial importance to determine how and to what extent trends and long-term oscillations detectable in hydrological variables are linked to global climate oscillations. In this work, we develop an approach associating large-scale/local-scale correlation, empirical statistical downscaling and wavelet multiresolution decomposition of monthly precipitation and streamflow over the Seine river watershed, and the North Atlantic sea level pressure (SLP), in order to gain additional insights into the atmospheric patterns associated with the regional hydrology. We hypothesized that: i) atmospheric patterns may change according to the different temporal wavelengths defining the variability of the signals; and ii) definition of those hydrological/circulation relationships for each temporal wavelength may improve the determination of large-scale predictors of local variations. The results showed that the large-scale/local-scale links were not necessarily constant according to time-scale (i.e. for the different frequencies characterizing the signals), resulting in changing spatial patterns across scales. This was then taken into account by developing an empirical statistical downscaling (ESD) modeling approach which integrated discrete wavelet multiresolution analysis for reconstructing local hydrometeorological processes (predictand: precipitation and streamflow on the Seine river catchment) based on a large-scale predictor (SLP over the Euro-Atlantic sector) on a monthly time-step. This approach
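
    A toy rendition of the scale-wise downscaling idea, on synthetic series and with an arbitrary wavelet and depth: fit one regression per wavelet component of the predictor, then reconstruct the predictand from the scale-wise predictions.

```python
# Per-scale statistical downscaling sketch: decompose predictor and
# predictand with a discrete wavelet MRA, fit one linear link per scale,
# reconstruct. Data are synthetic; wavelet choice/depth are assumptions.
import numpy as np
import pywt

rng = np.random.default_rng(0)
n = 512                                         # monthly time steps
slp = rng.standard_normal(n).cumsum()           # large-scale predictor (toy)
flow = 0.6 * slp + 2.0 * rng.standard_normal(n) # local predictand (toy)

wav, level = "db4", 4
cx = pywt.wavedec(slp, wav, level=level)
cy = pywt.wavedec(flow, wav, level=level)

rec = []
for a, b in zip(cx, cy):                        # one fit per scale band
    slope, intercept = np.polyfit(a, b, 1)
    rec.append(slope * a + intercept)

flow_hat = pywt.waverec(rec, wav)[:n]
print("per-scale model correlation:", np.corrcoef(flow, flow_hat)[0, 1])
```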

  3. Waste tank characterization sampling limits

    International Nuclear Information System (INIS)

    Tusler, L.A.

    1994-01-01

    This document is a result of the Plant Implementation Team investigation into delayed reporting of the exotherm in Tank 241-T-111 waste samples. The corrective action identified is immediate notification of the appropriate Tank Farm Operations Shift Management if analyses with potential safety impact exceed established levels. A procedure, WHC-IP-0842 Section 12.18, ''TWRS Approved Sampling and Data Analysis by Designated Laboratories'' (WHC 1994), has been established to require that all tank waste sampling (including core, auger and supernate) and tank vapor sampling be performed using this document. This document establishes levels for specified analyses that require notification of the appropriate shift manager. The following categories provide numerical values for analyses that may indicate that a tank is either outside the operating specification or should be evaluated for inclusion on a Watch List. The information given is intended to translate an operating limit, such as heat load expressed in Btu/hour, to an analysis-related limit, in this case cesium-137 and strontium-90 concentrations. By using the values provided as safety flags, the analytical laboratory personnel can notify a shift manager that a tank is in potential violation of an operating limit or that a tank should be considered for inclusion on a Watch List. The shift manager can then take appropriate interim measures until a final determination is made by engineering personnel
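
    To illustrate the heat-load-to-activity translation described above, here is a rough sketch using generic specific decay heats. The actual WHC limits and isotopic bases are not given in this record, so every number below is a placeholder.

```python
# Back-of-the-envelope translation of a tank heat-load limit into
# Cs-137/Sr-90 inventory flags. Specific decay heats are commonly quoted
# approximate values (daughters included), NOT the WHC-IP-0842 numbers.
BTU_PER_HR_TO_W = 0.29307107
W_PER_CI_CS137 = 5.0e-3   # ~5 mW/Ci incl. Ba-137m daughter (approx.)
W_PER_CI_SR90  = 6.7e-3   # ~6.7 mW/Ci incl. Y-90 daughter (approx.)

def max_activity_ci(heat_limit_btu_hr, w_per_ci):
    """Largest single-isotope inventory consistent with the heat limit."""
    return heat_limit_btu_hr * BTU_PER_HR_TO_W / w_per_ci

limit = 40000.0  # Btu/h, hypothetical operating limit
print("Cs-137 flag: %.2e Ci" % max_activity_ci(limit, W_PER_CI_CS137))
print("Sr-90  flag: %.2e Ci" % max_activity_ci(limit, W_PER_CI_SR90))
```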

  4. A Methodology for Conducting Integrative Mixed Methods Research and Data Analyses

    Science.gov (United States)

    Castro, Felipe González; Kellison, Joshua G.; Boyd, Stephen J.; Kopak, Albert

    2011-01-01

    Mixed methods research has gained visibility within the last few years, although limitations persist regarding the scientific caliber of certain mixed methods research designs and methods. The need exists for rigorous mixed methods designs that integrate various data analytic procedures for a seamless transfer of evidence across qualitative and quantitative modalities. Such designs can offer the strength of confirmatory results drawn from quantitative multivariate analyses, along with “deep structure” explanatory descriptions as drawn from qualitative analyses. This article presents evidence generated from over a decade of pilot research in developing an integrative mixed methods methodology. It presents a conceptual framework and methodological and data analytic procedures for conducting mixed methods research studies, and it also presents illustrative examples from the authors' ongoing integrative mixed methods research studies. PMID:22167325

  5. Cost tradeoffs in consequence management at nuclear power plants: A risk based approach to setting optimal long-term interdiction limits for regulatory analyses

    International Nuclear Information System (INIS)

    Mubayi, V.

    1995-05-01

    The consequences of severe accidents at nuclear power plants can be limited by various protective actions, including emergency responses and long-term measures, to reduce exposures of affected populations. Each of these protective actions involve costs to society. The costs of the long-term protective actions depend on the criterion adopted for the allowable level of long-term exposure. This criterion, called the ''long term interdiction limit,'' is expressed in terms of the projected dose to an individual over a certain time period from the long-term exposure pathways. The two measures of offsite consequences, latent cancers and costs, are inversely related and the choice of an interdiction limit is, in effect, a trade-off between these two measures. By monetizing the health effects (through ascribing a monetary value to life lost), the costs of the two consequence measures vary with the interdiction limit, the health effect costs increasing as the limit is relaxed and the protective action costs decreasing. The minimum of the total cost curve can be used to calculate an optimal long term interdiction limit. The calculation of such an optimal limit is presented for each of five US nuclear power plants which were analyzed for severe accident risk in the NUREG-1150 program by the Nuclear Regulatory Commission
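
    The optimization itself is simple once the two cost curves are specified. Below, hypothetical curve shapes stand in for the plant-specific ones, purely to show how the minimum of the total cost fixes the optimal interdiction limit.

```python
# Trade-off illustration: monetized health costs rise and protective-action
# costs fall as the interdiction limit is relaxed; the optimum is the
# minimum of their sum. Curves are invented, not NUREG-1150 results.
import numpy as np

limit = np.linspace(0.1, 10.0, 200)   # projected dose limit (rem), toy range
health_cost = 40.0 * limit            # monetized latent-cancer cost (toy)
protective_cost = 300.0 / limit       # interdiction/relocation cost (toy)
total = health_cost + protective_cost

i = np.argmin(total)
print(f"optimal interdiction limit ~ {limit[i]:.2f} rem, "
      f"total cost ~ {total[i]:.1f} (arbitrary units)")
```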

  6. Relationship of employee-reported work limitations to work productivity.

    Science.gov (United States)

    Lerner, Debra; Amick, Benjamin C; Lee, Jennifer C; Rooney, Ted; Rogers, William H; Chang, Hong; Berndt, Ernst R

    2003-05-01

    Work limitation rates are crucial indicators of the health status of working people. If related to work productivity, work limitation rates may also supply important information about the economic burden of illness. Our objective was to assess the productivity impact of on-the-job work limitations due to employees' physical or mental health problems. Subjects were asked to complete a self-administered survey on the job during 3 consecutive months. Using robust regression analysis, we tested the relationship of objectively-measured work productivity to employee-reported work limitations. We attempted to survey employees of a large firm within 3 different jobs. The survey response rate was 2245 (85.9%). Full survey and productivity data were available for 1827 respondents. Each survey included a validated self-report instrument, the Work Limitations Questionnaire (WLQ). The firm provided objective, employee-level work productivity data. In adjusted regression analyses (n = 1827), employee work productivity (measured as the log of units produced/hour) was significantly associated with 3 dimensions of work limitations: limitations handling the job's time and scheduling demands (P = 0.003), physical job demands (P = 0.001), and output demands (P = 0.006). For every 10% increase in on-the-job work limitations reported on each of the 3 WLQ scales, work productivity declined approximately 4 to 5%. Employee work limitations have a negative impact on work productivity. Employee assessments of their work limitations supply important proxies for the economic burden of health problems.
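
    A sketch of the reported model form — robust regression of log productivity on the three WLQ limitation scales — on simulated data; the coefficient sizes are chosen only to echo the reported ~4-5% decline per 10-point increase in limitations.

```python
# Robust regression sketch in the shape of the study's analysis.
# All data are simulated; nothing here reproduces the study's dataset.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 1827
X = rng.uniform(0, 100, size=(n, 3))        # three WLQ scales, 0-100
beta = np.array([-0.004, -0.005, -0.004])   # per-point effects (assumed)
log_prod = 2.0 + X @ beta + rng.normal(0, 0.3, n)

model = sm.RLM(log_prod, sm.add_constant(X), M=sm.robust.norms.HuberT())
print(model.fit().summary())
```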

  7. Effect of pressure on the lean limit flames of H2-CH4-air mixture in tubes

    KAUST Repository

    Zhou, Zhen; Shoshin, Yuriy; Hernandez Perez, Francisco; van Oijen, Jeroen A.; de Goey, Laurentius P.H.

    2017-01-01

    The lean limit flames of H2-CH4-air mixtures stabilized inside tubes in a downward flow are experimentally and numerically investigated at elevated pressures ranging from 2 to 5 bar. For the shapes of lean limit flames, a change from ball-like flame to cap-like flame is experimentally observed with the increase of pressure. This experimentally observed phenomenon is qualitatively predicted by numerical simulations. The structure of ball-like and cap-like lean limit flames at all tested pressures is analysed in detail based on the numerical predictions. The results show that the lean limit flames are located inside a recirculation zone at all tested pressures. For the leading edges of the lean limit flames at all tested pressures, the fuel transport is controlled by both convection and diffusion. For the trailing edge of the ball-like lean limit flame at 2 bar, the fuel transport is dominated by diffusion. However, with increasing pressure, the transport contribution caused by convection in the trailing edges of the lean limit flames increases. Finally, the influence of transport and chemistry on the predicted ultra lean flames and lean flammability limit is analysed at elevated pressures.

  9. Comparative biochemical analyses of venous blood and peritoneal fluid from horses with colic using a portable analyser and an in-house analyser.

    Science.gov (United States)

    Saulez, M N; Cebra, C K; Dailey, M

    2005-08-20

    Fifty-six horses with colic were examined over a period of three months. The concentrations of glucose, lactate, sodium, potassium and chloride, and the pH of samples of blood and peritoneal fluid, were determined with a portable clinical analyser and with an in-house analyser and the results were compared. Compared with the in-house analyser, the portable analyser gave higher pH values for blood and peritoneal fluid with greater variability in the alkaline range, and lower pH values in the acidic range, lower concentrations of glucose in the range below 8.3 mmol/l, and lower concentrations of lactate in venous blood in the range below 5 mmol/l and in peritoneal fluid in the range below 2 mmol/l, with less variability. On average, the portable analyser underestimated the concentrations of lactate and glucose in peritoneal fluid in comparison with the in-house analyser. Its measurements of the concentrations of sodium and chloride in peritoneal fluid had a higher bias and were more variable than the measurements in venous blood, and its measurements of potassium in venous blood and peritoneal fluid had a smaller bias and less variability than the measurements made with the in-house analyser.
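
    Bias and variability between two analysers of the kind reported here are conventionally summarized with Bland-Altman statistics; a sketch on placeholder paired measurements (the study's raw data are not reproduced in this record).

```python
# Bland-Altman style method comparison: mean bias and 95% limits of
# agreement between two instruments. The paired values are placeholders.
import numpy as np

portable = np.array([4.1, 5.0, 3.2, 6.8, 2.9, 4.4])  # e.g. glucose, mmol/l
inhouse  = np.array([4.5, 5.6, 3.5, 7.4, 3.3, 4.9])

diff = portable - inhouse
bias = diff.mean()
loa = 1.96 * diff.std(ddof=1)
print(f"bias = {bias:.2f} mmol/l, 95% limits of agreement = "
      f"({bias - loa:.2f}, {bias + loa:.2f})")
```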

  10. The limits to solar thermal electricity

    International Nuclear Information System (INIS)

    Trainer, Ted

    2014-01-01

    The potential and limits of solar thermal power systems depend primarily on their capacity to meet electricity demand in mid-winter, and the associated cost, storage and other implications. Evidence on output and costs is analysed. Most attention is given to central receivers. Problems of low radiation levels, embodied energy costs, variability and storage are discussed and are found to set significant difficulties for large scale solar thermal supply in less than ideal latitudes and seasons. It is concluded that for solar thermal systems to meet a large fraction of anticipated global electricity demand in winter would involve prohibitive capital costs. - Highlights: • Output and capital cost data for various solar thermal technologies is examined. • Special attention is given to performance in winter. • Attention is also given to the effect of solar intermittency. • Implications for storage are considered. • It is concluded that there are significant limits to solar thermal power

  11. Forward-Weighted CADIS Method for Variance Reduction of Monte Carlo Reactor Analyses

    International Nuclear Information System (INIS)

    Wagner, John C.; Mosher, Scott W.

    2010-01-01

    Current state-of-the-art tools and methods used to perform 'real' commercial reactor analyses use high-fidelity transport codes to produce few-group parameters at the assembly level for use in low-order methods applied at the core level. Monte Carlo (MC) methods, which allow detailed and accurate modeling of the full geometry and energy details and are considered the 'gold standard' for radiation transport solutions, are playing an ever-increasing role in correcting and/or verifying the several-decade-old methodology used in current practice. However, the prohibitive computational requirements associated with obtaining fully converged system-wide solutions restrict the role of MC to benchmarking deterministic results at a limited number of state-points for a limited number of relevant quantities. A goal of current research at Oak Ridge National Laboratory (ORNL) is to change this paradigm by enabling the direct use of MC for full-core reactor analyses. The most significant of the many technical challenges that must be overcome is the slow non-uniform convergence of system-wide MC estimates and the memory requirements associated with detailed solutions throughout a reactor (problems involving hundreds of millions of different material and tally regions due to fuel irradiation, temperature distributions, and the needs associated with multi-physics code coupling). To address these challenges, research has focused on development in the following two areas: (1) a hybrid deterministic/MC method for determining high-precision fluxes throughout the problem space in k-eigenvalue problems and (2) an efficient MC domain-decomposition algorithm that partitions the problem phase space onto multiple processors for massively parallel systems, with statistical uncertainty estimation. The focus of this paper is limited to the first area mentioned above. It describes the FW-CADIS method applied to variance reduction of MC reactor analyses and provides initial results for calculating
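
    The CADIS family of methods sets weight-window targets inversely proportional to an importance (adjoint) function, roughly w ≈ R/φ†; FW-CADIS additionally derives the adjoint source from a forward flux estimate to flatten relative uncertainties. A one-dimensional toy sketch with an invented importance profile (real solvers compute it deterministically):

```python
# Conceptual CADIS-style weight-window sketch; the adjoint profile is
# made up for illustration and no transport is actually solved here.
import numpy as np

phi_adj = np.logspace(0, -6, 50)        # adjoint (importance) function, toy
source = np.zeros(50); source[0] = 1.0  # forward source at one end
R = np.sum(source * phi_adj)            # estimated response

w_target = R / phi_adj                  # weight targets ~ inverse importance
print("weight targets span %.1e .. %.1e" % (w_target.min(), w_target.max()))
```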

  12. [Does medicine limit enlightenment?].

    Science.gov (United States)

    Schipperges, H

    1977-01-01

    In the first, historical part the most important programs of "Medical Enlightenment" are pointed out, beginning with Leibniz, followed by the public health movement of the 18th century, up to the time of Immanuel Kant. Based on this historical background, several concepts of a "Medical Culture" are analysed in detail, for instance the "Theorie einer Medizinal-Ordnung" by Johann Benjamin Ehrhard (1800), the "Medicinische Reform" by Rudolf Virchow (1848) and the programs of the "Gesellschaft Deutscher Naturforscher und Ärzte" (about 1850-1890), the latter bearing both scientific and political character. Following the historical part, the question is raised whether "Enlightenment" is limited by medicine, whether medicine is able to provide a program for individual health education resulting in a more cultivated style of private life, and lastly how this might be realized.

  13. Limiting electric fields of HVDC overhead power lines.

    Science.gov (United States)

    Leitgeb, N

    2014-05-01

    As a consequence of the increased use of renewable energy and the long distances now separating energy generation and consumption, electric power transfer by high-voltage (HV) direct current (DC) overhead power lines is gaining importance in Europe. Thousands of kilometres of them are going to be built within the next years. However, existing guidelines and regulations do not yet contain recommendations to limit static electric fields, which are one of the most important criteria for HVDC overhead power lines in terms of tower design, span width and ground clearance. Based on theoretical and experimental data, in this article, static electric fields associated with adverse health effects are analysed and various criteria are derived for limiting static electric field strengths.

  14. Metagenome and Metatranscriptome Analyses Using Protein Family Profiles.

    Directory of Open Access Journals (Sweden)

    Cuncong Zhong

    2016-07-01

    Analyses of metagenome (MG) and metatranscriptome (MT) data are often challenged by a paucity of complete reference genome sequences and the uneven/low sequencing depth of the constituent organisms in the microbial community, which respectively limit the power of reference-based alignment and de novo sequence assembly. These limitations make accurate protein family classification and abundance estimation challenging, which in turn hampers downstream analyses such as abundance profiling of metabolic pathways, identification of differentially encoded/expressed genes, and de novo reconstruction of complete gene and protein sequences from the protein family of interest. The profile hidden Markov model (HMM) framework enables the construction of very useful probabilistic models for protein families that allow for accurate modeling of position-specific matches, insertions, and deletions. We present a novel homology detection algorithm that integrates a banded Viterbi algorithm for profile HMM parsing with an iterative simultaneous alignment and assembly computational framework. The algorithm searches a given profile HMM of a protein family against a database of fragmentary MG/MT sequencing data and simultaneously assembles complete or near-complete gene and protein sequences of the protein family. The resulting program, HMM-GRASPx, demonstrates superior performance in aligning and assembling homologs when benchmarked on both simulated marine MG and real human saliva MG datasets. On real supragingival plaque and stool MG datasets that were generated from healthy individuals, HMM-GRASPx accurately estimates the abundances of the antimicrobial resistance (AMR) gene families and enables accurate characterization of the resistome profiles of these microbial communities. For real human oral microbiome MT datasets, using the HMM-GRASPx estimated transcript abundances significantly improves detection of differentially expressed (DE) genes. Finally, HMM

  15. Scaling Limit of the Noncommutative Black Hole

    International Nuclear Information System (INIS)

    Majid, Shahn

    2011-01-01

    We show that the 'quantum' black hole wave operator in the κ-Minkowski or bicrossproduct model quantum spacetime introduced in [1] has a natural scaling limit λ_p → 0 at the event horizon. Here λ_p is the Planck time and the geometry at the event horizon in Planck lengths is maintained at the same time as the limit is taken, resulting in a classical theory with quantum gravity remnants. Among the features is a frequency-dependent 'skin' of some ω/ν Planck lengths just inside the event horizon for ω > 0 and just outside for ω < 0, where ν is the frequency associated to the Schwarzschild radius. We use Bessel and hypergeometric functions to analyse propagation through the event horizon and skin in both directions. The analysis confirms a finite redshift at the horizon for positive frequency modes in the exterior.

  16. A two-channel wave analyser for sounding rockets and satellites

    International Nuclear Information System (INIS)

    Brondz, E.

    1989-04-01

    Studies of low-frequency electromagnetic waves, produced originally by lightning discharges penetrating the ionosphere, provide an important source of valuable information about the earth's surrounding plasma. The use of rockets and satellites supported by ground-based observations offers a unique opportunity for measuring a number of parameters in situ simultaneously, in order to correlate data from various measurements. However, every rocket experiment has to be designed bearing in mind telemetry limitations and/or short flight duration. Typical flight duration for Norwegian rockets launched from Andoeya Rocket Range is 500 to 600 s. Therefore, the most desirable way to use a rocket or satellite is to carry out data analyses on board in real time. Recent achievements in Digital Signal Processing (DSP) technology have made it possible to undertake very complex on-board data manipulation. As part of the rocket instrumentation, a DSP-based unit able to carry out on-board analyses of low-frequency electromagnetic waves in the ionosphere has been designed. The unit can be seen as a general-purpose computer built on the basis of a fixed-point 16-bit signal processor. The unit is supplied with program code in order to perform wave analyses on two independent channels simultaneously. The analyser is able to perform 256-point complex fast Fourier transforms, and it produces a spectral power density estimate on both channels every 85 ms. The design and construction of the DSP-based unit are described and results from the tests are presented
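
    The on-board processing can be emulated offline: a 256-point windowed FFT turned into a power spectral density estimate per channel. Sampling rate and test signals below are invented for the example.

```python
# Two-channel 256-point PSD estimate, mimicking the analyser's output.
import numpy as np

fs = 3000.0                      # Hz, assumed telemetry sampling rate
t = np.arange(256) / fs
ch1 = np.sin(2 * np.pi * 500 * t) + 0.3 * np.random.randn(256)
ch2 = np.sin(2 * np.pi * 900 * t) + 0.3 * np.random.randn(256)

def psd256(x, fs):
    X = np.fft.rfft(x * np.hanning(len(x)), n=256)
    return np.abs(X) ** 2 / (fs * len(x)), np.fft.rfftfreq(256, 1 / fs)

for name, ch in (("ch1", ch1), ("ch2", ch2)):
    p, f = psd256(ch, fs)
    print(name, "peak at %.0f Hz" % f[np.argmax(p[1:]) + 1])
```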

  17. Risk based limits for Operational Safety Requirements

    International Nuclear Information System (INIS)

    Cappucci, A.J. Jr.

    1993-01-01

    OSR limits are designed to protect the assumptions made in the facility safety analysis in order to preserve the safety envelope during facility operation. Normally, limits are set based on ''worst case conditions'' without regard to the likelihood (frequency) of a credible event occurring. In special cases where the accident analyses are based on ''time at risk'' arguments, it may be desirable to control the time at which the facility is at risk. A methodology has been developed to use OSR limits to control the source terms and the times these source terms would be available, thus controlling the acceptable risk to a nuclear process facility. The methodology defines a new term ''gram-days''. This term represents the area under a source term (inventory) vs time curve which represents the risk to the facility. Using the concept of gram-days (normalized to one year) allows the use of an accounting scheme to control the risk under the inventory vs time curve. The methodology results in at least three OSR limits: (1) control of the maximum inventory or source term, (2) control of the maximum gram-days for the period based on a source term weighted average, and (3) control of the maximum gram-days at the individual source term levels. Basing OSR limits on risk based safety analysis is feasible, and a basis for development of risk based limits is defensible. However, monitoring inventories and the frequencies required to maintain facility operation within the safety envelope may be complex and time consuming
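
    The gram-days bookkeeping described above is just the area under the inventory-versus-time curve checked against budgets; a sketch with a hypothetical inventory history and hypothetical limits.

```python
# Gram-days accounting: integrate source-term inventory over time and
# compare against period limits. All numbers are placeholders.
import numpy as np

days = np.array([0, 30, 60, 120, 365])      # time points within the year
grams = np.array([50, 120, 120, 40, 40])    # source-term inventory (g)

gram_days = np.trapz(grams, days)           # area under inventory curve
print(f"accumulated {gram_days:.0f} gram-days this year")

MAX_INSTANT_G = 150.0     # limit (1): peak inventory (hypothetical)
MAX_GRAM_DAYS = 40000.0   # limit (2): annual gram-days budget (hypothetical)
assert grams.max() <= MAX_INSTANT_G and gram_days <= MAX_GRAM_DAYS
```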

  18. K → ππ Electroweak penguins in the chiral limit

    International Nuclear Information System (INIS)

    Cirigliano, V.; Donoghue, J.F.; Golowich, E.; Maltman, K.

    2003-01-01

    We report on dispersive and finite energy sum rule analyses of the electroweak penguin matrix elements ⟨(ππ)_{I=2}|Q_{7,8}|K^0⟩ in the chiral limit. We accomplish the correct perturbative matching (scale and scheme dependence) at NLO in α_s, and we describe two different strategies for numerical evaluation

  19. Multifractal properties of diffusion-limited aggregates and random multiplicative processes

    International Nuclear Information System (INIS)

    Canessa, E.

    1991-04-01

    We consider the multifractal properties of irreversible diffusion-limited aggregation (DLA) from the point of view of the self-similarity of fluctuations in random multiplicative processes. In particular we analyse the breakdown of multifractal behaviour and phase transition associated with the negative moments of the growth probabilities in DLA. (author). 20 refs, 5 figs

  20. How to deal with continuous and dichotomic outcomes in epidemiological research: linear and logistic regression analyses

    NARCIS (Netherlands)

    Tripepi, Giovanni; Jager, Kitty J.; Stel, Vianda S.; Dekker, Friedo W.; Zoccali, Carmine

    2011-01-01

    Because of some limitations of stratification methods, epidemiologists frequently use multiple linear and logistic regression analyses to address specific epidemiological questions. If the dependent variable is a continuous one (for example, systolic pressure and serum creatinine), the researcher
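
    The two model families named above, side by side on toy data: ordinary least squares for a continuous outcome, logistic regression for a dichotomous one.

```python
# Minimal linear and logistic regression sketch on simulated data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 500
age = rng.uniform(20, 80, n)
X = sm.add_constant(age)

creatinine = 0.6 + 0.01 * age + rng.normal(0, 0.2, n)  # continuous outcome
print(sm.OLS(creatinine, X).fit().params)

p = 1 / (1 + np.exp(-(-4 + 0.05 * age)))               # dichotomous outcome
event = rng.binomial(1, p)
print(sm.Logit(event, X).fit(disp=0).params)
```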

  1. Insights into vehicle trajectories at the handling limits: analysing open data from race car drivers

    Science.gov (United States)

    Kegelman, John C.; Harbott, Lene K.; Gerdes, J. Christian

    2017-02-01

    Race car drivers can offer insights into vehicle control during extreme manoeuvres; however, little data from race teams is publicly available for analysis. The Revs Program at Stanford has built a collection of vehicle dynamics data acquired from vintage race cars during live racing events with the intent of making this database publicly available for future analysis. This paper discusses the data acquisition, post-processing, and storage methods used to generate the database. An analysis of available data quantifies the repeatability of professional race car driver performance by examining the statistical dispersion of their driven paths. Certain map features, such as sections with high path curvature, consistently corresponded to local minima in path dispersion, quantifying the qualitative concept that drivers anchor their racing lines at specific locations around the track. A case study explores how two professional drivers employ distinct driving styles to achieve similar lap times, supporting the idea that driving at the limits allows a family of solutions in terms of paths and speed that can be adapted based on specific spatial, temporal, or other constraints and objectives.
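
    One plausible way to compute the path-dispersion statistic described — not necessarily the authors' exact procedure — is to resample each lap's path by arc length and take the lateral spread across laps at each station.

```python
# Per-station dispersion of driven paths across laps. Paths are
# synthetic stand-ins for the Revs Program telemetry.
import numpy as np

def lateral_dispersion(laps, n_stations=200):
    """laps: list of (N_i, 2) arrays of x,y positions for one driver."""
    resampled = []
    for p in laps:
        seg = np.linalg.norm(np.diff(p, axis=0), axis=1)
        s = np.concatenate([[0.0], np.cumsum(seg)])        # arc length
        u = np.linspace(0, s[-1], n_stations)
        resampled.append(np.column_stack([np.interp(u, s, p[:, i])
                                          for i in (0, 1)]))
    stack = np.stack(resampled)              # (laps, stations, 2)
    return stack.std(axis=0).sum(axis=1)     # crude per-station spread

laps = [np.column_stack([np.linspace(0, 100, 300),
                         np.sin(np.linspace(0, 6, 300))
                         + 0.1 * np.random.randn(300)])
        for _ in range(10)]
print("minimum dispersion at station", np.argmin(lateral_dispersion(laps)))
```

    Local minima of this curve would correspond to the "anchor" points where drivers' lines converge, e.g. sections of high path curvature.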

  2. Determination of operating limits for radionuclides for a proposed landfill at Paducah Gaseous Diffusion Plant

    International Nuclear Information System (INIS)

    Wang, J.C.; Lee, D.W.; Ketelle, R.H.; Lee, R.R.; Kocher, D.C.

    1994-01-01

    The operating limits for radionuclides in sanitary and industrial wastes were determined for a proposed landfill at the Paducah Gaseous Diffusion Plant (PGDP), Kentucky. These limits, which may be very small but nonzero, are not mandated by law or regulation but are needed for rational operation. The approach was based on analyses of the potential contamination of groundwater at the plant boundary and the potential exposure to radioactivity of an intruder at the landfill after closure. The groundwater analysis includes (1) a source model describing the disposal of waste and the release of radionuclides from waste to the groundwater, (2) site-specific groundwater flow and contaminant transport calculations, and (3) calculations of operating limits from the dose limit and conversion factors. The intruder analysis includes pathways through ingestion of contaminated vegetables and soil, external exposure to contaminated soil, and inhalation of suspended activity from contaminated soil particles. In both analyses, a limit on annual effective dose equivalent of 4 mrem (0.04 mSv) was adopted. The intended application of the results is to refine the radiological monitoring standards employed by the PGDP Health Physics personnel to determine what constitutes radioactive wastes, with concurrence of the Commonwealth of Kentucky

  3. Project W-320 SAR and process control thermal analyses

    International Nuclear Information System (INIS)

    Sathyanarayana, K.

    1997-01-01

    This report summarizes the results of thermal-hydraulic computer modeling supporting Project W-320 for process control and SAR documentation. Parametric analyses were performed for the maximum steady-state waste temperature. The parameters included heat load distribution, tank heat load, fluffing factor and thermal conductivity. Uncertainties in the fluffing factor and heat load distribution had the largest effect on maximum waste temperature. Safety analyses were performed for off-normal events including loss of ventilation, loss of evaporation and loss of the secondary chiller. The loss of both the primary and secondary ventilation was found to be the most limiting event, with the bottom waste reaching saturation temperature in just over 30 days. An evaluation was performed for the potential lowering of the supernatant level in tank 241-AY-102. The evaluation included a loss of ventilation and a steam bump analysis. The reduced supernatant level decreased the time to reach saturation temperature in the waste for the loss of ventilation by about one week. However, the consequences of a steam bump were dramatically reduced

  4. Upper limits from counting experiments with multiple pipelines

    International Nuclear Information System (INIS)

    Sutton, Patrick J

    2009-01-01

    In counting experiments, one can set an upper limit on the rate of a Poisson process based on a count of the number of events observed due to the process. In some experiments, one makes several counts of the number of events, using different instruments, different event detection algorithms or observations over multiple time intervals. We demonstrate how to generalize the classical frequentist upper limit calculation to the case where multiple counts of events are made over one or more time intervals using several (not necessarily independent) procedures. We show how different choices of the rank ordering of possible outcomes in the space of counts correspond to applying different levels of significance to the various measurements. We propose an ordering that is matched to the sensitivity of the different measurement procedures and show that in typical cases it gives stronger upper limits than other choices. As an example, we show how this method can be applied to searches for gravitational-wave bursts, where multiple burst-detection algorithms analyse the same data set, and demonstrate how a single combined upper limit can be set on the gravitational-wave burst rate.
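
    For a single count, the classical construction reduces to the textbook Poisson upper limit, obtainable from the chi-squared quantile function.

```python
# Classical frequentist upper limit on a Poisson rate from an observed
# count, via the chi-squared relation.
from scipy.stats import chi2

def poisson_upper_limit(n_obs, cl=0.90, live_time=1.0):
    """Rate mu such that P(N <= n_obs | mu) = 1 - cl."""
    return 0.5 * chi2.ppf(cl, 2 * (n_obs + 1)) / live_time

print(poisson_upper_limit(0))   # ~2.30 events at 90% CL for n = 0
print(poisson_upper_limit(3))   # ~6.68 events at 90% CL for n = 3
```

    For several, possibly correlated, pipelines the paper's rank-ordering construction generalises this by ordering outcome tuples rather than a single count.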

  5. Irregular Morphing for Real-Time Rendering of Large Terrain

    Directory of Open Access Journals (Sweden)

    S. Kalem

    2016-06-01

    The following paper proposes an alternative approach to the real-time adaptive triangulation problem. A new region-based multi-resolution approach for terrain rendering is described which improves on-the-fly the distribution of the density of triangles inside a tile after selecting the appropriate level of detail by adaptive sampling. The proposed approach organizes the heightmap into a quadtree of tiles that are processed independently. This technique combines the benefits of both the Triangular Irregular Network approach and the region-based multi-resolution approach by improving the distribution of the density of triangles inside the tile. Our technique morphs the initial regular grid of the tile into a deformed grid in order to minimize the approximation error. The proposed technique strives to combine large tile size and real-time processing while guaranteeing an upper bound on the screen-space error. Thus, this approach adapts the terrain rendering process to local surface characteristics and enables on-the-fly handling of large amounts of terrain data. Morphing is based on multi-resolution wavelet analysis. The use of 2D DWT multi-resolution analysis of the terrain heightmap speeds up processing and permits interactive terrain rendering. Tests and experiments demonstrate that the Haar B-spline wavelet, well known for its properties of localization and its compact support, is suitable for fast and accurate redistribution. Such a technique could be exploited in a client-server architecture for supporting interactive high-quality remote visualization of very large terrains.
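
    A sketch of using per-tile Haar detail energy as the refinement criterion, in the spirit of the wavelet-driven selection described; tile size, threshold and the decision rule below are assumptions.

```python
# Per-tile level-of-detail selection from Haar wavelet detail energy,
# a cheap proxy for surface roughness. Thresholds are arbitrary.
import numpy as np
import pywt

def tile_lod(height_tile, max_lod=4, thresh=1.0):
    coeffs = pywt.wavedec2(height_tile, "haar", level=max_lod)
    lod = 0
    for lvl, details in enumerate(coeffs[1:], start=1):  # coarse -> fine
        energy = sum(float(np.sum(d * d)) for d in details)
        if energy > thresh:
            lod = lvl  # finer detail still carries energy: refine further
    return lod

tile = np.random.rand(64, 64)  # stand-in 64x64 heightmap tile
print("chosen LOD:", tile_lod(tile))
```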

  6. Predictors of parents' intention to limit children's television viewing.

    Science.gov (United States)

    Bleakley, Amy; Piotrowski, Jessica Taylor; Hennessy, Michael; Jordan, Amy

    2013-12-01

    Scientific evidence demonstrates a link between viewing time and several poor health outcomes. We use a reasoned action approach to identify the determinants and beliefs associated with parents' intention to limit their children's television viewing. We surveyed a random sample of 516 caregivers to children ages 3-16 in a large Northeastern city. Multiple regression analyses were used to test a reasoned action model and examine the differences across demographic groups. The intention to limit viewing (-3 to 3) was low among parents of adolescents (M: 0.05) compared with parents of 3-6 year olds (M: 1.49) and 7-12 year olds (M: 1.16). Attitudes were the strongest determinant of intention (β = 0.43) across all demographic groups and normative pressure was also significantly related to intention (β = 0.20), except among parents of adolescents. Relevant beliefs associated with intention to limit viewing included: limiting television would be associated with the child exercising more, doing better in school, talking to family more and having less exposure to inappropriate content. Attitudes and normative pressure play an important role in determining parents' intention to limit their child's television viewing. The beliefs that were associated with parents' intention to limit should be emphasized by health professionals and in health communication campaigns.

  7. A conceptual framework for analysing and measuring land-use intensity

    DEFF Research Database (Denmark)

    Erb, Karl-Heinz; Haberl, Helmut; Jepsen, Martin Rudbeck

    2013-01-01

    Large knowledge gaps currently exist that limit our ability to understand and characterise dynamics and patterns of land-use intensity: in particular, a comprehensive conceptual framework and a system of measurement are lacking. This situation hampers the development of a sound understanding...... of the mechanisms, determinants, and constraints underlying changes in land-use intensity. On the basis of a review of approaches for studying land-use intensity, we propose a conceptual framework to quantify and analyse land-use intensity. This framework integrates three dimensions: (a) input intensity, (b) output...

  8. Finite element-based limit load of piping branch junctions under combined loadings

    International Nuclear Information System (INIS)

    Xuan Fuzhen; Li Peining

    2004-01-01

    The limit load is an important input parameter in engineering defect-assessment procedures and strength design. In the present work, a total of 100 different piping branch junction models for the limit load calculation were performed under combined internal pressure and moments in use of non-linear finite element (FE) method. Three different existing accumulation rules for limit load, i.e., linear equation, parabolic equation and quadratic equation were discussed on the basis of FE results. A novel limit load solution was developed based on detailed three-dimensional FE limit analyses which accommodated the geometrical parameter influence, together with analytical solutions based on equilibrium stress fields. Finally, six experimental results were provided to justify the presented equation. According to the FE limit analysis, limit load interaction of the piping tees under combined pressure and moments has a relationship with the geometrical parameters, especially with the diameter ratio d/D. The predicted limit loads from the presented formula are very close to the experimental data. The resulting limit load solution is given in a closed form, and thus can be easily used in practice
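
    The three accumulation rules can be written as simple interaction checks. The exponents below follow the generic forms the rule names suggest and may differ from the paper's fitted, geometry-dependent expressions.

```python
# Go/no-go interaction checks for combined pressure and moment, relative
# to their individual limit values (illustrative forms only).
def within_limit(p_ratio, m_ratio, rule="quadratic"):
    """p_ratio = P/P_L, m_ratio = M/M_L (each equals 1 at its own limit)."""
    if rule == "linear":       # P/P_L + M/M_L <= 1
        return p_ratio + m_ratio <= 1.0
    if rule == "parabolic":    # (P/P_L)^2 + M/M_L <= 1
        return p_ratio ** 2 + m_ratio <= 1.0
    if rule == "quadratic":    # (P/P_L)^2 + (M/M_L)^2 <= 1
        return p_ratio ** 2 + m_ratio ** 2 <= 1.0
    raise ValueError(rule)

for rule in ("linear", "parabolic", "quadratic"):
    print(rule, within_limit(0.6, 0.6, rule))
```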

  9. Statistical analyses of the data on occupational radiation expousure at JPDR

    International Nuclear Information System (INIS)

    Kato, Shohei; Anazawa, Yutaka; Matsuno, Kenji; Furuta, Toshishiro; Akiyama, Isamu

    1980-01-01

    In the statistical analyses of the data on occupational radiation exposure at JPDR, the following statistical features were obtained. (1) The individual doses followed a log-normal distribution. (2) In the distribution of doses from one job in the controlled area, the logarithm of the mean (μ) depended on the exposure rate r (mR/h), and σ correlated with the nature of the job and was normally distributed. These relations were as follows: μ = 0.48 ln r − 0.24, σ = 1.2 ± 0.58. (3) For data containing different groups, the distribution of doses showed a polygonal line on log-normal probability paper. (4) Under the dose limitation, the distribution of doses showed an asymptotic curve along the limit on log-normal probability paper. (author)
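
    The reported fit is directly usable for simulation: individual doses log-normal, with the log-mean tied to the exposure rate by μ = 0.48 ln r − 0.24 and σ ≈ 1.2. A sketch (dose units follow the original data):

```python
# Sample per-job individual doses from the reported log-normal fit.
import numpy as np

def sample_job_doses(exposure_rate_mr_per_h, n_workers, sigma=1.2, seed=0):
    mu = 0.48 * np.log(exposure_rate_mr_per_h) - 0.24
    rng = np.random.default_rng(seed)
    return rng.lognormal(mean=mu, sigma=sigma, size=n_workers)

doses = sample_job_doses(10.0, 1000)
print("median %.2f, mean %.2f" % (np.median(doses), doses.mean()))
```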

  10. Usefulness of non-linear input-output models for economic impact analyses in tourism and recreation

    NARCIS (Netherlands)

    Klijs, J.; Peerlings, J.H.M.; Heijman, W.J.M.

    2015-01-01

    In tourism and recreation management it is still common practice to apply traditional input–output (IO) economic impact models, despite their well-known limitations. In this study the authors analyse the usefulness of applying a non-linear input–output (NLIO) model, in which price-induced input

  11. Upper limits on the luminosity of the progenitor of type Ia supernova SN2014J

    DEFF Research Database (Denmark)

    Nielsen, M. T. B.; Gilfanov, M.; Bogdan, A.

    2014-01-01

    We analysed archival data of Chandra pre-explosion observations of the position of SN2014J in M82. No X-ray source at this position was detected in the data, and we calculated upper limits on the luminosities of the progenitor. These upper limits allow us to firmly rule out an unobscured supersof...

  12. Evaluation of uncertainty and detection limits in radioactivity measurements

    Energy Technology Data Exchange (ETDEWEB)

    Herranz, M. [Universidad del Pais Vasco/Euskal Herriko Unibertsitatea, Escuela Tecnica Superior de Ingenieria de Bilbao, Alda. Urquijo, s/n, 48013 Bilbao (Spain); Idoeta, R. [Universidad del Pais Vasco/Euskal Herriko Unibertsitatea, Escuela Tecnica Superior de Ingenieria de Bilbao, Alda. Urquijo, s/n, 48013 Bilbao (Spain)], E-mail: raquel.idoeta@ehu.es; Legarda, F. [Universidad del Pais Vasco/Euskal Herriko Unibertsitatea, Escuela Tecnica Superior de Ingenieria de Bilbao, Alda. Urquijo, s/n, 48013 Bilbao (Spain)

    2008-10-01

    The uncertainty associated with the assessment of the radioactive content of any sample depends on the net counting rate registered during the measuring process and on the different weighting factors needed to transform this counting rate into activity, activity per unit mass or activity concentration. This work analyses the standard uncertainties in these weighting factors as well as their contribution to the uncertainty in the activity reported for three typical determinations for environmental radioactivity measurements in the laboratory. It also studies the corresponding characteristic limits and their dependence on the standard uncertainty related to those weighting factors, offering an analysis of the effectiveness of the simplified characteristic limits as evaluated by various measuring software and laboratories.

  13. Evaluation of uncertainty and detection limits in radioactivity measurements

    International Nuclear Information System (INIS)

    Herranz, M.; Idoeta, R.; Legarda, F.

    2008-01-01

    The uncertainty associated with the assessment of the radioactive content of any sample depends on the net counting rate registered during the measuring process and on the different weighting factors needed to transform this counting rate into activity, activity per unit mass or activity concentration. This work analyses the standard uncertainties in these weighting factors as well as their contribution to the uncertainty in the activity reported for three typical determinations for environmental radioactivity measurements in the laboratory. It also studies the corresponding characteristic limits and their dependence on the standard uncertainty related to those weighting factors, offering an analysis of the effectiveness of the simplified characteristic limits as evaluated by various measuring software and laboratories
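
    The two records above concern how uncertainties in the weighting factors propagate into the reported activity. A minimal sketch of that propagation is given below; the two-component combination and all numbers are assumptions, not the authors' procedure or the full characteristic-limit calculation.

```python
# Hedged sketch: activity A = w * r_net from gross/background counts,
# with the weighting factor w carrying its own relative uncertainty.
import math

def activity_with_uncertainty(n_gross, t_gross, n_bkg, t_bkg, w, u_w_rel):
    r_net = n_gross / t_gross - n_bkg / t_bkg                 # net counting rate (1/s)
    u_r = math.sqrt(n_gross / t_gross**2 + n_bkg / t_bkg**2)  # Poisson counting term
    a = w * r_net
    u_a = abs(a) * math.sqrt((u_r / r_net) ** 2 + u_w_rel ** 2)
    return a, u_a

a, u_a = activity_with_uncertainty(420, 3600.0, 300, 3600.0, w=8.5, u_w_rel=0.05)
print(f"activity = {a:.3f} +/- {u_a:.3f} (illustrative units)")
```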

  14. [Cost-Effectiveness and Cost-Utility Analyses of Antireflux Medicine].

    Science.gov (United States)

    Gockel, Ines; Lange, Undine Gabriele; Schürmann, Olaf; Jansen-Winkeln, Boris; Sibbel, Rainer; Lyros, Orestis; von Dercks, Nikolaus

    2018-04-12

    Laparoscopic antireflux surgery and medical therapy with proton pump inhibitors are the gold standards of gastroesophageal reflux treatment. On account of limited resources and increasing healthcare needs and costs, this analysis evaluates not only the medical results but also the health-economic superiority of these two methods. We performed an electronic literature survey in MEDLINE, PubMed, the Cochrane Library, ISRCTN (International Standard Randomised Controlled Trial Number) as well as the NHS Economic Evaluation Database, including studies published until 1/2017. Only studies comparing laparoscopic fundoplication and medical therapy that considered the effect size of QALYs (Quality-Adjusted Life Years, with respect to different quality-of-life scores) as the primary outcome were included. The criteria of comparison were the ICER (Incremental Cost-Effectiveness Ratio) and the ICUR (Incremental Cost-Utility Ratio). The superior treatment option was worked out for each publication. In total, 18 comparative studies matching the above search terms were identified in the current literature and qualified for the defined inclusion criteria. Six studies were finally selected for analysis. Three of the six publications showed superiority of laparoscopic fundoplication over long-term medical management based on current cost-effectiveness data. Limitations were related to differing time intervals, levels of evidence of the studies, underlying resources/costs of the analyses, healthcare systems and applied quality-of-life instruments. Future prospective, randomised trials should examine this comparison in greater detail. Additionally, there is large potential for further research in the health-economic assessment of early diagnosis and prevention measures for reflux disease and Barrett's esophagus/carcinoma. © Georg Thieme Verlag KG Stuttgart · New York.
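
    For readers unfamiliar with the comparison criteria named here, the arithmetic is simple; the sketch below shows the ICUR as incremental cost per QALY gained, with all cost and QALY figures invented.

```python
# Hedged sketch of the ICER/ICUR arithmetic; numbers are invented.

def incremental_ratio(cost_new, cost_old, effect_new, effect_old):
    """Incremental cost per unit of incremental effect (per QALY -> ICUR)."""
    return (cost_new - cost_old) / (effect_new - effect_old)

cost_surgery, cost_ppi = 9500.0, 6200.0   # lifetime costs (EUR), assumed
qaly_surgery, qaly_ppi = 14.1, 13.6       # QALYs, assumed

icur = incremental_ratio(cost_surgery, cost_ppi, qaly_surgery, qaly_ppi)
print(f"ICUR = {icur:.0f} EUR per QALY gained")
```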

  15. Analyses of demand response in Denmark

    International Nuclear Information System (INIS)

    Moeller Andersen, F.; Grenaa Jensen, S.; Larsen, Helge V.; Meibom, P.; Ravn, H.; Skytte, K.; Togeby, M.

    2006-10-01

    Due to characteristics of the power system, costs of producing electricity vary considerably over short time intervals. Yet, many consumers do not experience corresponding variations in the price they pay for consuming electricity. The topic of this report is: are consumers willing and able to respond to short-term variations in electricity prices, and if so, what is the social benefit of consumers doing so? Taking Denmark and the Nord Pool market as a case, the report focuses on what is known as short-term consumer flexibility or demand response in the electricity market. With focus on market efficiency, efficient allocation of resources and security of supply, the report describes demand response from a micro-economic perspective and provides empirical observations and case studies. The report aims at evaluating benefits from demand response. However, only elements contributing to an overall value are presented. In addition, the analyses are limited to benefits for society, and costs of obtaining demand response are not considered. (au)

  16. Thermal and structural limitations for impurity-control components in FED/INTOR

    International Nuclear Information System (INIS)

    Majumdar, S.; Cha, Y.; Mattas, R.; Abdou, M.; Cramer, B.; Haines, J.

    1983-02-01

    The successful operation of the impurity-control system of the FED/INTOR will depend to a large extent on the ability of its various components to withstand the imposed thermal and mechanical loads. The present paper explores the thermal and stress analysis aspects of the limiter and divertor operation of the FED/INTOR in its reference configuration. Three basic limitations governing the design of the limiter and the divertor are the maximum allowable metal temperature, the maximum allowable stress intensity and the allowable fatigue life of the structural material. Other important design limitations stemming from sputtering, evaporation, melting during disruptions, etc. are not considered in the present paper. The materials considered in the present analysis are a copper and a vanadium alloy for the structural material and graphite, beryllium, beryllium oxide, tungsten and silicon carbide for the coating or tile material.

  17. The relationship between limited MRI section analyses and volumetric assessment of synovitis in knee osteoarthritis

    International Nuclear Information System (INIS)

    Rhodes, L.A.; Keenan, A.-M.; Grainger, A.J.; Emery, P.; McGonagle, D.; Conaghan, P.G.

    2005-01-01

    AIM: To assess whether simple, limited section analysis can replace detailed volumetric assessment of synovitis in patients with osteoarthritis (OA) of the knee using contrast-enhanced magnetic resonance imaging (MRI). MATERIALS AND METHODS: Thirty-five patients with clinical and radiographic OA of the knee were assessed for synovitis using gadolinium-enhanced MRI. The volume of enhancing synovium was quantitatively assessed in four anatomical sites (the medial and lateral parapatellar recesses, the intercondylar notch and the suprapatellar pouch) by summing the volumes of synovitis in consecutive sections. Four different combinations of section analysis were evaluated for their ability to predict total synovial volume. RESULTS: A total of 114 intra-articular sites were assessed. Simple linear regression demonstrated that the best predictor of total synovial volume was the analysis containing the inferior, mid and superior sections of each of the intra-articular sites, which predicted between 40-80% (r² = 0.396, p < 0.001 for the notch; r² = 0.818, p < 0.001 for the medial parapatellar recess) of the total volume assessment. CONCLUSIONS: The results suggest that a three-section analysis on axial post-gadolinium sequences provides a simple surrogate measure of synovial volume in OA knees.

  18. The relationship between limited MRI section analyses and volumetric assessment of synovitis in knee osteoarthritis

    Energy Technology Data Exchange (ETDEWEB)

    Rhodes, L.A. [Academic Unit of Medical Physics, University of Leeds and Leeds General Infirmary, Leeds (United Kingdom)]. E-mail: lar@medphysics.leeds.ac.uk; Keenan, A.-M. [Academic Unit of Musculoskeletal Disease, University of Leeds and Leeds General Infirmary, Leeds (United Kingdom); Grainger, A.J. [Department of Radiology, Leeds General Infirmary, Leeds (United Kingdom); Emery, P. [Academic Unit of Musculoskeletal Disease, University of Leeds and Leeds General Infirmary, Leeds (United Kingdom); McGonagle, D. [Academic Unit of Musculoskeletal Disease, University of Leeds and Leeds General Infirmary, Leeds (United Kingdom); Calderdale Royal Hospital, Salterhebble, Halifax (United Kingdom); Conaghan, P.G. [Academic Unit of Musculoskeletal Disease, University of Leeds and Leeds General Infirmary, Leeds (United Kingdom)

    2005-12-15

    AIM: To assess whether simple, limited section analysis can replace detailed volumetric assessment of synovitis in patients with osteoarthritis (OA) of the knee using contrast-enhanced magnetic resonance imaging (MRI). MATERIALS AND METHODS: Thirty-five patients with clinical and radiographic OA of the knee were assessed for synovitis using gadolinium-enhanced MRI. The volume of enhancing synovium was quantitatively assessed in four anatomical sites (the medial and lateral parapatellar recesses, the intercondylar notch and the suprapatellar pouch) by summing the volumes of synovitis in consecutive sections. Four different combinations of section analysis were evaluated for their ability to predict total synovial volume. RESULTS: A total of 114 intra-articular sites were assessed. Simple linear regression demonstrated that the best predictor of total synovial volume was the analysis containing the inferior, mid and superior sections of each of the intra-articular sites, which predicted between 40-80% (r² = 0.396, p < 0.001 for the notch; r² = 0.818, p < 0.001 for the medial parapatellar recess) of the total volume assessment. CONCLUSIONS: The results suggest that a three-section analysis on axial post-gadolinium sequences provides a simple surrogate measure of synovial volume in OA knees.

  19. Limitations in global information on species occurrences

    Directory of Open Access Journals (Sweden)

    Carsten Meyer

    2016-07-01

    Detailed information on species distributions is crucial for answering central questions in biogeography, ecology, evolutionary biology and conservation. Millions of species occurrence records have been mobilized via international data-sharing networks, but inherent biases, gaps and uncertainties hamper broader application. In my PhD thesis, I presented the first comprehensive analyses of global patterns and drivers of these limitations across different taxonomic groups and spatial scales. Integrating 300 million occurrence records for terrestrial vertebrates and plants with comprehensive taxonomic databases, expert range maps and regional checklists, I demonstrated extensive taxonomic, geographical and temporal biases, gaps and uncertainties. I identified key socio-economic drivers of data bias across different taxonomic groups and spatial scales. The results of my dissertation provide an empirical baseline for effectively accounting for data limitations in distribution models, as well as for prioritizing and monitoring efforts to collate additional occurrence information.

  20. Parametric analyses of single-zone thorium-fueled molten salt reactor fuel cycle options

    International Nuclear Information System (INIS)

    Powers, J.J.; Worrall, A.; Gehin, J.C.; Harrison, T.J.; Sunny, E.E.

    2013-01-01

    Analyses of fuel cycle options based on thorium-fueled Molten Salt Reactors (MSRs) have been performed in support of fuel cycle screening and evaluation activities for the United States Department of Energy. The MSR options considered are based on thermal-spectrum MSRs with three different separation levels: full recycling, limited recycling, and 'once-through' operation without active separations. A single-fluid, single-zone 2250 MWth (1000 MWe) MSR concept consisting of a fuel-bearing molten salt with graphite moderator and reflectors was used as the basis for this study. Radiation transport and isotopic depletion calculations were performed using SCALE 6.1 with ENDF/B-VII nuclear data. New methodology developed at Oak Ridge National Laboratory (ORNL) enables MSR analysis using SCALE, modeling material feed and removal by taking user-specified parameters and performing multiple SCALE/TRITON simulations to determine the resulting equilibrium operating conditions. Parametric analyses examined the sensitivity of the performance of a thorium MSR to variations in the separation efficiency for protactinium and fission products. Results indicate that self-sustained operation is possible with full or limited recycling, but once-through operation would require an external neutron source. (authors)

  1. Preliminary design analysis of the ALT-II limiter for TEXTOR

    International Nuclear Information System (INIS)

    Koski, J.A.; Boyd, R.D.; Kempka, S.M.; Romig, A.D. Jr.; Smith, M.F.; Watson, R.D.; Whitley, J.B.; Conn, R.W.; Grotz, S.P.

    1984-01-01

    Installation of a large toroidal belt pump limiter, Advanced Limiter Test II (ALT-II), on the TEXTOR tokamak at Juelich, FRG, is anticipated for early 1986. This paper discusses the preliminary mechanical design and materials considerations undertaken as part of the feasibility study phase for ALT-II. Since the actively cooled limiter blade is the component in direct contact with the plasma edge, and thus subject to the severe plasma environment, most preliminary design efforts have concentrated on analysis of the blade. The screening process which led to the recommended preliminary design, consisting of a dispersion-strengthened copper or OFHC copper cover plate over an austenitic stainless steel base plate, is discussed. A 1 to 3 mm thick low-atomic-number coating consisting of a graded plasma-sprayed silicon carbide-aluminium composite is recommended, subject to further experiment and evaluation. Thermal-hydraulic and stress analyses of the limiter blade are also discussed. (orig.)

  2. The effects of phosphorus limitation on carbon metabolism in diatoms.

    Science.gov (United States)

    Brembu, Tore; Mühlroth, Alice; Alipanah, Leila; Bones, Atle M

    2017-09-05

    Phosphorus is an essential element for life, serving as an integral component of nucleic acids, lipids and a diverse range of other metabolites. Concentrations of bioavailable phosphorus are low in many aquatic environments. Microalgae, including diatoms, apply physiological and molecular strategies such as phosphorus scavenging or recycling as well as adjusting cell growth in order to adapt to limiting phosphorus concentrations. Such strategies also involve adjustments of the carbon metabolism. Here, we review the effect of phosphorus limitation on carbon metabolism in diatoms. Two transcriptome studies are analysed in detail, supplemented by other transcriptome, proteome and metabolite data, to gain an overview of different pathways and their responses. Phosphorus, nitrogen and silicon limitation responses are compared, and similarities and differences discussed. We use the current knowledge to propose a suggestive model for the carbon flow in phosphorus-replete and phosphorus-limited diatom cells. This article is part of the themed issue 'The peculiar carbon metabolism in diatoms'. © 2017 The Authors.

  3. Fault current limitation with HTc superconductors; Limitation de courant a partir de materiaux supraconducteurs HTc

    Energy Technology Data Exchange (ETDEWEB)

    Buzon, D.

    2002-09-15

    This report deals with the possibility of using high critical temperature (HTc) superconductors for current limitation. The transition from a superconducting to a highly dissipative state can be used to limit inrush currents. This application of superconductivity is very attractive because it is an innovative device for electrical networks with no conventional equivalent at high voltage. Such a device would improve the density of connections and the continuity of electrical distribution. This study can be divided into two parts. The aim of the first one is to analyse the behaviour of different HTc superconductors for current limitation. We carried out experimental measurements to characterise these conductors under nominal AC rating (measurements of losses) and under fault conditions. In particular, a description of the transition in bulk textured YBCO samples near Tc was made in order to account for the inhomogeneous transition of the device and to estimate its losses. Finally, a 1 kV / 100 A demonstrator made of 43 meanders of textured YBCO was tested at 90.5 K. Thermal gradients seem to be responsible for the degradation of some of the samples. The other part of this study concerns the dynamics of the transition. Near Tc, our experiments showed that the transition is more homogeneous. Experimental measurements also showed the influence of thermal exchanges with the cryogenic surroundings on the transition. This point can be justified if the dissipated energy is locally concentrated. (author)

  4. Fault current limitation with HTc superconductors; Limitation de courant a partir de materiaux supraconducteurs HTc

    Energy Technology Data Exchange (ETDEWEB)

    Buzon, D

    2002-09-15

    This report deals with the possibility of using high critical temperature (HTc) superconductors for current limitation. The transition from a superconducting to a highly dissipative state can be used to limit inrush currents. This application of superconductivity is very attractive because it is an innovative device for electrical networks with no conventional equivalent at high voltage. Such a device would improve the density of connections and the continuity of electrical distribution. This study can be divided into two parts. The aim of the first one is to analyse the behaviour of different HTc superconductors for current limitation. We carried out experimental measurements to characterise these conductors under nominal AC rating (measurements of losses) and under fault conditions. In particular, a description of the transition in bulk textured YBCO samples near Tc was made in order to account for the inhomogeneous transition of the device and to estimate its losses. Finally, a 1 kV / 100 A demonstrator made of 43 meanders of textured YBCO was tested at 90.5 K. Thermal gradients seem to be responsible for the degradation of some of the samples. The other part of this study concerns the dynamics of the transition. Near Tc, our experiments showed that the transition is more homogeneous. Experimental measurements also showed the influence of thermal exchanges with the cryogenic surroundings on the transition. This point can be justified if the dissipated energy is locally concentrated. (author)

  5. The frequency of drugs among Danish drivers before and after the introduction of fixed concentration limits

    DEFF Research Database (Denmark)

    Steentoft, Anni; Simonsen, Kirsten Wiese; Linnet, Kristian

    2010-01-01

    Until July 2007, the driving under the influence of drugs (DUID) legislation in Denmark was based on impairment, evaluated on the basis of a clinical investigation and toxicological analyses; in 2007, fixed concentration limits were introduced into the Danish traffic legislation. The objective of this study was to investigate the prevalence of medication and illicit drugs among Danish drivers before and after 2007.

  6. Experimental and theoretical examples of the value and limitations of transition state theory

    International Nuclear Information System (INIS)

    Golden, D.M.

    1979-01-01

    Value and limitations of transition-state theory (TST) are reviewed. TST analyses of the temperature dependence of the 'direct' reactions CH3 + CH3CHO → CH4 + CH3CO (1) and O + CH4 → OH + CH3 (2) are presented in detail, and other examples of TST usefulness are recalled. Limitations are discussed for bimolecular processes in terms of 'complex' vs. 'direct' mechanisms. The reaction OH + CO → CO2 + H is discussed in this context. Limitations for unimolecular processes seem to arise only for simple bond fission processes, and recent advances are noted. 2 figures, 5 tables

  7. Experimental and theoretical examples of the value and limitations of transition state theory

    Science.gov (United States)

    Golden, D. M.

    1979-01-01

    Value and limitations of transition-state theory (TST) are reviewed. TST analyses of the temperature dependence of the 'direct' reactions CH3 + CH3CHO yields CH4 + CH3CO(1) and O + CH4 yields OH + CH3(2) are presented in detail, and other examples of TST usefulness are recalled. Limitations are discussed for bimolecular processes in terms of 'complex' vs. 'direct' mechanisms. The reaction OH + CO yields CO2 + H is discussed in this context. Limitations for unimolecular processes seem to arise only for simple bond fission processes, and recent advances are noted.

  8. CITY TRANSPORT IN BARRIER-FREE ARCHITECTURAL PLANNING SPACE FOR PEOPLE WITH LIMITED MOBILITY

    Directory of Open Access Journals (Sweden)

    Pryadko Igor’ Petrovich

    2014-09-01

    This paper reviews the current state of transport organization for people with limited mobility. The article evaluates the results of the actions taken by the executive authorities of Moscow and Moscow Region. Special attention is given to barrier-free space organization for disabled people and parents with prams. The lack of a strategy in this sphere leads to considerable difficulties for people with limited mobility; the problem should be solved in conjunction with surveys of the needs of other groups of residents. The article gives examples of comfortable urban spaces in Sochi, Moscow, Chita and Mytishchi, and analyses the ways urbanism influences people with limited mobility.

  9. Negative Effects of Reward on Intrinsic Motivation--A Limited Phenomenon: Comment on Deci, Koestner, and Ryan (2001).

    Science.gov (United States)

    Cameron, Judy

    2001-01-01

    Prior meta-analyses by J. Cameron and other researchers suggested that the negative effects of extrinsic reward on intrinsic motivation were limited and avoidable. E. Deci and others (2001) suggested that those analyses were flawed. This commentary makes the case that there is no inherent negative property of reward. (SLD)

  10. Signal Transduction Pathways of TNAP: Molecular Network Analyses.

    Science.gov (United States)

    Négyessy, László; Györffy, Balázs; Hanics, János; Bányai, Mihály; Fonta, Caroline; Bazsó, Fülöp

    2015-01-01

    Despite the growing body of evidence pointing to the involvement of tissue non-specific alkaline phosphatase (TNAP) in brain function and diseases like epilepsy and Alzheimer's disease, our understanding of the role of TNAP in the regulation of neurotransmission is severely limited. The aim of our study was to integrate the fragmented knowledge into a comprehensive view of the neuronal functions of TNAP using objective tools. As a model we used the signal transduction molecular network of a pyramidal neuron, complemented with TNAP-related data, and performed the analysis using graph-theoretic tools. The analyses show that TNAP sits at the crossroads of numerous pathways and is therefore one of the key players of the neuronal signal transduction network. Through many of its connections, most notably with molecules of the purinergic system, TNAP serves as a controller by funnelling signal flow towards a subset of molecules. TNAP also appears as a source of signals to be spread via interactions with molecules involved, among others, in neurodegeneration. Cluster analyses identified TNAP as part of the second messenger signalling cascade. However, TNAP also forms connections with other functional groups involved in neuronal signal transduction. The results indicate distinct ways of involvement of TNAP in multiple neuronal functions and diseases.
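
    A toy version of the graph-theoretic idea, ranking nodes of a small signalling graph by betweenness centrality to spot "funnelling" molecules, is sketched below. The edge list is invented and is not the authors' reconstructed pyramidal-neuron network.

```python
# Hedged sketch: betweenness centrality as a proxy for "key player" status.
import networkx as nx

g = nx.Graph()
g.add_edges_from([
    ("ATP", "TNAP"), ("TNAP", "adenosine"), ("TNAP", "PLP"),
    ("adenosine", "A1_receptor"), ("PLP", "GABA_synthesis"),
    ("ATP", "P2X_receptor"),
])  # invented edges for illustration only

for node, score in sorted(nx.betweenness_centrality(g).items(),
                          key=lambda kv: -kv[1]):
    print(f"{node:15s} {score:.2f}")
```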

  11. Fastlim. A fast LHC limit calculator

    International Nuclear Information System (INIS)

    Papucci, Michele; Zeune, Lisa

    2014-02-01

    Fastlim is a tool to calculate conservative limits on extensions of the Standard Model from direct LHC searches without performing any Monte Carlo event generation. The program reconstructs the visible cross sections from pre-calculated efficiency tables and cross section tables for simplified event topologies. As a proof of concept of the approach, we have implemented searches relevant for supersymmetric models with R-parity conservation. Fastlim takes the spectrum and coupling information of a given model point and provides, for each signal region of the implemented analyses, the visible cross sections normalised to the corresponding upper limit reported by the experiments, as well as the exclusion p-value. To demonstrate the utility of the program we study the sensitivity of the recent ATLAS missing energy searches to the parameter space of natural SUSY models. The program structure allows the straightforward inclusion of external efficiency tables and can be generalised to R-parity violating scenarios and non-SUSY models. This paper serves as a self-contained user guide, and indicates the conventions and approximations used.
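
    The reconstruction step described above can be caricatured in a few lines: sum production cross section times branching fraction times tabulated efficiency over topologies, then compare to the reported visible-cross-section limit. All numbers below are invented; the real tool handles many signal regions and topologies.

```python
# Hedged sketch of a Fastlim-style exclusion check; values are invented.

def visible_cross_section(contributions):
    """contributions: iterable of (sigma_pb, branching_ratio, efficiency)."""
    return sum(sigma * br * eff for sigma, br, eff in contributions)

topologies = [
    (0.85, 0.60, 0.12),   # topology A: sigma (pb), BR, tabulated efficiency
    (0.85, 0.40, 0.05),   # topology B
]
sigma_vis = visible_cross_section(topologies)
sigma_ul = 0.10           # reported 95% CL visible cross-section limit (pb), assumed

r = sigma_vis / sigma_ul
print(f"sigma_vis / limit = {r:.2f} -> {'excluded' if r > 1.0 else 'allowed'}")
```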

  12. Limits to behavioral evolution: the quantitative genetics of a complex trait under directional selection.

    Science.gov (United States)

    Careau, Vincent; Wolak, Matthew E; Carter, Patrick A; Garland, Theodore

    2013-11-01

    Replicated selection experiments provide a powerful way to study how "multiple adaptive solutions" may lead to differences in the quantitative-genetic architecture of selected traits and whether this may translate into differences in the timing at which evolutionary limits are reached. We analyze data from 31 generations (n=17,988) of selection on voluntary wheel running in house mice. The rate of initial response, timing of selection limit, and height of the plateau varied significantly between sexes and among the four selected lines. Analyses of litter size and realized selection differentials seem to rule out counterposing natural selection as a cause of the selection limits. Animal-model analyses showed that although the additive genetic variance was significantly lower in selected than control lines, both before and after the limits, the decrease was not sufficient to explain the limits. Moreover, directional selection promoted a negative covariance between additive and maternal genetic variance over the first 10 generations. These results stress the importance of replication in selection studies of higher-level traits and highlight the fact that long-term predictions of response to selection are not necessarily expected to be linear because of the variable effects of selection on additive genetic variance and maternal effects. © 2013 The Author(s). Evolution © 2013 The Society for the Study of Evolution.

  13. Community detection for fluorescent lifetime microscopy image segmentation

    Science.gov (United States)

    Hu, Dandan; Sarder, Pinaki; Ronhovde, Peter; Achilefu, Samuel; Nussinov, Zohar

    2014-03-01

    Multiresolution community detection (CD) has been suggested in recent work as an efficient method for performing unsupervised segmentation of fluorescence lifetime (FLT) images of live cells containing fluorescent molecular probes [1]. In the current paper, we further explore this method on FLT images of ex vivo tissue slices. The image processing problem is framed as identifying clusters with respective average FLTs against a background or "solvent" in FLT imaging microscopy (FLIM) images derived using NIR fluorescent dyes. We have identified significant multiresolution structures using replica correlations in these images, where such correlations are manifested by information-theoretic overlaps of the independent solutions ("replicas") attained using the multiresolution CD method from different starting points. In this paper, our method is found to be more efficient than a current state-of-the-art image segmentation method based on a mixture of Gaussian distributions: based on the Shannon index, it offers more than 1.25 times the diversity of the latter method in selecting clusters with distinct average FLTs in NIR FLIM images.
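
    The Shannon index used for the diversity comparison is a one-liner over cluster fractions; the sketch below uses invented fractions for the two methods.

```python
# Hedged sketch: Shannon diversity H = -sum(p * ln p) over cluster fractions.
import math

def shannon_index(fractions):
    return -sum(p * math.log(p) for p in fractions if p > 0)

cd_clusters = [0.30, 0.25, 0.25, 0.20]   # multiresolution CD, invented
gmm_clusters = [0.70, 0.20, 0.10]        # Gaussian-mixture baseline, invented

print(f"CD : H = {shannon_index(cd_clusters):.3f}")
print(f"GMM: H = {shannon_index(gmm_clusters):.3f}")
```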

  14. Project financing of biomass conversion plants. Analysis and limitation of bank-specific risks; Projektfinanzierung von Biogasanlagen. Analyse und Begrenzung der bankspezifischen Risiken

    Energy Technology Data Exchange (ETDEWEB)

    Wolf, Eileen

    2011-07-01

    In view of the climate change, limited availability of fossil fuels and increasing energy prices, the power generation from renewable energy sources increasingly is promoted by the state. In this case, bio energy plays a special role. The implementation of bio energy projects usually occurs in the context of project financing. Under this aspect, the author of the book under consideration reports on an analysis and limitation of bank-specific risks.

  15. Proceedings of the 2nd CSNI Specialist Meeting on Simulators and Plant Analysers

    International Nuclear Information System (INIS)

    Tiihonen, O.

    1999-01-01

    The safe utilisation of nuclear power plants requires the availability of different computerised tools for analysing the plant behaviour and training the plant personnel. These can be grouped into three categories: accident analysis codes, plant analysers and training simulators. The safety analysis of nuclear power plants has traditionally been limited to the worst accident cases expected for the specific plant design. Many accident analysis codes have been developed for different plant types. The scope of the analyses has continuously expanded. The plant analysers are now emerging tools intended for extensive analysis of the plant behaviour using a best estimate model for the whole plant including the reactor and full thermodynamic process, both combined with automation and electrical systems. The comprehensive model is also supported by good visualisation tools. Training simulators with real time plant model are tools for training the plant operators to run the plant. Modern training simulators have also features supporting visualisation of the important phenomena occurring in the plant during transients. The 2nd CSNI Specialist Meeting on Simulators and Plant Analysers in Espoo attracted some 90 participants from 17 countries. A total of 49 invited papers were presented in the meeting in addition to 7 simulator system demonstrations. Ample time was reserved for the presentations and informal discussions during the four meeting days. (orig.)

  16. Limit moments for non circular cross-section (elliptical) pipe bends

    International Nuclear Information System (INIS)

    Spence, J.

    1977-01-01

    A number of experimental studies have been reported or are underway which investigate limit moments applied to pipe bends. Some theoretical work is also available. However, most of the work has been confined to nominally circular cross-section bends, and little account has been taken of the practical problem of manufacturing tolerances. Many methods of manufacture result in bends which are not circular in cross-section but have an oval or elliptical shape. The present paper extends previous analyses on circular bends to cater for initially elliptical cross-sections. The loading is primarily in-plane bending, but out-of-plane bending is also considered, and several independent methods are presented. No previous information is known to the authors. Upper and lower bound limit moments are derived, first of all from existing linear elastic analyses; secondly, upper bound moments are derived via a plastic analogy from existing stationary creep results. It is also shown that the creep information on design factors for bends can be used to obtain a reasonable estimate of the complete moment/strain behaviour of a bend, or indeed of a system. (Auth.)

  17. Metagenomic and Metatranscriptomic Analyses Reveal the Structure and Dynamics of a Dechlorinating Community Containing Dehalococcoides mccartyi and Corrinoid-Providing Microorganisms under Cobalamin-Limited Conditions

    Energy Technology Data Exchange (ETDEWEB)

    Men, Yujie; Yu, Ke; Bælum, Jacob; Gao, Ying; Tremblay, Julien; Prestat, Emmanuel; Stenuit, Ben; Tringe, Susannah G.; Jansson, Janet; Zhang, Tong; Alvarez-Cohen, Lisa; Liu, Shuang-Jiang

    2017-02-10

    The aim of this study is to obtain a systems-level understanding of the interactions between Dehalococcoides and corrinoid-supplying microorganisms by analyzing community structures and functional compositions, activities, and dynamics in trichloroethene (TCE)-dechlorinating enrichments. Metagenomes and metatranscriptomes of the dechlorinating enrichments with and without exogenous cobalamin were compared. Seven putative draft genomes were binned from the metagenomes. At an early stage (2 days), more transcripts of genes in the Veillonellaceae bin-genome were detected in the metatranscriptome of the enrichment without exogenous cobalamin than in the one with the addition of cobalamin. Among these genes, sporulation-related genes exhibited the highest differential expression when cobalamin was not added, suggesting a possible release route of corrinoids from corrinoid producers. Other differentially expressed genes include those involved in energy conservation and nutrient transport (including cobalt transport). The most highly expressed corrinoid de novo biosynthesis pathway was also assigned to the Veillonellaceae bin-genome. Targeted quantitative PCR (qPCR) analyses confirmed higher transcript abundances of those corrinoid biosynthesis genes in the enrichment without exogenous cobalamin than in the enrichment with cobalamin. Furthermore, the corrinoid salvaging and modification pathway of Dehalococcoides was upregulated in response to the cobalamin stress. This study provides important insights into the microbial interactions and roles played by members of dechlorinating communities under cobalamin-limited conditions.

    IMPORTANCE: The key...

  18. Stable carbon isotope analyses of nanogram quantities of particulate organic carbon (pollen) with laser ablation nano combustion gas chromatography/isotope ratio mass spectrometry

    NARCIS (Netherlands)

    van Roij, Linda; Sluijs, Appy; Laks, Jelmer J.; Reichart, Gert-Jan

    2017-01-01

    Rationale: Analyses of stable carbon isotope ratios (δ13C values) of organic and inorganic matter remains have been instrumental for much of our understanding of present and past environmental and biological processes. Until recently, the analytical window of such analyses has been limited to

  19. Multiresolution pattern recognition of small volcanos in Magellan data

    Science.gov (United States)

    Smyth, P.; Anderson, C. H.; Aubele, J. C.; Crumpler, L. S.

    1992-01-01

    The Magellan data is a treasure-trove for scientific analysis of venusian geology, providing far more detail than was previously available from Pioneer Venus, Venera 15/16, or ground-based radar observations. However, at this point, planetary scientists are being overwhelmed by the sheer quantities of data collected--data analysis technology has not kept pace with our ability to collect and store it. In particular, 'small-shield' volcanos (less than 20 km in diameter) are the most abundant visible geologic feature on the planet. It is estimated, based on extrapolating from previous studies and knowledge of the underlying geologic processes, that there should be on the order of 10^5 to 10^6 of these volcanos visible in the Magellan data. Identifying and studying these volcanos is fundamental to a proper understanding of the geologic evolution of Venus. However, locating and parameterizing them in a manual manner is very time-consuming. Hence, we have undertaken the development of techniques to partially automate this task. The goal is not the unrealistic one of total automation, but rather the development of a useful tool to aid the project scientists. The primary constraints for this particular problem are as follows: (1) the method must be reasonably robust; and (2) the method must be reasonably fast. Unlike most geological features, the small volcanos of Venus can be ascribed to a basic process that produces features with a short list of readily defined characteristics differing significantly from other surface features on Venus. For pattern recognition purposes the relevant criteria include the following: (1) a circular planimetric outline; (2) known diameter frequency distribution from preliminary studies; (3) a limited number of basic morphological shapes; and (4) the common occurrence of a single, circular summit pit at the center of the edifice.

  20. PopGenome: An Efficient Swiss Army Knife for Population Genomic Analyses in R

    OpenAIRE

    Pfeifer, Bastian; Wittelsbürger, Ulrich; Ramos-Onsins, Sebastian E.; Lercher, Martin J.

    2014-01-01

    Although many computer programs can perform population genetics calculations, they are typically limited in the analyses and data input formats they offer; few applications can process the large data sets produced by whole-genome resequencing projects. Furthermore, there is no coherent framework for the easy integration of new statistics into existing pipelines, hindering the development and application of new population genetics and genomics approaches. Here, we present PopGenome, a populati...

  1. Scale Issues Related to the Accuracy Assessment of Land Use/Land Cover Maps Produced Using Multi-Resolution Data: Comments on “The Improvement of Land Cover Classification by Thermal Remote Sensing”. Remote Sens. 2015, 7(7), 8368–8390

    Directory of Open Access Journals (Sweden)

    Brian A. Johnson

    2015-10-01

    Much remote sensing (RS) research focuses on fusing, i.e., combining, multi-resolution/multi-sensor imagery for land use/land cover (LULC) classification. In relation to this topic, Sun and Schulz [1] recently found that a combination of visible-to-near infrared (VNIR; 30 m spatial resolution) and thermal infrared (TIR; 100–120 m spatial resolution) Landsat data led to more accurate LULC classification. They also found that using multi-temporal TIR data alone for classification resulted in comparable (and in some cases higher) classification accuracies to the use of multi-temporal VNIR data, which contrasts with the findings of other recent research [2]. This discrepancy, and the generally very high LULC accuracies achieved by Sun and Schulz (up to 99.2% overall accuracy for a combined VNIR/TIR classification result), can likely be explained by their use of an accuracy assessment procedure which does not take into account the multi-resolution nature of the data. Sun and Schulz used 10-fold cross-validation for accuracy assessment, which is not necessarily inappropriate for RS accuracy assessment in general. However, here it is shown that the typical pixel-based cross-validation approach results in non-independent training and validation data sets when the lower spatial resolution TIR images are used for classification, which causes classification accuracy to be overestimated.
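
    The non-independence argument can be demonstrated with synthetic data: if coarse (TIR-like) values are duplicated across the fine pixels they cover, plain k-fold cross-validation places copies of the same information in both training and validation folds, while grouping folds by the parent coarse cell removes the leak. The sketch below is an assumption-laden toy, not the data of either paper.

```python
# Hedged sketch: pixel-based vs. group-based cross-validation with
# duplicated coarse-resolution values. Synthetic data throughout.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GroupKFold, KFold, cross_val_score

rng = np.random.default_rng(0)
n_coarse = 200
groups = np.repeat(np.arange(n_coarse), 9)       # ~9 fine pixels per 100 m cell
tir = rng.normal(size=n_coarse)[groups]          # coarse value copied to pixels
labels = (rng.normal(size=n_coarse)[groups] + tir > 0).astype(int)
X = np.column_stack([tir, rng.normal(size=tir.size)])

clf = RandomForestClassifier(n_estimators=50, random_state=0)
naive = cross_val_score(clf, X, labels, cv=KFold(10, shuffle=True, random_state=0))
grouped = cross_val_score(clf, X, labels, cv=GroupKFold(10), groups=groups)
print(f"pixel-based 10-fold: {naive.mean():.3f}; grouped by coarse cell: {grouped.mean():.3f}")
```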

  2. Limiting the liability of the nuclear operator

    International Nuclear Information System (INIS)

    Reyners, P.

    1986-01-01

    This article discusses the questioning of a fundamental principle of the special nuclear third party liability regime by certain NEA countries: the limitation of the nuclear operator's liability. This regime, set up since the late fifties at European then at worldwide level, had until now been widely adopted in the national legislation of most of the countries with a nuclear power programme. The author analyses the different arguments in favour of restoring unlimited liability for the nuclear operator and attempts to define its implications for the future of the nuclear third party liability regime in NEA countries. (NEA) [fr]

  3. Two-phase-flow models and their limitations

    International Nuclear Information System (INIS)

    Ishii, M.; Kocamustafaogullari, G.

    1982-01-01

    An accurate prediction of transient two-phase flow is essential to safety analyses of nuclear reactors under accident conditions. The fluid flow and heat transfer encountered are often extremely complex due to the reactor geometry and the occurrence of transient two-phase flow. Recently, considerable progress in understanding and predicting these phenomena has been made by a combination of rigorous model development, advanced computational techniques, and a number of small- and large-scale supporting experiments. In view of their essential importance, the foundations of various two-phase-flow models and their limitations are discussed in this paper.

  4. High overlap of CNVs and selection signatures revealed by varLD analyses of taurine and zebu cattle

    Science.gov (United States)

    Selection Signatures (SS) assessed through analysis of genomic data are being widely studied to discover population specific regions selected via artificial or natural selection. Different methodologies have been proposed for these analyses, each having specific limitations as to the age of the sele...

  5. The limits of the Bohm criterion in collisional plasmas

    International Nuclear Information System (INIS)

    Valentini, H.-B.; Kaiser, D.

    2015-01-01

    The sheath formation within a low-pressure collisional plasma is analysed by means of a two-fluid model. The Bohm criterion takes into account the effects of the electric field and the inertia of the ions. Numerical results show that these effects contribute to the space charge formation only if the collisionality is lower than a relatively small threshold. It follows that a lower and an upper limit of the drift speed of the ions exist between which the effects treated by Bohm can form a sheath. This interval becomes narrower as the collisionality increases and vanishes at the mentioned threshold. Above the threshold, the sheath is created mainly by collisions and ionisation. Under these conditions, the sheath formation cannot be described by means of Bohm-like criteria. In a few references, a so-called upper limit of the Bohm criterion is stated for collisional plasmas in which only the momentum equation of the ions is taken into account. However, the present paper shows that this limit results in an unrealistically steep increase of the space charge density towards the wall, and therefore it yields no useful limit of the Bohm velocity.

  6. Conventional and advanced exergetic analyses applied to a combined cycle power plant

    International Nuclear Information System (INIS)

    Petrakopoulou, Fontina; Tsatsaronis, George; Morosuk, Tatiana; Carassai, Anna

    2012-01-01

    Conventional exergy-based methods pinpoint components and processes with high irreversibilities. However, they lack certain insight. For a given advanced technological state, there is a minimum level of exergy destruction related to technological and/or economic constraints that is unavoidable. Furthermore, in any thermodynamic system, exergy destruction stems from both component interactions (exogenous) and component inefficiencies (endogenous). To overcome the limitations of the conventional analyses and to increase our knowledge about a plant, advanced exergy-based analyses have been developed. In this paper, a combined cycle power plant is analyzed using both conventional and advanced exergetic analyses. Except for the expander of the gas turbine system and the high-pressure steam turbine, most of the exergy destruction in the plant components is unavoidable. This unavoidable part is constrained by internal technological limitations, i.e. each component’s endogenous exergy destruction. High levels of endogenous exergy destruction show that component interactions do not contribute significantly to the thermodynamic inefficiencies. In addition, these inefficiencies are unavoidable to a large extent. With the advanced analysis, new improvement strategies are revealed that could not otherwise be found. -- Highlights: ► This is the first application of a complete advanced exergetic analysis to a complex power plant. ► In the three-pressure-level combined cycle power plant studied here, the improvement potential of the majority of the components is low, since most of the exergy destruction is unavoidable. ► Component interactions are generally of lower importance for the considered plant. ► Splitting the exogenous exergy destruction reveals one-to-one component interactions and improvement strategies. ► The advanced exergetic analysis is a necessary supplement to the conventional analysis in improving a complex system.

  7. Field sampling, preparation procedure and plutonium analyses of large freshwater samples

    International Nuclear Information System (INIS)

    Straelberg, E.; Bjerk, T.O.; Oestmo, K.; Brittain, J.E.

    2002-01-01

    This work is part of an investigation of the mobility of plutonium in freshwater systems containing humic substances. A well-defined bog-stream system located in the catchment area of a subalpine lake, Oevre Heimdalsvatn, Norway, is being studied. During the summer of 1999, six water samples were collected from the tributary stream Lektorbekken and the lake itself. However, the analyses showed that the plutonium concentration was below the detection limit in all the samples. Therefore renewed sampling at the same sites was carried out in August 2000. The results so far are in agreement with previous analyses from the Heimdalen area. However, 100 times higher concentrations are found in the lowlands in the eastern part of Norway. The reason for this is not understood, but may be caused by differences in the concentrations of humic substances and/or the fact that the mountain areas are covered with snow for a longer period of time every year. (LN)

  8. A portable analyser for the measurement of ammonium in marine waters.

    Science.gov (United States)

    Amornthammarong, Natchanon; Zhang, Jia-Zhong; Ortner, Peter B; Stamates, Jack; Shoemaker, Michael; Kindel, Michael W

    2013-03-01

    A portable ammonium analyser was developed and used to measure in situ ammonium in the marine environment. The analyser incorporates an improved LED photodiode-based fluorescence detector (LPFD). This system is more sensitive and considerably smaller than previous systems and incorporates a pre-filtering subsystem enabling measurements in turbid, sediment-laden waters. Over the typical range for ammonium in marine waters (0–10 μM), the response is linear (r² = 0.9930) with a limit of detection (S/N ratio > 3) of 10 nM. The working range for marine waters is 0.05–10 μM. Repeatability is 0.3% (n = 10) at an ammonium level of 2 μM. Results from automated operation in 15 min cycles over 16 days had good overall precision (RSD = 3%, n = 660). The system was field tested at three shallow South Florida sites. Diurnal cycles and possibly a tidal influence were expressed in the observed concentration variability.

  9. Non-destructive XRF analyses of fine-grained basalts from Eiao, Marquesas Islands

    International Nuclear Information System (INIS)

    Charleux, M.; McAlister, A.; Mills, P.R.; Lundblad, S.P.

    2014-01-01

    The Marquesan island of Eiao was an important source of fine-grained basalt in Central East Polynesia, with examples being identified in archaeological assemblages throughout the region. However, compared to many other large-scale Polynesian basalt sources, little has been published about the physical extent and geochemical variability of tool-quality basalt on Eiao; prior to our study, only a single site with evidence of stone extraction had been identified and geochemical information was limited to less than two dozen samples. In this paper we report geochemical data for 225 additional basalt specimens collected on Eiao. Our analyses were conducted non-destructively using three EDXRF instruments: one lab-based unit and two portable analysers. The majority of our sample, identified here as Group 1, possesses geochemical and physical characteristics similar to those reported in previous studies. Group 1 samples were collected from various locations on Eiao suggesting that, rather than being limited to a single quarry site, fine-grained basalt was extracted from multiple sources throughout the island. In addition, we identified a second group (Group 2), which possesses a distinct geochemistry, a coarser grain and often an unusual reddish colour. Evidence from Eiao indicates that Group 2 stone was regularly utilised and our analysis of an adze collected on Hiva Oa Island suggests that this material was distributed at least as far as the southern Marquesas. (author)

  10. What Limits the Encoding Effect of Note-Taking? A Meta-Analytic Examination

    Science.gov (United States)

    Kobayashi, K.

    2005-01-01

    Previous meta-analyses indicate that the overall encoding effect of note-taking is positive but modest. This meta-analysis of 57 note-taking versus no note-taking comparison studies explored what limits the encoding effect by examining the moderating influence of seven variables: intervention, schooling level, presentation mode and length, test…

  11. Diffusive limits for linear transport equations

    International Nuclear Information System (INIS)

    Pomraning, G.C.

    1992-01-01

    The authors show that the Hilbert and Chapman-Enskog asymptotic treatments that reduce the nonlinear Boltzmann equation to the Euler and Navier-Stokes fluid equations have analogs in linear transport theory. In this linear setting, these fluid limits are described by diffusion equations, involving familiar and less familiar diffusion coefficients. Because of the linearity extant, one can carry out explicitly the initial and boundary layer analyses required to obtain asymptotically consistent initial and boundary conditions for the diffusion equations. In particular, the effects of boundary curvature and boundary condition variation along the surface can be included in the boundary layer analysis. A brief review of heuristic (nonasymptotic) diffusion description derivations is also included in our discussion.

  12. Construction and decomposition of biorthogonal vector-valued wavelets with compact support

    International Nuclear Information System (INIS)

    Chen Qingjiang; Cao Huaixin; Shi Zhi

    2009-01-01

    In this article, we introduce vector-valued multiresolution analysis and the biorthogonal vector-valued wavelets with four-scale. The existence of a class of biorthogonal vector-valued wavelets with compact support associated with a pair of biorthogonal vector-valued scaling functions with compact support is discussed. A method for designing a class of biorthogonal compactly supported vector-valued wavelets with four-scale is proposed by virtue of multiresolution analysis and matrix theory. The biorthogonality properties concerning vector-valued wavelet packets are characterized with the aid of time-frequency analysis method and operator theory. Three biorthogonality formulas regarding them are presented.
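
    For orientation, the four-scale setting referred to here can be written as matrix refinement equations with dilation factor 4; the display below is the generic form of such a system, in assumed notation, not the article's specific construction.

```latex
% Generic four-scale vector-valued refinement pair: \Phi is an r-vector of
% scaling functions, \Psi^{(l)} the associated wavelets, and P_k, Q_k^{(l)}
% are r x r matrix masks (assumed notation).
\[
\Phi(x) = \sum_{k \in \mathbb{Z}} P_k\,\Phi(4x - k), \qquad
\Psi^{(l)}(x) = \sum_{k \in \mathbb{Z}} Q_k^{(l)}\,\Phi(4x - k), \quad l = 1, 2, 3,
\]
% with biorthogonality to a dual pair (\tilde\Phi, \tilde\Psi^{(l)}):
\[
\bigl\langle \Phi(\cdot - j), \tilde\Phi(\cdot - k) \bigr\rangle = \delta_{j,k} I_r .
\]
```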

  13. Probabilistic fuel rod analyses using the TRANSURANUS code

    Energy Technology Data Exchange (ETDEWEB)

    Lassmann, K.; O'Carroll, C.; van de Laar, J. [CEC Joint Research Centre, Karlsruhe (Germany)]

    1997-08-01

    After more than 25 years of fuel rod modelling research, the basic concepts are well established and the limitations of the specific approaches are known. However, the widely used mechanistic approach leads in many cases to discrepancies between theoretical predictions and experimental evidence indicating that models are not exact and that some of the physical processes encountered are of stochastic nature. To better understand uncertainties and their consequences, the mechanistic approach must therefore be augmented by statistical analyses. In the present paper the basic probabilistic methods are briefly discussed. Two such probabilistic approaches are included in the fuel rod performance code TRANSURANUS: the Monte Carlo method and the Numerical Noise Analysis. These two techniques are compared and their capabilities are demonstrated. (author). 12 refs, 4 figs, 2 tabs.
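
    The Monte Carlo variant mentioned above reduces to a simple loop: sample the uncertain inputs, evaluate the deterministic model for each sample, and read statistics off the output distribution. The sketch below uses an invented surrogate model, not TRANSURANUS.

```python
# Hedged sketch of Monte Carlo uncertainty propagation through a toy
# stand-in for a fuel rod model; distributions and constants are invented.
import numpy as np

rng = np.random.default_rng(42)

def toy_fuel_model(gap_conductance, linear_power):
    """Invented surrogate returning a centreline temperature (K)."""
    return 600.0 + linear_power / gap_conductance * 100.0

n = 10_000
gap = rng.normal(loc=5.0, scale=0.5, size=n)      # kW/(m^2 K), assumed spread
power = rng.normal(loc=25.0, scale=2.0, size=n)   # kW/m, assumed spread

temps = toy_fuel_model(gap, power)
print(f"mean = {temps.mean():.0f} K, 95th percentile = {np.percentile(temps, 95):.0f} K")
```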

  14. Corpus Callosum Analysis using MDL-based Sequential Models of Shape and Appearance

    DEFF Research Database (Denmark)

    Stegmann, Mikkel Bille; Davies, Rhodri H.; Ryberg, Charlotte

    2004-01-01

    This paper describes a method for automatically analysing and segmenting the corpus callosum from magnetic resonance images of the brain, based on the widely used Active Appearance Models (AAMs) by Cootes et al. Extensions of the original method, which are designed to improve this specific case...... are proposed, but all remain applicable to other domain problems. The well-known multi-resolution AAM optimisation is extended to include sequential relaxations on texture resolution, model coverage and model parameter constraints. Fully unsupervised analysis is obtained by exploiting model parameter...... Results show that the method produces accurate, robust and rapid segmentations in a cross-sectional study of 17 subjects, establishing its feasibility as a fully automated clinical tool for analysis and segmentation.

  15. Investigating ASME allowable loads with finite element analyses

    International Nuclear Information System (INIS)

    Mattar Neto, Miguel; Bezerra, Luciano M.; Miranda, Carlos A. de J.; Cruz, Julio R.B.

    1995-01-01

    The evaluation of nuclear components using finite element analysis (FEA) does not generally fall into the shell-type verification adopted by the ASME Code. Consequently, demonstrating that the modes of failure are avoided is sometimes not straightforward. Allowable limits, developed by limit load theory, require the computation of shell membrane and bending stresses. How to calculate these stresses from FEA is not necessarily self-evident. One approach to be considered is to develop recommendations on a case-by-case basis for the most common pressure vessel geometries and loads, based on comparisons between the results of elastic and plastic FEA. In this paper, FE analyses of common 2D and complex 3D geometries are examined and discussed. It will be clear that in the cases studied, stress separation and categorization are not self-evident or simple tasks to undertake. Certain unclear recommendations of the ASME Code can lead the stress analyst to non-conservative designs, as will be demonstrated in this paper. At the end of this paper, taking into account comparisons between elastic and elastic-plastic FE results from ANSYS, some observations, suggestions and conclusions about the degree of conservatism of the ASME recommendations are presented. (author)

  16. Neural Anatomy of Primary Visual Cortex Limits Visual Working Memory.

    Science.gov (United States)

    Bergmann, Johanna; Genç, Erhan; Kohler, Axel; Singer, Wolf; Pearson, Joel

    2016-01-01

    Despite the immense processing power of the human brain, working memory storage is severely limited, and the neuroanatomical basis of these limitations has remained elusive. Here, we show that the stable storage limits of visual working memory for over 9 s are bound by the precise gray matter volume of primary visual cortex (V1), defined by fMRI retinotopic mapping. Individuals with a bigger V1 tended to have greater visual working memory storage. This relationship was present independently for both surface size and thickness of V1 but absent in V2, V3 and for non-visual working memory measures. Additional whole-brain analyses confirmed the specificity of the relationship to V1. Our findings indicate that the size of primary visual cortex plays a critical role in limiting what we can hold in mind, acting like a gatekeeper in constraining the richness of working mental function. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  17. Multicollinearity in spatial genetics: separating the wheat from the chaff using commonality analyses.

    Science.gov (United States)

    Prunier, J G; Colyn, M; Legendre, X; Nimon, K F; Flamand, M C

    2015-01-01

    Direct gradient analyses in spatial genetics provide unique opportunities to describe the inherent complexity of genetic variation in wildlife species and are the object of many methodological developments. However, multicollinearity among explanatory variables is a systemic issue in multivariate regression analyses and is likely to cause serious difficulties in properly interpreting results of direct gradient analyses, with the risk of erroneous conclusions, misdirected research and inefficient or counterproductive conservation measures. Using simulated data sets along with linear and logistic regressions on distance matrices, we illustrate how commonality analysis (CA), a detailed variance-partitioning procedure that was recently introduced in the field of ecology, can be used to deal with nonindependence among spatial predictors. By decomposing model fit indices into unique and common (or shared) variance components, CA allows identifying the location and magnitude of multicollinearity, revealing spurious correlations and thus thoroughly improving the interpretation of multivariate regressions. Despite a few inherent limitations, especially in the case of resistance model optimization, this review highlights the great potential of CA to account for complex multicollinearity patterns in spatial genetics and identifies future applications and lines of research. We strongly urge spatial geneticists to systematically investigate commonalities when performing direct gradient analyses. © 2014 John Wiley & Sons Ltd.
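
    For the two-predictor case the decomposition reduces to simple algebra on R² values, which makes the idea easy to demonstrate. A minimal Python sketch on synthetic, deliberately collinear data:

        import numpy as np

        rng = np.random.default_rng(0)

        # Two deliberately collinear predictors and a response (synthetic data).
        n = 500
        x1 = rng.normal(size=n)
        x2 = 0.7 * x1 + 0.3 * rng.normal(size=n)      # x2 shares variance with x1
        y = 1.0 * x1 + 0.5 * x2 + rng.normal(size=n)

        def r_squared(y, X):
            # R^2 of an OLS fit of y on the columns of X (intercept included).
            X = np.column_stack([np.ones(len(y))] + list(X))
            beta, *_ = np.linalg.lstsq(X, y, rcond=None)
            resid = y - X @ beta
            return 1.0 - resid.var() / y.var()

        r2_full = r_squared(y, [x1, x2])
        r2_x1 = r_squared(y, [x1])
        r2_x2 = r_squared(y, [x2])

        # Commonality decomposition for two predictors:
        unique_x1 = r2_full - r2_x2       # variance explained only by x1
        unique_x2 = r2_full - r2_x1       # variance explained only by x2
        common = r2_x1 + r2_x2 - r2_full  # variance shared by x1 and x2

        print(f"R2={r2_full:.3f} unique(x1)={unique_x1:.3f} "
              f"unique(x2)={unique_x2:.3f} common={common:.3f}")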

  18. Implementation of analyses based on social media data for marketing purposes in academic and scientific organizations in practice – opportunities and limitations

    Directory of Open Access Journals (Sweden)

    Magdalena Grabarczyk-Tokaj

    2013-12-01

    The article focuses on the practical use of analyses based on data collected in social media for institutions' communication and marketing purposes. The subject is discussed from the perspective of Digital Darwinism: a situation in which the development of technologies and new means of communication is significantly faster than the growth of knowledge and digital skills among organizations eager to implement those solutions. To diminish the negative consequences of Digital Darwinism, institutions can broaden their knowledge with analyses of data from cyberspace to optimize operations, and make use of ongoing dialogue and cooperation with prosumers to face dynamic changes in trends, technologies and society. Information acquired from social media user-generated content can be employed as guidelines in planning, running and evaluating communication and marketing activities. The article presents examples of tools and solutions that can be implemented in practice as a support for actions taken by institutions.

  19. A review of multivariate analyses in imaging genetics

    Directory of Open Access Journals (Sweden)

    Jingyu Liu

    2014-03-01

    Recent advances in neuroimaging technology and molecular genetics provide a unique opportunity to investigate genetic influences on the variation of brain attributes. Since the year 2000, when the initial publication on brain imaging and genetics was released, imaging genetics has been a rapidly growing research approach, with increasing publications every year. Several reviews have been offered to the research community, focusing on various study designs. In addition to study design, analytic tools and their proper implementation are also critical to the success of a study. In this review, we survey recent publications using data from neuroimaging and genetics, focusing on methods capturing multivariate effects accommodating the large number of variables from both imaging data and genetic data. We group the analyses of genetic or genomic data into either a prior-driven or a data-driven approach, including gene-set enrichment analysis, multifactor dimensionality reduction, principal component analysis, independent component analysis (ICA), and clustering. For the analyses of imaging data, ICA and extensions of ICA are the most widely used multivariate methods. Given detailed reviews of multivariate analyses of imaging data available elsewhere, we provide a brief summary here that includes a recently proposed method known as independent vector analysis. Finally, we review methods focused on bridging the imaging and genetic data by establishing multivariate and multiple genotype-phenotype associations, including sparse partial least squares, sparse canonical correlation analysis, sparse reduced rank regression and parallel ICA. These methods are designed to extract latent variables from both genetic and imaging data, which become new genotypes and phenotypes, and the links between the new genotype-phenotype pairs are maximized using different cost functions. The relationship between these methods along with their assumptions, advantages, and

  20. Measurement of the analysing power of elastic proton-proton scattering at 582 MeV

    International Nuclear Information System (INIS)

    Berdoz, A.; Favier, B.; Foroughi, F.; Weddigen, C.

    1984-01-01

    The authors have measured the analysing power of elastic proton-proton scattering at 582 MeV for 14 angles from 20° to 80° CM. The angular range was limited to >20° by the energy loss of the recoil protons. The experiment was performed at the PM1 beam line at SIN. A beam intensity of about 10⁸ particles s⁻¹ was used. (Auth.)

  1. EU-FP7-iMars: Analysis of Mars Multi-Resolution Images using Auto-Coregistration, Data Mining and Crowd Source Techniques

    Science.gov (United States)

    Ivanov, Anton; Oberst, Jürgen; Yershov, Vladimir; Muller, Jan-Peter; Kim, Jung-Rack; Gwinner, Klaus; Van Gasselt, Stephan; Morley, Jeremy; Houghton, Robert; Bamford, Steven; Sidiropoulos, Panagiotis

    Understanding the role of different planetary surface formation processes within our Solar System is one of the fundamental goals of planetary science research. There has been a revolution in planetary surface observations over the last 15 years, especially in 3D imaging of surface shape. This has led to the ability to overlay different epochs back to the mid-1970s, to examine time-varying changes (such as the recent discovery of boulder movement), to track inter-year seasonal changes and to look for occurrences of fresh craters. Consequently we are seeing a dramatic improvement in our understanding of surface formation processes. Since January 2004, the ESA Mars Express has been acquiring global data, especially HRSC stereo (12.5-25 m nadir images) with 87% coverage, of which more than 65% is useful for stereo mapping. NASA began imaging the surface of Mars, initially from flybys in the 1960s, and then from the first orbiter with image resolution less than 100 m in the late 1970s (Viking Orbiter). The most recent orbiter, NASA MRO, has acquired surface imagery of around 1% of the Martian surface from HiRISE (at ≈20 cm) and ≈5% from CTX (≈6 m) in stereo. Within the iMars project (http://i-Mars.eu), a fully automated large-scale processing (“Big Data”) solution is being developed to generate the best possible multi-resolution DTM of Mars. In addition, HRSC OrthoRectified Images (ORI) will be used as a georeference basis so that all higher resolution ORIs will be co-registered to the HRSC DTM (50-100 m grid) products generated at DLR, and DTMs from CTX (6-20 m grid) and HiRISE (1-3 m grid) will be produced on a large-scale Linux cluster based at MSSL. The HRSC products will be employed to provide a geographic reference for all current, future and historical NASA products using automated co-registration based on feature points, and initial results will be shown here. In 2015, many of the NASA and ESA orbital images will be co-registered and the updated georeferencing

  2. EU-FP7-iMARS: analysis of Mars multi-resolution images using auto-coregistration, data mining and crowd source techniques

    Science.gov (United States)

    Ivanov, Anton; Muller, Jan-Peter; Tao, Yu; Kim, Jung-Rack; Gwinner, Klaus; Van Gasselt, Stephan; Morley, Jeremy; Houghton, Robert; Bamford, Steven; Sidiropoulos, Panagiotis; Fanara, Lida; Waenlish, Marita; Walter, Sebastian; Steinkert, Ralf; Schreiner, Bjorn; Cantini, Federico; Wardlaw, Jessica; Sprinks, James; Giordano, Michele; Marsh, Stuart

    2016-07-01

    Understanding planetary atmosphere-surface and extra-terrestrial-surface formation processes within our Solar System is one of the fundamental goals of planetary science research. There has been a revolution in planetary surface observations over the last 15 years, especially in 3D imaging of surface shape. This has led to the ability to overlay different epochs back in time to the mid-1970s, to examine time-varying changes, such as the recent discovery of mass movement, to track inter-year seasonal changes and to look for occurrences of fresh craters. Within the EU FP-7 iMars project, UCL have developed a fully automated multi-resolution DTM processing chain, called the Co-registration ASP-Gotcha Optimised (CASP-GO), based on the open source NASA Ames Stereo Pipeline (ASP), which is being applied to the production of planetwide DTMs and ORIs (OrthoRectified Images) from CTX and HiRISE. Alongside the production of individual strip CTX & HiRISE DTMs & ORIs, DLR have processed HRSC mosaics of ORIs and DTMs for complete areas in a consistent manner using photogrammetric bundle block adjustment techniques. A novel automated co-registration and orthorectification chain has been developed and is being applied to level-1 EDR images taken by the 4 NASA orbital cameras since 1976, using the HRSC map products (both mosaics and orbital strips) as a map-base. The project has also included Mars Radar profiles from the Mars Express and Mars Reconnaissance Orbiter missions. A webGIS has been developed for displaying this time sequence of imagery, and a demonstration will be shown applied to one of the map-sheets. Automated quality control techniques are applied to screen for suitable images, and these are extended to detect temporal changes in features on the surface such as mass movements, streaks, spiders, impact craters, CO2 geysers and Swiss Cheese terrain. These data mining techniques are then being employed within a citizen science project within the Zooniverse family
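
    As an illustration of feature-point co-registration in general (a sketch only, not the CASP-GO/iMars pipeline), the following Python/OpenCV snippet matches ORB keypoints between two overlapping images and estimates a transform registering one to the other; the file names are placeholders:

        import cv2
        import numpy as np

        # Hypothetical file names: a reference orthoimage and an image to co-register.
        ref = cv2.imread("hrsc_reference.png", cv2.IMREAD_GRAYSCALE)
        img = cv2.imread("ctx_strip.png", cv2.IMREAD_GRAYSCALE)

        # Detect and describe feature points in both images.
        orb = cv2.ORB_create(nfeatures=5000)
        kp_ref, des_ref = orb.detectAndCompute(ref, None)
        kp_img, des_img = orb.detectAndCompute(img, None)

        # Match descriptors and keep the best matches.
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        matches = sorted(matcher.match(des_img, des_ref),
                         key=lambda m: m.distance)[:500]

        src = np.float32([kp_img[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
        dst = np.float32([kp_ref[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

        # Robustly estimate the mapping (RANSAC rejects bad matches), then resample.
        H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
        registered = cv2.warpPerspective(img, H, (ref.shape[1], ref.shape[0]))
        cv2.imwrite("ctx_coregistered.png", registered)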

  3. Pollen limitation and its influence on natural selection through seed set.

    Science.gov (United States)

    Bartkowska, M P; Johnston, M O

    2015-11-01

    Stronger pollen limitation should increase competition among plants, leading to stronger selection on traits important for pollen receipt. The few explicit tests of this hypothesis, however, have provided conflicting support. Using the arithmetic relationship between these two quantities, we show that increased pollen limitation will automatically result in stronger selection (all else equal) although other factors can alter selection independently of pollen limitation. We then tested the hypothesis using two approaches. First, we analysed the published studies containing information on both pollen limitation and selection. Second, we explored how natural selection measured in one Ontario population of Lobelia cardinalis over 3 years and two Michigan populations in 1 year relates to pollen limitation. For the Ontario population, we also explored whether pollinator-mediated selection is related to pollen limitation. Consistent with the hypothesis, we found an overall positive relationship between selection strength and pollen limitation both among species and within L. cardinalis. Unexpectedly, this relationship was found even for vegetative traits among species, and was not found in L. cardinalis for pollinator-mediated selection on nearly all trait types. © 2015 European Society For Evolutionary Biology. Journal of Evolutionary Biology © 2015 European Society For Evolutionary Biology.

  4. Predictors of activity limitation in people with gout: a prospective study.

    Science.gov (United States)

    Stewart, Sarah; Rome, Keith; Eason, Alastair; House, Meaghan E; Horne, Anne; Doyle, Anthony J; Knight, Julie; Taylor, William J; Dalbeth, Nicola

    2018-04-21

    The objective of the study was to determine clinical factors associated with activity limitation and predictors of a change in activity limitation after 1 year in people with gout. Two hundred ninety-five participants with gout (disease duration ...) ... activity limitation was assessed using the Health Assessment Questionnaire-II (HAQ-II). After 1 year, participants were invited to complete a further HAQ-II; follow-up questionnaires were available for 182 participants. Fully saturated and stepwise regression analyses were used to determine associations between baseline characteristics and HAQ-II at baseline and 1 year, and to determine predictors of worsening HAQ-II in those with normal baseline scores. Median (range) baseline HAQ-II was 0.20 (0-2.50) and 0.20 (0-2.80) after 1 year of follow-up. Pain score was the strongest independent predictor of baseline HAQ-II, followed by radiographic narrowing score, type 2 diabetes, swollen joint count, BMI, age and urate (model R² = 0.51, P ...) ... limitation, and levels of activity limitation are, on average, stable over a 1-year period. Baseline pain scores are strongly associated with activity limitation and predict the development of activity limitation in those with normal HAQ-II scores at baseline.

  5. Limits on fundamental limits to computation.

    Science.gov (United States)

    Markov, Igor L

    2014-08-14

    An indispensable part of our personal and working lives, computing has also become essential to industries and governments. Steady improvements in computer hardware have been supported by periodic doubling of transistor densities in integrated circuits over the past fifty years. Such Moore scaling now requires ever-increasing efforts, stimulating research in alternative hardware and stirring controversy. To help evaluate emerging technologies and increase our understanding of integrated-circuit scaling, here I review fundamental limits to computation in the areas of manufacturing, energy, physical space, design and verification effort, and algorithms. To outline what is achievable in principle and in practice, I recapitulate how some limits were circumvented, and compare loose and tight limits. Engineering difficulties encountered by emerging technologies may indicate yet unknown limits.

  6. Collaborative regression-based anatomical landmark detection

    International Nuclear Information System (INIS)

    Gao, Yaozong; Shen, Dinggang

    2015-01-01

    Anatomical landmark detection plays an important role in medical image analysis, e.g. for registration, segmentation and quantitative analysis. Among the various existing methods for landmark detection, regression-based methods have recently attracted much attention due to their robustness and efficiency. In these methods, landmarks are localised through voting from all image voxels, which is completely different from the classification-based methods that use voxel-wise classification to detect landmarks. Despite their robustness, the accuracy of regression-based landmark detection methods is often limited due to (1) the inclusion of uninformative image voxels in the voting procedure, and (2) the lack of effective ways to incorporate inter-landmark spatial dependency into the detection step. In this paper, we propose a collaborative landmark detection framework to address these limitations. The concept of collaboration is reflected in two aspects. (1) Multi-resolution collaboration. A multi-resolution strategy is proposed to hierarchically localise landmarks by gradually excluding uninformative votes from faraway voxels. Moreover, for informative voxels near the landmark, a spherical sampling strategy is also designed at the training stage to improve their prediction accuracy. (2) Inter-landmark collaboration. A confidence-based landmark detection strategy is proposed to improve the detection accuracy of ‘difficult-to-detect’ landmarks by using spatial guidance from ‘easy-to-detect’ landmarks. To evaluate our method, we conducted extensive experiments on three datasets for detecting prostate landmarks and head and neck landmarks in computed tomography images, and also dental landmarks in cone beam computed tomography images. The results show the effectiveness of our collaborative landmark detection framework in improving landmark detection accuracy, compared to other state-of-the-art methods. (paper)
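
    The voting principle behind regression-based detection can be sketched compactly: each voxel predicts its offset to the landmark, the predictions are accumulated in a vote map, and the strongest cell wins. The toy 2D Python example below illustrates only this voting step on synthetic data (none of the paper's multi-resolution or inter-landmark machinery):

        import numpy as np
        from sklearn.ensemble import RandomForestRegressor

        rng = np.random.default_rng(1)
        size, landmark = 64, np.array([40.0, 25.0])

        # Synthetic image: intensity is a noisy function of distance to the landmark.
        yy, xx = np.mgrid[0:size, 0:size]
        dist = np.hypot(yy - landmark[0], xx - landmark[1])
        image = np.exp(-dist / 15.0) + 0.05 * rng.normal(size=(size, size))

        # Training data: local patch features -> offset from voxel to landmark.
        def patch_features(img, y, x, r=2):
            return img[y - r:y + r + 1, x - r:x + r + 1].ravel()

        coords = [(y, x) for y in range(2, size - 2) for x in range(2, size - 2)]
        X = np.array([patch_features(image, y, x) for y, x in coords])
        offsets = np.array([landmark - (y, x) for y, x in coords])
        model = RandomForestRegressor(n_estimators=20, random_state=0).fit(X, offsets)

        # Voting: every voxel casts a vote at its predicted landmark position
        # (for brevity the test image is the training image).
        votes = np.zeros((size, size))
        for (y, x), (dy, dx) in zip(coords, model.predict(X)):
            ty, tx = int(round(y + dy)), int(round(x + dx))
            if 0 <= ty < size and 0 <= tx < size:
                votes[ty, tx] += 1

        print("detected landmark:", np.unravel_index(votes.argmax(), votes.shape))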

  7. Assessment of effect of reinforcement on plastic limit load of branch junction

    International Nuclear Information System (INIS)

    Myung, Man Sik; Kim, Yun Jae; Yoon, Ki Bong

    2009-01-01

    The present work provides effects of reinforcement shape and area on plastic limit loads of branch junctions, based on detailed three-dimensional, small strain FE limit analyses assuming elastic-perfectly plastic material behavior. Three types of loading are considered: internal pressure, in-plane bending on the branch pipe and in-plane bending on the run pipe. It is found that reinforcement is most effective when (in-plane/out-of-plane) bending is applied to the branch pipe. When bending is applied to the run pipe, reinforcement is less effective, compared to the case when bending is applied to the branch pipe. Reinforcement is least effective for internal pressure.

  8. Coupling limit equilibrium analyses and real-time monitoring to refine a landslide surveillance system in Calabria (southern Italy)

    Science.gov (United States)

    Iovine, G. G. R.; Lollino, P.; Gariano, S. L.; Terranova, O. G.

    2010-11-01

    On 28 January 2009, a large debris slide was triggered by prolonged rainfalls at the southern suburbs of San Benedetto Ullano (Northern Calabria). The slope movement affected fractured and weathered migmatitic gneiss and biotitic schist, and included a pre-existing landslide. A detailed geomorphologic field survey, carried out during the whole phase of mobilization, made it possible to recognize the evolution of the phenomenon. A set of datum points was located along the borders of the landslide and frequent manual measurements of surface displacements were performed. Since 11 February, a basic real-time monitoring system of meteoric parameters and of surface displacements, measured by means of high-precision extensometers, was also implemented. Based on the data gained through the monitoring system, and on field surveying, a basic support system for emergency management could be defined from the first phases of activation of the phenomenon. The evolution of the landslide was monitored during the following months: as a consequence, evidence of a retrogressive distribution could be recognized, with initial activation in the middle sector of the slope, where new temporary springs were observed. During early May, the activity reduced to displacements of a few millimetres per month and the geo-hydrological crisis seemed to be concluded. Afterwards, the geological scheme of the slope was refined based on the data collected through a set of explorative boreholes, equipped with inclinometers and piezometers: according to the stratigraphic and inclinometric data, the depth of the mobilized body was found to vary between 15 and 35 m along a longitudinal section. A parametric limit equilibrium analysis was carried out to explore the stability conditions of the slope affected by the landslide as well as to quantify the role of the water table in destabilizing the slope. The interpretation of the process based on field observations was confirmed by the limit equilibrium analysis

  9. Coupling limit equilibrium analyses and real-time monitoring to refine a landslide surveillance system in Calabria (southern Italy

    Directory of Open Access Journals (Sweden)

    G. G. R. Iovine

    2010-11-01

    On 28 January 2009, a large debris slide was triggered by prolonged rainfalls at the southern suburbs of San Benedetto Ullano (Northern Calabria). The slope movement affected fractured and weathered migmatitic gneiss and biotitic schist, and included a pre-existing landslide. A detailed geomorphologic field survey, carried out during the whole phase of mobilization, made it possible to recognize the evolution of the phenomenon. A set of datum points was located along the borders of the landslide and frequent manual measurements of surface displacements were performed. Since 11 February, a basic real-time monitoring system of meteoric parameters and of surface displacements, measured by means of high-precision extensometers, was also implemented.

    Based on the data gained through the monitoring system, and on field surveying, a basic support system for emergency management could be defined from the first phases of activation of the phenomenon. The evolution of the landslide was monitored during the following months: as a consequence, evidence of a retrogressive distribution could be recognized, with initial activation in the middle sector of the slope, where new temporary springs were observed. During early May, the activity reduced to displacements of a few millimetres per month and the geo-hydrological crisis seemed to be concluded.

    Afterwards, the geological scheme of the slope was refined based on the data collected through a set of explorative boreholes, equipped with inclinometers and piezometers: according to the stratigraphic and inclinometric data, the depth of the mobilized body was found to vary between 15 and 35 m along a longitudinal section. A parametric limit equilibrium analysis was carried out to explore the stability conditions of the slope affected by the landslide as well as to quantify the role of the water table in destabilizing the slope. The interpretation of the process based on field observations was confirmed

  10. GEOMORPHOLOGICAL ECOGEOGRAPHICAL VARIABLES DEFINING FEATURES OF THE ECOLOGICAL NICHE OF COMMON MILKWEED (ASCLEPIAS SYRIACA L.)

    Directory of Open Access Journals (Sweden)

    O. M. Kunah

    2016-04-01

    The role of geomorphological ecogeographical variables, derived from a digital elevation model built from remote sensing data, as markers of the ecological niche of weeds has been shown using common milkweed (Asclepias syriaca L.) as an example. The study area lies near the settlement of Vovnjanka (Poltava region). It extends 26 km from east to west and 15 km from north to south, with a total area of 390 km². The geomorphological variables considered are the topographic wetness index, topographic position index, mass balance index, erosion LS-factor, direct and diffuse insolation, altitude above channel network, multiresolution valley bottom flatness, multiresolution ridge top flatness index, and vector ruggedness measure. It is established that, according to the set of geomorphological indicators derived from the digital elevation model, a wide variety of microconditions caused by relief features is formed within a single agricultural field. The resulting variation of thermal and water regimes, moisture redistribution, and effectiveness of mechanical soil treatment against weeds presumably creates the background within which weed plants, including common milkweed, can spread.

  11. Validation of chemical analyses of atmospheric deposition in forested European sites

    Directory of Open Access Journals (Sweden)

    Erwin ULRICH

    2005-08-01

    Within the activities of the Integrated Co-operative Programme on Assessment and Monitoring of Air Pollution Effects on Forests (ICP Forests) and of the EU Regulation 2152/2003, a Working Group on Quality Assurance/Quality Control of analyses has been created to assist the participating laboratories in the analysis of atmospheric deposition, soil and soil solution, and leaves/needles. As part of the activity of the WG, this study presents a statistical analysis of chemical concentrations and of the relationships between ions, and between conductivity and ions, for the different types of water samples (bulk or wet-only samples, throughfall, stemflow) considered in forest studies. About 5000 analyses from seven laboratories were used to establish relationships representative of different European geographic and climatic situations, from northern Finland to southern Italy. Statistically significant differences between the relationships obtained from different types of solutions, interacting with different types of vegetation (throughfall and stemflow samples, broad-leaved trees and conifers) and with varying influence of marine salt, were tested. The ultimate aim is to establish general relationships between ions, and between conductivity and ions, with relative confidence limits, which can be used as a comparison with those established in single laboratories. The use of such techniques is strongly encouraged in the ICPF laboratories to validate single chemical analyses, to be performed while it is still possible to replicate the analysis, and as a general overview of the whole set of analyses, to obtain an indication of laboratory performance on a long-term basis.
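
    A typical single-sample check of this kind compares the measured conductivity with the value computed from the analysed ion concentrations. A minimal Python sketch follows; the equivalent ionic conductances are approximate 25 °C literature values, and the acceptance tolerance is an arbitrary placeholder:

        # Approximate equivalent ionic conductances at 25 C (S cm^2/eq);
        # indicative literature values, for illustration only.
        LAMBDA = {"Ca2+": 59.5, "Mg2+": 53.1, "Na+": 50.1, "K+": 73.5,
                  "NH4+": 73.6, "H+": 349.8, "Cl-": 76.3, "NO3-": 71.4,
                  "SO42-": 80.0}

        def estimated_conductivity(conc_ueq_per_l):
            # Conductivity (uS/cm) estimated from concentrations in ueq/L.
            return sum(LAMBDA[ion] * c / 1000.0 for ion, c in conc_ueq_per_l.items())

        def validate(measured_us_cm, conc_ueq_per_l, tolerance=0.15):
            est = estimated_conductivity(conc_ueq_per_l)
            rel_diff = abs(measured_us_cm - est) / est
            return est, rel_diff <= tolerance

        # Example throughfall sample (made-up concentrations, ueq/L).
        sample = {"Ca2+": 60, "Mg2+": 25, "Na+": 40, "K+": 55, "NH4+": 35,
                  "H+": 5, "Cl-": 50, "NO3-": 45, "SO42-": 70}
        est, ok = validate(measured_us_cm=42.0, conc_ueq_per_l=sample)
        print(f"estimated {est:.1f} uS/cm -> "
              f"{'accept' if ok else 'flag for re-analysis'}")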

  12. Partnerships – Limited partnerships and limited liability limited partnerships

    OpenAIRE

    Henning, Johan J.

    2000-01-01

    Consideration of the Limited Liability Partnership Act 2000 which introduced a new corporate entity, carrying the designations “partnership” and “limited” which allow members to limit their liability whilst organising themselves internally as a partnership. Article by Professor Johan Henning (Director of the Centre for Corporate Law and Practice, IALS and Dean of the Faculty of Law, University of the Free State, South Africa). Published in Amicus Curiae - Journal of the Institute of Advanced ...

  13. The limitation and modification of flux-limited diffusion theory

    International Nuclear Information System (INIS)

    Liu Chengan; Huang Wenkai

    1986-01-01

    The limitations of various typical flux-limited diffusion theories and the advantages of asymptotic diffusion theory with a time absorption constant are analyzed and compared. The conclusions are as follows: although the flux-limiting problem in neutron diffusion theory is formally solved by the derived flux-limited diffusion equation, that equation limits the flux too strongly because of an inappropriate assumption made in its derivation. Asymptotic diffusion theory with a time absorption constant eliminates this limitation and is more accurate than flux-limited diffusion theory in describing neutron transport problems.

  14. Limitations of red noise in analysing Dansgaard-Oeschger events

    Directory of Open Access Journals (Sweden)

    H. Braun

    2010-02-01

    During the last glacial period, climate records from the North Atlantic region exhibit a pronounced spectral component corresponding to a period of about 1470 years, which has attracted much attention. This spectral peak is closely related to the recurrence pattern of Dansgaard-Oeschger (DO) events. In previous studies a red noise random process, more precisely a first-order autoregressive (AR1) process, was used to evaluate the statistical significance of this peak, with a reported significance of more than 99%. Here we use a simple mechanistic two-state model of DO events, which itself was derived from a much more sophisticated ocean-atmosphere model of intermediate complexity, to numerically evaluate the spectral properties of random (i.e., solely noise-driven) events. This way we find that the power spectral density of random DO events differs fundamentally from a simple red noise random process. These results question the applicability of linear spectral analysis for estimating the statistical significance of highly non-linear processes such as DO events. More precisely, to enhance our scientific understanding of the trigger of DO events, we must not consider simple "straw men" such as, for example, the AR1 random process, but rather test against realistic alternative descriptions.
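
    The AR1 null model used in such significance tests is straightforward to reproduce. The Python sketch below simulates an AR1 series, computes its periodogram and compares it against the theoretical red noise spectrum; all parameter values are illustrative:

        import numpy as np

        rng = np.random.default_rng(42)

        # AR1 red noise: x[t] = phi * x[t-1] + white noise (illustrative phi).
        n, phi = 4096, 0.7
        x = np.zeros(n)
        eps = rng.normal(size=n)
        for t in range(1, n):
            x[t] = phi * x[t - 1] + eps[t]

        # Periodogram of the simulated series.
        freqs = np.fft.rfftfreq(n, d=1.0)
        power = np.abs(np.fft.rfft(x - x.mean())) ** 2 / n

        # Theoretical AR1 spectrum (unit innovation variance):
        # S(f) = 1 / (1 + phi^2 - 2 phi cos(2 pi f)).
        theory = 1.0 / (1.0 + phi**2 - 2.0 * phi * np.cos(2.0 * np.pi * freqs))

        # The pointwise 99% level of a chi-squared(2 dof) periodogram is about
        # 4.6x the background mean; note that the maximum over thousands of
        # frequencies typically exceeds this even for pure red noise
        # (a multiple-testing effect).
        ratio = power[1:] / theory[1:]
        print("max power/background ratio:", ratio.max().round(2))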

  15. Limitations and risks of meta-analyses of longevity studies

    DEFF Research Database (Denmark)

    Sebastiani, Paola; Bae, Harold; Gurinovich, Anastasia

    2017-01-01

    Searching for genetic determinants of human longevity has been challenged by the rarity of data sets with large numbers of individuals who have reached extreme old age, inconsistent definitions of the phenotype, and the difficulty of defining appropriate controls. Meta-analysis - a statistical method to summarize results from different studies - has become a common tool in genetic epidemiology to accrue large sample sizes for powerful genetic association studies. In conducting a meta-analysis of studies of human longevity, however, particular attention must be paid to the definition of cases and controls (including their health status) and to the effect of possible confounders, such as sex and ethnicity, upon the genetic effect to be estimated. We will show examples of how a meta-analysis can inflate the false negative rates of genetic association studies or bias estimates of the association...

  16. The limited informativeness of meta-analyses and media effects

    NARCIS (Netherlands)

    Valkenburg, P.M.

    2015-01-01

    In this issue of Perspectives on Psychological Science, Christopher Ferguson reports on a meta-analysis examining the relationship between children’s video game use and several outcome variables, including aggression and attention deficit symptoms (Ferguson, 2015, this issue). In this commentary, I

  17. In-depth analyses of paleolithic pigments in cave climatic conditions

    Science.gov (United States)

    Touron, Stéphanie; Trichereau, Barbara; Syvilay, Delphine

    2017-07-01

    Painted caves are a specific environment whose preservation requires multidisciplinary studies carried out jointly by the different actors involved. The actions set up must follow national and European ethics rules and treaties and be as little invasive as possible to preserve the integrity of the site. Studying colorants in caves should meet these expectations and take into account on-site conditions: high humidity, reduced access to electricity, etc. Therefore, non-invasive analyses should be preferred. However, their limits restrict the field of application, and sometimes sampling and laboratory analyses must be used to answer the question at hand. This is especially true when the pigment is covered by calcite. For this purpose, Laser-Induced Breakdown Spectroscopy (LIBS) has been assessed to identify the composition through stratigraphic analyses. This study carries out in-depth profiling of laboratory samples under conditions close to those met in caves. Samples were prepared on a calcareous substrate using three pigments (red ochre, manganese black and carbon black) and two binding media (water and saliva). All samples were covered with calcite. Four sets of measurements were then made using the LIBS instrument. The in-depth profiles were obtained using Standard Normal Variate (SNV) normalization. For all the samples, the pigment layer was identified in the second or third shot, the calcite layer being quite thin. The results with the carbon black pigment remain promising but not really conclusive, carbon being generally quite difficult to quantify.
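
    The SNV normalization mentioned above is a one-line operation per spectrum: each spectrum is centred on its own mean and scaled by its own standard deviation. A minimal Python sketch on synthetic spectra:

        import numpy as np

        def snv(spectra):
            # Standard Normal Variate: normalize each spectrum (row) to zero
            # mean and unit standard deviation, removing offset and
            # multiplicative effects.
            spectra = np.asarray(spectra, dtype=float)
            mean = spectra.mean(axis=1, keepdims=True)
            std = spectra.std(axis=1, keepdims=True)
            return (spectra - mean) / std

        # Example: five synthetic 100-channel spectra with varying baselines.
        rng = np.random.default_rng(0)
        raw = (rng.random((5, 100)) * rng.uniform(0.5, 2.0, size=(5, 1))
               + rng.uniform(0.0, 1.0, size=(5, 1)))
        normalized = snv(raw)
        print(normalized.mean(axis=1).round(6), normalized.std(axis=1).round(6))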

  18. Using the ion microprobe mass analyser for trace element analysis

    International Nuclear Information System (INIS)

    Schilling, J.H.

    1978-01-01

    Most techniques for the analysis of trace elements are capable of determining the concentrations in a bulk sample or solution, but without reflecting their distribution. In a bulk analysis, therefore, elements which occur in high concentration in a few precipitates would still be considered trace elements even though their local concentrations greatly exceed the normally accepted trace element concentration limit. Anomalous distribution is also shown by an oxide layer, a few hundred angstroms thick, on an aluminium sample. A low oxide concentration would be reported if it were included in the bulk analysis, which contradicts the high surface concentration. The importance of a knowledge of the trace element distribution is therefore demonstrated. Distributional trace element analysis can be carried out using the ion microprobe mass analyser (IMMA). Since the analytical technique used in this instrument, namely secondary ion mass spectrometry (SIMS), is not universally appreciated, the instrument and its features will be described briefly, followed by a discussion of quantitative analysis and the related subjects of detection limit and sample consumption. Finally, a few examples of the use of the instrument are given.

  19. Comparative genomic analyses of nickel, cobalt and vitamin B12 utilization

    Directory of Open Access Journals (Sweden)

    Gelfand Mikhail S

    2009-02-01

    Background: Nickel (Ni) and cobalt (Co) are trace elements required for a variety of biological processes. Ni is directly coordinated by proteins, whereas Co is mainly used as a component of vitamin B12. Although a number of Ni- and Co-dependent enzymes have been characterized, systematic evolutionary analyses of the utilization of these metals are limited. Results: We carried out comparative genomic analyses to examine the occurrence and evolutionary dynamics of the use of Ni and Co at the level of (i) transport systems, and (ii) metalloproteomes. Our data show that both metals are widely used in bacteria and archaea. Cbi/NikMNQO is the most common prokaryotic Ni/Co transporter, while Ni-dependent urease and Ni-Fe hydrogenase, and B12-dependent methionine synthase (MetH), ribonucleotide reductase and methylmalonyl-CoA mutase are the most widespread metalloproteins for Ni and Co, respectively. The occurrence of other metalloenzymes showed a mosaic distribution, and a new B12-dependent protein family was predicted. Deltaproteobacteria and Methanosarcina generally have larger Ni- and Co-dependent proteomes. On the other hand, utilization of these two metals is limited in eukaryotes, and very few of these organisms utilize both of them. The Ni-utilizing eukaryotes are mostly fungi (except Saccharomycotina) and plants, whereas most B12-utilizing organisms are animals. The NiCoT transporter family is the most widespread eukaryotic Ni transporter, and eukaryotic urease and MetH are the most common Ni- and B12-dependent enzymes, respectively. Finally, investigation of environmental and other conditions and the identity of organisms that show dependence on Ni or Co revealed that host-associated organisms (particularly obligate intracellular parasites and endosymbionts) have a tendency for loss of Ni/Co utilization. Conclusion: Our data provide information on the evolutionary dynamics of Ni and Co utilization and highlight widespread use of these metals in the three

  20. Drishti: a volume exploration and presentation tool

    Science.gov (United States)

    Limaye, Ajay

    2012-10-01

    Among several rendering techniques for volumetric data, direct volume rendering is a powerful visualization tool for a wide variety of applications. This paper describes the major features of the hardware-based volume exploration and presentation tool Drishti. The word Drishti stands for vision or insight in Sanskrit, an ancient Indian language. Drishti is a cross-platform open-source volume rendering system that delivers high-quality, state-of-the-art renderings. The features in Drishti include, but are not limited to, production-quality rendering, volume sculpting, multi-resolution zooming, transfer function blending, profile generation, measurement tools, mesh generation, and stereo/anaglyph/crosseye renderings. Ultimately, Drishti provides an intuitive and powerful interface for choreographing animations.

  1. How often do sensitivity analyses for economic parameters change cost-utility analysis conclusions?

    Science.gov (United States)

    Schackman, Bruce R; Gold, Heather Taffet; Stone, Patricia W; Neumann, Peter J

    2004-01-01

    There is limited evidence about the extent to which sensitivity analysis has been used in the cost-effectiveness literature. Sensitivity analyses for health-related QOL (HR-QOL), cost and discount rate economic parameters are of particular interest because they measure the effects of methodological and estimation uncertainties. To investigate the use of sensitivity analyses in the pharmaceutical cost-utility literature in order to test whether a change in economic parameters could result in a different conclusion regarding the cost effectiveness of the intervention analysed. Cost-utility analyses of pharmaceuticals identified in a prior comprehensive audit (70 articles) were reviewed and further audited. For each base case for which sensitivity analyses were reported (n = 122), up to two sensitivity analyses for HR-QOL (n = 133), cost (n = 99), and discount rate (n = 128) were examined. Article mentions of thresholds for acceptable cost-utility ratios were recorded (total 36). Cost-utility ratios were denominated in US dollars for the year reported in each of the original articles in order to determine whether a different conclusion would have been indicated at the time the article was published. Quality ratings from the original audit for articles where sensitivity analysis results crossed the cost-utility ratio threshold above the base-case result were compared with those that did not. The most frequently mentioned cost-utility thresholds were $US20,000/QALY, $US50,000/QALY, and $US100,000/QALY. The proportions of sensitivity analyses reporting quantitative results that crossed the threshold above the base-case results (or where the sensitivity analysis result was dominated) were 31% for HR-QOL sensitivity analyses, 20% for cost-sensitivity analyses, and 15% for discount-rate sensitivity analyses. Almost half of the discount-rate sensitivity analyses did not report quantitative results. Articles that reported sensitivity analyses where results crossed the cost
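
    The threshold test described here is mechanically simple: recompute the incremental cost-utility ratio under each sensitivity-analysis value and check whether it crosses the chosen threshold. A Python sketch with invented numbers:

        # Base-case inputs for a hypothetical intervention vs. comparator.
        delta_cost = 12000.0       # incremental cost, $US
        delta_qaly = 0.40          # incremental QALYs (from HR-QOL weights)
        threshold = 50000.0        # $US per QALY, a commonly cited threshold

        def icer(cost, qaly):
            return cost / qaly

        base_case = icer(delta_cost, delta_qaly)

        # Sensitivity analysis: vary the HR-QOL-driven QALY gain over a range.
        for qaly in (0.40, 0.30, 0.20):            # base case, then lower values
            ratio = icer(delta_cost, qaly)
            crossed = ratio > threshold and base_case <= threshold
            print(f"dQALY={qaly:.2f}: ICER=${ratio:,.0f}/QALY"
                  f"{'  <-- conclusion changes' if crossed else ''}")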

  2. Beam-limiting and radiation-limiting interlocks

    International Nuclear Information System (INIS)

    Macek, R.J.

    1996-01-01

    This paper reviews several aspects of beam-limiting and radiation- limiting interlocks used for personnel protection at high-intensity accelerators. It is based heavily on the experience at the Los Alamos Neutron Science Center (LANSCE) where instrumentation-based protection is used extensively. Topics include the need for ''active'' protection systems, system requirements, design criteria, and means of achieving and assessing acceptable reliability. The experience with several specific devices (ion chamber-based beam loss interlock, beam current limiter interlock, and neutron radiation interlock) designed and/or deployed to these requirements and criteria is evaluated

  3. Sensitivity and uncertainty analyses applied to one-dimensional radionuclide transport in a layered fractured rock: Evaluation of the Limit State approach, Iterative Performance Assessment, Phase 2

    International Nuclear Information System (INIS)

    Wu, Y.T.; Gureghian, A.B.; Sagar, B.; Codell, R.B.

    1992-12-01

    The Limit State approach is based on partitioning the parameter space into two parts: one in which the performance measure is smaller than a chosen value (called the limit state), and the other in which it is larger. Through a Taylor expansion at a suitable point, the partitioning surface (called the limit state surface) is approximated as either a linear or quadratic function. The success and efficiency of the limit state method depend upon choosing an optimum point for the Taylor expansion. The point in the parameter space that has the highest probability of producing the value chosen as the limit state is optimal for expansion. When the parameter space is transformed into a standard Gaussian space, the optimal expansion point, known as the Most Probable Point (MPP), has the property that its location on the limit state surface is closest to the origin. Additionally, the projections onto the parameter axes of the vector from the origin to the MPP are the sensitivity coefficients. Once the MPP is determined and the limit state surface approximated, formulas (see Equations 4-7 and 4-8) are available for determining the probability of the performance measure being less than the limit state. By choosing a succession of limit states, the entire cumulative distribution of the performance measure can be determined. Methods for determining the MPP and also for improving the estimate of the probability are discussed in this report.
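
    In the standard Gaussian space the MPP search reduces to a constrained optimization: find the point on the limit state surface closest to the origin. The Python sketch below performs that first-order computation for a toy limit state function (not the report's transport model):

        import numpy as np
        from scipy.optimize import minimize
        from scipy.stats import norm

        # Toy limit state in standard Gaussian space: g(u) < 0 means the
        # performance measure exceeds the chosen limit state.
        def g(u):
            return 3.0 - u[0] - 0.5 * u[1] ** 2

        # MPP: the point on g(u) = 0 with minimum distance to the origin.
        res = minimize(lambda u: u @ u, x0=np.array([1.0, 1.0]),
                       constraints={"type": "eq", "fun": g})
        u_mpp = res.x
        beta = np.linalg.norm(u_mpp)      # reliability index
        p_f = norm.cdf(-beta)             # first-order probability estimate

        # Sensitivity coefficients: projections of the unit MPP vector.
        alpha = u_mpp / beta
        print(f"MPP={u_mpp.round(3)}, beta={beta:.3f}, "
              f"P={p_f:.2e}, alpha={alpha.round(3)}")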

  4. Robustness assessments are needed to reduce bias in meta-analyses that include zero-event randomized trials

    DEFF Research Database (Denmark)

    Keus, F; Wetterslev, J; Gluud, C

    2009-01-01

    OBJECTIVES: Meta-analysis of randomized trials with binary data can use a variety of statistical methods. Zero-event trials may create analytic problems. We explored how different methods may impact inferences from meta-analyses containing zero-event trials. METHODS: Five levels of statistical methods are identified for meta-analysis with zero-event trials, leading to numerous data analyses. We used the binary outcomes from our Cochrane review of randomized trials of laparoscopic vs. small-incision cholecystectomy for patients with symptomatic cholecystolithiasis to illustrate the influence of statistical method on inference. RESULTS: In seven meta-analyses of seven outcomes from 15 trials, there were zero-event trials in 0 to 71.4% of the trials. We found inconsistency in significance in one of seven outcomes (14%; 95% confidence limit 0.4%-57.9%). There was also considerable variability...

  5. Material limitations on the detection limit in refractometry.

    Science.gov (United States)

    Skafte-Pedersen, Peder; Nunes, Pedro S; Xiao, Sanshui; Mortensen, Niels Asger

    2009-01-01

    We discuss the detection limit for refractometric sensors relying on high-Q optical cavities and show that the ultimate classical detection limit is given by min {Δn} ≳ η, with n + iη being the complex refractive index of the material under refractometric investigation. Taking finite Q factors and filling fractions into account, the detection limit declines. As an example we discuss the fundamental limits of silicon-based high-Q resonators, such as photonic crystal resonators, for sensing in a bio-liquid environment, such as a water buffer. In the transparency window (λ ≳ 1100 nm) of silicon the detection limit becomes almost independent of the filling fraction, while in the visible, the detection limit depends strongly on the filling fraction because the silicon absorbs strongly.
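
    The imaginary index follows directly from the material's absorption coefficient via η = αλ/4π, so the classical bound is easy to evaluate. A short Python sketch (the absorption value for water near 1550 nm is a rough literature figure):

        import math

        def eta_from_absorption(alpha_per_m, wavelength_m):
            # Imaginary part of the refractive index from the absorption
            # coefficient: eta = alpha * lambda / (4 * pi).
            return alpha_per_m * wavelength_m / (4.0 * math.pi)

        # Water near 1550 nm absorbs roughly alpha ~ 1e3 1/m (approximate).
        eta = eta_from_absorption(1.0e3, 1550e-9)
        print(f"classical detection limit: min dn >~ {eta:.1e}")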

  6. Comparative analyses of basal rate of metabolism in mammals: data selection does matter.

    Science.gov (United States)

    Genoud, Michel; Isler, Karin; Martin, Robert D

    2018-02-01

    ... (Mammalia, Eutheria, Metatheria), although less-reliable estimates of BMR were generally about 12-20% larger than more-reliable ones. Larger effects were found with more-limited clades, such as sciuromorph rodents. For the relationship between BMR and brain mass, the results of comparative analyses were found to depend strongly on the data set used, especially with more-limited, order-level clades. In fact, with small sample sizes (e.g. ...) ... diligence when selecting BMR estimates and caution regarding results (even if seemingly significant) with small sample sizes. © 2017 Cambridge Philosophical Society.

  7. Limitations on the application of optimization methods in the design of radiation protection in large installations

    International Nuclear Information System (INIS)

    Hock, R.; Brauns, J.; Steinicke, P.

    1986-01-01

    In a society where prices of goods are not regulated, optimization is best achieved by competition and not by the decisions of an authority. In order to improve its competitive position, a company may attach increasing importance to cost-benefit analyses both internally and in its discussions with customers. Some limitations and problems of this methodology are analysed in the paper. It is concluded that an increase in design effort (analysis of more options) beyond a planned level, in order to reduce radiation exposure, can only be justified in rational terms if exposure limits are involved. An increase in design effort could also be justified if solutions with lower equipment and operating costs but higher radiation exposure were acceptable. Because of the high competitive value of radiation protection, however, it is difficult to gain acceptance for such optimization. The cost of the investigation itself requires optimal procedures for the optimization process and therefore limitation of the number of options to be analysed. This is demonstrated for the example of a shielding wall. Another problem is the probabilistic nature of many of the parameters involved. In most cases this probability distribution is only inaccurately known. Deterministic 'design basis assumptions' therefore have to be made. The choice of these assumptions may greatly influence the result of the optimization, as demonstrated in an example taken from practice. (author)

  8. PRESSURE OF AGEING ON REGIONAL DEVELOPMENT. CHALLENGES AND LIMITS FOR THE LABOUR MARKET

    Directory of Open Access Journals (Sweden)

    Silvia PISICĂ

    2015-12-01

    Using official statistics, the paper aims to contribute a regional perspective on labour market challenges and limits and on the increasing number of elderly people participating in economic activity. The regional level is considered for analysing the social productivity of labour in terms of GDP and employment. Employment is analysed from the perspective of the share and structure of elderly people on the labour market. In this respect, activity rates, the ageing index and the economic dependency ratio are reviewed. In order to shape the determinants of employment of elderly people, poverty measures at NUTS 2 level are also examined.

  9. Limits to ductility set by plastic flow localization

    International Nuclear Information System (INIS)

    Needleman, A.; Rice, J.R.

    1977-11-01

    The theory of strain localization is reviewed with reference both to local necking in sheet metal forming processes and to more general three dimensional shear band localizations that sometimes mark the onset of ductile rupture. Both bifurcation behavior and the growth of initial imperfections are considered. In addition to analyses based on classical Mises-like constitutive laws, approaches to localization based on constitutive models that may more accurately model processes of slip and progressive rupturing on the microscale in structural alloys are discussed. Among these non-classical constitutive features are the destabilizing roles of yield surface vertices and of non-normality effects, arising, for example, from slight pressure sensitivity of yield. Analyses based on a constitutive model of a progressively cavitating dilational plastic material which is intended to model the process of ductile void growth in metals are also discussed. A variety of numerical results are presented. In the context of the three dimensional theory of localization, it is shown that a simple vertex model predicts ratios of ductility in plane strain tension to ductility in axisymmetric tension qualitatively consistent with experiment, and the destabilizing influence of a hydrostatic stress dependent void nucleation criterion is illustrated. In the sheet necking context, and focussing on positive biaxial stretching, it is shown that forming limit curves based on a simple vertex model and those based on a simple void growth model are qualitatively in accord, although attributing instability to very different physical mechanisms. These forming limit curves are compared with those obtained from the Mises material model and employing various material and geometric imperfections

  10. Multiscale Feature Model for Terrain Data Based on Adaptive Spatial Neighborhood

    Directory of Open Access Journals (Sweden)

    Huijie Zhang

    2013-01-01

    Multiresolution hierarchy based on features (FMRH) has been applied in the field of terrain modeling and has obtained significant results in real engineering. However, it is difficult to schedule multiresolution data in FMRH from external memory. This paper proposes a new multiscale feature model and related strategies to cluster spatial data blocks and solve the scheduling problems of FMRH using spatial neighborhoods. In the model, nodes with similar error in different layers should be in one cluster. On this basis, a spatial index algorithm for each cluster, guided by the Hilbert curve, is proposed. It ensures that multi-resolution terrain data can be loaded without traversing the whole FMRH; therefore, the efficiency of data scheduling is improved. Moreover, a spatial closeness theorem for clusters is put forward and proved. It guarantees that the union of data blocks composes the whole terrain without any data loss. Finally, experiments have been carried out on many different large-scale data sets, and the results demonstrate that the scheduling time is shortened and the efficiency of I/O operation is clearly improved, which is important in real engineering.
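
    The Hilbert curve suits this kind of index because cells that are close along the curve are also close in space. A compact Python sketch of the standard (x, y)-to-Hilbert-index mapping, used here to order hypothetical terrain blocks for scheduling:

        def rot(n, x, y, rx, ry):
            # Rotate/flip a quadrant so sub-curves keep a consistent orientation.
            if ry == 0:
                if rx == 1:
                    x, y = n - 1 - x, n - 1 - y
                x, y = y, x
            return x, y

        def xy2d(n, x, y):
            # Hilbert index of cell (x, y) in an n x n grid (n a power of two).
            d, s = 0, n // 2
            while s > 0:
                rx = 1 if (x & s) > 0 else 0
                ry = 1 if (y & s) > 0 else 0
                d += s * s * ((3 * rx) ^ ry)
                x, y = rot(n, x, y, rx, ry)
                s //= 2
            return d

        # Order hypothetical terrain blocks by Hilbert index so that blocks
        # that are spatially close end up close in the fetch queue / on disk.
        blocks = [(3, 5), (3, 6), (12, 1), (4, 5), (11, 2)]
        print(sorted(blocks, key=lambda b: xy2d(16, *b)))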

  11. Are We Reaching the Limits of Homo sapiens?

    Science.gov (United States)

    Marck, Adrien; Antero, Juliana; Berthelot, Geoffroy; Saulière, Guillaume; Jancovici, Jean-Marc; Masson-Delmotte, Valérie; Boeuf, Gilles; Spedding, Michael; Le Bourg, Éric; Toussaint, Jean-François

    2017-01-01

    Echoing scientific and industrial progress, the twentieth century was an unprecedented period of improvement for human capabilities and performances, with significant increases in lifespan, adult height, and maximal physiological performance. Analyses of historical data show a major slowdown occurring in the most recent years. This has triggered large and passionate debates across multiple academic disciplines, as such an observation could be interpreted as a sign of our upper biological limits. Such a new phase of human history may be related to structural and functional limits determined by long-term evolutionary constraints and the interaction between complex systems and their environment. In this interdisciplinary approach, we call into question the validity of subsequent forecasts and projections through innovative and related biomarkers such as sport, lifespan, and height indicators. We set a theoretical framework based on biological and environmental relevance rather than using a typical single-variable forecasting approach. As demonstrated within the article, these new views will have major social, economic, and political implications.

  12. Limit load analysis of thick-walled concrete structures

    International Nuclear Information System (INIS)

    Argyris, J.H.; Faust, G.; Willam, K.J.

    1975-01-01

    The paper illustrates the interaction of constitutive modeling and finite element solution techniques for limit load prediction of concrete structures. On the constitutive side, an engineering model of concrete fracture is developed in which the Mohr-Coulomb criterion is augmented by a tension cut-off to describe incipient failure. Upon intersection with the stress path, the failure surface collapses for brittle behaviour according to one of three softening rules: no-tension, no-cohesion, and no-friction. The stress transfer accompanying the energy dissipation during local failure is modelled by several fracture rules, which are examined with regard to ultimate load prediction. On the numerical side, the effect of finite element idealization is studied first as far as ultimate load convergence is concerned. Subsequently, incremental tangential and initial load techniques are compared, together with the effect of step size. Limit load analyses of a thick-walled concrete ring and a lined concrete reactor closure conclude the paper with examples from practical engineering. (orig.) [de
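
    The incipient-failure check described above (Mohr-Coulomb augmented by a tension cut-off) is easy to state in terms of principal stresses. A small Python sketch with compression taken positive and arbitrary illustrative parameters:

        import math

        def concrete_failure(sigma1, sigma3, cohesion, phi_deg, tension_cutoff):
            # Check incipient failure for principal stresses sigma1 >= sigma3
            # (compression positive, MPa): Mohr-Coulomb shear criterion
            # augmented by a tension cut-off on the minor principal stress.
            phi = math.radians(phi_deg)
            # Mohr-Coulomb:
            # sigma1 - sigma3 >= 2 c cos(phi) + (sigma1 + sigma3) sin(phi)
            shear_failure = (sigma1 - sigma3) >= (
                2.0 * cohesion * math.cos(phi)
                + (sigma1 + sigma3) * math.sin(phi))
            # Tension cut-off: tensile minor principal stress beyond the limit.
            tension_failure = -sigma3 >= tension_cutoff
            return shear_failure, tension_failure

        # Arbitrary illustrative values: c = 7 MPa, phi = 35 deg, f_t = 3 MPa.
        print(concrete_failure(sigma1=40.0, sigma3=-2.0, cohesion=7.0,
                               phi_deg=35.0, tension_cutoff=3.0))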

  13. Limit load solutions for piping branch junctions under out-of-plane bending

    International Nuclear Information System (INIS)

    Xu, Ying Hu; Lee, Kuk Hee; Jeon, Jun Young; Kim, Yun Jae

    2009-01-01

    Approximate plastic limit load solutions for piping branch junctions under out-of-plane bending are obtained from detailed three-dimensional (3-D) FE limit analyses based on elastic-perfectly plastic materials with the small geometry change option. Two types of bending are considered: out-of-plane bending applied to the branch pipe and out-of-plane bending applied to the run pipe. Accordingly, closed-form approximations are proposed for piping branch junctions under out-of-plane bending based on the FE results. The proposed solutions are valid for branch-to-run pipe radius and thickness ratios from 0.0 to 1.0, and for mean radius-to-thickness ratios of the run pipe from 2.0 to 20.0. This study also quantifies the effect of reinforcement area on plastic limit loads.

  14. Material Limitations on the Detection Limit in Refractometry

    Directory of Open Access Journals (Sweden)

    Niels Asger Mortensen

    2009-10-01

    We discuss the detection limit for refractometric sensors relying on high-Q optical cavities and show that the ultimate classical detection limit is given by min {Δn} ≳ η, with n + iη being the complex refractive index of the material under refractometric investigation. Taking finite Q factors and filling fractions into account, the detection limit declines. As an example we discuss the fundamental limits of silicon-based high-Q resonators, such as photonic crystal resonators, for sensing in a bio-liquid environment, such as a water buffer. In the transparency window (λ ≳ 1100 nm) of silicon the detection limit becomes almost independent of the filling fraction, while in the visible, the detection limit depends strongly on the filling fraction because the silicon absorbs strongly.

  15. Limited Amount of Formula May Facilitate Breastfeeding: Randomized, Controlled Trial to Compare Standard Clinical Practice versus Limited Supplemental Feeding.

    Directory of Open Access Journals (Sweden)

    Zbyněk Straňák

    Breastfeeding is known to reduce infant morbidity and improve well-being. Nevertheless, breastfeeding rates remain low despite public health efforts. Our study aims to investigate the effect of controlled limited formula usage during birth hospitalisation on breastfeeding, using the primary hypothesis that early limited formula feeds in infants with early weight loss will not adversely affect the rate of exclusive or any breastfeeding as measured at discharge, 3 and 6 months of age. We randomly assigned 104 healthy term infants, 24 to 48 hours old, with ≥ 5% loss of birth weight to a controlled limited formula (CLF) intervention (10 ml formula by syringe after each breastfeeding, discontinued at onset of lactation) or a control group (standard approach, SA). Groups were compared for demographic data and breastfeeding rates at discharge, 3 months and 6 months of age (p-values adjusted for multiple testing). Fifty newborns were analysed in the CLF group and 50 in the SA group. There were no differences in demographic data or clinical characteristics between groups. We found no evidence of difference between treatment groups in the rates of exclusive as well as any breastfeeding at discharge (p-values 0.2 and >0.99, respectively), 3 months (p-values 0.12 and 0.10) and 6 months of infants' age (p-values 0.45 and 0.34, respectively). The percentage weight loss during hospitalisation was significantly higher in the SA group (7.3% in the CLF group, 8.4% in the SA group, p = 0.002). The study shows that controlled limited formula use does not have an adverse effect on rates of breastfeeding in the short and long term. Larger studies are needed to confirm a possible potential of controlled limited formula use to support establishing breastfeeding and to help improve breastfeeding rates overall. ISRCTN registry: ISRCTN61915183.

  16. Chemical analyses in the World Coal Quality Inventory

    Science.gov (United States)

    Tewalt, Susan J.; Belkin, Harvey E.; SanFilipo, John R.; Merrill, Matthew D.; Palmer, Curtis A.; Warwick, Peter D.; Karlsen, Alexander W.; Finkelman, Robert B.; Park, Andy J.

    2010-01-01

    The main objective of the World Coal Quality Inventory (WoCQI) was to collect and analyze a global set of samples of mined coal during a time period from about 1995 to 2006 (Finkelman and Lovern, 2001). Coal samples were collected by foreign collaborators and submitted to country specialists in the U.S. Geological Survey (USGS) Energy Program. However, samples from certain countries, such as Afghanistan, India, and Kyrgyzstan, were collected collaboratively in the field with USGS personnel. Samples were subsequently analyzed at two laboratories: the USGS Inorganic Geochemistry Laboratory located in Denver, CO and a commercial laboratory (Geochemical Testing, Inc.) located in Somerset, PA. Thus the dataset, which is in Excel (2003) format and includes 1,580 samples from 57 countries, does not have the inter-laboratory variability that is present in many compilations. Major-, minor-, and trace-element analyses from the USGS laboratory, calculated to a consistent analytical basis (dry, whole-coal) and presented with available sample identification information, are sorted alphabetically by country name. About 70 percent of the samples also have data from the commercial laboratory, which are presented on an as-received analytical basis. The USGS initiated a laboratory review of quality assurance in 2008, covering quality control and methodology used in inorganic chemical analyses of coal, coal power plant ash, water, and sediment samples. This quality control review found that data generated by the USGS Inorganic Geochemistry Laboratory from 1996 through 2006 were characterized by quality practices that did not meet USGS requirements commonly in use at the time. The most serious shortcomings were (1) the adjustment of raw sample data to standards when the instrument values for those standards exceeded acceptable limits or (2) the insufficient use of multiple standards to provide adequate quality assurance. In general, adjustment of raw data to account for instrument
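
    Because the USGS values are reported on a dry, whole-coal basis while the commercial values are as-received, comparisons require a basis conversion. The sketch below uses the standard moisture correction; the spreadsheet file name and column names are hypothetical, as the actual WoCQI layout may differ.

      import pandas as pd

      # Hypothetical file/column names standing in for the WoCQI spreadsheet.
      df = pd.read_excel("WoCQI.xls", sheet_name=0)

      # Standard conversion from as-received to dry basis:
      # C_dry = C_ar * 100 / (100 - moisture_ar_percent)
      def as_received_to_dry(c_ar, moisture_pct):
          return c_ar * 100.0 / (100.0 - moisture_pct)

      df["ash_dry_pct"] = as_received_to_dry(df["ash_ar_pct"], df["moisture_ar_pct"])
      print(df[["country", "ash_ar_pct", "ash_dry_pct"]].head())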

  17. Procedures for field chemical analyses of water samples

    International Nuclear Information System (INIS)

    Korte, N.; Ealey, D.

    1983-12-01

    A successful water-quality monitoring program requires a clear understanding of appropriate measurement procedures in order to obtain reliable field data. It is imperative that the responsible personnel have a thorough knowledge of the limitations of the techniques being used. Unfortunately, there is a belief that field analyses are simple and straightforward. Yet, significant controversy as well as misuse of common measurement techniques abounds. This document describes procedures for field measurements of pH, carbonate and bicarbonate, specific conductance, dissolved oxygen, nitrate, Eh, and uranium. Each procedure section includes an extensive discussion regarding the limitations of the method as well as brief discussions of calibration procedures and available equipment. A key feature of these procedures is the consideration given to the ultimate use of the data. For example, if the data are to be used for geochemical modeling, more precautions are needed. In contrast, routine monitoring conducted merely to recognize gross changes can be accomplished with less effort. Finally, quality assurance documentation for each measurement is addressed in detail. Particular attention is given to recording sufficient information such that decisions concerning the quality of the data can be easily made. Application of the procedures and recommendations presented in this document should result in a uniform and credible water-quality monitoring program. 22 references, 4 figures, 3 tables
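
    One concrete example of the measurement limitations discussed: raw conductivity readings are temperature dependent and are routinely normalised to 25 °C in the field. The sketch below uses a linear coefficient of about 2% per °C, which is a commonly assumed default for natural waters rather than a value taken from this report.

      def specific_conductance_25c(ec_measured, temp_c, alpha=0.02):
          """Normalise a raw conductivity reading to 25 degC.

          alpha is the temperature coefficient per degC; ~0.02 (2%/degC) is a
          typical default for natural waters, but it is matrix-dependent.
          """
          return ec_measured / (1.0 + alpha * (temp_c - 25.0))

      # Example: 450 uS/cm measured at 12.5 degC
      print(round(specific_conductance_25c(450.0, 12.5), 1), "uS/cm at 25 degC")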

  18. EU-FP7-iMARS: Analysis of Mars Multi-Resolution Images Using Auto-Coregistration Data Mining and Crowd Source Techniques: Processed Results - a First Look

    Science.gov (United States)

    Muller, Jan-Peter; Tao, Yu; Sidiropoulos, Panagiotis; Gwinner, Klaus; Willner, Konrad; Fanara, Lida; Waehlisch, Marita; van Gasselt, Stephan; Walter, Sebastian; Steikert, Ralf; Schreiner, Bjoern; Ivanov, Anton; Cantini, Federico; Wardlaw, Jessica; Morley, Jeremy; Sprinks, James; Giordano, Michele; Marsh, Stuart; Kim, Jungrack; Houghton, Robert; Bamford, Steven

    2016-06-01

    Understanding planetary atmosphere-surface exchange and extra-terrestrial surface formation processes within our Solar System is one of the fundamental goals of planetary science research. There has been a revolution in planetary surface observations over the last 15 years, especially in 3D imaging of surface shape. This has led to the ability to overlay image data and derived information from different epochs, back in time to the mid 1970s, to examine changes through time, such as the recent discovery of mass movement, the tracking of inter-year seasonal changes and the search for occurrences of fresh craters. Within the EU FP-7 iMars project, we have developed a fully automated multi-resolution DTM processing chain, called the Coregistration ASP-Gotcha Optimised (CASP-GO) chain, based on the open source NASA Ames Stereo Pipeline (ASP) [Tao et al., this conference], which is being applied to the production of planetwide DTMs and ORIs (OrthoRectified Images) from CTX and HiRISE. Alongside the production of individual strip CTX and HiRISE DTMs and ORIs, DLR [Gwinner et al., 2015] have processed HRSC mosaics of ORIs and DTMs for complete areas in a consistent manner using photogrammetric bundle block adjustment techniques. A novel automated co-registration and orthorectification chain has been developed by [Sidiropoulos & Muller, this conference]. Using the HRSC map products (both mosaics and orbital strips) as a map-base, it is being applied to many of the 400,000 level-1 EDR images taken by the four NASA orbital cameras, namely the Viking Orbiter camera (VO), the Mars Orbiter Camera (MOC), the Context Camera (CTX) and the High Resolution Imaging Science Experiment (HiRISE), going back to 1976. A webGIS has been developed [van Gasselt et al., this conference] for displaying this time sequence of imagery and will be demonstrated showing an example from one of the HRSC quadrangle map-sheets. Automated quality control [Sidiropoulos & Muller, 2015] techniques are applied to screen for
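
    The CASP-GO chain itself is not reproduced here. As an illustration of the kind of coarse alignment step that co-registration pipelines build on, below is a minimal numpy sketch of classic phase correlation; production chains add sub-pixel refinement and outlier screening, which are omitted.

      import numpy as np

      def phase_correlation_shift(ref, moving):
          """Estimate the integer (dy, dx) translation aligning `moving` to `ref`.

          Classic phase correlation: normalised cross-power spectrum, inverse
          FFT, and the peak location gives the shift.
          """
          cross = np.fft.fft2(moving) * np.conj(np.fft.fft2(ref))
          cross /= np.abs(cross) + 1e-12          # normalise, avoid divide-by-zero
          corr = np.fft.ifft2(cross).real
          dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
          # Map wrapped peaks to negative shifts.
          if dy > ref.shape[0] // 2: dy -= ref.shape[0]
          if dx > ref.shape[1] // 2: dx -= ref.shape[1]
          return dy, dx

      # Synthetic check: shift an image by (5, -3) and recover the offset.
      rng = np.random.default_rng(0)
      img = rng.random((256, 256))
      shifted = np.roll(img, shift=(5, -3), axis=(0, 1))
      print(phase_correlation_shift(img, shifted))   # -> (5, -3)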

  19. Acrylamide levels in Finnish foodstuffs analysed with liquid chromatography tandem mass spectrometry.

    Science.gov (United States)

    Eerola, Susanna; Hollebekkers, Koen; Hallikainen, Anja; Peltonen, Kimmo

    2007-02-01

    A sample clean-up and HPLC method with tandem mass spectrometric detection (LC-MS/MS) was validated for the routine analysis of acrylamide in various foodstuffs. The method proved to be reliable and the detection limit for routine monitoring was sensitive enough for foods and drinks (38 microg/kg for foods and 5 microg/L for drinks). The RSDs for repeatability and day-to-day variation were below 15% in all food matrices. Two hundred and one samples, which included more than 30 different types of food and foods manufactured and prepared in various ways, were analysed. The main types of food analysed were potato and cereal-based foods, processed foods (pizza, minced beef meat, meat balls, chicken nuggets, potato-ham casserole and fried bacon) and coffee. Acrylamide was detected at levels ranging from non-detectable to 1480 microg/kg in solid foods, with crisp bread exhibiting the highest levels. In drinks, the highest value (29 microg/L) was found in regular coffee drinks.
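
    The validation figures quoted (RSD and detection limit) follow standard conventions. The sketch below computes a %RSD from replicates and an ICH-style limit of detection (3.3 x SD / calibration slope); all numbers, including the slope, are hypothetical and not the paper's validation data.

      import numpy as np

      # Hypothetical replicate readings (ug/kg) of a spiked low-level sample.
      replicates = np.array([36.1, 39.4, 37.8, 40.2, 38.5, 36.9])

      mean = replicates.mean()
      sd = replicates.std(ddof=1)
      rsd_pct = 100.0 * sd / mean          # repeatability, as %RSD
      print(f"RSD = {rsd_pct:.1f}%")       # method requirement was < 15%

      # ICH-style LOD convention: 3.3 x SD of a low-level signal divided by
      # the calibration slope (signal per ug/kg); the slope is assumed.
      slope = 1.0
      lod = 3.3 * sd / slope
      print(f"LOD ~ {lod:.1f} ug/kg")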

  20. An automatic system to search, acquire, and analyse chromosomal aberrations obtained using FISH technique

    International Nuclear Information System (INIS)

    Esposito, R.D.

    2003-01-01

    Full text: Chromosomal aberrations (CA) analysis in peripheral blood lymphocytes is useful both in prenatal diagnoses and cancer cytogenetics, as well as in toxicology to determine the biologically significant dose of specific, both physical and chemical, genotoxic agents to which an individual is exposed. A useful cytogenetic technique for CA analysis is Fluorescence-in-situ-Hybridization (FISH), which simplifies the automatic identification and characterisation of aberrations, allowing the visualisation of chromosomes as bright signals on a dark background, and a fast analysis of stable aberrations, which are particularly interesting for late effects. The main limitation of CA analysis is the rarity with which these events occur, and therefore the time necessary to single out a statistically significant number of aberrant cells. In order to address this problem, a prototype system, capable of automatically searching, acquiring, and recognising chromosomal images of samples prepared using FISH, has been developed. The system is able to score a large number of samples in a reasonable time using predefined search criteria. The system is based on the appropriately implemented and characterised automatic metaphase finder Metafer4 (MetaSystems), coupled with a specific module for the acquisition of high magnification metaphase images with any combination of fluorescence filters. These images are then analysed and classified using our software. The prototype is currently capable of separating normal metaphase images from presumed aberrant ones. This system is currently in use in our laboratories both by ourselves and by other researchers not involved in its development, in order to carry out analyses of CAs induced by ionising radiation. The prototype allows simple acquisition and management of large quantities of images and makes it possible to carry out methodological studies, such as the comparison of results obtained by different operators, as well as increasing the

  1. A web-mapping system for real-time visualization of the global terrain

    Science.gov (United States)

    Zhang, Liqiang; Yang, Chongjun; Liu, Donglin; Ren, Yingchao; Rui, Xiaoping

    2005-04-01

    In this paper, we present a web-based 3D global terrain visualization application that provides efficient transmission and visualization of global multiresolution data sets across networks. A client/server architecture is put forward. The paper also reports related research work, such as efficient data compression methods to reduce the physical size of these data sets and accelerate network delivery, streaming transmission for progressively downloading data, and real-time multiresolution terrain surface visualization with high visual quality based on M-band wavelet transforms and a hierarchical triangulation technique. Finally, an experiment is performed using different levels of detail to verify that the system works appropriately.
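
    The paper's M-band wavelet scheme is not reproduced here; as a stand-in, the sketch below uses a dyadic Haar decomposition (via the PyWavelets package) to illustrate the progressive, coarse-to-fine transmission idea: send the coarse approximation first, then refine with detail bands.

      import numpy as np
      import pywt

      # Synthetic 512x512 heightmap standing in for a terrain tile.
      rng = np.random.default_rng(1)
      dem = rng.random((512, 512)).astype(np.float32)

      # Three-level 2-D Haar decomposition (Haar is illustrative only;
      # the paper uses M-band wavelets).
      coeffs = pywt.wavedec2(dem, "haar", level=3)

      print("coarse preview:", coeffs[0].shape)        # 64x64 approximation
      for k in range(1, len(coeffs)):
          # Reconstruct using the approximation plus the first k detail bands,
          # zeroing the finer bands that have "not yet arrived".
          partial = [coeffs[0]] + list(coeffs[1:k + 1])
          partial += [tuple(np.zeros_like(d) for d in lvl) for lvl in coeffs[k + 1:]]
          recon = pywt.waverec2(partial, "haar")
          print(f"after detail level {k}: reconstruction {recon.shape}")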

  2. CO₂ Sequestration Capacity and Associated Aspects of the Most Promising Geologic Formations in the Rocky Mountain Region: Local-Scale Analyses

    Energy Technology Data Exchange (ETDEWEB)

    Laes, Denise; Eisinger, Chris; Morgan, Craig; Rauzi, Steve; Scholle, Dana; Scott, Phyllis; Lee, Si-Yong; Zaluski, Wade; Esser, Richard; Matthews, Vince; McPherson, Brian

    2013-07-30

    The purpose of this report is to provide a summary of individual local-scale CCS site characterization studies conducted in Colorado, New Mexico and Utah. These site-specific characterization analyses were performed as part of the “Characterization of Most Promising Sequestration Formations in the Rocky Mountain Region” (RMCCS) project. The primary objective of these local-scale analyses is to provide a basis for regional-scale characterization efforts within each state. Specifically, limits on time and funding will typically inhibit CCS projects from conducting high-resolution characterization of a state-sized region, but smaller (< 10,000 km²) site analyses are usually possible, and such can provide insight regarding limiting factors for the regional-scale geology. For the RMCCS project, the outcomes of these local-scale studies provide a starting point for future local-scale site characterization efforts in the Rocky Mountain region.

  3. Development and application of model RAIA uranium on-line analyser

    International Nuclear Information System (INIS)

    Dong Yanwu; Song Yufen; Zhu Yaokun; Cong Peiyuan; Cui Songru

    1999-01-01

    The working principle, structure, adjustment and application of the model RAIA on-line analyser are reported. The performance of this instrument is reliable: for an identical sample, the signal fluctuation in continuous monitoring over four months is less than ±1%. An appropriate sample cell length is chosen according to the required measurement range. The precision of the measurement process is better than 1% at 100 g/L U. The detection limit is 50 mg/L. The uranium concentration in the process stream can be displayed automatically and printed at any time. The analyser outputs a 4–20 mA current signal proportional to the uranium concentration, a significant step towards continuous process control and computer management.
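
    The 4–20 mA output follows the standard current-loop convention (4 mA = zero, 20 mA = full scale). A minimal conversion sketch is shown below; the 200 g/L full-scale value is an assumption for illustration, not a specification of the RAIA instrument.

      def current_to_concentration(i_ma, full_scale_g_per_l=200.0):
          """Map a 4-20 mA loop current to a uranium concentration (g/L).

          Assumes the standard linear loop scaling: 4 mA = 0, 20 mA = full
          scale. The 200 g/L full-scale value is illustrative only.
          """
          if not 4.0 <= i_ma <= 20.0:
              raise ValueError("current outside the 4-20 mA loop range")
          return (i_ma - 4.0) / 16.0 * full_scale_g_per_l

      print(current_to_concentration(12.0))   # mid-scale -> 100.0 g/L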

  4. Thermal analyses. Information on the expected baking process; Thermische analyses. Informatie over een te verwachten bakgedrag

    Energy Technology Data Exchange (ETDEWEB)

    Van Wijck, H. [Stichting Technisch Centrum voor de Keramische Industrie TCKI, Velp (Netherlands)

    2009-09-01

    The design process and the drying process for architectural ceramics and pottery partly determine the characteristics of the final product, but the largest changes occur during the baking process. An overview is provided of the different thermal analyses and of how the information from these analyses can predict the baking behaviour to be expected in practice. (mk)

  5. Anti-schistosomal intervention targets identified by lifecycle transcriptomic analyses.

    Directory of Open Access Journals (Sweden)

    Jennifer M Fitzpatrick

    2009-11-01

    Novel methods to identify anthelmintic drug and vaccine targets are urgently needed, especially for those parasite species currently being controlled by singular, often limited strategies. A clearer understanding of the transcriptional components underpinning helminth development will enable identification of exploitable molecules essential for successful parasite/host interactions. Towards this end, we present a combinatorial, bioinformatics-led approach, employing both statistical and network analyses of transcriptomic data, for identifying new immunoprophylactic and therapeutic lead targets to combat schistosomiasis. Utilisation of a Schistosoma mansoni oligonucleotide DNA microarray consisting of 37,632 elements enabled gene expression profiling from 15 distinct parasite lifecycle stages, spanning three unique ecological niches. Statistical approaches of data analysis revealed differential expression of 973 gene products that minimally describe the three major characteristics of schistosome development: asexual processes within intermediate snail hosts, sexual maturation within definitive vertebrate hosts and sexual dimorphism amongst adult male and female worms. Furthermore, we identified a group of 338 constitutively expressed schistosome gene products (including 41 transcripts sharing no sequence similarity outside the Platyhelminthes), which are likely to be essential for schistosome lifecycle progression. While highly informative, statistics-led bioinformatics mining of the transcriptional dataset has limitations, including the inability to identify higher order relationships between differentially expressed transcripts and lifecycle stages. Network analysis, coupled to Gene Ontology enrichment investigations, facilitated a re-examination of the dataset and identified 387 clusters (containing 12,132 gene products) displaying novel examples of developmentally regulated classes (including 294 schistosomula and/or adult transcripts with no

  6. Multiresolution approximation for volatility processes

    NARCIS (Netherlands)

    E. Capobianco (Enrico)

    2002-01-01

    textabstractWe present an application of wavelet techniques to non-stationary time series with the aim of detecting the dependence structure which is typically found to characterize intraday stock index financial returns. It is particularly important to identify what components truly belong to the

  7. Evaluating the effect of sample type on American alligator (Alligator mississippiensis) analyte values in a point-of-care blood analyser

    OpenAIRE

    Hamilton, Matthew T.; Finger, John W.; Winzeler, Megan E.; Tuberville, Tracey D.

    2016-01-01

    The assessment of wildlife health has been enhanced by the ability of point-of-care (POC) blood analysers to provide biochemical analyses of non-domesticated animals in the field. However, environmental limitations (e.g. temperature, atmospheric humidity and rain) and lack of reference values may inhibit researchers from using such a device with certain wildlife species. Evaluating the use of alternative sample types, such as plasma, in a POC device may afford researchers the opportunity to d...

  8. Elastic-Plastic Fracture Mechanics Analyses for Circumferential Part-Through Surface Cracks at the Interface Between elbows and Pipes

    International Nuclear Information System (INIS)

    Song, Tae Kwang; Oh, Chang Kyun; Kim, Yun Jae; Kim, Jong Sung; Jin, Tae Eun

    2007-01-01

    This paper presents plastic limit loads and approximate J-integral estimates for circumferential part-through surface cracks at the interface between elbows and pipes. Based on finite element limit analyses using elastic-perfectly plastic materials, plastic limit moments under in-plane bending are obtained, and it is found that they are similar to those for circumferential part-through surface cracks in the center of an elbow. Based on the present FE results, closed-form limit load solutions are proposed. Welds are not explicitly considered and all materials are assumed to be homogeneous. A method to estimate the elastic-plastic J-integral for circumferential part-through surface cracks at the interface between elbows and straight pipes is then proposed based on the reference stress approach and compared with corresponding solutions for straight pipes.
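
    The reference stress approach mentioned above is commonly written in the following generic form (e.g. in R6-type procedures); this is the textbook estimate, not necessarily the exact expression proposed by the authors:

      \sigma_\mathrm{ref} = L_r\,\sigma_y,\qquad L_r = \frac{M}{M_L},\qquad
      \frac{J}{J_e} = \frac{E\,\varepsilon_\mathrm{ref}}{\sigma_\mathrm{ref}} + \frac{L_r^2\,\sigma_\mathrm{ref}}{2\,E\,\varepsilon_\mathrm{ref}}

    Here M_L is the plastic limit moment, J_e the elastic J-integral, and ε_ref the true strain read from the tensile curve at σ_ref. The limit loads obtained in the paper enter through M_L, which is why accurate closed-form limit load solutions matter for the J estimate.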

  10. Proteomic analyses of host and pathogen responses during bovine mastitis.

    Science.gov (United States)

    Boehmer, Jamie L

    2011-12-01

    The pursuit of biomarkers for use as clinical screening tools, measures for early detection, disease monitoring, and as a means for assessing therapeutic responses has steadily evolved in human and veterinary medicine over the past two decades. Concurrently, advances in mass spectrometry have markedly expanded proteomic capabilities for biomarker discovery. While initial mass spectrometric biomarker discovery endeavors focused primarily on the detection of modulated proteins in human tissues and fluids, recent efforts have shifted to include proteomic analyses of biological samples from food animal species. Mastitis continues to garner attention in veterinary research due mainly to the associated financial losses and food safety concerns over antimicrobial use, but also because there are only a limited number of efficacious mastitis treatment options. Accordingly, comparative proteomic analyses of bovine milk have emerged in recent years. Efforts to prevent agricultural-related food-borne illness have likewise fueled an interest in the proteomic evaluation of several prominent strains of bacteria, including common mastitis pathogens. The interest in establishing biomarkers of the host and pathogen responses during bovine mastitis stems largely from the need to better characterize mechanisms of the disease, to identify reliable biomarkers for use as measures of early detection and drug efficacy, and to uncover potentially novel targets for the development of alternative therapeutics. The following review focuses primarily on comparative proteomic analyses conducted on healthy versus mastitic bovine milk. However, a comparison of the host defense proteome of human and bovine milk and the proteomic analysis of common veterinary pathogens are likewise introduced.

  11. Additional Stress And Fracture Mechanics Analyses Of Pressurized Water Reactor Pressure Vessel Nozzles

    International Nuclear Information System (INIS)

    Walter, Matthew; Yin, Shengjun; Stevens, Gary; Sommerville, Daniel; Palm, Nathan; Heinecke, Carol

    2012-01-01

    In past years, the authors have undertaken various studies of nozzles in both boiling water reactors (BWRs) and pressurized water reactors (PWRs) located in the reactor pressure vessel (RPV) adjacent to the core beltline region. Those studies described stress and fracture mechanics analyses performed to assess various RPV nozzle geometries, which were selected based on their proximity to the core beltline region, i.e., those nozzle configurations that are located close enough to the core region such that they may receive sufficient fluence prior to end-of-life (EOL) to require evaluation of embrittlement as part of the RPV analyses associated with pressure-temperature (P-T) limits. In this paper, additional stress and fracture analyses are summarized that were performed for additional PWR nozzles with the following objectives: To expand the population of PWR nozzle configurations evaluated, which was limited in the previous work to just two nozzles (one inlet and one outlet nozzle). To model and understand differences in stress results obtained for an internal pressure load case using a two-dimensional (2-D) axi-symmetric finite element model (FEM) vs. a three-dimensional (3-D) FEM for these PWR nozzles. In particular, the ovalization (stress concentration) effect of two intersecting cylinders, which is typical of RPV nozzle configurations, was investigated. To investigate the applicability of previously recommended linear elastic fracture mechanics (LEFM) hand solutions for calculating the Mode I stress intensity factor for a postulated nozzle corner crack for pressure loading for these PWR nozzles. These analyses were performed to further expand earlier work completed to support potential revision and refinement of Title 10 to the U.S. Code of Federal Regulations (CFR), Part 50, Appendix G, Fracture Toughness Requirements, and are intended to supplement similar evaluation of nozzles presented at the 2008, 2009, and 2011 Pressure Vessels and Piping (PVP
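
    The LEFM hand solutions referred to share the generic form K_I = F·σ·√(πa). A minimal sketch is given below; the geometry factor F must come from the published solutions for the specific nozzle configuration, and the value used here is a placeholder, not a recommendation.

      import math

      def nozzle_corner_ki(stress_mpa, crack_depth_m, geometry_factor=2.5):
          """Mode I stress intensity factor, K_I = F * sigma * sqrt(pi * a).

          Generic LEFM hand-solution form for a postulated nozzle corner
          crack. F = 2.5 is a placeholder; real values come from published
          handbook solutions for the specific geometry and loading.
          """
          return geometry_factor * stress_mpa * math.sqrt(math.pi * crack_depth_m)

      # Example: 200 MPa pressure-induced stress, 20 mm postulated crack depth.
      print(round(nozzle_corner_ki(200.0, 0.020), 1), "MPa*sqrt(m)")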

  12. Analysis of determination modalities concerning the exposure and emission limits values of chemical and radioactive substances; Analyse des modalites de fixation des valeurs limites d'exposition et d'emission pour les substances chimiques et radioactives

    Energy Technology Data Exchange (ETDEWEB)

    Schieber, C.; Schneider, T

    2002-08-01

    This document presents the generic approach adopted by various organizations for setting public exposure limit values for chemical and radioactive substances, and for setting limit values for emissions of chemical products by certain installations. (A.L.B.)

  13. Material Limitations on the Detection Limit in Refractometry

    OpenAIRE

    Skafte-Pedersen, Peder; Nunes, Pedro S.; Xiao, Sanshui; Mortensen, Niels Asger

    2009-01-01

    We discuss the detection limit for refractometric sensors relying on high-Q optical cavities and show that the ultimate classical detection limit is given by min {Δn} ≳ η, with n + iη being the complex refractive index of the material under refractometric investigation. Taking finite Q factors and filling fractions into account, the detection limit deteriorates. As an example we discuss the fundamental limits of silicon-based high-Q resonators, such as photonic crystal resonators, for sensing in a...

  14. Currie detection limits in gamma-ray spectroscopy

    International Nuclear Information System (INIS)

    Geer, L.-E. de

    2004-01-01

    Currie hypothesis testing is applied to gamma-ray spectral data, where an optimum part of the peak is used and the background is considered well known from nearby channels. With this, the risk of making Type I errors is about 100 times lower than commonly assumed. A program, PeakMaker, produces random peaks with given characteristics on the screen, and calculations are done to facilitate a full use of Poisson statistics in spectrum analyses. Short technical note summary: the Currie decision limit concept applied to spectral data is reinterpreted, which gives better consistency between the selected error risk and the observed error rates. The PeakMaker program is described and the few-count problem is analyzed.
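
    For reference, the classic Currie (1968) formulas that this note re-examines, for a paired-blank measurement with 5% Type I and Type II error rates under Poisson counting statistics, are sketched below; the note's point is that a background known from nearby channels changes the effective Type I risk.

      import math

      def currie_limits(background_counts):
          """Currie decision limit L_C and detection limit L_D (net counts).

          Classic paired-blank formulas: L_C = 2.33*sqrt(B),
          L_D = 2.71 + 4.65*sqrt(B), both at 5% error rates.
          """
          b = float(background_counts)
          l_c = 2.33 * math.sqrt(b)         # declare 'detected' if net > L_C
          l_d = 2.71 + 4.65 * math.sqrt(b)  # net signal detectable 95% of the time
          return l_c, l_d

      lc, ld = currie_limits(400)   # e.g. 400 background counts under the peak
      print(f"L_C = {lc:.1f}, L_D = {ld:.1f} counts")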

  15. Contesting Citizenship: Comparative Analyses

    DEFF Research Database (Denmark)

    Siim, Birte; Squires, Judith

    2007-01-01

    importance of particularized experiences and multiple inequality agendas). These developments shape the way citizenship is both practiced and analysed. Mapping neat citizenship models onto distinct nation-states and evaluating these in relation to formal equality is no longer an adequate approach. Comparative citizenship analyses need to be considered in relation to multiple inequalities and their intersections and to multiple governance and trans-national organising. This, in turn, suggests that comparative citizenship analysis needs to consider new spaces in which struggles for equal citizenship occur...

  16. The Particle-Matrix model: limitations and further improvements needed

    DEFF Research Database (Denmark)

    Cepuritis, Rolands; Jacobsen, Stefan; Spangenberg, Jon

    According to the Particle-Matrix Model (PMM) philosophy, the workability of concrete depends on the properties of two phases and the volumetric ratio between them: the fluid matrix phase (≤ 0.125 mm) and the solid particle phase (> 0.125 mm). The model has been successfully applied to predict concrete workability for different types of concrete, but has also indicated that some potential cases exist when its application is limited. The paper presents recent studies on improving the method by analysing how the PMM one-point flow parameter λQ can be expressed by rheological models (Bingham and Herschel-Bulkley).
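
    The two rheological models referred to are conventionally written as

      \tau = \tau_0 + \mu_\mathrm{pl}\,\dot{\gamma} \quad\text{(Bingham)},\qquad
      \tau = \tau_0 + K\,\dot{\gamma}^{\,n} \quad\text{(Herschel-Bulkley)}

    where τ0 is the yield stress, μ_pl the plastic viscosity, K the consistency and n the flow index; relating the one-point parameter λQ to (τ0, μ_pl) or (τ0, K, n) is the improvement the paper analyses.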

  17. Global limit load solutions for thick-walled cylinders with circumferential cracks under combined internal pressure, axial force and bending moment − Part II: Finite element validation

    International Nuclear Information System (INIS)

    Li, Yuebing; Lei, Yuebao; Gao, Zengliang

    2014-01-01

    Global limit load solutions for thick-walled cylinders with circumferential internal/external surface and through-wall defects under combined positive/negative axial force, positive/negative global bending moment and internal pressure have been developed in Part I of this paper. In this Part II, elastic-perfectly plastic 3-D finite element (FE) analyses are performed for selected cases, covering a wide range of geometries and load combinations, to validate the developed limit load solutions. The results show that these limit load solutions can predict the FE data very well for the cases with shallow, or deep and short, cracks and are conservative. For the cases with very long and deep cracks, the predictions are reasonably accurate and more conservative. -- Highlights: • Elastic-perfectly plastic 3D finite element limit analyses of cylinders. • Thin/thick-walled cylinders with circumferential surface defects. • Combined loading for pressure, end-force and global bending moment. • In total, 1458 cases analysed and tabulated normalised results provided. • Results used to validate the developed limit load solutions in Part I of this paper

  18. Statistical limitations in functional neuroimaging. I. Non-inferential methods and statistical models.

    Science.gov (United States)

    Petersson, K M; Nichols, T E; Poline, J B; Holmes, A P

    1999-01-01

    Functional neuroimaging (FNI) provides experimental access to the intact living brain making it possible to study higher cognitive functions in humans. In this review and in a companion paper in this issue, we discuss some common methods used to analyse FNI data. The emphasis in both papers is on assumptions and limitations of the methods reviewed. There are several methods available to analyse FNI data indicating that none is optimal for all purposes. In order to make optimal use of the methods available it is important to know the limits of applicability. For the interpretation of FNI results it is also important to take into account the assumptions, approximations and inherent limitations of the methods used. This paper gives a brief overview over some non-inferential descriptive methods and common statistical models used in FNI. Issues relating to the complex problem of model selection are discussed. In general, proper model selection is a necessary prerequisite for the validity of the subsequent statistical inference. The non-inferential section describes methods that, combined with inspection of parameter estimates and other simple measures, can aid in the process of model selection and verification of assumptions. The section on statistical models covers approaches to global normalization and some aspects of univariate, multivariate, and Bayesian models. Finally, approaches to functional connectivity and effective connectivity are discussed. In the companion paper we review issues related to signal detection and statistical inference. PMID:10466149

  19. Host Cell Restriction Factors that Limit Influenza A Infection

    Directory of Open Access Journals (Sweden)

    Fernando Villalón-Letelier

    2017-12-01

    Viral infection of different cell types induces a unique spectrum of host defence genes, including interferon-stimulated genes (ISGs) and genes encoding other proteins with antiviral potential. Although hundreds of ISGs have been described, the vast majority have not been functionally characterised. Cellular proteins with putative antiviral activity (hereafter referred to as “restriction factors”) can target various steps in the virus life-cycle. In the context of influenza virus infection, restriction factors have been described that target virus entry, genomic replication, translation and virus release. Genome-wide analyses, in combination with ectopic overexpression and/or gene silencing studies, have accelerated the identification of restriction factors that are active against influenza and other viruses, as well as providing important insights regarding mechanisms of antiviral activity. Herein, we review current knowledge regarding restriction factors that mediate anti-influenza virus activity and consider the viral countermeasures that are known to limit their impact. Moreover, we consider the strengths and limitations of experimental approaches to study restriction factors, discrepancies between in vitro and in vivo studies, and the potential to exploit restriction factors to limit disease caused by influenza and other respiratory viruses.

  20. Linear and kernel methods for multivariate change detection

    DEFF Research Database (Denmark)

    Canty, Morton J.; Nielsen, Allan Aasbjerg

    2012-01-01

    ), as well as maximum autocorrelation factor (MAF) and minimum noise fraction (MNF) analyses of IR-MAD images, both linear and kernel-based (nonlinear), may further enhance change signals relative to no-change background. IDL (Interactive Data Language) implementations of IR-MAD, automatic radiometric...... normalization, and kernel PCA/MAF/MNF transformations are presented that function as transparent and fully integrated extensions of the ENVI remote sensing image analysis environment. The train/test approach to kernel PCA is evaluated against a Hebbian learning procedure. Matlab code is also available...... that allows fast data exploration and experimentation with smaller datasets. New, multiresolution versions of IR-MAD that accelerate convergence and that further reduce no-change background noise are introduced. Computationally expensive matrix diagonalization and kernel image projections are programmed...
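
    The MAD step at the core of IR-MAD is the difference of paired canonical variates. Below is a minimal sketch using scikit-learn's CCA on synthetic bitemporal data; the iterative chi-square reweighting of IR-MAD and the kernel variants discussed in the record are not shown.

      import numpy as np
      from sklearn.cross_decomposition import CCA

      # Synthetic bitemporal data: n pixels x k bands at times 1 and 2.
      rng = np.random.default_rng(0)
      n, k = 5000, 4
      X = rng.normal(size=(n, k))
      Y = 0.9 * X + 0.1 * rng.normal(size=(n, k))   # mostly no-change

      cca = CCA(n_components=k, scale=True)
      U, V = cca.fit_transform(X, Y)

      # MAD variates: differences of paired canonical variates. In IR-MAD
      # these are iteratively reweighted by no-change probabilities derived
      # from the chi-square distribution of the standardised MADs.
      mad = U - V
      chi2 = np.sum((mad / mad.std(axis=0, ddof=1)) ** 2, axis=1)
      print("MAD variances:", mad.var(axis=0).round(4))
      print("largest chi-square values (candidate changes):",
            np.sort(chi2)[-3:].round(1))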

  1. Bifurcation and extinction limit of stretched premixed flames with chain-branching intermediate kinetics and radiative loss

    Science.gov (United States)

    Zhang, Huangwei; Chen, Zheng

    2018-05-01

    Premixed counterflow flames with thermally sensitive intermediate kinetics and radiation heat loss are analysed within the framework of large activation energy. Unlike previous studies considering one-step global reaction, two-step chemistry consisting of a chain branching reaction and a recombination reaction is considered here. The correlation between the flame front location and stretch rate is derived. Based on this correlation, the extinction limit and bifurcation characteristics of the strained premixed flame are studied, and the effects of fuel and radical Lewis numbers as well as radiation heat loss are examined. Different flame regimes and their extinction characteristics can be predicted by the present theory. It is found that fuel Lewis number affects the flame bifurcation qualitatively and quantitatively, whereas radical Lewis number only has a quantitative influence. Stretch rates at the stretch and radiation extinction limits respectively decrease and increase with fuel Lewis number before the flammability limit is reached, while the radical Lewis number shows the opposite tendency. In addition, the relation between the standard flammability limit and the limit derived from the strained near stagnation flame is affected by the fuel Lewis number, but not by the radical Lewis number. Meanwhile, the flammability limit increases with decreased fuel Lewis number, but with increased radical Lewis number. Radical behaviours at flame front corresponding to flame bifurcation and extinction are also analysed in this work. It is shown that radical concentration at the flame front, under extinction stretch rate condition, increases with radical Lewis number but decreases with fuel Lewis number. It decreases with increased radiation loss.

  2. Comparison of an infrared anaesthetic agent analyser (Datex-Ohmeda) with refractometry for measurement of isoflurane, sevoflurane and desflurane concentrations.

    Science.gov (United States)

    Rudolff, Andrea S; Moens, Yves P S; Driessen, Bernd; Ambrisko, Tamas D

    2014-07-01

    To assess agreement between infrared (IR) analysers and a refractometer for measurements of isoflurane, sevoflurane and desflurane concentrations and to demonstrate the effect of customized calibration of IR analysers. In vitro experiment. Six IR anaesthetic monitors (Datex-Ohmeda) and a single portable refractometer (Riken). Both devices were calibrated following the manufacturer's recommendations. Gas samples were collected at common gas outlets of anaesthesia machines. A range of agent concentrations was produced by stepwise changes in dial settings: isoflurane (0-5% in 0.5% increments), sevoflurane (0-8% in 1% increments), or desflurane (0-18% in 2% increments). Oxygen flow was 2 L minute⁻¹. The orders of testing IR analysers, agents and dial settings were randomized. Duplicate measurements were performed at each setting. The entire procedure was repeated 24 hours later. Bland-Altman analysis was performed. Measurements on day-1 were used to yield calibration equations (IR measurements as dependent and refractometry measurements as independent variables), which were used to modify the IR measurements on day-2. Bias ± limits of agreement for isoflurane, sevoflurane and desflurane were 0.2 ± 0.3, 0.1 ± 0.4 and 0.7 ± 0.9 volume%, respectively. There were significant linear relationships between differences and means for all agents. The IR analysers became less accurate at higher gas concentrations. After customized calibration, the bias became almost zero and the limits of agreement became narrower. If similar IR analysers are used in research studies, they need to be calibrated against a reference method using the agent in question at multiple calibration points overlapping the range of interest. © 2013 Association of Veterinary Anaesthetists and the American College of Veterinary Anesthesia and Analgesia.
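
    The "bias ± limits of agreement" figures come from a Bland-Altman analysis. A minimal sketch is shown below; the paired readings are hypothetical, not the study's data.

      import numpy as np

      def bland_altman(method_a, method_b):
          """Bias and 95% limits of agreement between two measurement methods."""
          diffs = np.asarray(method_a) - np.asarray(method_b)
          bias = diffs.mean()
          loa = 1.96 * diffs.std(ddof=1)   # bias +/- loa spans ~95% of differences
          return bias, loa

      # Hypothetical paired isoflurane readings (volume %).
      ir = [1.1, 1.6, 2.2, 2.6, 3.3, 3.9, 4.4, 5.1]   # IR analyser
      rf = [1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0, 4.5]   # refractometer
      bias, loa = bland_altman(ir, rf)
      print(f"bias = {bias:.2f} +/- {loa:.2f} volume%")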

  3. Application of the dose limitation system to the control of carbon-14 releases from heavy-water-moderated reactors

    International Nuclear Information System (INIS)

    Beninson, D.; Gonzalez, A.J.

    1982-01-01

    Heavy-water-moderated reactors produce substantially more carbon-14 than light-water reactors. Applying the principles of the system of dose limitation, the paper presents the rationale used for establishing the release limit for effluents containing this nuclide and for the decisions made regarding the effluent treatment in the third nuclear power station in Argentina. Production of carbon-14 in PHWRs and the release routes are analysed in the light of the different effluent treatment possibilities. An optimization assessment is presented, taking into account effluent treatment and waste management costs, and the collective effective dose commitment due to the releases. The contribution of present carbon-14 releases to future individual doses is also analysed in the light of an upper bound for the contribution, representing a fraction of the individual dose limits. The paper presents the resulting requirements for the effluent treatment regarding carbon-14 and the corresponding regulatory aspects used in Argentina. (author)

  4. Performance limit of daytime radiative cooling in warm humid environment

    Directory of Open Access Journals (Sweden)

    Takahiro Suichi

    2018-05-01

    Daytime radiative cooling potentially offers efficient passive cooling, but the performance is naturally limited by the environment, such as the ambient temperature and humidity. Here, we investigate the performance limit of daytime radiative cooling under warm and humid conditions in Okayama, Japan. A cooling device, consisting of alternating layers of SiO2 and poly(methyl methacrylate) on an Al mirror, is fabricated and characterized to demonstrate a high reflectance for sunlight and selective thermal radiation in the mid-infrared region. In the temperature measurement under sunlight irradiation, the device is 3.4 °C cooler than a bare Al mirror, but 2.8 °C warmer than the ambient of 35 °C. The corresponding numerical analyses reveal that the atmospheric window at λ = 16–25 μm is closed by the high humidity, thereby limiting the net emission power of the device. Our study of the influence of humidity on cooling performance provides a general guideline for how practical passive cooling can be achieved in a warm humid environment.

  5. Determination of limits for smallest detectable and largest subcritical leakage cracks in piping systems

    International Nuclear Information System (INIS)

    Bieselt, R.; Wolf, M.

    1995-01-01

    Nuclear power plant piping systems - those still in their original as-built condition as well as upgraded designs - are subject to safety analysis. In order to limit the consequences of postulated piping failures, the basic safety concept incorporating rupture preclusion criteria is applied to specific high-energy piping systems. Leak-before-break analyses are also conducted within the framework of this concept. These analyses serve to determine the potential consequences of jet and reaction forces due to maximum subcritical leakage cracks while also establishing the minimum crack sizes that would be reliably detectable by the leakage rates resulting from these cracks. The boundary conditions for these analyses are not clearly defined. Using various examples as a basis, this paper presents and discusses how the leak-before-break concept can be applied. (orig.)

  6. Using Nondestructive Portable X-ray Fluorescence Spectrometers on Stone, Ceramics, Metals, and Other Materials in Museums: Advantages and Limitations.

    Science.gov (United States)

    Tykot, Robert H

    2016-01-01

    Elemental analysis is a fundamental method of analysis on archaeological materials to address their overall composition or identify the source of their geological components, yet having access to instrumentation, its often destructive nature, and the time and cost of analyses have limited the number and/or size of archaeological artifacts tested. The development of portable X-ray fluorescence (pXRF) instruments over the past decade, however, has allowed nondestructive analyses to be conducted in museums around the world, on virtually any size artifact, producing data for up to several hundred samples per day. Major issues have been raised, however, about the sensitivity, precision, and accuracy of these devices, and the limitation of performing surface analysis on potentially heterogeneous objects. The advantages and limitations of pXRF are discussed here regarding archaeological studies of obsidian, ceramics, metals, bone, and painted materials. © The Author(s) 2015.

  7. Jet stream winds - Comparisons of analyses with independent aircraft data over Southwest Asia

    Science.gov (United States)

    Tenenbaum, J.

    1991-01-01

    Cruise-level wind data from commercial aircraft are obtained, and these data are compared with operational jet stream analyses over southwest Asia, an area of limited conventional data. Results from an ensemble of 11 cases during January 1989 and individual cases during December 1988-March 1989 are presented. The key results are: (1) European Centre for Medium-Range Weather Forecasts (ECMWF), National Meteorological Center, and United Kingdom Meteorological Office analyses of the subtropical jet in southwest Asia are 11 percent, 17 percent, and 17 percent weaker, respectively, than aircraft observations; (2) analyzed poleward shears range up to 1f (0.00007/s) compared with up to 3f (0.00021/s) in the aircraft observations, where f is the local Coriolis parameter; (3) the ECMWF errors are largest at the base of the jet; (4) the mean ECMWF core location is latitudinally correct but has an rms latitude variance of 1.5 deg; (5) isolated erroneous radiosondes produce unmeteorological structures in the analyzed subtropical jet stream; and (6) the increased utilization of automated aircraft reports is likely to produce a spurious secular increase in the apparent strength of the jets. The magnitude and spatial extent of the errors seen are near the limits of current GCM resolution (100 km) but should be resolvable. The results imply that studies of GCM systematic jet stream wind errors in weather and climate forecasts must be interpreted with caution in this region.

  8. Activation analyses updating the ITER radioactive waste assessment

    International Nuclear Information System (INIS)

    Pampin, R.; Zheng, S.; Lilley, S.; Na, B.C.; Loughlin, M.J.; Taylor, N.P.

    2012-01-01

    Highlights: ► Comprehensive update of the ITER radwaste assessment. ► Latest coupled neutronics and activation methods. ► Type A waste at shutdown decays to TFA within 100 years. ► Most type B waste at shutdown is still type B after 100 years. - Abstract: A study is reported which computes the radiation transport and activation response throughout the ITER machine and updates the ITER radioactive waste assessment using modern 3D models and up-to-date methods. The latest information on component design, maintenance, replacement schedules and materials is adopted. The radwaste classification is revised for all the major components of ITER, as well as several representative port plugs. Results include categorisation snapshots at different decay times, time histories of radiological quantities throughout the machine, and guidelines on interim decay times for components. All plasma-facing materials except tungsten are found to classify as type B due to the transmutation of their main constituents. Major contributors to the IRAS index of all materials are reported. Elemental concentration limits for type A classification of first wall and divertor materials are obtained; for the steels, only a reduction in service lifetime can reduce the waste class. Comparison of total waste amounts with earlier assessments is limited by the fact that analyses of some components are still preliminary; the trend, however, indicates a potential reduction in the total amount of waste if component segregation is demonstrated.

  9. Large scale surveys suggest limited mercury availability in tropical north Queensland (Australia)

    International Nuclear Information System (INIS)

    Jardine, Timothy D.; Halliday, Ian A.; Howley, Christina; Sinnamon, Vivian; Bunn, Stuart E.

    2012-01-01

    Little is known about the threat of mercury (Hg) to consumers in food webs of Australia's wet–dry tropics. This is despite high concentrations in similar biomes elsewhere and a recent history of gold mining that could lead to a high degree of exposure for biota. We analysed Hg in water, sediments, invertebrates and fishes in rivers and estuaries of north Queensland, Australia to determine its availability and biomagnification in food webs. Concentrations in water and sediments were low relative to other regions of Hg concern, with only four of 138 water samples and five of 60 sediment samples above detection limits of 0.1 μg L⁻¹ and 0.1 μg g⁻¹, respectively. Concentrations of Hg in fishes and invertebrates from riverine and wetland food webs were well below international consumption guidelines, including those in piscivorous fishes, likely due to low baseline concentrations and limited rates of biomagnification (average slope of log Hg vs. δ¹⁵N = 0.08). A large fish species of recreational, commercial, and cultural importance (the barramundi, Lates calcarifer), had low concentrations that were below consumption guidelines. Observed variation in Hg concentrations in this species was primarily explained by age and foraging location (floodplain vs. coastal), with floodplain feeders having higher Hg concentrations than those foraging at sea. These analyses suggest that there is a limited threat of Hg exposure for fish-eating consumers in this region. - Highlights: ► Hg concentrations in freshwaters and sediments of north Queensland were low. ► Biomagnification of Hg through riverine food webs was limited. ► Barramundi, a predatory fish, had low concentrations meaning low risk for consumers. ► Floodplain-feeding barramundi had higher Hg concentrations than coastal feeders.
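
    The biomagnification slope quoted (0.08) is the regression coefficient of log-transformed Hg concentration on δ¹⁵N across the food web. A minimal sketch with synthetic data is shown below; slopes around 0.2 are often reported for strongly biomagnifying systems, which is the sense in which 0.08 indicates limited biomagnification.

      import numpy as np

      # Synthetic food-web data: delta15N (per mil) and log10 of Hg (ug/g).
      rng = np.random.default_rng(42)
      d15n = rng.uniform(4.0, 14.0, size=40)
      log_hg = -2.0 + 0.08 * d15n + rng.normal(scale=0.15, size=40)

      # Biomagnification slope = coefficient of log10[Hg] regressed on d15N.
      slope, intercept = np.polyfit(d15n, log_hg, 1)
      print(f"biomagnification slope = {slope:.3f}")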

  10. Analysing and Comparing Encodability Criteria

    Directory of Open Access Journals (Sweden)

    Kirstin Peters

    2015-08-01

    Encodings, or proofs of their absence, are the main way to compare process calculi. To analyse the quality of encodings and to rule out trivial or meaningless encodings, they are augmented with quality criteria. There exists a variety of different criteria, and different variants of criteria, for reasoning in different settings. This leads to incomparable results. Moreover, it is not always clear whether the criteria used to obtain a result in a particular setting do indeed fit that setting. We show how to formally reason about and compare encodability criteria by mapping them onto requirements on a relation between source and target terms that is induced by the encoding function. In particular, we analyse the common criteria full abstraction, operational correspondence, divergence reflection, success sensitiveness, and respect of barbs; e.g. we analyse the exact nature of the simulation relation (coupled simulation versus bisimulation) that is induced by different variants of operational correspondence. This way we reduce the problem of analysing or comparing encodability criteria to the better understood problem of comparing relations on processes.

  11. ATHENA/INTRA analyses for ITER, NSSR-2

    International Nuclear Information System (INIS)

    Shen, Kecheng; Eriksson, John; Sjoeberg, A.

    1999-02-01

    The present report is a summary report including thermal-hydraulic analyses made at Studsvik Eco and Safety AB for the ITER NSSR-2 safety documentation. The objective of the analyses was to reveal the safety characteristics of various heat transfer systems at specified operating conditions and to indicate the conditions for which there were obvious risks of jeopardising the structural integrity of the coolant systems. In the latter case some analyses were also made to indicate conceivable mitigating measures for maintaining the integrity. The analyses were primarily concerned with the First Wall and Divertor heat transfer systems. Several enveloping transients were analysed with associated specific flow and heat load boundary conditions. The analyses were performed with the ATHENA and INTRA codes

  12. ATHENA/INTRA analyses for ITER, NSSR-2

    Energy Technology Data Exchange (ETDEWEB)

    Shen, Kecheng; Eriksson, John; Sjoeberg, A

    1999-02-01

    The present report is a summary report including thermal-hydraulic analyses made at Studsvik Eco and Safety AB for the ITER NSSR-2 safety documentation. The objective of the analyses was to reveal the safety characteristics of various heat transfer systems at specified operating conditions and to indicate the conditions for which there were obvious risks of jeopardising the structural integrity of the coolant systems. In the latter case some analyses were also made to indicate conceivable mitigating measures for maintaining the integrity. The analyses were primarily concerned with the First Wall and Divertor heat transfer systems. Several enveloping transients were analysed with associated specific flow and heat load boundary conditions. The analyses were performed with the ATHENA and INTRA codes 8 refs, 14 figs, 15 tabs

  13. Hanford study: a review of its limitations and controversial conclusions

    International Nuclear Information System (INIS)

    Gilbert, E.S.

    1984-10-01

    The Hanford data set has attracted attention primarily because of analyses conducted by Mancuso, Stewart, and Kneale (MSK). These investigators claim that the Hanford data provide evidence that our current estimates of cancer mortality resulting from radiation exposure are too low, and advocate replacing estimates based on populations exposed at relatively high doses (such as the Japanese atom bomb survivors) with estimates based on the Hanford data. In this paper, it is shown that the only evidence of association of radiation exposure and mortality provided by the Hanford data is a small excess of multiple myeloma, and that this data set is not adequate for reliable risk estimation. It is demonstrated that confidence limits for risk estimates are very wide, and that the data are not adequate to differentiate among models. The more recent MSK analyses, which claim to provide adequate models and risk estimates, are critiqued. 18 references, 1 table

  14. Graphical Geometric and Learning/Optimization-Based Methods in Statistical Signal and Image Processing Object Recognition and Data Fusion

    National Research Council Canada - National Science Library

    Willsky, Alan S

    2008-01-01

    ...: (a) the use of graphical, hierarchical, and multiresolution representations for the development of statistical modeling methodologies for complex phenomena and for the construction of scalable algorithms...

  15. Analyses of demand response in Denmark [Electricity market

    Energy Technology Data Exchange (ETDEWEB)

    Moeller Andersen, F.; Grenaa Jensen, S.; Larsen, Helge V.; Meibom, P.; Ravn, H.; Skytte, K.; Togeby, M.

    2006-10-15

    Due to characteristics of the power system, costs of producing electricity vary considerably over short time intervals. Yet, many consumers do not experience corresponding variations in the price they pay for consuming electricity. The topic of this report is: are consumers willing and able to respond to short-term variations in electricity prices, and if so, what is the social benefit of consumers doing so? Taking Denmark and the Nord Pool market as a case, the report focuses on what is known as short-term consumer flexibility or demand response in the electricity market. With focus on market efficiency, efficient allocation of resources and security of supply, the report describes demand response from a micro-economic perspective and provides empirical observations and case studies. The report aims at evaluating benefits from demand response. However, only elements contributing to an overall value are presented. In addition, the analyses are limited to benefits for society, and costs of obtaining demand response are not considered. (au)

  16. Meta-Analyses of Predictors of Hope in Adolescents.

    Science.gov (United States)

    Yarcheski, Adela; Mahon, Noreen E

    2016-03-01

    The purposes of this study were to identify predictors of hope in the literature reviewed, to use meta-analysis to determine the mean effect size (ES) across studies between each predictor and hope, and to examine four moderators on each predictor-hope relationship. Using preferred reporting items for systematic reviews and meta-analyses (PRISMA) guidelines for the literature reviewed, 77 published studies or doctoral dissertations completed between 1990 and 2012 met the inclusion criteria. Eleven predictors of hope were identified and each predictor in relation to hope was subjected to meta-analysis. Five predictors (positive affect, life satisfaction, optimism, self-esteem, and social support) of hope had large mean ESs, 1 predictor (depression) had a medium ES, 4 predictors (negative affect, stress, academic achievement, and violence) had small ESs, and 1 predictor (gender) had a trivial ES. Findings are interpreted for the 11 predictors in relation to hope. Limitations and conclusions are addressed; future studies are recommended. © The Author(s) 2014.

  17. Experiment-specific analyses in support of code development

    International Nuclear Information System (INIS)

    Ott, L.J.

    1990-01-01

    Experiment-specific models have been developed since 1986 by Oak Ridge National Laboratory Boiling Water Reactor (BWR) severe accident analysis programs for the purpose of BWR experimental planning and optimum interpretation of experimental results. These experiment-specific models have been applied to large integral tests (ergo, experiments) which start from an initial undamaged core state. The tests performed to date in BWR geometry have had significantly different-from-prototypic boundary and experimental conditions because of either normal facility limitations or specific experimental constraints. These experiments (ACRR: DF-4, NRU: FLHT-6, and CORA) were designed to obtain specific phenomenological information such as the degradation and interaction of prototypic components and the effects on melt progression of control-blade materials and channel boxes. Applications of ORNL models specific to the ACRR DF-4 and KfK CORA-16 experiments are discussed and significant findings from the experimental analyses are presented. 32 refs., 16 figs

  18. An umbrella review of meta-analyses of interventions to improve maternal outcomes for teen mothers.

    Science.gov (United States)

    SmithBattle, Lee; Loman, Deborah G; Chantamit-O-Pas, Chutima; Schneider, Joanne Kraenzle

    2017-08-01

    The purpose of this study was to perform an umbrella review of meta-analyses of intervention studies designed to improve outcomes of pregnant or parenting teenagers. An extensive search retrieved nine reports which provided 21 meta-analyses. Data were extracted by two reviewers. Methodological quality was assessed using the AMSTAR instrument. Most effect sizes were small, but high-quality studies showed significant outcomes for reduced low birth weight (RR = 0.60), repeat pregnancies/births (OR = 0.47-0.62), maternal education (OR = 1.21-1.83), and maternal employment (OR = 1.26). Several parenting outcomes (parent-child teaching interaction post-intervention [SMD = -0.91] and at follow-up [SMD = -1.07], and parent-child relationship post-intervention [SMD = -0.71] and at follow-up [SMD = -0.90]) were significant, but sample sizes were very small. Many reports did not include moderator analyses. Behavioral interventions offer limited resources and occur too late to mitigate the educational and social disparities that precede teen pregnancy. Future intervention research and policies that redress the social determinants of early childbearing are recommended. Copyright © 2017 The Foundation for Professionals in Services for Adolescents. Published by Elsevier Ltd. All rights reserved.

  19. Beta and current limits in the Doublet III tokamak

    International Nuclear Information System (INIS)

    Strait, E.J.; Chu, M.S.; Jahns, G.L.

    1986-04-01

    Neutral-beam heated discharges in Doublet III exhibit an operational beta limit, β_T(%) ≤ 3.5 I(MA)/[a(m)B(T)], in good agreement with several theoretical predictions for ideal external kink or ballooning modes. These theories predict that the β limit has no explicit dependence on plasma shape (for nominal dee shapes). This aspect of the theory was confirmed in Doublet III by varying the elongation (κ) from 1.0 to 1.6 and the triangularity (δ) from -0.1 to 0.9 and finding in all cases the same β limit. The maximum achievable beta thus depends on the minimum achievable value of the safety factor q. In Doublet III, the operational current limit is given by q ≥ 1.7 for limiter-defined discharges and q ≥ 2.7 for separatrix-defined discharges. Operation with q ≈ 2 was achieved for 1.0 ≤ κ ≤ 1.6. Both β and q limits are characterized by major disruptions which usually terminate the discharge. In both cases, the disruptions often have a precursor oscillation with toroidal mode number n = 1, poloidal mode number m = 2 or 3, a frequency of zero to a few kHz, and a growth time on the order of a millisecond. These observations suggest that the proximate cause of these disruptions is a kink or tearing mode, pressure-driven in one case and current-driven in the other. Theoretical analyses of discharges at both limits will be compared. Modes with a high toroidal mode number, 3 ≤ n ≤ 5, and ballooning character have been observed near the β_T limit. These modes do not appear to be closely connected with the disruptions. Heating efficiency, ΔW/ΔP, remains constant up to the limiting disruption. Fishbone modes appear to be mainly a feature of high-β_p operation and not connected to the β_T limit.
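
    The operational limit quoted above is straightforward to evaluate. A small sketch, with hypothetical machine parameters rather than actual Doublet III values:

    ```python
    def beta_limit_percent(current_ma: float, minor_radius_m: float, field_t: float) -> float:
        """Operational beta limit from the abstract: beta_T(%) <= 3.5 I(MA) / (a(m) B(T))."""
        return 3.5 * current_ma / (minor_radius_m * field_t)

    # Hypothetical parameters: 1 MA plasma current, 0.44 m minor radius, 2 T field.
    print(f"beta_T limit ~ {beta_limit_percent(1.0, 0.44, 2.0):.2f} %")
    ```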

  20. Thermodynamic limit for coherence-limited solar power conversion

    Science.gov (United States)

    Mashaal, Heylal; Gordon, Jeffrey M.

    2014-09-01

    The spatial coherence of solar beam radiation is a key constraint in solar rectenna conversion. Here, we present a derivation of the thermodynamic limit for coherence-limited solar power conversion - an expansion of Landsberg's elegant basic bound, originally limited to incoherent converters at maximum flux concentration. First, we generalize Landsberg's work to arbitrary concentration and angular confinement. Then we derive how the values are further lowered for coherence-limited converters. The results do not depend on a particular conversion strategy. As such, they pertain to systems that span geometric to physical optics, as well as classical to quantum physics. Our findings indicate promising potential for solar rectenna conversion.
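
    For context, the basic Landsberg bound that the paper generalizes can be computed directly; a sketch with the usual illustrative temperatures (6000 K source, 300 K converter), not the coherence-limited result derived in the paper:

    ```python
    def landsberg_efficiency(t_converter_k: float = 300.0, t_sun_k: float = 6000.0) -> float:
        """Landsberg bound for incoherent conversion at maximum flux concentration."""
        x = t_converter_k / t_sun_k
        return 1.0 - (4.0 / 3.0) * x + (1.0 / 3.0) * x ** 4

    print(f"Landsberg limit ~ {landsberg_efficiency():.4f}")   # ~0.9333
    ```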

  1. Tracing common origins of Genomic Islands in prokaryotes based on genome signature analyses.

    Science.gov (United States)

    van Passel, Mark Wj

    2011-09-01

    Horizontal gene transfer constitutes a powerful and innovative force in evolution, but often little is known about the actual origins of transferred genes. Sequence alignments are generally of limited use in tracking the original donor, since still only a small fraction of the total genetic diversity is thought to be uncovered. Alternatively, approaches based on similarities in the genome specific relative oligonucleotide frequencies do not require alignments. Even though the exact origins of horizontally transferred genes may still not be established using these compositional analyses, it does suggest that compositionally very similar regions are likely to have had a common origin. These analyses have shown that up to a third of large acquired gene clusters that reside in the same genome are compositionally very similar, indicative of a shared origin. This brings us closer to uncovering the original donors of horizontally transferred genes, and could help in elucidating possible regulatory interactions between previously unlinked sequences.
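
    A minimal sketch of the kind of genome-signature comparison described above, based on relative tetranucleotide frequencies; the input sequences and the distance measure are illustrative assumptions, not the author's exact method:

    ```python
    from itertools import product

    def kmer_freqs(seq: str, k: int = 4) -> dict:
        """Relative k-mer frequencies over the ACGT alphabet (the genome signature)."""
        counts = {"".join(p): 0 for p in product("ACGT", repeat=k)}
        for i in range(len(seq) - k + 1):
            kmer = seq[i:i + k]
            if kmer in counts:               # skip k-mers with ambiguous bases
                counts[kmer] += 1
        total = max(sum(counts.values()), 1)
        return {kmer: n / total for kmer, n in counts.items()}

    def signature_distance(seq_a: str, seq_b: str, k: int = 4) -> float:
        """Mean absolute difference between two signatures; small = similar composition."""
        fa, fb = kmer_freqs(seq_a, k), kmer_freqs(seq_b, k)
        return sum(abs(fa[kmer] - fb[kmer]) for kmer in fa) / len(fa)
    ```

    Compositionally similar genomic islands yield a small signature distance, which is the kind of evidence read above as indicating a shared origin.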

  2. Calculational criticality analyses of 10- and 20-MW UF6 freezer/sublimer vessels

    International Nuclear Information System (INIS)

    Jordan, W.C.

    1993-02-01

    Calculational criticality analyses have been performed for 10- and 20-MW UF₆ freezer/sublimer vessels. The freezer/sublimers have been analyzed over a range of conditions that encompass normal operation and abnormal conditions. The effects of HF moderation of the UF₆ in each vessel have been considered for uranium enriched between 2 and 5 wt % ²³⁵U. The results indicate that the nuclearly safe enrichments originally established for the operation of a 10-MW freezer/sublimer, based on a hydrogen-to-uranium moderation ratio of 0.33, are acceptable. If strict moderation control can be demonstrated for hydrogen-to-uranium moderation ratios that are less than 0.33, then the enrichment limits for the 10-MW freezer/sublimer may be increased slightly. The calculations performed also allow safe enrichment limits to be established for a 20-MW freezer/sublimer under moderation control

  3. Origins Space Telescope: Breaking the Confusion Limit

    Science.gov (United States)

    Wright, Edward L.; Origins Space Telescope Science and Technology Definition Team

    2018-01-01

    The Origins Space Telescope (OST) is the mission concept for the Far-Infrared Surveyor, one of the four science and technology definition studies of NASA Headquarters for the 2020 Astronomy and Astrophysics Decadal Survey. Origins will enable flagship-quality general observing programs led by the astronomical community in the 2030s. OST will have a background-limited sensitivity for a background 27,000 times lower than the Herschel background caused by thermal emission from Herschel's warm telescope. For continuum observations the confusion limit in a diffraction-limited survey can be reached in very short integration times at longer far-infrared wavelengths. But the confusion limit can be pierced for both the nearest and the farthest objects to be observed by OST. For the outer Solar System, the targets' motion across the sky will provide a clear signature in surveys repeated after an interval of days to months. This will provide a size-frequency distribution of TNOs that is not biased toward high-albedo objects. For the distant Universe, the first galaxies and the first metals will provide a third dimension of spectral information that can be measured with a long-slit, medium-resolution spectrograph. This will allow 3D mapping to measure source densities as a function of redshift. The continuum shape associated with sources at different redshifts can be derived from correlation analyses of these 3D maps. Fairly large sky areas can be scanned by moving the spacecraft at a constant angular rate perpendicular to the orientation of the long slit of the spectrograph, avoiding the high overhead of step-and-stare surveying with a large space observatory. We welcome you to contact the Science and Technology Definition Team (STDT) with your science needs and ideas by emailing us at ost_info@lists.ipac.caltech.edu

  4. The radiation analyses of ITER lower ports

    International Nuclear Information System (INIS)

    Petrizzi, L.; Brolatti, G.; Martin, A.; Loughlin, M.; Moro, F.; Villari, R.

    2010-01-01

    The ITER Vacuum Vessel has upper, equatorial, and lower ports used for equipment installation, diagnostics, heating and current drive systems, cryo-vacuum pumping, and access inside the vessel for maintenance. At the level of the divertor, the nine lower ports for remote handling, cryo-vacuum pumping and diagnostics are inclined downwards and toroidally located every 40°. The cryopump port additionally has a branch to accommodate a second cryopump. The ports, as openings in the Vacuum Vessel, permit radiation streaming out of the vessel, which affects the heating of components in the outer regions of the machine inside and outside the ports. Safety concerns are also raised with respect to the dose after shutdown at the cryostat behind the ports: in such zones the radiation dose level must be kept below the regulatory limit to allow personnel access for maintenance purposes. Neutronic analyses have been required to qualify the ITER project with respect to the lower ports. A 3-D model was used to take into account full details of the ports and the lower machine surroundings. MCNP5 version 1.40 has been used with the FENDL 2.1 nuclear data library. The ITER 40° model distributed by the ITER Organization was developed in the lower part to include the relevant details. The results of a first analysis, focused on the cryopump system only, were recently published. In this paper more complete data on the cryopump port and analyses for the remote handling port and the diagnostic rack are presented; the results of both analyses give a complete map of the radiation loads in the outer divertor ports. Nuclear heating, dpa, tritium production, and dose rates after shutdown are provided and the implications for the design are discussed.

  5. Biomass feedstock analyses

    Energy Technology Data Exchange (ETDEWEB)

    Wilen, C.; Moilanen, A.; Kurkela, E. [VTT Energy, Espoo (Finland). Energy Production Technologies

    1996-12-31

    The overall objectives of the project "Feasibility of electricity production from biomass by pressurized gasification systems" within the EC Research Programme JOULE II were to evaluate the potential of advanced power production systems based on biomass gasification and to study the technical and economic feasibility of these new processes with different types of biomass feedstocks. This report was prepared as part of this R and D project. The objectives of this task were to perform fuel analyses of potential woody and herbaceous biomasses with specific regard to the gasification properties of the selected feedstocks. The analyses of 15 Scandinavian and European biomass feedstocks included density, proximate and ultimate analyses, trace compounds, ash composition and fusion behaviour in oxidizing and reducing atmospheres. The wood-derived fuels, such as whole-tree chips, forest residues, bark and to some extent willow, can be expected to have good gasification properties. Difficulties caused by ash fusion and sintering in straw combustion and gasification are generally known. The ash and alkali metal contents of the European biomasses harvested in Italy resembled those of the Nordic straws, and it is expected that they behave to a great extent like straw in gasification. No direct relation between the ash fusion behaviour (determined according to the standard method) and, for instance, the alkali metal content was found in the laboratory determinations. A more profound characterisation of the fuels would require gasification experiments in a thermobalance and a PDU (Process Development Unit) rig. (orig.) (10 refs.)

  6. Trace element analysis by EPMA in geosciences: detection limit, precision and accuracy

    Science.gov (United States)

    Batanova, V. G.; Sobolev, A. V.; Magnin, V.

    2018-01-01

    Use of the electron probe microanalyser (EPMA) for trace element analysis has increased over the last decade, mainly because of improved stability of spectrometers and the electron column when operated at high probe current; development of new large-area crystal monochromators and ultra-high count rate spectrometers; full integration of energy-dispersive/wavelength-dispersive X-ray spectrometry (EDS/WDS) signals; and the development of powerful software packages. For phases that are stable under a dense electron beam, the detection limit and precision can be decreased to the ppm level by using high acceleration voltage and beam current combined with long counting time. Data on 10 elements (Na, Al, P, Ca, Ti, Cr, Mn, Co, Ni, Zn) in olivine obtained on a JEOL JXA-8230 microprobe with tungsten filament show that the detection limit decreases in proportion to the inverse square root of counting time and probe current. For all elements equal to or heavier than phosphorus (Z = 15), the detection limit decreases with increasing accelerating voltage. The analytical precision for minor and trace elements analysed in olivine at 25 kV accelerating voltage and 900 nA beam current is 4-18 ppm (2 standard deviations of repeated measurements of the olivine reference sample) and is similar to the detection limit of the corresponding elements. Analysing trace elements accurately requires careful estimation of background, and consideration of sample damage under the beam and of secondary fluorescence from phase boundaries. The development and use of matrix reference samples with well-characterised trace elements of interest is important for monitoring and improving accuracy. An evaluation of the accuracy of trace element analyses in olivine has been made by comparing EPMA data for new reference samples with data obtained by different in-situ and bulk analytical methods in six different laboratories worldwide. For all elements, the measured concentrations in the olivine reference sample
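
    The stated scaling can be transcribed directly; a sketch in which the reference detection limit and operating conditions are hypothetical:

    ```python
    import math

    def scaled_detection_limit(dl_ref_ppm, t_ref_s, i_ref_na, t_s, i_na):
        """Scale a reference detection limit by the inverse square root of (time x current)."""
        return dl_ref_ppm * math.sqrt((t_ref_s * i_ref_na) / (t_s * i_na))

    # Quadrupling the product of counting time and probe current halves the limit:
    print(scaled_detection_limit(10.0, 100.0, 900.0, 200.0, 1800.0))   # -> 5.0 ppm
    ```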

  7. Anisotropic multi-scale fluid registration: evaluation in magnetic resonance breast imaging

    International Nuclear Information System (INIS)

    Crum, W R; Tanner, C; Hawkes, D J

    2005-01-01

    Registration using models of compressible viscous fluids has not found the general application of some other techniques (e.g., free-form-deformation (FFD)) despite its ability to model large diffeomorphic deformations. We report on a multi-resolution fluid registration algorithm which improves on previous work by (a) directly solving the Navier-Stokes equation at the resolution of the images (b) accommodating image sampling anisotropy using semi-coarsening and implicit smoothing in a full multi-grid (FMG) solver and (c) exploiting the inherent multi-resolution nature of FMG to implement a multi-scale approach. Evaluation is on five magnetic resonance (MR) breast images subject to six biomechanical deformation fields over 11 multi-resolution schemes. Quantitative assessment is by tissue overlaps and target registration errors and by registering using the known correspondences rather than image features to validate the fluid model. Context is given by comparison with a validated FFD algorithm and by application to images of volunteers subjected to large applied deformation. The results show that fluid registration of 3D breast MR images to sub-voxel accuracy is possible in minutes on a 1.6 GHz Linux-based Athlon processor with coarse solutions obtainable in a few tens of seconds. Accuracy and computation time are comparable to FFD techniques validated for this application
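
    A schematic of the coarse-to-fine scheduling that multi-resolution registration relies on, reduced to translation-only alignment for brevity; this is an illustrative skeleton under simplifying assumptions, not the fluid (Navier-Stokes) solver of the paper:

    ```python
    import numpy as np
    from scipy.ndimage import shift, zoom

    def best_shift(fixed, moving, search=2):
        """Brute-force integer shift (within +/- search pixels) minimizing SSD."""
        best, best_err = np.zeros(2), np.inf
        for dy in range(-search, search + 1):
            for dx in range(-search, search + 1):
                err = np.sum((fixed - shift(moving, (dy, dx), order=1)) ** 2)
                if err < best_err:
                    best, best_err = np.array([dy, dx], float), err
        return best

    def multires_align(fixed, moving, levels=3):
        """Estimate at the coarsest pyramid level first, then refine upward."""
        offset = np.zeros(2)                       # shift in full-resolution pixels
        for level in range(levels - 1, -1, -1):    # coarsest level first
            s = 1.0 / (2 ** level)
            f = zoom(fixed, s, order=1)
            m = shift(zoom(moving, s, order=1), offset * s, order=1)
            offset += best_shift(f, m) / s         # residual, rescaled to full resolution
        return offset
    ```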

  8. Investigation on Capacitor Switching Transient Limiter with a Three phase Variable Resistance

    DEFF Research Database (Denmark)

    Naderi, Seyed Behzad; Jafari, Mehdi; Zandnia, Amir

    2017-01-01

    In this paper, a capacitor switching transient limiter based on a three-phase variable resistance is proposed. The proposed structure eliminates the capacitor switching transient current and over-voltage by introducing a variable resistance into the current path with its special switching pattern ... transients on the capacitor after bypassing. Analytic analyses of this structure in transient cases are presented in detail, and simulations are performed in MATLAB to prove its effectiveness.

  9. Risico-analyse brandstofpontons [Risk analysis of fuel pontoons]

    NARCIS (Netherlands)

    Uijt de Haag P; Post J; LSO

    2001-01-01

    To determine the risks posed by fuel pontoons in a marina, a generic risk analysis was carried out. A reference system was defined, consisting of a concrete fuel pontoon with a relatively large capacity and throughput. The pontoon is assumed to be located in a

  10. Sensitivity of the direct stop pair production analyses in phenomenological MSSM simplified models with the ATLAS detectors

    CERN Document Server

    Snyder, Ian Michael; The ATLAS collaboration

    2018-01-01

    The sensitivity of the searches for the direct pair production of stops has often been evaluated in simple SUSY scenarios, where only a limited set of supersymmetric particles takes part in the stop decay. In this talk, the interpretation of the analyses requiring zero, one or two leptons in the final state in terms of simple but well-motivated MSSM scenarios will be discussed.

  11. Spark and HPC for High Energy Physics Data Analyses

    Energy Technology Data Exchange (ETDEWEB)

    Sehrish, Saba; Kowalkowski, Jim; Paterno, Marc

    2017-05-01

    A full High Energy Physics (HEP) data analysis is divided into multiple data reduction phases. Processing within these phases is extremely time consuming, therefore intermediate results are stored in files held in mass storage systems and referenced as part of large datasets. This processing model limits what can be done with interactive data analytics. Growth in the size and complexity of experimental datasets, along with emerging big data tools, is beginning to change the traditional ways of doing data analysis. Use of big data tools for HEP analysis looks promising, mainly because extremely large HEP datasets can be represented and held in memory across a system, and accessed interactively by encoding an analysis using high-level programming abstractions. The mainstream tools, however, are not designed for scientific computing or for exploiting the available HPC platform features. We use an example from the Compact Muon Solenoid (CMS) experiment at the Large Hadron Collider (LHC) in Geneva, Switzerland. The LHC is the highest-energy particle collider in the world. Our use case focuses on searching for new types of elementary particles explaining Dark Matter in the universe. We use HDF5 as our input data format, and Spark to implement the use case. We show the benefits and limitations of using Spark with HDF5 on Edison at NERSC.
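
    A sketch of the access pattern described above: partition an HDF5 dataset by index ranges and let Spark executors read and filter their own slices in parallel. The file name, dataset path and mass window are hypothetical stand-ins, not the actual CMS analysis:

    ```python
    import h5py
    from pyspark import SparkContext

    sc = SparkContext(appName="hep-hdf5-sketch")

    def read_chunk(bounds, path="events.h5"):
        """Each executor opens the file and reads only its own slice."""
        start, stop = bounds
        with h5py.File(path, "r") as f:
            return f["/events/dimuon_mass"][start:stop].tolist()

    n_events, n_chunks = 1_000_000, 64
    step = n_events // n_chunks
    chunks = [(i * step, (i + 1) * step) for i in range(n_chunks)]

    masses = sc.parallelize(chunks, n_chunks).flatMap(read_chunk).cache()
    print(masses.filter(lambda m: 60.0 < m < 120.0).count())   # example Z-window cut
    ```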

  12. Grey literature in meta-analyses.

    Science.gov (United States)

    Conn, Vicki S; Valentine, Jeffrey C; Cooper, Harris M; Rantz, Marilyn J

    2003-01-01

    In meta-analysis, researchers combine the results of individual studies to arrive at cumulative conclusions. Meta-analysts sometimes include "grey literature" in their evidential base, which includes unpublished studies and studies published outside widely available journals. Because grey literature is a source of data that might not employ peer review, critics have questioned the validity of its data and the results of meta-analyses that include it. To examine evidence regarding whether grey literature should be included in meta-analyses and strategies to manage grey literature in quantitative synthesis. This article reviews evidence on whether the results of studies published in peer-reviewed journals are representative of results from broader samplings of research on a topic as a rationale for inclusion of grey literature. Strategies to enhance access to grey literature are addressed. The most consistent and robust difference between published and grey literature is that published research is more likely to contain results that are statistically significant. Effect size estimates of published research are about one-third larger than those of unpublished studies. Unfunded and small sample studies are less likely to be published. Yet, importantly, methodological rigor does not differ between published and grey literature. Meta-analyses that exclude grey literature likely (a) over-represent studies with statistically significant findings, (b) inflate effect size estimates, and (c) provide less precise effect size estimates than meta-analyses including grey literature. Meta-analyses should include grey literature to fully reflect the existing evidential base and should assess the impact of methodological variations through moderator analysis.

  13. Limitations of Using Microsoft Excel Version 2016 (MS Excel 2016) for Statistical Analysis for Medical Research.

    Science.gov (United States)

    Tanavalee, Chotetawan; Luksanapruksa, Panya; Singhatanadgige, Weerasak

    2016-06-01

    Microsoft Excel (MS Excel) is a commonly used program for data collection and statistical analysis in biomedical research. However, this program has many limitations, including fewer functions available for analysis and a limited number of total cells compared with dedicated statistical programs. MS Excel cannot complete analyses with blank cells, and cells must be selected manually for analysis. In addition, it requires multiple steps of data transformation and formulas to plot survival analysis graphs, among others. The Megastat add-on program, which will soon be supported by MS Excel 2016, would eliminate some limitations of using statistical formulas within MS Excel.

  14. Penser aux/les limites de nos limites [Thinking about the limits of our limits]

    Directory of Open Access Journals (Sweden)

    Jacques Lévy

    2010-12-01

    Full Text Available The word "boundary" has been very successful in its literal sense but even more so as a metaphor for a multitude of realities involving limits, that is, our propensity to divide the world into separable objects. However, one can observe considerable indeterminacy between concept and metaphor, and a too-easy use of mixtures of the two. It is therefore necessary first to admit that materiality is only one of the components of our world, while the immaterial is not the unreal, the simulated or the metaphoric. After a detour through a theory of limits and its limits, and a distinction between the topographic (continuous) and the topologic (discontinuous) applied to the interior and to the limits of an area, two examples are developed which aim to show that, where boundaries are found, it is not necessarily where we expect them, and that a fair appreciation of the place of boundaries requires taking into account many more considerations than the mere deliberate and brutal restriction of the crossing of an imaginary line traced on the ground.

  15. Niche conservatism and dispersal limitation cause large-scale phylogenetic structure in the New World palm flora

    DEFF Research Database (Denmark)

    Eiserhardt, Wolf L.; Svenning, J.-C.; Baker, William J.

    How fast similarity decays after speciation depends on the rates of niche evolution and dispersal. If dispersal is slow compared to the tempo of lineage diversification, distributions change little during clade diversification. Phylogenetic niche conservatism precludes distributional shifts in environmental space, and, to the degree that distributions are limited by the niche, also in geographic space. Using phylogenetic turnover methods, we simultaneously analysed the distributions of all New World palms (n = 547) and inferred to which degree phylogenetic niche conservatism and dispersal limitation, respectively, caused

  16. Effect of nitrite, limited reactive settler and plant design configuration on the predicted performance of simultaneous C/N/P removal WWTPs

    DEFF Research Database (Denmark)

    Guerrero, Javier; Flores-Alsina, Xavier; Guisasola, Albert

    2013-01-01

    as the reference model (A1). The second case (A2) adds nitrite as a new state variable, describing nitrification and denitrification as two-step processes. The third set of models (A3 and A4) considers different reactive settler types (diffusion-limited/non-limited). This study analyses the importance

  17. Cost-Benefit Analyses of Transportation Investments

    DEFF Research Database (Denmark)

    Næss, Petter

    2006-01-01

    This paper discusses the practice of cost-benefit analyses of transportation infrastructure investment projects from the meta-theoretical perspective of critical realism. Such analyses are based on a number of untenable ontological assumptions about social value, human nature and the natural environment ... willingness-to-pay investigations. Accepting the ontological and epistemological assumptions of cost-benefit analysis involves an implicit acceptance of the ethical and political values favoured by these assumptions. Cost-benefit analyses of transportation investment projects tend to neglect long-term environmental consequences

  18. Automatic incrementalization of Prolog based static analyses

    DEFF Research Database (Denmark)

    Eichberg, Michael; Kahl, Matthias; Saha, Diptikalyan

    2007-01-01

    Modern development environments integrate various static analyses into the build process. Analyses that analyze the whole project whenever the project changes are impractical in this context. We present an approach to automatic incrementalization of analyses that are specified as tabled logic programs.

  19. ICRF power limitation relation to density limit in ASDEX

    International Nuclear Information System (INIS)

    Ryter, F.

    1992-01-01

    Launching high ICRF power into ASDEX plasmas required good antenna-plasma coupling. This could be achieved by sufficient electron density in front of the antennas, i.e. a small antenna-plasma distance (1-2 cm) and a moderate to high line-averaged electron density compared to the density window in ASDEX. These are conditions eventually close to the density limit. ICRF-heated discharges terminated by plasma disruptions caused by the RF pulse limited the maximum RF power that could be injected into the plasma. The disruptions occurring in these cases have clear phenomenological similarities with those observed in density-limit discharges. We show in this paper that the ICRF power limitation by plasma disruptions in ASDEX was due to reaching the density limit. (orig.)

  1. Does the inclusion of grey literature influence estimates of intervention effectiveness reported in meta-analyses?

    Science.gov (United States)

    McAuley, L; Pham, B; Tugwell, P; Moher, D

    2000-10-07

    The inclusion of only a subset of all available evidence in a meta-analysis may introduce biases and threaten its validity; this is particularly likely if the included studies differ from those not included, which may be the case for published and grey literature (unpublished studies, with limited distribution). We set out to examine whether exclusion of grey literature, compared with its inclusion in meta-analysis, provides different estimates of the effectiveness of interventions assessed in randomised trials. From a random sample of 135 meta-analyses, we identified and retrieved 33 publications that included both grey and published primary studies. The 33 publications contributed 41 separate meta-analyses from several disease areas. General characteristics of the meta-analyses and associated studies and outcome data at the trial level were collected. We explored the effects of the inclusion of grey literature on the quantitative results using logistic regression analyses. Overall, 33% of the meta-analyses were found to include some form of grey literature. The grey literature, when included, accounts for between 4.5% and 75% of the studies in a meta-analysis. On average, published work, compared with grey literature, yielded significantly larger estimates of the intervention effect by 15% (ratio of odds ratios = 1.15 [95% CI 1.04-1.28]). Excluding abstracts from the analysis further compounded the exaggeration (1.33 [1.10-1.60]). The exclusion of grey literature from meta-analyses can lead to exaggerated estimates of intervention effectiveness. In general, meta-analysts should attempt to identify, retrieve, and include all reports, grey and published, that meet predefined inclusion criteria.

  2. Liability of statutory organs in limited liability companies

    Directory of Open Access Journals (Sweden)

    Martin Janků

    2011-01-01

    Full Text Available Statutory organs of business companies (and similarly of co-operatives) have numerous obligations imposed by generally binding provisions; linked to these is liability for their non-fulfilment. Some of the obligations are imposed directly by law, some are assumed on a contractual basis. Their infringement may give rise to liability for the resulting situation and its consequences. The regulation of the liability of persons serving in a company's bodies covers persons entrusted with the management of assets belonging to others. Sometimes these are in fact not entirely foreign assets because, although the assets are legally owned by the business company, the persons acting as statutory organs are mostly also partners (shareholders) in these companies. As such they manage foreign assets, but the company property was created by their contributions or through their own business activity. The paper analyses the requirements laid down for the function of managing director (jednatel) in the limited liability company. It then analyses the scope of the managing director's liability, first in relation to the company's creditors (persons standing outside the company) and subsequently in relation to the shareholders. It also presents and characterises recent trends in the Commercial Courts' assessment of the conditions required for liability for damage and of claims for damages brought by actions to recover damages from managing directors. De lege ferenda, the paper recommends that the legal regulation be amended by provisions limiting the scope of persons who may be appointed as managing director and/or extending the liability for damages to the partners of the company in cases where the damage cannot be compensated by the managing director and the partners should bear the consequences of their culpa in eligendo.

  3. Sample preparation in foodomic analyses.

    Science.gov (United States)

    Martinović, Tamara; Šrajer Gajdošik, Martina; Josić, Djuro

    2018-04-16

    Representative sampling and adequate sample preparation are key factors for successful performance of further steps in foodomic analyses, as well as for correct data interpretation. Incorrect sampling and improper sample preparation can be sources of severe bias in foodomic analyses. It is well known that neither wrong sampling nor wrong sample treatment can be corrected afterwards. These facts, frequently neglected in the past, are now taken into consideration, and the progress in sampling and sample preparation in foodomics is reviewed here. We report the use of highly sophisticated instruments for both high-performance and high-throughput analyses, as well as miniaturization and the use of laboratory robotics in metabolomics, proteomics, peptidomics and genomics. This article is protected by copyright. All rights reserved.

  4. Relationship between the sensation of activity limitation and the results of functional assessment in asthma patients.

    Science.gov (United States)

    Vermeulen, Francois; Chirumberro, Audrey; Rummens, Peter; Bruyneel, Marie; Ninane, Vincent

    2017-08-01

    In asthma patients, the assessment of activity limitation is based on questions evaluating how limited the patients feel in their activities. However, the lack of functional data complicates the interpretation of the answers. We aimed to evaluate the strength of the relationships between the patient's perception of activity limitation and the results of several functional tests. Twenty patients complaining of asthma exacerbation were invited to complete three scores (Chronic Respiratory Disease questionnaire, Asthma Control Questionnaire, Hospital Anxiety and Depression scale). They also underwent lung function measurements, a 6-minute walk test and a cardio-pulmonary exercise test. In addition, physical activity was studied by actigraphy. Spearman's rank correlation coefficients between the patient's perception of activity limitation and each of the other parameters were analysed. Five parameters were significantly correlated with the perception of activity limitation, among them ACQ question 4, related to dyspnea (rs 0.74, p ...). In response to questions about limitation of activity, patients do not answer specifically in terms of physical limitation but rather in terms of the psychological burden associated with this constraint.

  5. Impact of workstations on criticality analyses at ABB combustion engineering

    International Nuclear Information System (INIS)

    Tarko, L.B.; Freeman, R.S.; O'Donnell, P.F.

    1993-01-01

    During 1991, ABB Combustion Engineering (ABB C-E) made the transition from a CDC Cyber 990 mainframe for nuclear criticality safety analyses to Hewlett-Packard (HP)/Apollo workstations. The primary motivation for this change was the improved economics of the workstation and maintaining state-of-the-art technology. The Cyber 990 utilized the NOS operating system with a 60-bit word size. The CPU memory size was limited to 131,100 words of directly addressable memory, with an extended 250,000 words available. The Apollo workstation environment at ABB consists of HP/Apollo 9000/400 series desktop units used by most application engineers, networked with HP/Apollo DN10000 platforms that use a 32-bit word size and function as the computer servers and network administration CPUs, providing a virtual memory system

  6. Analysing changes of health inequalities in the Nordic welfare states

    DEFF Research Database (Denmark)

    Lahelma, Eero; Kivelä, Katariina; Roos, Eva

    2002-01-01

    This study examined changes over time in relative health inequalities among men and women in four Nordic countries: Denmark, Finland, Norway and Sweden. A serious economic recession burst out in the early 1990s, particularly in Finland and Sweden. We ask whether this adverse social-structural development influenced health inequalities by employment status and educational attainment, i.e. whether the trends in health inequalities were similar or dissimilar between the Nordic countries. The data derived from comparable interview surveys carried out in 1986/87 and 1994/95 in the four countries. Limiting long-standing illness and perceived health were analysed by age, gender, employment status and educational attainment. First, age-adjusted overall prevalence percentages were calculated. Second, changes in the magnitude of relative health inequalities were studied using logistic regression analysis. Within each country ...

  7. Stress analyses for reactor pressure vessels by the example of a product line '69 boiling water reactor

    International Nuclear Information System (INIS)

    Mkrtchyan, Lilit; Schau, Henry; Wolf, Werner; Holzer, Wieland; Wernicke, Robert; Trieglaff, Ralf

    2011-01-01

    The reactor pressure vessels (RPV) of boiling water reactors (BWR) belonging to the product line '69 have unusually designed heads. The spherical cap-shaped bottom head of the vessel is welded directly to the support flange of the lower shell course. This unusual construction has led repeatedly to controversial discussions concerning the limits and admissibility of stress intensities arising in the junction of the bottom head to the cylindrical shell. In the present paper, stress analyses for the design conditions are performed with the finite element method in order to determine and categorize the occurring stresses. The procedure of stress classification in accordance with the guidelines of German KTA 3201.2 and Section III of the ASME Code (Subsection NB) is described and subsequently demonstrated by the example of a typical BWR vessel. The accomplished investigations yield allowable stress intensities in the considered area. Additionally, limit load analyses are carried out to verify the obtained results. Complementary studies, performed for a torispherical head, prove that the determined maximum peak stresses in the junction between the bottom head and the cylindrical shell are not unusual also for pressure vessels with regular bottom head constructions. (orig.)

  8. Accuracy, precision, and lower detection limits (a deficit reduction approach)

    International Nuclear Information System (INIS)

    Bishop, C.T.

    1993-01-01

    The evaluation of the accuracy, precision and lower detection limits of the determination of trace radionuclides in environmental samples can become quite sophisticated and time consuming. This in turn could add significant cost to the analyses being performed. In the present method, a "deficit reduction approach" has been taken to keep costs low but at the same time provide defensible data. In order to measure the accuracy of a particular method, reference samples are measured over the time period that the actual samples are being analyzed. Using a Lotus spreadsheet, data are compiled and an average accuracy is computed. If pairs of reference samples are analyzed, then precision can also be evaluated from the duplicate data sets. The standard deviation can be calculated if the reference concentrations of the duplicates are all in the same general range. Laboratory blanks are used to estimate the lower detection limits. The lower detection limit is calculated as 4.65 times the standard deviation of a set of blank determinations made over a given period of time. A Lotus spreadsheet is again used to compile the data, and LDLs over different periods of time can be compared
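
    The quantities described above lend themselves to a near-direct transcription; a sketch with hypothetical measurement values:

    ```python
    import statistics

    reference_results = [0.97, 1.03, 0.99, 1.05, 0.96]   # measured/known, reference samples
    blanks = [0.010, 0.014, 0.008, 0.012, 0.011]         # blank determinations over time

    accuracy = statistics.mean(reference_results)        # average recovery over the period
    precision = statistics.stdev(reference_results)      # spread of repeated reference analyses
    ldl = 4.65 * statistics.stdev(blanks)                # lower detection limit from blanks

    print(f"accuracy={accuracy:.3f}  precision(sd)={precision:.3f}  LDL={ldl:.4f}")
    ```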

  9. New dose limits and distribution of annual doses for controlled groups

    International Nuclear Information System (INIS)

    Vukcevic, M.; Stankovic, S.; Kovacevic, M.

    1993-01-01

    The new calculations of neutron doses received by the population of Hiroshima and Nagasaki, as well as the epidemiological data on the incidence of fatal cancers in the survivors, have led to the conclusion that the risk estimates should be raised by a factor of 2 or 3. In this work, the distribution of monthly doses for occupationally exposed workers was analysed in order to determine the percentage of workers who might be considered overexposed on the basis of the new dose limits. (author)

  10. Distributed Multiscale Data Analysis and Processing for Sensor Networks

    National Research Council Canada - National Science Library

    Wagner, Raymond; Sarvotham, Shriram; Choi, Hyeokho; Baraniuk, Richard

    2005-01-01

    ... Second, the communication overhead of multiscale algorithms can become prohibitive. In this paper, we take a first step in addressing both shortcomings by introducing two new distributed multiresolution transforms ...
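
    For orientation, the simplest multiresolution transform that such distributed algorithms build on is the Haar analysis; a centralized, single-node sketch (the paper's contribution, the distributed version, is not reproduced here):

    ```python
    def haar_level(signal):
        """One Haar analysis level: pairwise averages (approximation) and differences (detail)."""
        approx = [(a + b) / 2 for a, b in zip(signal[0::2], signal[1::2])]
        detail = [(a - b) / 2 for a, b in zip(signal[0::2], signal[1::2])]
        return approx, detail

    data = [4.0, 6.0, 10.0, 12.0, 8.0, 6.0, 5.0, 5.0]   # e.g. readings from 8 sensors
    details = []
    while len(data) > 1:
        data, d = haar_level(data)
        details.append(d)

    print(data, details)   # coarsest approximation plus detail coefficients per scale
    ```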

  11. Unified Representation for Collaborative Visualization and Processing of Terrain Data, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — We build upon our prior work applying subdivision surfaces (subdivs) to planetary terrain mapping. Subdivs are an alternative, multi-resolution method with many...

  12. Quench limits

    International Nuclear Information System (INIS)

    Sapinski, M.

    2012-01-01

    With thirteen beam-induced quenches and numerous Machine Development tests, the current knowledge of LHC magnet quench limits still contains a lot of unknowns. Various approaches to determining the quench limits are reviewed and results of the tests are presented. An attempt is made to reconstruct a coherent picture emerging from these results. The available methods for computing the quench levels are presented together with the dedicated particle shower simulations which are necessary to understand the tests. The future experiments needed to reach a better understanding of quench limits, as well as limits for machine operation, are investigated. The possible strategies for setting BLM (Beam Loss Monitor) thresholds are discussed. (author)

  13. Cycle O(CY1991) NLS trade studies and analyses report. Book 2, part 2: Propulsion

    Science.gov (United States)

    Cronin, R.; Werner, M.; Bonson, S.; Spring, R.; Houston, R.

    1992-01-01

    This report documents the propulsion system tasks performed in support of the National Launch System (NLS) Cycle O preliminary design activities. The report includes trades and analyses covering the following subjects: (1) Maximum Tank Stretch Study; (2) No LOX Bleed Performance Analysis; (3) LOX Bleed Trade Study; (4) LO2 Tank Pressure Limits; (5) LOX Tank Pressurization System Using Helium; (6) Space Transportation Main Engine (STME) Heat Exchanger Performance; (7) LH2 Passive Recirculation Performance Analysis; (8) LH2 Bleed/Recirculation Study; (9) LH2 Tank Pressure Limits; and (10) LH2 Pressurization System. For each trade study an executive summary and a detailed trade study are provided. For the convenience of the reader, a separate section containing a compilation of only the executive summaries is also provided.

  14. Regional Investment Policy Under The Impact Of Budget Limitations And Economic Sanctions

    OpenAIRE

    Avramenko, Yelena S.; Vlasov, Semyon V.; Lukyanov, Sergey A.; Temkina, Irina M.

    2018-01-01

    This article presents the results of research on the impact which budget limitations and economic sanctions have had on regional investment policy. External sanctions and sluggish economic growth have affected the social and economic development of the region. Relying on the results of comparative and statistical analysis, the article demonstrates the need for altering the focus of current investment policy from quantitative growth to qualitative enhancement. The article analyses a new trend i...

  15. The limiting current in a one-dimensional situation: Transition from a space charge limited to magnetically limited flow

    International Nuclear Information System (INIS)

    Kumar, Raghwendra; Biswas, Debabrata

    2008-01-01

    For a nonrelativistic electron beam propagating in a cylindrical drift tube, it is shown that the limiting current density does not saturate to the electrostatic one-dimensional (1D) estimate with increasing beam radius. Fully electromagnetic particle-in-cell (PIC) simulation studies show that beyond a critical aspect ratio, the limiting current density is lower than the 1D electrostatic prediction. The lowering in the limiting current density is found to be due to the transition from the space charge limited to magnetically limited flow. An adaptation of Alfven's single particle trajectory method is used to estimate the magnetically limited current as well as the critical radius beyond which the flow is magnetically limited in a drift tube. The predictions are found to be in close agreement with PIC simulations
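
    For orientation, the classical 1D space-charge-limited (Child-Langmuir) current density is easy to evaluate; this sketch uses the textbook planar-diode formula, not the drift-tube expressions derived in the paper:

    ```python
    import math

    EPS0, E_CHARGE, M_E = 8.854e-12, 1.602e-19, 9.109e-31

    def child_langmuir_j(voltage_v: float, gap_m: float) -> float:
        """Space-charge-limited current density (A/m^2) for a planar gap."""
        return (4.0 * EPS0 / 9.0) * math.sqrt(2.0 * E_CHARGE / M_E) \
            * voltage_v ** 1.5 / gap_m ** 2

    print(f"{child_langmuir_j(10e3, 0.01):.3e} A/m^2")   # 10 kV across a 1 cm gap
    ```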

  16. Trial sequential analyses of meta-analyses of complications in laparoscopic vs. small-incision cholecystectomy: more randomized patients are needed

    DEFF Research Database (Denmark)

    Keus, Frederik; Wetterslev, Jørn; Gluud, Christian

    2010-01-01

    Conclusions based on meta-analyses of randomized trials carry a status of "truth." Methodological components may identify trials with systematic errors ("bias"). Trial sequential analysis (TSA) evaluates random errors in meta-analysis. We analyzed meta-analyses on laparoscopic vs. small-incision cholecystectomy.

  17. The influence of finite-length flaw effects on PTS analyses

    International Nuclear Information System (INIS)

    Keeney-Walker, J.; Dickson, T.L.

    1993-01-01

    Current licensing issues within the nuclear industry dictate a need to investigate the effects of cladding on the extension of small finite-length cracks near the inside surface of a vessel. Because flaws having depths of the order of the combined clad and heat affected zone thickness dominate the frequency distribution of flaws, their initiation probabilities can govern calculated vessel failure probabilities. Current pressurized-thermal-shock (PTS) analysis computer programs recognize the influence of the inner-surface cladding layer in the heat transfer and stress analysis models, but assume the cladding fracture toughness is the same as that for the base material. The programs do not recognize the influence cladding may have in inhibiting crack initiation and propagation of shallow finite-length surface flaws. Limited experimental data and analyses indicate the cladding can inhibit the propagation of certain shallow flaws. This paper describes an analytical study which was carried out to determine (1) the minimum flaw depth for crack initiation under PTS loading for semicircular surface flaws in a clad reactor pressure vessel and (2) the impact, in terms of the conditional probability of vessel failure, of using a semicircular surface flaw as the initial flaw and assuming that the flaw cannot propagate in the cladding. The analytical results indicate that for initiation a much deeper critical crack depth is required for the finite-length flaw than for the infinite-length flaw, except for the least severe transient. The minimum flaw depths required for crack initiation from the finite-length flaw analyses were incorporated into a modified version of the OCA-P code. The modified code was applied to the analysis of selected PTS transients, and the results produced a substantial decrease in the conditional probability of failure. This initial study indicates a significant effect on probabilistic fracture analyses by incorporating finite-length flaw results

  18. Validating experimental and theoretical Langmuir probe analyses

    Science.gov (United States)

    Pilling, L. S.; Carnegie, D. A.

    2007-08-01

    Analysis of Langmuir probe characteristics contains a paradox in that it is unknown a priori which theory is applicable before it is applied. Often theories are assumed to be correct when certain criteria are met although they may not validate the approach used. We have analysed the Langmuir probe data from cylindrical double and single probes acquired from a dc discharge plasma over a wide variety of conditions. This discharge contains a dual-temperature distribution and hence fitting a theoretically generated curve is impractical. To determine the densities, an examination of the current theories was necessary. For the conditions where the probe radius is the same order of magnitude as the Debye length, the gradient expected for orbital-motion limited (OML) is approximately the same as the radial-motion gradients. An analysis of the 'gradients' from the radial-motion theory was able to resolve the differences from the OML gradient value of two. The method was also able to determine whether radial or OML theories applied without knowledge of the electron temperature, or separation of the ion and electron contributions. Only the value of the space potential is necessary to determine the applicable theory.

  19. Coastal and river flood risk analyses for guiding economically optimal flood adaptation policies: a country-scale study for Mexico

    Science.gov (United States)

    Haer, Toon; Botzen, W. J. Wouter; van Roomen, Vincent; Connor, Harry; Zavala-Hidalgo, Jorge; Eilander, Dirk M.; Ward, Philip J.

    2018-06-01

    Many countries around the world face increasing impacts from flooding due to socio-economic development in flood-prone areas, which may be enhanced in intensity and frequency as a result of climate change. With increasing flood risk, it is becoming more important to be able to assess the costs and benefits of adaptation strategies. To guide the design of such strategies, policy makers need tools to prioritize where adaptation is needed and how much adaptation funds are required. In this country-scale study, we show how flood risk analyses can be used in cost-benefit analyses to prioritize investments in flood adaptation strategies in Mexico under future climate scenarios. Moreover, given the often limited availability of detailed local data for such analyses, we show how state-of-the-art global data and flood risk assessment models can be applied for a detailed assessment of optimal flood-protection strategies. Our results show that especially states along the Gulf of Mexico have considerable economic benefits from investments in adaptation that limit risks from both river and coastal floods, and that increased flood-protection standards are economically beneficial for many Mexican states. We discuss the sensitivity of our results to modelling uncertainties, the transferability of our modelling approach and policy implications. This article is part of the theme issue 'Advances in risk assessment for climate change adaptation policy'.
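
    A minimal sketch of the cost-benefit comparison such studies perform: the discounted reduction in expected annual damage (EAD) is weighed against the protection investment. All figures are hypothetical, not results from this study:

    ```python
    def npv_of_protection(ead_without: float, ead_with: float, cost: float,
                          years: int = 50, discount_rate: float = 0.04) -> float:
        """Net present value of a protection measure over its appraisal horizon."""
        annual_benefit = ead_without - ead_with
        discounted = sum(annual_benefit / (1 + discount_rate) ** t
                         for t in range(1, years + 1))
        return discounted - cost

    # Raising a protection standard is economically beneficial when NPV > 0:
    print(npv_of_protection(ead_without=12e6, ead_with=3e6, cost=120e6))
    ```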

  20. A carbon-carbon panel design concept for the inboard limiter of the Compact Ignition Tokamak (CIT)

    International Nuclear Information System (INIS)

    Mantz, H.C.; Bowers, D.A.; Williams, F.R.; Witten, M.A.

    1989-01-01

    The inboard limiter of the Compact Ignition Tokamak (CIT) must protect the vacuum vessel from the plasma energy. This limiter region must withstand nominal heat fluxes in excess of 10 MW/m² and in addition it must be designed to be remotely maintained. Carbon-carbon composite material was selected over bulk graphite materials for the limiter design because of its ability to meet the thermal and structural requirements. The structural design concept consists of carbon-carbon composite panels attached to the vacuum vessel by a hinged rod/retainer concept. Results of the preliminary design study to define this inboard limiter are presented. The design concept is described along with the analyses of the thermal and structural response during nominal plasma operation and during plasma disruption events. 2 refs., 8 figs