WorldWideScience

Sample records for level setting means

  1. Out-of-Core Computations of High-Resolution Level Sets by Means of Code Transformation

    DEFF Research Database (Denmark)

    Christensen, Brian Bunch; Nielsen, Michael Bang; Museth, Ken

    2012-01-01

We propose a storage-efficient, fast and parallelizable out-of-core framework for streaming computations of high-resolution level sets. The fundamental techniques are skewing and tiling transformations of streamed level set computations, which allow for the combination of interface propagation, re… computations are now CPU bound and consequently the overall performance is unaffected by disk latency and bandwidth limitations. We demonstrate this with several benchmark tests that show sustained out-of-core throughputs close to that of in-core level set simulations…

  2. Non-Asymptotic Confidence Sets for Circular Means

    Directory of Open Access Journals (Sweden)

    Thomas Hotz

    2016-10-01

The mean of data on the unit circle is defined as the minimizer of the average squared Euclidean distance to the data. Based on Hoeffding’s mass concentration inequalities, non-asymptotic confidence sets for circular means are constructed which are universal in the sense that they require no distributional assumptions. These are then compared with asymptotic confidence sets in simulations and for a real data set.
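As a quick illustration of this definition (a sketch added here, not code from the paper): when the Euclidean mean of the data points is nonzero, the minimizer of the average squared Euclidean distance on the unit circle is the direction of that mean resultant vector. The function name is illustrative.

```python
import math

def circular_mean(angles):
    """Circular mean: the point on the unit circle minimizing the average
    squared Euclidean distance to the data, i.e. the direction of the
    Euclidean mean resultant vector (undefined when that vector is zero)."""
    s = sum(math.sin(a) for a in angles)
    c = sum(math.cos(a) for a in angles)
    if s == 0.0 and c == 0.0:
        raise ValueError("circular mean undefined: zero resultant vector")
    return math.atan2(s, c)
```

Unlike the arithmetic mean of the raw angles, this handles wrap-around: two angles just either side of ±π average to π, not to 0.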

  3. Computer-aided measurement of liver volumes in CT by means of geodesic active contour segmentation coupled with level-set algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Suzuki, Kenji; Kohlbrenner, Ryan; Epstein, Mark L.; Obajuluwa, Ademola M.; Xu Jianwu; Hori, Masatoshi [Department of Radiology, University of Chicago, 5841 South Maryland Avenue, Chicago, Illinois 60637 (United States)

    2010-05-15

Purpose: Computerized liver extraction from hepatic CT images is challenging because the liver often abuts other organs of a similar density. The purpose of this study was to develop a computer-aided measurement of liver volumes in hepatic CT. Methods: The authors developed a computerized liver extraction scheme based on geodesic active contour segmentation coupled with level-set contour evolution. First, an anisotropic diffusion filter was applied to portal-venous-phase CT images for noise reduction while preserving the liver structure, followed by a scale-specific gradient magnitude filter to enhance the liver boundaries. Then, a nonlinear grayscale converter enhanced the contrast of the liver parenchyma. By using the liver-parenchyma-enhanced image as a speed function, a fast-marching level-set algorithm generated an initial contour that roughly estimated the liver shape. A geodesic active contour segmentation algorithm coupled with level-set contour evolution refined the initial contour to define the liver boundaries more precisely. The liver volume was then calculated using these refined boundaries. Hepatic CT scans of 15 prospective liver donors were obtained under a liver transplant protocol with a multidetector CT system. The liver volumes extracted by the computerized scheme were compared to those traced manually by a radiologist, used as the "gold standard". Results: The mean liver volume obtained with our scheme was 1504 cc, whereas the mean gold-standard manual volume was 1457 cc, resulting in a mean absolute difference of 105 cc (7.2%). The computer-estimated liver volumetrics agreed excellently with the gold-standard manual volumetrics (intraclass correlation coefficient 0.95) with no statistically significant difference (F=0.77; p(F≤f)=0.32). The average accuracy, sensitivity, specificity, and percent volume error were 98.4%, 91.1%, 99.1%, and 7.2%, respectively. Computerized CT liver volumetry would require substantially less

  4. Computer-aided measurement of liver volumes in CT by means of geodesic active contour segmentation coupled with level-set algorithms

    International Nuclear Information System (INIS)

    Suzuki, Kenji; Kohlbrenner, Ryan; Epstein, Mark L.; Obajuluwa, Ademola M.; Xu Jianwu; Hori, Masatoshi

    2010-01-01

Purpose: Computerized liver extraction from hepatic CT images is challenging because the liver often abuts other organs of a similar density. The purpose of this study was to develop a computer-aided measurement of liver volumes in hepatic CT. Methods: The authors developed a computerized liver extraction scheme based on geodesic active contour segmentation coupled with level-set contour evolution. First, an anisotropic diffusion filter was applied to portal-venous-phase CT images for noise reduction while preserving the liver structure, followed by a scale-specific gradient magnitude filter to enhance the liver boundaries. Then, a nonlinear grayscale converter enhanced the contrast of the liver parenchyma. By using the liver-parenchyma-enhanced image as a speed function, a fast-marching level-set algorithm generated an initial contour that roughly estimated the liver shape. A geodesic active contour segmentation algorithm coupled with level-set contour evolution refined the initial contour to define the liver boundaries more precisely. The liver volume was then calculated using these refined boundaries. Hepatic CT scans of 15 prospective liver donors were obtained under a liver transplant protocol with a multidetector CT system. The liver volumes extracted by the computerized scheme were compared to those traced manually by a radiologist, used as the "gold standard". Results: The mean liver volume obtained with our scheme was 1504 cc, whereas the mean gold-standard manual volume was 1457 cc, resulting in a mean absolute difference of 105 cc (7.2%). The computer-estimated liver volumetrics agreed excellently with the gold-standard manual volumetrics (intraclass correlation coefficient 0.95) with no statistically significant difference (F=0.77; p(F≤f)=0.32). The average accuracy, sensitivity, specificity, and percent volume error were 98.4%, 91.1%, 99.1%, and 7.2%, respectively. Computerized CT liver volumetry would require substantially less completion time

  5. Mean-Variance Analysis in a Multiperiod Setting

    OpenAIRE

    Frauendorfer, Karl; Siede, Heiko

    1997-01-01

Similar to the classical Markowitz approach, it is possible to apply a mean-variance criterion in a multiperiod setting to obtain efficient portfolios. To represent the stochastic dynamic characteristics necessary for modelling returns, a process of asset returns is discretized with respect to time and space and summarized in a scenario tree. The resulting optimization problem is solved by means of stochastic multistage programming. The optimal solutions show equivalent structural properties to...
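For readers unfamiliar with the single-period building block, here is a hedged sketch (mine, not the paper's; the paper's contribution is the multiperiod scenario-tree formulation): the two-asset minimum-variance portfolio has a simple closed form.

```python
def min_variance_weights(var1, var2, cov12):
    """Two-asset minimum-variance portfolio (classical single-period
    Markowitz): minimize w^2*var1 + (1-w)^2*var2 + 2*w*(1-w)*cov12 over w.
    Setting the derivative with respect to w to zero gives this closed form."""
    w1 = (var2 - cov12) / (var1 + var2 - 2.0 * cov12)
    return w1, 1.0 - w1
```

With equal variances and zero covariance the optimum is the intuitive 50/50 split; lowering one asset's variance shifts weight toward it.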

  6. A Variational Level Set Model Combined with FCMS for Image Clustering Segmentation

    Directory of Open Access Journals (Sweden)

    Liming Tang

    2014-01-01

The fuzzy C-means clustering algorithm with spatial constraint (FCMS) is effective for image segmentation. However, it lacks essential smoothing constraints on the cluster boundaries and sufficient robustness to noise. Samson et al. proposed a variational level set model for image clustering segmentation, which can obtain smooth cluster boundaries and closed cluster regions due to the use of a level set scheme. However, it is very sensitive to noise since it is essentially a hard C-means clustering model. In this paper, based on Samson's work, we propose a new variational level set model combined with FCMS for image clustering segmentation. Compared with FCMS clustering, the proposed model can obtain smooth cluster boundaries and closed cluster regions due to the use of a level set scheme. In addition, a block-based energy is incorporated into the energy functional, which enables the proposed model to be more robust to noise than FCMS clustering and Samson's model. Experiments on synthetic and real images are performed to assess the performance of the proposed model. Compared with some classical image segmentation models, the proposed model performs better on images contaminated by different noise levels.
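The "hard vs. fuzzy" distinction above comes down to the membership update. A minimal sketch of the standard fuzzy C-means membership formula (an illustration added here, not the paper's code; FCMS additionally adds a spatial penalty term):

```python
def fcm_memberships(x, centers, m=2.0):
    """Fuzzy C-means memberships of a scalar sample x:
    u_i = 1 / sum_j (d_i / d_j)^(2/(m-1)), with d_i = |x - c_i|.
    As m -> 1 this sharpens toward a hard C-means assignment."""
    d = [abs(x - c) for c in centers]
    if 0.0 in d:  # x coincides with a centre: crisp membership
        return [1.0 if di == 0.0 else 0.0 for di in d]
    p = 2.0 / (m - 1.0)
    return [1.0 / sum((di / dj) ** p for dj in d) for di in d]
```

A sample midway between two centres gets membership 0.5 in each, which is exactly the soft behaviour a hard C-means model lacks.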

  7. Level Sets and Voronoi based Feature Extraction from any Imagery

    DEFF Research Database (Denmark)

    Sharma, O.; Anton, François; Mioc, Darka

    2012-01-01

Polygon features are of interest in many GEOProcessing applications such as shoreline mapping, boundary delineation, change detection, etc. This paper presents a new GPU-based methodology to automate feature extraction, combining level sets or mean shift based segmentation together with Voron...

  8. Levels of Literary Meaning

    DEFF Research Database (Denmark)

    Klausen, Søren Harnow

    2017-01-01

    I argue that intentionalist theories of meaning and interpretation, like those of Hirsch and Juhl, have been insufficiently attentive to the different levels of authorial intention that are operative in literary works. By countenancing intentions on different levels – ranging from simple semantic...

  9. Discretisation Schemes for Level Sets of Planar Gaussian Fields

    Science.gov (United States)

    Beliaev, D.; Muirhead, S.

    2018-01-01

    Smooth random Gaussian functions play an important role in mathematical physics, a main example being the random plane wave model conjectured by Berry to give a universal description of high-energy eigenfunctions of the Laplacian on generic compact manifolds. Our work is motivated by questions about the geometry of such random functions, in particular relating to the structure of their nodal and level sets. We study four discretisation schemes that extract information about level sets of planar Gaussian fields. Each scheme recovers information up to a different level of precision, and each requires a maximum mesh-size in order to be valid with high probability. The first two schemes are generalisations and enhancements of similar schemes that have appeared in the literature (Beffara and Gayet in Publ Math IHES, 2017. https://doi.org/10.1007/s10240-017-0093-0; Mischaikow and Wanner in Ann Appl Probab 17:980-1018, 2007); these give complete topological information about the level sets on either a local or global scale. As an application, we improve the results in Beffara and Gayet (2017) on Russo-Seymour-Welsh estimates for the nodal set of positively-correlated planar Gaussian fields. The third and fourth schemes are, to the best of our knowledge, completely new. The third scheme is specific to the nodal set of the random plane wave, and provides global topological information about the nodal set up to `visible ambiguities'. The fourth scheme gives a way to approximate the mean number of excursion domains of planar Gaussian fields.
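The simplest ingredient shared by such discretisation schemes can be sketched as follows (a toy illustration added here, not the authors' constructions): record the field's sign at the mesh vertices and flag edges where the sign flips, since the nodal set must cross each such edge.

```python
def sign_grid(f, xs, ys):
    """Sign of a planar field sampled at the vertices of a rectangular mesh."""
    return [[1 if f(x, y) > 0 else -1 for x in xs] for y in ys]

def crossing_edges(grid):
    """Count mesh edges whose endpoints have opposite signs; each such edge
    is crossed by the nodal (zero) set of the sampled field."""
    n = 0
    rows, cols = len(grid), len(grid[0])
    for r in range(rows):
        for c in range(cols):
            if c + 1 < cols and grid[r][c] != grid[r][c + 1]:
                n += 1
            if r + 1 < rows and grid[r][c] != grid[r + 1][c]:
                n += 1
    return n
```

A sufficiently fine mesh is needed for this picture to be faithful, which is precisely the "maximum mesh-size" question the paper quantifies.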

  10. Level Set Approach to Anisotropic Wet Etching of Silicon

    Directory of Open Access Journals (Sweden)

    Branislav Radjenović

    2010-05-01

In this paper a methodology for the three-dimensional (3D) modeling and simulation of the profile evolution during anisotropic wet etching of silicon, based on the level set method, is presented. Etching rate anisotropy in silicon is modeled taking into account full silicon symmetry properties, by means of an interpolation technique using experimentally obtained values for the etching rates along thirteen principal and high-index directions in KOH solutions. The resulting level set equations are solved using an open source implementation of the sparse field method (the ITK library, developed in the medical image processing community), extended for the case of non-convex Hamiltonians. Simulation results for some interesting initial 3D shapes, as well as some more practical examples illustrating anisotropic etching simulation in the presence of masks (simple square aperture mask, convex corner undercutting and convex corner compensation, formation of suspended structures), are also shown. The obtained results show that the level set method can be used as an effective tool for wet etching process modeling, and that it is a viable alternative to the Cellular Automata method which now prevails in simulations of the wet etching process.

  11. Level set segmentation of bovine corpora lutea in ex situ ovarian ultrasound images

    Directory of Open Access Journals (Sweden)

    Adams Gregg P

    2008-08-01

Background The objective of this study was to investigate the viability of level set image segmentation methods for the detection of corpora lutea (corpus luteum, CL) boundaries in ultrasonographic ovarian images. It was hypothesized that bovine CL boundaries could be located within 1–2 mm by a level set image segmentation methodology. Methods Level set methods embed a 2D contour in a 3D surface and evolve that surface over time according to an image-dependent speed function. A speed function suitable for segmentation of CLs in ovarian ultrasound images was developed. An initial contour was manually placed and contour evolution was allowed to proceed until the rate of change of the area was sufficiently small. The method was tested on ovarian ultrasonographic images (n = 8) obtained ex situ. An expert in ovarian ultrasound interpretation delineated CL boundaries manually to serve as a "ground truth". Accuracy of the level set segmentation algorithm was determined by comparing semi-automatically determined contours with ground truth contours using the mean absolute difference (MAD), root mean squared difference (RMSD), Hausdorff distance (HD), sensitivity, and specificity metrics. Results and discussion The mean MAD was 0.87 mm (σ = 0.36 mm), RMSD was 1.1 mm (σ = 0.47 mm), and HD was 3.4 mm (σ = 2.0 mm), indicating that, on average, boundaries were accurate within 1–2 mm; however, deviations in excess of 3 mm from the ground truth were observed, indicating under- or over-expansion of the contour. Mean sensitivity and specificity were 0.814 (σ = 0.171) and 0.990 (σ = 0.00786), respectively, indicating that CLs were consistently undersegmented but rarely did the contour interior include pixels that were judged by the human expert not to be part of the CL. It was observed that in localities where gradient magnitudes within the CL were strong due to high-contrast speckle, contour expansion stopped too early. Conclusion The
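The boundary-accuracy metrics named above can be computed from two sampled contours roughly as follows (a generic sketch under the assumption that contours are given as 2D point lists; not the study's code):

```python
import math

def _nearest_dists(A, B):
    """Distance from each point of A to its nearest point of B."""
    return [min(math.dist(a, b) for b in B) for a in A]

def contour_metrics(A, B):
    """Symmetric MAD, RMSD and Hausdorff distance between point sets A, B."""
    d = _nearest_dists(A, B) + _nearest_dists(B, A)
    mad = sum(d) / len(d)
    rmsd = math.sqrt(sum(x * x for x in d) / len(d))
    hd = max(d)
    return mad, rmsd, hd
```

The Hausdorff distance is the worst-case deviation, which is why it exceeds the MAD in results like those quoted above.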

  12. Reconstruction of incomplete cell paths through a 3D-2D level set segmentation

    Science.gov (United States)

    Hariri, Maia; Wan, Justin W. L.

    2012-02-01

    Segmentation of fluorescent cell images has been a popular technique for tracking live cells. One challenge of segmenting cells from fluorescence microscopy is that cells in fluorescent images frequently disappear. When the images are stacked together to form a 3D image volume, the disappearance of the cells leads to broken cell paths. In this paper, we present a segmentation method that can reconstruct incomplete cell paths. The key idea of this model is to perform 2D segmentation in a 3D framework. The 2D segmentation captures the cells that appear in the image slices while the 3D segmentation connects the broken cell paths. The formulation is similar to the Chan-Vese level set segmentation which detects edges by comparing the intensity value at each voxel with the mean intensity values inside and outside of the level set surface. Our model, however, performs the comparison on each 2D slice with the means calculated by the 2D projected contour. The resulting effect is to segment the cells on each image slice. Unlike segmentation on each image frame individually, these 2D contours together form the 3D level set function. By enforcing minimum mean curvature on the level set surface, our segmentation model is able to extend the cell contours right before (and after) the cell disappears (and reappears) into the gaps, eventually connecting the broken paths. We will present segmentation results of C2C12 cells in fluorescent images to illustrate the effectiveness of our model qualitatively and quantitatively by different numerical examples.
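The intensity comparison at the heart of Chan-Vese-style segmentation can be sketched in a few lines (a curvature-free toy version on a single 2D slice, assuming the image is a list of lists; the paper's model additionally maintains the 3D level set surface and the minimum-mean-curvature term that bridges gaps):

```python
def chan_vese_step(img, mask):
    """One curvature-free Chan-Vese-style update: recompute the mean
    intensity inside and outside the current region, then reassign each
    pixel to whichever mean is closer (iterate to convergence)."""
    h, w = len(img), len(img[0])
    inside = [img[i][j] for i in range(h) for j in range(w) if mask[i][j]]
    outside = [img[i][j] for i in range(h) for j in range(w) if not mask[i][j]]
    c_in = sum(inside) / len(inside)
    c_out = sum(outside) / len(outside)
    return [[(img[i][j] - c_in) ** 2 < (img[i][j] - c_out) ** 2
             for j in range(w)] for i in range(h)]
```

On a bright-vs-dark image this update is a fixed point once the region matches the bright cells, and a slightly wrong initialization is corrected in one step.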

  13. Novel gene sets improve set-level classification of prokaryotic gene expression data.

    Science.gov (United States)

    Holec, Matěj; Kuželka, Ondřej; Železný, Filip

    2015-10-28

Set-level classification of gene expression data has received significant attention recently. In this setting, high-dimensional vectors of features corresponding to genes are converted into lower-dimensional vectors of features corresponding to biologically interpretable gene sets. The dimensionality reduction brings the promise of a decreased risk of overfitting, potentially resulting in improved accuracy of the learned classifiers. However, recent empirical research has not confirmed this expectation. Here we hypothesize that the reported unfavorable classification results in the set-level framework were due to the adoption of unsuitable gene sets, defined typically on the basis of the Gene Ontology and the KEGG database of metabolic networks. We explore an alternative approach to defining gene sets, based on regulatory interactions, which we expect to collect genes with more correlated expression. We hypothesize that such more correlated gene sets will enable learning more accurate classifiers. We define two families of gene sets using information on regulatory interactions, and evaluate them on phenotype-classification tasks using public prokaryotic gene expression data sets. From each of the two gene-set families, we first select the best-performing subtype. The two selected subtypes are then evaluated on independent (testing) data sets against state-of-the-art gene sets and against the conventional gene-level approach. The novel gene sets are indeed more correlated than the conventional ones, and lead to significantly more accurate classifiers. Novel gene sets defined on the basis of regulatory interactions thus improve set-level classification of gene expression data. The experimental scripts and other material needed to reproduce the experiments are available at http://ida.felk.cvut.cz/novelgenesets.tar.gz.
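The gene-level to set-level conversion described above reduces to simple feature aggregation; a sketch of the general idea (an illustration added here; the paper's contribution is the regulatory-interaction-based definition of the sets themselves):

```python
def set_level_features(expr, gene_sets):
    """Convert a gene-level expression profile (gene -> value) into a
    set-level feature vector: mean expression over each set's members."""
    return {name: sum(expr[g] for g in genes) / len(genes)
            for name, genes in gene_sets.items()}
```

The dimensionality drops from the number of genes to the number of sets, which is the source of the hoped-for reduction in overfitting.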

  14. Hybrid approach for detection of dental caries based on the methods FCM and level sets

    Science.gov (United States)

    Chaabene, Marwa; Ben Ali, Ramzi; Ejbali, Ridha; Zaied, Mourad

    2017-03-01

This paper presents a new technique for the detection of dental caries, a bacterial disease that destroys tooth structure. In our approach, we have developed a new segmentation method that combines the advantages of the fuzzy C-means (FCM) algorithm and the level set method. The results obtained by the FCM algorithm are used by the level set algorithm to reduce the influence of noise on each of these algorithms, to facilitate level set manipulation, and to lead to more robust segmentation. The sensitivity and specificity results confirm the effectiveness of the proposed method for caries detection.
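The sensitivity and specificity figures such studies report reduce to confusion-matrix counts over pixels; a generic sketch (not the authors' code):

```python
def sens_spec(pred, truth):
    """Sensitivity = TP/(TP+FN) and specificity = TN/(TN+FP) for two
    equal-length boolean sequences (predicted vs. ground-truth labels)."""
    tp = sum(p and t for p, t in zip(pred, truth))
    tn = sum(not p and not t for p, t in zip(pred, truth))
    fp = sum(p and not t for p, t in zip(pred, truth))
    fn = sum(not p and t for p, t in zip(pred, truth))
    return tp / (tp + fn), tn / (tn + fp)
```

Sensitivity measures how much of the true caries region is detected; specificity measures how little healthy tissue is falsely flagged.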

  15. Level-set techniques for facies identification in reservoir modeling

    Science.gov (United States)

    Iglesias, Marco A.; McLaughlin, Dennis

    2011-03-01

In this paper we investigate the application of level-set techniques for facies identification in reservoir models. The identification of facies is a geometrical inverse ill-posed problem that we formulate in terms of shape optimization. The goal is to find a region (a geologic facies) that minimizes the misfit between predicted and measured data from an oil-water reservoir. In order to address the shape optimization problem, we present a novel application of the level-set iterative framework developed by Burger in (2002 Interfaces Free Bound. 5 301-29; 2004 Inverse Problems 20 259-82) for inverse obstacle problems. The optimization is constrained by (the reservoir model) a nonlinear large-scale system of PDEs that describes the reservoir dynamics. We reformulate this reservoir model in a weak (integral) form whose shape derivative can be formally computed from standard results of shape calculus. At each iteration of the scheme, the current estimate of the shape derivative is utilized to define a velocity in the level-set equation. The proper selection of this velocity ensures that the new shape decreases the cost functional. We present results of facies identification where the velocity is computed with the gradient-based (GB) approach of Burger (2002) and the Levenberg-Marquardt (LM) technique of Burger (2004). While an adjoint formulation allows the straightforward application of the GB approach, the LM technique requires the computation of the large-scale Karush-Kuhn-Tucker system that arises at each iteration of the scheme. We efficiently solve this system by means of the representer method. We present some synthetic experiments to show and compare the capabilities and limitations of the proposed implementations of level-set techniques for the identification of geologic facies.

  16. Level-set techniques for facies identification in reservoir modeling

    International Nuclear Information System (INIS)

    Iglesias, Marco A; McLaughlin, Dennis

    2011-01-01

    In this paper we investigate the application of level-set techniques for facies identification in reservoir models. The identification of facies is a geometrical inverse ill-posed problem that we formulate in terms of shape optimization. The goal is to find a region (a geologic facies) that minimizes the misfit between predicted and measured data from an oil–water reservoir. In order to address the shape optimization problem, we present a novel application of the level-set iterative framework developed by Burger in (2002 Interfaces Free Bound. 5 301–29; 2004 Inverse Problems 20 259–82) for inverse obstacle problems. The optimization is constrained by (the reservoir model) a nonlinear large-scale system of PDEs that describes the reservoir dynamics. We reformulate this reservoir model in a weak (integral) form whose shape derivative can be formally computed from standard results of shape calculus. At each iteration of the scheme, the current estimate of the shape derivative is utilized to define a velocity in the level-set equation. The proper selection of this velocity ensures that the new shape decreases the cost functional. We present results of facies identification where the velocity is computed with the gradient-based (GB) approach of Burger (2002) and the Levenberg–Marquardt (LM) technique of Burger (2004). While an adjoint formulation allows the straightforward application of the GB approach, the LM technique requires the computation of the large-scale Karush–Kuhn–Tucker system that arises at each iteration of the scheme. We efficiently solve this system by means of the representer method. We present some synthetic experiments to show and compare the capabilities and limitations of the proposed implementations of level-set techniques for the identification of geologic facies

  17. A segmentation and classification scheme for single tooth in MicroCT images based on 3D level set and k-means+.

    Science.gov (United States)

    Wang, Liansheng; Li, Shusheng; Chen, Rongzhen; Liu, Sze-Yu; Chen, Jyh-Cheng

    2017-04-01

Accurate classification of the different anatomical structures of teeth from medical images provides crucial information for stress analysis in dentistry. Usually, the anatomical structures of teeth are labeled manually by experienced clinical doctors, which is time consuming. However, automatic segmentation and classification is a challenging task because the anatomical structures and surroundings of the tooth in medical images are rather complex. Therefore, in this paper, we propose an effective framework designed to segment the tooth with a Selective Binary and Gaussian Filtering Regularized Level Set (GFRLS) method improved by fully utilizing 3-dimensional (3D) information, and to classify the tooth by employing unsupervised learning, i.e., the k-means++ method. To evaluate the proposed method, experiments were conducted on extensive datasets of mandibular molars. The experimental results show that our method can achieve higher accuracy and robustness compared to three other clustering methods.
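The k-means++ part refers to the standard seeding strategy; a minimal 1D sketch (added here, with an illustrative `seed` parameter; not the paper's implementation):

```python
import random

def kmeanspp_seeds(points, k, seed=0):
    """k-means++ seeding: the first centre is picked uniformly at random,
    and each further centre is sampled with probability proportional to its
    squared distance from the nearest centre chosen so far."""
    rng = random.Random(seed)
    centres = [rng.choice(points)]
    while len(centres) < k:
        d2 = [min((p - c) ** 2 for c in centres) for p in points]
        r = rng.uniform(0.0, sum(d2))
        acc = 0.0
        for p, w in zip(points, d2):
            acc += w
            if acc >= r:
                centres.append(p)
                break
    return centres
```

Because already-covered points have zero sampling weight, the seeds spread out across the data, which is what makes the subsequent k-means iterations robust.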

  18. Celebrating 80 years of the Permanent Service for Mean Sea Level (PSMSL)

    Directory of Open Access Journals (Sweden)

    L. Rickards

    2015-03-01

The PSMSL was established as a “Permanent Service” of the International Council for Science in 1958, but in practice was a continuation of the Mean Sea Level Committee which had been set up at the Lisbon International Union of Geodesy and Geophysics (IUGG) conference in 1933. Now in its 80th year, the PSMSL continues to be the internationally recognised databank for long-term sea level change information from tide gauge records. The PSMSL dataset consists of over 2100 mean sea level records from across the globe, the longest of which date back to the start of the 19th century. Where possible, all data in a series are provided to a common benchmark-controlled datum, thus providing a record suitable for use in time series analysis. The PSMSL dataset is freely available for all to use, and is accessible through the PSMSL website (www.psmsl.org).
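A record "suitable for use in time series analysis" typically feeds a trend estimate; a minimal ordinary-least-squares sketch (an illustration with made-up numbers, not PSMSL data or code):

```python
def linear_trend(times, levels):
    """Ordinary least-squares slope of sea level vs. time (e.g. mm per year)."""
    n = len(times)
    mt = sum(times) / n
    ml = sum(levels) / n
    num = sum((t - mt) * (l - ml) for t, l in zip(times, levels))
    den = sum((t - mt) ** 2 for t in times)
    return num / den
```

The common benchmark-controlled datum mentioned above matters here: a datum shift partway through a series would bias exactly this kind of slope estimate.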

  19. Identifying meaning and perceived level of satisfaction within the context of work.

    Science.gov (United States)

    Brown, Angie; Kitchell, Molly; O'Neill, Tiffany; Lockliear, Jennifer; Vosler, Alyson; Kubek, Dayna; Dale, Lucinda

    2001-01-01

    OBJECTIVE: The primary objectives of this study were to identify sources of meaning for individuals within the context of a work environment, and to compare varied sources of meaning for individuals with high and low work satisfaction levels. METHOD: Participants were chosen based on satisfaction levels in employment, full-time employment status within an organization for at least one year, and diversity in the work setting. Data were gathered through a series of interviews and observations of the participants' workplaces. A comparative analysis of transcribed interviews was conducted by the researchers and with an expert occupational therapy faculty panel. From these analyses, the researchers developed work narratives for a mechanical engineer, a high school teacher, an employee of mechanical services, and a career service counselor. RESULTS: Emerging themes from the work narratives indicated that the various meanings employees found in work had an effect on their perceived levels of job satisfaction. Participants conveyed that organization identification, financial benefits, independent decision-making, reciprocal respect, opportunities for creativity, and maintaining significant relationships outside of work enhanced meaning and satisfaction. CONCLUSIONS: The worker role is a significant source of an individual's identity, meaning, and satisfaction in life. Professionals in various fields can work with employers to develop meaningful work environments for increased job satisfaction, greater motivation for work, increased productivity, and decreased employee turnover.

  20. Volume Sculpting Using the Level-Set Method

    DEFF Research Database (Denmark)

    Bærentzen, Jakob Andreas; Christensen, Niels Jørgen

    2002-01-01

In this paper, we propose the use of the Level-Set Method as the underlying technology of a volume sculpting system. The main motivation is that this leads to a very generic technique for deformation of volumetric solids. In addition, our method preserves a distance field volume representation… A scaling window is used to adapt the Level-Set Method to local deformations and to allow the user to control the intensity of the tool. Level-Set based tools have been implemented in an interactive sculpting system, and we show sculptures created using the system…

  1. Fast Sparse Level Sets on Graphics Hardware

    NARCIS (Netherlands)

    Jalba, Andrei C.; Laan, Wladimir J. van der; Roerdink, Jos B.T.M.

    The level-set method is one of the most popular techniques for capturing and tracking deformable interfaces. Although level sets have demonstrated great potential in visualization and computer graphics applications, such as surface editing and physically based modeling, their use for interactive

  2. A new level set model for multimaterial flows

    Energy Technology Data Exchange (ETDEWEB)

Starinshak, David P. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Karni, Smadar [Univ. of Michigan, Ann Arbor, MI (United States). Dept. of Mathematics; Roe, Philip L. [Univ. of Michigan, Ann Arbor, MI (United States). Dept. of Aerospace Engineering

    2014-01-08

We present a new level set model for representing multimaterial flows in multiple space dimensions. Instead of associating a level set function with a specific fluid material, the function is associated with a pair of materials and the interface that separates them. A voting algorithm collects sign information from all level sets and determines material designations. M(M−1)/2 level set functions might be needed to represent a general M-material configuration; problems of practical interest use far fewer functions, since not all pairs of materials share an interface. The new model is less prone to producing indeterminate material states, i.e. regions claimed by more than one material (overlaps) or no material at all (vacuums). It outperforms existing material-based level set models without the need for reinitialization schemes, thereby avoiding additional computational costs and preventing excessive numerical diffusion.
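The voting step can be sketched as follows (a toy illustration of the idea as described, not the authors' code; here a pairwise level set value `phis[(i, j)] > 0` is taken to favour material i):

```python
def material_vote(phis, num_materials):
    """Collect sign votes from the pairwise level-set functions and assign
    the point to the material with the most votes. phis maps a material
    pair (i, j) to the local value of its level-set function."""
    votes = [0] * num_materials
    for (i, j), phi in phis.items():
        votes[i if phi > 0 else j] += 1
    return max(range(num_materials), key=votes.__getitem__)
```

Because every point gets exactly one winning material, the vote resolves the overlap/vacuum ambiguities that single-material level set models can produce.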

  3. Level-set simulations of buoyancy-driven motion of single and multiple bubbles

    International Nuclear Information System (INIS)

    Balcázar, Néstor; Lehmkuhl, Oriol; Jofre, Lluís; Oliva, Assensi

    2015-01-01

Highlights: • A conservative level-set method is validated and verified. • An extensive study of buoyancy-driven motion of single bubbles is performed. • The interaction of two spherical and ellipsoidal bubbles is studied. • The interaction of multiple bubbles is simulated in a vertical channel. - Abstract: This paper presents a numerical study of the buoyancy-driven motion of single and multiple bubbles by means of the conservative level-set method. First, an extensive study of the hydrodynamics of single bubbles rising in a quiescent liquid is performed, including their shape, terminal velocity, drag coefficients and wake patterns. These results are validated against experimental and numerical data well established in the scientific literature. Then, a further study of the interaction of two spherical and ellipsoidal bubbles is performed for different orientation angles. Finally, the interaction of multiple bubbles is explored in a periodic vertical channel. The results show that the conservative level-set approach can be used for accurate modelling of bubble dynamics. Moreover, it is demonstrated that the present method is numerically stable for a wide range of Morton and Reynolds numbers.

  4. Exploring the level sets of quantum control landscapes

    International Nuclear Information System (INIS)

    Rothman, Adam; Ho, Tak-San; Rabitz, Herschel

    2006-01-01

    A quantum control landscape is defined by the value of a physical observable as a functional of the time-dependent control field E(t) for a given quantum-mechanical system. Level sets through this landscape are prescribed by a particular value of the target observable at the final dynamical time T, regardless of the intervening dynamics. We present a technique for exploring a landscape level set, where a scalar variable s is introduced to characterize trajectories along these level sets. The control fields E(s,t) accomplishing this exploration (i.e., that produce the same value of the target observable for a given system) are determined by solving a differential equation over s in conjunction with the time-dependent Schroedinger equation. There is full freedom to traverse a level set, and a particular trajectory is realized by making an a priori choice for a continuous function f(s,t) that appears in the differential equation for the control field. The continuous function f(s,t) can assume an arbitrary form, and thus a level set generally contains a family of controls, where each control takes the quantum system to the same final target value, but produces a distinct control mechanism. In addition, although the observable value remains invariant over the level set, other dynamical properties (e.g., the degree of robustness to control noise) are not specifically preserved and can vary greatly. Examples are presented to illustrate the continuous nature of level-set controls and their associated induced dynamical features, including continuously morphing mechanisms for population control in model quantum systems

  5. Evaluation of the Global Mean Sea Level Budget between 1993 and 2014

    DEFF Research Database (Denmark)

    Chambers, Don P.; Cazenave, Anny; Champollion, Nicolas

    2017-01-01

    Evaluating global mean sea level (GMSL) in terms of its components—mass and steric—is useful for both quantifying the accuracy of the measurements and understanding the processes that contribute to GMSL rise. In this paper, we review the GMSL budget over two periods—1993 to 2014 and 2005 to 2014......—using multiple data sets of both total GMSL and the components (mass and steric). In addition to comparing linear trends, we also compare the level of agreement of the time series. For the longer period (1993–2014), we find closure in terms of the long-term trend but not for year-to-year variations...

  6. Level-Set Topology Optimization with Aeroelastic Constraints

    Science.gov (United States)

    Dunning, Peter D.; Stanford, Bret K.; Kim, H. Alicia

    2015-01-01

    Level-set topology optimization is used to design a wing considering skin buckling under static aeroelastic trim loading, as well as dynamic aeroelastic stability (flutter). The level-set function is defined over the entire 3D volume of a transport aircraft wing box. Therefore, the approach is not limited by any predefined structure and can explore novel configurations. The Sequential Linear Programming (SLP) level-set method is used to solve the constrained optimization problems. The proposed method is demonstrated using three problems with mass, linear buckling and flutter objective and/or constraints. A constraint aggregation method is used to handle multiple buckling constraints in the wing skins. A continuous flutter constraint formulation is used to handle difficulties arising from discontinuities in the design space caused by a switching of the critical flutter mode.

  7. Level set segmentation of medical images based on local region statistics and maximum a posteriori probability.

    Science.gov (United States)

    Cui, Wenchao; Wang, Yi; Lei, Tao; Fan, Yangyu; Feng, Yan

    2013-01-01

    This paper presents a variational level set method for simultaneous segmentation and bias field estimation of medical images with intensity inhomogeneity. In our model, the statistics of image intensities belonging to each different tissue in local regions are characterized by Gaussian distributions with different means and variances. According to maximum a posteriori probability (MAP) and Bayes' rule, we first derive a local objective function for image intensities in a neighborhood around each pixel. Then this local objective function is integrated with respect to the neighborhood center over the entire image domain to give a global criterion. In the level set framework, this global criterion defines an energy in terms of the level set functions that represent a partition of the image domain and a bias field that accounts for the intensity inhomogeneity of the image. Therefore, image segmentation and bias field estimation are simultaneously achieved via a level set evolution process. Experimental results for synthetic and real images show the desirable performance of our method.
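    The per-pixel MAP rule underlying the local objective function can be sketched as follows. This is a plain Gaussian classification step, not the authors' full variational model; the class means, variances and priors are illustrative.

```python
import numpy as np

# MAP assignment under per-class Gaussians: each tissue class k is modelled
# as N(mu_k, sigma_k^2); a pixel intensity x is assigned to the class that
# maximises log p(x | k) + log pi_k.
def map_label(x, mus, sigmas, priors):
    x = np.asarray(x, dtype=float)[..., None]      # broadcast over classes
    log_post = (-0.5 * np.log(2 * np.pi * np.square(sigmas))
                - 0.5 * np.square(x - mus) / np.square(sigmas)
                + np.log(priors))
    return np.argmax(log_post, axis=-1)

mus = np.array([50.0, 150.0])      # illustrative class means
sigmas = np.array([10.0, 20.0])    # illustrative class standard deviations
priors = np.array([0.5, 0.5])
print(map_label([40, 60, 140, 200], mus, sigmas, priors))  # -> [0 0 1 1]
```

    In the paper's model this rule is applied within a local neighborhood around each pixel, with the means additionally modulated by the bias field, and the resulting criterion is minimised by level set evolution.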

  8. ACRIM III Level 2 Daily Mean Data V001

    Data.gov (United States)

    National Aeronautics and Space Administration — Active Cavity Radiometer Irradiance Monitor (ACRIM) III Level 2 Daily Mean Data product consists of Level 2 total solar irradiance in the form of daily means...

  9. Integrating cultural community psychology: activity settings and the shared meanings of intersubjectivity.

    Science.gov (United States)

    O'Donnell, Clifford R; Tharp, Roland G

    2012-03-01

    Cultural and community psychology share a common emphasis on context, yet their leading journals rarely cite each other's articles. Greater integration of the concepts of culture and community within and across their disciplines would enrich and facilitate the viability of cultural community psychology. The contextual theory of activity settings is proposed as one means to integrate the concepts of culture and community in cultural community psychology. Through shared activities, participants develop common experiences that affect their psychological being, including their cognitions, emotions, and behavioral development. The psychological result of these experiences is intersubjectivity. Culture is defined as the shared meanings that people develop through their common historic, linguistic, social, economic, and political experiences. The shared meanings of culture arise through the intersubjectivity developed in activity settings. Cultural community psychology presents formidable epistemological challenges, but overcoming these challenges could contribute to the transformation and advancement of community psychology.

  10. Tides, surges and mean sea-level

    National Research Council Canada - National Science Library

    Pugh, D. T

    1987-01-01

    Interest in mean sea-level changes has recently been focused on the possibility of significant increases over the coming century as a result of global warming. Examples of applications from North America, Europe and other parts of the world are included.

  11. Mean level signal crossing rate for an arbitrary stochastic process

    DEFF Research Database (Denmark)

    Yura, Harold T.; Hanson, Steen Grüner

    2010-01-01

    The issue of the mean signal level crossing rate for various probability density functions with primary relevance for optics is discussed based on a new analytical method. This method relies on a unique transformation that transforms the probability distribution under investigation into a normal...... probability distribution, for which the distribution of mean level crossings is known. In general, the analytical results for the mean level crossing rate are supported and confirmed by numerical simulations. In particular, we illustrate the present method by presenting analytic expressions for the mean level...
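    A simple numerical illustration of the quantity under study: the mean level crossing rate can be estimated from a sampled record by counting sign changes of x(t) minus its mean. A deterministic sine is used here so the expected count is known; the signal and sampling scheme are invented for the example and are not from the paper.

```python
import numpy as np

def mean_level_crossings(x):
    """Count crossings of the mean level, i.e. sign changes of x - mean(x)."""
    s = np.sign(x - x.mean())
    s = s[s != 0]                    # ignore samples that sit exactly on the mean
    return int(np.sum(s[1:] != s[:-1]))

# 10 periods of a sine, sampled at cell centres so no sample hits the mean:
# interior crossings occur at t = 0.5, 1.0, ..., 9.5, i.e. 19 in total
# (about two per period, with the boundary crossings outside the record).
t = (np.arange(10_000) + 0.5) * 10.0 / 10_000
x = np.sin(2 * np.pi * t)
print(mean_level_crossings(x))       # -> 19
```

    For a stochastic process the same counter, averaged over realisations and divided by the record length, gives a Monte Carlo estimate of the mean level crossing rate that the analytical method above predicts.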

  12. Empirical Assessment of the Mean Block Volume of Rock Masses Intersected by Four Joint Sets

    Science.gov (United States)

    Morelli, Gian Luca

    2016-05-01

    The estimation of a representative value for the rock block volume (V_b) is of great interest in rock engineering for rock mass characterization purposes. However, while mathematical relationships to precisely estimate this parameter from the spacing of joints can be found in literature for rock masses intersected by three dominant joint sets, corresponding relationships do not actually exist when more than three sets occur. In these cases, a consistent assessment of V_b can only be achieved by directly measuring the dimensions of several representative natural rock blocks in the field or by means of more sophisticated 3D numerical modeling approaches. However, Palmström's empirical relationship based on the volumetric joint count J_v and on a block shape factor β is commonly used in practice, although strictly valid only for rock masses intersected by three joint sets. Starting from these considerations, the present paper is primarily intended to investigate the reliability of a set of empirical relationships linking the block volume with the indexes most commonly used to characterize the degree of jointing in a rock mass (i.e. the J_v and the mean value of the joint set spacings) specifically applicable to rock masses intersected by four sets of persistent discontinuities. Based on the analysis of artificial 3D block assemblies generated using the software AutoCAD, the most accurate best-fit regression has been found between the mean block volume (V_{b_m}) of tested rock mass samples and the geometric mean value of the spacings of the joint sets delimiting blocks; thus indicating this mean value as a promising parameter for the preliminary characterization of the block size. Tests on field outcrops have demonstrated that the proposed empirical methodology has the potential of predicting the mean block volume of multiple-set jointed rock masses with an acceptable accuracy for common uses in most practical rock engineering applications.
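    The two jointing indexes named above are straightforward to compute: the volumetric joint count J_v is the sum of the reciprocal spacings over all sets, and the geometric mean spacing is the n-th root of the product of the spacings. A minimal sketch for a hypothetical rock mass cut by four joint sets (the spacing values are invented):

```python
import math

# Jointing indexes for a rock mass intersected by four joint sets.
spacings = [0.4, 0.6, 0.8, 1.2]   # joint set spacings in metres (illustrative)

# Geometric mean spacing: the parameter the regression above is built on.
s_g = math.prod(spacings) ** (1.0 / len(spacings))

# Volumetric joint count Jv: joints per metre summed over all sets.
j_v = sum(1.0 / s for s in spacings)

print(round(s_g, 4), round(j_v, 4))   # -> 0.6928 6.25
```

    The paper's regression then relates the mean block volume V_{b_m} to this geometric mean spacing; the fitted coefficients are not reproduced here.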

  13. On multiple level-set regularization methods for inverse problems

    International Nuclear Information System (INIS)

    DeCezaro, A; Leitão, A; Tai, X-C

    2009-01-01

    We analyze a multiple level-set method for solving inverse problems with piecewise constant solutions. This method corresponds to an iterated Tikhonov method for a particular Tikhonov functional G_α based on TV–H^1 penalization. We define generalized minimizers for our Tikhonov functional and establish an existence result. Moreover, we prove convergence and stability results of the proposed Tikhonov method. A multiple level-set algorithm is derived from the first-order optimality conditions for the Tikhonov functional G_α, similarly to the iterated Tikhonov method. The proposed multiple level-set method is tested on an inverse potential problem. Numerical experiments show that the method is able to recover multiple objects as well as multiple contrast levels.

  14. A level set method for multiple sclerosis lesion segmentation.

    Science.gov (United States)

    Zhao, Yue; Guo, Shuxu; Luo, Min; Shi, Xue; Bilello, Michel; Zhang, Shaoxiang; Li, Chunming

    2018-06-01

    In this paper, we present a level set method for multiple sclerosis (MS) lesion segmentation from FLAIR images in the presence of intensity inhomogeneities. We use a three-phase level set formulation of segmentation and bias field estimation to segment MS lesions and normal tissue region (including GM and WM) and CSF and the background from FLAIR images. To save computational load, we derive a two-phase formulation from the original multi-phase level set formulation to segment the MS lesions and normal tissue regions. The derived method inherits the desirable ability to precisely locate object boundaries of the original level set method, which simultaneously performs segmentation and estimation of the bias field to deal with intensity inhomogeneity. Experimental results demonstrate the advantages of our method over other state-of-the-art methods in terms of segmentation accuracy. Copyright © 2017 Elsevier Inc. All rights reserved.

  15. A parametric level-set method for partially discrete tomography

    NARCIS (Netherlands)

    A. Kadu (Ajinkya); T. van Leeuwen (Tristan); K.J. Batenburg (Joost)

    2017-01-01

    This paper introduces a parametric level-set method for tomographic reconstruction of partially discrete images. Such images consist of a continuously varying background and an anomaly with a constant (known) grey-value. We express the geometry of the anomaly using a level-set function,

  16. Intonational meaning in institutional settings: the role of syntagmatic relations

    Science.gov (United States)

    Wichmann, Anne

    2010-12-01

    This paper addresses the power of intonation to convey interpersonal or attitudinal meaning. Speakers have been shown to accommodate to each other in the course of conversation, and this convergence may be perceived as a sign of empathy. Accommodation often involves paradigmatic choices—choosing the same words, gestures, regional accent or melodic pattern, but this paper suggests that affective meaning can also be conveyed syntagmatically through the relationship between prosodic features in successive utterances. The paper also addresses the use of prosody in situations of conflict, particularly in institutional settings. The requirement of the more powerful participant to exercise control may conflict with the expression of empathy. Situations are described where divergent rather than convergent behaviour is more successful both in keeping control and in maintaining rapport.

  17. Structural level set inversion for microwave breast screening

    International Nuclear Information System (INIS)

    Irishina, Natalia; Álvarez, Diego; Dorn, Oliver; Moscoso, Miguel

    2010-01-01

    We present a new inversion strategy for the early detection of breast cancer from microwave data which is based on a new multiphase level set technique. This novel structural inversion method uses a modification of the color level set technique adapted to the specific situation of structural breast imaging taking into account the high complexity of the breast tissue. We only use data at a few microwave frequencies for detecting the tumors hidden in this complex structure. Three level set functions are employed for describing four different types of breast tissue, where each of these four regions is allowed to have a complicated topology and to have an interior structure which needs to be estimated from the data simultaneously with the region interfaces. The algorithm consists of several stages of increasing complexity. In each stage more details about the anatomical structure of the breast interior are incorporated into the inversion model. The synthetic breast models which are used for creating simulated data are based on real MRI images of the breast and are therefore quite realistic. Our results demonstrate the potential and feasibility of the proposed level set technique for detecting, locating and characterizing a small tumor in its early stage of development embedded in such a realistic breast model. Both the data acquisition simulation and the inversion are carried out in 2D.

  18. Privacy-Preserving k-Means Clustering under Multiowner Setting in Distributed Cloud Environments

    Directory of Open Access Journals (Sweden)

    Hong Rong

    2017-01-01

    With the advent of the big data era, clients who lack computational and storage resources tend to outsource data mining tasks to cloud service providers in order to improve efficiency and reduce costs. It is also increasingly common for clients to perform collaborative mining to maximize profits. However, due to the rise of privacy leakage issues, the data contributed by clients should be encrypted using their own keys. This paper focuses on privacy-preserving k-means clustering over the joint datasets encrypted under multiple keys. Unfortunately, existing outsourcing k-means protocols are impractical because not only are they restricted to a single key setting, but also they are inefficient and nonscalable for distributed cloud computing. To address these issues, we propose a set of privacy-preserving building blocks and an outsourced k-means clustering protocol under the Spark framework. Theoretical analysis shows that our scheme protects the confidentiality of the joint database and mining results, as well as access patterns under the standard semihonest model with relatively small computational overhead. Experimental evaluations on real datasets also demonstrate its efficiency improvements compared with existing approaches.
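    For reference, the plaintext k-means iteration that such a protocol evaluates under encryption looks as follows. This sketch involves no cryptography; the data, number of clusters and random initialization are illustrative.

```python
import numpy as np

# Plain Lloyd iteration for k-means: alternate nearest-centre assignment
# and cluster-mean update. The privacy-preserving protocol above performs
# the same two steps, but on encrypted data split across parties.
def kmeans(points, k, iters=20, seed=0):
    rng = np.random.default_rng(seed)
    centers = points[rng.choice(len(points), k, replace=False)]
    for _ in range(iters):
        # Assignment step: nearest centre by squared Euclidean distance.
        d = ((points[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(1)
        # Update step: each centre moves to the mean of its cluster.
        centers = np.array([points[labels == j].mean(0) for j in range(k)])
    return centers, labels

pts = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
centers, labels = kmeans(pts, 2)
print(np.sort(centers[:, 0]).round(2))   # -> [0.05 5.05]
```

    Both steps reduce to additions, multiplications and comparisons, which is why they can be rebuilt from homomorphic building blocks in the multi-key setting described above.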

  19. Multi-Attribute Decision-Making Based on Bonferroni Mean Operators under Cubic Intuitionistic Fuzzy Set Environment

    Directory of Open Access Journals (Sweden)

    Gagandeep Kaur

    2018-01-01

    The cubic intuitionistic fuzzy (CIF) set is a hybrid set which can contain much more information, expressing an interval-valued intuitionistic fuzzy set and an intuitionistic fuzzy set simultaneously for handling the uncertainties in the data. Unfortunately, there has been no research on aggregation operators on CIF sets so far. Since an aggregation operator is an important mathematical tool in decision-making problems, the present paper proposes some new Bonferroni mean and weighted Bonferroni mean averaging operators between the cubic intuitionistic fuzzy numbers for aggregating the different preferences of the decision-maker. Then, we develop a decision-making method based on the proposed operators under the cubic intuitionistic fuzzy environment and illustrate it with a numerical example. Finally, a comparison analysis between the proposed and the existing approaches has been performed to illustrate the applicability and feasibility of the developed decision-making method.
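    The classical (crisp) Bonferroni mean that the proposed cubic intuitionistic operators generalize is BM^{p,q}(a_1, ..., a_n) = (1/(n(n-1)) Σ_{i≠j} a_i^p a_j^q)^{1/(p+q)}. A minimal sketch, with p, q and the input values chosen only for illustration:

```python
import itertools

# Crisp Bonferroni mean: averages products a_i^p * a_j^q over all ordered
# pairs i != j, then takes the (p+q)-th root.
def bonferroni_mean(a, p, q):
    n = len(a)
    s = sum(a[i] ** p * a[j] ** q
            for i, j in itertools.permutations(range(n), 2))
    return (s / (n * (n - 1))) ** (1.0 / (p + q))

print(round(bonferroni_mean([0.2, 0.5, 0.9], p=1, q=1), 4))
# Idempotency check: the Bonferroni mean of a constant sequence is that constant.
print(round(bonferroni_mean([0.7, 0.7, 0.7], p=2, q=3), 4))  # -> 0.7
```

    The appeal of this operator in decision-making is that the cross terms a_i^p a_j^q capture interrelationships between different criteria, a property the CIF extensions above inherit.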

  20. Fourier transform and mean quadratic variation of Bernoulli convolution on homogeneous Cantor set

    Energy Technology Data Exchange (ETDEWEB)

    Yu, Zuguo (E-mail: yuzg@hotmail.com)

    2004-07-01

    For the Bernoulli convolution on the homogeneous Cantor set, under certain conditions, it is proved that the mean quadratic variation and the average of the Fourier transform of this measure are bounded above and below.

  1. Identifying Heterogeneities in Subsurface Environment using the Level Set Method

    Energy Technology Data Exchange (ETDEWEB)

    Lei, Hongzhuan [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Lu, Zhiming [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Vesselinov, Velimir Valentinov [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-08-25

    These are slides from a presentation on identifying heterogeneities in the subsurface environment using the level set method. The slides cover the motivation, an explanation of the Level Set Method (LSM) and its algorithms, several examples, and future work.

  2. Hearing Tests on Mobile Devices: Evaluation of the Reference Sound Level by Means of Biological Calibration.

    Science.gov (United States)

    Masalski, Marcin; Kipiński, Lech; Grysiński, Tomasz; Kręcicki, Tomasz

    2016-05-30

    Hearing tests carried out in a home setting by means of mobile devices require prior calibration of the reference sound level. Mobile devices with bundled headphones create a possibility of applying the predefined level for a particular model as an alternative to calibrating each device separately. The objective of this study was to determine the reference sound level for sets composed of a mobile device and bundled headphones. Reference sound levels for Android-based mobile devices were determined using an open access mobile phone app by means of biological calibration, that is, in relation to the normal-hearing threshold. The examinations were conducted in 2 groups: an uncontrolled and a controlled one. In the uncontrolled group, the fully automated self-measurements were carried out in home conditions by 18- to 35-year-old subjects, without prior hearing problems, recruited online. Calibration was conducted as a preliminary step in preparation for further examination. In the controlled group, audiologist-assisted examinations were performed in a sound booth, on normal-hearing subjects verified through pure-tone audiometry, recruited offline from among the workers and patients of the clinic. In both groups, the reference sound levels were determined on a subject's mobile device using Bekesy audiometry. The reference sound levels were compared between the groups. Intramodel and intermodel analyses were carried out as well. In the uncontrolled group, 8988 calibrations were conducted on 8620 different devices representing 2040 models. In the controlled group, 158 calibrations (test and retest) were conducted on 79 devices representing 50 models. Result analysis was performed for the 10 most frequently used models in both groups. The difference in reference sound levels between uncontrolled and controlled groups was 1.50 dB (SD 4.42). The mean SD of the reference sound level determined for devices within the same model was 4.03 dB (95% CI 3

  3. Setting-level influences on implementation of the responsive classroom approach.

    Science.gov (United States)

    Wanless, Shannon B; Patton, Christine L; Rimm-Kaufman, Sara E; Deutsch, Nancy L

    2013-02-01

    We used mixed methods to examine the association between setting-level factors and observed implementation of a social and emotional learning intervention (Responsive Classroom® approach; RC). In study 1 (N = 33 3rd grade teachers after the first year of RC implementation), we identified relevant setting-level factors and uncovered the mechanisms through which they related to implementation. In study 2 (N = 50 4th grade teachers after the second year of RC implementation), we validated our most salient Study 1 finding across multiple informants. Findings suggested that teachers perceived setting-level factors, particularly principal buy-in to the intervention and individualized coaching, as influential to their degree of implementation. Further, we found that intervention coaches' perspectives of principal buy-in were more related to implementation than principals' or teachers' perspectives. Findings extend the application of setting theory to the field of implementation science and suggest that interventionists may want to consider particular accounts of school setting factors before determining the likelihood of schools achieving high levels of implementation.

  4. Segmenting the Parotid Gland using Registration and Level Set Methods

    DEFF Research Database (Denmark)

    Hollensen, Christian; Hansen, Mads Fogtmann; Højgaard, Liselotte

    The method was evaluated on a test set consisting of 8 corresponding data sets. The attained total volume Dice coefficient and mean Hausdorff distance were 0.61 ± 0.20 and 15.6 ± 7.4 mm respectively. The method has improvement potential which could be exploited in order for clinical introduction....

  5. Multi-phase flow monitoring with electrical impedance tomography using level set based method

    International Nuclear Information System (INIS)

    Liu, Dong; Khambampati, Anil Kumar; Kim, Sin; Kim, Kyung Youn

    2015-01-01

    Highlights: • LSM has been used for shape reconstruction to monitor multi-phase flow using EIT. • Multi-phase level set model for conductivity is represented by two level set functions. • LSM handles topological merging and breaking naturally during evolution process. • To reduce the computational time, a narrowband technique was applied. • Use of narrowband and optimization approach results in efficient and fast method. - Abstract: In this paper, a level set-based reconstruction scheme is applied to multi-phase flow monitoring using electrical impedance tomography (EIT). The proposed scheme involves applying a narrowband level set method to solve the inverse problem of finding the interface between the regions having different conductivity values. The multi-phase level set model for the conductivity distribution inside the domain is represented by two level set functions. The key principle of the level set-based method is to implicitly represent the shape of interface as the zero level set of higher dimensional function and then solve a set of partial differential equations. The level set-based scheme handles topological merging and breaking naturally during the evolution process. It also offers several advantages compared to traditional pixel-based approach. Level set-based method for multi-phase flow is tested with numerical and experimental data. It is found that level set-based method has better reconstruction performance when compared to pixel-based method
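    The narrowband idea used above to reduce computational time can be sketched in a few lines: the level-set update is restricted to pixels within a band |φ| < width around the zero level set, rather than the whole grid. The grid size, circle radius and band width below are invented for illustration.

```python
import numpy as np

# Narrowband selection around the zero level set of a signed distance field.
n, width = 64, 3.0
y, x = np.mgrid[0:n, 0:n]
phi = np.hypot(x - n / 2, y - n / 2) - 15.0   # signed distance to a circle

# Only these pixels would be updated in one evolution step.
band = np.abs(phi) < width
print(band.sum(), n * n)    # the band is far smaller than the full grid
```

    Because the interface only moves a fraction of a cell per step, updating just this thin band (and periodically rebuilding it) gives the same evolution at a small fraction of the per-step cost, which is the speed-up reported above.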

  6. An improved level set method for brain MR images segmentation and bias correction.

    Science.gov (United States)

    Chen, Yunjie; Zhang, Jianwei; Macione, Jim

    2009-10-01

    Intensity inhomogeneities cause considerable difficulty in the quantitative analysis of magnetic resonance (MR) images. Thus, bias field estimation is a necessary step before quantitative analysis of MR data can be undertaken. This paper presents a variational level set approach to bias correction and segmentation for images with intensity inhomogeneities. Our method is based on an observation that intensities in a relatively small local region are separable, despite the inseparability of the intensities in the whole image caused by the overall intensity inhomogeneity. We first define a localized K-means-type clustering objective function for image intensities in a neighborhood around each point. The cluster centers in this objective function have a multiplicative factor that estimates the bias within the neighborhood. The objective function is then integrated over the entire domain to define the data term in the level set framework. Our method is able to capture bias of quite general profiles. Moreover, it is robust to initialization, and thereby allows fully automated applications. The proposed method has been used for images of various modalities with promising results.

  7. Annual mean sea level and its sensitivity to wind climate

    Science.gov (United States)

    Gerkema, Theo; Duran Matute, Matias

    2017-04-01

    Changes in relative mean sea level affect coastal areas in various ways, such as the risk of flooding, the evolution of barrier island systems, or the development of salt marshes. Long-term trends in these changes are partly masked by variability on shorter time scales. Some of this variability, for instance due to wind waves and tides (with the exception of long-period tides), is easily averaged out. In contrast, inter-annual variability is found to be irregular and large, of the order of several decimeters, as is evident from tide gauge records. This is why the climatic trend, typically of a few millimeters per year, can only be reliably identified by examining a record that is long enough to outweigh the inter-annual and decadal variabilities. In this presentation we examine the relation between the annual wind conditions from meteorological records and annual mean sea level along the Dutch coast. To do this, we need reliable and consistent long-term wind records. Some wind records from weather stations in the Netherlands date back to the 19th century, but they are unsuitable for trend analysis because of changes in location, height, surroundings, instrument type or protocol. For this reason, we will use only more recent, homogeneous wind records, from the past two decades. The question then is whether such a relatively short record is sufficient to find a convincing relation with annual mean sea level. It is the purpose of this work to demonstrate that the answer is positive and to suggest methods to find and exploit such a relation. We find that at the Dutch coast, southwesterly winds are dominant in the wind climate, but the west-east direction stands out as having the highest correlation with annual mean sea level. For different stations in the Dutch Wadden Sea and along the coast, we find a qualitatively similar pattern, although the precise values of the correlations vary. 
The inter-annual variability of mean sea level can already be largely explained by

  8. Level Set Structure of an Integrable Cellular Automaton

    Directory of Open Access Journals (Sweden)

    Taichiro Takagi

    2010-03-01

    Based on a group-theoretical setting, a discrete dynamical system is constructed and applied to a combinatorial dynamical system defined on the set of certain Bethe ansatz related objects known as rigged configurations. This system is then used to study a one-dimensional periodic cellular automaton related to the discrete Toda lattice. It is shown for the first time that the level set of this cellular automaton decomposes into connected components and that every such component is a torus.

  9. A web-based study of the relationship of duration of insulin pump infusion set use and fasting blood glucose level in adults with type 1 diabetes.

    Science.gov (United States)

    Sampson Perrin, Alysa J; Guzzetta, Russell C; Miller, Kellee M; Foster, Nicole C; Lee, Anna; Lee, Joyce M; Block, Jennifer M; Beck, Roy W

    2015-05-01

    To evaluate the impact of infusion set use duration on glycemic control, we conducted an Internet-based study using the T1D Exchange's online patient community, Glu (myGlu.org). For 14 days, 243 electronically consented adults with type 1 diabetes (T1D) entered online that day's fasting blood glucose (FBG) level, the prior day's total daily insulin (TDI) dose, and whether the infusion set was changed. Mean duration of infusion set use was 3.0 days. Mean FBG level was higher with each successive day of infusion set use, increasing from 126 mg/dL on Day 1 to 133 mg/dL on Day 3 to 147 mg/dL on Day 5 (P<0.001). TDI dose did not vary with increased duration of infusion set use. Internet-based data collection was used to rapidly conduct the study at low cost. The results indicate that FBG levels increase with each additional day of insulin pump infusion set use.
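    The day-by-day aggregation behind the reported means can be sketched as follows. The records below are invented (not study data), chosen so that the day 1, 3 and 5 averages happen to match the 126, 133 and 147 mg/dL figures quoted above.

```python
from collections import defaultdict
from statistics import mean

# (day_of_infusion_set_use, FBG in mg/dL) -- invented example records
records = [
    (1, 120), (1, 132), (2, 128), (2, 134),
    (3, 130), (3, 136), (4, 139), (5, 145), (5, 149),
]

# Group FBG readings by day of set use, then average each group.
by_day = defaultdict(list)
for day, fbg in records:
    by_day[day].append(fbg)

for day in sorted(by_day):
    print(day, round(mean(by_day[day]), 1))
```

    In the actual study the same grouping is applied to the 14-day diaries of all 243 participants before comparing means across days of set use.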

  10. Choice architecture as a means to change eating behaviour in self-service settings

    DEFF Research Database (Denmark)

    Skov, Laurits Rohden; Lourenco, Sofia; Laub Hansen, Gitte

    2013-01-01

    Summary: The primary objective of this review was to investigate the current evidence base for the use of choice architecture as a means to change eating behaviour in self-service eating settings, and hence potentially reduce calorie intake. Twelve databases were searched systematically for experimental ...

  11. Automatic segmentation of Leishmania parasite in microscopic images using a modified CV level set method

    Science.gov (United States)

    Farahi, Maria; Rabbani, Hossein; Talebi, Ardeshir; Sarrafzadeh, Omid; Ensafi, Shahab

    2015-12-01

    Visceral Leishmaniasis is a parasitic disease that affects the liver, spleen and bone marrow. According to the World Health Organization report, definitive diagnosis is possible only by direct observation of the Leishman body in the microscopic image taken from bone marrow samples. We utilize morphological operations and the CV level set method to segment Leishman bodies in digital color microscopic images captured from bone marrow samples. A linear contrast stretching method is used for image enhancement and a morphological method is applied to determine the parasite regions and remove unwanted objects. Modified global and local CV level set methods are proposed for segmentation and a shape-based stopping factor is used to accelerate the algorithm. Manual segmentation is considered as ground truth to evaluate the proposed method. This method is tested on 28 samples and achieved a mean segmentation error of 10.90% for the global model and 9.76% for the local model.
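    The core of the Chan-Vese (CV) data term that such segmentation builds on can be sketched compactly: given a partition of the image, compute the mean intensity inside and outside the contour, and a pointwise force telling each pixel which region it fits better. The image below is synthetic and the partition is a crude initial guess, not the authors' pipeline.

```python
import numpy as np

# Synthetic image: a bright "parasite" region on a dark background.
img = np.zeros((32, 32))
img[10:22, 10:22] = 1.0

# A crude partition: phi > 0 marks the inside of the contour.
phi = np.where(img > 0.5, 1.0, -1.0)

c1 = img[phi > 0].mean()                     # mean intensity inside
c2 = img[phi <= 0].mean()                    # mean intensity outside
# CV data force: positive where the pixel matches the inside mean better,
# so the evolving contour is pushed to enclose it.
force = (img - c2) ** 2 - (img - c1) ** 2
print(c1, c2, bool(force.max() > 0))
```

    The global and local variants above differ in whether c1 and c2 are computed over the whole image or over a local window, which is what makes the local model more robust to uneven staining.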

  12. Reevaluation of steam generator level trip set point

    Energy Technology Data Exchange (ETDEWEB)

    Shim, Yoon Sub; Soh, Dong Sub; Kim, Sung Oh; Jung, Se Won; Sung, Kang Sik; Lee, Joon [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of)

    1994-06-01

    Reactor trips on low steam generator water level account for a substantial portion of reactor scrams in a nuclear plant, and the feasibility of modifying the steam generator water level trip system of YGN 1/2 was evaluated in this study. The study revealed that removal of the reactor trip function from the SG water level trip system is not possible for plant safety reasons, but relaxation of the trip set point by 9% is feasible. The set point relaxation requires drilling new holes for level measurement in the operating steam generators. Characteristics of the negative neutron flux rate trip and the reactor trip were also reviewed as additional work. Since the purpose of the trip system modification, namely reducing the reactor scram frequency, is not to satisfy legal requirements but to improve plant performance, and since the modification has both positive and negative aspects, the decision on actual modification needs to be made based on the results of this study and on the policy of the plant owner. 37 figs, 6 tabs, 14 refs. (Author).

  13. Inter-comparison of stratospheric mean-meridional circulation and eddy mixing among six reanalysis data sets

    Directory of Open Access Journals (Sweden)

    K. Miyazaki

    2016-05-01

    The stratospheric mean-meridional circulation (MMC) and eddy mixing are compared among six meteorological reanalysis data sets: NCEP-NCAR, NCEP-CFSR, ERA-40, ERA-Interim, JRA-25, and JRA-55 for the period 1979–2012. The reanalysis data sets produced using advanced systems (i.e., NCEP-CFSR, ERA-Interim, and JRA-55) generally reveal a weaker MMC in the Northern Hemisphere (NH) compared with those produced using older systems (i.e., NCEP/NCAR, ERA-40, and JRA-25). The mean mixing strength differs considerably among the data products. In the NH lower stratosphere, the contribution of planetary-scale mixing is larger in the new data sets than in the old data sets, whereas that of small-scale mixing is weaker in the new data sets. Conventional data assimilation techniques introduce analysis increments without maintaining physical balance, which may have caused an overly strong MMC and spurious small-scale eddies in the old data sets. At the NH mid-latitudes, only ERA-Interim reveals a weakening MMC trend in the deep branch of the Brewer–Dobson circulation (BDC). The relative importance of the eddy mixing compared with the mean-meridional transport in the subtropical lower stratosphere shows increasing trends in ERA-Interim and JRA-55; this, together with the weakened MMC in the deep branch, may imply an increasing age-of-air (AoA) in the NH middle stratosphere in ERA-Interim. Overall, discrepancies between the different variables and trends therein as derived from the different reanalyses are still relatively large, suggesting that more investments in these products are needed in order to obtain a consolidated picture of observed changes in the BDC and the mechanisms that drive them.

  14. Gradient augmented level set method for phase change simulations

    Science.gov (United States)

    Anumolu, Lakshman; Trujillo, Mario F.

    2018-01-01

    A numerical method for the simulation of two-phase flow with phase change based on the Gradient-Augmented-Level-set (GALS) strategy is presented. Sharp capturing of the vaporization process is enabled by: i) identification of the vapor-liquid interface, Γ (t), at the subgrid level, ii) discontinuous treatment of thermal physical properties (except for μ), and iii) enforcement of mass, momentum, and energy jump conditions, where the gradients of the dependent variables are obtained at Γ (t) and are consistent with their analytical expression, i.e. no local averaging is applied. Treatment of the jump in velocity and pressure at Γ (t) is achieved using the Ghost Fluid Method. The solution of the energy equation employs the sub-grid knowledge of Γ (t) to discretize the temperature Laplacian using second-order one-sided differences, i.e. the numerical stencil completely resides within each respective phase. To carefully evaluate the benefits or disadvantages of the GALS approach, the standard level set method is implemented and compared against the GALS predictions. The results show the expected trend that interface identification and transport are predicted noticeably better with GALS over the standard level set. This benefit carries over to the prediction of the Laplacian and temperature gradients in the neighborhood of the interface, which are directly linked to the calculation of the vaporization rate. However, when combining the calculation of interface transport and reinitialization with two-phase momentum and energy, the benefits of GALS are to some extent neutralized, and the causes for this behavior are identified and analyzed. Overall the additional computational costs associated with GALS are almost the same as those using the standard level set technique.
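
The sub-grid, one-sided discretization of the temperature Laplacian described above can be illustrated in one dimension. The sketch below is illustrative only, not the authors' GALS implementation: a second-order one-sided formula for f'' whose four-point stencil stays entirely on one side of x0, as a stencil confined to a single phase must.

```python
import numpy as np

# Second-order one-sided approximation of f''(x0) using only the points
# x0, x0+h, x0+2h, x0+3h (the stencil stays on one side of the interface).
def d2_one_sided(f, x0, h):
    f0, f1, f2, f3 = (f(x0 + k * h) for k in range(4))
    return (2*f0 - 5*f1 + 4*f2 - f3) / h**2

# The formula is exact for cubics (f'' of x^3 at x0 = 1 is 6):
err_cubic = abs(d2_one_sided(lambda x: x**3, 1.0, 0.1) - 6.0)

# ...and second-order accurate on smooth functions (error ~ h^2):
e1 = abs(d2_one_sided(np.sin, 0.3, 0.10) + np.sin(0.3))
e2 = abs(d2_one_sided(np.sin, 0.3, 0.05) + np.sin(0.3))
```

Halving h roughly quarters the error, consistent with second-order accuracy.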

  15. Mapping topographic structure in white matter pathways with level set trees.

    Directory of Open Access Journals (Sweden)

    Brian P Kent

    Full Text Available Fiber tractography on diffusion imaging data offers rich potential for describing white matter pathways in the human brain, but characterizing the spatial organization in these large and complex data sets remains a challenge. We show that level set trees--which provide a concise representation of the hierarchical mode structure of probability density functions--offer a statistically-principled framework for visualizing and analyzing topography in fiber streamlines. Using diffusion spectrum imaging data collected on neurologically healthy controls (N = 30), we mapped white matter pathways from the cortex into the striatum using a deterministic tractography algorithm that estimates fiber bundles as dimensionless streamlines. Level set trees were used for interactive exploration of patterns in the endpoint distributions of the mapped fiber pathways and an efficient segmentation of the pathways that had empirical accuracy comparable to standard nonparametric clustering techniques. We show that level set trees can also be generalized to model pseudo-density functions in order to analyze a broader array of data types, including entire fiber streamlines. Finally, resampling methods show the reliability of the level set tree as a descriptive measure of topographic structure, illustrating its potential as a statistical descriptor in brain imaging analysis. These results highlight the broad applicability of level set trees for visualizing and analyzing high-dimensional data like fiber tractography output.
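
The mode structure a level set tree encodes can be sketched in one dimension: for each level lam, the upper level set {x : f(x) > lam} splits into connected components, and the tree records how components appear and merge as lam varies. A toy illustration (not the authors' implementation):

```python
import numpy as np

# For a 1D function, connected components of an upper level set are simply
# runs of True in the boolean mask f > lam; count them via rising edges.
def n_components(mask):
    m = np.concatenate(([0], mask.astype(int)))
    return int(np.sum(np.diff(m) == 1))

x = np.linspace(-4.0, 6.0, 1000)
f = np.exp(-x**2) + 0.8 * np.exp(-(x - 3.0)**2)   # bimodal "density"
high = n_components(f > 0.5)    # near the peaks: the two modes are separate
low = n_components(f > 0.05)    # at a low level: the modes have merged
```

The tree is exactly this bookkeeping done over all levels at once: two leaves (the modes) joining into one branch as the level decreases.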

  16. A simple mass-conserved level set method for simulation of multiphase flows

    Science.gov (United States)

    Yuan, H.-Z.; Shu, C.; Wang, Y.; Shu, S.

    2018-04-01

    In this paper, a modified level set method is proposed for simulation of multiphase flows with large density ratio and high Reynolds number. The present method simply introduces a source or sink term into the level set equation to compensate for the mass loss or offset the mass increase. The source or sink term is derived analytically by applying the mass conservation principle with the level set equation and the continuity equation of the flow field. Since only a source term is introduced, the application of the present method is as simple as the original level set method, but it can guarantee overall mass conservation. To validate the present method, the vortex flow problem is first considered. The simulation results are compared with those from the original level set method, which demonstrates that the modified level set method has the capability of accurately capturing the interface and maintaining mass conservation. Then, the proposed method is further validated by simulating the Laplace law, the merging of two bubbles, a bubble rising with high density ratio, and Rayleigh-Taylor instability with high Reynolds number. Numerical results show that mass is well conserved by the present method.
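
The idea of compensating mass drift can be caricatured in one dimension. The sketch below applies a global Newton-style correction after each advection step so the "liquid" mass (the region phi < 0) matches its initial value; this stand-in correction is illustrative and is not the authors' analytically derived source term.

```python
import numpy as np

# 1D mass-compensated level set advection of a "drop" occupying [0.2, 0.4].
N = 400
x = np.linspace(0.0, 1.0, N, endpoint=False)
dx = 1.0 / N
eps = 1.5 * dx                        # Heaviside smoothing half-width

def heaviside(phi):
    return np.clip(0.5 * (1.0 + phi / eps), 0.0, 1.0)

def liquid_mass(phi):
    return np.sum(1.0 - heaviside(phi)) * dx

phi = np.abs(x - 0.3) - 0.1           # signed distance to the drop
mass0 = liquid_mass(phi)
u, dt = 1.0, 0.5 * dx
for _ in range(200):                  # advect the drop a distance of 0.25
    phi = phi - u * dt / dx * (phi - np.roll(phi, 1))   # upwind, u > 0
    delta = (np.abs(phi) < eps) / (2.0 * eps)           # d(mass)/d(shift)
    phi += (liquid_mass(phi) - mass0) / max(np.sum(delta) * dx, 1e-12)

drift = abs(liquid_mass(phi) - mass0)
center = np.sum((1.0 - heaviside(phi)) * x) / np.sum(1.0 - heaviside(phi))
```

Despite the diffusive first-order transport, the enclosed mass stays at its initial value while the drop translates from 0.3 to 0.55.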

  17. Reconstruction of thin electromagnetic inclusions by a level-set method

    International Nuclear Information System (INIS)

    Park, Won-Kwang; Lesselier, Dominique

    2009-01-01

    In this contribution, we consider a technique of electromagnetic imaging (at a single, non-zero frequency) which uses the level-set evolution method for reconstructing a thin inclusion (possibly made of disconnected parts) with either dielectric or magnetic contrast with respect to the embedding homogeneous medium. Emphasis is on the proof of the concept, the scattering problem at hand being so far based on a two-dimensional scalar model. To do so, two level-set functions are employed; the first one describes location and shape, and the other one describes connectivity and length. Speeds of evolution of the level-set functions are calculated via the introduction of Fréchet derivatives of a least-square cost functional. Several numerical experiments, on noiseless as well as noisy data, illustrate how the proposed method behaves.

  18. Level-Set Methodology on Adaptive Octree Grids

    Science.gov (United States)

    Gibou, Frederic; Guittet, Arthur; Mirzadeh, Mohammad; Theillard, Maxime

    2017-11-01

    Numerical simulations of interfacial problems in fluids require a methodology capable of tracking surfaces that can undergo changes in topology and capable of imposing jump boundary conditions in a sharp manner. In this talk, we will discuss recent advances in the level-set framework, in particular one that is based on adaptive grids.

  19. High-level radioactive waste management. A means to social consensus

    International Nuclear Information System (INIS)

    Pierce, B.; Hill, D.

    1984-01-01

    The problem of safely disposing of high-level radioactive waste is not new, but it is becoming more pressing as the temporary storage facilities of public utilities run out. The technical questions of how best to immobilize these wastes for many centuries have been studied for years and many feel that these problems are solved, or nearly so. In the USA many states have set up roadblocks to the federal waste management programme, however, and it is clear that social consensus must be reached for any waste disposal programme to be successful. The Nuclear Waste Policy Act of 1982 provides a long needed framework for reaching this consensus, giving the states unprecedented access to federal decision making. The rights of the states in a process of co-operation and consultation are clearly defined by the Act, but the means by which the states exercise these rights are left entirely to them. We examine the structures, methods and goals open to the states, and recommend a rationale for the state decision process defining the roles of the governor and legislature. (author)

  20. High-level radioactive waste management: a means to social consensus

    International Nuclear Information System (INIS)

    Pierce, B.; Hill, D.; Haefele, E.T.

    1983-01-01

    The problem of safely disposing of high-level radioactive waste is not new, but it is becoming more pressing as the temporary storage facilities of public utilities run out. The technical questions of how best to immobilize these wastes for many centuries have been studied for years and many feel that these problems are solved, or nearly so. Many states have set up roadblocks to the federal waste management program, however, and it is clear that social consensus must be reached for any waste disposal program to be successful. The Nuclear Waste Policy Act of 1982 provides a long-needed framework for reaching this consensus, giving the states unprecedented access to federal decision-making. The rights of the states in a process of cooperation and consultation are clearly defined by the Act, but the means by which the states exercise those rights are left entirely to them. We examine the structures, methods, and goals open to the states, and recommend a rationale for the state decision process defining the roles of the governor and legislature

  1. An accurate conservative level set/ghost fluid method for simulating turbulent atomization

    International Nuclear Information System (INIS)

    Desjardins, Olivier; Moureau, Vincent; Pitsch, Heinz

    2008-01-01

    This paper presents a novel methodology for simulating incompressible two-phase flows by combining an improved version of the conservative level set technique introduced in [E. Olsson, G. Kreiss, A conservative level set method for two phase flow, J. Comput. Phys. 210 (2005) 225-246] with a ghost fluid approach. By employing a hyperbolic tangent level set function that is transported and re-initialized using fully conservative numerical schemes, mass conservation issues that are known to affect level set methods are greatly reduced. In order to improve the accuracy of the conservative level set method, high order numerical schemes are used. The overall robustness of the numerical approach is increased by computing the interface normals from a signed distance function reconstructed from the hyperbolic tangent level set by a fast marching method. The convergence of the curvature calculation is ensured by using a least squares reconstruction. The ghost fluid technique provides a way of handling the interfacial forces and large density jumps associated with two-phase flows with good accuracy, while avoiding artificial spreading of the interface. Since the proposed approach relies on partial differential equations, its implementation is straightforward in all coordinate systems, and it benefits from high parallel efficiency. The robustness and efficiency of the approach is further improved by using implicit schemes for the interface transport and re-initialization equations, as well as for the momentum solver. The performance of the method is assessed through both classical level set transport tests and simple two-phase flow examples including topology changes. It is then applied to simulate turbulent atomization of a liquid Diesel jet at Re=3000. The conservation errors associated with the accurate conservative level set technique are shown to remain small even for this complex case
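
The hyperbolic tangent level set at the heart of the method maps a signed distance d to a smeared, volume-fraction-like profile, and the abstract's signed distance reconstruction inverts this map (there via fast marching). A minimal sketch of the two mappings, illustrative rather than the paper's implementation:

```python
import numpy as np

# Conservative level set: replace the signed distance d by a hyperbolic
# tangent profile psi in (0, 1); the 0.5 isocontour marks the interface.
def to_conservative(d, eps):
    return 0.5 * (np.tanh(d / (2.0 * eps)) + 1.0)

def to_distance(psi, eps):
    # Inverse map: recover the signed distance from the tanh profile.
    return eps * np.log(psi / (1.0 - psi))

x = np.linspace(0.0, 1.0, 201)
eps = 0.02                            # interface thickness parameter
d = x - 0.5                           # signed distance, interface at x = 0.5
psi = to_conservative(d, eps)
near = np.abs(d) < 5 * eps            # round trip is reliable near the front
roundtrip_err = np.max(np.abs(to_distance(psi[near], eps) - d[near]))
```

Because psi saturates exponentially away from the interface, the inversion is only well conditioned in a narrow band, which is one reason the paper reconstructs the far-field distance by fast marching instead.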

  2. Transport and diffusion of material quantities on propagating interfaces via level set methods

    CERN Document Server

    Adalsteinsson, D

    2003-01-01

    We develop theory and numerical algorithms to apply level set methods to problems involving the transport and diffusion of material quantities in a level set framework. Level set methods are computational techniques for tracking moving interfaces; they work by embedding the propagating interface as the zero level set of a higher dimensional function, and then approximate the solution of the resulting initial value partial differential equation using upwind finite difference schemes. The traditional level set method works in the trace space of the evolving interface, and hence disregards any parameterization in the interface description. Consequently, material quantities on the interface which themselves are transported under the interface motion are not easily handled in this framework. We develop model equations and algorithmic techniques to extend the level set method to include these problems. We demonstrate the accuracy of our approach through a series of test examples and convergence studies.

  3. Transport and diffusion of material quantities on propagating interfaces via level set methods

    International Nuclear Information System (INIS)

    Adalsteinsson, David; Sethian, J.A.

    2003-01-01

    We develop theory and numerical algorithms to apply level set methods to problems involving the transport and diffusion of material quantities in a level set framework. Level set methods are computational techniques for tracking moving interfaces; they work by embedding the propagating interface as the zero level set of a higher dimensional function, and then approximate the solution of the resulting initial value partial differential equation using upwind finite difference schemes. The traditional level set method works in the trace space of the evolving interface, and hence disregards any parameterization in the interface description. Consequently, material quantities on the interface which themselves are transported under the interface motion are not easily handled in this framework. We develop model equations and algorithmic techniques to extend the level set method to include these problems. We demonstrate the accuracy of our approach through a series of test examples and convergence studies
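
In one dimension the core idea reduces to embedding the material quantity as a grid field and transporting it with the same velocity as the level set, so its value at the zero level set rides along with the front. A toy sketch (first-order upwind with an inflow boundary, not the authors' schemes):

```python
import numpy as np

def upwind_step(f, u, dx, dt):
    g = f.copy()
    g[1:] -= u * dt / dx * (f[1:] - f[:-1])   # first-order upwind, u > 0
    return g                                   # g[0] acts as the inflow value

N = 400
x = np.linspace(0.0, 1.0, N, endpoint=False)
dx = x[1] - x[0]
phi = x - 0.25                   # front starts at x = 0.25
G = np.sin(2.0 * np.pi * x)      # material quantity (value 1 at the front)
u, dt = 1.0, 0.5 * dx
for _ in range(200):             # front travels a distance of 0.25
    phi = upwind_step(phi, u, dx, dt)
    G = upwind_step(G, u, dx, dt)

i = np.argmin(np.abs(phi))       # locate the transported front
front_x, front_G = x[i], G[i]
```

The front arrives at x = 0.5 carrying (up to small numerical diffusion) the value G had at its starting point, which is the behavior the model equations formalize.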

  4. A level set approach for shock-induced α-γ phase transition of RDX

    Science.gov (United States)

    Josyula, Kartik; Rahul; De, Suvranu

    2018-02-01

    We present a thermodynamically consistent level sets approach based on a regularization energy functional, which can be directly incorporated into a Galerkin finite element framework to model interface motion. The regularization energy leads to a diffusive form of flux that is embedded within the level sets evolution equation and maintains the signed distance property of the level set function. The scheme is shown to compare well with the velocity extension method in capturing the interface position. The proposed level sets approach is employed to study the α-γ phase transformation in an RDX single crystal shocked along the (100) plane. Example problems in one and three dimensions are presented. We observe smooth evolution of the phase interface along the shock direction in both models. There is no diffusion of the interface during the zero level set evolution in the three-dimensional model. The level sets approach is shown to capture the characteristics of the shock-induced α-γ phase transformation, such as stress relaxation behind the phase interface and the finite time required for the phase transformation to complete. The regularization energy based level sets approach is efficient, robust, and easy to implement.

  5. Two Surface-Tension Formulations For The Level Set Interface-Tracking Method

    International Nuclear Information System (INIS)

    Shepel, S.V.; Smith, B.L.

    2005-01-01

    The paper describes a comparative study of two surface-tension models for the Level Set interface tracking method. In both models, the surface tension is represented as a body force, concentrated near the interface, but the technical implementation of the two options is different. The first is based on a traditional Level Set approach, in which the surface tension is distributed over a narrow band around the interface using a smoothed Delta function. In the second model, which is based on the integral form of the fluid-flow equations, the force is imposed only in those computational cells through which the interface passes. Both models have been incorporated into the Finite-Element/Finite-Volume Level Set method, previously implemented into the commercial Computational Fluid Dynamics (CFD) code CFX-4. A critical evaluation of the two models, undertaken in the context of four standard Level Set benchmark problems, shows that the first model, based on the smoothed Delta function approach, is the more general, and more robust, of the two. (author)
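
The first (smoothed Delta function) model spreads the surface-tension body force over a narrow band around the interface. For a signed distance function, the commonly used cosine-shaped delta integrates to one per interface crossing, so the total interfacial force is preserved. An illustrative sketch (not the CFX-4 implementation):

```python
import numpy as np

# Smoothed delta: nonzero only within |phi| < eps, with unit integral.
def delta_eps(phi, eps):
    d = np.zeros_like(phi)
    band = np.abs(phi) < eps
    d[band] = (1.0 + np.cos(np.pi * phi[band] / eps)) / (2.0 * eps)
    return d

x = np.linspace(-1.0, 1.0, 2001)
dx = x[1] - x[0]
phi = x                          # signed distance, interface at x = 0
eps = 15 * dx                    # band half-width: a few cells in practice
force_profile = delta_eps(phi, eps)
total = np.sum(force_profile) * dx
```

The surface-tension force density is then sigma * kappa * delta_eps(phi) * grad(phi), concentrated in the band; the second model of the paper instead assigns the force only to interface-cut cells.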

  6. A deep level set method for image segmentation

    OpenAIRE

    Tang, Min; Valipour, Sepehr; Zhang, Zichen Vincent; Cobzas, Dana; MartinJagersand

    2017-01-01

    This paper proposes a novel image segmentation approach that integrates fully convolutional networks (FCNs) with a level set model. Compared with a FCN, the integrated method can incorporate smoothing and prior information to achieve an accurate segmentation. Furthermore, different than using the level set model as a post-processing tool, we integrate it into the training phase to fine-tune the FCN. This allows the use of unlabeled data during training in a semi-supervised setting. Using two types o...

  7. A LEVEL SET BASED SHAPE OPTIMIZATION METHOD FOR AN ELLIPTIC OBSTACLE PROBLEM

    KAUST Repository

    Burger, Martin

    2011-04-01

    In this paper, we construct a level set method for an elliptic obstacle problem, which can be reformulated as a shape optimization problem. We provide a detailed shape sensitivity analysis for this reformulation and a stability result for the shape Hessian at the optimal shape. Using the shape sensitivities, we construct a geometric gradient flow, which can be realized in the context of level set methods. We prove the convergence of the gradient flow to an optimal shape and provide a complete analysis of the level set method in terms of viscosity solutions. To our knowledge this is the first complete analysis of a level set method for a nonlocal shape optimization problem. Finally, we discuss the implementation of the methods and illustrate its behavior through several computational experiments. © 2011 World Scientific Publishing Company.

  8. A LEVEL SET BASED SHAPE OPTIMIZATION METHOD FOR AN ELLIPTIC OBSTACLE PROBLEM

    KAUST Repository

    Burger, Martin; Matevosyan, Norayr; Wolfram, Marie-Therese

    2011-01-01

    analysis of the level set method in terms of viscosity solutions. To our knowledge this is the first complete analysis of a level set method for a nonlocal shape optimization problem. Finally, we discuss the implementation of the methods and illustrate its

  9. A local level set method based on a finite element method for unstructured meshes

    International Nuclear Information System (INIS)

    Ngo, Long Cu; Choi, Hyoung Gwon

    2016-01-01

    A local level set method for unstructured meshes has been implemented by using a finite element method. A least-square weighted residual method was employed for implicit discretization to solve the level set advection equation. By contrast, a direct re-initialization method, which is directly applicable to the local level set method for unstructured meshes, was adopted to re-correct the level set function to become a signed distance function after advection. The proposed algorithm was constructed such that the advection and direct reinitialization steps were conducted only for nodes inside the narrow band around the interface. Therefore, in the advection step, the Gauss–Seidel method was used to update the level set function using a node-by-node solution method. Some benchmark problems were solved by using the present local level set method. Numerical results have shown that the proposed algorithm is accurate and efficient in terms of computational time

  10. A local level set method based on a finite element method for unstructured meshes

    Energy Technology Data Exchange (ETDEWEB)

    Ngo, Long Cu; Choi, Hyoung Gwon [School of Mechanical Engineering, Seoul National University of Science and Technology, Seoul (Korea, Republic of)

    2016-12-15

    A local level set method for unstructured meshes has been implemented by using a finite element method. A least-square weighted residual method was employed for implicit discretization to solve the level set advection equation. By contrast, a direct re-initialization method, which is directly applicable to the local level set method for unstructured meshes, was adopted to re-correct the level set function to become a signed distance function after advection. The proposed algorithm was constructed such that the advection and direct reinitialization steps were conducted only for nodes inside the narrow band around the interface. Therefore, in the advection step, the Gauss–Seidel method was used to update the level set function using a node-by-node solution method. Some benchmark problems were solved by using the present local level set method. Numerical results have shown that the proposed algorithm is accurate and efficient in terms of computational time.
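
The narrow-band restriction used in both records above can be caricatured on a uniform 1D grid: only nodes with |phi| below a band half-width are advected, all others are frozen. This toy uses plain finite differences, not the papers' finite element and Gauss-Seidel machinery:

```python
import numpy as np

N = 400
x = np.linspace(0.0, 1.0, N, endpoint=False)
dx = 1.0 / N
phi = x - 0.5                    # interface at x = 0.5
gamma = 8 * dx                   # narrow-band half-width
u, dt = 1.0, 0.5 * dx
for _ in range(10):              # move the interface by 5 grid cells
    band = np.abs(phi) < gamma   # recomputed from the current phi each step
    upd = phi - u * dt / dx * (phi - np.roll(phi, 1))   # upwind, u > 0
    phi = np.where(band, upd, phi)   # nodes outside the band stay frozen

interface = x[np.argmin(np.abs(phi))]
frozen_far_field = phi[0]        # never touched: far outside the band
```

The displacement must stay within the band between updates; real implementations periodically rebuild the band and reinitialize phi to a signed distance function, exactly as the abstracts describe.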

  11. The Mean as Balance Point

    Science.gov (United States)

    O'Dell, Robin S.

    2012-01-01

    There are two primary interpretations of the mean: as a leveler of data (Uccellini 1996, pp. 113-114) and as a balance point of a data set. Typically, both interpretations of the mean are ignored in elementary school and middle school curricula. They are replaced with a rote emphasis on calculation using the standard algorithm. When students are…
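
The balance-point interpretation is easy to make concrete: deviations from the mean always sum to zero, so the mean is the fulcrum at which the data balance.

```python
data = [2, 3, 7, 9, 14]
mean = sum(data) / len(data)            # 35 / 5 = 7.0
deviations = [x - mean for x in data]   # [-5.0, -4.0, 0.0, 2.0, 7.0]
# The negative deviations (-5 - 4) exactly balance the positive (2 + 7).
balances = abs(sum(deviations)) < 1e-12
```

Placing weights at 2, 3, 7, 9, and 14 on a ruler, the ruler balances at 7; no other pivot makes the signed "torques" cancel.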

  12. An efficient, scalable, and adaptable framework for solving generic systems of level-set PDEs

    Directory of Open Access Journals (Sweden)

    Kishore R. Mosaliganti

    2013-12-01

    Full Text Available In the last decade, level-set methods have been actively developed for applications in image registration, segmentation, tracking, and reconstruction. However, the development of a wide variety of level-set PDEs and their numerical discretization schemes, coupled with hybrid combinations of PDE terms, stopping criteria, and reinitialization strategies, has created a software logistics problem. In the absence of an integrative design, current toolkits support only specific types of level-set implementations which restrict future algorithm development since extensions require significant code duplication and effort. In the new NIH/NLM Insight Toolkit (ITK v4) architecture, we implemented a level-set software design that is flexible to different numerical (continuous, discrete, and sparse) and grid representations (point, mesh, and image-based). Given that a generic PDE is a summation of different terms, we used a set of linked containers to which level-set terms can be added or deleted at any point in the evolution process. This container-based approach allows the user to explore and customize terms in the level-set equation at compile-time in a flexible manner. The framework is optimized so that repeated computations of common intensity functions (e.g. gradients and Hessians) across multiple terms are eliminated. The framework further enables the evolution of multiple level-sets for multi-object segmentation and processing of large datasets. For doing so, we restrict level-set domains to subsets of the image domain and use multithreading strategies to process groups of subdomains or level-set functions. Users can also select from a variety of reinitialization policies and stopping criteria. Finally, we developed a visualization framework that shows the evolution of a level-set in real-time to help guide algorithm development and parameter optimization. We demonstrate the power of our new framework using confocal microscopy images of cells in a
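
The container-based design can be sketched abstractly: the PDE right-hand side is a sum over a container of pluggable term objects, each reading shared precomputed quantities so common work is done once. The class and key names below are purely illustrative, not the actual ITK v4 API:

```python
import numpy as np

class LevelSetEvolver:
    def __init__(self):
        self.terms = []                  # container of PDE terms

    def add_term(self, term):
        self.terms.append(term)          # terms can be added or removed

    def rhs(self, phi, shared):
        # the generic PDE is the summation of its terms; each term sees
        # shared precomputed quantities (e.g. gradients) computed once
        return sum(term(phi, shared) for term in self.terms)

evolver = LevelSetEvolver()
evolver.add_term(lambda phi, s: -1.0 * s["grad"])   # propagation term
evolver.add_term(lambda phi, s: 0.1 * s["curv"])    # curvature term

phi = np.linspace(-1.0, 1.0, 11)
shared = {"grad": np.ones_like(phi), "curv": np.zeros_like(phi)}
dphi_dt = evolver.rhs(phi, shared)
```

Swapping stopping criteria or reinitialization policies then amounts to exchanging objects rather than rewriting the solver, which is the logistics problem the abstract addresses.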

  13. A parametric level-set approach for topology optimization of flow domains

    DEFF Research Database (Denmark)

    Pingen, Georg; Waidmann, Matthias; Evgrafov, Anton

    2010-01-01

    of the design variables in the traditional approaches is seen as a possible cause for the slow convergence. Non-smooth material distributions are suspected to trigger premature onset of instationary flows which cannot be treated by steady-state flow models. In the present work, we study whether the convergence...... and the versatility of topology optimization methods for fluidic systems can be improved by employing a parametric level-set description. In general, level-set methods allow controlling the smoothness of boundaries, yield a non-local influence of design variables, and decouple the material description from the flow...... field discretization. The parametric level-set method used in this study utilizes a material distribution approach to represent flow boundaries, resulting in a non-trivial mapping between design variables and local material properties. Using a hydrodynamic lattice Boltzmann method, we study...

  14. Multi person detection and tracking based on hierarchical level-set method

    Science.gov (United States)

    Khraief, Chadia; Benzarti, Faouzi; Amiri, Hamid

    2018-04-01

    In this paper, we propose an efficient unsupervised method for multi-person tracking based on a hierarchical level-set approach. The proposed method uses both edge and region information in order to effectively detect objects. The persons are tracked on each frame of the sequence by minimizing an energy functional that combines color, texture and shape information. These features are enrolled in a covariance matrix as the region descriptor. The present method is fully automated, without the need to manually specify the initial contour of the level set. It is based on combined person detection and background subtraction methods. The edge-based information is employed to maintain a stable evolution, guide the segmentation towards apparent boundaries and inhibit region fusion. The computational cost of the level set is reduced by using a narrow band technique. Extensive experiments are performed on challenging video sequences and show the effectiveness of the proposed method.

  15. Level set methods for detonation shock dynamics using high-order finite elements

    Energy Technology Data Exchange (ETDEWEB)

    Dobrev, V. A. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Grogan, F. C. [Univ. of California, San Diego, CA (United States); Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Kolev, T. V. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Rieben, R [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Tomov, V. Z. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2017-05-26

    Level set methods are a popular approach to modeling evolving interfaces. We present a level set advection solver in two and three dimensions using the discontinuous Galerkin method with high-order finite elements. During evolution, the level set function is reinitialized to a signed distance function to maintain accuracy. Our approach leads to stable front propagation and convergence on high-order, curved, unstructured meshes. The ability of the solver to implicitly track moving fronts lends itself to a number of applications; in particular, we highlight applications to high-explosive (HE) burn and detonation shock dynamics (DSD). We provide results for two- and three-dimensional benchmark problems as well as applications to DSD.
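
Reinitialization to a signed distance function, which the abstract performs to maintain accuracy, is often written as the pseudo-time relaxation phi_t = sign(phi0) * (1 - |grad(phi)|). A plain first-order finite-difference sketch with Godunov upwinding (illustrative, not the paper's discontinuous Galerkin scheme):

```python
import numpy as np

def reinitialize(phi0, dx, iters=400):
    phi = phi0.copy()
    s = np.sign(phi0)                           # zeros stay pinned
    dtau = 0.5 * dx
    for _ in range(iters):
        dm = (phi - np.roll(phi, 1)) / dx       # backward difference
        dp = (np.roll(phi, -1) - phi) / dx      # forward difference
        gp = np.sqrt(np.maximum(np.maximum(dm, 0)**2, np.minimum(dp, 0)**2))
        gm = np.sqrt(np.maximum(np.minimum(dm, 0)**2, np.maximum(dp, 0)**2))
        grad = np.where(s > 0, gp, gm)          # Godunov upwinding
        phi = phi - dtau * s * (grad - 1.0)
    return phi

N = 200
x = np.linspace(0.0, 1.0, N, endpoint=False)
target = 0.25 - np.abs(x - 0.5)        # signed distance, zeros at 0.25, 0.75
phi = reinitialize(3.0 * target, 1.0 / N)   # same zeros, wrong slope
band = np.abs(x - 0.5) < 0.2
band_err = np.max(np.abs(phi[band] - target[band]))
```

The zero level set is unchanged while |grad(phi)| relaxes to 1 outward from the interface, restoring the distance property the solver relies on.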

  16. An investigation of children's levels of inquiry in an informal science setting

    Science.gov (United States)

    Clark-Thomas, Beth Anne

    Elementary school students' understanding of both science content and processes are enhanced by the higher level thinking associated with inquiry-based science investigations. Informal science setting personnel, elementary school teachers, and curriculum specialists charged with designing inquiry-based investigations would be well served by an understanding of the varying influence of certain present factors upon the students' willingness and ability to delve into such higher level inquiries. This study examined young children's use of inquiry-based materials and factors which may influence the level of inquiry they engaged in during informal science activities. An informal science setting was selected as the context for the examination of student inquiry behaviors because of the rich inquiry-based environment present at the site and the benefits previously noted in the research regarding the impact of informal science settings upon the construction of knowledge in science. The study revealed several patterns of behavior among children when they are engaged in inquiry-based activities at informal science exhibits. These repeated behaviors varied in the children's apparent purposeful use of the materials at the exhibits. These levels of inquiry behavior were taxonomically defined as high/medium/low within this study utilizing a researcher-developed tool. Furthermore, in this study adult interventions, questions, or prompting were found to impact the level of inquiry engaged in by the children. This study revealed that higher levels of inquiry were preceded by task directed and physical feature prompts. Moreover, the levels of inquiry behaviors were halted, even lowered, when preceded by a prompt that focused on a science content or concept question. Results of this study have implications for the enhancement of inquiry-based science activities in elementary schools as well as in informal science settings. These findings have significance for all science educators.

  17. Variational Level Set Method for Two-Stage Image Segmentation Based on Morphological Gradients

    Directory of Open Access Journals (Sweden)

    Zemin Ren

    2014-01-01

    Full Text Available We use a variational level set method and transition region extraction techniques to achieve the image segmentation task. The proposed scheme proceeds in two steps. We first develop a novel algorithm to extract the transition region based on the morphological gradient. After this, we integrate the transition region into a variational level set framework and develop a novel geometric active contour model, which includes an external energy based on the transition region and a fractional order edge indicator function. The external energy is used to drive the zero level set toward the desired image features, such as object boundaries. Due to this external energy, the proposed model allows for more flexible initialization. The fractional order edge indicator function is incorporated into the length regularization term to diminish the influence of noise. Moreover, an internal energy is added into the proposed model to penalize the deviation of the level set function from a signed distance function. The resulting evolution of the level set function is the gradient flow that minimizes the overall energy functional. The proposed model has been applied to both synthetic and real images with promising results.
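
The morphological gradient driving the first stage is the difference between a grey-level dilation and erosion; it is large exactly in the transition regions between objects and background. A small numpy-only sketch with a 3x3 structuring element (illustrative, not the paper's algorithm):

```python
import numpy as np

# Morphological gradient: pointwise max minus pointwise min over a 3x3
# neighborhood (edge-padded), i.e. dilation minus erosion.
def morpho_gradient(img):
    p = np.pad(img, 1, mode='edge')
    h, w = img.shape
    windows = np.stack([p[i:i + h, j:j + w]
                        for i in range(3) for j in range(3)])
    return windows.max(axis=0) - windows.min(axis=0)

img = np.zeros((7, 7))
img[2:5, 2:5] = 1.0                  # bright square on a dark background
g = morpho_gradient(img)
```

Flat regions (inside the square and in the far background) give zero gradient, while every boundary pixel lights up, which is the transition-region cue the scheme feeds into the level set stage.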

  18. Aerostructural Level Set Topology Optimization for a Common Research Model Wing

    Science.gov (United States)

    Dunning, Peter D.; Stanford, Bret K.; Kim, H. Alicia

    2014-01-01

    The purpose of this work is to use level set topology optimization to improve the design of a representative wing box structure for the NASA common research model. The objective is to minimize the total compliance of the structure under aerodynamic and body force loading, where the aerodynamic loading is coupled to the structural deformation. A taxi bump case was also considered, where only body force loads were applied. The trim condition that aerodynamic lift must balance the total weight of the aircraft is enforced by allowing the root angle of attack to change. The level set optimization method is implemented on an unstructured three-dimensional grid, so that the method can optimize a wing box with arbitrary geometry. Fast matching and upwind schemes are developed for an unstructured grid, which make the level set method robust and efficient. The adjoint method is used to obtain the coupled shape sensitivities required to perform aerostructural optimization of the wing box structure.

  19. Surface-to-surface registration using level sets

    DEFF Research Database (Denmark)

    Hansen, Mads Fogtmann; Erbou, Søren G.; Vester-Christensen, Martin

    2007-01-01

    This paper presents a general approach for surface-to-surface registration (S2SR) with the Euclidean metric using signed distance maps. In addition, the method is symmetric such that the registration of a shape A to a shape B is identical to the registration of the shape B to the shape A. The S2SR...... problem can be approximated by the image registration (IR) problem of the signed distance maps (SDMs) of the surfaces confined to some narrow band. By shrinking the narrow bands around the zero level sets the solution to the IR problem converges towards the S2SR problem. It is our hypothesis...... that this approach is more robust and less prone to fall into local minima than ordinary surface-to-surface registration. The IR problem is solved using the inverse compositional algorithm. In this paper, a set of 40 pelvic bones of Duroc pigs are registered to each other w.r.t. the Euclidean transformation...
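The signed-distance-map construction that underlies this S2SR approach can be sketched with a standard Euclidean distance transform. This is an illustration under assumed values: the shape, grid, and narrow-band width are invented, not the pig-pelvis data of the paper.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def signed_distance(mask):
    """Signed distance map: negative inside the shape, positive outside."""
    inside = distance_transform_edt(mask)    # distance to nearest background pixel
    outside = distance_transform_edt(~mask)  # distance to nearest shape pixel
    return outside - inside

# A binary square standing in for "shape A" on a 64x64 grid.
mask = np.zeros((64, 64), dtype=bool)
mask[20:44, 20:44] = True
sdm = signed_distance(mask)

# Confine the registration to a narrow band around the zero level set,
# as the paper does before solving the image-registration problem.
band = np.abs(sdm) < 3.0
```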

  20. Glycated haemoglobin (HbA1c) and fasting plasma glucose relationships in sea-level and high-altitude settings.

    Science.gov (United States)

    Bazo-Alvarez, J C; Quispe, R; Pillay, T D; Bernabé-Ortiz, A; Smeeth, L; Checkley, W; Gilman, R H; Málaga, G; Miranda, J J

    2017-06-01

    Higher haemoglobin levels and differences in glucose metabolism have been reported among high-altitude residents, which may influence the diagnostic performance of HbA1c. This study explores the relationship between HbA1c and fasting plasma glucose (FPG) in populations living at sea level and at an altitude of > 3000 m. Data from 3613 Peruvian adults without a known diagnosis of diabetes from sea-level and high-altitude settings were evaluated. Linear, quadratic and cubic regression models were performed adjusting for potential confounders. Receiver operating characteristic (ROC) curves were constructed and concordance between HbA1c and FPG was assessed using a Kappa index. At sea level and high altitude, means were 13.5 and 16.7 g/dl (P > 0.05) for haemoglobin level; 41 and 40 mmol/mol (5.9% and 5.8%; P < 0.01) for HbA1c; and 5.8 and 5.1 mmol/l (105 and 91.3 mg/dl; P < 0.001) for FPG, respectively. The adjusted relationship between HbA1c and FPG was quadratic at sea level and linear at high altitude. Adjusted models showed that, to predict an HbA1c value of 48 mmol/mol (6.5%), the corresponding mean FPG values at sea level and high altitude were 6.6 and 14.8 mmol/l (120 and 266 mg/dl), respectively. An HbA1c cut-off of 48 mmol/mol (6.5%) had a sensitivity for high FPG of 87.3% (95% confidence interval (95% CI) 76.5 to 94.4) at sea level and 40.9% (95% CI 20.7 to 63.6) at high altitude. The relationship between HbA1c and FPG is less clear at high altitude than at sea level. Caution is warranted when using HbA1c to diagnose diabetes mellitus in this setting. © 2017 The Authors. Diabetic Medicine published by John Wiley & Sons Ltd on behalf of Diabetes UK.
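The quadratic-versus-linear modelling described above can be illustrated with an ordinary polynomial least-squares fit. The data and coefficients below are entirely synthetic, not the Peruvian cohort estimates:

```python
import numpy as np

# Synthetic illustration: an invented quadratic HbA1c-FPG trend plus noise.
rng = np.random.default_rng(0)
fpg = rng.uniform(4.0, 10.0, 500)             # FPG in mmol/l
hba1c = 2.0 + 0.9 * fpg - 0.03 * fpg**2       # invented "sea-level" trend
hba1c += rng.normal(0.0, 0.05, fpg.size)      # measurement noise

# Quadratic fit; np.polyfit returns coefficients highest degree first.
coef = np.polyfit(fpg, hba1c, deg=2)
```

A study like this one would fit such models separately per setting (with confounder adjustment) and compare the curvature terms; the sketch only shows the mechanics of recovering a quadratic relationship.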

  1. A Memory and Computation Efficient Sparse Level-Set Method

    NARCIS (Netherlands)

    Laan, Wladimir J. van der; Jalba, Andrei C.; Roerdink, Jos B.T.M.

    Since its introduction, the level set method has become the favorite technique for capturing and tracking moving interfaces, and has found applications in a wide variety of scientific fields. In this paper we present efficient data structures and algorithms for tracking dynamic interfaces through the

  2. Individual and setting level predictors of the implementation of a skin cancer prevention program: a multilevel analysis

    Directory of Open Access Journals (Sweden)

    Brownson Ross C

    2010-05-01

    Full Text Available Abstract Background To achieve widespread cancer control, a better understanding is needed of the factors that contribute to successful implementation of effective skin cancer prevention interventions. This study assessed the relative contributions of individual- and setting-level characteristics to implementation of a widely disseminated skin cancer prevention program. Methods A multilevel analysis was conducted using data from the Pool Cool Diffusion Trial from 2004 and replicated with data from 2005. Implementation of Pool Cool by lifeguards was measured using a composite score (implementation variable, range 0 to 10) that assessed whether the lifeguard performed different components of the intervention. Predictors included lifeguard background characteristics, lifeguard sun protection-related attitudes and behaviors, pool characteristics, and enhanced (i.e., more technical assistance, tailored materials, and incentives are provided) versus basic treatment group. Results The mean value of the implementation variable was 4 in both years (2004 and 2005; SD = 2 in 2004 and SD = 3 in 2005), indicating moderate implementation for most lifeguards. Several individual-level (lifeguard) characteristics and setting-level (pool characteristics and treatment group) factors were found to be significantly associated with implementation of Pool Cool by lifeguards. All three lifeguard-level domains (lifeguard background characteristics, lifeguard sun protection-related attitudes and behaviors) and six pool-level predictors (number of weekly pool visitors, intervention intensity, geographic latitude, pool location, sun safety and/or skin cancer prevention programs, and sun safety programs and policies) were included in the final model. The most important predictors of implementation were the number of weekly pool visitors (inverse association) and enhanced treatment group (positive association). That is, pools with fewer weekly visitors and pools in the enhanced

  3. Precise mean sea level measurements using the Global Positioning System

    Science.gov (United States)

    Kelecy, Thomas M.; Born, George H.; Parke, Michael E.; Rocken, Christian

    1994-01-01

    This paper describes the results of a sea level measurement test conducted off La Jolla, California, in November of 1991. The purpose of this test was to determine accurate sea level measurements using a Global Positioning System (GPS) equipped buoy. These measurements were intended to be used as the sea level component for calibration of the ERS 1 satellite altimeter. Measurements were collected on November 25 and 28 when the ERS 1 satellite overflew the calibration area. Two different types of buoys were used: a waverider design on November 25 and a spar design on November 28. This provided the opportunity to examine how dynamic effects of the measurement platform might affect the sea level accuracy. The two buoys were deployed at locations approximately 1.2 km apart and about 15 km west of a reference GPS receiver located on the rooftop of the Institute of Geophysics and Planetary Physics at the Scripps Institution of Oceanography. GPS solutions were computed for 45 minutes on each day and used to produce two sea level time series. An estimate of the mean sea level at both locations was computed by subtracting tide gage data collected at the Scripps Pier from the GPS-determined sea level measurements and then filtering out the high-frequency components due to waves and buoy dynamics. In both cases the GPS estimate differed from Rapp's mean altimetric surface by 0.06 m. Thus, the gradient in the GPS measurements matched the gradient in Rapp's surface. These results suggest that accurate sea level can be determined using GPS on widely differing platforms as long as care is taken to determine the height of the GPS antenna phase center above water level. Application areas include measurement of absolute sea level, of temporal variations in sea level, and of sea level gradients (dominantly the geoid). Specific applications would include ocean altimeter calibration, monitoring of sea level in remote regions, and regional experiments requiring spatial and
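The processing chain described above (subtract the tide gauge signal, then filter out the wave-band high frequencies) can be sketched with a simple moving-average low-pass filter. All numbers below are invented synthetic values, not the La Jolla measurements:

```python
import numpy as np

# Synthetic 1 Hz buoy heights over a 45-minute window (illustrative only).
t = np.arange(0.0, 2700.0)                  # seconds
msl = 1.25                                  # "true" mean sea level (m)
waves = 0.5 * np.sin(2 * np.pi * t / 8.0)   # 8-second swell
noise = np.random.default_rng(1).normal(0.0, 0.05, t.size)
heights = msl + waves + noise

# A 120 s moving average removes the wave band, leaving the mean level.
win = np.ones(120) / 120.0
smooth = np.convolve(heights, win, mode="valid")
estimate = smooth.mean()
```

In the actual experiment the filtered quantity was the GPS-minus-tide-gauge difference series, but the filtering idea is the same.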

  4. Level sets and extrema of random processes and fields

    CERN Document Server

    Azais, Jean-Marc

    2009-01-01

    A timely and comprehensive treatment of random field theory with applications across diverse areas of study Level Sets and Extrema of Random Processes and Fields discusses how to understand the properties of the level sets of paths as well as how to compute the probability distribution of its extremal values, which are two general classes of problems that arise in the study of random processes and fields and in related applications. This book provides a unified and accessible approach to these two topics and their relationship to classical theory and Gaussian processes and fields, and the most modern research findings are also discussed. The authors begin with an introduction to the basic concepts of stochastic processes, including a modern review of Gaussian fields and their classical inequalities. Subsequent chapters are devoted to Rice formulas, regularity properties, and recent results on the tails of the distribution of the maximum. Finally, applications of random fields to various areas of mathematics a...

  5. Skull defect reconstruction based on a new hybrid level set.

    Science.gov (United States)

    Zhang, Ziqun; Zhang, Ran; Song, Zhijian

    2014-01-01

    Skull defect reconstruction is an important aspect of surgical repair. Historically, a skull defect prosthesis was created by the mirroring technique, surface fitting, or formed templates. These methods are not based on the anatomy of the individual patient's skull, and therefore, the prosthesis cannot precisely correct the defect. This study presented a new hybrid level set model, taking into account both the global optimization region information and the local accuracy edge information, while avoiding re-initialization during the evolution of the level set function. Based on the new method, a skull defect was reconstructed, and the skull prosthesis was produced by rapid prototyping technology. This resulted in a skull defect prosthesis that well matched the skull defect with excellent individual adaptation.

  6. 76 FR 9004 - Public Comment on Setting Achievement Levels in Writing

    Science.gov (United States)

    2011-02-16

    ... DEPARTMENT OF EDUCATION Public Comment on Setting Achievement Levels in Writing AGENCY: U.S... Achievement Levels in Writing. SUMMARY: The National Assessment Governing Board (Governing Board) is... for NAEP in writing. This notice provides opportunity for public comment and submitting...

  7. Appropriate criteria set for personnel promotion across organizational levels using analytic hierarchy process (AHP)

    Directory of Open Access Journals (Sweden)

    Charles Noven Castillo

    2017-01-01

    Full Text Available Currently, there is no well-established, specific set of criteria for personnel promotion to each level of an organization. This study develops a personnel promotion strategy by identifying specific sets of criteria for each level of the organization. The complexity of identifying the criteria set, along with the subjectivity of these criteria, requires the use of a multi-criteria decision-making approach, particularly the analytic hierarchy process (AHP). Results show different sets of criteria for each management level, which are consistent with several frameworks in the literature. These criteria sets would help avoid mismatches between employees' skills and competencies and their jobs, and at the same time eliminate issues in personnel promotion such as favouritism, the glass ceiling, and preferences based on gender and physical attractiveness. This work also shows that personality and traits, job satisfaction, and experience and skills are more critical than social capital across different organizational levels. The contribution of this work lies in identifying relevant criteria for developing a personnel promotion strategy across organizational levels.
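The AHP machinery the study relies on can be sketched as a principal-eigenvector computation on a pairwise comparison matrix. The matrix entries below are hypothetical Saaty-scale judgements for three promotion criteria, not the study's data:

```python
import numpy as np

# Hypothetical pairwise comparisons of three promotion criteria.
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 3.0],
    [1/5, 1/3, 1.0],
])

# AHP priority weights: the normalized principal right eigenvector.
vals, vecs = np.linalg.eig(A)
k = np.argmax(np.real(vals))
w = np.real(vecs[:, k])
w = w / w.sum()

# Consistency index (lambda_max - n) / (n - 1); small values indicate
# the judgements are close to perfectly consistent.
ci = (np.real(vals[k]) - 3.0) / 2.0
```

In AHP practice one would also divide `ci` by the random index for n = 3 and require the ratio to stay below 0.1 before accepting the weights.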

  8. Demons versus level-set motion registration for coronary 18F-sodium fluoride PET

    Science.gov (United States)

    Rubeaux, Mathieu; Joshi, Nikhil; Dweck, Marc R.; Fletcher, Alison; Motwani, Manish; Thomson, Louise E.; Germano, Guido; Dey, Damini; Berman, Daniel S.; Newby, David E.; Slomka, Piotr J.

    2016-03-01

    Ruptured coronary atherosclerotic plaques commonly cause acute myocardial infarction. It has been recently shown that active microcalcification in the coronary arteries, one of the features that characterizes vulnerable plaques at risk of rupture, can be imaged using cardiac gated 18F-sodium fluoride (18F-NaF) PET. We have shown in previous work that a motion correction technique applied to cardiac-gated 18F-NaF PET images can enhance image quality and improve uptake estimates. In this study, we further investigated the applicability of different algorithms for registration of the coronary artery PET images. In particular, we aimed to compare demons vs. level-set nonlinear registration techniques applied for the correction of cardiac motion in coronary 18F-NaF PET. To this end, fifteen patients underwent 18F-NaF PET and prospective coronary CT angiography (CCTA). PET data were reconstructed in 10 ECG gated bins; subsequently these gated bins were registered using demons and level-set methods guided by the extracted coronary arteries from CCTA, to eliminate the effect of cardiac motion on PET images. Noise levels, target-to-background ratios (TBR) and global motion were compared to assess image quality. Compared to the reference standard of using only diastolic PET image (25% of the counts from PET acquisition), cardiac motion registration using either level-set or demons techniques almost halved image noise due to the use of counts from the full PET acquisition and increased TBR difference between 18F-NaF positive and negative lesions. The demons method produces smoother deformation fields, exhibiting no singularities (which reflects how physically plausible the registration deformation is), as compared to the level-set method, which presents between 4 and 8% of singularities, depending on the coronary artery considered. In conclusion, the demons method produces smoother motion fields as compared to the level-set method, with a motion that is physiologically

  9. Introduction to the level-set full field modeling of laths spheroidization phenomenon in α/β titanium alloys

    Directory of Open Access Journals (Sweden)

    Polychronopoulou D.

    2016-01-01

    Full Text Available Fragmentation of α lamellae and subsequent spheroidization of α laths in α/β titanium alloys, occurring during and after deformation, are well known phenomena. We will illustrate the development of a new finite element methodology to model them. This new methodology is based on a level set framework to model the deformation and the ad hoc simultaneous and/or subsequent interface kinetics. We focus, for now, on the modeling of surface diffusion at the α/β phase interfaces and motion by mean curvature at the α/α grain interfaces.

  10. Evaluating model simulations of 20th century sea-level rise. Part 1: global mean sea-level change

    NARCIS (Netherlands)

    Slangen, A.B.A.; Meyssignac, B.; Agosta, C.; Champollion, N.; Church, J.A.; Fettweis, X.; Ligtenberg, S.R.M.; Marzeion, B.; Melet, A.; Palmer, M.D.; Richter, K.; Roberts, C.D.; Spada, G.

    2017-01-01

    Sea level change is one of the major consequences of climate change and is projected to affect coastal communities around the world. Here, global mean sea level (GMSL) change estimated by 12 climate models from phase 5 of the World Climate Research Programme’s Climate Model Intercomparison Project

  11. Stabilized Conservative Level Set Method with Adaptive Wavelet-based Mesh Refinement

    Science.gov (United States)

    Shervani-Tabar, Navid; Vasilyev, Oleg V.

    2016-11-01

    This paper addresses one of the main challenges of the conservative level set method, namely the ill-conditioned behavior of the normal vector away from the interface. An alternative formulation for reconstruction of the interface is proposed. Unlike the commonly used methods which rely on the unit normal vector, Stabilized Conservative Level Set (SCLS) uses a modified renormalization vector with diminishing magnitude away from the interface. With the new formulation, in the vicinity of the interface the reinitialization procedure utilizes compressive flux and diffusive terms only in the normal direction to the interface, thus preserving the conservative level set properties, while away from the interface the directional diffusion mechanism automatically switches to homogeneous diffusion. The proposed formulation is robust and general. It is especially well suited for use with adaptive mesh refinement (AMR) approaches due to the need for finer resolution in the vicinity of the interface than in the rest of the domain. All of the results were obtained using the Adaptive Wavelet Collocation Method, a general AMR-type method, which utilizes wavelet decomposition to adapt to steep gradients in the solution while retaining a predetermined order of accuracy.

  12. Some numerical studies of interface advection properties of level set ...

    Indian Academy of Sciences (India)

    explicit computational elements moving through an Eulerian grid. ... location. The interface is implicitly defined (captured) as the location of the discontinuity in the ... This level set function is advected with the background flow field and thus ...

  13. A Cartesian Adaptive Level Set Method for Two-Phase Flows

    Science.gov (United States)

    Ham, F.; Young, Y.-N.

    2003-01-01

    In the present contribution we develop a level set method based on local anisotropic Cartesian adaptation as described in Ham et al. (2002). Such an approach should allow for the smallest possible Cartesian grid capable of resolving a given flow. The remainder of the paper is organized as follows. In section 2 the level set formulation for free surface calculations is presented and its strengths and weaknesses relative to the other free surface methods reviewed. In section 3 the collocated numerical method is described. In section 4 the method is validated by solving the 2D and 3D drop oscillation problem. In section 5 we present some results from more complex cases including the 3D drop breakup in an impulsively accelerated free stream, and the 3D immiscible Rayleigh-Taylor instability. Conclusions are given in section 6.

  14. Setting Healthcare Priorities at the Macro and Meso Levels: A Framework for Evaluation.

    Science.gov (United States)

    Barasa, Edwine W; Molyneux, Sassy; English, Mike; Cleary, Susan

    2015-09-16

    Priority setting in healthcare is a key determinant of health system performance. However, there is no widely accepted priority setting evaluation framework. We reviewed literature with the aim of developing and proposing a framework for the evaluation of macro and meso level healthcare priority setting practices. We systematically searched Econlit, PubMed, CINAHL, and EBSCOhost databases and supplemented this with searches in Google Scholar, relevant websites and reference lists of relevant papers. A total of 31 papers on evaluation of priority setting were identified. These were supplemented by broader theoretical literature related to evaluation of priority setting. A conceptual review of selected papers was undertaken. Based on a synthesis of the selected literature, we propose an evaluative framework that requires that priority setting practices at the macro and meso levels of the health system meet the following conditions: (1) Priority setting decisions should incorporate both efficiency and equity considerations as well as the following outcomes; (a) Stakeholder satisfaction, (b) Stakeholder understanding, (c) Shifted priorities (reallocation of resources), and (d) Implementation of decisions. (2) Priority setting processes should also meet the procedural conditions of (a) Stakeholder engagement, (b) Stakeholder empowerment, (c) Transparency, (d) Use of evidence, (e) Revisions, (f) Enforcement, and (g) Being grounded on community values. Available frameworks for the evaluation of priority setting are mostly grounded on procedural requirements, while few have included outcome requirements. There is, however, increasing recognition of the need to incorporate both consequential and procedural considerations in priority setting practices. In this review, we adapt an integrative approach to develop and propose a framework for the evaluation of priority setting practices at the macro and meso levels that draws from these complementary schools of thought. 

  15. Setting Healthcare Priorities at the Macro and Meso Levels: A Framework for Evaluation

    Science.gov (United States)

    Barasa, Edwine W.; Molyneux, Sassy; English, Mike; Cleary, Susan

    2015-01-01

    Background: Priority setting in healthcare is a key determinant of health system performance. However, there is no widely accepted priority setting evaluation framework. We reviewed literature with the aim of developing and proposing a framework for the evaluation of macro and meso level healthcare priority setting practices. Methods: We systematically searched Econlit, PubMed, CINAHL, and EBSCOhost databases and supplemented this with searches in Google Scholar, relevant websites and reference lists of relevant papers. A total of 31 papers on evaluation of priority setting were identified. These were supplemented by broader theoretical literature related to evaluation of priority setting. A conceptual review of selected papers was undertaken. Results: Based on a synthesis of the selected literature, we propose an evaluative framework that requires that priority setting practices at the macro and meso levels of the health system meet the following conditions: (1) Priority setting decisions should incorporate both efficiency and equity considerations as well as the following outcomes; (a) Stakeholder satisfaction, (b) Stakeholder understanding, (c) Shifted priorities (reallocation of resources), and (d) Implementation of decisions. (2) Priority setting processes should also meet the procedural conditions of (a) Stakeholder engagement, (b) Stakeholder empowerment, (c) Transparency, (d) Use of evidence, (e) Revisions, (f) Enforcement, and (g) Being grounded on community values. Conclusion: Available frameworks for the evaluation of priority setting are mostly grounded on procedural requirements, while few have included outcome requirements. There is, however, increasing recognition of the need to incorporate both consequential and procedural considerations in priority setting practices. 
In this review, we adapt an integrative approach to develop and propose a framework for the evaluation of priority setting practices at the macro and meso levels that draws from these

  16. Setting Healthcare Priorities at the Macro and Meso Levels: A Framework for Evaluation

    Directory of Open Access Journals (Sweden)

    Edwine W. Barasa

    2015-11-01

    Full Text Available Background Priority setting in healthcare is a key determinant of health system performance. However, there is no widely accepted priority setting evaluation framework. We reviewed literature with the aim of developing and proposing a framework for the evaluation of macro and meso level healthcare priority setting practices. Methods We systematically searched Econlit, PubMed, CINAHL, and EBSCOhost databases and supplemented this with searches in Google Scholar, relevant websites and reference lists of relevant papers. A total of 31 papers on evaluation of priority setting were identified. These were supplemented by broader theoretical literature related to evaluation of priority setting. A conceptual review of selected papers was undertaken. Results Based on a synthesis of the selected literature, we propose an evaluative framework that requires that priority setting practices at the macro and meso levels of the health system meet the following conditions: (1) Priority setting decisions should incorporate both efficiency and equity considerations as well as the following outcomes; (a) Stakeholder satisfaction, (b) Stakeholder understanding, (c) Shifted priorities (reallocation of resources), and (d) Implementation of decisions. (2) Priority setting processes should also meet the procedural conditions of (a) Stakeholder engagement, (b) Stakeholder empowerment, (c) Transparency, (d) Use of evidence, (e) Revisions, (f) Enforcement, and (g) Being grounded on community values. Conclusion Available frameworks for the evaluation of priority setting are mostly grounded on procedural requirements, while few have included outcome requirements. There is, however, increasing recognition of the need to incorporate both consequential and procedural considerations in priority setting practices. In this review, we adapt an integrative approach to develop and propose a framework for the evaluation of priority setting practices at the macro and meso levels that draws from

  17. S-N secular ocean tide: explanation of observably coastal velocities of increase of a global mean sea level and mean sea levels in northern and southern hemispheres and prediction of erroneous altimetry velocities

    Science.gov (United States)

    Barkin, Yury

    2010-05-01

    The phenomenon of contrasting secular changes of sea level in the southern and northern hemispheres, predicted on the basis of a geodynamic model of forced relative oscillations and displacements of the Earth's shells, has received a theoretical explanation. In the northern hemisphere the mean sea level rises at about 2.45±0.32 mm/yr, and in the southern hemisphere at about 0.67±0.30 mm/yr. The theoretical velocity of the rise of the global mean sea level has been estimated at 1.61±0.36 mm/yr. 1 Introduction. The secular drift of the centre of mass of the Earth towards the North Pole, with a velocity of about 12-20 mm/yr, was predicted by the author in 1995 [1], [2], and has now been confirmed by space geodesy methods. For example, DORIS data for the period 1999-2008 yield an estimated polar drift velocity of 5.24±0.29 mm/yr [3]. This fundamental planetary phenomenon can be explained only by admitting that the centre of mass of the liquid core undergoes a similar northern drift relative to the centre of mass of the viscous-elastic and thermodynamically changeable mantle, with a velocity of about 2-3 cm/yr at present [4]. The polar drift of the Earth's core, with its huge excess mass, results in a slow increase of gravity in the northern hemisphere, with a mean velocity of about 1.4 µGal/yr, and a decrease at approximately the same mean velocity in the southern hemisphere [5]. This conclusion-prediction has already received a number of confirmations from precision gravimetric observations made around the world over the last decade [6]. Naturally, the drift of the core is accompanied by global changes (deformations) of all layers of the mantle and the core, and by inverse changes of their stress states: in one hemisphere the stress increases while in the opposite hemisphere, on the contrary, it decreases. It is also possible that a thermodynamical mechanism actively works, with inverse properties of melting and

  18. Forecasting the Global Mean Sea Level, a Continuous-Time State-Space Approach

    DEFF Research Database (Denmark)

    Boldrini, Lorenzo

    In this paper we propose a continuous-time, Gaussian, linear, state-space system to model the relation between global mean sea level (GMSL) and the global mean temperature (GMT), with the aim of making long-term projections for the GMSL. We provide a justification for the model specification based......) and the temperature reconstruction from Hansen et al. (2010). We compare the forecasting performance of the proposed specification to the procedures developed in Rahmstorf (2007b) and Vermeer and Rahmstorf (2009). Finally, we compute projections for the sea-level rise conditional on the 21st century SRES temperature...

  19. Evaluating healthcare priority setting at the meso level: A thematic review of empirical literature

    Science.gov (United States)

    Waithaka, Dennis; Tsofa, Benjamin; Barasa, Edwine

    2018-01-01

    Background: Decentralization of health systems has made sub-national/regional healthcare systems the backbone of healthcare delivery. These regions are tasked with the difficult responsibility of determining healthcare priorities and resource allocation amidst scarce resources. We aimed to review empirical literature that evaluated priority setting practice at the meso (sub-national) level of health systems. Methods: We systematically searched PubMed, ScienceDirect and Google Scholar databases and supplemented these with manual searches of the reference lists of selected papers. We only included empirical studies that described and evaluated, or that only evaluated, priority setting practice at the meso level. A total of 16 papers were identified, from both LMICs and HICs. We analyzed data from the selected papers by thematic review. Results: Few studies used systematic priority setting processes, and all but one were from HICs. Both formal and informal criteria are used in priority setting; however, informal criteria appear to be more pervasive in LMICs than in HICs. The priority setting process at the meso level is a top-down approach with minimal involvement of the community. Accountability for reasonableness was the most common evaluative framework, used in 12 of the 16 studies. Efficiency, reallocation of resources and options for service delivery redesign were the most common outcome measures used to evaluate priority setting. Limitations: Our study was limited by the fact that there are very few empirical studies that have evaluated priority setting at the meso level, and it is likely that we did not capture all the studies. Conclusions: Improving priority setting practices at the meso level is crucial to strengthening health systems. This can be achieved by incorporating and adapting systematic priority setting processes and frameworks to the context in which they are used, and by making considerations of both process

  20. Online monitoring of oil film using electrical capacitance tomography and level set method

    International Nuclear Information System (INIS)

    Xue, Q.; Ma, M.; Sun, B. Y.; Cui, Z. Q.; Wang, H. X.

    2015-01-01

    In the application of oil-air lubrication systems, electrical capacitance tomography (ECT) provides a promising way of monitoring the oil film in pipelines by reconstructing cross-sectional oil distributions in real time. In the case of a small-diameter pipe and a thin oil film, however, the thickness of the oil film is hard to observe visually, since the oil-air interface is not obvious in the reconstructed images. Moreover, artifacts in the reconstructions seriously degrade the effectiveness of image segmentation techniques such as the level set method, which is also too slow for online monitoring. To address these problems, a modified level set method is developed: a distance regularized level set evolution formulation is extended to image two-phase flow online using an ECT system; a narrowband image filter is defined to eliminate the influence of artifacts; and, exploiting the continuity of the oil distribution over time, the oil-air interface detected in one frame is used as the initial contour for the subsequent frame, so that the propagation from the initial contour to the boundary is greatly accelerated, making real-time tracking possible. To verify the feasibility of the proposed method, an oil-air lubrication facility with a 4 mm inner-diameter pipe is measured in normal operation using an 8-electrode ECT system. Both simulation and experiment results indicate that the modified level set method is capable of visualizing the oil-air interface accurately online

  1. A level-set method for two-phase flows with soluble surfactant

    Science.gov (United States)

    Xu, Jian-Jun; Shi, Weidong; Lai, Ming-Chih

    2018-01-01

    A level-set method is presented for solving two-phase flows with soluble surfactant. The Navier-Stokes equations are solved along with the bulk surfactant and the interfacial surfactant equations. In particular, the convection-diffusion equation for the bulk surfactant on the irregular moving domain is solved by using a level-set based diffusive-domain method. A conservation law for the total surfactant mass is derived, and a re-scaling procedure for the surfactant concentrations is proposed to compensate for the surfactant mass loss due to numerical diffusion. The whole numerical algorithm is easy to implement. Several numerical simulations in 2D and 3D show the effects of surfactant solubility on drop dynamics under shear flow.
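The mass-conserving re-scaling step can be sketched in a few lines: compute the discrete total surfactant mass (bulk plus interfacial) and uniformly scale the concentrations so the total matches the conserved target. The uniform scale factor and the numbers below are illustrative assumptions; the paper's procedure operates on its own level-set discretization.

```python
import numpy as np

def rescale_surfactant(c_bulk, c_surf, dV, dA, target_mass):
    # Total discrete mass: bulk concentration over cell volumes plus
    # interfacial concentration over interface-segment areas.
    mass = np.sum(c_bulk * dV) + np.sum(c_surf * dA)
    factor = target_mass / mass     # uniform correction factor
    return c_bulk * factor, c_surf * factor

# After one time step, some mass has leaked through numerical diffusion:
c_bulk = np.full(100, 0.98)         # bulk samples (was 1.0)
c_surf = np.full(40, 1.95)          # interfacial samples (was 2.0)
dV, dA = 0.01, 0.025                # cell volume, segment area (assumed)
target = 100 * 1.0 * dV + 40 * 2.0 * dA   # mass before the step
c_bulk2, c_surf2 = rescale_surfactant(c_bulk, c_surf, dV, dA, target)
```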

  2. Cooperative Fuzzy Games Approach to Setting Target Levels of ECs in Quality Function Deployment

    Directory of Open Access Journals (Sweden)

    Zhihui Yang

    2014-01-01

    Full Text Available Quality function deployment (QFD) can provide a means of translating customer requirements (CRs) into engineering characteristics (ECs) for each stage of product development and production. The main objective of QFD-based product planning is to determine the target levels of ECs for a new product or service. QFD is a breakthrough tool which can effectively reduce the gap between CRs and a new product/service. Even though there are conflicts among some ECs, the objective of developing a new product is to maximize overall customer satisfaction, so there may be room for cooperation among ECs. A cooperative game framework combined with fuzzy set theory is developed to determine the target levels of the ECs in QFD. The key to developing the model is the formulation of the bargaining function. In the proposed methodology, the membership functions of the ECs are viewed as the players in formulating the bargaining function. The solution of the proposed model is Pareto-optimal. An illustrative example demonstrates the application and performance of the proposed approach.

  3. Cooperative fuzzy games approach to setting target levels of ECs in quality function deployment.

    Science.gov (United States)

    Yang, Zhihui; Chen, Yizeng; Yin, Yunqiang

    2014-01-01

    Quality function deployment (QFD) can provide a means of translating customer requirements (CRs) into engineering characteristics (ECs) for each stage of product development and production. The main objective of QFD-based product planning is to determine the target levels of ECs for a new product or service. QFD is a breakthrough tool which can effectively reduce the gap between CRs and a new product/service. Even though there are conflicts among some ECs, the objective of developing a new product is to maximize overall customer satisfaction, so there may be room for cooperation among ECs. A cooperative game framework combined with fuzzy set theory is developed to determine the target levels of the ECs in QFD. The key to developing the model is the formulation of the bargaining function. In the proposed methodology, the membership functions of the ECs are viewed as the players in formulating the bargaining function. The solution of the proposed model is Pareto-optimal. An illustrative example demonstrates the application and performance of the proposed approach.
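A minimal sketch of the bargaining idea, under assumed triangular membership functions for two conflicting ECs that depend on one shared design variable: a Nash-style bargaining solution maximizes the product of the memberships, yielding a compromise target level. The membership shapes and the grid search are illustrative, not the paper's formulation.

```python
import numpy as np

def mu1(x):
    # Membership of EC1, happiest near x = 0.3 (hypothetical triangle).
    return np.clip(1 - np.abs(x - 0.3) / 0.3, 0, 1)

def mu2(x):
    # Membership of EC2, happiest near x = 0.7 (conflicts with EC1).
    return np.clip(1 - np.abs(x - 0.7) / 0.3, 0, 1)

# Nash-style bargaining: maximize the product of memberships on a grid.
xs = np.linspace(0, 1, 1001)
x_star = xs[np.argmax(mu1(xs) * mu2(xs))]   # compromise target level
```

Maximizing the product (rather than, say, the minimum) means neither EC's satisfaction can be raised without lowering the other's, matching the Pareto-optimality claimed for the model.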

  4. Level set methods for inverse scattering—some recent developments

    International Nuclear Information System (INIS)

    Dorn, Oliver; Lesselier, Dominique

    2009-01-01

    We give an update on recent techniques which use a level set representation of shapes for solving inverse scattering problems, completing in that matter the exposition made in (Dorn and Lesselier 2006 Inverse Problems 22 R67) and (Dorn and Lesselier 2007 Deformable Models (New York: Springer) pp 61–90), and bringing it closer to the current state of the art

  5. A level set method for cupping artifact correction in cone-beam CT

    International Nuclear Information System (INIS)

    Xie, Shipeng; Li, Haibo; Ge, Qi; Li, Chunming

    2015-01-01

    Purpose: To reduce cupping artifacts and improve the contrast-to-noise ratio in cone-beam computed tomography (CBCT). Methods: A level set method is proposed to reduce cupping artifacts in the reconstructed image of CBCT. The authors derive a local intensity clustering property of the CBCT image and define a local clustering criterion function of the image intensities in a neighborhood of each point. This criterion function defines an energy in terms of the level set functions, which represent a segmentation result and the cupping artifacts. The cupping artifacts are estimated as a result of minimizing this energy. Results: The cupping artifacts in CBCT are reduced by an average of 90%. The results indicate that the level set-based algorithm is practical and effective for reducing the cupping artifacts and preserving the quality of the reconstructed image. Conclusions: The proposed method focuses on the reconstructed image without requiring any additional physical equipment, is easily implemented, and provides cupping correction through a single-scan acquisition. The experimental results demonstrate that the proposed method successfully reduces the cupping artifacts

  6. Integrative study of the mean sea level and its components

    CERN Document Server

    Champollion, Nicolas; Paul, Frank; Benveniste, Jérôme

    2017-01-01

    This volume presents the most recent results of global mean sea level variations over the satellite altimetry era (starting in the early 1990s) and associated contributions, such as glaciers and ice sheets mass loss, ocean thermal expansion, and land water storage changes. Sea level is one of the best indicators of global climate changes as it integrates the response of several components of the climate system to external forcing factors (including anthropogenic forcing) and internal climate variability. Providing long, accurate records of the sea level at global and regional scales and of the various components causing sea level changes is of crucial importance to improve our understanding of climate processes at work and to validate the climate models used for future projections. The Climate Change Initiative project of the European Space Agency has provided a first attempt to produce consistent and continuous space-based records for several climate parameters observable from space, among them sea level. Th...

  7. Reservoir characterisation by a binary level set method and adaptive multiscale estimation

    Energy Technology Data Exchange (ETDEWEB)

    Nielsen, Lars Kristian

    2006-01-15

    The main focus of this work is on estimation of the absolute permeability as a solution of an inverse problem. We have considered both a single-phase and a two-phase flow model. Two novel approaches have been introduced and tested numerically for solving the inverse problems. The first approach is a multiscale zonation technique, which is treated in Paper A. The purpose of the work in this paper is to find a coarse scale solution based on production data from wells. In the suggested approach, the robustness of an already developed method, the adaptive multiscale estimation (AME), has been improved by utilising information from several candidate solutions generated by a stochastic optimizer. The new approach also suggests a way of combining a stochastic and a gradient search method, which in general is a problematic issue. The second approach is a piecewise constant level set approach and is applied in Papers B, C, D and E. Paper B considers the stationary single-phase problem, while Papers C, D and E use a two-phase flow model. In the two-phase flow problem we have utilised information from both production data in wells and spatially distributed data gathered from seismic surveys. Due to the higher content of information provided by the spatially distributed data, we search for solutions on a slightly finer scale than one typically does with only production data included. The applied level set method is suitable for reconstruction of fields with a solution of a supposedly known facies type; that is, the solution should be close to piecewise constant. This information is utilised through a strong restriction of the number of constant levels in the estimate. On the other hand, the flexibility in the geometries of the zones is much larger for this method than in a typical zonation approach, for example the multiscale approach applied in Paper A. In all these papers, the numerical studies are done on synthetic data sets. An advantage of synthetic data studies is that the true

  8. A Level Set Discontinuous Galerkin Method for Free Surface Flows

    DEFF Research Database (Denmark)

    Grooss, Jesper; Hesthaven, Jan

    2006-01-01

    We present a discontinuous Galerkin method on a fully unstructured grid for the modeling of unsteady incompressible fluid flows with free surfaces. The surface is modeled by embedding and represented by a level set. We discuss the discretization of the flow equations and the level set equation...

  9. Priority setting at the micro-, meso- and macro-levels in Canada, Norway and Uganda.

    Science.gov (United States)

    Kapiriri, Lydia; Norheim, Ole Frithjof; Martin, Douglas K

    2007-06-01

    The objectives of this study were (1) to describe the process of healthcare priority setting in Ontario-Canada, Norway and Uganda at three levels of decision-making; and (2) to evaluate the descriptions using the framework for fair priority setting, accountability for reasonableness, so as to identify lessons of good practice. We carried out case studies involving key informant interviews with 184 health practitioners and health planners from the macro, meso and micro levels in Ontario-Canada, Norway and Uganda (selected by virtue of their varying experiences in priority setting). Interviews were audio-recorded, transcribed and analyzed using a modified thematic approach. The descriptions were evaluated against the four conditions of "accountability for reasonableness": relevance, publicity, revisions and enforcement. Areas of adherence to these conditions were identified as lessons of good practice; areas of non-adherence were identified as opportunities for improvement. (i) At the macro-level, in all three countries, cabinet makes most of the macro-level resource allocation decisions, influenced by politics, public pressure and advocacy. Decisions within the ministries of health are based on objective formulae and evidence. International priorities influenced decisions in Uganda. Some priority-setting reasons are publicized through circulars, printed documents and the Internet in Canada and Norway. At the meso-level, hospital priority-setting decisions were made by hospital managers and were based on national priorities, guidelines and evidence. Hospital departments that handle emergencies, such as surgery, were prioritized. Some of the reasons are available on the hospital intranet or presented at meetings. Micro-level practitioners considered medical and social worth criteria. These reasons are not publicized. Many practitioners lacked knowledge of the macro- and meso-level priority-setting processes. (ii) Evaluation

  10. Mechanisms of long-term mean sea level variability in the North Sea

    Science.gov (United States)

    Dangendorf, Sönke; Calafat, Francisco; Øie Nilsen, Jan Even; Richter, Kristin; Jensen, Jürgen

    2015-04-01

    We examine mean sea level (MSL) variations in the North Sea on timescales ranging from months to decades under the consideration of different forcing factors since the late 19th century. We use multiple linear regression models, which are validated for the second half of the 20th century against the output of a state-of-the-art tide+surge model (HAMSOM), to determine the barotropic response of the ocean to fluctuations in atmospheric forcing. We demonstrate that local atmospheric forcing mainly triggers MSL variability on timescales up to a few years, with the inverted barometric effect dominating the variability along the UK and Norwegian coastlines and wind (piling up the water along the coast) controlling the MSL variability in the south from Belgium up to Denmark. However, in addition to the large inter-annual sea level variability there is also a considerable fraction of decadal scale variability. We show that on decadal timescales MSL variability in the North Sea mainly reflects steric changes, which are mostly remotely forced. A spatial correlation analysis of altimetry observations and baroclinic ocean model outputs suggests evidence for a coherent signal extending from the Norwegian shelf down to the Canary Islands. This supports the theory of longshore wind forcing along the eastern boundary of the North Atlantic causing coastally trapped waves to propagate along the continental slope. With a combination of oceanographic and meteorological measurements we demonstrate that ~80% of the decadal sea level variability in the North Sea can be explained as response of the ocean to longshore wind forcing, including boundary wave propagation in the Northeast Atlantic. These findings have important implications for (i) detecting significant accelerations in North Sea MSL, (ii) the conceptual set up of regional ocean models in terms of resolution and boundary conditions, and (iii) the development of adequate and realistic regional climate change projections.
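The attribution approach described here (multiple linear regression of MSL on atmospheric forcing, then reporting the variance explained) can be sketched on synthetic data. The predictors, coefficients and noise level below are invented for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 600                                  # monthly samples (hypothetical)
pressure = rng.normal(size=n)            # sea-level pressure anomaly
wind = rng.normal(size=n)                # longshore wind anomaly
# Synthetic MSL: inverted-barometer response plus wind forcing plus noise.
msl = -1.0 * pressure + 0.8 * wind + 0.2 * rng.normal(size=n)

X = np.column_stack([np.ones(n), pressure, wind])
coef, *_ = np.linalg.lstsq(X, msl, rcond=None)   # intercept, betas
explained = 1 - np.var(msl - X @ coef) / np.var(msl)  # fraction explained
```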

  11. Embedded Real-Time Architecture for Level-Set-Based Active Contours

    Directory of Open Access Journals (Sweden)

    Dejnožková Eva

    2005-01-01

    Full Text Available Methods described by partial differential equations have gained considerable interest because of undeniable advantages such as an easy mathematical description of the underlying physical phenomena, subpixel precision, isotropy, and direct extension to higher dimensions. Though their implementation within the level set framework offers other interesting advantages, their wide industrial deployment on embedded systems is slowed down by their considerable computational cost. This paper exploits the high parallelization potential of the operators from the level set framework and proposes a scalable, asynchronous, multiprocessor platform suitable for system-on-chip solutions. We concentrate on obtaining real-time execution capabilities. The performance is evaluated on a continuous watershed and an object-tracking application based on a simple gradient-based attraction force driving the active contour. The proposed architecture can be realized on commercially available FPGAs. It is built around general-purpose processor cores, and can run code developed with usual tools.

  12. Numerical Modelling of Three-Fluid Flow Using The Level-set Method

    Science.gov (United States)

    Li, Hongying; Lou, Jing; Shang, Zhi

    2014-11-01

    This work presents a numerical model for simulation of three-fluid flow involving two different moving interfaces. These interfaces are captured using the level-set method via two different level-set functions. A combined formulation with only one set of conservation equations for the whole physical domain, consisting of the three different immiscible fluids, is employed. Numerical solution is performed on a fixed mesh using the finite volume method. Surface tension effect is incorporated using the Continuum Surface Force model. Validation of the present model is made against available results for stratified flow and rising bubble in a container with a free surface. Applications of the present model are demonstrated by a variety of three-fluid flow systems including (1) three-fluid stratified flow, (2) two-fluid stratified flow carrying the third fluid in the form of drops and (3) simultaneous rising and settling of two drops in a stationary third fluid. The work is supported by a Thematic and Strategic Research from A*STAR, Singapore (Ref. #: 1021640075).

  13. CT liver volumetry using geodesic active contour segmentation with a level-set algorithm

    Science.gov (United States)

    Suzuki, Kenji; Epstein, Mark L.; Kohlbrenner, Ryan; Obajuluwa, Ademola; Xu, Jianwu; Hori, Masatoshi; Baron, Richard

    2010-03-01

    Automatic liver segmentation on CT images is challenging because the liver often abuts other organs of a similar density. Our purpose was to develop an accurate automated liver segmentation scheme for measuring liver volumes. We developed an automated volumetry scheme for the liver in CT based on a five-step schema. First, an anisotropic smoothing filter was applied to portal-venous phase CT images to remove noise while preserving the liver structure, followed by an edge enhancer to enhance the liver boundary. By using the boundary-enhanced image as a speed function, a fast-marching algorithm generated an initial surface that roughly estimated the liver shape. A geodesic-active-contour segmentation algorithm coupled with level-set contour evolution refined the initial surface so as to more precisely fit the liver boundary. The liver volume was calculated based on the refined liver surface. Hepatic CT scans of eighteen prospective liver donors were obtained under a liver transplant protocol with a multi-detector CT system. Automated liver volumes obtained were compared with those manually traced by a radiologist, used as the "gold standard." The mean liver volume obtained with our scheme was 1,520 cc, whereas the mean manual volume was 1,486 cc, with a mean absolute difference of 104 cc (7.0%). CT liver volumetrics based on an automated scheme agreed excellently with gold-standard manual volumetrics (intra-class correlation coefficient 0.95) with no statistically significant difference (p(F<=f)=0.32), and required substantially less completion time. Our automated scheme provides an efficient and accurate way of measuring liver volumes.
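The final volumetry step reduces to counting the voxels inside the refined surface and multiplying by the voxel volume. A minimal sketch, where the mask geometry and CT voxel spacing are assumed values:

```python
import numpy as np

def volume_from_mask(mask, spacing_mm):
    # Volume in cc of a binary mask, given voxel spacing in millimetres
    # along each axis (1 cc = 1000 mm^3).
    voxel_mm3 = float(np.prod(spacing_mm))
    return float(mask.sum()) * voxel_mm3 / 1000.0

# Hypothetical refined liver surface rasterised to a boolean mask:
mask = np.zeros((100, 100, 50), dtype=bool)
mask[20:80, 20:80, 10:40] = True            # 60 x 60 x 30 voxels
vol_cc = volume_from_mask(mask, (0.7, 0.7, 2.5))  # typical CT spacing
```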

  14. Mean-field level analysis of epidemics in directed networks

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Jiazeng [School of Mathematical Sciences, Peking University, Beijing 100871 (China); Liu, Zengrong [Mathematics Department, Shanghai University, Shanghai 200444 (China)], E-mail: wangjiazen@yahoo.com.cn, E-mail: zrongliu@online.sh.cn

    2009-09-04

    The susceptible-infected-removed spreading model in a directed graph is studied. The mean-field level rate equations are built with the degree-degree connectivity correlation element and the (in, out)-degree distribution. The outbreak threshold is obtained analytically; it is determined by the combination of the connectivity probability and the degree distribution. Furthermore, methods of calculating the degree-degree correlations in directed networks are presented. Numerical results for discrete epidemic processes in networks verify our analyses.

  15. Mean-field level analysis of epidemics in directed networks

    International Nuclear Information System (INIS)

    Wang, Jiazeng; Liu, Zengrong

    2009-01-01

    The susceptible-infected-removed spreading model in a directed graph is studied. The mean-field level rate equations are built with the degree-degree connectivity correlation element and the (in, out)-degree distribution. The outbreak threshold is obtained analytically; it is determined by the combination of the connectivity probability and the degree distribution. Furthermore, methods of calculating the degree-degree correlations in directed networks are presented. Numerical results for discrete epidemic processes in networks verify our analyses.
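The mean-field rate equations in this record can be sketched numerically: group nodes by in-degree class, couple the classes through a degree-degree connectivity matrix, and integrate the SIR equations with forward Euler. The uniform (uncorrelated) connectivity matrix and all parameters below are assumptions; the paper's contribution is precisely to build the measured correlations into that matrix.

```python
import numpy as np

beta, mu, dt, steps = 0.3, 0.1, 0.01, 4000
k = np.arange(1, 11, dtype=float)   # in-degree classes 1..10
P = np.full((10, 10), 0.1)          # P[a, b]: prob. an in-edge of class a
                                    # comes from class b (uncorrelated here)
s = np.full(10, 0.999)              # susceptible fraction per class
i = np.full(10, 0.001)              # small seed infection
r = np.zeros(10)

for _ in range(steps):              # forward-Euler on the rate equations
    theta = P @ i                   # infected prob. seen along in-edges
    new_inf = beta * k * s * theta
    s, i, r = s - dt * new_inf, i + dt * (new_inf - mu * i), r + dt * mu * i

final_size = float(r.mean())        # beta/mu here is well above threshold
```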

  16. A comparison and evaluation between ICESat/GLAS altimetry and mean sea level in Thailand

    Science.gov (United States)

    Naksen, Didsaphan; Yang, Dong Kai

    2015-10-01

    Surface elevation is one of the most important kinds of information for GIS. Surface elevation can usually be acquired from many sources, such as satellite imagery, aerial photographs, SAR data or LiDAR, using photogrammetry and remote sensing methods. However, the most trusted description of the actual surface elevation is leveling from terrestrial survey. Leveling gives the highest accuracy but, on the other hand, is a lengthy process that consumes a lot of budget and resources; LiDAR technology represents a new era in measuring surface elevation. ICESat/GLAS is a spaceborne LiDAR platform, a scientific satellite launched by NASA in 2003. The study area was located in the middle part of Thailand, between 12° and 14° North latitude and 98° and 100° East longitude. The main idea is to compare and evaluate elevations from ICESat/GLAS altimetry against the mean sea level of Thailand. Data were collected from various sources, including the ICESat/GLAS altimetry data product from NASA and mean sea level from the Royal Thai Survey Department (RTSD). The methodology is to transform ICESat GLA14 elevations from the TOPEX/Poseidon-Jason ellipsoid to the WGS84 ellipsoid. In addition, ICESat/GLAS elevations extracted from the centroid of each laser footprint and mean sea level were compared and evaluated against the 1st Layer National Vertical Reference Network. The results show that the difference in elevation between ICESat/GLAS and mean sea level generally ranges widely, from 0.8 to 25 meters, in the study area.

  17. Frequency-Locked Detector Threshold Setting Criteria Based on Mean-Time-To-Lose-Lock (MTLL) for GPS Receivers.

    Science.gov (United States)

    Jin, Tian; Yuan, Heliang; Zhao, Na; Qin, Honglei; Sun, Kewen; Ji, Yuanfa

    2017-12-04

    The frequency-locked detector (FLD) has been widely utilized in the tracking loops of Global Positioning System (GPS) receivers to indicate their locking status, yet the relation between the FLD output and lock status has seldom been discussed, and experience from traditional phase-locked loops (PLLs) does not carry over to frequency-locked loops (FLLs). In this paper, threshold setting criteria for the frequency-locked detector in a GPS receiver are proposed by analyzing the statistical characteristics of the FLD output. The approximate probability distribution of the frequency-locked detector is theoretically derived using a statistical approach, which reveals the relationship between the probabilities of the frequency-locked detector output and the carrier-to-noise ratio (C/N₀) of the received GPS signal. The relationship among mean-time-to-lose-lock (MTLL), detection threshold and lock probability related to C/N₀ can be further discovered by utilizing this probability. Therefore, a theoretical basis for threshold setting criteria in frequency-locked loops for GPS receivers is provided based on mean-time-to-lose-lock analysis.
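The MTLL logic can be sketched under a simple assumed model: if the locked-state FLD output is Gaussian and successive detector tests are independent, the number of tests until a false 'unlock' decision is geometric, so MTLL is the test interval divided by the per-test miss probability. The Gaussian parameters, threshold and test interval below are illustrative, not the paper's derived distribution.

```python
import math

def q(x):
    # Gaussian tail probability Q(x) = P(Z > x).
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def mtll(p_lose_per_test, test_interval_s):
    # With independent tests, the number of tests until the first false
    # 'unlock' decision is geometric with mean 1/p, so:
    return test_interval_s / p_lose_per_test

# Assumed locked-state FLD output: Gaussian(mean=1.0, std=0.2); the loop
# declares loss of lock when the output falls below the threshold.
m, s, interval = 1.0, 0.2, 0.02     # 20 ms per detector test (assumed)
thr = 0.4
p_lose = q((m - thr) / s)           # per-test false-unlock probability
t_mtll = mtll(p_lose, interval)     # mean time to lose lock, seconds
```

Raising the threshold toward the mean increases `p_lose` and shortens the MTLL, which is the trade-off the paper's criteria formalize against C/N₀.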

  18. Level set method for image segmentation based on moment competition

    Science.gov (United States)

    Min, Hai; Wang, Xiao-Feng; Huang, De-Shuang; Jin, Jing; Wang, Hong-Zhi; Li, Hai

    2015-05-01

    We propose a level set method for image segmentation which introduces moment competition and weakly supervised information into the construction of the energy functional. Different from region-based level set methods that use force competition, moment competition is adopted to drive the contour evolution. A so-called three-point labeling scheme is proposed to manually label three independent points (the weakly supervised information) on the image. The intensity differences between the three points and the unlabeled pixels are then used to construct a force arm for each image pixel. The corresponding force is generated from the global statistical information of a region-based method and weighted by the force arm; the resulting moment is incorporated into the energy functional to drive the evolving contour toward the object boundary. In our method, the force arm takes full advantage of the three-point labeling scheme to constrain the moment competition. Additionally, the global statistical information and weakly supervised information are successfully integrated, which makes the proposed method more robust to initial contour placement and parameter setting than traditional methods. Experimental results with performance analysis also show the superiority of the proposed method in segmenting different types of complicated images, such as noisy images, three-phase images, images with intensity inhomogeneity, and texture images.

  19. Change in Vitamin D Levels Occurs Early after Antiretroviral Therapy Initiation and Depends on Treatment Regimen in Resource-Limited Settings

    Science.gov (United States)

    Havers, Fiona P.; Detrick, Barbara; Cardoso, Sandra W.; Berendes, Sima; Lama, Javier R.; Sugandhavesa, Patcharaphan; Mwelase, Noluthando H.; Campbell, Thomas B.; Gupta, Amita

    2014-01-01

    Study Background Vitamin D has wide-ranging effects on the immune system, and studies suggest that low serum vitamin D levels are associated with worse clinical outcomes in HIV. Recent studies have identified an interaction between antiretrovirals used to treat HIV and reduced serum vitamin D levels, but these studies have been done in North American and European populations. Methods Using a prospective cohort study design nested in a multinational clinical trial, we examined the effect of three combination antiretroviral (cART) regimens on serum vitamin D levels in 270 cART-naïve, HIV-infected adults in nine diverse countries (Brazil, Haiti, Peru, Thailand, India, Malawi, South Africa, Zimbabwe and the United States). We evaluated the change between baseline serum vitamin D levels and vitamin D levels 24 and 48 weeks after cART initiation. Results Serum vitamin D levels decreased significantly from baseline to 24 weeks among those randomized to efavirenz/lamivudine/zidovudine (mean change: −7.94 [95% Confidence Interval (CI) −10.42, −5.54] ng/ml) and efavirenz/emtricitabine/tenofovir-DF (mean change: −6.66 [95% CI −9.40, −3.92] ng/ml) when compared to those randomized to atazanavir/emtricitabine/didanosine-EC (mean change: −2.29 [95% CI −4.83, 0.25] ng/ml). Vitamin D levels did not change significantly between week 24 and 48. Other factors that significantly affected serum vitamin D change included country (p<0.001), season (p<0.001) and baseline vitamin D level (p<0.001). Conclusion Efavirenz-containing cART regimens adversely affected vitamin D levels in patients from economically, geographically and racially diverse resource-limited settings. This effect was most pronounced early after cART initiation. Research is needed to define the role of vitamin D supplementation in HIV care. PMID:24752177

  20. Topological Hausdorff dimension and level sets of generic continuous functions on fractals

    International Nuclear Information System (INIS)

    Balka, Richárd; Buczolich, Zoltán; Elekes, Márton

    2012-01-01

    Highlights: ► We examine a new fractal dimension, the so-called topological Hausdorff dimension. ► The generic continuous function has a level set of maximal Hausdorff dimension. ► This maximal dimension is the topological Hausdorff dimension minus one. ► Homogeneity implies that "most" level sets are of this dimension. ► We calculate the various dimensions of the graph of the generic function. - Abstract: In an earlier paper we introduced a new concept of dimension for metric spaces, the so-called topological Hausdorff dimension. For a compact metric space K let dim_H K and dim_tH K denote its Hausdorff and topological Hausdorff dimension, respectively. We proved that this new dimension describes the Hausdorff dimension of the level sets of the generic continuous function on K, namely sup{dim_H f^{-1}(y) : y ∈ R} = dim_tH K − 1 for the generic f ∈ C(K), provided that K is not totally disconnected; otherwise every non-empty level set is a singleton. We also proved that if K is not totally disconnected and sufficiently homogeneous then dim_H f^{-1}(y) = dim_tH K − 1 for the generic f ∈ C(K) and the generic y ∈ f(K). The most important goal of this paper is to make these theorems more precise. As for the first result, we prove that the supremum is actually attained on the left hand side of the first equation above, and also show that there may only be a unique level set of maximal Hausdorff dimension. As for the second result, we characterize those compact metric spaces for which for the generic f ∈ C(K) and the generic y ∈ f(K) we have dim_H f^{-1}(y) = dim_tH K − 1. We also generalize a result of B. Kirchheim by showing that if K is self-similar then for the generic f ∈ C(K), for every y ∈ int f(K), we have dim_H f^{-1}(y) = dim_tH K − 1. Finally, we prove that the graph of the generic f ∈ C(K) has the same Hausdorff and topological Hausdorff dimension as K.

  1. Relationships between college settings and student alcohol use before, during and after events: a multi-level study.

    Science.gov (United States)

    Paschall, Mallie J; Saltz, Robert F

    2007-11-01

    We examined how alcohol risk is distributed based on college students' drinking before, during and after they go to certain settings. Students attending 14 California public universities (N=10,152) completed a web-based or mailed survey in the fall 2003 semester, which included questions about how many drinks they consumed before, during and after the last time they went to six settings/events: fraternity or sorority party, residence hall party, campus event (e.g. football game), off-campus party, bar/restaurant and outdoor setting (referent). Multi-level analyses were conducted in hierarchical linear modeling (HLM) to examine relationships between type of setting and level of alcohol use before, during and after going to the setting, and possible age and gender differences in these relationships. Drinking episodes (N=24,207) were level 1 units, students were level 2 units and colleges were level 3 units. The highest drinking levels were observed during all settings/events except campus events, with the highest number of drinks being consumed at off-campus parties, followed by residence hall and fraternity/sorority parties. The number of drinks consumed before a fraternity/sorority party was higher than for other settings/events. Age group and gender differences in relationships between type of setting/event and 'before', 'during' and 'after' drinking levels were also observed. For example, going to a bar/restaurant (relative to an outdoor setting) was positively associated with 'during' drinks among students of legal drinking age, while no relationship was observed for underage students. Findings of this study indicate differences in the extent to which college settings are associated with student drinking levels before, during and after related events, and may have implications for intervention strategies targeting different types of settings.

  2. Individual- and Setting-Level Correlates of Secondary Traumatic Stress in Rape Crisis Center Staff.

    Science.gov (United States)

    Dworkin, Emily R; Sorell, Nicole R; Allen, Nicole E

    2016-02-01

    Secondary traumatic stress (STS) is an issue of significant concern among providers who work with survivors of sexual assault. Although STS has been studied in relation to individual-level characteristics of a variety of types of trauma responders, less research has focused specifically on rape crisis centers as environments that might convey risk or protection from STS, and no research to our knowledge has modeled setting-level variation in correlates of STS. The current study uses a sample of 164 staff members representing 40 rape crisis centers across a single Midwestern state to investigate the staff member- and agency-level correlates of STS. Results suggest that correlates exist at both levels of analysis. Younger age and greater severity of sexual assault history were statistically significant individual-level predictors of increased STS. Greater frequency of supervision was more strongly related to secondary stress for non-advocates than for advocates. At the setting level, lower levels of supervision and higher client loads agency-wide accounted for unique variance in staff members' STS. These findings suggest that characteristics of both providers and their settings are important to consider when understanding their STS. © The Author(s) 2014.

  3. Novel nonlinear knowledge-based mean force potentials based on machine learning.

    Science.gov (United States)

    Dong, Qiwen; Zhou, Shuigeng

    2011-01-01

    The prediction of 3D structures of proteins from amino acid sequences is one of the most challenging problems in molecular biology. An essential task for solving this problem with coarse-grained models is to deduce effective interaction potentials. The development and evaluation of new energy functions is critical to accurately modeling the properties of biological macromolecules. Knowledge-based mean force potentials are derived from statistical analysis of proteins of known structures. Current knowledge-based potentials almost all take the form of a weighted linear sum of interaction pairs. In this study, a class of novel nonlinear knowledge-based mean force potentials is presented. The potential parameters are obtained by nonlinear classifiers, instead of relative frequencies of interaction pairs against a reference state or linear classifiers. The support vector machine is used to derive the potential parameters on data sets that contain both native structures and decoy structures. Five knowledge-based mean force Boltzmann-based or linear potentials are introduced and their corresponding nonlinear potentials are implemented. They are the DIH potential (single-body residue-level Boltzmann-based potential), the DFIRE-SCM potential (two-body residue-level Boltzmann-based potential), the FS potential (two-body atom-level Boltzmann-based potential), the HR potential (two-body residue-level linear potential), and the T32S3 potential (two-body atom-level linear potential). Experiments are performed on well-established decoy sets, including the LKF data set, the CASP7 data set, and the Decoys “R”Us data set. The evaluation metrics include the energy Z score and the ability of each potential to discriminate native structures from a set of decoy structures. Experimental results show that all nonlinear potentials significantly outperform the corresponding Boltzmann-based or linear potentials, and the proposed discriminative framework is effective in developing knowledge
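
    The training scheme above separates native from decoy structures with an SVM. As a rough illustration (the synthetic Gaussian features and the Pegasos solver below are stand-ins, not the paper's descriptors or training procedure), a linear SVM can be fit by stochastic sub-gradient descent on the hinge loss:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical energy-term feature vectors: the paper's actual descriptors
# (pairwise interaction statistics, etc.) are replaced by synthetic Gaussians.
n, d = 400, 10
X_native = rng.normal(-0.5, 1.0, (n, d))   # label +1
X_decoy = rng.normal(+0.5, 1.0, (n, d))    # label -1
X = np.vstack([X_native, X_decoy])
y = np.concatenate([np.ones(n), -np.ones(n)])

# Pegasos: stochastic sub-gradient descent on the regularized hinge loss.
lam, epochs = 0.01, 20
w, t = np.zeros(d), 0
for _ in range(epochs):
    for i in rng.permutation(len(y)):
        t += 1
        eta = 1.0 / (lam * t)
        if y[i] * (X[i] @ w) < 1.0:        # margin violated: move toward sample
            w = (1 - eta * lam) * w + eta * y[i] * X[i]
        else:                              # margin satisfied: shrink only
            w = (1 - eta * lam) * w

acc = np.mean(np.sign(X @ w) == y)
print(acc)  # well above chance on this near-separable data
```

    The learned weight vector plays the role of the potential parameters: scoring a structure is a dot product between its feature vector and w.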

  4. Operational Meanings of Orders of Observables Defined through Quantum Set Theories with Different Conditionals

    Directory of Open Access Journals (Sweden)

    Masanao Ozawa

    2017-01-01

    Full Text Available In quantum logic there is well-known arbitrariness in choosing a binary operation for conditional. Currently, we have at least three candidates, called the Sasaki conditional, the contrapositive Sasaki conditional, and the relevance conditional. A fundamental problem is to show how the form of the conditional follows from an analysis of operational concepts in quantum theory. Here, we attempt such an analysis through quantum set theory (QST). In this paper, we develop quantum set theory based on quantum logics with those three conditionals, each of which defines a different quantum logical truth value assignment. We show that those three models satisfy the transfer principle of the same form to determine the quantum logical truth values of theorems of the ZFC set theory. We also show that the reals in the model and the truth values of their equality are the same for those models. Interestingly, however, the order relation between quantum reals significantly depends on the underlying conditionals. We characterize the operational meanings of those order relations in terms of joint probability obtained by the successive projective measurements of arbitrary two observables. Those characterizations clearly show their individual features and will play a fundamental role in future applications to quantum physics.

  5. An update: choice architecture as a means to change eating behaviour in self-service settings

    DEFF Research Database (Denmark)

    Skov, Laurits Rohden; Perez-Cueto, Armando

    2014-01-01

    Objective: The primary objective of this review was to update the current evidence-base for the use of choice architecture as a means to change eating behaviour in self-service eating settings, hence potentially reducing energy intake. Methodology: 12 databases were searched systematically for ex...... in the topic of choice architecture and nudging has increased the scientific output since the last review. There is a clear limitation in the lack of clear definitions and theoretical foundation.

  6. A Variational Level Set Approach Based on Local Entropy for Image Segmentation and Bias Field Correction.

    Science.gov (United States)

    Tang, Jian; Jiang, Xiaoliang

    2017-01-01

    Image segmentation has always been a considerable challenge in image analysis and understanding due to intensity inhomogeneity, which is also commonly known as bias field. In this paper, we present a novel region-based approach based on local entropy for segmenting images and estimating the bias field simultaneously. Firstly, a local Gaussian distribution fitting (LGDF) energy function is defined as a weighted energy integral, where the weight is the local entropy derived from the grey-level distribution of the local image. The means in this objective function carry a multiplicative factor that estimates the bias field in the transformed domain. The bias field prior is then fully used, so our model can estimate the bias field more accurately. Finally, minimizing this energy function with a level set regularization term achieves image segmentation and bias field estimation together. Experiments on images of various modalities demonstrated the superior performance of the proposed method when compared with other state-of-the-art approaches.
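
    The entropy weight above is computed from the grey-level histogram of a small neighbourhood. A minimal sketch (the window size and bin count are illustrative choices, not the paper's settings):

```python
import numpy as np

def local_entropy(img, win=3, bins=8):
    """Shannon entropy of the grey-level histogram in a (win x win)
    neighbourhood around each interior pixel. Uniform regions score 0;
    textured or inhomogeneous regions score higher."""
    h = win // 2
    out = np.zeros_like(img, dtype=float)
    edges = np.linspace(img.min(), img.max() + 1e-9, bins + 1)
    for i in range(h, img.shape[0] - h):
        for j in range(h, img.shape[1] - h):
            patch = img[i - h:i + h + 1, j - h:j + h + 1]
            p, _ = np.histogram(patch, bins=edges)
            p = p / p.sum()
            p = p[p > 0]
            out[i, j] = -(p * np.log2(p)).sum()
    return out

flat = np.full((9, 9), 5.0)                       # uniform region: zero entropy
varied = np.arange(81, dtype=float).reshape(9, 9)  # varied region: positive entropy
print(local_entropy(flat)[4, 4], local_entropy(varied)[4, 4])
```

    In the paper's energy, such a map weights the LGDF data term so that boundary-rich regions contribute differently from homogeneous ones.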

  7. Joint level-set and spatio-temporal motion detection for cell segmentation.

    Science.gov (United States)

    Boukari, Fatima; Makrogiannis, Sokratis

    2016-08-10

    Cell segmentation is a critical step for quantification and monitoring of cell cycle progression, cell migration, and growth control to investigate cellular immune response, embryonic development, tumorigenesis, and drug effects on live cells in time-lapse microscopy images. In this study, we propose a joint spatio-temporal diffusion and region-based level-set optimization approach for moving cell segmentation. Moving regions are initially detected in each set of three consecutive sequence images by numerically solving a system of coupled spatio-temporal partial differential equations. In order to standardize intensities of each frame, we apply a histogram transformation approach to match the pixel intensities of each processed frame with an intensity distribution model learned from all frames of the sequence during the training stage. After the spatio-temporal diffusion stage is completed, we compute the edge map by nonparametric density estimation using Parzen kernels. This process is followed by watershed-based segmentation and moving cell detection. We use this result as an initial level-set function to evolve the cell boundaries, refine the delineation, and optimize the final segmentation result. We applied this method to several datasets of fluorescence microscopy images with varying levels of difficulty with respect to cell density, resolution, contrast, and signal-to-noise ratio. We compared the results with those produced by Chan and Vese segmentation, a temporally linked level-set technique, and nonlinear diffusion-based segmentation. We validated all segmentation techniques against reference masks provided by the international Cell Tracking Challenge consortium. The proposed approach delineated cells with an average Dice similarity coefficient of 89 % over a variety of simulated and real fluorescent image sequences. It yielded average improvements of 11 % in segmentation accuracy compared to both strictly spatial and temporally linked Chan
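
    The Dice similarity coefficient used above to validate segmentations against reference masks is straightforward to compute; a minimal sketch with toy masks:

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks:
    2|A∩B| / (|A| + |B|). Equals 1 for identical masks, 0 for
    disjoint ones."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

truth = np.zeros((8, 8), int); truth[2:6, 2:6] = 1   # 16-pixel reference square
pred = np.zeros((8, 8), int); pred[3:7, 3:7] = 1     # shifted prediction, 9-pixel overlap
print(dice(truth, pred))  # 2*9 / (16+16) = 0.5625
```

    A reported average of 89% thus means the predicted and reference cell masks overlap almost entirely once both mask areas are accounted for.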

  8. A new level set model for cell image segmentation

    Science.gov (United States)

    Ma, Jing-Feng; Hou, Kai; Bao, Shang-Lian; Chen, Chun

    2011-02-01

    In this paper we first determine three phases of cell images: background, cytoplasm and nucleolus, according to the general physical characteristics of cell images, and then develop a variational model, based on these characteristics, to segment the nucleolus and cytoplasm from their relatively complicated backgrounds. In addition, information obtained by preprocessing the cell images with the Otsu algorithm is used to initialize the level set function in the model, which speeds up the segmentation and gives satisfactory results in cell image processing.
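
    The Otsu initialization mentioned above picks the grey-level threshold that maximizes between-class variance of the histogram. A self-contained sketch (the bin count and the synthetic bimodal test data are illustrative):

```python
import numpy as np

def otsu_threshold(values, bins=64):
    """Otsu's method: choose the threshold maximizing between-class
    variance of the grey-level histogram. A standard way to get a rough
    foreground/background split, e.g. to initialize a level set."""
    hist, edges = np.histogram(values, bins=bins)
    p = hist / hist.sum()
    centers = 0.5 * (edges[:-1] + edges[1:])
    w0 = np.cumsum(p)                 # class-0 (below threshold) probability
    mu = np.cumsum(p * centers)       # cumulative mean
    mu_t = mu[-1]                     # global mean
    w1 = 1.0 - w0
    valid = (w0 > 0) & (w1 > 0)
    sigma_b = np.zeros_like(w0)
    # between-class variance: (mu_t*w0 - mu)^2 / (w0*w1)
    sigma_b[valid] = (mu_t * w0[valid] - mu[valid]) ** 2 / (w0[valid] * w1[valid])
    return centers[np.argmax(sigma_b)]

# Bimodal test "image": dark background around 50, bright nuclei around 200.
rng = np.random.default_rng(2)
img = np.concatenate([rng.normal(50, 10, 5000), rng.normal(200, 10, 1000)])
t = otsu_threshold(img)
print(t)  # lands between the two modes
```

    Thresholding at `t` yields a coarse binary mask whose boundary can serve as the initial zero level set, which is what speeds up the subsequent variational evolution.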

  9. Shrinkage covariance matrix approach based on robust trimmed mean in gene sets detection

    Science.gov (United States)

    Karjanto, Suryaefiza; Ramli, Norazan Mohamed; Ghani, Nor Azura Md; Aripin, Rasimah; Yusop, Noorezatty Mohd

    2015-02-01

    Microarray technology involves placing an orderly arrangement of thousands of gene sequences in a grid on a suitable surface. The technology has enabled novel discoveries since its development and has attracted increasing attention among researchers. The widespread adoption of microarray technology is largely due to its ability to perform simultaneous analysis of thousands of genes in a massively parallel manner in one experiment. Hence, it provides valuable knowledge on gene interaction and function. A microarray data set typically consists of tens of thousands of genes (variables) from just dozens of samples due to various constraints. Therefore, the sample covariance matrix in Hotelling's T2 statistic is not positive definite and becomes singular, so it cannot be inverted. In this research, the Hotelling's T2 statistic is combined with a shrinkage approach as an alternative estimation of the covariance matrix to detect significant gene sets. The use of a shrinkage covariance matrix overcomes the singularity problem by converting an unbiased but ill-conditioned estimator into an improved biased estimator of the covariance matrix. A robust trimmed mean is integrated into the shrinkage matrix to reduce the influence of outliers and consequently increase its efficiency. The performance of the proposed method is measured using several simulation designs. The results are expected to outperform existing techniques in many tested conditions.
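
    A sketch of the core idea: center the data on a trimmed mean, then shrink the sample covariance toward a diagonal target so it stays invertible when variables far outnumber samples. The fixed shrinkage weight and trimming proportion below are illustrative; the paper estimates such quantities from the data:

```python
import numpy as np

def trimmed_mean(X, prop=0.1):
    """Column-wise trimmed mean: drop the lowest and highest `prop`
    fraction of each variable before averaging (robust to outliers)."""
    Xs = np.sort(X, axis=0)
    k = int(prop * X.shape[0])
    return Xs[k:X.shape[0] - k].mean(axis=0)

def shrinkage_cov(X, alpha=0.5, prop=0.1):
    """Sample covariance around a robust trimmed mean, blended with a
    diagonal target. `alpha` is a fixed illustrative weight here."""
    n, p = X.shape
    Xc = X - trimmed_mean(X, prop)
    S = Xc.T @ Xc / (n - 1)                    # rank-deficient when p > n
    target = np.diag(np.diag(S))               # positive-definite diagonal target
    return (1 - alpha) * S + alpha * target

# p = 50 "genes", n = 12 samples: plain S is singular, the shrunk version is not.
rng = np.random.default_rng(3)
X = rng.normal(size=(12, 50))
Sigma = shrinkage_cov(X)
print(np.linalg.matrix_rank(Sigma))  # full rank, so Hotelling's T2 can invert it
```

    The blend is positive definite because the diagonal target is, which is exactly what restores invertibility for the T2 statistic.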

  10. Level-set-based reconstruction algorithm for EIT lung images: first clinical results.

    Science.gov (United States)

    Rahmati, Peyman; Soleimani, Manuchehr; Pulletz, Sven; Frerichs, Inéz; Adler, Andy

    2012-05-01

    We show the first clinical results using the level-set-based reconstruction algorithm for electrical impedance tomography (EIT) data. The level-set-based reconstruction method (LSRM) allows the reconstruction of non-smooth interfaces between image regions, which are typically smoothed by traditional voxel-based reconstruction methods (VBRMs). We develop a time difference formulation of the LSRM for 2D images. The proposed reconstruction method is applied to reconstruct clinical EIT data of a slow flow inflation pressure-volume manoeuvre in lung-healthy and adult lung-injury patients. Images from the LSRM and the VBRM are compared. The results show comparable reconstructed images, but with an improved ability to reconstruct sharp conductivity changes in the distribution of lung ventilation using the LSRM.

  11. Level-set-based reconstruction algorithm for EIT lung images: first clinical results

    International Nuclear Information System (INIS)

    Rahmati, Peyman; Adler, Andy; Soleimani, Manuchehr; Pulletz, Sven; Frerichs, Inéz

    2012-01-01

    We show the first clinical results using the level-set-based reconstruction algorithm for electrical impedance tomography (EIT) data. The level-set-based reconstruction method (LSRM) allows the reconstruction of non-smooth interfaces between image regions, which are typically smoothed by traditional voxel-based reconstruction methods (VBRMs). We develop a time difference formulation of the LSRM for 2D images. The proposed reconstruction method is applied to reconstruct clinical EIT data of a slow flow inflation pressure–volume manoeuvre in lung-healthy and adult lung-injury patients. Images from the LSRM and the VBRM are compared. The results show comparable reconstructed images, but with an improved ability to reconstruct sharp conductivity changes in the distribution of lung ventilation using the LSRM. (paper)

  12. GSHR, a Web-Based Platform Provides Gene Set-Level Analyses of Hormone Responses in Arabidopsis

    Directory of Open Access Journals (Sweden)

    Xiaojuan Ran

    2018-01-01

    Full Text Available Phytohormones regulate diverse aspects of plant growth and environmental responses. Recent high-throughput technologies have promoted a more comprehensive profiling of genes regulated by different hormones. However, these omics data generally result in large gene lists that make it challenging to interpret the data and extract insights into biological significance. With the rapid accumulation of these large-scale experiments, especially the transcriptomic data available in public databases, a means of using this information to explore the transcriptional networks is needed. Different platforms have different architectures and designs, and even similar studies using the same platform may obtain data with large variances because of the highly dynamic and flexible effects of plant hormones; this makes it difficult to make comparisons across different studies and platforms. Here, we present a web server providing gene set-level analyses of Arabidopsis thaliana hormone responses. GSHR collected 333 RNA-seq and 1,205 microarray datasets from the Gene Expression Omnibus, characterizing transcriptomic changes in Arabidopsis in response to phytohormones including abscisic acid, auxin, brassinosteroids, cytokinins, ethylene, gibberellins, jasmonic acid, salicylic acid, and strigolactones. These data were further processed and organized into 1,368 gene sets regulated by different hormones or hormone-related factors. By comparing input gene lists to these gene sets, GSHR helps to identify gene sets from the input gene list regulated by different phytohormones or related factors. Together, GSHR links prior information regarding transcriptomic changes induced by hormones and related factors to newly generated data and facilitates cross-study and cross-platform comparisons; this helps with the mining of biologically significant information from large-scale datasets. The GSHR is freely available at http://bioinfo.sibs.ac.cn/GSHR/.
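
    The abstract does not state which statistic GSHR uses to match input lists against its 1,368 gene sets; a common choice for such overlap tests is the hypergeometric tail probability, sketched here with toy numbers (universe size, set sizes, and overlap are all hypothetical):

```python
from math import comb

def overlap_pvalue(N, K, n, k):
    """Hypergeometric tail P(X >= k): the probability that a random
    n-gene list drawn from a universe of N genes shares at least k
    genes with a K-gene hormone-response set."""
    total = comb(N, n)
    return sum(comb(K, i) * comb(N - K, n - i)
               for i in range(k, min(K, n) + 1)) / total

# Toy numbers: 20,000-gene universe, 200-gene ABA-response set,
# 100-gene input list sharing 10 genes (chance expectation is ~1 gene).
p = overlap_pvalue(20000, 200, 100, 10)
print(p < 1e-6)  # True: far more overlap than expected by chance
```

    Ranking gene sets by such p-values (with multiple-testing correction across the 1,368 sets) is the standard way an enrichment server surfaces the most relevant hormone responses.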

  13. Level Set Projection Method for Incompressible Navier-Stokes on Arbitrary Boundaries

    KAUST Repository

    Williams-Rioux, Bertrand

    2012-01-12

    A second order level set projection method for incompressible Navier-Stokes equations is proposed to solve flow around arbitrary geometries. We use a rectilinear grid with collocated, cell-centered velocity and pressure. An explicit Godunov procedure is used to address the nonlinear advection terms, and an implicit Crank-Nicolson method to update viscous effects. An approximate pressure projection is implemented at the end of the time stepping using multigrid as a conventional fast iterative method. The level set method developed by Osher and Sethian [17] is implemented to address real momentum and pressure boundary conditions by the advection of a distance function, as proposed by Aslam [3]. Numerical results for the Strouhal number and drag coefficients validated the model with good accuracy for flow over a cylinder in the parallel shedding regime (47 < Re < 180). Simulations for an array of cylinders and an oscillating cylinder were performed, with the latter demonstrating our method's ability to handle dynamic boundary conditions.
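
    The distance-function advection at the heart of such methods can be sketched in one dimension: move a signed distance function with an upwind scheme and track its zero crossing as the interface. Grid size, velocity, and time step below are illustrative choices, not the paper's discretization:

```python
import numpy as np

# 1-D level-set advection: phi is a signed distance function whose zero
# crossing marks the interface; first-order upwind differencing keeps the
# scheme stable under the CFL condition.
nx, L = 400, 4.0
dx = L / nx
x = dx * np.arange(nx)
phi = x - 1.0            # interface (phi = 0) initially at x = 1
v = 0.5                  # constant rightward speed
dt = 0.5 * dx / abs(v)   # CFL-stable step
t_end = 2.0

t = 0.0
while t < t_end - 1e-12:
    if v > 0:            # upwind: look backward for rightward motion
        dphi = (phi - np.roll(phi, 1)) / dx
        dphi[0] = dphi[1]
    else:                # look forward for leftward motion
        dphi = (np.roll(phi, -1) - phi) / dx
        dphi[-1] = dphi[-2]
    phi = phi - dt * v * dphi
    t += dt

interface = x[np.argmin(np.abs(phi))]
print(interface)  # ~2.0: the front has moved v * t_end = 1.0 to the right
```

    In the paper's 2-D setting the same idea applies with the velocity taken from the flow solver, which is how arbitrary and moving geometries are represented on a fixed rectilinear grid.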

  14. Anxiety towards Mathematics and Educational Level: A Study on Means Differences

    Science.gov (United States)

    Escalera-Chávez, Milka Elena; García-Santillán, Arturo; Córdova-Rangel, Arturo; González-Gómez, Santiago; Tejada-Peña, Esmeralda

    2016-01-01

    The aim of this research work is to analyze whether there is a difference in the degree of anxiety towards mathematics among students of different educational levels. The study is non-experimental and cross-sectional, and is based on differences of means between groups. The sample is non-probabilistic, and consisted of 226 students from…

  15. A multiresolutional approach to fuzzy text meaning: A first attempt

    Energy Technology Data Exchange (ETDEWEB)

    Mehler, A.

    1996-12-31

    The present paper focuses on the connotative meaning of language signs, especially above the level of words. In this context the view is taken that texts can be defined as a kind of supersign to which, in the same way as to other signs, a meaning can be assigned. A text can therefore be described as the result of a sign articulation which connects the material text sign with a corresponding meaning. The structural text meaning is constituted by a kind of semiotic composition principle, which leads to the emergence of interlocked levels of language units demonstrating different grades of resolution. Starting at the level of words, and passing through the level of sentences, this principle finally reaches the level of texts by aggregating, step by step, the meaning of a unit on a higher level out of the meanings of all components one level below that occur within this unit. This article also elaborates the hypothesis that meaning constitution as a two-stage process, corresponding to the syntagmatic and paradigmatic restrictions of language elements among each other, obtains equally on the level of texts. On the text level this two-stage structure leads to the constitution of the connotative text meaning, whose constituents are determined on the word level by the syntagmatic and paradigmatic relations of the words. The text meaning representation is formalized with the help of fuzzy set theory.

  16. APPLICATION OF ROUGH SET THEORY TO MAINTENANCE LEVEL DECISION-MAKING FOR AERO-ENGINE MODULES BASED ON INCREMENTAL KNOWLEDGE LEARNING

    Institute of Scientific and Technical Information of China (English)

    陆晓华; 左洪福; 蔡景

    2013-01-01

    The maintenance of an aero-engine usually includes three levels, and the maintenance cost and period differ greatly depending on the maintenance level. To plan a reasonable maintenance budget program, airlines would like to predict the maintenance level of an aero-engine before repair in terms of performance parameters, which can provide more economic benefits. The maintenance level decision rules are mined from the historical maintenance data of a civil aero-engine based on rough set theory, and a variety of possible models for updating the rules, produced as newly increased maintenance cases are added to the historical maintenance case database, are investigated by means of incremental machine learning. The continuously updated rules can provide reasonable guidance suggestions for engineers and decision support for planning a maintenance budget program before repair. The results of an example show that the decision rules become more typical and robust, and more accurate in predicting the maintenance level of an aero-engine module, as the maintenance data increase, which illustrates the feasibility of the presented method.

  17. Stochastic level-set variational implicit-solvent approach to solute-solvent interfacial fluctuations

    Energy Technology Data Exchange (ETDEWEB)

    Zhou, Shenggao, E-mail: sgzhou@suda.edu.cn, E-mail: bli@math.ucsd.edu [Department of Mathematics and Mathematical Center for Interdiscipline Research, Soochow University, 1 Shizi Street, Jiangsu, Suzhou 215006 (China); Sun, Hui; Cheng, Li-Tien [Department of Mathematics, University of California, San Diego, La Jolla, California 92093-0112 (United States); Dzubiella, Joachim [Soft Matter and Functional Materials, Helmholtz-Zentrum Berlin, 14109 Berlin, Germany and Institut für Physik, Humboldt-Universität zu Berlin, 12489 Berlin (Germany); Li, Bo, E-mail: sgzhou@suda.edu.cn, E-mail: bli@math.ucsd.edu [Department of Mathematics and Quantitative Biology Graduate Program, University of California, San Diego, La Jolla, California 92093-0112 (United States); McCammon, J. Andrew [Department of Chemistry and Biochemistry, Department of Pharmacology, Howard Hughes Medical Institute, University of California, San Diego, La Jolla, California 92093-0365 (United States)

    2016-08-07

    Recent years have seen the initial success of a variational implicit-solvent model (VISM), implemented with a robust level-set method, in capturing efficiently different hydration states and providing quantitatively good estimation of solvation free energies of biomolecules. The level-set minimization of the VISM solvation free-energy functional of all possible solute-solvent interfaces or dielectric boundaries predicts an equilibrium biomolecular conformation that is often close to an initial guess. In this work, we develop a theory in the form of Langevin geometrical flow to incorporate solute-solvent interfacial fluctuations into the VISM. Such fluctuations are crucial to biomolecular conformational changes and binding processes. We also develop a stochastic level-set method to numerically implement such a theory. We describe the interfacial fluctuation through the “normal velocity” that is the solute-solvent interfacial force, derive the corresponding stochastic level-set equation in the sense of Stratonovich so that the surface representation is independent of the choice of implicit function, and develop numerical techniques for solving such an equation and processing the numerical data. We apply our computational method to study the dewetting transition in the system of two hydrophobic plates and a hydrophobic cavity of a synthetic host molecule cucurbit[7]uril. Numerical simulations demonstrate that our approach can describe an underlying system jumping out of a local minimum of the free-energy functional and can capture dewetting transitions of hydrophobic systems. In the case of two hydrophobic plates, we find that the wavelength of interfacial fluctuations has a strong influence on the dewetting transition. In addition, we find that the estimated energy barrier of the dewetting transition scales quadratically with the inter-plate distance, agreeing well with existing studies of molecular dynamics simulations. Our work is a first step toward the

  18. A new level set model for cell image segmentation

    International Nuclear Information System (INIS)

    Ma Jing-Feng; Chen Chun; Hou Kai; Bao Shang-Lian

    2011-01-01

    In this paper we first determine three phases of cell images: background, cytoplasm and nucleolus, according to the general physical characteristics of cell images, and then develop a variational model, based on these characteristics, to segment the nucleolus and cytoplasm from their relatively complicated backgrounds. In addition, information obtained by preprocessing the cell images with the Otsu algorithm is used to initialize the level set function in the model, which speeds up the segmentation and gives satisfactory results in cell image processing. (cross-disciplinary physics and related areas of science and technology)

  19. Mean glucose level is not an independent risk factor for mortality in mixed ICU patients

    NARCIS (Netherlands)

    Ligtenberg, JJM; Meijering, S; Stienstra, Y; van der Horst, ICC; Vogelzang, M; Nijsten, MWN; Tulleken, JE; Zijlstra, JG

    Objective: To find out if there is an association between hyperglycaemia and mortality in mixed ICU patients. Design and setting: Retrospective cohort study over a 2-year period at the medical ICU of a university hospital. Measurements: Admission glucose, maximum and mean glucose, length of stay,

  20. Application of X-FEM and level sets to the homogenization of random materials characterized by digital imaging

    OpenAIRE

    Ionescu , Irina; Moës , Nicolas; Cartraud , Patrice; Béringhier , Marianne

    2007-01-01

    International audience; The advances of material characterization by means of imaging techniques require powerful computational methods for numerical analyses. This paper focuses on the advantages of coupling the X-FEM and level sets to solve microstructures with complex geometry. The level set information is obtained from a digital image and then used within an X-FEM computation, where the mesh does not need to conform to the material interface. An example of homogenization is presented.

  1. Keep Meaning in Conversational Coordination

    Directory of Open Access Journals (Sweden)

    Elena Clare Cuffari

    2014-12-01

    Full Text Available Coordination is a widely employed term across recent quantitative and qualitative approaches to intersubjectivity, particularly approaches that give embodiment and enaction central explanatory roles. With a focus on linguistic and bodily coordination in conversational contexts, I review the operational meaning of coordination in recent empirical research and related theorizing of embodied intersubjectivity. This discussion articulates what must be involved in treating linguistic meaning as dynamic processes of coordination. The coordination approach presents languaging as a set of dynamic self-organizing processes and actions, on multiple timescales and across multiple modalities, that come about and work in certain domains (those jointly constructed in social, interactive, high-order sense-making). These processes go beyond meaning at the level that is available to first-person experience. I take one crucial consequence of this to be the ubiquitously moral nature of languaging with others. Languaging coordinates experience, among other levels of behavior and event. Ethical effort is called for by the automatic autonomy-influencing forces of languaging as coordination.

  2. The predictive value of mean serum uric acid levels for developing prediabetes.

    Science.gov (United States)

    Zhang, Qing; Bao, Xue; Meng, Ge; Liu, Li; Wu, Hongmei; Du, Huanmin; Shi, Hongbin; Xia, Yang; Guo, Xiaoyan; Liu, Xing; Li, Chunlei; Su, Qian; Gu, Yeqing; Fang, Liyun; Yu, Fei; Yang, Huijun; Yu, Bin; Sun, Shaomei; Wang, Xing; Zhou, Ming; Jia, Qiyu; Zhao, Honglin; Huang, Guowei; Song, Kun; Niu, Kaijun

    2016-08-01

    We aimed to assess the predictive value of mean serum uric acid (SUA) levels for incident prediabetes. Normoglycemic adults (n=39,353) were followed for a median of 3.0 years. Prediabetes is defined as impaired fasting glucose (IFG), impaired glucose tolerance (IGT), or impaired HbA1c (IA1c), based on the American Diabetes Association criteria. SUA levels were measured annually. Four diagnostic strategies were used to detect prediabetes in four separate analyses (Analysis 1: IFG. Analysis 2: IFG+IGT. Analysis 3: IFG+IA1c. Analysis 4: IFG+IGT+IA1c). Cox proportional hazards regression models were used to assess the relationship between SUA quintiles and prediabetes. The C-statistic was additionally used in the final analysis to assess the accuracy of predictions based upon baseline SUA and mean SUA, respectively. After adjustment for potential confounders, the hazard ratios (95% confidence interval) of prediabetes for the highest versus lowest quintile of mean SUA were 1.22 (1.10, 1.36) in analysis 1; 1.59 (1.23, 2.05) in analysis 2; 1.62 (1.34, 1.95) in analysis 3 and 1.67 (1.31, 2.13) in analysis 4. In contrast, for baseline SUA, significance was only reached in analyses 3 and 4. Moreover, compared with baseline SUA, mean SUA was associated with a significant increase in the C-statistic for predicting prediabetes risk, and showed better predictive ability for prediabetes than baseline SUA. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  3. PERCEPTION LEVEL EVALUATION OF RADIO ELECTRONIC MEANS TO A PULSE OF ELECTROMAGNETIC RADIATION

    Directory of Open Access Journals (Sweden)

    2016-01-01

    Full Text Available The method for evaluating the perception level of electronic means exposed to pulsed electromagnetic radiation is considered in this article. The electromagnetic wave penetration mechanism towards the elements of electronic systems, and the impact on them, are determined by the intensity of the radiation field at the elements of the electronic systems. The impact of electromagnetic radiation pulses on electronic systems refers to the physical and analytical parameters of the relationship between exposure to pulses of electromagnetic radiation and the sample parameters of electronic systems. A physical and mathematical model for evaluating the perception level of electronic means exposed to pulsed electromagnetic radiation is given. The developed model is based on the physics of electronic-means failure, which describes the electromagnetic, electric and thermal processes that lead to the degradation of the original structure of the apparatus elements. The conditions that lead to the complete functional destruction of electronic systems when exposed to electromagnetic radiation pulses are described. The internal characteristics of the component elements that respond to the damaging effects are considered. A relation for the failure power is determined. The thermal breakdown temperature versus pulse duration of exposure at various power levels is obtained. A way of evaluating the reliability of electronic systems when exposed to pulses of electromagnetic radiation as a destructive factor is obtained.

  4. Some free boundary problems in potential flow regime using a level-set-based method

    Energy Technology Data Exchange (ETDEWEB)

    Garzon, M.; Bobillo-Ares, N.; Sethian, J.A.

    2008-12-09

    Recent advances in the field of fluid mechanics with moving fronts are linked to the use of Level Set Methods, a versatile mathematical technique to follow free boundaries which undergo topological changes. A challenging class of problems in this context are those related to the solution of a partial differential equation posed on a moving domain, in which the boundary condition for the PDE solver has to be obtained from a partial differential equation defined on the front. This is the case for potential flow models with moving boundaries. Moreover, the fluid front will possibly be carrying some material substance which will diffuse on the front and be advected with the front velocity, as for example when surfactants are used to lower surface tension. We present a Level Set based methodology to embed these partial differential equations defined on the front in a complete Eulerian framework, fully avoiding the tracking of fluid particles and its known limitations. To show the advantages of this approach in the field of fluid mechanics we present in this work one particular application: the numerical approximation of a potential flow model to simulate the evolution and breaking of a solitary wave propagating over a sloping bottom, and compare the level set based algorithm with previous front tracking models.

  5. Automatic sets and Delone sets

    International Nuclear Information System (INIS)

    Barbe, A; Haeseler, F von

    2004-01-01

    Automatic sets D ⊂ Z^m are characterized by having a finite number of decimations. They are equivalently generated by fixed points of certain substitution systems, or by certain finite automata. As examples, two-dimensional versions of the Thue-Morse, Baum-Sweet, Rudin-Shapiro and paperfolding sequences are presented. We give a necessary and sufficient condition for an automatic set D ⊂ Z^m to be a Delone set in R^m. The result is then extended to automatic sets that are defined as fixed points of certain substitutions. The morphology of automatic sets is discussed by means of examples
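
    The two-dimensional Thue-Morse set mentioned above can be generated directly: a cell (i, j) belongs to the set when the total number of 1-bits in the binary expansions of i and j is odd. This is an illustrative construction consistent with the usual definition, not code from the paper:

```python
def thue_morse_2d(n):
    """n x n two-dimensional Thue-Morse pattern as 0/1 values:
    entry (i, j) is the parity of the total binary digit sum of i and j."""
    return [[(bin(i).count("1") + bin(j).count("1")) % 2 for j in range(n)]
            for i in range(n)]

tm = thue_morse_2d(4)
for row in tm:
    print(row)
# The top-left 2x2 block [[0, 1], [1, 0]] is the classic Thue-Morse motif,
# and the pattern is invariant under the decimation (i, j) -> (2i, 2j),
# since doubling an integer does not change its binary digit sum.
```

    That decimation invariance is a small concrete instance of the "finite number of decimations" property that characterizes automatic sets.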

  6. Viewers Extract Mean and Individual Identity from Sets of Famous Faces

    Science.gov (United States)

    Neumann, Markus F.; Schweinberger, Stefan R.; Burton, A. Mike

    2013-01-01

    When viewers are shown sets of similar objects (for example circles), they may extract summary information (e.g., average size) while retaining almost no information about the individual items. A similar observation can be made when using sets of unfamiliar faces: Viewers tend to merge identity or expression information from the set exemplars into…

  7. Implications of sea-level rise in a modern carbonate ramp setting

    Science.gov (United States)

    Lokier, Stephen W.; Court, Wesley M.; Onuma, Takumi; Paul, Andreas

    2018-03-01

    This study addresses a gap in our understanding of the effects of sea-level rise on the sedimentary systems and morphological development of recent and ancient carbonate ramp settings. Many ancient carbonate sequences are interpreted as having been deposited in carbonate ramp settings. These settings are poorly represented in the Recent. The study documents the present-day transgressive flooding of the Abu Dhabi coastline at the southern shoreline of the Arabian/Persian Gulf, a carbonate ramp depositional system that is widely employed as a Recent analogue for numerous ancient carbonate systems. Fourteen years of field-based observations are integrated with historical and recent high-resolution satellite imagery in order to document and assess the onset of flooding. Predicted rates of transgression (i.e. landward movement of the shoreline) of 2.5 m yr⁻¹ (± 0.2 m yr⁻¹) based on global sea-level rise alone were far exceeded by the flooding rate calculated from the back-stepping of coastal features (10-29 m yr⁻¹). This discrepancy results from the dynamic nature of the flooding, with increased water depth exposing the coastline to increased erosion and thereby enhancing back-stepping. A non-accretionary transgressive shoreline trajectory results from relatively rapid sea-level rise coupled with a low-angle ramp geometry and a paucity of sediments. The flooding is represented by the landward migration of facies belts, a range of erosive features and the onset of bioturbation. Employing Intergovernmental Panel on Climate Change (Church et al., 2013) predictions for 21st century sea-level rise, and allowing for the post-flooding lag time that is typical for the start-up of carbonate factories, it is calculated that the coastline will continue to retrograde for the foreseeable future. Total passive flooding (without considering feedback in the modification of the shoreline) by the year 2100 is calculated to likely be between 340 and 571 m with a flooding rate of 3

  8. Robust boundary detection of left ventricles on ultrasound images using ASM-level set method.

    Science.gov (United States)

    Zhang, Yaonan; Gao, Yuan; Li, Hong; Teng, Yueyang; Kang, Yan

    2015-01-01

    The level set method has been widely used in medical image analysis, but it has difficulties when used to segment left ventricular (LV) boundaries on echocardiography images, because the boundaries are not very distinct and the signal-to-noise ratio of echocardiography images is not very high. In this paper, we introduce the Active Shape Model (ASM) into the traditional level set method to enforce shape constraints. This improves the accuracy of boundary detection and makes the evolution more efficient. Experiments conducted on real cardiac ultrasound image sequences show positive and promising results.

  9. Identification of Arbitrary Zonation in Groundwater Parameters using the Level Set Method and a Parallel Genetic Algorithm

    Science.gov (United States)

    Lei, H.; Lu, Z.; Vesselinov, V. V.; Ye, M.

    2017-12-01

    Simultaneous identification of both the zonation structure of aquifer heterogeneity and the hydrogeological parameters associated with these zones is challenging, especially for complex subsurface heterogeneity fields. In this study, a new approach based on the combination of the level set method and a parallel genetic algorithm is proposed. Starting with an initial guess for the zonation field (including both the zonation structure and the hydraulic properties of each zone), the level set method ensures that material interfaces evolve through the inverse process such that the total residual between the simulated and observed state variables (hydraulic head) always decreases; consequently, the inversion result depends on the initial guess field, and the minimization may fail if it encounters a local minimum. To find the global minimum, the genetic algorithm (GA) is utilized to explore the parameters that define the initial guess fields, and the minimal total residual corresponding to each initial guess field is taken as the fitness function value in the GA. Because the fitness function is expensive to evaluate, a parallel GA is adopted in combination with a simulated annealing algorithm. The new approach has been applied to several synthetic cases in both steady-state and transient flow fields, including a case with real flow conditions at the chromium contaminant site at the Los Alamos National Laboratory. The results show that this approach is capable of effectively identifying arbitrary zonation structures of aquifer heterogeneity and the hydrogeological parameters associated with these zones.
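
    The outer loop of the approach can be sketched as follows (our reconstruction, not the authors' code): each GA individual encodes an initial-guess field, and its fitness is the residual left by the local, always-decreasing inversion started from that guess. A one-dimensional toy residual with local minima stands in for the expensive level-set inversion; population size, mutation scale and generation count are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(42)

def fitness(x):
    # Stand-in for "residual remaining after level-set descent from guess x":
    # a smooth bowl with sinusoidal bumps creating local minima.
    return (x - 3.0) ** 2 + 2.0 * np.sin(5.0 * x) ** 2

pop = rng.uniform(-10.0, 10.0, size=40)            # population of initial guesses
for gen in range(80):
    order = np.argsort(fitness(pop))
    parents = pop[order[:20]]                      # keep the fitter half (elitism)
    mates = rng.permutation(parents)
    children = 0.5 * (parents + mates)             # arithmetic crossover
    children += rng.normal(0.0, 0.5, size=20)      # Gaussian mutation
    pop = np.concatenate([parents, children])

best = pop[np.argmin(fitness(pop))]                # global-minimum candidate
```

    Because the fittest parents are carried over unchanged, the best residual never degrades; in the real application each `fitness` call would launch an independent (hence easily parallelized) level-set inversion.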

  10. A highly efficient 3D level-set grain growth algorithm tailored for ccNUMA architecture

    Science.gov (United States)

    Mießen, C.; Velinov, N.; Gottstein, G.; Barrales-Mora, L. A.

    2017-12-01

    A highly efficient simulation model for 2D and 3D grain growth was developed based on the level-set method. The model introduces modern computational concepts to achieve excellent performance on parallel computer architectures. Strong scalability was measured on cache-coherent non-uniform memory access (ccNUMA) architectures. To achieve this, the proposed approach applies local level-set functions at the grain level. Ideal and non-ideal grain growth was simulated in 3D with the objective of studying the evolution of statistically representative volume elements in polycrystals. In addition, microstructure evolution in an anisotropic magnetic material affected by an external magnetic field was simulated.

  11. Topology optimization in acoustics and elasto-acoustics via a level-set method

    Science.gov (United States)

    Desai, J.; Faure, A.; Michailidis, G.; Parry, G.; Estevez, R.

    2018-04-01

    Optimizing the shape and topology (S&T) of structures to improve their acoustic performance is quite challenging. The exact position of the structural boundary is usually of critical importance, which dictates the use of geometric methods for topology optimization instead of standard density approaches. The goal of the present work is to investigate different possibilities for handling topology optimization problems in acoustics and elasto-acoustics via a level-set method. From a theoretical point of view, we detail two equivalent ways to perform the derivation of surface-dependent terms and propose a smoothing technique for treating problems of boundary condition optimization. In the numerical part, we examine the importance of the surface-dependent term in the shape derivative, neglected in previous studies found in the literature, on the optimal designs. Moreover, we test different mesh adaptation choices, as well as technical details related to the implicit surface definition in the level-set approach. We present results in two and three space dimensions.

  12. An Accurate Fire-Spread Algorithm in the Weather Research and Forecasting Model Using the Level-Set Method

    Science.gov (United States)

    Muñoz-Esparza, Domingo; Kosović, Branko; Jiménez, Pedro A.; Coen, Janice L.

    2018-04-01

    The level-set method is typically used to track and propagate the fire perimeter in wildland fire models. Herein, a high-order level-set method using a fifth-order WENO scheme for the discretization of spatial derivatives and third-order explicit Runge-Kutta temporal integration is implemented within the Weather Research and Forecasting model wildland fire physics package, WRF-Fire. The algorithm includes the solution of an additional partial differential equation for level-set reinitialization. The accuracy of the fire-front shape and rate of spread in uncoupled simulations is systematically analyzed. It is demonstrated that the common implementation used by level-set-based wildfire models yields rate-of-spread errors in the range of 10-35% for typical grid sizes (Δ = 12.5-100 m) and considerably underestimates fire area. Moreover, the amplitude of fire-front gradients in the presence of explicitly resolved turbulence features is systematically underestimated. In contrast, the new WRF-Fire algorithm results in rate-of-spread errors that are lower than 1% and that become nearly grid independent. Also, the underestimation of fire area at the sharp transition between the fire front and the lateral flanks is found to be reduced by a factor of ≈7. A hybrid-order level-set method with locally reduced artificial viscosity is proposed, which substantially alleviates the computational cost associated with high-order discretizations while preserving accuracy. Simulations of the Last Chance wildfire demonstrate additional benefits of high-order accurate level-set algorithms when dealing with complex fuel heterogeneities, enabling propagation across narrow fuel gaps and more accurate fire backing over the lee side of no-fuel clusters.
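
    In level-set practice the third-order explicit Runge-Kutta integrator mentioned above is usually the Shu-Osher TVD variant; that specific choice is our assumption, since the abstract only states the order. A minimal sketch of one step, checked on a scalar ODE:

```python
import math

def rk3_tvd(y, f, dt):
    """One Shu-Osher third-order TVD Runge-Kutta step for dy/dt = f(y)."""
    y1 = y + dt * f(y)                          # Euler stage
    y2 = 0.75 * y + 0.25 * (y1 + dt * f(y1))    # convex combination, stage 2
    return y / 3.0 + (2.0 / 3.0) * (y2 + dt * f(y2))

# Sanity check on dy/dt = -y, whose exact solution at t = 1 is exp(-1):
y, dt = 1.0, 0.01
for _ in range(100):
    y = rk3_tvd(y, lambda v: -v, dt)
```

    In a level-set code, `f` would be the (negated) spatial operator assembled from WENO-reconstructed derivatives; the convex-combination structure is what preserves the TVD property of the spatial scheme.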

  13. Dynamic-thresholding level set: a novel computer-aided volumetry method for liver tumors in hepatic CT images

    Science.gov (United States)

    Cai, Wenli; Yoshida, Hiroyuki; Harris, Gordon J.

    2007-03-01

    Measurement of the volume of focal liver tumors, called liver tumor volumetry, is indispensable for assessing the growth of tumors and for monitoring the response of tumors to oncology treatments. Traditional edge models, such as the maximum gradient and zero-crossing methods, often fail to detect the accurate boundary of a fuzzy object such as a liver tumor. As a result, computerized volumetry based on these edge models tends to differ from manual segmentation results performed by physicians. In this study, we developed a novel computerized volumetry method for fuzzy objects, called dynamic-thresholding level set (DT level set). An optimal threshold value computed from a histogram tends to shift, relative to the theoretical threshold value obtained from a normal distribution model, toward a smaller region in the histogram. We thus designed a mobile shell structure, called a propagating shell, which is a thick region encompassing the level set front. The optimal threshold calculated from the histogram of the shell drives the level set front toward the boundary of a liver tumor. When the volume ratio between the object and the background in the shell approaches one, the optimal threshold value best fits the theoretical threshold value and the shell stops propagating. Application of the DT level set to 26 hepatic CT cases with 63 biopsy-confirmed hepatocellular carcinomas (HCCs) and metastases showed that the computer-measured volumes were highly correlated with those of tumors measured manually by physicians. Our preliminary results showed that the DT level set was effective and accurate in estimating the volumes of liver tumors detected in hepatic CT images.
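
    The "optimal threshold computed from a histogram" can be realized with any discriminant criterion; as an illustrative stand-in (our choice — the record does not name a specific criterion), the sketch below maximizes the between-class variance (Otsu's criterion) on a synthetic bimodal sample such as a propagating shell might contain:

```python
import numpy as np

def histogram_threshold(values, nbins=128):
    """Threshold maximizing between-class variance (Otsu's criterion)."""
    hist, edges = np.histogram(values, bins=nbins)
    centers = 0.5 * (edges[:-1] + edges[1:])
    w = hist.astype(float) / hist.sum()
    best_t, best_var = centers[0], -1.0
    for k in range(1, nbins):
        w0, w1 = w[:k].sum(), w[k:].sum()
        if w0 == 0.0 or w1 == 0.0:
            continue
        mu0 = (w[:k] * centers[:k]).sum() / w0     # class means below/above cut
        mu1 = (w[k:] * centers[k:]).sum() / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, centers[k]
    return best_t

# Synthetic shell histogram: background voxels around 60 HU-like units,
# tumor voxels around 140 (entirely made-up numbers for illustration).
rng = np.random.default_rng(1)
shell = np.concatenate([rng.normal(60.0, 10.0, 3000),
                        rng.normal(140.0, 12.0, 3000)])
t = histogram_threshold(shell)   # lands between the two modes
```

    In the DT level set, such a threshold would be recomputed from the shell histogram at every propagation step, so the cut adapts as the object/background ratio inside the shell changes.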

  14. An improved empirical dynamic control system model of global mean sea level rise and surface temperature change

    Science.gov (United States)

    Wu, Qing; Luu, Quang-Hung; Tkalich, Pavel; Chen, Ge

    2018-04-01

    Having great impacts on human lives, global warming and the associated sea level rise are believed to be strongly linked to anthropogenic causes. A statistical approach offers a simple and yet conceptually verifiable combination of remotely connected climate variables and indices, including sea level and surface temperature. We propose an improved statistical reconstruction model based on an empirical dynamic control system, taking into account climate variability and deriving parameters from Monte Carlo cross-validation random experiments. For historic data from 1880 to 2001, the model yields higher correlations than other empirical dynamic models. The averaged root mean square errors are reduced in both reconstructed fields, namely the global mean surface temperature (by 24-37%) and the global mean sea level (by 5-25%). Our model is also more robust, as it notably reduces the instability associated with varying initial values. These results suggest that the model not only significantly enhances the global mean reconstructions of temperature and sea level but also has the potential to improve future projections.
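
    Monte Carlo cross-validation means repeated random train/validation splits; a generic sketch (ours — the model and data are placeholders, not the sea-level reconstruction) that uses such splits to choose between two candidate model complexities:

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic "observations": a linear trend plus noise.
x = np.linspace(-1.0, 1.0, 60)
y = 2.0 + 1.5 * x + rng.normal(0.0, 0.3, x.size)

def mc_cv_rmse(degree, n_splits=200, train_frac=0.7):
    """Average validation RMSE of a polynomial fit over random splits."""
    errs = []
    for _ in range(n_splits):
        idx = rng.permutation(x.size)
        ntr = int(train_frac * x.size)
        tr, va = idx[:ntr], idx[ntr:]
        coef = np.polyfit(x[tr], y[tr], degree)
        resid = y[va] - np.polyval(coef, x[va])
        errs.append(np.sqrt(np.mean(resid ** 2)))
    return float(np.mean(errs))

# The parsimonious model generalizes better than an over-parameterized one.
rmse_simple, rmse_complex = mc_cv_rmse(1), mc_cv_rmse(9)
```

    Averaging over many random splits, rather than one fixed hold-out set, is what stabilizes the parameter choice against any particular split.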

  15. Mean platelet volume and red cell distribution width levels in initial evaluation of panic disorder

    Directory of Open Access Journals (Sweden)

    Asoglu M

    2016-09-01

    Full Text Available Mehmet Asoglu,1 Mehmet Aslan,2 Okan Imre,1 Yuksel Kivrak,3 Oznur Akil,1 Emin Savik,4 Hasan Buyukaslan,5 Ulker Fedai,1 Abdurrahman Altındag6 1Department of Psychiatry, Faculty of Medicine, Harran University, Sanliurfa, 2Department of Internal Medicine, Faculty of Medicine, Yuzuncu Yil University, Van, 3Department of Psychiatry, Faculty of Medicine, Kafkas University, Kars, 4Department of Clinical Biochemistry, Faculty of Medicine, Harran University, 5Department of Emergency Medicine, Faculty of Medicine, Harran University, Sanliurfa, 6Department of Psychiatry, Faculty of Medicine, Gaziantep University, Gaziantep, Turkey Background: As the relationship between psychological stress and platelet activation has been widely studied in recent years, activated platelets lead to certain biochemical changes, which occur in the brain in patients with mental disorders. However, data relating to the mean platelet volume (MPV) in patients with panic disorder (PD) are both limited and controversial. Herein, we aimed to evaluate, for the first time, red cell distribution width (RDW) levels combined with MPV levels in patients with PD. Patients and methods: Between January 2012 and June 2015, data of 30 treatment-naïve patients (16 females, 14 males; mean age: 37±10 years; range: 18–59 years) who were diagnosed with PD and 25 age- and sex-matched healthy volunteers (10 females, 15 males; mean age: 36±13 years; range: 18–59 years) (control group) were retrospectively analyzed. The white blood cell count (WBC), MPV, and RDW levels were measured in both groups. Results: The mean WBC, MPV, and RDW levels were 9,173.03±2,400.31/mm3, 8.19±1.13 fl, and 12.47±1.14%, respectively, in the PD group. These values were found to be 7,090.24±1,032.61, 6.85±0.67, and 11.63±0.85, respectively, in the healthy controls. The WBC, MPV, and RDW levels were significantly higher in the patients with PD compared to the healthy controls (P=0.001, P=0.001, and P=0

  16. Examining the Reliability of Interval Level Data Using Root Mean Square Differences and Concordance Correlation Coefficients

    Science.gov (United States)

    Barchard, Kimberly A.

    2012-01-01

    This article introduces new statistics for evaluating score consistency. Psychologists usually use correlations to measure the degree of linear relationship between 2 sets of scores, ignoring differences in means and standard deviations. In medicine, biology, chemistry, and physics, a more stringent criterion is often used: the extent to which…

  17. Glycated albumin is set lower in relation to plasma glucose levels in patients with Cushing's syndrome.

    Science.gov (United States)

    Kitamura, Tetsuhiro; Otsuki, Michio; Tamada, Daisuke; Tabuchi, Yukiko; Mukai, Kosuke; Morita, Shinya; Kasayama, Soji; Shimomura, Iichiro; Koga, Masafumi

    2013-09-23

    Glycated albumin (GA) is an indicator of glycemic control that has some specific characteristics in comparison with HbA1c. Since glucocorticoids (GC) promote protein catabolism, including that of serum albumin, a GC excess state would influence GA levels. We therefore investigated GA levels in patients with Cushing's syndrome. We studied 16 patients with Cushing's syndrome (8 patients had diabetes mellitus and the remaining 8 patients were non-diabetic). Thirty-two patients with type 2 diabetes mellitus and 32 non-diabetic subjects matched for age, sex and BMI were used as controls. In the patients with Cushing's syndrome, GA was significantly correlated with HbA1c, but the regression line shifted downwards as compared with the controls. The GA/HbA1c ratio in the patients with Cushing's syndrome was also significantly lower than in the controls. HbA1c in the non-diabetic patients with Cushing's syndrome was not different from that of the non-diabetic controls, whereas GA was significantly lower. In 7 patients with Cushing's syndrome who performed self-monitoring of blood glucose, the measured HbA1c matched the HbA1c estimated from mean blood glucose, whereas the measured GA was significantly lower than the estimated GA. We clarified that GA is set lower in relation to plasma glucose levels in patients with Cushing's syndrome.
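
    Estimating HbA1c from mean blood glucose, as done for the self-monitoring subgroup above, is commonly based on the ADAG regression; the record does not state which formula was used, so the one below is an assumption on our part:

```python
def estimated_hba1c(mean_glucose_mgdl):
    """ADAG regression: estimated HbA1c (%) from mean blood glucose (mg/dl).
    Assumed formula; the study may have used a different regression."""
    return (mean_glucose_mgdl + 46.7) / 28.7

# e.g. a mean self-monitored glucose of 154 mg/dl corresponds to an
# estimated HbA1c of about 7.0%
```

    Comparing this estimate with the measured value is what reveals the downward shift: in the Cushing's patients the measured GA fell below its glucose-derived estimate while measured HbA1c did not.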

  18. Statistical modelling of monthly mean sea level at coastal tide gauge stations along the Indian subcontinent

    Digital Repository Service at National Institute of Oceanography (India)

    Srinivas, K.; Das, V.K.; DineshKumar, P.K.

    This study investigates the suitability of statistical models for their predictive potential for the monthly mean sea level at different stations along the west and east coasts of the Indian subcontinent. Statistical modelling of the monthly mean...

  19. Improved inhalation technology for setting safe exposure levels for workplace chemicals

    Science.gov (United States)

    Stuart, Bruce O.

    1993-01-01

    Threshold Limit Values recommended as allowable air concentrations of a chemical in the workplace are often based upon a no-observable-effect-level (NOEL) determined by experimental inhalation studies using rodents. A 'safe level' for human exposure must then be estimated by the use of generalized safety factors in attempts to extrapolate from experimental rodents to man. The recent development of chemical-specific physiologically-based toxicokinetics makes use of measured physiological, biochemical, and metabolic parameters to construct a validated model that is able to 'scale-up' rodent response data to predict the behavior of the chemical in man. This procedure is made possible by recent advances in personal computer software and the emergence of appropriate biological data, and provides an analytical tool for much more reliable risk evaluation and airborne chemical exposure level setting for humans.

  20. Computerized detection of multiple sclerosis candidate regions based on a level set method using an artificial neural network

    International Nuclear Information System (INIS)

    Kuwazuru, Junpei; Magome, Taiki; Arimura, Hidetaka; Yamashita, Yasuo; Oki, Masafumi; Toyofuku, Fukai; Kakeda, Shingo; Yamamoto, Daisuke

    2010-01-01

    Yamamoto et al. developed a system for computer-aided detection of multiple sclerosis (MS) candidate regions. In the level set method in their proposed approach, they employed a constant threshold value for the edge indicator function related to a speed function of the level set method. However, it would be more appropriate to adjust the threshold value to each MS candidate region, because the edge magnitudes of MS candidates differ from each other. The purpose of this study was to develop a computerized detection of MS candidate regions in MR images based on a level set method using an artificial neural network (ANN). To adjust the threshold value for the edge indicator function in the level set method to each true positive (TP) and false positive (FP) region, we constructed an ANN. The ANN could provide a suitable threshold value for each candidate region in the proposed level set method so that TP regions can be segmented and FP regions can be removed. Our proposed method detected MS regions at a sensitivity of 82.1% with 0.204 FPs per slice, and the similarity index of MS candidate regions was 0.717 on average. (author)

  1. Validation of Mean Absolute Sea Level of the North Atlantic obtained from Drifter, Altimetry and Wind Data

    Science.gov (United States)

    Maximenko, Nikolai A.

    2003-01-01

    Mean absolute sea level reflects the deviation of the ocean surface from the geoid due to ocean currents and is an important characteristic of the dynamical state of the ocean. Its spatial variations (order of 1 m) are generally much smaller than deviations of the geoid shape from the ellipsoid (order of 100 m), which makes the derivation of the absolute mean sea level a difficult task for gravity and satellite altimetry observations. The technique used by Niiler et al. for computation of the absolute mean sea level in the Kuroshio Extension was later developed into a more general method and applied by Niiler et al. (2003b) to the global ocean. The method is based on consideration of the balance of horizontal momentum.
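
    Away from the equator and frictional boundary layers, the "balance of horizontal momentum" reduces to geostrophy, g ∂η/∂x = f v, so the mean sea level η can be integrated from observed mean currents. A toy integration (our sketch; the uniform 10 cm/s current and mid-latitude Coriolis parameter are illustrative) recovers the order-1-m surface deviation quoted above:

```python
import numpy as np

f = 1.0e-4                            # mid-latitude Coriolis parameter (1/s)
g = 9.81                              # gravitational acceleration (m/s^2)
x = np.linspace(0.0, 1.0e6, 101)      # 1000 km zonal section
v = np.full_like(x, 0.1)              # uniform 0.1 m/s meridional current

# geostrophy: g * d(eta)/dx = f * v  ->  eta(x) = integral of (f/g) v dx
# (trapezoidal rule; eta is anchored to zero at the western end)
deta = (f / g) * 0.5 * (v[1:] + v[:-1]) * np.diff(x)
eta = np.concatenate([[0.0], np.cumsum(deta)])
```

    With drifter-derived mean velocities on a 2-D grid, the same relation (plus its zonal counterpart g ∂η/∂y = -f u) is integrated to map the absolute mean sea level.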

  2. Millennial cycles of mean sea level excited by Earth's orbital variations

    Czech Academy of Sciences Publication Activity Database

    Chapanov, Y.; Ron, Cyril; Vondrák, Jan

    2015-01-01

    Roč. 12, č. 3 (2015), s. 259-266 ISSN 1214-9705 R&D Projects: GA ČR GA13-15943S Institutional support: RVO:67985815 Keywords: millennial cycles * mean sea level * Earth's insolation Subject RIV: BN - Astronomy, Celestial Mechanics, Astrophysics Impact factor: 0.561, year: 2015

  3. Kir2.1 channels set two levels of resting membrane potential with inward rectification.

    Science.gov (United States)

    Chen, Kuihao; Zuo, Dongchuan; Liu, Zheng; Chen, Haijun

    2018-04-01

    Strong inward rectifier K⁺ channels (Kir2.1) mediate background K⁺ currents primarily responsible for maintenance of the resting membrane potential. Multiple types of cells exhibit two levels of resting membrane potential. Kir2.1 and K2P1 currents counterbalance each other, partially accounting for this phenomenon in human cardiomyocytes under subphysiological extracellular K⁺ concentrations or pathological hypokalemic conditions. The mechanism by which Kir2.1 channels contribute to the two levels of resting membrane potential in different types of cells is not well understood. Here we test the hypothesis that Kir2.1 channels set two levels of resting membrane potential with inward rectification. Under hypokalemic conditions, Kir2.1 currents counterbalance HCN2 or HCN4 cation currents in CHO cells that heterologously express both channels, generating N-shaped current-voltage relationships that cross the voltage axis three times and reconstituting two levels of resting membrane potential. Blockade of HCN channels eliminated the phenomenon in K2P1-deficient Kir2.1-expressing human cardiomyocytes derived from induced pluripotent stem cells or in CHO cells expressing both Kir2.1 and HCN2 channels. Weakly inward rectifier Kir4.1 or inward rectification-deficient Kir2.1•E224G mutant channels do not set such two levels of resting membrane potential when co-expressed with HCN2 channels in CHO cells or when overexpressed in human cardiomyocytes derived from induced pluripotent stem cells. These findings demonstrate a common mechanism whereby Kir2.1 channels set two levels of resting membrane potential with inward rectification by balancing inward currents through different cation channels, such as hyperpolarization-activated HCN channels or hypokalemia-induced K2P1 leak channels.
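
    A toy steady-state current model (entirely our construction, with made-up conductances and gating curves, not the authors' formulation) illustrates the mechanism: a strongly rectifying Kir-like current plus a hyperpolarization-activated HCN-like current and a small leak produce an N-shaped current-voltage relationship that crosses the voltage axis three times, the outer two crossings being the two stable resting potentials:

```python
import numpy as np

def i_total(v):
    """Toy whole-cell steady-state current (outward positive); all numbers
    are illustrative, chosen only to produce the N-shaped I-V."""
    g_kir = 1.0 / (1.0 + np.exp((v + 75.0) / 6.0))   # strong inward rectification
    i_kir = g_kir * (v + 95.0)                        # E_K = -95 mV (hypokalemia-like)
    a_hcn = 1.0 / (1.0 + np.exp((v + 70.0) / 8.0))   # opens on hyperpolarization
    i_hcn = 0.1 * a_hcn * (v + 20.0)                  # E_h = -20 mV, inward at rest
    i_leak = 0.02 * (v + 30.0)                        # small background leak
    return i_kir + i_hcn + i_leak

v = np.arange(-110.0, -10.0, 0.25)
i = i_total(v)
crossings = v[:-1][np.sign(i[:-1]) != np.sign(i[1:])]   # zero crossings of I(V)
# crossings[0] (hyperpolarized) and crossings[2] (depolarized) are the two
# stable resting potentials; crossings[1] is the unstable threshold between them
```

    Stability follows from the sign convention: where I crosses zero with positive slope, a perturbation generates a restoring current, which is exactly the situation at the first and third crossings.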

  4. A nonparametric statistical method for determination of a confidence interval for the mean of a set of results obtained in a laboratory intercomparison

    International Nuclear Information System (INIS)

    Veglia, A.

    1981-08-01

    In cases where sets of data are obviously not normally distributed, the application of a nonparametric method for the estimation of a confidence interval for the mean seems more suitable than other methods, because such a method requires few assumptions about the population of data. A two-step statistical method is proposed which can be applied to any set of analytical results: elimination of outliers by a nonparametric method based on Tchebycheff's inequality, and determination of a confidence interval for the mean by a nonparametric method based on the binomial distribution. The method is appropriate only for samples of size n ≥ 10.
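
    The two steps can be sketched as follows (our reconstruction: the cutoff k = 3 and the implementation details are ours; the order-statistic interval strictly targets the median, which coincides with the mean for symmetric populations):

```python
import math

def chebyshev_trim(data, k=3.0):
    """Drop values further than k sample standard deviations from the mean;
    by Tchebycheff's inequality at most 1/k^2 of any distribution lies there."""
    m = sum(data) / len(data)
    s = math.sqrt(sum((x - m) ** 2 for x in data) / (len(data) - 1))
    return [x for x in data if abs(x - m) <= k * s]

def binomial_ci(data, conf=0.95):
    """Distribution-free confidence interval from order statistics: the
    coverage of (X_(r), X_(n+1-r)) follows a Binomial(n, 1/2) count of
    observations falling below the center of the distribution."""
    xs, n = sorted(data), len(data)
    alpha = 1.0 - conf
    r = 1
    while 2.0 * sum(math.comb(n, i) for i in range(r)) / 2.0 ** n <= alpha:
        r += 1
    r -= 1
    if r < 1:
        raise ValueError("need a larger sample for this confidence level")
    return xs[r - 1], xs[n - r]

# Example: an intercomparison with one wild result; trimming removes it,
# then the binomial interval is read off the remaining order statistics.
results = list(range(1, 11)) + [10000]
trimmed = chebyshev_trim(results)
ci = binomial_ci(trimmed)
```

    For n = 10 and 95% confidence the interval is (X_(2), X_(9)), whose actual coverage (about 97.9%) is the smallest attainable value at or above the nominal level.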

  5. Level of health care and services in a tertiary health setting in Nigeria

    African Journals Online (AJOL)

    Level of health care and services in a tertiary health setting in Nigeria. ... Background: There is a growing awareness and demand for quality health care across the world; hence the ... Doctors and nurses formed 64.3% of the study population.

  6. Numerical simulation of interface movement in gas-liquid two-phase flows with Level Set method

    International Nuclear Information System (INIS)

    Li Huixiong; Chinese Academy of Sciences, Beijing; Deng Sheng; Chen Tingkuan; Zhao Jianfu; Wang Fei

    2005-01-01

    Numerical simulation of gas-liquid two-phase flow and heat transfer has long been an attractive research topic, but it remains difficult owing to the inherent complexity of gas-liquid two-phase flow arising from moving interfaces with topology changes. This paper reports the effort and the latest advances made by the authors, with special emphasis on methods for computing solutions to the advection equation of the Level Set function, which is utilized to capture the moving interfaces in gas-liquid two-phase flows. Three different schemes, i.e. a simple finite difference scheme, the Superbee-TVD scheme and the 5th-order WENO scheme, in combination with the Runge-Kutta method, are respectively applied to solve the advection equation of the Level Set. A numerical procedure based on the well-verified SIMPLER method is employed to numerically solve the momentum equations of the two-phase flow. The three schemes are employed to simulate the movement of four typical interfaces under five typical flow conditions. Analysis of the numerical results shows that the 5th-order WENO scheme and the Superbee-TVD scheme are much better than the simple finite difference scheme, and the 5th-order WENO scheme is the best for computing solutions to the advection equation of the Level Set. The 5th-order WENO scheme will be employed as the main scheme in future numerical studies of gas-liquid two-phase flows. (authors)
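
    The gap between the simple (donor-cell) finite difference scheme and the Superbee-TVD scheme can be reproduced with a one-dimensional sketch (ours, not the authors' code): both advect a step-like Level Set profile, but the limited scheme keeps the front sharp while remaining total-variation bounded. Grid, Courant number and step count are illustrative.

```python
import numpy as np

def superbee(r):
    """Superbee flux limiter."""
    return np.maximum(0.0, np.maximum(np.minimum(2.0 * r, 1.0),
                                      np.minimum(r, 2.0)))

def advect(phi, c, nsteps, limiter):
    """Periodic flux-limited advection of phi at Courant number c (u > 0)."""
    for _ in range(nsteps):
        dphi = np.roll(phi, -1) - phi                      # phi_{i+1} - phi_i
        upw = phi - np.roll(phi, 1)                        # phi_i - phi_{i-1}
        r = upw / np.where(np.abs(dphi) > 1e-12, dphi, 1e-12)
        flux = phi + 0.5 * (1.0 - c) * limiter(r) * dphi   # F_{i+1/2} / u
        phi = phi - c * (flux - np.roll(flux, 1))
    return phi

n = 200
x = np.arange(n) / n
phi0 = np.where(x < 0.25, 1.0, 0.0)                          # step-like profile
donor = advect(phi0, 0.5, 100, lambda r: np.zeros_like(r))   # simple upwind
tvd = advect(phi0, 0.5, 100, superbee)                       # Superbee-TVD
```

    After 100 steps the donor-cell front is smeared over many cells, whereas the Superbee solution keeps the jump compressed within a few cells without creating new extrema; a WENO scheme would sharpen the smooth regions further still.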

  7. Setting ozone critical levels for protecting horticultural Mediterranean crops: Case study of tomato

    International Nuclear Information System (INIS)

    González-Fernández, I.; Calvo, E.; Gerosa, G.; Bermejo, V.; Marzuoli, R.; Calatayud, V.; Alonso, R.

    2014-01-01

    Seven experiments carried out in Italy and Spain have been used to parameterise a stomatal conductance model and establish exposure- and dose-response relationships for the yield and quality of tomato, with the main goal of setting O₃ critical levels (CLe). CLe, with confidence intervals between brackets, were set at an accumulated hourly O₃ exposure over 40 nl l⁻¹ of AOT40 = 8.4 (1.2, 15.6) ppm h and a phytotoxic ozone dose above a threshold of 6 nmol m⁻² s⁻¹ of POD6 = 2.7 (0.8, 4.6) mmol m⁻² for yield, and AOT40 = 18.7 (8.5, 28.8) ppm h and POD6 = 4.1 (2.0, 6.2) mmol m⁻² for quality, both indices performing equally well. CLe confidence intervals provide information on the quality of the dataset and should be included in future calculations of O₃ CLe for improving current methodologies. These CLe, derived for sensitive tomato cultivars, should not be applied for quantifying O₃-induced losses, at the risk of making important overestimations of the economic losses associated with O₃ pollution. -- Highlights: • Seven independent experiments from Italy and Spain were analysed. • O₃ critical levels are proposed for the protection of summer horticultural crops. • Exposure- and flux-based O₃ indices performed equally well. • Confidence intervals of the new O₃ critical levels are calculated. • A new method to estimate the degree of risk of O₃ damage is proposed. -- Critical levels for tomato yield were set at AOT40 = 8.4 ppm h and POD6 = 2.7 mmol m⁻², and confidence intervals should be used for improving O₃ risk assessment
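
    Operationally the two indices are simple accumulations, sketched below (our code; the thresholds follow the abstract — 40 nl l⁻¹ for AOT40 and 6 nmol m⁻² s⁻¹ for POD6 — while daylight-hour and growing-season screening are omitted for brevity):

```python
def aot40(hourly_conc_ppb):
    """Accumulated O3 exposure over 40 nl/l (= ppb), in ppm h."""
    return sum(max(0.0, c - 40.0) for c in hourly_conc_ppb) / 1000.0

def pod_y(hourly_flux_nmol_m2_s, y=6.0):
    """Phytotoxic O3 dose above threshold y nmol m-2 s-1, in mmol m-2
    (each hourly flux sample is held for 3600 s)."""
    return sum(max(0.0, f - y) * 3600.0 for f in hourly_flux_nmol_m2_s) / 1.0e6

# e.g. three hours at 30, 50 and 90 ppb contribute 0.06 ppm h of AOT40,
# and two hours of stomatal flux at 5 and 8 nmol/m2/s give a POD6 of 0.0072 mmol/m2
```

    AOT40 needs only concentration measurements, whereas PODy requires the parameterised stomatal conductance model to convert concentrations into fluxes, which is why the two indices can disagree in their risk rankings.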

  8. County-level poverty is equally associated with unmet health care needs in rural and urban settings.

    Science.gov (United States)

    Peterson, Lars E; Litaker, David G

    2010-01-01

    Regional poverty is associated with reduced access to health care. Whether this relationship is equally strong in both rural and urban settings, or is affected by the contextual and individual-level characteristics that distinguish these areas, is unclear. We compare the association of regional poverty with self-reported unmet need, a marker of health care access, by rural/urban setting. Multilevel, cross-sectional analysis of a state-representative sample of 39,953 adults stratified by rural/urban status, linked at the county level to data describing contextual characteristics. Weighted random intercept models examined the independent association of regional poverty with unmet needs, controlling for a range of contextual and individual-level characteristics. The unadjusted association between regional poverty levels and unmet needs was similar in both rural (OR = 1.06 [95% CI, 1.04-1.08]) and urban (OR = 1.03 [1.02-1.05]) settings. Adjusting for other contextual characteristics increased the size of the association in both rural (OR = 1.11 [1.04-1.19]) and urban (OR = 1.11 [1.05-1.18]) settings. Further adjustment for individual characteristics had little additional effect in rural (OR = 1.10 [1.00-1.20]) or urban (OR = 1.11 [1.01-1.22]) settings. To better meet the health care needs of all Americans, health care systems in areas with high regional poverty should acknowledge the relationship between poverty and unmet health care needs. Investments, or other interventions, that reduce regional poverty may be useful strategies for improving health through better access to health care.

  9. Scope of physician procedures independently billed by mid-level providers in the office setting.

    Science.gov (United States)

    Coldiron, Brett; Ratnarathorn, Mondhipa

    2014-11-01

    Mid-level providers (nurse practitioners and physician assistants) were originally envisioned to provide primary care services in underserved areas. This study details the current scope of independent procedural billing to Medicare of difficult, invasive, and surgical procedures by medical mid-level providers. To understand the scope of independent billing to Medicare for procedures performed by mid-level providers in an outpatient office setting for a calendar year. Analyses of the 2012 Medicare Physician/Supplier Procedure Summary Master File, which reflects fee-for-service claims that were paid by Medicare, for Current Procedural Terminology procedures independently billed by mid-level providers. Outpatient office setting among health care providers. The scope of independent billing to Medicare for procedures performed by mid-level providers. In 2012, nurse practitioners and physician assistants billed independently for more than 4 million procedures at our cutoff of 5000 paid claims per procedure. Most (54.8%) of these procedures were performed in the specialty area of dermatology. The findings of this study are relevant to safety and quality of care. Recently, the shortage of primary care clinicians has prompted discussion of widening the scope of practice for mid-level providers. It would be prudent to temper widening the scope of practice of mid-level providers by recognizing that mid-level providers are not solely limited to primary care, and may involve procedures for which they may not have formal training.

  10. The meaning of patient-nurse interaction for older women in healthcare settings: A Qualitative Descriptive Study.

    Science.gov (United States)

    Mize, Darcy

    2018-03-01

    The purpose of this study was to explore the meaning of patient-nurse interaction for older women receiving care in healthcare settings. Older women are often overlooked or misunderstood by the nurses caring for them. Some research exists on nurses' perception of their interaction with patients, yet few studies have described the meaning of such interaction from the patients' perspective. This was a pilot study using qualitative description as a methodology. Data were filtered through a lens of critical feminist theory to interpret interactions taking place in healthcare settings that are often characterised by paternalism. Seven women between the ages of 66 and 81 were interviewed using a semi-structured guide. Participants had a distinctive perspective on the experience of caring. Their expressions include stories of being cared for themselves by nurses as well as recollections of being the one-caring for family members. In these combined stories, the contrast between the nurses who held caring in primacy and those who were distinctly uncaring sheds light on the importance of cultivating a moral ideal of caring and respect for personhood. A population of older women who potentially face disabling conditions must rely on direct, meaningful interaction with nurses to successfully navigate the healthcare system. The findings suggest that these women did not have consistent access to such interaction. The gathering and interpretation of new narratives about patient-nurse interaction for older women could lead to a deeper understanding of power and civility as it impacts a caring relationship. Further research using a theoretical lens of critical feminism has implications for improving healthcare delivery for older women worldwide. © 2017 John Wiley & Sons Ltd.

  11. 23 CFR Appendix A to Part 772 - National Reference Energy Mean Emission Levels as a Function of Speed

    Science.gov (United States)

    2010-04-01

    23 Highways (2010-04-01): Appendix A to Part 772 - National Reference Energy Mean Emission Levels as a Function of Speed. FEDERAL HIGHWAY ADMINISTRATION, DEPARTMENT OF... NOISE, Pt. 772, App. A.

  12. Fluoroscopy-guided insertion of nasojejunal tubes in children - setting local diagnostic reference levels

    International Nuclear Information System (INIS)

    Vitta, Lavanya; Raghavan, Ashok; Sprigg, Alan; Morrell, Rachel

    2009-01-01

    Little is known about the radiation burden from fluoroscopy-guided insertions of nasojejunal tubes (NJTs) in children. There are no recommended or published standards of diagnostic reference levels (DRLs) available. To establish reference dose area product (DAP) levels for the fluoroscopy-guided insertion of nasojejunal tubes as a basis for setting DRLs for children. In addition, we wanted to assess our local practice and determine the success and complication rates associated with this procedure. Children who had NJT insertion procedures were identified retrospectively from the fluoroscopy database. The age of the child at the time of the procedure, DAP, screening time, outcome of the procedure, and any complications were recorded for each procedure. As the radiation dose depends on the size of the child, the children were assigned to three different age groups. The sample size, mean, median and third-quartile DAPs were calculated for each group. The third-quartile values were used to establish the DRLs. Of 186 procedures performed, 172 were successful on the first attempt. These were performed in a total of 43 children with 60% having multiple insertions over time. The third-quartile DAPs were as follows for each age group: 0-12 months, 2.6 cGy·cm²; 1-7 years, 2.45 cGy·cm²; >8 years, 14.6 cGy·cm². High DAP readings were obtained in the 0-12 months (n = 4) and >8 years (n = 2) age groups. No immediate complications were recorded. Fluoroscopy-guided insertion of NJTs is a highly successful procedure in a selected population of children and is associated with a low complication rate. The radiation dose per procedure is relatively low. (orig.)
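The DRL convention used above, taking the third quartile of the DAP distribution in each age group, can be sketched as follows. The DAP values are hypothetical placeholders, not the study's data.

```python
import statistics

def third_quartile(values):
    """Return the 75th percentile (Q3), the value conventionally used
    to set a local diagnostic reference level (DRL)."""
    # statistics.quantiles with n=4 returns the cut points [Q1, Q2, Q3]
    return statistics.quantiles(values, n=4, method="inclusive")[2]

# Hypothetical DAP readings (cGy·cm²) for one age group -- illustrative only.
dap_0_12_months = [0.8, 1.1, 1.4, 1.9, 2.2, 2.6, 3.0, 5.2]

drl = third_quartile(dap_0_12_months)
```

Repeating the computation per age group reproduces the study's procedure for deriving age-stratified reference levels.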

  13. Fuzzy C-means method for clustering microarray data.

    Science.gov (United States)

    Dembélé, Doulaye; Kastner, Philippe

    2003-05-22

    Clustering analysis of data from DNA microarray hybridization studies is essential for identifying biologically relevant groups of genes. Partitional clustering methods such as K-means or self-organizing maps assign each gene to a single cluster. However, these methods do not provide information about the influence of a given gene on the overall shape of clusters. Here we apply a fuzzy partitioning method, Fuzzy C-means (FCM), to attribute cluster membership values to genes. A major problem in applying the FCM method for clustering microarray data is the choice of the fuzziness parameter m. We show that the commonly used value m = 2 is not appropriate for some data sets, and that optimal values for m vary widely from one data set to another. We propose an empirical method, based on the distribution of distances between genes in a given data set, to determine an adequate value for m. By setting threshold levels for the membership values, genes which are tightly associated with a given cluster can be selected. Using a yeast cell cycle data set as an example, we show that this selection increases the overall biological significance of the genes within the cluster. Supplementary text and Matlab functions are available at http://www-igbmc.u-strasbg.fr/fcm/
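As a rough illustration of the algorithm discussed above, here is a minimal fuzzy C-means in NumPy with an explicit fuzziness parameter m. The toy two-cluster data are invented for the example; the abstract's empirical method for choosing m is not reproduced.

```python
import numpy as np

def fcm(X, c, m=2.0, n_iter=100, seed=0):
    """Minimal fuzzy C-means. Returns (centers, U), where U[i, k] is the
    membership of sample i in cluster k; each row of U sums to 1.
    The fuzziness parameter m > 1 controls how soft the partition is."""
    rng = np.random.default_rng(seed)
    U = rng.random((X.shape[0], c))
    U /= U.sum(axis=1, keepdims=True)              # valid fuzzy partition
    for _ in range(n_iter):
        W = U ** m                                 # fuzzified memberships
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        d = np.maximum(d, 1e-12)                   # guard against zero distance
        # standard FCM update: u_ik = 1 / sum_j (d_ik / d_ij)^(2/(m-1))
        U = 1.0 / ((d[:, :, None] / d[:, None, :]) ** (2.0 / (m - 1.0))).sum(axis=2)
    return centers, U

# Two well-separated toy "expression profiles" (not microarray data):
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.1, (20, 2)), rng.normal(5.0, 0.1, (20, 2))])
centers, U = fcm(X, c=2, m=2.0)
labels = U.argmax(axis=1)
```

Thresholding the rows of U (e.g. keeping genes whose maximum membership exceeds 0.9) gives the tight-cluster selection described in the abstract.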

  14. Thermogram breast cancer prediction approach based on Neutrosophic sets and fuzzy c-means algorithm.

    Science.gov (United States)

    Gaber, Tarek; Ismail, Gehad; Anter, Ahmed; Soliman, Mona; Ali, Mona; Semary, Noura; Hassanien, Aboul Ella; Snasel, Vaclav

    2015-08-01

    Early detection of breast cancer enables many women to survive. In this paper, a CAD system that classifies breast thermograms as normal or abnormal is proposed. This approach consists of two main phases: automatic segmentation and classification. For the former phase, an improved segmentation approach based on both Neutrosophic sets (NS) and an optimized Fast Fuzzy c-means (F-FCM) algorithm was proposed. Also, a post-segmentation process was suggested to segment the breast parenchyma (i.e. the ROI) from thermogram images. For the classification, different kernel functions of the Support Vector Machine (SVM) were used to classify breast parenchyma into normal or abnormal cases. Using a benchmark database, the proposed CAD system was evaluated based on precision, recall, and accuracy, as well as by comparison with related work. The experimental results showed that our system is a very promising step toward automatic diagnosis of breast cancer using thermograms, as the accuracy reached 100%.

  15. Application of physiologically based pharmacokinetic modeling in setting acute exposure guideline levels for methylene chloride.

    NARCIS (Netherlands)

    Bos, Peter Martinus Jozef; Zeilmaker, Marco Jacob; Eijkeren, Jan Cornelis Henri van

    2006-01-01

    Acute exposure guideline levels (AEGLs) are derived to protect the human population from adverse health effects in case of single exposure due to an accidental release of chemicals into the atmosphere. AEGLs are set at three different levels of increasing toxicity for exposure durations ranging from

  16. Single-step reinitialization and extending algorithms for level-set based multi-phase flow simulations

    Science.gov (United States)

    Fu, Lin; Hu, Xiangyu Y.; Adams, Nikolaus A.

    2017-12-01

    We propose efficient single-step formulations for reinitialization and extending algorithms, which are critical components of level-set based interface-tracking methods. The level-set field is reinitialized with a single-step (non-iterative) "forward tracing" algorithm. A minimum set of cells is defined that describes the interface, and reinitialization employs only data from these cells. Fluid states are extrapolated or extended across the interface by a single-step "backward tracing" algorithm. Both algorithms, which are motivated by analogy to ray-tracing, avoid multiple block-boundary data exchanges that are inevitable for iterative reinitialization and extending approaches within a parallel-computing environment. The single-step algorithms are combined with a multi-resolution conservative sharp-interface method and validated by a wide range of benchmark test cases. We demonstrate that the proposed reinitialization method achieves second-order accuracy in conserving the volume of each phase. The interface location is invariant to reapplication of the single-step reinitialization. Generally, we observe smaller absolute errors than for standard iterative reinitialization on the same grid. The computational efficiency is higher than for the standard and typical high-order iterative reinitialization methods. We observe a 2- to 6-times efficiency improvement over the standard method for serial execution. The proposed single-step extending algorithm, which is commonly employed for assigning data to ghost cells with ghost-fluid or conservative interface interaction methods, shows about a 10-times efficiency improvement over the standard method while maintaining the same accuracy. Despite their simplicity, the proposed algorithms offer an efficient and robust alternative to iterative reinitialization and extending methods for level-set based multi-phase simulations.
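For intuition, a brute-force 1D stand-in for reinitialization (not the paper's forward-tracing algorithm) shows the two properties claimed above: the zero crossing is preserved, and reapplying the operation leaves the field unchanged.

```python
import numpy as np

def reinitialize_1d(phi, dx=1.0):
    """Direct (non-iterative) reinitialization of a 1D level-set field:
    locate the zero crossings by linear interpolation between cells of
    opposite sign, then reset every cell to its signed distance to the
    nearest crossing. A brute-force illustration, O(n * crossings)."""
    x = np.arange(len(phi)) * dx
    crossings = []
    for i in range(len(phi) - 1):
        if phi[i] == 0.0:
            crossings.append(x[i])
        elif phi[i] * phi[i + 1] < 0.0:
            # linear interpolation of the zero location between cells i, i+1
            t = phi[i] / (phi[i] - phi[i + 1])
            crossings.append(x[i] + t * dx)
    if phi[-1] == 0.0:
        crossings.append(x[-1])
    crossings = np.array(crossings)
    dist = np.abs(x[:, None] - crossings[None, :]).min(axis=1)
    return np.sign(phi) * dist

# A distorted level-set field whose zero crossing sits at x = 3.5:
phi = np.array([-10.0, -7.0, -2.0, -0.5, 0.5, 4.0, 9.0])
phi_new = reinitialize_1d(phi)
```

The restored field is a signed-distance function, and a second application is the identity, mirroring the invariance property reported in the abstract.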

  17. Symbolic meanings of sex in relationships: Developing the Meanings of Sexual Behavior Inventory.

    Science.gov (United States)

    Shaw, Amanda M; Rogge, Ronald D

    2017-10-01

    Consistent with symbolic interactionism and motivation research, the study explored the meanings of sexual behavior in romantic relationships in a sample of 3,003 online respondents. Starting with a pool of 104 respondent-generated items, Exploratory and Confirmatory Factor analyses in separate sample halves revealed a stable set of 9 dimensions within that item pool that formed 2 higher-order factors representing positive (to share pleasure, to bond, to de-stress, to energize the relationship, to learn more about each other) and negative (to manage conflict, as an incentive, to express anger, and to control partner) meanings of sexual behavior within relationships. Item Response Theory analyses helped select the 4-5 most effective items of each dimension for inclusion in the Meanings of Sexual Behavior Inventory (MoSBI). Generalizability analyses suggested that the MoSBI subscale scores continued to show high levels of internal consistency across a broad range of demographic subgroups (e.g., racial/ethnic groups, gay and lesbian respondents, and various levels of education). The MoSBI subscales demonstrated moderate and distinct patterns of association with a range of conceptual boundary scales (e.g., relationship and sexual satisfaction, emotional support, negative conflict behavior, and frequency of sexual behavior) suggesting that these scales represent novel relationship processes. Consistent with this, analyses in the 862 respondents completing a 2-month follow-up assessment suggested that the meanings of sexual behavior predicted residual change in relationship satisfaction, even after controlling for frequency of sexual behavior within the relationships. Implications are discussed. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  18. Statistical analysis of the acceleration of Baltic mean sea-level rise, 1900-2012

    Directory of Open Access Journals (Sweden)

    Birgit Hünicke

    2016-07-01

    We analyse annual mean sea-level records from tide-gauges located in the Baltic and parts of the North Sea with the aim of detecting an acceleration of sea-level rise over the 20th and 21st centuries. The acceleration is estimated as (1) a fit to a polynomial of order two in time, (2) a long-term linear increase in the rates computed over gliding overlapping decadal time segments, and (3) a long-term increase of the annual increments of sea level. The estimation methods (1) and (2) prove to be more powerful in detecting acceleration when tested with sea-level records produced in global climate model simulations. These methods applied to the Baltic-Sea tide-gauges are, however, not powerful enough to detect a significant acceleration in most individual records, although most estimated accelerations are positive. This lack of detection of statistically significant acceleration at the individual tide-gauge level can be due to the high level of local noise and not necessarily to the absence of acceleration. The estimated accelerations tend to be stronger in the north and east of the Baltic Sea. Two hypotheses to explain this spatial pattern have been explored. One is that this pattern reflects the slow-down of the Glacial Isostatic Adjustment. However, a simple estimation of this effect suggests that this slow-down cannot explain the estimated acceleration. The second hypothesis is related to the diminishing sea-ice cover over the 20th century. The melting of less saline and colder sea-ice can lead to changes in sea level. Also, the melting of sea-ice can reduce the number of missing values in the tide-gauge records in winter, potentially influencing the estimated trends and acceleration of seasonal mean sea level. This hypothesis cannot be ascertained either, since the spatial patterns of acceleration computed for winter and summer separately are very similar.
The all-station-average record displays an
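Estimation method (1), a polynomial fit of order two in time, amounts to reading the acceleration off the quadratic coefficient. A sketch on a synthetic record (illustrative values, not Baltic data):

```python
import numpy as np

# Synthetic annual mean sea-level record over the study period 1900-2012:
# a 1.5 mm/yr trend plus a known acceleration of 0.01 mm/yr^2, no noise.
years = np.arange(1900, 2013)
t = years - years[0]
true_accel = 0.01                       # mm/yr^2
sea_level = 1.5 * t + 0.5 * true_accel * t ** 2

# Order-two polynomial fit; the acceleration is twice the quadratic
# coefficient (the second time derivative of the fitted curve).
a2, a1, a0 = np.polyfit(t, sea_level, deg=2)
accel_hat = 2.0 * a2
```

With real tide-gauge data the same fit is applied per station, and the significance of `accel_hat` is then judged against the local noise level.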

  19. A thick level set interface model for simulating fatigue-driven delamination in composites

    NARCIS (Netherlands)

    Latifi, M.; Van der Meer, F.P.; Sluys, L.J.

    2015-01-01

    This paper presents a new damage model for simulating fatigue-driven delamination in composite laminates. This model is developed based on the Thick Level Set approach (TLS) and provides a favorable link between damage mechanics and fracture mechanics through the non-local evaluation of the energy

  20. An Optimized, Grid Independent, Narrow Band Data Structure for High Resolution Level Sets

    DEFF Research Database (Denmark)

    Nielsen, Michael Bang; Museth, Ken

    2004-01-01

    enforced by the convex boundaries of an underlying Cartesian computational grid. Here we present a novel, very memory-efficient narrow band data structure, dubbed the Sparse Grid, that enables the representation of grid independent high resolution level sets. The key features of our new data structure are...

  1. The Two-Level Theory of verb meaning: An approach to integrating the semantics of action with the mirror neuron system.

    Science.gov (United States)

    Kemmerer, David; Gonzalez-Castillo, Javier

    2010-01-01

    Verbs have two separate levels of meaning. One level reflects the uniqueness of every verb and is called the "root". The other level consists of a more austere representation that is shared by all the verbs in a given class and is called the "event structure template". We explore the following hypotheses about how, with specific reference to the motor features of action verbs, these two distinct levels of semantic representation might correspond to two distinct levels of the mirror neuron system. Hypothesis 1: Root-level motor features of verb meaning are partially subserved by somatotopically mapped mirror neurons in the left primary motor and/or premotor cortices. Hypothesis 2: Template-level motor features of verb meaning are partially subserved by representationally more schematic mirror neurons in Brodmann area 44 of the left inferior frontal gyrus. Evidence has been accumulating in support of the general neuroanatomical claims made by these two hypotheses, namely, that each level of verb meaning is associated with the designated cortical areas. However, as yet no studies have satisfied all the criteria necessary to support the more specific neurobiological claims made by the two hypotheses, namely, that each level of verb meaning is associated with mirror neurons in the pertinent brain regions. This would require demonstrating that within those regions the same neuronal populations are engaged during (a) the linguistic processing of particular motor features of verb meaning, (b) the execution of actions with the corresponding motor features, and (c) the observation of actions with the corresponding motor features. 2008 Elsevier Inc. All rights reserved.

  2. A mass conserving level set method for detailed numerical simulation of liquid atomization

    Energy Technology Data Exchange (ETDEWEB)

    Luo, Kun; Shao, Changxiao [State Key Laboratory of Clean Energy Utilization, Zhejiang University, Hangzhou 310027 (China); Yang, Yue [State Key Laboratory of Turbulence and Complex Systems, Peking University, Beijing 100871 (China); Fan, Jianren, E-mail: fanjr@zju.edu.cn [State Key Laboratory of Clean Energy Utilization, Zhejiang University, Hangzhou 310027 (China)

    2015-10-01

    An improved mass conserving level set method for detailed numerical simulations of liquid atomization is developed to address the issue of mass loss in the existing level set method. This method introduces a mass remedy procedure based on the local curvature at the interface, and in principle, can ensure the absolute mass conservation of the liquid phase in the computational domain. Three benchmark cases, including Zalesak's disk, a drop deforming in a vortex field, and the binary drop head-on collision, are simulated to validate the present method, and excellent agreement with exact solutions or experimental results is achieved. It is shown that the present method is able to capture the complex interface with second-order accuracy and negligible additional computational cost. The present method is then applied to study more complex flows, such as a drop impacting on a liquid film and the swirling liquid sheet atomization, which again demonstrates the advantages of mass conservation and the capability to represent the interface accurately.
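The bookkeeping behind such a mass remedy can be illustrated with a smoothed-Heaviside volume integral. The curvature-weighted local correction of the paper is replaced here by the simplest variant, a uniform shift of the level-set field found by bisection; the geometry and numbers are invented for the example.

```python
import numpy as np

def heaviside(phi, eps):
    """Smoothed Heaviside commonly used with level sets."""
    H = 0.5 * (1.0 + phi / eps + np.sin(np.pi * phi / eps) / np.pi)
    return np.where(phi < -eps, 0.0, np.where(phi > eps, 1.0, H))

def liquid_area(phi, dx, eps):
    """Area of the liquid phase (phi > 0) via the smoothed Heaviside."""
    return heaviside(phi, eps).sum() * dx * dx

def restore_mass(phi, target, dx, eps, n_iter=100):
    """Global mass remedy: shift phi by a constant, found by bisection, so
    the enclosed area matches the target. (The paper distributes the
    correction locally, weighted by interface curvature; the uniform
    shift is the simplest conservative variant.)"""
    lo, hi = -1.0, 1.0
    for _ in range(n_iter):
        mid = 0.5 * (lo + hi)
        if liquid_area(phi + mid, dx, eps) < target:
            lo = mid          # a larger shift grows the liquid region
        else:
            hi = mid
    return phi + 0.5 * (lo + hi)

# Circular drop of radius 0.3 in the unit square.
n = 128
dx = 1.0 / n
eps = 1.5 * dx
y, x = np.mgrid[0:n, 0:n] * dx
phi = 0.3 - np.sqrt((x - 0.5) ** 2 + (y - 0.5) ** 2)   # > 0 inside the drop
target = liquid_area(phi, dx, eps)
eroded = phi - 0.02            # mimic the mass loss of standard level sets
fixed = restore_mass(eroded, target, dx, eps)
```

After the correction the drop's area matches the target again, which is the conservation property the paper's (local) remedy enforces.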

  3. Shape Reconstruction of Thin Electromagnetic Inclusions via Boundary Measurements: Level-Set Method Combined with the Topological Derivative

    Directory of Open Access Journals (Sweden)

    Won-Kwang Park

    2013-01-01

    An inverse problem for reconstructing arbitrary-shaped thin penetrable electromagnetic inclusions concealed in a homogeneous material is considered in this paper. For this purpose, the level-set evolution method is adopted. The topological derivative concept is incorporated in order to evaluate the evolution speed of the level-set functions. The results of the corresponding numerical simulations with and without noise are presented in this paper.

  4. Robust space-time extraction of ventricular surface evolution using multiphase level sets

    Science.gov (United States)

    Drapaca, Corina S.; Cardenas, Valerie; Studholme, Colin

    2004-05-01

    This paper focuses on the problem of accurately extracting the CSF-tissue boundary, particularly around the ventricular surface, from serial structural MRI of the brain acquired in imaging studies of aging and dementia. This is a challenging problem because of the common occurrence of peri-ventricular lesions which locally alter the appearance of white matter. We examine a level set approach which evolves a four-dimensional description of the ventricular surface over time. This has the advantage of allowing constraints on the contour in the temporal dimension, improving the consistency of the extracted object over time. We follow the approach proposed by Chan and Vese, which is based on the Mumford-Shah model and implemented using the Osher-Sethian level set method. We have extended this to the four-dimensional case to propagate a 4D contour toward the tissue boundaries through the evolution of a 5D implicit function. For convergence we use region-based information provided by the image rather than the gradient of the image. This is adapted to allow intensity contrast changes between time frames in the MRI sequence. Results on time sequences of 3D brain MR images are presented and discussed.
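The region-based information mentioned above is, in the Chan-Vese model, the competition between the mean intensities inside and outside the contour. A toy 2D sketch using only that region term (no curvature regularisation and no 4D temporal coupling, both of which the paper uses):

```python
import numpy as np

def chan_vese_region(img, phi, n_iter=400, dt=0.5):
    """Toy Chan-Vese-style evolution driven only by the region term:
    phi_t = -(I - c1)^2 + (I - c2)^2, where c1 and c2 are the mean
    intensities inside (phi > 0) and outside the contour. The length
    (curvature) term of the full model is omitted for brevity."""
    for _ in range(n_iter):
        inside = phi > 0
        c1 = img[inside].mean() if inside.any() else 0.0
        c2 = img[~inside].mean() if (~inside).any() else 0.0
        phi = phi + dt * (-(img - c1) ** 2 + (img - c2) ** 2)
    return phi

# Synthetic image: a bright 8x8 square on a dark background.
img = np.zeros((32, 32))
img[12:20, 12:20] = 1.0
# Initial contour: a circle of radius 10 around the square's centre.
y, x = np.mgrid[0:32, 0:32]
phi0 = 10.0 - np.sqrt((x - 15.5) ** 2 + (y - 15.5) ** 2)
phi = chan_vese_region(img, phi0)
seg = phi > 0
```

Because the driving force uses region statistics rather than image gradients, the same mechanism works where edges are weak, which is why the paper prefers it for lesion-afflicted ventricular boundaries.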

  5. Image-guided regularization level set evolution for MR image segmentation and bias field correction.

    Science.gov (United States)

    Wang, Lingfeng; Pan, Chunhong

    2014-01-01

    Magnetic resonance (MR) image segmentation is a crucial step in surgical and treatment planning. In this paper, we propose a level-set-based segmentation method for MR images with intensity inhomogeneous problem. To tackle the initialization sensitivity problem, we propose a new image-guided regularization to restrict the level set function. The maximum a posteriori inference is adopted to unify segmentation and bias field correction within a single framework. Under this framework, both the contour prior and the bias field prior are fully used. As a result, the image intensity inhomogeneity can be well solved. Extensive experiments are provided to evaluate the proposed method, showing significant improvements in both segmentation and bias field correction accuracies as compared with other state-of-the-art approaches. Copyright © 2014 Elsevier Inc. All rights reserved.

  6. Application of the level set method for multi-phase flow computation in fusion engineering

    International Nuclear Information System (INIS)

    Luo, X-Y.; Ni, M-J.; Ying, A.; Abdou, M.

    2006-01-01

    Numerical simulation of multi-phase flow is essential to evaluate the feasibility of a liquid protection scheme for the power plant chamber. The level set method is one of the best methods for computing and analyzing the motion of the interface in multi-phase flow. This paper presents a general formulation of the second-order projection method combined with the level set method to simulate unsteady incompressible multi-phase flow, with or without phase change, encountered in fusion science and engineering. A third-order ENO scheme and a second-order semi-implicit Crank-Nicolson scheme are used to update the convective and diffusive terms. The numerical results show that this method can handle complex deformation of the interface; the effect of liquid-vapor phase change will be included in future work

  7. Improvement of global and regional mean sea level derived from satellite altimetry multi missions

    Science.gov (United States)

    Ablain, M.; Faugere, Y.; Larnicol, G.; Picot, N.; Cazenave, A.; Benveniste, J.

    2012-04-01

    With the satellite altimetry missions, the global mean sea level (GMSL) has been calculated on a continual basis since January 1993. 'Verification' phases, during which the satellites follow each other in close succession (TOPEX/Poseidon and Jason-1, then Jason-1 and Jason-2), help to link up these different missions by precisely determining any bias between them. Envisat, ERS-1 and ERS-2 are also used, after being adjusted on these reference missions, in order to compute the mean sea level at high latitudes (higher than 66°N and S), and also to improve spatial resolution by combining all these missions together. The global mean sea level (MSL) deduced from TOPEX/Poseidon, Jason-1 and Jason-2 shows a global rate of 3.2 mm/yr from 1993 to 2010, applying the post-glacial rebound correction (MSL AVISO website http://www.jason.oceanobs.com/msl). Besides, the regional sea-level trends bring out an inhomogeneous repartition of the ocean elevation, with local MSL slopes ranging from +8 mm/yr to -8 mm/yr. A study published in 2009 [Ablain et al., 2009] showed that the global MSL trend uncertainty was estimated at +/-0.6 mm/yr with a confidence interval of 90%. The main sources of error at global and regional scales are the orbit calculation and the wet troposphere correction. But other sea-level components also have a significant impact on the long-term stability of the MSL, for instance the stability of instrumental parameters and the atmospheric corrections. Thanks to recent studies performed in the frame of the SALP project (supported by CNES) and the Sea Level Climate Change Initiative project (supported by ESA), strong improvements have been made to the estimation of the global and regional MSL trends. In this paper, we propose to describe them; they concern the orbit calculation thanks to new gravity fields, the atmospheric corrections thanks to ERA-Interim reanalyses, the wet troposphere corrections thanks to the stability improvement, and also empirical corrections

  8. Paired structures, imprecision types and two-level knowledge representation by means of opposites

    DEFF Research Database (Denmark)

    Rodríguez, J. Tinguaro; Franco de los Ríos, Camilo; Gómez, Daniel

    2016-01-01

    Opposition-based models are a current hot-topic in knowledge representation. The point of this paper is to suggest that opposition can be in fact introduced at two different levels, those of the predicates of interest being represented (as short/tall) and of the logical references (true/false) used...... to evaluate the verification of the former. We study this issue by means of the consideration of different paired structures at each level. We also pay attention at how different types of fuzziness may be introduced in these paired structures to model imprecision and lack of knowledge. As a consequence, we...

  9. Dose limits, constraints, reference levels. What does it mean for radiation protection?

    International Nuclear Information System (INIS)

    Breckow, J.

    2016-01-01

    The established concept of radiation protection, with its basic principles of justification, optimization, and limitation, has proved its value and is going to be continued. In its deeper meaning, however, the concept is rather subtle and complex. Furthermore, in some aspects there remain breaches or inconsistencies. This is especially true for the terms dose limit, reference level, and constraint, which are tightly associated with the radiation protection principles. In order to guarantee the effectiveness of radiation protection to its full extent, the subtle differences in meaning have to be communicated. There is a permanent need to defend the conceptual function of these terms against deliberate or undeliberate misinterpretations. Reference levels are definitely not the same as dose limits, and they may not be misused as such. Any attempt to misinterpret fundamental radiation protection principles for selfish purposes should be discouraged vigorously.

  10. Mean of Microaccelerations Estimate in the Small Spacecraft Internal Environment with the Use of Fuzzy Sets

    Science.gov (United States)

    Sedelnikov, A. V.

    2018-05-01

    Assessment of the parameters of rotary motion of a small spacecraft around its center of mass, and of microaccelerations, using measurements of current from silicon photocells is carried out. At the same time there is a problem of interpreting ambiguous telemetric data: the currents from two opposite sides of the small spacecraft are both significant. A means of removing such uncertainty is considered, based on fuzzy sets. As the membership function it is proposed to use the normality condition of the direction cosines. An example of uncertainty removal for a prototype of the Aist small spacecraft is given. The offered approach can significantly increase the accuracy of the microacceleration estimate when using measurements of current from silicon photocells.
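The idea of using the normality condition of the direction cosines as a membership function can be sketched as follows. The candidate values and the exponential membership form are illustrative assumptions, not taken from the paper.

```python
import math

def membership(cosines):
    """Fuzzy membership from the normality condition of direction cosines:
    the closer l^2 + m^2 + n^2 is to 1, the more plausible the candidate.
    The exponential form is an illustrative choice."""
    l, m, n = cosines
    return math.exp(-abs(l * l + m * m + n * n - 1.0))

# Two candidate attitude solutions inferred from currents measured on
# opposite sides of the spacecraft (hypothetical numbers):
candidate_a = (0.60, 0.64, 0.48)     # satisfies l^2 + m^2 + n^2 = 1
candidate_b = (0.90, 0.70, 0.50)     # clearly violates normality

# Resolve the telemetry ambiguity by picking the most plausible candidate.
best = max([candidate_a, candidate_b], key=membership)
```

The candidate maximizing the membership is taken as the physical interpretation of the ambiguous photocell readings.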

  11. A variational approach to multi-phase motion of gas, liquid and solid based on the level set method

    Science.gov (United States)

    Yokoi, Kensuke

    2009-07-01

    We propose a simple and robust numerical algorithm to deal with multi-phase motion of gas, liquid and solid based on the level set method [S. Osher, J.A. Sethian, Fronts propagating with curvature-dependent speed: algorithms based on Hamilton-Jacobi formulations, J. Comput. Phys. 79 (1988) 12; M. Sussman, P. Smereka, S. Osher, A level set approach for computing solutions to incompressible two-phase flow, J. Comput. Phys. 114 (1994) 146; J.A. Sethian, Level Set Methods and Fast Marching Methods, Cambridge University Press, 1999; S. Osher, R. Fedkiw, Level Set Methods and Dynamic Implicit Surfaces, Applied Mathematical Sciences, vol. 153, Springer, 2003]. In an Eulerian framework, to simulate the interaction between a moving solid object and an interfacial flow, we need to define at least two functions (level set functions) to distinguish three materials. In such simulations, in general the two functions overlap and/or disagree due to numerical errors such as numerical diffusion. In this paper, we resolved the problem using the idea of the active contour model [M. Kass, A. Witkin, D. Terzopoulos, Snakes: active contour models, International Journal of Computer Vision 1 (1988) 321; V. Caselles, R. Kimmel, G. Sapiro, Geodesic active contours, International Journal of Computer Vision 22 (1997) 61; G. Sapiro, Geometric Partial Differential Equations and Image Analysis, Cambridge University Press, 2001; R. Kimmel, Numerical Geometry of Images: Theory, Algorithms, and Applications, Springer-Verlag, 2003] introduced in the field of image processing.

  12. Teamwork skills in actual, in situ, and in-center pediatric emergencies: performance levels across settings and perceptions of comparative educational impact.

    Science.gov (United States)

    Couto, Thomaz Bittencourt; Kerrey, Benjamin T; Taylor, Regina G; FitzGerald, Michael; Geis, Gary L

    2015-04-01

    Pediatric emergencies require effective teamwork. These skills are developed and demonstrated in actual emergencies and in simulated environments, including simulation centers (in center) and the real care environment (in situ). Our aims were to compare teamwork performance across these settings and to identify perceived educational strengths and weaknesses between simulated settings. We hypothesized that teamwork performance in actual emergencies and in situ simulations would be higher than for in-center simulations. A retrospective, video-based assessment of teamwork was performed in an academic, pediatric level 1 trauma center, using the Team Emergency Assessment Measure (TEAM) tool (range, 0-44) among emergency department providers (physicians, nurses, respiratory therapists, paramedics, patient care assistants, and pharmacists). A survey-based, cross-sectional assessment was conducted to determine provider perceptions regarding simulation training. One hundred thirty-two videos, 44 from each setting, were reviewed. Mean total TEAM scores were similar and high in all settings (31.2 actual, 31.1 in situ, and 32.3 in-center, P = 0.39). Of 236 providers, 154 (65%) responded to the survey. For teamwork training, in situ simulation was considered more realistic (59% vs. 10%) and more effective (45% vs. 15%) than in-center simulation. In a video-based study in an academic pediatric institution, ratings of teamwork were relatively high among actual resuscitations and 2 simulation settings, substantiating the influence of simulation-based training on instilling a culture of communication and teamwork. On the basis of survey results, providers favored the in situ setting for teamwork training and suggested an expansion of our existing in situ program.

  13. On piecewise constant level-set (PCLS) methods for the identification of discontinuous parameters in ill-posed problems

    International Nuclear Information System (INIS)

    De Cezaro, A; Leitão, A; Tai, X-C

    2013-01-01

    We investigate level-set-type methods for solving ill-posed problems with discontinuous (piecewise constant) coefficients. The goal is to identify the level sets as well as the level values of an unknown parameter function on a model described by a nonlinear ill-posed operator equation. The PCLS approach is used here to parametrize the solution of a given operator equation in terms of an L² level-set function, i.e. the level-set function itself is assumed to be a piecewise constant function. Two distinct methods are proposed for computing stable solutions of the resulting ill-posed problem: the first is based on Tikhonov regularization, while the second is based on the augmented Lagrangian approach with total variation penalization. Classical regularization results (Engl H W et al 1996 Mathematics and its Applications (Dordrecht: Kluwer)) are derived for the Tikhonov method. On the other hand, for the augmented Lagrangian method, we succeed in proving the existence of (generalized) Lagrangian multipliers in the sense of (Rockafellar R T and Wets R J-B 1998 Grundlehren der Mathematischen Wissenschaften (Berlin: Springer)). Numerical experiments are performed for a 2D inverse potential problem (Hettlich F and Rundell W 1996 Inverse Problems 12 251–66), demonstrating the capabilities of both methods for solving this ill-posed problem in a stable way (complicated inclusions are recovered without any a priori geometrical information on the unknown parameter). (paper)
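The first of the two stabilisation routes, Tikhonov regularization, can be sketched on a generic ill-conditioned linear operator (a Hilbert matrix here; the paper's operator is nonlinear and the PCLS parametrization is omitted):

```python
import numpy as np

def tikhonov(A, b, alpha):
    """Tikhonov-regularised least squares:
    minimise ||Ax - b||^2 + alpha * ||x||^2, solved via the normal
    equations (A^T A + alpha I) x = A^T b."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ b)

# Classic ill-conditioned test operator: the 8x8 Hilbert matrix.
n = 8
i = np.arange(n)
A = 1.0 / (i[:, None] + i[None, :] + 1.0)
x_true = np.ones(n)
rng = np.random.default_rng(0)
b = A @ x_true + 1e-6 * rng.standard_normal(n)   # small data noise

x_naive = np.linalg.solve(A, b)       # noise amplified by the conditioning
x_reg = tikhonov(A, b, alpha=1e-8)    # damped, stable reconstruction
```

The unregularised solve amplifies the noise through the tiny singular values, while the penalty term filters them out, which is the stabilising effect the paper's Tikhonov variant relies on.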

  14. The Interplay of Text, Meaning and Practice

    DEFF Research Database (Denmark)

    Kärreman, Dan; Levay, Charlotta

    2017-01-01

    Context: The study of discourses (i.e. verbal interactions or written accounts) is increasingly used in social sciences to gain insight into issues connected to discourse, such as meanings, behaviours and actions. This paper situates discourse analysis in medical education, based on a framework...... settings, with a particular focus on the field of medical education. Methods: The study is based on a literature analysis of discourse analysis approaches published in Medical Education. Results: Findings suggest that empirical studies through discourse analysis can be heuristically understood in terms...... of the links between text, practices and meaning. Conclusions: Discourse analysis provides a more strongly supported argument when it is possible to defend claims on three levels: practice, using observational data; meaning, using ethnographic data, and text, using conversational and textual data....

  15. GPU accelerated edge-region based level set evolution constrained by 2D gray-scale histogram.

    Science.gov (United States)

    Balla-Arabé, Souleymane; Gao, Xinbo; Wang, Bin

    2013-07-01

    Due to its intrinsic ability to handle complex shapes and topological changes, the level set method (LSM) has been widely used in image segmentation. Nevertheless, the LSM is computationally expensive, which limits its application in real-time systems. For this purpose, we propose a new level set algorithm that simultaneously uses edge, region, and 2D histogram information in order to efficiently segment objects of interest in a given scene. The computational complexity of the proposed LSM is greatly reduced by using the highly parallelizable lattice Boltzmann method (LBM) with a body force to solve the level set equation (LSE). The body force is the link with the image data and is defined from the proposed LSE. The proposed LSM is then implemented on an NVIDIA graphics processing unit to take full advantage of the LBM's local nature. The new algorithm is effective, robust against noise, independent of the initial contour, fast, and highly parallelizable. The edge and region information enable the detection of objects with and without edges, and the 2D histogram information makes the method effective in noisy environments. Experimental results on synthetic and real images demonstrate subjectively and objectively the performance of the proposed method.
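The level set equation that the body force feeds into has the generic form φ_t + F|∇φ| = 0, where F is a speed built from the edge, region and histogram terms. The paper solves it with a GPU lattice Boltzmann scheme; the sketch below instead uses a plain CPU finite-difference update, only to illustrate how a speed field moves the contour. All names and parameter values are assumptions.

```python
import numpy as np

def evolve_level_set(phi, speed, dt, h, n_steps):
    """Advance phi_t + speed * |grad phi| = 0 with central differences.
    (A plain finite-difference stand-in for the paper's GPU/LBM solver.)"""
    for _ in range(n_steps):
        gy, gx = np.gradient(phi, h)           # derivatives along rows, cols
        phi = phi - dt * speed * np.sqrt(gx**2 + gy**2)
    return phi

# Signed distance to a circle of radius 0.4; a positive speed pushes the
# contour outward along its normal, so the circle grows.
h = 0.02
x = np.arange(-1.0, 1.0 + h / 2, h)            # 101 grid points per axis
X, Y = np.meshgrid(x, x)
phi0 = np.sqrt(X**2 + Y**2) - 0.4
phi = evolve_level_set(phi0, speed=1.0, dt=0.01, h=h, n_steps=10)
```

In the actual method the speed would vary over the image, and the LBM formulation makes every grid-node update local, which is what makes the GPU implementation effective.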

  16. Level-set dynamics and mixing efficiency of passive and active scalars in DNS and LES of turbulent mixing layers

    NARCIS (Netherlands)

    Geurts, Bernard J.; Vreman, Bert; Kuerten, Hans; Luo, Kai H.

    2001-01-01

    The mixing efficiency in a turbulent mixing layer is quantified by monitoring the surface-area of level-sets of scalar fields. The Laplace transform is applied to numerically calculate integrals over arbitrary level-sets. The analysis includes both direct and large-eddy simulation and is used to

  17. Long-term management of high-level radioactive waste. The meaning of a demonstration

    International Nuclear Information System (INIS)

    1983-01-01

    The ''demonstration'' of the safe management of high level radioactive waste is a prerequisite for the further development of nuclear energy. It is therefore essential to be clear about both the meaning of the term ''demonstration'' and the practical means to satisfy this request. In the complex sequence of operations necessary to the safe management of high level waste, short term activities can be directly demonstrated. For longer term activities, such as the long term isolation of radioactive waste in deep underground structures, demonstration must be indirect. The ''demonstration'' of deep underground disposal for high level radioactive waste involves two steps: one direct, to prove that the system could be built, operated and closed safely and at acceptable costs, and one indirect, to make a convincing evaluation of the system's performance and long term safety on the basis of predictive analyses confirmed by a body of varied technical and scientific data, much of it deriving from experimental work. The assessment of the evidence collected from current operations, existing experience in related fields and specific research and development activities calls for specialized scientific expertise. Uncertainties in far future situations and probabilistic events can be taken into account in a scientific assessment. Competent national authorities will have to satisfy themselves that the proposed waste management solutions can meet long term safety objectives. An element of judgement will always be needed in determining the acceptability of a waste disposal concept. However, the level of confidence in our ability to predict the performance of waste management systems will increase as supporting evidence is collected from current research and development activities and as our predictive techniques improve.

  18. The Little Six Personality Dimensions From Early Childhood to Early Adulthood: Mean-Level Age and Gender Differences in Parents' Reports.

    Science.gov (United States)

    Soto, Christopher J

    2016-08-01

    The present research pursues three major goals. First, we develop scales to measure the Little Six youth personality dimensions: Extraversion, Agreeableness, Conscientiousness, Neuroticism, Openness to Experience, and Activity. Second, we examine mean-level age and gender differences in the Little Six from early childhood into early adulthood. Third, we examine the development of more specific nuance traits. We analyze parent reports, made using the common-language California Child Q-Set (CCQ), for a cross-sectional sample of 16,000 target children ranging from 3 to 20 years old. We construct CCQ-Little Six scales that reliably measure each Little Six dimension. Using these scales, we find (a) curvilinear, U-shaped age trends for Agreeableness, Conscientiousness, and Openness, with declines followed by subsequent inclines; (b) monotonic, negative age trends for Extraversion and Activity; (c) higher levels of Conscientiousness and Agreeableness among girls than boys, as well as higher levels of Activity among boys than girls; and (d) gender-specific age trends for Neuroticism, with girls scoring higher than boys by mid-adolescence. Finally, we find that several nuance traits show distinctive developmental trends that differ from their superordinate Little Six dimension. These results highlight childhood and adolescence as key periods of personality development. © 2015 Wiley Periodicals, Inc.

  19. SET overexpression in HEK293 cells regulates mitochondrial uncoupling proteins levels within a mitochondrial fission/reduced autophagic flux scenario

    Energy Technology Data Exchange (ETDEWEB)

    Almeida, Luciana O.; Goto, Renata N. [Department of Clinical Analyses, Toxicology and Food Sciences, School of Pharmaceutical Sciences of Ribeirão Preto, University of São Paulo, Ribeirão Preto, SP (Brazil); Neto, Marinaldo P.C. [Department of Physics and Chemistry, School of Pharmaceutical Sciences of Ribeirão Preto, University of São Paulo, Ribeirão Preto, SP (Brazil); Sousa, Lucas O. [Department of Clinical Analyses, Toxicology and Food Sciences, School of Pharmaceutical Sciences of Ribeirão Preto, University of São Paulo, Ribeirão Preto, SP (Brazil); Curti, Carlos [Department of Physics and Chemistry, School of Pharmaceutical Sciences of Ribeirão Preto, University of São Paulo, Ribeirão Preto, SP (Brazil); Leopoldino, Andréia M., E-mail: andreiaml@usp.br [Department of Clinical Analyses, Toxicology and Food Sciences, School of Pharmaceutical Sciences of Ribeirão Preto, University of São Paulo, Ribeirão Preto, SP (Brazil)

    2015-03-06

    We hypothesized that SET, a protein accumulated in some cancer types and Alzheimer disease, is involved in cell death through mitochondrial mechanisms. We addressed the mRNA and protein levels of the mitochondrial uncoupling proteins UCP1, UCP2 and UCP3 (S and L isoforms) by quantitative real-time PCR and immunofluorescence as well as other mitochondrial involvements, in HEK293 cells overexpressing the SET protein (HEK293/SET), either in the presence or absence of oxidative stress induced by the pro-oxidant t-butyl hydroperoxide (t-BHP). SET overexpression in HEK293 cells decreased UCP1 and increased UCP2 and UCP3 (S/L) mRNA and protein levels, whilst also preventing lipid peroxidation and decreasing the content of cellular ATP. SET overexpression also (i) decreased the area of mitochondria and increased the number of organelles and lysosomes, (ii) increased mitochondrial fission, as demonstrated by increased FIS1 mRNA and FIS-1 protein levels, an apparent accumulation of DRP-1 protein, and an increase in the VDAC protein level, and (iii) reduced autophagic flux, as demonstrated by a decrease in LC3B lipidation (LC3B-II) in the presence of chloroquine. Therefore, SET overexpression in HEK293 cells promotes mitochondrial fission and reduces autophagic flux in apparent association with up-regulation of UCP2 and UCP3; this implies a potential involvement in cellular processes that are deregulated in conditions such as Alzheimer's disease and cancer. - Highlights: • SET, UCPs and autophagy prevention are correlated. • SET action has mitochondrial involvement. • UCP2/3 may reduce ROS and prevent autophagy. • SET protects cell from ROS via UCP2/3.

  20. Personality traits in old age: measurement and rank-order stability and some mean-level change.

    Science.gov (United States)

    Mõttus, René; Johnson, Wendy; Deary, Ian J

    2012-03-01

    The Lothian Birth Cohorts of 1936 and 1921 were used to study the longitudinal comparability of Five-Factor Model (McCrae & John, 1992) personality traits from ages 69 to 72 years and from ages 81 to 87 years, and cross-cohort comparability between ages 69 and 81 years. Personality was measured using the 50-item International Personality Item Pool (Goldberg, 1999). Satisfactory measurement invariance was established across time and cohorts. High rank-order stability was observed in both cohorts. Almost no mean-level change was observed in the younger cohort, whereas Extraversion, Agreeableness, Conscientiousness, and Intellect declined significantly in the older cohort. The older cohort scored higher on Agreeableness and Conscientiousness. In these cohorts, individual differences in personality traits continued to be stable even in very old age, although mean-level changes accelerated.

  1. Patient- and population-level health consequences of discontinuing antiretroviral therapy in settings with inadequate HIV treatment availability

    Directory of Open Access Journals (Sweden)

    Kimmel April D

    2012-09-01

    Full Text Available Abstract Background In resource-limited settings, HIV budgets are flattening or decreasing. A policy of discontinuing antiretroviral therapy (ART) after HIV treatment failure was modeled to highlight trade-offs among competing policy goals of optimizing individual and population health outcomes. Methods In settings with two available ART regimens, we assessed two strategies: (1) continue ART after second-line failure (Status Quo) and (2) discontinue ART after second-line failure (Alternative). A computer model simulated outcomes for a single cohort of newly detected, HIV-infected individuals. Projections were fed into a population-level model allowing multiple cohorts to compete for ART with constraints on treatment capacity. In the Alternative strategy, discontinuation of second-line ART occurred upon detection of antiretroviral failure, specified by WHO guidelines. Those discontinuing failed ART experienced an increased risk of AIDS-related mortality compared to those continuing ART. Results At the population level, the Alternative strategy increased the mean number initiating ART annually by 1,100 individuals (+18.7%) to 6,980 compared to the Status Quo. More individuals initiating ART under the Alternative strategy increased total life-years by 15,000 (+2.8%) to 555,000, compared to the Status Quo. Although more individuals received treatment under the Alternative strategy, life expectancy for those treated decreased by 0.7 years (−8.0%) to 8.1 years compared to the Status Quo. In a cohort of treated patients only, 600 more individuals (+27.1%) died by 5 years under the Alternative strategy compared to the Status Quo. Results were sensitive to the timing of detection of ART failure, number of ART regimens, and treatment capacity. Although we believe the results robust in the short-term, this analysis reflects settings where HIV case detection occurs late in the disease course and treatment capacity and the incidence of newly detected patients are

  2. Determining Mean Annual Energy Production

    DEFF Research Database (Denmark)

    Kofoed, Jens Peter; Folley, Matt

    2016-01-01

    This robust book presents all the information required for numerical modelling of a wave energy converter, together with a comparative review of the different available techniques. The calculation of the mean annual energy production (MAEP) is critical to the assessment of the levelized cost...... of energy for a wave energy converter or wave farm. Fundamentally, the MAEP is equal to the sum of the product of the power capture of a set of sea-states and their average annual occurrence. In general, it is necessary in the calculation of the MAEP to achieve a balance between computational demand...
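The MAEP definition quoted above, the sum over sea-states of power capture times average annual occurrence, can be written directly. The sketch below assumes invented per-sea-state power captures and occurrence fractions purely for illustration.

```python
# Per-sea-state mean power capture (kW) and average annual occurrence
# (fractions of the year); the three sea-states and values are invented.
power_capture_kw = [120.0, 250.0, 400.0]
occurrence = [0.5, 0.3, 0.2]

HOURS_PER_YEAR = 8766  # average year length, including leap years

# MAEP = sum over sea-states of (power capture x annual occurrence),
# converted to energy by the number of hours in a year.
mean_power_kw = sum(p * o for p, o in zip(power_capture_kw, occurrence))
maep_kwh = mean_power_kw * HOURS_PER_YEAR
```

The balance mentioned in the abstract then concerns how finely the sea-state scatter table is discretized versus how many power-capture simulations one can afford.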

  3. Coping across the Transition to Adolescence: Evidence of Interindividual Consistency and Mean-Level Change

    Science.gov (United States)

    Valiente, Carlos; Eisenberg, Nancy; Fabes, Richard A.; Spinrad, Tracy L.; Sulik, Michael J.

    2015-01-01

    The goal of this study was to examine various forms of coping across the transition to adolescence, with a focus on interindividual (correlational) consistency of coping and mean-level changes in coping. Adolescents' emotional coping, problem solving, positive cognitive restructuring, avoidance, and support seeking in response to everyday…

  4. Setting semantics: conceptual set can determine the physical properties that capture attention.

    Science.gov (United States)

    Goodhew, Stephanie C; Kendall, William; Ferber, Susanne; Pratt, Jay

    2014-08-01

    The ability of a stimulus to capture visuospatial attention depends on the interplay between its bottom-up saliency and its relationship to an observer's top-down control set, such that stimuli capture attention if they match the predefined properties that distinguish a searched-for target from distractors (Folk, Remington, & Johnston, Journal of Experimental Psychology: Human Perception & Performance, 18, 1030–1044, 1992). Despite decades of research on this phenomenon, however, the vast majority has focused exclusively on matches based on low-level physical properties. Yet if contingent capture is indeed a "top-down" influence on attention, then semantic content should be accessible and able to determine which physical features capture attention. Here we tested this prediction by examining whether a semantically defined target could create a control set for particular features. To do this, we had participants search to identify a target that was differentiated from distractors by its meaning (e.g., the word "red" among color words all written in black). Before the target array, a cue was presented, and it was varied whether the cue appeared in the physical color implied by the target word. Across three experiments, we found that cues that embodied the meaning of the word produced greater cuing than cues that did not. This suggests that top-down control sets activate content that is semantically associated with the target-defining property, and this content in turn has the ability to exogenously orient attention.

  5. DEEBAR - A BASIC interactive computer programme for estimating mean resonance spacings

    International Nuclear Information System (INIS)

    Booth, M.; Pope, A.L.; Smith, R.W.; Story, J.S.

    1988-02-01

    DEEBAR is a BASIC interactive programme, which uses the theories of Dyson and of Dyson and Mehta, to compute estimates of the mean resonance spacings and associated uncertainty statistics from an input file of neutron resonance energies. In applying these theories the broad scale energy dependence of D-bar, as predicted by the ordinary theory of level densities, is taken into account. The mean spacing D-bar ± δD-bar, referred to zero energy of the incident neutrons, is computed from the energies of the first k resonances, for k = 2,3...K in turn and as if no resonances are missing. The user is asked to survey this set of D-bar and δD-bar values and to form a judgement - up to what value of k is the set of resonances complete and what value, in consequence, does the user adopt as the preferred value of D-bar? When the preferred values for k and D-bar have been input, the programme calculates revised values for the level density parameters, consistent with this value for D-bar and with other input information. Two short tables are printed, illustrating the energy variation and spin dependence of D-bar. Dyson's formula based on his Coulomb gas analogy is used for estimating the most likely energies of the topmost bound levels. Finally the quasi-crystalline character of a single level series is exploited by means of a table in which the resonance energies are set alongside an energy ladder whose rungs are regularly spaced with spacing D-bar(E); this comparative table expedites the search for gaps where resonances may have been missed experimentally. Used in conjunction with the program LJPROB, which calculates neutron strengths and compares them against the expected Porter Thomas distribution, estimates of the statistical parameters for use in the unresolved resonance region may be derived. (author)
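As a rough illustration of the "survey D-bar versus k" step described above (not DEEBAR's actual estimator, which applies Dyson's and Dyson-Mehta's statistics and an energy-dependent level density), a naive running estimate of the mean spacing from the first k resonances can be sketched as follows; the function name and sample energies are invented.

```python
def mean_spacing_estimates(energies):
    """Running estimates of the mean resonance spacing D-bar computed
    from the first k resonances, for k = 2..K in turn, as if no
    resonances were missing: D-bar_k = (E_k - E_1) / (k - 1)."""
    e = sorted(energies)
    return [(e[k - 1] - e[0]) / (k - 1) for k in range(2, len(e) + 1)]

# Idealised resonance energies (eV) with a uniform 10 eV spacing:
d_bar = mean_spacing_estimates([5.0, 15.0, 25.0, 35.0, 45.0])
```

As in DEEBAR, the user would survey these running values, judge up to which k the sequence is complete (here all of them), and adopt the corresponding D-bar; a drop in the running estimate at some k would hint at missed resonances.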

  6. Decadal Cycles of Earth Rotation, Mean Sea Level and Climate, Excited by Solar Activity

    Czech Academy of Sciences Publication Activity Database

    Chapanov, Y.; Ron, Cyril; Vondrák, Jan

    2017-01-01

    Roč. 14, č. 2 (2017), s. 241-250 ISSN 1214-9705 R&D Projects: GA ČR GA13-15943S Institutional support: RVO:67985815 Keywords : Earth rotation * solar activity * mean sea level Subject RIV: DE - Earth Magnetism, Geodesy, Geography OBOR OECD: Physical geography Impact factor: 0.699, year: 2016

  7. DESIRE FOR LEVELS. Background study for the policy document "Setting Environmental Quality Standards for Water and Soil"

    NARCIS (Netherlands)

    van de Meent D; Aldenberg T; Canton JH; van Gestel CAM; Slooff W

    1990-01-01

    The report provides scientific support for setting environmental quality objectives for water, sediment and soil. Quality criteria are not set in this report. Only options for decisions are given. The report is restricted to the derivation of the 'maximally acceptable risk' levels (MAR)

  8. Considering Actionability at the Participant's Research Setting Level for Anticipatable Incidental Findings from Clinical Research.

    Science.gov (United States)

    Ortiz-Osorno, Alberto Betto; Ehler, Linda A; Brooks, Judith

    2015-01-01

    Determining what constitutes an anticipatable incidental finding (IF) from clinical research and defining whether, and when, this IF should be returned to the participant have been topics of discussion in the field of human subject protections for the last 10 years. It has been debated that implementing a comprehensive IF-approach that addresses both the responsibility of researchers to return IFs and the expectation of participants to receive them can be logistically challenging. IFs have been debated at different levels, such as the ethical reasoning for considering their disclosure or the need for planning for them during the development of the research study. Some authors have discussed the methods for re-contacting participants for disclosing IFs, as well as the relevance of considering the clinical importance of the IFs. Similarly, other authors have debated about when IFs should be disclosed to participants. However, no author has addressed how the "actionability" of the IFs should be considered, evaluated, or characterized at the participant's research setting level. This paper defines the concept of "Actionability at the Participant's Research Setting Level" (APRSL) for anticipatable IFs from clinical research, discusses some related ethical concepts to justify the APRSL concept, proposes a strategy to incorporate APRSL into the planning and management of IFs, and suggests a strategy for integrating APRSL at each local research setting. © 2015 American Society of Law, Medicine & Ethics, Inc.

  9. Improvement of Global and Regional Mean Sea Level Trends Derived from all Altimetry Missions.

    Science.gov (United States)

    Ablain, Michael; Benveniste, Jérôme; Faugere, Yannice; Larnicol, Gilles; Cazenave, Anny; Johannessen, Johnny A.; Stammer, Detlef; Timms, Gary

    2012-07-01

    The global mean sea level (GMSL) has been calculated on a continual basis since January 1993 using data from satellite altimetry missions. The global mean sea level (MSL) deduced from TOPEX/Poseidon, Jason-1 and Jason-2 has been increasing with a global trend of 3.2 mm/year from 1993 to 2010, after applying the post-glacial rebound correction (MSL Aviso website http://www.jason.oceanobs.com/msl). In addition, the regional sea level trends reveal an inhomogeneous distribution of ocean elevation, with local MSL slopes ranging within +/- 8 mm/year. A study published in 2009 [Ablain et al., 2009] showed that the global MSL trend uncertainty was estimated at +/-0.6 mm/year with a confidence interval of 90%. The main sources of error at global and regional scales are the orbit calculation and the wet troposphere correction. However, other sea-level components also have a significant impact on the long-term stability of the MSL, for instance the stability of instrumental parameters and the atmospheric corrections. Thanks to recent studies performed in the Sea Level Essential Climate Variable project in the frame of the Climate Change Initiative, an ESA programme, in addition to activities performed within the SALP/CNES, substantial improvements have been made in the estimation of the global and regional MSL trends. In this paper, we propose to describe them: they concern the orbit calculation, thanks to new gravity fields; the atmospheric corrections, thanks to ERA-Interim reanalyses; the wet troposphere corrections, thanks to improved stability; and also empirical corrections allowing us to better link regional time series together. These improvements are described at global and regional scales for all the altimetry missions.
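The trend figures quoted above are slopes of least-squares lines fitted to the altimetric MSL time series. A minimal sketch of that fit, on a synthetic annual series whose values are invented for illustration:

```python
import numpy as np

def msl_trend_mm_per_yr(years, msl_mm):
    """Least-squares linear trend (slope) of a mean sea level series."""
    slope, _intercept = np.polyfit(years, msl_mm, 1)
    return slope

# Synthetic annual-mean series rising at 3.2 mm/year over 1993-2010.
years = np.arange(1993, 2011)
msl_mm = 3.2 * (years - 1993)

trend = msl_trend_mm_per_yr(years, msl_mm)
```

On real data the same fit would be preceded by the orbit, troposphere and instrumental corrections discussed in the abstract, which is where the +/-0.6 mm/year uncertainty arises.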

  10. Risk assessment of water pollution sources based on an integrated k-means clustering and set pair analysis method in the region of Shiyan, China.

    Science.gov (United States)

    Li, Chunhui; Sun, Lian; Jia, Junxiang; Cai, Yanpeng; Wang, Xuan

    2016-07-01

    Source water areas face many potential water pollution risks, and risk assessment is an effective method to evaluate such risks. In this paper, an integrated model based on k-means clustering and set pair analysis was established to evaluate the risks associated with water pollution in source water areas, with the weights of indicators determined through the entropy weight method. The proposed model was then applied to assess water pollution risks in the region of Shiyan, where the Danjiangkou Reservoir, China's key source water area for the middle route of the South-to-North Water Diversion Project, is located. The results showed that eleven sources with relatively high risk values were identified. At the regional scale, Shiyan City and Danjiangkou City had high risk values in terms of industrial discharge, while Danjiangkou City and Yunxian County had high risk values in terms of agricultural pollution. Overall, the risk values of the northern regions close to the main stream and reservoir of the region of Shiyan were higher than those in the south. The risk levels indicated that five sources were at a lower risk level (i.e., level II), two at a moderate risk level (i.e., level III), one at a higher risk level (i.e., level IV) and three at the highest risk level (i.e., level V); risks from industrial discharge were also higher than those from the agricultural sector. It is thus essential to manage the pillar industries of the region of Shiyan and certain agricultural companies in the vicinity of the reservoir to reduce the water pollution risks of source water areas. Copyright © 2016 Elsevier B.V. All rights reserved.
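Of the model's ingredients, the entropy weight method is the most mechanical: indicators whose values vary more across the assessed pollution sources carry more information and receive larger weights. The sketch below is a generic implementation of that method, not the authors' code; the sample matrix is invented.

```python
import numpy as np

def entropy_weights(X):
    """Entropy weight method: indicators that vary more across the
    assessed sources carry more information and get larger weights.
    X has shape (n_sources, n_indicators), entries strictly positive."""
    P = X / X.sum(axis=0)                                # share per source
    E = -(P * np.log(P)).sum(axis=0) / np.log(len(X))    # entropy in [0, 1]
    d = 1.0 - E                                          # diversification
    return d / d.sum()

# Two indicators over three sources: the first is constant (carries no
# information), the second discriminates the sources; data are invented.
X = np.array([[1.0, 10.0],
              [1.0, 20.0],
              [1.0, 30.0]])
w = entropy_weights(X)
```

The resulting weights would then multiply the indicator scores before the k-means clustering and set pair analysis steps.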

  11. NEW APPROACHES: Measurement of the mean lifetime of cosmic ray muons in the A-level laboratory

    Science.gov (United States)

    Dunne, Peter; Costich, David; O'Sullivan, Sean

    1998-09-01

    The Turning Points in Physics module from the NEAB A-level Modular Physics syllabus requires students to have an understanding of relativistic time dilation and offers the measurement of the mean lifetime of cosmic ray muons as an example of supporting experimental evidence. This article describes a direct measurement of muon lifetime carried out in the A-level laboratory.
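The analysis behind such a measurement is simple enough for an A-level class: stopped-muon decay times follow an exponential distribution, so the maximum-likelihood estimate of the mean lifetime is just the sample mean of the recorded decay times. The sketch below checks this on simulated data; the timing details of the real apparatus are not modelled.

```python
import random

random.seed(42)                 # reproducible simulated 'experiment'

TAU_MU = 2.197                  # accepted muon mean lifetime, microseconds

# Simulated start/stop decay times for stopped muons, exponentially
# distributed with mean TAU_MU (detector effects are ignored).
decay_times_us = [random.expovariate(1.0 / TAU_MU) for _ in range(100_000)]

# For an exponential distribution, the maximum-likelihood estimate of
# the mean lifetime is simply the sample mean of the decay times.
tau_hat = sum(decay_times_us) / len(decay_times_us)
```

A classroom run records far fewer decays, so the statistical uncertainty (roughly the lifetime divided by the square root of the number of events) dominates the result.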

  12. Analysis of Forensic Autopsy in 120 Cases of Medical Disputes Among Different Levels of Institutional Settings.

    Science.gov (United States)

    Yu, Lin-Sheng; Ye, Guang-Hua; Fan, Yan-Yan; Li, Xing-Biao; Feng, Xiang-Ping; Han, Jun-Ge; Lin, Ke-Zhi; Deng, Miao-Wu; Li, Feng

    2015-09-01

    Despite advances in medical science, the cause of death can sometimes only be determined by pathologists after a complete autopsy. Few studies have investigated the importance of forensic autopsy in medically disputed cases among different levels of institutional settings. Our study aimed to analyze forensic autopsies in 120 cases of medical disputes among five levels of institutional settings between 2001 and 2012 in Wenzhou, China. The results showed an overall concordance rate of 55%. Of the 39% of cases with clinically missed diagnoses, cardiovascular pathology comprised 55.32%, while respiratory pathology accounted for the remaining 44.68%. Factors that increased the likelihood of missed diagnoses were private clinics, community settings, and county hospitals. These results support that autopsy remains an important tool for establishing the cause of death in medically disputed cases, as it may directly determine or exclude fault in medical care and thereby help resolve these cases. © 2015 American Academy of Forensic Sciences.

  13. Being in togetherness: meanings of encounters within primary healthcare setting for patients living with long-term illness.

    Science.gov (United States)

    Nygren Zotterman, Anna; Skär, Lisa; Olsson, Malin; Söderberg, Siv

    2016-10-01

    The aim of this study was to elucidate meanings of encounters for patients with long-term illness within the primary healthcare setting. Good encounters can be crucial for patients in terms of how they view their quality of care, so it is important to understand the meanings of interactions between patients and healthcare personnel. Narrative interviews with ten patients with long-term illness were performed, with a focus on their encounters with healthcare personnel within the primary healthcare setting, and a phenomenological hermeneutical approach was used to interpret the interview texts. The results demonstrated that patients felt well when they were seen as an important person and felt welcomed by healthcare personnel. Information and follow-ups regarding the need for care were essential. Continuity with the healthcare personnel was one way to establish a relationship, which contributed to patients' feelings of being seen and understood. Good encounters were important for patients' feelings of health and well-being. Being met with mistrust, ignorance and nonchalance had negative effects on patients' perceived health and well-being and led to lower confidence in the care received. Patients described a great need to be confirmed and met with respect by healthcare personnel, which contributed to their sense of togetherness, and having a sense of togetherness strengthened patient well-being. By listening and responding to patients' needs and engaging with patients in a respectful manner, healthcare personnel can strengthen patients' feelings of health and well-being. Healthcare personnel need to be aware of the significance of these actions, because they can give patients a feeling of togetherness even when patients meet different care personnel at each visit. © 2016 John Wiley & Sons Ltd.

  14. INTEGRATED SFM TECHNIQUES USING DATA SET FROM GOOGLE EARTH 3D MODEL AND FROM STREET LEVEL

    Directory of Open Access Journals (Sweden)

    L. Inzerillo

    2017-08-01

    Full Text Available Structure from motion (SfM) is a widespread photogrammetric method that applies photogrammetric rules to build a 3D model from a collection of photos. For some complex ancient buildings, such as cathedrals, theatres or castles, the data set acquired from street level needs to be supplemented with a UAV data set in order to reconstruct the roof in 3D. Nevertheless, the use of UAVs is strongly limited by government rules. In recent years, Google Earth (GE) has been enriched with 3D models of sites around the earth. For this reason, it seemed convenient to test the potential offered by GE to extract a data set that replaces the UAV function, closing the aerial building data set by using screen images of high-resolution 3D models. Users can take unlimited “aerial photos” of a scene while flying around in GE at any viewing angle and altitude. The challenge is to verify the metric reliability of the SfM model carried out with an integrated data set (the one from street level and the one from GE), aimed at replacing the use of UAVs in an urban context. This model is called the integrated GE SfM model (i-GESfM). In this paper, a case study is presented: the Cathedral of Palermo.

  15. Joint inversion of geophysical data using petrophysical clustering and facies deformation with the level set technique

    Science.gov (United States)

    Revil, A.

    2015-12-01

    Geological expertise and petrophysical relationships can be brought together to provide prior information while inverting multiple geophysical datasets. The merging of such information can result in more realistic solution in the distribution of the model parameters, reducing ipse facto the non-uniqueness of the inverse problem. We consider two level of heterogeneities: facies, described by facies boundaries and heteroegenities inside each facies determined by a correlogram. In this presentation, we pose the geophysical inverse problem in terms of Gaussian random fields with mean functions controlled by petrophysical relationships and covariance functions controlled by a prior geological cross-section, including the definition of spatial boundaries for the geological facies. The petrophysical relationship problem is formulated as a regression problem upon each facies. The inversion of the geophysical data is performed in a Bayesian framework. We demonstrate the usefulness of this strategy using a first synthetic case for which we perform a joint inversion of gravity and galvanometric resistivity data with the stations located at the ground surface. The joint inversion is used to recover the density and resistivity distributions of the subsurface. In a second step, we consider the possibility that the facies boundaries are deformable and their shapes are inverted as well. We use the level set approach to perform such deformation preserving prior topological properties of the facies throughout the inversion. With the help of prior facies petrophysical relationships and topological characteristic of each facies, we make posterior inference about multiple geophysical tomograms based on their corresponding geophysical data misfits. 
The method is applied to a second synthetic case showing that we can recover the heterogeneities inside the facies, the mean values for the petrophysical properties, and, to some extent, the facies boundaries using the 2D joint inversion of
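The Gaussian-random-field prior described in this record can be illustrated with a small sketch (not the authors' code): the mean vector stands in for a petrophysical relationship and the covariance is built from an exponential correlogram, then sampled by Cholesky factorization. All names and parameter values below are invented for illustration.

```python
import numpy as np

def sample_gaussian_field(mean, coords, corr_length, sigma, rng):
    """Draw one realization of a Gaussian random field whose covariance
    follows an exponential correlogram, via Cholesky factorization."""
    d = np.abs(coords[:, None] - coords[None, :])   # pairwise cell distances
    cov = sigma**2 * np.exp(-d / corr_length)       # prior covariance (correlogram)
    # A small jitter keeps the factorization numerically stable
    L = np.linalg.cholesky(cov + 1e-10 * np.eye(len(coords)))
    return mean + L @ rng.standard_normal(len(coords))

rng = np.random.default_rng(0)
coords = np.linspace(0.0, 100.0, 50)   # cell centres (m), illustrative grid
mean = np.full(50, 2.0)                # e.g. a log-resistivity from a petrophysical law
field = sample_gaussian_field(mean, coords, corr_length=20.0, sigma=0.3, rng=rng)
```

In a Bayesian inversion, realizations like `field` would serve as proposals whose data misfit is then evaluated against the geophysical observations.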

  16. A combined single-multiphase flow formulation of the premixing phase using the level set method

    International Nuclear Information System (INIS)

    Leskovar, M.; Marn, J.

    1999-01-01

    The premixing phase of a steam explosion covers the interaction of the melt jet or droplets with the water prior to any steam explosion occurring. To gain better insight into the hydrodynamic processes during the premixing phase, cold isothermal premixing experiments are performed in addition to hot premixing experiments, where water evaporation is significant. The special feature of isothermal premixing experiments is that three phases are involved: the water, the air and the spheres phase, but only the spheres phase mixes with the other two phases, whereas the water and air phases do not mix and remain separated by a free surface. Our idea therefore was to treat the isothermal premixing process with a combined single-multiphase flow model. In this combined model the water and air phases are treated as a single phase with discontinuous phase properties at the water-air interface, whereas the spheres are treated as usual with a multiphase flow model, where the spheres represent the dispersed phase and the common water-air phase represents the continuous phase. The common water-air phase was described with a front-capturing method based on the level set formulation. In the level set formulation, the boundary between the two fluids is modeled as the zero set of a smooth signed normal distance function defined on the entire physical domain. The boundary is then updated by solving a nonlinear equation of the Hamilton-Jacobi type on the whole domain. With this single-multiphase flow model the QUEOS isothermal premixing experiment Q08 has been simulated. A numerical analysis using different treatments of the water-air interface (level set, high-resolution and upwind) has been performed for the incompressible and compressible cases, and the results were compared to experimental measurements. (author)
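The level set update described here (interface as the zero set of a signed distance function, advanced by a Hamilton-Jacobi-type equation) can be sketched in 1-D with a first-order upwind scheme. The grid, speed and step sizes below are illustrative, not taken from the paper:

```python
import numpy as np

def advect_level_set(phi, speed, dx, dt, steps):
    """Advance phi_t + F * phi_x = 0 with a first-order upwind scheme;
    the interface is wherever phi crosses zero."""
    phi = phi.copy()
    for _ in range(steps):
        # One-sided differences chosen by the sign of the speed (upwinding)
        dminus = np.diff(phi, prepend=phi[0]) / dx
        dplus = np.diff(phi, append=phi[-1]) / dx
        grad = np.where(speed > 0, dminus, dplus)
        phi -= dt * speed * grad
    return phi

x = np.linspace(0.0, 1.0, 101)
phi0 = x - 0.3   # signed distance; the interface starts at x = 0.3
phi = advect_level_set(phi0, speed=np.full_like(x, 1.0), dx=0.01, dt=0.005, steps=40)
```

With unit speed and 40 steps of 0.005, the zero crossing should travel about 0.2, from x = 0.3 to roughly x = 0.5.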

  17. Exercise Self-Efficacy as a Mediator between Goal-Setting and Physical Activity: Developing the Workplace as a Setting for Promoting Physical Activity.

    Science.gov (United States)

    Iwasaki, Yoshie; Honda, Sumihisa; Kaneko, Shuji; Kurishima, Kazuhiro; Honda, Ayumi; Kakinuma, Ayumu; Jahng, Doosub

    2017-03-01

    Physical activity (PA) is ranked as a leading health indicator, and the workplace is a key setting for promoting PA. The purpose of this study was to examine how goal-setting and exercise self-efficacy (SE) during a health promotion program influenced PA level among Japanese workers. Using a cross-sectional study design, we surveyed 281 employees. The short version of the International Physical Activity Questionnaire was used to assess PA level. Exercise SE was assessed using a partially modified version of Oka's exercise SE scale. Personal goals were assessed as the total number of "yes" responses to five items regarding "details of personal goals to perform PA". A mediational model was used to examine whether exercise SE mediates between the number of personal goals and PA level. The mean age of the participants was 46.3 years, 76.2% were men, and the most common occupational category was software engineer (30.6%). The average PA level per week exceeded the recommended level in 127 participants (45.2%). One hundred and eighty-four participants (65.5%) set some form of concrete personal goal to perform PA. The relationship between the number of personal goals and PA level was mediated by exercise SE. Our study showed that exercise SE mediates the relationship between goal-setting and PA. The results suggest that the components of PA promotion programs should be tailored to enhance participants' confidence in performing PA.
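The mediational model used in this study can be illustrated on synthetic data; only the sample size n = 281 comes from the abstract, and the coefficients and variable constructions are invented. The indirect (mediated) effect is estimated, in the spirit of classic mediation analysis, as the total effect minus the direct effect:

```python
import numpy as np

def ols_slope(y, X):
    """Least-squares coefficients of y on the columns of X (with intercept)."""
    A = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    return beta  # [intercept, slope_1, slope_2, ...]

rng = np.random.default_rng(1)
n = 281                                          # sample size from the study
goals = rng.integers(0, 6, n).astype(float)      # number of personal goals (0-5)
# Synthetic mediation structure: goals -> exercise SE -> PA level
se = 2.0 + 0.8 * goals + rng.normal(0, 1, n)
pa = 1.0 + 0.9 * se + rng.normal(0, 1, n)

total = ols_slope(pa, goals)[1]                          # total effect c
direct = ols_slope(pa, np.column_stack([goals, se]))[1]  # direct effect c' (SE controlled)
indirect = total - direct                                # mediated effect c - c'
```

In fully mediated synthetic data like this, the direct effect shrinks toward zero once SE is controlled, while the indirect effect stays near the product of the two path coefficients (0.8 × 0.9 = 0.72).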

  18. HANDBOOK: GUIDANCE ON SETTING PERMIT CONDITIONS AND REPORTING TRIAL BURN RESULTS

    Science.gov (United States)

    This Handbook provides guidance for establishing operational conditions for incinerators. The document provides a means for state and local agencies to achieve a level of consistency in setting permit conditions that will result in establishment of more uniform permit conditions n...

  19. Assessment of Current Estimates of Global and Regional Mean Sea Level from the TOPEX/Poseidon, Jason-1, and OSTM 17-Year Record

    Science.gov (United States)

    Beckley, Brian D.; Ray, Richard D.; Lemoine, Frank G.; Zelensky, N. P.; Holmes, S. A.; Desai, Shailen D.; Brown, Shannon; Mitchum, G. T.; Jacob, Samuel; Luthcke, Scott B.

    2010-01-01

    The science value of satellite altimeter observations has grown dramatically over time as enabling models and technologies have increased the value of data acquired on both past and present missions. With the prospect of an observational time series extending into several decades from TOPEX/Poseidon through Jason-1 and the Ocean Surface Topography Mission (OSTM), and further in time with a future set of operational altimeters, researchers are pushing the bounds of current technology and modeling capability in order to monitor global sea level rate at an accuracy of a few tenths of a mm/yr. The measurement of mean sea-level change from satellite altimetry requires an extreme stability of the altimeter measurement system since the signal being measured is at the level of a few mm/yr. This means that the orbit and reference frame within which the altimeter measurements are situated, and the associated altimeter corrections, must be stable and accurate enough to permit a robust MSL estimate. Foremost, orbit quality and consistency are critical to satellite altimeter measurement accuracy. The orbit defines the altimeter reference frame, and orbit error directly affects the altimeter measurement. Orbit error remains a major component in the error budget of all past and present altimeter missions. For example, inconsistencies in the International Terrestrial Reference Frame (ITRF) used to produce the precision orbits at different times cause systematic inconsistencies to appear in the multimission time-frame between TOPEX and Jason-1, and can affect the intermission calibration of these data. In an effort to adhere to cross mission consistency, we have generated the full time series of orbits for TOPEX/Poseidon (TP), Jason-1, and OSTM based on recent improvements in the satellite force models, reference systems, and modeling strategies. 
The recent release of the entire revised Jason-1 Geophysical Data Records, and recalibration of the microwave radiometer correction also

  20. Mean-Level Change and Intraindividual Variability in Self-Esteem and Depression among High-Risk Children

    Science.gov (United States)

    Kim, Jungmeen; Cicchetti, Dante

    2009-01-01

    This study investigated mean-level changes and intraindividual variability of self-esteem among maltreated (N = 142) and nonmaltreated (N = 109) school-aged children from low-income families. Longitudinal factor analysis revealed higher temporal stability of self-esteem among maltreated children compared to nonmaltreated children. Cross-domain…

  1. Ultrasonic scalpel causes greater depth of soft tissue necrosis compared to monopolar electrocautery at standard power level settings in a pig model.

    Science.gov (United States)

    Homayounfar, Kia; Meis, Johanna; Jung, Klaus; Klosterhalfen, Bernd; Sprenger, Thilo; Conradi, Lena-Christin; Langer, Claus; Becker, Heinz

    2012-02-23

    Ultrasonic scalpel (UC) and monopolar electrocautery (ME) are common tools for soft tissue dissection. However, morphological data on the related tissue alteration are discordant. We developed an automatic device for standardized sample excision and compared the quality and depth of morphological changes caused by UC and ME in a pig model. 100 tissue samples (5 × 3 cm) of the abdominal wall were excised in 16 pigs. Excisions were randomly performed manually or by using the self-constructed automatic device at standard power levels (60 W cutting for ME, level 5 for UC) for abdominal surgery. The quality of tissue alteration and the depth of coagulation necrosis were examined histopathologically. Device (UC vs. ME) and mode (manual vs. automatic) effects were studied by two-way analysis of variance at a significance level of 5%. At the investigated power level settings, UC and ME induced qualitatively similar coagulation necroses. Mean depth of necrosis was 450.4 ± 457.8 μm for manual UC and 553.5 ± 326.9 μm for automatic UC, versus 149.0 ± 74.3 μm for manual ME and 257.6 ± 119.4 μm for automatic ME. Coagulation necrosis was significantly deeper with UC than with ME at the investigated power levels.

  2. Some considerations about Gaussian basis sets for electric property calculations

    Science.gov (United States)

    Arruda, Priscilla M.; Canal Neto, A.; Jorge, F. E.

    Recently, segmented contracted basis sets of double, triple, and quadruple zeta valence quality plus polarization functions (XZP, X = D, T, and Q, respectively) for the atoms from H to Ar were reported. In this work, with the objective of having a better description of polarizabilities, the QZP set was augmented with diffuse (s and p symmetries) and polarization (p, d, f, and g symmetries) functions that were chosen to maximize the mean dipole polarizability at the UHF and UMP2 levels, respectively. At the HF and B3LYP levels of theory, electric dipole moment and static polarizability for a sample of molecules were evaluated. Comparison with experimental data and results obtained with a similar size basis set, whose diffuse functions were optimized for the ground state energy of the anion, was done.

  3. Segmentation of teeth in CT volumetric dataset by panoramic projection and variational level set

    Energy Technology Data Exchange (ETDEWEB)

    Hosntalab, Mohammad [Islamic Azad University, Faculty of Engineering, Science and Research Branch, Tehran (Iran); Aghaeizadeh Zoroofi, Reza [University of Tehran, Control and Intelligent Processing Center of Excellence, School of Electrical and Computer Engineering, College of Engineering, Tehran (Iran); Abbaspour Tehrani-Fard, Ali [Islamic Azad University, Faculty of Engineering, Science and Research Branch, Tehran (Iran); Sharif University of Technology, Department of Electrical Engineering, Tehran (Iran); Shirani, Gholamreza [Faculty of Dentistry Medical Science of Tehran University, Oral and Maxillofacial Surgery Department, Tehran (Iran)

    2008-09-15

    Quantification of teeth is of clinical importance for various computer-assisted procedures such as dental implants, orthodontic planning, and face, jaw and cosmetic surgeries. In this regard, segmentation is a major step. In this paper, we propose a method for segmentation of teeth in volumetric computed tomography (CT) data using panoramic re-sampling of the dataset in the coronal view and a variational level set. The proposed method consists of five steps as follows: first, we extract a mask in the CT images using Otsu thresholding. Second, the teeth are segmented from other bony tissues by utilizing anatomical knowledge of the teeth in the jaws. Third, the method proceeds by estimating the arcs of the upper and lower jaws and panoramic re-sampling of the dataset. Separation of the upper and lower jaws and initial segmentation of the teeth are then performed by employing the horizontal and vertical projections of the panoramic dataset, respectively. Based on the above procedures, an initial mask for each tooth is obtained. Finally, we utilize the initial mask of the teeth and apply a variational level set to refine the initial teeth boundaries into final contours. The proposed algorithm was evaluated on 30 multi-slice CT datasets comprising 3,600 images. Experimental results reveal the effectiveness of the proposed method. In the proposed algorithm, the variational level set technique was utilized to trace the contour of the teeth. Since this technique is based on the characteristics of the overall region of the tooth image, it is possible to extract a very smooth and accurate tooth contour. On the available datasets, the proposed technique was successful in teeth segmentation compared to previous techniques. (orig.)

  4. Segmentation of teeth in CT volumetric dataset by panoramic projection and variational level set

    International Nuclear Information System (INIS)

    Hosntalab, Mohammad; Aghaeizadeh Zoroofi, Reza; Abbaspour Tehrani-Fard, Ali; Shirani, Gholamreza

    2008-01-01

    Quantification of teeth is of clinical importance for various computer-assisted procedures such as dental implants, orthodontic planning, and face, jaw and cosmetic surgeries. In this regard, segmentation is a major step. In this paper, we propose a method for segmentation of teeth in volumetric computed tomography (CT) data using panoramic re-sampling of the dataset in the coronal view and a variational level set. The proposed method consists of five steps as follows: first, we extract a mask in the CT images using Otsu thresholding. Second, the teeth are segmented from other bony tissues by utilizing anatomical knowledge of the teeth in the jaws. Third, the method proceeds by estimating the arcs of the upper and lower jaws and panoramic re-sampling of the dataset. Separation of the upper and lower jaws and initial segmentation of the teeth are then performed by employing the horizontal and vertical projections of the panoramic dataset, respectively. Based on the above procedures, an initial mask for each tooth is obtained. Finally, we utilize the initial mask of the teeth and apply a variational level set to refine the initial teeth boundaries into final contours. The proposed algorithm was evaluated on 30 multi-slice CT datasets comprising 3,600 images. Experimental results reveal the effectiveness of the proposed method. In the proposed algorithm, the variational level set technique was utilized to trace the contour of the teeth. Since this technique is based on the characteristics of the overall region of the tooth image, it is possible to extract a very smooth and accurate tooth contour. On the available datasets, the proposed technique was successful in teeth segmentation compared to previous techniques. (orig.)

  5. Wave energy level and geographic setting correlate with Florida beach water quality.

    Science.gov (United States)

    Feng, Zhixuan; Reniers, Ad; Haus, Brian K; Solo-Gabriele, Helena M; Kelly, Elizabeth A

    2016-03-15

    Many recreational beaches suffer from elevated levels of microorganisms, resulting in beach advisories and closures due to lack of compliance with Environmental Protection Agency guidelines. We conducted the first statewide beach water quality assessment by analyzing decadal records of fecal indicator bacteria (enterococci and fecal coliform) levels at 262 Florida beaches. The objectives were to depict synoptic patterns of beach water quality exceedance along the entire Florida shoreline and to evaluate their relationships with wave condition and geographic location. Percent exceedances based on enterococci and fecal coliform were negatively correlated with both long-term mean wave energy and beach slope. Also, Gulf of Mexico beaches exceeded the thresholds significantly more than Atlantic Ocean ones, perhaps partially due to the lower wave energy. A possible linkage between wave energy level and water quality is beach sand, a pervasive nonpoint source that tends to harbor more bacteria in the low-wave-energy environment. Copyright © 2016 Elsevier Ltd. All rights reserved.

  6. Mean-field energy-level shifts and dielectric properties of strongly polarized Rydberg gases

    OpenAIRE

    Zhelyazkova, V.; Jirschik, R.; Hogan, S. D.

    2016-01-01

    Mean-field energy-level shifts arising as a result of strong electrostatic dipole interactions within dilute gases of polarized helium Rydberg atoms have been probed by microwave spectroscopy. The Rydberg states studied had principal quantum numbers n = 70 and 72, and electric dipole moments of up to 14 050 D, and were prepared in pulsed supersonic beams at particle number densities on the order of 10⁸ cm⁻³. Comparisons of the experimental data with the results of Monte Carlo calculations highl...

  7. Quasi-min-max Fuzzy MPC of UTSG Water Level Based on Off-Line Invariant Set

    Science.gov (United States)

    Liu, Xiangjie; Jiang, Di; Lee, Kwang Y.

    2015-10-01

    In a nuclear power plant, the water level of the U-tube steam generator (UTSG) must be maintained within a safe range. Traditional control methods encounter difficulties due to the complexity, strong nonlinearity and “swell and shrink” effects, especially at low power levels. A properly designed robust model predictive control can well solve this problem. In this paper, a quasi-min-max fuzzy model predictive controller is developed for controlling the constrained UTSG system. Since the online computational burden could be quite large for real-time control, a bank of ellipsoidal invariant sets together with the corresponding feedback control laws is obtained by off-line solving linear matrix inequalities (LMIs). Based on the UTSG states, the online optimization is simplified to a constrained optimization problem with a bisection search for the corresponding ellipsoidal invariant set. Simulation results are given to show the effectiveness of the proposed controller.
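The off-line/online split described above can be illustrated with a toy sketch: given a nested bank of ellipsoidal sets {x : xᵀPx ≤ 1} computed off-line (here just scaled identity matrices, invented for illustration), the online step reduces to a bisection search for the tightest set still containing the current state. The sketch assumes the state lies in the largest set of the bank:

```python
import numpy as np

def smallest_invariant_set(P_bank, x):
    """Bisection search for the tightest ellipsoid {x : x' P x <= 1} in a
    nested bank (ordered largest to smallest) that still contains x.
    Assumes x lies at least in the largest set, P_bank[0]."""
    lo, hi = 0, len(P_bank) - 1
    best = 0
    while lo <= hi:
        mid = (lo + hi) // 2
        if x @ P_bank[mid] @ x <= 1.0:   # x inside ellipsoid mid
            best = mid                    # record it, try a tighter set
            lo = mid + 1
        else:
            hi = mid - 1                  # too tight, back off
    return best

# Toy nested bank: P_i = alpha_i * I with increasing alpha (shrinking sets)
alphas = [0.1, 0.5, 1.0, 2.0, 5.0]
bank = [a * np.eye(2) for a in alphas]
x = np.array([0.6, 0.5])   # ||x||^2 = 0.61
idx = smallest_invariant_set(bank, x)
```

In the controller, the index returned by the search would select the pre-computed feedback law associated with that invariant set, keeping the online computation to a handful of quadratic-form evaluations.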

  8. On the Relationship between Variational Level Set-Based and SOM-Based Active Contours

    Science.gov (United States)

    Abdelsamea, Mohammed M.; Gnecco, Giorgio; Gaber, Mohamed Medhat; Elyan, Eyad

    2015-01-01

    Most Active Contour Models (ACMs) deal with the image segmentation problem as a functional optimization problem, as they work on dividing an image into several regions by optimizing a suitable functional. Among ACMs, variational level set methods have been used to build an active contour with the aim of modeling arbitrarily complex shapes. Moreover, they can also handle topological changes of the contours. Self-Organizing Maps (SOMs) have attracted the attention of many computer vision scientists, particularly in modeling an active contour based on the idea of utilizing the prototypes (weights) of a SOM to control the evolution of the contour. SOM-based models have been proposed in general with the aim of exploiting the specific ability of SOMs to learn the edge-map information via their topology preservation property and overcoming some drawbacks of other ACMs, such as trapping into local minima of the image energy functional to be minimized in such models. In this survey, we illustrate the main concepts of variational level set-based ACMs, SOM-based ACMs, and their relationship, and review in a comprehensive fashion the development of their state-of-the-art models from a machine learning perspective, with a focus on their strengths and weaknesses. PMID:25960736

  9. Adaptable Value-Set Analysis for Low-Level Code

    OpenAIRE

    Brauer, Jörg; Hansen, René Rydhof; Kowalewski, Stefan; Larsen, Kim G.; Olesen, Mads Chr.

    2012-01-01

    This paper presents a framework for binary code analysis that uses only SAT-based algorithms. Within the framework, incremental SAT solving is used to perform a form of weakly relational value-set analysis in a novel way, connecting the expressiveness of the value sets to computational complexity. Another key feature of our framework is that it translates the semantics of binary code into an intermediate representation. This allows for a straightforward translation of the program semantics in...

  10. BACK MUSCLES STRENGTH DEVELOPMENT BY MEANS OF INCREASE AND DECREASE OF EFFORT LOAD DURING GIANT SETS IN BODYBUILDING FOR MASSES

    Directory of Open Access Journals (Sweden)

    TIMNEA OLIVIA

    2013-01-01

    Full Text Available Abstract. The aim of the study is to highlight methodological issues in developing back muscle strength by combining methodological procedures in bodybuilding for masses. Methods. The study was conducted in three stages over a period of two months (March-April 2011), with three workouts per week, monitoring the effective use of strength exercises to develop the back muscles in the same muscle area by means of giant sets during workouts. In this context, we conducted a case study in "Tonik Fitness Club" in Bucharest, applied to two athletes aged 28 and 34 years. We recorded the subjects' evolution during the training sessions, using the statistical-mathematical method and the graphical representation method. Results. The study content highlights the training programs depending on muscle zone and the specific methodological aspects, the weekly training program per muscle group, the stages of carrying out the study, the test and control trials applied in terms of anthropometric measurements and of back muscle strength development, and the application of the methodical procedure of effort load increase and decrease within the giant sets in a training micro-cycle. Discussion. The study focused on the training programs over two months, monitoring statistically the development of back muscle strength through the application of the procedure of effort load increase and decrease during giant sets in bodybuilding for masses. From the analysis of the training program content we noticed that three giant sets of exercises were used, performed in four series; each exercise within the giant sets was applied by means of the procedure of increasing and decreasing the effort load. The study results emphasize the anthropometric measurement results: the study subjects have a mean age of 24.75, with a size of 175.2 cm and a weight of 83.75 kg at initial testing and a decrease by 2.12 kg at final testing. Regarding the chest perimeter, the inspiration is averaging 89.5 in initial

  11. Soils - Mean Permeability

    Data.gov (United States)

    Kansas Data Access and Support Center — This digital spatial data set provides information on the magnitude and spatial pattern of depth-weighted, mean soil permeability throughout the State of Kansas. The...

  12. Fast Streaming 3D Level set Segmentation on the GPU for Smooth Multi-phase Segmentation

    DEFF Research Database (Denmark)

    Sharma, Ojaswa; Zhang, Qin; Anton, François

    2011-01-01

    Level set method based segmentation provides an efficient tool for topological and geometrical shape handling, but it is slow due to high computational burden. In this work, we provide a framework for streaming computations on large volumetric images on the GPU. A streaming computational model...

  13. Intonational Meaning in Institutional Settings: The Role of Syntagmatic Relations

    Science.gov (United States)

    Wichmann, Anne

    2010-01-01

    This paper addresses the power of intonation to convey interpersonal or attitudinal meaning. Speakers have been shown to accommodate to each other in the course of conversation, and this convergence may be perceived as a sign of empathy. Accommodation often involves paradigmatic choices--choosing the same words, gestures, regional accent or…

  14. Research on classified real-time flood forecasting framework based on K-means cluster and rough set.

    Science.gov (United States)

    Xu, Wei; Peng, Yong

    2015-01-01

    This research presents a new classified real-time flood forecasting framework. In this framework, historical floods are classified by a K-means cluster according to the spatial and temporal distribution of precipitation, the time variance of precipitation intensity and other hydrological factors. Based on the classified results, a rough set is used to extract the identification rules for real-time flood forecasting. Then, the parameters of different categories within the conceptual hydrological model are calibrated using a genetic algorithm. In real-time forecasting, the corresponding category of parameters is selected for flood forecasting according to the obtained flood information. This research tests the new classified framework on Guanyinge Reservoir and compares the framework with the traditional flood forecasting method. It finds that the performance of the new classified framework is significantly better in terms of accuracy. Furthermore, the framework can be considered in a catchment with fewer historical floods.
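The K-means step of the framework can be sketched generically; the two-feature flood descriptors below (e.g. precipitation volume and peak intensity) are toy data, not the study's:

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Plain K-means: alternate nearest-centroid assignment and centroid update."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        # Assign each flood event to its nearest centroid
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        # Move each centroid to the mean of its members
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers

# Toy feature vectors: two well-separated flood categories
rng = np.random.default_rng(3)
X = np.vstack([rng.normal([10, 1], 0.5, (30, 2)),
               rng.normal([40, 5], 0.5, (30, 2))])
labels, centers = kmeans(X, k=2)
```

In the paper's framework, each resulting cluster would get its own calibrated parameter set, and an incoming flood would be matched to a cluster (via the rough-set rules) before forecasting.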

  15. HPC in Basin Modeling: Simulating Mechanical Compaction through Vertical Effective Stress using Level Sets

    Science.gov (United States)

    McGovern, S.; Kollet, S. J.; Buerger, C. M.; Schwede, R. L.; Podlaha, O. G.

    2017-12-01

    In the context of sedimentary basins, we present a model for the simulation of the movement of a geological formation (layers) during the evolution of the basin through sedimentation and compaction processes. Assuming a single-phase saturated porous medium for the sedimentary layers, the model focuses on the tracking of the layer interfaces, through the use of the level set method, as sedimentation drives fluid flow and reduction of pore space by compaction. On the assumption of Terzaghi's effective stress concept, the coupling of the pore fluid pressure to the motion of interfaces in 1-D is presented in McGovern, et.al (2017) [1]. The current work extends the spatial domain to 3-D, though we maintain the assumption of vertical effective stress to drive the compaction. The idealized geological evolution is conceptualized as the motion of interfaces between rock layers, whose paths are determined by the magnitude of a speed function in the direction normal to the evolving layer interface. The speeds normal to the interface are dependent on the change in porosity, determined through an effective stress-based compaction law, such as the exponential Athy's law. Provided with the speeds normal to the interface, the level set method uses an advection equation to evolve a potential function, whose zero level set defines the interface. Thus, the moving layer geometry influences the pore pressure distribution, which couples back to the interface speeds. The flexible construction of the speed function allows extension, in the future, to other terms representing different physical processes, analogous to how the compaction rule represents material deformation. The 3-D model is implemented using the generic finite element method framework deal.II, which provides tools, building on p4est and interfacing to PETSc, for the massively parallel distributed solution of the model equations [2]. Experiments are being run on the Juelich Supercomputing Centre's JURECA cluster. [1] McGovern, et.al. (2017
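The exponential Athy-type compaction law mentioned above can be written as φ(σ') = φ₀ exp(−c σ'), with porosity declining as vertical effective stress grows. The coefficient values below are illustrative only, not taken from the abstract:

```python
import math

def athy_porosity(phi0, c, sigma_eff):
    """Exponential Athy-type compaction law: porosity phi declines
    exponentially with vertical effective stress sigma_eff (Pa)."""
    return phi0 * math.exp(-c * sigma_eff)

# Illustrative values: depositional porosity 0.6, compaction
# coefficient 3e-8 1/Pa, effective stress 10 MPa
phi = athy_porosity(phi0=0.6, c=3e-8, sigma_eff=10e6)
```

For these values, φ = 0.6 · exp(−0.3) ≈ 0.44; in the model, the rate of porosity change implied by this law would set the interface speed handed to the level set advection step.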

  16. Mean lives of the 5.106 and 5.834 MeV levels of 14N

    International Nuclear Information System (INIS)

    Bhalla, R.K.; Poletti, A.R.

    1982-01-01

    The recoil distance method (RDM) has been used to measure the mean lives of the 5.106 and 5.834 MeV levels of 14N as τ = 6.27 ± 0.10 ps and τ = 11.88 ± 0.24 ps, respectively. The results are compared to previous measurements and to shell-model calculations. (orig.)

  17. Classification of Normal and Apoptotic Cells from Fluorescence Microscopy Images Using Generalized Polynomial Chaos and Level Set Function.

    Science.gov (United States)

    Du, Yuncheng; Budman, Hector M; Duever, Thomas A

    2016-06-01

    Accurate automated quantitative analysis of living cells based on fluorescence microscopy images can be very useful for fast evaluation of experimental outcomes and cell culture protocols. In this work, an algorithm is developed for fast differentiation of normal and apoptotic viable Chinese hamster ovary (CHO) cells. For effective segmentation of cell images, a stochastic segmentation algorithm is developed by combining a generalized polynomial chaos expansion with a level set function-based segmentation algorithm. This approach provides a probabilistic description of the segmented cellular regions along the boundary, from which it is possible to calculate morphological changes related to apoptosis, i.e., the curvature and length of a cell's boundary. These features are then used as inputs to a support vector machine (SVM) classifier that is trained to distinguish between normal and apoptotic viable states of CHO cell images. The use of morphological features obtained from the stochastic level set segmentation of cell images, in combination with the trained SVM classifier, yields better differentiation accuracy than the original deterministic level set method.
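The boundary-derived features used by the SVM (contour length and curvature) can be approximated from an ordered polygonal contour. This is a generic sketch, not the authors' algorithm; it checks itself on a circle of radius 10, whose curvature is 1/radius everywhere:

```python
import numpy as np

def contour_features(points):
    """Boundary length and mean absolute curvature of a closed contour
    given as an (n, 2) array of ordered boundary points."""
    seg = np.roll(points, -1, axis=0) - points        # edge vectors of the polygon
    ds = np.linalg.norm(seg, axis=1)                  # edge lengths
    length = ds.sum()
    # Turning angle at each vertex approximates curvature * ds
    ang = np.arctan2(seg[:, 1], seg[:, 0])
    turn = np.angle(np.exp(1j * (np.roll(ang, -1) - ang)))  # wrap to (-pi, pi]
    curvature = np.abs(turn) / ds
    return length, curvature.mean()

# Circle of radius 10: length ~ 2*pi*10, curvature ~ 1/10 everywhere
theta = np.linspace(0, 2 * np.pi, 200, endpoint=False)
circle = np.column_stack([10 * np.cos(theta), 10 * np.sin(theta)])
length, mean_curv = contour_features(circle)
```

Features like these, computed over the probabilistic boundary samples produced by the stochastic segmentation, would then form the SVM's input vector.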

  18. Home advantage in high-level volleyball varies according to set number.

    Science.gov (United States)

    Marcelino, Rui; Mesquita, Isabel; Palao Andrés, José Manuel; Sampaio, Jaime

    2009-01-01

    The aim of the present study was to identify the probability of winning each volleyball set according to game location (home, away). Archival data were obtained from 275 sets in the 2005 Men's Senior World League, and 65,949 actions were analysed. Set result (win, loss), game location (home, away), set number (first, second, third, fourth, fifth) and performance indicators (serve, reception, set, attack, dig and block) were the variables considered in this study. First, performance indicators were entered into a logistic model of set result by binary logistic regression analysis. After finding the adjusted logistic model, the log-odds of winning the set were analysed according to game location and set number. The results showed that winning a set is significantly related to performance indicators (Chi-square(18) = 660.97) and that home teams have an advantage at the beginning of the game (first set) and in the two last sets of the game (fourth and fifth sets), probably due to facility familiarity and crowd effects. Different game actions explain these advantages and showed that to win the first set it is more important to take risks, through better performance in the attack and block, while to win the final set it is important to manage risk through better performance in reception. These results suggest intra-game variation in home advantage and can be most useful to better prepare and direct the competition. Key points: Home teams always have a higher probability of winning the game than away teams. Home teams have higher performance in reception, set and attack over the total of the sets. The advantage of home teams is more pronounced at the beginning of the game (first set) and in the two last sets of the game (fourth and fifth sets), suggesting intra-game variation in home advantage. Analysis by sets showed that home teams have better performance in the attack and block in the first set and in the reception in the third and fifth sets.

  19. Revisiting global mean sea level budget closure: preliminary results from an integrative study within ESA's Climate Change Initiative Sea Level Budget Closure project (SLBC-CCI)

    Science.gov (United States)

    Palanisamy, H.; Cazenave, A. A.

    2017-12-01

    The global mean sea level budget is revisited over two time periods: the entire altimetry era, 1993-2015, and the Argo/GRACE era, 2003-2015, using version '0' of the sea level components estimated by the SLBC-CCI teams. The SLBC-CCI is a European Space Agency project on sea level budget closure using CCI products. Over the entire altimetry era, the sea level budget was computed as the sum of steric and mass components, the latter including contributions from total land water storage, glaciers, ice sheets (Greenland and Antarctica) and total water vapor content. Over the Argo/GRACE era, it was computed as the sum of steric and GRACE-based ocean mass. Preliminary budget analysis over the altimetry era (1993-2015) yields a trend of 2.83 mm/yr. Compared with the observed altimetry-based global mean sea level trend over the same period (3.03 ± 0.5 mm/yr), this leaves a residual of 0.2 mm/yr. In spite of this residual, the sea level budget result over the altimetry era is very promising, as it has been obtained using version '0' of the sea level components. Furthermore, uncertainties are not yet included in this study, as uncertainty estimation for each sea level component is currently under way. Over the Argo/GRACE era (2003-2015), the trend estimated from the sum of steric and GRACE ocean mass amounts to 2.63 mm/yr, while that observed by satellite altimetry is 3.37 mm/yr, leaving a residual of 0.7 mm/yr. Here an ensemble GRACE ocean mass estimate (the mean of the various available GRACE ocean mass products) was used; using individual GRACE products instead yields residuals ranging from 0.5 mm/yr to 1.1 mm/yr. Investigations are under way to determine the cause of the difference between the observed sea level and the sum of steric and GRACE ocean mass. One main suspect is the impact of GRACE data gaps on the budget analysis, due to the lack of GRACE data over several months since 2011.
The current action plan
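The budget-closure arithmetic described in the abstract is simple subtraction of trends; a minimal sketch (trend values are taken from the abstract, the function name is ours):

```python
def budget_residual(observed_trend, component_trends):
    """Residual = observed GMSL trend minus the sum of budget components (mm/yr)."""
    return observed_trend - sum(component_trends)

# Altimetry era (1993-2015): components sum to 2.83 mm/yr, observed 3.03 mm/yr
print(round(budget_residual(3.03, [2.83]), 2))  # 0.2
# Argo/GRACE era (2003-2015): steric + GRACE ocean mass = 2.63, observed 3.37
print(round(budget_residual(3.37, [2.63]), 2))  # 0.74
```

The abstract reports the second residual rounded to 0.7 mm/yr.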

  20. Multi-domain, higher order level set scheme for 3D image segmentation on the GPU

    DEFF Research Database (Denmark)

    Sharma, Ojaswa; Zhang, Qin; Anton, François

    2010-01-01

    to evaluate level set surfaces that are $C^2$ continuous, but are slow due to high computational burden. In this paper, we provide a higher order GPU based solver for fast and efficient segmentation of large volumetric images. We also extend the higher order method to multi-domain segmentation. Our streaming...

  1. Means and extremes: building variability into community-level climate change experiments.

    Science.gov (United States)

    Thompson, Ross M; Beardall, John; Beringer, Jason; Grace, Mike; Sardina, Paula

    2013-06-01

Experimental studies assessing climatic effects on ecological communities have typically applied static warming treatments. Although these studies have been informative, they have usually failed to incorporate either current or predicted future patterns of variability. Future climates are likely to include extreme events, which have greater impacts on ecological systems than changes in means alone. Here, we review the studies which have used experiments to assess impacts of temperature on marine, freshwater and terrestrial communities, and classify them into a set of 'generations' based on how they incorporate variability. The majority of studies have failed to incorporate extreme events. In terrestrial ecosystems in particular, experimental treatments have reduced temperature variability, when most climate models predict increased variability. Marine studies have tended not to concentrate on changes in variability, likely in part because the thermal mass of oceans will moderate variation. In freshwaters, climate change experiments have a much shorter history than in the other ecosystems, and have tended to take a relatively simple approach. We propose a new 'generation' of climate change experiments using down-scaled climate models which incorporate predicted changes in climatic variability, and describe a process for generating data which can be applied as experimental climate change treatments. © 2013 John Wiley & Sons Ltd/CNRS.
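The contrast between static warming treatments and variability-aware ones can be illustrated with a toy sketch (all numbers and function names are illustrative, not from the study; real designs would use down-scaled climate model output):

```python
def static_treatment(baseline, warming):
    """'First-generation' treatment: a constant offset leaves variability unchanged."""
    return [t + warming for t in baseline]

def variable_treatment(baseline, warming, variance_scale):
    """Variability-aware treatment: deviations from the baseline mean are
    rescaled to mimic the increased variability that climate models predict."""
    mean = sum(baseline) / len(baseline)
    return [mean + variance_scale * (t - mean) + warming for t in baseline]

baseline_temps = [14.2, 15.1, 13.8, 16.0, 14.9]  # hypothetical daily means, deg C
print(static_treatment(baseline_temps, 2.0))
print(variable_treatment(baseline_temps, 2.0, 1.5))
```

Both treatments warm the mean by 2 °C, but only the second widens the temperature range, so extreme days become more extreme.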

  2. Efficient Cardinality/Mean-Variance Portfolios

    OpenAIRE

    Brito, R. Pedro; Vicente, Luís Nunes

    2014-01-01

    International audience; We propose a novel approach to handle cardinality in portfolio selection, by means of a biobjective cardinality/mean-variance problem, allowing the investor to analyze the efficient tradeoff between return-risk and number of active positions. Recent progress in multiobjective optimization without derivatives allow us to robustly compute (in-sample) the whole cardinality/mean-variance efficient frontier, for a variety of data sets and mean-variance models. Our results s...

  3. Integrating mean and variance heterogeneities to identify differentially expressed genes.

    Science.gov (United States)

    Ouyang, Weiwei; An, Qiang; Zhao, Jinying; Qin, Huaizhen

    2016-12-06

In functional genomics studies, tests on mean heterogeneity have been widely employed to identify differentially expressed genes with distinct mean expression levels under different experimental conditions. Variance heterogeneity (aka, the difference between condition-specific variances) of gene expression levels is simply neglected or calibrated for as an impediment. The mean heterogeneity in the expression level of a gene reflects one aspect of its distribution alteration; variance heterogeneity induced by condition change may reflect another aspect. A change in condition may alter both the mean and some higher-order characteristics of the distributions of expression levels of susceptible genes. In this report, we put forth the concept of mean-variance differentially expressed (MVDE) genes, whose expression means and variances are sensitive to a change in experimental condition. We mathematically proved the null independence of existent mean heterogeneity tests and variance heterogeneity tests. Based on this independence, we proposed an integrative mean-variance test (IMVT) to combine gene-wise mean heterogeneity and variance heterogeneity induced by condition change. The IMVT outperformed its competitors under comprehensive simulations of normality and Laplace settings. For moderate samples, the IMVT well controlled type I error rates, as did the existent mean heterogeneity tests (i.e., the Welch t test (WT) and the moderated Welch t test (MWT)) and the procedure of separate tests on mean and variance heterogeneities (SMVT), but the likelihood ratio test (LRT) severely inflated type I error rates. In the presence of variance heterogeneity, the IMVT appeared noticeably more powerful than all the valid mean heterogeneity tests. Application to the gene profiles of peripheral circulating B raised solid evidence of informative variance heterogeneity. After adjusting for background data structure, the IMVT replicated previous discoveries and identified novel experiment
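The null independence of the two tests is what makes a combined test straightforward. The abstract does not give the IMVT's exact form; Fisher's method is sketched here only to illustrate why independence matters (independent p-values can be combined with a known null distribution):

```python
import math

def fisher_combine(p_mean, p_var):
    """Combine independent p-values from a mean-heterogeneity test and a
    variance-heterogeneity test. Fisher's statistic -2*sum(ln p) is
    chi-square with 4 df; for df=4 the survival function has the closed
    form exp(-x/2) * (1 + x/2)."""
    x = -2.0 * (math.log(p_mean) + math.log(p_var))
    return math.exp(-x / 2.0) * (1.0 + x / 2.0)

# Two individually borderline signals combine into a stronger one
print(fisher_combine(0.04, 0.03))
```

The combined p-value (about 0.009) is smaller than either input, reflecting the extra evidence carried by the variance channel.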

  4. Assessment of EGM2008 using GPS/levelling and free-air gravity ...

    African Journals Online (AJOL)

    ge

    gravity data over the oceans and point terrestrial gravity data. Details of the above data sets ... town of Mombasa in 1931 and the mean sea level determined using tidal data recorded for a period of one year (Aseno, 1995). ... The datum for height in Kenya is the mean sea level referred to a tide gauge at. Kilindini Harbour in ...

  5. A multilevel, level-set method for optimizing eigenvalues in shape design problems

    International Nuclear Information System (INIS)

    Haber, E.

    2004-01-01

In this paper, we consider optimal design problems that involve shape optimization. The goal is to determine the shape of a certain structure such that it is either as rigid or as soft as possible. To achieve this goal we combine two new ideas for an efficient solution of the problem. First, we replace the eigenvalue problem with an approximation by using inverse iteration. Second, we use a level set method, but rather than propagating the front we use constrained optimization methods combined with multilevel continuation techniques. Combining these two ideas we obtain a robust and rapid method for the solution of the optimal design problem.
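The inverse-iteration idea mentioned in the abstract can be shown in its most basic form (a toy sketch with a small dense solver, not the paper's multilevel method):

```python
def solve(A, b):
    """Gaussian elimination with partial pivoting for small dense systems."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def inverse_iteration(A, iters=50):
    """Approximate the smallest-magnitude eigenvalue of A: repeatedly solve
    A x_new = x and renormalize, then return the Rayleigh quotient."""
    n = len(A)
    x = [float(i + 1) for i in range(n)]  # generic start vector
    for _ in range(iters):
        x = solve(A, x)
        norm = max(abs(v) for v in x)
        x = [v / norm for v in x]
    Ax = [sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
    return sum(Ax[i] * x[i] for i in range(n)) / sum(v * v for v in x)

A = [[2.0, 1.0], [1.0, 2.0]]  # eigenvalues are 1 and 3
print(inverse_iteration(A))   # converges to the smallest eigenvalue, 1
```

Each solve amplifies the eigenvector of the smallest eigenvalue, which is why the paper can avoid a full eigenvalue computation inside the design loop.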

  6. Comparison of different statistical methods for estimation of extreme sea levels with wave set-up contribution

    Science.gov (United States)

    Kergadallan, Xavier; Bernardara, Pietro; Benoit, Michel; Andreewsky, Marc; Weiss, Jérôme

    2013-04-01

Estimating the probability of occurrence of extreme sea levels is a central issue for the protection of the coast. Return periods of sea level with wave set-up contribution are estimated here for one site: Cherbourg, France, in the English Channel. The methodology follows two steps: the first is the computation of the joint probability of simultaneous wave height and still sea level; the second is the interpretation of those joint probabilities to assess the sea level for a given return period. Two different approaches were evaluated to compute the joint probability of simultaneous wave height and still sea level: the first is a multivariate extreme value distribution of logistic type, in which all components of the variables become large simultaneously; the second is a conditional approach for multivariate extreme values, in which only one component of the variables has to be large. Two different methods were applied to estimate the sea level with wave set-up contribution for a given return period: Monte Carlo simulation, in which the estimation is more accurate but needs more calculation time, and classical ocean engineering design contours of inverse-FORM type, in which the method is simpler and allows more complex estimation of the wave set-up part (wave propagation to the coast, for example). We compare results from the two approaches combined with the two methods. To be able to use both the Monte Carlo simulation and the design contours methods, wave set-up is estimated with a simple empirical formula. We show the advantages of the conditional approach compared to the multivariate extreme value approach when extreme sea levels occur when either surge or wave height is large. We discuss the validity of the ocean engineering design contours method, which is an alternative when the computation of sea levels is too complex to use the Monte Carlo simulation method.
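A toy sketch of the Monte Carlo route described above, with made-up distributions and a 20%-of-Hs wave set-up rule assumed purely for illustration (the study uses a fitted joint distribution and a site-specific empirical formula):

```python
import random

def monte_carlo_return_level(return_period_years, n_years=100000, seed=42):
    """Simulate annual maxima of still water level plus an empirical wave
    set-up term (here 0.2 * Hs), then read the return level off the
    empirical quantile. All distributions below are illustrative only."""
    rng = random.Random(seed)
    annual_max = []
    for _ in range(n_years):
        still_level = 4.0 + rng.expovariate(1.0 / 0.15)  # m, toy surge tail
        hs = rng.weibullvariate(2.5, 1.8)                # m, toy wave climate
        annual_max.append(still_level + 0.2 * hs)
    annual_max.sort()
    rank = int(len(annual_max) * (1.0 - 1.0 / return_period_years))
    return annual_max[rank]

print(monte_carlo_return_level(100.0))  # 100-year level, metres
```

Longer return periods sit further out in the simulated tail, which is where the accuracy-versus-cost trade-off against design contours appears.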

  7. Education level inequalities and transportation injury mortality in the middle aged and elderly in European settings

    NARCIS (Netherlands)

    Borrell, C.; Plasència, A.; Huisman, M.; Costa, G.; Kunst, A.; Andersen, O.; Bopp, M.; Borgan, J.-K.; Deboosere, P.; Glickman, M.; Gadeyne, S.; Minder, C.; Regidor, E.; Spadea, T.; Valkonen, T.; Mackenbach, J. P.

    2005-01-01

    OBJECTIVE: To study the differential distribution of transportation injury mortality by educational level in nine European settings, among people older than 30 years, during the 1990s. METHODS: Deaths of men and women older than 30 years from transportation injuries were studied. Rate differences

  8. Clustering for Binary Data Sets by Using Genetic Algorithm-Incremental K-means

    Science.gov (United States)

    Saharan, S.; Baragona, R.; Nor, M. E.; Salleh, R. M.; Asrah, N. M.

    2018-04-01

This research was initially driven by the lack of clustering algorithms that specifically focus on binary data. To overcome this gap in knowledge, a promising technique for analysing this type of data became the main subject of this research, namely Genetic Algorithms (GA). For the purpose of this research, GA was combined with the Incremental K-means (IKM) algorithm to cluster binary data streams. In the combined algorithm, GAIKM, the objective function is based on a few sufficient statistics that may be easily and quickly calculated on binary numbers. The implementation of IKM gives an advantage in terms of fast convergence. The results show that GAIKM is an efficient and effective new clustering algorithm compared to existing clustering algorithms and to IKM itself. In conclusion, GAIKM outperformed other clustering algorithms such as GCUK, IKM, Scalable K-means (SKM) and K-means clustering, and paves the way for future research involving missing data and outliers.
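The K-means component can be sketched as plain k-means on binary vectors (the GA layer that searches over initial centres, and the incremental/streaming machinery, are omitted; data and names are illustrative). On 0/1 data, squared Euclidean distance reduces to Hamming distance and the cluster means are bit frequencies, which is why cheap sufficient statistics suffice:

```python
def kmeans_binary(data, k, iters=20):
    """Lloyd-style k-means on binary vectors; centres are bit frequencies."""
    centres = [list(map(float, data[i])) for i in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for x in data:
            j = min(range(k), key=lambda c: sum((xi - ci) ** 2
                                                for xi, ci in zip(x, centres[c])))
            clusters[j].append(x)
        for c, members in enumerate(clusters):
            if members:
                centres[c] = [sum(col) / len(members) for col in zip(*members)]
    return centres

data = [[1, 1, 0, 0], [0, 0, 1, 1], [1, 0, 0, 0], [0, 1, 1, 1]]
print(kmeans_binary(data, 2))
```

Results depend strongly on the initial centres, which is precisely the weakness the GA layer in GAIKM is meant to address.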

  9. Microwave imaging of dielectric cylinder using level set method and conjugate gradient algorithm

    International Nuclear Information System (INIS)

    Grayaa, K.; Bouzidi, A.; Aguili, T.

    2011-01-01

In this paper, we propose a computational method for microwave imaging of cylindrical dielectric objects, based on combining the level set technique and the conjugate gradient algorithm. By measuring the scattered field, we try to retrieve the shape, localisation and permittivity of the object. The forward problem is solved by the method of moments, while the inverse problem is reformulated as an optimization problem and solved by the proposed scheme. It is found that the proposed method is able to give good reconstruction quality in terms of the reconstructed shape and permittivity.

  10. Functional Testing and Characterisation of ISFETs on Wafer Level by Means of a Micro-droplet Cell

    Directory of Open Access Journals (Sweden)

    Michael J. Schöning

    2006-04-01

Full Text Available A wafer-level functionality testing and characterisation system for ISFETs (ion-sensitive field-effect transistors) is realised by means of the integration of a specifically designed capillary electrochemical micro-droplet cell into a commercial wafer prober station. The developed system allows the identification and selection of "good" ISFETs at the earliest stage, avoiding expensive bonding, encapsulation and packaging processes for non-functioning ISFETs and thus decreasing the costs wasted on bad dies. The developed system is also suitable for wafer-level characterisation of ISFETs in terms of sensitivity, hysteresis and response time. Additionally, the system might also be utilised for wafer-level testing of further electrochemical sensors.

  11. Records for radioactive waste management up to repository closure: Managing the primary level information (PLI) set

    International Nuclear Information System (INIS)

    2004-07-01

The objective of this publication is to highlight the importance of the early establishment of a comprehensive records system to manage primary level information (PLI) as an integrated set of information, not merely as a collection of information, throughout all the phases of radioactive waste management. In addition to the information described in the waste inventory record keeping system (WIRKS), the PLI of a radioactive waste repository consists of the entire universe of information, data and records related to any aspect of the repository's life cycle. It is essential to establish PLI requirements based on an integrated set of needs from the Regulators and Waste Managers involved in the waste management chain, and to update these requirements as needs change over time. Information flow for radioactive waste management should be back-end driven. Identification of an Authority to oversee the management of PLI throughout all phases of the radioactive waste management life cycle would guarantee the information flow to future generations. The long term protection of information essential to future generations can only be assured by the timely establishment of a comprehensive and effective records management system (RMS) capable of capturing, indexing and evaluating all PLI. The loss of intellectual control over the PLI will make it very difficult to subsequently identify the ILI and HLI information sets. At all times prior to the closure of a radioactive waste repository, there should be an identifiable entity with a legally enforceable financial and management responsibility for the continued operation of a PLI Records Management System. 
The information presented in this publication will assist Member States in ensuring that waste and repository records, relevant for retention after repository closure

  12. MO-AB-BRA-01: A Global Level Set Based Formulation for Volumetric Modulated Arc Therapy

    Energy Technology Data Exchange (ETDEWEB)

    Nguyen, D; Lyu, Q; Ruan, D; O’Connor, D; Low, D; Sheng, K [Department of Radiation Oncology, University of California Los Angeles, Los Angeles, CA (United States)

    2016-06-15

    Purpose: The current clinical Volumetric Modulated Arc Therapy (VMAT) optimization is formulated as a non-convex problem and various greedy heuristics have been employed for an empirical solution, jeopardizing plan consistency and quality. We introduce a novel global direct aperture optimization method for VMAT to overcome these limitations. Methods: The global VMAT (gVMAT) planning was formulated as an optimization problem with an L2-norm fidelity term and an anisotropic total variation term. A level set function was used to describe the aperture shapes and adjacent aperture shapes were penalized to control MLC motion range. An alternating optimization strategy was implemented to solve the fluence intensity and aperture shapes simultaneously. Single arc gVMAT plans, utilizing 180 beams with 2° angular resolution, were generated for a glioblastoma multiforme (GBM), lung (LNG), and 2 head and neck cases—one with 3 PTVs (H&N3PTV) and one with 4 PTVs (H&N4PTV). The plans were compared against the clinical VMAT (cVMAT) plans utilizing two overlapping coplanar arcs. Results: The optimization of the gVMAT plans had converged within 600 iterations. gVMAT reduced the average max and mean OAR dose by 6.59% and 7.45% of the prescription dose. Reductions in max dose and mean dose were as high as 14.5 Gy in the LNG case and 15.3 Gy in the H&N3PTV case. PTV coverages (D95, D98, D99) were within 0.25% of the prescription dose. By globally considering all beams, the gVMAT optimizer allowed some beams to deliver higher intensities, yielding a dose distribution that resembles a static beam IMRT plan with beam orientation optimization. Conclusions: The novel VMAT approach allows for the search of an optimal plan in the global solution space and generates deliverable apertures directly. The single arc VMAT approach fully utilizes the digital linacs’ capability in dose rate and gantry rotation speed modulation. Varian Medical Systems, NIH grant R01CA188300, NIH grant R43CA183390.

  13. Nurses' comfort level with spiritual assessment: a study among nurses working in diverse healthcare settings.

    Science.gov (United States)

    Cone, Pamela H; Giske, Tove

    2017-10-01

To gain knowledge about nurses' comfort level in assessing spiritual matters and to learn what questions nurses use in practice related to spiritual assessment. Spirituality is important in holistic nursing care; however, nurses report feeling uncomfortable and ill-prepared to address this domain with patients. Education is reported to impact nurses' ability to engage in spiritual care. This cross-sectional exploratory survey reports on a mixed-method study examining how comfortable nurses are with spiritual assessment. In 2014, a 21-item survey with 10 demographic variables and three open-ended questions was distributed to Norwegian nurses working in diverse care settings, with 172 nurse responses (72% response rate). SPSS was used to analyse the quantitative data; thematic analysis examined the open-ended questions. Norwegian nurses reported a high level of comfort with most questions, even though spirituality is seen as private. Nurses with some preparation or experience in spiritual care were most comfortable assessing spirituality. Statistically significant correlations were found between the nurses' comfort level with spiritual assessment and their preparedness and sense of the importance of spiritual assessment. How well-prepared nurses felt was related to years of experience, degree of spirituality and religiosity, and importance of spiritual assessment. Many nurses are poorly prepared for spiritual assessment and care among patients in diverse care settings; educational preparation increases their comfort level with facilitating such care. Nurses who feel well prepared with spirituality feel more comfortable with the spiritual domain. By fostering a culture where patients' spirituality is discussed and reflected upon in everyday practice and in continued education, nurses' sense of preparedness, and thus their level of comfort, can increase. Clinical supervision and interprofessional collaboration with hospital chaplains and/or other spiritual leaders can

  14. Efficient privacy preserving K-means clustering in a three-party setting

    NARCIS (Netherlands)

    Beye, Michael; Erkin, Zekeriya; Erkin, Zekeriya; Lagendijk, Reginald L.

    2011-01-01

    User clustering is a common operation in online social networks, for example to recommend new friends. In previous work [5], Erkin et al. proposed a privacy-preserving K-means clustering algorithm for the semi-honest model, using homomorphic encryption and multi-party computation. This paper makes

  15. Relationship between salivary levels of cortisol and dehydroepiandrosterone levels in saliva and chronic periodontitis

    Directory of Open Access Journals (Sweden)

    Siddhi Mudrika

    2014-01-01

Full Text Available Aim: The aim was to investigate the association between cortisol and dehydroepiandrosterone (DHEA) levels in patients with periodontitis and healthy controls. Materials and Methods: Cortisol and DHEA levels in saliva were determined in 20 subjects; clinical examinations including the oral hygiene index, sulcus bleeding index (Muhlemann and Son) and probing depth were also performed. Statistical Analysis: Data were analyzed with the SPSS software package (version 7.0), and the significance level was set at a 95% confidence interval. The Mann-Whitney test and t-test were used to compare the groups. Results: For cortisol and DHEA, the mean and standard deviation in the periodontitis group were 2.6 ± 0.37 and 66.7 ± 8.7, respectively. Conclusion: This shows an increase in the mean values of cortisol and DHEA in the periodontitis group compared with the control group. Salivary cortisol and DHEA levels were found to increase in concordance with disease severity. This was statistically significant, with P < 0.001.
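The Mann-Whitney comparison used in the study can be sketched via the pairwise definition of the U statistic (the data values below are made up, not the study's):

```python
def mann_whitney_u(group_a, group_b):
    """U statistic via pairwise comparisons; ties contribute 0.5 each."""
    u = 0.0
    for a in group_a:
        for b in group_b:
            if a > b:
                u += 1.0
            elif a == b:
                u += 0.5
    return u

# Illustrative salivary cortisol values, NOT the study's data
periodontitis = [2.4, 2.6, 2.9, 2.7, 2.5]
controls = [1.8, 2.0, 1.9, 2.2, 2.1]
print(mann_whitney_u(periodontitis, controls))  # 25.0 = complete separation
```

With 5 subjects per group the maximum U is 25, so every periodontitis value exceeding every control value gives the most extreme statistic possible.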

  16. Prediction of financial crises by means of rough sets and decision trees

    Directory of Open Access Journals (Sweden)

    Zuleyka Díaz-Martínez

    2011-03-01

Full Text Available This paper investigates the factors behind financial crises. Using a large sample of countries over the period 1981 to 1999, it applies two methods from Artificial Intelligence (Rough Sets theory and the C4.5 algorithm) to analyze the role of a set of macroeconomic and financial variables, both quantitative and qualitative, in explaining banking crises. These methods do not require the variables or data to satisfy any statistical assumptions, whereas the statistical methods traditionally employed require the explanatory variables to satisfy assumptions that are rarely met in practice, which complicates the analysis. We obtained good results based on the classification accuracies (80% of correctly classified countries from an independent sample), which demonstrates the suitability of both methods.
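C4.5 chooses splits by gain ratio (information gain normalized by split information); a minimal sketch of that criterion on made-up crisis data (the variable name and values are illustrative, not the paper's):

```python
import math

def entropy(labels):
    n = len(labels)
    return -sum((c / n) * math.log2(c / n)
                for c in (labels.count(v) for v in set(labels)))

def gain_ratio(feature, labels):
    """C4.5 splitting criterion: information gain divided by split info."""
    n = len(labels)
    gain, split_info = entropy(labels), 0.0
    for v in set(feature):
        subset = [lab for f, lab in zip(feature, labels) if f == v]
        gain -= (len(subset) / n) * entropy(subset)
        split_info -= (len(subset) / n) * math.log2(len(subset) / n)
    return gain / split_info if split_info else 0.0

# Toy data: does 'credit growth' separate crisis from no-crisis countries?
credit_growth = ['high', 'high', 'low', 'low', 'high', 'low']
crisis        = [1, 1, 0, 0, 1, 0]
print(gain_ratio(credit_growth, crisis))  # 1.0 = perfect split
```

Normalizing by split information is what stops C4.5 from favouring many-valued qualitative variables, which matters here because the study mixes qualitative and quantitative predictors.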

  17. Fluoroscopy in paediatric fractures - Setting a local diagnostic reference level

    International Nuclear Information System (INIS)

    Pillai, A.; McAuley, A.; McMurray, K.; Jain, M.

    2006-01-01

Background: The Ionising Radiation (Medical Exposure) Regulations 2000 have made it mandatory to establish diagnostic reference levels (DRLs) for all typical radiological examinations. Objectives: We attempt to provide dose data for some common fluoroscopic procedures used in orthopaedic trauma that may be used as the basis for setting DRLs for paediatric patients. Materials and methods: The dose area product (DAP) in 865 paediatric trauma examinations was analysed. Median DAP values and screening times for each procedure type, along with quartile values for each range, are presented. Results: In the upper limb, elbow examinations had the maximum exposure, with a median DAP value of 1.21 cGy·cm². Median DAP values for forearm and wrist examinations were 0.708 and 0.538 cGy·cm², respectively. In the lower limb, tibia and fibula examinations had a median DAP value of 3.23 cGy·cm², followed by ankle examinations with a median DAP of 3.10 cGy·cm². The rounded third quartile DAP value for each distribution can be used as a provisional DRL for the specific procedure type. (authors)
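The "rounded third quartile as provisional DRL" rule can be computed directly from a list of DAP readings (the values below are made up for illustration, not the study's):

```python
import statistics

def provisional_drl(dap_values):
    """Third-quartile DAP, rounded, as the abstract proposes for a local DRL."""
    q1, q2, q3 = statistics.quantiles(dap_values, n=4)
    return round(q3, 2)

# Illustrative DAP readings (cGy*cm^2) for one procedure type
elbow_daps = [0.8, 1.0, 1.1, 1.2, 1.21, 1.3, 1.5, 1.8, 2.0, 2.4]
print(provisional_drl(elbow_daps))  # 1.85
```

Using the 75th percentile rather than the median gives doses room for normal clinical variation while still flagging consistently high-dose practice.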

  18. Three-Dimensional Simulation of DRIE Process Based on the Narrow Band Level Set and Monte Carlo Method

    Directory of Open Access Journals (Sweden)

    Jia-Cheng Yu

    2018-02-01

Full Text Available A three-dimensional topography simulation of deep reactive ion etching (DRIE) is developed based on the narrow band level set method for surface evolution and the Monte Carlo method for flux distribution. The advanced level set method is implemented to simulate the time-dependent movement of the etched surface. Meanwhile, accelerated by a ray tracing algorithm, the Monte Carlo method incorporates all dominant physical and chemical mechanisms such as ion-enhanced etching, ballistic transport, ion scattering, and sidewall passivation. Modified models of charged and neutral particles are used to determine their contributions to the etching rate. Effects such as the scalloping effect and the lag effect are investigated in simulations and experiments. In addition, quantitative analyses are conducted to measure the simulation error. Finally, this simulator will serve as an accurate prediction tool for some MEMS fabrications.

  19. Clinical meanings of determination of serum β-HCG levels in the management of drug abortion for family planning

    International Nuclear Information System (INIS)

    Wang Xianping; Zou Huifeng; Jiang Juping

    2005-01-01

Objective: To study the clinical meanings of determination of serum β-HCG levels in the management of drug abortion for birth control. Methods: Serum β-HCG levels were determined with CLIA in 254 pregnant women asking for drug abortion and 102 women with ectopic gestation. Results: The serum β-HCG levels in women with normal pregnancy were significantly higher than those in women with ectopic gestation (P < 0.05). Conclusion: For drug abortion in those pregnant women with amenorrhea over 45 days and serum β-HCG levels over 50000 mIU/ml, a larger dose of misoprostol may be desirable. (authors)

  20. County-Level Poverty Is Equally Associated with Unmet Health Care Needs in Rural and Urban Settings

    Science.gov (United States)

    Peterson, Lars E.; Litaker, David G.

    2010-01-01

    Context: Regional poverty is associated with reduced access to health care. Whether this relationship is equally strong in both rural and urban settings or is affected by the contextual and individual-level characteristics that distinguish these areas, is unclear. Purpose: Compare the association between regional poverty with self-reported unmet…

  1. Why is mean sea level along the Indian coast higher in the Bay of Bengal than in the Arabian Sea?

    Digital Repository Service at National Institute of Oceanography (India)

    Shankar, D.; Shetye, S.R.

    Levelling observations conducted during the Great Trigonometrical Survey of India (1858-1909) and subsequent observations showed that mean sea level along the coast of India is higher in the Bay of Bengal than in the Arabian Sea, the difference...

  2. Leveling the field: The role of training, safety programs, and knowledge management systems in fostering inclusive field settings

    Science.gov (United States)

    Starkweather, S.; Crain, R.; Derry, K. R.

    2017-12-01

Knowledge is empowering in all settings, but plays an elevated role in empowering under-represented groups in field research. Field research, particularly polar field research, has deep roots in masculinized and colonial traditions, which can lead to high barriers for women and minorities (e.g. Carey et al., 2016). While recruitment of underrepresented groups into polar field research has improved through the efforts of organizations like the Association of Polar Early Career Scientists (APECS), the experiences and successes of these participants are often contingent on the availability of specialized training opportunities or the quality of explicitly documented information about how to survive Arctic conditions or how to establish successful measurement protocols in harsh environments. In Arctic field research, knowledge is often not explicitly documented or conveyed, but learned through "experience" or informally through ad hoc advice. The advancement of field training programs and knowledge management systems suggests two means for unleashing more explicit forms of knowledge about field work. Examples will be presented along with a case for how they level the playing field and improve the experience of field work for all participants.

  3. Comparing of goal setting strategy with group education method to increase physical activity level: A randomized trial.

    Science.gov (United States)

    Jiryaee, Nasrin; Siadat, Zahra Dana; Zamani, Ahmadreza; Taleban, Roya

    2015-10-01

Designing an intervention to increase physical activity should be based on the health care setting's resources and be acceptable to the subject group. This study was designed to assess and compare the effect of a goal-setting strategy with a group education method on increasing the physical activity of mothers of children aged 1 to 5. Mothers who had at least one child of 1-5 years were randomized into two groups. The effects of 1) the goal-setting strategy and 2) the group education method on increasing physical activity were assessed and compared 1 month and 3 months after the intervention. Also, the weight, height, body mass index (BMI), waist and hip circumference, and well-being were compared between the two groups before and after the intervention. Physical activity level increased significantly after the intervention in the goal-setting group, and it was significantly different between the two groups after the intervention (P < 0.05) in favour of the goal-setting group. In the group education method, only the well-being score improved significantly (P < 0.05). Our findings support the goal-setting strategy for boosting physical activity, improving the state of well-being, and decreasing BMI, waist, and hip circumference.

  4. A topology optimization method based on the level set method for the design of negative permeability dielectric metamaterials

    DEFF Research Database (Denmark)

    Otomori, Masaki; Yamada, Takayuki; Izui, Kazuhiro

    2012-01-01

    This paper presents a level set-based topology optimization method for the design of negative permeability dielectric metamaterials. Metamaterials are artificial materials that display extraordinary physical properties that are unavailable with natural materials. The aim of the formulated...... optimization problem is to find optimized layouts of a dielectric material that achieve negative permeability. The presence of grayscale areas in the optimized configurations critically affects the performance of metamaterials, positively as well as negatively, but configurations that contain grayscale areas...... are highly impractical from an engineering and manufacturing point of view. Therefore, a topology optimization method that can obtain clear optimized configurations is desirable. Here, a level set-based topology optimization method incorporating a fictitious interface energy is applied to a negative...

  5. Acute stroke: automatic perfusion lesion outlining using level sets.

    Science.gov (United States)

    Mouridsen, Kim; Nagenthiraja, Kartheeban; Jónsdóttir, Kristjana Ýr; Ribe, Lars R; Neumann, Anders B; Hjort, Niels; Østergaard, Leif

    2013-11-01

To develop a user-independent algorithm for the delineation of hypoperfused tissue on perfusion-weighted images and evaluate its performance relative to a standard threshold method in simulated data, as well as in acute stroke patients. The study was approved by the local ethics committee, and patients gave written informed consent prior to their inclusion in the study. The algorithm identifies hypoperfused tissue in mean transit time maps by simultaneously minimizing the mean square error between individual and mean perfusion values inside and outside a smooth boundary. In 14 acute stroke patients, volumetric agreement between automated outlines and manual outlines determined in consensus among four neuroradiologists was assessed with Bland-Altman analysis, while spatial agreement was quantified by using lesion overlap relative to mean lesion volume (Dice coefficient). Performance improvement relative to a standard threshold approach was tested with the Wilcoxon signed rank test. The mean difference in lesion volume between automated outlines and manual outlines was -9.0 mL ± 44.5 (standard deviation). The lowest mean volume difference for the threshold approach was -25.8 mL ± 88.2. A significantly higher Dice coefficient was observed with the algorithm (0.71; interquartile range [IQR], 0.42-0.75) compared with the threshold approach (0.50; IQR, 0.27-0.57; P < .001). The corresponding agreement among experts was 0.79 (IQR, 0.69-0.83). The perfusion lesions outlined by the automated algorithm agreed well with those defined manually in consensus by four experts and were superior to those obtained by using the standard threshold approach. This user-independent algorithm may improve the assessment of perfusion images as part of acute stroke treatment. http://radiology.rsna.org/lookup/suppl/doi:10.1148/radiol.13121622/-/DC1. RSNA, 2013
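The Dice coefficient used above for spatial agreement is 2|A∩B| / (|A| + |B|); a minimal sketch on toy flattened masks (the masks are illustrative):

```python
def dice_coefficient(mask_a, mask_b):
    """Spatial overlap between two binary masks: 2|A∩B| / (|A| + |B|)."""
    intersection = sum(a and b for a, b in zip(mask_a, mask_b))
    return 2.0 * intersection / (sum(mask_a) + sum(mask_b))

# Toy flattened lesion masks (1 = hypoperfused voxel)
auto   = [0, 1, 1, 1, 0, 0, 1, 0]
manual = [0, 1, 1, 0, 0, 1, 1, 0]
print(dice_coefficient(auto, manual))  # 0.75
```

A Dice of 0.71 for the algorithm versus 0.79 among experts means the automated outlines overlap the consensus nearly as well as the experts overlap each other.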

  6. New and improved data products from the Permanent Service for Mean Sea Level (PSMSL)

    Science.gov (United States)

    Matthews, Andrew; Bradshaw, Elizabeth; Gordon, Kathy; Hibbert, Angela; Jevrejeva, Svetlana; Rickards, Lesley; Tamisiea, Mark; Williams, Simon

    2015-04-01

    The Permanent Service for Mean Sea Level (PSMSL) is the internationally recognised global sea level data bank for long term sea level change information from tide gauges. Established in 1933, the PSMSL continues to be responsible for the collection, publication, analysis and interpretation of sea level data. The PSMSL operates under the auspices of the International Council for Science (ICSU) and is one of the main data centres for both the International Association for the Physical Sciences of the Oceans (IAPSO) and the International Association of Geodesy (IAG). The PSMSL continues to work closely with other members of the sea level community through the Intergovernmental Oceanographic Commission's Global Sea Level Observing System (GLOSS). Currently, the PSMSL data bank for monthly and annual sea level data holds over 65,000 station-years of data from over 2200 stations. Data from each site are carefully quality controlled and, wherever possible, reduced to a common datum, whose stability is monitored through a network of geodetic benchmarks. Last year, the PSMSL also made available a data bank of measurements taken from in-situ ocean bottom pressure recorders from over 60 locations across the globe. Here, we present an overview of the data available at the PSMSL, and describe some of the ongoing work that aims to provide more information to users of our data. In particular, we describe the ongoing work with the Système d'Observation du Niveau des Eaux Littorales (SONEL) to use measurements from continuous GNSS records located near tide gauges to provide PSMSL data within a geocentric reference frame. We also highlight changes to the method used to present estimated sea level trends to account for seasonal cycles and autocorrelation in the data, and provide an estimate of the error of the trend.
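    Accounting for the seasonal cycle when estimating a tide-gauge trend, as described above, amounts to fitting a linear term plus annual harmonics. A minimal least-squares sketch (the PSMSL method additionally models autocorrelation in the residuals, which is omitted here):

```python
import numpy as np

def sea_level_trend(t_years, h_mm):
    """Fit h = a + b*t + c*sin(2πt) + d*cos(2πt); return the trend b (mm/yr)."""
    t = np.asarray(t_years, float)
    X = np.column_stack([np.ones_like(t), t,
                         np.sin(2 * np.pi * t), np.cos(2 * np.pi * t)])
    coef, *_ = np.linalg.lstsq(X, np.asarray(h_mm, float), rcond=None)
    return coef[1]

# Synthetic 30-year monthly record: 3 mm/yr trend plus a 40 mm annual cycle
t = np.arange(0, 30, 1 / 12)
h = 7000 + 3.0 * t + 40 * np.sin(2 * np.pi * t)
trend = sea_level_trend(t, h)  # recovers 3.0 despite the seasonal signal
```

Fitting the trend without the harmonic terms would bias short records with strong seasonal cycles, which is why the seasonal regressors are included.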

  7. Social Connectedness and Life Satisfaction: Comparing Mean Levels for 2 Undergraduate Samples and Testing for Improvement Based on Brief Counseling

    Science.gov (United States)

    Blau, Gary; DiMino, John; DeMaria, Peter A.; Beverly, Clyde; Chessler, Marcy; Drennan, Rob

    2016-01-01

    Objectives: To compare the mean levels of social connectedness and life satisfaction, analyze their relationship for 2 undergraduate samples, and test for an increase in their means for a brief counseling sample. Participants: Between October 2013 and May 2015, 3 samples were collected: not-in-counseling (NIC; n = 941), initial counseling…

  8. The Two-Level Theory of Verb Meaning: An Approach to Integrating the Semantics of Action with the Mirror Neuron System

    Science.gov (United States)

    Kemmerer, David; Gonzalez-Castillo, Javier

    2010-01-01

    Verbs have two separate levels of meaning. One level reflects the uniqueness of every verb and is called the "root". The other level consists of a more austere representation that is shared by all the verbs in a given class and is called the "event structure template". We explore the following hypotheses about how, with specific reference to the…

  9. Soil data clustering by using K-means and fuzzy K-means algorithm

    Directory of Open Access Journals (Sweden)

    E. Hot

    2016-06-01

    A problem of soil clustering based on the chemical characteristics of soil, and proper visual representation of the obtained results, is analysed in the paper. To that end, K-means and fuzzy K-means algorithms are adapted for soil data clustering. A database of soil characteristics sampled in Montenegro is used for a comparative analysis of the implemented algorithms. The procedure of setting proper values for the control parameters of fuzzy K-means is illustrated on the used database. In addition, the clustering is validated through visualisation. The classified soil data are presented on a static Google map and a dynamic OpenStreetMap.
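    The fuzzy K-means (fuzzy c-means) variant used above can be sketched as follows; the fuzzifier m is the main control parameter the paper tunes, and the two-cluster feature data here are synthetic, not the Montenegro soil database.

```python
import numpy as np

def fuzzy_kmeans(X, k, m=2.0, iters=100, seed=0):
    """Fuzzy c-means: returns cluster centers and soft memberships u[i, j]."""
    rng = np.random.default_rng(seed)
    u = rng.random((len(X), k))
    u /= u.sum(axis=1, keepdims=True)          # each row sums to 1
    for _ in range(iters):
        w = u ** m                             # fuzzified weights
        centers = (w.T @ X) / w.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        d = np.maximum(d, 1e-12)               # avoid division by zero
        inv = d ** (-2.0 / (m - 1))            # standard membership update
        u = inv / inv.sum(axis=1, keepdims=True)
    return centers, u

# Two synthetic "soil" clusters in a 2-D chemical feature space
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.3, (50, 2)), rng.normal(5.0, 0.3, (50, 2))])
centers, u = fuzzy_kmeans(X, k=2)
# centers converge near (0, 0) and (5, 5); u gives graded memberships
```

Setting m close to 1 makes the memberships nearly hard (ordinary K-means); larger m gives softer cluster boundaries.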

  10. Level set method for optimal shape design of MRAM core. Micromagnetic approach

    International Nuclear Information System (INIS)

    Melicher, Valdemar; Cimrak, Ivan; Keer, Roger van

    2008-01-01

    We aim at optimizing the shape of the magnetic core in MRAM memories. The evolution of the magnetization during the writing process is described by the Landau-Lifshitz equation (LLE). The actual shape of the core in one cell is characterized by the coefficient γ. The cost functional f=f(γ) expresses the quality of the writing process, having in mind the competition between the full-select and the half-select element. We derive an explicit form of the derivative F=∂f/∂γ which allows for the use of gradient-type methods for the actual computation of the optimized shape (e.g., the steepest descent method). The level set method (LSM) is employed for the representation of the piecewise constant coefficient γ.
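    The gradient-type optimization mentioned above can be sketched generically. The cost functional below is a stand-in quadratic, not the micromagnetic f(γ); only the steepest-descent loop structure carries over.

```python
import numpy as np

def steepest_descent(grad_f, gamma0, eta=0.1, tol=1e-8, max_iter=10_000):
    """Minimize f by stepping along -∂f/∂γ with a fixed step size eta."""
    gamma = np.asarray(gamma0, float)
    for _ in range(max_iter):
        g = grad_f(gamma)
        if np.linalg.norm(g) < tol:
            break
        gamma = gamma - eta * g
    return gamma

# Stand-in cost f(γ) = ||γ - γ*||²; its gradient is 2(γ - γ*)
target = np.array([1.0, -2.0])
gamma = steepest_descent(lambda g: 2 * (g - target), np.zeros(2))
# converges to [1.0, -2.0]
```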

  11. Basic set theory

    CERN Document Server

    Levy, Azriel

    2002-01-01

    An advanced-level treatment of the basics of set theory, this text offers students a firm foundation, stopping just short of the areas employing model-theoretic methods. Geared toward upper-level undergraduate and graduate students, it consists of two parts: the first covers pure set theory, including the basic notions, order and well-foundedness, cardinal numbers, the ordinals, and the axiom of choice and some of its consequences; the second deals with applications and advanced topics such as point set topology, real spaces, Boolean algebras, and infinite combinatorics and large cardinals. An

  12. Data in support of environmental controls on the characteristics of mean number of forest fires and mean forest area burned (1987–2007) in China

    Directory of Open Access Journals (Sweden)

    Yu Chang

    2015-09-01

    Fire frequency and size are two important parameters describing fire characteristics. Exploring the spatial variation of fire characteristics and understanding the environmental controls are indispensable to fire prediction and sustainable forest landscape management. To illustrate the spatial variation of forest fire characteristics over China and to quantitatively determine the relative contribution of each of the environmental controls to this variation, forest fire characteristic data (mean number of forest fires and mean burned forest area) and environmental data (climate, land use, vegetation type and topography) at the provincial level were derived. These data sets can potentially serve as a foundation for future studies relating to fire risk assessment, carbon emission by forest fires, and the impact of climate change on fire characteristics. This data article contains data related to the research article entitled “Environmental controls on the characteristics of mean number of forest fires and mean forest area burned (1987–2007) in China” by Chang et al. [1].

  13. A Parametric k-Means Algorithm

    Science.gov (United States)

    Tarpey, Thaddeus

    2007-01-01

    Summary: The k points that optimally represent a distribution (usually in terms of a squared error loss) are called the k principal points. This paper presents a computationally intensive method that automatically determines the principal points of a parametric distribution. Cluster means from the k-means algorithm are nonparametric estimators of principal points. A parametric k-means approach is introduced for estimating principal points by running the k-means algorithm on a very large simulated data set from a distribution whose parameters are estimated using maximum likelihood. Theoretical and simulation results are presented comparing the parametric k-means algorithm to the usual k-means algorithm, and an example on determining sizes of gas masks is used to illustrate the parametric k-means algorithm. PMID:17917692
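    The parametric k-means idea described above can be sketched for a normal distribution: estimate μ and σ by maximum likelihood, simulate a large sample from the fitted model, and run Lloyd's algorithm on it. For k = 2, the principal points of N(μ, σ²) are known to be μ ± σ√(2/π) ≈ μ ± 0.798σ, which gives a check on the result. The data below are synthetic.

```python
import numpy as np

def lloyd_1d(x, k, iters=100, seed=0):
    """Plain 1-D k-means (Lloyd's algorithm)."""
    rng = np.random.default_rng(seed)
    centers = rng.choice(x, size=k, replace=False)
    for _ in range(iters):
        labels = np.argmin(np.abs(x[:, None] - centers[None, :]), axis=1)
        centers = np.array([x[labels == j].mean() for j in range(k)])
    return np.sort(centers)

def parametric_kmeans_normal(data, k, n_sim=200_000, seed=0):
    """Fit N(μ, σ²) by MLE, then run k-means on a large simulated sample."""
    mu, sigma = np.mean(data), np.std(data)   # MLE for a normal distribution
    sim = np.random.default_rng(seed).normal(mu, sigma, n_sim)
    return lloyd_1d(sim, k, seed=seed)

data = np.random.default_rng(1).normal(0.0, 1.0, 500)
pts = parametric_kmeans_normal(data, k=2)
# theory: principal points of N(0, 1) are ±√(2/π) ≈ ±0.798
```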

  14. Computing the dynamics of biomembranes by combining conservative level set and adaptive finite element methods

    OpenAIRE

    Laadhari , Aymen; Saramito , Pierre; Misbah , Chaouqi

    2014-01-01

    The numerical simulation of the deformation of vesicle membranes under simple shear external fluid flow is considered in this paper. A new saddle-point approach is proposed for the imposition of the fluid incompressibility and the membrane inextensibility constraints, through Lagrange multipliers defined in the fluid and on the membrane respectively. Using a level set formulation, the problem is approximated by mixed finite elements combined with an automatic adaptive ...

  15. Natural setting of Japanese islands and geologic disposal of high-level waste

    International Nuclear Information System (INIS)

    Koide, Hitoshi

    1991-01-01

    The Japanese islands are a combination of arcuate islands along boundaries between four major plates: Eurasia, North America, Pacific and Philippine Sea plates. The interaction among the four plates formed complex geological structures which are basically patchworks of small blocks of land and sea-floor sediments piled up by the subduction of oceanic plates along the margin of the Eurasia continent. Although frequent earthquakes and volcanic eruptions clearly indicate active crustal deformation, the distribution of active faults and volcanoes is localized regionally in the Japanese islands. Crustal displacement faster than 1 mm/year takes place only in restricted regions near plate boundaries or close to major active faults. Volcanic activity is absent in the region between the volcanic front and the subduction zone. The site selection is especially important in Japan. The scenarios for the long-term performance assessment of high-level waste disposal are discussed with special reference to the geological setting of Japan. The long-term prediction of tectonic disturbance, evaluation of faults and fractures in rocks and estimation of long-term water-rock interaction are key issues in the performance assessment of the high-level waste disposal in the Japanese islands. (author)

  16. County-Level Human Well-Being Index and Domain Scores (2000-2010) plus EQI data set (2000-2005)

    Data.gov (United States)

    U.S. Environmental Protection Agency — The HWBI_Draft_1 is an internal map service being prepared for public release (early FY18). This map services contains mean county-level HWBI, domain, indicators and...

  17. Methodology for setting the reference levels in the measurements of the dose rate absorbed in air due to the environmental gamma radiation

    International Nuclear Information System (INIS)

    Dominguez Ley, Orlando; Capote Ferrera, Eduardo; Caveda Ramos, Celia; Alonso Abad, Dolores

    2008-01-01

    Full text: The methodology for setting the reference levels for measurements of the gamma dose rate absorbed in air is described. The registration level was obtained using statistical methods. To set the alarm levels, it was necessary to begin with a certain affectation level, which activates the investigation operation mode when it is reached. This affectation level had to be transformed into values of the indicators selected to signal an alarm in the network, allowing their direct comparison and, at the same time, greater operability. The affectation level was taken as an effective dose of 1 mSv/y, which is the international dose limit for the public. The conversion factor obtained empirically in the aftermath of the Chernobyl accident was adopted to convert the annual effective dose into values of effective dose rate in air. These factors are the most important in our work, since the main task of the National Network of Environmental Radiological Surveillance of the Republic of Cuba is to detect accidents with regional-scale effects, and that accident is precisely an example of pollution at this scale. The alarm level setting was based on the results obtained in the first year after the Chernobyl accident. For this purpose, some transformations were performed. In the final results, a correction factor was introduced depending on the season in which the measurement was made, taking into account the influence of different meteorological events on the measurement of this indicator. (author)

  18. The Daily Events and Emotions of Master's-Level Family Therapy Trainees in Off-Campus Practicum Settings

    Science.gov (United States)

    Edwards, Todd M.; Patterson, Jo Ellen

    2012-01-01

    The Day Reconstruction Method (DRM) was used to assess the daily events and emotions of one program's master's-level family therapy trainees in off-campus practicum settings. This study examines the DRM reports of 35 family therapy trainees in the second year of their master's program in marriage and family therapy. Four themes emerged from the…

  19. Novel room-temperature-setting phosphate ceramics for stabilizing combustion products and low-level mixed wastes

    International Nuclear Information System (INIS)

    Wagh, A.S.; Singh, D.

    1994-01-01

    Argonne National Laboratory, with support from the Office of Technology in the US Department of Energy (DOE), has developed a new process employing novel, chemically bonded ceramic materials to stabilize secondary waste streams. Such waste streams result from the thermal processes used to stabilize low-level, mixed wastes. The process will help the electric power industry treat its combustion and low-level mixed wastes. The ceramic materials are strong, dense, leach-resistant, and inexpensive to fabricate. The room-temperature-setting process allows stabilization of volatile components containing lead, mercury, cadmium, chromium, and nickel. The process also provides effective stabilization of fossil fuel combustion products. It is most suitable for treating fly and bottom ashes

  20. Training set optimization under population structure in genomic selection.

    Science.gov (United States)

    Isidro, Julio; Jannink, Jean-Luc; Akdemir, Deniz; Poland, Jesse; Heslot, Nicolas; Sorrells, Mark E

    2015-01-01

    Population structure must be evaluated before optimization of the training set population. Maximizing the phenotypic variance captured by the training set is important for optimal performance. The optimization of the training set (TRS) in genomic selection has received much interest in both animal and plant breeding, because it is critical to the accuracy of the prediction models. In this study, five different TRS sampling algorithms (stratified sampling, mean of the coefficient of determination (CDmean), mean of predictor error variance (PEVmean), stratified CDmean (StratCDmean) and random sampling) were evaluated for prediction accuracy in the presence of different levels of population structure. In the presence of population structure, capturing the most phenotypic variation in the TRS is desirable. The wheat dataset showed mild population structure, and the CDmean and stratified CDmean methods showed the highest accuracies for all the traits except for test weight and heading date. The rice dataset had strong population structure, and the approach based on stratified sampling showed the highest accuracies for all traits. In general, CDmean minimized the relationship between genotypes in the TRS, maximizing the relationship between the TRS and the test set. This makes it suitable as an optimization criterion for long-term selection. Our results indicated that the best selection criterion used to optimize the TRS seems to depend on the interaction of trait architecture and population structure.
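    Of the sampling schemes compared above, the simplest to sketch is stratified sampling: draw training-set candidates from each subpopulation in proportion to its size. The cluster labels below are illustrative; the CDmean and PEVmean criteria require the genomic relationship matrix and are not shown.

```python
import numpy as np

def stratified_sample(labels, n_total, seed=0):
    """Pick n_total indices, proportional to each stratum's share of the population."""
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels)
    strata, counts = np.unique(labels, return_counts=True)
    take = np.round(n_total * counts / counts.sum()).astype(int)
    picks = [rng.choice(np.where(labels == s)[0], size=t, replace=False)
             for s, t in zip(strata, take)]
    return np.concatenate(picks)

# Population of 100 genotypes in three structure groups (60/30/10 split)
labels = np.array([0] * 60 + [1] * 30 + [2] * 10)
idx = stratified_sample(labels, 20)
# the 20 picks mirror the 60/30/10 split: 12, 6 and 2 per group
```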

  1. Impacts of climate change on European hydrology at 1.5, 2 and 3 degrees mean global warming above preindustrial level

    NARCIS (Netherlands)

    Donnelly, Chantal; Greuell, Wouter; Andersson, Jafet; Gerten, Dieter; Pisacane, Giovanna; Roudier, Philippe; Ludwig, Fulco

    2017-01-01

    Impacts of climate change at 1.5, 2 and 3 °C mean global warming above preindustrial level are investigated and compared for runoff, discharge and snowpack in Europe. Ensembles of climate projections representing each of the warming levels were assembled to describe the hydro-meteorological climate

  2. Mean sea level and change in the hydrological regime off Loviisa power plant around the year 2050

    International Nuclear Information System (INIS)

    Maelkki, P.; Voipio, A.

    1985-03-01

    On the request of Imatran Voima Oy, the Institute of Marine Research has made an estimate of the future sea level off Loviisa Power Plant. The estimate is based on observations of mean sea level in the Gulf of Finland. The stations used are Helsinki (observations since 1904) and Hamina (observations since 1928). A literature review was made in order to estimate the impact of climate change on environmental conditions. The results presented are mainly based on various estimates from meteorological Global Circulation Models (GCM). Their usefulness in this connection is briefly discussed.

  3. Global mean sea-level rise in a world agreed upon in Paris

    Science.gov (United States)

    Bittermann, Klaus; Rahmstorf, Stefan; Kopp, Robert E.; Kemp, Andrew C.

    2017-12-01

    Although the 2015 Paris Agreement seeks to hold global average temperature to ‘well below 2 °C above pre-industrial levels and to pursue efforts to limit the temperature increase to 1.5 °C above pre-industrial levels’, projections of global mean sea-level (GMSL) rise commonly focus on scenarios in which there is a high probability that warming exceeds 1.5 °C. Using a semi-empirical model, we project GMSL changes between now and 2150 CE under a suite of temperature scenarios that satisfy the Paris Agreement temperature targets. The projected magnitude and rate of GMSL rise varies among these low emissions scenarios. Stabilizing temperature at 1.5 °C instead of 2 °C above preindustrial reduces GMSL in 2150 CE by 17 cm (90% credible interval: 14-21 cm) and reduces peak rates of rise by 1.9 mm yr-1 (90% credible interval: 1.4-2.6 mm yr-1). Delaying the year of peak temperature has little long-term influence on GMSL, but does reduce the maximum rate of rise. Stabilizing at 2 °C in 2080 CE rather than 2030 CE reduces the peak rate by 2.7 mm yr-1 (90% credible interval: 2.0-4.0 mm yr-1).
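    Semi-empirical models of the kind used above relate the rate of GMSL rise to warming above an equilibrium temperature, e.g. dH/dt = a·(T − T₀). The sketch below integrates that relation for two toy temperature pathways; the parameter values and pathways are illustrative assumptions, not the paper's calibrated model.

```python
import numpy as np

def gmsl_semi_empirical(temps, a=3.4, t0=-0.4, dt=1.0):
    """Euler-integrate dH/dt = a*(T - T0), with a in mm/yr per °C."""
    h, out = 0.0, []
    for T in temps:
        h += a * (T - t0) * dt
        out.append(h)
    return np.array(out)

# Toy pathways stabilizing at 1.5 °C vs 2 °C above preindustrial
years = np.arange(2020, 2151)
path15 = np.minimum(1.1 + 0.01 * (years - 2020), 1.5)
path20 = np.minimum(1.1 + 0.01 * (years - 2020), 2.0)
rise15 = gmsl_semi_empirical(path15)
rise20 = gmsl_semi_empirical(path20)
# the cooler pathway accumulates less rise by 2150, as in the record above
```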

  4. Toward accurate tooth segmentation from computed tomography images using a hybrid level set model

    Energy Technology Data Exchange (ETDEWEB)

    Gan, Yangzhou; Zhao, Qunfei [Department of Automation, Shanghai Jiao Tong University, and Key Laboratory of System Control and Information Processing, Ministry of Education of China, Shanghai 200240 (China); Xia, Zeyang, E-mail: zy.xia@siat.ac.cn, E-mail: jing.xiong@siat.ac.cn; Hu, Ying [Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, and The Chinese University of Hong Kong, Shenzhen 518055 (China); Xiong, Jing, E-mail: zy.xia@siat.ac.cn, E-mail: jing.xiong@siat.ac.cn [Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 510855 (China); Zhang, Jianwei [TAMS, Department of Informatics, University of Hamburg, Hamburg 22527 (Germany)

    2015-01-15

    Purpose: A three-dimensional (3D) model of the teeth provides important information for orthodontic diagnosis and treatment planning. Tooth segmentation is an essential step in generating the 3D digital model from computed tomography (CT) images. The aim of this study is to develop an accurate and efficient tooth segmentation method from CT images. Methods: The 3D dental CT volumetric images are segmented slice by slice in a two-dimensional (2D) transverse plane. The 2D segmentation is composed of a manual initialization step and an automatic slice by slice segmentation step. In the manual initialization step, the user manually picks a starting slice and selects a seed point for each tooth in this slice. In the automatic slice segmentation step, a developed hybrid level set model is applied to segment tooth contours from each slice. Tooth contour propagation strategy is employed to initialize the level set function automatically. Cone beam CT (CBCT) images of two subjects were used to tune the parameters. Images of 16 additional subjects were used to validate the performance of the method. Volume overlap metrics and surface distance metrics were adopted to assess the segmentation accuracy quantitatively. The volume overlap metrics were volume difference (VD, mm{sup 3}) and Dice similarity coefficient (DSC, %). The surface distance metrics were average symmetric surface distance (ASSD, mm), RMS (root mean square) symmetric surface distance (RMSSSD, mm), and maximum symmetric surface distance (MSSD, mm). Computation time was recorded to assess the efficiency. The performance of the proposed method has been compared with two state-of-the-art methods. Results: For the tested CBCT images, the VD, DSC, ASSD, RMSSSD, and MSSD for the incisor were 38.16 ± 12.94 mm{sup 3}, 88.82 ± 2.14%, 0.29 ± 0.03 mm, 0.32 ± 0.08 mm, and 1.25 ± 0.58 mm, respectively; the VD, DSC, ASSD, RMSSSD, and MSSD for the canine were 49.12 ± 9.33 mm{sup 3}, 91.57 ± 0.82%, 0.27 ± 0.02 mm, 0

  5. Toward accurate tooth segmentation from computed tomography images using a hybrid level set model

    International Nuclear Information System (INIS)

    Gan, Yangzhou; Zhao, Qunfei; Xia, Zeyang; Hu, Ying; Xiong, Jing; Zhang, Jianwei

    2015-01-01

    Purpose: A three-dimensional (3D) model of the teeth provides important information for orthodontic diagnosis and treatment planning. Tooth segmentation is an essential step in generating the 3D digital model from computed tomography (CT) images. The aim of this study is to develop an accurate and efficient tooth segmentation method from CT images. Methods: The 3D dental CT volumetric images are segmented slice by slice in a two-dimensional (2D) transverse plane. The 2D segmentation is composed of a manual initialization step and an automatic slice by slice segmentation step. In the manual initialization step, the user manually picks a starting slice and selects a seed point for each tooth in this slice. In the automatic slice segmentation step, a developed hybrid level set model is applied to segment tooth contours from each slice. Tooth contour propagation strategy is employed to initialize the level set function automatically. Cone beam CT (CBCT) images of two subjects were used to tune the parameters. Images of 16 additional subjects were used to validate the performance of the method. Volume overlap metrics and surface distance metrics were adopted to assess the segmentation accuracy quantitatively. The volume overlap metrics were volume difference (VD, mm 3 ) and Dice similarity coefficient (DSC, %). The surface distance metrics were average symmetric surface distance (ASSD, mm), RMS (root mean square) symmetric surface distance (RMSSSD, mm), and maximum symmetric surface distance (MSSD, mm). Computation time was recorded to assess the efficiency. The performance of the proposed method has been compared with two state-of-the-art methods. Results: For the tested CBCT images, the VD, DSC, ASSD, RMSSSD, and MSSD for the incisor were 38.16 ± 12.94 mm 3 , 88.82 ± 2.14%, 0.29 ± 0.03 mm, 0.32 ± 0.08 mm, and 1.25 ± 0.58 mm, respectively; the VD, DSC, ASSD, RMSSSD, and MSSD for the canine were 49.12 ± 9.33 mm 3 , 91.57 ± 0.82%, 0.27 ± 0.02 mm, 0.28 ± 0.03 mm
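    The surface distance metrics reported in the two records above can be sketched on 2-D binary masks. The boundary extraction and brute-force distance computation below are a simplification of the full 3-D surface computation, and the averaging is one common formulation of the average symmetric surface distance (ASSD).

```python
import numpy as np

def boundary_points(mask):
    """Pixels of a binary mask with at least one background 4-neighbour."""
    m = np.asarray(mask, bool)
    pad = np.pad(m, 1)
    interior = (pad[:-2, 1:-1] & pad[2:, 1:-1] &
                pad[1:-1, :-2] & pad[1:-1, 2:])
    return np.argwhere(m & ~interior)

def assd(mask_a, mask_b):
    """Average symmetric surface distance between two binary masks (pixels)."""
    pa, pb = boundary_points(mask_a), boundary_points(mask_b)
    d = np.linalg.norm(pa[:, None, :] - pb[None, :, :], axis=2)
    return 0.5 * (d.min(axis=1).mean() + d.min(axis=0).mean())

# Illustrative square "tooth" mask; shifting it by one pixel gives a small ASSD
a = np.zeros((8, 8), bool)
a[2:6, 2:6] = True
b = np.roll(a, 1, axis=0)
```

Replacing the mean by the maximum of the directed distances gives the MSSD, and the RMS of the pooled distances gives the RMSSSD.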

  6. Transport equations, Level Set and Eulerian mechanics. Application to fluid-structure coupling

    International Nuclear Information System (INIS)

    Maitre, E.

    2008-11-01

    My work was devoted to the numerical analysis of non-linear elliptic-parabolic equations, to the neutron transport equation and to the simulation of fabric draping. More recently I developed an Eulerian method based on a level set formulation of the immersed boundary method to deal with fluid-structure coupling problems arising in bio-mechanics. Some of the more efficient algorithms to solve the neutron transport equation make use of a splitting of the transport operator that takes its characteristics into account. In the present work we introduce a new algorithm based on this splitting and an adaptation of minimal residual methods to the infinite-dimensional case. We present the cases where the velocity space is of dimension 1 (slab geometry) and 2 (plane geometry), because the splitting is simpler in the former.

  7. Setting the stage for master's level success

    Science.gov (United States)

    Roberts, Donna

    Comprehensive reading, writing, research, and study skills play a critical role in a graduate student's success and ability to contribute to a field of study effectively. The literature indicated a need to support graduate student success in the areas of mentoring and navigation, as well as research and writing. The purpose of this two-phased mixed methods explanatory study was to examine factors that characterize student success at the Master's level in the fields of education, sociology and social work. The study was grounded in a transformational learning framework which focused on three levels of learning: technical knowledge, practical or communicative knowledge, and emancipatory knowledge. The study included two data collection points. Phase one consisted of a Master's Level Success questionnaire that was sent via Qualtrics to graduate-level students at three colleges and universities in the Central Valley of California: a California State University campus, a University of California campus, and a private college campus. The results of the chi-square analysis indicated that seven questionnaire items were significant with p values less than .05. Phase two of the data collection included semi-structured interview questions, from which three themes emerged, identified using Dedoose software: (1) the need for more language and writing support at the Master's level, (2) the need for mentoring, especially for second-language learners, and (3) utilizing the strong influence of faculty in student success. It is recommended that institutions continually assess and strengthen their programs to meet the full range of learners and to support students to degree completion.

  8. Decomposition of variance in terms of conditional means

    Directory of Open Access Journals (Sweden)

    Alessandro Figà Talamanca

    2013-05-01

    Two different sets of data are used to test an apparently new approach to the analysis of the variance of a numerical variable which depends on qualitative variables. We suggest that this approach be used to complement other existing techniques to study the interdependence of the variables involved. According to our method, the variance is expressed as a sum of orthogonal components, obtained as differences of conditional means, with respect to the qualitative characters. The resulting expression for the variance depends on the ordering in which the characters are considered. We suggest an algorithm which leads to an ordering which is deemed natural. The first set of data concerns the score achieved by a population of students on an entrance examination based on a multiple choice test with 30 questions. In this case the qualitative characters are dyadic and correspond to correct or incorrect answers to each question. The second set of data concerns the delay in obtaining the degree for a population of graduates of Italian universities. The variance in this case is analyzed with respect to a set of seven specific qualitative characters of the population studied (gender, previous education, working condition, parents' educational level, field of study, etc.).
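    For a single qualitative character, the orthogonal decomposition described above reduces to the law of total variance: Var(Y) = Var(E[Y|X]) + E[Var(Y|X)], i.e. a between-group term built from conditional means plus a within-group term. A small numerical check with illustrative data:

```python
import numpy as np

def variance_decomposition(y, groups):
    """Split Var(Y) into between-group and within-group components."""
    y, groups = np.asarray(y, float), np.asarray(groups)
    grand = y.var()                        # population variance (ddof=0)
    between = within = 0.0
    for g in np.unique(groups):
        yg = y[groups == g]
        w = len(yg) / len(y)               # weight of this group
        between += w * (yg.mean() - y.mean()) ** 2
        within += w * yg.var()
    return grand, between, within

# e.g. scores conditioned on a dyadic character (correct/incorrect answer)
y = np.array([12, 15, 14, 22, 25, 24.0])
g = np.array([0, 0, 0, 1, 1, 1])
total, b, w = variance_decomposition(y, g)
# total == b + w exactly (up to floating point)
```

Iterating this split over successive characters yields the ordering-dependent sum of orthogonal components the abstract refers to.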

  9. The effects of massage therapy on pain management in the acute care setting.

    Science.gov (United States)

    Adams, Rose; White, Barb; Beckett, Cynthia

    2010-03-17

    Pain management remains a critical issue for hospitals and is receiving the attention of hospital accreditation organizations. The acute care setting of the hospital provides an excellent opportunity for the integration of massage therapy for pain management into the team-centered approach of patient care. This preliminary study evaluated the effect of the use of massage therapy on inpatient pain levels in the acute care setting. The study was conducted at Flagstaff Medical Center in Flagstaff, Arizona, a nonprofit community hospital serving a large rural area of northern Arizona. A convenience sample was used to identify research participants. Pain levels before and after massage therapy were recorded using a 0-10 visual analog scale. Quantitative and qualitative methods were used for analysis of this descriptive study. Hospital inpatients (n = 53) from medical, surgical, and obstetrics units participated in the current research by each receiving one or more massage therapy sessions averaging 30 minutes each. The number of sessions received depended on the length of the hospital stay. Before massage, the mean pain level recorded by the patients was 5.18 [standard deviation (SD): 2.01]. After massage, the mean pain level was 2.33 (SD: 2.10). The observed reduction in pain was statistically significant: paired samples t(52) = 12.43, r = .67, d = 1.38, p < .001. Integrating massage therapy into the acute care setting creates overall positive results in the patient's ability to deal with the challenging physical and psychological aspects of their health condition. The study demonstrated not only significant reduction in pain levels, but also the interrelatedness of pain, relaxation, sleep, emotions, recovery, and finally, the healing process.
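    The paired-samples t statistic and effect size reported above can be computed as follows. The 0-10 pain ratings below are illustrative, not the study's raw data.

```python
import numpy as np

def paired_t(before, after):
    """Paired-samples t statistic and Cohen's d for pre/post scores."""
    diffs = np.asarray(before, float) - np.asarray(after, float)
    n = len(diffs)
    sd = diffs.std(ddof=1)                 # sample SD of the differences
    t = diffs.mean() / (sd / np.sqrt(n))   # t with n - 1 degrees of freedom
    cohen_d = diffs.mean() / sd
    return t, cohen_d

# Illustrative pain ratings before and after a massage session
before = [7, 5, 6, 8, 4, 6, 5, 7]
after = [3, 2, 3, 4, 2, 3, 2, 4]
t, d = paired_t(before, after)
# a large positive t indicates a significant drop in reported pain
```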

  10. Axiomatic set theory

    CERN Document Server

    Suppes, Patrick

    1972-01-01

    This clear and well-developed approach to axiomatic set theory is geared toward upper-level undergraduates and graduate students. It examines the basic paradoxes and history of set theory and advanced topics such as relations and functions, equipollence, finite sets and cardinal numbers, rational and real numbers, and other subjects. 1960 edition.

  11. Comparing of goal setting strategy with group education method to increase physical activity level: A randomized trial

    Directory of Open Access Journals (Sweden)

    Nasrin Jiryaee

    2015-01-01

    Background: An intervention to increase physical activity should be based on the resources of the health care setting and be acceptable to the subject group. This study was designed to assess and compare the effect of the goal-setting strategy with a group education method on increasing the physical activity of mothers of children aged 1 to 5. Materials and Methods: Mothers who had at least one child of 1-5 years were randomized into two groups. The effects of (1) the goal-setting strategy and (2) the group education method on increasing physical activity were assessed and compared 1 month and 3 months after the intervention. Also, the weight, height, body mass index (BMI), waist and hip circumference, and well-being were compared between the two groups before and after the intervention. Results: Physical activity level increased significantly after the intervention in the goal-setting group, and it was significantly different between the two groups after the intervention (P < 0.05). BMI, waist circumference, hip circumference, and well-being score were significantly different in the goal-setting group after the intervention. In the group education method, only the well-being score improved significantly (P < 0.05). Conclusion: Our study presented the effects of using the goal-setting strategy to boost physical activity, improving the state of well-being and decreasing BMI, waist, and hip circumference.

  12. Setting MEPS for electronic products

    International Nuclear Information System (INIS)

    Siderius, Hans-Paul

    2014-01-01

    When analysing price, performance and efficiency data for 15 consumer electronic and information and communication technology products, we found that, in general, price did not relate to the efficiency of the product. Prices of electronic products with comparable performance decreased over time. For products where the data allowed fitting the relationship, we found an exponential decrease in price with an average time constant of −0.30 [1/year], meaning that every year the product became 26% cheaper on average. The results imply that the classical approach of setting minimum efficiency performance standards (MEPS) by means of life cycle cost calculations cannot be applied to electronic products. Therefore, an alternative approach, based on the improvement of efficiency over time and the variation in efficiency of products on the market, is presented. The concept of a policy action window can provide guidance for the decision on whether setting MEPS for a certain product is appropriate. If the (formal) procedure for setting MEPS takes longer than the policy action window, this means that the efficiency improvement will also be achieved without setting MEPS. We found short, i.e. less than three years, policy action windows for graphic cards, network attached storage products, network switches and televisions. - Highlights: • For electronic consumer products price does not relate to efficiency. • Average price decrease of selected electronic products is 26% per year. • We give an alternative approach to life cycle cost calculations for setting MEPS. • The policy action window indicates whether setting MEPS is appropriate.
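    The record's reported time constant can be turned into the quoted yearly price drop with a one-line calculation. In this sketch, only the rate of −0.30 [1/year] comes from the record; the launch price is a made-up example.

    ```python
    import math

    # price(t) = price(0) * exp(k * t), with the reported average
    # time constant k = -0.30 [1/year].
    k = -0.30       # reported average exponential rate [1/year]
    p0 = 400.0      # hypothetical launch price of an electronic product

    yearly_ratio = math.exp(k)                 # price multiplier per year
    yearly_drop_pct = (1 - yearly_ratio) * 100 # ~25.9%, i.e. ~26% cheaper

    print(f"price after 1 year: {p0 * yearly_ratio:.2f}")
    print(f"average yearly price drop: {yearly_drop_pct:.1f}%")
    ```

    Since exp(−0.30) ≈ 0.741, each year's price is about 74% of the previous year's, matching the record's "26% cheaper on average" figure.
    
    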

  13. Comparisons of different mean airway pressure settings during high-frequency oscillation in inflammatory response to oleic acid-induced lung injury in rabbits

    Directory of Open Access Journals (Sweden)

    Koichi Ono

    2009-03-01

    Full Text Available Koichi Ono1, Tomonobu Koizumi2, Rikimaru Nakagawa1, Sumiko Yoshikawa2, Tetsutarou Otagiri1; 1Department of Anesthesiology and Resuscitation; 2First Department of Internal Medicine, Shinshu University School of Medicine, Matsumoto, Japan. Purpose: The present study was designed to examine effects of different mean airway pressure (MAP) settings during high-frequency oscillation (HFO) on oxygenation and inflammatory responses to acute lung injury (ALI) in rabbits. Methods: Anesthetized rabbits were mechanically ventilated with a conventional mechanical ventilation (CMV) mode (tidal volume 6 ml/kg, inspired oxygen fraction [FIo2] of 1.0, respiratory rate [RR] of 30/min, positive end-expiratory pressure [PEEP] of 5 cmH2O). ALI was induced by intravenous administration of oleic acid (0.08 ml/kg) and the animals were randomly allocated to the following three experimental groups: animals (n = 6) ventilated using the same mode of CMV, or animals ventilated with standard MAP (MAP 10 cmH2O, n = 7) and high MAP (15 cmH2O, n = 6) settings of HFO (15 Hz). The MAP settings were calculated from the inflation limb of the pressure-volume curve during CMV. Results: HFO with a high MAP setting significantly improved the deteriorated oxygenation during oleic acid-induced ALI and reduced wet/dry ratios, neutrophil counts and interleukin-8 concentration in bronchoalveolar lavage fluid, compared to those parameters in CMV and standard MAP-HFO. Conclusions: These findings suggest that only the high MAP setting during HFO could contribute to decreased lung inflammation as well as improved oxygenation during the development of ALI. Keywords: lung protective ventilation, open lung ventilation, IL-8, neutrophil

  14. Supportive College Environment for Meaning Searching and Meaning in Life among American College Students

    Science.gov (United States)

    Shin, Joo Yeon; Steger, Michael F.

    2016-01-01

    We examined whether American college students who perceive their college environment as supportive for their meaning searching report higher levels of meaning in life. We also examined whether students' perception of college environmental support for meaning searching moderates the relation between the presence of and search for meaning. Students'…

  15. Pareto-optimal estimates that constrain mean California precipitation change

    Science.gov (United States)

    Langenbrunner, B.; Neelin, J. D.

    2017-12-01

    Global climate model (GCM) projections of greenhouse gas-induced precipitation change can exhibit notable uncertainty at the regional scale, particularly in regions where the mean change is small compared to internal variability. This is especially true for California, which is located in a transition zone between robust precipitation increases to the north and decreases to the south, and where GCMs from the Climate Model Intercomparison Project phase 5 (CMIP5) archive show no consensus on mean change (in either magnitude or sign) across the central and southern parts of the state. With the goal of constraining this uncertainty, we apply a multiobjective approach to a large set of subensembles (subsets of models from the full CMIP5 ensemble). These constraints are based on subensemble performance in three fields important to California precipitation: tropical Pacific sea surface temperatures, upper-level zonal winds in the midlatitude Pacific, and precipitation over the state. An evolutionary algorithm is used to sort through and identify the set of Pareto-optimal subensembles across these three measures in the historical climatology, and we use this information to constrain end-of-century California wet season precipitation change. This technique narrows the range of projections throughout the state and increases confidence in estimates of positive mean change. Furthermore, these methods complement and generalize emergent constraint approaches that aim to restrict uncertainty in end-of-century projections, and they have applications to even broader aspects of uncertainty quantification, including parameter sensitivity and model calibration.
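    The Pareto-optimal sorting step described above can be illustrated with a generic non-dominated filter. The sketch below assumes three error measures to minimize, with made-up values rather than the study's CMIP5 subensemble data.

    ```python
    def pareto_front(points):
        """Return the non-dominated points, all objectives minimized.

        A point p dominates q if p is <= q in every objective and
        strictly < in at least one.
        """
        def dominates(p, q):
            return (all(a <= b for a, b in zip(p, q))
                    and any(a < b for a, b in zip(p, q)))

        return [p for p in points
                if not any(dominates(q, p) for q in points if q != p)]

    # Hypothetical subensemble errors:
    # (SST error, zonal-wind error, precipitation error)
    errors = [
        (0.20, 0.35, 0.10),
        (0.15, 0.40, 0.12),
        (0.25, 0.30, 0.09),
        (0.30, 0.45, 0.20),  # dominated by the first point
    ]
    print(pareto_front(errors))  # keeps the first three points
    ```

    An evolutionary algorithm, as used in the study, searches the space of subensembles and applies a filter like this to keep the trade-off set rather than a single "best" subensemble.
    
    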

  16. Ultrasonic scalpel causes greater depth of soft tissue necrosis compared to monopolar electrocautery at standard power level settings in a pig model

    Science.gov (United States)

    2012-01-01

    Background Ultrasonic scalpel (UC) and monopolar electrocautery (ME) are common tools for soft tissue dissection. However, morphological data on the related tissue alteration are discordant. We developed an automatic device for standardized sample excision and compared quality and depth of morphological changes caused by UC and ME in a pig model. Methods 100 tissue samples (5 × 3 cm) of the abdominal wall were excised in 16 pigs. Excisions were randomly performed manually or by using the self-constructed automatic device at standard power levels (60 W cutting in ME, level 5 in UC) for abdominal surgery. Quality of tissue alteration and depth of coagulation necrosis were examined histopathologically. Device (UC vs. ME) and mode (manually vs. automatic) effects were studied by two-way analysis of variance at a significance level of 5%. Results At the investigated power level settings UC and ME induced qualitatively similar coagulation necroses. Mean depth of necrosis was 450.4 ± 457.8 μm for manual UC and 553.5 ± 326.9 μm for automatic UC versus 149.0 ± 74.3 μm for manual ME and 257.6 ± 119.4 μm for automatic ME. Coagulation necrosis was significantly deeper (p < 0.01) when UC was used compared to ME. The mode of excision (manual versus automatic) did not influence the depth of necrosis (p = 0.85). There was no significant interaction between dissection tool and mode of excision (p = 0.93). Conclusions Thermal injury caused by UC and ME results in qualitatively similar coagulation necrosis. The depth of necrosis is significantly greater in UC compared to ME at investigated standard power levels. PMID:22361346

  17. TCTE Level 3 Total Solar Irradiance Daily Means V002

    Data.gov (United States)

    National Aeronautics and Space Administration — The Total Solar Irradiance (TSI) Calibration Transfer Experiment (TCTE) data set TCTE3TSID contains daily averaged total solar irradiance (a.k.a solar constant) data...

  18. Study unique artistic lopburi province for design brass tea set of bantahkrayang community

    Science.gov (United States)

    Pliansiri, V.; Seviset, S.

    2017-07-01

    The objectives of this study were as follows: 1) to study the production process of the handcrafted brass tea set; and 2) to design and develop the handcrafted brass tea set. The design process started with mutual analysis and a conceptual framework for product design: Quality Function Deployment, the Theory of Inventive Problem Solving, Principles of Craft Design, and the Principle of Reverse Engineering. Experts in the fields of both industrial product design and brass handicraft products evaluated the brass tea set design, and the prototype was assessed by a sample of consumers who had bought the brass tea set of the Bantahkrayang Community. The statistics used were percentage, mean (X̄) and standard deviation (S.D.). Consumer satisfaction with the handcrafted brass tea set was at a high level.

  19. Social meanings and understandings in patient-nurse interaction in the community practice setting: a grounded theory study

    Directory of Open Access Journals (Sweden)

    Stoddart Kathleen M

    2012-09-01

    Full Text Available Background The patient-nurse relationship is a traditional concern of healthcare research. However, patient-nurse interaction is under-examined from a social perspective. Current research focuses mostly on specific contexts of care delivery and experience related to medical condition or illness, or to nurses’ speciality. Consequentially, this paper is about the social meanings and understandings at play within situated patient-nurse interaction in the community practice setting in a transforming healthcare service. Methods Grounded theory methodology was used and the research process was characterised by principles of theoretical sensitivity and constant comparative analysis. The field of study was four health centres in the community. The participants were patients and nurses representative of those attending or working in the health centres and meeting there by scheduled appointment. Data collection methods were observations, informal interviews and semi-structured interviews. Results Key properties of ‘Being a good patient, being a good nurse’, ‘Institutional experiences’ and ‘Expectations about healthcare’ were associated with the construction of a category entitled ‘Experience’. Those key properties captured that in an evolving healthcare environment individuals continually re-constructed their reality of being a patient or nurse as they endeavoured to perform appropriately; articulation of past and present healthcare experiences was important in that process. Modus operandi in the role of patient was influenced by past experiences in healthcare and in non-healthcare institutions in terms of engagement and involvement (or not) in interaction. Patients’ expectations about interaction in healthcare included some uncertainty as they strove to make sense of the changing roles and expertise of nurses, and to differentiate between the roles and expertise of nurses and doctors. Conclusions The importance of social

  20. Efficiency of radiation protection means in pediatric roentgenology

    International Nuclear Information System (INIS)

    Burdina, L.M.; Stavitskij, R.V.; Lapina, T.V.; Yudaev, V.I.; Pavlova, M.K.

    1989-01-01

    A set of radiation protection means made by the MAVIG Company and used in pediatric roentgenology is considered. The set includes protective shields, aprons for medical staff, gloves, aprons to protect patient gonads, caps for testicles, protectors for gonads, and devices to shield children during examination of the hip joints. Shielding coefficients are given which indicate the high efficiency of the individual protection means produced by the MAVIG Company, which may be recommended for widespread application in roentgenology.

  1. Wind and solar resource data sets: Wind and solar resource data sets

    Energy Technology Data Exchange (ETDEWEB)

    Clifton, Andrew [National Renewable Energy Laboratory, Golden CO USA; Hodge, Bri-Mathias [National Renewable Energy Laboratory, Golden CO USA; Power Systems Engineering Center, National Renewable Energy Laboratory, Golden CO USA; Draxl, Caroline [National Renewable Energy Laboratory, Golden CO USA; National Wind Technology Center, National Renewable Energy Laboratory, Golden CO USA; Badger, Jake [Department of Wind Energy, Danish Technical University, Copenhagen Denmark; Habte, Aron [National Renewable Energy Laboratory, Golden CO USA; Power Systems Engineering Center, National Renewable Energy Laboratory, Golden CO USA

    2017-12-05

    The range of resource data sets spans from static cartography showing the mean annual wind speed or solar irradiance across a region to high temporal and high spatial resolution products that provide detailed information at a potential wind or solar energy facility. These data sets are used to support continental-scale, national, or regional renewable energy development; facilitate prospecting by developers; and enable grid integration studies. This review first provides an introduction to the wind and solar resource data sets, then provides an overview of the common methods used for their creation and validation. A brief history of wind and solar resource data sets is then presented, followed by areas for future research.

  2. Comparing Accuracy of Airborne Laser Scanning and TerraSAR-X Radar Images in the Estimation of Plot-Level Forest Variables

    Directory of Open Access Journals (Sweden)

    Juha Hyyppä

    2010-01-01

    Full Text Available In this study we compared the accuracy of low-pulse airborne laser scanning (ALS) data, multi-temporal high-resolution noninterferometric TerraSAR-X radar data and a combined feature set derived from these data in the estimation of forest variables at plot level. The TerraSAR-X data set consisted of seven dual-polarized (HH/HV or VH/VV) Stripmap mode images from all seasons of the year. We were especially interested in distinguishing between the tree species. The dependent variables estimated included mean volume, basal area, mean height, mean diameter and tree species-specific mean volumes. Selection of the best possible feature set was based on a genetic algorithm (GA). The nonparametric k-nearest neighbour (k-NN) algorithm was applied to the estimation. The research material consisted of 124 circular plots measured at tree level and located in the vicinity of Espoo, Finland. There are large variations in the elevation and forest structure in the study area, making it demanding for image interpretation. The best feature set contained 12 features, nine of them originating from the ALS data and three from the TerraSAR-X data. The relative RMSEs for the best performing feature set were 34.7% (mean volume), 28.1% (basal area), 14.3% (mean height), 21.4% (mean diameter), 99.9% (mean volume of Scots pine), 61.6% (mean volume of Norway spruce) and 91.6% (mean volume of deciduous tree species). The combined feature set outperformed an ALS-based feature set marginally; in fact, the latter was better in the case of species-specific volumes. Features from TerraSAR-X alone performed poorly. However, due to favorable temporal resolution, satellite-borne radar imaging is a promising data source for updating large-area forest inventories based on low-pulse ALS.
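    The k-NN estimation step used in the record can be sketched generically: predict a plot-level variable as the mean of the target values of the k nearest plots in feature space. The feature vectors, volumes and query below are made-up stand-ins, not the study's ALS/TerraSAR-X features.

    ```python
    import math

    def knn_estimate(features, targets, query, k=3):
        """Predict a plot-level variable as the mean target of the k
        nearest training plots in Euclidean feature space."""
        dists = sorted(
            (math.dist(f, query), t) for f, t in zip(features, targets)
        )
        nearest = dists[:k]
        return sum(t for _, t in nearest) / len(nearest)

    # Hypothetical plots: 2-D feature vectors and mean volumes (m^3/ha)
    feats = [(1.0, 0.2), (1.1, 0.3), (5.0, 2.0), (5.2, 2.1)]
    vols = [120.0, 130.0, 300.0, 310.0]
    print(knn_estimate(feats, vols, (1.05, 0.25), k=2))  # → 125.0
    ```

    In the study itself, the feature vectors are the 12 GA-selected ALS and TerraSAR-X features, and the targets are the field-measured plot variables.
    
    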

  3. Clinical course and outcome of patients with high-level microsatellite instability cancers in a real-life setting: a retrospective analysis

    Directory of Open Access Journals (Sweden)

    Halpern N

    2017-03-01

    Full Text Available Naama Halpern,1 Yael Goldberg,2 Luna Kadouri,2 Morasha Duvdevani,2 Tamar Hamburger,2 Tamar Peretz,2 Ayala Hubert2 1Institute of Oncology, The Chaim Sheba Medical Center, Tel Hashomer, Israel; 2Sharett Institute of Oncology, Hadassah Medical Center, Hebrew University, Jerusalem, Israel. Background: The prognostic and predictive significance of the high-level microsatellite instability (MSI-H) phenotype in various malignancies is unclear. We describe the characteristics, clinical course, and outcomes of patients with MSI-H malignancies treated in a real-life hospital setting. Patients and methods: A retrospective analysis of MSI-H cancer patient files was conducted. We analyzed the genetic data, clinical characteristics, and oncological treatments, including chemotherapy and surgical interventions. Results: Clinical data of 73 MSI-H cancer patients were available. Mean age at diagnosis of first malignancy was 52.3 years. Eight patients (11%) had more than four malignancies each. Most patients (76%) had colorectal cancer (CRC). Seventeen patients (23%) had only extracolonic malignancies. Eighteen women (36%) had gynecological malignancy. Nine women (18%) had breast cancer. Mean follow-up was 8.5 years. Five-year overall survival and disease-free survival of all MSI-H cancer patients from first malignancy were 86% and 74.6%, respectively. Five-year overall survival rates of stage 2, 3, and 4 MSI-H CRC patients were 89.5%, 58.4%, and 22.9%, respectively. Conclusion: Although the overall prognosis of MSI-H cancer patients is favorable, this advantage may not be maintained in advanced MSI-H CRC patients. Keywords: microsatellite instability, malignancy, treatment, outcome

  4. Development of a working set of waste package performance criteria for deepsea disposal of low-level radioactive waste. Final report

    International Nuclear Information System (INIS)

    Columbo, P.; Fuhrmann, M.; Neilson, R.M. Jr; Sailor, V.L.

    1982-11-01

    The United States ocean dumping regulations developed pursuant to PL92-532, the Marine Protection, Research, and Sanctuaries Act of 1972, as amended, provide for a general policy of isolation and containment of low-level radioactive waste after disposal into the ocean. In order to determine whether any particular waste packaging system is adequate to meet this general requirement, it is necessary to establish a set of performance criteria against which to evaluate a particular packaging system. These performance criteria must present requirements for the behavior of the waste in combination with its immobilization agent and outer container in a deepsea environment. This report presents a working set of waste package performance criteria, and includes a glossary of terms, characteristics of low-level radioactive waste, radioisotopes of importance in low-level radioactive waste, and a summary of domestic and international regulations which control the ocean disposal of these wastes

  5. [Noninvasive detection of hematocrit and the mean corpuscular hemoglobin concentration levels by Vis-NIR spectroscopy].

    Science.gov (United States)

    Zhao, Jing; Lin, Ling; Lu, Xiao-Zuo; Li, Gang

    2014-03-01

    Hematocrit (HCT) and mean corpuscular hemoglobin concentration (MCHC) play a very important role in preventing cardiovascular disease and anemia. A method was developed on the basis of spectroscopy to detect HCT and MCHC noninvasively and accurately. Anatomical studies have shown that blood rheology abnormalities and changes in blood viscosity can cause changes in the tongue, so there is a certain correlation between the tongue and blood components. Reflectance spectra from the tongue tips of 240 volunteers were collected; tongue pictures were captured and the biochemical analysis results were recorded at the same time. The 240 samples were separated into two parts: calibration samples and test samples. The spectra were then subjected to a partial least squares regression (PLSR) analysis to develop mathematical models for predicting HCT and MCHC levels. The correlation between the data and the predictions of HCT and MCHC yielded calibration values of 0.998 and 0.938. HCT and MCHC levels of test samples predicted by this model from visible-near infrared spectra provided coefficients of determination in prediction of 0.979 and 0.883, with average relative errors of prediction of 1.65% and 1.88% and root mean square errors of prediction of 4.066 and 4.139. The experimental results show that the model can predict HCT and MCHC well, and that the spectroscopic method, combined with PLSR analysis, may provide a promising approach to the noninvasive measurement of human HCT and MCHC.
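    The PLSR calibration described above can be illustrated with a minimal single-component PLS1 sketch in pure Python. This is not the paper's calibration pipeline: real spectral models use many wavelength channels, several latent variables and cross-validation, and the toy "spectra" here are invented.

    ```python
    def pls1_one_component(X, y):
        """Fit a single-component PLS1 model: project centered X onto the
        direction of maximum covariance with y, then regress y on the
        resulting latent score."""
        n, p = len(X), len(X[0])
        mx = [sum(row[j] for row in X) / n for j in range(p)]
        my = sum(y) / n
        Xc = [[row[j] - mx[j] for j in range(p)] for row in X]
        yc = [v - my for v in y]

        # weight vector: covariance direction of X with y, normalized
        w = [sum(Xc[i][j] * yc[i] for i in range(n)) for j in range(p)]
        norm = sum(v * v for v in w) ** 0.5
        w = [v / norm for v in w]

        # latent scores and regression coefficient on the latent variable
        t = [sum(Xc[i][j] * w[j] for j in range(p)) for i in range(n)]
        b = sum(ti * yi for ti, yi in zip(t, yc)) / sum(ti * ti for ti in t)

        def predict(x):
            score = sum((x[j] - mx[j]) * w[j] for j in range(p))
            return my + b * score

        return predict

    # Toy two-channel "spectra": y depends only on the first channel
    predict = pls1_one_component([[1, 0], [2, 0], [3, 0], [4, 0]], [2, 4, 6, 8])
    print(predict([5, 0]))  # → 10.0
    ```
    
    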

  6. Level Set-Based Topology Optimization for the Design of an Electromagnetic Cloak With Ferrite Material

    DEFF Research Database (Denmark)

    Otomori, Masaki; Yamada, Takayuki; Andkjær, Jacob Anders

    2013-01-01

    This paper presents a structural optimization method for the design of an electromagnetic cloak made of ferrite material. Ferrite materials exhibit a frequency-dependent degree of permeability, due to a magnetic resonance phenomenon that can be altered by changing the magnitude of an externally applied magnetic field. A level set-based topology optimization method incorporating a fictitious interface energy is used to find optimized configurations of the ferrite material. The numerical results demonstrate that the optimization successfully found an appropriate ferrite configuration that functions as an electromagnetic cloak.

  7. A spectral mean for random closed curves

    NARCIS (Netherlands)

    M.N.M. van Lieshout (Marie-Colette)

    2016-01-01

    textabstractWe propose a spectral mean for closed sets described by sample points on their boundaries subject to mis-alignment and noise. We derive maximum likelihood estimators for the model and noise parameters in the Fourier domain. We estimate the unknown mean boundary curve by

  8. [Influence Additional Cognitive Tasks on EEG Beta Rhythm Parameters during Forming and Testing Set to Perception of the Facial Expression].

    Science.gov (United States)

    Yakovenko, I A; Cheremushkin, E A; Kozlov, M K

    2015-01-01

    Changes in beta rhythm parameters under working memory load, produced by extending the interstimulus interval between the target and triggering stimuli to 16 s, were investigated in 70 healthy adults in two series of experiments with a set to a facial expression. In the second series, to strengthen the load, an additional cognitive task was introduced at the middle of this interval in the form of conditioning Go/NoGo stimuli: circles of blue or green color. Data analysis was carried out by means of continuous wavelet transformation based on the "mother" complex Morlet wavelet in the range of 1-35 Hz. Beta rhythm power was characterized by the mean level and maxima of the wavelet transformation coefficient (WLC) and by the latent periods of the maxima. Introduction of the additional cognitive task in the pause between the target and triggering stimuli led to an essential increase in the absolute values of the mean level of beta rhythm WLC and in the relative sizes of the maxima of beta rhythm WLC. In the series of experiments without the conditioning stimulus, subjects with a large number of mistakes (from 6 to 40), i.e., a rigid set, were characterized at the forming stage by higher values of the mean level of beta rhythm WLC than subjects with a small number of mistakes (up to 5), i.e., a plastic set. Introduction of the conditioning stimuli led to smoothing of the intergroup distinctions throughout the experiment.

  9. Level-set reconstruction algorithm for ultrafast limited-angle X-ray computed tomography of two-phase flows.

    Science.gov (United States)

    Bieberle, M; Hampel, U

    2015-06-13

    Tomographic image reconstruction is based on recovering an object distribution from its projections, which have been acquired from all angular views around the object. If the angular range is limited to less than 180° of parallel projections, typical reconstruction artefacts arise when using standard algorithms. To compensate for this, specialized algorithms using a priori information about the object need to be applied. The application behind this work is ultrafast limited-angle X-ray computed tomography of two-phase flows. Here, only a binary distribution of the two phases needs to be reconstructed, which reduces the complexity of the inverse problem. To solve it, a new reconstruction algorithm (LSR) based on the level-set method is proposed. It includes one force function term accounting for matching the projection data and one incorporating a curvature-dependent smoothing of the phase boundary. The algorithm has been validated using simulated as well as measured projections of known structures, and its performance has been compared to the algebraic reconstruction technique and a binary derivative of it. The validation as well as the application of the level-set reconstruction on a dynamic two-phase flow demonstrated its applicability and its advantages over other reconstruction algorithms. © 2015 The Author(s) Published by the Royal Society. All rights reserved.
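    The algebraic reconstruction technique (ART) used as a baseline above can be sketched as a Kaczmarz iteration: cycle through the projection equations, each time projecting the current estimate onto the hyperplane of one equation. This is a generic ART sketch with a made-up ray-weight matrix, not the authors' level-set (LSR) algorithm.

    ```python
    def art_reconstruct(A, b, sweeps=500):
        """Kaczmarz/ART: for each projection equation a_i . x = b_i,
        project the current estimate onto that equation's hyperplane."""
        x = [0.0] * len(A[0])
        for _ in range(sweeps):
            for a_i, b_i in zip(A, b):
                dot = sum(a * v for a, v in zip(a_i, x))
                norm2 = sum(a * a for a in a_i)
                lam = (b_i - dot) / norm2
                x = [v + lam * a for v, a in zip(x, a_i)]
        return x

    # Tiny 2-pixel "object" seen through two rays (hypothetical weights)
    A = [[1.0, 1.0], [1.0, 2.0]]   # ray-weight matrix
    b = [3.0, 5.0]                 # measured projections
    print(art_reconstruct(A, b))   # converges toward [1.0, 2.0]
    ```

    With limited angular coverage, many such systems are underdetermined, which is why the abstract's binary prior (only two phases) and the level-set regularization matter.
    
    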

  10. The Hb E (HBB: c.79G>A), Mean Corpuscular Volume, Mean Corpuscular Hemoglobin Cutoff Points in Double Heterozygous Hb E/- -SEA α-Thalassemia-1 Carriers are Dependent on Hemoglobin Levels.

    Science.gov (United States)

    Leckngam, Prapapun; Limweeraprajak, Ektong; Kiewkarnkha, Tiemjan; Tatu, Thanusak

    2017-01-01

    Identifying double heterozygosities of Hb E (HBB: c.79G>A)/- - SEA (Southeast Asian) (α-thalassemia-1) (α-thal-1) in patients first diagnosed as carrying Hb E is important in thalassemia control. Low Hb E, mean corpuscular volume (MCV) and mean corpuscular hemoglobin (Hb) (MCH) levels have been observed in this double heterozygosity. However, the cutoff points of these parameters have never been systematically established. Here, we analyzed Hb E and red blood cell (RBC) parameters in 372 Hb E patients grouped by Hb levels and by the status of - - SEA and -α 3.7 (α-thal-2; rightward) deletions, to establish the cutoff points. The established cutoff points were then evaluated in 184 Hb E patients. The cutoff points of Hb E, MCV and MCH were found to be significantly dependent on the Hb levels. In the group with Hb levels below 10.0 g/dL, the cutoff points of Hb E, MCV and MCH were 21.2%, 64.9 fL and 21.0 pg, respectively, and were 25.6%, 72.8 fL and 23.9 pg, respectively, in the group with Hb levels 10.0-11.9 g/dL. Finally, in the group with Hb levels ≥12.0 g/dL, the cutoff points of Hb E, MCV and MCH were 27.1%, 76.7 fL and 25.3 pg, respectively. Thus, to screen for the double heterozygous Hb E/- - SEA anomaly in patients initially diagnosed as carrying Hb E, the Hb levels must be taken into account in choosing suitable cutoff points for these three parameters.

  11. How Ordinary Meaning Underpins the Meaning of Mathematics.

    Science.gov (United States)

    Ormell, Christopher

    1991-01-01

    Discusses the meaning of mathematics by looking at its uses in the real world. Offers mathematical modeling as a way to represent mathematical applications in real or potential situations. Presents levels of applicability, modus operandi, relationship to "pure mathematics," and consequences for education for mathematical modeling. (MDH)

  12. Modeling Restrained Shrinkage Induced Cracking in Concrete Rings Using the Thick Level Set Approach

    Directory of Open Access Journals (Sweden)

    Rebecca Nakhoul

    2018-03-01

    Full Text Available Modeling restrained shrinkage-induced damage and cracking in concrete is addressed herein. The novel Thick Level Set (TLS) damage growth and crack propagation model is used and adapted by introducing a shrinkage contribution into the formulation. The capacity of the TLS to predict damage evolution, crack initiation and growth triggered by restrained shrinkage in the absence of external loads is evaluated. A study dealing with shrinkage-induced cracking in elliptical concrete rings is presented. Key results, such as the effect of ring oblateness on stress distribution and the critical shrinkage strain needed to initiate damage, are highlighted. In addition, crack positions are compared to those observed in experiments and are found to be satisfactory.

  13. A spectral mean for random closed curves

    NARCIS (Netherlands)

    van Lieshout, Maria Nicolette Margaretha

    2016-01-01

    We propose a spectral mean for closed sets described by sample points on their boundaries subject to mis-alignment and noise. We derive maximum likelihood estimators for the model and noise parameters in the Fourier domain. We estimate the unknown mean boundary curve by back-transformation and

  14. Effect of liner design, pulsator setting, and vacuum level on bovine teat tissue changes and milking characteristics as measured by ultrasonography

    Directory of Open Access Journals (Sweden)

    Gleeson David E

    2004-05-01

    Full Text Available Friesian-type dairy cows were milked with different machine settings to determine the effect of these settings on teat tissue reaction and on milking characteristics. Three teat-cup liner designs were used with varying upper barrel dimensions (wide-bore WB = 31.6 mm; narrow-bore NB = 21.0 mm; narrow-bore NB1 = 25.0 mm). These liners were tested with alternate and simultaneous pulsation patterns, pulsator ratios (60:40 and 67:33) and three system vacuum levels (40, 44 and 50 kPa). Teat tissue was measured using ultrasonography, before milking and directly after milking. The measurements recorded were teat canal length (TCL), teat diameter (TD), cistern diameter (CD) and teat wall thickness (TWT). Teat tissue changes were similar with a system vacuum level of either 50 kPa (mid-level) or 40 kPa (low-level). Widening the liner upper barrel bore dimension from 21.0 mm (P

  15. A book of set theory

    CERN Document Server

    Pinter, Charles C

    2014-01-01

    Suitable for upper-level undergraduates, this accessible approach to set theory poses rigorous but simple arguments. Each definition is accompanied by commentary that motivates and explains new concepts. Starting with a repetition of the familiar arguments of elementary set theory, the level of abstract thinking gradually rises for a progressive increase in complexity. A historical introduction presents a brief account of the growth of set theory, with special emphasis on problems that led to the development of the various systems of axiomatic set theory. Subsequent chapters explore classes and

  16. Data set for Tifinagh handwriting character recognition

    Directory of Open Access Journals (Sweden)

    Omar Bencharef

    2015-09-01

    Full Text Available The Tifinagh alphabet-IRCAM is the official alphabet of the Amazigh language, widely used in North Africa [1]. It includes thirty-one basic letters and two letters each composed of a base letter followed by the sign of labialization. Normalized only in 2003 (Unicode) [2], IRCAM-Tifinagh is a young character repertoire which needs more work at all levels. In this context we propose a data set for handwritten Tifinagh characters composed of 1,376 images, 43 images for each character. The data set can be used to train a Tifinagh character recognition system, or to extract the main characteristics of each character.

  17. Irreducible descriptive sets of attributes for information systems

    KAUST Repository

    Moshkov, Mikhail; Skowron, Andrzej; Suraj, Zbigniew

    2010-01-01

    An irreducible descriptive set for the considered information system S is a minimal (relative to the inclusion) set B of attributes which defines exactly the set Ext(S) by means of true and realizable rules constructed over attributes from the considered set B.

  18. The global Minmax k-means algorithm.

    Science.gov (United States)

    Wang, Xiaoyan; Bai, Yanping

    2016-01-01

    The global k-means algorithm is an incremental approach to clustering that dynamically adds one cluster center at a time through a deterministic global search procedure from suitable initial positions, and employs k-means to minimize the sum of the intra-cluster variances. However, the global k-means algorithm sometimes produces singleton clusters, and the initial positions are sometimes bad; after a bad initialization, the k-means algorithm easily settles into a poor local optimum. In this paper, we modified the global k-means algorithm to eliminate the singleton clusters first, and then applied the MinMax k-means clustering error method to the global k-means algorithm to overcome the effect of bad initialization, yielding the proposed global MinMax k-means algorithm. The proposed clustering method is tested on some popular data sets and compared to the k-means algorithm, the global k-means algorithm and the MinMax k-means algorithm. The experimental results show that our proposed algorithm outperforms the other algorithms mentioned in the paper.
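    All three variants discussed in this record build on the basic k-means (Lloyd) iteration. The sketch below is a minimal, generic version of that step in 2-D, with fixed, hand-picked initial centers; it is not the global or MinMax variant itself (the global variant adds centers incrementally, and MinMax reweights the worst cluster).

    ```python
    def kmeans(points, centers, iters=20):
        """Plain Lloyd's k-means on 2-D points: assign each point to its
        nearest center, then move each center to the mean of its cluster."""
        for _ in range(iters):
            clusters = [[] for _ in centers]
            for p in points:
                j = min(range(len(centers)),
                        key=lambda c: (p[0] - centers[c][0]) ** 2
                                      + (p[1] - centers[c][1]) ** 2)
                clusters[j].append(p)
            new_centers = []
            for j, cl in enumerate(clusters):
                if cl:
                    new_centers.append((sum(p[0] for p in cl) / len(cl),
                                        sum(p[1] for p in cl) / len(cl)))
                else:
                    new_centers.append(centers[j])  # keep center with no points
            centers = new_centers
        return centers

    # Two well-separated hypothetical clusters
    points = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (11, 10)]
    print(kmeans(points, [(0.0, 0.0), (10.0, 10.0)]))
    ```

    The empty-cluster branch illustrates exactly the singleton/degenerate-cluster issue the abstract mentions: plain k-means has no mechanism to recover from a badly placed center.
    
    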

  19. The Personal Meaning of Participation: Enduring Involvement.

    Science.gov (United States)

    McIntyre, N.

    1989-01-01

    Examines the personal meaning of participation, discussing recreation and consumer behavior literature, the development of an instrument to measure the concept, and the relationship between commitment to camping and choice of campground setting. Personal meaning of participation seems to be best represented by the concept of enduring involvement.…

  20. Measurement of thermally ablated lesions in sonoelastographic images using level set methods

    Science.gov (United States)

    Castaneda, Benjamin; Tamez-Pena, Jose Gerardo; Zhang, Man; Hoyt, Kenneth; Bylund, Kevin; Christensen, Jared; Saad, Wael; Strang, John; Rubens, Deborah J.; Parker, Kevin J.

    2008-03-01

    The capability of sonoelastography to detect lesions based on elasticity contrast can be applied to monitor the creation of thermally ablated lesions. Currently, segmentation of lesions depicted in sonoelastographic images is performed manually, which is a time-consuming process prone to significant intra- and inter-observer variability. This work presents a semi-automated segmentation algorithm for sonoelastographic data. The user starts by planting a seed in the perceived center of the lesion. Fast marching methods use this information to create an initial estimate of the lesion. Subsequently, level set methods refine its final shape by attaching the segmented contour to edges in the image while maintaining smoothness. The algorithm is applied to in vivo sonoelastographic images from twenty-five thermally ablated lesions created in porcine livers. The estimated area is compared to results from manual segmentation and gross pathology images. Results show that the algorithm outperforms manual segmentation in accuracy and inter- and intra-observer variability. The processing time per image is significantly reduced.
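The pipeline above (a user seed, a fast-marching initial estimate, then edge-attached level-set refinement) can be caricatured in a few lines. The sketch below is our own illustrative stand-in, not the authors' implementation: it collapses both stages into a single region growth limited by an edge-stopping function of the kind used in geodesic active contours.

```python
import numpy as np
from scipy import ndimage

def segment_from_seed(image, seed, iters=200, edge_scale=5.0):
    """Grow a seed region outward, but only through pixels where the
    edge-stopping indicator g = 1 / (1 + edge_scale * |grad I|^2) stays
    high, so the front halts at strong intensity edges."""
    gy, gx = np.gradient(image.astype(float))
    g = 1.0 / (1.0 + edge_scale * (gx ** 2 + gy ** 2))
    mask = np.zeros(image.shape, bool)
    mask[seed] = True
    for _ in range(iters):
        grown = ndimage.binary_dilation(mask) & (g > 0.5)
        if (grown == mask).all():
            break
        mask = grown
    return mask

# Synthetic "lesion": a bright disk on a dark background.
img = np.zeros((50, 50))
yy, xx = np.mgrid[:50, :50]
img[(yy - 25) ** 2 + (xx - 25) ** 2 <= 15 ** 2] = 1.0
mask = segment_from_seed(img, (25, 25))
print(mask.sum())   # roughly the interior area of the disk
```

A real level-set refinement would additionally penalize contour curvature to keep the boundary smooth; here the edge term alone stops the front.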

  1. Regression Analysis of Long-Term Profile Ozone Data Set from BUV Instruments

    Science.gov (United States)

    Stolarski, Richard S.

    2005-01-01

    We have produced a profile merged ozone data set (MOD) based on the SBUV/SBUV2 series of nadir-viewing satellite backscatter instruments, covering the period from November 1978 to December 2003. In 2004, data from the Nimbus 7 SBUV and NOAA 9, 11, and 16 SBUV/2 instruments were reprocessed using the Version 8 (V8) algorithm and the most recent calibrations. More recently, data from the Nimbus 4 BUV instrument, which was operational from 1970 to 1977, were also reprocessed using the V8 algorithm. As part of the V8 profile calibration, the Nimbus 7 and NOAA 9 (1993-1997 only) instrument calibrations have been adjusted to match the NOAA 11 calibration, which was established from comparisons with SSBUV shuttle flight data. Differences between NOAA 11, Nimbus 7 and NOAA 9 profile zonal means are within plus or minus 5% at all levels when averaged over the respective periods of data overlap. NOAA 16 SBUV/2 data have insufficient overlap with NOAA 11, so its calibration is based on pre-flight information. Mean differences over 4 months of overlap are within plus or minus 7%. Given this level of agreement between the data sets, we simply average the ozone values during periods of instrument overlap to produce the MOD profile data set. Initial comparisons of coincident matches of N4 BUV and Arosa Umkehr data show mean differences of 0.5 (0.5)% at 30 km; 7.5 (0.5)% at 35 km; and 11 (0.7)% at 40 km, where the number in parentheses is the standard error of the mean. In this study, we use the MOD profile data set (1978-2003) to estimate the change in profile ozone due to changing stratospheric chlorine levels. We use a standard linear regression model with proxies for the seasonal cycle, solar cycle, QBO, and ozone trend. To account for the non-linearity of stratospheric chlorine levels since the late 1990s, we use a time series of Effective Chlorine, defined as the global average of Chlorine + 50 * Bromine at 1 hPa, as the trend proxy. The Effective Chlorine data are taken from
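The regression setup described (seasonal harmonics plus a non-linear trend proxy, fitted by least squares) can be sketched as follows. All series and coefficients here are synthetic and purely illustrative; the real analysis also includes solar-cycle and QBO proxies.

```python
import numpy as np

# Synthetic monthly ozone anomalies: a seasonal cycle plus a linear
# response to an "effective chlorine" proxy that rises and then levels off.
rng = np.random.default_rng(1)
n = 300                                   # 25 years of monthly values
t = np.arange(n)
chlorine = np.minimum(t, 200) / 200.0     # non-linear trend proxy
y = 2.0 * np.sin(2 * np.pi * t / 12) - 5.0 * chlorine + rng.normal(0, 0.3, n)

# Design matrix: intercept, seasonal harmonics, trend proxy.
X = np.column_stack([np.ones(n),
                     np.sin(2 * np.pi * t / 12),
                     np.cos(2 * np.pi * t / 12),
                     chlorine])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(beta[3])   # recovers the response to the chlorine proxy (about -5)
```

Using the chlorine series itself as the trend proxy, rather than a straight line in time, is what lets the fit accommodate the post-1990s flattening of stratospheric chlorine.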

  2. First-Class Object Sets

    DEFF Research Database (Denmark)

    Ernst, Erik

    Typically, objects are monolithic entities with a fixed interface. To increase the flexibility in this area, this paper presents first-class object sets as a language construct. An object set offers an interface which is a disjoint union of the interfaces of its member objects. It may also be used...... for a special kind of method invocation involving multiple objects in a dynamic lookup process. With support for feature access and late-bound method calls object sets are similar to ordinary objects, only more flexible. The approach is made precise by means of a small calculus, and the soundness of its type...

  3. Risk implications of renewable support instruments: Comparative analysis of feed-in tariffs and premiums using a mean-variance approach

    DEFF Research Database (Denmark)

    Kitzing, Lena

    2014-01-01

    Using cash flow analysis, Monte Carlo simulations and mean-variance analysis, we quantify risk-return relationships for an exemplary offshore wind park in a simplified setting. We show that feed-in tariffs systematically require lower direct support levels than feed-in premiums while providing the same...
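The core of the comparison can be sketched with a toy Monte Carlo simulation: under a feed-in tariff, revenue depends only on production, while under a feed-in premium it also carries market price risk. All numbers below are invented for illustration and are not the paper's case data.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000
price = rng.normal(50, 15, n)        # uncertain market price, EUR/MWh
output = rng.normal(1.0, 0.1, n)     # uncertain production, normalised

tariff = 80.0                        # FIT: fixed price replaces the market price
premium = 30.0                       # FIP: fixed adder on top of the market price

rev_fit = tariff * output            # price risk removed entirely
rev_fip = (price + premium) * output # price and volume risk combined

for name, r in [("FIT", rev_fit), ("FIP", rev_fip)]:
    print(f"{name}: mean={r.mean():.1f}  std={r.std():.1f}")
```

With the support levels chosen so both schemes deliver the same expected revenue, the tariff case shows a markedly lower standard deviation, which in a mean-variance framework translates into a lower required risk premium for investors.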

  4. Accurate prediction of complex free surface flow around a high speed craft using a single-phase level set method

    Science.gov (United States)

    Broglia, Riccardo; Durante, Danilo

    2017-11-01

    This paper focuses on the analysis of a challenging free surface flow problem involving a surface vessel moving at high speeds, or planing. The investigation is performed using a general purpose high-Reynolds-number free surface solver developed at CNR-INSEAN. The methodology is based on a second order finite volume discretization of the unsteady Reynolds-averaged Navier-Stokes equations (Di Mascio et al. in A second order Godunov-type scheme for naval hydrodynamics, Kluwer Academic/Plenum Publishers, Dordrecht, pp 253-261, 2001; Proceedings of 16th international offshore and polar engineering conference, San Francisco, CA, USA, 2006; J Mar Sci Technol 14:19-29, 2009); air/water interface dynamics is accurately modeled by a non-standard level set approach (Di Mascio et al. in Comput Fluids 36(5):868-886, 2007a), known as the single-phase level set method. In this algorithm the governing equations are solved only in the water phase, whereas the numerical domain in the air phase is used for a suitable extension of the fluid dynamic variables. The level set function is used to track the free surface evolution; dynamic boundary conditions are enforced directly on the interface. This approach allows accurate prediction of the evolution of the free surface even in the presence of violent breaking wave phenomena, maintaining a sharp interface without any need to smear out the fluid properties across the two phases. This paper is aimed at the prediction of the complex free-surface flow field generated by a deep-V planing boat at medium and high Froude numbers (from 0.6 up to 1.2). In the present work, the planing hull is treated as a two-degree-of-freedom rigid object. The flow field is characterized by the presence of thin water sheets, several energetic breaking waves and plungings. The computational results include convergence of the trim angle, sinkage and resistance under grid refinement; high-quality experimental data are used for the purposes of validation, allowing to

  5. A game on the universe of sets

    International Nuclear Information System (INIS)

    Saveliev, D I

    2008-01-01

    Working in set theory without the axiom of regularity, we consider a two-person game on the universe of sets. In this game, the players choose in turn an element of a given set, an element of this element and so on. A player wins if he leaves his opponent no possibility of making a move, that is, if he has chosen the empty set. Winning sets (those admitting a winning strategy for one of the players) form a natural hierarchy with levels indexed by ordinals (in the finite case, the ordinal indicates the shortest length of a winning strategy). We show that the class of hereditarily winning sets is an inner model containing all well-founded sets and that each of the four possible relations between the universe, the class of hereditarily winning sets, and the class of well-founded sets is consistent. As far as the class of winning sets is concerned, either it is equal to the whole universe, or many of the axioms of set theory cannot hold on this class. Somewhat surprisingly, this does not apply to the axiom of regularity: we show that the failure of this axiom is consistent with its relativization to winning sets. We then establish more subtle properties of winning non-well-founded sets. We describe all classes of ordinals for which the following is consistent: winning sets without minimal elements (in the sense of membership) occur exactly at the levels indexed by the ordinals of this class. In particular, we show that if an even level of the hierarchy of winning sets contains a set without minimal elements, then all higher levels contain such sets. We show that the failure of the axiom of regularity implies that all odd levels contain sets without minimal elements, but it is consistent with the absence of such sets at all even levels as well as with their appearance at an arbitrary even non-limit or countable-cofinal level. To obtain consistency results, we propose a new method for obtaining models with non-well-founded sets. Finally, we study how long this game can
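For well-founded, hereditarily finite sets the winning condition described above reduces to a simple recursion: the player to move wins iff the current set has an element that is losing for the opponent. A small sketch (our own illustration, modelling sets as nested frozensets):

```python
from functools import lru_cache

# The game: players alternately pick an element of the current set, then
# an element of that element, and so on; whoever picks the empty set wins,
# because the opponent is left with no move.

@lru_cache(maxsize=None)
def winning(s: frozenset) -> bool:
    """The mover wins iff some move leads to a position losing for the opponent."""
    return any(not winning(e) for e in s)

empty = frozenset()
v0 = empty                  # no moves available: the mover loses
v1 = frozenset({empty})     # pick {} and win immediately
v2 = frozenset({v1})        # the only move hands the opponent a win
print(winning(v0), winning(v1), winning(v2))  # False True False
```

The recursion terminates exactly because these sets are well-founded; the paper's interesting cases are the non-well-founded sets, where no such bottom-up evaluation exists and the hierarchy of winning sets must be analysed transfinite level by level.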

  6. Reassessment of 20th century global mean sea level rise

    Science.gov (United States)

    Dangendorf, Sönke; Marcos, Marta; Wöppelmann, Guy; Conrad, Clinton P.; Frederikse, Thomas; Riva, Riccardo

    2017-01-01

    The rate at which global mean sea level (GMSL) rose during the 20th century is uncertain, with little consensus between various reconstructions that indicate rates of rise ranging from 1.3 to 2 mm⋅y⁻¹. Here we present a 20th-century GMSL reconstruction computed using an area-weighting technique for averaging tide gauge records that both incorporates up-to-date observations of vertical land motion (VLM) and corrections for local geoid changes resulting from ice melting and terrestrial freshwater storage and allows for the identification of possible differences compared with earlier attempts. Our reconstructed GMSL trend of 1.1 ± 0.3 mm⋅y⁻¹ (1σ) before 1990 falls below previous estimates, whereas our estimate of 3.1 ± 1.4 mm⋅y⁻¹ from 1993 to 2012 is consistent with independent estimates from satellite altimetry, leading to overall acceleration larger than previously suggested. This feature is geographically dominated by the Indian Ocean–Southern Pacific region, marking a transition from lower-than-average rates before 1990 toward unprecedented high rates in recent decades. We demonstrate that VLM corrections, area weighting, and our use of a common reference datum for tide gauges may explain the lower rates compared with earlier GMSL estimates in approximately equal proportion. The trends and multidecadal variability of our GMSL curve also compare well to the sum of individual contributions obtained from historical outputs of the Coupled Model Intercomparison Project Phase 5. This, in turn, increases our confidence in process-based projections presented in the Fifth Assessment Report of the Intergovernmental Panel on Climate Change. PMID:28533403
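The area-weighting idea can be illustrated with a toy calculation: average gauge trends within latitude bands, then weight the bands by cos(latitude) (proportional to the surface area of a band) so that a dense cluster of gauges on one coastline does not dominate the global mean. Gauge positions and trends below are invented for illustration.

```python
import numpy as np

lat   = np.array([52, 54, 51, 35, -33, 5])          # gauge latitudes (deg)
trend = np.array([1.4, 1.5, 1.6, 1.1, 0.9, 1.2])    # local SL trends (mm/yr)

edges = np.arange(-90, 91, 10)                      # 10-degree latitude bands
idx = np.digitize(lat, edges) - 1                   # band index of each gauge
bands = np.unique(idx)
band_mean = np.array([trend[idx == b].mean() for b in bands])
band_lat  = np.array([edges[b] + 5.0 for b in bands])   # band centres
w = np.cos(np.radians(band_lat))                    # area weight per band
gmsl_trend = (w * band_mean).sum() / w.sum()
print(round(gmsl_trend, 2))   # below the unweighted gauge mean of ~1.28
```

The three clustered North Sea gauges collapse into a single band average, pulling the weighted estimate below the naive mean over gauges, which is the qualitative effect area weighting has on gauge-dense regions.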

  7. The radioactive contamination level in Croatia by means of radioactive rainwaters, caused by the accident in NPP 'Lenin'

    International Nuclear Information System (INIS)

    Barishicj, D.; Koshuticj, K.; Kvastek, K.; Lulicj, S.; Tuta, J.; Vertachnik, A.; Vrhovac, A.

    1987-01-01

    In this paper, the radioactive contamination level in Croatia caused by radioactive rainwater following the accident at NPP 'Lenin' is described. The results combine measured and evaluated data, from which a map of the radioactive contamination in Croatia caused by radioactive rainwater between April 28 and May 20, 1986 has been constructed. (author) 3 tabs.; 5 figs

  8. Single pass kernel k-means clustering method

    Indian Academy of Sciences (India)

    paper proposes a simple and faster version of the kernel k-means clustering ... It has been considered as an important tool ... On the other hand, kernel-based clustering methods, like kernel k-means clus- ..... able at the UCI machine learning repository (Murphy 1994). ... All the data sets have only numeric valued features.

  9. An analysis on the level changing of UET and SET in blood and urine in early stage of kidney disease caused by diabetes

    International Nuclear Information System (INIS)

    Liu Juzhen; Yang Wenying; Cai Tietie

    2001-01-01

    Objective: To study the relationship between UET and SET variation and early changes of diabetic nephropathy. Methods: UET and SET were measured in 24 patients with diabetes, 19 with early-stage diabetic nephropathy, 21 with advanced diabetic nephropathy, and 30 normal controls. Results: A marked rise of UET and SET was observed in all patients when compared to normal controls (P 2 -macroglobulin was revealed (P<0.05). Conclusion: UET and SET levels rose as diabetic nephropathy worsened. UET and SET may therefore serve as sensitive indices for diagnosing early-stage diabetic nephropathy

  10. How Search for Meaning Interacts with Complex Categories of Meaning in Life and Subjective Well-Being?

    Science.gov (United States)

    Damásio, Bruno Figueiredo; Koller, Sílvia Helena

    2015-03-03

    This study sought to assess how the search for meaning interacts with crisis of meaning and with different categories of meaning in life (meaningfulness, crisis of meaning, existential indifference, and existential conflict). Furthermore, the moderation role of search for meaning between the relation of categories of meaning and subjective well-being (SWB) was also evaluated. Participants included 3,034 subjects (63.9% women) ranging in age from 18 to 91 (M = 33.90; SD = 15.01) years old from 22 Brazilian states. Zero-order correlations and a factorial MANOVA were implemented. Positive low correlations were found for search for meaning and crisis of meaning (r = .258; p < .001). Search for meaning presented a small-effect size moderation effect on the relation of the different categories of meaning with subjective happiness, F(6, 3008) = 2.698, p < .05; η2 = .004, but not for satisfaction with life, F(6, 3008) = .935, p = .47; η2 = .002. The differences on the levels of subjective happiness of those inserted in existential indifferent and conflicting categories differ depending on the levels of search for meaning. Further directions for future studies are proposed.

  11. The alternative means process for the Port Hope Area Initiative

    International Nuclear Information System (INIS)

    O'Neill, J.E.; Campbell, D.; Rossi, R.

    2006-01-01

    In March of 2001, the Government of Canada, the Town of Port Hope, Hope Township and the Municipality of Clarington agreed to the cleanup and long-term management of historic, low-level radioactive waste materials in these communities. The agreement identified conceptual designs for long-term management facilities for the wastes. Two environmental assessments (EAs) of the proposed long-term management facilities have been initiated as part of the Port Hope Area Initiative (PHAI); namely the Port Hope Long-Term Low-Level Radioactive Waste Management Project and the Port Granby Long-Term Low-Level Radioactive Waste Management Project. A requirement set out in the Scope for the EAs is the consideration of technically and economically feasible Alternative Means of carrying out the PHAI projects. Alternative Means are the various ways that the projects could be implemented, such as alternative technologies, sites, transportation routes, etc. Early in the overall EA processes the Low-Level Radioactive Waste Management Office (LLRWMO), which is responsible for undertaking the EAs, recognized that it was facing a significant challenge; namely, the successful completion of a clear, technically sound and defendable Alternative Means analysis, including consultation with and acceptance by the community. This would be a fundamental requirement for the success of the PHAI EAs. A further challenge was to develop consistent assessment methodologies for the Port Hope and Port Granby projects, which were both initiated under the PHAI at the same time. Although similar in many respects, the two projects have major differences. For example, the Port Hope Project, with more sources of contamination within a built- up urban area is more complex and has a broader range of potential solutions to be considered than the rural Port Granby Project. This paper describes how the LLRWMO met that challenge, developed and implemented a successful Alternative Means process and presents the

  12. Effects of goal-setting skills on students’ academic performance in English language in Enugu, Nigeria

    Directory of Open Access Journals (Sweden)

    Abe Iyabo Idowu

    2014-07-01

    Full Text Available The study investigated the effectiveness of goal-setting skills on Senior Secondary II students’ academic performance in English language in Enugu Metropolis, Enugu State, Nigeria. A quasi-experimental pre-test, post-test control group design was adopted for the study. The initial sample was 147 participants (male and female Senior Secondary School II students) drawn from two public schools in the Enugu zone of Enugu Metropolis. The final sample for the intervention consisted of 80 participants. This sample satisfied the condition for selection from the baseline data. Two research hypotheses were formulated and tested at the 0.05 level of significance. Data generated were analyzed using the mean, standard deviation and the t-test statistical method. The findings showed that performance in English language was enhanced among participants exposed to the goal-setting intervention compared to those in the control group. The study also showed a significant gender difference in students’ performance, with female participants recording a higher mean score than males. Parental level of education was also found to be related to performance in English language. Based on the findings, goal-setting intervention was recommended as a strategy for enhancing students’ academic performance, particularly in English language.

  13. Volume growth trends in a Douglas-fir levels-of-growing-stock study.

    Science.gov (United States)

    Robert O. Curtis

    2006-01-01

    Mean curves of increment and yield in gross total cubic volume and net merchantable cubic volume were derived from seven installations of the regional cooperative Levels-of-Growing-Stock Study (LOGS) in Douglas-fir. The technique used reduces the seven curves for each treatment for each variable of interest to a single set of readily interpretable mean curves. To a top...

  14. Sets with Prescribed Arithmetic Densities

    Czech Academy of Sciences Publication Activity Database

    Luca, F.; Pomerance, C.; Porubský, Štefan

    2008-01-01

    Roč. 3, č. 2 (2008), s. 67-80 ISSN 1336-913X R&D Projects: GA ČR GA201/07/0191 Institutional research plan: CEZ:AV0Z10300504 Keywords : generalized arithmetic density * generalized asymptotic density * generalized logarithmic density * arithmetical semigroup * weighted arithmetic mean * ratio set * R-dense set * Axiom A * delta-regularly varying function Subject RIV: BA - General Mathematics

  15. THE PROBLEM OF USING GADGETS AS A MEANS OF FALSIFICATION OF RESULTS CHECK LEVEL OF STUDENTS KNOWLEDGE

    Directory of Open Access Journals (Sweden)

    Vyacheslav A. Lipatov

    2016-01-01

    Full Text Available Abstract. The aim of the present research is to study the range, technical characteristics and price categories of micro earphones: devices intended to make examinations easier for pupils and to distort the measured level of their knowledge in assessed activities. Methods. The main research method is the formalized interview, i.e., a conversation following a detailed program consisting of seventeen closed and open questions together with versions of possible answers. Results. Suppliers and sellers of micro earphones were interviewed over the Internet and by mobile phone. In addition, relevant information posted on discussion boards, social networks, etc., was analysed. The statistical material was processed, tabulated and visualized in charts. The data obtained allow us to estimate the geography, the types, and the technical and price variety of the devices used by students to falsify the results of their education. It is concluded that this type of device is rapidly spreading on the Russian market and is in great demand among both university students and school pupils. Scientific novelty. The opinions of suppliers and sellers regarding the technical characteristics, health risks and availability of micro earphones to pupils are studied for the first time. The range of offers and the price range of these gadgets are analysed. Practical significance. This investigation can serve as an informative basis for developing recommendations and methods to combat cheating with electronic technical means at various stages of knowledge assessment. Such measures are necessary to identify the actual, rather than fictitious, level of pupils' knowledge and to improve the quality of education in higher education institutions and schools.

  16. A conditioned level-set method with block-division strategy to flame front extraction based on OH-PLIF measurements

    International Nuclear Information System (INIS)

    Han Yue; Cai Guo-Biao; Xu Xu; Bruno Renou; Abdelkrim Boukhalfa

    2014-01-01

    A novel approach to extract flame fronts, called the conditioned level-set method with block division (CLSB), has been developed. Based on a two-phase level-set formulation, the conditioned initialization and region-lock optimization improve the efficiency and accuracy of the flame contour identification. The original block-division strategy enables the approach to be unsupervised by calculating local self-adaptive threshold values autonomously before binarization. The CLSB approach has been applied to a large set of experimental data involving swirl-stabilized premixed combustion in diluted regimes operating at atmospheric pressure. OH-PLIF measurements have been carried out in this framework. The resulting images therefore feature lower signal-to-noise ratios (SNRs) than the ideal image; relatively complex flame structures lead to significant non-uniformity in the OH signal intensity; and the magnitude of the maximum OH gradient observed along the flame front can also vary depending on the flow or the local stoichiometry. Compared with other conventional edge detection operators, the CLSB method demonstrates a good ability to deal with OH-PLIF images at low SNR and in the presence of multiple scales of both OH intensity and OH gradient. The robustness to noise sensitivity and intensity inhomogeneity has been evaluated across a range of experimental images of diluted flames, as well as against a circle test serving as Ground Truth (GT). (interdisciplinary physics and related areas of science and technology)
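The block-division strategy can be caricatured as per-block adaptive thresholding: each block gets its own threshold, so a bright region and a dim region of the same image can both be binarized sensibly. The code below is a simplified stand-in of our own (with a global-mean fallback for low-contrast blocks), not the CLSB implementation.

```python
import numpy as np

def block_threshold(img, block=32, contrast=0.1):
    """Binarize an image with a separate mean-based threshold per block,
    mimicking local self-adaptive thresholding for non-uniform intensity."""
    out = np.zeros(img.shape, bool)
    g = img.mean()                      # global fallback threshold
    for i in range(0, img.shape[0], block):
        for j in range(0, img.shape[1], block):
            tile = img[i:i + block, j:j + block]
            # Low-contrast blocks (no edge content) use the global mean,
            # so flat background is not split in half by its own noise.
            t = tile.mean() if (tile.max() - tile.min()) > contrast else g
            out[i:i + block, j:j + block] = tile > t
    return out
```

The level-set stage of CLSB would then evolve a contour on this binarized field; here the thresholding alone already separates bright (burnt, OH-rich) from dark (unburnt) regions.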

  17. Estimating the CCSD basis-set limit energy from small basis sets: basis-set extrapolations vs additivity schemes

    Energy Technology Data Exchange (ETDEWEB)

    Spackman, Peter R.; Karton, Amir, E-mail: amir.karton@uwa.edu.au [School of Chemistry and Biochemistry, The University of Western Australia, Perth, WA 6009 (Australia)

    2015-05-15

    Coupled cluster calculations with all single and double excitations (CCSD) converge exceedingly slowly with the size of the one-particle basis set. We assess the performance of a number of approaches for obtaining CCSD correlation energies close to the complete basis-set limit in conjunction with relatively small DZ and TZ basis sets. These include global and system-dependent extrapolations based on the A + B/L{sup α} two-point extrapolation formula, and the well-known additivity approach that uses an MP2-based basis-set-correction term. We show that the basis set convergence rate can change dramatically between different systems (e.g., it is slower for molecules with polar bonds and/or second-row elements). The system-dependent basis-set extrapolation scheme, in which unique basis-set extrapolation exponents for each system are obtained from lower-cost MP2 calculations, significantly accelerates the basis-set convergence relative to the global extrapolations. Nevertheless, we find that the simple MP2-based basis-set additivity scheme outperforms the extrapolation approaches. For example, the following root-mean-squared deviations are obtained for the 140 basis-set limit CCSD atomization energies in the W4-11 database: 9.1 (global extrapolation), 3.7 (system-dependent extrapolation), and 2.4 (additivity scheme) kJ mol{sup –1}. The CCSD energy in these approximations is obtained from basis sets of up to TZ quality and the latter two approaches require additional MP2 calculations with basis sets of up to QZ quality. We also assess the performance of the basis-set extrapolations and additivity schemes for a set of 20 basis-set limit CCSD atomization energies of larger molecules including amino acids, DNA/RNA bases, aromatic compounds, and platonic hydrocarbon cages. We obtain the following RMSDs for the above methods: 10.2 (global extrapolation), 5.7 (system-dependent extrapolation), and 2.9 (additivity scheme) kJ mol{sup –1}.
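The A + B/L^α two-point formula has a closed-form solution for the complete-basis-set (CBS) limit given energies at two cardinal numbers. The sketch below uses invented energies and the conventional global exponent α = 3; these are assumptions for illustration, not the paper's fitted system-dependent exponents.

```python
def cbs_limit(e_dz, e_tz, alpha=3.0, l_dz=2, l_tz=3):
    """Solve E(L) = E_CBS + B / L**alpha for E_CBS, given energies at
    cardinal numbers L = 2 (DZ) and L = 3 (TZ)."""
    denom = l_dz ** -alpha - l_tz ** -alpha
    b = (e_dz - e_tz) / denom          # fitted B coefficient
    return e_dz - b * l_dz ** -alpha   # extrapolated CBS limit

# Hypothetical CCSD energies in hartree (illustrative values only).
e_dz, e_tz = -76.230, -76.310
print(cbs_limit(e_dz, e_tz))   # extrapolated limit lies below the TZ value
```

The system-dependent scheme discussed in the abstract would replace the fixed α = 3 with an exponent fitted per system from cheaper MP2 energies, while the additivity scheme sidesteps extrapolation of the CCSD energies entirely.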


  19. Joint Markov Blankets in Feature Sets Extracted from Wavelet Packet Decompositions

    Directory of Open Access Journals (Sweden)

    Gert Van Dijck

    2011-07-01

    Full Text Available For two decades, wavelet packet decompositions have proven effective as a generic approach to feature extraction from time series and images for the prediction of a target variable. Redundancies exist between the wavelet coefficients and between the energy features that are derived from the wavelet coefficients. We assess these redundancies in wavelet packet decompositions by means of the Markov blanket filtering theory. We introduce the concept of joint Markov blankets. It is shown that joint Markov blankets are a natural extension of Markov blankets, which are defined for single features, to a set of features. We show that these joint Markov blankets exist in feature sets consisting of the wavelet coefficients. Furthermore, we prove that wavelet energy features from the highest frequency resolution level form a joint Markov blanket for all other wavelet energy features. The joint Markov blanket theory indicates that one can expect an increase of classification accuracy with the increase of the frequency resolution level of the energy features.
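The energy-feature redundancy formalized in the abstract has a simple concrete form for an orthonormal filter bank: each node's energy equals the sum of its children's energies, so the deepest-level energies determine the energies at every coarser level. A Haar sketch (our own toy code, not the authors'):

```python
import numpy as np

def haar_packet_step(x):
    """Split a node into orthonormal low-pass and high-pass children."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2)   # approximation child
    d = (x[0::2] - x[1::2]) / np.sqrt(2)   # detail child
    return a, d

def packet_levels(x, depth):
    """Full wavelet packet tree: every node is split at every level."""
    levels = [[x]]
    for _ in range(depth):
        levels.append([c for node in levels[-1] for c in haar_packet_step(node)])
    return levels

rng = np.random.default_rng(0)
sig = rng.normal(size=64)
levels = packet_levels(sig, 3)
energies = [sum(float((node ** 2).sum()) for node in lvl) for lvl in levels]
print(energies)   # total energy is identical at every level (Parseval)
```

Because any coarser node's energy is a deterministic function of its descendants' energies at the deepest level, the deepest-level energy features render the coarser ones conditionally irrelevant for a target variable, which is the intuition behind the joint-Markov-blanket result.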

  20. Serum Copper Level Significantly Influences Platelet Count, Lymphocyte Count and Mean Cell Hemoglobin in Sickle Cell Anemia

    Directory of Open Access Journals (Sweden)

    Okocha Chide

    2015-12-01

    Full Text Available Background Changes in serum micronutrient levels affect a number of critically important metabolic processes; these could potentially influence blood counts and ultimately disease presentation in patients with sickle cell anemia (SCA). Objectives To evaluate the influence of serum micronutrient levels (zinc, copper, selenium and magnesium) on blood counts in steady-state SCA patients. Methods A cross-sectional study that involved 28 steady-state adult SCA subjects. Seven milliliters (mL) of blood was collected; 3 mL was used for hemoglobin electrophoresis and full blood count determination, while 4 mL was used for measurement of serum micronutrient levels by atomic absorption spectrophotometry. Correlation between serum micronutrient levels and blood counts was assessed by Pearson’s linear regression. Ethical approval was obtained from the institutional review board and each participant gave informed consent. All data were analyzed with SPSS software version 20. Results There was a significant correlation between serum copper levels and mean cell hemoglobin (MCH), platelet and lymphocyte counts (r = 0.418, P = 0.02; r = -0.376, P = 0.04; and r = -0.383, P = 0.04, respectively). There were no significant correlations between serum levels of the other micronutrients (selenium, zinc and magnesium) and blood counts. Conclusions Copper influences blood counts in SCA patients, probably by inducing red cell hemolysis, oxidant tissue damage and stimulation of the immune system.

  1. The utility of imputed matched sets. Analyzing probabilistically linked databases in a low information setting.

    Science.gov (United States)

    Thomas, A M; Cook, L J; Dean, J M; Olson, L M

    2014-01-01

    To compare results from high probability matched sets versus imputed matched sets across differing levels of linkage information. A series of linkages with varying amounts of available information were performed on two simulated datasets derived from multiyear motor vehicle crash (MVC) and hospital databases, where true matches were known. Distributions of high probability and imputed matched sets were compared against the true match population for occupant age, MVC county, and MVC hour. Regression models were fit to simulated log hospital charges and hospitalization status. High probability and imputed matched sets did not differ significantly from the true match population in occupant age, MVC county, and MVC hour in high information settings (p > 0.999). In low information settings, high probability matched sets differed significantly in occupant age and MVC county, whereas imputed matched sets did not (p > 0.493). High information settings saw no significant differences in inference of simulated log hospital charges and hospitalization status between the two methods. High probability and imputed matched sets both differed significantly from the true outcomes in low information settings; however, imputed matched sets were more robust. The level of information available to a linkage is an important consideration. High probability matched sets are suitable for high to moderate information settings and for situations involving case-specific analysis. Conversely, imputed matched sets are preferable for low information settings when conducting population-based analyses.

  2. Mean-level personality development across childhood and adolescence: a temporary defiance of the maturity principle and bidirectional associations with parenting

    NARCIS (Netherlands)

    van den Akker, A.L.; Deković, M.; Asscher, J.; Prinzie, P.

    2014-01-01

    In this study, we investigated mean-level personality development in children from 6 to 20 years of age. Additionally, we investigated longitudinal, bidirectional associations between child personality and maternal overreactive and warm parenting. In this 5-wave study, mothers reported on their

  3. Basic personal values and the meaning of left-right political orientations in 20 countries

    OpenAIRE

    Piurko, Yuval; Schwartz, Shalom H; Davidov, Eldad

    2011-01-01

    This study used basic personal values to elucidate the motivational meanings of ‘left’ and ‘right’ political orientations in 20 representative national samples from the European Social Survey (2002-3). It also compared the importance of personal values and socio-demographic variables as determinants of political orientation. Hypotheses drew on the different histories, prevailing culture, and socio-economic level of 3 sets of countries—liberal, traditional and post-communist. As hy...

  4. The influence of power and actor relations on priority setting and resource allocation practices at the hospital level in Kenya: a case study.

    Science.gov (United States)

    Barasa, Edwine W; Cleary, Susan; English, Mike; Molyneux, Sassy

    2016-09-30

    Priority setting and resource allocation in healthcare organizations often involves the balancing of competing interests and values in the context of hierarchical and politically complex settings with multiple interacting actor relationships. Despite this, few studies have examined the influence of actor and power dynamics on priority setting practices in healthcare organizations. This paper examines the influence of power relations among different actors on the implementation of priority setting and resource allocation processes in public hospitals in Kenya. We used a qualitative case study approach to examine priority setting and resource allocation practices in two public hospitals in coastal Kenya. We collected data through a combination of in-depth interviews with national level policy makers, hospital managers, and frontline practitioners in the case study hospitals (n = 72), review of documents such as hospital plans and budgets, minutes of meetings and accounting records, and non-participant observations in case study hospitals over a period of 7 months. We applied a combination of two frameworks, Norman Long's actor interface analysis and VeneKlasen and Miller's expressions of power framework, to examine and interpret our findings. RESULTS: The interactions of actors in the case study hospitals resulted in socially constructed interfaces between: 1) senior managers and middle level managers, 2) non-clinical managers and clinicians, and 3) hospital managers and the community. Power imbalances resulted in the exclusion of middle level managers (in one of the hospitals) and clinicians and the community (in both hospitals) from decision making processes. This resulted in, among other things, perceptions of unfairness and reduced motivation among hospital staff. It also calls into question the legitimacy of priority setting processes in these hospitals. Designing hospital decision making structures to strengthen participation and inclusion of relevant stakeholders could

  5. Goal oriented Mathematics Survey at Preparatory Level- Revised set ...

    African Journals Online (AJOL)

    This cross sectional study design on mathematical syllabi at preparatory levels of the high schools was to investigate the efficiency of the subject at preparatory level education serving as a basis for several streams, like Natural science, Technology, Computer Science, Health Science and Agriculture found at tertiary levels.

  6. Experiences and shared meaning of teamwork and interprofessional collaboration among health care professionals in primary health care settings: a systematic review.

    Science.gov (United States)

    Sangaleti, Carine; Schveitzer, Mariana Cabral; Peduzzi, Marina; Zoboli, Elma Lourdes Campos Pavone; Soares, Cassia Baldini

    2017-11-01

    During the last decade, teamwork has been addressed under the rationale of interprofessional practice or collaboration, highlighted by the attributes of this practice such as: interdependence of professional actions, focus on user needs, negotiation between professionals, shared decision making, mutual respect and trust among professionals, and acknowledgment of the role and work of the different professional groups. Teamwork and interprofessional collaboration have been pointed out as a strategy for effective organization of health care services, as the complexity of healthcare requires integration of knowledge and practices from different professional groups. This integration has a qualitative dimension that can be identified through the experiences of health professionals and the meaning they give to teamwork. The objective of this systematic review was to synthesize the best available evidence on the experiences of health professionals regarding teamwork and interprofessional collaboration in primary health care settings. The populations included were all officially regulated health professionals that work in primary health settings: dentistry, medicine, midwifery, nursing, nutrition, occupational therapy, pharmacy, physical education, physiotherapy, psychology, social work and speech therapy. In addition to these professionals, community health workers, nursing assistants, licensed practical nurses and other allied health workers were also included. The phenomena of interest were experiences of health professionals regarding teamwork and interprofessional collaboration in primary health care settings. The context was primary health care settings that included health care centers, health maintenance organizations, integrative medicine practices, integrative health care, family practices, primary care organizations and family medical clinics. National health surgery as a setting was excluded. The qualitative component of the review considered studies that

  7. [Cardiac Synchronization Function Estimation Based on ASM Level Set Segmentation Method].

    Science.gov (United States)

    Zhang, Yaonan; Gao, Yuan; Tang, Liang; He, Ying; Zhang, Huie

    At present, there are no accurate and quantitative methods for the determination of cardiac mechanical synchronism, and quantitative determination of the synchronization function of the four cardiac cavities with medical images has great clinical value. This paper uses the whole heart ultrasound image sequence and segments the left & right atriums and left & right ventricles of each frame. After the segmentation, the number of pixels in each cavity in each frame is recorded, and the areas of the four cavities across the image sequence are thereby obtained. The area change curves of the four cavities are further extracted, and the synchronous information of the four cavities is obtained. Because of the low SNR of ultrasound images, the boundary lines of cardiac cavities are vague, so the extraction of cardiac contours is still a challenging problem. Therefore, the ASM model information is added to the traditional level set method to drive the curve evolution process. According to the experimental results, the improved method improves the accuracy of the segmentation. Furthermore, based on the ventricular segmentation, the right and left ventricular systolic functions are evaluated, mainly according to the area changes. The synchronization of the four cavities of the heart is estimated based on the area changes and the volume changes.
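
The area-curve step described above, counting pixels per segmented cavity in each frame, can be sketched as follows. The segmentation itself (ASM-constrained level sets) is far more involved; the label masks here are toy arrays and the cavity label names are assumptions:

```python
# Minimal sketch: after each frame of an ultrasound sequence is segmented,
# count the pixels per cardiac cavity and stack them into per-cavity area
# curves, from which synchrony can then be assessed.
def cavity_area_curves(label_masks, labels=("LA", "RA", "LV", "RV")):
    """label_masks: list of 2D lists whose entries are cavity labels or None.
    Returns {label: [pixel count per frame]}."""
    curves = {lab: [] for lab in labels}
    for mask in label_masks:
        counts = {lab: 0 for lab in labels}
        for row in mask:
            for px in row:
                if px in counts:
                    counts[px] += 1
        for lab in labels:
            curves[lab].append(counts[lab])
    return curves

# Two toy 3x3 "frames": the LV region shrinks from 3 pixels to 1 (systole).
f1 = [["LV", "LV", None], ["LV", "RV", None], [None, None, None]]
f2 = [["LV", None, None], [None, "RV", "RV"], [None, None, None]]
curves = cavity_area_curves([f1, f2])
# curves["LV"] == [3, 1], curves["RV"] == [1, 2]
```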

  8. The European gen-set market: growth and consolidation mean joy and pain

    International Nuclear Information System (INIS)

    French, Ian

    2000-01-01

    The changes in the European gen-set market are discussed. In recent years the market has undergone a period of increasing consolidation: prices fell and some companies folded. However, the market is not dead and continued growth is expected over the next five years, although the compound rate is forecast to be only 1.5%. The article is presented under the sub-headings of (i) current market situation; (ii) product lifecycle; (iii) shipments by technology; (iv) market deregulation; (v) technology overview (spark ignition, compression ignition and gas turbines); (vi) European market: national overview; and (vii) key market challenges (competition, emissions and overcapacity)

  9. Prediction based on mean subset

    DEFF Research Database (Denmark)

    Øjelund, Henrik; Brown, P. J.; Madsen, Henrik

    2002-01-01

    Shrinkage methods have traditionally been applied in prediction problems. In this article we develop a shrinkage method (mean subset) that forms an average of regression coefficients from individual subsets of the explanatory variables. A Bayesian approach is taken to derive an expression of how the coefficient vectors from each subset should be weighted. It is not computationally feasible to calculate the mean subset coefficient vector for larger problems, and thus we suggest an algorithm to find an approximation to the mean subset coefficient vector. In a comprehensive Monte Carlo simulation study, it is found that the proposed mean subset method has better prediction performance than prediction based on the best subset method, and in some settings also better than the ridge regression and lasso methods. The conclusions drawn from the Monte Carlo study are corroborated in an example in which prediction...
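
The averaging idea behind "mean subset" can be illustrated in miniature. The Bayesian weighting and the approximation algorithm are the paper's contribution and are not reproduced here; this toy sketch uses single-predictor subsets with equal weights purely to show how sub-model estimates are combined:

```python
# Toy sketch (not the authors' algorithm): fit a regression on each subset of
# predictors and average the resulting sub-model predictions. Here the subsets
# are the individual predictors and the weights are equal.
def univariate_beta(x, y):
    """Least-squares slope and intercept for a single predictor."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((a - mx) ** 2 for a in x)
    slope = sum((a - mx) * (b - my) for a, b in zip(x, y)) / sxx
    return slope, my - slope * mx

def mean_subset_predict(X_cols, y, x_new):
    """Average the predictions of the one-predictor sub-models (equal weights)."""
    preds = []
    for j, col in enumerate(X_cols):
        slope, intercept = univariate_beta(col, y)
        preds.append(slope * x_new[j] + intercept)
    return sum(preds) / len(preds)

# y = x1 + x2 on a toy grid; each sub-model alone is biased, and the averaged
# prediction combines their information.
x1 = [0, 1, 2, 3]
x2 = [0, 2, 1, 3]
y = [a + b for a, b in zip(x1, x2)]
yhat = mean_subset_predict([x1, x2], y, [2, 2])
```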

  10. Discussing Firearm Ownership and Access as Part of Suicide Risk Assessment and Prevention: "Means Safety" versus "Means Restriction".

    Science.gov (United States)

    Stanley, Ian H; Hom, Melanie A; Rogers, Megan L; Anestis, Michael D; Joiner, Thomas E

    2017-01-01

    The goal of this study was to describe the relative utility of the terms "means safety" versus "means restriction" in counseling individuals to limit their access to firearms in the context of a mock suicide risk assessment. Overall, 370 participants were randomized to read a vignette depicting a clinical scenario in which managing firearm ownership and access was discussed either using the term "means safety" or "means restriction." Participants rated the term "means safety" as significantly more acceptable and preferable than "means restriction." Participants randomized to the "means safety" condition reported greater intentions to adhere to clinicians' recommendations to limit access to a firearm for safety purposes (F[1,367] = 7.393, p = .007, [Formula: see text]). The term "means safety" may be more advantageous than "means restriction" when discussing firearm ownership and access in clinical settings and public health-oriented suicide prevention efforts.

  11. Meaning in Matter

    DEFF Research Database (Denmark)

    Brix, Anders

    2013-01-01

    When addressing present-day challenges, design discourse puts faith mainly in design understood as a problem-solving activity, pertaining to innovation on the functional, systems and technological levels. The aesthetically founded fields, e.g. craft-based design, on the other hand, do not seem to play ... - in particular those which carry _corporeal affordance_, such as clothing, blankets, chairs, tables and houses - perhaps even cities. Things that mean to us what they mean mainly _qua_ their physical properties.

  12. PERFORMANCE IN ART NATURE AND MEANING

    African Journals Online (AJOL)

    USER

    2009-03-02

    Mar 2, 2009 ... sums to poor people to set up or expand small business. ... means to meet widespread client demand for convenient, appropriate financial services. ..... Offer small initial loans: start with very small loans appropriate for meeting.

  13. Mean-deviation analysis in the theory of choice.

    Science.gov (United States)

    Grechuk, Bogdan; Molyboha, Anton; Zabarankin, Michael

    2012-08-01

    Mean-deviation analysis, along with the existing theories of coherent risk measures and dual utility, is examined in the context of the theory of choice under uncertainty, which studies rational preference relations for random outcomes based on different sets of axioms such as transitivity, monotonicity, continuity, etc. An axiomatic foundation of the theory of coherent risk measures is obtained as a relaxation of the axioms of the dual utility theory, and a further relaxation of the axioms is shown to lead to the mean-deviation analysis. Paradoxes arising from the sets of axioms corresponding to these theories and their possible resolutions are discussed, and application of the mean-deviation analysis to optimal risk sharing and portfolio selection in the context of rational choice is considered. © 2012 Society for Risk Analysis.
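
For orientation, a mean-deviation preference functional can be written in the generic form used in the deviation-measure literature; the notation below is assumed for illustration, not taken from the paper:

```latex
\[
  \rho_\lambda(X) \;=\; \mathbb{E}[X] \;-\; \lambda\, D(X), \qquad \lambda \ge 0,
\]
where $D$ is a deviation measure satisfying $D(X + c) = D(X)$ for constants $c$,
$D(cX) = c\,D(X)$ for $c > 0$, and $D(X) \ge 0$; a standard example is the mean
absolute deviation $D(X) = \mathbb{E}\,\lvert X - \mathbb{E}[X] \rvert$.
```

Larger $\lambda$ penalizes dispersion more heavily, which is the trade-off the axiomatic relaxations above are meant to justify.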

  14. Quantification of Coffea arabica and Coffea canephora var. robusta concentration in blends by means of synchronous fluorescence and UV-Vis spectroscopies.

    Science.gov (United States)

    Dankowska, A; Domagała, A; Kowalewski, W

    2017-09-01

    The potential of fluorescence and UV-Vis spectroscopies, as well as low- and mid-level data fusion of both, for the quantification of concentrations of roasted Coffea arabica and Coffea canephora var. robusta in coffee blends was investigated. Principal component analysis was used to reduce data multidimensionality. To calculate the level of undeclared addition, multiple linear regression (PCA-MLR) models were used, with the lowest root mean square error of calibration (RMSEC) of 3.6% and root mean square error of cross-validation (RMSECV) of 7.9%. LDA was applied to fluorescence intensities and UV spectra of Coffea arabica and Coffea canephora samples and their mixtures in order to examine classification ability. The best performance of PCA-LDA analysis was observed for data fusion of UV and fluorescence intensity measurements at a wavelength interval of 60 nm. LDA showed that data fusion can achieve over 96% correct classifications (sensitivity) in the test set and 100% correct classifications in the training set with low-level data fusion. The corresponding results for the individual spectroscopies ranged from 77% (synchronous fluorescence) to 90% (UV-Vis spectroscopy) in the test set, and from 93% to 97% in the training set. The results demonstrate that fluorescence, UV, and visible spectroscopies complement each other for the quantification of roasted Coffea arabica and Coffea canephora var. robusta concentrations in blends. Copyright © 2017 Elsevier B.V. All rights reserved.
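
The two error figures quoted above (RMSEC and RMSECV) can be sketched in a few lines. RMSEC is the root mean square error of a model on its own calibration samples; RMSECV is the same quantity under cross-validation, here leave-one-out on a toy straight-line calibration rather than the paper's PCA-MLR models:

```python
# Hedged sketch: RMSEC vs. leave-one-out RMSECV for a univariate calibration
# line. Data are invented stand-ins for signal vs. % robusta in a blend.
from math import sqrt

def rmse(y_true, y_pred):
    n = len(y_true)
    return sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / n)

def fit_line(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = sum((a - mx) * (b - my) for a, b in zip(x, y)) / sum((a - mx) ** 2 for a in x)
    return slope, my - slope * mx

def rmsecv_loo(x, y):
    """Leave-one-out RMSECV for a straight-line calibration."""
    errors = []
    for i in range(len(x)):
        xt = x[:i] + x[i + 1:]
        yt = y[:i] + y[i + 1:]
        slope, intercept = fit_line(xt, yt)
        errors.append(y[i] - (slope * x[i] + intercept))
    return sqrt(sum(e * e for e in errors) / len(errors))

# Toy calibration: measured signal vs. % robusta in the blend (hypothetical).
signal = [0.0, 1.1, 1.9, 3.2, 4.0]
robusta_pct = [0, 10, 20, 30, 40]
slope, intercept = fit_line(signal, robusta_pct)
rmsec = rmse(robusta_pct, [slope * s + intercept for s in signal])
rmsecv = rmsecv_loo(signal, robusta_pct)
```

For least-squares fits, the leave-one-out residuals are never smaller in magnitude than the calibration residuals, which is why RMSECV (7.9% in the paper) exceeds RMSEC (3.6%).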

  15. A hybrid interface tracking - level set technique for multiphase flow with soluble surfactant

    Science.gov (United States)

    Shin, Seungwon; Chergui, Jalel; Juric, Damir; Kahouadji, Lyes; Matar, Omar K.; Craster, Richard V.

    2018-04-01

    A formulation for soluble surfactant transport in multiphase flows recently presented by Muradoglu and Tryggvason (JCP 274 (2014) 737-757) [17] is adapted to the context of the Level Contour Reconstruction Method, LCRM, (Shin et al. IJNMF 60 (2009) 753-778, [8]) which is a hybrid method that combines the advantages of the Front-tracking and Level Set methods. Particularly close attention is paid to the formulation and numerical implementation of the surface gradients of surfactant concentration and surface tension. Various benchmark tests are performed to demonstrate the accuracy of different elements of the algorithm. To verify surfactant mass conservation, values for surfactant diffusion along the interface are compared with the exact solution for the problem of uniform expansion of a sphere. The numerical implementation of the discontinuous boundary condition for the source term in the bulk concentration is compared with the approximate solution. Surface tension forces are tested for Marangoni drop translation. Our numerical results for drop deformation in simple shear are compared with experiments and results from previous simulations. All benchmarking tests compare well with existing data thus providing confidence that the adapted LCRM formulation for surfactant advection and diffusion is accurate and effective in three-dimensional multiphase flows with a structured mesh. We also demonstrate that this approach applies easily to massively parallel simulations.

  16. Characterization of mammographic masses based on level set segmentation with new image features and patient information

    International Nuclear Information System (INIS)

    Shi Jiazheng; Sahiner, Berkman; Chan Heangping; Ge Jun; Hadjiiski, Lubomir; Helvie, Mark A.; Nees, Alexis; Wu Yita; Wei Jun; Zhou Chuan; Zhang Yiheng; Cui Jing

    2008-01-01

    Computer-aided diagnosis (CAD) for characterization of mammographic masses as malignant or benign has the potential to assist radiologists in reducing the biopsy rate without increasing false negatives. The purpose of this study was to develop an automated method for mammographic mass segmentation and explore new image-based features, in combination with patient information, in order to improve the performance of mass characterization. The authors' previous CAD system, which used active contour segmentation and morphological, textural, and spiculation features, achieved promising results in mass characterization. The new CAD system is based on the level set method and includes two new types of image features, related to the presence of microcalcifications within the mass and the abruptness of the mass margin, and patient age. A linear discriminant analysis (LDA) classifier with stepwise feature selection was used to merge the extracted features into a classification score. The classification accuracy was evaluated using the area under the receiver operating characteristic curve. The authors' primary data set consisted of 427 biopsy-proven masses (200 malignant and 227 benign) in 909 regions of interest (ROIs) (451 malignant and 458 benign) from multiple mammographic views. Leave-one-case-out resampling was used for training and testing. The new CAD system based on the level set segmentation and the new mammographic feature space achieved a view-based Az value of 0.83±0.01. The improvement compared to the previous CAD system was statistically significant (p=0.02). When patient age was included in the new CAD system, view-based and case-based Az values were 0.85±0.01 and 0.87±0.02, respectively. The study also demonstrated the consistency of the newly developed CAD system by evaluating the statistics of the weights of the LDA classifiers in leave-one-case-out classification. Finally, an independent test on the publicly available digital database for screening
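
The final step above, merging extracted features into a single score with LDA, can be illustrated with a plain two-feature Fisher discriminant on invented numbers; the CAD system's stepwise feature selection and leave-one-case-out protocol are not reproduced:

```python
# Sketch only: a two-feature Fisher linear discriminant. Feature values are
# hypothetical; in the CAD system the features are mass morphology, texture,
# spiculation, microcalcification and margin descriptors.
def fisher_lda_weights(class0, class1):
    """Return w such that score = w . x separates the two 2-D classes."""
    def mean(rows):
        n = len(rows)
        return [sum(r[0] for r in rows) / n, sum(r[1] for r in rows) / n]
    def scatter(rows, m):
        s = [[0.0, 0.0], [0.0, 0.0]]
        for r in rows:
            d = [r[0] - m[0], r[1] - m[1]]
            for i in range(2):
                for j in range(2):
                    s[i][j] += d[i] * d[j]
        return s
    m0, m1 = mean(class0), mean(class1)
    s0, s1 = scatter(class0, m0), scatter(class1, m1)
    sw = [[s0[i][j] + s1[i][j] for j in range(2)] for i in range(2)]
    det = sw[0][0] * sw[1][1] - sw[0][1] * sw[1][0]
    inv = [[sw[1][1] / det, -sw[0][1] / det], [-sw[1][0] / det, sw[0][0] / det]]
    dm = [m1[0] - m0[0], m1[1] - m0[1]]
    return [inv[0][0] * dm[0] + inv[0][1] * dm[1],
            inv[1][0] * dm[0] + inv[1][1] * dm[1]]

benign = [(0.1, 0.2), (0.2, 0.1), (0.15, 0.25)]      # hypothetical features
malignant = [(0.8, 0.9), (0.9, 0.85), (0.75, 0.95)]  # hypothetical features
w = fisher_lda_weights(benign, malignant)

def score(x):
    return w[0] * x[0] + w[1] * x[1]
# malignant cases score higher than benign ones with these toy clusters
```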

  17. Brain glucose and lactate levels during ventilator-induced hypo- and hypercapnia

    NARCIS (Netherlands)

    van Hulst, R. A.; Lameris, T. W.; Haitsma, J. J.; Klein, J.; Lachmann, B.

    2004-01-01

    OBJECTIVE: Levels of glucose and lactate were measured in the brain by means of microdialysis in order to evaluate the effects of ventilator-induced hypocapnia and hypercapnia on brain metabolism in healthy non-brain-traumatized animals. DESIGN AND SETTING: Prospective animal study in a university

  18. Topology optimization of hyperelastic structures using a level set method

    Science.gov (United States)

    Chen, Feifei; Wang, Yiqiang; Wang, Michael Yu; Zhang, Y. F.

    2017-12-01

    Soft rubberlike materials, due to their inherent compliance, are finding widespread implementation in a variety of applications ranging from assistive wearable technologies to soft material robots. Structural design of such soft and rubbery materials necessitates the consideration of large nonlinear deformations and hyperelastic material models to accurately predict their mechanical behaviour. In this paper, we present an effective level set-based topology optimization method for the design of hyperelastic structures that undergo large deformations. The method incorporates both geometric and material nonlinearities, where the strain and stress measures are defined within the total Lagrangian framework and the hyperelasticity is characterized by the widely adopted Mooney-Rivlin material model. A shape sensitivity analysis is carried out, in the strict sense of the material derivative, where the high-order terms involving the displacement gradient are retained to ensure the descent direction. As the design velocity enters into the shape derivative in terms of its gradient and divergence terms, we develop a discrete velocity selection strategy. The whole optimization implementation undergoes a two-step process, where the linear optimization is first performed and its optimized solution serves as the initial design for the subsequent nonlinear optimization. It turns out that this operation can efficiently alleviate numerical instability and facilitate the optimization process. To demonstrate the validity and effectiveness of the proposed method, three compliance minimization problems are studied; their optimized solutions show significant mechanical benefits of incorporating the nonlinearities, in terms of remarkable enhancements in both structural stiffness and critical buckling load.

  19. Process setting models for the minimization of costs of defectives

    African Journals Online (AJOL)

    Dr Obe

    determine the mean setting so as to minimise the total loss through under-limit complaints and loss of sales and goodwill as well as over-limit losses through excess materials and rework costs. Models are developed for the two types of setting of the mean so that the minimum costs of losses are achieved. Also, a model is ...
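
The cost trade-off described in this record, under-limit losses (complaints, lost sales and goodwill) against over-limit losses (excess material, rework), can be sketched with a simple grid search; the cost figures, specification limits and normal process spread below are hypothetical, not the models developed in the paper:

```python
# Illustrative sketch: choose the process mean that minimizes expected loss
# when output is normally distributed around the setting. All numbers are
# hypothetical; the paper derives dedicated models for the two settings.
from math import erf, sqrt

def norm_cdf(x, mu, sigma):
    return 0.5 * (1.0 + erf((x - mu) / (sigma * sqrt(2.0))))

def expected_loss(mu, sigma, lower, upper, c_under, c_over):
    p_under = norm_cdf(lower, mu, sigma)       # P(item falls below limit)
    p_over = 1.0 - norm_cdf(upper, mu, sigma)  # P(item exceeds limit)
    return c_under * p_under + c_over * p_over

def best_mean(sigma, lower, upper, c_under, c_over, step=0.001):
    candidates = [lower + i * step for i in range(int((upper - lower) / step) + 1)]
    return min(candidates,
               key=lambda m: expected_loss(m, sigma, lower, upper, c_under, c_over))

# Under-limit losses cost 5x more than over-limit losses, so the optimal
# setting sits above the midpoint of the specification band.
mu_star = best_mean(sigma=0.5, lower=9.0, upper=11.0, c_under=50.0, c_over=10.0)
```

With equal costs the optimum is the band midpoint; asymmetric costs shift it toward the cheaper side of failure, which is the effect the models in the paper quantify.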

  20. Investigation of the energy levels of 38AR

    International Nuclear Information System (INIS)

    Waanders, F.B.

    1975-07-01

    In this project information on the energy levels of 38Ar was obtained by means of the (p,γ) reaction. The 1.1 MeV Cockcroft-Walton accelerator of the Potchefstroom University for CHE was used to produce the proton beam, while an 80 cm³ Ge(Li) detector was used to detect the gamma-rays. Precise gamma-branchings were determined for 50 bound levels, of which four had not previously been determined. These branchings were obtained from the 28 resonances studied in the 37Cl(p,γ)38Ar reaction. The resonance with a proton energy of (592 ± 3) keV had not been detected previously. The resonance energies, Q-value and energies of the bound levels used in this project were taken from the study done by Alderliesten. The mean lifetimes of a few bound levels of 38Ar were measured by means of the Doppler shift attenuation method. The results concerning the bound states and mean lifetimes are in good agreement with previous experiments. Limits on the spins and parities of 19 (p,γ) resonances have been set by means of Weisskopf estimates. Only those cases for which the spin could be limited to two values are discussed in the text. A summary of experimental data obtained on 38Ar is compared with the results from shell-model calculations done by various workers. A short discussion of the analogue states in 38Ar is also given.

  1. Discrete and continuous time dynamic mean-variance analysis

    OpenAIRE

    Reiss, Ariane

    1999-01-01

    Contrary to static mean-variance analysis, very few papers have dealt with dynamic mean-variance analysis. Here, the mean-variance efficient self-financing portfolio strategy is derived for n risky assets in discrete and continuous time. In the discrete setting, the resulting portfolio is mean-variance efficient in a dynamic sense. It is shown that the optimal strategy for n risky assets may be dominated if the expected terminal wealth is constrained to exactly attain a certain goal instead o...

  2. CT Findings of Disease with Elevated Serum D-Dimer Levels in an Emergency Room Setting

    International Nuclear Information System (INIS)

    Choi, Ji Youn; Kwon, Woo Cheol; Kim, Young Ju

    2012-01-01

    Pulmonary embolism and deep vein thrombosis are the leading causes of elevated serum D-dimer levels in the emergency room. Although D-dimer is a useful screening test because of its high sensitivity and negative predictive value, it has a low specificity. In addition, D-dimer can be elevated in various diseases. Therefore, information on the various diseases with elevated D-dimer levels and their radiologic findings may allow for accurate diagnosis and proper management. Herein, we report the CT findings of various diseases with elevated D-dimer levels in an emergency room setting, including an intravascular contrast filling defect with associated findings in a venous thromboembolism, fracture with soft tissue swelling and hematoma formation in a trauma patient, enlargement with contrast enhancement in the infected organ of a patient, coronary artery stenosis with a perfusion defect of the myocardium in a patient with acute myocardial infarction, high density of acute thrombus in a cerebral vessel with a low density of affected brain parenchyma in an acute cerebral infarction, intimal flap with two separated lumens in a case of aortic dissection, organ involvement of malignancy in a cancer patient, and atrophy of a liver with a dilated portal vein and associated findings.

  3. CT Findings of Disease with Elevated Serum D-Dimer Levels in an Emergency Room Setting

    Energy Technology Data Exchange (ETDEWEB)

    Choi, Ji Youn; Kwon, Woo Cheol; Kim, Young Ju [Dept. of Radiology, Wonju Christian Hospital, Yonsei University Wonju College of Medicine, Wonju (Korea, Republic of)

    2012-01-15

    Pulmonary embolism and deep vein thrombosis are the leading causes of elevated serum D-dimer levels in the emergency room. Although D-dimer is a useful screening test because of its high sensitivity and negative predictive value, it has a low specificity. In addition, D-dimer can be elevated in various diseases. Therefore, information on the various diseases with elevated D-dimer levels and their radiologic findings may allow for accurate diagnosis and proper management. Herein, we report the CT findings of various diseases with elevated D-dimer levels in an emergency room setting, including an intravascular contrast filling defect with associated findings in a venous thromboembolism, fracture with soft tissue swelling and hematoma formation in a trauma patient, enlargement with contrast enhancement in the infected organ of a patient, coronary artery stenosis with a perfusion defect of the myocardium in a patient with acute myocardial infarction, high density of acute thrombus in a cerebral vessel with a low density of affected brain parenchyma in an acute cerebral infarction, intimal flap with two separated lumens in a case of aortic dissection, organ involvement of malignancy in a cancer patient, and atrophy of a liver with a dilated portal vein and associated findings.

  4. Phonological Iconicity electrifies: An ERP study on affective sound-to-meaning correspondences in German

    Directory of Open Access Journals (Sweden)

    Susann Ullrich

    2016-08-01

    Full Text Available While linguistic theory posits an arbitrary relation between signifiers and the signified (de Saussure, 1916), our analysis of a large-scale German database containing affective ratings of words revealed that certain phoneme clusters occur more often in words denoting concepts with negative and arousing meaning. Here, we investigate how such phoneme clusters, which potentially serve as sublexical markers of affect, can influence language processing. We registered the EEG signal during a lexical decision task with a novel manipulation of the words' putative sublexical affective potential: the means of valence and arousal values for single phoneme clusters, each computed as a function of the respective values of the words from the database in which these phoneme clusters occur. Our experimental manipulations also investigate potential contributions of formal salience to the sublexical affective potential: typically, negative high-arousing phonological segments—based on our calculations—tend to be less frequent and more structurally complex than neutral ones. We thus constructed two experimental sets, one involving this natural confound, while controlling for it in the other. A negative high-arousing sublexical affective potential in the strictly controlled stimulus set yielded an early posterior negativity (EPN), in similar ways as an independent manipulation of lexical affective content. When other potentially salient formal features at the sublexical level were not controlled for, the effect of the sublexical affective potential was strengthened and prolonged (250-650 ms), presumably because formal salience helps make specific phoneme clusters efficient sublexical markers of negative high-arousing affective meaning. These neurophysiological data support the assumption that the organization of a language's vocabulary involves systematic sound-to-meaning correspondences at the phonemic level that influence the way we process language.
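
The core computation described above, a phoneme cluster's "sublexical affective potential" as the mean rating of the words it occurs in, is easy to sketch. The words, clusters and valence values below are invented; the study used a large German database of affective norms:

```python
# Sketch of the sublexical-affective-potential computation: average the
# valence ratings of all words containing a given phoneme cluster.
def sublexical_potential(word_ratings, clusters):
    """word_ratings: {word: valence}; clusters: {word: [phoneme clusters]}.
    Returns {cluster: mean valence of the words containing it}."""
    sums, counts = {}, {}
    for word, valence in word_ratings.items():
        for cl in clusters.get(word, []):
            sums[cl] = sums.get(cl, 0.0) + valence
            counts[cl] = counts.get(cl, 0) + 1
    return {cl: sums[cl] / counts[cl] for cl in sums}

# Hypothetical mini-lexicon with valence ratings (negative = unpleasant).
ratings = {"Krieg": -2.5, "Kratzer": -2.0, "Blume": 1.5}
clusters = {"Krieg": ["kr"], "Kratzer": ["kr"], "Blume": ["bl"]}
pot = sublexical_potential(ratings, clusters)
# pot["kr"] == -2.25: "kr" occurs only in negative words in this toy lexicon
```

The same averaging applied to arousal ratings gives the second dimension of the manipulation.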

  5. A Modified MinMax k-Means Algorithm Based on PSO.

    Science.gov (United States)

    Wang, Xiaoyan; Bai, Yanping

    The MinMax k-means algorithm is widely used to tackle the effect of bad initialization by minimizing the maximum intra-cluster error. Two parameters, the exponent parameter and the memory parameter, are involved in the executive process. Since different parameters yield different clustering errors, it is crucial to choose appropriate parameters. In the original algorithm, a practical framework is given. Such a framework extends MinMax k-means to automatically adapt the exponent parameter to the data set. It has been believed that if the maximum exponent parameter has been set, then the programme can reach the lowest intra-cluster errors. However, our experiments show that this is not always correct. In this paper, we modified the MinMax k-means algorithm by PSO to determine the proper values of the parameters which enable the algorithm to attain the lowest clustering errors. The proposed clustering method is tested on some favorite data sets in several different initial situations and is compared to the k-means algorithm and the original MinMax k-means algorithm. The experimental results indicate that our proposed algorithm can reach the lowest clustering errors automatically.
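
For context, the baseline both papers build on is Lloyd's k-means; a minimal 1-D version is sketched below. The MinMax reweighting of per-cluster errors and the PSO parameter tuning proposed above are not reproduced:

```python
# Plain Lloyd's k-means on 1-D data with fixed initial centers, for context.
# MinMax k-means replaces the summed error objective with a weighted one that
# penalizes the worst cluster; this sketch is only the underlying baseline.
def kmeans_1d(points, centers, iters=20):
    for _ in range(iters):
        # assignment step: each point joins its nearest center
        groups = [[] for _ in centers]
        for p in points:
            idx = min(range(len(centers)), key=lambda i: abs(p - centers[i]))
            groups[idx].append(p)
        # update step: each center moves to its group's mean
        centers = [sum(g) / len(g) if g else c for g, c in zip(groups, centers)]
    return centers

data = [1.0, 1.2, 0.8, 9.0, 9.3, 8.7]
centers = sorted(kmeans_1d(data, [0.0, 5.0]))
# converges to the two cluster means, 1.0 and 9.0
```

Bad initializations are exactly where this baseline fails (a center can capture two true clusters), which motivates the MinMax objective and the parameter search discussed in the abstract.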

  6. Bud development, flowering and fruit set of Moringa oleifera Lam. (Horseradish Tree as affected by various irrigation levels

    Directory of Open Access Journals (Sweden)

    Quintin Ernst Muhl

    2013-12-01

    Full Text Available Moringa oleifera is becoming increasingly popular as an industrial crop due to its multitude of useful attributes as a water purifier, nutritional supplement and biofuel feedstock. Given its tolerance to sub-optimal growing conditions, most of the current and anticipated cultivation areas are in medium to low rainfall areas. This study aimed to assess the effect of various irrigation levels on floral initiation, flowering and fruit set. Three treatments, namely a 900 mm (900IT), a 600 mm (600IT) and a 300 mm (300IT) per annum irrigation treatment, were administered through drip irrigation, simulating three total annual rainfall amounts. Individual inflorescences from each treatment were tagged during floral initiation and monitored throughout until fruit set. Flower bud initiation was highest at the 300IT and lowest at the 900IT for two consecutive growing seasons. Fruit set, on the other hand, decreased with the decrease in irrigation treatment. Floral abortion, reduced pollen viability and moisture stress in the style were contributing factors to the reduction in fruiting/yield observed at the 300IT. Moderate water stress prior to floral initiation could stimulate flower initiation; however, this should be followed by sufficient irrigation to ensure good pollination, fruit set and yield.

  7. A geometric approach to multiperiod mean variance optimization of assets and liabilities

    OpenAIRE

    Leippold, Markus; Trojani, Fabio; Vanini, Paolo

    2005-01-01

    We present a geometric approach to discrete-time multiperiod mean variance portfolio optimization that largely simplifies the mathematical analysis and the economic interpretation of such model settings. We show that multiperiod mean variance optimal policies can be decomposed into an orthogonal set of basis strategies, each having a clear economic interpretation. This implies that the corresponding multiperiod mean variance frontiers are spanned by an orthogonal basis of dynamic returns. Spec...
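The single-period building block underlying such mean variance problems has a well-known closed form. The sketch below (synthetic covariance numbers chosen here for illustration; the paper itself treats the multiperiod, orthogonally decomposed case) computes the global minimum-variance portfolio w = Σ⁻¹1 / (1ᵀΣ⁻¹1):

```python
import numpy as np

# Assumed 2-asset covariance matrix (illustrative numbers only).
Sigma = np.array([[0.04, 0.01],
                  [0.01, 0.09]])
ones = np.ones(2)

# Global minimum-variance weights: solve Sigma w ∝ 1, then normalize
# so the weights sum to one (fully invested portfolio).
w = np.linalg.solve(Sigma, ones)
w /= w.sum()
# w's portfolio variance w' Sigma w is below either asset's variance.
```

For this covariance matrix the weights come out to 8/11 and 3/11; the low-variance asset receives the larger allocation, as expected.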

  8. An Optimization Study on Listening Experiments to Improve the Comparability of Annoyance Ratings of Noise Samples from Different Experimental Sample Sets.

    Science.gov (United States)

    Di, Guoqing; Lu, Kuanguang; Shi, Xiaofan

    2018-03-08

    Annoyance ratings obtained from listening experiments are widely used in studies on the health effects of environmental noise. In listening experiments, when there are no reference sound samples, participants usually rate each noise sample according to its relative annoyance among all samples in the experimental set, which leads to poor comparability between experimental results obtained from different experimental sample sets. To solve this problem, this study proposed adding several pink noise samples with certain loudness levels into experimental sample sets as reference sound samples. On this basis, the standard curve between logarithmic mean annoyance and loudness level of pink noise was used to calibrate the experimental results, and the calibration procedures were described in detail. Furthermore, as a case study, six different types of noise sample sets were selected for listening experiments with this method to examine its applicability. Results showed that the differences in the annoyance ratings of each identical noise sample from different experimental sample sets were markedly decreased after calibration. The determination coefficient (R²) of the linear fitting functions between psychoacoustic annoyance (PA) and mean annoyance (MA) of noise samples from different experimental sample sets increased markedly after calibration. The case study indicated that the method is applicable to calibrating annoyance ratings obtained from different types of noise sample sets. After calibration, the comparability of annoyance ratings of noise samples from different experimental sample sets can be distinctly improved.
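The calibration idea can be illustrated schematically. In the sketch below, the anchor values, the linear form of the map and the `calibrate` helper are assumptions made for illustration; the paper's actual procedure maps each set's ratings of the pink-noise anchors onto the standard annoyance-versus-loudness curve:

```python
import numpy as np

def calibrate(raw_ratings, anchor_raw, anchor_standard):
    """Fit a least-squares line from this experiment's raw mean annoyance
    of the pink-noise anchors onto the standard-curve values, then apply
    the same linear map to every raw rating in the set."""
    a, b = np.polyfit(anchor_raw, anchor_standard, 1)
    return a * np.asarray(raw_ratings) + b

anchor_raw = np.array([2.0, 4.0, 6.0])       # anchors as rated in this set
anchor_standard = np.array([3.0, 5.0, 7.0])  # assumed standard-curve values
calibrated = calibrate([3.0, 5.0], anchor_raw, anchor_standard)
```

Because every sample set shares the same anchors, ratings from different sets land on a common scale after this mapping.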

  9. On reinitializing level set functions

    Science.gov (United States)

    Min, Chohong

    2010-04-01

    In this paper, we consider reinitializing level set functions through the equation ϕ_t + sgn(ϕ_0)(‖∇ϕ‖ − 1) = 0 [16]. The method of Russo and Smereka [11] is taken in the spatial discretization of the equation. The spatial discretization is, simply speaking, the second-order ENO finite difference with subcell resolution near the interface. Our main interest is the temporal discretization of the equation. We compare three temporal discretizations: the second-order Runge-Kutta method, the forward Euler method, and a Gauss-Seidel iteration of the forward Euler method. Since the time in the equation is fictitious, one hypothesis is that all the temporal discretizations reach the same stationary state. Since the absolute stability region of the forward Euler method is not wide enough to include all the eigenvalues of the linearized semi-discrete system of the second-order ENO spatial discretization, a second hypothesis is that the forward Euler temporal discretization should trigger numerical instability. Our results in this paper contradict both hypotheses. The Runge-Kutta and Gauss-Seidel methods attain second-order accuracy, and the forward Euler method converges with order between one and two. Examining all their properties, we conclude that the Gauss-Seidel method is the best of the three: compared to the Runge-Kutta method, it is twice as fast and requires half the memory at the same accuracy.
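A minimal 1-D illustration of the reinitialization equation, using forward Euler in fictitious time with first-order Godunov upwinding (deliberately much simpler than the paper's second-order ENO scheme with subcell resolution; grid size, step choices and the smoothed sign function are assumptions made here):

```python
import numpy as np

# Initial function has the correct zero level set (x = 0) but the wrong
# slope; reinitialization should drive it to the signed distance x.
N = 101
x = np.linspace(-1.0, 1.0, N)
dx = x[1] - x[0]
phi0 = 2.0 * x
phi = phi0.copy()
sgn = phi0 / np.sqrt(phi0**2 + dx**2)   # smoothed sign of phi0
dt = 0.5 * dx                           # CFL-limited pseudo-time step

for _ in range(300):
    # linear-extrapolation ghost cells at both ends
    ext = np.concatenate(([2*phi[0] - phi[1]], phi, [2*phi[-1] - phi[-2]]))
    a = (ext[1:-1] - ext[:-2]) / dx     # backward difference
    b = (ext[2:] - ext[1:-1]) / dx      # forward difference
    # Godunov approximation of |phi_x|, upwinded by the sign of phi0
    gp = np.sqrt(np.maximum(np.maximum(a, 0)**2, np.minimum(b, 0)**2))
    gm = np.sqrt(np.maximum(np.minimum(a, 0)**2, np.maximum(b, 0)**2))
    grad = np.where(sgn > 0, gp, gm)
    phi = phi - dt * sgn * (grad - 1.0)

# phi now approximates the signed distance function x, and the zero
# level set has not moved (sgn vanishes exactly at the interface node).
```

The fictitious-time iteration stops changing once ‖∇ϕ‖ = 1 away from the interface, which is the stationary state the paper's hypotheses concern.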

  10. Investigating the influence of anthropogenic forcing on observed mean and extreme sea level pressure trends over the Mediterranean Region.

    Science.gov (United States)

    Barkhordarian, Armineh

    2012-01-01

    We investigate whether the observed mean sea level pressure (SLP) trends over the Mediterranean region in the period from 1975 to 2004 are significantly consistent with what 17 models projected as response of SLP to anthropogenic forcing (greenhouse gases and sulphate aerosols, GS). Obtained results indicate that the observed trends in mean SLP cannot be explained by natural (internal) variability. Externally forced changes are detectable in all seasons, except spring. The large-scale component (spatial mean) of the GS signal is detectable in all the 17 models in winter and in 12 of the 17 models in summer. However, the small-scale component (spatial anomalies about the spatial mean) of GS signal is only detectable in winter within 11 of the 17 models. We also show that GS signal has a detectable influence on observed decreasing (increasing) tendency in the frequencies of extremely low (high) SLP days in winter and that these changes cannot be explained by internal climate variability. While the detection of GS forcing is robust in winter and summer, there are striking inconsistencies in autumn, where analysis points to the presence of an external forcing, which is not GS forcing.

  11. The development and validation of the Closed-set Mandarin Sentence (CMS) test.

    Science.gov (United States)

    Tao, Duo-Duo; Fu, Qian-Jie; Galvin, John J; Yu, Ya-Feng

    2017-09-01

    Matrix-styled sentence tests offer a closed-set paradigm that may be useful when evaluating speech intelligibility. Ideally, sentence test materials should reflect the distribution of phonemes within the target language. We developed and validated the Closed-set Mandarin Sentence (CMS) test to assess Mandarin speech intelligibility in noise. CMS test materials were selected to be familiar words and to represent the natural distribution of vowels, consonants, and lexical tones found in Mandarin Chinese. Ten key words in each of five categories (Name, Verb, Number, Color, and Fruit) were produced by a native Mandarin talker, resulting in a total of 50 words that could be combined to produce 100,000 unique sentences. Normative data were collected in 10 normal-hearing, adult Mandarin-speaking Chinese listeners using a closed-set test paradigm. Two test runs were conducted for each subject, and 20 sentences per run were randomly generated while ensuring that each word was presented only twice in each run. First, the levels of the words in each category were adjusted to produce equal intelligibility in noise. Test-retest reliability for word-in-sentence recognition was excellent according to Cronbach's alpha (0.952). After the category level adjustments, speech reception thresholds (SRTs) for sentences in noise, defined as the signal-to-noise ratio (SNR) that produced 50% correct whole-sentence recognition, were measured adaptively by adjusting the SNR according to the correctness of the response. The mean SRT was -7.9 (SE=0.41) and -8.1 (SE=0.34) dB for runs 1 and 2, respectively. The mean standard deviation across runs was 0.93 dB, and paired t-tests showed no significant difference between runs 1 and 2 (p=0.74) despite random sentences being generated for each run and each subject. The results suggest that the CMS provides a large stimulus set with which to repeatedly and reliably measure Mandarin-speaking listeners' speech understanding in noise using a closed-set paradigm.
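The adaptive SRT measurement can be sketched as a 1-down/1-up staircase, which by construction targets the 50%-correct point. The simulated listener below is a deliberately crude stand-in (a deterministic threshold response, an assumption made here); the actual CMS procedure adapts the SNR on whole-sentence correctness with real listeners:

```python
def run_staircase(true_srt_db, start_snr_db=0.0, step_db=2.0, trials=60):
    """1-down/1-up staircase: lower the SNR after a correct response,
    raise it after an incorrect one; the track converges to the SNR
    giving 50% correct. The SRT is estimated from the late trials."""
    snr = start_snr_db
    history = []
    for _ in range(trials):
        correct = snr > true_srt_db               # toy deterministic listener
        history.append(snr)
        snr += -step_db if correct else step_db   # 1-down/1-up rule
    return sum(history[-20:]) / 20.0

est = run_staircase(true_srt_db=-8.0)
```

With the toy listener the track oscillates one step about the threshold, so the estimate lands within a step of the true value; with a real (probabilistic) listener the reversal average plays the same role.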

  12. Correlation of symptom depression levels with mean platelet volume rate on patients of acute coronary syndrome

    Science.gov (United States)

    Hasugian, L.; Hanum, H.; Hanida, W.; Safri, Z.

    2018-03-01

    Depression in patients with the acute coronary syndrome (ACS) is rarely detected, although some studies report that depression can worsen cardiovascular outcomes and increase mortality. Canan F et al. found that an increased mean platelet volume (MPV) is a risk factor for atherosclerosis and that MPV was higher in patients with depression than in patients without depression. This study used an observational, cross-sectional design. Data were collected from November 2015 to May 2016 among inpatients of the H. Adam Malik General Hospital, Medan. Sixty-four patients with a diagnosis of ACS were given the Beck Depression Inventory (BDI) questionnaire; BDI scores and MPV levels were recorded when patients first entered the hospital, before treatment was given. Patients completed the questionnaire on days 3-7 after the ACS diagnosis. ACS patients were divided into 3 groups: acute myocardial infarction with ST elevation, acute myocardial infarction without ST elevation, and unstable angina pectoris. The level of depression was grouped into no depression, mild depression, moderate depression and severe depression. Significance was set at p-value < 0.05. Linear correlation analysis found a positive correlation with r = 0.542, statistically significant with p-value 0.000003.

  13. Generalized cost-effectiveness analysis for national-level priority-setting in the health sector

    Directory of Open Access Journals (Sweden)

    Edejer Tessa

    2003-12-01

    Full Text Available Abstract Cost-effectiveness analysis (CEA) is potentially an important aid to public health decision-making but, with some notable exceptions, its use and impact at the level of individual countries is limited. A number of potential reasons may account for this, among them technical shortcomings associated with the generation of current economic evidence, political expediency, social preferences and systemic barriers to implementation. As a form of sectoral CEA, Generalized CEA sets out to overcome a number of these barriers to the appropriate use of cost-effectiveness information at the regional and country level. Its application via WHO-CHOICE provides a new economic evidence base, as well as underlying methodological developments, concerning the cost-effectiveness of a range of health interventions for leading causes of, and risk factors for, disease. The estimated sub-regional costs and effects of different interventions provided by WHO-CHOICE can readily be tailored to the specific context of individual countries, for example by adjustment to the quantity and unit prices of intervention inputs (costs) or the coverage, efficacy and adherence rates of interventions (effectiveness). The potential usefulness of this information for health policy and planning is in assessing if current intervention strategies represent an efficient use of scarce resources, and which of the potential additional interventions that are not yet implemented, or not implemented fully, should be given priority on the grounds of cost-effectiveness. Health policy-makers and programme managers can use results from WHO-CHOICE as a valuable input into the planning and prioritization of services at national level, as well as a starting point for additional analyses of the trade-off between the efficiency of interventions in producing health and their impact on other key outcomes such as reducing inequalities and improving the health of the poor.

  14. Level-set segmentation of pulmonary nodules in megavolt electronic portal images using a CT prior

    International Nuclear Information System (INIS)

    Schildkraut, J. S.; Prosser, N.; Savakis, A.; Gomez, J.; Nazareth, D.; Singh, A. K.; Malhotra, H. K.

    2010-01-01

    Purpose: Pulmonary nodules present unique problems during radiation treatment due to nodule position uncertainty that is caused by respiration. The radiation field has to be enlarged to account for nodule motion during treatment. The purpose of this work is to provide a method of locating a pulmonary nodule in a megavolt portal image that can be used to reduce the internal target volume (ITV) during radiation therapy. A reduction in the ITV would result in a decrease in radiation toxicity to healthy tissue. Methods: Eight patients with non-small cell lung cancer were used in this study. CT scans that include the pulmonary nodule were captured with a GE Healthcare LightSpeed RT 16 scanner. Megavolt portal images were acquired with a Varian Trilogy unit equipped with an AS1000 electronic portal imaging device. The nodule localization method uses grayscale morphological filtering and level-set segmentation with a prior. The treatment-time portion of the algorithm is implemented on a graphics processing unit. Results: The method was retrospectively tested on eight cases that include a total of 151 megavolt portal image frames. The method reduced the nodule position uncertainty by an average of 40% for seven of the eight cases. The treatment-phase portion of the method has a subsecond execution time that makes it suitable for near-real-time nodule localization. Conclusions: A method was developed to localize a pulmonary nodule in a megavolt portal image. The method uses the characteristics of the nodule in a prior CT scan to enhance the nodule in the portal image and to identify the nodule region by level-set segmentation. In a retrospective study, the method reduced the nodule position uncertainty by an average of 40% for seven of the eight cases studied.

  15. Probabilistic generation of quantum contextual sets

    International Nuclear Information System (INIS)

    Megill, Norman D.; Fresl, Kresimir; Waegell, Mordecai; Aravind, P.K.; Pavicic, Mladen

    2011-01-01

    We give a method for exhaustive generation of a huge number of Kochen-Specker contextual sets, based on the 600-cell, for possible experiments and quantum gates. The method is complementary to our previous parity proof generation of these sets, and it gives all sets, while the parity proof method gives only sets with an odd number of edges in their hypergraph representation. Thus we obtain 35 new kinds of critical KS sets with an even number of edges. We also give a statistical estimate of the number of sets that might be obtained in an eventual exhaustive enumeration. -- Highlights: → We generate millions of new Kochen-Specker contextual sets. → We find thousands of novel critical Kochen-Specker (KS) sets. → We give algorithms for generating KS sets from a new 4-dim class. → We represent KS sets by means of hypergraphs and their figures. → We give a new exact estimation method for random sampling of sets.

  16. Operator Arithmetic-Harmonic Mean Inequality on Krein Spaces

    Directory of Open Access Journals (Sweden)

    M. Dehghani

    2014-03-01

    Full Text Available We prove an operator arithmetic-harmonic mean type inequality in the Krein space setting, by using block matrix techniques of indefinite type. We also give an example showing that the operator arithmetic-geometric-harmonic mean inequality for two invertible selfadjoint operators on Krein spaces is not valid in general.
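For orientation, the arithmetic-harmonic mean inequality that the paper transports to the Krein space setting reads, in its familiar Hilbert-space form (stated here as background; the paper's point is how the indefinite-metric analogue behaves):

```latex
% Hilbert-space form, for positive invertible operators A and B;
% the harmonic mean is H(A,B) = 2 (A^{-1} + B^{-1})^{-1}.
\frac{A + B}{2} \;\geq\; 2\left(A^{-1} + B^{-1}\right)^{-1}
```

On a Krein space, selfadjointness is taken with respect to the indefinite inner product, which is why the analogous chain of arithmetic-geometric-harmonic mean inequalities can fail, as the paper's counterexample shows.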

  17. Optimasi Pusat Cluster Awal K-Means dengan Algoritma Genetika Pada Pengelompokan Dokumen [Optimization of initial K-Means cluster centers with a genetic algorithm for document clustering]

    OpenAIRE

    Fauzi, Muhammad

    2017-01-01

    Clustering a data set of documents based on certain data points in the documents is an easy way to organize documents for later work. The K-Means clustering algorithm is an iterative clustering algorithm that partitions a set of entities into K clusters. Unfortunately, the resulting K-Means clusters depend on the initial cluster centers, which are generally assigned randomly. In this research, determining the initial K-Means cluster centers for document clustering is investi...

  18. Terminal values and meaning in life among university students with varied levels of altruism in the present period of socio-cultural change

    Directory of Open Access Journals (Sweden)

    Głaz Stanisław

    2012-01-01

    Full Text Available Abstract The author of this paper, interested in the issues of value preferences, meaning in life and altruism among university students, has attempted to show the relations between them in the present period of clearly noticeable socio-cultural change. The study was conducted in 2009-2010 in Kraków among university students. The age of the respondents ranged from 21 to 25. 200 sets of correctly completed questionnaires were used for the analysis of results.

  19. Association Between Student Loan Debt on Graduation, Demographic Characteristics and Initial Choice of Practice Setting of Pharmacists

    Directory of Open Access Journals (Sweden)

    Akeem A. Yusuf

    2011-01-01

    Full Text Available Objectives: (1) To examine trends in the level of student loan indebtedness for groups of pharmacists first licensed between 1980 and 2006; (2) to examine whether demographic variables are associated with the level of student loan indebtedness; (3) to examine the association between student loan debt and choice of practice setting while controlling for demographic variables. Methods: Data for this study were collected from a national random sample of 3,000 pharmacists using a self-administered survey. Descriptive statistics were used to examine trends in the level of indebtedness. The relationships between level of indebtedness, demographic variables and practice setting choice were examined using Chi-square statistics. Multinomial logistic regression was used to determine the independent association of student loan debt and choice of practice setting while controlling for demographic variables. Results: The proportion of licensed pharmacists reporting student loan debt after graduation, and the mean amount of debt incurred, increased between 1980 and 2006. Non-white pharmacists incurred debt at a higher proportion than white pharmacists, and they also incurred significantly higher levels of debt. A lower level of indebtedness was associated with choosing independent practice over chain practice. Conclusions: Student loan indebtedness has been increasing over time, especially for non-white pharmacy students. Future research should examine other factors that might influence student debt load, work contributions and choice of practice settings. The affordability of pharmacy education for students of color, and how salaries may or may not help offset these costs, also should be examined closely. Type: Original Research

  20. Organizational culture in the primary healthcare setting of Cyprus.

    Science.gov (United States)

    Zachariadou, Theodora; Zannetos, Savvas; Pavlakis, Andreas

    2013-03-24

    The concept of organizational culture is important in understanding the behaviour of individuals in organizations as they manage external demands and internal social changes. The Cyprus healthcare system is under restructuring and soon a new healthcare scheme will be implemented, starting at the Primary Healthcare (PHC) level. The aim of the study was to investigate the underlying culture encountered in the PHC setting of Cyprus and to identify possible differences between desired and prevailing cultures among healthcare professionals. The population of the study included all general practitioners (GPs) and nursing staff working at the 42 PHC centres throughout the island. The shortened version of the Organizational Culture Profile questionnaire, comprising 28 statements on organizational values, was used in the study. The instrument had already been translated and validated in Greek, and cross-cultural adaptation was performed. Participants were required to indicate the organization's characteristic cultural values orientation along a five-point Likert scale ranging from "Very Much = 1" to "Not at all = 5". Statistical analysis was performed using SPSS 16.0. Student's t-test was used to compare means between two groups of variables, whereas for more than two groups analysis of variance (ANOVA) was applied. From the total of 306 healthcare professionals, 223 participated in the study (72.9%). The majority of participants were women (75.3%) and mean age was 42.6 ± 10.7 years. The culture dimension "performance orientation" was the desired culture among healthcare professionals (mean: 1.39 ± 0.45). "Supportiveness" and "social responsibility" were the main cultures encountered in PHC (means: 2.37 ± 0.80, 2.38 ± 0.83). Statistically significant differences were identified between desired and prevailing cultures for all culture dimensions (p = 0.000). This was the first study performed in Cyprus assessing organizational culture in the PHC setting. In the forthcoming health system reform

  1. Human Rights: Its Meaning and Practice in Social Work Field Settings.

    Science.gov (United States)

    Steen, Julie A; Mann, Mary; Restivo, Nichole; Mazany, Shellene; Chapple, Reshawna

    2017-01-01

    The goal of the study reported in this article was to explore the conceptualizations of human rights and human rights practice among students and supervisors in social work field settings. Data were collected from 35 students and 48 supervisors through an online survey system that featured two open-ended questions regarding human rights issues in their agency and human rights practice tasks. Responses suggest that participants encountered human rights issues related to poverty, discrimination, participation/self-determination/autonomy, violence, dignity/respect, privacy, and freedom/liberty. They saw human rights practice as encompassing advocacy, service provision, assessment, awareness of threats to clients' rights, and the nature of the worker-client relationship. These results have implications for the social work profession, which has an opportunity to focus more intently on change efforts that support clients' rights. The study points to the possibilities of expanding the scope of the human rights competency within social work education and addressing the key human rights issues in field education. © 2016 National Association of Social Workers.

  2. Quality of Care and Job Satisfaction in the European Home Care Setting: Research Protocol

    Science.gov (United States)

    van der Roest, Henriëtte; van Hout, Hein; Declercq, Anja

    2016-01-01

    Introduction: Since the European population is ageing, a growing number of elderly will need home care. Consequently, high quality home care for the elderly remains an important challenge. Job satisfaction among care professionals is regarded as an important aspect of the quality of home care. Aim: This paper describes a research protocol to identify elements that have an impact on job satisfaction among care professionals and on quality of care for older people in the home care setting of six European countries. Methods: Data on elements at the macro-level (policy), meso-level (care organisations) and micro-level (clients) are of importance in determining job satisfaction and quality of care. Macro-level indicators will be identified in a previously published literature review. At meso- and micro-level, data will be collected by means of two questionnaires utilised with both care organisations and care professionals, and by means of interRAI Home Care assessments of clients. The client assessments will be used to calculate quality of care indicators. Subsequently, data will be analysed by means of linear and stepwise multiple regression analyses, correlations and multilevel techniques. Conclusions and Discussion: These results can guide health care policy makers in their decision making process in order to increase the quality of home care in their organisation, in their country or in Europe. PMID:28435423

  3. Quality of Care and Job Satisfaction in the European Home Care Setting: Research Protocol

    Directory of Open Access Journals (Sweden)

    Liza Van Eenoo

    2016-08-01

    Full Text Available Introduction: Since the European population is ageing, a growing number of elderly will need home care. Consequently, high quality home care for the elderly remains an important challenge. Job satisfaction among care professionals is regarded as an important aspect of the quality of home care. Aim: This paper describes a research protocol to identify elements that have an impact on job satisfaction among care professionals and on quality of care for older people in the home care setting of six European countries. Methods: Data on elements at the macro-level (policy), meso-level (care organisations) and micro-level (clients) are of importance in determining job satisfaction and quality of care. Macro-level indicators will be identified in a previously published literature review. At meso- and micro-level, data will be collected by means of two questionnaires utilised with both care organisations and care professionals, and by means of interRAI Home Care assessments of clients. The client assessments will be used to calculate quality of care indicators. Subsequently, data will be analysed by means of linear and stepwise multiple regression analyses, correlations and multilevel techniques. Conclusions and Discussion: These results can guide health care policy makers in their decision making process in order to increase the quality of home care in their organisation, in their country or in Europe.

  4. Reporting new cases of anaemia in primary care settings in Crete, Greece: a rural practice study

    Directory of Open Access Journals (Sweden)

    Lionis Christos

    2012-04-01

    Full Text Available Abstract Background Early diagnosis of anaemia represents an important task within primary care settings. This study reports on the frequency of new cases of anaemia among patients attending rural primary care settings in Crete (Greece) and offers an estimate of iron deficiency anaemia (IDA) frequency in this study group. Methods All patients attending the rural primary health care units of twelve general practitioners (GPs) on the island of Crete for ten consecutive working days were eligible to participate in this study. Hemoglobin (Hb) levels were measured by portable analyzers. Laboratory tests to confirm new cases of anaemia were performed at the University General Hospital of Heraklion. Results One hundred and thirteen out of 541 recruited patients had a low value of Hb according to the initial measurement obtained with the portable analyzer. Forty-five (45.5%) of the 99 subjects who underwent laboratory testing had confirmed anaemia. The mean value of the Hb levels in the group with confirmed anaemia, as detected by the portable analyzer, was 11.1 g/dl (95% Confidence Interval (CI) from 10.9 to 11.4) and the respective mean value of the Hb levels obtained from the full blood count was 11.4 g/dl (95% CI from 11.2 to 11.7) (P = 0.01). Sixteen of those 45 patients with anaemia (35.6%) had IDA, with ferritin levels lower than 30 ng/ml. Conclusion Keeping in mind that this paper does not deal with specificity or sensitivity figures, it is suggested that in rural and remote settings anaemia is still invisible and point-of-care testing may have a place in identifying it.

  5. A spectral k-means approach to bright-field cell image segmentation.

    Science.gov (United States)

    Bradbury, Laura; Wan, Justin W L

    2010-01-01

    Automatic segmentation of bright-field cell images is important to cell biologists, but difficult to accomplish due to the complex nature of the cells in bright-field images (poor contrast, broken halo, missing boundaries). Standard approaches such as level set segmentation and active contours work well for fluorescent images, where cells appear round, but become less effective when optical artifacts such as halo exist in bright-field images. In this paper, we present a robust segmentation method which combines the spectral and k-means clustering techniques to locate cells in bright-field images. This approach models an image as a matrix graph and segments different regions of the image by computing the appropriate eigenvectors of the matrix graph and using the k-means algorithm. We illustrate the effectiveness of the method by segmentation results of C2C12 (muscle) cells in bright-field images.
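A toy version of the spectral-plus-k-means pipeline on a synthetic image (dense intensity affinity, normalized Laplacian, two-eigenvector embedding; the paper's affinity construction for bright-field artifacts is more elaborate, and the image, sigma, and the seed pixels chosen below are assumptions made here):

```python
import numpy as np

# Synthetic 8x8 image: a bright "cell" square on a dark background.
img = np.zeros((8, 8))
img[2:6, 2:6] = 1.0
I = img.ravel()
sigma = 0.5

# Dense intensity affinity and the symmetric normalized Laplacian
# L = Id - D^{-1/2} W D^{-1/2}.
W = np.exp(-((I[:, None] - I[None, :]) ** 2) / (2 * sigma ** 2))
d = W.sum(1)
L = np.eye(len(I)) - W / np.sqrt(d[:, None] * d[None, :])
vals, vecs = np.linalg.eigh(L)
emb = vecs[:, :2]                      # eigenvectors of the 2 smallest eigenvalues

# Tiny Lloyd-style k-means (k = 2) on the spectral embedding, seeded
# deterministically with one background pixel and one cell pixel.
centers = emb[[0, 2 * 8 + 2]].copy()
for _ in range(10):
    labels = np.argmin(((emb[:, None] - centers[None]) ** 2).sum(-1), 1)
    centers = np.array([emb[labels == j].mean(0) for j in (0, 1)])
labels = labels.reshape(img.shape)     # per-pixel region labels
```

Because the two intensity populations produce a near-block-structured affinity matrix, the embedding collapses each region to a point and k-means separates them cleanly; real bright-field images need the paper's more careful graph construction.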

  6. MERRA 2D IAU Ocean Surface Diagnostic, Single Level, Monthly Mean (2/3x1/2L1) V5.2.0

    Data.gov (United States)

    National Aeronautics and Space Administration — The MATMNXOCN or tavgM_2d_ocn_Nx data product is the MERRA Data Assimilation System 2-Dimensional ocean surface single-level diagnostics that is monthly mean...

  7. [The European countries confronting cancer: a set of indicators assessing public health status].

    Science.gov (United States)

    Borella, Laurent

    2008-11-01

    We now know that efficient public policies for cancer control need to be global and take into account all of the factors involved: economics and level of development, lifestyle and risk factors, access to screening, and effectiveness of the care-providing system. A very simple scorecard is proposed, based on publicly available public health indicators, which allows a comparison between European countries. We extracted 49 indicators from public databases and the literature concerning 22 European countries. We made correlation calculations in order to identify relevant indicators, from which a global score was extracted. Using a hierarchical clustering method we were then able to identify subsets of homogeneous countries. A 7-indicator scorecard was drawn up: national gross product, scientific production, smoking rate, breast screening participation rate, all-cancer mortality rate (male population), 5-year relative survival for colorectal cancer, and life expectancy at birth. A global score shows: 1) the better-positioned countries: Switzerland, Sweden, Finland and France; 2) the countries where cancer control is less effective: Estonia, Hungary, Poland and Slovakia. Three subsets of countries with a fairly similar profile were identified: a group with a high level of means and results; a group with a high level of means but a medium level of results; and a group with a low level of means and results. This work emphasizes dramatically heterogeneous situations between countries. A follow-up, using a reduced but regularly updated set of public health indicators, would help induce an active European policy for cancer control.

  8. Classification of upper limb disability levels of children with spastic unilateral cerebral palsy using K-means algorithm.

    Science.gov (United States)

    Raouafi, Sana; Achiche, Sofiane; Begon, Mickael; Sarcher, Aurélie; Raison, Maxime

    2018-01-01

    Treatment for cerebral palsy depends upon the severity of the child's condition and requires knowledge about upper limb disability. The aim of this study was to develop a systematic quantitative classification method of the upper limb disability levels of children with spastic unilateral cerebral palsy based on upper limb movements and muscle activation. Thirteen children with spastic unilateral cerebral palsy and six typically developing children participated in this study. Patients were matched on age and on manual ability classification system levels I to III. Twenty-three kinematic and electromyographic (EMG) variables were collected from two tasks. Discriminant analysis and the K-means clustering algorithm were applied to the 23 kinematic and EMG variables of each participant. Among the 23 variables, discriminant analysis identified only two containing the most relevant information for the prediction of the four severity levels of spastic unilateral cerebral palsy defined by the manual ability classification system: (1) the Falconer index (CAI_E), the ratio of biceps to triceps brachii activity during extension, and (2) the maximal extension angle (θ_Extension,max). A good correlation (Kendall rank correlation coefficient = -0.53, p = 0.01) was found between the levels fixed by the manual ability classification system and the obtained classes. These findings suggest that the cost and effort needed to assess and characterize the disability level of a child can be further reduced.

  9. Critical study of the dispersive n-90Zr mean field by means of a new variational method

    Science.gov (United States)

    Mahaux, C.; Sartor, R.

    1994-02-01

    A new variational method is developed for the construction of the dispersive nucleon-nucleus mean field at negative and positive energies. Like the variational moment approach that we had previously proposed, the new method only uses phenomenological optical-model potentials as input. It is simpler and more flexible than the previous approach. It is applied to a critical investigation of the n-90Zr mean field between -25 and +25 MeV. This system is of particular interest because conflicting results had recently been obtained by two different groups. While the imaginary parts of the phenomenological optical-model potentials provided by these two groups are similar, their real parts are quite different. Nevertheless, we demonstrate that these two sets of phenomenological optical-model potentials are both compatible with the dispersion relation which connects the real and imaginary parts of the mean field. Previous hints to the contrary, by one of the two groups, are shown to be due to unjustified approximations. A striking outcome of the present study is that it is important to explicitly introduce volume absorption in the dispersion relation, although volume absorption is negligible in the energy domain investigated here. Because of the existence of two sets of phenomenological optical-model potentials, our variational method yields two dispersive mean fields whose real parts are quite different at small or negative energies. No preference for one of the two dispersive mean fields can be expressed on purely empirical grounds since they both yield fair agreement with the experimental cross sections as well as with the observed energies of the bound single-particle states. However, we argue that one of these two mean fields is physically more meaningful, because the radial shape of its Hartree-Fock type component is independent of energy, as expected on theoretical grounds. 
This preferred mean field is very close to the one which had been obtained by the Ohio

  10. Evaluation of gravity field model EIGEN-6C4 by means of various functions of gravity potential, and by GNSS/levelling

    Directory of Open Access Journals (Sweden)

    Jan Kostelecký

    2015-06-01

    Full Text Available The combined gravity field model EIGEN-6C4 (Förste et al., 2014) is the latest combined global gravity field model of GFZ Potsdam and GRGS Toulouse. EIGEN-6C4 has been generated including the satellite gravity gradiometry data of the entire GOCE mission (November 2009 till October 2013) and is of maximum spherical degree and order 2190. In this study EIGEN-6C4 has been compared with EGM2008 up to its maximum degree and order via gravity disturbances and the Tzz part of the Marussi tensor of the second derivatives of the disturbing potential. The emphasis is put on areas where GOCE data (the complete set of gradiometry measurements after reductions) in EIGEN-6C4 obviously contribute to an improvement of the gravity field description. GNSS/levelling geoid heights are an independent data source for the evaluation of gravity field models. Therefore, we use the GNSS/levelling data sets over the territories of Europe, the Czech Republic and Slovakia for the evaluation of EIGEN-6C4 w.r.t. EGM2008.

  11. Is the Cardiovascular Response Equivalent Between a Supervised Center-Based Setting and a Self-care Home-Based Setting When Rating of Perceived Exertion Is Used to Guide Aerobic Exercise Intensity During a Cardiac Rehabilitation Program?

    DEFF Research Database (Denmark)

    Tang, Lars H.; Zwisler, Ann Dorthe; Kikkenborg Berg, Selina

    2017-01-01

    and atrial fibrillation post–radiofrequency ablation) participating in exercise-based rehabilitation were included. Patients performed a 12-week program in either a center- or a home-based setting. Using RPE, patients recorded their exercise intensity 3 times during an aerobic training phase. Exercise...... intensity was objectively measured using heart rate (HR) monitors. RESULTS: A total of 2622 RPE values with corresponding HR data were available. There was no difference in the level of association (interaction P = 0.51) between HR and RPE seen in the center-based setting (mean of 6.1 beats/min per 1...

  12. Histogram plots and cutoff energies for nuclear discrete levels

    International Nuclear Information System (INIS)

    Belgya, T.; Molnar, G.; Fazekas, B.; Oestoer, J.

    1997-05-01

    Discrete level schemes for 1277 nuclei, from 6Li through 251Es, extracted from the Evaluated Nuclear Structure Data File were analyzed. Cutoff energies (Umax), indicating the upper limit of level scheme completeness, were deduced from the inspection of histograms of the cumulative number of levels. Parameters of the constant-temperature level density formula (nuclear temperature T and energy shift U0) were obtained by means of a least-squares fit of the formula to the known levels below the cutoff energy. The results are tabulated for all 1277 nuclei, allowing for an easy and reliable application of the constant-temperature level density approach. A complete set of cumulative plots of discrete levels is also provided. (author). 5 figs, 2 tabs
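The fit described above can be sketched in a few lines. The cumulative level count in the constant-temperature model is commonly written N(U) = exp((U − U0)/T), so ln N is linear in U and T, U0 follow from an ordinary least-squares line. The level energies below are synthetic, generated from known parameters so the fit can be checked, not data from the evaluation:

```python
from math import log

# Synthetic discrete-level energies (MeV) generated so that the cumulative
# count follows N(U) = exp((U - U0) / T) with T = 0.8 MeV, U0 = -1.0 MeV.
T_true, U0_true = 0.8, -1.0
levels = [U0_true + T_true * log(n) for n in range(1, 41)]  # U_n where N = n

# ln N(U) = (U - U0)/T is linear in U: slope = 1/T, intercept = -U0/T.
xs = levels
ys = [log(n) for n in range(1, len(levels) + 1)]
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
intercept = my - slope * mx

T_fit = 1 / slope             # recovered nuclear temperature
U0_fit = -intercept / slope   # recovered energy shift
print(T_fit, U0_fit)
```

Since the synthetic data lie exactly on the model curve, the least-squares line recovers T and U0 to machine precision; with real level schemes the fit would of course carry scatter below the cutoff energy.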

  13. Optimal maintenance policy incorporating system level and unit level for mechanical systems

    Science.gov (United States)

    Duan, Chaoqun; Deng, Chao; Wang, Bingran

    2018-04-01

    This study develops a multi-level maintenance policy combining the system level and the unit level under soft and hard failure modes. The system undergoes system-level preventive maintenance (SLPM) when the conditional reliability of the entire system exceeds the SLPM threshold, and each single unit undergoes a two-level maintenance: one level is initiated when the unit exceeds its own preventive maintenance (PM) threshold, and the other is performed opportunistically whenever any other unit goes in for maintenance. The units experience both periodic inspections and aperiodic inspections triggered by failures of hard-type units. To model practical situations, two types of economic dependence are taken into account: set-up cost dependence and maintenance expertise dependence, which arise because the same technology and tools/equipment can be utilised. The optimisation problem is formulated and solved in a semi-Markov decision process framework. The objective is to find the optimal system-level threshold and unit-level thresholds by minimising the long-run expected average cost per unit time. A formula for the mean residual life is derived for the proposed multi-level maintenance policy. The method is illustrated by a real case study of the feed subsystem of a boring machine, and a comparison with other policies demonstrates the effectiveness of our approach.

  14. Clustering Using Boosted Constrained k-Means Algorithm

    Directory of Open Access Journals (Sweden)

    Masayuki Okabe

    2018-03-01

    Full Text Available This article proposes a constrained clustering algorithm with performance competitive with state-of-the-art methods at less computation time, consisting of a constrained k-means algorithm enhanced by the boosting principle. Constrained k-means clustering, which uses constraints as background knowledge, is easy to implement and quick but performs worse than metric learning-based methods. Since it simply adds a function to the data assignment step of the k-means algorithm to check for constraint violations, it often exploits only a small number of constraints. Metric learning-based methods, which exploit constraints to create a new metric for data similarity, have shown promising results, although the methods proposed so far are often slow depending on the amount of data or the number of feature dimensions. We present a method that exploits the advantages of both the constrained k-means and metric learning approaches. It incorporates a mechanism for accepting constraint priorities and a metric learning framework based on the boosting principle into a constrained k-means algorithm. In this framework, a metric is learned in the form of a kernel matrix that integrates weak cluster hypotheses produced by the constrained k-means algorithm, which works as a weak learner under the boosting principle. Experimental results for 12 data sets from 3 data sources demonstrate that our method performs competitively with state-of-the-art constrained clustering methods for most data sets while taking much less computation time. The experimental evaluation demonstrates the effectiveness of controlling the constraint priorities by using the boosting principle and that our constrained k-means algorithm functions correctly as a weak learner of boosting.
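The constraint-violation check added to the assignment step, as described above, can be sketched in the style of the classic COP-KMeans idea: a point takes the nearest cluster that does not break any cannot-link constraint. The data, the single constraint, and the deterministic seeding below are invented for illustration and are not the article's benchmark setup:

```python
# Toy 1-D data with one cannot-link constraint: points 0 and 1 may not share
# a cluster even though their values are close.
data = [1.0, 1.2, 5.0, 5.3, 9.0, 9.2]
cannot_link = {(0, 1)}                 # pairs of point indices, i < j

def violates(i, cluster, assign):
    """True if putting point i into `cluster` breaks a cannot-link pair."""
    return any((min(i, j), max(i, j)) in cannot_link
               for j, c in enumerate(assign) if c == cluster and j != i)

def constrained_kmeans(data, k, centers, iters=10):
    assign = []
    for _ in range(iters):
        assign = [None] * len(data)
        for i, x in enumerate(data):
            # try clusters from nearest to farthest, skipping violations
            for c in sorted(range(k), key=lambda c: abs(x - centers[c])):
                if not violates(i, c, assign):
                    assign[i] = c
                    break
        for c in range(k):             # recompute centers, keep empty ones
            members = [x for i, x in enumerate(data) if assign[i] == c]
            if members:
                centers[c] = sum(members) / len(members)
    return assign

labels = constrained_kmeans(data, 3, centers=[data[0], data[2], data[4]])
print(labels)   # the cannot-link pair ends up in different clusters
```

Without the constraint, points 0 and 1 would share a cluster; the violation check forces point 1 into its second-nearest cluster, which is exactly the behavior the article identifies as exploiting only a small number of constraints.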

  15. A K-means multivariate approach for clustering independent components from magnetoencephalographic data.

    Science.gov (United States)

    Spadone, Sara; de Pasquale, Francesco; Mantini, Dante; Della Penna, Stefania

    2012-09-01

    Independent component analysis (ICA) is typically applied on functional magnetic resonance imaging, electroencephalographic and magnetoencephalographic (MEG) data due to its data-driven nature. In these applications, ICA needs to be extended from single to multi-session and multi-subject studies for interpreting and assigning a statistical significance at the group level. Here a novel strategy for analyzing MEG independent components (ICs) is presented, Multivariate Algorithm for Grouping MEG Independent Components K-means based (MAGMICK). The proposed approach is able to capture spatio-temporal dynamics of brain activity in MEG studies by running ICA at subject level and then clustering the ICs across sessions and subjects. Distinctive features of MAGMICK are: i) the implementation of an efficient set of "MEG fingerprints" designed to summarize properties of MEG ICs as they are built on spatial, temporal and spectral parameters; ii) the implementation of a modified version of the standard K-means procedure to improve its data-driven character. This algorithm groups the obtained ICs automatically estimating the number of clusters through an adaptive weighting of the parameters and a constraint on the ICs independence, i.e. components coming from the same session (at subject level) or subject (at group level) cannot be grouped together. The performances of MAGMICK are illustrated by analyzing two sets of MEG data obtained during a finger tapping task and median nerve stimulation. The results demonstrate that the method can extract consistent patterns of spatial topography and spectral properties across sessions and subjects that are in good agreement with the literature. In addition, these results are compared to those from a modified version of affinity propagation clustering method. The comparison, evaluated in terms of different clustering validity indices, shows that our methodology often outperforms the clustering algorithm. Eventually, these results are

  16. Monitoring electro-magnetic field in urban areas: new set-ups and results

    Energy Technology Data Exchange (ETDEWEB)

    Lubritto, C.; Petraglia, A.; Paribello, G.; Formosi, R.; Rosa, M. de; Vetromile, C.; Palmieri, A.; D' Onofrio, A. [Seconda Universita di Napoli, Dipt. di Scienze Ambientali, Caserta (Italy); Di Bella, G.; Giannini, V. [Vector Group, Roma (Italy)

    2006-07-01

    In this paper two different set-ups for continuous monitoring of electromagnetic field levels are presented: the first (Continuous Time E.M.F. Monitoring System) is based upon a network of fixed stations, allowing detailed field monitoring as a function of time; the second (Mobile Measurement Units) resorts to portable stations mounted on standard bicycles, allowing positional screening in limited time intervals. For both set-ups particular attention has been paid to data management, by means of tools like web geographic information systems (Web-GIS). Moreover, the V.I.C.R.E.M./E.L.F. software has been used for a predictive analysis of electromagnetic field levels along with the geo-referenced data coming from the field measurements. These results show the need for efficient and correct monitoring and information/training in this domain, where disinformation or poor information very often spreads among the population, particularly because the appreciation and assessment of risk does not necessarily follow a rational, technically-informed procedure; the judgement is instead based on a personal feeling, which may derive from a limited, unstructured set of information, using qualitative attributes rather than quantities. (N.C.)

  17. Monitoring electro-magnetic field in urban areas: new set-ups and results

    International Nuclear Information System (INIS)

    Lubritto, C.; Petraglia, A.; Paribello, G.; Formosi, R.; Rosa, M. de; Vetromile, C.; Palmieri, A.; D'Onofrio, A.; Di Bella, G.; Giannini, V.

    2006-01-01

    In this paper two different set-ups for continuous monitoring of electromagnetic field levels are presented: the first (Continuous Time E.M.F. Monitoring System) is based upon a network of fixed stations, allowing detailed field monitoring as a function of time; the second (Mobile Measurement Units) resorts to portable stations mounted on standard bicycles, allowing positional screening in limited time intervals. For both set-ups particular attention has been paid to data management, by means of tools like web geographic information systems (Web-GIS). Moreover, the V.I.C.R.E.M./E.L.F. software has been used for a predictive analysis of electromagnetic field levels along with the geo-referenced data coming from the field measurements. These results show the need for efficient and correct monitoring and information/training in this domain, where disinformation or poor information very often spreads among the population, particularly because the appreciation and assessment of risk does not necessarily follow a rational, technically-informed procedure; the judgement is instead based on a personal feeling, which may derive from a limited, unstructured set of information, using qualitative attributes rather than quantities. (N.C.)

  18. Effect of a uniform magnetic field on dielectric two-phase bubbly flows using the level set method

    International Nuclear Information System (INIS)

    Ansari, M.R.; Hadidi, A.; Nimvari, M.E.

    2012-01-01

    In this study, the behavior of a single bubble in a dielectric viscous fluid under a uniform magnetic field has been simulated numerically using the Level Set method in two-phase bubbly flow. The two-phase bubbly flow was considered to be laminar and homogeneous. Deformation of the bubble was considered to be due to buoyancy and magnetic forces induced by the externally applied magnetic field. A computer code was developed to solve the problem using the flow field, the interface of the two phases, and the magnetic field. The Finite Volume method was applied using the SIMPLE algorithm to discretize the governing equations. Using this algorithm enables us to calculate the pressure parameter, which had been eliminated by previous researchers because of the complexity of the two-phase flow. The finite difference method was used to solve the magnetic field equation. The results outlined in the present study agree well with the existing experimental data and numerical results. These results show that the magnetic field affects and controls the shape, size, velocity, and location of the bubble. - Highlights: ► A bubble's behavior was simulated numerically. ► A single bubble was considered in a dielectric viscous fluid. ► A uniform magnetic field was used to study the bubble's behavior. ► Deformation of the bubble was captured using the Level Set method. ► The magnetic field affects the shape, size, velocity, and location of the bubble.
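As a minimal, self-contained sketch of the level set idea used in records like the one above (not the paper's coupled magnetic/flow solver), the snippet below advects a circular interface outward at unit normal speed with the first-order update φ ← φ − Δt·F·|∇φ| and checks that the zero level set moves accordingly; grid size, speed, and time step are illustrative choices:

```python
from math import sqrt, hypot

# Level-set sketch: a circular interface advected outward at normal speed F
# on a uniform grid, using phi <- phi - dt * F * |grad phi|.
N, L = 81, 4.0                    # grid points per axis, domain [-2, 2]^2
h = L / (N - 1)
xs = [-L / 2 + i * h for i in range(N)]
r0 = 1.0
phi = [[hypot(x, y) - r0 for y in xs] for x in xs]   # signed distance

def grad_mag(p, i, j):
    """Central-difference gradient magnitude at interior node (i, j)."""
    dx = (p[i + 1][j] - p[i - 1][j]) / (2 * h)
    dy = (p[i][j + 1] - p[i][j - 1]) / (2 * h)
    return sqrt(dx * dx + dy * dy)

F, dt, steps = 1.0, 0.01, 30      # outward normal speed, CFL-safe step
for _ in range(steps):
    new = [row[:] for row in phi]
    for i in range(1, N - 1):
        for j in range(1, N - 1):
            new[i][j] = phi[i][j] - dt * F * grad_mag(phi, i, j)
    phi = new

# The zero level set should now sit near radius r0 + F*steps*dt = 1.3;
# locate the sign change of phi along the positive x-axis.
mid = N // 2
radius = next(xs[i] for i in range(mid, N - 1)
              if phi[i][mid] <= 0 < phi[i + 1][mid])
print(radius)
```

Production level set codes add upwind differencing, reinitialization, and a physically derived speed field; the point here is only that the interface is carried implicitly as the zero contour of φ rather than tracked explicitly.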

  19. Priority setting: what constitutes success? A conceptual framework for successful priority setting.

    Science.gov (United States)

    Sibbald, Shannon L; Singer, Peter A; Upshur, Ross; Martin, Douglas K

    2009-03-05

    The sustainability of healthcare systems worldwide is threatened by a growing demand for services and expensive innovative technologies. Decision makers struggle in this environment to set priorities appropriately, particularly because they lack consensus about which values should guide their decisions. One way to approach this problem is to determine what all relevant stakeholders understand successful priority setting to mean. The goal of this research was to develop a conceptual framework for successful priority setting. Three separate empirical studies were completed using qualitative data collection methods (one-on-one interviews with healthcare decision makers from across Canada; focus groups with representation of patients, caregivers and policy makers; and Delphi study including scholars and decision makers from five countries). This paper synthesizes the findings from three studies into a framework of ten separate but interconnected elements germane to successful priority setting: stakeholder understanding, shifted priorities/reallocation of resources, decision making quality, stakeholder acceptance and satisfaction, positive externalities, stakeholder engagement, use of explicit process, information management, consideration of values and context, and revision or appeals mechanism. The ten elements specify both quantitative and qualitative dimensions of priority setting and relate to both process and outcome components. To our knowledge, this is the first framework that describes successful priority setting. The ten elements identified in this research provide guidance for decision makers and a common language to discuss priority setting success and work toward improving priority setting efforts.

  20. Automatic Fontanel Extraction from Newborns' CT Images Using Variational Level Set

    Science.gov (United States)

    Kazemi, Kamran; Ghadimi, Sona; Lyaghat, Alireza; Tarighati, Alla; Golshaeyan, Narjes; Abrishami-Moghaddam, Hamid; Grebe, Reinhard; Gondary-Jouet, Catherine; Wallois, Fabrice

    A realistic head model is needed for source localization methods used for the study of epilepsy in neonates applying Electroencephalographic (EEG) measurements from the scalp. The earliest models consider the head as a series of concentric spheres, each layer corresponding to a different tissue whose conductivity is assumed to be homogeneous. The results of the source reconstruction depend highly on the electric conductivities of the tissues forming the head. The most commonly used model consists of three layers (scalp, skull, and intracranial). Most of the major bones of the neonate's skull are ossified at birth but can slightly move relative to each other. This is due to the sutures, fibrous membranes that at this stage of development connect the already ossified flat bones of the neurocranium. These weak parts of the neurocranium are called fontanels. It is therefore important to include the exact geometry of the fontanels and flat bones in a source reconstruction, because they show pronounced differences in conductivity. Computed Tomography (CT) imaging provides an excellent tool for non-invasive investigation of the skull, which appears in high contrast to all other tissues, while the fontanels can only be identified as an absence of bone, i.e., gaps in the skull formed by flat bone. Therefore, the aim of this paper is to extract the fontanels from CT images by applying a variational level set method. We applied the proposed method to CT images of five different subjects. The automatically extracted fontanels show good agreement with the manually extracted ones.

  1. Settings for Physical Activity – Developing a Site-specific Physical Activity Behavior Model based on Multi-level Intervention Studies

    DEFF Research Database (Denmark)

    Troelsen, Jens; Klinker, Charlotte Demant; Breum, Lars

    Settings for Physical Activity – Developing a Site-specific Physical Activity Behavior Model based on Multi-level Intervention Studies Introduction: Ecological models of health behavior have potential as theoretical framework to comprehend the multiple levels of factors influencing physical...... to be taken into consideration. A theoretical implication of this finding is to develop a site-specific physical activity behavior model adding a layered structure to the ecological model representing the determinants related to the specific site. Support: This study was supported by TrygFonden, Realdania...... activity (PA). The potential is shown by the fact that there has been a dramatic increase in application of ecological models in research and practice. One proposed core principle is that an ecological model is most powerful if the model is behavior-specific. However, based on multi-level interventions...

  2. Protein Oxidation Levels After Different Corneal Collagen Cross-Linking Methods.

    Science.gov (United States)

    Turkcu, Ummuhani Ozel; Yuksel, Nilay; Novruzlu, Sahin; Yalinbas, Duygu; Bilgihan, Ayse; Bilgihan, Kamil

    2016-03-01

    To evaluate advanced oxidation protein products (AOPP) levels, superoxide dismutase (SOD) enzyme activity, and total sulfhydryl (TSH) levels in rabbit corneas after different corneal collagen cross-linking (CXL) methods. Eighteen eyes of 9 adult New Zealand rabbits were divided into 3 groups of 6 eyes. The standard CXL group was continuously exposed to UV-A at a power setting of 3 mW/cm² for 30 minutes. The accelerated CXL (A-CXL) group was continuously exposed to UV-A at a power setting of 30 mW/cm² for 3 minutes. The pulse light-accelerated CXL (PLA-CXL) group received UV-A at a power setting of 30 mW/cm² for 6 minutes of pulsed exposure (1 second on, 1 second off). Corneas were obtained after 1 hour of UV-A exposure, and 360-degree keratotomy was performed. SOD enzyme activity, AOPP, and TSH levels were measured in the corneal tissues. Compared with the standard CXL and A-CXL groups (133.2 ± 8.5 and 140.2 ± 6.2 μmol/mg, respectively), AOPP levels were found to be significantly increased in the PLA-CXL group (230.7 ± 30.2 μmol/mg) (P = 0.005 and 0.009, respectively). SOD enzyme activities and TSH levels did not differ between the groups (P = 0.167 and 0.187, respectively). CXL creates covalent bonds between collagen fibers by means of reactive oxygen species, so a CXL method performed at a higher oxygen concentration will produce more reactive oxygen species and thereby more AOPP. This study demonstrated that PLA-CXL results in more AOPP formation than did standard CXL and A-CXL.

  3. Mean-Gini Portfolio Analysis: A Pedagogic Illustration

    Directory of Open Access Journals (Sweden)

    C. Sherman Cheung

    2007-05-01

    Full Text Available It is well known in the finance literature that mean-variance analysis is inappropriate when asset returns are not normally distributed or investors’ preferences of returns are not characterized by quadratic functions. The normality assumption has been widely rejected in cases of emerging market equities and hedge funds. The mean-Gini framework is an attractive alternative as it is consistent with stochastic dominance rules regardless of the probability distributions of asset returns. Applying mean-Gini to a portfolio setting involving multiple assets, however, has always been challenging to business students whose training in optimization is limited. This paper introduces a simple spreadsheet-based approach to mean-Gini portfolio optimization, thus allowing the mean-Gini concepts to be covered more effectively in finance courses such as portfolio theory and investment analysis.
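The spreadsheet search the article describes can be mimicked in a few lines: compute the mean and Gini mean difference (GMD) of the portfolio return for each weight on a grid, then inspect the resulting frontier. The two return series below are invented, and the GMD estimator shown (mean absolute pairwise difference) is one common variant, not necessarily the article's exact formulation:

```python
from itertools import combinations

# Hypothetical monthly returns for two assets -- illustrative values only.
r_a = [0.02, -0.01, 0.03, 0.00, 0.01, -0.02]
r_b = [0.01,  0.02, -0.01, 0.03, 0.00, 0.01]

def gini_mean_difference(xs):
    """Mean absolute difference over all pairs of observations."""
    pairs = list(combinations(xs, 2))
    return sum(abs(a - b) for a, b in pairs) / len(pairs)

def portfolio_stats(w):
    """Mean return and GMD of the portfolio with weight w on asset A."""
    rp = [w * a + (1 - w) * b for a, b in zip(r_a, r_b)]
    return sum(rp) / len(rp), gini_mean_difference(rp)

# Trace a coarse mean-Gini frontier over a weight grid, analogous to the
# spreadsheet search the article describes.
grid = [i / 10 for i in range(11)]
frontier = [(w, *portfolio_stats(w)) for w in grid]
best = min(frontier, key=lambda t: t[2])       # minimum-GMD portfolio
print(best)
```

Unlike variance, the GMD involves absolute pairwise differences, which is why a grid or solver-based search rather than a closed-form formula is the natural classroom approach.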

  4. [Dot1 and Set2 Histone Methylases Control the Spontaneous and UV-Induced Mutagenesis Levels in the Saccharomyces cerevisiae Yeasts].

    Science.gov (United States)

    Kozhina, T N; Evstiukhina, T A; Peshekhonov, V T; Chernenkov, A Yu; Korolev, V G

    2016-03-01

    In the Saccharomyces cerevisiae yeasts, the DOT1 gene product provides methylation of lysine 79 (K79) of histone H3 and the SET2 gene product provides the methylation of lysine 36 (K36) of the same histone. We determined that the dot1 and set2 mutants suppress the UV-induced mutagenesis to an equally high degree. The dot1 mutant demonstrated statistically higher sensitivity to low doses of MMC than the wild-type strain. The analysis of the interaction between the dot1 and rad52 mutations revealed a considerable level of spontaneous cell death in the double dot1 rad52 mutant. We observed strong suppression of the gamma-induced mutagenesis in the set2 mutant. We determined that the dot1 and set2 mutations decrease the spontaneous mutagenesis rate in both single and double mutants. The epistatic interaction between the dot1 and set2 mutations and the almost similar sensitivity of the corresponding mutants to different types of DNA damage allow one to conclude that both genes are involved in the control of the same DNA repair pathways, the homologous-recombination-based and the postreplicative DNA repair.

  5. On the modeling of bubble evolution and transport using coupled level-set/CFD method

    International Nuclear Information System (INIS)

    Bartlomiej Wierzbicki; Steven P Antal; Michael Z Podowski

    2005-01-01

    Full text of publication follows: The ability to predict the shape of gas/liquid/solid interfaces is important for various multiphase flow and heat transfer applications. Specific issues of interest to nuclear reactor thermal-hydraulics include the evolution of the shape of bubbles attached to solid surfaces during nucleation, bubble surface interactions in complex geometries, etc. Additional problems, making the overall task even more complicated, are associated with the effect of material properties that may be significantly altered by the addition of minute amounts of impurities, such as surfactants or nano-particles. The present paper is concerned with the development of an innovative approach to model the time-dependent shape of gas/liquid interfaces in the presence of solid walls. The proposed approach combines a modified level-set method with an advanced CFD code, NPHASE. The coupled numerical solver can be used to simulate the evolution of gas/liquid interfaces in two-phase flows for a variety of geometries and flow conditions, from individual bubbles to free surfaces (stratified flows). The issues discussed in the full paper will include: a description of the novel aspects of the proposed level-set-based method, an overview of the NPHASE code modeling framework, and a description of the coupling method between these two elements of the overall model. Particular attention will be given to the consistency and completeness of the model formulation for the interfacial phenomena near the liquid/gas/solid triple line, and to the impact of the proposed numerical approach on the accuracy and consistency of predictions. The accuracy will be measured in terms of both the calculated shape of the interfaces and the gas and liquid velocity fields around the interfaces and in the entire computational domain. The results of model testing and validation will also be shown in the full paper. 
The situations analyzed will include: bubbles of different sizes and varying

  6. Analysis of carotid artery plaque and wall boundaries on CT images by using a semi-automatic method based on level set model

    International Nuclear Information System (INIS)

    Saba, Luca; Sannia, Stefano; Ledda, Giuseppe; Gao, Hao; Acharya, U.R.; Suri, Jasjit S.

    2012-01-01

    The purpose of this study was to evaluate the potential of a semi-automated technique for the detection and measurement of carotid artery plaque. Twenty-two consecutive patients (18 males, 4 females; mean age 62 years) examined with MDCTA from January 2011 to March 2011 were included in this retrospective study. Carotid arteries were examined with a 16-multi-detector-row CT system, and for each patient the most diseased carotid was selected. In the first phase, the carotid plaque was identified and one experienced radiologist manually traced the inner and outer boundaries by using the polyline and radial distance methods (PDM and RDM, respectively). In the second phase, the carotid inner and outer boundaries were traced with an automated algorithm: the level set method (LSM). Data were compared by using Pearson rho correlation, Bland-Altman analysis, and regression. A total of 715 slices were analyzed. The mean thickness of the plaque using the reference PDM was 1.86 mm whereas using the LSM-PDM it was 1.96 mm; using the reference RDM it was 2.06 mm whereas using the LSM-RDM it was 2.03 mm. The correlation values between the references, the LSM, the PDM and the RDM were 0.8428, 0.9921, 0.745 and 0.6425. Bland-Altman analysis demonstrated very good agreement, in particular with the RDM method. The results of our study indicate that the LSM can automatically measure the thickness of the plaque and that the best results are obtained with the RDM. Our results suggest that advanced computer-based algorithms can identify and trace the plaque boundaries like an experienced human reader. (orig.)
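The agreement statistics used in the study above (Pearson correlation plus Bland-Altman bias and limits of agreement between a manual reference and an automated reader) are straightforward to compute. The paired thickness values below are invented for illustration, not the study's measurements:

```python
from math import sqrt

# Hypothetical paired plaque-thickness measurements (mm): manual reference
# vs. an automated reader -- illustrative values only.
manual    = [1.8, 2.0, 1.6, 2.2, 1.9, 2.1]
automated = [1.9, 2.1, 1.5, 2.3, 1.9, 2.2]

def pearson(xs, ys):
    """Pearson correlation coefficient of two paired samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def bland_altman(xs, ys):
    """Bias and 95% limits of agreement (mean difference +/- 1.96 sd)."""
    diffs = [x - y for x, y in zip(xs, ys)]
    n = len(diffs)
    bias = sum(diffs) / n
    sd = sqrt(sum((d - bias) ** 2 for d in diffs) / (n - 1))
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

print(pearson(manual, automated))
print(bland_altman(manual, automated))
```

A high correlation alone does not establish agreement, which is why Bland-Altman reports the systematic bias and the interval within which most manual-automated differences fall.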

  7. Analysis of carotid artery plaque and wall boundaries on CT images by using a semi-automatic method based on level set model

    Energy Technology Data Exchange (ETDEWEB)

    Saba, Luca; Sannia, Stefano; Ledda, Giuseppe [University of Cagliari - Azienda Ospedaliero Universitaria di Cagliari, Department of Radiology, Monserrato, Cagliari (Italy); Gao, Hao [University of Strathclyde, Signal Processing Centre for Excellence in Signal and Image Processing, Department of Electronic and Electrical Engineering, Glasgow (United Kingdom); Acharya, U.R. [Ngee Ann Polytechnic University, Department of Electronics and Computer Engineering, Clementi (Singapore); Suri, Jasjit S. [Biomedical Technologies Inc., Denver, CO (United States); Idaho State University (Aff.), Pocatello, ID (United States)

    2012-11-15

    The purpose of this study was to evaluate the potential of a semi-automated technique for the detection and measurement of carotid artery plaque. Twenty-two consecutive patients (18 males, 4 females; mean age 62 years) examined with MDCTA from January 2011 to March 2011 were included in this retrospective study. Carotid arteries were examined with a 16-multi-detector-row CT system, and for each patient the most diseased carotid was selected. In the first phase, the carotid plaque was identified and one experienced radiologist manually traced the inner and outer boundaries by using the polyline and radial distance methods (PDM and RDM, respectively). In the second phase, the carotid inner and outer boundaries were traced with an automated algorithm: the level set method (LSM). Data were compared by using Pearson rho correlation, Bland-Altman analysis, and regression. A total of 715 slices were analyzed. The mean thickness of the plaque using the reference PDM was 1.86 mm whereas using the LSM-PDM it was 1.96 mm; using the reference RDM it was 2.06 mm whereas using the LSM-RDM it was 2.03 mm. The correlation values between the references, the LSM, the PDM and the RDM were 0.8428, 0.9921, 0.745 and 0.6425. Bland-Altman analysis demonstrated very good agreement, in particular with the RDM method. The results of our study indicate that the LSM can automatically measure the thickness of the plaque and that the best results are obtained with the RDM. Our results suggest that advanced computer-based algorithms can identify and trace the plaque boundaries like an experienced human reader. (orig.)

  8. Bisimulation as congruence in the behavioral setting

    NARCIS (Netherlands)

    Julius, A.A.; Schaft, A.J. van der

    2005-01-01

We cast the notion of bisimulation in Willems' behavioral setting. We show that in this setting bisimulation is also a congruence, as is known from the field of concurrent processes. That bisimulation is a congruence means that if A and A' are bisimilar systems, then A || B and A' || B are also bisimilar.

  9. Intersubjective meaning making

    DEFF Research Database (Denmark)

    Davidsen, Jacob

of single-touch screen interaction among 8-9 year-old children presented here, shows that while the constraints of single-touch screens do not support equality of interaction at the verbal and physical level, there seems to be an intersubjective learning outcome. More precisely, the constraints...... of single-touch screens offer support for intersubjective meaning making through their ability to constrain the interaction. By presenting a short embodied interaction analysis of 22 seconds of collaboration, I illustrate how an embodied interaction perspective on intersubjective meaning making can tell...... a different story about touch-screen supported collaborative learning....

  10. Discrete time and continuous time dynamic mean-variance analysis

    OpenAIRE

    Reiss, Ariane

    1999-01-01

    Contrary to static mean-variance analysis, very few papers have dealt with dynamic mean-variance analysis. Here, the mean-variance efficient self-financing portfolio strategy is derived for n risky assets in discrete and continuous time. In the discrete setting, the resulting portfolio is mean-variance efficient in a dynamic sense. It is shown that the optimal strategy for n risky assets may be dominated if the expected terminal wealth is constrained to exactly attain a certain goal instead o...

  11. Control mechanisms for stochastic biochemical systems via computation of reachable sets.

    Science.gov (United States)

    Lakatos, Eszter; Stumpf, Michael P H

    2017-08-01

    Controlling the behaviour of cells by rationally guiding molecular processes is an overarching aim of much of synthetic biology. Molecular processes, however, are notoriously noisy and frequently nonlinear. We present an approach to studying the impact of control measures on motifs of molecular interactions that addresses the problems faced in many biological systems: stochasticity, parameter uncertainty and nonlinearity. We show that our reachability analysis formalism can describe the potential behaviour of biological (naturally evolved as well as engineered) systems, and provides a set of bounds on their dynamics at the level of population statistics: for example, we can obtain the possible ranges of means and variances of mRNA and protein expression levels, even in the presence of uncertainty about model parameters.

  12. Automated volume analysis of head and neck lesions on CT scans using 3D level set segmentation

    International Nuclear Information System (INIS)

    Street, Ethan; Hadjiiski, Lubomir; Sahiner, Berkman; Gujar, Sachin; Ibrahim, Mohannad; Mukherji, Suresh K.; Chan, Heang-Ping

    2007-01-01

The authors have developed a semiautomatic system for segmentation of a diverse set of lesions in head and neck CT scans. The system takes as input an approximate bounding box, and uses a multistage level set to perform the final segmentation. A data set consisting of 69 lesions marked on 33 scans from 23 patients was used to evaluate the performance of the system. The contours from automatic segmentation were compared to both 2D and 3D gold standard contours manually drawn by three experienced radiologists. Three performance metric measures were used for the comparison. In addition, a radiologist provided quality ratings on a 1 to 10 scale for all of the automatic segmentations. For this pilot study, the authors observed that the differences between the automatic and gold standard contours were larger than the interobserver differences. However, the system performed comparably to the radiologists, achieving an average area intersection ratio of 85.4% compared to an average of 91.2% between two radiologists. The average absolute area error was 21.1% compared to 10.8%, and the average 2D distance was 1.38 mm compared to 0.84 mm between the radiologists. In addition, the quality rating data showed that, despite the very lax assumptions made on the lesion characteristics in designing the system, the automatic contours approximated many of the lesions very well.

  13. Two-phase electro-hydrodynamic flow modeling by a conservative level set model.

    Science.gov (United States)

    Lin, Yuan

    2013-03-01

The principles of electro-hydrodynamic (EHD) flow have been known for more than a century and have been adopted for various industrial applications, for example, fluid mixing and demixing. Analytical solutions of such EHD flow exist only in a limited number of scenarios, for example, predicting a small deformation of a single droplet in a uniform electric field. Numerical modeling of such phenomena can provide significant insights into EHD multiphase flows. During the last decade, many numerical results have been reported that provide novel and useful tools for studying multiphase EHD flow. Based on a conservative level set method, the proposed model is able to simulate large deformations of a droplet by a steady electric field, beyond the regime of theoretical prediction. The model is validated for both leaky dielectrics and perfect dielectrics, and is found to be in excellent agreement with existing analytical solutions and numerical studies in the literature. Furthermore, simulations of the deformation of a water droplet in decyl alcohol in a steady electric field match published experimental data better than the theoretical prediction for large deformations. Therefore the proposed model can serve as a practical and accurate tool for simulating two-phase EHD flow. © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  14. Portfolio optimization with mean-variance model

    Science.gov (United States)

    Hoe, Lam Weng; Siew, Lam Weng

    2016-06-01

Investors wish to achieve a target rate of return at the minimum level of risk in their investments. Portfolio optimization is an investment strategy that can be used to minimize portfolio risk while achieving the target rate of return. The mean-variance model has been proposed for portfolio optimization; it is an optimization model that aims to minimize the portfolio risk, measured as the portfolio variance. The objective of this study is to construct the optimal portfolio using the mean-variance model. The data of this study consist of weekly returns of 20 component stocks of the FTSE Bursa Malaysia Kuala Lumpur Composite Index (FBMKLCI). The results of this study show that the optimal portfolio composition differs across the stocks. Moreover, investors can attain the target return at the minimum level of risk with the constructed optimal mean-variance portfolio.
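The mean-variance construction summarized above (minimize portfolio variance subject to a target return and full investment) admits a closed form via the Lagrangian linear system; a sketch, where the return statistics are illustrative placeholders rather than the FBMKLCI data:

```python
import numpy as np

def mean_variance_weights(mu, cov, target):
    """Minimum-variance weights achieving a target expected return,
    fully invested and with short sales allowed.  Solves the
    first-order (Lagrangian) system:
        2*cov @ w + l1*mu + l2*1 = 0,  mu @ w = target,  sum(w) = 1."""
    n = len(mu)
    ones = np.ones(n)
    A = np.zeros((n + 2, n + 2))
    A[:n, :n] = 2 * cov
    A[:n, n], A[n, :n] = mu, mu
    A[:n, n + 1], A[n + 1, :n] = ones, ones
    b = np.zeros(n + 2)
    b[n], b[n + 1] = target, 1.0
    return np.linalg.solve(A, b)[:n]   # drop the two multipliers

# Toy weekly-return statistics for three assets (illustrative only)
mu = np.array([0.002, 0.004, 0.003])
cov = np.array([[0.010, 0.002, 0.001],
                [0.002, 0.015, 0.003],
                [0.001, 0.003, 0.012]])
w = mean_variance_weights(mu, cov, target=0.003)
```

The resulting weights hit the target return exactly and sum to one; the portfolio variance `w @ cov @ w` is the minimum attainable under those two constraints.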

  15. Portfolio Selection Using Level Crossing Analysis

    Science.gov (United States)

    Bolgorian, Meysam; Shirazi, A. H.; Jafari, G. R.

Asset allocation is one of the most important and also most challenging issues in finance. In this paper, using level crossing analysis, we introduce a new approach to portfolio selection. We introduce a portfolio index that is obtained by minimizing the waiting time to receive known return and risk values. By the waiting time, we mean the average time until a specified level is observed. The advantage of this approach is that investors are able to set their goals based on the expected return while knowing the average waiting time and the risk value at the same time. As an example, we use our model to form a portfolio of stocks on the Tehran Stock Exchange (TSE).
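A minimal sketch of the waiting-time idea, assuming the average waiting time is estimated as the reciprocal of the observed up-crossing frequency of a level (the series below is a simulated random walk, not TSE data):

```python
import numpy as np

def avg_waiting_time(x, level):
    """Average waiting time (in sample steps) to observe `level`,
    estimated as the series length divided by the number of
    up-crossings of that level."""
    x = np.asarray(x, float)
    # an up-crossing: the series is below the level, then at or above it
    up = np.sum((x[:-1] < level) & (x[1:] >= level))
    if up == 0:
        return np.inf          # the level is never reached
    return len(x) / up

# Illustrative index series: a simulated random walk, not TSE data
rng = np.random.default_rng(0)
series = np.cumsum(rng.normal(0.0, 1.0, 10_000))
t = avg_waiting_time(series, level=5.0)
```

Higher levels are crossed less often, so their average waiting time grows; ranking candidate portfolios by this quantity at fixed return and risk levels is the gist of the approach described above.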

  16. Improving district level health planning and priority setting in Tanzania through implementing accountability for reasonableness framework: Perceptions of stakeholders.

    Science.gov (United States)

    Maluka, Stephen; Kamuzora, Peter; San Sebastián, Miguel; Byskov, Jens; Ndawi, Benedict; Hurtig, Anna-Karin

    2010-12-01

In 2006, researchers and decision-makers launched a five-year project - Response to Accountable Priority Setting for Trust in Health Systems (REACT) - to improve planning and priority-setting through implementing the Accountability for Reasonableness framework in Mbarali District, Tanzania. The objective of this paper is to explore the acceptability of Accountability for Reasonableness from the perspectives of the Council Health Management Team, local government officials, health workforce and members of user boards and committees. Individual interviews were carried out with different categories of actors and stakeholders in the district. The interview guide consisted of a series of questions, asking respondents to describe their perceptions regarding each condition of the Accountability for Reasonableness framework in terms of priority setting. Interviews were analysed using thematic framework analysis. Documentary data were used to support, verify and highlight the key issues that emerged. Almost all stakeholders viewed Accountability for Reasonableness as an important and feasible approach for improving priority-setting and health service delivery in their context. However, a few aspects of Accountability for Reasonableness were seen as too difficult to implement given the socio-political conditions and traditions in Tanzania. Respondents mentioned: budget ceilings and guidelines, low level of public awareness, unreliable and untimely funding, as well as the limited capacity of the district to generate local resources as the major contextual factors that hampered the full implementation of the framework in their context. This study was one of the first assessments of the applicability of Accountability for Reasonableness in health care priority-setting in Tanzania. The analysis, overall, suggests that the Accountability for Reasonableness framework could be an important tool for improving priority-setting processes in the contexts of resource-poor settings.

  17. Why a Train Set Helps Participants Co-Construct Meaning in Business Model Innovation

    DEFF Research Database (Denmark)

    Beuthel, Maria Rosa; Buur, Jacob

    In this position paper we show how participants in an innovation workshop employ tangible material – a toy train set – to co-construct understandings of a new business model. In multidisciplinary teams the process of developing new terms and concepts together is crucial for work to progress. Every...... to understand how they construct a concept. We observe that the final result of the workshop is indeed innovative and is co-constructed by all group members. We discuss why the toy train works: It keeps both hands and mind busy, it allows silent participation, and it expands the vocabulary of the discussion....

  18. Feature-level domain adaptation

    DEFF Research Database (Denmark)

    Kouw, Wouter M.; Van Der Maaten, Laurens J P; Krijthe, Jesse H.

    2016-01-01

Domain adaptation is the supervised learning setting in which the training and test data are sampled from different distributions: training data is sampled from a source domain, whilst test data is sampled from a target domain. This paper proposes and studies an approach, called feature-level domain adaptation (flda), that models the dependence between the two domains by means of a feature-level transfer model that is trained to describe the transfer from source to target domain. Subsequently, we train a domain-adapted classifier by minimizing the expected loss under the resulting transfer...... modeled via a dropout distribution, which allows the classifier to adapt to differences in the marginal probability of features in the source and the target domain. Our experiments on several real-world problems show that flda performs on par with state-of-the-art domain-adaptation techniques....

  19. Approximate joint diagonalization and geometric mean of symmetric positive definite matrices.

    Science.gov (United States)

    Congedo, Marco; Afsari, Bijan; Barachant, Alexandre; Moakher, Maher

    2014-01-01

    We explore the connection between two problems that have arisen independently in the signal processing and related fields: the estimation of the geometric mean of a set of symmetric positive definite (SPD) matrices and their approximate joint diagonalization (AJD). Today there is a considerable interest in estimating the geometric mean of a SPD matrix set in the manifold of SPD matrices endowed with the Fisher information metric. The resulting mean has several important invariance properties and has proven very useful in diverse engineering applications such as biomedical and image data processing. While for two SPD matrices the mean has an algebraic closed form solution, for a set of more than two SPD matrices it can only be estimated by iterative algorithms. However, none of the existing iterative algorithms feature at the same time fast convergence, low computational complexity per iteration and guarantee of convergence. For this reason, recently other definitions of geometric mean based on symmetric divergence measures, such as the Bhattacharyya divergence, have been considered. The resulting means, although possibly useful in practice, do not satisfy all desirable invariance properties. In this paper we consider geometric means of covariance matrices estimated on high-dimensional time-series, assuming that the data is generated according to an instantaneous mixing model, which is very common in signal processing. We show that in these circumstances we can approximate the Fisher information geometric mean by employing an efficient AJD algorithm. Our approximation is in general much closer to the Fisher information geometric mean as compared to its competitors and verifies many invariance properties. Furthermore, convergence is guaranteed, the computational complexity is low and the convergence rate is quadratic. The accuracy of this new geometric mean approximation is demonstrated by means of simulations.
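For two SPD matrices, the Fisher-metric geometric mean mentioned above has the algebraic closed form A # B = A^(1/2) (A^(-1/2) B A^(-1/2))^(1/2) A^(1/2); a sketch follows, with illustrative matrices (for more than two matrices only iterative schemes, such as the paper's AJD-based approximation, are available):

```python
import numpy as np

def sqrtm_spd(m):
    """Matrix square root of a symmetric positive definite matrix
    via its eigendecomposition."""
    vals, vecs = np.linalg.eigh(m)
    return (vecs * np.sqrt(vals)) @ vecs.T

def geometric_mean_2(a, b):
    """Closed-form Fisher-metric geometric mean of two SPD matrices:
    A # B = A^(1/2) (A^(-1/2) B A^(-1/2))^(1/2) A^(1/2)."""
    a_half = sqrtm_spd(a)
    a_ihalf = np.linalg.inv(a_half)
    return a_half @ sqrtm_spd(a_ihalf @ b @ a_ihalf) @ a_half

# Illustrative commuting SPD matrices; the mean reduces to
# element-wise geometric means of the eigenvalues in this case
a = np.array([[2.0, 0.0], [0.0, 8.0]])
b = np.array([[1.0, 0.0], [0.0, 2.0]])
g = geometric_mean_2(a, b)
```

Because `a` and `b` here commute, `g` is simply diag(sqrt(2*1), sqrt(8*2)); in the general non-commuting case the full congruence formula is required, which is where the iterative algorithms discussed in the abstract come in.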

  20. Approximate joint diagonalization and geometric mean of symmetric positive definite matrices.

    Directory of Open Access Journals (Sweden)

    Marco Congedo

Full Text Available We explore the connection between two problems that have arisen independently in the signal processing and related fields: the estimation of the geometric mean of a set of symmetric positive definite (SPD) matrices and their approximate joint diagonalization (AJD). Today there is a considerable interest in estimating the geometric mean of a SPD matrix set in the manifold of SPD matrices endowed with the Fisher information metric. The resulting mean has several important invariance properties and has proven very useful in diverse engineering applications such as biomedical and image data processing. While for two SPD matrices the mean has an algebraic closed form solution, for a set of more than two SPD matrices it can only be estimated by iterative algorithms. However, none of the existing iterative algorithms feature at the same time fast convergence, low computational complexity per iteration and guarantee of convergence. For this reason, recently other definitions of geometric mean based on symmetric divergence measures, such as the Bhattacharyya divergence, have been considered. The resulting means, although possibly useful in practice, do not satisfy all desirable invariance properties. In this paper we consider geometric means of covariance matrices estimated on high-dimensional time-series, assuming that the data is generated according to an instantaneous mixing model, which is very common in signal processing. We show that in these circumstances we can approximate the Fisher information geometric mean by employing an efficient AJD algorithm. Our approximation is in general much closer to the Fisher information geometric mean as compared to its competitors and verifies many invariance properties. Furthermore, convergence is guaranteed, the computational complexity is low and the convergence rate is quadratic. The accuracy of this new geometric mean approximation is demonstrated by means of simulations.

  1. Not Worth the Extra Cost? Diluting the Differentiation Ability of Highly Rated Products by Altering the Meaning of Rating Scale Levels

    DEFF Research Database (Denmark)

    Meissner, Martin; Heinzle, Stefanie Lena; Decker, Reinhold

    2013-01-01

    Over the last decade, the use of rating scales has grown in popularity in various fields, including customer online reviews and energy labels. Rating scales convey important information on attributes of products or services that consumers evaluate in their purchase decisions. By applying...... characteristics. In addition, two choice-based conjoint studies examine whether the way consumers make their choices among products can be influenced by changing the labeling of rating scale levels. The results show that a manipulation of the meaning of rating scale levels diminishes both the importance...

  2. Scalable Algorithms for Clustering Large Geospatiotemporal Data Sets on Manycore Architectures

    Science.gov (United States)

    Mills, R. T.; Hoffman, F. M.; Kumar, J.; Sreepathi, S.; Sripathi, V.

    2016-12-01

The increasing availability of high-resolution geospatiotemporal data sets from sources such as observatory networks, remote sensing platforms, and computational Earth system models has opened new possibilities for knowledge discovery using data sets fused from disparate sources. Traditional algorithms and computing platforms are impractical for the analysis and synthesis of data sets of this size; however, new algorithmic approaches that can effectively utilize the complex memory hierarchies and the extremely high levels of available parallelism in state-of-the-art high-performance computing platforms can enable such analysis. We describe a massively parallel implementation of accelerated k-means clustering and some optimizations to boost computational intensity and utilization of wide SIMD lanes on state-of-the-art multi- and manycore processors, including the second-generation Intel Xeon Phi ("Knights Landing") processor based on the Intel Many Integrated Core (MIC) architecture, which includes several new features, including an on-package high-bandwidth memory. We also analyze the code in the context of a few practical applications to the analysis of climatic and remotely-sensed vegetation phenology data sets, and speculate on some of the new applications that such scalable analysis methods may enable.

  3. First-Class Object Sets

    DEFF Research Database (Denmark)

    Ernst, Erik

    2009-01-01

    Typically, an object is a monolithic entity with a fixed interface.  To increase flexibility in this area, this paper presents first-class object sets as a language construct.  An object set offers an interface which is a disjoint union of the interfaces of its member objects.  It may also be use...... to a mainstream virtual machine in order to improve on the support for family polymorphism.  The approach is made precise by means of a small calculus, and the soundness of its type system has been shown by a mechanically checked proof in Coq....

  4. Numerical simulation of overflow at vertical weirs using a hybrid level set/VOF method

    Science.gov (United States)

    Lv, Xin; Zou, Qingping; Reeve, Dominic

    2011-10-01

This paper presents the application of a newly developed free surface flow model to the practical, yet challenging, problem of overflow at weirs. Since the model takes advantage of the strengths of both the level set and volume of fluid methods and solves the Navier-Stokes equations on an unstructured mesh, it is capable of resolving the time evolution of very complex vortical motions, air entrainment and pressure variations due to violent deformations following overflow of the weir crest. In the present study, two different types of vertical weir, namely broad-crested and sharp-crested, are considered for validation purposes. The calculated overflow parameters such as pressure head distributions, velocity distributions, and water surface profiles are compared against experimental data as well as numerical results available in the literature. A very good quantitative agreement has been obtained. The numerical model thus offers a good alternative to traditional experimental methods in the study of weir problems.

  5. Overcoming Barriers in Unhealthy Settings

    Directory of Open Access Journals (Sweden)

    Michael K. Lemke

    2016-03-01

Full Text Available We investigated the phenomenon of sustained health-supportive behaviors among long-haul commercial truck drivers, who belong to an occupational segment with extreme health disparities. With a focus on setting-level factors, this study sought to discover ways in which individuals exhibit resiliency while immersed in endemically obesogenic environments, as well as understand setting-level barriers to engaging in health-supportive behaviors. Using a transcendental phenomenological research design, 12 long-haul truck drivers who met screening criteria were selected using purposeful maximum sampling. Seven broad themes were identified: access to health resources, barriers to health behaviors, recommended alternative settings, constituents of health behavior, motivation for health behaviors, attitude toward health behaviors, and trucking culture. We suggest applying ecological theories of health behavior and settings approaches to improve driver health. We also propose the Integrative and Dynamic Healthy Commercial Driving (IDHCD) paradigm, grounded in complexity science, as a new theoretical framework for improving driver health outcomes.

  6. Sea level change

    Digital Repository Service at National Institute of Oceanography (India)

    Church, J.A.; Clark, P.U.; Cazenave, A.; Gregory, J.M.; Jevrejeva, S.; Levermann, A.; Merrifield, M.A.; Milne, G.A.; Nerem, R.S.; Nunn, P.D.; Payne, A.J.; Pfeffer, W.T.; Stammer, D.; Unnikrishnan, A.S.

    This chapter considers changes in global mean sea level, regional sea level, sea level extremes, and waves. Confidence in projections of global mean sea level rise has increased since the Fourth Assessment Report (AR4) because of the improved...

  7. Abstract sets and finite ordinals an introduction to the study of set theory

    CERN Document Server

    Keene, G B

    2007-01-01

This text unites the logical and philosophical aspects of set theory in a manner intelligible both to mathematicians without training in formal logic and to logicians without a mathematical background. It combines an elementary level of treatment with the highest possible degree of logical rigor and precision. Starting with an explanation of all the basic logical terms and related operations, the text progresses through a stage-by-stage elaboration that proves the fundamental theorems of finite sets. It focuses on the Bernays theory of finite classes and finite sets, exploring the system's basi

  8. Multi-level learning: improving the prediction of protein, domain and residue interactions by allowing information flow between levels

    Directory of Open Access Journals (Sweden)

    McDermott Drew

    2009-08-01

Full Text Available Abstract Background Proteins interact through specific binding interfaces that contain many residues in domains. Protein interactions thus occur on three different levels of a concept hierarchy: whole-proteins, domains, and residues. Each level offers a distinct and complementary set of features for computationally predicting interactions, including functional genomic features of whole proteins, evolutionary features of domain families and physical-chemical features of individual residues. The predictions at each level could benefit from using the features at all three levels. However, this is not trivial, as the features are provided at different granularity. Results To link up the predictions at the three levels, we propose a multi-level machine-learning framework that allows for explicit information flow between the levels. We demonstrate, using representative yeast interaction networks, that our algorithm is able to utilize complementary feature sets to make more accurate predictions at the three levels than when the three problems are approached independently. To facilitate application of our multi-level learning framework, we discuss three key aspects of multi-level learning and the corresponding design choices that we have made in the implementation of a concrete learning algorithm. (1) Architecture of information flow: we show the greater flexibility of bidirectional flow over independent levels and unidirectional flow. (2) Coupling mechanism of the different levels: we show how this can be accomplished via augmenting the training sets at each level, and discuss the prevention of error propagation between different levels by means of soft coupling. (3) Sparseness of data: we show that the multi-level framework compounds data sparsity issues, and discuss how this can be dealt with by building local models in information-rich parts of the data. Our proof-of-concept learning algorithm demonstrates the advantage of combining levels, and opens up

  9. Goal-Setting in Youth Football. Are Coaches Missing an Opportunity?

    Science.gov (United States)

    Maitland, Alison; Gervis, Misia

    2010-01-01

Background: Goal-setting is not always the simple motivational technique it appears to be when used in an applied sport setting, especially in relation to the meaning of achievement in competitive sport. Goal-setting needs to be examined in a broader context than goal-setting theory, such as that provided by social cognitive theories of motivation. In football, the…

  10. Reducing the time requirement of k-means algorithm.

    Science.gov (United States)

    Osamor, Victor Chukwudi; Adebiyi, Ezekiel Femi; Oyelade, Jelilli Olarenwaju; Doumbia, Seydou

    2012-01-01

Traditional k-means and most k-means variants are still computationally expensive for large datasets, such as microarray data, which have many data points of large dimension size d. In k-means clustering, we are given a set of n data points in d-dimensional space R(d) and an integer k. The problem is to determine a set of k points in R(d), called centers, so as to minimize the mean squared distance from each data point to its nearest center. In this work, we develop a novel k-means algorithm, which is simple but more efficient than both the traditional k-means and the recent enhanced k-means. Our new algorithm is based on the recently established relationship between principal component analysis and k-means clustering. We provide a correctness proof for this algorithm. Results obtained from testing the algorithm on three biological data sets and six non-biological data sets (three of them real, the other three simulated) also indicate that our algorithm is empirically faster than other known k-means algorithms. We assessed the quality of our algorithm's clusters against clusters of known structure using the Hubert-Arabie Adjusted Rand index (ARI(HA)). We found that when k is close to d, the quality is good (ARI(HA)>0.8), and when k is not close to d, the quality of our new k-means algorithm is excellent (ARI(HA)>0.9). In this paper, the emphasis is on reducing the time requirement of the k-means algorithm and on its application to microarray data, motivated by the desire to create a tool for clustering and malaria research. However, the new clustering algorithm can be used for other clustering needs, as long as an appropriate measure of distance between the centroids and the members is used. This has been demonstrated in this work on six non-biological data sets.
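The clustering problem stated above (choose k centers minimizing the mean squared distance to the nearest center) is classically attacked with Lloyd's iteration. The sketch below shows that generic baseline, not the authors' PCA-based variant; the toy data and the deterministic initialization are illustrative.

```python
import numpy as np

def kmeans(points, k, iters=100, init=None, seed=0):
    """Plain Lloyd's algorithm: alternately assign each point to its
    nearest center, then recompute each center as its cluster mean."""
    rng = np.random.default_rng(seed)
    centers = (points[rng.choice(len(points), k, replace=False)]
               if init is None else np.asarray(init, float))
    for _ in range(iters):
        # n x k matrix of point-to-center Euclidean distances
        d = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # empty clusters keep their old center
        new = np.array([points[labels == j].mean(axis=0)
                        if np.any(labels == j) else centers[j]
                        for j in range(k)])
        if np.allclose(new, centers):
            break
        centers = new
    return centers, labels

# Two well-separated toy clusters; seed one initial center in each
rng = np.random.default_rng(1)
pts = np.vstack([rng.normal(0.0, 0.1, (20, 2)),
                 rng.normal(5.0, 0.1, (20, 2))])
centers, labels = kmeans(pts, k=2, init=pts[[0, 20]])
```

Each iteration costs O(nkd) for the distance matrix, which is exactly the term the paper's PCA-based acceleration aims to reduce for high-dimensional microarray data.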

  11. Prediction on long-term mean and mean square pollutant concentrations in an urban atmosphere

    Energy Technology Data Exchange (ETDEWEB)

    Kumar, S; Lamb, R G; Seinfeld, J H

    1976-01-01

    The general problem of predicting long-term average (say yearly) pollutant concentrations in an urban atmosphere is formulated. The pollutant concentration can be viewed as a random process, the complete description of which requires knowledge of its probability density function, which is unknown. The mean concentration is the first moment of the concentration distribution, and at present there exist a number of models for predicting the long-term mean concentration of an inert pollutant. The second moment, or mean square concentration, indicates additional features of the distribution, such as the level of fluctuations about the mean. In the paper a model proposed by Lamb for the long-term mean concentration is reviewed, and a new model for prediction of the long-term mean square concentration of an inert air pollutant is derived. The properties and uses of the model are discussed, and the equations defining the model are presented in a form for direct application to an urban area.

12. Good fit versus meaning in life

    NARCIS (Netherlands)

    Muijnck, W. de

    2016-01-01

    Meaning in life is too important not to study systematically, but doing so is made difficult by conceptual indeterminacy. An approach to meaning that is promising but, indeed, conceptually vague is Jonathan Haidt's 'cross-level coherence' account. In order to remove the vagueness, I propose a

  13. Job satisfaction in nurses working in tertiary level health care settings of Islamabad, Pakistan.

    Science.gov (United States)

    Bahalkani, Habib Akhtar; Kumar, Ramesh; Lakho, Abdul Rehman; Mahar, Benazir; Mazhar, Syeda Batool; Majeed, Abdul

    2011-01-01

Job satisfaction greatly determines the productivity and efficiency of human resources for health. It literally means 'the extent to which health professionals like or dislike their jobs'. Job satisfaction is said to be linked with an employee's work environment, job responsibilities and powers, and time pressure among various health professionals. As such it affects employees' organizational commitment and consequently the quality of health services. The objective of this study was to determine the level of job satisfaction and the factors influencing it among nurses in a public sector hospital of Islamabad. A cross-sectional study with a self-administered structured questionnaire was conducted in the federal capital of Pakistan, Islamabad. The sample included 56 qualified nurses working in a tertiary care hospital. Overall, 86% of respondents were dissatisfied, with about 26% highly dissatisfied with their job. Poor work environment, poor fringe benefits, lack of dignity and responsibility given at the workplace, and time pressure were reasons for dissatisfaction. Respondents also reported low salaries, lack of training opportunities, inadequate supervision and lack of financial rewards. Our findings indicate a low level of overall satisfaction among workers in a public sector tertiary care health organization in Islamabad. Most of this dissatisfaction is caused by poor salaries, not being given due respect, poor work environment, unbalanced responsibilities with little overall control, time pressure, patient care burden and lack of opportunities for professional development.

  14. Modeling of Two-Phase Flow in Rough-Walled Fracture Using Level Set Method

    Directory of Open Access Journals (Sweden)

    Yunfeng Dai

    2017-01-01

Full Text Available To describe accurately the flow characteristic of fracture scale displacements of immiscible fluids, an incompressible two-phase (crude oil and water) flow model incorporating interfacial forces and nonzero contact angles is developed. The roughness of the two-dimensional synthetic rough-walled fractures is controlled with different fractal dimension parameters. Described by the Navier–Stokes equations, the moving interface between crude oil and water is tracked using the level set method. The method accounts for differences in densities and viscosities of crude oil and water and includes the effect of interfacial force. The wettability of the rough fracture wall is taken into account by defining the contact angle and slip length. The curve of the invasion pressure-water volume fraction is generated by modeling two-phase flow during a sudden drainage. The volume fraction of water restricted in the rough-walled fracture is calculated by integrating the water volume and dividing by the total cavity volume of the fracture while the two-phase flow is quasistatic. The effect of invasion pressure of crude oil, roughness of fracture wall, and wettability of the wall on two-phase flow in rough-walled fracture is evaluated.

  15. Wind and solar resource data sets

    DEFF Research Database (Denmark)

    Clifton, Andrew; Hodge, Bri-Mathias; Draxl, Caroline

    2017-01-01

    The range of resource data sets spans from static cartography showing the mean annual wind speed or solar irradiance across a region to high temporal and high spatial resolution products that provide detailed information at a potential wind or solar energy facility. These data sets are used to support continental-scale, national, or regional renewable energy development; facilitate prospecting by developers; and enable grid integration studies. This review first provides an introduction to wind and solar resource data sets, then provides an overview of the common methods used for their creation and validation. A brief history of wind and solar resource data sets is then presented, followed by areas for future research. For further resources related to this article, please visit the WIREs website.

  16. Mean platelet volume as a risk stratification tool in the Emergency Department for evaluating patients with ischaemic stroke and TIA

    International Nuclear Information System (INIS)

    Dogan, N.O.; Karakurt, K.

    2013-01-01

    Objective: To investigate the variations of mean platelet volume in patients with ischaemic cerebrovascular complaints, and to find out its diagnostic utility in an acute setting to help risk stratification in patients with ischaemic stroke and transient ischaemic attacks. Methods: The prospective cross-sectional study was conducted at the Gazi University Hospital, Ankara, Turkey, from November 2009 to June 2010. It comprised 143 consecutive patients with acute ischaemic stroke, 39 patients with transient ischaemic attacks and 60 healthy volunteers. SPSS 13 was used for statistical analysis, employing the t-test, one-way analysis of variance and correlation analysis. Statistical significance was accepted at p <0.05. Results: Mean platelet volume results were significantly higher in patients with cortical infarction and transient ischaemic attack compared to the control group (p <0.001 and p <0.002). A statistically significant increase was also noted in hospitalised patients when compared with patients discharged from the emergency department (p <0.036). A weak positive correlation was identified between the National Institutes of Health Stroke Scores and mean platelet volume levels (r=0.207; p <0.001). A significant relationship was identified between mean platelet volume levels and previous stroke (p <0.005). Conclusion: The measurement of mean platelet volume levels may provide useful diagnostic and prognostic information to emergency physicians caring for patients with transient ischaemic attack and ischaemic stroke. In patients with suspected neurological ischaemic symptoms, high levels may be considered an atherosclerotic risk factor. (author)

  17. Mean Platelet Volume, Vitamin D and C Reactive Protein Levels in Normal Weight Children with Primary Snoring and Obstructive Sleep Apnea Syndrome.

    Science.gov (United States)

    Zicari, Anna Maria; Occasi, Francesca; Di Mauro, Federica; Lollobrigida, Valeria; Di Fraia, Marco; Savastano, Vincenzo; Loffredo, Lorenzo; Nicita, Francesco; Spalice, Alberto; Duse, Marzia

    2016-01-01

    Studies on Mean Platelet Volume (MPV) in children with Sleep Disordered Breathing (SDB) report conflicting results, and the hypothesis of an intermittent hypoxemia leading to systemic inflammation is reaching consensus. Vitamin D exerts anti-inflammatory properties, and its deficiency has been supposed to play a role in sleep disorders. Interest is also rising in Primary Snoring (PS), since it is reasonable that even undetectable alterations of hypoxia might predispose to an increased production of inflammatory mediators. In this perspective, in a group of children affected by SDB, our aim was to investigate MPV, vitamin D and C Reactive Protein (CRP) levels, which had previously been evaluated separately in different studies focused only on Obstructive Sleep Apnea Syndrome (OSAS). We enrolled 137 children: 70 healthy controls (HC) and 67 affected by SDB undergoing a polysomnographic evaluation, 22 with a diagnosis of PS and 45 with a diagnosis of OSAS. All patients underwent routine biochemical evaluations including blood cell counts, CRP and vitamin D. Children affected by SDB had a mean age of 8.49±2.19 years and were prevalently males (23 females, 34%; 44 males, 66%). MPV levels were higher in OSAS and PS when compared to HC; platelet count (PLT) and CRP levels were higher, while vitamin D levels were lower, in children with SDB when compared to HC. MPV levels were inversely correlated with PLT (r = -0.54). MPV was elevated in children with PS as well as in children with OSAS, which supports the underlying inflammation and highlights the importance of an early diagnosis of this previously considered benign form of SDB.

  18. Stopping power, its meaning, and its general characteristics

    International Nuclear Information System (INIS)

    Inokuti, Mitio.

    1995-01-01

    This essay presents remarks on the meaning of stopping power and on its magnitude. More precisely, the first set of remarks concerns the connection of stopping power with elements of particle-transport theory, which describes particle transport and its consequences in full detail, including its stochastic aspects. The second set of remarks concerns the magnitude of the stopping power of a material and its relation to the material's electronic structure and other properties.

  19. Patients' satisfaction ratings and their desire for care improvement across oncology settings from France, Italy, Poland and Sweden.

    Science.gov (United States)

    Brédart, A; Robertson, C; Razavi, D; Batel-Copel, L; Larsson, G; Lichosik, D; Meyza, J; Schraub, S; von Essen, L; de Haes, J C J M

    2003-01-01

    There has been an increasing interest in patient satisfaction assessment across nations recently. This paper reports on a cross-cultural comparison of the comprehensive assessment of satisfaction with care (CASC) response scales. We investigated what proportion of patients wanted care improvement for the same level of satisfaction across samples from oncology settings in France, Italy, Poland and Sweden, and whether age, gender, education level and type of items affected the relationships found. The CASC addresses patients' satisfaction with the care received in oncology hospitals. Patients are invited to rate aspects of care and to mention, for each of these aspects, whether they would want improvement. One hundred and forty, 395, 186 and 133 consecutive patients were approached in oncology settings from France, Italy, Poland and Sweden, respectively. Across country settings, an increasing percentage of patients wanted care improvement for decreasing levels of satisfaction. However, in France a higher percentage of patients wanted care improvement for high-satisfaction ratings, whereas in Poland a lower percentage of patients wanted care improvement for low-satisfaction ratings. Age and education level had a similar effect across countries. Confronting levels of satisfaction with desire for care improvement appeared useful in comprehending the meaning of response choice labels for the CASC across oncology settings with different linguistic and cultural backgrounds. Linguistic or socio-cultural differences were suggested as explaining discrepancies between countries. Copyright 2002 John Wiley & Sons, Ltd.

  20. Groundwater level prediction of landslide based on classification and regression tree

    Directory of Open Access Journals (Sweden)

    Yannan Zhao

    2016-09-01

    Full Text Available According to groundwater level monitoring data of the Shuping landslide in the Three Gorges Reservoir area, and based on the response relationship between influential factors such as rainfall and reservoir level and the change of groundwater level, the influential factors of groundwater level were selected. A classification and regression tree (CART) model was then constructed from this subset of factors and used to predict the groundwater level. On verification, the predictive results for the test sample were consistent with the actually measured values: the mean absolute error and relative error are 0.28 m and 1.15%, respectively. For comparison, a support vector machine (SVM) model constructed using the same set of factors gives a mean absolute error and relative error of 1.53 m and 6.11%, respectively. This indicates that the CART model has not only better fitting and generalization ability, but also strong advantages in the analysis of landslide groundwater dynamic characteristics and the screening of important variables. It is an effective method for the prediction of groundwater level in landslides.
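
    The core operation of a CART regression tree is choosing, for one factor, the split threshold that minimizes the squared error of the two resulting leaf means. A minimal sketch of that single-split search follows; the reservoir level and groundwater values are invented for illustration (the study uses multiple factors and a full tree), and the mean absolute error is computed the same way as the study's 0.28 m metric.

```python
def best_split(x, y):
    """Return (threshold, left_mean, right_mean) minimizing total SSE
    of a one-level regression tree (a 'stump') on a single feature."""
    pairs = sorted(zip(x, y))
    xs = [p[0] for p in pairs]
    ys = [p[1] for p in pairs]
    best_sse, best = float("inf"), None
    for k in range(1, len(xs)):          # candidate splits between points
        left, right = ys[:k], ys[k:]
        ml, mr = sum(left) / len(left), sum(right) / len(right)
        sse = (sum((v - ml) ** 2 for v in left)
               + sum((v - mr) ** 2 for v in right))
        if sse < best_sse:
            best_sse = sse
            best = ((xs[k - 1] + xs[k]) / 2, ml, mr)
    return best

# Hypothetical data: groundwater level (m) responding to reservoir level (m)
reservoir = [145, 150, 155, 160, 165, 170, 175]
groundwater = [160.2, 160.8, 160.0, 175.1, 175.9, 174.5, 175.3]

threshold, left_mean, right_mean = best_split(reservoir, groundwater)

def predict(v):
    return left_mean if v <= threshold else right_mean

mae = sum(abs(predict(r) - g)
          for r, g in zip(reservoir, groundwater)) / len(reservoir)
```

    A full CART model simply applies this search recursively to each leaf, over every candidate factor.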

  1. Networks and knowledge creation for meaning

    DEFF Research Database (Denmark)

    Brink, Tove

    2016-01-01

    The research in this paper reveals how business networks can create organisational knowledge to provide meaning for enabling innovation and reduction of Levelized Cost Of Energy (LCOE). The research was conducted from June 2014 to May 2015, using a qualitative deductive research approach. The findings show that business networks provide an important frame for organising both central and distributed leadership to provide meaning on all levels of the network organisations. The business network consists of customers, suppliers and business partners in support of reciprocal learning, and owing to this organising process, highly valuable findings regarding innovation can be utilised both for improved central decisions and for improved application of knowledge locally once the meaning is established. Further research is needed to elaborate the business network frame and the organising process.

  2. Compilation of Water-Resources Data and Hydrogeologic Setting for Brunswick County, North Carolina, 1933-2000

    Science.gov (United States)

    Fine, Jason M.; Cunningham, William L.

    2001-01-01

    Water-resources data were compiled for Brunswick County, North Carolina, to describe the hydrologic conditions of the County. Hydrologic data collected by the U.S. Geological Survey as well as data collected by other governmental agencies and reviewed by the U.S. Geological Survey are presented. Data from four weather stations and two surface-water stations are summarized. Data also are presented for land use and land cover, soils, geology, hydrogeology, 12 continuously monitored ground-water wells, 73 periodically measured ground-water wells, and water-quality measurements from 39 ground-water wells. Mean monthly precipitation at the Longwood, Shallotte, Southport, and Wilmington Airport weather stations ranged from 2.19 to 7.94 inches for the periods of record, and mean monthly temperatures at the Longwood, Southport, and Wilmington Airport weather stations ranged from 43.4 to 80.1 degrees Fahrenheit for the periods of record. An evaluation of land-use and land-cover data for Brunswick County indicated that most of the County is either forested land (about 57 percent) or wetlands (about 29 percent). Cross sections are presented to illustrate the general hydrogeology beneath Brunswick County. Water-level data for Brunswick County indicate that water levels ranged from about 110 feet above mean sea level to about 22 feet below mean sea level. Chloride concentrations measured in aquifers in Brunswick County ranged from near 0 to 15,000 milligrams per liter. Chloride levels in the Black Creek and Cape Fear aquifers were measured at well above the potable limit for ground water of 250 milligrams per liter set by the U.S. Environmental Protection Agency for safe drinking water.

  3. Employability Competencies for Entry Level Occupations in Electronics. Part One: Basic Theory.

    Science.gov (United States)

    Werner, Claire

    This syllabus, which is the first of a two-volume set describing the basic competencies needed by entry-level workers in the field of electronics, deals with the basic theories of electricity and electronics. Competencies are organized according to the following skills areas: the meaning of electricity, how electricity works, resistors, Ohm's law,…

  4. The approximation of the normal distribution by means of chaotic expression

    International Nuclear Information System (INIS)

    Lawnik, M

    2014-01-01

    The approximation of the normal distribution by means of a chaotic expression is achieved using the Weierstrass function: for a certain set of parameters, the density of the derived recurrence yields a good approximation of the bell curve.

  5. Comparison of norovirus RNA levels in outbreak-related oysters with background environmental levels.

    Science.gov (United States)

    Lowther, James A; Gustar, Nicole E; Hartnell, Rachel E; Lees, David N

    2012-02-01

    Norovirus is the principal agent of bivalve-shellfish-associated gastroenteric illness worldwide. Numerous studies using PCR have demonstrated norovirus contamination in a significant proportion of both oyster and other bivalve shellfish production areas and ready-to-eat products. By comparison, the number of epidemiologically confirmed shellfish-associated outbreaks is relatively low. This suggests that factors other than the simple presence or absence of virus RNA are important contributors to the amount of illness reported. This study compares norovirus RNA levels in oyster samples strongly linked to norovirus or norovirus-type illness with the levels typically found in commercial production areas (non-outbreak-related samples). A statistically significant difference between norovirus levels in the two sets of samples was observed. The geometric mean of the levels in outbreak samples (1,048 copies per g) was almost one order of magnitude higher than that for positive non-outbreak-related samples (121 copies per g). Further, while none of the outbreak-related samples contained fewer than 152 copies per g, the majority of positive results for non-outbreak-related samples were below this level. These observations support the concept of a dose-response for norovirus RNA levels in shellfish and could help inform the establishment of threshold criteria for risk management.
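
    The comparison above rests on geometric means, which are the natural summary for virus copy numbers spanning orders of magnitude. A short sketch with invented copies-per-gram values (not the study's data) shows how the fold difference and its log10 gap are computed:

```python
from math import log10
from statistics import geometric_mean

# Hypothetical copies-per-gram measurements for two groups of samples
outbreak = [420, 860, 1500, 2300, 640]     # assumed values, for illustration
background = [35, 90, 150, 210, 480]

gm_out = geometric_mean(outbreak)          # geometric, not arithmetic, mean
gm_bg = geometric_mean(background)

fold = gm_out / gm_bg                      # fold difference between groups
log_gap = log10(gm_out) - log10(gm_bg)     # ~1.0 would be one order of magnitude
```

    With the study's reported geometric means (1,048 vs 121 copies per g) the same calculation gives a fold difference of about 8.7, i.e. "almost one order of magnitude".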

  6. Effect of culture levels, ultrafiltered retentate addition, total solid levels and heat treatments on quality improvement of buffalo milk plain set yoghurt.

    Science.gov (United States)

    Yadav, Vijesh; Gupta, Vijay Kumar; Meena, Ganga Sahay

    2018-05-01

    We studied the effect of culture level (2, 2.5 and 3%), ultrafiltered (UF) retentate addition (0, 11 and 18%), total milk solids (13, 13.50 and 14%) and heat treatment (80 and 85 °C for 30 min) on the change in pH and titratable acidity (TA), sensory scores and rheological parameters of yoghurt. With a 3% culture level, the required TA (0.90% lactic acid) was achieved in a minimum incubation of 6 h. With an increase in UF retentate addition, a highly significant decrease in overall acceptability, body and texture, and colour and appearance scores was observed, together with a highly significant increase in the rheological parameters of the yoghurt samples. Yoghurt made from 13.75% total solids containing no UF retentate was judged sufficiently firm by the sensory panel. Most sensory attributes of yoghurt made with 13.50% total solids were significantly better than those of yoghurt prepared with either 13 or 14% total solids. Standardised milk heated to 85 °C for 30 min resulted in significantly better overall acceptability of the yoghurt. The overall acceptability of the optimised yoghurt was significantly better than that of a branded market sample. UF retentate addition adversely affected yoghurt quality, whereas optimisation of culture level, total milk solids and other process parameters noticeably improved the quality of plain set yoghurt, giving a shelf life of 15 days at 4 °C.

  7. Annual and Seasonal Mean Net Evaporation Rates of the Red Sea Water during Jan 1958 - Dec 2007

    OpenAIRE

    Nassir, Sahbaldeen Abdulaziz

    2012-01-01

    A data set including sea level, temperature, salinity, and currents from the Simple Ocean Data Assimilation (SODA) is used in this study to estimate the mean annual and seasonal net evaporation rates. Wind data are then used to examine their impact on evaporation. This work calculates the seasonal and annual evaporation rates under the assumption that there is no net mass transport (balance); hence, the difference in transport is supposed to equal the water that has evaporated...

  8. Patient Safety Communication Among Differently Educated Nurses: Converging and Diverging Meaning Systems.

    Science.gov (United States)

    Anbari, Allison Brandt; Vogelsmeier, Amy; Dougherty, Debbie S

    2017-12-01

    Studies suggesting that an increased number of bachelor's-prepared nurses (BSNs) at the bedside improves patient safety do not stratify their samples into traditional bachelor's graduates and associate degree (ADN)-to-BSN graduates. This qualitative study investigated potential differences in the meaning of patient safety among BSNs and ADN-to-BSN graduates. Guided by the theory of Language Convergence/Meaning Divergence, interview data from eight BSNs and eight ADN-to-BSN graduates were analyzed. Findings indicate there are two meaning levels, or systems: the local level and the systemic level. At the local level, the meaning of patient safety is focused at the patient's bedside and regulated by the nurse. The systemic level includes the notion that health system factors such as policies and staffing are paramount to keeping patients safe. More frequently, ADN-to-BSN graduates' meaning of patient safety was at the local level, while BSNs' meaning centered at the systemic level.

  9. Quantitative characterization of metastatic disease in the spine. Part I. Semiautomated segmentation using atlas-based deformable registration and the level set method

    International Nuclear Information System (INIS)

    Hardisty, M.; Gordon, L.; Agarwal, P.; Skrinskas, T.; Whyne, C.

    2007-01-01

    Quantitative assessment of metastatic disease in bone is often considered immeasurable and, as such, patients with skeletal metastases are often excluded from clinical trials. In order to effectively quantify the impact of metastatic tumor involvement in the spine, accurate segmentation of the vertebra is required. Manual segmentation can be accurate but involves extensive and time-consuming user interaction. Potential solutions to automating segmentation of metastatically involved vertebrae are demons deformable image registration and level set methods. The purpose of this study was to develop a semiautomated method to accurately segment tumor-bearing vertebrae using the aforementioned techniques. By maintaining the morphology of an atlas, the demons-level set composite algorithm was able to accurately differentiate between trans-cortical tumors and surrounding soft tissue of identical intensity. The algorithm successfully segmented both the vertebral body and trabecular centrum of tumor-involved and healthy vertebrae. This work validates our approach as equivalent in accuracy to an experienced user.

  10. Comparison of K-means and fuzzy c-means algorithm performance for automated determination of the arterial input function.

    Science.gov (United States)

    Yin, Jiandong; Sun, Hongzan; Yang, Jiawen; Guo, Qiyong

    2014-01-01

    The arterial input function (AIF) plays a crucial role in the quantification of cerebral perfusion parameters. The traditional method for AIF detection is based on manual operation, which is time-consuming and subjective. Two automatic methods have been reported that are based on two frequently used clustering algorithms: fuzzy c-means (FCM) and K-means. However, it is still not clear which is better for AIF detection. Hence, we compared the performance of these two clustering methods using both simulated and clinical data. The results demonstrate that K-means analysis can yield more accurate and robust AIF results, although it takes longer to execute than the FCM method. We consider that this longer execution time is trivial relative to the total time required for image manipulation in a PACS setting, and is acceptable if an ideal AIF is obtained. Therefore, the K-means method is preferable to FCM in AIF detection.
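
    The K-means step at the heart of such AIF detection can be sketched with plain Lloyd's algorithm. The example below clusters hypothetical per-voxel peak-enhancement values (invented numbers, not real perfusion data, and only one scalar feature rather than whole concentration-time curves) into a low-signal tissue group and a high-signal arterial candidate group:

```python
def kmeans_two_clusters(values, iters=100):
    """Lloyd's algorithm with k = 2, centroids initialised at the extremes."""
    centroids = [min(values), max(values)]
    assignment = []
    for _ in range(iters):
        # Assign each value to its nearest centroid
        assignment = [0 if abs(v - centroids[0]) <= abs(v - centroids[1]) else 1
                      for v in values]
        # Recompute centroids as cluster means
        new = []
        for j in (0, 1):
            members = [v for v, a in zip(values, assignment) if a == j]
            new.append(sum(members) / len(members) if members else centroids[j])
        if new == centroids:        # converged
            break
        centroids = new
    return centroids, assignment

# Hypothetical peak-enhancement values: most voxels are tissue (low peak),
# a few are arterial candidates (high peak)
peaks = [0.9, 1.0, 1.1, 1.2, 1.0, 0.8, 4.8, 5.0, 5.5]
centroids, labels = kmeans_two_clusters(peaks)
arterial = [v for v, a in zip(peaks, labels) if a == 1]
```

    Fuzzy c-means differs only in replacing the hard assignment with membership weights; the comparison in the record concerns which of the two yields the more accurate AIF in practice.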

  11. A New Model of the Mean Albedo of the Earth: Estimation and Validation from the GRACE Mission and SLR Satellites.

    Science.gov (United States)

    Deleflie, F.; Sammuneh, M. A.; Coulot, D.; Pollet, A.; Biancale, R.; Marty, J. C.

    2017-12-01

    This talk provides new results of a study that we began last year, and that was the subject of a poster by the same authors presented during AGU FM 2016, entitled « Mean Effect of the Albedo of the Earth on Artificial Satellite Trajectories: an Update Over 2000-2015 ». The emissivity of the Earth, split into a part in the visible domain (albedo) and a part in the infrared domain (thermal emissivity), is at the origin of non-gravitational perturbations of artificial satellite trajectories. The amplitudes and periods of these perturbations can be investigated if precise orbits can be carried out, and they reveal some characteristics of the space environment in which the satellite is orbiting. Analyzing the perturbations is, hence, a way to characterize how the energy from the Sun is re-emitted by the Earth. When carried out over a long period of time, such an approach makes it possible to quantify the variations of the global radiation budget of the Earth. In addition to the preliminary results presented last year, we draw an assessment of the validity of the mean model based on the orbits of the GRACE mission and, to a certain extent, on some of the SLR satellite orbits. The accelerometric data of the GRACE satellites are used to evaluate the accuracy of the models accounting for non-gravitational forces, in particular those induced by the albedo and the thermal emissivity. Three data sets are used to investigate the mean effects on the orbit perturbations: Stephens tables (Stephens, 1980), ECMWF (European Centre for Medium-Range Weather Forecasts) data sets and CERES (Clouds and the Earth's Radiant Energy System) data sets (publicly available). From the trajectography point of view, based on post-fit residual analysis, we analyze which data set leads to the lowest residual level, in order to define which data set appears the most suitable for deriving a new « mean albedo model » from the accelerometric data sets of the GRACE mission. The period of investigation covers the full GRACE

  12. Set-up and first operation of a plasma oven for treatment of low level radioactive wastes

    Directory of Open Access Journals (Sweden)

    Nachtrodt Frederik

    2014-01-01

    Full Text Available An experimental device for plasma treatment of low and intermediate level radioactive waste was built and tested in several design variations. The laboratory device is designed with the intention of studying the general effects and difficulties of a plasma incineration set-up for the future development of a larger-scale pilot plant. The key part of the device consists of a novel microwave plasma torch driven by 200 W of electric power and operating at atmospheric pressure. It is a specific design characteristic of the torch that a high peak temperature can be reached with a low power input compared to other plasma torches. Experiments have been carried out to analyze the effect of the plasma on materials typical of operational low-level wastes. In some preliminary cold tests, the behavior of stable volatile species, e.g., caesium, was investigated by TXRF measurements of material collected from the oven walls and the filtered off-gas. The results help in improving and scaling up the existing design and in understanding the effects relevant to a pilot plant, especially for the off-gas collection and treatment.

  13. Abstract Level Parallelization of Finite Difference Methods

    Directory of Open Access Journals (Sweden)

    Edwin Vollebregt

    1997-01-01

    Full Text Available A formalism is proposed for describing finite difference calculations in an abstract way. The formalism consists of index sets and stencils, for characterizing the structure of sets of data items and the interactions between data items (“neighbouring relations”). The formalism provides a means for lifting programming to a more abstract level. This simplifies the tasks of performance analysis and verification of correctness, and opens the way for automatic code generation. The notation is particularly useful in parallelization, for the systematic construction of parallel programs in a process/channel programming paradigm (e.g., message passing). This is important because message passing, unfortunately, is still the only approach that leads to acceptable performance for many unstructured or irregular problems on parallel computers that have non-uniform memory access times. It will be shown that the use of index sets and stencils greatly simplifies the determination of which data must be exchanged between different computing processes.
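
    The index-set/stencil idea can be made concrete with a small sketch (our own illustration, not the paper's notation): an index set names the points a computation runs over, and a stencil names the neighbour offsets it reads. Applied to a partitioned grid, the stencil offsets are exactly what determines which boundary data neighbouring processes must exchange.

```python
import numpy as np

n = 101
x = np.linspace(0.0, 1.0, n)
dx = x[1] - x[0]
u = x**2                          # test field with known second derivative 2

interior = np.arange(1, n - 1)            # index set I: interior points
stencil = (-1, 0, 1)                      # neighbour offsets ("relations")
weights = (1.0 / dx**2, -2.0 / dx**2, 1.0 / dx**2)

# Apply the stencil over the index set: lap[i] = sum_s w_s * u[i + s]
lap = sum(w * u[interior + s] for s, w in zip(stencil, weights))
```

    For u = x², the three-point second difference is exact, so `lap` is 2 everywhere on the interior; changing only the `stencil`/`weights` pair re-targets the same loop structure to a different operator.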

  14. Intuitionistic Neutrosophic Set Relations and Some of Its Properties

    OpenAIRE

    Monoranjan Bhowmik; Madhumangal Pal

    2010-01-01

    In this paper, we define intuitionistic neutrosophic sets (INSs). All INSs are neutrosophic sets, but not all neutrosophic sets are INSs. We show by means of examples that the neutrosophic-set definitions of complement and union do not hold for INSs, and we give new definitions of complement, union and intersection for INSs. We define relations on INSs and four special types of INS relations. Finally, we study some properties of INS relations.

  15. Estonian Mean Snow Depth and Duration (1891-1994)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This data set contains the number of days of snow cover in days per year, and three 10-day snow depth means per month in centimeters from stations across Estonia....

  16. An expanded calibration study of the explicitly correlated CCSD(T)-F12b method using large basis set standard CCSD(T) atomization energies.

    Science.gov (United States)

    Feller, David; Peterson, Kirk A

    2013-08-28

    The effectiveness of the recently developed, explicitly correlated coupled cluster method CCSD(T)-F12b is examined in terms of its ability to reproduce atomization energies derived from complete basis set extrapolations of standard CCSD(T). Most of the standard method findings were obtained with aug-cc-pV7Z or aug-cc-pV8Z basis sets. For a few homonuclear diatomic molecules it was possible to push the basis set to the aug-cc-pV9Z level. F12b calculations were performed with the cc-pVnZ-F12 (n = D, T, Q) basis set sequence and were also extrapolated to the basis set limit using a Schwenke-style, parameterized formula. A systematic bias was observed in the F12b method with the (VTZ-F12/VQZ-F12) basis set combination. This bias resulted in the underestimation of reference values associated with small molecules (valence correlation energies < 0.5 E(h)) and an even larger overestimation of atomization energies for bigger systems. Consequently, caution should be exercised in the use of F12b for high accuracy studies. Root mean square and mean absolute deviation error metrics for this basis set combination were comparable to complete basis set values obtained with standard CCSD(T) and the aug-cc-pVDZ through aug-cc-pVQZ basis set sequence. However, the mean signed deviation was an order of magnitude larger. Problems partially due to basis set superposition error were identified with second row compounds, which resulted in a weak performance for the smaller VDZ-F12/VTZ-F12 combination of basis sets.
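
    A Schwenke-style extrapolation is a two-point, linearly parameterized estimate of the basis set limit. The sketch below shows the functional form only; the correlation energies and the coefficient F are hypothetical placeholders, not values fitted by Schwenke or used in this study.

```python
def schwenke_extrapolate(e_small, e_large, f):
    """Two-point, parameterized CBS extrapolation of a correlation energy:
    E_CBS = (E_large - E_small) * F + E_small  (Schwenke-style form)."""
    return (e_large - e_small) * f + e_small

# Hypothetical correlation energies (hartree) for a VTZ-F12/VQZ-F12 pair;
# F is an illustrative fitted coefficient, not a published value.
e_tz = -0.30520
e_qz = -0.30810
F = 1.363

e_cbs = schwenke_extrapolate(e_tz, e_qz, F)
```

    Because F > 1, the estimate moves beyond the larger-basis result in the direction of the basis set convergence, which is the intended behaviour of the extrapolation.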

  17. Mean Streets: An analysis on street level pollution in NYC

    Science.gov (United States)

    Parker, G.

    2017-12-01

    The overarching objective of this study is to quantify the spatial and temporal variability in particulate matter concentration (PM 2.5) along crowded streets in New York City. Due to their fine size and low density, PM 2.5 particles stay longer in the atmosphere, can bypass the human nose and throat, penetrate deep into the lungs and even enter the circulatory system. PM 2.5 is a by-product of automobile combustion and is a primary cause of respiratory malfunction in NYC. The study will monitor street-level concentrations of PM 2.5 across three different routes that witness significant pedestrian traffic; observations will be conducted along these three routes at different time periods. The study will use the AirBeam community air quality monitor. The monitor tracks PM 2.5 concentration along with GPS position, air temperature and relative humidity. The surface-level concentration monitored by AirBeam will be compared with the atmospheric concentration of PM 2.5 monitored at the NOAA CREST facility on the CCNY campus. The lower-atmospheric values will be correlated with street-level values to assess the validity of using lower-atmospheric values to predict street-level concentrations. The street-level concentration will also be compared to the air quality forecast by the New York Department of Environmental Conservation to estimate its accuracy and applicability.

  18. Improving motor skills of children in secondary school by using means specific to football game

    Directory of Open Access Journals (Sweden)

    Sorin BRÎNDESCU

    2017-03-01

    Full Text Available Football is by tradition a popular sport, a mass sport, both for those who practise it and for the audience. The game of football is a sport that can be practised by everybody. Its simplicity is expressed by a set of regulations comprising a few basic, logical rules that are relatively easy to understand. Football is a game that develops basic motor skills: speed, strength, stamina and specific skills. The use of means specific to the football game in physical education classes at the secondary level aims to improve motor skills and streamline the educational process. The means specific to the football game that are used are simple, clear and suitable for both girls and boys, in order to achieve outstanding results in physical education classes.

  19. Tokunaga and Horton self-similarity for level set trees of Markov chains

    International Nuclear Information System (INIS)

    Zaliapin, Ilia; Kovchegov, Yevgeniy

    2012-01-01

    Highlights: ► Self-similar properties of the level set trees for Markov chains are studied. ► Tokunaga and Horton self-similarity are established for symmetric Markov chains and regular Brownian motion. ► Strong, distributional self-similarity is established for symmetric Markov chains with exponential jumps. ► It is conjectured that fractional Brownian motions are Tokunaga self-similar. - Abstract: The Horton and Tokunaga branching laws provide a convenient framework for studying self-similarity in random trees. The Horton self-similarity is a weaker property that addresses the principal branching in a tree; it is a counterpart of the power-law size distribution for elements of a branching system. The stronger Tokunaga self-similarity addresses so-called side branching. The Horton and Tokunaga self-similarity have been empirically established in numerous observed and modeled systems, and proven for two paradigmatic models: the critical Galton–Watson branching process with finite progeny and the finite-tree representation of a regular Brownian excursion. This study establishes the Tokunaga and Horton self-similarity for a tree representation of a finite symmetric homogeneous Markov chain. We also extend the concept of Horton and Tokunaga self-similarity to infinite trees and establish self-similarity for an infinite-tree representation of a regular Brownian motion. We conjecture that fractional Brownian motions are also Tokunaga and Horton self-similar, with self-similarity parameters depending on the Hurst exponent.
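
    The Horton branching law concerns statistics of branch orders, and the order in question is the Horton-Strahler order. A generic sketch of its computation on a finite binary tree follows (our own illustration of the standard rule, not the paper's Markov-chain level-set-tree construction): a leaf has order 1, merging two branches of equal order k yields k+1, and otherwise the larger order propagates, with the smaller branch counting as a Tokunaga side branch.

```python
def strahler(tree):
    """Horton-Strahler order of a binary tree given as nested tuples:
    () is a leaf, (left, right) is an internal vertex."""
    if tree == ():
        return 1
    a, b = strahler(tree[0]), strahler(tree[1])
    return a + 1 if a == b else max(a, b)

leaf = ()
order2 = (leaf, leaf)        # two order-1 branches merge -> order 2
side = (order2, leaf)        # unequal merge: the leaf is a side branch
order3 = (order2, order2)    # two order-2 branches merge -> order 3
```

    Horton self-similarity is then a statement about how the number of branches of order k decays geometrically with k, while Tokunaga self-similarity constrains the counts of side branches of order j attached to branches of order k.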

  20. Low-level HIV-1 replication and the dynamics of the resting CD4+ T cell reservoir for HIV-1 in the setting of HAART

    Directory of Open Access Journals (Sweden)

    Wilke Claus O

    2008-01-01

    Full Text Available Abstract Background In the setting of highly active antiretroviral therapy (HAART), plasma levels of human immunodeficiency virus type-1 (HIV-1) rapidly decay to below the limit of detection of standard clinical assays. However, reactivation of remaining latently infected memory CD4+ T cells is a source of continued virus production, forcing patients to remain on HAART despite clinically undetectable viral loads. Unfortunately, the latent reservoir decays slowly, with a half-life of up to 44 months, making it the major known obstacle to the eradication of HIV-1 infection. However, the mechanism underlying the long half-life of the latent reservoir is unknown. The most likely potential mechanisms are low-level viral replication and the intrinsic stability of latently infected cells. Methods Here we use a mathematical model of T cell dynamics in the setting of HIV-1 infection to probe the decay characteristics of the latent reservoir upon initiation of HAART. We compare the behavior of this model to patient-derived data in order to gain insight into the role of low-level viral replication in the setting of HAART. Results By comparing the behavior of our model to patient-derived data, we find that the viral dynamics observed in patients on HAART could be consistent with low-level viral replication but that this replication would not significantly affect the decay rate of the latent reservoir. Rather than low-level replication, the intrinsic stability of latently infected cells and the rate at which they are reactivated primarily determine the observed reservoir decay rate according to the predictions of our model. Conclusion The intrinsic stability of the latent reservoir has important implications for efforts to eradicate HIV-1 infection and suggests that intensified HAART would not accelerate the decay of the latent reservoir.

  1. Low-level HIV-1 replication and the dynamics of the resting CD4+ T cell reservoir for HIV-1 in the setting of HAART

    Science.gov (United States)

    Sedaghat, Ahmad R; Siliciano, Robert F; Wilke, Claus O

    2008-01-01

    Background In the setting of highly active antiretroviral therapy (HAART), plasma levels of human immunodeficiency virus type-1 (HIV-1) rapidly decay to below the limit of detection of standard clinical assays. However, reactivation of remaining latently infected memory CD4+ T cells is a source of continued virus production, forcing patients to remain on HAART despite clinically undetectable viral loads. Unfortunately, the latent reservoir decays slowly, with a half-life of up to 44 months, making it the major known obstacle to the eradication of HIV-1 infection. However, the mechanism underlying the long half-life of the latent reservoir is unknown. The most likely potential mechanisms are low-level viral replication and the intrinsic stability of latently infected cells. Methods Here we use a mathematical model of T cell dynamics in the setting of HIV-1 infection to probe the decay characteristics of the latent reservoir upon initiation of HAART. We compare the behavior of this model to patient-derived data in order to gain insight into the role of low-level viral replication in the setting of HAART. Results By comparing the behavior of our model to patient-derived data, we find that the viral dynamics observed in patients on HAART could be consistent with low-level viral replication but that this replication would not significantly affect the decay rate of the latent reservoir. Rather than low-level replication, the intrinsic stability of latently infected cells and the rate at which they are reactivated primarily determine the observed reservoir decay rate according to the predictions of our model. Conclusion The intrinsic stability of the latent reservoir has important implications for efforts to eradicate HIV-1 infection and suggests that intensified HAART would not accelerate the decay of the latent reservoir. PMID:18171475

  2. Evaluation of three methods for hemoglobin measurement in a blood donor setting

    Directory of Open Access Journals (Sweden)

    Jacob Rosenblit

    1999-05-01

    Full Text Available CONTEXT: The hemoglobin (Hb) level is the parameter most used for screening blood donors for the presence of anemia. One of the most-used methods for measuring Hb levels is based on photometric detection of cyanmethemoglobin; as an alternative to this technology, HemoCue has developed a photometric method based on the determination of azide methemoglobin. OBJECTIVE: To evaluate the performance of three methods for hemoglobin (Hb) determination in a blood bank setting. DESIGN: Prospective study utilizing blood samples to compare methods for Hb determination. SETTING: Hemotherapy Service of the Hospital Israelita Albert Einstein, a private institution in the tertiary health care system. SAMPLE: Serial blood samples were collected from 259 individuals during the period from March to June 1996. MAIN MEASUREMENTS: Test performances and their comparisons were assessed by the analysis of coefficients of variation (CV), linear regression and mean differences. RESULTS: The CVs for the three methods were: Coulter 0.68%, Cobas 0.82% and HemoCue 0.69%. There was no difference between the mean Hb determinations for the three methods (p>0.05). The Coulter and Cobas methods showed the best agreement, and the HemoCue method gave a lower Hb determination when compared to both the Coulter and Cobas methods. However, pairs of methods involving the HemoCue seem to have narrower limits of agreement (±0.78 and ±1.02) than the Coulter and Cobas combination (±1.13). CONCLUSION: The three methods provide good agreement for hemoglobin determination.
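    The limits of agreement quoted in the record are Bland–Altman style bounds: the mean paired difference plus or minus 1.96 standard deviations of the differences. A hedged sketch (the record does not state its exact computation, so this assumes the conventional formula):

    ```python
    import numpy as np

    def limits_of_agreement(a, b):
        """Mean difference (bias) and Bland-Altman 95% limits of agreement."""
        d = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
        bias = d.mean()
        half_width = 1.96 * d.std(ddof=1)  # 1.96 sample SDs of the differences
        return bias, (bias - half_width, bias + half_width)

    # e.g. paired Hb readings (g/dL) from two illustrative instruments
    bias, (lo, hi) = limits_of_agreement([10.0, 11.0, 12.0], [10.2, 10.8, 12.0])
    ```

    Narrower limits, as reported for the pairs involving the HemoCue, mean the two instruments rarely disagree by more than that margin.
    
    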

  3. The misconception of mean-reversion

    International Nuclear Information System (INIS)

    Eliazar, Iddo I; Cohen, Morrel H

    2012-01-01

    The notion of random motion in a potential well is elemental in the physical sciences and beyond. Quantitatively, this notion is described by reverting diffusions—asymptotically stationary diffusion processes which are simultaneously (i) driven toward a reversion level by a deterministic force, and (ii) perturbed off the reversion level by a random white noise. The archetypal example of reverting diffusions is the Ornstein–Uhlenbeck process, which is mean-reverting. In this paper we analyze reverting diffusions and establish that: (i) if the magnitude of the perturbing noise is constant then the diffusion's stationary density is unimodal and the diffusion is mode-reverting; (ii) if the magnitude of the perturbing noise is non-constant then, in general, neither is the diffusion's stationary density unimodal, nor is the diffusion mode-reverting. In the latter case we further establish a result asserting when unimodality and mode-reversion do hold. In particular, we demonstrate that the notion of mean-reversion, which is fundamental in economics and finance, is a misconception—as mean-reversion is an exception rather than the norm. (fast track communication)
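    The archetypal mean-reverting diffusion named in the abstract, the Ornstein–Uhlenbeck process dX = θ(μ − X)dt + σ dW, can be simulated with a simple Euler–Maruyama scheme to watch the reversion toward μ. A minimal sketch with illustrative parameters (not from the paper):

    ```python
    import numpy as np

    def simulate_ou(theta, mu, sigma, x0, dt, n, seed=0):
        """Euler-Maruyama path of dX = theta*(mu - X) dt + sigma dW."""
        rng = np.random.default_rng(seed)
        x = np.empty(n + 1)
        x[0] = x0
        for i in range(n):
            drift = theta * (mu - x[i]) * dt          # pull toward the reversion level
            shock = sigma * np.sqrt(dt) * rng.standard_normal()
            x[i + 1] = x[i] + drift + shock
        return x

    # start far from mu; after a burn-in the path fluctuates around mu = 1.0
    path = simulate_ou(theta=2.0, mu=1.0, sigma=0.3, x0=5.0, dt=0.01, n=5000)
    ```

    With constant σ, as the paper notes, the stationary density is unimodal and mode and mean coincide at μ; making σ depend on x is what breaks this picture.
    
    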

  4. Finding meaning in art: Preferred levels of ambiguity in art appreciation

    Science.gov (United States)

    Jakesch, Martina; Leder, Helmut

    2011-01-01

    Uncertainty is typically not desirable in everyday experiences, but uncertainty in the form of ambiguity may be a defining feature of aesthetic experiences of modern art. In this study, we examined different hypotheses concerning the quantity and quality of information appreciated in art. Artworks were shown together with auditorily presented statements. We tested whether the amount of information, the amount of matching information, or the proportion of matching to nonmatching statements apparent in a picture (levels of ambiguity) affect liking and interestingness. Only the levels of ambiguity predicted differences in the two dependent variables. These findings reveal that ambiguity is an important determinant of aesthetic appreciation and that a certain level of ambiguity is appreciable. PMID:19565431

  5. Instabilities constraint and relativistic mean field parametrization

    International Nuclear Information System (INIS)

    Sulaksono, A.; Kasmudin; Buervenich, T.J.; Reinhard, P.-G.; Maruhn, J.A.

    2011-01-01

    Two parameter sets (Set 1 and Set 2) of the standard relativistic mean field (RMF) model plus an additional vector isoscalar nonlinear term, which are constrained by a set of criteria determined by symmetric nuclear matter stabilities at high densities due to longitudinal and transversal particle–hole excitation modes, are investigated. In the latter parameter set, δ meson and isoscalar as well as isovector tensor contributions are included. The effects in selected finite nuclei and nuclear matter properties predicted by both parameter sets are systematically studied and compared with the ones predicted by well-known RMF parameter sets. The vector isoscalar nonlinear term addition and instability constraints have reasonably good effects on the high-density properties of the isoscalar sector of nuclear matter and certain finite nuclei properties. However, even though the δ meson and isovector tensor are included, the incompatibility with the constraints from some experimental data in certain nuclear properties at the saturation point and the excessive stiffness of the isovector nuclear matter equation of state at high densities, as well as the incorrect isotonic trend in the binding energies of finite nuclei, are still encountered. It is shown that the problem may be remedied if we introduce additional nonlinear terms not only in the isovector but also in the isoscalar vector sector. (author)

  6. The Analysis of Nonstationary Time Series Using Regression, Correlation and Cointegration with an Application to Annual Mean Temperature and Sea Level

    DEFF Research Database (Denmark)

    Johansen, Søren

    There are simple well-known conditions for the validity of regression and correlation as statistical tools. We analyse by examples the effect of nonstationarity on inference using these methods and compare them to model-based inference. Finally, we analyse some data on annual mean temperature and sea level by applying the cointegrated vector autoregressive model, which explicitly takes into account the nonstationarity of the variables.
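    The effect of nonstationarity on correlation that the abstract alludes to can be reproduced in a few lines: two independent random walks typically show a large sample correlation ("spurious regression"), while their stationary first differences do not. A small illustration (not from the paper):

    ```python
    import numpy as np

    rng = np.random.default_rng(42)
    n = 2000
    x = rng.standard_normal(n).cumsum()   # random walk 1
    y = rng.standard_normal(n).cumsum()   # independent random walk 2

    # correlation of the levels is often far from 0 despite independence
    r_levels = np.corrcoef(x, y)[0, 1]

    # correlation of the (stationary) differences is near 0, as it should be
    r_diffs = np.corrcoef(np.diff(x), np.diff(y))[0, 1]
    ```

    Cointegration analysis addresses exactly this: it asks whether some linear combination of the nonstationary levels is itself stationary before trusting level-based inference.
    
    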

  7. Agenda-setting the unknown

    DEFF Research Database (Denmark)

    Dannevig, Halvor

    ...agenda-setting theory, it is concluded that agenda-setting of climate change adaptation requires human agency in providing local legitimacy and salience for the issue. The thesis also finds that boundary arrangements are needed to bridge the gap between local knowledge and scientific knowledge for adaptation governance. ...Attempts at such boundary arrangements are already in place at the regional governance levels, but they must be strengthened if municipalities are to take further steps in implementing adaptation measures.

  8. Contextual control over task-set retrieval.

    Science.gov (United States)

    Crump, Matthew J C; Logan, Gordon D

    2010-11-01

    Contextual cues signaling task likelihood or the likelihood of task repetition are known to modulate the size of switch costs. We follow up on the finding by Leboe, Wong, Crump, and Stobbe (2008) that location cues predictive of the proportion of switch or repeat trials modulate switch costs. Their design employed one cue per task, whereas our experiment employed two cues per task, which allowed separate assessment of modulations to the cue-repetition benefit, a measure of lower level cue-encoding processes, and to the task-alternation cost, a measure of higher level processes representing task-set information. We demonstrate that location information predictive of switch proportion modulates performance at the level of task-set representations. Furthermore, we demonstrate that contextual control occurs even when subjects are unaware of the associations between context and switch likelihood. We discuss the notion that contextual information provides rapid, unconscious control over the extent to which prior task-set representations are retrieved in the service of guiding online performance.

  9. Mean Platelet Volume, Vitamin D and C Reactive Protein Levels in Normal Weight Children with Primary Snoring and Obstructive Sleep Apnea Syndrome.

    Directory of Open Access Journals (Sweden)

    Anna Maria Zicari

    Full Text Available Studies on Mean Platelet Volume (MPV) in children with Sleep Disordered Breathing (SDB) report conflicting results, and the hypothesis of an intermittent hypoxemia leading to a systemic inflammation is reaching consensus. Vitamin D exerts anti-inflammatory properties and its deficiency has been supposed to play a role in sleep disorders. Emerging interest is rising about Primary Snoring (PS), since it is reasonable that also undetectable alterations of hypoxia might predispose to an increased production of inflammatory mediators. In this perspective, in a group of children affected by SDB, our aim was to investigate MPV, vitamin D and C Reactive Protein (CRP) levels, which had been previously evaluated separately in different studies focused only on Obstructive Sleep Apnea Syndrome (OSAS). We enrolled 137 children: 70 healthy controls (HC) and 67 affected by SDB undergoing a polysomnographic evaluation, 22 with a diagnosis of PS and 45 with a diagnosis of OSAS. All patients underwent routine biochemical evaluations including blood cell counts, CRP and vitamin D. Children affected by SDB had a mean age of 8.49±2.19 years and were prevalently males (23 females, 34%; 44 males, 66%). MPV levels were higher in OSAS and PS when compared to HC; platelet count (PLT) and CRP levels were higher, while vitamin D levels were lower, in children with SDB when compared to HC. MPV levels were correlated with PLT (r = -0.54; p<0.001), vitamin D (r = -0.39; p<0.001) and CRP (r = 0.21; p<0.01). A multiple regression was run to predict MPV levels from vitamin D, CRP and PLT, and these variables significantly predicted MPV (F = 17.42, p<0.0001; adjusted R2 = 0.37). Only platelet count and vitamin D added statistically significantly to the prediction (p<0.05). The present study provides evidence of higher MPV and lower vitamin D levels in children with PS as well as in children with OSAS, and supports the underlying inflammation, hence highlighting the importance of an early diagnosis of this

  10. Coastal lagoon systems as indicator of Holocene sea-level development in a periglacial soft-sediment setting: Samsø, Denmark

    DEFF Research Database (Denmark)

    Sander, Lasse; Fruergaard, Mikkel; Johannessen, Peter N.

    2014-01-01

    Confined shallow-water environments are encountered in many places along the coast of the inner Danish waters. Despite their common occurrence, these environments have rarely been studied as sedimentary archives. In this study we set out to trace back changes in relative sea-level and associated geomorphological responses in sediment cores retrieved from coastal lagoon systems on the island of Samsø, central Denmark. In the mid-Atlantic period, the post-glacial sea-level rise reached what is today the southern Kattegat Sea. Waves, currents and tides began to erode the unconsolidated moraine material. Stratigraphy, grain-size distribution, fossil and organic matter content of cores retrieved from the lagoons were analyzed and compared. Age control was established using radiocarbon and optically stimulated luminescence dating. Our data produced a surprisingly consistent pattern for the sedimentary...

  11. Projecting future sea level

    Science.gov (United States)

    Cayan, Daniel R.; Bromirski, Peter; Hayhoe, Katharine; Tyree, Mary; Dettinger, Mike; Flick, Reinhard

    2006-01-01

    California’s coastal observations and global model projections indicate that California’s open coast and estuaries will experience increasing sea levels over the next century. Sea level rise has affected much of the coast of California, including the Southern California coast, the Central California open coast, and the San Francisco Bay and upper estuary. These trends, quantified from a small set of California tide gages, have ranged from 10–20 centimeters (cm) (3.9–7.9 inches) per century, quite similar to that estimated for global mean sea level. So far, there is little evidence that the rate of rise has accelerated, and the rate of rise at California tide gages has actually flattened since 1980, but projections suggest substantial sea level rise may occur over the next century. Climate change simulations project a substantial rate of global sea level rise over the next century due to thermal expansion as the oceans warm and runoff from melting land-based snow and ice accelerates. Sea level rise projected from the models increases with the amount of warming. Relative to sea levels in 2000, by the 2070–2099 period, sea level rise projections range from 11–54 cm (4.3–21 in) for simulations following the lower (B1) greenhouse gas (GHG) emissions scenario, from 14–61 cm (5.5–24 in) for the middle-upper (A2) emission scenario, and from 17–72 cm (6.7–28 in) for the highest (A1fi) scenario. In addition to relatively steady secular trends, sea levels along the California coast undergo shorter period variability above or below predicted tide levels and changes associated with long-term trends. These variations are caused by weather events and by seasonal to decadal climate fluctuations over the Pacific Ocean that in turn affect the Pacific coast. Highest coastal sea levels have occurred when winter storms and Pacific climate disturbances, such as El Niño, have coincided with high astronomical tides. This study considers a range of projected future

  12. Mean-level personality development across childhood and adolescence: a temporary defiance of the maturity principle and bidirectional associations with parenting.

    Science.gov (United States)

    Van den Akker, Alithe L; Deković, Maja; Asscher, Jessica; Prinzie, Peter

    2014-10-01

    In this study, we investigated mean-level personality development in children from 6 to 20 years of age. Additionally, we investigated longitudinal, bidirectional associations between child personality and maternal overreactive and warm parenting. In this 5-wave study, mothers reported on their child's personality from Time 1 (T1) through Time 4 (T4), and children provided self-reports from Time 2 (T2) through Time 5 (T5). Mothers reported on their levels of overreactive and warm parenting from T2 through T4. Using cohort-sequential latent growth curve modeling, we investigated mother reported child personality from 6 to 17 years of age and child reported personality from 9 to 20 years of age. Extraversion decreased linearly across the entire study. Benevolence and conscientiousness increased from middle to late childhood, temporarily declined from late childhood to mid-adolescence, and increased again thereafter. Imagination decreased from middle childhood to mid-adolescence and also increased thereafter. Mothers reported a temporary decline in emotional stability with an increase thereafter, whereas children did not. Boys and girls differed in mean-levels of the personality dimensions and, to a lesser extent, in the degree and direction of changes. Latent difference score modeling showed that child personality predicted changes in parenting and that, to a lesser extent, parenting predicted changes in child traits. Additionally, changes in child personality were associated with changes in maternal parenting. Results of the present study show that personality change is not directed at increasing maturity from childhood to mid-adolescence and that it elicits and is shaped by both positive and negative parenting.

  13. Theory of random sets

    CERN Document Server

    Molchanov, Ilya

    2017-01-01

    This monograph, now in a thoroughly revised second edition, offers the latest research on random sets. It has been extended to include substantial developments achieved since 2005, some of them motivated by applications of random sets to econometrics and finance. The present volume builds on the foundations laid by Matheron and others, including the vast advances in stochastic geometry, probability theory, set-valued analysis, and statistical inference. It shows the various interdisciplinary relationships of random set theory within other parts of mathematics, and at the same time fixes terminology and notation that often vary in the literature, establishing it as a natural part of modern probability theory and providing a platform for future development. It is completely self-contained, systematic and exhaustive, with the full proofs that are necessary to gain insight. Aimed at research level, Theory of Random Sets will be an invaluable reference for probabilists; mathematicians working in convex and integ...

  14. [A survey of the level of AIDS knowledge among people concerned in Nanjing City].

    Science.gov (United States)

    Sun, Ze-Yu; Zhu, Ning; Li, Ping; Fang, Qun; Chen, Hui-Ling; Tang, Xiao-Ning; Yu, Hong-Bo; Wei, Zhong-Qing; Xu, Zhi-Peng

    2003-10-01

    To investigate the level of AIDS knowledge among people concerned in Nanjing city in order to provide scientific evidence and constructive suggestions for the government to formulate relevant policies for AIDS control. Three sets of questionnaires on AIDS knowledge were designed, the scores calculated, and the results evaluated. Of the 2,500 questionnaires issued to 4 different groups of people, 2,436 were returned with valid answers: 991 from medical and health-related workers, with a mean score of 58; 473 from college students, with a mean score of 39.9; 524 from common city residents, with a mean score of 42.3; and 448 from those working in high-risk environments, with a mean score of 47. The level of AIDS knowledge among people concerned in Nanjing city was far below the requirement of the nation, especially among medical and health-related workers. Efforts must be made to raise the level of AIDS knowledge of people concerned so as to enhance the prevention and treatment of the disease.

  15. Effects of supervised structured aerobic exercise training program on fasting blood glucose level, plasma insulin level, glycemic control, and insulin resistance in type 2 diabetes mellitus.

    Science.gov (United States)

    Shakil-Ur-Rehman, Syed; Karimi, Hossein; Gillani, Syed Amir

    2017-01-01

    To determine the effects of a supervised structured aerobic exercise training (SSAET) program on fasting blood glucose level (FBGL), plasma insulin level (PIL), glycemic control (GC), and insulin resistance (IR) in type 2 diabetes mellitus (T2DM). Riphah Rehabilitation and Research Centre (RRRC) was the clinical setting for this randomized controlled trial, located at Pakistan Railways General Hospital (PRGH), Rawalpindi, Pakistan. The study duration was 18 months, from January 1, 2015 to June 30, 2016. Patients of both genders ranging from 40 to 70 years of age with at least one year of history of T2DM were considered eligible according to WHO criteria, while patients with other chronic diseases, a history of smoking, or a regular exercise and diet plan were excluded. A cohort of 195 patients was screened, of whom 120 fulfilled the inclusion criteria. Among them, 102 agreed to participate and were assigned to experimental (n=51) and control (n=51) groups. The experimental group underwent the SSAET program, routine medication and a dietary plan, whereas the control group received routine medication and a dietary plan; both groups received treatment for 25 weeks. Blood samples were taken at baseline and on the completion of 25 weeks. Fasting blood glucose level, plasma insulin level, and glycemic control were investigated to calculate IR. Patients with T2DM in the experimental group (n=51) treated with the SSAET program, routine medication and a dietary plan significantly improved FBGL (pre-mean=276.41±25.31, post-mean=250.07±28.23), PIL (pre-mean=13.66±5.31, post-mean=8.91±3.83), GC (pre-mean=8.31±1.79, post-mean=7.28±1.43), and IR (pre-mean=64.95±27.26, post-mean=37.97±15.58), as compared with patients in the control group treated with routine medication and a dietary plan, in whom deteriorations were noted in FBGL (pre-mean=268.19±22.48, post-mean=281.41±31.30), PIL (pre-mean=14.14±5.48, post-mean=14.85±5.27), GC (pre-mean=8.15±1.74, post-mean=8.20±1.44), and IR (pre-mean
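    The abstract derives insulin resistance from fasting glucose and insulin. One widely used index is HOMA-IR, sketched below under the assumption of conventional units (mg/dL and µU/mL); the record does not state which formula the study used, so this is illustrative only and will not reproduce the IR values quoted above:

    ```python
    def homa_ir(fasting_glucose_mg_dl, fasting_insulin_uu_ml):
        """Standard HOMA-IR index: (glucose [mg/dL] * insulin [uU/mL]) / 405.

        Illustrative only -- the study's exact IR index is not specified.
        """
        return fasting_glucose_mg_dl * fasting_insulin_uu_ml / 405.0

    # by construction, 90 mg/dL glucose with 4.5 uU/mL insulin gives HOMA-IR = 1.0
    ```
    
    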

  16. K-means clustering versus validation measures: a data-distribution perspective.

    Science.gov (United States)

    Xiong, Hui; Wu, Junjie; Chen, Jian

    2009-04-01

    K-means is a well-known and widely used partitional clustering method. While there are considerable research efforts to characterize the key features of the K-means clustering algorithm, further investigation is needed to understand how data distributions can have impact on the performance of K-means clustering. To that end, in this paper, we provide a formal and organized study of the effect of skewed data distributions on K-means clustering. Along this line, we first formally illustrate that K-means tends to produce clusters of relatively uniform size, even if input data have varied "true" cluster sizes. In addition, we show that some clustering validation measures, such as the entropy measure, may not capture this uniform effect and provide misleading information on the clustering performance. Viewed in this light, we provide the coefficient of variation (CV) as a necessary criterion to validate the clustering results. Our findings reveal that K-means tends to produce clusters in which the variations of cluster sizes, as measured by CV, are in a range of about 0.3-1.0. Specifically, for data sets with large variation in "true" cluster sizes (e.g., CV > 1.0), K-means reduces variation in resultant cluster sizes to less than 1.0. In contrast, for data sets with small variation in "true" cluster sizes (e.g., CV < 0.3), K-means increases variation in resultant cluster sizes to greater than 0.3. In other words, for the earlier two cases, K-means produces clustering results which are away from the "true" cluster distributions.
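    The CV criterion discussed in the abstract is simply the ratio of the standard deviation to the mean of the cluster sizes. A minimal sketch (the function name is ours):

    ```python
    import numpy as np

    def cluster_size_cv(labels):
        """Coefficient of variation (population std / mean) of cluster sizes."""
        sizes = np.bincount(np.asarray(labels))
        sizes = sizes[sizes > 0].astype(float)   # ignore empty label values
        return sizes.std(ddof=0) / sizes.mean()

    # perfectly uniform clusters give CV = 0; a 90/10 split gives CV = 0.8
    ```

    Comparing this statistic between the "true" labels and the K-means labels exposes the uniformizing effect: the output CV is pulled toward the 0.3–1.0 band regardless of the input.
    
    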

  17. Estimating climate resilience for conservation across geophysical settings.

    Science.gov (United States)

    Anderson, Mark G; Clark, Melissa; Sheldon, Arlene Olivero

    2014-08-01

    Conservationists need methods to conserve biological diversity while allowing species and communities to rearrange in response to a changing climate. We developed and tested such a method for northeastern North America that we based on physical features associated with ecological diversity and site resilience to climate change. We comprehensively mapped 30 distinct geophysical settings based on geology and elevation. Within each geophysical setting, we identified sites that were both connected by natural cover and that had relatively more microclimates indicated by diverse topography and elevation gradients. We did this by scoring every 405 ha hexagon in the region for these two characteristics and selecting those that scored more than 0.5 SD above the mean combined score for each setting. We hypothesized that these high-scoring sites had the greatest resilience to climate change, and we compared them with sites selected by The Nature Conservancy for their high-quality rare species populations and natural community occurrences. High-scoring sites captured significantly more of the biodiversity sites than expected by chance (p < 0.0001): 75% of the 414 target species, 49% of the 4592 target species locations, and 53% of the 2170 target community locations. Calcareous bedrock, coarse sand, and fine silt settings scored markedly lower for estimated resilience and had low levels of permanent land protection (average 7%). Because our method identifies-for every geophysical setting-sites that are the most likely to retain species and functions longer under a changing climate, it reveals natural strongholds for future conservation that would also capture substantial existing biodiversity and correct the bias in current secured lands. © 2014 The Authors. Conservation Biology published by Wiley Periodicals, Inc., on behalf of the Society for Conservation Biology.
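    The selection rule described, keeping hexagons that score more than 0.5 SD above the mean for their setting, amounts to a z-score threshold. A hedged sketch (function and variable names are illustrative, not from the paper):

    ```python
    import numpy as np

    def resilient_sites(scores, threshold_sd=0.5):
        """Indices of sites scoring more than threshold_sd SDs above the mean.

        Intended to be applied within one geophysical setting at a time, so
        each setting is judged against its own score distribution.
        """
        s = np.asarray(scores, dtype=float)
        z = (s - s.mean()) / s.std(ddof=0)   # standardize the combined scores
        return np.flatnonzero(z > threshold_sd)

    # only the clear outlier exceeds the 0.5 SD cut in this toy example
    picked = resilient_sites([0, 0, 0, 0, 10])
    ```
    
    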

  18. Experiments in Reconstructing Twentieth-Century Sea Levels

    Science.gov (United States)

    Ray, Richard D.; Douglas, Bruce C.

    2011-01-01

    One approach to reconstructing historical sea level from the relatively sparse tide-gauge network is to employ Empirical Orthogonal Functions (EOFs) as interpolatory spatial basis functions. The EOFs are determined from independent global data, generally sea-surface heights from either satellite altimetry or a numerical ocean model. The problem is revisited here for sea level since 1900. A new approach to handling the tide-gauge datum problem by direct solution offers possible advantages over the method of integrating sea-level differences, with the potential of eventually adjusting datums into the global terrestrial reference frame. The resulting time series of global mean sea levels appears fairly insensitive to the adopted set of EOFs. In contrast, charts of regional sea level anomalies and trends are very sensitive to the adopted set of EOFs, especially for the sparser network of gauges in the early 20th century. The reconstructions appear especially suspect before 1950 in the tropical Pacific. While this limits some applications of the sea-level reconstructions, the sensitivity does appear adequately captured by formal uncertainties. All our solutions show regional trends over the past five decades to be fairly uniform throughout the global ocean, in contrast to trends observed over the shorter altimeter era. Consistent with several previous estimates, the global sea-level rise since 1900 is 1.70 ± 0.26 mm/yr. The global trend since 1995 exceeds 3 mm/yr which is consistent with altimeter measurements, but this large trend was possibly also reached between 1935 and 1950.
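    The reconstruction approach described, EOFs as interpolatory basis functions fitted at sparse gauge locations, can be sketched on synthetic data: compute EOFs by SVD of a full space–time field, fit their amplitudes by least squares at a few "gauge" columns, and project back to all locations. This is a schematic illustration on a toy rank-2 field, not the authors' implementation:

    ```python
    import numpy as np

    # Synthetic rank-2 "sea level" field: 200 times x 50 locations
    t = np.linspace(0.0, 10.0, 200)
    s = np.linspace(0.0, np.pi, 50)
    field = np.outer(np.sin(t), np.sin(s)) + np.outer(0.5 * np.cos(3.0 * t), np.cos(s))

    # EOFs: leading right singular vectors of the time x space matrix
    _, _, vt = np.linalg.svd(field, full_matrices=False)
    eofs = vt[:2]                      # (2, 50) spatial basis functions

    # Sparse "tide gauge" observations at a few locations only
    gauges = np.array([3, 17, 29, 44])
    obs = field[:, gauges]             # (200, 4)

    # Fit EOF amplitudes at the gauges by least squares, then map everywhere
    amps, *_ = np.linalg.lstsq(eofs[:, gauges].T, obs.T, rcond=None)
    reconstruction = amps.T @ eofs     # (200, 50) reconstructed field
    ```

    Because the toy field lies exactly in the span of its two leading EOFs, four gauges recover it everywhere; with real gauges the fit is overdetermined and noisy, which is where the sensitivity to the adopted EOF set enters.
    
    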

  19. Setting analyst: A practical harvest planning technique

    Science.gov (United States)

    Olivier R.M. Halleux; W. Dale Greene

    2001-01-01

    Setting Analyst is an ArcView extension that facilitates practical harvest planning for ground-based systems. By modeling the travel patterns of ground-based machines, it compares different harvesting settings based on projected average skidding distance, logging costs, and site disturbance levels. Setting Analyst uses information commonly available to consulting...

  20. Describing Product Variety Using Set Theory

    DEFF Research Database (Denmark)

    Brunø, Thomas Ditlev; Nielsen, Kjeld; Jørgensen, Kaj Asbjørn

    2014-01-01

    Three capabilities: solution space development, robust process design, and choice navigation are critical for mass customizers. In order to become and stay competitive, it is proposed to establish assessment methods for these capabilities. This paper investigates the usage of set theory as a means...

  1. Effect of Average Annual Mean Serum Ferritin Levels on QTc Interval and QTc Dispersion in Beta-Thalassemia Major

    Directory of Open Access Journals (Sweden)

    Yazdan Ghandi

    2017-08-01

    Full Text Available Background There is evidence indicating impaired cardiomyocytic contractility, delayed electrical conduction and increased electrophysiological heterogeneities due to iron toxicity in beta-thalassemia major patients. In the present study, we compared the electrocardiographic and echocardiographic features of beta-thalassemia major patients with a healthy control group. Materials and Methods The average annual serum ferritin levels of fifty beta-thalassemia major patients were assessed. For each patient, corrected QT (QTc) intervals and QTc dispersions (QTcd) were calculated, and V1S and V5R were measured. All subjects underwent two-dimensional M-mode echocardiography and Doppler study and were compared with 50 healthy subjects as a control group. Results QTc interval and dispersion were significantly higher in beta-thalassemia major patients (P=0.001). The mean V5R (20.04 ± 4.34 vs. 17.14 ± 2.55 mm) and V1S (10.24 ± 2.62 vs. 7.83 ± 0.38 mm) showed considerably higher mean values in patients in comparison with the control group. Peak mitral inflow velocity at early diastole and the early-to-late ratio in the case group were markedly higher (P
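    QTc and QTc dispersion as referenced in the abstract are commonly obtained with Bazett's formula (QTc = QT/√RR) and the max–min spread of QTc across the ECG leads. The record does not state the exact method used, so the sketch below assumes these common conventions:

    ```python
    import math

    def qtc_bazett(qt_ms, rr_s):
        """Bazett-corrected QT in ms (QT in ms, preceding RR interval in seconds).

        One common convention; the study's exact correction is not stated.
        """
        return qt_ms / math.sqrt(rr_s)

    def qtc_dispersion(qtc_per_lead_ms):
        """QTc dispersion: max minus min QTc across the measured ECG leads."""
        return max(qtc_per_lead_ms) - min(qtc_per_lead_ms)

    # at a heart rate of 60 bpm (RR = 1 s) the correction leaves QT unchanged
    ```
    
    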

  2. Meaning of life, representation of death, and their association with psychological distress.

    Science.gov (United States)

    Testoni, Ines; Sansonetto, Giulia; Ronconi, Lucia; Rodelli, Maddalena; Baracco, Gloria; Grassi, Luigi

    2017-08-09

    This paper presents a two-phase cross-sectional study aimed at examining the possible mitigating role of perceived meaning of life and representation of death on psychological distress, anxiety, and depression. The first phase involved 219 healthy participants, while the second encompassed 30 cancer patients. Each participant completed the Personal Meaning Profile (PMP), the Testoni Death Representation Scale (TDRS), the Hospital Anxiety and Depression Scale (HADS), and the Distress Thermometer (DT). The primary analyses comprised (1) correlation analyses between the overall scores of each of the instruments and (2) path analysis to assess the indirect effect of the PMP on DT score through anxiety and depression as determined by the HADS. The path analysis showed that the PMP was inversely correlated with depression and anxiety, which, in turn, mediated the effect on distress. Inverse correlations were found between several dimensions of the PMP, the DT, and the HADS-Anxiety and HADS-Depression subscales, in both healthy participants and cancer patients. Religious orientation (faith in God) was related to a stronger sense of meaning in life and the ontological representation of death as a passage, rather than annihilation. Our findings support the hypothesis that participants who represent death as a passage and have a strong perception of the meaning of life tend to report lower levels of distress, anxiety, and depression. We recommend that perceived meaning of life and representation of death be more specifically examined in the cancer and palliative care settings.

  3. Adaptive nonlocal means filtering based on local noise level for CT denoising

    International Nuclear Information System (INIS)

    Li, Zhoubo; Trzasko, Joshua D.; Lake, David S.; Blezek, Daniel J.; Manduca, Armando; Yu, Lifeng; Fletcher, Joel G.; McCollough, Cynthia H.

    2014-01-01

Purpose: To develop and evaluate an image-domain noise reduction method based on a modified nonlocal means (NLM) algorithm that is adaptive to local noise level of CT images and to implement this method in a time frame consistent with clinical workflow. Methods: A computationally efficient technique for local noise estimation directly from CT images was developed. A forward projection, based on a 2D fan-beam approximation, was used to generate the projection data, with a noise model incorporating the effects of the bowtie filter and automatic exposure control. The noise propagation from projection data to images was analytically derived. The analytical noise map was validated using repeated scans of a phantom. A 3D NLM denoising algorithm was modified to adapt its denoising strength locally based on this noise map. The performance of this adaptive NLM filter was evaluated in phantom studies in terms of in-plane and cross-plane high-contrast spatial resolution, noise power spectrum (NPS), subjective low-contrast spatial resolution using the American College of Radiology (ACR) accreditation phantom, and objective low-contrast spatial resolution using a channelized Hotelling model observer (CHO). Graphics processing unit (GPU) implementations of the noise map calculation and the adaptive NLM filtering were developed to meet the demands of clinical workflow. Adaptive NLM was piloted on lower dose scans in clinical practice. Results: The local noise level estimation matches the noise distribution determined from multiple repetitive scans of a phantom, demonstrated by small variations in the ratio map between the analytical noise map and the one calculated from repeated scans. The phantom studies demonstrated that the adaptive NLM filter can reduce noise substantially without degrading the high-contrast spatial resolution, as illustrated by modulation transfer function and slice sensitivity profile results. The NPS results show that adaptive NLM denoising preserves the
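The core idea in this record is an NLM filter whose smoothing strength follows a per-pixel noise map. A minimal 2D sketch of that idea, assuming illustrative patch and search-window sizes and a strength factor k that are not values from the paper:

```python
import numpy as np

def adaptive_nlm(image, noise_map, patch=1, search=3, k=1.5):
    """Nonlocal means with per-pixel filtering strength h = k * local noise std.

    image     : 2D float array
    noise_map : 2D array of local noise std estimates, same shape as image
    patch     : patch half-width; search : search-window half-width
    (patch, search, and k are illustrative assumptions, not the paper's values)
    """
    h, w = image.shape
    pad = patch + search
    padded = np.pad(image, pad, mode="reflect")
    out = np.empty_like(image)
    for i in range(h):
        for j in range(w):
            ci, cj = i + pad, j + pad
            ref = padded[ci - patch:ci + patch + 1, cj - patch:cj + patch + 1]
            hloc = max(k * noise_map[i, j], 1e-8)  # strength adapts to local noise
            wsum, acc = 0.0, 0.0
            for di in range(-search, search + 1):
                for dj in range(-search, search + 1):
                    ni, nj = ci + di, cj + dj
                    cand = padded[ni - patch:ni + patch + 1,
                                  nj - patch:nj + patch + 1]
                    d2 = np.mean((ref - cand) ** 2)      # patch dissimilarity
                    wgt = np.exp(-d2 / hloc ** 2)        # noisier pixel -> flatter weights
                    wsum += wgt
                    acc += wgt * padded[ni, nj]
            out[i, j] = acc / wsum
    return out

# Toy check: a flat region with known noise std fed back as the noise map.
rng = np.random.default_rng(0)
img = 100.0 + rng.normal(0.0, 5.0, (8, 8))
den = adaptive_nlm(img, np.full((8, 8), 5.0))
```

In a high-noise region `hloc` is large, so many candidate patches receive non-negligible weight and the average is aggressive; in a low-noise region the weights concentrate on near-identical patches and detail is preserved, which is how the filter avoids degrading high-contrast resolution.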

  4. The Meaning of Meaning, Etc.

    Science.gov (United States)

    Nilsen, Don L. F.

    This paper attempts to dispel a number of misconceptions about the nature of meaning, namely that: (1) synonyms are words that have the same meanings, (2) antonyms are words that have opposite meanings, (3) homonyms are words that sound the same but have different spellings and meanings, (4) converses are antonyms rather than synonyms, (5)…

  5. Validation of a non-uniform meshing algorithm for the 3D-FDTD method by means of a two-wire crosstalk experimental set-up

    Directory of Open Access Journals (Sweden)

    Raúl Esteban Jiménez-Mejía

    2015-06-01

    Full Text Available This paper presents an algorithm used to automatically mesh a 3D computational domain in order to solve electromagnetic interaction scenarios by means of the Finite-Difference Time-Domain -FDTD-  Method. The proposed algorithm has been formulated in a general mathematical form, where convenient spacing functions can be defined for the problem space discretization, allowing the inclusion of small sized objects in the FDTD method and the calculation of detailed variations of the electromagnetic field at specified regions of the computation domain. The results obtained by using the FDTD method with the proposed algorithm have been contrasted not only with a typical uniform mesh algorithm, but also with experimental measurements for a two-wire crosstalk set-up, leading to excellent agreement between theoretical and experimental waveforms. A discussion about the advantages of the non-uniform mesh over the uniform one is also presented.
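The record's non-uniform meshing idea, a spacing function that keeps cells small near a fine feature such as a thin wire and lets them grow elsewhere, can be sketched in one dimension. The geometric grading rule and growth ratio below are illustrative assumptions, not the paper's actual spacing functions:

```python
import numpy as np

def graded_mesh(x0, x1, dx_min, ratio=1.2):
    """1D non-uniform mesh from x0 to x1: the first cell has size dx_min and
    cell sizes grow geometrically by `ratio` (a common FDTD grading rule;
    the specific spacing function here is an illustrative assumption)."""
    xs = [x0]
    dx = dx_min
    while xs[-1] + dx < x1:
        xs.append(xs[-1] + dx)
        dx *= ratio
    xs.append(x1)  # close the domain exactly at x1
    return np.array(xs)

# Fine cells near x = 0 (e.g., around a thin wire), coarse cells beyond.
mesh = graded_mesh(0.0, 1.0, 0.01)
```

Note that the stability limit of an explicit FDTD scheme (the Courant condition) is set by the smallest cell, so over-refining locally shrinks the global time step; that trade-off is the usual argument for grading the mesh only where field detail is needed.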

  6. Maximal Abelian sets of roots

    CERN Document Server

    Lawther, R

    2018-01-01

    In this work the author lets \\Phi be an irreducible root system, with Coxeter group W. He considers subsets of \\Phi which are abelian, meaning that no two roots in the set have sum in \\Phi \\cup \\{ 0 \\}. He classifies all maximal abelian sets (i.e., abelian sets properly contained in no other) up to the action of W: for each W-orbit of maximal abelian sets we provide an explicit representative X, identify the (setwise) stabilizer W_X of X in W, and decompose X into W_X-orbits. Abelian sets of roots are closely related to abelian unipotent subgroups of simple algebraic groups, and thus to abelian p-subgroups of finite groups of Lie type over fields of characteristic p. Parts of the work presented here have been used to confirm the p-rank of E_8(p^n), and (somewhat unexpectedly) to obtain for the first time the 2-ranks of the Monster and Baby Monster sporadic groups, together with the double cover of the latter. Root systems of classical type are dealt with quickly here; the vast majority of the present work con...
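The abstract's definition, a set of roots is abelian when no two of its roots sum to another root or to zero, is easy to check computationally. A toy sketch for the rank-2 root system of type A2, with roots written in the basis of the two simple roots (the coordinates and helper function are illustrative, not from the book):

```python
from itertools import combinations

# Type A2 root system: six roots in simple-root coordinates (a1, a2).
A2 = {(1, 0), (0, 1), (1, 1), (-1, 0), (0, -1), (-1, -1)}

def is_abelian(subset, roots):
    """True if no two distinct roots in `subset` sum to a root or to 0."""
    forbidden = set(roots) | {(0, 0)}
    for a, b in combinations(subset, 2):
        if (a[0] + b[0], a[1] + b[1]) in forbidden:
            return False
    return True
```

For A2, {(1, 0), (1, 1)} is abelian since (2, 1) is neither a root nor zero, while {(1, 0), (0, 1)} is not, because the two simple roots sum to the root (1, 1); a set containing a root and its negative fails immediately since their sum is zero.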

  7. Evaluation of the relationship between serum apelin levels and vitamin D and mean platelet volume in diabetic patients.

    Science.gov (United States)

    Kiskac, Muharrem; Zorlu, Mehmet; Cakirca, Mustafa; Karatoprak, Cumali; Kesgin, Sıdıka; Büyükaydın, Banu; Yavuz, Erdinc; Ardic, Cuneyt; Camli, Ahmet Adil; Cikrikcioglu, Mehmet Ali

    2014-09-01

It was reported that Vitamin D deficiency was associated with a greater risk of cardiometabolic diseases, obesity, impaired glucose tolerance and diabetes mellitus type 2, arterial hypertension, and dyslipidemia. Apelin is an adipocytokine suspected to have a role in skeletal muscle glucose utilization and glycemic regulation, which may be a promising treatment modality for diabetes. It was recently reported that increased mean platelet volume (MPV) was emerging as an independent risk factor for thromboembolism, stroke, and myocardial infarction. In patients with diabetes, MPV was higher compared with normoglycemic controls; in addition, it has been proposed that an increase in MPV may play a role in the micro- and macro-vascular complications related to diabetes. We postulated that deficiency in Vitamin D levels might be associated with higher MPV and lower serum apelin levels, leading to a further increase in insulin resistance in diabetic patients. So, we aimed to investigate Vitamin D levels, MPV and serum apelin levels in diabetic patients and their correlations with each other. This is a cross-sectional study design. Seventy-eight patients with type 2 diabetes mellitus, admitted to our outpatient clinic of the internal medicine department at Bezmialem Vakif University, were included in our study. Forty-one patients were female; 37 patients were male. Serum apelin levels, fasting glucose levels, urea, creatinine, triglycerides, total cholesterol, low-density lipoprotein cholesterol (LDL-C), high-density lipoprotein cholesterol (HDL-C), fasting serum insulin level, HbA1c, free T3, free T4, TSH, vitamin D (25-OH Vitamin D) and complete blood counts were analyzed in all subjects. Each sex was analyzed separately. We found a positive correlation between serum apelin levels and BMI in female patients (r: 0.380, P: 0.014). There was also a significant positive correlation between MPV and HbA1c and fasting glucose levels and a negative correlation

  8. The Model Confidence Set

    DEFF Research Database (Denmark)

    Hansen, Peter Reinhard; Lunde, Asger; Nason, James M.

    The paper introduces the model confidence set (MCS) and applies it to the selection of models. A MCS is a set of models that is constructed such that it will contain the best model with a given level of confidence. The MCS is in this sense analogous to a confidence interval for a parameter. The MCS......, beyond the comparison of models. We apply the MCS procedure to two empirical problems. First, we revisit the inflation forecasting problem posed by Stock and Watson (1999), and compute the MCS for their set of inflation forecasts. Second, we compare a number of Taylor rule regressions and determine...... the MCS of the best in terms of in-sample likelihood criteria....
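The MCS is constructed by repeatedly testing the hypothesis of equal predictive ability and eliminating the worst-performing model until the test no longer rejects. A heavily simplified toy of that elimination loop (the actual procedure of Hansen, Lunde and Nason uses block bootstraps and studentized range statistics; the unstandardized statistic and i.i.d. bootstrap below are simplifying assumptions):

```python
import numpy as np

def model_confidence_set(losses, alpha=0.05, n_boot=500, seed=0):
    """Toy MCS sketch: losses is a (T, m) array of per-period losses for m
    models; returns the indices of the models that survive elimination."""
    rng = np.random.default_rng(seed)
    T, _ = losses.shape
    surviving = list(range(losses.shape[1]))
    while len(surviving) > 1:
        L = losses[:, surviving]
        dbar = L.mean(0) - L.mean()          # mean loss relative to set average
        # Bootstrap the null distribution of the max deviation.
        boot_max = np.empty(n_boot)
        for b in range(n_boot):
            idx = rng.integers(0, T, T)      # i.i.d. resample (a simplification)
            Lb = losses[idx][:, surviving]
            db = Lb.mean(0) - Lb.mean()
            boot_max[b] = np.max(np.abs(db - dbar))
        p_value = np.mean(boot_max >= np.max(np.abs(dbar)))
        if p_value >= alpha:                 # cannot reject equal ability: stop
            break
        worst = surviving[int(np.argmax(dbar))]  # drop the worst model
        surviving.remove(worst)
    return surviving

# Toy data: two comparable models and one clearly worse one.
rng = np.random.default_rng(1)
losses = np.column_stack([
    rng.normal(1.0, 0.1, 300),
    rng.normal(1.0, 0.1, 300),
    rng.normal(2.0, 0.1, 300),
])
kept = model_confidence_set(losses)
```

The surviving set plays the role of a confidence interval for "the best model": with clearly dominated models it shrinks quickly, while among statistically indistinguishable models the test stops rejecting and several models remain.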

  9. Children with severe Osteogenesis imperfecta and short stature present on average with normal IGF-I and IGFBP-3 levels.

    Science.gov (United States)

    Hoyer-Kuhn, Heike; Höbing, Laura; Cassens, Julia; Schoenau, Eckhard; Semler, Oliver

    2016-07-01

Osteogenesis imperfecta (OI) is characterized by bone fragility and short stature. Data about IGF-I/IGFBP-3 levels are rare in OI. Therefore, IGF-I/IGFBP-3 levels in children with different types of OI were investigated. IGF-I and IGFBP-3 levels of 60 children (male n=38) were assessed in a retrospective cross-sectional setting. Height and weight differed significantly between OI types 3 and 4 (height z-score, type 3 versus type 4: p=0.0011; weight: p≤0.0001). Mean IGF-I levels were in the lower normal range (mean±SD 137.4±109.1 μg/L). Mean IGFBP-3 measurements were in the normal range (mean±SD 3.105±1.175 mg/L). No significant differences between children with OI types 3 and 4 were observed (IGF-I: p=0.0906; IGFBP-3: p=0.2042). Patients with different severities of OI have IGF-I and IGFBP-3 levels in the lower normal range. The type of OI does not significantly influence these growth factors.

  10. L2 English Intonation: Relations between Form-Meaning Associations, Access to Meaning, and L1 Transfer

    Science.gov (United States)

    Ortega-Llebaria, Marta; Colantoni, Laura

    2014-01-01

    Although there is consistent evidence that higher levels of processing, such as learning the form-meaning associations specific to the second language (L2), are a source of difficulty in acquiring L2 speech, no study has addressed how these levels interact in shaping L2 perception and production of intonation. We examine the hypothesis of whether…

  11. “What do you mean I can’t just use Google?” Information Literacy in an Academic Setting

    OpenAIRE

    Laura Thorne

    2012-01-01