WorldWideScience

Sample records for level set technique

  1. Level-set techniques for facies identification in reservoir modeling

    Science.gov (United States)

    Iglesias, Marco A.; McLaughlin, Dennis

    2011-03-01

    In this paper we investigate the application of level-set techniques for facies identification in reservoir models. The identification of facies is a geometrical ill-posed inverse problem that we formulate in terms of shape optimization. The goal is to find a region (a geologic facies) that minimizes the misfit between predicted and measured data from an oil-water reservoir. In order to address the shape optimization problem, we present a novel application of the level-set iterative framework developed by Burger (2002 Interfaces Free Bound. 5 301-29; 2004 Inverse Problems 20 259-82) for inverse obstacle problems. The optimization is constrained by the reservoir model, a nonlinear large-scale system of PDEs that describes the reservoir dynamics. We reformulate this reservoir model in a weak (integral) form whose shape derivative can be formally computed from standard results of shape calculus. At each iteration of the scheme, the current estimate of the shape derivative is used to define a velocity in the level-set equation. The proper selection of this velocity ensures that the new shape decreases the cost functional. We present results of facies identification where the velocity is computed with the gradient-based (GB) approach of Burger (2002) and the Levenberg-Marquardt (LM) technique of Burger (2004). While an adjoint formulation allows the straightforward application of the GB approach, the LM technique requires the computation of the large-scale Karush-Kuhn-Tucker system that arises at each iteration of the scheme. We efficiently solve this system by means of the representer method. We present synthetic experiments to show and compare the capabilities and limitations of the proposed implementations of level-set techniques for the identification of geologic facies.
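
    The basic machinery the abstract builds on, evolving a region implicitly through a level-set function, can be sketched compactly. Below is a minimal illustrative Python/NumPy implementation (ours, not the authors' reservoir code; all names and parameters are assumptions) of first-order Godunov upwind evolution of a level-set function under a constant normal speed:

```python
import numpy as np

# Illustrative sketch (ours, not the authors' code): first-order Godunov
# upwind evolution of a level-set function phi under a constant normal
# speed F, solving  d(phi)/dt + F * |grad(phi)| = 0.

def evolve_level_set(phi, f_speed, dx, dt, steps):
    for _ in range(steps):
        dxm = (phi - np.roll(phi, 1, axis=0)) / dx   # backward difference in x
        dxp = (np.roll(phi, -1, axis=0) - phi) / dx  # forward difference in x
        dym = (phi - np.roll(phi, 1, axis=1)) / dx   # backward difference in y
        dyp = (np.roll(phi, -1, axis=1) - phi) / dx  # forward difference in y
        if f_speed > 0:
            grad = np.sqrt(np.maximum(dxm, 0)**2 + np.minimum(dxp, 0)**2
                           + np.maximum(dym, 0)**2 + np.minimum(dyp, 0)**2)
        else:
            grad = np.sqrt(np.minimum(dxm, 0)**2 + np.maximum(dxp, 0)**2
                           + np.minimum(dym, 0)**2 + np.maximum(dyp, 0)**2)
        phi = phi - dt * f_speed * grad
    return phi

# Signed-distance initialization: a circle of radius 0.3
n = 101
x = np.linspace(-1.0, 1.0, n)
X, Y = np.meshgrid(x, x, indexing="ij")
phi0 = np.sqrt(X**2 + Y**2) - 0.3

# F = -1 shrinks the front at unit speed; 20 steps of dt = 0.005 -> t = 0.1
phi1 = evolve_level_set(phi0, f_speed=-1.0, dx=x[1] - x[0], dt=0.005, steps=20)
```

    With these settings the zero level set contracts from radius 0.3 to approximately 0.2; in the paper, the constant speed is replaced by a velocity derived from the shape derivative of the misfit functional.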

  2. Level-set techniques for facies identification in reservoir modeling

    International Nuclear Information System (INIS)

    Iglesias, Marco A; McLaughlin, Dennis

    2011-01-01

    In this paper we investigate the application of level-set techniques for facies identification in reservoir models. The identification of facies is a geometrical ill-posed inverse problem that we formulate in terms of shape optimization. The goal is to find a region (a geologic facies) that minimizes the misfit between predicted and measured data from an oil–water reservoir. In order to address the shape optimization problem, we present a novel application of the level-set iterative framework developed by Burger (2002 Interfaces Free Bound. 5 301–29; 2004 Inverse Problems 20 259–82) for inverse obstacle problems. The optimization is constrained by the reservoir model, a nonlinear large-scale system of PDEs that describes the reservoir dynamics. We reformulate this reservoir model in a weak (integral) form whose shape derivative can be formally computed from standard results of shape calculus. At each iteration of the scheme, the current estimate of the shape derivative is used to define a velocity in the level-set equation. The proper selection of this velocity ensures that the new shape decreases the cost functional. We present results of facies identification where the velocity is computed with the gradient-based (GB) approach of Burger (2002) and the Levenberg–Marquardt (LM) technique of Burger (2004). While an adjoint formulation allows the straightforward application of the GB approach, the LM technique requires the computation of the large-scale Karush–Kuhn–Tucker system that arises at each iteration of the scheme. We efficiently solve this system by means of the representer method. We present synthetic experiments to show and compare the capabilities and limitations of the proposed implementations of level-set techniques for the identification of geologic facies.

  3. Fast Sparse Level Sets on Graphics Hardware

    NARCIS (Netherlands)

    Jalba, Andrei C.; Laan, Wladimir J. van der; Roerdink, Jos B.T.M.

    The level-set method is one of the most popular techniques for capturing and tracking deformable interfaces. Although level sets have demonstrated great potential in visualization and computer graphics applications, such as surface editing and physically based modeling, their use for interactive…

  4. Volume Sculpting Using the Level-Set Method

    DEFF Research Database (Denmark)

    Bærentzen, Jakob Andreas; Christensen, Niels Jørgen

    2002-01-01

    In this paper, we propose the use of the Level-Set Method as the underlying technology of a volume sculpting system. The main motivation is that this leads to a very generic technique for deformation of volumetric solids. In addition, our method preserves a distance field volume representation. … A scaling window is used to adapt the Level-Set Method to local deformations and to allow the user to control the intensity of the tool. Level-Set based tools have been implemented in an interactive sculpting system, and we show sculptures created using the system.

  5. Structural level set inversion for microwave breast screening

    International Nuclear Information System (INIS)

    Irishina, Natalia; Álvarez, Diego; Dorn, Oliver; Moscoso, Miguel

    2010-01-01

    We present a new inversion strategy for the early detection of breast cancer from microwave data which is based on a new multiphase level set technique. This novel structural inversion method uses a modification of the color level set technique adapted to the specific situation of structural breast imaging, taking into account the high complexity of the breast tissue. We only use data at a few microwave frequencies for detecting the tumors hidden in this complex structure. Three level set functions are employed for describing four different types of breast tissue, where each of these four regions is allowed to have a complicated topology and an interior structure which needs to be estimated from the data simultaneously with the region interfaces. The algorithm consists of several stages of increasing complexity. In each stage, more details about the anatomical structure of the breast interior are incorporated into the inversion model. The synthetic breast models used for creating simulated data are based on real MRI images of the breast and are therefore quite realistic. Our results demonstrate the potential and feasibility of the proposed level set technique for detecting, locating and characterizing a small tumor in its early stage of development embedded in such a realistic breast model. Both the data acquisition simulation and the inversion are carried out in 2D.

  6. Transport and diffusion of material quantities on propagating interfaces via level set methods

    CERN Document Server

    Adalsteinsson, D

    2003-01-01

    We develop theory and numerical algorithms to apply level set methods to problems involving the transport and diffusion of material quantities in a level set framework. Level set methods are computational techniques for tracking moving interfaces; they work by embedding the propagating interface as the zero level set of a higher dimensional function, and then approximate the solution of the resulting initial value partial differential equation using upwind finite difference schemes. The traditional level set method works in the trace space of the evolving interface, and hence disregards any parameterization in the interface description. Consequently, material quantities on the interface which themselves are transported under the interface motion are not easily handled in this framework. We develop model equations and algorithmic techniques to extend the level set method to include these problems. We demonstrate the accuracy of our approach through a series of test examples and convergence studies.
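
    The difficulty the abstract describes can be illustrated in one dimension with a hedged sketch (ours, not the authors' scheme): advect a level-set function and a material quantity with the same first-order upwind scheme, and check that the value the quantity carries at the zero level set is preserved under the interface motion:

```python
import numpy as np

# Illustrative 1-D sketch (ours, not the authors' scheme): advect a level-set
# function phi and a material quantity G with the same first-order upwind
# scheme, so the value G carries at the zero level set is preserved.

def upwind_advect(f, u, dx, dt, steps):
    for _ in range(steps):
        if u > 0:
            f = f - u * dt / dx * (f - np.roll(f, 1))
        else:
            f = f - u * dt / dx * (np.roll(f, -1) - f)
    return f

n = 400
x = np.linspace(0.0, 4.0, n, endpoint=False)
dx = x[1] - x[0]
phi = x - 1.0             # interface (zero level set) at x = 1
G = np.sin(x)             # material quantity defined on and near the interface

u, dt, steps = 1.0, 0.004, 250   # total displacement u * dt * steps = 1.0
phi_t = upwind_advect(phi, u, dx, dt, steps)
G_t = upwind_advect(G, u, dx, dt, steps)
```

    The interface moves from x = 1 to x = 2, and the transported value there stays close to sin(1); in higher dimensions, keeping surface quantities consistent with a front that is only known implicitly is exactly the problem the paper addresses.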

  7. Transport and diffusion of material quantities on propagating interfaces via level set methods

    International Nuclear Information System (INIS)

    Adalsteinsson, David; Sethian, J.A.

    2003-01-01

    We develop theory and numerical algorithms to apply level set methods to problems involving the transport and diffusion of material quantities in a level set framework. Level set methods are computational techniques for tracking moving interfaces; they work by embedding the propagating interface as the zero level set of a higher dimensional function, and then approximate the solution of the resulting initial value partial differential equation using upwind finite difference schemes. The traditional level set method works in the trace space of the evolving interface, and hence disregards any parameterization in the interface description. Consequently, material quantities on the interface which themselves are transported under the interface motion are not easily handled in this framework. We develop model equations and algorithmic techniques to extend the level set method to include these problems. We demonstrate the accuracy of our approach through a series of test examples and convergence studies.

  8. Demons versus level-set motion registration for coronary 18F-sodium fluoride PET

    Science.gov (United States)

    Rubeaux, Mathieu; Joshi, Nikhil; Dweck, Marc R.; Fletcher, Alison; Motwani, Manish; Thomson, Louise E.; Germano, Guido; Dey, Damini; Berman, Daniel S.; Newby, David E.; Slomka, Piotr J.

    2016-03-01

    Ruptured coronary atherosclerotic plaques commonly cause acute myocardial infarction. It has been recently shown that active microcalcification in the coronary arteries, one of the features that characterizes vulnerable plaques at risk of rupture, can be imaged using cardiac gated 18F-sodium fluoride (18F-NaF) PET. We have shown in previous work that a motion correction technique applied to cardiac-gated 18F-NaF PET images can enhance image quality and improve uptake estimates. In this study, we further investigated the applicability of different algorithms for registration of the coronary artery PET images. In particular, we aimed to compare demons vs. level-set nonlinear registration techniques applied for the correction of cardiac motion in coronary 18F-NaF PET. To this end, fifteen patients underwent 18F-NaF PET and prospective coronary CT angiography (CCTA). PET data were reconstructed in 10 ECG gated bins; subsequently these gated bins were registered using demons and level-set methods guided by the extracted coronary arteries from CCTA, to eliminate the effect of cardiac motion on PET images. Noise levels, target-to-background ratios (TBR) and global motion were compared to assess image quality. Compared to the reference standard of using only the diastolic PET image (25% of the counts from PET acquisition), cardiac motion registration using either level-set or demons techniques almost halved image noise due to the use of counts from the full PET acquisition and increased the TBR difference between 18F-NaF positive and negative lesions. The demons method produces smoother deformation fields, exhibiting no singularities (which reflects how physically plausible the registration deformation is), as compared to the level-set method, which presents between 4 and 8% of singularities, depending on the coronary artery considered. In conclusion, the demons method produces smoother motion fields as compared to the level-set method, with a motion that is physiologically…

  9. Fuzzy Bi-level Decision-Making Techniques: A Survey

    Directory of Open Access Journals (Sweden)

    Guangquan Zhang

    2016-04-01

    Bi-level decision-making techniques aim to deal with decentralized management problems that feature interactive decision entities distributed throughout a bi-level hierarchy. A challenge in handling bi-level decision problems is that various uncertainties naturally appear in the decision-making process. Significant efforts have been devoted to showing that fuzzy set techniques can effectively deal with uncertainty in bi-level decision-making, known as fuzzy bi-level decision-making techniques, and researchers have successfully gained experience in this area. It is thus vital that an instructive review of current trends in this area be conducted, covering not only the theoretical research but also the practical developments. This paper systematically reviews up-to-date fuzzy bi-level decision-making techniques, including models, approaches, algorithms and systems. It also clusters related technique developments into four main categories: basic fuzzy bi-level decision-making, fuzzy bi-level decision-making with multiple optima, fuzzy random bi-level decision-making, and the applications of bi-level decision-making techniques in different domains. By providing state-of-the-art knowledge, this survey paper will directly support researchers and practitioners in their understanding of developments in theoretical research results and applications in relation to fuzzy bi-level decision-making techniques.
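
    To make the nested leader-follower structure concrete, here is a crisp (non-fuzzy) toy bi-level problem solved by brute-force enumeration; the payoff functions and decision grids are invented purely for illustration and are not from the survey:

```python
# Crisp toy bi-level problem (invented for illustration; the survey itself
# covers fuzzy variants): the leader picks x, the follower reacts with the
# y maximizing its own payoff, and the leader anticipates that reaction.

def follower_best_response(x, ys):
    # follower's payoff y * (3 - 2x) is an assumed illustrative form
    return max(ys, key=lambda y: y * (3 - 2 * x))

def solve_bilevel(xs, ys):
    best = None
    for x in xs:
        y = follower_best_response(x, ys)
        leader_payoff = x + 2 * y - 0.5 * x * y   # assumed leader objective
        if best is None or leader_payoff > best[2]:
            best = (x, y, leader_payoff)
    return best

xs = [0, 1, 2, 3]   # leader's decision grid
ys = [0, 1, 2]      # follower's decision grid
x_opt, y_opt, payoff = solve_bilevel(xs, ys)
```

    Fuzzy bi-level models replace these crisp payoffs with fuzzy sets, but the hierarchical structure, in which the lower-level optimum constrains the upper-level choice, is the same.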

  10. An accurate conservative level set/ghost fluid method for simulating turbulent atomization

    International Nuclear Information System (INIS)

    Desjardins, Olivier; Moureau, Vincent; Pitsch, Heinz

    2008-01-01

    This paper presents a novel methodology for simulating incompressible two-phase flows by combining an improved version of the conservative level set technique introduced in [E. Olsson, G. Kreiss, A conservative level set method for two phase flow, J. Comput. Phys. 210 (2005) 225-246] with a ghost fluid approach. By employing a hyperbolic tangent level set function that is transported and re-initialized using fully conservative numerical schemes, mass conservation issues that are known to affect level set methods are greatly reduced. In order to improve the accuracy of the conservative level set method, high order numerical schemes are used. The overall robustness of the numerical approach is increased by computing the interface normals from a signed distance function reconstructed from the hyperbolic tangent level set by a fast marching method. The convergence of the curvature calculation is ensured by using a least squares reconstruction. The ghost fluid technique provides a way of handling the interfacial forces and large density jumps associated with two-phase flows with good accuracy, while avoiding artificial spreading of the interface. Since the proposed approach relies on partial differential equations, its implementation is straightforward in all coordinate systems, and it benefits from high parallel efficiency. The robustness and efficiency of the approach are further improved by using implicit schemes for the interface transport and re-initialization equations, as well as for the momentum solver. The performance of the method is assessed through both classical level set transport tests and simple two-phase flow examples including topology changes. It is then applied to simulate turbulent atomization of a liquid Diesel jet at Re=3000. The conservation errors associated with the accurate conservative level set technique are shown to remain small even for this complex case.
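
    The hyperbolic-tangent representation at the heart of the conservative level-set technique can be sketched in a few lines; the grid size, interface location, and thickness parameter eps below are illustrative choices of ours, not values from the paper:

```python
import numpy as np

# Sketch (ours) of the hyperbolic-tangent profile used by the conservative
# level-set technique of Olsson and Kreiss:
#   psi = 0.5 * (tanh(phi / (2 * eps)) + 1),
# whose 0.5 isocontour coincides with the zero level set of the signed
# distance phi. Grid, interface location, and eps are illustrative.

n = 200
x = np.linspace(-1.0, 1.0, n)
phi = x - 0.25                   # signed distance to an interface at x = 0.25
eps = 2.0 * (x[1] - x[0])        # profile thickness tied to the mesh spacing
psi = 0.5 * (np.tanh(phi / (2.0 * eps)) + 1.0)

# psi stays in [0, 1]; its integral approximates the volume fraction of the
# psi = 1 phase, which is the quantity conservative transport preserves.
interface = float(np.interp(0.5, psi, x))
volume = float(psi.sum() * (x[1] - x[0]))
```

    Because psi is bounded and its integral is a volume fraction, transporting it with conservative schemes controls the mass loss that plagues the standard signed-distance formulation.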

  11. Exploring the level sets of quantum control landscapes

    International Nuclear Information System (INIS)

    Rothman, Adam; Ho, Tak-San; Rabitz, Herschel

    2006-01-01

    A quantum control landscape is defined by the value of a physical observable as a functional of the time-dependent control field E(t) for a given quantum-mechanical system. Level sets through this landscape are prescribed by a particular value of the target observable at the final dynamical time T, regardless of the intervening dynamics. We present a technique for exploring a landscape level set, where a scalar variable s is introduced to characterize trajectories along these level sets. The control fields E(s,t) accomplishing this exploration (i.e., that produce the same value of the target observable for a given system) are determined by solving a differential equation over s in conjunction with the time-dependent Schroedinger equation. There is full freedom to traverse a level set, and a particular trajectory is realized by making an a priori choice for a continuous function f(s,t) that appears in the differential equation for the control field. The continuous function f(s,t) can assume an arbitrary form, and thus a level set generally contains a family of controls, where each control takes the quantum system to the same final target value, but produces a distinct control mechanism. In addition, although the observable value remains invariant over the level set, other dynamical properties (e.g., the degree of robustness to control noise) are not specifically preserved and can vary greatly. Examples are presented to illustrate the continuous nature of level-set controls and their associated induced dynamical features, including continuously morphing mechanisms for population control in model quantum systems.

  12. A Memory and Computation Efficient Sparse Level-Set Method

    NARCIS (Netherlands)

    Laan, Wladimir J. van der; Jalba, Andrei C.; Roerdink, Jos B.T.M.

    Since its introduction, the level set method has become the favorite technique for capturing and tracking moving interfaces, and found applications in a wide variety of scientific fields. In this paper we present efficient data structures and algorithms for tracking dynamic interfaces through the…

  13. Reconstruction of thin electromagnetic inclusions by a level-set method

    International Nuclear Information System (INIS)

    Park, Won-Kwang; Lesselier, Dominique

    2009-01-01

    In this contribution, we consider a technique of electromagnetic imaging (at a single, non-zero frequency) which uses the level-set evolution method for reconstructing a thin inclusion (possibly made of disconnected parts) with either dielectric or magnetic contrast with respect to the embedding homogeneous medium. Emphasis is on the proof of concept, the scattering problem at hand being so far based on a two-dimensional scalar model. To do so, two level-set functions are employed; the first one describes location and shape, and the other one describes connectivity and length. Speeds of evolution of the level-set functions are calculated via the introduction of Fréchet derivatives of a least-square cost functional. Several numerical experiments on both noiseless and noisy data illustrate how the proposed method behaves.

  14. Multi-phase flow monitoring with electrical impedance tomography using level set based method

    International Nuclear Information System (INIS)

    Liu, Dong; Khambampati, Anil Kumar; Kim, Sin; Kim, Kyung Youn

    2015-01-01

    Highlights:
    • LSM has been used for shape reconstruction to monitor multi-phase flow using EIT.
    • The multi-phase level set model for conductivity is represented by two level set functions.
    • LSM handles topological merging and breaking naturally during the evolution process.
    • To reduce the computational time, a narrowband technique was applied.
    • Use of the narrowband and optimization approach results in an efficient and fast method.

    Abstract: In this paper, a level set-based reconstruction scheme is applied to multi-phase flow monitoring using electrical impedance tomography (EIT). The proposed scheme involves applying a narrowband level set method to solve the inverse problem of finding the interface between regions having different conductivity values. The multi-phase level set model for the conductivity distribution inside the domain is represented by two level set functions. The key principle of the level set-based method is to implicitly represent the shape of the interface as the zero level set of a higher dimensional function and then solve a set of partial differential equations. The level set-based scheme handles topological merging and breaking naturally during the evolution process. It also offers several advantages compared to the traditional pixel-based approach. The level set-based method for multi-phase flow is tested with numerical and experimental data. It is found that the level set-based method has better reconstruction performance than the pixel-based method.
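
    A minimal sketch of the two-function multi-phase representation described in the abstract: the sign pattern of (phi1, phi2) partitions the domain into up to four regions, each assigned its own conductivity. The shapes and conductivity values below are ours, chosen only for illustration:

```python
import numpy as np

# Sketch (ours) of the two-function multi-phase ("color") level-set model:
# the sign pattern of (phi1, phi2) encodes up to four regions, each with
# its own conductivity value. Shapes and conductivities are illustrative.

n = 64
x = np.linspace(-1.0, 1.0, n)
X, Y = np.meshgrid(x, x, indexing="ij")
phi1 = np.sqrt(X**2 + Y**2) - 0.6        # inside/outside a circle
phi2 = X                                 # left/right half-plane

sigma = np.empty((n, n))
sigma[(phi1 < 0) & (phi2 < 0)] = 1.0     # region 1: inside circle, left
sigma[(phi1 < 0) & (phi2 >= 0)] = 2.0    # region 2: inside circle, right
sigma[(phi1 >= 0) & (phi2 < 0)] = 3.0    # region 3: outside circle, left
sigma[(phi1 >= 0) & (phi2 >= 0)] = 4.0   # region 4: outside circle, right
```

    Evolving phi1 and phi2 moves all four region interfaces at once, and topological merging or breaking requires no special handling, which is the advantage over pixel-based parameterizations the abstract points to.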

  15. Level set methods for inverse scattering—some recent developments

    International Nuclear Information System (INIS)

    Dorn, Oliver; Lesselier, Dominique

    2009-01-01

    We give an update on recent techniques which use a level set representation of shapes for solving inverse scattering problems, completing the exposition made in (Dorn and Lesselier 2006 Inverse Problems 22 R67) and (Dorn and Lesselier 2007 Deformable Models (New York: Springer) pp 61–90), and bringing it closer to the current state of the art.

  16. Out-of-Core Computations of High-Resolution Level Sets by Means of Code Transformation

    DEFF Research Database (Denmark)

    Christensen, Brian Bunch; Nielsen, Michael Bang; Museth, Ken

    2012-01-01

    We propose a storage efficient, fast and parallelizable out-of-core framework for streaming computations of high resolution level sets. The fundamental techniques are skewing and tiling transformations of streamed level set computations which allow for the combination of interface propagation, re… computations are now CPU bound and consequently the overall performance is unaffected by disk latency and bandwidth limitations. We demonstrate this with several benchmark tests that show sustained out-of-core throughputs close to that of in-core level set simulations.

  17. Mapping topographic structure in white matter pathways with level set trees.

    Directory of Open Access Journals (Sweden)

    Brian P Kent

    Fiber tractography on diffusion imaging data offers rich potential for describing white matter pathways in the human brain, but characterizing the spatial organization in these large and complex data sets remains a challenge. We show that level set trees, which provide a concise representation of the hierarchical mode structure of probability density functions, offer a statistically principled framework for visualizing and analyzing topography in fiber streamlines. Using diffusion spectrum imaging data collected on neurologically healthy controls (N = 30), we mapped white matter pathways from the cortex into the striatum using a deterministic tractography algorithm that estimates fiber bundles as dimensionless streamlines. Level set trees were used for interactive exploration of patterns in the endpoint distributions of the mapped fiber pathways and for an efficient segmentation of the pathways with empirical accuracy comparable to standard nonparametric clustering techniques. We show that level set trees can also be generalized to model pseudo-density functions in order to analyze a broader array of data types, including entire fiber streamlines. Finally, resampling methods show the reliability of the level set tree as a descriptive measure of topographic structure, illustrating its potential as a statistical descriptor in brain imaging analysis. These results highlight the broad applicability of level set trees for visualizing and analyzing high-dimensional data like fiber tractography output.
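
    The idea behind a level set tree can be demonstrated on a toy 1-D density (ours, not the fiber data): clusters at level lam are the connected components of the super-level set {x : f(x) >= lam}, and the tree records how the component count changes as lam rises:

```python
import numpy as np

# Toy 1-D demonstration (ours, not the fiber data) of the idea behind a
# level set tree: at level lam, clusters are the connected components of
# the super-level set {x : f(x) >= lam}; the tree records how components
# appear, split, and vanish as lam rises.

def superlevel_components(f, lam):
    """Count maximal runs of consecutive grid points with f >= lam."""
    above = f >= lam
    starts = above & ~np.roll(above, 1)   # a run starts where `above` turns on
    starts[0] = above[0]                  # no wrap-around at the left edge
    return int(starts.sum())

x = np.linspace(-4.0, 4.0, 801)
f = 0.6 * np.exp(-(x + 1.5)**2) + 0.4 * np.exp(-(x - 1.5)**2)  # two modes
```

    Scanning lam upward gives one component at 0.05 (below the valley between the modes), two at 0.2 (the modes separate), and one again at 0.5 (only the taller mode survives); this branching pattern is exactly what the level set tree encodes.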

  18. Level Set Approach to Anisotropic Wet Etching of Silicon

    Directory of Open Access Journals (Sweden)

    Branislav Radjenović

    2010-05-01

    In this paper a methodology for the three-dimensional (3D) modeling and simulation of the profile evolution during anisotropic wet etching of silicon based on the level set method is presented. Etching rate anisotropy in silicon is modeled taking into account full silicon symmetry properties, by means of an interpolation technique using experimentally obtained values for the etching rates along thirteen principal and high-index directions in KOH solutions. The resulting level set equations are solved using an open source implementation of the sparse field method (ITK library, developed in the medical image processing community), extended to the case of non-convex Hamiltonians. Simulation results for some interesting initial 3D shapes are shown, as well as some more practical examples illustrating anisotropic etching simulation in the presence of masks (simple square aperture mask, convex corner undercutting and convex corner compensation, formation of suspended structures). The obtained results show that the level set method can be used as an effective tool for wet etching process modeling, and that it is a viable alternative to the Cellular Automata method which now prevails in simulations of the wet etching process.

  19. Enteral Feeding Set Handling Techniques.

    Science.gov (United States)

    Lyman, Beth; Williams, Maria; Sollazzo, Janet; Hayden, Ashley; Hensley, Pam; Dai, Hongying; Roberts, Cristine

    2017-04-01

    Enteral nutrition therapy is common practice in pediatric clinical settings. Often patients will receive a pump-assisted bolus feeding over 30 minutes several times per day using the same enteral feeding set (EFS). This study aims to determine the safest and most efficacious way to handle the EFS between feedings. Three EFS handling techniques were compared through simulation for bacterial growth, nursing time, and supply costs: (1) rinsing the EFS with sterile water after each feeding, (2) refrigerating the EFS between feedings, and (3) using a ready-to-hang (RTH) product maintained at room temperature. Cultures were obtained at baseline, hour 12, and hour 21 of the 24-hour cycle. A time-in-motion analysis was conducted and reported in average number of seconds to complete each procedure. Supply costs were inventoried for 1 month, comparing actual usage to our estimated usage. Of 1080 cultures obtained, the overall bacterial growth rate was 8.7%. The rinse and refrigeration techniques displayed similar bacterial growth (11.4% vs 10.3%, P = .63). The RTH technique displayed the least bacterial growth of any method (4.4%, P = .002). The time analysis in minutes showed the rinse method was the most time-consuming (44.8 ± 2.7) vs refrigeration (35.8 ± 2.6) and RTH (31.08 ± 0.6). Refrigerating the EFS between uses is the next most efficacious method for handling the EFS between bolus feeds.

  20. Multi person detection and tracking based on hierarchical level-set method

    Science.gov (United States)

    Khraief, Chadia; Benzarti, Faouzi; Amiri, Hamid

    2018-04-01

    In this paper, we propose an efficient unsupervised method for multi-person tracking based on a hierarchical level-set approach. The proposed method uses both edge and region information in order to effectively detect objects. The persons are tracked in each frame of the sequence by minimizing an energy functional that combines color, texture and shape information. These features are enrolled in a covariance matrix as region descriptor. The present method is fully automated, without the need to manually specify the initial contour of the level set: it is based on combined person detection and background subtraction methods. The edge-based term is employed to maintain a stable evolution, guide the segmentation towards apparent boundaries and inhibit region fusion. The computational cost of the level set is reduced by using a narrow band technique. Experiments on challenging video sequences show the effectiveness of the proposed method.
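
    The narrow-band technique mentioned above is simple to sketch: restrict level-set updates to grid points within a band around the zero level set. The geometry and band width below are illustrative choices of ours:

```python
import numpy as np

# Sketch (ours) of the narrow-band technique: restrict level-set updates to
# grid points with |phi| < width around the zero level set, so per-iteration
# cost scales with the interface length rather than the image area.

n = 256
x = np.linspace(-1.0, 1.0, n)
X, Y = np.meshgrid(x, x, indexing="ij")
phi = np.sqrt(X**2 + Y**2) - 0.5    # signed distance to a circle
width = 3.0 * (x[1] - x[0])         # band half-width of three grid cells

band = np.abs(phi) < width          # only these points would be updated
fraction = float(band.mean())
```

    For this 256 x 256 grid the band covers only a few percent of the pixels, which is where the reported reduction in computational cost comes from; the band must be rebuilt periodically as the front moves.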

  1. Skull defect reconstruction based on a new hybrid level set.

    Science.gov (United States)

    Zhang, Ziqun; Zhang, Ran; Song, Zhijian

    2014-01-01

    Skull defect reconstruction is an important aspect of surgical repair. Historically, a skull defect prosthesis was created by the mirroring technique, surface fitting, or formed templates. These methods are not based on the anatomy of the individual patient's skull, and therefore the prosthesis cannot precisely correct the defect. This study presents a new hybrid level set model, taking into account both the global optimization region information and the local accuracy edge information, while avoiding re-initialization during the evolution of the level set function. Based on the new method, a skull defect was reconstructed, and the skull prosthesis was produced by rapid prototyping technology. This resulted in a skull defect prosthesis that matched the skull defect well, with excellent individual adaptation.

  2. Gradient augmented level set method for phase change simulations

    Science.gov (United States)

    Anumolu, Lakshman; Trujillo, Mario F.

    2018-01-01

    A numerical method for the simulation of two-phase flow with phase change based on the Gradient-Augmented-Level-set (GALS) strategy is presented. Sharp capturing of the vaporization process is enabled by: i) identification of the vapor-liquid interface, Γ (t), at the subgrid level, ii) discontinuous treatment of thermal physical properties (except for μ), and iii) enforcement of mass, momentum, and energy jump conditions, where the gradients of the dependent variables are obtained at Γ (t) and are consistent with their analytical expression, i.e. no local averaging is applied. Treatment of the jump in velocity and pressure at Γ (t) is achieved using the Ghost Fluid Method. The solution of the energy equation employs the sub-grid knowledge of Γ (t) to discretize the temperature Laplacian using second-order one-sided differences, i.e. the numerical stencil completely resides within each respective phase. To carefully evaluate the benefits or disadvantages of the GALS approach, the standard level set method is implemented and compared against the GALS predictions. The results show the expected trend that interface identification and transport are predicted noticeably better with GALS over the standard level set. This benefit carries over to the prediction of the Laplacian and temperature gradients in the neighborhood of the interface, which are directly linked to the calculation of the vaporization rate. However, when combining the calculation of interface transport and reinitialization with two-phase momentum and energy, the benefits of GALS are to some extent neutralized, and the causes for this behavior are identified and analyzed. Overall the additional computational costs associated with GALS are almost the same as those using the standard level set technique.
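
    A hedged 1-D illustration of the gradient-augmented idea (ours; the paper works in higher dimensions with full two-phase flow): storing both phi and its gradient at the nodes lets a cubic Hermite interpolant locate the interface inside a cell more accurately than linear interpolation from the nodal values alone:

```python
import numpy as np

# 1-D illustration (ours) of the gradient-augmented idea: with phi AND its
# gradient stored at the nodes, a cubic Hermite interpolant locates the
# interface inside a cell more accurately than linear interpolation.

def hermite(p0, p1, m0, m1, h, t):
    """Cubic Hermite interpolant on a cell of width h; t in [0, 1]."""
    h00 = 2 * t**3 - 3 * t**2 + 1
    h10 = t**3 - 2 * t**2 + t
    h01 = -2 * t**3 + 3 * t**2
    h11 = t**3 - t**2
    return h00 * p0 + h10 * h * m0 + h01 * p1 + h11 * h * m1

# phi(x) = sin(x) has its zero at x = pi inside the cell [3.0, 3.5]
x0, x1 = 3.0, 3.5
p0, p1 = np.sin(x0), np.sin(x1)    # nodal values
m0, m1 = np.cos(x0), np.cos(x1)    # nodal gradients (the "augmentation")

# bisection on the Hermite interpolant for the subgrid zero crossing
lo, hi = 0.0, 1.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if hermite(p0, p1, m0, m1, x1 - x0, mid) * hermite(p0, p1, m0, m1, x1 - x0, lo) <= 0:
        hi = mid
    else:
        lo = mid
root = x0 + 0.5 * (lo + hi) * (x1 - x0)
```

    The Hermite root lands within about 2e-4 of pi, roughly an order of magnitude closer than the linear-interpolation estimate from the same two nodes; this subgrid accuracy is what the abstract exploits when discretizing the temperature Laplacian near the interface.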

  3. INTEGRATED SFM TECHNIQUES USING DATA SET FROM GOOGLE EARTH 3D MODEL AND FROM STREET LEVEL

    Directory of Open Access Journals (Sweden)

    L. Inzerillo

    2017-08-01

    Structure from motion (SfM) is a widespread photogrammetric method that applies photogrammetric rules to build a 3D model from a collection of photographs. For some complex ancient buildings, such as cathedrals, theatres, or castles, the data set acquired from street level needs to be complemented with a UAV one in order to reconstruct the roof in 3D. Nevertheless, the use of UAVs is strongly limited by government rules. In recent years, Google Earth (GE) has been enriched with 3D models of sites around the earth. For this reason, it seemed convenient to test the potential offered by GE for extracting from it a data set that replaces the UAV function, completing the aerial building data set using screen images of its high-resolution 3D models. Users can take unlimited "aerial photos" of a scene while flying around in GE at any viewing angle and altitude. The challenge is to verify the metric reliability of the SfM model carried out with an integrated data set (the one from street level and the one from GE), aimed at replacing UAV use in an urban context. This model is called the integrated GE SfM model (i-GESfM). In this paper a case study is presented: the Cathedral of Palermo.

  4. Joint level-set and spatio-temporal motion detection for cell segmentation.

    Science.gov (United States)

    Boukari, Fatima; Makrogiannis, Sokratis

    2016-08-10

    Cell segmentation is a critical step for quantification and monitoring of cell cycle progression, cell migration, and growth control to investigate cellular immune response, embryonic development, tumorigenesis, and drug effects on live cells in time-lapse microscopy images. In this study, we propose a joint spatio-temporal diffusion and region-based level-set optimization approach for moving cell segmentation. Moving regions are initially detected in each set of three consecutive sequence images by numerically solving a system of coupled spatio-temporal partial differential equations. To standardize the intensities of each frame, we apply a histogram transformation approach that matches the pixel intensities of each processed frame to an intensity distribution model learned from all frames of the sequence during the training stage. After the spatio-temporal diffusion stage is completed, we compute the edge map by nonparametric density estimation using Parzen kernels. This process is followed by watershed-based segmentation and moving cell detection. We use this result as an initial level-set function to evolve the cell boundaries, refine the delineation, and optimize the final segmentation result. We applied this method to several datasets of fluorescence microscopy images with varying levels of difficulty with respect to cell density, resolution, contrast, and signal-to-noise ratio. We compared the results with those produced by Chan and Vese segmentation, a temporally linked level-set technique, and nonlinear diffusion-based segmentation. We validated all segmentation techniques against reference masks provided by the international Cell Tracking Challenge consortium. The proposed approach delineated cells with an average Dice similarity coefficient of 89% over a variety of simulated and real fluorescent image sequences. It yielded average improvements of 11% in segmentation accuracy compared to both the strictly spatial and the temporally linked Chan and Vese techniques.
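
The region-based level-set stage can be sketched with the core Chan and Vese update (a simplified, curvature-free version of ours on a synthetic image, not the authors' pipeline): each pixel is driven toward whichever region mean, inside or outside the contour, better matches its intensity.

```python
# Synthetic 16x16 image: a bright square (intensity 1.0) on a dark background.
H = W = 16
img = [[1.0 if 4 <= i < 12 and 4 <= j < 12 else 0.0 for j in range(W)]
       for i in range(H)]

# Initial level set: positive (inside) on a region overlapping the square.
phi = [[1.0 if i < 12 and j < 12 else -1.0 for j in range(W)] for i in range(H)]

for _ in range(5):
    inside = [img[i][j] for i in range(H) for j in range(W) if phi[i][j] > 0]
    outside = [img[i][j] for i in range(H) for j in range(W) if phi[i][j] <= 0]
    c1 = sum(inside) / len(inside)      # mean intensity inside the contour
    c2 = sum(outside) / len(outside)    # mean intensity outside
    # Pointwise Chan-Vese force (length/curvature term omitted for brevity):
    # positive where the pixel fits the inside mean better than the outside mean.
    phi = [[(img[i][j] - c2) ** 2 - (img[i][j] - c1) ** 2 for j in range(W)]
           for i in range(H)]

mask = [[phi[i][j] > 0 for j in range(W)] for i in range(H)]
print(all(mask[i][j] == (img[i][j] == 1.0) for i in range(H) for j in range(W)))
```

On this noise-free two-level image the update converges to the exact bright-square mask; the curvature regularization dropped here matters on real, noisy microscopy data.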

  5. Variational Level Set Method for Two-Stage Image Segmentation Based on Morphological Gradients

    Directory of Open Access Journals (Sweden)

    Zemin Ren

    2014-01-01

    Full Text Available We use a variational level set method and transition region extraction techniques to perform image segmentation. The proposed scheme proceeds in two steps. We first develop a novel algorithm to extract the transition region based on the morphological gradient. We then integrate the transition region into a variational level set framework and develop a novel geometric active contour model, which includes an external energy based on the transition region and a fractional-order edge indicator function. The external energy is used to drive the zero level set toward the desired image features, such as object boundaries. Due to this external energy, the proposed model allows for more flexible initialization. The fractional-order edge indicator function is incorporated into the length regularization term to diminish the influence of noise. Moreover, an internal energy is added to the proposed model to penalize the deviation of the level set function from a signed distance function. The resulting evolution of the level set function is the gradient flow that minimizes the overall energy functional. The proposed model has been applied to both synthetic and real images with promising results.
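
The first step, transition region extraction via the morphological gradient (dilation minus erosion), can be illustrated on a 1D signal; this is a toy stand-in for the paper's 2D extraction, with an invented threshold.

```python
# Morphological gradient with a 3-sample structuring element: at each position,
# the local maximum (grey dilation) minus the local minimum (grey erosion).
# The transition region is where this gradient is large, i.e. on the ramps.
signal = [0, 0, 0, 1, 5, 9, 10, 10, 10, 9, 5, 1, 0, 0]

def morph_gradient(s):
    g = []
    for i in range(len(s)):
        window = s[max(0, i - 1): i + 2]
        g.append(max(window) - min(window))     # dilation - erosion
    return g

grad = morph_gradient(signal)
transition = [i for i, v in enumerate(grad) if v >= 4]   # illustrative threshold
print(grad)
print(transition)    # indices of the rising and falling ramps
```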

  6. Hybrid approach for detection of dental caries based on the methods FCM and level sets

    Science.gov (United States)

    Chaabene, Marwa; Ben Ali, Ramzi; Ejbali, Ridha; Zaied, Mourad

    2017-03-01

    This paper presents a new technique for the detection of dental caries, a bacterial disease that destroys tooth structure. In our approach, we have developed a new segmentation method that combines the advantages of the fuzzy C-means (FCM) algorithm and the level set method. The results obtained by the FCM algorithm are used by the level set algorithm to reduce the influence of noise on each of these algorithms, to facilitate level set manipulation, and to lead to more robust segmentation. Sensitivity and specificity results confirm the effectiveness of the proposed method for caries detection.
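
A minimal fuzzy C-means sketch on 1D intensities with two clusters, the kind of pre-clustering whose output could seed a level set stage (a generic textbook FCM, not the paper's implementation):

```python
# Fuzzy C-means with fuzzifier m = 2 on a bimodal intensity list.
data = [0.05, 0.10, 0.15, 0.85, 0.90, 0.95]
centers = [0.3, 0.7]                     # initial guesses
m = 2.0                                  # fuzzifier

for _ in range(30):
    # membership of point k in cluster i: u_ik = 1 / sum_j (d_ik / d_jk)^(2/(m-1))
    u = []
    for xk in data:
        d = [abs(xk - c) + 1e-12 for c in centers]   # guard against d = 0
        u.append([1.0 / sum((d[i] / d[j]) ** (2.0 / (m - 1.0)) for j in range(2))
                  for i in range(2)])
    # update each center as the membership-weighted mean of the data
    centers = [sum(u[k][i] ** m * data[k] for k in range(len(data))) /
               sum(u[k][i] ** m for k in range(len(data)))
               for i in range(2)]

print([round(c, 2) for c in centers])    # near the two intensity modes
```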

  7. Novel gene sets improve set-level classification of prokaryotic gene expression data.

    Science.gov (United States)

    Holec, Matěj; Kuželka, Ondřej; Železný, Filip

    2015-10-28

    Set-level classification of gene expression data has received significant attention recently. In this setting, high-dimensional vectors of features corresponding to genes are converted into lower-dimensional vectors of features corresponding to biologically interpretable gene sets. The dimensionality reduction brings the promise of a decreased risk of overfitting, potentially resulting in improved accuracy of the learned classifiers. However, recent empirical research has not confirmed this expectation. Here we hypothesize that the reported unfavorable classification results in the set-level framework were due to the adoption of unsuitable gene sets defined typically on the basis of the Gene Ontology and the KEGG database of metabolic networks. We explore an alternative approach to defining gene sets, based on regulatory interactions, which we expect to collect genes with more correlated expression. We hypothesize that such more correlated gene sets will enable learning more accurate classifiers. We define two families of gene sets using information on regulatory interactions, and evaluate them on phenotype-classification tasks using public prokaryotic gene expression data sets. From each of the two gene-set families, we first select the best-performing subtype. The two selected subtypes are then evaluated on independent (testing) data sets against state-of-the-art gene sets and against the conventional gene-level approach. The novel gene sets are indeed more correlated than the conventional ones and lead to significantly more accurate classifiers. In summary, novel gene sets defined on the basis of regulatory interactions improve set-level classification of gene expression data. The experimental scripts and other material needed to reproduce the experiments are available at http://ida.felk.cvut.cz/novelgenesets.tar.gz.
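
The set-level feature construction described above can be sketched as follows (gene and set names are invented for illustration; averaging member genes is one common aggregation choice, not necessarily the paper's):

```python
# Reduce per-gene expression vectors to one feature per gene set by averaging
# the expression of the set's member genes across each sample.
expr = {"g1": [1.0, 2.0], "g2": [3.0, 4.0], "g3": [5.0, 6.0]}   # gene -> per-sample values
gene_sets = {"regulonA": ["g1", "g2"], "regulonB": ["g2", "g3"]}

def set_level_features(expr, gene_sets):
    n_samples = len(next(iter(expr.values())))
    feats = {}
    for name, genes in gene_sets.items():
        feats[name] = [sum(expr[g][s] for g in genes) / len(genes)
                       for s in range(n_samples)]
    return feats

print(set_level_features(expr, gene_sets))
# regulonA -> [2.0, 3.0], regulonB -> [4.0, 5.0]
```

The classifier is then trained on the lower-dimensional set-level vectors instead of the raw per-gene vectors.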

  8. Segmentation of teeth in CT volumetric dataset by panoramic projection and variational level set

    Energy Technology Data Exchange (ETDEWEB)

    Hosntalab, Mohammad [Islamic Azad University, Faculty of Engineering, Science and Research Branch, Tehran (Iran); Aghaeizadeh Zoroofi, Reza [University of Tehran, Control and Intelligent Processing Center of Excellence, School of Electrical and Computer Engineering, College of Engineering, Tehran (Iran); Abbaspour Tehrani-Fard, Ali [Islamic Azad University, Faculty of Engineering, Science and Research Branch, Tehran (Iran); Sharif University of Technology, Department of Electrical Engineering, Tehran (Iran); Shirani, Gholamreza [Faculty of Dentistry Medical Science of Tehran University, Oral and Maxillofacial Surgery Department, Tehran (Iran)

    2008-09-15

    Quantification of teeth is of clinical importance for various computer-assisted procedures such as dental implant, orthodontic planning, and face, jaw, and cosmetic surgeries. In this regard, segmentation is a major step. In this paper, we propose a method for the segmentation of teeth in volumetric computed tomography (CT) data using panoramic re-sampling of the dataset in the coronal view and a variational level set. The proposed method consists of five steps: first, we extract a mask in the CT images using Otsu thresholding. Second, the teeth are segmented from other bony tissues by utilizing anatomical knowledge of the teeth in the jaws. Third, the arcs of the upper and lower jaws are estimated and the dataset is re-sampled panoramically. Fourth, separation of the upper and lower jaws and initial segmentation of the teeth are performed by employing the horizontal and vertical projections of the panoramic dataset, respectively; based on the above procedures, an initial mask for each tooth is obtained. Finally, we utilize the initial mask of the teeth and apply a variational level set to refine the initial teeth boundaries to final contours. The proposed algorithm was evaluated on 30 multi-slice CT datasets comprising 3,600 images. Experimental results reveal the effectiveness of the proposed method. In the proposed algorithm, the variational level set technique was utilized to trace the contours of the teeth. Since this technique is based on the characteristics of the overall region of the tooth image, it is possible to extract a very smooth and accurate tooth contour. On the available datasets, the proposed technique was successful in teeth segmentation compared to previous techniques. (orig.)
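
The first step, Otsu thresholding, can be sketched on a toy 8-bin histogram (real CT data would use a full intensity histogram): the threshold is chosen to maximize the between-class variance.

```python
# Otsu's method: for each candidate threshold t, split the histogram into
# classes [0, t) and [t, end), and pick the t maximizing the between-class
# variance w0 * w1 * (mu0 - mu1)^2.
def otsu_threshold(hist):
    total = sum(hist)
    best_t, best_var = 0, -1.0
    for t in range(1, len(hist)):
        w0 = sum(hist[:t])
        w1 = total - w0
        if w0 == 0 or w1 == 0:
            continue
        mu0 = sum(i * h for i, h in enumerate(hist[:t])) / w0
        mu1 = sum(i * h for i, h in enumerate(hist[t:], start=t)) / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# Bimodal toy histogram: soft tissue at low levels 0-3, bone/teeth at 5-7.
hist = [40, 30, 10, 5, 0, 8, 25, 35]
print(otsu_threshold(hist))   # threshold falls in the valley between the modes
```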

  9. Segmentation of teeth in CT volumetric dataset by panoramic projection and variational level set

    International Nuclear Information System (INIS)

    Hosntalab, Mohammad; Aghaeizadeh Zoroofi, Reza; Abbaspour Tehrani-Fard, Ali; Shirani, Gholamreza

    2008-01-01

    Quantification of teeth is of clinical importance for various computer-assisted procedures such as dental implant, orthodontic planning, and face, jaw, and cosmetic surgeries. In this regard, segmentation is a major step. In this paper, we propose a method for the segmentation of teeth in volumetric computed tomography (CT) data using panoramic re-sampling of the dataset in the coronal view and a variational level set. The proposed method consists of five steps: first, we extract a mask in the CT images using Otsu thresholding. Second, the teeth are segmented from other bony tissues by utilizing anatomical knowledge of the teeth in the jaws. Third, the arcs of the upper and lower jaws are estimated and the dataset is re-sampled panoramically. Fourth, separation of the upper and lower jaws and initial segmentation of the teeth are performed by employing the horizontal and vertical projections of the panoramic dataset, respectively; based on the above procedures, an initial mask for each tooth is obtained. Finally, we utilize the initial mask of the teeth and apply a variational level set to refine the initial teeth boundaries to final contours. The proposed algorithm was evaluated on 30 multi-slice CT datasets comprising 3,600 images. Experimental results reveal the effectiveness of the proposed method. In the proposed algorithm, the variational level set technique was utilized to trace the contours of the teeth. Since this technique is based on the characteristics of the overall region of the tooth image, it is possible to extract a very smooth and accurate tooth contour. On the available datasets, the proposed technique was successful in teeth segmentation compared to previous techniques. (orig.)

  10. Setting analyst: A practical harvest planning technique

    Science.gov (United States)

    Olivier R.M. Halleux; W. Dale Greene

    2001-01-01

    Setting Analyst is an ArcView extension that facilitates practical harvest planning for ground-based systems. By modeling the travel patterns of ground-based machines, it compares different harvesting settings based on projected average skidding distance, logging costs, and site disturbance levels. Setting Analyst uses information commonly available to consulting...

  11. A new level set model for multimaterial flows

    Energy Technology Data Exchange (ETDEWEB)

    Starinshak, David P. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Karni, Smadar [Univ. of Michigan, Ann Arbor, MI (United States). Dept. of Mathematics; Roe, Philip L. [Univ. of Michigan, Ann Arbor, MI (United States). Dept. of AerospaceEngineering

    2014-01-08

    We present a new level set model for representing multimaterial flows in multiple space dimensions. Instead of associating a level set function with a specific fluid material, the function is associated with a pair of materials and the interface that separates them. A voting algorithm collects sign information from all level sets and determines material designations. M(M−1)/2 level set functions might be needed to represent a general M-material configuration; problems of practical interest use far fewer functions, since not all pairs of materials share an interface. The new model is less prone to producing indeterminate material states, i.e. regions claimed by more than one material (overlaps) or no material at all (vacuums). It outperforms existing material-based level set models without the need for reinitialization schemes, thereby avoiding additional computational costs and preventing excessive numerical diffusion.
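
The pairwise voting idea can be shown in a toy 1D setting (our construction, not the authors' code): each function φ_{mn} is negative on material m's side of the m-n interface and positive on material n's side, and a point is assigned the material that collects the most pairwise votes.

```python
# Three materials A, B, C on a line: A on x < 0.3, B on 0.3 < x < 0.7,
# C on x > 0.7. One level set per material pair; signs cast votes.
def material_at(point, pairwise):
    votes = {}
    for (m, n), phi in pairwise.items():
        winner = m if phi(point) < 0 else n
        votes[winner] = votes.get(winner, 0) + 1
    return max(votes, key=votes.get)

pairwise = {
    ("A", "B"): lambda x: x - 0.3,
    ("B", "C"): lambda x: x - 0.7,
    ("A", "C"): lambda x: x - 0.5,   # A/C never actually share an interface here
}
print([material_at(x, pairwise) for x in (0.1, 0.5, 0.9)])   # → ['A', 'B', 'C']
```

Note that at x = 0.5 the spurious A/C function votes for C, but the majority (two votes for B) still yields the correct designation, which is the robustness the voting scheme provides.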

  12. Online monitoring of oil film using electrical capacitance tomography and level set method

    International Nuclear Information System (INIS)

    Xue, Q.; Ma, M.; Sun, B. Y.; Cui, Z. Q.; Wang, H. X.

    2015-01-01

    In oil-air lubrication systems, electrical capacitance tomography (ECT) provides a promising way of monitoring the oil film in pipelines by reconstructing cross-sectional oil distributions in real time. For small-diameter pipes and thin oil films, however, the film thickness is hard to observe visually, since the oil-air interface is not obvious in the reconstructed images. Moreover, artifacts in the reconstructions seriously impair the effectiveness of image segmentation techniques such as the level set method, and the standard level set method is unsuitable for online monitoring because of its low computational speed. To address these problems, a modified level set method is developed: a distance-regularized level set evolution formulation is extended to image two-phase flow online using an ECT system; a narrowband image filter is defined to eliminate the influence of artifacts; and, exploiting the continuity of the oil distribution over time, the oil-air interface detected in one frame is used as the initial contour for the subsequent frame. The propagation from the initial contour to the boundary is thus greatly accelerated, making real-time tracking possible. To verify the feasibility of the proposed method, an oil-air lubrication facility with a 4 mm inner-diameter pipe was measured in normal operation using an 8-electrode ECT system. Both simulation and experiment results indicate that the modified level set method is capable of visualizing the oil-air interface accurately online.
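
The warm-start idea, initializing each frame's contour from the previous frame's result, can be illustrated with a deliberately simplified toy model (a scalar "contour position" relaxing toward a target, not the actual level set evolution): when successive frames change slowly, warm starts converge in far fewer steps.

```python
# Relax a scalar contour position toward a per-frame target and count the
# iterations to convergence, cold-started vs warm-started from the last frame.
def evolve(c0, target, rate=0.2, tol=1e-3):
    c, steps = c0, 0
    while abs(c - target) > tol:
        c += rate * (target - c)
        steps += 1
    return c, steps

targets = [0.70, 0.71, 0.72]            # slowly drifting oil-air interface

cold = [evolve(0.0, t)[1] for t in targets]          # fixed initial contour
prev, warm = 0.0, []
for t in targets:                                    # previous result as init
    prev, s = evolve(prev, t)
    warm.append(s)

print(cold, warm)   # warm start needs far fewer steps from the second frame on
```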

  13. A hybrid interface tracking - level set technique for multiphase flow with soluble surfactant

    Science.gov (United States)

    Shin, Seungwon; Chergui, Jalel; Juric, Damir; Kahouadji, Lyes; Matar, Omar K.; Craster, Richard V.

    2018-04-01

    A formulation for soluble surfactant transport in multiphase flows recently presented by Muradoglu and Tryggvason (JCP 274 (2014) 737-757) [17] is adapted to the context of the Level Contour Reconstruction Method, LCRM, (Shin et al. IJNMF 60 (2009) 753-778, [8]) which is a hybrid method that combines the advantages of the Front-tracking and Level Set methods. Particularly close attention is paid to the formulation and numerical implementation of the surface gradients of surfactant concentration and surface tension. Various benchmark tests are performed to demonstrate the accuracy of different elements of the algorithm. To verify surfactant mass conservation, values for surfactant diffusion along the interface are compared with the exact solution for the problem of uniform expansion of a sphere. The numerical implementation of the discontinuous boundary condition for the source term in the bulk concentration is compared with the approximate solution. Surface tension forces are tested for Marangoni drop translation. Our numerical results for drop deformation in simple shear are compared with experiments and results from previous simulations. All benchmarking tests compare well with existing data thus providing confidence that the adapted LCRM formulation for surfactant advection and diffusion is accurate and effective in three-dimensional multiphase flows with a structured mesh. We also demonstrate that this approach applies easily to massively parallel simulations.

  14. Special set-up and treatment techniques for the radiotherapy of pediatric malignancies

    International Nuclear Information System (INIS)

    Martinez, A.; Donaldson, S.S.; Bagshaw, M.A.

    1977-01-01

    The prevention of serious long-term complications of treatment has become as important a consideration in the therapy of children with malignant disease as the goal of tumor control. This balance requires meticulous treatment planning and attention to treatment preparation and immobilization techniques when radiotherapy is administered to children. Accurate localization of the tumor volume and daily reproducibility are essential for delivering precise irradiation. Four special set-up and treatment techniques with specific usefulness in radiotherapy for pediatric malignancies are defined and illustrated with the aid of clinical cases: the three-point set-up, the split beam technique, the isocentric technique, and the shrinking field technique.

  15. Level-Set Topology Optimization with Aeroelastic Constraints

    Science.gov (United States)

    Dunning, Peter D.; Stanford, Bret K.; Kim, H. Alicia

    2015-01-01

    Level-set topology optimization is used to design a wing considering skin buckling under static aeroelastic trim loading, as well as dynamic aeroelastic stability (flutter). The level-set function is defined over the entire 3D volume of a transport aircraft wing box, so the approach is not limited by any predefined structure and can explore novel configurations. The Sequential Linear Programming (SLP) level-set method is used to solve the constrained optimization problems. The proposed method is demonstrated on three problems with mass, linear buckling, and flutter objectives and/or constraints. A constraint aggregation method is used to handle multiple buckling constraints in the wing skins, and a continuous flutter constraint formulation is used to handle difficulties arising from discontinuities in the design space caused by a switching of the critical flutter mode.
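
One ingredient named in the abstract, constraint aggregation, can be sketched with the Kreisselmeier-Steinhauser (KS) function, a standard choice for folding many buckling constraints g_i ≤ 0 into a single smooth constraint (whether the paper uses KS specifically is an assumption here):

```python
# KS aggregate: a smooth, conservative upper bound on max(g) that becomes
# tighter as the aggregation parameter rho grows.
import math

def ks_aggregate(g, rho=50.0):
    gmax = max(g)
    # subtract gmax before exponentiating for numerical stability
    return gmax + math.log(sum(math.exp(rho * (gi - gmax)) for gi in g)) / rho

g = [-0.5, -0.1, -0.3]               # three satisfied buckling constraints
ks = ks_aggregate(g)
print(round(ks, 4))                  # slightly above max(g) = -0.1
```

The aggregate always satisfies max(g) ≤ KS(g) ≤ max(g) + ln(n)/ρ, so enforcing KS(g) ≤ 0 conservatively enforces all n constraints with a single gradient evaluation.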

  16. Topology optimization in acoustics and elasto-acoustics via a level-set method

    Science.gov (United States)

    Desai, J.; Faure, A.; Michailidis, G.; Parry, G.; Estevez, R.

    2018-04-01

    Optimizing the shape and topology (S&T) of structures to improve their acoustic performance is quite challenging. The exact position of the structural boundary is usually of critical importance, which dictates the use of geometric methods for topology optimization instead of standard density approaches. The goal of the present work is to investigate different possibilities for handling topology optimization problems in acoustics and elasto-acoustics via a level-set method. From a theoretical point of view, we detail two equivalent ways to perform the derivation of surface-dependent terms and propose a smoothing technique for treating problems of boundary conditions optimization. In the numerical part, we examine the importance of the surface-dependent term in the shape derivative, neglected in previous studies found in the literature, on the optimal designs. Moreover, we test different mesh adaptation choices, as well as technical details related to the implicit surface definition in the level-set approach. We present results in two and three-space dimensions.

  17. Some free boundary problems in potential flow regime using a level set based method

    Energy Technology Data Exchange (ETDEWEB)

    Garzon, M.; Bobillo-Ares, N.; Sethian, J.A.

    2008-12-09

    Recent advances in the field of fluid mechanics with moving fronts are linked to the use of level set methods, a versatile mathematical technique for following free boundaries that undergo topological changes. A challenging class of problems in this context are those related to the solution of a partial differential equation posed on a moving domain, in which the boundary condition for the PDE solver has to be obtained from a partial differential equation defined on the front. This is the case for potential flow models with moving boundaries. Moreover, the fluid front may carry some material substance that diffuses on the front and is advected by the front velocity, as for example when surfactants are used to lower surface tension. We present a level set based methodology to embed these partial differential equations defined on the front in a complete Eulerian framework, fully avoiding the tracking of fluid particles and its known limitations. To show the advantages of this approach in the field of fluid mechanics, we present one particular application: the numerical approximation of a potential flow model to simulate the evolution and breaking of a solitary wave propagating over a sloping bottom, and a comparison of the level set based algorithm with previous front tracking models.

  18. New Technique for Improving Performance of LDPC Codes in the Presence of Trapping Sets

    Directory of Open Access Journals (Sweden)

    Mohamed Adnan Landolsi

    2008-06-01

    Full Text Available Trapping sets are considered the primary factor degrading the performance of low-density parity-check (LDPC) codes in the error-floor region. The effect of trapping sets on the performance of an LDPC code becomes worse as the code size decreases. One approach to tackle this problem is to minimize trapping sets during LDPC code design. However, while trapping sets can be reduced, their complete elimination is infeasible due to the presence of cycles in the underlying LDPC code bipartite graph. In this work, we introduce a new technique based on trapping set neutralization to minimize the negative effect of trapping sets under belief propagation (BP) decoding. Simulation results for random, progressive edge growth (PEG) and MacKay LDPC codes demonstrate the effectiveness of the proposed technique. The hardware cost of the proposed technique is also shown to be minimal.

  19. Inference of RMR value using fuzzy set theory and neuro-fuzzy techniques

    Energy Technology Data Exchange (ETDEWEB)

    Bae, Gyu-Jin; Cho, Mahn-Sup [Korea Institute of Construction Technology, Koyang(Korea)

    2001-12-31

    Tunnel design involves inaccurate data, fuzziness of evaluation, observer error, and so on. Face observation during tunnel excavation therefore plays an important role in raising stability and reducing support costs. This study was carried out to minimize observer subjectivity and to evaluate the natural properties of the ground accurately during face observation. For these purposes, fuzzy set theory and neuro-fuzzy techniques from artificial intelligence are applied to the inference of the RMR (Rock Mass Rating) value from observation data. The correlation between the original RMR value and the RMR_FU and RMR_NF values inferred by fuzzy set theory and neuro-fuzzy techniques, respectively, is investigated using 46 data points. The results show good correlation between the original RMR value and the inferred RMR_FU and RMR_NF values, with correlation coefficients of |R|=0.96 and |R|=0.95, respectively. From these results, the applicability of fuzzy set theory and neuro-fuzzy techniques to rock mass classification is shown to be sufficiently high. (author). 17 refs., 5 tabs., 9 figs.
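
A small fuzzy-inference sketch in the spirit of the abstract, with invented membership functions and class centroids (NOT the paper's calibrated RMR system): a normalized field rating x in [0, 1] is mapped to a crisp RMR-like score by weighting the centroids of three fuzzy rock classes with their membership degrees.

```python
# Centroid-weighted defuzzification over three fuzzy classes
# ("poor" -> 20, "fair" -> 50, "good" -> 80; all values illustrative).
def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def infer_rmr(x):
    mu_poor = max(0.0, 1.0 - x / 0.5)      # full membership at x = 0
    mu_fair = tri(x, 0.2, 0.5, 0.8)
    mu_good = max(0.0, (x - 0.5) / 0.5)    # full membership at x = 1
    num = 20.0 * mu_poor + 50.0 * mu_fair + 80.0 * mu_good
    return num / (mu_poor + mu_fair + mu_good)

print(infer_rmr(0.1), infer_rmr(0.5), infer_rmr(0.9))   # 20.0 50.0 80.0
```

Between the class peaks the output interpolates smoothly, which is the point of fuzzifying an otherwise discrete rating scheme.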

  20. On multiple level-set regularization methods for inverse problems

    International Nuclear Information System (INIS)

    DeCezaro, A; Leitão, A; Tai, X-C

    2009-01-01

    We analyze a multiple level-set method for solving inverse problems with piecewise constant solutions. This method corresponds to an iterated Tikhonov method for a particular Tikhonov functional G_α based on TV–H^1 penalization. We define generalized minimizers for our Tikhonov functional and establish an existence result. Moreover, we prove convergence and stability results for the proposed Tikhonov method. A multiple level-set algorithm is derived from the first-order optimality conditions for the Tikhonov functional G_α, similarly to the iterated Tikhonov method. The proposed multiple level-set method is tested on an inverse potential problem. Numerical experiments show that the method is able to recover multiple objects as well as multiple contrast levels.

  1. A level set method for multiple sclerosis lesion segmentation.

    Science.gov (United States)

    Zhao, Yue; Guo, Shuxu; Luo, Min; Shi, Xue; Bilello, Michel; Zhang, Shaoxiang; Li, Chunming

    2018-06-01

    In this paper, we present a level set method for multiple sclerosis (MS) lesion segmentation from FLAIR images in the presence of intensity inhomogeneities. We use a three-phase level set formulation of segmentation and bias field estimation to segment, from FLAIR images, the MS lesions, the normal tissue region (including GM and WM), and the CSF together with the background. To save computational load, we derive a two-phase formulation from the original multi-phase level set formulation to segment the MS lesions and normal tissue regions. The derived method inherits the desirable ability of the original level set method to precisely locate object boundaries while simultaneously performing segmentation and estimation of the bias field to deal with intensity inhomogeneity. Experimental results demonstrate the advantages of our method over other state-of-the-art methods in terms of segmentation accuracy. Copyright © 2017 Elsevier Inc. All rights reserved.

  2. A parametric level-set method for partially discrete tomography

    NARCIS (Netherlands)

    A. Kadu (Ajinkya); T. van Leeuwen (Tristan); K.J. Batenburg (Joost)

    2017-01-01

    This paper introduces a parametric level-set method for tomographic reconstruction of partially discrete images. Such images consist of a continuously varying background and an anomaly with a constant (known) grey-value. We express the geometry of the anomaly using a level-set function,

  3. Reconstruction of incomplete cell paths through a 3D-2D level set segmentation

    Science.gov (United States)

    Hariri, Maia; Wan, Justin W. L.

    2012-02-01

    Segmentation of fluorescent cell images has been a popular technique for tracking live cells. One challenge of segmenting cells from fluorescence microscopy is that cells in fluorescent images frequently disappear. When the images are stacked together to form a 3D image volume, the disappearance of the cells leads to broken cell paths. In this paper, we present a segmentation method that can reconstruct incomplete cell paths. The key idea of this model is to perform 2D segmentation in a 3D framework. The 2D segmentation captures the cells that appear in the image slices while the 3D segmentation connects the broken cell paths. The formulation is similar to the Chan-Vese level set segmentation which detects edges by comparing the intensity value at each voxel with the mean intensity values inside and outside of the level set surface. Our model, however, performs the comparison on each 2D slice with the means calculated by the 2D projected contour. The resulting effect is to segment the cells on each image slice. Unlike segmentation on each image frame individually, these 2D contours together form the 3D level set function. By enforcing minimum mean curvature on the level set surface, our segmentation model is able to extend the cell contours right before (and after) the cell disappears (and reappears) into the gaps, eventually connecting the broken paths. We will present segmentation results of C2C12 cells in fluorescent images to illustrate the effectiveness of our model qualitatively and quantitatively by different numerical examples.

  4. Stochastic level-set variational implicit-solvent approach to solute-solvent interfacial fluctuations

    Energy Technology Data Exchange (ETDEWEB)

    Zhou, Shenggao, E-mail: sgzhou@suda.edu.cn, E-mail: bli@math.ucsd.edu [Department of Mathematics and Mathematical Center for Interdiscipline Research, Soochow University, 1 Shizi Street, Jiangsu, Suzhou 215006 (China); Sun, Hui; Cheng, Li-Tien [Department of Mathematics, University of California, San Diego, La Jolla, California 92093-0112 (United States); Dzubiella, Joachim [Soft Matter and Functional Materials, Helmholtz-Zentrum Berlin, 14109 Berlin, Germany and Institut für Physik, Humboldt-Universität zu Berlin, 12489 Berlin (Germany); Li, Bo, E-mail: sgzhou@suda.edu.cn, E-mail: bli@math.ucsd.edu [Department of Mathematics and Quantitative Biology Graduate Program, University of California, San Diego, La Jolla, California 92093-0112 (United States); McCammon, J. Andrew [Department of Chemistry and Biochemistry, Department of Pharmacology, Howard Hughes Medical Institute, University of California, San Diego, La Jolla, California 92093-0365 (United States)

    2016-08-07

    Recent years have seen the initial success of a variational implicit-solvent model (VISM), implemented with a robust level-set method, in capturing efficiently different hydration states and providing quantitatively good estimation of solvation free energies of biomolecules. The level-set minimization of the VISM solvation free-energy functional of all possible solute-solvent interfaces or dielectric boundaries predicts an equilibrium biomolecular conformation that is often close to an initial guess. In this work, we develop a theory in the form of Langevin geometrical flow to incorporate solute-solvent interfacial fluctuations into the VISM. Such fluctuations are crucial to biomolecular conformational changes and binding processes. We also develop a stochastic level-set method to numerically implement such a theory. We describe the interfacial fluctuation through the “normal velocity” that is the solute-solvent interfacial force, derive the corresponding stochastic level-set equation in the sense of Stratonovich so that the surface representation is independent of the choice of implicit function, and develop numerical techniques for solving such an equation and processing the numerical data. We apply our computational method to study the dewetting transition in the system of two hydrophobic plates and a hydrophobic cavity of a synthetic host molecule cucurbit[7]uril. Numerical simulations demonstrate that our approach can describe an underlying system jumping out of a local minimum of the free-energy functional and can capture dewetting transitions of hydrophobic systems. In the case of two hydrophobic plates, we find that the wavelength of interfacial fluctuations has a strong influence on the dewetting transition. In addition, we find that the estimated energy barrier of the dewetting transition scales quadratically with the inter-plate distance, agreeing well with existing studies of molecular dynamics simulations. Our work is a first step toward the

  5. Industrial level measurement techniques - a review

    International Nuclear Information System (INIS)

    Schaudel, D.E.

    1984-01-01

    The methods of industrial level measurement technique outlined here are nowadays in current use. In line with technical evolution, the mechanical techniques are mentioned first, followed by a description of the more modern electronic methods. These measurement methods comply especially with the requirements of computer-aided process control systems, i.e. compatibility of signals, self-checking, and reliability. (orig.) [de]

  6. Microwave imaging of dielectric cylinder using level set method and conjugate gradient algorithm

    International Nuclear Information System (INIS)

    Grayaa, K.; Bouzidi, A.; Aguili, T.

    2011-01-01

    In this paper, we propose a computational method for microwave imaging of cylindrical dielectric objects, based on combining the level set technique with the conjugate gradient algorithm. From measurements of the scattered field, we retrieve the shape, location, and permittivity of the object. The forward problem is solved by the method of moments, while the inverse problem is reformulated as an optimization problem and solved by the proposed scheme. The proposed method is found to give good reconstruction quality in terms of the reconstructed shape and permittivity.

  7. A multilevel, level-set method for optimizing eigenvalues in shape design problems

    International Nuclear Information System (INIS)

    Haber, E.

    2004-01-01

    In this paper, we consider optimal design problems that involve shape optimization. The goal is to determine the shape of a certain structure such that it is either as rigid or as soft as possible. To achieve this goal we combine two new ideas for an efficient solution of the problem. First, we replace the eigenvalue problem with an approximation obtained by inverse iteration. Second, we use a level set method, but rather than propagating the front we use constrained optimization methods combined with multilevel continuation techniques. Combining these two ideas, we obtain a robust and rapid method for the solution of the optimal design problem.

  8. Discretisation Schemes for Level Sets of Planar Gaussian Fields

    Science.gov (United States)

    Beliaev, D.; Muirhead, S.

    2018-01-01

    Smooth random Gaussian functions play an important role in mathematical physics, a main example being the random plane wave model conjectured by Berry to give a universal description of high-energy eigenfunctions of the Laplacian on generic compact manifolds. Our work is motivated by questions about the geometry of such random functions, in particular relating to the structure of their nodal and level sets. We study four discretisation schemes that extract information about level sets of planar Gaussian fields. Each scheme recovers information up to a different level of precision, and each requires a maximum mesh-size in order to be valid with high probability. The first two schemes are generalisations and enhancements of similar schemes that have appeared in the literature (Beffara and Gayet in Publ Math IHES, 2017. https://doi.org/10.1007/s10240-017-0093-0; Mischaikow and Wanner in Ann Appl Probab 17:980-1018, 2007); these give complete topological information about the level sets on either a local or global scale. As an application, we improve the results in Beffara and Gayet (2017) on Russo-Seymour-Welsh estimates for the nodal set of positively-correlated planar Gaussian fields. The third and fourth schemes are, to the best of our knowledge, completely new. The third scheme is specific to the nodal set of the random plane wave, and provides global topological information about the nodal set up to `visible ambiguities'. The fourth scheme gives a way to approximate the mean number of excursion domains of planar Gaussian fields.
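    The sign-based flavor of such discretisation schemes can be sketched as follows: sample an approximation of the random plane wave on a grid whose mesh size is well below the wavelength, and flag the cells whose corner signs differ, since those are the cells crossed by the nodal set. The field below is a finite sum of random plane waves; the number of waves and the wavenumber are arbitrary illustrative choices, and this is not any of the paper's four schemes verbatim.

```python
import numpy as np

rng = np.random.default_rng(1)

# Approximate a random plane wave: M cosines with random directions and
# phases, all at wavenumber k (M and k are illustrative choices)
M, k = 50, 5.0
angles = rng.uniform(0.0, 2.0 * np.pi, M)
phases = rng.uniform(0.0, 2.0 * np.pi, M)

def field(X, Y):
    F = np.zeros_like(X)
    for a, p in zip(angles, phases):
        F += np.cos(k * (np.cos(a) * X + np.sin(a) * Y) + p)
    return F / np.sqrt(M / 2.0)            # roughly unit variance

# Mesh size 0.02 is well below the wavelength 2*pi/k ~ 1.26
n = 200
x = np.linspace(0.0, 4.0, n)
X, Y = np.meshgrid(x, x, indexing="ij")
s = np.sign(field(X, Y))

# A cell is flagged when adjacent samples disagree in sign: with high
# probability the nodal set {f = 0} passes through such a cell
crossed = (s[:-1, :-1] != s[1:, :-1]) | (s[:-1, :-1] != s[:-1, 1:])
frac = crossed.mean()
```

The fraction of flagged cells shrinks with the mesh size while the flagged cells trace out the nodal lines, which is what makes a maximum mesh-size condition natural for such schemes.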

  9. Level set segmentation of bovine corpora lutea in ex situ ovarian ultrasound images

    Directory of Open Access Journals (Sweden)

    Adams Gregg P

    2008-08-01

    The hypothesis that level set segmentation can be accurate to within 1–2 mm on average was supported, although greater deviations can occur. The method was robust to boundary leakage, as evidenced by the high specificity. It was concluded that the technique is promising and that a suitable data set of human ovarian images should be obtained for further studies.

  10. Identifying Heterogeneities in Subsurface Environment using the Level Set Method

    Energy Technology Data Exchange (ETDEWEB)

    Lei, Hongzhuan [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Lu, Zhiming [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Vesselinov, Velimir Valentinov [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-08-25

    These are slides from a presentation on identifying heterogeneities in the subsurface environment using the level set method. The slides begin with the motivation, then explain the Level Set Method (LSM) and its algorithms, present several examples, and conclude with future work.

  11. Setting-level influences on implementation of the responsive classroom approach.

    Science.gov (United States)

    Wanless, Shannon B; Patton, Christine L; Rimm-Kaufman, Sara E; Deutsch, Nancy L

    2013-02-01

    We used mixed methods to examine the association between setting-level factors and observed implementation of a social and emotional learning intervention (Responsive Classroom® approach; RC). In study 1 (N = 33 3rd grade teachers after the first year of RC implementation), we identified relevant setting-level factors and uncovered the mechanisms through which they related to implementation. In study 2 (N = 50 4th grade teachers after the second year of RC implementation), we validated our most salient Study 1 finding across multiple informants. Findings suggested that teachers perceived setting-level factors, particularly principal buy-in to the intervention and individualized coaching, as influential to their degree of implementation. Further, we found that intervention coaches' perspectives of principal buy-in were more related to implementation than principals' or teachers' perspectives. Findings extend the application of setting theory to the field of implementation science and suggest that interventionists may want to consider particular accounts of school setting factors before determining the likelihood of schools achieving high levels of implementation.

  12. Level Set Structure of an Integrable Cellular Automaton

    Directory of Open Access Journals (Sweden)

    Taichiro Takagi

    2010-03-01

    Based on a group theoretical setting, a discrete dynamical system is constructed and applied to a combinatorial dynamical system defined on the set of certain Bethe ansatz related objects known as rigged configurations. This system is then used to study a one-dimensional periodic cellular automaton related to the discrete Toda lattice. It is shown for the first time that the level set of this cellular automaton decomposes into connected components and that every such component is a torus.

  13. Effect of workload setting on propulsion technique in handrim wheelchair propulsion

    NARCIS (Netherlands)

    van Drongelen, Stefan; Arnet, Ursina; Veeger, DirkJan (H E. J); van der Woude, Lucas H. V.

    Objective: To investigate the influence of workload setting (speed at constant power, method to impose power) on the propulsion technique (i.e. force and timing characteristics) in handrim wheelchair propulsion. Method: Twelve able-bodied men participated in this study. External forces were measured

  14. Development of a technique for level measurement in pressure vessels using thermal probes and artificial neural networks

    International Nuclear Information System (INIS)

    Torres, Walmir Maximo

    2008-01-01

    A technique for level measurement in pressure vessels was developed using thermal probes with internal cooling and artificial neural networks (ANNs). This new concept of thermal probe was tested in an experimental facility (BETSNI) with two test sections, ST1 and ST2. Two different thermal probes were designed and constructed: a concentric-tube probe and a U-tube probe. A data acquisition system (DAS) was assembled to record the experimental data during the tests. Steady-state and transient level tests were carried out, and the experimental data obtained were used as learning and recall data sets in the ANN program RETRO-05, which simulates a multilayer perceptron with backpropagation. The results of the analysis show that the technique can be applied to level measurement in pressure vessels. The technique works with fewer input temperature data than the probes were originally designed for, and it is robust against the loss of some temperature data. Experimental data available in the literature from an electrically heated thermal probe were also used in the ANN analysis, producing good results. The results of the ANN analysis show that the technique can be improved and applied to level measurements in pressure vessels. (author)

  15. Reevaluation of steam generator level trip set point

    Energy Technology Data Exchange (ETDEWEB)

    Shim, Yoon Sub; Soh, Dong Sub; Kim, Sung Oh; Jung, Se Won; Sung, Kang Sik; Lee, Joon [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of)

    1994-06-01

    The reactor trip on low steam generator water level accounts for a substantial portion of reactor scrams in a nuclear plant, and the feasibility of modifying the steam generator water level trip system of YGN 1/2 was evaluated in this study. The study revealed that removal of the reactor trip function from the SG water level trip system is not possible for plant safety reasons, but that relaxation of the trip set point by 9% is feasible. The set point relaxation requires drilling new holes for level measurement in the operating steam generators. Characteristics of the negative neutron flux rate trip and reactor trip were also reviewed as additional work. Since the purpose of the trip system modification for reducing the reactor scram frequency is not to satisfy legal requirements but to improve plant performance, and since the modification has both positive and negative aspects, the decision on actual modification should be made based on the results of this study and the policy of the plant owner. 37 figs, 6 tabs, 14 refs. (Author).

  16. A simple mass-conserved level set method for simulation of multiphase flows

    Science.gov (United States)

    Yuan, H.-Z.; Shu, C.; Wang, Y.; Shu, S.

    2018-04-01

    In this paper, a modified level set method is proposed for simulation of multiphase flows with large density ratio and high Reynolds number. The present method simply introduces a source or sink term into the level set equation to compensate for mass loss or offset mass increase. The source or sink term is derived analytically by applying the mass conservation principle to the level set equation and the continuity equation of the flow field. Since only a source term is introduced, the application of the present method is as simple as the original level set method, but it guarantees overall mass conservation. To validate the present method, the vortex flow problem is first considered. The simulation results are compared with those from the original level set method, which demonstrates that the modified level set method has the capability of accurately capturing the interface while keeping the mass conserved. Then, the proposed method is further validated by simulating the Laplace law, the merging of two bubbles, a bubble rising with high density ratio, and Rayleigh-Taylor instability with high Reynolds number. Numerical results show that mass is well conserved by the present method.
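    The paper derives its source term analytically from the continuity equation; the toy sketch below only illustrates the compensation idea in its simplest form. A circular interface loses area under a diffusive perturbation (standing in for numerical mass loss), and a spatially uniform correction to the level set function is then chosen so that the enclosed area returns to its initial value.

```python
import numpy as np

n = 128
x = np.linspace(-1.0, 1.0, n)
h = x[1] - x[0]
X, Y = np.meshgrid(x, x, indexing="ij")
phi = np.sqrt(X**2 + Y**2) - 0.5      # signed distance to a circle

def area(p):                          # area of {p < 0}; domain area is 4
    return np.mean(p < 0.0) * 4.0

A0 = area(phi)

# Mimic numerical mass loss with a few diffusive smoothing steps
dt = 0.8 * h**2 / 4.0                 # stable explicit diffusion step
for _ in range(100):
    lap = (np.roll(phi, 1, 0) + np.roll(phi, -1, 0) +
           np.roll(phi, 1, 1) + np.roll(phi, -1, 1) - 4.0 * phi) / h**2
    phi += dt * lap
A_lost = area(phi)                    # smaller than A0: mass was lost

# Simplest mass-conserving correction: shift phi by a constant c chosen
# by bisection so that the enclosed area returns to A0
lo, hi = -0.1, 0.1
for _ in range(60):
    c = 0.5 * (lo + hi)
    if area(phi + c) > A0:
        lo = c                        # region still too large -> raise phi
    else:
        hi = c
phi += 0.5 * (lo + hi)
```

A uniform shift is the crudest possible "source term"; the paper's analytically derived term instead acts through the level set evolution itself, but the conservation target is the same.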

  17. Software refactoring at the package level using clustering techniques

    KAUST Repository

    Alkhalid, A.

    2011-01-01

    Enhancing, modifying or adapting software to new requirements increases its internal complexity. Software with a high level of internal complexity is difficult to maintain. Software refactoring reduces software complexity and hence decreases the maintenance effort. However, software refactoring becomes quite a challenging task as the software evolves. The authors use clustering as a pattern recognition technique to assist in software refactoring activities at the package level. The approach presents computer-aided support for identifying ill-structured packages and provides suggestions for the software designer to balance intra-package cohesion against inter-package coupling. A comparative study is conducted applying three different clustering techniques to different software systems. In addition, the application of refactoring at the package level using an adaptive k-nearest neighbour (A-KNN) algorithm is introduced. The authors compared the A-KNN technique with the other clustering techniques (viz. the single linkage algorithm, the complete linkage algorithm and the weighted pair-group method using arithmetic averages). The new technique shows competitive performance with lower computational complexity. © 2011 The Institution of Engineering and Technology.
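    As a toy illustration of clustering-based package refactoring (not the record's A-KNN algorithm, and with made-up data), the sketch below runs complete-linkage agglomerative clustering on a small class-dependency matrix: coupled classes are treated as "close", so the resulting clusters are candidate packages with high intra-package cohesion and low inter-package coupling.

```python
import numpy as np

# Made-up dependency graph for six classes: 1 = direct dependency
names = ["A", "B", "C", "D", "E", "F"]
dep = np.array([
    [0, 1, 1, 0, 0, 0],
    [1, 0, 1, 0, 0, 0],
    [1, 1, 0, 0, 0, 0],
    [0, 0, 0, 0, 1, 1],
    [0, 0, 0, 1, 0, 1],
    [0, 0, 0, 1, 1, 0],
], dtype=float)
dist = 1.0 - dep                      # coupled classes are "close"
np.fill_diagonal(dist, 0.0)

# Complete-linkage agglomerative clustering, stopping at two packages
clusters = [[i] for i in range(len(names))]
while len(clusters) > 2:
    best, pair = np.inf, None
    for i in range(len(clusters)):
        for j in range(i + 1, len(clusters)):
            # complete linkage: cluster distance = max pairwise distance
            d = max(dist[a, b] for a in clusters[i] for b in clusters[j])
            if d < best:
                best, pair = d, (i, j)
    i, j = pair
    clusters[i] += clusters.pop(j)

packages = sorted(sorted(names[i] for i in c) for c in clusters)
```

On this toy graph the two tightly coupled triples are recovered as the two suggested packages.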

  18. Level-Set Methodology on Adaptive Octree Grids

    Science.gov (United States)

    Gibou, Frederic; Guittet, Arthur; Mirzadeh, Mohammad; Theillard, Maxime

    2017-11-01

    Numerical simulations of interfacial problems in fluids require a methodology capable of tracking surfaces that can undergo changes in topology and of imposing jump boundary conditions in a sharp manner. In this talk, we will discuss recent advances in the level-set framework, in particular one that is based on adaptive grids.

  19. Using of Structural Equation Modeling Techniques in Cognitive Levels Validation

    Directory of Open Access Journals (Sweden)

    Natalija Curkovic

    2012-10-01

    When constructing knowledge tests, cognitive level is usually one of the dimensions comprising the test specifications, with each item assigned to measure a particular level. Recently used taxonomies of cognitive levels most often represent some modification of the original Bloom’s taxonomy. There are many concerns in the current literature about the existence of predefined cognitive levels. The aim of this article is to investigate whether structural equation modeling techniques can confirm the existence of different cognitive levels. For the purpose of the research, a Croatian final high-school Mathematics exam was used (N = 9626). Confirmatory factor analysis and structural regression modeling were used to test three different models. Structural equation modeling techniques did not support the existence of different cognitive levels in this case. There is more than one possible explanation for that finding. Other techniques that take into account the nonlinear behaviour of the items, as well as qualitative techniques, might be more useful for the purpose of cognitive level validation. Furthermore, it seems that cognitive levels were not efficient descriptors of the items, and so improvements are needed in describing the cognitive skills measured by items.

  20. Effect of workload setting on propulsion technique in handrim wheelchair propulsion.

    Science.gov (United States)

    van Drongelen, Stefan; Arnet, Ursina; Veeger, Dirkjan H E J; van der Woude, Lucas H V

    2013-03-01

    To investigate the influence of workload setting (speed at constant power, method to impose power) on the propulsion technique (i.e. force and timing characteristics) in handrim wheelchair propulsion. Twelve able-bodied men participated in this study. External forces were measured during handrim wheelchair propulsion on a motor-driven treadmill at different velocities and constant power output (to test the effect of speed) and at power outputs imposed by incline vs. a pulley system (to test the effect of the method used to impose power). Outcome measures were the force and timing variables of the propulsion technique. FEF and timing variables showed significant differences between the speed conditions when propelling at the same power output; propulsion technique parameters thus differed despite an overall constant power output. Copyright © 2012 IPEM. Published by Elsevier Ltd. All rights reserved.

  1. A level set approach for shock-induced α-γ phase transition of RDX

    Science.gov (United States)

    Josyula, Kartik; Rahul; De, Suvranu

    2018-02-01

    We present a thermodynamically consistent level set approach, based on a regularization energy functional, which can be directly incorporated into a Galerkin finite element framework to model interface motion. The regularization energy leads to a diffusive form of flux that is embedded within the level set evolution equation and maintains the signed distance property of the level set function. The scheme is shown to compare well with the velocity extension method in capturing the interface position. The proposed level set approach is employed to study the α-γ phase transformation in an RDX single crystal shocked along the (100) plane. Example problems in one and three dimensions are presented. We observe smooth evolution of the phase interface along the shock direction in both models. There is no diffusion of the interface during the zero level set evolution in the three-dimensional model. The level set approach is shown to capture the characteristics of the shock-induced α-γ phase transformation, such as stress relaxation behind the phase interface and the finite time required for the phase transformation to complete. The regularization-energy-based level set approach is efficient, robust, and easy to implement.

  2. Measurement of liquid level in a natural circulation circuit using an ultrasonic technique

    International Nuclear Information System (INIS)

    Barbosa, Amanda Cardozo; Su, Jian

    2017-01-01

    The measurement of the water level in the expansion tank of the Natural Circulation Circuit (NCC) of the Experimental Thermo-Hydraulic Laboratory of the Institute of Nuclear Engineering by an ultrasonic technique is presented. In single-phase NCC operation the water level in the expansion tank is stable. During two-phase operation, however, oscillations occur in the water level due to variations in temperature and void fraction. Thus, a technique that allows the measurement of these oscillations will allow the variation of the void fraction of the circuit over time to be estimated. The experimental set-up was assembled on a test bench, using an ultrasonic transducer. The ultrasonic technique used is pulse-echo, in which the same transducer is both transmitter and receiver of the signal. The transducer-shoe assembly is part of an ultrasonic system consisting of an ultrasonic signal generating board, transducers and a computer (PC) with a LabView program to control the system. The program calculates the transit times that the ultrasonic signals take to cross the tank base wall and the layer (level) of liquid and return to the transducer. Knowing the speed of the ultrasound in the wall and in the liquid, it is possible to calculate the thickness of the wall and the height of the liquid. Measurements were made by filling the tank with known volumes of water under varying temperature conditions, from room temperature to 90 °C. The liquid heights were determined, with the temperature measured by a digital thermometer, and the corresponding volumes of water calculated. The measured volumes were highly accurate when compared to the known volumes.
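    The transit-time arithmetic behind this pulse-echo technique reduces to a couple of lines. The sound speeds and echo times below are typical, made-up values rather than measurements from the record, and in practice the speed of sound in the liquid must be corrected for temperature, which is why the experiment above tracks temperature with a thermometer.

```python
# Pulse-echo timing: each echo travels the path twice (down and back)
c_steel = 5900.0    # m/s, longitudinal sound speed in steel (typical value)
c_water = 1480.0    # m/s, sound speed in water near room temperature

def wall_thickness(t_round_trip):
    return c_steel * t_round_trip / 2.0

def liquid_level(t_round_trip):
    return c_water * t_round_trip / 2.0

# Made-up echo times: ~3.39 us from the wall, ~270.27 us from the surface
d = wall_thickness(3.39e-6)     # about 0.010 m of wall
h = liquid_level(270.27e-6)     # about 0.200 m of liquid
```
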

  3. Polluted rainwater runoff from waste recovery and recycling companies: Determination of emission levels associated with the best available techniques.

    Science.gov (United States)

    Huybrechts, D; Verachtert, E; Vander Aa, S; Polders, C; Van den Abeele, L

    2016-08-01

    Rainwater falling on the outdoor storage areas of waste recovery and recycling companies becomes polluted through contact with the stored materials. It contains various pollutants, including heavy metals, polycyclic aromatic hydrocarbons and polychlorinated biphenyls, and is characterized by a highly fluctuating composition and flow rate. This polluted rainwater runoff is legally considered industrial wastewater, and the polluting substances contained in the runoff at the point of discharge are considered emissions into water. The permitting authorities can set emission limit values (discharge limits) at the point of discharge. Best available techniques (BAT) are an important reference point for setting emission limit values. In this paper, the emission levels associated with the best available techniques for dealing with polluted rainwater runoff from waste recovery and recycling companies were determined. The determination is based on an analysis of emission data measured at different companies in Flanders. The data show that a significant fraction of the pollution in rainwater runoff is associated with particles. A comparison with literature data provides strong indications that not only leaching but also atmospheric deposition plays an important role in the contamination of rainwater at waste recovery and recycling companies. The prevention of pollution and the removal of suspended solids from rainwater runoff to levels below 60 mg/l are considered best available techniques. The associated emission levels were determined by considering only emission data from plants applying wastewater treatment, and excluding all samples with suspended solid levels above 60 mg/l. The resulting BAT-AEL can be used as a reference point for setting emission limit values for polluted rainwater runoff from waste recovery and recycling companies. Since the BAT-AEL (e.g. 150 μg/l for Cu) are significantly lower than current emission levels (e.g. 300 μg/l as the 90th percentile and 4910

  4. Two Surface-Tension Formulations For The Level Set Interface-Tracking Method

    International Nuclear Information System (INIS)

    Shepel, S.V.; Smith, B.L.

    2005-01-01

    The paper describes a comparative study of two surface-tension models for the Level Set interface-tracking method. In both models, the surface tension is represented as a body force concentrated near the interface, but the technical implementation of the two options is different. The first is based on a traditional Level Set approach, in which the surface tension is distributed over a narrow band around the interface using a smoothed Delta function. In the second model, which is based on the integral form of the fluid-flow equations, the force is imposed only in those computational cells through which the interface passes. Both models have been incorporated into the Finite-Element/Finite-Volume Level Set method, previously implemented in the commercial Computational Fluid Dynamics (CFD) code CFX-4. A critical evaluation of the two models, undertaken in the context of four standard Level Set benchmark problems, shows that the first model, based on the smoothed Delta function approach, is the more general, and more robust, of the two. (author)
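    The first (smoothed Delta function) formulation can be sketched as follows: the surface-tension body force is spread over a band of half-width eps around the interface using a cosine-smoothed delta. As a consistency check on the distribution, integrating delta(phi)*|grad phi| over the domain should recover the interface length, here the perimeter of a circle; the band width eps = 1.5h is an assumed, commonly used choice rather than the paper's setting.

```python
import numpy as np

n = 256
x = np.linspace(-1.0, 1.0, n)
h = x[1] - x[0]
X, Y = np.meshgrid(x, x, indexing="ij")
phi = np.sqrt(X**2 + Y**2) - 0.5           # signed distance to a circle

# Cosine-smoothed delta function with support |phi| < eps
eps = 1.5 * h
delta = np.where(np.abs(phi) < eps,
                 (1.0 + np.cos(np.pi * phi / eps)) / (2.0 * eps),
                 0.0)

gx, gy = np.gradient(phi, h)
grad_mag = np.sqrt(gx**2 + gy**2)

# In a CSF-style model the force density is ~ sigma*kappa*delta*grad(phi);
# here we only verify that sum(delta*|grad phi|)*h^2 recovers the perimeter
length = np.sum(delta * grad_mag) * h * h  # exact value: 2*pi*0.5 = pi
```

The second model in the record concentrates the force in interface-cut cells instead of a band; the delta-function band above is precisely what that formulation avoids.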

  5. A deep level set method for image segmentation

    OpenAIRE

    Tang, Min; Valipour, Sepehr; Zhang, Zichen Vincent; Cobzas, Dana; Jagersand, Martin

    2017-01-01

    This paper proposes a novel image segmentation approach that integrates fully convolutional networks (FCNs) with a level set model. Compared with a FCN, the integrated method can incorporate smoothing and prior information to achieve an accurate segmentation. Furthermore, different than using the level set model as a post-processing tool, we integrate it into the training phase to fine-tune the FCN. This allows the use of unlabeled data during training in a semi-supervised setting. Using two types o...

  6. A LEVEL SET BASED SHAPE OPTIMIZATION METHOD FOR AN ELLIPTIC OBSTACLE PROBLEM

    KAUST Repository

    Burger, Martin

    2011-04-01

    In this paper, we construct a level set method for an elliptic obstacle problem, which can be reformulated as a shape optimization problem. We provide a detailed shape sensitivity analysis for this reformulation and a stability result for the shape Hessian at the optimal shape. Using the shape sensitivities, we construct a geometric gradient flow, which can be realized in the context of level set methods. We prove the convergence of the gradient flow to an optimal shape and provide a complete analysis of the level set method in terms of viscosity solutions. To our knowledge this is the first complete analysis of a level set method for a nonlocal shape optimization problem. Finally, we discuss the implementation of the methods and illustrate its behavior through several computational experiments. © 2011 World Scientific Publishing Company.

  7. A LEVEL SET BASED SHAPE OPTIMIZATION METHOD FOR AN ELLIPTIC OBSTACLE PROBLEM

    KAUST Repository

    Burger, Martin; Matevosyan, Norayr; Wolfram, Marie-Therese

    2011-01-01

    analysis of the level set method in terms of viscosity solutions. To our knowledge this is the first complete analysis of a level set method for a nonlocal shape optimization problem. Finally, we discuss the implementation of the methods and illustrate its

  8. Perceptions of Teachers towards Assessment Techniques at Secondary Level Private School of Karachi

    Directory of Open Access Journals (Sweden)

    Henna Fatemah

    2015-12-01

    This paper sets out to explore the perceptions of teachers towards assessment techniques at a secondary level private school of Karachi. This was conjectured on the basis of the circumstances of parallel boards in the education system of Pakistan and their effectiveness within the context of the curriculum. This was gauged in line with the forms and techniques of assessment corresponding to the curriculum. A qualitative research design based on interviews was chosen for this study. Purposive sampling was used to select the teachers from a school. The findings of the study revealed that the General Certificate of Secondary Education (GCSE) is best suited to assess students’ knowledge and skills, and the teachers viewed that, for students to be accomplished in this board, the ways of assessment must take a more meaningful measure of evaluating students’ progress.

  9. A local level set method based on a finite element method for unstructured meshes

    International Nuclear Information System (INIS)

    Ngo, Long Cu; Choi, Hyoung Gwon

    2016-01-01

    A local level set method for unstructured meshes has been implemented by using a finite element method. A least-square weighted residual method was employed for implicit discretization to solve the level set advection equation. By contrast, a direct re-initialization method, which is directly applicable to the local level set method for unstructured meshes, was adopted to re-correct the level set function to become a signed distance function after advection. The proposed algorithm was constructed such that the advection and direct reinitialization steps were conducted only for nodes inside the narrow band around the interface. Therefore, in the advection step, the Gauss–Seidel method was used to update the level set function using a node-by-node solution method. Some benchmark problems were solved by using the present local level set method. Numerical results have shown that the proposed algorithm is accurate and efficient in terms of computational time

  10. A local level set method based on a finite element method for unstructured meshes

    Energy Technology Data Exchange (ETDEWEB)

    Ngo, Long Cu; Choi, Hyoung Gwon [School of Mechanical Engineering, Seoul National University of Science and Technology, Seoul (Korea, Republic of)

    2016-12-15

    A local level set method for unstructured meshes has been implemented by using a finite element method. A least-square weighted residual method was employed for implicit discretization to solve the level set advection equation. By contrast, a direct re-initialization method, which is directly applicable to the local level set method for unstructured meshes, was adopted to re-correct the level set function to become a signed distance function after advection. The proposed algorithm was constructed such that the advection and direct reinitialization steps were conducted only for nodes inside the narrow band around the interface. Therefore, in the advection step, the Gauss–Seidel method was used to update the level set function using a node-by-node solution method. Some benchmark problems were solved by using the present local level set method. Numerical results have shown that the proposed algorithm is accurate and efficient in terms of computational time.
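    A brute-force version of the direct re-initialization step can be sketched on a structured grid (the record's method targets unstructured meshes and restricts the work to a narrow band; here the whole grid is re-corrected for simplicity): interface points are located by linear interpolation along grid edges where the level set function changes sign, and the function is then reset to the signed distance to that point cloud.

```python
import numpy as np

n = 64
x = np.linspace(-1.0, 1.0, n)
X, Y = np.meshgrid(x, x, indexing="ij")
r = np.sqrt(X**2 + Y**2)
phi = (r - 0.5) * (2.0 + np.sin(3.0 * X))   # distorted, same zero level set

# 1) Locate interface points by linear interpolation on sign-change edges
pts = []
cross = phi[:-1, :] * phi[1:, :] < 0.0       # edges along the x-direction
ii, jj = np.nonzero(cross)
t = phi[ii, jj] / (phi[ii, jj] - phi[ii + 1, jj])
pts.append(np.column_stack([X[ii, jj] + t * (X[ii + 1, jj] - X[ii, jj]),
                            Y[ii, jj]]))
cross = phi[:, :-1] * phi[:, 1:] < 0.0       # edges along the y-direction
ii, jj = np.nonzero(cross)
t = phi[ii, jj] / (phi[ii, jj] - phi[ii, jj + 1])
pts.append(np.column_stack([X[ii, jj],
                            Y[ii, jj] + t * (Y[ii, jj + 1] - Y[ii, jj])]))
P = np.vstack(pts)

# 2) Reset phi to the signed distance to the interface point cloud
d = np.sqrt((X[..., None] - P[:, 0])**2 +
            (Y[..., None] - P[:, 1])**2).min(axis=-1)
phi = np.sign(phi) * d
```

After the reset, phi is close to the true signed distance r - 0.5 near the interface even though the input had strongly varying slopes, which is the property the advection step relies on.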

  11. Laser techniques for spectroscopy of core-excited atomic levels

    Science.gov (United States)

    Harris, S. E.; Young, J. F.; Falcone, R. W.; Rothenberg, J. E.; Willison, J. R.

    1982-01-01

    We discuss three techniques which allow the use of tunable lasers for high resolution and picosecond time scale spectroscopy of core-excited atomic levels. These are: anti-Stokes absorption spectroscopy, laser induced emission from metastable levels, and laser designation of selected core-excited levels.

  12. An efficient, scalable, and adaptable framework for solving generic systems of level-set PDEs

    Directory of Open Access Journals (Sweden)

    Kishore R. Mosaliganti

    2013-12-01

    In the last decade, level-set methods have been actively developed for applications in image registration, segmentation, tracking, and reconstruction. However, the development of a wide variety of level-set PDEs and their numerical discretization schemes, coupled with hybrid combinations of PDE terms, stopping criteria, and reinitialization strategies, has created a software logistics problem. In the absence of an integrative design, current toolkits support only specific types of level-set implementations, which restricts future algorithm development since extensions require significant code duplication and effort. In the new NIH/NLM Insight Toolkit (ITK v4) architecture, we implemented a level-set software design that is flexible with respect to different numerical representations (continuous, discrete, and sparse) and grid representations (point-, mesh-, and image-based). Given that a generic PDE is a summation of different terms, we used a set of linked containers to which level-set terms can be added or deleted at any point in the evolution process. This container-based approach allows the user to explore and customize terms in the level-set equation at compile time in a flexible manner. The framework is optimized so that repeated computations of common intensity functions (e.g. gradients and Hessians) across multiple terms are eliminated. The framework further enables the evolution of multiple level sets for multi-object segmentation and the processing of large datasets. To do so, we restrict level-set domains to subsets of the image domain and use multithreading strategies to process groups of subdomains or level-set functions. Users can also select from a variety of reinitialization policies and stopping criteria. Finally, we developed a visualization framework that shows the evolution of a level set in real time to help guide algorithm development and parameter optimization. We demonstrate the power of our new framework using confocal microscopy images of cells in a

  13. A parametric level-set approach for topology optimization of flow domains

    DEFF Research Database (Denmark)

    Pingen, Georg; Waidmann, Matthias; Evgrafov, Anton

    2010-01-01

    of the design variables in the traditional approaches is seen as a possible cause for the slow convergence. Non-smooth material distributions are suspected to trigger premature onset of instationary flows which cannot be treated by steady-state flow models. In the present work, we study whether the convergence...... and the versatility of topology optimization methods for fluidic systems can be improved by employing a parametric level-set description. In general, level-set methods allow controlling the smoothness of boundaries, yield a non-local influence of design variables, and decouple the material description from the flow...... field discretization. The parametric level-set method used in this study utilizes a material distribution approach to represent flow boundaries, resulting in a non-trivial mapping between design variables and local material properties. Using a hydrodynamic lattice Boltzmann method, we study...

  14. Level set methods for detonation shock dynamics using high-order finite elements

    Energy Technology Data Exchange (ETDEWEB)

    Dobrev, V. A. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Grogan, F. C. [Univ. of California, San Diego, CA (United States); Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Kolev, T. V. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Rieben, R [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Tomov, V. Z. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2017-05-26

    Level set methods are a popular approach to modeling evolving interfaces. We present a level set advection solver in two and three dimensions using the discontinuous Galerkin method with high-order finite elements. During evolution, the level set function is reinitialized to a signed distance function to maintain accuracy. Our approach leads to stable front propagation and convergence on high-order, curved, unstructured meshes. The ability of the solver to implicitly track moving fronts lends itself to a number of applications; in particular, we highlight applications to high-explosive (HE) burn and detonation shock dynamics (DSD). We provide results for two- and three-dimensional benchmark problems as well as applications to DSD.
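    A minimal 1-D sketch of the two ingredients above: upwind advection of the level set function followed by reinitialization to a signed distance function. The paper uses high-order discontinuous Galerkin elements; this first-order finite-difference version only illustrates the advect-then-reinitialize cycle.

    ```python
    import numpy as np

    def advect(phi, v, dx, dt):
        # first-order upwind for constant speed v > 0; the inflow boundary
        # keeps the unit signed-distance slope
        new = np.empty_like(phi)
        new[1:] = phi[1:] - v * dt / dx * (phi[1:] - phi[:-1])
        new[0] = new[1] - dx
        return new

    def reinitialize(x, phi):
        # locate the zero crossing by linear interpolation, then rebuild phi
        # as an exact signed distance to that front
        i = np.where(np.sign(phi[:-1]) != np.sign(phi[1:]))[0][0]
        x0 = x[i] - phi[i] * (x[i + 1] - x[i]) / (phi[i + 1] - phi[i])
        return x - x0

    x = np.linspace(0.0, 1.0, 201)
    dx = x[1] - x[0]
    phi = x - 0.25                      # front starts at x = 0.25
    for _ in range(50):                 # advect at speed 1, CFL = 0.5
        phi = advect(phi, 1.0, dx, 0.5 * dx)
        phi = reinitialize(x, phi)      # keep phi a signed distance function
    # front has moved 50 * 0.5 * dx = 0.125, i.e. to x = 0.375
    ```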

  15. An investigation of children's levels of inquiry in an informal science setting

    Science.gov (United States)

    Clark-Thomas, Beth Anne

    Elementary school students' understanding of both science content and processes is enhanced by the higher level thinking associated with inquiry-based science investigations. Informal science setting personnel, elementary school teachers, and curriculum specialists charged with designing inquiry-based investigations would be well served by an understanding of the varying influence of certain factors upon the students' willingness and ability to delve into such higher level inquiries. This study examined young children's use of inquiry-based materials and factors which may influence the level of inquiry they engaged in during informal science activities. An informal science setting was selected as the context for the examination of student inquiry behaviors because of the rich inquiry-based environment present at the site and the benefits previously noted in the research regarding the impact of informal science settings upon the construction of knowledge in science. The study revealed several patterns of behavior among children when they are engaged in inquiry-based activities at informal science exhibits. These repeated behaviors varied in the children's apparent purposeful use of the materials at the exhibits. These levels of inquiry behavior were taxonomically defined as high/medium/low within this study utilizing a researcher-developed tool. Furthermore, in this study adult interventions, questions, or prompting were found to impact the level of inquiry engaged in by the children. This study revealed that higher levels of inquiry were preceded by task directed and physical feature prompts. Moreover, the levels of inquiry behaviors were halted, or even lowered, when preceded by a prompt that focused on a science content or concept question. Results of this study have implications for the enhancement of inquiry-based science activities in elementary schools as well as in informal science settings. 
These findings have significance for all science educators

  16. Aerostructural Level Set Topology Optimization for a Common Research Model Wing

    Science.gov (United States)

    Dunning, Peter D.; Stanford, Bret K.; Kim, H. Alicia

    2014-01-01

    The purpose of this work is to use level set topology optimization to improve the design of a representative wing box structure for the NASA common research model. The objective is to minimize the total compliance of the structure under aerodynamic and body force loading, where the aerodynamic loading is coupled to the structural deformation. A taxi bump case was also considered, where only body force loads were applied. The trim condition that aerodynamic lift must balance the total weight of the aircraft is enforced by allowing the root angle of attack to change. The level set optimization method is implemented on an unstructured three-dimensional grid, so that the method can optimize a wing box with arbitrary geometry. Fast matching and upwind schemes are developed for an unstructured grid, which make the level set method robust and efficient. The adjoint method is used to obtain the coupled shape sensitivities required to perform aerostructural optimization of the wing box structure.

  17. A Variational Level Set Model Combined with FCMS for Image Clustering Segmentation

    Directory of Open Access Journals (Sweden)

    Liming Tang

    2014-01-01

    Full Text Available The fuzzy C means clustering algorithm with spatial constraint (FCMS) is effective for image segmentation. However, it lacks essential smoothing constraints on the cluster boundaries and enough robustness to noise. Samson et al. proposed a variational level set model for image clustering segmentation, which can get smooth cluster boundaries and closed cluster regions due to the use of the level set scheme. However, it is very sensitive to noise since it is actually a hard C means clustering model. In this paper, based on Samson’s work, we propose a new variational level set model combined with FCMS for image clustering segmentation. Compared with FCMS clustering, the proposed model can get smooth cluster boundaries and closed cluster regions due to the use of the level set scheme. In addition, a block-based energy is incorporated into the energy functional, which enables the proposed model to be more robust to noise than FCMS clustering and Samson’s model. Some experiments on synthetic and real images are performed to assess the performance of the proposed model. Compared with some classical image segmentation models, the proposed model performs better on images contaminated by different noise levels.
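    For orientation, the fuzzy C means iteration that FCMS builds on can be sketched as follows. This is plain FCM on 1-D intensities with illustrative synthetic data; the spatial constraint and the level set terms of the proposed model are omitted.

    ```python
    import numpy as np

    def fcm(x, c=2, m=2.0, iters=30):
        # plain fuzzy C-means on 1-D intensities; centers seeded on the data range
        centers = np.linspace(x.min(), x.max(), c)
        for _ in range(iters):
            d = np.abs(x[None, :] - centers[:, None]) + 1e-12          # (c, n) distances
            # membership: u_ik = 1 / sum_j (d_ik / d_jk)^(2/(m-1))
            u = 1.0 / np.sum((d[:, None, :] / d[None, :, :]) ** (2.0 / (m - 1.0)), axis=1)
            # weighted-mean center update
            centers = (u ** m @ x) / np.sum(u ** m, axis=1)
        return centers, u

    # two well-separated intensity populations
    x = np.concatenate([np.full(50, 10.0), np.full(50, 100.0)])
    centers, u = fcm(x)
    ```

    The "hard" C means model Samson et al. start from corresponds to the limit where each `u` row is forced to 0 or 1, which is exactly what makes it noise-sensitive.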

  18. Surface-to-surface registration using level sets

    DEFF Research Database (Denmark)

    Hansen, Mads Fogtmann; Erbou, Søren G.; Vester-Christensen, Martin

    2007-01-01

    This paper presents a general approach for surface-to-surface registration (S2SR) with the Euclidean metric using signed distance maps. In addition, the method is symmetric such that the registration of a shape A to a shape B is identical to the registration of the shape B to the shape A. The S2SR...... problem can be approximated by the image registration (IR) problem of the signed distance maps (SDMs) of the surfaces confined to some narrow band. By shrinking the narrow bands around the zero level sets the solution to the IR problem converges towards the S2SR problem. It is our hypothesis...... that this approach is more robust and less prone to fall into local minima than ordinary surface-to-surface registration. The IR problem is solved using the inverse compositional algorithm. In this paper, a set of 40 pelvic bones of Duroc pigs are registered to each other w.r.t. the Euclidean transformation...
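    The core quantities of the abstract — signed distance maps of two shapes and a dissimilarity confined to a narrow band around the zero level set — can be sketched as below. Distances are computed by brute force on a tiny grid and all names are illustrative; a real implementation would use a fast distance transform.

    ```python
    import numpy as np

    def signed_distance(mask):
        # brute-force SDM: distance to the nearest pixel of the opposite
        # region, negative inside the shape
        pts_in = np.argwhere(mask)
        pts_out = np.argwhere(~mask)
        idx = np.argwhere(np.ones_like(mask))
        d_in = np.min(np.linalg.norm(idx[:, None, :] - pts_in[None, :, :], axis=2), axis=1)
        d_out = np.min(np.linalg.norm(idx[:, None, :] - pts_out[None, :, :], axis=2), axis=1)
        return np.where(mask.ravel(), -d_out, d_in).reshape(mask.shape)

    def narrow_band_ssd(sdm_a, sdm_b, width=3.0):
        # image dissimilarity restricted to a narrow band around the zero level set
        band = np.abs(sdm_a) < width
        return float(np.sum((sdm_a[band] - sdm_b[band]) ** 2))

    y, x = np.mgrid[0:32, 0:32]
    disk_a = (x - 16) ** 2 + (y - 16) ** 2 < 36     # radius-6 disk
    disk_b = (x - 18) ** 2 + (y - 16) ** 2 < 36     # same disk shifted by 2 pixels
    sa, sb = signed_distance(disk_a), signed_distance(disk_b)
    ```

    Shrinking `width` toward zero is the limit in which, per the abstract, the image registration problem converges to the surface-to-surface registration problem.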

  19. Joint inversion of geophysical data using petrophysical clustering and facies deformation with the level set technique

    Science.gov (United States)

    Revil, A.

    2015-12-01

    Geological expertise and petrophysical relationships can be brought together to provide prior information while inverting multiple geophysical datasets. The merging of such information can result in more realistic solutions for the distribution of the model parameters, reducing ipso facto the non-uniqueness of the inverse problem. We consider two levels of heterogeneity: facies, described by facies boundaries, and heterogeneities inside each facies determined by a correlogram. In this presentation, we pose the geophysical inverse problem in terms of Gaussian random fields with mean functions controlled by petrophysical relationships and covariance functions controlled by a prior geological cross-section, including the definition of spatial boundaries for the geological facies. The petrophysical relationship problem is formulated as a regression problem upon each facies. The inversion of the geophysical data is performed in a Bayesian framework. We demonstrate the usefulness of this strategy using a first synthetic case for which we perform a joint inversion of gravity and galvanometric resistivity data with the stations located at the ground surface. The joint inversion is used to recover the density and resistivity distributions of the subsurface. In a second step, we consider the possibility that the facies boundaries are deformable and their shapes are inverted as well. We use the level set approach to perform such deformation, preserving prior topological properties of the facies throughout the inversion. With the help of prior facies petrophysical relationships and the topological characteristics of each facies, we make posterior inferences about multiple geophysical tomograms based on their corresponding geophysical data misfits. 
The method is applied to a second synthetic case showing that we can recover the heterogeneities inside the facies, the mean values for the petrophysical properties, and, to some extent, the facies boundaries using the 2D joint inversion of

  20. Prediction of monthly regional groundwater levels through hybrid soft-computing techniques

    Science.gov (United States)

    Chang, Fi-John; Chang, Li-Chiu; Huang, Chien-Wei; Kao, I.-Feng

    2016-10-01

    Groundwater systems are intrinsically heterogeneous with dynamic temporal-spatial patterns, which cause great difficulty in quantifying their complex processes, while reliable predictions of regional groundwater levels are commonly needed for managing water resources to ensure proper service of water demands within a region. In this study, we proposed a novel and flexible soft-computing technique that could effectively extract the complex high-dimensional input-output patterns of basin-wide groundwater-aquifer systems in an adaptive manner. The soft-computing models combined the Self Organized Map (SOM) and the Nonlinear Autoregressive with Exogenous Inputs (NARX) network for predicting monthly regional groundwater levels based on hydrologic forcing data. The SOM could effectively classify the temporal-spatial patterns of regional groundwater levels, the NARX could accurately predict the mean of regional groundwater levels for adjusting the selected SOM, the Kriging was used to interpolate the predictions of the adjusted SOM into finer grids of locations, and consequently the prediction of a monthly regional groundwater level map could be obtained. The Zhuoshui River basin in Taiwan was the study case, and its monthly data sets collected from 203 groundwater stations, 32 rainfall stations and 6 flow stations from 2000 to 2013 were used for modelling purposes. The results demonstrated that the hybrid SOM-NARX model could reliably and suitably predict monthly basin-wide groundwater levels with high correlations (R² > 0.9 in both training and testing cases). The proposed methodology presents a milestone in modelling regional environmental issues and offers an insightful and promising way to predict monthly basin-wide groundwater levels, which is beneficial to authorities for sustainable water resources management.
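    The autoregressive-with-exogenous-input structure that NARX embodies can be illustrated with a linear stand-in: predict the level from its own lags plus a rainfall forcing term. All data and coefficients below are synthetic assumptions; the actual study uses a neural NARX network together with SOM clustering and Kriging.

    ```python
    import numpy as np

    def lagged_design(y, x, p=2):
        # rows: [y[t-1], y[t-2], x[t]]; target: y[t]
        rows = [[y[t - 1], y[t - 2], x[t]] for t in range(p, len(y))]
        return np.array(rows), y[p:]

    rng = np.random.default_rng(1)
    rain = rng.gamma(2.0, 1.0, 200)              # synthetic monthly rainfall forcing
    level = np.zeros(200)
    for t in range(2, 200):                      # groundwater responds with memory
        level[t] = 0.6 * level[t - 1] + 0.2 * level[t - 2] + 0.5 * rain[t]

    X, y = lagged_design(level, rain)
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)  # recovers [0.6, 0.2, 0.5]
    pred = X @ coef
    r2 = 1.0 - np.sum((y - pred) ** 2) / np.sum((y - y.mean()) ** 2)
    ```

    Replacing the least-squares fit with a small feedforward network on the same lagged design matrix is essentially what turns this AR-X stand-in into NARX.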

  1. [Manual airway clearance techniques in adults and adolescents: What level of evidence?]

    Science.gov (United States)

    Cabillic, Michel; Gouilly, Pascal; Reychler, Gregory

    2016-04-13

    The aim of this systematic literature review was to grade the levels of evidence of the most widely used manual airway clearance techniques. A literature search was conducted over the period 1995-2014 in the Medline, PEDro, ScienceDirect, Cochrane Library, REEDOC and kinedoc databases, with the following keywords: "postural drainage", "manual vibrations", "manual chest percussion", "directed cough", "increased expiratory flow", "ELTGOL", "autogenic drainage" and "active cycle of breathing technique". Two hundred and fifty-six articles were identified. After removing duplicates and reading the titles and abstracts, 63 articles were selected, including 9 systematic reviews. This work highlights the lack of useful scientific data and the difficulty of determining levels of evidence for manual airway clearance techniques. Techniques were assessed principally in patients with sputum production (cystic fibrosis, bronchiectasis, COPD, etc.). It also shows the limited pertinence of outcome measures used to quantify congestion and hence the efficacy of airway clearance techniques. The 1994 consensus conference summary table classifying airway clearance techniques according to physical mechanism provides an interesting tool for assessment, grouping together techniques having identical mechanisms of action. From the findings of the present systematic review, it appears that only ELTGOL, autogenic drainage and ACBT present levels of evidence "B". All other techniques have lower levels of evidence. Copyright © 2016. Published by Elsevier Masson SAS.

  2. Set Theory : Techniques and Applications : Curaçao 1995 and Barcelona 1996 Conferences

    CERN Document Server

    Larson, Jean; Bagaria, Joan; Mathias, A

    1998-01-01

    During the past 25 years, set theory has developed in several interesting directions. The most outstanding results cover the application of sophisticated techniques to problems in analysis, topology, infinitary combinatorics and other areas of mathematics. This book contains a selection of contributions, some of which are expository in nature, embracing various aspects of the latest developments. Amongst topics treated are forcing axioms and their applications, combinatorial principles used to construct models, and a variety of other set theoretical tools including inner models, partitions and trees. Audience: This book will be of interest to graduate students and researchers in foundational problems of mathematics.

  3. Slow neutron mapping technique for level interface measurement

    Science.gov (United States)

    Zain, R. M.; Ithnin, H.; Razali, A. M.; Yusof, N. H. M.; Mustapha, I.; Yahya, R.; Othman, N.; Rahman, M. F. A.

    2017-01-01

    Modern industrial plant operations often require accurate level measurement of process liquids in production and storage vessels. A variety of advanced level indicators are commercially available to meet the demand, but these may not suit the specific needs of all situations. The neutron backscatter technique is exceptionally useful for occasional and routine determination, particularly in situations such as pressure vessels with wall thicknesses up to 10 cm, toxic and corrosive chemicals in sealed containers, and liquefied petroleum gas storage vessels. In level measurement, high energy neutrons from a 241Am-Be radioactive source are beamed onto a vessel. Fast neutrons are slowed down mostly by collisions with hydrogen atoms of the material inside the vessel. Some of the thermalized neutrons are scattered back towards the source. By placing a thermal neutron detector next to the source, these backscattered neutrons can be measured. The number of backscattered neutrons is directly proportional to the concentration of hydrogen atoms in front of the neutron detector. As the source and detector are moved together along the side of the vessel, interfaces can be determined as long as they involve a change in hydrogen atom concentration. This paper presents the slow neutron mapping technique to indicate the level interface of a test vessel.
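    The proportionality between backscattered counts and hydrogen concentration suggests a simple way to read the interface off a scan profile, sketched here on synthetic counts (all numbers are illustrative, not measurements from the paper):

    ```python
    # scan positions (m) and a synthetic count-rate profile: the hydrogen-rich
    # liquid below the interface returns far more slow neutrons than the vapor
    # space above it
    heights = [h * 0.1 for h in range(31)]                  # 0.0 .. 3.0 m
    counts = [4800 if h < 1.5 else 1200 for h in heights]   # liquid below 1.5 m

    def interface_height(heights, counts):
        # interface = midpoint of the steepest step in the count profile
        jumps = [abs(counts[i + 1] - counts[i]) for i in range(len(counts) - 1)]
        i = jumps.index(max(jumps))
        return 0.5 * (heights[i] + heights[i + 1])
    ```

    With a 0.1 m scan step, the located interface is accurate to half a step; finer steps or interpolation over the transition region sharpen the estimate.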

  4. Level sets and extrema of random processes and fields

    CERN Document Server

    Azais, Jean-Marc

    2009-01-01

    A timely and comprehensive treatment of random field theory with applications across diverse areas of study Level Sets and Extrema of Random Processes and Fields discusses how to understand the properties of the level sets of paths as well as how to compute the probability distribution of its extremal values, which are two general classes of problems that arise in the study of random processes and fields and in related applications. This book provides a unified and accessible approach to these two topics and their relationship to classical theory and Gaussian processes and fields, and the most modern research findings are also discussed. The authors begin with an introduction to the basic concepts of stochastic processes, including a modern review of Gaussian fields and their classical inequalities. Subsequent chapters are devoted to Rice formulas, regularity properties, and recent results on the tails of the distribution of the maximum. Finally, applications of random fields to various areas of mathematics a...

  5. 76 FR 9004 - Public Comment on Setting Achievement Levels in Writing

    Science.gov (United States)

    2011-02-16

    ... DEPARTMENT OF EDUCATION Public Comment on Setting Achievement Levels in Writing AGENCY: U.S... Achievement Levels in Writing. SUMMARY: The National Assessment Governing Board (Governing Board) is... for NAEP in writing. This notice provides opportunity for public comment and submitting...

  6. A reference data set for validating vapor pressure measurement techniques: homologous series of polyethylene glycols

    Science.gov (United States)

    Krieger, Ulrich K.; Siegrist, Franziska; Marcolli, Claudia; Emanuelsson, Eva U.; Gøbel, Freya M.; Bilde, Merete; Marsh, Aleksandra; Reid, Jonathan P.; Huisman, Andrew J.; Riipinen, Ilona; Hyttinen, Noora; Myllys, Nanna; Kurtén, Theo; Bannan, Thomas; Percival, Carl J.; Topping, David

    2018-01-01

    To predict atmospheric partitioning of organic compounds between gas and aerosol particle phase based on explicit models for gas phase chemistry, saturation vapor pressures of the compounds need to be estimated. Estimation methods based on functional group contributions require training sets of compounds with well-established saturation vapor pressures. However, vapor pressures of semivolatile and low-volatility organic molecules at atmospheric temperatures reported in the literature often differ by several orders of magnitude between measurement techniques. These discrepancies exceed the stated uncertainty of each technique, which is generally reported to be smaller than a factor of 2. At present, there is no general reference technique for measuring saturation vapor pressures of atmospherically relevant compounds with low vapor pressures at atmospheric temperatures. To address this problem, we measured vapor pressures with different techniques over a wide temperature range for intercomparison and to establish a reliable training set. We determined saturation vapor pressures for the homologous series of polyethylene glycols (H - (O - CH2 - CH2)n - OH) for n = 3 to n = 8 ranging in vapor pressure at 298 K from 10^-7 to 5×10^-2 Pa and compare them with quantum chemistry calculations. Such a homologous series provides a reference set that covers several orders of magnitude in saturation vapor pressure, allowing a critical assessment of the lower limits of detection of vapor pressures for the different techniques as well as permitting the identification of potential sources of systematic error. Also, internal consistency within the series allows outlying data to be rejected more easily. Most of the measured vapor pressures agreed within the stated uncertainty range. Deviations mostly occurred for vapor pressure values approaching the lower detection limit of a technique. The good agreement between the measurement techniques (some of which are sensitive to the mass
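    When intercomparing vapor pressures measured over a temperature range, a standard reduction is to fit ln p against 1/T (the integrated Clausius-Clapeyron relation) and read the vaporization enthalpy off the slope. The sketch below uses assumed round-number values, not the measured PEG data:

    ```python
    import math

    R = 8.314462618               # gas constant, J mol^-1 K^-1
    dHvap = 80.0e3                # assumed vaporization enthalpy, J/mol
    p298 = 1.0e-4                 # assumed vapor pressure at 298.15 K, Pa

    def p_sat(T):
        # integrated Clausius-Clapeyron relation with constant enthalpy
        return p298 * math.exp(-dHvap / R * (1.0 / T - 1.0 / 298.15))

    Ts = [280.0 + 5.0 * i for i in range(9)]        # 280-320 K
    invT = [1.0 / T for T in Ts]
    lnp = [math.log(p_sat(T)) for T in Ts]

    # least-squares slope of ln p vs 1/T; slope = -dHvap / R
    n = len(Ts)
    mx, my = sum(invT) / n, sum(lnp) / n
    slope = (sum((u - mx) * (v - my) for u, v in zip(invT, lnp))
             / sum((u - mx) ** 2 for u in invT))
    dH_fit = -slope * R            # recovered enthalpy of vaporization
    ```

    Plotting the residuals of such a fit per technique is one way to spot the systematic deviations the abstract mentions near each technique's detection limit.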

  7. Appropriate criteria set for personnel promotion across organizational levels using analytic hierarchy process (AHP)

    Directory of Open Access Journals (Sweden)

    Charles Noven Castillo

    2017-01-01

    Full Text Available Currently, only a limited established set of criteria exists for personnel promotion to each level of the organization. This study is conducted in order to develop a personnel promotion strategy by identifying specific sets of criteria for each level of the organization. The complexity of identifying the criteria set, along with the subjectivity of these criteria, requires the use of a multi-criteria decision-making approach, particularly the analytic hierarchy process (AHP). Results show different sets of criteria for each management level, which are consistent with several frameworks in the literature. These criteria sets would help avoid mismatches between employees' skills and competencies and their jobs, and at the same time eliminate issues in personnel promotion such as favouritism, glass ceilings, and preferences based on gender and physical attractiveness. This work also shows that personality and traits, job satisfaction, and experience and skills are more critical than social capital across different organizational levels. The contribution of this work is in identifying relevant criteria for developing a personnel promotion strategy across organizational levels.
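    The AHP step such a study relies on — deriving criteria weights from a pairwise comparison matrix via the principal eigenvector, with Saaty's consistency check — can be sketched as follows. The matrix entries are illustrative, not the study's actual judgments:

    ```python
    import numpy as np

    # pairwise comparisons on Saaty's 1-9 scale (hypothetical judgments for
    # three criteria, e.g. experience/skills vs. job satisfaction vs. social capital)
    A = np.array([
        [1.0,   3.0, 5.0],
        [1/3.0, 1.0, 3.0],
        [1/5.0, 1/3.0, 1.0],
    ])

    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)            # principal (largest) eigenvalue
    w = np.abs(eigvecs[:, k].real)
    w = w / w.sum()                        # normalized priority weights

    n = A.shape[0]
    ci = (eigvals.real[k] - n) / (n - 1)   # consistency index
    ri = 0.58                              # Saaty's random index for n = 3
    cr = ci / ri                           # consistency ratio; < 0.1 is acceptable
    ```

    A consistency ratio above 0.1 signals that the pairwise judgments should be revisited before the weights are used for promotion decisions.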

  8. Modified scintigraphic technique for amputation level selection in diabetics

    Energy Technology Data Exchange (ETDEWEB)

    Dwars, B.J.; Rauwerda, J.A.; Broek, T.A.A. van den; Rij, G.L. van; Hollander, W. den; Heidendal, G.A.K.

    1989-01-01

    A modified ¹²³I-antipyrine cutaneous washout technique for the selection of amputation levels is described. The modifications imply a reduction of the time needed for the examination by simultaneous recordings at different levels, and better patient acceptance by reducing inconvenience. Furthermore, both skin perfusion pressure (SPP) and skin blood flow (SBF) are determined from each clearance curve. In a prospective study among 26 diabetic patients presenting with ulcers or gangrene of the foot, both SPP and SBF were determined preoperatively at the selected level of surgery and at adjacent amputation sites. These 26 patients underwent 12 minor foot amputations and 17 major lower limb amputations. Two of these amputations failed to heal. SBF values appeared indicative of the degree of peripheral vascular disease, as low SBF values were found with low SPP values. SPP determinations showed good predictive value: all surgical procedures healed when SPP > 20 mmHg, but 2 out of 3 failed when SPP < 20 mmHg. If SPP values had been decisive, the amputation would have been converted to a lower level in 6 out of 17 cases. This modified scintigraphic technique provides accurate objective information for amputation level selection.
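    A sketch of how SBF follows from a clearance curve: tracer washout is near mono-exponential, so a log-linear fit yields the rate constant k, and flow is proportional to k times a tissue-blood partition coefficient. The partition value and the synthetic counts below are assumptions for illustration, not data from the study:

    ```python
    import math

    lam = 10.0                                    # assumed partition coefficient, ml/g
    k_true = 0.05                                 # synthetic decay rate, /min
    ts = list(range(0, 21))                       # sampling times, minutes
    counts = [1000.0 * math.exp(-k_true * t) for t in ts]

    # least-squares slope of ln(counts) vs t gives -k
    n = len(ts)
    mt = sum(ts) / n
    ml = sum(math.log(c) for c in counts) / n
    k = -sum((t - mt) * (math.log(c) - ml) for t, c in zip(ts, counts)) / \
        sum((t - mt) ** 2 for t in ts)
    sbf = k * lam * 100.0                         # flow in ml/(100 g * min)
    ```

    SPP is then obtained separately, as the external pressure at which the washout rate falls toward zero.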

  9. Level-set reconstruction algorithm for ultrafast limited-angle X-ray computed tomography of two-phase flows.

    Science.gov (United States)

    Bieberle, M; Hampel, U

    2015-06-13

    Tomographic image reconstruction is based on recovering an object distribution from its projections, which have been acquired from all angular views around the object. If the angular range is limited to less than 180° of parallel projections, typical reconstruction artefacts arise when using standard algorithms. To compensate for this, specialized algorithms using a priori information about the object need to be applied. The application behind this work is ultrafast limited-angle X-ray computed tomography of two-phase flows. Here, only a binary distribution of the two phases needs to be reconstructed, which reduces the complexity of the inverse problem. To solve it, a new reconstruction algorithm (LSR) based on the level-set method is proposed. It includes one force function term accounting for matching the projection data and one incorporating a curvature-dependent smoothing of the phase boundary. The algorithm has been validated using simulated as well as measured projections of known structures, and its performance has been compared to the algebraic reconstruction technique and a binary derivative of it. The validation as well as the application of the level-set reconstruction on a dynamic two-phase flow demonstrated its applicability and its advantages over other reconstruction algorithms. © 2015 The Author(s) Published by the Royal Society. All rights reserved.
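    A toy version of the LSR idea can be written down for just two orthogonal projections (row and column sums): evolve a level set function with a force that backprojects the projection residuals, plus a Laplacian standing in for the curvature-dependent smoothing term. This is only a sketch; the real algorithm handles arbitrary limited-angle geometries:

    ```python
    import numpy as np

    y, x = np.mgrid[0:32, 0:32]
    truth = (x - 16) ** 2 + (y - 16) ** 2 < 64        # true binary phase (radius 8)
    p_rows, p_cols = truth.sum(1), truth.sum(0)       # "measured" projections

    phi = np.sqrt((x - 16.0) ** 2 + (y - 16.0) ** 2) - 4.0   # initial guess: too small

    def misfit(phi):
        b = phi < 0
        return float(np.abs(b.sum(1) - p_rows).sum() + np.abs(b.sum(0) - p_cols).sum())

    m0 = misfit(phi)
    for _ in range(400):
        b = phi < 0
        # data term: backprojected projection residuals (region grows where too small)
        r = (p_rows - b.sum(1))[:, None] + (p_cols - b.sum(0))[None, :]
        # smoothing term: simple Laplacian of phi
        lap = (np.roll(phi, 1, 0) + np.roll(phi, -1, 0) +
               np.roll(phi, 1, 1) + np.roll(phi, -1, 1) - 4.0 * phi)
        phi += 0.02 * (-r + 0.2 * lap)
    m1 = misfit(phi)                                  # far below the initial misfit
    ```

    Restricting the reconstruction to a binary region, as in the paper, is what makes such a severely underdetermined projection set workable at all.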

  10. Stabilized Conservative Level Set Method with Adaptive Wavelet-based Mesh Refinement

    Science.gov (United States)

    Shervani-Tabar, Navid; Vasilyev, Oleg V.

    2016-11-01

    This paper addresses one of the main challenges of the conservative level set method, namely the ill-conditioned behavior of the normal vector away from the interface. An alternative formulation for reconstruction of the interface is proposed. Unlike the commonly used methods which rely on the unit normal vector, Stabilized Conservative Level Set (SCLS) uses a modified renormalization vector with diminishing magnitude away from the interface. With the new formulation, in the vicinity of the interface the reinitialization procedure utilizes compressive flux and diffusive terms only in the normal direction to the interface, thus preserving the conservative level set properties, while away from the interfaces the directional diffusion mechanism automatically switches to homogeneous diffusion. The proposed formulation is robust and general. It is especially well suited for use with adaptive mesh refinement (AMR) approaches due to the need for a finer resolution in the vicinity of the interface in comparison with the rest of the domain. All of the results were obtained using the Adaptive Wavelet Collocation Method, a general AMR-type method, which utilizes wavelet decomposition to adapt on steep gradients in the solution while retaining a predetermined order of accuracy.
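    The compressive-flux-plus-diffusion reinitialization that conservative level set methods use (and that SCLS modifies) can be illustrated in 1-D, where the interface normal is fixed at +x. The finite-volume discretization below is a plain sketch, not the SCLS scheme itself:

    ```python
    import numpy as np

    n_pts, L = 200, 1.0
    dx = L / n_pts
    x = (np.arange(n_pts) + 0.5) * dx
    eps = 2 * dx                                  # interface thickness parameter
    phi = (x > 0.5).astype(float)                 # sharp initial interface at x = 0.5

    def reinit_step(phi, dt):
        # face-centered fluxes: compressive phi*(1-phi) balanced by eps*dphi/dx,
        # driving phi toward the steady tanh-like profile while conserving mass
        f = 0.5 * (phi[1:] + phi[:-1])            # phi at interior faces
        flux = f * (1.0 - f) - eps * (phi[1:] - phi[:-1]) / dx
        dphi = np.zeros_like(phi)
        dphi[1:-1] = -(flux[1:] - flux[:-1]) / dx
        return phi + dt * dphi

    mass0 = phi.sum() * dx
    for _ in range(200):
        phi = reinit_step(phi, dt=0.1 * dx)
    mass1 = phi.sum() * dx                        # conserved to machine precision
    ```

    Because the update is in flux form, the integral of phi (the "mass" of the marked phase) is preserved exactly, which is the conservation property the method's name refers to.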

  11. Some numerical studies of interface advection properties of level set ...

    Indian Academy of Sciences (India)

    explicit computational elements moving through an Eulerian grid. ... location. The interface is implicitly defined (captured) as the location of the discontinuity in the ... This level set function is advected with the background flow field and thus ...

  12. A Cartesian Adaptive Level Set Method for Two-Phase Flows

    Science.gov (United States)

    Ham, F.; Young, Y.-N.

    2003-01-01

    In the present contribution we develop a level set method based on local anisotropic Cartesian adaptation as described in Ham et al. (2002). Such an approach should allow for the smallest possible Cartesian grid capable of resolving a given flow. The remainder of the paper is organized as follows. In section 2 the level set formulation for free surface calculations is presented and its strengths and weaknesses relative to other free surface methods are reviewed. In section 3 the collocated numerical method is described. In section 4 the method is validated by solving the 2D and 3D drop oscillation problem. In section 5 we present some results from more complex cases including the 3D drop breakup in an impulsively accelerated free stream, and the 3D immiscible Rayleigh-Taylor instability. Conclusions are given in section 6.

  13. Setting Healthcare Priorities at the Macro and Meso Levels: A Framework for Evaluation.

    Science.gov (United States)

    Barasa, Edwine W; Molyneux, Sassy; English, Mike; Cleary, Susan

    2015-09-16

    Priority setting in healthcare is a key determinant of health system performance. However, there is no widely accepted priority setting evaluation framework. We reviewed literature with the aim of developing and proposing a framework for the evaluation of macro and meso level healthcare priority setting practices. We systematically searched Econlit, PubMed, CINAHL, and EBSCOhost databases and supplemented this with searches in Google Scholar, relevant websites and reference lists of relevant papers. A total of 31 papers on evaluation of priority setting were identified. These were supplemented by broader theoretical literature related to evaluation of priority setting. A conceptual review of selected papers was undertaken. Based on a synthesis of the selected literature, we propose an evaluative framework that requires that priority setting practices at the macro and meso levels of the health system meet the following conditions: (1) Priority setting decisions should incorporate both efficiency and equity considerations as well as the following outcomes; (a) Stakeholder satisfaction, (b) Stakeholder understanding, (c) Shifted priorities (reallocation of resources), and (d) Implementation of decisions. (2) Priority setting processes should also meet the procedural conditions of (a) Stakeholder engagement, (b) Stakeholder empowerment, (c) Transparency, (d) Use of evidence, (e) Revisions, (f) Enforcement, and (g) Being grounded on community values. Available frameworks for the evaluation of priority setting are mostly grounded on procedural requirements, while few have included outcome requirements. There is, however, increasing recognition of the need to incorporate both consequential and procedural considerations in priority setting practices. In this review, we adapt an integrative approach to develop and propose a framework for the evaluation of priority setting practices at the macro and meso levels that draws from these complementary schools of thought. 

  14. Setting Healthcare Priorities at the Macro and Meso Levels: A Framework for Evaluation

    Science.gov (United States)

    Barasa, Edwine W.; Molyneux, Sassy; English, Mike; Cleary, Susan

    2015-01-01

    Background: Priority setting in healthcare is a key determinant of health system performance. However, there is no widely accepted priority setting evaluation framework. We reviewed literature with the aim of developing and proposing a framework for the evaluation of macro and meso level healthcare priority setting practices. Methods: We systematically searched Econlit, PubMed, CINAHL, and EBSCOhost databases and supplemented this with searches in Google Scholar, relevant websites and reference lists of relevant papers. A total of 31 papers on evaluation of priority setting were identified. These were supplemented by broader theoretical literature related to evaluation of priority setting. A conceptual review of selected papers was undertaken. Results: Based on a synthesis of the selected literature, we propose an evaluative framework that requires that priority setting practices at the macro and meso levels of the health system meet the following conditions: (1) Priority setting decisions should incorporate both efficiency and equity considerations as well as the following outcomes; (a) Stakeholder satisfaction, (b) Stakeholder understanding, (c) Shifted priorities (reallocation of resources), and (d) Implementation of decisions. (2) Priority setting processes should also meet the procedural conditions of (a) Stakeholder engagement, (b) Stakeholder empowerment, (c) Transparency, (d) Use of evidence, (e) Revisions, (f) Enforcement, and (g) Being grounded on community values. Conclusion: Available frameworks for the evaluation of priority setting are mostly grounded on procedural requirements, while few have included outcome requirements. There is, however, increasing recognition of the need to incorporate both consequential and procedural considerations in priority setting practices. 
In this review, we adapt an integrative approach to develop and propose a framework for the evaluation of priority setting practices at the macro and meso levels that draws from these

  15. Setting Healthcare Priorities at the Macro and Meso Levels: A Framework for Evaluation

    Directory of Open Access Journals (Sweden)

    Edwine W. Barasa

    2015-11-01

    Full Text Available Background Priority setting in healthcare is a key determinant of health system performance. However, there is no widely accepted priority setting evaluation framework. We reviewed literature with the aim of developing and proposing a framework for the evaluation of macro and meso level healthcare priority setting practices. Methods We systematically searched Econlit, PubMed, CINAHL, and EBSCOhost databases and supplemented this with searches in Google Scholar, relevant websites and reference lists of relevant papers. A total of 31 papers on evaluation of priority setting were identified. These were supplemented by broader theoretical literature related to evaluation of priority setting. A conceptual review of selected papers was undertaken. Results Based on a synthesis of the selected literature, we propose an evaluative framework that requires that priority setting practices at the macro and meso levels of the health system meet the following conditions: (1) Priority setting decisions should incorporate both efficiency and equity considerations as well as the following outcomes; (a) Stakeholder satisfaction, (b) Stakeholder understanding, (c) Shifted priorities (reallocation of resources), and (d) Implementation of decisions. (2) Priority setting processes should also meet the procedural conditions of (a) Stakeholder engagement, (b) Stakeholder empowerment, (c) Transparency, (d) Use of evidence, (e) Revisions, (f) Enforcement, and (g) Being grounded on community values. Conclusion Available frameworks for the evaluation of priority setting are mostly grounded on procedural requirements, while few have included outcome requirements. There is, however, increasing recognition of the need to incorporate both consequential and procedural considerations in priority setting practices. In this review, we adapt an integrative approach to develop and propose a framework for the evaluation of priority setting practices at the macro and meso levels that draws from

  16. Level Sets and Voronoi based Feature Extraction from any Imagery

    DEFF Research Database (Denmark)

    Sharma, O.; Anton, François; Mioc, Darka

    2012-01-01

    Polygon features are of interest in many GEOProcessing applications like shoreline mapping, boundary delineation, change detection, etc. This paper presents a new GPU-based methodology to automate feature extraction, combining level set or mean shift based segmentation together with Voron...

  17. Evaluating healthcare priority setting at the meso level: A thematic review of empirical literature

    Science.gov (United States)

    Waithaka, Dennis; Tsofa, Benjamin; Barasa, Edwine

    2018-01-01

    Background: Decentralization of health systems has made sub-national/regional healthcare systems the backbone of healthcare delivery. These regions are tasked with the difficult responsibility of determining healthcare priorities and resource allocation amidst scarce resources. We aimed to review empirical literature that evaluated priority setting practice at the meso (sub-national) level of health systems. Methods: We systematically searched PubMed, ScienceDirect and Google Scholar databases and supplemented these with manual searching for relevant studies, based on the reference list of selected papers. We only included empirical studies that described and evaluated, or those that only evaluated priority setting practice at the meso-level. A total of 16 papers were identified from LMICs and HICs. We analyzed data from the selected papers by thematic review. Results: Few studies used systematic priority setting processes, and all but one were from HICs. Both formal and informal criteria are used in priority-setting; however, informal criteria appear to be more pervasive in LMICs compared to HICs. The priority setting process at the meso-level is a top-down approach with minimal involvement of the community. Accountability for reasonableness was the most common evaluative framework as it was used in 12 of the 16 studies. Efficiency, reallocation of resources and options for service delivery redesign were the most common outcome measures used to evaluate priority setting. Limitations: Our study was limited by the fact that there are very few empirical studies that have evaluated priority setting at the meso-level and there is likelihood that we did not capture all the studies. Conclusions: Improving priority setting practices at the meso level is crucial to strengthening health systems. This can be achieved through incorporating and adapting systematic priority setting processes and frameworks to the context where used, and making considerations of both process

  18. Reservoir characterisation by a binary level set method and adaptive multiscale estimation

    Energy Technology Data Exchange (ETDEWEB)

    Nielsen, Lars Kristian

    2006-01-15

    The main focus of this work is on estimation of the absolute permeability as a solution of an inverse problem. We have considered both a single-phase and a two-phase flow model. Two novel approaches have been introduced and tested numerically for solving the inverse problems. The first approach is a multi scale zonation technique which is treated in Paper A. The purpose of the work in this paper is to find a coarse scale solution based on production data from wells. In the suggested approach, the robustness of an already developed method, the adaptive multi scale estimation (AME), has been improved by utilising information from several candidate solutions generated by a stochastic optimizer. The new approach also suggests a way of combining a stochastic and a gradient search method, which in general is a problematic issue. The second approach is a piecewise constant level set approach and is applied in Papers B, C, D and E. Paper B considers the stationary single-phase problem, while Papers C, D and E use a two-phase flow model. In the two-phase flow problem we have utilised information from both production data in wells and spatially distributed data gathered from seismic surveys. Due to the higher content of information provided by the spatially distributed data, we search for solutions on a slightly finer scale than one typically does with only production data included. The applied level set method is suitable for reconstruction of fields with a supposed known facies-type of solution. That is, the solution should be close to piecewise constant. This information is utilised through a strong restriction of the number of constant levels in the estimate. On the other hand, the flexibility in the geometries of the zones is much larger for this method than in a typical zonation approach, for example the multi scale approach applied in Paper A. In all these papers, the numerical studies are done on synthetic data sets. An advantage of synthetic data studies is that the true

  19. A level-set method for two-phase flows with soluble surfactant

    Science.gov (United States)

    Xu, Jian-Jun; Shi, Weidong; Lai, Ming-Chih

    2018-01-01

    A level-set method is presented for solving two-phase flows with soluble surfactant. The Navier-Stokes equations are solved along with the bulk surfactant and the interfacial surfactant equations. In particular, the convection-diffusion equation for the bulk surfactant on the irregular moving domain is solved by using a level-set based diffusive-domain method. A conservation law for the total surfactant mass is derived, and a re-scaling procedure for the surfactant concentrations is proposed to compensate for the surfactant mass loss due to numerical diffusion. The overall numerical algorithm is easy to implement. Several numerical simulations in 2D and 3D show the effects of surfactant solubility on drop dynamics under shear flow.
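    The mass-compensation idea mentioned above can be sketched as a uniform re-scaling of the concentration field so that the total mass returns to its conserved value. This is illustrative only, assuming a uniform grid; the paper's actual procedure is more elaborate and interface-aware:

```python
def rescale_concentration(c, cell_volume, target_mass):
    """Uniformly rescale a surfactant concentration field (flat list of
    cell-averaged values on a uniform grid) so that its total mass
    matches the conserved target mass lost to numerical diffusion."""
    current_mass = sum(c) * cell_volume
    if current_mass <= 0.0:
        return list(c)
    factor = target_mass / current_mass
    return [v * factor for v in c]
```

    Calling this after each transport step with the initial total mass restores exact conservation while preserving the shape of the field.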

  20. Multiple variables data sets visualization in ROOT

    International Nuclear Information System (INIS)

    Couet, O

    2008-01-01

    The ROOT graphical framework provides support for many different functions including basic graphics, high-level visualization techniques, output on files, 3D viewing etc. They use well-known world standards to render graphics on screen, to produce high-quality output files, and to generate images for Web publishing. Many techniques allow visualization of all the basic ROOT data types, but the graphical framework was still a bit weak in the visualization of multiple variables data sets. This paper presents latest developments done in the ROOT framework to visualize multiple variables (>4) data sets

  1. The Jump Set under Geometric Regularization. Part 1: Basic Technique and First-Order Denoising

    KAUST Repository

    Valkonen, Tuomo

    2015-01-01

    © 2015 Society for Industrial and Applied Mathematics. Let u ∈ BV(Ω) solve the total variation (TV) denoising problem with L2-squared fidelity and data f. Caselles, Chambolle, and Novaga [Multiscale Model. Simul., 6 (2008), pp. 879-894] have shown the containment H^{m-1}(J_u \ J_f) = 0 of the jump set J_u of u in that of f. Their proof unfortunately depends heavily on the co-area formula, as do many results in this area, and as such is not directly extensible to higher-order, curvature-based, and other advanced geometric regularizers, such as total generalized variation and Euler's elastica. These have received increased attention in recent times due to their better practical regularization properties compared to conventional TV or wavelets. We prove analogous jump set containment properties for a general class of regularizers. We do this with novel Lipschitz transformation techniques and do not require the co-area formula. In the present Part 1 we demonstrate the general technique on first-order regularizers, while in Part 2 we will extend it to higher-order regularizers. In particular, we concentrate in this part on TV and, as a novelty, Huber-regularized TV. We also demonstrate that the technique would apply to nonconvex TV models as well as the Perona-Malik anisotropic diffusion, if these approaches were well-posed to begin with.
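    For readers who want a concrete TV-denoised u to experiment with, a gradient descent on a smoothed 1D surrogate energy can be sketched as follows. This is a standard smoothing device, not the paper's technique, and all parameter values are illustrative:

```python
def tv_denoise_1d(f, lam=0.5, eps=0.05, tau=0.01, iters=2000):
    """Gradient descent on the smoothed total variation energy
        E(u) = sum_i sqrt((u[i+1] - u[i])**2 + eps**2)
             + (1 / (2 * lam)) * sum_i (u[i] - f[i])**2
    The eps term makes E differentiable; lam controls smoothing
    strength, tau is the step size, and u is initialized at f."""
    n = len(f)
    u = list(f)
    for _ in range(iters):
        grad = [(u[i] - f[i]) / lam for i in range(n)]
        for i in range(n - 1):
            d = u[i + 1] - u[i]
            w = d / (d * d + eps * eps) ** 0.5  # derivative of sqrt(d^2 + eps^2)
            grad[i] -= w
            grad[i + 1] += w
        for i in range(n):
            u[i] -= tau * grad[i]
    return u
```

    Because the TV gradient contributions cancel in pairs and u starts at f, the mean of the signal is preserved while oscillations are flattened.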

  2. A level set method for cupping artifact correction in cone-beam CT

    International Nuclear Information System (INIS)

    Xie, Shipeng; Li, Haibo; Ge, Qi; Li, Chunming

    2015-01-01

    Purpose: To reduce cupping artifacts and improve the contrast-to-noise ratio in cone-beam computed tomography (CBCT). Methods: A level set method is proposed to reduce cupping artifacts in the reconstructed image of CBCT. The authors derive a local intensity clustering property of the CBCT image and define a local clustering criterion function of the image intensities in a neighborhood of each point. This criterion function defines an energy in terms of the level set functions, which represent a segmentation result and the cupping artifacts. The cupping artifacts are estimated as a result of minimizing this energy. Results: The cupping artifacts in CBCT are reduced by an average of 90%. The results indicate that the level set-based algorithm is practical and effective for reducing the cupping artifacts and preserving the quality of the reconstructed image. Conclusions: The proposed method focuses on the reconstructed image without requiring any additional physical equipment, is easily implemented, and provides cupping correction through a single-scan acquisition. The experimental results demonstrate that the proposed method successfully reduces the cupping artifacts

  3. Level-set simulations of buoyancy-driven motion of single and multiple bubbles

    International Nuclear Information System (INIS)

    Balcázar, Néstor; Lehmkuhl, Oriol; Jofre, Lluís; Oliva, Assensi

    2015-01-01

    Highlights: • A conservative level-set method is validated and verified. • An extensive study of buoyancy-driven motion of single bubbles is performed. • The interactions of two spherical and ellipsoidal bubbles are studied. • The interaction of multiple bubbles is simulated in a vertical channel. - Abstract: This paper presents a numerical study of buoyancy-driven motion of single and multiple bubbles by means of the conservative level-set method. First, an extensive study of the hydrodynamics of single bubbles rising in a quiescent liquid is performed, including their shape, terminal velocity, drag coefficients and wake patterns. These results are validated against experimental and numerical data well established in the scientific literature. Then, a further study on the interaction of two spherical and ellipsoidal bubbles is performed for different orientation angles. Finally, the interaction of multiple bubbles is explored in a periodic vertical channel. The results show that the conservative level-set approach can be used for accurate modelling of bubble dynamics. Moreover, it is demonstrated that the present method is numerically stable for a wide range of Morton and Reynolds numbers.

  4. LOFT liquid level transducer application techniques and measurement uncertainty

    International Nuclear Information System (INIS)

    Batt, D.L.; Biladeau, G.L.; Goodrich, L.D.; Nightingale, C.M.

    1979-01-01

    A conductivity-sensitive liquid level transducer (LLT) has been designed and used successfully for determining whether steam or water is present in the Loss-of-Fluid Tests (LOFT) performed by EG&G Idaho, Inc., at the Idaho National Engineering Laboratory. The presence of steam or water is determined by establishing a discriminator level which is set manually. A computer program establishes the presence or absence of water for each data point taken. In addition to liquid level, the LLT is used for reactor vessel mass and volume calculations. The uncertainty in the liquid level is essentially the spacing of the LLT electrodes.
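    The discriminator logic described above is simple enough to sketch. The names, units and electrode layout below are illustrative, not LOFT's actual data format:

```python
def classify_phase(samples, discriminator):
    """Water conducts, steam does not: label each conductivity sample
    by comparing it against the manually set discriminator level."""
    return ["water" if s > discriminator else "steam" for s in samples]

def liquid_level(phases, electrode_positions):
    """Estimate the liquid level as the highest electrode position
    reading 'water'; the uncertainty of this estimate is essentially
    the electrode spacing."""
    wet = [z for z, p in zip(electrode_positions, phases) if p == "water"]
    return max(wet) if wet else None
```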

  5. A Simple, Safe Technique for Thorough Seroma Evacuation in the Outpatient Setting

    Directory of Open Access Journals (Sweden)

    Julie E. Park, MD, FACS

    2014-09-01

    Full Text Available Summary: Seroma formation, a common postoperative complication in reconstructive cases, can lead to capsular contracture and increased office visits and expenses. The authors present a safe, novel technique for ensuring the thorough removal of serous fluid in the outpatient setting. By relying on access with an angiocatheter, potential injury to permanent implants is minimized. The use of low continuous wall suction obviates the need of manual suction via multiple syringes and offers a rapid and thorough evacuation of all types of seromas.

  6. A Level Set Discontinuous Galerkin Method for Free Surface Flows

    DEFF Research Database (Denmark)

    Grooss, Jesper; Hesthaven, Jan

    2006-01-01

    We present a discontinuous Galerkin method on a fully unstructured grid for the modeling of unsteady incompressible fluid flows with free surfaces. The surface is modeled by embedding and represented by a level set. We discuss the discretization of the flow equations and the level set equation...
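    The level set equation being discretized is the advection equation φ_t + v·∇φ = 0, whose zero level set tracks the free surface. A toy first-order upwind step in one dimension, nothing like the paper's high-order discontinuous Galerkin scheme but enough to show what the equation does, might look like:

```python
def advect_level_set(phi, vel, dx, dt):
    """One first-order upwind time step of the 1D level set equation
    phi_t + v * phi_x = 0, with periodic boundaries. The zero level
    set (the 'surface') is transported with the velocity field."""
    n = len(phi)
    new = list(phi)
    for i in range(n):
        if vel[i] > 0.0:
            dphi = (phi[i] - phi[i - 1]) / dx          # backward difference
        else:
            dphi = (phi[(i + 1) % n] - phi[i]) / dx    # forward difference
        new[i] = phi[i] - dt * vel[i] * dphi
    return new
```

    The scheme is stable under the usual CFL restriction dt·|v|/dx ≤ 1.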

  7. Priority setting at the micro-, meso- and macro-levels in Canada, Norway and Uganda.

    Science.gov (United States)

    Kapiriri, Lydia; Norheim, Ole Frithjof; Martin, Douglas K

    2007-06-01

    The objectives of this study were (1) to describe the process of healthcare priority setting in Ontario-Canada, Norway and Uganda at the three levels of decision-making; (2) to evaluate the description using the framework for fair priority setting, accountability for reasonableness, so as to identify lessons of good practice. We carried out case studies involving key informant interviews with 184 health practitioners and health planners from the macro-level, meso-level and micro-level in Canada-Ontario, Norway and Uganda (selected by virtue of their varying experiences in priority setting). Interviews were audio-recorded, transcribed and analyzed using a modified thematic approach. The descriptions were evaluated against the four conditions of "accountability for reasonableness": relevance, publicity, revisions and enforcement. Areas of adherence to these conditions were identified as lessons of good practice; areas of non-adherence were identified as opportunities for improvement. (i) At the macro-level, in all three countries, cabinet makes most of the macro-level resource allocation decisions and they are influenced by politics, public pressure, and advocacy. Decisions within the ministries of health are based on objective formulae and evidence. International priorities influenced decisions in Uganda. Some priority-setting reasons are publicized through circulars, printed documents and the Internet in Canada and Norway. At the meso-level, hospital priority-setting decisions were made by the hospital managers and were based on national priorities, guidelines, and evidence. Hospital departments that handle emergencies, such as surgery, were prioritized. Some of the reasons are available on the hospital intranet or presented at meetings. Micro-level practitioners considered medical and social worth criteria. These reasons are not publicized. Many practitioners lacked knowledge of the macro- and meso-level priority-setting processes. (ii) Evaluation

  8. Embedded Real-Time Architecture for Level-Set-Based Active Contours

    Directory of Open Access Journals (Sweden)

    Dejnožková Eva

    2005-01-01

    Methods described by partial differential equations have gained considerable interest because of undoubted advantages such as an easy mathematical description of the underlying physical phenomena, subpixel precision, isotropy, or direct extension to higher dimensions. Though their implementation within the level set framework offers other interesting advantages, their vast industrial deployment on embedded systems is slowed down by their considerable computational effort. This paper exploits the high parallelization potential of the operators from the level set framework and proposes a scalable, asynchronous, multiprocessor platform suitable for system-on-chip solutions. We concentrate on obtaining real-time execution capabilities. The performance is evaluated on a continuous watershed and an object-tracking application based on a simple gradient-based attraction force driving the active contour. The proposed architecture can be realized on commercially available FPGAs. It is built around general-purpose processor cores, and can run code developed with usual tools.

  9. Numerical Modelling of Three-Fluid Flow Using The Level-set Method

    Science.gov (United States)

    Li, Hongying; Lou, Jing; Shang, Zhi

    2014-11-01

    This work presents a numerical model for simulation of three-fluid flow involving two different moving interfaces. These interfaces are captured using the level-set method via two different level-set functions. A combined formulation with only one set of conservation equations for the whole physical domain, consisting of the three different immiscible fluids, is employed. Numerical solution is performed on a fixed mesh using the finite volume method. The surface tension effect is incorporated using the Continuum Surface Force model. Validation of the present model is made against available results for stratified flow and a rising bubble in a container with a free surface. Applications of the present model are demonstrated by a variety of three-fluid flow systems including (1) three-fluid stratified flow, (2) two-fluid stratified flow carrying the third fluid in the form of drops and (3) simultaneous rising and settling of two drops in a stationary third fluid. The work is supported by a Thematic and Strategic Research grant from A*STAR, Singapore (Ref. #: 1021640075).
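    The two-level-set bookkeeping can be illustrated by the pointwise lookup below; the sign convention is one plausible choice, not necessarily the paper's:

```python
def identify_fluid(phi1, phi2):
    """Map the two level set values at a point to one of three fluids:
    fluid 1 where phi1 > 0, fluid 2 where phi1 <= 0 but phi2 > 0,
    and fluid 3 elsewhere. Material properties for the single set of
    conservation equations are then drawn from the identified fluid."""
    if phi1 > 0.0:
        return 1
    if phi2 > 0.0:
        return 2
    return 3
```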

  10. Evaluation of set-up deviations during the irradiation of patients suffering from breast cancer treated with two different techniques

    International Nuclear Information System (INIS)

    Kukolowicz, Pawel Franciszek; Debrowski, Andrzej; Gut, Piotr; Chmielewski, Leszek; Wieczorek, Andrzej; Kedzierawski, Piotr

    2005-01-01

    Purpose: To compare reproducibility of set-up for two different treatment techniques for external irradiation of the breast. Methods and materials: In total, the analysis comprised 56 pairs of portal and simulator films for 14 consecutive patients treated following breast conserving therapy and 98 pairs of portal and simulator films for 20 consecutive patients treated after mastectomy. For the first group the tangential field technique (TF technique) was used; for the second, the inverse hockey stick technique (IHS technique). Evaluation of the treatment reproducibility was performed in terms of systematic and random error calculated for the whole groups, and comparison of set-up accuracy by means of the cumulative distributions of the length of the displacement vector. Results: In the IHS technique and in the medial and lateral fields of the TF technique, displacements larger than 5 mm occurred in 28.3, 15.8 and 25.4% of cases, respectively. For the IHS technique, the systematic errors for lateral and cranial-caudal direction were 1.9 and 1.7 mm, respectively (1 SD), and the random errors for lateral and cranial-caudal direction were 2.0 and 2.5 mm. For the TF technique, the systematic errors for ventral-dorsal and cranial-caudal direction were 2.6 and 1.3 mm for the medial field and 3.7 and 0.7 mm for the lateral field, respectively, and the random errors for lateral and cranial-caudal direction were 2.2 and 1.0 mm for the medial field and 2.9 and 1.1 mm for the lateral field, respectively. Rotations were negligible in the IHS technique. For the TF technique the systematic and random components amounted to about 2.0 degrees (1 SD). Conclusions: Both the inverse hockey stick and standard tangential techniques showed good reproducibility of patients' set-up with respect to the cranial-caudal direction. For the TF technique, the accuracy should be improved for the medial field with respect to the ventral-dorsal direction.
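    The population statistics quoted above are commonly computed by separating the spread of per-patient mean displacements (systematic error) from the average per-patient spread (random error). A sketch of one common convention; the abstract does not spell out the paper's exact estimator:

```python
import statistics

def setup_errors(displacements_by_patient):
    """Systematic error (1 SD): standard deviation of the per-patient
    mean displacements. Random error: mean of the per-patient standard
    deviations (an RMS of the SDs is also commonly used). Input is one
    list of measured displacements per patient, in mm."""
    patient_means = [statistics.mean(d) for d in displacements_by_patient]
    patient_sds = [statistics.stdev(d) for d in displacements_by_patient]
    systematic = statistics.stdev(patient_means)
    random_error = statistics.mean(patient_sds)
    return systematic, random_error
```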

  11. Constituent-level pile-up mitigation techniques in ATLAS

    CERN Document Server

    The ATLAS collaboration

    2017-01-01

    Pile-up of simultaneous proton-proton collisions at the LHC has a significant impact on jet reconstruction. In this note the performance of several pile-up mitigation techniques is evaluated in detailed simulations of the ATLAS experiment. Four algorithms that act on the jet-constituent level are evaluated: SoftKiller, the cluster vertex fraction algorithm and Voronoi and constituent subtraction. We find that application of these constituent-level algorithms improves the resolution of low-transverse-momentum jets. The improvement is significant for collisions with 80-200 simultaneous proton-proton collisions envisaged in future runs of the LHC.

  12. A modified scintigraphic technique for amputation level selection in diabetics

    International Nuclear Information System (INIS)

    Dwars, B.J.; Rauwerda, J.A.; Broek, T.A.A. van den; Rij, G.L. van; Hollander, W. den; Heidendal, G.A.K.

    1989-01-01

    A modified 123I-antipyrine cutaneous washout technique for the selection of amputation levels is described. The modifications imply a reduction of the time needed for the examination by simultaneous recordings at different levels, and a better patient acceptance by reducing inconvenience. Furthermore, both skin perfusion pressure (SPP) and skin blood flow (SBF) are determined from each clearance curve. In a prospective study among 26 diabetic patients presenting with ulcers or gangrene of the foot, both SPP and SBF were determined preoperatively on the selected level of surgery and on adjacent amputation sites. These 26 patients underwent 12 minor foot amputations and 17 major lower limb amputations. Two of these amputations failed to heal. SBF values appeared indicative of the degree of peripheral vascular disease, as low SBF values were found with low SPP values. SPP determinations revealed good predictive values: all surgical procedures healed when SPP > 20 mmHg, but 2 out of 3 failed when SPP < 2 mmHg. If SPP values had been decisive, the amputation would have been converted to a lower level in 6 out of 17 cases. This modified scintigraphic technique provides accurate objective information for amputation level selection. (orig.)
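    The reported decision rule lends itself to a one-line selection procedure. The level names and threshold handling below are illustrative, not clinical guidance:

```python
def select_amputation_level(spp_by_level, threshold_mmhg=20.0):
    """Given (level, skin perfusion pressure in mmHg) pairs ordered from
    most distal to most proximal, return the most distal level expected
    to heal, i.e. the first whose SPP exceeds the healing threshold."""
    for level, spp in spp_by_level:
        if spp > threshold_mmhg:
            return level
    return None  # no level meets the threshold
```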

  13. Ant colony optimisation-direct cover: a hybrid ant colony direct cover technique for multi-level synthesis of multiple-valued logic functions

    Science.gov (United States)

    Abd-El-Barr, Mostafa

    2010-12-01

    The use of non-binary (multiple-valued) logic in the synthesis of digital systems can lead to savings in chip area. Advances in very large scale integration (VLSI) technology have enabled the successful implementation of multiple-valued logic (MVL) circuits. A number of heuristic algorithms for the synthesis of (near) minimal sum-of-products (two-level) realisations of MVL functions have been reported in the literature. The direct cover (DC) technique is one such algorithm. The ant colony optimisation (ACO) algorithm is a meta-heuristic that uses constructive greediness to explore a large solution space in finding (near) optimal solutions. The ACO algorithm mimics the behaviour of real ants, which use the shortest path to reach food sources. We have previously introduced an ACO-based heuristic for the synthesis of two-level MVL functions. In this article, we introduce the ACO-DC hybrid technique for the synthesis of multi-level MVL functions. The basic idea is to use an ant to decompose a given MVL function into a number of levels and then synthesise each sub-function using a DC-based technique. The results obtained using the proposed approach are compared to those obtained using existing techniques reported in the literature. A benchmark set consisting of 50,000 randomly generated 2-variable 4-valued functions is used in the comparison. The results obtained using the proposed ACO-DC technique are shown to produce efficient realisations in terms of the average number of gates (as a measure of chip area) needed for the synthesis of a given MVL function.
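    The two ACO ingredients the article relies on, probabilistic solution construction and pheromone update, can be sketched generically. This is simplified textbook ACO; the gate-level encoding of the MVL problem is omitted:

```python
import random

def ant_choose(options, pheromone, heuristic, alpha=1.0, beta=2.0, rng=random):
    """Roulette-wheel selection used by ants: option i is picked with
    probability proportional to pheromone[i]**alpha * heuristic[i]**beta."""
    weights = [pheromone[i] ** alpha * heuristic[i] ** beta for i in options]
    total = sum(weights)
    r = rng.random() * total
    acc = 0.0
    for i, w in zip(options, weights):
        acc += w
        if r < acc:
            return i
    return options[-1]  # fallback (e.g. all weights zero)

def update_pheromone(pheromone, chosen, quality, rho=0.1):
    """Evaporate every trail by factor (1 - rho), then deposit extra
    pheromone on the option used in the best solution found."""
    for i in range(len(pheromone)):
        pheromone[i] *= 1.0 - rho
    pheromone[chosen] += quality
```

    Repeated construct/update cycles concentrate pheromone on choices that led to small realisations, which is the "constructive greediness" the abstract refers to.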

  14. Level set method for image segmentation based on moment competition

    Science.gov (United States)

    Min, Hai; Wang, Xiao-Feng; Huang, De-Shuang; Jin, Jing; Wang, Hong-Zhi; Li, Hai

    2015-05-01

    We propose a level set method for image segmentation which introduces the moment competition and weakly supervised information into the energy functional construction. Different from the region-based level set methods which use force competition, the moment competition is adopted to drive the contour evolution. Here, a so-called three-point labeling scheme is proposed to manually label three independent points (weakly supervised information) on the image. Then the intensity differences between the three points and the unlabeled pixels are used to construct the force arms for each image pixel. The corresponding force is generated from the global statistical information of a region-based method and weighted by the force arm. As a result, the moment can be constructed and incorporated into the energy functional to drive the evolving contour to approach the object boundary. In our method, the force arm can take full advantage of the three-point labeling scheme to constrain the moment competition. Additionally, the global statistical information and weakly supervised information are successfully integrated, which makes the proposed method more robust than traditional methods with respect to initial contour placement and parameter setting. Experimental results with performance analysis also show the superiority of the proposed method on segmenting different types of complicated images, such as noisy images, three-phase images, images with intensity inhomogeneity, and texture images.

  15. Topological Hausdorff dimension and level sets of generic continuous functions on fractals

    International Nuclear Information System (INIS)

    Balka, Richárd; Buczolich, Zoltán; Elekes, Márton

    2012-01-01

    Highlights: ► We examine a new fractal dimension, the so-called topological Hausdorff dimension. ► The generic continuous function has a level set of maximal Hausdorff dimension. ► This maximal dimension is the topological Hausdorff dimension minus one. ► Homogeneity implies that “most” level sets are of this dimension. ► We calculate the various dimensions of the graph of the generic function. - Abstract: In an earlier paper we introduced a new concept of dimension for metric spaces, the so-called topological Hausdorff dimension. For a compact metric space K let dim_H K and dim_tH K denote its Hausdorff and topological Hausdorff dimension, respectively. We proved that this new dimension describes the Hausdorff dimension of the level sets of the generic continuous function on K, namely sup{dim_H f^{-1}(y) : y ∈ R} = dim_tH K − 1 for the generic f ∈ C(K), provided that K is not totally disconnected; otherwise every non-empty level set is a singleton. We also proved that if K is not totally disconnected and sufficiently homogeneous then dim_H f^{-1}(y) = dim_tH K − 1 for the generic f ∈ C(K) and the generic y ∈ f(K). The most important goal of this paper is to make these theorems more precise. As for the first result, we prove that the supremum is actually attained on the left hand side of the first equation above, and also show that there may only be a unique level set of maximal Hausdorff dimension. As for the second result, we characterize those compact metric spaces for which for the generic f ∈ C(K) and the generic y ∈ f(K) we have dim_H f^{-1}(y) = dim_tH K − 1. We also generalize a result of B. Kirchheim by showing that if K is self-similar then for the generic f ∈ C(K) for every y ∈ int f(K) we have dim_H f^{-1}(y) = dim_tH K − 1. Finally, we prove that the graph of the generic f ∈ C(K) has the same Hausdorff and topological Hausdorff dimension as K.

  16. Relationships between college settings and student alcohol use before, during and after events: a multi-level study.

    Science.gov (United States)

    Paschall, Mallie J; Saltz, Robert F

    2007-11-01

    We examined how alcohol risk is distributed based on college students' drinking before, during and after they go to certain settings. Students attending 14 California public universities (N=10,152) completed a web-based or mailed survey in the fall 2003 semester, which included questions about how many drinks they consumed before, during and after the last time they went to six settings/events: fraternity or sorority party, residence hall party, campus event (e.g. football game), off-campus party, bar/restaurant and outdoor setting (referent). Multi-level analyses were conducted using hierarchical linear modeling (HLM) to examine relationships between type of setting and level of alcohol use before, during and after going to the setting, and possible age and gender differences in these relationships. Drinking episodes (N=24,207) were level 1 units, students were level 2 units and colleges were level 3 units. The highest drinking levels were observed during all settings/events except campus events, with the highest number of drinks being consumed at off-campus parties, followed by residence hall and fraternity/sorority parties. The number of drinks consumed before a fraternity/sorority party was higher than for other settings/events. Age group and gender differences in relationships between type of setting/event and 'before', 'during' and 'after' drinking levels also were observed. For example, going to a bar/restaurant (relative to an outdoor setting) was positively associated with 'during' drinks among students of legal drinking age while no relationship was observed for underage students. Findings of this study indicate differences in the extent to which college settings are associated with student drinking levels before, during and after related events, and may have implications for intervention strategies targeting different types of settings.
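    At the episode level, the basic quantity being modeled is drinks per setting. A toy aggregation in plain Python, standing in only for the level-1 summary that the full three-level HLM extends with student- and college-level effects (field names are invented):

```python
from collections import defaultdict
from statistics import mean

def mean_drinks_by_setting(episodes):
    """episodes: iterable of (setting, drinks) pairs, one per drinking
    episode (level-1 unit). Returns the mean number of drinks consumed
    in each setting, ignoring the student and college groupings that a
    multi-level model would account for."""
    groups = defaultdict(list)
    for setting, drinks in episodes:
        groups[setting].append(drinks)
    return {s: mean(v) for s, v in groups.items()}
```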

  17. Individual- and Setting-Level Correlates of Secondary Traumatic Stress in Rape Crisis Center Staff.

    Science.gov (United States)

    Dworkin, Emily R; Sorell, Nicole R; Allen, Nicole E

    2016-02-01

    Secondary traumatic stress (STS) is an issue of significant concern among providers who work with survivors of sexual assault. Although STS has been studied in relation to individual-level characteristics of a variety of types of trauma responders, less research has focused specifically on rape crisis centers as environments that might convey risk or protection from STS, and no research to our knowledge has modeled setting-level variation in correlates of STS. The current study uses a sample of 164 staff members representing 40 rape crisis centers across a single Midwestern state to investigate the staff member- and agency-level correlates of STS. Results suggest that correlates exist at both levels of analysis. Younger age and greater severity of sexual assault history were statistically significant individual-level predictors of increased STS. Greater frequency of supervision was more strongly related to secondary stress for non-advocates than for advocates. At the setting level, lower levels of supervision and higher client loads agency-wide accounted for unique variance in staff members' STS. These findings suggest that characteristics of both providers and their settings are important to consider when understanding their STS. © The Author(s) 2014.

  18. Computational techniques in tribology and material science at the atomic level

    Science.gov (United States)

    Ferrante, J.; Bozzolo, G. H.

    1992-01-01

    Computations in tribology and material science at the atomic level present considerable difficulties. Computational techniques ranging from first-principles to semi-empirical and their limitations are discussed. Example calculations of metallic surface energies using semi-empirical techniques are presented. Finally, application of the methods to calculation of adhesion and friction are presented.

  19. A Simplified Technique for Implant-Abutment Level Impression after Soft Tissue Adaptation around Provisional Restoration

    Science.gov (United States)

    Kutkut, Ahmad; Abu-Hammad, Osama; Frazer, Robert

    2016-01-01

    Impression techniques for implant restorations can be implant level or abutment level impressions with open tray or closed tray techniques. Conventional implant-abutment level impression techniques are predictable for maximizing esthetic outcomes. Restoration of the implant traditionally requires the use of the metal or plastic impression copings, analogs, and laboratory components. Simplifying the dental implant restoration by reducing armamentarium through incorporating conventional techniques used daily for crowns and bridges will allow more general dentists to restore implants in their practices. The demonstrated technique is useful when modifications to implant abutments are required to correct the angulation of malpositioned implants. This technique utilizes conventional crown and bridge impression techniques. As an added benefit, it reduces costs by utilizing techniques used daily for crowns and bridges. The aim of this report is to describe a simplified conventional impression technique for custom abutments and modified prefabricated solid abutments for definitive restorations. PMID:29563457

  20. A Simplified Technique for Implant-Abutment Level Impression after Soft Tissue Adaptation around Provisional Restoration

    Directory of Open Access Journals (Sweden)

    Ahmad Kutkut

    2016-05-01

    Full Text Available Impression techniques for implant restorations can be implant level or abutment level impressions with open tray or closed tray techniques. Conventional implant-abutment level impression techniques are predictable for maximizing esthetic outcomes. Restoration of the implant traditionally requires the use of the metal or plastic impression copings, analogs, and laboratory components. Simplifying the dental implant restoration by reducing armamentarium through incorporating conventional techniques used daily for crowns and bridges will allow more general dentists to restore implants in their practices. The demonstrated technique is useful when modifications to implant abutments are required to correct the angulation of malpositioned implants. This technique utilizes conventional crown and bridge impression techniques. As an added benefit, it reduces costs by utilizing techniques used daily for crowns and bridges. The aim of this report is to describe a simplified conventional impression technique for custom abutments and modified prefabricated solid abutments for definitive restorations.

  1. Application of neutron backscatter techniques to level measurement problems

    International Nuclear Information System (INIS)

    Leonardi-Cattolica, A.M.; McMillan, D.H.; Telfer, A.; Griffin, L.H.; Hunt, R.H.

    1982-01-01

    We have designed and built portable level detectors and fixed level monitors based on neutron scattering and detection principles. The main components of these devices, which we call neutron backscatter gauges, are a neutron emitting radioisotope, a neutron detector, and a ratemeter. The gauge is a good detector for hydrogen but is much less sensitive to most other materials. This allows level measurements of hydrogen bearing materials, such as hydrocarbons, to be made through the walls of metal vessels. Measurements can be made conveniently through steel walls which are a few inches thick. We have used neutron backscatter gauges in a wide variety of level measurement applications encountered in the petrochemical industry. In a number of cases, the neutron techniques have proven to be superior to conventional level measurement methods, including gamma ray methods

  2. Application de X-FEM et des level-sets à l'homogénéisation de matériaux aléatoires caractérisés par imagerie numérique

    OpenAIRE

    Ionescu , Irina; Moës , Nicolas; Cartraud , Patrice; Béringhier , Marianne

    2007-01-01

    International audience; The advances of material characterization by means of imaging techniques require powerful computational methods for numerical analyses. This paper focuses on the advantages of coupling the X-FEM and level sets to solve microstructures with complex geometry. The level set information is obtained from a digital image and then used within an X-FEM computation, where the mesh does not need to conform to the material interface. An example of homogenization is presented.
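As a rough illustration of the first step this record describes, turning a digital image into level set data, the sketch below computes a signed distance function from a binary image by brute force. The function name and the exhaustive nearest-interface search are illustrative assumptions (pure Python, small grids only); production codes would use a fast marching method or a fast distance transform instead.

```python
# Sketch: deriving a level set (signed distance) function from a binary
# image, of the kind fed into an X-FEM computation. Brute force for clarity.
import math

def signed_distance(image):
    """image: 2D list of 0/1 values (1 = inclusion). Returns phi, negative inside."""
    rows, cols = len(image), len(image[0])
    # Interface pixels: any pixel with a 4-neighbour of different value.
    interface = [(i, j) for i in range(rows) for j in range(cols)
                 if any(0 <= i + di < rows and 0 <= j + dj < cols
                        and image[i + di][j + dj] != image[i][j]
                        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)))]
    phi = [[0.0] * cols for _ in range(rows)]
    for i in range(rows):
        for j in range(cols):
            d = min(math.hypot(i - a, j - b) for a, b in interface)
            phi[i][j] = -d if image[i][j] else d  # sign convention: phi < 0 inside
    return phi
```

The zero level set of `phi` then traces the material interface independently of any mesh.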

  3. Improved mortar set-up technique

    CSIR Research Space (South Africa)

    De Villiers, D

    2010-05-01

    Full Text Available: slide presentation "Improved Mortar Set-up Technique", presented at the Mortar Systems Conference by D de Villiers, May 2009 (CSIR). Only the slide titles are recoverable from the source: Mobile Mortars, Mortar Tests, and Electronic Sensors.

  4. Use of segregation techniques to reduce stored low level waste

    International Nuclear Information System (INIS)

    Nascimento Viana, R.; Vianna Mariano, N.; Antonio do Amaral, M.

    2000-01-01

    This paper describes the use of segregation techniques in reducing the stored Low Level Waste on Intermediate Waste Repository 1, at Angra Nuclear Power Plant Site, from 1701 to 425 drums of compacted waste. (author)

  5. Pulsating potentiometric titration technique for assay of dissolved oxygen in water at trace level.

    Science.gov (United States)

    Sahoo, P; Ananthanarayanan, R; Malathi, N; Rajiniganth, M P; Murali, N; Swaminathan, P

    2010-06-11

    A simple but high performance potentiometric titration technique using pulsating sensors has been developed for assay of dissolved oxygen (DO) in water samples down to 10.0 μg L⁻¹ levels. The technique involves Winkler titration chemistry, commonly used for determination of dissolved oxygen in water at mg L⁻¹ levels, with a modified methodology for accurate detection of the end point even at the 10.0 μg L⁻¹ DO level present in the sample. An indigenously built sampling-cum-pretreatment vessel has been deployed for collection and chemical fixing of dissolved oxygen in water samples from a flowing water line without exposure to air. A potentiometric titration facility using pulsating sensors developed in-house is used to carry out the titration. The power of the titration technique has been realised in the estimation of a very dilute solution of iodine equivalent to 10 μg L⁻¹ O₂. Finally, several water samples containing dissolved oxygen from mg L⁻¹ to μg L⁻¹ levels were successfully analysed with excellent reproducibility using this new technique. The precision in measurement of DO in water at the 10 μg L⁻¹ O₂ level is 0.14 (n = 5), RSD: 1.4%. Probably for the first time, a potentiometric titration technique has been successfully deployed for assay of dissolved oxygen in water samples at 10 μg L⁻¹ levels. Copyright 2010 Elsevier B.V. All rights reserved.

  6. Pulsating potentiometric titration technique for assay of dissolved oxygen in water at trace level

    International Nuclear Information System (INIS)

    Sahoo, P.; Ananthanarayanan, R.; Malathi, N.; Rajiniganth, M.P.; Murali, N.; Swaminathan, P.

    2010-01-01

    A simple but high performance potentiometric titration technique using pulsating sensors has been developed for assay of dissolved oxygen (DO) in water samples down to 10.0 μg L⁻¹ levels. The technique involves Winkler titration chemistry, commonly used for determination of dissolved oxygen in water at mg L⁻¹ levels, with modification in methodology for accurate detection of end point even at 10.0 μg L⁻¹ levels DO present in the sample. An indigenously built sampling cum pretreatment vessel has been deployed for collection and chemical fixing of dissolved oxygen in water samples from flowing water line without exposure to air. A potentiometric titration facility using pulsating sensors developed in-house is used to carry out titration. The power of the titration technique has been realised in estimation of very dilute solution of iodine equivalent to 10 μg L⁻¹ O₂. Finally, several water samples containing dissolved oxygen from mg L⁻¹ to μg L⁻¹ levels were successfully analysed with excellent reproducibility using this new technique. The precision in measurement of DO in water at 10 μg L⁻¹ O₂ level is 0.14 (n = 5), RSD: 1.4%. Probably for the first time a potentiometric titration technique has been successfully deployed for assay of dissolved oxygen in water samples at 10 μg L⁻¹ levels.

  7. Quantitative characterization of metastatic disease in the spine. Part I. Semiautomated segmentation using atlas-based deformable registration and the level set method

    International Nuclear Information System (INIS)

    Hardisty, M.; Gordon, L.; Agarwal, P.; Skrinskas, T.; Whyne, C.

    2007-01-01

    Quantitative assessment of metastatic disease in bone is often considered immeasurable and, as such, patients with skeletal metastases are often excluded from clinical trials. In order to effectively quantify the impact of metastatic tumor involvement in the spine, accurate segmentation of the vertebra is required. Manual segmentation can be accurate but involves extensive and time-consuming user interaction. Potential solutions to automating segmentation of metastatically involved vertebrae are demons deformable image registration and level set methods. The purpose of this study was to develop a semiautomated method to accurately segment tumor-bearing vertebrae using the aforementioned techniques. By maintaining morphology of an atlas, the demons-level set composite algorithm was able to accurately differentiate between trans-cortical tumors and surrounding soft tissue of identical intensity. The algorithm successfully segmented both the vertebral body and trabecular centrum of tumor-involved and healthy vertebrae. This work validates our approach as equivalent in accuracy to an experienced user

  8. A new level set model for cell image segmentation

    Science.gov (United States)

    Ma, Jing-Feng; Hou, Kai; Bao, Shang-Lian; Chen, Chun

    2011-02-01

    In this paper we first determine three phases of cell images: background, cytoplasm and nucleolus according to the general physical characteristics of cell images, and then develop a variational model, based on these characteristics, to segment nucleolus and cytoplasm from their relatively complicated backgrounds. In the meantime, the information obtained by preprocessing the cell images with the OTSU algorithm is used to initialize the level set function in the model, which speeds up the segmentation and presents satisfactory results in cell image processing.
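The OTSU initialization step mentioned here can be sketched as follows. This is a generic between-class-variance implementation of Otsu's method over a list of integer gray levels, not the paper's code; the threshold it returns would seed the initial level set contour.

```python
# Minimal Otsu thresholding sketch (pure Python): pick the gray level that
# maximizes the between-class variance of the resulting fore/background split.
def otsu_threshold(pixels, levels=256):
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    sum_all = sum(g * h for g, h in enumerate(hist))
    sum_b = 0.0
    w_b = 0
    best_t, best_var = 0, -1.0
    for t in range(levels):
        w_b += hist[t]                      # background weight up to t
        if w_b == 0:
            continue
        w_f = total - w_b                   # foreground weight
        if w_f == 0:
            break
        sum_b += t * hist[t]
        m_b = sum_b / w_b                   # background mean
        m_f = (sum_all - sum_b) / w_f       # foreground mean
        var_between = w_b * w_f * (m_b - m_f) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t
```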

  9. Setting Component Priorities in Protecting NPPs against Cyber-Attacks Using Reliability Analysis Techniques

    International Nuclear Information System (INIS)

    Choi, Moon Kyoung; Seong, Poong Hyun; Son, Han Seong

    2017-01-01

    The digitalization of infrastructure makes systems vulnerable to cyber threats and hybrid attacks. According to the ICS-CERT report, the number of vulnerabilities in ICS industries is increasing rapidly over time. Digital I&C systems have been developed and installed in nuclear power plants, and due to the installation of these digital I&C systems, cyber security concerns are increasing in the nuclear industry. However, there are too many critical digital assets to be inspected in digitalized NPPs. In order to reduce the inefficiency of regulation in nuclear facilities, the critical components that are directly related to an accident are elicited by using reliability analysis techniques. Target initiating events are selected, and their headings are analyzed through event tree analysis to determine whether the headings can be affected by cyber-attacks. Among the headings, those whose failure under cyber-attack leads directly to core damage are finally selected as the targets for deriving the minimum cut-sets. We analyze the fault trees and derive the minimal cut-sets. In terms of the original PSA, the probability value of the cut-sets is important, but the probability is not important in terms of the cyber security of NPPs. The important factor is the number of basic events constituting the minimal cut-sets, which is proportional to vulnerability.

  10. Performance of an iterative two-stage bayesian technique for population pharmacokinetic analysis of rich data sets

    NARCIS (Netherlands)

    Proost, Johannes H.; Eleveld, Douglas J.

    2006-01-01

    Purpose. To test the suitability of an Iterative Two-Stage Bayesian (ITSB) technique for population pharmacokinetic analysis of rich data sets, and to compare ITSB with Standard Two-Stage (STS) analysis and nonlinear Mixed Effect Modeling (MEM). Materials and Methods. Data from a clinical study with

  11. Level-set-based reconstruction algorithm for EIT lung images: first clinical results.

    Science.gov (United States)

    Rahmati, Peyman; Soleimani, Manuchehr; Pulletz, Sven; Frerichs, Inéz; Adler, Andy

    2012-05-01

    We show the first clinical results using the level-set-based reconstruction algorithm for electrical impedance tomography (EIT) data. The level-set-based reconstruction method (LSRM) allows the reconstruction of non-smooth interfaces between image regions, which are typically smoothed by traditional voxel-based reconstruction methods (VBRMs). We develop a time difference formulation of the LSRM for 2D images. The proposed reconstruction method is applied to reconstruct clinical EIT data of a slow flow inflation pressure-volume manoeuvre in lung-healthy and adult lung-injury patients. Images from the LSRM and the VBRM are compared. The results show comparable reconstructed images, but with an improved ability to reconstruct sharp conductivity changes in the distribution of lung ventilation using the LSRM.

  12. Level-set-based reconstruction algorithm for EIT lung images: first clinical results

    International Nuclear Information System (INIS)

    Rahmati, Peyman; Adler, Andy; Soleimani, Manuchehr; Pulletz, Sven; Frerichs, Inéz

    2012-01-01

    We show the first clinical results using the level-set-based reconstruction algorithm for electrical impedance tomography (EIT) data. The level-set-based reconstruction method (LSRM) allows the reconstruction of non-smooth interfaces between image regions, which are typically smoothed by traditional voxel-based reconstruction methods (VBRMs). We develop a time difference formulation of the LSRM for 2D images. The proposed reconstruction method is applied to reconstruct clinical EIT data of a slow flow inflation pressure–volume manoeuvre in lung-healthy and adult lung-injury patients. Images from the LSRM and the VBRM are compared. The results show comparable reconstructed images, but with an improved ability to reconstruct sharp conductivity changes in the distribution of lung ventilation using the LSRM. (paper)

  13. Assessment of Random Assignment in Training and Test Sets using Generalized Cluster Analysis Technique

    Directory of Open Access Journals (Sweden)

    Sorana D. BOLBOACĂ

    2011-06-01

    Full Text Available Aim: The properness of random assignment of compounds to training and validation sets was assessed using the generalized cluster technique. Material and Method: A quantitative Structure-Activity Relationship model using the Molecular Descriptors Family on Vertices was evaluated in terms of the assignment of carboquinone derivatives to training and test sets during the leave-many-out analysis. Assignment of compounds was investigated using five variables: observed anticancer activity and four structure descriptors. Generalized cluster analysis with the K-means algorithm was applied in order to investigate whether the assignment of compounds was proper. The Euclidian distance and maximization of the initial distance using a cross-validation with a v-fold of 10 were applied. Results: All five variables included in the analysis proved to have a statistically significant contribution to the identification of clusters. Three clusters were identified, each of them containing carboquinone derivatives belonging both to the training and to the test sets. The observed activity of the carboquinone derivatives proved to be normally distributed within every cluster. The presence of training and test sets in all clusters identified using generalized cluster analysis with the K-means algorithm, and the distribution of observed activity within clusters, support a proper assignment of compounds to training and test sets. Conclusion: Generalized cluster analysis using the K-means algorithm proved to be a valid method for assessing the random assignment of carboquinone derivatives to training and test sets.
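The K-means step of the generalized cluster analysis can be illustrated with a minimal Lloyd's-algorithm sketch using the Euclidean distance named in the abstract. This is a generic implementation, not the authors' tool; the function name, iteration cap, and empty-cluster handling are assumptions.

```python
# Sketch of K-means (Lloyd's algorithm) with Euclidean distance.
import math
import random

def kmeans(points, k, iters=100, seed=0):
    rng = random.Random(seed)
    centers = rng.sample(points, k)          # initial centers: k distinct points
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:                     # assign each point to nearest center
            i = min(range(k), key=lambda c: math.dist(p, centers[c]))
            clusters[i].append(p)
        new_centers = [                      # recompute centers as cluster means
            tuple(sum(x) / len(cl) for x in zip(*cl)) if cl else centers[i]
            for i, cl in enumerate(clusters)]
        if new_centers == centers:           # converged: assignments are stable
            break
        centers = new_centers
    return centers, clusters
```

Checking that compounds from both the training and test sets land in every cluster, as the abstract does, then amounts to inspecting the returned `clusters`.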

  14. The development and use of decision aiding techniques for establishing intervention levels

    International Nuclear Information System (INIS)

    Kelly, G.N.; Sinnaeve, J.

    1989-01-01

    Following the Chernobyl accident there has been considerable international discussion on the principles underlying the choice of intervention levels and their practical application. While there is broad agreement on the underlying principles - that is, to put potentially exposed individuals into a better position in the sense that lower overall risks are achieved at reasonable cost in financial and social terms - the determination of what constitutes the most appropriate type and level of intervention in any particular circumstances is more complex. Within the CEC Radiation Protection Research Programme, techniques are being developed to aid well-founded and more transparent decisions on the choice of intervention levels. The techniques are described and areas are identified where they might usefully be applied

  15. Level Set Projection Method for Incompressible Navier-Stokes on Arbitrary Boundaries

    KAUST Repository

    Williams-Rioux, Bertrand

    2012-01-12

    A second-order level set projection method for the incompressible Navier-Stokes equations is proposed to solve flow around arbitrary geometries. We used a rectilinear grid with collocated cell-centered velocity and pressure. An explicit Godunov procedure is used to address the nonlinear advection terms, and an implicit Crank-Nicholson method to update viscous effects. An approximate pressure projection is implemented at the end of the time stepping, using multigrid as a conventional fast iterative method. The level set method developed by Osher and Sethian [17] is implemented to address real momentum and pressure boundary conditions by the advection of a distance function, as proposed by Aslam [3]. Numerical results for the Strouhal number and drag coefficients validated the model with good accuracy for flow over a cylinder in the parallel shedding regime (47 < Re < 180). Simulations for an array of cylinders and an oscillating cylinder were performed, with the latter demonstrating our method's ability to handle dynamic boundary conditions.
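The advection of a distance function at the heart of the Osher-Sethian level set method can be illustrated in one dimension with a first-order upwind step for phi_t + u*phi_x = 0. This simplified sketch is an assumption for illustration only; the paper itself uses a second-order Godunov procedure in 2D.

```python
# First-order upwind advection of a level set function: the zero level set
# (the interface) is transported at the prescribed speed u.
def advect(phi, u, dx, dt, steps):
    n = len(phi)
    for _ in range(steps):
        new = phi[:]                     # boundary values held fixed
        for i in range(1, n - 1):
            if u > 0:                    # upwind: difference against the inflow side
                new[i] = phi[i] - u * dt / dx * (phi[i] - phi[i - 1])
            else:
                new[i] = phi[i] - u * dt / dx * (phi[i + 1] - phi[i])
        phi = new
    return phi
```

With phi initialized as a signed distance x - 0.3 and u = 1, the zero crossing moves right by u*t, as expected for interface transport.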

  16. Using SETS to find minimal cut sets in large fault trees

    International Nuclear Information System (INIS)

    Worrell, R.B.; Stack, D.W.

    1978-01-01

    An efficient algebraic algorithm for finding the minimal cut sets for a large fault tree was defined and a new procedure which implements the algorithm was added to the Set Equation Transformation System (SETS). The algorithm includes the identification and separate processing of independent subtrees, the coalescing of consecutive gates of the same kind, the creation of additional independent subtrees, and the derivation of the fault tree stem equation in stages. The computer time required to determine the minimal cut sets using these techniques is shown to be substantially less than the computer time required to determine the minimal cut sets when these techniques are not employed. It is shown for a given example that the execution time required to determine the minimal cut sets can be reduced from 7,686 seconds to 7 seconds when all of these techniques are employed
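The top-down reduction that SETS performs algebraically can be illustrated with a minimal MOCUS-style expansion of a gate table into minimal cut sets. This generic sketch is not the SETS implementation: OR gates branch into alternative cut sets, AND gates enlarge a cut set, and supersets are removed at the end.

```python
# MOCUS-style derivation of minimal cut sets from a small fault tree.
def minimal_cut_sets(gates, top):
    # gates: name -> ("AND" | "OR", [children]); names absent from gates are basic events.
    cut_sets = [frozenset([top])]
    changed = True
    while changed:
        changed = False
        expanded = []
        for cs in cut_sets:
            gate = next((g for g in cs if g in gates), None)
            if gate is None:                 # only basic events left
                expanded.append(cs)
                continue
            changed = True
            kind, children = gates[gate]
            rest = cs - {gate}
            if kind == "AND":                # AND: all children join the cut set
                expanded.append(rest | set(children))
            else:                            # OR: one new cut set per child
                expanded.extend(rest | {c} for c in children)
        cut_sets = expanded
    # Minimality: drop any cut set that strictly contains another.
    return {cs for cs in cut_sets
            if not any(other < cs for other in cut_sets)}
```

The independent-subtree and gate-coalescing tricks the abstract credits for the 7,686 s to 7 s speed-up prune this expansion before it grows combinatorially.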

  17. Techniques for the solidification of high-level wastes

    International Nuclear Information System (INIS)

    1977-01-01

    The problem of the long-term management of the high-level wastes from the reprocessing of irradiated nuclear fuel is receiving world-wide attention. While the majority of the waste solutions from the reprocessing of commercial fuels are currently being stored in stainless-steel tanks, increasing effort is being devoted to developing technology for the conversion of these wastes into solids. A number of full-scale solidification facilities are expected to come into operation in the next decade. The object of this report is to survey and compare all the work currently in progress on the techniques available for the solidification of high-level wastes. It will examine the high-level liquid wastes arising from the various processes currently under development or in operation, the advantages and disadvantages of each process for different types and quantities of waste solutions, the stages of development, the scale-up potential and flexibility of the processes

  18. A new level set model for cell image segmentation

    International Nuclear Information System (INIS)

    Ma Jing-Feng; Chen Chun; Hou Kai; Bao Shang-Lian

    2011-01-01

    In this paper we first determine three phases of cell images: background, cytoplasm and nucleolus according to the general physical characteristics of cell images, and then develop a variational model, based on these characteristics, to segment nucleolus and cytoplasm from their relatively complicated backgrounds. In the meantime, the information obtained by preprocessing the cell images with the OTSU algorithm is used to initialize the level set function in the model, which speeds up the segmentation and presents satisfactory results in cell image processing. (cross-disciplinary physics and related areas of science and technology)

  19. Level set segmentation of medical images based on local region statistics and maximum a posteriori probability.

    Science.gov (United States)

    Cui, Wenchao; Wang, Yi; Lei, Tao; Fan, Yangyu; Feng, Yan

    2013-01-01

    This paper presents a variational level set method for simultaneous segmentation and bias field estimation of medical images with intensity inhomogeneity. In our model, the statistics of image intensities belonging to each different tissue in local regions are characterized by Gaussian distributions with different means and variances. According to maximum a posteriori probability (MAP) and Bayes' rule, we first derive a local objective function for image intensities in a neighborhood around each pixel. Then this local objective function is integrated with respect to the neighborhood center over the entire image domain to give a global criterion. In level set framework, this global criterion defines an energy in terms of the level set functions that represent a partition of the image domain and a bias field that accounts for the intensity inhomogeneity of the image. Therefore, image segmentation and bias field estimation are simultaneously achieved via a level set evolution process. Experimental results for synthetic and real images show desirable performances of our method.
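In hedged form, the kind of criterion the abstract describes can be sketched as follows; the symbols are assumptions for illustration (K a localizing kernel centered at x, Omega_i the tissue regions, b the bias field, c_i and sigma_i the local class means and standard deviations), not the paper's exact notation:

```latex
% Local objective at neighborhood center x (MAP with local Gaussian statistics):
\mathcal{E}_x = \sum_{i=1}^{N} \int_{\Omega_i} -K(y-x)\,\log p_{i,x}\big(I(y)\big)\,dy,
\qquad
p_{i,x}\big(I(y)\big) = \frac{1}{\sqrt{2\pi}\,\sigma_i(x)}
  \exp\!\left(-\frac{\big(I(y)-b(x)\,c_i\big)^{2}}{2\,\sigma_i(x)^{2}}\right)

% Global criterion: integrate the local objective over all neighborhood centers,
% with the regions \Omega_i represented via Heaviside functions of the level sets:
E = \int_{\Omega} \mathcal{E}_x \, dx
```

Minimizing E alternately over the level set functions, b, c_i and sigma_i then yields the simultaneous segmentation and bias field estimation the abstract describes.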

  20. Correction of failure in antenna array using matrix pencil technique

    International Nuclear Information System (INIS)

    Khan, SU; Rahim, MKA

    2017-01-01

    In this paper a non-iterative technique is developed for the correction of a faulty antenna array, based on the matrix pencil technique (MPT). The failure of a sensor in an antenna array can damage the radiation power pattern in terms of sidelobe levels and nulls. In the developed technique, the radiation pattern of the array is sampled to form a discrete power pattern information set. This information set is then arranged in the form of a Hankel matrix (HM), on which the singular value decomposition (SVD) is executed. By removing non-principal values, we obtain an optimum lower-rank estimate of the HM. This lower-rank matrix corresponds to the corrected pattern. The proposed technique is then employed to recover the weight excitations and position allocations from the estimated matrix. Numerical simulations confirm the efficiency of the proposed technique, which is compared with the available techniques in terms of sidelobe levels and nulls. (paper)
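The Hankel low-rank step the abstract describes can be written compactly (notation assumed; r is the number of retained principal singular values, chosen to match the expected number of pattern components):

```latex
% SVD of the Hankel matrix built from the sampled power pattern:
H = U \Sigma V^{T} = \sum_{k} \sigma_k\, u_k v_k^{T}
% Optimum lower-rank estimate: keep only the r principal singular values:
\hat{H} = \sum_{k=1}^{r} \sigma_k\, u_k v_k^{T}
```

The corrected pattern, and from it the weight excitations and element positions, are then extracted from \hat{H}.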

  1. Comparison of Eight Techniques for Reconstructing Multi-Satellite Sensor Time-Series NDVI Data Sets in the Heihe River Basin, China

    Directory of Open Access Journals (Sweden)

    Liying Geng

    2014-03-01

    Full Text Available More than 20 techniques have been developed to de-noise time-series vegetation index data from different satellite sensors to reconstruct long time-series data sets. Although many studies have compared Normalized Difference Vegetation Index (NDVI) noise-reduction techniques, few studies have compared these techniques systematically and comprehensively. This study tested eight techniques for smoothing different vegetation types using different types of multi-temporal NDVI data (Advanced Very High Resolution Radiometer (AVHRR) Global Inventory Modeling and Mapping Studies (GIMMS) and Pathfinder AVHRR Land (PAL), Satellite Pour l'Observation de la Terre (SPOT) VEGETATION (VGT), and Moderate Resolution Imaging Spectroradiometer (MODIS) Terra), with the ultimate purpose of determining the best reconstruction technique for each type of vegetation captured with four satellite sensors. These techniques include the modified best index slope extraction (M-BISE) technique, the Savitzky-Golay (S-G) technique, the mean value iteration filter (MVI) technique, the asymmetric Gaussian (A-G) technique, the double logistic (D-L) technique, the changing-weight filter (CW) technique, the interpolation for data reconstruction (IDR) technique, and the Whittaker smoother (WS) technique. These techniques were evaluated by calculating the root mean square error (RMSE), the Akaike Information Criterion (AIC), and the Bayesian Information Criterion (BIC). The results indicate that the S-G, CW, and WS techniques perform better than the other tested techniques, while the IDR, M-BISE, and MVI techniques performed worse than the other techniques. The best de-noise technique varies with different vegetation types and NDVI data sources. The S-G performs best in most situations. In addition, the CW and WS are effective techniques that were exceeded only by the S-G technique. The assessment results are consistent in terms of the three evaluation indexes for GIMMS, PAL, and SPOT data in the study
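The Savitzky-Golay (S-G) technique that this comparison ranks best in most situations can be sketched with the classical 5-point quadratic filter, whose closed-form convolution weights are (-3, 12, 17, 12, -3)/35. The window length, polynomial order, and endpoint handling here are illustrative choices, not the study's settings.

```python
# Savitzky-Golay smoothing sketch: a local least-squares polynomial fit
# expressed as a fixed convolution. Preserves polynomials up to the fit order.
def savgol5(series):
    w = (-3, 12, 17, 12, -3)             # 5-point quadratic S-G weights (x 1/35)
    out = list(series)                   # endpoints left unsmoothed
    for i in range(2, len(series) - 2):
        out[i] = sum(wj * series[i + j - 2] for j, wj in enumerate(w)) / 35.0
    return out
```

Because the weights reproduce quadratics exactly, smooth seasonal NDVI curves pass through nearly unchanged while single-sample noise spikes (cloud contamination) are damped.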

  2. A retrospective randomized study to compare the energy delivered using CDE with different techniques and OZil settings by different surgeons in phacoemulsification.

    Science.gov (United States)

    Chen, Ming; Sweeney, Henry W; Luke, Becky; Chen, Mindy; Brown, Mathew

    2009-01-01

    Cumulative dissipated energy (CDE) was used with the Infiniti® Vision System (Alcon Labs) as an energy delivery guide to compare four different phaco techniques and phaco settings. The supracapsular phaco technique with burst mode is known for efficiency, and surgery is faster compared with the older phaco unit. In this study, we found that supracapsular phaco with burst mode had the lowest CDE in both cataract and nuclear sclerosis cataract with the new Infiniti® unit. We suggest that CDE can be used as one of the references for modifying technique and settings to improve outcomes for surgeons, especially new surgeons.

  3. Standard Establishment Through Scenarios (SETS): A new technique for occupational fitness standards.

    Science.gov (United States)

    Blacklock, R E; Reilly, T J; Spivock, M; Newton, P S; Olinek, S M

    2015-01-01

    An objective and scientific task analysis provides the basis for establishing legally defensible Physical Employment Standards (PES), based on common and essential occupational tasks. Infrequent performance of these tasks creates challenges when developing PES based on criterion, or content validity. Develop a systematic approach using Subject Matter Experts (SME) to provide tasks with 1) an occupationally relevant scenario considered common to all personnel; 2) a minimum performance standard defined by time, distance, load or work. Examples provided here relate to the development of a new PES for the Canadian Armed Forces (CAF). SME of various experience are selected based on their eligibility criteria. SME are required to define a reasonable scenario for each task from personal experience, provide occupational performance requirements of the scenario in sub-groups, and discuss and agree by consensus vote on the final standard based on the definition of essential. A common and essential task for the CAF is detailed as a case example of process application. Techniques to avoid common SME rating errors are discussed and advantages to the method described. The SETS method was developed as a systematic approach to setting occupational performance standards and qualifying information from SME.

  4. Optimization of auxiliary basis sets for the LEDO expansion and a projection technique for LEDO-DFT.

    Science.gov (United States)

    Götz, Andreas W; Kollmar, Christian; Hess, Bernd A

    2005-09-01

    We present a systematic procedure for the optimization of the expansion basis for the limited expansion of diatomic overlap density functional theory (LEDO-DFT) and report on optimized auxiliary orbitals for the Ahlrichs split valence plus polarization basis set (SVP) for the elements H, Li–F, and Na–Cl. A new method to deal with near-linear dependences in the LEDO expansion basis is introduced, which greatly reduces the computational effort of LEDO-DFT calculations. Numerical results for a test set of small molecules demonstrate the accuracy of electronic energies, structural parameters, dipole moments, and harmonic frequencies. For larger molecular systems the numerical errors introduced by the LEDO approximation can lead to an uncontrollable behavior of the self-consistent field (SCF) process. A projection technique suggested by Löwdin is presented in the framework of LEDO-DFT, which guarantees SCF convergence. Numerical results on some critical test molecules suggest the general applicability of the auxiliary orbitals presented in combination with this projection technique. Timing results indicate that LEDO-DFT is competitive with conventional density fitting methods. © 2005 Wiley Periodicals, Inc.

  5. Attempts to develop a new nuclear measurement technique of β-glucuronidase levels in biological samples

    International Nuclear Information System (INIS)

    Unak, T.; Avcibasi, U.; Yildirim, Y.; Cetinkaya, B.

    2003-01-01

    β-Glucuronidase is one of the most important hydrolytic enzymes in living systems and plays an essential role in the detoxification pathway of toxic materials incorporated into the metabolism. Some organs, especially the liver, and some tumour tissues have a high level of β-glucuronidase activity. As a result of the enzymatic activity of some kinds of tumour cells, the radiolabelled glucuronide conjugates of cytotoxic, as well as radiotoxic, compounds have potentially very valuable diagnostic and therapeutic applications in cancer research. For this reason, a sensitive measurement of β-glucuronidase levels in normal and tumour tissues is a very important step for these kinds of applications. According to the classical measurement method of β-glucuronidase activity, in general, the quantity of phenolphthalein liberated from its glucuronide conjugate, i.e. phenolphthalein-glucuronide, by β-glucuronidase has been measured by use of the spectrophotometric technique. The lower detection limit of phenolphthalein by the spectrophotometric technique is about 1-3 mg. This means that β-glucuronidase levels could not be detected in biological samples having lower levels of β-glucuronidase activity, and therefore the applications of the spectrophotometric technique in cancer research are very seriously limited. Starting from this consideration, we recently attempted to develop a new nuclear technique to measure much lower concentrations of β-glucuronidase in biological samples. To improve the detection limit, phenolphthalein-glucuronide and also phenyl-N-glucuronide were radioiodinated with ¹³¹I and their radioactivity was measured by use of the counting technique. Therefore, the quantity of phenolphthalein or aniline radioiodinated with ¹³¹I and liberated by the deglucuronidation reactivity of β-glucuronidase was used in an attempt to measure levels lower than the spectrophotometric measurement technique. The results obtained clearly verified that 0.01 pg level of

  6. Ultrafuzziness Optimization Based on Type II Fuzzy Sets for Image Thresholding

    Directory of Open Access Journals (Sweden)

    Hudan Studiawan

    2010-11-01

    Full Text Available Image thresholding is one of the processing techniques used to provide a high-quality preprocessed image. Image vagueness and bad illumination are common obstacles that result in poor thresholding output. By treating the image as a fuzzy set, several fuzzy thresholding techniques have been proposed to overcome these obstacles during threshold selection. In this paper, we propose an algorithm for image thresholding that uses ultrafuzziness optimization to decrease the uncertainty of an ordinary fuzzy-set description by employing type II fuzzy sets. The optimization involves measuring ultrafuzziness for the background and object fuzzy sets separately. Experimental results demonstrate that the proposed image thresholding method performs well for images with high vagueness, low contrast, and grayscale ambiguity.
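The type II thresholding idea above can be sketched in a few lines. This is a hedged reconstruction, not the authors' algorithm: the distance-based type I membership, the fuzzifier `alpha` (upper and lower memberships mu^(1/alpha) and mu^alpha), and the choice to minimize ultrafuzziness (matching the stated goal of decreasing uncertainty; some type II formulations maximize it instead) are all illustrative assumptions.

```python
def ultrafuzziness_threshold(hist, alpha=2.0):
    """Type II fuzzy thresholding sketch.

    hist: list of pixel counts per gray level 0..L-1.
    alpha: fuzzifier; the footprint of uncertainty at each gray level is
           mu**(1/alpha) - mu**alpha for a type I membership mu.
    Returns the gray level T minimizing the histogram-weighted ultrafuzziness.
    """
    L = len(hist)
    n = float(sum(hist))
    best_T, best_gamma = None, float("inf")
    for T in range(1, L - 1):
        wb = sum(hist[:T + 1])            # background pixel count
        wo = n - wb                       # object pixel count
        if wb == 0 or wo == 0:
            continue
        mb = sum(g * hist[g] for g in range(T + 1)) / wb       # background mean
        mo = sum(g * hist[g] for g in range(T + 1, L)) / wo    # object mean
        gamma = 0.0
        for g in range(L):
            if hist[g] == 0:
                continue
            m = mb if g <= T else mo
            mu = 1.0 / (1.0 + abs(g - m) / (L - 1))   # type I membership in own class
            gamma += hist[g] * (mu ** (1.0 / alpha) - mu ** alpha)
        gamma /= n
        if gamma < best_gamma:
            best_gamma, best_T = gamma, T
    return best_T
```

With a clearly bimodal histogram, the minimizer lands in the valley between the modes, where pixels sit close to their class means and the footprint of uncertainty is smallest.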

  7. Implications of sea-level rise in a modern carbonate ramp setting

    Science.gov (United States)

    Lokier, Stephen W.; Court, Wesley M.; Onuma, Takumi; Paul, Andreas

    2018-03-01

    This study addresses a gap in our understanding of the effects of sea-level rise on the sedimentary systems and morphological development of recent and ancient carbonate ramp settings. Many ancient carbonate sequences are interpreted as having been deposited in carbonate ramp settings, yet these settings are poorly represented in the Recent. The study documents the present-day transgressive flooding of the Abu Dhabi coastline at the southern shoreline of the Arabian/Persian Gulf, a carbonate ramp depositional system that is widely employed as a Recent analogue for numerous ancient carbonate systems. Fourteen years of field-based observations are integrated with historical and recent high-resolution satellite imagery in order to document and assess the onset of flooding. Predicted rates of transgression (i.e. landward movement of the shoreline) of 2.5 m yr⁻¹ (± 0.2 m yr⁻¹) based on global sea-level rise alone were far exceeded by the flooding rate calculated from the back-stepping of coastal features (10-29 m yr⁻¹). This discrepancy results from the dynamic nature of the flooding, with increased water depth exposing the coastline to increased erosion and thereby enhancing back-stepping. A non-accretionary transgressive shoreline trajectory results from relatively rapid sea-level rise coupled with a low-angle ramp geometry and a paucity of sediments. The flooding is represented by the landward migration of facies belts, a range of erosive features and the onset of bioturbation. Employing Intergovernmental Panel on Climate Change (Church et al., 2013) predictions for 21st century sea-level rise, and allowing for the post-flooding lag time that is typical for the start-up of carbonate factories, it is calculated that the coastline will continue to retrograde for the foreseeable future. Total passive flooding (without considering feedback in the modification of the shoreline) by the year 2100 is calculated to likely be between 340 and 571 m with a flooding rate of 3

  8. Comparison of the pain levels of computer-controlled and conventional anesthesia techniques in prosthodontic treatment

    Directory of Open Access Journals (Sweden)

    Murat Yenisey

    2009-10-01

    Full Text Available OBJECTIVE: The objective of this study was to compare pain levels on opposite sides of the maxilla at needle insertion, during delivery of local anesthetic solution, and at tooth preparation for the conventional technique versus the anterior middle superior alveolar (AMSA) technique with the Wand computer-controlled local anesthesia system. MATERIAL AND METHODS: Pain scores of 16 patients were evaluated with a 5-point verbal rating scale (VRS) and the data were analyzed nonparametrically. Pain differences at needle insertion, during delivery of local anesthetic, and at tooth preparation, for the conventional versus the Wand technique, were analyzed using the Mann-Whitney U test (p=0.01). RESULTS: The Wand technique produced lower pain levels than conventional injection at needle insertion and during delivery of local anesthetic (p<0.01), whereas the difference at tooth preparation was not significant (p>0.05). CONCLUSIONS: The AMSA technique using the Wand is recommended for prosthodontic treatment because it reduces pain during needle insertion and during delivery of local anesthetic. However, the two techniques have the same pain levels for tooth preparation.

  9. Robust boundary detection of left ventricles on ultrasound images using ASM-level set method.

    Science.gov (United States)

    Zhang, Yaonan; Gao, Yuan; Li, Hong; Teng, Yueyang; Kang, Yan

    2015-01-01

    The level set method has been widely used in medical image analysis, but it has difficulties when used to segment left ventricular (LV) boundaries on echocardiography images, because the boundaries are not very distinct and the signal-to-noise ratio of echocardiography images is not very high. In this paper, we introduce the Active Shape Model (ASM) into the traditional level set method to enforce shape constraints. This improves the accuracy of boundary detection and makes the evolution more efficient. Experiments conducted on real cardiac ultrasound image sequences show positive and promising results.

  10. A highly efficient 3D level-set grain growth algorithm tailored for ccNUMA architecture

    Science.gov (United States)

    Mießen, C.; Velinov, N.; Gottstein, G.; Barrales-Mora, L. A.

    2017-12-01

    A highly efficient simulation model for 2D and 3D grain growth was developed based on the level-set method. The model introduces modern computational concepts to achieve excellent performance on parallel computer architectures. Strong scalability was measured on cache-coherent non-uniform memory access (ccNUMA) architectures. To achieve this, the proposed approach considers the application of local level-set functions at the grain level. Ideal and non-ideal grain growth was simulated in 3D with the objective of studying the evolution of statistically representative volume elements in polycrystals. In addition, microstructure evolution in an anisotropic magnetic material affected by an external magnetic field was simulated.

  11. Efficient operating system level virtualization techniques for cloud resources

    Science.gov (United States)

    Ansu, R.; Samiksha; Anju, S.; Singh, K. John

    2017-11-01

    Cloud computing is an advancing technology that provides services at the infrastructure, platform and software levels. Virtualization and utility computing are the keys to cloud computing. The number of cloud users is increasing day by day, so resources must be made available on demand to satisfy user requirements. The technique by which resources, namely storage, processing power, memory and network or I/O, are abstracted is known as virtualization. Various virtualization techniques are available for executing operating systems: full system virtualization and paravirtualization. In full virtualization, the whole hardware architecture is duplicated virtually; no modifications are required in the guest OS, as the OS deals with the VM hypervisor directly. In paravirtualization, the guest OS must be modified to run in parallel with other operating systems, and for the guest OS to access the hardware, the host OS must provide a virtual machine interface. OS-level virtualization has many advantages, such as transparent application migration, server consolidation, online OS maintenance and improved security. This paper outlines both families of virtualization techniques and discusses the issues in OS-level virtualization.

  12. An Accurate Fire-Spread Algorithm in the Weather Research and Forecasting Model Using the Level-Set Method

    Science.gov (United States)

    Muñoz-Esparza, Domingo; Kosović, Branko; Jiménez, Pedro A.; Coen, Janice L.

    2018-04-01

    The level-set method is typically used to track and propagate the fire perimeter in wildland fire models. Herein, a high-order level-set method using a fifth-order WENO scheme for the discretization of spatial derivatives and third-order explicit Runge-Kutta temporal integration is implemented within the Weather Research and Forecasting model wildland fire physics package, WRF-Fire. The algorithm includes solution of an additional partial differential equation for level-set reinitialization. The accuracy of the fire-front shape and rate of spread in uncoupled simulations is systematically analyzed. It is demonstrated that the common implementation used by level-set-based wildfire models yields rate-of-spread errors in the range 10-35% for typical grid sizes (Δ = 12.5-100 m) and considerably underestimates fire area. Moreover, the amplitude of fire-front gradients in the presence of explicitly resolved turbulence features is systematically underestimated. In contrast, the new WRF-Fire algorithm results in rate-of-spread errors that are lower than 1% and that become nearly grid independent. Also, the underestimation of fire area at the sharp transition between the fire front and the lateral flanks is found to be reduced by a factor of ≈7. A hybrid-order level-set method with locally reduced artificial viscosity is proposed, which substantially alleviates the computational cost associated with high-order discretizations while preserving accuracy. Simulations of the Last Chance wildfire demonstrate additional benefits of high-order accurate level-set algorithms when dealing with complex fuel heterogeneities, enabling propagation across narrow fuel gaps and more accurate fire backing over the lee side of no-fuel clusters.
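The reinitialization step mentioned above restores the signed-distance property |∇φ| = 1 without moving the fire perimeter (the zero level set). The WRF-Fire specifics (WENO5 in space, RK3 in time) are beyond a short sketch, but the underlying pseudo-time PDE, φ_τ = sgn(φ₀)(1 − |∇φ|), can be illustrated in 1D with a first-order Godunov discretization; the smoothed sign function and iteration count below are standard textbook choices, not taken from the paper.

```python
import math

def reinitialize_1d(phi0, dx, iters=300):
    """Pseudo-time iteration of phi_tau = sign(phi0) * (1 - |dphi/dx|)
    (Sussman-style reinitialization) with first-order Godunov upwinding."""
    n = len(phi0)
    dtau = 0.5 * dx                                        # CFL-limited pseudo-time step
    sgn = [p / math.sqrt(p * p + dx * dx) for p in phi0]   # smoothed sign(phi0)
    phi = list(phi0)
    for _ in range(iters):
        new = phi[:]
        for i in range(1, n - 1):
            a = (phi[i] - phi[i - 1]) / dx     # backward difference
            b = (phi[i + 1] - phi[i]) / dx     # forward difference
            # Godunov upwinding: information flows outward from the interface
            if sgn[i] > 0.0:
                grad = math.sqrt(max(max(a, 0.0) ** 2, min(b, 0.0) ** 2))
            else:
                grad = math.sqrt(max(min(a, 0.0) ** 2, max(b, 0.0) ** 2))
            new[i] = phi[i] - dtau * sgn[i] * (grad - 1.0)
        phi = new
    return phi
```

Starting from a level-set function with the right zero crossing but the wrong slope, the iteration relaxes the interior toward the exact signed distance while leaving the interface location untouched.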

  13. Dynamic-thresholding level set: a novel computer-aided volumetry method for liver tumors in hepatic CT images

    Science.gov (United States)

    Cai, Wenli; Yoshida, Hiroyuki; Harris, Gordon J.

    2007-03-01

    Measurement of the volume of focal liver tumors, called liver tumor volumetry, is indispensable for assessing the growth of tumors and for monitoring the response of tumors to oncology treatments. Traditional edge models, such as the maximum-gradient and zero-crossing methods, often fail to detect the accurate boundary of a fuzzy object such as a liver tumor. As a result, computerized volumetry based on these edge models tends to differ from manual segmentation results performed by physicians. In this study, we developed a novel computerized volumetry method for fuzzy objects, called dynamic-thresholding level set (DT level set). An optimal threshold value computed from a histogram tends to shift, relative to the theoretical threshold value obtained from a normal distribution model, toward a smaller region in the histogram. We thus designed a mobile shell structure, called a propagating shell, which is a thick region encompassing the level set front. The optimal threshold calculated from the histogram of the shell drives the level set front toward the boundary of a liver tumor. When the volume ratio between the object and the background in the shell approaches one, the optimal threshold value best fits the theoretical threshold value and the shell stops propagating. Application of the DT level set to 26 hepatic CT cases with 63 biopsy-confirmed hepatocellular carcinomas (HCCs) and metastases showed that the computer-measured volumes were highly correlated with those measured manually by physicians. Our preliminary results showed that the DT level set was effective and accurate in estimating the volumes of liver tumors detected in hepatic CT images.
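The "optimal threshold computed from a histogram" is the engine that drives the DT level set front. The paper's exact estimator is not reproduced here, so the sketch below uses Otsu's classic between-class-variance criterion as a hedged stand-in for the threshold computed over a (shell) histogram; the function name and the use of Otsu specifically are illustrative assumptions.

```python
def otsu_threshold(hist):
    """Histogram-derived 'optimal threshold': pick t maximizing the
    between-class variance w_b * w_o * (m_b - m_o)**2, where pixels with
    gray level <= t form the background class."""
    total = float(sum(hist))
    sum_all = sum(g * h for g, h in enumerate(hist))
    wb = 0.0      # background weight (pixel count so far)
    sumb = 0.0    # background gray-level sum so far
    best_t, best_var = 0, -1.0
    for t in range(len(hist) - 1):
        wb += hist[t]
        if wb == 0:
            continue
        wo = total - wb
        if wo == 0:
            break
        sumb += t * hist[t]
        mb = sumb / wb                  # background mean
        mo = (sum_all - sumb) / wo      # object mean
        var = wb * wo * (mb - mo) ** 2  # between-class variance
        if var > best_var:
            best_var, best_t = var, t
    return best_t
```

For a bimodal shell histogram, the returned threshold separates the two modes, which is exactly the property the propagating shell relies on as the object/background ratio approaches one.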

  14. Does technique matter; a pilot study exploring weighting techniques for a multi-criteria decision support framework.

    Science.gov (United States)

    van Til, Janine; Groothuis-Oudshoorn, Catharina; Lieferink, Marijke; Dolan, James; Goetghebeur, Mireille

    2014-01-01

    There is increased interest in the use of multi-criteria decision analysis (MCDA) to support regulatory and reimbursement decision making. The EVIDEM framework was developed to provide pragmatic multi-criteria decision support in health care, to estimate the value of healthcare interventions, and to aid in priority-setting. The objectives of this study were to test 1) the influence of different weighting techniques on the overall outcome of an MCDA exercise, 2) the discriminative power of such techniques in weighting different criteria, and 3) whether different techniques result in similar weights for the criteria set proposed by the EVIDEM framework. A sample of 60 Dutch and Canadian students participated in the study. Each student used an online survey to provide weights for 14 criteria with two different techniques: a five-point rating scale and one of the following techniques, selected randomly: ranking, point allocation, pairwise comparison and best-worst scaling. The results of this study indicate that differences in weights have no effect on value estimates at the group level. At the individual level, considerable differences in criteria weights and rank order occur as a result of the weight elicitation method used and of the varying ability of different techniques to discriminate in criteria importance. Of the five techniques tested, pairwise comparison of criteria has the highest ability to discriminate between weights when fourteen criteria are compared. When weights are intended to support group decisions, the choice of elicitation technique has negligible impact on criteria weights and the overall value of an innovation. However, when weights are used to support individual decisions, the choice of elicitation technique influences the outcome, and studies that use dissimilar techniques cannot be easily compared.
Weight elicitation through pairwise comparison of criteria is preferred when taking into account its superior ability to discriminate between
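Of the elicitation methods compared, pairwise comparison stands out. A common way to turn a reciprocal pairwise-comparison matrix into criteria weights is the row geometric mean used in AHP practice; the study's exact aggregation rule is not specified here, so this is a generic sketch rather than the authors' procedure.

```python
import math

def weights_from_pairwise(P):
    """Derive criteria weights from a reciprocal pairwise-comparison matrix.

    P[i][j] states how many times criterion i is preferred over criterion j
    (so P[j][i] = 1 / P[i][j] and P[i][i] = 1). Weights are the normalized
    row geometric means (the AHP 'logarithmic least squares' solution).
    """
    gm = [math.prod(row) ** (1.0 / len(row)) for row in P]  # row geometric means
    s = sum(gm)
    return [g / s for g in gm]
```

For a perfectly consistent matrix the recovered weights reproduce the stated preference ratios exactly; for real respondents the geometric mean smooths over the inevitable inconsistencies.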

  15. A retrospective randomized study to compare the energy delivered using CDE with different techniques and OZil® settings by different surgeons in phacoemulsification

    Science.gov (United States)

    Chen, Ming; Sweeney, Henry W; Luke, Becky; Chen, Mindy; Brown, Mathew

    2009-01-01

    Cumulative dissipated energy (CDE) was used with the Infiniti® Vision System (Alcon Labs) as an energy-delivery guide to compare four different phaco techniques and phaco settings. The supracapsular phaco technique with burst mode is known for its efficiency, and surgery is faster compared with older phaco units. In this study, we found that supracapsular phaco with burst mode had the lowest CDE in both cataract and nuclear sclerosis cataract with the new Infiniti® unit. We suggest that CDE can be used as one of the references for modifying technique and settings to improve outcomes for surgeons, especially new surgeons. PMID:19688027

  16. Improved inhalation technology for setting safe exposure levels for workplace chemicals

    Science.gov (United States)

    Stuart, Bruce O.

    1993-01-01

    Threshold Limit Values recommended as allowable air concentrations of a chemical in the workplace are often based upon a no-observable-effect-level (NOEL) determined by experimental inhalation studies using rodents. A 'safe level' for human exposure must then be estimated by the use of generalized safety factors in attempts to extrapolate from experimental rodents to man. The recent development of chemical-specific physiologically-based toxicokinetics makes use of measured physiological, biochemical, and metabolic parameters to construct a validated model that is able to 'scale-up' rodent response data to predict the behavior of the chemical in man. This procedure is made possible by recent advances in personal computer software and the emergence of appropriate biological data, and provides an analytical tool for much more reliable risk evaluation and airborne chemical exposure level setting for humans.

  17. Computerized detection of multiple sclerosis candidate regions based on a level set method using an artificial neural network

    International Nuclear Information System (INIS)

    Kuwazuru, Junpei; Magome, Taiki; Arimura, Hidetaka; Yamashita, Yasuo; Oki, Masafumi; Toyofuku, Fukai; Kakeda, Shingo; Yamamoto, Daisuke

    2010-01-01

    Yamamoto et al. developed a system for computer-aided detection of multiple sclerosis (MS) candidate regions. In the level set method of their proposed approach, they employed a constant threshold value for the edge indicator function related to the speed function of the level set method. However, it would be appropriate to adjust the threshold value to each MS candidate region, because the edge magnitudes of MS candidates differ from each other. The purpose of this study was to develop a computerized detection of MS candidate regions in MR images based on a level set method using an artificial neural network (ANN). To adjust the threshold value for the edge indicator function in the level set method to each true positive (TP) and false positive (FP) region, we constructed an ANN. The ANN could provide a suitable threshold value for each candidate region in the proposed level set method so that TP regions can be segmented and FP regions can be removed. Our proposed method detected MS regions at a sensitivity of 82.1% with 0.204 FPs per slice, and the similarity index of MS candidate regions was 0.717 on average. (author)

  18. Precision and costs of techniques for self-monitoring of serum glucose levels.

    OpenAIRE

    Chiasson, J. L.; Morrisset, R.; Hamet, P.

    1984-01-01

    The poor correlation between serum and urine glucose measurements has led to the development of new techniques for monitoring the blood glucose level in diabetic patients. Either a nurse or the patient can perform these tests, which involve spreading a single drop of blood onto a reagent strip. A colour change that is proportional to the serum glucose level can be read visually or with a reflectance meter. Evaluated against simultaneous serum glucose levels determined by the hospital biochemi...

  19. Kir2.1 channels set two levels of resting membrane potential with inward rectification.

    Science.gov (United States)

    Chen, Kuihao; Zuo, Dongchuan; Liu, Zheng; Chen, Haijun

    2018-04-01

    Strong inward rectifier K⁺ channels (Kir2.1) mediate background K⁺ currents primarily responsible for maintenance of the resting membrane potential. Multiple types of cells exhibit two levels of resting membrane potential. Kir2.1 and K2P1 currents counterbalance each other, partially accounting for this phenomenon in human cardiomyocytes under subphysiological extracellular K⁺ concentrations or pathological hypokalemic conditions. How Kir2.1 channels contribute to the two levels of resting membrane potential in different types of cells is not well understood. Here we test the hypothesis that Kir2.1 channels set two levels of resting membrane potential through inward rectification. Under hypokalemic conditions, Kir2.1 currents counterbalance HCN2 or HCN4 cation currents in CHO cells that heterologously express both channels, generating N-shaped current-voltage relationships that cross the voltage axis three times and reconstituting two levels of resting membrane potential. Blockade of HCN channels eliminated the phenomenon in K2P1-deficient Kir2.1-expressing human cardiomyocytes derived from induced pluripotent stem cells or in CHO cells expressing both Kir2.1 and HCN2 channels. Weakly inward rectifying Kir4.1 channels or inward-rectification-deficient Kir2.1-E224G mutant channels do not set such two levels of resting membrane potential when co-expressed with HCN2 channels in CHO cells or when overexpressed in human cardiomyocytes derived from induced pluripotent stem cells. These findings demonstrate a common mechanism by which Kir2.1 channels set two levels of resting membrane potential with inward rectification, by balancing inward currents through different cation channels such as hyperpolarization-activated HCN channels or hypokalemia-induced K2P1 leak channels.

  20. Level of health care and services in a tertiary health setting in Nigeria

    African Journals Online (AJOL)

    Level of health care and services in a tertiary health setting in Nigeria. ... Background: There is a growing awareness and demand for quality health care across the world; hence the ... Doctors and nurses formed 64.3% of the study population.

  1. Level design concept, theory, and practice

    CERN Document Server

    Kremers, Rudolf

    2009-01-01

    Good or bad level design can make or break any game, so it is surprising how little reference material exists for level designers. Beginning level designers have a limited understanding of the tools and techniques they can use to achieve their goals, or even define them. This book is the first to use a conceptual and theoretical foundation to build such a set of practical tools and techniques. It is tied to no particular technology or genre, so it will be a useful reference for many years to come. Kremers covers many concepts universal to level design, such as interactivity, world building, im

  2. Numerical simulation of interface movement in gas-liquid two-phase flows with Level Set method

    International Nuclear Information System (INIS)

    Li Huixiong; Chinese Academy of Sciences, Beijing; Deng Sheng; Chen Tingkuan; Zhao Jianfu; Wang Fei

    2005-01-01

    Numerical simulation of gas-liquid two-phase flow and heat transfer has been an attractive research topic for a long time, but it remains a serious challenge owing to the inherent complexity of gas-liquid two-phase flows, which stems from the existence of moving interfaces with topology changes. This paper reports the effort and the latest advances made by the authors, with special emphasis on methods for computing solutions to the advection equation of the level set function, which is utilized to capture the moving interfaces in gas-liquid two-phase flows. Three different schemes, i.e. a simple finite difference scheme, the Superbee-TVD scheme and the 5th-order WENO scheme, each in combination with the Runge-Kutta method, are applied to solve the advection equation of the level set. A numerical procedure based on the well-verified SIMPLER method is employed to numerically solve the momentum equations of the two-phase flow. The three schemes are employed to simulate the movement of four typical interfaces under five typical flow conditions. Analysis of the numerical results shows that the 5th-order WENO scheme and the Superbee-TVD scheme are much better than the simple finite difference scheme, and that the 5th-order WENO scheme is the best for computing solutions to the advection equation of the level set. The 5th-order WENO scheme will be employed as the main scheme for the advection equation of the level set when gas-liquid two-phase flows are numerically studied in the future. (authors)
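The gap between a simple upwind difference and the Superbee-TVD scheme can be illustrated on the 1D model problem u_t + a u_x = 0 with a square profile, which mimics the sharp jump of a level set indicator across an interface. This is a generic textbook sketch (flux-limited Lax-Wendroff form on a periodic grid), not the authors' SIMPLER-coupled solver; the grid size, CFL number and profile are arbitrary choices.

```python
def superbee(r):
    """Superbee flux limiter."""
    return max(0.0, min(2.0 * r, 1.0), min(r, 2.0))

def advect(u0, c, steps, limiter=None):
    """Advance u_t + a*u_x = 0 (a > 0, periodic BCs) by `steps` time steps.

    c = a*dt/dx is the CFL number (0 < c <= 1 for stability).
    limiter=None gives first-order upwind; limiter=superbee gives a TVD scheme.
    """
    u = list(u0)
    n = len(u)
    for _ in range(steps):
        f = [0.0] * n                          # f[i] ~ flux/a through face i+1/2
        for i in range(n):
            du = u[(i + 1) % n] - u[i]
            corr = 0.0
            if limiter is not None and du != 0.0:
                r = (u[i] - u[i - 1]) / du     # smoothness ratio at face i+1/2
                corr = 0.5 * (1.0 - c) * limiter(r) * du
            f[i] = u[i] + corr
        u = [u[i] - c * (f[i] - f[i - 1]) for i in range(n)]
    return u
```

After one full period the exact solution returns to the initial square wave; first-order upwinding smears the jumps badly, while the Superbee-limited scheme keeps them sharp without creating overshoots, which is precisely why limited or WENO schemes are preferred for level-set advection.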

  3. Setting ozone critical levels for protecting horticultural Mediterranean crops: Case study of tomato

    International Nuclear Information System (INIS)

    González-Fernández, I.; Calvo, E.; Gerosa, G.; Bermejo, V.; Marzuoli, R.; Calatayud, V.; Alonso, R.

    2014-01-01

    Seven experiments carried out in Italy and Spain have been used to parameterise a stomatal conductance model and to establish exposure- and dose-response relationships for the yield and quality of tomato, with the main goal of setting O₃ critical levels (CLe). CLe, with confidence intervals between brackets, were set at an accumulated hourly O₃ exposure over 40 nl l⁻¹ of AOT40 = 8.4 (1.2, 15.6) ppm h and a phytotoxic ozone dose above a threshold of 6 nmol m⁻² s⁻¹ of POD6 = 2.7 (0.8, 4.6) mmol m⁻² for yield, and AOT40 = 18.7 (8.5, 28.8) ppm h and POD6 = 4.1 (2.0, 6.2) mmol m⁻² for quality, both indices performing equally well. CLe confidence intervals provide information on the quality of the dataset and should be included in future calculations of O₃ CLe to improve current methodologies. These CLe, derived for sensitive tomato cultivars, should not be applied to quantify O₃-induced losses, at the risk of making important overestimations of the economic losses associated with O₃ pollution. -- Highlights: • Seven independent experiments from Italy and Spain were analysed. • O₃ critical levels are proposed for the protection of summer horticultural crops. • Exposure- and flux-based O₃ indices performed equally well. • Confidence intervals of the new O₃ critical levels are calculated. • A new method to estimate the degree of risk of O₃ damage is proposed. -- Critical levels for tomato yield were set at AOT40 = 8.4 ppm h and POD6 = 2.7 mmol m⁻², and confidence intervals should be used to improve O₃ risk assessment
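The two indices behind these critical levels can be computed from hourly time series in a few lines. The sketch below encodes only their standard definitions (AOT40: accumulated exposure above a 40 ppb threshold during daylight hours; POD6: accumulated stomatal flux above 6 nmol m⁻² s⁻¹); it does not reproduce the paper's stomatal conductance parameterisation, which is what actually supplies the hourly fluxes.

```python
def aot40(hourly_ppb, daylight):
    """AOT40 in ppm h: sum of hourly O3 exceedances above 40 ppb,
    counted only for daylight hours (daylight[i] is True/False)."""
    ppb_hours = sum(c - 40.0 for c, d in zip(hourly_ppb, daylight) if d and c > 40.0)
    return ppb_hours / 1000.0          # ppb h -> ppm h

def pod6(hourly_flux):
    """POD6 in mmol m^-2: accumulated stomatal O3 flux above the
    6 nmol m^-2 s^-1 threshold, from hourly mean fluxes in nmol m^-2 s^-1."""
    nmol = sum((f - 6.0) * 3600.0 for f in hourly_flux if f > 6.0)   # nmol m^-2
    return nmol * 1e-6                 # nmol -> mmol
```

For example, ten daylight hours at 50 ppb contribute 0.1 ppm h of AOT40, and ten hours at a stomatal flux of 8 nmol m⁻² s⁻¹ contribute 0.072 mmol m⁻² of POD6.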

  4. Tutorial: Junction spectroscopy techniques and deep-level defects in semiconductors

    Science.gov (United States)

    Peaker, A. R.; Markevich, V. P.; Coutinho, J.

    2018-04-01

    The term junction spectroscopy embraces a wide range of techniques used to explore the properties of semiconductor materials and semiconductor devices. In this tutorial review, we describe the most widely used junction spectroscopy approaches for characterizing deep-level defects in semiconductors and present some of the early work on which the principles of today's methodology are based. We outline ab-initio calculations of defect properties and give examples of how density functional theory in conjunction with formation energy and marker methods can be used to guide the interpretation of experimental results. We review recombination, generation, and trapping of charge carriers associated with defects. We consider thermally driven emission and capture and describe the techniques of Deep Level Transient Spectroscopy (DLTS), high resolution Laplace DLTS, admittance spectroscopy, and scanning DLTS. For the study of minority carrier related processes and wide gap materials, we consider Minority Carrier Transient Spectroscopy (MCTS), Optical DLTS, and deep level optical transient spectroscopy together with some of their many variants. Capacitance, current, and conductance measurements enable carrier exchange processes associated with the defects to be detected. We explain how these methods are used in order to understand the behaviour of point defects and the determination of charge states and negative-U (Hubbard correlation energy) behaviour. We provide, or reference, examples from a wide range of materials including Si, SiGe, GaAs, GaP, GaN, InGaN, InAlN, and ZnO.

  5. County-level poverty is equally associated with unmet health care needs in rural and urban settings.

    Science.gov (United States)

    Peterson, Lars E; Litaker, David G

    2010-01-01

    Regional poverty is associated with reduced access to health care. Whether this relationship is equally strong in both rural and urban settings, or is affected by the contextual and individual-level characteristics that distinguish these areas, is unclear. To compare the association of regional poverty with self-reported unmet need, a marker of health care access, by rural/urban setting. Multilevel, cross-sectional analysis of a state-representative sample of 39,953 adults stratified by rural/urban status, linked at the county level to data describing contextual characteristics. Weighted random intercept models examined the independent association of regional poverty with unmet needs, controlling for a range of contextual and individual-level characteristics. The unadjusted association between regional poverty levels and unmet needs was similar in rural (OR = 1.06 [95% CI, 1.04-1.08]) and urban (OR = 1.03 [1.02-1.05]) settings. Adjusting for other contextual characteristics increased the size of the association in both rural (OR = 1.11 [1.04-1.19]) and urban (OR = 1.11 [1.05-1.18]) settings. Further adjustment for individual characteristics had little additional effect in rural (OR = 1.10 [1.00-1.20]) or urban (OR = 1.11 [1.01-1.22]) settings. To better meet the health care needs of all Americans, health care systems in areas with high regional poverty should acknowledge the relationship between poverty and unmet health care needs. Investments, or other interventions, that reduce regional poverty may be useful strategies for improving health through better access to health care. © 2010 National Rural Health Association.

  6. Scope of physician procedures independently billed by mid-level providers in the office setting.

    Science.gov (United States)

    Coldiron, Brett; Ratnarathorn, Mondhipa

    2014-11-01

    Mid-level providers (nurse practitioners and physician assistants) were originally envisioned to provide primary care services in underserved areas. This study details the current scope of independent procedural billing to Medicare of difficult, invasive, and surgical procedures by medical mid-level providers. To understand the scope of independent billing to Medicare for procedures performed by mid-level providers in an outpatient office setting for a calendar year. Analyses of the 2012 Medicare Physician/Supplier Procedure Summary Master File, which reflects fee-for-service claims that were paid by Medicare, for Current Procedural Terminology procedures independently billed by mid-level providers. Outpatient office setting among health care providers. The scope of independent billing to Medicare for procedures performed by mid-level providers. In 2012, nurse practitioners and physician assistants billed independently for more than 4 million procedures at our cutoff of 5000 paid claims per procedure. Most (54.8%) of these procedures were performed in the specialty area of dermatology. The findings of this study are relevant to safety and quality of care. Recently, the shortage of primary care clinicians has prompted discussion of widening the scope of practice for mid-level providers. It would be prudent to temper widening the scope of practice of mid-level providers by recognizing that mid-level providers are not solely limited to primary care, and may involve procedures for which they may not have formal training.

  7. Multi-focus and multi-level techniques for visualization and analysis of networks with thematic data

    Science.gov (United States)

    Cossalter, Michele; Mengshoel, Ole J.; Selker, Ted

    2013-01-01

    Information-rich data sets bring several challenges in the areas of visualization and analysis, even when associated with node-link network visualizations. This paper presents an integration of multi-focus and multi-level techniques that enable interactive, multi-step comparisons in node-link networks. We describe NetEx, a visualization tool that enables users to simultaneously explore different parts of a network and its thematic data, such as time series or conditional probability tables. NetEx, implemented as a Cytoscape plug-in, has been applied to the analysis of electrical power networks, Bayesian networks, and the Enron e-mail repository. In this paper we briefly discuss visualization and analysis of the Enron social network, but focus on data from an electrical power network. Specifically, we demonstrate how NetEx supports the analytical task of electrical power system fault diagnosis. Results from a user study with 25 subjects suggest that NetEx enables more accurate isolation of complex faults compared to an especially designed software tool.

  8. New techniques for multi-level cross section calculation and fitting

    International Nuclear Information System (INIS)

    Froehner, F.H.

    1980-09-01

    A number of recent developments in multi-level cross section work are described. A new iteration scheme for the conversion of Reich-Moore resonance parameters to Kapur-Peierls parameters allows application of Turing's method for Gaussian broadening of meromorphic functions directly to multi-level cross section expressions, without recourse to the Voigt profiles psi and chi. This makes calculation of Doppler-broadened Reich-Moore and MLBW cross sections practically as fast as SLBW and Adler-Adler cross section calculations involving the Voigt profiles. A convenient distant-level treatment utilizing average resonance parameters is presented. Apart from effectively dealing with edge effects in resonance fitting work it also leads to a simple prescription for the determination of bound levels which reproduce the thermal cross sections correctly. A brief discussion of improved resonance shape fitting techniques is included, with emphasis on the importance of correlated errors and proper use of prior information by application of Bayes' theorem. (orig.) [de

  10. An integrated extended Kalman filter–implicit level set algorithm for monitoring planar hydraulic fractures

    International Nuclear Information System (INIS)

    Peirce, A; Rochinha, F

    2012-01-01

    We describe a novel approach to the inversion of elasto-static tiltmeter measurements to monitor planar hydraulic fractures propagating within three-dimensional elastic media. The technique combines the extended Kalman filter (EKF), which predicts and updates state estimates using tiltmeter measurement time-series, with a novel implicit level set algorithm (ILSA), which solves the coupled elasto-hydrodynamic equations. The EKF and ILSA are integrated to produce an algorithm to locate the unknown fracture-free boundary. A scaling argument is used to derive a strategy to tune the algorithm parameters to enable measurement information to compensate for unmodeled dynamics. Synthetic tiltmeter data for three numerical experiments are generated by introducing significant changes to the fracture geometry by altering the confining geological stress field. Even though there is no confining stress field in the dynamic model used by the new EKF-ILSA scheme, it is able to use synthetic data to arrive at remarkably accurate predictions of the fracture widths and footprints. These experiments also explore the robustness of the algorithm to noise and to placement of tiltmeter arrays operating in the near-field and far-field regimes. In these experiments, the appropriate parameter choices and strategies to improve the robustness of the algorithm to significant measurement noise are explored. (paper)
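
    The EKF predict-update cycle at the core of such a scheme can be sketched generically; the model and measurement functions below are illustrative placeholders, not the coupled elasto-hydrodynamic (ILSA) forward model or the tiltmeter observation operator of the paper:

```python
import numpy as np

def ekf_step(x, P, f, F, h, H, Q, R, z):
    """One extended-Kalman-filter cycle: predict with model f (Jacobian F),
    then update with measurement z through observation h (Jacobian H)."""
    # Predict: propagate state and covariance through the (nonlinear) model.
    x_pred = f(x)
    P_pred = F(x) @ P @ F(x).T + Q
    # Update: fold the measurement in via the Kalman gain.
    y = z - h(x_pred)                        # innovation
    S = H(x_pred) @ P_pred @ H(x_pred).T + R
    K = P_pred @ H(x_pred).T @ np.linalg.inv(S)
    x_new = x_pred + K @ y
    P_new = (np.eye(len(x)) - K @ H(x_pred)) @ P_pred
    return x_new, P_new
```

    Tuning Q relative to R is what lets measurement information compensate for unmodeled dynamics, as in the scaling argument described above.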

  11. Development of technique for quantifying gamma emitters in metal waste. New technique of precise and automatic measurements for confirmation of clearance level of metal waste

    International Nuclear Information System (INIS)

    Hattori, Takatoshi

    2002-01-01

    A new technique for precise, automatic measurement of gamma emitters in metal waste has been developed using 3D non-contact shape measurement and Monte Carlo calculation techniques, in order to confirm that the specific radioactivity level of metal waste satisfies the clearance level and, furthermore, that the surface contamination level of the metal waste is below the legal standard level. The technique automatically derives a calibration factor for every measurement target and corrects for the reduction of the background count rate in gamma measurements caused by the self-shielding effect of the measurement target. The accuracy of the present method has been verified using mock metal wastes of various shapes, numbers and sizes. Assuming that the detection limit goal for practical use is 250 Bq in radioactivity, a practical gamma monitor has been designed so as to confirm both the clearance level and the surface contamination level simultaneously and to process metal waste at a rate of 2-10 tons per day. (author)
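
    The essential bookkeeping — converting a net count rate into specific activity through a per-target calibration (efficiency) factor such as one obtained from a Monte Carlo model of the measured object's shape — can be sketched as follows; all numbers and names are hypothetical:

```python
def specific_activity(gross_cps, background_cps, efficiency, mass_kg):
    """Specific activity (Bq/kg) from a gamma count rate, given a per-target
    detection efficiency (counts per decay), e.g. from a Monte Carlo model
    of the measured object. All parameter names are illustrative."""
    net_cps = max(gross_cps - background_cps, 0.0)
    activity_bq = net_cps / efficiency     # decays per second in the target
    return activity_bq / mass_kg
```

    The result would then be compared against the clearance level (in Bq/kg) for the nuclide in question.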

  12. Application of physiologically based pharmacokinetic modeling in setting acute exposure guideline levels for methylene chloride.

    NARCIS (Netherlands)

    Bos, Peter Martinus Jozef; Zeilmaker, Marco Jacob; Eijkeren, Jan Cornelis Henri van

    2006-01-01

    Acute exposure guideline levels (AEGLs) are derived to protect the human population from adverse health effects in case of single exposure due to an accidental release of chemicals into the atmosphere. AEGLs are set at three different levels of increasing toxicity for exposure durations ranging from

  13. Single-step reinitialization and extending algorithms for level-set based multi-phase flow simulations

    Science.gov (United States)

    Fu, Lin; Hu, Xiangyu Y.; Adams, Nikolaus A.

    2017-12-01

    We propose efficient single-step formulations for reinitialization and extending algorithms, which are critical components of level-set based interface-tracking methods. The level-set field is reinitialized with a single-step (non-iterative) "forward tracing" algorithm. A minimum set of cells is defined that describes the interface, and reinitialization employs only data from these cells. Fluid states are extrapolated or extended across the interface by a single-step "backward tracing" algorithm. Both algorithms, which are motivated by analogy to ray-tracing, avoid multiple block-boundary data exchanges that are inevitable for iterative reinitialization and extending approaches within a parallel-computing environment. The single-step algorithms are combined with a multi-resolution conservative sharp-interface method and validated by a wide range of benchmark test cases. We demonstrate that the proposed reinitialization method achieves second-order accuracy in conserving the volume of each phase. The interface location is invariant to reapplication of the single-step reinitialization. Generally, we observe smaller absolute errors than for standard iterative reinitialization on the same grid. The computational efficiency is higher than for the standard and typical high-order iterative reinitialization methods. We observe a 2- to 6-times efficiency improvement over the standard method for serial execution. The proposed single-step extending algorithm, which is commonly employed for assigning data to ghost cells with ghost-fluid or conservative interface interaction methods, shows about 10-times efficiency improvement over the standard method while maintaining the same accuracy. Despite their simplicity, the proposed algorithms offer an efficient and robust alternative to iterative reinitialization and extending methods for level-set based multi-phase simulations.
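
    The flavor of non-iterative reinitialization can be conveyed by a brute-force sketch: collect the interface cells (where the level-set field changes sign) and reset every cell to its signed distance from that set. This conveys only the idea; the paper's forward-tracing algorithm is far more efficient and accurate:

```python
import numpy as np

def reinitialize(phi, dx=1.0):
    """Non-iterative reinitialization sketch for a 2D level-set field:
    find cells where phi changes sign (the interface cells), then reset
    every cell to its signed distance from that set. Brute force, O(N*M)."""
    sgn = np.sign(phi)
    iface = np.zeros_like(phi, dtype=bool)
    # Mark both cells of every sign-changing pair in x and y.
    iface[:-1, :] |= sgn[:-1, :] != sgn[1:, :]
    iface[1:, :]  |= sgn[:-1, :] != sgn[1:, :]
    iface[:, :-1] |= sgn[:, :-1] != sgn[:, 1:]
    iface[:, 1:]  |= sgn[:, :-1] != sgn[:, 1:]
    pts = np.argwhere(iface) * dx
    if len(pts) == 0:
        return phi.copy()                 # no interface in the domain
    ii, jj = np.meshgrid(np.arange(phi.shape[0]), np.arange(phi.shape[1]),
                         indexing="ij")
    grid = np.stack([ii, jj], axis=-1) * dx
    # Distance from each grid point to the nearest interface cell.
    d = np.min(np.linalg.norm(grid[:, :, None, :] - pts[None, None, :, :],
                              axis=-1), axis=-1)
    return np.where(phi >= 0, d, -d)
```

    The brute-force distance search is what a narrow-band or tracing scheme replaces; only the sign pattern of the input survives, which is the defining property of reinitialization.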

  14. Evaluation of bubbler/diaphragm techniques to measure surface level in the waste storage tanks

    International Nuclear Information System (INIS)

    Peters, T.J.; Hickman, B.J.; Colson, J.B.

    1993-10-01

    This report describes the results of tests conducted at the Pacific Northwest Laboratory (PNL) to determine whether a bubbler technique can be used to determine the surface level in the waste tanks. Two techniques were evaluated. The first is a standard bubbler system, in which a tube is placed below the surface of the liquid to be measured and the air pressure inside the tube is increased until bubbles begin to be emitted from the tube; the air pressure is then a function of the pressure at the bottom of the tube. The second technique is similar to the standard bubbler technique, but instead of bubbles being released into the material to be gauged, air pressure is increased against a diaphragm until it is sufficient to overcome the pressure of the liquid at the given depth, at which point the air flows through a return loop back to a vent. The advantage of the diaphragm system is that it is sealed: no air is released into the waste tank materials, and the waste tank materials cannot enter the air flow. Based on the results of the tests conducted in this program, it appears that neither the bubbler nor the diaphragm system tested could be used for accurate measurements of the level in the waste tanks. Both exhibited deposits of simulated waste tank material at the ends of the devices, which degraded the gauges' ability to accurately track changes in the surface level, in addition to making the measured level values inaccurate. Further investigation into the cause of this inaccuracy may be warranted. Alternate diaphragm materials may improve the performance of this gauge
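
    The standard bubbler reading maps to liquid level through the hydrostatic relation P = ρgh; a minimal sketch (the density and pressure values are illustrative, not tank values):

```python
RHO_WATER = 1000.0   # kg/m^3, assumed density of the gauged liquid
G = 9.81             # m/s^2, gravitational acceleration

def level_above_tip(gauge_pressure_pa, density=RHO_WATER):
    """Liquid level (m) above the bubbler tube tip, from the gauge pressure
    (pressure above atmospheric) at which bubbles are just emitted:
    h = P / (rho * g)."""
    return gauge_pressure_pa / (density * G)
```

    A gauge reading of about 9.8 kPa over water thus corresponds to one metre of head above the tip; material deposits at the tube tip, as observed in the tests, bias this relation.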

  15. A thick level set interface model for simulating fatigue-driven delamination in composites

    NARCIS (Netherlands)

    Latifi, M.; Van der Meer, F.P.; Sluys, L.J.

    2015-01-01

    This paper presents a new damage model for simulating fatigue-driven delamination in composite laminates. This model is developed based on the Thick Level Set approach (TLS) and provides a favorable link between damage mechanics and fracture mechanics through the non-local evaluation of the energy

  16. An Optimized, Grid Independent, Narrow Band Data Structure for High Resolution Level Sets

    DEFF Research Database (Denmark)

    Nielsen, Michael Bang; Museth, Ken

    2004-01-01

    enforced by the convex boundaries of an underlying Cartesian computational grid. Here we present a novel, very memory-efficient narrow band data structure, dubbed the Sparse Grid, that enables the representation of grid-independent high-resolution level sets. The key features of our new data structure are...

  17. A mass conserving level set method for detailed numerical simulation of liquid atomization

    Energy Technology Data Exchange (ETDEWEB)

    Luo, Kun; Shao, Changxiao [State Key Laboratory of Clean Energy Utilization, Zhejiang University, Hangzhou 310027 (China); Yang, Yue [State Key Laboratory of Turbulence and Complex Systems, Peking University, Beijing 100871 (China); Fan, Jianren, E-mail: fanjr@zju.edu.cn [State Key Laboratory of Clean Energy Utilization, Zhejiang University, Hangzhou 310027 (China)

    2015-10-01

    An improved mass conserving level set method for detailed numerical simulations of liquid atomization is developed to address the issue of mass loss in the existing level set method. This method introduces a mass remedy procedure based on the local curvature at the interface, and in principle, can ensure the absolute mass conservation of the liquid phase in the computational domain. Three benchmark cases, including Zalesak's disk, a drop deforming in a vortex field, and the binary drop head-on collision, are simulated to validate the present method, and excellent agreement with exact solutions or experimental results is achieved. It is shown that the present method is able to capture the complex interface with second-order accuracy and negligible additional computational cost. The present method is then applied to study more complex flows, such as a drop impacting on a liquid film and the swirling liquid sheet atomization, which again demonstrates the advantages of mass conservation and the capability to represent the interface accurately.
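
    The mass-conservation bookkeeping behind such a method can be illustrated by computing the liquid area from the level-set field with a smoothed Heaviside function; the remedy procedure itself (curvature-based redistribution) is not reproduced here:

```python
import numpy as np

def liquid_area(phi, dx, eps=None):
    """Area of the phi < 0 (liquid) region via a smoothed Heaviside function,
    the usual level-set bookkeeping behind mass-conservation checks."""
    if eps is None:
        eps = 1.5 * dx                     # common smoothing half-width
    H = np.where(phi < -eps, 1.0,
        np.where(phi > eps, 0.0,
                 0.5 - phi / (2 * eps)
                 - np.sin(np.pi * phi / eps) / (2 * np.pi)))
    return H.sum() * dx * dx

def mass_error(phi_now, phi_init, dx):
    """Relative change of liquid area, the quantity a mass remedy corrects."""
    a0 = liquid_area(phi_init, dx)
    return (liquid_area(phi_now, dx) - a0) / a0
</n```

    Tracking this error after each advection/reinitialization step is how mass loss in a standard level-set method is diagnosed.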

  18. Shape Reconstruction of Thin Electromagnetic Inclusions via Boundary Measurements: Level-Set Method Combined with the Topological Derivative

    Directory of Open Access Journals (Sweden)

    Won-Kwang Park

    2013-01-01

    Full Text Available An inverse problem for reconstructing arbitrary-shaped thin penetrable electromagnetic inclusions concealed in a homogeneous material is considered in this paper. For this purpose, the level-set evolution method is adopted. The topological derivative concept is incorporated in order to evaluate the evolution speed of the level-set functions. The results of the corresponding numerical simulations with and without noise are presented in this paper.

  19. Robust space-time extraction of ventricular surface evolution using multiphase level sets

    Science.gov (United States)

    Drapaca, Corina S.; Cardenas, Valerie; Studholme, Colin

    2004-05-01

    This paper focuses on the problem of accurately extracting the CSF-tissue boundary, particularly around the ventricular surface, from serial structural MRI of the brain acquired in imaging studies of aging and dementia. This is a challenging problem because of the common occurrence of peri-ventricular lesions which locally alter the appearance of white matter. We examine a level set approach which evolves a four dimensional description of the ventricular surface over time. This has the advantage of allowing constraints on the contour in the temporal dimension, improving the consistency of the extracted object over time. We follow the approach proposed by Chan and Vese, which is based on the Mumford and Shah model and implemented using the Osher and Sethian level set method. We have extended this to the four-dimensional case to propagate a 4D contour toward the tissue boundaries through the evolution of a 5D implicit function. For convergence we use region-based information provided by the image rather than the gradient of the image. This is adapted to allow intensity contrast changes between time frames in the MRI sequence. Results on time sequences of 3D brain MR images are presented and discussed.
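
    The region-based (Chan-Vese) evolution speed reduces, per time frame, to comparing each pixel with the mean intensities inside and outside the contour; a minimal 2D sketch that omits the curvature and temporal-coupling terms of the full 4D method:

```python
import numpy as np

def chan_vese_speed(phi, img):
    """Region term of the Chan-Vese model: the level set moves so that each
    pixel joins the region (inside phi > 0, outside phi <= 0) whose mean
    intensity it is closer to. Curvature regularization omitted."""
    inside = phi > 0
    c1 = img[inside].mean() if inside.any() else 0.0
    c2 = img[~inside].mean() if (~inside).any() else 0.0
    # dphi/dt is proportional to -(I - c1)^2 + (I - c2)^2.
    return -(img - c1) ** 2 + (img - c2) ** 2
```

    A positive speed pushes a pixel into the inside region and a negative speed pushes it out, which is why the method needs no image gradient for convergence.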

  20. Image-guided regularization level set evolution for MR image segmentation and bias field correction.

    Science.gov (United States)

    Wang, Lingfeng; Pan, Chunhong

    2014-01-01

    Magnetic resonance (MR) image segmentation is a crucial step in surgical and treatment planning. In this paper, we propose a level-set-based segmentation method for MR images with the intensity inhomogeneity problem. To tackle the initialization sensitivity problem, we propose a new image-guided regularization to restrict the level set function. The maximum a posteriori inference is adopted to unify segmentation and bias field correction within a single framework. Under this framework, both the contour prior and the bias field prior are fully used. As a result, the image intensity inhomogeneity can be effectively handled. Extensive experiments are provided to evaluate the proposed method, showing significant improvements in both segmentation and bias field correction accuracies as compared with other state-of-the-art approaches. Copyright © 2014 Elsevier Inc. All rights reserved.

  1. Application of the level set method for multi-phase flow computation in fusion engineering

    International Nuclear Information System (INIS)

    Luo, X-Y.; Ni, M-J.; Ying, A.; Abdou, M.

    2006-01-01

    Numerical simulation of multi-phase flow is essential to evaluate the feasibility of a liquid protection scheme for the power plant chamber. The level set method is one of the best methods for computing and analyzing the motion of interfaces in multi-phase flow. This paper presents a general formula for the second-order projection method combined with the level set method to simulate unsteady incompressible multi-phase flow, with and without phase change, encountered in fusion science and engineering. The third-order ENO scheme and the second-order semi-implicit Crank-Nicolson scheme are used to update the convective and diffusion terms. The numerical results show that this method can handle complex deformation of the interface; the effect of liquid-vapor phase change will be included in future work

  2. Local gray level S-curve transformation - A generalized contrast enhancement technique for medical images.

    Science.gov (United States)

    Gandhamal, Akash; Talbar, Sanjay; Gajre, Suhas; Hani, Ahmad Fadzil M; Kumar, Dileep

    2017-04-01

    Most medical images suffer from inadequate contrast and brightness, which leads to blurred or weak edges (low contrast) between adjacent tissues, resulting in poor segmentation and errors in classification of tissues. Thus, contrast enhancement to improve visual information is extremely important in the development of computational approaches for obtaining quantitative measurements from medical images. In this research, a contrast enhancement algorithm that applies a gray-level S-curve transformation technique locally in medical images obtained from various modalities is investigated. The S-curve transformation is an extended gray-level transformation technique that results in a curve similar to a sigmoid function through a pixel-by-pixel transformation. This curve essentially increases the difference between minimum and maximum gray values and the image gradient locally, thereby strengthening edges between adjacent tissues. The performance of the proposed technique is determined by measuring several parameters, namely, edge content (improvement in image gradient), enhancement measure (degree of contrast enhancement), absolute mean brightness error (luminance distortion caused by the enhancement), and feature similarity index measure (preservation of the original image features). Based on medical image datasets comprising 1937 images from various modalities such as ultrasound, mammograms, fluorescent images, fundus, X-ray radiographs and MR images, it is found that the local gray-level S-curve transformation outperforms existing techniques in terms of improved contrast and brightness, resulting in clear and strong edges between adjacent tissues. The proposed technique can be used as a preprocessing tool for effective segmentation and classification of tissue structures in medical images. Copyright © 2017 Elsevier Ltd. All rights reserved.
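
    A minimal sketch of a local S-curve (sigmoid) gray-level transform; the steepness parameter alpha is an assumed illustrative default, not a value from the paper:

```python
import numpy as np

def s_curve(window, alpha=8.0):
    """Gray-level S-curve (sigmoid) transform of a local window: stretches
    values around the window's mid-range, increasing the spread of mid
    gray values and hence the local gradient. alpha sets the steepness."""
    w = window.astype(float)
    lo, hi = w.min(), w.max()
    if hi == lo:
        return w                            # flat window: nothing to stretch
    t = (w - lo) / (hi - lo)                # normalize to [0, 1]
    s = 1.0 / (1.0 + np.exp(-alpha * (t - 0.5)))
    s = (s - s.min()) / (s.max() - s.min()) # re-pin endpoints to 0 and 1
    return lo + s * (hi - lo)               # map back to the original range
```

    Applied window by window, the transform preserves the local minimum and maximum while widening the spread of mid-range gray values, which is what strengthens edges between adjacent tissues.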

  3. Correlation test to assess low-level processing of high-density oligonucleotide microarray data

    Directory of Open Access Journals (Sweden)

    Bergh Jonas

    2005-03-01

    Background: There are currently a number of competing techniques for low-level processing of oligonucleotide array data. The choice of technique has a profound effect on subsequent statistical analyses, but there is no method to assess whether a particular technique is appropriate for a specific data set without reference to external data. Results: We analyzed coregulation between genes in order to detect insufficient normalization between arrays, where coregulation is measured in terms of statistical correlation. In a large collection of genes, a random pair of genes should on average have zero correlation, hence allowing a correlation test. For all data sets that we evaluated, and the three most commonly used low-level processing procedures, including MAS5, RMA and MBEI, housekeeping-gene normalization failed the test. For a real clinical data set, RMA and MBEI showed significant correlation for absent genes. We also found that a second round of normalization on the probe set level improved normalization significantly throughout. Conclusion: Previous evaluation of low-level processing in the literature has been limited to artificial spike-in and mixture data sets. In the absence of a known gold standard, the correlation criterion allows us to assess the appropriateness of low-level processing of a specific data set and the success of normalization for subsets of genes.
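
    The correlation test itself is straightforward to sketch: draw random gene pairs and average their Pearson correlations across arrays; values far from zero indicate residual array-level effects. Gene counts, pair counts and thresholds below are illustrative:

```python
import numpy as np

def mean_random_pair_correlation(expr, n_pairs=2000, seed=0):
    """Average Pearson correlation over random gene pairs (rows = genes,
    columns = arrays). Near zero for well-normalized data; a systematic
    positive value suggests insufficient between-array normalization."""
    rng = np.random.default_rng(seed)
    n_genes = expr.shape[0]
    total = 0.0
    for _ in range(n_pairs):
        i, j = rng.choice(n_genes, size=2, replace=False)
        total += np.corrcoef(expr[i], expr[j])[0, 1]
    return total / n_pairs
```

    Adding a shared per-array offset (a crude model of a normalization failure) drives the statistic well above zero, which is exactly the signal the test exploits.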

  4. Simple, direct drug susceptibility testing technique for diagnosis of drug-resistant tuberculosis in resource-poor settings.

    Science.gov (United States)

    Kim, C-K; Joo, Y-T; Lee, E P; Park, Y K; Kim, H-J; Kim, S J

    2013-09-01

    The Korean Institute of Tuberculosis, Seoul, Republic of Korea. To develop a simple, direct drug susceptibility testing (DST) technique using Kudoh-modified Ogawa (KMO) medium. The critical concentrations of isoniazid (INH), rifampicin (RMP), kanamycin (KM) and ofloxacin (OFX) for KMO medium were calibrated by comparing the minimal inhibitory concentrations (MICs) against clinical isolates of Mycobacterium tuberculosis on KMO with those on Löwenstein-Jensen (LJ). The performance of the direct KMO DST technique was evaluated on 186 smear-positive sputum specimens and compared with indirect LJ DST. Agreement of MICs on direct vs. indirect DST was high for INH, RMP and OFX. KM MICs on KMO were ∼10 µg/ml higher than those on LJ. The critical concentrations of INH, RMP, OFX and KM for KMO were therefore set at 0.2, 40.0, 2.0, and 40.0 µg/ml. The evaluation of direct DST of smear-positive sputum specimens showed 100% agreement with indirect LJ DST for INH and RMP. However, the respective susceptible and resistant predictive values were 98.8% and 100% for OFX, and 100% and 80% for KM. Direct DST using KMO is useful, with clear advantages of a shorter turnaround time, procedural simplicity and low cost compared to indirect DST. It may be most indicated in resource-poor settings for programmatic management of drug-resistant tuberculosis.

  5. A variational approach to multi-phase motion of gas, liquid and solid based on the level set method

    Science.gov (United States)

    Yokoi, Kensuke

    2009-07-01

    We propose a simple and robust numerical algorithm to deal with multi-phase motion of gas, liquid and solid based on the level set method [S. Osher, J.A. Sethian, Fronts propagating with curvature-dependent speed: Algorithms based on Hamilton-Jacobi formulations, J. Comput. Phys. 79 (1988) 12; M. Sussman, P. Smereka, S. Osher, A level set approach for computing solutions to incompressible two-phase flow, J. Comput. Phys. 114 (1994) 146; J.A. Sethian, Level Set Methods and Fast Marching Methods, Cambridge University Press, 1999; S. Osher, R. Fedkiw, Level Set Methods and Dynamic Implicit Surfaces, Applied Mathematical Sciences, vol. 153, Springer, 2003]. In the Eulerian framework, to simulate interaction between a moving solid object and an interfacial flow, we need to define at least two functions (level set functions) to distinguish three materials. In such simulations, in general, the two functions overlap and/or disagree due to numerical errors such as numerical diffusion. In this paper, we resolve the problem using the idea of the active contour model [M. Kass, A. Witkin, D. Terzopoulos, Snakes: active contour models, International Journal of Computer Vision 1 (1988) 321; V. Caselles, R. Kimmel, G. Sapiro, Geodesic active contours, International Journal of Computer Vision 22 (1997) 61; G. Sapiro, Geometric Partial Differential Equations and Image Analysis, Cambridge University Press, 2001; R. Kimmel, Numerical Geometry of Images: Theory, Algorithms, and Applications, Springer-Verlag, 2003] introduced in the field of image processing.
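
    The bookkeeping problem the paper addresses can be seen in a minimal sketch that labels cells from two level-set functions; numerical diffusion can make both functions negative in the same cell, producing the overlap flagged below (the variational/active-contour correction itself is not reproduced):

```python
import numpy as np

def classify(phi_solid, phi_liquid):
    """Label each cell as solid, liquid, or gas from two level-set functions
    (negative inside the respective material), and flag inconsistent cells
    where both functions claim the cell (overlap). Naive sketch only."""
    solid = phi_solid < 0
    liquid = phi_liquid < 0
    labels = np.full(phi_solid.shape, "gas", dtype=object)
    labels[liquid] = "liquid"
    labels[solid] = "solid"          # solid wins ties in this naive sketch
    overlap = solid & liquid         # cells needing a consistency correction
    return labels, overlap
```

    The arbitrary "solid wins" tie-break is exactly the kind of ad hoc rule the variational approach replaces with a principled correction.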

  6. Radar rainfall image repair techniques

    Directory of Open Access Journals (Sweden)

    Stephen M. Wesson

    2004-01-01

    There are various quality problems associated with radar rainfall data viewed in images that include ground clutter, beam blocking and anomalous propagation, to name a few. To obtain the best rainfall estimate possible, techniques for removing ground clutter (non-meteorological echoes that influence radar data quality on 2-D radar rainfall image data sets are presented here. These techniques concentrate on repairing the images in both a computationally fast and accurate manner, and are nearest neighbour techniques of two sub-types: Individual Target and Border Tracing. The contaminated data is estimated through Kriging, considered the optimal technique for the spatial interpolation of Gaussian data, where the 'screening effect' that occurs with the Kriging weighting distribution around target points is exploited to ensure computational efficiency. Matrix rank reduction techniques in combination with Singular Value Decomposition (SVD are also suggested for finding an efficient solution to the Kriging Equations which can cope with near singular systems. Rainfall estimation at ground level from radar rainfall volume scan data is of interest and importance in earth bound applications such as hydrology and agriculture. As an extension of the above, Ordinary Kriging is applied to three-dimensional radar rainfall data to estimate rainfall rate at ground level. Keywords: ground clutter, data infilling, Ordinary Kriging, nearest neighbours, Singular Value Decomposition, border tracing, computation time, ground level rainfall estimation
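
    A minimal Ordinary Kriging sketch for a single target point, using an assumed Gaussian covariance model rather than a variogram fitted to radar data; the screening-effect and SVD rank-reduction refinements are omitted:

```python
import numpy as np

def ordinary_kriging(xy_known, values, xy_target, length=2.0):
    """Ordinary Kriging estimate at one target point with a Gaussian
    covariance model C(h) = exp(-(h/length)^2). The Lagrange-multiplier
    row enforces that the weights sum to one (unbiasedness)."""
    n = len(values)
    d = np.linalg.norm(xy_known[:, None, :] - xy_known[None, :, :], axis=-1)
    C = np.exp(-(d / length) ** 2)
    # Augmented system [C 1; 1' 0] [w; mu] = [c0; 1].
    A = np.ones((n + 1, n + 1)); A[:n, :n] = C; A[n, n] = 0.0
    b = np.ones(n + 1)
    b[:n] = np.exp(-(np.linalg.norm(xy_known - xy_target, axis=-1) / length) ** 2)
    w = np.linalg.solve(A, b)[:n]
    return float(w @ values)
```

    With no nugget effect, the estimate honors the data exactly at the sample points, and symmetric configurations yield the intuitive averages.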

  7. On piecewise constant level-set (PCLS) methods for the identification of discontinuous parameters in ill-posed problems

    International Nuclear Information System (INIS)

    De Cezaro, A; Leitão, A; Tai, X-C

    2013-01-01

    We investigate level-set-type methods for solving ill-posed problems with discontinuous (piecewise constant) coefficients. The goal is to identify the level sets as well as the level values of an unknown parameter function on a model described by a nonlinear ill-posed operator equation. The PCLS approach is used here to parametrize the solution of a given operator equation in terms of an L2 level-set function, i.e. the level-set function itself is assumed to be a piecewise constant function. Two distinct methods are proposed for computing stable solutions of the resulting ill-posed problem: the first is based on Tikhonov regularization, while the second is based on the augmented Lagrangian approach with total variation penalization. Classical regularization results (Engl H W et al 1996 Mathematics and its Applications (Dordrecht: Kluwer)) are derived for the Tikhonov method. On the other hand, for the augmented Lagrangian method, we succeed in proving the existence of (generalized) Lagrangian multipliers in the sense of (Rockafellar R T and Wets R J-B 1998 Grundlehren der Mathematischen Wissenschaften (Berlin: Springer)). Numerical experiments are performed for a 2D inverse potential problem (Hettlich F and Rundell W 1996 Inverse Problems 12 251–66), demonstrating the capabilities of both methods for solving this ill-posed problem in a stable way (complicated inclusions are recovered without any a priori geometrical information on the unknown parameter). (paper)
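
    The Tikhonov route, in its simplest linear-algebra form, stabilizes an ill-conditioned inversion by penalizing the solution norm; a generic least-squares sketch, not the nonlinear operator-equation setting of the paper:

```python
import numpy as np

def tikhonov(A, b, alpha):
    """Tikhonov-regularized least squares: minimize ||Ax - b||^2 + alpha*||x||^2
    via the regularized normal equations (A'A + alpha*I) x = A'b."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ b)
```

    The penalty damps the components of the solution associated with tiny singular values, which is what makes the reconstruction stable against noise.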

  8. Contextualising Water Use in Residential Settings: A Survey of Non-Intrusive Techniques and Approaches

    Directory of Open Access Journals (Sweden)

    Davide Carboni

    2016-05-01

    Water monitoring in households is important to ensure the sustainability of fresh water reserves on our planet. It provides stakeholders with the statistics required to formulate optimal strategies in residential water management. However, this should not be prohibitive, and appliance-level water monitoring cannot practically be achieved by deploying sensors on every faucet or water-consuming device of interest, due to the high hardware costs and complexity, not to mention the risk of accidental leakages that can derive from the extra plumbing needed. Machine learning and data mining are promising techniques for analysing monitored data to obtain non-intrusive water usage disaggregation, because they can discern individual water uses from aggregated data acquired at a single point of observation. This paper provides an overview of water usage disaggregation systems and related techniques adopted for water event classification. The state of the art of algorithms and testbeds used for fixture recognition is reviewed, and a discussion of the prominent challenges and future research is also included.

  9. GPU accelerated edge-region based level set evolution constrained by 2D gray-scale histogram.

    Science.gov (United States)

    Balla-Arabé, Souleymane; Gao, Xinbo; Wang, Bin

    2013-07-01

    Due to its intrinsic nature, which allows it to easily handle complex shapes and topological changes, the level set method (LSM) has been widely used in image segmentation. Nevertheless, LSM is computationally expensive, which limits its applications in real-time systems. For this purpose, we propose a new level set algorithm, which uses edge, region, and 2D histogram information simultaneously in order to efficiently segment objects of interest in a given scene. The computational complexity of the proposed LSM is greatly reduced by using the highly parallelizable lattice Boltzmann method (LBM) with a body force to solve the level set equation (LSE). The body force is the link with image data and is defined from the proposed LSE. The proposed LSM is then implemented on an NVIDIA graphics processing unit to fully take advantage of the LBM's local nature. The new algorithm is effective, robust against noise, independent of the initial contour, fast, and highly parallelizable. The edge and region information enables detection of objects with and without edges, and the 2D histogram information makes the method effective in noisy environments. Experimental results on synthetic and real images demonstrate subjectively and objectively the performance of the proposed method.
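
    A 2D gray-scale histogram of the kind used as the third cue is simply the joint distribution of pixel intensity and local mean intensity; a small sketch (the bin count and 3x3 neighborhood are assumed defaults, not the paper's exact choices):

```python
import numpy as np

def histogram_2d(img, bins=16):
    """2D gray-scale histogram: joint counts of each pixel's intensity and
    the mean intensity of its 3x3 neighborhood. Noise pushes mass off the
    diagonal, which is what makes this cue robust in noisy images."""
    img = img.astype(float)
    padded = np.pad(img, 1, mode="edge")
    # 3x3 neighborhood mean via shifted sums (no loops over pixels).
    nb = sum(padded[i:i + img.shape[0], j:j + img.shape[1]]
             for i in range(3) for j in range(3)) / 9.0
    h, _, _ = np.histogram2d(img.ravel(), nb.ravel(), bins=bins,
                             range=[[0, 256], [0, 256]])
    return h
```

    For a clean, piecewise-constant image the mass concentrates in a few diagonal bins, one per region, which a segmentation term can exploit.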

  10. Application of a visual soil examination and evaluation technique at site and farm level

    NARCIS (Netherlands)

    Sonneveld, M.P.W.; Heuvelink, G.B.M.; Moolenaar, S.W.

    2014-01-01

    Visual soil examination and evaluation (VSEE) techniques are semi-quantitative methods that provide rapid and cost-effective information on soil quality. These are mostly applied at site or field level, but there is an increased need for soil quality indicators at farm level to allow integration

  11. Level-set dynamics and mixing efficiency of passive and active scalars in DNS and LES of turbulent mixing layers

    NARCIS (Netherlands)

    Geurts, Bernard J.; Vreman, Bert; Kuerten, Hans; Luo, Kai H.

    2001-01-01

    The mixing efficiency in a turbulent mixing layer is quantified by monitoring the surface-area of level-sets of scalar fields. The Laplace transform is applied to numerically calculate integrals over arbitrary level-sets. The analysis includes both direct and large-eddy simulation and is used to
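
    Monitoring the surface area of a level set (in 2D, the contour length) is typically done with a smoothed delta function and the co-area formula; a sketch under that standard approach, not the Laplace-transform integration used in the paper:

```python
import numpy as np

def level_set_length(phi, dx, eps=None):
    """Approximate length of the phi = 0 contour via the co-area formula:
    L ~ sum over cells of delta_eps(phi) * |grad phi| * dx^2, with a
    smoothed (cosine) delta function of half-width eps."""
    if eps is None:
        eps = 1.5 * dx
    d0, d1 = np.gradient(phi, dx)
    grad = np.hypot(d0, d1)
    delta = np.where(np.abs(phi) > eps, 0.0,
                     (1 + np.cos(np.pi * phi / eps)) / (2 * eps))
    return float((delta * grad).sum() * dx * dx)
```

    Tracking this quantity for level sets of a scalar field over time is one way to quantify how mixing wrinkles and stretches the scalar interfaces.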

  12. Survey of agents and techniques applicable to the solidification of low-level radioactive wastes

    International Nuclear Information System (INIS)

    Fuhrmann, M.; Neilson, R.M. Jr.; Colombo, P.

    1981-12-01

    A review of the various solidification agents and techniques that are currently available or potentially applicable for the solidification of low-level radioactive wastes is presented. An overview of the types and quantities of low-level wastes produced is presented. Descriptions of waste form matrix materials, the wastes types for which they have been or may be applied and available information concerning relevant waste form properties and characteristics follow. Also included are descriptions of the processing techniques themselves with an emphasis on those operating parameters which impact upon waste form properties. The solidification agents considered in this survey include: hydraulic cements, thermoplastic materials, thermosetting polymers, glasses, synthetic minerals and composite materials. This survey is part of a program supported by the United States Department of Energy's Low-Level Waste Management Program (LLWMP). This work provides input into LLWMP efforts to develop and compile information relevant to the treatment and processing of low-level wastes and their disposal by shallow land burial

  14. SET overexpression in HEK293 cells regulates mitochondrial uncoupling proteins levels within a mitochondrial fission/reduced autophagic flux scenario

    Energy Technology Data Exchange (ETDEWEB)

    Almeida, Luciana O.; Goto, Renata N. [Department of Clinical Analyses, Toxicology and Food Sciences, School of Pharmaceutical Sciences of Ribeirão Preto, University of São Paulo, Ribeirão Preto, SP (Brazil); Neto, Marinaldo P.C. [Department of Physics and Chemistry, School of Pharmaceutical Sciences of Ribeirão Preto, University of São Paulo, Ribeirão Preto, SP (Brazil); Sousa, Lucas O. [Department of Clinical Analyses, Toxicology and Food Sciences, School of Pharmaceutical Sciences of Ribeirão Preto, University of São Paulo, Ribeirão Preto, SP (Brazil); Curti, Carlos [Department of Physics and Chemistry, School of Pharmaceutical Sciences of Ribeirão Preto, University of São Paulo, Ribeirão Preto, SP (Brazil); Leopoldino, Andréia M., E-mail: andreiaml@usp.br [Department of Clinical Analyses, Toxicology and Food Sciences, School of Pharmaceutical Sciences of Ribeirão Preto, University of São Paulo, Ribeirão Preto, SP (Brazil)

    2015-03-06

    We hypothesized that SET, a protein that accumulates in some cancer types and in Alzheimer's disease, is involved in cell death through mitochondrial mechanisms. We addressed the mRNA and protein levels of the mitochondrial uncoupling proteins UCP1, UCP2 and UCP3 (S and L isoforms) by quantitative real-time PCR and immunofluorescence, as well as other mitochondrial involvements, in HEK293 cells overexpressing the SET protein (HEK293/SET), either in the presence or absence of oxidative stress induced by the pro-oxidant t-butyl hydroperoxide (t-BHP). SET overexpression in HEK293 cells decreased UCP1 and increased UCP2 and UCP3 (S/L) mRNA and protein levels, whilst also preventing lipid peroxidation and decreasing the content of cellular ATP. SET overexpression also (i) decreased the area of mitochondria and increased the number of organelles and lysosomes, (ii) increased mitochondrial fission, as demonstrated by increased FIS1 mRNA and FIS-1 protein levels, an apparent accumulation of DRP-1 protein, and an increase in the VDAC protein level, and (iii) reduced autophagic flux, as demonstrated by a decrease in LC3B lipidation (LC3B-II) in the presence of chloroquine. Therefore, SET overexpression in HEK293 cells promotes mitochondrial fission and reduces autophagic flux in apparent association with up-regulation of UCP2 and UCP3, implying a potential involvement in cellular processes that are deregulated in conditions such as Alzheimer's disease and cancer. - Highlights: • SET, UCPs and autophagy prevention are correlated. • SET action has mitochondrial involvement. • UCP2/3 may reduce ROS and prevent autophagy. • SET protects cells from ROS via UCP2/3.

  15. An improved level set method for brain MR images segmentation and bias correction.

    Science.gov (United States)

    Chen, Yunjie; Zhang, Jianwei; Macione, Jim

    2009-10-01

    Intensity inhomogeneities cause considerable difficulty in the quantitative analysis of magnetic resonance (MR) images. Thus, bias field estimation is a necessary step before quantitative analysis of MR data can be undertaken. This paper presents a variational level set approach to bias correction and segmentation for images with intensity inhomogeneities. Our method is based on the observation that intensities in a relatively small local region are separable, despite the inseparability of the intensities in the whole image caused by the overall intensity inhomogeneity. We first define a localized K-means-type clustering objective function for image intensities in a neighborhood around each point. The cluster centers in this objective function have a multiplicative factor that estimates the bias within the neighborhood. The objective function is then integrated over the entire domain to define the data term within the level set framework. Our method is able to capture bias of quite general profiles. Moreover, it is robust to initialization, and thereby allows fully automated applications. The proposed method has been used for images of various modalities with promising results.
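    The localized clustering idea can be sketched in a few lines (a toy 1-D version, not the authors' implementation; the window size, class count and update rules here are illustrative assumptions): intensities are modeled as a smooth multiplicative bias times one of K class centers, and labels, centers and bias are estimated alternately, with the bias treated as approximately constant inside a sliding window.

```python
import numpy as np

def fit_bias_and_classes(img, k=2, iters=20, win=15):
    """Alternately estimate a per-pixel multiplicative bias and k class
    centers so that img ~ bias * centers[label] (toy 1-D version)."""
    bias = np.ones(img.size)
    centers = np.linspace(img.min(), img.max(), k)
    for _ in range(iters):
        # 1) label each pixel by the closest bias-corrected center
        resid = np.abs(img[:, None] - bias[:, None] * centers[None, :])
        labels = resid.argmin(axis=1)
        # 2) least-squares update of each class center
        for j in range(k):
            m = labels == j
            if m.any():
                centers[j] = (bias[m] * img[m]).sum() / (bias[m] ** 2).sum()
        # 3) least-squares bias in a sliding window (locally ~ constant)
        c = centers[labels]
        num = np.convolve(img * c, np.ones(win), mode="same")
        den = np.convolve(c * c, np.ones(win), mode="same")
        bias = num / np.maximum(den, 1e-12)
    return bias, centers, labels
```

    Note that the product bias * center is only identifiable up to a common scale factor, so it is the ratio of the recovered centers, not their absolute values, that is meaningful.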

  16. Engagement techniques and playing level impact the biomechanical demands on rugby forwards during machine-based scrummaging

    OpenAIRE

    Preatoni, Ezio; Stokes, Keith A.; England, Michael E.; Trewartha, Grant

    2014-01-01

    Objectives: This cross-sectional study investigated the factors that may influence the physical loading on rugby forwards performing a scrum by studying the biomechanics of machine-based scrummaging under different engagement techniques and playing levels. Methods: 34 forward packs from six playing levels performed repetitions of five different types of engagement techniques against an instrumented scrum machine under realistic training conditions. Applied forces and body movements were recorded...

  17. Minimum deltaV Burn Planning for the International Space Station Using a Hybrid Optimization Technique, Level 1

    Science.gov (United States)

    Brown, Aaron J.

    2015-01-01

    The International Space Station's (ISS) trajectory is coordinated and executed by the Trajectory Operations and Planning (TOPO) group at NASA's Johnson Space Center. TOPO group personnel routinely generate look-ahead trajectories for the ISS that incorporate translation burns needed to maintain its orbit over the next three to twelve months. The burns are modeled as in-plane, horizontal burns, and must meet operational trajectory constraints imposed by both NASA and the Russian Space Agency. In generating these trajectories, TOPO personnel must determine the number of burns to model, each burn's Time of Ignition (TIG), and each burn's magnitude (i.e., deltaV) such that these constraints are met. The current process for targeting these burns is manually intensive, and does not take advantage of more modern techniques that can reduce the workload needed to find feasible burn solutions, i.e., solutions that simply meet the constraints, or provide optimal burn solutions that minimize the total deltaV while simultaneously meeting the constraints. A two-level, hybrid optimization technique is proposed to find both feasible and globally optimal burn solutions for ISS trajectory planning. For optimal solutions, the technique breaks the optimization problem into two distinct sub-problems: one for choosing the optimal number of burns and each burn's optimal TIG, and the other for computing the minimum total deltaV burn solution that satisfies the trajectory constraints. Each of the two levels uses a different optimization algorithm to solve one of the sub-problems, giving rise to a hybrid technique. Level 2, the outer level, uses a genetic algorithm to select the number of burns and each burn's TIG. Level 1, the inner level, uses the burn TIGs from Level 2 in a sequential quadratic programming (SQP) algorithm to compute a minimum total deltaV burn solution subject to the trajectory constraints. The total deltaV from Level 1 is then used as a fitness function by the genetic algorithm.
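    The two-level structure can be illustrated with a toy sketch (hypothetical stand-in dynamics, not the TOPO trajectory model): an outer genetic algorithm searches over burn times while an inner SQP solve (SciPy's SLSQP) finds the minimum total deltaV meeting a single equality constraint. The `efficiency` function, `TARGET` value and GA settings are all invented for illustration.

```python
import numpy as np
from scipy.optimize import minimize

TARGET = 10.0  # required total effect of the burns (arbitrary units)

def efficiency(t):
    """Hypothetical time-varying burn efficiency, peaking at t = 0.25."""
    return 1.0 + 0.5 * np.sin(2.0 * np.pi * np.asarray(t))

def inner_min_dv(tigs):
    """Level 1: given fixed burn times, SQP (SLSQP) for the minimum
    total deltaV meeting one equality constraint."""
    e = efficiency(tigs)
    res = minimize(lambda dv: dv.sum(), np.full(len(tigs), 1.0),
                   method="SLSQP", bounds=[(0.0, None)] * len(tigs),
                   constraints=[{"type": "eq",
                                 "fun": lambda dv: e @ dv - TARGET}])
    return res.fun

def outer_ga(n_burns=2, pop=24, gens=25, seed=0):
    """Level 2: tiny genetic algorithm over burn times in [0, 1], using
    the Level-1 total deltaV as the fitness."""
    rng = np.random.default_rng(seed)
    P = rng.random((pop, n_burns))
    for _ in range(gens):
        fit = np.array([inner_min_dv(ind) for ind in P])
        elite = P[np.argsort(fit)[: pop // 2]]       # keep the best half
        pa = elite[rng.integers(len(elite), size=pop - len(elite))]
        pb = elite[rng.integers(len(elite), size=pop - len(elite))]
        # crossover by averaging parent pairs, then Gaussian mutation
        kids = np.clip((pa + pb) / 2 + rng.normal(0.0, 0.05, pa.shape), 0, 1)
        P = np.vstack([elite, kids])
    fit = np.array([inner_min_dv(ind) for ind in P])
    return P[fit.argmin()], fit.min()
```

    In this toy problem the optimum concentrates the deltaV at the time of peak efficiency, so the GA should drive at least one burn time toward t = 0.25 and the total deltaV toward TARGET / 1.5.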

  18. DESIRE FOR LEVELS. Background study for the policy document "Setting Environmental Quality Standards for Water and Soil"

    NARCIS (Netherlands)

    van de Meent D; Aldenberg T; Canton JH; van Gestel CAM; Slooff W

    1990-01-01

    The report provides scientific support for setting environmental quality objectives for water, sediment and soil. Quality criteria are not set in this report; only options for decisions are given. The report is restricted to the derivation of 'maximally acceptable risk' (MAR) levels.

  19. Considering Actionability at the Participant's Research Setting Level for Anticipatable Incidental Findings from Clinical Research.

    Science.gov (United States)

    Ortiz-Osorno, Alberto Betto; Ehler, Linda A; Brooks, Judith

    2015-01-01

    Determining what constitutes an anticipatable incidental finding (IF) from clinical research and defining whether, and when, this IF should be returned to the participant have been topics of discussion in the field of human subject protections for the last 10 years. It has been debated that implementing a comprehensive IF-approach that addresses both the responsibility of researchers to return IFs and the expectation of participants to receive them can be logistically challenging. IFs have been debated at different levels, such as the ethical reasoning for considering their disclosure or the need for planning for them during the development of the research study. Some authors have discussed the methods for re-contacting participants for disclosing IFs, as well as the relevance of considering the clinical importance of the IFs. Similarly, other authors have debated about when IFs should be disclosed to participants. However, no author has addressed how the "actionability" of the IFs should be considered, evaluated, or characterized at the participant's research setting level. This paper defines the concept of "Actionability at the Participant's Research Setting Level" (APRSL) for anticipatable IFs from clinical research, discusses some related ethical concepts to justify the APRSL concept, proposes a strategy to incorporate APRSL into the planning and management of IFs, and suggests a strategy for integrating APRSL at each local research setting. © 2015 American Society of Law, Medicine & Ethics, Inc.

  20. Soft Computing Technique and Conventional Controller for Conical Tank Level Control

    Directory of Open Access Journals (Sweden)

    Sudharsana Vijayan

    2016-03-01

    In many process industries the control of liquid level is mandatory, but the control of nonlinear processes is difficult. Many process industries use conical tanks because their nonlinear shape provides better drainage for solid mixtures, slurries and viscous liquids. Control of conical tank level is thus a challenging task due to the tank's nonlinearity and continually varying cross-section, which gives a square-root relationship between the controlled variable (level) and the manipulated variable (flow rate). The main objective is to implement a suitable controller for the conical tank system to maintain the desired level. System identification of the nonlinear process is done using black-box modelling, and the process is found to be a first order plus dead time (FOPDT) model. In this paper it is proposed to obtain a mathematical model of the conical tank system, to study the system using block diagrams, and then to compare a soft computing technique (a fuzzy controller) with a conventional controller.
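    The square-root outflow and level-dependent cross-section can be seen in a minimal simulation sketch (all tank dimensions, gains and flow limits are hypothetical, and this is a plain PI loop rather than the paper's fuzzy controller):

```python
import math

def simulate(setpoint=1.0, h0=0.5, t_end=200.0, dt=0.05,
             Kp=1.5, Ki=0.3, R=1.0, H=2.0, k=0.5, f_max=2.0):
    """Euler simulation of an apex-down conical tank under PI level control.

    Outflow is k*sqrt(h), and the cross-sectional area pi*(R*h/H)**2 grows
    with level, so the effective plant gain varies strongly with h."""
    h, integ = h0, 0.0
    for _ in range(int(t_end / dt)):
        e = setpoint - h
        f_in = Kp * e + Ki * integ
        f_act = min(max(f_in, 0.0), f_max)   # inflow valve limits
        if f_in == f_act:                    # naive anti-windup
            integ += e * dt
        area = math.pi * (R * h / H) ** 2
        h += (f_act - k * math.sqrt(h)) * dt / max(area, 1e-6)
        h = max(h, 1e-6)
    return h
```

    With these illustrative numbers the level settles at the setpoint, where the integral term supplies the steady inflow k*sqrt(setpoint).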

  1. Accuracy of different abutment level impression techniques in All-On-4 dental implants

    Directory of Open Access Journals (Sweden)

    Marzieh Alikhasi

    2012-01-01

    Background and Aims: Passive fit of prosthetic frameworks is a major concern in implant dentistry. Impression technique is one of several variables that may affect the outcome of dental implants. The purpose of this study was to compare the three-dimensional accuracy of direct and indirect abutment-level implant impressions of the All-On-4 treatment plan. Materials and Methods: A reference acrylic resin model with four Branemark fixtures was made according to the All-On-4 treatment plan. Multiunit abutments were screwed into the fixtures and two special trays were made for the direct and indirect impression techniques. Ten direct and ten indirect impressions were made with the respective impression transfers. Impressions were poured with stone and the positional accuracy of the abutment analogues along the x, y, and z axes, as well as the angular displacement (Δθ), were evaluated using a Coordinate Measuring Machine (CMM). Data were analyzed using a t-test. Results: The direct impression technique was significantly more accurate than the indirect technique (P<0.001). Conclusion: The direct impression technique was significantly more accurate than the indirect technique in Δθ and Δr, as well as in Δx, Δy, and Δz.

  2. Architectural-level power estimation and experimentation

    Science.gov (United States)

    Ye, Wu

    With the emergence of a plethora of embedded and portable applications and ever-increasing integration levels, power dissipation of integrated circuits has moved to the forefront as a design constraint. Recent years have also seen a significant trend towards designs starting at the architectural (or RT) level. These demand accurate yet fast RT-level power estimation methodologies and tools. This thesis addresses issues and experiments associated with architectural-level power estimation. An execution-driven, cycle-accurate RT-level power simulator, SimplePower, was developed using transition-sensitive energy models. It is based on the architecture of a five-stage pipelined RISC datapath for both 0.35 μm and 0.8 μm technology and can execute the integer subset of the instruction set of SimpleScalar. SimplePower measures the energy consumed in the datapath, memory and on-chip buses. During the development of SimplePower, a partitioning power modeling technique was proposed to model the energy consumed in complex functional units. The accuracy of this technique was validated against HSPICE simulation results for a register file and a shifter. A novel, selectively gated pipeline register optimization technique was proposed to reduce the datapath energy consumption. It uses the decoded control signals to selectively gate the data fields of the pipeline registers. Simulation results show that this technique can reduce the datapath energy consumption by 18--36% for a set of benchmarks. A low-level back-end compiler optimization, register relabeling, was applied to reduce the switching activity on the on-chip instruction cache data bus. Its impact was evaluated by SimplePower. Results show that it can reduce the energy consumed in the instruction data buses by 3.55--16.90%. A quantitative evaluation was conducted for the impact of six state-of-the-art high-level compilation techniques on both datapath and memory energy consumption.
The experimental results provide a valuable insight for
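    A transition-sensitive energy model charges energy per input-bit toggle, so a gated pipeline register saves energy by holding its old value when its field is unused. A toy sketch of the idea (the per-toggle energy is an invented figure, not one of SimplePower's calibrated tables):

```python
def hamming(a, b):
    """Number of bit positions in which a and b differ."""
    return bin(a ^ b).count("1")

class TransitionEnergyModel:
    """Toy transition-sensitive model: a unit's per-cycle energy is
    proportional to how many of its input bits toggled."""
    def __init__(self, pj_per_toggle):
        self.pj = pj_per_toggle
        self.prev = 0
        self.total = 0.0
    def step(self, value):
        self.total += self.pj * hamming(self.prev, value)
        self.prev = value
        return self.total

def compare_gating(values, used_flags, pj=0.5):
    """A gated register holds its previous value when its field is
    unused, eliminating those transitions entirely."""
    gated, ungated = TransitionEnergyModel(pj), TransitionEnergyModel(pj)
    for v, used in zip(values, used_flags):
        ungated.step(v)
        gated.step(v if used else gated.prev)
    return ungated.total, gated.total
```

    For an alternating 4-bit pattern that is only consumed every other cycle, the ungated register toggles 16 bits while the gated one toggles only 4, a 4x reduction in this contrived case.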

  3. Analysis of Forensic Autopsy in 120 Cases of Medical Disputes Among Different Levels of Institutional Settings.

    Science.gov (United States)

    Yu, Lin-Sheng; Ye, Guang-Hua; Fan, Yan-Yan; Li, Xing-Biao; Feng, Xiang-Ping; Han, Jun-Ge; Lin, Ke-Zhi; Deng, Miao-Wu; Li, Feng

    2015-09-01

    Despite advances in medical science, the causes of death can sometimes only be determined by pathologists after a complete autopsy. Few studies have investigated the importance of forensic autopsy in medically disputed cases among different levels of institutional settings. Our study aimed to analyze forensic autopsy in 120 cases of medical disputes among five levels of institutional settings between 2001 and 2012 in Wenzhou, China. The results showed an overall concordance rate of 55%. Of the clinically missed diagnoses (39% of cases), cardiovascular pathology comprised 55.32%, while respiratory pathology accounted for the remaining 44.68%. Factors that increased the likelihood of missed diagnoses were private clinics, community settings, and county hospitals. These results support that autopsy remains an important tool for establishing the cause of death in medically disputed cases, which may directly establish or exclude fault in medical care and thereby help resolve these cases. © 2015 American Academy of Forensic Sciences.

  4. State-of-the-art review of quality assurance techniques for vitrified high level waste

    International Nuclear Information System (INIS)

    Miller, P.L.H.

    1984-07-01

    Quality assurance is required for certain chemical and physical properties of both the molten glass pour and the solidified glass within the stainless steel container. It is also required to monitor the physical condition of the container lid weld. A review is presented of techniques which are used or which might be adapted for use in the quality assurance of vitrified high level waste. For the most part only non-intrusive methods have been considered; however, some techniques which are not strictly non-intrusive have been reviewed where a non-intrusive technique has not been identified or where there are other advantages associated with the particular technique. In order to identify suitable candidate techniques, reference has been made to an extensive literature survey, and experts in the fields of nuclear waste technology, glass technology, non-destructive testing, chemical analysis and remote analysis have been contacted. The opinions of manufacturers and users of specific techniques have also been sought. A summary is also given of those techniques which can most readily be applied to the problem of quality assurance for vitrified waste, as well as recommendations for further research into techniques which might be adapted to suit this application. (author)

  5. EFFECTIVE SUMMARY FOR MASSIVE DATA SET

    Directory of Open Access Journals (Sweden)

    A. Radhika

    2015-07-01

    The growing size of data sets has increased interest in designing effective algorithms for space and time reduction. Applying high-dimensional techniques over a large data set is difficult, so randomized techniques are used for analyzing data sets in which data held across network storage must be collected and analyzed continuously. Previously, a collaborative filtering approach was used for finding similar patterns based on user rankings, but such linear approaches require high running time and more space. To overcome this, a sketching technique is used to represent massive data sets. Sketching builds short fingerprints of the users' item sets, which allow the similarity between the sets of different users to be computed approximately. The idea of sketching is to generate a minimal subset of records that represents all of the original records. Sketching comprises two techniques: dimensionality reduction, which reduces rows or columns, and data reduction. It is shown that sketching can be performed using Principal Component Analysis for finding index values.
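    One classic way to build such fingerprints is MinHash: each user's item set is reduced to a short signature, and the fraction of matching signature entries estimates the Jaccard similarity of the underlying sets. This is a sketch under the assumption that MinHash-style sketching is meant; the affine hash construction below is illustrative.

```python
import random

def minhash_signature(items, seeds, modulus=(1 << 61) - 1):
    """MinHash signature: one random affine hash per seed, keeping the
    minimum hash value over the item set."""
    sig = []
    for s in seeds:
        rnd = random.Random(s)
        a = rnd.randrange(1, modulus)
        b = rnd.randrange(modulus)
        sig.append(min((a * hash(x) + b) % modulus for x in items))
    return sig

def estimate_jaccard(sig1, sig2):
    """Fraction of matching entries estimates |A ∩ B| / |A ∪ B|."""
    return sum(m1 == m2 for m1, m2 in zip(sig1, sig2)) / len(sig1)
```

    Two signatures of length 200 replace two arbitrarily large sets, and their entry-wise agreement rate converges to the true Jaccard similarity as the number of seeds grows.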

  6. A combined single-multiphase flow formulation of the premixing phase using the level set method

    International Nuclear Information System (INIS)

    Leskovar, M.; Marn, J.

    1999-01-01

    The premixing phase of a steam explosion covers the interaction of the melt jet or droplets with the water prior to any steam explosion occurring. To gain better insight into the hydrodynamic processes during the premixing phase, cold isothermal premixing experiments are performed in addition to hot premixing experiments, where the water evaporation is significant. The special feature of isothermal premixing experiments is that three phases are involved: the water, the air and the spheres phase, but only the spheres phase mixes with the other two phases, whereas the water and air phases do not mix and remain separated by a free surface. Our idea therefore was to treat the isothermal premixing process with a combined single-multiphase flow model. In this combined model the water and air phases are treated as a single phase with discontinuous phase properties at the water-air interface, whereas the spheres are treated as usual with a multiphase flow model, where the spheres represent the dispersed phase and the common water-air phase represents the continuous phase. The common water-air phase was described with the front capturing method based on the level set formulation. In the level set formulation, the boundary between the two fluids is modeled as the zero set of a smooth signed normal distance function defined on the entire physical domain. The boundary is then updated by solving a nonlinear equation of the Hamilton-Jacobi type on the whole domain. With this single-multiphase flow model the Queos isothermal premixing experiment Q08 has been simulated. A numerical analysis using different treatments of the water-air interface (level set, high-resolution and upwind) has been performed for the incompressible and compressible cases, and the results were compared to experimental measurements. (author)
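    The level set machinery described above can be sketched numerically: the interface is the zero set of a signed distance function phi, advanced by a first-order upwind (Godunov) discretization of the Hamilton-Jacobi equation phi_t + F|grad phi| = 0. This toy 2-D example with a constant normal speed F (not the premixing model) grows a circular interface at speed F.

```python
import numpy as np

def evolve(phi, F, dx, dt, steps):
    """First-order Godunov upwind update of phi_t + F*|grad phi| = 0
    for F > 0. The fluid interface is the zero level set of phi."""
    for _ in range(steps):
        dxm = (phi - np.roll(phi, 1, axis=0)) / dx   # backward differences
        dxp = (np.roll(phi, -1, axis=0) - phi) / dx  # forward differences
        dym = (phi - np.roll(phi, 1, axis=1)) / dx
        dyp = (np.roll(phi, -1, axis=1) - phi) / dx
        grad = np.sqrt(np.maximum(dxm, 0.0) ** 2 + np.minimum(dxp, 0.0) ** 2
                       + np.maximum(dym, 0.0) ** 2 + np.minimum(dyp, 0.0) ** 2)
        phi = phi - dt * F * grad
    return phi
```

    Starting from the signed distance to a circle of radius 1, the zero set after time t is approximately a circle of radius 1 + F*t; no explicit interface tracking is needed, which is the appeal of the formulation.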

  7. UpSet: Visualization of Intersecting Sets

    Science.gov (United States)

    Lex, Alexander; Gehlenborg, Nils; Strobelt, Hendrik; Vuillemot, Romain; Pfister, Hanspeter

    2016-01-01

    Understanding relationships between sets is an important analysis task that has received widespread attention in the visualization community. The major challenge in this context is the combinatorial explosion of the number of set intersections if the number of sets exceeds a trivial threshold. In this paper we introduce UpSet, a novel visualization technique for the quantitative analysis of sets, their intersections, and aggregates of intersections. UpSet is focused on creating task-driven aggregates, communicating the size and properties of aggregates and intersections, and a duality between the visualization of the elements in a dataset and their set membership. UpSet visualizes set intersections in a matrix layout and introduces aggregates based on groupings and queries. The matrix layout enables the effective representation of associated data, such as the number of elements in the aggregates and intersections, as well as additional summary statistics derived from subset or element attributes. Sorting according to various measures enables a task-driven analysis of relevant intersections and aggregates. The elements represented in the sets and their associated attributes are visualized in a separate view. Queries based on containment in specific intersections, aggregates or driven by attribute filters are propagated between both views. We also introduce several advanced visual encodings and interaction methods to overcome the problems of varying scales and to address scalability. UpSet is web-based and open source. We demonstrate its general utility in multiple use cases from various domains. PMID:26356912
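    The quantity shown in an UpSet matrix row is the number of elements with a given exact membership combination (an exclusive intersection). A minimal sketch of that computation (not the UpSet implementation, which also handles aggregates, queries and attribute views):

```python
from collections import Counter

def exclusive_intersections(named_sets):
    """Count elements by their exact membership combination -- the
    quantity an UpSet matrix row displays."""
    combos = Counter()
    for e in set().union(*named_sets.values()):
        key = frozenset(n for n, s in named_sets.items() if e in s)
        combos[key] += 1
    return combos
```

    Because the combinations are exclusive, the counts partition the universe: they sum to the size of the union, which is what keeps the matrix layout readable even when pairwise intersections overlap heavily.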

  8. A retrospective randomized study to compare the energy delivered using CDE with different techniques and OZil settings by different surgeons in phacoemulsification

    Directory of Open Access Journals (Sweden)

    Ming Chen

    2009-07-01

    Cumulative dissipated energy (CDE) was used with the Infiniti® Vision System (Alcon Labs) as an energy delivery guide to compare four different phaco techniques and phaco settings. The supracapsular phaco technique and burst mode are known for efficiency, and surgery is faster compared with the old phaco unit. In this study, we found that supracapsular phaco with burst mode had the least CDE in both cataract and nuclear sclerosis cataract with the new Infiniti® unit. We suggest that CDE can be used as one of the references for modifying technique and settings to improve outcomes for surgeons, especially new surgeons. Keywords: CDE (cumulative dissipated energy), cataract surgery, phacoemulsification, supracapsular, burst mode, Divide–Conquer

  9. Friction stir welding sets sail in China

    International Nuclear Information System (INIS)

    Luan, Guohong

    2007-01-01

    Today, Friction Stir Welding (FSW) has set sail in China. As the pioneer of FSW development in the China territory, the China FSW Centre has made outstanding achievements in FSW technique development, FSW engineering, FSW equipment and FSW products. But the real industrial applications of FSW in China are just beginning. With the planned national long-term development programmes and huge market requirements in the aerospace, aviation, shipbuilding, railway, power and energy industries, FSW will continue to develop rapidly in the next 10 years. FSW not only raises the level of joining techniques in Chinese industrial companies, but also increases the competitiveness of industrial products made in China.

  10. Individual and setting level predictors of the implementation of a skin cancer prevention program: a multilevel analysis

    Directory of Open Access Journals (Sweden)

    Brownson Ross C

    2010-05-01

    Background: To achieve widespread cancer control, a better understanding is needed of the factors that contribute to successful implementation of effective skin cancer prevention interventions. This study assessed the relative contributions of individual- and setting-level characteristics to implementation of a widely disseminated skin cancer prevention program. Methods: A multilevel analysis was conducted using data from the Pool Cool Diffusion Trial from 2004 and replicated with data from 2005. Implementation of Pool Cool by lifeguards was measured using a composite score (implementation variable, range 0 to 10) that assessed whether the lifeguard performed different components of the intervention. Predictors included lifeguard background characteristics, lifeguard sun protection-related attitudes and behaviors, pool characteristics, and enhanced (i.e., more technical assistance, tailored materials, and incentives are provided) versus basic treatment group. Results: The mean value of the implementation variable was 4 in both years (2004 and 2005; SD = 2 in 2004 and SD = 3 in 2005), indicating moderate implementation for most lifeguards. Several individual-level (lifeguard characteristics) and setting-level (pool characteristics and treatment group) factors were found to be significantly associated with implementation of Pool Cool by lifeguards. All three lifeguard-level domains (lifeguard background characteristics, lifeguard sun protection-related attitudes and behaviors) and six pool-level predictors (number of weekly pool visitors, intervention intensity, geographic latitude, pool location, sun safety and/or skin cancer prevention programs, and sun safety programs and policies) were included in the final model. The most important predictors of implementation were the number of weekly pool visitors (inverse association) and enhanced treatment group (positive association). That is, pools with fewer weekly visitors and pools in the enhanced treatment group showed higher implementation.

  11. A parameter tree approach to estimating system sensitivities to parameter sets

    International Nuclear Information System (INIS)

    Jarzemba, M.S.; Sagar, B.

    2000-01-01

    A post-processing technique for determining relative system sensitivity to groups of parameters and system components is presented. It is assumed that an appropriate parametric model is used to simulate system behavior using Monte Carlo techniques and that a set of realizations of system output(s) is available. The objective of our technique is to analyze the input vectors and the corresponding output vectors (that is, post-process the results) to estimate the relative sensitivity of the output to input parameters (taken singly and as a group) and thereby rank them. This technique differs from design-of-experiments techniques in that a partitioning of the parameter space is not required before the simulation. A tree structure (which looks similar to an event tree) is developed to better explain the technique. Each limb of the tree represents a particular combination of parameters or a combination of system components. For convenience, and to distinguish it from the event tree, we call it the parameter tree. To construct the parameter tree, the samples of input parameter values are treated as either a '+' or a '-' based on whether the sampled parameter value is greater than or less than a specified branching criterion (e.g., mean, median, percentile of the population). The corresponding system outputs are also segregated into similar bins. Partitioning the first parameter into a '+' or a '-' bin creates the first level of the tree, containing two branches. At the next level, realizations associated with each first-level branch are further partitioned into two bins using the branching criterion on the second parameter, and so on until the tree is fully populated. Relative sensitivities are then inferred from the number of samples associated with each branch of the tree. The parameter tree approach is illustrated by applying it to a number of preliminary simulations of the proposed high-level radioactive waste repository at Yucca Mountain, NV. Using a
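    The branching idea can be sketched as a post-processing routine: split each parameter at its median into '+'/'-' bins, score sensitivity by the gap in mean output between the bins, and populate a two-level tree over the two highest-scoring parameters. This is a toy sketch on synthetic Monte Carlo data; the mean-gap scoring rule is an illustrative simplification of inferring sensitivities from branch populations.

```python
import numpy as np

def parameter_tree(X, y, names):
    """Post-process Monte Carlo samples: '+' if a parameter is above its
    median, '-' otherwise; sensitivity score = gap between mean outputs
    of the '+' and '-' bins; then fill a two-level tree over the two
    highest-scoring parameters."""
    med = np.median(X, axis=0)
    scores = {}
    for j, name in enumerate(names):
        plus = X[:, j] > med[j]
        scores[name] = abs(y[plus].mean() - y[~plus].mean())
    top2 = sorted(names, key=lambda n: -scores[n])[:2]
    ja, jb = names.index(top2[0]), names.index(top2[1])
    tree = {}
    for sa in (True, False):
        for sb in (True, False):
            m = ((X[:, ja] > med[ja]) == sa) & ((X[:, jb] > med[jb]) == sb)
            tree[("+" if sa else "-") + ("+" if sb else "-")] = y[m].mean()
    return scores, tree
```

    No partitioning is chosen before the simulation: the same stored realizations can be re-binned with different branching criteria (mean, median, any percentile) at post-processing time.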

  12. Automatic segmentation of Leishmania parasite in microscopic images using a modified CV level set method

    Science.gov (United States)

    Farahi, Maria; Rabbani, Hossein; Talebi, Ardeshir; Sarrafzadeh, Omid; Ensafi, Shahab

    2015-12-01

    Visceral Leishmaniasis is a parasitic disease that affects the liver, spleen and bone marrow. According to the World Health Organization report, definitive diagnosis is possible only by direct observation of the Leishman body in the microscopic image taken from bone marrow samples. We utilize morphological operations and the CV level set method to segment Leishman bodies in digital color microscopic images captured from bone marrow samples. Linear contrast stretching is used for image enhancement, and a morphological method is applied to determine the parasite regions and remove unwanted objects. Modified global and local CV level set methods are proposed for segmentation, and a shape-based stopping factor is used to speed up the algorithm. Manual segmentation is considered the ground truth to evaluate the proposed method. The method was tested on 28 samples and achieved a mean segmentation error of 10.90% for the global model and 9.76% for the local model.

  13. Quasi-min-max Fuzzy MPC of UTSG Water Level Based on Off-Line Invariant Set

    Science.gov (United States)

    Liu, Xiangjie; Jiang, Di; Lee, Kwang Y.

    2015-10-01

    In a nuclear power plant, the water level of the U-tube steam generator (UTSG) must be maintained within a safe range. Traditional control methods encounter difficulties due to the complexity, strong nonlinearity and “swell and shrink” effects, especially at low power levels. A properly designed robust model predictive controller can solve this problem well. In this paper, a quasi-min-max fuzzy model predictive controller is developed for controlling the constrained UTSG system. Since the online computational burden could be quite large for real-time control, a bank of ellipsoidal invariant sets, together with the corresponding feedback control laws, is obtained off-line by solving linear matrix inequalities (LMIs). Based on the UTSG states, the online optimization is simplified to a constrained optimization problem with a bisection search for the corresponding ellipsoidal invariant set. Simulation results are given to show the effectiveness of the proposed controller.
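    The online step described above reduces to searching a precomputed bank of nested ellipsoids E_i = {x : x'P_i x <= 1} for the tightest set containing the current state. With the sets ordered from largest to smallest, containment is monotone in the index, so a bisection works; the scaled-identity bank in the example is a toy stand-in for LMI-derived matrices.

```python
import numpy as np

def smallest_invariant_set(x, P_bank):
    """P_bank defines nested ellipsoids E_i = {x : x' P_i x <= 1},
    ordered from largest set to smallest. Bisection for the tightest
    E_i containing x; returns None if x lies outside all of them."""
    lo, hi, best = 0, len(P_bank) - 1, None
    while lo <= hi:
        mid = (lo + hi) // 2
        if x @ P_bank[mid] @ x <= 1.0:
            best = mid          # contained; try a tighter set
            lo = mid + 1
        else:
            hi = mid - 1        # outside; try a larger set
    return best
```

    The controller then applies the feedback law paired off-line with the selected ellipsoid, so the online cost is a handful of quadratic-form evaluations rather than a full optimization.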

  14. On the Relationship between Variational Level Set-Based and SOM-Based Active Contours

    Science.gov (United States)

    Abdelsamea, Mohammed M.; Gnecco, Giorgio; Gaber, Mohamed Medhat; Elyan, Eyad

    2015-01-01

    Most Active Contour Models (ACMs) treat the image segmentation problem as a functional optimization problem, as they divide an image into several regions by optimizing a suitable functional. Among ACMs, variational level set methods have been used to build an active contour with the aim of modeling arbitrarily complex shapes; moreover, they can also handle topological changes of the contours. Self-Organizing Maps (SOMs) have attracted the attention of many computer vision scientists, particularly for modeling an active contour based on the idea of utilizing the prototypes (weights) of a SOM to control the evolution of the contour. SOM-based models have been proposed in general with the aim of exploiting the specific ability of SOMs to learn the edge-map information via their topology preservation property, and of overcoming some drawbacks of other ACMs, such as getting trapped in local minima of the image energy functional to be minimized in such models. In this survey, we illustrate the main concepts of variational level set-based ACMs and SOM-based ACMs, as well as their relationship, and review in a comprehensive fashion the development of their state-of-the-art models from a machine learning perspective, with a focus on their strengths and weaknesses. PMID:25960736

  15. Adaptable Value-Set Analysis for Low-Level Code

    OpenAIRE

    Brauer, Jörg; Hansen, René Rydhof; Kowalewski, Stefan; Larsen, Kim G.; Olesen, Mads Chr.

    2012-01-01

    This paper presents a framework for binary code analysis that uses only SAT-based algorithms. Within the framework, incremental SAT solving is used to perform a form of weakly relational value-set analysis in a novel way, connecting the expressiveness of the value sets to computational complexity. Another key feature of our framework is that it translates the semantics of binary code into an intermediate representation. This allows for a straightforward translation of the program semantics in...

  16. What is the perceived impact of Alexander technique lessons on health status, costs and pain management in the real life setting of an English hospital? The results of a mixed methods evaluation of an Alexander technique service for those with chronic back pain.

    Science.gov (United States)

    McClean, Stuart; Brilleman, Sam; Wye, Lesley

    2015-07-28

    Randomised controlled trial evidence indicates that the Alexander Technique is clinically effective and cost-effective for chronic back pain. The aim of this mixed methods evaluation was to explore the role and perceived impact of Alexander Technique lessons in the naturalistic setting of an acute hospital Pain Management Clinic in England. To capture changes in health status and resource use, 43 service users were administered three widely used questionnaires (Brief Pain Inventory, MYMOP and Client Service Resource Inventory) at three time points: baseline, six weeks and three months after baseline. We also carried out 27 telephone interviews with service users and seven face-to-face interviews with pain clinic staff and Alexander Technique teachers. Quantitative data were analysed using descriptive statistics and qualitative data were analysed thematically. Those taking Alexander Technique lessons reported small improvements in health outcomes, and condition-related costs fell. However, due to the non-randomised, uncontrolled nature of the study design, these changes cannot be attributed to the Alexander Technique lessons. Service users stated that their relationship to pain and pain management had changed, especially those who were more committed to practising the techniques regularly. These changes may explain the reported reduction in pain-related service use and the corresponding lower associated costs. Alexander Technique lessons may be used as another approach to pain management. The findings suggest that Alexander Technique lessons can help improve self-efficacy for those who are sufficiently motivated, which in turn may have an impact on service utilisation levels.

  17. Differential geometry techniques for sets of nonlinear partial differential equations

    Science.gov (United States)

    Estabrook, Frank B.

    1990-01-01

    An attempt is made to show that the Cartan theory of partial differential equations can be a useful technique for applied mathematics. Techniques for finding consistent subfamilies of solutions that are generically rich and well-posed and for introducing potentials or other usefully consistent auxiliary fields are introduced. An extended sample calculation involving the Korteweg-de Vries equation is given.

  18. Application of a set of complementary techniques to understand how varying the proportion of two wastes affects humic acids produced by vermicomposting

    Energy Technology Data Exchange (ETDEWEB)

    Fernández-Gómez, Manuel J., E-mail: manuelj.fernandez@eez.csic.es [Estación Experimental del Zaidín, Consejo Superior de Investigaciones Científicas, Profesor Albareda 1, 18008 Granada (Spain); Nogales, Rogelio [Estación Experimental del Zaidín, Consejo Superior de Investigaciones Científicas, Profesor Albareda 1, 18008 Granada (Spain); Plante, Alain [Department of Earth and Environmental Science, University of Pennsylvania, Hayden Hall, 240 S. 33rd Street, Philadelphia, PA 19104 (United States); Plaza, César [Instituto de Ciencias Agrarias, Consejo Superior de Investigaciones Científicas, Serrano 115, 28006 Madrid (Spain); Fernández, José M. [Department of Earth and Environmental Science, University of Pennsylvania, Hayden Hall, 240 S. 33rd Street, Philadelphia, PA 19104 (United States); Instituto de Ciencias Agrarias, Consejo Superior de Investigaciones Científicas, Serrano 115, 28006 Madrid (Spain)

    2015-01-15

    Highlights: • A set of techniques was used to characterize the humic acid content of vermicomposts. • The properties of the humic acids produced from different waste mixtures were similar. • This set of techniques allowed distinguishing the humic acids of each vermicompost. • Increasing humic acid contents in initial mixtures would produce richer vermicomposts. - Abstract: A better understanding of how varying the proportion of different organic wastes affects humic acid (HA) formation during vermicomposting would be useful in producing vermicomposts enriched in HAs. With the aim of improving the knowledge about this issue, a variety of analytical techniques [UV–visible spectroscopy, Fourier transform infrared spectroscopy, fluorescence spectroscopy, solid-state cross-polarization magic-angle spinning (CPMAS) 13C nuclear magnetic resonance (NMR) spectroscopy, and thermal analysis] was used in the present study to characterize HAs isolated from two mixtures at two different ratios (2:1 and 1:1) of tomato-plant debris (TD) and paper-mill sludge (PS) before and after vermicomposting. The results suggest that vermicomposting increased the HA content in the TD/PS 2:1 and 1:1 mixtures (15.9% and 16.2%, respectively), but the vermicompost produced from the mixture with a higher amount of TD had a greater proportion (24%) of HAs. Both vermicomposting processes caused equal modifications in the humic precursors contained in the different mixtures of TD and PS, and consequently, the HAs in the vermicomposts produced from different waste mixtures exhibited analogous characteristics. Only the set of analytical techniques used in this research was able to detect differences between the HAs isolated from each type of vermicompost. In conclusion, varying the proportion of different wastes may have a stronger influence on the amount of HAs in vermicomposts than on the properties of HAs.

  19. Application of a set of complementary techniques to understand how varying the proportion of two wastes affects humic acids produced by vermicomposting

    International Nuclear Information System (INIS)

    Fernández-Gómez, Manuel J.; Nogales, Rogelio; Plante, Alain; Plaza, César; Fernández, José M.

    2015-01-01

    Highlights: • A set of techniques was used to characterize the humic acid content of vermicomposts. • The properties of the humic acids produced from different waste mixtures were similar. • This set of techniques allowed distinguishing the humic acids of each vermicompost. • Increasing humic acid contents in initial mixtures would produce richer vermicomposts. - Abstract: A better understanding of how varying the proportion of different organic wastes affects humic acid (HA) formation during vermicomposting would be useful in producing vermicomposts enriched in HAs. With the aim of improving the knowledge about this issue, a variety of analytical techniques [UV–visible spectroscopy, Fourier transform infrared spectroscopy, fluorescence spectroscopy, solid-state cross-polarization magic-angle spinning (CPMAS) 13C nuclear magnetic resonance (NMR) spectroscopy, and thermal analysis] was used in the present study to characterize HAs isolated from two mixtures at two different ratios (2:1 and 1:1) of tomato-plant debris (TD) and paper-mill sludge (PS) before and after vermicomposting. The results suggest that vermicomposting increased the HA content in the TD/PS 2:1 and 1:1 mixtures (15.9% and 16.2%, respectively), but the vermicompost produced from the mixture with a higher amount of TD had a greater proportion (24%) of HAs. Both vermicomposting processes caused equal modifications in the humic precursors contained in the different mixtures of TD and PS, and consequently, the HAs in the vermicomposts produced from different waste mixtures exhibited analogous characteristics. Only the set of analytical techniques used in this research was able to detect differences between the HAs isolated from each type of vermicompost. In conclusion, varying the proportion of different wastes may have a stronger influence on the amount of HAs in vermicomposts than on the properties of HAs.

  20. Fast Streaming 3D Level set Segmentation on the GPU for Smooth Multi-phase Segmentation

    DEFF Research Database (Denmark)

    Sharma, Ojaswa; Zhang, Qin; Anton, François

    2011-01-01

    Level set method based segmentation provides an efficient tool for topological and geometrical shape handling, but it is slow due to high computational burden. In this work, we provide a framework for streaming computations on large volumetric images on the GPU. A streaming computational model...

  1. Predicting groundwater level fluctuations with meteorological effect implications—A comparative study among soft computing techniques

    Science.gov (United States)

    Shiri, Jalal; Kisi, Ozgur; Yoon, Heesung; Lee, Kang-Kun; Hossein Nazemi, Amir

    2013-07-01

    The knowledge of groundwater table fluctuations is important in agricultural lands as well as in studies related to groundwater utilization and management. This paper investigates the abilities of Gene Expression Programming (GEP), Adaptive Neuro-Fuzzy Inference System (ANFIS), Artificial Neural Networks (ANN) and Support Vector Machine (SVM) techniques for groundwater level forecasting at prediction intervals from one day up to seven days ahead. Several input combinations comprising water table level, rainfall and evapotranspiration values from Hongcheon Well station (South Korea), covering a period of eight years (2001-2008), were used to develop and test the applied models. The data from the first six years were used for developing (training) the applied models and the last two years of data were reserved for testing. A comparison was also made between the forecasts provided by these models and the Auto-Regressive Moving Average (ARMA) technique. Based on the comparisons, it was found that the GEP models could be employed successfully in forecasting water table level fluctuations up to 7 days beyond data records.
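
    As a minimal illustration of this forecasting setup (a linear autoregressive baseline, the simplest relative of the ARMA comparison model, not the GEP/ANFIS/ANN/SVM models themselves), the sketch below builds lagged water-table inputs and fits a 1-day-ahead predictor by least squares. The series is synthetic; the station data and lag choices are assumptions.

```python
import numpy as np

# Build lagged inputs [level(t-1), level(t-2)] and fit a 1-day-ahead linear
# AR predictor by least squares on a synthetic water-table series generated
# with known coefficients (0.7, 0.2).
rng = np.random.default_rng(0)
n, lags = 2000, 2
noise = 0.1 * rng.standard_normal(n)
level = np.zeros(n)
for t in range(lags, n):
    level[t] = 0.7 * level[t - 1] + 0.2 * level[t - 2] + noise[t]

X = np.column_stack([level[1:-1], level[0:-2]])  # inputs: level(t-1), level(t-2)
y = level[2:]                                     # target: level(t), one day ahead
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
pred = X @ coef                                   # in-sample 1-day-ahead forecasts
```

    Multi-day horizons (up to 7 days in the study) follow the same pattern with the target shifted further ahead or with forecasts fed back recursively.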

  2. Study of indoor radon levels in some radioactive areas of Himachal Pradesh: an inter-comparison of active and passive techniques

    International Nuclear Information System (INIS)

    Bajwa, B.S.; Singh, S.; Sharma, N.; Virk, H.S.

    2006-01-01

    Full text of publication follows: Indoor radon level measurements were carried out using both active and passive techniques in the dwellings of some villages known to be located in the vicinity of uranium mineralized zones of Hamirpur district, Himachal Pradesh. In the passive technique, using Solid State Nuclear Track Detectors (S.S.N.T.D.), both the bare-slide and twin-chamber dosemeter cup modes were utilized. An attempt has also been made to assess the levels of indoor radon in these dwellings and the inhalation dose rates of the population living in these villages. The average value of radon concentration levels using the bare-slide mode varies from 109.0 to 741.5 Bq/m3 in these dwellings, whereas the maximum radon level using the twin cup dosemeter technique was found to be 140.3 Bq/m3. The radon concentrations were found to vary with seasonal changes, building materials, etc. The radon survey in the dwellings of these villages has also been carried out using the Alpha-Guard technique, which is based on the pulse ionization chamber. The indoor radon concentration levels measured using the active Alpha-Guard technique have been found to be quite different from those measured in these dwellings by the passive S.S.N.T.D. technique, indicating the importance of the S.S.N.T.D. in the long-term integrated measurement of indoor radon levels in dwellings. (authors)

  4. HPC in Basin Modeling: Simulating Mechanical Compaction through Vertical Effective Stress using Level Sets

    Science.gov (United States)

    McGovern, S.; Kollet, S. J.; Buerger, C. M.; Schwede, R. L.; Podlaha, O. G.

    2017-12-01

    In the context of sedimentary basins, we present a model for the simulation of the movement of a geological formation (layers) during the evolution of the basin through sedimentation and compaction processes. Assuming a single-phase saturated porous medium for the sedimentary layers, the model focuses on the tracking of the layer interfaces, through the use of the level set method, as sedimentation drives fluid flow and reduction of pore space by compaction. On the assumption of Terzaghi's effective stress concept, the coupling of the pore fluid pressure to the motion of interfaces in 1-D is presented in McGovern, et al. (2017) [1]. The current work extends the spatial domain to 3-D, though we maintain the assumption of vertical effective stress to drive the compaction. The idealized geological evolution is conceptualized as the motion of interfaces between rock layers, whose paths are determined by the magnitude of a speed function in the direction normal to the evolving layer interface. The speeds normal to the interface are dependent on the change in porosity, determined through an effective stress-based compaction law, such as the exponential Athy's law. Provided with the speeds normal to the interface, the level set method uses an advection equation to evolve a potential function, whose zero level set defines the interface. Thus, the moving layer geometry influences the pore pressure distribution, which couples back to the interface speeds. The flexible construction of the speed function allows extension, in the future, to other terms to represent different physical processes, analogous to how the compaction rule represents material deformation. The 3-D model is implemented using the generic finite element method framework deal.II, which provides tools, building on p4est and interfacing to PETSc, for the massively parallel distributed solution of the model equations [2]. Experiments are being run on the Juelich Supercomputing Center's Jureca cluster. [1] McGovern, et al. (2017

  5. Differences in sampling techniques on total post-mortem tryptase.

    Science.gov (United States)

    Tse, R; Garland, J; Kesha, K; Elstub, H; Cala, A D; Ahn, Y; Stables, S; Palmiere, C

    2017-11-20

    The measurement of mast cell tryptase is commonly used to support the diagnosis of anaphylaxis. In the post-mortem setting, the literature recommends sampling from peripheral blood sources (femoral blood) but does not specify the exact sampling technique. Sampling techniques vary between pathologists, and it is unclear whether different sampling techniques have any impact on post-mortem tryptase levels. The aim of this study is to compare the difference in femoral total post-mortem tryptase levels between two sampling techniques. A 6-month retrospective study comparing femoral total post-mortem tryptase levels between (1) aspirating femoral vessels with a needle and syringe prior to evisceration and (2) femoral vein cut-down during evisceration. Twenty cases were identified, with three cases excluded from analysis. There was a statistically significant difference in femoral total post-mortem tryptase levels between the two sampling methods (paired t test). The clinical significance of this finding, and what factors may contribute to it, are unclear. When requesting post-mortem tryptase, the pathologist should consider documenting the exact blood collection site and the method used for collection. In addition, blood samples acquired by different techniques should not be mixed together and should be analyzed separately if possible.
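
    The paired t test used for this kind of same-case comparison can be sketched as follows; the tryptase values below are fabricated for illustration only, not the study's data.

```python
import numpy as np

# Paired t test sketch: two measurements per case (one per sampling
# technique), test whether the mean within-case difference is nonzero.
# All values are fabricated (ug/L).
needle = np.array([4.1, 5.0, 3.2, 6.5, 4.8, 5.5, 3.9, 4.4])   # aspiration
cutdown = np.array([5.3, 6.1, 4.0, 7.9, 5.2, 6.8, 4.6, 5.7])  # vein cut-down

d = cutdown - needle
t_stat = d.mean() / (d.std(ddof=1) / np.sqrt(len(d)))  # df = len(d) - 1
```

    Pairing matters here: each case serves as its own control, so between-case variation in tryptase does not inflate the error term.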

  6. Nonlinear canonical correlation analysis with k sets of variables

    NARCIS (Netherlands)

    van der Burg, Eeke; de Leeuw, Jan

    1987-01-01

    The multivariate technique OVERALS is introduced as a non-linear generalization of canonical correlation analysis (CCA). First, two sets CCA is introduced. Two sets CCA is a technique that computes linear combinations of sets of variables that correlate in an optimal way. Two sets CCA is then

  7. Classification of Normal and Apoptotic Cells from Fluorescence Microscopy Images Using Generalized Polynomial Chaos and Level Set Function.

    Science.gov (United States)

    Du, Yuncheng; Budman, Hector M; Duever, Thomas A

    2016-06-01

    Accurate automated quantitative analysis of living cells based on fluorescence microscopy images can be very useful for fast evaluation of experimental outcomes and cell culture protocols. In this work, an algorithm is developed for fast differentiation of normal and apoptotic viable Chinese hamster ovary (CHO) cells. For effective segmentation of cell images, a stochastic segmentation algorithm is developed by combining a generalized polynomial chaos expansion with a level set function-based segmentation algorithm. This approach provides a probabilistic description of the segmented cellular regions along the boundary, from which it is possible to calculate morphological changes related to apoptosis, i.e., the curvature and length of a cell's boundary. These features are then used as inputs to a support vector machine (SVM) classifier that is trained to distinguish between normal and apoptotic viable states of CHO cell images. The use of morphological features obtained from the stochastic level set segmentation of cell images in combination with the trained SVM classifier is more efficient in terms of differentiation accuracy as compared with the original deterministic level set method.

  8. Home advantage in high-level volleyball varies according to set number.

    Science.gov (United States)

    Marcelino, Rui; Mesquita, Isabel; Palao Andrés, José Manuel; Sampaio, Jaime

    2009-01-01

    The aim of the present study was to identify the probability of winning each volleyball set according to game location (home, away). Archival data were obtained from 275 sets in the 2005 Men's Senior World League, and 65,949 actions were analysed. Set result (win, loss), game location (home, away), set number (first, second, third, fourth and fifth) and performance indicators (serve, reception, set, attack, dig and block) were the variables considered in this study. First, performance indicators were used in a logistic model of set result, by binary logistic regression analysis. After finding the adjusted logistic model, the log-odds of winning the set were analysed according to game location and set number. The results showed that winning a set is significantly related to performance indicators (Chi-square(18) = 660.97, p < 0.001). Home teams had an advantage at the beginning of the game (first set) and in the two last sets of the game (fourth and fifth sets), probably due to familiarity with the facilities and crowd effects. Different game actions explain these advantages and showed that to win the first set it is more important to take risk, through a better performance in the attack and block, and to win the final set it is important to manage the risk through a better performance on the reception. These results suggest intra-game variation in home advantage and can be most useful to better prepare and direct the competition. Key points: Home teams always have a higher probability of winning the game than away teams. Home teams have higher performance in reception, set and attack over the total of the sets. The advantage of home teams is more pronounced at the beginning of the game (first set) and in the two last sets of the game (fourth and fifth sets), suggesting intra-game variation in home advantage. Analysis by sets showed that home teams have a better performance in the attack and block in the first set and in the reception in the third and fifth sets.
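
    The binary logistic model at the core of this analysis can be sketched as below: indicator scores predict the log-odds of winning a set. The data are synthetic stand-ins for the performance indicators, and fitting is plain gradient ascent rather than the statistical package used in the study.

```python
import numpy as np

# Logistic regression sketch: simulate set outcomes from three hypothetical
# indicator scores with known weights, then recover the weights by gradient
# ascent on the log-likelihood. The fitted X @ w are log-odds of winning.
rng = np.random.default_rng(2)
n = 1000
X = rng.standard_normal((n, 3))              # e.g. attack, block, reception
true_w = np.array([1.5, 0.8, -0.5])
p_win = 1.0 / (1.0 + np.exp(-(X @ true_w)))
y = (rng.random(n) < p_win).astype(float)    # set won (1) or lost (0)

w = np.zeros(3)
for _ in range(2000):                        # gradient ascent steps
    w += 0.5 * X.T @ (y - 1.0 / (1.0 + np.exp(-(X @ w)))) / n

log_odds = X @ w                             # fitted log-odds per set
```

    In the study, the log-odds from the adjusted model are then compared across game location and set number, which is where the home-advantage pattern emerges.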

  9. Setting priorities for ambient air quality objectives

    International Nuclear Information System (INIS)

    2004-10-01

    Alberta has ambient air quality objectives in place for several pollutants, toxic substances and other air quality parameters. A process is in place to determine if additional air quality objectives are required or if existing objectives should be changed. In order to identify the highest priority substances that may require an ambient air quality objective to protect ecosystems and public health, a rigorous, transparent and cost effective priority setting methodology is required. This study reviewed, analyzed and assessed successful priority setting techniques used by other jurisdictions. It proposed an approach for setting ambient air quality objective priorities that integrates the concerns of stakeholders with Alberta Environment requirements. A literature and expert review were used to examine existing priority-setting techniques used by other jurisdictions. An analysis process was developed to identify the strengths and weaknesses of various techniques and their ability to take into account the complete pathway between chemical emissions and damage to human health or the environment. The key strengths and weaknesses of each technique were identified. Based on the analysis, the most promising technique was the tool for the reduction and assessment of chemical and other environmental impacts (TRACI). Several considerations for using TRACI to help set priorities for ambient air quality objectives were also presented. 26 refs, 8 tabs., 4 appendices

  10. Multi-domain, higher order level set scheme for 3D image segmentation on the GPU

    DEFF Research Database (Denmark)

    Sharma, Ojaswa; Zhang, Qin; Anton, François

    2010-01-01

    to evaluate level set surfaces that are $C^2$ continuous, but are slow due to high computational burden. In this paper, we provide a higher order GPU based solver for fast and efficient segmentation of large volumetric images. We also extend the higher order method to multi-domain segmentation. Our streaming...

  11. Engagement techniques and playing level impact the biomechanical demands on rugby forwards during machine-based scrummaging.

    Science.gov (United States)

    Preatoni, Ezio; Stokes, Keith A; England, Michael E; Trewartha, Grant

    2015-04-01

    This cross-sectional study investigated the factors that may influence the physical loading on rugby forwards performing a scrum by studying the biomechanics of machine-based scrummaging under different engagement techniques and playing levels. 34 forward packs from six playing levels performed repetitions of five different types of engagement techniques against an instrumented scrum machine under realistic training conditions. Applied forces and body movements were recorded in three orthogonal directions. The modification of the engagement technique altered the load acting on players. These changes were in a similar direction and of similar magnitude irrespective of the playing level. Reducing the dynamics of the initial engagement through a fold-in procedure decreased the peak compression force, the peak downward force and the engagement speed in excess of 30%. For example, peak compression (horizontal) forces in the professional teams changed from 16.5 (baseline technique) to 8.6 kN (fold-in procedure). The fold-in technique also reduced the occurrence of combined high forces and head-trunk misalignment during the absorption of the impact, which was used as a measure of potential hazard, by more than 30%. Reducing the initial impact did not decrease the ability of the teams to produce sustained compression forces. De-emphasising the initial impact against the scrum machine decreased the mechanical stresses acting on forward players and may benefit players' welfare by reducing the hazard factors that may induce chronic degeneration of the spine. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.

  12. Towards a Framework for Change Detection in Data Sets

    Science.gov (United States)

    Böttcher, Mirko; Nauck, Detlef; Ruta, Dymitr; Spott, Martin

    Since the world with its markets, innovations and customers is changing faster than ever before, the key to survival for businesses is the ability to detect, assess and respond to changing conditions rapidly and intelligently. Discovering changes and reacting to or acting upon them before others do has therefore become a strategic issue for many companies. However, existing data analysis techniques are insufficient for this task since they typically assume that the domain under consideration is stable over time. This paper presents a framework that detects changes within a data set at virtually any level of granularity. The underlying idea is to derive a rule-based description of the data set at different points in time and to subsequently analyse how these rules change. Nevertheless, further techniques are required to assist the data analyst in interpreting and assessing their changes. Therefore the framework also contains methods to discard rules that are non-drivers for change and to assess the interestingness of detected changes.
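
    The core idea, comparing rule statistics across snapshots and discarding stable rules, can be sketched in a few lines. The rule names and the (support, confidence) pairs below are fabricated for illustration, not the framework's actual representation.

```python
# Compare rule-based descriptions of a data set at two points in time and
# flag rules whose support or confidence shifted by more than a threshold;
# stable rules (non-drivers of change) are discarded.
rules_t1 = {"age<30 => churn": (0.20, 0.65), "region=N => upsell": (0.10, 0.40)}
rules_t2 = {"age<30 => churn": (0.21, 0.66), "region=N => upsell": (0.10, 0.55)}

def changed_rules(old, new, tol=0.05):
    """Return rules present at both times whose (support, confidence)
    moved by more than tol in either component."""
    flagged = []
    for rule in sorted(old.keys() & new.keys()):
        (s1, c1), (s2, c2) = old[rule], new[rule]
        if abs(s2 - s1) > tol or abs(c2 - c1) > tol:
            flagged.append(rule)
    return flagged

changed = changed_rules(rules_t1, rules_t2)  # -> ["region=N => upsell"]
```

    A full implementation would also rank the flagged rules by an interestingness measure, as the paper describes.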

  13. The effects of breathing techniques training on the duration of labor and anxiety levels of pregnant women.

    Science.gov (United States)

    Cicek, Sevil; Basar, Fatma

    2017-11-01

    To assess the effects of breathing techniques training on the anxiety levels of pregnant women and the duration of labor. The study utilizes a randomized controlled trial design. The pregnant women were randomly divided into a control group (n = 35) and an experimental group (n = 35). The experimental group received breathing techniques training in the latent phase, and these techniques were applied in the following phases accordingly. The anxiety levels of the pregnant women were evaluated three times in total. The duration of labor was considered in terms of the duration of the first stage of labor and the duration of the second stage of labor. There were significant differences between the two groups regarding the mean State Anxiety Inventory (SAI) scores and the mean duration of labor. This study concludes that breathing techniques are an effective method for the reduction of anxiety during labor and influence the duration of delivery. Copyright © 2017 Elsevier Ltd. All rights reserved.

  14. A high-level power model for MPSoC on FPGA

    NARCIS (Netherlands)

    Piscitelli, R.; Pimentel, A.D.

    2011-01-01

    This paper presents a framework for high-level power estimation of multiprocessor systems-on-chip (MPSoC) architectures on FPGA. The technique is based on abstract execution profiles, called event signatures, and it operates at a higher level of abstraction than, e.g., commonly-used instruction-set

  15. Feasibility of megavoltage portal CT using an electronic portal imaging device (EPID) and a multi-level scheme algebraic reconstruction technique (MLS-ART)

    International Nuclear Information System (INIS)

    Guan, Huaiqun; Zhu, Yunping

    1998-01-01

    Although electronic portal imaging devices (EPIDs) are efficient tools for radiation therapy verification, they only provide images of overlapped anatomic structures. We investigated using a fluorescent screen/CCD-based EPID, coupled with a novel multi-level scheme algebraic reconstruction technique (MLS-ART), in a feasibility study of portal computed tomography (CT) reconstruction. The CT images might be useful for radiation treatment planning and verification. We used an EPID, set it to operate in its linear dynamic range, and collimated 6 MV photons from a linear accelerator to a slit beam 1 cm wide and 25 cm long. We performed scans under a total of ∼200 monitor units (MUs) for several phantoms, in which we varied the number of projections and MUs per projection. The reconstructed images demonstrated that using the new MLS-ART technique, megavoltage portal CT with a total of 200 MUs can achieve a contrast detectability of ∼2.5% (object size 5 mm × 5 mm) and a spatial resolution of 2.5 mm. (author)
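
    The algebraic reconstruction technique underlying MLS-ART can be sketched with the classical ART (Kaczmarz) sweep: cycle through the projection equations a_i · x = b_i and project the current image estimate onto each hyperplane in turn. The multi-level projection ordering of the actual MLS-ART is omitted here, and the 3-ray, 3-pixel system is a toy example.

```python
import numpy as np

# Plain ART/Kaczmarz sweep on a toy, consistent projection system:
# each update projects the estimate x onto the hyperplane a_i . x = b_i.
A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0],
              [1.0, 1.0, 0.0]])    # toy projection matrix (3 rays, 3 pixels)
true_image = np.array([2.0, 3.0, 4.0])
b = A @ true_image                 # consistent "measurements"

x = np.zeros(3)
lam = 1.0                          # relaxation factor
for _ in range(200):               # full sweeps over all rays
    for a_i, b_i in zip(A, b):
        x += lam * (b_i - a_i @ x) / (a_i @ a_i) * a_i
```

    For consistent systems the sweep converges to the true image; the multi-level ordering in MLS-ART mainly changes which projections are visited when, to speed up convergence for limited-MU portal data.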

  16. A Tough Call : Mitigating Advanced Code-Reuse Attacks at the Binary Level

    NARCIS (Netherlands)

    Veen, Victor Van Der; Goktas, Enes; Contag, Moritz; Pawoloski, Andre; Chen, Xi; Rawat, Sanjay; Bos, Herbert; Holz, Thorsten; Athanasopoulos, Ilias; Giuffrida, Cristiano

    2016-01-01

    Current binary-level Control-Flow Integrity (CFI) techniques are weak in determining the set of valid targets for indirect control flow transfers on the forward edge. In particular, the lack of source code forces existing techniques to resort to a conservative address-taken policy that

  17. Application of Spectral Analysis Techniques in the Intercomparison of Aerosol Data: 1. an EOF Approach to the Spatial-Temporal Variability of Aerosol Optical Depth Using Multiple Remote Sensing Data Sets

    Science.gov (United States)

    Li, Jing; Carlson, Barbara E.; Lacis, Andrew A.

    2013-01-01

    Many remote sensing techniques and passive sensors have been developed to measure global aerosol properties. While instantaneous comparisons between pixel-level data often reveal quantitative differences, here we use Empirical Orthogonal Function (EOF) analysis, also known as Principal Component Analysis, to demonstrate that satellite-derived aerosol optical depth (AOD) data sets exhibit essentially the same spatial and temporal variability and are thus suitable for large-scale studies. Analysis results show that the first four EOF modes of AOD account for the bulk of the variance and agree well across the four data sets used in this study (i.e., Aqua MODIS, Terra MODIS, MISR, and SeaWiFS). Only SeaWiFS data over land have slightly different EOF patterns. Globally, the first two EOF modes show annual cycles and are mainly related to Sahara dust in the northern hemisphere and biomass burning in the southern hemisphere, respectively. After removing the mean seasonal cycle from the data, major aerosol sources, including biomass burning in South America and dust in West Africa, are revealed in the dominant modes due to the different interannual variability of aerosol emissions. The enhancement of biomass burning associated with El Niño over Indonesia and central South America is also captured with the EOF technique.
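
    The EOF decomposition applied here can be sketched as an SVD of a time-by-space anomaly matrix. The field below is synthetic, a single dominant seasonal mode over a fixed spatial pattern plus noise, standing in for a gridded AOD data set; the sizes and amplitudes are arbitrary assumptions.

```python
import numpy as np

# EOF (principal component) sketch: remove the time mean per grid cell,
# take the SVD of the anomaly matrix, and read off spatial modes (rows of
# Vt), principal components (columns of U*s) and variance fractions.
rng = np.random.default_rng(1)
months = np.arange(120)                          # ten years of monthly data
pattern = np.sin(np.linspace(0.0, np.pi, 50))    # fixed spatial pattern
field = np.outer(np.sin(2 * np.pi * months / 12), pattern)
field = field + 0.05 * rng.standard_normal(field.shape)

anom = field - field.mean(axis=0)                # anomalies per grid cell
U, s, Vt = np.linalg.svd(anom, full_matrices=False)
var_frac = s ** 2 / np.sum(s ** 2)               # variance explained per mode
eof1, pc1 = Vt[0], U[:, 0] * s[0]                # leading mode and its PC
```

    Comparing data sets then amounts to comparing the leading EOF patterns and PC time series of each, which is robust to pixel-level quantitative offsets; removing the mean seasonal cycle before the SVD exposes the interannual modes instead.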

  18. Automatic optimal filament segmentation with sub-pixel accuracy using generalized linear models and B-spline level-sets.

    Science.gov (United States)

    Xiao, Xun; Geyer, Veikko F; Bowne-Anderson, Hugo; Howard, Jonathon; Sbalzarini, Ivo F

    2016-08-01

    Biological filaments, such as actin filaments, microtubules, and cilia, are often imaged using different light-microscopy techniques. Reconstructing the filament curve from the acquired images constitutes the filament segmentation problem. Since filaments have lower dimensionality than the image itself, there is an inherent trade-off between tracing the filament with sub-pixel accuracy and avoiding noise artifacts. Here, we present a globally optimal filament segmentation method based on B-spline vector level-sets and a generalized linear model for the pixel intensity statistics. We show that the resulting optimization problem is convex and can hence be solved with global optimality. We introduce a simple and efficient algorithm to compute such optimal filament segmentations, and provide an open-source implementation as an ImageJ/Fiji plugin. We further derive an information-theoretic lower bound on the filament segmentation error, quantifying how well an algorithm could possibly do given the information in the image. We show that our algorithm asymptotically reaches this bound in the spline coefficients. We validate our method in comprehensive benchmarks, compare with other methods, and show applications from fluorescence, phase-contrast, and dark-field microscopy. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.
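A minimal sketch of the curve representation involved, assuming uniform cubic B-splines: the filament centerline is generated from a handful of control points, which is what makes sub-pixel optimization over a low-dimensional coefficient vector possible. The control points below are invented:

```python
import numpy as np

def cubic_bspline(ctrl, n_samples=100):
    """Evaluate an open uniform cubic B-spline curve from its control points."""
    ctrl = np.asarray(ctrl, dtype=float)
    m = len(ctrl)                        # needs m >= 4 control points
    pts = []
    for u in np.linspace(0, m - 3, n_samples, endpoint=False):
        i, t = int(u), u - int(u)        # active segment and local parameter
        b = np.array([(1 - t) ** 3,      # uniform cubic B-spline basis
                      3 * t ** 3 - 6 * t ** 2 + 4,
                      -3 * t ** 3 + 3 * t ** 2 + 3 * t + 1,
                      t ** 3]) / 6.0
        pts.append(b @ ctrl[i:i + 4])    # blend the 4 active control points
    return np.array(pts)

# Five invented control points describing a wiggly "filament" in 2-D
curve = cubic_bspline([[0, 0], [1, 2], [2, -1], [3, 1], [4, 0]])
print(curve.shape)                       # (100, 2): a smooth sub-pixel curve
```

The paper's optimization adjusts such spline coefficients against the image data; this sketch only shows how few parameters describe the whole curve.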

  19. Application of positron annihilation lifetime technique to the study of deep level transients in semiconductors

    Science.gov (United States)

    Deng, A. H.; Shan, Y. Y.; Fung, S.; Beling, C. D.

    2002-03-01

Unlike its conventional applications in lattice defect characterization, the positron annihilation lifetime technique was applied to study temperature-dependent deep level transients in semiconductors. Defect levels in the band gap can be determined, just as they are in conventional deep level transient spectroscopy (DLTS) studies. The promising advantage of this application of positron annihilation over conventional DLTS is that it can further extract microstructural information about deep-level defects, such as whether a deep level defect is vacancy related or not. A demonstration study of the EL2 defect level transient in GaAs was presented, and an EL2 level of 0.82±0.02 eV was obtained by a standard Arrhenius analysis, similar to that in conventional DLTS studies.
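The quoted 0.82 eV value comes from a standard Arrhenius analysis of emission rates; a minimal sketch with synthetic, noise-free data (the prefactor is invented):

```python
import numpy as np

# Hypothetical sketch of the Arrhenius analysis used in DLTS-style deep-level
# studies: generate emission rates for an assumed 0.82 eV level, then recover
# the activation energy from the slope of ln(e/T^2) versus 1/(k_B T).
k_B = 8.617e-5                          # Boltzmann constant, eV/K
E_a_true = 0.82                         # assumed trap depth, eV
T = np.linspace(300, 400, 10)           # measurement temperatures, K
e_n = 1e9 * T**2 * np.exp(-E_a_true / (k_B * T))   # emission rates, 1/s

# Arrhenius plot: ln(e_n / T^2) = ln(A) - E_a / (k_B T)
slope, intercept = np.polyfit(1.0 / (k_B * T), np.log(e_n / T**2), 1)
E_a_fit = -slope
print(f"recovered activation energy: {E_a_fit:.2f} eV")
```

With noise-free synthetic rates the fit recovers the assumed level exactly; real transients add scatter around this line.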

  20. Comparison of different statistical methods for estimation of extreme sea levels with wave set-up contribution

    Science.gov (United States)

    Kergadallan, Xavier; Bernardara, Pietro; Benoit, Michel; Andreewsky, Marc; Weiss, Jérôme

    2013-04-01

Estimating the probability of occurrence of extreme sea levels is a central issue for the protection of the coast. Return periods of sea level with wave set-up contribution are estimated here for one site: Cherbourg, France, in the English Channel. The methodology follows two steps: the first is the computation of the joint probability of simultaneous wave height and still sea level; the second is the interpretation of that joint probability to assess the sea level for a given return period. Two different approaches were evaluated to compute the joint probability of simultaneous wave height and still sea level: the first uses multivariate extreme value distributions of logistic type, in which all components of the variables become large simultaneously; the second is a conditional approach for multivariate extreme values, in which only one component of the variables has to be large. Two different methods were applied to estimate the sea level with wave set-up contribution for a given return period: Monte Carlo simulation, in which the estimation is more accurate but requires more computation time, and classical ocean engineering design contours of inverse-FORM type, in which the method is simpler and allows a more complex estimation of the wave set-up part (wave propagation to the coast, for example). We compare the results from the two approaches combined with the two methods. To be able to use both the Monte Carlo simulation and design contour methods, the wave set-up is estimated with a simple empirical formula. We show the advantages of the conditional approach over the multivariate extreme value approach when extreme sea levels occur when either the surge or the wave height is large. We discuss the validity of the ocean engineering design contour method, which is an alternative when the computation of sea levels is too complex for the Monte Carlo simulation method.
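A much-simplified univariate sketch of the Monte Carlo return-level idea (the paper's method is bivariate, combining still water level and wave set-up; the Gumbel parameters below are invented):

```python
import numpy as np

# Hypothetical sketch: simulate many years of annual-maximum sea levels from
# an assumed Gumbel distribution and read off the level exceeded on average
# once every T_return years, then compare with the closed-form quantile.
rng = np.random.default_rng(1)
mu, beta = 5.0, 0.3                     # assumed Gumbel location/scale, metres
n_years = 100_000
# Gumbel inverse CDF applied to uniform draws
annual_max = mu - beta * np.log(-np.log(rng.uniform(size=n_years)))

T_return = 100                          # years
mc_level = np.quantile(annual_max, 1 - 1 / T_return)
exact = mu - beta * np.log(-np.log(1 - 1 / T_return))   # closed-form quantile
print(f"Monte Carlo 100-year level: {mc_level:.2f} m (exact {exact:.2f} m)")
```

In the bivariate setting of the paper, each simulated year would draw a dependent (surge, wave height) pair and add the empirical wave set-up before taking the quantile.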

  1. Neuro-fuzzy and neural network techniques for forecasting sea level in Darwin Harbor, Australia

    Science.gov (United States)

    Karimi, Sepideh; Kisi, Ozgur; Shiri, Jalal; Makarynskyy, Oleg

    2013-03-01

Accurate predictions of sea level with different forecast horizons are important for coastal and ocean engineering applications, as well as in land drainage and reclamation studies. The methodology of tidal harmonic analysis, which is generally used for obtaining a mathematical description of the tides, is data demanding, requiring the processing of tidal observations collected over several years. In the present study, hourly sea levels for Darwin Harbor, Australia were predicted using two different data-driven techniques: an adaptive neuro-fuzzy inference system (ANFIS) and an artificial neural network (ANN). A multiple linear regression (MLR) technique was used for selecting the optimal input combinations (lag times) of hourly sea level; the combination comprising the current sea level and the five previous values was found to be optimal. For the ANFIS models, five different membership functions, namely triangular, trapezoidal, generalized bell, Gaussian and two-sided Gaussian, were tested and employed for predicting sea level for the next 1 h, 24 h, 48 h and 72 h. The ANN models were trained using three different algorithms, namely Levenberg-Marquardt, conjugate gradient and gradient descent. Predictions of the optimal ANFIS and ANN models were compared with those of the optimal auto-regressive moving average (ARMA) models. The coefficient of determination, root mean square error and variance accounted for were used as comparison criteria. The obtained results indicated that the triangular membership function was optimal for predictions with the ANFIS models, while an adaptive learning rate and Levenberg-Marquardt were most suitable for training the ANN models. Consequently, ANFIS and ANN models gave similar forecasts and performed better than the ARMA models developed for the same purpose over all the prediction intervals.
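The lagged-input setup can be sketched as follows, with ordinary least squares standing in for the MLR selection step and a synthetic tidal-like series in place of the Darwin record:

```python
import numpy as np

# Hypothetical sketch: predict the next hourly sea level from the current
# value and the five previous ones. The series is an invented M2-like tide
# (period ~12.42 h) plus noise, not real Darwin Harbor data.
rng = np.random.default_rng(2)
hours = np.arange(2000)
level = np.sin(2 * np.pi * hours / 12.42) + 0.05 * rng.standard_normal(2000)

lags = 6                                # current value + 5 previous values
X = np.column_stack([level[i:i - lags] for i in range(lags)])
y = level[lags:]                        # one-step-ahead target

coef, *_ = np.linalg.lstsq(X, y, rcond=None)
pred = X @ coef
rmse = np.sqrt(np.mean((pred - y) ** 2))
print(f"one-step RMSE: {rmse:.3f}")
```

ANFIS and ANN models in the paper consume exactly this kind of lagged design matrix; only the mapping from inputs to prediction becomes nonlinear.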

  2. Representing the Fuzzy improved risk graph for determination of optimized safety integrity level in industrial setting

    Directory of Open Access Journals (Sweden)

    Z. Qorbali

    2013-12-01

    .Conclusion: as a result of establishing the presented method, identical levels in conventional risk graph table are replaced with different sublevels that not only increases the accuracy in determining the SIL, but also elucidates the effective factor in improving the safety level and consequently saves time and cost significantly. The proposed technique has been employed to develop the SIL of Tehran Refinery ISOMAX Center. IRG and FIRG results have been compared to clarify the efficacy and importance of the proposed method

  3. Numerical Construction of Viable Sets for Autonomous Conflict Control Systems

    Directory of Open Access Journals (Sweden)

    Nikolai Botkin

    2014-04-01

Full Text Available A conflict control system with state constraints is under consideration. A method for finding viability kernels (the largest subsets of the state constraints where the system can be confined) is proposed. The method is related to the theory of differential games essentially developed by N. N. Krasovskii and A. I. Subbotin. The viability kernel is constructed as the limit of sets generated by a Pontryagin-like backward procedure. This method is implemented in the framework of a level set technique based on the computation of limiting viscosity solutions of an appropriate Hamilton–Jacobi equation. To this end, the authors adapt their numerical methods formerly developed for solving time-dependent Hamilton–Jacobi equations arising from problems with state constraints. Examples of computed viability sets are given.
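The level set machinery referred to can be illustrated on the simplest front-propagation problem: a circle shrinking with unit normal speed, evolved with a first-order Godunov upwind scheme. This is only a toy analogue of the constrained Hamilton-Jacobi solver described in the paper; grid and time step are invented:

```python
import numpy as np

# Toy level set evolution: the front is the zero contour of phi and moves
# inward with unit speed, phi_t = |grad phi| (i.e. phi_t + F|grad phi| = 0
# with F = -1), discretized with Godunov upwinding.
n, h, dt = 101, 0.02, 0.01
x = np.linspace(-1, 1, n)
X, Y = np.meshgrid(x, x)
phi = np.sqrt(X**2 + Y**2) - 0.8        # signed distance to a circle, r = 0.8

for _ in range(20):                     # evolve to t = 0.2
    dxm = np.diff(phi, axis=1, prepend=phi[:, :1]) / h   # backward differences
    dxp = np.diff(phi, axis=1, append=phi[:, -1:]) / h   # forward differences
    dym = np.diff(phi, axis=0, prepend=phi[:1, :]) / h
    dyp = np.diff(phi, axis=0, append=phi[-1:, :]) / h
    # Godunov upwind gradient magnitude for inward (shrinking) motion
    grad = np.sqrt(np.minimum(dxm, 0)**2 + np.maximum(dxp, 0)**2
                   + np.minimum(dym, 0)**2 + np.maximum(dyp, 0)**2)
    phi = phi + dt * grad               # phi_t = |grad phi|: front shrinks

area = (phi < 0).sum() * h * h          # area enclosed by the zero level set
r_est = np.sqrt(area / np.pi)
print(f"front radius after t = 0.2: {r_est:.2f}")  # shrinks from 0.80 toward ~0.60
```

Viability-kernel computations replace the constant speed with a game-dependent Hamiltonian and intersect the evolving sets with the state constraints, but the upwind level set update is the same building block.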

  4. Education level inequalities and transportation injury mortality in the middle aged and elderly in European settings

    NARCIS (Netherlands)

    Borrell, C.; Plasència, A.; Huisman, M.; Costa, G.; Kunst, A.; Andersen, O.; Bopp, M.; Borgan, J.-K.; Deboosere, P.; Glickman, M.; Gadeyne, S.; Minder, C.; Regidor, E.; Spadea, T.; Valkonen, T.; Mackenbach, J. P.

    2005-01-01

    OBJECTIVE: To study the differential distribution of transportation injury mortality by educational level in nine European settings, among people older than 30 years, during the 1990s. METHODS: Deaths of men and women older than 30 years from transportation injuries were studied. Rate differences

  5. Cooking techniques improve the levels of bioactive compounds and antioxidant activity in kale and red cabbage.

    Science.gov (United States)

    Murador, Daniella Carisa; Mercadante, Adriana Zerlotti; de Rosso, Veridiana Vera

    2016-04-01

The aim of this study is to investigate the effects of different home cooking techniques (boiling, steaming, and stir-frying) on kale and red cabbage on the levels of bioactive compounds (carotenoids, anthocyanins and phenolic compounds), determined by high-performance liquid chromatography coupled with photodiode array and mass spectrometry detectors (HPLC-DAD-MS(n)), and on the antioxidant activity evaluated by ABTS, ORAC and cellular antioxidant activity (CAA) assays. The steaming technique resulted in a significant increase in phenolic content in kale (86.1%; p < 0.05). In kale, steaming resulted in significant increases in antioxidant activity levels in all of the evaluation methods. In the red cabbage, boiling resulted in a significant increase in antioxidant activity using the ABTS assay but a significant decrease using the ORAC assay. According to the CAA assay, the stir-fried sample displayed the highest levels of antioxidant activity. Copyright © 2015 Elsevier Ltd. All rights reserved.

  6. Records for radioactive waste management up to repository closure: Managing the primary level information (PLI) set

    International Nuclear Information System (INIS)

    2004-07-01

The objective of this publication is to highlight the importance of the early establishment of a comprehensive records system to manage primary level information (PLI) as an integrated set of information, not merely as a collection of information, throughout all the phases of radioactive waste management. In addition to the information described in the waste inventory record keeping system (WIRKS), the PLI of a radioactive waste repository consists of the entire universe of information, data and records related to any aspect of the repository's life cycle. It is essential to establish PLI requirements based on an integrated set of needs from the regulators and waste managers involved in the waste management chain, and to update these requirements as needs change over time. Information flow for radioactive waste management should be back-end driven. Identification of an authority that will oversee the management of PLI throughout all phases of the radioactive waste management life cycle would guarantee the information flow to future generations. The long term protection of information essential to future generations can only be assured by the timely establishment of a comprehensive and effective records management system (RMS) capable of capturing, indexing and evaluating all PLI. The loss of intellectual control over the PLI will make it very difficult to subsequently identify the ILI and HLI information sets. At all times prior to the closure of a radioactive waste repository, there should be an identifiable entity with a legally enforceable financial and management responsibility for the continued operation of a PLI records management system.
The information presented in this publication will assist Member States in ensuring that waste and repository records, relevant for retention after repository closure

  7. Estimation of low level gross alpha activities in the radioactive effluent using liquid scintillation counting technique

    International Nuclear Information System (INIS)

    Bhade, Sonali P.D.; Johnson, Bella E.; Singh, Sanjay; Babu, D.A.R.

    2012-01-01

A technique has been developed for the simultaneous measurement of gross alpha and gross beta activity concentrations in low level liquid effluent samples in the presence of higher activity concentrations of tritium. For this purpose, an alpha/beta-discriminating pulse shape analysis (PSA) liquid scintillation counting (LSC) technique was used. The main advantages of this technique are easy sample preparation, rapid measurement and higher sensitivity. The calibration methodology for the Quantulus 1220 LSC based on the PSA technique, using 241Am and 90Sr/90Y as the alpha and beta standards respectively, is described in detail. The LSC technique was validated by measuring alpha and beta activity concentrations in test samples with known amounts of 241Am and 90Sr/90Y activities spiked into distilled water. The results obtained by the LSC technique were compared with conventional planchet counting methods using ZnS(Ag) and end-window GM detectors. The gross alpha and gross beta activity concentrations in the spiked samples, obtained by the LSC technique, were found to be within ±5% of the reference values. (author)
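The pulse shape analysis step amounts to a threshold on a pulse-shape parameter (e.g. a tail-to-total ratio): alpha events produce longer-lived pulses and score higher. The Gaussian PSA distributions below are invented for illustration, not Quantulus calibration data:

```python
import numpy as np

# Hypothetical sketch of alpha/beta pulse-shape discrimination: each event
# gets a PSA value; a threshold midway between the two populations separates
# alphas from betas. The distributions are assumed, not measured.
rng = np.random.default_rng(3)
psa_alpha = rng.normal(0.45, 0.05, 5000)    # assumed alpha PSA distribution
psa_beta = rng.normal(0.25, 0.05, 5000)     # assumed beta PSA distribution

threshold = 0.35                            # midpoint between the two means
alpha_eff = np.mean(psa_alpha > threshold)  # fraction of alphas kept
beta_spill = np.mean(psa_beta > threshold)  # betas misclassified as alpha
print(f"alpha efficiency {alpha_eff:.3f}, beta spillover {beta_spill:.3f}")
```

In practice the threshold is tuned with pure alpha and beta standards (as the paper does with 241Am and 90Sr/90Y) to balance efficiency against spillover.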

  8. Nurses' comfort level with spiritual assessment: a study among nurses working in diverse healthcare settings.

    Science.gov (United States)

    Cone, Pamela H; Giske, Tove

    2017-10-01

To gain knowledge about nurses' comfort level in assessing spiritual matters and to learn what questions nurses use in practice related to spiritual assessment. Spirituality is important in holistic nursing care; however, nurses report feeling uncomfortable and ill-prepared to address this domain with patients. Education is reported to impact nurses' ability to engage in spiritual care. This cross-sectional exploratory survey reports on a mixed-method study examining how comfortable nurses are with spiritual assessment. In 2014, a 21-item survey with 10 demographic variables and three open-ended questions was distributed to Norwegian nurses working in diverse care settings, with 172 nurses responding (72% response rate). SPSS was used to analyse the quantitative data; thematic analysis examined the open-ended questions. Norwegian nurses reported a high level of comfort with most questions even though spirituality is seen as private. Nurses with some preparation or experience in spiritual care were most comfortable assessing spirituality. Statistically significant correlations were found between the nurses' comfort level with spiritual assessment and their preparedness and sense of the importance of spiritual assessment. How well-prepared nurses felt was related to years of experience, degree of spirituality and religiosity, and importance of spiritual assessment. Many nurses are poorly prepared for spiritual assessment and care among patients in diverse care settings; educational preparation increases their comfort level with facilitating such care. Nurses who feel well prepared with spirituality feel more comfortable with the spiritual domain. By fostering a culture where patients' spirituality is discussed and reflected upon in everyday practice and in continued education, nurses' sense of preparedness, and thus their level of comfort, can increase. Clinical supervision and interprofessional collaboration with hospital chaplains and/or other spiritual leaders can

  9. Low level waste management: a compilation of models and monitoring techniques. Volume 1

    Energy Technology Data Exchange (ETDEWEB)

    Mosier, J.E.; Fowler, J.R.; Barton, C.J. (comps.)

    1980-04-01

    In support of the National Low-Level Waste (LLW) Management Research and Development Program being carried out at Oak Ridge National Laboratory, Science Applications, Inc., conducted a survey of models and monitoring techniques associated with the transport of radionuclides and other chemical species from LLW burial sites. As a result of this survey, approximately 350 models were identified. For each model the purpose and a brief description are presented. To the extent possible, a point of contact and reference material are identified. The models are organized into six technical categories: atmospheric transport, dosimetry, food chain, groundwater transport, soil transport, and surface water transport. About 4% of the models identified covered other aspects of LLW management and are placed in a miscellaneous category. A preliminary assessment of all these models was performed to determine their ability to analyze the transport of other chemical species. The models that appeared to be applicable are identified. A brief survey of the state-of-the-art techniques employed to monitor LLW burial sites is also presented, along with a very brief discussion of up-to-date burial techniques.

  10. Low level waste management: a compilation of models and monitoring techniques. Volume 1

    International Nuclear Information System (INIS)

    Mosier, J.E.; Fowler, J.R.; Barton, C.J.

    1980-04-01

    In support of the National Low-Level Waste (LLW) Management Research and Development Program being carried out at Oak Ridge National Laboratory, Science Applications, Inc., conducted a survey of models and monitoring techniques associated with the transport of radionuclides and other chemical species from LLW burial sites. As a result of this survey, approximately 350 models were identified. For each model the purpose and a brief description are presented. To the extent possible, a point of contact and reference material are identified. The models are organized into six technical categories: atmospheric transport, dosimetry, food chain, groundwater transport, soil transport, and surface water transport. About 4% of the models identified covered other aspects of LLW management and are placed in a miscellaneous category. A preliminary assessment of all these models was performed to determine their ability to analyze the transport of other chemical species. The models that appeared to be applicable are identified. A brief survey of the state-of-the-art techniques employed to monitor LLW burial sites is also presented, along with a very brief discussion of up-to-date burial techniques

  11. Considerations and Algorithms for Compression of Sets

    DEFF Research Database (Denmark)

    Larsson, Jesper

We consider compression of unordered sets of distinct elements. After a discussion of the general problem, we focus on compressing sets of fixed-length bitstrings in the presence of statistical information. We survey techniques from previous work, suggesting some adjustments, and propose a novel compression algorithm that allows transparent incorporation of various estimates of the probability distribution. Our experimental results support the conclusion that set compression can benefit from incorporating statistics, using our method or variants of previously known techniques.
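A generic sketch (not the paper's algorithm) of the combinatorial baseline for this problem: an unordered set of k distinct n-bit strings is one of C(2^n, k) equally likely possibilities, so encoding its colexicographic rank uses fewer bits than storing the k strings separately:

```python
from math import comb

def set_rank(elements):
    """Colexicographic rank of a set of distinct non-negative integers."""
    xs = sorted(elements)
    return sum(comb(x, i + 1) for i, x in enumerate(xs))

def set_unrank(rank, k):
    """Inverse of set_rank: recover the k-element set from its rank."""
    result = []
    for i in range(k, 0, -1):
        x = i - 1                        # find largest x with comb(x, i) <= rank
        while comb(x + 1, i) <= rank:
            x += 1
        result.append(x)
        rank -= comb(x, i)
    return sorted(result)

s = {0b1011, 0b0010, 0b1110}             # three distinct 4-bit strings
r = set_rank(s)
assert set_unrank(r, len(s)) == sorted(s)
# The rank fits in ceil(log2(C(16, 3))) = 10 bits vs. 12 bits of raw storage.
print(r, comb(16, 3))
```

Incorporating per-element probability estimates, as the paper proposes, replaces this uniform enumeration with an entropy coder over non-uniform set probabilities.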

  12. Advantages of Synthetic Noise and Machine Learning for Analyzing Radioecological Data Sets.

    Directory of Open Access Journals (Sweden)

    Igor Shuryak

Full Text Available The ecological effects of accidental or malicious radioactive contamination are insufficiently understood because of the hazards and difficulties associated with conducting studies in radioactively-polluted areas. Data sets from severely contaminated locations can therefore be small. Moreover, many potentially important factors, such as soil concentrations of toxic chemicals, pH, and temperature, can be correlated with radiation levels and with each other. In such situations, commonly-used statistical techniques like generalized linear models (GLMs) may not be able to provide useful information about how radiation and/or these other variables affect the outcome (e.g., abundance of the studied organisms). Ensemble machine learning methods such as random forests offer powerful alternatives. We propose that analysis of small radioecological data sets by GLMs and/or machine learning can be made more informative by using the following techniques: (1) adding synthetic noise variables to provide benchmarks for distinguishing the performances of valuable predictors from irrelevant ones; (2) adding noise directly to the predictors and/or to the outcome to test the robustness of analysis results against random data fluctuations; (3) adding artificial effects to selected predictors to test the sensitivity of the analysis methods in detecting predictor effects; (4) running a selected machine learning method multiple times (with different random-number seeds) to test the robustness of the detected "signal"; (5) using several machine learning methods to test the "signal's" sensitivity to differences in analysis techniques. Here, we applied these approaches to simulated data, and to two published examples of small radioecological data sets: (I) counts of fungal taxa in samples of soil contaminated by the Chernobyl nuclear power plant accident (Ukraine), and (II) bacterial abundance in soil samples under a ruptured nuclear waste storage tank (USA).
We show that the proposed
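Technique (1), synthetic noise benchmarking, can be sketched as follows; all data are simulated, and absolute correlation is used as a simple stand-in for the machine-learning importance measures the paper evaluates:

```python
import numpy as np

# Hypothetical sketch: append pure-noise predictors to a small data set and
# keep only real predictors whose importance score beats the best-scoring
# noise column. All variables below are invented.
rng = np.random.default_rng(4)
n = 40                                  # small sample, as in radioecology data
radiation = rng.lognormal(0, 1, n)      # relevant predictor
ph = rng.normal(6.5, 0.5, n)            # irrelevant predictor, for contrast
abundance = 10 - 2 * np.log(radiation) + 0.5 * rng.normal(0, 1, n)

X = np.column_stack([radiation, ph] + [rng.standard_normal(n) for _ in range(5)])
names = ["radiation", "pH"] + [f"noise{i}" for i in range(5)]

# Importance score: absolute correlation with the outcome
scores = [abs(np.corrcoef(X[:, j], abundance)[0, 1]) for j in range(X.shape[1])]
benchmark = max(s for s, nm in zip(scores, names) if nm.startswith("noise"))
kept = [nm for s, nm in zip(scores, names)
        if s > benchmark and not nm.startswith("noise")]
print(kept)
```

The noise columns set an empirical floor: any real predictor that cannot outscore pure noise is treated as uninformative, which is exactly the benchmarking role described above.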

  13. Comparison of Spinal Block Levels between Laboring and Nonlaboring Parturients Using Combined Spinal Epidural Technique with Intrathecal Plain Bupivacaine

    Directory of Open Access Journals (Sweden)

    Yu-Ying Tang

    2012-01-01

Full Text Available Background. It was suggested that labor may influence the spread of intrathecal bupivacaine using the combined spinal epidural (CSE) technique. However, no previous studies investigated this proposition. We designed this study to investigate the spinal block characteristics of plain bupivacaine between nonlaboring and laboring parturients using the CSE technique. Methods. Twenty-five nonlaboring (Group NL) and twenty-five laboring parturients (Group L) undergoing cesarean delivery were enrolled. Following identification of the epidural space at the L3-4 interspace, plain bupivacaine 10 mg was administered intrathecally using the CSE technique. The level of sensory block, degree of motor block, and hemodynamic changes were assessed. Results. The baseline systolic blood pressure (SBP) and the maximal decrease of SBP in Group L were significantly higher than those in Group NL (P=0.002 and P=0.03, resp.). The median sensory level tested by cold stimulation was T6 for Group NL and T5 for Group L (P=0.46). The median sensory level tested by pinprick was T7 for both groups (P=0.35). The degree of motor block was comparable between the two groups (P=0.85). Conclusion. We did not detect significant differences in the sensory block levels between laboring and nonlaboring parturients using the CSE technique with intrathecal plain bupivacaine.

  14. Harmonic elimination technique for a single-phase multilevel converter with unequal DC link voltage levels

    DEFF Research Database (Denmark)

    Ghasemi, N.; Zare, F.; Boora, A.A.

    2012-01-01

Multilevel converters, because of the benefits they offer in generating high quality output voltage, are used in several applications. Various modulation and control techniques have been introduced by several researchers to control the output voltage of multilevel converters, such as space vector modulation and harmonic elimination (HE) methods. Multilevel converters may have a DC link with equal or unequal DC voltages. In this study a new technique based on the HE method is proposed for multilevel converters with an unequal DC link voltage. The DC link voltage levels are considered as additional...
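The harmonic elimination idea can be sketched for the simplest case: a two-angle (five-level) staircase with equal DC sources, where harmonic n has amplitude proportional to cos(n·α1) + cos(n·α2). The target fundamental and the brute-force search are invented for illustration; the paper extends this to unequal DC link voltages:

```python
import numpy as np

# Hypothetical sketch of selective harmonic elimination: search for switching
# angles a1 < a2 that hold the fundamental near a target amplitude while
# driving the 5th harmonic of the staircase waveform to zero.
def h(n, a1, a2):
    """Amplitude of harmonic n for a two-angle staircase (equal sources)."""
    return (4 / (n * np.pi)) * (np.cos(n * a1) + np.cos(n * a2))

a = np.linspace(0.01, np.pi / 2 - 0.01, 400)
A1, A2 = np.meshgrid(a, a)
valid = A1 < A2
cost = np.abs(h(1, A1, A2) - 1.2) + np.abs(h(5, A1, A2))  # target + elimination
cost[~valid] = np.inf
i, j = np.unravel_index(np.argmin(cost), cost.shape)
a1, a2 = A1[i, j], A2[i, j]
print(f"angles {np.degrees(a1):.1f} deg, {np.degrees(a2):.1f} deg, "
      f"5th harmonic {h(5, a1, a2):.4f}")
```

With unequal DC levels, each cosine term is weighted by its level's voltage, and those voltages become extra degrees of freedom in the same kind of system of equations.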

  15. Handling Imbalanced Data Sets in Multistage Classification

    Science.gov (United States)

    López, M.

Multistage classification is a logical approach, based on a divide-and-conquer solution, for dealing with problems with a high number of classes. The classification problem is divided into several sequential steps, each one associated with a single classifier that works with subgroups of the original classes. In each level, the current set of classes is split into smaller subgroups of classes until the subgroups are composed of only one class. The resulting chain of classifiers can be represented as a tree, which (1) simplifies the classification process by using fewer categories in each classifier and (2) makes it possible to combine several algorithms or use different attributes in each stage. Most classification algorithms can be biased in the sense of selecting the most populated class in overlapping areas of the input space. This can degrade a multistage classifier's performance if the training set sample frequencies do not reflect the real prevalence in the population. Several techniques, such as applying prior probabilities, assigning weights to the classes, or replicating instances, have been developed to overcome this handicap. Most of them are designed for two-class (accept-reject) problems. In this article, we evaluate several of these techniques as applied to multistage classification and analyze how they can be useful for astronomy. We compare the results obtained by classifying a data set based on Hipparcos with and without these methods.
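The tree-of-classifiers idea can be sketched with trivial threshold rules standing in for trained models; the class names and features below are invented:

```python
# Hypothetical sketch of a multistage (tree-of-classifiers) scheme: stage 1
# separates broad groups, stage 2 refines within a group. Each "classifier"
# is a threshold rule here, but could be any trained model per stage.
def stage1(x):
    """Splits objects into a broad 'star-like' group vs. 'galaxy'."""
    return "star-like" if x["compactness"] > 0.5 else "galaxy"

def stage2(x):
    """Refines the star-like group into two final classes."""
    return "star" if x["color"] < 0.3 else "quasar"

def classify(x):
    group = stage1(x)
    return stage2(x) if group == "star-like" else group

objects = [{"compactness": 0.9, "color": 0.1},
           {"compactness": 0.8, "color": 0.7},
           {"compactness": 0.2, "color": 0.5}]
print([classify(o) for o in objects])   # ['star', 'quasar', 'galaxy']
```

Because each stage sees fewer classes, the class-imbalance corrections discussed above (priors, weights, replication) can be applied per stage, where the two-group case they were designed for actually holds.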

  16. Fluoroscopy in paediatric fractures - Setting a local diagnostic reference level

    International Nuclear Information System (INIS)

    Pillai, A.; McAuley, A.; McMurray, K.; Jain, M.

    2006-01-01

Background: The Ionising Radiation (Medical Exposure) Regulations 2000 have made it mandatory to establish diagnostic reference levels (DRLs) for all typical radiological examinations. Objectives: We attempt to provide dose data for some common fluoroscopic procedures used in orthopaedic trauma that may be used as the basis for setting DRLs for paediatric patients. Materials and methods: The dose area product (DAP) in 865 paediatric trauma examinations was analysed. Median DAP values and screening times for each procedure type, along with quartile values for each range, are presented. Results: In the upper limb, elbow examinations had the maximum exposure, with a median DAP value of 1.21 cGy·cm². Median DAP values for forearm and wrist examinations were 0.708 and 0.538 cGy·cm², respectively. In the lower limb, tibia and fibula examinations had a median DAP value of 3.23 cGy·cm², followed by ankle examinations with a median DAP of 3.10 cGy·cm². The rounded third-quartile DAP value for each distribution can be used as a provisional DRL for the specific procedure type. (authors)
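Deriving a provisional DRL from such a survey follows directly from the quartile description above; the DAP readings below are invented, not the paper's data:

```python
import statistics

# Hypothetical sketch: collect DAP readings for one procedure type and take
# the rounded third quartile as the provisional local DRL. Values invented.
dap_ankle = [1.2, 1.8, 2.1, 2.4, 2.9, 3.1,
             3.3, 3.9, 4.4, 5.0, 6.2, 8.5]    # cGy·cm², one exam each

median_dap = statistics.median(dap_ankle)
q1, q2, q3 = statistics.quantiles(dap_ankle, n=4)  # quartile cut points
drl = round(q3, 1)                                 # provisional DRL
print(f"median {median_dap} cGy·cm², provisional DRL (Q3) {drl} cGy·cm²")
```

Using the third quartile rather than the maximum makes the DRL a trigger for investigating unusually high doses while tolerating normal clinical variation.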

  17. Level Diagrams analysis of Pareto Front for multiobjective system redundancy allocation

    International Nuclear Information System (INIS)

    Zio, E.; Bazzo, R.

    2011-01-01

    Reliability-based and risk-informed design, operation, maintenance and regulation lead to multiobjective (multicriteria) optimization problems. In this context, the Pareto Front and Set found in a multiobjective optimality search provide a family of solutions among which the decision maker has to look for the best choice according to his or her preferences. Efficient visualization techniques for Pareto Front and Set analyses are needed for helping decision makers in the selection task. In this paper, we consider the multiobjective optimization of system redundancy allocation and use the recently introduced Level Diagrams technique for graphically representing the resulting Pareto Front and Set. Each objective and decision variable is represented on separate diagrams where the points of the Pareto Front and Set are positioned according to their proximity to ideally optimal points, as measured by a metric of normalized objective values. All diagrams are synchronized across all objectives and decision variables. On the basis of the analysis of the Level Diagrams, we introduce a procedure for reducing the number of solutions in the Pareto Front; from the reduced set of solutions, the decision maker can more easily identify his or her preferred solution.
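The core of the Level Diagrams representation can be sketched as follows: extract the Pareto front, normalize each objective, and assign every front point a common "level" given by its norm-distance to the ideal point. Random objective values stand in for a real redundancy-allocation front:

```python
import numpy as np

# Hypothetical sketch of the Level Diagrams metric for a two-objective
# minimization problem with invented objective values.
rng = np.random.default_rng(5)
F = rng.uniform(0, 1, size=(200, 2))    # objective values of 200 solutions

def pareto_mask(F):
    """True for points not dominated by any other point (minimization)."""
    return np.array([not np.any(np.all(F <= f, axis=1) & np.any(F < f, axis=1))
                     for f in F])

front = F[pareto_mask(F)]
span = np.ptp(front, axis=0)
norm = (front - front.min(axis=0)) / np.where(span > 0, span, 1.0)
level = np.linalg.norm(norm, axis=1)    # one common "level" per front point

# In a Level Diagram, each objective (and decision variable) is plotted
# against this level, so a given solution sits at the same height everywhere.
print(f"{len(front)} Pareto points, best level {level.min():.2f}")
```

Pruning the front by keeping only solutions below a level cutoff is one way to realize the reduction procedure the paper introduces.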

  18. Three-Dimensional Simulation of DRIE Process Based on the Narrow Band Level Set and Monte Carlo Method

    Directory of Open Access Journals (Sweden)

    Jia-Cheng Yu

    2018-02-01

Full Text Available A three-dimensional topography simulation of deep reactive ion etching (DRIE) is developed based on the narrow band level set method for surface evolution and the Monte Carlo method for flux distribution. The advanced level set method is implemented to simulate the time-dependent movement of the etched surface. Meanwhile, accelerated by a ray tracing algorithm, the Monte Carlo method incorporates all dominant physical and chemical mechanisms such as ion-enhanced etching, ballistic transport, ion scattering, and sidewall passivation. Modified models of charged and neutral particles are used to determine their contributions to the etching rate. Effects such as the scalloping effect and the lag effect are investigated in simulations and experiments. In addition, quantitative analyses are conducted to measure the simulation error. Finally, this simulator can serve as an accurate prediction tool for MEMS fabrication.

  19. Image accuracy and representational enhancement through low-level, multi-sensor integration techniques

    International Nuclear Information System (INIS)

    Baker, J.E.

    1993-05-01

Multi-Sensor Integration (MSI) is the combining of data and information from more than one source in order to generate a more reliable and consistent representation of the environment. The need for MSI derives largely from basic ambiguities inherent in our current sensor imaging technologies. These ambiguities exist as long as the mapping from reality to image is not 1-to-1. That is, if different "realities" lead to identical images, a single image cannot reveal the particular reality which was the truth. MSI techniques can be divided into three categories based on the relative information content of the original images with that of the desired representation: (1) "detail enhancement," wherein the relative information content of the original images is less rich than the desired representation; (2) "data enhancement," wherein the MSI techniques are concerned with improving the accuracy of the data rather than either increasing or decreasing the level of detail; and (3) "conceptual enhancement," wherein the image contains more detail than is desired, making it difficult to easily recognize objects of interest. In conceptual enhancement one must group pixels corresponding to the same conceptual object and thereby reduce the level of extraneous detail. This research focuses on data and conceptual enhancement algorithms. To be useful in many real-world applications, e.g., autonomous or teleoperated robotics, real-time feedback is critical. But many MSI/image processing algorithms require significant processing time. This is especially true of feature extraction, object isolation, and object recognition algorithms due to their typical reliance on global or large-neighborhood information. This research attempts to exploit the speed currently available in state-of-the-art digitizers and highly parallel processing systems by developing MSI algorithms based on pixel- rather than global-level features.
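A minimal sketch of low-level "data enhancement" MSI, assuming two registered sensors with known noise levels fused by inverse-variance weighting; the scene and noise values are invented:

```python
import numpy as np

# Hypothetical pixel-level fusion: two noisy images of the same scene are
# combined by inverse-variance weighting, lowering the noise below either
# sensor alone. This is a generic construction, not the paper's algorithm.
rng = np.random.default_rng(6)
scene = np.tile(np.linspace(0, 1, 64), (64, 1))    # ground-truth image
img_a = scene + rng.normal(0, 0.10, scene.shape)   # sensor A, sigma = 0.10
img_b = scene + rng.normal(0, 0.20, scene.shape)   # sensor B, sigma = 0.20

w_a, w_b = 1 / 0.10**2, 1 / 0.20**2                # inverse-variance weights
fused = (w_a * img_a + w_b * img_b) / (w_a + w_b)

rmse = lambda im: np.sqrt(np.mean((im - scene) ** 2))
print(f"A {rmse(img_a):.3f}, B {rmse(img_b):.3f}, fused {rmse(fused):.3f}")
```

Because it is purely per-pixel, this kind of fusion is cheap enough for the real-time, highly parallel setting the research targets, in contrast to global feature-extraction approaches.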

  20. Effects of setting creative goals of different specificity on judged creativity of the product

    OpenAIRE

    Čorko, Irena; Vranić, Andrea

    2005-01-01

    The study examined the effect of setting creative goals of different specificity on the judged creativity of the product. Female psychology students (N=47) were divided into 3 groups. The experimental task was to make a collage. Groups differed in the level of specificity of the given goal. Collages were judged by 11 judges using the consensual assessment technique. Factor analysis of these judgments confirmed 2 orthogonal factors: creativity and technical goodness. Results show that setting a specific...

  1. Annual plan of research on safety techniques against low level radioactive wastes, 1984-1988

    International Nuclear Information System (INIS)

    1984-01-01

    The establishment of countermeasures for treating and disposing of radioactive wastes has become an important subject in promoting the utilization of atomic energy. Especially as to low level radioactive wastes, the cumulative quantity had reached about 460,000 200-l drums as of the end of March, 1983, and, accompanying the development of the utilization of atomic energy, a rapid increase is expected. So far, as for the disposal of low level radioactive wastes, research and development and the preparation of safety criteria and safety evaluation techniques have been carried out, following the basic policy of the Atomic Energy Commission: to execute land disposal and ocean disposal in combination, first making a test disposal after preliminary safety evaluation, and then shifting to full scale disposal based on the results. The annual plan was decided on July 22, 1983, and its first revision was carried out this time, therefore it is reported here. The basic policy behind this annual plan and the annual plan for safety technique research are described. (Kako, I.)

  2. County-Level Poverty Is Equally Associated with Unmet Health Care Needs in Rural and Urban Settings

    Science.gov (United States)

    Peterson, Lars E.; Litaker, David G.

    2010-01-01

    Context: Regional poverty is associated with reduced access to health care. Whether this relationship is equally strong in both rural and urban settings or is affected by the contextual and individual-level characteristics that distinguish these areas, is unclear. Purpose: Compare the association between regional poverty with self-reported unmet…

  3. Comparing of goal setting strategy with group education method to increase physical activity level: A randomized trial.

    Science.gov (United States)

    Jiryaee, Nasrin; Siadat, Zahra Dana; Zamani, Ahmadreza; Taleban, Roya

    2015-10-01

    An intervention to increase physical activity should be designed around the resources of the health care setting and be acceptable to the subject group. This study was designed to assess and compare the effect of a goal-setting strategy with a group education method on increasing the physical activity of mothers of children aged 1 to 5. Mothers who had at least one child of 1-5 years were randomized into two groups. The effect of 1) the goal-setting strategy and 2) the group education method on increasing physical activity was assessed and compared 1 month and 3 months after the intervention. Also, the weight, height, body mass index (BMI), waist and hip circumference, and well-being were compared between the two groups before and after the intervention. Physical activity level increased significantly after the intervention in the goal-setting group and it was significantly different between the two groups after intervention (P goal-setting group after the intervention. In the group education method, only the well-being score improved significantly (P goal-setting strategy to boost physical activity, improving the state of well-being and decreasing BMI, waist, and hip circumference.

  4. Optimizing anesthesia techniques in the ambulatory setting

    NARCIS (Netherlands)

    E. Galvin (Eilish)

    2007-01-01

    Ambulatory surgery refers to the process of admitting patients, administering anesthesia and surgical care, and discharging patients home following an appropriate level of recovery on the same day. The word ambulatory is derived from the Latin word ambulare, which means ''to walk''. This

  5. A topology optimization method based on the level set method for the design of negative permeability dielectric metamaterials

    DEFF Research Database (Denmark)

    Otomori, Masaki; Yamada, Takayuki; Izui, Kazuhiro

    2012-01-01

    This paper presents a level set-based topology optimization method for the design of negative permeability dielectric metamaterials. Metamaterials are artificial materials that display extraordinary physical properties that are unavailable with natural materials. The aim of the formulated...... optimization problem is to find optimized layouts of a dielectric material that achieve negative permeability. The presence of grayscale areas in the optimized configurations critically affects the performance of metamaterials, positively as well as negatively, but configurations that contain grayscale areas...... are highly impractical from an engineering and manufacturing point of view. Therefore, a topology optimization method that can obtain clear optimized configurations is desirable. Here, a level set-based topology optimization method incorporating a fictitious interface energy is applied to a negative...

  6. Assessment of serum selenium levels in 2-month-old sucking calves using total reflection technique

    International Nuclear Information System (INIS)

    Bernardini, D.; Testoni, S.; Buoso, M.C.; Ceccato, D.; Moschini, G.; Valdes, M.; Torboli, A.

    2000-01-01

    The assessment of the selenium status of livestock is an important aspect of production medicine, as evidence for the influence of low Se levels on disease resistance in ruminants has been reviewed with emphasis on susceptibility to various pathologies (such as infections, exudative diathesis, pneumonia, pancreatic degeneration). Additional evidence suggests that Se deficiency may cause muscular dystrophy in calves, while severe deficiency has been associated with cardiomyopathy and even death. Since serum Se content is a good indicator of the short-term Se status and reflects the recent dietary intake of the element, the present work aims to determine the Se concentration in serum from a group of 2-month-old sucking calves suspected to be severely deficient. We used the TX 2000 X-ray spectrometer manufactured by Ital Structures. The energy resolution (FWHM) of the Si(Li) detector was 137 eV for Mn Kα. Among nuclear techniques the TXRF method is the best suited for trace element analysis in liquid or dissolved samples and can deal much more easily with elemental investigation. The physical basis of the analytical method used, the experimental set-up and the sample preparation procedure are described. The concentration data obtained are presented and discussed. (author)

  7. Level set method for optimal shape design of MRAM core. Micromagnetic approach

    International Nuclear Information System (INIS)

    Melicher, Valdemar; Cimrak, Ivan; Keer, Roger van

    2008-01-01

    We aim at optimizing the shape of the magnetic core in MRAM memories. The evolution of the magnetization during the writing process is described by the Landau-Lifshitz equation (LLE). The actual shape of the core in one cell is characterized by the coefficient γ. The cost functional f=f(γ) expresses the quality of the writing process, having in mind the competition between the full-select and the half-select element. We derive an explicit form of the derivative F=∂f/∂γ, which allows for the use of gradient-type methods for the actual computation of the optimized shape (e.g., the steepest descent method). The level set method (LSM) is employed for the representation of the piecewise constant coefficient γ
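
    A toy version of this level-set/steepest-descent loop can be sketched in one dimension; the misfit functional below is invented for illustration and is not the LLE-based cost of the paper:

    ```python
    import numpy as np

    # Minimal sketch: a piecewise-constant coefficient gamma on [0,1] is
    # represented by a level set phi = x - c (interface at x = c), smeared
    # with a regularized Heaviside. Steepest descent on c, driven by a
    # numerical derivative of the misfit, recovers the target interface.
    x = np.linspace(0.0, 1.0, 1001)
    dx = x[1] - x[0]
    g_in, g_out, eps = 1.0, 2.0, 0.05

    def gamma(c):
        heavi = 0.5 * (1.0 + np.tanh((x - c) / eps))  # smoothed Heaviside of phi
        return g_in + (g_out - g_in) * heavi

    target = gamma(0.6)                               # "measured" profile

    def cost(c):
        return float(np.sum((gamma(c) - target) ** 2) * dx)

    c, step, h = 0.2, 0.05, 1e-4
    for _ in range(200):
        grad = (cost(c + h) - cost(c - h)) / (2 * h)  # numerical derivative
        c -= step * grad                              # steepest descent step
    print(round(c, 3))                                # ~0.6
    ```

    The real problem replaces the scalar c by a full level set function and the numerical derivative by the explicit shape derivative F=∂f/∂γ, but the descent structure is the same.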

  8. Basic set theory

    CERN Document Server

    Levy, Azriel

    2002-01-01

    An advanced-level treatment of the basics of set theory, this text offers students a firm foundation, stopping just short of the areas employing model-theoretic methods. Geared toward upper-level undergraduate and graduate students, it consists of two parts: the first covers pure set theory, including the basic notions, order and well-foundedness, cardinal numbers, the ordinals, and the axiom of choice and some of its consequences; the second deals with applications and advanced topics such as point set topology, real spaces, Boolean algebras, and infinite combinatorics and large cardinals. An

  9. Principle and geomorphological applicability of summit level and base level technique using Aster Gdem satellite-derived data and the original software Baz

    Directory of Open Access Journals (Sweden)

    Akihisa Motoki

    2015-05-01

    Full Text Available This article presents the principle and geomorphological applicability of the summit level technique using Aster Gdem satellite-derived topographic data. The summit level corresponds to the virtual topographic surface constituted by local highest points, such as peaks and plateau tops, and reconstitutes the palaeo-geomorphology before drainage erosion. A summit level map is efficient for the reconstitution of palaeo-surfaces and the detection of active tectonic movement. The base level is the virtual surface composed of local lowest points, such as valley bottoms. The difference between summit level and base level is called the relief amount. These virtual maps are constructed by the original software Baz. The macroconcavity index, MCI, is calculated from the summit level and relief amount maps. The volume-normalised three-dimensional concavity index, TCI, is calculated from the hypsometric diagram. Massifs with high erosive resistance tend to have a convex general form and low MCI and TCI. Those with low resistance have a concave form and high MCI and TCI. The diagram of TCI vs. MCI permits distinguishing the erosive characteristics of massifs according to their constituent rocks. The base level map for the ocean bottom detects the basement tectonic uplift which occurred before the formation of the volcanic seamounts.
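
    A crude numerical stand-in for the summit level and base level surfaces can be sketched with a moving-window extreme (the Baz software interpolates local peaks and valley bottoms, which this does not reproduce; the DEM here is synthetic):

    ```python
    import numpy as np

    def window_extreme(dem, size, func):
        # Moving-window extreme as a stand-in for interpolating local peaks
        # (summit level) or valley bottoms (base level) over a DEM grid.
        pad = size // 2
        padded = np.pad(dem, pad, mode="edge")
        out = np.empty_like(dem, dtype=float)
        for i in range(dem.shape[0]):
            for j in range(dem.shape[1]):
                out[i, j] = func(padded[i:i + size, j:j + size])
        return out

    rng = np.random.default_rng(1)
    dem = rng.uniform(0.0, 100.0, (20, 20))    # synthetic elevation grid

    summit = window_extreme(dem, 5, np.max)    # virtual surface through highs
    base = window_extreme(dem, 5, np.min)      # virtual surface through lows
    relief = summit - base                     # "relief amount" map
    ```

    By construction the summit surface lies on or above the terrain and the base surface on or below it, so the relief amount is everywhere non-negative, as in the article's maps.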

  10. Computing the dynamics of biomembranes by combining conservative level set and adaptive finite element methods

    OpenAIRE

    Laadhari , Aymen; Saramito , Pierre; Misbah , Chaouqi

    2014-01-01

    International audience; The numerical simulation of the deformation of vesicle membranes under simple shear external fluid flow is considered in this paper. A new saddle-point approach is proposed for the imposition of the fluid incompressibility and the membrane inextensibility constraints, through Lagrange multipliers defined in the fluid and on the membrane respectively. Using a level set formulation, the problem is approximated by mixed finite elements combined with an automatic adaptive ...

  11. Restoring coastal wetlands that were ditched for mosquito control: a preliminary assessment of hydro-leveling as a restoration technique

    Science.gov (United States)

    Smith, Thomas J.; Tiling, Ginger; Leasure, Pamela S.

    2007-01-01

    The wetlands surrounding Tampa Bay, Florida were extensively ditched for mosquito control in the 1950s. Spoil from ditch construction was placed adjacent to the wetlands ditches creating mound-like features (spoil-mounds). These mounds represent a loss of 14% of the wetland area in Tampa Bay. Spoil mounds interfere with tidal flow and are locations for non-native plants to colonize (e.g., Schinus terebinthifolius). Removal of the spoil mounds to eliminate exotic plants, restore native vegetation, and re-establish natural hydrology is a restoration priority for environmental managers. Hydro-leveling, a new technique, was tested in a mangrove forest restoration project in 2004. Hydro-leveling uses a high pressure stream of water to wash sediment from the spoil mound into the adjacent wetland and ditch. To assess the effectiveness of this technique, we conducted vegetation surveys in areas that were hydro-leveled and in non-hydro-leveled areas 3 years post-project. Adult Schinus were reduced but not eliminated from hydro-leveled mounds. Schinus seedlings however were absent from hydro-leveled sites. Colonization by native species was sparse. Mangrove seedlings were essentially absent (≈2 m⁻²) from the centers of hydro-leveled mounds and were in low density on their edges (17 m⁻²) in comparison to surrounding mangrove forests (105 m⁻²). Hydro-leveling resulted in mortality of mangroves adjacent to the mounds being leveled. This was probably caused by burial of pneumatophores during the hydro-leveling process. For hydro-leveling to be a useful and successful restoration technique several requirements must be met. Spoil mounds must be lowered to the level of the surrounding wetlands. Spoil must be distributed further into the adjacent wetland to prevent burial of nearby native vegetation. Finally, native species may need to be planted on hydro-leveled areas to speed up the re-vegetation process.

  12. Natural setting of Japanese islands and geologic disposal of high-level waste

    International Nuclear Information System (INIS)

    Koide, Hitoshi

    1991-01-01

    The Japanese islands are a combination of arcuate islands along boundaries between four major plates: Eurasia, North America, Pacific and Philippine Sea plates. The interaction among the four plates formed complex geological structures which are basically patchworks of small blocks of land and sea-floor sediments piled up by the subduction of oceanic plates along the margin of the Eurasia continent. Although frequent earthquakes and volcanic eruptions clearly indicate active crustal deformation, the distribution of active faults and volcanoes is localized regionally in the Japanese islands. Crustal displacement faster than 1 mm/year takes place only in restricted regions near plate boundaries or close to major active faults. Volcanic activity is absent in the region between the volcanic front and the subduction zone. The site selection is especially important in Japan. The scenarios for the long-term performance assessment of high-level waste disposal are discussed with special reference to the geological setting of Japan. The long-term prediction of tectonic disturbance, evaluation of faults and fractures in rocks and estimation of long-term water-rock interaction are key issues in the performance assessment of the high-level waste disposal in the Japanese islands. (author)

  13. Transport-level description of the 252Cf-source method using the Langevin technique

    International Nuclear Information System (INIS)

    Stolle, A.M.; Akcasu, A.Z.

    1991-01-01

    The fluctuations in the neutron number density and detector outputs in a nuclear reactor can be analyzed conveniently by using the Langevin equation approach. This approach can be implemented at any level of approximation to describe the time evolution of the neutron population, from the most complete transport-level description to the very basic point reactor analysis of neutron number density fluctuations. In this summary, the complete space- and velocity-dependent transport-level formulation of the Langevin equation approach is applied to the analysis of the 252Cf-source-driven noise analysis (CSDNA) method, an experimental technique developed by J.T. Mihalczo at Oak Ridge National Laboratory, which makes use of noise analysis to determine the reactivity of subcritical media. From this analysis, a theoretical expression for the subcritical multiplication factor is obtained that can then be used to interpret the experimental data. Results at the transport level are in complete agreement with an independent derivation performed by Sutton and Doub, who used the probability density method to interpret the CSDNA experiment, but differ from other expressions that have appeared in the literature

  14. Methodology for setting the reference levels in the measurements of the dose rate absorbed in air due to the environmental gamma radiation

    International Nuclear Information System (INIS)

    Dominguez Ley, Orlando; Capote Ferrera, Eduardo; Caveda Ramos, Celia; Alonso Abad, Dolores

    2008-01-01

    Full text: The methodology for setting the reference levels for measurements of the gamma dose rate absorbed in air is described. The registration level was obtained using statistical methods. To set the alarm levels, it was necessary to begin from a certain affectation level, which activates the investigation operation mode when it is reached. It was then necessary to transform this affectation level into values of the indicators selected to signal an alarm in the network, allowing direct comparison and, at the same time, greater operability. The affectation level was taken as an effective dose of 1 mSv/y, which is the international dose limit for the public. The conversion factors obtained in practice as a consequence of the Chernobyl accident were adopted, converting the value of annual effective dose into values of effective dose rate in air. These factors are the most important in our work, since the main task of the National Network of Environmental Radiological Surveillance of the Republic of Cuba is detecting accidents with regional-scale effects, and that accident is precisely an example of pollution at this scale. The alarm level setting was based on the results obtained in the first year after the Chernobyl accident. For this purpose, some transformations were applied. In the final results, a correction factor was introduced depending on the season of the year in which the measurement was made, to take into account the influence of different meteorological events on this indicator. (author)
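
    The first conversion step, turning the 1 mSv/y affectation level into an air dose-rate indicator, can be sketched as back-of-the-envelope arithmetic; this ignores the empirical Chernobyl-derived factors and seasonal corrections the network actually uses:

    ```python
    # Illustrative conversion of the 1 mSv/y public dose limit into an
    # ambient dose-rate indicator (continuous exposure assumed). The real
    # alarm levels apply empirical Chernobyl-derived conversion factors and
    # seasonal corrections not reproduced here.
    HOURS_PER_YEAR = 365.25 * 24               # ~8766 h

    annual_limit_mSv = 1.0
    dose_rate_uSv_per_h = annual_limit_mSv * 1000.0 / HOURS_PER_YEAR
    print(round(dose_rate_uSv_per_h, 3))       # 0.114 uSv/h above background
    ```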

  15. The Daily Events and Emotions of Master's-Level Family Therapy Trainees in Off-Campus Practicum Settings

    Science.gov (United States)

    Edwards, Todd M.; Patterson, Jo Ellen

    2012-01-01

    The Day Reconstruction Method (DRM) was used to assess the daily events and emotions of one program's master's-level family therapy trainees in off-campus practicum settings. This study examines the DRM reports of 35 family therapy trainees in the second year of their master's program in marriage and family therapy. Four themes emerged from the…

  16. Novel room-temperature-setting phosphate ceramics for stabilizing combustion products and low-level mixed wastes

    International Nuclear Information System (INIS)

    Wagh, A.S.; Singh, D.

    1994-01-01

    Argonne National Laboratory, with support from the Office of Technology in the US Department of Energy (DOE), has developed a new process employing novel, chemically bonded ceramic materials to stabilize secondary waste streams. Such waste streams result from the thermal processes used to stabilize low-level, mixed wastes. The process will help the electric power industry treat its combustion and low-level mixed wastes. The ceramic materials are strong, dense, leach-resistant, and inexpensive to fabricate. The room-temperature-setting process allows stabilization of volatile components containing lead, mercury, cadmium, chromium, and nickel. The process also provides effective stabilization of fossil fuel combustion products. It is most suitable for treating fly and bottom ashes

  17. Nine-phase hex-tuple inverter for five-level output based on double carrier PWM technique

    DEFF Research Database (Denmark)

    Padmanaban, S.; Bhaskar, M.S.; Blaabjerg, F.

    2016-01-01

    This work articulates a double-carrier-based five-level pulse-width modulation for a nine-phase hex-tuple inverter AC drive. A set of standard three-phase voltage source inverters (VSI) with slight modification is used for framing the nine-phase AC drive. In particular the VSI is packed with one bidirectiona...
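
    The double-carrier comparison underlying such schemes can be sketched for a single pair of inverter legs; the carrier frequency and modulation index below are illustrative, not the paper's drive parameters:

    ```python
    import numpy as np

    # Sketch of double-carrier (level-shifted) PWM: each leg compares its
    # sine reference against two stacked triangular carriers, giving a
    # three-level pole voltage; the difference of two legs yields a
    # five-level output waveform.
    t = np.linspace(0.0, 0.02, 20000, endpoint=False)  # one 50 Hz cycle
    f_ref, f_carr, m = 50.0, 2000.0, 0.9

    tri = 2.0 * np.abs((t * f_carr) % 1.0 - 0.5)       # triangle in [0, 1]
    carr_up, carr_dn = tri, tri - 1.0                  # stacked carriers

    def pole(theta):
        ref = m * np.sin(2 * np.pi * f_ref * t + theta)
        return np.where(ref > carr_up, 1.0,
                        np.where(ref < carr_dn, -1.0, 0.0))

    v_line = pole(0.0) - pole(-2 * np.pi / 3)          # two legs, 120 deg apart
    levels = np.unique(v_line)                         # five distinct levels
    ```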

  18. Segmenting the Parotid Gland using Registration and Level Set Methods

    DEFF Research Database (Denmark)

    Hollensen, Christian; Hansen, Mads Fogtmann; Højgaard, Liselotte

    . The method was evaluated on a test set consisting of 8 corresponding data sets. The attained total volume Dice coefficient and mean Hausdorff distance were 0.61 ± 0.20 and 15.6 ± 7.4 mm, respectively. The method has improvement potential which could be exploited in order for clinical introduction....

  19. A stochastic approach to the derivation of exemption and clearance levels

    International Nuclear Information System (INIS)

    Deckert, A.

    1997-01-01

    Deciding what clearance levels are appropriate for a particular waste stream inherently involves a number of uncertainties. Some of these uncertainties can be quantified using stochastic modeling techniques, which can aid the process of decision making. In this presentation the German approach to dealing with the uncertainties involved in setting clearance levels is addressed. (author)
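
    A stochastic derivation of this kind can be sketched with a toy Monte Carlo model; the dose-factor distribution and all parameters below are invented for illustration, not the German values:

    ```python
    import random
    import statistics

    # Toy Monte Carlo treatment of the uncertainty in a clearance level:
    # the dose per unit specific activity (uSv/y per Bq/g) is sampled from
    # an assumed lognormal scenario distribution, and the clearance level
    # is set so that the 95th-percentile dose stays below an assumed
    # 10 uSv/y triviality criterion.
    random.seed(42)
    CRITERION_USV = 10.0
    samples = [random.lognormvariate(0.0, 0.8) for _ in range(100_000)]
    dose_factor_p95 = statistics.quantiles(samples, n=100)[94]  # 95th pct
    clearance_Bq_per_g = CRITERION_USV / dose_factor_p95
    ```

    The decision-aiding point is that the clearance level is tied to a chosen percentile of the uncertainty distribution rather than to a single deterministic dose factor.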

  20. Transport equations, Level Set and Eulerian mechanics. Application to fluid-structure coupling

    International Nuclear Information System (INIS)

    Maitre, E.

    2008-11-01

    My work was devoted to the numerical analysis of non-linear elliptic-parabolic equations, to the neutron transport equation and to the simulation of fabric draping. More recently I developed an Eulerian method based on a level set formulation of the immersed boundary method to deal with fluid-structure coupling problems arising in bio-mechanics. Some of the more efficient algorithms to solve the neutron transport equation make use of the splitting of the transport operator taking into account its characteristics. In the present work we introduced a new algorithm based on this splitting and an adaptation of minimal residual methods to the infinite-dimensional case. We present the case where the velocity space is of dimension 1 (slab geometry) and 2 (plane geometry) because the splitting is simpler in the former
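
    The level set transport step at the heart of such Eulerian methods can be sketched in one dimension with a first-order upwind scheme; the grid, velocity, and time step below are illustrative:

    ```python
    import numpy as np

    # Minimal sketch of the level set transport step phi_t + u * phi_x = 0,
    # advanced with a first-order upwind scheme. The zero crossing of phi
    # (the "interface") is carried along by the velocity u > 0.
    n, u, dt = 400, 0.5, 0.001
    x = np.linspace(0.0, 1.0, n)
    dx = x[1] - x[0]
    phi = x - 0.3                      # signed distance, interface at x = 0.3

    steps = 400                        # total time 0.4: interface moves 0.2
    for _ in range(steps):
        # upwind difference for u > 0 (information comes from the left)
        phi[1:] = phi[1:] - u * dt / dx * (phi[1:] - phi[:-1])

    interface = x[np.argmin(np.abs(phi))]
    print(round(interface, 2))         # 0.5
    ```

    In the fluid-structure setting the constant u is replaced by the computed fluid velocity field, but the advection of the level set function is the same mechanism.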

  1. ANALYSIS DATA SETS USING HYBRID TECHNIQUES APPLIED ARTIFICIAL INTELLIGENCE BASED PRODUCTION SYSTEMS INTEGRATED DESIGN

    Directory of Open Access Journals (Sweden)

    Daniel-Petru GHENCEA

    2017-06-01

    Full Text Available The paper proposes a prediction model of spindle behavior from the point of view of thermal deformations and vibration levels, by highlighting and processing the characteristic equations. A model analysis for shafts with similar electro-mechanical characteristics can be achieved using a hybrid analysis based on artificial intelligence (genetic algorithms - artificial neural networks - fuzzy logic). The paper presents a prediction mode for obtaining a valid range of values for spindles with similar characteristics, based on data sets measured from a few spindle tests, without additional measurements being required. Extracting polynomial functions from the graphs resulting from simultaneous measurements, and predicting the dynamics of the two features with a multi-objective criterion, are the main advantages of this method.
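
    The polynomial-extraction step can be sketched with an ordinary least-squares fit; the deformation curve below is synthetic, not measured spindle data, and the quadratic form is an assumption:

    ```python
    import numpy as np

    # Sketch of the "characteristic equation" step: fit a low-order
    # polynomial to a measured thermal-deformation curve so that spindles
    # with similar electro-mechanical characteristics can be compared and
    # their behavior predicted at unmeasured speeds.
    speed = np.linspace(1000, 12000, 30)                 # rpm test points
    deform_um = 2e-8 * speed**2 + 1e-4 * speed + 0.5     # synthetic curve

    coeffs = np.polyfit(speed, deform_um, deg=2)         # characteristic poly
    predicted = np.polyval(coeffs, 8000.0)               # prediction at 8000 rpm
    ```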

  2. Setting the stage for master's level success

    Science.gov (United States)

    Roberts, Donna

    Comprehensive reading, writing, research, and study skills play a critical role in a graduate student's success and ability to contribute to a field of study effectively. The literature indicated a need to support graduate student success in the areas of mentoring and navigation, as well as research and writing. The purpose of this two-phase, mixed-methods explanatory study was to examine factors that characterize student success at the Master's level in the fields of education, sociology and social work. The study was grounded in a transformational learning framework which focused on three levels of learning: technical knowledge, practical or communicative knowledge, and emancipatory knowledge. The study included two data collection points. Phase one consisted of a Master's Level Success questionnaire that was sent via Qualtrics to graduate-level students at three colleges and universities in the Central Valley of California: a California State University campus, a University of California campus, and a private college campus. The results of the chi-square tests indicated that seven questionnaire items were significant, with p values less than .05. Phase two of the data collection included semi-structured interviews, from which three themes emerged using Dedoose software: (1) the need for more language and writing support at the Master's level, (2) the need for mentoring, especially for second-language learners, and (3) utilizing the strong influence of faculty in student success. It is recommended that institutions continually assess and strengthen their programs to meet the full range of learners and to support students to degree completion.

  3. Sputtered Encapsulation as Wafer Level Packaging for Isolatable MEMS Devices: A Technique Demonstrated on a Capacitive Accelerometer

    Directory of Open Access Journals (Sweden)

    Azrul Azlan Hamzah

    2008-11-01

    Full Text Available This paper discusses sputtered silicon encapsulation as a wafer level packaging approach for isolatable MEMS devices. Devices such as accelerometers, RF switches, inductors, and filters that do not require interaction with the surroundings to function could thus be fully encapsulated at the wafer level after fabrication. A MEMSTech 50g capacitive accelerometer was used to demonstrate the sputtered encapsulation technique. Encapsulation with a very uniform surface profile was achieved using spin-on glass (SOG) as a sacrificial layer, SU-8 as base layer, RF sputtered silicon as main structural layer, eutectic gold-silicon as seal layer, and liquid crystal polymer (LCP) as outer encapsulant layer. SEM inspection and capacitance tests indicated that the movable elements were released after encapsulation. A nanoindentation test confirmed that the encapsulated device is sufficiently robust to withstand a transfer molding process. Thus, an encapsulation technique that is robust, CMOS compatible, and economical has been successfully developed for packaging isolatable MEMS devices at the wafer level.

  4. Reducing surgical levels by paraspinal mapping and diffusion tensor imaging techniques in lumbar spinal stenosis.

    Science.gov (United States)

    Chen, Hua-Biao; Wan, Qi; Xu, Qi-Feng; Chen, Yi; Bai, Bo

    2016-04-25

    Correlating symptoms and physical examination findings with surgical levels based on common imaging results is not reliable. In patients who show no concordance between radiological findings and clinical symptoms, the surgical levels determined by conventional magnetic resonance imaging (MRI) and neurogenic examination (NE) may lead to more extensive surgery and significant complications. We aimed to confirm whether the use of diffusion tensor imaging (DTI) and paraspinal mapping (PM) techniques can further prevent the occurrence of false positives with conventional MRI, distinguish which levels of cauda equina and/or nerve root lesions seen on MRI are clinically relevant, and determine and reduce the decompression levels for lumbar spinal stenosis compared with MRI + NE, while ensuring or improving surgical outcomes. We compared the data between patients who underwent MRI + (PM or DTI) and patients who underwent conventional MRI + NE to determine levels of decompression for the treatment of lumbar spinal stenosis. Outcome measures were assessed at 2 weeks, 3 months, 6 months, and 12 months postoperatively. One hundred fourteen patients (59 in the control group, 54 in the experimental group) underwent decompression. The levels of decompression determined by MRI + (PM or DTI) in the experimental group were significantly fewer than those determined by MRI + NE in the control group (p = 0.000). The surgical time, blood loss, and surgical transfusion were significantly less in the experimental group (p = 0.001, p = 0.011, p = 0.001, respectively). There were no differences in improvement of the visual analog scale back and leg pain (VAS-BP, VAS-LP) scores and Oswestry Disability Index (ODI) scores at 2 weeks, 3 months, 6 months, and 12 months after operation between the experimental and control groups. MRI + (PM or DTI) showed clear benefits over MRI + NE in determining the decompression levels for lumbar spinal stenosis. In patients with lumbar spinal

  5. Axiomatic set theory

    CERN Document Server

    Suppes, Patrick

    1972-01-01

    This clear and well-developed approach to axiomatic set theory is geared toward upper-level undergraduates and graduate students. It examines the basic paradoxes and history of set theory and advanced topics such as relations and functions, equipollence, finite sets and cardinal numbers, rational and real numbers, and other subjects. 1960 edition.

  6. A Dataset and a Technique for Generalized Nuclear Segmentation for Computational Pathology.

    Science.gov (United States)

    Kumar, Neeraj; Verma, Ruchika; Sharma, Sanuj; Bhargava, Surabhi; Vahadane, Abhishek; Sethi, Amit

    2017-07-01

    Nuclear segmentation in digital microscopic tissue images can enable extraction of high-quality features for nuclear morphometrics and other analysis in computational pathology. Conventional image processing techniques, such as Otsu thresholding and watershed segmentation, do not work effectively on challenging cases, such as chromatin-sparse and crowded nuclei. In contrast, machine learning-based segmentation can generalize across various nuclear appearances. However, training machine learning algorithms requires data sets of images, in which a vast number of nuclei have been annotated. Publicly accessible and annotated data sets, along with widely agreed upon metrics to compare techniques, have catalyzed tremendous innovation and progress on other image classification problems, particularly in object recognition. Inspired by their success, we introduce a large publicly accessible data set of hematoxylin and eosin (H&E)-stained tissue images with more than 21000 painstakingly annotated nuclear boundaries, whose quality was validated by a medical doctor. Because our data set is taken from multiple hospitals and includes a diversity of nuclear appearances from several patients, disease states, and organs, techniques trained on it are likely to generalize well and work right out-of-the-box on other H&E-stained images. We also propose a new metric to evaluate nuclear segmentation results that penalizes object- and pixel-level errors in a unified manner, unlike previous metrics that penalize only one type of error. We also propose a segmentation technique based on deep learning that lays a special emphasis on identifying the nuclear boundaries, including those between the touching or overlapping nuclei, and works well on a diverse set of test images.
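
    The proposed metric of this work is widely known as the Aggregated Jaccard Index; a simplified sketch of how such a unified object- and pixel-level score can be computed (not the authors' reference implementation) is:

    ```python
    import numpy as np

    # Simplified aggregated Jaccard-style index: each ground-truth nucleus
    # is matched to the best-overlapping predicted nucleus, intersections
    # and unions are summed globally, and unmatched predicted objects are
    # added to the union as pure error, so object-level misses and
    # pixel-level mistakes both lower the single score.
    def aggregated_jaccard(gt, pred):
        inter_sum, union_sum = 0, 0
        used = set()
        for g in np.unique(gt)[1:]:                 # skip background label 0
            g_mask = gt == g
            best_iou, best_p = 0.0, None
            for p in np.unique(pred)[1:]:
                p_mask = pred == p
                inter = np.logical_and(g_mask, p_mask).sum()
                union = np.logical_or(g_mask, p_mask).sum()
                if union and inter / union > best_iou:
                    best_iou, best_p = inter / union, p
            if best_p is None:                      # missed nucleus
                union_sum += g_mask.sum()
            else:
                p_mask = pred == best_p
                inter_sum += np.logical_and(g_mask, p_mask).sum()
                union_sum += np.logical_or(g_mask, p_mask).sum()
                used.add(best_p)
        for p in np.unique(pred)[1:]:               # false-positive objects
            if p not in used:
                union_sum += (pred == p).sum()
        return inter_sum / union_sum if union_sum else 1.0

    gt = np.array([[1, 1, 0], [0, 0, 0], [0, 2, 2]])
    perfect = aggregated_jaccard(gt, gt)            # 1.0 for a perfect match
    ```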

  7. Thresholding: A Pixel-Level Image Processing Methodology Preprocessing Technique for an OCR System for the Brahmi Script

    Directory of Open Access Journals (Sweden)

    H. K. Anasuya Devi

    2006-12-01

    Full Text Available In this paper we study the methodology employed for preprocessing archaeological images. We present the various algorithms used in the low-level processing stage of image analysis for an Optical Character Recognition System for the Brahmi Script. The image preprocessing technique covered in this paper is thresholding. We also analyze the results obtained by the pixel-level processing algorithms.
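
    Otsu's method is the classic global choice for such pixel-level thresholding; a compact sketch on a synthetic bimodal "inscription" image (the paper's actual algorithms may differ) is:

    ```python
    import numpy as np

    # Otsu threshold: choose the gray level that maximizes the between-class
    # variance of the foreground/background split, a standard pixel-level
    # binarization step in OCR preprocessing.
    def otsu_threshold(img):
        hist = np.bincount(img.ravel(), minlength=256).astype(float)
        prob = hist / hist.sum()
        best_t, best_var = 0, 0.0
        for t in range(1, 256):
            w0, w1 = prob[:t].sum(), prob[t:].sum()
            if w0 == 0.0 or w1 == 0.0:
                continue
            mu0 = (np.arange(t) * prob[:t]).sum() / w0
            mu1 = (np.arange(t, 256) * prob[t:]).sum() / w1
            var_between = w0 * w1 * (mu0 - mu1) ** 2
            if var_between > best_var:
                best_var, best_t = var_between, t
        return best_t

    # Synthetic bimodal image: dark glyph block on a light background.
    rng = np.random.default_rng(0)
    img = rng.normal(200, 10, (64, 64))
    img[16:48, 16:48] = rng.normal(60, 10, (32, 32))
    img = np.clip(img, 0, 255).astype(np.uint8)

    t = otsu_threshold(img)
    binary = img < t                   # True where "ink"
    ```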

  8. Measurement technique developments for LBE flows

    Energy Technology Data Exchange (ETDEWEB)

    Buchenau, D., E-mail: d.buchenau@fzd.de [Forschungszentrum Dresden-Rossendorf (FZD), 01314 Dresden (Germany); Eckert, S.; Gerbeth, G. [Forschungszentrum Dresden-Rossendorf (FZD), 01314 Dresden (Germany); Stieglitz, R. [Karlsruhe Institute of Technology (KIT), 76344 Eggenstein-Leopoldshafen (Germany); Dierckx, M. [SCK-CEN, Belgian Nuclear Research Centre, 2400 Mol (Belgium)

    2011-08-31

    We report on the development of measurement techniques for flows in lead-bismuth eutectic alloys (LBE). This paper covers the test results of newly developed contactless flow rate sensors as well as the development and test of the LIDAR technique for operational free surface level detection. The flow rate sensors are based on the flow-induced disturbance of an externally applied AC magnetic field which manifests itself by a modified amplitude or a modified phase of the AC field. Another concept of a force-free contactless flow meter uses a single cylindrical permanent magnet. The electromagnetic torque on the magnet caused by the liquid metal flow sets the magnet into rotation. The operation of those sensors has been demonstrated at liquid metal test loops for which comparative flow rate measurements are available, as well as at the LBE loops THESYS at KIT and WEBEXPIR at SCK-CEN. For the level detection a commercial LIDAR system was successfully tested at the WEBEXPIR facility in Mol and the THEADES loop in Karlsruhe.

  9. Robust nuclei segmentation in cyto-histopathological images using statistical level set approach with topology preserving constraint

    Science.gov (United States)

    Taheri, Shaghayegh; Fevens, Thomas; Bui, Tien D.

    2017-02-01

    Computerized assessments for diagnosis or malignancy grading of cyto-histopathological specimens have drawn increased attention in the field of digital pathology. Automatic segmentation of cell nuclei is a fundamental step in such automated systems. Despite considerable research, nuclei segmentation remains a challenging task due to noise, nonuniform illumination, and, most importantly in 2D projection images, overlapping and touching nuclei. In most published approaches, nuclei refinement is a post-processing step after segmentation, which usually refers to the task of detaching aggregated nuclei or merging over-segmented nuclei. In this work, we present a novel segmentation technique which effectively addresses the problem of individually segmenting touching or overlapping cell nuclei during the segmentation process. The proposed framework is a region-based segmentation method consisting of three major modules: (i) the image is passed through a color deconvolution step to extract the desired stains; (ii) the generalized fast radial symmetry (GFRS) transform is applied to the image, followed by non-maxima suppression, to specify the initial seed points for nuclei and their corresponding GFRS ellipses, which are interpreted as the initial nuclei borders for segmentation; (iii) finally, these initial border curves are evolved using a statistical level-set approach with topology-preserving criteria, so that segmentation and separation of nuclei are performed at the same time. The proposed method is evaluated using hematoxylin and eosin (H&E)-stained and fluorescence-stained images, with qualitative and quantitative analyses showing that the method outperforms thresholding and watershed segmentation approaches.
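
The region-based level-set idea can be sketched with a bare-bones two-phase Chan-Vese-style evolution (not the authors' full framework with GFRS seeding and topology preservation): an implicit function is pushed up wherever a pixel matches the inside mean better than the outside mean, with crude smoothing standing in for the curvature term.

```python
import numpy as np

def chan_vese(image, iters=100, dt=0.5):
    """Bare-bones two-phase region competition on an implicit function phi."""
    h, w = image.shape
    yy, xx = np.mgrid[0:h, 0:w]
    # Initial contour: a centered circle (phi > 0 inside).
    phi = min(h, w) / 3.0 - np.hypot(yy - h / 2.0, xx - w / 2.0)
    for _ in range(iters):
        inside = phi > 0
        c1 = image[inside].mean() if inside.any() else 0.0       # mean inside
        c2 = image[~inside].mean() if (~inside).any() else 0.0   # mean outside
        # Move phi up where the pixel matches the inside mean better.
        force = (image - c2) ** 2 - (image - c1) ** 2
        phi += dt * force / (np.abs(force).max() + 1e-12)
        # Crude curvature-like regularization: local averaging.
        phi = 0.2 * (phi + np.roll(phi, 1, 0) + np.roll(phi, -1, 0)
                     + np.roll(phi, 1, 1) + np.roll(phi, -1, 1))
    return phi > 0

# Toy image: a bright square "nucleus" on a darker, noisy background.
rng = np.random.default_rng(1)
img = rng.normal(0.2, 0.02, (64, 64))
img[20:44, 20:44] += 0.6
seg = chan_vese(img)
```

The initial circle relaxes onto the bright square: its interior keeps a positive phi while the background is driven negative.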

  10. Comparing of goal setting strategy with group education method to increase physical activity level: A randomized trial

    Directory of Open Access Journals (Sweden)

    Nasrin Jiryaee

    2015-01-01

    Background: Designing an intervention to increase physical activity is important and should be based on the resources of the health care setting and be acceptable to the subject group. This study was designed to assess and compare the effects of a goal-setting strategy and a group education method on increasing the physical activity of mothers of children aged 1 to 5. Materials and Methods: Mothers who had at least one child aged 1-5 years were randomized into two groups. The effects of (1) the goal-setting strategy and (2) the group education method on increasing physical activity were assessed and compared 1 month and 3 months after the intervention. Weight, height, body mass index (BMI), waist and hip circumference, and well-being were also compared between the two groups before and after the intervention. Results: Physical activity level increased significantly after the intervention in the goal-setting group and differed significantly between the two groups after the intervention (P < 0.05). BMI, waist circumference, hip circumference, and well-being score were significantly different in the goal-setting group after the intervention. In the group education method, only the well-being score improved significantly (P < 0.05). Conclusion: Our study demonstrated the effects of using the goal-setting strategy to boost physical activity, improving well-being and decreasing BMI and waist and hip circumference.

  11. Separation Techniques for Uranium and Plutonium at Trace Levels for the Thermal Ionization Mass Spectrometric Determination

    International Nuclear Information System (INIS)

    Suh, M. Y.; Han, S. H.; Kim, J. G.; Park, Y. J.; Kim, W. H.

    2005-12-01

    This report describes the state of the art and the progress of the chemical separation and purification techniques required for the thermal ionization mass spectrometric determination of uranium and plutonium in environmental samples at trace or ultratrace levels. Various techniques for the separation of uranium and plutonium, such as precipitation, solvent extraction, extraction chromatography, and ion exchange chromatography, were evaluated. Sample preparation methods and dissolution techniques for environmental samples are also discussed. In particular, both extraction chromatographic and anion exchange chromatographic procedures for uranium and plutonium in environmental samples, such as soil, sediment, plant, seawater, urine, and bone ash, are reviewed in detail in order to propose suitable methods for the separation and purification of uranium and plutonium from safeguards environmental or swipe samples. A survey of the IAEA strengthened safeguards system, the clean room facility of the IAEA's NWAL (Network of Analytical Laboratories), and the analytical techniques for safeguards environmental samples is also included.

  13. Effect of different anesthesia techniques on the serum brain-derived neurotrophic factor (BDNF) levels.

    Science.gov (United States)

    Ozer, A B; Demirel, I; Erhan, O L; Firdolas, F; Ustundag, B

    2015-10-01

    Serum brain-derived neurotrophic factor (BDNF) levels are associated with neurotransmission and cognitive functions. The goal of this study was to examine the effect of general anesthesia on BDNF levels, and to determine whether this effect was related to the surgical stress response. The study included 50 male patients, aged 20-40, who were scheduled for inguinoscrotal surgery and were in the ASA I-II risk group. The patients were divided into two groups according to the anesthesia technique used: general (GA) and spinal (SA). To measure serum BDNF, cortisol, insulin and glucose levels, blood samples were taken at four time points: before and after anesthesia, at the end of surgery, and before transfer from the recovery room. Serum BDNF levels were significantly lower in the general anesthesia group, and no correlation was found between BDNF and the stress hormones. Our findings suggest that general anesthetics affect serum BDNF levels independently of the stress response. In the future, BDNF could serve as a biochemical parameter of anesthesia level, but studies with a greater scope should be carried out to establish the relationship between anesthesia and neurotrophins.

  14. A shimming technique for improvement of the spectral performance of APS Undulator A

    International Nuclear Information System (INIS)

    Vasserman, I.

    1996-01-01

    The performance of insertion devices would approach the ultimate level if a proper set of techniques could be developed to correct magnetic field imperfections. It has been shown experimentally that the measured radiation characteristics of a magnetically fine-tuned insertion device are very close to those calculated for an ideal device. There are different techniques for the correction of magnetic field errors. The one used most often is a shimming technique capable of correcting both integrated and local field errors. In this note, some specific results of a shimming technique applied to APS insertion devices are presented. This technique uses two types of shims: one for trajectory corrections and one for phase corrections. It has been demonstrated that trajectory shims can bring the rms phase errors to the level of 5 degrees, and that a subsequent shimming step, in which only phase shims are applied, brings the rms phase errors as low as 1.5 degrees.

  15. Rough Set Approach to Incomplete Multiscale Information System

    Science.gov (United States)

    Yang, Xibei; Qi, Yong; Yu, Dongjun; Yu, Hualong; Song, Xiaoning; Yang, Jingyu

    2014-01-01

    A multiscale information system is a new knowledge representation system for expressing knowledge at different levels of granulation. In this paper, by considering the unknown values that are ubiquitous in real-world applications, the incomplete multiscale information system is investigated for the first time. The descriptor technique is employed to construct rough sets at different scales for analyzing hierarchically structured data. The problem of unravelling decision rules at different scales is also addressed. Finally, reduct descriptors are formulated to simplify the decision rules that can be derived at different scales. Some numerical examples are employed to substantiate the conceptual arguments. PMID:25276852
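
The rough-set machinery underlying this record can be sketched in a few lines: at a given scale, the attribute values induce indiscernibility classes, and a target concept is bracketed by its lower approximation (objects certainly in it) and upper approximation (objects possibly in it). The universe and attribute values below are invented for illustration:

```python
def approximations(universe, attrs, target):
    """Lower/upper approximation of `target` under the indiscernibility
    classes induced by the attribute values in `attrs`."""
    classes = {}
    for x in universe:
        classes.setdefault(attrs[x], set()).add(x)
    lower, upper = set(), set()
    for cls in classes.values():
        if cls <= target:        # wholly contained: certainly in the concept
            lower |= cls
        if cls & target:         # overlapping: possibly in the concept
            upper |= cls
    return lower, upper

# At a coarse scale only one attribute is observable, so objects 1 and 2,
# 3 and 4, and 5 and 6 are pairwise indiscernible.
U = {1, 2, 3, 4, 5, 6}
coarse = {1: ('a',), 2: ('a',), 3: ('b',), 4: ('b',), 5: ('c',), 6: ('c',)}
target = {1, 2, 3}
lo, up = approximations(U, coarse, target)
```

Here the class {3, 4} straddles the target, so object 3 appears only in the upper approximation; refining the scale (adding attributes) shrinks the gap between the two approximations.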

  16. Social Context of First Birth Timing in a Rapidly Changing Rural Setting

    Science.gov (United States)

    Ghimire, Dirgha J.

    2016-01-01

    This article examines the influence of social context on the rate of first birth. Drawing on socialization models, I develop a theoretical framework to explain how different aspects of social context (i.e., neighbors) may affect the rate of first birth. Neighbors, who in the study setting comprise individuals’ immediate social context, have an important influence on the rate of first birth. To test my hypotheses, I leverage a setting, measures, and analytical techniques designed to study the impact of macro-level social contexts on micro-level individual behavior. The results show that neighbors’ age at first birth, travel to the capital city, and media exposure tend to reduce the first birth rate, while neighbors’ non-family work experience increases it. These effects are independent of neighborhood characteristics and are robust against several key variations in model specification. PMID:27886737

  17. The Relationship Between Biomechanical Indicators of the Snatch Technique and Female Weightlifters' Levels

    Directory of Open Access Journals (Sweden)

    Szyszka Paulina

    2015-03-01

    Introduction. The snatch is one of the two lifts in Olympic weightlifting: the lifter has to raise the barbell from the platform to directly above the head in one movement. In the literature on biomechanical analysis of weightlifting technique, one can find analyses of parameters such as the barbell trajectory, horizontal displacement, and the angular positions of the joints in the individual phases of the lifter's movement. Many texts concern female and male lifters taking part in World or European Championships. The parameters of the best competitors are outlined, mostly those who finish in the top five places in competition; most concern male lifters, and fewer concern female lifters. An aspect overlooked in the literature is how the indicators of the snatch technique practiced by female lifters vary with their results. Material and methods. Snatch attempts recorded during the World Championships were used in the research; the videos were those used by judges to establish the maximum weights lifted by the female lifters. The attempts were recorded by two cameras and later digitally processed with the APAS 2000 system. Barbell parameters (maximum and average speed of the bar) and parameters of the lifter-barbell system (horizontal displacement of the barbell weights and height of elevation) were assessed. Results. The analysed attempts show a margin of error for the measurement of the average barbell speed of 0.03 m/s. The difference in maximum speed between the analysed attempts is 15%. The height of clearance of the first-placed female lifter's barbell was 12.7 cm, and 30 cm for the last-placed. Conclusions. The sporting level of female weightlifters influences the analysed biomechanical indicators of the snatch. Those indicators, which are similar in the case of both the World Championship winner and the female lifter who came last, may be

  18. An acoustic technique for the determination of liquor level in tanks

    International Nuclear Information System (INIS)

    Watson, J.; Jones, T.L.

    1980-02-01

    The design, development and application of a prototype suitable for the measurement of liquor levels in tanks is described. The technique involves directing an acoustic pulse down a constraining tube to the liquor surface and measuring the time of return of the reflected pulse. Using the equipment it is possible to determine the position of a solid surface with a total error of less than 1 mm. The prototype instrument was used to measure the volume of liquors contained in rectangular slab tanks used for accountancy purposes at Dounreay Nuclear Power Development Establishment. The total error obtained in an individual measurement of volume was less than 0.2 litres (95% confidence limits). The instrument may be used as a replacement for a Pneumercator system in existing installations. (author)
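
The round-trip timing principle behind the instrument can be expressed in a few lines. The speed of sound used here (air at roughly 20 °C) and the tube length are assumptions for illustration; the real instrument would be calibrated in situ:

```python
def liquor_level_mm(round_trip_s, tube_length_mm, speed_mm_per_s=343_000.0):
    """Level = tube length minus the distance to the surface; the pulse
    travels down and back, hence the factor of two."""
    distance_to_surface_mm = speed_mm_per_s * round_trip_s / 2.0
    return tube_length_mm - distance_to_surface_mm

# A surface 500 mm down a 2000 mm tube returns the echo after about 2.9 ms.
round_trip = 2 * 500.0 / 343_000.0
level = liquor_level_mm(round_trip, 2000.0)   # 1500.0 mm of liquor
```

Millimetre-level accuracy, as reported in the record, then hinges on timing resolution and on knowing the effective sound speed in the gas column above the liquor.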

  19. Cooling-load prediction by the combination of rough set theory and an artificial neural-network based on data-fusion technique

    International Nuclear Information System (INIS)

    Hou Zhijian; Lian Zhiwei; Yao Ye; Yuan Xinjian

    2006-01-01

    A novel method integrating rough set (RS) theory and an artificial neural network (ANN), based on a data-fusion technique, is presented to forecast an air-conditioning load. Data fusion is the process of combining data from multiple sensors, or related information, to estimate or predict entity states. In this paper, RS theory is applied to find the factors relevant to the load, which are used as inputs to an artificial neural network to predict the cooling load. To improve the accuracy and enhance the robustness of the load forecasting results, a general load-prediction model synthesizing multiple RS-ANN models (MRAN) is presented, so as to make full use of redundant information. The optimum principle is employed to deduce the weights of each RSAN model. Actual prediction results from a real air-conditioning system show that the MRAN forecasting model is better than the individual RSAN and autoregressive integrated moving average (ARIMA) models, with a relative error within 4%. In addition, the individual RSAN forecasting results are better than those of ARIMA.
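
The fusion step, combining several predictors with weights reflecting their reliability, can be sketched with a simple inverse-MSE weighting. The paper derives its weights from an optimum principle; this scheme and the toy data are only stand-ins for illustration:

```python
import numpy as np

def fuse(predictions, actual):
    """Weight each model by inverse in-sample MSE and combine."""
    preds = np.asarray(predictions, dtype=float)   # shape (n_models, n_samples)
    mse = ((preds - actual) ** 2).mean(axis=1)
    w = (1.0 / mse) / (1.0 / mse).sum()            # weights sum to 1
    return w, w @ preds

actual = np.array([10.0, 12.0, 11.0, 13.0])        # observed cooling loads
good = actual + 0.1                                # accurate forecaster
weak = actual + 1.0                                # biased forecaster
w, combined = fuse([good, weak], actual)
```

The accurate model dominates the weights, so the combined forecast tracks the observations more closely than the weak model alone, which is the point of fusing redundant predictors.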

  20. Daily water level forecasting using wavelet decomposition and artificial intelligence techniques

    Science.gov (United States)

    Seo, Youngmin; Kim, Sungwon; Kisi, Ozgur; Singh, Vijay P.

    2015-01-01

    Reliable water level forecasting for reservoir inflow is essential for reservoir operation. The objective of this paper is to develop and apply two hybrid models for daily water level forecasting and investigate their accuracy. These two hybrid models are the wavelet-based artificial neural network (WANN) and the wavelet-based adaptive neuro-fuzzy inference system (WANFIS). Wavelet decomposition is employed to decompose an input time series into approximation and detail components. The decomposed time series are used as inputs to artificial neural networks (ANN) and an adaptive neuro-fuzzy inference system (ANFIS) for the WANN and WANFIS models, respectively. Based on statistical performance indexes, the WANN and WANFIS models are found to be more efficient than the ANN and ANFIS models, with WANFIS7-sym10 yielding the best performance among all models. It is found that wavelet decomposition improves the accuracy of ANN and ANFIS. This study also evaluates the accuracy of the WANN and WANFIS models for different mother wavelets, including Daubechies, Symmlet, and Coiflet wavelets. Model performance is found to depend on the input sets and mother wavelets, and wavelet decomposition using the mother wavelet db10 can further improve the efficiency of the ANN and ANFIS models. The results indicate that the conjunction of wavelet decomposition and artificial intelligence models can be a useful tool for accurately forecasting daily water levels, yielding better efficiency than conventional forecasting models.
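
The wavelet preprocessing step can be illustrated with a one-level Haar transform in NumPy. The paper uses Daubechies, Symmlet, and Coiflet mother wavelets; Haar is chosen here only because it fits in a few lines, and the series is synthetic. The approximation and detail sub-series are what would be fed to the ANN or ANFIS:

```python
import numpy as np

def haar_dwt(x):
    """One-level orthonormal Haar transform: approximation and detail."""
    x = np.asarray(x, dtype=float)
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)   # low-pass: smooth trend
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)   # high-pass: local fluctuation
    return a, d

def haar_idwt(a, d):
    """Inverse transform (perfect reconstruction)."""
    x = np.empty(2 * len(a))
    x[0::2] = (a + d) / np.sqrt(2.0)
    x[1::2] = (a - d) / np.sqrt(2.0)
    return x

# Synthetic daily water-level series: trend + periodic component + noise.
t = np.arange(64)
level = 10 + 0.02 * t + np.sin(2 * np.pi * t / 16)
level += 0.1 * np.random.default_rng(2).normal(size=64)
a, d = haar_dwt(level)          # sub-series used as ANN / ANFIS inputs
rec = haar_idwt(a, d)
```

Because the transform is invertible, no information is lost; the model simply sees the slow trend and the fast fluctuations as separate, easier-to-learn inputs.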

  1. An Effective Performance Analysis of Machine Learning Techniques for Cardiovascular Disease

    Directory of Open Access Journals (Sweden)

    Vinitha DOMINIC

    2015-03-01

    Machine learning techniques help in deriving hidden knowledge from clinical data, which can be of great benefit to society, for instance by reducing the number of clinical trials required for the precise diagnosis of a disease. Various areas of study are available in the healthcare domain, such as cancer, diabetes, and drugs. This paper focuses on a heart disease dataset and on how machine learning techniques can help in understanding the level of risk associated with heart diseases. Initially, the data is preprocessed; analysis is then done in two stages. In the first stage, feature selection techniques are applied to 13 commonly used attributes, and in the second stage they are applied to 75 attributes related to the anatomic structure of the heart, such as the blood vessels and arteries. Finally, the reduced set of features is validated using an exhaustive list of classifiers. In parallel, a study of the anatomy of the heart is carried out using the identified features, and the characteristics of each class are examined. It is observed that the reduced set of features is anatomically relevant. Thus, it can be concluded that applying machine learning techniques to clinical data is beneficial and necessary.
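
The feature-selection stage can be illustrated with a simple filter method that ranks attributes by absolute Pearson correlation with the class label. The paper does not specify this particular filter, and the data below are synthetic; both are assumptions for illustration:

```python
import numpy as np

def select_top_k(X, y, k):
    """Keep the k features with the largest |Pearson correlation| to y."""
    Xc = X - X.mean(axis=0)
    yc = y - y.mean()
    corr = (Xc * yc[:, None]).sum(axis=0) / np.sqrt(
        (Xc ** 2).sum(axis=0) * (yc ** 2).sum() + 1e-12)
    return np.argsort(-np.abs(corr))[:k]

rng = np.random.default_rng(4)
y = rng.integers(0, 2, 200).astype(float)        # disease present / absent
X = rng.normal(size=(200, 5))                    # five candidate attributes
X[:, 2] += 2 * y                                 # attribute 2 is informative
top = select_top_k(X, y, 2)
```

The informative attribute is ranked first; in the paper's setting the surviving attributes would then be checked for anatomical relevance and validated against a list of classifiers.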

  2. Development of a working set of waste package performance criteria for deepsea disposal of low-level radioactive waste. Final report

    International Nuclear Information System (INIS)

    Columbo, P.; Fuhrmann, M.; Neilson, R.M. Jr; Sailor, V.L.

    1982-11-01

    The United States ocean dumping regulations, developed pursuant to PL 92-532, the Marine Protection, Research, and Sanctuaries Act of 1972, as amended, provide for a general policy of isolation and containment of low-level radioactive waste after disposal into the ocean. In order to determine whether any particular waste packaging system is adequate to meet this general requirement, it is necessary to establish a set of performance criteria against which to evaluate a particular packaging system. These performance criteria must present requirements for the behavior of the waste, in combination with its immobilization agent and outer container, in a deepsea environment. This report presents a working set of waste package performance criteria, and includes a glossary of terms, characteristics of low-level radioactive waste, radioisotopes of importance in low-level radioactive waste, and a summary of the domestic and international regulations which control the ocean disposal of these wastes.

  3. Level Set-Based Topology Optimization for the Design of an Electromagnetic Cloak With Ferrite Material

    DEFF Research Database (Denmark)

    Otomori, Masaki; Yamada, Takayuki; Andkjær, Jacob Anders

    2013-01-01

    This paper presents a structural optimization method for the design of an electromagnetic cloak made of ferrite material. Ferrite materials exhibit a frequency-dependent degree of permeability, due to a magnetic resonance phenomenon that can be altered by changing the magnitude of an externally applied magnetic field. A level set-based topology optimization method incorporating a fictitious interface energy is used to find optimized configurations of the ferrite material. The numerical results demonstrate that the optimization successfully found an appropriate ferrite configuration that functions as an electromagnetic cloak.

  4. Classification of sand samples according to radioactivity content by the use of euclidean and rough sets techniques

    International Nuclear Information System (INIS)

    Abd El-Monsef, M.M.; Kozae, A.M.; Seddeek, M.K.; Medhat, T.; Sharshar, T.; Badran, H.M.

    2004-01-01

    From the geological point of view, the origin and transport of black and normal sands are particularly important. Both black and normal sands reached their present locations along the Mediterranean Sea coast through natural transport processes, and the two types have different radiological properties. This study therefore attempts to use mathematical methods to classify Egyptian sand samples, collected from 42 locations in an area of 40 × 19 km², based on their radioactivity contents. Using all of the information resulting from the experimental measurements of radioactivity contents, as well as some other parameters, can be a time- and effort-consuming task, so eliminating unnecessary attributes is of prime importance. This elimination of the superfluous attributes that cannot affect the decision was carried out, and topological techniques were then applied to classify the information systems resulting from the radioactivity measurements. These techniques were applied in the Euclidean and the quasi-discrete topological cases. While there are some applications of the former case in environmental radioactivity, the use of the quasi-discrete case in so-called rough set information analysis is new in such a study. The mathematical methods are summarized, and the results and their radiological implications are discussed. Overall, the results indicate no radiological anomaly and support the hypothesis previously suggested about the presence of two types of sand in the studied area.

  5. A web-based study of the relationship of duration of insulin pump infusion set use and fasting blood glucose level in adults with type 1 diabetes.

    Science.gov (United States)

    Sampson Perrin, Alysa J; Guzzetta, Russell C; Miller, Kellee M; Foster, Nicole C; Lee, Anna; Lee, Joyce M; Block, Jennifer M; Beck, Roy W

    2015-05-01

    To evaluate the impact of infusion set use duration on glycemic control, we conducted an Internet-based study using the T1D Exchange's online patient community, Glu (myGlu.org). For 14 days, 243 electronically consented adults with type 1 diabetes (T1D) entered online that day's fasting blood glucose (FBG) level, the prior day's total daily insulin (TDI) dose, and whether the infusion set was changed. Mean duration of infusion set use was 3.0 days. Mean FBG level was higher with each successive day of infusion set use, increasing from 126 mg/dL on Day 1 to 133 mg/dL on Day 3 to 147 mg/dL on Day 5 (P < 0.001). TDI dose did not vary with increased duration of infusion set use. Internet-based data collection was used to rapidly conduct the study at low cost. The results indicate that FBG levels increase with each additional day of insulin pump infusion set use.

  6. Low level radioactivity measurements with phoswich detectors using coincident techniques and digital pulse processing analysis.

    Science.gov (United States)

    de la Fuente, R; de Celis, B; del Canto, V; Lumbreras, J M; de Celis Alonso, B; Martín-Martín, A; Gutierrez-Villanueva, J L

    2008-10-01

    A new system has been developed for the detection of low radioactivity levels of fission products and actinides using coincidence techniques. The device combines a phoswich detector for alpha/beta/gamma-ray recognition with a fast digital card for electronic pulse analysis. The phoswich can be used in a coincident mode by identifying the composed signal produced by the simultaneous detection of alpha/beta particles and X-rays/gamma particles. The technique of coincidences with phoswich detectors was proposed recently to verify the Nuclear Test Ban Treaty (NTBT), which established the necessity of monitoring low levels of gaseous fission products produced by underground nuclear explosions. With the device proposed here it is possible to identify the coincidence events and determine the energy and type of coincident particles. The sensitivity of the system has been improved by employing liquid scintillators and a high resolution low energy germanium detector. In this case it is possible to identify simultaneously, by alpha/gamma coincidence, transuranic nuclides present in environmental samples without the necessity of performing radiochemical separation. The minimum detectable activity was estimated to be 0.01 Bq kg⁻¹ for 0.1 kg of soil and a 1000 min counting time.

  8. Does gastric bypass surgery change body weight set point?

    Science.gov (United States)

    Hao, Z; Mumphrey, M B; Morrison, C D; Münzberg, H; Ye, J; Berthoud, H R

    2016-12-01

    The relatively stable body weight during adulthood is attributed to a homeostatic regulatory mechanism residing in the brain which uses feedback from the body to control energy intake and expenditure. This mechanism guarantees that if perturbed up or down by design, body weight will return to pre-perturbation levels, defined as the defended level or set point. The fact that weight re-gain is common after dieting suggests that obese subjects defend a higher level of body weight. Thus, the set point for body weight is flexible and likely determined by the complex interaction of genetic, epigenetic and environmental factors. Unlike dieting, bariatric surgery does a much better job of producing sustained suppression of food intake and body weight, and an intensive search for the underlying mechanisms has started. Although one explanation for this lasting effect, particularly of Roux-en-Y gastric bypass surgery (RYGB), is simple physical restriction due to the invasive surgery, a more exciting explanation is that the surgery physiologically reprograms the body weight defense mechanism. In this non-systematic review, we present behavioral evidence from our own and other studies that defended body weight is lowered after RYGB and sleeve gastrectomy. After these surgeries, rodents return to their preferred lower body weight if over- or underfed for a period of time, and the ability to drastically increase food intake during the anabolic phase strongly argues against the physical restriction hypothesis. However, the underlying mechanisms remain obscure. Although the mechanism involves central leptin and melanocortin signaling pathways, other peripheral signals such as gut hormones and their neural effector pathways likely contribute. Future research using both targeted and non-targeted 'omics' techniques in both humans and rodents as well as modern, genetically targeted, neuronal manipulation techniques in rodents will be necessary.

  9. Generalized image contrast enhancement technique based on the Heinemann contrast discrimination model

    Science.gov (United States)

    Liu, Hong; Nodine, Calvin F.

    1996-07-01

    This paper presents a generalized image contrast enhancement technique, which equalizes the perceived brightness distribution based on the Heinemann contrast discrimination model. It is based on the mathematically proven existence of a unique solution to a nonlinear equation, and is formulated with easily tunable parameters. The model uses a two-step log-log representation of luminance contrast between targets and surround in a luminous background setting. The algorithm consists of two nonlinear gray scale mapping functions that have seven parameters, two of which are adjustable Heinemann constants. Another parameter is the background gray level. The remaining four parameters are nonlinear functions of the gray-level distribution of the given image, and can be uniquely determined once the previous three are set. Tests have been carried out to demonstrate the effectiveness of the algorithm for increasing the overall contrast of radiology images. The traditional histogram equalization can be reinterpreted as an image enhancement technique based on the knowledge of human contrast perception. In fact, it is a special case of the proposed algorithm.
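
Since the record positions traditional histogram equalization as a special case of the proposed algorithm, the standard CDF-remapping form of histogram equalization is shown below for comparison (this is the classical method, not the Heinemann-model algorithm itself):

```python
import numpy as np

def equalize(image, levels=256):
    """Map gray levels through the normalized CDF, flattening the histogram."""
    hist, _ = np.histogram(image, bins=levels, range=(0, levels))
    cdf = np.cumsum(hist).astype(float)
    cdf /= cdf[-1]                                   # normalized CDF in [0, 1]
    lut = np.round(cdf * (levels - 1)).astype(np.uint8)
    return lut[image]                                # lookup-table remapping

# A low-contrast image crowded into [100, 140] spreads over the full range.
rng = np.random.default_rng(3)
img = rng.integers(100, 141, size=(32, 32)).astype(np.uint8)
out = equalize(img)
```

Equalization makes the output gray levels approximately uniform in occupancy; the Heinemann-based method instead equalizes perceived brightness, which is why the classical method falls out as a special case of its mapping functions.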

  10. Modeling Restrained Shrinkage Induced Cracking in Concrete Rings Using the Thick Level Set Approach

    Directory of Open Access Journals (Sweden)

    Rebecca Nakhoul

    2018-03-01

    Modeling restrained shrinkage-induced damage and cracking in concrete is addressed herein. The novel Thick Level Set (TLS) damage growth and crack propagation model is used and adapted by introducing a shrinkage contribution into the formulation. The capacity of the TLS approach to predict damage evolution, crack initiation, and growth triggered by restrained shrinkage in the absence of external loads is evaluated. A study dealing with shrinkage-induced cracking in elliptical concrete rings is presented. Key results, such as the effect of ring oblateness on the stress distribution and on the critical shrinkage strain needed to initiate damage, are highlighted. In addition, crack positions are compared to those observed in experiments and are found to be satisfactory.

  11. Glycated haemoglobin (HbA1c) and fasting plasma glucose relationships in sea-level and high-altitude settings.

    Science.gov (United States)

    Bazo-Alvarez, J C; Quispe, R; Pillay, T D; Bernabé-Ortiz, A; Smeeth, L; Checkley, W; Gilman, R H; Málaga, G; Miranda, J J

    2017-06-01

    Higher haemoglobin levels and differences in glucose metabolism have been reported among high-altitude residents, which may influence the diagnostic performance of HbA1c. This study explores the relationship between HbA1c and fasting plasma glucose (FPG) in populations living at sea level and at an altitude of > 3000 m. Data from 3613 Peruvian adults without a known diagnosis of diabetes from sea-level and high-altitude settings were evaluated. Linear, quadratic and cubic regression models were performed adjusting for potential confounders. Receiver operating characteristic (ROC) curves were constructed and concordance between HbA1c and FPG was assessed using a kappa index. At sea level and high altitude, means were 13.5 and 16.7 g/dl (P > 0.05) for haemoglobin level; 41 and 40 mmol/mol (5.9% and 5.8%; P < 0.01) for HbA1c; and 5.8 and 5.1 mmol/l (105 and 91.3 mg/dl; P < 0.001) for FPG, respectively. The adjusted relationship between HbA1c and FPG was quadratic at sea level and linear at high altitude. Adjusted models showed that, to predict an HbA1c value of 48 mmol/mol (6.5%), the corresponding mean FPG values at sea level and high altitude were 6.6 and 14.8 mmol/l (120 and 266 mg/dl), respectively. An HbA1c cut-off of 48 mmol/mol (6.5%) had a sensitivity for high FPG of 87.3% (95% confidence interval (95% CI) 76.5 to 94.4) at sea level and 40.9% (95% CI 20.7 to 63.6) at high altitude. The relationship between HbA1c and FPG is less clear at high altitude than at sea level. Caution is warranted when using HbA1c to diagnose diabetes mellitus in this setting. © 2017 The Authors. Diabetic Medicine published by John Wiley & Sons Ltd on behalf of Diabetes UK.
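    The modelling step described above, comparing linear and quadratic fits of HbA1c on FPG, can be sketched with numpy.polyfit. The data below are synthetic and purely illustrative; they are not the study's Peruvian cohort data:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical sea-level-like data: HbA1c rises faster at high FPG (curvature)
fpg = rng.uniform(4.0, 10.0, 300)                                # mmol/l
hba1c = 20 + 2.5 * fpg + 0.4 * fpg**2 + rng.normal(0, 1.5, 300)  # mmol/mol

def sse(deg):
    """Residual sum of squares for a polynomial fit of the given degree."""
    coef = np.polyfit(fpg, hba1c, deg)
    resid = hba1c - np.polyval(coef, fpg)
    return float(resid @ resid)

print(sse(1), sse(2))  # the quadratic model fits the curved data better
```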

  12. Effect of liner design, pulsator setting, and vacuum level on bovine teat tissue changes and milking characteristics as measured by ultrasonography

    Directory of Open Access Journals (Sweden)

    Gleeson David E

    2004-05-01

    Full Text Available Friesian-type dairy cows were milked with different machine settings to determine the effect of these settings on teat tissue reaction and on milking characteristics. Three teat-cup liner designs were used with varying upper barrel dimensions (wide-bore WB = 31.6 mm; narrow-bore NB = 21.0 mm; narrow-bore NB1 = 25.0 mm). These liners were tested with alternate and simultaneous pulsation patterns, pulsator ratios (60:40 and 67:33) and three system vacuum levels (40, 44 and 50 kPa). Teat tissue was measured using ultrasonography, before milking and directly after milking. The measurements recorded were teat canal length (TCL), teat diameter (TD), cistern diameter (CD) and teat wall thickness (TWT). Teat tissue changes were similar with a system vacuum level of either 50 kPa (mid-level) or 40 kPa (low-level). Widening the liner upper barrel bore dimension from 21.0 mm (P

  13. A book of set theory

    CERN Document Server

    Pinter, Charles C

    2014-01-01

    Suitable for upper-level undergraduates, this accessible approach to set theory poses rigorous but simple arguments. Each definition is accompanied by commentary that motivates and explains new concepts. Starting with a repetition of the familiar arguments of elementary set theory, the level of abstract thinking gradually rises for a progressive increase in complexity. A historical introduction presents a brief account of the growth of set theory, with special emphasis on problems that led to the development of the various systems of axiomatic set theory. Subsequent chapters explore classes and

  14. Use of communication techniques by Maryland dentists.

    Science.gov (United States)

    Maybury, Catherine; Horowitz, Alice M; Wang, Min Qi; Kleinman, Dushanka V

    2013-12-01

    Health care providers' use of recommended communication techniques can increase patients' adherence to prevention and treatment regimens and improve patient health outcomes. The authors conducted a survey of Maryland dentists to determine the number and type of communication techniques they use on a routine basis. The authors mailed a 30-item questionnaire to a random sample of 1,393 general practice dentists and all 169 members of the Maryland chapter of the American Academy of Pediatric Dentistry. The overall response rate was 38.4 percent. Analysis included descriptive statistics, analysis of variance and ordinary least squares regression analysis to examine the association of dentists' characteristics with the number of communication techniques used; the significance level was set at P < .05. General dentists reported routinely using a mean of 3.6 of the seven basic techniques, whereas pediatric dentists reported using a mean of 8.4 of the 18 techniques overall and 3.8 of the seven basic techniques. General dentists who had taken a communication course outside of dental school were more likely than those who had not to use the 18 techniques. The number of communication techniques that dentists used routinely varied across the 18 techniques and was low for most techniques. Practical implications: professional education is needed both in dental school curricula and continuing education courses to increase use of recommended communication techniques. Specifically, dentists and their team members should consider taking communication skills courses and conducting an overall evaluation of their practices for user friendliness.

  15. Risk assessment techniques for civil aviation security

    Energy Technology Data Exchange (ETDEWEB)

    Tamasi, Galileo, E-mail: g.tamasi@enac.rupa.i [Ente Nazionale per l' Aviazione Civile-Direzione Progetti, Studi e Ricerche, Via di Villa Ricotti, 42, 00161 Roma (Italy); Demichela, Micaela, E-mail: micaela.demichela@polito.i [SAfeR-Centro Studi su Sicurezza, Affidabilita e Rischi, Dipartimento di Scienza dei Materiali e Ingegneria Chimica, Politecnico di Torino, Corso Duca degli Abruzzi, 24, 10129 Torino (Italy)

    2011-08-15

    Following the 9/11 terrorist attacks in New York, a strong economic effort was made to improve and adapt aviation security, both in infrastructure and in airplanes. National and international guidelines were promptly developed with the objective of creating a security management system able to supervise the identification of risks and the definition and optimization of control measures. Risk assessment techniques are thus crucial in this process, since incorrect risk identification and quantification can strongly affect both the security level and the investments needed to reach it. The paper proposes a set of methodologies to qualitatively and quantitatively assess risk in civil aviation security, and a risk assessment process based on the concepts of threats, criticality and vulnerabilities, highlighting their correlation in determining the level of risk. RAMS techniques are applied to the airport security system in order to analyze the protection equipment for critical facilities located air-side, also allowing estimation of the importance of security-improving measures versus their effectiveness.

  16. Risk assessment techniques for civil aviation security

    International Nuclear Information System (INIS)

    Tamasi, Galileo; Demichela, Micaela

    2011-01-01

    Following the 9/11 terrorist attacks in New York, a strong economic effort was made to improve and adapt aviation security, both in infrastructure and in airplanes. National and international guidelines were promptly developed with the objective of creating a security management system able to supervise the identification of risks and the definition and optimization of control measures. Risk assessment techniques are thus crucial in this process, since incorrect risk identification and quantification can strongly affect both the security level and the investments needed to reach it. The paper proposes a set of methodologies to qualitatively and quantitatively assess risk in civil aviation security, and a risk assessment process based on the concepts of threats, criticality and vulnerabilities, highlighting their correlation in determining the level of risk. RAMS techniques are applied to the airport security system in order to analyze the protection equipment for critical facilities located air-side, also allowing estimation of the importance of security-improving measures versus their effectiveness.

  17. Measurement of thermally ablated lesions in sonoelastographic images using level set methods

    Science.gov (United States)

    Castaneda, Benjamin; Tamez-Pena, Jose Gerardo; Zhang, Man; Hoyt, Kenneth; Bylund, Kevin; Christensen, Jared; Saad, Wael; Strang, John; Rubens, Deborah J.; Parker, Kevin J.

    2008-03-01

    The capability of sonoelastography to detect lesions based on elasticity contrast can be applied to monitor the creation of thermally ablated lesions. Currently, segmentation of lesions depicted in sonoelastographic images is performed manually, which can be time consuming and prone to significant intra- and inter-observer variability. This work presents a semi-automated segmentation algorithm for sonoelastographic data. The user starts by planting a seed in the perceived center of the lesion. Fast marching methods use this information to create an initial estimate of the lesion. Subsequently, level set methods refine its final shape by attaching the segmented contour to edges in the image while maintaining smoothness. The algorithm is applied to in vivo sonoelastographic images from twenty-five thermally ablated lesions created in porcine livers. The estimated area is compared to results from manual segmentation and gross pathology images. Results show that the algorithm outperforms manual segmentation in accuracy and in inter- and intra-observer variability. The processing time per image is significantly reduced.
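    The seed-then-refine idea can be illustrated with a toy region-based level-set evolution in NumPy. This is a hedged sketch, not the authors' algorithm: the fast-marching initialization and the edge/smoothness terms are omitted, and a Chan-Vese-style region force is used instead, on a synthetic bright "lesion":

```python
import numpy as np

# Synthetic "lesion": a bright disk on a dark background
n = 64
yy, xx = np.mgrid[:n, :n]
dist = np.hypot(yy - 32, xx - 32)
img = (dist <= 15).astype(float)          # ground-truth lesion as an image

# Seed planted at the perceived lesion centre (radius-5 initial front)
phi = 5.0 - dist                          # level-set function: phi > 0 inside

for _ in range(100):
    inside = phi > 0
    c1 = img[inside].mean()               # mean intensity inside the front
    c2 = img[~inside].mean()              # mean intensity outside
    # Region-based speed: grow where a pixel resembles the inside,
    # shrink where it resembles the outside (smoothness term omitted)
    phi += (img - c2) ** 2 - (img - c1) ** 2

mask = phi > 0
print(mask.sum(), int((dist <= 15).sum()))  # segmented vs true lesion area
```

On this noise-free phantom the front grows from the seed until it matches the bright disk exactly; real sonoelastographic data would additionally need the smoothness and edge-attachment terms the paper describes.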

  18. Advanced quadrature sets and acceleration and preconditioning techniques for the discrete ordinates method in parallel computing environments

    Science.gov (United States)

    Longoni, Gianluca

    In the nuclear science and engineering field, radiation transport calculations play a key role in the design and optimization of nuclear devices. The linear Boltzmann equation describes the angular, energy and spatial variations of the particle or radiation distribution. The discrete ordinates method (SN) is the most widely used technique for solving the linear Boltzmann equation. However, for realistic problems, the memory and computing-time requirements demand the use of supercomputers. This research is devoted to the development of new formulations for the SN method, especially for highly angle-dependent problems, in parallel environments. The present research work addresses two main issues affecting the accuracy and performance of SN transport theory methods: quadrature sets and acceleration techniques. New advanced quadrature techniques, which allow for large numbers of angles with a capability for local angular refinement, have been developed. These techniques have been integrated into the 3-D SN PENTRAN (Parallel Environment Neutral-particle TRANsport) code and applied to highly angle-dependent problems, such as CT-scan devices, which are widely used to obtain detailed 3-D images for industrial/medical applications. In addition, the accurate simulation of core physics and shielding problems with strong heterogeneities and transport effects requires the numerical solution of the transport equation. In general, the convergence rate of the solution methods for the transport equation is reduced for large problems with optically thick regions and scattering ratios approaching unity. To remedy this situation, new acceleration algorithms based on the Even-Parity Simplified SN (EP-SSN) method have been developed. A new stand-alone code system, PENSSn (Parallel Environment Neutral-particle Simplified SN), has been developed based on the EP-SSN method. The code is designed for parallel computing environments with spatial, angular and hybrid (spatial/angular) domain
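    As a concrete, if simplified, example of a quadrature set: the 1-D analogue of an SN set is the Gauss-Legendre rule, N direction cosines and weights that integrate polynomials in the angular variable exactly up to degree 2N-1. This sketch is illustrative only; PENTRAN's 3-D level-symmetric and locally refined sets are considerably more elaborate:

```python
import numpy as np

def sn_quadrature(n):
    """A 1-D S_N quadrature set: n Gauss-Legendre direction cosines mu
    and weights w on [-1, 1]."""
    mu, w = np.polynomial.legendre.leggauss(n)
    return mu, w

mu, w = sn_quadrature(8)      # S8: eight discrete ordinates

# Weights integrate the constant 1 over [-1, 1], so they sum to 2
print(w.sum())

# The set also integrates mu^2 exactly: the integral over [-1, 1] is 2/3
print((w * mu**2).sum())
```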

  19. Interpersonal communication and compliance: The Disrupt-Then-Reframe technique in dyadic influence settings

    NARCIS (Netherlands)

    Fennis, B.M.; Das, H.H.J.; Pruyn, A.Th.H.

    2006-01-01

    Two field experiments examined the impact of the Disrupt-Then-Reframe (DTR) technique on compliance. This recently identified technique consists of a subtle, odd element in a typical scripted request (the disruption) followed by a persuasive phrase (the reframing). The authors argued that its impact

  20. Interpersonal communication and compliance. The disrupt-then-reframe technique in dyadic influence settings

    NARCIS (Netherlands)

    Fennis, B.M.; Das, Enny; Pruyn, Adriaan T.H.

    2006-01-01

    Two field experiments examined the impact of the Disrupt-Then-Reframe (DTR) technique on compliance. This recently identified technique consists of a subtle, odd element in a typical scripted request (the disruption) followed by a persuasive phrase (the reframing). The authors argued that its impact

  1. A Novel Technique for Generating and Observing Chemiluminescence in a Biological Setting

    KAUST Repository

    Büchel, Gabriel E.

    2017-03-10

    Intraoperative imaging techniques have the potential to make surgical interventions safer and more effective; for these reasons, such techniques are quickly moving into the operating room. Here, we present a new approach that utilizes a technique not yet explored for intraoperative imaging: chemiluminescent imaging. This method employs a ruthenium-based chemiluminescent reporter along with a custom-built nebulizing system to produce ex vivo or in vivo images with high signal-to-noise ratios. The ruthenium-based reporter produces light following exposure to an aqueous oxidizing solution and re-reduction within the surrounding tissue. This method has allowed us to detect reporter concentrations as low as 6.9 pmol/cm². In this work, we present a visual guide to our proof-of-concept in vivo studies involving subdermal and intravenous injections in mice. The results suggest that this technology is a promising candidate for further preclinical research and might ultimately become a useful tool in the operating room.

  2. Clinical bacteriology in low-resource settings: today's solutions.

    Science.gov (United States)

    Ombelet, Sien; Ronat, Jean-Baptiste; Walsh, Timothy; Yansouni, Cedric P; Cox, Janneke; Vlieghe, Erika; Martiny, Delphine; Semret, Makeda; Vandenberg, Olivier; Jacobs, Jan

    2018-03-05

    Low-resource settings are disproportionately burdened by infectious diseases and antimicrobial resistance. Good quality clinical bacteriology through a well-functioning reference laboratory network is necessary for effective resistance control, but low-resource settings face infrastructural, technical, and behavioural challenges in the implementation of clinical bacteriology. In this Personal View, we explore what constitutes successful implementation of clinical bacteriology in low-resource settings and describe a framework for implementation that is suitable for general referral hospitals in low-income and middle-income countries with a moderate infrastructure. Most microbiological techniques and equipment are not developed for the specific needs of such settings. Pending the arrival of a new generation of diagnostics for these settings, we suggest focusing on improving, adapting, and implementing conventional, culture-based techniques. Priorities in low-resource settings include harmonised, quality assured, and tropicalised equipment, consumables, and techniques, and rationalised bacterial identification and testing for antimicrobial resistance. Diagnostics should be integrated into clinical care and patient management; clinically relevant specimens must be appropriately selected and prioritised. Open-access training materials and information management tools should be developed. Also important is the need for onsite validation and field adoption of diagnostics in low-resource settings, with considerable shortening of the time between development and implementation of diagnostics. We argue that the implementation of clinical bacteriology in low-resource settings improves patient management, provides valuable surveillance for local antibiotic treatment guidelines and national policies, and supports containment of antimicrobial resistance and the prevention and control of hospital-acquired infections. Copyright © 2018 Elsevier Ltd. All rights reserved.

  3. GSHR, a Web-Based Platform Provides Gene Set-Level Analyses of Hormone Responses in Arabidopsis

    Directory of Open Access Journals (Sweden)

    Xiaojuan Ran

    2018-01-01

    Full Text Available Phytohormones regulate diverse aspects of plant growth and environmental responses. Recent high-throughput technologies have promoted a more comprehensive profiling of genes regulated by different hormones. However, these omics data generally result in large gene lists that make it challenging to interpret the data and extract insights into biological significance. With the rapid accumulation of these large-scale experiments, especially the transcriptomic data available in public databases, a means of using this information to explore the transcriptional networks is needed. Different platforms have different architectures and designs, and even similar studies using the same platform may obtain data with large variances because of the highly dynamic and flexible effects of plant hormones; this makes it difficult to make comparisons across different studies and platforms. Here, we present a web server providing gene set-level analyses of Arabidopsis thaliana hormone responses. GSHR collected 333 RNA-seq and 1,205 microarray datasets from the Gene Expression Omnibus, characterizing transcriptomic changes in Arabidopsis in response to phytohormones including abscisic acid, auxin, brassinosteroids, cytokinins, ethylene, gibberellins, jasmonic acid, salicylic acid, and strigolactones. These data were further processed and organized into 1,368 gene sets regulated by different hormones or hormone-related factors. By comparing input gene lists to these gene sets, GSHR helps to identify gene sets from the input gene list regulated by different phytohormones or related factors. Together, GSHR links prior information regarding transcriptomic changes induced by hormones and related factors to newly generated data and facilitates cross-study and cross-platform comparisons; this helps with the mining of biologically significant information from large-scale datasets. The GSHR is freely available at http://bioinfo.sibs.ac.cn/GSHR/.
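    Comparing an input gene list against a stored gene set is commonly scored with a hypergeometric overlap test; whether GSHR uses exactly this statistic is an assumption here. A standard-library sketch with made-up numbers:

```python
from math import comb

def hypergeom_overlap_p(universe, set_size, list_size, overlap):
    """P(overlap >= observed) when drawing `list_size` genes at random
    from `universe` genes, of which `set_size` belong to the gene set."""
    total = comb(universe, list_size)
    p = sum(comb(set_size, k) * comb(universe - set_size, list_size - k)
            for k in range(overlap, min(set_size, list_size) + 1))
    return p / total

# Hypothetical numbers: 20,000 genes in the genome, a 200-gene
# ABA-response set, a 300-gene input list sharing 30 genes with the set
p = hypergeom_overlap_p(20000, 200, 300, 30)
print(p)   # tiny p-value: such an overlap is very unlikely by chance
```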

  4. High-sensitivity measurements for low-level TRU wastes using advanced passive neutron techniques

    International Nuclear Information System (INIS)

    Menlove, H.O.; Eccleston, G.W.

    1992-01-01

    In recent years, both passive- and active-neutron nondestructive assay (NDA) systems have been used to measure the uranium and plutonium content in 200-L drums. Because of the heterogeneity of the wastes, representative sampling is not possible and NDA methods are preferred over destructive analysis. Active-neutron assay systems are used to measure the fissile isotopes such as 235 U, 239 Pu, and 241 Pu; the isotopic ratios are used to infer the total plutonium content and thus the specific disintegration rate. The active systems include 14-MeV-neutron (DT) generators with delayed-neutron counting, (D,T) generators with the differential die-away technique, and 252 Cf delayed-neutron shufflers. Some passive assay systems (for example, segmented gamma-ray scanners) have used gamma-ray emissions, while others (for example, passive drum counters) used passive-neutron signals. We have developed a new passive-neutron measurement technique to improve the accuracy and sensitivity of the NDA of plutonium scrap and waste. This new 200-L-drum assay system combines the classical NDA method of counting passive-neutron totals and coincidences from plutonium with the new features of ''add-a-source'' (AS) and multiplicity counting to improve the accuracy of matrix corrections, and statistical techniques that improve the low-level detectability limits. This paper describes the improvements we have made in passive-neutron assay systems and compares the accuracies and detectability limits of passive- and active-neutron assay systems

  5. Accurate prediction of complex free surface flow around a high speed craft using a single-phase level set method

    Science.gov (United States)

    Broglia, Riccardo; Durante, Danilo

    2017-11-01

    This paper focuses on the analysis of a challenging free surface flow problem involving a surface vessel moving at high speeds, or planing. The investigation is performed using a general purpose high Reynolds number free surface solver developed at CNR-INSEAN. The methodology is based on a second order finite volume discretization of the unsteady Reynolds-averaged Navier-Stokes equations (Di Mascio et al. in A second order Godunov-type scheme for naval hydrodynamics, Kluwer Academic/Plenum Publishers, Dordrecht, pp 253-261, 2001; Proceedings of 16th international offshore and polar engineering conference, San Francisco, CA, USA, 2006; J Mar Sci Technol 14:19-29, 2009); air/water interface dynamics is accurately modeled by a non-standard level set approach (Di Mascio et al. in Comput Fluids 36(5):868-886, 2007a), known as the single-phase level set method. In this algorithm the governing equations are solved only in the water phase, whereas the numerical domain in the air phase is used for a suitable extension of the fluid dynamic variables. The level set function is used to track the free surface evolution; dynamic boundary conditions are enforced directly on the interface. This approach allows accurate prediction of the evolution of the free surface even in the presence of violent breaking wave phenomena, maintaining the interface sharp, without any need to smear out the fluid properties across the two phases. This paper is aimed at the prediction of the complex free-surface flow field generated by a deep-V planing boat at medium and high Froude numbers (from 0.6 up to 1.2). In the present work, the planing hull is treated as a two-degree-of-freedom rigid object. The flow field is characterized by the presence of thin water sheets, several energetic breaking waves and plungings. The computational results include convergence of the trim angle, sinkage and resistance under grid refinement; high-quality experimental data are used for the purposes of validation, allowing to
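    The core level-set idea used by such solvers, representing the interface as the zero crossing of a function phi that is advected with the flow, can be shown in one dimension. The sketch below uses first-order upwind differencing and a constant velocity; it is a toy illustration, not the CNR-INSEAN scheme:

```python
import numpy as np

# Advect a 1-D level-set function phi (signed distance to an "interface")
# with a constant velocity, using first-order upwind differencing.
nx, L = 400, 4.0
dx = L / nx
x = np.linspace(0.0, L, nx, endpoint=False)
u = 1.0                     # flow velocity (u > 0 -> backward difference)
dt = 0.5 * dx / abs(u)      # CFL-stable time step
phi = x - 1.0               # interface (zero crossing) initially at x = 1

t_final = 1.5
for _ in range(int(round(t_final / dt))):
    # Upwind update on interior nodes; the inflow node keeps its value
    phi[1:] -= u * dt * (phi[1:] - phi[:-1]) / dx

# Locate the interface: first sign change of phi
idx = int(np.argmax(phi >= 0))
print(x[idx])   # interface transported to about x = 1 + u * t_final = 2.5
```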

  6. Radiological error: analysis, standard setting, targeted instruction and teamworking

    International Nuclear Information System (INIS)

    FitzGerald, Richard

    2005-01-01

    Diagnostic radiology does not have objective benchmarks for acceptable levels of missed diagnoses [1]. Until now, data collection of radiological discrepancies has been very time consuming. The culture within the specialty did not encourage it. However, public concern about patient safety is increasing. There have been recent innovations in compiling radiological interpretive discrepancy rates which may facilitate radiological standard setting. However standard setting alone will not optimise radiologists' performance or patient safety. We must use these new techniques in radiological discrepancy detection to stimulate greater knowledge sharing, targeted instruction and teamworking among radiologists. Not all radiological discrepancies are errors. Radiological discrepancy programmes must not be abused as an instrument for discrediting individual radiologists. Discrepancy rates must not be distorted as a weapon in turf battles. Radiological errors may be due to many causes and are often multifactorial. A systems approach to radiological error is required. Meaningful analysis of radiological discrepancies and errors is challenging. Valid standard setting will take time. Meanwhile, we need to develop top-up training, mentoring and rehabilitation programmes. (orig.)

  7. A game on the universe of sets

    International Nuclear Information System (INIS)

    Saveliev, D I

    2008-01-01

    Working in set theory without the axiom of regularity, we consider a two-person game on the universe of sets. In this game, the players choose in turn an element of a given set, an element of this element and so on. A player wins if he leaves his opponent no possibility of making a move, that is, if he has chosen the empty set. Winning sets (those admitting a winning strategy for one of the players) form a natural hierarchy with levels indexed by ordinals (in the finite case, the ordinal indicates the shortest length of a winning strategy). We show that the class of hereditarily winning sets is an inner model containing all well-founded sets and that each of the four possible relations between the universe, the class of hereditarily winning sets, and the class of well-founded sets is consistent. As far as the class of winning sets is concerned, either it is equal to the whole universe, or many of the axioms of set theory cannot hold on this class. Somewhat surprisingly, this does not apply to the axiom of regularity: we show that the failure of this axiom is consistent with its relativization to winning sets. We then establish more subtle properties of winning non-well-founded sets. We describe all classes of ordinals for which the following is consistent: winning sets without minimal elements (in the sense of membership) occur exactly at the levels indexed by the ordinals of this class. In particular, we show that if an even level of the hierarchy of winning sets contains a set without minimal elements, then all higher levels contain such sets. We show that the failure of the axiom of regularity implies that all odd levels contain sets without minimal elements, but it is consistent with the absence of such sets at all even levels as well as with their appearance at an arbitrary even non-limit or countable-cofinal level. To obtain consistency results, we propose a new method for obtaining models with non-well-founded sets. Finally, we study how long this game can

  8. The zygomatic implant perforated (ZIP) flap: a new technique for combined surgical reconstruction and rapid fixed dental rehabilitation following low-level maxillectomy.

    Science.gov (United States)

    Butterworth, C J; Rogers, S N

    2017-12-01

    The aim of this report is to describe the development and evolution of a new surgical technique for immediate surgical reconstruction and rapid post-operative prosthodontic rehabilitation with a fixed dental prosthesis following low-level maxillectomy for malignant disease. The technique involves the use of a zygomatic oncology implant perforated micro-vascular soft tissue flap (ZIP flap) for the primary management of maxillary malignancy, with surgical closure of the resultant maxillary defect and the installation of osseointegrated support for a zygomatic implant-supported maxillary fixed dental prosthesis. The use of this technique facilitates extremely rapid oral and dental rehabilitation within a few weeks of resective surgery, providing rapid return to function and restoring appearance following low-level maxillary resection, even in cases where radiotherapy is required as an adjuvant post-operative treatment. The ZIP flap technique has been adopted as a standard procedure in the unit for the management of low-level maxillary malignancy, and this report provides a detailed step-by-step approach to treatment and discusses modifications developed over the treatment of an initial cohort of patients.

  9. Cooperative Fuzzy Games Approach to Setting Target Levels of ECs in Quality Function Deployment

    Directory of Open Access Journals (Sweden)

    Zhihui Yang

    2014-01-01

    Full Text Available Quality function deployment (QFD) can provide a means of translating customer requirements (CRs) into engineering characteristics (ECs) for each stage of product development and production. The main objective of QFD-based product planning is to determine the target levels of ECs for a new product or service. QFD is a breakthrough tool which can effectively reduce the gap between CRs and a new product/service. Even though there are conflicts among some ECs, the objective of developing new product is to maximize the overall customer satisfaction. Therefore, there may be room for cooperation among ECs. A cooperative game framework combined with fuzzy set theory is developed to determine the target levels of the ECs in QFD. The key to develop the model is the formulation of the bargaining function. In the proposed methodology, the players are viewed as the membership functions of ECs to formulate the bargaining function. The solution for the proposed model is Pareto-optimal. An illustrated example is cited to demonstrate the application and performance of the proposed approach.

  10. Cooperative fuzzy games approach to setting target levels of ECs in quality function deployment.

    Science.gov (United States)

    Yang, Zhihui; Chen, Yizeng; Yin, Yunqiang

    2014-01-01

    Quality function deployment (QFD) can provide a means of translating customer requirements (CRs) into engineering characteristics (ECs) for each stage of product development and production. The main objective of QFD-based product planning is to determine the target levels of ECs for a new product or service. QFD is a breakthrough tool which can effectively reduce the gap between CRs and a new product/service. Even though there are conflicts among some ECs, the objective of developing new product is to maximize the overall customer satisfaction. Therefore, there may be room for cooperation among ECs. A cooperative game framework combined with fuzzy set theory is developed to determine the target levels of the ECs in QFD. The key to develop the model is the formulation of the bargaining function. In the proposed methodology, the players are viewed as the membership functions of ECs to formulate the bargaining function. The solution for the proposed model is Pareto-optimal. An illustrated example is cited to demonstrate the application and performance of the proposed approach.
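    The bargaining formulation can be sketched numerically: treat each EC's fuzzy membership function as a player and maximize the product of memberships (a Nash-style bargaining objective) over feasible target levels. The membership functions and the budget constraint below are invented for illustration and are not the paper's actual model:

```python
import numpy as np

# Two ECs share a unit design budget x + y <= 1 (hypothetical constraint).
# Each EC has a fuzzy membership function giving its satisfaction level.
def mu1(x):            # EC1's satisfaction rises with its target level x
    return np.clip(x / 0.8, 0.0, 1.0)

def mu2(y):            # EC2's satisfaction rises with its target level y
    return np.clip(y / 0.6, 0.0, 1.0)

# Cooperative (Nash-style) bargaining: maximize the product of memberships
# along the budget line y = 1 - x, here by simple grid search.
xs = np.linspace(0.0, 1.0, 1001)
score, x_star = max(((mu1(x) * mu2(1.0 - x), x) for x in xs),
                    key=lambda t: t[0])
print(round(float(x_star), 3), round(float(score), 3))  # compromise target
```

The maximizer is Pareto-optimal on the budget line, mirroring the property claimed for the paper's solution; with these toy memberships the compromise splits the budget evenly.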

  11. Assessing the interplay between the shoulders and low back during manual patient handling techniques in a nursing setting.

    Science.gov (United States)

    Belbeck, Alicia; Cudlip, Alan C; Dickerson, Clark R

    2014-01-01

    The purpose of this research was to quantify shoulder demands during freestyle manual patient handling (MPH) tasks and determine whether approaches intended to prevent low back injury increased shoulder demands. Twenty females completed 5 MPH tasks found commonly in hospital settings before and after a training session using current workplace MPH guidelines. Most normalized muscle activity indices and ratings of perceived exertion decreased following training at both the low back and shoulders, but were more pronounced at the low back. There was little evidence to suggest that mechanical demands were transferred from the low back to the shoulders following the training session. The study generally supports continued use of the recommended MPH techniques, but indicates that several tasks generate high muscular demands and should be avoided if possible.

  12. Estimation of trace levels of plutonium in urine samples by fission track technique

    International Nuclear Information System (INIS)

    Sawant, P.D.; Prabhu, S.; Pendharkar, K.A.; Kalsi, P.C.

    2009-01-01

    Individual monitoring of radiation workers handling Pu in various nuclear installations requires the detection of trace levels of plutonium in bioassay samples. It is necessary to develop methods that can detect urinary excretion of Pu in the fraction-of-a-mBq range. Therefore, a sensitive method based on fission track analysis has been developed for the measurement of trace levels of Pu in bioassay samples. In this technique, plutonium chemically separated from the sample, along with a Pu standard, was electrodeposited on planchettes, covered with Lexan solid state nuclear track detector (SSNTD), and irradiated with thermal neutrons in the APSARA reactor of Bhabha Atomic Research Centre, India. The fission track densities in the Lexan films of the sample and the standard were used to calculate the amount of Pu in the sample. The minimum amount of Pu that can be analyzed by this method using doubly distilled electronic grade (E.G.) reagents is about 12 μBq/L. (author)
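The comparator step described above reduces to a simple proportionality: under identical neutron fluence and counting geometry, fission-track density is proportional to the amount of Pu present, so the sample activity follows from the density ratio against the standard. A minimal sketch under that assumption (not the authors' procedure, which includes the chemistry and irradiation steps):

```python
def pu_activity(track_density_sample, track_density_standard, standard_activity):
    """Estimate sample Pu activity from fission-track densities, assuming the
    sample and standard saw identical neutron fluence and geometry, so that
    track density scales linearly with the Pu present."""
    return standard_activity * track_density_sample / track_density_standard

# e.g. a sample showing 30 tracks/cm^2 against a 1.0 mBq standard that
# produced 600 tracks/cm^2 corresponds to 0.05 mBq of Pu
```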

  13. Sample application of sensitivity/uncertainty analysis techniques to a groundwater transport problem. National Low-Level Waste Management Program

    International Nuclear Information System (INIS)

    Seitz, R.R.; Rood, A.S.; Harris, G.A.; Maheras, S.J.; Kotecki, M.

    1991-06-01

    The primary objective of this document is to provide sample applications of selected sensitivity and uncertainty analysis techniques within the context of the radiological performance assessment process. These applications were drawn from the companion document Guidelines for Sensitivity and Uncertainty Analyses of Low-Level Radioactive Waste Performance Assessment Computer Codes (S. Maheras and M. Kotecki, DOE/LLW-100, 1990). Three techniques are illustrated in this document: one-factor-at-a-time (OFAT) analysis, fractional factorial design, and Latin hypercube sampling. The report also illustrates the differences between sensitivity and uncertainty analysis at the early and later stages of the performance assessment process, and potential pitfalls that can be encountered when applying the techniques. The emphasis is on application of the techniques as opposed to the actual results, since the results are hypothetical and are not based on site-specific conditions.
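Of the three techniques named, Latin hypercube sampling is the easiest to sketch in code. The generic implementation below (not the report's code) captures the defining property: each dimension is divided into n equal-probability strata and every stratum receives exactly one sample, so even small designs cover each input's full range.

```python
import random

def latin_hypercube(n_samples, n_dims, seed=0):
    """Latin hypercube sample on the unit hypercube [0, 1)^n_dims:
    each dimension is split into n_samples equal strata, and each stratum
    in each dimension is hit exactly once."""
    rng = random.Random(seed)
    points = [[0.0] * n_dims for _ in range(n_samples)]
    for d in range(n_dims):
        strata = list(range(n_samples))
        rng.shuffle(strata)  # random pairing of strata across dimensions
        for i, s in enumerate(strata):
            # jitter uniformly inside the assigned stratum
            points[i][d] = (s + rng.random()) / n_samples
    return points
```

Mapping each coordinate through an input's inverse CDF then yields a stratified sample of that input distribution, which is how the technique feeds a performance assessment code.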

  14. What is the perceived impact of Alexander technique lessons on health status, costs and pain management in the real life setting of an English hospital? The results of a mixed methods evaluation of an Alexander technique service for those with chronic back pain

    OpenAIRE

    McClean, Stuart; Brilleman, Sam; Wye, Lesley

    2015-01-01

    Background: Randomised controlled trial evidence indicates that the Alexander Technique is clinically and cost effective for chronic back pain. The aim of this mixed methods evaluation was to explore the role and perceived impact of Alexander Technique lessons in the naturalistic setting of an acute hospital Pain Management Clinic in England. Methods: To capture changes in health status and resource use amongst service users, 43 service users were administered three widely used questionnai...

  15. Point-source inversion techniques

    Science.gov (United States)

    Langston, Charles A.; Barker, Jeffrey S.; Pavlin, Gregory B.

    1982-11-01

    A variety of approaches for obtaining source parameters from waveform data using moment-tensor or dislocation point source models have been investigated and applied to long-period body and surface waves from several earthquakes. Generalized inversion techniques have been applied to data for long-period teleseismic body waves to obtain the orientation, time function and depth of the 1978 Thessaloniki, Greece, event, of the 1971 San Fernando event, and of several events associated with the 1963 induced seismicity sequence at Kariba, Africa. The generalized inversion technique and a systematic grid testing technique have also been used to place meaningful constraints on mechanisms determined from very sparse data sets; a single station with high-quality three-component waveform data is often sufficient to discriminate faulting type (e.g., strike-slip, etc.). Sparse data sets for several recent California earthquakes, for a small regional event associated with the Koyna, India, reservoir, and for several events at the Kariba reservoir have been investigated in this way. Although linearized inversion techniques using the moment-tensor model are often robust, even for sparse data sets, there are instances where the simplifying assumption of a single point source is inadequate to model the data successfully. Numerical experiments utilizing synthetic data and actual data for the 1971 San Fernando earthquake graphically demonstrate that severe problems may be encountered if source finiteness effects are ignored. These techniques are generally applicable to on-line processing of high-quality digital data, but source complexity and inadequacy of the assumed Green's functions are major problems which are yet to be fully addressed.
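The linearized inversion step referred to above rests on the fact that synthetic seismograms are linear in the moment-tensor elements, d = Gm, so the model follows from ordinary least squares via the normal equations. A toy two-parameter version with made-up G and d (the real problem has six independent moment-tensor elements and Green's functions for G) might look like:

```python
def lstsq_2param(G, d):
    """Solve min ||G m - d||^2 for m = (m1, m2) by the 2x2 normal equations
    (G^T G) m = G^T d. G is a list of [g1, g2] rows; d the data vector."""
    a = sum(g[0] * g[0] for g in G)   # (G^T G)[0][0]
    b = sum(g[0] * g[1] for g in G)   # (G^T G)[0][1] = [1][0]
    c = sum(g[1] * g[1] for g in G)   # (G^T G)[1][1]
    r1 = sum(g[0] * di for g, di in zip(G, d))  # (G^T d)[0]
    r2 = sum(g[1] * di for g, di in zip(G, d))  # (G^T d)[1]
    det = a * c - b * b
    return ((c * r1 - b * r2) / det, (a * r2 - b * r1) / det)
```

The grid-testing strategy mentioned in the abstract amounts to repeating such a solve over a grid of the nonlinear parameters (e.g., depth) and keeping the best-fitting combination.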

  16. Comparison of internal dose estimates obtained using organ-level, voxel S value, and Monte Carlo techniques

    Energy Technology Data Exchange (ETDEWEB)

    Grimes, Joshua, E-mail: grimes.joshua@mayo.edu [Department of Physics and Astronomy, University of British Columbia, Vancouver V5Z 1L8 (Canada); Celler, Anna [Department of Radiology, University of British Columbia, Vancouver V5Z 1L8 (Canada)

    2014-09-15

    Purpose: The authors’ objective was to compare internal dose estimates obtained using the Organ Level Dose Assessment with Exponential Modeling (OLINDA/EXM) software, the voxel S value technique, and Monte Carlo simulation. Monte Carlo dose estimates were used as the reference standard to assess the impact of patient-specific anatomy on the final dose estimate. Methods: Six patients injected with 99mTc-hydrazinonicotinamide-Tyr3-octreotide were included in this study. A hybrid planar/SPECT imaging protocol was used to estimate 99mTc time-integrated activity coefficients (TIACs) for kidneys, liver, spleen, and tumors. Additionally, TIACs were predicted for 131I, 177Lu, and 90Y assuming the same biological half-lives as the 99mTc-labeled tracer. The TIACs were used as input for OLINDA/EXM for organ-level dose calculation, and voxel-level dosimetry was performed using the voxel S value method and Monte Carlo simulation. Dose estimates for the 99mTc, 131I, 177Lu, and 90Y distributions were evaluated by comparing (i) organ-level S values corresponding to each method, (ii) total tumor and organ doses, (iii) differences in right and left kidney doses, and (iv) voxelized dose distributions calculated by Monte Carlo and the voxel S value technique. Results: The S values for all investigated radionuclides used by OLINDA/EXM and the corresponding patient-specific S values calculated by Monte Carlo agreed within 2.3% on average for self-irradiation, and differed by as much as 105% for cross-organ irradiation. Total organ doses calculated by OLINDA/EXM and the voxel S value technique agreed with Monte Carlo results within approximately ±7%. Differences between right and left kidney doses determined by Monte Carlo were as high as 73%. Comparison of the Monte Carlo and voxel S value dose distributions showed that each method produced similar dose volume histograms with a minimum dose covering 90% of the volume (D90
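The voxel S value technique compared above computes the dose in each voxel as the time-integrated activity map convolved with a precomputed S-value kernel (dose per decay as a function of voxel offset). A minimal 1D stand-in for that convolution, with made-up numbers (real kernels are 3D, radionuclide-specific tables):

```python
def voxel_dose(tia, s_kernel):
    """Voxel S value dosimetry sketch (1D for brevity): dose in voxel i is
    sum_j tia[j] * S(i - j), where s_kernel holds S values indexed by voxel
    offset with the self-dose term at the kernel's center."""
    half = len(s_kernel) // 2
    n = len(tia)
    dose = [0.0] * n
    for i in range(n):
        for j in range(n):
            k = i - j + half  # map offset i-j into kernel index
            if 0 <= k < len(s_kernel):
                dose[i] += tia[j] * s_kernel[k]
    return dose
```

The comparison in the abstract then reduces to summing such voxel doses over each organ mask and checking them against the organ-level (OLINDA/EXM) and Monte Carlo results.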

  17. An analysis on the level changing of UET and SET in blood and urine in early stage of kidney disease caused by diabetes

    International Nuclear Information System (INIS)

    Liu Juzhen; Yang Wenying; Cai Tietie

    2001-01-01

    Objective: To study the relationship between UET and SET variation and early changes of diabetic nephropathy. Methods: UET and SET were measured in 24 patients with diabetes, 19 with early-stage diabetic nephropathy, 21 with advanced diabetic nephropathy, and 30 normal controls. Results: A marked rise of UET and SET was observed in all patients compared with the normal controls, and a correlation with 2-macroglobulin was revealed (P<0.05). Conclusion: UET and SET levels rose as diabetic nephropathy progressed. As a result, UET and SET may serve as sensitive indices in diagnosing early-stage diabetic nephropathy

  18. Pocket book on setting techniques for medical imaging. X-ray diagnostics, angiography, CT, MRT. 4. rev. and enl. ed.; Taschenatlas Einstelltechnik. Roentgendiagnostik, Angiografie, CT, MRT

    Energy Technology Data Exchange (ETDEWEB)

    Moeller, Torsten B.; Reif, Emil [Caritas-Krankenhaus, Dillingen/Saar (Germany)

    2009-07-01

    The pocketbook on setting techniques for medical imaging addresses the problem of preparing appropriate images for diagnostic purposes using modern high-technology instruments for x-ray diagnostics, angiography, computerized tomography, and magnetic resonance tomography. The following topics are covered: head, spinal column, upper extremities, lower extremities, thorax, gastrointestinal tract, intravenous organ examination, angiography, computerized tomography, and NMR imaging.

  19. Set-theoretic methods in control

    CERN Document Server

    Blanchini, Franco

    2015-01-01

    The second edition of this monograph describes the set-theoretic approach for the control and analysis of dynamic systems, both from a theoretical and practical standpoint. This approach is linked to fundamental control problems, such as Lyapunov stability analysis and stabilization, optimal control, control under constraints, persistent disturbance rejection, and uncertain systems analysis and synthesis. Completely self-contained, this book provides a solid foundation of mathematical techniques and applications, extensive references to the relevant literature, and numerous avenues for further theoretical study. All the material from the first edition has been updated to reflect the most recent developments in the field, and a new chapter on switching systems has been added. Each chapter contains examples, case studies, and exercises to allow for a better understanding of theoretical concepts by practical application. The mathematical language is kept to the minimum level necessary for the adequate...

  20. Combinatorial set theory with a gentle introduction to forcing

    CERN Document Server

    Halbeisen, Lorenz J

    2017-01-01

    This book, now in a thoroughly revised second edition, provides a comprehensive and accessible introduction to modern set theory. Following an overview of basic notions in combinatorics and first-order logic, the author outlines the main topics of classical set theory in the second part, including Ramsey theory and the axiom of choice. The revised edition contains new permutation models and recent results in set theory without the axiom of choice. The third part explains the sophisticated technique of forcing in great detail, now including a separate chapter on Suslin’s problem. The technique is used to show that certain statements are neither provable nor disprovable from the axioms of set theory. In the final part, some topics of classical set theory are revisited and further developed in light of forcing, with new chapters on Sacks Forcing and Shelah’s astonishing construction of a model with finitely many Ramsey ultrafilters. Written for graduate students in axiomatic set theory, Combinatorial Set Th...

  1. Evaluating inhaler use technique in COPD patients

    Directory of Open Access Journals (Sweden)

    Pothirat C

    2015-07-01

    Full Text Available Chaicharn Pothirat, Warawut Chaiwong, Nittaya Phetsuk, Sangnual Pisalthanapuna, Nonglak Chetsadaphan, Woranoot Choomuang Division of Pulmonary, Critical Care and Allergy, Department of Internal Medicine, Faculty of Medicine, Chiang Mai University, Chiang Mai, Thailand Background: Poor inhalation techniques are associated with decreased medication delivery and poor disease control in chronic obstructive pulmonary disease (COPD). The purpose of this study was to evaluate techniques for using inhaler devices in COPD patients. Methods: A prospective cross-sectional study was conducted to assess patient compliance with correct techniques for using inhaler devices across four regimens, i.e., the pressurized metered-dose inhaler (pMDI), the pMDI with a spacer, the Accuhaler®, and the Handihaler®. The percentage of compliance with essential steps of correct device usage for each regimen was recorded without prior notification when COPD patients presented for a routine visit, and 1 month after receiving face-to-face training. We compared the percentage of compliance between the devices and risk factors related to incorrect techniques using logistic regression analysis. The percentage of patient compliance with correct techniques was compared between the two visits using the chi-square test. Statistical significance was set at P<0.05. Results: A total of 103 COPD patients (mean age 71.2±9.2 years, males 64.1%, low education level 82.5%, and percent predicted forced expiratory volume in 1 second 51.9±22.5) were evaluated. Seventy-seven patients (74.8%) performed at least one step incorrectly. Patients using the Handihaler had the lowest compliance failure (42.5%), and the odds ratios for failure with the other devices compared with the Handihaler were 4.6 (95% confidence interval [CI] 1.8–11.8) for the pMDI, 3.1 (95% CI 1.2–8.2) for the pMDI with a spacer, and 2.4 (95% CI 1.1–5.2) for the Accuhaler. Low education level was the single most important factor related

  2. Motion Transplantation Techniques: A Survey

    NARCIS (Netherlands)

    van Basten, Ben; Egges, Arjan

    2012-01-01

    During the past decade, researchers have developed several techniques for transplanting motions. These techniques transplant a partial auxiliary motion, possibly defined for a small set of degrees of freedom, on a base motion. Motion transplantation improves motion databases' expressiveness and

  3. Comparative study between the effects of isolated manual therapy techniques and those associated with low level laser therapy on pain in patients with temporomandibular dysfunction

    Directory of Open Access Journals (Sweden)

    Juliana Cristina Frare

    2008-01-01

    Full Text Available Objective: This study sought to evaluate the pain condition of patients with temporomandibular dysfunction after applying manual therapy techniques, alone or combined with low-level laser therapy. Methods: The study involved 20 patients with temporomandibular dysfunction, divided randomly into two groups: G1 (n = 10; 7 women and 3 men, average age 28.2 ± 7 years), treated with manual therapy techniques, and G2 (n = 10; 8 women and 2 men, average age 24.01 ± 6.04 years), treated with the combination of manual therapy techniques and low-level laser therapy. The patients were treated three times a week for four consecutive weeks. The protocol of manual therapy techniques based on Chaitow, Makofsky and Bienfait was used. For low-level laser therapy, a GaAs laser (904 nm, 6 J/cm2, 0.38 mW/cm2) was applied at 4 pre-auricular points. To analyze the pain level, the visual analog pain scale was used. For data analysis, the Student's t and Wilcoxon tests were used, both with a significance level of 5% (p < 0.05). Results: There was a significant reduction (p < 0.05) in the level of pain in both treated groups, but the reduction was more pronounced in G2. Conclusion: Manual therapy techniques, either alone or combined with low-level laser therapy, showed satisfactory results for pain control in patients with temporomandibular dysfunction.

  4. Noninvasive evaluation of mental stress using by a refined rough set technique based on biomedical signals.

    Science.gov (United States)

    Liu, Tung-Kuan; Chen, Yeh-Peng; Hou, Zone-Yuan; Wang, Chao-Chih; Chou, Jyh-Horng

    2014-06-01

    Evaluating and treating stress can substantially benefit people with health problems. Currently, mental stress is evaluated using medical questionnaires. However, the accuracy of this evaluation method is questionable because of variations caused by factors such as cultural differences and individual subjectivity. Measuring biomedical signals is an effective method for estimating mental stress that enables this problem to be overcome. However, the relationship between levels of mental stress and biomedical signals remains poorly understood. A refined rough set algorithm, which combines rough set theory with a hybrid Taguchi-genetic algorithm and is called RS-HTGA, is proposed to determine the relationship between mental stress and biomedical signals. Two parameters were used for evaluating the performance of the proposed RS-HTGA method. A dataset obtained from a practice clinic comprising 362 cases (196 male, 166 female) was adopted to evaluate the performance of the proposed approach. The empirical results indicate that the proposed method can achieve acceptable accuracy in medical practice. Furthermore, the proposed method was successfully used to identify the relationship between mental stress levels and biomedical signals. In addition, a comparison between the RS-HTGA and a support vector machine (SVM) method indicated that both methods yield good results. The total averages for sensitivity, specificity, and precision were greater than 96%, indicating that both algorithms produce highly accurate results, but a substantial difference in discrimination existed for people with Phase 0 stress: the SVM algorithm achieved 89%, whereas the RS-HTGA achieved 96%. Therefore, the RS-HTGA is superior to the SVM algorithm. The kappa test results for both algorithms were greater than 0.936, indicating high accuracy and consistency.
    The area under the receiver operating characteristic curve for both the RS-HTGA and the SVM method was greater than 0.77, indicating
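The figures reported above (sensitivity, specificity, precision, and Cohen's kappa) all derive from a 2x2 confusion matrix. As a generic reference for how such numbers are computed (standard formulas, not the paper's code):

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, and precision from a 2x2 confusion matrix
    (tp/fp/fn/tn = true/false positives and negatives)."""
    sensitivity = tp / (tp + fn)   # recall on the positive class
    specificity = tn / (tn + fp)   # recall on the negative class
    precision = tp / (tp + fp)
    return sensitivity, specificity, precision

def cohens_kappa(tp, fp, fn, tn):
    """Cohen's kappa: observed agreement corrected for chance agreement."""
    n = tp + fp + fn + tn
    po = (tp + tn) / n
    pe = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / (n * n)
    return (po - pe) / (1 - pe)
```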

  5. A conditioned level-set method with block-division strategy to flame front extraction based on OH-PLIF measurements

    International Nuclear Information System (INIS)

    Han Yue; Cai Guo-Biao; Xu Xu; Bruno Renou; Abdelkrim Boukhalfa

    2014-01-01

    A novel approach to extracting flame fronts, called the conditioned level-set method with block division (CLSB), has been developed. Based on a two-phase level-set formulation, the conditioned initialization and region-lock optimization improve the efficiency and accuracy of flame contour identification. The original block-division strategy enables the approach to be unsupervised, calculating local self-adaptive threshold values autonomously before binarization. The CLSB approach has been applied to a large set of experimental data involving swirl-stabilized premixed combustion in diluted regimes at atmospheric pressure. OH-PLIF measurements were carried out in this framework. The resulting images thus feature lower signal-to-noise ratios (SNRs) than the ideal image; relatively complex flame structures lead to significant non-uniformity in the OH signal intensity; and the magnitude of the maximum OH gradient observed along the flame front can also vary depending on the flow or the local stoichiometry. Compared with other conventional edge detection operators, the CLSB method demonstrates a good ability to deal with OH-PLIF images at low SNR and in the presence of multiple scales of both OH intensity and OH gradient. The robustness to noise sensitivity and intensity inhomogeneity has been evaluated across a range of experimental images of diluted flames, as well as against a circle test as Ground Truth (GT). (interdisciplinary physics and related areas of science and technology)
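The block-division idea — computing a local, self-adaptive threshold per tile before binarization instead of one global threshold — can be sketched as follows. The tile-mean threshold here is a simplified stand-in for the paper's unspecified self-adaptive rule, and the images are plain nested lists:

```python
def block_threshold(img, block):
    """Split img (list of rows of intensities) into block x block tiles,
    compute a local threshold per tile (here: the tile mean, an assumed
    stand-in for CLSB's self-adaptive rule), and binarize each tile
    independently. Robust to slow intensity inhomogeneity across the image."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for by in range(0, h, block):
        for bx in range(0, w, block):
            ys = range(by, min(by + block, h))
            xs = range(bx, min(bx + block, w))
            tile = [img[y][x] for y in ys for x in xs]
            t = sum(tile) / len(tile)  # local self-adaptive threshold
            for y in ys:
                for x in xs:
                    out[y][x] = 1 if img[y][x] > t else 0
    return out
```

In the full CLSB pipeline, such a binarization would then seed (condition) the two-phase level-set evolution that extracts the flame front.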

  6. Analysis of Setting Efficacy in Young Male and Female Volleyball Players.

    Science.gov (United States)

    González-Silva, Jara; Domínguez, Alberto Moreno; Fernández-Echeverría, Carmen; Rabaz, Fernando Claver; Arroyo, M Perla Moreno

    2016-12-01

    The main objective of this study was to analyse the variables that predict setting efficacy in complex I (KI) in volleyball, in formative categories and by gender. The study sample comprised 5842 game actions carried out by the 16 male-category and 18 female-category teams that participated in the Under-16 Spanish Championship. The dependent variable was setting efficacy. The independent variables were grouped into: serve variables (serve zone, serve type, striking technique, in-game role of the server, and serve direction), reception variables (reception zone, receiving player, and reception efficacy), and setting variables (setter's position, setting zone, set type, setting technique, set area, and set tempo). Multinomial logistic regression showed that the best predictors of setting efficacy, in both the female and male categories, were reception efficacy, setting technique, and set tempo. In the male category, the jump serve was the strongest predictor of setting efficacy, while in the female category it was the set area. Therefore, in the male category not only the preceding action but also the serve affected setting efficacy; in the female category, only variables of the action itself and of the previous action, reception, affected setting efficacy. The results obtained in the present study should be taken into account in the training process of both male and female volleyball players in the formative stages.

  7. Combination Base64 Algorithm and EOF Technique for Steganography

    Science.gov (United States)

    Rahim, Robbi; Nurdiyanto, Heri; Hidayat, Rahmat; Saleh Ahmar, Ansari; Siregar, Dodi; Putera Utama Siahaan, Andysah; Faisal, Ilham; Rahman, Sayuti; Suita, Diana; Zamsuri, Ahmad; Abdullah, Dahlan; Napitupulu, Darmawan; Ikhsan Setiawan, Muhammad; Sriadhi, S.

    2018-04-01

    The steganography process combines mathematics and computer science. Steganography consists of a set of methods and techniques for embedding data into another medium so that the contents are unreadable to anyone who does not have the authority to read them. The main objective of using the Base64 method is to encode any file so as to achieve privacy. This paper discusses a steganography and encoding method using Base64, a set of encoding schemes that represent binary data as a series of ASCII characters. In addition, the EoF (end-of-file) technique is used to embed the Base64-encoded text. As an example, a file is used to carry the text; using the two methods together increases the level of security protecting the data. This research aims to secure many types of files in a particular cover medium with good security, without damaging either the stored files or the cover medium used.
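The Base64 + EoF combination can be sketched with Python's standard library: Base64-encode the secret and append it after the cover file's normal end, where most viewers ignore trailing bytes. The marker byte string below is an arbitrary assumption for locating the payload, not the authors' format:

```python
import base64

MARKER = b"--STEGO--"  # hypothetical separator, not from the paper

def embed(cover: bytes, secret: str) -> bytes:
    """Append the Base64-encoded secret after the cover file's EOF.
    The cover's original bytes are untouched, so it still renders normally."""
    return cover + MARKER + base64.b64encode(secret.encode("utf-8"))

def extract(stego: bytes) -> str:
    """Recover the secret by splitting on the marker and decoding Base64."""
    payload = stego.rsplit(MARKER, 1)[1]  # assumes the marker is present
    return base64.b64decode(payload).decode("utf-8")
```

Note that Base64 is an encoding, not encryption; the privacy here comes from hiding the payload's presence, which is why the paper pairs it with the EoF embedding step.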

  8. Is integrative use of techniques in psychotherapy the exception or the rule? Results of a national survey of doctoral-level practitioners.

    Science.gov (United States)

    Thoma, Nathan C; Cecero, John J

    2009-12-01

    This study sought to investigate the extent to which therapists endorse techniques outside of their self-identified orientation and which techniques are endorsed across orientations. A survey consisting of 127 techniques from 8 major theories of psychotherapy was administered via U.S. mail to a national random sample of doctoral-level psychotherapy practitioners. The 201 participants endorsed substantial numbers of techniques from outside their respective orientations. Many of these techniques were quite different from those of the core theories of the respective orientations. Further examining when and why experienced practitioners switch to techniques outside their primary orientation may help reveal where certain techniques fall short and where others excel, indicating a need for further research that taps the collective experience of practitioners. (PsycINFO Database Record (c) 2010 APA, all rights reserved).

  9. The utility of imputed matched sets. Analyzing probabilistically linked databases in a low information setting.

    Science.gov (United States)

    Thomas, A M; Cook, L J; Dean, J M; Olson, L M

    2014-01-01

    To compare results from high probability matched sets versus imputed matched sets across differing levels of linkage information. A series of linkages with varying amounts of available information was performed on two simulated datasets derived from multiyear motor vehicle crash (MVC) and hospital databases, where true matches were known. Distributions of high probability and imputed matched sets were compared against the true match population for occupant age, MVC county, and MVC hour. Regression models were fit to simulated log hospital charges and hospitalization status. High probability and imputed matched sets were not significantly different from the true match population on occupant age, MVC county, and MVC hour in high information settings (p > 0.999). In low information settings, high probability matched sets were significantly different on occupant age and MVC county, whereas imputed matched sets were not (p > 0.493). High information settings saw no significant differences between the two methods in inference on simulated log hospital charges and hospitalization status. Both methods were significantly different from the outcomes in low information settings; however, imputed matched sets were more robust. The level of information available to a linkage is an important consideration. High probability matched sets are suitable for high to moderate information settings and for situations involving case-specific analysis. Conversely, imputed matched sets are preferable for low information settings when conducting population-based analyses.

  10. Lebesgue Sets Immeasurable Existence

    Directory of Open Access Journals (Sweden)

    Diana Marginean Petrovai

    2012-12-01

    Full Text Available It is well known that the notions of measure and integral arose early, in close connection with practical problems of measuring geometric figures. The notion of measure was outlined in the early 20th century through the research of H. Lebesgue, founder of the modern theory of measure and integral. A technique of integration of functions was developed concurrently. Gradually a specific area took shape, today called the theory of measure and integral. Essential contributions to building this theory were made by a large number of mathematicians: C. Carathéodory, J. Radon, O. Nikodym, S. Bochner, J. Pettis, P. Halmos and many others. In the following we present several abstract sets and classes of sets. There exist sets which are not Lebesgue measurable, and sets which are Lebesgue measurable but not Borel measurable. Hence B ⊂ L ⊂ P(X).

  11. The influence of power and actor relations on priority setting and resource allocation practices at the hospital level in Kenya: a case study.

    Science.gov (United States)

    Barasa, Edwine W; Cleary, Susan; English, Mike; Molyneux, Sassy

    2016-09-30

    Priority setting and resource allocation in healthcare organizations often involve the balancing of competing interests and values in hierarchical and politically complex settings with multiple interacting actor relationships. Despite this, few studies have examined the influence of actor and power dynamics on priority setting practices in healthcare organizations. This paper examines the influence of power relations among different actors on the implementation of priority setting and resource allocation processes in public hospitals in Kenya. We used a qualitative case study approach to examine priority setting and resource allocation practices in two public hospitals in coastal Kenya. We collected data through a combination of in-depth interviews with national level policy makers, hospital managers, and frontline practitioners in the case study hospitals (n = 72), review of documents such as hospital plans and budgets, minutes of meetings and accounting records, and non-participant observations in the case study hospitals over a period of 7 months. We applied a combination of two frameworks, Norman Long's actor interface analysis and VeneKlasen and Miller's expressions of power framework, to examine and interpret our findings. Results: The interactions of actors in the case study hospitals resulted in socially constructed interfaces between: 1) senior managers and middle level managers, 2) non-clinical managers and clinicians, and 3) hospital managers and the community. Power imbalances resulted in the exclusion of middle level managers (in one of the hospitals) and of clinicians and the community (in both hospitals) from decision making processes. This resulted in, amongst other things, perceptions of unfairness and reduced motivation among hospital staff. It also calls into question the legitimacy of the priority setting processes in these hospitals. Designing hospital decision making structures to strengthen participation and inclusion of relevant stakeholders could

  12. XACML 3.0 in Answer Set Programming

    DEFF Research Database (Denmark)

    Ramli, Carroline Dewi Puspa Kencana; Nielson, Hanne Riis; Nielson, Flemming

    2012-01-01

    We present a systematic technique for transforming XACML 3.0 policies in Answer Set Programming (ASP). We show that the resulting logic program has a unique answer set that directly corresponds to our formalisation of the standard semantics of XACML 3.0 from [9]. We demonstrate how our results make...

  13. Goal oriented Mathematics Survey at Preparatory Level- Revised set ...

    African Journals Online (AJOL)

    This cross-sectional study of the mathematics syllabi at the preparatory level of high schools investigated how efficiently the subject at the preparatory level serves as a basis for the several streams found at the tertiary level, such as Natural Science, Technology, Computer Science, Health Science and Agriculture.

  14. Country report: Vietnam. Setting Up of a 90Sr/90Y Generator System Based on Supported Liquid Membrane (SLM) Technique and Radiolabeling of Eluted 90Y with Biomolecules

    International Nuclear Information System (INIS)

    Nguyen Thi Thu; Duong Van Dong; Bui Van Cuong; Chu Van Khoa

    2010-01-01

    In the course of participating in the IAEA-CRP during the last two years, Vietnam has achieved the goal of setting up a 90Sr/90Y generator system based on the Supported Liquid Membrane (SLM) technique, as well as radiolabeling of the eluted 90Y with antibody, peptides and albumin. A two-stage SLM-based 90Sr-90Y generator was set up in-house to generate carrier-free 90Y at different activity levels, viz. 5, 20 and 50 mCi. The generator system was operated in sequential mode, in which a 2-ethylhexyl 2-ethylhexyl phosphonic acid (PC88A) based SLM was used in the first stage for the transport of 90Y into 4.0 M nitric acid from the source phase, where the 90Sr-90Y equilibrium mixture is placed in nitric acid medium adjusted to pH 1-2. In the second stage, an octyl(phenyl)-N,N-diisobutylcarbamoylmethyl phosphine oxide (CMPO) based SLM was used for the selective transport of 90Y into 1.0 M acetic acid, which is the best medium for radiolabeling. The 90Y eluted from the generator was tested for the presence of any traces of 90Sr using Extraction Paper Chromatography (EPC) and was found suitable for radiolabeling. The generator system was successfully upgraded to the 100 mCi level with the help of an expert mission from India through the IAEA. The 90Y product obtained from the generator was used for radiolabeling of antibody and peptides, viz. Rituximab and DOTATATE, and of albumin particles under different experimental conditions. A new chromatography system was developed for analyzing 90Y-labeled albumin using TAE buffer as the mobile phase in PC and ITLC

  15. [Cardiac Synchronization Function Estimation Based on ASM Level Set Segmentation Method].

    Science.gov (United States)

    Zhang, Yaonan; Gao, Yuan; Tang, Liang; He, Ying; Zhang, Huie

    At present there are no accurate, quantitative methods for determining cardiac mechanical synchrony, and quantitative assessment of the synchronization function of the four cardiac cavities from medical images has great clinical value. This paper uses whole-heart ultrasound image sequences and segments the left and right atria and the left and right ventricles in each frame. After segmentation, the number of pixels in each cavity in each frame is recorded, yielding the areas of the four cavities over the image sequence. The area-change curves of the four cavities are then extracted, giving the synchronization information of the four cavities. Because of the low SNR of ultrasound images, the boundaries of the cardiac cavities are vague, so extracting cardiac contours remains a challenging problem. Therefore, ASM (active shape model) information is added to the traditional level set method to guide the curve evolution. According to the experimental results, the improved method increases segmentation accuracy. Furthermore, based on the ventricular segmentation, left and right ventricular systolic function is evaluated, mainly from the area changes. The synchronization of the four cavities of the heart is then estimated from the area changes and the volume changes.
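
    The area-curve step described above can be sketched as follows. This is an illustrative reconstruction, assuming the per-frame segmentations are available as integer label maps with a hypothetical labeling of the four cavities, and using Pearson correlation of two area curves as a simple synchrony index (the paper's exact estimator is not specified):

```python
import numpy as np

def area_curves(mask_seq):
    # mask_seq: (T, H, W) integer label maps; labels 1..4 stand for
    # LA, RA, LV, RV (hypothetical labeling). Returns per-frame pixel
    # areas of each cavity across the sequence.
    labels = {1: "LA", 2: "RA", 3: "LV", 4: "RV"}
    return {name: np.array([(frame == lab).sum() for frame in mask_seq])
            for lab, name in labels.items()}

def synchrony(a, b):
    # Pearson correlation of two area-change curves as a crude
    # synchrony index: 1.0 means the cavities vary in lockstep.
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    return float((a * b).mean())
```

    In practice the per-frame areas would come from the ASM-constrained level set segmentation; here any labeled mask sequence works.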

  16. Teachers and Learners’ Perceptions of Applying Translation as a Method, Strategy, or Technique in an Iranian EFL Setting

    Directory of Open Access Journals (Sweden)

    Fatemeh Mollaei

    2017-04-01

    It has been found that translation is an efficient means of teaching/learning the grammar, syntax, and lexis of a foreign language. Translation is also helpful for beginners who do not yet have the critical level of proficiency in their target language for expression. This study was conducted to examine teachers' and learners' perceptions of employing translation in the foreign language classroom, i.e., its effects, merits, demerits, and limitations, as well as its use as a method, strategy or technique. Both quantitative and qualitative methods were used to collect and analyze data from graduate and undergraduate learners (n=56) and teachers (n=44), male and female, who responded to two questionnaires. Additionally, only the teachers were interviewed, to gain richer insight into their perceptions and attitudes. According to the results of an independent-samples t-test, there was no significant difference between teachers' and learners' attitudes toward applying translation as a method, strategy, or technique in learning a foreign language. Based on the interview results, some teachers believed that employing translation in the foreign language context was helpful, but not in all cases. They claimed that translation was effective only in teaching vocabulary and grammar, regardless of learners' proficiency level, as it can clarify meaning. Other teachers, however, noted that the mother tongue would interfere with learning the foreign language; they considered translation a time-consuming activity through which students cannot capture the exact meaning.

  17. Effects of a Peer Evaluation Technique on Nursing Students' Anxiety Levels.

    Science.gov (United States)

    Stewart, Patricia; Greene, Debbie; Coke, Sallie

    2017-11-16

    Techniques that help decrease students' stress and anxiety during a nursing program can benefit their overall health and mental well-being. A quasi-experimental design was used to examine whether a peer evaluation technique during clinical skill practice sessions decreases anxiety prior to skill performance evaluation with nursing faculty. Participant feedback supports the integration of a peer evaluation technique when learning clinical skills.

  18. Iterative Reconstruction Techniques in Abdominopelvic CT: Technical Concepts and Clinical Implementation.

    Science.gov (United States)

    Patino, Manuel; Fuentes, Jorge M; Singh, Sarabjeet; Hahn, Peter F; Sahani, Dushyant V

    2015-07-01

    This article discusses the clinical challenge of low-radiation-dose examinations, the commonly used approaches for dose optimization, and their effect on image quality. We emphasize practical aspects of the different iterative reconstruction techniques, along with their benefits, pitfalls, and clinical implementation. The widespread use of CT has raised concerns about potential radiation risks, motivating diverse strategies to reduce the radiation dose associated with CT. CT manufacturers have developed alternative reconstruction algorithms intended to improve image quality on dose-optimized CT studies, mainly through noise and artifact reduction. Iterative reconstruction techniques take unique approaches to noise reduction and provide distinct strength levels or settings.

  19. Characterization of mammographic masses based on level set segmentation with new image features and patient information

    International Nuclear Information System (INIS)

    Shi Jiazheng; Sahiner, Berkman; Chan Heangping; Ge Jun; Hadjiiski, Lubomir; Helvie, Mark A.; Nees, Alexis; Wu Yita; Wei Jun; Zhou Chuan; Zhang Yiheng; Cui Jing

    2008-01-01

    Computer-aided diagnosis (CAD) for characterization of mammographic masses as malignant or benign has the potential to assist radiologists in reducing the biopsy rate without increasing false negatives. The purpose of this study was to develop an automated method for mammographic mass segmentation and to explore new image-based features, in combination with patient information, in order to improve the performance of mass characterization. The authors' previous CAD system, which used active contour segmentation and morphological, textural, and spiculation features, achieved promising results in mass characterization. The new CAD system is based on the level set method and includes two new types of image features, related to the presence of microcalcifications within the mass and the abruptness of the mass margin, as well as patient age. A linear discriminant analysis (LDA) classifier with stepwise feature selection was used to merge the extracted features into a classification score. Classification accuracy was evaluated using the area under the receiver operating characteristic (ROC) curve. The authors' primary data set consisted of 427 biopsy-proven masses (200 malignant and 227 benign) in 909 regions of interest (ROIs) (451 malignant and 458 benign) from multiple mammographic views. Leave-one-case-out resampling was used for training and testing. The new CAD system, based on the level set segmentation and the new mammographic feature space, achieved a view-based Az value of 0.83±0.01. The improvement over the previous CAD system was statistically significant (p=0.02). When patient age was included in the new CAD system, the view-based and case-based Az values were 0.85±0.01 and 0.87±0.02, respectively. The study also demonstrated the consistency of the newly developed CAD system by evaluating the statistics of the weights of the LDA classifiers in leave-one-case-out classification. Finally, an independent test on the publicly available digital database for screening
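
    The feature-merging and evaluation steps can be sketched generically: a pooled-covariance Fisher discriminant combines feature vectors into a single score, scores are produced by leave-one-out resampling, and the area under the ROC curve is estimated with the rank-sum statistic. This is only an illustration of the pipeline shape; the paper's stepwise feature selection and exact LDA implementation are not reproduced here:

```python
import numpy as np

def fisher_lda_weights(X, y):
    # Pooled-covariance Fisher discriminant: w = S^-1 (mu1 - mu0).
    X0, X1 = X[y == 0], X[y == 1]
    mu0, mu1 = X0.mean(0), X1.mean(0)
    S = (np.cov(X0.T) * (len(X0) - 1) + np.cov(X1.T) * (len(X1) - 1)) / (len(X) - 2)
    return np.linalg.solve(S, mu1 - mu0)

def auc(scores, y):
    # Area under the ROC curve via the rank-sum (Mann-Whitney) statistic.
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    n1, n0 = (y == 1).sum(), (y == 0).sum()
    return (ranks[y == 1].sum() - n1 * (n1 + 1) / 2) / (n1 * n0)

def loo_scores(X, y):
    # Leave-one-out: train on all but one sample, score the held-out one.
    s = np.zeros(len(y))
    for i in range(len(y)):
        m = np.arange(len(y)) != i
        w = fisher_lda_weights(X[m], y[m])
        s[i] = X[i] @ w
    return s
```

    The paper resamples by case rather than by sample, so the held-out unit there is a patient, not a single ROI.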

  20. Topology optimization of hyperelastic structures using a level set method

    Science.gov (United States)

    Chen, Feifei; Wang, Yiqiang; Wang, Michael Yu; Zhang, Y. F.

    2017-12-01

    Soft rubberlike materials, due to their inherent compliance, are finding widespread implementation in a variety of applications ranging from assistive wearable technologies to soft material robots. Structural design of such soft and rubbery materials necessitates the consideration of large nonlinear deformations and hyperelastic material models to accurately predict their mechanical behaviour. In this paper, we present an effective level set-based topology optimization method for the design of hyperelastic structures that undergo large deformations. The method incorporates both geometric and material nonlinearities, where the strain and stress measures are defined within the total Lagrange framework and the hyperelasticity is characterized by the widely adopted Mooney-Rivlin material model. A shape sensitivity analysis is carried out, in the strict sense of the material derivative, where the high-order terms involving the displacement gradient are retained to ensure the descent direction. As the design velocity enters into the shape derivative through its gradient and divergence terms, we develop a discrete velocity selection strategy. The whole optimization implementation follows a two-step process: a linear optimization is first performed, and its optimized solution serves as the initial design for the subsequent nonlinear optimization. This operation efficiently alleviates the numerical instability and facilitates the optimization process. To demonstrate the validity and effectiveness of the proposed method, three compliance minimization problems are studied; their optimized solutions show the significant mechanical benefit of incorporating the nonlinearities, with remarkable enhancement not only in structural stiffness but also in critical buckling load.
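
    For reference, the Mooney-Rivlin model mentioned above has the standard incompressible strain energy density W = C10(I1 − 3) + C01(I2 − 3), written in terms of the invariants of the right Cauchy-Green tensor. A minimal sketch (the material constants are placeholders, and this evaluates only the energy, not the paper's sensitivity analysis):

```python
import numpy as np

def mooney_rivlin_energy(F, C10, C01):
    # Strain energy density W = C10 (I1 - 3) + C01 (I2 - 3) for an
    # incompressible Mooney-Rivlin solid; F is the deformation gradient.
    C = F.T @ F                           # right Cauchy-Green tensor C = F^T F
    I1 = np.trace(C)                      # first invariant
    I2 = 0.5 * (I1**2 - np.trace(C @ C))  # second invariant
    return C10 * (I1 - 3.0) + C01 * (I2 - 3.0)
```

    The undeformed state F = I gives I1 = I2 = 3 and hence zero energy, while any isochoric stretch produces positive energy for positive material constants.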

  1. Contribution of the geophysical and radon techniques to characterize hydrogeological setting in the western volcanic zone of Yarmouk basin: Case study Deir El-Adas

    International Nuclear Information System (INIS)

    Al-Fares, W.; Soliman, E.; Al-Ali, A.

    2009-01-01

    The aim of this study is to illustrate the use of geophysical and radon techniques to characterize, at local scale, the hydrogeological setting in the volcanic zone of the Yarmouk basin, and to employ the results to understand and explain similar hydrogeological situations related to particular subsurface geologic and tectonic structures. The field observations, the failed wells drilled at Deir El-Adas, and the occurrence of a successful well outside that zone gave us the incentive to verify and provide a realistic explanation of this phenomenon in the basaltic outcrops of the Yarmouk basin. Interpretation of the vertical electrical soundings (VES) indicates the presence of a local faulted anticline structure of Palaeogene age beneath the volcanic outcrops. This structure has led to complex hydrogeological conditions, characterized by limited recharge, which occurs through fractures and secondary faults, in addition to the low direct precipitation. The piezometric map indicates a water divide to the north-west of Deir El-Adas related to the tectonic setting, while the discharge map shows low productivity of the wells drilled in Deir El-Adas and its periphery. Owing to the limited radon data, it was difficult to draw concrete conclusions from that technique. (author)

  2. CT Findings of Disease with Elevated Serum D-Dimer Levels in an Emergency Room Setting

    International Nuclear Information System (INIS)

    Choi, Ji Youn; Kwon, Woo Cheol; Kim, Young Ju

    2012-01-01

    Pulmonary embolism and deep vein thrombosis are the leading causes of elevated serum D-dimer levels in the emergency room. Although D-dimer is a useful screening test because of its high sensitivity and negative predictive value, it has a low specificity. In addition, D-dimer can be elevated in various diseases. Therefore, information on the various diseases with elevated D-dimer levels and their radiologic findings may allow for accurate diagnosis and proper management. Herein, we report the CT findings of various diseases with elevated D-dimer levels in an emergency room setting, including an intravascular contrast filling defect with associated findings in a venous thromboembolism, fracture with soft tissue swelling and hematoma formation in a trauma patient, enlargement with contrast enhancement in the infected organ of a patient, coronary artery stenosis with a perfusion defect of the myocardium in a patient with acute myocardial infarction, high density of acute thrombus in a cerebral vessel with a low density of affected brain parenchyma in an acute cerebral infarction, intimal flap with two separated lumens in a case of aortic dissection, organ involvement of malignancy in a cancer patient, and atrophy of a liver with a dilated portal vein and associated findings.

  3. CT Findings of Disease with Elevated Serum D-Dimer Levels in an Emergency Room Setting

    Energy Technology Data Exchange (ETDEWEB)

    Choi, Ji Youn; Kwon, Woo Cheol; Kim, Young Ju [Dept. of Radiology, Wonju Christian Hospital, Yonsei University Wonju College of Medicine, Wonju (Korea, Republic of)]

    2012-01-15


  4. Dynamical basis set

    International Nuclear Information System (INIS)

    Blanco, M.; Heller, E.J.

    1985-01-01

    A new Cartesian basis set is defined that is suitable for the representation of molecular vibration-rotation bound states. The Cartesian basis functions are superpositions of semiclassical states generated through the use of classical trajectories that conform to the intrinsic dynamics of the molecule. Although semiclassical input is employed, the method becomes ab initio through the standard matrix-diagonalization variational method. Special attention is given to classical-quantum correspondences for angular momentum. In particular, it is shown that the use of semiclassical information preferentially leads to angular momentum eigenstates with magnetic quantum number |M| equal to the total angular momentum J. The present method offers a reliable technique for representing highly excited vibrational-rotational states where perturbation techniques are no longer applicable.

  5. Bud development, flowering and fruit set of Moringa oleifera Lam. (Horseradish Tree as affected by various irrigation levels

    Directory of Open Access Journals (Sweden)

    Quintin Ernst Muhl

    2013-12-01

    Moringa oleifera is becoming increasingly popular as an industrial crop due to its multitude of useful attributes as a water purifier, nutritional supplement and biofuel feedstock. Given its tolerance of sub-optimal growing conditions, most current and anticipated cultivation areas are in medium- to low-rainfall areas. This study assessed the effect of various irrigation levels on floral initiation, flowering and fruit set. Three treatments, namely 900 mm (900IT), 600 mm (600IT) and 300 mm (300IT) per annum, were administered through drip irrigation, simulating three total annual rainfall amounts. Individual inflorescences from each treatment were tagged during floral initiation and monitored through to fruit set. Flower bud initiation was highest at the 300IT and lowest at the 900IT for two consecutive growing seasons. Fruit set, on the other hand, decreased with decreasing irrigation. Floral abortion, reduced pollen viability and moisture stress in the style contributed to the reduction in fruiting/yield observed at the 300IT. Moderate water stress prior to floral initiation can stimulate flower initiation; however, it should be followed by sufficient irrigation to ensure good pollination, fruit set and yield.

  6. Reliability and Validity of 10 Different Standard Setting Procedures.

    Science.gov (United States)

    Halpin, Glennelle; Halpin, Gerald

    Research indicating that different cut-off points result from the use of different standard-setting techniques leaves decision makers with a disturbing dilemma: Which standard-setting method is best? This investigation of the reliability and validity of 10 different standard-setting approaches was designed to provide information that might help…

  7. On reinitializing level set functions

    Science.gov (United States)

    Min, Chohong

    2010-04-01

    In this paper, we consider reinitializing level set functions through the equation φ_t + sgn(φ_0)(‖∇φ‖ − 1) = 0 [16]. The method of Russo and Smereka [11] is taken for the spatial discretization of the equation: simply speaking, a second order ENO finite difference with subcell resolution near the interface. Our main interest is the temporal discretization of the equation. We compare three temporal discretizations: the second order Runge-Kutta method, the forward Euler method, and a Gauss-Seidel iteration of the forward Euler method. The fact that the time in the equation is fictitious suggests the hypothesis that all the temporal discretizations yield the same stationary state. The fact that the absolute stability region of the forward Euler method is not wide enough to include all the eigenvalues of the linearized semi-discrete system of the second order ENO spatial discretization suggests a second hypothesis, that the forward Euler temporal discretization should invoke numerical instability. Our results in this paper contradict both hypotheses. The Runge-Kutta and Gauss-Seidel methods attain second order accuracy, and the forward Euler method converges with order between one and two. Examining all their properties, we conclude that the Gauss-Seidel method is the best of the three: compared to the Runge-Kutta method, it is twice as fast and requires half the memory at the same accuracy.
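
    The reinitialization equation and a forward Euler iteration can be sketched in one spatial dimension. This is a deliberate simplification: first-order Godunov upwinding with a smoothed sign function stands in for the paper's second order ENO discretization with subcell resolution:

```python
import numpy as np

def reinitialize(phi0, dx, n_iters=200, dt=None):
    # Iterate phi_t + sgn(phi0) (|phi_x| - 1) = 0 to steady state (1-D),
    # turning phi0 into an approximate signed distance function.
    phi = phi0.copy()
    dt = dt or 0.5 * dx                       # CFL-limited pseudo-time step
    s = phi0 / np.sqrt(phi0**2 + dx**2)       # smoothed sign of phi0
    for _ in range(n_iters):
        dmx = np.diff(phi, prepend=phi[0]) / dx   # backward difference
        dpx = np.diff(phi, append=phi[-1]) / dx   # forward difference
        # Godunov upwinding for |phi_x|, depending on the sign of s
        grad_p = np.sqrt(np.maximum(np.maximum(dmx, 0)**2, np.minimum(dpx, 0)**2))
        grad_m = np.sqrt(np.maximum(np.minimum(dmx, 0)**2, np.maximum(dpx, 0)**2))
        grad = np.where(s > 0, grad_p, grad_m)
        phi = phi - dt * s * (grad - 1.0)     # forward Euler update
    return phi
```

    Starting from a level set function with the wrong slope, the iteration preserves the zero crossing while driving ‖∇φ‖ toward 1; replacing the explicit sweep with a Gauss-Seidel update of the same right-hand side gives the variant the paper finds fastest.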

  8. Probabilistic Open Set Recognition

    Science.gov (United States)

    Jain, Lalit Prithviraj

    Real-world tasks in computer vision, pattern recognition and machine learning often touch upon the open set recognition problem: multi-class recognition with incomplete knowledge of the world and many unknown inputs. An obvious way to approach such problems is to develop a recognition system that thresholds probabilities to reject unknown classes. Traditional rejection techniques are not about the unknown; they are about the uncertain boundary and rejection around that boundary. Thus traditional techniques represent only the "known unknowns". However, a proper open set recognition algorithm is needed to reduce the risk from the "unknown unknowns". This dissertation examines this concept and finds that existing probabilistic multi-class recognition approaches are ineffective for true open set recognition. We hypothesize that the cause is weak ad hoc assumptions combined with the closed-world assumptions made by existing calibration techniques. Intuitively, if we could accurately model just the positive data for any known class without overfitting, we could reject the large set of unknown classes even under this assumption of incomplete class knowledge. For this, we formulate the problem as one of modeling positive training data by invoking statistical extreme value theory (EVT) near the decision boundary of positive data with respect to negative data. We provide a new algorithm called the PI-SVM for estimating the unnormalized posterior probability of class inclusion. This dissertation also introduces a new open set recognition model called Compact Abating Probability (CAP), in which the probability of class membership decreases in value (abates) as points move from known data toward open space. We show that CAP models improve open set recognition for multiple algorithms. Leveraging the CAP formulation, we go on to describe the novel Weibull-calibrated SVM (W-SVM) algorithm, which combines the useful properties of statistical EVT for score calibration with one-class and binary
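
    The abating idea can be illustrated with a toy model in which class-membership probability decays with distance from the known training points, and inputs whose best score falls below a threshold are rejected as unknown. This is only a schematic of the CAP concept; it is not the dissertation's Weibull-calibrated W-SVM, and the exponential decay and threshold are arbitrary choices:

```python
import numpy as np

def cap_probability(x, class_points, tau=1.0):
    # Compact Abating Probability sketch: membership probability abates
    # exponentially with distance from the nearest known example.
    d = np.min(np.linalg.norm(class_points - x, axis=1))
    return float(np.exp(-d / tau))

def open_set_predict(x, classes, threshold=0.5):
    # Accept the best-scoring known class only if its probability is
    # high enough; otherwise reject x as belonging to an unknown class.
    scores = {name: cap_probability(x, pts) for name, pts in classes.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] >= threshold else "unknown"
```

    The key property is that probability mass is compact around known data, so points far out in open space are rejected no matter how many unknown classes exist.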

  9. Basic investigation of the laminated alginate impression technique: Setting time, permanent deformation, elastic deformation, consistency, and tensile bond strength tests.

    Science.gov (United States)

    Kitamura, Aya; Kawai, Yasuhiko

    2015-01-01

    The laminated alginate impression technique for edentulous patients is simple and time-efficient compared to the border molding technique. The purpose of this study was to examine the clinical applicability of the laminated alginate impression by measuring the effects of different water/powder ratios (W/P) and mixing methods, and of different bonding methods, in the secondary impression of the alginate impression. Three W/P ratios, the manufacturer-designated mixing water amount (standard), 1.5-fold (1.5×) and 1.75-fold (1.75×) water amounts, were mixed by manual and automatic mixing methods. Initial and complete setting times, permanent and elastic deformation, and consistency of the secondary impression were investigated (n=10). Additionally, the tensile bond strength between the primary and secondary impressions was measured with the following surface treatments: air blow only (A), surface baking (B), and an alginate impression material bonding agent (ALGI-BOND: AB) (n=12). Initial setting times shortened significantly with automatic mixing for all W/P ratios (p<0.05). Permanent deformation decreased and elastic deformation increased with higher W/P, regardless of the mixing method. Elastic deformation was significantly reduced at 1.5× and 1.75× with automatic mixing (p<0.05). All of these properties fell within JIS standards. For all W/P ratios, AB showed significantly higher bonding strength than A and B (p<0.01). With increased mixing water (1.5× and 1.75×), setting times remained within JIS standards, suggesting applicability in the clinical setting. The use of an automatic mixing device decreased elastic strain and shortened the curing time. For the secondary impression, application of adhesive on the primary impression gives secure adhesion. Copyright © 2014 Japan Prosthodontic Society. Published by Elsevier Ltd. All rights reserved.

  10. A Container-based Trusted Multi-level Security Mechanism

    Directory of Open Access Journals (Sweden)

    Li Xiao-Yong

    2017-01-01

    Multi-level security mechanisms have been widely applied in the military, government, defense and other domains in which information must be divided by security level. Through this type of security mechanism, users at different security levels are provided with information at the corresponding security levels. Traditional multi-level security mechanisms, which depend on the safety of the operating system, have ultimately proved impractical. In this paper we propose a container-based trusted multi-level security mechanism to improve the applicability of the multi-level approach. It guarantees multi-level security of the system through a set of multi-level security policy rules and trusted techniques. The technical feasibility and application scenarios are also discussed. The ease of realization, strong practical significance and low cost of our method will greatly expand the application of multi-level security mechanisms in real life.
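
    A multi-level policy rule of the classic Bell-LaPadula kind ("no read up, no write down") can be sketched as follows; the level names and check functions are illustrative, not the paper's actual rule set:

```python
# Ordered security levels (illustrative labels).
LEVELS = {"unclassified": 0, "confidential": 1, "secret": 2, "top_secret": 3}

def can_read(subject_level, object_level):
    # "No read up": a subject may read objects at or below its own level.
    return LEVELS[subject_level] >= LEVELS[object_level]

def can_write(subject_level, object_level):
    # "No write down": a subject may write objects at or above its own
    # level, so information never leaks to a lower level.
    return LEVELS[subject_level] <= LEVELS[object_level]
```

    In a container-based design, checks of this shape would be enforced at the container boundary rather than inside a trusted operating system.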

  11. Blocking sets in Desarguesian planes

    NARCIS (Netherlands)

    Blokhuis, A.; Miklós, D.; Sós, V.T.; Szönyi, T.

    1996-01-01

    We survey recent results concerning the size of blocking sets in Desarguesian projective and affine planes, and implications of these results, and of the techniques used to prove them, for related problems such as the size of maximal partial spreads, small complete arcs, and small strong representative systems.

  12. Intervention Techniques Used With Autism Spectrum Disorder by Speech-Language Pathologists in the United States and Taiwan: A Descriptive Analysis of Practice in Clinical Settings.

    Science.gov (United States)

    Hsieh, Ming-Yeh; Lynch, Georgina; Madison, Charles

    2018-04-27

    This study examined intervention techniques used with children with autism spectrum disorder (ASD) by speech-language pathologists (SLPs) in the United States and Taiwan working in clinic/hospital settings. The research questions addressed intervention techniques used with children with ASD, intervention techniques used with different age groups (under and above 8 years old), and training received before using the intervention techniques. The survey was distributed through the American Speech-Language-Hearing Association to selected SLPs across the United States. In Taiwan, the survey (Chinese version) was distributed through the Taiwan Speech-Language Pathologist Union, 2018, to certified SLPs. Results revealed that SLPs in the United States and Taiwan used 4 common intervention techniques: Social Skill Training, Augmentative and Alternative Communication, Picture Exchange Communication System, and Social Stories. Taiwanese SLPs reported SLP preparation program training across these common intervention strategies. In the United States, SLPs reported training via SLP preparation programs, peer therapists, and self-taught. Most SLPs reported using established or emerging evidence-based practices as defined by the National Professional Development Center (2014) and the National Standards Report (2015). Future research should address comparison of SLP preparation programs to examine the impact of preprofessional training on use of evidence-based practices to treat ASD.

  13. Analysis of Setting Efficacy in Young Male and Female Volleyball Players

    Directory of Open Access Journals (Sweden)

    González-Silva Jara

    2016-12-01

    The main objective of this study was to analyse the variables that predict setting efficacy in complex I (KI) in volleyball, in formative categories and by gender. The study sample comprised 5842 game actions carried out by the 16 male-category and 18 female-category teams that participated in the Under-16 Spanish Championship. The dependent variable was setting efficacy. The independent variables were grouped into serve variables (serve zone, type of serve, striking technique, in-game role of the server and serve direction), reception variables (reception zone, receiver player and reception efficacy) and setting variables (setter's position, setting zone, type of set, setting technique, set's area and tempo of the set). Multinomial logistic regression showed that the best predictors of setting efficacy, in both the female and male categories, were reception efficacy, setting technique and tempo of the set. In the male category the jump serve was the strongest predictor of setting efficacy, while in the female category it was the set's area. Therefore, in the male category not only the preceding action but also the serve affected setting efficacy; in the female category, only variables of the action itself and of the previous action, reception, affected setting efficacy. The results obtained in the present study should be taken into account in the training of both male and female volleyball players in the formative stages.

  14. Generalized cost-effectiveness analysis for national-level priority-setting in the health sector

    Directory of Open Access Journals (Sweden)

    Edejer Tessa

    2003-12-01

    Cost-effectiveness analysis (CEA) is potentially an important aid to public health decision-making but, with some notable exceptions, its use and impact at the level of individual countries is limited. A number of potential reasons may account for this, among them technical shortcomings associated with the generation of current economic evidence, political expediency, social preferences and systemic barriers to implementation. As a form of sectoral CEA, Generalized CEA sets out to overcome a number of these barriers to the appropriate use of cost-effectiveness information at the regional and country level. Its application via WHO-CHOICE provides a new economic evidence base, as well as underlying methodological developments, concerning the cost-effectiveness of a range of health interventions for leading causes of, and risk factors for, disease. The estimated sub-regional costs and effects of different interventions provided by WHO-CHOICE can readily be tailored to the specific context of individual countries, for example by adjusting the quantity and unit prices of intervention inputs (costs) or the coverage, efficacy and adherence rates of interventions (effectiveness). The potential usefulness of this information for health policy and planning lies in assessing whether current intervention strategies represent an efficient use of scarce resources, and which of the potential additional interventions that are not yet implemented, or not implemented fully, should be given priority on the grounds of cost-effectiveness. Health policy-makers and programme managers can use results from WHO-CHOICE as a valuable input into the planning and prioritization of services at national level, as well as a starting point for additional analyses of the trade-off between the efficiency of interventions in producing health and their impact on other key outcomes such as reducing inequalities and improving the health of the poor.
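
    The core comparison behind such analyses is the cost-effectiveness ratio, either averaged per intervention or incremental between two interventions. A minimal sketch with made-up cost and effect figures (the field names and units are placeholders, not WHO-CHOICE data):

```python
def icer(cost_new, effect_new, cost_old, effect_old):
    # Incremental cost-effectiveness ratio: extra cost per extra unit of
    # health gain (e.g. per DALY averted) of a new intervention vs. an old one.
    return (cost_new - cost_old) / (effect_new - effect_old)

def rank_by_acer(interventions):
    # Rank interventions by average cost-effectiveness ratio (cost / effect),
    # a starting point for sectoral priority-setting on efficiency grounds.
    return sorted(interventions, key=lambda i: i["cost"] / i["effect"])
```

    Efficiency rankings of this kind are only one input; as the abstract notes, equity and impact on the health of the poor enter the final prioritization separately.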

  15. Level-set segmentation of pulmonary nodules in megavolt electronic portal images using a CT prior

    International Nuclear Information System (INIS)

    Schildkraut, J. S.; Prosser, N.; Savakis, A.; Gomez, J.; Nazareth, D.; Singh, A. K.; Malhotra, H. K.

    2010-01-01

    Purpose: Pulmonary nodules present unique problems during radiation treatment due to nodule position uncertainty that is caused by respiration. The radiation field has to be enlarged to account for nodule motion during treatment. The purpose of this work is to provide a method of locating a pulmonary nodule in a megavolt portal image that can be used to reduce the internal target volume (ITV) during radiation therapy. A reduction in the ITV would result in a decrease in radiation toxicity to healthy tissue. Methods: Eight patients with non-small cell lung cancer were used in this study. CT scans that include the pulmonary nodule were captured with a GE Healthcare LightSpeed RT 16 scanner. Megavolt portal images were acquired with a Varian Trilogy unit equipped with an AS1000 electronic portal imaging device. The nodule localization method uses grayscale morphological filtering and level-set segmentation with a prior. The treatment-time portion of the algorithm is implemented on a graphical processing unit. Results: The method was retrospectively tested on eight cases that include a total of 151 megavolt portal image frames. The method reduced the nodule position uncertainty by an average of 40% for seven out of the eight cases. The treatment phase portion of the method has a subsecond execution time that makes it suitable for near-real-time nodule localization. Conclusions: A method was developed to localize a pulmonary nodule in a megavolt portal image. The method uses the characteristics of the nodule in a prior CT scan to enhance the nodule in the portal image and to identify the nodule region by level-set segmentation. In a retrospective study, the method reduced the nodule position uncertainty by an average of 40% for seven out of the eight cases studied.

  16. GenoSets: visual analytic methods for comparative genomics.

    Directory of Open Access Journals (Sweden)

    Aurora A Cain

    Full Text Available Many important questions in biology are, fundamentally, comparative, and this extends to our analysis of a growing number of sequenced genomes. Existing genomic analysis tools are often organized around literal views of genomes as linear strings. Even when information is highly condensed, these views grow cumbersome as larger numbers of genomes are added. Data aggregation and summarization methods from the field of visual analytics can provide abstracted comparative views, suitable for sifting large multi-genome datasets to identify critical similarities and differences. We introduce a software system for visual analysis of comparative genomics data. The system automates the process of data integration, and provides the analysis platform to identify and explore features of interest within these large datasets. GenoSets borrows techniques from business intelligence and visual analytics to provide a rich interface of interactive visualizations supported by a multi-dimensional data warehouse. In GenoSets, visual analytic approaches are used to enable querying based on orthology, functional assignment, and taxonomic or user-defined groupings of genomes. GenoSets links this information together with coordinated, interactive visualizations for both detailed and high-level categorical analysis of summarized data. GenoSets has been designed to simplify the exploration of multiple genome datasets and to facilitate reasoning about genomic comparisons. Case examples are included showing the use of this system in the analysis of 12 Brucella genomes. GenoSets software and the case study dataset are freely available at http://genosets.uncc.edu. We demonstrate that the integration of genomic data using a coordinated multiple view approach can simplify the exploration of large comparative genomic data sets, and facilitate reasoning about comparisons and features of interest.

  17. Feature-level domain adaptation

    DEFF Research Database (Denmark)

    Kouw, Wouter M.; Van Der Maaten, Laurens J P; Krijthe, Jesse H.

    2016-01-01

    Domain adaptation is the supervised learning setting in which the training and test data are sampled from different distributions: training data is sampled from a source domain, whilst test data is sampled from a target domain. This paper proposes and studies an approach, called feature-level domain adaptation (flda), that models the dependence between the two domains by means of a feature-level transfer model that is trained to describe the transfer from source to target domain. Subsequently, we train a domain-adapted classifier by minimizing the expected loss under the resulting transfer model. The transfer is modeled via a dropout distribution, which allows the classifier to adapt to differences in the marginal probability of features in the source and the target domain. Our experiments on several real-world problems show that flda performs on par with state-of-the-art domain-adaptation techniques.

  18. Effect of AGE and Sex on thyroid hormone levels in normal egyptian individuals using RIA technique

    International Nuclear Information System (INIS)

    Abdel-Aziz, S.M.; El-Seify, S.; Megahed, Y.M.; El-Arab, A.

    1993-01-01

    This work aims to estimate total serum levels of the thyroid hormones triiodothyronine (T3) and thyroxine (T4), as well as pituitary thyrotropin (TSH), in different categories of normal Egyptian individuals classified according to age and sex. Radioimmunoassay (RIA) and immunoradiometric assay (IRMA) techniques were used. Results of this study indicate that T3 and T4 concentrations decreased significantly with advancing age. This decrement was statistically significant in both sexes and could be attributed to the decline in TBG concentration in the elderly. TSH level was not influenced by sex; however, a slight decrease was observed in the elderly, perhaps due to decreased TSH receptors and/or cyclic AMP activity. 3 figs., 2 tabs

  19. Novel technique of reducing radon levels in living premises

    International Nuclear Information System (INIS)

    Khaydarov, R.A.; Gapurova, O.U.; Khaydarov, R.R.

    2006-01-01

    Full text: Radon is a naturally occurring gas that seeps into homes and underground structures (buildings, tunnels, hangars, garages, etc.) from the surrounding soil through walls, floors, etc. and emanates from construction materials such as concrete, granite, etc. The level of radon is especially high in regions with a higher content of uranium in soil and water and with geological breaks of the Earth's crust. Concentrations of uranium higher than 10 g per ton of soil have been found in 14% of the territory of Uzbekistan. As a result, for instance, the concentration of radon exceeds the regulation level by 10-100 times in 14% of premises in Tashkent, 41% of premises in Almalik town and 44% in Yangiabad town. The purpose of this work was to create a method to reduce the concentration of radon gas in buildings and underground structures. We suppose that the most effective technique is treatment of the walls, floors, etc. of basements and underground structures with special chemicals that seal micropores inside the construction materials. Sealing the pores stops radon diffusion and, in addition, blocks another radon pathway - water migration and emanation from concrete, gypsum or other construction materials. In the paper, polymeric silicoorganic compounds are investigated and selected as the chemicals to prevent radon from seeping indoors. The gas (air, Ar, Rn-222, H2O) permeability of concrete and gypsum after treatment with the chemicals has been examined. The influence of the type of cement and sand, preliminary treatment with different chemicals, different types of polymeric silicoorganic compounds, time between treatments, moisture of the concrete, time between preparation of the chemicals and treatment of the concrete (ageing of the chemicals), and time between treatment of the concrete and testing (ageing of the treated concrete) has been examined. The surfaces of the samples were treated by spray. Experiments have shown that the chosen method of treating the construction materials allows reducing the coefficient of gas

  20. Goal-Setting in Youth Football. Are Coaches Missing an Opportunity?

    Science.gov (United States)

    Maitland, Alison; Gervis, Misia

    2010-01-01

    Background: Goal-setting is not always a simple motivational technique when used in an applied sport setting, especially in relation to the meaning of achievement in competitive sport. Goal-setting needs to be examined in a broader context than goal-setting theory, such as that provided by social cognitive theories of motivation. In football, the…

  1. Assessing the Effectiveness of Inquiry-based Learning Techniques Implemented in Large Classroom Settings

    Science.gov (United States)

    Steer, D. N.; McConnell, D. A.; Owens, K.

    2001-12-01

    assessments of knowledge-level learning included evaluations of student responses to pre- and post-instruction conceptual test questions, short group exercises and content-oriented exam questions. Higher level thinking skills were assessed when students completed exercises that required the completion of Venn diagrams, concept maps and/or evaluation rubrics both during class periods and on exams. Initial results indicate that these techniques improved student attendance significantly and improved overall retention in the course by 8-14% over traditional lecture formats. Student scores on multiple choice exam questions were slightly higher (1-3%) for students taught in the active learning environment and short answer questions showed larger gains (7%) over students' scores in a more traditional class structure.

  2. Variability of Plyometric and Ballistic Exercise Technique Maintains Jump Performance.

    Science.gov (United States)

    Chandler, Phillip T; Greig, Matthew; Comfort, Paul; McMahon, John J

    2018-06-01

    Chandler, PT, Greig, M, Comfort, P, and McMahon, JJ. Variability of plyometric and ballistic exercise technique maintains jump performance. J Strength Cond Res 32(6): 1571-1582, 2018-The aim of this study was to investigate changes in vertical jump technique over the course of a training session. Twelve plyometric and ballistic exercise-trained male athletes (age = 23.4 ± 4.6 years, body mass = 78.7 ± 18.8 kg, height = 177.1 ± 9.0 cm) performed 3 sets of 10 repetitions of drop jump (DJ), rebound jump (RJ) and squat jump (SJ). Each exercise was analyzed from touchdown to peak joint flexion and peak joint flexion to take-off. Squat jump was analyzed from peak joint flexion to take-off only. Jump height, flexion and extension time and range of motion, and instantaneous angles of the ankle, knee, and hip joints were measured. Separate 1-way repeated analyses of variance compared vertical jump technique across exercise sets and repetitions. Exercise set analysis found that SJ had lower results than DJ and RJ for the angle at peak joint flexion for the hip, knee, and ankle joints and take-off angle of the hip joint. Exercise repetition analysis found that the ankle joint had variable differences for the angle at take-off, flexion, and extension time for RJ. The knee joint had variable differences for flexion time for DJ and angle at take-off and touchdown for RJ. There was no difference in jump height. Variation in measured parameters across repetitions highlights variable technique across plyometric and ballistic exercises. This did not affect jump performance, but likely maintained jump performance by overcoming constraints (e.g., level of rate coding).

  3. Comparative Study of Modulation Techniques for Two-Level Voltage Source Inverters

    Directory of Open Access Journals (Sweden)

    Barry W. Williams

    2016-06-01

    Full Text Available A detailed comparative study of modulation techniques for single and three phase dc-ac inverters is presented. Sinusoidal Pulse Width Modulation, Triplen Sinusoidal Pulse Width Modulation, Space Vector Modulation, Selective Harmonic Elimination and Wavelet Modulation are assessed and compared in terms of maximum fundamental output, harmonic performance, switching losses and operational mode. The presented modulation techniques are applied to single and three phase voltage source inverters and are simulated using SIMULINK. The simulation results clarify the inverter performance achieved using the different modulation techniques.
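
For the simplest of the compared schemes, sinusoidal PWM, the mechanism can be demonstrated in a few lines: a sinusoidal reference is compared against a triangular carrier, and in the linear region the fundamental of the pole voltage equals m_a * Vdc/2. The frequencies and amplitudes below are arbitrary illustrative choices, and the check uses a plain FFT rather than SIMULINK:

```python
import numpy as np

# Sinusoidal PWM for one leg of a two-level inverter: compare a sinusoidal
# reference against a high-frequency triangular carrier. In the linear
# region (m_a <= 1) the fundamental of the pole voltage is m_a * Vdc / 2.
f_ref, f_carrier, Vdc, m_a = 50.0, 5000.0, 400.0, 0.8
t = np.arange(0, 1 / f_ref, 1 / (f_carrier * 200))  # one fundamental period

reference = m_a * np.sin(2 * np.pi * f_ref * t)
# Triangle wave in [-1, 1] built from the sawtooth phase of the carrier.
phase = (f_carrier * t) % 1.0
carrier = 4 * np.abs(phase - 0.5) - 1.0
pole_voltage = np.where(reference > carrier, Vdc / 2, -Vdc / 2)

# Fundamental amplitude via the Fourier series (bin 1 = f_ref here,
# because the window is exactly one fundamental cycle).
N = len(t)
spectrum = np.fft.rfft(pole_voltage) * 2.0 / N
fundamental = np.abs(spectrum[1])
print(round(fundamental, 1), round(m_a * Vdc / 2, 1))
```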

  4. A compressed sensing based approach on Discrete Algebraic Reconstruction Technique.

    Science.gov (United States)

    Demircan-Tureyen, Ezgi; Kamasak, Mustafa E

    2015-01-01

    Discrete tomography (DT) techniques are capable of computing better results, even using a smaller number of projections, than continuous tomography techniques. The Discrete Algebraic Reconstruction Technique (DART) is an iterative reconstruction method proposed to achieve this goal by exploiting prior knowledge of the gray levels and assuming that the scanned object is composed of a few different densities. In this paper, the DART method is combined with an initial total variation minimization (TvMin) phase to ensure a better initial guess, and extended with a segmentation procedure in which the threshold values are estimated from a finite set of candidates to minimize both the projection error and the total variation (TV) simultaneously. The accuracy and the robustness of the algorithm are compared with the original DART in simulation experiments performed under (1) a limited number of projections, (2) limited-view and (3) noisy projection conditions.
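
The threshold-selection idea (picking, from a finite candidate set, the threshold whose segmentation best matches the measured projections) can be sketched with a toy 1-D problem. The forward operator and all numbers below are invented for illustration and stand in for a real tomographic projector:

```python
import numpy as np

def best_threshold(recon, gray_levels, candidates, proj_op, measured):
    """Pick the threshold (from a finite candidate set) whose segmented
    image best matches the measured projections: a simplified sketch of
    the selection idea added to DART's segmentation step."""
    lo, hi = gray_levels
    best = None
    for tau in candidates:
        seg = np.where(recon >= tau, hi, lo)
        err = np.linalg.norm(proj_op(seg) - measured)
        if best is None or err < best[1]:
            best = (tau, err)
    return best

# Toy example: a 1-D "image"; projections = sums over the two halves.
truth = np.array([0, 0, 1, 1, 1, 0, 1, 1], dtype=float)
proj = lambda x: np.array([x[:4].sum(), x[4:].sum()])
measured = proj(truth)
recon = truth + 0.4 * np.array([1, -1, -1, 1, -1, 1, 1, -1])  # noisy recon
tau, err = best_threshold(recon, (0.0, 1.0), [0.3, 0.5, 0.7], proj, measured)
print(tau, err)
```

Only the middle candidate reproduces the measured projections exactly on this toy example; the too-low and too-high thresholds mislabel noisy pixels and pay a projection-error penalty.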

  5. Learning mediastinoscopy: the need for education, experience and modern techniques--interdependency of the applied technique and surgeon's training level.

    Science.gov (United States)

    Walles, Thorsten; Friedel, Godehard; Stegherr, Tobias; Steger, Volker

    2013-04-01

    Mediastinoscopy represents the gold standard for invasive mediastinal staging. While learning and teaching the surgical technique are challenging due to the limited accessibility of the operation field, both have benefited from the implementation of video-assisted techniques. However, it has not yet been established whether video-assisted mediastinoscopy improves mediastinal staging in itself. Retrospective single-centre cohort analysis of 657 mediastinoscopies performed at a specialized tertiary care thoracic surgery unit from 1994 to 2006. The number of specimens obtained per procedure and per lymph node station (2, 4, 7, 8 for mediastinoscopy and 2-9 for open lymphadenectomy), the number of lymph node stations examined, and the sensitivity and negative predictive value were calculated, with a focus on the technique employed (video-assisted vs standard technique) and the surgeon's experience. Overall sensitivity was 60%, accuracy was 90% and negative predictive value 88%. With the conventional technique, experience alone improved sensitivity from 49 to 57%, most markedly at the right paratracheal region (from 62 to 82%). With the video-assisted technique, however, experienced surgeons raised sensitivity from 57 to 79%, in contrast to inexperienced surgeons, who lowered sensitivity from 49 to 33%. We found significant differences concerning (i) the total number of specimens taken, (ii) the number of lymph node stations examined, (iii) the number of specimens taken per lymph node station and (iv) true positive mediastinoscopies. The video-assisted technique can significantly improve the results of mediastinoscopy. A thorough education in the modern video-assisted technique is mandatory for thoracic surgeons until they can fully exhaust its potential.

  6. Applications of the parity space technique to the validation of the water level measurement of pressurizer for steady state and transients

    International Nuclear Information System (INIS)

    Zwingelstein, G.; Bath, L.

    1983-01-01

    During the design of disturbance analysis and surveillance systems, safety parameter display systems, computerized operator support systems or advanced control rooms, sensor signal validation is commonly considered the first task to be performed. After an introduction to the anticipated benefits of signal validation techniques and a brief survey of the methods in current practice, a signal validation technique based upon the parity space methodology is presented. The efficiency of the method applied to the detection and identification of five types of failures is illustrated with two examples in which three water level measurements of the pressurizer of a nuclear plant are redundant. In the first example, the use of the analytical redundancy technique is presented when only two identical sensors are available. A detailed description of the dynamic model of the pressurizer is given. In the second example, the case of three identical water level sensors is considered. The performance of the software, developed on a DEC PDP 11 computer, is finally given.
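
The parity space idea for three identical redundant sensors can be illustrated directly: with measurement matrix H = [1, 1, 1]^T, any matrix V whose rows span the left null space of H yields a parity vector p = V y that vanishes for consistent measurements and points in a sensor-specific direction under a single-sensor fault. A minimal sketch (the threshold and readings are invented, not plant data):

```python
import numpy as np

# Parity-space check for three redundant level sensors measuring the same
# scalar x: y = H x + fault, with H = [1, 1, 1]^T. The rows of V form an
# orthonormal basis of the left null space of H (V @ H = 0), so the parity
# vector p = V @ y ignores the true level entirely.
H = np.ones((3, 1))
V = np.array([[1.0, -1.0, 0.0],
              [1.0, 1.0, -2.0]]) / np.array([[np.sqrt(2)], [np.sqrt(6)]])

fault_dirs = V / np.linalg.norm(V, axis=0)   # columns: failure signatures

def diagnose(y, threshold=0.5):
    p = V @ y
    if np.linalg.norm(p) < threshold:
        return None                          # measurements consistent
    # Identify the sensor whose signature is most aligned with p.
    return int(np.argmax(np.abs(fault_dirs.T @ p)))

print(diagnose(np.array([10.0, 10.1, 9.9])))   # consistent readings
print(diagnose(np.array([10.0, 10.0, 13.0])))  # third sensor biased
```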

  7. 6 Sigma DFSS technique which is easy to use

    International Nuclear Information System (INIS)

    2002-01-01

    This book describes the 6 sigma DFSS technique. The contents of this book are: storm of change; way of problem and solution; importance of customer satisfaction; quality improvement is the key to customer satisfaction; quality improvement equals cost cutting; aiming quality at the perfect level; finding the basic cause; data is life; standardization is the foundation of all improvement activity; chief Chang's house; collection of data; setting a goal to improve; experiment is the best way; importance of the last step; X-control power of 6 sigma; and Let's go six-sigma.

  8. A Variational Level Set Approach Based on Local Entropy for Image Segmentation and Bias Field Correction.

    Science.gov (United States)

    Tang, Jian; Jiang, Xiaoliang

    2017-01-01

    Image segmentation has always been a considerable challenge in image analysis and understanding due to the intensity inhomogeneity, which is also commonly known as bias field. In this paper, we present a novel region-based approach based on local entropy for segmenting images and estimating the bias field simultaneously. Firstly, a local Gaussian distribution fitting (LGDF) energy function is defined as a weighted energy integral, where the weight is local entropy derived from a grey level distribution of local image. The means of this objective function have a multiplicative factor that estimates the bias field in the transformed domain. Then, the bias field prior is fully used. Therefore, our model can estimate the bias field more accurately. Finally, minimization of this energy function with a level set regularization term, image segmentation, and bias field estimation can be achieved. Experiments on images of various modalities demonstrated the superior performance of the proposed method when compared with other state-of-the-art approaches.
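
The local-entropy weight at the heart of the energy can be computed directly from the grey-level histogram of a window around each pixel. A minimal sketch (the window size, bin count, and the assumption that intensities lie in [0, 1] are illustrative choices, not the paper's exact functional):

```python
import numpy as np

def local_entropy(image, win=5, bins=16):
    """Shannon entropy of the grey-level histogram in a win x win window
    around each pixel: the kind of weight used to emphasize inhomogeneous
    regions. Assumes intensities in [0, 1]; illustrative sketch only."""
    pad = win // 2
    padded = np.pad(image, pad, mode='reflect')
    quant = np.clip((padded * bins).astype(int), 0, bins - 1)
    H = np.zeros(image.shape)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            window = quant[i:i + win, j:j + win].ravel()
            p = np.bincount(window, minlength=bins) / window.size
            nz = p[p > 0]
            H[i, j] = -(nz * np.log(nz)).sum()
    return H

flat = np.full((8, 8), 0.5)                     # homogeneous region
textured = np.random.default_rng(1).random((8, 8))
ent_flat = local_entropy(flat)
ent_tex = local_entropy(textured)
print(ent_flat.max(), round(ent_tex.mean(), 2))
```

A homogeneous patch has zero local entropy everywhere, so the weight concentrates the energy on inhomogeneous (textured or bias-affected) regions.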

  9. DPASV analytical technique for ppb level uranium analysis

    Science.gov (United States)

    Pal, Sangita; Singha, Mousumi; Meena, Sher Singh

    2018-04-01

    Determining uranium at the ppb level is most crucial for the reuse of water originating in nuclear industries during decontamination of plant effluents generated during uranium (fuel) production, fuel rod fabrication and application in nuclear reactors, as well as the comparatively small amount of effluents produced during laboratory research and development work. Uranium at the percentage level can be analyzed by gravimetry, titration, etc., whereas inductively coupled plasma-atomic emission spectroscopy (ICP-AES) and fluorimetry are well suited to the ppm level. For the ppb level of uranium, inductively coupled plasma-mass spectroscopy (ICP-MS) or Differential Pulse Anodic Stripping Voltammetry (DPASV) serve the purpose. High precision, accuracy and sensitivity are crucial for uranium analysis at the trace (ppb) level, requirements that are satisfied by ICP-MS and the stripping voltammeter. The voltammeter has been found to be less expensive, requires low maintenance and is convenient for measuring uranium in the presence of a large number of other ions in the waste effluent. In this paper, the necessity of quantifying uranium concentration for recovery as well as safe disposal of plant effluent, the working mechanism of the voltammeter with respect to uranium analysis at the ppb level with its standard deviation, and a data comparison with ICP-MS are presented.
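
Trace-level quantification of this kind typically rests on a linear calibration curve: fit the instrument response against standard concentrations, then invert the fit for the unknown. A minimal sketch with invented numbers (not instrument data):

```python
import numpy as np

# Linear calibration for trace (ppb) analysis: fit peak current vs.
# standard concentration, then invert the fit for an unknown sample.
conc = np.array([0.0, 2.0, 4.0, 8.0, 16.0])         # ppb U standards
current = np.array([0.05, 0.46, 0.85, 1.66, 3.27])  # nA, hypothetical
slope, intercept = np.polyfit(conc, current, 1)

unknown_current = 1.05                               # nA, hypothetical
unknown_ppb = (unknown_current - intercept) / slope
print(round(slope, 3), round(unknown_ppb, 2))
```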

  10. Evaluation of three instrumentation techniques at the precision of apical stop and apical sealing of obturation

    Directory of Open Access Journals (Sweden)

    Özgür Genç

    2011-08-01

    Full Text Available OBJECTIVE: The aim of this study was to investigate the ability of two NiTi rotary apical preparation techniques used with an electronic apex locator-integrated endodontic motor and a manual technique to create an apical stop at a predetermined level (0.5 mm short of the apical foramen in teeth with disrupted apical constriction, and to evaluate microleakage following obturation in such prepared teeth. MATERIAL AND METHODS: 85 intact human mandibular permanent incisors with single root canal were accessed and the apical constriction was disrupted using a #25 K-file. The teeth were embedded in alginate and instrumented to #40 using rotary Lightspeed or S-Apex techniques or stainless-steel K-files. Distance between the apical foramen and the created apical stop was measured to an accuracy of 0.01 mm. In another set of instrumented teeth, root canals were obturated using gutta-percha and sealer, and leakage was tested at 1 week and 3 months using a fluid filtration device. RESULTS: All techniques performed slightly short of the predetermined level. Closest preparation to the predetermined level was with the manual technique and the farthest was with S-Apex. A significant difference was found between the performances of these two techniques (p<0.05. Lightspeed ranked in between. Leakage was similar for all techniques at either period. However, all groups leaked significantly more at 3 months compared to 1 week (p<0.05. CONCLUSIONS: Despite statistically significant differences found among the techniques, deviations from the predetermined level were small and clinically acceptable for all techniques. Leakage following obturation was comparable in all groups.

  11. Ventilation techniques and radon in small houses

    International Nuclear Information System (INIS)

    Keskinen, J.; Graeffe, G.; Janka, K.

    1988-01-01

    Indoor radon is the main cause of radiation exposure in Finland. The National Board of Health set the recommended concentration limits in 1986: an action level of 800 Bq/m³ and a planning value of 200 Bq/m³ for new buildings. The 800 Bq/m³ concentration is estimated to be exceeded in 1.4% of the housing stock. This rather high number has motivated a number of studies concerning countermeasures against radon in existing houses. The purpose of this study was to find out possible remedial actions against radon using standard ventilation techniques. The ventilation rates were not increased above 0.7 air changes per hour in order to have a realistic view of the possibilities of state-of-the-art techniques. Special attention was given to methods that would be generally applicable to a large number of existing dwellings. Results of a pilot study with six small houses with established high radon concentrations are reported.
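
The effect of ventilation on indoor radon can be estimated with a single-zone mass balance: at steady state C = E / (V * (λ + n)), where E is the radon entry rate, V the room volume, λ the Rn-222 decay constant and n the air-change rate. A sketch with invented numbers, spanning air-change rates up to the 0.7 per hour used in the study:

```python
import math

# Single-zone steady-state radon balance: C = E / (V * (lam + n)), with
# E the entry rate (Bq/h), V the volume (m^3), lam the Rn-222 decay
# constant (1/h) and n the air-change rate (1/h). Numbers are illustrative.
lam = math.log(2) / (3.82 * 24)   # Rn-222 half-life ~3.82 d, in 1/h
E, V = 20000.0, 250.0             # hypothetical entry rate and volume

def steady_state(n):
    return E / (V * (lam + n))

for n in (0.1, 0.35, 0.7):        # air changes per hour
    print(n, round(steady_state(n)))
```

Because the decay constant is tiny compared with realistic air-change rates, the steady-state concentration is nearly inversely proportional to the ventilation rate.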

  12. Decoupling Solar Variability and Instrument Trends Using the Multiple Same-Irradiance-Level (MuSIL) Analysis Technique

    Science.gov (United States)

    Woods, Thomas N.; Eparvier, Francis G.; Harder, Jerald; Snow, Martin

    2018-05-01

    The solar spectral irradiance (SSI) dataset is a key record for studying and understanding the energetics and radiation balance in Earth's environment. Understanding the long-term variations of the SSI over timescales of the 11-year solar activity cycle and longer is critical for many Sun-Earth research topics. Satellite measurements of the SSI have been made since the 1970s, most of them in the ultraviolet, but recently also in the visible and near-infrared. A limiting factor for the accuracy of previous solar variability results is the uncertainty in the instrument degradation corrections, which need to be fairly large relative to the amount of solar cycle variability at some wavelengths. The primary objective of this investigation has been to separate out solar cycle variability and any residual uncorrected instrumental trends in the SSI measurements from the Solar Radiation and Climate Experiment (SORCE) mission and the Thermosphere, Ionosphere, Mesosphere, Energetics and Dynamics (TIMED) mission. A new technique called the Multiple Same-Irradiance-Level (MuSIL) analysis has been developed, which examines an SSI time series at different levels of solar activity to provide long-term trends in an SSI record; the most common result is a downward trend that most likely stems from uncorrected instrument degradation. This technique has been applied to each wavelength in the SSI records from SORCE (2003 - present) and TIMED (2002 - present) to provide new solar cycle variability results between 27 nm and 1600 nm with a resolution of about 1 nm at most wavelengths. This technique, which was validated with the highly accurate total solar irradiance (TSI) record, has an estimated relative uncertainty of about 5% of the measured solar cycle variability. The MuSIL results are further validated by comparing the new solar cycle variability results from different solar cycles.
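
The core MuSIL idea can be illustrated with synthetic data: at times when an activity proxy sits at the same level, the true irradiance should repeat, so a linear trend in the *measured* values at those times estimates uncorrected instrument degradation. All numbers below are invented:

```python
import numpy as np

# MuSIL-style check (greatly simplified): pick times when a solar-activity
# proxy is at one fixed level, then fit a linear trend to the measured
# irradiance at those times. Synthetic data with a known 2%/yr drift.
rng = np.random.default_rng(2)
t = np.linspace(0, 10, 2000)                   # years
proxy = np.sin(2 * np.pi * t / 11.0) ** 2      # fake 11-year activity cycle
true_irr = 100.0 + 5.0 * proxy                 # irradiance tracks activity
measured = true_irr * np.exp(-0.02 * t)        # uncorrected degradation
measured += 0.05 * rng.standard_normal(t.size)

level = (proxy > 0.45) & (proxy < 0.55)        # one same-irradiance level
drift_per_year, _ = np.polyfit(t[level], measured[level], 1)
print(round(drift_per_year, 2))
```

At constant activity the solar contribution is constant, so the fitted slope recovers the instrumental drift (about -1.8 units per year here, the chord of the 2%/yr exponential decay).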

  13. Multi-level decision making models, methods and applications

    CERN Document Server

    Zhang, Guangquan; Gao, Ya

    2015-01-01

    This monograph presents new developments in multi-level decision-making theory, technique and method in both modeling and solution issues. It especially presents how a decision support system can support managers in reaching a solution to a multi-level decision problem in practice. This monograph combines decision theories, methods, algorithms and applications effectively. It discusses in detail the models and solution algorithms of each issue of bi-level and tri-level decision-making, such as multi-leaders, multi-followers, multi-objectives, rule-set-based, and fuzzy parameters. Potential readers include organizational managers and practicing professionals, who can use the methods and software provided to solve their real decision problems; PhD students and researchers in the areas of bi-level and multi-level decision-making and decision support systems; students at an advanced undergraduate, master’s level in information systems, business administration, or the application of computer science.  

  14. Out-of-order parallel discrete event simulation for electronic system-level design

    CERN Document Server

    Chen, Weiwei

    2014-01-01

    This book offers readers a set of new approaches, tools, and techniques for facing the challenges of parallelization in embedded system design. It provides an advanced parallel simulation infrastructure for efficient and effective system-level model validation and development, so as to build better products in less time. Since parallel discrete event simulation (PDES) has the potential to exploit the underlying parallel computational capability in today's multi-core simulation hosts, the author begins by reviewing the parallelization of discrete event simulation, identifying

  15. Computation of expectation values from vibrational coupled-cluster at the two-mode coupling level

    DEFF Research Database (Denmark)

    Zoccante, Alberto; Seidler, Peter; Christiansen, Ove

    2011-01-01

    In this work we show how the vibrational coupled-cluster method at the two-mode coupling level can be used to calculate zero-point vibrational averages of properties. A technique is presented in which any expectation value can be calculated using a single set of Lagrangian multipliers computed

  16. Effect of a uniform magnetic field on dielectric two-phase bubbly flows using the level set method

    International Nuclear Information System (INIS)

    Ansari, M.R.; Hadidi, A.; Nimvari, M.E.

    2012-01-01

    In this study, the behavior of a single bubble in a dielectric viscous fluid under a uniform magnetic field has been simulated numerically using the Level Set method in two-phase bubbly flow. The two-phase bubbly flow was considered to be laminar and homogeneous. Deformation of the bubble was considered to be due to buoyancy and magnetic forces induced by the externally applied magnetic field. A computer code was developed to solve the problem using the flow field, the interface of the two phases, and the magnetic field. The Finite Volume method was applied using the SIMPLE algorithm to discretize the governing equations. Using this algorithm enables us to calculate the pressure parameter, which has been eliminated by previous researchers because of the complexity of the two-phase flow. The finite difference method was used to solve the magnetic field equation. The results outlined in the present study agree well with the existing experimental data and numerical results. These results show that the magnetic field affects and controls the shape, size, velocity, and location of the bubble. - Highlights: ► The behavior of a single bubble was simulated numerically. ► The bubble is immersed in a dielectric viscous fluid. ► A uniform magnetic field is used to study the bubble's behavior. ► Deformation of the bubble was captured using the Level Set method. ► The magnetic field affects the shape, size, velocity, and location of the bubble.

  17. Identification of Arbitrary Zonation in Groundwater Parameters using the Level Set Method and a Parallel Genetic Algorithm

    Science.gov (United States)

    Lei, H.; Lu, Z.; Vesselinov, V. V.; Ye, M.

    2017-12-01

    Simultaneous identification of both the zonation structure of aquifer heterogeneity and the hydrogeological parameters associated with these zones is challenging, especially for complex subsurface heterogeneity fields. In this study, a new approach, based on the combination of the level set method and a parallel genetic algorithm is proposed. Starting with an initial guess for the zonation field (including both zonation structure and the hydraulic properties of each zone), the level set method ensures that material interfaces are evolved through the inverse process such that the total residual between the simulated and observed state variables (hydraulic head) always decreases, which means that the inversion result depends on the initial guess field and the minimization process might fail if it encounters a local minimum. To find the global minimum, the genetic algorithm (GA) is utilized to explore the parameters that define initial guess fields, and the minimal total residual corresponding to each initial guess field is considered as the fitness function value in the GA. Due to the expensive evaluation of the fitness function, a parallel GA is adapted in combination with a simulated annealing algorithm. The new approach has been applied to several synthetic cases in both steady-state and transient flow fields, including a case with real flow conditions at the chromium contaminant site at the Los Alamos National Laboratory. The results show that this approach is capable of identifying the arbitrary zonation structures of aquifer heterogeneity and the hydrogeological parameters associated with these zones effectively.
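
The outer loop of the approach (a genetic algorithm exploring the parameters that define initial-guess fields, with the minimized residual as fitness) can be caricatured with a toy forward model. The inner level-set refinement is omitted and all numbers are invented:

```python
import numpy as np

# Toy GA over zonation parameters (boundary position b, zone values k1, k2).
# The forward model is a trivial piecewise-constant field standing in for
# the flow simulation; fitness is the data misfit. Illustration only.
rng = np.random.default_rng(3)

def forward(params, n=20):
    b, k1, k2 = params
    return np.where(np.arange(n) < int(round(b)), k1, k2)

obs = forward((12, 1.0, 5.0)) + 0.05 * rng.standard_normal(20)

def misfit(params):
    return np.linalg.norm(forward(params) - obs)

pop = np.column_stack([rng.uniform(1, 19, 40),     # boundary position
                       rng.uniform(0.1, 10, 40),   # zone 1 value
                       rng.uniform(0.1, 10, 40)])  # zone 2 value
for _ in range(60):
    fitness = np.array([misfit(p) for p in pop])
    parents = pop[np.argsort(fitness)[:10]]        # elitist truncation
    children = []
    for _ in range(len(pop) - len(parents)):
        pa, pb = parents[rng.integers(10)], parents[rng.integers(10)]
        mix = rng.random(3)
        children.append(mix * pa + (1 - mix) * pb  # blend crossover
                        + rng.normal(0, 0.2, 3))   # mutation
    pop = np.vstack([parents, children])

best = min(pop, key=misfit)
print(best.round(2), round(misfit(best), 3))
```

Keeping the best individuals unchanged each generation (elitism) makes the best misfit non-increasing, the toy analogue of the paper's requirement that the residual always decreases.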

  18. Identification of noise in linear data sets by factor analysis

    International Nuclear Information System (INIS)

    Roscoe, B.A.; Hopke, Ph.K.

    1982-01-01

    A technique which has the ability to identify bad data points after the data has been generated is classical factor analysis. The ability of classical factor analysis to identify two different types of data errors makes it ideally suited for scanning large data sets. Since the results yielded by factor analysis indicate correlations between parameters, one must know something about the nature of the data set and the analytical techniques used to obtain it to confidently isolate errors. (author)
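
The screening idea can be sketched with a principal-component approximation to factor analysis: reconstruct each observation from the leading factors and flag observations whose residual is anomalously large. Synthetic data, illustrative only:

```python
import numpy as np

# Flagging suspect data points: observations generated from two latent
# factors should be well reconstructed by the two leading components;
# a corrupted row leaves a large residual. Synthetic data throughout.
rng = np.random.default_rng(4)
scores = rng.standard_normal((200, 2))            # two latent factors
loadings = rng.standard_normal((2, 6))
X = scores @ loadings + 0.05 * rng.standard_normal((200, 6))
X[17] += np.array([0, 0, 4.0, 0, 0, 0])           # inject one bad point

Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
k = 2                                             # keep two factors
recon = U[:, :k] * s[:k] @ Vt[:k]
resid = np.linalg.norm(Xc - recon, axis=1)
flagged = int(np.argmax(resid))
print(flagged, round(resid[flagged], 2))
```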

  19. ML at ATLAS&CMS : setting the stage

    CERN Multimedia

    CERN. Geneva

    2015-01-01

    In the early days of the LHC, the canonical problems of classification and regression were mostly addressed using simple cut-based techniques. Today, ML techniques (some already pioneered in pre-LHC or non-collider experiments) play a fundamental role in the toolbox of any experimentalist. The talk will introduce, through a representative collection of examples, the problems addressed with ML techniques at the LHC. The goal of the talk is to set the stage for a constructive discussion with non-HEP ML practitioners, focusing on the specificities of HEP applications.

  20. Automatic Fontanel Extraction from Newborns' CT Images Using Variational Level Set

    Science.gov (United States)

    Kazemi, Kamran; Ghadimi, Sona; Lyaghat, Alireza; Tarighati, Alla; Golshaeyan, Narjes; Abrishami-Moghaddam, Hamid; Grebe, Reinhard; Gondary-Jouet, Catherine; Wallois, Fabrice

    A realistic head model is needed for the source localization methods used for the study of epilepsy in neonates applying electroencephalographic (EEG) measurements from the scalp. The earliest models consider the head as a series of concentric spheres, each layer corresponding to a different tissue whose conductivity is assumed to be homogeneous. The results of the source reconstruction depend highly on the electric conductivities of the tissues forming the head. The most used model consists of three layers (scalp, skull, and intracranial). Most of the major bones of the neonate skull are ossified at birth but can slightly move relative to each other. This is due to the sutures, fibrous membranes that at this stage of development connect the already ossified flat bones of the neurocranium. These weak parts of the neurocranium are called fontanels. Thus it is important to enter the exact geometry of the fontanels and flat bones into a source reconstruction, because they show pronounced differences in conductivity. Computed tomography (CT) imaging provides an excellent tool for non-invasive investigation of the skull, which appears in high contrast to all other tissues, while the fontanels can only be identified as an absence of bone, gaps in the skull formed by flat bone. Therefore, the aim of this paper is to extract the fontanels from CT images applying a variational level set method. We applied the proposed method to CT images of five different subjects. The automatically extracted fontanels show good agreement with the manually extracted ones.

  1. Settings for Physical Activity – Developing a Site-specific Physical Activity Behavior Model based on Multi-level Intervention Studies

    DEFF Research Database (Denmark)

    Troelsen, Jens; Klinker, Charlotte Demant; Breum, Lars

    Settings for Physical Activity – Developing a Site-specific Physical Activity Behavior Model based on Multi-level Intervention Studies Introduction: Ecological models of health behavior have potential as a theoretical framework to comprehend the multiple levels of factors influencing physical...... activity (PA). The potential is shown by the fact that there has been a dramatic increase in the application of ecological models in research and practice. One proposed core principle is that an ecological model is most powerful if the model is behavior-specific. However, based on multi-level interventions...... to be taken into consideration. A theoretical implication of this finding is to develop a site-specific physical activity behavior model adding a layered structure to the ecological model representing the determinants related to the specific site. Support: This study was supported by TrygFonden, Realdania......

  2. New approaches to wipe sampling methods for antineoplastic and other hazardous drugs in healthcare settings.

    Science.gov (United States)

    Connor, Thomas H; Smith, Jerome P

    2016-09-01

    At the present time, the method of choice for determining surface contamination of the workplace with antineoplastic and other hazardous drugs is surface wipe sampling followed by sample analysis with a variety of analytical techniques. The purpose of this article is to review current methodology for determining the level of surface contamination with hazardous drugs in healthcare settings and to discuss recent advances in this area. In addition, it provides some guidance for conducting surface wipe sampling and sample analysis for these drugs in healthcare settings. Published studies on the use of wipe sampling to measure hazardous drugs on surfaces in healthcare settings were reviewed. These studies include the use of well-documented chromatographic techniques for sample analysis, in addition to newly evolving technology that provides rapid analysis of specific antineoplastic drugs. Methodology for the analysis of surface wipe samples for hazardous drugs is reviewed, including the purposes, technical factors, sampling strategy, materials required, and limitations. The use of lateral flow immunoassay (LFIA) and fluorescence covalent microbead immunosorbent assay (FCMIA) for surface wipe sample evaluation is also discussed. Current recommendations are that all healthcare settings where antineoplastic and other hazardous drugs are handled include surface wipe sampling as part of a comprehensive hazardous drug-safe handling program. Surface wipe sampling may be used to characterize potential occupational dermal exposure risk and to evaluate the effectiveness of implemented controls and the overall safety program. New technology, although currently limited in scope, may make wipe sampling for hazardous drugs more routine and less costly, and provide a shorter response time than the classical analytical techniques now in use.
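    The core quantitative step in wipe sampling, converting an analytical result into a surface loading corrected for wipe recovery efficiency, is a simple calculation. The function below is a generic sketch of that standard conversion; the numbers are illustrative and do not come from the article.

    ```python
    def surface_loading_ng_per_cm2(mass_ng, area_cm2, recovery_fraction):
        """Surface loading (ng/cm2) from a wipe sample: analyte mass recovered,
        divided by wiped area, corrected for the wipe's recovery efficiency.
        All values here are illustrative, not data from the reviewed studies."""
        if not 0 < recovery_fraction <= 1:
            raise ValueError("recovery_fraction must be in (0, 1]")
        return mass_ng / (area_cm2 * recovery_fraction)

    # Example: 45 ng recovered from a 600 cm2 wiped area with 75% recovery
    loading = surface_loading_ng_per_cm2(45.0, 600.0, 0.75)  # -> 0.1 ng/cm2
    ```

    Recovery correction matters because reported surface limits are expressed per unit area of the actual surface, not per wipe.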

  3. Introduction to the level-set full field modeling of laths spheroidization phenomenon in α/β titanium alloys

    Directory of Open Access Journals (Sweden)

    Polychronopoulou D.

    2016-01-01

    Full Text Available Fragmentation of α lamellae and the subsequent spheroidization of α laths in α/β titanium alloys, occurring during and after deformation, are well-known phenomena. We illustrate the development of a new finite element methodology to model them. This methodology is based on a level set framework to model the deformation and the simultaneous and/or subsequent interface kinetics. We focus, for now, on modeling surface diffusion at the α/β phase interfaces and motion by mean curvature at the α/α grain interfaces.
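    Motion by mean curvature, mentioned above for the α/α grain interfaces, can be sketched in a few lines: for a level-set function close to a signed distance, the curvature flow dφ/dt = κ|∇φ| reduces approximately to a Laplacian update, under which a circular interface shrinks. The grid, radius, and step sizes below are illustrative only, not taken from the paper's finite element setting.

    ```python
    import numpy as np

    def curvature_flow_step(phi, dt=0.1):
        """One explicit step of motion by mean curvature.
        For a signed-distance phi this is approximately dphi/dt = Laplacian(phi)."""
        lap = (np.roll(phi, 1, 0) + np.roll(phi, -1, 0)
               + np.roll(phi, 1, 1) + np.roll(phi, -1, 1) - 4.0 * phi)
        return phi + dt * lap

    # Signed distance to a circle of radius 15 (inside > 0) on an 80x80 grid
    x, y = np.meshgrid(np.arange(80.0), np.arange(80.0), indexing="ij")
    phi = 15.0 - np.hypot(x - 40.0, y - 40.0)
    area0 = int((phi > 0).sum())
    for _ in range(100):
        phi = curvature_flow_step(phi)
    area1 = int((phi > 0).sum())  # the circular interface shrinks under curvature flow
    ```

    A circle under exact curvature flow shrinks as R(t)^2 = R(0)^2 - 2t, so the enclosed area decreases monotonically, which the discrete sketch reproduces qualitatively.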

  4. Meniscal tears: comparison of half-Fourier technique and conventional MR imaging

    International Nuclear Information System (INIS)

    Shabana, Wael; Maeseneer, Michel de; Machiels, Freddy; Ridder, Filip de; Osteaux, Michel

    2003-01-01

    Purpose: To determine whether half-Fourier MR image acquisition technique can provide similar information to that of conventional MR acquisition technique for evaluation of meniscal tears. Materials and methods: We studied 101 menisci in 52 patients who were referred for evaluation of meniscal tears. Sagittal MR images of the knee were obtained for all patients by using proton density and T2-weighted SE sequences on a 1-T clinical system. The half-Fourier technique and conventional technique were used for all patients. All other imaging parameters were identical for both sequences (TR/TE=2400/20,70; 3 mm slice thickness; 200x256 matrix; field of view, 200; one signal acquired). Both sets of images were filmed with standard window and level settings. Images were randomised and interpreted independently by two radiologists for the presence of meniscal tears. Images were also subjectively assessed for image quality using a five-point grading scale. Results: On half-Fourier images, Reader 1 interpreted 23 menisci as torn, compared to 28 for Reader 2. On conventional images, Reader 1 interpreted 24 menisci as torn, compared to 26 for Reader 2. Agreement between interpretation of the conventional and that of the half-Fourier images was 99% for Reader 1, and 98% for Reader 2. Agreement between readers for the half-Fourier images was 95%, and for the conventional images 96%. No statistically significant difference was found in the subjective evaluation of image quality between the conventional and half-Fourier images. Conclusion: The half-Fourier acquisition technique compares favourably with the conventional technique for the evaluation of meniscal tears

  5. Patient- and population-level health consequences of discontinuing antiretroviral therapy in settings with inadequate HIV treatment availability

    Directory of Open Access Journals (Sweden)

    Kimmel April D

    2012-09-01

    Full Text Available Abstract Background In resource-limited settings, HIV budgets are flattening or decreasing. A policy of discontinuing antiretroviral therapy (ART) after HIV treatment failure was modeled to highlight trade-offs among competing policy goals of optimizing individual and population health outcomes. Methods In settings with two available ART regimens, we assessed two strategies: (1) continue ART after second-line failure (Status Quo) and (2) discontinue ART after second-line failure (Alternative). A computer model simulated outcomes for a single cohort of newly detected, HIV-infected individuals. Projections were fed into a population-level model allowing multiple cohorts to compete for ART with constraints on treatment capacity. In the Alternative strategy, discontinuation of second-line ART occurred upon detection of antiretroviral failure, as specified by WHO guidelines. Those discontinuing failed ART experienced an increased risk of AIDS-related mortality compared to those continuing ART. Results At the population level, the Alternative strategy increased the mean number initiating ART annually by 1,100 individuals (+18.7%) to 6,980 compared to the Status Quo. More individuals initiating ART under the Alternative strategy increased total life-years by 15,000 (+2.8%) to 555,000, compared to the Status Quo. Although more individuals received treatment under the Alternative strategy, life expectancy for those treated decreased by 0.7 years (−8.0%) to 8.1 years compared to the Status Quo. In a cohort of treated patients only, 600 more individuals (+27.1%) died by 5 years under the Alternative strategy compared to the Status Quo. Results were sensitive to the timing of detection of ART failure, the number of ART regimens, and treatment capacity. Although we believe the results robust in the short-term, this analysis reflects settings where HIV case detection occurs late in the disease course and treatment capacity and the incidence of newly detected patients are

  6. [Dot1 and Set2 Histone Methylases Control the Spontaneous and UV-Induced Mutagenesis Levels in the Saccharomyces cerevisiae Yeasts].

    Science.gov (United States)

    Kozhina, T N; Evstiukhina, T A; Peshekhonov, V T; Chernenkov, A Yu; Korolev, V G

    2016-03-01

    In the Saccharomyces cerevisiae yeasts, the DOT1 gene product provides methylation of lysine 79 (K79) of histone H3, and the SET2 gene product provides methylation of lysine 36 (K36) of the same histone. We determined that the dot1 and set2 mutants suppress UV-induced mutagenesis to an equally high degree. The dot1 mutant demonstrated statistically higher sensitivity to low doses of MMC than the wild-type strain. The analysis of the interaction between the dot1 and rad52 mutations revealed a considerable level of spontaneous cell death in the double dot1 rad52 mutant. We observed strong suppression of gamma-induced mutagenesis in the set2 mutant. We determined that the dot1 and set2 mutations decrease the spontaneous mutagenesis rate in both single and double mutants. The epistatic interaction between the dot1 and set2 mutations and the almost identical sensitivity of the corresponding mutants to different types of DNA damage allow one to conclude that both genes are involved in the control of the same DNA repair pathways, the homologous-recombination-based and the postreplicative DNA repair.

  7. On the modeling of bubble evolution and transport using coupled level-set/CFD method

    International Nuclear Information System (INIS)

    Bartlomiej Wierzbicki; Steven P Antal; Michael Z Podowski

    2005-01-01

    Full text of publication follows: The ability to predict the shape of gas/liquid/solid interfaces is important for various multiphase flow and heat transfer applications. Specific issues of interest to nuclear reactor thermal-hydraulics include the evolution of the shape of bubbles attached to solid surfaces during nucleation, bubble surface interactions in complex geometries, etc. Additional problems, making the overall task even more complicated, are associated with the effect of material properties, which may be significantly altered by the addition of minute amounts of impurities such as surfactants or nano-particles. The present paper is concerned with the development of an innovative approach to model the time-dependent shape of gas/liquid interfaces in the presence of solid walls. The proposed approach combines a modified level-set method with an advanced CFD code, NPHASE. The coupled numerical solver can be used to simulate the evolution of gas/liquid interfaces in two-phase flows for a variety of geometries and flow conditions, from individual bubbles to free surfaces (stratified flows). The issues discussed in the full paper will include: a description of the novel aspects of the proposed level-set-based method, an overview of the NPHASE code modeling framework, and a description of the coupling between these two elements of the overall model. Particular attention will be given to the consistency and completeness of the model formulation for the interfacial phenomena near the liquid/gas/solid triple line, and to the impact of the proposed numerical approach on the accuracy and consistency of predictions. The accuracy will be measured in terms of both the calculated shape of the interfaces and the gas and liquid velocity fields around the interfaces and in the entire computational domain. The results of model testing and validation will also be shown in the full paper. The situations analyzed will include: bubbles of different sizes and varying

  8. COSPEDTree: COuplet Supertree by Equivalence Partitioning of Taxa Set and DAG Formation.

    Science.gov (United States)

    Bhattacharyya, Sourya; Mukherjee, Jayanta

    2015-01-01

    From a set of phylogenetic trees with overlapping taxa sets, a supertree exhibits evolutionary relationships among all input taxa. The key is to resolve the relationships between individual taxa subsets that are contradictory with respect to the input trees. Formulations of this NP-hard problem employ either local search heuristics to reduce the tree search space, or resolve the conflicts with respect to fixed- or varying-size subtree-level decompositions. Different approximation techniques produce supertrees with considerable performance variations. Moreover, the majority of the algorithms involve high computational complexity and are thus not suitable for use on large biological data sets. The current study presents COSPEDTree, a novel method for supertree construction. The technique resolves source tree conflicts by analyzing couplet (taxa pair) relationships for each source tree. Subsequently, each taxa pair is resolved to a single relation. To prioritize the consensus relations among individual taxa pairs, greedy scoring is employed to assign higher score values to the consensus relations of a taxa pair. The selected set of relations resolving the individual taxa pairs is then used to construct a directed acyclic graph (DAG). Each vertex of the DAG represents a taxa subset inferred from the same speciation event; thus, COSPEDTree can generate non-binary supertrees as well. A depth-first traversal of this DAG yields the final supertree. According to performance metrics on branch dissimilarities (such as FP, FN and RF), COSPEDTree produces mostly conservative, well-resolved supertrees. Specifically, its RF metrics are mostly lower than those of the reference approaches, and its FP values are lower than all but the strictly conservative (or veto) approaches. COSPEDTree has worst-case time and space complexities of cubic and quadratic order, respectively, better than or comparable to the reference approaches.
Such high performance and low computational costs enable COSPEDTree to be
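    The couplet-based consensus at the heart of the method can be sketched as follows. This is a simplified stand-in (majority voting over unordered taxa pairs) for COSPEDTree's actual scoring and DAG construction, and the trees, taxon names, and relation encoding are invented for illustration.

    ```python
    from collections import Counter, defaultdict

    def resolve_couplets(source_relations):
        """Greedy consensus: for each taxa pair keep the relation supported by
        the most source trees (a toy version of COSPEDTree's couplet scoring)."""
        votes = defaultdict(Counter)
        for tree_rels in source_relations:
            for (a, b), rel in tree_rels.items():
                votes[frozenset((a, b))][rel] += 1
        # most_common(1) picks the highest-scoring relation for each pair
        return {pair: cnt.most_common(1)[0][0] for pair, cnt in votes.items()}

    # Hypothetical couplet relations extracted from three source trees:
    # 'sib' = taxa from the same speciation event, 'anc' = ancestor/descendant.
    t1 = {("A", "B"): "sib", ("A", "C"): "anc", ("B", "C"): "anc"}
    t2 = {("A", "B"): "sib", ("A", "C"): "anc"}
    t3 = {("A", "B"): "anc", ("B", "C"): "anc"}
    consensus = resolve_couplets([t1, t2, t3])
    ```

    In the real method the chosen per-couplet relations then define DAG edges (same-speciation-event pairs collapse into one vertex), and a depth-first traversal of that DAG emits the supertree.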

  9. [Analysis of genomic DNA methylation level in radish under cadmium stress by methylation-sensitive amplified polymorphism technique].

    Science.gov (United States)

    Yang, Jin-Lan; Liu, Li-Wang; Gong, Yi-Qin; Huang, Dan-Qiong; Wang, Feng; He, Ling-Li

    2007-06-01

    The level of cytosine methylation induced by cadmium in the radish (Raphanus sativus L.) genome was analysed using the methylation-sensitive amplified polymorphism (MSAP) technique. The MSAP ratios in radish seedlings exposed to cadmium chloride at concentrations of 50, 250 and 500 mg/L were 37%, 43% and 51%, respectively, versus 34% in the control; the full methylation levels (C(m)CGG in both strands) were 23%, 25% and 27%, respectively, versus 22% in the control. The increases in the MSAP ratio and the full methylation level indicated that de novo methylation occurred at some 5'-CCGG sites under Cd stress. There was a significant positive correlation between the increase in total DNA methylation level and CdCl(2) concentration. Four types of MSAP patterns were identified among the CdCl(2) treatments and the control: de novo methylation, de-methylation, atypical patterns, and no change in methylation pattern. The DNA methylation alteration in plants treated with CdCl(2) occurred mainly through de novo methylation.
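    The reported positive correlation between CdCl2 concentration and total methylation level can be illustrated with a plain Pearson coefficient over the three treatment points quoted in the abstract. This is only an illustration of the calculation, not a reanalysis of the study data.

    ```python
    def pearson_r(xs, ys):
        """Plain Pearson correlation coefficient between two equal-length series."""
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        sx = sum((x - mx) ** 2 for x in xs) ** 0.5
        sy = sum((y - my) ** 2 for y in ys) ** 0.5
        return cov / (sx * sy)

    cd_mg_per_l = [50, 250, 500]   # CdCl2 treatment levels from the abstract
    msap_pct = [37, 43, 51]        # total MSAP ratios reported for each level
    r = pearson_r(cd_mg_per_l, msap_pct)  # close to +1: strong positive trend
    ```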

  10. A novel technique for die-level post-processing of released optical MEMS

    International Nuclear Information System (INIS)

    Elsayed, Mohannad Y; Beaulieu, Philippe-Olivier; Briere, Jonathan; Ménard, Michaël; Nabki, Frederic

    2016-01-01

    This work presents a novel die-level post-processing technique for dies containing released movable structures. The procedure was applied to microelectromechanical systems (MEMS) chips fabricated in a commercial process, SOIMUMPs from MEMSCAP. It allows a clean DRIE etch of sidewalls on the diced chips, enabling optical testing of the pre-released MEMS mirrors through the chip edges. The etched patterns are defined by photolithography using photoresist spray coating. The photoresist thickness is tuned to create photoresist bridges over the pre-released gaps, protecting the released structures during the subsequent wet processing steps. The chips are then subjected to a sequence of wet and dry etching steps prior to dry photoresist removal in O2 plasma. Processed micromirrors were tested and found to rotate similarly to devices without post-processing, demonstrating that the procedure does not significantly affect the mechanical performance of the devices. (technical note)

  11. LANDING TECHNIQUES IN BEACH VOLLEYBALL

    Directory of Open Access Journals (Sweden)

    Markus Tilp

    2013-09-01

    Full Text Available The aims of the present study were to establish a detailed and representative record of landing techniques (two-, left-, and right-footed landings) in professional beach volleyball and to compare the data with those of indoor volleyball. Beach volleyball data were retrieved from videos taken at FIVB World Tour tournaments. Landing techniques were compared across the different beach and indoor volleyball skills (serve, set, attack, and block) with regard to sex, playing technique, and court position. Significant differences were observed between men and women in landings following block actions (χ²(2) = 18.19, p < 0.01) but not following serve, set, and attack actions. Following blocking, men landed more often on one foot than women. Further differences in landings following serve and attack with regard to playing technique and position were mainly observed in men. The comparison with landing techniques in indoor volleyball revealed overall differences both in men (χ²(2) = 161.4, p < 0.01) and women (χ²(2) = 84.91, p < 0.01). Beach volleyball players land more often on both feet than indoor volleyball players. Besides the softer surface in beach volleyball, and the resulting lower loads, these results might be another reason for the fewer injuries and overuse conditions compared to indoor volleyball.

  12. Automated volume analysis of head and neck lesions on CT scans using 3D level set segmentation

    International Nuclear Information System (INIS)

    Street, Ethan; Hadjiiski, Lubomir; Sahiner, Berkman; Gujar, Sachin; Ibrahim, Mohannad; Mukherji, Suresh K.; Chan, Heang-Ping

    2007-01-01

    The authors have developed a semiautomatic system for segmentation of a diverse set of lesions in head and neck CT scans. The system takes as input an approximate bounding box, and uses a multistage level set to perform the final segmentation. A data set consisting of 69 lesions marked on 33 scans from 23 patients was used to evaluate the performance of the system. The contours from automatic segmentation were compared to both 2D and 3D gold standard contours manually drawn by three experienced radiologists. Three performance metric measures were used for the comparison. In addition, a radiologist provided quality ratings on a 1 to 10 scale for all of the automatic segmentations. For this pilot study, the authors observed that the differences between the automatic and gold standard contours were larger than the interobserver differences. However, the system performed comparably to the radiologists, achieving an average area intersection ratio of 85.4% compared to an average of 91.2% between two radiologists. The average absolute area error was 21.1% compared to 10.8%, and the average 2D distance was 1.38 mm compared to 0.84 mm between the radiologists. In addition, the quality rating data showed that, despite the very lax assumptions made on the lesion characteristics in designing the system, the automatic contours approximated many of the lesions very well

  13. Two-phase electro-hydrodynamic flow modeling by a conservative level set model.

    Science.gov (United States)

    Lin, Yuan

    2013-03-01

    The principles of electro-hydrodynamic (EHD) flow have been known for more than a century and have been adopted for various industrial applications, for example, fluid mixing and demixing. Analytical solutions of such EHD flow exist only in a limited number of scenarios, for example, predicting a small deformation of a single droplet in a uniform electric field. Numerical modeling of such phenomena can provide significant insights into EHD multiphase flows. During the last decade, many numerical results have been reported, providing novel and useful tools for studying multiphase EHD flow. Based on a conservative level set method, the proposed model is able to simulate large deformations of a droplet by a steady electric field, which is beyond the region of theoretical prediction. The model is validated for both leaky dielectrics and perfect dielectrics, and is found to be in excellent agreement with existing analytical solutions and numerical studies in the literature. Furthermore, simulations of the deformation of a water droplet in decyl alcohol in a steady electric field match published experimental data better than the theoretical prediction for large deformations. Therefore the proposed model can serve as a practical and accurate tool for simulating two-phase EHD flow. © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
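    A conservative level-set representation, as opposed to a signed-distance one, uses a smeared, bounded indicator function whose integral (the phase volume) is easy to conserve under advection. A minimal sketch of the profile (an Olsson-Kreiss-style hyperbolic tangent, with an illustrative interface thickness) is:

    ```python
    import numpy as np

    def conservative_phi(signed_dist, eps=1.0):
        """Conservative level-set field: a smeared Heaviside of the signed
        distance, phi = 0.5*(tanh(d/(2*eps)) + 1), bounded in (0, 1) with the
        interface located at phi = 0.5. eps sets the interface half-thickness
        (value here is illustrative)."""
        return 0.5 * (np.tanh(signed_dist / (2.0 * eps)) + 1.0)

    d = np.linspace(-5.0, 5.0, 101)  # signed distance across the interface
    phi = conservative_phi(d)        # smooth, monotone transition from 0 to 1
    ```

    Because phi stays in (0, 1) and transitions over a fixed thickness, mass loss during advection can be controlled far better than with a raw signed-distance function, which is the motivation for using this formulation in two-phase EHD simulations.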

  14. Clustering with Instance and Attribute Level Side Information

    Directory of Open Access Journals (Sweden)

    Jinlong Wang

    2010-12-01

    Full Text Available Selecting a suitable proximity measure is one of the fundamental tasks in clustering. How to effectively utilize all available side information, including instance-level information in the form of pair-wise constraints and attribute-level information in the form of attribute order preferences, is an essential problem in metric learning. In this paper, we propose a learning framework in which both the pair-wise constraints and the attribute order preferences can be incorporated simultaneously. The theory behind it and the related parameter-adjusting technique are described in detail. Experimental results on benchmark data sets demonstrate the effectiveness of the proposed method.
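    A toy version of combining the two kinds of side information might look like the following: gradient steps on a diagonal metric that shrink must-link distances and grow cannot-link distances, with projections enforcing non-negativity and an attribute order preference. This is a hypothetical sketch of the general idea, not the framework proposed in the paper; all data and parameter values are invented.

    ```python
    import numpy as np

    def learn_diag_metric(X, must, cannot, order=(), steps=200, lr=0.05):
        """Toy diagonal-metric learner. must/cannot are lists of index pairs
        (instance-level constraints); order is a list of (i, j) meaning
        attribute i should be weighted at least as much as attribute j
        (attribute-level preference)."""
        w = np.ones(X.shape[1])
        for _ in range(steps):
            grad = np.zeros_like(w)
            for a, b in must:       # d_w(x, y) = sum_k w_k (x_k - y_k)^2
                grad += (X[a] - X[b]) ** 2   # shrink must-link distances
            for a, b in cannot:
                grad -= (X[a] - X[b]) ** 2   # grow cannot-link distances
            w -= lr * grad
            w = np.clip(w, 0.01, None)       # keep the metric valid (w > 0)
            for i, j in order:               # project onto w[i] >= w[j]
                if w[i] < w[j]:
                    w[i] = w[j] = 0.5 * (w[i] + w[j])
        return w

    def dist(w, x, y):
        return float(np.sum(w * (x - y) ** 2))

    X = np.array([[0.0, 0.0], [0.1, 2.0], [3.0, 0.1]])
    w = learn_diag_metric(X, must=[(0, 1)], cannot=[(0, 2)], order=[(0, 1)])
    ```

    After training, the must-link pair ends up closer than the cannot-link pair under the learned metric, even though the opposite holds under plain Euclidean distance.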

  15. Nitrogen balance and dynamics in corn under different soil fertility levels using 15N isotope tracer technique

    International Nuclear Information System (INIS)

    Rallos, R.V.; Rivera, F.G.; Samar, E.D.; Rojales, J.S.; Anida, A.H.

    2015-01-01

    Nitrogen (N) fertilizer plays a vital role in the growth and development of any crop, and inefficient N fertilizer utilization contributes to poor crop productivity and environmental pollution. This study used the 15N isotope tracer technique to understand the nitrogen balance and dynamics in corn grown during the wet and dry seasons in low-, medium- and high-N soils in Northern Luzon. The experiments were laid out following the randomized complete block design (RCBD); potassium requirements were applied at the optimum level based on soil chemical analysis and fertilizer recommendations. The study was able to separate the N derived from the applied fertilizer and from the soil, traced using 15N at 30 days after planting (DAP), 60 DAP and at harvest. Results show that more than half of the N in the plant came directly from the added fertilizer during the early stage, and this fraction decreased towards the harvest period. Fertilizer N use efficiency showed a negative relationship with the rate of N application and with soil fertility level. The N balances of fertilization in the different soil fertility levels were also established using the isotope tracer technique. (author)
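    The standard 15N-dilution bookkeeping behind such a study, percent N derived from fertilizer (%Ndff) and fertilizer-N recovery, can be sketched as below. The formulas are the conventional isotope-dilution ones, but the numeric values are illustrative, not data from this study.

    ```python
    def percent_ndff(plant_atom_pct_excess, fert_atom_pct_excess):
        """%Ndff: share of plant N derived from the labeled fertilizer, from the
        plant's 15N atom% excess relative to the fertilizer's atom% excess."""
        return 100.0 * plant_atom_pct_excess / fert_atom_pct_excess

    def fertilizer_n_recovery(ndff_pct, plant_n_uptake_kg, fert_n_applied_kg):
        """Fertilizer-N use efficiency: percent of applied N recovered in the plant."""
        fert_n_in_plant = plant_n_uptake_kg * ndff_pct / 100.0
        return 100.0 * fert_n_in_plant / fert_n_applied_kg

    # Illustrative numbers: plant at 2.5 atom% excess, fertilizer at 5.0 atom% excess
    ndff = percent_ndff(2.5, 5.0)                   # -> 50.0 (% of plant N from fertilizer)
    eff = fertilizer_n_recovery(ndff, 60.0, 100.0)  # -> 30.0 (% of applied N recovered)
    ```

    Separating fertilizer-derived from soil-derived N at each sampling date is exactly what allows the abstract's statement that "more than half of N in the plant came directly from added fertilizer during the early stage" to be quantified.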

  16. Experimental Investigations of Noise Control in Planetary Gear Set by Phasing

    Directory of Open Access Journals (Sweden)

    S. H. Gawande

    2014-01-01

    Full Text Available Nowadays, the reduction of gear noise and the resulting vibrations has received much attention from researchers. The internal excitation caused by the variation in tooth mesh stiffness is a key factor in causing vibration, and several techniques to reduce gear noise and vibrations have therefore been proposed in recent years. In this research, experimental work was carried out to study the effect of planet phasing on the noise and resulting vibrations of a Nylon-6 planetary gear drive. For this purpose an experimental set-up was built and trials were conducted for two different arrangements (i.e., with and without phasing), and it was observed that the noise level and resulting vibrations were reduced by the planet-phasing arrangement. The experimental results thus show that by applying a meshing phase difference one can reduce planetary gear set noise and vibrations.

  17. APPLICATION OF ROUGH SET THEORY TO MAINTENANCE LEVEL DECISION-MAKING FOR AERO-ENGINE MODULES BASED ON INCREMENTAL KNOWLEDGE LEARNING

    Institute of Scientific and Technical Information of China (English)

    陆晓华; 左洪福; 蔡景

    2013-01-01

    The maintenance of an aero-engine usually includes three levels, and the maintenance cost and period differ greatly depending on the maintenance level. To plan a reasonable maintenance budget program, airlines would like to predict the maintenance level of an aero-engine before repair in terms of performance parameters, which can provide greater economic benefit. Maintenance level decision rules are mined from the historical maintenance data of a civil aero-engine based on rough set theory, and a variety of possible models for updating the rules as newly acquired maintenance cases are added to the historical maintenance case database are investigated by means of incremental machine learning. The continuously updated rules can provide reasonable guidance for engineers and decision support for planning a maintenance budget program before repair. The results of an example show that the decision rules become more typical and robust, and more accurate in predicting the maintenance level of an aero-engine module, as the maintenance data increase, which illustrates the feasibility of the presented method.
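    The rough-set machinery behind such rule mining rests on indiscernibility classes and lower/upper approximations of a decision class, which can be sketched as follows. The engine table, attribute names, and maintenance labels are hypothetical, invented purely to illustrate the standard definitions.

    ```python
    def indiscernibility(objects, attrs):
        """Partition objects into classes indistinguishable on the chosen attributes."""
        classes = {}
        for name, row in objects.items():
            key = tuple(row[a] for a in attrs)
            classes.setdefault(key, set()).add(name)
        return list(classes.values())

    def approximations(objects, attrs, target):
        """Rough-set lower approximation (classes fully inside the target) and
        upper approximation (classes intersecting the target)."""
        lower, upper = set(), set()
        for cls in indiscernibility(objects, attrs):
            if cls <= target:
                lower |= cls
            if cls & target:
                upper |= cls
        return lower, upper

    # Hypothetical condition table: performance parameters -> maintenance outcome
    engines = {
        "e1": {"egt": "high", "vib": "high"},
        "e2": {"egt": "high", "vib": "low"},
        "e3": {"egt": "high", "vib": "high"},
        "e4": {"egt": "low",  "vib": "low"},
    }
    overhaul = {"e1", "e2"}  # engines that actually needed top-level maintenance
    low, up = approximations(engines, ["egt", "vib"], overhaul)
    ```

    Objects in the lower approximation yield certain decision rules; the boundary between the two approximations (here e1 and e3, indistinguishable yet with different outcomes) is exactly where incremental learning from new cases can sharpen the rules.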

  18. Improving district level health planning and priority setting in Tanzania through implementing accountability for reasonableness framework: Perceptions of stakeholders.

    Science.gov (United States)

    Maluka, Stephen; Kamuzora, Peter; San Sebastián, Miguel; Byskov, Jens; Ndawi, Benedict; Hurtig, Anna-Karin

    2010-12-01

    In 2006, researchers and decision-makers launched a five-year project - Response to Accountable Priority Setting for Trust in Health Systems (REACT) - to improve planning and priority-setting through implementing the Accountability for Reasonableness framework in Mbarali District, Tanzania. The objective of this paper is to explore the acceptability of Accountability for Reasonableness from the perspectives of the Council Health Management Team, local government officials, health workforce and members of user boards and committees. Individual interviews were carried out with different categories of actors and stakeholders in the district. The interview guide consisted of a series of questions, asking respondents to describe their perceptions regarding each condition of the Accountability for Reasonableness framework in terms of priority setting. Interviews were analysed using thematic framework analysis. Documentary data were used to support, verify and highlight the key issues that emerged. Almost all stakeholders viewed Accountability for Reasonableness as an important and feasible approach for improving priority-setting and health service delivery in their context. However, a few aspects of Accountability for Reasonableness were seen as too difficult to implement given the socio-political conditions and traditions in Tanzania. Respondents mentioned: budget ceilings and guidelines, low level of public awareness, unreliable and untimely funding, as well as the limited capacity of the district to generate local resources as the major contextual factors that hampered the full implementation of the framework in their context. This study was one of the first assessments of the applicability of Accountability for Reasonableness in health care priority-setting in Tanzania. The analysis, overall, suggests that the Accountability for Reasonableness framework could be an important tool for improving priority-setting processes in the contexts of resource-poor settings

  19. Research on the development of green chemistry technology assessment techniques: a material reutilization case.

    Science.gov (United States)

    Hong, Seokpyo; Ahn, Kilsoo; Kim, Sungjune; Gong, Sungyong

    2015-01-01

    This study presents a methodology that enables the quantitative assessment of green chemistry technologies. The study carries out a quantitative evaluation of a particular case of material reutilization by calculating the level of "greenness", i.e., the level of compliance with the principles of green chemistry, that was achieved by implementing a green chemistry technology. The results indicate that the greenness level was enhanced by 42% compared to the pre-improvement level, demonstrating the economic feasibility of green chemistry. The assessment technique established in this study will serve as a useful reference for setting the direction of industry- and government-level technological R&D and for evaluating newly developed technologies, which can greatly contribute toward gaining a competitive advantage in the global market.

  20. Legal technique: approaches to section on types

    Directory of Open Access Journals (Sweden)

    І. Д. Шутак

    2015-11-01

    Full Text Available Legal technique is a branch of knowledge about the rules of doing legal work and creating, in the process, a variety of legal documents; it had previously been part of the theory of law. In modern conditions legal technique is isolated as a separate branch of legal science, focused on solving practical problems. The purpose of this article is to analyze the types of legal technique and, in particular, on the basis of theoretical propositions about legal technique, to identify the substantial characteristics and types of legal technique. O. Malko and M. Matuzov consider legal technique as a set of rules, techniques and methods for the preparation, creation, registration, classification and accounting of legal documents, aimed at their perfection and efficient use. S. Alekseev invests a similar meaning in this concept, determining that legal technique is a set of tools and techniques used in accordance with accepted rules in the formulation and systematization of legal acts to ensure their perfection. So, legal technique is a theoretical and applied legal science which studies the regularities of rational legal practice in the creation, interpretation and implementation of law. Concerning the types of legal technique, different classifications have been proposed in the literature. For example, G. Muromtsev divides the technique used only in the field of law into the technique of law-making (legislative technique), the technique of law enforcement, of interpretation, of judicial speech, of interrogation, and of notarial activities. V. Kartashov divides legal technique into law-making and law-enforcement (law-realization), interpretive and law-systematizing, judicial or investigative, prosecutorial, and the like. Some authors clearly indicate the criterion by which types of legal technique are distinguished. Thus, S. Alekseev notes that legal technique is classified, from the point of view of the legal nature of the act made, into: (a) techniques of legal acts; (b) the

  1. Lumbar lordosis restoration following single-level instrumented fusion comparing 4 commonly used techniques.

    Science.gov (United States)

    Dimar, John R; Glassman, Steven D; Vemuri, Venu M; Esterberg, Justin L; Howard, Jennifer M; Carreon, Leah Y

    2011-11-09

    A major sequela of lumbar fusion is the acceleration of adjacent-level degeneration due to decreased lumbar lordosis. We evaluated the effectiveness of 4 common fusion techniques in restoring lordosis: instrumented posterolateral fusion, translumbar interbody fusion, anteroposterior fusion with posterior instrumentation, and anterior interbody fusion with lordotic threaded (LT) cages (Medtronic Sofamor Danek, Memphis, Tennessee). Radiographs were measured preoperatively, immediately postoperatively, and a minimum of 6 months postoperatively. Parameters measured included anterior and posterior disk space height, lumbar lordosis from L3 to S1, and surgical-level lordosis. No significant differences in demographics or preoperative parameters existed among the 4 groups. Lumbar lordosis at final follow-up showed no difference between the anteroposterior fusion with posterior instrumentation, translumbar interbody fusion, and LT cage groups, although the posterolateral fusion group showed a significant loss of lordosis (-10°). The LT cage group gained lordosis and showed maintenance of anterior and posterior disk space height postoperatively compared with the other groups. Instrumented posterolateral fusion produces a greater loss of lordosis compared with anteroposterior fusion with posterior instrumentation, translumbar interbody fusion, and LT cages. Maintenance of lordosis and of anterior and posterior disk space height is significantly better with anterior interbody fusion with LT cages. Copyright 2011, SLACK Incorporated.

  2. [THE POSSIBILITY OF APPLICATION OF COLORIMETRY TECHNIQUE OF DETECTION OF LEVELS OF OXIDATIVE STRESS AND ANTIOXIDANT CAPACITY OF SERUM].

    Science.gov (United States)

    Sapojnikova, M A; Strakhova, L A; Blinova, T V; Makarov, I A; Rakhmanov, R S; Umniagina, I A

    2015-11-01

    The analysis was implemented concerning indicators of oxidative status and antioxidant capacity of serum. The indicators were obtained by a colorimetry technique based on detection of peroxides in blood serum in examined patients of different categories: healthy persons aged from 17 to 20 years and from 30 to 60 years, and patients with bronchopulmonary pathology. A low level of oxidative stress and a high antioxidant capacity of serum were established in individuals of younger age. With increasing age, the degree of expression of oxidative stress augmented and the level of antioxidant defense lowered. Almost all patients with bronchopulmonary pathology had a high level of oxidative stress and a low level of antioxidant defense. Through the analysis of the quantitative data, the conformity of the examined indicators with health condition was established.

  3. COMMUNICATION TECHNIQUE OF HIZBUT TAHRIR INDONESIA (HTI) IN THE DEVELOPMENT OF CADRE IN NORTH SUMATERA

    Directory of Open Access Journals (Sweden)

    Rubino

    2017-11-01

    Full Text Available This study aimed to analyze the communication techniques applied by Hizbut Tahrir Indonesia (HTI) in the development of cadres in North Sumatra. The approach used in this research is a qualitative approach, chosen in order to understand the problem in its natural setting and to interpret the phenomenon based on the meaning given by the informants, and also because this research is multidimensional, arising from various situational complexities, so that it needs to be analyzed in its surrounding context. The informants of this research were determined by a purposive technique, i.e. selected based on the purpose of this research, with a planned total of 6 (six) informants: 1 (one) member of management and 5 (five) persons responsible for lajnah. Based on the data obtained, the results of this study are that there are three communication techniques applied by HTI in cadre development activities: (1) informative techniques, namely providing information about HTI and the main ideas it develops to all levels of society, including students, scholars and intellectuals, as well as influential figures in society such as government leaders, legislators, leaders of mass organizations, leaders of political parties, etc., and members of the cadre at the general level of learning; (2) persuasive techniques, inviting people to join and support HTI preaching through dialogue, discussion, bulletin sharing, magazines, etc.; and (3) the technique of human relationships, giving inter-personal advice to the community or to members experiencing problems through consultation activities.

  4. Current trends in pedicle screw stimulation techniques: lumbosacral, thoracic, and cervical levels.

    Science.gov (United States)

    Isley, Michael R; Zhang, Xiao-Feng; Balzer, Jeffrey R; Leppanen, Ronald E

    2012-06-01

    The "justification" of intraoperative neuromonitoring "... is the perception that the safety and efficacy of pedicle screw fixation are enhanced..." (Resnick et al. 2005b). However, in summarizing a massive, contemporary literature review (over 1000 papers taken from the National Library of Medicine) spanning nearly a decade (1996 to 2003), this invited panel (Resnick et al. 2005b) recognized that the evidence-based documents contributing to the parts related to pedicle screw fixation and neuromonitoring were "... full of potential sources of error ..." and lacked the appropriate, randomized, prospective studies needed for formulating rigid standards and guidelines. Nevertheless, current trends support the routine use and clinical utility of these neuromonitoring techniques. In particular, free-run and triggered EMG have been well recognized in numerous publications for improving both the accuracy and safety of pedicle screw implantation. Currently, treatment with pedicle screw instrumentation routinely involves all levels of the spine - lumbosacral, thoracic, and cervical. Significant historical events, various neuromonitoring modalities, intraoperative alarm criteria, clinical efficacy, current trends, and caveats related to pedicle screw stimulation along the entire vertebral column will be reviewed.

  5. Use of structured personality survey techniques to indicate operator response to stressful situations

    International Nuclear Information System (INIS)

    Waller, M.A.

    1990-01-01

    Under given circumstances, a person will tend to operate in one of four dominant orientations: (1) to perform tasks; (2) to achieve consensus; (3) to achieve understanding; or (4) to maintain structure. Historically, personality survey techniques, such as the Myers-Briggs type indicator, have been used to determine these tendencies. While these techniques can accurately reflect a person's orientation under normal social situations, under different sets of conditions the same person may exhibit other tendencies, displaying a similar or entirely different orientation. While most people do not exhibit extreme tendencies or changes of orientation, the shift in personality from normal to stressful conditions can be rather dramatic, depending on the individual. Structured personality survey techniques have been used to indicate operator response to stressful situations. These techniques have been extended to indicate the balance between orientations that the control room team has across the various levels of cognizance.

  6. Cross-Layer Techniques for Adaptive Video Streaming over Wireless Networks

    Directory of Open Access Journals (Sweden)

    Yufeng Shan

    2005-02-01

    Full Text Available Real-time streaming media over wireless networks is a challenging proposition due to the characteristics of video data and wireless channels. In this paper, we propose a set of cross-layer techniques for adaptive real-time video streaming over wireless networks. The adaptation is done with respect to both channel and data. The proposed novel packetization scheme constructs the application layer packet in such a way that it is decomposed exactly into an integer number of equal-sized radio link protocol (RLP packets. FEC codes are applied within an application packet at the RLP packet level rather than across different application packets and thus reduce delay at the receiver. A priority-based ARQ, together with a scheduling algorithm, is applied at the application layer to retransmit only the corrupted RLP packets within an application layer packet. Our approach combines the flexibility and programmability of application layer adaptations, with low delay and bandwidth efficiency of link layer techniques. Socket-level simulations are presented to verify the effectiveness of our approach.
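
The packetization idea summarized above can be sketched in a few lines. The toy below is an illustration, not the authors' scheme: the RLP payload size is an assumption, and a single XOR parity packet stands in for the real FEC code (a deployed system would use Reed-Solomon-style erasure coding, with priority-based ARQ handling losses the FEC cannot repair).

```python
# Toy sketch: an application-layer packet sized to decompose into exactly
# k equal-sized radio link protocol (RLP) packets, with FEC applied at the
# RLP-packet level inside one application packet.

RLP_PAYLOAD = 96  # bytes per RLP packet (illustrative assumption)

def xor_bytes(blocks):
    """Byte-wise XOR of equal-length packets."""
    out = bytearray(RLP_PAYLOAD)
    for blk in blocks:
        for i, b in enumerate(blk):
            out[i] ^= b
    return bytes(out)

def packetize(chunk, k):
    """Pad/truncate a media chunk to k * RLP_PAYLOAD bytes, split it into
    k source RLP packets, and append one XOR parity packet (toy FEC)."""
    padded = chunk[:k * RLP_PAYLOAD].ljust(k * RLP_PAYLOAD, b"\x00")
    source = [padded[i * RLP_PAYLOAD:(i + 1) * RLP_PAYLOAD] for i in range(k)]
    return source + [xor_bytes(source)]

def recover_one_loss(survivors):
    """Rebuild a single missing source packet from the k - 1 surviving
    source packets plus the parity packet."""
    return xor_bytes(survivors)

pkts = packetize(b"frame data" * 50, k=4)        # 4 source + 1 parity packet
rebuilt = recover_one_loss(pkts[:2] + pkts[3:])  # pretend packet 2 was lost
assert rebuilt == pkts[2]
```

Because FEC protection never crosses an application-packet boundary, a corrupted RLP packet can be repaired (or selectively retransmitted) without waiting for later application packets, which is the delay advantage the abstract describes.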

  7. Validation of the Care-Related Quality of Life Instrument in different study settings: findings from The Older Persons and Informal Caregivers Survey Minimum DataSet (TOPICS-MDS).

    Science.gov (United States)

    Lutomski, J E; van Exel, N J A; Kempen, G I J M; Moll van Charante, E P; den Elzen, W P J; Jansen, A P D; Krabbe, P F M; Steunenberg, B; Steyerberg, E W; Olde Rikkert, M G M; Melis, R J F

    2015-05-01

    Validity is a contextual aspect of a scale which may differ across sample populations and study protocols. The objective of our study was to validate the Care-Related Quality of Life Instrument (CarerQol) across two different study design features, sampling framework (general population vs. different care settings) and survey mode (interview vs. written questionnaire). Data were extracted from The Older Persons and Informal Caregivers Minimum DataSet (TOPICS-MDS, www.topics-mds.eu ), a pooled public-access data set with information on >3,000 informal caregivers throughout the Netherlands. Meta-correlations and linear mixed models between the CarerQol's seven dimensions (CarerQol-7D) and caregiver's level of happiness (CarerQol-VAS) and self-rated burden (SRB) were performed. The CarerQol-7D dimensions were correlated to the CarerQol-VAS and SRB in the pooled data set and the subgroups. The strength of correlations between CarerQol-7D dimensions and SRB was weaker among caregivers who were interviewed versus those who completed a written questionnaire. The directionality of associations between the CarerQol-VAS, SRB and the CarerQol-7D dimensions in the multivariate model supported the construct validity of the CarerQol in the pooled population. Significant interaction terms were observed in several dimensions of the CarerQol-7D across sampling frame and survey mode, suggesting meaningful differences in reporting levels. Although good scientific practice emphasises the importance of re-evaluating instrument properties in individual research studies, our findings support the validity and applicability of the CarerQol instrument in a variety of settings. Due to minor differential reporting, pooling CarerQol data collected using mixed administration modes should be interpreted with caution; for TOPICS-MDS, meta-analytic techniques may be warranted.

  8. Numerical simulation of overflow at vertical weirs using a hybrid level set/VOF method

    Science.gov (United States)

    Lv, Xin; Zou, Qingping; Reeve, Dominic

    2011-10-01

    This paper presents the application of a newly developed free surface flow model to the practical, yet challenging, problem of overflow at weirs. Since the model takes advantage of the strengths of both the level set and volume of fluid methods and solves the Navier-Stokes equations on an unstructured mesh, it is capable of resolving the time evolution of very complex vortical motions, air entrainment and pressure variations due to violent deformations following overflow of the weir crest. In the present study, two different types of vertical weir, namely broad-crested and sharp-crested, are considered for validation purposes. The calculated overflow parameters, such as pressure head distributions, velocity distributions and water surface profiles, are compared against experimental data as well as numerical results available in the literature. Very good quantitative agreement has been obtained. The numerical model thus offers a good alternative to traditional experimental methods in the study of weir problems.

  9. Effects of Daily Physical Activity Level on Manual Wheelchair Propulsion Technique in Full-Time Manual Wheelchair Users During Steady-State Treadmill Propulsion.

    Science.gov (United States)

    Dysterheft, Jennifer; Rice, Ian; Learmonth, Yvonne; Kinnett-Hopkins, Dominque; Motl, Robert

    2017-07-01

    To examine whether differences in propulsion technique as a function of intraindividual variability occur as a result of shoulder pain and physical activity (PA) level in full-time manual wheelchair users (MWUs). Observational study. Research laboratory. Adults (N=14) with spinal cord injury (mean age: 30.64±11.08) who used a wheelchair for >80% of daily ambulation and were free of any condition that could be worsened by PA. Not applicable. PA level was measured using the Physical Activity Scale for Individuals with Physical Disabilities (PASIPD), and shoulder pain was measured using the Wheelchair User's Shoulder Pain Index (WUSPI) survey. Mean and intraindividual variability propulsion metrics were measured for propulsion analysis. WUSPI scores indicated participants experienced low levels of shoulder pain. The results of the Spearman rank-order correlation revealed that PASIPD scores were significantly related to mean contact angle (rs = -.57) and stroke frequency (rs = .60) as well as to the coefficient of variation of peak force (rs = .63), peak torque (rs = .59), contact angle (rs = .73), and stroke frequency (rs = .60). WUSPI scores were significantly correlated with only mean peak force (P=.02). No significant correlations were observed between PASIPD, WUSPI, and body mass index scores. Differences in propulsion technique were observed on the basis of PA levels. Participants with higher PASIPD scores used a more injurious stroke technique when propelling at higher speeds. This may indicate that active individuals who use injurious stroke mechanics may be at higher risk of injury. A strong relation was found between peak propulsion forces and shoulder pain. Rehabilitation professionals should emphasize the use of a protective stroke technique in both inactive and active MWUs during exercise and faster propulsion. Copyright © 2016 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.
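
The Spearman rank-order coefficients reported above (e.g. rs = .60 between PASIPD score and stroke frequency) follow the standard definition: the Pearson correlation of the ranked data, with tied values sharing an average rank. A minimal stdlib sketch (not the authors' code):

```python
def rankdata(xs):
    """1-based average ranks; ties share the mean of their positions."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(xs):
        j = i
        while j + 1 < len(xs) and xs[order[j + 1]] == xs[order[i]]:
            j += 1                      # extend over the run of tied values
        avg = (i + j) / 2 + 1           # mean of 1-based positions i+1 .. j+1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman's rank-order correlation: Pearson correlation of the ranks."""
    rx, ry = rankdata(x), rankdata(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx)
    vy = sum((b - my) ** 2 for b in ry)
    return cov / (vx * vy) ** 0.5

# A monotone but nonlinear relationship still gives a perfect rank correlation.
print(spearman([1, 2, 3, 4], [1, 4, 9, 16]))  # -> 1.0
```

Because only ranks enter the computation, the statistic is robust to the skewed, small-sample scores typical of surveys like the PASIPD and WUSPI.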

  10. Application of multivariate statistical techniques in microbial ecology.

    Science.gov (United States)

    Paliy, O; Shankar, V

    2016-03-01

    Recent advances in high-throughput methods of molecular analyses have led to an explosion of studies generating large-scale ecological data sets. In particular, a noticeable effect has been attained in the field of microbial ecology, where new experimental approaches have provided in-depth assessments of the composition, functions and dynamic changes of complex microbial communities. Because even a single high-throughput experiment produces a large amount of data, powerful statistical techniques of multivariate analysis are well suited to analyse and interpret these data sets. Many different multivariate techniques are available, and often it is not clear which method should be applied to a particular data set. In this review, we describe and compare the most widely used multivariate statistical techniques, including exploratory, interpretive and discriminatory procedures. We consider several important limitations and assumptions of these methods, and we present examples of how these approaches have been utilized in recent studies to provide insight into the ecology of the microbial world. Finally, we offer suggestions for the selection of appropriate methods based on the research question and data set structure. © 2016 John Wiley & Sons Ltd.
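
As a concrete instance of the exploratory procedures such reviews survey, the first principal component of two-dimensional data can be computed in closed form from the 2x2 sample covariance matrix. The stdlib-only sketch below is an illustration, not code from the review:

```python
import math

def pca_2d(points):
    """Direction and explained-variance share of the first principal
    component of 2-D samples, via the closed-form eigendecomposition
    of the 2x2 sample covariance matrix."""
    n = len(points)
    mx = sum(p[0] for p in points) / n
    my = sum(p[1] for p in points) / n
    sxx = sum((p[0] - mx) ** 2 for p in points) / (n - 1)
    syy = sum((p[1] - my) ** 2 for p in points) / (n - 1)
    sxy = sum((p[0] - mx) * (p[1] - my) for p in points) / (n - 1)
    tr, det = sxx + syy, sxx * syy - sxy * sxy
    # Largest eigenvalue of [[sxx, sxy], [sxy, syy]].
    lam1 = tr / 2 + math.sqrt(max(tr * tr / 4 - det, 0.0))
    if abs(sxy) < 1e-12:                 # covariance already diagonal
        vec = (1.0, 0.0) if sxx >= syy else (0.0, 1.0)
    else:
        vx, vy = lam1 - syy, sxy         # eigenvector for lam1
        norm = math.hypot(vx, vy)
        vec = (vx / norm, vy / norm)
    return vec, lam1 / tr                # direction, fraction of variance

# Samples lying on the line y = 2x: one component captures all variance.
direction, explained = pca_2d([(0, 0), (1, 2), (2, 4), (3, 6)])
```

Real community data have many more dimensions (one per taxon or gene), where the same idea is carried out with a numerical eigensolver; ordination plots in microbial ecology papers are typically projections onto the first two such components.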

  11. Overcoming Barriers in Unhealthy Settings

    Directory of Open Access Journals (Sweden)

    Michael K. Lemke

    2016-03-01

    Full Text Available We investigated the phenomenon of sustained health-supportive behaviors among long-haul commercial truck drivers, who belong to an occupational segment with extreme health disparities. With a focus on setting-level factors, this study sought to discover ways in which individuals exhibit resiliency while immersed in endemically obesogenic environments, as well as understand setting-level barriers to engaging in health-supportive behaviors. Using a transcendental phenomenological research design, 12 long-haul truck drivers who met screening criteria were selected using purposeful maximum sampling. Seven broad themes were identified: access to health resources, barriers to health behaviors, recommended alternative settings, constituents of health behavior, motivation for health behaviors, attitude toward health behaviors, and trucking culture. We suggest applying ecological theories of health behavior and settings approaches to improve driver health. We also propose the Integrative and Dynamic Healthy Commercial Driving (IDHCD) paradigm, grounded in complexity science, as a new theoretical framework for improving driver health outcomes.

  12. Evolutionary computation techniques: a comparative perspective

    CERN Document Server

    Cuevas, Erik; Oliva, Diego

    2017-01-01

    This book compares the performance of various evolutionary computation (EC) techniques when they are faced with complex optimization problems extracted from different engineering domains. Particularly focusing on recently developed algorithms, it is designed so that each chapter can be read independently. Several comparisons among EC techniques have been reported in the literature; however, they all suffer from one limitation: their conclusions are based on the performance of popular evolutionary approaches over a set of synthetic functions with exact solutions and well-known behaviors, without considering the application context or including recent developments. In each chapter, a complex engineering optimization problem is posed, and then a particular EC technique is presented as the best choice, according to its search characteristics. Lastly, a set of experiments is conducted in order to compare its performance to other popular EC methods.

  13. Forecasting Water Level Fluctuations of Urmieh Lake Using Gene Expression Programming and Adaptive Neuro-Fuzzy Inference System

    Directory of Open Access Journals (Sweden)

    Sepideh Karimi

    2012-06-01

    Full Text Available Forecasting lake level at various prediction intervals is an essential issue in such industrial applications as navigation, water resource planning and catchment management. In the present study, two data-driven techniques, namely Gene Expression Programming (GEP) and the Adaptive Neuro-Fuzzy Inference System (ANFIS), were applied to predicting daily lake levels at three prediction intervals. Daily water-level data from Urmieh Lake in Northwestern Iran were used to train, test and validate the techniques. Three statistical indexes, the coefficient of determination, root mean square error and variance accounted for, were used to assess the performance of the techniques. Inter-comparisons demonstrated that GEP surpassed the ANFIS model at each of the prediction intervals. A traditional auto-regressive moving average (ARMA) model was also applied to the same data sets; the obtained results were compared with those of the data-driven approaches, demonstrating the superiority of the data-driven models over ARMA.
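
The three performance indexes named in this record have standard definitions; the abstract does not give the authors' exact formulas, so the sketch below uses the usual conventions (RMSE, R² as 1 − SS_res/SS_tot, and variance accounted for as a percentage):

```python
def rmse(obs, pred):
    """Root mean square error."""
    return (sum((o - p) ** 2 for o, p in zip(obs, pred)) / len(obs)) ** 0.5

def r_squared(obs, pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean_o = sum(obs) / len(obs)
    ss_res = sum((o - p) ** 2 for o, p in zip(obs, pred))
    ss_tot = sum((o - mean_o) ** 2 for o in obs)
    return 1.0 - ss_res / ss_tot

def vaf(obs, pred):
    """Variance accounted for, in percent: 100 * (1 - var(error) / var(obs))."""
    n = len(obs)
    err = [o - p for o, p in zip(obs, pred)]
    mean_e, mean_o = sum(err) / n, sum(obs) / n
    var_e = sum((e - mean_e) ** 2 for e in err) / n
    var_o = sum((o - mean_o) ** 2 for o in obs) / n
    return 100.0 * (1.0 - var_e / var_o)

# Hypothetical daily lake levels (m) and one model's predictions:
obs = [1271.2, 1271.4, 1271.3, 1271.5]
pred = [1271.1, 1271.4, 1271.4, 1271.5]
scores = (rmse(obs, pred), r_squared(obs, pred), vaf(obs, pred))
```

Lower RMSE together with higher R² and VAF indicates the better model; this is the kind of comparison by which the study ranks GEP above ANFIS and ARMA.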

  14. Evaluating the effect of river restoration techniques on reducing the impacts of outfall on water quality

    Science.gov (United States)

    Mant, Jenny; Janes, Victoria; Terrell, Robert; Allen, Deonie; Arthur, Scott; Yeakley, Alan; Morse, Jennifer; Holman, Ian

    2015-04-01

    Outfalls represent points of discharge to a river and often contain pollutants from urban runoff, such as heavy metals. Additionally, erosion around the outfall site results in increased sediment generation and the release of associated pollutants. Water quality impacts from heavy metals pose risks to the river ecosystem (e.g. toxicity to aquatic habitats). Restoration techniques including establishment of swales, and the re-vegetation and reinforcement of channel banks aim to decrease outfall flow velocities resulting in deposition of pollutants and removal through plant uptake. Within this study the benefits of river restoration techniques for the removal of contaminants associated with outfalls have been quantified within Johnson Creek, Portland, USA as part of the EPSRC funded Blue-Green Cities project. The project aims to develop new strategies for protecting hydrological and ecological values of urban landscapes. A range of outfalls have been selected which span restored and un-restored channel reaches, a variety of upstream land-uses, and both direct and set-back outfalls. River Habitat Surveys were conducted at each of the sites to assess the level of channel modification within the reach. Sediment samples were taken at the outfall location, upstream, and downstream of outfalls for analysis of metals including Nickel, Lead, Zinc, Copper, Iron and Magnesium. These were used to assess the impact of the level of modification at individual sites, and to compare the influence of direct and set-back outfalls. Concentrations of all metals in the sediments found at outfalls generally increased with the level of modification at the site. Sediment in restored sites had lower metal concentrations both at the outfall and downstream compared to unrestored sites, indicating the benefit of these techniques to facilitate the effective removal of pollutants by trapping of sediment and uptake of contaminants by vegetation. However, the impact of restoration measures varied

  15. Abstract sets and finite ordinals: an introduction to the study of set theory

    CERN Document Server

    Keene, G B

    2007-01-01

    This text unites the logical and philosophical aspects of set theory in a manner intelligible both to mathematicians without training in formal logic and to logicians without a mathematical background. It combines an elementary level of treatment with the highest possible degree of logical rigor and precision.Starting with an explanation of all the basic logical terms and related operations, the text progresses through a stage-by-stage elaboration that proves the fundamental theorems of finite sets. It focuses on the Bernays theory of finite classes and finite sets, exploring the system's basi

  16. A fuzzy set preference model for market share analysis

    Science.gov (United States)

    Turksen, I. B.; Willson, Ian A.

    1992-01-01

    Consumer preference models are widely used in new product design, marketing management, pricing, and market segmentation. The success of new products depends on accurate market share prediction and design decisions based on consumer preferences. The vague linguistic nature of consumer preferences and product attributes, combined with the substantial differences between individuals, creates a formidable challenge to marketing models. The most widely used methodology is conjoint analysis. Conjoint models, as currently implemented, represent linguistic preferences as ratio or interval-scaled numbers, use only numeric product attributes, and require aggregation of individuals for estimation purposes. It is not surprising that these models are costly to implement, are inflexible, and have a predictive validity that is not substantially better than chance. This affects the accuracy of market share estimates. A fuzzy set preference model can easily represent linguistic variables either in consumer preferences or product attributes with minimal measurement requirements (ordinal scales), while still estimating overall preferences suitable for market share prediction. This approach results in flexible individual-level conjoint models which can provide more accurate market share estimates from a smaller number of more meaningful consumer ratings. Fuzzy sets can be incorporated within existing preference model structures, such as a linear combination, using the techniques developed for conjoint analysis and market share estimation. The purpose of this article is to develop and fully test a fuzzy set preference model which can represent linguistic variables in individual-level models implemented in parallel with existing conjoint models. The potential improvements in market share prediction and predictive validity can substantially improve management decisions about what to make (product design), for whom to make it (market segmentation), and how much to make (market share
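
The core mechanics described here, linguistic terms represented as fuzzy membership functions and combined linearly at the individual level, can be sketched minimally. All terms, weights and products below are invented for illustration; they are not the authors' model:

```python
def tri(x, a, b, c):
    """Triangular fuzzy membership: rises from a, peaks at 1 at b, falls to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def preference(product, weights, terms):
    """Individual-level score: weighted linear combination of the degree to
    which each attribute matches the consumer's linguistic target."""
    total = sum(weights.values())
    return sum(w * terms[attr](product[attr])
               for attr, w in weights.items()) / total

# One hypothetical consumer's linguistic terms and importance weights:
terms = {
    "price":   lambda p: tri(p, 0, 20, 60),   # "inexpensive" (dollars)
    "quality": lambda q: tri(q, 3, 8, 13),    # "high quality" (1-10 rating)
}
weights = {"price": 0.6, "quality": 0.4}

product_a = {"price": 25, "quality": 7}
product_b = {"price": 55, "quality": 9}
score_a = preference(product_a, weights, terms)
score_b = preference(product_b, weights, terms)
```

Normalizing the scores across products (score_a / (score_a + score_b)) gives a simple individual-level market-share estimate; note that only ordinal judgments ("inexpensive", "high quality") were required of the consumer, which is the measurement advantage the abstract claims over ratio-scaled conjoint ratings.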

  17. The support-control continuum: An investigation of staff perspectives on factors influencing the success or failure of de-escalation techniques for the management of violence and aggression in mental health settings.

    Science.gov (United States)

    Price, Owen; Baker, John; Bee, Penny; Lovell, Karina

    2018-01-01

    De-escalation techniques are recommended to manage violence and aggression in mental health settings, yet restrictive practices continue to be frequently used. Barriers and enablers to the implementation and effectiveness of de-escalation techniques in practice are not well understood. To obtain staff descriptions of de-escalation techniques currently used in mental health settings and explore factors perceived to influence their implementation and effectiveness. Qualitative, semi-structured interviews and Framework Analysis. Five in-patient wards, including three male psychiatric intensive care units, one female acute ward and one male acute ward, in three UK Mental Health NHS Trusts. 20 ward-based clinical staff. Individual semi-structured interviews were digitally recorded, transcribed verbatim and analysed using a qualitative data analysis software package. Participants described 14 techniques used in response to escalated aggression, applied on a continuum between support and control. Techniques along the support-control continuum could be classified in three groups: 'support' (e.g. problem-solving, distraction, reassurance), 'non-physical control' (e.g. reprimands, deterrents, instruction) and 'physical control' (e.g. physical restraint and seclusion). Charting the reasoning staff provided for technique selection against the described behavioural outcome enabled a preliminary understanding of staff, patient and environmental influences on de-escalation success or failure. Importantly, the more coercive 'non-physical control' techniques are currently conceptualised by staff as a feature of de-escalation techniques, yet there was evidence of a link between these and increased aggression/use of restrictive practices. Risk was not a consistent factor in decisions to adopt more controlling techniques. Moral judgements regarding the function of the aggression; trial-and-error; ingrained local custom (especially around instruction to low stimulus areas); knowledge of

  18. Fluoroscopy-guided insertion of nasojejunal tubes in children - setting local diagnostic reference levels

    International Nuclear Information System (INIS)

    Vitta, Lavanya; Raghavan, Ashok; Sprigg, Alan; Morrell, Rachel

    2009-01-01

    Little is known about the radiation burden from fluoroscopy-guided insertions of nasojejunal tubes (NJTs) in children. There are no recommended or published standards of diagnostic reference levels (DRLs) available. To establish reference dose area product (DAP) levels for the fluoroscopy-guided insertion of nasojejunal tubes as a basis for setting DRLs for children. In addition, we wanted to assess our local practice and determine the success and complication rates associated with this procedure. Children who had NJT insertion procedures were identified retrospectively from the fluoroscopy database. The age of the child at the time of the procedure, DAP, screening time, outcome of the procedure, and any complications were recorded for each procedure. As the radiation dose depends on the size of the child, the children were assigned to three different age groups. The sample size, mean, median and third-quartile DAPs were calculated for each group. The third-quartile values were used to establish the DRLs. Of 186 procedures performed, 172 were successful on the first attempt. These were performed in a total of 43 children, with 60% having multiple insertions over time. The third-quartile DAPs were as follows for each age group: 0-12 months, 2.6 cGy·cm²; 1-7 years, 2.45 cGy·cm²; >8 years, 14.6 cGy·cm². High DAP readings were obtained in the 0-12 months (n = 4) and >8 years (n = 2) age groups. No immediate complications were recorded. Fluoroscopy-guided insertion of NJTs is a highly successful procedure in a selected population of children and is associated with a low complication rate. The radiation dose per procedure is relatively low. (orig.)
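
The third-quartile convention used above to set DRLs is easy to reproduce. A sketch with invented DAP readings (not the study's data); note that Python's `statistics.quantiles` defaults to the exclusive method, so other software may interpolate quartiles slightly differently:

```python
from statistics import quantiles

def local_drls(dap_by_group):
    """Third-quartile (75th percentile) DAP per age group - the convention
    this record uses to propose local diagnostic reference levels."""
    return {group: quantiles(readings, n=4)[2]
            for group, readings in dap_by_group.items()}

# Invented DAP readings in cGy*cm2 (NOT the study's data):
example = {
    "0-12 months": [0.5, 1.0, 2.0, 2.6, 3.0],
    "1-7 years":   [0.8, 1.2, 2.45, 2.5],
    ">8 years":    [5.0, 9.0, 14.6, 20.0],
}
drls = local_drls(example)
```

Procedures whose DAP routinely exceeds the group's DRL would then be flagged for technique review, which is how reference levels are meant to be used in practice.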

  19. Volume growth trends in a Douglas-fir levels-of-growing-stock study.

    Science.gov (United States)

    Robert O. Curtis

    2006-01-01

    Mean curves of increment and yield in gross total cubic volume and net merchantable cubic volume were derived from seven installations of the regional cooperative Levels-of-Growing-Stock Study (LOGS) in Douglas-fir. The technique used reduces the seven curves for each treatment for each variable of interest to a single set of readily interpretable mean curves. To a top...

  20. THE MODEL OF IDENTIFICATION OF THE PROBLEM MAIN CAUSE SET OF VARIATION

    Directory of Open Access Journals (Sweden)

    Nenad Miric

    2008-06-01

    Full Text Available The term Lean has been widely used in today's product manufacturing and service delivery environments. In its fundamental nature the Lean Philosophy continuously strives for elimination of any kind of waste that exists in such environments. There are six basic strategies [1] related to the Lean Philosophy: Workplace Safety & Order & Cleanliness, JIT production, Six Sigma Quality, Empowered Teams, Visual Management and Pursuit of Perfection. On the journey of sustaining the lean supporting strategies there are many problems, or opportunities as Lean Practitioners call them. The value of some strategies highly depends on the efficiency of the problem solving techniques used to overcome the emerging issues. JIT production is difficult to imagine without a system that supports a high level of operational readiness with equipment uptime above 98%. Six Sigma level of quality, even when built into a product or system design, still undergoes the challenges of day-to-day operations and the variability brought with them. This variability is the source of waste, and lean systems culture strives for continuous reduction of it. Empowered Teams properly trained to recognize the real cause of problems, and their Pursuit of Perfection culture, are one of the cornerstones of Lean Philosophy sustainability. Their ability to work with Problem Solvers and understand the difference between the "cure of the symptoms" approach and "problem root cause identification" is one of the distinctions between Lean and Mass operations. Among the series of Statistical Engineering Tools, this paper will show one of the techniques that has proved to be powerful in the identification of the Set of Variation that contains the Main Cause of new problems that arise in daily operations. This technique is called Multi-Vari. Multi-Vari is the statistical engineering method used to analyze a set of data acquired in an organized manner. The set could be analyzed graphically or
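
A minimal sketch of the Multi-Vari idea, assuming the classic decomposition into within-piece, piece-to-piece and time-to-time families of variation; the data, field names and measurement scenario below are invented for illustration:

```python
def multi_vari(samples):
    """samples: {time_period: {piece_id: [repeated measurements]}}.
    Returns the range of variation attributable to each family
    (within-piece, piece-to-piece, time-to-time) and the dominant family,
    i.e. the set of variation most likely to contain the main cause."""
    within = max(max(reps) - min(reps)
                 for pieces in samples.values()
                 for reps in pieces.values())
    piece_means = {t: [sum(r) / len(r) for r in pieces.values()]
                   for t, pieces in samples.items()}
    piece_to_piece = max(max(ms) - min(ms) for ms in piece_means.values())
    time_means = [sum(ms) / len(ms) for ms in piece_means.values()]
    time_to_time = max(time_means) - min(time_means)
    families = {"within-piece": within,
                "piece-to-piece": piece_to_piece,
                "time-to-time": time_to_time}
    return families, max(families, key=families.get)

# Invented shaft-diameter data (mm): two pieces, two repeats, two time periods.
samples = {
    "8am":  {"p1": [10.0, 10.2], "p2": [10.1, 10.3]},
    "noon": {"p1": [13.0, 13.1], "p2": [13.2, 13.4]},
}
families, dominant = multi_vari(samples)
```

In this invented data the time-to-time family dominates, so the problem solver would look for causes that change between periods (tool wear, temperature, shift changes) rather than within a single piece, which is exactly the narrowing-down that distinguishes root-cause identification from curing symptoms.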

  1. Single or double-level anterior interbody fusion techniques for cervical degenerative disc disease

    NARCIS (Netherlands)

    Jacobs, Wilco; Willems, Paul C.; van Limbeek, Jacques; Bartels, Ronald; Pavlov, Paul; Anderson, Patricia G.; Oner, Cumhur

    2011-01-01

    Background The number of surgical techniques for decompression and solid interbody fusion as treatment for cervical spondylosis has increased rapidly, but the rationale for the choice between different techniques remains unclear. Objectives To determine which technique of anterior interbody fusion

  2. Glycated albumin is set lower in relation to plasma glucose levels in patients with Cushing's syndrome.

    Science.gov (United States)

    Kitamura, Tetsuhiro; Otsuki, Michio; Tamada, Daisuke; Tabuchi, Yukiko; Mukai, Kosuke; Morita, Shinya; Kasayama, Soji; Shimomura, Iichiro; Koga, Masafumi

    2013-09-23

    Glycated albumin (GA) is an indicator of glycemic control, which has some specific characters in comparison with HbA1c. Since glucocorticoids (GC) promote protein catabolism including serum albumin, GC excess state would influence GA levels. We therefore investigated GA levels in patients with Cushing's syndrome. We studied 16 patients with Cushing's syndrome (8 patients had diabetes mellitus and the remaining 8 patients were non-diabetic). Thirty-two patients with type 2 diabetes mellitus and 32 non-diabetic subjects matched for age, sex and BMI were used as controls. In the patients with Cushing's syndrome, GA was significantly correlated with HbA1c, but the regression line shifted downwards as compared with the controls. The GA/HbA1c ratio in the patients with Cushing's syndrome was also significantly lower than the controls. HbA1c in the non-diabetic patients with Cushing's syndrome was not different from the non-diabetic controls, whereas GA was significantly lower. In 7 patients with Cushing's syndrome who performed self-monitoring of blood glucose, the measured HbA1c was matched with HbA1c estimated from mean blood glucose, whereas the measured GA was significantly lower than the estimated GA. We clarified that GA is set lower in relation to plasma glucose levels in patients with Cushing's syndrome. Copyright © 2013 Elsevier B.V. All rights reserved.

  3. Modeling of Two-Phase Flow in Rough-Walled Fracture Using Level Set Method

    Directory of Open Access Journals (Sweden)

    Yunfeng Dai

    2017-01-01

    Full Text Available To accurately describe the flow characteristics of fracture-scale displacements of immiscible fluids, an incompressible two-phase (crude oil and water) flow model incorporating interfacial forces and nonzero contact angles is developed. The roughness of the two-dimensional synthetic rough-walled fractures is controlled with different fractal dimension parameters. With the flow described by the Navier–Stokes equations, the moving interface between crude oil and water is tracked using the level set method. The method accounts for differences in the densities and viscosities of crude oil and water and includes the effect of interfacial force. The wettability of the rough fracture wall is taken into account by defining the contact angle and slip length. The curve of invasion pressure versus water volume fraction is generated by modeling two-phase flow during a sudden drainage. The volume fraction of water retained in the rough-walled fracture is calculated by integrating the water volume and dividing by the total cavity volume of the fracture while the two-phase flow is quasistatic. The effects of the invasion pressure of crude oil, the roughness of the fracture wall, and the wettability of the wall on two-phase flow in a rough-walled fracture are evaluated.
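
    The interface-tracking idea behind this record can be illustrated with a minimal one-dimensional sketch. The grid, velocity, and first-order upwind scheme below are invented for illustration; they stand in for the paper's full Navier–Stokes-coupled two-phase solver:

```python
import numpy as np

# Minimal 1-D level-set advection sketch. phi < 0 marks one fluid (say water),
# phi > 0 the other (crude oil); the interface is the zero level set, carried
# passively by the velocity u. (Illustrative only; the paper couples the
# level-set equation to a 2-D Navier-Stokes solver in rough-walled fractures.)

def advect_level_set(phi, u, dx, dt, steps):
    """Solve d(phi)/dt + u * d(phi)/dx = 0 with first-order upwinding (u > 0)."""
    phi = phi.copy()
    for _ in range(steps):
        dphi = (phi - np.roll(phi, 1)) / dx   # backward difference for u > 0
        dphi[0] = dphi[1]                     # one-sided at the inflow boundary
        phi -= dt * u * dphi
    return phi

x = np.linspace(0.0, 1.0, 201)
phi0 = x - 0.3                 # signed distance: interface initially at x = 0.3
u, dx, dt = 0.5, x[1] - x[0], 0.004
phi = advect_level_set(phi0, u, dx, dt, steps=100)
# the interface should now sit near x = 0.3 + u * dt * steps = 0.5
interface = x[np.argmin(np.abs(phi))]
```

    Because the interface is implicit in phi, topology changes (droplet breakup, coalescence) need no special handling, which is the main reason level-set methods suit immiscible displacement problems.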

  4. Urethral pressure reflectometry; a novel technique for simultaneous recording of pressure and cross-sectional area

    DEFF Research Database (Denmark)

    Aagaard, Mikael; Klarskov, Niels; Sønksen, Jens

    2012-01-01

    Study Type - Diagnostic (case series) Level of Evidence 4 What's known on the subject? and What does the study add? In the 1980s and 1990s, a method for direct measurement of pressure and cross-sectional area in women and men was developed. It was successful in terms of obtaining meaningful results...... reproducible than conventional urethral pressure profilometry when measuring incontinence in women. In 2010 it was also introduced as a new measuring technique in the anal canal. This study adds a new and interesting technique to the field of male urodynamics. For the first time, sound waves have been used...... in several studies. But the technique, which was based on the field gradient principle, was never implemented in the clinical setting because of technical limitations. In 2005, urethral pressure reflectometry was introduced as a new technique in female urodynamics. The technique has been shown to be more...

  5. A Household Level Analysis of Water Sanitation Associated with Gastrointestinal Disease in an Urban Slum Setting of South Okkalapa Township, Myanmar

    Directory of Open Access Journals (Sweden)

    Zar Ni Hlaing

    2016-07-01

    Full Text Available This research analyzed the prevalence of water sanitation at the household level against gastrointestinal disease occurrence in the urban slum setting of South Okkalapa Township, Myanmar, using a cross-sectional study design. A total of 364 household respondents were interviewed face to face by well-trained research assistants using structured questionnaires. Chi-square tests and multiple logistic regression analyses were used to determine the associations between independent and dependent variables. Results showed that the source of household water (OR: 13.58, 95% CI: 6.90-26.74) and the type of drinking water (OR: 1.85, 95% CI: 0.92-3.71) were significantly associated with gastrointestinal diseases (p-value<0.05). After adjustment for confounding factors, this study found that occupation (AOR: 2.63, 95% CI: 1.25-5.54), employment status (AOR: 2.25, 95% CI: 1.01-5.01), type of household toilet (AOR: 8.66, 95% CI: 4.03-18.60), source of household water (AOR: 6.56, 95% CI: 2.86-15.08), and method of vector control (AOR: 3.12, 95% CI: 1.37-7.30) were all significantly associated with gastrointestinal diseases (p-value<0.05). Health education and appropriate technology for household water, sanitary latrines, environmental sanitation and waste disposal, and the implementation of policies focusing on systematic water management are therefore urgently required to control the spread of waterborne diseases.
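
    For readers unfamiliar with the statistics reported above, the odds-ratio arithmetic can be sketched as follows; the 2x2 cell counts here are hypothetical, not the study's data:

```python
import math

# Odds ratio and 95% CI from a 2x2 exposure-disease table (hypothetical counts).
#             disease   no disease
# exposed        a          b
# unexposed      c          d
a, b, c, d = 40, 20, 30, 90

odds_ratio = (a * d) / (b * c)                     # cross-product ratio
se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)       # SE of log(OR), Woolf's method
ci_low = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
ci_high = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)
# odds_ratio == 6.0 for these counts; the CI excludes 1, so the association
# would be called statistically significant at the 5% level
```

    The adjusted odds ratios (AOR) in the abstract come from multiple logistic regression, which applies the same log-odds arithmetic while holding the confounders fixed.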

  6. A new hysteroscopic technique for the preparation of partially intramural myomas in office setting (OPPIuM technique): A pilot study.

    Science.gov (United States)

    Bettocchi, Stefano; Di Spiezio Sardo, Attilio; Ceci, Oronzo; Nappi, Luigi; Guida, Maurizio; Greco, Elena; Pinto, Lauro; Camporiale, Anna Lina; Nappi, Carmine

    2009-01-01

    To assess the safety and effectiveness of a novel hysteroscopic technique for the Office Preparation of Partially Intramural Myomas (OPPIuM), intended to facilitate the subsequent, already scheduled, resectoscopic myomectomy. Pilot study. University of Bari, Naples and Foggia. Fifty-nine fertile women (age 27-48 years) diagnosed at office hysteroscopy as having symptomatic submucous myomas >1.5 cm with intramural development (G1 and G2), scheduled for resectoscopic surgery. The OPPIuM technique consisted of an incision of the endometrial mucosa covering the myoma, by means of Fr scissors or a bipolar Versapoint Twizzle electrode, along its reflection line on the uterine wall, up to the precise identification of the cleavage surface between the myoma and its pseudo-capsule. This procedure was aimed at triggering the protrusion of the intramural portion of the myoma into the uterine cavity during the following menstrual cycles, thus facilitating the subsequent total removal of the lesion via resectoscopic surgery. All patients underwent follow-up in-patient hysteroscopy after 2 menstrual cycles, before resectoscopic surgery was performed. The OPPIuM technique was successfully performed in all cases. The mean diameter of successfully prepared myomas was 2.9 ± 0.8 cm. At follow-up hysteroscopy, the conversion of partially intramural myomas into totally or prevalently intracavitary ones was observed in 93.2% (55/59) of cases. In 2 of the 3 cases of failure, the myoma's size was >4 cm. One patient was excluded from the study because of total spontaneous expulsion of the myoma at the subsequent menstrual cycle. Our preliminary findings seem to support the safety and effectiveness of the OPPIuM procedure, reporting the conversion of myomas with intramural development >1.5 cm into totally or prevalently intracavitary ones in nearly 93% of cases. The technique may allow surgeons to perform resectoscopic surgery more safely and quickly when dealing with prevalently

  7. A Monte Carlo technique for signal level detection in implanted intracranial pressure monitoring.

    Science.gov (United States)

    Avent, R K; Charlton, J D; Nagle, H T; Johnson, R N

    1987-01-01

    Statistical monitoring techniques such as CUSUM, Trigg's tracking signal and EMP filtering have a major advantage over more recent techniques, such as Kalman filtering, because of their inherent simplicity. In many biomedical applications, such as electronic implantable devices, these simpler techniques have greater utility because of the reduced requirements on power, logic complexity and sampling speed. The determination of signal means using some of the earlier techniques is reviewed in this paper, and a new Monte Carlo based method with a greater capability to sparsely sample a waveform and obtain an accurate mean value is presented. This technique may find widespread use as a trend detection method when reduced power consumption is a requirement.
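
    The core idea, estimating a signal's mean level from a few randomly timed samples rather than dense uniform sampling, can be sketched as follows; the waveform, sample count, and names are illustrative, not the authors' algorithm:

```python
import math
import random

# Estimate the time-average of a periodic waveform from sparse random samples.
# Random sampling times avoid aliasing against the pulsatile component, which
# is the property that makes sparse Monte Carlo estimation attractive for
# low-power implanted monitors.
random.seed(0)

def waveform(t):
    # synthetic pressure-like trace: baseline 10.0 plus a pulsatile component
    return 10.0 + 2.0 * math.sin(2 * math.pi * t)

def monte_carlo_mean(signal, t_span, n_samples):
    """Average the signal at n_samples uniformly random times in [0, t_span]."""
    total = sum(signal(random.uniform(0.0, t_span)) for _ in range(n_samples))
    return total / n_samples

estimate = monte_carlo_mean(waveform, t_span=100.0, n_samples=200)
# the true time-average is 10.0; 200 sparse samples should land close to it
```

    The standard error of the estimate falls as 1/sqrt(n_samples), so accuracy can be traded directly against power and sampling-rate budgets.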

  8. Use of standard reliability levels in design and safety assessment of in-pile loops

    International Nuclear Information System (INIS)

    Bogani, G.; Verre, A.; Balestreri, S.; Colombo, A.G.; Luisi, T.

    1975-01-01

    This paper describes a logic-probabilistic analysis technique for critical design review and safety assessment of in-pile loops. The examples in this paper refer to the analysis performed for the experimental loops already constructed or under construction in the ESSOR reactor of the Joint Research Centre at Ispra, as irradiation facilities for fuel element research and development tests. The proposed technique is based on the classification of component and protective-device malfunctions into categories. This subdivision into categories was agreed upon by the Italian Safety Authority and Euratom JRC, and adopted for the safety assessment of the ESSOR reactor in-pile loops. For each category, the method establishes a link with a corresponding malfunction probability range (probability level). This probability level is defined taking into account design, construction, inspection and maintenance criteria as well as periodic controls; the quality level, and consequently the reliability level, are thus also defined. The analysis is developed in the following stages: (1) definition of the analysis object (top event) and drawing of the corresponding fault tree; (2) loop design analysis and preliminary optimization based on logic criteria; (3) classification of the fault-tree primary events into categories; (4) final loop design analysis and optimization based on the defined component quality requirements. Stages 2 and 4 are quite different: stage 2 mainly consists of redundancy optimization, while stage 4 acts on the component quality level so that each minimum cut set leading to the top event has an acceptable probability level. During the analysis, computer codes are used which, among other things, enable verification of the fault-tree logic makeup, listing of the minimum cut sets with and without event categorization, and evaluation of the order of each cut set. (author)
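
    The cut-set screening of stage 4 can be sketched numerically. The categories, probability levels, and cut sets below are hypothetical; the sketch only shows the arithmetic of checking each minimum cut set against an acceptable probability level:

```python
# Hypothetical probability level (upper bound) assigned to each malfunction
# category, and minimal cut sets listed by the categories of their primary
# events. Independence of primary events is assumed for the product rule.
category_prob = {"I": 1e-2, "II": 1e-3, "III": 1e-4}

minimal_cut_sets = [
    ["I", "II"],         # both events must occur for the top event
    ["II", "II"],
    ["I", "I", "III"],
]

def cut_set_probability(cut_set):
    """Probability that every event in the cut set occurs (independence assumed)."""
    p = 1.0
    for category in cut_set:
        p *= category_prob[category]
    return p

# rare-event approximation: top-event probability ~ sum over minimal cut sets
top_event_prob = sum(cut_set_probability(cs) for cs in minimal_cut_sets)
# stage-4 style check: every cut set must stay below an acceptable level
acceptable = all(cut_set_probability(cs) <= 1e-4 for cs in minimal_cut_sets)
```

    If a cut set exceeded the acceptable level, the stage-4 remedy would be to raise the quality category (lowering the probability level) of one of its primary events.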

  9. EqualChance: Addressing Intra-set Write Variation to Increase Lifetime of Non-volatile Caches

    Energy Technology Data Exchange (ETDEWEB)

    Mittal, Sparsh [ORNL; Vetter, Jeffrey S [ORNL

    2014-01-01

    To address the limitations of SRAM, such as high leakage and low density, researchers have explored the use of non-volatile memory (NVM) devices, such as ReRAM (resistive RAM) and STT-RAM (spin transfer torque RAM), for designing on-chip caches. A crucial limitation of NVMs, however, is that their write endurance is low, and the large intra-set write variation introduced by existing cache management policies may further exacerbate this problem, thereby reducing cache lifetime significantly. We present EqualChance, a technique to increase cache lifetime by reducing intra-set write variation. EqualChance works by periodically changing the physical cache-block location of a write-intensive data item within a set to achieve wear-leveling. Simulations using workloads from the SPEC CPU2006 suite and the HPC (high-performance computing) field show that EqualChance improves cache lifetime by 4.29X. Also, its implementation overhead is small, and it incurs only very small performance and energy losses.
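
    The intra-set wear-leveling idea can be sketched with a toy model: rotate a logical-to-physical way mapping inside one cache set after a fixed number of writes, so a write-hot block's writes spread across all ways. This is a simplification for illustration, not the EqualChance policy itself (which also migrates live data and bounds its overheads):

```python
# Toy intra-set wear-leveling model: one cache set with a rotating
# logical-to-physical way mapping.

class WearLeveledSet:
    def __init__(self, num_ways, swap_interval):
        self.mapping = list(range(num_ways))   # logical way -> physical way
        self.physical_writes = [0] * num_ways  # per-way wear counters
        self.swap_interval = swap_interval
        self.writes = 0

    def write(self, logical_way):
        self.physical_writes[self.mapping[logical_way]] += 1
        self.writes += 1
        if self.writes % self.swap_interval == 0:
            # rotate the mapping: the write-hot logical block lands on a new way
            self.mapping = self.mapping[1:] + self.mapping[:1]

# pathological workload: every write targets the same logical block
cache_set = WearLeveledSet(num_ways=4, swap_interval=100)
for _ in range(4000):
    cache_set.write(0)
max_writes = max(cache_set.physical_writes)
# without wear-leveling one way would absorb all 4000 writes;
# with rotation each of the 4 physical ways takes 1000
```

    Lifetime is set by the most-worn way, so evening out the per-way write counts directly extends it.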

  10. District-level hospital trauma care audit filters: Delphi technique for defining context-appropriate indicators for quality improvement initiative evaluation in developing countries.

    Science.gov (United States)

    Stewart, Barclay T; Gyedu, Adam; Quansah, Robert; Addo, Wilfred Larbi; Afoko, Akis; Agbenorku, Pius; Amponsah-Manu, Forster; Ankomah, James; Appiah-Denkyira, Ebenezer; Baffoe, Peter; Debrah, Sam; Donkor, Peter; Dorvlo, Theodor; Japiong, Kennedy; Kushner, Adam L; Morna, Martin; Ofosu, Anthony; Oppong-Nketia, Victor; Tabiri, Stephen; Mock, Charles

    2016-01-01

    Prospective clinical audit of trauma care improves outcomes for the injured in high-income countries (HICs). However, equivalent, context-appropriate audit filters for use in low- and middle-income country (LMIC) district-level hospitals have not been well established. We aimed to develop context-appropriate trauma care audit filters for district-level hospitals in Ghana, as well as other LMICs more broadly. Consensus on trauma care audit filters was built among twenty panellists using a Delphi technique with four anonymous, iterative surveys designed to elicit: (i) trauma care processes to be measured; (ii) important features of audit filters for the district-level hospital setting; and (iii) potentially useful filters. Filters were ranked on a scale from 0 to 10 (10 being very useful). Consensus was measured with the average percent majority opinion (APMO) cut-off rate. Target consensus was defined a priori as a median rank of ≥9 for each filter and an APMO cut-off rate of ≥0.8. Panellists agreed on trauma care processes to target (e.g. triage, phases of trauma assessment, early referral if needed) and specific features of filters for district-level hospital use (e.g. simplicity, no assumptions about resource capacity). The APMO cut-off rate increased successively: Round 1--0.58; Round 2--0.66; Round 3--0.76; and Round 4--0.82. After Round 4, target consensus on 22 trauma care and referral-specific filters was reached. Example filters include: triage--vital signs are recorded within 15 min of arrival (must include breathing assessment, heart rate, blood pressure, and oxygen saturation if available); circulation--a large-bore IV was placed within 15 min of patient arrival; referral--if referral is activated, the referring clinician and receiving facility communicate by phone or radio prior to transfer. This study proposes trauma care audit filters appropriate for LMIC district-level hospitals. Given the successes of similar filters in HICs and obstetric care filters in LMICs
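
    The consensus arithmetic can be sketched; the APMO formulation below is one common variant (share of panellists siding with the majority, averaged over items), and the panellist rankings are invented for illustration, not the study's data:

```python
# Average percent majority opinion (APMO) sketch. Each filter gets a list of
# panellist rankings on the study's 0-10 scale; a ranking >= 9 is treated as
# "useful". For each filter, the majority share is computed, then averaged.

def apmo(rankings_per_filter, threshold=9):
    shares = []
    for ranks in rankings_per_filter:
        useful = sum(1 for r in ranks if r >= threshold)
        majority = max(useful, len(ranks) - useful)   # side with more votes
        shares.append(majority / len(ranks))
    return sum(shares) / len(shares)

hypothetical_round = [
    [9, 10, 9, 8, 9, 9, 10, 9, 7, 9],   # filter A: 8 of 10 say useful
    [6, 5, 9, 4, 3, 9, 5, 6, 4, 5],     # filter B: 8 of 10 say not useful
]
consensus = apmo(hypothetical_round)
# both filters have an 8/10 majority, so the round's APMO is 0.8,
# exactly at the study's target cut-off
```
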

  11. A Unified Approach to Functional Principal Component Analysis and Functional Multiple-Set Canonical Correlation.

    Science.gov (United States)

    Choi, Ji Yeh; Hwang, Heungsun; Yamamoto, Michio; Jung, Kwanghee; Woodward, Todd S

    2017-06-01

    Functional principal component analysis (FPCA) and functional multiple-set canonical correlation analysis (FMCCA) are data reduction techniques for functional data that are collected in the form of smooth curves or functions over a continuum such as time or space. In FPCA, low-dimensional components are extracted from a single functional dataset such that they explain the most variance of the dataset, whereas in FMCCA, low-dimensional components are obtained from each of multiple functional datasets in such a way that the associations among the components are maximized across the different sets. In this paper, we propose a unified approach to FPCA and FMCCA. The proposed approach subsumes both techniques as special cases. Furthermore, it permits a compromise between the techniques, such that components are obtained from each set of functional data to maximize their associations across different datasets, while accounting for the variance of the data well. We propose a single optimization criterion for the proposed approach, and develop an alternating regularized least squares algorithm to minimize the criterion in combination with basis function approximations to functions. We conduct a simulation study to investigate the performance of the proposed approach based on synthetic data. We also apply the approach for the analysis of multiple-subject functional magnetic resonance imaging data to obtain low-dimensional components of blood-oxygen level-dependent signal changes of the brain over time, which are highly correlated across the subjects as well as representative of the data. The extracted components are used to identify networks of neural activity that are commonly activated across the subjects while carrying out a working memory task.
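
    The FPCA idea can be sketched in discretized form: sample smooth curves on a grid and extract the leading principal component, which should recover the dominant mode of variation. The basis-function smoothing and multiple-set canonical-correlation machinery of the paper are omitted, and the curves below are synthetic:

```python
import numpy as np

# Discretized FPCA sketch: 40 synthetic curves on a 50-point grid, each a
# random multiple of a shared sine shape plus small noise. The leading
# principal component of the centered data should recover that shape.

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 50)
scores = rng.normal(0.0, 1.0, 40)              # one amplitude score per curve
signal = np.sin(2 * np.pi * t)                 # shared mode of variation
curves = scores[:, None] * signal[None, :] + rng.normal(0.0, 0.01, (40, 50))

centered = curves - curves.mean(axis=0)
_, s, vt = np.linalg.svd(centered, full_matrices=False)
pc1 = vt[0]                                    # leading component (unit norm)
explained = s[0] ** 2 / (s ** 2).sum()         # variance explained by PC1
alignment = abs(np.corrcoef(pc1, signal)[0, 1])  # |corr| with the true mode
```

    With a single dominant mode, PC1 explains nearly all the variance and aligns (up to sign) with the generating shape; FMCCA would instead seek components correlated across several such datasets.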

  12. Grouting as a remedial technique for buried low-level radioactive wastes

    International Nuclear Information System (INIS)

    Spalding, B.P.; Hyder, L.K.; Munro, I.L.

    1985-01-01

    Seven grout formulations were tested in the laboratory for their ability to penetrate and to reduce the hydraulic conductivities of soils used as backfills for shallow land burial trenches. Soils from two sites, in Oak Ridge, TN, and Maxey Flats, KY, were used; both are classified as Typic Dystrochrepts. Three soluble grout formulations (sodium silicate, polypropenamide [polyacrylamide], and 1,3-benzenediol [resorcinol]-formaldehyde) were able to both penetrate soil and sand columns and reduce hydraulic conductivities from initial values of ca. 10⁻⁴ m s⁻¹ to 10⁻⁸ m s⁻¹. Three particulate grouts (lime [calcium oxide]-fly ash, fly ash-cement-bentonite, and bentonite alone) could not penetrate the columns; such formulations would, therefore, be difficult to inject into closed burial trenches. Field demonstrations with both sodium silicate and polyacrylamide showed that grout could be distributed throughout a burial trench and that waste-backfill hydraulic conductivity could be reduced by several orders of magnitude. Field grouting with polyacrylamide reduced the mean hydraulic conductivity of nine intratrench monitoring wells from 10⁻⁴ to 10⁻⁸ m s⁻¹. Grouting of low-level radioactive solid waste in situ, therefore, should be an effective technique to correct situations where leaching of buried wastes has resulted or will result in groundwater contamination

  13. Principles of modern radar advanced techniques

    CERN Document Server

    Melvin, William

    2012-01-01

    Principles of Modern Radar: Advanced Techniques is a professional reference for practicing engineers that provides a stepping stone to advanced practice with in-depth discussions of the most commonly used advanced techniques for radar design. It will also serve advanced radar academic and training courses with a complete set of problems for students as well as solutions for instructors.

  14. Template matching techniques in computer vision theory and practice

    CERN Document Server

    Brunelli, Roberto

    2009-01-01

    The detection and recognition of objects in images is a key research topic in the computer vision community. Within this area, face recognition and interpretation has attracted increasing attention owing to the possibility of unveiling human perception mechanisms, and for the development of practical biometric systems. This book and the accompanying website focus on template matching, a subset of object recognition techniques of wide applicability, which has proved to be particularly effective for face recognition applications. Using examples from face processing tasks throughout the book to illustrate more general object recognition approaches, Roberto Brunelli: examines the basics of digital image formation, highlighting points critical to the task of template matching; presents basic and advanced template matching techniques, targeting grey-level images, shapes and point sets; discusses recent pattern classification paradigms from a template matching perspective; illustrates the development of a real fac...
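
    The core grey-level technique can be sketched with normalized cross-correlation in plain NumPy. This is a minimal brute-force illustration, not the book's implementation (which also covers pyramids, robust scores, and point-set matching):

```python
import numpy as np

# Brute-force template matching: slide a template over an image and score
# every offset with normalized cross-correlation (NCC); the best-scoring
# offset is the detected location.

def ncc(patch, template):
    a = patch - patch.mean()
    b = template - template.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def match_template(image, template):
    th, tw = template.shape
    best_score, best_pos = -2.0, (0, 0)
    for i in range(image.shape[0] - th + 1):
        for j in range(image.shape[1] - tw + 1):
            score = ncc(image[i:i + th, j:j + tw], template)
            if score > best_score:
                best_score, best_pos = score, (i, j)
    return best_pos, best_score

rng = np.random.default_rng(0)
image = rng.random((30, 30))
template = image[12:17, 8:13].copy()   # plant the template at offset (12, 8)
pos, score = match_template(image, template)
# the exact match should be found at (12, 8) with an NCC score of ~1.0
```

    NCC's invariance to affine changes in brightness (mean and contrast) is what makes it the workhorse score for grey-level template matching.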

  15. Brain tumor segmentation based on a hybrid clustering technique

    Directory of Open Access Journals (Sweden)

    Eman Abdel-Maksoud

    2015-03-01

    This paper presents an efficient image segmentation approach using the K-means clustering technique integrated with the Fuzzy C-means algorithm. It is followed by thresholding and level set segmentation stages to provide accurate brain tumor detection. The proposed technique benefits from the minimal computation time of K-means clustering and from the accuracy of Fuzzy C-means. The performance of the proposed image segmentation approach was evaluated by comparing it with some state-of-the-art segmentation algorithms in terms of accuracy, processing time, and performance. Accuracy was evaluated by comparing the results with the ground truth of each processed image. The experimental results demonstrate the effectiveness of the proposed approach in handling a large number of segmentation problems by improving segmentation quality and accuracy in minimal execution time.
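
    The K-means stage can be sketched on synthetic one-dimensional intensities. This illustrates only the clustering step, not the full hybrid pipeline (Fuzzy C-means, thresholding, and level-set refinement are omitted), and the intensity values are invented:

```python
import numpy as np

# Minimal 1-D K-means on synthetic "image intensities": three modes standing
# in for background, healthy tissue, and tumor.

def kmeans_1d(values, k, iters=20):
    # deterministic init: spread the initial centers over the intensity range
    centers = np.quantile(values, np.linspace(0.1, 0.9, k))
    for _ in range(iters):
        labels = np.argmin(np.abs(values[:, None] - centers[None, :]), axis=1)
        centers = np.array([values[labels == c].mean() for c in range(k)])
    return centers, labels

rng = np.random.default_rng(1)
values = np.concatenate([np.full(100, 0.1), np.full(100, 0.5), np.full(100, 0.9)])
values = values + rng.normal(0.0, 0.02, values.size)
centers, labels = kmeans_1d(values, k=3)
cluster_means = np.sort(centers)
# the recovered centers should sit near the three modes: 0.1, 0.5, 0.9
```

    In the paper's pipeline, the fast K-means labels seed the slower but more accurate Fuzzy C-means pass, which is then refined by the level-set stage.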

  16. Connection Between the Originality Level of Pupils' Visual Expression in Visual Arts Lessons and Their Level of Tolerance for Diversity

    Directory of Open Access Journals (Sweden)

    Miroslav Huzjak

    2017-09-01

    Full Text Available The aim of this research was to examine the connection between the originality level in children's expression during visual art lessons and their level of tolerance for difference. The participants comprised 110 primary school pupils from grades one, two and three. It was confirmed that there was a statistically significant difference between the pupils whose lessons were introduced using the didactic model of visual problem-based teaching and those whose lessons were not. Learning and setting art terminology, the analysis of motifs and explanation, as well as demonstration of art techniques, resulted in a higher level of creativity in visual performance, as well as a higher level of tolerance. It can be concluded that, with the proper choice of didactic models in teaching the visual arts, a wide range of pupil attitudes and beliefs can be improved.

  17. APPLICABILITY OF CONSOLIDATED TECHNIQUES IN THE VIEW OF ROMANIAN ACCOUNTING REGULATIONS

    Directory of Open Access Journals (Sweden)

    Cristina Rosu

    2016-06-01

    Full Text Available The accounting regulations are increasingly concerned with groups of companies. In some cases, these regulations require the preparation of consolidated financial statements. This is the task of the parent company, which keeps the consolidated accounts. To accomplish its goals, consolidated accounting uses several so-called consolidation techniques. These are applied in the case of groups of companies with a complex structure. Their goal is to elaborate the consolidated financial statements using a set of methods and empirical skills. In this article we synthesize and apply the consolidation techniques in view of Romanian accounting regulations. Romanian practice has revealed two techniques in particular: one based on direct consolidation and another based on multiple levels (phased consolidation). This work therefore addresses only the technical side of consolidated accounting; accounting records are omitted. Furthermore, we focus only on the preparation of the consolidated balance sheet for some hypothetical groups of companies.

  18. Two imaging techniques for 3D quantification of pre-cementation space for CAD/CAM crowns.

    Science.gov (United States)

    Rungruanganunt, Patchanee; Kelly, J Robert; Adams, Douglas J

    2010-12-01

    Internal three-dimensional (3D) "fit" of prostheses to prepared teeth is likely more important clinically than "fit" judged only at the level of the margin (i.e. marginal "opening"). This work evaluates two techniques for quantitatively defining 3D "fit", both using pre-cementation space impressions: X-ray microcomputed tomography (micro-CT) and quantitative optical analysis. Both techniques are of interest for comparing CAD/CAM system capabilities and for documenting "fit" as part of clinical studies. Pre-cementation space impressions were taken of a single zirconia coping on its die using a low-viscosity poly(vinyl siloxane) impression material. Calibration specimens of this material were fabricated between the measuring platens of a micrometre. Both calibration curves and pre-cementation space impression data sets were obtained by examination using micro-CT and quantitative optical analysis. Regression analysis was used to compare calibration curves with calibration sets. Micro-CT calibration data showed tighter 95% confidence intervals and could measure over a wider thickness range than the optical technique. Regions of interest (e.g., lingual, cervical) were more easily analysed with optical image analysis, and this technique was more suitable for extremely thin impression walls. Each technique has advantages and limitations, but either has the potential for use as part of clinical studies or for CAD/CAM protocol optimization. Copyright © 2010 Elsevier Ltd. All rights reserved.
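
    The calibration-curve step can be sketched with an ordinary least-squares fit: known-thickness specimens give instrument readings, the fitted line is inverted, and new readings are converted to thicknesses. All readings and thicknesses below are made up:

```python
import numpy as np

# Least-squares calibration sketch: fit thickness against instrument reading
# for known-thickness specimens, then predict the thickness of a new specimen
# from its reading (an idealized noise-free linear response is assumed).

known_thickness_um = np.array([20.0, 50.0, 100.0, 150.0, 200.0])
readings = 0.8 * known_thickness_um + 5.0        # hypothetical instrument response

# regress thickness on reading, i.e. the inverse calibration line
slope, intercept = np.polyfit(readings, known_thickness_um, 1)

new_reading = 0.8 * 75.0 + 5.0                   # a specimen that is really 75 um
predicted_um = slope * new_reading + intercept   # should recover ~75 um
```

    In the study, the quality of such fits (and the width of their confidence intervals) is what distinguished the micro-CT calibration from the optical one.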

  19. Recommended safety, reliability, quality assurance and management aerospace techniques with possible application by the DOE to the high-level radioactive waste repository program

    International Nuclear Information System (INIS)

    Bland, W.M. Jr.

    1985-05-01

    Aerospace SRQA and management techniques, principally those developed and used by the NASA Lyndon B. Johnson Space Center on the manned space flight programs, have been assessed for possible application by the DOE and DOE contractors to the high-level radioactive waste repository program that results from the implementation of the NWPA of 1982. Those techniques believed to have the greatest potential usefulness to the DOE and DOE contractors are discussed in detail and recommended to the DOE for adoption; the manner in which this transfer of technology can be implemented is also discussed. Six SRQA techniques and two management techniques are recommended for adoption by the DOE; included with the management techniques is a recommendation for the DOE to incorporate a licensing interface with the NRC when applying the milestone-review technique. Three other techniques are recommended for study by the DOE for possible adaptation to the DOE program

  20. Adopting plasma pyrolysis for management of low-level solid radioactive waste in India

    International Nuclear Information System (INIS)

    Gupta, R.K.; Singh, A.K.; Yeotikar, R.G.; Patil, S.P.; Jha, Jyoti; Mishra, S.K.; Gandhi, K.G.; Misra, S.D.

    2010-01-01

    Since plasma pyrolysis of low-level solid radioactive waste has the potential to reduce waste volumes by a factor of up to 1000:1, the technology is seen as a sound engineering and economic option for managing voluminous low-activity wastes. Development and adoption of such a technique, to replace existing methods of low-level solid radioactive waste management, is borne out of a compelling need to conserve disposal space. While plasma-based systems are already in use for the disposal of medical, toxic and other industrial wastes, the same level of maturity is yet to be attained in radioactive applications. A prototype plasma pyrolysis unit is being set up in India which, after extensive trials, will function as a full-scale plant for the volume reduction of low-level solid radioactive wastes. This paper deals with the philosophy of transition from current techniques to the plasma-based process. The design and engineering of the proposed facility and various system components are also briefly touched upon. (author)

  1. Structural level characterization of base oils using advanced analytical techniques

    KAUST Repository

    Hourani, Nadim; Muller, Hendrik; Adam, Frederick M.; Panda, Saroj K.; Witt, Matthias; Al-Hajji, Adnan A.; Sarathy, Mani

    2015-01-01

    ...Fourier transform ion cyclotron resonance mass spectrometry (FT-ICR MS) equipped with atmospheric pressure chemical ionization (APCI) and atmospheric pressure photoionization (APPI) sources. First, the capabilities and limitations of each analytical technique were evaluated...

  2. A fully automated and reproducible level-set segmentation approach for generation of MR-based attenuation correction map of PET images in the brain employing single STE-MR imaging modality

    Energy Technology Data Exchange (ETDEWEB)

    Kazerooni, Anahita Fathi; Aarabi, Mohammad Hadi [Quantitative MR Imaging and Spectroscopy Group, Research Center for Cellular and Molecular Imaging, Tehran University of Medical Sciences, Tehran (Iran, Islamic Republic of); Department of Medical Physics and Biomedical Engineering, Tehran University of Medical Sciences, Tehran (Iran, Islamic Republic of); Ay, Mohammadreza [Quantitative MR Imaging and Spectroscopy Group, Research Center for Cellular and Molecular Imaging, Tehran University of Medical Sciences, Tehran (Iran, Islamic Republic of); Medical Imaging Systems Group, Research Center for Cellular and Molecular Imaging, Tehran University of Medical Sciences, Tehran (Iran, Islamic Republic of); Rad, Hamidreza Saligheh [Quantitative MR Imaging and Spectroscopy Group, Research Center for Cellular and Molecular Imaging, Tehran University of Medical Sciences, Tehran (Iran, Islamic Republic of); Department of Medical Physics and Biomedical Engineering, Tehran University of Medical Sciences, Tehran (Iran, Islamic Republic of)

    2014-07-29

    Generating MR-based attenuation correction map (μ-map) for quantitative reconstruction of PET images still remains a challenge in hybrid PET/MRI systems, mainly because cortical bone structures are indistinguishable from proximal air cavities in conventional MR images. Recently, development of short echo-time (STE) MR imaging sequences, has shown promise in differentiating cortical bone from air. However, on STE-MR images, the bone appears with discontinuous boundaries. Therefore, segmentation techniques based on intensity classification, such as thresholding or fuzzy C-means, fail to homogeneously delineate bone boundaries, especially in the presence of intrinsic noise and intensity inhomogeneity. Consequently, they cannot be fully automatized, must be fine-tuned on the case-by-case basis, and require additional morphological operations for segmentation refinement. To overcome the mentioned problems, in this study, we introduce a new fully automatic and reproducible STE-MR segmentation approach exploiting level-set in a clustering-based intensity inhomogeneity correction framework to reliably delineate bone from soft tissue and air.

  3. A fully automated and reproducible level-set segmentation approach for generation of MR-based attenuation correction map of PET images in the brain employing single STE-MR imaging modality

    International Nuclear Information System (INIS)

    Kazerooni, Anahita Fathi; Aarabi, Mohammad Hadi; Ay, Mohammadreza; Rad, Hamidreza Saligheh

    2014-01-01

    Generating MR-based attenuation correction map (μ-map) for quantitative reconstruction of PET images still remains a challenge in hybrid PET/MRI systems, mainly because cortical bone structures are indistinguishable from proximal air cavities in conventional MR images. Recently, development of short echo-time (STE) MR imaging sequences, has shown promise in differentiating cortical bone from air. However, on STE-MR images, the bone appears with discontinuous boundaries. Therefore, segmentation techniques based on intensity classification, such as thresholding or fuzzy C-means, fail to homogeneously delineate bone boundaries, especially in the presence of intrinsic noise and intensity inhomogeneity. Consequently, they cannot be fully automatized, must be fine-tuned on the case-by-case basis, and require additional morphological operations for segmentation refinement. To overcome the mentioned problems, in this study, we introduce a new fully automatic and reproducible STE-MR segmentation approach exploiting level-set in a clustering-based intensity inhomogeneity correction framework to reliably delineate bone from soft tissue and air.

  4. Computer-aided measurement of liver volumes in CT by means of geodesic active contour segmentation coupled with level-set algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Suzuki, Kenji; Kohlbrenner, Ryan; Epstein, Mark L.; Obajuluwa, Ademola M.; Xu Jianwu; Hori, Masatoshi [Department of Radiology, University of Chicago, 5841 South Maryland Avenue, Chicago, Illinois 60637 (United States)

    2010-05-15

    Purpose: Computerized liver extraction from hepatic CT images is challenging because the liver often abuts other organs of a similar density. The purpose of this study was to develop a computer-aided measurement of liver volumes in hepatic CT. Methods: The authors developed a computerized liver extraction scheme based on geodesic active contour segmentation coupled with level-set contour evolution. First, an anisotropic diffusion filter was applied to portal-venous-phase CT images for noise reduction while preserving the liver structure, followed by a scale-specific gradient magnitude filter to enhance the liver boundaries. Then, a nonlinear grayscale converter enhanced the contrast of the liver parenchyma. By using the liver-parenchyma-enhanced image as a speed function, a fast-marching level-set algorithm generated an initial contour that roughly estimated the liver shape. A geodesic active contour segmentation algorithm coupled with level-set contour evolution refined the initial contour to define the liver boundaries more precisely. The liver volume was then calculated using these refined boundaries. Hepatic CT scans of 15 prospective liver donors were obtained under a liver transplant protocol with a multidetector CT system. The liver volumes extracted by the computerized scheme were compared to those traced manually by a radiologist, used as the "gold standard." Results: The mean liver volume obtained with our scheme was 1504 cc, whereas the mean gold-standard manual volume was 1457 cc, resulting in a mean absolute difference of 105 cc (7.2%). The computer-estimated liver volumetrics agreed excellently with the gold-standard manual volumetrics (intraclass correlation coefficient was 0.95) with no statistically significant difference (F=0.77; p(F≤f)=0.32). The average accuracy, sensitivity, specificity, and percent volume error were 98.4%, 91.1%, 99.1%, and 7.2%, respectively. Computerized CT liver volumetry would require substantially less

  5. Computer-aided measurement of liver volumes in CT by means of geodesic active contour segmentation coupled with level-set algorithms

    International Nuclear Information System (INIS)

    Suzuki, Kenji; Kohlbrenner, Ryan; Epstein, Mark L.; Obajuluwa, Ademola M.; Xu Jianwu; Hori, Masatoshi

    2010-01-01

    Purpose: Computerized liver extraction from hepatic CT images is challenging because the liver often abuts other organs of a similar density. The purpose of this study was to develop a computer-aided measurement of liver volumes in hepatic CT. Methods: The authors developed a computerized liver extraction scheme based on geodesic active contour segmentation coupled with level-set contour evolution. First, an anisotropic diffusion filter was applied to portal-venous-phase CT images for noise reduction while preserving the liver structure, followed by a scale-specific gradient magnitude filter to enhance the liver boundaries. Then, a nonlinear grayscale converter enhanced the contrast of the liver parenchyma. By using the liver-parenchyma-enhanced image as a speed function, a fast-marching level-set algorithm generated an initial contour that roughly estimated the liver shape. A geodesic active contour segmentation algorithm coupled with level-set contour evolution refined the initial contour to define the liver boundaries more precisely. The liver volume was then calculated using these refined boundaries. Hepatic CT scans of 15 prospective liver donors were obtained under a liver transplant protocol with a multidetector CT system. The liver volumes extracted by the computerized scheme were compared to those traced manually by a radiologist, used as the ''gold standard''. Results: The mean liver volume obtained with our scheme was 1504 cc, whereas the mean gold standard manual volume was 1457 cc, resulting in a mean absolute difference of 105 cc (7.2%). The computer-estimated liver volumetrics agreed excellently with the gold-standard manual volumetrics (intraclass correlation coefficient was 0.95) with no statistically significant difference (F=0.77; p(F≤f)=0.32). The average accuracy, sensitivity, specificity, and percent volume error were 98.4%, 91.1%, 99.1%, and 7.2%, respectively. Computerized CT liver volumetry would require substantially less completion time
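The fast-marching step described above can be illustrated with a self-contained sketch. This is not the authors' implementation: it is a generic first-order fast-marching solver in plain Python/NumPy that computes arrival times T satisfying |∇T| = 1/F on a grid, Dijkstra-style; in the paper's scheme the speed F would come from the liver-parenchyma-enhanced image.

```python
import heapq
import numpy as np

def fast_march(speed, seeds):
    """First-order fast marching on a 2-D grid: solve |grad T| = 1/F."""
    ny, nx = speed.shape
    T = np.full((ny, nx), np.inf)
    frozen = np.zeros((ny, nx), dtype=bool)
    heap = [(0.0, y, x) for y, x in seeds]
    for _, y, x in heap:
        T[y, x] = 0.0
    heapq.heapify(heap)
    while heap:
        t, y, x = heapq.heappop(heap)
        if frozen[y, x]:
            continue
        frozen[y, x] = True
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            j, i = y + dy, x + dx
            if 0 <= j < ny and 0 <= i < nx and not frozen[j, i]:
                # smallest known neighbour in each axis (upwind values)
                ty = min(T[j - 1, i] if j > 0 else np.inf,
                         T[j + 1, i] if j < ny - 1 else np.inf)
                tx = min(T[j, i - 1] if i > 0 else np.inf,
                         T[j, i + 1] if i < nx - 1 else np.inf)
                f = 1.0 / speed[j, i]
                a, b = sorted((tx, ty))
                if b - a >= f:        # only one axis contributes
                    t_new = a + f
                else:                 # quadratic (two-sided) update
                    t_new = 0.5 * (a + b + np.sqrt(2 * f * f - (a - b) ** 2))
                if t_new < T[j, i]:
                    T[j, i] = t_new
                    heapq.heappush(heap, (t_new, j, i))
    return T
```

With a uniform speed of 1, the arrival time along a grid axis equals the distance from the seed, which gives a quick correctness check.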

  6. Determination of uranium in ground water using different analytical techniques

    International Nuclear Information System (INIS)

    Sahu, S.K.; Maity, Sukanta; Bhangare, R.C.; Pandit, G.G.; Sharma, D.N.

    2014-10-01

    The concern over the presence of natural radionuclides like uranium in drinking water has been growing recently. The contamination of aquifers with radionuclides depends on a number of factors. The geology of an area is the most important factor, along with anthropogenic activities like mining, coal ash disposal from thermal power plants, use of phosphate fertilizers, etc. Whatever the source, the presence of uranium in drinking water is a matter of great concern for public health. Studies show that uranium is a chemo-toxic and nephrotoxic heavy metal; this chemotoxicity affects the kidneys and bones in particular. Given the potential health hazards from natural radionuclides in drinking water, many countries worldwide have adopted the guideline activity concentrations for drinking water quality recommended by the WHO (2011). For uranium, the WHO has set a limit of 30 μg L-1 in drinking water. The geological distribution of uranium and its migration in the environment are of interest because the element raises environmental and exposure concerns. It is therefore desirable to use an analytical technique for uranium analysis in water that is highly sensitive at trace levels, specific and precise in the presence of other naturally occurring major and trace metals, and that requires only a small amount of sample. Various analytical methods based on different techniques have been developed in the past for the determination of uranium in geological samples. The determination of uranium requires high selectivity due to its strong association with other elements. Several trace-level wet-chemistry analytical techniques have been reported for uranium determination, but most of these involve tedious and painstaking procedures, high detection limits, interferences, etc. Each analytical technique has its own merits and demerits. Comparative assessment by different techniques can provide better quality control and assurance. In the present study, uranium was analysed in ground water samples
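As a small worked example of the trace-level numbers involved, the WHO mass-concentration limit can be converted to an activity concentration. The specific activity of natural uranium (~25.4 Bq per mg, dominated by 238U and 234U) is a standard nuclear-data value assumed here, not taken from this record.

```python
# Convert a uranium mass concentration in water to an activity concentration.
# Assumption: natural uranium specific activity ~ 25.4 Bq/mg
# (238U + 234U + 235U); verify against a nuclear-data reference before use.
SPECIFIC_ACTIVITY_BQ_PER_UG = 25.4e-3  # Bq per microgram of natural U

def uranium_activity_bq_per_l(mass_ug_per_l):
    """Activity concentration (Bq/L) for a given mass concentration (ug/L)."""
    return mass_ug_per_l * SPECIFIC_ACTIVITY_BQ_PER_UG

# The WHO guideline of 30 ug/L corresponds to roughly 0.76 Bq/L.
who_limit_bq_per_l = uranium_activity_bq_per_l(30.0)
```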

  7. Effect of culture levels, ultrafiltered retentate addition, total solid levels and heat treatments on quality improvement of buffalo milk plain set yoghurt.

    Science.gov (United States)

    Yadav, Vijesh; Gupta, Vijay Kumar; Meena, Ganga Sahay

    2018-05-01

    We studied the effect of culture level (2, 2.5 and 3%), ultrafiltered (UF) retentate addition (0, 11 and 18%), total milk solids (13, 13.50 and 14%) and heat treatment (80 and 85 °C/30 min) on the change in pH and titratable acidity (TA), sensory scores and rheological parameters of yoghurt. With a 3% culture level, the required TA (0.90% lactic acid) was achieved in a minimum of 6 h of incubation. Increasing UF retentate addition produced a highly significant decrease in overall acceptability, body and texture, and colour and appearance scores, but a highly significant increase in the rheological parameters of the yoghurt samples. Yoghurt made from even 13.75% total solids with no UF retentate was judged sufficiently firm by the sensory panel. Most sensory attributes of yoghurt made with 13.50% total solids were significantly better than those of yoghurt prepared with either 13 or 14% total solids. Heating the standardised milk to 85 °C/30 min resulted in significantly better overall acceptability of the yoghurt. The overall acceptability of the optimised yoghurt was significantly better than that of a branded market sample. UF retentate addition adversely affected yoghurt quality, whereas optimisation of culture level, total milk solids and other process parameters noticeably improved the quality of plain set yoghurt, which had a shelf life of 15 days at 4 °C.

  8. Transfer printing techniques for materials assembly and micro/nanodevice fabrication.

    Science.gov (United States)

    Carlson, Andrew; Bowen, Audrey M; Huang, Yonggang; Nuzzo, Ralph G; Rogers, John A

    2012-10-09

    Transfer printing represents a set of techniques for deterministic assembly of micro- and nanomaterials into spatially organized, functional arrangements with two- and three-dimensional layouts. Such processes provide versatile routes not only to test structures and vehicles for scientific studies but also to high-performance, heterogeneously integrated functional systems, including those in flexible electronics, three-dimensional and/or curvilinear optoelectronics, and bio-integrated sensing and therapeutic devices. This article summarizes recent advances in a variety of transfer printing techniques, ranging from the mechanics and materials aspects that govern their operation to engineering features of their use in systems with varying levels of complexity. A concluding section presents perspectives on opportunities for basic and applied research, and on the emerging use of these methods in high-throughput, industrial-scale manufacturing. Copyright © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  9. Gender identification of Grasshopper Sparrows comparing behavioral, morphological, and molecular techniques

    Science.gov (United States)

    Ammer, F.K.; Wood, P.B.; McPherson, R.J.

    2008-01-01

    Correct gender identification in monomorphic species is often difficult, especially if males and females do not display obvious behavioral and breeding differences. We compared gender-specific morphology and behavior with recently developed DNA techniques for gender identification in the monomorphic Grasshopper Sparrow (Ammodramus savannarum). Gender was ascertained with DNA in 213 individuals using the 2550F/2718R primer set and 3% agarose gel electrophoresis. Field observations using behavior and breeding characteristics to identify gender matched the DNA analyses with 100% accuracy for adult males and females. Gender was identified with DNA for all captured juveniles that did not display gender-specific traits or behaviors in the field. The molecular techniques used offered a high level of accuracy and may be useful in studies of dispersal mechanisms and winter assemblage composition in monomorphic species.
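The gel-based readout described above reduces to counting distinct CHD bands: in birds, ZZ males amplify a single CHD-Z fragment while ZW females show both CHD-Z and CHD-W fragments. A minimal sketch of that decision rule; the band sizes in the usage example are hypothetical, not the fragment lengths reported for Grasshopper Sparrows.

```python
def sex_from_bands(band_sizes_bp):
    """Infer avian sex from CHD band sizes on a gel (2550F/2718R PCR).

    Males are ZZ (one CHD-Z band); females are ZW (CHD-Z plus CHD-W,
    i.e. two distinct band sizes). Anything else is flagged.
    """
    distinct = set(band_sizes_bp)
    if len(distinct) == 2:
        return "female"
    if len(distinct) == 1:
        return "male"
    return "indeterminate"

# Hypothetical fragment lengths, for illustration only:
example_female = sex_from_bands([600, 650])  # two bands -> female
example_male = sex_from_bands([600])         # one band  -> male
```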

  10. Huffman-based code compression techniques for embedded processors

    KAUST Repository

    Bonny, Mohamed Talal; Henkel, Jörg

    2010-01-01

    % for ARM and MIPS, respectively. In our compression technique, we have conducted evaluations using a representative set of applications and we have applied each technique to two major embedded processor architectures, namely ARM and MIPS. © 2010 ACM.
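The record above is truncated, but the underlying technique — building an optimal prefix code from symbol frequencies — can be sketched in a few lines. This is a generic Huffman coder, not the paper's hardware-oriented instruction-compression scheme:

```python
import heapq
from collections import Counter

def huffman_codes(freqs):
    """Build a Huffman code table {symbol: bitstring} from frequencies."""
    # (freq, tiebreak, node) tuples keep the heap comparable
    heap = [(f, i, sym) for i, (sym, f) in enumerate(freqs.items())]
    heapq.heapify(heap)
    if len(heap) == 1:  # degenerate single-symbol alphabet
        return {heap[0][2]: "0"}
    nxt = len(heap)
    while len(heap) > 1:
        f1, _, left = heapq.heappop(heap)
        f2, _, right = heapq.heappop(heap)
        heapq.heappush(heap, (f1 + f2, nxt, (left, right)))
        nxt += 1
    codes = {}
    def walk(node, prefix):
        if isinstance(node, tuple):       # internal node: recurse
            walk(node[0], prefix + "0")
            walk(node[1], prefix + "1")
        else:                             # leaf: record the code
            codes[node] = prefix
    walk(heap[0][2], "")
    return codes

text = "abracadabra"
codes = huffman_codes(Counter(text))
encoded = "".join(codes[c] for c in text)  # 23 bits, the optimal length here
```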

  11. INBOUND AND OUTBOUND MARKETING TECHNIQUES: A COMPARISON BETWEEN ITALIAN AND ROMANIAN PURE PLAYERS AND CLICK AND MORTAR COMPANIES

    Directory of Open Access Journals (Sweden)

    Elisa Rancati

    2015-05-01

    Full Text Available Despite the large number of blog posts and articles on inbound and outbound marketing techniques, no research article compares companies' propensity towards these opposing types of marketing techniques across countries. The present study focuses on a literature review of inbound versus outbound marketing and informs the academic community interested in content marketing studies about the availability of data sets on the implementation of these techniques by Italian and Romanian pure players and click-and-mortar companies, which will be explored in the near future through appropriate statistical methods within the framework of cross-cultural research.

  12. Set-up and first operation of a plasma oven for treatment of low level radioactive wastes

    Directory of Open Access Journals (Sweden)

    Nachtrodt Frederik

    2014-01-01

    Full Text Available An experimental device for plasma treatment of low- and intermediate-level radioactive waste was built and tested in several design variations. The laboratory device is designed with the intention of studying the general effects and difficulties in a plasma incineration set-up for the future development of a larger-scale pilot plant. The key part of the device consists of a novel microwave plasma torch driven by 200 W electric power and operating at atmospheric pressure. It is a specific design characteristic of the torch that a high peak temperature can be reached with a low power input compared to other plasma torches. Experiments have been carried out to analyze the effect of the plasma on materials typical of operational low-level wastes. In some preliminary cold tests, the behavior of stable volatile species, e.g., caesium, was investigated by TXRF measurements of material collected from the oven walls and the filtered off-gas. The results help in improving and scaling up the existing design and in understanding the effects for a pilot plant, especially for the off-gas collection and treatment.

  13. The Delphi Technique in Educational Research

    Directory of Open Access Journals (Sweden)

    Ravonne A. Green

    2014-04-01

    Full Text Available The Delphi Technique has been useful in educational settings in forming guidelines, standards, and in predicting trends. Judd lists these major uses of the Delphi Technique in higher education: (a) cost-effectiveness, (b) cost–benefit analysis, (c) curriculum and campus planning, and (d) university-wide educational goals and objectives. The thorough Delphi researcher seeks to reconcile the Delphi consensus with current literature, institutional research, and the campus environment. This triangle forms a sound base for responsible research practice. This book gives an overview of the Delphi Technique and the primary uses of this technique in research. This article on the Delphi Technique will give the researcher an invaluable resource for learning about the Delphi Technique and for applying this method in educational research projects.

  14. Emotional Freedom Techniques for Reducing Anxiety and Cortisol Level in Pregnant Adolescent Primiparous

    Directory of Open Access Journals (Sweden)

    Mardjan Mardjan

    2018-01-01

    Full Text Available Anxiety during pregnancy in a primiparous adolescent mother is a heavy burden because neither her psyche nor her reproductive organs are fully mature, which can increase the risk of maternal mortality, infant mortality, prolonged labour, low birth weight (LBW), postpartum depression, etc. One effort to minimise this anxiety is the implementation of EFT (Emotional Freedom Techniques) during the third trimester. This study assessed the effectiveness of EFT in decreasing anxiety about facing childbirth. It used a quasi-experimental pre-test and post-test design with treatment and control groups. The treatment was carried out during the third trimester and followed for three months (months 7, 8 and 9); EFT was implemented every month and then continued independently by the mother until the childbirth process. The research instruments were the TMAS (Taylor Manifest Anxiety Scale) and a blood cortisol test. The subjects were 38 respondents: 19 in the intervention group and 19 controls. Paired t-tests showed significant differences in TMAS scores at each of the three stages, and between pre- and post-intervention blood cortisol levels (p = 0.0001). Linear regression analysis gave p = 0.001 and R² = 0.57 for TMAS, and p = 0.004 and R² = 0.43 for blood cortisol. This analysis indicates that EFT contributed significantly, explaining 57% of the reduction in anxiety levels and 43% of the reduction in blood cortisol levels, indirectly affecting readiness to face the childbirth process.

  15. Random sets and random fuzzy sets as ill-perceived random variables an introduction for Ph.D. students and practitioners

    CERN Document Server

    Couso, Inés; Sánchez, Luciano

    2014-01-01

    This short book provides a unified view of the history and theory of random sets and fuzzy random variables, with special emphasis on their use for representing higher-order non-statistical uncertainty about statistical experiments. The authors lay bare the existence of two streams of works using the same mathematical ground, but differing in their use of sets, according to whether they represent objects of interest naturally taking the form of sets, or imprecise knowledge about such objects. Random (fuzzy) sets can be used in many fields ranging from mathematical morphology, economics, artificial intelligence, information processing and statistics per se, especially in areas where the outcomes of random experiments cannot be observed with full precision. This book also emphasizes the link between random sets and fuzzy sets with some techniques related to the theory of imprecise probabilities. This small book is intended for graduate and doctoral students in mathematics or engineering, but also provides an i...

  16. Self-consistent Green’s-function technique for bulk and surface impurity calculations: Surface core-level shifts by complete screening

    DEFF Research Database (Denmark)

    Aldén, M.; Abrikosov, I. A.; Johansson, B.

    1994-01-01

    of the frozen-core and atomic-sphere approximation but, in addition, includes the dipole contribution to the intersphere potential. Within the concept of complete screening, we identify the surface core-level binding-energy shift with the surface segregation energy of a core-ionized atom and use the Green's-function impurity technique in a comprehensive study of the surface core-level shifts (SCLS) of the 4d and 5d transition metals. In those cases, where observed data refer to single crystals, we obtain good agreement with experiment, whereas the calculations typically underestimate the measured shift obtained from...
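In the complete-screening picture invoked above, the surface core-level shift is identified with the surface segregation energy of the core-ionized atom, which the equivalent-core approximation treats as a Z+1 impurity. A common textbook way to write this identification (notation assumed here, not copied from the paper):

```latex
% Complete-screening identification of the surface core-level shift.
% The core-ionized atom Z^{*} is modelled, via the equivalent-core
% approximation, as a Z+1 impurity, so the shift equals its surface
% segregation energy.
\Delta E_{\mathrm{SCLS}}
  = E_{\mathrm{surf}}\!\left(Z^{*}\right) - E_{\mathrm{bulk}}\!\left(Z^{*}\right)
  \approx E_{\mathrm{segr}}\!\left(Z+1\right)
```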

  17. Country report: Vietnam. Setting Up of a {sup 90}Sr/{sup 90}Y Generator System Based on Supported Liquid Membrane (SLM) Technique and Radiolabeling of Eluted {sup 90}Y with Biomolecules

    Energy Technology Data Exchange (ETDEWEB)

    Thu, Nguyen Thi; Dong, Duong Van; Cuong, Bui Van; Khoa, Chu Van [Vietnam Atomic Energy Commission (VAEC), Nuclear Research Institute, Dalat (Viet Nam)

    2010-07-01

    In the course of participating in the IAEA-CRP during the last two years, Vietnam has achieved the goal of setting up a {sup 90}Sr/{sup 90}Y generator system based on the Supported Liquid Membrane (SLM) technique and also radiolabeling of the eluted {sup 90}Y with antibody, peptides and albumin. A two-stage SLM-based {sup 90}Sr-{sup 90}Y generator was set up in-house to generate carrier-free {sup 90}Y at different activity levels, viz. 5, 20 and 50 mCi. The generator system was operated in sequential mode, in which a 2-ethylhexyl 2-ethylhexylphosphonic acid (PC88A) based SLM was used in the first stage for the transport of {sup 90}Y in 4.0 M nitric acid from the source phase, where the {sup 90}Sr-{sup 90}Y equilibrium mixture is placed in nitric acid medium at pH 1-2. In the second stage, an octyl(phenyl)-N,N-diisobutylcarbamoylmethyl phosphine oxide (CMPO) based SLM was used for the selective transport of {sup 90}Y to 1.0 M acetic acid, which is the best medium for radiolabeling. The eluted {sup 90}Y from the generator was tested for the presence of any traces of {sup 90}Sr using Extraction Paper Chromatography (EPC) and was found suitable for radiolabeling. The generator system was successfully upgraded to the 100 mCi level with the help of an expert mission from India through the IAEA. The {sup 90}Y product obtained from the generator system was used for radiolabeling of antibody and peptides, viz. Rituximab and DOTATATE, and of albumin particles under different experimental conditions. A new chromatography system was developed for analyzing {sup 90}Y-labeled albumin using TAE buffer as the mobile phase in PC and ITLC.
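A small side calculation that goes with operating such a generator: after each elution, {sup 90}Y grows back toward equilibrium with its long-lived parent {sup 90}Sr. The sketch below assumes the standard {sup 90}Y half-life of 64.1 h and treats the parent activity as constant over the ingrowth period; the figures are illustrative, not from the report.

```python
import math

T_HALF_Y90_HOURS = 64.1  # assumed 90Y half-life (nuclear-data value)
LAM = math.log(2) / T_HALF_Y90_HOURS

def ingrowth_fraction(t_hours):
    """Fraction of equilibrium 90Y activity regrown t hours after elution,
    assuming the parent 90Sr (t1/2 ~ 28.8 y) is effectively constant."""
    return 1.0 - math.exp(-LAM * t_hours)

def time_to_fraction(frac):
    """Hours needed after elution to regrow the given fraction."""
    return -math.log(1.0 - frac) / LAM

# One 90Y half-life after elution, half the equilibrium activity is back;
# ~95% takes roughly 11-12 days.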

  18. Almost Free Modules Set-Theoretic Methods

    CERN Document Server

    Eklof, PC

    1990-01-01

    This is an extended treatment of the set-theoretic techniques which have transformed the study of abelian group and module theory over the last 15 years. Part of the book is new work which does not appear elsewhere in any form. In addition, a large body of material which has appeared previously (in scattered and sometimes inaccessible journal articles) has been extensively reworked and in many cases given new and improved proofs. The set theory required is carefully developed with algebraists in mind, and the independence results are derived from explicitly stated axioms. The book contains exe

  19. Advanced phase-lock techniques

    CERN Document Server

    Crawford, James A

    2008-01-01

    From cellphones to microprocessors to GPS navigation, phase-lock techniques are utilized in almost all modern electronic devices. This book takes a systems-level perspective, rather than a circuit-level one, which differentiates it from other books in the field.

  20. Successful adaptation of three-dimensional inversion methodologies for archaeological-scale, total-field magnetic data sets

    Science.gov (United States)

    Cheyney, S.; Fishwick, S.; Hill, I. A.; Linford, N. T.

    2015-08-01

    Despite the development of advanced processing and interpretation tools for magnetic data sets in the fields of mineral and hydrocarbon industries, these methods have not achieved similar levels of adoption for archaeological or very near surface surveys. Using a synthetic data set we demonstrate that certain methodologies and assumptions used to successfully invert more regional-scale data can lead to large discrepancies between the true and recovered depths when applied to archaeological-type anomalies. We propose variations to the current approach, analysing the choice of the depth-weighting function, mesh design and parameter constraints, to develop an appropriate technique for the 3-D inversion of archaeological-scale data sets. The results show a successful recovery of a synthetic scenario, as well as a case study of a Romano-Celtic temple in the UK. For the case study, the final susceptibility model is compared with two coincident ground penetrating radar surveys, showing a high correlation with the comparative depth slices. The new approach takes interpretation of archaeological data sets beyond a simple 2-D visual interpretation based on pattern recognition.
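One of the "methodologies and assumptions" at issue above is the depth-weighting function used to counteract the decay of magnetic sensitivity with depth. A minimal sketch of the commonly used Li-Oldenburg form; the z0 and beta values below are hypothetical, and choosing them appropriately for archaeological scales is precisely what the paper examines.

```python
import numpy as np

def depth_weighting(z, z0, beta=3.0):
    """Li-Oldenburg-style depth weighting w(z) = (z + z0)^(-beta/2).

    beta ~ 3 mimics the 1/r^3 fall-off of a magnetic dipole field; z0
    offsets the observation height. The weights compensate the inversion's
    tendency to concentrate susceptibility at the surface.
    """
    return (np.asarray(z, dtype=float) + z0) ** (-beta / 2.0)

# Hypothetical shallow cell depths (metres) and a small z0:
w = depth_weighting([0.25, 0.5, 1.0, 2.0], z0=0.1)
```

The weights decrease monotonically with depth, so deeper cells are penalised less by the model-norm term and can receive susceptibility during inversion.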

  1. Tokunaga and Horton self-similarity for level set trees of Markov chains

    International Nuclear Information System (INIS)

    Zaliapin, Ilia; Kovchegov, Yevgeniy

    2012-01-01

    Highlights: ► Self-similar properties of the level set trees for Markov chains are studied. ► Tokunaga and Horton self-similarity are established for symmetric Markov chains and regular Brownian motion. ► Strong, distributional self-similarity is established for symmetric Markov chains with exponential jumps. ► It is conjectured that fractional Brownian motions are Tokunaga self-similar. - Abstract: The Horton and Tokunaga branching laws provide a convenient framework for studying self-similarity in random trees. The Horton self-similarity is a weaker property that addresses the principal branching in a tree; it is a counterpart of the power-law size distribution for elements of a branching system. The stronger Tokunaga self-similarity addresses so-called side branching. The Horton and Tokunaga self-similarity have been empirically established in numerous observed and modeled systems, and proven for two paradigmatic models: the critical Galton–Watson branching process with finite progeny and the finite-tree representation of a regular Brownian excursion. This study establishes the Tokunaga and Horton self-similarity for a tree representation of a finite symmetric homogeneous Markov chain. We also extend the concept of Horton and Tokunaga self-similarity to infinite trees and establish self-similarity for an infinite-tree representation of a regular Brownian motion. We conjecture that fractional Brownian motions are also Tokunaga and Horton self-similar, with self-similarity parameters depending on the Hurst exponent.
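The Horton-Strahler order discussed above can be computed for the level-set (merge) tree of a discrete path with a short recursion. This toy sketch, not the authors' construction, builds the tree by splitting at the deepest interior minimum and applies the usual Strahler rule (two children of equal order merge into order+1, otherwise the maximum carries through):

```python
def strahler(series):
    """Horton-Strahler order of the level-set tree of a discrete path.

    Monotone pieces are leaves (order 1); each interior local minimum is
    an internal node merging the subtrees to its left and right.
    """
    s = list(series)
    # interior (possibly flat) local minima
    mins = [i for i in range(1, len(s) - 1)
            if s[i] <= s[i - 1] and s[i] <= s[i + 1]]
    if not mins:
        return 1  # no interior minimum: a single leaf
    m = min(mins, key=lambda i: s[i])  # deepest minimum is the root split
    left = strahler(s[:m + 1])
    right = strahler(s[m:])
    return left + 1 if left == right else max(left, right)

# Two single-peak excursions joined by a deeper minimum give order 3:
order = strahler([0, 2, 1, 3, -1, 3, 1, 3])
```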

  2. Low-level HIV-1 replication and the dynamics of the resting CD4+ T cell reservoir for HIV-1 in the setting of HAART

    Directory of Open Access Journals (Sweden)

    Wilke Claus O

    2008-01-01

    Full Text Available Abstract Background In the setting of highly active antiretroviral therapy (HAART, plasma levels of human immunodeficiency type-1 (HIV-1 rapidly decay to below the limit of detection of standard clinical assays. However, reactivation of remaining latently infected memory CD4+ T cells is a source of continued virus production, forcing patients to remain on HAART despite clinically undetectable viral loads. Unfortunately, the latent reservoir decays slowly, with a half-life of up to 44 months, making it the major known obstacle to the eradication of HIV-1 infection. However, the mechanism underlying the long half-life of the latent reservoir is unknown. The most likely potential mechanisms are low-level viral replication and the intrinsic stability of latently infected cells. Methods Here we use a mathematical model of T cell dynamics in the setting of HIV-1 infection to probe the decay characteristics of the latent reservoir upon initiation of HAART. We compare the behavior of this model to patient derived data in order to gain insight into the role of low-level viral replication in the setting of HAART. Results By comparing the behavior of our model to patient derived data, we find that the viral dynamics observed in patients on HAART could be consistent with low-level viral replication but that this replication would not significantly affect the decay rate of the latent reservoir. Rather than low-level replication, the intrinsic stability of latently infected cells and the rate at which they are reactivated primarily determine the observed reservoir decay rate according to the predictions of our model. Conclusion The intrinsic stability of the latent reservoir has important implications for efforts to eradicate HIV-1 infection and suggests that intensified HAART would not accelerate the decay of the latent reservoir.

  3. Low-level HIV-1 replication and the dynamics of the resting CD4+ T cell reservoir for HIV-1 in the setting of HAART

    Science.gov (United States)

    Sedaghat, Ahmad R; Siliciano, Robert F; Wilke, Claus O

    2008-01-01

    Background In the setting of highly active antiretroviral therapy (HAART), plasma levels of human immunodeficiency type-1 (HIV-1) rapidly decay to below the limit of detection of standard clinical assays. However, reactivation of remaining latently infected memory CD4+ T cells is a source of continued virus production, forcing patients to remain on HAART despite clinically undetectable viral loads. Unfortunately, the latent reservoir decays slowly, with a half-life of up to 44 months, making it the major known obstacle to the eradication of HIV-1 infection. However, the mechanism underlying the long half-life of the latent reservoir is unknown. The most likely potential mechanisms are low-level viral replication and the intrinsic stability of latently infected cells. Methods Here we use a mathematical model of T cell dynamics in the setting of HIV-1 infection to probe the decay characteristics of the latent reservoir upon initiation of HAART. We compare the behavior of this model to patient derived data in order to gain insight into the role of low-level viral replication in the setting of HAART. Results By comparing the behavior of our model to patient derived data, we find that the viral dynamics observed in patients on HAART could be consistent with low-level viral replication but that this replication would not significantly affect the decay rate of the latent reservoir. Rather than low-level replication, the intrinsic stability of latently infected cells and the rate at which they are reactivated primarily determine the observed reservoir decay rate according to the predictions of our model. Conclusion The intrinsic stability of the latent reservoir has important implications for efforts to eradicate HIV-1 infection and suggests that intensified HAART would not accelerate the decay of the latent reservoir. PMID:18171475
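The 44-month half-life quoted above is the crux of the eradication argument, and its back-of-envelope consequence is easy to reproduce. The sketch assumes pure exponential decay and an illustrative reservoir of 10^6 latently infected cells; both are assumptions for illustration, not patient data.

```python
import math

def time_to_clear_years(n_cells, t_half_months):
    """Years for a reservoir decaying with the given half-life to fall
    below one cell -- the usual argument for why a 44-month half-life
    makes eradication by HAART alone impractical."""
    months = t_half_months * math.log2(n_cells)  # number of halvings needed
    return months / 12.0

# ~73 years for an illustrative 10^6-cell reservoir at a 44-month half-life
years = time_to_clear_years(1e6, 44.0)
```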

  4. Assessing the suitability of extreme learning machines (ELM) for groundwater level prediction

    Directory of Open Access Journals (Sweden)

    Yadav Basant

    2017-03-01

    Full Text Available Fluctuation of groundwater levels around the world is an important theme in hydrological research. Rising water demand, faulty irrigation practices, mismanagement of soil and uncontrolled exploitation of aquifers are some of the reasons why groundwater levels are fluctuating. In order to effectively manage groundwater resources, it is important to have accurate readings and forecasts of groundwater levels. Due to the uncertain and complex nature of groundwater systems, the development of soft computing techniques (data-driven models) in the field of hydrology has significant potential. This study employs two soft computing techniques, namely, extreme learning machine (ELM) and support vector machine (SVM), to forecast groundwater levels at two observation wells located in Canada. A monthly data set of eight years from 2006 to 2014 consisting of both hydrological and meteorological parameters (rainfall, temperature, evapotranspiration and groundwater level) was used for the comparative study of the models. These variables were used in various combinations for univariate and multivariate analysis of the models. The study demonstrates that the proposed ELM model has better forecasting ability compared to the SVM model for monthly groundwater level forecasting.
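The ELM used above is simple enough to sketch: hidden-layer weights are drawn at random and never trained, and only the linear output weights are solved by least squares, which is what makes ELM fast to fit. A generic NumPy sketch on a toy regression task, not the configuration used in the study:

```python
import numpy as np

def elm_fit(X, y, n_hidden=50, seed=0):
    """Extreme learning machine: random hidden layer + least-squares output.

    The hidden weights W and biases b are random and fixed; only the
    output weights beta are estimated.
    """
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)                       # hidden-layer activations
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)  # linear readout
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Toy usage: learn y = sin(x) on [0, 3] from 200 samples.
x = np.linspace(0.0, 3.0, 200)[:, None]
y = np.sin(x).ravel()
W, b, beta = elm_fit(x, y, n_hidden=50, seed=0)
mse = np.mean((elm_predict(x, W, b, beta) - y) ** 2)
```

In the study's setting, X would hold lagged rainfall, temperature, evapotranspiration and groundwater-level values rather than a toy scalar input.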

  5. A Titration Technique for Demonstrating a Magma Replenishment Model.

    Science.gov (United States)

    Hodder, A. P. W.

    1983-01-01

    Conductometric titrations can be used to simulate subduction-setting volcanism. Suggestions are made as to the use of this technique in teaching volcanic mechanisms and geochemical indications of tectonic settings. (JN)

  6. Physics Mining of Multi-Source Data Sets

    Science.gov (United States)

    Helly, John; Karimabadi, Homa; Sipes, Tamara

    2012-01-01

    Powerful new parallel data mining algorithms can produce diagnostic and prognostic numerical models and analyses from observational data. These techniques yield higher-resolution measures than ever before of environmental parameters by fusing synoptic imagery and time-series measurements. These techniques are general and relevant to observational data, including raster, vector, and scalar, and can be applied in all Earth- and environmental science domains. Because they can be highly automated and are parallel, they scale to large spatial domains and are well suited to change and gap detection. This makes it possible to analyze spatial and temporal gaps in information, and facilitates within-mission replanning to optimize the allocation of observational resources. The basis of the innovation is the extension of a recently developed set of algorithms packaged into MineTool to multi-variate time-series data. MineTool is unique in that it automates the various steps of the data mining process, thus making it amenable to autonomous analysis of large data sets. Unlike techniques such as Artificial Neural Nets, which yield a black-box solution, MineTool's outcome is always an analytical model in parametric form that expresses the output in terms of the input variables. This has the advantage that the derived equation can then be used to gain insight into the physical relevance and relative importance of the parameters and coefficients in the model. This is referred to as physics-mining of data. The capabilities of MineTool are extended to include both supervised and unsupervised algorithms, handle multi-type data sets, and parallelize it.

  7. Agenda-setting the unknown

    DEFF Research Database (Denmark)

    Dannevig, Halvor

    -setting theory, it is concluded that agenda-setting of climate change adaptation requires human agency in providing local legitimacy and salience for the issue. The thesis also finds that boundary arrangements are needed to bridge the gap between local knowledge and scientific knowledge for adaptation governance....... Attempts at such boundary arrangements are already in place at the regional governance levels, but they must be strengthened if municipalities are to take further steps in implementing adaptation measures....

  8. Contextual control over task-set retrieval.

    Science.gov (United States)

    Crump, Matthew J C; Logan, Gordon D

    2010-11-01

    Contextual cues signaling task likelihood or the likelihood of task repetition are known to modulate the size of switch costs. We follow up on the finding by Leboe, Wong, Crump, and Stobbe (2008) that location cues predictive of the proportion of switch or repeat trials modulate switch costs. Their design employed one cue per task, whereas our experiment employed two cues per task, which allowed separate assessment of modulations to the cue-repetition benefit, a measure of lower level cue-encoding processes, and to the task-alternation cost, a measure of higher level processes representing task-set information. We demonstrate that location information predictive of switch proportion modulates performance at the level of task-set representations. Furthermore, we demonstrate that contextual control occurs even when subjects are unaware of the associations between context and switch likelihood. We discuss the notion that contextual information provides rapid, unconscious control over the extent to which prior task-set representations are retrieved in the service of guiding online performance.

  9. Art Appreciation and Technique.

    Science.gov (United States)

    Dean, Diane R.; Milam, Debora

    1985-01-01

    Presents examples of independent study units for gifted high school students in a resource room setting. Both art appreciation and technique are covered in activities concerned with media (basics of pencil, India ink, pastels, crayons, oil, acrylics, and watercolors), subject matter (landscapes, animals, the human figure), design and illustration…

  10. Coastal lagoon systems as indicator of Holocene sea-level development in a periglacial soft-sediment setting: Samsø, Denmark

    DEFF Research Database (Denmark)

    Sander, Lasse; Fruergaard, Mikkel; Johannessen, Peter N.

    2014-01-01

    Confined shallow-water environments are encountered in many places along the coast of the inner Danish waters. Despite their common occurrence, these environments have rarely been studied as sedimentary archives. In this study we set out to trace back changes in relative sea-level and associated geomorphological responses in sediment cores retrieved from coastal lagoon systems on the island of Samsø, central Denmark. In the mid-Atlantic period, the post-glacial sea-level rise reached what is today the southern Kattegat Sea. Waves, currents and tides began to erode the unconsolidated moraine material… Stratigraphy, grain-size distribution, fossil and organic matter content of cores retrieved from the lagoons were analyzed and compared. Age control was established using radiocarbon and optically stimulated luminescence dating. Our data produced a surprisingly consistent pattern for the sedimentary…

  11. Different techniques for reducing alcohol levels in wine: A review

    Directory of Open Access Journals (Sweden)

    Ozturk Burcu

    2014-01-01

    The aim of this review is to provide technical and practical information covering the outstanding techniques that may be used to adjust elevated alcohol concentrations in wine, and their effects on wine from the standpoint of organoleptic characteristics.

  12. Theory of random sets

    CERN Document Server

    Molchanov, Ilya

    2017-01-01

    This monograph, now in a thoroughly revised second edition, offers the latest research on random sets. It has been extended to include substantial developments achieved since 2005, some of them motivated by applications of random sets to econometrics and finance. The present volume builds on the foundations laid by Matheron and others, including the vast advances in stochastic geometry, probability theory, set-valued analysis, and statistical inference. It shows the various interdisciplinary relationships of random set theory within other parts of mathematics, and at the same time fixes terminology and notation that often vary in the literature, establishing it as a natural part of modern probability theory and providing a platform for future development. It is completely self-contained, systematic and exhaustive, with the full proofs that are necessary to gain insight. Aimed at research level, Theory of Random Sets will be an invaluable reference for probabilists; mathematicians working in convex and integ...

  13. An Empirical Study of the Transmission Power Setting for Bluetooth-Based Indoor Localization Mechanisms.

    Science.gov (United States)

    Castillo-Cara, Manuel; Lovón-Melgarejo, Jesús; Bravo-Rocca, Gusseppe; Orozco-Barbosa, Luis; García-Varea, Ismael

    2017-06-07

    Nowadays, there is a great interest in developing accurate wireless indoor localization mechanisms enabling the implementation of many consumer-oriented services. Among the many proposals, wireless indoor localization mechanisms based on the Received Signal Strength Indication (RSSI) are being widely explored. Most studies have focused on the evaluation of the capabilities of different mobile device brands and wireless network technologies. Furthermore, different parameters and algorithms have been proposed as a means of improving the accuracy of wireless-based localization mechanisms. In this paper, we focus on the tuning of the RSSI fingerprint to be used in the implementation of a Bluetooth Low Energy 4.0 (BLE4.0) Bluetooth localization mechanism. Following a holistic approach, we start by assessing the capabilities of two Bluetooth sensor/receiver devices. We then evaluate the relevance of the RSSI fingerprint reported by each BLE4.0 beacon operating at various transmission power levels using feature selection techniques. Based on our findings, we use two classification algorithms in order to improve the setting of the transmission power levels of each of the BLE4.0 beacons. Our main findings show that our proposal can greatly improve the localization accuracy by setting a custom transmission power level for each BLE4.0 beacon.
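The core of an RSSI-fingerprint localization mechanism like the one studied above can be sketched compactly. The beacon readings, positions, and nearest-neighbour matching below are hypothetical illustrations, not the paper's classifiers or data: each training fingerprint is a vector of RSSI values (in dBm), one per BLE4.0 beacon, and a query is assigned the position of its closest stored fingerprint.

```python
import math

# Hypothetical RSSI fingerprints: one vector of readings (dBm) per
# position, with one entry per BLE4.0 beacon (beacons 1..3 here).
fingerprints = {
    "room_A": [-55, -70, -80],
    "room_B": [-75, -60, -72],
    "room_C": [-82, -78, -58],
}

def locate(query):
    """Return the stored position whose fingerprint is nearest (Euclidean)."""
    return min(fingerprints, key=lambda pos: math.dist(query, fingerprints[pos]))

print(locate([-57, -71, -79]))   # query close to room_A's fingerprint
```

Tuning each beacon's transmission power, as the paper proposes, effectively reshapes these fingerprint vectors so that positions become easier to discriminate.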

  15. Teamwork skills in actual, in situ, and in-center pediatric emergencies: performance levels across settings and perceptions of comparative educational impact.

    Science.gov (United States)

    Couto, Thomaz Bittencourt; Kerrey, Benjamin T; Taylor, Regina G; FitzGerald, Michael; Geis, Gary L

    2015-04-01

    Pediatric emergencies require effective teamwork. These skills are developed and demonstrated in actual emergencies and in simulated environments, including simulation centers (in center) and the real care environment (in situ). Our aims were to compare teamwork performance across these settings and to identify perceived educational strengths and weaknesses between simulated settings. We hypothesized that teamwork performance in actual emergencies and in situ simulations would be higher than for in-center simulations. A retrospective, video-based assessment of teamwork was performed in an academic, pediatric level 1 trauma center, using the Team Emergency Assessment Measure (TEAM) tool (range, 0-44) among emergency department providers (physicians, nurses, respiratory therapists, paramedics, patient care assistants, and pharmacists). A survey-based, cross-sectional assessment was conducted to determine provider perceptions regarding simulation training. One hundred thirty-two videos, 44 from each setting, were reviewed. Mean total TEAM scores were similar and high in all settings (31.2 actual, 31.1 in situ, and 32.3 in-center, P = 0.39). Of 236 providers, 154 (65%) responded to the survey. For teamwork training, in situ simulation was considered more realistic (59% vs. 10%) and more effective (45% vs. 15%) than in-center simulation. In a video-based study in an academic pediatric institution, ratings of teamwork were relatively high among actual resuscitations and 2 simulation settings, substantiating the influence of simulation-based training on instilling a culture of communication and teamwork. On the basis of survey results, providers favored the in situ setting for teamwork training and suggested an expansion of our existing in situ program.

  16. Prediction of high level vibration test results by use of available inelastic analysis techniques

    International Nuclear Information System (INIS)

    Hofmayer, C.H.; Park, Y.J.; Costello, J.F.

    1991-01-01

    As part of a cooperative study between the United States and Japan, the US Nuclear Regulatory Commission and the Ministry of International Trade and Industry of Japan agreed to perform a test program that would subject a large-scale piping model to significant plastic strains under excitation conditions much greater than the design condition for nuclear power plants. The objective was to compare the results of the tests with state-of-the-art analyses. Comparisons were done at different excitation levels, from elastic to elastic-plastic to levels where cracking was induced in the test model. The program was called the High Level Vibration Test (HLVT). The HLVT was performed on the seismic table at the Tadotsu Engineering Laboratory of the Nuclear Power Engineering Test Center in Japan. The test model was constructed by modifying the 1/2.5 scale model of one loop of a PWR primary coolant system which was previously tested by NUPEC as part of their seismic proving test program. A comparison of various analysis techniques with test results shows a higher prediction error in the detailed strain values than in the overall response values. This prediction error is magnified as the plasticity in the test model increases. There is no significant difference in the peak responses between the simplified and the detailed analyses. A comparison between various detailed finite element model runs indicates that the material properties and plasticity modeling have a significant impact on the plastic strain responses under dynamic loading reversals. 5 refs., 12 figs

  17. A comparative evaluation of five human reliability assessment techniques

    International Nuclear Information System (INIS)

    Kirwan, B.

    1988-01-01

    A field experiment was undertaken to evaluate the accuracy, usefulness, and resource requirements of five human reliability quantification techniques: the Technique for Human Error Rate Prediction (THERP), Paired Comparisons, the Human Error Assessment and Reduction Technique (HEART), the Success Likelihood Index Method (SLIM) with Multi-Attribute Utility Decomposition (MAUD), and Absolute Probability Judgement. This was achieved by assessing each technique's predictions against a set of known human error probabilities, and by comparing their predictions on a set of five realistic Probabilistic Risk Assessment (PRA) human errors. On a combined measure of accuracy, THERP and Absolute Probability Judgement performed best, whilst HEART showed indications of accuracy and used fewer resources than the other techniques. HEART and THERP both appear to benefit from using trained assessors in order to obtain the best results. SLIM and Paired Comparisons require further research on achieving a robust calibration relationship between their scale values and absolute probabilities. (author)
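The calibration step that the abstract flags as an open issue for SLIM can be illustrated briefly. In the standard SLIM formulation, a Success Likelihood Index (SLI) is mapped to a Human Error Probability (HEP) via the log-linear relation log10(HEP) = a·SLI + b, with a and b fixed by anchor tasks whose HEPs are known. The anchor values below are invented for illustration only:

```python
import math

# Two hypothetical anchor tasks with known HEPs fix the calibration line
# log10(HEP) = a*SLI + b used by SLIM to convert indices to probabilities.
anchors = [(0.2, 1e-1), (0.9, 1e-4)]   # (SLI, known HEP) pairs
(s1, p1), (s2, p2) = anchors
a = (math.log10(p2) - math.log10(p1)) / (s2 - s1)
b = math.log10(p1) - a * s1

def hep(sli):
    """Interpolated human error probability for a task with the given SLI."""
    return 10 ** (a * sli + b)

print(f"{hep(0.55):.2e}")   # prints 3.16e-03: task midway between anchors
```

The robustness concern raised in the abstract is precisely that this line is only as good as the anchor probabilities used to fit it.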

  18. Modeling with data tools and techniques for scientific computing

    CERN Document Server

    Klemens, Ben

    2009-01-01

    Modeling with Data fully explains how to execute computationally intensive analyses on very large data sets, showing readers how to determine the best methods for solving a variety of different problems, how to create and debug statistical models, and how to run an analysis and evaluate the results. Ben Klemens introduces a set of open and unlimited tools, and uses them to demonstrate data management, analysis, and simulation techniques essential for dealing with large data sets and computationally intensive procedures. He then demonstrates how to easily apply these tools to the many threads of statistical technique, including classical, Bayesian, maximum likelihood, and Monte Carlo methods
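Two of the threads the book covers, maximum likelihood and Monte Carlo methods, combine naturally in a small sketch. This is a generic illustration, not an example from the book: for exponentially distributed data the maximum-likelihood estimate of the rate is the reciprocal of the sample mean, which we check on a Monte Carlo sample drawn with a known rate.

```python
import random
import statistics

# Draw a Monte Carlo sample from an exponential distribution with a
# known rate, then recover the rate by maximum likelihood (1 / mean).
random.seed(42)
true_rate = 2.0
sample = [random.expovariate(true_rate) for _ in range(100_000)]

mle_rate = 1.0 / statistics.fmean(sample)   # closed-form MLE for the rate
print(round(mle_rate, 2))                   # ≈ 2.0 for a large sample
```

The same pattern, simulate under known parameters and verify the estimator recovers them, is a standard way to debug statistical models before applying them to real data sets.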

  19. Trench design and construction techniques for low-level radioactive waste disposal

    International Nuclear Information System (INIS)

    Tucker, P.G.

    1983-02-01

    This document provides information on trench design and construction techniques which can be used in the disposal of LLW by shallow land burial. It covers practices currently in use not only in the LLW disposal field, but also methods and materials being used in hazardous and municipal waste disposal which are compatible with the performance objectives of 10 CFR Part 61. The complexity of a disposal site and its potential problems dictate the use of site-specific characteristics when designing a LLW disposal trench. This report presents the LLW disposal trench as consisting of various elements or unit processes. The term unit processes is used as it more fully conveys the impact of the designer's choice of methods and materials. When choosing a material to fulfill the function of a certain trench element, the designer is also stipulating a portion of the operational procedure, which must be compatible with the disposal operation as a whole. Information is provided on the properties, selection, and installation of various materials such as bentonite, soil-cement, polymeric materials, asphaltic materials, and geotechnical fabrics. This report is not intended to outline step-by-step procedures. Three time frames are addressed with respect to construction techniques: preoperational, operational, and postoperational. Within each of these time frames there are certain construction techniques which can be employed by the designer to enhance the overall ease of construction and the ultimate success of the disposal facility. Among the techniques presented are precontouring the disposal area, alignment of the trench axis, sloping the trench bottom, incremental excavation, and surface water (runoff) management.

  20. ESTIMATION OF INSULATOR CONTAMINATIONS BY MEANS OF REMOTE SENSING TECHNIQUE

    Directory of Open Access Journals (Sweden)

    G. Han

    2016-06-01

    The accurate estimation of deposits adhering to insulators is critical to prevent pollution flashovers, which cause huge costs worldwide. The traditional evaluation method of insulator contamination (IC) is based on sparse manual in-situ measurements, resulting in insufficient spatial representativeness and poor timeliness. Filling that gap, we propose a novel evaluation framework for IC based on remote sensing and data mining. A variety of products derived from satellite data, such as aerosol optical depth (AOD), digital elevation model (DEM), land use and land cover, and normalized difference vegetation index, were obtained to estimate the severity of IC, along with the necessary field investigation inventory (pollution sources, ambient atmosphere, and meteorological data). Rough set theory was utilized to minimize the input set under the prerequisite that the resultant set is equivalent to the full set in terms of the ability to distinguish severity levels of IC. We found that AOD, the strength of pollution sources, and precipitation are the top three decisive factors for estimating insulator contamination. On that basis, different classification algorithms, such as Mahalanobis minimum distance, support vector machine (SVM), and maximum likelihood, were utilized to estimate severity levels of IC. 10-fold cross-validation was carried out to evaluate the performance of the different methods. SVM yielded the best overall accuracy among the three algorithms. An overall accuracy of more than 70% was achieved, suggesting a promising application of remote sensing in power maintenance. To our knowledge, this is the first trial to introduce remote sensing and relevant data analysis techniques into the estimation of electrical insulator contamination.
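The rough-set reduction step described above can be sketched on a toy decision table. The attribute values and severity labels below are invented for illustration, not the paper's data: an attribute subset qualifies as a reduct if objects that agree on those attributes never disagree on the decision, i.e. the subset distinguishes severity levels as well as the full attribute set does.

```python
from itertools import combinations

# Hypothetical decision table: (AOD, pollution_source, precipitation)
# condition attributes (binarized) mapped to an IC severity level.
table = [
    ((1, 1, 0), "high"),
    ((1, 0, 0), "high"),
    ((0, 1, 1), "low"),
    ((0, 0, 1), "low"),
    ((0, 1, 0), "medium"),
]

def consistent(attr_idx):
    """True if objects agreeing on these attributes never differ in decision."""
    seen = {}
    for cond, dec in table:
        key = tuple(cond[i] for i in attr_idx)
        if seen.setdefault(key, dec) != dec:
            return False   # same condition values, conflicting severity
    return True

# Find the smallest attribute subsets that keep the table consistent.
n = len(table[0][0])
reducts = []
for size in range(1, n + 1):
    reducts = [c for c in combinations(range(n), size) if consistent(c)]
    if reducts:
        print(reducts)   # minimal reducts, e.g. attribute indices (0, 2)
        break
```

On this toy table the minimal reduct keeps AOD and precipitation, dropping the third attribute without losing any ability to distinguish severity levels, which is the same kind of input-set minimization the framework applies before training its classifiers.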