WorldWideScience

Sample records for program analytical segmentation

  1. What are Segments in Google Analytics

    Science.gov (United States)

    Segments find all sessions that meet a specific condition. You can then apply this segment to any report in Google Analytics (GA). Segments are a way of identifying sessions and users while filters identify specific events, like pageviews.

  2. Creating Web Area Segments with Google Analytics

    Science.gov (United States)

    Segments allow you to quickly access data for a predefined set of Sessions or Users, such as government or education users, or sessions in a particular state. You can then apply this segment to any report within the Google Analytics (GA) interface.

  3. Joint shape segmentation with linear programming

    KAUST Repository

    Huang, Qixing

    2011-01-01

    We present an approach to segmenting shapes in a heterogeneous shape database. Our approach segments the shapes jointly, utilizing features from multiple shapes to improve the segmentation of each. The approach is entirely unsupervised and is based on an integer quadratic programming formulation of the joint segmentation problem. The program optimizes over possible segmentations of individual shapes as well as over possible correspondences between segments from multiple shapes. The integer quadratic program is solved via a linear programming relaxation, using a block coordinate descent procedure that makes the optimization feasible for large databases. We evaluate the presented approach on the Princeton segmentation benchmark and show that joint shape segmentation significantly outperforms single-shape segmentation techniques. © 2011 ACM.
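
    For orientation, a minimal sketch of the relaxation-plus-block-coordinate-descent idea (generic variables and constraints assumed here; this is not the authors' formulation):

```python
import numpy as np
from scipy.optimize import linprog

def block_coordinate_descent(c, Q, blocks, n_iters=20):
    """Minimize c@x + x@Q@x over the box [0,1]^n with sum(x[block]) == 1
    per block (blocks: list of index arrays, e.g. one per shape).

    Q is assumed symmetric with zero entries *within* each block, so with
    the other blocks held fixed each subproblem is a linear program.
    """
    n = len(c)
    x = np.full(n, 1.0 / n)
    for _ in range(n_iters):
        for block in blocks:
            # Effective linear cost for this block given the fixed remainder.
            c_eff = c[block] + 2.0 * (Q @ x)[block]
            res = linprog(c_eff, A_eq=np.ones((1, len(block))), b_eq=[1.0],
                          bounds=(0.0, 1.0))
            x[block] = res.x
    return x  # relaxed indicators; round/threshold to recover a segmentation
```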

  4. Pre-analytical and analytical variation of drug determination in segmented hair using ultra-performance liquid chromatography-tandem mass spectrometry.

    Science.gov (United States)

    Nielsen, Marie Katrine Klose; Johansen, Sys Stybe; Linnet, Kristian

    2014-01-01

    Assessment of total uncertainty of analytical methods for the measurements of drugs in human hair has mainly been derived from the analytical variation. However, in hair analysis several other sources of uncertainty will contribute to the total uncertainty. Particularly, in segmental hair analysis pre-analytical variations associated with the sampling and segmentation may be significant factors in the assessment of the total uncertainty budget. The aim of this study was to develop and validate a method for the analysis of 31 common drugs in hair using ultra-performance liquid chromatography-tandem mass spectrometry (UPLC-MS/MS) with focus on the assessment of both the analytical and pre-analytical sampling variations. The validated method was specific, accurate (80-120%), and precise (CV≤20%) across a wide linear concentration range of 0.025-25 ng/mg for most compounds. The analytical variation was estimated to be less than 15% for almost all compounds. The method was successfully applied to 25 segmented hair specimens from deceased drug addicts showing a broad pattern of poly-drug use. The pre-analytical sampling variation was estimated from genuine duplicate measurements of two bundles of hair collected from each subject, after subtraction of the analytical component. For the most frequently detected analytes, the pre-analytical variation was estimated to be 26-69%. Thus, the pre-analytical variation was 3-7-fold larger than the analytical variation (7-13%) and hence the dominant component in the total variation (29-70%). The present study demonstrated the importance of including the pre-analytical variation in the assessment of the total uncertainty budget and in the setting of the 95%-uncertainty interval (±2CV_T). Excluding the pre-analytical sampling variation could significantly affect the interpretation of results from segmental hair analysis. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
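
    The uncertainty-budget arithmetic described above follows standard error propagation; a small illustration with assumed numbers (not the study's data):

```python
import numpy as np

def cv_from_duplicates(x1, x2):
    """Total CV estimated from paired duplicates:
    CV = sqrt(sum(((x1 - x2) / mean)^2) / (2N)) -- standard duplicate formula."""
    x1, x2 = np.asarray(x1, float), np.asarray(x2, float)
    rel_diff = (x1 - x2) / ((x1 + x2) / 2.0)
    return np.sqrt(np.sum(rel_diff**2) / (2 * len(x1)))

cv_analytical = 0.10                                   # assumed, from validation replicates
cv_dup = cv_from_duplicates([1.10, 0.45], [0.82, 0.61])  # hair bundles A vs B (made-up values)
# Pre-analytical component: subtract the analytical variance.
cv_pre = np.sqrt(max(cv_dup**2 - cv_analytical**2, 0.0))
cv_total = np.sqrt(cv_analytical**2 + cv_pre**2)
print(f"95% uncertainty interval: +/- {2 * cv_total:.0%}")  # the +/- 2 CV_T interval
```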

  5. Differential segmentation responses to an alcohol social marketing program.

    Science.gov (United States)

    Dietrich, Timo; Rundle-Thiele, Sharyn; Schuster, Lisa; Drennan, Judy; Russell-Bennett, Rebekah; Leo, Cheryl; Gullo, Matthew J; Connor, Jason P

    2015-10-01

    This study seeks to establish whether meaningful subgroups exist within a 14-16-year-old adolescent population and whether these segments respond differently to the Game On: Know Alcohol (GOKA) intervention, a school-based alcohol social marketing program. This study is part of a larger cluster randomized controlled evaluation of the GOKA program implemented in 14 schools in 2013/2014. TwoStep cluster analysis was conducted to segment 2,114 high school adolescents (14-16 years old) on the basis of 22 demographic, behavioral, and psychographic variables. Program effects on knowledge, attitudes, behavioral intentions, social norms, alcohol expectancies, and drinking refusal self-efficacy of the identified segments were subsequently examined. Three segments were identified: (1) Abstainers, (2) Bingers, and (3) Moderate Drinkers. Program effects varied significantly across segments. The strongest positive change effects post-participation were observed for Bingers, while mixed effects were evident for Moderate Drinkers and Abstainers. These findings provide preliminary empirical evidence supporting the application of social marketing segmentation in alcohol education programs. Development of targeted programs that meet the unique needs of each of the three identified segments will extend the social marketing footprint in alcohol education. Copyright © 2015 Elsevier Ltd. All rights reserved.
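
    For orientation, a stand-in for the segmentation step: TwoStep clustering is an SPSS procedure, so this sketch substitutes scikit-learn's KMeans on standardized variables (the file and column names are hypothetical):

```python
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

df = pd.read_csv("goka_survey.csv")              # hypothetical survey export
features = ["drinks_per_week", "intention_score", "attitude_score", "age"]
X = StandardScaler().fit_transform(df[features]) # put variables on one scale

km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
df["segment"] = km.labels_                       # e.g. Abstainers / Bingers / Moderates
print(df.groupby("segment")[features].mean())    # profile each segment
```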

  6. Joint shape segmentation with linear programming

    KAUST Repository

    Huang, Qixing; Koltun, Vladlen; Guibas, Leonidas

    2011-01-01

    … program is solved via a linear programming relaxation, using a block coordinate descent procedure that makes the optimization feasible for large databases. We evaluate the presented approach on the Princeton segmentation benchmark and show that joint shape …

  7. The MSCA Program: Developing Analytic Unicorns

    Science.gov (United States)

    Houghton, David M.; Schertzer, Clint; Beck, Scott

    2018-01-01

    Marketing analytics students who can communicate effectively with decision makers are in high demand. These "analytic unicorns" are hard to find. The Master of Science in Customer Analytics (MSCA) degree program at Xavier University seeks to fill that need. In this paper, we discuss the process of creating the MSCA program. We outline…

  8. Pre-analytical and analytical variation of drug determination in segmented hair using ultra-performance liquid chromatography-tandem mass spectrometry

    DEFF Research Database (Denmark)

    Nielsen, Marie Katrine Klose; Johansen, Sys Stybe; Linnet, Kristian

    2014-01-01

    … variation was estimated to be less than 15% for almost all compounds. The method was successfully applied to 25 segmented hair specimens from deceased drug addicts showing a broad pattern of poly-drug use. The pre-analytical sampling variation was estimated from the genuine duplicate measurements of two …

  9. Exact analytical modeling of magnetic vector potential in surface inset permanent magnet DC machines considering magnet segmentation

    Science.gov (United States)

    Jabbari, Ali

    2018-01-01

    Surface inset permanent magnet DC machines can be used as an alternative in automation systems due to their high efficiency and robustness. Magnet segmentation is a common technique for mitigating pulsating torque components in permanent magnet machines. An accurate computation of the air-gap magnetic field distribution is necessary to calculate machine performance. In this paper, an exact analytical method for magnetic vector potential calculation in surface inset permanent magnet machines considering magnet segmentation is proposed. The analytical method is based on the resolution of the Laplace and Poisson equations, as well as Maxwell's equations, in polar coordinates using the sub-domain method. One of the main contributions of the paper is the derivation of an expression for the magnetic vector potential in the segmented PM region using hyperbolic functions. The developed method is applied to the performance computation of two prototype surface inset segmented-magnet motors under open-circuit and on-load conditions. The results of these models are validated by the finite element method.
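
    The sub-domain approach referenced above builds on textbook separation-of-variables solutions. For orientation, the governing form in a source-free (air) region is shown below; the PM region instead satisfies a Poisson equation with a magnetization source term, and the coefficients a_n, b_n, c_n, d_n are fixed by the interface conditions (this is the standard form, not the paper's full model):

```latex
\nabla^2 A_z = \frac{\partial^2 A_z}{\partial r^2}
             + \frac{1}{r}\frac{\partial A_z}{\partial r}
             + \frac{1}{r^2}\frac{\partial^2 A_z}{\partial \theta^2} = 0,
\qquad
A_z(r,\theta) = \sum_{n=1}^{\infty}
  \left(a_n r^{n} + b_n r^{-n}\right)
  \left(c_n \cos n\theta + d_n \sin n\theta\right).
```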

  10. Writing analytic element programs in Python.

    Science.gov (United States)

    Bakker, Mark; Kelson, Victor A

    2009-01-01

    The analytic element method is a mesh-free approach for modeling ground water flow at both the local and the regional scale. With the advent of the Python object-oriented programming language, it has become relatively easy to write analytic element programs. In this article, an introduction is given to the basic principles of the analytic element method and of the Python programming language. A simple, yet flexible, object-oriented design is presented for analytic element codes using multiple inheritance. New types of analytic elements may be added without the need for any changes in the existing part of the code. The presented code may be used to model flow to wells (with either a specified discharge or drawdown) and streams (with a specified head). The code may be extended by any hydrogeologist with a healthy appetite for writing computer code to solve more complicated ground water flow problems. Copyright © 2009 The Author(s). Journal Compilation © 2009 National Ground Water Association.
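
    A minimal sketch in the spirit of the design described (superposition of analytic elements via a common base class); the class layout and the confined-flow discharge-potential formulas are textbook stand-ins, not the article's code:

```python
import numpy as np

class Element:
    """Base class: every analytic element contributes a discharge potential."""
    def potential(self, x, y):
        raise NotImplementedError

class Well(Element):
    """Well with specified discharge Q (positive = extraction)."""
    def __init__(self, xw, yw, Q):
        self.xw, self.yw, self.Q = xw, yw, Q
    def potential(self, x, y):
        r2 = (x - self.xw) ** 2 + (y - self.yw) ** 2
        return self.Q / (4.0 * np.pi) * np.log(r2)   # Thiem-type well term

class UniformFlow(Element):
    """Background uniform flow with components (qx, qy)."""
    def __init__(self, qx, qy):
        self.qx, self.qy = qx, qy
    def potential(self, x, y):
        return -(self.qx * x + self.qy * y)

class Model:
    """Superposition: the total potential is the sum over all elements."""
    def __init__(self, elements):
        self.elements = elements
    def potential(self, x, y):
        return sum(e.potential(x, y) for e in self.elements)

model = Model([Well(0.0, 0.0, Q=100.0), UniformFlow(qx=1.0, qy=0.0)])
print(model.potential(50.0, 25.0))
```

    New element types plug in by subclassing Element, which is the extensibility property the abstract highlights.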

  11. Development of novel segmented-plate linearly tunable MEMS capacitors

    International Nuclear Information System (INIS)

    Shavezipur, M; Khajepour, A; Hashemi, S M

    2008-01-01

    In this paper, novel MEMS capacitors with flexible moving electrodes and high linearity and tunability are presented. The moving plate is divided into small, rigid segments connected to one another by connecting beams at their end nodes. Under each node there is a rigid step which selectively limits the vertical displacement of the node. A lumped model is developed to analytically solve the governing equations of the coupled structural-electrostatic physics with mechanical contact. Using the analytical solver, an optimization program finds the set of step heights that provides the highest linearity. Analytical and finite element analyses of two capacitors with three- and six-segment plates confirm that the segmentation technique considerably improves linearity while the tunability remains as high as that of a conventional parallel-plate capacitor. Moreover, since the new designs require customized fabrication processes, a modified capacitor with flexible steps designed for PolyMUMPs is introduced to demonstrate the applicability of the proposed technique for standard processes. Dimensional optimization of the modified design results in a combination of high linearity and tunability. Constraining the displacement of the moving plate can be extended to more complex geometries to obtain smooth and highly linear responses.

  12. Segmented fuel irradiation program: investigation on advanced materials

    International Nuclear Information System (INIS)

    Uchida, H.; Goto, K.; Sabate, R.; Abeta, S.; Baba, T.; Matias, E. de; Alonso, J.

    1999-01-01

    The Segmented Fuel Irradiation Program, started in 1991, is a collaboration between the Japanese organisations Nuclear Power Engineering Corporation (NUPEC), the Kansai Electric Power Co., Inc. (KEPCO), representing other Japanese utilities, and Mitsubishi Heavy Industries, Ltd. (MHI), and the Spanish organisations Empresa Nacional de Electricidad, S.A. (ENDESA), representing A.N. Vandellos 2, and Empresa Nacional Uranio, S.A. (ENUSA), with the collaboration of Westinghouse. The objective of the Program is to make a substantial contribution to the development of advanced cladding and fuel materials for better performance at high burn-up and under operational power transients. For this Program, segmented fuel rods were selected as the most appropriate vehicle to accomplish the aforementioned objective: they provide a large number of fuel and cladding combinations while minimising the total amount of new material and, at the same time, facilitating an eventual irradiation extension in a test reactor. The Program consists of three major phases: Phase I, design, licensing, fabrication and characterisation of the assemblies carrying the segmented rods (1991-1994); Phase II, base irradiation of the assemblies at Vandellos 2 NPP, and on-site examination at the end of four cycles (1994-1999); Phase III, ramp testing at the Studsvik facilities and hot cell PIE (1996-2001). The main fuel design features whose effects on fuel behaviour are being analysed are: alloy composition (MDA and ZIRLO vs. Zircaloy-4), tubing texture, and pellet grain size. The Program is progressing satisfactorily as planned. The base irradiation was completed in the first quarter of 1999, and the tests and inspections carried out so far are providing useful information on the behaviour of the new materials. The Program is also delivering a well characterized fuel material, irradiated in a commercial reactor, which can be further used in other fuel behaviour experiments. The paper presents the main …

  13. One size (never) fits all: segment differences observed following a school-based alcohol social marketing program.

    Science.gov (United States)

    Dietrich, Timo; Rundle-Thiele, Sharyn; Leo, Cheryl; Connor, Jason

    2015-04-01

    According to commercial marketing theory, a market orientation leads to improved performance. Drawing on the social marketing principles of segmentation and audience research, the current study seeks to identify segments and examine their responses to a school-based alcohol social marketing program. A sample of 371 year 10 students (aged 14-16 years; 51.4% boys) participated in a prospective (pre-post) multisite alcohol social marketing program. The Game On: Know Alcohol (GO:KA) program included six student-centered, interactive lessons teaching adolescents about alcohol and strategies to abstain from or moderate drinking. A repeated measures design was used. Baseline demographics, drinking attitudes, drinking intentions, and alcohol knowledge were cluster analyzed to identify segments. Change on key program outcome measures and satisfaction with program components were assessed by segment. Three segments were identified: (1) Skeptics, (2) Risky Males, and (3) Good Females. Segments 2 and 3 showed the greatest change in drinking attitudes and intentions. Good Females reported the highest satisfaction with all program components, and Skeptics the lowest. The three segments, each differing on psychographic and demographic variables, exhibited different change patterns following participation in GO:KA. Post hoc analysis identified that satisfaction with program components differed by segment, offering opportunities for further research. © 2015, American School Health Association.

  14. Programming system for analytic geometry

    International Nuclear Information System (INIS)

    Raymond, Jacques

    1970-01-01

    After outlining the characteristics of computing centres that are ill-suited to engineering work, notably the time consumed by the various steps of software development (assembly, compilation, link editing, loading, running), and identifying constraints specific to engineering, the author identifies the characteristics a programming system should have to suit engineering tasks. He discusses existing conversational systems, their programming languages, and their main drawbacks. He then presents a system that aims at facilitating programming and at addressing problems of analytic geometry and trigonometry.

  15. Analytic central path, sensitivity analysis and parametric linear programming

    NARCIS (Netherlands)

    A.G. Holder; J.F. Sturm; S. Zhang (Shuzhong)

    1998-01-01

    In this paper we consider properties of the central path and the analytic center of the optimal face in the context of parametric linear programming. We first show that if the right-hand side vector of a standard linear program is perturbed, then the analytic center of the optimal face …
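
    For orientation, the standard definition assumed by this abstract: the analytic center maximizes the logarithmic barrier over the optimal face F of the LP (variables identically zero on F are excluded from the sum):

```latex
F = \{\, x : Ax = b,\; x \ge 0,\; c^{\mathsf T} x = c^{*} \,\},
\qquad
x_{\mathrm{ac}} = \arg\max_{x \in F} \sum_{j \,:\, x_j \not\equiv 0 \text{ on } F} \ln x_j .
```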

  16. Tank 241-S-102, Core 232 analytical results for the final report

    Energy Technology Data Exchange (ETDEWEB)

    STEEN, F.H.

    1998-11-04

    This document is the analytical laboratory report for tank 241-S-102 push mode core segments collected between March 5, 1998 and April 2, 1998. The segments were subsampled and analyzed in accordance with the Tank 241-S-102 Retained Gas Sampler System Sampling and Analysis Plan (TSAP) (McCain, 1998), Letter of Instruction for Compatibility Analysis of Samples from Tank 241-S-102 (LOI) (Thompson, 1998) and the Data Quality Objectives for Tank Farms Waste Compatibility Program (DQO) (Mulkey and Miller, 1998). The analytical results are included in the data summary table (Table 1).

  17. ATLAST ULE mirror segment performance analytical predictions based on thermally induced distortions

    Science.gov (United States)

    Eisenhower, Michael J.; Cohen, Lester M.; Feinberg, Lee D.; Matthews, Gary W.; Nissen, Joel A.; Park, Sang C.; Peabody, Hume L.

    2015-09-01

    The Advanced Technology Large-Aperture Space Telescope (ATLAST) is a concept for a 9.2 m aperture space-borne observatory operating across the UV/optical/NIR spectrum. The primary mirror for ATLAST is a segmented architecture with picometer-class wavefront stability. Owing to its extraordinarily low coefficient of thermal expansion, Corning's ULE® titania-silicate glass is a leading candidate for the primary mirror substrate. The ATLAST ULE® mirror substrates will be maintained at 'room temperature' during on-orbit flight operations, minimizing the need to compensate for mirror deformation between the manufacturing and operational temperatures. This approach requires active thermal management to maintain the operational temperature while on orbit. Furthermore, the active thermal control must be sufficiently stable to prevent time-varying thermally induced distortions in the mirror substrates. This paper describes a conceptual thermal management system for the ATLAST 9.2 m segmented mirror architecture that maintains wavefront stability to less than 10 picometers RMS over 10 minutes. Thermal and finite element models, analytical techniques, accuracies involved in solving the mirror figure errors, and early findings from the thermal and thermal-distortion analyses are presented.
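
    A back-of-envelope check of the picometer regime quoted above, with round numbers assumed here (not taken from the paper):

```python
# Thermal strain displacement: delta = alpha * L * dT. With a near-zero
# effective CTE of ~1e-9 / K (assumed round number for ULE-class glass),
# a 1 m segment held to ~1 mK stability moves on the order of a picometer.
alpha = 1e-9   # 1/K, assumed effective CTE
L = 1.0        # m, assumed characteristic segment dimension
dT = 1e-3      # K, assumed temperature stability over 10 minutes
delta = alpha * L * dT
print(f"thermally induced displacement ~ {delta * 1e12:.1f} pm")  # ~1 pm
```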

  18. Hanford performance evaluation program for Hanford site analytical services

    International Nuclear Information System (INIS)

    Markel, L.P.

    1995-09-01

    The U.S. Department of Energy (DOE) Order 5700.6C, Quality Assurance, and Title 10 of the Code of Federal Regulations, Part 830.120, Quality Assurance Requirements, state that it is the responsibility of DOE contractors to ensure that "quality is achieved and maintained by those who have been assigned the responsibility for performing the work." The Hanford Analytical Services Quality Assurance Plan (HASQAP) is designed to meet the needs of the Richland Operations Office (RL) for maintaining a consistent level of quality for the analytical chemistry services provided by contractor and commercial analytical laboratory operations. Therefore, services supporting Hanford environmental monitoring, environmental restoration, and waste management analytical services shall meet appropriate quality standards. This performance evaluation program will monitor the quality standards of all analytical laboratories supporting the Hanford Site, including on-site and off-site laboratories. The monitoring and evaluation of laboratory performance can be accomplished by the use of several tools, and this program discusses the tools that will be utilized for laboratory performance evaluations. Revision 0 primarily focuses on presently available programs using readily available performance evaluation materials provided by DOE, EPA or commercial sources. Project-specific PE materials and evaluations are described in Section 9.0 and Appendix A.

  19. Improved dynamic-programming-based algorithms for segmentation of masses in mammograms

    International Nuclear Information System (INIS)

    Dominguez, Alfonso Rojas; Nandi, Asoke K.

    2007-01-01

    In this paper, two new boundary tracing algorithms for segmentation of breast masses are presented. These new algorithms are based on the dynamic-programming-based boundary tracing (DPBT) algorithm proposed by Timp and Karssemeijer [Med. Phys. 31, 958-971 (2004)]. The DPBT algorithm contains two main steps: (1) construction of a local cost function, and (2) application of dynamic programming to the selection of the optimal boundary based on the local cost function. The validity of some assumptions used in the design of the DPBT algorithm is tested in this paper using a set of 349 mammographic images. Based on the results of the tests, modifications to the computation of the local cost function have been designed, resulting in the Improved-DPBT (IDPBT) algorithm. A procedure for the dynamic selection of the strength of the components of the local cost function is presented that makes these parameters independent of the image dataset. Incorporation of this dynamic selection procedure has produced another new algorithm, which we have called ID²PBT. Methods for the determination of some other parameters of the DPBT algorithm that were not covered in the original paper are presented as well. The merits of the new IDPBT and ID²PBT algorithms are demonstrated experimentally by comparison against the DPBT algorithm. The segmentation results are evaluated based on the area overlap measure and other segmentation metrics. Both of the new algorithms outperform the original DPBT; the improvements in performance are more noticeable around the values of the segmentation metrics corresponding to the highest segmentation accuracy, i.e., the new algorithms produce more optimally segmented regions, rather than a pronounced increase in the average quality of all the segmented regions.
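
    The core dynamic-programming step shared by DPBT-style algorithms, in a generic form (illustrative cost image and smoothness constraint; not the authors' cost function):

```python
import numpy as np

def trace_boundary(cost):
    """Minimum-cost left-to-right path through a 2-D cost image,
    where the row may move by at most one pixel per column."""
    rows, cols = cost.shape
    acc = np.full((rows, cols), np.inf)
    back = np.zeros((rows, cols), dtype=int)
    acc[:, 0] = cost[:, 0]
    for c in range(1, cols):
        for r in range(rows):
            for dr in (-1, 0, 1):                 # smoothness constraint
                pr = r + dr
                if 0 <= pr < rows and acc[pr, c - 1] + cost[r, c] < acc[r, c]:
                    acc[r, c] = acc[pr, c - 1] + cost[r, c]
                    back[r, c] = pr
    # Backtrack from the cheapest end point.
    path = [int(np.argmin(acc[:, -1]))]
    for c in range(cols - 1, 0, -1):
        path.append(back[path[-1], c])
    return path[::-1]                             # row index per column
```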

  20. Analytical program: 1975 Bikini radiological survey

    International Nuclear Information System (INIS)

    Mount, M.E.; Robison, W.L.; Thompson, S.E.; Hamby, K.O.; Prindle, A.L.; Levy, H.B.

    1976-01-01

    The analytical program for samples of soil, vegetation, and animal tissue collected during the June 1975 field survey of Bikini and Eneu islands is described. The phases of this program are discussed in chronological order: initial processing of samples, gamma spectrometry, and wet chemistry. Included are discussions of quality control programs, reproducibility of measurements, and comparisons of gamma spectrometry with wet chemistry determinations of ²⁴¹Am. Wet chemistry results are used to examine differences in Pu:Am ratios and Pu-isotope ratios as a function of the type of sample and the location where samples were collected.

  1. New digital demodulator with matched filters and curve segmentation techniques for BFSK demodulation: Analytical description

    Directory of Open Access Journals (Sweden)

    Jorge Torres Gómez

    2015-09-01

    The present article relates in general to digital demodulation of Binary Frequency Shift Keying (BFSK). The objective of the present research is to obtain a new processing method for demodulating BFSK signals in order to reduce hardware complexity in comparison with other reported methods. The solution proposed here makes use of matched filter theory and curve segmentation algorithms. This paper describes the integration and configuration of a Sampler Correlator and curve segmentation blocks in order to obtain a digital receiver that properly demodulates the received signal. The proposed solution is shown to strongly reduce hardware complexity. This first part presents the analytical description of the proposed solution and covers in detail the elements needed for properly configuring the system. A second part presents the FPGA implementation of the system and simulation results that validate the overall performance.
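
    A sketch of noncoherent matched-filter (correlator) BFSK demodulation of the kind described, with illustrative parameters:

```python
import numpy as np

fs, baud = 48_000, 1_200                  # sample rate, symbol rate (assumed)
f0, f1 = 1_200.0, 2_200.0                 # mark/space tones (assumed)
n = fs // baud                            # samples per symbol
t = np.arange(n) / fs

def demodulate(samples):
    """Per symbol window, correlate against I/Q references at each tone and
    pick the tone with the larger energy (phase-independent decision)."""
    bits = []
    for k in range(len(samples) // n):
        s = samples[k * n:(k + 1) * n]
        e0 = np.dot(s, np.cos(2*np.pi*f0*t))**2 + np.dot(s, np.sin(2*np.pi*f0*t))**2
        e1 = np.dot(s, np.cos(2*np.pi*f1*t))**2 + np.dot(s, np.sin(2*np.pi*f1*t))**2
        bits.append(1 if e1 > e0 else 0)
    return bits
```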

  2. Lymph node segmentation by dynamic programming and active contours.

    Science.gov (United States)

    Tan, Yongqiang; Lu, Lin; Bonde, Apurva; Wang, Deling; Qi, Jing; Schwartz, Lawrence H; Zhao, Binsheng

    2018-03-03

    Enlarged lymph nodes are indicators of cancer staging, and the change in their size is a reflection of treatment response. Automatic lymph node segmentation is challenging, as the boundary can be unclear and the surrounding structures complex. This work communicates a new three-dimensional algorithm for the segmentation of enlarged lymph nodes. The algorithm requires a user to draw a region of interest (ROI) enclosing the lymph node. Rays are cast from the center of the ROI, and the intersections of the rays and the boundary of the lymph node form a triangle mesh. The intersection points are determined by dynamic programming. The triangle mesh initializes an active contour which evolves to a low-energy boundary. Three radiologists independently delineated the contours of 54 lesions from 48 patients. The Dice coefficient was used to evaluate the algorithm's performance. The mean Dice coefficient between the computer and the majority-vote results was 83.2%. The mean Dice coefficients between the three radiologists' manual segmentations were 84.6%, 86.2%, and 88.3%. The performance of this segmentation algorithm suggests its potential clinical value for quantifying enlarged lymph nodes. © 2018 American Association of Physicists in Medicine.
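
    The evaluation metric used above, for reference; a standard Dice implementation for two binary masks (not the authors' code):

```python
import numpy as np

def dice(a, b):
    """Dice coefficient 2|A∩B| / (|A| + |B|) between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0
```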

  3. A new 2D segmentation method based on dynamic programming applied to computer aided detection in mammography

    International Nuclear Information System (INIS)

    Timp, Sheila; Karssemeijer, Nico

    2004-01-01

    Mass segmentation plays a crucial role in computer-aided diagnosis (CAD) systems for classification of suspicious regions as normal, benign, or malignant. In this article we present a robust and automated segmentation technique, based on dynamic programming, to segment mass lesions from surrounding tissue. In addition, we propose an efficient algorithm to guarantee that the resulting contours are closed. The segmentation method based on dynamic programming was quantitatively compared with two other automated segmentation methods (region growing and the discrete contour model) on a dataset of 1210 masses. For each mass an overlap criterion was calculated to determine the similarity with manual segmentation. The mean overlap percentage for dynamic programming was 0.69; for the other two methods it was 0.60 and 0.59, respectively. The difference in overlap percentage was statistically significant. To study the influence of the segmentation method on the performance of a CAD system, two additional experiments were carried out. The first experiment studied the detection performance of the CAD system for the different segmentation methods. Free-response receiver operating characteristic analysis showed that the detection performance was nearly identical for the three segmentation methods. In the second experiment the ability of the classifier to discriminate between malignant and benign lesions was studied. For region-based evaluation the area Az under the receiver operating characteristic curve was 0.74 for dynamic programming, 0.72 for the discrete contour model, and 0.67 for region growing. The difference in Az values obtained by the dynamic programming method and region growing was statistically significant. The differences between the other methods were not significant.

  4. A Big Data Analytics Methodology Program in the Health Sector

    Science.gov (United States)

    Lawler, James; Joseph, Anthony; Howell-Barber, H.

    2016-01-01

    The benefits of Big Data Analytics are cited frequently in the literature. However, the difficulties of implementing Big Data Analytics can limit the number of organizational projects. In this study, the authors evaluate business, procedural and technical factors in the implementation of Big Data Analytics, applying a methodology program. Focusing…

  5. Dynamic programming in parallel boundary detection with application to ultrasound intima-media segmentation.

    Science.gov (United States)

    Zhou, Yuan; Cheng, Xinyao; Xu, Xiangyang; Song, Enmin

    2013-12-01

    Segmentation of carotid artery intima-media in longitudinal ultrasound images for measuring its thickness to predict cardiovascular diseases can be simplified as detecting two nearly parallel boundaries within a certain distance range, when plaque with irregular shapes is not considered. In this paper, we improve the implementation of two dynamic programming (DP) based approaches to parallel boundary detection, dual dynamic programming (DDP) and piecewise linear dual dynamic programming (PL-DDP). Then, a novel DP based approach, dual line detection (DLD), which translates the original 2-D curve position to a 4-D parameter space representing two line segments in a local image segment, is proposed to solve the problem while maintaining efficiency and rotation invariance. To apply the DLD to ultrasound intima-media segmentation, it is embedded in a framework that employs an edge map obtained from multiplication of the responses of two edge detectors with different scales, and a coupled snake model that simultaneously deforms the two contours to maintain parallelism. The experimental results on synthetic images and carotid arteries of clinical ultrasound images indicate improved performance of the proposed DLD compared to DDP and PL-DDP, with respect to accuracy and efficiency. Copyright © 2013 Elsevier B.V. All rights reserved.
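
    A toy version of the dual-DP idea, jointly tracking two boundaries constrained to stay within a parallel distance band (the costs, bounds, and brute-force state sweep are illustrative only):

```python
import numpy as np

def dual_dp(cost1, cost2, d_min=3, d_max=15):
    """Jointly find two minimum-cost left-to-right paths (one per cost image)
    whose row separation stays in [d_min, d_max] at every column."""
    rows, cols = cost1.shape
    states = [(r1, r2) for r1 in range(rows) for r2 in range(rows)
              if d_min <= r2 - r1 <= d_max]
    acc = {s: cost1[s[0], 0] + cost2[s[1], 0] for s in states}
    paths = {s: [s] for s in states}
    for c in range(1, cols):
        new_acc, new_paths = {}, {}
        for (r1, r2) in states:
            best, best_prev = np.inf, None
            for d1 in (-1, 0, 1):
                for d2 in (-1, 0, 1):          # each boundary moves <= 1 row
                    p = (r1 - d1, r2 - d2)
                    if p in acc and acc[p] < best:
                        best, best_prev = acc[p], p
            new_acc[(r1, r2)] = best + cost1[r1, c] + cost2[r2, c]
            new_paths[(r1, r2)] = paths[best_prev] + [(r1, r2)]
        acc, paths = new_acc, new_paths
    return paths[min(acc, key=acc.get)]        # list of (row1, row2) per column
```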

  6. The use of mixed-integer programming for inverse treatment planning with pre-defined field segments

    International Nuclear Information System (INIS)

    Bednarz, Greg; Michalski, Darek; Houser, Chris; Huq, M. Saiful; Xiao Ying; Rani, Pramila Anne; Galvin, James M.

    2002-01-01

    Complex intensity patterns generated by traditional beamlet-based inverse treatment plans are often very difficult to deliver. In the approach presented in this work the intensity maps are controlled by pre-defining field segments to be used for dose optimization. A set of simple rules was used to define a pool of allowable delivery segments, and the mixed-integer programming (MIP) method was used to optimize segment weights. The optimization problem was formulated by combining real variables describing segment weights with a set of binary variables used to enumerate voxels in targets and critical structures. The MIP method was compared to the previously used Cimmino projection algorithm. The field segmentation approach was compared to an inverse planning system with traditional beamlet-based beam intensity optimization. In four complex cases of oropharyngeal cancer, the segmental inverse planning produced treatment plans that competed with traditional beamlet-based IMRT plans. The mixed-integer programming provided a mechanism for imposing dose-volume constraints and allowed identification of the optimal solution for feasible problems. Additional advantages of the segmental technique presented here are simplified dosimetry, quality assurance and treatment delivery. (author)
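
    A deliberately simplified stand-in for the weight-optimization step: if the binary voxel variables and dose-volume constraints are dropped, choosing nonnegative weights for pre-defined segments reduces to a nonnegative least-squares fit of the prescription (the paper's actual model is a mixed-integer program):

```python
import numpy as np
from scipy.optimize import nnls

n_voxels, n_segments = 500, 12
rng = np.random.default_rng(0)
D = rng.random((n_voxels, n_segments))    # dose per unit weight of each segment (made-up)
d_target = np.full(n_voxels, 2.0)         # prescribed dose per voxel (assumed)

weights, residual = nnls(D, d_target)     # w >= 0 minimizing ||D w - d||
print(weights.round(3), residual)
```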

  7. 5 keys to business analytics program success

    CERN Document Server

    Boyer, John; Green, Brian; Harris, Tracy; Van De Vanter, Kay

    2012-01-01

    With business analytics becoming increasingly strategic to all types of organizations, and with many companies struggling to create a meaningful impact with this emerging technology, this work, based on the combined experience of 10 organizations that display excellence and expertise on the subject, shares the best practices, discusses the management aspects and sociology that drive success, and uncovers the five key aspects behind the success of some of the top business analytics programs in the industry. Readers will learn about numerous topics, including how to create and manage a changing …

  8. Research on analytical model and design formulas of permanent magnetic bearings based on Halbach array with arbitrary segmented magnetized angle

    International Nuclear Information System (INIS)

    Wang, Nianxian; Wang, Dongxiong; Chen, Kuisheng; Wu, Huachun

    2016-01-01

    The bearing capacity of permanent magnetic bearings (PMBs) can be improved efficiently by using a Halbach array magnetization. However, an analytical model of Halbach array PMBs with arbitrary segmented magnetization angle has not yet been developed, and the application of such PMBs has been limited by the absence of an analytical model and design formulas. In this research, Halbach array PMBs with arbitrary segmented magnetization angle are studied. The magnetization model of the bearings is established, and the magnetic field distribution of the permanent magnet array is derived using the scalar magnetic potential model. On this basis, the bearing force and stiffness models of the PMBs are established using the virtual displacement method. The influence of the number of magnetic ring pairs per cycle and of the structural parameters of the PMBs on the maximum bearing capacity and support stiffness is studied, and reference factors for the PMB design process are given. Finally, the theoretical model and the conclusions are verified by finite element analysis.

  9. Visual analytics for the exploration and assessment of segmentation errors

    NARCIS (Netherlands)

    Raidou, R.G.; Marcelis, F.J.J.; Breeuwer, M.; Gröller, M.E.; Vilanova Bartroli, A.

    2016-01-01

    Several diagnostic and treatment procedures require the segmentation of anatomical structures from medical images. However, the automatic model-based methods that are often employed may produce inaccurate segmentations. These, if used as input for diagnosis or treatment, can have detrimental …

  10. Automatic segmentation of closed-contour features in ophthalmic images using graph theory and dynamic programming

    Science.gov (United States)

    Chiu, Stephanie J.; Toth, Cynthia A.; Bowes Rickman, Catherine; Izatt, Joseph A.; Farsiu, Sina

    2012-01-01

    This paper presents a generalized framework for segmenting closed-contour anatomical and pathological features using graph theory and dynamic programming (GTDP). More specifically, the GTDP method previously developed for quantifying retinal and corneal layer thicknesses is extended to segment objects such as cells and cysts. The presented technique relies on a transform that maps closed-contour features in the Cartesian domain into lines in the quasi-polar domain. The features of interest are then segmented as layers via GTDP. Application of this method to segment closed-contour features in several ophthalmic image types is shown. Quantitative validation experiments for retinal pigmented epithelium cell segmentation in confocal fluorescence microscopy images attest to the accuracy of the presented technique. PMID:22567602

  11. A Comparison of Two Commercial Volumetry Software Programs in the Analysis of Pulmonary Ground-Glass Nodules: Segmentation Capability and Measurement Accuracy

    Science.gov (United States)

    Kim, Hyungjin; Lee, Sang Min; Lee, Hyun-Ju; Goo, Jin Mo

    2013-01-01

    Objective: To compare the segmentation capability of the 2 currently available commercial volumetry software programs with specific segmentation algorithms for pulmonary ground-glass nodules (GGNs) and to assess their measurement accuracy. Materials and Methods: In this study, 55 patients with 66 GGNs underwent unenhanced low-dose CT. GGN segmentation was performed by using 2 volumetry software programs (LungCARE, Siemens Healthcare; LungVCAR, GE Healthcare). Successful nodule segmentation was assessed visually and morphologic features of GGNs were evaluated to determine factors affecting segmentation by both types of software. In addition, the measurement accuracy of the software programs was investigated by using an anthropomorphic chest phantom containing simulated GGNs. Results: The successful nodule segmentation rate was significantly higher in LungCARE (90.9%) than in LungVCAR (72.7%) (p = 0.012). Vascular attachment was a negatively influencing morphologic feature of nodule segmentation for both software programs. As for measurement accuracy, mean relative volume measurement errors in nodules ≥ 10 mm were 14.89% with LungCARE and 19.96% with LungVCAR. The mean relative attenuation measurement errors in nodules ≥ 10 mm were 3.03% with LungCARE and 5.12% with LungVCAR. Conclusion: LungCARE shows significantly higher segmentation success rates than LungVCAR. Measurement accuracy of volume and attenuation of GGNs is acceptable in GGNs ≥ 10 mm by both software programs. PMID:23901328

  12. A comparison of two commercial volumetry software programs in the analysis of pulmonary ground-glass nodules: Segmentation capability and measurement accuracy

    International Nuclear Information System (INIS)

    Kim, Hyung Jin; Park, Chang Min; Lee, Sang Min; Lee, Hyun Joo; Goo, Jin Mo

    2013-01-01

    To compare the segmentation capability of the 2 currently available commercial volumetry software programs with specific segmentation algorithms for pulmonary ground-glass nodules (GGNs) and to assess their measurement accuracy. In this study, 55 patients with 66 GGNs underwent unenhanced low-dose CT. GGN segmentation was performed by using 2 volumetry software programs (LungCARE, Siemens Healthcare; LungVCAR, GE Healthcare). Successful nodule segmentation was assessed visually and morphologic features of GGNs were evaluated to determine factors affecting segmentation by both types of software. In addition, the measurement accuracy of the software programs was investigated by using an anthropomorphic chest phantom containing simulated GGNs. The successful nodule segmentation rate was significantly higher in LungCARE (90.9%) than in LungVCAR (72.7%) (p = 0.012). Vascular attachment was a negatively influencing morphologic feature of nodule segmentation for both software programs. As for measurement accuracy, mean relative volume measurement errors in nodules ≥ 10 mm were 14.89% with LungCARE and 19.96% with LungVCAR. The mean relative attenuation measurement errors in nodules ≥ 10 mm were 3.03% with LungCARE and 5.12% with LungVCAR. LungCARE shows significantly higher segmentation success rates than LungVCAR. Measurement accuracy of volume and attenuation of GGNs is acceptable in GGNs ≥ 10 mm by both software programs.

  13. A comparison of two commercial volumetry software programs in the analysis of pulmonary ground-glass nodules: Segmentation capability and measurement accuracy

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Hyung Jin; Park, Chang Min; Lee, Sang Min; Lee, Hyun Joo; Goo, Jin Mo [Dept. of Radiology, Seoul National University College of Medicine, and Institute of Radiation Medicine, Seoul National University Medical Research Center, Seoul (Korea, Republic of)

    2013-08-15

    To compare the segmentation capability of the 2 currently available commercial volumetry software programs with specific segmentation algorithms for pulmonary ground-glass nodules (GGNs) and to assess their measurement accuracy. In this study, 55 patients with 66 GGNs underwent unenhanced low-dose CT. GGN segmentation was performed by using 2 volumetry software programs (LungCARE, Siemens Healthcare; LungVCAR, GE Healthcare). Successful nodule segmentation was assessed visually and morphologic features of GGNs were evaluated to determine factors affecting segmentation by both types of software. In addition, the measurement accuracy of the software programs was investigated by using an anthropomorphic chest phantom containing simulated GGNs. The successful nodule segmentation rate was significantly higher in LungCARE (90.9%) than in LungVCAR (72.7%) (p = 0.012). Vascular attachment was a negatively influencing morphologic feature of nodule segmentation for both software programs. As for measurement accuracy, mean relative volume measurement errors in nodules ≥ 10 mm were 14.89% with LungCARE and 19.96% with LungVCAR. The mean relative attenuation measurement errors in nodules ≥ 10 mm were 3.03% with LungCARE and 5.12% with LungVCAR. LungCARE shows significantly higher segmentation success rates than LungVCAR. Measurement accuracy of volume and attenuation of GGNs is acceptable in GGNs ≥ 10 mm by both software programs.

  14. Segmenting the Adult Education Market.

    Science.gov (United States)

    Aurand, Tim

    1994-01-01

    Describes market segmentation and how the principles of segmentation can be applied to the adult education market. Indicates that applying segmentation techniques to adult education programs results in programs that are educationally and financially satisfying and serve an appropriate population. (JOW)

  15. Defining Audience Segments for Extension Programming Using Reported Water Conservation Practices

    Science.gov (United States)

    Monaghan, Paul; Ott, Emily; Wilber, Wendy; Gouldthorpe, Jessica; Racevskis, Laila

    2013-01-01

    A tool from social marketing can help Extension agents understand distinct audience segments among their constituents. Defining targeted audiences for Extension programming is a first step to influencing behavior change among the public. An online survey was conducted using an Extension email list for urban households receiving a monthly lawn and…

  16. FASP, an analytic resource appraisal program for petroleum play analysis

    Science.gov (United States)

    Crovelli, R.A.; Balay, R.H.

    1986-01-01

    An analytic probabilistic methodology for resource appraisal of undiscovered oil and gas resources in play analysis is presented in a FORTRAN program termed FASP. This play-analysis methodology is a geostochastic system for petroleum resource appraisal in explored as well as frontier areas. An established geologic model considers both the uncertainty of the presence of the assessed hydrocarbon and its amount if present. The program FASP produces resource estimates of crude oil, nonassociated gas, dissolved gas, and gas for a geologic play in terms of probability distributions. The analytic method is based upon conditional probability theory and many laws of expectation and variance. © 1986.
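
    In miniature, the kind of expectation/variance bookkeeping such a geostochastic system rests on (standard mixture formulas with illustrative numbers, not FASP's actual equations):

```python
# Unconditional resource R = B * X with presence indicator B ~ Bernoulli(p)
# and conditional size X of mean mu, variance sigma^2 (independent of B):
#   E[R]   = p * mu
#   Var[R] = p * sigma^2 + p * (1 - p) * mu^2
p, mu, sigma = 0.35, 120.0, 60.0   # assumed play parameters
mean_R = p * mu
var_R = p * sigma**2 + p * (1 - p) * mu**2
print(f"E[R] = {mean_R:.1f}, SD[R] = {var_R ** 0.5:.1f}")
```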

  17. Cavity contour segmentation in chest radiographs using supervised learning and dynamic programming

    International Nuclear Information System (INIS)

    Maduskar, Pragnya; Hogeweg, Laurens; Sánchez, Clara I.; Ginneken, Bram van; Jong, Pim A. de; Peters-Bax, Liesbeth; Dawson, Rodney; Ayles, Helen

    2014-01-01

    Purpose: Efficacy of tuberculosis (TB) treatment is often monitored using chest radiography. Monitoring the size of cavities in pulmonary tuberculosis is important, as the size predicts severity of the disease and its persistence under therapy predicts relapse. The authors present a method for automatic cavity segmentation in chest radiographs. Methods: A two-stage method is proposed to segment the cavity borders, given a user-defined seed point close to the center of the cavity. First, a supervised learning approach is employed to train a pixel classifier using texture and radial features to identify the border pixels of the cavity. A likelihood value of belonging to the cavity border is assigned to each pixel by the classifier. The authors experimented with four different classifiers: k-nearest neighbor (kNN), linear discriminant analysis (LDA), GentleBoost (GB), and random forest (RF). Next, the constructed likelihood map was used as an input cost image in the polar-transformed image space for dynamic programming to trace the optimal maximum-cost path. This constructed path corresponds to the segmented cavity contour in image space. Results: The method was evaluated on 100 chest radiographs (CXRs) containing 126 cavities. The reference segmentation was manually delineated by an experienced chest radiologist. An independent observer (a chest radiologist) also delineated all cavities to estimate interobserver variability. The Jaccard overlap measure Ω was computed between the reference segmentation and the automatic segmentation, and between the reference segmentation and the independent observer's segmentation, for all cavities. A median overlap Ω of 0.81 (0.76 ± 0.16) and 0.85 (0.82 ± 0.11) was achieved between the reference segmentation and the automatic segmentation, and between the segmentations by the two radiologists, respectively. The best reported mean contour distance and Hausdorff distance between the reference and the automatic segmentation were …

  18. Cavity contour segmentation in chest radiographs using supervised learning and dynamic programming

    Energy Technology Data Exchange (ETDEWEB)

    Maduskar, Pragnya, E-mail: pragnya.maduskar@radboudumc.nl; Hogeweg, Laurens; Sánchez, Clara I.; Ginneken, Bram van [Diagnostic Image Analysis Group, Radboud University Medical Center, Nijmegen, 6525 GA (Netherlands); Jong, Pim A. de [Department of Radiology, University Medical Center Utrecht, 3584 CX (Netherlands); Peters-Bax, Liesbeth [Department of Radiology, Radboud University Medical Center, Nijmegen, 6525 GA (Netherlands); Dawson, Rodney [University of Cape Town Lung Institute, Cape Town 7700 (South Africa); Ayles, Helen [Department of Infectious and Tropical Diseases, London School of Hygiene and Tropical Medicine, London WC1E 7HT (United Kingdom)

    2014-07-15

    Purpose: Efficacy of tuberculosis (TB) treatment is often monitored using chest radiography. Monitoring the size of cavities in pulmonary tuberculosis is important, as the size predicts severity of the disease and its persistence under therapy predicts relapse. The authors present a method for automatic cavity segmentation in chest radiographs. Methods: A two-stage method is proposed to segment the cavity borders, given a user-defined seed point close to the center of the cavity. First, a supervised learning approach is employed to train a pixel classifier using texture and radial features to identify the border pixels of the cavity. A likelihood value of belonging to the cavity border is assigned to each pixel by the classifier. The authors experimented with four different classifiers: k-nearest neighbor (kNN), linear discriminant analysis (LDA), GentleBoost (GB), and random forest (RF). Next, the constructed likelihood map was used as an input cost image in the polar-transformed image space for dynamic programming to trace the optimal maximum-cost path. This constructed path corresponds to the segmented cavity contour in image space. Results: The method was evaluated on 100 chest radiographs (CXRs) containing 126 cavities. The reference segmentation was manually delineated by an experienced chest radiologist. An independent observer (a chest radiologist) also delineated all cavities to estimate interobserver variability. The Jaccard overlap measure Ω was computed between the reference segmentation and the automatic segmentation, and between the reference segmentation and the independent observer's segmentation, for all cavities. A median overlap Ω of 0.81 (0.76 ± 0.16) and 0.85 (0.82 ± 0.11) was achieved between the reference segmentation and the automatic segmentation, and between the segmentations by the two radiologists, respectively. The best reported mean contour distance and Hausdorff distance between the reference and the automatic segmentation were …

  19. Prototype implementation of segment assembling software

    Directory of Open Access Journals (Sweden)

    Pešić Đorđe

    2018-01-01

    IT education is very important, and much effort is put into the development of tools that help students acquire programming knowledge and help teachers automate the examination process. This paper describes a prototype of program segment assembling software used in the context of creating tests in the field of algorithmic complexity. The proposed program segment assembling model uses rules and templates. A template is a simple program segment. A rule defines the combining method and any data dependencies. An example of program segment assembly by the proposed system is given. The graphical user interface is also described.

  20. One Size (Never) Fits All: Segment Differences Observed Following a School-Based Alcohol Social Marketing Program

    Science.gov (United States)

    Dietrich, Timo; Rundle-Thiele, Sharyn; Leo, Cheryl; Connor, Jason

    2015-01-01

    Background: According to commercial marketing theory, a market orientation leads to improved performance. Drawing on the social marketing principles of segmentation and audience research, the current study seeks to identify segments to examine responses to a school-based alcohol social marketing program. Methods: A sample of 371 year 10 students…

  1. A prequalifying program for evaluating the analytical performance of commercial laboratories

    International Nuclear Information System (INIS)

    Reith, C.C.; Bishop, C.T.

    1987-01-01

    Soil and water samples were spiked with known activities of radionuclides and sent to seven commercial laboratories that had expressed an interest in analyzing environmental samples for the Waste Isolation Pilot Plant (WIPP). This Prequalifying Program was part of the selection process for an analytical subcontractor for a three-year program of baseline radiological surveillance around the WIPP site. Both media were spiked at three different activity levels with several transuranic radionuclides, as well as tritium, fission products, and activation products. Laboratory performance was evaluated by calculating relative error for each radionuclide in each sample, assigning grade values, and compiling grades into report cards for each candidate. Results for the five laboratories completing the Prequalifying Program were pooled to reveal differing degrees of difficulty among the treatments and radionuclides. Interlaboratory comparisons revealed systematic errors in the performance of one candidate. The final report cards contained clear differences among overall grades for the five laboratories, enabling analytical performance to be used as a quantitative criterion in the selection of an analytical subcontractor. (author)

  2. System and Method for Providing a Climate Data Analytic Services Application Programming Interface Distribution Package

    Science.gov (United States)

    Schnase, John L. (Inventor); Duffy, Daniel Q. (Inventor); Tamkin, Glenn S. (Inventor)

    2016-01-01

    A system, method and computer-readable storage devices for providing a climate data analytic services application programming interface distribution package. The example system can provide various components. The system provides a climate data analytic services application programming interface library that enables software applications running on a client device to invoke the capabilities of a climate data analytic service. The system provides a command-line interface that provides a means of interacting with a climate data analytic service by issuing commands directly to the system's server interface. The system provides sample programs that call on the capabilities of the application programming interface library and can be used as templates for the construction of new client applications. The system can also provide test utilities, build utilities, service integration utilities, and documentation.

  3. Fully automated segmentation of left ventricle using dual dynamic programming in cardiac cine MR images

    Science.gov (United States)

    Jiang, Luan; Ling, Shan; Li, Qiang

    2016-03-01

    Cardiovascular diseases are becoming a leading cause of death all over the world. Cardiac function can be evaluated by global and regional parameters of the left ventricle (LV) of the heart. The purpose of this study is to develop and evaluate a fully automated scheme for segmentation of the LV in short-axis cardiac cine MR images. Our fully automated method consists of three major steps, i.e., LV localization, LV segmentation at the end-diastolic phase, and LV segmentation propagation to the other phases. First, the maximum intensity projection image along the time phases of the midventricular slice, located at the center of the image, was calculated to locate the region of interest of the LV. Based on the mean intensity of the roughly segmented blood pool in the midventricular slice at each phase, end-diastolic (ED) and end-systolic (ES) phases were determined. Second, the endocardial and epicardial boundaries of the LV of each slice at the ED phase were synchronously delineated by use of a dual dynamic programming technique. The external costs of the endocardial and epicardial boundaries were defined with the gradient values obtained from the original and enhanced images, respectively. Finally, taking advantage of the continuity of the LV boundaries across adjacent phases, we propagated the LV segmentation from the ED phase to the other phases by use of the dual dynamic programming technique. The preliminary results on 9 clinical cardiac cine MR cases show that the proposed method can obtain accurate segmentation of the LV based on subjective evaluation.
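
    A sketch of the localization step as described (maximum intensity projection along the time axis, then a simple threshold); the threshold and the center heuristic are assumptions, not the authors' exact scheme:

```python
import numpy as np

def localize_lv(cine):
    """cine: array of shape (n_phases, H, W) for the mid-ventricular slice.
    Returns an approximate LV center (sketch; assumes some pixels survive)."""
    mip = cine.max(axis=0)                 # maximum intensity projection over time
    thresh = mip.mean() + mip.std()        # assumed simple brightness threshold
    ys, xs = np.nonzero(mip > thresh)
    # Keep candidates near the image center, where the LV is expected.
    h, w = mip.shape
    keep = (np.abs(ys - h / 2) < h / 4) & (np.abs(xs - w / 2) < w / 4)
    return ys[keep].mean(), xs[keep].mean()
```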

  4. Analytical Services Fiscal Year 1996 Multi-year Program Plan Fiscal Year Work Plan WBS 1.5.1, Revision 1

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1995-09-01

    This document contains the Fiscal Year 1996 Work Plan and Multi-Year Program Plan for the Analytical Services Program at the Hanford Reservation in Richland, Washington. The Analytical Services Program provides vital support to the Hanford Site mission and provides technically sound, defensible, cost effective, high quality analytical chemistry data for the site programs. This report describes the goals and strategies for continuance of the Analytical Services Program through fiscal year 1996 and beyond.

  5. Analytical Services Fiscal Year 1996 Multi-year Program Plan Fiscal Year Work Plan WBS 1.5.1, Revision 1

    International Nuclear Information System (INIS)

    1995-09-01

    This document contains the Fiscal Year 1996 Work Plan and Multi-Year Program Plan for the Analytical Services Program at the Hanford Reservation in Richland, Washington. The Analytical Services Program provides vital support to the Hanford Site mission and provides technically sound, defensible, cost effective, high quality analytical chemistry data for the site programs. This report describes the goals and strategies for continuance of the Analytical Services Program through fiscal year 1996 and beyond.

  6. Quality assurance programs developed and implemented by the US Department of Energy's Analytical Services Program for environmental restoration and waste management activities

    International Nuclear Information System (INIS)

    Lillian, D.; Bottrell, D.

    1993-01-01

    The U.S. Department of Energy's (DOE's) Office of Environmental Restoration and Waste Management (EM) has been tasked with addressing environmental contamination and waste problems facing the Department. A key element of any environmental restoration or waste management program is environmental data. An effective and efficient sampling and analysis program is required to generate credible environmental data. The bases for DOE's EM Analytical Services Program (ASP) are contained in the charter and commitments in Secretary of Energy Notice SEN-13-89, EM program policies and requirements, and commitments to Congress and the Office of Inspector General (IG). The Congressional commitment by DOE to develop and implement an ASP was in response to concerns raised by the Chairman of the Congressional Environment, Energy, and Natural Resources Subcommittee, and the Chairman of the Congressional Oversight and Investigations Subcommittee of the Committee on Energy and Commerce, regarding the production of analytical data. The development and implementation of an ASP also satisfies the IG's audit report recommendations on environmental analytical support, including development and implementation of a national strategy for acquisition of quality sampling and analytical services. These recommendations were endorsed in Departmental positions, which further emphasize the importance of the ASP to EM's programs. In September 1990, EM formed the Laboratory Management Division (LMD) in the Office of Technology Development to provide the programmatic direction needed to establish and operate an EM-wide ASP program. In January 1992, LMD issued the "Analytical Services Program Five-Year Plan." This document described LMD's strategy to ensure the production of timely, cost-effective, and credible environmental data. This presentation describes the overall LMD Analytical Services Program and, specifically, the various QA programs.

  7. Endocardium and Epicardium Segmentation in MR Images Based on Developed Otsu and Dynamic Programming

    Directory of Open Access Journals (Sweden)

    Shengzhou XU

    2014-03-01

In order to accurately extract the endocardium and epicardium of the left ventricle from cardiac magnetic resonance (MR) images, a method based on a developed Otsu algorithm and dynamic programming has been proposed. First, regions with high gray values are divided into several left-ventricle candidate regions by the developed Otsu algorithm, which is based on constraining the search range of the ideal segmentation threshold. Then, the left-ventricular blood pool is selected from the candidate regions and its convex hull is taken as the endocardium. The epicardium is derived by applying a dynamic programming method to find a closed path with minimum local cost. The local cost function of the dynamic programming method consists of two factors: boundary gradient and shape features. To improve the accuracy of segmentation, a non-maxima gradient suppression technique is adopted to obtain the boundary gradient. Experimental results on 138 MR images show that the proposed method has high accuracy and robustness.
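For reference, the conventional Otsu step that the record's "developed Otsu" extends can be sketched in a few lines; a minimal NumPy version is shown below, with the constrained search range [lo, hi) standing in for the record's idea of restricting the threshold search (the function name and inputs are illustrative assumptions, not the authors' code):

```python
import numpy as np

def otsu_threshold(gray_img, lo=1, hi=256):
    """Conventional Otsu threshold for an 8-bit image, with an optional
    constrained search range [lo, hi) mimicking the record's idea."""
    hist, _ = np.histogram(gray_img, bins=256, range=(0, 256))
    p = hist / hist.sum()                       # gray-level probabilities
    levels = np.arange(256)
    best_t, best_var = lo, -1.0
    for t in range(lo, hi):
        w0, w1 = p[:t].sum(), p[t:].sum()       # class weights
        if w0 == 0.0 or w1 == 0.0:
            continue
        mu0 = (levels[:t] * p[:t]).sum() / w0   # class means
        mu1 = (levels[t:] * p[t:]).sum() / w1
        sigma_b = w0 * w1 * (mu0 - mu1) ** 2    # between-class variance
        if sigma_b > best_var:
            best_t, best_var = t, sigma_b
    return best_t
```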

  8. Coordinated experimental/analytical program for investigating margins to failure of Category I reinforced concrete structures

    International Nuclear Information System (INIS)

    Endebrock, E.; Dove, R.; Anderson, C.A.

    1981-01-01

    The material presented in this paper deals with a coordinated experimental/analytical program designed to provide information needed for making margins to failure assessments of seismic Category I reinforced concrete structures. The experimental program is emphasized and background information that lead to this particular experimental approach is presented. Analytical tools being developed to supplement the experimental program are discussed. 16 figures

  9. Web Analytics

    Science.gov (United States)

    EPA’s Web Analytics Program collects, analyzes, and provides reports on traffic, quality assurance, and customer satisfaction metrics for EPA’s website. The program uses a variety of analytics tools, including Google Analytics and CrazyEgg.

  10. Quality assurance programs developed and implemented by the US Department of Energy`s Analytical Services Program for environmental restoration and waste management activities

    Energy Technology Data Exchange (ETDEWEB)

Lillian, D.; Bottrell, D. [Dept. of Energy, Germantown, MD (United States)]

    1993-12-31

The U.S. Department of Energy's (DOE's) Office of Environmental Restoration and Waste Management (EM) has been tasked with addressing environmental contamination and waste problems facing the Department. A key element of any environmental restoration or waste management program is environmental data. An effective and efficient sampling and analysis program is required to generate credible environmental data. The bases for DOE's EM Analytical Services Program (ASP) are contained in the charter and commitments in Secretary of Energy Notice SEN-13-89, EM program policies and requirements, and commitments to Congress and the Office of Inspector General (IG). The Congressional commitment by DOE to develop and implement an ASP was in response to concerns raised by the Chairman of the Congressional Environment, Energy, and Natural Resources Subcommittee, and the Chairman of the Congressional Oversight and Investigations Subcommittee of the Committee on Energy and Commerce, regarding the production of analytical data. The development and implementation of an ASP also satisfies the IG's audit report recommendations on environmental analytical support, including development and implementation of a national strategy for acquisition of quality sampling and analytical services. These recommendations were endorsed in Departmental positions, which further emphasize the importance of the ASP to EM's programs. In September 1990, EM formed the Laboratory Management Division (LMD) in the Office of Technology Development to provide the programmatic direction needed to establish and operate an EM-wide ASP. In January 1992, LMD issued the "Analytical Services Program Five-Year Plan." This document described LMD's strategy to ensure the production of timely, cost-effective, and credible environmental data. This presentation describes the overall LMD Analytical Services Program and, specifically, the various QA programs.

  11. Analytical SN solutions in heterogeneous slabs using symbolic algebra computer programs

    International Nuclear Information System (INIS)

    Warsa, J.S.

    2002-01-01

A modern symbolic algebra computer program, MAPLE, is used to compute solutions to the well-known analytical discrete ordinates, or S_N, solutions in one-dimensional slab geometry. Symbolic algebra programs compute the solutions with arbitrary precision and are free of spatial discretization error, so they can be used to investigate new discretizations for one-dimensional, slab-geometry S_N methods. Pointwise scalar flux solutions are computed for several sample calculations of interest. Sample MAPLE command scripts are provided to illustrate how easily the theory can be translated into a working solution and serve as a complete tool capable of computing analytical S_N solutions for mono-energetic, one-dimensional transport problems.
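The MAPLE scripts themselves are not reproduced in the record; purely to illustrate the idea of discretization-free, arbitrary-precision symbolic solutions, here is a toy analogue in Python/SymPy that solves a diffusion-like slab equation in closed form (the equation and all symbols are illustrative assumptions, far simpler than the paper's S_N system):

```python
import sympy as sp

x = sp.symbols('x')
kappa, Q = sp.symbols('kappa Q', positive=True)
phi = sp.Function('phi')

# Closed-form solution of phi''(x) - kappa**2*phi(x) = -Q (toy slab problem):
sol = sp.dsolve(sp.Eq(phi(x).diff(x, 2) - kappa**2 * phi(x), -Q), phi(x))
print(sol)  # phi(x) = C1*exp(-kappa*x) + C2*exp(kappa*x) + Q/kappa**2

# Pointwise evaluation to arbitrary precision, free of discretization error:
expr = sol.rhs.subs({sp.Symbol('C1'): 1, sp.Symbol('C2'): 0, kappa: 2, Q: 1})
print(sp.N(expr.subs(x, sp.Rational(1, 3)), 30))
```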

  12. Shielded analytical laboratory activities supporting waste isolation programs

    International Nuclear Information System (INIS)

    McCown, J.J.

    1985-08-01

    The Shielded Analytical Laboratory (SAL) is a six cell manipulator-equipped facility which was built in 1962 as an addition to the 325 Radiochemistry Bldg. in the 300 Area at Hanford. The facility provides the capability for handling a wide variety of radioactive materials and performing chemical dissolutions, separations and analyses on nuclear fuels, components, waste forms and materials from R and D programs

  13. Programs and analytical methods for the U.S. Geological Survey acid-rain quality-assurance project. Water Resources Investigation

    International Nuclear Information System (INIS)

    See, R.B.; Willoughby, T.C.; Brooks, M.H.; Gordon, J.D.

    1990-01-01

    The U.S. Geological Survey operates four programs to provide external quality-assurance of wet deposition monitoring by the National Atmospheric Deposition Program and the National Trends Network. An intersite-comparison program assesses the precision and bias of onsite determinations of pH and specific conductance made by site operators. A blind-audit program is used to assess the effect of routine sample-handling procedures and transportation on the precision and bias of wet-deposition data. An interlaboratory-comparison program is used to assess analytical results from three or more laboratories, which routinely analyze wet-deposition samples from the major North American networks, to determine if comparability exists between laboratory analytical results and to provide estimates of the analytical precision of each laboratory. A collocated-sampler program is used to estimate the precision of wet/dry precipitation sampling throughout the National Atmospheric Deposition Program and the National Trends Network, to assess the variability of diverse spatial arrays, and to evaluate the impact of violations of specific site criteria. The report documents the procedures and analytical methods used in these four quality-assurance programs

  14. Optimally segmented magnetic structures

    DEFF Research Database (Denmark)

    Insinga, Andrea Roberto; Bahl, Christian; Bjørk, Rasmus

We present a semi-analytical algorithm for magnet design problems, which calculates the optimal way to subdivide a given design region into uniformly magnetized segments. The availability of powerful rare-earth magnetic materials such as Nd-Fe-B has broadened the range of applications of permanent magnets [1][2]. However, the powerful rare-earth magnets are generally expensive, so both the scientific and industrial communities have devoted a lot of effort to developing suitable design methods. Even so, many magnet optimization algorithms are based on heuristic approaches [3]. We illustrate the results for magnet design problems from different areas, such as electric motors/generators, beam focusing for particle accelerators, and magnetic refrigeration devices.

  15. The accelerated site technology deployment program presents the segmented gate system

    International Nuclear Information System (INIS)

    Patteson, Raymond; Maynor, Doug; Callan, Connie

    2000-01-01

The Department of Energy (DOE) is working to accelerate the acceptance and application of innovative technologies that improve the way the nation manages its environmental remediation problems. The DOE Office of Science and Technology established the Accelerated Site Technology Deployment Program (ASTD) to help accelerate the acceptance and implementation of new and innovative soil and ground water remediation technologies. Coordinated by the Department of Energy's Idaho Office, the ASTD Program reduces many of the classic barriers to the deployment of new technologies by involving government, industry, and regulatory agencies in the assessment, implementation, and validation of innovative technologies. The paper uses the example of the Segmented Gate System (SGS) to illustrate how the ASTD program works. The SGS was used to cost-effectively separate clean and contaminated soil for four different radionuclides: plutonium, uranium, thorium, and cesium. Based on those results, it has been proposed to use the SGS at seven other DOE sites across the country.

  16. ORBITALES. A program for the calculation of wave functions with an analytical central potential

    International Nuclear Information System (INIS)

    Yunta Carretero; Rodriguez Mayquez, E.

    1974-01-01

This paper describes the objective, basis, FORTRAN implementation, and use of the program ORBITALES. The program calculates atomic wave functions for an analytical central potential. (Author) 8 refs.

  17. Online Programs as Tools to Improve Parenting: A meta-analytic review

    NARCIS (Netherlands)

    Prof. Dr. Jo J.M.A Hermanns; Prof. Dr. Ruben R.G. Fukkink; dr. Christa C.C. Nieuwboer

    2013-01-01

Background. A number of parenting programs, aimed at improving parenting competencies, have recently been adapted or designed with the use of online technologies. Although web-based services have been claimed to hold promise for parent support, a meta-analytic review of online parenting interventions…

  18. Online programs as tools to improve parenting: A meta-analytic review

    NARCIS (Netherlands)

    Nieuwboer, C.C.; Fukkink, R.G.; Hermanns, J.M.A.

    2013-01-01

    Background: A number of parenting programs, aimed at improving parenting competencies, have recently been adapted or designed with the use of online technologies. Although web-based services have been claimed to hold promise for parent support, a meta-analytic review of online parenting

  19. A program wide framework for evaluating data driven teaching and learning - earth analytics approaches, results and lessons learned

    Science.gov (United States)

    Wasser, L. A.; Gold, A. U.

    2017-12-01

There is a deluge of earth systems data available to address cutting-edge science problems, yet specific skills are required to work with these data. The Earth Analytics education program, a core component of Earth Lab at the University of Colorado - Boulder, is building a data-intensive program that provides training in 1) interdisciplinary communication and collaboration, 2) earth science domain knowledge, including geospatial science and remote sensing, and 3) reproducible, open science workflows ("earth analytics"). The earth analytics program includes an undergraduate internship, undergraduate- and graduate-level courses, and a professional certificate/degree program. All programs share the goal of preparing a STEM workforce for successful earth-analytics-driven careers. We are developing a program-wide evaluation framework that assesses the effectiveness of data-intensive instruction combined with domain science learning, to better understand and improve data-intensive teaching approaches using blends of online, in situ, asynchronous, and synchronous learning. We are using targeted online search engine optimization (SEO) to increase visibility and, in turn, program reach. Finally, our design targets longitudinal program impacts on participant career tracks over time. Here we present results from the evaluation of both an interdisciplinary undergraduate/graduate-level earth analytics course and an undergraduate internship. Early results suggest that a blended approach to learning and teaching, combining synchronous in-person teaching and active hands-on classroom learning with asynchronous learning in the form of online materials, leads to student success. Further, we will present our model for longitudinal tracking of participants' career focus over time to better understand long-term program impacts. We also demonstrate the impact of SEO on online content reach and program visibility.

  20. Laboratory quality assurance and its role in the safeguards analytical laboratory evaluation (SALE) program

    International Nuclear Information System (INIS)

    Delvin, W.L.; Pietri, C.E.

    1981-07-01

    Since the late 1960's, strong emphasis has been given to quality assurance in the nuclear industry, particularly to that part involved in nuclear reactors. This emphasis has had impact on the analytical chemistry laboratory because of the importance of analytical measurements in the certification and acceptance of materials used in the fabrication and construction of reactor components. Laboratory quality assurance, in which the principles of quality assurance are applied to laboratory operations, has a significant role to play in processing, fabrication, and construction programs of the nuclear industry. That role impacts not only process control and material certification, but also safeguards and nuclear materials accountability. The implementation of laboratory quality assurance is done through a program plan that specifies how the principles of quality assurance are to be applied. Laboratory quality assurance identifies weaknesses and deficiencies in laboratory operations and provides confidence in the reliability of laboratory results. Such confidence in laboratory measurements is essential to the proper evaluation of laboratories participating in the Safeguards Analytical Laboratory Evaluation (SALE) Program

  1. Segmenting patients and physicians using preferences from discrete choice experiments.

    Science.gov (United States)

    Deal, Ken

    2014-01-01

People often form groups or segments that have similar interests and needs and seek similar benefits from health providers. Health organizations need to understand whether the same health treatments, prevention programs, services, and products should be applied to everyone in the relevant population or whether different treatments need to be provided to each of several segments that are relatively homogeneous internally but heterogeneous among segments. Our objective was to explain the purposes, benefits, and methods of segmentation for health organizations, and to illustrate the process of segmenting health populations based on preference coefficients from a discrete choice conjoint experiment (DCE), using an example study of prevention of cyberbullying among university students. We followed a two-level procedure for investigating segmentation, incorporating several methods for forming segments in Level 1 using DCE preference coefficients and testing their quality, reproducibility, and usability by health decision makers. Covariates (demographic, behavioral, lifestyle, and health state variables) were included in Level 2 to further evaluate quality and to support the scoring of large databases and the development of typing tools for assigning those in the relevant population, but not in the sample, to the segments. Several segmentation solution candidates were found during the Level 1 analysis, and the relationship of the preference coefficients to the segments was investigated using predictive methods. Those segmentations were tested for their quality and reproducibility, and three were found to be very close in quality. While one seemed better than the others in the Level 1 analysis, another was very similar in quality and proved ultimately better in predicting segment membership using covariates in Level 2. The two segments in the final solution were profiled for attributes that would support the development and acceptance of cyberbullying prevention programs among university students.
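The Level 1 step described above, forming segments from respondent-level DCE preference coefficients and comparing candidate solutions, can be sketched generically; k-means and the random coefficient matrix below are illustrative assumptions (the study compared several segmentation methods and quality checks):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

# Rows: respondents; columns: DCE preference (part-worth) coefficients.
# Random data stands in for the estimated coefficients (assumption).
rng = np.random.default_rng(0)
coefs = rng.normal(size=(500, 8))

# Try several candidate segmentations and keep a simple quality score,
# loosely mirroring the Level 1 comparison of solutions.
for k in range(2, 6):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(coefs)
    print(k, round(silhouette_score(coefs, labels), 3))
```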

  2. Standard guide for establishing a quality assurance program for analytical chemistry laboratories within the nuclear industry

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    2006-01-01

1.1 This guide covers the establishment of a quality assurance (QA) program for analytical chemistry laboratories within the nuclear industry. Reference to key elements of ANSI/ISO/ASQC Q9001, Quality Systems, provides guidance to the functional aspects of analytical laboratory operation. When implemented as recommended, the practices presented in this guide will provide a comprehensive QA program for the laboratory. The practices are grouped by functions, which constitute the basic elements of a laboratory QA program. 1.2 The essential, basic elements of a laboratory QA program appear in the following order: Organization (Section 5); Quality Assurance Program (Section 6); Training and Qualification (Section 7); Procedures (Section 8); Laboratory Records (Section 9); Control of Records (Section 10); Control of Procurement (Section 11); Control of Measuring Equipment and Materials (Section 12); Control of Measurements (Section 13); and Deficiencies and Corrective Actions (Section 14).

  3. Segmentation-less Digital Rock Physics

    Science.gov (United States)

    Tisato, N.; Ikeda, K.; Goldfarb, E. J.; Spikes, K. T.

    2017-12-01

In the last decade, Digital Rock Physics (DRP) has become an avenue to investigate the physical and mechanical properties of geomaterials. DRP offers the advantage of simulating laboratory experiments on numerical samples that are obtained from analytical methods. Potentially, DRP could save part of the time and resources that are allocated to performing complicated laboratory tests. Like classic laboratory tests, the goal of DRP is to accurately estimate physical properties of rocks, such as hydraulic permeability or elastic moduli. Nevertheless, the physical properties of samples imaged using micro-computed tomography (μCT) are estimated through segmentation of the μCT dataset. Segmentation proves to be a challenging and arbitrary procedure that typically leads to inaccurate estimates of physical properties. Here we present a novel technique to extract physical properties from a μCT dataset without the use of segmentation. We show examples in which we use the segmentation-less method to simulate elastic wave propagation and pressure wave diffusion to estimate elastic properties and permeability, respectively. The proposed method takes advantage of effective medium theories and uses the density and porosity measured in the laboratory to constrain the results. We discuss the results and highlight that segmentation-less DRP is more accurate than segmentation-based DRP approaches and theoretical modeling for the studied rock. In conclusion, the segmentation-less approach presented here seems to be a promising method to improve accuracy and to ease the overall workflow of DRP.

  4. Waste Tank Organic Safety Program: Analytical methods development. Progress report, FY 1994

    International Nuclear Information System (INIS)

    Campbell, J.A.; Clauss, S.A.; Grant, K.E.

    1994-09-01

The objectives of this task are to develop and document extraction and analysis methods for organics in waste tanks, and to extend these methods to the analysis of actual core samples to support the Waste Tank Organic Safety Program. This report documents progress at Pacific Northwest Laboratory during FY 1994 on methods development, the analysis of waste from Tank 241-C-103 (Tank C-103) and Tank T-111, and the transfer of documented, developed analytical methods to personnel in the Analytical Chemistry Laboratory (ACL) and the 222-S laboratory. This report is intended as an annual report, not a completed work.

  5. Production of linear polarization by segmentation of helical undulator

    International Nuclear Information System (INIS)

    Tanaka, T.; Kitamura, H.

    2002-01-01

A simple scheme to obtain linearly polarized radiation (LPR) with a segmented undulator is proposed. The undulator is composed of several segments, each of which forms a helical undulator and has helicity opposite to those of the adjacent segments. Due to the coherent sum of the radiation, the circularly polarized component is canceled out, resulting in the production of LPR without any higher harmonics. The radiation from the proposed device is investigated analytically, which shows that a high degree of linear polarization is obtained in spite of a finite beam emittance and the angular acceptance of the optics, provided a sufficiently large number of segments and an adequate photon energy are chosen. Results of calculations to investigate the practical performance of the proposed device are presented.
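The cancellation argument can be stated in one line: writing the circular unit vectors as combinations of the linear ones, equal-amplitude on-axis contributions from segments of opposite helicity sum to a purely linear field (a schematic version of the abstract's coherent-sum argument, ignoring inter-segment phase):

```latex
\hat{e}_{\pm} = \frac{\hat{x} \pm i\,\hat{y}}{\sqrt{2}}, \qquad
A\,\hat{e}_{+} + A\,\hat{e}_{-} = \sqrt{2}\,A\,\hat{x}
```

so the circular components cancel and only a linearly polarized component survives.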

  6. Segmental Refinement: A Multigrid Technique for Data Locality

    KAUST Repository

    Adams, Mark F.; Brown, Jed; Knepley, Matt; Samtaney, Ravi

    2016-01-01

    We investigate a domain decomposed multigrid technique, termed segmental refinement, for solving general nonlinear elliptic boundary value problems. We extend the method first proposed in 1994 by analytically and experimentally investigating its complexity. We confirm that communication of traditional parallel multigrid is eliminated on fine grids, with modest amounts of extra work and storage, while maintaining the asymptotic exactness of full multigrid. We observe an accuracy dependence on the segmental refinement subdomain size, which was not considered in the original analysis. We present a communication complexity analysis that quantifies the communication costs ameliorated by segmental refinement and report performance results with up to 64K cores on a Cray XC30.

  7. Segmental Refinement: A Multigrid Technique for Data Locality

    KAUST Repository

    Adams, Mark F.

    2016-08-04

    We investigate a domain decomposed multigrid technique, termed segmental refinement, for solving general nonlinear elliptic boundary value problems. We extend the method first proposed in 1994 by analytically and experimentally investigating its complexity. We confirm that communication of traditional parallel multigrid is eliminated on fine grids, with modest amounts of extra work and storage, while maintaining the asymptotic exactness of full multigrid. We observe an accuracy dependence on the segmental refinement subdomain size, which was not considered in the original analysis. We present a communication complexity analysis that quantifies the communication costs ameliorated by segmental refinement and report performance results with up to 64K cores on a Cray XC30.

  8. The Regional MBA: Distinct Segments, Wants, and Needs

    Science.gov (United States)

    Passyn, Kirsten; Diriker, Memo

    2011-01-01

MBA programs at top-tier schools differ greatly from those at regional schools. A survey aimed at assessing segmentation, pedagogy, and satisfaction in regional MBA programs was developed and administered at three universities in the Mid-Atlantic, Midwest, and Southern regions. The results show four clearly distinguished segments that…

  9. Development of a segmented grating mount system for FIREX-1

    International Nuclear Information System (INIS)

    Ezaki, Y; Tabata, M; Kihara, M; Horiuchi, Y; Endo, M; Jitsuno, T

    2008-01-01

A mount system for segmented meter-sized gratings has been developed, with a high-precision grating support mechanism and a drive mechanism that minimize both deformation of the optical surfaces and misalignment when setting a segmented grating, so as to obtain sufficient performance from the pulse compressor. Analytical calculations show that deformation of the grating surface is less than λ/20 RMS, and the estimated drive resolution for piston and tilt of the segmented grating is λ/20, both of which are compliant with the requirements for the rear-end subsystem of FIREX-1.

  10. Segmentation of the Indian photovoltaic market

    International Nuclear Information System (INIS)

    Srinivasan, S.

    2005-01-01

This paper provides an analytical framework for studying the actors, networks, and institutions of the Indian solar photovoltaic (PV) market and examines its evolution. Different market segments, along the lines of demand and supply of PV equipment, i.e. on the basis of geography, end-use application, subsidy policy, and other financing mechanisms, are detailed. The objective of this effort is to identify segments that require special attention from policy makers, donors, and the Ministry of Non-Conventional Energy Sources. The paper also discusses the evolution of the commercial PV market in certain parts of the country and trends in the maturity of the market. (author)

  11. Integrated multi-choice goal programming and multi-segment goal programming for supplier selection considering imperfect-quality and price-quantity discounts in a multiple sourcing environment

    Science.gov (United States)

    Chang, Ching-Ter; Chen, Huang-Mu; Zhuang, Zheng-Yun

    2014-05-01

Supplier selection (SS) is a multi-criteria, multi-objective problem in which multi-segment (e.g., imperfect-quality discount (IQD) and price-quantity discount (PQD)) and multi-aspiration-level issues may be significantly important; however, little attention has been given to dealing with both of them simultaneously. This study proposes a model integrating multi-choice goal programming and multi-segment goal programming to solve the above-mentioned problems, providing the following main contributions: (1) it allows decision-makers to set multiple aspiration levels on the right-hand side of each goal to suit real-world situations; (2) the PQD and IQD conditions are considered in the proposed model simultaneously; and (3) the proposed model can solve an SS problem with n suppliers where each supplier offers m IQDs with r PQD intervals, where only ? extra binary variables are required. The usefulness of the proposed model is explained using a real case. The results indicate that the proposed model not only can deal with an SS problem with multi-segment and multi-aspiration levels, but also can help the decision-maker find appropriate order quantities for each supplier by considering cost, quality, and delivery.
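To make the multi-choice idea concrete, a minimal goal-programming fragment in Python/PuLP: a binary variable selects between two aspiration levels on the right-hand side of a single cost goal, and deviation variables are penalized in the objective (all numbers and names are invented for illustration; the paper's full model additionally covers the IQD/PQD discount intervals):

```python
from pulp import LpProblem, LpMinimize, LpVariable, LpBinary, value

prob = LpProblem("multi_choice_goal", LpMinimize)

x = LpVariable("order_qty", lowBound=0)         # order quantity (toy)
dp = LpVariable("over_goal", lowBound=0)        # positive deviation from goal
dn = LpVariable("under_goal", lowBound=0)       # negative deviation from goal
y = LpVariable("pick_high_goal", cat=LpBinary)  # selects one aspiration level

goal = 100 * (1 - y) + 150 * y                  # multi-choice right-hand side

prob += dp + dn                                 # objective: total deviation
prob += 1.2 * x + dn - dp == goal               # cost goal (unit cost 1.2)
prob += x >= 90                                 # demand requirement (toy)

prob.solve()
print(value(x), value(y), value(dp), value(dn))  # x=125, y=1: zero deviation
```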

  12. Community-Based Mental Health and Behavioral Programs for Low-Income Urban Youth: A Meta-Analytic Review

    Science.gov (United States)

    Farahmand, Farahnaz K.; Duffy, Sophia N.; Tailor, Megha A.; Dubois, David L.; Lyon, Aaron L.; Grant, Kathryn E.; Zarlinski, Jennifer C.; Masini, Olivia; Zander, Keith J.; Nathanson, Alison M.

    2012-01-01

    A meta-analytic review of 33 studies and 41 independent samples was conducted of the effectiveness of community-based mental health and behavioral programs for low-income urban youth. Findings indicated positive effects, with an overall mean effect of 0.25 at post-test. While this is comparable to previous meta-analytic intervention research with…

  13. Requirements for quality control of analytical data for the Environmental Restoration Program

    International Nuclear Information System (INIS)

    Engels, J.

    1992-12-01

The Environmental Restoration (ER) Program was established for the investigation and remediation of inactive US Department of Energy (DOE) sites and facilities that have been declared surplus in terms of their previous uses. The purpose of this document is to specify ER requirements for quality control (QC) of analytical data. Activities throughout all phases of the investigation may affect the quality of the final data product and thus are subject to control specifications. Laboratory control is emphasized in this document; field concerns will be addressed in a companion document. Energy Systems, in its role of technical coordinator and at the request of DOE-OR, extends the application of these requirements to all participants in ER activities. Because every instance and concern may not be addressed in this document, participants are encouraged to discuss any questions with the ER Quality Assurance (QA) Office, the Analytical Environmental Support Group (AESG), or the Analytical Project Office (APO).

  14. The Solid* toolset for software visual analytics of program structure and metrics comprehension : From research prototype to product

    NARCIS (Netherlands)

    Reniers, Dennie; Voinea, Lucian; Ersoy, Ozan; Telea, Alexandru

    2014-01-01

    Software visual analytics (SVA) tools combine static program analysis and fact extraction with information visualization to support program comprehension. However, building efficient and effective SVA tools is highly challenging, as it involves extensive software development in program analysis,

  15. Segmentation of Portuguese customers’ expectations from fitness programs

    Directory of Open Access Journals (Sweden)

    Ricardo Gouveia Rodrigues

    2017-10-01

Expectations towards fitness exercise are the major factor in customer satisfaction in this service sector. The purpose of this study is to present a segmentation framework for fitness customers based on their individual expectations. A survey was designed and validated to evaluate individual expectations towards exercise. The study included a randomly recruited sample of 723 subjects (53% males, 47% females; 42.1±19.7 years). Factor analysis and cluster analysis with Ward's method and squared Euclidean distance were used to analyse the data obtained. Four components were extracted (performance, enjoyment, beauty, and health), explaining 68.7% of the total variance, and three distinct segments were found: Exercise Lovers (n=312), Disinterested (n=161), and Beauty Seekers (n=250). All the identified factors contribute significantly to differentiating the clusters, the first and third clusters being the most similar. The segmentation framework based on customer expectations allows a better understanding of customer profiles, helping the fitness industry develop services more suitable for each type of customer. A follow-up study conducted 5 years later produced results that concur with the initial study.

  16. Quality assurance of analytical, scientific, and design computer programs for nuclear power plants

    International Nuclear Information System (INIS)

    1994-06-01

    This Standard applies to the design and development, modification, documentation, execution, and configuration management of computer programs used to perform analytical, scientific, and design computations during the design and analysis of safety-related nuclear power plant equipment, systems, structures, and components as identified by the owner. 2 figs

  17. Quality assurance of analytical, scientific, and design computer programs for nuclear power plants

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1994-06-01

    This Standard applies to the design and development, modification, documentation, execution, and configuration management of computer programs used to perform analytical, scientific, and design computations during the design and analysis of safety-related nuclear power plant equipment, systems, structures, and components as identified by the owner. 2 figs.

  18. Flammable gas safety program. Analytical methods development: FY 1993 progress report

    Energy Technology Data Exchange (ETDEWEB)

    Campbell, J.A.; Clauss, S.; Grant, K.; Hoopes, V.; Lerner, B.; Lucke, R.; Mong, G.; Rau, J.; Steele, R.

    1994-01-01

This report describes the status of developing analytical methods to account for the organic constituents in Hanford waste tanks, with particular emphasis on those tanks that have been assigned to the Flammable Gas Watch List. Six samples of core segments from Tank 101-SY, obtained during the window E core sampling, have been analyzed for organic constituents. Four of the samples were from the upper region, or convective layer, of the tank and two were from the lower, nonconvective layer. The samples were analyzed for chelators, chelator fragments, and several carboxylic acids by derivatization gas chromatography/mass spectrometry (GC/MS). The major components detected were ethylenediaminetetraacetic acid (EDTA), nitroso-iminodiacetic acid (NIDA), nitrilotriacetic acid (NTA), citric acid (CA), succinic acid (SA), and ethylenediaminetriacetic acid (ED3A). EDTA was the chelator of highest concentration in all six samples analyzed. Liquid chromatography (LC) was used to quantitate low molecular weight acids (LMWA), including oxalic, formic, glycolic, and acetic acids, which are present in the waste as acid salts. From 23 to 61% of the total organic carbon (TOC) in the samples analyzed was accounted for by these acids. Oxalate constituted approximately 40% of the TOC in the nonconvective layer samples and was approximately 3 to 4 times higher in concentration in the nonconvective layer than in the convective layer. During FY 1993, LC methods for analyzing LMWA and two chelators, N-(2-hydroxyethyl)ethylenediaminetriacetic acid and EDTA, were transferred to personnel in the Analytical Chemistry Laboratory and the 222-S laboratory.

  19. On the Multilevel Nature of Meta-Analysis: A Tutorial, Comparison of Software Programs, and Discussion of Analytic Choices.

    Science.gov (United States)

    Pastor, Dena A; Lazowski, Rory A

    2018-01-01

    The term "multilevel meta-analysis" is encountered not only in applied research studies, but in multilevel resources comparing traditional meta-analysis to multilevel meta-analysis. In this tutorial, we argue that the term "multilevel meta-analysis" is redundant since all meta-analysis can be formulated as a special kind of multilevel model. To clarify the multilevel nature of meta-analysis the four standard meta-analytic models are presented using multilevel equations and fit to an example data set using four software programs: two specific to meta-analysis (metafor in R and SPSS macros) and two specific to multilevel modeling (PROC MIXED in SAS and HLM). The same parameter estimates are obtained across programs underscoring that all meta-analyses are multilevel in nature. Despite the equivalent results, not all software programs are alike and differences are noted in the output provided and estimators available. This tutorial also recasts distinctions made in the literature between traditional and multilevel meta-analysis as differences between meta-analytic choices, not between meta-analytic models, and provides guidance to inform choices in estimators, significance tests, moderator analyses, and modeling sequence. The extent to which the software programs allow flexibility with respect to these decisions is noted, with metafor emerging as the most favorable program reviewed.

  20. Probabilistic Segmentation of Folk Music Recordings

    Directory of Open Access Journals (Sweden)

    Ciril Bohak

    2016-01-01

The paper presents a novel method for automatic segmentation of folk music field recordings. The method is based on a distance measure that uses dynamic time warping to cope with tempo variations and a dynamic programming approach to handle pitch drifting for finding similarities and estimating the length of the repeating segment. A probabilistic framework based on HMMs is used to find segment boundaries, searching for an optimal match between the expected segment length, between-segment similarities, and likely locations of segment beginnings. An evaluation of several current state-of-the-art approaches for segmentation of commercial music is presented, and their weaknesses when dealing with folk music, such as intolerance to pitch drift and variable tempo, are exposed. The proposed method is evaluated and its performance analyzed on a collection of 206 folk songs of different ensemble types: solo, two- and three-voiced, choir, instrumental, and instrumental with singing. It outperforms current commercial music segmentation methods for noninstrumental music and is on a par with the best for instrumental recordings.
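The tempo-tolerant distance at the core of the method is ordinary dynamic time warping; a textbook implementation for 1-D feature sequences is sketched below (the authors' pitch-drift handling and probabilistic framework are not reproduced here):

```python
import numpy as np

def dtw_distance(a, b):
    """Classic O(len(a)*len(b)) dynamic time warping between two 1-D series."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])        # local match cost
            D[i, j] = cost + min(D[i - 1, j],      # insertion
                                 D[i, j - 1],      # deletion
                                 D[i - 1, j - 1])  # match
    return D[n, m]

print(dtw_distance([1, 2, 3, 3, 2], [1, 2, 2, 3, 2]))
```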

  1. Towards Secure and Trustworthy Cyberspace: Social Media Analytics on Hacker Communities

    Science.gov (United States)

    Li, Weifeng

    2017-01-01

    Social media analytics is a critical research area spawned by the increasing availability of rich and abundant online user-generated content. So far, social media analytics has had a profound impact on organizational decision making in many aspects, including product and service design, market segmentation, customer relationship management, and…

  2. Analytic models for the evolution of semilocal string networks

    International Nuclear Information System (INIS)

    Nunes, A. S.; Martins, C. J. A. P.; Avgoustidis, A.; Urrestilla, J.

    2011-01-01

    We revisit previously developed analytic models for defect evolution and adapt them appropriately for the study of semilocal string networks. We thus confirm the expectation (based on numerical simulations) that linear scaling evolution is the attractor solution for a broad range of model parameters. We discuss in detail the evolution of individual semilocal segments, focusing on the phenomenology of segment growth, and also provide a preliminary comparison with existing numerical simulations.

  3. Unsupervised Segmentation Methods of TV Contents

    Directory of Open Access Journals (Sweden)

    Elie El-Khoury

    2010-01-01

We present a generic algorithm to address various temporal segmentation topics for audiovisual content, such as speaker diarization, shot segmentation, or program segmentation. Based on a GLR approach involving the ΔBIC criterion, this algorithm requires the values of only a few parameters to produce segmentation results at a desired scale and on most typical low-level features used in the field of content-based indexing. Results obtained on various corpora are of the same quality level as those obtained by other dedicated, state-of-the-art methods.
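The ΔBIC criterion compares one Gaussian model for an analysis window against two Gaussians split at a candidate boundary; a minimal 1-D sketch follows (the criterion is usually applied to multivariate features with full covariances; λ is the standard tuning parameter, and the reduction to one dimension is an illustrative simplification):

```python
import numpy as np

def delta_bic(window, split, lam=1.0):
    """ΔBIC for one candidate change point in a 1-D feature sequence.
    Positive values favor placing a segment boundary at `split`."""
    x = np.asarray(window, dtype=float)
    a, b = x[:split], x[split:]
    n, na, nb = len(x), len(a), len(b)
    # Generalized likelihood ratio: one Gaussian vs. two (variances, 1-D):
    glr = 0.5 * (n * np.log(x.var()) - na * np.log(a.var()) - nb * np.log(b.var()))
    penalty = lam * np.log(n)   # 1-D: one extra mean plus one extra variance
    return glr - penalty

rng = np.random.default_rng(1)
seq = np.concatenate([rng.normal(0, 1, 200), rng.normal(3, 1, 200)])
print(delta_bic(seq, 200))   # clearly positive: a boundary is detected
```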

  4. Analytical methods manual for the Mineral Resource Surveys Program, U.S. Geological Survey

    Science.gov (United States)

    Arbogast, Belinda F.

    1996-01-01

The analytical methods validated by the Mineral Resource Surveys Program, Geologic Division, are the subject of this manual. This edition replaces the methods portion of Open-File Report 90-668, published in 1990. Newer methods may be used if they have been approved by the quality assurance (QA) project and are on file with the QA coordinator. This manual is intended primarily for use by laboratory scientists; it can also assist laboratory users in evaluating the data they receive. The analytical methods are written in a step-by-step approach so that they may be used as a training tool and provide detailed documentation of the procedures for quality assurance. A "Catalog of Services" is available for customer (submitter) use, with brief listings of: the element(s)/species determined, method of determination, reference to cite, contact person, summary of the technique, and analyte concentration range. For a copy, please contact the Branch office at (303) 236-1800 or fax (303) 236-3200.

  5. ADVANCED CLUSTER BASED IMAGE SEGMENTATION

    Directory of Open Access Journals (Sweden)

    D. Kesavaraja

    2011-11-01

This paper presents efficient and portable implementations of a useful image segmentation technique that makes use of a faster variant of the conventional connected-components algorithm, which we call Parallel Components. Many medical practitioners need image segmentation as a service for various purposes and expect the system to run fast and securely, yet conventional segmentation algorithms, despite several ongoing research efforts, often do not run fast enough. We therefore propose a cluster-computing environment for parallel image segmentation to provide faster results. This paper describes a real-time implementation of distributed image segmentation on a cluster of nodes. We demonstrate the effectiveness and feasibility of our method on a set of medical CT scan images. Our general framework is a single-address-space, distributed-memory programming model. We use efficient techniques for distributing and coalescing data as well as efficient combinations of task and data parallelism. The image segmentation algorithm makes use of an efficient cluster process that uses a novel approach for parallel merging. Our experimental results are consistent with the theoretical analysis, and the method provides faster execution times for segmentation than the conventional method. Our test data are various CT scan images from a medical database. More efficient implementations of image segmentation will likely result in even faster execution times.
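The serial building block named in the abstract, connected-components labeling of a thresholded image, is available off the shelf; a SciPy sketch is shown below (the paper's cluster distribution and parallel-merging scheme are not reproduced, and the random image is a stand-in for a CT slice):

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(0)
img = rng.random((256, 256))              # stand-in for a CT slice (assumption)

mask = img > 0.995                        # threshold to a binary foreground
labels, n = ndimage.label(mask)           # 4-connected components by default
sizes = ndimage.sum(mask, labels, index=range(1, n + 1))
print(n, "components; largest has", int(sizes.max()), "pixels")
```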

  6. BeamOptics. A program for analytical beam optics

    International Nuclear Information System (INIS)

    Autin, B.; Carli, C.; D'Amico, T.; Groebner, O.; Martini, M.; Wildner, E.

    1998-01-01

    Analytical beam optics deals with the basic properties of the magnetic modules which compose particle accelerators in the same way as light optics was developed for telescopes, microscopes, or other instruments. The difference between photon and charged-particle optics lies in the nature of the field which acts upon the particle. The magnets of accelerators do not have the rotational symmetry of glass lenses and the computational problems are much more difficult. For this reason, the symbolic program BeamOptics has been written to assist the user in finding the parameters of systems whose complexity is better treated by computer than by hand. Symbolic results may be hard to interpret. Thin-lens models have been adopted because their description is algebraic and emphasis has been put on the existence of solutions, the number of solutions, and simple yet unknown special schemes. The program can also be applied to real machines with long elements. In that case, it works with numerical data but the results are accessible through continuous functions which provide the machine parameters at arbitrary positions along the reference orbit. The code is organized to be implemented in accelerator controls and has functions to correct all the first-order perturbations using a universal procedure. (orig.)

  7. ANALYTIC WORD RECOGNITION WITHOUT SEGMENTATION BASED ON MARKOV RANDOM FIELDS

    NARCIS (Netherlands)

    Coisy, C.; Belaid, A.

    2004-01-01

In this paper, a method for analytic handwritten word recognition based on causal Markov random fields is described. The word models are HMMs where each state corresponds to a letter; each letter is modelled by an NSHP-HMM (Markov field). Global models are built dynamically and used for recognition.

  8. Evaluation of Respiratory Protection Program in Petrochemical Industries: Application of Analytic Hierarchy Process

    Directory of Open Access Journals (Sweden)

    Hadi Kolahi

    2018-03-01

Background: Respiratory protection equipment (RPE) is the last resort for controlling exposure to workplace air pollutants. A comprehensive respiratory protection program (RPP) ensures that RPE is selected, used, and cared for properly. Therefore, RPP must be well integrated into occupational health and safety requirements. In this study, we evaluated the implementation of RPP in Iranian petrochemical industries to identify the solutions required to improve the current status of respiratory protection. Methods: This cross-sectional study was conducted among 24 petrochemical industries in Iran. The survey instrument was a checklist extracted from the Occupational Safety and Health Administration respiratory protection standard. An index, the Respiratory Protection Program Index (RPPI), was developed and weighted by the analytic hierarchy process to determine the compliance rate (CR) of provided respiratory protection measures with the RPP standard. Data analysis was performed using Excel 2010. Results: The most important element of RPP, according to experts, was respiratory hazard evaluation. The average value of RPPI in the petrochemical plants was 49 ± 15%. The highest and lowest CRs among RPP elements were for RPE selection and medical evaluation, respectively. Conclusion: None of the studied petrochemical industries implemented RPP completely. This can lead to employees' overexposure to hazardous workplace air contaminants. This study suggests increasing the awareness of employees and employers through training to improve these conditions. Keywords: analytic hierarchy process, petrochemical industries, respiratory protection program
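The analytic-hierarchy-process weighting behind an index like the RPPI can be sketched generically: element weights are the normalized principal eigenvector of a pairwise-comparison matrix, with a consistency index as a sanity check (the 3×3 matrix below is an invented example, not the study's expert judgments):

```python
import numpy as np

# Pairwise comparisons among three RPP elements (illustrative values only):
# hazard evaluation vs. RPE selection vs. medical evaluation.
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)                     # principal eigenvalue
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                                    # normalized AHP weights
CI = (eigvals.real[k] - len(A)) / (len(A) - 1)  # consistency index
print(np.round(w, 3), round(CI, 4))
```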

  9. Analytical torque calculation and experimental verification of synchronous permanent magnet couplings with Halbach arrays

    Science.gov (United States)

    Seo, Sung-Won; Kim, Young-Hyun; Lee, Jung-Ho; Choi, Jang-Young

    2018-05-01

    This paper presents analytical torque calculation and experimental verification of synchronous permanent magnet couplings (SPMCs) with Halbach arrays. A Halbach array is composed of various numbers of segments per pole; we calculate and compare the magnetic torques for 2, 3, and 4 segments. Firstly, based on the magnetic vector potential, and using a 2D polar coordinate system, we obtain analytical solutions for the magnetic field. Next, through a series of processes, we perform magnetic torque calculations using the derived solutions and a Maxwell stress tensor. Finally, the analytical results are verified by comparison with the results of 2D and 3D finite element analysis and the results of an experiment.
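The torque computation mentioned reduces, for 2-D polar fields in the air gap, to integrating the Maxwell stress tensor over a circle of radius r (the standard form, with L the axial stack length; the paper's derivation via the magnetic vector potential supplies B_r and B_θ):

```latex
T = \frac{L\,r^{2}}{\mu_{0}} \int_{0}^{2\pi} B_{r}(r,\theta)\, B_{\theta}(r,\theta)\, \mathrm{d}\theta
```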

  10. Visual programming for next-generation sequencing data analytics.

    Science.gov (United States)

    Milicchio, Franco; Rose, Rebecca; Bian, Jiang; Min, Jae; Prosperi, Mattia

    2016-01-01

    High-throughput or next-generation sequencing (NGS) technologies have become an established and affordable experimental framework in biological and medical sciences for all basic and translational research. Processing and analyzing NGS data is challenging. NGS data are big, heterogeneous, sparse, and error prone. Although a plethora of tools for NGS data analysis has emerged in the past decade, (i) software development is still lagging behind data generation capabilities, and (ii) there is a 'cultural' gap between the end user and the developer. Generic software template libraries specifically developed for NGS can help in dealing with the former problem, whilst coupling template libraries with visual programming may help with the latter. Here we scrutinize the state-of-the-art low-level software libraries implemented specifically for NGS and graphical tools for NGS analytics. An ideal developing environment for NGS should be modular (with a native library interface), scalable in computational methods (i.e. serial, multithread, distributed), transparent (platform-independent), interoperable (with external software interface), and usable (via an intuitive graphical user interface). These characteristics should facilitate both the run of standardized NGS pipelines and the development of new workflows based on technological advancements or users' needs. We discuss in detail the potential of a computational framework blending generic template programming and visual programming that addresses all of the current limitations. In the long term, a proper, well-developed (although not necessarily unique) software framework will bridge the current gap between data generation and hypothesis testing. This will eventually facilitate the development of novel diagnostic tools embedded in routine healthcare.

  11. Evaluation of analytical results on DOE Quality Assessment Program Samples

    International Nuclear Information System (INIS)

    Jaquish, R.E.; Kinnison, R.R.; Mathur, S.P.; Sastry, R.

    1985-01-01

Criteria were developed for evaluating the participants' analytical results in the DOE Quality Assessment Program (QAP). Historical data from previous QAP studies were analyzed using descriptive statistical methods to determine the interlaboratory precision that had been attained. Performance criteria used in other similar programs were also reviewed. Using these data, precision values and control limits were recommended for each type of analysis performed in the QA program. Results of the analyses performed by the QAP participants on the November 1983 samples were statistically analyzed and evaluated. The Environmental Measurements Laboratory (EML) values were used as the known values, and 3-sigma precision values were used as control limits. Results were submitted by 26 participating laboratories for 49 different radionuclide/media combinations. The participants reported 419 results, of which 350, or 84%, were within control limits. Special attention was given to the data from gamma spectral analysis of air filters and water samples. Both normal probability and box plots were prepared for each nuclide to help evaluate the distribution of the data. Results outside the expected range were identified, and it was suggested that laboratories check calculations and procedures for these results.
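The acceptance rule described, EML reference values with 3-sigma precision values as control limits, is a one-line check to apply; a small sketch with invented numbers:

```python
import numpy as np

def within_control(reported, known, sigma, k=3.0):
    """Flag results that fall inside known ± k·sigma control limits."""
    reported, known, sigma = (np.asarray(v, dtype=float)
                              for v in (reported, known, sigma))
    return np.abs(reported - known) <= k * sigma

# Invented example: three lab results vs. EML reference values.
ok = within_control([10.2, 9.1, 14.0], [10.0, 9.0, 12.0], [0.3, 0.3, 0.5])
print(ok, f"{ok.mean():.0%} within limits")
```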

  12. Learning About Love: A Meta-Analytic Study of Individually-Oriented Relationship Education Programs for Adolescents and Emerging Adults.

    Science.gov (United States)

    Simpson, David M; Leonhardt, Nathan D; Hawkins, Alan J

    2018-03-01

    Despite recent policy initiatives and substantial federal funding of individually oriented relationship education programs for youth, there have been no meta-analytic reviews of this growing field. This meta-analytic study draws on 17 control-group studies and 13 one-group/pre-post studies to evaluate the effectiveness of relationship education programs on adolescents' and emerging adults' relationship knowledge, attitudes, and skills. Overall, control-group studies produced a medium effect (d = .36); one-group/pre-post studies also produced a medium effect (d = .47). However, the lack of studies with long-term follow-ups of relationship behaviors in the young adult years is a serious weakness in the field, limiting what we can say about the value of these programs for helping youth achieve their aspirations for healthy romantic relationships and stable marriages.

  13. Evaluating the use of programming games for building early analytical thinking skills

    Directory of Open Access Journals (Sweden)

    H. Tsalapatas

    2015-11-01

Analytical thinking is a transversal skill that helps learners excel academically independently of subject area. It is in high demand in the world of work, especially in innovation-related sectors. It involves finding a viable solution to a problem by identifying goals, parameters, and resources available for deployment. These are strategy elements in game play; they also constitute good practices in programming. This work evaluates how serious games based on visual programming, used as a solution-synthesis tool within exploration, inquiry, and collaboration, can help learners build structured mindsets. It analyses how a visual programming environment, which supports experimentation for building intuition on potential solutions to logical puzzles and then encourages learners to synthesize a solution interactively, helps learners through gaming principles to build self-esteem in their problem-solving ability, to develop algorithmic thinking capacity, and to stay engaged in learning.

  14. Analysis of a Segmented Annular Coplanar Capacitive Tilt Sensor with Increased Sensitivity

    OpenAIRE

    Jiahao Guo; Pengcheng Hu; Jiubin Tan

    2016-01-01

An investigation of a segmented annular coplanar capacitor is presented. We focus on its theoretical model, and a mathematical expression for the capacitance value is derived by solving a Laplace equation with the Hankel transform. The finite element method is employed to verify the analytical result. Different control parameters are discussed, and the contribution of each to the capacitance value of the capacitor is obtained. On this basis, we analyze and optimize the structure parameters of a segmented annular coplanar capacitive tilt sensor.

  15. Multiple beam envelope equations for electron injectors using a bunch segmentation model

    Directory of Open Access Journals (Sweden)

    A. Mizuno

    2012-06-01

A new semianalytical method of investigating the beam dynamics of electron injectors was developed. In this method, a short bunched electron beam is treated as an ensemble of several segmentation pieces in both the longitudinal and transverse directions. The trajectory of each electron in the segmentation pieces is solved by the beam envelope equations, taking into account the space-charge fields produced by all the pieces, the electromagnetic fields of an rf cavity, and the image-charge fields at a cathode surface. The shape of the entire bunch is consequently calculated, and the emittances can thus be obtained from weighted mean values of the solutions for the electron trajectories. The advantage of this method is its unique assumption for the beam parameters: each segmentation slice is assumed not to warp during the calculations. If the beam energy is low and the charge density is large, this condition is not satisfied; in practice, however, it usually is. We performed beam dynamics calculations to obtain traces in free space and in the BNL-type rf gun cavity, comparing the analytical solutions with those obtained by simulation. In most cases, the emittances obtained by simulation approach those obtained analytically as the number of particles used in the simulation increases. Therefore, the analytically obtained emittances are expected to coincide with converged values from the simulation. The applicable range of the analytical method for the BNL-type rf gun cavity is under 0.5 nC per bunch. This range is often used in recently built x-ray free-electron laser facilities.
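For orientation, the transverse RMS envelope equation that such methods extend has the standard form below (K is the generalized perveance and ε_x the RMS emittance; the paper augments equations of this kind with rf-cavity and image-charge terms, so this is background, not the paper's exact system):

```latex
\sigma_x'' + k_x(s)\,\sigma_x - \frac{K}{2\,(\sigma_x + \sigma_y)} - \frac{\epsilon_x^{2}}{\sigma_x^{3}} = 0
```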

  16. Buildings and Terrain of Urban Area Point Cloud Segmentation based on PCL

    International Nuclear Information System (INIS)

    Liu, Ying; Zhong, Ruofei

    2014-01-01

One current problem in laser radar point-data classification is the segmentation of buildings and urban terrain; this paper proposes a point cloud segmentation method based on the PCL library. PCL is a large, cross-platform, open-source C++ programming library that implements a large number of efficient point-cloud data structures and generic algorithms for point cloud retrieval, filtering, segmentation, registration, feature extraction, surface reconstruction, visualization, etc. Because laser radar point clouds involve large amounts of unevenly distributed data, this paper proposes organizing the data with a kd-tree structure; resampling the point cloud with a Voxel Grid filter, namely reducing the amount of point cloud data while preserving the shape characteristics of the cloud; and then, using the Euclidean Cluster Extraction class of the PCL segmentation module, performing Euclidean clustering to segment buildings and ground in the three-dimensional point cloud. The experimental results show that this method avoids the need for multiple copies of existing data, saves program storage space by calling PCL library methods and classes, shortens program compilation time, and improves the running speed of the program.
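The resampling stage of the pipeline, PCL's VoxelGrid filter, is easy to mimic for illustration; a NumPy sketch that keeps one centroid per occupied voxel is shown below (this stands in for the C++ PCL call and is not the PCL API):

```python
import numpy as np

def voxel_downsample(points, voxel=0.5):
    """Replace all points falling in the same voxel by their centroid,
    reducing data volume while keeping the overall shape of the cloud."""
    pts = np.asarray(points, dtype=float)
    keys = np.floor(pts / voxel).astype(np.int64)          # voxel indices
    _, inv = np.unique(keys, axis=0, return_inverse=True)
    inv = inv.ravel()
    counts = np.bincount(inv).astype(float)
    out = np.empty((counts.size, pts.shape[1]))
    for d in range(pts.shape[1]):                          # centroid per voxel
        out[:, d] = np.bincount(inv, weights=pts[:, d]) / counts
    return out

cloud = np.random.default_rng(0).random((10000, 3)) * 10.0
print(voxel_downsample(cloud, 1.0).shape)                  # roughly (1000, 3)
```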

  17. Optimization of hot water transport and distribution networks by analytical method: OPTAL program

    International Nuclear Information System (INIS)

    Barreau, Alain; Caizergues, Robert; Moret-Bailly, Jean

    1977-06-01

    This report presents optimization studies of hot water transport and distribution networks by minimizing operating cost. Analytical optimization is used: Lagrange's method of undetermined multipliers. The optimum diameter of each pipe is calculated for minimum network operating cost. The characteristics of OPTAL, the computer program used for the calculations, are given in this report. An example network with 52 branches and 27 customers is calculated and described, and the results are discussed [fr
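
    The report itself is not reproduced here, but the shape of the optimization it names can be sketched generically in LaTeX: with pipe diameters D_i as free variables, an operating-cost function C, and a pressure-drop budget as the constraint, Lagrange's method of undetermined multipliers yields one stationarity condition per pipe. The cost and pressure-drop functions below are generic placeholders, not the ones used in OPTAL.

        \min_{D_1,\dots,D_n} C(D_1,\dots,D_n)
        \quad\text{subject to}\quad
        \sum_{i=1}^{n} \Delta p_i(D_i) = \Delta p_{\mathrm{tot}},
        \qquad
        \mathcal{L}(D,\lambda) = C(D_1,\dots,D_n)
          + \lambda\Bigl(\sum_{i=1}^{n} \Delta p_i(D_i) - \Delta p_{\mathrm{tot}}\Bigr),
        \qquad
        \frac{\partial \mathcal{L}}{\partial D_i} = 0,\quad
        \frac{\partial \mathcal{L}}{\partial \lambda} = 0.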

  18. Deliberate and Crisis Action Planning and Execution Segments Increment 2A (DCAPES Inc 2A)

    Science.gov (United States)

    2016-03-01

    2016 Major Automated Information System Annual Report. Program Name: Deliberate and Crisis Action Planning and Execution Segments Increment 2A (DCAPES Inc 2A). DoD Component: Air Force. Acquisition Program Baseline (APB) dated March 9, 2015.

  19. Segmenting the MBA Market: An Australian Strategy.

    Science.gov (United States)

    Everett, James E.; Armstrong, Robert W.

    1990-01-01

    A University of Western Australia market segmentation study for the masters program in business administration examined the relationship between Graduate Management Admission Test scores, work experience, faculty of undergraduate degree, gender, and academic success in the program. Implications of the results for establishing admission criteria…

  20. Analytical chemistry department. Annual report, 1977

    International Nuclear Information System (INIS)

    Knox, E.M.

    1978-09-01

    The annual report describes the analytical methods, analyses, and equipment developed or adopted for use by the Analytical Chemistry Department during 1977. The individual articles range from several-page descriptions of development and study programs to brief one-paragraph descriptions of methods adopted for use with or without modification. This year, we have included a list of the methods incorporated into our Analytical Chemistry Methods Manual. This report is organized into laboratory sections within the Department as well as major programs within General Atomic Company. Minor programs and studies are included under Miscellaneous. The analytical and technical support activities for GAC include gamma-ray spectroscopy, radiochemistry, activation analysis, gas chromatography, atomic absorption, spectrophotometry, emission spectroscopy, x-ray diffractometry, electron microprobe, titrimetry, gravimetry, and quality control. Services are provided to all organizations throughout General Atomic Company. The major effort, however, is in support of the research and development programs within the HTGR Generic Technology Programs, ranging from new fuel concepts, end-of-life studies, and irradiated capsules to fuel recycle studies.

  1. A model-based Bayesian framework for ECG beat segmentation

    International Nuclear Information System (INIS)

    Sayadi, O; Shamsollahi, M B

    2009-01-01

    The study of electrocardiogram (ECG) waveform amplitudes, timings and patterns has been the subject of intense research, for it provides a deep insight into the diagnostic features of the heart's functionality. In some recent works, a Bayesian filtering paradigm has been proposed for denoising and compression of ECG signals. In this paper, it is shown that this framework may be effectively used for ECG beat segmentation and extraction of fiducial points. Analytic expressions for the determination of points and intervals are derived and evaluated on various real ECG signals. Simulation results show that the method can contribute to and enhance the clinical ECG beat segmentation performance

  2. Analyzing Array Manipulating Programs by Program Transformation

    Science.gov (United States)

    Cornish, J. Robert M.; Gange, Graeme; Navas, Jorge A.; Schachte, Peter; Sondergaard, Harald; Stuckey, Peter J.

    2014-01-01

    We explore a transformational approach to the problem of verifying simple array-manipulating programs. Traditionally, verification of such programs requires intricate analysis machinery to reason with universally quantified statements about symbolic array segments, such as "every data item stored in the segment A[i] to A[j] is equal to the corresponding item stored in the segment B[i] to B[j]." We define a simple abstract machine which allows for set-valued variables and we show how to translate programs with array operations to array-free code for this machine. For the purpose of program analysis, the translated program remains faithful to the semantics of array manipulation. Based on our implementation in LLVM, we evaluate the approach with respect to its ability to extract useful invariants and the cost in terms of code size.
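
    A toy C++ illustration of the idea (not the paper's actual translation rules): an array-copy loop is paired with its array-free counterpart, in which a set-valued variable over-approximates the values stored in the written segment, so the universally quantified segment invariant becomes a quantifier-free set inclusion. All names here are hypothetical.

        #include <algorithm>
        #include <cassert>
        #include <set>
        #include <vector>

        int main() {
            std::vector<int> B = {3, 1, 4, 1, 5};
            std::vector<int> A(B.size());

            std::set<int> SA;                     // set-valued summary of A[0..k)
            std::set<int> SB(B.begin(), B.end()); // summary of the source segment

            for (std::size_t k = 0; k < B.size(); ++k) {
                A[k] = B[k];                      // concrete array operation
                SA.insert(B[k]);                  // array-free counterpart
            }

            // The quantified invariant "every item stored in A[0..n) came from
            // B[0..n)" is over-approximated by the set-level check SA subset-of
            // SB, which needs no quantifiers.
            assert(std::includes(SB.begin(), SB.end(), SA.begin(), SA.end()));
            return 0;
        }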

  3. Locally excitatory, globally inhibitory oscillator networks: theory and application to scene segmentation

    Science.gov (United States)

    Wang, DeLiang; Terman, David

    1995-01-01

    A novel class of locally excitatory, globally inhibitory oscillator networks (LEGION) is proposed and investigated analytically and by computer simulation. The model of each oscillator corresponds to a standard relaxation oscillator with two time scales. The network exhibits a mechanism of selective gating, whereby an oscillator jumping up to its active phase rapidly recruits the oscillators stimulated by the same pattern, while preventing other oscillators from jumping up. We show analytically that with the selective gating mechanism the network rapidly achieves both synchronization within blocks of oscillators that are stimulated by connected regions and desynchronization between different blocks. Computer simulations demonstrate LEGION's promising ability for segmenting multiple input patterns in real time. This model lays a physical foundation for the oscillatory correlation theory of feature binding, and may provide an effective computational framework for scene segmentation and figure/ground segregation.
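
    The building block of LEGION is a two-time-scale relaxation oscillator; in Terman and Wang's formulation it is commonly written as dx/dt = 3x - x^3 + 2 - y + I and dy/dt = eps*(gamma*(1 + tanh(x/beta)) - y), with x the fast excitatory variable and y the slow recovery variable. Below is a minimal forward-Euler integration of a single uncoupled oscillator; the parameter values are illustrative assumptions, and the network coupling and global inhibitor are omitted.

        #include <cmath>
        #include <cstdio>

        int main() {
            double x = -2.0, y = 0.0;          // fast and slow variables
            const double I = 0.8;              // external stimulation (assumed)
            const double eps = 0.02;           // time-scale separation (assumed)
            const double gamma = 6.0, beta = 0.1;
            const double dt = 0.01;            // Euler step

            for (int step = 0; step < 20000; ++step) {
                const double dx = 3.0 * x - x * x * x + 2.0 - y + I;
                const double dy = eps * (gamma * (1.0 + std::tanh(x / beta)) - y);
                x += dt * dx;
                y += dt * dy;
                if (step % 1000 == 0)          // sample the limit cycle
                    std::printf("t=%7.2f  x=%8.4f  y=%8.4f\n", step * dt, x, y);
            }
            return 0;
        }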

  4. A Cautionary Analysis of STAPLE Using Direct Inference of Segmentation Truth

    DEFF Research Database (Denmark)

    Van Leemput, Koen; Sabuncu, Mert R.

    2014-01-01

    In this paper we analyze the properties of the well-known segmentation fusion algorithm STAPLE, using a novel inference technique that analytically marginalizes out all model parameters. We demonstrate both theoretically and empirically that when the number of raters is large, or when consensus r...

  5. A Comparison of Four Software Programs for Implementing Decision Analytic Cost-Effectiveness Models.

    Science.gov (United States)

    Hollman, Chase; Paulden, Mike; Pechlivanoglou, Petros; McCabe, Christopher

    2017-08-01

    The volume and technical complexity of both academic and commercial research using decision analytic modelling has increased rapidly over the last two decades. The range of software programs used for their implementation has also increased, but it remains true that a small number of programs account for the vast majority of cost-effectiveness modelling work. We report a comparison of four software programs: TreeAge Pro, Microsoft Excel, R and MATLAB. Our focus is on software commonly used for building Markov models and decision trees to conduct cohort simulations, given their predominance in the published literature around cost-effectiveness modelling. Our comparison uses three qualitative criteria as proposed by Eddy et al.: "transparency and validation", "learning curve" and "capability". In addition, we introduce the quantitative criterion of processing speed. We also consider the cost of each program to academic users and commercial users. We rank the programs based on each of these criteria. We find that, whilst Microsoft Excel and TreeAge Pro are good programs for educational purposes and for producing the types of analyses typically required by health technology assessment agencies, the efficiency and transparency advantages of programming languages such as MATLAB and R become increasingly valuable when more complex analyses are required.
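
    For readers unfamiliar with the model type being compared, a Markov cohort simulation reduces to repeated multiplication of a state-occupancy vector by a transition matrix, whichever of the four programs hosts it. The C++ sketch below is a minimal three-state example; the transition probabilities, costs, utilities, and discount rate are hypothetical illustration values, not taken from the paper.

        #include <array>
        #include <cmath>
        #include <cstdio>

        int main() {
            // States: 0 = healthy, 1 = sick, 2 = dead (absorbing).
            const double P[3][3] = {{0.90, 0.08, 0.02},
                                    {0.00, 0.85, 0.15},
                                    {0.00, 0.00, 1.00}};
            const double cost[3]    = {100.0, 2000.0, 0.0}; // per-cycle cost
            const double utility[3] = {1.0, 0.6, 0.0};      // per-cycle QALY weight
            const double discount   = 0.035;                // annual rate (assumed)

            std::array<double, 3> cohort = {1.0, 0.0, 0.0}; // all start healthy
            double totalCost = 0.0, totalQALY = 0.0;

            for (int cycle = 1; cycle <= 40; ++cycle) {     // 40 annual cycles
                std::array<double, 3> next = {0.0, 0.0, 0.0};
                for (int from = 0; from < 3; ++from)
                    for (int to = 0; to < 3; ++to)
                        next[to] += cohort[from] * P[from][to];
                cohort = next;
                const double d = 1.0 / std::pow(1.0 + discount, cycle);
                for (int s = 0; s < 3; ++s) {
                    totalCost += d * cohort[s] * cost[s];
                    totalQALY += d * cohort[s] * utility[s];
                }
            }
            std::printf("discounted cost=%.2f  QALYs=%.3f\n", totalCost, totalQALY);
            return 0;
        }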

  6. Remote sensing image segmentation based on Hadoop cloud platform

    Science.gov (United States)

    Li, Jie; Zhu, Lingling; Cao, Fubin

    2018-01-01

    To address the slow speed and poor real-time performance of remote sensing image segmentation, this paper studies segmentation on the Hadoop cloud platform. After analyzing the structural characteristics of the Hadoop cloud platform and its MapReduce programming component, we propose an image segmentation method combining OpenCV with Hadoop. First, the MapReduce image processing model for the Hadoop cloud platform is designed: image input and output are customized and the data-file splitting method is rewritten. The Mean Shift image segmentation algorithm is then implemented on this model. Finally, segmentation experiments on remote sensing images compare the Hadoop implementation against the same Mean Shift algorithm implemented in MATLAB. The experimental results show that, while maintaining good segmentation quality, the Hadoop-based implementation segments remote sensing images much faster than the single-machine MATLAB implementation.
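
    The core of the Mean Shift algorithm named above is a mode-seeking update in which every feature vector moves to the mean of its neighbours within a kernel radius. The C++ sketch below shows one flat-kernel iteration of that update over RGB feature vectors; the full segmenter (joint spatial-range kernel, convergence loop, cluster merging) and the MapReduce wrapping are omitted, and all names are assumptions.

        #include <cstddef>
        #include <vector>

        struct Feat { double r, g, b; };       // one pixel's colour features

        static double dist2(const Feat& a, const Feat& b) {
            return (a.r - b.r) * (a.r - b.r) + (a.g - b.g) * (a.g - b.g)
                 + (a.b - b.b) * (a.b - b.b);
        }

        // One mean-shift iteration with a flat kernel of radius h: each point
        // moves to the mean of all points inside its kernel window.
        void meanShiftStep(std::vector<Feat>& pts, double h) {
            std::vector<Feat> next(pts.size());
            for (std::size_t i = 0; i < pts.size(); ++i) {
                double sr = 0, sg = 0, sb = 0;
                int n = 0;
                for (const Feat& q : pts)
                    if (dist2(pts[i], q) <= h * h) { sr += q.r; sg += q.g; sb += q.b; ++n; }
                next[i] = {sr / n, sg / n, sb / n}; // n >= 1: a point is its own neighbour
            }
            pts = next;
        }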

  7. Modelling and Optimization of Four-Segment Shielding Coils of Current Transformers.

    Science.gov (United States)

    Gao, Yucheng; Zhao, Wei; Wang, Qing; Qu, Kaifeng; Li, He; Shao, Haiming; Huang, Songling

    2017-05-26

    Applying shielding coils is a practical way to protect current transformers (CTs) for large-capacity generators from the intensive magnetic interference produced by adjacent bus-bars. The aim of this study is to build a simple analytical model for the shielding coils, from which the optimization of the shielding coils can be calculated effectively. Based on an existing stray flux model, a new analytical model for the leakage flux of partial coils is presented, and finite element method-based simulations are carried out to develop empirical equations for the core-pickup factors of the models. Using the flux models, a model of the common four-segment shielding coils is derived. Furthermore, a theoretical analysis is carried out on the optimal performance of the four-segment shielding coils in a typical six-bus-bars scenario. It turns out that the "all parallel" shielding coils with a 45° starting position have the best shielding performance, whereas the "separated loop" shielding coils with a 0° starting position feature the lowest heating value. Physical experiments were performed, which verified all the models and the conclusions proposed in the paper. In addition, for shielding coils with other than the four-segment configuration, the analysis process will generally be the same.

  8. Analytical Thinking, Analytical Action: Using Prelab Video Demonstrations and e-Quizzes to Improve Undergraduate Preparedness for Analytical Chemistry Practical Classes

    Science.gov (United States)

    Jolley, Dianne F.; Wilson, Stephen R.; Kelso, Celine; O'Brien, Glennys; Mason, Claire E.

    2016-01-01

    This project utilizes visual and critical thinking approaches to develop a higher-education synergistic prelab training program for a large second-year undergraduate analytical chemistry class, directing more of the cognitive learning to the prelab phase. This enabled students to engage in more analytical thinking prior to engaging in the…

  9. Big data and analytics strategic and organizational impacts

    CERN Document Server

    Morabito, Vincenzo

    2015-01-01

    This book presents and discusses the main strategic and organizational challenges posed by Big Data and analytics in a manner relevant to both practitioners and scholars. The first part of the book analyzes strategic issues relating to the growing relevance of Big Data and analytics for competitive advantage, which is also attributable to empowerment of activities such as consumer profiling, market segmentation, and development of new products or services. Detailed consideration is also given to the strategic impact of Big Data and analytics on innovation in domains such as government and education and to Big Data-driven business models. The second part of the book addresses the impact of Big Data and analytics on management and organizations, focusing on challenges for governance, evaluation, and change management, while the concluding part reviews real examples of Big Data and analytics innovation at the global level. The text is supported by informative illustrations and case studies, so that practitioners...

  10. Marketing Education Through Benefit Segmentation. AIR Forum 1981 Paper.

    Science.gov (United States)

    Goodnow, Wilma Elizabeth

    The applicability of the "benefit segmentation" marketing technique to education was tested at the College of DuPage in 1979. Benefit segmentation identified target markets homogeneous in benefits expected from a program offering and may be useful in combatting declining enrollments. The 487 randomly selected students completed the 223…

  11. Segmentation: Identification of consumer segments

    DEFF Research Database (Denmark)

    Høg, Esben

    2005-01-01

    It is very common to categorise people, especially in the advertising business. Traditional marketing theory has also taken up consumer segments as a favorite topic. Segmentation is closely related to the broader concept of classification, which, from a historical point of view, has its origin in other sciences such as biology, anthropology, etc. From an economic point of view, it is called segmentation when specific scientific techniques are used to classify consumers into different characteristic groupings. What is the purpose of segmentation? For example, to obtain a basic understanding of how people group. Advertising agencies may use segmentation to target advertisements, while food companies may use segmentation to develop products for various groups of consumers. MAPP has for example investigated the positioning of fish in relation to other food products...

  12. Use of the analytical tree technique to develop a radiological protection program

    International Nuclear Information System (INIS)

    Domenech N, H.; Jova S, L.

    1996-01-01

    The results obtained by the Cuban Center for Radiological Protection and Hygiene in using an analytical tree technique to develop its general operational radiation protection program are presented. Through this method, factors such as the organization of the radiation protection services, the provision of administrative requirements, the existing general laboratory requirements, the viability of resources, and the current documentation were evaluated. The main components considered were: complete normative and regulatory documentation; automated radiological protection data management; the scope of on-the-job and radiological protection training for the personnel; prior radiological appraisal of the safety performance of the work; and application of dose constraints for the personnel and the public. The detailed development of the program allowed identification of the basic aims to be achieved in its maintenance and improvement. (authors). 3 refs

  13. Japanese migration in contemporary Japan: economic segmentation and interprefectural migration.

    Science.gov (United States)

    Fukurai, H

    1991-01-01

    This paper examines the economic segmentation model in explaining 1985-86 Japanese interregional migration. The analysis takes advantage of statistical graphic techniques to illustrate the following substantive issues of interregional migration: (1) to examine whether economic segmentation significantly influences Japanese regional migration and (2) to explain socioeconomic characteristics of prefectures for both in- and out-migration. Analytic techniques include a latent structural equation (LISREL) methodology and statistical residual mapping. The residual dispersion patterns, for instance, suggest the extent to which socioeconomic and geopolitical variables explain migration differences by showing unique clusters of unexplained residuals. The analysis further points out that extraneous factors such as high residential land values, significant commuting populations, and regional-specific cultures and traditions need to be incorporated in the economic segmentation model in order to assess the extent of the model's reliability in explaining the pattern of interprefectural migration.

  14. Learning Analytics: Potential for Enhancing School Library Programs

    Science.gov (United States)

    Boulden, Danielle Cadieux

    2015-01-01

    Learning analytics has been defined as the measurement, collection, analysis, and reporting of data about learners and their contexts, for purposes of understanding and optimizing learning and the environments in which it occurs. The potential use of data and learning analytics in educational contexts has caught the attention of educators and…

  15. Schedule Analytics

    Science.gov (United States)

    2016-04-30

    Schedule Analytics. The research comprised the following high-level steps: identify and review primary data sources. Detailed reviews of the OMB IT Dashboard data revealed that schedule data is highly aggregated, typically to little more than program start date and program end date.

  16. Analytical Chemistry Laboratory, progress report for FY 1993

    Energy Technology Data Exchange (ETDEWEB)

    1993-12-01

    The purpose of this report is to summarize the activities of the Analytical Chemistry Laboratory (ACL) at Argonne National Laboratory (ANL) for Fiscal Year (FY) 1993 (October 1992 through September 1993). This annual report is the tenth for the ACL and describes continuing effort on projects, work on new projects, and contributions of the ACL staff to various programs at ANL. The Analytical Chemistry Laboratory is a full-cost-recovery service center, with the primary mission of providing a broad range of analytical chemistry support services to the scientific and engineering programs at ANL. The ACL also has research programs in analytical chemistry, conducts instrumental and methods development, and provides analytical services for governmental, educational, and industrial organizations. The ACL handles a wide range of analytical problems. Some routine or standard analyses are done, but it is common for the Argonne programs to generate unique problems that require development or modification of methods and adaptation of techniques to obtain useful analytical data. The ACL is administratively within the Chemical Technology Division (CMT), its principal ANL client, but provides technical support for many of the technical divisions and programs at ANL. The ACL has four technical groups--Chemical Analysis, Instrumental Analysis, Organic Analysis, and Environmental Analysis--which together include about 45 technical staff members. Talents and interests of staff members cross the group lines, as do many projects within the ACL.

  17. Comparability between NQA-1 and the QA programs for analytical laboratories within the nuclear industry and EPA hazardous waste laboratories

    International Nuclear Information System (INIS)

    English, S.L.; Dahl, D.R.

    1989-01-01

    There is increasing cooperation between the Department of Energy (DOE), Department of Defense (DOD), and the Environmental Protection Agency (EPA) in the activities associated with monitoring and clean-up of hazardous wastes. Pacific Northwest Laboratory (PNL) examined the quality assurance/quality control programs that the EPA requires of the private sector when performing routine analyses of hazardous wastes to confirm how or if the requirements correspond with PNL's QA program based upon NQA-1. This paper presents the similarities and differences between NQA-1 and the QA program identified in ASTM-C1009-83, Establishing a QA Program for Analytical Chemistry Laboratories within the Nuclear Industry; EPA QAMS-005/80, Interim Guidelines and Specifications for Preparing Quality Assurance Project Plans, which is referenced in Statements of Work for CERCLA analytical activities; and Chapter 1 of SW-846, which is used in analyses of RCRA samples. The EPA QA programs for hazardous waste analyses are easily encompassed within an already established NQA-1 QA program. A few new terms are introduced and there is an increased emphasis upon the QC/verification, but there are many of the same basic concepts in all the programs

  18. Tank 241-BY-109, cores 201 and 203, analytical results for the final report

    International Nuclear Information System (INIS)

    Esch, R.A.

    1997-01-01

    This document is the final laboratory report for tank 241-BY-109 push mode core segments collected between June 6, 1997 and June 17, 1997. The segments were subsampled and analyzed in accordance with the Tank Push Mode Core Sampling and Analysis Plan (Bell, 1997) and the Tank Safety Screening Data Quality Objective (Dukelow, et al., 1995). The analytical results are included.

  19. Big Data Analytics for Demand Response: Clustering Over Space and Time

    Energy Technology Data Exchange (ETDEWEB)

    Chelmis, Charalampos [Univ. of Southern California, Los Angeles, CA (United States); Kolte, Jahanvi [Nirma Univ., Gujarat (India); Prasanna, Viktor K. [Univ. of Southern California, Los Angeles, CA (United States)

    2015-10-29

    The pervasive deployment of advanced sensing infrastructure in Cyber-Physical systems, such as the Smart Grid, has resulted in an unprecedented data explosion. Such data exhibit both large volume and high velocity, two of the three pillars of Big Data, and have a time-series character, as datasets in this context typically consist of successive measurements made over a time interval. Time-series data can be valuable for data mining and analytics tasks such as identifying the “right” customers among a diverse population to target for Demand Response programs. However, time series are challenging to mine due to their high dimensionality. In this paper, we motivate this problem using a real application from the smart grid domain. We explore novel representations of time-series data for Big Data analytics, and propose a clustering technique for determining a natural segmentation of customers and identifying temporal consumption patterns. Our method is generalizable to large-scale, real-world scenarios without making any assumptions about the data. We evaluate our technique using real datasets from smart meters, totaling ~ 18,200,000 data points, and show the efficacy of our technique in efficiently detecting the optimal number of clusters.

  20. Loading effects of anterior cervical spine fusion on adjacent segments

    Directory of Open Access Journals (Sweden)

    Chien-Shiung Wang

    2012-11-01

    Full Text Available Adjacent segment degeneration typically follows anterior cervical spine fusion, but its primary cause remains unknown. Therefore, in order to identify the loading effects that cause adjacent segment degeneration, this study examined the loading effects on the superior segments adjacent to the fused bone following anterior cervical spine fusion. The C3–C6 cervical spine segments of 12 sheep were examined. Specimens were divided into the following groups: intact spine (Group 1), and C5–C6 segments fused via cage-instrumented plate fixation (Group 2). Specimens were cycled between 20° flexion and 15° extension with a displacement control of 1°/second. The tested parameters included the range of motion (ROM) of each segment, the torque and the strain on both the body and the inferior articular process of the superior segments (C3–C4) adjacent to the fused bone, and the position of the neutral axis of stress under 20° flexion and 15° extension. Under flexion, in Group 2 the torque, ROM, and strain on both the bodies and the facets of the superior segments adjacent to the fused bone were higher than in Group 1. Under extension, in Group 2 the ROM of the fused segment was less than in Group 1, while the torque, ROM, and stress on both the bodies and facets of the superior segments adjacent to the fused bone were higher than in Group 1. These analytical results indicate that, following anterior cervical spine fusion, the muscles and ligaments require greater force to achieve cervical motion than in the intact spine. In addition, the ROM and the stress on the bodies and facets of the joint segments adjacent to the fused bone were significantly increased. Under flexion, the neutral axis of the stress on the adjacent segment moved backward, and the stress on the bodies of the segments adjacent to the fused bone increased. These comparative results indicate that the increased stress on the adjacent segments is caused by stress-shielding effects.

  1. Segmentation of ribs in digital chest radiographs

    Science.gov (United States)

    Cong, Lin; Guo, Wei; Li, Qiang

    2016-03-01

    Ribs and clavicles in posterior-anterior (PA) digital chest radiographs often overlap with lung abnormalities such as nodules and can cause these abnormalities to be missed; it is therefore desirable to remove or reduce the ribs in chest radiographs. The purpose of this study was to develop a fully automated algorithm to segment the ribs within the lung area in digital radiography (DR) for rib removal. The rib segmentation algorithm consists of three steps. First, the radiograph is pre-processed for contrast adjustment and noise removal. Second, a generalized Hough transform is employed to localize the lower boundaries of the ribs. Third, a novel bilateral dynamic programming algorithm accurately segments the upper and lower boundaries of the ribs simultaneously; the width of the ribs and the smoothness of the rib boundaries are incorporated in the cost function of the bilateral dynamic programming to obtain consistent upper and lower boundaries. Our database consisted of 93 DR images: 23 acquired with a DR system from Shanghai United-Imaging Healthcare Co. and 70 from GE Healthcare Co. The rib localization algorithm achieved a sensitivity of 98.2% with 0.1 false positives per image. The accuracy of the detected ribs was further evaluated subjectively on a 3-level scale: "1", good; "2", acceptable; "3", poor. The percentages of good, acceptable, and poor segmentation results were 91.1%, 7.2%, and 1.7%, respectively. Our algorithm obtains good segmentation results for ribs in chest radiography and would be useful for rib reduction in our future study.
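
    The bilateral cost function itself is not given in this record, but the underlying single-boundary recurrence is the standard dynamic programming one: the best path to pixel (r, c) extends the best path to row r-1, r, or r+1 in column c-1, which enforces smoothness. The C++ sketch below traces one boundary on a per-pixel cost image; the published bilateral version additionally couples the upper and lower boundary paths through rib-width and smoothness terms.

        #include <limits>
        #include <vector>

        // Minimum-cost left-to-right path through a cost image, moving at most
        // one row per column. Low cost should mark edge-like pixels.
        std::vector<int> traceBoundary(const std::vector<std::vector<double>>& cost) {
            const int rows = (int)cost.size(), cols = (int)cost[0].size();
            const double INF = std::numeric_limits<double>::infinity();
            std::vector<std::vector<double>> D(rows, std::vector<double>(cols, INF));
            std::vector<std::vector<int>> from(rows, std::vector<int>(cols, -1));

            for (int r = 0; r < rows; ++r) D[r][0] = cost[r][0];
            for (int c = 1; c < cols; ++c)
                for (int r = 0; r < rows; ++r)
                    for (int dr = -1; dr <= 1; ++dr) {   // smoothness: |row change| <= 1
                        const int pr = r + dr;
                        if (pr < 0 || pr >= rows) continue;
                        const double cand = D[pr][c - 1] + cost[r][c];
                        if (cand < D[r][c]) { D[r][c] = cand; from[r][c] = pr; }
                    }

            int best = 0;                                // cheapest right-edge endpoint
            for (int r = 1; r < rows; ++r)
                if (D[r][cols - 1] < D[best][cols - 1]) best = r;
            std::vector<int> path(cols);
            for (int c = cols - 1; c >= 0; --c) { path[c] = best; best = from[best][c]; }
            return path;
        }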

  2. Development and implementation of information systems for the DOE's National Analytical Management Program (NAMP)

    International Nuclear Information System (INIS)

    Streets, W. E.

    1999-01-01

    The Department of Energy (DOE) faces a challenging environmental management effort, including environmental protection, environmental restoration, waste management, and decommissioning. This effort requires extensive sampling and analysis to determine the type and level of contamination and the appropriate technology for cleanup, and to verify compliance with environmental regulations. Data obtained from these sampling and analysis activities are used to support environmental management decisions. Confidence in the data is critical, having legal, regulatory, and therefore, economic impact. To promote quality in the planning, management, and performance of these sampling and analysis operations, DOE's Office of Environmental Management (EM) has established the National Analytical Management Program (NAMP). With a focus on reducing the estimated costs of over $200M per year for EM's analytical services, NAMP has been charged with developing products that will decrease the costs for DOE complex-wide environmental management while maintaining quality in all aspects of the analytical data generation. As part of this thrust to streamline operations, NAMP is developing centralized information systems that will allow DOE complex personnel to share information about EM contacts at the various sites, pertinent methodologies for environmental restoration and waste management, costs of analyses, and performance of contracted laboratories

  3. Analytical chemistry: Principles and techniques

    International Nuclear Information System (INIS)

    Hargis, L.G.

    1988-01-01

    Although this text seems to have been intended for use in a one-semester course in undergraduate analytical chemistry, it includes the range of topics usually encountered in a two-semester introductory course in chemical analysis. The material is arranged logically for use in a two-semester course: the first 12 chapters contain the subjects most often covered in the first term, and the next 10 chapters pertain to the second (instrumental) term. Overall breadth and level of treatment are standard for an undergraduate text of this sort, and the only major omission is that of kinetic methods (which is a common omission in analytical texts). In the first 12 chapters coverage of the basic material is quite good. The emphasis on the underlying principles of the techniques rather than on specifics and design of instrumentation is welcomed. This text may be more useful for the instrumental portion of an analytical chemistry course than for the solution chemistry segment. The instrumental analysis portion is appropriate for an introductory textbook.

  4. AISLE: an automatic volumetric segmentation method for the study of lung allometry.

    Science.gov (United States)

    Ren, Hongliang; Kazanzides, Peter

    2011-01-01

    We developed a fully automatic segmentation method for volumetric CT (computer tomography) datasets to support construction of a statistical atlas for the study of allometric laws of the lung. The proposed segmentation method, AISLE (Automated ITK-Snap based on Level-set), is based on the level-set implementation from an existing semi-automatic segmentation program, ITK-Snap. AISLE can segment the lung field without human interaction and provide intermediate graphical results as desired. The preliminary experimental results show that the proposed method can achieve accurate segmentation, in terms of volumetric overlap metric, by comparing with the ground-truth segmentation performed by a radiologist.

  5. Deliberate and Crisis Action Planning and Execution Segments Increment 2B (DCAPES Inc 2B)

    Science.gov (United States)

    2016-03-01

    2016 Major Automated Information System Annual Report. Program Name: Deliberate and Crisis Action Planning and Execution Segments Increment 2B (DCAPES Inc 2B). DoD Component: Air Force. Program Description: Deliberate and Crisis Action Planning and Execution Segments (DCAPES) is...

  6. Contour tracing for segmentation of mammographic masses

    International Nuclear Information System (INIS)

    Elter, Matthias; Held, Christian; Wittenberg, Thomas

    2010-01-01

    CADx systems have the potential to support radiologists in the difficult task of discriminating benign and malignant mammographic lesions. The segmentation of mammographic masses from the background tissue is an important module of CADx systems designed for the characterization of mass lesions. In this work, a novel approach to this task is presented. The segmentation is performed by automatically tracing the mass' contour in-between manually provided landmark points defined on the mass' margin. The performance of the proposed approach is compared to the performance of implementations of three state-of-the-art approaches based on region growing and dynamic programming. For an unbiased comparison of the different segmentation approaches, optimal parameters are selected for each approach by means of tenfold cross-validation and a genetic algorithm. Furthermore, segmentation performance is evaluated on a dataset of ROI and ground-truth pairs. The proposed method outperforms the three state-of-the-art methods. The benchmark dataset will be made available with publication of this paper and will be the first publicly available benchmark dataset for mass segmentation.

  7. Analytical Chemistry Laboratory: Progress report for FY 1988

    International Nuclear Information System (INIS)

    Green, D.W.; Heinrich, R.R.; Graczyk, D.G.; Lindahl, P.C.; Erickson, M.D.

    1988-12-01

    The purpose of this report is to summarize the activities of the Analytical Chemistry Laboratory (ACL) at Argonne National Laboratory (ANL) for fiscal year 1988 (October 1987 through September 1988). The Analytical Chemistry Laboratory is a full-cost recovery service center, with the primary mission of providing a broad range of analytical chemistry support services to the scientific and engineering programs at ANL. In addition, the ACL conducts a research program in analytical chemistry, works on instrumental and methods development, and provides analytical services for governmental, educational, and industrial organizations. The ACL handles a wide range of analytical problems, from routine standard analyses to unique problems that require significant development of methods and techniques

  8. Analytical Chemistry Laboratory progress report for FY 1989

    International Nuclear Information System (INIS)

    Green, D.W.; Heinrich, R.R.; Graczyk, D.G.; Lindahl, P.C.; Erickson, M.D.

    1989-12-01

    The purpose of this report is to summarize the activities of the Analytical Chemistry Laboratory (ACL) at Argonne National Laboratory (ANL) for Fiscal Year 1989 (October 1988 through September 1989). The Analytical Chemistry Laboratory is a full-cost-recovery service center, with the primary mission of providing a broad range of analytical chemistry support services to the scientific and engineering programs at ANL. In addition, the ACL conducts a research program in analytical chemistry, works on instrumental and methods development, and provides analytical services for governmental, educational, and industrial organizations. The ACL handles a wide range of analytical problems, from routine standard analyses to unique problems that require significant development of methods and techniques

  9. Analytical Chemistry Laboratory: Progress report for FY 1988

    Energy Technology Data Exchange (ETDEWEB)

    Green, D.W.; Heinrich, R.R.; Graczyk, D.G.; Lindahl, P.C.; Erickson, M.D.

    1988-12-01

    The purpose of this report is to summarize the activities of the Analytical Chemistry Laboratory (ACL) at Argonne National Laboratory (ANL) for fiscal year 1988 (October 1987 through September 1988). The Analytical Chemistry Laboratory is a full-cost recovery service center, with the primary mission of providing a broad range of analytical chemistry support services to the scientific and engineering programs at ANL. In addition, the ACL conducts a research program in analytical chemistry, works on instrumental and methods development, and provides analytical services for governmental, educational, and industrial organizations. The ACL handles a wide range of analytical problems, from routine standard analyses to unique problems that require significant development of methods and techniques.

  10. Status of the segment interconnect, cable segment ancillary logic, and the cable segment hybrid driver projects

    International Nuclear Information System (INIS)

    Swoboda, C.; Barsotti, E.; Chappa, S.; Downing, R.; Goeransson, G.; Lensy, D.; Moore, G.; Rotolo, C.; Urish, J.

    1985-01-01

    The FASTBUS Segment Interconnect (SI) provides a communication path between two otherwise independent, asynchronous bus segments. In particular, the Segment Interconnect links a backplane crate segment to a cable segment. All standard FASTBUS address and data transactions can be passed through the SI or any number of SIs and segments in a path. Thus systems of arbitrary connection complexity can be formed, allowing simultaneous independent processing, yet still permitting devices associated with one segment to be accessed from others. The model S1 Segment Interconnect and the Cable Segment Ancillary Logic covered in this report comply with all the mandatory features stated in the FASTBUS specification document DOE/ER-0189. A block diagram of the SI is shown

  11. The SRS analytical laboratories strategic plan

    International Nuclear Information System (INIS)

    Hiland, D.E.

    1993-01-01

    There is an acute shortage of Savannah River Site (SRS) analytical laboratory capacity to support key Department of Energy (DOE) environmental restoration and waste management (EM) programs while making the transition from traditional defense program (DP) missions as a result of the cessation of the Cold War. This motivated Westinghouse Savannah River Company (WSRC) to develop an "Analytical Laboratories Strategic Plan" (ALSP) in order to provide appropriate input to SRS operating plans and justification for proposed analytical laboratory projects. The methodology used to develop this plan is applicable to all types of strategic planning.

  12. Human body segmentation via data-driven graph cut.

    Science.gov (United States)

    Li, Shifeng; Lu, Huchuan; Shao, Xingqing

    2014-11-01

    Human body segmentation is a challenging and important problem in computer vision. Existing methods usually entail a time-consuming training phase for prior knowledge learning, with complex shape matching for body segmentation. In this paper, we propose a data-driven method that integrates top-down body pose information and bottom-up low-level visual cues for segmenting humans in static images within the graph cut framework. The key idea of our approach is first to exploit human kinematics to search for body part candidates via dynamic programming, providing high-level evidence; body part classifiers then supply bottom-up cues on the distribution of the human body as low-level evidence. All the evidence collected from the top-down and bottom-up procedures is integrated in a graph cut framework for human body segmentation. Qualitative and quantitative experimental results demonstrate the merits of the proposed method in segmenting human bodies with arbitrary poses from cluttered backgrounds.

  13. Modeling of market segmentation for new IT product development

    Science.gov (United States)

    Nasiopoulos, Dimitrios K.; Sakas, Damianos P.; Vlachos, D. S.; Mavrogianni, Amanda

    2015-02-01

    Businesses from all Information Technology sectors use market segmentation[1] in their product development[2] and strategic planning[3]. Many studies have concluded that market segmentation is the norm of modern marketing. With the rapid development of technology, customer needs are becoming increasingly diverse; these needs can no longer be satisfied by a single-rule, mass marketing approach. IT businesses can cope with this diversity by pooling customers[4] with similar requirements, buying behavior, and purchasing power into segments. The best choices about which segments are the most appropriate to serve can then be made, making the best use of finite resources. Despite the attention segmentation receives and the resources invested in it, growing evidence suggests that businesses have problems operationalizing it[5]. These problems take various forms; for example, the assumption that the segmentation process necessarily results in homogeneous groups of customers, for whom appropriate marketing programs and procedures can be developed, may fail in practice. This raises concerns about what causes segmentation failure and how it might be overcome. To prevent such failure, we created a dynamic simulation model of market segmentation[6] based on the basic factors leading to this segmentation.

  14. IBM Watson Analytics: Automating Visualization, Descriptive, and Predictive Statistics.

    Science.gov (United States)

    Hoyt, Robert Eugene; Snider, Dallas; Thompson, Carla; Mantravadi, Sarita

    2016-10-11

    We live in an era of explosive data generation that will continue to grow and involve all industries. One of the results of this explosion is the need for newer and more efficient data analytics procedures. Traditionally, data analytics required a substantial background in statistics and computer science. In 2015, International Business Machines Corporation (IBM) released the IBM Watson Analytics (IBMWA) software that delivered advanced statistical procedures based on the Statistical Package for the Social Sciences (SPSS). The latest entry of Watson Analytics into the field of analytical software products provides users with enhanced functions that are not available in many existing programs. For example, Watson Analytics automatically analyzes datasets, examines data quality, and determines the optimal statistical approach. Users can request exploratory, predictive, and visual analytics. Using natural language processing (NLP), users are able to submit additional questions for analyses in a quick response format. This analytical package is available free to academic institutions (faculty and students) that plan to use the tools for noncommercial purposes. The objective of this paper is to report the features of IBMWA and discuss how this software subjectively and objectively compares to other data mining programs. The salient features of the IBMWA program were examined and compared with other common analytical platforms, using validated health datasets. Using a validated dataset, IBMWA delivered similar predictions compared with several commercial and open source data mining software applications. The visual analytics generated by IBMWA were similar to results from programs such as Microsoft Excel and Tableau Software. In addition, assistance with data preprocessing and data exploration was an inherent component of the IBMWA application. Sensitivity and specificity were not included in the IBMWA predictive analytics results, nor were odds ratios, confidence intervals, or a confusion matrix.

  15. Analytical Chemistry Laboratory progress report for FY 1991

    Energy Technology Data Exchange (ETDEWEB)

    Green, D.W.; Heinrich, R.R.; Graczyk, D.G.; Lindahl, P.C.; Boparai, A.S.

    1991-12-01

    The purpose of this report is to summarize the activities of the Analytical Chemistry Laboratory (ACL) at Argonne National Laboratory (ANL) for Fiscal Year 1991 (October 1990 through September 1991). This is the eighth annual report for the ACL. The Analytical Chemistry Laboratory is a full-cost-recovery service center, with the primary mission of providing a broad range of analytical chemistry support services to the scientific and engineering programs at ANL. In addition, the ACL conducts a research program in analytical chemistry, works on instrumental and methods development, and provides analytical services for governmental, educational, and industrial organizations. The ACL handles a wide range of analytical problems, from routine standard analyses to unique problems that require significant development of methods and techniques.

  16. Design of segmented thermoelectric generator based on cost-effective and light-weight thermoelectric alloys

    International Nuclear Information System (INIS)

    Kim, Hee Seok; Kikuchi, Keiko; Itoh, Takashi; Iida, Tsutomu; Taya, Minoru

    2014-01-01

    Highlights: • Segmented thermoelectric (TE) module operating at 500 °C for combustion engine systems. • Si-based light-weight TE generator increases the specific power density [W/kg]. • Study of contact resistance at the bonding interfaces to maximize output power. • Accurate agreement of the theoretical predictions with experimental results. - Abstract: A segmented thermoelectric (TE) generator was designed with higher-temperature segments composed of n-type Mg2Si and p-type higher manganese silicide (HMS) and lower-temperature segments composed of n- and p-type Bi–Te based compounds. Since magnesium- and silicon-based TE alloys have low densities, they produce a TE module with a high specific power density that is suitable for airborne applications. A two-pair segmented π-shaped TE generator was assembled with low-contact-resistance materials across the bonding interfaces. The peak specific power density of this generator was measured at 42.9 W/kg under a 498 °C temperature difference, in good agreement with analytical predictions.

  17. New analytical methodology to reach the actinide determination accuracy (±2%) required by the OSMOSE program

    Energy Technology Data Exchange (ETDEWEB)

    Boyer-Deslys, V.; Combaluzier, T.; Dalier, V.; Martin, J.C.; Viallesoubranne, C. [DRCP/SE2A/LAMM, CEA/VALRHO - Marcoule, BP 17171, 30207 Bagnols-sur-Ceze (France); Crozet, M. [LEHA, CEA/VALRHO - Marcoule, BP 17171, 30207 Bagnols-sur-Ceze (France)

    2008-07-01

    This article describes the analytical procedure optimized by LAMM (Laboratory for Analysis and Materials Methodology) in order to characterize the actinide-doped pellets used in the Osmose (Oscillation in Minerve of isotopes in eupraxis spectra) program (developed for transmutation reactor physics). Osmose aims at providing precise experimental data (absorption cross sections) for heavy nuclides (atomic mass from 232 to 245). This procedure requires the use of the analytical equipment and expertise of the LAMM: TIMS (Thermal Ionization Mass Spectrometer), ICP (Inductively Coupled Plasma) QMS (Quadrupole Mass Spectrometer), SFMS (Sector Field Mass Spectrometer), AES (Atomic Emission Spectrometer), alpha spectrometry and photo-gravimetric analysis. These techniques have met all the specification requirements: extended uncertainties (k=2) below ±2% on the uranium and dopant concentrations, the impurity concentration and the americium-241 concentration.

  18. Tank 241-TX-104, cores 230 and 231 analytical results for the final report

    International Nuclear Information System (INIS)

    Diaz, L.A.

    1998-01-01

    This document is the analytical laboratory report for tank 241-TX-104 push mode core segments collected between February 18, 1998 and February 23, 1998. The segments were subsampled and analyzed in accordance with the Tank 241-TX-104 Push Mode Core Sampling and Analysis Plan (TSAP) (McCain, 1997), the Data Quality Objective to Support Resolution of the Organic Complexant Safety Issue (Organic DQO) (Turner, et al., 1995) and the Safety Screening Data Quality Objective (DQO) (Dukelow, et al., 1995). The analytical results are included in the data summary table. None of the samples submitted for Differential Scanning Calorimetry (DSC) and Total Alpha Activity (AT) exceeded the notification limits stated in the TSAP. The statistical results of the 95% confidence interval on the mean calculations are provided by the Tank Waste Remediation Systems Technical Basis Group in accordance with the Memorandum of Understanding (Schreiber, 1997) and are not considered in this report. Appearance and sample handling: Attachment 1 is a cross-reference relating the tank farm identification numbers to the 222-S Laboratory LabCore/LIMS sample numbers; the subsamples generated in the laboratory for analyses are identified in these diagrams with their sources shown. Core 230: Three push mode core segments were removed from tank 241-TX-104 riser 9A on February 18, 1998, and received by the 222-S Laboratory on February 19, 1998. Two segments were expected for this core; however, due to poor sample recovery, an additional segment was taken and identified as 2A. Core 231: Four push mode core segments were removed from tank 241-TX-104 riser 13A between February 19, 1998 and February 23, 1998, and received by the 222-S Laboratory on February 24, 1998. Two segments were expected for this core; however, due to poor sample recovery, additional segments were taken and identified as 2A and 2B. The TSAP states the core samples should be transported to the laboratory within three...

  19. Segmented block copolymers with monodisperse aramide end-segments

    NARCIS (Netherlands)

    Araichimani, A.; Gaymans, R.J.

    2008-01-01

    Segmented block copolymers were synthesized using monodisperse diaramide (TT) hard segments and PTMO soft segments with a molecular weight of 2900 g·mol⁻¹. The aramide:PTMO segment ratio was increased from 1:1 to 2:1, thereby changing the structure from a high molecular weight multi-block...

  20. Efficient Algorithms for Segmentation of Item-Set Time Series

    Science.gov (United States)

    Chundi, Parvathi; Rosenkrantz, Daniel J.

    We propose a special type of time series, which we call an item-set time series, to facilitate the temporal analysis of software version histories, email logs, stock market data, etc. In an item-set time series, each observed data value is a set of discrete items. We formalize the concept of an item-set time series and present efficient algorithms for segmenting a given item-set time series. Segmentation of a time series partitions the time series into a sequence of segments where each segment is constructed by combining consecutive time points of the time series. Each segment is associated with an item set that is computed from the item sets of the time points in that segment, using a function which we call a measure function. We then define a concept called the segment difference, which measures the difference between the item set of a segment and the item sets of the time points in that segment. The segment difference values are required to construct an optimal segmentation of the time series. We describe novel and efficient algorithms to compute segment difference values for each of the measure functions described in the paper. We outline a dynamic programming based scheme to construct an optimal segmentation of the given item-set time series. We use the item-set time series segmentation techniques to analyze the temporal content of three different data sets—Enron email, stock market data, and a synthetic data set. The experimental results show that an optimal segmentation of item-set time series data captures much more temporal content than a segmentation constructed based on the number of time points in each segment, without examining the item set data at the time points, and can be used to analyze different types of temporal data.
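
    The optimal segmentation outlined above follows the classic dynamic programming recurrence: the best cost of covering the first j time points with s segments extends the best cost of covering some shorter prefix with s-1 segments. The C++ sketch below shows that recurrence; the segment-difference computation is left as a user-supplied callable, since the measure functions and their efficient evaluation are the paper's own contribution.

        #include <limits>
        #include <vector>

        // Split a length-n series into k contiguous segments minimizing the sum
        // of per-segment costs. segCost(i, j) must return the segment difference
        // of combining time points i..j inclusive (user-supplied).
        template <class Cost>
        std::vector<int> optimalSegmentation(int n, int k, Cost segCost) {
            const double INF = std::numeric_limits<double>::infinity();
            // D[s][j]: best cost of covering points 0..j-1 with s segments.
            std::vector<std::vector<double>> D(k + 1, std::vector<double>(n + 1, INF));
            std::vector<std::vector<int>> cut(k + 1, std::vector<int>(n + 1, 0));
            D[0][0] = 0.0;
            for (int s = 1; s <= k; ++s)
                for (int j = s; j <= n; ++j)
                    for (int i = s - 1; i < j; ++i) {    // last segment is i..j-1
                        const double cand = D[s - 1][i] + segCost(i, j - 1);
                        if (cand < D[s][j]) { D[s][j] = cand; cut[s][j] = i; }
                    }
            std::vector<int> starts(k);                  // start index of each segment
            for (int s = k, j = n; s >= 1; --s) { j = cut[s][j]; starts[s - 1] = j; }
            return starts;
        }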

  1. Segmentation of consumer's markets and evaluation of market's segments

    OpenAIRE

    ŠVECOVÁ, Iveta

    2013-01-01

    The goal of this bachelor thesis was to explain a possible segmentation of consumer markets for a chosen company and to present a suitable product offering matched to the needs of the selected segments. The work is divided into a theoretical and a practical part. The first part describes marketing, segmentation, segmentation of consumer markets, the consumer market, market segments, and other terms. The second part describes the evaluation of a questionnaire survey and the discovery of market segment...

  2. The influence of interactions between market segmentation strategy and competition on organizational performance. A simulation study.

    OpenAIRE

    Dolnicar, Sara; Freitag, Roman

    2003-01-01

    A computer simulation study is conducted to explore the interaction of alternative segmentation strategies with the competitiveness of the market environment, a goal that can neither be tackled by purely analytic approaches nor studied empirically, since sufficient and undistorted real market data are not available. The fundamental idea of the simulation is to increase competition in the artificial marketplace and to study the influence of segmentation strategy and varying market con...

  3. Generalized Analytical Program of Thyristor Phase Control Circuit with Series and Parallel Resonance Load

    OpenAIRE

    Nakanishi, Sen-ichiro; Ishida, Hideaki; Himei, Toyoji

    1981-01-01

    A systematic analytical method is required for the ac phase control circuit based on an inverse-parallel thyristor pair with a series and parallel L-C resonant load, because the phase control action causes abnormal and interesting phenomena, such as an extreme increase of voltage and current, a unique increase and decrease of the contained higher harmonics, and a wide variation of the power factor. In this paper, the program for the analysis of the thyristor phase control circuit with...

  4. Strategies for regular segmented reductions on GPU

    DEFF Research Database (Denmark)

    Larsen, Rasmus Wriedt; Henriksen, Troels

    2017-01-01

    We present and evaluate an implementation technique for regular segmented reductions on GPUs. Existing techniques tend to be either consistent in performance but relatively inefficient in absolute terms, or optimised for specific workloads and thereby exhibiting bad performance for certain inputs. Although our motivation is in the context of the Futhark compiler, the implementation technique is applicable to any library or language that has a need for segmented reductions. We evaluate the technique on four microbenchmarks, two of which we also compare to implementations in the CUB library for GPU programming, as well as on two...
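
    For reference, the semantics that any regular segmented reduction must implement can be pinned down in a few lines of sequential C++; GPU implementations such as the one evaluated here differ only in how segments are mapped onto blocks, warps, and partial results. Summation as the operator and the equal-length-segments layout are assumptions for illustration.

        #include <cstddef>
        #include <cstdio>
        #include <vector>

        // Reduce each length-segLen segment of vals independently.
        // vals.size() is assumed to be a multiple of segLen.
        std::vector<int> segmentedSum(const std::vector<int>& vals, int segLen) {
            std::vector<int> out(vals.size() / segLen, 0);
            for (std::size_t i = 0; i < vals.size(); ++i)
                out[i / segLen] += vals[i];
            return out;
        }

        int main() {
            const std::vector<int> vals = {1, 2, 3, 4, 5, 6}; // two segments of three
            for (int s : segmentedSum(vals, 3))
                std::printf("%d ", s);                        // prints: 6 15
            std::printf("\n");
            return 0;
        }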

  5. General Staining and Segmentation Procedures for High Content Imaging and Analysis.

    Science.gov (United States)

    Chambers, Kevin M; Mandavilli, Bhaskar S; Dolman, Nick J; Janes, Michael S

    2018-01-01

    Automated quantitative fluorescence microscopy, also known as high content imaging (HCI), is a rapidly growing analytical approach in cell biology. Because automated image analysis relies heavily on robust demarcation of cells and subcellular regions, reliable methods for labeling cells is a critical component of the HCI workflow. Labeling of cells for image segmentation is typically performed with fluorescent probes that bind DNA for nuclear-based cell demarcation or with those which react with proteins for image analysis based on whole cell staining. These reagents, along with instrument and software settings, play an important role in the successful segmentation of cells in a population for automated and quantitative image analysis. In this chapter, we describe standard procedures for labeling and image segmentation in both live and fixed cell samples. The chapter will also provide troubleshooting guidelines for some of the common problems associated with these aspects of HCI.

  6. [Segmentation of whole body bone SPECT image based on BP neural network].

    Science.gov (United States)

    Zhu, Chunmei; Tian, Lianfang; Chen, Ping; He, Yuanlie; Wang, Lifei; Ye, Guangchun; Mao, Zongyuan

    2007-10-01

    In this paper, a BP neural network is used to segment whole-body bone SPECT images so that lesion areas can be recognized automatically. Because of the uncertain characteristics of SPECT images, it is hard to achieve a good segmentation result if only the BP neural network is employed. Therefore, the segmentation process is divided into three steps: first, an optimal gray threshold segmentation method is employed for preprocessing; then the BP neural network is used to roughly identify the lesions; and finally a template matching method and a symmetry-removing program are adopted to delete wrongly recognized areas.
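
    The "optimal gray threshold" preprocessing step can be illustrated with Otsu-style threshold selection, which picks the gray level maximizing the between-class variance of the image histogram; whether this is the exact criterion used in the paper is an assumption. A minimal C++ sketch:

        #include <array>
        #include <cstdint>
        #include <vector>

        // Otsu's method: return the threshold t that maximizes the between-class
        // variance w0*w1*(mu0 - mu1)^2 of the gray-level histogram.
        // pixels is assumed non-empty, with 8-bit gray values.
        int otsuThreshold(const std::vector<std::uint8_t>& pixels) {
            std::array<double, 256> hist{};
            for (auto p : pixels) hist[p] += 1.0;
            for (auto& h : hist) h /= (double)pixels.size();  // probabilities

            double muTotal = 0.0;
            for (int g = 0; g < 256; ++g) muTotal += g * hist[g];

            double w0 = 0.0, mu0sum = 0.0, bestVar = -1.0;
            int bestT = 0;
            for (int t = 0; t < 256; ++t) {
                w0 += hist[t];                   // class 0: gray levels <= t
                mu0sum += t * hist[t];
                const double w1 = 1.0 - w0;
                if (w0 <= 0.0 || w1 <= 0.0) continue;
                const double mu0 = mu0sum / w0;
                const double mu1 = (muTotal - mu0sum) / w1;
                const double between = w0 * w1 * (mu0 - mu1) * (mu0 - mu1);
                if (between > bestVar) { bestVar = between; bestT = t; }
            }
            return bestT;  // gray levels above bestT mark candidate lesions (assumed polarity)
        }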

  7. Implementation of evidence-based home visiting programs aimed at reducing child maltreatment: A meta-analytic review.

    Science.gov (United States)

    Casillas, Katherine L; Fauchier, Angèle; Derkash, Bridget T; Garrido, Edward F

    2016-03-01

    In recent years there has been an increase in the popularity of home visitation programs as a means of addressing risk factors for child maltreatment. The evidence supporting the effectiveness of these programs from several meta-analyses, however, is mixed. One potential explanation for this inconsistency explored in the current study involves the manner in which these programs were implemented. In the current study we reviewed 156 studies associated with 9 different home visitation program models targeted to caregivers of children between the ages of 0 and 5. Meta-analytic techniques were used to determine the impact of 18 implementation factors (e.g., staff selection, training, supervision, fidelity monitoring, etc.) and four study characteristics (publication type, target population, study design, comparison group) in predicting program outcomes. Results from analyses revealed that several implementation factors, including training, supervision, and fidelity monitoring, had a significant effect on program outcomes, particularly child maltreatment outcomes. Study characteristics, including the program's target population and the comparison group employed, also had a significant effect on program outcomes. Implications of the study's results for those interested in implementing home visitation programs are discussed. A careful consideration and monitoring of program implementation is advised as a means of achieving optimal study results. Copyright © 2015 Elsevier Ltd. All rights reserved.

  8. Analytical Chemistry Laboratory Progress Report for FY 1994

    Energy Technology Data Exchange (ETDEWEB)

    Green, D.W.; Boparai, A.S.; Bowers, D.L. [and others]

    1994-12-01

    The purpose of this report is to summarize the activities of the Analytical Chemistry Laboratory (ACL) at Argonne National Laboratory (ANL) for Fiscal Year (FY) 1994 (October 1993 through September 1994). This annual report is the eleventh for the ACL and describes continuing effort on projects, work on new projects, and contributions of the ACL staff to various programs at ANL. The Analytical Chemistry Laboratory is a full-cost-recovery service center, with the primary mission of providing a broad range of analytical chemistry support services to the scientific and engineering programs at ANL. The ACL also has a research program in analytical chemistry, conducts instrumental and methods development, and provides analytical services for governmental, educational, and industrial organizations. The ACL handles a wide range of analytical problems. Some routine or standard analyses are done, but it is common for the Argonne programs to generate unique problems that require significant development of methods and adaptation of techniques to obtain useful analytical data. The ACL has four technical groups -- Chemical Analysis, Instrumental Analysis, Organic Analysis, and Environmental Analysis -- which together include about 45 technical staff members. Talents and interests of staff members cross the group lines, as do many projects within the ACL. The Chemical Analysis Group uses wet-chemical and instrumental methods for elemental, compositional, and isotopic determinations in solid, liquid, and gaseous samples and provides specialized analytical services. Major instruments in this group include an ion chromatograph (IC), an inductively coupled plasma/atomic emission spectrometer (ICP/AES), spectrophotometers, mass spectrometers (including gas-analysis and thermal-ionization mass spectrometers), emission spectrographs, autotitrators, sulfur and carbon determinators, and a kinetic phosphorescence uranium analyzer.

  9. SIDES - Segment Interconnect Diagnostic Expert System

    International Nuclear Information System (INIS)

    Booth, A.W.; Forster, R.; Gustafsson, L.; Ho, N.

    1989-01-01

    It is well known that the FASTBUS Segment Interconnect (SI) provides a communication path between two otherwise independent, asynchronous bus segments. The SI is probably the most important module in any FASTBUS data acquisition network, since its failure to function can cause whole segments of the network to be inaccessible and sometimes inoperable. This paper describes SIDES, an intelligent program designed to diagnose SIs both in situ, as they operate in a data acquisition network, and in the laboratory in an acceptance/repair environment. The paper discusses important issues such as knowledge acquisition: extracting knowledge from human experts and other knowledge sources. SIDES can benefit high energy physics experiments, where SI problems can be diagnosed and solved more quickly. Equipment pool technicians can also benefit from SIDES, first because it decreases the number of SIs erroneously turned in for repair, and second because SIDES acts as an intelligent assistant to the technician in the diagnosis and repair process

  10. Supporting Imagers' VOICE: A National Training Program in Comparative Effectiveness Research and Big Data Analytics.

    Science.gov (United States)

    Kang, Stella K; Rawson, James V; Recht, Michael P

    2017-12-05

    Given methodologic training, more imagers can contribute to the evidence base on improved health outcomes and value in diagnostic imaging. The Value of Imaging Through Comparative Effectiveness Research Program was developed to provide hands-on, practical training in five core areas for comparative effectiveness and big biomedical data research: decision analysis, cost-effectiveness analysis, evidence synthesis, big data principles, and applications of big data analytics. The program's mixed format consists of web-based modules for asynchronous learning as well as in-person sessions for practical skills and group discussion. Seven diagnostic radiology subspecialties and cardiology are represented in the first group of program participants, showing the collective potential for greater depth of comparative effectiveness research in the imaging community. Copyright © 2017 American College of Radiology. Published by Elsevier Inc. All rights reserved.

  11. Industrial Guidelines for Undertaking a Hard-Core Employment Program: An Analytic Case Study of the Experience of an Urban Industrial Organization.

    Science.gov (United States)

    Feifer, Irwin; And Others

    Based on an analytically evaluative case study of a New York City furniture department store's experiences with a Manpower Administration contract, this report deals with the development and progress of the program as analyzed by one investigator through interviews with almost all of the participants in the program. As a result of the study,…

  12. A network analytical approach to the study of labour market mobility

    DEFF Research Database (Denmark)

    Toubøl, Jonas; Larsen, Anton Grau; Jensen, Carsten Strøby

    The aim of this paper is to present a new network analytical method for the analysis of social mobility between categories like occupations or industries. The method consists of two core components: the algorithm MONECA (Mobility Network Clustering Algorithm), and the intensity measure of Relative Risk (RR), which enable us to identify clusters of inter-mobile categories. We apply the method to data on labour market mobility in Denmark 2000-2007 and demonstrate how this new method can overcome some long-standing obstacles to the advance of labour market segmentation theory: instead of the typical theory-driven definition of labour market segments, the use of social network analysis enables a data-driven definition of the segments based on the direct observation of mobility between job positions, which reveals a number of new findings.
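
    A minimal sketch of the relative-risk component is given below; the cutoff value and the pairwise selection step are illustrative simplifications of the full MONECA clustering algorithm, which the abstract does not specify in detail.

    # RR[i, j] compares the observed flow of persons between job categories
    # i and j with the flow expected under independence; pairs with high
    # mutual RR are candidates for an inter-mobile cluster.
    import numpy as np

    def relative_risk(flows):
        """flows[i, j] = number of persons moving from category i to j."""
        total = flows.sum()
        expected = np.outer(flows.sum(axis=1), flows.sum(axis=0)) / total
        return flows / expected

    def inter_mobile_pairs(flows, cutoff=2.0):
        """Category pairs whose mobility exceeds `cutoff` times expectation."""
        rr = relative_risk(flows)
        n = rr.shape[0]
        return [(i, j) for i in range(n) for j in range(i + 1, n)
                if rr[i, j] > cutoff and rr[j, i] > cutoff]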

  13. Commercial Midstream Energy Efficiency Incentive Programs: Guidelines for Future Program Design, Implementation, and Evaluation

    Energy Technology Data Exchange (ETDEWEB)

    Milostan, Catharina [Argonne National Lab. (ANL), Argonne, IL (United States); Levin, Todd [Argonne National Lab. (ANL), Argonne, IL (United States); Muehleisen, Ralph T. [Argonne National Lab. (ANL), Argonne, IL (United States); Guzowski, Leah Bellah B. [Argonne National Lab. (ANL), Argonne, IL (United States)

    2017-01-01

    Many electric utilities operate energy efficiency incentive programs that encourage increased dissemination and use of energy-efficient (EE) products in their service territories. The programs can be segmented into three broad categories—downstream incentive programs target product end users, midstream programs target product distributors, and upstream programs target product manufacturers. Traditional downstream programs have had difficulty engaging Small Business/Small Portfolio (SBSP) audiences, and an opportunity exists to expand Commercial Midstream Incentive Programs (CMIPs) to reach this market segment instead.

  14. Market segmentation and positioning: matching creativity with fiscal responsibility.

    Science.gov (United States)

    Kiener, M E

    1989-01-01

    This paper describes an approach to continuing professional education (CPE) program development in nursing within a university environment that utilizes the concepts of market segmentation and positioning. Use of these strategies enables the academic CPE enterprise to move beyond traditional needs assessment practices to create more successful and better-managed CPE programs.

  15. Determination Public Acceptance Segmentation for Nuclear Power Program Interest

    International Nuclear Information System (INIS)

    Syirrazie Che Soh; Aini Wahidah Abdul Wahab

    2012-01-01

    This paper focuses on segmentation among interdisciplinary groups of the public. This discussion is a preliminary stage to ensure that the right initiative strategies are implemented to gain public interest in, and acceptance of, developing a nuclear power plant. The strategies applied are based on the different interests of the different groups of the public. These strategies may increase the level of public acceptance of developing a nuclear power plant. (author)

  16. Evaluation of soft segment modeling on a context independent phoneme classification system

    International Nuclear Information System (INIS)

    Razzazi, F.; Sayadiyan, A.

    2007-01-01

    The geometric distribution of state durations is one of the main performance-limiting assumptions of hidden Markov modeling of speech signals. Stochastic segment models in general, and segmental HMMs in particular, overcome this deficiency, partly at the cost of more complexity in both the training and recognition phases. Beyond this assumption, the gradual temporal change of speech statistics is also not modeled in an HMM. In this paper, a new duration modeling approach is presented. The main idea of the model is to consider the effect of adjacent segments on the probability density function estimation and evaluation of each acoustic segment. This idea not only makes the model robust against segmentation errors, but also models the gradual change from one segment to the next with a minimum set of parameters. The proposed idea is analytically formulated and tested on a TIMIT-based context-independent phoneme classification system. During the test procedure, phoneme classification for the different phoneme classes was performed by applying the various proposed recognition algorithms. The system was optimized and the results were compared with a continuous density hidden Markov model (CDHMM) of similar computational complexity. The results show an 8-10% improvement in phoneme recognition rate in comparison with the standard continuous density hidden Markov model. This indicates improved compatibility of the proposed model with the nature of speech. (author)
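
    The duration assumption criticized above can be made concrete with a small sketch: an HMM self-loop with probability a implies a geometric duration law, while a segment model may attach a peaked law instead. A Poisson is used here purely for illustration; the paper's own duration model differs.

    # Geometric duration implied by an HMM self-loop vs. an explicit,
    # peaked duration law attached to a segment model (illustrative).
    import numpy as np
    from scipy.stats import poisson

    def geometric_duration(a, d):
        """P(stay exactly d frames) under an HMM self-loop probability a."""
        return (1.0 - a) * a ** (d - 1)

    d = np.arange(1, 30)
    p_hmm = geometric_duration(0.9, d)    # monotonically decreasing in d
    p_segment = poisson.pmf(d, mu=10)     # peaked at a typical duration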

  17. Analytical Chemistry Laboratory progress report for FY 1985

    Energy Technology Data Exchange (ETDEWEB)

    Green, D.W.; Heinrich, R.R.; Jensen, K.J.

    1985-12-01

    The Analytical Chemistry Laboratory is a full-cost-recovery service center, with the primary mission of providing a broad range of technical support services to the scientific and engineering programs at ANL. In addition, ACL conducts a research program in analytical chemistry, works on instrumental and methods development, and provides analytical services for governmental, educational, and industrial organizations. The ACL handles a wide range of analytical problems, from routine standard analyses to unique problems that require significant development of methods and techniques. The purpose of this report is to summarize the technical and administrative activities of the Analytical Chemistry Laboratory (ACL) at Argonne National Laboratory (ANL) for Fiscal Year 1985 (October 1984 through September 1985). This is the second annual report for the ACL. 4 figs., 1 tab.

  18. Analytical Chemistry Laboratory progress report for FY 1985

    International Nuclear Information System (INIS)

    Green, D.W.; Heinrich, R.R.; Jensen, K.J.

    1985-12-01

    The Analytical Chemistry Laboratory is a full-cost-recovery service center, with the primary mission of providing a broad range of technical support services to the scientific and engineering programs at ANL. In addition, ACL conducts a research program in analytical chemistry, works on instrumental and methods development, and provides analytical services for governmental, educational, and industrial organizations. The ACL handles a wide range of analytical problems, from routine standard analyses to unique problems that require significant development of methods and techniques. The purpose of this report is to summarize the technical and administrative activities of the Analytical Chemistry Laboratory (ACL) at Argonne National Laboratory (ANL) for Fiscal Year 1985 (October 1984 through September 1985). This is the second annual report for the ACL. 4 figs., 1 tab

  19. The PLUS family: A set of computer programs to evaluate analytical solutions of the diffusion equation and thermoelasticity

    International Nuclear Information System (INIS)

    Montan, D.N.

    1987-02-01

    This report is intended to describe, document and provide instructions for the use of new versions of a set of computer programs commonly referred to as the PLUS family. These programs were originally designed to numerically evaluate simple analytical solutions of the diffusion equation. The new versions include linear thermo-elastic effects from thermal fields calculated by the diffusion equation. After the older versions of the PLUS family were documented a year ago, it was realized that the techniques employed in the programs were well suited to the addition of linear thermo-elastic phenomena. This has been implemented and this report describes the additions. 3 refs., 14 figs
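
    As an illustration of the kind of closed-form solution such programs evaluate, the sketch below computes the temperature field of an instantaneous point heat source in an infinite homogeneous medium; the symbols and values are generic, not taken from the report, and the thermoelastic extension is omitted.

    # Analytical diffusion kernel: instantaneous point source of strength Q
    # (in units of energy / volumetric heat capacity) with diffusivity kappa.
    import numpy as np

    def point_source_temperature(r, t, Q=1.0, kappa=1.0e-6):
        """T(r, t) = Q / (4*pi*kappa*t)^(3/2) * exp(-r^2 / (4*kappa*t))."""
        r = np.asarray(r, float)
        return Q / (4.0 * np.pi * kappa * t) ** 1.5 * np.exp(-r**2 / (4.0 * kappa * t))

    # Superposing such kernels in space and time yields fields for more
    # complex source geometries, the usual strategy in codes of this kind.
    print(point_source_temperature(r=0.5, t=3.15e7))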

  20. Application of the analytic tree technique in evaluating the effectiveness of radiological protection programs

    International Nuclear Information System (INIS)

    Perez Gonzalez, F.; Perez Velazquez, R.S.; Fornet Rodriguez, O.; Mustelier Hechevarria, A.; Miller Clemente, A.

    1998-01-01

    In this work we apply the IAEA recommendations on using the analytic tree as an instrument for evaluating the effectiveness of occupational radiological protection programs. We describe how this technique has been assimilated and converted into a daily working instrument in the process of evaluating safety conditions at institutions that apply nuclear techniques, with a view to their authorization by the regulatory body

  1. Tank 241-T-203, core 190 analytical results for the final report

    International Nuclear Information System (INIS)

    Steen, F.H.

    1997-01-01

    This document is the analytical laboratory report for tank 241-T-203 push mode core segments collected on April 17, 1997 and April 18, 1997. The segments were subsampled and analyzed in accordance with the Tank 241-T-203 Push Mode Core Sampling and Analysis Plan (TSAP) (Schreiber, 1997a), the Safety Screening Data Quality Objective (DQO) (Dukelow, et al., 1995) and the Letter of Instruction for Core Sample Analysis of Tanks 241-T-201, 241-T-202, 241-T-203, and 241-T-204 (LOI) (Hall, 1997). The analytical results are included in the data summary report (Table 1). None of the samples submitted for Differential Scanning Calorimetry (DSC), Total Alpha Activity (AT) and Total Organic Carbon (TOC) exceeded notification limits as stated in the TSAP (Schreiber, 1997a). The statistical results of the 95% confidence interval on the mean calculations are provided by the Tank Waste Remediation Systems (TWRS) Technical Basis Group in accordance with the Memorandum of Understanding (Schreiber, 1997b) and not considered in this report

  2. An analytic thomism?

    Directory of Open Access Journals (Sweden)

    Daniel Alejandro Pérez Chamorro.

    2012-12-01

    For 50 years, philosophers of the Anglo-Saxon analytic tradition (E. Anscombe, P. Geach, A. Kenny, P. Foot) have tried to follow the school of Thomas Aquinas, which they use as a source to surpass Cartesian epistemology and to develop virtue ethics. Recently, J. Haldane has inaugurated a program of "analytical thomism" whose main result to date has been his theory of mind/world identity. Nevertheless, none of Thomas's admirers has yet found the means of assimilating his metaphysics of being.

  3. Increasing Enrollment by Better Serving Your Institution's Target Audiences through Benefit Segmentation.

    Science.gov (United States)

    Goodnow, Betsy

    The marketing technique of benefit segmentation may be effective in increasing enrollment in adult educational programs, according to a study at College of DuPage, Glen Ellyn, Illinois. The study was conducted to test applicability of benefit segmentation to enrollment generation. The measuring instrument used in this study--the course improvement…

  4. Fourier decomposition of segmented magnets with radial magnetization in surface-mounted PM machines

    Science.gov (United States)

    Tiang, Tow Leong; Ishak, Dahaman; Lim, Chee Peng

    2017-11-01

    This paper presents a generic field model of the radial magnetization (RM) pattern produced by multiple segmented magnets per rotor pole in surface-mounted permanent magnet (PM) machines. The magnetization vectors for either odd or even numbers of magnet blocks per pole are described. Fourier decomposition is first employed to derive the field model, which is then integrated with the exact 2D analytical subdomain method to predict the magnetic field distributions and other global motor quantities. For assessment purposes, a 12-slot/8-pole surface-mounted PM motor with two segmented magnets per pole is investigated using the proposed field model. The electromagnetic performance of the PM machine, including the magnetic field distributions, airgap flux density, phase back-EMF, cogging torque, and output torque under both open-circuit and on-load operating conditions, is predicted with the proposed magnet field model. The analytical results are evaluated and compared with those obtained from both 2D and 3D finite element analyses (FEA), where excellent agreement has been achieved.
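
    The Fourier-decomposition step can be sketched as follows: the radial magnetization over one rotor revolution is a piecewise-constant function of angle, and its harmonics follow from numerical quadrature. The segment layout below is illustrative, not the 12-slot/8-pole machine of the paper.

    # Harmonics of a piecewise-constant radial magnetization built from
    # several magnet blocks per pole (angles and magnitudes illustrative).
    import numpy as np

    def radial_magnetization(theta, segments):
        """segments: list of (start_angle, end_angle, magnitude) tuples."""
        m = np.zeros_like(theta)
        for a0, a1, mag in segments:
            m[(theta >= a0) & (theta < a1)] = mag
        return m

    theta = np.linspace(0.0, 2.0 * np.pi, 4096, endpoint=False)
    segments = [(0.2, 1.2, 1.0), (1.4, 2.4, 1.0),            # N pole, two blocks
                (np.pi + 0.2, np.pi + 1.2, -1.0),
                (np.pi + 1.4, np.pi + 2.4, -1.0)]            # S pole, two blocks
    m = radial_magnetization(theta, segments)

    # Complex Fourier coefficients c_n = (1/2pi) * integral M(theta) e^{-jn theta}
    n = np.arange(1, 16)
    c = np.array([np.trapz(m * np.exp(-1j * k * theta), theta) for k in n]) / (2 * np.pi)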

  5. Chain segmentation for the Monte Carlo solution of particle transport problems

    International Nuclear Information System (INIS)

    Ragheb, M.M.H.

    1984-01-01

    A Monte Carlo approach is proposed in which the random walk chains generated in particle transport simulations are segmented. Forward and adjoint-mode estimators are then used, in conjunction with the first-event source density on the segmented chains, to obtain multiple estimates of the individual terms of the Neumann series solution at each collision point. The solution is then constructed by summation of the series. The approach is compared to exact analytical results and to Monte Carlo nonabsorption-weighting results for two representative slowing-down and deep-penetration problems. Application of the proposed approach leads to unbiased estimates for limited numbers of particle simulations and is useful in suppressing an effective bias problem observed in some cases of deep-penetration particle transport problems
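
    The idea of scoring one Neumann-series term per collision can be shown on a toy linear system x = a + Kx; the uniform transition kernel and the fixed chain length below are illustrative choices, not the paper's transport estimator.

    # Estimate the Neumann series x = a + K a + K^2 a + ... by random walks
    # over the indices of K, scoring one term at every step ("collision").
    import numpy as np

    rng = np.random.default_rng(0)

    def solve_component(K, a, i, n_chains=5000, chain_len=30):
        """Monte Carlo estimate of x_i where x = a + K x (spectral radius < 1)."""
        n = len(a)
        total = 0.0
        for _ in range(n_chains):
            state, weight, score = i, 1.0, a[i]
            for _ in range(chain_len):          # truncation is safe for a
                nxt = rng.integers(n)           # rapidly converging series
                weight *= n * K[state, nxt]     # likelihood ratio vs. uniform
                score += weight * a[nxt]        # one Neumann term per collision
                state = nxt
            total += score
        return total / n_chains

    K = np.array([[0.1, 0.2], [0.3, 0.1]])
    a = np.array([1.0, 2.0])
    # Compare with the direct solve: np.linalg.solve(np.eye(2) - K, a)[0]
    print(solve_component(K, a, i=0))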

  6. SRL online Analytical Development

    International Nuclear Information System (INIS)

    Jenkins, C.W.

    1991-01-01

    The Savannah River Site is operated by the Westinghouse Savannah River Co. for the Department of Energy to produce special nuclear materials for defense. R&D support for site programs is provided by the Savannah River Laboratory, which I represent. The site is known primarily for its nuclear reactors, but actually three fourths of the efforts at the site are devoted to fuel/target fabrication, fuel/target reprocessing, and waste management. All of these operations rely heavily on chemical processes. The site is therefore a large chemical plant. There are then many potential applications for process analytical chemistry at SRS. The Savannah River Laboratory (SRL) has an Analytical Development Section of roughly 65 personnel that performs analyses for R&D efforts at the lab, acts as backup to the site Analytical Laboratories Department, and develops analytical methods and instruments. I manage a subgroup of the Analytical Development Section called the Process Control & Analyzer Development Group. The prime mission of this group is to develop online/at-line analytical systems for site applications

  7. Business Analytics in Practice and in Education: A Competency-Based Perspective

    Science.gov (United States)

    Mamonov, Stanislav; Misra, Ram; Jain, Rashmi

    2015-01-01

    Business analytics is a fast-growing area in practice. The rapid growth of business analytics in practice in the recent years is mirrored by a corresponding fast evolution of new educational programs. While more than 130 graduate and undergraduate degree programs in business analytics have been launched in the past 5 years, no commonly accepted…

  8. Learning Analytics for Online Discussions: Embedded and Extracted Approaches

    Science.gov (United States)

    Wise, Alyssa Friend; Zhao, Yuting; Hausknecht, Simone Nicole

    2014-01-01

    This paper describes an application of learning analytics that builds on an existing research program investigating how students contribute and attend to the messages of others in asynchronous online discussions. We first overview the E-Listening research program and then explain how this work was translated into analytics that students and…

  9. Predictive analytics can support the ACO model.

    Science.gov (United States)

    Bradley, Paul

    2012-04-01

    Predictive analytics can be used to rapidly spot hard-to-identify opportunities to better manage care--a key tool in accountable care. When considering analytics models, healthcare providers should: Make value-based care a priority and act on information from analytics models. Create a road map that includes achievable steps, rather than major endeavors. Set long-term expectations and recognize that the effectiveness of an analytics program takes time, unlike revenue cycle initiatives that may show a quick return.

  10. Information-Analytic support of the programs of eliminating the consequences of the Chernobyl accident: gained experience and its future application

    International Nuclear Information System (INIS)

    Arutyunyan, R.V.; Bolshov, L.A.; Linge, I.I.; Abalkina, I.L.; Simonov, A.V.; Pavlovsky, O.A.

    1996-01-01

    At the initial stage of eliminating the consequences of the Chernobyl accident, the role of system-analytic and information support in the decision-making process for protection of the population and rehabilitation of territories was, to a certain extent, underestimated. Starting from 1991, activity in system-analytic support was part of the USSR (later on, Russian) state programs. This activity covered three directions: development of the central bank of generalized data on radiation catastrophes; development, implementation, and maintenance of the control informational system for the Federal bodies; and computer-system integration

  11. 3-D discrete analytical ridgelet transform.

    Science.gov (United States)

    Helbert, David; Carré, Philippe; Andres, Eric

    2006-12-01

    In this paper, we propose an implementation of the 3-D Ridgelet transform: the 3-D discrete analytical Ridgelet transform (3-D DART). This transform uses the Fourier strategy for the computation of the associated 3-D discrete Radon transform. The innovative step is the definition of a discrete 3-D transform within discrete analytical geometry theory, through the construction of 3-D discrete analytical lines in the Fourier domain. We propose two types of 3-D discrete lines: 3-D discrete radial lines going through the origin, defined from their orthogonal projections, and 3-D planes covered with 2-D discrete line segments. These discrete analytical lines have a parameter called arithmetical thickness, allowing us to define a 3-D DART adapted to a specific application. Indeed, the 3-D DART representation is not orthogonal; it is associated with a flexible redundancy factor. The 3-D DART has a very simple forward/inverse algorithm that provides an exact reconstruction without any iterative method. In order to illustrate the potential of this new discrete transform, we apply the 3-D DART and its extension, the Local-DART (with smooth windowing), to the denoising of 3-D images and color video. These experimental results show that simple thresholding of the 3-D DART coefficients is efficient.
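
    A 2-D sketch of the "discrete analytical line" idea follows: the set of grid points within an arithmetical thickness omega of a Euclidean line. In the 3-D DART, analogous masks select Fourier-domain samples along discrete radial lines; the parameters here are illustrative.

    # Boolean mask of pixels whose distance to the line a*x + b*y = c is
    # within an arithmetical thickness omega.
    import numpy as np

    def discrete_analytical_line(shape, a, b, c, omega):
        """Pixels (x, y) with |a*x + b*y - c| <= omega / 2."""
        y, x = np.mgrid[0:shape[0], 0:shape[1]]
        return np.abs(a * x + b * y - c) <= omega / 2.0

    mask = discrete_analytical_line((64, 64), a=1.0, b=-2.0, c=0.0, omega=2.0)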

  12. Guide to Savannah River Laboratory Analytical Services Group

    Energy Technology Data Exchange (ETDEWEB)

    1990-04-01

    The mission of the Analytical Services Group (ASG) is to provide analytical support for Savannah River Laboratory Research and Development Programs using onsite and offsite analytical labs as resources. A second mission is to provide Savannah River Site (SRS) operations with analytical support for nonroutine material characterization or special chemical analyses. The ASG provides backup support for the SRS process control labs as necessary.

  13. Guide to Savannah River Laboratory Analytical Services Group

    International Nuclear Information System (INIS)

    1990-04-01

    The mission of the Analytical Services Group (ASG) is to provide analytical support for Savannah River Laboratory Research and Development Programs using onsite and offsite analytical labs as resources. A second mission is to provide Savannah River Site (SRS) operations with analytical support for nonroutine material characterization or special chemical analyses. The ASG provides backup support for the SRS process control labs as necessary

  14. Deriving Earth Science Data Analytics Requirements

    Science.gov (United States)

    Kempler, Steven J.

    2015-01-01

    Data Analytics applications have made successful strides in the business world, where co-analyzing extremely large sets of independent variables has proven profitable. Today, most data analytics tools and techniques, sometimes applicable to Earth science, have targeted the business industry. In fact, the literature is nearly absent of discussion about Earth science data analytics. Earth science data analytics (ESDA) is the process of examining large amounts of data from a variety of sources to uncover hidden patterns, unknown correlations, and other useful information. ESDA is most often applied to data preparation, data reduction, and data analysis. Co-analysis of an increasing number and volume of Earth science data has become more prevalent, ushered in by the plethora of Earth science data sources generated by US programs, international programs, field experiments, ground stations, and citizen scientists. Through work associated with the Earth Science Information Partners (ESIP) Federation, ESDA types have been defined in terms of data analytics end goals, goals which are very different from those in business and require different tools and techniques. A sampling of use cases has been collected and analyzed in terms of data analytics end goal types, volume, specialized processing, and other attributes. The goal of collecting these use cases is to better understand and specify requirements for data analytics tools and techniques yet to be implemented. This presentation will describe the attributes and preliminary findings of ESDA use cases, as well as provide early analysis of requirements for data analytics tools/techniques that would support specific ESDA goal types. Representative existing data analytics tools/techniques relevant to ESDA will also be addressed.

  15. Proposal of a segmentation procedure for skid resistance data

    International Nuclear Information System (INIS)

    Tejeda, S. V.; Tampier, Hernan de Solominihac; Navarro, T.E.

    2008-01-01

    Skid resistance of pavements presents high spatial variability along a road. This pavement characteristic is directly related to wet-weather accidents; therefore, it is important to identify and characterize homogeneous skid resistance segments along a road in order to implement proper road safety management. Several data segmentation methods have been applied to other pavement characteristics (e.g., roughness); however, no application to skid resistance data was found during the literature review for this study. Typical segmentation methods are either too general or too specific to ensure a detailed segmentation of skid resistance data that can be used for managing pavement performance. The main objective of this paper is to propose a procedure for segmenting skid resistance data, based on existing data segmentation methods. The procedure needs to be efficient and to fulfill road management requirements. The proposed procedure uses the Leverage method to identify outlier data, the CUSUM method to accomplish initial data segmentation, and a statistical method to group consecutive segments that are statistically similar. The statistical method applies Student's t-test for equality of means, along with analysis of variance and the Tukey test for the multiple comparison of means. The proposed procedure was applied to a sample of skid resistance data measured with SCRIM (Side Force Coefficient Routine Investigatory Machine) on a 4.2 km section of Chilean road and was compared to conventional segmentation methods. Results showed that the proposed procedure is more efficient than the conventional segmentation procedures, achieving the minimum weighted sum of square errors (SSEp) with all identified segments statistically different. Due to its mathematical basis, the proposed procedure can be easily adapted and programmed for use in road safety management. (author)

  16. Automatic segmentation of the left ventricle in a cardiac MR short axis image using blind morphological operation

    Science.gov (United States)

    Irshad, Mehreen; Muhammad, Nazeer; Sharif, Muhammad; Yasmeen, Mussarat

    2018-04-01

    Conventionally, cardiac MR image analysis is done manually. Automated analysis can replace the monotonous task of examining massive amounts of image data to assess the global and regional function of the cardiac left ventricle (LV). This task is performed using MR images to calculate analytic cardiac parameters such as end-systolic volume, end-diastolic volume, ejection fraction, and myocardial mass. These analytic parameters depend upon accurate delineation of the epicardial, endocardial, papillary muscle, and trabeculation contours. In this paper, we propose an automatic segmentation method using the sum-of-absolute-differences technique to localize the left ventricle. Blind morphological operations are proposed to detect and segment the LV contours of the epicardium and endocardium automatically. We test the benchmark Sunnybrook dataset for evaluation of the proposed work. Contours of the epicardium and endocardium are compared quantitatively to determine contour accuracy, and high matching values are observed. The overlap between the automatic segmentation and the expert ground truth is high, with an index value of 91.30%. The proposed method for automatic segmentation gives better performance than existing techniques in terms of accuracy.
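
    The localization step can be sketched as a brute-force sum-of-absolute-differences (SAD) template search; the template itself and the subsequent blind morphological contour extraction are not reproduced here.

    # Slide an LV template over the image and keep the window minimising
    # the sum of absolute differences (illustrative, unoptimised search).
    import numpy as np

    def sad_localize(image, template):
        th, tw = template.shape
        best, best_pos = np.inf, (0, 0)
        for r in range(image.shape[0] - th + 1):
            for c in range(image.shape[1] - tw + 1):
                sad = np.abs(image[r:r + th, c:c + tw] - template).sum()
                if sad < best:
                    best, best_pos = sad, (r, c)
        return best_pos  # top-left corner of the best-matching window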

  17. Brookhaven segment interconnect

    International Nuclear Information System (INIS)

    Morse, W.M.; Benenson, G.; Leipuner, L.B.

    1983-01-01

    We have performed a high energy physics experiment using a multisegment Brookhaven FASTBUS system. The system was composed of three crate segments and two cable segments. We discuss the segment interconnect module which permits communication between the various segments

  18. Active Segmentation.

    Science.gov (United States)

    Mishra, Ajay; Aloimonos, Yiannis

    2009-01-01

    The human visual system observes and understands a scene/image by making a series of fixations. Every fixation point lies inside a particular region of arbitrary shape and size in the scene, which can be either an object or just a part of it. We define as a basic segmentation problem the task of segmenting the region containing the fixation point. Segmenting the region containing the fixation is equivalent to finding the enclosing contour - a connected set of boundary edge fragments in the edge map of the scene - around the fixation. This enclosing contour should be a depth boundary. We present here a novel algorithm that finds this bounding contour and achieves the segmentation of one object, given the fixation. The proposed segmentation framework combines monocular cues (color/intensity/texture) with stereo and/or motion, in a cue-independent manner. The semantic robots of the immediate future will be able to use this algorithm to automatically find objects in any environment. The capability of automatically segmenting objects in their visual field can bring visual processing to the next level. Our approach is different from current approaches: while existing work attempts to segment the whole scene at once into many areas, we segment only one image region, specifically the one containing the fixation point. Experiments with real imagery collected by our active robot and from known databases demonstrate the promise of the approach.

  19. Tank 241-AN-104, cores 163 and 164 analytical results for the final report

    International Nuclear Information System (INIS)

    Steen, F.H.

    1997-01-01

    This document is the analytical laboratory report for tank 241-AN-104 push mode core segments collected between August 8, 1996 and September 12, 1996. The segments were subsampled and analyzed in accordance with the Tank 241-AN-104 Push Mode Core Sampling and Analysis Plan (TSAP) (Winkelman, 1996), the Safety Screening Data Quality Objective (DQO) (Dukelow, et al., 1995) and the Flammable Gas Data Quality Objective (DQO) (Benar, 1995). The analytical results are included in a data summary table. None of the samples submitted for Differential Scanning Calorimetry (DSC), Total Alpha Activity (AT), Total Organic Carbon (TOC) and Plutonium analyses (239/240Pu) exceeded notification limits as stated in the TSAP. The statistical results of the 95% confidence interval on the mean calculations are provided by the Tank Waste Remediation Systems Technical Basis Group in accordance with the Memorandum of Understanding (Schreiber, 1997) and not considered in this report

  20. Tank 241-TX-118, core 236 analytical results for the final report

    International Nuclear Information System (INIS)

    ESCH, R.A.

    1998-01-01

    This document is the analytical laboratory report for tank 241-TX-118 push mode core segments collected between April 1, 1998 and April 13, 1998. The segments were subsampled and analyzed in accordance with the Tank 241-TX-118 Push Mode Core Sampling and Analysis Plan (TSAP) (Benar, 1997), the Safety Screening Data Quality Objective (DQO) (Dukelow, et al., 1995), the Data Quality Objective to Support Resolution of the Organic Complexant Safety Issue (Organic DQO) (Turner, et al., 1995) and the Historical Model Evaluation Data Requirements (Historical DQO) (Sipson, et al., 1995). The analytical results are included in the data summary table (Table 1). None of the samples submitted for Differential Scanning Calorimetry (DSC) and Total Organic Carbon (TOC) exceeded notification limits as stated in the TSAP (Benar, 1997). One sample exceeded the Total Alpha Activity (AT) analysis notification limit of 38.4 µCi/g (based on a bulk density of 1.6): core 236, segment 1, lower half solids (S98T001524). Appropriate notifications were made. Plutonium 239/240 analysis was requested as a secondary analysis. The statistical results of the 95% confidence interval on the mean calculations are provided by the Tank Waste Remediation Systems Technical Basis Group in accordance with the Memorandum of Understanding (Schreiber, 1997) and are not considered in this report

  1. GeoSegmenter: A statistically learned Chinese word segmenter for the geoscience domain

    Science.gov (United States)

    Huang, Lan; Du, Youfu; Chen, Gongyang

    2015-03-01

    Unlike English, the Chinese language has no spaces between words. Segmenting texts into words, known as the Chinese word segmentation (CWS) problem, is thus a fundamental issue in processing Chinese documents and the first step in many text mining applications, including information retrieval, machine translation and knowledge acquisition. However, for the geoscience subject domain the CWS problem remains unsolved. Although generic segmenters can be applied to geoscience documents, they lack domain-specific knowledge and consequently their segmentation accuracy drops dramatically. This motivated us to develop a segmenter specifically for the geoscience subject domain: the GeoSegmenter. We first proposed a generic two-step framework for domain-specific CWS. Following this framework, we built GeoSegmenter using conditional random fields, a principled statistical framework for sequence learning. Specifically, GeoSegmenter first identifies general terms by using a generic baseline segmenter. Then it recognises geoscience terms by learning and applying a model that can transform the initial segmentation into the goal segmentation. Empirical experimental results on geoscience documents and benchmark datasets showed that GeoSegmenter could effectively recognise both geoscience terms and general terms.
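
    A minimal sketch of the statistical core, character-level BMES tagging with a linear-chain CRF, is shown below; it assumes the third-party sklearn_crfsuite package, and the feature template and toy training pair are simplifications (no geoscience lexicon or two-step transformation).

    # Character-level CWS as sequence labelling: B/M/E mark the beginning,
    # middle and end of a multi-character word, S a single-character word.
    import sklearn_crfsuite

    def char_features(sent, i):
        return {
            "char": sent[i],
            "prev": sent[i - 1] if i > 0 else "<s>",
            "next": sent[i + 1] if i < len(sent) - 1 else "</s>",
        }

    def to_features(sent):
        return [char_features(sent, i) for i in range(len(sent))]

    sents = ["地质学很有趣"]                 # "geology is interesting"
    tags = [["B", "M", "E", "S", "B", "E"]]  # 地质学 / 很 / 有趣

    crf = sklearn_crfsuite.CRF(algorithm="lbfgs", max_iterations=50)
    crf.fit([to_features(s) for s in sents], tags)
    print(crf.predict([to_features("地质学")]))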

  2. Single-segment and double-segment INTACS for post-LASIK ectasia.

    Directory of Open Access Journals (Sweden)

    Hassan Hashemi

    2014-09-01

    Full Text Available The objective of the present study was to compare single segment and double segment INTACS rings in the treatment of post-LASIK ectasia. In this interventional study, 26 eyes with post-LASIK ectasia were assessed. Ectasia was defined as progressive myopia regardless of astigmatism, along with topographic evidence of inferior steepening of the cornea after LASIK. We excluded those with a history of intraocular surgery, certain eye conditions, and immune disorders, as well as monocular, pregnant and lactating patients. A total of 11 eyes had double ring and 15 eyes had single ring implantation. Visual and refractive outcomes were compared with preoperative values based on the number of implanted INTACS rings. Pre and postoperative spherical equivalent were -3.92 and -2.29 diopter (P=0.007. The spherical equivalent decreased by 1 ± 3.2 diopter in the single-segment group and 2.56 ± 1.58 diopter in the double-segment group (P=0.165. Mean preoperative astigmatism was 2.38 ± 1.93 diopter which decreased to 2.14 ± 1.1 diopter after surgery (P=0.508; 0.87 ± 1.98 diopter decrease in the single-segment group and 0.67 ± 1.2 diopter increase in the double-segment group (P=0.025. Nineteen patients (75% gained one or two lines, and only three, who were all in the double-segment group, lost one or two lines of best corrected visual acuity. The spherical equivalent and vision significantly decreased in all patients. In these post-LASIK ectasia patients, the spherical equivalent was corrected better with two segments compared to single segment implantation; nonetheless, the level of astigmatism in the single-segment group was significantly better than that in the double-segment group.

  3. Use of market segmentation to identify untapped consumer needs in vision correction surgery for future growth.

    Science.gov (United States)

    Loarie, Thomas M; Applegate, David; Kuenne, Christopher B; Choi, Lawrence J; Horowitz, Diane P

    2003-01-01

    Market segmentation analysis identifies discrete segments of the population whose beliefs are consistent with exhibited behaviors such as purchase choice. This study applies market segmentation analysis to low myopes (-1 to -3 D with less than 1 D cylinder) in their consideration and choice of a refractive surgery procedure to discover opportunities within the market. A quantitative survey based on focus group research was sent to a demographically balanced sample of myopes using contact lenses and/or glasses. A variable reduction process followed by a clustering analysis was used to discover discrete belief-based segments. The resulting segments were validated both analytically and through in-market testing. Discontented individuals who wear contact lenses are the primary target for vision correction surgery. However, 81% of the target group is apprehensive about laser in situ keratomileusis (LASIK). They are nervous about the procedure and strongly desire reversibility and exchangeability. There exists a large untapped opportunity for vision correction surgery within the low myope population. Market segmentation analysis helped determine how to best meet this opportunity through repositioning existing procedures or developing new vision correction technology, and could also be applied to identify opportunities in other vision correction populations.

  4. Analysis of a Segmented Annular Coplanar Capacitive Tilt Sensor with Increased Sensitivity.

    Science.gov (United States)

    Guo, Jiahao; Hu, Pengcheng; Tan, Jiubin

    2016-01-21

    An investigation of a segmented annular coplanar capacitor is presented. We focus on its theoretical model, and a mathematical expression of the capacitance value is derived by solving a Laplace equation with Hankel transform. The finite element method is employed to verify the analytical result. Different control parameters are discussed, and each contribution to the capacitance value of the capacitor is obtained. On this basis, we analyze and optimize the structure parameters of a segmented coplanar capacitive tilt sensor, and three models with different positions of the electrode gap are fabricated and tested. The experimental result shows that the model (whose electrode-gap position is 10 mm from the electrode center) realizes a high sensitivity: 0.129 pF/° with a non-linearity of design.

  5. Analytical Chemistry Laboratory progress report for FY 1984

    International Nuclear Information System (INIS)

    Green, D.W.; Heinrich, R.R.; Jensen, K.J.; Stetter, J.R.

    1985-03-01

    Technical and administrative activities of the Analytical Chemistry Laboratory (ACL) are reported for fiscal year 1984. The ACL is a full-cost-recovery service center, with the primary mission of providing a broad range of technical support services to the scientific and engineering programs at ANL. In addition, ACL conducts a research program in analytical chemistry, works on instrumental and methods development, and provides analytical services for governmental, educational, and industrial organizations. The ACL is administratively within the Chemical Technology Division, the principal user, but provides technical support for all of the technical divisions and programs at ANL. The ACL has three technical groups - Chemical Analysis, Instrumental Analysis, and Organic Analysis. Under technical activities 26 projects are briefly described. Under professional activities, a list is presented for publications and reports, oral presentations, awards and meetings attended. 6 figs., 2 tabs

  6. Testing program for burning plasma experiment vacuum vessel bolted joint

    International Nuclear Information System (INIS)

    Hsueh, P.K.; Khan, M.Z.; Swanson, J.; Feng, T.; Dinkevich, S.; Warren, J.

    1992-01-01

    As presently designed, the Burning Plasma Experiment vacuum vessel will be segmentally fabricated and assembled by bolted joints in the field. Due to geometry constraints, most of the bolted joints have significant eccentricity which causes the joint behavior to be sensitive to joint clamping forces. Experience indicates that as a result of this eccentricity, the joint will tend to open at the side closest to the applied load with the extent of the opening being dependent on the initial preload. In this paper analytical models coupled with a confirmatory testing program are developed to investigate and predict the non-linear behavior of the vacuum vessel bolted joint

  7. Spot detection and image segmentation in DNA microarray data.

    Science.gov (United States)

    Qin, Li; Rueda, Luis; Ali, Adnan; Ngom, Alioune

    2005-01-01

    Following the invention of microarrays in 1994, the development and applications of this technology have grown exponentially. The numerous applications of microarray technology include clinical diagnosis and treatment, drug design and discovery, tumour detection, and environmental health research. One of the key issues in the experimental approaches utilising microarrays is to extract quantitative information from the spots, which represent genes in a given experiment. For this process, the initial stages are important and they influence future steps in the analysis. Identifying the spots and separating the background from the foreground is a fundamental problem in DNA microarray data analysis. In this review, we present an overview of state-of-the-art methods for microarray image segmentation. We discuss the foundations of the circle-shaped approach, adaptive shape segmentation, histogram-based methods and the recently introduced clustering-based techniques. We analytically show that clustering-based techniques are equivalent to the one-dimensional, standard k-means clustering algorithm that utilises the Euclidean distance.
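
    The equivalence mentioned above invites a compact sketch: one-dimensional k-means with k = 2, splitting pixel intensities into background and spot clusters. This is plain NumPy and purely illustrative of the formulation discussed in the review.

    # Lloyd's algorithm in one dimension with two clusters, initialised at
    # the intensity extremes; labels of True mark foreground (spot) pixels.
    import numpy as np

    def kmeans_1d_two_clusters(intensities, n_iter=50):
        x = np.asarray(intensities, float).ravel()
        lo, hi = x.min(), x.max()                  # initial centroids
        for _ in range(n_iter):
            labels = np.abs(x - lo) > np.abs(x - hi)
            lo_new, hi_new = x[~labels].mean(), x[labels].mean()
            if lo_new == lo and hi_new == hi:      # converged
                break
            lo, hi = lo_new, hi_new
        return labels, (lo, hi)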

  8. Implementation of the Talmud property allocation algorithm based on the graphic point-segment method

    Science.gov (United States)

    Cen, Haifeng

    2017-04-01

    Guided by the theory of the Talmud allocation scheme, this paper analyzes the algorithm's implementation from the perspective of the graphic point-segment method and designs a point-segment Talmud property allocation algorithm. The core allocation algorithm is then implemented in Java, with an Android interface built for visualization.
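
    For reference, a sketch of the underlying division rule (the Talmud bankruptcy rule in its standard constrained-equal-awards formulation) is given below; the graphic point-segment construction and the paper's Java/Android implementation are not reproduced.

    # Talmud rule (Aumann-Maschler):
    #   estate <= half of total claims: constrained equal awards on half-claims;
    #   otherwise: each claimant's loss is set by constrained equal awards
    #   applied to the half-claims and the total deficit.
    def cea(claims, amount):
        """Constrained equal awards: find lam with sum(min(c, lam)) == amount."""
        lo, hi = 0.0, max(claims)
        for _ in range(100):                       # bisection on lam
            lam = (lo + hi) / 2.0
            if sum(min(c, lam) for c in claims) < amount:
                lo = lam
            else:
                hi = lam
        return [min(c, lam) for c in claims]

    def talmud(claims, estate):
        half = [c / 2.0 for c in claims]
        if estate <= sum(half):
            return cea(half, estate)
        losses = cea(half, sum(claims) - estate)
        return [c - l for c, l in zip(claims, losses)]

    # Classic example from the Talmud: claims 100, 200, 300.
    print(talmud([100, 200, 300], 200))            # -> [50.0, 75.0, 75.0]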

  9. Hanford analytical sample projections FY 1998 - FY 2002

    International Nuclear Information System (INIS)

    Joyce, S.M.

    1998-01-01

    Analytical Services projections are compiled for the Hanford site based on inputs from the major programs for the years 1998 through 2002. Projections are categorized by radiation level, protocol, sample matrix and program. Analyses requirements are also presented. This document summarizes the Hanford sample projections for fiscal years 1998 to 2002. Sample projections are based on inputs submitted to Analytical Services covering Environmental Restoration, Tank Waste Remediation Systems (TWRS), Solid Waste, Liquid Effluents, Spent Nuclear Fuels, Transition Projects, Site Monitoring, Industrial Hygiene, Analytical Services and miscellaneous Hanford support activities. In addition, details on laboratory scale technology (development) work, Sample Management, and Data Management activities are included. This information will be used by Hanford Analytical Services (HAS) and the Sample Management Working Group (SMWG) to assure that laboratories and resources are available and effectively utilized to meet these documented needs

  10. Hanford analytical sample projections FY 1998--FY 2002

    Energy Technology Data Exchange (ETDEWEB)

    Joyce, S.M.

    1998-02-12

    Analytical Services projections are compiled for the Hanford site based on inputs from the major programs for the years 1998 through 2002. Projections are categorized by radiation level, protocol, sample matrix and program. Analyses requirements are also presented. This document summarizes the Hanford sample projections for fiscal years 1998 to 2002. Sample projections are based on inputs submitted to Analytical Services covering Environmental Restoration, Tank Waste Remediation Systems (TWRS), Solid Waste, Liquid Effluents, Spent Nuclear Fuels, Transition Projects, Site Monitoring, Industrial Hygiene, Analytical Services and miscellaneous Hanford support activities. In addition, details on laboratory scale technology (development) work, Sample Management, and Data Management activities are included. This information will be used by Hanford Analytical Services (HAS) and the Sample Management Working Group (SMWG) to assure that laboratories and resources are available and effectively utilized to meet these documented needs.

  11. Analytical Chemistry Division's sample transaction system

    International Nuclear Information System (INIS)

    Stanton, J.S.; Tilson, P.A.

    1980-10-01

    The Analytical Chemistry Division uses the DECsystem-10 computer for a wide range of tasks: sample management, timekeeping, quality assurance, and data calculation. This document describes the features and operating characteristics of many of the computer programs used by the Division. The descriptions are divided into chapters which cover all of the information about one aspect of the Analytical Chemistry Division's computer processing

  12. Tank 241-AX-103, cores 212 and 214 analytical results for the final report

    International Nuclear Information System (INIS)

    Steen, F.H.

    1998-01-01

    This document is the analytical laboratory report for tank 241-AX-103 push mode core segments collected between July 30, 1997 and August 11, 1997. The segments were subsampled and analyzed in accordance with the Tank 241-AX-103 Push Mode Core Sampling and Analysis Plan (TSAP) (Comer, 1997), the Safety Screening Data Quality Objective (DQO) (Dukelow, et al., 1995) and the Data Quality Objective to Support Resolution of the Organic Complexant Safety Issue (Organic DQO) (Turner, et al., 1995). The analytical results are included in the data summary table (Table 1). None of the samples submitted for Differential Scanning Calorimetry (DSC), Total Alpha Activity (AT), plutonium 239 (Pu239), and Total Organic Carbon (TOC) exceeded notification limits as stated in the TSAP (Conner, 1997). The statistical results of the 95% confidence interval on the mean calculations are provided by the Tank Waste Remediation Systems Technical Basis Group in accordance with the Memorandum of Understanding (Schreiber, 1997) and not considered in this report

  13. The Enhanced Segment Interconnect for FASTBUS data communications

    International Nuclear Information System (INIS)

    Machen, D.R.; Downing, R.W.; Kirsten, F.A.; Nelson, R.O.

    1987-01-01

    The Enhanced Segment Interconnect concept (ESI) for improved FASTBUS data communications is a development supported by the U.S. Department of Energy under the Small Business Innovation Research (SBIR) program. The ESI will contain both the Segment Interconnect (SI) Type S-1 and an optional buffered interconnect for store-and-forward data communications; fiber-optic-coupled serial ports will provide optional data paths. The ESI can be applied in large FASTBUS-implemented physics experiments whose data-set or data-transmission distance requirements dictate alternate approaches to data communications. This paper describes the functions of the ESI and the status of its development, now 25% complete

  14. Accounting for segment correlations in segmented gamma-ray scans

    International Nuclear Information System (INIS)

    Sheppard, G.A.; Prettyman, T.H.; Piquette, E.C.

    1994-01-01

    In a typical segmented gamma-ray scanner (SGS), the detector's field of view is collimated so that a complete horizontal slice or segment of the desired thickness is visible. Ordinarily, the collimator is not deep enough to exclude gamma rays emitted from sample volumes above and below the segment aligned with the collimator. This can lead to assay biases, particularly for certain radioactive-material distributions. Another consequence of the collimator's low aspect ratio is that segment assays at the top and bottom of the sample are biased low because the detector's field of view is not filled. This effect is ordinarily countered by placing the sample on a low-Z pedestal and scanning one or more segment thicknesses below and above the sample. This takes extra time, however. We have investigated a number of techniques that both account for correlated segments and correct for end effects in SGS assays. Also, we have developed an algorithm that facilitates estimates of assay precision. Six calculation methods have been compared by evaluating the results of thousands of simulated assays for three types of gamma-ray source distribution and ten masses. We will report on these computational studies and their experimental verification

  15. Analytical Solution and Application for One-Dimensional Consolidation of Tailings Dam

    Directory of Open Access Journals (Sweden)

    Hai-ming Liu

    2018-01-01

    The pore water pressure in a tailings dam has a very great influence on its stability. Based on the assumptions of one-dimensional consolidation and small strain, the partial differential equation for pore water pressure is deduced. The resulting differential equation can be simplified when its parameters are constants. According to the characteristics of a tailings dam, the pore water pressure can be divided into the slope dam segment, the dry beach segment, and the artificial lake segment. The pore water pressure is obtained by solving the partial differential equation with the separation-of-variables method. On this basis, the dissipation and accumulation of pore water pressure in the upstream tailings dam are analyzed. A typical tailings example is introduced to illustrate the applicability of the analytic solution. Furthermore, the application of pore water pressure in tailings dams is discussed. The research results have important scientific and engineering application value for the stability of tailings dams.
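
    As a baseline for the kind of separation-of-variables solution described, the sketch below evaluates the classical Terzaghi series for one-dimensional consolidation; the symbols and values are generic, and the paper's segment-specific boundary conditions (slope dam, dry beach, artificial lake) are not reproduced.

    # Terzaghi series: u(z, t) = sum over m of (2*u0/M) sin(M*z/H) exp(-M^2*Tv),
    # with M = pi*(2m+1)/2 and time factor Tv = cv*t/H^2. Symbols: u0 initial
    # excess pore pressure, cv consolidation coefficient, H drainage path.
    import numpy as np

    def excess_pore_pressure(z, t, u0=100.0, cv=1e-7, H=10.0, n_terms=100):
        Tv = cv * t / H**2
        m = np.arange(n_terms)
        M = np.pi * (2 * m + 1) / 2.0
        z = np.atleast_1d(np.asarray(z, float))
        terms = (2.0 * u0 / M) * np.sin(np.outer(z / H, M)) * np.exp(-M**2 * Tv)
        return terms.sum(axis=1)

    print(excess_pore_pressure(z=5.0, t=3.15e7))   # mid-depth after ~1 year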

  16. Analytical calculations by computer in physics and mathematics

    International Nuclear Information System (INIS)

    Gerdt, V.P.; Tarasov, O.V.; Shirokov, D.V.

    1978-01-01

    A review of the present status of analytical calculations by computer is given. Some programming systems for analytical computations are considered, including SCHOONSCHIP, CLAM, REDUCE-2, SYMBAL, CAMAL, and AVTO-ANALITIK, which are implemented or will be implemented at JINR, and MACSYMA, one of the most developed systems. It is shown, on the basis of the mathematical operations realized in these systems, that they are appropriate for different problems of theoretical physics and mathematics, for example, problems of quantum field theory, celestial mechanics, general relativity and so on. Some problems solved at JINR by programming systems for analytical computations are described. The review is intended for specialists in different fields of theoretical physics and mathematics

  17. Characterizing and reaching high-risk drinkers using audience segmentation.

    Science.gov (United States)

    Moss, Howard B; Kirby, Susan D; Donodeo, Fred

    2009-08-01

    Market or audience segmentation is widely used in social marketing efforts to help planners identify segments of a population to target for tailored program interventions. Market-based segments are typically defined by behaviors, attitudes, knowledge, opinions, or lifestyles. They are more helpful to health communication and marketing planning than epidemiologically defined groups because market-based segments are similar in respect to how they behave or might react to marketing and communication efforts. However, market segmentation has rarely been used in alcohol research. As an illustration of its utility, we employed commercial data that describe the sociodemographic characteristics of high-risk drinkers as an audience segment, including where they tend to live, lifestyles, interests, consumer behaviors, alcohol consumption behaviors, other health-related behaviors, and cultural values. Such information can be extremely valuable in targeting and planning public health campaigns, targeted mailings, prevention interventions, and research efforts. We describe the results of a segmentation analysis of those individuals who self-reported consuming 5 or more drinks per drinking episode at least twice in the last 30 days. The study used the proprietary PRIZM (Claritas, Inc., San Diego, CA) audience segmentation database merged with the Centers for Disease Control and Prevention's (CDC) Behavioral Risk Factor Surveillance System (BRFSS) database. The top 10 of the 66 PRIZM audience segments for this risky drinking pattern are described. For five of these segments we provide additional in-depth details about consumer behavior and estimates of the market areas where these risky drinkers reside. The top 10 audience segments (PRIZM clusters) most likely to engage in high-risk drinking are described. The cluster with the highest concentration of binge-drinking behavior is referred to as the "Cyber Millenials." This cluster is characterized as "the nation's tech

  18. A study of symbol segmentation method for handwritten mathematical formula recognition using mathematical structure information

    OpenAIRE

    Toyozumi, Kenichi; Yamada, Naoya; Kitasaka, Takayuki; Mori, Kensaku; Suenaga, Yasuhito; Mase, Kenji; Takahashi, Tomoichi

    2004-01-01

    Symbol segmentation is very important in handwritten mathematical formula recognition, since it is the very first portion of the recognition process. This paper proposes a new symbol segmentation method using mathematical structure information. The base technique of symbol segmentation employed in the existing methods is dynamic programming, which optimizes the overall results of individual symbol recognition. The new method we propose here...
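
    The dynamic-programming baseline mentioned, choosing split points that maximize the summed scores of individual symbol recognition, can be sketched compactly; the scoring function below is a stand-in for a real symbol classifier:

        from functools import lru_cache

        def segment(strokes, score, max_len=4):
            """Split a stroke sequence into symbols, maximizing the total
            recognition score. `score(group)` is assumed to return the best
            classifier confidence for that stroke group (a stand-in here)."""
            n = len(strokes)

            @lru_cache(maxsize=None)
            def best(i):
                if i == n:
                    return 0.0, ()
                options = []
                for j in range(i + 1, min(i + max_len, n) + 1):
                    s, rest = best(j)
                    options.append((score(tuple(strokes[i:j])) + s,
                                    ((i, j),) + rest))
                return max(options)

            return best(0)

        # Toy usage: pretend longer stroke groups score slightly better
        total, cuts = segment("abcde", score=lambda g: len(g) * 1.0 - 0.5)
        print(total, cuts)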

  19. European specialist porphyria laboratories: diagnostic strategies, analytical quality, clinical interpretation, and reporting as assessed by an external quality assurance program.

    Science.gov (United States)

    Aarsand, Aasne K; Villanger, Jørild H; Støle, Egil; Deybach, Jean-Charles; Marsden, Joanne; To-Figueras, Jordi; Badminton, Mike; Elder, George H; Sandberg, Sverre

    2011-11-01

    The porphyrias are a group of rare metabolic disorders whose diagnosis depends on identification of specific patterns of porphyrin precursor and porphyrin accumulation in urine, blood, and feces. Diagnostic tests for porphyria are performed by specialized laboratories in many countries. Data regarding the analytical and diagnostic performance of these laboratories are scarce. We distributed 5 sets of multispecimen samples from different porphyria patients accompanied by clinical case histories to 18-21 European specialist porphyria laboratories/centers as part of a European Porphyria Network organized external analytical and postanalytical quality assessment (EQA) program. The laboratories stated which analyses they would normally have performed given the case histories and reported results of all porphyria-related analyses available, interpretative comments, and diagnoses. Reported diagnostic strategies initially showed considerable diversity, but the number of laboratories applying adequate diagnostic strategies increased during the study period. We found an average interlaboratory CV of 50% (range 12%-152%) for analytes in absolute concentrations. Result normalization by forming ratios to the upper reference limits did not reduce this variation. Sixty-five percent of reported results were within biological variation-based analytical quality specifications. Clinical interpretation of the obtained analytical results was accurate, and most laboratories established the correct diagnosis in all distributions. Based on a case-based EQA scheme, variations were apparent in analytical and diagnostic performance between European specialist porphyria laboratories. Our findings reinforce the use of EQA schemes as an essential tool to assess both analytical and diagnostic processes and thereby to improve patient care in rare diseases.
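
    The interlaboratory coefficient of variation and the normalization to upper reference limits tested in the study reduce to a few lines of code; the sketch below uses invented numbers for a single analyte:

        import numpy as np

        # Hypothetical results for one analyte: one result per laboratory,
        # each with its own upper reference limit (URL); numbers are invented
        results = np.array([12.1, 15.4, 9.8, 20.2, 14.0])   # absolute units
        urls    = np.array([10.0, 12.0, 8.0, 15.0, 11.0])   # per-lab URLs

        def cv(x):
            return 100 * x.std(ddof=1) / x.mean()

        print(f"CV, absolute concentrations: {cv(results):.0f}%")
        print(f"CV, ratios to the URL:       {cv(results / urls):.0f}%")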

  20. Developing Healthcare Data Analytics APPs with Open Data Science Tools.

    Science.gov (United States)

    Hao, Bibo; Sun, Wen; Yu, Yiqin; Xie, Guotong

    2017-01-01

    Recent advances in big data analytics provide more flexible, efficient, and open tools for researchers to gain insight from healthcare data. However, many of these tools require researchers to develop programs in languages such as Python or R, a skill set that many researchers in the healthcare data analytics area have not acquired. To make data science more approachable, we explored existing tools and developed a practice that can help data scientists convert existing analytics pipelines to user-friendly analytics APPs with rich interactions and features of real-time analysis. With this practice, data scientists can develop customized analytics pipelines as APPs in Jupyter Notebook and disseminate them to other researchers easily, and researchers can benefit from the shared notebooks to perform analysis tasks or reproduce research results much more easily.
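
    In the spirit of the practice described, turning a notebook pipeline step into an interactive APP typically means binding its parameters to widgets; a minimal ipywidgets sketch in which the analysis function and data are placeholders:

        # Run inside a Jupyter Notebook with ipywidgets installed
        import ipywidgets as widgets
        import pandas as pd

        df = pd.DataFrame({"age": [34, 51, 68, 45], "stay_days": [2, 7, 12, 4]})

        def analyze(min_age=40):
            """Placeholder analytics step: filter the cohort and summarize."""
            cohort = df[df["age"] >= min_age]
            print(f"n = {len(cohort)}, mean stay = {cohort['stay_days'].mean():.1f} d")

        # One line turns the pipeline step into an interactive control
        widgets.interact(analyze, min_age=(20, 80, 5))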

  1. A general strategy for performing temperature-programming in high performance liquid chromatography--prediction of segmented temperature gradients.

    Science.gov (United States)

    Wiese, Steffen; Teutenberg, Thorsten; Schmidt, Torsten C

    2011-09-28

    In the present work it is shown that the linear elution strength (LES) model, which was adapted from temperature-programming gas chromatography (GC), can also be employed to predict retention times for segmented temperature gradients based on temperature-gradient input data in liquid chromatography (LC) with high accuracy. The LES model assumes that retention times for isothermal separations can be predicted based on two temperature gradients and is employed to calculate the retention factor of an analyte when changing the start temperature of the temperature gradient. In this study it was investigated whether this approach can also be employed in LC. It was shown that this approximation cannot be transferred to temperature-programmed LC, where a temperature range from 60°C up to 180°C is investigated: major relative errors of up to 169.6% were observed for isothermal retention factor predictions. In order to predict retention times for temperature gradients with different start temperatures in LC, another relationship is required to describe the influence of temperature on retention. Therefore, retention times for isothermal separations based on isothermal input runs were predicted using a plot of the natural logarithm of the retention factor vs. the inverse temperature and a plot of the natural logarithm of the retention factor vs. temperature. It could be shown that a plot of ln k vs. T yields more reliable isothermal/isocratic retention time predictions than the usually employed plot of ln k vs. 1/T. Hence, in order to predict retention times for temperature gradients with different start temperatures in LC, two temperature-gradient and two isothermal measurements were employed. In this case, retention times can be predicted with a maximal relative error of 5.5% (average relative error: 2.9%). In comparison, if the start temperature of the simulated temperature gradient is equal to the start temperature of the input data, only two temperature
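
    The two retention models compared in the study amount to two different linear fits; the sketch below, with invented retention data for two isothermal input runs, contrasts predictions from a plot of ln k vs. T (found more reliable here) with the classical van 't Hoff-style plot of ln k vs. 1/T:

        import numpy as np

        # Two isothermal input runs (assumed): temperature in K, retention factor k
        T_in = np.array([333.15, 453.15])        # 60 degC and 180 degC
        k_in = np.array([12.0, 0.9])             # invented retention factors

        # Model A: ln k linear in T (found more reliable in this study)
        a1, a0 = np.polyfit(T_in, np.log(k_in), 1)
        # Model B: ln k linear in 1/T (the classical van 't Hoff form)
        b1, b0 = np.polyfit(1.0 / T_in, np.log(k_in), 1)

        T = 393.15                               # predict at 120 degC
        print("ln k vs T:   k =", np.exp(a1 * T + a0))
        print("ln k vs 1/T: k =", np.exp(b1 / T + b0))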

  2. Demonstrating Success: Web Analytics and Continuous Improvement

    Science.gov (United States)

    Loftus, Wayne

    2012-01-01

    As free and low-cost Web analytics tools become more sophisticated, libraries' approach to user analysis can become more nuanced and precise. Tracking appropriate metrics with a well-formulated analytics program can inform design decisions, demonstrate the degree to which those decisions have succeeded, and thereby inform the next iteration in the…

  3. O papel dos programas interlaboratoriais para a qualidade dos resultados analíticos Interlaboratorial programs for improving the quality of analytical results

    Directory of Open Access Journals (Sweden)

    Queenie Siu Hang Chui

    2004-12-01

    Full Text Available Interlaboratory programs are conducted for a number of purposes: to identify problems related to the calibration of instruments, to assess the degree of equivalence of analytical results among several laboratories, to assign quantity values and their uncertainties in the development of a certified reference material, and to verify the performance of laboratories, as in proficiency testing, a key quality assurance technique that is sometimes used in conjunction with accreditation. Several statistical tools are employed to assess the analytical results of laboratories participating in an intercomparison program, among them the z-score technique, the confidence ellipse, and the Grubbs and Cochran tests. This work presents the experience of coordinating an intercomparison exercise to determine Ca, Al, Fe, Ti, and Mn as impurities in samples of chemical-grade silicon metal prepared as a candidate reference material.
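
    Of the statistical tools named, the z-score is the simplest: each laboratory's result is expressed as its deviation from the assigned value in units of the standard deviation for proficiency assessment, with |z| <= 2 conventionally regarded as satisfactory. A minimal sketch on invented results:

        # z = (x - X) / sigma_p, the usual proficiency-testing score
        assigned_value = 250.0    # X: e.g. mg/kg Fe in silicon metal (invented)
        sigma_p = 12.0            # standard deviation for proficiency assessment

        lab_results = {"lab01": 244.0, "lab02": 271.5, "lab03": 289.0}
        for lab, x in lab_results.items():
            z = (x - assigned_value) / sigma_p
            verdict = "satisfactory" if abs(z) <= 2 else \
                      "questionable" if abs(z) <= 3 else "unsatisfactory"
            print(f"{lab}: z = {z:+.1f} ({verdict})")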

  4. Intrinsically disordered segments and the evolution of protein half-life

    Science.gov (United States)

    Babu, M.

    2013-03-01

    Precise turnover of proteins is essential for cellular homeostasis and is primarily mediated by the proteasome. Thus, a fundamental question is: what features make a protein an efficient substrate for degradation? Here I will present results showing that proteins with a long terminal disordered segment or internal disordered segments have a significantly shorter half-life in yeast. This relationship appears to be evolutionarily conserved in mouse and human. Furthermore, upon gene duplication, divergence in the length of terminal disorder or variation in the number of internal disordered segments results in significant alteration of the half-life of yeast paralogs. Many proteins that exhibit such changes participate in signaling, where altered protein half-life will likely influence their activity. We suggest that variation in the length and number of disordered segments could serve as a remarkably simple means to evolve protein half-life and may be an underappreciated source of genetic variation with important phenotypic consequences. MMB acknowledges the Medical Research Council for funding his research program.

  5. Hydrophilic segmented block copolymers based on poly(ethylene oxide) and monodisperse amide segments

    NARCIS (Netherlands)

    Husken, D.; Feijen, Jan; Gaymans, R.J.

    2007-01-01

    Segmented block copolymers based on poly(ethylene oxide) (PEO) flexible segments and monodisperse crystallizable bisester tetra-amide segments were made via a polycondensation reaction. The molecular weight of the PEO segments varied from 600 to 4600 g/mol and a bisester tetra-amide segment (T6T6T)

  6. Conflation of Short Identity-by-Descent Segments Bias Their Inferred Length Distribution

    Directory of Open Access Journals (Sweden)

    Charleston W. K. Chiang

    2016-05-01

    Full Text Available Identity-by-descent (IBD is a fundamental concept in genetics with many applications. In a common definition, two haplotypes are said to share an IBD segment if that segment is inherited from a recent shared common ancestor without intervening recombination. Segments several cM long can be efficiently detected by a number of algorithms using high-density SNP array data from a population sample, and there are currently efforts to detect shorter segments from sequencing. Here, we study a problem of identifiability: because existing approaches detect IBD based on contiguous segments of identity-by-state, inferred long segments of IBD may arise from the conflation of smaller, nearby IBD segments. We quantified this effect using coalescent simulations, finding that significant proportions of inferred segments 1–2 cM long are results of conflations of two or more shorter segments, each at least 0.2 cM or longer, under demographic scenarios typical for modern humans for all programs tested. The impact of such conflation is much smaller for longer (> 2 cM segments. This biases the inferred IBD segment length distribution, and so can affect downstream inferences that depend on the assumption that each segment of IBD derives from a single common ancestor. As an example, we present and analyze an estimator of the de novo mutation rate using IBD segments, and demonstrate that unmodeled conflation leads to underestimates of the ages of the common ancestors on these segments, and hence a significant overestimate of the mutation rate. Understanding the conflation effect in detail will make its correction in future methods more tractable.

  7. ORBITALES. A program for the calculation of wave functions with an analytical central potential; ORBITALES. Programa de calculo de Funciones de Onda para una Potencial Central Analitico

    Energy Technology Data Exchange (ETDEWEB)

    Carretero, Yunta; Rodriguez Mayquez, E

    1974-07-01

    This paper describes the objective, basis, FORTRAN implementation, and use of the program ORBITALES, which calculates atomic wave functions for an analytical central potential. (Author) 8 refs.

  8. Spinal segmental dysgenesis

    Directory of Open Access Journals (Sweden)

    N Mahomed

    2009-06-01

    Full Text Available Spinal segmental dysgenesis is a rare congenital spinal abnormality, seen in neonates and infants, in which a segment of the spine and spinal cord fails to develop normally. The condition is segmental, with normal vertebrae above and below the malformation. It is commonly associated with various abnormalities affecting the cardiac, genitourinary, gastrointestinal, and skeletal systems. We report two cases of spinal segmental dysgenesis and the associated abnormalities.

  9. Parallel segmented outlet flow high performance liquid chromatography with multiplexed detection

    International Nuclear Information System (INIS)

    Camenzuli, Michelle; Terry, Jessica M.; Shalliker, R. Andrew; Conlan, Xavier A.; Barnett, Neil W.; Francis, Paul S.

    2013-01-01

    Highlights: •Multiplexed detection for liquid chromatography. •‘Parallel segmented outlet flow’ distributes inner and outer portions of the analyte zone. •Three detectors were used simultaneously for the determination of opiate alkaloids. -- Abstract: We describe a new approach to multiplex detection for HPLC, exploiting parallel segmented outlet flow – a new column technology that provides pressure-regulated control of eluate flow through multiple outlet channels, which minimises the additional dead volume associated with conventional post-column flow splitting. Using three detectors: one UV-absorbance and two chemiluminescence systems (tris(2,2′-bipyridine)ruthenium(III) and permanganate), we examine the relative responses for six opium poppy (Papaver somniferum) alkaloids under conventional and multiplexed conditions, where approximately 30% of the eluate was distributed to each detector and the remaining solution directed to a collection vessel. The parallel segmented outlet flow mode of operation offers advantages in terms of solvent consumption, waste generation, total analysis time and solute band volume when applying multiple detectors to HPLC, but the manner in which each detection system is influenced by changes in solute concentration and solution flow rates must be carefully considered.

  10. Parallel segmented outlet flow high performance liquid chromatography with multiplexed detection

    Energy Technology Data Exchange (ETDEWEB)

    Camenzuli, Michelle [Australian Centre for Research on Separation Science (ACROSS), School of Science and Health, University of Western Sydney (Parramatta), Sydney, NSW (Australia); Terry, Jessica M. [Centre for Chemistry and Biotechnology, School of Life and Environmental Sciences, Deakin University, Geelong, Victoria 3216 (Australia); Shalliker, R. Andrew, E-mail: r.shalliker@uws.edu.au [Australian Centre for Research on Separation Science (ACROSS), School of Science and Health, University of Western Sydney (Parramatta), Sydney, NSW (Australia); Conlan, Xavier A.; Barnett, Neil W. [Centre for Chemistry and Biotechnology, School of Life and Environmental Sciences, Deakin University, Geelong, Victoria 3216 (Australia); Francis, Paul S., E-mail: paul.francis@deakin.edu.au [Centre for Chemistry and Biotechnology, School of Life and Environmental Sciences, Deakin University, Geelong, Victoria 3216 (Australia)

    2013-11-25

    Highlights: •Multiplexed detection for liquid chromatography. •‘Parallel segmented outlet flow’ distributes inner and outer portions of the analyte zone. •Three detectors were used simultaneously for the determination of opiate alkaloids. -- Abstract: We describe a new approach to multiplex detection for HPLC, exploiting parallel segmented outlet flow – a new column technology that provides pressure-regulated control of eluate flow through multiple outlet channels, which minimises the additional dead volume associated with conventional post-column flow splitting. Using three detectors: one UV-absorbance and two chemiluminescence systems (tris(2,2′-bipyridine)ruthenium(III) and permanganate), we examine the relative responses for six opium poppy (Papaver somniferum) alkaloids under conventional and multiplexed conditions, where approximately 30% of the eluate was distributed to each detector and the remaining solution directed to a collection vessel. The parallel segmented outlet flow mode of operation offers advantages in terms of solvent consumption, waste generation, total analysis time and solute band volume when applying multiple detectors to HPLC, but the manner in which each detection system is influenced by changes in solute concentration and solution flow rates must be carefully considered.

  11. Tank 241-U-106, cores 147 and 148, analytical results for the final report

    Energy Technology Data Exchange (ETDEWEB)

    Steen, F.H.

    1996-09-27

    This document is the final report deliverable for tank 241-U-106 push mode core segments collected between May 8, 1996 and May 10, 1996, and received by the 222-S Laboratory between May 14, 1996 and May 16, 1996. The segments were subsampled and analyzed in accordance with the Tank 241-U-106 Push Mode Core Sampling and Analysis Plan (TSAP), the Historical Model Evaluation Data Requirements (Historical DQO), the Data Quality Objective to Support Resolution of the Organic Complexant Safety Issue (Organic DQO), and the Safety Screening Data Quality Objective (DQO). The analytical results are included in Table 1.

  12. Automated statistical modeling of analytical measurement systems

    International Nuclear Information System (INIS)

    Jacobson, J.J.

    1992-01-01

    The statistical modeling of analytical measurement systems at the Idaho Chemical Processing Plant (ICPP) has been completely automated through computer software. The statistical modeling of analytical measurement systems is one part of a complete quality control program used by the Remote Analytical Laboratory (RAL) at the ICPP. The quality control program is an integration of automated data input, measurement system calibration, database management, and statistical process control. The quality control program and statistical modeling program meet the guidelines set forth by the American Society for Testing and Materials and the American National Standards Institute. A statistical model is a set of mathematical equations describing any systematic bias inherent in a measurement system and the precision of a measurement system. A statistical model is developed from data generated by the analysis of control standards. Control standards are samples which are made up at precise known levels by an independent laboratory and submitted to the RAL. The RAL analysts who process control standards do not know the values of those control standards. The object behind statistical modeling is to describe real process samples in terms of their bias and precision and to verify that a measurement system is operating satisfactorily.
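
    In outline, such a statistical model reduces to estimating the systematic bias and the precision of a measurement system from repeated analyses of control standards at known levels; a minimal sketch with invented control-standard data:

        import numpy as np

        # Measured values of a control standard with known level 5.00
        # (units arbitrary); data invented for illustration
        known = 5.00
        measured = np.array([5.08, 4.97, 5.11, 5.03, 5.06, 4.99])

        bias = measured.mean() - known                 # systematic error
        precision = measured.std(ddof=1)               # random error (1 s.d.)
        print(f"bias = {bias:+.3f}, precision (s) = {precision:.3f}")

        # A simple in-control check: the bias should be small relative to
        # the standard error of the mean of the control results
        in_control = abs(bias) <= 2 * precision / np.sqrt(len(measured))
        print("in control:", in_control)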

  13. Microscopy Image Browser: A Platform for Segmentation and Analysis of Multidimensional Datasets.

    Directory of Open Access Journals (Sweden)

    Ilya Belevich

    2016-01-01

    Full Text Available Understanding the structure-function relationship of cells and organelles in their natural context requires multidimensional imaging. As techniques for multimodal 3-D imaging have become more accessible, effective processing, visualization, and analysis of large datasets are posing a bottleneck for the workflow. Here, we present a new software package for high-performance segmentation and image processing of multidimensional datasets that improves and facilitates the full utilization and quantitative analysis of acquired data, which is freely available from a dedicated website. The open-source environment enables modification and insertion of new plug-ins to customize the program for specific needs. We provide practical examples of program features used for processing, segmentation and analysis of light and electron microscopy datasets, and detailed tutorials to enable users to rapidly and thoroughly learn how to use the program.

  14. Handling movement epenthesis and hand segmentation ambiguities in continuous sign language recognition using nested dynamic programming.

    Science.gov (United States)

    Yang, Ruiduo; Sarkar, Sudeep; Loeding, Barbara

    2010-03-01

    We consider two crucial problems in continuous sign language recognition from unaided video sequences. At the sentence level, we consider the movement epenthesis (me) problem and at the feature level, we consider the problem of hand segmentation and grouping. We construct a framework that can handle both of these problems based on an enhanced, nested version of the dynamic programming approach. To address movement epenthesis, a dynamic programming (DP) process employs a virtual me option that does not need explicit models. We call this the enhanced level building (eLB) algorithm. This formulation also allows the incorporation of grammar models. Nested within this eLB is another DP that handles the problem of selecting among multiple hand candidates. We demonstrate our ideas on four American Sign Language data sets with simple background, with the signer wearing short sleeves, with complex background, and across signers. We compared the performance with Conditional Random Fields (CRF) and Latent Dynamic-CRF-based approaches. The experiments show more than 40 percent improvement over CRF or LDCRF approaches in terms of the frame labeling rate. We show the flexibility of our approach when handling a changing context. We also find a 70 percent improvement in sign recognition rate over the unenhanced DP matching algorithm that does not accommodate the me effect.

  15. Automatic Melody Segmentation

    NARCIS (Netherlands)

    Rodríguez López, Marcelo

    2016-01-01

    The work presented in this dissertation investigates music segmentation. In the field of Musicology, segmentation refers to a score analysis technique, whereby notated pieces or passages of these pieces are divided into “units” referred to as sections, periods, phrases, and so on. Segmentation

  16. Analytical model for an electrostatically actuated miniature diaphragm compressor

    International Nuclear Information System (INIS)

    Sathe, Abhijit A; Groll, Eckhard A; Garimella, Suresh V

    2008-01-01

    This paper presents a new analytical approach for quasi-static modeling of an electrostatically actuated diaphragm compressor that could be employed in a miniature-scale refrigeration system. The compressor consists of a flexible circular diaphragm clamped at its circumference. A conformal chamber encloses the diaphragm completely. The membrane and the chamber surfaces are coated with metallic electrodes. A potential difference applied between the diaphragm and the chamber pulls the diaphragm toward the chamber surface progressively from the outer circumference toward the center. This zipping actuation reduces the volume available to the refrigerant gas, thereby increasing its pressure. A segmentation technique is proposed for analysis of the compressor, by which the domain is divided into multiple segments and the forces acting on the diaphragm are estimated for each. The pull-down voltage to completely zip each individual segment is thus obtained, and the voltage required to obtain a specific pressure rise in the chamber can be determined. Predictions from the model compare well with other simulation results from the literature, as well as with experimental measurements of the diaphragm displacement and chamber pressure rise in a custom-built setup.
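
    A back-of-the-envelope version of the per-segment force balance in such models equates the parallel-plate electrostatic pressure, eps0*V^2/(2*g^2) for gap g, with the pressure the diaphragm must work against; the sketch below is only this crude approximation under invented numbers, not the paper's segmentation model:

        import numpy as np

        EPS0 = 8.854e-12                      # vacuum permittivity, F/m

        def zip_voltage(gap, pressure):
            """Voltage at which parallel-plate electrostatic pressure
            eps0*V^2/(2*gap^2) balances an opposing gas pressure (Pa)."""
            return gap * np.sqrt(2.0 * pressure / EPS0)

        # Invented example: 5-micron residual gap vs. 120 kPa chamber pressure
        print(f"{zip_voltage(5e-6, 1.2e5):.0f} V")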

  17. Patient Segmentation Analysis Offers Significant Benefits For Integrated Care And Support.

    Science.gov (United States)

    Vuik, Sabine I; Mayer, Erik K; Darzi, Ara

    2016-05-01

    Integrated care aims to organize care around the patient instead of the provider. It is therefore crucial to understand differences across patients and their needs. Segmentation analysis that uses big data can help divide a patient population into distinct groups, which can then be targeted with care models and intervention programs tailored to their needs. In this article we explore the potential applications of patient segmentation in integrated care. We propose a framework for population strategies in integrated care-whole populations, subpopulations, and high-risk populations-and show how patient segmentation can support these strategies. Through international case examples, we illustrate practical considerations such as choosing a segmentation logic, accessing data, and tailoring care models. Important issues for policy makers to consider are trade-offs between simplicity and precision, trade-offs between customized and off-the-shelf solutions, and the availability of linked data sets. We conclude that segmentation can provide many benefits to integrated care, and we encourage policy makers to support its use. Project HOPE—The People-to-People Health Foundation, Inc.

  18. Characterizing and Reaching High-Risk Drinkers Using Audience Segmentation

    Science.gov (United States)

    Moss, Howard B.; Kirby, Susan D.; Donodeo, Fred

    2010-01-01

    Background Market or audience segmentation is widely used in social marketing efforts to help planners identify segments of a population to target for tailored program interventions. Market-based segments are typically defined by behaviors, attitudes, knowledge, opinions, or lifestyles. They are more helpful to health communication and marketing planning than epidemiologically defined groups because market-based segments are similar in respect to how they behave or might react to marketing and communication efforts. However, market segmentation has rarely been used in alcohol research. As an illustration of its utility, we employed commercial data that describes the sociodemographic characteristics of high-risk drinkers as an audience segment: where they tend to live, lifestyles, interests, consumer behaviors, alcohol consumption behaviors, other health-related behaviors, and cultural values. Such information can be extremely valuable in targeting and planning public health campaigns, targeted mailings, prevention interventions and research efforts. Methods We describe the results of a segmentation analysis of those individuals who self-report consuming five or more drinks per drinking episode at least twice in the last 30 days. The study used the proprietary PRIZM™ audience segmentation database merged with the Centers for Disease Control and Prevention's (CDC) Behavioral Risk Factor Surveillance System (BRFSS) database. The top ten of the 66 PRIZM™ audience segments for this risky drinking pattern are described. For five of these segments we provide additional in-depth details about consumer behavior and the estimates of the market areas where these risky drinkers reside. Results The top ten audience segments (PRIZM clusters) most likely to engage in high-risk drinking are described. The cluster with the highest concentration of binge drinking behavior is referred to as the “Cyber Millenials.” This cluster is characterized as “the nation's tech-savvy singles

  19. Analytical Chemistry Laboratory. Progress report for FY 1996

    Energy Technology Data Exchange (ETDEWEB)

    Green, D.W.; Boparai, A.S.; Bowers, D.L.

    1996-12-01

    The purpose of this report is to summarize the activities of the Analytical Chemistry Laboratory (ACL) at Argonne National Laboratory (ANL) for Fiscal Year (FY) 1996. This annual report is the thirteenth for the ACL. It describes effort on continuing and new projects and contributions of the ACL staff to various programs at ANL. The ACL operates in the ANL system as a full-cost-recovery service center, but has a mission that includes a complementary research and development component: The Analytical Chemistry Laboratory will provide high-quality, cost-effective chemical analysis and related technical support to solve research problems of our clients -- Argonne National Laboratory, the Department of Energy, and others -- and will conduct world-class research and development in analytical chemistry and its applications. Because of the diversity of research and development work at ANL, the ACL handles a wide range of analytical chemistry problems. Some routine or standard analyses are done, but the ACL usually works with commercial laboratories if our clients require high-volume, production-type analyses. It is common for ANL programs to generate unique problems that require significant development of methods and adaption of techniques to obtain useful analytical data. Thus, much of the support work done by the ACL is very similar to our applied analytical chemistry research.

  20. Segmented trapped vortex cavity

    Science.gov (United States)

    Grammel, Jr., Leonard Paul (Inventor); Pennekamp, David Lance (Inventor); Winslow, Jr., Ralph Henry (Inventor)

    2010-01-01

    An annular trapped vortex cavity assembly segment includes a cavity forward wall, a cavity aft wall, and a cavity radially outer wall therebetween defining a cavity segment therein. A cavity opening extends between the forward and aft walls at a radially inner end of the assembly segment. Radially spaced apart pluralities of air injection first and second holes extend through the forward and aft walls, respectively. The segment may include first and second expansion joint features at distal first and second ends of the segment, respectively. The segment may include a forward subcomponent, including the cavity forward wall, attached to an aft subcomponent including the cavity aft wall. The forward and aft subcomponents include forward and aft portions of the cavity radially outer wall, respectively. A ring of the segments may be circumferentially disposed about an axis to form an annular segmented vortex cavity assembly.

  1. Performance Analysis of Segmentation of Hyperspectral Images Based on Color Image Segmentation

    Directory of Open Access Journals (Sweden)

    Praveen Agarwal

    2017-06-01

    Full Text Available Image segmentation is a fundamental approach in the field of image processing, and its use depends on the application. This paper proposes an original and simple segmentation strategy based on the EM approach that resolves many informatics problems with hyperspectral images observed by airborne sensors. In a first step, the input color textured image is simplified into a color image without texture. The final segmentation is then achieved simply by spatial color segmentation, using a feature vector containing the set of color values around the pixel to be classified. The spatial constraint takes into account the inherent spatial relationships of any image and its colors. This approach provides an effective PSNR for the segmented image. The results perform better when the segmented images are compared with the Watershed and Region Growing algorithms, and the approach provides effective segmentation for spectral images and medical images.
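
    EM-based segmentation of per-pixel feature vectors is commonly realized with a Gaussian mixture model; a minimal scikit-learn sketch in which random data stands in for a hyperspectral cube:

        import numpy as np
        from sklearn.mixture import GaussianMixture

        # Stand-in "image": 64 x 64 pixels with 8 spectral bands of random data
        rng = np.random.default_rng(0)
        cube = rng.random((64, 64, 8))

        # One feature vector per pixel; EM fits a Gaussian mixture to them
        pixels = cube.reshape(-1, cube.shape[-1])
        gmm = GaussianMixture(n_components=4, covariance_type="full",
                              random_state=0).fit(pixels)

        # The segmentation is the map of most probable mixture components
        labels = gmm.predict(pixels).reshape(cube.shape[:2])
        print(labels.shape, np.unique(labels))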

  2. LDR segmented mirror technology assessment study

    Science.gov (United States)

    Krim, M.; Russo, J.

    1983-01-01

    In the mid-1990s, NASA plans to orbit a giant telescope, whose aperture may be as great as 30 meters, for infrared and sub-millimeter astronomy. Its primary mirror will be deployed or assembled in orbit from a mosaic of possibly hundreds of mirror segments. Each segment must be shaped to precise curvature tolerances so that diffraction-limited performance will be achieved at 30 microns (nominal operating wavelength). All panels must lie within 1 micron of a theoretical surface described by the optical prescription of the telescope's primary mirror. To attain diffraction-limited performance, the issues of alignment and/or position sensing, position control to micron tolerances, and structural, thermal, and mechanical considerations for stowing, deploying, and erecting the reflector must be resolved. Radius of curvature precision influences panel size, shape, material, and type of construction. Two superior material choices emerged: fused quartz (sufficiently homogeneous with respect to thermal expansivity to permit a thin shell substrate to be drape molded between graphite dies to a precise enough off-axis asphere for optical finishing of the as-received segment) and Pyrex or Duran (less expensive than quartz and formable at lower temperatures). The optimal reflector panel size is between 1-1/2 and 2 meters. Making one two-meter mirror every two weeks requires new approaches to manufacturing off-axis parabolic or aspheric segments (drape molding on precision dies and subsequent finishing on a machine capable of nonrotationally symmetric surfaces). Proof-of-concept development programs were identified to prove the feasibility of the materials and manufacturing ideas.

  3. Examination of segmental average mass spectra from liquid chromatography-tandem mass spectrometric (LC-MS/MS) data enables screening of multiple types of protein modifications.

    Science.gov (United States)

    Liu, Nai-Yu; Lee, Hsiao-Hui; Chang, Zee-Fen; Tsay, Yeou-Guang

    2015-09-10

    It has been observed that a modified peptide and its non-modified counterpart, when analyzed with reverse phase liquid chromatography, usually share a very similar elution property [1-3]. Inasmuch as this property is common to many different types of protein modifications, we propose an informatics-based approach, featuring the generation of segmental average mass spectra ((sa)MS), that is capable of locating different types of modified peptides in two-dimensional liquid chromatography-mass spectrometric (LC-MS) data collected for regular protease digests from proteins in gels or solutions. To enable the localization of these peptides in the LC-MS map, we have implemented a set of computer programs, or the (sa)MS package, that perform the needed functions, including generating a complete set of segmental average mass spectra, compiling the peptide inventory from the Sequest/TurboSequest results, searching modified peptide candidates and annotating a tandem mass spectrum for final verification. Using ROCK2 as an example, our programs were applied to identify multiple types of modified peptides, such as phosphorylated and hexosylated ones, which particularly include those peptides that could have been ignored due to their peculiar fragmentation patterns and consequent low search scores. Hence, we demonstrate that, when complemented with peptide search algorithms, our approach and the entailed computer programs can add the sequence information needed for bolstering the confidence of data interpretation by the present analytical platforms and facilitate the mining of protein modification information out of complicated LC-MS/MS data. Copyright © 2015 Elsevier B.V. All rights reserved.
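
    Generating segmental average mass spectra amounts to averaging scan-level spectra within consecutive retention-time segments; a minimal sketch in which the array shapes are assumptions rather than the package's actual interface:

        import numpy as np

        def segmental_average_spectra(scans, seg_len=50):
            """scans: (n_scans, n_mz) intensity matrix from an LC-MS run.
            Returns one averaged spectrum per retention-time segment."""
            n = (scans.shape[0] // seg_len) * seg_len      # drop the remainder
            segments = scans[:n].reshape(-1, seg_len, scans.shape[1])
            return segments.mean(axis=1)                   # (n_segments, n_mz)

        # Toy run: 1000 scans x 500 m/z bins of random intensities
        saMS = segmental_average_spectra(np.random.rand(1000, 500))
        print(saMS.shape)                                  # (20, 500)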

  4. Waste minimization in analytical methods

    International Nuclear Information System (INIS)

    Green, D.W.; Smith, L.L.; Crain, J.S.; Boparai, A.S.; Kiely, J.T.; Yaeger, J.S. Schilling, J.B.

    1995-01-01

    The US Department of Energy (DOE) will require a large number of waste characterizations over a multi-year period to accomplish the Department's goals in environmental restoration and waste management. Estimates vary, but two million analyses annually are expected. The waste generated by the analytical procedures used for characterizations is a significant source of new DOE waste. Success in reducing the volume of secondary waste and the costs of handling this waste would significantly decrease the overall cost of this DOE program. Selection of appropriate analytical methods depends on the intended use of the resultant data. It is not always necessary to use a high-powered analytical method, typically at higher cost, to obtain data needed to make decisions about waste management. Indeed, for samples taken from some heterogeneous systems, the meaning of high accuracy becomes clouded if the data generated are intended to measure a property of this system. Among the factors to be considered in selecting the analytical method are the lower limit of detection, accuracy, turnaround time, cost, reproducibility (precision), interferences, and simplicity. Occasionally, there must be tradeoffs among these factors to achieve the multiple goals of a characterization program. The purpose of the work described here is to add waste minimization to the list of characteristics to be considered. In this paper the authors present results of modifying analytical methods for waste characterization to reduce both the cost of analysis and volume of secondary wastes. Although tradeoffs may be required to minimize waste while still generating data of acceptable quality for the decision-making process, they have data demonstrating that wastes can be reduced in some cases without sacrificing accuracy or precision

  5. Segmental Vitiligo.

    Science.gov (United States)

    van Geel, Nanja; Speeckaert, Reinhart

    2017-04-01

    Segmental vitiligo is characterized by its early onset, rapid stabilization, and unilateral distribution. Recent evidence suggests that segmental and nonsegmental vitiligo could represent variants of the same disease spectrum. Observational studies with respect to its distribution pattern point to a possible role of cutaneous mosaicism, whereas the original stated dermatomal distribution seems to be a misnomer. Although the exact pathogenic mechanism behind the melanocyte destruction is still unknown, increasing evidence has been published on the autoimmune/inflammatory theory of segmental vitiligo. Copyright © 2016 Elsevier Inc. All rights reserved.

  6. Analytical Chemistry Division annual progress report for period ending December 31, 1988

    Energy Technology Data Exchange (ETDEWEB)

    1988-05-01

    The Analytical Chemistry Division of Oak Ridge National Laboratory (ORNL) is a large and diversified organization. As such, it serves a multitude of functions for a clientele that exists both in and outside of ORNL. These functions fall into the following general categories: (1) Analytical Research, Development, and Implementation. The division maintains a program to conceptualize, investigate, develop, assess, improve, and implement advanced technology for chemical and physicochemical measurements. Emphasis is on problems and needs identified with ORNL and Department of Energy (DOE) programs; however, attention is also given to advancing the analytical sciences themselves. (2) Programmatic Research, Development, and Utilization. The division carries out a wide variety of chemical work that typically involves analytical research and/or development plus the utilization of analytical capabilities to expedite programmatic interests. (3) Technical Support. The division performs chemical and physicochemical analyses of virtually all types. The Analytical Chemistry Division is organized into four major sections, each of which may carry out any of the three types of work mentioned above. Chapters 1 through 4 of this report highlight progress within the four sections during the period January 1 to December 31, 1988. A brief discussion of the division's role in an especially important environmental program is given in Chapter 5. Information about quality assurance, safety, and training programs is presented in Chapter 6, along with a tabulation of analyses rendered. Publications, oral presentations, professional activities, educational programs, and seminars are cited in Chapters 7 and 8.

  7. Analytical Chemistry Division annual progress report for period ending December 31, 1988

    International Nuclear Information System (INIS)

    1988-05-01

    The Analytical Chemistry Division of Oak Ridge National Laboratory (ORNL) is a large and diversified organization. As such, it serves a multitude of functions for a clientele that exists both in and outside of ORNL. These functions fall into the following general categories: (1) Analytical Research, Development, and Implementation. The division maintains a program to conceptualize, investigate, develop, assess, improve, and implement advanced technology for chemical and physicochemical measurements. Emphasis is on problems and needs identified with ORNL and Department of Energy (DOE) programs; however, attention is also given to advancing the analytical sciences themselves. (2) Programmatic Research, Development, and Utilization. The division carries out a wide variety of chemical work that typically involves analytical research and/or development plus the utilization of analytical capabilities to expedite programmatic interests. (3) Technical Support. The division performs chemical and physicochemical analyses of virtually all types. The Analytical Chemistry Division is organized into four major sections, each of which may carry out any of the three types of work mentioned above. Chapters 1 through 4 of this report highlight progress within the four sections during the period January 1 to December 31, 1988. A brief discussion of the division's role in an especially important environmental program is given in Chapter 5. Information about quality assurance, safety, and training programs is presented in Chapter 6, along with a tabulation of analyses rendered. Publications, oral presentations, professional activities, educational programs, and seminars are cited in Chapters 7 and 8

  8. Segmental vitiligo with segmental morphea: An autoimmune link?

    Directory of Open Access Journals (Sweden)

    Pravesh Yadav

    2014-01-01

    Full Text Available An 18-year-old girl with segmental vitiligo involving the left side of the trunk and the left upper limb, together with segmental morphea involving the right side of the trunk and the right upper limb without any deeper involvement, is illustrated. There was no history of preceding drug intake, vaccination, trauma, radiation therapy, infection, or hormonal therapy. A family history of stable vitiligo in her brother and a history of type II diabetes mellitus in the father were elicited. Screening for autoimmune diseases and antithyroid antibody was negative. An autoimmune link explaining the co-occurrence has been proposed. Cutaneous mosaicism could explain the presence of both pathologies in a segmental distribution.

  9. Market Segmentation in Business Technology Base: The Case of Segmentation of Sparkling

    Directory of Open Access Journals (Sweden)

    Valéria Riscarolli

    2014-08-01

    Full Text Available A common premise of market segmentation for products and services places consumer behavior at the center of segment definition. Is this the segmentation logic used by small technology-based companies? In this article we aim to determine the market segmentation principles used by a vitiwinery company, our research object. This company is recognized for the excellence of its products, both in the domestic market and in foreign markets spanning 13 countries. The research method is a case study, drawing on information from the company's CEOs crossed with primary information from observation and from the company's formal registries and documents. We examine the segmentation of the sparkling wine market. The main results indicate that the studied winery considers only technological elements as the basis for building a market segment. One may conclude that market segmentation for this company is based upon technological command of sparkling wine production, aligned with a premium-price policy. The company's directors believe that, as the sparkling wine market is still incipient in the country, market segments will form and consolidate as consumers' tasting preferences evolve, depending on technologies that boost sparkling wine quality.

  10. Making Decisions by Analytical Chemistry

    DEFF Research Database (Denmark)

    Andersen, Jens Enevold Thaulov

    It has long been recognized that results of analytical chemistry are not flawless, owing to the fact that professional laboratories and research laboratories analysing the same type of samples by the same type of instruments are likely to obtain significantly different results. The European... These discrepancies are very unfortunate because erroneous conclusions may arise from an otherwise meticulous and dedicated effort of research staff. This may eventually lead to unreliable conclusions, thus jeopardizing investigations of environmental monitoring, climate changes, food safety, clinical chemistry, forensics and other fields of science where analytical chemistry is the key instrument of decision making. In order to elucidate the potential origin of the statistical variations found among laboratories, a major program was undertaken including several analytical technologies where the purpose...

  11. Fluence map segmentation

    International Nuclear Information System (INIS)

    Rosenwald, J.-C.

    2008-01-01

    The lecture addressed the following topics: 'Interpreting' the fluence map; The sequencer; Reasons for differences between desired and actual fluence maps; Principle of 'Step and Shoot' segmentation; Large number of solutions for a given fluence map; Optimizing 'step and shoot' segmentation; The interdigitation constraint; Main algorithms; Conclusions on segmentation algorithms (static mode); Optimizing intensity levels and monitor units; Sliding window sequencing; Synchronization to avoid the tongue-and-groove effect; Accounting for physical characteristics of the MLC; Importance of corrections for leaf transmission and offset; Accounting for MLC mechanical constraints; The 'complexity' factor; Incorporating the sequencing into the optimization algorithm; Data transfer to the treatment machine; Interface between the R&V (record and verify) system and the accelerator; and Conclusions on fluence map segmentation (segmentation is part of the overall inverse planning procedure; 'Step and Shoot' and 'Dynamic' options are available for most TPS, depending on the accelerator model; the segmentation phase tends to come into the optimization loop; the physical characteristics of the MLC have a large influence on the final dose distribution; the IMRT plans (MU and relative dose distribution) must be carefully validated). (P.A.)

  12. Strategic market segmentation

    Directory of Open Access Journals (Sweden)

    Maričić Branko R.

    2015-01-01

    Full Text Available Strategic planning of marketing activities is the basis of business success in the modern business environment. Customers are not homogeneous in their preferences and expectations. Formulating an adequate marketing strategy, focused on the realization of the company's strategic objectives, requires a segmented approach to the market that appreciates differences in the expectations and preferences of customers. One significant activity in the strategic planning of marketing activities is market segmentation. Strategic planning imposes a need to plan marketing activities according to strategically important segments on a long-term basis. At the same time, there is a need to revise and adapt marketing activities on a short-term basis. There are a number of criteria on which market segmentation can be based. The paper considers the effectiveness and efficiency of different market segmentation criteria based on empirical research of customer expectations and preferences. The analysis includes traditional criteria and criteria based on a behavioral model. The research implications are analyzed from the perspective of selecting the most adequate market segmentation criteria in the strategic planning of marketing activities.

  13. Why segmentation matters: Experience-driven segmentation errors impair "morpheme" learning.

    Science.gov (United States)

    Finn, Amy S; Hudson Kam, Carla L

    2015-09-01

    We ask whether an adult learner's knowledge of their native language impedes statistical learning in a new language beyond just word segmentation (as previously shown). In particular, we examine the impact of native-language word-form phonotactics on learners' ability to segment words into their component morphemes and to learn phonologically triggered variation of morphemes. We find that learning is impaired when words and component morphemes are structured to conflict with a learner's native-language phonotactic system, but not when native-language phonotactics do not conflict with morpheme boundaries in the artificial language. A learner's native-language knowledge can therefore have a cascading impact, affecting word segmentation and the morphological variation that relies upon proper segmentation. These results show that getting word segmentation right early in learning is deeply important for learning other aspects of language, even those (morphology) that are known to pose a great difficulty for adult language learners. (c) 2015 APA, all rights reserved.

  14. Computing the zeros of analytic functions

    CERN Document Server

    Kravanja, Peter

    2000-01-01

    Computing all the zeros of an analytic function and their respective multiplicities, locating clusters of zeros of analytic functions, computing zeros and poles of meromorphic functions, and solving systems of analytic equations are problems in computational complex analysis that lead to a rich blend of mathematics and numerical analysis. This book treats these four problems in a unified way. It contains not only theoretical results (based on formal orthogonal polynomials or rational interpolation) but also numerical analysis and algorithmic aspects, implementation heuristics, and polished software (the package ZEAL) that is available via the CPC Program Library. Graduate students and researchers in numerical mathematics will find this book very readable.
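
    The central tool behind such computations is the argument principle, N = (1/(2*pi*i)) * the contour integral of f'(z)/f(z) dz, which counts the zeros (with multiplicity) inside the contour; a direct numerical evaluation on a circle sketches the idea underlying packages such as ZEAL:

        import numpy as np

        def count_zeros(f, df, center=0.0, radius=2.0, n=4000):
            """Argument principle: number of zeros of f (with multiplicity)
            inside a circle, via (1/(2*pi*i)) * integral of f'/f."""
            theta = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
            z = center + radius * np.exp(1j * theta)
            dz = 1j * radius * np.exp(1j * theta) * (2.0 * np.pi / n)
            integral = np.sum(df(z) / f(z) * dz)
            return integral / (2.0j * np.pi)

        # f(z) = z^3 - 1 has three simple zeros (the cube roots of unity)
        f  = lambda z: z**3 - 1
        df = lambda z: 3 * z**2
        print(round(count_zeros(f, df, radius=2.0).real))   # -> 3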

  15. Program to develop analytical tools for environmental and safety assessment of nuclear material shipping container systems

    International Nuclear Information System (INIS)

    Butler, T.A.

    1978-11-01

    This paper describes a program for developing analytical techniques to evaluate the response of nuclear material shipping containers to severe accidents. Both lumped-mass and finite element techniques are employed to predict shipping container and shipping container-carrier response to impact. The general impact problem is computationally expensive because of its nonlinear, three-dimensional nature. This expense is minimized by using approximate models to parametrically identify critical cases before more exact analyses are performed. The computer codes developed for solving the problem are being experimentally substantiated with test data from full-scale and scale-model container drop tests. 6 figures, 1 table

  16. Tank 241-S-106, cores 183, 184 and 187 analytical results for the final report

    International Nuclear Information System (INIS)

    Esch, R.A.

    1997-01-01

    This document is the final laboratory report for tank 241-S-106 push mode core segments collected between February 12, 1997 and March 21, 1997. The segments were subsampled and analyzed in accordance with the Tank Push Mode Core Sampling and Analysis Plan (TSAP), the Tank Safety Screening Data Quality Objective (Safety DQO), the Historical Model Evaluation Data Requirements (Historical DQO) and the Data Quality Objective to Support Resolution of the Organic Complexant Safety Issue (Organic DQO). The analytical results are included in Table 1. Six of the twenty-four subsamples submitted for the differential scanning calorimetry (DSC) analysis exceeded the notification limit of 480 Joules/g stated in the DQO. Appropriate notifications were made. Total Organic Carbon (TOC) analyses were performed on all samples that produced exotherms during the DSC analysis. All results were less than the notification limit of three weight percent TOC. No cyanide analysis was performed, per agreement with the Tank Safety Program. None of the samples submitted for Total Alpha Activity exceeded notification limits as stated in the TSAP. Statistical evaluation of results by calculating the 95% upper confidence limit is not performed by the 222-S Laboratory and is not considered in this report. No core composites were created because there was insufficient solid material from any of the three core sampling events to generate a composite that would be representative of the tank contents

  17. Reactor Section standard analytical methods. Part 1

    Energy Technology Data Exchange (ETDEWEB)

    Sowden, D.

    1954-07-01

    The Standard Analytical Methods manual was prepared for the purpose of consolidating and standardizing all current analytical methods and procedures used in the Reactor Section for routine chemical analyses. All procedures are established in accordance with accepted practice and the general analytical methods specified by the Engineering Department. These procedures are specifically adapted to the requirements of the water treatment process and related operations. The methods included in this manual are organized alphabetically within the following five sections, which correspond to the various phases of the analytical control program in which these analyses are to be used: water analyses, essential material analyses, cotton plug analyses, boiler water analyses, and miscellaneous control analyses.

  18. Developing automated analytical methods for scientific environments using LabVIEW.

    Science.gov (United States)

    Wagner, Christoph; Armenta, Sergio; Lendl, Bernhard

    2010-01-15

    The development of new analytical techniques often requires the building of specially designed devices, each requiring its own dedicated control software. Especially in the research and development phase, LabVIEW has proven to be a highly useful tool for developing this software. Yet it is still common practice to develop individual solutions for different instruments. In contrast to this, we present here a single LabVIEW-based program that can be directly applied to various analytical tasks without having to change the program code. Driven by a set of simple script commands, it can control a whole range of instruments, from valves and pumps to full-scale spectrometers. Fluid sample (pre-)treatment and separation procedures can thus be flexibly coupled to a wide range of analytical detection methods. Here, the capabilities of the program have been demonstrated by using it to control both a sequential injection analysis - capillary electrophoresis (SIA-CE) system with UV detection, and an analytical setup for studying the inhibition of enzymatic reactions using a SIA system with FTIR detection.

  19. Application of Micro-segmentation Algorithms to the Healthcare Market:A Case Study

    Energy Technology Data Exchange (ETDEWEB)

    Sukumar, Sreenivas R [ORNL; Aline, Frank [ORNL

    2013-01-01

    We draw inspiration from the recent success of loyalty programs and targeted personalized marketing campaigns of retail companies such as Kroger, Netflix, etc. to understand beneficiary behaviors in the healthcare system. We posit that we can emulate the financial success these companies have achieved by better understanding and predicting customer behaviors and translating such success to healthcare operations. Toward that goal, we survey current practices in market micro-segmentation research and analyze health insurance claims data using those algorithms. We present results and insights from micro-segmentation of the beneficiaries using different techniques and discuss how the interpretation can assist with matching cost-effective insurance payment models to the beneficiary micro-segments.
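
    Micro-segmentation of beneficiaries from claims-derived features is, at its core, a clustering exercise; a minimal scikit-learn sketch in which the features and their distribution are invented:

        import numpy as np
        from sklearn.cluster import KMeans
        from sklearn.preprocessing import StandardScaler

        # Invented per-beneficiary features derived from claims history:
        # [annual_cost, n_claims, n_chronic_conditions, er_visits]
        rng = np.random.default_rng(1)
        X = rng.gamma(shape=2.0, scale=1.0, size=(500, 4))

        # Standardize, then cluster into micro-segments
        Xs = StandardScaler().fit_transform(X)
        km = KMeans(n_clusters=5, n_init=10, random_state=0).fit(Xs)

        # Profile each segment by its mean raw features
        for seg in range(5):
            print(seg, X[km.labels_ == seg].mean(axis=0).round(2))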

  20. Development of CAD implementing the algorithm of boundary elements’ numerical analytical method

    Directory of Open Access Journals (Sweden)

    Yulia V. Korniyenko

    2015-03-01

    Full Text Available Until recently, the algorithms of the numerical-analytical boundary element method had been implemented as programs written in the MATLAB environment language. Each program had a local character, i.e., it was used to solve one particular problem: calculation of a beam, frame, arch, etc. Constructing matrices in these programs was carried out "manually" and was therefore time-consuming. The purpose of this research was a reasoned choice of programming language for developing a new CAD system that implements the algorithm of the numerical-analytical boundary element method and provides visualization tools for the initial objects and the calculation results. The research conducted shows that, among the wide variety of programming languages, the most efficient one for developing a CAD system employing the numerical-analytical boundary element method algorithm is Java. This language provides tools not only for developing the calculating part of the CAD system, but also for building the graphical interface for constructing geometrical models and interpreting the calculated results.

  1. Advances in segmentation modeling for health communication and social marketing campaigns.

    Science.gov (United States)

    Albrecht, T L; Bryant, C

    1996-01-01

    Large-scale communication campaigns for health promotion and disease prevention involve analysis of audience demographic and psychographic factors for effective message targeting. A variety of segmentation modeling techniques, including tree-based methods such as Chi-squared Automatic Interaction Detection (CHAID) and logistic regression, are used to identify meaningful target groups within a large sample or population (N = 750-1,000+). Such groups are based on statistically significant combinations of factors (e.g., gender, marital status, and personality predispositions). The identification of groups or clusters facilitates message design in order to address the particular needs, attention patterns, and concerns of audience members within each group. We review current segmentation techniques and their contributions to conceptual development and cost-effective decision making. Examples are provided from a major study in which these strategies were used, the Texas Women, Infants and Children (WIC) Program's Comprehensive Social Marketing Program.
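
    As a sketch of the tree-based segmentation the abstract mentions: CHAID itself is not available in scikit-learn, so the example below uses a CART decision tree as an analogous tree-based segmenter; the demographic features and response variable are invented placeholders.

    ```python
    # Tree-based audience segmentation sketch; each leaf is a candidate segment.
    import numpy as np
    from sklearn.tree import DecisionTreeClassifier, export_text

    rng = np.random.default_rng(1)
    n = 1000
    gender = rng.integers(0, 2, n)
    married = rng.integers(0, 2, n)
    age = rng.integers(18, 65, n)
    # Hypothetical campaign response driven by combinations of factors
    respond = ((married == 1) & (age > 30) | (gender == 1) & (age <= 30)).astype(int)

    X = np.column_stack([gender, married, age])
    tree = DecisionTreeClassifier(max_depth=3, min_samples_leaf=50).fit(X, respond)
    print(export_text(tree, feature_names=["gender", "married", "age"]))
    ```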

  2. Visual analytics in medical education: impacting analytical reasoning and decision making for quality improvement.

    Science.gov (United States)

    Vaitsis, Christos; Nilsson, Gunnar; Zary, Nabil

    2015-01-01

    The medical curriculum is the main tool representing the entire undergraduate medical education. Due to its complexity and multilayered structure, it is of limited use to teachers in medical education for quality improvement purposes. In this study we evaluated three visualizations of curriculum data from a pilot course, using teachers from an undergraduate medical program and applying visual analytics methods. We found that visual analytics can positively impact analytical reasoning and decision making in medical education by realizing variables in ways that enhance human perception and cognition of complex curriculum data. The positive results derived from our small-scale evaluation of a medical curriculum signify the need to expand this method to an entire medical curriculum. As our approach sustains low levels of complexity, it opens a promising new direction in medical education informatics research.

  3. Documented Safety Analysis for the B695 Segment

    Energy Technology Data Exchange (ETDEWEB)

    Laycak, D

    2008-09-11

    This Documented Safety Analysis (DSA) was prepared for the Lawrence Livermore National Laboratory (LLNL) Building 695 (B695) Segment of the Decontamination and Waste Treatment Facility (DWTF). The report provides comprehensive information on design and operations, including safety programs and safety structures, systems and components to address the potential process-related hazards, natural phenomena, and external hazards that can affect the public, facility workers, and the environment. Consideration is given to all modes of operation, including the potential for both equipment failure and human error. The facilities known collectively as the DWTF are used by LLNL's Radioactive and Hazardous Waste Management (RHWM) Division to store and treat regulated wastes generated at LLNL. RHWM generally processes low-level radioactive waste with no, or extremely low, concentrations of transuranics (e.g., much less than 100 nCi/g). Wastes processed often contain only depleted uranium and beta- and gamma-emitting nuclides, e.g., ⁹⁰Sr, ¹³⁷Cs, or ³H. The mission of the B695 Segment centers on container storage, lab-packing, repacking, overpacking, bulking, sampling, waste transfer, and waste treatment. The B695 Segment is used for storage of radioactive waste (including transuranic and low-level), hazardous, nonhazardous, mixed, and other waste. Storage of hazardous and mixed waste in B695 Segment facilities is in compliance with the Resource Conservation and Recovery Act (RCRA). LLNL is operated by the Lawrence Livermore National Security, LLC, for the Department of Energy (DOE). The B695 Segment is operated by the RHWM Division of LLNL. Many operations in the B695 Segment are performed under a Resource Conservation and Recovery Act (RCRA) operation plan, similar to commercial treatment operations with best demonstrated available technologies. The buildings of the B695 Segment were designed and built considering such operations, using proven building

  4. Documented Safety Analysis for the B695 Segment

    International Nuclear Information System (INIS)

    Laycak, D.

    2008-01-01

    This Documented Safety Analysis (DSA) was prepared for the Lawrence Livermore National Laboratory (LLNL) Building 695 (B695) Segment of the Decontamination and Waste Treatment Facility (DWTF). The report provides comprehensive information on design and operations, including safety programs and safety structures, systems and components to address the potential process-related hazards, natural phenomena, and external hazards that can affect the public, facility workers, and the environment. Consideration is given to all modes of operation, including the potential for both equipment failure and human error. The facilities known collectively as the DWTF are used by LLNL's Radioactive and Hazardous Waste Management (RHWM) Division to store and treat regulated wastes generated at LLNL. RHWM generally processes low-level radioactive waste with no, or extremely low, concentrations of transuranics (e.g., much less than 100 nCi/g). Wastes processed often contain only depleted uranium and beta- and gamma-emitting nuclides, e.g., ⁹⁰Sr, ¹³⁷Cs, or ³H. The mission of the B695 Segment centers on container storage, lab-packing, repacking, overpacking, bulking, sampling, waste transfer, and waste treatment. The B695 Segment is used for storage of radioactive waste (including transuranic and low-level), hazardous, nonhazardous, mixed, and other waste. Storage of hazardous and mixed waste in B695 Segment facilities is in compliance with the Resource Conservation and Recovery Act (RCRA). LLNL is operated by the Lawrence Livermore National Security, LLC, for the Department of Energy (DOE). The B695 Segment is operated by the RHWM Division of LLNL. Many operations in the B695 Segment are performed under a Resource Conservation and Recovery Act (RCRA) operation plan, similar to commercial treatment operations with best demonstrated available technologies. The buildings of the B695 Segment were designed and built considering such operations, using proven building systems, and keeping

  5. Deformable meshes for medical image segmentation accurate automatic segmentation of anatomical structures

    CERN Document Server

    Kainmueller, Dagmar

    2014-01-01

    Segmentation of anatomical structures in medical image data is an essential task in clinical practice. Dagmar Kainmueller introduces methods for accurate, fully automatic segmentation of anatomical structures in 3D medical image data. The author's core methodological contribution is a novel deformation model that overcomes limitations of state-of-the-art Deformable Surface approaches, hence allowing for accurate segmentation of tip- and ridge-shaped features of anatomical structures. As for practical contributions, she proposes application-specific segmentation pipelines for a range of anatom

  6. Symbolic computation of analytic approximate solutions for nonlinear fractional differential equations

    Science.gov (United States)

    Lin, Yezhi; Liu, Yinping; Li, Zhibin

    2013-01-01

    The Adomian decomposition method (ADM) is one of the most effective methods for constructing analytic approximate solutions of nonlinear differential equations. In this paper, based on the new definition of the Adomian polynomials, Rach (2008) [22], the Adomian decomposition method, and the Padé approximants technique, a new algorithm is proposed to construct analytic approximate solutions of nonlinear fractional differential equations with initial or boundary conditions. Furthermore, a MAPLE software package is developed to implement this new algorithm, which is user-friendly and efficient. One only needs to input the system equation, the initial or boundary conditions, and several necessary parameters; the package will then automatically deliver the analytic approximate solutions within a few seconds. Several different types of examples are given to illustrate the scope and demonstrate the validity of the package, especially for non-smooth initial value problems. The package provides a helpful and easy-to-use tool in science and engineering simulations. Program summary: Program title: ADMP. Catalogue identifier: AENE_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AENE_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html. No. of lines in distributed program, including test data, etc.: 12011. No. of bytes in distributed program, including test data, etc.: 575551. Distribution format: tar.gz. Programming language: MAPLE R15. Computer: PCs. Operating system: Windows XP/7. RAM: 2 Gbytes. Classification: 4.3. Nature of problem: Constructing analytic approximate solutions of nonlinear fractional differential equations with initial or boundary conditions; non-smooth initial value problems can be solved by this program. Solution method: Based on the new definition of the Adomian polynomials [1], the Adomian decomposition method, and the Padé approximants technique.
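
    The core iteration of the ADM is easy to demonstrate outside of MAPLE. The sketch below applies the generic definition of the Adomian polynomials to the toy initial value problem u'(t) = u(t)², u(0) = 1, whose exact solution is 1/(1-t); it is a generic illustration of the method, not the ADMP package itself.

    ```python
    # Adomian decomposition for u' = u**2, u(0) = 1; partial sums approximate 1/(1-t).
    import sympy as sp

    t, lam = sp.symbols("t lambda")
    N = 6                        # number of decomposition terms
    u = [sp.Integer(1)]          # u_0 = u(0)

    for n in range(N - 1):
        # A_n = (1/n!) d^n/dlam^n f(sum_k u_k lam^k) at lam = 0, with f(u) = u**2
        series = sum(u[k] * lam**k for k in range(len(u)))
        A_n = sp.diff(series**2, lam, n).subs(lam, 0) / sp.factorial(n)
        u.append(sp.integrate(A_n, (t, 0, t)))   # u_{n+1} = int_0^t A_n dt

    print(sp.expand(sum(u)))                 # 1 + t + t**2 + ... + t**5
    print(sp.series(1 / (1 - t), t, 0, N))   # matches the exact Taylor series
    ```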

  7. Who puts the most energy into energy conservation? A segmentation of energy consumers based on energy-related behavioral characteristics

    International Nuclear Information System (INIS)

    Sütterlin, Bernadette; Brunner, Thomas A.; Siegrist, Michael

    2011-01-01

    The present paper aims to identify and describe different types of energy consumers in a more comprehensive way than previous segmentation studies using cluster analysis. Energy consumers were segmented based on their energy-related behavioral characteristics. In addition to purchase- and curtailment-related energy-saving behavior, consumer classification was also based on acceptance of policy measures and energy-related psychosocial factors, so the behavioral segmentation base used was more comprehensive than in other studies. Furthermore, differentiating between the energy-saving purchase of daily products, such as food, and of energy-efficient appliances allowed a more differentiated characterization of the energy consumer segments. The cluster analysis revealed six energy consumer segments: the idealistic, the selfless inconsequent, the thrifty, the materialistic, the convenience-oriented indifferent, and the problem-aware well-being-oriented energy consumer. Findings emphasize that using a broader and more distinct behavioral base is crucial for an adequate and differentiated description of energy consumer types. The paper concludes by highlighting the most promising energy consumer segments and discussing possible segment-specific marketing and policy strategies. - Highlights: ► By applying a cluster-analytic approach, new energy consumer segments are identified. ► A comprehensive, differentiated description of the different energy consumer types is provided. ► A distinction between purchase of daily products and energy-efficient appliances is essential. ► Behavioral variables are a more suitable base for segmentation than general characteristics.
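
    As an illustration of the cluster-analytic approach, the sketch below applies Ward's hierarchical clustering, a common choice for deriving consumer segments, and cuts the tree into six clusters; the behavioral variables are invented stand-ins for the study's survey items.

    ```python
    # Hierarchical clustering sketch for consumer segmentation; data are synthetic.
    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster

    rng = np.random.default_rng(2)
    # Hypothetical standardized scores: curtailment behavior, energy-saving
    # purchases, policy acceptance, environmental concern
    X = rng.normal(size=(500, 4))

    Z = linkage(X, method="ward")                      # agglomerative cluster tree
    labels = fcluster(Z, t=6, criterion="maxclust")    # cut into six segments

    for k in range(1, 7):                              # mean profile per segment
        print(k, X[labels == k].mean(axis=0).round(2))
    ```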

  8. Synthetic salt cake standards for analytical laboratory quality control

    International Nuclear Information System (INIS)

    Schilling, A.E.; Miller, A.G.

    1980-01-01

    The validation of analytical results in the characterization of Hanford Nuclear Defense Waste requires the preparation of synthetic waste for standard reference materials. Two independent synthetic salt cake standards have been prepared to monitor laboratory quality control for the chemical characterization of high-level salt cake and sludge waste in support of Rockwell Hanford Operations' High-Level Waste Management Program. Each synthetic salt cake standard contains 15 characterized chemical species and was subjected to an extensive verification/characterization program in two phases. Phase I consisted of an initial verification of each analyte in salt cake form in order to determine the current analytical capability for chemical analysis. Phase II consisted of a final characterization of those chemical species in solution form where conflicting verification data were observed. The 95 percent confidence interval on the mean for the following analytes within each standard is provided: sodium, nitrate, nitrite, phosphate, carbonate, sulfate, hydroxide, chromate, chloride, fluoride, aluminum, plutonium-239/240, strontium-90, cesium-137, and water.
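
    The reported quantity for each analyte is the 95 percent confidence interval on the mean; a minimal worked example of that computation, with fabricated replicate values purely to illustrate the formula mean ± t(0.975, n-1)·s/√n, is shown below.

    ```python
    # 95% confidence interval on the mean from replicate measurements.
    import numpy as np
    from scipy import stats

    x = np.array([12.1, 11.8, 12.4, 12.0, 11.9, 12.2])  # hypothetical wt% sodium
    n = x.size
    mean, s = x.mean(), x.std(ddof=1)
    half_width = stats.t.ppf(0.975, df=n - 1) * s / np.sqrt(n)
    print(f"{mean:.2f} +/- {half_width:.2f} (95% CI)")
    ```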

  9. Experience with Dismantling of the Analytic Cell in the JRTF Decommissioning Program

    International Nuclear Information System (INIS)

    Annoh, Akio; Nemoto, Koichi; Tajiri, Hideo; Saito, Keiichiro; Miyajima, Kazutoshi; Myodo, Masato

    2003-01-01

    The analytic cell was mainly used for process control analysis of the reprocessing process and for the measurement of the fuel burn-up ratio in JAERI's Reprocessing Test Facility (JRTF). The analytic cell was heavily shielded and equipped with a conveyor. The cell was alpha- and beta/gamma-contaminated. For the dismantling of analytic cells, it is very important to establish a method to remove the heavy shielding safely and to reduce exposure. First, a greenhouse was set up to prevent the spread of contamination; next, the analytic cell was dismantled. Depending on the contamination conditions, the workers wore protective suits such as air-ventilated suits to prevent internal exposure, and vinyl chloride aprons and lead aprons to reduce external exposure. From the work carried out, various data were obtained and entered into the database, such as the manpower needed for the activities, the collective dose to workers from external exposure, the amount of radioactive wastes, and the relation between the weight of the shield and its dismantling efficiency. The method of dismantling and the experience with the dismantling of the analytic cell in the JRTF, carried out during 2001 and 2002, are described in this paper.

  10. Speaker segmentation and clustering

    OpenAIRE

    Kotti, M; Moschou, V; Kotropoulos, C

    2008-01-01

    This survey focuses on two challenging speech processing topics, namely: speaker segmentation and speaker clustering. Speaker segmentation aims at finding speaker change points in an audio stream, whereas speaker clustering aims at grouping speech segments based on speaker characteristics. Model-based, metric-based, and hybrid speaker segmentation algorithms are reviewed. Concerning speaker...
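
    One family of metric-based detectors covered by such surveys compares Gaussian models of adjacent segments with the Bayesian Information Criterion (BIC); a minimal sketch follows, with random feature vectors standing in for real MFCC frames.

    ```python
    # Delta-BIC speaker change test between two feature segments.
    import numpy as np

    def delta_bic(X: np.ndarray, Y: np.ndarray, lam: float = 1.0) -> float:
        """Positive values favor a speaker change between X and Y."""
        Z = np.vstack([X, Y])
        n, d = Z.shape
        def nlog_det(A):                 # (N/2) * log|cov(A)|
            return 0.5 * len(A) * np.linalg.slogdet(np.cov(A, rowvar=False))[1]
        penalty = 0.5 * lam * (d + d * (d + 1) / 2) * np.log(n)
        return nlog_det(Z) - nlog_det(X) - nlog_det(Y) - penalty

    rng = np.random.default_rng(3)
    same = delta_bic(rng.normal(0, 1, (200, 12)), rng.normal(0, 1, (200, 12)))
    diff = delta_bic(rng.normal(0, 1, (200, 12)), rng.normal(2, 1, (200, 12)))
    print(f"same speaker: {same:.1f}, different speakers: {diff:.1f}")
    ```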

  11. Segmentation of the Infant Food Market

    OpenAIRE

    Hrůzová, Daniela

    2015-01-01

    The theoretical part covers general market segmentation, namely the marketing importance of differences among consumers, the essence of market segmentation, its main conditions and the process of segmentation, which consists of four consecutive phases - defining the market, determining important criteria, uncovering segments and developing segment profiles. The segmentation criteria, segmentation approaches, methods and techniques for the process of market segmentation are also described in t...

  12. Pancreas and cyst segmentation

    Science.gov (United States)

    Dmitriev, Konstantin; Gutenko, Ievgeniia; Nadeem, Saad; Kaufman, Arie

    2016-03-01

    Accurate segmentation of abdominal organs from medical images is an essential part of surgical planning and computer-aided disease diagnosis. Many existing algorithms are specialized for the segmentation of healthy organs. Cystic pancreas segmentation is especially challenging due to its low-contrast boundaries, variability in shape and location, and the stage of the pancreatic cancer. We present a semi-automatic segmentation algorithm for pancreata with cysts. In contrast to healthy pancreata, which are amenable to the atlas/statistical shape approaches used by existing automatic segmentation methods, a pancreas with cysts can have even higher variability in shape due to the size and shape of the cyst(s). Hence, fine results are better attained with semi-automatic, steerable approaches. We use a novel combination of random walker and region growing approaches to delineate the boundaries of the pancreas and cysts with respective best Dice coefficients of 85.1% and 86.7%, and respective best volumetric overlap errors of 26.0% and 23.5%. Results show that the proposed algorithm for pancreas and pancreatic cyst segmentation is accurate and stable.
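
    The accuracy figures above are Dice coefficients and volumetric overlap errors; the sketch below shows how both are computed from binary masks (the masks here are synthetic).

    ```python
    # Dice coefficient and volumetric overlap error (VOE) between binary masks.
    import numpy as np

    def dice(a: np.ndarray, b: np.ndarray) -> float:
        inter = np.logical_and(a, b).sum()
        return 2.0 * inter / (a.sum() + b.sum())

    def voe(a: np.ndarray, b: np.ndarray) -> float:
        inter = np.logical_and(a, b).sum()
        union = np.logical_or(a, b).sum()
        return 1.0 - inter / union

    rng = np.random.default_rng(4)
    seg = rng.random((64, 64, 64)) > 0.5    # hypothetical segmentation mask
    gt = seg.copy()
    gt[:8] = ~gt[:8]                        # ground truth differing in one slab
    print(f"Dice: {dice(seg, gt):.3f}, VOE: {voe(seg, gt):.3f}")
    ```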

  13. Phasing multi-segment undulators

    International Nuclear Information System (INIS)

    Chavanne, J.; Elleaume, P.; Vaerenbergh, P. Van

    1996-01-01

    An important issue in the manufacture of multi-segment undulators as a source of synchrotron radiation or as a free-electron laser (FEL) is the phasing between successive segments. The state of the art is briefly reviewed, after which a novel pure permanent magnet phasing section that is passive and does not require any current is presented. The phasing section allows the introduction of a 6 mm longitudinal gap between each segment, resulting in complete mechanical independence and reduced magnetic interaction between segments. The tolerance of the longitudinal positioning of one segment with respect to the next is found to be 2.8 times lower than that of conventional phasing. The spectrum at all gaps and useful harmonics is almost unchanged when compared with a single-segment undulator of the same total length.

  14. Supercritical fluid analytical methods

    International Nuclear Information System (INIS)

    Smith, R.D.; Kalinoski, H.T.; Wright, B.W.; Udseth, H.R.

    1988-01-01

    Supercritical fluids are providing the basis for new and improved methods across a range of analytical technologies. New methods are being developed to allow the detection and measurement of compounds that are incompatible with conventional analytical methodologies. Characterization of process and effluent streams for synfuel plants requires instruments capable of detecting and measuring high-molecular-weight compounds, polar compounds, or other materials that are generally difficult to analyze. The purpose of this program is to develop and apply new supercritical fluid techniques for extraction, separation, and analysis. These new technologies will be applied to previously intractable synfuel process materials and to complex mixtures resulting from their interaction with environmental and biological systems

  15. FRAMEWORK FOR COMPARING SEGMENTATION ALGORITHMS

    Directory of Open Access Journals (Sweden)

    G. Sithole

    2015-05-01

    The notion of a ‘best’ segmentation does not exist. A segmentation algorithm is chosen based on the features it yields, the properties of the segments (point sets) it generates, and the complexity of the algorithm. The segmentation is then assessed based on a variety of metrics such as homogeneity, heterogeneity, fragmentation, etc. Even after an algorithm is chosen, its performance is still uncertain, because the landscape/scenarios represented in a point cloud have a strong influence on the eventual segmentation. Thus selecting an appropriate segmentation algorithm is a process of trial and error. Automating the selection of segmentation algorithms and their parameters first requires methods to evaluate segmentations. Three common approaches for evaluating segmentation algorithms are ‘goodness methods’, ‘discrepancy methods’ and ‘benchmarks’. Benchmarks are considered the most comprehensive method of evaluation. In this paper, shortcomings in current benchmark methods are identified, and a framework is proposed that permits both a visual and numerical evaluation of segmentations for different algorithms, algorithm parameters and evaluation metrics. The concept of the framework is demonstrated on a real point cloud. Current results are promising and suggest that it can be used to predict the performance of segmentation algorithms.

  16. Analytical method for predicting plastic flow in notched fiber composite materials

    International Nuclear Information System (INIS)

    Flynn, P.L.; Ebert, L.J.

    1977-01-01

    An analytical system was developed for predicting the onset and progress of plastic flow in oriented fiber composite materials in which both externally applied complex stress states and stress raisers were present. The predictive system was a unique combination of two numerical systems, the SAAS II finite element analysis system and a micromechanics finite element program. The SAAS II system was used to generate the three-dimensional stress distributions, which were used as the input into the finite element micromechanics program. Appropriate yielding criteria were then applied to this latter program. The accuracy of the analytical system was demonstrated by the agreement between the analytically predicted and the experimentally measured flow values of externally notched, tungsten-wire-reinforced copper oriented fiber composites, in which the fiber fraction was 50 vol pct.

  17. Why segmentation matters: experience-driven segmentation errors impair “morpheme” learning

    Science.gov (United States)

    Finn, Amy S.; Hudson Kam, Carla L.

    2015-01-01

    We ask whether an adult learner’s knowledge of their native language impedes statistical learning in a new language beyond just word segmentation (as previously shown). In particular, we examine the impact of native-language word-form phonotactics on learners’ ability to segment words into their component morphemes and learn phonologically triggered variation of morphemes. We find that learning is impaired when words and component morphemes are structured to conflict with a learner’s native-language phonotactic system, but not when native-language phonotactics do not conflict with morpheme boundaries in the artificial language. A learner’s native-language knowledge can therefore have a cascading impact affecting word segmentation and the morphological variation that relies upon proper segmentation. These results show that getting word segmentation right early in learning is deeply important for learning other aspects of language, even those (morphology) that are known to pose a great difficulty for adult language learners. PMID:25730305

  18. Segment-Tube: Spatio-Temporal Action Localization in Untrimmed Videos with Per-Frame Segmentation

    OpenAIRE

    Le Wang; Xuhuan Duan; Qilin Zhang; Zhenxing Niu; Gang Hua; Nanning Zheng

    2018-01-01

    Inspired by the recent spatio-temporal action localization efforts with tubelets (sequences of bounding boxes), we present a new spatio-temporal action localization detector Segment-tube, which consists of sequences of per-frame segmentation masks. The proposed Segment-tube detector can temporally pinpoint the starting/ending frame of each action category in the presence of preceding/subsequent interference actions in untrimmed videos. Simultaneously, the Segment-tube detector produces per-fr...

  19. 3D medical image segmentation based on a continuous modelling of the volume

    International Nuclear Information System (INIS)

    Marque, I.

    1990-12-01

    Several medical imaging techniques, including computed tomography (CT) and magnetic resonance imaging (MRI), provide 3D information on the human body by means of a stack of parallel cross-sectional images. However, a more sophisticated edge detection step has to be performed when the object under study is not well defined by its characteristic density, or when an analytical description of the surface of the object is useful for later processing. A new method for medical image segmentation has been developed: it uses the stability and differentiability properties of a continuous modelling of the 3D data. The idea is to build a system of ordinary differential equations whose stable manifold is the surface of the object we are looking for. This technique has been applied to classical edge detection operators: threshold following, Laplacian, and gradient maximum in the gradient direction. It can be used in 2D as well as in 3D, and has been extended to seek particular points of the surface, such as local extrema. The major advantages of this method are as follows: the segmentation and boundary-following steps are performed simultaneously, an analytical representation of the surface is obtained straightforwardly, and complex objects in which branching problems may occur can be described automatically. Simulations on noisy synthetic images led to a quantization step to test the sensitivity of the method to noise with respect to each operator, and to study the influence of all the parameters. Last, the method has been applied to numerous real clinical exams: skull and femur images provided by CT, and MR images of a cerebral tumor and of the ventricular system. These results show the reliability and the efficiency of this new method of segmentation [fr]

  20. Surface wave propagation effects on buried segmented pipelines

    Directory of Open Access Journals (Sweden)

    Peixin Shi

    2015-08-01

    This paper deals with surface wave propagation (WP) effects on buried segmented pipelines. Both a simplified analytical model and a finite element (FE) model are developed for estimating the axial joint pullout movement of jointed concrete cylinder pipelines (JCCPs), whose joints have a brittle tensile failure mode under surface WP effects. The models account for the effects of peak ground velocity (PGV), WP velocity, the predominant period of seismic excitation, shear transfer between soil and pipelines, the axial stiffness of pipelines, joint characteristics, and the cracking strain of concrete mortar. FE simulation of the JCCP interaction with surface waves recorded during the 1985 Michoacan earthquake results in joint pullout movement that is consistent with the field observations. The models are extended to estimate the axial joint pullout movement of cast iron (CI) pipelines, whose joints have a ductile tensile failure mode. A simplified analytical equation and an FE model are developed for estimating the joint pullout movement of CI pipelines. The joint pullout movement of the CI pipelines is mainly affected by the variability of the joint tensile capacity and accumulates at local weak joints in the pipeline.

  1. Gatlinburg conference: barometer of progress in analytical chemistry

    International Nuclear Information System (INIS)

    Shults, W.D.

    1981-01-01

    Much progress has been made in the field of analytical chemistry over the past twenty-five years. The AEC-ERDA-DOE family of laboratories contributed greatly to this progress. It is not surprising then to find a close correlation between program content of past Gatlinburg conferences and developments in analytical methodology. These conferences have proved to be a barometer of technical status

  2. Simulating Deformations of MR Brain Images for Validation of Atlas-based Segmentation and Registration Algorithms

    OpenAIRE

    Xue, Zhong; Shen, Dinggang; Karacali, Bilge; Stern, Joshua; Rottenberg, David; Davatzikos, Christos

    2006-01-01

    Simulated deformations and images can act as the gold standard for evaluating various template-based image segmentation and registration algorithms. Traditional deformable simulation methods, such as the use of analytic deformation fields or the displacement of landmarks followed by some form of interpolation, are often unable to construct rich (complex) and/or realistic deformations of anatomical organs. This paper presents new methods aiming to automatically simulate realistic inter- and in...

  3. Technical Safety Requirements for the B695 Segment

    Energy Technology Data Exchange (ETDEWEB)

    Laycak, D

    2008-09-11

    This document contains Technical Safety Requirements (TSRs) for the Radioactive and Hazardous Waste Management (RHWM) Division's B695 Segment of the Decontamination and Waste Treatment Facility (DWTF) at Lawrence Livermore National Laboratory (LLNL). The TSRs constitute requirements regarding the safe operation of the B695 Segment. The TSRs are derived from the Documented Safety Analysis (DSA) for the B695 Segment (LLNL 2007). The analysis presented there determined that the B695 Segment is a low-chemical hazard, Hazard Category 3, nonreactor nuclear facility. The TSRs consist primarily of inventory limits as well as controls to preserve the underlying assumptions in the hazard analyses. Furthermore, appropriate commitments to safety programs are presented in the administrative controls section of the TSRs. The B695 Segment (B695 and the west portion of B696) is a waste treatment and storage facility located in the northeast quadrant of the LLNL main site. The approximate area and boundary of the B695 Segment are shown in the B695 Segment DSA. Activities typically conducted in the B695 Segment include container storage, lab-packing, repacking, overpacking, bulking, sampling, waste transfer, and waste treatment. B695 is used to store and treat radioactive, mixed, and hazardous waste, and it also contains equipment used in conjunction with waste processing operations to treat various liquid and solid wastes. The portion of the building called Building 696 Solid Waste Processing Area (SWPA), also referred to as B696S in this report, is used primarily to manage solid radioactive, mixed, and hazardous waste. Operations specific to the SWPA include sorting and segregating waste, lab-packing, sampling, and crushing empty drums that previously contained waste. Furthermore, a Waste Packaging Unit will be permitted to treat hazardous and mixed waste. RHWM generally processes LLW with no, or extremely low, concentrations of transuranics (i.e., much less than 100 n

  4. Technical Safety Requirements for the B695 Segment

    International Nuclear Information System (INIS)

    Laycak, D.

    2008-01-01

    This document contains Technical Safety Requirements (TSRs) for the Radioactive and Hazardous Waste Management (RHWM) Division's B695 Segment of the Decontamination and Waste Treatment Facility (DWTF) at Lawrence Livermore National Laboratory (LLNL). The TSRs constitute requirements regarding the safe operation of the B695 Segment. The TSRs are derived from the Documented Safety Analysis (DSA) for the B695 Segment (LLNL 2007). The analysis presented there determined that the B695 Segment is a low-chemical hazard, Hazard Category 3, nonreactor nuclear facility. The TSRs consist primarily of inventory limits as well as controls to preserve the underlying assumptions in the hazard analyses. Furthermore, appropriate commitments to safety programs are presented in the administrative controls section of the TSRs. The B695 Segment (B695 and the west portion of B696) is a waste treatment and storage facility located in the northeast quadrant of the LLNL main site. The approximate area and boundary of the B695 Segment are shown in the B695 Segment DSA. Activities typically conducted in the B695 Segment include container storage, lab-packing, repacking, overpacking, bulking, sampling, waste transfer, and waste treatment. B695 is used to store and treat radioactive, mixed, and hazardous waste, and it also contains equipment used in conjunction with waste processing operations to treat various liquid and solid wastes. The portion of the building called Building 696 Solid Waste Processing Area (SWPA), also referred to as B696S in this report, is used primarily to manage solid radioactive, mixed, and hazardous waste. Operations specific to the SWPA include sorting and segregating waste, lab-packing, sampling, and crushing empty drums that previously contained waste. Furthermore, a Waste Packaging Unit will be permitted to treat hazardous and mixed waste. RHWM generally processes LLW with no, or extremely low, concentrations of transuranics (i.e., much less than 100 n

  5. SALE, Quality Control of Analytical Chemical Measurements

    International Nuclear Information System (INIS)

    Bush, W.J.; Gentillon, C.D.

    1985-01-01

    1 - Description of problem or function: The Safeguards Analytical Laboratory Evaluation (SALE) program is a statistical analysis program written to analyze the data received from laboratories participating in the SALE quality control and evaluation program. The system is aimed at identifying and reducing analytical chemical measurement errors. Samples of well-characterized materials are distributed to laboratory participants at periodic intervals for determination of uranium or plutonium concentration and isotopic distributions. The results of these determinations are statistically evaluated and participants are informed of the accuracy and precision of their results. 2 - Method of solution: Various statistical techniques produce the SALE output. Assuming an unbalanced nested design, an analysis of variance is performed, resulting in a test of significance for time and analyst effects. A trend test is performed. Both within- laboratory and between-laboratory standard deviations are calculated. 3 - Restrictions on the complexity of the problem: Up to 1500 pieces of data for each nuclear material sampled by a maximum of 75 laboratories may be analyzed
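
    The within- and between-laboratory standard deviations that SALE reports can be illustrated with a toy balanced design; real SALE data are unbalanced and analyzed with a full nested ANOVA, so the one-way variance-component estimate below shows only the idea.

    ```python
    # Within- and between-laboratory standard deviations from replicate data.
    import numpy as np

    rng = np.random.default_rng(5)
    n_labs, n_reps = 6, 4
    lab_bias = rng.normal(0.0, 0.3, size=(n_labs, 1))             # between-lab effect
    x = 10.0 + lab_bias + rng.normal(0.0, 0.1, (n_labs, n_reps))  # e.g., % uranium

    s_within2 = x.var(axis=1, ddof=1).mean()                      # pooled within-lab
    lab_means = x.mean(axis=1)
    # Var(lab means) estimates sigma_b**2 + sigma_w**2 / n_reps
    s_between2 = max(lab_means.var(ddof=1) - s_within2 / n_reps, 0.0)
    print(f"s_within = {np.sqrt(s_within2):.3f}, s_between = {np.sqrt(s_between2):.3f}")
    ```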

  6. Model studies on segmental movement in lumbar spine using a semi-automated program for volume fusion.

    Science.gov (United States)

    Svedmark, P; Weidenhielm, L; Nemeth, G; Tullberg, T; Noz, M E; Maguire, G Q; Zeleznik, M P; Olivecrona, H

    2008-01-01

    To validate a new non-invasive CT method for measuring segmental translations in the lumbar spine in a phantom using plastic vertebrae with tantalum markers and human vertebrae. One hundred and four CT volumes were acquired of a phantom incorporating three lumbar vertebrae. Lumbar segmental translation was simulated by altering the position of one vertebra along all three cardinal axes between acquisitions. The CT volumes were combined into 64 case pairs, simulating lumbar segmental movement of up to 3 mm between acquisitions. The relative movement between the vertebrae was evaluated visually and numerically using a volume fusion image post-processing tool. Results were correlated to direct measurements of the phantom. On visual inspection, translation of 1 mm or more could be safely detected and correlated with separation between the vertebrae in three dimensions. There were no significant differences between plastic and human vertebrae. Numerically, the accuracy limit for all the CT measurements of the 3D segmental translations was 0.56 mm (median: 0.12; range: -0.76 to +0.49 mm). The accuracy for the sagittal axis was 0.45 mm (median: 0.10; range: -0.46 to +0.62 mm); the accuracy for the coronal axis was 0.46 mm (median: 0.09; range: -0.66 to +0.69 mm); and the accuracy for the axial axis was 0.45 mm (median: 0.05; range: -0.72 to +0.62 mm). The repeatability, calculated over 10 cases, was 0.35 mm (median: 0.16; range: -0.26 to +0.30 mm). The accuracy of this non-invasive method is better than that of current routine methods for detecting segmental movements. The method allows both visual and numerical evaluation of such movements. Further studies are needed to validate this method in patients.

  7. Analytical Sociology: A Bungean Appreciation

    Science.gov (United States)

    Wan, Poe Yu-ze

    2012-10-01

    Analytical sociology, an intellectual project that has garnered considerable attention across a variety of disciplines in recent years, aims to explain complex social processes by dissecting them, accentuating their most important constituent parts, and constructing appropriate models to understand the emergence of what is observed. To achieve this goal, analytical sociologists demonstrate an unequivocal focus on the mechanism-based explanation grounded in action theory. In this article I attempt a critical appreciation of analytical sociology from the perspective of Mario Bunge's philosophical system, which I characterize as emergentist systemism. I submit that while the principles of analytical sociology and those of Bunge's approach share a lot in common, the latter brings to the fore the ontological status and explanatory importance of supra-individual actors (as concrete systems endowed with emergent causal powers) and macro-social mechanisms (as processes unfolding in and among social systems), and therefore it does not stipulate that every causal explanation of social facts has to include explicit references to individual-level actors and mechanisms. In this sense, Bunge's approach provides a reasonable middle course between the Scylla of sociological reification and the Charybdis of ontological individualism, and thus serves as an antidote to the untenable "strong program of microfoundations" to which some analytical sociologists are committed.

  8. Computerized operating procedures for shearing and dissolution of segments from LWBR [Light Water Breeder Reactor] fuel rods

    International Nuclear Information System (INIS)

    Osudar, J.; Deeken, P.G.; Graczyk, D.G.; Fagan, J.E.; Martino, F.J.; Parks, J.E.; Levitz, N.M.; Kessie, R.W.; Leddin, J.M.

    1987-05-01

    This report presents two detailed computerized operating procedures developed to assist and control the shearing and dissolution of irradiated fuel rods. The procedures were employed in the destructive analysis of end-of-life fuel rods from the Light Water Breeder Reactor (LWBR) that was designed by the Westinghouse Electric Corporation Bettis Atomic Power Laboratory. Seventeen entire fuel rods from the end-of-life core of the LWBR were sheared into 169 precisely characterized segments, and more than 150 of these segments were dissolved during execution of the LWBR Proof-of-Breeding (LWBR-POB) Analytical Support Project at Argonne National Laboratory. The procedures illustrate our approaches to process monitoring, data reduction, and quality assurance during the LWBR-POB work

  9. Analysis of a Segmented Annular Coplanar Capacitive Tilt Sensor with Increased Sensitivity

    Directory of Open Access Journals (Sweden)

    Jiahao Guo

    2016-01-01

    An investigation of a segmented annular coplanar capacitor is presented. We focus on its theoretical model, and a mathematical expression for the capacitance value is derived by solving a Laplace equation with the Hankel transform. The finite element method is employed to verify the analytical result. Different control parameters are discussed, and the contribution of each to the capacitance value is obtained. On this basis, we analyze and optimize the structure parameters of a segmented coplanar capacitive tilt sensor, and three models with different positions of the electrode gap are fabricated and tested. The experimental results show that the model whose electrode-gap position is 10 mm from the electrode center realizes a high sensitivity of 0.129 pF/° with a non-linearity of <0.4% FS (full scale of ±40°). This finding offers plenty of opportunities for various measurement requirements in addition to achieving an optimized structure in practical design.

  10. ImageSURF: An ImageJ Plugin for Batch Pixel-Based Image Segmentation Using Random Forests

    Directory of Open Access Journals (Sweden)

    Aidan O'Mara

    2017-11-01

    Image segmentation is a necessary step in automated quantitative imaging. ImageSURF is a macro-compatible ImageJ2/FIJI plugin for pixel-based image segmentation that considers a range of image derivatives to train pixel classifiers, which are then applied to image sets of any size to produce segmentations without bias in a consistent, transparent and reproducible manner. The plugin is available from the ImageJ update site http://sites.imagej.net/ImageSURF/ and source code from https://github.com/omaraa/ImageSURF. Funding statement: This research was supported by an Australian Government Research Training Program Scholarship.
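
    The pixel-classification workflow ImageSURF implements can be sketched with scipy.ndimage derivatives and a scikit-learn random forest standing in for the plugin's own Java implementation; the image and labels below are synthetic.

    ```python
    # Derivative features + random forest = pixel-based segmentation sketch.
    import numpy as np
    from scipy import ndimage as ndi
    from sklearn.ensemble import RandomForestClassifier

    def pixel_features(img: np.ndarray) -> np.ndarray:
        """Stack intensity and a few image derivatives per pixel."""
        feats = [img,
                 ndi.gaussian_filter(img, 2),
                 ndi.gaussian_gradient_magnitude(img, 2),
                 ndi.gaussian_laplace(img, 2)]
        return np.stack([f.ravel() for f in feats], axis=1)

    rng = np.random.default_rng(6)
    img = ndi.gaussian_filter(rng.random((128, 128)), 4)   # synthetic training image
    labels = (img > img.mean()).astype(int)                # toy ground-truth mask

    clf = RandomForestClassifier(n_estimators=50, random_state=0)
    clf.fit(pixel_features(img), labels.ravel())

    new_img = ndi.gaussian_filter(rng.random((128, 128)), 4)
    segmentation = clf.predict(pixel_features(new_img)).reshape(new_img.shape)
    print(segmentation.mean())        # fraction of pixels labeled foreground
    ```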

  11. Segmentation-Driven Tomographic Reconstruction

    DEFF Research Database (Denmark)

    Kongskov, Rasmus Dalgas

    The tomographic reconstruction problem is concerned with creating a model of the interior of an object from some measured data, typically projections of the object. After reconstructing an object it is often desired to segment it, either automatically or manually (CT ... such that the segmentation subsequently can be carried out by use of a simple segmentation method, for instance just a thresholding method. We tested the advantages of going from a two-stage reconstruction method to a one-stage segmentation-driven reconstruction method for the phase contrast tomography reconstruction...

  12. Rediscovering market segmentation.

    Science.gov (United States)

    Yankelovich, Daniel; Meer, David

    2006-02-01

    In 1964, Daniel Yankelovich introduced in the pages of HBR the concept of nondemographic segmentation, by which he meant the classification of consumers according to criteria other than age, residence, income, and such. The predictive power of marketing studies based on demographics was no longer strong enough to serve as a basis for marketing strategy, he argued. Buying patterns had become far better guides to consumers' future purchases. In addition, properly constructed nondemographic segmentations could help companies determine which products to develop, which distribution channels to sell them in, how much to charge for them, and how to advertise them. But more than 40 years later, nondemographic segmentation has become just as unenlightening as demographic segmentation had been. Today, the technique is used almost exclusively to fulfill the needs of advertising, which it serves mainly by populating commercials with characters that viewers can identify with. It is true that psychographic types like "High-Tech Harry" and "Joe Six-Pack" may capture some truth about real people's lifestyles, attitudes, self-image, and aspirations. But they are no better than demographics at predicting purchase behavior. Thus they give corporate decision makers very little idea of how to keep customers or capture new ones. Now, Daniel Yankelovich returns to these pages, with consultant David Meer, to argue the case for a broad view of nondemographic segmentation. They describe the elements of a smart segmentation strategy, explaining how segmentations meant to strengthen brand identity differ from those capable of telling a company which markets it should enter and what goods to make. And they introduce their "gravity of decision spectrum", a tool that focuses on the form of consumer behavior that should be of the greatest interest to marketers--the importance that consumers place on a product or product category.

  13. Analytical Chemistry Division annual progress report for period ending December 31, 1990

    Energy Technology Data Exchange (ETDEWEB)

    1991-04-01

    The Analytical Chemistry Division has programs in inorganic mass spectrometry, optical spectroscopy, organic mass spectrometry, and secondary ion mass spectrometry. It maintains a transuranium analytical laboratory and an environmental analytical laboratory. It carries out chemical and physical analysis in the fields of inorganic chemistry, organic spectroscopy, separations and synthesis. (WET)

  14. Analytical Chemistry Division annual progress report for period ending December 31, 1992

    Energy Technology Data Exchange (ETDEWEB)

    Shults, W.D.

    1993-04-01

    This report is divided into: Analytical spectroscopy (optical spectroscopy, organic mass spectrometry, inorganic mass spectrometry, secondary ion mass spectrometry), inorganic and radiochemistry (transuranium and activation analysis, low-level radiochemical analysis, inorganic analysis, radioactive materials analysis, special projects), organic chemistry (organic spectroscopy, separations and synthesis, special projects, organic analysis, ORNL/UT research program), operations (quality assurance/quality control, environmental protection, safety, analytical improvement, training, radiation control), education programs, supplementary activities, and presentation of research results. Tables are included for articles reviewed or refereed for periodicals, analytical service work, division manpower and financial summary, and organization chart; a glossary is also included.

  15. Development of Land Segmentation, Stream-Reach Network, and Watersheds in Support of Hydrological Simulation Program-Fortran (HSPF) Modeling, Chesapeake Bay Watershed, and Adjacent Parts of Maryland, Delaware, and Virginia

    Science.gov (United States)

    Martucci, Sarah K.; Krstolic, Jennifer L.; Raffensperger, Jeff P.; Hopkins, Katherine J.

    2006-01-01

    The U.S. Geological Survey, U.S. Environmental Protection Agency Chesapeake Bay Program Office, Interstate Commission on the Potomac River Basin, Maryland Department of the Environment, Virginia Department of Conservation and Recreation, Virginia Department of Environmental Quality, and the University of Maryland Center for Environmental Science are collaborating on the Chesapeake Bay Regional Watershed Model, using Hydrological Simulation Program - FORTRAN to simulate streamflow and concentrations and loads of nutrients and sediment to Chesapeake Bay. The model will be used to provide information for resource managers. In order to establish a framework for model simulation, digital spatial datasets were created defining the discretization of the model region (including the Chesapeake Bay watershed, as well as the adjacent parts of Maryland, Delaware, and Virginia outside the watershed) into land segments, a stream-reach network, and associated watersheds. Land segmentation was based on county boundaries represented by a 1:100,000-scale digital dataset. Fifty of the 254 counties and incorporated cities in the model region were divided on the basis of physiography and topography, producing a total of 309 land segments. The stream-reach network for the Chesapeake Bay watershed part of the model region was based on the U.S. Geological Survey Chesapeake Bay SPARROW (SPAtially Referenced Regressions On Watershed attributes) model stream-reach network. Because that network was created only for the Chesapeake Bay watershed, the rest of the model region uses a 1:500,000-scale stream-reach network. Streams with mean annual streamflow of less than 100 cubic feet per second were excluded based on attributes from the dataset. Additional changes were made to enhance the data and to allow for inclusion of stream reaches with monitoring data that were not part of the original network. Thirty-meter-resolution Digital Elevation Model data were used to delineate watersheds for each

  16. Examinations on Applications of Manual Calculation Programs on Lung Cancer Radiation Therapy Using Analytical Anisotropic Algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Jung Min; Kim, Dae Sup; Hong, Dong Ki; Back, Geum Mun; Kwak, Jung Won [Dept. of Radiation Oncology, , Seoul (Korea, Republic of)

    2012-03-15

    There was a problem in using MU verification programs based on the Pencil Beam Convolution (PBC) algorithm: MU errors arose when radiation treatment plans around the lung were calculated with the Analytical Anisotropic Algorithm (AAA). In this study, we examined methods for verifying treatment plans calculated with AAA. Using the Eclipse treatment planning system (version 8.9, Varian, USA), we calculated each of 57 fields from 7 cases of lung stereotactic body radiation therapy (SBRT) with both the PBC and AAA dose calculation algorithms. The MUs of the established plans were compared with and analyzed against the MUs from manual calculation programs. We analyzed the relationship between the errors and four variables that can affect the errors produced by the PBC algorithm and AAA in commonly used programs: field size, lung path distance of the radiation, tumor path distance of the radiation, and effective depth. Errors for the PBC algorithm were 0.2±1.0%, and errors for AAA were 3.5±2.8%. Moreover, among the four variables, the errors increased with lung path distance, with a correlation coefficient of 0.648 (P=0.000), and we could derive an MU correction factor, A.E = 0.00903 × L.P + 0.02048. After applying this factor to the manual calculation program, the errors decreased from 3.5±2.8% to within 0.4±2.0%. In this study, we found that the errors from manual calculation programs increase as the lung path distance of the radiation increases, and that the MU of AAA can be verified with a simple method, the MU correction factor.
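
    The abstract gives the fitted relation A.E = 0.00903 × L.P + 0.02048 but not its units or exactly how the factor enters the manual calculation, so the sketch below, which reads A.E as a fractional error and scales the manually calculated MU by it, is an assumption for illustration only.

    ```python
    # Hedged illustration of applying the fitted MU correction factor.

    def corrected_mu(manual_mu: float, lung_path_cm: float) -> float:
        # Assumed reading: A.E is the fractional error vs. the AAA-based MU
        a_e = 0.00903 * lung_path_cm + 0.02048
        return manual_mu * (1.0 + a_e)      # scale the manual MU toward AAA

    # Hypothetical field: 200 MU from the manual program, 8 cm lung path
    print(f"{corrected_mu(200.0, 8.0):.1f} MU")
    ```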

  17. Examinations on Applications of Manual Calculation Programs on Lung Cancer Radiation Therapy Using Analytical Anisotropic Algorithm

    International Nuclear Information System (INIS)

    Kim, Jung Min; Kim, Dae Sup; Hong, Dong Ki; Back, Geum Mun; Kwak, Jung Won

    2012-01-01

    There was a problem in using MU verification programs based on the Pencil Beam Convolution (PBC) algorithm: MU errors arose when radiation treatment plans around the lung were calculated with the Analytical Anisotropic Algorithm (AAA). In this study, we examined methods for verifying treatment plans calculated with AAA. Using the Eclipse treatment planning system (version 8.9, Varian, USA), we calculated each of 57 fields from 7 cases of lung stereotactic body radiation therapy (SBRT) with both the PBC and AAA dose calculation algorithms. The MUs of the established plans were compared with and analyzed against the MUs from manual calculation programs. We analyzed the relationship between the errors and four variables that can affect the errors produced by the PBC algorithm and AAA in commonly used programs: field size, lung path distance of the radiation, tumor path distance of the radiation, and effective depth. Errors for the PBC algorithm were 0.2±1.0%, and errors for AAA were 3.5±2.8%. Moreover, among the four variables, the errors increased with lung path distance, with a correlation coefficient of 0.648 (P=0.000), and we could derive an MU correction factor, A.E = 0.00903 × L.P + 0.02048. After applying this factor to the manual calculation program, the errors decreased from 3.5±2.8% to within 0.4±2.0%. In this study, we found that the errors from manual calculation programs increase as the lung path distance of the radiation increases, and that the MU of AAA can be verified with a simple method, the MU correction factor.

  18. Reflection symmetry-integrated image segmentation.

    Science.gov (United States)

    Sun, Yu; Bhanu, Bir

    2012-09-01

    This paper presents a new symmetry-integrated, region-based image segmentation method. The method is developed to obtain improved image segmentation by exploiting image symmetry. It is realized by constructing a symmetry token that can be flexibly embedded into segmentation cues. Interest points are initially extracted from an image by the SIFT operator and are further refined for detecting the global bilateral symmetry. A symmetry affinity matrix is then computed using the symmetry axis, and it is used explicitly as a constraint in a region growing algorithm in order to refine the symmetry of the segmented regions. A multi-objective genetic search finds the segmentation result with the highest performance for both segmentation and symmetry, which is close to the global optimum. The method has been investigated experimentally on challenging natural images and images containing man-made objects. It is shown that the proposed method outperforms current segmentation methods both with and without exploiting symmetry. A thorough experimental analysis indicates that symmetry plays an important role as a segmentation cue, in conjunction with other attributes like color and texture.
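
    The region growing that the symmetry affinity constrains can be sketched in its unconstrained form: grow from a seed while neighboring pixels stay within an intensity tolerance. The symmetry term itself is omitted here, so this shows only the base algorithm.

    ```python
    # Plain region growing from a seed; the paper adds a symmetry constraint.
    import numpy as np
    from collections import deque

    def region_grow(img: np.ndarray, seed: tuple, tol: float) -> np.ndarray:
        h, w = img.shape
        mask = np.zeros((h, w), dtype=bool)
        mask[seed] = True
        queue, seed_val = deque([seed]), float(img[seed])
        while queue:
            y, x = queue.popleft()
            for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                if (0 <= ny < h and 0 <= nx < w and not mask[ny, nx]
                        and abs(float(img[ny, nx]) - seed_val) <= tol):
                    mask[ny, nx] = True
                    queue.append((ny, nx))
        return mask

    img = np.zeros((64, 64))
    img[16:48, 16:48] = 1.0                                # bright square on dark field
    print(region_grow(img, seed=(32, 32), tol=0.5).sum())  # 1024 pixels recovered
    ```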

  19. ZFITTER - an analytical program for fermion-pair production

    International Nuclear Information System (INIS)

    Riemann, T.

    1992-10-01

    I discuss the semi-analytical codes which have been developed for the Z line-shape analysis at LEP I. They are applied for a model-independent and, when using a weak library, a Standard Model interpretation of the data. Some of them are applicable for New Physics searches. The package ZFITTER serves as an example, and comparisons of the codes are discussed. The degrees of freedom of the line shape and of the asymmetries are made explicit. (orig.)

  20. Hanford high level waste: Sample Exchange/Evaluation (SEE) Program

    International Nuclear Information System (INIS)

    King, A.G.

    1994-08-01

    The Pacific Northwest Laboratory (PNL)/Analytical Chemistry Laboratory (ACL) and the Westinghouse Hanford Company (WHC)/Process Analytical Laboratory (PAL) provide analytical support services to various environmental restoration and waste management projects/programs at Hanford. In response to a US Department of Energy -- Richland Field Office (DOE-RL) audit, which questioned the comparability of the analytical methods employed at each laboratory, the Sample Exchange/Evaluation (SEE) Program was initiated. The SEE Program is a self-assessment program designed to compare the analytical methods of the PAL and ACL laboratories using site-specific waste material. The SEE Program is managed by a collaborative, the Quality Assurance Triad (Triad). Triad membership is made up of representatives from the WHC/PAL, PNL/ACL, and WHC Hanford Analytical Services Management (HASM) organizations. The Triad works together to design, evaluate, and implement each phase of the SEE Program.

  1. Lung segment geometry study: simulation of largest possible tumours that fit into bronchopulmonary segments.

    Science.gov (United States)

    Welter, S; Stöcker, C; Dicken, V; Kühl, H; Krass, S; Stamatis, G

    2012-03-01

    Segmental resection in stage I non-small cell lung cancer (NSCLC) has been well described and is considered to have similar survival rates as lobectomy, but with increased rates of local tumour recurrence due to inadequate parenchymal margins. In consequence, today segmentectomy is only performed when the tumour is smaller than 2 cm. Three-dimensional reconstructions from 11 thin-slice CT scans of bronchopulmonary segments were generated, and virtual spherical tumours were placed over the segments, respecting all segmental borders. As a next step, virtual parenchymal safety margins of 2 cm and 3 cm were subtracted and the size of the remaining tumour calculated. The maximum tumour diameters with a 30-mm parenchymal safety margin ranged from 26.1 mm in right-sided segments 7 + 8 to 59.8 mm in the left apical segments 1-3. Using a three-dimensional reconstruction of lung CT scans, we demonstrated that segmentectomy or resection of segmental groups should be feasible with adequate margins, even for larger tumours in selected cases.

  2. A development of simulation and analytical program for through-diffusion experiments for a single layer of diffusion media

    International Nuclear Information System (INIS)

    Sato, Haruo

    2001-01-01

    A program (TDROCK1.FOR) for the simulation and analysis of through-diffusion experiments on a single layer of diffusion media was developed. The program was written in the Pro-Fortran language, which is suitable for scientific and technical calculations, and a relatively simple explicit finite-difference method was adopted for the analysis. In the analysis, the solute concentration in the tracer cell as a function of time, which we could not treat to date, can be input, and the program calculates the decrease in solute concentration over time caused by diffusion from the tracer cell to the measurement cell, the solute concentration distribution in the porewater of the diffusion media, and the solute concentration in the measurement cell as a function of time. In addition, the solution volume in both cells and the diameter and thickness of the diffusion media are also variable input conditions. The simulation program could well reproduce measured results when simulating the solute concentration in the measurement cell as a function of time for a case in which the apparent and effective diffusion coefficients were already known. Based on this, the availability and applicability of the program for actual analysis and simulation were confirmed. This report describes the theoretical treatment of through-diffusion experiments for a single layer of diffusion media, the analytical model, an example source program, and the manual. (author)
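
    The explicit finite-difference scheme the report adopts is simple to demonstrate in one dimension: diffusion through a single layer between a tracer cell and a measurement cell. The geometry, boundary treatment (cell concentrations held fixed rather than depleting over time), and coefficients below are invented, not TDROCK1 inputs.

    ```python
    # Explicit FD scheme for 1D diffusion across a single layer.
    import numpy as np

    D = 1.0e-9                 # effective diffusion coefficient (m^2/s), assumed
    L, nx = 5.0e-3, 50         # layer thickness (m) and number of grid points
    dx = L / (nx - 1)
    dt = 0.4 * dx**2 / D       # satisfies the explicit stability limit dx^2/(2D)

    c = np.zeros(nx)                    # concentration profile in the layer
    c_tracer, c_meas = 1.0, 0.0         # cell concentrations (held fixed here)

    for _ in range(50000):
        c[0], c[-1] = c_tracer, c_meas  # Dirichlet boundaries at the two cells
        c[1:-1] += D * dt / dx**2 * (c[2:] - 2 * c[1:-1] + c[:-2])

    print(c.round(3))                   # approaches the linear steady-state profile
    ```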

  3. Sipunculans and segmentation

    DEFF Research Database (Denmark)

    Wanninger, Andreas; Kristof, Alen; Brinkmann, Nora

    2009-01-01

    mechanisms may act on the level of gene expression, cell proliferation, tissue differentiation and organ system formation in individual segments. Accordingly, in some polychaete annelids the first three pairs of segmental peripheral neurons arise synchronously, while the metameric commissures of the ventral...

  4. SU-E-J-132: Automated Segmentation with Post-Registration Atlas Selection Based On Mutual Information

    International Nuclear Information System (INIS)

    Ren, X; Gao, H; Sharp, G

    2015-01-01

    Purpose: The delineation of targets and organs-at-risk is a critical step during image-guided radiation therapy, for which manual contouring is the gold standard. However, it is often time-consuming and may suffer from intra- and inter-rater variability. The purpose of this work is to investigate automated segmentation. Methods: The automatic segmentation here is based on mutual information (MI), with the atlas from the Public Domain Database for Computational Anatomy (PDDCA) with manually drawn contours. Using the dice coefficient (DC) as the quantitative measure of segmentation accuracy, we perform leave-one-out cross-validations for all PDDCA images sequentially, during which the other images are registered to each chosen image and DC is computed between the registered contour and the ground truth. Six strategies, including MI, are evaluated as image similarity measures, with MI found to be the best. Then, given a target image to be segmented and an atlas, automatic segmentation consists of: (a) an affine registration step for image positioning; (b) the active demons registration method to register the atlas to the target image; (c) the computation of MI values between the deformed atlas and the target image; (d) the weighted image fusion of the three deformed atlas images with the highest MI values to form the segmented contour. Results: MI was found to be the best among the six studied strategies in the sense that it had the highest positive correlation between the similarity measure (e.g., MI values) and DC. For automated segmentation, the weighted image fusion of the three deformed atlas images with the highest MI values provided the highest DC among four proposed strategies. Conclusion: MI has the highest correlation with DC and is therefore an appropriate choice for post-registration atlas selection in atlas-based segmentation. Xuhua Ren and Hao Gao were partially supported by the NSFC (#11405105), the 973 Program (#2015CB856000) and the Shanghai Pujiang Talent Program (#14PJ1404500).
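
    As a rough illustration of steps (c) and (d), MI between a deformed atlas and the target image can be estimated from a joint intensity histogram, the atlases ranked by it, and their masks fused. The sketch below is not the authors' code; the images and masks are synthetic placeholders.

```python
import numpy as np

def mutual_information(a, b, bins=64):
    """Histogram-based MI estimate between two equally shaped images."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0                          # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

# Hypothetical data: a target image and deformed atlas images with masks
rng = np.random.default_rng(0)
target = rng.random((64, 64))
atlases = [(rng.random((64, 64)), rng.random((64, 64)) > 0.5) for _ in range(5)]

# Rank deformed atlases by MI with the target and keep the top three
scored = sorted(atlases, key=lambda am: mutual_information(target, am[0]),
                reverse=True)[:3]

# MI-weighted fusion of the top-three masks, thresholded at 0.5
w = np.array([mutual_information(target, img) for img, _ in scored])
w = w / w.sum()
fused = sum(wi * m.astype(float) for wi, (_, m) in zip(w, scored)) >= 0.5
```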

  5. SU-E-J-132: Automated Segmentation with Post-Registration Atlas Selection Based On Mutual Information

    Energy Technology Data Exchange (ETDEWEB)

    Ren, X; Gao, H [Shanghai Jiao Tong University, Shanghai, Shanghai (China); Sharp, G [Massachusetts General Hospital, Boston, MA (United States)

    2015-06-15

    Purpose: The delineation of targets and organs-at-risk is a critical step during image-guided radiation therapy, for which manual contouring is the gold standard. However, it is often time-consuming and may suffer from intra- and inter-rater variability. The purpose of this work is to investigate automated segmentation. Methods: The automatic segmentation here is based on mutual information (MI), with the atlas from the Public Domain Database for Computational Anatomy (PDDCA) with manually drawn contours. Using the dice coefficient (DC) as the quantitative measure of segmentation accuracy, we perform leave-one-out cross-validations for all PDDCA images sequentially, during which the other images are registered to each chosen image and DC is computed between the registered contour and the ground truth. Six strategies, including MI, are evaluated as image similarity measures, with MI found to be the best. Then, given a target image to be segmented and an atlas, automatic segmentation consists of: (a) an affine registration step for image positioning; (b) the active demons registration method to register the atlas to the target image; (c) the computation of MI values between the deformed atlas and the target image; (d) the weighted image fusion of the three deformed atlas images with the highest MI values to form the segmented contour. Results: MI was found to be the best among the six studied strategies in the sense that it had the highest positive correlation between the similarity measure (e.g., MI values) and DC. For automated segmentation, the weighted image fusion of the three deformed atlas images with the highest MI values provided the highest DC among four proposed strategies. Conclusion: MI has the highest correlation with DC and is therefore an appropriate choice for post-registration atlas selection in atlas-based segmentation. Xuhua Ren and Hao Gao were partially supported by the NSFC (#11405105), the 973 Program (#2015CB856000) and the Shanghai Pujiang Talent Program (#14PJ1404500).

  6. Using alternative segmentation techniques to examine residential customer's energy needs, wants, and preferences

    Energy Technology Data Exchange (ETDEWEB)

    Hollander, C.; Kidwell, S. [Union Electric Co., St. Louis, MO (United States); Banks, J.; Taylor, E. [Cambridge Reports/Research International, MA (United States)

    1994-11-01

    The primary objective of this study was to examine residential customers' attitudes toward energy usage, conservation, and efficiency, and to examine the implications of these attitudes for how the utility should design and communicate about programs and services in these areas. This study combined focus groups and customer surveys, and utilized several customer segmentation schemes -- grouping customers by geodemographics, as well as customers' energy and environmental values, beliefs, and opinions -- to distinguish different segments of customers.

  7. N286.7-99, A Canadian standard specifying software quality management system requirements for analytical, scientific, and design computer programs and its implementation at AECL

    International Nuclear Information System (INIS)

    Abel, R.

    2000-01-01

    Analytical, scientific, and design computer programs (referred to in this paper as 'scientific computer programs') are developed for use in a large number of ways by the user-engineer to support and prove engineering calculations and assumptions. These computer programs are subject to frequent modifications inherent in their application and are often used for critical calculations and analyses relating to the safety and functionality of equipment and systems. N286.7-99(4) was developed to establish appropriate quality management system requirements for the development, modification, and application of scientific computer programs. N286.7-99 provides particular guidance regarding the treatment of legacy codes.

  8. Analysis of linear measurements on 3D surface models using CBCT data segmentation obtained by automatic standard pre-set thresholds in two segmentation software programs: an in vitro study.

    Science.gov (United States)

    Poleti, Marcelo Lupion; Fernandes, Thais Maria Freire; Pagin, Otávio; Moretti, Marcela Rodrigues; Rubira-Bullen, Izabel Regina Fischer

    2016-01-01

    The aim of this in vitro study was to evaluate the reliability and accuracy of linear measurements on three-dimensional (3D) surface models obtained by standard pre-set thresholds in two segmentation software programs. Ten mandibles with 17 silica markers were scanned with 0.3-mm voxels in the i-CAT Classic (Imaging Sciences International, Hatfield, PA, USA). Twenty linear measurements were carried out twice by two observers on the 3D surface models: in Dolphin Imaging 11.5 (Dolphin Imaging & Management Solutions, Chatsworth, CA, USA), using two filters (Translucent and Solid-1), and in InVesalius 3.0.0 (Centre for Information Technology Renato Archer, Campinas, SP, Brazil). The physical measurements were made twice by another observer using a digital caliper on the dry mandibles. Excellent intra- and inter-observer reliability for the markers, physical measurements, and 3D surface models was found (intra-class correlation coefficient (ICC) and Pearson's r ≥ 0.91). The linear measurements on 3D surface models in the Dolphin and InVesalius software programs were accurate (Dolphin Solid-1 > InVesalius > Dolphin Translucent). The highest absolute and percentage errors were obtained for the variables R1-R1 (1.37 mm) and MF-AC (2.53%) in the Dolphin Translucent and InVesalius software, respectively. Linear measurements on 3D surface models obtained by standard pre-set thresholds in the Dolphin and InVesalius software programs are reliable and accurate compared with physical measurements. Studies that evaluate the reliability and accuracy of 3D models are necessary to ensure error predictability and to establish diagnosis, treatment plan, and prognosis in a more realistic way.
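
    For readers unfamiliar with the reliability statistic cited above, the following is a small sketch (not the authors' code) of a two-way random-effects, absolute-agreement, single-measurement ICC, assuming a subjects-by-raters matrix of repeated linear measurements; the sample values are made up.

```python
import numpy as np

def icc_2_1(x):
    """ICC(2,1): two-way random effects, absolute agreement, single rater.
    x is an (n subjects, k raters) array of measurements."""
    n, k = x.shape
    grand = x.mean()
    row = x.mean(axis=1)                  # per-subject means
    col = x.mean(axis=0)                  # per-rater means
    msr = k * np.sum((row - grand) ** 2) / (n - 1)          # subjects
    msc = n * np.sum((col - grand) ** 2) / (k - 1)          # raters
    sse = np.sum((x - row[:, None] - col[None, :] + grand) ** 2)
    mse = sse / ((n - 1) * (k - 1))                         # residual
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Hypothetical repeated measurements by two observers (mm)
obs = np.array([[10.2, 10.3], [12.1, 12.0], [9.8, 9.9], [11.5, 11.6]])
print(f"ICC(2,1) = {icc_2_1(obs):.3f}")
```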

  9. Using Predictability for Lexical Segmentation.

    Science.gov (United States)

    Çöltekin, Çağrı

    2017-09-01

    This study investigates a strategy based on predictability of consecutive sub-lexical units in learning to segment a continuous speech stream into lexical units using computational modeling and simulations. Lexical segmentation is one of the early challenges during language acquisition, and it has been studied extensively through psycholinguistic experiments as well as computational methods. However, despite strong empirical evidence, the explicit use of predictability of basic sub-lexical units in models of segmentation is underexplored. This paper presents an incremental computational model of lexical segmentation for exploring the usefulness of predictability for lexical segmentation. We show that the predictability cue is a strong cue for segmentation. Contrary to earlier reports in the literature, the strategy yields state-of-the-art segmentation performance with an incremental computational model that uses only this particular cue in a cognitively plausible setting. The paper also reports an in-depth analysis of the model, investigating the conditions affecting the usefulness of the strategy. Copyright © 2016 Cognitive Science Society, Inc.
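
    The paper's model is incremental and more elaborate, but the core predictability cue can be illustrated with syllable transitional probabilities: posit a word boundary wherever predictability dips to a local minimum. The sketch below uses a made-up artificial-language syllable stream, not data from the study.

```python
from collections import Counter

# Hypothetical unsegmented stream built from the "words" golabu, padoti, tupiro
stream = "golabu padoti tupiro golabu tupiro padoti golabu padoti".replace(" ", "")
syllables = [stream[i:i + 2] for i in range(0, len(stream), 2)]

# Transitional probability TP(a -> b) = count(ab) / count(a)
pairs = Counter(zip(syllables, syllables[1:]))
units = Counter(syllables)
tp = [pairs[(a, b)] / units[a] for a, b in zip(syllables, syllables[1:])]

# Posit a word boundary at local minima of TP between adjacent syllables
words, start = [], 0
for i in range(1, len(tp) - 1):
    if tp[i] < tp[i - 1] and tp[i] <= tp[i + 1]:
        words.append("".join(syllables[start:i + 1]))
        start = i + 1
words.append("".join(syllables[start:]))
print(words)   # recovers golabu / padoti / tupiro word units
```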

  10. Efficient graph-cut tattoo segmentation

    Science.gov (United States)

    Kim, Joonsoo; Parra, Albert; Li, He; Delp, Edward J.

    2015-03-01

    Law enforcement is interested in exploiting tattoos as an information source to identify, track and prevent gang-related crimes. Many tattoo image retrieval systems have been described. In a retrieval system, tattoo segmentation is an important step for retrieval accuracy, since segmentation removes background information from a tattoo image. Existing segmentation methods do not extract the tattoo very well when the background includes textures and colors similar to skin tones. In this paper we describe a tattoo segmentation approach based on determining skin pixels in regions near the tattoo. In these regions, graph-cut segmentation using a skin color model and a visual saliency map is used to find skin pixels. After segmentation, we determine which sets of skin pixels are connected with each other such that they form a closed contour enclosing a tattoo. The regions surrounded by the closed contours are considered tattoo regions. Our method segments tattoos well when the background includes textures and colors similar to skin.
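
    As a rough sketch of the graph-cut step only (not the authors' implementation), the snippet below uses the PyMaxflow library with a crude colour heuristic standing in for the paper's learned skin color model and saliency map; the image path, weights and thresholds are all hypothetical.

```python
import numpy as np
import maxflow                      # PyMaxflow package
from imageio.v3 import imread

img = imread("tattoo.jpg").astype(float) / 255.0   # hypothetical input image
r, g, b = img[..., 0], img[..., 1], img[..., 2]

# Crude stand-in for a skin color model: skin tones tend to have r > b
skin_prob = np.clip((r - b) * 4.0, 0.01, 0.99)

graph = maxflow.Graph[float]()
nodeids = graph.add_grid_nodes(img.shape[:2])
graph.add_grid_edges(nodeids, 1.5)                 # smoothness (pairwise) term
# Data (unary) terms: negative log-likelihoods for the two labels
graph.add_grid_tedges(nodeids, -np.log(1 - skin_prob), -np.log(skin_prob))
graph.maxflow()
# True where the min-cut places a pixel on the sink side; which side means
# "skin" depends on the ordering of the terminal-edge capacities above
is_skin = graph.get_grid_segments(nodeids)
```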

  11. Integration of safety engineering into a cost optimized development program.

    Science.gov (United States)

    Ball, L. W.

    1972-01-01

    A six-segment management model is presented, each segment of which represents a major area in a new product development program. The first segment of the model covers integration of specialist engineers into 'systems requirement definition' or the system engineering documentation process. The second covers preparation of five basic types of 'development program plans.' The third segment covers integration of system requirements, scheduling, and funding of specialist engineering activities into 'work breakdown structures,' 'cost accounts,' and 'work packages.' The fourth covers 'requirement communication' by line organizations. The fifth covers 'performance measurement' based on work package data. The sixth covers 'baseline requirements achievement tracking.'

  12. A Goal Programming R&D (Research and Development) Project Funding Model of the U.S. Army Strategic Defense Command Using the Analytic Hierarchy Process.

    Science.gov (United States)

    1987-09-01

    This Naval Postgraduate School thesis (Steven M. Anderson, Monterey, CA, September 1987) presents a goal programming R&D project funding model of the U.S. Army Strategic Defense Command using the Analytic Hierarchy Process.
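
    The thesis itself is not reproduced in the record, but the AHP step it names can be sketched: derive priority weights for competing R&D projects from a pairwise-comparison matrix via its principal eigenvector, then check consistency. The matrix values below are hypothetical.

```python
import numpy as np

# Hypothetical pairwise comparison matrix for three R&D projects
# (Saaty scale: A[i, j] = how strongly project i is preferred over j)
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)                # principal eigenvalue
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                               # normalised priority weights

# Saaty's consistency ratio (RI = 0.58 is the random index for n = 3)
n = A.shape[0]
ci = (eigvals.real[k] - n) / (n - 1)
cr = ci / 0.58
print(f"weights = {np.round(w, 3)}, consistency ratio = {cr:.3f}")
```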

  13. Analytic Provenance Datasets: A Data Repository of Human Analysis Activity and Interaction Logs

    OpenAIRE

    Mohseni, Sina; Pachuilo, Andrew; Nirjhar, Ehsanul Haque; Linder, Rhema; Pena, Alyssa; Ragan, Eric D.

    2018-01-01

    We present an analytic provenance data repository that can be used to study human analysis activity, thought processes, and software interaction with visual analysis tools during exploratory data analysis. We conducted a series of user studies involving exploratory data analysis scenarios with textual and cyber security data. Interaction logs, think-alouds, videos and all coded data in this study are available online for research purposes. Analysis sessions are segmented in multiple sub-task ...

  14. Analytic index of Wittgenstein's Nachlass

    DEFF Research Database (Denmark)

    2010-01-01

    Together with Professor Mark Addis, Birmingham, and the Research Team at AKSIS, Bergen University, I have developed an interactive analytic index to the whole of the Nachlass of Ludwig Wittgenstein. The project is funded by Nordforsk/WAB-Bergen/VWA-Helsinki, and is associated with the EU discovery ... program and the European Cultural Heritage program. The application is available on the web at the Philospace Home Page.

  15. Segmental stabilization and muscular strengthening in chronic low back pain: a comparative study

    Directory of Open Access Journals (Sweden)

    Fábio Renovato França

    2010-01-01

    Full Text Available OBJECTIVE: To contrast the efficacy of two exercise programs, segmental stabilization and strengthening of abdominal and trunk muscles, on pain, functional disability, and activation of the transversus abdominis muscle (TrA), in individuals with chronic low back pain. DESIGN: Our sample consisted of 30 individuals, randomly assigned to one of two treatment groups: segmental stabilization, where exercises focused on the TrA and lumbar multifidus muscles, and superficial strengthening (ST), where exercises focused on the rectus abdominis, abdominus obliquus internus, abdominus obliquus externus, and erector spinae. Groups were examined to discover whether the exercises produced differences regarding pain (visual analogue scale and McGill pain questionnaire), functional disability (Oswestry disability questionnaire), and TrA muscle activation capacity (Pressure Biofeedback Unit, PBU). The program lasted 6 weeks, and 30-minute sessions occurred twice a week. Analysis of variance was used for inter- and intra-group comparisons. The significance level was established at 5%. RESULTS: As compared to baseline, both treatments were effective in relieving pain and improving disability (p<0.001). Those in the segmental stabilization group had significant gains for all variables when compared to the ST group (p<0.001), including TrA activation, where relative gains were 48.3% and -5.1%, respectively. CONCLUSION: Both techniques lessened pain and reduced disability. Segmental stabilization is superior to superficial strengthening for all variables. Superficial strengthening does not improve TrA activation capacity.

  16. Fold distributions at clover, crystal and segment levels for segmented clover detectors

    International Nuclear Information System (INIS)

    Kshetri, R; Bhattacharya, P

    2014-01-01

    Fold distributions at clover, crystal and segment levels have been extracted for an array of segmented clover detectors at various gamma energies. A simple analysis of the results based on a model-independent approach is presented. For the first time, the clover fold distribution of an array and the associated array addback factor have been extracted. We have calculated the percentages of crystals and segments that fire in a full-energy-peak event.

  17. Intercalary bone segment transport in treatment of segmental tibial defects

    International Nuclear Information System (INIS)

    Iqbal, A.; Amin, M.S.

    2002-01-01

    Objective: To evaluate the results and complications of intercalary bone segment transport in the treatment of segmental tibial defects. Design: This is a retrospective analysis of patients with segmental tibial defects who were treated with the intercalary bone segment transport method. Place and Duration of Study: The study was carried out at Combined Military Hospital, Rawalpindi, from September 1997 to April 2001. Subjects and Methods: Thirteen patients were included in the study who had developed tibial defects either due to open fractures with bone loss or subsequent to bone debridement of infected non-unions. The mean bone defect was 6.4 cm and there were eight associated soft tissue defects. A locally made unilateral 'Naseer-Awais' (NA) fixator was used for bone segment transport. Distraction was done at the rate of 1 mm/day after 7-10 days of osteotomy. The patients were followed up fortnightly during distraction and monthly thereafter. The mean follow-up duration was 18 months. Results: The mean time in external fixation was 9.4 months. The mean 'healing index' was 1.47 months/cm. Satisfactory union was achieved in all cases. Six cases (46.2%) required bone grafting at the target site, and in one of them grafting was required at the level of regeneration as well. All the wounds healed well with no residual infection. There was no residual leg length discrepancy of more than 20 mm and one angular deformity of more than 5 degrees. The commonest complication encountered was pin track infection, seen in 38% of the Schanz screws applied. Loosening occurred in 6.8% of Schanz screws, requiring re-adjustment. Ankle joint contracture with equinus deformity and peroneal nerve paresis occurred in one case each. The functional results were graded as 'good' in seven, 'fair' in four, and 'poor' in two patients. Overall, the thirteen patients had 31 (minor/major) complications, a ratio of 2.38 complications per patient. To treat the bone defects and associated complications, a mean of

  18. Variation in medication adherence across patient behavioral segments: a multi-country study in hypertension

    Directory of Open Access Journals (Sweden)

    Sandy R

    2015-10-01

    Full Text Available Robert Sandy, Ulla Connor CoMac Analytics, Inc, Providence, RI, USA Objectives: This study determines the following for a hypertensive patient population: 1) the prevalence of patient worldview clusters; 2) differences in medication adherence across these clusters; and 3) the adherence predictive power of the clusters relative to measures of patients' concerns over their medication's cost, side effects, and efficacy. Methods: Members from patient panels in the UK, Germany, Italy, and Spain were invited to participate in an online survey that included the Medication Adherence Report Scale-5 (MARS-5) adherence instrument and a patient segmentation instrument developed by CoMac Analytics, Inc, based on a linguistic analysis of patient talk. Subjects were screened to have a diagnosis of hypertension and treatment with at least one antihypertensive agent. Results: A total of 353 patients completed the online survey in August/September 2011 and were categorized against three different behavioral domains: 1) control orientation (n=176 respondents [50%] for I, internal; n=177 respondents [50%] for E, external); 2) emotion (n=100 respondents [28%] for P, positive; n=253 respondents [72%] for N, negative); and 3) agency, or ability to act on choices (n=227 respondents [64%] for H, high agency; n=126 [36%] for L, low agency). Domains were grouped into eight different clusters, with EPH and IPH being the most prevalent (88 respondents [25%] in each cluster). The prevalence of other behavior clusters ranged from 6% (22 respondents, INH) to 12% (41 respondents, IPL). The proportion of patients defined as perfectly adherent (scored 25 on MARS-5) varied sharply across the segments: 51% adherent (45 of 88 respondents) for the IPH vs 8% adherent (2 of 25 respondents) classified as INL. Side effects, being employed, and stopping medicine because the patient got better were all significant determinants of adherence in a probit regression model. Conclusion: By categorizing

  19. Marker-based reconstruction of the kinematics of a chain of segments: a new method that incorporates joint kinematic constraints.

    Science.gov (United States)

    Klous, Miriam; Klous, Sander

    2010-07-01

    The aim of skin-marker-based motion analysis is to reconstruct the motion of a kinematical model from the noisy measured motion of skin markers. Existing kinematic models for the reconstruction of chains of segments can be divided into two categories: analytical methods that do not take joint constraints into account, and numerical global optimization methods that do take joint constraints into account but require numerical optimization of a large number of degrees of freedom, especially when the number of segments increases. In this study, a new and largely analytical method is presented for a chain of rigid bodies interconnected by spherical joints (chain-method). In this method, the number of generalized coordinates to be determined through numerical optimization is three, irrespective of the number of segments. This new method is compared with the analytical method of Veldpaus et al. [1988, "A Least-Squares Algorithm for the Equiform Transformation From Spatial Marker Co-Ordinates," J. Biomech., 21, pp. 45-54] (Veldpaus-method, a method of the first category) and the numerical global optimization method of Lu and O'Connor [1999, "Bone Position Estimation From Skin-Marker Co-Ordinates Using Global Optimization With Joint Constraints," J. Biomech., 32, pp. 129-134] (Lu-method, a method of the second category) regarding the effects of continuous noise simulating skin movement artifacts and regarding systematic errors in joint constraints. The study is based on simulated data to allow a comparison of the results of the different algorithms with true (noise- and error-free) marker locations. Results indicate a clear trend that accuracy for the chain-method is higher than for the Veldpaus-method and similar to the Lu-method. Because large parts of the equations in the chain-method can be solved analytically, the speed of convergence in this method is substantially higher than in the Lu-method. With only three segments, the average number of required iterations with the chain
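
    The single-segment building block referred to above is a least-squares rigid-body fit from marker coordinates. Below is a standard SVD-based rotation-and-translation fit in the same spirit (the cited Veldpaus algorithm also estimates scale); it is a generic sketch with synthetic markers, not the paper's code.

```python
import numpy as np

def rigid_fit(P, Q):
    """Least-squares rotation R and translation t such that Q ~ R @ P + t.
    P, Q: (3, n) arrays of marker coordinates in two poses."""
    p0, q0 = P.mean(axis=1, keepdims=True), Q.mean(axis=1, keepdims=True)
    H = (Q - q0) @ (P - p0).T                        # cross-dispersion matrix
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.linalg.det(U @ Vt)])   # guard against reflection
    R = U @ D @ Vt
    return R, q0 - R @ p0

# Synthetic test: 4 markers, known rotation about z, plus measurement noise
rng = np.random.default_rng(1)
P = rng.random((3, 4))
c, s = np.cos(0.3), np.sin(0.3)
R_true = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
Q = R_true @ P + np.array([[0.1], [0.2], [0.3]]) \
    + 0.001 * rng.standard_normal((3, 4))
R, t = rigid_fit(P, Q)
print(np.allclose(R, R_true, atol=0.01))
```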

  20. Market segmentation in behavioral perspective.

    OpenAIRE

    Wells, V.K.; Chang, S.W.; Oliveira-Castro, J.M.; Pallister, J.

    2010-01-01

    A segmentation approach is presented using both traditional demographic segmentation bases (age, social class/occupation, and working status) and a segmentation by benefits sought. The benefits sought in this case are utilitarian and informational reinforcement, variables developed from the Behavioral Perspective Model (BPM). Using data from 1,847 consumers and from a total of 76,682 individual purchases, brand choice and price and reinforcement responsiveness were assessed for each segment a...

  1. Holistic versus Analytic Evaluation of EFL Writing: A Case Study

    Science.gov (United States)

    Ghalib, Thikra K.; Al-Hattami, Abdulghani A.

    2015-01-01

    This paper investigates the performance of holistic and analytic scoring rubrics in the context of EFL writing. Specifically, the paper compares EFL students' scores on a writing task using holistic and analytic scoring rubrics. The data for the study was collected from 30 participants attending an English undergraduate program in a Yemeni…

  2. Noise destroys feedback enhanced figure-ground segmentation but not feedforward figure-ground segmentation

    Science.gov (United States)

    Romeo, August; Arall, Marina; Supèr, Hans

    2012-01-01

    Figure-ground (FG) segmentation is the separation of visual information into background and foreground objects. In the visual cortex, FG responses are observed in the late stimulus response period, when neurons fire in tonic mode, and are accompanied by a switch in cortical state. When such a switch does not occur, FG segmentation fails. Currently, it is not known what happens in the brain on such occasions. A biologically plausible feedforward spiking neuron model was previously devised that performed FG segmentation successfully. After incorporating feedback, the FG signal was enhanced, which was accompanied by a change in spiking regime: in the feedforward model neurons respond in a bursting mode, whereas in the feedback model neurons fire in tonic mode. It is known that bursts can overcome noise, while tonic firing appears to be much more sensitive to noise. In the present study, we try to elucidate how the presence of noise can impair FG segmentation, and to what extent the feedforward and feedback pathways can overcome noise. We show that noise specifically destroys the feedback-enhanced FG segmentation and leaves the feedforward FG segmentation largely intact. Our results predict that noise produces failure in FG perception. PMID:22934028

  3. News in SAS Analytics 14.2

    DEFF Research Database (Denmark)

    Milhøj, Anders

    2017-01-01

    In November 2016, the updated version 14.2 of Analytical Products was released to the market. This release updates the analytical program packages for statistics, econometrics, operations research, etc. These updates are now decoupled from simultaneous updates of the overall SAS program...

  4. Segmentation Techniques for Expanding a Library Instruction Market: Evaluating and Brainstorming.

    Science.gov (United States)

    Warren, Rebecca; Hayes, Sherman; Gunter, Donna

    2001-01-01

    Describes a two-part segmentation technique applied to an instruction program for an academic library during a strategic planning process. Discusses a brainstorming technique used to create a list of existing and potential audiences, and then describes a follow-up review session that evaluated the past years' efforts. (Author/LRW)

  5. Fast Edge Detection and Segmentation of Terrestrial Laser Scans Through Normal Variation Analysis

    Science.gov (United States)

    Che, E.; Olsen, M. J.

    2017-09-01

    Terrestrial Laser Scanning (TLS) utilizes light detection and ranging (lidar) to effectively and efficiently acquire point cloud data for a wide variety of applications. Segmentation is a common post-processing step that groups the point cloud into a number of clusters to simplify the data for the subsequent modelling and analysis needed in most applications. This paper presents a novel method to rapidly segment TLS data based on edge detection and region growing. First, by computing the projected incidence angles and performing normal variation analysis, the silhouette edges and intersection edges are separated from the smooth surfaces. Then a modified region growing algorithm groups the points lying on the same smooth surface. The proposed method efficiently exploits the gridded scan pattern utilized during acquisition of TLS data by most sensors and takes advantage of parallel programming to process approximately 1 million points per second. Moreover, the proposed segmentation does not require estimation of the normal at each point, which prevents errors in normal estimation from propagating into the segmentation. Both an indoor and an outdoor scene are used in experiments to demonstrate and discuss the effectiveness and robustness of the proposed segmentation method.
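
    A much-simplified sketch of the gridded-scan idea (normal-variation edge detection followed by region growing) is given below. Unlike the paper's approach, this sketch does estimate a per-point normal from grid neighbours; the angle threshold is hypothetical and the input is a synthetic (H, W, 3) organized point grid.

```python
import numpy as np
from collections import deque

def segment_grid(points, angle_thresh_deg=10.0):
    """points: (H, W, 3) gridded TLS points. Returns (H, W) segment labels."""
    # Per-pixel normals from grid neighbours (cross product of tangents);
    # np.roll wraps at the borders, so border links are masked out below
    du = np.roll(points, -1, axis=1) - points
    dv = np.roll(points, -1, axis=0) - points
    n = np.cross(du, dv)
    n /= np.linalg.norm(n, axis=2, keepdims=True) + 1e-12

    # A link to the right/down neighbour is "smooth" if normals agree
    cos_t = np.cos(np.radians(angle_thresh_deg))
    smooth_r = np.sum(n * np.roll(n, -1, axis=1), axis=2) > cos_t
    smooth_d = np.sum(n * np.roll(n, -1, axis=0), axis=2) > cos_t
    smooth_r[:, -1] = False
    smooth_d[-1, :] = False

    # Region growing (BFS) across smooth links only
    H, W = points.shape[:2]
    labels = -np.ones((H, W), dtype=int)
    next_label = 0
    for sy in range(H):
        for sx in range(W):
            if labels[sy, sx] != -1:
                continue
            labels[sy, sx] = next_label
            q = deque([(sy, sx)])
            while q:
                y, x = q.popleft()
                for ny, nx_, ok in ((y, x + 1, smooth_r[y, x]),
                                    (y + 1, x, smooth_d[y, x]),
                                    (y, x - 1, x > 0 and smooth_r[y, x - 1]),
                                    (y - 1, x, y > 0 and smooth_d[y - 1, x])):
                    if ok and 0 <= ny < H and 0 <= nx_ < W \
                            and labels[ny, nx_] == -1:
                        labels[ny, nx_] = next_label
                        q.append((ny, nx_))
            next_label += 1
    return labels

pts = np.random.default_rng(0).random((30, 40, 3))   # synthetic demo grid
print(np.unique(segment_grid(pts)).size)
```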

  6. Consumer energy - conservation policy: an analytical approach

    Energy Technology Data Exchange (ETDEWEB)

    McDougall, G.H.G.; Ritchie, J.R.B.

    1984-06-01

    To capture the potential energy savings available in the consumer sector, an analytical approach to conservation policy is proposed. A policy framework is described, and the key constructs, including a payoff matrix analysis and a consumer impact analysis, are discussed. Implications derived from the considerable body of prior consumer research are provided to illustrate the effect on the design and implementation of future programs. The goals of this analytical approach to conservation policy (economic stability and economic security) are well worth pursuing. 13 references, 2 tables.

  7. Liver segmentation in contrast enhanced CT data using graph cuts and interactive 3D segmentation refinement methods

    Energy Technology Data Exchange (ETDEWEB)

    Beichel, Reinhard; Bornik, Alexander; Bauer, Christian; Sorantin, Erich [Departments of Electrical and Computer Engineering and Internal Medicine, Iowa Institute for Biomedical Imaging, University of Iowa, Iowa City, Iowa 52242 (United States); Institute for Computer Graphics and Vision, Graz University of Technology, Inffeldgasse 16, A-8010 Graz (Austria); Department of Electrical and Computer Engineering, Iowa Institute for Biomedical Imaging, University of Iowa, Iowa City, Iowa 52242 (United States); Department of Radiology, Medical University Graz, Auenbruggerplatz 34, A-8010 Graz (Austria)

    2012-03-15

    Purpose: Liver segmentation is an important prerequisite for the assessment of liver cancer treatment options like tumor resection, image-guided radiation therapy (IGRT), radiofrequency ablation, etc. The purpose of this work was to evaluate a new approach for liver segmentation. Methods: A graph cuts segmentation method was combined with a three-dimensional virtual reality based segmentation refinement approach. The developed interactive segmentation system allowed the user to manipulate volume chunks and/or surfaces instead of 2D contours in cross-sectional images (i.e., slice-by-slice). The method was evaluated on twenty routinely acquired portal-phase contrast enhanced multislice computed tomography (CT) data sets. An independent reference was generated by utilizing a currently clinically utilized slice-by-slice segmentation method. After 1 h of introduction to the developed segmentation system, three experts were asked to segment all twenty data sets with the proposed method. Results: Compared to the independent standard, the relative volumetric segmentation overlap error averaged over all three experts and all twenty data sets was 3.74%. Liver segmentation required on average 16 min of user interaction per case. The calculated relative volumetric overlap errors were not found to be significantly different [analysis of variance (ANOVA) test, p = 0.82] between experts who utilized the proposed 3D system. In contrast, the time required by each expert for segmentation was found to be significantly different (ANOVA test, p = 0.0009). Major differences between generated segmentations and independent references were observed in areas where vessels enter or leave the liver and where no accepted criteria for defining liver boundaries exist. In comparison, slice-by-slice based generation of the independent standard utilizing a live wire tool took 70.1 min on average. A standard 2D segmentation refinement approach applied to all twenty data sets required on average 38.2 min of

  8. Liver segmentation in contrast enhanced CT data using graph cuts and interactive 3D segmentation refinement methods

    International Nuclear Information System (INIS)

    Beichel, Reinhard; Bornik, Alexander; Bauer, Christian; Sorantin, Erich

    2012-01-01

    Purpose: Liver segmentation is an important prerequisite for the assessment of liver cancer treatment options like tumor resection, image-guided radiation therapy (IGRT), radiofrequency ablation, etc. The purpose of this work was to evaluate a new approach for liver segmentation. Methods: A graph cuts segmentation method was combined with a three-dimensional virtual reality based segmentation refinement approach. The developed interactive segmentation system allowed the user to manipulate volume chunks and/or surfaces instead of 2D contours in cross-sectional images (i.e., slice-by-slice). The method was evaluated on twenty routinely acquired portal-phase contrast enhanced multislice computed tomography (CT) data sets. An independent reference was generated by utilizing a currently clinically utilized slice-by-slice segmentation method. After 1 h of introduction to the developed segmentation system, three experts were asked to segment all twenty data sets with the proposed method. Results: Compared to the independent standard, the relative volumetric segmentation overlap error averaged over all three experts and all twenty data sets was 3.74%. Liver segmentation required on average 16 min of user interaction per case. The calculated relative volumetric overlap errors were not found to be significantly different [analysis of variance (ANOVA) test, p = 0.82] between experts who utilized the proposed 3D system. In contrast, the time required by each expert for segmentation was found to be significantly different (ANOVA test, p = 0.0009). Major differences between generated segmentations and independent references were observed in areas where vessels enter or leave the liver and where no accepted criteria for defining liver boundaries exist. In comparison, slice-by-slice based generation of the independent standard utilizing a live wire tool took 70.1 min on average. A standard 2D segmentation refinement approach applied to all twenty data sets required on average 38.2 min of

  9. Liver segmentation in contrast enhanced CT data using graph cuts and interactive 3D segmentation refinement methods.

    Science.gov (United States)

    Beichel, Reinhard; Bornik, Alexander; Bauer, Christian; Sorantin, Erich

    2012-03-01

    Liver segmentation is an important prerequisite for the assessment of liver cancer treatment options like tumor resection, image-guided radiation therapy (IGRT), radiofrequency ablation, etc. The purpose of this work was to evaluate a new approach for liver segmentation. A graph cuts segmentation method was combined with a three-dimensional virtual reality based segmentation refinement approach. The developed interactive segmentation system allowed the user to manipulate volume chunks and/or surfaces instead of 2D contours in cross-sectional images (i.e., slice-by-slice). The method was evaluated on twenty routinely acquired portal-phase contrast enhanced multislice computed tomography (CT) data sets. An independent reference was generated by utilizing a currently clinically utilized slice-by-slice segmentation method. After 1 h of introduction to the developed segmentation system, three experts were asked to segment all twenty data sets with the proposed method. Compared to the independent standard, the relative volumetric segmentation overlap error averaged over all three experts and all twenty data sets was 3.74%. Liver segmentation required on average 16 min of user interaction per case. The calculated relative volumetric overlap errors were not found to be significantly different [analysis of variance (ANOVA) test, p = 0.82] between experts who utilized the proposed 3D system. In contrast, the time required by each expert for segmentation was found to be significantly different (ANOVA test, p = 0.0009). Major differences between generated segmentations and independent references were observed in areas where vessels enter or leave the liver and where no accepted criteria for defining liver boundaries exist. In comparison, slice-by-slice based generation of the independent standard utilizing a live wire tool took 70.1 min on average. A standard 2D segmentation refinement approach applied to all twenty data sets required on average 38.2 min of user interaction

  10. Cluster Ensemble-Based Image Segmentation

    Directory of Open Access Journals (Sweden)

    Xiaoru Wang

    2013-07-01

    Full Text Available Image segmentation is the foundation of computer vision applications. In this paper, we propose a new cluster ensemble-based image segmentation algorithm, which overcomes several problems of traditional methods. We make two main contributions in this paper. First, we introduce the cluster ensemble concept to fuse the segmentation results from different types of visual features effectively, which can deliver a better final result and achieve much more stable performance across broad categories of images. Second, we exploit the PageRank idea from Internet applications and apply it to the image segmentation task. This improves the final segmentation results by combining the spatial information of the image and the semantic similarity of regions. Our experiments on four public image databases validate the superiority of our algorithm over conventional algorithms based on a single feature type or on multiple feature types, since our algorithm can fuse multiple types of features effectively for better segmentation results. Moreover, our method also proves very competitive in comparison with other state-of-the-art segmentation algorithms.
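
    A minimal sketch of the cluster-ensemble idea follows (not the authors' algorithm, which additionally uses a PageRank-style spatial step): cluster pixels separately on several feature types, accumulate a co-association matrix, and derive the final segmentation from it. The image and feature choices are hypothetical.

```python
import numpy as np
from sklearn.cluster import KMeans
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

rng = np.random.default_rng(0)
img = rng.random((20, 20, 3))                 # hypothetical small RGB image
pix = img.reshape(-1, 3)
yx = np.indices(img.shape[:2]).reshape(2, -1).T / 20.0

# Base clusterings on different feature types (color; color + position)
features = [pix, np.hstack([pix, yx])]
n_pix = pix.shape[0]
coassoc = np.zeros((n_pix, n_pix))
for X in features:
    lab = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)
    coassoc += (lab[:, None] == lab[None, :])  # count co-clustered pairs
coassoc /= len(features)

# Final segmentation: hierarchical clustering on the ensemble distances
dist = squareform(1.0 - coassoc, checks=False)
labels = fcluster(linkage(dist, method="average"), t=4, criterion="maxclust")
segmentation = labels.reshape(img.shape[:2])
```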

  11. A numerical study on seismic response of self-centring precast segmental columns at different post-tensioning forces

    Directory of Open Access Journals (Sweden)

    Ehsan Nikbakht

    Full Text Available Precast bridge columns have seen increasing demand over the past few years due to their advantages over conventional bridge columns, particularly the fact that precast columns can be constructed off site and erected in a short period of time. The present study analytically investigates the behaviour of self-centring precast segmental bridge columns under nonlinear-static and pseudo-dynamic loading at different prestressing strand stress levels. Self-centring segmental columns are composed of prefabricated reinforced concrete segments which are connected by central post-tensioning (PT) strands. The present study develops a three-dimensional (3D) nonlinear finite element model for hybrid post-tensioned precast segmental bridge columns. The model is subjected to constant axial loading and lateral reverse cyclic loading. The lateral force-displacement results of the analysed columns show good agreement with the experimental response of the columns. Bonded post-tensioned segmental columns at 25%, 40% and 70% prestressing strand stress levels are analysed and compared with an emulative monolithic conventional column. The columns with higher initial prestressing strand levels show greater initial stiffness and strength but higher stiffness reduction at large drifts. In the time-history analysis, the column samples are subjected to different earthquake records to investigate the effect of post-tensioning force levels on their lateral seismic response in low and higher seismicity zones. The results indicate that, for low seismicity zones, post-tensioned segmental columns with a higher initial stress level exhibit a lower lateral peak displacement. However, in higher seismicity zones, applying a high initial stress level should be avoided for precast segmental self-centring columns with low energy dissipation capacity.

  12. Albedo estimation for scene segmentation

    Energy Technology Data Exchange (ETDEWEB)

    Lee, C H; Rosenfeld, A

    1983-03-01

    Standard methods of image segmentation do not take into account the three-dimensional nature of the underlying scene. For example, histogram-based segmentation tacitly assumes that the image intensity is piecewise constant, and this is not true when the scene contains curved surfaces. This paper introduces a method of taking 3D information into account in the segmentation process. The image intensities are adjusted to compensate for the effects of estimated surface orientation; the adjusted intensities can be regarded as reflectivity estimates. When histogram-based segmentation is applied to these new values, the image is segmented into parts corresponding to surfaces of constant reflectivity in the scene. 7 references.

  13. Automatic generation of pictorial transcripts of video programs

    Science.gov (United States)

    Shahraray, Behzad; Gibbon, David C.

    1995-03-01

    An automatic authoring system for the generation of pictorial transcripts of video programs which are accompanied by closed caption information is presented. A number of key frames, each of which represents the visual information in a segment of the video (i.e., a scene), are selected automatically by performing a content-based sampling of the video program. The textual information is recovered from the closed caption signal and is initially segmented based on its implied temporal relationship with the video segments. The text segmentation boundaries are then adjusted, based on lexical analysis and/or caption control information, to account for synchronization errors due to possible delays in the detection of scene boundaries or the transmission of the caption information. The closed caption text is further refined through linguistic processing for conversion to lowercase with correct capitalization. The key frames and the related text generate a compact multimedia presentation of the contents of the video program which lends itself to efficient storage and transmission. This compact representation can be viewed on a computer screen, or used to generate the input to a commercial text processing package to produce a printed version of the program.

  14. Site study plan for geochemical analytical requirements and methodologies: Revision 1

    International Nuclear Information System (INIS)

    1987-12-01

    This site study plan documents the analytical methodologies and procedures that will be used for the geochemical analysis of the rock and fluid samples collected during Site Characterization. Information relating to the quality aspects of these analyses is also provided, where available. Most of the proposed analytical procedures have been used previously on the program and are sufficiently sensitive to yield high-quality analyses. In a few cases, improvements in analytical methodology (e.g., greater sensitivity, fewer interferences) are desired; suggested improvements to these methodologies are discussed, and in most cases these method-development activities have already been initiated. The primary source of rock and fluid samples for geochemical analysis during Site Characterization will be the drilling program, as described in various SRP Site Study Plans. The Salt Repository Project (SRP) Networks specify the schedule under which the program will operate. Drilling will not begin until after site groundwater baseline conditions have been established. The Technical Field Services Contractor (TFSC) is responsible for conducting the field program of drilling and testing. Samples and data will be handled and reported in accordance with established SRP procedures. A quality assurance program will be utilized to assure that activities affecting quality are performed correctly and that the appropriate documentation is maintained. 28 refs., 9 figs., 14 tabs.

  15. Gamifying Video Object Segmentation.

    Science.gov (United States)

    Spampinato, Concetto; Palazzo, Simone; Giordano, Daniela

    2017-10-01

    Video object segmentation can be considered one of the most challenging computer vision problems. Indeed, so far, no existing solution is able to effectively deal with the peculiarities of real-world videos, especially in cases of articulated motion and object occlusions; these limitations appear more evident when the performance of automated methods is compared with that of humans. However, manually segmenting objects in videos is largely impractical, as it requires a lot of time and concentration. To address this problem, in this paper we propose an interactive video object segmentation method which exploits, on the one hand, the capability of humans to identify objects correctly in visual scenes, and on the other hand, the collective human brainpower to solve challenging and large-scale tasks. In particular, our method relies on a game with a purpose to collect human inputs on object locations, followed by an accurate segmentation phase achieved by optimizing an energy function encoding spatial and temporal constraints between object regions as well as human-provided location priors. Performance analysis carried out on complex video benchmarks, and exploiting data provided by over 60 users, demonstrated that our method shows a better trade-off between annotation time and segmentation accuracy than interactive video annotation and automated video object segmentation approaches.

  16. Problem-based learning on quantitative analytical chemistry course

    Science.gov (United States)

    Fitri, Noor

    2017-12-01

    This research applies the problem-based learning method to a quantitative analytical chemistry course, the so-called 'Analytical Chemistry II' course, especially the part related to essential oil analysis. The learning outcomes of this course include understanding of the lectures, skill in applying the course materials, and the ability to identify, formulate and solve chemical analysis problems. The role of study groups is quite important in improving students' learning ability and in completing independent and group tasks. Thus, students are not only aware of the basic concepts of Analytical Chemistry II, but are also able to understand and apply the analytical concepts they have studied to solve given analytical chemistry problems, and have the attitude and ability to work together to solve these problems. Based on the learning outcomes, it can be concluded that the problem-based learning method in the Analytical Chemistry II course has been proven to improve students' knowledge, skills, abilities and attitudes. Students are not only skilled at solving problems in analytical chemistry, especially in essential oil analysis in accordance with the local genius of the Chemistry Department, Universitas Islam Indonesia, but are also skilled at working with computer programs and able to understand material and problems in English.

  17. A dorsolateral prefrontal cortex semi-automatic segmenter

    Science.gov (United States)

    Al-Hakim, Ramsey; Fallon, James; Nain, Delphine; Melonakos, John; Tannenbaum, Allen

    2006-03-01

    Structural, functional, and clinical studies in schizophrenia have, for several decades, consistently implicated dysfunction of the prefrontal cortex in the etiology of the disease. Functional and structural imaging studies, combined with clinical, psychometric, and genetic analyses in schizophrenia, have confirmed the key roles played by the prefrontal cortex and closely linked "prefrontal system" structures such as the striatum, amygdala, mediodorsal thalamus, substantia nigra-ventral tegmental area, and anterior cingulate cortices. The nodal structure of the prefrontal system circuit is the dorsal lateral prefrontal cortex (DLPFC), or Brodmann area 46, which also appears to be the most commonly studied and cited brain area with respect to schizophrenia. 1, 2, 3, 4 In 1986, Weinberger et al. tied cerebral blood flow in the DLPFC to schizophrenia.1 In 2001, Perlstein et al. demonstrated that DLPFC activation is essential for working memory tasks commonly deficient in schizophrenia. 2 More recently, groups have linked morphological changes due to gene deletion and increased DLPFC glutamate concentration to schizophrenia. 3, 4 Despite the experimental and clinical focus on the DLPFC in structural and functional imaging, the variability of the location of this area, differences in opinion on exactly what constitutes the DLPFC, and inherent difficulties in segmenting this highly convoluted cortical region have contributed to a lack of widely used standards for manual or semi-automated segmentation programs. Given these implications, we developed a semi-automatic tool to segment the DLPFC from brain MRI scans in a reproducible way in order to conduct further morphological and statistical studies. The segmenter is based on expert neuroanatomist rules (Fallon-Kindermann rules), inspired by cytoarchitectonic data and reconstructions presented by Rajkowska and Goldman-Rakic. 5 It is semi-automated to provide essential user interactivity. We present our results and provide details on

  18. Cultivating Institutional Capacities for Learning Analytics

    Science.gov (United States)

    Lonn, Steven; McKay, Timothy A.; Teasley, Stephanie D.

    2017-01-01

    This chapter details the process the University of Michigan developed to build institutional capacity for learning analytics. A symposium series, faculty task force, fellows program, research grants, and other initiatives are discussed, with lessons learned for future efforts and how other institutions might adapt such efforts to spur cultural…

  19. SU-C-207B-04: Automated Segmentation of Pectoral Muscle in MR Images of Dense Breasts

    Energy Technology Data Exchange (ETDEWEB)

    Verburg, E; Waard, SN de; Veldhuis, WB; Gils, CH van; Gilhuijs, KGA [University Medical Center Utrecht, Utrecht (Netherlands)

    2016-06-15

    Purpose: To develop and evaluate a fully automated method for segmentation of the pectoral muscle boundary in Magnetic Resonance Imaging (MRI) of dense breasts. Methods: Segmentation of the pectoral muscle is an important part of automatic breast image analysis methods. Current methods for segmenting the pectoral muscle in breast MRI have difficulties delineating the muscle border correctly in breasts with a large proportion of fibroglandular tissue (i.e., dense breasts). Hence, an automated method based on dynamic programming was developed, incorporating heuristics aimed at shape, location and gradient features. To assess the method, the pectoral muscle was segmented in 91 randomly selected participants (mean age 56.6 years, range 49.5–75.2 years) from a large MRI screening trial in women with dense breasts (ACR BI-RADS category 4). Each MR dataset consisted of 178 or 179 T1-weighted images with voxel size 0.64 × 0.64 × 1.00 mm³. All images (n=16,287) were reviewed and scored by a radiologist. In contrast to volume overlap coefficients, such as DICE, the radiologist detected deviations in the segmented muscle border and determined whether the result would impact the ability to accurately determine the volume of fibroglandular tissue and the detection of breast lesions. Results: According to the radiologist's scores, 95.5% of the slices did not mask breast tissue in such a way that it could affect detection of breast lesions or volume measurements. In 13.1% of the slices a deviation in the segmented muscle border was present that would not impact breast lesion detection. In 70 datasets (78%) at least 95% of the slices were segmented in such a way that it would not affect detection of breast lesions, and in 60 (66%) datasets this was 100%. Conclusion: Dynamic programming with dedicated heuristics shows promising potential to segment the pectoral muscle in women with dense breasts.

  20. SU-C-207B-04: Automated Segmentation of Pectoral Muscle in MR Images of Dense Breasts

    International Nuclear Information System (INIS)

    Verburg, E; Waard, SN de; Veldhuis, WB; Gils, CH van; Gilhuijs, KGA

    2016-01-01

    Purpose: To develop and evaluate a fully automated method for segmentation of the pectoral muscle boundary in Magnetic Resonance Imaging (MRI) of dense breasts. Methods: Segmentation of the pectoral muscle is an important part of automatic breast image analysis methods. Current methods for segmenting the pectoral muscle in breast MRI have difficulties delineating the muscle border correctly in breasts with a large proportion of fibroglandular tissue (i.e., dense breasts). Hence, an automated method based on dynamic programming was developed, incorporating heuristics aimed at shape, location and gradient features. To assess the method, the pectoral muscle was segmented in 91 randomly selected participants (mean age 56.6 years, range 49.5–75.2 years) from a large MRI screening trial in women with dense breasts (ACR BI-RADS category 4). Each MR dataset consisted of 178 or 179 T1-weighted images with voxel size 0.64 × 0.64 × 1.00 mm³. All images (n=16,287) were reviewed and scored by a radiologist. In contrast to volume overlap coefficients, such as DICE, the radiologist detected deviations in the segmented muscle border and determined whether the result would impact the ability to accurately determine the volume of fibroglandular tissue and the detection of breast lesions. Results: According to the radiologist's scores, 95.5% of the slices did not mask breast tissue in such a way that it could affect detection of breast lesions or volume measurements. In 13.1% of the slices a deviation in the segmented muscle border was present that would not impact breast lesion detection. In 70 datasets (78%) at least 95% of the slices were segmented in such a way that it would not affect detection of breast lesions, and in 60 (66%) datasets this was 100%. Conclusion: Dynamic programming with dedicated heuristics shows promising potential to segment the pectoral muscle in women with dense breasts.
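
    Neither record includes implementation details, but the dynamic-programming core of such boundary tracking can be sketched as a minimal-cost path through a per-slice cost image (e.g., derived from gradients). The cost construction below is hypothetical and the code is a generic sketch, not the authors' method.

```python
import numpy as np

def dp_boundary(cost):
    """Minimal-cost left-to-right path through a (rows, cols) cost image;
    returns one row index per column, with moves limited to +/-1 row."""
    rows, cols = cost.shape
    acc = cost.copy()
    back = np.zeros((rows, cols), dtype=int)
    for x in range(1, cols):
        for y in range(rows):
            lo, hi = max(0, y - 1), min(rows, y + 2)
            prev = int(np.argmin(acc[lo:hi, x - 1])) + lo
            back[y, x] = prev
            acc[y, x] += acc[prev, x - 1]
    path = np.empty(cols, dtype=int)
    path[-1] = int(np.argmin(acc[:, -1]))
    for x in range(cols - 1, 0, -1):
        path[x - 1] = back[path[x], x]
    return path

# Hypothetical cost image: low cost carved along a slanted "muscle border"
rng = np.random.default_rng(0)
cost = rng.random((40, 60))
true_edge = 10 + np.arange(60) // 4
cost[true_edge, np.arange(60)] *= 0.05
print(np.abs(dp_boundary(cost) - true_edge).mean())   # ~0 if path is found
```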

  1. U.S. Army Custom Segmentation System

    Science.gov (United States)

    2007-06-01

    A basis for segmentation is individual or intergroup differences in response to marketing-mix variables. Presumptions about segments: they have different demands in a product or service category, and they respond differently to changes in the marketing mix. Criteria for segments: the segments must exist in the environment

  2. Getting started with Greenplum for big data analytics

    CERN Document Server

    Gollapudi, Sunila

    2013-01-01

    A standard tutorial-based approach. "Getting Started with Greenplum for Big Data Analytics" is great for data scientists and data analysts with a basic knowledge of Data Warehousing and Business Intelligence platforms who are new to Big Data and who are looking to get a good grounding in how to use the Greenplum Platform. It is assumed that you will have some experience with database design and programming as well as be familiar with analytics tools like R and Weka.

  3. Marketing University Outreach Programs.

    Science.gov (United States)

    Foster, Ralph S., Jr., Ed.; And Others

    A collection of 12 essays and model program descriptions addresses issues in the marketing of university extension, outreach, and distance education programs. They include: (1) "Marketing and University Outreach: Parallel Processes" (William I. Sauser, Jr. and others); (2) "Segmenting and Targeting the Organizational Market"…

  4. Multi-product dynamic advertisement planning in a segmented market

    Directory of Open Access Journals (Sweden)

    Aggarwal Sugandha

    2017-01-01

    Full Text Available In this paper, a dynamic multi-objective linear integer programming model is proposed to optimally distribute a firm's advertising budget among multiple products and media in a segmented market. To make the media plan responsive to changes in the market, the distribution is carried out dynamically by dividing the planning horizon into smaller periods. The model incorporates the effect of the previous period's advertising reach on the current period (captured through a retention factor) and also considers the cross-product effect of simultaneously advertising different products. An application of the model is presented for an insurance firm that markets five different products, using a goal programming approach.
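
    As a toy illustration of this kind of formulation (much simpler than the paper's dynamic multi-objective model), a single-period allocation of an advertising budget across products and media can be written as an integer program in PuLP, an open-source LP modeler; all reach coefficients, costs and bounds below are made up.

```python
from pulp import LpProblem, LpMaximize, LpVariable, lpSum, value

products = ["life", "auto"]
media = ["tv", "print", "online"]
reach = {("life", "tv"): 9, ("life", "print"): 4, ("life", "online"): 6,
         ("auto", "tv"): 8, ("auto", "print"): 3, ("auto", "online"): 7}
cost = {"tv": 50, "print": 10, "online": 15}   # cost per ad insertion
budget = 300

prob = LpProblem("ad_allocation", LpMaximize)
x = {(p, m): LpVariable(f"x_{p}_{m}", lowBound=0, upBound=10, cat="Integer")
     for p in products for m in media}

# Objective: total audience reach across products and media
prob += lpSum(reach[p, m] * x[p, m] for p in products for m in media)
# Budget constraint and a minimum presence per product
prob += lpSum(cost[m] * x[p, m] for p in products for m in media) <= budget
for p in products:
    prob += lpSum(x[p, m] for m in media) >= 2

prob.solve()
for (p, m), var in x.items():
    print(p, m, int(var.value()))
print("total reach:", value(prob.objective))
```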

  5. Segmentation of confocal Raman microspectroscopic imaging data using edge-preserving denoising and clustering.

    Science.gov (United States)

    Alexandrov, Theodore; Lasch, Peter

    2013-06-18

    Over the past decade, confocal Raman microspectroscopic (CRM) imaging has matured into a useful analytical tool to obtain spatially resolved chemical information on the molecular composition of biological samples and has found its way into histopathology, cytology, and microbiology. A CRM imaging data set is a hyperspectral image in which Raman intensities are represented as a function of three coordinates: a spectral coordinate λ encoding the wavelength and two spatial coordinates x and y. Understanding CRM imaging data is challenging because of its complexity, size, and moderate signal-to-noise ratio. Spatial segmentation of CRM imaging data is a way to reveal regions of interest and is traditionally performed using nonsupervised clustering which relies on spectral domain-only information, with the main drawback being the high sensitivity to noise. We present a new pipeline for spatial segmentation of CRM imaging data which combines preprocessing in the spectral and spatial domains with k-means clustering. Its core is the preprocessing routine in the spatial domain, edge-preserving denoising (EPD), which exploits the spatial relationships between Raman intensities acquired at neighboring pixels. Additionally, we propose to use both spatial correlation to identify Raman spectral features colocalized with defined spatial regions and confidence maps to assess the quality of spatial segmentation. For CRM data acquired from midsagittal Syrian hamster (Mesocricetus auratus) brain cryosections, we show how our pipeline benefits from the complex spatial-spectral relationships inherent in the CRM imaging data. EPD significantly improves the quality of spatial segmentation, which allows us to extract the underlying structural and compositional information contained in the Raman microspectra.
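
    A condensed sketch of such a pipeline is shown below; a median filter stands in for the paper's EPD step (the actual EPD routine is more elaborate), applied per spectral channel before k-means clustering of the spectra, followed by a simple centroid-distance confidence map. The data cube is synthetic and its shape hypothetical.

```python
import numpy as np
from scipy.ndimage import median_filter
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
cube = rng.random((50, 50, 200))     # hypothetical CRM image: x, y, wavelength

# Spatial-domain preprocessing: denoise each spectral channel, exploiting
# correlations between neighbouring pixels (stand-in for EPD)
den = np.stack([median_filter(cube[:, :, i], size=3)
                for i in range(cube.shape[2])], axis=2)

# Spectral-domain preprocessing: normalise each pixel spectrum
spec = den.reshape(-1, den.shape[2])
spec = (spec - spec.mean(axis=1, keepdims=True)) \
       / (spec.std(axis=1, keepdims=True) + 1e-12)

# k-means spatial segmentation of the hyperspectral image
km = KMeans(n_clusters=5, n_init=10, random_state=0).fit(spec)
segmentation = km.labels_.reshape(cube.shape[:2])

# Confidence map: similarity of each spectrum to its cluster centroid
d = np.linalg.norm(spec - km.cluster_centers_[km.labels_], axis=1)
confidence = (1.0 - d / d.max()).reshape(cube.shape[:2])
```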

  6. 3D analytical field calculation using triangular magnet segments applied to a skewed linear permanent magnet actuator

    NARCIS (Netherlands)

    Janssen, J.L.G.; Paulides, J.J.H.; Lomonova, E.

    2010-01-01

    This paper presents novel analytical expressions which describe the 3D magnetic field of arbitrarily magnetized triangular-shaped charged surfaces. These versatile expressions are suitable to model triangular-shaped permanent magnets and can be expanded to any polyhedral shape. Many applications are

  7. 3D Analytical field calculation using triangular magnet segments applied to a skewed linear permanent magnet actuator

    NARCIS (Netherlands)

    Janssen, J.L.G.; Paulides, J.J.H.; Lomonova, E.

    2009-01-01

    This paper presents novel analytical expressions which describe the 3D magnetic field of arbitrarily magnetized triangular-shaped charged surfaces. These versatile expressions are suitable to model triangular-shaped permanent magnets and can be expanded to any polyhedral shape. Many applications are

  8. Multidendritic sensory neurons in the adult Drosophila abdomen: origins, dendritic morphology, and segment- and age-dependent programmed cell death

    Directory of Open Access Journals (Sweden)

    Sugimura Kaoru

    2009-10-01

    Background: For the establishment of functional neural circuits that support a wide range of animal behaviors, initial circuits formed in early development have to be reorganized. One way to achieve this is local remodeling of the circuitry hardwiring. To genetically investigate the underlying mechanisms of this remodeling, one model system employs a major group of Drosophila multidendritic sensory neurons - the dendritic arborization (da) neurons - which exhibit dramatic dendritic pruning and subsequent growth during metamorphosis. The 15 da neurons are identified in each larval abdominal hemisegment and are classified into four categories - classes I to IV - in order of increasing size of their receptive fields and/or arbor complexity at the mature larval stage. Our knowledge regarding the anatomy and developmental basis of adult da neurons is still fragmentary. Results: We identified multidendritic neurons in the adult Drosophila abdomen, visualized the dendritic arbors of the individual neurons, and traced the origins of those cells back to the larval stage. There were six da neurons in abdominal hemisegment 3 or 4 (A3/4) of the pharate adult and the adult just after eclosion, five of which were persistent larval da neurons. We quantitatively analyzed dendritic arbors of three of the six adult neurons and examined expression in the pharate adult of key transcription factors that result in the larval class-selective dendritic morphologies. The 'baseline design' of A3/4 in the adult was further modified in a segment-dependent and age-dependent manner. One of our notable findings is that a larval class I neuron, ddaE, completed dendritic remodeling in A2 to A4 and then underwent caspase-dependent cell death within 1 week after eclosion, while homologous neurons in A5 and in more posterior segments degenerated at pupal stages. Another finding is that the dendritic arbor of a class IV neuron, v'ada, was immediately reshaped during post

  9. Poly(ether amide) segmented block copolymers with adipicacid based tetra amide segments

    NARCIS (Netherlands)

    Biemond, G.J.E.; Feijen, Jan; Gaymans, R.J.

    2007-01-01

    Poly(tetramethylene oxide)-based poly(ether ester amide)s with monodisperse tetraamide segments were synthesized. The tetraamide segment was based on adipic acid, terephthalic acid, and hexamethylenediamine. The synthesis method of the copolymers and the influence of the tetraamide concentration,

  10. Fusion set selection with surrogate metric in multi-atlas based image segmentation

    International Nuclear Information System (INIS)

    Zhao, Tingting; Ruan, Dan

    2016-01-01

    Multi-atlas based image segmentation sees unprecedented opportunities but also demanding challenges in the big data era. Relevant atlas selection before label fusion plays a crucial role in reducing potential performance loss from heterogeneous data quality and high computation cost from extensive data. This paper starts with investigating the image similarity metric (termed ‘surrogate’), an alternative to the inaccessible geometric agreement metric (termed ‘oracle’) in atlas relevance assessment, and probes into the problem of how to select the ‘most-relevant’ atlases and how many such atlases to incorporate. We propose an inference model to relate the surrogates and the oracle geometric agreement metrics. Based on this model, we quantify the behavior of the surrogates in mimicking oracle metrics for atlas relevance ordering. Finally, analytical insights on the choice of fusion set size are presented from a probabilistic perspective, with the integrated goal of including the most relevant atlases and excluding the irrelevant ones. Empirical evidence and performance assessment are provided based on prostate and corpus callosum segmentation. (paper)
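
    The select-then-fuse structure described above can be sketched in a few lines; here normalized cross-correlation stands in for the surrogate similarity metric and majority voting for the label fusion. The paper's inference model relating surrogates to the oracle metric is not reproduced.

```python
# Rank atlases by an image-similarity surrogate, keep the top n, and fuse
# their (already registered) binary label maps by majority vote.
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation, one common surrogate metric."""
    a = (a - a.mean()) / (a.std() + 1e-12)
    b = (b - b.mean()) / (b.std() + 1e-12)
    return float((a * b).mean())

def select_and_fuse(target, atlas_images, atlas_labels, n_select=5):
    scores = [ncc(target, img) for img in atlas_images]
    top = np.argsort(scores)[::-1][:n_select]             # most-relevant atlases
    stacked = np.stack([atlas_labels[i] for i in top])
    return (stacked.mean(axis=0) > 0.5).astype(np.uint8)  # majority vote
```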

  11. Metric Learning for Hyperspectral Image Segmentation

    Science.gov (United States)

    Bue, Brian D.; Thompson, David R.; Gilmore, Martha S.; Castano, Rebecca

    2011-01-01

    We present a metric learning approach to improve the performance of unsupervised hyperspectral image segmentation. Unsupervised spatial segmentation can assist both user visualization and automatic recognition of surface features. Analysts can use spatially continuous segments to decrease noise levels and/or localize feature boundaries. However, existing segmentation methods use task-agnostic measures of similarity. Here we learn task-specific similarity measures from training data, improving segment fidelity to classes of interest. Multiclass Linear Discriminant Analysis produces a linear transform that optimally separates a labeled set of training classes. This defines a distance metric that generalizes to new scenes, enabling graph-based segmentation that emphasizes key spectral features. We describe tests based on data from the Compact Reconnaissance Imaging Spectrometer for Mars (CRISM) in which learned metrics improve segment homogeneity with respect to mineralogical classes.
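
    A minimal sketch of the metric-learning step, assuming scikit-learn's multiclass LDA: the learned linear transform defines a distance between spectra that a graph-based segmenter can use for edge weights. The training data below is a random placeholder.

```python
# Learn a task-specific linear transform with multiclass LDA and use
# Euclidean distance in the transformed space as the similarity metric.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

X_train = np.random.rand(500, 60)            # labeled training spectra
y_train = np.random.randint(0, 4, size=500)  # four hypothetical classes

lda = LinearDiscriminantAnalysis(n_components=3).fit(X_train, y_train)

def learned_distance(s1, s2):
    """Distance between two spectra under the learned metric."""
    t1, t2 = lda.transform(np.vstack([s1, s2]))
    return float(np.linalg.norm(t1 - t2))
```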

  12. Program Performance Assessment System (PPAS). External reviewers' report of the consultants' meeting on analytical quality control services

    International Nuclear Information System (INIS)

    2001-01-01

    In reviewing the recommendations of previous Consultants' Meetings concerning the AQCS program, it is apparent that there has been clear and consistent agreement on what the objectives of the AQCS activities should be. The mission statement given in the Agency's 'Blue Book 1997-1998' states: 'To assist analytical laboratories in Member States in maintaining/improving the quality of their analytical measurements, to achieve internationally acceptable levels of quality assurance and to develop and supply appropriate reference standards to achieve these objectives'. In concert with this mission statement, the consultants have endorsed an elaboration of these objectives for both the Agency's laboratories and Member State laboratories, as outlined in the 1994 Consultants' Report (Kona, HI, USA), which includes: the improvement of the reliability of results for the intended purposes; the enhancement of the comparability of results from one measurement laboratory to another; the attainment of compatibility of results in the physical and chemical sciences, with specific coverage of international standards for food and agriculture, human health, environment, industry, earth sciences, radiation safety, and safeguards activities; the demonstration of quality measurement systems sufficient for laboratory/analyst accreditation or acceptance; and the establishment of traceability of radioactivity measurements and chemical analyses to the international SI system of measurements

  13. Market Segmentation for Information Services.

    Science.gov (United States)

    Halperin, Michael

    1981-01-01

    Discusses the advantages and limitations of market segmentation as strategy for the marketing of information services made available by nonprofit organizations, particularly libraries. Market segmentation is defined, a market grid for libraries is described, and the segmentation of information services is outlined. A 16-item reference list is…

  14. Automated medical image segmentation techniques

    Directory of Open Access Journals (Sweden)

    Sharma Neeraj

    2010-01-01

    Full Text Available Accurate segmentation of medical images is a key step in contouring during radiotherapy planning. Computed topography (CT and Magnetic resonance (MR imaging are the most widely used radiographic techniques in diagnosis, clinical studies and treatment planning. This review provides details of automated segmentation methods, specifically discussed in the context of CT and MR images. The motive is to discuss the problems encountered in segmentation of CT and MR images, and the relative merits and limitations of methods currently available for segmentation of medical images.

  15. Prognostic validation of a 17-segment score derived from a 20-segment score for myocardial perfusion SPECT interpretation.

    Science.gov (United States)

    Berman, Daniel S; Abidov, Aiden; Kang, Xingping; Hayes, Sean W; Friedman, John D; Sciammarella, Maria G; Cohen, Ishac; Gerlach, James; Waechter, Parker B; Germano, Guido; Hachamovitch, Rory

    2004-01-01

    Recently, a 17-segment model of the left ventricle has been recommended as an optimally weighted approach for interpreting myocardial perfusion single photon emission computed tomography (SPECT). Methods to convert databases from previous 20- to new 17-segment data and criteria for abnormality for the 17-segment scores are needed. Initially, for derivation of the conversion algorithm, 65 patients were studied (algorithm population) (pilot group, n = 28; validation group, n = 37). Three conversion algorithms were derived: algorithm 1, which used mid, distal, and apical scores; algorithm 2, which used distal and apical scores alone; and algorithm 3, which used maximal scores of the distal septal, lateral, and apical segments in the 20-segment model for 3 corresponding segments of the 17-segment model. The prognosis population comprised 16,020 consecutive patients (mean age, 65 +/- 12 years; 41% women) who had exercise or vasodilator stress technetium 99m sestamibi myocardial perfusion SPECT and were followed up for 2.1 +/- 0.8 years. In this population, 17-segment scores were derived from 20-segment scores by use of algorithm 2, which demonstrated the best agreement with expert 17-segment reading in the algorithm population. The prognostic value of the 20- and 17-segment scores was compared by converting the respective summed scores into percent myocardium abnormal. Conversion algorithm 2 was found to be highly concordant with expert visual analysis by the 17-segment model (r = 0.982; kappa = 0.866) in the algorithm population. In the prognosis population, 456 cardiac deaths occurred during follow-up. When the conversion algorithm was applied, extent and severity of perfusion defects were nearly identical by 20- and derived 17-segment scores. The receiver operating characteristic curve areas by 20- and 17-segment perfusion scores were identical for predicting cardiac death (both 0.77 +/- 0.02, P = not significant). The optimal prognostic cutoff value for either 20
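
    The conversion of summed scores to percent myocardium abnormal is commonly done by normalizing against the maximum possible score of 4 points per segment; the sketch below assumes that convention rather than reproducing the paper's exact procedure.

```python
# Percent myocardium abnormal from a summed perfusion score, assuming the
# common normalization by the maximum possible score (4 points per segment).
def percent_myocardium_abnormal(summed_score: float, n_segments: int) -> float:
    return 100.0 * summed_score / (4.0 * n_segments)

# The same summed score maps to similar percent-myocardium values
# in the 20- and 17-segment models:
print(percent_myocardium_abnormal(8, 20))  # 10.0
print(percent_myocardium_abnormal(8, 17))  # ~11.8
```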

  16. NUCLEAR SEGMENTATION IN MICROSCOPE CELL IMAGES: A HAND-SEGMENTED DATASET AND COMPARISON OF ALGORITHMS

    OpenAIRE

    Coelho, Luís Pedro; Shariff, Aabid; Murphy, Robert F.

    2009-01-01

    Image segmentation is an essential step in many image analysis pipelines and many algorithms have been proposed to solve this problem. However, they are often evaluated subjectively or based on a small number of examples. To fill this gap, we hand-segmented a set of 97 fluorescence microscopy images (a total of 4009 cells) and objectively evaluated some previously proposed segmentation algorithms.

  17. Letter of Intent for River Protection Project (RPP) Characterization Program: Process Engineering and Hanford Analytical Services and Characterization Project Operations and Quality Assurance

    International Nuclear Information System (INIS)

    ADAMS, M.R.

    2000-01-01

    The level of success achieved by the River Protection Project (RPP) Characterization Project is determined by the effectiveness of several organizations across the RPP working together. The requirements, expectations, interrelationships, and performance criteria for each of these organizations were examined in order to understand the performance necessary to achieve characterization objectives. This Letter of Intent documents the results of that examination. It formalizes the details of interfaces, working agreements, and requirements for obtaining and transferring tank waste samples from the Tank Farm System (RPP Process Engineering, Characterization Project Operations, and RPP Quality Assurance) to the characterization laboratory complex (222-S Laboratory, Waste Sampling and Characterization Facility, and the Hanford Analytical Service Program) and for the laboratory complex's analysis and reporting of analytical results.

  18. Generalized pixel profiling and comparative segmentation with application to arteriovenous malformation segmentation.

    Science.gov (United States)

    Babin, D; Pižurica, A; Bellens, R; De Bock, J; Shang, Y; Goossens, B; Vansteenkiste, E; Philips, W

    2012-07-01

    Extraction of structural and geometric information from 3-D images of blood vessels is a well-known and widely addressed segmentation problem. The segmentation of cerebral blood vessels is of great importance in diagnostic and clinical applications, with a special application in diagnostics and surgery on arteriovenous malformations (AVM). However, techniques addressing the problem of segmenting the inner structure of AVMs are rare. In this work we present a novel method of pixel profiling applied to the segmentation of 3-D angiography AVM images. Our algorithm performs well in situations with low-resolution images and high variability of pixel intensity. Another advantage of our method is that its parameters are set automatically, which requires little manual user intervention. The results on phantoms and real data demonstrate its effectiveness and potential for fine delineation of AVM structure. Copyright © 2012 Elsevier B.V. All rights reserved.

  19. Analytical Chemistry Core Capability Assessment - Preliminary Report

    International Nuclear Information System (INIS)

    Barr, Mary E.; Farish, Thomas J.

    2012-01-01

    The concept of 'core capability' can be a nebulous one. Even at a fairly specific level, where core capability equals maintaining essential services, it is highly dependent upon the perspective of the requester. Samples are submitted to analytical services because the requesters do not have the capability to conduct adequate analyses themselves. Some requests are for general chemical information in support of R&D, process control, or process improvement. Many analyses, however, are part of a product certification package and must comply with higher-level customer quality assurance requirements. So which services are essential to that customer - just those for product certification? Does the customer also (indirectly) need services that support process control and improvement? And what is the timeframe? Capability is often expressed in terms of the currently utilized procedures, and most programmatic customers can only plan a few years out, at best. But should core capability consider the long term, where new technologies, aging facilities, and personnel replacements must be considered? These questions, and a multitude of others, explain why attempts to gain long-term consensus on the definition of core capability have consistently failed. This preliminary report will not try to define core capability for any specific program or set of programs. Instead, it will try to address the underlying concerns that drive the desire to determine core capability. Essentially, programmatic customers want to be able to call upon analytical chemistry services to provide all the assays they need, and they don't want to pay for analytical chemistry services they don't currently use (or use infrequently). This report will focus on explaining how the current analytical capabilities and methods evolved to serve a variety of needs, with a focus on why some analytes have multiple analytical techniques and what determines the infrastructure for these analyses. This information will be

  20. Robust shape regression for supervised vessel segmentation and its application to coronary segmentation in CTA

    DEFF Research Database (Denmark)

    Schaap, Michiel; van Walsum, Theo; Neefjes, Lisan

    2011-01-01

    This paper presents a vessel segmentation method which learns the geometry and appearance of vessels in medical images from annotated data and uses this knowledge to segment vessels in unseen images. Vessels are segmented in a coarse-to-fine fashion. First, the vessel boundaries are estimated...

  1. Structure-properties relationships of novel poly(carbonate-co-amide) segmented copolymers with polyamide-6 as hard segments and polycarbonate as soft segments

    Science.gov (United States)

    Yang, Yunyun; Kong, Weibo; Yuan, Ye; Zhou, Changlin; Cai, Xufu

    2018-04-01

    Novel poly(carbonate-co-amide) (PCA) block copolymers are prepared with polycarbonate diol (PCD) as soft segments, polyamide-6 (PA6) as hard segments, and 4,4'-diphenylmethane diisocyanate (MDI) as a coupling agent through reactive processing. The reactive processing strategy is eco-friendly and resolves the incompatibility between polyamide segments and PCD segments during preparation. The chemical structure, crystalline properties, thermal properties, mechanical properties, and water resistance were extensively studied by Fourier transform infrared spectroscopy (FTIR), X-ray diffraction (XRD), differential scanning calorimetry (DSC), thermal gravimetric analysis (TGA), dynamic mechanical analysis (DMA), tensile testing, water contact angle, and water absorption, respectively. The as-prepared PCAs exhibit obvious microphase separation between the crystalline hard PA6 phase and the amorphous PCD soft segments. Meanwhile, the PCAs showed outstanding mechanical properties, with a maximum tensile strength of 46.3 MPa and elongation at break of 909%. The contact angle and water absorption results indicate that PCAs demonstrate outstanding water resistance even though they possess hydrophilic surfaces. The TGA measurements prove that the thermal stability of PCA can satisfy the requirements of multiple processing cycles without decomposition.

  2. Analytical aids in land management planning

    Science.gov (United States)

    David R. Betters

    1978-01-01

    Quantitative techniques may be applied to aid in completing various phases of land management planning. Analytical procedures which have been used include a procedure for public involvement, PUBLIC; a matrix information generator, MAGE5; an allocation procedure, linear programming (LP); and an input-output economic analysis (EA). These techniques have proven useful in...

  3. Segmentation and informality in Vietnam : a survey of the literature: country case study on labour market segmentation

    OpenAIRE

    Cling, Jean-Pierre; Razafindrakoto, Mireille; Roubaud, François

    2014-01-01

    Labour market segmentation is usually defined as the division of the labour markets into separate sub-markets or segments, distinguished by different characteristics and behavioural rules (incomes, contracts, etc.). The economic debate on the segmentation issue has focused, in developed countries and especially in Europe, on contractual segmentation and dualism.

  4. Segmentation by Large Scale Hypothesis Testing - Segmentation as Outlier Detection

    DEFF Research Database (Denmark)

    Darkner, Sune; Dahl, Anders Lindbjerg; Larsen, Rasmus

    2010-01-01

    We propose a novel and efficient way of performing local image segmentation. For many applications a threshold of pixel intensities is sufficient, but determining the appropriate threshold value can be difficult. In cases with large global intensity variation the threshold value has to be adapted locally. We propose a method based on large-scale hypothesis testing with a consistent method for selecting an appropriate threshold for the given data. By estimating the background distribution we characterize the segment of interest as a set of outliers with a certain probability based on the estimated… a microscope, and we show how the method can handle transparent particles with significant glare points. The method generalizes to other problems. This is illustrated by applying the method to camera calibration images and MRI of the midsagittal plane for gray and white matter separation and segmentation…
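
    A sketch of the large-scale-testing idea under stated assumptions: a robust background fit (median/MAD), one-sided per-pixel p-values, and a Benjamini-Hochberg threshold to control false discoveries over all pixels. The authors' exact background estimator and threshold-selection rule may differ.

```python
# Segmentation as outlier detection: fit a background distribution, convert
# pixel intensities to p-values, and select the segment with a
# Benjamini-Hochberg threshold over the whole image.
import numpy as np
from scipy import stats

def segment_as_outliers(image, alpha=0.01):
    med = np.median(image)
    mad = np.median(np.abs(image - med)) * 1.4826   # robust sigma estimate
    z = (image - med) / (mad + 1e-12)
    p = stats.norm.sf(z).ravel()                    # one-sided p-value per pixel

    # Benjamini-Hochberg: accept the largest k with p_(k) <= (k/m) * alpha.
    order = np.argsort(p)
    m = p.size
    passed = p[order] <= alpha * np.arange(1, m + 1) / m
    k = np.max(np.nonzero(passed)[0]) + 1 if passed.any() else 0
    mask = np.zeros(m, dtype=bool)
    mask[order[:k]] = True
    return mask.reshape(image.shape)
```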

  5. Learning analytics as a tool for closing the assessment loop in higher education

    OpenAIRE

    Karen D. Mattingly; Margaret C. Rice; Zane L. Berge

    2012-01-01

    This paper examines learning and academic analytics and its relevance to distance education in undergraduate and graduate programs as it impacts students and teaching faculty, and also academic institutions. The focus is to explore the measurement, collection, analysis, and reporting of data as predictors of student success and drivers of departmental process and program curriculum. Learning and academic analytics in higher education is used to predict student success by examining how and wha...

  6. Human Memory Organization for Computer Programs.

    Science.gov (United States)

    Norcio, A. F.; Kerst, Stephen M.

    1983-01-01

    Results of a study investigating human memory organization in the processing of computer programming languages indicate that algorithmic logic segments form a cognitive organizational structure in memory for programs. Statement indentation and internal program documentation did not enhance the organizational process of recall of statements in five Fortran…

  7. Pavement management segment consolidation

    Science.gov (United States)

    1998-01-01

    Dividing roads into "homogeneous" segments has been a major problem for all areas of highway engineering. SDDOT uses Deighton Associates Limited software, dTIMS, to analyze life-cycle costs for various rehabilitation strategies on each segment of roa...

  8. Automatic segmentation of vertebrae from radiographs

    DEFF Research Database (Denmark)

    Mysling, Peter; Petersen, Peter Kersten; Nielsen, Mads

    2011-01-01

    Segmentation of vertebral contours is an essential task in the design of automatic tools for vertebral fracture assessment. In this paper, we propose a novel segmentation technique which does not require operator interaction. The proposed technique solves the segmentation problem in a hierarchical… is constrained by a conditional shape model, based on the variability of the coarse spine location estimates. The technique is evaluated on a data set of manually annotated lumbar radiographs. The results compare favorably to previous work in automatic vertebra segmentation, in terms of both segmentation…

  9. Leaf sequencing algorithms for segmented multileaf collimation

    International Nuclear Information System (INIS)

    Kamath, Srijit; Sahni, Sartaj; Li, Jonathan; Palta, Jatinder; Ranka, Sanjay

    2003-01-01

    The delivery of intensity-modulated radiation therapy (IMRT) with a multileaf collimator (MLC) requires the conversion of a radiation fluence map into a leaf sequence file that controls the movement of the MLC during radiation delivery. It is imperative that the fluence map delivered using the leaf sequence file is as close as possible to the fluence map generated by the dose optimization algorithm, while satisfying hardware constraints of the delivery system. Optimization of the leaf sequencing algorithm has been the subject of several recent investigations. In this work, we present a systematic study of the optimization of leaf sequencing algorithms for segmental multileaf collimator beam delivery and provide rigorous mathematical proofs of optimized leaf sequence settings in terms of monitor unit (MU) efficiency under most common leaf movement constraints that include minimum leaf separation constraint and leaf interdigitation constraint. Our analytical analysis shows that leaf sequencing based on unidirectional movement of the MLC leaves is as MU efficient as bidirectional movement of the MLC leaves
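
    The unidirectional result can be illustrated with the classic 1D "sweep" decomposition, in which the minimum total monitor units for a single leaf pair equal the sum of the profile's positive increments. The sketch below omits the minimum-separation and interdigitation constraints the paper analyzes.

```python
# Unidirectional sweep sequencing of a 1D fluence profile: the left leaf
# opens where the profile steps up, the right leaf closes where it steps
# down, and total MU equals the sum of the positive increments.
import numpy as np

def sweep_sequence(profile):
    f = np.concatenate([[0.0], np.asarray(profile, dtype=float), [0.0]])
    steps = np.diff(f)
    total_mu = steps[steps > 0].sum()           # theoretical minimum MU
    left_openings = np.nonzero(steps > 0)[0]    # left-leaf release positions
    right_closings = np.nonzero(steps < 0)[0]   # right-leaf stop positions
    return total_mu, left_openings, right_closings

mu, lefts, rights = sweep_sequence([1, 3, 2, 2, 4, 1])
print(mu)  # 5.0 = 1 + 2 + 2, the sum of positive increments
```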

  10. Leaf sequencing algorithms for segmented multileaf collimation

    Energy Technology Data Exchange (ETDEWEB)

    Kamath, Srijit [Department of Computer and Information Science and Engineering, University of Florida, Gainesville, FL (United States); Sahni, Sartaj [Department of Computer and Information Science and Engineering, University of Florida, Gainesville, FL (United States); Li, Jonathan [Department of Radiation Oncology, University of Florida, Gainesville, FL (United States); Palta, Jatinder [Department of Radiation Oncology, University of Florida, Gainesville, FL (United States); Ranka, Sanjay [Department of Computer and Information Science and Engineering, University of Florida, Gainesville, FL (United States)

    2003-02-07

    The delivery of intensity-modulated radiation therapy (IMRT) with a multileaf collimator (MLC) requires the conversion of a radiation fluence map into a leaf sequence file that controls the movement of the MLC during radiation delivery. It is imperative that the fluence map delivered using the leaf sequence file is as close as possible to the fluence map generated by the dose optimization algorithm, while satisfying hardware constraints of the delivery system. Optimization of the leaf sequencing algorithm has been the subject of several recent investigations. In this work, we present a systematic study of the optimization of leaf sequencing algorithms for segmental multileaf collimator beam delivery and provide rigorous mathematical proofs of optimized leaf sequence settings in terms of monitor unit (MU) efficiency under most common leaf movement constraints that include minimum leaf separation constraint and leaf interdigitation constraint. Our analytical analysis shows that leaf sequencing based on unidirectional movement of the MLC leaves is as MU efficient as bidirectional movement of the MLC leaves.

  11. A combined segmenting and non-segmenting approach to signal quality estimation for ambulatory photoplethysmography

    International Nuclear Information System (INIS)

    Wander, J D; Morris, D

    2014-01-01

    Continuous cardiac monitoring of healthy and unhealthy patients can help us understand the progression of heart disease and enable early treatment. Optical pulse sensing is an excellent candidate for continuous mobile monitoring of cardiovascular health indicators, but optical pulse signals are susceptible to corruption from a number of noise sources, including motion artifact. Therefore, before higher-level health indicators can be reliably computed, corrupted data must be separated from valid data. This is an especially difficult task in the presence of artifact caused by ambulation (e.g. walking or jogging), which shares significant spectral energy with the true pulsatile signal. In this manuscript, we present a machine-learning-based system for automated estimation of signal quality of optical pulse signals that performs well in the presence of periodic artifact. We hypothesized that signal processing methods that identified individual heart beats (segmenting approaches) would be more error-prone than methods that did not (non-segmenting approaches) when applied to data contaminated by periodic artifact. We further hypothesized that a fusion of segmenting and non-segmenting approaches would outperform either approach alone. Therefore, we developed a novel non-segmenting approach to signal quality estimation that we then utilized in combination with a traditional segmenting approach. Using this system we were able to robustly detect differences in signal quality as labeled by expert human raters (Pearson’s r = 0.9263). We then validated our original hypotheses by demonstrating that our non-segmenting approach outperformed the segmenting approach in the presence of contaminated signal, and that the combined system outperformed either individually. Lastly, as an example, we demonstrated the utility of our signal quality estimation system in evaluating the trustworthiness of heart rate measurements derived from optical pulse signals. (paper)

  12. Rhythm-based segmentation of Popular Chinese Music

    DEFF Research Database (Denmark)

    Jensen, Karl Kristoffer

    2005-01-01

    We present a new method to segment popular music based on rhythm. By computing a shortest path based on the self-similarity matrix calculated from a model of rhythm, segmenting boundaries are found along the diagonal of the matrix. The cost of a new segment is optimized by matching manual and automatic segment boundaries. We compile a small song database of 21 randomly selected popular Chinese songs which come from the Chinese Mainland, Taiwan, and Hong Kong. The segmentation results on the small corpus show that 78% of manual segmentation points are detected and 74% of automatic segmentation points…

  13. Current status of JAERI program on development of ultra-trace-analytical technology for safeguards environmental samples

    International Nuclear Information System (INIS)

    Adachi, T.; Usuda, S.; Watanabe, K.

    2001-01-01

    Full text: In order to contribute to the strengthened safeguards system based on the IAEA's Program 93+2, the Japan Atomic Energy Research Institute (JAERI) is developing analytical technology for ultra-trace amounts of nuclear materials in environmental samples, and constructed the CLEAR facility (Clean Laboratory for Environmental Analysis and Research) for this purpose. The development of the technology is carried out, at existing laboratories for the time being, in the following fields: screening, bulk analysis, and particle analysis. The screening aims at estimating the amounts of nuclear materials in environmental samples to be introduced into the clean rooms, and is the first step to avoid cross-contamination among the samples and contamination of the clean rooms themselves. In addition to ordinary radiation spectrometry, a Compton suppression technique was applied to low-energy γ- and X-ray measurements, and sufficient reduction in background level has been demonstrated. Another technique under examination is the imaging-plate method, a kind of autoradiography suitable for determining the distribution of radioactive particles in the samples as well as for semiquantitative determination. As for bulk analysis, efforts are for the present focused on uranium in swipe samples. Preliminary examination for optimization of sample pre-treatment conditions is in progress; at present, ashing by a low-temperature plasma method gives better results than high-temperature ashing or acid leaching. For the isotopic ratio measurement, the instrumental performance of inductively coupled plasma mass spectrometry (ICP-MS) is mainly examined, because sample preparation for ICP-MS is simpler than that for thermal ionization mass spectrometry (TIMS). Our measurements found that the swipe material (TexWipe TX304, usually used by the IAEA) contains a non-negligible uranium blank with large deviation (2-6 ng/sheet). This would introduce significant uncertainty in the trace analysis. JAERI

  14. Unsupervised Performance Evaluation of Image Segmentation

    Directory of Open Access Journals (Sweden)

    Chabrier Sebastien

    2006-01-01

    We present in this paper a study of unsupervised evaluation criteria that enable the quantification of the quality of an image segmentation result. These evaluation criteria compute statistics for each region or class in a segmentation result. Such an evaluation criterion can be useful for different applications: the comparison of segmentation results, the automatic choice of the best-fitted parameters of a segmentation method for a given image, or the definition of new segmentation methods by optimization. We first present the state of the art of unsupervised evaluation, and then compare six unsupervised evaluation criteria. For this comparative study, we use a database composed of 8400 synthetic gray-level images segmented in four different ways. Vinet's measure (correct classification rate) is used as an objective criterion to compare the behavior of the different criteria. Finally, we present experimental results on the segmentation evaluation of a few gray-level natural images.

  15. Segmentized Clear Channel Assessment for IEEE 802.15.4 Networks.

    Science.gov (United States)

    Son, Kyou Jung; Hong, Sung Hyeuck; Moon, Seong-Pil; Chang, Tae Gyu; Cho, Hanjin

    2016-06-03

    This paper proposes segmentized clear channel assessment (CCA), which increases the performance of IEEE 802.15.4 networks by improving carrier sense multiple access with collision avoidance (CSMA/CA). Improving CSMA/CA is important because the low-power consumption feature and throughput performance of IEEE 802.15.4 are greatly affected by CSMA/CA behavior. To improve the performance of CSMA/CA, this paper focuses on increasing the chance to transmit a packet by assessing precise channel status. The previous CCA method employed by CSMA/CA assesses the channel by measuring its energy level. However, this method shows limited channel-assessment behavior, which comes from simple threshold-dependent channel-busy evaluation. The proposed method solves this limited channel-decision problem by dividing the CCA into two groups, which compare their energy levels to obtain precise channel status. To evaluate the performance of the segmentized CCA method, a Markov chain model has been developed. The analytic results are validated by comparing them with simulation results. Additionally, simulation results show the proposed method improves throughput by up to 8.76% and decreases the average number of CCAs per packet transmission by up to 3.9% compared with the IEEE 802.15.4 CCA method.

  16. Methods of evaluating segmentation characteristics and segmentation of major faults

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Kie Hwa; Chang, Tae Woo; Kyung, Jai Bok [Seoul National Univ., Seoul (Korea, Republic of)] (and others)

    2000-03-15

    Seismological, geological, and geophysical studies were made for reasonable segmentation of the Ulsan fault, with the following results. One- and two-dimensional electrical surveys clearly revealed that the fault fracture zone enlarges systematically northward and southward from the vicinity of Mohwa-ri, indicating that Mohwa-ri lies at a seismic segment boundary. Field geological survey and microscope observation of fault gouge indicate that the Quaternary faults in the area are reactivated products of preexisting faults. A trench survey of the Chonbuk fault at Galgok-ri revealed thrust faults and cumulative vertical displacement due to faulting during the late Quaternary, with about 1.1-1.9 m displacement per event; the latest event occurred 14,000 to 25,000 yr BP. The seismic survey showed the basement surface is cut by numerous reverse faults and indicated the possibility that the boundary between Kyeongsangbukdo and Kyeongsangnamdo may be a segment boundary.

  17. Methods of evaluating segmentation characteristics and segmentation of major faults

    International Nuclear Information System (INIS)

    Lee, Kie Hwa; Chang, Tae Woo; Kyung, Jai Bok

    2000-03-01

    Seismological, geological, and geophysical studies were made for reasonable segmentation of the Ulsan fault, with the following results. One- and two-dimensional electrical surveys clearly revealed that the fault fracture zone enlarges systematically northward and southward from the vicinity of Mohwa-ri, indicating that Mohwa-ri lies at a seismic segment boundary. Field geological survey and microscope observation of fault gouge indicate that the Quaternary faults in the area are reactivated products of preexisting faults. A trench survey of the Chonbuk fault at Galgok-ri revealed thrust faults and cumulative vertical displacement due to faulting during the late Quaternary, with about 1.1-1.9 m displacement per event; the latest event occurred 14,000 to 25,000 yr BP. The seismic survey showed the basement surface is cut by numerous reverse faults and indicated the possibility that the boundary between Kyeongsangbukdo and Kyeongsangnamdo may be a segment boundary.

  18. Tank 241-T-204, core 188 analytical results for the final report

    Energy Technology Data Exchange (ETDEWEB)

    Nuzum, J.L.

    1997-07-24

    This document is the final laboratory report for Tank 241-T-204. Push mode core segments were removed from Riser 3 between March 27, 1997, and April 11, 1997. Segments were received and extruded at the 222-S Laboratory. Analyses were performed in accordance with the Tank 241-T-204 Push Mode Core Sampling and Analysis Plan (TSAP) (Winkleman, 1997), the Letter of Instruction for Core Sample Analysis of Tanks 241-T-201, 241-T-202, 241-T-203, and 241-T-204 (LOI) (Bell, 1997), and the Safety Screening Data Quality Objective (DQO) (Dukelow, et al., 1995). None of the subsamples submitted for total alpha activity or differential scanning calorimetry (DSC) analyses exceeded the notification limits stated in the DQO. The statistical results of the 95% confidence interval on the mean calculations are provided by the Tank Waste Remediation Systems Technical Basis Group and are not considered in this report.

  19. Nodewise analytical calculation of the transfer function

    International Nuclear Information System (INIS)

    Makai, Mihaly

    1994-01-01

    The space dependence of neutron noise has so far been mostly investigated in homogeneous core models. Application of core diagnostic methods to locate a malfunction requires, however, that the transfer function be calculated for real, inhomogeneous cores. A code suitable for such a purpose must be able to handle complex arithmetic and delta-function sources. Further requirements are analytical dependence in one spatial variable and fast execution. The present work describes the TIDE program, written to fulfil the above requirements. The core is subdivided into homogeneous, square assemblies. An analytical solution is given, which is a generalisation of the inhomogeneous response matrix method. (author)

  20. A new framework for interactive images segmentation

    International Nuclear Information System (INIS)

    Ashraf, M.; Sarim, M.; Shaikh, A.B.

    2017-01-01

    Image segmentation has become a widely studied research problem in image processing. Different graph-based solutions exist for interactive image segmentation, but the domain still needs persistent improvement. The segmentation quality of existing techniques generally depends on the manual input provided at the beginning; therefore, these algorithms may not produce quality segmentations from initial seed labels provided by a novice user. In this work we investigated the use of cellular automata in image segmentation and proposed a new algorithm that follows a cellular automaton in label propagation. It incorporates both local and global pixel information in the segmentation process. We introduced novel global constraints in the automaton evolution rules; hence the proposed scheme of automaton evolution is more effective than earlier automata-based evolution schemes. The global constraints are also effective in decreasing sensitivity to small changes in the manual input; the proposed approach is therefore less dependent on seed labels. It can produce quality segmentations with modest user effort. Segmentation results indicate that the proposed algorithm performs better than earlier segmentation techniques. (author)
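
    A minimal cellular-automaton segmenter in the grow-cut style conveys the label-propagation mechanism such methods build on; the proposed global constraints are not reproduced here, and the wrap-around of np.roll at the image borders is accepted for brevity.

```python
# Grow-cut-style cellular automaton: each cell holds a label and a strength;
# a neighbor conquers the cell when its attenuated strength exceeds the
# cell's own. Seeds: 0 = unlabeled, 1..K = user-provided labels.
import numpy as np

def grow_cut(image, seeds, iterations=50):
    labels = seeds.copy()
    strength = (seeds > 0).astype(float)
    scale = image.max() - image.min() + 1e-12
    for _ in range(iterations):
        for dy, dx in [(-1, 0), (1, 0), (0, -1), (0, 1)]:
            nb_lab = np.roll(labels, (dy, dx), axis=(0, 1))
            nb_str = np.roll(strength, (dy, dx), axis=(0, 1))
            nb_img = np.roll(image, (dy, dx), axis=(0, 1))
            g = 1.0 - np.abs(image - nb_img) / scale   # attack attenuation
            attack = g * nb_str
            win = attack > strength                    # cells that are conquered
            labels[win] = nb_lab[win]
            strength[win] = attack[win]
    return labels
```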

  1. Comparing genomes with rearrangements and segmental duplications.

    Science.gov (United States)

    Shao, Mingfu; Moret, Bernard M E

    2015-06-15

    Large-scale evolutionary events such as genomic rearrangements and segmental duplications form an important part of the evolution of genomes and are widely studied from both biological and computational perspectives. A basic computational problem is to infer these events in the evolutionary history for given modern genomes, a task for which many algorithms have been proposed under various constraints. Algorithms that can handle both rearrangements and content-modifying events such as duplications and losses remain few and limited in their applicability. We study the comparison of two genomes under a model including general rearrangements (through double-cut-and-join) and segmental duplications. We formulate the comparison as an optimization problem and describe an exact algorithm to solve it by using an integer linear program. We also devise a sufficient condition and an efficient algorithm to identify optimal substructures, which can simplify the problem while preserving optimality. Using the optimal substructures with the integer linear program (ILP) formulation yields a practical and exact algorithm to solve the problem. We then apply our algorithm to assign in-paralogs and orthologs (a necessary step in handling duplications) and compare its performance with that of the state-of-the-art method MSOAR, using both simulations and real data. On simulated datasets, our method outperforms MSOAR by a significant margin, and on five well-annotated species, MSOAR achieves high accuracy, yet our method performs slightly better on each of the 10 pairwise comparisons. http://lcbb.epfl.ch/softwares/coser. © The Author 2015. Published by Oxford University Press.

  2. The Savannah River Site's Groundwater Monitoring Program

    International Nuclear Information System (INIS)

    1992-01-01

    This report summarizes the Savannah River Site (SRS) groundwater monitoring program conducted during the first quarter of 1992. It includes the analytical data, field data, data review, quality control, and other documentation for this program; provides a record of the program's activities; and serves as an official document of the analytical results

  3. International EUREKA: Initialization Segment

    International Nuclear Information System (INIS)

    1982-02-01

    The Initialization Segment creates the starting description of the uranium market. The starting description includes the international boundaries of trade, the geologic provinces, resources, reserves, production, uranium demand forecasts, and existing market transactions. The Initialization Segment is designed to accept information of various degrees of detail, depending on what is known about each region. It must transform this information into a specific data structure required by the Market Segment of the model, filling in gaps in the information through a predetermined sequence of defaults and built-in assumptions. A principal function of the Initialization Segment is to create diagnostic messages indicating any inconsistencies in data and explaining which assumptions were used to organize the data base. This permits the user to manipulate the data base until such time as the user is satisfied that all the assumptions used are reasonable and that any inconsistencies are resolved in a satisfactory manner

  4. Real-Time Analytics for the Healthcare Industry: Arrhythmia Detection.

    Science.gov (United States)

    Agneeswaran, Vijay Srinivas; Mukherjee, Joydeb; Gupta, Ashutosh; Tonpay, Pranay; Tiwari, Jayati; Agarwal, Nitin

    2013-09-01

    It is time for the healthcare industry to move from the era of "analyzing our health history" to the age of "managing the future of our health." In this article, we illustrate the importance of real-time analytics across the healthcare industry by providing a generic mechanism to reengineer traditional analytics expressed in the R programming language into Storm-based real-time analytics code. This is a powerful abstraction, since most data scientists use R to write the analytics and are not clear on how to make the data work in real-time and on high-velocity data. Our paper focuses on the applications necessary to a healthcare analytics scenario, specifically focusing on the importance of electrocardiogram (ECG) monitoring. A physician can use our framework to compare ECG reports by categorization and consequently detect Arrhythmia. The framework can read the ECG signals and uses a machine learning-based categorizer that runs within a Storm environment to compare different ECG signals. The paper also presents some performance studies of the framework to illustrate the throughput and accuracy trade-off in real-time analytics.

  5. Image Segmentation Using Minimum Spanning Tree

    Science.gov (United States)

    Dewi, M. P.; Armiati, A.; Alvini, S.

    2018-04-01

    This research aims to segment digital images. The segmentation process separates the object from the background so that the main object can be processed further. Along with the development of digital image processing applications, the segmentation process becomes increasingly necessary. The segmented image resulting from the segmentation process should be accurate, because the next processing steps rely on interpreting the information in the image. This article discusses the application of the minimum spanning tree of a graph in the segmentation of digital images. The method is able to separate an object from the background, converting the image to a binary image. In this case, the object in focus is set to white, while the background is black, or vice versa.
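
    A small sketch of the idea: build a 4-connected grid graph weighted by intensity differences, compute its minimum spanning tree with SciPy, and delete the heaviest tree edges so the connected components become the segments. The tie-breaking epsilon and segment count are illustrative choices.

```python
# MST-based segmentation: 4-connected grid graph weighted by intensity
# differences; removing the k-1 heaviest MST edges yields k segments.
import numpy as np
from scipy.sparse import coo_matrix
from scipy.sparse.csgraph import minimum_spanning_tree, connected_components

def mst_segment(image, n_segments=2):
    h, w = image.shape
    idx = np.arange(h * w).reshape(h, w)
    flat = image.ravel()
    rows, cols, weights = [], [], []
    for s1, s2 in [(idx[:, :-1], idx[:, 1:]), (idx[:-1, :], idx[1:, :])]:
        rows.append(s1.ravel()); cols.append(s2.ravel())
        weights.append(np.abs(flat[s1.ravel()] - flat[s2.ravel()]))
    g = coo_matrix((np.concatenate(weights) + 1e-9,   # keep zero-diff edges
                    (np.concatenate(rows), np.concatenate(cols))),
                   shape=(h * w, h * w))
    mst = minimum_spanning_tree(g).tocoo()
    keep = np.argsort(mst.data)[:-(n_segments - 1)] if n_segments > 1 else slice(None)
    pruned = coo_matrix((mst.data[keep], (mst.row[keep], mst.col[keep])),
                        shape=mst.shape)
    _, labels = connected_components(pruned, directed=False)
    return labels.reshape(h, w)
```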

  6. A method and software for segmentation of anatomic object ensembles by deformable m-reps

    International Nuclear Information System (INIS)

    Pizer, Stephen M.; Fletcher, P. Thomas; Joshi, Sarang; Gash, A. Graham; Stough, Joshua; Thall, Andrew; Tracton, Gregg; Chaney, Edward L.

    2005-01-01

    Deformable shape models (DSMs) comprise a general approach that shows great promise for automatic image segmentation. Published studies by others and our own research results strongly suggest that segmentation of a normal or near-normal object from 3D medical images will be most successful when the DSM approach uses (1) knowledge of the geometry of not only the target anatomic object but also the ensemble of objects providing context for the target object and (2) knowledge of the image intensities to be expected relative to the geometry of the target and contextual objects. The segmentation will be most efficient when the deformation operates at multiple object-related scales and uses deformations that include not just local translations but the biologically important transformations of bending and twisting, i.e., local rotation, and local magnification. In computer vision an important class of DSM methods uses explicit geometric models in a Bayesian statistical framework to provide a priori information used in posterior optimization to match the DSM against a target image. In this approach a DSM of the object to be segmented is placed in the target image data and undergoes a series of rigid and nonrigid transformations that deform the model to closely match the target object. The deformation process is driven by optimizing an objective function that has terms for the geometric typicality and model-to-image match for each instance of the deformed model. The success of this approach depends strongly on the object representation, i.e., the structural details and parameter set for the DSM, which in turn determines the analytic form of the objective function. This paper describes a form of DSM called m-reps that has or allows these properties, and a method of segmentation consisting of large to small scale posterior optimization of m-reps. Segmentation by deformable m-reps, together with the appropriate data representations, visualizations, and user interface, has been

  7. Procedure for hazards analysis of plutonium gloveboxes used in analytical chemistry operations

    International Nuclear Information System (INIS)

    Delvin, W.L.

    1977-06-01

    A procedure is presented to identify and assess hazards associated with gloveboxes used for analytical chemistry operations involving plutonium. This procedure is based upon analytic tree methodology and it has been adapted from the US Energy Research and Development Administration's safety program, the Management Oversight and Risk Tree

  8. Mild toxic anterior segment syndrome mimicking delayed onset toxic anterior segment syndrome after cataract surgery

    Directory of Open Access Journals (Sweden)

    Su-Na Lee

    2014-01-01

    Toxic anterior segment syndrome (TASS) is an acute sterile postoperative anterior segment inflammation that may occur after anterior segment surgery. I report herein a case that developed mild TASS in one eye after bilateral uneventful cataract surgery; it was masked during the early postoperative period under a steroid eye drop and mimicked delayed-onset TASS after switching to a weaker steroid eye drop.

  9. STEM Employment in the New Economy: A Labor Market Segmentation Approach

    Science.gov (United States)

    Torres-Olave, Blanca M.

    2013-01-01

    The present study examined the extent to which the U.S. STEM labor market is stratified in terms of quality of employment. Through a series of cluster analyses and Chi-square tests on data drawn from the 2008 Survey of Income Program Participation (SIPP), the study found evidence of segmentation in the highly-skilled STEM and non-STEM samples,…

  10. Scorpion image segmentation system

    Science.gov (United States)

    Joseph, E.; Aibinu, A. M.; Sadiq, B. A.; Bello Salau, H.; Salami, M. J. E.

    2013-12-01

    Death as a result of scorpion stings has been a major public health problem in developing countries. Despite the high rate of death from scorpion stings, few reports exist in the literature on intelligent devices and systems for automatic detection of scorpions. This paper proposes a digital image processing approach, based on the fluorescence characteristics of scorpions under ultraviolet (UV) light, for automatic detection and identification of scorpions. The acquired UV-based images undergo pre-processing to equalize uneven illumination, followed by colour space channel separation. The extracted channels are then segmented into two non-overlapping classes. It has been observed that simple thresholding of the green channel of the acquired RGB UV-based image is sufficient for segmenting the scorpion from other background components in the image. Two approaches to image segmentation are also proposed in this work: a simple average segmentation technique and K-means image segmentation. The proposed algorithm has been tested on over 40 UV scorpion images obtained from different parts of the world, and the results show an average accuracy of 97.7% in correctly classifying pixels into two non-overlapping clusters. The proposed system will eliminate the problems associated with some of the existing manual approaches presently in use for scorpion detection.
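
    The green-channel thresholding the paper reports as sufficient can be sketched in a few lines. Otsu's method is assumed here as the automatic threshold; the abstract does not specify the threshold rule, and its average-based and K-means variants are omitted.

```python
# Segment the fluorescing scorpion by thresholding the green channel of a
# UV-illuminated RGB image. Otsu's threshold is an assumed choice.
import numpy as np
from skimage.filters import threshold_otsu

def segment_scorpion(rgb_uv_image):
    green = rgb_uv_image[:, :, 1].astype(float)
    return green > threshold_otsu(green)   # True where the scorpion fluoresces
```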

  11. Combining Vertex-centric Graph Processing with SPARQL for Large-scale RDF Data Analytics

    KAUST Repository

    Abdelaziz, Ibrahim

    2017-06-27

    Modern applications, such as drug repositioning, require sophisticated analytics on RDF graphs that combine structural queries with generic graph computations. Existing systems support either declarative SPARQL queries, or generic graph processing, but not both. We bridge the gap by introducing Spartex, a versatile framework for complex RDF analytics. Spartex extends SPARQL to support programs that combine seamlessly generic graph algorithms (e.g., PageRank, Shortest Paths, etc.) with SPARQL queries. Spartex builds on existing vertex-centric graph processing frameworks, such as Graphlab or Pregel. It implements a generic SPARQL operator as a vertex-centric program that interprets SPARQL queries and executes them efficiently using a built-in optimizer. In addition, any graph algorithm implemented in the underlying vertex-centric framework, can be executed in Spartex. We present various scenarios where our framework simplifies significantly the implementation of complex RDF data analytics programs. We demonstrate that Spartex scales to datasets with billions of edges, and show that our core SPARQL engine is at least as fast as the state-of-the-art specialized RDF engines. For complex analytical tasks that combine generic graph processing with SPARQL, Spartex is at least an order of magnitude faster than existing alternatives.

  12. Brain Tumor Image Segmentation in MRI Image

    Science.gov (United States)

    Peni Agustin Tjahyaningtijas, Hapsari

    2018-04-01

    Brain tumor segmentation plays an important role in medical image processing. Treatment of patients with brain tumors is highly dependent on early detection of these tumors, and early detection will improve the patient's life chances. Diagnosis of brain tumors by experts usually uses manual segmentation, which is difficult and time consuming, making automatic segmentation necessary. Nowadays automatic segmentation is very popular and can be a solution to the problem of brain tumor segmentation with better performance. The purpose of this paper is to provide a review of MRI-based brain tumor segmentation methods. A number of existing review papers focus on traditional methods for MRI-based brain tumor image segmentation. In this paper, we focus on the recent trend of automatic segmentation in this field. First, an introduction to brain tumors and methods for brain tumor segmentation is given. Then, the state-of-the-art algorithms, with a focus on the recent trend of fully automatic segmentation, are discussed. Finally, an assessment of the current state is presented and future developments to standardize MRI-based brain tumor segmentation methods into daily clinical routine are addressed.

  13. Colour application on mammography image segmentation

    Science.gov (United States)

    Embong, R.; Aziz, N. M. Nik Ab.; Karim, A. H. Abd; Ibrahim, M. R.

    2017-09-01

    The segmentation process is one of the most important steps in image processing and computer vision, since it is vital in the initial stage of image analysis. Segmentation of medical images involves complex structures and requires precise segmentation results, which are necessary for clinical diagnosis such as the detection of tumour, oedema, and necrotic tissues. Since mammography images are grayscale, researchers are looking at the effect of colour in the segmentation process of medical images. Colour is known to play a significant role in the perception of object boundaries in non-medical colour images. Processing colour images requires handling more data, hence providing a richer description of objects in the scene. Colour images contain ten percent (10%) additional edge information compared to their grayscale counterparts. Nevertheless, edge detection in colour images is more challenging than in grayscale images, as colour space is considered a vector space. In this study, we applied red, green, yellow, and blue colour maps to grayscale mammography images with the purpose of testing the effect of colours on the segmentation of abnormality regions in the mammography images. We applied the segmentation process using the Fuzzy C-means algorithm and evaluated the percentage of average relative error of area for each colour type. The results showed that segmentation with a colour map can be done successfully even for blurred and noisy images. Also, the size of the segmented abnormality region is reduced compared to segmentation without the colour map. The green colour map segmentation produced the smallest percentage of average relative error (10.009%), while the yellow colour map segmentation gave the largest (11.367%).
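
    The clustering step can be sketched with a compact plain-NumPy fuzzy C-means; the colour-map preprocessing and the relative-error-of-area evaluation are not reproduced, and the parameters are illustrative.

```python
# Fuzzy C-means: alternate between fuzzy memberships and weighted centroids.
import numpy as np

def fuzzy_c_means(x, c=3, m=2.0, iters=100, tol=1e-5, seed=0):
    """x: (n_samples, n_features). Returns (centers, memberships)."""
    rng = np.random.default_rng(seed)
    u = rng.random((len(x), c))
    u /= u.sum(axis=1, keepdims=True)
    for _ in range(iters):
        um = u ** m
        centers = (um.T @ x) / um.sum(axis=0)[:, None]
        d = np.linalg.norm(x[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        u_new = 1.0 / d ** (2.0 / (m - 1.0))
        u_new /= u_new.sum(axis=1, keepdims=True)
        if np.abs(u_new - u).max() < tol:
            return centers, u_new
        u = u_new
    return centers, u

# Usage on a grayscale (or colour-mapped) image: cluster pixel values, e.g.
# centers, u = fuzzy_c_means(image.reshape(-1, 1), c=3)
```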

  14. Cluster Analysis as an Analytical Tool of Population Policy

    Directory of Open Access Journals (Sweden)

    Oksana Mikhaylovna Shubat

    2017-12-01

    The predicted negative trends in Russian demography (falling birth rates, population decline) actualize the need to strengthen measures of family and population policy. Our research purpose is to identify groups of Russian regions with similar characteristics in the family sphere using cluster analysis. The findings should make an important contribution to the field of family policy. We used hierarchical cluster analysis based on the Ward method and the Euclidean distance for segmentation of Russian regions. Clustering is based on four variables, which allowed assessing the family institution in each region. The authors used data of the Federal State Statistics Service from 2010 to 2015. Clustering and profiling of each segment allowed us to form a model of Russian regions according to the features of the family institution in those regions. The authors revealed four clusters grouping regions with similar problems in the family sphere. This segmentation makes it possible to develop the most relevant family policy measures for each group of regions. Thus, the analysis has shown a high degree of differentiation of the family institution across the regions. This suggests that a unified approach to solving population problems is far from effective. To achieve greater results in the implementation of family policy, a differentiated approach is needed. Methods of multidimensional data classification can be successfully applied as a relevant analytical toolkit. Further research could develop the adaptation of multidimensional classification methods to the analysis of population problems in Russian regions. In particular, algorithms of nonparametric cluster analysis may be of relevance in future studies.
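
    The analysis described (Ward linkage, Euclidean distance, four indicators, four clusters) maps directly onto standard tooling. A hedged sketch with scipy, using dummy data in place of the paper's Rosstat indicators:

        import numpy as np
        from scipy.cluster.hierarchy import linkage, fcluster

        X = np.random.rand(80, 4)                 # 80 regions x 4 indicators (dummy data)
        X = (X - X.mean(axis=0)) / X.std(axis=0)  # standardize before Euclidean distances
        Z = linkage(X, method="ward")             # Ward's method, Euclidean metric
        labels = fcluster(Z, t=4, criterion="maxclust")  # cut the tree into 4 clusters
        print(np.bincount(labels)[1:])            # cluster sizes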

  15. 40 CFR 86.214-94 - Analytical gases.

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 18 2010-07-01 2010-07-01 false Analytical gases. 86.214-94 Section 86.214-94 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED... Later Model Year Gasoline-Fueled New Light-Duty Vehicles, New Light-Duty Trucks and New Medium-Duty...

  16. Ontario hydro's aqueous discharge monitoring program

    International Nuclear Information System (INIS)

    Mehdi, S.H.; Booth, M.R.; Massey, R.; Herrmann, O.

    1992-01-01

    The Province of Ontario has legislated a comprehensive monitoring program for waterborne trace contaminants called MISA - Municipal Industrial Strategy for Abatement. The electric power sector regulation applies to all generating stations (thermal, nuclear, hydraulic). The program commenced in June 1990. The current phase of the regulation requires the operators of the plants to measure the detailed composition of the direct discharges to water for a one-year period. Samples are to be taken from about 350 identified streams at frequencies varying from continuous and daily to quarterly. The data from this program will be used to determine the scope of the ongoing monitoring and control program. This paper discusses the preparation and planning, commissioning, training, and early operations phases of the MISA program. In response to the regulation, the central Analytical Laboratory and Environmental staff worked to develop a sampling and analytical approach which uses the plant laboratories, the central analytical laboratory, and a variety of external laboratories. The approach considered analytical frequency, sample stability, presence of radioactivity, suitability of staff, laboratory qualifications, need for long-term internal capabilities, availability of equipment, difficulty of analysis, relationship to other work and problems, and capital and operating costs. The complexity of the sampling program required the development of a computer-based schedule to ensure that all required samples were taken as required, with phase shifts between major sampling events at different plants to prevent swamping the capacity of the central or external laboratories. New equipment has been purchased and installed at each plant to collect 24-hour composite samples. Analytical equipment has been purchased for each plant for analysis of perishable analytes or of samples requiring daily or thrice-weekly analysis. Training programs and surveys have been implemented to assure production of valid data.

  17. A genetic algorithm-based job scheduling model for big data analytics.

    Science.gov (United States)

    Lu, Qinghua; Li, Shanshan; Zhang, Weishan; Zhang, Lei

    Big data analytics (BDA) applications are a new category of software applications that process large amounts of data using scalable parallel processing infrastructure to obtain hidden value. Hadoop is the most mature open-source big data analytics framework, which implements the MapReduce programming model to process big data with MapReduce jobs. Big data analytics jobs are often continuous and not mutually separated. The existing work mainly focuses on executing jobs in sequence, which is often inefficient and consumes much energy. In this paper, we propose a genetic algorithm-based job scheduling model for big data analytics applications to improve the efficiency of big data analytics. To implement the job scheduling model, we leverage an estimation module to predict the performance of clusters when executing analytics jobs. We have evaluated the proposed job scheduling model in terms of feasibility and accuracy.
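
    To make the idea concrete, here is a hedged sketch of a genetic algorithm for ordering analytics jobs; it is not the authors' model. A chromosome is a job permutation, fitness is the estimated makespan on a fixed pool of parallel clusters, and all numbers are invented.

        import random

        def makespan(order, runtimes, clusters=3):
            loads = [0.0] * clusters
            for job in order:                        # greedily place each job on the least-loaded cluster
                i = loads.index(min(loads))
                loads[i] += runtimes[job]
            return max(loads)

        def evolve(runtimes, pop=40, gens=200):
            n = len(runtimes)
            population = [random.sample(range(n), n) for _ in range(pop)]
            for _ in range(gens):
                population.sort(key=lambda o: makespan(o, runtimes))
                survivors = population[: pop // 2]   # elitist selection
                children = []
                for _ in range(pop - len(survivors)):
                    a, b = random.sample(survivors, 2)
                    cut = random.randrange(1, n)     # order crossover: prefix of a, rest in b's order
                    child = a[:cut] + [j for j in b if j not in a[:cut]]
                    if random.random() < 0.2:        # swap mutation
                        i, j = random.sample(range(n), 2)
                        child[i], child[j] = child[j], child[i]
                    children.append(child)
                population = survivors + children
            return min(population, key=lambda o: makespan(o, runtimes))

        jobs = [random.uniform(1, 10) for _ in range(12)]
        best = evolve(jobs)
        print(best, round(makespan(best, jobs), 2))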

  18. ANALYTICAL, CRITICAL AND CREATIVE THINKING DEVELOPMENT OF THE GIFTED CHILDREN IN THE USA SCHOOLS

    Directory of Open Access Journals (Sweden)

    Anna Yurievna Kuvarzina

    2013-11-01

    Teachers of gifted students should not only design an enrichment and acceleration program for them but also pay attention to the development of analytical, critical and creative thinking skills. Despite great interest in this issue in recent years, the topic of analytical and creative thinking is poorly covered in textbooks for the gifted. In this article some methods, materials and programs used in the USA for developing analytical, critical and creative thinking skills are described. The author analyses and systematizes the methods and also suggests some ways of using them in the Russian educational system. Purpose: to analyze and systematize methods, materials and programs that are used in the USA for teaching gifted children analytical, critical and creative thinking, and for developing their capacities for problem-solving and decision-making. Methods and methodology of the research: analysis, comparison, and the principle of the unity of historical and logical approaches. Results: positive results of employing methods for developing analytical, critical and creative thinking were shown in the practical experience of teaching and educating gifted children in the USA educational system. Area of application of the results: the educational system of the Russian Federation: schools, special classes and courses for gifted children. DOI: http://dx.doi.org/10.12731/2218-7405-2013-7-42

  19. Segmentation, advertising and prices

    NARCIS (Netherlands)

    Galeotti, Andrea; Moraga González, José

    This paper explores the implications of market segmentation on firm competitiveness. In contrast to earlier work, here market segmentation is minimal in the sense that it is based on consumer attributes that are completely unrelated to tastes. We show that when the market is comprised by two

  20. Chromosome condensation and segmentation

    International Nuclear Information System (INIS)

    Viegas-Pequignot, E.M.

    1981-01-01

    Some aspects of chromosome condensation in mammals - humans especially - were studied by means of cytogenetic techniques of chromosome banding. Two approaches were adopted: a study of normal condensation as early as prophase, and an analysis of chromosome segmentation induced by physical (temperature and γ-rays) or chemical agents (base analogues, antibiotics, ...) in order to bring out the factors liable to affect condensation. Here 'segmentation' means an abnormal chromosome condensation that appears systematically and is reproducible. The study of normal condensation was made possible by the development of a technique based on cell synchronization by thymidine, yielding prophase and prometaphase cells. Besides, the possibility of inducing R-banding segmentations in these cells with BrdU (5-bromodeoxyuridine) allowed a much finer analysis of karyotypes. Another technique was developed using 5-ACR (5-azacytidine); it allowed the induction of a segmentation similar to the one obtained using BrdU and the identification of heterochromatic areas rich in G-C base pairs. [fr]

  1. Hanford analytical sample projections FY 1996 - FY 2001. Revision 4

    Energy Technology Data Exchange (ETDEWEB)

    Joyce, S.M.

    1997-07-02

    This document summarizes the biannual Hanford sample projections for fiscal years 1997 through 2001. Sample projections are based on inputs submitted to Analytical Services covering Environmental Restoration, Tank Waste Remediation Systems, Solid Wastes, Liquid Effluents, Spent Nuclear Fuels, Transition Projects, Site Monitoring, Industrial Hygiene, Analytical Services, and miscellaneous Hanford support activities. In addition to this revision, details on laboratory-scale technology (development), sample management, and data management activities were requested. This information will be used by the Hanford Analytical Services program and the Sample Management Working Group to assure that laboratories and resources are available and effectively utilized to meet these documented needs.

  2. Analytic Methods Used in Quality Control in a Compounding Pharmacy.

    Science.gov (United States)

    Allen, Loyd V

    2017-01-01

    Analytical testing will no doubt become a more important part of pharmaceutical compounding as the public and regulatory agencies demand increasing documentation of the quality of compounded preparations. Compounding pharmacists must decide what types of testing and what amount of testing to include in their quality-control programs, and whether testing should be done in-house or outsourced. Like pharmaceutical compounding, analytical testing should be performed only by those who are appropriately trained and qualified. This article discusses the analytical methods that are used in quality control in a compounding pharmacy. Copyright© by International Journal of Pharmaceutical Compounding, Inc.

  3. Track segment synthesis method for NTA film

    International Nuclear Information System (INIS)

    Kumazawa, Shigeru

    1980-03-01

    A method is presented for synthesizing track segments extracted from a gray-level digital picture of NTA film in an automatic counting system. In order to detect each track in an arbitrary direction, even one with gaps, as a set of track segments, the method successively links extracted segments along the track, according to whether each extracted segment is similar in direction to the track and whether it is connected to the already linked segments. In the case of a large digital picture, the method is applied to each subpicture, a strip of the picture, and the subsets of track segments linked in each subpicture are then concatenated into the set of segments belonging to a track. The method was applied to detecting tracks in various directions over eight 364 x 40-pixel subpictures with a gray scale of 127/pixel (picture element) from a microphotograph of NTA film. It proved able to synthesize track segments correctly for every track in the picture. (author)
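
    The linking idea can be sketched as a greedy chaining rule: two segments belong to the same track when their directions agree and their endpoints are close. The thresholds and data below are invented; the paper's actual criteria are more elaborate.

        import math

        def angle(seg):
            (x0, y0), (x1, y1) = seg
            return math.atan2(y1 - y0, x1 - x0)

        def link_segments(segments, max_gap=5.0, max_dangle=0.2):
            tracks = []
            for seg in sorted(segments):                  # scan segments in picture order
                for track in tracks:
                    tail = track[-1]
                    gap = math.dist(tail[1], seg[0])      # endpoint-to-endpoint gap
                    dangle = abs(angle(tail) - angle(seg))
                    if gap <= max_gap and dangle <= max_dangle:
                        track.append(seg)                 # similar direction and connected: link
                        break
                else:
                    tracks.append([seg])                  # otherwise start a new candidate track
            return tracks

        segs = [((0, 0), (4, 1)), ((5, 1), (9, 2)), ((0, 9), (4, 9))]
        print(len(link_segments(segs)))                   # -> 2 tracks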

  4. The Savannah River Site's groundwater monitoring program

    International Nuclear Information System (INIS)

    1991-01-01

    This report summarizes the Savannah River Site (SRS) groundwater monitoring program conducted by EPD/EMS in the first quarter of 1991. It includes the analytical data, field data, data review, quality control, and other documentation for this program, provides a record of the program's activities and rationale, and serves as an official document of the analytical results.

  5. Characterization of Analytical Reference Glass-1 (ARG-1)

    International Nuclear Information System (INIS)

    Smith, G.L.

    1993-12-01

    High-level radioactive waste may be immobilized in borosilicate glass at the West Valley Demonstration Project, West Valley, New York, the Defense Waste Processing Facility (DWPF), Aiken, South Carolina, and the Hanford Waste Vitrification Project (HWVP), Richland, Washington. The vitrified waste form will be stored in stainless steel canisters before its eventual transfer to a geologic repository for long-term disposal. Waste Acceptance Product Specifications (WAPS) (DOE 1993), Section 1.1.2, requires that the waste form producers report the measured chemical composition of the vitrified waste in their production records before disposal. Chemical analysis of glass waste forms is receiving increased attention due to qualification requirements of vitrified waste forms. The Pacific Northwest Laboratory (PNL) has been supporting the glass producers' analytical laboratories by a continuing program of multilaboratory analytical testing using interlaboratory "round robin" methods. At the PNL Materials Characterization Center Analytical Round Robin 4 workshop "Analysis of Nuclear Waste Glass and Related Materials," January 16-17, 1990, Pleasanton, California, the meeting attendees decided that simulated nuclear waste analytical reference glasses were needed for use as analytical standards. Use of common standard analytical reference materials would allow the glass producers' analytical laboratories to calibrate procedures and instrumentation, to control laboratory performance and conduct self-appraisals, and to help qualify their various waste forms.

  6. LIFE-STYLE SEGMENTATION WITH TAILORED INTERVIEWING

    NARCIS (Netherlands)

    KAMAKURA, WA; WEDEL, M

    The authors present a tailored interviewing procedure for life-style segmentation. The procedure assumes that a life-style measurement instrument has been designed. A classification of a sample of consumers into life-style segments is obtained using a latent-class model. With these segments, the

  7. Analytical Chemistry Division annual progress report for period ending December 31, 1989

    International Nuclear Information System (INIS)

    1990-04-01

    The Analytical Chemistry Division of Oak Ridge National Laboratory (ORNL) is a large and diversified organization. As such, it serves a multitude of functions for a clientele that exists both in and outside of ORNL. These functions fall into the following general categories: Analytical Research, Development and Implementation; Programmatic Research, Development, and Utilization; and Technical Support. The Analytical Chemistry Division is organized into four major sections, each of which may carry out any of the three types of work mentioned above. Chapters 1 through 4 of this report highlight progress within the four sections during the period January 1 to December 31, 1989. A brief discussion of the division's role in an especially important environmental program is given in Chapter 5. Information about quality assurance, safety, and training programs is presented in Chapter 6, along with a tabulation of analyses rendered. Publications, oral presentations, professional activities, educational programs, and seminars are cited in Chapters 7 and 8. Approximately 69 articles, 41 proceedings, and 31 reports were published, and 151 oral presentations were given during this reporting period. Some 308,981 determinations were performed.

  8. Segmented rail linear induction motor

    Science.gov (United States)

    Cowan, Jr., Maynard; Marder, Barry M.

    1996-01-01

    A segmented rail linear induction motor has a segmented rail consisting of a plurality of nonferrous electrically conductive segments aligned along a guideway. The motor further includes a carriage including at least one pair of opposed coils fastened to the carriage for moving the carriage. A power source applies an electric current to the coils to induce currents in the conductive surfaces to repel the coils from adjacent edges of the conductive surfaces.

  9. Deformable segmentation via sparse shape representation.

    Science.gov (United States)

    Zhang, Shaoting; Zhan, Yiqiang; Dewan, Maneesh; Huang, Junzhou; Metaxas, Dimitris N; Zhou, Xiang Sean

    2011-01-01

    Appearance and shape are two key elements exploited in medical image segmentation. However, in some medical image analysis tasks, appearance cues are weak/misleading due to disease/artifacts and often lead to erroneous segmentation. In this paper, a novel deformable model is proposed for robust segmentation in the presence of weak/misleading appearance cues. Owing to the less trustworthy appearance information, this method focuses on effective shape modeling, with two contributions. First, a shape composition method is designed to incorporate shape priors on the fly. Based on two sparsity observations, this method is robust to false appearance information and adaptive to statistically insignificant shape modes. Second, shape priors are modeled and used in a hierarchical fashion. More specifically, by using the affinity propagation method, our deformable surface is divided into multiple partitions, on which local shape models are built independently. This scheme facilitates a more compact shape prior model and hence a more robust and efficient segmentation. Our deformable model is applied to two very diverse segmentation problems, liver segmentation in PET-CT images and rodent brain segmentation in MR images. Compared to state-of-the-art methods, our method achieves better performance in both studies.
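
    The sparse-composition idea can be sketched as representing an input shape y as a sparse combination of training shapes (the columns of a dictionary D), solved here with plain ISTA on min_x 0.5||Dx - y||^2 + lam*||x||_1. This is a generic illustration, not the paper's formulation; all sizes and data are invented.

        import numpy as np

        def soft_threshold(v, t):
            return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

        def sparse_code(D, y, lam=0.1, iters=300):
            L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
            x = np.zeros(D.shape[1])
            for _ in range(iters):
                grad = D.T @ (D @ x - y)           # gradient of the data term
                x = soft_threshold(x - grad / L, lam / L)
            return x

        rng = np.random.default_rng(0)
        D = rng.normal(size=(200, 40))             # 40 training shapes of 200 stacked coordinates
        y = D[:, 3] * 0.7 + D[:, 17] * 0.3         # a shape composed from two training shapes
        x = sparse_code(D, y)
        print(np.argsort(-np.abs(x))[:2])          # the two dominant training shapes (3 and 17)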

  10. Segmenting hospitals for improved management strategy.

    Science.gov (United States)

    Malhotra, N K

    1989-09-01

    The author presents a conceptual framework for the a priori and clustering-based approaches to segmentation and evaluates them in the context of segmenting institutional health care markets. An empirical study is reported in which the hospital market is segmented on three state-of-being variables. The segmentation approach also takes into account important organizational decision-making variables. The sophisticated Thurstone Case V procedure is employed. Several marketing implications for hospitals, other health care organizations, hospital suppliers, and donor publics are identified.

  11. Data analysis and analytical predictions of a steam generator tube bundle flow field for verification of 2-D T/H computer code

    International Nuclear Information System (INIS)

    Hwang, J.Y.; Reid, H.C.; Berringer, R.

    1981-01-01

    Analytical predictions of the flow field within a 60 deg segment flow model of a proposed sodium-heated steam generator are compared to experimental results obtained at several axial levels between baffles. The axial/crossflow field is developed by the use of alternating multi-ported baffling, accomplished by a radial perforation distribution. Radial and axial porous-model predictions from an axisymmetric computational analysis are compared to intra-pitch experimental data at the mid-baffle-span location for various levels. The analytical mechanics utilizes a cylindrical, axisymmetric, finite difference model, solving the conservation of mass and momentum equations. 6 refs

  12. Analysis of the analytic formulae application area for free oscillation frequency calculation in isochronous cyclotrons

    International Nuclear Information System (INIS)

    Kiyan, I.N.; Taraszkiewicz, R.

    2005-01-01

    The selection of optimal analytic formulae for calculating the free oscillation frequencies of the particles in isochronous cyclotrons, ν_r(r) and ν_z(r), and their application area are described. The selected formulae are used in the program BORP SR - Betatron Oscillation Research Program Second Release - written in C++ with the help of MS Visual C++ .NET. The free oscillation frequencies calculated by the program are used for the evaluation of the modeled operating regimes of the AIC144 isochronous cyclotron. The analytic formulae were selected by comparing the results of calculations performed using the formulae adduced by T. Stammbach, Y. Jongen - S. Zaremba, and V.V. Kolga with the results of calculations performed with the CYCLOPS iterative program developed by M.M. Gordon. The smallest difference in the calculation results was obtained for the analytic formulae adduced by V.V. Kolga. The ν_r(r) calculation difference ranged from -0.5 to 1.5% and the ν_z(r) calculation difference ranged from -5 to 4% over the working radii of the isochronous cyclotron. Since beam was obtained, the selected analytic formulae can be successfully used in the program BORP SR for free oscillation frequency calculation during the evaluation of the modeled operating regimes of different isochronous cyclotrons.
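
    The record does not reproduce the formulae themselves. For orientation only, the standard smooth-approximation expressions for the betatron tunes in an azimuthally varying field cyclotron - of the kind found in Stammbach's lecture notes - read as follows, where k is the field index, F the flutter, and ε the spiral angle; whether these are exactly the variants compared in the paper is not stated in the abstract:

        \nu_r^2 \simeq 1 + k, \qquad
        \nu_z^2 \simeq -k + F\left(1 + 2\tan^2\varepsilon\right),
        \quad\text{where}\quad
        k = \frac{r}{\langle B\rangle}\frac{\mathrm{d}\langle B\rangle}{\mathrm{d}r},
        \qquad
        F = \frac{\langle B^2\rangle - \langle B\rangle^2}{\langle B\rangle^2}.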

  13. Review of segmentation process in consumer markets

    Directory of Open Access Journals (Sweden)

    Veronika Jadczaková

    2013-01-01

    Although there has been considerable debate on market segmentation over five decades, attention has mostly been devoted to single stages of the segmentation process. Stages such as segmentation base selection or segment profiling have been heavily covered in the extant literature, whereas stages such as implementation of the marketing strategy or market definition have received comparably less interest. Capitalizing on this shortcoming, this paper strives to close the gap and give each step of the segmentation process equal treatment. Hence, the objective of this paper is two-fold. First, a snapshot of the segmentation process is provided in a step-by-step fashion. Second, each step (where possible) is evaluated on chosen criteria by means of description, comparison, analysis and synthesis of 32 academic papers and 13 commercial typology systems. Ultimately, the segmentation stages are discussed in light of empirical findings prevalent in the segmentation studies, and, last but not least, suggestions calling for further investigation are presented. This seven-step framework may assist when segmenting in practice, allowing for more confident targeting, which in turn might prepare the ground for creating a differential advantage.

  14. Physical linkage of a human immunoglobulin heavy chain variable region gene segment to diversity and joining region elements

    International Nuclear Information System (INIS)

    Schroeder, H.W. Jr.; Walter, M.A.; Hofker, M.H.; Ebens, A.; Van Dijk, K.W.; Liao, L.C.; Cox, D.W.; Milner, E.C.B.; Perlmutter, R.M.

    1988-01-01

    Antibody genes are assembled from a series of germ-line gene segments that are juxtaposed during the maturation of B lymphocytes. Although diversification of the adult antibody repertoire results in large part from the combinatorial joining of these gene segments, a restricted set of antibody heavy chain variable (V H ), diversity (D H ), and joining (J H ) region gene segments appears preferentially in the human fetal repertoire. The authors report here that one of these early-expressed V H elements (termed V H 6) is the most 3' V H gene segment, positioned 77 kilobases on the 5' side of the J H locus and immediately adjacent to a set of previously described D H sequences. In addition to providing a physical map linking human V H , D H , and J H elements, these results support the view that the programmed development of the antibody V H repertoire is determined in part by the chromosomal position of these gene segments

  15. Hanford analytical services quality assurance requirements documents. Volume 1: Administrative Requirements

    International Nuclear Information System (INIS)

    Hyatt, J.E.

    1997-01-01

    Hanford Analytical Services Quality Assurance Requirements Document (HASQARD) is issued by the Analytical Services Program of the Waste Management Division, US Department of Energy (US DOE), Richland Operations Office (DOE-RL). The HASQARD establishes quality requirements in response to DOE Order 5700.6C (DOE 1991b). The HASQARD is designed to meet the needs of DOE-RL for maintaining a consistent level of quality for sampling and field and laboratory analytical services provided by contractor and commercial field and laboratory analytical operations. The HASQARD serves as the quality basis for all sampling and field/laboratory analytical services provided to DOE-RL through the Analytical Services Program of the Waste Management Division in support of Hanford Site environmental cleanup efforts. This includes work performed by contractor and commercial laboratories and covers radiological and nonradiological analyses. The HASQARD applies to field sampling, field analysis, and research and development activities that support work conducted under the Hanford Federal Facility Agreement and Consent Order (Tri-Party Agreement) and regulatory permit applications and applicable permit requirements described in subsections of this volume. The HASQARD applies to work done to support process chemistry analysis (e.g., ongoing site waste treatment and characterization operations) and research and development projects related to Hanford Site environmental cleanup activities. This provides a uniform quality umbrella for analytical site activities, predicated on the concepts contained in the HASQARD. Using the HASQARD will ensure data of known quality and the technical defensibility of the methods used to obtain those data. The HASQARD is made up of four volumes: Volume 1, Administrative Requirements; Volume 2, Sampling Technical Requirements; Volume 3, Field Analytical Technical Requirements; and Volume 4, Laboratory Technical Requirements. Volume 1 describes the administrative requirements.

  16. Symbolic computation of analytic approximate solutions for nonlinear differential equations with initial conditions

    Science.gov (United States)

    Lin, Yezhi; Liu, Yinping; Li, Zhibin

    2012-01-01

    The Adomian decomposition method (ADM) is one of the most effective methods for constructing analytic approximate solutions of nonlinear differential equations. In this paper, based on a new definition of the Adomian polynomials and the two-step Adomian decomposition method (TSADM) combined with the Padé technique, a new algorithm is proposed to construct accurate analytic approximations of nonlinear differential equations with initial conditions. Furthermore, a MAPLE package is developed which is user-friendly and efficient. One only needs to input a system, initial conditions and several necessary parameters; the package will then automatically deliver analytic approximate solutions within a few seconds. Several different types of examples are given to illustrate the validity of the package. Our program provides a helpful and easy-to-use tool in science and engineering to deal with initial value problems. Program summary: Program title: NAPA. Catalogue identifier: AEJZ_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEJZ_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html. No. of lines in distributed program, including test data, etc.: 4060. No. of bytes in distributed program, including test data, etc.: 113 498. Distribution format: tar.gz. Programming language: MAPLE R13. Computer: PC. Operating system: Windows XP/7. RAM: 2 Gbytes. Classification: 4.3. Nature of problem: Solve nonlinear differential equations with initial conditions. Solution method: Adomian decomposition method and Padé technique. Running time: Seconds at most in routine uses of the program. Special tasks may take up to some minutes.
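
    The ADM ingredient the paper builds on is easy to illustrate. Below is a hedged Python/sympy sketch computing Adomian polynomials A_n for a nonlinearity N(u) via the classical definition A_n = (1/n!) d^n/dλ^n N(Σ_k λ^k u_k) at λ=0; it illustrates the method generally and is not the NAPA package.

        import sympy as sp

        def adomian_polynomials(N, n_terms):
            lam = sp.Symbol("lambda")
            u = sp.symbols(f"u0:{n_terms}")              # u0, u1, ..., u_{n-1}
            series = sum(lam**k * u[k] for k in range(n_terms))
            A = []
            for n in range(n_terms):
                expr = sp.diff(N(series), lam, n).subs(lam, 0) / sp.factorial(n)
                A.append(sp.expand(expr))
            return A

        # Example: N(u) = u**2 gives A0 = u0**2, A1 = 2*u0*u1, A2 = u1**2 + 2*u0*u2, ...
        for n, poly in enumerate(adomian_polynomials(lambda u: u**2, 4)):
            print(f"A{n} =", poly)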

  17. Polyether based segmented copolymers with uniform aramid units

    NARCIS (Netherlands)

    Niesten, M.C.E.J.

    2000-01-01

    Segmented copolymers with short, glassy or crystalline hard segments and long, amorphous soft segments (multi-block copolymers) are thermoplastic elastomers (TPE’s). The hard segments form physical crosslinks for the amorphous (rubbery) soft segments. As a result, this type of materials combines

  18. Examination of fast reactor fuels, FBR analytical quality assurance standards and methods, and analytical methods development: irradiation tests. Progress report, April 1--June 30, 1976, and FY 1976

    International Nuclear Information System (INIS)

    Baker, R.D.

    1976-08-01

    Characterization of unirradiated and irradiated LMFBR fuels by analytical chemistry methods will continue, and additional methods will be modified and mechanized for hot cell application. Macro- and microexaminations will be made on fuel and cladding using the shielded electron microprobe, emission spectrograph, radiochemistry, gamma scanner, mass spectrometers, and other analytical facilities. New capabilities will be developed in gamma scanning, analyses to assess spatial distributions of fuel and fission products, mass spectrometric measurements of burnup and fission gas constituents and other chemical analyses. Microstructural analyses of unirradiated and irradiated materials will continue using optical and electron microscopy and autoradiographic and x-ray techniques. Analytical quality assurance standards tasks are designed to assure the quality of the chemical characterizations necessary to evaluate reactor components relative to specifications. Tasks include: (1) the preparation and distribution of calibration materials and quality control samples for use in quality assurance surveillance programs, (2) the development of and the guidance in the use of quality assurance programs for sampling and analysis, (3) the development of improved methods of analysis, and (4) the preparation of continuously updated analytical method manuals. Reliable analytical methods development for the measurement of burnup, oxygen-to-metal (O/M) ratio, and various gases in irradiated fuels is described

  19. Unsupervised motion-based object segmentation refined by color

    Science.gov (United States)

    Piek, Matthijs C.; Braspenning, Ralph; Varekamp, Chris

    2003-06-01

    For various applications, such as data compression, structure from motion, medical imaging and video enhancement, there is a need for an algorithm that divides video sequences into independently moving objects. Because our focus is on video enhancement and structure from motion for consumer electronics, we strive for a low-complexity solution. For still images, several approaches exist based on colour, but these fall short in both speed and segmentation quality. For instance, colour-based watershed algorithms produce a so-called oversegmentation, with many segments covering each single physical object. Other colour segmentation approaches limit the number of segments in some way to reduce this oversegmentation problem. However, this often results in inaccurate edges or even missed objects. Most likely, colour is an inherently insufficient cue for real-world object segmentation, because real-world objects can display complex combinations of colours. For video sequences, however, an additional cue is available, namely the motion of objects. When different objects in a scene have different motion, the motion cue alone is often enough to reliably distinguish objects from one another and the background. However, because of the limited resolution of efficient motion estimators, like the 3DRS block matcher, the resulting segmentation is not at pixel resolution but at block resolution. Existing pixel-resolution motion estimators are more sensitive to noise, suffer more from aperture problems, correspond less to the true motion of objects than block-based approaches, or are too computationally expensive. From its tendency to oversegmentation it is apparent that colour segmentation is particularly effective near edges of homogeneously coloured areas. On the other hand, block-based true motion estimation is particularly effective in heterogeneous areas, because heterogeneous areas improve the chance a block is unique and thus decrease the
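
    For readers unfamiliar with block matchers of the kind contrasted here, the following is a minimal full-search sum-of-absolute-differences sketch (not 3DRS, which is a far more efficient recursive estimator); block size, search range, and frames are invented.

        import numpy as np

        def block_motion(prev, curr, block=8, search=4):
            h, w = curr.shape
            field = np.zeros((h // block, w // block, 2), dtype=int)
            for by in range(0, h - block + 1, block):
                for bx in range(0, w - block + 1, block):
                    target = curr[by:by + block, bx:bx + block].astype(int)
                    best, best_sad = (0, 0), np.inf
                    for dy in range(-search, search + 1):
                        for dx in range(-search, search + 1):
                            y, x = by + dy, bx + dx
                            if 0 <= y <= h - block and 0 <= x <= w - block:
                                cand = prev[y:y + block, x:x + block].astype(int)
                                sad = np.abs(cand - target).sum()   # sum of absolute differences
                                if sad < best_sad:
                                    best, best_sad = (dy, dx), sad
                    field[by // block, bx // block] = best
            return field

        prev = np.zeros((32, 32), dtype=np.uint8); prev[8:16, 8:16] = 255
        curr = np.roll(prev, (2, 3), axis=(0, 1))   # object moved down 2, right 3
        print(block_motion(prev, curr)[1, 1])       # -> [-2 -3]: block content came from 2 up, 3 left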

  20. Tank 241-T-105, cores 205 and 207 analytical results for the final report

    International Nuclear Information System (INIS)

    Esch, R.A.

    1997-01-01

    This document is the final laboratory report for tank 241-T-105 push mode core segments collected between June 24, 1997 and June 30, 1997. The segments were subsampled and analyzed in accordance with the Tank Push Mode Core Sampling and Analysis Plan (TSAP) (Field, 1997), the Tank Safety Screening Data Quality Objective (Safety DQO) (Dukelow et al., 1995) and Tank 241-T-105 Sample Analysis (memo) (Field, 1997a). The analytical results are included in Table 1. None of the subsamples submitted for differential scanning calorimetry (DSC) analysis or total alpha activity (AT) exceeded the notification limits stated in the TSAP (Field, 1997). The statistical results of the 95% confidence interval on the mean calculations are provided by the Tank Waste Remediation Systems (TWRS) Technical Basis Group in accordance with the Memorandum of Understanding (Schreiber, 1997) and are not considered in this report.

  1. Impact of metabolic syndrome on ST segment resolution after thrombolytic therapy for acute myocardial infarction

    Directory of Open Access Journals (Sweden)

    Ayşe Saatçı Yaşar

    2010-09-01

    Objectives: It has been shown that metabolic syndrome is associated with poor short-term outcome and poor long-term survival in patients with acute myocardial infarction. We aimed to investigate the effect of metabolic syndrome on ST segment resolution in patients who received thrombolytic therapy for acute myocardial infarction. Materials and methods: We retrospectively analyzed 161 patients who were admitted to our clinics with acute ST-elevation myocardial infarction and received thrombolytic therapy within 12 hours of chest pain. Metabolic syndrome was diagnosed according to National Cholesterol Education Program Adult Treatment Panel III criteria. Resolution of ST segment elevation was assessed on the baseline and 90-minute electrocardiograms. ST segment resolution ≥70% was defined as complete resolution. Results: Metabolic syndrome was found in 56.5% of patients. The proportion of patients with metabolic syndrome who achieved complete ST segment resolution after thrombolysis was significantly lower than that of patients without metabolic syndrome (32.9% versus 58.6%, p=0.001). On multivariate analysis, metabolic syndrome was the only independent predictor of ST segment resolution (p=0.01, odds ratio=2.543, 95% CI: 1.248-5.179). Conclusion: Patients with metabolic syndrome had lower rates of complete ST segment resolution after thrombolytic therapy for acute myocardial infarction. This finding may contribute to the higher morbidity and mortality of patients with metabolic syndrome.

  2. Market Segmentation from a Behavioral Perspective

    Science.gov (United States)

    Wells, Victoria K.; Chang, Shing Wan; Oliveira-Castro, Jorge; Pallister, John

    2010-01-01

    A segmentation approach is presented using both traditional demographic segmentation bases (age, social class/occupation, and working status) and a segmentation by benefits sought. The benefits sought in this case are utilitarian and informational reinforcement, variables developed from the Behavioral Perspective Model (BPM). Using data from 1,847…

  3. Computerized Analytical Data Management System and Automated Analytical Sample Transfer System at the COGEMA Reprocessing Plants in La Hague

    International Nuclear Information System (INIS)

    Flament, T.; Goasmat, F.; Poilane, F.

    2002-01-01

    Managing the operation of large commercial spent nuclear fuel reprocessing plants, such as UP3 and UP2-800 in La Hague, France, requires an extensive analytical program and the shortest possible analysis response times. COGEMA, together with its engineering subsidiary SGN, decided to build high-performance laboratories to support operations in its plants. These laboratories feature automated equipment, safe environments for operators, and short response times, all in centralized installations. Implementation of a computerized analytical data management system and a fully automated pneumatic system for the transfer of radioactive samples was a key factor contributing to the successful operation of the laboratories and plants

  4. Skip segment Hirschsprung disease and Waardenburg syndrome

    OpenAIRE

    Gross, Erica R.; Geddes, Gabrielle C.; McCarrier, Julie A.; Jarzembowski, Jason A.; Arca, Marjorie J.

    2015-01-01

    Skip segment Hirschsprung disease describes a segment of ganglionated bowel between two segments of aganglionated bowel. It is a rare phenomenon that is difficult to diagnose. We describe a recent case of skip segment Hirschsprung disease in a neonate with a family history of Waardenburg syndrome and the genetic profile that was identified.

  5. NPOESS Interface Data Processing Segment Product Generation

    Science.gov (United States)

    Grant, K. D.

    2009-12-01

    The National Oceanic and Atmospheric Administration (NOAA), Department of Defense (DoD), and National Aeronautics and Space Administration (NASA) are jointly acquiring the next-generation weather and environmental satellite system: the National Polar-orbiting Operational Environmental Satellite System (NPOESS). NPOESS replaces the current Polar-orbiting Operational Environmental Satellites (POES) managed by NOAA and the Defense Meteorological Satellite Program (DMSP) managed by the DoD. The NPOESS satellites carry a suite of sensors that collect meteorological, oceanographic, climatological, and solar-geophysical observations of the earth, atmosphere, and space. The NPOESS design allows centralized mission management and delivers high-quality environmental products to military, civil and scientific users. The ground data processing segment for NPOESS is the Interface Data Processing Segment (IDPS), developed by Raytheon Intelligence and Information Systems. The IDPS processes NPOESS satellite data to provide environmental data products to NOAA and DoD processing centers operated by the United States government. The IDPS will process environmental data products beginning with the NPOESS Preparatory Project (NPP) and continuing through the lifetime of the NPOESS system. Within the overall NPOESS processing environment, the IDPS must process a data volume nearly 1000 times that of current systems -- in one-quarter of the time. Further, it must support the calibration, validation, and data quality improvement initiatives of the NPOESS program to ensure the production of atmospheric and environmental products that meet strict requirements for accuracy and precision. This paper will describe the architecture approach that is necessary to meet these challenging, and seemingly mutually exclusive, NPOESS IDPS design requirements, with a focus on the processing relationships required to generate the NPP products.

  6. NPOESS Interface Data Processing Segment (IDPS) Hardware

    Science.gov (United States)

    Sullivan, W. J.; Grant, K. D.; Bergeron, C.

    2008-12-01

    The National Oceanic and Atmospheric Administration (NOAA), Department of Defense (DoD), and National Aeronautics and Space Administration (NASA) are jointly acquiring the next-generation weather and environmental satellite system: the National Polar-orbiting Operational Environmental Satellite System (NPOESS). NPOESS replaces the current Polar-orbiting Operational Environmental Satellites (POES) managed by NOAA and the Defense Meteorological Satellite Program (DMSP) managed by the DoD. The NPOESS satellites carry a suite of sensors that collect meteorological, oceanographic, climatological, and solar-geophysical observations of the earth, atmosphere, and space. The NPOESS design allows centralized mission management and delivers high-quality environmental products to military, civil and scientific users. The ground data processing segment for NPOESS is the Interface Data Processing Segment (IDPS), developed by Raytheon Intelligence and Information Systems. IDPS processes NPOESS satellite data to provide environmental data products to NOAA and DoD processing centers operated by the United States government. IDPS will process environmental data products beginning with the NPOESS Preparatory Project (NPP) and continuing through the lifetime of the NPOESS system. Within the overall NPOESS processing environment, the IDPS must process a data volume several orders of magnitude larger than that of current systems -- in one-quarter of the time. Further, it must support the calibration, validation, and data quality improvement initiatives of the NPOESS program to ensure the production of atmospheric and environmental products that meet strict requirements for accuracy and precision. This poster will illustrate and describe the IDPS HW architecture that is necessary to meet these challenging design requirements. In addition, it will illustrate the expandability features of the architecture in support of future data processing and data distribution needs.

  7. Automated synovium segmentation in doppler ultrasound images for rheumatoid arthritis assessment

    Science.gov (United States)

    Yeung, Pak-Hei; Tan, York-Kiat; Xu, Shuoyu

    2018-02-01

    We need better clinical tools to improve the monitoring of synovitis, synovial inflammation in the joints, in rheumatoid arthritis (RA) assessment. Given its economical, safe and fast characteristics, ultrasound (US), especially Doppler ultrasound, is frequently used. However, manual scoring of synovitis in US images is subjective and prone to observer variation. In this study, we propose a new and robust method for automated synovium segmentation in the commonly affected joints, i.e. the metacarpophalangeal (MCP) and metatarsophalangeal (MTP) joints, which would facilitate automation in quantitative RA assessment. The bone contour in the US image is first detected with a modified dynamic programming method, incorporating angular information for detecting curved bone surfaces and using image fuzzification to identify missing bone structure. K-means clustering is then performed to initialize potential synovium areas, utilizing the identified bone contour as a boundary reference. After excluding invalid candidate regions, the final segmented synovium is identified by reconnecting the remaining candidate regions using level set evolution. 15 MCP and 15 MTP US images were analyzed in this study. For each image, segmentations by our proposed method as well as two sets of annotations performed by an experienced clinician at different time points were acquired. Dice's coefficient is 0.77+/-0.12 between the two sets of annotations. Similar Dice's coefficients are achieved between the automated segmentation and either the first set of annotations (0.76+/-0.12) or the second set (0.75+/-0.11), with no significant difference (P = 0.77). These results verify that the accuracy of segmentation by our proposed method is comparable to that of the clinician.
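
    The core idea of dynamic-programming surface detection, the family the paper's bone-contour step belongs to, is to pick one row per image column that maximizes brightness while limiting row jumps between columns. A hedged numpy sketch (the smoothness constraint and test image are invented, and the paper's angular/fuzzification refinements are omitted):

        import numpy as np

        def dp_contour(img, max_jump=2):
            h, w = img.shape
            cost = np.full((h, w), -np.inf)
            back = np.zeros((h, w), dtype=int)
            cost[:, 0] = img[:, 0]
            for x in range(1, w):
                for y in range(h):
                    lo, hi = max(0, y - max_jump), min(h, y + max_jump + 1)
                    prev = np.argmax(cost[lo:hi, x - 1]) + lo   # best smooth predecessor
                    cost[y, x] = cost[prev, x - 1] + img[y, x]
                    back[y, x] = prev
            path = [int(np.argmax(cost[:, -1]))]
            for x in range(w - 1, 0, -1):                       # backtrack the optimal path
                path.append(back[path[-1], x])
            return path[::-1]                                   # one row index per column

        img = np.zeros((40, 60)); img[20, :] = 1.0              # a bright horizontal "bone" line
        print(set(dp_contour(img)))                             # -> {20}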

  8. Effectiveness of mentoring programs for youth: a meta-analytic review.

    Science.gov (United States)

    DuBois, David L; Holloway, Bruce E; Valentine, Jeffrey C; Cooper, Harris

    2002-04-01

    We used meta-analysis to review 55 evaluations of the effects of mentoring programs on youth. Overall, findings provide evidence of only a modest or small benefit of program participation for the average youth. Program effects are enhanced significantly, however, when greater numbers of both theory-based and empirically based "best practices" are utilized and when strong relationships are formed between mentors and youth. Youth from backgrounds of environmental risk and disadvantage appear most likely to benefit from participation in mentoring programs. Outcomes for youth at-risk due to personal vulnerabilities have varied substantially in relation to program characteristics, with a noteworthy potential evident for poorly implemented programs to actually have an adverse effect on such youth. Recommendations include greater adherence to guidelines for the design and implementation of effective mentoring programs as well as more in-depth assessment of relationship and contextual factors in the evaluation of programs.

  9. Analytics for vaccine economics and pricing: insights and observations.

    Science.gov (United States)

    Robbins, Matthew J; Jacobson, Sheldon H

    2015-04-01

    Pediatric immunization programs in the USA are a successful and cost-effective public health endeavor, profoundly reducing mortality caused by infectious diseases. Two important issues relate to the success of the immunization programs: the selection of cost-effective vaccines and the appropriate pricing of vaccines. The recommended childhood immunization schedule, published annually by the CDC, continues to expand with respect to the number of injections required and the number of vaccines available for selection. The advent of new vaccines to meet the growing requirements of the schedule results in a large, combinatorial number of possible vaccine formularies. The expansion of the schedule and the increase in the number of available vaccines constitute a challenge for state health departments, large city immunization programs, private practices and other vaccine purchasers, as a cost-effective vaccine formulary must be selected from an increasingly large set of possible vaccine combinations to satisfy the schedule. The pediatric vaccine industry consists of a relatively small number of pharmaceutical firms engaged in the research, development, manufacture and distribution of pediatric vaccines. The number of vaccine manufacturers has dramatically decreased in the past few decades for a myriad of reasons, most notably low profitability. The contraction of the industry negatively impacts the reliable provision of pediatric vaccines. The determination of appropriate vaccine prices is an important issue and influences a vaccine manufacturer's decision to remain in the market. Operations research is a discipline that applies advanced analytical methods to improve decision making; analytics is the application of operations research to a particular problem, using pertinent data to provide a practical result. Analytics provides a mechanism to resolve the challenges facing stakeholders in the vaccine development and delivery system, in particular, the selection
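
    The formulary-selection problem described above is naturally cast as a small integer program: pick the cheapest set of vaccines covering every required antigen dose. A hedged toy sketch using the PuLP library; all vaccine names, coverages, and numbers are invented, and the paper's actual models are far richer.

        from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpInteger, value

        cost = {"vaxA": 30, "vaxB": 45, "combo": 60}
        covers = {"vaxA": {"dtap"}, "vaxB": {"hepb"}, "combo": {"dtap", "hepb"}}
        required = {"dtap": 5, "hepb": 3}                      # doses required by the schedule

        prob = LpProblem("formulary", LpMinimize)
        buy = {v: LpVariable(f"buy_{v}", lowBound=0, cat=LpInteger) for v in cost}
        prob += lpSum(cost[v] * buy[v] for v in cost)          # objective: total purchase cost
        for antigen, doses in required.items():                # coverage constraints per antigen
            prob += lpSum(buy[v] for v in cost if antigen in covers[v]) >= doses
        prob.solve()
        print({v: int(value(buy[v])) for v in cost}, value(prob.objective))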

  10. Spinal cord grey matter segmentation challenge.

    Science.gov (United States)

    Prados, Ferran; Ashburner, John; Blaiotta, Claudia; Brosch, Tom; Carballido-Gamio, Julio; Cardoso, Manuel Jorge; Conrad, Benjamin N; Datta, Esha; Dávid, Gergely; Leener, Benjamin De; Dupont, Sara M; Freund, Patrick; Wheeler-Kingshott, Claudia A M Gandini; Grussu, Francesco; Henry, Roland; Landman, Bennett A; Ljungberg, Emil; Lyttle, Bailey; Ourselin, Sebastien; Papinutto, Nico; Saporito, Salvatore; Schlaeger, Regina; Smith, Seth A; Summers, Paul; Tam, Roger; Yiannakas, Marios C; Zhu, Alyssa; Cohen-Adad, Julien

    2017-05-15

    An important image processing step in spinal cord magnetic resonance imaging is the ability to reliably and accurately segment grey and white matter for tissue-specific analysis. There are several semi- or fully-automated segmentation methods for cervical cord cross-sectional area measurement with excellent performance, close or equal to manual segmentation. However, grey matter segmentation is still challenging due to its small cross-sectional size and shape, and active research is being conducted by several groups around the world in this field. Therefore a grey matter spinal cord segmentation challenge was organised to test different capabilities of various methods using the same multi-centre and multi-vendor dataset, acquired with distinct 3D gradient-echo sequences. This challenge aimed to characterize the state of the art in the field as well as to identify new opportunities for future improvements. Six different spinal cord grey matter segmentation methods, developed independently by various research groups across the world, were compared to manual segmentation outcomes, the present gold standard. All algorithms provided good overall results for detecting the grey matter butterfly, albeit with variable performance in certain quality-of-segmentation metrics. The data have been made publicly available and the challenge web site remains open to new submissions. No modifications were introduced to any of the presented methods as a result of this challenge for the purposes of this publication. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.
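
    Challenges like this compare automated masks against manual ones with overlap metrics; Dice's coefficient (also cited in the synovium record above) is the usual choice. A minimal numpy sketch, assuming boolean masks of equal shape:

        import numpy as np

        def dice(a, b):
            a, b = a.astype(bool), b.astype(bool)
            denom = a.sum() + b.sum()
            return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

        m1 = np.zeros((10, 10), bool); m1[2:7, 2:7] = True
        m2 = np.zeros((10, 10), bool); m2[3:8, 3:8] = True
        print(round(dice(m1, m2), 3))   # 2*16/(25+25) = 0.64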

  11. Corporate social marketing: message design to recruit program participants.

    Science.gov (United States)

    Black, David R; Blue, Carolyn L; Coster, Daniel C; Chrysler, Lisa M

    2002-01-01

    To identify variables for a corporate social marketing (SM) health message based on the 4 Ps of SM in order to recruit future participants to an existing national, commercial, self-administered weight-loss program. A systematically evaluated, author-developed, 310-response survey was administered to a random sample of 270 respondents. A previously established research plan was used to empirically identify the audience segments and the "marketing mix" appropriate for the total sample and each segment. Tangible product, pertaining to the unique program features, should be emphasized rather than positive core product and outcome expectation related to use of the program.

  12. Multi-scale Modelling of Segmentation

    DEFF Research Database (Denmark)

    Hartmann, Martin; Lartillot, Olivier; Toiviainen, Petri

    2016-01-01

    While listening to music, people often unwittingly break down musical pieces into constituent chunks such as verses and choruses. Music segmentation studies have suggested that some consensus regarding boundary perception exists, despite individual differences. However, neither the effects … pieces. In a second experiment on non-real-time segmentation, musicians indicated boundaries and their strength for six examples. Kernel density estimation was used to develop multi-scale segmentation models. Contrary to previous research, no relationship was found between boundary strength and boundary …

  13. The Savannah River Site's Groundwater Monitoring Program

    Energy Technology Data Exchange (ETDEWEB)

    1991-06-18

    This report summarizes the Savannah River Site (SRS) groundwater monitoring program conducted in the fourth quarter of 1990. It includes the analytical data, field data, well activity data, and other documentation for this program, provides a record of the program's activities and rationale, and serves as an official document of the analytical results. The groundwater monitoring program includes the following activities: installation, maintenance, and abandonment of monitoring wells, environmental soil borings, development of the sampling and analytical schedule, collection and analyses of groundwater samples, review of analytical and other data, maintenance of the databases containing groundwater monitoring data, quality assurance (QA) evaluations of laboratory performance, and reports of results to waste-site facility custodians and to the Environmental Protection Section (EPS) of EPD.

  14. Skip segment Hirschsprung disease and Waardenburg syndrome

    Directory of Open Access Journals (Sweden)

    Erica R. Gross

    2015-04-01

    Skip segment Hirschsprung disease describes a segment of ganglionated bowel between two segments of aganglionated bowel. It is a rare phenomenon that is difficult to diagnose. We describe a recent case of skip segment Hirschsprung disease in a neonate with a family history of Waardenburg syndrome and the genetic profile that was identified.

  15. Three-dimensional segmentation and skeletonization to build an airway tree data structure for small animals

    International Nuclear Information System (INIS)

    Chaturvedi, Ashutosh; Lee, Zhenghong

    2005-01-01

    Quantitative analysis of intrathoracic airway tree geometry is important for objective evaluation of bronchial tree structure and function. Currently, there are more human data than small-animal data on airway morphometry. In this study, we implemented a semi-automatic approach to quantitatively describe airway tree geometry by using high-resolution computed tomography (CT) images to build a tree data structure for small animals such as rats and mice. Silicone lung casts of the excised lungs of a canine and a mouse were used for micro-CT imaging of the airway trees. The programming language IDL was used to implement a 3D region-growing threshold algorithm for segmenting the airway volume from the CT data. Subsequently, a fully parallel 3D thinning algorithm was implemented in order to complete the skeletonization of the segmented airways. A tree data structure was then created and saved by parsing through the skeletonized volume using the Python programming language. Pertinent information, such as the length of all airway segments, was stored in the data structure. This approach was shown to be accurate and efficient for up to six generations for the canine lung cast and ten generations for the mouse lung cast.
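
    The first step, 3D region growing with an intensity threshold, can be sketched as a breadth-first search over 6-connected voxels; the seed, threshold, and toy volume below are assumptions (air appears dark in CT, hence the "<= threshold" test).

        from collections import deque
        import numpy as np

        def region_grow(vol, seed, thresh):
            """Collect all voxels 6-connected to the seed with intensity <= thresh."""
            mask = np.zeros(vol.shape, dtype=bool)
            q = deque([seed])
            mask[seed] = True
            while q:
                z, y, x = q.popleft()
                for dz, dy, dx in ((1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)):
                    n = (z + dz, y + dy, x + dx)
                    if all(0 <= n[i] < vol.shape[i] for i in range(3)) \
                            and not mask[n] and vol[n] <= thresh:
                        mask[n] = True
                        q.append(n)
            return mask

        vol = np.full((20, 20, 20), 100); vol[5:15, 9:11, 9:11] = 0   # a dark "airway" tube
        print(region_grow(vol, (10, 10, 10), thresh=10).sum())        # 10*2*2 = 40 voxels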

  16. Segmentation-Based And Segmentation-Free Methods for Spotting Handwritten Arabic Words

    OpenAIRE

    Ball , Gregory R.; Srihari , Sargur N.; Srinivasan , Harish

    2006-01-01

    http://www.suvisoft.com; Given a set of handwritten documents, a common goal is to search for a relevant subset. Attempting to find a query word or image in such a set of documents is called word spotting. Spotting handwritten words in documents written in the Latin alphabet, and more recently in Arabic, has received considerable attention. One issue is generating candidate word regions on a page. Attempting to definitely segment the document into such regions (automatic segmentation) can mee...

  17. Analytical study on model tests of soil-structure interaction

    International Nuclear Information System (INIS)

    Odajima, M.; Suzuki, S.; Akino, K.

    1987-01-01

    Since nuclear power plant (NPP) structures are stiff, heavy, and partly embedded, the behavior of those structures during an earthquake depends on the vibrational characteristics of not only the structure but also the soil. Accordingly, seismic response analyses considering the effects of soil-structure interaction (SSI) are extremely important for the seismic design of NPP structures. Many studies have been conducted on analytical techniques concerning SSI, and various analytical models and approaches have been proposed. Based on these studies, SSI analytical codes (computer programs) for NPP structures have been improved at JINS (Japan Institute of Nuclear Safety), one of the departments of NUPEC (Nuclear Power Engineering Test Center) in Japan. These codes are a soil-spring lumped-mass code (SANLUM), a finite element code (SANSSI), and a thin-layered element code (SANSOL). In proceeding with the improvement of the analytical codes, in-situ large-scale forced vibration SSI tests were performed using models simulating light water reactor buildings, and simulation analyses were performed to verify the codes. This paper presents an analytical study demonstrating the usefulness of the codes.

  18. Strategic analytics: towards fully embedding evidence in healthcare decision-making.

    Science.gov (United States)

    Garay, Jason; Cartagena, Rosario; Esensoy, Ali Vahit; Handa, Kiren; Kane, Eli; Kaw, Neal; Sadat, Somayeh

    2015-01-01

    Cancer Care Ontario (CCO) has implemented multiple information technology solutions and collected health-system data to support its programs. There is now an opportunity to leverage these data and perform advanced end-to-end analytics that inform decisions around improving health-system performance. In 2014, CCO engaged in an extensive assessment of its current data capacity and capability, with the intent to drive increased use of data for evidence-based decision-making. The breadth and volume of data at CCO uniquely place the organization to contribute not only to system-wide operational reporting, but also to more advanced modelling of current and future state system management and planning. In 2012, CCO established a strategic analytics practice to help the agency's programs contextualize and inform key business decisions and to provide support through innovative predictive analytics solutions. This paper describes the organizational structure, services and supporting operations that have enabled progress to date, and discusses the next steps towards the vision of embedding evidence fully into healthcare decision-making. Copyright © 2014 Longwoods Publishing.

  19. Monitoring fish distributions along electrofishing segments

    Science.gov (United States)

    Miranda, Leandro E.

    2014-01-01

    Electrofishing is widely used to monitor fish species composition and relative abundance in streams and lakes. According to standard protocols, multiple segments are selected in a body of water to monitor population relative abundance as the ratio of total catch to total sampling effort. The standard protocol provides an assessment of fish distribution at a macrohabitat scale among segments, but not within segments. An ancillary protocol was developed for assessing fish distribution at a finer scale within electrofishing segments. The ancillary protocol was used to estimate spacing, dispersion, and association of two species along shore segments in two local reservoirs. The added information provided by the ancillary protocol may be useful for assessing fish distribution relative to fish of the same species, to fish of different species, and to environmental or habitat characteristics.

  20. CellSegm - a MATLAB toolbox for high-throughput 3D cell segmentation

    Science.gov (United States)

    2013-01-01

    The application of fluorescence microscopy in cell biology often generates a huge amount of imaging data. Automated whole-cell segmentation of such data enables the detection and analysis of individual cells, where manual delineation is often time consuming or practically not feasible. Furthermore, compared to manual analysis, automation normally has a higher degree of reproducibility. CellSegm, the software presented in this work, is a MATLAB-based command-line software toolbox providing automated whole-cell segmentation of images showing surface-stained cells acquired by fluorescence microscopy. It has options for both fully automated and semi-automated cell segmentation. Major algorithmic steps are: (i) smoothing, (ii) Hessian-based ridge enhancement, (iii) marker-controlled watershed segmentation, and (iv) feature-based classification of cell candidates. Using a wide selection of image recordings and code snippets, we demonstrate that CellSegm has the ability to detect various types of surface-stained cells in 3D. After detection and outlining of individual cells, the cell candidates can be subject to software-based analysis, specified and programmed by the end-user, or they can be analyzed by other software tools. A segmentation of tissue samples with appropriate characteristics is also shown to be resolvable in CellSegm. The command-line interface of CellSegm facilitates scripting of the separate tools, all implemented in MATLAB, offering a high degree of flexibility and tailored workflows for the end-user. The modularity and scripting capabilities of CellSegm enable automated workflows and quantitative analysis of microscopic data, suited for high-throughput image-based screening. PMID:23938087
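
    As a hedged sketch of the four algorithmic steps listed above, the following Python fragment transliterates the pipeline into scikit-image calls; the parameter values and the choice of ridge filter are assumptions, not CellSegm's actual defaults.

      import numpy as np
      from scipy import ndimage as ndi
      from skimage.filters import gaussian, sato, threshold_otsu
      from skimage.segmentation import watershed

      def surface_stain_segmentation(volume: np.ndarray) -> np.ndarray:
          smooth = gaussian(volume, sigma=1.0)                  # (i) smoothing
          # (ii) Hessian-based ridge enhancement of the bright, stained membranes.
          ridges = sato(smooth, black_ridges=False)
          # (iii) marker-controlled watershed: low-ridge interiors seed the basins
          # (the background is recovered as basins of its own).
          markers, _ = ndi.label(ridges < threshold_otsu(ridges))
          labels = watershed(ridges, markers)
          # (iv) feature-based classification of cell candidates would follow here,
          # e.g. filtering the labelled regions on volume and shape descriptors.
          return labels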

  1. Color image Segmentation using automatic thresholding techniques

    International Nuclear Information System (INIS)

    Harrabi, R.; Ben Braiek, E.

    2011-01-01

    In this paper, entropy- and between-class-variance-based thresholding methods for color image segmentation are studied. The maximization of the between-class variance (MVI) and of the entropy (ME) have been used as criterion functions to determine an optimal threshold for segmenting images into nearly homogeneous regions. Segmentation results from the two methods are validated, the segmentation sensitivity for the available test data is evaluated, and a comparative study between these methods in different color spaces is presented. The experimental results demonstrate the superiority of the MVI method for color image segmentation.
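
    For concreteness, the two criteria can be sketched in Python as follows: Otsu's threshold maximizes the between-class variance (the MVI criterion), while Kapur's threshold maximizes the summed entropy of the two classes (the ME criterion); the Kapur implementation below is a plain textbook version applied per color channel, not the authors' code.

      import numpy as np
      from skimage.filters import threshold_otsu   # between-class variance criterion

      def kapur_threshold(channel: np.ndarray) -> int:
          """Entropy-based threshold for one 8-bit channel."""
          hist, _ = np.histogram(channel, bins=256, range=(0, 256))
          p = hist / hist.sum()
          best_t, best_h = 0, -np.inf
          for t in range(1, 255):
              w0, w1 = p[:t].sum(), p[t:].sum()
              if w0 == 0 or w1 == 0:
                  continue
              p0, p1 = p[:t] / w0, p[t:] / w1
              # Summed Shannon entropies of the two classes.
              h = (-np.sum(p0[p0 > 0] * np.log(p0[p0 > 0]))
                   - np.sum(p1[p1 > 0] * np.log(p1[p1 > 0])))
              if h > best_h:
                  best_t, best_h = t, h
          return best_t

      def channel_thresholds(rgb: np.ndarray):
          # One (Otsu, Kapur) threshold pair per color channel.
          return [(threshold_otsu(rgb[..., c]), kapur_threshold(rgb[..., c]))
                  for c in range(rgb.shape[-1])]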

  2. Process Segmentation Typology in Czech Companies

    Directory of Open Access Journals (Sweden)

    Tucek David

    2016-03-01

    This article describes process segmentation typology during business process management implementation in Czech companies. Process typology is important for a manager’s overview of process orientation as well as for a manager’s general understanding of business process management. This article provides insight into a process-oriented organizational structure. The first part analyzes process segmentation typology itself as well as some original results of quantitative research evaluating process segmentation typology in the specific context of Czech company strategies. Widespread data collection was carried out in 2006 and 2013. The analysis of this data showed that managers have more options regarding process segmentation and its selection. In terms of practicality and ease of use, the most frequently used method of process segmentation (managerial, main, and supportive) stems directly from the requirements of ISO 9001. Because of ISO 9001:2015, managers must now apply risk planning in relation to the selection of processes that are subjected to process management activities. It is for this fundamental reason that this article focuses on process segmentation typology.

  3. Edge preserving smoothing and segmentation of 4-D images via transversely isotropic scale-space processing and fingerprint analysis

    International Nuclear Information System (INIS)

    Reutter, Bryan W.; Algazi, V. Ralph; Gullberg, Grant T; Huesman, Ronald H.

    2004-01-01

    Enhancements are described for an approach that unifies edge-preserving smoothing with segmentation of time sequences of volumetric images, based on differential edge detection at multiple spatial and temporal scales. Potential applications of these 4-D methods include segmentation of respiratory-gated positron emission tomography (PET) transmission images to improve the accuracy of attenuation correction for imaging heart and lung lesions, and segmentation of dynamic cardiac single photon emission computed tomography (SPECT) images to facilitate unbiased estimation of time-activity curves and kinetic parameters for left ventricular volumes of interest. Improved segmentation of lung surfaces in simulated respiratory-gated cardiac PET transmission images is achieved with a 4-D edge detection operator composed of edge-preserving 1-D operators applied in various spatial and temporal directions. Smoothing along the axis of a 1-D operator is driven by the structure separation seen in the scale-space fingerprint, rather than by image contrast. Spurious noise structures are reduced with the use of small-scale isotropic smoothing in directions transverse to the 1-D operator axis. Analytic expressions are obtained for directional derivatives of the smoothed, edge-preserved image, and the expressions are used to compose a 4-D operator that detects edges as zero-crossings in the second derivative in the direction of the image intensity gradient. Additional improvement in segmentation is anticipated with the use of multiscale transversely isotropic smoothing and a novel interpolation method that improves the behavior of the directional derivatives. The interpolation method is demonstrated on a simulated 1-D edge, and incorporation of the method into the 4-D algorithm is described.
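
    The core edge-localization idea can be illustrated in one dimension: with a Gaussian scale space, edges are the zero-crossings of the second derivative of the smoothed signal, and tracking those crossings over increasing scales yields the fingerprint mentioned above. The Python sketch below, with illustrative scale values, is a generic rendering of that idea rather than the authors' 4-D operator.

      import numpy as np
      from scipy.ndimage import gaussian_filter1d

      def zero_crossings(signal: np.ndarray, sigma: float) -> np.ndarray:
          # Second derivative of the Gaussian-smoothed signal at scale `sigma`.
          d2 = gaussian_filter1d(signal.astype(float), sigma=sigma, order=2)
          # Indices where the second derivative changes sign, i.e. edge locations.
          s = np.signbit(d2)
          return np.where(s[:-1] != s[1:])[0]

      def fingerprint(signal: np.ndarray, sigmas=(1, 2, 4, 8)):
          # One set of edge positions per scale; zero-crossings that survive
          # across scales indicate structurally significant edges.
          return {s: zero_crossings(signal, s) for s in sigmas}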

  4. Argonne program to assess superconducting stability

    International Nuclear Information System (INIS)

    Wang, S.T.; Turner, L.R.; Huang, Y.C.; Dawson, J.W.; Harrang, J.; Hilal, M.A.; Lieberg, M.; Gonczy, J.D.; Kim, S.H.

    1978-01-01

    To assess superconductor stability, a complete program has been developed to obtain basic information on the effects of local mechanical perturbations on cryostatic stability. An analytical model for computing the transient recovery following a mechanical perturbation has been developed. A test program has been undertaken to develop the data needed to verify the conclusions reached through the analytical studies.

  5. Sport-Specific Training Targeting the Proximal Segments and Throwing Velocity in Collegiate Throwing Athletes

    Science.gov (United States)

    Palmer, Thomas; Uhl, Timothy L.; Howell, Dana; Hewett, Timothy E.; Viele, Kert; Mattacola, Carl G.

    2015-01-01

    Context The ability to generate, absorb, and transmit forces through the proximal segments of the pelvis, spine, and trunk has been proposed to influence sport performance, yet traditional training techniques targeting the proximal segments have had limited success in improving sport-specific performance. Objective To investigate the effects of a traditional endurance-training program and a sport-specific power-training program targeting the muscles that support the proximal segments on throwing velocity. Design Randomized controlled clinical trial. Setting University research laboratory and gymnasium. Patients or Other Participants A total of 46 (age = 20 ± 1.3 years, height = 175.7 ± 8.7 cm) healthy National Collegiate Athletic Association Division III female softball (n = 17) and male baseball (n = 29) players. Intervention(s) Blocked stratification for sex and position was used to randomly assign participants to 1 of 2 training groups for 7 weeks: a traditional endurance-training group (ET group; n = 21) or a power-stability–training group (PS group; n = 25). Main Outcome Measure(s) The change score in peak throwing velocity (km/h) normalized for body weight (BW; kilograms) and the change scores in tests that challenge the muscles of the proximal segments normalized for BW (kilograms). We used 2-tailed independent-samples t tests to compare differences between the change scores. Results The peak throwing velocity (ET group = 0.01 ± 0.1 km/h/kg of BW, PS group = 0.08 ± 0.03 km/h/kg of BW; P < .001) and muscle power outputs for the chop (ET group = 0.22 ± 0.91 W/kg of BW, PS group = 1.3 ± 0.91 W/kg of BW; P < .001) and lift (ET group = 0.59 ± 0.67 W/kg of BW, PS group = 1.4 ± 0.87 W/kg of BW; P < .001) tests were higher at postintervention in the PS group than in the ET group. Conclusions An improvement in throwing velocity occurred simultaneously with improvements in measures of muscular endurance and power after a sport-specific training regimen targeting the proximal segments.

  6. Typology of consumer behavior in times of economic crisis: A segmentation study from Bulgaria

    Directory of Open Access Journals (Sweden)

    Katrandjiev Hristo

    2011-01-01

    This paper presents the second part of the results from a survey-based market research study of Bulgarian households. In the first part of the paper the author analyzes the changes in consumer behavior in times of economic crisis in Bulgaria. Here, the author presents a market segmentation from the point of view of consumer behavior changes in times of economic crisis. Four segments (clusters) were discovered and profiled. The similarities/dissimilarities between clusters are presented through the technique of multidimensional scaling (MDS). The research project was planned, organized, and realized within the Scientific Research Program of the University of National and World Economy, Sofia, Bulgaria.

  7. Algorithms for Cytoplasm Segmentation of Fluorescence Labelled Cells

    Directory of Open Access Journals (Sweden)

    Carolina Wählby

    2002-01-01

    Automatic cell segmentation has various applications in cytometry, and while the nucleus is often very distinct and easy to identify, the cytoplasm provides much more of a challenge. A new combination of image analysis algorithms for the segmentation of cells imaged by fluorescence microscopy is presented. The algorithm consists of an image pre-processing step, a general segmentation and merging step, followed by a segmentation quality measurement. The quality measurement consists of a statistical analysis of a number of shape-descriptive features. Objects whose features differ from those of correctly segmented single cells can be further processed by a splitting step. By statistical analysis we therefore get a feedback system for the separation of clustered cells. After the segmentation is completed, the quality of the final segmentation is evaluated. By training the algorithm on a representative set of training images, the algorithm is made fully automatic for subsequent images created under similar conditions. Automatic cytoplasm segmentation was tested on CHO cells stained with calcein. The fully automatic method showed between 89% and 97% correct segmentation as compared to manual segmentation.
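
    The statistical feedback step can be sketched as follows in Python (the original toolchain is not specified in the abstract); the shape descriptors and thresholds here are illustrative stand-ins for the paper's feature statistics.

      import numpy as np
      from skimage.measure import label, regionprops

      def flag_for_splitting(mask: np.ndarray, max_area: float,
                             min_solidity: float = 0.9):
          """Return labels of objects whose shape suggests a cluster of cells."""
          flagged = []
          for region in regionprops(label(mask)):
              # Clustered cells tend to be larger and less convex than single cells.
              if region.area > max_area or region.solidity < min_solidity:
                  flagged.append(region.label)
          return flagged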

  8. Benchmarking of Remote Sensing Segmentation Methods

    Czech Academy of Sciences Publication Activity Database

    Mikeš, Stanislav; Haindl, Michal; Scarpa, G.; Gaetano, R.

    2015-01-01

    Roč. 8, č. 5 (2015), s. 2240-2248 ISSN 1939-1404 R&D Projects: GA ČR(CZ) GA14-10911S Institutional support: RVO:67985556 Keywords : benchmark * remote sensing segmentation * unsupervised segmentation * supervised segmentation Subject RIV: BD - Theory of Information Impact factor: 2.145, year: 2015 http://library.utia.cas.cz/separaty/2015/RO/haindl-0445995.pdf

  9. Unsupervised Retinal Vessel Segmentation Using Combined Filters.

    Directory of Open Access Journals (Sweden)

    Wendeson S Oliveira

    Image segmentation of retinal blood vessels is a process that can help to predict and diagnose cardiovascular-related diseases, such as hypertension and diabetes, which are known to affect the appearance of the retinal blood vessels. This work proposes an unsupervised method for the segmentation of retinal vessel images using a combined matched filter, Frangi's filter, and a Gabor wavelet filter to enhance the images. The combination of these three filters to improve the segmentation is the main motivation of this work. We investigate two approaches to performing the filter combination: weighted mean and median ranking. Segmentation methods are tested after the vessel enhancement. Enhanced images obtained with median ranking are segmented using a simple threshold criterion. Two segmentation procedures are applied when considering enhanced retinal images obtained using the weighted-mean approach. The first method is based on deformable models, and the second uses fuzzy C-means for the image segmentation. The procedure is evaluated using two public image databases, DRIVE and STARE. The experimental results demonstrate that the proposed methods perform well for vessel segmentation in comparison with state-of-the-art methods.
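
    A minimal Python sketch of the weighted-mean combination, using two of the three filters (Frangi and Gabor) from scikit-image; the weights, the Gabor frequency, and the final Otsu threshold are illustrative assumptions, and the paper's median-ranking variant and matched filter are omitted.

      import numpy as np
      from skimage.filters import frangi, gabor, threshold_otsu

      def segment_vessels(green: np.ndarray, w=(0.5, 0.5)) -> np.ndarray:
          f = frangi(green)                        # vesselness from Hessian analysis
          g_real, _ = gabor(green, frequency=0.2)  # oriented texture response
          g = np.abs(g_real)
          # Normalize each response to [0, 1] before fusing.
          f = (f - f.min()) / (np.ptp(f) + 1e-12)
          g = (g - g.min()) / (np.ptp(g) + 1e-12)
          fused = w[0] * f + w[1] * g              # weighted-mean combination
          return fused > threshold_otsu(fused)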

  10. Automatic segmentation of psoriasis lesions

    Science.gov (United States)

    Ning, Yang; Shi, Chenbo; Wang, Li; Shu, Chang

    2014-10-01

    The automatic segmentation of psoriatic lesions has been widely researched in recent years. It is an important step in computer-aided methods of calculating PASI for the estimation of lesions. Current algorithms can handle only single erythema or deal only with scaling segmentation. In practice, scaling and erythema are often mixed together. In order to segment the lesion area, this paper proposes an algorithm based on random forests with color and texture features. The algorithm has three steps. In the first step, polarized light is applied, based on the skin's Tyndall effect, to eliminate reflections during imaging, and the Lab color space is used to fit human perception. In the second step, a sliding window and its sub-windows are used to extract texture and color features. In this step, an image-roughness feature is defined so that scaling can be easily separated from normal skin. In the end, random forests are used to ensure the generalization ability of the algorithm. This algorithm gives reliable segmentation results even when images have different lighting conditions and skin types. In the data set provided by Union Hospital, more than 90% of images can be segmented accurately.
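
    The classification step might look as follows in Python with scikit-learn; the feature definitions (Lab channel means plus a standard-deviation roughness proxy) and the class labels are assumptions for illustration, not the paper's exact features.

      import numpy as np
      from sklearn.ensemble import RandomForestClassifier

      def patch_features(lab_patch: np.ndarray) -> np.ndarray:
          """Color and texture features for one sliding-window patch in Lab space."""
          color = lab_patch.reshape(-1, 3).mean(axis=0)   # mean L, a, b values
          roughness = lab_patch[..., 0].std()             # simple roughness proxy
          return np.append(color, roughness)

      def train_classifier(X: np.ndarray, y: np.ndarray) -> RandomForestClassifier:
          # X: features of labelled training patches; y: erythema/scaling/skin labels.
          return RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)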

  11. 2005 Annual Synthesis Report, Pallid Sturgeon Population Assessment Program and Associated Fish Community Monitoring for the Missouri River

    Energy Technology Data Exchange (ETDEWEB)

    Oldenburg, Eric W.; Hanrahan, Timothy P.; Harnish, Ryan A.; Bellgraph, Brian J.; Duncan, Joanne P.; Allwardt, Craig H.

    2008-08-12

    Pallid sturgeon, Scaphirhynchus albus, have declined throughout the Missouri River since dam construction and inception of the Bank Stabilization and Navigation Project in 1912. Their decline likely is due to the loss and degradation of their natural habitat as a result of changes in the river’s structure and function, as well as the pallid sturgeon’s inability to adapt to these changes. The U.S. Army Corps of Engineers has been working with state and federal agencies to develop and conduct a Pallid Sturgeon Monitoring and Assessment Program (Program), with the goal of recovering pallid sturgeon populations. The Program has organized the monitoring and assessment efforts into distinct geographic segments, with state and federal resource management agencies possessing primary responsibility for one or more segments. To date, the results from annual monitoring have been reported for individual Program segments. However, monitoring results have not been summarized or evaluated for larger spatial scales encompassing more than one Program segment. This report describes a summary conducted by the Pacific Northwest National Laboratory (PNNL) that synthesizes the 2005 sampling year monitoring results from individual segments.

  12. 2006 Annual Synthesis Report, Pallid Sturgeon Population Assessment Program and Associated Fish Community Monitoring for the Missouri River

    Energy Technology Data Exchange (ETDEWEB)

    Oldenburg, Eric W.; Hanrahan, Timothy P.; Harnish, Ryan A.; Bellgraph, Brian J.; Duncan, Joanne P.; Allwardt, Craig H.

    2008-08-12

    Pallid sturgeon, Scaphirhynchus albus, have declined throughout the Missouri River since dam construction and inception of the Bank Stabilization and Navigation Project in 1912. Their decline likely is due to the loss and degradation of their natural habitat as a result of changes in the river’s structure and function, as well as the pallid sturgeon’s inability to adapt to these changes. The U.S. Army Corps of Engineers has been working with state and federal agencies to develop and conduct a Pallid Sturgeon Monitoring and Assessment Program (Program), with the goal of recovering pallid sturgeon populations. The Program has organized the monitoring and assessment efforts into distinct geographic segments, with state and federal resource management agencies possessing primary responsibility for one or more segments. To date, the results from annual monitoring have been reported for individual Program segments. However, monitoring results have not been summarized or evaluated for larger spatial scales encompassing more than one Program segment. This report describes a summary conducted by the Pacific Northwest National Laboratory (PNNL) that synthesizes the 2006 sampling year monitoring results from individual segments.

  13. A Meta-Analytic Review of School-Based Prevention for Cannabis Use

    Science.gov (United States)

    Porath-Waller, Amy J.; Beasley, Erin; Beirness, Douglas J.

    2010-01-01

    This investigation used meta-analytic techniques to evaluate the effectiveness of school-based prevention programming in reducing cannabis use among youth aged 12 to 19. It summarized the results from 15 studies published in peer-reviewed journals since 1999 and identified features that influenced program effectiveness. The results from the set of…

  14. Flammable gas safety program. Analytical methods development: FY 1994 progress report

    International Nuclear Information System (INIS)

    Campbell, J.A.; Clauss, S.; Grant, K.; Hoopes, V.; Lerner, B.; Lucke, R.; Mong, G.; Rau, J.; Wahl, K.; Steele, R.

    1994-09-01

    This report describes the status of developing analytical methods to account for the organic components in Hanford waste tanks, with particular focus on tanks assigned to the Flammable Gas Watch List. The methods that have been developed are illustrated by their application to samples obtained from Tank 241-SY-101 (Tank 101-SY)

  15. Core components of a comprehensive quality assurance program in anatomic pathology.

    Science.gov (United States)

    Nakhleh, Raouf E

    2009-11-01

    In this article the core components of a comprehensive quality assurance and improvement plan are outlined. Quality anatomic pathology work comes from a focus on accurate, timely, and complete reports. A commitment to continuous quality improvement and a systems approach with persistent effort helps to achieve this end. Departments should have a quality assurance and improvement plan that includes a risk assessment of real and potential problems facing the laboratory. The plan should also list the individuals responsible for carrying out the program, with adequate resources, a defined timetable, and an annual assessment of progress and future directions. Quality assurance monitors should address regulatory requirements and be organized by laboratory division (surgical pathology, cytology, etc.) as well as by five segments (the preanalytic, analytic, and postanalytic phases of the test cycle, turn-around time, and customer satisfaction). Quality assurance data can also be used to evaluate individual pathologists using multiple parameters with peer-group comparison.

  16. Discourse segmentation and the management of multiple tasks in single episodes of air traffic controller-pilot spoken radio communication

    Directory of Open Access Journals (Sweden)

    Paul A. Falzon

    2009-06-01

    Episodes of VHF radio-mediated pilot-controller spoken communication in which multiple tasks are conducted are engendered in and through the skilful deployment and combination, by the parties to the talk, of multiple orders of discourse segmentation. These orders of segmentation are manifest at the levels of transmission design and sequential organisation. Both of these features are analysed from a Conversation Analytic standpoint in order to track their segment-by-segment genesis, development, and completion. From the analysis it emerges that in addition to the serial type of sequential organisations described by Schegloff (1986), there exists an alternative form of organisation that enables tasks to be managed in a quasi-parallel manner, and which affords controllers and pilots a number of practical advantages in the conduct of their radio-mediated service encounters.

  17. Simultaneous minimizing monitor units and number of segments without leaf end abutment for segmental intensity modulated radiation therapy delivery

    International Nuclear Information System (INIS)

    Li Kaile; Dai Jianrong; Ma Lijun

    2004-01-01

    Leaf end abutment is seldom studied when delivering segmental intensity-modulated radiation therapy (IMRT) fields. We developed an efficient leaf sequencing method to eliminate leaf end abutment for segmental IMRT delivery. Our method uses simple matrix and sorting operations to obtain a solution that simultaneously minimizes total monitor units and the number of segments without leaf end abutment between segments. We implemented and demonstrated our method on multiple clinical cases. We compared the results of our method with the results from an exhaustive search method. We found that our solution without leaf end abutment produced results equivalent to the unconstrained solutions in terms of minimum total monitor units and minimum number of leaf segments. We conclude that leaf end abutment fields can be avoided without affecting the efficiency of segmental IMRT delivery. The major strength of our method is its simplicity and high computing speed. This potentially provides a useful means for generating segmental IMRT fields that require high spatial resolution or complex intensity distributions.
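
    For context, the classical result underlying such sequencers can be stated compactly: for a single leaf pair delivering an intensity profile with a unidirectional sweep, the minimum total monitor units equal the sum of the profile's positive increments, and the minimum for a whole fluence map is the maximum of this quantity over its rows. The Python sketch below computes this standard benchmark; it is not the authors' abutment-free algorithm itself.

      import numpy as np

      def min_monitor_units(profile: np.ndarray) -> float:
          """Minimum MU for one leaf-pair row of an intensity map."""
          padded = np.concatenate(([0.0], profile.astype(float)))
          increments = np.diff(padded)
          return increments[increments > 0].sum()   # sum of positive increments

      def min_mu_for_map(fluence: np.ndarray) -> float:
          # Rows are delivered in parallel, so the slowest row sets the total MU.
          return max(min_monitor_units(row) for row in fluence)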

  18. Hierarchical image segmentation for learning object priors

    Energy Technology Data Exchange (ETDEWEB)

    Prasad, Lakshman [Los Alamos National Laboratory; Yang, Xingwei [TEMPLE UNIV.; Latecki, Longin J [TEMPLE UNIV.; Li, Nan [TEMPLE UNIV.

    2010-11-10

    The proposed segmentation approach naturally combines experience-based and image-based information. The experience-based information is obtained by training a classifier for each object class. For a given test image, the result of each classifier is represented as a probability map. The final segmentation is obtained with a hierarchical image segmentation algorithm that considers both the probability maps and image features such as color and edge strength. We also utilize the image region hierarchy to obtain not only local but also semi-global features as input to the classifiers. Moreover, to get robust probability maps, we take into account region context information by averaging the probability maps over different levels of the hierarchical segmentation algorithm. The obtained segmentation results are superior to those of state-of-the-art supervised image segmentation algorithms.
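
    The context-averaging step reduces to a simple operation once the per-level probability maps are stacked; the array layout below (levels x classes x height x width) is an assumed convention for illustration.

      import numpy as np

      def average_over_hierarchy(prob_maps: np.ndarray) -> np.ndarray:
          """prob_maps: (n_levels, n_classes, H, W) -> robust maps (n_classes, H, W)."""
          return prob_maps.mean(axis=0)

      def hard_labels(robust_maps: np.ndarray) -> np.ndarray:
          # Assign each pixel the class with the highest averaged probability.
          return robust_maps.argmax(axis=0)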

  19. Effect of the average soft-segment length on the morphology and properties of segmented polyurethane nanocomposites

    International Nuclear Information System (INIS)

    Finnigan, Bradley; Halley, Peter; Jack, Kevin; McDowell, Alasdair; Truss, Rowan; Casey, Phil; Knott, Robert; Martin, Darren

    2006-01-01

    Two organically modified layered silicates (with small and large diameters) were incorporated into three segmented polyurethanes with various degrees of microphase separation. Microphase separation increased with the molecular weight of the poly(hexamethylene oxide) soft segment. The molecular weight of the soft segment did not influence the amount of polyurethane intercalating the interlayer spacing. Small-angle neutron scattering and differential scanning calorimetry data indicated that the layered silicates did not affect the microphase morphology of any host polymer, regardless of the particle diameter. The stiffness enhancement on filler addition increased as the microphase separation of the polyurethane decreased, presumably because a greater number of urethane linkages were available to interact with the filler. For comparison, the small nanofiller was introduced into a polyurethane with a poly(tetramethylene oxide) soft segment, and a significant increase in the tensile strength and a sharper upturn in the stress-strain curve resulted. No such improvement occurred in the host polymers with poly(hexamethylene oxide) soft segments. It is proposed that the nanocomposite containing the more hydrophilic and mobile poly(tetramethylene oxide) soft segment is capable of greater secondary bonding between the polyurethane chains and the organosilicate surface, resulting in improved stress transfer to the filler and reduced molecular slippage.

  20. Mastering Search Analytics Measuring SEO, SEM and Site Search

    CERN Document Server

    Chaters, Brent

    2011-01-01

    Many companies still approach Search Engine Optimization (SEO) and paid search as separate initiatives. This in-depth guide shows you how to use these programs as part of a comprehensive strategy: not just to improve your site's search rankings, but to attract the right people and increase your conversion rate. Learn how to measure, test, analyze, and interpret all of your search data with a wide array of analytic tools. Gain the knowledge you need to determine the strategy's return on investment. Ideal for search specialists, webmasters, and search marketing managers, Mastering Search Analyt

  1. Essays in international market segmentation

    NARCIS (Netherlands)

    Hofstede, ter F.

    1999-01-01

    The primary objective of this thesis is to develop and validate new methodologies to improve the effectiveness of international segmentation strategies. The current status of international market segmentation research is reviewed in an introductory chapter, which provided a number of

  2. Analytical approaches used in stream benthic macroinvertebrate biomonitoring programs of State agencies in the United States

    Science.gov (United States)

    Carter, James L.; Resh, Vincent H.

    2013-01-01

    Biomonitoring programs based on benthic macroinvertebrates are well-established worldwide. Their value, however, depends on the appropriateness of the analytical techniques used. All United States State, benthic macroinvertebrate biomonitoring programs were surveyed regarding the purposes of their programs, quality-assurance and quality-control procedures used, habitat and water-chemistry data collected, treatment of macroinvertebrate data prior to analysis, statistical methods used, and data-storage considerations. State regulatory mandates (59 percent of programs), biotic index development (17 percent), and Federal requirements (15 percent) were the most frequently reported purposes of State programs, with the specific tasks of satisfying the requirements for 305b/303d reports (89 percent), establishment and monitoring of total maximum daily loads, and developing biocriteria being the purposes most often mentioned. Most states establish reference sites (81 percent), but classify them using State-specific methods. The most often used technique for determining the appropriateness of a reference site was Best Professional Judgment (86 percent of these states). Macroinvertebrate samples are almost always collected by using a D-frame net, and duplicate samples are collected from approximately 10 percent of sites for quality assurance and quality control purposes. Most programs have macroinvertebrate samples processed by contractors (53 percent) and have identifications confirmed by a second taxonomist (85 percent). All States collect habitat data, with most using the Rapid Bioassessment Protocol visual-assessment approach, which requires ~1 h/site. Dissolved oxygen, pH, and conductivity are measured in more than 90 percent of programs. Wide variation exists in which taxa are excluded from analyses and the level of taxonomic resolution used. Species traits, such as functional feeding groups, are commonly used (96 percent), as are tolerance values for organic pollution

  3. Analytical Chemistry Division annual progress report for period ending December 31, 1989

    Energy Technology Data Exchange (ETDEWEB)

    1990-04-01

    The Analytical Chemistry Division of Oak Ridge National Laboratory (ORNL) is a large and diversified organization. As such, it serves a multitude of functions for a clientele that exists both in and outside of ORNL. These functions fall into the following general categories: Analytical Research, Development and Implementation; Programmatic Research, Development, and Utilization; and Technical Support. The Analytical Chemistry Division is organized into four major sections, each of which may carry out any of the three types of work mentioned above. Chapters 1 through 4 of this report highlight progress within the four sections during the period January 1 to December 31, 1989. A brief discussion of the division's role in an especially important environmental program is given in Chapter 5. Information about quality assurance, safety, and training programs is presented in Chapter 6, along with a tabulation of analyses rendered. Publications, oral presentations, professional activities, educational programs, and seminars are cited in Chapters 7 and 8. Approximately 69 articles, 41 proceedings, and 31 reports were published, and 151 oral presentations were given during this reporting period. Some 308,981 determinations were performed.

  4. Roentgenological diagnosis of central segmental lung cancer

    International Nuclear Information System (INIS)

    Gurevich, L.A.; Fedchenko, G.G.

    1984-01-01

    Based on an analysis of the results of the clinico-roentgenological examination of 268 patients, the roentgenological semiotics of segmental lung cancer are presented. Some peculiarities of the X-ray picture of cancer in different segments of the lungs were revealed, depending on tumor site and growth type. For the syndrome of segmental darkening, comprehensive X-ray methods are proposed, with tomography of the segmental bronchi as the chief method.

  5. Method of manufacturing a large-area segmented photovoltaic module

    Science.gov (United States)

    Lenox, Carl

    2013-11-05

    One embodiment of the invention relates to a segmented photovoltaic (PV) module which is manufactured from laminate segments. The segmented PV module includes rectangular-shaped laminate segments formed from rectangular-shaped PV laminates and further includes non-rectangular-shaped laminate segments formed from rectangular-shaped and approximately-triangular-shaped PV laminates. The laminate segments are mechanically joined and electrically interconnected to form the segmented module. Another embodiment relates to a method of manufacturing a large-area segmented photovoltaic module from laminate segments of various shapes. Other embodiments relate to processes for providing a photovoltaic array for installation at a site. Other embodiments and features are also disclosed.

  6. Study of the morphology exhibited by linear segmented polyurethanes

    International Nuclear Information System (INIS)

    Pereira, I.M.; Orefice, R.L.

    2009-01-01

    Five series of segmented polyurethanes with different hard-segment contents were prepared by the prepolymer mixing method. The nano-morphology of the obtained polyurethanes and their microphase separation were investigated by infrared spectroscopy, modulated differential scanning calorimetry, and small-angle X-ray scattering. Although highly hydrogen-bonded hard segments were formed, high hard-segment contents promoted phase mixing and decreased the chain mobility, reducing hard-segment domain precipitation and soft-segment crystallization. The applied techniques were able to show that the hard-segment content and the hard-segment interactions were the two controlling factors determining the structure of segmented polyurethanes. (author)

  7. The Savannah River Site's Groundwater Monitoring Program. Fourth quarter, 1990

    Energy Technology Data Exchange (ETDEWEB)

    1991-06-18

    This report summarizes the Savannah River Site (SRS) groundwater monitoring program conducted in the fourth quarter of 1990. It includes the analytical data, field data, well activity data, and other documentation for this program, provides a record of the program's activities and rationale, and serves as an official document of the analytical results. The groundwater monitoring program includes the following activities: installation, maintenance, and abandonment of monitoring wells; environmental soil borings; development of the sampling and analytical schedule; collection and analyses of groundwater samples; review of analytical and other data; maintenance of the databases containing groundwater monitoring data; quality assurance (QA) evaluations of laboratory performance; and reports of results to waste-site facility custodians and to the Environmental Protection Section (EPS) of EPD.

  8. Desafios da química analítica frente às necessidades da indústria farmacêutica Challenges of analytical chemistry in face of the needs of the pharmaceutical industry

    Directory of Open Access Journals (Sweden)

    Alberto dos Santos Pereira

    2005-12-01

    The development of liquid chromatography-mass spectrometry (LC-MS) techniques in the last few decades has made possible the analysis of trace amounts of analytes in complex matrices. With LC, the analytes of interest can be separated from each other as well as from the interfering matrix, after which they can be reliably identified thanks to the sensitivity and specificity of MS. LC-MS has become an irreplaceable tool for many applications, ranging from the analysis of proteins or pharmaceuticals in biological fluids to the analysis of toxic substances in environmental samples. In different segments of Brazilian industry mass spectrometry plays an important role, e.g., in the pharmaceutical industry in the development of generic formulations, contributing to the growth of industry and to social inclusion. However, Brazilian chemists do not yet have an effective role in this new segment of analytical chemistry in Brazil. The present paper shows the current scenario for mass spectrometry in the pharmaceutical industry, emphasizing the need to revise graduation courses to meet the needs of this growing market.

  9. Smart markers for watershed-based cell segmentation.

    Directory of Open Access Journals (Sweden)

    Can Fahrettin Koyuncu

    Automated cell imaging systems facilitate fast and reliable analysis of biological events at the cellular level. In these systems, the first step is usually cell segmentation that greatly affects the success of the subsequent system steps. On the other hand, similar to other image segmentation problems, cell segmentation is an ill-posed problem that typically necessitates the use of domain-specific knowledge to obtain successful segmentations even by human subjects. The approaches that can incorporate this knowledge into their segmentation algorithms have potential to greatly improve segmentation results. In this work, we propose a new approach for the effective segmentation of live cells from phase contrast microscopy. This approach introduces a new set of "smart markers" for a marker-controlled watershed algorithm, for which the identification of its markers is critical. The proposed approach relies on using domain-specific knowledge, in the form of visual characteristics of the cells, to define the markers. We evaluate our approach on a total of 1,954 cells. The experimental results demonstrate that this approach, which uses the proposed definition of smart markers, is quite effective in identifying better markers compared to its counterparts. This will, in turn, be effective in improving the segmentation performance of a marker-controlled watershed algorithm.

  10. Smart markers for watershed-based cell segmentation.

    Science.gov (United States)

    Koyuncu, Can Fahrettin; Arslan, Salim; Durmaz, Irem; Cetin-Atalay, Rengul; Gunduz-Demir, Cigdem

    2012-01-01

    Automated cell imaging systems facilitate fast and reliable analysis of biological events at the cellular level. In these systems, the first step is usually cell segmentation that greatly affects the success of the subsequent system steps. On the other hand, similar to other image segmentation problems, cell segmentation is an ill-posed problem that typically necessitates the use of domain-specific knowledge to obtain successful segmentations even by human subjects. The approaches that can incorporate this knowledge into their segmentation algorithms have potential to greatly improve segmentation results. In this work, we propose a new approach for the effective segmentation of live cells from phase contrast microscopy. This approach introduces a new set of "smart markers" for a marker-controlled watershed algorithm, for which the identification of its markers is critical. The proposed approach relies on using domain-specific knowledge, in the form of visual characteristics of the cells, to define the markers. We evaluate our approach on a total of 1,954 cells. The experimental results demonstrate that this approach, which uses the proposed definition of smart markers, is quite effective in identifying better markers compared to its counterparts. This will, in turn, be effective in improving the segmentation performance of a marker-controlled watershed algorithm.
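
    To make the role of the markers concrete, here is a generic marker-controlled watershed in Python/scikit-image, seeded from distance-transform peaks; the paper's contribution is precisely to replace such generic markers with "smart markers" derived from the cells' visual characteristics, which this sketch does not implement.

      import numpy as np
      from scipy import ndimage as ndi
      from skimage.feature import peak_local_max
      from skimage.segmentation import watershed

      def watershed_with_markers(binary_mask: np.ndarray) -> np.ndarray:
          distance = ndi.distance_transform_edt(binary_mask)
          # Each local maximum of the distance map seeds one catchment basin.
          peaks = peak_local_max(distance, min_distance=5)
          markers = np.zeros(binary_mask.shape, dtype=int)
          markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
          # Flood the inverted distance map from the markers, within the mask.
          return watershed(-distance, markers, mask=binary_mask)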

  11. An unsupervised strategy for biomedical image segmentation

    Directory of Open Access Journals (Sweden)

    Roberto Rodríguez

    2010-09-01

    Many segmentation techniques have been published, and some of them have been widely used in different application problems. Most of these segmentation techniques have been motivated by specific application purposes. Unsupervised methods, which do not assume any prior scene knowledge that can be learned to help the segmentation process, are obviously more challenging than supervised ones. In this paper, we present an unsupervised strategy for biomedical image segmentation using an algorithm based on recursively applying mean shift filtering, where entropy is used as a stopping criterion. This strategy is proven on many real images, and a comparison is carried out with manual segmentation. With the proposed strategy, errors of less than 20% for false positives and 0% for false negatives are obtained.
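
    A minimal sketch of the recursive strategy, assuming OpenCV's pyramid mean shift filter as the filtering step and scikit-image's Shannon entropy as the stopping measure; the bandwidths and tolerance are illustrative.

      import cv2
      import numpy as np
      from skimage.measure import shannon_entropy

      def recursive_mean_shift(bgr: np.ndarray, sp=10, sr=20,
                               tol=1e-3, max_iter=20) -> np.ndarray:
          """Apply mean shift filtering until the image entropy stabilizes."""
          current = bgr
          prev_h = shannon_entropy(current)
          for _ in range(max_iter):
              current = cv2.pyrMeanShiftFiltering(current, sp, sr)
              h = shannon_entropy(current)
              if abs(prev_h - h) < tol:   # entropy as the stopping criterion
                  break
              prev_h = h
          return current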

  12. MOVING WINDOW SEGMENTATION FRAMEWORK FOR POINT CLOUDS

    Directory of Open Access Journals (Sweden)

    G. Sithole

    2012-07-01

    As lidar point clouds become larger, streamed processing becomes more attractive. This paper presents a framework for the streamed segmentation of point clouds, with the intention of segmenting unstructured point clouds in real time. The framework is composed of two main components. The first component segments points within a window shifting over the point cloud. The second component stitches the segments within the windows together. In this fashion a point cloud can be streamed through these two components in sequence, thus producing a segmentation. The algorithm has been tested on an airborne lidar point cloud, and some results on the performance of the framework are presented.
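
    A compact sketch of the two components in Python, assuming DBSCAN as the in-window segmenter: points are visited in overlapping windows along one axis, each window is segmented, and labels are stitched by reusing any label already assigned to points shared with the previous window. Window width, overlap, and clustering parameters are illustrative assumptions.

      import numpy as np
      from sklearn.cluster import DBSCAN

      def segment_stream(points: np.ndarray, width=10.0, overlap=2.0) -> np.ndarray:
          labels = -np.ones(len(points), dtype=int)
          next_label, x = 0, points[:, 0]
          start = x.min()
          while start < x.max():
              window = np.where((x >= start) & (x < start + width))[0]
              if len(window):
                  local = DBSCAN(eps=0.5, min_samples=5).fit_predict(points[window])
                  for seg in np.unique(local[local >= 0]):
                      idx = window[local == seg]
                      known = labels[idx][labels[idx] >= 0]
                      if len(known):        # stitch: reuse a label seen in the overlap
                          labels[idx] = np.bincount(known).argmax()
                      else:                 # otherwise open a new segment
                          labels[idx] = next_label
                          next_label += 1
              start += width - overlap
          return labels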

  13. Review of segmentation process in consumer markets

    OpenAIRE

    Veronika Jadczaková

    2013-01-01

    Although there has been considerable debate on market segmentation over five decades, attention has mostly been devoted to single stages of the segmentation process. In doing so, stages such as segmentation base selection or segment profiling have been heavily covered in the extant literature, whereas stages such as implementation of the marketing strategy or market definition have received comparably less interest. Capitalizing on this shortcoming, this paper strives to close the gap and provide each step...

  14. SU-D-206-03: Segmentation Assisted Fast Iterative Reconstruction Method for Cone-Beam CT

    International Nuclear Information System (INIS)

    Wu, P; Mao, T; Gong, S; Wang, J; Niu, T; Sheng, K; Xie, Y

    2016-01-01

    Purpose: Total Variation (TV) based iterative reconstruction (IR) methods enable accurate CT image reconstruction from low-dose measurements with sparse projection acquisition, due to the sparsifiable nature of most CT images under the gradient operator. However, conventional solutions require a large number of iterations to generate a decent reconstructed image. One major reason is that the expected piecewise-constant property is not taken into consideration at the optimization starting point. In this work, we propose an iterative reconstruction method for cone-beam CT (CBCT) that uses image segmentation to guide the optimization path more efficiently on the regularization term at the beginning of the optimization trajectory. Methods: Our method applies the general knowledge that one tissue component in a CT image contains a relatively uniform distribution of CT numbers. This knowledge is incorporated into the proposed reconstruction by using an image segmentation technique to generate a piecewise-constant template from the first-pass, low-quality CT image reconstructed with an analytical algorithm. The template image is applied as an initial value in the optimization process. Results: The proposed method is evaluated on the Shepp-Logan phantom at low and high noise levels, and on a head patient. The number of iterations is reduced by about 40% overall. Moreover, our proposed method tends to generate a smoother reconstructed image with the same TV value. Conclusion: We propose a computationally efficient iterative reconstruction method for CBCT imaging. Our method achieves a better optimization trajectory and a faster convergence behavior. It does not rely on prior information and can be readily incorporated into existing iterative reconstruction frameworks. Our method is thus practical and attractive as a general solution to CBCT iterative reconstruction. This work is supported by the Zhejiang Provincial Natural Science Foundation of China (Grant No. LR16F010001), National High-tech R

  15. SU-D-206-03: Segmentation Assisted Fast Iterative Reconstruction Method for Cone-Beam CT

    Energy Technology Data Exchange (ETDEWEB)

    Wu, P; Mao, T; Gong, S; Wang, J; Niu, T [Sir Run Run Shaw Hospital, Zhejiang University School of Medicine, Institute of Translational Medicine, Zhejiang University, Hangzhou, Zhejiang (China); Sheng, K [Department of Radiation Oncology, University of California, Los Angeles, Los Angeles, CA (United States); Xie, Y [Institute of Biomedical and Health Engineering, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, Guangdong (China)

    2016-06-15

    Purpose: Total Variation (TV) based iterative reconstruction (IR) methods enable accurate CT image reconstruction from low-dose measurements with sparse projection acquisition, due to the sparsifiable nature of most CT images under the gradient operator. However, conventional solutions require a large number of iterations to generate a decent reconstructed image. One major reason is that the expected piecewise-constant property is not taken into consideration at the optimization starting point. In this work, we propose an iterative reconstruction method for cone-beam CT (CBCT) that uses image segmentation to guide the optimization path more efficiently on the regularization term at the beginning of the optimization trajectory. Methods: Our method applies the general knowledge that one tissue component in a CT image contains a relatively uniform distribution of CT numbers. This knowledge is incorporated into the proposed reconstruction by using an image segmentation technique to generate a piecewise-constant template from the first-pass, low-quality CT image reconstructed with an analytical algorithm. The template image is applied as an initial value in the optimization process. Results: The proposed method is evaluated on the Shepp-Logan phantom at low and high noise levels, and on a head patient. The number of iterations is reduced by about 40% overall. Moreover, our proposed method tends to generate a smoother reconstructed image with the same TV value. Conclusion: We propose a computationally efficient iterative reconstruction method for CBCT imaging. Our method achieves a better optimization trajectory and a faster convergence behavior. It does not rely on prior information and can be readily incorporated into existing iterative reconstruction frameworks. Our method is thus practical and attractive as a general solution to CBCT iterative reconstruction. This work is supported by the Zhejiang Provincial Natural Science Foundation of China (Grant No. LR16F010001), National High-tech R
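
    The template construction can be sketched in a few lines of Python: the first-pass analytical reconstruction is clustered into a small number of tissue classes, and every voxel is replaced by its class mean to form the piecewise-constant initial image. The clustering choice (k-means) and the number of classes are illustrative assumptions, not necessarily the authors' segmentation technique.

      import numpy as np
      from sklearn.cluster import KMeans

      def piecewise_constant_template(first_pass: np.ndarray, n_classes: int = 4):
          """Build the piecewise-constant initializer from a first-pass image."""
          flat = first_pass.reshape(-1, 1)
          km = KMeans(n_clusters=n_classes, n_init=10, random_state=0).fit(flat)
          # Replace every voxel by the mean CT number of its tissue class.
          return km.cluster_centers_[km.labels_].reshape(first_pass.shape)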

  16. Analysis of computer programming languages

    International Nuclear Information System (INIS)

    Risset, Claude Alain

    1967-01-01

    This research thesis aims at identifying methods of syntax analysis which can be used for computer programming languages, while setting aside the computer devices that influence the choice of the programming language and the methods of analysis and compilation. In the first part, the author proposes attempts at a formalization of Chomsky grammar languages. In the second part, he studies analytical grammars, and then studies a compiler or analytic grammar for the Fortran language.

  17. Accelerating SPARQL Queries and Analytics on RDF Data

    KAUST Repository

    Al-Harbi, Razen

    2016-11-09

    proposed. The framework, named SPARTex, bridges the gap between RDF and graph processing. To do so, SPARTex: (i) implements a generic SPARQL operator as a vertex-centric program. The operator is coupled with an optimizer that generates efficient execution plans. (ii) It allows SPARQL to invoke vertex-centric programs as stored procedures. Finally, (iii) it provides a unified in-memory data store that allows the persistence of intermediate results. Consequently, SPARTex can efficiently support RDF analytical tasks consisting of complex pipelines of operators.

  18. IFRS 8 – OPERATING SEGMENTS

    Directory of Open Access Journals (Sweden)

    BOCHIS LEONICA

    2009-05-01

    Segment reporting in accordance with IFRS 8 will be mandatory for annual financial statements covering periods beginning on or after 1 January 2009. The standard replaces IAS 14, Segment Reporting, from that date. The objective of IFRS 8 is to require

  19. Analytical and experimental investigations of magnetohydrodynamic flows near the entrance to a strong magnetic field

    International Nuclear Information System (INIS)

    Picologlou, B.F.; Reed, C.B.; Dauzvardis, P.V.; Walker, J.S.

    1986-01-01

    A program of analytical and experimental investigations of MHD flows has been established at Argonne National Laboratory (ANL) within the framework of the Blanket Technology Program. An experimental facility for such investigations has been built and is being operated at ANL. The investigations carried out on the Argonne Liquid-Metal engineering EXperiment (ALEX) are complemented by analysis carried out at the University of Illinois. The first phase of the experimental program is devoted to investigations of well-defined cases for which analytical solutions exist. Such testing will allow validation of, and increased confidence in, the theory. Because analytical solutions exist for only a few cases, which do not cover the entire range of anticipated flow behavior, confining testing to these cases would not be an adequate validation of the theory. For this reason, this phase involves testing and a companion analytical effort aimed at obtaining solutions for a broad range of cases which, although simple in geometry, are believed to encompass the range of flow phenomena relevant to fusion. This parallel approach is necessary so that analysis will guide and help plan the experiments, while the experimental results will provide the information needed to validate and/or refine the analysis.

  20. Analytical techniques for the determination of radiochemical purity of radiopharmaceuticals prepared from kits

    International Nuclear Information System (INIS)

    McLean, J.R.; Rockwell, L.J.; Welsh, W.J.

    1977-01-01

    The evaluation of efficacy of commercially available kits used for the preparation of radiopharmaceuticals is one aspect of the Radiation Protection Bureau's radiopharmaceutical quality control program. This report describes some of the analytical methodology employed in the program. The techniques may be of interest to hospital radiopharmacy personnel as many of the tests can be performed rapidly and with a minimum of special equipment, thus enabling the confirmation of radiopharmaceutical purity prior to patient administration. Manufacturers of kits may also be interested in learning of the analytical methods used in the assessment of their products. (auth)

  1. The Hierarchy of Segment Reports

    Directory of Open Access Journals (Sweden)

    Danilo Dorović

    2015-05-01

    The article presents an attempt to find the connection between reports created for managers responsible for different business segments. For this purpose, a hierarchy of business reporting segments is proposed. This can lead to a better understanding of expenses that fall under the common responsibility of more than one manager, since these expenses should appear in more than one report. A cost structure defined along the business-segment hierarchy can be established, providing a new and unusual but relevant cost structure for management. Both could potentially bring new information benefits for management in the context of profit reporting.

  2. Segmental dilatation of the ileum

    Directory of Open Access Journals (Sweden)

    Tune-Yie Shih

    2017-01-01

    A 2-year-old boy was sent to the emergency department with the chief complaint of abdominal pain for 1 day. He had just been discharged from the pediatric ward with the diagnosis of mycoplasmal pneumonia and paralytic ileus. After initial examinations and radiographic investigations, midgut volvulus was suspected. An emergency laparotomy was performed. Segmental dilatation of the ileum with volvulus was found. The operative procedure was resection of the dilated ileal segment with anastomosis. The postoperative recovery was uneventful. This unique abnormality of the gastrointestinal tract, segmental dilatation of the ileum, is described in detail and the literature is reviewed.

  3. Techniques on semiautomatic segmentation using the Adobe Photoshop

    Science.gov (United States)

    Park, Jin Seo; Chung, Min Suk; Hwang, Sung Bae

    2005-04-01

    The purpose of this research is to enable anybody to semiautomatically segment the anatomical structures in MRIs, CTs, and other medical images on a personal computer. The segmented images are used for making three-dimensional images, which are helpful in medical education and research. To achieve this purpose, the following trials were performed. The entire body of a volunteer was MR scanned to make 557 MRIs, which were transferred to a personal computer. In Adobe Photoshop, contours of 19 anatomical structures in the MRIs were semiautomatically drawn using the MAGNETIC LASSO TOOL and successively corrected manually using either the LASSO TOOL or the DIRECT SELECTION TOOL to make 557 segmented images. In a like manner, 11 anatomical structures in the 8,500 anatomical images were segmented. Also, 12 brain and 10 heart anatomical structures in anatomical images were segmented. Proper segmentation was verified by making and examining the coronal, sagittal, and three-dimensional images from the segmented images. During semiautomatic segmentation in Adobe Photoshop, a suitable algorithm could be used, the extent of automation could be regulated, a convenient user interface could be used, and software bugs rarely occurred. The techniques of semiautomatic segmentation using Adobe Photoshop are expected to be widely used for segmentation of anatomical structures in various medical images.

  4. Number of Clusters and the Quality of Hybrid Predictive Models in Analytical CRM

    Directory of Open Access Journals (Sweden)

    Łapczyński Mariusz

    2014-08-01

    Making more accurate marketing decisions requires managers to build effective predictive models. Typically, these models specify the probability of a customer belonging to a particular category, group, or segment. The analytical CRM categories refer to customers interested in starting cooperation with the company (acquisition models), customers who purchase additional products (cross- and up-sell models), or customers intending to resign from the cooperation (churn models). When building predictive models, researchers use analytical tools from various disciplines with an emphasis on their best performance. This article attempts to build a hybrid predictive model combining decision trees (the C&RT algorithm) and cluster analysis (k-means). During the experiments, five different cluster validity indices and eight datasets were used. The performance of the models was evaluated using popular measures such as accuracy, precision, recall, G-mean, F-measure, and lift in the first and second deciles. The authors tried to find a connection between the number of clusters and model quality.
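
    An illustrative reconstruction of the hybrid idea in Python with scikit-learn: observations are first clustered with k-means, a separate C&RT-style decision tree is trained within each cluster, and prediction routes a new observation through its nearest cluster; the hyperparameters are assumptions.

      import numpy as np
      from sklearn.cluster import KMeans
      from sklearn.tree import DecisionTreeClassifier

      class HybridModel:
          """k-means clustering followed by one decision tree per cluster."""

          def __init__(self, n_clusters: int = 4):
              self.km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0)
              self.trees = {}

          def fit(self, X: np.ndarray, y: np.ndarray):
              clusters = self.km.fit_predict(X)
              for c in np.unique(clusters):
                  self.trees[c] = DecisionTreeClassifier(random_state=0).fit(
                      X[clusters == c], y[clusters == c])
              return self

          def predict(self, X: np.ndarray) -> np.ndarray:
              clusters = self.km.predict(X)
              return np.array([self.trees[c].predict(row.reshape(1, -1))[0]
                               for c, row in zip(clusters, X)])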

  5. The Savannah River Site's Groundwater Monitoring Program

    Energy Technology Data Exchange (ETDEWEB)

    1992-08-03

    This report summarizes the Savannah River Site (SRS) groundwater monitoring program conducted during the first quarter of 1992. It includes the analytical data, field data, data review, quality control, and other documentation for this program; provides a record of the program's activities; and serves as an official document of the analytical results.

  6. CT identification of bronchopulmonary segments: 50 normal subjects

    International Nuclear Information System (INIS)

    Osbourne, D.; Vock, P.; Godwin, J.D.; Silverman, P.M.

    1984-01-01

    A systematic evaluation of the fissures, segmental bronchi and arteries, bronchopulmonary segments, and peripheral pulmonary parenchyma was made from computed tomographic (CT) scans of 50 patients with normal chest radiographs. Seventy percent of the segmental bronchi and 76% of the segmental arteries were identified. Arteries could be traced to their sixth- and seventh-order branches; their orientation to the plane of the CT section allowed gross identification and localization of bronchopulmonary segments

  7. Segmentation of liver tumors on CT images

    International Nuclear Information System (INIS)

    Pescia, D.

    2011-01-01

    This thesis is dedicated to the 3D segmentation of liver tumors in CT images. This is a task of great clinical interest since it allows physicians to benefit from reproducible and reliable methods for segmenting such lesions. Accurate segmentation would indeed help them during the evaluation of the lesions, the choice of treatment, and treatment planning. Such a complex segmentation task must cope with three main scientific challenges: (i) the highly variable shape of the structures being sought, (ii) their similarity in appearance to the surrounding tissue, and (iii) the low signal-to-noise ratio observed in these images. This problem is addressed in a clinical context through a two-step approach, consisting of segmenting the entire liver envelope before segmenting the tumors present within it. We begin by proposing an atlas-based approach for computing pathological liver envelopes. Images are first pre-processed to compute envelopes that wrap around binary masks, in an attempt to obtain liver envelopes from an estimated segmentation of healthy liver parenchyma. A new statistical atlas is then introduced and used for segmentation through its diffeomorphic registration to the new image. This segmentation is achieved through the combination of image-matching costs as well as spatial and appearance priors, using a multi-scale approach with MRFs. The second step of our approach is dedicated to the segmentation of the lesions contained within the envelopes, using a combination of machine learning techniques and graph-based methods. First, an appropriate feature space is considered, involving texture descriptors determined through filtering at various scales and orientations. Then, state-of-the-art machine learning techniques are used to determine the most relevant features, as well as the hyperplane that separates the feature space of tumoral voxels from that of healthy tissue. Segmentation is then performed with the graph-based methods.
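
    The second step can be sketched under strong simplifying assumptions: a small multi-scale, multi-orientation filter bank supplies per-voxel texture descriptors, and a linear SVM (standing in for the thesis's feature-selection and hyperplane-learning machinery) separates tumoral from healthy voxels. The synthetic 2D slice, labels, and filter choices below are illustrative, not the thesis's actual data or features.

    import numpy as np
    from scipy import ndimage as ndi
    from sklearn.svm import LinearSVC

    rng = np.random.default_rng(0)
    img = rng.normal(100.0, 5.0, (128, 128))              # "healthy" parenchyma
    img[40:80, 50:90] = rng.normal(80.0, 12.0, (40, 40))  # darker, noisier lesion
    truth = np.zeros(img.shape, bool)
    truth[40:80, 50:90] = True

    def feature_stack(im):
        """Per-pixel texture features: smoothed intensities plus directional
        derivatives at several scales (a crude stand-in for a filter bank)."""
        feats = []
        for s in (1, 2, 4):                        # scales
            feats.append(ndi.gaussian_filter(im, s))
            for order in ((0, 1), (1, 0)):         # two orientations
                feats.append(ndi.gaussian_filter(im, s, order=order))
        return np.stack([f.ravel() for f in feats], axis=1)

    X = feature_stack(img)
    y = truth.ravel()

    # Train on a random voxel subset, then classify every voxel.
    idx = rng.choice(len(y), size=3000, replace=False)
    clf = LinearSVC(C=1.0, max_iter=5000).fit(X[idx], y[idx])
    pred = clf.predict(X).reshape(img.shape).astype(bool)

    dice = 2 * (pred & truth).sum() / (pred.sum() + truth.sum())
    print("Dice vs. synthetic ground truth:", round(float(dice), 3))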

  8. Segmentation of knee injury swelling on infrared images

    Science.gov (United States)

    Puentes, John; Langet, Hélène; Herry, Christophe; Frize, Monique

    2011-03-01

    Interpretation of medical infrared images is complex due to thermal noise, the absence of texture, and small temperature differences in pathological zones. An acute inflammatory response is a characteristic symptom of some knee injuries, such as anterior cruciate ligament sprains, muscle or tendon strains, and meniscus tears. Whereas artificial coloring of the original grey-level images may allow visual assessment of the extent of inflammation in the area, automated segmentation remains a challenging problem. This paper presents a hybrid segmentation algorithm to evaluate the extent of inflammation after knee injury in terms of temperature variations and surface shape. It is based on the intersection of rapid color segmentation and homogeneous region segmentation, to which a Laplacian-of-Gaussian filter is applied. While rapid color segmentation properly detects the core of the swollen area, homogeneous region segmentation identifies possible inflammation zones by combining homogeneous grey-level and hue area segmentation. The hybrid segmentation algorithm compares the potential inflammation regions partially detected by each method to identify overlapping areas. Noise filtering and edge segmentation are then applied to the common zones in order to segment the swelling surfaces of the injury. Experimental results on images of a patient with an anterior cruciate ligament sprain show the improved performance of the hybrid algorithm with respect to its separate components. The main contribution of this work is a meaningful automatic segmentation of abnormal skin temperature variations in infrared thermography images of knee injury swelling.
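
    A minimal sketch of the hybrid idea, on a synthetic thermal image: two rough segmentations (a hottest-pixel stand-in for rapid color segmentation and a warm, low-local-variance stand-in for homogeneous region segmentation) are intersected, and a Laplacian-of-Gaussian filter applied to the common zone locates its edges. All thresholds and window sizes are illustrative.

    import numpy as np
    from scipy import ndimage as ndi

    rng = np.random.default_rng(1)
    y, x = np.mgrid[0:128, 0:128]
    temp = 30 + 4 * np.exp(-((x - 64) ** 2 + (y - 64) ** 2) / (2 * 18 ** 2))
    temp += rng.normal(0, 0.2, temp.shape)       # thermal noise

    # Method 1: "rapid color segmentation" stand-in -- hottest pixels.
    core = temp > 32.5

    # Method 2: homogeneous-region stand-in -- warm pixels with low local variance.
    mean = ndi.uniform_filter(temp, 7)
    var = ndi.uniform_filter(temp ** 2, 7) - mean ** 2
    homog = (temp > 31.5) & (var < 0.2)

    # Hybrid step: overlap of the two partial detections.
    common = core & homog

    # A Laplacian-of-Gaussian response, sampled on the common zone,
    # highlights the boundary of the swollen area.
    log = ndi.gaussian_laplace(temp, sigma=2)
    edges = common & (np.abs(log) > np.percentile(np.abs(log), 90))

    print("common-zone pixels:", common.sum(), " edge pixels:", edges.sum())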

  9. Validated automatic segmentation of AMD pathology including drusen and geographic atrophy in SD-OCT images.

    Science.gov (United States)

    Chiu, Stephanie J; Izatt, Joseph A; O'Connell, Rachelle V; Winter, Katrina P; Toth, Cynthia A; Farsiu, Sina

    2012-01-05

    To automatically segment retinal spectral domain optical coherence tomography (SD-OCT) images of eyes with age-related macular degeneration (AMD) and various levels of image quality, to advance the study of retinal pigment epithelium (RPE)+drusen complex (RPEDC) volume changes indicative of AMD progression. A general segmentation framework based on graph theory and dynamic programming was used to segment three retinal boundaries in SD-OCT images of eyes with drusen and geographic atrophy (GA). A validation study for eyes with nonneovascular AMD was conducted, forming subgroups based on scan quality and presence of GA. To test for accuracy, the layer thickness results from two certified graders were compared against automatic segmentation results for 220 B-scans across 20 patients. For reproducibility, automatic layer volumes generated from 0° versus 90° scans were compared in five volumes with drusen. The mean differences in the measured thicknesses of the total retina and RPEDC layers were 4.2 ± 2.8 and 3.2 ± 2.6 μm for automatic versus manual segmentation. When the 0° and 90° datasets were compared, the mean differences in the calculated total retina and RPEDC volumes were 0.28% ± 0.28% and 1.60% ± 1.57%, respectively. The average segmentation time per image was 1.7 seconds automatically versus 3.5 minutes manually. The automatic algorithm accurately and reproducibly segmented three retinal boundaries in images containing drusen and GA. This automatic approach can reduce time and labor costs and yield objective measurements that potentially reveal quantitative RPE changes in longitudinal clinical AMD studies. (ClinicalTrials.gov number NCT00734487.)
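
    The graph-theory-and-dynamic-programming framework can be sketched in miniature: treat each pixel as a node with a cost favoring strong dark-to-bright vertical gradients, and find the minimum-cost left-to-right path with moves limited to adjacent rows. The synthetic B-scan below is a stand-in for real SD-OCT data, and the path model is far simpler than the published framework.

    import numpy as np

    rng = np.random.default_rng(0)
    h, w = 100, 200
    img = np.zeros((h, w))
    true_row = (50 + 8 * np.sin(np.linspace(0, 3, w))).astype(int)
    for c in range(w):
        img[true_row[c]:, c] = 1.0          # bright layer below the boundary
    img += rng.normal(0, 0.15, img.shape)

    # Cost is low where the dark-to-bright vertical transition is strong.
    grad = np.diff(img, axis=0, prepend=img[:1])
    cost = 1.0 - (grad - grad.min()) / (np.ptp(grad) + 1e-9)

    # DP: accumulate minimal path cost column by column; moves limited to +/-1 row.
    acc = cost.copy()
    back = np.zeros((h, w), dtype=int)
    for c in range(1, w):
        for r in range(h):
            lo, hi = max(r - 1, 0), min(r + 2, h)
            prev = acc[lo:hi, c - 1]
            back[r, c] = lo + int(np.argmin(prev))
            acc[r, c] = cost[r, c] + prev.min()

    # Trace the optimal boundary back from the cheapest end point.
    boundary = np.zeros(w, dtype=int)
    boundary[-1] = int(np.argmin(acc[:, -1]))
    for c in range(w - 1, 0, -1):
        boundary[c - 1] = back[boundary[c], c]

    print("mean |error| in rows:", np.abs(boundary - true_row).mean())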

  10. Reliability of a Seven-Segment Foot Model with Medial and Lateral Midfoot and Forefoot Segments During Walking Gait.

    Science.gov (United States)

    Cobb, Stephen C; Joshi, Mukta N; Pomeroy, Robin L

    2016-12-01

    In-vitro and invasive in-vivo studies have reported relatively independent motion in the medial and lateral forefoot segments during gait. However, most current surface-based models do not define medial and lateral forefoot or midfoot segments. The purpose of the current study was to determine the reliability of a 7-segment foot model that includes medial and lateral midfoot and forefoot segments during walking gait. Three-dimensional positions of marker clusters located on the leg and 6 foot segments were tracked as 10 participants completed 5 walking trials. To examine the reliability of the foot model, coefficients of multiple correlation (CMC) were calculated across the trials for each participant. Three-dimensional stance time series and range of motion (ROM) during stance were also calculated for each functional articulation. CMCs for all of the functional articulations were ≥ 0.80. Overall, the rearfoot complex (leg-calcaneus segments) was the most reliable articulation and the medial midfoot complex (calcaneus-navicular segments) was the least reliable. With respect to ROM, reliability was greatest for plantarflexion/dorsiflexion and least for abduction/adduction. Further, the stance ROM and time-series patterns in the current study were generally consistent with those of previous invasive in-vivo studies that assessed actual bone motion.

  11. Spectral embedding based active contour (SEAC): application to breast lesion segmentation on DCE-MRI

    Science.gov (United States)

    Agner, Shannon C.; Xu, Jun; Rosen, Mark; Karthigeyan, Sudha; Englander, Sarah; Madabhushi, Anant

    2011-03-01

    Spectral embedding (SE), a graph-based manifold learning method, has previously been shown to be useful in high-dimensional data classification. In this work, we present a novel SE-based active contour (SEAC) segmentation scheme and demonstrate its applications in lesion segmentation on breast dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI). In this work, we employ SE on DCE-MRI on a per-voxel basis to embed the high-dimensional time-series intensity vector into a reduced-dimensional space characterized by the principal eigenvectors. The orthogonal eigenvector-based data representation allows for the computation of strong tensor gradients in the spectrally embedded space and also yields improved region statistics that serve as optimal stopping criteria for SEAC. We demonstrate both analytically and empirically that the tensor gradients in the spectrally embedded space are stronger than the corresponding gradients in the original grayscale intensity space. On a total of 50 breast DCE-MRI studies, SEAC yielded a mean absolute difference (MAD) of 3.2 ± 2.1 pixels and a mean Dice similarity coefficient (DSC) of 0.74 ± 0.13 compared to manual ground truth segmentation. An active contour in conjunction with fuzzy c-means (FCM+AC), a commonly used segmentation method for breast DCE-MRI, produced a corresponding MAD of 7.2 ± 7.4 pixels and a mean DSC of 0.58 ± 0.32. In conjunction with a set of 6 quantitative morphological features automatically extracted from the SEAC-derived lesion boundary, a support vector machine (SVM) classifier yielded an area under the curve (AUC) of 0.73 for discriminating between 10 benign and 30 malignant lesions; the corresponding SVM classifier with the FCM+AC-derived morphological features yielded an AUC of 0.65.
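
    The embedding step can be sketched as follows: each pixel's time-series intensity vector is mapped into a low-dimensional space with scikit-learn's SpectralEmbedding, and the leading eigenvector image then separates lesion from background kinetics. The synthetic uptake curves are invented stand-ins for real DCE-MRI, and this sketch omits the active contour itself.

    import numpy as np
    from sklearn.manifold import SpectralEmbedding

    rng = np.random.default_rng(0)
    h, w, t = 32, 32, 12
    times = np.linspace(0, 1, t)
    lesion = np.zeros((h, w), bool)
    lesion[10:22, 12:26] = True

    # Lesion pixels: fast uptake and washout; background: slow linear enhancement.
    series = np.empty((h * w, t))
    for i, is_lesion in enumerate(lesion.ravel()):
        if is_lesion:
            curve = 1.2 * (1 - np.exp(-6 * times)) - 0.4 * times
        else:
            curve = 0.3 * times
        series[i] = curve + rng.normal(0, 0.05, t)

    # Per-voxel spectral embedding of the time-series vectors (k-NN graph).
    emb = SpectralEmbedding(n_components=2, n_neighbors=15, random_state=0)
    coords = emb.fit_transform(series)
    eig1 = coords[:, 0].reshape(h, w)   # leading eigenvector image

    # Separation of the two kinetic populations in the embedded space.
    print("eigvec-1 mean inside lesion :", eig1[lesion].mean())
    print("eigvec-1 mean outside lesion:", eig1[~lesion].mean())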

  12. Market Segmentation: An Instructional Module.

    Science.gov (United States)

    Wright, Peter H.

    A concept-based introduction to market segmentation is provided in this instructional module for undergraduate and graduate transportation-related courses. The material can be used in many disciplines including engineering, business, marketing, and technology. The concept of market segmentation is primarily a transportation planning technique by…

  13. Analytical and policy issues in energy economics: Uses of the FRS data base

    Science.gov (United States)

    1981-12-01

    The relevant literature concerning several major analytical and policy issues in energy economics is reviewed and criticized. The possible uses of the Financial Reporting System (FRS) data base for the analysis of energy policy issues are investigated. Certain features of FRS data suggest several ways in which the data base can be used by policy makers. FRS data are collected on the firm level, and different segments of the same firm operating in different markets can be separately identified. The methods of collection as well as FRS's elaborate data verification process guarantee a high degree of accuracy and consistency among firms.

  14. Using learning analytics to evaluate a video-based lecture series.

    Science.gov (United States)

    Lau, K H Vincent; Farooque, Pue; Leydon, Gary; Schwartz, Michael L; Sadler, R Mark; Moeller, Jeremy J

    2018-01-01

    The video-based lecture (VBL), an important component of the flipped classroom (FC) and massive open online course (MOOC) approaches to medical education, has primarily been evaluated through direct learner feedback. Evaluation may be enhanced through learner analytics (LA) - the analysis of quantitative audience usage data generated by video-sharing platforms. We applied LA to an experimental series of ten VBLs on electroencephalography (EEG) interpretation, uploaded to YouTube in the model of a publicly accessible MOOC. Trends in view count, total percentage of video viewed, and audience retention (AR; the percentage of viewers watching at a time point relative to the initial total) were examined. The pattern of average AR decline was characterized using regression analysis, revealing a uniform linear decline in viewership for each video, with no evidence of an optimal VBL length. Segments with transient increases in AR corresponded to those focused on core concepts, indicative of content requiring more detailed evaluation. We propose a model for applying LA at four levels: global, series, video, and feedback. LA may be a useful tool in evaluating a VBL series. Our proposed model combines analytics data and learner self-report for comprehensive evaluation.
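
    The retention analysis can be sketched with a linear fit: regress AR on time, then flag time points sitting well above the fitted line as candidate core-concept segments. The AR values below are fabricated for illustration and are not the study's data.

    import numpy as np

    t = np.arange(0, 600, 30)                       # seconds into the video
    ar = 100 - 0.08 * t + np.where((t > 240) & (t < 330), 6, 0) \
         + np.random.default_rng(0).normal(0, 1.5, t.size)

    slope, intercept = np.polyfit(t, ar, 1)         # uniform linear decline
    residual = ar - (slope * t + intercept)

    # Transient increases: retention more than 2 SDs above the fitted line.
    peaks = t[residual > 2 * residual.std()]
    print(f"decline: {slope:.3f} %/s; above-trend segments at t = {peaks}")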

  15. Biosensors: Future Analytical Tools

    Directory of Open Access Journals (Sweden)

    Vikas

    2007-02-01

    Full Text Available Biosensors offer considerable promise for attaining analytical information in a faster, simpler and cheaper manner than conventional assays. Biosensing approaches are rapidly advancing, and applications ranging from metabolite, biological/chemical warfare agent, food pathogen and adulterant detection to genetic screening and programmed drug delivery have been demonstrated. Innovative efforts coupling micromachining and nanofabrication may lead to even more powerful devices that would accelerate the realization of large-scale and routine screening. With the gradual increase in commercialization, a wide range of new biosensors is thus expected to reach the market in the coming years.

  16. Educational intervention together with an on-line quality control program achieve recommended analytical goals for bedside blood glucose monitoring in a 1200-bed university hospital.

    Science.gov (United States)

    Sánchez-Margalet, Víctor; Rodriguez-Oliva, Manuel; Sánchez-Pozo, Cristina; Fernández-Gallardo, María Francisca; Goberna, Raimundo

    2005-01-01

    Portable meters for blood glucose concentrations are used at the patient's bedside, as well as by patients for self-monitoring of blood glucose. Even though most devices incorporate important technological advances that decrease operator error, the analytical goals proposed for the performance of glucose meters have recently been changed by the American Diabetes Association (ADA). Our aim was for nurses in a 1200-bed University Hospital to achieve the recommended analytical goals, so that we could improve the quality of diabetes care. We used portable glucose meters connected on-line to the laboratory after an educational program for nurses with responsibilities in point-of-care testing. We evaluated the system by assessing the total error of the glucometers using high- and low-level glucose control solutions. In a period of 6 months, we collected data from 5642 control samples obtained by 14 devices (Precision PCx) directly from the control program (QC manager). The average total error for the low-level glucose control (2.77 mmol/l) was 6.3% (range 5.5-7.6%), and even lower for the high-level glucose control (16.66 mmol/l), at 4.8% (range 4.1-6.5%). In conclusion, the performance of the glucose meters used in our University Hospital of more than 1000 beds not only improved after the intervention; the meters also achieved the analytical goals of the suggested ADA/National Academy of Clinical Biochemistry criteria for total error (<7.9% in the range 2.77-16.66 mmol/l glucose) and the optimal total error for high glucose concentrations of <5%, which will improve the quality of care of our patients.

  17. Medical image segmentation using genetic algorithms.

    Science.gov (United States)

    Maulik, Ujjwal

    2009-03-01

    Genetic algorithms (GAs) have been found to be effective in the domain of medical image segmentation, since the problem can often be mapped to one of search in a complex and multimodal landscape. The challenges in medical image segmentation arise due to poor image contrast and artifacts that result in missing or diffuse organ/tissue boundaries. The resulting search space is therefore often noisy, with a multitude of local optima. Not only does the genetic algorithmic framework prove effective in escaping local optima, it also brings considerable flexibility into the segmentation procedure. In this paper, an attempt has been made to review the major applications of GAs to the domain of medical image segmentation.
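
    A toy example of a GA applied to segmentation: evolve two threshold levels maximizing between-class variance on a trimodal synthetic histogram, a multimodal fitness landscape in which hill-climbing from a poor start can stall. Population size, selection, and mutation rates are illustrative choices.

    import numpy as np

    rng = np.random.default_rng(0)
    img = np.concatenate([rng.normal(60, 10, 4000),
                          rng.normal(120, 12, 3000),
                          rng.normal(190, 8, 3000)]).clip(0, 255)

    def fitness(ths):
        """Between-class variance for thresholds [t1, t2] (higher is better)."""
        t1, t2 = sorted(ths)
        classes = [img[img < t1], img[(img >= t1) & (img < t2)], img[img >= t2]]
        mu = img.mean()
        return sum(len(c) / len(img) * (c.mean() - mu) ** 2
                   for c in classes if len(c) > 0)

    pop = rng.uniform(0, 255, (30, 2))               # 30 candidate threshold pairs
    for gen in range(60):
        scores = np.array([fitness(p) for p in pop])
        parents = pop[np.argsort(scores)[-10:]]      # truncation selection
        kids = []
        for _ in range(len(pop)):
            a, b = parents[rng.integers(10, size=2)]
            child = np.where(rng.random(2) < 0.5, a, b)           # uniform crossover
            child = child + rng.normal(0, 5, 2) * (rng.random(2) < 0.3)  # mutation
            kids.append(np.clip(child, 0, 255))
        pop = np.array(kids)

    best = pop[np.argmax([fitness(p) for p in pop])]
    print("evolved thresholds:", np.round(np.sort(best), 1))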

  18. Quality Measures in Pre-Analytical Phase of Tissue Processing: Understanding Its Value in Histopathology.

    Science.gov (United States)

    Rao, Shalinee; Masilamani, Suresh; Sundaram, Sandhya; Duvuru, Prathiba; Swaminathan, Rajendiran

    2016-01-01

    Quality monitoring in a histopathology unit is categorized into three phases (pre-analytical, analytical and post-analytical) to cover the various steps of the entire test cycle. A review of the literature on quality evaluation studies pertaining to histopathology revealed that earlier reports mainly focused on analytical aspects, with limited studies assessing the pre-analytical phase. The pre-analytical phase encompasses several processing steps and handling of the specimen/sample by multiple individuals, thus allowing ample scope for errors. Due to its critical nature and the limited studies in the past to assess quality in the pre-analytical phase, it deserves more attention. This study was undertaken to analyse and assess the quality parameters of the pre-analytical phase in a histopathology laboratory. This was a retrospective study of pre-analytical parameters in the histopathology laboratory of a tertiary care centre, covering 18,626 tissue specimens received in 34 months. Registers and records were checked for efficiency and errors for the following pre-analytical quality variables: specimen identification, specimen in appropriate fixatives, lost specimens, daily internal quality control performance on staining, performance in an inter-laboratory quality assessment program {External quality assurance program (EQAS)} and evaluation of internal non-conformities (NC) for other errors. The study revealed incorrect specimen labelling in 0.04%, 0.01% and 0.01% of specimens in 2007, 2008 and 2009 respectively. About 0.04%, 0.07% and 0.18% of specimens were not sent in fixatives in 2007, 2008 and 2009 respectively. There was no incidence of lost specimens. A total of 113 non-conformities were identified, of which 92.9% belonged to the pre-analytical phase. The predominant NC (any deviation from the normal standard which may generate an error and compromise quality standards) identified was wrong labelling of slides. Performance in EQAS for the pre-analytical phase was satisfactory in 6 of 9 cycles. A low incidence

  19. Automated image segmentation using information theory

    International Nuclear Information System (INIS)

    Hibbard, L.S.

    2001-01-01

    Full text: Our development of automated contouring of CT images for RT planning is based on maximum a posteriori (MAP) analyses of region textures, edges, and prior shapes, and assumes stationary Gaussian distributions for voxel textures and contour shapes. Since models may not accurately represent image data, it would be advantageous to compute inferences without relying on models. The relative entropy (RE) from information theory can generate inferences based solely on the similarity of probability distributions. The entropy of the distribution of a random variable X is defined as -Σ_x p(x) log₂ p(x) over all the values x which X may assume. The RE (Kullback-Leibler divergence) of two distributions p(X), q(X) over X is Σ_x p(x) log₂ [p(x)/q(x)]. The RE is a kind of 'distance' between p and q, equaling zero when p = q and increasing as p and q become more different. Minimum-error MAP and likelihood-ratio decision rules have RE equivalents: minimum-error decisions are obtained with functions of the differences between the REs of the compared distributions. One applied result is that the contour ideally separating two regions is the one that maximizes the relative entropy of the two regions' intensities. A program was developed that automatically contours the outlines of patients in stereotactic headframes, a situation most often requiring manual drawing. The relative entropy of the intensities inside the contour (patient) versus outside (background) was maximized by conjugate gradient descent over the space of parameters of a deformable contour, yielding the computed segmentation of a patient from headframe backgrounds. This program is particularly useful for preparing images for multimodal image fusion. Relative entropy and allied measures of distribution similarity provide automated contouring criteria that do not depend on statistical models of image data. This approach should have wide utility in medical image segmentation applications. Copyright (2001) Australasian College of Physical Scientists and
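
    The RE criterion is easy to reproduce in miniature: estimate intensity histograms inside and outside a candidate contour and compute RE(p||q) = Σ_x p(x) log₂(p(x)/q(x)); the ideal contour maximizes this quantity. The circular candidate contours over a synthetic image below are illustrative; the paper optimizes a deformable contour by conjugate gradient descent.

    import numpy as np

    def kl_divergence(p, q, eps=1e-12):
        """RE(p||q) = sum_x p(x) log2(p(x)/q(x)), from histogram counts."""
        p = p / p.sum()
        q = q / q.sum()
        return float(np.sum(p * np.log2((p + eps) / (q + eps))))

    rng = np.random.default_rng(0)
    yy, xx = np.mgrid[0:128, 0:128]
    patient = (xx - 64) ** 2 + (yy - 64) ** 2 < 40 ** 2
    img = np.where(patient, rng.normal(120, 15, patient.shape),
                            rng.normal(40, 10, patient.shape))

    # Score candidate circular contours by the RE of inside vs. outside histograms;
    # the radius matching the true boundary (40) should score highest.
    for radius in (20, 40, 60):
        inside = (xx - 64) ** 2 + (yy - 64) ** 2 < radius ** 2
        h_in, _ = np.histogram(img[inside], bins=64, range=(0, 255))
        h_out, _ = np.histogram(img[~inside], bins=64, range=(0, 255))
        print(f"radius {radius}: RE = {kl_divergence(h_in, h_out):.2f} bits")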

  20. Metrics for image segmentation

    Science.gov (United States)

    Rees, Gareth; Greenway, Phil; Morray, Denise

    1998-07-01

    An important challenge in mapping image-processing techniques onto applications is the lack of quantitative performance measures. From a systems engineering perspective, these are essential if system-level requirements are to be decomposed into sub-system requirements which can be understood in terms of algorithm selection and performance optimization. Nowhere in computer vision is this more evident than in the area of image segmentation. This is a vigorous and innovative research activity, but even after nearly two decades of progress, it remains almost impossible to answer the question 'what would the performance of this segmentation algorithm be under these new conditions?' To begin to address this shortcoming, we have devised a well-principled metric for assessing the relative performance of two segmentation algorithms. This allows meaningful objective comparisons to be made between their outputs. It also estimates the absolute performance of an algorithm given ground truth. Our approach is an information-theoretic one. In this paper, we describe the theory and motivation of our method, and present practical results obtained from a range of state-of-the-art segmentation methods. We demonstrate that it is possible to measure the objective performance of these algorithms, and to use the information so gained to provide clues about how their performance might be improved.
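
    In the same information-theoretic spirit (though not necessarily the authors' exact metric), segmentations can be compared via the mutual information between label images and the derived variation of information, which acts as a distance and vanishes for identical outputs. The label images below are synthetic.

    import numpy as np
    from sklearn.metrics import mutual_info_score

    rng = np.random.default_rng(0)
    truth = np.zeros((64, 64), dtype=int)
    truth[16:48, 16:48] = 1                      # ground-truth object

    algo_a = truth.copy()                        # near-perfect segmentation
    algo_a[16:20, 16:48] = 0
    algo_b = (rng.random(truth.shape) < 0.5).astype(int)   # random labels

    def entropy(labels):
        _, counts = np.unique(labels, return_counts=True)
        p = counts / counts.sum()
        return float(-(p * np.log2(p)).sum())

    def variation_of_information(s1, s2):
        mi = mutual_info_score(s1.ravel(), s2.ravel()) / np.log(2)  # nats -> bits
        return entropy(s1) + entropy(s2) - 2 * mi

    for name, seg in (("near-perfect", algo_a), ("random", algo_b)):
        print(name, "VI =", round(variation_of_information(truth, seg), 3), "bits")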

  1. Identifying uniformly mutated segments within repeats.

    Science.gov (United States)

    Sahinalp, S Cenk; Eichler, Evan; Goldberg, Paul; Berenbrink, Petra; Friedetzky, Tom; Ergun, Funda

    2004-12-01

    Given a long string of characters from a constant-size alphabet, we present an algorithm to determine whether its characters have been generated by a single i.i.d. random source. More specifically, consider all possible n-coin models for generating a binary string S, where each bit of S is generated via an independent toss of one of the n coins in the model. The choice of which coin to toss is decided by a random walk on the set of coins, where the probability of a coin change is much lower than the probability of using the same coin repeatedly. We present a procedure to evaluate the likelihood of an n-coin model for a given S, subject to a uniform prior distribution over the parameters of the model (which represent mutation rates and probabilities of copying events). In the absence of detailed prior knowledge of these parameters, the algorithm can be used to determine whether the a posteriori probability for n=1 is higher than for any other n>1. Our algorithm runs in time O(l⁴ log l), where l is the length of S, through a dynamic programming approach which exploits the assumed convexity of the a posteriori probability in n. Our test can be used in the analysis of long alignments between pairs of genomic sequences in a number of ways. For example, functional regions in genome sequences exhibit much lower mutation rates than non-functional regions. Because our test provides a means of determining variations in the mutation rate, it may be used to distinguish functional regions from non-functional ones. Another application is in determining whether two highly similar, and thus evolutionarily related, genome segments are the result of a single copy event or of a complex series of copy events. This is particularly an issue in evolutionary studies of genome regions rich in repeat segments (especially tandemly repeated segments).
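
    The model-comparison idea can be sketched for the simplest cases: under a uniform prior on a coin's bias, the marginal likelihood of a binary string from one coin is the Beta integral B(h+1, t+1), and a 2-coin model with a single change point adds a uniform prior over the change location. This toy comparison is a crude surrogate for the paper's full O(l⁴ log l) dynamic program.

    import numpy as np
    from scipy.special import betaln, logsumexp

    def log_marginal_one_coin(bits):
        """log P(S | 1 coin) = log B(h+1, t+1) under a uniform prior on bias."""
        h = int(np.sum(bits))
        return betaln(h + 1, len(bits) - h + 1)

    def log_marginal_two_coins(bits):
        """One change point, uniform prior over its location, independent
        uniform priors on the two biases."""
        terms = [log_marginal_one_coin(bits[:k]) + log_marginal_one_coin(bits[k:])
                 for k in range(1, len(bits))]
        return logsumexp(terms) - np.log(len(bits) - 1)

    rng = np.random.default_rng(0)
    uniform = rng.random(200) < 0.3                    # one source throughout
    shifted = np.concatenate([rng.random(100) < 0.1,   # "mutation rate" changes
                              rng.random(100) < 0.6])

    for name, s in (("uniform", uniform), ("shifted", shifted)):
        bf = log_marginal_one_coin(s) - log_marginal_two_coins(s)
        print(f"{name}: log Bayes factor (1 coin vs 2 coins) = {bf:.1f}")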

  2. The Savannah River Site's groundwater monitoring program

    Energy Technology Data Exchange (ETDEWEB)

    1991-10-18

    This report summarizes the Savannah River Site (SRS) groundwater monitoring program conducted by EPD/EMS in the first quarter of 1991. It includes the analytical data, field data, data review, quality control, and other documentation for this program; provides a record of the program's activities and rationale; and serves as an official document of the analytical results.

  3. Incorporation of squalene into rod outer segments

    International Nuclear Information System (INIS)

    Keller, R.K.; Fliesler, S.J.

    1990-01-01

    We have reported previously that squalene is the major radiolabeled nonsaponifiable lipid product derived from [³H]acetate in short-term incubations of frog retinas. In the present study, we demonstrate that newly synthesized squalene is incorporated into rod outer segments under similar in vitro conditions. We show further that squalene is an endogenous constituent of frog rod outer segment membranes; its concentration is approximately 9.5 nmol/μmol of phospholipid, or about 9% of the level of cholesterol. Pulse-chase experiments with radiolabeled precursors revealed no metabolism of outer segment squalene to sterols in up to 20 h of chase. Taken together with our previous absolute rate studies, these results suggest that most, if not all, of the squalene synthesized by the frog retina is transported to rod outer segments. Synthesis of protein is not required for squalene transport, since puromycin had no effect on squalene incorporation into outer segments. Conversely, inhibition of isoprenoid synthesis with mevinolin had no effect on the incorporation of opsin into the outer segment. These latter results support the conclusion that the de novo synthesis and subsequent intracellular trafficking of opsin and isoprenoid lipids destined for the outer segment occur via independent mechanisms

  4. Interactive segmentation techniques algorithms and performance evaluation

    CERN Document Server

    He, Jia; Kuo, C-C Jay

    2013-01-01

    This book focuses on interactive segmentation techniques, which have been extensively studied in recent decades. Interactive segmentation emphasizes clear extraction of objects of interest, whose locations are roughly indicated by human interactions based on high-level perception. This book will first introduce classic graph-cut segmentation algorithms and then discuss state-of-the-art techniques, including graph matching methods, region merging and label propagation, clustering methods, and segmentation methods based on edge detection. A comparative analysis of these methods will be provided.

  5. Detection and characterization of flaws in segments of light water reactor pressure vessels

    International Nuclear Information System (INIS)

    Cook, K.V.; Cunningham, R.A. Jr.; McClung, R.W.

    1988-01-01

    Studies have been conducted to determine flaw density in segments cut from light water reactor (LWR) pressure vessels as part of the Oak Ridge National Laboratory's Heavy-Section Steel Technology (HSST) Program. Segments from the Hope Creek Unit 2 vessel and the Pilgrim Unit 2 vessel were purchased from salvage dealers. Hope Creek was a boiling water reactor (BWR) design and Pilgrim was a pressurized water reactor (PWR) design; neither was ever placed in service. The objectives were to evaluate these LWR segments for flaws with ultrasonic and liquid penetrant techniques. Both objectives were successfully completed. One significant indication was detected in a Hope Creek seam weld by ultrasonic techniques and characterized by further analyses terminating with destructive correlation. This indication [with a through-wall dimension of ∼6 mm (∼0.24 in.)] was detected in only 3 m (10 ft) of weldment and offers extremely limited data when compared to the extent of welding even in a single pressure vessel. However, the detection and confirmation of the flaw in the arbitrarily selected sections implies that the Marshall report estimates (and others) are nonconservative for such small flaws. No significant indications were detected in the Pilgrim material by ultrasonic techniques. Unfortunately, the Pilgrim segments contained relatively little weldment; thus, we limited our ultrasonic examinations to the cladding and subcladding regions. Fluorescent liquid penetrant inspection of the cladding surfaces for both LWR segments detected no significant indications [i.e., for a total of approximately 6.8 m² (72 ft²) of cladding surface]. (author)

  6. Market segmentation using perceived constraints

    Science.gov (United States)

    Jinhee Jun; Gerard Kyle; Andrew Mowen

    2008-01-01

    We examined the practical utility of segmenting potential visitors to Cleveland Metroparks using their constraint profiles. Our analysis identified three segments based on their scores on the dimensions of constraints: Other priorities--visitors who scored the highest on 'other priorities' dimension; Highly Constrained--visitors who scored relatively high on...

  7. Reduplication Facilitates Early Word Segmentation

    Science.gov (United States)

    Ota, Mitsuhiko; Skarabela, Barbora

    2018-01-01

    This study explores the possibility that early word segmentation is aided by infants' tendency to segment words with repeated syllables ("reduplication"). Twenty-four nine-month-olds were familiarized with passages containing one novel reduplicated word and one novel non-reduplicated word. Their central fixation times in response to…

  8. 78 FR 7476 - Airport Improvement Program

    Science.gov (United States)

    2013-02-01

    ... Airports, Airport Planning and Programming, Routing Symbol APP-501, 800 Independence Avenue SW., Room 619... Programming, Routing Symbol APP-501, 800 Independence Avenue SW., Room 619, Washington, DC 20591; between 9 a... recognition of the interest of all segments of the airport community in the AIP. The agency will consider all...

  9. Recognition Using Classification and Segmentation Scoring

    National Research Council Canada - National Science Library

    Kimball, Owen; Ostendorf, Mari; Rohlicek, Robin

    1992-01-01

    .... We describe an approach to connected word recognition that allows the use of segmental information through an explicit decomposition of the recognition criterion into classification and segmentation scoring...

  10. Hepatic vessel segmentation for 3D planning of liver surgery: experimental evaluation of a new fully automatic algorithm.

    Science.gov (United States)

    Conversano, Francesco; Franchini, Roberto; Demitri, Christian; Massoptier, Laurent; Montagna, Francesco; Maffezzoli, Alfonso; Malvasi, Antonio; Casciaro, Sergio

    2011-04-01

    The aim of this study was to identify the optimal parameter configuration of a new algorithm for fully automatic segmentation of hepatic vessels, evaluating its accuracy in view of its use in a computer system for three-dimensional (3D) planning of liver surgery. A phantom reproduction of a human liver with vessels up to the fourth subsegment order, corresponding to a minimum diameter of 0.2 mm, was realized through stereolithography, exploiting a 3D model derived from a real human computed tomographic data set. Algorithm parameter configuration was experimentally optimized, and the maximum achievable segmentation accuracy was quantified for both single two-dimensional slices and 3D reconstruction of the vessel network, through an analytic comparison of the automatic segmentation performed on contrast-enhanced computed tomographic phantom images with actual model features. The optimal algorithm configuration resulted in a vessel detection sensitivity of 100% for vessels > 1 mm in diameter, 50% in the range 0.5 to 1 mm, and 14% in the range 0.2 to 0.5 mm. An average area overlap of 94.9% was obtained between automatically and manually segmented vessel sections, with an average difference of 0.06 mm². The average values of corresponding false-positive and false-negative ratios were 7.7% and 2.3%, respectively. A robust and accurate algorithm for automatic extraction of the hepatic vessel tree from contrast-enhanced computed tomographic volume images was proposed and experimentally assessed on a liver model, showing unprecedented sensitivity in vessel delineation. This automatic segmentation algorithm is promising for supporting liver surgery planning and for guiding intraoperative resections. Copyright © 2011 AUR. Published by Elsevier Inc. All rights reserved.
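
    The section-level figures of merit are straightforward to compute from binary masks; a minimal sketch follows, with fabricated masks and definitions that may differ in detail from the paper's.

    import numpy as np

    manual = np.zeros((16, 16), bool)
    manual[4:12, 4:12] = True            # reference vessel cross-section
    auto = np.zeros((16, 16), bool)
    auto[5:13, 4:12] = True              # automatic result, shifted one pixel

    overlap = (auto & manual).sum() / manual.sum() * 100     # % of reference hit
    fp_ratio = (auto & ~manual).sum() / auto.sum() * 100     # spurious area
    fn_ratio = (~auto & manual).sum() / manual.sum() * 100   # missed area

    print(f"overlap {overlap:.1f}%  FP {fp_ratio:.1f}%  FN {fn_ratio:.1f}%")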

  11. Analytical expressions for the correlation function of a hard sphere dimer fluid

    Science.gov (United States)

    Kim, Soonho; Chang, Jaeeon; Kim, Hwayong

    A closed form expression is given for the correlation function of a hard sphere dimer fluid. A set of integral equations is obtained from Wertheim's multidensity Ornstein-Zernike integral equation theory with Percus-Yevick approximation. Applying the Laplace transformation method to the integral equations and then solving the resulting equations algebraically, the Laplace transforms of the individual correlation functions are obtained. By the inverse Laplace transformation, the radial distribution function (RDF) is obtained in closed form out to 3D (D is the segment diameter). The analytical expression for the RDF of the hard dimer should be useful in developing the perturbation theory of dimer fluids.

  12. Analytical expression for the correlation function of a hard sphere chain fluid

    Science.gov (United States)

    Chang, Jaeeon; Kim, Hwayong

    A closed form expression is given for the correlation function of a flexible hard sphere chain fluid. A set of integral equations obtained from Wertheim's multidensity Ornstein-Zernike integral equation theory with the polymer Percus-Yevick ideal chain approximation is considered. Applying the Laplace transformation method to the integral equations and then solving the resulting equations algebraically, the Laplace transforms of the individual correlation functions are obtained. By inverse Laplace transformation the inter- and intramolecular radial distribution functions (RDFs) are obtained in closed forms up to 3D (D is the segment diameter). These analytical expressions for the RDFs would be useful in developing the perturbation theory of chain fluids.

  13. Multifractal-based nuclei segmentation in fish images.

    Science.gov (United States)

    Reljin, Nikola; Slavkovic-Ilic, Marijeta; Tapia, Coya; Cihoric, Nikola; Stankovic, Srdjan

    2017-09-01

    A method for nuclei segmentation in fluorescence in-situ hybridization (FISH) images, based on inverse multifractal analysis (IMFA), is proposed. From the blue channel of the FISH image in RGB format, the matrix of Hölder exponents, in one-to-one correspondence with the image pixels, is determined first. The following semi-automatic procedure is proposed: initial nuclei segmentation is performed automatically from the matrix of Hölder exponents by applying a predefined hard threshold; the user then evaluates the result and can refine the segmentation by changing the threshold, if necessary. After successful nuclei segmentation, the HER2 (human epidermal growth factor receptor 2) score can be determined in the usual way: by counting red and green dots within the segmented nuclei and finding their ratio. The IMFA segmentation method was tested on 100 clinical cases, evaluated by a skilled pathologist. The test results show that the new method has advantages over previously reported methods.
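
    A minimal sketch of the core computation: estimate a per-pixel Hölder exponent as the least-squares slope of log(window measure) versus log(window size), then hard-threshold the exponent matrix. The toy image, the choice of measure, and the threshold are illustrative stand-ins for real FISH data.

    import numpy as np
    from scipy import ndimage as ndi

    rng = np.random.default_rng(0)
    img = 0.02 + rng.uniform(0, 0.01, (96, 96))    # dim, regular background
    nucleus = np.zeros((96, 96), bool)
    nucleus[30:60, 30:60] = True
    img[nucleus & (rng.random((96, 96)) < 0.2)] = 1.0  # singular bright texture

    sizes = np.array([1, 3, 5, 7])
    logs = np.log(sizes)

    # Window measure: sum of intensities over an s-by-s neighbourhood.
    stack = np.stack([np.log(ndi.uniform_filter(img, s) * s * s) for s in sizes])

    # Per-pixel least-squares slope of log-measure vs log-size ~ Holder exponent;
    # regular regions scale like s^2, singular bright spikes scale more slowly.
    d = logs - logs.mean()
    alpha = (d[:, None, None] * (stack - stack.mean(0))).sum(0) / (d ** 2).sum()

    # Hard threshold on the exponent matrix, then morphological closing to
    # consolidate the detected singular pixels into a region mask.
    mask = ndi.binary_closing(alpha < 1.6, structure=np.ones((5, 5)))
    print("recovered fraction of nucleus:",
          round((mask & nucleus).sum() / nucleus.sum(), 2))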

  14. Analysis of System-Wide Investment in the National Airspace System: A Portfolio Analytical Framework and an Example

    Science.gov (United States)

    Bhadra, Dipasis; Morser, Frederick R.

    2006-01-01

    In this paper, the authors review the FAA's current program investments and lay out a preliminary analytical framework for undertaking projects that may address some of the noted deficiencies. By drawing upon well-developed theories from corporate finance, an analytical framework is offered that can be used for choosing the FAA's investments while taking into account risk, expected returns, and inherent dependencies across NAS programs. The framework can be expanded to take in multiple assets and realistic parameter values in drawing an efficient risk-return frontier for the entire set of FAA investment programs.
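
    A minimal mean-variance sketch of the portfolio view: given hypothetical expected returns and a covariance matrix for three NAS programs (all numbers invented for illustration), sample long-only weightings, trace the risk-return cloud whose upper-left envelope is the efficient frontier, and report the mix with the highest return per unit of risk.

    import numpy as np

    rng = np.random.default_rng(0)
    mu = np.array([0.06, 0.09, 0.12])          # hypothetical program returns
    cov = np.array([[0.010, 0.002, 0.004],     # dependencies across programs
                    [0.002, 0.020, 0.006],     # appear as covariances
                    [0.004, 0.006, 0.040]])

    w = rng.dirichlet(np.ones(3), size=20000)  # random long-only weightings
    rets = w @ mu
    risks = np.sqrt(np.einsum('ij,jk,ik->i', w, cov, w))

    best = np.argmax(rets / risks)             # highest return per unit risk
    print("best mix:", np.round(w[best], 2),
          f"return {rets[best]:.3f}, risk {risks[best]:.3f}")

    # For a target return level, the efficient portfolio minimizes risk.
    for target in (0.08, 0.10):
        i = np.argmin(np.where(rets >= target, risks, np.inf))
        print(f"target {target:.2f}: min risk {risks[i]:.3f}, "
              f"weights {np.round(w[i], 2)}")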

  15. The Savannah River Site's Groundwater Monitoring Program. First quarter 1992

    Energy Technology Data Exchange (ETDEWEB)

    1992-08-03

    This report summarizes the Savannah River Site (SRS) groundwater monitoring program conducted during the first quarter of 1992. It includes the analytical data, field data, data review, quality control, and other documentation for this program; provides a record of the program's activities; and serves as an official document of the analytical results.

  16. Autonomous Segmentation of Outcrop Images Using Computer Vision and Machine Learning

    Science.gov (United States)

    Francis, R.; McIsaac, K.; Osinski, G. R.; Thompson, D. R.

    2013-12-01

    As planetary exploration missions become increasingly complex and capable, the motivation grows for improved autonomous science. New capabilities for onboard science data analysis may relieve radio-link data limits and provide greater throughput of scientific information. Adaptive data acquisition, storage and downlink may ultimately hold implications for mission design and operations. For surface missions, geology remains an essential focus, and the investigation of in-place, exposed geological materials provides the greatest scientific insight and context for the formation and history of planetary materials and processes. The goal of this research program is to develop techniques for the autonomous segmentation of images of rock outcrops. Recognition of the relationships between different geological units is the first step in mapping and interpreting a geological setting. Applications of automatic segmentation include instrument placement and targeting and data triage for downlink. Here, we report on the development of a new technique in which a photograph of a rock outcrop is processed by several elementary image processing techniques, generating a feature space which can be interrogated and classified. A distance metric learning technique (Multiclass Discriminant Analysis, or MDA) is tested as a means of finding the best numerical representation of the feature space. MDA produces a linear transformation that maximizes the separation between data points from different geological units. This 'training step' is completed on one or more images from a given locality. Then we apply the same transformation to improve the segmentation of new scenes containing similar materials to those used for training. The technique was tested using imagery from Mars analogue settings at the Cima volcanic flows in the Mojave Desert, California; impact breccias from the Sudbury impact structure in Ontario, Canada; and an outcrop showing embedded mineral veins in Gale Crater on Mars.
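
    The training-and-transfer step can be sketched with scikit-learn's LinearDiscriminantAnalysis, a close relative of the MDA technique named above: learn a linear transformation that maximizes separation between geological units on one image, then reuse it to classify pixels of a new scene. Features and labels below are synthetic stand-ins for real outcrop imagery.

    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.neighbors import KNeighborsClassifier

    rng = np.random.default_rng(0)
    units = 3                                   # geological units in the scene
    means = rng.normal(0, 3, (units, 6))        # 6 elementary image features

    def sample(n_per_unit):
        """Draw per-pixel feature vectors with unit labels."""
        X = np.vstack([m + rng.normal(0, 1.0, (n_per_unit, 6)) for m in means])
        y = np.repeat(np.arange(units), n_per_unit)
        return X, y

    X_train, y_train = sample(300)              # pixels from the training image
    X_new, y_new = sample(200)                  # pixels from a new, similar scene

    lda = LinearDiscriminantAnalysis(n_components=2).fit(X_train, y_train)
    Z_train, Z_new = lda.transform(X_train), lda.transform(X_new)

    # Simple per-pixel classification in the learned, better-separated space.
    knn = KNeighborsClassifier(n_neighbors=5).fit(Z_train, y_train)
    print("unit-label accuracy on new scene:", round(knn.score(Z_new, y_new), 3))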

  17. A semi-analytical method to evaluate the dielectric response of a tokamak plasma accounting for drift orbit effects

    Science.gov (United States)

    Van Eester, Dirk

    2005-03-01

    A semi-analytical method is proposed to evaluate the dielectric response of a plasma to electromagnetic waves in the ion cyclotron domain of frequencies in a D-shaped but axisymmetric toroidal geometry. The actual drift orbits of the particles are accounted for. The method hinges on subdividing the orbit into elementary segments in which the integrations can be performed analytically or by tabulation, and it relies on local book-keeping of the relation between the toroidal angular momentum and the poloidal flux function. Depending on which variables are chosen, the method allows computation of elementary building blocks for either the wave equation or the Fokker-Planck equation, with the emphasis mainly on the latter. Two types of tangent resonance are distinguished.

  18. The Importance of Marketing Segmentation

    Science.gov (United States)

    Martin, Gillian

    2011-01-01

    The rationale behind marketing segmentation is to allow businesses to focus on their consumers' behaviors and purchasing patterns. If done effectively, marketing segmentation allows an organization to achieve its highest return on investment (ROI) in return for its marketing and sales expenses. If an organization markets its products or services to…

  19. A Unified Channel Charges Expression for Analytic MOSFET Modeling

    Directory of Open Access Journals (Sweden)

    Hugues Murray

    2012-01-01

    Full Text Available Based on the resolution of a 1D Poisson equation, we present an analytic model of inversion charges allowing calculation of the drain current and transconductance in the Metal Oxide Semiconductor Field Effect Transistor. The drain current and transconductance are described by analytical functions including mobility corrections and short-channel effects (CLM, DIBL). A comparison with the Pao-Sah integral shows the excellent accuracy of the model in all inversion modes, from strong to weak inversion, in submicronic MOSFETs. All calculations are encoded in a simple C program and give instantaneous results, providing an efficient tool for microelectronics users.

  20. Boundary segmentation for fluorescence microscopy using steerable filters

    Science.gov (United States)

    Ho, David Joon; Salama, Paul; Dunn, Kenneth W.; Delp, Edward J.

    2017-02-01

    Fluorescence microscopy is used to image multiple subcellular structures in living cells which are not readily observed using conventional optical microscopy. Moreover, two-photon microscopy is widely used to image structures deeper in tissue. Recent advances in fluorescence microscopy have enabled the generation of large data sets of images at different depths, times, and spectral channels. Automatic object segmentation is therefore necessary, since manual segmentation would be inefficient and biased. However, automatic segmentation is still a challenging problem, as regions of interest may lack well-defined boundaries and may have non-uniform pixel intensities. This paper describes a method for segmenting tubular structures in fluorescence microscopy images of rat kidney and liver samples using adaptive histogram equalization, foreground/background segmentation, steerable filters to capture directional tendencies, and connected-component analysis. The results on several data sets demonstrate that our method can segment tubular boundaries successfully. Moreover, our method performs better than other popular image segmentation methods when compared against ground truth data obtained via manual segmentation.
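
    The steerable-filter stage can be sketched with a first-derivative-of-Gaussian basis: responses to the two basis filters are combined linearly to synthesize the response at any orientation, and the per-pixel maximum over orientations highlights directional boundaries. The synthetic image, sigma, and angle set are illustrative; the paper's full pipeline also includes histogram equalization, foreground/background separation, and connected components.

    import numpy as np
    from scipy import ndimage as ndi

    rng = np.random.default_rng(0)
    img = rng.normal(0, 0.05, (128, 128))
    img[:, 60:68] += 1.0            # a vertical "tubule" wall
    img[40:48, :] += 1.0            # a horizontal one

    # Basis responses: Gaussian derivative along y (order=(1,0)) and x (order=(0,1)).
    gy = ndi.gaussian_filter(img, 2, order=(1, 0))
    gx = ndi.gaussian_filter(img, 2, order=(0, 1))

    # Steering property: the response at angle theta is a linear combination of
    # the two basis responses -- no extra convolutions per orientation.
    thetas = np.deg2rad(np.arange(0, 180, 15))
    oriented = np.stack([np.cos(t) * gx + np.sin(t) * gy for t in thetas])
    response = np.abs(oriented).max(axis=0)
    direction = thetas[np.abs(oriented).argmax(axis=0)]

    edges = response > 0.5 * response.max()
    print("boundary pixels:", edges.sum(),
          "dominant angles (deg):", np.unique(np.rad2deg(direction[edges]).round()))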