WorldWideScience

Sample records for level set model

  1. A new level set model for multimaterial flows

    Energy Technology Data Exchange (ETDEWEB)

    Starinshak, David P. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)]; Karni, Smadar [Univ. of Michigan, Ann Arbor, MI (United States). Dept. of Mathematics]; Roe, Philip L. [Univ. of Michigan, Ann Arbor, MI (United States). Dept. of Aerospace Engineering]

    2014-01-08

    We present a new level set model for representing multimaterial flows in multiple space dimensions. Instead of associating a level set function with a specific fluid material, the function is associated with a pair of materials and the interface that separates them. A voting algorithm collects sign information from all level sets and determines material designations. M(M−1)/2 level set functions might be needed to represent a general M-material configuration; problems of practical interest use far fewer functions, since not all pairs of materials share an interface. The new model is less prone to producing indeterminate material states, i.e. regions claimed by more than one material (overlaps) or no material at all (vacuums). It outperforms existing material-based level set models without the need for reinitialization schemes, thereby avoiding additional computational costs and preventing excessive numerical diffusion.
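    The pairwise sign-voting idea can be illustrated in a few lines. The sketch below is a hypothetical NumPy rendering of the concept (the function `assign_materials` and its conventions are ours, not the authors' code): each pairwise level set votes for one of its two materials according to its sign, and the material with the most votes claims the point.

```python
import numpy as np

def assign_materials(pairwise_phis, num_materials):
    """Assign a material label to each grid point by majority vote.

    `pairwise_phis` maps a material pair (i, j), i < j, to a signed
    level-set array: phi > 0 votes for material i, phi < 0 for j.
    Hypothetical sketch of the voting concept, not the paper's code.
    """
    shape = next(iter(pairwise_phis.values())).shape
    votes = np.zeros((num_materials,) + shape)
    for (i, j), phi in pairwise_phis.items():
        votes[i] += (phi > 0)  # the sign votes for one material...
        votes[j] += (phi < 0)  # ...or for the other
    return np.argmax(votes, axis=0)  # material with the most votes wins

# Two materials separated by a single interface at x = 0.5
x = np.linspace(0.0, 1.0, 11)
phi01 = 0.5 - x  # positive on the left: votes for material 0
labels = assign_materials({(0, 1): phi01}, num_materials=2)
```

    With more materials, only pairs that actually share an interface need a level set, which is why far fewer than M(M−1)/2 functions suffice in practice.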

  2. A new level set model for cell image segmentation

    Science.gov (United States)

    Ma, Jing-Feng; Hou, Kai; Bao, Shang-Lian; Chen, Chun

    2011-02-01

    In this paper we first determine three phases of cell images: background, cytoplasm and nucleolus, according to the general physical characteristics of cell images, and then develop a variational model, based on these characteristics, to segment the nucleolus and cytoplasm from their relatively complicated backgrounds. In addition, information obtained by preprocessing the cell images with the Otsu algorithm is used to initialize the level set function in the model, which speeds up the segmentation and yields satisfactory results in cell image processing.
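    A minimal sketch of this kind of initialization (our own illustration, not the authors' implementation): compute an Otsu threshold from the image histogram, then use the resulting binary partition as the initial level set function.

```python
import numpy as np

def otsu_threshold(image, nbins=256):
    """Otsu's method: choose the threshold that maximizes the
    between-class variance of the intensity histogram."""
    hist, edges = np.histogram(image, bins=nbins)
    centers = (edges[:-1] + edges[1:]) / 2.0
    w0 = np.cumsum(hist)            # pixel count at or below each bin
    w1 = w0[-1] - w0                # pixel count above each bin
    m0 = np.cumsum(hist * centers)  # cumulative intensity sums
    m1 = m0[-1] - m0
    valid = (w0 > 0) & (w1 > 0)
    between = np.zeros_like(centers)
    between[valid] = (w0[valid] * w1[valid]
                      * (m0[valid] / w0[valid] - m1[valid] / w1[valid]) ** 2)
    return centers[np.argmax(between)]

def init_level_set(image):
    """Binary initialization: negative inside bright objects, positive outside."""
    return np.where(image > otsu_threshold(image), -1.0, 1.0)

img = np.concatenate([np.full(50, 10.0), np.full(50, 200.0)])  # dark + bright
phi0 = init_level_set(img)
```

    Starting the evolution from a data-driven partition like this, rather than an arbitrary contour, is what saves iterations in the variational model.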

  3. Level-set techniques for facies identification in reservoir modeling

    Science.gov (United States)

    Iglesias, Marco A.; McLaughlin, Dennis

    2011-03-01

    In this paper we investigate the application of level-set techniques for facies identification in reservoir models. The identification of facies is a geometrical ill-posed inverse problem that we formulate in terms of shape optimization. The goal is to find a region (a geologic facies) that minimizes the misfit between predicted and measured data from an oil-water reservoir. In order to address the shape optimization problem, we present a novel application of the level-set iterative framework developed by Burger (2002 Interfaces Free Bound. 5 301-29; 2004 Inverse Problems 20 259-82) for inverse obstacle problems. The optimization is constrained by the reservoir model, a nonlinear large-scale system of PDEs that describes the reservoir dynamics. We reformulate this reservoir model in a weak (integral) form whose shape derivative can be formally computed from standard results of shape calculus. At each iteration of the scheme, the current estimate of the shape derivative is used to define a velocity in the level-set equation. The proper selection of this velocity ensures that the new shape decreases the cost functional. We present results of facies identification where the velocity is computed with the gradient-based (GB) approach of Burger (2002) and the Levenberg-Marquardt (LM) technique of Burger (2004). While an adjoint formulation allows the straightforward application of the GB approach, the LM technique requires the computation of the large-scale Karush-Kuhn-Tucker system that arises at each iteration of the scheme. We solve this system efficiently by means of the representer method. We present synthetic experiments to show and compare the capabilities and limitations of the proposed implementations of level-set techniques for the identification of geologic facies.

  4. Level-set techniques for facies identification in reservoir modeling

    International Nuclear Information System (INIS)

    Iglesias, Marco A; McLaughlin, Dennis

    2011-01-01

    In this paper we investigate the application of level-set techniques for facies identification in reservoir models. The identification of facies is a geometrical ill-posed inverse problem that we formulate in terms of shape optimization. The goal is to find a region (a geologic facies) that minimizes the misfit between predicted and measured data from an oil–water reservoir. In order to address the shape optimization problem, we present a novel application of the level-set iterative framework developed by Burger (2002 Interfaces Free Bound. 5 301–29; 2004 Inverse Problems 20 259–82) for inverse obstacle problems. The optimization is constrained by the reservoir model, a nonlinear large-scale system of PDEs that describes the reservoir dynamics. We reformulate this reservoir model in a weak (integral) form whose shape derivative can be formally computed from standard results of shape calculus. At each iteration of the scheme, the current estimate of the shape derivative is used to define a velocity in the level-set equation. The proper selection of this velocity ensures that the new shape decreases the cost functional. We present results of facies identification where the velocity is computed with the gradient-based (GB) approach of Burger (2002) and the Levenberg–Marquardt (LM) technique of Burger (2004). While an adjoint formulation allows the straightforward application of the GB approach, the LM technique requires the computation of the large-scale Karush–Kuhn–Tucker system that arises at each iteration of the scheme. We solve this system efficiently by means of the representer method. We present synthetic experiments to show and compare the capabilities and limitations of the proposed implementations of level-set techniques for the identification of geologic facies.
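    The core update these records describe — a velocity advecting the level set — can be illustrated with a 1-D toy step. This is a generic sketch of the level-set equation φ_t + V|∇φ| = 0 with Godunov upwinding, not the authors' reservoir implementation:

```python
import numpy as np

def level_set_step(phi, speed, dx, dt):
    """One Godunov-upwind step of phi_t + V |grad phi| = 0 on a 1-D grid.

    Generic sketch: a positive scalar speed V expands the region where
    phi < 0 by moving the zero level set outward.
    """
    dminus = np.diff(phi, prepend=phi[:1]) / dx  # backward differences
    dplus = np.diff(phi, append=phi[-1:]) / dx   # forward differences
    grad_pos = np.sqrt(np.maximum(dminus, 0.0) ** 2 + np.minimum(dplus, 0.0) ** 2)
    grad_neg = np.sqrt(np.minimum(dminus, 0.0) ** 2 + np.maximum(dplus, 0.0) ** 2)
    grad = np.where(speed > 0, grad_pos, grad_neg)
    return phi - dt * speed * grad

x = np.linspace(-1.0, 1.0, 201)
phi = np.abs(x) - 0.5  # zero level set at x = +/- 0.5
phi_new = level_set_step(phi, speed=0.1, dx=x[1] - x[0], dt=0.05)
```

    With a positive speed the zero level set (here at x = ±0.5) moves outward; in the facies-identification setting the speed would instead be built from the shape derivative so that each step decreases the cost functional.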

  5. A new level set model for cell image segmentation

    International Nuclear Information System (INIS)

    Ma Jing-Feng; Chen Chun; Hou Kai; Bao Shang-Lian

    2011-01-01

    In this paper we first determine three phases of cell images: background, cytoplasm and nucleolus according to the general physical characteristics of cell images, and then develop a variational model, based on these characteristics, to segment nucleolus and cytoplasm from their relatively complicated backgrounds. In the meantime, the preprocessing obtained information of cell images using the OTSU algorithm is used to initialize the level set function in the model, which can speed up the segmentation and present satisfactory results in cell image processing. (cross-disciplinary physics and related areas of science and technology)

  6. A Variational Level Set Model Combined with FCMS for Image Clustering Segmentation

    Directory of Open Access Journals (Sweden)

    Liming Tang

    2014-01-01

    The fuzzy C-means clustering algorithm with spatial constraint (FCMS) is effective for image segmentation. However, it lacks essential smoothing constraints on the cluster boundaries and sufficient robustness to noise. Samson et al. proposed a variational level set model for image clustering segmentation, which can obtain smooth cluster boundaries and closed cluster regions through the use of a level set scheme. However, it is very sensitive to noise since it is actually a hard C-means clustering model. In this paper, based on Samson’s work, we propose a new variational level set model combined with FCMS for image clustering segmentation. Compared with FCMS clustering, the proposed model obtains smooth cluster boundaries and closed cluster regions through the use of a level set scheme. In addition, a block-based energy is incorporated into the energy functional, which makes the proposed model more robust to noise than FCMS clustering and Samson’s model. Experiments on synthetic and real images are performed to assess the performance of the proposed model. Compared with some classical image segmentation models, the proposed model performs better on images contaminated by different noise levels.
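    For reference, the clustering core that FCMS extends is plain fuzzy C-means. The sketch below is our own illustration of that core on 1-D data (FCMS adds a spatial-constraint term, omitted here), alternating the standard membership and center updates:

```python
import numpy as np

def fcm(data, c=2, m=2.0, iters=50):
    """Plain fuzzy C-means on 1-D data (no spatial term).

    u[i, k] is the membership of point k in cluster i; m > 1 controls
    fuzziness. Centers are initialized deterministically from quantiles.
    """
    centers = np.quantile(data, np.linspace(0.0, 1.0, c))
    for _ in range(iters):
        d = np.abs(data[None, :] - centers[:, None]) + 1e-12  # (c, n) distances
        u = 1.0 / np.sum((d[:, None, :] / d[None, :, :]) ** (2.0 / (m - 1.0)), axis=1)
        centers = (u ** m @ data) / np.sum(u ** m, axis=1)    # weighted means
    return centers, u

data = np.concatenate([np.full(20, 0.0), np.full(20, 10.0)])
centers, u = fcm(data)
```

    The "hard" C-means limit that makes Samson's model noise-sensitive corresponds to memberships collapsing to 0/1; the fuzzy memberships u are what the proposed model retains.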

  7. Two-phase electro-hydrodynamic flow modeling by a conservative level set model.

    Science.gov (United States)

    Lin, Yuan

    2013-03-01

    The principles of electro-hydrodynamic (EHD) flow have been known for more than a century and have been adopted for various industrial applications, for example, fluid mixing and demixing. Analytical solutions of such EHD flow exist only in a limited number of scenarios, for example, predicting a small deformation of a single droplet in a uniform electric field. Numerical modeling of such phenomena can provide significant insights into multiphase EHD flows. During the last decade, many numerical results have been reported that provide novel and useful tools for studying multiphase EHD flow. Based on a conservative level set method, the proposed model is able to simulate large deformations of a droplet by a steady electric field, which is beyond the regime of theoretical prediction. The model is validated for both leaky dielectrics and perfect dielectrics, and is found to be in excellent agreement with existing analytical solutions and numerical studies in the literature. Furthermore, simulations of the deformation of a water droplet in decyl alcohol in a steady electric field match published experimental data better than the theoretical prediction for large deformations. Therefore the proposed model can serve as a practical and accurate tool for simulating two-phase EHD flow. © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  8. A thick level set interface model for simulating fatigue-driven delamination in composites

    NARCIS (Netherlands)

    Latifi, M.; Van der Meer, F.P.; Sluys, L.J.

    2015-01-01

    This paper presents a new damage model for simulating fatigue-driven delamination in composite laminates. This model is developed based on the Thick Level Set approach (TLS) and provides a favorable link between damage mechanics and fracture mechanics through the non-local evaluation of the energy

  9. INTEGRATED SFM TECHNIQUES USING DATA SET FROM GOOGLE EARTH 3D MODEL AND FROM STREET LEVEL

    Directory of Open Access Journals (Sweden)

    L. Inzerillo

    2017-08-01

    Structure from motion (SfM) is a widespread photogrammetric method that applies photogrammetric principles to build a 3D model from a photo data set. For some complex historic buildings, such as cathedrals, theatres or castles, the data set acquired from street level must be complemented with a UAV data set in order to reconstruct the roof in 3D. Nevertheless, the use of UAVs is strongly limited by government regulations. In recent years, Google Earth (GE) has been enriched with 3D models of sites around the earth. For this reason, it seemed worthwhile to test the potential offered by GE to extract from it a data set that replaces the UAV acquisition and completes the aerial part of the building data set, using screen images of its high-resolution 3D models. Users can take unlimited “aerial photos” of a scene while flying around in GE at any viewing angle and altitude. The challenge is to verify the metric reliability of the SfM model carried out with an integrated data set (one from street level and one from GE), aimed at replacing UAV use in urban contexts. This model is called the integrated GE SfM model (i-GESfM). In this paper we present a case study: the Cathedral of Palermo.

  10. Aerostructural Level Set Topology Optimization for a Common Research Model Wing

    Science.gov (United States)

    Dunning, Peter D.; Stanford, Bret K.; Kim, H. Alicia

    2014-01-01

    The purpose of this work is to use level set topology optimization to improve the design of a representative wing box structure for the NASA common research model. The objective is to minimize the total compliance of the structure under aerodynamic and body force loading, where the aerodynamic loading is coupled to the structural deformation. A taxi bump case was also considered, where only body force loads were applied. The trim condition that aerodynamic lift must balance the total weight of the aircraft is enforced by allowing the root angle of attack to change. The level set optimization method is implemented on an unstructured three-dimensional grid, so that the method can optimize a wing box with arbitrary geometry. Fast marching and upwind schemes are developed for an unstructured grid, which make the level set method robust and efficient. The adjoint method is used to obtain the coupled shape sensitivities required to perform aerostructural optimization of the wing box structure.

  11. Numerical Modelling of Three-Fluid Flow Using The Level-set Method

    Science.gov (United States)

    Li, Hongying; Lou, Jing; Shang, Zhi

    2014-11-01

    This work presents a numerical model for simulation of three-fluid flow involving two different moving interfaces. These interfaces are captured using the level-set method via two different level-set functions. A combined formulation with only one set of conservation equations for the whole physical domain, consisting of the three different immiscible fluids, is employed. Numerical solution is performed on a fixed mesh using the finite volume method. Surface tension effect is incorporated using the Continuum Surface Force model. Validation of the present model is made against available results for stratified flow and rising bubble in a container with a free surface. Applications of the present model are demonstrated by a variety of three-fluid flow systems including (1) three-fluid stratified flow, (2) two-fluid stratified flow carrying the third fluid in the form of drops and (3) simultaneous rising and settling of two drops in a stationary third fluid. The work is supported by a Thematic and Strategic Research from A*STAR, Singapore (Ref. #: 1021640075).
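    A "combined formulation with one set of conservation equations" typically blends material properties through smoothed Heaviside functions of the level sets. The sketch below is a hypothetical illustration of such a three-fluid density field built from two level-set functions (the function names and the interface half-width `eps` are ours, not the paper's):

```python
import numpy as np

def smoothed_heaviside(phi, eps):
    """Smoothed Heaviside: 0 below -eps, 1 above +eps, smooth in between.
    Used to blend material properties over an interface of width ~2*eps."""
    h = 0.5 * (1.0 + phi / eps + np.sin(np.pi * phi / eps) / np.pi)
    return np.where(phi > eps, 1.0, np.where(phi < -eps, 0.0, h))

def blended_density(phi1, phi2, rho1, rho2, rho3, eps=0.1):
    """One density field for three fluids from two level sets:
    fluid 1 where phi1 > 0; else fluid 2 where phi2 > 0; else fluid 3."""
    h1 = smoothed_heaviside(phi1, eps)
    h2 = smoothed_heaviside(phi2, eps)
    return h1 * rho1 + (1.0 - h1) * (h2 * rho2 + (1.0 - h2) * rho3)

# Three sample points, one inside each fluid (water, oil, air densities)
rho = blended_density(np.array([1.0, -1.0, -1.0]),
                      np.array([0.0, 1.0, -1.0]),
                      rho1=1000.0, rho2=800.0, rho3=1.2)
```

    Viscosity would be blended the same way, so a single momentum equation can be solved over the whole domain.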

  12. Modeling Restrained Shrinkage Induced Cracking in Concrete Rings Using the Thick Level Set Approach

    Directory of Open Access Journals (Sweden)

    Rebecca Nakhoul

    2018-03-01

    Modeling restrained shrinkage-induced damage and cracking in concrete is addressed herein. The novel Thick Level Set (TLS) damage growth and crack propagation model is used and adapted by introducing a shrinkage contribution into the formulation. The capacity of the TLS to predict damage evolution, crack initiation and growth triggered by restrained shrinkage in the absence of external loads is evaluated. A study dealing with shrinkage-induced cracking in elliptical concrete rings is presented. Key results, such as the effect of ring oblateness on the stress distribution and on the critical shrinkage strain needed to initiate damage, are highlighted. In addition, crack positions are compared to those observed in experiments and are found to be satisfactory.

  13. HPC in Basin Modeling: Simulating Mechanical Compaction through Vertical Effective Stress using Level Sets

    Science.gov (United States)

    McGovern, S.; Kollet, S. J.; Buerger, C. M.; Schwede, R. L.; Podlaha, O. G.

    2017-12-01

    In the context of sedimentary basins, we present a model for the simulation of the movement of a geological formation (layers) during the evolution of the basin through sedimentation and compaction processes. Assuming a single-phase saturated porous medium for the sedimentary layers, the model focuses on the tracking of the layer interfaces, through the use of the level set method, as sedimentation drives fluid flow and reduction of pore space by compaction. On the assumption of Terzaghi's effective stress concept, the coupling of the pore fluid pressure to the motion of interfaces in 1-D is presented in McGovern et al. (2017) [1]. The current work extends the spatial domain to 3-D, though we maintain the assumption of vertical effective stress to drive the compaction. The idealized geological evolution is conceptualized as the motion of interfaces between rock layers, whose paths are determined by the magnitude of a speed function in the direction normal to the evolving layer interface. The speeds normal to the interface are dependent on the change in porosity, determined through an effective stress-based compaction law, such as the exponential Athy's law. Provided with the speeds normal to the interface, the level set method uses an advection equation to evolve a potential function, whose zero level set defines the interface. Thus, the moving layer geometry influences the pore pressure distribution, which couples back to the interface speeds. The flexible construction of the speed function allows extension, in the future, to other terms to represent different physical processes, analogous to how the compaction rule represents material deformation. The 3-D model is implemented using the generic finite element framework deal.II, which provides tools, building on p4est and interfacing to PETSc, for the massively parallel distributed solution of the model equations [2]. Experiments are being run on the Juelich Supercomputing Centre's Jureca cluster. [1] McGovern et al. (2017
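    The effective stress-based compaction law mentioned above has a simple closed form. A sketch with illustrative constants (the surface porosity `phi0` and decay constant `k` below are placeholder values, not the paper's calibration):

```python
import numpy as np

def athy_porosity(sigma_eff, phi0=0.6, k=5e-8):
    """Athy-type exponential compaction law: porosity decays with
    vertical effective stress sigma_eff (Pa). Illustrative constants."""
    return phi0 * np.exp(-k * np.asarray(sigma_eff, dtype=float))

sigma = np.array([0.0, 1.0e7, 2.0e7])  # increasing burial stress, Pa
por = athy_porosity(sigma)             # porosity shrinks with burial
```

    In the basin model this porosity change sets the interface-normal speed that the level-set advection equation then propagates.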

  14. On the modeling of bubble evolution and transport using coupled level-set/CFD method

    International Nuclear Information System (INIS)

    Bartlomiej Wierzbicki; Steven P Antal; Michael Z Podowski

    2005-01-01

    The ability to predict the shape of gas/liquid/solid interfaces is important for various multiphase flow and heat transfer applications. Specific issues of interest to nuclear reactor thermal-hydraulics include the evolution of the shape of bubbles attached to solid surfaces during nucleation, bubble surface interactions in complex geometries, etc. Additional problems, making the overall task even more complicated, are associated with the effect of material properties that may be significantly altered by the addition of minute amounts of impurities, such as surfactants or nano-particles. The present paper is concerned with the development of an innovative approach to model the time-dependent shape of gas/liquid interfaces in the presence of solid walls. The proposed approach combines a modified level-set method with an advanced CFD code, NPHASE. The coupled numerical solver can be used to simulate the evolution of gas/liquid interfaces in two-phase flows for a variety of geometries and flow conditions, from individual bubbles to free surfaces (stratified flows). The issues discussed in the full paper will include: a description of the novel aspects of the proposed level-set-based method, an overview of the NPHASE code modeling framework, and a description of the coupling between these two elements of the overall model. Particular attention will be given to the consistency and completeness of the model formulation for the interfacial phenomena near the liquid/gas/solid triple line, and to the impact of the proposed numerical approach on the accuracy and consistency of predictions. The accuracy will be measured in terms of both the calculated shape of the interfaces and the gas and liquid velocity fields around the interfaces and in the entire computational domain. The results of model testing and validation will also be shown in the full paper.
The situations analyzed will include: bubbles of different sizes and varying

  15. Application of physiologically based pharmacokinetic modeling in setting acute exposure guideline levels for methylene chloride.

    NARCIS (Netherlands)

    Bos, Peter Martinus Jozef; Zeilmaker, Marco Jacob; Eijkeren, Jan Cornelis Henri van

    2006-01-01

    Acute exposure guideline levels (AEGLs) are derived to protect the human population from adverse health effects in case of single exposure due to an accidental release of chemicals into the atmosphere. AEGLs are set at three different levels of increasing toxicity for exposure durations ranging from

  16. A finite element/level set model of polyurethane foam expansion and polymerization

    Energy Technology Data Exchange (ETDEWEB)

    Rao, Rekha R. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Long, Kevin Nicholas [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Roberts, Christine Cardinal [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Celina, Mathias C. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Brunini, Victor [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Soehnel, Melissa Marie [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Noble, David R. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Tinsley, James [Honeywell Federal Manufacturing & Technologies, Kansas City, MO (United States); Mondy, Lisa [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-12-01

    Polyurethane foams are used widely for encapsulation and structural purposes because they are inexpensive, straightforward to process, amenable to a wide range of density variations (1 lb/ft3 - 50 lb/ft3), and able to fill complex molds quickly and effectively. Computational models of the filling and curing process are needed to reduce defects such as voids, out-of-specification density, density gradients, foam decomposition from high temperatures due to exotherms, and incomplete filling. This paper details the development of a computational fluid dynamics model of a moderate-density PMDI structural foam, PMDI-10. PMDI is an isocyanate-based polyurethane foam, which is chemically blown with water. The polyol reacts with isocyanate to produce the polymer. PMDI-10 is catalyzed, giving it a short pot life: it foams and polymerizes to a solid within 5 minutes during normal processing. To achieve a higher density, the foam is over-packed to twice or more of its free-rise density of 10 lb/ft3. The goal of the modeling is to represent the expansion, mold filling, and polymerization of the foam. This will be used to reduce defects, optimize the mold design, troubleshoot the process, and predict the final foam properties. A homogenized continuum model of foaming and curing was developed based on reaction kinetics, documented in a recent paper; it uses a simplified mathematical formalism that decouples these two reactions. The chemo-rheology of PMDI is measured experimentally and fit to a generalized-Newtonian viscosity model that depends on the extent of cure, gas fraction, and temperature. The conservation equations, including the equations of motion, an energy balance, and three rate equations, are solved via a stabilized finite element method. The equations are combined with a level set method to determine the location of the foam-gas interface as it evolves to fill the mold.
Understanding the thermal history and loads on the foam due to exothermicity and oven

  17. Modeling of Two-Phase Flow in Rough-Walled Fracture Using Level Set Method

    Directory of Open Access Journals (Sweden)

    Yunfeng Dai

    2017-01-01

    To describe accurately the flow characteristics of fracture-scale displacements of immiscible fluids, an incompressible two-phase (crude oil and water) flow model incorporating interfacial forces and nonzero contact angles is developed. The roughness of the two-dimensional synthetic rough-walled fractures is controlled with different fractal dimension parameters. Described by the Navier–Stokes equations, the moving interface between crude oil and water is tracked using the level set method. The method accounts for differences in densities and viscosities of crude oil and water and includes the effect of interfacial force. The wettability of the rough fracture wall is taken into account by defining the contact angle and slip length. The curve of invasion pressure versus water volume fraction is generated by modeling two-phase flow during a sudden drainage. The volume fraction of water retained in the rough-walled fracture is calculated by integrating the water volume and dividing by the total cavity volume of the fracture while the two-phase flow is quasistatic. The effect of the invasion pressure of crude oil, the roughness of the fracture wall, and the wettability of the wall on two-phase flow in a rough-walled fracture is evaluated.

  18. Toward accurate tooth segmentation from computed tomography images using a hybrid level set model

    Energy Technology Data Exchange (ETDEWEB)

    Gan, Yangzhou; Zhao, Qunfei [Department of Automation, Shanghai Jiao Tong University, and Key Laboratory of System Control and Information Processing, Ministry of Education of China, Shanghai 200240 (China); Xia, Zeyang, E-mail: zy.xia@siat.ac.cn, E-mail: jing.xiong@siat.ac.cn; Hu, Ying [Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, and The Chinese University of Hong Kong, Shenzhen 518055 (China); Xiong, Jing, E-mail: zy.xia@siat.ac.cn, E-mail: jing.xiong@siat.ac.cn [Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 510855 (China); Zhang, Jianwei [TAMS, Department of Informatics, University of Hamburg, Hamburg 22527 (Germany)

    2015-01-15

    Purpose: A three-dimensional (3D) model of the teeth provides important information for orthodontic diagnosis and treatment planning. Tooth segmentation is an essential step in generating the 3D digital model from computed tomography (CT) images. The aim of this study is to develop an accurate and efficient tooth segmentation method from CT images. Methods: The 3D dental CT volumetric images are segmented slice by slice in a two-dimensional (2D) transverse plane. The 2D segmentation is composed of a manual initialization step and an automatic slice by slice segmentation step. In the manual initialization step, the user manually picks a starting slice and selects a seed point for each tooth in this slice. In the automatic slice segmentation step, a developed hybrid level set model is applied to segment tooth contours from each slice. Tooth contour propagation strategy is employed to initialize the level set function automatically. Cone beam CT (CBCT) images of two subjects were used to tune the parameters. Images of 16 additional subjects were used to validate the performance of the method. Volume overlap metrics and surface distance metrics were adopted to assess the segmentation accuracy quantitatively. The volume overlap metrics were volume difference (VD, mm³) and Dice similarity coefficient (DSC, %). The surface distance metrics were average symmetric surface distance (ASSD, mm), RMS (root mean square) symmetric surface distance (RMSSSD, mm), and maximum symmetric surface distance (MSSD, mm). Computation time was recorded to assess the efficiency. The performance of the proposed method has been compared with two state-of-the-art methods. Results: For the tested CBCT images, the VD, DSC, ASSD, RMSSSD, and MSSD for the incisor were 38.16 ± 12.94 mm³, 88.82 ± 2.14%, 0.29 ± 0.03 mm, 0.32 ± 0.08 mm, and 1.25 ± 0.58 mm, respectively; the VD, DSC, ASSD, RMSSSD, and MSSD for the canine were 49.12 ± 9.33 mm³, 91.57 ± 0.82%, 0.27 ± 0.02 mm, 0
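    Of the reported metrics, the Dice similarity coefficient is the simplest to state: DSC = 2|A∩B| / (|A| + |B|) for binary masks A and B. A small self-contained check (our illustration, not the authors' evaluation code):

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks:
    DSC = 2|A intersect B| / (|A| + |B|), in [0, 1]."""
    a, b = np.asarray(a, dtype=bool), np.asarray(b, dtype=bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

a = np.zeros((4, 4), dtype=bool); a[1:3, 1:3] = True  # 4-pixel square
b = np.zeros((4, 4), dtype=bool); b[1:3, 1:4] = True  # 6-pixel rectangle
```

    Here the overlap is 4 pixels, so dice(a, b) = 2·4 / (4 + 6) = 0.8; identical masks score 1.0.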

  19. Toward accurate tooth segmentation from computed tomography images using a hybrid level set model

    International Nuclear Information System (INIS)

    Gan, Yangzhou; Zhao, Qunfei; Xia, Zeyang; Hu, Ying; Xiong, Jing; Zhang, Jianwei

    2015-01-01

    Purpose: A three-dimensional (3D) model of the teeth provides important information for orthodontic diagnosis and treatment planning. Tooth segmentation is an essential step in generating the 3D digital model from computed tomography (CT) images. The aim of this study is to develop an accurate and efficient tooth segmentation method from CT images. Methods: The 3D dental CT volumetric images are segmented slice by slice in a two-dimensional (2D) transverse plane. The 2D segmentation is composed of a manual initialization step and an automatic slice by slice segmentation step. In the manual initialization step, the user manually picks a starting slice and selects a seed point for each tooth in this slice. In the automatic slice segmentation step, a developed hybrid level set model is applied to segment tooth contours from each slice. Tooth contour propagation strategy is employed to initialize the level set function automatically. Cone beam CT (CBCT) images of two subjects were used to tune the parameters. Images of 16 additional subjects were used to validate the performance of the method. Volume overlap metrics and surface distance metrics were adopted to assess the segmentation accuracy quantitatively. The volume overlap metrics were volume difference (VD, mm³) and Dice similarity coefficient (DSC, %). The surface distance metrics were average symmetric surface distance (ASSD, mm), RMS (root mean square) symmetric surface distance (RMSSSD, mm), and maximum symmetric surface distance (MSSD, mm). Computation time was recorded to assess the efficiency. The performance of the proposed method has been compared with two state-of-the-art methods. Results: For the tested CBCT images, the VD, DSC, ASSD, RMSSSD, and MSSD for the incisor were 38.16 ± 12.94 mm³, 88.82 ± 2.14%, 0.29 ± 0.03 mm, 0.32 ± 0.08 mm, and 1.25 ± 0.58 mm, respectively; the VD, DSC, ASSD, RMSSSD, and MSSD for the canine were 49.12 ± 9.33 mm³, 91.57 ± 0.82%, 0.27 ± 0.02 mm, 0.28 ± 0.03 mm

  20. The Model Confidence Set

    DEFF Research Database (Denmark)

    Hansen, Peter Reinhard; Lunde, Asger; Nason, James M.

    The paper introduces the model confidence set (MCS) and applies it to the selection of models. A MCS is a set of models that is constructed such that it will contain the best model with a given level of confidence. The MCS is in this sense analogous to a confidence interval for a parameter. The MCS …, beyond the comparison of models. We apply the MCS procedure to two empirical problems. First, we revisit the inflation forecasting problem posed by Stock and Watson (1999), and compute the MCS for their set of inflation forecasts. Second, we compare a number of Taylor rule regressions and determine … the MCS of the best in terms of in-sample likelihood criteria …

  1. An Accurate Fire-Spread Algorithm in the Weather Research and Forecasting Model Using the Level-Set Method

    Science.gov (United States)

    Muñoz-Esparza, Domingo; Kosović, Branko; Jiménez, Pedro A.; Coen, Janice L.

    2018-04-01

    The level-set method is typically used to track and propagate the fire perimeter in wildland fire models. Herein, a high-order level-set method using a fifth-order WENO scheme for the discretization of spatial derivatives and third-order explicit Runge-Kutta temporal integration is implemented within the Weather Research and Forecasting model wildland fire physics package, WRF-Fire. The algorithm includes solution of an additional partial differential equation for level-set reinitialization. The accuracy of the fire-front shape and rate of spread in uncoupled simulations is systematically analyzed. It is demonstrated that the common implementation used by level-set-based wildfire models yields rate-of-spread errors in the range 10-35% for typical grid sizes (Δ = 12.5-100 m) and considerably underestimates fire area. Moreover, the amplitude of fire-front gradients in the presence of explicitly resolved turbulence features is systematically underestimated. In contrast, the new WRF-Fire algorithm results in rate-of-spread errors that are lower than 1% and that become nearly grid independent. Also, the underestimation of fire area at the sharp transition between the fire front and the lateral flanks is found to be reduced by a factor of ≈7. A hybrid-order level-set method with locally reduced artificial viscosity is proposed, which substantially alleviates the computational cost associated with high-order discretizations while preserving accuracy. Simulations of the Last Chance wildfire demonstrate additional benefits of high-order accurate level-set algorithms when dealing with complex fuel heterogeneities, enabling propagation across narrow fuel gaps and more accurate fire backing over the lee side of no-fuel clusters.
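    The reinitialization PDE mentioned in the abstract, φ_τ + S(φ₀)(|∇φ| − 1) = 0, drives the level set toward a signed-distance function. Below is a first-order 1-D sketch with standard Godunov upwinding — a generic illustration, not WRF-Fire's WENO5/RK3 discretization:

```python
import numpy as np

def reinitialize(phi, dx, iters=100):
    """Pseudo-time iteration of phi_tau + S(phi0) (|grad phi| - 1) = 0,
    driving phi toward a signed-distance function (|grad phi| = 1),
    while the zero level set of the initial field stays (nearly) fixed."""
    dtau = 0.5 * dx
    s = phi / np.sqrt(phi ** 2 + dx ** 2)  # smoothed sign of the initial field
    for _ in range(iters):
        dminus = np.diff(phi, prepend=phi[:1]) / dx
        dplus = np.diff(phi, append=phi[-1:]) / dx
        ap, am = np.maximum(dminus, 0.0), np.minimum(dminus, 0.0)
        bp, bm = np.maximum(dplus, 0.0), np.minimum(dplus, 0.0)
        grad = np.where(s > 0,
                        np.sqrt(np.maximum(ap ** 2, bm ** 2)),   # Godunov,
                        np.sqrt(np.maximum(am ** 2, bp ** 2)))   # sign-dependent
        phi = phi - dtau * s * (grad - 1.0)
    return phi

x = np.linspace(-1.0, 1.0, 101)
phi_sd = reinitialize(3.0 * x, dx=x[1] - x[0])  # zero set at x = 0, slope 3
```

    After the pseudo-time iteration the field has unit slope, i.e. it approximates the signed distance to the interface at x = 0.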

  2. Introduction to the level-set full field modeling of laths spheroidization phenomenon in α/β titanium alloys

    Directory of Open Access Journals (Sweden)

    Polychronopoulou D.

    2016-01-01

    Full Text Available Fragmentation of α lamellae and subsequent spheroidization of α laths in α/β titanium alloys occurring during and after deformation are well known phenomena. We will illustrate the development of a new finite element methodology to model them. This new methodology is based on a level set framework to model the deformation and the ad hoc simultaneous and/or subsequent interface kinetics. We focus, for now, on the modeling of surface diffusion at the α/β phase interfaces and motion by mean curvature at the α/α grain interfaces.

  3. Fast Sparse Level Sets on Graphics Hardware

    NARCIS (Netherlands)

    Jalba, Andrei C.; Laan, Wladimir J. van der; Roerdink, Jos B.T.M.

    The level-set method is one of the most popular techniques for capturing and tracking deformable interfaces. Although level sets have demonstrated great potential in visualization and computer graphics applications, such as surface editing and physically based modeling, their use for interactive

  4. Archaeological predictive model set.

    Science.gov (United States)

    2015-03-01

    This report is the documentation for Task 7 of the Statewide Archaeological Predictive Model Set. The goal of this project is to : develop a set of statewide predictive models to assist the planning of transportation projects. PennDOT is developing t...

  5. Economic communication model set

    Science.gov (United States)

    Zvereva, Olga M.; Berg, Dmitry B.

    2017-06-01

    This paper details findings from research on economic communications using agent-based models. The agent-based model set was engineered to simulate economic communications. Money in the form of internal and external currencies was introduced into the models to support exchanges. Every model, while based on the same general concept, has its own peculiarities in algorithm and input data, since each was engineered to solve a specific problem. Several data sets of different origins were used in the experiments: theoretical sets were estimated on the basis of the static Leontief equilibrium equation, while the real set was constructed from statistical data. During the simulation experiments, the communication process was observed in dynamics and system macroparameters were estimated. This research confirmed that combining an agent-based model with a mathematical model can produce a synergetic effect.
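
    The money-conserving exchange dynamics described above can be sketched, under invented parameters, as a toy agent-based loop (not the authors' model set): agents trade a fixed stock of internal currency in random pairwise transfers, so the total is conserved while its distribution across agents evolves.

    ```python
    import random

    random.seed(42)
    n_agents, rounds, stake = 20, 1000, 1.0
    balance = [100.0] * n_agents           # initial endowment of internal currency

    for _ in range(rounds):
        payer, payee = random.sample(range(n_agents), 2)
        amount = min(stake, balance[payer])  # cannot spend more than you hold
        balance[payer] -= amount
        balance[payee] += amount

    total = sum(balance)                     # conserved macroparameter
    spread = max(balance) - min(balance)     # inequality emerging from exchange
    ```

    Conservation of the currency stock is the kind of macroparameter one monitors while the micro-level communication process runs.
    
    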

  6. Continuous soil maps - a fuzzy set approach to bridge the gap between aggregation levels of process and distribution models

    NARCIS (Netherlands)

    Gruijter, de J.J.; Walvoort, D.J.J.; Gaans, van P.F.M.

    1997-01-01

    Soil maps as multi-purpose models of spatial soil distribution have a much higher level of aggregation (map units) than the models of soil processes and land-use effects that need input from soil maps. This mismatch between aggregation levels is particularly detrimental in the context of precision

  7. Automatic optimal filament segmentation with sub-pixel accuracy using generalized linear models and B-spline level-sets.

    Science.gov (United States)

    Xiao, Xun; Geyer, Veikko F; Bowne-Anderson, Hugo; Howard, Jonathon; Sbalzarini, Ivo F

    2016-08-01

    Biological filaments, such as actin filaments, microtubules, and cilia, are often imaged using different light-microscopy techniques. Reconstructing the filament curve from the acquired images constitutes the filament segmentation problem. Since filaments have lower dimensionality than the image itself, there is an inherent trade-off between tracing the filament with sub-pixel accuracy and avoiding noise artifacts. Here, we present a globally optimal filament segmentation method based on B-spline vector level-sets and a generalized linear model for the pixel intensity statistics. We show that the resulting optimization problem is convex and can hence be solved with global optimality. We introduce a simple and efficient algorithm to compute such optimal filament segmentations, and provide an open-source implementation as an ImageJ/Fiji plugin. We further derive an information-theoretic lower bound on the filament segmentation error, quantifying how well an algorithm could possibly do given the information in the image. We show that our algorithm asymptotically reaches this bound in the spline coefficients. We validate our method in comprehensive benchmarks, compare with other methods, and show applications from fluorescence, phase-contrast, and dark-field microscopy. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.

  8. Economic comparison of food, non food crops, set-aside at a regional level with a linear programming model

    International Nuclear Information System (INIS)

    Sourie, J.C.; Hautcolas, J.C.; Blanchet, J.

    1992-01-01

    This paper is concerned with a regional linear programming model. Its purpose is a simulation of the European Economic Community supply of non-food crops at the farm gate according to different sets of European Common Agriculture Policy (CAP) measures. The methodology is first described, with special emphasis on the aggregation problem. The model allows the simultaneous calculation of the impact of non-food crops on the farmer's income and on the agricultural budget. The model is then applied to an intensive agricultural region (400 000 ha of arable land). In this region, sugar beet and rape appear to be the least costly resources, both for the farmers and for the CAP taxpayers. An improvement of the economic situation of these two agents can be obtained only if a tax exemption on ethanol and rape oil and a subsidy per hectare are allowed. This subsidy can be lower than the set-aside premium. (author)
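
    A drastically simplified version of such a crop-allocation programme can show the mechanics: two activities with invented gross margins, a land constraint and a sugar-beet quota. Since a linear programme attains its optimum at a vertex of the feasible region, a toy instance can be solved by enumerating the vertices directly.

    ```python
    # Illustrative numbers only (not from the paper): gross margins per ha,
    # a 30 ha sugar-beet quota and 100 ha of arable land.
    margin = {"sugar_beet": 400.0, "rapeseed": 250.0}

    def profit(sb, rp):
        return margin["sugar_beet"] * sb + margin["rapeseed"] * rp

    # Vertices of {sb >= 0, rp >= 0, sb <= 30, sb + rp <= 100}
    vertices = [(0, 0), (30, 0), (0, 100), (30, 70)]
    best = max(vertices, key=lambda v: profit(*v))
    best_profit = profit(*best)
    ```

    The optimum fills the quota and plants the remaining land with the second crop; a real regional model has many more activities and constraints and would use an LP solver rather than vertex enumeration.
    
    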

  9. On reinitializing level set functions

    Science.gov (United States)

    Min, Chohong

    2010-04-01

    In this paper, we consider reinitializing level set functions through the equation ϕt + sgn(ϕ0)(‖∇ϕ‖ - 1) = 0 [16]. The method of Russo and Smereka [11] is taken in the spatial discretization of the equation. The spatial discretization is, simply speaking, the second order ENO finite difference with subcell resolution near the interface. Our main interest is in the temporal discretization of the equation. We compare three temporal discretizations: the second order Runge-Kutta method, the forward Euler method, and a Gauss-Seidel iteration of the forward Euler method. The fact that the time in the equation is fictitious suggests the hypothesis that all the temporal discretizations yield the same result in their stationary states. The fact that the absolute stability region of the forward Euler method is not wide enough to include all the eigenvalues of the linearized semi-discrete system of the second order ENO spatial discretization suggests another hypothesis, that the forward Euler temporal discretization should invoke numerical instability. Our results in this paper contradict both hypotheses. The Runge-Kutta and Gauss-Seidel methods obtain second order accuracy, and the forward Euler method converges with order between one and two. Examining all their properties, we conclude that the Gauss-Seidel method is the best of the three. Compared to the Runge-Kutta method, it is twice as fast and requires half the memory for the same accuracy.
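
    The reinitialization equation ϕt + sgn(ϕ0)(‖∇ϕ‖ - 1) = 0 can be sketched in 1-D with plain forward Euler pseudo-time stepping and Godunov upwinding. This is an illustrative toy, not the paper's second-order ENO scheme with subcell resolution; the grid is built so that the interface sits exactly on a node, side-stepping the interface-drift issue that the subcell fix addresses.

    ```python
    import numpy as np

    dx, dtau = 0.02, 0.01
    x = (np.arange(101) - 50) * dx       # grid with an exact node at x = 0
    phi0 = 3.0 * x                       # correct zero crossing, wrong slope
    phi = phi0.copy()
    s = np.sign(phi0)                    # frozen sign of the initial function

    for _ in range(400):
        diff = (phi[1:] - phi[:-1]) / dx
        dminus = np.concatenate(([diff[0]], diff))   # crude one-sided boundaries
        dplus = np.concatenate((diff, [diff[-1]]))
        # Godunov upwinding of |phi_x|, switching with the sign of phi0
        gp = np.sqrt(np.maximum(np.maximum(dminus, 0) ** 2,
                                np.minimum(dplus, 0) ** 2))
        gm = np.sqrt(np.maximum(np.minimum(dminus, 0) ** 2,
                                np.maximum(dplus, 0) ** 2))
        grad = np.where(s > 0, gp, gm)
        phi = phi - dtau * s * (grad - 1.0)

    err = np.max(np.abs(phi - x))        # exact signed distance here is x itself
    ```

    Starting from a function with the right zero crossing but slope 3, the fictitious-time iteration relaxes ‖∇ϕ‖ toward 1 while the interface node stays pinned, so the stationary state approaches the signed distance function.
    
    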

  10. Settings for Physical Activity – Developing a Site-specific Physical Activity Behavior Model based on Multi-level Intervention Studies

    DEFF Research Database (Denmark)

    Troelsen, Jens; Klinker, Charlotte Demant; Breum, Lars

    Settings for Physical Activity – Developing a Site-specific Physical Activity Behavior Model based on Multi-level Intervention Studies Introduction: Ecological models of health behavior have potential as theoretical framework to comprehend the multiple levels of factors influencing physical...... to be taken into consideration. A theoretical implication of this finding is to develop a site-specific physical activity behavior model adding a layered structure to the ecological model representing the determinants related to the specific site. Support: This study was supported by TrygFonden, Realdania...... activity (PA). The potential is shown by the fact that there has been a dramatic increase in application of ecological models in research and practice. One proposed core principle is that an ecological model is most powerful if the model is behavior-specific. However, based on multi-level interventions...

  11. Ultrasonic scalpel causes greater depth of soft tissue necrosis compared to monopolar electrocautery at standard power level settings in a pig model.

    Science.gov (United States)

    Homayounfar, Kia; Meis, Johanna; Jung, Klaus; Klosterhalfen, Bernd; Sprenger, Thilo; Conradi, Lena-Christin; Langer, Claus; Becker, Heinz

    2012-02-23

    Ultrasonic scalpel (UC) and monopolar electrocautery (ME) are common tools for soft tissue dissection. However, morphological data on the related tissue alteration are discordant. We developed an automatic device for standardized sample excision and compared quality and depth of morphological changes caused by UC and ME in a pig model. 100 tissue samples (5 × 3 cm) of the abdominal wall were excised in 16 pigs. Excisions were randomly performed manually or by using the self-constructed automatic device at standard power levels (60 W cutting in ME, level 5 in UC) for abdominal surgery. Quality of tissue alteration and depth of coagulation necrosis were examined histopathologically. Device (UC vs. ME) and mode (manually vs. automatic) effects were studied by two-way analysis of variance at a significance level of 5%. At the investigated power level settings UC and ME induced qualitatively similar coagulation necroses. Mean depth of necrosis was 450.4 ± 457.8 μm for manual UC and 553.5 ± 326.9 μm for automatic UC versus 149.0 ± 74.3 μm for manual ME and 257.6 ± 119.4 μm for automatic ME. Coagulation necrosis was significantly deeper (p < 0.01) when UC was used compared to ME. The mode of excision (manual versus automatic) did not influence the depth of necrosis (p = 0.85), and there was no significant interaction between dissection tool and mode of excision (p = 0.93). Thermal injury caused by UC and ME results in qualitatively similar coagulation necrosis; the depth of necrosis is significantly greater with UC at the investigated standard power levels.

  12. A Level Set Discontinuous Galerkin Method for Free Surface Flows

    DEFF Research Database (Denmark)

    Grooss, Jesper; Hesthaven, Jan

    2006-01-01

    We present a discontinuous Galerkin method on a fully unstructured grid for the modeling of unsteady incompressible fluid flows with free surfaces. The surface is modeled by embedding and represented by a level set. We discuss the discretization of the flow equations and the level set equation...

  13. Ultrasonic scalpel causes greater depth of soft tissue necrosis compared to monopolar electrocautery at standard power level settings in a pig model

    Science.gov (United States)

    2012-01-01

    Background Ultrasonic scalpel (UC) and monopolar electrocautery (ME) are common tools for soft tissue dissection. However, morphological data on the related tissue alteration are discordant. We developed an automatic device for standardized sample excision and compared quality and depth of morphological changes caused by UC and ME in a pig model. Methods 100 tissue samples (5 × 3 cm) of the abdominal wall were excised in 16 pigs. Excisions were randomly performed manually or by using the self-constructed automatic device at standard power levels (60 W cutting in ME, level 5 in UC) for abdominal surgery. Quality of tissue alteration and depth of coagulation necrosis were examined histopathologically. Device (UC vs. ME) and mode (manually vs. automatic) effects were studied by two-way analysis of variance at a significance level of 5%. Results At the investigated power level settings UC and ME induced qualitatively similar coagulation necroses. Mean depth of necrosis was 450.4 ± 457.8 μm for manual UC and 553.5 ± 326.9 μm for automatic UC versus 149.0 ± 74.3 μm for manual ME and 257.6 ± 119.4 μm for automatic ME. Coagulation necrosis was significantly deeper (p < 0.01) when UC was used compared to ME. The mode of excision (manual versus automatic) did not influence the depth of necrosis (p = 0.85). There was no significant interaction between dissection tool and mode of excision (p = 0.93). Conclusions Thermal injury caused by UC and ME results in qualitatively similar coagulation necrosis. The depth of necrosis is significantly greater in UC compared to ME at investigated standard power levels. PMID:22361346

  14. Numerical simulations of natural or mixed convection in vertical channels: comparisons of level-set numerical schemes for the modeling of immiscible incompressible fluid flows

    International Nuclear Information System (INIS)

    Li, R.

    2012-01-01

    The aim of this research dissertation is to study natural and mixed convection of fluid flows, and to develop and validate numerical schemes for interface tracking with a view to later treating incompressible and immiscible fluid flows. In a first step, an original numerical method, based on Finite Volume discretizations, is developed for modeling low Mach number flows with large temperature gaps. Three physical applications on air flowing through vertical heated parallel plates were investigated. We showed that the optimum spacing corresponding to the peak heat flux transferred from an array of isothermal parallel plates cooled by mixed convection is smaller than that for natural or forced convection when the pressure drop at the outlet is kept constant. We also proved that mixed convection flows resulting from an imposed flow rate may exhibit unexpected physical solutions; an alternative model based on prescribed total pressure at the inlet and fixed pressure at the outlet sections gives more realistic results. For channels heated by heat flux on one wall only, surface radiation tends to suppress the onset of re-circulations at the outlet and to equalize the wall temperatures. In a second step, the mathematical model coupling the incompressible Navier-Stokes equations and the Level-Set method for interface tracking is derived. Improvements in fluid volume conservation obtained by using high order discretization (ENO-WENO) schemes for the transport equation and variants of the signed distance equation are discussed. (author)

  15. Compositional models for credal sets

    Czech Academy of Sciences Publication Activity Database

    Vejnarová, Jiřina

    2017-01-01

    Roč. 90, č. 1 (2017), s. 359-373 ISSN 0888-613X R&D Projects: GA ČR(CZ) GA16-12010S Institutional support: RVO:67985556 Keywords : Imprecise probabilities * Credal sets * Multidimensional models * Conditional independence Subject RIV: BA - General Mathematics OBOR OECD: Pure mathematics Impact factor: 2.845, year: 2016 http://library.utia.cas.cz/separaty/2017/MTR/vejnarova-0483288.pdf

  16. Level set methods for inverse scattering—some recent developments

    International Nuclear Information System (INIS)

    Dorn, Oliver; Lesselier, Dominique

    2009-01-01

    We give an update on recent techniques which use a level set representation of shapes for solving inverse scattering problems, complementing the exposition made in (Dorn and Lesselier 2006 Inverse Problems 22 R67) and (Dorn and Lesselier 2007 Deformable Models (New York: Springer) pp 61–90), and bringing it closer to the current state of the art

  17. A deep level set method for image segmentation

    OpenAIRE

    Tang, Min; Valipour, Sepehr; Zhang, Zichen Vincent; Cobzas, Dana; MartinJagersand

    2017-01-01

    This paper proposes a novel image segmentation approach that integrates fully convolutional networks (FCNs) with a level set model. Compared with an FCN, the integrated method can incorporate smoothing and prior information to achieve an accurate segmentation. Furthermore, rather than using the level set model as a post-processing tool, we integrate it into the training phase to fine-tune the FCN. This allows the use of unlabeled data during training in a semi-supervised setting. Using two types o...

  18. Exploring the level sets of quantum control landscapes

    International Nuclear Information System (INIS)

    Rothman, Adam; Ho, Tak-San; Rabitz, Herschel

    2006-01-01

    A quantum control landscape is defined by the value of a physical observable as a functional of the time-dependent control field E(t) for a given quantum-mechanical system. Level sets through this landscape are prescribed by a particular value of the target observable at the final dynamical time T, regardless of the intervening dynamics. We present a technique for exploring a landscape level set, where a scalar variable s is introduced to characterize trajectories along these level sets. The control fields E(s,t) accomplishing this exploration (i.e., that produce the same value of the target observable for a given system) are determined by solving a differential equation over s in conjunction with the time-dependent Schroedinger equation. There is full freedom to traverse a level set, and a particular trajectory is realized by making an a priori choice for a continuous function f(s,t) that appears in the differential equation for the control field. The continuous function f(s,t) can assume an arbitrary form, and thus a level set generally contains a family of controls, where each control takes the quantum system to the same final target value, but produces a distinct control mechanism. In addition, although the observable value remains invariant over the level set, other dynamical properties (e.g., the degree of robustness to control noise) are not specifically preserved and can vary greatly. Examples are presented to illustrate the continuous nature of level-set controls and their associated induced dynamical features, including continuously morphing mechanisms for population control in model quantum systems

  19. Structural level set inversion for microwave breast screening

    International Nuclear Information System (INIS)

    Irishina, Natalia; Álvarez, Diego; Dorn, Oliver; Moscoso, Miguel

    2010-01-01

    We present a new inversion strategy for the early detection of breast cancer from microwave data which is based on a new multiphase level set technique. This novel structural inversion method uses a modification of the color level set technique adapted to the specific situation of structural breast imaging taking into account the high complexity of the breast tissue. We only use data of a few microwave frequencies for detecting the tumors hidden in this complex structure. Three level set functions are employed for describing four different types of breast tissue, where each of these four regions is allowed to have a complicated topology and to have an interior structure which needs to be estimated from the data simultaneously with the region interfaces. The algorithm consists of several stages of increasing complexity. In each stage more details about the anatomical structure of the breast interior are incorporated into the inversion model. The synthetic breast models which are used for creating simulated data are based on real MRI images of the breast and are therefore quite realistic. Our results demonstrate the potential and feasibility of the proposed level set technique for detecting, locating and characterizing a small tumor in its early stage of development embedded in such a realistic breast model. Both the data acquisition simulation and the inversion are carried out in 2D

  20. Setting the stage for master's level success

    Science.gov (United States)

    Roberts, Donna

    Comprehensive reading, writing, research, and study skills play a critical role in a graduate student's success and ability to contribute to a field of study effectively. The literature indicated a need to support graduate student success in the areas of mentoring and navigation, as well as research and writing. The purpose of this two-phased mixed methods explanatory study was to examine factors that characterize student success at the Master's level in the fields of education, sociology and social work. The study was grounded in a transformational learning framework which focused on three levels of learning: technical knowledge, practical or communicative knowledge, and emancipatory knowledge. The study included two data collection points. Phase one consisted of a Master's Level Success questionnaire that was sent via Qualtrics to graduate level students at three colleges and universities in the Central Valley of California: a California State University campus, a University of California campus, and a private college campus. The results of the chi-square indicated that seven questionnaire items were significant, with p values less than .05. Phase two of the data collection included semi-structured interview questions, from which three themes emerged in analysis with Dedoose software: (1) the need for more language and writing support at the Master's level, (2) the need for mentoring, especially for second-language learners, and (3) utilizing the strong influence of faculty in student success. It is recommended that institutions continually assess and strengthen their programs to meet the full range of learners and to support students to degree completion.

  1. Level Set Approach to Anisotropic Wet Etching of Silicon

    Directory of Open Access Journals (Sweden)

    Branislav Radjenović

    2010-05-01

    Full Text Available In this paper a methodology for the three-dimensional (3D) modeling and simulation of the profile evolution during anisotropic wet etching of silicon based on the level set method is presented. Etching rate anisotropy in silicon is modeled taking into account full silicon symmetry properties, by means of an interpolation technique using experimentally obtained values of the etching rates along thirteen principal and high-index directions in KOH solutions. The resulting level set equations are solved using an open source implementation of the sparse field method (the ITK library, developed in the medical image processing community), extended to the case of non-convex Hamiltonians. Simulation results for some interesting initial 3D shapes, as well as more practical examples illustrating anisotropic etching simulation in the presence of masks (simple square aperture mask, convex corner undercutting and convex corner compensation, formation of suspended structures), are also shown. The obtained results show that the level set method can be used as an effective tool for wet etching process modeling, and that it is a viable alternative to the Cellular Automata method which now prevails in simulations of the wet etching process.
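
    The orientation-dependent etch-rate idea can be sketched on a 2-D grid: the front speed F is made a function of the local normal direction, so facets with fast normals outrun facets with slow ones. This is an illustrative toy with an invented fourfold rate function and simple first-order Osher-Sethian upwinding, not the sparse-field/ITK implementation or real KOH rate data.

    ```python
    import numpy as np

    n, dx, dt, steps = 101, 0.02, 0.005, 40
    c = n // 2
    ax = (np.arange(n) - c) * dx                 # coordinates with 0 at the centre
    X, Y = np.meshgrid(ax, ax, indexing="ij")
    phi = np.sqrt(X ** 2 + Y ** 2) - 0.5         # initial front: circle, radius 0.5

    for _ in range(steps):
        # Orientation-dependent etch rate from the (central-difference) normal
        gx = (np.roll(phi, -1, 0) - np.roll(phi, 1, 0)) / (2 * dx)
        gy = (np.roll(phi, -1, 1) - np.roll(phi, 1, 1)) / (2 * dx)
        theta = np.arctan2(gy, gx)
        F = 1.0 + 0.5 * np.cos(4 * theta)        # fast along axes, slow on diagonals
        # First-order upwind gradient for outward motion (F > 0); np.roll wraps,
        # but the front stays well inside the domain for this run
        xm = (phi - np.roll(phi, 1, 0)) / dx
        xp = (np.roll(phi, -1, 0) - phi) / dx
        ym = (phi - np.roll(phi, 1, 1)) / dx
        yp = (np.roll(phi, -1, 1) - phi) / dx
        grad = np.sqrt(np.maximum(xm, 0) ** 2 + np.minimum(xp, 0) ** 2
                       + np.maximum(ym, 0) ** 2 + np.minimum(yp, 0) ** 2)
        phi = phi - dt * F * grad

    x_ext = ax[phi[:, c] < 0].max()                       # etched extent along x
    diag_ext = np.sqrt(2) * ax[np.diag(phi) < 0].max()    # extent along diagonal
    ```

    After a short etch the circle becomes squarish: the extent along the fast axis directions clearly exceeds the extent along the slow diagonals, which is the qualitative behavior the interpolated rate table produces in the real simulator.
    
    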

  2. Discretisation Schemes for Level Sets of Planar Gaussian Fields

    Science.gov (United States)

    Beliaev, D.; Muirhead, S.

    2018-01-01

    Smooth random Gaussian functions play an important role in mathematical physics, a main example being the random plane wave model conjectured by Berry to give a universal description of high-energy eigenfunctions of the Laplacian on generic compact manifolds. Our work is motivated by questions about the geometry of such random functions, in particular relating to the structure of their nodal and level sets. We study four discretisation schemes that extract information about level sets of planar Gaussian fields. Each scheme recovers information up to a different level of precision, and each requires a maximum mesh-size in order to be valid with high probability. The first two schemes are generalisations and enhancements of similar schemes that have appeared in the literature (Beffara and Gayet in Publ Math IHES, 2017. https://doi.org/10.1007/s10240-017-0093-0; Mischaikow and Wanner in Ann Appl Probab 17:980-1018, 2007); these give complete topological information about the level sets on either a local or global scale. As an application, we improve the results in Beffara and Gayet (2017) on Russo-Seymour-Welsh estimates for the nodal set of positively-correlated planar Gaussian fields. The third and fourth schemes are, to the best of our knowledge, completely new. The third scheme is specific to the nodal set of the random plane wave, and provides global topological information about the nodal set up to `visible ambiguities'. The fourth scheme gives a way to approximate the mean number of excursion domains of planar Gaussian fields.

  3. Skull defect reconstruction based on a new hybrid level set.

    Science.gov (United States)

    Zhang, Ziqun; Zhang, Ran; Song, Zhijian

    2014-01-01

    Skull defect reconstruction is an important aspect of surgical repair. Historically, a skull defect prosthesis was created by the mirroring technique, surface fitting, or formed templates. These methods are not based on the anatomy of the individual patient's skull, and therefore the prosthesis cannot precisely correct the defect. This study presented a new hybrid level set model, taking into account both the global optimization region information and the local accuracy edge information, while avoiding re-initialization during the evolution of the level set function. Based on the new method, a skull defect was reconstructed, and the skull prosthesis was produced by rapid prototyping technology. This resulted in a prosthesis that matched the skull defect well, with excellent individual adaptation.
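
    The region term of such a hybrid model can be sketched in isolation: a curvature-free, piecewise-constant two-phase update in the spirit of Chan-Vese, run on synthetic data. The edge term and the re-initialization-free evolution of the paper are omitted; everything here (image, noise level, iteration count) is an invented illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    img = np.zeros((64, 64)); img[20:44, 20:44] = 1.0   # bright object on dark bg
    img += rng.normal(0.0, 0.15, img.shape)             # additive noise

    phi = np.ones_like(img); phi[16:48, 16:48] = -1.0   # coarse initial contour
    for _ in range(20):
        inside = phi < 0
        c1, c2 = img[inside].mean(), img[~inside].mean()  # region means
        # Region-competition update: each pixel joins the region whose mean
        # fits it better (the sign of phi encodes the segmentation)
        phi = (img - c1) ** 2 - (img - c2) ** 2

    seg = phi < 0
    truth = np.zeros(img.shape, dtype=bool); truth[20:44, 20:44] = True
    iou = np.logical_and(seg, truth).sum() / np.logical_or(seg, truth).sum()
    ```

    Even without the edge term, the region statistics alone recover the object almost exactly on this easy synthetic case; the edge information matters at weak, low-contrast boundaries such as thin bone.
    
    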

  4. Multi-Level Model

    Directory of Open Access Journals (Sweden)

    Constanta Nicoleta BODEA

    2008-01-01

    Full Text Available This original paper presents a hierarchical model with three levels for determining the linearized non-homogeneous and homogeneous credibility premiums at company level, at sector level and at contract level, founded on the relevant covariance relations between the risk premium, the observations and the weighted averages. We give a rather explicit description of the input data for the multi-level hierarchical model used, if only to show that in practical situations there will always be enough data to apply credibility theory to a real insurance portfolio.

  5. The Influence of Study-Level Inference Models and Study Set Size on Coordinate-Based fMRI Meta-Analyses

    Directory of Open Access Journals (Sweden)

    Han Bossier

    2018-01-01

    Full Text Available Given the increasing number of neuroimaging studies, there is a growing need to summarize published results. Coordinate-based meta-analyses use the locations of statistically significant local maxima, possibly with the associated effect sizes, to aggregate studies. In this paper, we investigate the influence of key characteristics of a coordinate-based meta-analysis on (1) the balance between false and true positives and (2) the activation reliability of the outcome of a coordinate-based meta-analysis. More particularly, we consider the influence of the chosen group-level model at the study level [fixed effects, ordinary least squares (OLS), or mixed effects models], the type of coordinate-based meta-analysis [Activation Likelihood Estimation (ALE), which only uses peak locations, versus fixed effects and random effects meta-analysis, which take into account both peak location and height], and the number of studies included in the analysis (from 10 to 35). To do this, we apply a resampling scheme to a large dataset (N = 1,400) to create a test condition and compare this with an independent evaluation condition. The test condition corresponds to subsampling participants into studies and combining these using meta-analyses. The evaluation condition corresponds to a high-powered group analysis. We observe the best performance when using mixed effects models in individual studies combined with a random effects meta-analysis. Moreover, the performance increases with the number of studies included in the meta-analysis. When peak height is not taken into consideration, we show that the popular ALE procedure is a good alternative in terms of the balance between type I and II errors. However, it requires more studies compared to other procedures in terms of activation reliability. Finally, we discuss the differences, interpretations, and limitations of our results.
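
    The fixed- versus random-effects distinction above can be illustrated with the textbook DerSimonian-Laird estimator, which adds an estimated between-study variance τ² to each study's within-study variance before pooling. The effect sizes and variances below are invented; this is not the fMRI pipeline of the paper.

    ```python
    import numpy as np

    y = np.array([0.10, 0.45, 0.22, 0.80, 0.38])   # per-study effect sizes (made up)
    v = np.array([0.02, 0.03, 0.015, 0.05, 0.025])  # within-study variances (made up)

    w = 1.0 / v                                     # fixed-effects weights
    y_fe = np.sum(w * y) / np.sum(w)                # fixed-effects pooled estimate
    Q = np.sum(w * (y - y_fe) ** 2)                 # Cochran's heterogeneity statistic
    df = len(y) - 1
    C = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (Q - df) / C)                   # between-study variance estimate
    w_re = 1.0 / (v + tau2)                         # random-effects weights
    y_re = np.sum(w_re * y) / np.sum(w_re)          # random-effects pooled estimate
    se_re = np.sqrt(1.0 / np.sum(w_re))
    ```

    With heterogeneous studies, τ² > 0, the random-effects weights are flatter than the fixed-effects ones, and the pooled standard error is larger, which is the sense in which random-effects meta-analysis is the more conservative choice the paper recommends.
    
    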

  6. Presenting a Model for Setting in Narrative Fiction Illustration

    Directory of Open Access Journals (Sweden)

    Hajar Salimi Namin

    2017-12-01

    Full Text Available The present research aims at presenting a model for evaluating and enhancing the training of setting in illustration for narrative fictions for undergraduate students of graphic design who are weak in setting. The research utilized experts' opinions through a survey. The designed model was submitted to eight experts, and their opinions were used to adjust and improve the model. The research instruments were notes, materials in textbooks, papers, and related websites, as well as questionnaires. Results indicated that, for evaluating and enhancing the level of training of setting in illustration for narrative fiction, one needs to extract sub-indexes of setting. Moreover, definition and recognition of the model of setting helps undergraduate students of graphic design enhance the setting skill in their works by recognizing details of setting. Accordingly, it is recommended to design training packages to enhance these sub-indexes and hence improve setting for narrative fiction illustration.

  7. Analysis of carotid artery plaque and wall boundaries on CT images by using a semi-automatic method based on level set model

    International Nuclear Information System (INIS)

    Saba, Luca; Sannia, Stefano; Ledda, Giuseppe; Gao, Hao; Acharya, U.R.; Suri, Jasjit S.

    2012-01-01

    The purpose of this study was to evaluate the potential of a semi-automated technique in the detection and measurement of the carotid artery plaque. Twenty-two consecutive patients (18 males, 4 females; mean age 62 years) examined with MDCTA from January 2011 to March 2011 were included in this retrospective study. Carotid arteries were examined with a 16-multi-detector-row CT system, and for each patient the most diseased carotid was selected. In the first phase, the carotid plaque was identified and one experienced radiologist manually traced the inner and outer boundaries using the polyline and radial distance methods (PDM and RDM, respectively). In the second phase, the carotid inner and outer boundaries were traced with an automated algorithm: the level set method (LSM). Data were compared using Pearson rho correlation, Bland-Altman analysis, and regression. A total of 715 slices were analyzed. The mean thickness of the plaque using the reference PDM was 1.86 mm, whereas using the LSM-PDM it was 1.96 mm; using the reference RDM it was 2.06 mm, whereas using the LSM-RDM it was 2.03 mm. The correlation values between the references, the LSM, the PDM and the RDM were 0.8428, 0.9921, 0.745 and 0.6425. Bland-Altman analysis demonstrated very good agreement, in particular with the RDM method. The results of our study indicate that the LSM can automatically measure the thickness of the plaque and that the best results are obtained with the RDM. Our results suggest that advanced computer-based algorithms can identify and trace the plaque boundaries like an experienced human reader. (orig.)
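
    The radial distance idea can be sketched with synthetic boundaries: cast rays from the lumen centre and take the wall thickness as the distance between the inner and outer boundary along each ray. The circular contours and radii below are hypothetical stand-ins for boundaries traced on CT slices.

    ```python
    import numpy as np

    angles = np.linspace(0, 2 * np.pi, 360, endpoint=False)
    r_inner = 3.0 + 0.2 * np.sin(3 * angles)   # inner (lumen) boundary radius, mm
    r_outer = 5.0 + 0.4 * np.sin(3 * angles)   # outer (wall) boundary radius, mm

    thickness = r_outer - r_inner              # per-ray wall thickness
    mean_thickness = thickness.mean()          # the summary statistic compared above
    ```

    Averaging per-ray thickness over all angles gives the single mean-thickness figure that the study compares between the manual reference tracings and the LSM contours.
    
    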

  8. Analysis of carotid artery plaque and wall boundaries on CT images by using a semi-automatic method based on level set model

    Energy Technology Data Exchange (ETDEWEB)

    Saba, Luca; Sannia, Stefano; Ledda, Giuseppe [University of Cagliari - Azienda Ospedaliero Universitaria di Cagliari, Department of Radiology, Monserrato, Cagliari (Italy); Gao, Hao [University of Strathclyde, Signal Processing Centre for Excellence in Signal and Image Processing, Department of Electronic and Electrical Engineering, Glasgow (United Kingdom); Acharya, U.R. [Ngee Ann Polytechnic University, Department of Electronics and Computer Engineering, Clementi (Singapore); Suri, Jasjit S. [Biomedical Technologies Inc., Denver, CO (United States); Idaho State University (Aff.), Pocatello, ID (United States)

    2012-11-15

    The purpose of this study was to evaluate the potential of a semi-automated technique for the detection and measurement of carotid artery plaque. Twenty-two consecutive patients (18 males, 4 females; mean age 62 years) examined with MDCTA from January 2011 to March 2011 were included in this retrospective study. Carotid arteries were examined with a 16-multi-detector-row CT system, and for each patient the most diseased carotid was selected. In the first phase, the carotid plaque was identified and one experienced radiologist manually traced the inner and outer boundaries by using the polyline and radial distance methods (PDM and RDM, respectively). In the second phase, the carotid inner and outer boundaries were traced with an automated algorithm: the level set method (LSM). Data were compared by using Pearson rho correlation, Bland-Altman analysis, and regression. A total of 715 slices were analyzed. The mean thickness of the plaque using the reference PDM was 1.86 mm whereas using the LSM-PDM it was 1.96 mm; using the reference RDM it was 2.06 mm whereas using the LSM-RDM it was 2.03 mm. The correlation values between the references, the LSM, the PDM and the RDM were 0.8428, 0.9921, 0.745 and 0.6425. Bland-Altman analysis demonstrated very good agreement, in particular with the RDM. Results of our study indicate that the LSM can automatically measure the thickness of the plaque and that the best results are obtained with the RDM. Our results suggest that advanced computer-based algorithms can identify and trace plaque boundaries like an experienced human reader. (orig.)
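The Bland-Altman agreement analysis used above to compare the manual reference against the level-set measurements can be sketched as follows. This is a minimal illustration with made-up thickness values, not the study's data:

```python
import numpy as np

def bland_altman(a, b):
    """Return the mean difference (bias) and the 95% limits of
    agreement between two paired sets of measurements."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    diff = a - b
    bias = diff.mean()
    sd = diff.std(ddof=1)               # sample standard deviation
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Hypothetical paired plaque-thickness measurements (mm):
# manual reference vs. automated level-set output.
manual = np.array([1.8, 2.1, 1.9, 2.3, 2.0, 1.7])
auto   = np.array([1.9, 2.0, 2.0, 2.2, 2.1, 1.8])

bias, lo, hi = bland_altman(manual, auto)
```

In a Bland-Altman plot, `diff` would be plotted against the pairwise means, with horizontal lines at `bias`, `lo`, and `hi`; narrow limits of agreement indicate the kind of good manual/automated agreement reported for the RDM.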

  9. A parametric level-set method for partially discrete tomography

    NARCIS (Netherlands)

    A. Kadu (Ajinkya); T. van Leeuwen (Tristan); K.J. Batenburg (Joost)

    2017-01-01

    This paper introduces a parametric level-set method for tomographic reconstruction of partially discrete images. Such images consist of a continuously varying background and an anomaly with a constant (known) grey-value. We express the geometry of the anomaly using a level-set function,

  10. Identifying Heterogeneities in Subsurface Environment using the Level Set Method

    Energy Technology Data Exchange (ETDEWEB)

    Lei, Hongzhuan [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Lu, Zhiming [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Vesselinov, Velimir Valentinov [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-08-25

    These are slides from a presentation on identifying heterogeneities in the subsurface environment using the level set method. The slides start with the motivation, then explain the Level Set Method (LSM) and its algorithms, present some examples, and conclude with future work.

  11. Novel gene sets improve set-level classification of prokaryotic gene expression data.

    Science.gov (United States)

    Holec, Matěj; Kuželka, Ondřej; Železný, Filip

    2015-10-28

    Set-level classification of gene expression data has received significant attention recently. In this setting, high-dimensional vectors of features corresponding to genes are converted into lower-dimensional vectors of features corresponding to biologically interpretable gene sets. The dimensionality reduction brings the promise of a decreased risk of overfitting, potentially resulting in improved accuracy of the learned classifiers. However, recent empirical research has not confirmed this expectation. Here we hypothesize that the reported unfavorable classification results in the set-level framework were due to the adoption of unsuitable gene sets defined typically on the basis of the Gene Ontology and the KEGG database of metabolic networks. We explore an alternative approach to defining gene sets, based on regulatory interactions, which we expect to collect genes with more correlated expression. We hypothesize that such more correlated gene sets will enable learning more accurate classifiers. We define two families of gene sets using information on regulatory interactions, and evaluate them on phenotype-classification tasks using public prokaryotic gene expression data sets. From each of the two gene-set families, we first select the best-performing subtype. The two selected subtypes are then evaluated on independent (testing) data sets against state-of-the-art gene sets and against the conventional gene-level approach. The novel gene sets are indeed more correlated than the conventional ones, and lead to significantly more accurate classifiers. Novel gene sets defined on the basis of regulatory interactions improve set-level classification of gene expression data. The experimental scripts and other material needed to reproduce the experiments are available at http://ida.felk.cvut.cz/novelgenesets.tar.gz.
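The gene-to-set feature conversion described above can be sketched with a simple mean aggregation. The aggregation rule, the toy expression matrix, and the gene-set memberships below are all assumptions for illustration; the paper's contribution is the definition of the sets, not a specific aggregation:

```python
import numpy as np

def set_level_features(X, gene_sets):
    """Convert a samples-by-genes expression matrix X into a
    samples-by-gene-sets matrix by averaging the columns of the
    member genes of each set (one simple aggregation choice)."""
    return np.column_stack([X[:, idx].mean(axis=1) for idx in gene_sets])

# Toy data: 3 samples, 5 genes, and two hypothetical gene sets
# given as lists of gene (column) indices.
X = np.array([[1., 2., 3., 4., 5.],
              [2., 2., 2., 2., 2.],
              [0., 1., 0., 1., 0.]])
sets = [[0, 1], [2, 3, 4]]

F = set_level_features(X, sets)   # shape (3 samples, 2 set-level features)
```

A classifier would then be trained on `F` instead of `X`, which is the dimensionality reduction the abstract refers to.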

  12. Settings in Social Networks : a Measurement Model

    NARCIS (Netherlands)

    Schweinberger, Michael; Snijders, Tom A.B.

    2003-01-01

    A class of statistical models is proposed that aims to recover latent settings structures in social networks. Settings may be regarded as clusters of vertices. The measurement model is based on two assumptions. (1) The observed network is generated by hierarchically nested latent transitive

  13. Multi-phase flow monitoring with electrical impedance tomography using level set based method

    International Nuclear Information System (INIS)

    Liu, Dong; Khambampati, Anil Kumar; Kim, Sin; Kim, Kyung Youn

    2015-01-01

    Highlights: • LSM has been used for shape reconstruction to monitor multi-phase flow using EIT. • The multi-phase level set model for conductivity is represented by two level set functions. • LSM handles topological merging and breaking naturally during the evolution process. • To reduce the computational time, a narrowband technique was applied. • Use of the narrowband and optimization approach results in an efficient and fast method. - Abstract: In this paper, a level set-based reconstruction scheme is applied to multi-phase flow monitoring using electrical impedance tomography (EIT). The proposed scheme involves applying a narrowband level set method to solve the inverse problem of finding the interface between the regions having different conductivity values. The multi-phase level set model for the conductivity distribution inside the domain is represented by two level set functions. The key principle of the level set-based method is to implicitly represent the shape of the interface as the zero level set of a higher-dimensional function and then solve a set of partial differential equations. The level set-based scheme handles topological merging and breaking naturally during the evolution process. It also offers several advantages compared to the traditional pixel-based approach. The level set-based method for multi-phase flow is tested with numerical and experimental data. It is found that the level set-based method has better reconstruction performance when compared to the pixel-based method.
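The narrowband idea mentioned in the highlights, restricting computation to grid points near the zero level set, can be sketched as follows. The circular interface, grid resolution, and band width are illustrative assumptions:

```python
import numpy as np

def narrowband(phi, width):
    """Boolean mask of grid points within `width` of the zero level
    set, assuming phi is (approximately) a signed distance function."""
    return np.abs(phi) <= width

# Signed distance to a circle of radius 1 on a 2D grid:
# negative inside, positive outside, zero on the interface.
x = np.linspace(-2, 2, 81)
X, Y = np.meshgrid(x, x)
phi = np.sqrt(X**2 + Y**2) - 1.0

band = narrowband(phi, 0.15)   # update only these points each step
```

Restricting the level set update (and, in the EIT setting, the sensitivity computation) to `band` is what yields the reported reduction in computational time, at the cost of periodically rebuilding the band as the interface moves.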

  14. Volume Sculpting Using the Level-Set Method

    DEFF Research Database (Denmark)

    Bærentzen, Jakob Andreas; Christensen, Niels Jørgen

    2002-01-01

    In this paper, we propose the use of the Level-Set Method as the underlying technology of a volume sculpting system. The main motivation is that this leads to a very generic technique for deformation of volumetric solids. In addition, our method preserves a distance field volume representation. … A scaling window is used to adapt the Level-Set Method to local deformations and to allow the user to control the intensity of the tool. Level-Set based tools have been implemented in an interactive sculpting system, and we show sculptures created using the system.

  15. A level set method for multiple sclerosis lesion segmentation.

    Science.gov (United States)

    Zhao, Yue; Guo, Shuxu; Luo, Min; Shi, Xue; Bilello, Michel; Zhang, Shaoxiang; Li, Chunming

    2018-06-01

    In this paper, we present a level set method for multiple sclerosis (MS) lesion segmentation from FLAIR images in the presence of intensity inhomogeneities. We use a three-phase level set formulation of segmentation and bias field estimation to segment the MS lesions, the normal tissue region (including GM and WM), the CSF, and the background from FLAIR images. To save computational load, we derive a two-phase formulation from the original multi-phase level set formulation to segment the MS lesions and normal tissue regions. The derived method inherits the desirable ability to precisely locate object boundaries of the original level set method, which simultaneously performs segmentation and estimation of the bias field to deal with intensity inhomogeneity. Experimental results demonstrate the advantages of our method over other state-of-the-art methods in terms of segmentation accuracy. Copyright © 2017 Elsevier Inc. All rights reserved.

  16. Some numerical studies of interface advection properties of level set ...

    Indian Academy of Sciences (India)

    explicit computational elements moving through an Eulerian grid. ... location. The interface is implicitly defined (captured) as the location of the discontinuity in the ... This level set function is advected with the background flow field and thus ...

  17. On multiple level-set regularization methods for inverse problems

    International Nuclear Information System (INIS)

    DeCezaro, A; Leitão, A; Tai, X-C

    2009-01-01

    We analyze a multiple level-set method for solving inverse problems with piecewise constant solutions. This method corresponds to an iterated Tikhonov method for a particular Tikhonov functional G_α based on TV–H¹ penalization. We define generalized minimizers for our Tikhonov functional and establish an existence result. Moreover, we prove convergence and stability results of the proposed Tikhonov method. A multiple level-set algorithm is derived from the first-order optimality conditions for the Tikhonov functional G_α, similarly as the iterated Tikhonov method. The proposed multiple level-set method is tested on an inverse potential problem. Numerical experiments show that the method is able to recover multiple objects as well as multiple contrast levels.

  18. Modélisation du procédé de soudage hybride Arc / Laser par une approche level set application aux toles d'aciers de fortes épaisseurs A level-set approach for the modelling of hybrid arc/laser welding process application for high thickness steel sheets joining

    Directory of Open Access Journals (Sweden)

    Desmaison Olivier

    2013-11-01

    The hybrid arc/laser welding process has been developed in order to overcome the difficulties encountered in joining high-thickness steel sheets. This innovative process combines two heat sources: an arc source produced by a MIG torch and a laser source placed ahead of it. This coupling improves the efficiency of the process, the weld bead quality and the final deformations. A Level-Set approach to the modelling of this process enables the prediction of the weld bead development and of the temperature field evolution. The simulation of the multi-pass welding of an 18MnNiMo5 steel grade is detailed and the results are compared to the experimental observations.

  19. Feasibility of disposal of high-level radioactive waste into the seabed. Volume 5: Dispersal of radionuclides in the oceans: Models, data sets and regional descriptions

    International Nuclear Information System (INIS)

    Marietta, M.G.; Simmons, W.F.

    1988-01-01

    One of the options suggested for disposal of high-level radioactive waste resulting from the generation of nuclear power is burial beneath the deep ocean floor in geologically stable sediment formations which have no economic value. The 8-volume series provides an assessment of the technical feasibility and radiological safety of this disposal concept based on the results obtained by ten years of co-operation and information exchange among the Member countries participating in the NEA Seabed Working Group. This report summarizes the development of a realistic and credible methodology to describe the oceanic dispersion of radionuclides for risk assessment calculations

  20. Simulation-based evaluation of the performance of the F test in a linear multilevel model setting with sparseness at the level of the primary unit.

    Science.gov (United States)

    Bruyndonckx, Robin; Aerts, Marc; Hens, Niel

    2016-09-01

    In a linear multilevel model, significance of all fixed effects can be determined using F tests under maximum likelihood (ML) or restricted maximum likelihood (REML). In this paper, we demonstrate that in the presence of primary unit sparseness, the performance of the F test under both REML and ML is rather poor. Using simulations based on the structure of a data example on ceftriaxone consumption in hospitalized children, we studied variability, type I error rate and power in scenarios with a varying number of secondary units within the primary units. In general, the variability in the estimates for the effect of the primary unit decreased as the number of secondary units increased. In the presence of singletons (i.e., only one secondary unit within a primary unit), REML consistently outperformed ML, although even under REML the performance of the F test was found inadequate. When modeling the primary unit as a random effect, the power was lower while the type I error rate was unstable. The options of dropping, regrouping, or splitting the singletons could solve either the problem of a high type I error rate or a low power, while worsening the other. The permutation test appeared to be a valid alternative as it outperformed the F test, especially under REML. We conclude that in the presence of singletons, one should be careful in using the F test to determine the significance of the fixed effects, and propose the permutation test (under REML) as an alternative. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
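The permutation test proposed above can be illustrated with a much-simplified stand-in: permuting group labels for a single fixed effect and comparing mean differences, rather than refitting a multilevel model under REML as the authors do. The data below are simulated for illustration only:

```python
import numpy as np

rng = np.random.default_rng(0)

def perm_test_mean_diff(y, g, n_perm=2000, rng=rng):
    """Two-sided permutation p-value for a difference in group means.
    A toy analogue of permuting a fixed effect: the null distribution
    is built by shuffling the group labels."""
    g = np.asarray(g, bool)
    obs = y[g].mean() - y[~g].mean()
    count = 0
    for _ in range(n_perm):
        gp = rng.permutation(g)
        if abs(y[gp].mean() - y[~gp].mean()) >= abs(obs):
            count += 1
    # Add-one correction keeps the p-value strictly positive.
    return (count + 1) / (n_perm + 1)

# Toy data with a clear group effect (two groups of 30 observations).
y = np.concatenate([rng.normal(0, 1, 30), rng.normal(2, 1, 30)])
g = np.array([False] * 30 + [True] * 30)

p = perm_test_mean_diff(y, g)
```

In the sparse multilevel setting of the paper, the same logic applies but each permutation requires refitting the model, which is why the permutation test is more expensive than the F test it replaces.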

  1. Modelling occupants’ heating set-point preferences

    DEFF Research Database (Denmark)

    Andersen, Rune Vinther; Olesen, Bjarne W.; Toftum, Jørn

    2011-01-01

    … consumption. Simultaneous measurement of the set-points of thermostatic radiator valves (TRVs) and of indoor and outdoor environment characteristics was carried out in 15 dwellings in Denmark in 2008. Linear regression was used to infer a model of occupants’ interactions with TRVs. This model could easily be implemented in most simulation software packages to increase the validity of the simulation outcomes.

  2. A Set Theoretical Approach to Maturity Models

    DEFF Research Database (Denmark)

    Lasrado, Lester; Vatrapu, Ravi; Andersen, Kim Normann

    2016-01-01

    Maturity Model research in IS has been criticized for the lack of theoretical grounding, methodological rigor, empirical validations, and ignorance of multiple and non-linear paths to maturity. To address these criticisms, this paper proposes a novel set-theoretical approach to maturity models characterized by equifinality, multiple conjunctural causation, and case diversity. We prescribe methodological guidelines consisting of a six-step procedure to systematically apply set theoretic methods to conceptualize, develop, and empirically derive maturity models, and provide a demonstration …

  3. Setting limits on supersymmetry using simplified models

    CERN Document Server

    Gutschow, C.

    2012-01-01

    Experimental limits on supersymmetry and similar theories are difficult to set because of the enormous available parameter space and difficult to generalize because of the complexity of single points. Therefore, more phenomenological, simplified models are becoming popular for setting experimental limits, as they have clearer physical implications. The use of these simplified model limits to set a real limit on a concrete theory has not, however, been demonstrated. This paper recasts simplified model limits into limits on a specific and complete supersymmetry model, minimal supergravity. Limits obtained under various physical assumptions are comparable to those produced by directed searches. A prescription is provided for calculating conservative and aggressive limits on additional theories. Using acceptance and efficiency tables along with the expected and observed numbers of events in various signal regions, LHC experimental results can be re-cast in this manner into almost any theoretical framework, includ...

  4. Level-Set Topology Optimization with Aeroelastic Constraints

    Science.gov (United States)

    Dunning, Peter D.; Stanford, Bret K.; Kim, H. Alicia

    2015-01-01

    Level-set topology optimization is used to design a wing considering skin buckling under static aeroelastic trim loading, as well as dynamic aeroelastic stability (flutter). The level-set function is defined over the entire 3D volume of a transport aircraft wing box. Therefore, the approach is not limited by any predefined structure and can explore novel configurations. The Sequential Linear Programming (SLP) level-set method is used to solve the constrained optimization problems. The proposed method is demonstrated using three problems with mass, linear buckling and flutter objective and/or constraints. A constraint aggregation method is used to handle multiple buckling constraints in the wing skins. A continuous flutter constraint formulation is used to handle difficulties arising from discontinuities in the design space caused by a switching of the critical flutter mode.

  5. Level Set Structure of an Integrable Cellular Automaton

    Directory of Open Access Journals (Sweden)

    Taichiro Takagi

    2010-03-01

    Based on a group theoretical setting a sort of discrete dynamical system is constructed and applied to a combinatorial dynamical system defined on the set of certain Bethe ansatz related objects known as the rigged configurations. This system is then used to study a one-dimensional periodic cellular automaton related to discrete Toda lattice. It is shown for the first time that the level set of this cellular automaton is decomposed into connected components and every such component is a torus.

  6. Set-Theoretic Approach to Maturity Models

    DEFF Research Database (Denmark)

    Lasrado, Lester Allan

    Despite being widely accepted and applied, maturity models in Information Systems (IS) have been criticized for the lack of theoretical grounding, methodological rigor, empirical validations, and ignorance of multiple and non-linear paths to maturity. This PhD thesis focuses on addressing these criticisms by incorporating recent developments in configuration theory, in particular the application of set-theoretic approaches. The aim is to show the potential of employing a set-theoretic approach for maturity model research and empirically demonstrating equifinal paths to maturity. Specifically, it prescribes methodological guidelines consisting of detailed procedures to systematically apply set theoretic approaches for maturity model research and provides demonstrations of its application on three datasets. The thesis is a collection of six research papers that are written in a sequential manner. The first paper …

  7. Transport and diffusion of material quantities on propagating interfaces via level set methods

    CERN Document Server

    Adalsteinsson, D

    2003-01-01

    We develop theory and numerical algorithms to apply level set methods to problems involving the transport and diffusion of material quantities in a level set framework. Level set methods are computational techniques for tracking moving interfaces; they work by embedding the propagating interface as the zero level set of a higher dimensional function, and then approximate the solution of the resulting initial value partial differential equation using upwind finite difference schemes. The traditional level set method works in the trace space of the evolving interface, and hence disregards any parameterization in the interface description. Consequently, material quantities on the interface which themselves are transported under the interface motion are not easily handled in this framework. We develop model equations and algorithmic techniques to extend the level set method to include these problems. We demonstrate the accuracy of our approach through a series of test examples and convergence studies.
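The core mechanism described above — embedding the interface as the zero level set of a higher-dimensional function and advecting it with an upwind finite-difference scheme — can be sketched in one dimension. The grid, speed, CFL number, and initial profile below are arbitrary illustrative choices, and only the basic first-order upwind scheme is shown, not the material-transport extension the paper develops:

```python
import numpy as np

def advect_upwind(phi, u, dx, dt, steps):
    """First-order upwind advection of a level set function phi by a
    constant velocity u on a periodic 1D grid: the difference stencil
    is taken on the side the information comes from."""
    phi = phi.copy()
    for _ in range(steps):
        if u >= 0:
            dphi = (phi - np.roll(phi, 1)) / dx   # backward difference
        else:
            dphi = (np.roll(phi, -1) - phi) / dx  # forward difference
        phi -= dt * u * dphi
    return phi

# Level set function with zero crossings at x = -0.75 and x = -0.25,
# advected to the right at unit speed for total time 1.0.
x = np.linspace(-2, 2, 401)
dx = x[1] - x[0]
phi0 = np.abs(x + 0.5) - 0.25
phi1 = advect_upwind(phi0, u=1.0, dx=dx, dt=0.5 * dx, steps=200)
```

The interface (the zero crossings of `phi1`) moves a distance of about 1.0, with some smearing of the kink due to the numerical diffusion of the first-order scheme; higher-order upwind schemes reduce this effect.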

  8. Transport and diffusion of material quantities on propagating interfaces via level set methods

    International Nuclear Information System (INIS)

    Adalsteinsson, David; Sethian, J.A.

    2003-01-01

    We develop theory and numerical algorithms to apply level set methods to problems involving the transport and diffusion of material quantities in a level set framework. Level set methods are computational techniques for tracking moving interfaces; they work by embedding the propagating interface as the zero level set of a higher dimensional function, and then approximate the solution of the resulting initial value partial differential equation using upwind finite difference schemes. The traditional level set method works in the trace space of the evolving interface, and hence disregards any parameterization in the interface description. Consequently, material quantities on the interface which themselves are transported under the interface motion are not easily handled in this framework. We develop model equations and algorithmic techniques to extend the level set method to include these problems. We demonstrate the accuracy of our approach through a series of test examples and convergence studies

  9. Level Sets and Voronoi based Feature Extraction from any Imagery

    DEFF Research Database (Denmark)

    Sharma, O.; Anton, François; Mioc, Darka

    2012-01-01

    Polygon features are of interest in many GEOProcessing applications like shoreline mapping, boundary delineation, change detection, etc. This paper presents a unique new GPU-based methodology to automate feature extraction combining level-set or mean-shift based segmentation together with Voronoi …

  10. Level-Set Methodology on Adaptive Octree Grids

    Science.gov (United States)

    Gibou, Frederic; Guittet, Arthur; Mirzadeh, Mohammad; Theillard, Maxime

    2017-11-01

    Numerical simulations of interfacial problems in fluids require a methodology capable of tracking surfaces that can undergo changes in topology and of imposing jump boundary conditions in a sharp manner. In this talk, we will discuss recent advances in the level-set framework, in particular one that is based on adaptive grids.

  11. A Memory and Computation Efficient Sparse Level-Set Method

    NARCIS (Netherlands)

    Laan, Wladimir J. van der; Jalba, Andrei C.; Roerdink, Jos B.T.M.

    Since its introduction, the level set method has become the favorite technique for capturing and tracking moving interfaces, and found applications in a wide variety of scientific fields. In this paper we present efficient data structures and algorithms for tracking dynamic interfaces through the

  12. Topology optimization of hyperelastic structures using a level set method

    Science.gov (United States)

    Chen, Feifei; Wang, Yiqiang; Wang, Michael Yu; Zhang, Y. F.

    2017-12-01

    Soft rubberlike materials, due to their inherent compliance, are finding widespread implementation in a variety of applications ranging from assistive wearable technologies to soft material robots. Structural design of such soft and rubbery materials necessitates the consideration of large nonlinear deformations and hyperelastic material models to accurately predict their mechanical behaviour. In this paper, we present an effective level set-based topology optimization method for the design of hyperelastic structures that undergo large deformations. The method incorporates both geometric and material nonlinearities where the strain and stress measures are defined within the total Lagrange framework and the hyperelasticity is characterized by the widely-adopted Mooney-Rivlin material model. A shape sensitivity analysis is carried out, in the strict sense of the material derivative, where the high-order terms involving the displacement gradient are retained to ensure the descent direction. As the design velocity enters into the shape derivative in terms of its gradient and divergence terms, we develop a discrete velocity selection strategy. The whole optimization implementation undergoes a two-step process, where the linear optimization is first performed and its optimized solution serves as the initial design for the subsequent nonlinear optimization. It turns out that this operation could efficiently alleviate the numerical instability and facilitate the optimization process. To demonstrate the validity and effectiveness of the proposed method, three compliance minimization problems are studied and their optimized solutions present significant mechanical benefits of incorporating the nonlinearities, in terms of remarkable enhancement in not only the structural stiffness but also the critical buckling load.

  13. Gradient augmented level set method for phase change simulations

    Science.gov (United States)

    Anumolu, Lakshman; Trujillo, Mario F.

    2018-01-01

    A numerical method for the simulation of two-phase flow with phase change based on the Gradient-Augmented-Level-set (GALS) strategy is presented. Sharp capturing of the vaporization process is enabled by: i) identification of the vapor-liquid interface, Γ (t), at the subgrid level, ii) discontinuous treatment of thermal physical properties (except for μ), and iii) enforcement of mass, momentum, and energy jump conditions, where the gradients of the dependent variables are obtained at Γ (t) and are consistent with their analytical expression, i.e. no local averaging is applied. Treatment of the jump in velocity and pressure at Γ (t) is achieved using the Ghost Fluid Method. The solution of the energy equation employs the sub-grid knowledge of Γ (t) to discretize the temperature Laplacian using second-order one-sided differences, i.e. the numerical stencil completely resides within each respective phase. To carefully evaluate the benefits or disadvantages of the GALS approach, the standard level set method is implemented and compared against the GALS predictions. The results show the expected trend that interface identification and transport are predicted noticeably better with GALS over the standard level set. This benefit carries over to the prediction of the Laplacian and temperature gradients in the neighborhood of the interface, which are directly linked to the calculation of the vaporization rate. However, when combining the calculation of interface transport and reinitialization with two-phase momentum and energy, the benefits of GALS are to some extent neutralized, and the causes for this behavior are identified and analyzed. Overall the additional computational costs associated with GALS are almost the same as those using the standard level set technique.

  14. Level sets and extrema of random processes and fields

    CERN Document Server

    Azais, Jean-Marc

    2009-01-01

    A timely and comprehensive treatment of random field theory with applications across diverse areas of study Level Sets and Extrema of Random Processes and Fields discusses how to understand the properties of the level sets of paths as well as how to compute the probability distribution of its extremal values, which are two general classes of problems that arise in the study of random processes and fields and in related applications. This book provides a unified and accessible approach to these two topics and their relationship to classical theory and Gaussian processes and fields, and the most modern research findings are also discussed. The authors begin with an introduction to the basic concepts of stochastic processes, including a modern review of Gaussian fields and their classical inequalities. Subsequent chapters are devoted to Rice formulas, regularity properties, and recent results on the tails of the distribution of the maximum. Finally, applications of random fields to various areas of mathematics a...

  15. A level set approach for shock-induced α-γ phase transition of RDX

    Science.gov (United States)

    Josyula, Kartik; Rahul; De, Suvranu

    2018-02-01

    We present a thermodynamically consistent level set approach based on a regularization energy functional which can be directly incorporated into a Galerkin finite element framework to model interface motion. The regularization energy leads to a diffusive form of flux that is embedded within the level set evolution equation and maintains the signed distance property of the level set function. The scheme is shown to compare well with the velocity extension method in capturing the interface position. The proposed level set approach is employed to study the α-γ phase transformation in RDX single crystals shocked along the (100) plane. Example problems in one and three dimensions are presented. We observe smooth evolution of the phase interface along the shock direction in both models. There is no diffusion of the interface during the zero level set evolution in the three-dimensional model. The level set approach is shown to capture the characteristics of the shock-induced α-γ phase transformation, such as stress relaxation behind the phase interface and the finite time required for the phase transformation to complete. The regularization-energy-based level set approach is efficient, robust, and easy to implement.

  16. Spatial occupancy models for large data sets

    Science.gov (United States)

    Johnson, Devin S.; Conn, Paul B.; Hooten, Mevin B.; Ray, Justina C.; Pond, Bruce A.

    2013-01-01

    Since its development, occupancy modeling has become a popular and useful tool for ecologists wishing to learn about the dynamics of species occurrence over time and space. Such models require presence–absence data to be collected at spatially indexed survey units. However, only recently have researchers recognized the need to correct for spatially induced overdispersion by explicitly accounting for spatial autocorrelation in occupancy probability. Previous efforts to incorporate such autocorrelation have largely focused on logit-normal formulations for occupancy, with spatial autocorrelation induced by a random effect within a hierarchical modeling framework. Although useful, computational time generally limits such an approach to relatively small data sets, and there are often problems with algorithm instability, yielding unsatisfactory results. Further, recent research has revealed a hidden form of multicollinearity in such applications, which may lead to parameter bias if not explicitly addressed. Combining several techniques, we present a unifying hierarchical spatial occupancy model specification that is particularly effective over large spatial extents. This approach employs a probit mixture framework for occupancy and can easily accommodate a reduced-dimensional spatial process to resolve issues with multicollinearity and spatial confounding while improving algorithm convergence. Using open-source software, we demonstrate this new model specification using a case study involving occupancy of caribou (Rangifer tarandus) over a set of 1080 survey units spanning a large contiguous region (108 000 km2) in northern Ontario, Canada. Overall, the combination of a more efficient specification and open-source software allows for a facile and stable implementation of spatial occupancy models for large data sets.
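The two ingredients the abstract combines — a probit link for occupancy probability and imperfect detection at occupied sites — can be illustrated by simulating data from such a model. This is a non-spatial toy simulation under assumed parameter values, not the authors' full reduced-dimensional spatial specification:

```python
import numpy as np
from math import erf

rng = np.random.default_rng(1)

# Standard normal CDF, the link function in a probit occupancy model.
probit = np.vectorize(lambda z: 0.5 * (1.0 + erf(z / np.sqrt(2.0))))

n_sites, n_visits = 500, 3
x = rng.normal(size=n_sites)          # one hypothetical site covariate
psi = probit(0.5 + 1.0 * x)           # occupancy probability per site
z = rng.random(n_sites) < psi         # latent true occupancy state

p_detect = 0.6                        # assumed per-visit detection prob.
y = rng.random((n_sites, n_visits)) < p_detect
y = y & z[:, None]                    # detections only at occupied sites

naive_occ = y.any(axis=1).mean()      # fraction of sites ever detected
true_occ = z.mean()                   # fraction of sites truly occupied
```

Because detection is imperfect, `naive_occ` underestimates `true_occ`; the hierarchical model described above jointly estimates the occupancy and detection processes (plus spatial autocorrelation) to correct this bias.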

  17. Reevaluation of steam generator level trip set point

    Energy Technology Data Exchange (ETDEWEB)

    Shim, Yoon Sub; Soh, Dong Sub; Kim, Sung Oh; Jung, Se Won; Sung, Kang Sik; Lee, Joon [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of)

    1994-06-01

The reactor trip on low steam generator water level accounts for a substantial portion of reactor scrams in a nuclear plant, and the feasibility of modifying the steam generator water level trip system of YGN 1/2 was evaluated in this study. The study revealed that removal of the reactor trip function from the SG water level trip system is not possible for plant safety reasons, but that relaxation of the trip set point by 9% is feasible. The set point relaxation requires drilling new holes for level measurement in operating steam generators. Characteristics of the negative neutron flux rate trip and reactor trip were also reviewed as additional work. Since the purpose of the trip system modification for reducing reactor scram frequency is not to satisfy legal requirements but to improve plant performance, and the modification has both positive and negative aspects, the decision on actual modification needs to be made based on the results of this study and the policy of the plant owner. 37 figs, 6 tabs, 14 refs. (Author).

  18. Benchmark data set for wheat growth models

    DEFF Research Database (Denmark)

    Asseng, S; Ewert, F.; Martre, P

    2015-01-01

    The data set includes a current representative management treatment from detailed, quality-tested sentinel field experiments with wheat from four contrasting environments including Australia, The Netherlands, India and Argentina. Measurements include local daily climate data (solar radiation, max...... analysis with 26 models and 30 years (1981-2010) for each location, for elevated atmospheric CO2 and temperature changes, a heat stress sensitivity analysis at anthesis, and a sensitivity analysis with soil and crop management variations and a Global Climate Model end-century scenario....

  19. Level set method for image segmentation based on moment competition

    Science.gov (United States)

    Min, Hai; Wang, Xiao-Feng; Huang, De-Shuang; Jin, Jing; Wang, Hong-Zhi; Li, Hai

    2015-05-01

    We propose a level set method for image segmentation which introduces the moment competition and weakly supervised information into the energy functional construction. Different from the region-based level set methods which use force competition, the moment competition is adopted to drive the contour evolution. Here, a so-called three-point labeling scheme is proposed to manually label three independent points (weakly supervised information) on the image. Then the intensity differences between the three points and the unlabeled pixels are used to construct the force arms for each image pixel. The corresponding force is generated from the global statistical information of a region-based method and weighted by the force arm. As a result, the moment can be constructed and incorporated into the energy functional to drive the evolving contour to approach the object boundary. In our method, the force arm can take full advantage of the three-point labeling scheme to constrain the moment competition. Additionally, the global statistical information and weakly supervised information are successfully integrated, which makes the proposed method more robust than traditional methods for initial contour placement and parameter setting. Experimental results with performance analysis also show the superiority of the proposed method on segmenting different types of complicated images, such as noisy images, three-phase images, images with intensity inhomogeneity, and texture images.

  20. Route constraints model based on polychromatic sets

    Science.gov (United States)

    Yin, Xianjun; Cai, Chao; Wang, Houjun; Li, Dongwu

    2018-03-01

    With the development of unmanned aerial vehicle (UAV) technology, the fields of its application are constantly expanding. The mission planning of UAV is especially important, and the planning result directly influences whether the UAV can accomplish the task. In order to make the results of mission planning for unmanned aerial vehicle more realistic, it is necessary to consider not only the physical properties of the aircraft, but also the constraints among the various equipment on the UAV. However, constraints among the equipment of UAV are complex, and the equipment has strong diversity and variability, which makes these constraints difficult to be described. In order to solve the above problem, this paper, referring to the polychromatic sets theory used in the advanced manufacturing field to describe complex systems, presents a mission constraint model of UAV based on polychromatic sets.

  1. Particle filters for random set models

    CERN Document Server

    Ristic, Branko

    2013-01-01

“Particle Filters for Random Set Models” presents coverage of state estimation of stochastic dynamic systems from noisy measurements, specifically sequential Bayesian estimation and nonlinear or stochastic filtering. The class of solutions presented in this book is based on the Monte Carlo statistical method. The resulting algorithms, known as particle filters, have in the last decade become one of the essential tools for stochastic filtering, with applications ranging from navigation and autonomous vehicles to bio-informatics and finance. While particle filters have been around for more than a decade, the recent theoretical developments of sequential Bayesian estimation in the framework of random set theory have provided new opportunities which are not widely known and are covered in this book. These recent developments have dramatically widened the scope of applications, from single to multiple appearing/disappearing objects, from precise to imprecise measurements and measurement models. This book...
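The core of the particle filters discussed here is the bootstrap filter: propagate a particle cloud through the dynamics, weight by the measurement likelihood, and resample. A minimal sketch for a scalar random-walk model (model and parameters are illustrative, not taken from the book):

```python
import numpy as np

def bootstrap_pf(ys, n_particles=2000, q=1.0, r=1.0, rng=None):
    """Bootstrap particle filter for x_t = x_{t-1} + N(0,q), y_t = x_t + N(0,r)."""
    rng = rng or np.random.default_rng(0)
    x = rng.normal(0.0, 1.0, n_particles)          # initial particle cloud
    means = []
    for y in ys:
        x = x + rng.normal(0.0, np.sqrt(q), n_particles)   # propagate
        logw = -0.5 * (y - x) ** 2 / r                     # Gaussian likelihood
        w = np.exp(logw - logw.max())
        w /= w.sum()
        means.append(float(np.sum(w * x)))                 # posterior-mean estimate
        x = rng.choice(x, n_particles, p=w)                # multinomial resampling
    return means
```

The random-set extensions replace the single-object state with a finite set of states, but the propagate/weight/resample cycle is the same.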

  2. Models for setting ATM parameter values

    DEFF Research Database (Denmark)

    Blaabjerg, Søren; Gravey, A.; Romæuf, L.

    1996-01-01

    essential to set traffic characteristic values that are relevant to the considered cell stream, and that ensure that the amount of non-conforming traffic is small. Using a queueing model representation for the GCRA formalism, several methods are available for choosing the traffic characteristics. This paper......In ATM networks, a user should negotiate at connection set-up a traffic contract which includes traffic characteristics and requested QoS. The traffic characteristics currently considered are the Peak Cell Rate, the Sustainable Cell Rate, the Intrinsic Burst Tolerance and the Cell Delay Variation...... (CDV) tolerance(s). The values taken by these traffic parameters characterize the so-called ''Worst Case Traffic'' that is used by CAC procedures for accepting a new connection and allocating resources to it. Conformance to the negotiated traffic characteristics is defined, at the ingress User...
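Conformance to negotiated ATM traffic characteristics is defined by the Generic Cell Rate Algorithm (GCRA). A sketch of its virtual-scheduling form, with increment I (inverse of the negotiated rate) and tolerance limit L (variable names are illustrative):

```python
def gcra(arrivals, increment, limit):
    """Virtual-scheduling form of the Generic Cell Rate Algorithm.

    arrivals: cell arrival times (non-decreasing).
    increment: I, the nominal inter-cell time (1/rate).
    limit: L, the tolerance (e.g. CDV tolerance).
    Returns one conformance flag per cell.
    """
    tat = arrivals[0]            # theoretical arrival time
    flags = []
    for t in arrivals:
        if t < tat - limit:
            flags.append(False)  # cell arrived too early: non-conforming, TAT unchanged
        else:
            flags.append(True)   # conforming: push TAT forward
            tat = max(t, tat) + increment
    return flags
```

For example, cells arriving every 1 time unit against `increment=2` are alternately conforming and non-conforming when `limit=0`.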

  3. Surface-to-surface registration using level sets

    DEFF Research Database (Denmark)

    Hansen, Mads Fogtmann; Erbou, Søren G.; Vester-Christensen, Martin

    2007-01-01

    This paper presents a general approach for surface-to-surface registration (S2SR) with the Euclidean metric using signed distance maps. In addition, the method is symmetric such that the registration of a shape A to a shape B is identical to the registration of the shape B to the shape A. The S2SR...... problem can be approximated by the image registration (IR) problem of the signed distance maps (SDMs) of the surfaces confined to some narrow band. By shrinking the narrow bands around the zero level sets the solution to the IR problem converges towards the S2SR problem. It is our hypothesis...... that this approach is more robust and less prone to fall into local minima than ordinary surface-to-surface registration. The IR problem is solved using the inverse compositional algorithm. In this paper, a set of 40 pelvic bones of Duroc pigs are registered to each other w.r.t. the Euclidean transformation...

  4. Variational Level Set Method for Two-Stage Image Segmentation Based on Morphological Gradients

    Directory of Open Access Journals (Sweden)

    Zemin Ren

    2014-01-01

Full Text Available We use a variational level set method and transition region extraction techniques to achieve the image segmentation task. The proposed scheme consists of two steps. We first develop a novel algorithm to extract the transition region based on the morphological gradient. After this, we integrate the transition region into a variational level set framework and develop a novel geometric active contour model, which includes an external energy based on the transition region and a fractional order edge indicator function. The external energy is used to drive the zero level set toward the desired image features, such as object boundaries. Due to this external energy, the proposed model allows for more flexible initialization. The fractional order edge indicator function is incorporated into the length regularization term to diminish the influence of noise. Moreover, an internal energy is added into the proposed model to penalize the deviation of the level set function from a signed distance function. The resulting evolution of the level set function is the gradient flow that minimizes the overall energy functional. The proposed model has been applied to both synthetic and real images with promising results.
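The internal energy mentioned above, penalizing deviation of the level set function from a signed distance function, is commonly written as P(φ) = ∫ ½(|∇φ| − 1)² dx. A discrete sketch of this penalty (a hypothetical implementation, not the authors' code):

```python
import numpy as np

def sdf_penalty(phi, h=1.0):
    """Distance-regularization energy: 0 iff |grad phi| = 1 everywhere."""
    gy, gx = np.gradient(phi, h)              # finite-difference gradient
    grad_norm = np.sqrt(gx**2 + gy**2)
    return 0.5 * np.sum((grad_norm - 1.0) ** 2) * h * h
```

A true signed distance function (unit gradient magnitude) gives zero penalty; scaling it by 2 doubles the gradient and incurs a positive penalty.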

  5. Level set segmentation of medical images based on local region statistics and maximum a posteriori probability.

    Science.gov (United States)

    Cui, Wenchao; Wang, Yi; Lei, Tao; Fan, Yangyu; Feng, Yan

    2013-01-01

    This paper presents a variational level set method for simultaneous segmentation and bias field estimation of medical images with intensity inhomogeneity. In our model, the statistics of image intensities belonging to each different tissue in local regions are characterized by Gaussian distributions with different means and variances. According to maximum a posteriori probability (MAP) and Bayes' rule, we first derive a local objective function for image intensities in a neighborhood around each pixel. Then this local objective function is integrated with respect to the neighborhood center over the entire image domain to give a global criterion. In level set framework, this global criterion defines an energy in terms of the level set functions that represent a partition of the image domain and a bias field that accounts for the intensity inhomogeneity of the image. Therefore, image segmentation and bias field estimation are simultaneously achieved via a level set evolution process. Experimental results for synthetic and real images show desirable performances of our method.
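Stripped of the local-window integration and the bias field, the MAP labeling step with per-class Gaussian intensity models reduces to picking, per pixel, the class maximizing the log posterior. An illustrative sketch (not the paper's full model):

```python
import numpy as np

def map_segment(image, mu, var, prior):
    """MAP pixel labeling with per-class Gaussian intensity models.

    mu, var, prior: per-class means, variances, and prior probabilities.
    Returns an integer label map (argmax of the log posterior).
    """
    img = image[..., None]                     # broadcast over classes
    log_post = (-0.5 * np.log(2 * np.pi * np.asarray(var))
                - 0.5 * (img - np.asarray(mu)) ** 2 / np.asarray(var)
                + np.log(np.asarray(prior)))
    return np.argmax(log_post, axis=-1)
```

In the paper this criterion is localized around each pixel and embedded in a level set energy, so labels evolve with the contour instead of being assigned independently.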

  6. An experimental methodology for a fuzzy set preference model

    Science.gov (United States)

    Turksen, I. B.; Willson, Ian A.

    1992-01-01

    A flexible fuzzy set preference model first requires approximate methodologies for implementation. Fuzzy sets must be defined for each individual consumer using computer software, requiring a minimum of time and expertise on the part of the consumer. The amount of information needed in defining sets must also be established. The model itself must adapt fully to the subject's choice of attributes (vague or precise), attribute levels, and importance weights. The resulting individual-level model should be fully adapted to each consumer. The methodologies needed to develop this model will be equally useful in a new generation of intelligent systems which interact with ordinary consumers, controlling electronic devices through fuzzy expert systems or making recommendations based on a variety of inputs. The power of personal computers and their acceptance by consumers has yet to be fully utilized to create interactive knowledge systems that fully adapt their function to the user. Understanding individual consumer preferences is critical to the design of new products and the estimation of demand (market share) for existing products, which in turn is an input to management systems concerned with production and distribution. The question of what to make, for whom to make it and how much to make requires an understanding of the customer's preferences and the trade-offs that exist between alternatives. Conjoint analysis is a widely used methodology which de-composes an overall preference for an object into a combination of preferences for its constituent parts (attributes such as taste and price), which are combined using an appropriate combination function. Preferences are often expressed using linguistic terms which cannot be represented in conjoint models. Current models are also not implemented an individual level, making it difficult to reach meaningful conclusions about the cause of an individual's behavior from an aggregate model. 
The combination of complex aggregate

  7. Two Surface-Tension Formulations For The Level Set Interface-Tracking Method

    International Nuclear Information System (INIS)

    Shepel, S.V.; Smith, B.L.

    2005-01-01

    The paper describes a comparative study of two surface-tension models for the Level Set interface tracking method. In both models, the surface tension is represented as a body force, concentrated near the interface, but the technical implementation of the two options is different. The first is based on a traditional Level Set approach, in which the surface tension is distributed over a narrow band around the interface using a smoothed Delta function. In the second model, which is based on the integral form of the fluid-flow equations, the force is imposed only in those computational cells through which the interface passes. Both models have been incorporated into the Finite-Element/Finite-Volume Level Set method, previously implemented into the commercial Computational Fluid Dynamics (CFD) code CFX-4. A critical evaluation of the two models, undertaken in the context of four standard Level Set benchmark problems, shows that the first model, based on the smoothed Delta function approach, is the more general, and more robust, of the two. (author)
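The smoothed Delta function in the first model spreads the surface-tension body force over a band of half-width ε around the interface. A common choice is the cosine profile (the specific form is an assumption here; the paper does not spell it out in this abstract):

```python
import numpy as np

def smoothed_delta(phi, eps):
    """Cosine-smoothed Dirac delta with support |phi| <= eps.

    Used to spread an interfacial force (e.g. sigma * curvature * grad phi)
    over a narrow band around the zero level set; integrates to 1.
    """
    d = np.zeros_like(phi, dtype=float)
    mask = np.abs(phi) <= eps
    d[mask] = (1.0 + np.cos(np.pi * phi[mask] / eps)) / (2.0 * eps)
    return d
```

The second model in the paper instead applies the force only in cells cut by the interface, avoiding the band altogether.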

  8. Fluoroscopy in paediatric fractures - Setting a local diagnostic reference level

    International Nuclear Information System (INIS)

    Pillai, A.; McAuley, A.; McMurray, K.; Jain, M.

    2006-01-01

Background: The Ionising Radiation (Medical Exposure) Regulations 2000 have made it mandatory to establish diagnostic reference levels (DRLs) for all typical radiological examinations. Objectives: We attempt to provide dose data for some common fluoroscopic procedures used in orthopaedic trauma that may be used as the basis for setting DRLs for paediatric patients. Materials and methods: The dose area product (DAP) in 865 paediatric trauma examinations was analysed. Median DAP values and screening times for each procedure type, along with quartile values for each range, are presented. Results: In the upper limb, elbow examinations had the maximum exposure, with a median DAP value of 1.21 cGy·cm². Median DAP values for forearm and wrist examinations were 0.708 and 0.538 cGy·cm², respectively. In the lower limb, tibia and fibula examinations had a median DAP value of 3.23 cGy·cm², followed by ankle examinations with a median DAP of 3.10 cGy·cm². The rounded third quartile DAP value for each distribution can be used as a provisional DRL for the specific procedure type. (authors)
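The summary statistics described, the median DAP and a provisional DRL taken as the third quartile of the DAP distribution, are straightforward to compute. An illustrative sketch:

```python
import numpy as np

def dose_summary(dap_values):
    """Median DAP and provisional DRL (third quartile) for one procedure type."""
    dap = np.asarray(dap_values, dtype=float)
    return {"median": float(np.median(dap)),
            "drl": float(np.percentile(dap, 75))}
```

In practice the third quartile would then be rounded to a convenient value before being adopted as the local DRL.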

  9. Mass functions from the excursion set model

    Science.gov (United States)

    Hiotelis, Nicos; Del Popolo, Antonino

    2017-11-01

Aims: We aim to study the stochastic evolution of the smoothed overdensity δ at scale S of the form δ(S) = ∫₀^S K(S,u) dW(u), where K is a kernel and dW is the usual Wiener process. Methods: For a Gaussian density field, smoothed by the top-hat filter in real space, we used a simple kernel that gives the correct correlation between scales. A Monte Carlo procedure was used to construct random walks and to calculate first crossing distributions, and consequently mass functions, for a constant barrier. Results: We show that the evolution considered here improves the agreement with the results of N-body simulations relative to analytical approximations which have been proposed for the same problem by other authors. In fact, we show that an evolution which is fully consistent with the ideas of the excursion set model accurately describes the mass function of dark matter haloes for values of ν ≤ 1 and underestimates the number of larger haloes. Finally, we show that a constant threshold of collapse, lower than is usually used, is able to produce a mass function which approximates the results of N-body simulations for a variety of redshifts and for a wide range of masses. Conclusions: A mass function in good agreement with N-body simulations can be obtained analytically using a lower than usual constant collapse threshold.
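The Monte Carlo procedure can be sketched for the simplest case, the Markovian (sharp-k) kernel with uncorrelated increments, where each walk accumulates independent Gaussian steps in S and one records whether it crosses a constant barrier (1.686 is the usual spherical-collapse value). This is the baseline the paper's correlated kernel improves on; the sketch below is illustrative, not the authors' code:

```python
import numpy as np

def first_crossing_fraction(n_walks=5000, n_steps=400, ds=0.01,
                            barrier=1.686, rng=None):
    """Fraction of uncorrelated random walks that cross a constant barrier.

    delta(S) accumulates independent N(0, ds) increments up to S = n_steps*ds;
    a halo of that mass forms where delta first exceeds the barrier.
    """
    rng = rng or np.random.default_rng(0)
    steps = rng.normal(0.0, np.sqrt(ds), (n_walks, n_steps))
    delta = np.cumsum(steps, axis=1)
    crossed = (delta >= barrier).any(axis=1)
    return float(crossed.mean())
```

For the continuous Markov walk this fraction tends to erfc(barrier / sqrt(2S)); binning first-crossing scales instead of the final flag yields the mass function.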

  10. Global and local level density models

    International Nuclear Information System (INIS)

    Koning, A.J.; Hilaire, S.; Goriely, S.

    2008-01-01

    Four different level density models, three phenomenological and one microscopic, are consistently parameterized using the same set of experimental observables. For each of the phenomenological models, the Constant Temperature Model, the Back-shifted Fermi gas Model and the Generalized Superfluid Model, a version without and with explicit collective enhancement is considered. Moreover, a recently published microscopic combinatorial model is compared with the phenomenological approaches and with the same set of experimental data. For each nuclide for which sufficient experimental data exists, a local level density parameterization is constructed for each model. Next, these local models have helped to construct global level density prescriptions, to be used for cases for which no experimental data exists. Altogether, this yields a collection of level density formulae and parameters that can be used with confidence in nuclear model calculations. To demonstrate this, a large-scale validation with experimental discrete level schemes and experimental cross sections and neutron emission spectra for various different reaction channels has been performed
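Of the phenomenological models listed, the Back-shifted Fermi gas Model has a compact closed form. A sketch of one common textbook variant (normalization conventions and parameter definitions vary between references, so treat this as illustrative):

```python
import math

def bsfg_level_density(E, a, delta, sigma):
    """Back-shifted Fermi gas total level density (one common form).

    E: excitation energy (MeV); a: level density parameter (1/MeV);
    delta: back-shift energy (MeV); sigma: spin cut-off parameter.
    rho(U) = exp(2*sqrt(a*U)) / (12*sqrt(2)*sigma*a^(1/4)*U^(5/4)), U = E - delta.
    """
    U = E - delta
    if U <= 0:
        return 0.0
    return math.exp(2.0 * math.sqrt(a * U)) / (
        12.0 * math.sqrt(2.0) * sigma * a**0.25 * U**1.25)
```

Local parameterizations as in the paper amount to fitting `a`, `delta` (and the spin cut-off) per nuclide against discrete levels and resonance spacings.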

  11. A fuzzy set preference model for market share analysis

    Science.gov (United States)

    Turksen, I. B.; Willson, Ian A.

    1992-01-01

    Consumer preference models are widely used in new product design, marketing management, pricing, and market segmentation. The success of new products depends on accurate market share prediction and design decisions based on consumer preferences. The vague linguistic nature of consumer preferences and product attributes, combined with the substantial differences between individuals, creates a formidable challenge to marketing models. The most widely used methodology is conjoint analysis. Conjoint models, as currently implemented, represent linguistic preferences as ratio or interval-scaled numbers, use only numeric product attributes, and require aggregation of individuals for estimation purposes. It is not surprising that these models are costly to implement, are inflexible, and have a predictive validity that is not substantially better than chance. This affects the accuracy of market share estimates. A fuzzy set preference model can easily represent linguistic variables either in consumer preferences or product attributes with minimal measurement requirements (ordinal scales), while still estimating overall preferences suitable for market share prediction. This approach results in flexible individual-level conjoint models which can provide more accurate market share estimates from a smaller number of more meaningful consumer ratings. Fuzzy sets can be incorporated within existing preference model structures, such as a linear combination, using the techniques developed for conjoint analysis and market share estimation. The purpose of this article is to develop and fully test a fuzzy set preference model which can represent linguistic variables in individual-level models implemented in parallel with existing conjoint models. The potential improvements in market share prediction and predictive validity can substantially improve management decisions about what to make (product design), for whom to make it (market segmentation), and how much to make (market share

  12. Mapping topographic structure in white matter pathways with level set trees.

    Directory of Open Access Journals (Sweden)

    Brian P Kent

Full Text Available Fiber tractography on diffusion imaging data offers rich potential for describing white matter pathways in the human brain, but characterizing the spatial organization in these large and complex data sets remains a challenge. We show that level set trees, which provide a concise representation of the hierarchical mode structure of probability density functions, offer a statistically principled framework for visualizing and analyzing topography in fiber streamlines. Using diffusion spectrum imaging data collected on neurologically healthy controls (N = 30), we mapped white matter pathways from the cortex into the striatum using a deterministic tractography algorithm that estimates fiber bundles as dimensionless streamlines. Level set trees were used for interactive exploration of patterns in the endpoint distributions of the mapped fiber pathways and an efficient segmentation of the pathways that had empirical accuracy comparable to standard nonparametric clustering techniques. We show that level set trees can also be generalized to model pseudo-density functions in order to analyze a broader array of data types, including entire fiber streamlines. Finally, resampling methods show the reliability of the level set tree as a descriptive measure of topographic structure, illustrating its potential as a statistical descriptor in brain imaging analysis. These results highlight the broad applicability of level set trees for visualizing and analyzing high-dimensional data like fiber tractography output.
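The mode structure a level set tree encodes comes from the connected components of the upper level sets {x : f(x) ≥ λ} as λ varies; branches appear where a component splits. A 1D sketch of the component counting (far simpler than the streamline setting of the paper, and purely illustrative):

```python
import numpy as np

def level_set_components(density, levels):
    """Count connected components of {x : f(x) >= level} on a 1D grid.

    density: density estimate sampled on a regular grid.
    Returns {level: number of components}; the branching of these counts
    across levels is what a level set tree records.
    """
    counts = {}
    for lam in levels:
        above = density >= lam
        # a component starts at index 0 if above, or at any False -> True step
        counts[lam] = int(np.sum(above[1:] & ~above[:-1]) + above[0])
    return counts
```

For a bimodal density, low levels give one component and levels between the saddle and the peaks give two, which is exactly the first branch of the tree.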

  13. Fast Streaming 3D Level set Segmentation on the GPU for Smooth Multi-phase Segmentation

    DEFF Research Database (Denmark)

    Sharma, Ojaswa; Zhang, Qin; Anton, François

    2011-01-01

    Level set method based segmentation provides an efficient tool for topological and geometrical shape handling, but it is slow due to high computational burden. In this work, we provide a framework for streaming computations on large volumetric images on the GPU. A streaming computational model...

  14. Segmenting the Parotid Gland using Registration and Level Set Methods

    DEFF Research Database (Denmark)

    Hollensen, Christian; Hansen, Mads Fogtmann; Højgaard, Liselotte

The method was evaluated on a test set consisting of 8 corresponding data sets. The attained total volume Dice coefficient and mean Hausdorff distance were 0.61 ± 0.20 and 15.6 ± 7.4 mm, respectively. The method has improvement potential which could be exploited for clinical introduction....
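The Dice coefficient used for evaluation measures overlap between two binary segmentation masks, 2|A∩B|/(|A|+|B|). A minimal sketch:

```python
import numpy as np

def dice(a, b):
    """Dice coefficient between two binary masks (1.0 = perfect overlap)."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0
```

A value of 0.61 as reported above therefore means the automatic and manual parotid volumes share roughly 61% overlap by this measure.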

  15. Mental models of audit and feedback in primary care settings.

    Science.gov (United States)

    Hysong, Sylvia J; Smitham, Kristen; SoRelle, Richard; Amspoker, Amber; Hughes, Ashley M; Haidet, Paul

    2018-05-30

Audit and feedback has been shown to be instrumental in improving quality of care, particularly in outpatient settings. The mental model individuals and organizations hold regarding audit and feedback can moderate its effectiveness, yet this has received limited study in the quality improvement literature. In this study we sought to uncover patterns in mental models of current feedback practices within high- and low-performing healthcare facilities. We purposively sampled 16 geographically dispersed VA hospitals based on high and low performance on a set of chronic and preventive care measures. We interviewed up to 4 personnel from each location (n = 48) to determine the facility's receptivity to audit and feedback practices. Interview transcripts were analyzed via content and framework analysis to identify emergent themes. We found high variability in the mental models of audit and feedback, which we organized into positive and negative themes. We were unable to associate mental models of audit and feedback with clinical performance due to high variance in facility performance over time. Positive mental models exhibited perceived utility of audit and feedback practices in improving performance, whereas negative mental models did not. Results speak to the variability of mental models of feedback, highlighting how facilities perceive current audit and feedback practices. Findings are consistent with prior research in that variability in feedback mental models is associated with lower performance. Future research should seek to empirically link the mental models revealed in this paper to high and low levels of clinical performance.

  16. Robust boundary detection of left ventricles on ultrasound images using ASM-level set method.

    Science.gov (United States)

    Zhang, Yaonan; Gao, Yuan; Li, Hong; Teng, Yueyang; Kang, Yan

    2015-01-01

The level set method has been widely used in medical image analysis, but it has difficulties when used for the segmentation of left ventricular (LV) boundaries on echocardiography images, because the boundaries are not very distinct and the signal-to-noise ratio of echocardiography images is not very high. In this paper, we introduce the Active Shape Model (ASM) into the traditional level set method to enforce shape constraints. It improves the accuracy of boundary detection and makes the evolution more efficient. The experiments conducted on real cardiac ultrasound image sequences show a positive and promising result.

  17. Time domain series system definition and gear set reliability modeling

    International Nuclear Information System (INIS)

    Xie, Liyang; Wu, Ningxiang; Qian, Wenxue

    2016-01-01

    Time-dependent multi-configuration is a typical feature for mechanical systems such as gear trains and chain drives. As a series system, a gear train is distinct from a traditional series system, such as a chain, in load transmission path, system-component relationship, system functioning manner, as well as time-dependent system configuration. Firstly, the present paper defines time-domain series system to which the traditional series system reliability model is not adequate. Then, system specific reliability modeling technique is proposed for gear sets, including component (tooth) and subsystem (tooth-pair) load history description, material priori/posterior strength expression, time-dependent and system specific load-strength interference analysis, as well as statistically dependent failure events treatment. Consequently, several system reliability models are developed for gear sets with different tooth numbers in the scenario of tooth root material ultimate tensile strength failure. The application of the models is discussed in the last part, and the differences between the system specific reliability model and the traditional series system reliability model are illustrated by virtue of several numerical examples. - Highlights: • A new type of series system, i.e. time-domain multi-configuration series system is defined, that is of great significance to reliability modeling. • Multi-level statistical analysis based reliability modeling method is presented for gear transmission system. • Several system specific reliability models are established for gear set reliability estimation. • The differences between the traditional series system reliability model and the new model are illustrated.
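The statistical dependence between failure events that the authors treat arises naturally when all teeth see a common random load: conditioning on the shared load couples the component failures, so the naive product of marginal reliabilities is wrong. A Monte Carlo sketch with normally distributed load and strength (all distributions and parameters illustrative, not the paper's model):

```python
import numpy as np

def series_reliability(n_teeth, mu_s, sd_s, mu_l, sd_l,
                       n_samples=100000, rng=None):
    """Reliability of a series system whose components share one random load.

    Returns (system reliability, naive product of independent marginals).
    Sharing the load induces positive dependence, so the first exceeds the second.
    """
    rng = rng or np.random.default_rng(0)
    load = rng.normal(mu_l, sd_l, n_samples)                 # one load per trial
    strengths = rng.normal(mu_s, sd_s, (n_teeth, n_samples))  # independent strengths
    r_system = float(np.mean((strengths > load).all(axis=0)))
    r_naive = float(np.mean(strengths[0] > load) ** n_teeth)
    return r_system, r_naive
```

The gap between the two numbers is the error a traditional (independence-based) series model would make; the paper's time-domain formulation additionally tracks which tooth pairs are engaged at each instant.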

  18. Adaptable Value-Set Analysis for Low-Level Code

    OpenAIRE

    Brauer, Jörg; Hansen, René Rydhof; Kowalewski, Stefan; Larsen, Kim G.; Olesen, Mads Chr.

    2012-01-01

    This paper presents a framework for binary code analysis that uses only SAT-based algorithms. Within the framework, incremental SAT solving is used to perform a form of weakly relational value-set analysis in a novel way, connecting the expressiveness of the value sets to computational complexity. Another key feature of our framework is that it translates the semantics of binary code into an intermediate representation. This allows for a straightforward translation of the program semantics in...

  19. Reconstruction of thin electromagnetic inclusions by a level-set method

    International Nuclear Information System (INIS)

    Park, Won-Kwang; Lesselier, Dominique

    2009-01-01

In this contribution, we consider a technique of electromagnetic imaging (at a single, non-zero frequency) which uses the level-set evolution method for reconstructing a thin inclusion (possibly made of disconnected parts) with either dielectric or magnetic contrast with respect to the embedding homogeneous medium. Emphasis is on the proof of the concept, the scattering problem at hand being so far based on a two-dimensional scalar model. To do so, two level-set functions are employed; the first one describes location and shape, and the other one describes connectivity and length. Speeds of evolution of the level-set functions are calculated via the introduction of Fréchet derivatives of a least-square cost functional. Several numerical experiments on noiseless as well as noisy data illustrate how the proposed method behaves.

  20. Level set methods for detonation shock dynamics using high-order finite elements

    Energy Technology Data Exchange (ETDEWEB)

    Dobrev, V. A. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Grogan, F. C. [Univ. of California, San Diego, CA (United States); Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Kolev, T. V. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Rieben, R [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Tomov, V. Z. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2017-05-26

Level set methods are a popular approach to modeling evolving interfaces. We present a level set advection solver in two and three dimensions using the discontinuous Galerkin method with high-order finite elements. During evolution, the level set function is reinitialized to a signed distance function to maintain accuracy. Our approach leads to stable front propagation and convergence on high-order, curved, unstructured meshes. The ability of the solver to implicitly track moving fronts lends itself to a number of applications; in particular, we highlight applications to high-explosive (HE) burn and detonation shock dynamics (DSD). We provide results for two- and three-dimensional benchmark problems as well as applications to DSD.
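A minimal 1D analogue of the advection step shows the mechanism: the level set function is transported, and the zero level set (the front) moves at the prescribed speed. This sketch uses first-order upwinding rather than the paper's high-order discontinuous Galerkin scheme; because the linear profile here stays a signed distance function, no reinitialization is needed:

```python
import numpy as np

def advect_level_set(phi, speed, dx, dt, n_steps):
    """First-order upwind advection (speed > 0) of a 1D level set function."""
    phi = phi.astype(float)
    for _ in range(n_steps):
        # upwind (backward) difference for positive transport speed
        phi[1:] = phi[1:] - dt * speed * (phi[1:] - phi[:-1]) / dx
        phi[0] = phi[1] - dx    # inflow boundary: keep the unit slope
    return phi
```

Starting from phi = x - 3 and advecting at unit speed for total time 2, the zero crossing (the front) ends up at x = 5, as expected.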

  1. A parametric level-set approach for topology optimization of flow domains

    DEFF Research Database (Denmark)

    Pingen, Georg; Waidmann, Matthias; Evgrafov, Anton

    2010-01-01

    of the design variables in the traditional approaches is seen as a possible cause for the slow convergence. Non-smooth material distributions are suspected to trigger premature onset of instationary flows which cannot be treated by steady-state flow models. In the present work, we study whether the convergence...... and the versatility of topology optimization methods for fluidic systems can be improved by employing a parametric level-set description. In general, level-set methods allow controlling the smoothness of boundaries, yield a non-local influence of design variables, and decouple the material description from the flow...... field discretization. The parametric level-set method used in this study utilizes a material distribution approach to represent flow boundaries, resulting in a non-trivial mapping between design variables and local material properties. Using a hydrodynamic lattice Boltzmann method, we study...

  2. Goal oriented Mathematics Survey at Preparatory Level- Revised set ...

    African Journals Online (AJOL)

    This cross-sectional study of the mathematics syllabi at the preparatory level of high schools investigated the efficiency of the subject at the preparatory level of education in serving as a basis for several streams, such as Natural Science, Technology, Computer Science, Health Science and Agriculture, found at tertiary levels.

  3. Fate modelling of chemical compounds with incomplete data sets

    DEFF Research Database (Denmark)

    Birkved, Morten; Heijungs, Reinout

    2011-01-01

    Impact assessment of chemical compounds in Life Cycle Impact Assessment (LCIA) and Environmental Risk Assessment (ERA) requires a vast amount of data on the properties of the chemical compounds being assessed. These data are used in multi-media fate and exposure models, to calculate risk levels...... in an approximate way. The idea is that not all data needed in a multi-media fate and exposure model are completely independent and equally important, but that there are physical-chemical and biological relationships between sets of chemical properties. A statistical model is constructed to underpin this assumption...... and other indicators. ERA typically addresses one specific chemical, but in an LCIA, the number of chemicals encountered may be quite high, up to hundreds or thousands. This study explores the development of meta-models, which are supposed to reflect the “true” multi-media fate and exposure model...
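The underlying idea, using statistical relationships between chemical properties to stand in for missing inputs, can be sketched as a toy one-property regression (the property names and every number are invented for illustration; the paper's statistical model is richer):

```python
import numpy as np

# hypothetical property table: log Kow known for all compounds,
# log BCF missing for the last one (all numbers are invented)
log_kow = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
log_bcf = np.array([0.5, 1.4, 2.6, 3.5, np.nan])

known = ~np.isnan(log_bcf)
slope, intercept = np.polyfit(log_kow[known], log_bcf[known], 1)
# impute the missing property from the fitted relationship
log_bcf_filled = np.where(known, log_bcf, slope * log_kow + intercept)
```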

  4. Integer Set Compression and Statistical Modeling

    DEFF Research Database (Denmark)

    Larsson, N. Jesper

    2014-01-01

    enumeration of elements may be arbitrary or random, but where statistics is kept in order to estimate probabilities of elements. We present a recursive subset-size encoding method that is able to benefit from statistics, explore the effects of permuting the enumeration order based on element probabilities......Compression of integer sets and sequences has been extensively studied for settings where elements follow a uniform probability distribution. In addition, methods exist that exploit clustering of elements in order to achieve higher compression performance. In this work, we address the case where...

  5. Modeling Multi-Level Systems

    CERN Document Server

    Iordache, Octavian

    2011-01-01

    This book is devoted to modeling of multi-level complex systems, a challenging domain for engineers, researchers and entrepreneurs, confronted with the transition from learning and adaptability to evolvability and autonomy for technologies, devices and problem solving methods. Chapter 1 introduces multi-scale and multi-level systems and highlights their presence in different domains of science and technology. Methodologies such as random systems, non-Archimedean analysis and category theory, and specific techniques such as model categorification and integrative closure, are presented in chapter 2. Chapters 3 and 4 describe polystochastic models, PSM, and their developments. The categorical formulation of integrative closure offers the general PSM framework, which serves as a flexible guideline for a large variety of multi-level modeling problems. Focusing on chemical engineering, pharmaceutical and environmental case studies, chapters 5 to 8 analyze mixing, turbulent dispersion and entropy production for multi-scale sy...

  6. Setting Parameters for Biological Models With ANIMO

    NARCIS (Netherlands)

    Schivo, Stefano; Scholma, Jetse; Karperien, Hermanus Bernardus Johannes; Post, Janine Nicole; van de Pol, Jan Cornelis; Langerak, Romanus; André, Étienne; Frehse, Goran

    2014-01-01

    ANIMO (Analysis of Networks with Interactive MOdeling) is a software tool for modeling biological networks, such as signaling, metabolic or gene networks. An ANIMO model is essentially the sum of a network topology and a number of interaction parameters. The topology describes the interactions

  7. Setting development goals using stochastic dynamical system models.

    Science.gov (United States)

    Ranganathan, Shyam; Nicolis, Stamatios C; Bali Swain, Ranjula; Sumpter, David J T

    2017-01-01

    The Millennium Development Goals (MDG) programme was an ambitious attempt to encourage a globalised solution to important but often-overlooked development problems. The programme led to wide-ranging development but it has also been criticised for unrealistic and arbitrary targets. In this paper, we show how country-specific development targets can be set using stochastic, dynamical system models built from historical data. In particular, we show that the MDG target of two-thirds reduction of child mortality from 1990 levels was infeasible for most countries, especially in sub-Saharan Africa. At the same time, the MDG targets were not ambitious enough for fast-developing countries such as Brazil and China. We suggest that model-based setting of country-specific targets is essential for the success of global development programmes such as the Sustainable Development Goals (SDG). This approach should provide clear, quantifiable targets for policymakers.
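The record's approach, fitting a dynamical model to historical data and judging targets against its projections, can be caricatured in a few lines (the exponential-decline dynamics, the synthetic mortality series, the seed and the horizon are all illustrative assumptions, not the authors' model):

```python
import numpy as np

rng = np.random.default_rng(0)

# hypothetical yearly child-mortality series (per 1000 live births) for one country
years = np.arange(1990, 2016)
mortality = 120.0 * np.exp(-0.04 * (years - 1990))
mortality += rng.normal(0.0, 1.0, size=years.size)   # observation noise

# fit dX/dt ~ -r X by regressing annual changes on levels
dx = np.diff(mortality)
x = mortality[:-1]
r_hat = -np.sum(dx * x) / np.sum(x * x)

# project 15 years ahead and compare with a "two-thirds reduction" target
projection = mortality[-1] * np.exp(-r_hat * 15)
target = mortality[0] / 3.0
feasible = projection <= target
```

A country whose fitted rate `r_hat` is too small to reach `target` within the horizon would be flagged as having an infeasible goal, which is the kind of country-specific judgment the abstract argues for.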

  8. Individual- and Setting-Level Correlates of Secondary Traumatic Stress in Rape Crisis Center Staff.

    Science.gov (United States)

    Dworkin, Emily R; Sorell, Nicole R; Allen, Nicole E

    2016-02-01

    Secondary traumatic stress (STS) is an issue of significant concern among providers who work with survivors of sexual assault. Although STS has been studied in relation to individual-level characteristics of a variety of types of trauma responders, less research has focused specifically on rape crisis centers as environments that might convey risk or protection from STS, and no research to our knowledge has modeled setting-level variation in correlates of STS. The current study uses a sample of 164 staff members representing 40 rape crisis centers across a single Midwestern state to investigate the staff member- and agency-level correlates of STS. Results suggest that correlates exist at both levels of analysis. Younger age and greater severity of sexual assault history were statistically significant individual-level predictors of increased STS. Greater frequency of supervision was more strongly related to secondary stress for non-advocates than for advocates. At the setting level, lower levels of supervision and higher client loads agency-wide accounted for unique variance in staff members' STS. These findings suggest that characteristics of both providers and their settings are important to consider when understanding their STS. © The Author(s) 2014.

  9. Automatic segmentation of Leishmania parasite in microscopic images using a modified CV level set method

    Science.gov (United States)

    Farahi, Maria; Rabbani, Hossein; Talebi, Ardeshir; Sarrafzadeh, Omid; Ensafi, Shahab

    2015-12-01

    Visceral Leishmaniasis is a parasitic disease that affects the liver, spleen and bone marrow. According to the World Health Organization report, definitive diagnosis is possible only by direct observation of the Leishman body in microscopic images taken from bone marrow samples. We utilize morphological operations and the CV (Chan-Vese) level set method to segment Leishman bodies in digital color microscopic images captured from bone marrow samples. A linear contrast stretching method is used for image enhancement, and a morphological method is applied to determine the parasite regions and remove unwanted objects. Modified global and local CV level set methods are proposed for segmentation, and a shape-based stopping factor is used to hasten the algorithm. Manual segmentation is considered as ground truth to evaluate the proposed method. This method was tested on 28 samples and achieved a mean segmentation error of 10.90% for the global model and 9.76% for the local model.
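The linear contrast stretching step mentioned for image enhancement has a standard form; a minimal sketch (the output range and the sample values are assumptions):

```python
import numpy as np

def stretch_contrast(img, lo=0.0, hi=255.0):
    """Linearly map the image's intensity range onto [lo, hi]."""
    imin, imax = float(img.min()), float(img.max())
    if imax == imin:                      # flat image: nothing to stretch
        return np.full(img.shape, lo)
    return lo + (img.astype(float) - imin) * (hi - lo) / (imax - imin)

stretched = stretch_contrast(np.array([10, 20, 30]))
# the narrow intensity range [10, 30] is expanded to the full [0, 255]
```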

  10. Level Design as Model Transformation

    NARCIS (Netherlands)

    Dormans, Joris

    2011-01-01

    This paper frames the process of designing a level in a game as a series of model transformations. The transformations correspond to the application of particular design principles, such as the use of locks and keys to transform a linear mission into a branching space. It shows that by using rewrite

  11. Annotation-based feature extraction from sets of SBML models.

    Science.gov (United States)

    Alm, Rebekka; Waltemath, Dagmar; Wolfien, Markus; Wolkenhauer, Olaf; Henkel, Ron

    2015-01-01

    Model repositories such as BioModels Database provide computational models of biological systems for the scientific community. These models contain rich semantic annotations that link model entities to concepts in well-established bio-ontologies such as Gene Ontology. Consequently, thematically similar models are likely to share similar annotations. Based on this assumption, we argue that semantic annotations are a suitable tool to characterize sets of models. These characteristics improve model classification, allow additional features to be identified for model retrieval tasks, and enable the comparison of sets of models. In this paper we discuss four methods for annotation-based feature extraction from model sets. We tested all methods on sets of models in SBML format which were composed from BioModels Database. To characterize each of these sets, we analyzed and extracted concepts from three frequently used ontologies, namely Gene Ontology, ChEBI and SBO. We find that three of the methods are suitable for determining characteristic features for arbitrary sets of models: the selected features vary depending on the underlying model set and are specific to the chosen model set. We show that the identified features map to concepts that are higher up in the hierarchy of the ontologies than the concepts used for model annotations. Our analysis also reveals that the information content of concepts in ontologies and their usage for model annotation do not correlate. Annotation-based feature extraction enables the comparison of model sets, as opposed to existing methods for model-to-keyword comparison or model-to-model comparison.
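One simple instance of annotation-based characterization, selecting the ontology concepts shared by most models in a set, might look like this (the threshold rule and the annotation identifiers are illustrative assumptions, not necessarily one of the paper's four methods):

```python
from collections import Counter

def characteristic_features(model_annotations, min_share=0.5):
    """Concepts annotated in at least `min_share` of the models in the set."""
    counts = Counter()
    for annotations in model_annotations:
        counts.update(set(annotations))   # count each concept once per model
    threshold = min_share * len(model_annotations)
    return {concept for concept, n in counts.items() if n >= threshold}

# hypothetical GO/ChEBI/SBO-style annotation lists for four models
models = [
    ["GO:0006915", "GO:0008219", "CHEBI:15377"],
    ["GO:0006915", "GO:0016049"],
    ["GO:0006915", "CHEBI:15377", "SBO:0000252"],
    ["GO:0008219", "GO:0006915"],
]
features = characteristic_features(models, min_share=0.75)
# → {"GO:0006915"} (present in all four models; the others fall below 75%)
```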

  12. Reconstruction of incomplete cell paths through a 3D-2D level set segmentation

    Science.gov (United States)

    Hariri, Maia; Wan, Justin W. L.

    2012-02-01

    Segmentation of fluorescent cell images has been a popular technique for tracking live cells. One challenge of segmenting cells from fluorescence microscopy is that cells in fluorescent images frequently disappear. When the images are stacked together to form a 3D image volume, the disappearance of the cells leads to broken cell paths. In this paper, we present a segmentation method that can reconstruct incomplete cell paths. The key idea of this model is to perform 2D segmentation in a 3D framework. The 2D segmentation captures the cells that appear in the image slices while the 3D segmentation connects the broken cell paths. The formulation is similar to the Chan-Vese level set segmentation which detects edges by comparing the intensity value at each voxel with the mean intensity values inside and outside of the level set surface. Our model, however, performs the comparison on each 2D slice with the means calculated by the 2D projected contour. The resulting effect is to segment the cells on each image slice. Unlike segmentation on each image frame individually, these 2D contours together form the 3D level set function. By enforcing minimum mean curvature on the level set surface, our segmentation model is able to extend the cell contours right before (and after) the cell disappears (and reappears) into the gaps, eventually connecting the broken paths. We will present segmentation results of C2C12 cells in fluorescent images to illustrate the effectiveness of our model qualitatively and quantitatively by different numerical examples.
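The Chan-Vese-style comparison described above, moving the level set at each pixel according to which regional mean the intensity matches better, can be sketched in 2D without the curvature term or the paper's 3D-2D projection (the image, initialization and step size are illustrative):

```python
import numpy as np

def chan_vese_step(phi, img, dt=0.5):
    """One descent step of the Chan-Vese region term (curvature term omitted)."""
    inside = phi > 0
    c_in = img[inside].mean() if inside.any() else 0.0
    c_out = img[~inside].mean() if (~inside).any() else 0.0
    # raise phi where the pixel is closer to the inside mean, lower it otherwise
    force = (img - c_out) ** 2 - (img - c_in) ** 2
    return phi + dt * force

img = np.zeros((64, 64))
img[22:42, 22:42] = 1.0                      # bright square on a dark background
yy, xx = np.mgrid[0:64, 0:64]
phi = 8.0 - np.hypot(yy - 32.0, xx - 32.0)   # initial contour: small disc
for _ in range(50):
    phi = chan_vese_step(phi, img)
segmented = phi > 0                          # recovers (approximately) the square
```

In the paper's formulation the means are computed from the 2D projected contour on each slice while the level set surface remains 3D; the sketch above shows only the per-pixel mean comparison that drives the evolution.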

  13. A combined single-multiphase flow formulation of the premixing phase using the level set method

    International Nuclear Information System (INIS)

    Leskovar, M.; Marn, J.

    1999-01-01

    The premixing phase of a steam explosion covers the interaction of the melt jet or droplets with the water prior to any steam explosion occurring. To gain better insight into the hydrodynamic processes during the premixing phase, cold isothermal premixing experiments are performed in addition to hot premixing experiments, in which water evaporation is significant. The special feature of isothermal premixing experiments is that three phases are involved: the water, the air and the spheres phase; only the spheres phase mixes with the other two, whereas the water and air phases do not mix and remain separated by a free surface. Our idea therefore was to treat the isothermal premixing process with a combined single-multiphase flow model. In this combined model the water and air phases are treated as a single phase with discontinuous phase properties at the water-air interface, whereas the spheres are treated as usual with a multiphase flow model, where the spheres represent the dispersed phase and the common water-air phase represents the continuous phase. The common water-air phase was described with a front capturing method based on the level set formulation. In the level set formulation, the boundary of two-fluid interfaces is modeled as the zero set of a smooth signed normal distance function defined on the entire physical domain. The boundary is then updated by solving a nonlinear equation of the Hamilton-Jacobi type on the whole domain. With this single-multiphase flow model the Queos isothermal premixing experiment Q08 has been simulated. A numerical analysis using different treatments of the water-air interface (level set, high-resolution and upwind) has been performed for the incompressible and compressible cases, and the results were compared to experimental measurements. (author)

  14. Some free boundary problems in the potential flow regime using a level set based method

    Energy Technology Data Exchange (ETDEWEB)

    Garzon, M.; Bobillo-Ares, N.; Sethian, J.A.

    2008-12-09

    Recent advances in the field of fluid mechanics with moving fronts are linked to the use of Level Set Methods, a versatile mathematical technique to follow free boundaries which undergo topological changes. A challenging class of problems in this context are those related to the solution of a partial differential equation posed on a moving domain, in which the boundary condition for the PDE solver has to be obtained from a partial differential equation defined on the front. This is the case for potential flow models with moving boundaries. Moreover, the fluid front may carry some material substance which diffuses in the front and is advected by the front velocity, as for example when surfactants are used to lower surface tension. We present a Level Set based methodology to embed these partial differential equations defined on the front in a complete Eulerian framework, fully avoiding the tracking of fluid particles and its known limitations. To show the advantages of this approach in the field of fluid mechanics, we present in this work one particular application: the numerical approximation of a potential flow model to simulate the evolution and breaking of a solitary wave propagating over a sloping bottom, and compare the level set based algorithm with previous front tracking models.

  15. Trusting Politicians and Institutions in a Multi-Level Setting

    DEFF Research Database (Denmark)

    Hansen, Sune Welling; Kjær, Ulrik

    Trust in government and in politicians is a crucial prerequisite for democratic processes. This goes not only for the national level of government but also for the regional and local. We make use of a large scale survey among citizens in Denmark to evaluate trust in politicians at different...... formation processes can negatively influence trust in the mayor and the councilors. Reaching out for the local power by being disloyal to one’s own party or by breaking deals already made can sometimes secure the mayoralty but it comes with a price: lower trust among the electorate....

  16. High-level waste tank farm set point document

    International Nuclear Information System (INIS)

    Anthony, J.A. III.

    1995-01-01

    Setpoints for nuclear safety-related instrumentation are required for actions determined by the design authorization basis. Minimum requirements need to be established for assuring that setpoints are established and held within specified limits. This document establishes the controlling methodology for changing setpoints of all classifications. The instrumentation under consideration involves the transfer, storage, and volume reduction of radioactive liquid waste in the F- and H-Area High-Level Radioactive Waste Tank Farms. The setpoint document will encompass the PROCESS AREA listed in the Safety Analysis Report (SAR) (DPSTSA-200-10 Sup 18) which includes the diversion box HDB-8 facility. In addition to the PROCESS AREAS listed in the SAR, Building 299-H and the Effluent Transfer Facility (ETF) are also included in the scope

  17. High-level waste tank farm set point document

    Energy Technology Data Exchange (ETDEWEB)

    Anthony, J.A. III

    1995-01-15

    Setpoints for nuclear safety-related instrumentation are required for actions determined by the design authorization basis. Minimum requirements need to be established for assuring that setpoints are established and held within specified limits. This document establishes the controlling methodology for changing setpoints of all classifications. The instrumentation under consideration involves the transfer, storage, and volume reduction of radioactive liquid waste in the F- and H-Area High-Level Radioactive Waste Tank Farms. The setpoint document will encompass the PROCESS AREA listed in the Safety Analysis Report (SAR) (DPSTSA-200-10 Sup 18) which includes the diversion box HDB-8 facility. In addition to the PROCESS AREAS listed in the SAR, Building 299-H and the Effluent Transfer Facility (ETF) are also included in the scope.

  18. On the Relationship between Variational Level Set-Based and SOM-Based Active Contours

    Science.gov (United States)

    Abdelsamea, Mohammed M.; Gnecco, Giorgio; Gaber, Mohamed Medhat; Elyan, Eyad

    2015-01-01

    Most Active Contour Models (ACMs) deal with the image segmentation problem as a functional optimization problem, as they work on dividing an image into several regions by optimizing a suitable functional. Among ACMs, variational level set methods have been used to build active contours with the aim of modeling arbitrarily complex shapes. Moreover, they can also handle topological changes of the contours. Self-Organizing Maps (SOMs) have attracted the attention of many computer vision scientists, particularly for modeling an active contour based on the idea of utilizing the prototypes (weights) of a SOM to control the evolution of the contour. SOM-based models have been proposed in general with the aim of exploiting the specific ability of SOMs to learn the edge-map information via their topology preservation property, and of overcoming some drawbacks of other ACMs, such as being trapped in local minima of the image energy functional to be minimized. In this survey, we illustrate the main concepts of variational level set-based ACMs and SOM-based ACMs, explain their relationship, and review in a comprehensive fashion the development of their state-of-the-art models from a machine learning perspective, with a focus on their strengths and weaknesses. PMID:25960736

  19. Mind-sets, low-level exposures, and research

    International Nuclear Information System (INIS)

    Sagan, L.A.

    1993-01-01

    Much of our environmental policy is based on the notion that carcinogenic agents are harmful at even minuscule doses. Where does this thinking come from? What is the scientific evidence that supports such policy? Moreover, why is the public willing to buy into this? Or is it the other way around: has the scientific community bought into a paradigm that has its origins in public imagery? Or, most likely, are there interactions between the two? It is essential that we find out whether or not there are risks associated with low-level exposures to radiation. The author sees three obvious areas where the future depends on better information: the increasing radiation exposures resulting from the use of medical diagnostic and therapeutic practices need to be properly evaluated for safety; environmental policies, which direct enormous resources to the reduction of small radiation exposures, need to be put on a firmer scientific basis; and the future of nuclear energy, dependent as it is on public acceptance, may well rely upon a better understanding of low-dose effects. Nuclear energy could provide an important solution to global warming and other possible environmental hazards, but will probably not be implemented as long as fear of low-dose radiation persists. Although an established paradigm has great resilience, it cannot resist the onslaught of inconsistent scientific observations or of the social value system that supports it. Only new research will enable us to determine if a paradigm shift is in order here.

  20. Comparing Fuzzy Sets and Random Sets to Model the Uncertainty of Fuzzy Shorelines

    NARCIS (Netherlands)

    Dewi, Ratna Sari; Bijker, Wietske; Stein, Alfred

    2017-01-01

    This paper addresses uncertainty modelling of shorelines by comparing fuzzy sets and random sets. Both methods quantify extensional uncertainty of shorelines extracted from remote sensing images. Two datasets were tested: pan-sharpened Pleiades with four bands (Pleiades) and pan-sharpened Pleiades

  1. Level Set Projection Method for Incompressible Navier-Stokes on Arbitrary Boundaries

    KAUST Repository

    Williams-Rioux, Bertrand

    2012-01-12

    A second-order level set projection method for the incompressible Navier-Stokes equations is proposed to solve flow around arbitrary geometries. We used a rectilinear grid with collocated cell-centered velocity and pressure. An explicit Godunov procedure is used to address the nonlinear advection terms, and an implicit Crank-Nicholson method to update viscous effects. An approximate pressure projection is implemented at the end of the time stepping using multigrid as a conventional fast iterative method. The level set method developed by Osher and Sethian [17] is implemented to address real momentum and pressure boundary conditions by the advection of a distance function, as proposed by Aslam [3]. Numerical results for the Strouhal number and drag coefficients validated the model with good accuracy for flow over a cylinder in the parallel shedding regime (47 < Re < 180). Simulations for an array of cylinders and an oscillating cylinder were performed, with the latter demonstrating our method's ability to handle dynamic boundary conditions.

  2. Hybrid Compensatory-Noncompensatory Choice Sets in Semicompensatory Models

    DEFF Research Database (Denmark)

    Kaplan, Sigal; Bekhor, Shlomo; Shiftan, Yoram

    2013-01-01

    Semicompensatory models represent a choice process consisting of an elimination-based choice set formation on satisfaction of criterion thresholds and a utility-based choice. Current semicompensatory models assume a purely noncompensatory choice set formation and therefore do not support multinom...

  3. Process setting models for the minimization of costs of defectives

    African Journals Online (AJOL)

    Dr Obe

    determine the mean setting so as to minimise the total loss through under-limit complaints and loss of sales and goodwill as well as over-limit losses through excess materials and rework costs. Models are developed for the two types of setting of the mean so that the minimum costs of losses are achieved. Also, a model is ...

  4. Generalized algebra-valued models of set theory

    NARCIS (Netherlands)

    Löwe, B.; Tarafder, S.

    2015-01-01

    We generalize the construction of lattice-valued models of set theory due to Takeuti, Titani, Kozawa and Ozawa to a wider class of algebras and show that this yields a model of a paraconsistent logic that validates all axioms of the negation-free fragment of Zermelo-Fraenkel set theory.

  5. Level-set simulations of buoyancy-driven motion of single and multiple bubbles

    International Nuclear Information System (INIS)

    Balcázar, Néstor; Lehmkuhl, Oriol; Jofre, Lluís; Oliva, Assensi

    2015-01-01

    Highlights: • A conservative level-set method is validated and verified. • An extensive study of buoyancy-driven motion of single bubbles is performed. • The interactions of two spherical and ellipsoidal bubbles are studied. • The interaction of multiple bubbles is simulated in a vertical channel. - Abstract: This paper presents a numerical study of buoyancy-driven motion of single and multiple bubbles by means of the conservative level-set method. First, an extensive study of the hydrodynamics of single bubbles rising in a quiescent liquid is performed, including their shape, terminal velocity, drag coefficients and wake patterns. These results are validated against experimental and numerical data well established in the scientific literature. Then, a further study on the interaction of two spherical and ellipsoidal bubbles is performed for different orientation angles. Finally, the interaction of multiple bubbles is explored in a periodic vertical channel. The results show that the conservative level-set approach can be used for accurate modelling of bubble dynamics. Moreover, it is demonstrated that the present method is numerically stable for a wide range of Morton and Reynolds numbers.

  6. Stochastic level-set variational implicit-solvent approach to solute-solvent interfacial fluctuations

    Energy Technology Data Exchange (ETDEWEB)

    Zhou, Shenggao, E-mail: sgzhou@suda.edu.cn, E-mail: bli@math.ucsd.edu [Department of Mathematics and Mathematical Center for Interdiscipline Research, Soochow University, 1 Shizi Street, Jiangsu, Suzhou 215006 (China); Sun, Hui; Cheng, Li-Tien [Department of Mathematics, University of California, San Diego, La Jolla, California 92093-0112 (United States); Dzubiella, Joachim [Soft Matter and Functional Materials, Helmholtz-Zentrum Berlin, 14109 Berlin, Germany and Institut für Physik, Humboldt-Universität zu Berlin, 12489 Berlin (Germany); Li, Bo, E-mail: sgzhou@suda.edu.cn, E-mail: bli@math.ucsd.edu [Department of Mathematics and Quantitative Biology Graduate Program, University of California, San Diego, La Jolla, California 92093-0112 (United States); McCammon, J. Andrew [Department of Chemistry and Biochemistry, Department of Pharmacology, Howard Hughes Medical Institute, University of California, San Diego, La Jolla, California 92093-0365 (United States)

    2016-08-07

    Recent years have seen the initial success of a variational implicit-solvent model (VISM), implemented with a robust level-set method, in capturing efficiently different hydration states and providing quantitatively good estimates of solvation free energies of biomolecules. The level-set minimization of the VISM solvation free-energy functional of all possible solute-solvent interfaces or dielectric boundaries predicts an equilibrium biomolecular conformation that is often close to an initial guess. In this work, we develop a theory in the form of a Langevin geometrical flow to incorporate solute-solvent interfacial fluctuations into the VISM. Such fluctuations are crucial to biomolecular conformational changes and binding processes. We also develop a stochastic level-set method to numerically implement such a theory. We describe the interfacial fluctuation through the “normal velocity” that is the solute-solvent interfacial force, derive the corresponding stochastic level-set equation in the sense of Stratonovich so that the surface representation is independent of the choice of implicit function, and develop numerical techniques for solving such an equation and processing the numerical data. We apply our computational method to study the dewetting transition in the system of two hydrophobic plates and the hydrophobic cavity of a synthetic host molecule, cucurbit[7]uril. Numerical simulations demonstrate that our approach can describe an underlying system jumping out of a local minimum of the free-energy functional and can capture dewetting transitions of hydrophobic systems. In the case of two hydrophobic plates, we find that the wavelength of interfacial fluctuations has a strong influence on the dewetting transition. In addition, we find that the estimated energy barrier of the dewetting transition scales quadratically with the inter-plate distance, agreeing well with existing studies of molecular dynamics simulations. Our work is a first step toward the
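The phenomenon the stochastic extension targets, fluctuations carrying a system out of a local free-energy minimum, can be illustrated with a toy one-dimensional Langevin simulation (the double-well potential, noise level and step count are illustrative assumptions unrelated to the VISM functional):

```python
import numpy as np

rng = np.random.default_rng(42)

def langevin(x0, grad, noise, dt, steps):
    """Euler-Maruyama integration of dx = -grad(x) dt + noise dW."""
    x = x0
    path = np.empty(steps + 1)
    path[0] = x
    for k in range(steps):
        x += -grad(x) * dt + noise * np.sqrt(dt) * rng.standard_normal()
        path[k + 1] = x
    return path

# double-well free energy F(x) = (x**2 - 1)**2, minima at x = -1 and x = +1
def grad(x):
    return 4.0 * x * (x * x - 1.0)

path = langevin(-1.0, grad, noise=1.0, dt=1e-3, steps=100000)
crossed = bool(np.any(path > 0.9))   # a fluctuation carried it over the barrier
```

A purely deterministic descent (noise set to zero) would stay in the left well forever; the noise term is what makes barrier crossing possible, mirroring the role of interfacial fluctuations in the abstract.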

  7. Improved inhalation technology for setting safe exposure levels for workplace chemicals

    Science.gov (United States)

    Stuart, Bruce O.

    1993-01-01

    Threshold Limit Values recommended as allowable air concentrations of a chemical in the workplace are often based upon a no-observable-effect-level (NOEL) determined by experimental inhalation studies using rodents. A 'safe level' for human exposure must then be estimated by the use of generalized safety factors in attempts to extrapolate from experimental rodents to man. The recent development of chemical-specific physiologically-based toxicokinetics makes use of measured physiological, biochemical, and metabolic parameters to construct a validated model that is able to 'scale up' rodent response data to predict the behavior of the chemical in man. This procedure is made possible by recent advances in personal computer software and the emergence of appropriate biological data, and provides an analytical tool for much more reliable risk evaluation and airborne chemical exposure level setting for humans.

  8. A local level set method based on a finite element method for unstructured meshes

    International Nuclear Information System (INIS)

    Ngo, Long Cu; Choi, Hyoung Gwon

    2016-01-01

    A local level set method for unstructured meshes has been implemented by using a finite element method. A least-square weighted residual method was employed for implicit discretization to solve the level set advection equation. By contrast, a direct re-initialization method, which is directly applicable to the local level set method for unstructured meshes, was adopted to re-correct the level set function to become a signed distance function after advection. The proposed algorithm was constructed such that the advection and direct reinitialization steps were conducted only for nodes inside the narrow band around the interface. Therefore, in the advection step, the Gauss–Seidel method was used to update the level set function using a node-by-node solution method. Some benchmark problems were solved by using the present local level set method. Numerical results have shown that the proposed algorithm is accurate and efficient in terms of computational time
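The narrow-band strategy, advancing only nodes close to the interface and freezing the rest, can be sketched in 1D (a first-order upwind scheme on a uniform grid rather than the least-squares finite element discretization described; the band width and speed are illustrative):

```python
import numpy as np

def advect_in_band(phi, vel, dx, dt, width):
    """Upwind-advect the level set (vel > 0), updating only narrow-band nodes."""
    band = np.abs(phi) <= width           # nodes within `width` of the interface
    dphi = np.zeros_like(phi)
    dphi[1:] = (phi[1:] - phi[:-1]) / dx  # backward difference for positive speed
    phi_new = phi.copy()                  # nodes outside the band stay frozen
    phi_new[band] = phi[band] - dt * vel * dphi[band]
    return phi_new

x = np.linspace(0.0, 1.0, 201)
phi = x - 0.5                             # signed distance, interface at x = 0.5
for _ in range(20):                       # total time 0.05 at unit speed
    phi = advect_in_band(phi, 1.0, x[1] - x[0], 0.0025, width=0.1)
interface = x[np.argmin(np.abs(phi))]     # should have moved to about x = 0.55
```

Because only band nodes are touched, the cost per step scales with the interface size rather than the mesh size, which is the efficiency argument the abstract makes; the frozen values far from the front are what the re-initialization step later repairs.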

  9. A local level set method based on a finite element method for unstructured meshes

    Energy Technology Data Exchange (ETDEWEB)

    Ngo, Long Cu; Choi, Hyoung Gwon [School of Mechanical Engineering, Seoul National University of Science and Technology, Seoul (Korea, Republic of)

    2016-12-15

    A local level set method for unstructured meshes has been implemented by using a finite element method. A least-square weighted residual method was employed for implicit discretization to solve the level set advection equation. By contrast, a direct re-initialization method, which is directly applicable to the local level set method for unstructured meshes, was adopted to re-correct the level set function to become a signed distance function after advection. The proposed algorithm was constructed such that the advection and direct reinitialization steps were conducted only for nodes inside the narrow band around the interface. Therefore, in the advection step, the Gauss–Seidel method was used to update the level set function using a node-by-node solution method. Some benchmark problems were solved by using the present local level set method. Numerical results have shown that the proposed algorithm is accurate and efficient in terms of computational time.

  10. A bottleneck model of set-specific capture.

    Directory of Open Access Journals (Sweden)

    Katherine Sledge Moore

    Full Text Available Set-specific contingent attentional capture is a particularly strong form of capture that occurs when multiple attentional sets guide visual search (e.g., "search for green letters" and "search for orange letters"). In this type of capture, a potential target that matches one attentional set (e.g. a green stimulus) impairs the ability to identify a temporally proximal target that matches another attentional set (e.g. an orange stimulus). In the present study, we investigated whether set-specific capture stems from a bottleneck in working memory or from a depletion of limited resources that are distributed across multiple attentional sets. In each trial, participants searched a rapid serial visual presentation (RSVP) stream for up to three target letters (T1-T3) that could appear in any of three target colors (orange, green, or lavender). The most revealing findings came from trials in which T1 and T2 matched different attentional sets and were both identified. In these trials, T3 accuracy was lower when it did not match T1's set than when it did match, but only when participants failed to identify T2. These findings support a bottleneck model of set-specific capture in which a limited-capacity mechanism in working memory enhances only one attentional set at a time, rather than a resource model in which processing capacity is simultaneously distributed across multiple attentional sets.

  11. Numerical simulation of overflow at vertical weirs using a hybrid level set/VOF method

    Science.gov (United States)

    Lv, Xin; Zou, Qingping; Reeve, Dominic

    2011-10-01

    This paper presents the application of a newly developed free surface flow model to the practical, yet challenging, problem of overflow at weirs. Since the model takes advantage of the strengths of both the level set and volume of fluid methods and solves the Navier-Stokes equations on an unstructured mesh, it is capable of resolving the time evolution of very complex vortical motions, air entrainment and pressure variations due to violent deformations following overflow of the weir crest. In the present study, two different types of vertical weir, namely broad-crested and sharp-crested, are considered for validation purposes. The calculated overflow parameters such as pressure head distributions, velocity distributions, and water surface profiles are compared against experimental data as well as numerical results available in the literature. A very good quantitative agreement has been obtained. The numerical model thus offers a good alternative to traditional experimental methods in the study of weir problems.

  12. Relationships between college settings and student alcohol use before, during and after events: a multi-level study.

    Science.gov (United States)

    Paschall, Mallie J; Saltz, Robert F

    2007-11-01

    We examined how alcohol risk is distributed based on college students' drinking before, during and after they go to certain settings. Students attending 14 California public universities (N=10,152) completed a web-based or mailed survey in the fall 2003 semester, which included questions about how many drinks they consumed before, during and after the last time they went to six settings/events: fraternity or sorority party, residence hall party, campus event (e.g. football game), off-campus party, bar/restaurant and outdoor setting (referent). Multi-level analyses were conducted in hierarchical linear modeling (HLM) to examine relationships between type of setting and level of alcohol use before, during and after going to the setting, and possible age and gender differences in these relationships. Drinking episodes (N=24,207) were level 1 units, students were level 2 units and colleges were level 3 units. The highest drinking levels were observed during all settings/events except campus events, with the highest number of drinks being consumed at off-campus parties, followed by residence hall and fraternity/sorority parties. The number of drinks consumed before a fraternity/sorority party was higher than other settings/events. Age group and gender differences in relationships between type of setting/event and 'before', 'during' and 'after' drinking levels also were observed. For example, going to a bar/restaurant (relative to an outdoor setting) was positively associated with 'during' drinks among students of legal drinking age while no relationship was observed for underage students. Findings of this study indicate differences in the extent to which college settings are associated with student drinking levels before, during and after related events, and may have implications for intervention strategies targeting different types of settings.

  13. Analyzing ROC curves using the effective set-size model

    Science.gov (United States)

    Samuelson, Frank W.; Abbey, Craig K.; He, Xin

    2018-03-01

    The Effective Set-Size model has been used to describe uncertainty in various signal detection experiments. The model regards images as if they were an effective number (M*) of searchable locations, where the observer treats each location as a location-known-exactly detection task with signals having average detectability d'. The model assumes a rational observer behaves as if he searches an effective number of independent locations and follows signal detection theory at each location. Thus the location-known-exactly detectability (d') and the effective number of independent locations M* fully characterize search performance. In this model the image rating in a single-response task is assumed to be the maximum response that the observer would assign to these many locations. The model has been used by a number of other researchers, and is well corroborated. We examine this model as a way of differentiating imaging tasks that radiologists perform. Tasks involving more searching or location uncertainty may have higher estimated M* values. In this work we applied the Effective Set-Size model to a number of medical imaging data sets. The data sets include radiologists reading screening and diagnostic mammography with and without computer-aided diagnosis (CAD), and breast tomosynthesis. We developed an algorithm to fit the model parameters using two-sample maximum-likelihood ordinal regression, similar to the classic bi-normal model. The resulting model ROC curves are rational and fit the observed data well. We find that the distributions of M* and d' differ significantly among these data sets, and differ between pairs of imaging systems within studies. For example, on average tomosynthesis increased readers' d' values, while CAD reduced the M* parameters. We demonstrate that the model parameters M* and d' are correlated. We conclude that the Effective Set-Size model may be a useful way of differentiating location uncertainty from the diagnostic uncertainty in medical
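The core assumption of the Effective Set-Size model, that an image rating is the maximum of M* independent location responses with one location carrying a signal of detectability d', can be sketched with a small Monte-Carlo simulation. The values of M*, d' and the trial count below are illustrative assumptions, not parameters fitted in the study.

```python
import numpy as np

# Effective Set-Size model: rating = max response over M* locations;
# in signal-present images one location has mean shifted by d'.
rng = np.random.default_rng(0)
m_star, d_prime, n = 4, 2.0, 2000

noise_rating = rng.normal(size=(n, m_star)).max(axis=1)  # signal-absent images
resp = rng.normal(size=(n, m_star))
resp[:, 0] += d_prime                                    # the signal location
signal_rating = resp.max(axis=1)

# empirical AUC = P(signal rating > noise rating), i.e. the area under
# the ROC curve traced by sweeping a rating threshold
auc = (signal_rating[:, None] > noise_rating[None, :]).mean()
```

Holding d' fixed and raising M* pushes the noise maxima upward and lowers the AUC, which is exactly why the model can separate location uncertainty (M*) from per-location detectability (d').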

  14. Setting ozone critical levels for protecting horticultural Mediterranean crops: Case study of tomato

    International Nuclear Information System (INIS)

    González-Fernández, I.; Calvo, E.; Gerosa, G.; Bermejo, V.; Marzuoli, R.; Calatayud, V.; Alonso, R.

    2014-01-01

    Seven experiments carried out in Italy and Spain have been used to parameterise a stomatal conductance model and to establish exposure– and dose–response relationships for yield and quality of tomato, with the main goal of setting O3 critical levels (CLe). CLe, with confidence intervals between brackets, were set at an accumulated hourly O3 exposure over 40 nl l−1 of AOT40 = 8.4 (1.2, 15.6) ppm h and a phytotoxic ozone dose above a threshold of 6 nmol m−2 s−1 of POD6 = 2.7 (0.8, 4.6) mmol m−2 for yield, and AOT40 = 18.7 (8.5, 28.8) ppm h and POD6 = 4.1 (2.0, 6.2) mmol m−2 for quality, both indices performing equally well. CLe confidence intervals provide information on the quality of the dataset and should be included in future calculations of O3 CLe for improving current methodologies. These CLe, derived for sensitive tomato cultivars, should not be applied for quantifying O3-induced losses, at the risk of making important overestimations of the economic losses associated with O3 pollution. -- Highlights: • Seven independent experiments from Italy and Spain were analysed. • O3 critical levels are proposed for the protection of summer horticultural crops. • Exposure- and flux-based O3 indices performed equally well. • Confidence intervals of the new O3 critical levels are calculated. • A new method to estimate the degree of risk of O3 damage is proposed. -- Critical levels for tomato yield were set at AOT40 = 8.4 ppm h and POD6 = 2.7 mmol m−2 and confidence intervals should be used for improving O3 risk assessment

  15. Fuzzy GML Modeling Based on Vague Soft Sets

    Directory of Open Access Journals (Sweden)

    Bo Wei

    2017-01-01

    Full Text Available The Open Geospatial Consortium (OGC) Geography Markup Language (GML) explicitly represents geographical spatial knowledge in text mode. All kinds of fuzzy problems will inevitably be encountered in spatial knowledge expression, and for expressions in text mode this fuzziness is even broader. Describing and representing fuzziness in GML therefore seems necessary. Three kinds of fuzziness can be found in GML: element fuzziness, chain fuzziness, and attribute fuzziness. Both element fuzziness and chain fuzziness reflect fuzziness between GML elements, so the representation of chain fuzziness can be replaced by the representation of element fuzziness in GML. On the basis of vague soft set theory, two kinds of modeling, vague soft set GML Document Type Definition (DTD) modeling and vague soft set GML schema modeling, are proposed for fuzzy modeling in GML DTD and GML schema, respectively. Five elements or pairs, associated with vague soft sets, are introduced. Then, the DTDs and the schemas of the five elements are correspondingly designed and presented according to their different chains and different fuzzy data types. While the introduction of the five elements or pairs is the basis of vague soft set GML modeling, the corresponding DTD and schema modifications are key for the implementation of modeling. The establishment of vague soft set GML enables GML to represent fuzziness and solves the problem of the lack of fuzzy information expression in GML.
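The vague soft set structure underlying this modeling can be sketched directly: each parameter maps every element of a universe to a (truth, false) membership pair with t + f <= 1, the remainder 1 - t - f being hesitation. The feature names and membership values below are assumptions for illustration, not taken from the paper.

```python
# A vague soft set over a universe of GML feature types.
universe = {"road", "river", "building"}
vague_soft_set = {
    "is_flood_prone": {
        "road": (0.2, 0.6),      # truth 0.2, false 0.6, hesitation 0.2
        "river": (0.9, 0.0),
        "building": (0.3, 0.5),
    },
}

def is_valid(vss):
    """Check the vague-set condition t(x) + f(x) <= 1 for every element."""
    return all(t + f <= 1.0
               for mapping in vss.values()
               for (t, f) in mapping.values())

print(is_valid(vague_soft_set))  # True
```

A schema-level encoding (DTD or XSD) of the same structure would attach the (t, f) pair as attributes of a fuzzy element, which is essentially what the five proposed elements do.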

  16. An efficient, scalable, and adaptable framework for solving generic systems of level-set PDEs

    Directory of Open Access Journals (Sweden)

    Kishore R. Mosaliganti

    2013-12-01

    Full Text Available In the last decade, level-set methods have been actively developed for applications in image registration, segmentation, tracking, and reconstruction. However, the development of a wide variety of level-set PDEs and their numerical discretization schemes, coupled with hybrid combinations of PDE terms, stopping criteria, and reinitialization strategies, has created a software logistics problem. In the absence of an integrative design, current toolkits support only specific types of level-set implementations which restrict future algorithm development since extensions require significant code duplication and effort. In the new NIH/NLM Insight Toolkit (ITK) v4 architecture, we implemented a level-set software design that is flexible to different numerical representations (continuous, discrete, and sparse) and grid representations (point, mesh, and image-based). Given that a generic PDE is a summation of different terms, we used a set of linked containers to which level-set terms can be added or deleted at any point in the evolution process. This container-based approach allows the user to explore and customize terms in the level-set equation at compile-time in a flexible manner. The framework is optimized so that repeated computations of common intensity functions (e.g. gradients and Hessians) across multiple terms are eliminated. The framework further enables the evolution of multiple level-sets for multi-object segmentation and processing of large datasets. For doing so, we restrict level-set domains to subsets of the image domain and use multithreading strategies to process groups of subdomains or level-set functions. Users can also select from a variety of reinitialization policies and stopping criteria. Finally, we developed a visualization framework that shows the evolution of a level-set in real-time to help guide algorithm development and parameter optimization. We demonstrate the power of our new framework using confocal microscopy images of cells in a
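The container-of-terms design described above (a generic level-set PDE as a summation of pluggable terms) can be sketched in a few lines. The class names, term definitions and weights are our own illustrative assumptions, not the ITK v4 API, and the discretization is deliberately crude.

```python
import numpy as np

class CurvatureTerm:
    """Smoothing term: a Laplacian as a crude curvature surrogate."""
    weight = 0.1
    def evaluate(self, phi):
        gy, gx = np.gradient(phi)
        gyy, _ = np.gradient(gy)
        _, gxx = np.gradient(gx)
        return self.weight * (gxx + gyy)

class PropagationTerm:
    """Outward front propagation at unit speed."""
    weight = 1.0
    def evaluate(self, phi):
        gy, gx = np.gradient(phi)
        return -self.weight * np.hypot(gx, gy)

terms = [CurvatureTerm(), PropagationTerm()]   # container of PDE terms
# signed distance to a circle of radius 5 centered in a 33x33 grid
phi = np.fromfunction(lambda i, j: np.hypot(i - 16, j - 16) - 5, (33, 33))

inside_before = int((phi < 0).sum())
for _ in range(10):
    update = sum(t.evaluate(phi) for t in terms)  # generic PDE = sum of terms
    phi += 0.5 * update                           # explicit Euler step
inside_after = int((phi < 0).sum())               # the front has expanded
```

Adding or removing a behavior is just an edit to the `terms` list, which is the point of the linked-container design.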

  17. A Variational Level Set Approach Based on Local Entropy for Image Segmentation and Bias Field Correction.

    Science.gov (United States)

    Tang, Jian; Jiang, Xiaoliang

    2017-01-01

    Image segmentation has always been a considerable challenge in image analysis and understanding due to the intensity inhomogeneity, which is also commonly known as bias field. In this paper, we present a novel region-based approach based on local entropy for segmenting images and estimating the bias field simultaneously. Firstly, a local Gaussian distribution fitting (LGDF) energy function is defined as a weighted energy integral, where the weight is local entropy derived from a grey level distribution of local image. The means of this objective function have a multiplicative factor that estimates the bias field in the transformed domain. Then, the bias field prior is fully used. Therefore, our model can estimate the bias field more accurately. Finally, minimization of this energy function with a level set regularization term, image segmentation, and bias field estimation can be achieved. Experiments on images of various modalities demonstrated the superior performance of the proposed method when compared with other state-of-the-art approaches.

  18. A LEVEL SET BASED SHAPE OPTIMIZATION METHOD FOR AN ELLIPTIC OBSTACLE PROBLEM

    KAUST Repository

    Burger, Martin; Matevosyan, Norayr; Wolfram, Marie-Therese

    2011-01-01

    analysis of the level set method in terms of viscosity solutions. To our knowledge this is the first complete analysis of a level set method for a nonlocal shape optimization problem. Finally, we discuss the implementation of the methods and illustrate its

  19. Benchmarking of protein descriptor sets in proteochemometric modeling (part 2): modeling performance of 13 amino acid descriptor sets

    Science.gov (United States)

    2013-01-01

    Background While a large body of work exists on comparing and benchmarking descriptors of molecular structures, a similar comparison of protein descriptor sets is lacking. Hence, in the current work a total of 13 amino acid descriptor sets have been benchmarked with respect to their ability of establishing bioactivity models. The descriptor sets included in the study are Z-scales (3 variants), VHSE, T-scales, ST-scales, MS-WHIM, FASGAI, BLOSUM, a novel protein descriptor set (termed ProtFP (4 variants)), and in addition we created and benchmarked three pairs of descriptor combinations. Prediction performance was evaluated in seven structure-activity benchmarks which comprise Angiotensin Converting Enzyme (ACE) dipeptidic inhibitor data, and three proteochemometric data sets, namely (1) GPCR ligands modeled against a GPCR panel, (2) enzyme inhibitors (NNRTIs) with associated bioactivities against a set of HIV enzyme mutants, and (3) enzyme inhibitors (PIs) with associated bioactivities on a large set of HIV enzyme mutants. Results The amino acid descriptor sets compared here show broadly similar performance, although set differences exist (>0.3 log units RMSE difference and >0.7 difference in MCC). Combining different descriptor sets generally leads to better modeling performance than utilizing individual sets. The best performers were Z-scales (3) combined with ProtFP (Feature), or Z-scales (3) combined with an average Z-scale value for each target, while ProtFP (PCA8), ST-scales, and ProtFP (Feature) rank last. Conclusions While amino acid descriptor sets capture different aspects of amino acids, their ability to be used for bioactivity modeling is still – on average – surprisingly similar. Still, combining sets describing complementary information consistently leads to small improvements in modeling performance (average MCC 0.01 better, average RMSE 0.01 log units lower). Finally, performance differences exist between the targets compared, thereby underlining that

  20. A HIERARCHICAL SET OF MODELS FOR SPECIES RESPONSE ANALYSIS

    NARCIS (Netherlands)

    HUISMAN, J; OLFF, H; FRESCO, LFM

    Variation in the abundance of species in space and/or time can be caused by a wide range of underlying processes. Before such causes can be analysed we need simple mathematical models which can describe the observed response patterns. For this purpose a hierarchical set of models is presented. These

  1. A hierarchical set of models for species response analysis

    NARCIS (Netherlands)

    Huisman, J.; Olff, H.; Fresco, L.F.M.

    1993-01-01

    Variation in the abundance of species in space and/or time can be caused by a wide range of underlying processes. Before such causes can be analysed we need simple mathematical models which can describe the observed response patterns. For this purpose a hierarchical set of models is presented. These

  2. Robust space-time extraction of ventricular surface evolution using multiphase level sets

    Science.gov (United States)

    Drapaca, Corina S.; Cardenas, Valerie; Studholme, Colin

    2004-05-01

    This paper focuses on the problem of accurately extracting the CSF-tissue boundary, particularly around the ventricular surface, from serial structural MRI of the brain acquired in imaging studies of aging and dementia. This is a challenging problem because of the common occurrence of peri-ventricular lesions which locally alter the appearance of white matter. We examine a level set approach which evolves a four dimensional description of the ventricular surface over time. This has the advantage of allowing constraints on the contour in the temporal dimension, improving the consistency of the extracted object over time. We follow the approach proposed by Chan and Vese which is based on the Mumford and Shah model and implemented using the Osher and Sethian level set method. We have extended this to the 4 dimensional case to propagate a 4D contour toward the tissue boundaries through the evolution of a 5D implicit function. For convergence we use region-based information provided by the image rather than the gradient of the image. This is adapted to allow intensity contrast changes between time frames in the MRI sequence. Results on time sequences of 3D brain MR images are presented and discussed.

  3. Joint level-set and spatio-temporal motion detection for cell segmentation.

    Science.gov (United States)

    Boukari, Fatima; Makrogiannis, Sokratis

    2016-08-10

    Cell segmentation is a critical step for quantification and monitoring of cell cycle progression, cell migration, and growth control to investigate cellular immune response, embryonic development, tumorigenesis, and drug effects on live cells in time-lapse microscopy images. In this study, we propose a joint spatio-temporal diffusion and region-based level-set optimization approach for moving cell segmentation. Moving regions are initially detected in each set of three consecutive sequence images by numerically solving a system of coupled spatio-temporal partial differential equations. In order to standardize intensities of each frame, we apply a histogram transformation approach to match the pixel intensities of each processed frame with an intensity distribution model learned from all frames of the sequence during the training stage. After the spatio-temporal diffusion stage is completed, we compute the edge map by nonparametric density estimation using Parzen kernels. This process is followed by watershed-based segmentation and moving cell detection. We use this result as an initial level-set function to evolve the cell boundaries, refine the delineation, and optimize the final segmentation result. We applied this method to several datasets of fluorescence microscopy images with varying levels of difficulty with respect to cell density, resolution, contrast, and signal-to-noise ratio. We compared the results with those produced by Chan and Vese segmentation, a temporally linked level-set technique, and nonlinear diffusion-based segmentation. We validated all segmentation techniques against reference masks provided by the international Cell Tracking Challenge consortium. The proposed approach delineated cells with an average Dice similarity coefficient of 89 % over a variety of simulated and real fluorescent image sequences. It yielded average improvements of 11 % in segmentation accuracy compared to both strictly spatial and temporally linked Chan
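The intensity-standardization step mentioned above, matching each frame's pixel intensities to a learned distribution, can be sketched with a rank-based histogram transform. This simple version assumes the frame and the reference have the same number of pixels; the function name and the synthetic arrays are ours, not the paper's.

```python
import numpy as np

def match_histogram(frame, reference):
    """Map frame intensities onto the reference distribution by rank."""
    order = np.argsort(frame.ravel())            # ranks of the frame pixels
    matched = np.empty(frame.size, dtype=float)
    matched[order] = np.sort(reference.ravel())  # assign reference values by rank
    return matched.reshape(frame.shape)

rng = np.random.default_rng(2)
frame = rng.normal(100.0, 20.0, (32, 32))        # dim, low-contrast frame
reference = rng.normal(160.0, 40.0, (32, 32))    # learned intensity model
out = match_histogram(frame, reference)
```

After the transform the output has exactly the reference's intensity histogram while preserving the ordering of the frame's pixels, which is what makes downstream thresholds and level-set energies comparable across frames.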

  4. Enhanced Waste Tank Level Model

    Energy Technology Data Exchange (ETDEWEB)

    Duignan, M.R.

    1999-06-24

    'With the increased sensitivity of waste-level measurements in the H-Area Tanks and with periods of isolation, when no mass transfer occurred for certain tanks, waste-level changes have been recorded which are unexplained.'

  5. Hybrid approach for detection of dental caries based on the methods FCM and level sets

    Science.gov (United States)

    Chaabene, Marwa; Ben Ali, Ramzi; Ejbali, Ridha; Zaied, Mourad

    2017-03-01

    This paper presents a new technique for detection of dental caries that is a bacterial disease that destroys the tooth structure. In our approach, we have achieved a new segmentation method that combines the advantages of fuzzy C mean algorithm and level set method. The results obtained by the FCM algorithm will be used by Level sets algorithm to reduce the influence of the noise effect on the working of each of these algorithms, to facilitate level sets manipulation and to lead to more robust segmentation. The sensitivity and specificity confirm the effectiveness of proposed method for caries detection.
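The hybrid idea above, fuzzy C-means (FCM) memberships computed first and then used to seed the level set, can be sketched on synthetic intensities. The two-class data, the fuzzifier m = 2 and the iteration count are illustrative assumptions, not the paper's dental images.

```python
import numpy as np

rng = np.random.default_rng(1)
pixels = np.concatenate([rng.normal(0.2, 0.05, 500),    # background
                         rng.normal(0.8, 0.05, 500)])   # carious region

m = 2.0                                        # FCM fuzzifier
centers = np.array([0.0, 1.0])                 # initial cluster centers
for _ in range(30):
    d = np.abs(pixels[:, None] - centers[None, :]) + 1e-12
    w = d ** (-2.0 / (m - 1.0))                # inverse-distance weights
    u = w / w.sum(axis=1, keepdims=True)       # memberships sum to 1 per pixel
    centers = (u.T ** m @ pixels) / (u.T ** m).sum(axis=1)

# seed the level set from the bright-class membership:
# positive inside the candidate caries region, negative outside
phi0 = u[:, 1] - 0.5
# centers converge near the two class means (about 0.2 and 0.8)
```

Initializing the level set from soft memberships rather than a hard threshold is what gives the hybrid its robustness to noise: pixels with ambiguous membership start near the zero level and are settled by the level-set evolution.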

  6. Out-of-Core Computations of High-Resolution Level Sets by Means of Code Transformation

    DEFF Research Database (Denmark)

    Christensen, Brian Bunch; Nielsen, Michael Bang; Museth, Ken

    2012-01-01

    We propose a storage efficient, fast and parallelizable out-of-core framework for streaming computations of high resolution level sets. The fundamental techniques are skewing and tiling transformations of streamed level set computations which allow for the combination of interface propagation, re...... computations are now CPU bound and consequently the overall performance is unaffected by disk latency and bandwidth limitations. We demonstrate this with several benchmark tests that show sustained out-of-core throughputs close to that of in-core level set simulations....

  7. Use of fuzzy sets in modeling of GIS objects

    Science.gov (United States)

    Mironova, Yu N.

    2018-05-01

    The paper discusses modeling and methods of data visualization in geographic information systems. Information processing in geoinformatics is based on the use of models; therefore, geoinformation modeling is a key link in the chain of geodata processing. When solving problems using geographic information systems, it is often necessary to submit approximate or insufficiently reliable information about map features to the GIS database. Heterogeneous data of different origin and accuracy have some degree of uncertainty. In addition, not all information is accurate: already during the initial measurements, poorly defined terms and attributes (e.g., "soil, well-drained") are used. Methods are therefore necessary for working with uncertain requirements, classes, and boundaries. The author proposes using fuzzy sets for spatial information. In terms of a characteristic function, a fuzzy set is a natural generalization of an ordinary set, obtained by rejecting the binary nature of this function and assuming that it can take any value in the interval [0, 1].
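The characteristic-function view can be made concrete: an ordinary set maps each element to {0, 1}, while a fuzzy set maps it to [0, 1]. The trapezoidal membership for "well-drained soil" below is an assumed example of how such a GIS attribute might be modeled, not a function from the paper.

```python
def crisp_membership(drainage, threshold=0.5):
    """Ordinary set: an element is either in or out."""
    return 1.0 if drainage >= threshold else 0.0

def fuzzy_membership(drainage, a=0.2, b=0.4, c=0.7, d=0.9):
    """Trapezoidal fuzzy membership for 'well-drained soil'."""
    if drainage <= a or drainage >= d:
        return 0.0
    if b <= drainage <= c:
        return 1.0
    if drainage < b:
        return (drainage - a) / (b - a)   # rising edge
    return (d - drainage) / (d - c)       # falling edge

# a borderline soil: excluded by the crisp set, partial fuzzy member
print(crisp_membership(0.3), round(fuzzy_membership(0.3), 3))  # 0.0 0.5
```

The fuzzy version lets the GIS keep the uncertainty of "well-drained" in the data instead of forcing a boundary decision at capture time.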

  8. Optimisation-Based Solution Methods for Set Partitioning Models

    DEFF Research Database (Denmark)

    Rasmussen, Matias Sevel

    The scheduling of crew, i.e. the construction of work schedules for crew members, is often not a trivial task, but a complex puzzle. The task is complicated by rules, restrictions, and preferences. Therefore, manual solutions as well as solutions from standard software packages are not always sufficient with respect to solution quality and solution time. Enhancement of the overall solution quality as well as the solution time can be of vital importance to many organisations. The fields of operations research and mathematical optimisation deal with mathematical modelling of difficult scheduling problems (among other topics). The fields also deal with the development of sophisticated solution methods for these mathematical models. This thesis describes the set partitioning model which has been widely used for modelling crew scheduling problems. Integer properties for the set partitioning model are shown...
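The set partitioning model behind this kind of crew scheduling can be sketched at toy scale: rows are tasks, columns are candidate work schedules with costs, and we must pick schedules so that every task is covered exactly once at minimum cost. The tasks, schedules and costs below are illustrative assumptions, and the brute-force search stands in for the integer-programming solvers used in practice.

```python
from itertools import combinations

tasks = {1, 2, 3, 4}
schedules = {          # schedule name: (tasks covered, cost)
    "A": ({1, 2}, 5), "B": ({3, 4}, 6), "C": ({1, 3}, 4),
    "D": ({2, 4}, 4), "E": ({1, 2, 3, 4}, 11),
}

best = None
for r in range(1, len(schedules) + 1):
    for combo in combinations(schedules, r):
        covered = [t for s in combo for t in schedules[s][0]]
        if sorted(covered) == sorted(tasks):      # partition: exactly once
            cost = sum(schedules[s][1] for s in combo)
            if best is None or cost < best[1]:
                best = (set(combo), cost)

print(sorted(best[0]), best[1])  # ['C', 'D'] 8
```

The "exactly once" check is what distinguishes set partitioning from set covering; relaxing it to "at least once" would also admit over-covered rosters such as A together with E.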

  9. Setting conservation management thresholds using a novel participatory modeling approach.

    Science.gov (United States)

    Addison, P F E; de Bie, K; Rumpff, L

    2015-10-01

    We devised a participatory modeling approach for setting management thresholds that show when management intervention is required to address undesirable ecosystem changes. This approach was designed to be used when management thresholds: must be set for environmental indicators in the face of multiple competing objectives; need to incorporate scientific understanding and value judgments; and will be set by participants with limited modeling experience. We applied our approach to a case study where management thresholds were set for a mat-forming brown alga, Hormosira banksii, in a protected area management context. Participants, including management staff and scientists, were involved in a workshop to test the approach, and set management thresholds to address the threat of trampling by visitors to an intertidal rocky reef. The approach involved trading off the environmental objective, to maintain the condition of intertidal reef communities, with social and economic objectives to ensure management intervention was cost-effective. Ecological scenarios, developed using scenario planning, were a key feature that provided the foundation for where to set management thresholds. The scenarios developed represented declines in percent cover of H. banksii that may occur under increased threatening processes. Participants defined 4 discrete management alternatives to address the threat of trampling and estimated the effect of these alternatives on the objectives under each ecological scenario. A weighted additive model was used to aggregate participants' consequence estimates. Model outputs (decision scores) clearly expressed uncertainty, which can be considered by decision makers and used to inform where to set management thresholds. This approach encourages a proactive form of conservation, where management thresholds and associated actions are defined a priori for ecological indicators, rather than reacting to unexpected ecosystem changes in the future. © 2015 The
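The weighted additive aggregation used above can be sketched in a few lines: each management alternative gets a decision score that is the weight-sum of its normalized consequence estimates across objectives. The objective weights, alternatives and scores below are assumed examples, not the workshop's elicited values.

```python
# Weighted additive model over competing objectives.
weights = {"environmental": 0.5, "social": 0.2, "economic": 0.3}
consequences = {   # alternative: normalized consequence on each objective
    "no_action":    {"environmental": 0.1, "social": 0.9, "economic": 1.0},
    "close_access": {"environmental": 0.9, "social": 0.2, "economic": 0.4},
}

scores = {alt: sum(weights[obj] * v for obj, v in cons.items())
          for alt, cons in consequences.items()}
best_alt = max(scores, key=scores.get)
print(best_alt, round(scores[best_alt], 2))  # close_access 0.61
```

In the participatory setting, uncertainty would enter through distributions over the consequence estimates rather than the point values used here, which is how the decision scores come to "clearly express uncertainty".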

  10. Reservoir characterisation by a binary level set method and adaptive multiscale estimation

    Energy Technology Data Exchange (ETDEWEB)

    Nielsen, Lars Kristian

    2006-01-15

    The main focus of this work is on estimation of the absolute permeability as a solution of an inverse problem. We have considered both a single-phase and a two-phase flow model. Two novel approaches have been introduced and tested numerically for solving the inverse problems. The first approach is a multi scale zonation technique which is treated in Paper A. The purpose of the work in this paper is to find a coarse scale solution based on production data from wells. In the suggested approach, the robustness of an already developed method, the adaptive multi scale estimation (AME), has been improved by utilising information from several candidate solutions generated by a stochastic optimizer. The new approach also suggests a way of combining a stochastic and a gradient search method, which in general is a problematic issue. The second approach is a piecewise constant level set approach and is applied in Paper B, C, D and E. Paper B considers the stationary single-phase problem, while Paper C, D and E use a two-phase flow model. In the two-phase flow problem we have utilised information from both production data in wells and spatially distributed data gathered from seismic surveys. Due to the higher content of information provided by the spatially distributed data, we search solutions on a slightly finer scale than one typically does with only production data included. The applied level set method is suitable for reconstruction of fields with a supposed known facies-type of solution. That is, the solution should be close to piecewise constant. This information is utilised through a strong restriction of the number of constant levels in the estimate. On the other hand, the flexibility in the geometries of the zones is much larger for this method than in a typical zonation approach, for example the multi scale approach applied in Paper A. In all these papers, the numerical studies are done on synthetic data sets. An advantage of synthetic data studies is that the true

  11. A LEVEL SET BASED SHAPE OPTIMIZATION METHOD FOR AN ELLIPTIC OBSTACLE PROBLEM

    KAUST Repository

    Burger, Martin

    2011-04-01

    In this paper, we construct a level set method for an elliptic obstacle problem, which can be reformulated as a shape optimization problem. We provide a detailed shape sensitivity analysis for this reformulation and a stability result for the shape Hessian at the optimal shape. Using the shape sensitivities, we construct a geometric gradient flow, which can be realized in the context of level set methods. We prove the convergence of the gradient flow to an optimal shape and provide a complete analysis of the level set method in terms of viscosity solutions. To our knowledge this is the first complete analysis of a level set method for a nonlocal shape optimization problem. Finally, we discuss the implementation of the methods and illustrate its behavior through several computational experiments. © 2011 World Scientific Publishing Company.

  12. A variational approach to multi-phase motion of gas, liquid and solid based on the level set method

    Science.gov (United States)

    Yokoi, Kensuke

    2009-07-01

    We propose a simple and robust numerical algorithm to deal with multi-phase motion of gas, liquid and solid based on the level set method [S. Osher, J.A. Sethian, Fronts propagating with curvature-dependent speed: Algorithms based on Hamilton-Jacobi formulations, J. Comput. Phys. 79 (1988) 12; M. Sussman, P. Smereka, S. Osher, A level set approach for computing solutions to incompressible two-phase flow, J. Comput. Phys. 114 (1994) 146; J.A. Sethian, Level Set Methods and Fast Marching Methods, Cambridge University Press, 1999; S. Osher, R. Fedkiw, Level Set Methods and Dynamic Implicit Surfaces, Applied Mathematical Sciences, vol. 153, Springer, 2003]. In the Eulerian framework, to simulate interaction between a moving solid object and an interfacial flow, we need to define at least two functions (level set functions) to distinguish three materials. In such simulations, in general the two functions overlap and/or disagree due to numerical errors such as numerical diffusion. In this paper, we resolved the problem using the idea of the active contour model [M. Kass, A. Witkin, D. Terzopoulos, Snakes: active contour models, International Journal of Computer Vision 1 (1988) 321; V. Caselles, R. Kimmel, G. Sapiro, Geodesic active contours, International Journal of Computer Vision 22 (1997) 61; G. Sapiro, Geometric Partial Differential Equations and Image Analysis, Cambridge University Press, 2001; R. Kimmel, Numerical Geometry of Images: Theory, Algorithms, and Applications, Springer-Verlag, 2003] introduced in the field of image processing.

  13. Home advantage in high-level volleyball varies according to set number.

    Science.gov (United States)

    Marcelino, Rui; Mesquita, Isabel; Palao Andrés, José Manuel; Sampaio, Jaime

    2009-01-01

    The aim of the present study was to identify the probability of winning each volleyball set according to game location (home, away). Archival data were obtained from 275 sets in the 2005 Men's Senior World League, and 65,949 actions were analysed. Set result (win, loss), game location (home, away), set number (first, second, third, fourth and fifth) and performance indicators (serve, reception, set, attack, dig and block) were the variables considered in this study. First, the performance indicators were used in a logistic model of set result, fitted by binary logistic regression analysis. After finding the adjusted logistic model, the log-odds of winning the set were analysed according to game location and set number. The results showed that winning a set is significantly related to the performance indicators (Chi-square(18) = 660.97, p < …). Home teams had an advantage at the beginning of the game (first set) and in the last two sets of the game (fourth and fifth sets), probably due to familiarity with the facilities and crowd effects. Different game actions explain these advantages: to win the first set it is more important to take risk, through better performance in the attack and block, while to win the final set it is important to manage risk through better performance in reception. These results suggest intra-game variation in home advantage and can be useful for better preparing and directing the competition. Key points: (1) Home teams always have a higher probability of winning the game than away teams. (2) Home teams have higher performance in reception, set and attack over the total of the sets. (3) The advantage of home teams is more pronounced at the beginning of the game (first set) and in the last two sets of the game (fourth and fifth sets), suggesting intra-game variation in home advantage. (4) Analysis by sets showed that home teams have better performance in the attack and block in the first set and in reception in the third and fifth sets.
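The set-level logistic model can be sketched with synthetic data (the indicator effects below are invented for illustration; the study's 18-parameter model and its chi-square statistic are not reproduced here):

```python
import numpy as np

# Minimal logistic model of set result (win=1 / loss=0) from performance
# indicators, fitted by plain gradient ascent on the log-likelihood.
# The data are synthetic stand-ins for the study's indicator counts.
rng = np.random.default_rng(0)
n = 400
attack = rng.normal(0.0, 1.0, n)     # standardized attack performance
reception = rng.normal(0.0, 1.0, n)  # standardized reception performance
logit = 0.9 * attack + 0.5 * reception
y = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))  # simulated set results

X = np.column_stack([np.ones(n), attack, reception])
w = np.zeros(3)
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-X @ w))
    w += 0.05 * X.T @ (y - p) / n    # gradient of the log-likelihood

odds_ratio = np.exp(w[1])  # multiplicative change in odds per unit of attack
print("coefficients:", w.round(2), "attack odds ratio:", round(odds_ratio, 2))
```

Interpreting `exp(coefficient)` as an odds ratio is what allows statements like "better attack performance increases the odds of winning the set".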

  14. Cooperative Fuzzy Games Approach to Setting Target Levels of ECs in Quality Function Deployment

    Directory of Open Access Journals (Sweden)

    Zhihui Yang

    2014-01-01

    Quality function deployment (QFD) can provide a means of translating customer requirements (CRs) into engineering characteristics (ECs) for each stage of product development and production. The main objective of QFD-based product planning is to determine the target levels of the ECs for a new product or service. QFD is a breakthrough tool which can effectively reduce the gap between CRs and a new product/service. Even though there are conflicts among some ECs, the objective of developing a new product is to maximize overall customer satisfaction; therefore, there may be room for cooperation among the ECs. A cooperative game framework combined with fuzzy set theory is developed to determine the target levels of the ECs in QFD. The key to developing the model is the formulation of the bargaining function. In the proposed methodology, the players are viewed as the membership functions of the ECs to formulate the bargaining function. The solution of the proposed model is Pareto-optimal. An illustrative example demonstrates the application and performance of the proposed approach.
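A minimal sketch of the bargaining idea, under assumed linear membership functions and a hypothetical budget constraint coupling two EC target levels (not the paper's exact formulation):

```python
import numpy as np

# Each EC gets a fuzzy membership function rising with its target level; the
# ECs "cooperate" by maximizing the product of memberships (a Nash-style
# bargaining function) under a shared budget on the two levels.
def membership(level, lo, hi):
    """Linear fuzzy membership: 0 below lo, 1 above hi."""
    return np.clip((level - lo) / (hi - lo), 0.0, 1.0)

levels = np.linspace(0.0, 1.0, 201)
x1, x2 = np.meshgrid(levels, levels, indexing="ij")
feasible = x1 + x2 <= 1.2 + 1e-9   # budget couples the two target levels
score = membership(x1, 0.2, 0.9) * membership(x2, 0.1, 0.8)
score = np.where(feasible, score, -1.0)

i, j = np.unravel_index(np.argmax(score), score.shape)
print("target levels:", levels[i], levels[j], "bargaining value:", score[i, j])
```

The grid search lands on the boundary of the budget, where the product of the two memberships is maximized; any other feasible point gives one EC a higher membership only by lowering the product.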

  15. Cooperative fuzzy games approach to setting target levels of ECs in quality function deployment.

    Science.gov (United States)

    Yang, Zhihui; Chen, Yizeng; Yin, Yunqiang

    2014-01-01

    Quality function deployment (QFD) can provide a means of translating customer requirements (CRs) into engineering characteristics (ECs) for each stage of product development and production. The main objective of QFD-based product planning is to determine the target levels of the ECs for a new product or service. QFD is a breakthrough tool which can effectively reduce the gap between CRs and a new product/service. Even though there are conflicts among some ECs, the objective of developing a new product is to maximize overall customer satisfaction; therefore, there may be room for cooperation among the ECs. A cooperative game framework combined with fuzzy set theory is developed to determine the target levels of the ECs in QFD. The key to developing the model is the formulation of the bargaining function. In the proposed methodology, the players are viewed as the membership functions of the ECs to formulate the bargaining function. The solution of the proposed model is Pareto-optimal. An illustrative example demonstrates the application and performance of the proposed approach.

  16. An accurate conservative level set/ghost fluid method for simulating turbulent atomization

    International Nuclear Information System (INIS)

    Desjardins, Olivier; Moureau, Vincent; Pitsch, Heinz

    2008-01-01

    This paper presents a novel methodology for simulating incompressible two-phase flows by combining an improved version of the conservative level set technique introduced in [E. Olsson, G. Kreiss, A conservative level set method for two phase flow, J. Comput. Phys. 210 (2005) 225-246] with a ghost fluid approach. By employing a hyperbolic tangent level set function that is transported and re-initialized using fully conservative numerical schemes, mass conservation issues that are known to affect level set methods are greatly reduced. In order to improve the accuracy of the conservative level set method, high order numerical schemes are used. The overall robustness of the numerical approach is increased by computing the interface normals from a signed distance function reconstructed from the hyperbolic tangent level set by a fast marching method. The convergence of the curvature calculation is ensured by using a least squares reconstruction. The ghost fluid technique provides a way of handling the interfacial forces and large density jumps associated with two-phase flows with good accuracy, while avoiding artificial spreading of the interface. Since the proposed approach relies on partial differential equations, its implementation is straightforward in all coordinate systems, and it benefits from high parallel efficiency. The robustness and efficiency of the approach are further improved by using implicit schemes for the interface transport and re-initialization equations, as well as for the momentum solver. The performance of the method is assessed through both classical level set transport tests and simple two-phase flow examples including topology changes. It is then applied to simulate turbulent atomization of a liquid Diesel jet at Re=3000. The conservation errors associated with the accurate conservative level set technique are shown to remain small even for this complex case.
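The core property of the conservative level set formulation, that flux-form transport of the hyperbolic tangent profile preserves total mass, can be sketched in 1D (first-order upwind here, unlike the high-order implicit schemes of the paper):

```python
import numpy as np

# The interface is carried by a hyperbolic tangent profile psi in [0, 1],
# transported in conservative (flux) form so that the total "mass"
# sum(psi) * dx is preserved exactly by construction.
nx = 200
dx = 1.0 / nx
u = 1.0                     # constant advection velocity
dt = 0.5 * dx / u           # CFL = 0.5
x = (np.arange(nx) + 0.5) * dx
eps = 1.5 * dx              # interface thickness parameter
psi = 0.5 * (np.tanh((0.25 - np.abs(x - 0.5)) / (2.0 * eps)) + 1.0)

mass0 = psi.sum() * dx
for _ in range(100):
    flux = u * psi                               # upwind flux for u > 0
    psi -= dt / dx * (flux - np.roll(flux, 1))   # periodic conservative update

print("relative mass error:", abs(psi.sum() * dx - mass0) / mass0)
```

Because the update is a difference of fluxes, the sum telescopes and mass is conserved to round-off, which is exactly the property that plain signed-distance level set transport lacks.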

  17. A simple mass-conserved level set method for simulation of multiphase flows

    Science.gov (United States)

    Yuan, H.-Z.; Shu, C.; Wang, Y.; Shu, S.

    2018-04-01

    In this paper, a modified level set method is proposed for the simulation of multiphase flows with large density ratios and high Reynolds numbers. The present method simply introduces a source or sink term into the level set equation to compensate for mass loss or offset mass gain. The source or sink term is derived analytically by applying the mass conservation principle to the level set equation together with the continuity equation of the flow field. Since only a source term is introduced, the present method is as simple to apply as the original level set method, but it guarantees overall mass conservation. To validate the present method, the vortex flow problem is first considered. The simulation results are compared with those from the original level set method, which demonstrates that the modified level set method is capable of accurately capturing the interface while maintaining mass conservation. The proposed method is then further validated by simulating the Laplace law, the merging of two bubbles, a bubble rising with a high density ratio, and the Rayleigh-Taylor instability at a high Reynolds number. Numerical results show that mass is well conserved by the present method.
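A crude 1D illustration of the compensation idea, using a global shift found by bisection rather than the paper's analytically derived source term:

```python
import numpy as np

# Mass of the "drop" is measured through a smeared Heaviside of the level set
# function; after an artificial erosion step loses mass, a uniform shift c of
# the level set restores the reference mass.
nx = 400
x = np.linspace(0.0, 1.0, nx)
dx = x[1] - x[0]
eps = 2.0 * dx

def heaviside(phi):
    """Smeared Heaviside, the usual level set volume indicator."""
    return 0.5 * (1.0 + np.tanh(phi / eps))

phi = 0.25 - np.abs(x - 0.5)          # drop of radius 0.25
target = heaviside(phi).sum() * dx    # reference mass

phi_eroded = phi - 0.02               # mimic numerical mass loss

lo, hi = 0.0, 0.1                     # bisect for the shift c restoring mass
for _ in range(60):
    c = 0.5 * (lo + hi)
    if heaviside(phi_eroded + c).sum() * dx < target:
        lo = c
    else:
        hi = c

err = abs(heaviside(phi_eroded + c).sum() * dx - target) / target
print("relative mass error after correction:", err)
```

Bisection works because the smeared-Heaviside mass is strictly increasing in the shift; the paper instead derives the correction term in closed form from the continuity equation, avoiding any iteration.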

  18. Evaluating healthcare priority setting at the meso level: A thematic review of empirical literature

    Science.gov (United States)

    Waithaka, Dennis; Tsofa, Benjamin; Barasa, Edwine

    2018-01-01

    Background: Decentralization of health systems has made sub-national/regional healthcare systems the backbone of healthcare delivery. These regions are tasked with the difficult responsibility of determining healthcare priorities and resource allocation amidst scarce resources. We aimed to review empirical literature that evaluated priority setting practice at the meso (sub-national) level of health systems. Methods: We systematically searched PubMed, ScienceDirect and Google Scholar databases and supplemented these with manual searching for relevant studies based on the reference lists of selected papers. We only included empirical studies that described and evaluated, or that only evaluated, priority setting practice at the meso level. A total of 16 papers were identified from LMICs and HICs. We analyzed data from the selected papers by thematic review. Results: Few studies used systematic priority setting processes, and all but one were from HICs. Both formal and informal criteria are used in priority setting; however, informal criteria appear to be more pervasive in LMICs than in HICs. The priority setting process at the meso level is a top-down approach with minimal involvement of the community. Accountability for reasonableness was the most common evaluative framework, used in 12 of the 16 studies. Efficiency, reallocation of resources and options for service delivery redesign were the most common outcome measures used to evaluate priority setting. Limitations: Our study was limited by the fact that there are very few empirical studies that have evaluated priority setting at the meso level, and it is likely that we did not capture all the studies. Conclusions: Improving priority setting practices at the meso level is crucial to strengthening health systems. This can be achieved through incorporating and adapting systematic priority setting processes and frameworks to the context where used, and making considerations of both process and outcomes.

  19. Translation of a High-Level Temporal Model into Lower Level Models: Impact of Modelling at Different Description Levels

    DEFF Research Database (Denmark)

    Kraft, Peter; Sørensen, Jens Otto

    2001-01-01

    The paper attempts theoretically to clarify the interrelation between various levels of descriptions used in the modelling and the programming of information systems. We suggest an analysis where we characterise the description levels with respect to how precisely they may handle information abou...... and other textual models. We also consider the aptness of models that include procedural mechanisms such as active and object databases...

  20. Automatic Fontanel Extraction from Newborns' CT Images Using Variational Level Set

    Science.gov (United States)

    Kazemi, Kamran; Ghadimi, Sona; Lyaghat, Alireza; Tarighati, Alla; Golshaeyan, Narjes; Abrishami-Moghaddam, Hamid; Grebe, Reinhard; Gondary-Jouet, Catherine; Wallois, Fabrice

    A realistic head model is needed for the source localization methods used in the study of epilepsy in neonates based on electroencephalographic (EEG) measurements from the scalp. The earliest models consider the head as a series of concentric spheres, each layer corresponding to a different tissue whose conductivity is assumed to be homogeneous. The results of the source reconstruction depend highly on the electric conductivities of the tissues forming the head. The most commonly used model consists of three layers (scalp, skull, and intracranial). Most of the major bones of the neonate's skull are ossified at birth but can move slightly relative to each other. This is due to the sutures, fibrous membranes that at this stage of development connect the already ossified flat bones of the neurocranium. These weak parts of the neurocranium are called fontanels. It is therefore important to include the exact geometry of the fontanels and flat bones in a source reconstruction, because they differ markedly in conductivity. Computed tomography (CT) imaging provides an excellent tool for non-invasive investigation of the skull, which appears in high contrast to all other tissues, while the fontanels can only be identified as the absence of bone, i.e. gaps between the ossified flat bones. Therefore, the aim of this paper is to extract the fontanels from CT images by applying a variational level set method. We applied the proposed method to CT images of five different subjects. The automatically extracted fontanels show good agreement with the manually extracted ones.

  1. Dynamic-thresholding level set: a novel computer-aided volumetry method for liver tumors in hepatic CT images

    Science.gov (United States)

    Cai, Wenli; Yoshida, Hiroyuki; Harris, Gordon J.

    2007-03-01

    Measurement of the volume of focal liver tumors, called liver tumor volumetry, is indispensable for assessing the growth of tumors and for monitoring the response of tumors to oncology treatments. Traditional edge models, such as the maximum gradient and zero-crossing methods, often fail to detect the accurate boundary of a fuzzy object such as a liver tumor. As a result, computerized volumetry based on these edge models tends to differ from the manual segmentation results produced by physicians. In this study, we developed a novel computerized volumetry method for fuzzy objects, called dynamic-thresholding level set (DT level set). An optimal threshold value computed from a histogram tends to shift, relative to the theoretical threshold value obtained from a normal distribution model, toward the smaller region in the histogram. We thus designed a mobile shell structure, called a propagating shell, which is a thick region encompassing the level set front. The optimal threshold calculated from the histogram of the shell drives the level set front toward the boundary of a liver tumor. When the volume ratio between the object and the background in the shell approaches one, the optimal threshold value best fits the theoretical threshold value and the shell stops propagating. Application of the DT level set to 26 hepatic CT cases with 63 biopsy-confirmed hepatocellular carcinomas (HCCs) and metastases showed that the computer-measured volumes were highly correlated with the tumor volumes measured manually by physicians. Our preliminary results showed that the DT level set was effective and accurate in estimating the volumes of liver tumors detected in hepatic CT images.

  2. A two-level strategy to realize life-cycle production optimization in an operational setting

    NARCIS (Netherlands)

    Essen, van G.M.; Hof, Van den P.M.J.; Jansen, J.D.

    2012-01-01

    We present a two-level strategy to improve robustness against uncertainty and model errors in life-cycle flooding optimization. At the upper level, a physics-based large-scale reservoir model is used to determine optimal life-cycle injection and production profiles. At the lower level these profiles

  3. A two-level strategy to realize life-cycle production optimization in an operational setting

    NARCIS (Netherlands)

    Essen, van G.M.; Hof, Van den P.M.J.; Jansen, J.D.

    2013-01-01

    We present a two-level strategy to improve robustness against uncertainty and model errors in life-cycle flooding optimization. At the upper level, a physics-based large-scale reservoir model is used to determine optimal life-cycle injection and production profiles. At the lower level these profiles

  4. Setting Healthcare Priorities at the Macro and Meso Levels: A Framework for Evaluation.

    Science.gov (United States)

    Barasa, Edwine W; Molyneux, Sassy; English, Mike; Cleary, Susan

    2015-09-16

    Priority setting in healthcare is a key determinant of health system performance. However, there is no widely accepted priority setting evaluation framework. We reviewed literature with the aim of developing and proposing a framework for the evaluation of macro and meso level healthcare priority setting practices. We systematically searched Econlit, PubMed, CINAHL, and EBSCOhost databases and supplemented this with searches in Google Scholar, relevant websites and reference lists of relevant papers. A total of 31 papers on evaluation of priority setting were identified. These were supplemented by broader theoretical literature related to evaluation of priority setting. A conceptual review of selected papers was undertaken. Based on a synthesis of the selected literature, we propose an evaluative framework that requires that priority setting practices at the macro and meso levels of the health system meet the following conditions: (1) Priority setting decisions should incorporate both efficiency and equity considerations as well as the following outcomes; (a) Stakeholder satisfaction, (b) Stakeholder understanding, (c) Shifted priorities (reallocation of resources), and (d) Implementation of decisions. (2) Priority setting processes should also meet the procedural conditions of (a) Stakeholder engagement, (b) Stakeholder empowerment, (c) Transparency, (d) Use of evidence, (e) Revisions, (f) Enforcement, and (g) Being grounded on community values. Available frameworks for the evaluation of priority setting are mostly grounded on procedural requirements, while few have included outcome requirements. There is, however, increasing recognition of the need to incorporate both consequential and procedural considerations in priority setting practices. In this review, we adapt an integrative approach to develop and propose a framework for the evaluation of priority setting practices at the macro and meso levels that draws from these complementary schools of thought. 

  5. Setting Healthcare Priorities at the Macro and Meso Levels: A Framework for Evaluation

    Science.gov (United States)

    Barasa, Edwine W.; Molyneux, Sassy; English, Mike; Cleary, Susan

    2015-01-01

    Background: Priority setting in healthcare is a key determinant of health system performance. However, there is no widely accepted priority setting evaluation framework. We reviewed literature with the aim of developing and proposing a framework for the evaluation of macro and meso level healthcare priority setting practices. Methods: We systematically searched Econlit, PubMed, CINAHL, and EBSCOhost databases and supplemented this with searches in Google Scholar, relevant websites and reference lists of relevant papers. A total of 31 papers on evaluation of priority setting were identified. These were supplemented by broader theoretical literature related to evaluation of priority setting. A conceptual review of selected papers was undertaken. Results: Based on a synthesis of the selected literature, we propose an evaluative framework that requires that priority setting practices at the macro and meso levels of the health system meet the following conditions: (1) Priority setting decisions should incorporate both efficiency and equity considerations as well as the following outcomes; (a) Stakeholder satisfaction, (b) Stakeholder understanding, (c) Shifted priorities (reallocation of resources), and (d) Implementation of decisions. (2) Priority setting processes should also meet the procedural conditions of (a) Stakeholder engagement, (b) Stakeholder empowerment, (c) Transparency, (d) Use of evidence, (e) Revisions, (f) Enforcement, and (g) Being grounded on community values. Conclusion: Available frameworks for the evaluation of priority setting are mostly grounded on procedural requirements, while few have included outcome requirements. There is, however, increasing recognition of the need to incorporate both consequential and procedural considerations in priority setting practices. 
In this review, we adapt an integrative approach to develop and propose a framework for the evaluation of priority setting practices at the macro and meso levels that draws from these complementary schools of thought.

  6. Setting Healthcare Priorities at the Macro and Meso Levels: A Framework for Evaluation

    Directory of Open Access Journals (Sweden)

    Edwine W. Barasa

    2015-11-01

    Background: Priority setting in healthcare is a key determinant of health system performance. However, there is no widely accepted priority setting evaluation framework. We reviewed literature with the aim of developing and proposing a framework for the evaluation of macro and meso level healthcare priority setting practices. Methods: We systematically searched Econlit, PubMed, CINAHL, and EBSCOhost databases and supplemented this with searches in Google Scholar, relevant websites and reference lists of relevant papers. A total of 31 papers on evaluation of priority setting were identified. These were supplemented by broader theoretical literature related to evaluation of priority setting. A conceptual review of selected papers was undertaken. Results: Based on a synthesis of the selected literature, we propose an evaluative framework that requires that priority setting practices at the macro and meso levels of the health system meet the following conditions: (1) Priority setting decisions should incorporate both efficiency and equity considerations as well as the following outcomes; (a) Stakeholder satisfaction, (b) Stakeholder understanding, (c) Shifted priorities (reallocation of resources), and (d) Implementation of decisions. (2) Priority setting processes should also meet the procedural conditions of (a) Stakeholder engagement, (b) Stakeholder empowerment, (c) Transparency, (d) Use of evidence, (e) Revisions, (f) Enforcement, and (g) Being grounded on community values. Conclusion: Available frameworks for the evaluation of priority setting are mostly grounded on procedural requirements, while few have included outcome requirements. There is, however, increasing recognition of the need to incorporate both consequential and procedural considerations in priority setting practices. In this review, we adapt an integrative approach to develop and propose a framework for the evaluation of priority setting practices at the macro and meso levels that draws from these complementary schools of thought.

  7. Data Set for Empirical Validation of Double Skin Facade Model

    DEFF Research Database (Denmark)

    Kalyanova, Olena; Jensen, Rasmus Lund; Heiselberg, Per

    2008-01-01

    During recent years, attention to the double skin facade (DSF) concept has greatly increased. Nevertheless, the application of the concept depends on whether a reliable model for simulation of DSF performance can be developed or pointed out. This is, however, not possible to do, until...... the International Energy Agency (IEA) Task 34 Annex 43. This paper describes the full-scale outdoor experimental test facility ‘the Cube', where the experiments were conducted, the experimental set-up, and the measurement procedure for the data sets. The empirical data are composed for the key-functioning modes...

  8. 76 FR 9004 - Public Comment on Setting Achievement Levels in Writing

    Science.gov (United States)

    2011-02-16

    ... DEPARTMENT OF EDUCATION Public Comment on Setting Achievement Levels in Writing AGENCY: U.S... Achievement Levels in Writing. SUMMARY: The National Assessment Governing Board (Governing Board) is... for NAEP in writing. This notice provides opportunity for public comment and submitting...

  9. Tokunaga and Horton self-similarity for level set trees of Markov chains

    International Nuclear Information System (INIS)

    Zaliapin, Ilia; Kovchegov, Yevgeniy

    2012-01-01

    Highlights: ► Self-similar properties of the level set trees for Markov chains are studied. ► Tokunaga and Horton self-similarity are established for symmetric Markov chains and regular Brownian motion. ► Strong, distributional self-similarity is established for symmetric Markov chains with exponential jumps. ► It is conjectured that fractional Brownian motions are Tokunaga self-similar. - Abstract: The Horton and Tokunaga branching laws provide a convenient framework for studying self-similarity in random trees. The Horton self-similarity is a weaker property that addresses the principal branching in a tree; it is a counterpart of the power-law size distribution for elements of a branching system. The stronger Tokunaga self-similarity addresses so-called side branching. The Horton and Tokunaga self-similarity have been empirically established in numerous observed and modeled systems, and proven for two paradigmatic models: the critical Galton–Watson branching process with finite progeny and the finite-tree representation of a regular Brownian excursion. This study establishes the Tokunaga and Horton self-similarity for a tree representation of a finite symmetric homogeneous Markov chain. We also extend the concept of Horton and Tokunaga self-similarity to infinite trees and establish self-similarity for an infinite-tree representation of a regular Brownian motion. We conjecture that fractional Brownian motions are also Tokunaga and Horton self-similar, with self-similarity parameters depending on the Hurst exponent.

  10. Appropriate criteria set for personnel promotion across organizational levels using analytic hierarchy process (AHP)

    Directory of Open Access Journals (Sweden)

    Charles Noven Castillo

    2017-01-01

    Currently, there is no established specific set of criteria for personnel promotion to each level of the organization. This study develops a personnel promotion strategy by identifying specific sets of criteria for each level of the organization. The complexity of identifying the criteria set, along with the subjectivity of these criteria, requires the use of a multi-criteria decision-making approach, particularly the analytic hierarchy process (AHP). Results show different sets of criteria for each management level, which are consistent with several frameworks in the literature. These criteria sets would help avoid mismatches between employees' skills and competencies and their jobs, and at the same time mitigate issues in personnel promotion such as favouritism, the glass ceiling, and preferences based on gender and physical attractiveness. This work also shows that personality and traits, job satisfaction, and experience and skills are more critical than social capital across the different organizational levels. The contribution of this work is in identifying relevant criteria for developing a personnel promotion strategy across organizational levels.
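The AHP machinery referred to above can be sketched as follows (the pairwise judgments are hypothetical, not the study's data):

```python
import numpy as np

# Derive criterion weights from a reciprocal pairwise-comparison matrix and
# check consistency via the consistency ratio (CR < 0.1 is acceptable).
A = np.array([
    [1.0, 3.0, 5.0],   # e.g. personality/traits vs job satisfaction vs skills
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
])

w = np.prod(A, axis=1) ** (1.0 / len(A))  # geometric-mean approximation
w /= w.sum()

lam = ((A @ w) / w).mean()                # principal-eigenvalue estimate
ci = (lam - len(A)) / (len(A) - 1)        # consistency index
cr = ci / 0.58                            # random index RI = 0.58 for n = 3
print("weights:", w.round(3), "CR:", round(cr, 3))
```

The geometric-mean (row product) method is a standard closed-form approximation to the principal eigenvector; for larger matrices the eigenvector itself is usually computed.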

  11. A highly efficient 3D level-set grain growth algorithm tailored for ccNUMA architecture

    Science.gov (United States)

    Mießen, C.; Velinov, N.; Gottstein, G.; Barrales-Mora, L. A.

    2017-12-01

    A highly efficient simulation model for 2D and 3D grain growth was developed based on the level-set method. The model introduces modern computational concepts to achieve excellent performance on parallel computer architectures. Strong scalability was measured on cache-coherent non-uniform memory access (ccNUMA) architectures. To achieve this, the proposed approach considers the application of local level-set functions at the grain level. Ideal and non-ideal grain growth was simulated in 3D with the objective to study the evolution of statistical representative volume elements in polycrystals. In addition, microstructure evolution in an anisotropic magnetic material affected by an external magnetic field was simulated.

  12. Modelling fatigue and the use of fatigue models in work settings.

    Science.gov (United States)

    Dawson, Drew; Ian Noy, Y; Härmä, Mikko; Akerstedt, Torbjorn; Belenky, Gregory

    2011-03-01

    In recent years, theoretical models of the sleep and circadian system developed in laboratory settings have been adapted to predict fatigue and, by inference, performance. This is typically done using the timing of prior sleep and waking or working hours as the primary input and the time course of the predicted variables as the primary output. The aim of these models is to provide employers, unions and regulators with quantitative information on the likely average level of fatigue, or risk, associated with a given pattern of work and sleep, with the goal of better managing the risk of fatigue-related errors and accidents/incidents. The first part of this review summarises the variables known to influence workplace fatigue and draws attention to the considerable variability attributable to individual and task variables not included in current models. The second part reviews the current fatigue models described in the scientific and technical literature and classifies them according to whether they predict fatigue directly by using the timing of prior sleep and wake (one-step models) or indirectly by using work schedules to infer an average sleep-wake pattern that is then used to predict fatigue (two-step models). The third part of the review looks at the current use of fatigue models in field settings by organizations and regulators. Given their limitations, it is suggested that the current generation of models may be appropriate for use as one element in a fatigue risk management system. The final section of the review looks at the future of these models and recommends a standardised approach for their use as an element of the 'defenses-in-depth' approach to fatigue risk management.
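The two-step structure, first inferring a sleep-wake pattern from the work schedule and then integrating a fatigue variable, can be sketched with toy dynamics (the time constants are borrowed from two-process-model literature; this is illustrative, not any validated tool such as SAFTE or FAST):

```python
import numpy as np

# Step 1: infer a sleep window from the work schedule.
# Step 2: integrate a homeostatic pressure S that rises exponentially toward
# 1 during wake and decays toward 0 during sleep.
hours = np.arange(0, 48)                             # two days, hourly steps
work = ((hours % 24) >= 9) & ((hours % 24) < 17)     # 09:00-17:00 shifts
sleep = ((hours % 24) >= 23) | ((hours % 24) < 7)    # inferred 23:00-07:00

S = np.empty(len(hours))
s = 0.3
for i, asleep in enumerate(sleep):
    if asleep:
        s *= np.exp(-1.0 / 4.2)                      # recovery, tau ~ 4.2 h
    else:
        s = 1.0 - (1.0 - s) * np.exp(-1.0 / 18.2)    # build-up, tau ~ 18.2 h
    S[i] = s

print("peak pressure during work hours:", round(S[work].max(), 2))
```

The one-step models in the review skip the inference stage and take recorded sleep and wake times directly as input to the same kind of dynamics.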

  13. The null hypothesis of GSEA, and a novel statistical model for competitive gene set analysis

    DEFF Research Database (Denmark)

    Debrabant, Birgit

    2017-01-01

    MOTIVATION: Competitive gene set analysis intends to assess whether a specific set of genes is more associated with a trait than the remaining genes. However, the statistical models assumed to date to underlie these methods do not enable a clear-cut formulation of the competitive null hypothesis....... This is a major handicap to the interpretation of results obtained from a gene set analysis. RESULTS: This work presents a hierarchical statistical model based on the notion of dependence measures, which overcomes this problem. The two levels of the model naturally reflect the modular structure of many gene set...... analysis methods. We apply the model to show that the popular GSEA method, which recently has been claimed to test the self-contained null hypothesis, actually tests the competitive null if the weight parameter is zero. However, for this result to hold strictly, the choice of the dependence measures...

  14. Priority setting at the micro-, meso- and macro-levels in Canada, Norway and Uganda.

    Science.gov (United States)

    Kapiriri, Lydia; Norheim, Ole Frithjof; Martin, Douglas K

    2007-06-01

    The objectives of this study were (1) to describe the process of healthcare priority setting in Ontario-Canada, Norway and Uganda at the three levels of decision-making; and (2) to evaluate the description using the framework for fair priority setting, accountability for reasonableness, so as to identify lessons of good practice. We carried out case studies involving key informant interviews with 184 health practitioners and health planners from the macro-level, meso-level and micro-level in Canada-Ontario, Norway and Uganda (selected by virtue of their varying experiences in priority setting). Interviews were audio-recorded, transcribed and analyzed using a modified thematic approach. The descriptions were evaluated against the four conditions of "accountability for reasonableness": relevance, publicity, revisions and enforcement. Areas of adherence to these conditions were identified as lessons of good practice; areas of non-adherence were identified as opportunities for improvement. (i) At the macro-level, in all three countries, cabinet makes most of the macro-level resource allocation decisions, and these are influenced by politics, public pressure, and advocacy. Decisions within the ministries of health are based on objective formulae and evidence. International priorities influenced decisions in Uganda. Some priority-setting reasons are publicized through circulars, printed documents and the Internet in Canada and Norway. At the meso-level, hospital priority-setting decisions were made by the hospital managers and were based on national priorities, guidelines, and evidence. Hospital departments that handle emergencies, such as surgery, were prioritized. Some of the reasons are available on the hospital intranet or presented at meetings. Micro-level practitioners considered medical and social worth criteria. These reasons are not publicized. Many practitioners lacked knowledge of the macro- and meso-level priority-setting processes. (ii) Evaluation

  15. Methods of mathematical modeling using polynomials of algebra of sets

    Science.gov (United States)

    Kazanskiy, Alexandr; Kochetkov, Ivan

    2018-03-01

The article deals with the construction of discrete mathematical models for solving applied problems arising from the operation of building structures. Security issues in modern high-rise buildings are extremely serious and relevant, and there is no doubt that interest in them will only increase. The territory of the building is divided into zones that must be kept under observation. Zones can overlap and can have different priorities. Such situations can be described using formulas of the algebra of sets. The formulas can be programmed, which makes it possible to work with them in computer models.
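
    The zone-overlap situation described in the abstract can be sketched directly with set operations; the grid, zone extents and priority labels below are hypothetical illustrations, not taken from the article:

```python
# Hypothetical surveillance zones modeled as sets of (row, col) grid cells,
# so overlap and coverage questions reduce to plain set algebra.
zone_a = {(r, c) for r in range(0, 4) for c in range(0, 4)}  # e.g. priority 1
zone_b = {(r, c) for r in range(2, 6) for c in range(2, 6)}  # e.g. priority 2

overlap = zone_a & zone_b  # cells observed by both zones
covered = zone_a | zone_b  # union: every cell under observation
only_a = zone_a - zone_b   # cells exclusive to zone A

print(len(overlap), len(covered), len(only_a))  # -> 4 28 12
```

    Because the zone formulas are ordinary set expressions, questions such as "which cells have conflicting priorities" become one-line queries over these sets.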

  16. IMPORTANCE OF PROBLEM SETTING BEFORE DEVELOPING A BUSINESS MODEL CANVAS

    OpenAIRE

    Bekhradi , Alborz; Yannou , Bernard; Cluzel , François

    2016-01-01

In this paper, the importance of problem setting in the front end of innovation, prior to use of the BMC, is emphasized as a prerequisite for radical innovation. After discussing the context of Business Model Canvas usage, the reasons for failure of premature use (in early design stages) of the BMC tool are discussed through real examples of innovative startups in the Paris area. This paper ends with the proposition of three main rules to follow when one wants to use the Business Model C...

  17. Multi person detection and tracking based on hierarchical level-set method

    Science.gov (United States)

    Khraief, Chadia; Benzarti, Faouzi; Amiri, Hamid

    2018-04-01

In this paper, we propose an efficient unsupervised method for multi-person tracking based on a hierarchical level-set approach. The proposed method uses both edge and region information in order to effectively detect objects. The persons are tracked in each frame of the sequence by minimizing an energy functional that combines color, texture and shape information. These features are encoded in a covariance matrix as a region descriptor. The present method is fully automated, without the need to manually specify the initial contour of the level set. It is based on combined person detection and background subtraction methods. The edge-based information is employed to maintain a stable evolution, guide the segmentation towards apparent boundaries and inhibit region fusion. The computational cost of the level set is reduced by using the narrow band technique. Experiments on many challenging video sequences demonstrate the effectiveness of the proposed method.

  18. Data sets for modeling: A retrospective collection of Bidirectional Reflectance and Forest Ecosystems Dynamics Multisensor Aircraft Campaign data sets

    Energy Technology Data Exchange (ETDEWEB)

    Walthall, C.L.; Kim, M. (Univ. of Maryland, College Park, MD (United States). Dept. of Geography); Williams, D.L.; Meeson, B.W.; Agbu, P.A.; Newcomer, J.A.; Levine, E.R.

    1993-12-01

The Biospheric Sciences Branch, within the Laboratory for Terrestrial Physics at NASA's Goddard Space Flight Center, has assembled two data sets for free dissemination to the remote sensing research community. One data set, referred to as the Retrospective Bidirectional Reflectance Distribution Function (BRDF) Data Collection, is a collection of bidirectional reflectance and supporting biophysical measurements of surfaces ranging in diversity from bare soil to heavily forested canopies. The other data collection, resulting from measurements made in association with the Forest Ecosystems Dynamics Multisensor Aircraft Campaign (FED MAC), contains data that are relevant to ecosystem process models, particularly those which have been modified to incorporate remotely sensed data. Both of these collections are being made available to the science community at large in order to facilitate model development, validation, and usage. These data collections are subsets which have been compiled and consolidated from individual researchers or from several large data set collections including: the First International Satellite Land Surface Climatology Project (ISLSCP) Field Experiment (FIFE); FED MAC; the Superior National Forest Project (SNF); the Geologic Remote Sensing Field Experiment (GRSFE); and Agricultural Inventories through Space Applications of Remote Sensing (AgriStars). The complete, stand-alone FED MAC Data Collection contains atmospheric, vegetation, and soils data acquired during field measurement campaigns conducted at International Paper's Northern Experimental Forest located approximately 40 km north of Bangor, Maine. Reflectance measurements at the canopy, branch, and needle level are available, along with the detailed canopy architectural measurements.

  19. Setting-level influences on implementation of the responsive classroom approach.

    Science.gov (United States)

    Wanless, Shannon B; Patton, Christine L; Rimm-Kaufman, Sara E; Deutsch, Nancy L

    2013-02-01

We used mixed methods to examine the association between setting-level factors and observed implementation of a social and emotional learning intervention (Responsive Classroom® approach; RC). In Study 1 (N = 33 3rd grade teachers after the first year of RC implementation), we identified relevant setting-level factors and uncovered the mechanisms through which they related to implementation. In Study 2 (N = 50 4th grade teachers after the second year of RC implementation), we validated our most salient Study 1 finding across multiple informants. Findings suggested that teachers perceived setting-level factors, particularly principal buy-in to the intervention and individualized coaching, as influential to their degree of implementation. Further, we found that intervention coaches' perspectives of principal buy-in were more related to implementation than principals' or teachers' perspectives. Findings extend the application of setting theory to the field of implementation science and suggest that interventionists may want to consider particular accounts of school setting factors before determining the likelihood of schools achieving high levels of implementation.

  20. On piecewise constant level-set (PCLS) methods for the identification of discontinuous parameters in ill-posed problems

    International Nuclear Information System (INIS)

    De Cezaro, A; Leitão, A; Tai, X-C

    2013-01-01

We investigate level-set-type methods for solving ill-posed problems with discontinuous (piecewise constant) coefficients. The goal is to identify the level sets as well as the level values of an unknown parameter function on a model described by a nonlinear ill-posed operator equation. The PCLS approach is used here to parametrize the solution of a given operator equation in terms of an L² level-set function, i.e. the level-set function itself is assumed to be a piecewise constant function. Two distinct methods are proposed for computing stable solutions of the resulting ill-posed problem: the first is based on Tikhonov regularization, while the second is based on the augmented Lagrangian approach with total variation penalization. Classical regularization results (Engl H W et al 1996 Mathematics and its Applications (Dordrecht: Kluwer)) are derived for the Tikhonov method. On the other hand, for the augmented Lagrangian method, we succeed in proving the existence of (generalized) Lagrangian multipliers in the sense of (Rockafellar R T and Wets R J-B 1998 Grundlehren der Mathematischen Wissenschaften (Berlin: Springer)). Numerical experiments are performed for a 2D inverse potential problem (Hettlich F and Rundell W 1996 Inverse Problems 12 251–66), demonstrating the capabilities of both methods for solving this ill-posed problem in a stable way (complicated inclusions are recovered without any a priori geometrical information on the unknown parameter). (paper)
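
    The core piecewise-constant parametrization idea can be sketched in a few lines; the sample values, the constants c1 and c2, and the sign convention below are illustrative assumptions, not the paper's exact formulation:

```python
def pcls_parametrize(phi, c1, c2):
    """Map level-set samples to a piecewise constant parameter:
    u = c1 where phi >= 0 and u = c2 where phi < 0."""
    return [c1 if p >= 0.0 else c2 for p in phi]

# Hypothetical 1D samples of a piecewise constant level-set function.
phi = [-1.0, -0.2, 0.0, 0.5, 1.0, -0.3]
u = pcls_parametrize(phi, c1=10.0, c2=1.0)
print(u)  # -> [1.0, 1.0, 10.0, 10.0, 10.0, 1.0]
```

    The inverse problem then searches over phi (and possibly c1, c2) so that the operator applied to u matches the data, with Tikhonov or augmented Lagrangian terms stabilizing that search.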

  1. Translation of a High-Level Temporal Model into Lower Level Models: Impact of Modelling at Different Description Levels

    DEFF Research Database (Denmark)

    Kraft, Peter; Sørensen, Jens Otto

    2001-01-01

    given types of properties, and examine how descriptions on higher levels translate into descriptions on lower levels. Our example looks at temporal properties where the information is concerned with the existence in time. In a high level temporal model with information kept in a three-dimensional space...... the existences in time can be mapped precisely and consistently securing a consistent handling of the temporal properties. We translate the high level temporal model into an entity-relationship model, with the information in a two-dimensional graph, and finally we look at the translations into relational...... and other textual models. We also consider the aptness of models that include procedural mechanisms such as active and object databases...

  2. Demons versus level-set motion registration for coronary 18F-sodium fluoride PET

    Science.gov (United States)

    Rubeaux, Mathieu; Joshi, Nikhil; Dweck, Marc R.; Fletcher, Alison; Motwani, Manish; Thomson, Louise E.; Germano, Guido; Dey, Damini; Berman, Daniel S.; Newby, David E.; Slomka, Piotr J.

    2016-03-01

    Ruptured coronary atherosclerotic plaques commonly cause acute myocardial infarction. It has been recently shown that active microcalcification in the coronary arteries, one of the features that characterizes vulnerable plaques at risk of rupture, can be imaged using cardiac gated 18F-sodium fluoride (18F-NaF) PET. We have shown in previous work that a motion correction technique applied to cardiac-gated 18F-NaF PET images can enhance image quality and improve uptake estimates. In this study, we further investigated the applicability of different algorithms for registration of the coronary artery PET images. In particular, we aimed to compare demons vs. level-set nonlinear registration techniques applied for the correction of cardiac motion in coronary 18F-NaF PET. To this end, fifteen patients underwent 18F-NaF PET and prospective coronary CT angiography (CCTA). PET data were reconstructed in 10 ECG gated bins; subsequently these gated bins were registered using demons and level-set methods guided by the extracted coronary arteries from CCTA, to eliminate the effect of cardiac motion on PET images. Noise levels, target-to-background ratios (TBR) and global motion were compared to assess image quality. Compared to the reference standard of using only diastolic PET image (25% of the counts from PET acquisition), cardiac motion registration using either level-set or demons techniques almost halved image noise due to the use of counts from the full PET acquisition and increased TBR difference between 18F-NaF positive and negative lesions. The demons method produces smoother deformation fields, exhibiting no singularities (which reflects how physically plausible the registration deformation is), as compared to the level-set method, which presents between 4 and 8% of singularities, depending on the coronary artery considered. In conclusion, the demons method produces smoother motion fields as compared to the level-set method, with a motion that is physiologically
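
    For intuition about the demons side of the comparison, Thirion's classical demons force can be written down in one dimension; this is a generic textbook sketch, not the registration pipeline used in the study, and the intensity profiles are hypothetical:

```python
def demons_step(fixed, moving):
    """One Thirion-style demons force evaluation on 1D intensity profiles:
    displacement ~ (F - M) * grad(M) / (|grad(M)|**2 + (F - M)**2)."""
    n = len(fixed)
    # Central-difference gradient of the moving profile (clamped at the ends).
    grad = [(moving[min(i + 1, n - 1)] - moving[max(i - 1, 0)]) / 2.0
            for i in range(n)]
    disp = []
    for f, m, g in zip(fixed, moving, grad):
        diff = f - m
        denom = g * g + diff * diff
        disp.append(diff * g / denom if denom > 1e-12 else 0.0)
    return disp

# Identical profiles produce zero force; an offset ramp produces a push.
print(demons_step([0.0, 1.0, 2.0], [0.0, 1.0, 2.0]))  # -> [0.0, 0.0, 0.0]
print(demons_step([1.0, 2.0, 3.0], [0.0, 1.0, 2.0]))
```

    The smoothness the abstract attributes to demons comes from regularizing (e.g. Gaussian-smoothing) this force field between iterations, which is omitted here.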

  3. [Cardiac Synchronization Function Estimation Based on ASM Level Set Segmentation Method].

    Science.gov (United States)

    Zhang, Yaonan; Gao, Yuan; Tang, Liang; He, Ying; Zhang, Huie

At present, there are no accurate, quantitative methods for determining cardiac mechanical synchrony, and quantitative determination of the synchronization function of the four cardiac cavities from medical images has great clinical value. This paper uses a whole-heart ultrasound image sequence and segments the left & right atria and left & right ventricles in each frame. After the segmentation, the number of pixels in each cavity in each frame is recorded, and the areas of the four cavities across the image sequence are therefore obtained. The area change curves of the four cavities are then extracted, and the synchronization information of the four cavities is obtained. Because of the low SNR of ultrasound images, the boundary lines of the cardiac cavities are vague, so the extraction of cardiac contours remains a challenging problem. Therefore, ASM model information is added to the traditional level set method to guide the curve evolution process. According to the experimental results, the improved method improves the accuracy of the segmentation. Furthermore, based on the ventricular segmentation, the right and left ventricular systolic functions are evaluated, mainly according to the area changes. The synchronization of the four cavities of the heart is estimated based on the area changes and the volume changes.
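
    Once the cavities are segmented, the area-curve step reduces to counting labeled pixels per frame; the label coding (1..4 for the four cavities, 0 for background) and the toy frames below are assumptions for illustration, not the paper's data:

```python
def cavity_areas(label_frames, n_cavities=4):
    """Return an area-vs-frame curve (pixel counts) per cavity label 1..n."""
    curves = {k: [] for k in range(1, n_cavities + 1)}
    for frame in label_frames:
        flat = [px for row in frame for px in row]  # flatten the 2D mask
        for k in curves:
            curves[k].append(flat.count(k))
    return curves

# Two tiny 2x3 "frames"; 0 is background, 1..4 are the four cavities.
frames = [
    [[1, 1, 2], [3, 4, 0]],
    [[1, 2, 2], [3, 3, 4]],
]
curves = cavity_areas(frames)
print(curves)  # -> {1: [2, 1], 2: [1, 2], 3: [1, 2], 4: [1, 1]}
```

    Comparing the phase of these per-cavity curves (e.g. the frame index of each minimum) is one simple way to quantify the synchrony the abstract describes.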

  4. An integrated extended Kalman filter–implicit level set algorithm for monitoring planar hydraulic fractures

    International Nuclear Information System (INIS)

    Peirce, A; Rochinha, F

    2012-01-01

    We describe a novel approach to the inversion of elasto-static tiltmeter measurements to monitor planar hydraulic fractures propagating within three-dimensional elastic media. The technique combines the extended Kalman filter (EKF), which predicts and updates state estimates using tiltmeter measurement time-series, with a novel implicit level set algorithm (ILSA), which solves the coupled elasto-hydrodynamic equations. The EKF and ILSA are integrated to produce an algorithm to locate the unknown fracture-free boundary. A scaling argument is used to derive a strategy to tune the algorithm parameters to enable measurement information to compensate for unmodeled dynamics. Synthetic tiltmeter data for three numerical experiments are generated by introducing significant changes to the fracture geometry by altering the confining geological stress field. Even though there is no confining stress field in the dynamic model used by the new EKF-ILSA scheme, it is able to use synthetic data to arrive at remarkably accurate predictions of the fracture widths and footprints. These experiments also explore the robustness of the algorithm to noise and to placement of tiltmeter arrays operating in the near-field and far-field regimes. In these experiments, the appropriate parameter choices and strategies to improve the robustness of the algorithm to significant measurement noise are explored. (paper)
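
    As background, a single extended Kalman filter cycle has the predict-update shape below; this is a generic scalar textbook sketch, not the paper's EKF-ILSA coupling, and the random-walk toy model and noise values are hypothetical:

```python
def ekf_step(x, P, z, f, F, h, H, Q, R):
    """One scalar EKF cycle: nonlinear state model f / measurement model h,
    with linearizations F, H evaluated at the current estimate."""
    x_pred = f(x)
    P_pred = F(x) * P * F(x) + Q              # predicted covariance
    Hx = H(x_pred)
    K = P_pred * Hx / (Hx * P_pred * Hx + R)  # Kalman gain
    x_new = x_pred + K * (z - h(x_pred))      # innovation update
    P_new = (1.0 - K * Hx) * P_pred
    return x_new, P_new

# Linear toy case (a random walk observed directly) to show the mechanics.
x, P = 0.0, 1.0
for z in [1.0, 1.2, 0.9]:
    x, P = ekf_step(x, P, z,
                    f=lambda s: s, F=lambda s: 1.0,
                    h=lambda s: s, H=lambda s: 1.0,
                    Q=0.01, R=0.1)
print(round(x, 3))
```

    In the paper's setting the "measurement" z is the tiltmeter time-series and the predict step is supplied by the implicit level set algorithm, so both f and h are far richer than this scalar sketch.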

  5. Delineating Facies Spatial Distribution by Integrating Ensemble Data Assimilation and Indicator Geostatistics with Level Set Transformation.

    Energy Technology Data Exchange (ETDEWEB)

    Hammond, Glenn Edward; Song, Xuehang; Ye, Ming; Dai, Zhenxue; Zachara, John; Chen, Xingyuan

    2017-03-01

A new approach is developed to delineate the spatial distribution of discrete facies (geological units that have unique distributions of hydraulic, physical, and/or chemical properties) conditioned not only on direct data (measurements directly related to facies properties, e.g., grain size distribution obtained from borehole samples) but also on indirect data (observations indirectly related to facies distribution, e.g., hydraulic head and tracer concentration). Our method integrates, for the first time, ensemble data assimilation with traditional transition probability-based geostatistics. The concept of a level set is introduced to build a shape parameterization that allows transformation between discrete facies indicators and continuous random variables. The spatial structure of different facies is simulated by indicator models using conditioning points selected adaptively during the iterative process of data assimilation. To evaluate the new method, a two-dimensional semi-synthetic example is designed to estimate the spatial distribution and permeability of two distinct facies from transient head data induced by pumping tests. The example demonstrates that our new method adequately captures the spatial pattern of facies distribution by imposing spatial continuity through conditioning points. The new method also reproduces the overall response of the hydraulic head field more accurately than data assimilation with no constraints on the spatial continuity of facies.
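
    The level-set transformation between continuous variables and discrete facies indicators can be sketched by thresholding at the zero level set; the field values and facies codes below are illustrative assumptions, not the paper's parameterization:

```python
def facies_indicator(phi_field, facies_pos=1, facies_neg=2):
    """Threshold a continuous field phi into two discrete facies codes:
    facies_pos where phi >= 0 (inside the zero level set), else facies_neg."""
    return [[facies_pos if v >= 0.0 else facies_neg for v in row]
            for row in phi_field]

# Hypothetical 2x2 continuous field, e.g. an ensemble member after updating.
phi = [[0.8, -0.1], [-0.4, 0.2]]
print(facies_indicator(phi))  # -> [[1, 2], [2, 1]]
```

    The point of the transformation is that ensemble data assimilation can update the continuous field phi smoothly, while the geology recovered by thresholding stays strictly categorical.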

  6. Regional Dimensions of the Triple Helix Model: Setting the Context

    Science.gov (United States)

    Todeva, Emanuela; Danson, Mike

    2016-01-01

    This paper introduces the rationale for the special issue and its contributions, which bridge the literature on regional development and the Triple Helix model. The concept of the Triple Helix at the sub-national, and specifically regional, level is established and examined, with special regard to regional economic development founded on…

  7. System level modeling and component level control of fuel cells

    Science.gov (United States)

    Xue, Xingjian

    This dissertation investigates the fuel cell systems and the related technologies in three aspects: (1) system-level dynamic modeling of both PEM fuel cell (PEMFC) and solid oxide fuel cell (SOFC); (2) condition monitoring scheme development of PEM fuel cell system using model-based statistical method; and (3) strategy and algorithm development of precision control with potential application in energy systems. The dissertation first presents a system level dynamic modeling strategy for PEM fuel cells. It is well known that water plays a critical role in PEM fuel cell operations. It makes the membrane function appropriately and improves the durability. The low temperature operating conditions, however, impose modeling difficulties in characterizing the liquid-vapor two phase change phenomenon, which becomes even more complex under dynamic operating conditions. This dissertation proposes an innovative method to characterize this phenomenon, and builds a comprehensive model for PEM fuel cell at the system level. The model features the complete characterization of multi-physics dynamic coupling effects with the inclusion of dynamic phase change. The model is validated using Ballard stack experimental result from open literature. The system behavior and the internal coupling effects are also investigated using this model under various operating conditions. Anode-supported tubular SOFC is also investigated in the dissertation. While the Nernst potential plays a central role in characterizing the electrochemical performance, the traditional Nernst equation may lead to incorrect analysis results under dynamic operating conditions due to the current reverse flow phenomenon. This dissertation presents a systematic study in this regard to incorporate a modified Nernst potential expression and the heat/mass transfer into the analysis. The model is used to investigate the limitations and optimal results of various operating conditions; it can also be utilized to perform the

  8. Flipping for success: evaluating the effectiveness of a novel teaching approach in a graduate level setting.

    Science.gov (United States)

    Moraros, John; Islam, Adiba; Yu, Stan; Banow, Ryan; Schindelka, Barbara

    2015-02-28

Flipped Classroom is a model that is quickly gaining recognition as a novel teaching approach among health science curricula. The purpose of this study was four-fold and aimed to compare Flipped Classroom effectiveness ratings with: 1) student socio-demographic characteristics, 2) student final grades, 3) student overall course satisfaction, and 4) course pre-Flipped Classroom effectiveness ratings. The participants in the study consisted of 67 Masters-level graduate students in an introductory epidemiology class. Data were collected from students who completed surveys during three time points (beginning, middle and end) in each term. The Flipped Classroom was employed for the academic year 2012-2013 (two terms) using both pre-class activities and in-class activities. Among the 67 Masters-level graduate students, 80% found the Flipped Classroom model to be either somewhat effective or very effective (M = 4.1/5.0). International students rated the Flipped Classroom to be significantly more effective when compared to North American students (X(2) = 11.35, p ...). Students' perceived effectiveness of the Flipped Classroom had no significant association with their academic performance in the course as measured by their final grades (rs = 0.70). However, students who found the Flipped Classroom to be effective were also more likely to be satisfied with their course experience. Additionally, it was found that the SEEQ variable scores for students enrolled in the Flipped Classroom were significantly higher than the ones for students enrolled prior to the implementation of the Flipped Classroom (p = 0.003). Overall, the format of the Flipped Classroom provided more opportunities for students to engage in critical thinking, independently facilitate their own learning, and more effectively interact with and learn from their peers. Additionally, the instructor was given more flexibility to cover a wider range and depth of material, provide in-class applied learning

  9. Analysis of Forensic Autopsy in 120 Cases of Medical Disputes Among Different Levels of Institutional Settings.

    Science.gov (United States)

    Yu, Lin-Sheng; Ye, Guang-Hua; Fan, Yan-Yan; Li, Xing-Biao; Feng, Xiang-Ping; Han, Jun-Ge; Lin, Ke-Zhi; Deng, Miao-Wu; Li, Feng

    2015-09-01

Despite advances in medical science, the causes of death can sometimes only be determined by pathologists after a complete autopsy. Few studies have investigated the importance of forensic autopsy in medically disputed cases among different levels of institutional settings. Our study aimed to analyze forensic autopsy in 120 cases of medical disputes among five levels of institutional settings between 2001 and 2012 in Wenzhou, China. The results showed an overall concordance rate of 55%. Of the 39% of clinically missed diagnoses, cardiovascular pathology comprises 55.32%, while respiratory pathology accounts for the remaining 44.68%. Factors that increase the likelihood of missed diagnoses were private clinics, community settings, and county hospitals. These results support that autopsy remains an important tool in establishing causes of death in medically disputed cases, which may directly determine or exclude the fault of medical care and therefore help in resolving these cases. © 2015 American Academy of Forensic Sciences.

  10. An investigation of children's levels of inquiry in an informal science setting

    Science.gov (United States)

    Clark-Thomas, Beth Anne

Elementary school students' understanding of both science content and processes is enhanced by the higher level thinking associated with inquiry-based science investigations. Informal science setting personnel, elementary school teachers, and curriculum specialists charged with designing inquiry-based investigations would be well served by an understanding of the varying influence of certain present factors upon the students' willingness and ability to delve into such higher level inquiries. This study examined young children's use of inquiry-based materials and factors which may influence the level of inquiry they engaged in during informal science activities. An informal science setting was selected as the context for the examination of student inquiry behaviors because of the rich inquiry-based environment present at the site and the benefits previously noted in the research regarding the impact of informal science settings upon the construction of knowledge in science. The study revealed several patterns of behavior among children when they are engaged in inquiry-based activities at informal science exhibits. These repeated behaviors varied in the children's apparent purposeful use of the materials at the exhibits. These levels of inquiry behavior were taxonomically defined as high/medium/low within this study utilizing a researcher-developed tool. Furthermore, in this study adult interventions, questions, or prompting were found to impact the level of inquiry engaged in by the children. This study revealed that higher levels of inquiry were preceded by task-directed and physical feature prompts. Moreover, the levels of inquiry behaviors were halted, even lowered, when preceded by a prompt that focused on a science content or concept question. Results of this study have implications for the enhancement of inquiry-based science activities in elementary schools as well as in informal science settings.
These findings have significance for all science educators

  11. Education level inequalities and transportation injury mortality in the middle aged and elderly in European settings

    NARCIS (Netherlands)

    Borrell, C.; Plasència, A.; Huisman, M.; Costa, G.; Kunst, A.; Andersen, O.; Bopp, M.; Borgan, J.-K.; Deboosere, P.; Glickman, M.; Gadeyne, S.; Minder, C.; Regidor, E.; Spadea, T.; Valkonen, T.; Mackenbach, J. P.

    2005-01-01

    OBJECTIVE: To study the differential distribution of transportation injury mortality by educational level in nine European settings, among people older than 30 years, during the 1990s. METHODS: Deaths of men and women older than 30 years from transportation injuries were studied. Rate differences

  12. Level of health care and services in a tertiary health setting in Nigeria

    African Journals Online (AJOL)

    Level of health care and services in a tertiary health setting in Nigeria. ... Background: There is a growing awareness and demand for quality health care across the world; hence the ... Doctors and nurses formed 64.3% of the study population.

  13. Multi-domain, higher order level set scheme for 3D image segmentation on the GPU

    DEFF Research Database (Denmark)

    Sharma, Ojaswa; Zhang, Qin; Anton, François

    2010-01-01

    to evaluate level set surfaces that are $C^2$ continuous, but are slow due to high computational burden. In this paper, we provide a higher order GPU based solver for fast and efficient segmentation of large volumetric images. We also extend the higher order method to multi-domain segmentation. Our streaming...

  14. An Optimized, Grid Independent, Narrow Band Data Structure for High Resolution Level Sets

    DEFF Research Database (Denmark)

    Nielsen, Michael Bang; Museth, Ken

    2004-01-01

enforced by the convex boundaries of an underlying Cartesian computational grid. Here we present a novel, very memory-efficient narrow band data structure, dubbed the Sparse Grid, that enables the representation of grid-independent high resolution level sets. The key features of our new data structure are

  15. Quasi-min-max Fuzzy MPC of UTSG Water Level Based on Off-Line Invariant Set

    Science.gov (United States)

    Liu, Xiangjie; Jiang, Di; Lee, Kwang Y.

    2015-10-01

    In a nuclear power plant, the water level of the U-tube steam generator (UTSG) must be maintained within a safe range. Traditional control methods encounter difficulties due to the complexity, strong nonlinearity and “swell and shrink” effects, especially at low power levels. A properly designed robust model predictive control can well solve this problem. In this paper, a quasi-min-max fuzzy model predictive controller is developed for controlling the constrained UTSG system. While the online computational burden could be quite large for the real-time control, a bank of ellipsoid invariant sets together with the corresponding feedback control laws are obtained by off-line solving linear matrix inequalities (LMIs). Based on the UTSG states, the online optimization is simplified as a constrained optimization problem with a bisection search for the corresponding ellipsoid invariant set. Simulation results are given to show the effectiveness of the proposed controller.
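
    The online step described above, picking the smallest stored invariant set that still contains the current state, can be sketched as a bisection search; reducing each ellipsoid to a scalar "size" threshold is a simplifying assumption here, not the paper's LMI machinery:

```python
def smallest_containing(levels, v):
    """levels: increasing 'sizes' t_i of nested invariant sets; the state
    scalar v lies in set i iff v <= t_i. Binary-search the smallest one."""
    lo, hi = 0, len(levels) - 1
    if v > levels[hi]:
        return None  # state outside every stored invariant set
    while lo < hi:
        mid = (lo + hi) // 2
        if v <= levels[mid]:
            hi = mid       # set mid still contains the state; try smaller
        else:
            lo = mid + 1   # too small; move toward larger sets
    return lo

levels = [0.2, 0.5, 1.0, 2.0, 4.0]  # hypothetical off-line bank, small to large
print(smallest_containing(levels, 0.7))  # -> 2
```

    The expensive LMI optimization happens once, off-line, to build the bank; online work is only this O(log n) lookup plus applying the feedback law stored with the selected set.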

  16. Scope of physician procedures independently billed by mid-level providers in the office setting.

    Science.gov (United States)

    Coldiron, Brett; Ratnarathorn, Mondhipa

    2014-11-01

    Mid-level providers (nurse practitioners and physician assistants) were originally envisioned to provide primary care services in underserved areas. This study details the current scope of independent procedural billing to Medicare of difficult, invasive, and surgical procedures by medical mid-level providers. To understand the scope of independent billing to Medicare for procedures performed by mid-level providers in an outpatient office setting for a calendar year. Analyses of the 2012 Medicare Physician/Supplier Procedure Summary Master File, which reflects fee-for-service claims that were paid by Medicare, for Current Procedural Terminology procedures independently billed by mid-level providers. Outpatient office setting among health care providers. The scope of independent billing to Medicare for procedures performed by mid-level providers. In 2012, nurse practitioners and physician assistants billed independently for more than 4 million procedures at our cutoff of 5000 paid claims per procedure. Most (54.8%) of these procedures were performed in the specialty area of dermatology. The findings of this study are relevant to safety and quality of care. Recently, the shortage of primary care clinicians has prompted discussion of widening the scope of practice for mid-level providers. It would be prudent to temper widening the scope of practice of mid-level providers by recognizing that mid-level providers are not solely limited to primary care, and may involve procedures for which they may not have formal training.

  17. Online monitoring of oil film using electrical capacitance tomography and level set method

    International Nuclear Information System (INIS)

    Xue, Q.; Ma, M.; Sun, B. Y.; Cui, Z. Q.; Wang, H. X.

    2015-01-01

In the application of oil-air lubrication systems, electrical capacitance tomography (ECT) provides a promising way of monitoring the oil film in pipelines by reconstructing cross-sectional oil distributions in real time. In the case of a small-diameter pipe and a thin oil film, however, the thickness of the oil film is hard to observe visually, since the oil-air interface is not obvious in the reconstructed images. The presence of artifacts in the reconstructions also seriously degrades the effectiveness of image segmentation techniques such as the level set method, and the level set method is in any case ill-suited to online monitoring because of its low computation speed. To address these problems, a modified level set method is developed: a distance regularized level set evolution formulation is extended to image two-phase flow online using an ECT system; a narrowband image filter is defined to eliminate the influence of artifacts; and, exploiting the continuity of the oil distribution over time, the detected oil-air interface of a former image is used as the initial contour for the detection of the subsequent frame. The propagation from the initial contour to the boundary is thus greatly accelerated, making real-time tracking possible. To test the feasibility of the proposed method, an oil-air lubrication facility with a 4 mm inner-diameter pipe is measured in normal operation using an 8-electrode ECT system. Both simulation and experiment results indicate that the modified level set method is capable of visualizing the oil-air interface accurately online

  18. Combinatorial nuclear level-density model

    International Nuclear Information System (INIS)

    Uhrenholt, H.; Åberg, S.; Dobrowolski, A.; Døssing, Th.; Ichikawa, T.; Möller, P.

    2013-01-01

    A microscopic nuclear level-density model is presented. The model is a completely combinatorial (micro-canonical) model based on the folded-Yukawa single-particle potential and includes explicit treatment of pairing, rotational and vibrational states. The microscopic character of all states enables extraction of level-distribution functions with respect to pairing gaps, parity and angular momentum. The results of the model are compared to available experimental data: level spacings at neutron separation energy, data on total level-density functions from the Oslo method, cumulative level densities from low-lying discrete states, and data on parity ratios. Spherical and deformed nuclei follow basically different coupling schemes, and we focus on deformed nuclei

  19. Diverse Data Sets Can Yield Reliable Information through Mechanistic Modeling: Salicylic Acid Clearance.

    Science.gov (United States)

    Raymond, G M; Bassingthwaighte, J B

    This is a practical example of a powerful research strategy: putting together data from studies covering a diversity of conditions can yield a scientifically sound grasp of a phenomenon when the individual observations fail to provide definitive understanding. The rationale is that defining a realistic, quantitative, explanatory hypothesis for the whole set of studies brings about a "consilience" of the often competing hypotheses considered for individual data sets. An internally consistent conjecture linking multiple data sets simultaneously provides stronger evidence on the characteristics of a system than does analysis of individual data sets limited to narrow ranges of conditions. Our example examines three very different data sets on the clearance of salicylic acid from humans: a high-concentration set from aspirin overdoses; a set with medium concentrations from a research study on the influences of the route of administration and of sex on the clearance kinetics; and a set on low-dose aspirin for cardiovascular health. Three models were tested: (1) a first-order reaction, (2) a Michaelis-Menten (M-M) approach, and (3) an enzyme kinetic model with forward and backward reactions. The reaction rates found from model 1 were distinctly different for the three data sets, having no commonality. The M-M model 2 fitted each of the three data sets but gave a reliable estimate of the Michaelis constant only for the medium-level data (Km = 24 ± 5.4 mg/L); analyzing the three data sets together with model 2 gave Km = 18 ± 2.6 mg/L. (Estimating parameters using larger numbers of data points in an optimization increases the degrees of freedom, constraining the range of the estimates.) Using the enzyme kinetic model (3) increased the number of free parameters but nevertheless improved the goodness of fit to the combined data sets, giving tighter constraints and a lower estimated Km = 14.6 ± 2.9 mg/L, demonstrating that fitting diverse data sets with a single model
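The pooling argument can be illustrated with a small Michaelis-Menten fit on synthetic data: each concentration range alone constrains Km poorly, while the combined fit pins it down. The function, ranges, and "true" parameters (Vmax = 30, Km = 18 mg/L, chosen to echo the pooled estimate above) are all made up for the sketch.

```python
import numpy as np
from scipy.optimize import curve_fit

def mm_rate(c, vmax, km):
    """Michaelis-Menten elimination rate as a function of concentration."""
    return vmax * c / (km + c)

rng = np.random.default_rng(0)
# Three synthetic "studies" at low, medium, and high concentrations (mg/L),
# mimicking the low-dose, research-study, and overdose data sets.
conc = np.concatenate([rng.uniform(0.5, 5, 20),
                       rng.uniform(10, 50, 20),
                       rng.uniform(100, 500, 20)])
rate = mm_rate(conc, 30.0, 18.0) * (1 + 0.05 * rng.standard_normal(conc.size))

# Pooled fit over all three ranges: low concentrations fix Vmax/Km, high
# concentrations fix Vmax, so Km is tightly constrained only jointly.
(vmax_fit, km_fit), cov = curve_fit(mm_rate, conc, rate, p0=(10.0, 10.0))
```

Refitting on each 20-point subset separately (same call, sliced arrays) reproduces the abstract's point: the single-range estimates of Km scatter widely while the pooled one is tight.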

  20. Analysis and classification of data sets for calibration and validation of agro-ecosystem models

    DEFF Research Database (Denmark)

    Kersebaum, K C; Boote, K J; Jorgenson, J S

    2015-01-01

    Experimental field data are used at different levels of complexity to calibrate, validate and improve agro-ecosystem models to enhance their reliability for regional impact assessment. A methodological framework and software are presented to evaluate and classify data sets into four classes regar...

  1. Preference Mining Using Neighborhood Rough Set Model on Two Universes.

    Science.gov (United States)

    Zeng, Kai

    2016-01-01

    Preference mining plays an important role in e-commerce and video websites for enhancing user satisfaction and loyalty. Some classical methods are not available for the cold-start problem when the user or the item is new. In this paper, we propose a new model, called parametric neighborhood rough set on two universes (NRSTU), to describe the user and item data structures. Furthermore, the neighborhood lower approximation operator is used for defining the preference rules. Then, we provide the means for recommending items to users by using these rules. Finally, we give an experimental example to show the details of NRSTU-based preference mining for cold-start problem. The parameters of the model are also discussed. The experimental results show that the proposed method presents an effective solution for preference mining. In particular, NRSTU improves the recommendation accuracy by about 19% compared to the traditional method.

  2. Information behavior versus communication: application models in multidisciplinary settings

    Directory of Open Access Journals (Sweden)

    Cecília Morena Maria da Silva

    2015-05-01

    Full Text Available This paper deals with information behavior as support for models of communication design in the areas of Information Science, Librarianship and Music. The proposed communication models are based on the models of Tubbs and Moss (2003), Garvey and Griffith (1972), adapted by Hurd (1996), and Wilson (1999). The following questions arose: (i) what informational skills are required of librarians who act as mediators in the scholarly communication process, and what is the informational behavior of users in the educational environment? (ii) what are the needs of music-related researchers, and how do they produce, seek, use and access the scientific knowledge of their area? and (iii) how do the contexts involved in scientific collaboration processes influence the scientific production of the information science field in Brazil? The article includes a literature review on information behavior and its insertion in scientific communication, considering the influence of the context and/or situation of those involved in the motivating questions. The hypothesis is that user information behavior in different contexts and situations influences the definition of a scientific communication model. Finally, it is concluded that the same concept, or a set of concepts, can be used from different perspectives, thus reaching different results.

  3. KFUPM-KAUST Red Sea model: Digital viscoelastic depth model and synthetic seismic data set

    KAUST Repository

    Al-Shuhail, Abdullatif A.; Mousa, Wail A.; Alkhalifah, Tariq Ali

    2017-01-01

    The Red Sea is geologically interesting due to its unique structures and abundant mineral and petroleum resources, yet no digital geologic models or synthetic seismic data of the Red Sea are publicly available for testing algorithms to image and analyze the area's interesting features. This study compiles a 2D viscoelastic model of the Red Sea and calculates a corresponding multicomponent synthetic seismic data set. The models and data sets are made publicly available for download. We hope this effort will encourage interested researchers to test their processing algorithms on this data set and model and share their results publicly as well.

  4. KFUPM-KAUST Red Sea model: Digital viscoelastic depth model and synthetic seismic data set

    KAUST Repository

    Al-Shuhail, Abdullatif A.

    2017-06-01

    The Red Sea is geologically interesting due to its unique structures and abundant mineral and petroleum resources, yet no digital geologic models or synthetic seismic data of the Red Sea are publicly available for testing algorithms to image and analyze the area's interesting features. This study compiles a 2D viscoelastic model of the Red Sea and calculates a corresponding multicomponent synthetic seismic data set. The models and data sets are made publicly available for download. We hope this effort will encourage interested researchers to test their processing algorithms on this data set and model and share their results publicly as well.

  5. Sensitivity Analysis of features in tolerancing based on constraint function level sets

    International Nuclear Information System (INIS)

    Ziegler, Philipp; Wartzack, Sandro

    2015-01-01

    Usually, the geometry of the manufactured product inherently varies from the nominal geometry. This may negatively affect the product functions and properties (such as quality and reliability), as well as the assemblability of the single components. In order to avoid this, the geometric variation of these component surfaces and associated geometry elements (like hole axes) is restricted by tolerances. Since tighter tolerances lead to significantly higher manufacturing costs, tolerances should be specified carefully. Therefore, the impact of deviating component surfaces on functions, properties and assemblability of the product has to be analyzed. As physical experiments are expensive, statistical tolerance analysis tools are widely used in engineering design. Current tolerance simulation tools lack an appropriate indicator for the impact of deviating component surfaces. In adopting Sensitivity Analysis methods, there are several challenges arising from the specific framework of tolerancing. This paper presents an approach to apply Sensitivity Analysis methods to current tolerance simulations with an interface module based on level sets of constraint functions for parameters of the simulation model. The paper is an extension and generalization of Ziegler and Wartzack [1]. Mathematical properties of the constraint functions (convexity, homogeneity), which are important for the computational costs of the Sensitivity Analysis, are shown. The practical use of the method is illustrated in a case study of a plain bearing. - Highlights: • Alternative definition of Deviation Domains. • Proof of mathematical properties of the Deviation Domains. • Definition of the interface between Deviation Domains and Sensitivity Analysis. • Sensitivity analysis of a gearbox to show the method's practical use
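The kind of sensitivity indicator the paper asks for can be sketched, in its simplest variance-based form, on a toy linear tolerance stack-up; the clearance function, dimensions, and tolerances below are invented stand-ins, not the paper's level-set interface module or its plain-bearing case study.

```python
import numpy as np

# Toy tolerance analysis: clearance gap g = housing - shaft - washer.
# First-order variance-based sensitivities show which tolerance drives
# the gap variation.
rng = np.random.default_rng(1)
n = 200_000
housing = rng.normal(50.0, 0.05, n)   # mean 50 mm, sigma 0.05 mm
shaft = rng.normal(49.0, 0.02, n)
washer = rng.normal(0.5, 0.01, n)
gap = housing - shaft - washer

# For a linear stack-up of independent inputs, each first-order index is
# simply var_i / var_total (here var_total = 0.05^2 + 0.02^2 + 0.01^2).
total = gap.var()
s_housing = housing.var() / total     # dominates: ~0.83 of the variance
s_shaft = shaft.var() / total
s_washer = washer.var() / total
```

Such an index immediately tells the designer that tightening the shaft or washer tolerance would barely reduce gap variation; nonlinear responses would need Sobol-type estimators instead of this analytic shortcut.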

  6. Level set segmentation of bovine corpora lutea in ex situ ovarian ultrasound images

    Directory of Open Access Journals (Sweden)

    Adams Gregg P

    2008-08-01

    Full Text Available Abstract Background The objective of this study was to investigate the viability of level set image segmentation methods for the detection of corpora lutea (corpus luteum, CL) boundaries in ultrasonographic ovarian images. It was hypothesized that bovine CL boundaries could be located within 1–2 mm by a level set image segmentation methodology. Methods Level set methods embed a 2D contour in a 3D surface and evolve that surface over time according to an image-dependent speed function. A speed function suitable for segmentation of CLs in ovarian ultrasound images was developed. An initial contour was manually placed and contour evolution was allowed to proceed until the rate of change of the area was sufficiently small. The method was tested on ovarian ultrasonographic images (n = 8) obtained ex situ. An expert in ovarian ultrasound interpretation delineated CL boundaries manually to serve as a "ground truth". Accuracy of the level set segmentation algorithm was determined by comparing semi-automatically determined contours with ground truth contours using the mean absolute difference (MAD), root mean squared difference (RMSD), Hausdorff distance (HD), sensitivity, and specificity metrics. Results and discussion The mean MAD was 0.87 mm (sigma = 0.36 mm), RMSD was 1.1 mm (sigma = 0.47 mm), and HD was 3.4 mm (sigma = 2.0 mm), indicating that, on average, boundaries were accurate within 1–2 mm; however, deviations in excess of 3 mm from the ground truth were observed, indicating under- or over-expansion of the contour. Mean sensitivity and specificity were 0.814 (sigma = 0.171) and 0.990 (sigma = 0.00786), respectively, indicating that CLs were consistently undersegmented but rarely did the contour interior include pixels that were judged by the human expert not to be part of the CL. It was observed that in localities where gradient magnitudes within the CL were strong due to high-contrast speckle, contour expansion stopped too early. Conclusion The
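The boundary-accuracy metrics quoted above (MAD and HD) can be computed from two sampled contours with a few lines of NumPy; this is a generic sketch of the metric definitions, not the paper's implementation, and the circular test contours are made up.

```python
import numpy as np

def boundary_metrics(a, b):
    """Mean absolute difference (MAD) and Hausdorff distance (HD) between
    two boundaries given as (N, 2) arrays of sampled points.
    MAD averages nearest-neighbor distances symmetrically; HD takes the
    worst-case nearest-neighbor distance."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    d_ab, d_ba = d.min(axis=1), d.min(axis=0)
    mad = 0.5 * (d_ab.mean() + d_ba.mean())
    hd = max(d_ab.max(), d_ba.max())
    return mad, hd

# Two concentric circles of radius 10 and 11 (same angular sampling):
# every point of one boundary is exactly 1 unit from the other.
t = np.linspace(0, 2 * np.pi, 200, endpoint=False)
c1 = np.stack([10 * np.cos(t), 10 * np.sin(t)], axis=1)
c2 = np.stack([11 * np.cos(t), 11 * np.sin(t)], axis=1)
mad, hd = boundary_metrics(c1, c2)   # both equal 1.0 for this pair
```

On real contours with different sampling densities, the symmetric averaging matters: a one-sided nearest-neighbor mean is biased when one contour is sampled much more finely than the other.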

  7. Image-guided regularization level set evolution for MR image segmentation and bias field correction.

    Science.gov (United States)

    Wang, Lingfeng; Pan, Chunhong

    2014-01-01

    Magnetic resonance (MR) image segmentation is a crucial step in surgical and treatment planning. In this paper, we propose a level-set-based segmentation method for MR images with the intensity inhomogeneity problem. To tackle the initialization sensitivity problem, we propose a new image-guided regularization to restrict the level set function. Maximum a posteriori inference is adopted to unify segmentation and bias field correction within a single framework. Under this framework, both the contour prior and the bias field prior are fully used. As a result, image intensity inhomogeneity can be well handled. Extensive experiments are provided to evaluate the proposed method, showing significant improvements in both segmentation and bias field correction accuracies as compared with other state-of-the-art approaches. Copyright © 2014 Elsevier Inc. All rights reserved.

  8. A Cartesian Adaptive Level Set Method for Two-Phase Flows

    Science.gov (United States)

    Ham, F.; Young, Y.-N.

    2003-01-01

    In the present contribution we develop a level set method based on local anisotropic Cartesian adaptation as described in Ham et al. (2002). Such an approach should allow for the smallest possible Cartesian grid capable of resolving a given flow. The remainder of the paper is organized as follows. In section 2 the level set formulation for free surface calculations is presented and its strengths and weaknesses relative to other free surface methods are reviewed. In section 3 the collocated numerical method is described. In section 4 the method is validated by solving the 2D and 3D drop oscillation problem. In section 5 we present some results from more complex cases, including 3D drop breakup in an impulsively accelerated free stream and the 3D immiscible Rayleigh-Taylor instability. Conclusions are given in section 6.

  9. A level-set method for two-phase flows with soluble surfactant

    Science.gov (United States)

    Xu, Jian-Jun; Shi, Weidong; Lai, Ming-Chih

    2018-01-01

    A level-set method is presented for solving two-phase flows with soluble surfactant. The Navier-Stokes equations are solved along with the bulk surfactant and the interfacial surfactant equations. In particular, the convection-diffusion equation for the bulk surfactant on the irregular moving domain is solved by using a level-set based diffusive-domain method. A conservation law for the total surfactant mass is derived, and a re-scaling procedure for the surfactant concentrations is proposed to compensate for the surfactant mass loss due to numerical diffusion. The whole numerical algorithm is easy to implement. Several numerical simulations in 2D and 3D show the effects of surfactant solubility on drop dynamics under shear flow.
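The mass re-scaling idea is simple enough to sketch: after a transport step that has numerically leaked mass, multiply the concentration field by a uniform factor so its integral matches the conserved total. This is a generic sketch of the idea on a single bulk field; the paper applies it to the coupled bulk and interfacial surfactant concentrations.

```python
import numpy as np

def rescale_mass(c, target_mass, dx):
    """Uniformly rescale a 2D concentration field so that its integral
    (sum * cell area) equals the conserved total mass."""
    current = c.sum() * dx**2
    return c * (target_mass / current)

dx = 0.1
g = np.linspace(-3, 3, 61)
c = np.exp(-g[:, None]**2 - g[None, :]**2)   # a Gaussian surfactant blob
m0 = c.sum() * dx**2          # total mass before a (mock) transport step
c_diffused = 0.97 * c         # pretend the scheme lost 3% of the mass
c_fixed = rescale_mass(c_diffused, m0, dx)   # integral restored to m0
```

A uniform factor preserves the shape of the field; it only corrects the global budget, which is exactly what is needed when the loss mechanism is diffuse numerical smearing rather than a localized error.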

  10. Application of the level set method for multi-phase flow computation in fusion engineering

    International Nuclear Information System (INIS)

    Luo, X-Y.; Ni, M-J.; Ying, A.; Abdou, M.

    2006-01-01

    Numerical simulation of multi-phase flow is essential to evaluate the feasibility of a liquid protection scheme for the power plant chamber. The level set method is one of the best methods for computing and analyzing the motion of interfaces in multi-phase flow. This paper presents a general formula for the second-order projection method combined with the level set method to simulate unsteady incompressible multi-phase flow with/without phase change, as encountered in fusion science and engineering. A third-order ENO scheme and a second-order semi-implicit Crank-Nicolson scheme are used to update the convective and diffusion terms. The numerical results show this method can handle complex deformation of the interface; the effect of liquid-vapor phase change will be included in future work

  11. Embedded Real-Time Architecture for Level-Set-Based Active Contours

    Directory of Open Access Journals (Sweden)

    Dejnožková Eva

    2005-01-01

    Full Text Available Methods described by partial differential equations have gained considerable interest because of undeniable advantages such as an easy mathematical description of the underlying physical phenomena, subpixel precision, isotropy, and direct extension to higher dimensions. Though their implementation within the level set framework offers other interesting advantages, their wide industrial deployment on embedded systems is slowed down by their considerable computational effort. This paper exploits the high parallelization potential of the operators from the level set framework and proposes a scalable, asynchronous, multiprocessor platform suitable for system-on-chip solutions. We concentrate on obtaining real-time execution capabilities. The performance is evaluated on a continuous watershed and an object-tracking application based on a simple gradient-based attraction force driving the active contour. The proposed architecture can be realized on commercially available FPGAs. It is built around general-purpose processor cores, and can run code developed with usual tools.

  12. Level-set-based reconstruction algorithm for EIT lung images: first clinical results.

    Science.gov (United States)

    Rahmati, Peyman; Soleimani, Manuchehr; Pulletz, Sven; Frerichs, Inéz; Adler, Andy

    2012-05-01

    We show the first clinical results using the level-set-based reconstruction algorithm for electrical impedance tomography (EIT) data. The level-set-based reconstruction method (LSRM) allows the reconstruction of non-smooth interfaces between image regions, which are typically smoothed by traditional voxel-based reconstruction methods (VBRMs). We develop a time difference formulation of the LSRM for 2D images. The proposed reconstruction method is applied to reconstruct clinical EIT data of a slow flow inflation pressure-volume manoeuvre in lung-healthy and adult lung-injury patients. Images from the LSRM and the VBRM are compared. The results show comparable reconstructed images, but with an improved ability to reconstruct sharp conductivity changes in the distribution of lung ventilation using the LSRM.

  13. Level-set-based reconstruction algorithm for EIT lung images: first clinical results

    International Nuclear Information System (INIS)

    Rahmati, Peyman; Adler, Andy; Soleimani, Manuchehr; Pulletz, Sven; Frerichs, Inéz

    2012-01-01

    We show the first clinical results using the level-set-based reconstruction algorithm for electrical impedance tomography (EIT) data. The level-set-based reconstruction method (LSRM) allows the reconstruction of non-smooth interfaces between image regions, which are typically smoothed by traditional voxel-based reconstruction methods (VBRMs). We develop a time difference formulation of the LSRM for 2D images. The proposed reconstruction method is applied to reconstruct clinical EIT data of a slow flow inflation pressure–volume manoeuvre in lung-healthy and adult lung-injury patients. Images from the LSRM and the VBRM are compared. The results show comparable reconstructed images, but with an improved ability to reconstruct sharp conductivity changes in the distribution of lung ventilation using the LSRM. (paper)

  14. Kir2.1 channels set two levels of resting membrane potential with inward rectification.

    Science.gov (United States)

    Chen, Kuihao; Zuo, Dongchuan; Liu, Zheng; Chen, Haijun

    2018-04-01

    Strong inward rectifier K+ channels (Kir2.1) mediate background K+ currents primarily responsible for maintenance of the resting membrane potential. Multiple types of cells exhibit two levels of resting membrane potential. Counterbalancing Kir2.1 and K2P1 currents partially account for this phenomenon in human cardiomyocytes under subphysiological extracellular K+ concentrations or pathological hypokalemic conditions. The mechanism by which Kir2.1 channels contribute to the two levels of resting membrane potential in different types of cells is not well understood. Here we test the hypothesis that Kir2.1 channels set two levels of resting membrane potential through their inward rectification. Under hypokalemic conditions, Kir2.1 currents counterbalance HCN2 or HCN4 cation currents in CHO cells that heterologously express both channels, generating N-shaped current-voltage relationships that cross the voltage axis three times and reconstituting two levels of resting membrane potential. Blockade of HCN channels eliminated the phenomenon in K2P1-deficient Kir2.1-expressing human cardiomyocytes derived from induced pluripotent stem cells and in CHO cells expressing both Kir2.1 and HCN2 channels. Weakly inward rectifying Kir4.1 or inward-rectification-deficient Kir2.1•E224G mutant channels do not set such two levels of resting membrane potential when co-expressed with HCN2 channels in CHO cells or when overexpressed in human cardiomyocytes derived from induced pluripotent stem cells. These findings demonstrate a common mechanism whereby Kir2.1 channels set two levels of resting membrane potential with inward rectification by balancing inward currents through different cation channels, such as hyperpolarization-activated HCN channels or hypokalemia-induced K2P1 leak channels.
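The "N-shaped current-voltage relationship that crosses the voltage axis three times" can be reproduced with a two-current toy model: a strongly rectifying Kir2.1-like current whose conductance collapses above E_K, plus a small HCN-like leak. All parameters (conductances, Boltzmann slope, reversal potentials) are illustrative assumptions, not fitted values from the paper.

```python
import numpy as np

E_K, E_HCN = -90.0, -30.0   # reversal potentials, mV

def i_kir(v, g=1.0):
    """Inward rectifier: driving force times a Boltzmann factor that
    shuts the conductance off as v rises above E_K."""
    return g * (v - E_K) / (1.0 + np.exp((v - E_K - 10.0) / 8.0))

def i_hcn(v, g=0.05):
    """HCN-like cation current, modeled here as a small linear leak."""
    return g * (v - E_HCN)

v = np.linspace(-95.0, -20.0, 2000)
i_total = i_kir(v) + i_hcn(v)

# Zero crossings of the total current are candidate resting potentials;
# the N-shape yields three, the outer two of which are typically stable.
crossings = v[np.where(np.diff(np.sign(i_total)) != 0)[0]]
```

Increasing the leak conductance or flattening the rectification (the Kir4.1/E224G analogues in the abstract) removes the outward hump and collapses the three crossings to one, which is exactly the reported loss of bistability.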

  15. Computing the dynamics of biomembranes by combining conservative level set and adaptive finite element methods

    OpenAIRE

    Laadhari , Aymen; Saramito , Pierre; Misbah , Chaouqi

    2014-01-01

    The numerical simulation of the deformation of vesicle membranes under simple shear external fluid flow is considered in this paper. A new saddle-point approach is proposed for the imposition of the fluid incompressibility and the membrane inextensibility constraints, through Lagrange multipliers defined in the fluid and on the membrane respectively. Using a level set formulation, the problem is approximated by mixed finite elements combined with an automatic adaptive ...

  16. System level modelling with open source tools

    DEFF Research Database (Denmark)

    Jakobsen, Mikkel Koefoed; Madsen, Jan; Niaki, Seyed Hosein Attarzadeh

    , called ForSyDe. ForSyDe is available under an open source approach, which allows small and medium enterprises (SMEs) to get easy access to advanced modeling capabilities and tools. We give an introduction to the design methodology through the system-level modeling of a simple industrial use case, and we

  17. A level set method for cupping artifact correction in cone-beam CT

    International Nuclear Information System (INIS)

    Xie, Shipeng; Li, Haibo; Ge, Qi; Li, Chunming

    2015-01-01

    Purpose: To reduce cupping artifacts and improve the contrast-to-noise ratio in cone-beam computed tomography (CBCT). Methods: A level set method is proposed to reduce cupping artifacts in the reconstructed image of CBCT. The authors derive a local intensity clustering property of the CBCT image and define a local clustering criterion function of the image intensities in a neighborhood of each point. This criterion function defines an energy in terms of the level set functions, which represent a segmentation result and the cupping artifacts. The cupping artifacts are estimated as a result of minimizing this energy. Results: The cupping artifacts in CBCT are reduced by an average of 90%. The results indicate that the level set-based algorithm is practical and effective for reducing the cupping artifacts and preserving the quality of the reconstructed image. Conclusions: The proposed method focuses on the reconstructed image without requiring any additional physical equipment, is easily implemented, and provides cupping correction through a single-scan acquisition. The experimental results demonstrate that the proposed method successfully reduces the cupping artifacts

  18. Stabilized Conservative Level Set Method with Adaptive Wavelet-based Mesh Refinement

    Science.gov (United States)

    Shervani-Tabar, Navid; Vasilyev, Oleg V.

    2016-11-01

    This paper addresses one of the main challenges of the conservative level set method, namely the ill-conditioned behavior of the normal vector away from the interface. An alternative formulation for reconstruction of the interface is proposed. Unlike the commonly used methods, which rely on the unit normal vector, the Stabilized Conservative Level Set (SCLS) uses a modified renormalization vector with diminishing magnitude away from the interface. With the new formulation, in the vicinity of the interface the reinitialization procedure utilizes compressive flux and diffusive terms only in the direction normal to the interface, thus preserving the conservative level set properties, while away from the interface the directional diffusion mechanism automatically switches to homogeneous diffusion. The proposed formulation is robust and general. It is especially well suited for use with adaptive mesh refinement (AMR) approaches due to the need for finer resolution in the vicinity of the interface than in the rest of the domain. All of the results were obtained using the Adaptive Wavelet Collocation Method, a general AMR-type method, which utilizes wavelet decomposition to adapt on steep gradients in the solution while retaining a predetermined order of accuracy.
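The ill-conditioning the abstract refers to is easy to see in 1D: for a conservative (tanh-profile) level set, the gradient vanishes away from the interface, so dividing by its magnitude to get a unit normal amplifies noise there. A renormalization vector that decays with the gradient avoids this; the simple regularized form below is an assumed illustration, not the paper's exact construction.

```python
import numpy as np

# Conservative level set: a tanh layer of thickness ~2*eps around x = 0.
eps = 0.05
x = np.linspace(-1.0, 1.0, 401)
psi = 0.5 * (np.tanh(x / (2.0 * eps)) + 1.0)
grad = np.gradient(psi, x)

# The standard unit normal grad/|grad| is ill-conditioned where |grad| ~ 0
# (far from the interface). A regularized vector stays ~unit-size at the
# interface and decays smoothly to zero away from it, as SCLS requires.
delta = 1e-3
n_mod = grad / (np.abs(grad) + delta)
```

With this vector, the compressive/diffusive reinitialization terms automatically fade to plain diffusion in the far field, since they are scaled by a quantity that vanishes there.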

  19. Considering Actionability at the Participant's Research Setting Level for Anticipatable Incidental Findings from Clinical Research.

    Science.gov (United States)

    Ortiz-Osorno, Alberto Betto; Ehler, Linda A; Brooks, Judith

    2015-01-01

    Determining what constitutes an anticipatable incidental finding (IF) from clinical research and defining whether, and when, this IF should be returned to the participant have been topics of discussion in the field of human subject protections for the last 10 years. It has been debated that implementing a comprehensive IF-approach that addresses both the responsibility of researchers to return IFs and the expectation of participants to receive them can be logistically challenging. IFs have been debated at different levels, such as the ethical reasoning for considering their disclosure or the need for planning for them during the development of the research study. Some authors have discussed the methods for re-contacting participants for disclosing IFs, as well as the relevance of considering the clinical importance of the IFs. Similarly, other authors have debated about when IFs should be disclosed to participants. However, no author has addressed how the "actionability" of the IFs should be considered, evaluated, or characterized at the participant's research setting level. This paper defines the concept of "Actionability at the Participant's Research Setting Level" (APRSL) for anticipatable IFs from clinical research, discusses some related ethical concepts to justify the APRSL concept, proposes a strategy to incorporate APRSL into the planning and management of IFs, and suggests a strategy for integrating APRSL at each local research setting. © 2015 American Society of Law, Medicine & Ethics, Inc.

  20. Mathematical Modelling with Fuzzy Sets of Sustainable Tourism Development

    Directory of Open Access Journals (Sweden)

    Nenad Stojanović

    2011-10-01

    Full Text Available In the first part of the study we introduce fuzzy sets that correspond to comparative indicators for measuring the sustainable development of tourism. In the second part of the study it is shown, on the basis of the model created, how one can determine the value of sustainable tourism development in protected areas based on the following established groups of indicators: to assess the economic status, the impact of tourism on the social component, the impact of tourism on cultural identity, the environmental conditions and indicators, and tourist satisfaction, all using fuzzy logic. It is also shown how to test the confidence in the rules by which, according to experts, appropriate decisions can be made in order to protect the biodiversity of protected areas.
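A minimal fuzzy-logic evaluation of such indicator groups can be sketched with triangular membership functions and a conservative (min) aggregation; the indicator names, normalized values, and cut-points below are hypothetical, not the paper's calibrated indicators.

```python
def tri(x, a, b, c):
    """Triangular fuzzy membership with support [a, c] and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Hypothetical normalized indicators (0-1) for one protected area.
indicators = {"economic": 0.62, "social": 0.55, "cultural": 0.70,
              "environmental": 0.48, "satisfaction": 0.80}

# Degree to which each indicator counts as "sustainable" (peak at 0.7).
memberships = {k: tri(v, 0.3, 0.7, 1.0) for k, v in indicators.items()}

# Min aggregation (fuzzy AND): overall sustainability is limited by the
# weakest indicator group, a deliberately conservative choice.
overall = min(memberships.values())
```

Swapping `min` for a weighted mean would give a compensatory aggregation instead, where a strong indicator can offset a weak one; which behavior is appropriate is exactly the kind of expert judgment the rules in the abstract encode.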

  1. Model-based gene set analysis for Bioconductor.

    Science.gov (United States)

    Bauer, Sebastian; Robinson, Peter N; Gagneur, Julien

    2011-07-01

    Gene Ontology and other forms of gene-category analysis play a major role in the evaluation of high-throughput experiments in molecular biology. Single-category enrichment analysis procedures such as Fisher's exact test tend to flag large numbers of redundant categories as significant, which can complicate interpretation. We have recently developed an approach called model-based gene set analysis (MGSA) that substantially reduces the number of redundant categories returned by the gene-category analysis. In this work, we present the Bioconductor package mgsa, which makes the MGSA algorithm available to users of the R language. Our package provides a simple and flexible application programming interface for applying the approach. The mgsa package has been made available as part of Bioconductor 2.8. It is released under the conditions of the Artistic license 2.0. peter.robinson@charite.de; julien.gagneur@embl.de.
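The baseline that MGSA improves on, single-category enrichment by Fisher's exact test, is a one-liner per category; here it is sketched in Python with SciPy (the mgsa package itself is R/Bioconductor, and all counts below are made-up illustrative numbers).

```python
from scipy.stats import fisher_exact

# 2x2 contingency table for one category:
# rows = in study set / not, columns = annotated to the term / not.
study_annotated, study_size = 40, 200     # 20% of the study set annotated
pop_annotated, pop_size = 400, 10000      # 4% of the background population
table = [
    [study_annotated, study_size - study_annotated],
    [pop_annotated - study_annotated,
     (pop_size - study_size) - (pop_annotated - study_annotated)],
]
odds_ratio, p_value = fisher_exact(table, alternative="greater")
# A tiny one-sided p-value flags the term as enriched. Running this test
# independently for thousands of overlapping GO terms is what produces the
# flood of redundant significant categories that MGSA's joint model avoids.
```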

  2. Three-Dimensional Simulation of DRIE Process Based on the Narrow Band Level Set and Monte Carlo Method

    Directory of Open Access Journals (Sweden)

    Jia-Cheng Yu

    2018-02-01

    Full Text Available A three-dimensional topography simulation of deep reactive ion etching (DRIE) is developed based on the narrow band level set method for surface evolution and the Monte Carlo method for flux distribution. The advanced level set method is implemented to simulate the time-dependent movement of the etched surface. Meanwhile, accelerated by a ray tracing algorithm, the Monte Carlo method incorporates all dominant physical and chemical mechanisms such as ion-enhanced etching, ballistic transport, ion scattering, and sidewall passivation. Modified models of charged and neutral particles are used to determine their contributions to the etching rate. Effects such as the scalloping effect and lag effect are investigated in simulations and experiments. In addition, quantitative analyses are conducted to measure the simulation error. Finally, this simulator can serve as an accurate prediction tool for MEMS fabrication.
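The lag effect (deeper features etching slower) falls out of a very small Monte Carlo flux model: launch cosine-distributed neutrals at the trench mouth and count direct arrivals at the bottom. This 2D, no-rescatter sketch is a drastic simplification of the paper's simulator, and the aspect ratios are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

def bottom_flux_fraction(aspect_ratio, n=100_000):
    """Fraction of cosine-distributed particles entering a trench of unit
    width that reach the bottom ballistically (no wall re-emission)."""
    x0 = rng.uniform(0.0, 1.0, n)             # launch position across mouth
    theta = np.arcsin(rng.uniform(-1, 1, n))  # 2D cosine law: sin(theta) uniform
    x_bottom = x0 + aspect_ratio * np.tan(theta)
    return np.mean((x_bottom >= 0.0) & (x_bottom <= 1.0))

# A deep trench (aspect ratio 5) shadows most of the incoming flux that a
# shallow one (aspect ratio 1) still receives: the RIE lag effect.
f_shallow = bottom_flux_fraction(1.0)
f_deep = bottom_flux_fraction(5.0)
```

A full simulator would re-emit particles from the sidewalls and track ions with a narrower angular distribution, which is why ion-enhanced etching suffers far less lag than the purely neutral transport modeled here.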

  3. Joint inversion of geophysical data using petrophysical clustering and facies deformation with the level set technique

    Science.gov (United States)

    Revil, A.

    2015-12-01

    Geological expertise and petrophysical relationships can be brought together to provide prior information while inverting multiple geophysical datasets. The merging of such information can result in more realistic solutions for the distribution of the model parameters, reducing ipso facto the non-uniqueness of the inverse problem. We consider two levels of heterogeneity: facies, described by facies boundaries, and heterogeneities inside each facies, determined by a correlogram. In this presentation, we pose the geophysical inverse problem in terms of Gaussian random fields with mean functions controlled by petrophysical relationships and covariance functions controlled by a prior geological cross-section, including the definition of spatial boundaries for the geological facies. The petrophysical relationship problem is formulated as a regression problem upon each facies. The inversion of the geophysical data is performed in a Bayesian framework. We demonstrate the usefulness of this strategy using a first synthetic case, for which we perform a joint inversion of gravity and galvanometric resistivity data with the stations located at the ground surface. The joint inversion is used to recover the density and resistivity distributions of the subsurface. In a second step, we consider the possibility that the facies boundaries are deformable and their shapes are inverted as well. We use the level set approach to perform such deformation, preserving prior topological properties of the facies throughout the inversion. With the help of prior facies petrophysical relationships and the topological characteristics of each facies, we make posterior inferences about multiple geophysical tomograms based on their corresponding geophysical data misfits. The method is applied to a second synthetic case showing that we can recover the heterogeneities inside the facies, the mean values for the petrophysical properties, and, to some extent, the facies boundaries using the 2D joint inversion of

  4. Records for radioactive waste management up to repository closure: Managing the primary level information (PLI) set

    International Nuclear Information System (INIS)

    2004-07-01

    The objective of this publication is to highlight the importance of the early establishment of a comprehensive records system to manage primary level information (PLI) as an integrated set of information, not merely as a collection of information, throughout all phases of radioactive waste management. In addition to the information described in the waste inventory record keeping system (WIRKS), the PLI of a radioactive waste repository consists of the entire universe of information, data and records related to any aspect of the repository's life cycle. It is essential to establish PLI requirements based on an integrated set of needs from the regulators and waste managers involved in the waste management chain, and to update these requirements as needs change over time. Information flow for radioactive waste management should be back-end driven. Identification of an authority to oversee the management of PLI throughout all phases of the radioactive waste management life cycle would guarantee the information flow to future generations. The long term protection of information essential to future generations can only be assured by the timely establishment of a comprehensive and effective records management system (RMS) capable of capturing, indexing and evaluating all PLI. The loss of intellectual control over the PLI would make it very difficult to subsequently identify the ILI and HLI information sets. At all times prior to the closure of a radioactive waste repository, there should be an identifiable entity with a legally enforceable financial and management responsibility for the continued operation of a PLI records management system. 
The information presented in this publication will assist Member States in ensuring that waste and repository records, relevant for retention after repository closure

  5. Topological Hausdorff dimension and level sets of generic continuous functions on fractals

    International Nuclear Information System (INIS)

    Balka, Richárd; Buczolich, Zoltán; Elekes, Márton

    2012-01-01

    Highlights: ► We examine a new fractal dimension, the so-called topological Hausdorff dimension. ► The generic continuous function has a level set of maximal Hausdorff dimension. ► This maximal dimension is the topological Hausdorff dimension minus one. ► Homogeneity implies that “most” level sets are of this dimension. ► We calculate the various dimensions of the graph of the generic function. - Abstract: In an earlier paper we introduced a new concept of dimension for metric spaces, the so-called topological Hausdorff dimension. For a compact metric space K let dim_H K and dim_tH K denote its Hausdorff and topological Hausdorff dimension, respectively. We proved that this new dimension describes the Hausdorff dimension of the level sets of the generic continuous function on K, namely sup{dim_H f^{-1}(y) : y ∈ R} = dim_tH K - 1 for the generic f ∈ C(K), provided that K is not totally disconnected; otherwise every non-empty level set is a singleton. We also proved that if K is not totally disconnected and sufficiently homogeneous then dim_H f^{-1}(y) = dim_tH K - 1 for the generic f ∈ C(K) and the generic y ∈ f(K). The most important goal of this paper is to make these theorems more precise. As for the first result, we prove that the supremum is actually attained on the left hand side of the first equation above, and also show that there may only be a unique level set of maximal Hausdorff dimension. As for the second result, we characterize those compact metric spaces for which for the generic f ∈ C(K) and the generic y ∈ f(K) we have dim_H f^{-1}(y) = dim_tH K - 1. We also generalize a result of B. Kirchheim by showing that if K is self-similar then for the generic f ∈ C(K), for every y ∈ int f(K), we have dim_H f^{-1}(y) = dim_tH K - 1. Finally, we prove that the graph of the generic f ∈ C(K) has the same Hausdorff and topological Hausdorff dimension as K.

  6. County-level poverty is equally associated with unmet health care needs in rural and urban settings.

    Science.gov (United States)

    Peterson, Lars E; Litaker, David G

    2010-01-01

    Regional poverty is associated with reduced access to health care. Whether this relationship is equally strong in both rural and urban settings, or is affected by the contextual and individual-level characteristics that distinguish these areas, is unclear. Objective: compare the association between regional poverty and self-reported unmet need, a marker of health care access, by rural/urban setting. Multilevel, cross-sectional analysis of a state-representative sample of 39,953 adults stratified by rural/urban status, linked at the county level to data describing contextual characteristics. Weighted random-intercept models examined the independent association of regional poverty with unmet needs, controlling for a range of contextual and individual-level characteristics. The unadjusted association between regional poverty levels and unmet needs was similar in both rural (OR = 1.06 [95% CI, 1.04-1.08]) and urban (OR = 1.03 [1.02-1.05]) settings. Adjusting for other contextual characteristics increased the size of the association in both rural (OR = 1.11 [1.04-1.19]) and urban (OR = 1.11 [1.05-1.18]) settings. Further adjustment for individual characteristics had little additional effect in rural (OR = 1.10 [1.00-1.20]) or urban (OR = 1.11 [1.01-1.22]) settings. To better meet the health care needs of all Americans, health care systems in areas with high regional poverty should acknowledge the relationship between poverty and unmet health care needs. Investments, or other interventions, that reduce regional poverty may be useful strategies for improving health through better access to health care. © 2010 National Rural Health Association.
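The odds ratios reported above come from logistic models; converting a logit coefficient into an OR with a Wald confidence interval is a one-liner. The beta/SE values below are hypothetical, chosen only to reproduce the magnitude of the adjusted association (OR ≈ 1.11), not taken from the study.

```python
import math

def odds_ratio_ci(beta, se, z=1.96):
    """Exponentiate a logistic-regression coefficient into an odds ratio
    with a Wald 95% confidence interval."""
    return math.exp(beta), math.exp(beta - z * se), math.exp(beta + z * se)

# Hypothetical coefficient for county-level poverty (per unit increase):
or_, lo, hi = odds_ratio_ci(beta=0.104, se=0.033)
```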

  7. Noise in restaurants: levels and mathematical model.

    Science.gov (United States)

    To, Wai Ming; Chung, Andy

    2014-01-01

    Noise affects the dining atmosphere and is an occupational hazard to restaurant service employees worldwide. This paper examines the levels of noise in dining areas during peak hours in different types of restaurants in Hong Kong SAR, China. A mathematical model that describes the noise level in a restaurant is presented. The 1-h equivalent continuous noise level (Leq,1-h) was measured using a Type-1 precision integral sound level meter, while the occupancy density, the floor area of the dining area, and the ceiling height of each of the surveyed restaurants were recorded. It was found that the measured noise levels using Leq,1-h ranged from 67.6 to 79.3 dBA in Chinese restaurants, from 69.1 to 79.1 dBA in fast food restaurants, and from 66.7 to 82.6 dBA in Western restaurants. Results of the analysis of variance show that there were no significant differences between the means of the measured noise levels among different types of restaurants. A stepwise multiple regression analysis was employed to determine the relationships between geometrical and operational parameters and the measured noise levels. Results of the regression analysis show that the measured noise levels depended on the levels of occupancy density only. By reconciling the measured noise levels and the mathematical model, it was found that people in restaurants increased their voice levels when the occupancy density increased. Nevertheless, the maximum measured hourly noise level indicated that the noise exposure experienced by restaurant service employees was below the regulated daily noise exposure limit of 85 dBA.
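The Leq,1-h metric used above is an energy average, not an arithmetic one: Leq = 10·log10 of the time-weighted mean of 10^(L/10). A minimal sketch with illustrative values (not the survey's data):

```python
import math

def leq(levels_dba, durations_s):
    """Equivalent continuous sound level (dBA) from piecewise-constant levels:
    energy-average the levels, weighted by duration."""
    total = sum(durations_s)
    energy = sum(d * 10.0 ** (L / 10.0) for L, d in zip(levels_dba, durations_s))
    return 10.0 * math.log10(energy / total)

# One hour split into two half-hour periods at 70 and 80 dBA:
leq_1h = leq([70.0, 80.0], [1800.0, 1800.0])   # dominated by the louder period
```

Note how the result sits much closer to the louder period than the arithmetic mean of 75 dBA would suggest.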

  8. Noise in restaurants: Levels and mathematical model

    Directory of Open Access Journals (Sweden)

    Wai Ming To

    2014-01-01

    Noise affects the dining atmosphere and is an occupational hazard to restaurant service employees worldwide. This paper examines the levels of noise in dining areas during peak hours in different types of restaurants in Hong Kong SAR, China. A mathematical model that describes the noise level in a restaurant is presented. The 1-h equivalent continuous noise level (Leq,1-h) was measured using a Type-1 precision integral sound level meter, while the occupancy density, the floor area of the dining area, and the ceiling height of each of the surveyed restaurants were recorded. It was found that the measured noise levels using Leq,1-h ranged from 67.6 to 79.3 dBA in Chinese restaurants, from 69.1 to 79.1 dBA in fast food restaurants, and from 66.7 to 82.6 dBA in Western restaurants. Results of the analysis of variance show that there were no significant differences between the means of the measured noise levels among different types of restaurants. A stepwise multiple regression analysis was employed to determine the relationships between geometrical and operational parameters and the measured noise levels. Results of the regression analysis show that the measured noise levels depended on the levels of occupancy density only. By reconciling the measured noise levels and the mathematical model, it was found that people in restaurants increased their voice levels when the occupancy density increased. Nevertheless, the maximum measured hourly noise level indicated that the noise exposure experienced by restaurant service employees was below the regulated daily noise exposure limit of 85 dBA.

  9. RESPONSIVE URBAN MODELS BY PROCESSING SETS OF HETEROGENEOUS DATA

    Directory of Open Access Journals (Sweden)

    M. Calvano

    2018-05-01

    This paper presents some steps in experimentation aimed at describing urban spaces following the series of earthquakes that affected a vast area of central Italy starting on 24 August 2016. More specifically, these spaces pertain to historical centres of limited size and to case studies that can be called “problematic” (due to complex morphological and settlement conditions, because they are difficult to access, because they have been affected by calamitous events, etc.). The main objectives were to verify the use of sets of heterogeneous data that are already largely available, to define a workflow, and to develop procedures that would allow some of the steps to be automated as much as possible. The most general goal was to use the experimentation to define a methodology for approaching the problem, aimed at developing responsive descriptive models of the urban space, that is, morphological and computer-based models capable of being modified in relation to the constantly updated flow of input data.

  10. A comprehensive dwelling unit choice model accommodating psychological constructs within a search strategy for consideration set formation.

    Science.gov (United States)

    2015-12-01

    This study adopts a dwelling unit level of analysis and considers a probabilistic choice set generation approach for residential choice modeling. In doing so, we accommodate the fact that housing choices involve both characteristics of the dwelling u...

  11. The level density parameters for fermi gas model

    International Nuclear Information System (INIS)

    Zuang Youxiang; Wang Cuilan; Zhou Chunmei; Su Zongdi

    1986-01-01

    Nuclear level densities are a crucial ingredient in statistical models, for instance in the calculations of the widths, cross sections, emitted particle spectra, etc. for various reaction channels. In this work 667 sets of more reliable and new experimental data are adopted, which include the average level spacing D, the radiative capture width Γ_γ^0 at the neutron binding energy, and the cumulative level number N_0 at low excitation energy, published from 1973 to 1983. Based on the parameters given by Gilbert-Cameron and Cook, the physical quantities mentioned above are calculated. The calculated results deviate noticeably from the experimental values. In order to improve the fit, the parameters in the Gilbert-Cameron formula are adjusted and a new set of level density parameters is obtained. The parameters in this work are more suitable for fitting the new measurements.
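The high-energy part of the Gilbert-Cameron composite is the textbook Bethe (Fermi-gas) formula with a pairing back-shift. A minimal sketch of that formula follows; the parameter values are illustrative, not the fitted set from this work.

```python
import math

def fermi_gas_level_density(E, a, delta=0.0):
    """Bethe Fermi-gas level density (MeV^-1) at excitation energy E (MeV),
    level-density parameter a (MeV^-1), and pairing back-shift delta (MeV)."""
    U = E - delta
    if U <= 0.0:
        raise ValueError("effective excitation energy U = E - delta must be > 0")
    return (math.sqrt(math.pi) / 12.0
            * math.exp(2.0 * math.sqrt(a * U))
            / (a ** 0.25 * U ** 1.25))

rho_5 = fermi_gas_level_density(5.0, a=12.0)   # illustrative a for a mid-mass nucleus
rho_8 = fermi_gas_level_density(8.0, a=12.0)   # steep growth with energy
```

Fitting a (and delta) against measured D, Γ_γ^0, and N_0 values is exactly the adjustment the abstract describes.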

  12. Unified model of nuclear mass and level density formulas

    International Nuclear Information System (INIS)

    Nakamura, Hisashi

    2001-01-01

    The objective of the present work is to obtain a unified description of nuclear shell, pairing and deformation effects for both ground state masses and level densities, and to find a new set of parameter systematics for both the mass and the level density formulas on the basis of a model for new single-particle state densities. In this model, an analytical expression is adopted for the anisotropic harmonic oscillator spectra, but the shell-pairing correlations are introduced in a new way. (author)

  13. Optimization models using fuzzy sets and possibility theory

    CERN Document Server

    Orlovski, S

    1987-01-01

    Optimization is of central concern to a number of disciplines. Operations Research and Decision Theory are often considered to be identical with optimization. But also in other areas such as engineering design, regional policy, logistics and many others, the search for optimal solutions is one of the prime goals. The methods and models which have been used over the last decades in these areas have primarily been "hard" or "crisp", i.e. the solutions were considered to be either feasible or unfeasible, either above a certain aspiration level or below. This dichotomous structure of methods very often forced the modeller to approximate real problem situations of the more-or-less type by yes-or-no-type models, the solutions of which might turn out not to be the solutions to the real problems. This is particularly true if the problem under consideration includes vaguely defined relationships, human evaluations, uncertainty due to inconsistent or incomplete evidence, if natural language has to be...
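The "more-or-less" modeling the passage contrasts with crisp yes-or-no models is commonly formalized via the Bellman-Zadeh fuzzy decision: an alternative's overall satisfaction is the minimum of its memberships in all fuzzy goals and constraints, and the best alternative maximizes that minimum. The two linear membership functions below are illustrative assumptions, not taken from the book.

```python
# Bellman-Zadeh max-min fuzzy decision over a discrete set of alternatives.

def mu_cost(x):       # satisfaction falls as spend x grows past the aspiration level
    return max(0.0, min(1.0, (30.0 - x) / 20.0))

def mu_quality(x):    # satisfaction rises with spend (a proxy for quality)
    return max(0.0, min(1.0, (x - 5.0) / 20.0))

alternatives = [8.0, 12.0, 17.5, 24.0, 29.0]
decision = {x: min(mu_cost(x), mu_quality(x)) for x in alternatives}
best = max(decision, key=decision.get)   # the balanced alternative wins
```

Unlike a crisp feasible/infeasible split, every alternative gets a graded score, and the optimum trades the two goals off rather than satisfying one and ignoring the other.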

  14. Nurses' comfort level with spiritual assessment: a study among nurses working in diverse healthcare settings.

    Science.gov (United States)

    Cone, Pamela H; Giske, Tove

    2017-10-01

    To gain knowledge about nurses' comfort level in assessing spiritual matters and to learn what questions nurses use in practice related to spiritual assessment. Spirituality is important in holistic nursing care; however, nurses report feeling uncomfortable and ill-prepared to address this domain with patients. Education is reported to impact nurses' ability to engage in spiritual care. This cross-sectional exploratory survey reports on a mixed-method study examining how comfortable nurses are with spiritual assessment. In 2014, a 21-item survey with 10 demographic variables and three open-ended questions was distributed to Norwegian nurses working in diverse care settings, yielding 172 responses (72% response rate). SPSS was used to analyse the quantitative data; thematic analysis examined the open-ended questions. Norwegian nurses reported a high level of comfort with most questions even though spirituality is seen as private. Nurses with some preparation or experience in spiritual care were most comfortable assessing spirituality. Statistically significant correlations were found between the nurses' comfort level with spiritual assessment and their preparedness and sense of the importance of spiritual assessment. How well-prepared nurses felt was related to years of experience, degree of spirituality and religiosity, and importance of spiritual assessment. Many nurses are poorly prepared for spiritual assessment and care among patients in diverse care settings; educational preparation increases their comfort level with facilitating such care. Nurses who feel well prepared in spirituality feel more comfortable with the spiritual domain. By fostering a culture where patients' spirituality is discussed and reflected upon in everyday practice and in continuing education, nurses' sense of preparedness, and thus their level of comfort, can increase. Clinical supervision and interprofessional collaboration with hospital chaplains and/or other spiritual leaders can

  15. Modelling of Signal - Level Crossing System

    Directory of Open Access Journals (Sweden)

    Daniel Novak

    2006-01-01

    The author presents an object-oriented model of a railway level-crossing system created for the purpose of functional requirements specification. The Unified Modelling Language (UML, version 1.4), which enables specification, visualisation, construction and documentation of software system artefacts, was used. The main attention was paid to the analysis and design phases. The former phase resulted in the creation of use case diagrams and sequence diagrams, the latter in the creation of class/object diagrams and statechart diagrams.
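The statechart diagrams produced in the design phase boil down to a state/event transition table. A minimal executable sketch of such a level-crossing controller follows; the state and event names are illustrative, not taken from the cited UML model.

```python
# Statechart-style controller as a transition table: (state, event) -> new state.
TRANSITIONS = {
    ("open", "train_approaching"): "closing",
    ("closing", "barriers_down"): "closed",
    ("closed", "train_passed"): "opening",
    ("opening", "barriers_up"): "open",
}

def step(state, event):
    """Advance the controller; an event with no matching transition is ignored."""
    return TRANSITIONS.get((state, event), state)

state = "open"
for event in ["train_approaching", "barriers_down", "train_passed", "barriers_up"]:
    state = step(state, event)
# After one full train cycle the crossing is open again.
```

Keeping the table explicit makes the safety property easy to review: no event can take the crossing from "open" directly to "closed".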

  16. Models, controls, and levels of semiotic autonomy

    Energy Technology Data Exchange (ETDEWEB)

    Joslyn, C.

    1998-12-01

    In this paper the authors consider forms of autonomy, forms of semiotic systems, and any necessary relations among them. Levels of autonomy are identified as levels of system identity, from adiabatic closure to disintegration. Forms of autonomy or closure in systems are also recognized, including physical, dynamical, functional, and semiotic. Models and controls are canonical linear and circular (closed) semiotic relations respectively. They conclude that only at higher levels of autonomy do semiotic properties become necessary. In particular, all control systems display at least a minimal degree of semiotic autonomy; and all systems with sufficiently interesting functional autonomy are semiotically related to their environments.

  17. Segmentation of teeth in CT volumetric dataset by panoramic projection and variational level set

    International Nuclear Information System (INIS)

    Hosntalab, Mohammad; Aghaeizadeh Zoroofi, Reza; Abbaspour Tehrani-Fard, Ali; Shirani, Gholamreza

    2008-01-01

    Quantification of teeth is of clinical importance for various computer-assisted procedures such as dental implants, orthodontic planning, and face, jaw and cosmetic surgeries. In this regard, segmentation is a major step. In this paper, we propose a method for segmentation of teeth in volumetric computed tomography (CT) data using panoramic re-sampling of the dataset in the coronal view and a variational level set. The proposed method consists of five steps as follows. First, we extract a mask from the CT images using Otsu thresholding. Second, the teeth are segmented from other bony tissues by utilizing anatomical knowledge of the teeth in the jaws. Third, the arcs of the upper and lower jaws are estimated and the dataset is panoramically re-sampled. Fourth, separation of the upper and lower jaws and initial segmentation of the teeth are performed by employing the horizontal and vertical projections of the panoramic dataset, respectively; based on the above procedures, an initial mask for each tooth is obtained. Finally, we apply a variational level set to refine the initial teeth boundaries to final contours. The proposed algorithm was evaluated on 30 multi-slice CT datasets comprising 3,600 images. Experimental results reveal the effectiveness of the proposed method. In the proposed algorithm, the variational level set technique was utilized to trace the contour of the teeth. Since this technique is based on the characteristics of the overall region of the tooth image, it is possible to extract a very smooth and accurate tooth contour. On the available datasets, the proposed technique was successful in teeth segmentation compared to previous techniques. (orig.)

  18. Segmentation of teeth in CT volumetric dataset by panoramic projection and variational level set

    Energy Technology Data Exchange (ETDEWEB)

    Hosntalab, Mohammad [Islamic Azad University, Faculty of Engineering, Science and Research Branch, Tehran (Iran); Aghaeizadeh Zoroofi, Reza [University of Tehran, Control and Intelligent Processing Center of Excellence, School of Electrical and Computer Engineering, College of Engineering, Tehran (Iran); Abbaspour Tehrani-Fard, Ali [Islamic Azad University, Faculty of Engineering, Science and Research Branch, Tehran (Iran); Sharif University of Technology, Department of Electrical Engineering, Tehran (Iran); Shirani, Gholamreza [Faculty of Dentistry Medical Science of Tehran University, Oral and Maxillofacial Surgery Department, Tehran (Iran)

    2008-09-15

    Quantification of teeth is of clinical importance for various computer-assisted procedures such as dental implants, orthodontic planning, and face, jaw and cosmetic surgeries. In this regard, segmentation is a major step. In this paper, we propose a method for segmentation of teeth in volumetric computed tomography (CT) data using panoramic re-sampling of the dataset in the coronal view and a variational level set. The proposed method consists of five steps as follows. First, we extract a mask from the CT images using Otsu thresholding. Second, the teeth are segmented from other bony tissues by utilizing anatomical knowledge of the teeth in the jaws. Third, the arcs of the upper and lower jaws are estimated and the dataset is panoramically re-sampled. Fourth, separation of the upper and lower jaws and initial segmentation of the teeth are performed by employing the horizontal and vertical projections of the panoramic dataset, respectively; based on the above procedures, an initial mask for each tooth is obtained. Finally, we apply a variational level set to refine the initial teeth boundaries to final contours. The proposed algorithm was evaluated on 30 multi-slice CT datasets comprising 3,600 images. Experimental results reveal the effectiveness of the proposed method. In the proposed algorithm, the variational level set technique was utilized to trace the contour of the teeth. Since this technique is based on the characteristics of the overall region of the tooth image, it is possible to extract a very smooth and accurate tooth contour. On the available datasets, the proposed technique was successful in teeth segmentation compared to previous techniques. (orig.)

  19. Level Set-Based Topology Optimization for the Design of an Electromagnetic Cloak With Ferrite Material

    DEFF Research Database (Denmark)

    Otomori, Masaki; Yamada, Takayuki; Andkjær, Jacob Anders

    2013-01-01

    This paper presents a structural optimization method for the design of an electromagnetic cloak made of ferrite material. Ferrite materials exhibit a frequency-dependent degree of permeability, due to a magnetic resonance phenomenon that can be altered by changing the magnitude of an externally... A level set-based topology optimization method incorporating a fictitious interface energy is used to find optimized configurations of the ferrite material. The numerical results demonstrate that the optimization successfully found an appropriate ferrite configuration that functions as an electromagnetic cloak.

  20. A multilevel, level-set method for optimizing eigenvalues in shape design problems

    International Nuclear Information System (INIS)

    Haber, E.

    2004-01-01

    In this paper, we consider optimal design problems that involve shape optimization. The goal is to determine the shape of a certain structure such that it is either as rigid or as soft as possible. To achieve this goal we combine two new ideas for an efficient solution of the problem. First, we replace the eigenvalue problem with an approximation by using inverse iteration. Second, we use a level set method, but rather than propagating the front we use constrained optimization methods combined with multilevel continuation techniques. Combining these two ideas, we obtain a robust and rapid method for the solution of the optimal design problem.
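The first idea above can be sketched directly: inverse iteration repeatedly solves (A - σI)v = x, which pulls x toward the eigenvector whose eigenvalue is closest to the shift σ; with σ = 0 the Rayleigh quotient then approximates the smallest eigenvalue. The matrix below is a toy stand-in for the stiffness operator of the design problem.

```python
import numpy as np

def inverse_iteration(A, sigma=0.0, iters=50):
    """Approximate the eigenvalue of A nearest to sigma via inverse iteration."""
    n = A.shape[0]
    x = np.ones(n) / np.sqrt(n)
    M = A - sigma * np.eye(n)
    for _ in range(iters):
        x = np.linalg.solve(M, x)       # one inverse-iteration step
        x /= np.linalg.norm(x)
    return x @ A @ x                    # Rayleigh quotient: eigenvalue estimate

A = np.diag([1.0, 3.0, 7.0]) + 0.1      # small symmetric test matrix
lam = inverse_iteration(A)              # approximates the smallest eigenvalue
```

In the paper's setting the solve would be done approximately inside the optimization loop, which is exactly what makes the approach cheaper than a full eigenvalue solve per design update.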

  1. Microwave imaging of dielectric cylinder using level set method and conjugate gradient algorithm

    International Nuclear Information System (INIS)

    Grayaa, K.; Bouzidi, A.; Aguili, T.

    2011-01-01

    In this paper, we propose a computational method for microwave imaging of cylindrical dielectric objects, based on combining the level set technique and the conjugate gradient algorithm. From measurements of the scattered field, we retrieve the shape, location and permittivity of the object. The forward problem is solved by the moment method, while the inverse problem is reformulated as an optimization problem and solved by the proposed scheme. It is found that the proposed method is able to give good reconstruction quality in terms of the reconstructed shape and permittivity.
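The optimization engine named above is the conjugate gradient method. A plain CG loop for a symmetric positive-definite system illustrates it in its simplest form; the actual scattering operators of the paper are of course far more involved, and the 2x2 system here is purely illustrative.

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=200):
    """Solve A x = b for symmetric positive-definite A by conjugate gradients."""
    x = np.zeros_like(b)
    r = b - A @ x                     # residual
    p = r.copy()                      # search direction
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)         # step length along p
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p     # conjugate update of the direction
        rs = rs_new
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])   # SPD test system
b = np.array([1.0, 2.0])
x = conjugate_gradient(A, b)             # solves A x = b
```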

  2. Improving a Lecture-Size Molecular Model Set by Repurposing Used Whiteboard Markers

    Science.gov (United States)

    Dragojlovic, Veljko

    2015-01-01

    Preparation of an inexpensive model set from whiteboard markers and either an HGS molecular model set or atoms made of wood is described. The model set is relatively easy to prepare and is sufficiently large to be suitable as an instructor's set for use in lectures.

  3. Implications of sea-level rise in a modern carbonate ramp setting

    Science.gov (United States)

    Lokier, Stephen W.; Court, Wesley M.; Onuma, Takumi; Paul, Andreas

    2018-03-01

    This study addresses a gap in our understanding of the effects of sea-level rise on the sedimentary systems and morphological development of recent and ancient carbonate ramp settings. Many ancient carbonate sequences are interpreted as having been deposited in carbonate ramp settings. These settings are poorly-represented in the Recent. The study documents the present-day transgressive flooding of the Abu Dhabi coastline at the southern shoreline of the Arabian/Persian Gulf, a carbonate ramp depositional system that is widely employed as a Recent analogue for numerous ancient carbonate systems. Fourteen years of field-based observations are integrated with historical and recent high-resolution satellite imagery in order to document and assess the onset of flooding. Predicted rates of transgression (i.e. landward movement of the shoreline) of 2.5 m yr⁻¹ (± 0.2 m yr⁻¹) based on global sea-level rise alone were far exceeded by the flooding rate calculated from the back-stepping of coastal features (10-29 m yr⁻¹). This discrepancy results from the dynamic nature of the flooding, with increased water depth exposing the coastline to increased erosion and, thereby, enhancing back-stepping. A non-accretionary transgressive shoreline trajectory results from relatively rapid sea-level rise coupled with a low-angle ramp geometry and a paucity of sediments. The flooding is represented by the landward migration of facies belts, a range of erosive features and the onset of bioturbation. Employing Intergovernmental Panel on Climate Change (Church et al., 2013) predictions for 21st century sea-level rise, and allowing for the post-flooding lag time that is typical for the start-up of carbonate factories, it is calculated that the coastline will continue to retrograde for the foreseeable future. Total passive flooding (without considering feedback in the modification of the shoreline) by the year 2100 is calculated to likely be between 340 and 571 m with a flooding rate of 3
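The "passive" transgression estimate above follows from simple geometry: on a planar ramp, a sea-level rise dS moves the shoreline landward by roughly dS / tan(β), where β is the ramp slope. The slope and rise-rate values below are illustrative assumptions chosen to land near the quoted 2.5 m yr⁻¹, not the paper's measurements.

```python
import math

def shoreline_retreat(rise_m, slope_deg):
    """Horizontal transgression (m) for a given sea-level rise on a planar ramp."""
    return rise_m / math.tan(math.radians(slope_deg))

# A 3 mm/yr sea-level rise on a ramp sloping ~0.07 degrees:
rate = shoreline_retreat(0.003, 0.07)   # metres of landward retreat per year
```

The observed 10-29 m yr⁻¹ back-stepping exceeds this passive figure by an order of magnitude, which is the erosion feedback the abstract describes.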

  4. Individual and setting level predictors of the implementation of a skin cancer prevention program: a multilevel analysis

    Directory of Open Access Journals (Sweden)

    Brownson Ross C

    2010-05-01

    Abstract. Background: To achieve widespread cancer control, a better understanding is needed of the factors that contribute to successful implementation of effective skin cancer prevention interventions. This study assessed the relative contributions of individual- and setting-level characteristics to implementation of a widely disseminated skin cancer prevention program. Methods: A multilevel analysis was conducted using data from the Pool Cool Diffusion Trial from 2004 and replicated with data from 2005. Implementation of Pool Cool by lifeguards was measured using a composite score (implementation variable, range 0 to 10) that assessed whether the lifeguard performed different components of the intervention. Predictors included lifeguard background characteristics, lifeguard sun protection-related attitudes and behaviors, pool characteristics, and enhanced (i.e., more technical assistance, tailored materials, and incentives are provided) versus basic treatment group. Results: The mean value of the implementation variable was 4 in both years (SD = 2 in 2004 and SD = 3 in 2005), indicating moderate implementation for most lifeguards. Several individual-level (lifeguard characteristics) and setting-level (pool characteristics and treatment group) factors were found to be significantly associated with implementation of Pool Cool by lifeguards. All three lifeguard-level domains (lifeguard background characteristics, lifeguard sun protection-related attitudes and behaviors) and six pool-level predictors (number of weekly pool visitors, intervention intensity, geographic latitude, pool location, sun safety and/or skin cancer prevention programs, and sun safety programs and policies) were included in the final model. The most important predictors of implementation were the number of weekly pool visitors (inverse association) and enhanced treatment group (positive association). That is, pools with fewer weekly visitors and pools in the enhanced

  5. An improved level set method for brain MR images segmentation and bias correction.

    Science.gov (United States)

    Chen, Yunjie; Zhang, Jianwei; Macione, Jim

    2009-10-01

    Intensity inhomogeneities cause considerable difficulty in the quantitative analysis of magnetic resonance (MR) images. Thus, bias field estimation is a necessary step before quantitative analysis of MR data can be undertaken. This paper presents a variational level set approach to bias correction and segmentation for images with intensity inhomogeneities. Our method is based on the observation that intensities in a relatively small local region are separable, despite the inseparability of the intensities in the whole image caused by the overall intensity inhomogeneity. We first define a localized K-means-type clustering objective function for image intensities in a neighborhood around each point. The cluster centers in this objective function have a multiplicative factor that estimates the bias within the neighborhood. The objective function is then integrated over the entire domain to define the data term in the level set framework. Our method is able to capture bias fields with quite general profiles. Moreover, it is robust to initialization, and thereby allows fully automated applications. The proposed method has been used for images of various modalities with promising results.

  6. Topology optimization in acoustics and elasto-acoustics via a level-set method

    Science.gov (United States)

    Desai, J.; Faure, A.; Michailidis, G.; Parry, G.; Estevez, R.

    2018-04-01

    Optimizing the shape and topology (S&T) of structures to improve their acoustic performance is quite challenging. The exact position of the structural boundary is usually of critical importance, which dictates the use of geometric methods for topology optimization instead of standard density approaches. The goal of the present work is to investigate different possibilities for handling topology optimization problems in acoustics and elasto-acoustics via a level-set method. From a theoretical point of view, we detail two equivalent ways to perform the derivation of surface-dependent terms and propose a smoothing technique for treating problems of boundary conditions optimization. In the numerical part, we examine the importance of the surface-dependent term in the shape derivative, neglected in previous studies found in the literature, on the optimal designs. Moreover, we test different mesh adaptation choices, as well as technical details related to the implicit surface definition in the level-set approach. We present results in two and three space dimensions.

  7. A mass conserving level set method for detailed numerical simulation of liquid atomization

    Energy Technology Data Exchange (ETDEWEB)

    Luo, Kun; Shao, Changxiao [State Key Laboratory of Clean Energy Utilization, Zhejiang University, Hangzhou 310027 (China); Yang, Yue [State Key Laboratory of Turbulence and Complex Systems, Peking University, Beijing 100871 (China); Fan, Jianren, E-mail: fanjr@zju.edu.cn [State Key Laboratory of Clean Energy Utilization, Zhejiang University, Hangzhou 310027 (China)

    2015-10-01

    An improved mass conserving level set method for detailed numerical simulations of liquid atomization is developed to address the issue of mass loss in the existing level set method. This method introduces a mass remedy procedure based on the local curvature at the interface and, in principle, can ensure the absolute mass conservation of the liquid phase in the computational domain. Three benchmark cases, including Zalesak's disk, a drop deforming in a vortex field, and a binary drop head-on collision, are simulated to validate the present method, and excellent agreement with exact solutions or experimental results is achieved. It is shown that the present method is able to capture the complex interface with second-order accuracy and negligible additional computational cost. The present method is then applied to study more complex flows, such as a drop impacting on a liquid film and swirling liquid sheet atomization, which again demonstrate the advantages of mass conservation and the capability to represent the interface accurately.
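The mass-loss issue above is easy to observe numerically: the liquid area (in 2D) is recovered from the level-set function through a smoothed Heaviside, and monitoring it over time exposes any drift. A minimal sketch, not the authors' code; the grid size, drop radius and smoothing width are illustrative assumptions:

```python
import numpy as np

def liquid_area(phi, dx, eps):
    """Area of the liquid region {phi < 0}, via a smoothed Heaviside H_eps."""
    H = np.where(phi > eps, 1.0,
        np.where(phi < -eps, 0.0,
                 0.5 * (1.0 + phi / eps + np.sin(np.pi * phi / eps) / np.pi)))
    return float(np.sum(1.0 - H) * dx * dx)

# Signed-distance level set of a circular "drop" of radius 0.3 on [0,1]^2
N = 200
x = np.linspace(0.0, 1.0, N)
X, Y = np.meshgrid(x, x)
phi = np.sqrt((X - 0.5) ** 2 + (Y - 0.5) ** 2) - 0.3
dx = x[1] - x[0]
area = liquid_area(phi, dx, eps=1.5 * dx)   # close to pi * 0.3**2
```

In an actual atomization solver one would evaluate this after every advection/reinitialization step and trigger a curvature-weighted remedy whenever the area drifts from its initial value.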

  8. Towards Precise Metadata-set for Discovering 3D Geospatial Models in Geo-portals

    Science.gov (United States)

    Zamyadi, A.; Pouliot, J.; Bédard, Y.

    2013-09-01

    Accessing 3D geospatial models, eventually at no cost and for unrestricted use, is certainly an important issue as they become popular among participatory communities, consultants, and officials. Various geo-portals, mainly established for 2D resources, have tried to provide access to existing 3D resources such as digital elevation models, LIDAR or classic topographic data. Describing the content of data, metadata is a key component of data discovery in geo-portals. An inventory of seven online geo-portals and commercial catalogues shows that the metadata referring to 3D information varies widely from one geo-portal to another, and even between similar 3D resources in the same geo-portal. The inventory considered 971 data resources affiliated with elevation. 51% of them were from three geo-portals running at Canadian federal and municipal levels whose metadata resources did not consider 3D models by any definition. In the remaining 49%, which do refer to 3D models, different definitions of terms and metadata were found, resulting in confusion and misinterpretation. The overall assessment of these geo-portals clearly shows that the provided metadata do not integrate specific and common information about 3D geospatial models. Accordingly, the main objective of this research is to improve 3D geospatial model discovery in geo-portals by adding a specific metadata-set. Based on knowledge and current practices in 3D modeling, and in 3D data acquisition and management, a set of metadata is proposed to increase suitability for 3D geospatial models. This metadata-set enables the definition of genuine classes, fields, and code-lists for a 3D metadata profile. The main structure of the proposal contains 21 metadata classes. These classes are grouped into three packages: General and Complementary, covering contextual and structural information, and Availability, covering the transition from storage to delivery format.
The proposed metadata set is compared with Canadian Geospatial

  9. Models of Music Therapy Intervention in School Settings

    Science.gov (United States)

    Wilson, Brian L., Ed.

    2002-01-01

    This completely revised 2nd edition, edited by Brian L. Wilson, addresses both theoretical issues and practical applications of music therapy in educational settings. Its 17 chapters, written by a variety of authors, each deal with a different setting or issue. A valuable resource for demonstrating the efficacy of music therapy to school…

  10. Using Set Covering with Item Sampling to Analyze the Infeasibility of Linear Programming Test Assembly Models

    Science.gov (United States)

    Huitzing, Hiddo A.

    2004-01-01

    This article shows how set covering with item sampling (SCIS) methods can be used in the analysis and preanalysis of linear programming models for test assembly (LPTA). LPTA models can construct tests, fulfilling a set of constraints set by the test assembler. Sometimes, no solution to the LPTA model exists. The model is then said to be…

  11. Study on high-level waste geological disposal metadata model

    International Nuclear Information System (INIS)

    Ding Xiaobin; Wang Changhong; Zhu Hehua; Li Xiaojun

    2008-01-01

    This paper expounds the concept of metadata and the related research in China and abroad, and then explains the motivation for studying a metadata model for the deep geological disposal of high-level nuclear waste. With reference to GML, the authors first set up DML under the framework of digital underground space engineering. Based on DML, a standardized metadata scheme for high-level nuclear waste deep geological disposal projects is presented. Then, a metadata model for use over the internet is put forward. With standardized data and CSW services, this model may solve the problems of sharing and exchanging data of different forms. A metadata editor was built to search and maintain metadata based on this model. (authors)

  12. Flipping for success: evaluating the effectiveness of a novel teaching approach in a graduate level setting

    OpenAIRE

    Moraros, John; Islam, Adiba; Yu, Stan; Banow, Ryan; Schindelka, Barbara

    2015-01-01

    Background Flipped Classroom is a model that's quickly gaining recognition as a novel teaching approach among health science curricula. The purpose of this study was four-fold and aimed to compare Flipped Classroom effectiveness ratings with: 1) student socio-demographic characteristics, 2) student final grades, 3) student overall course satisfaction, and 4) course pre-Flipped Classroom effectiveness ratings. Methods The participants in the study consisted of 67 Masters-level graduate student...

  13. CT Findings of Disease with Elevated Serum D-Dimer Levels in an Emergency Room Setting

    Energy Technology Data Exchange (ETDEWEB)

    Choi, Ji Youn; Kwon, Woo Cheol; Kim, Young Ju [Dept. of Radiology, Wonju Christian Hospital, Yonsei University Wonju College of Medicine, Wonju (Korea, Republic of)

    2012-01-15

    Pulmonary embolism and deep vein thrombosis are the leading causes of elevated serum D-dimer levels in the emergency room. Although D-dimer is a useful screening test because of its high sensitivity and negative predictive value, it has a low specificity. In addition, D-dimer can be elevated in various diseases. Therefore, information on the various diseases with elevated D-dimer levels and their radiologic findings may allow for accurate diagnosis and proper management. Herein, we report the CT findings of various diseases with elevated D-dimer levels in an emergency room setting, including an intravascular contrast filling defect with associated findings in a venous thromboembolism, fracture with soft tissue swelling and hematoma formation in a trauma patient, enlargement with contrast enhancement in the infected organ of a patient, coronary artery stenosis with a perfusion defect of the myocardium in a patient with acute myocardial infarction, high density of acute thrombus in a cerebral vessel with a low density of affected brain parenchyma in an acute cerebral infarction, intimal flap with two separated lumens in a case of aortic dissection, organ involvement of malignancy in a cancer patient, and atrophy of a liver with a dilated portal vein and associated findings.

  14. Glycated albumin is set lower in relation to plasma glucose levels in patients with Cushing's syndrome.

    Science.gov (United States)

    Kitamura, Tetsuhiro; Otsuki, Michio; Tamada, Daisuke; Tabuchi, Yukiko; Mukai, Kosuke; Morita, Shinya; Kasayama, Soji; Shimomura, Iichiro; Koga, Masafumi

    2013-09-23

    Glycated albumin (GA) is an indicator of glycemic control with some specific characteristics in comparison with HbA1c. Since glucocorticoids (GC) promote protein catabolism, including that of serum albumin, a GC excess state would influence GA levels. We therefore investigated GA levels in patients with Cushing's syndrome. We studied 16 patients with Cushing's syndrome (8 patients had diabetes mellitus and the remaining 8 patients were non-diabetic). Thirty-two patients with type 2 diabetes mellitus and 32 non-diabetic subjects matched for age, sex and BMI were used as controls. In the patients with Cushing's syndrome, GA was significantly correlated with HbA1c, but the regression line shifted downwards as compared with the controls. The GA/HbA1c ratio in the patients with Cushing's syndrome was also significantly lower than in the controls. HbA1c in the non-diabetic patients with Cushing's syndrome was not different from that of the non-diabetic controls, whereas GA was significantly lower. In 7 patients with Cushing's syndrome who performed self-monitoring of blood glucose, the measured HbA1c matched the HbA1c estimated from mean blood glucose, whereas the measured GA was significantly lower than the estimated GA. We clarified that GA is set lower in relation to plasma glucose levels in patients with Cushing's syndrome.

  16. Area-level risk factors for adverse birth outcomes: trends in urban and rural settings.

    Science.gov (United States)

    Kent, Shia T; McClure, Leslie A; Zaitchik, Ben F; Gohlke, Julia M

    2013-06-10

    Significant and persistent racial and income disparities in birth outcomes exist in the US. The analyses in this manuscript examine whether adverse birth outcome time trends and associations between area-level variables and adverse birth outcomes differ by urban-rural status. Alabama births records were merged with ZIP code-level census measures of race, poverty, and rurality. B-splines were used to determine long-term preterm birth (PTB) and low birth weight (LBW) trends by rurality. Logistic regression models were used to examine differences in the relationships between ZIP code-level percent poverty or percent African-American with either PTB or LBW. Interactions with rurality were examined. Population dense areas had higher adverse birth outcome rates compared to other regions. For LBW, the disparity between population dense and other regions increased during the 1991-2005 time period, and the magnitude of the disparity was maintained through 2010. Overall PTB and LBW rates have decreased since 2006, except within isolated rural regions. The addition of individual-level socioeconomic or race risk factors greatly attenuated these geographical disparities, but isolated rural regions maintained increased odds of adverse birth outcomes. ZIP code-level percent poverty and percent African American both had significant relationships with adverse birth outcomes. Poverty associations remained significant in the most population-dense regions when models were adjusted for individual-level risk factors. Population dense urban areas have heightened rates of adverse birth outcomes. High-poverty African American areas have higher odds of adverse birth outcomes in urban versus rural regions. These results suggest there are urban-specific social or environmental factors increasing risk for adverse birth outcomes in underserved communities. On the other hand, trends in PTBs and LBWs suggest interventions that have decreased adverse birth outcomes elsewhere may not be reaching
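The area-level logistic models described above can be written, in generic form, as follows (a sketch; the covariates and interaction term are assumptions based on the abstract, not the authors' exact specification):

```latex
\operatorname{logit} P(\mathrm{PTB}_{ij} = 1)
= \beta_0 + \beta_1\,\mathrm{poverty}_j + \beta_2\,\mathrm{rural}_j
+ \beta_3\,(\mathrm{poverty}_j \times \mathrm{rural}_j)
+ \boldsymbol{\gamma}^{\top} \mathbf{x}_{ij}
```

Here $j$ indexes ZIP codes, $\mathbf{x}_{ij}$ collects individual-level risk factors, and a significant interaction coefficient $\beta_3$ indicates that the poverty association differs between urban and rural settings.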

  17. Transport equations, Level Set and Eulerian mechanics. Application to fluid-structure coupling

    International Nuclear Information System (INIS)

    Maitre, E.

    2008-11-01

    My work was devoted to the numerical analysis of non-linear elliptic-parabolic equations, to the neutron transport equation and to the simulation of fabric draping. More recently I developed an Eulerian method based on a level set formulation of the immersed boundary method to deal with fluid-structure coupling problems arising in bio-mechanics. Some of the more efficient algorithms for solving the neutron transport equation make use of a splitting of the transport operator that takes its characteristics into account. In the present work we introduce a new algorithm based on this splitting and an adaptation of minimal residual methods to the infinite-dimensional case. We present the cases where the velocity space is of dimension 1 (slab geometry) and 2 (plane geometry) because the splitting is simpler in the former

  18. Level set method for optimal shape design of MRAM core. Micromagnetic approach

    International Nuclear Information System (INIS)

    Melicher, Valdemar; Cimrak, Ivan; Keer, Roger van

    2008-01-01

    We aim at optimizing the shape of the magnetic core in MRAM memories. The evolution of the magnetization during the writing process is described by the Landau-Lifshitz equation (LLE). The actual shape of the core in one cell is characterized by the coefficient γ. The cost functional f=f(γ) expresses the quality of the writing process, having in mind the competition between the full-select and the half-select element. We derive an explicit form of the derivative F=∂f/∂γ which allows for the use of gradient-type methods for the actual computation of the optimized shape (e.g., the steepest descent method). The level set method (LSM) is employed for the representation of the piecewise constant coefficient γ
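With the derivative in hand, the steepest-descent iteration mentioned above takes the usual form (a generic sketch; the step size $\alpha_k$ is an assumption):

```latex
\gamma^{(k+1)} = \gamma^{(k)} - \alpha_k\, F\!\left(\gamma^{(k)}\right),
\qquad F = \frac{\partial f}{\partial \gamma}
```

Each update is then reflected in the level set function so that $\gamma$ remains piecewise constant, with the zero level set delimiting the core shape.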

  19. Model answers in pure mathematics for a-level students

    CERN Document Server

    Pratt, GA; Schofield, C W

    1967-01-01

    Model Answers in Pure Mathematics for A-Level Students provides a set of solutions that indicate what is required and expected in an Advanced Level examination in Pure Mathematics. This book serves as a guide to the length of answer required, layout of the solution, and methods of selecting the best approach to any particular type of math problem. This compilation intends to supplement, not replace, the normal textbook and provides a varied selection of questions for practice in addition to the worked solutions. The subjects covered in this text include algebra, trigonometry, coordinate geomet

  20. Level-set segmentation of pulmonary nodules in megavolt electronic portal images using a CT prior

    International Nuclear Information System (INIS)

    Schildkraut, J. S.; Prosser, N.; Savakis, A.; Gomez, J.; Nazareth, D.; Singh, A. K.; Malhotra, H. K.

    2010-01-01

    Purpose: Pulmonary nodules present unique problems during radiation treatment due to nodule position uncertainty that is caused by respiration. The radiation field has to be enlarged to account for nodule motion during treatment. The purpose of this work is to provide a method of locating a pulmonary nodule in a megavolt portal image that can be used to reduce the internal target volume (ITV) during radiation therapy. A reduction in the ITV would result in a decrease in radiation toxicity to healthy tissue. Methods: Eight patients with non-small cell lung cancer were used in this study. CT scans that include the pulmonary nodule were captured with a GE Healthcare LightSpeed RT 16 scanner. Megavolt portal images were acquired with a Varian Trilogy unit equipped with an AS1000 electronic portal imaging device. The nodule localization method uses grayscale morphological filtering and level-set segmentation with a prior. The treatment-time portion of the algorithm is implemented on a graphics processing unit. Results: The method was retrospectively tested on eight cases that include a total of 151 megavolt portal image frames. The method reduced the nodule position uncertainty by an average of 40% for seven out of the eight cases. The treatment-phase portion of the method has a subsecond execution time that makes it suitable for near-real-time nodule localization. Conclusions: A method was developed to localize a pulmonary nodule in a megavolt portal image. The method uses the characteristics of the nodule in a prior CT scan to enhance the nodule in the portal image and to identify the nodule region by level-set segmentation. In a retrospective study, the method reduced the nodule position uncertainty by an average of 40% for seven out of the eight cases studied.

  1. Modeling Unobserved Consideration Sets for Household Panel Data

    NARCIS (Netherlands)

    J.E.M. van Nierop; R. Paap (Richard); B. Bronnenberg; Ph.H.B.F. Franses (Philip Hans)

    2000-01-01

    We propose a new method to model consumers' consideration and choice processes. We develop a parsimonious probit-type model for consideration and a multinomial probit model for choice, given consideration. Unlike earlier models of consideration, ours is not prone to the curse of

  2. Does rational selection of training and test sets improve the outcome of QSAR modeling?

    Science.gov (United States)

    Martin, Todd M; Harten, Paul; Young, Douglas M; Muratov, Eugene N; Golbraikh, Alexander; Zhu, Hao; Tropsha, Alexander

    2012-10-22

    Prior to using a quantitative structure-activity relationship (QSAR) model for external predictions, its predictive power should be established and validated. In the absence of a true external data set, the best way to validate the predictive ability of a model is to perform its statistical external validation. In statistical external validation, the overall data set is divided into training and test sets. Commonly, this splitting is performed using random division. Rational splitting methods can divide data sets into training and test sets in an intelligent fashion. The purpose of this study was to determine whether rational division methods lead to more predictive models compared to random division. A special data splitting procedure was used to facilitate the comparison between random and rational division methods. For each toxicity end point, the overall data set was divided into a modeling set (80% of the overall set) and an external evaluation set (20% of the overall set) using random division. The modeling set was then subdivided into a training set (80% of the modeling set) and a test set (20% of the modeling set) using rational division methods and by using random division. The Kennard-Stone, minimal test set dissimilarity, and sphere exclusion algorithms were used as the rational division methods. The hierarchical clustering, random forest, and k-nearest neighbor (kNN) methods were used to develop QSAR models based on the training sets. For kNN QSAR, multiple training and test sets were generated, and multiple QSAR models were built. The results of this study indicate that models based on rational division methods generate better statistical results for the test sets than models based on random division, but the predictive power of both types of models is comparable.
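Of the rational division methods named above, Kennard-Stone is the easiest to sketch: it seeds the training set with the two mutually most distant samples, then greedily adds the candidate farthest from everything already selected. A minimal illustration on a generic descriptor matrix (not the study's code):

```python
import numpy as np

def kennard_stone_split(X, n_train):
    """Kennard-Stone (CADEX) selection of n_train training samples.

    Seeds with the two mutually most distant points, then repeatedly adds
    the remaining point whose distance to its nearest selected point is
    largest.  Returns (train_indices, test_indices).
    """
    X = np.asarray(X, dtype=float)
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)  # pairwise distances
    i, j = np.unravel_index(np.argmax(d), d.shape)
    selected = [int(i), int(j)]
    remaining = set(range(len(X))) - set(selected)
    while len(selected) < n_train:
        rem = np.array(sorted(remaining))
        # distance of each candidate to its nearest already-selected point
        nearest = d[np.ix_(rem, selected)].min(axis=1)
        pick = int(rem[np.argmax(nearest)])
        selected.append(pick)
        remaining.remove(pick)
    return selected, sorted(remaining)

# 80/20 split of a toy 20-sample, 3-descriptor set
X = np.random.RandomState(0).rand(20, 3)
train_idx, test_idx = kennard_stone_split(X, 16)
```

Because selection is distance-based, descriptors should be scaled beforehand; the resulting test set probes the interior of the training set's coverage rather than its edges, which is one reason rational splits tend to flatter test-set statistics.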

  3. Level-set dynamics and mixing efficiency of passive and active scalars in DNS and LES of turbulent mixing layers

    NARCIS (Netherlands)

    Geurts, Bernard J.; Vreman, Bert; Kuerten, Hans; Luo, Kai H.

    2001-01-01

    The mixing efficiency in a turbulent mixing layer is quantified by monitoring the surface-area of level-sets of scalar fields. The Laplace transform is applied to numerically calculate integrals over arbitrary level-sets. The analysis includes both direct and large-eddy simulation and is used to

  4. The effectiveness of flipped classroom learning model in secondary physics classroom setting

    Science.gov (United States)

    Prasetyo, B. D.; Suprapto, N.; Pudyastomo, R. N.

    2018-03-01

    The research aimed to describe the effectiveness of the flipped classroom learning model in a secondary physics classroom setting during the Fall semester of 2017. The research object was the Secondary 3 Physics group of Singapore School Kelapa Gading. The research began with a pre-test, followed by treatment under the flipped classroom learning model. At the end of the learning process, the pupils were given a post-test and a questionnaire to gauge their response to the flipped classroom learning model. Based on the data analysis, 89% of pupils passed the minimum criteria of standardization. The improvement in the students' marks was analysed with the normalized n-gain formula, giving a normalized n-gain score of 0.4, which falls in the medium category. The questionnaire distributed to the students showed that 93% of students became more motivated to study physics and 89% were very happy to carry out hands-on activities based on the flipped classroom learning model. Together, these three aspects support the conclusion that the flipped classroom learning model is effective in a secondary physics classroom setting.
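The normalized n-gain used above is Hake's formula: the raw gain divided by the maximum gain still available. A quick sketch; the pre/post scores are hypothetical numbers chosen only to illustrate a gain of 0.4:

```python
def normalized_gain(pre, post, max_score=100.0):
    """Hake's normalized gain: g = (post - pre) / (max_score - pre)."""
    return (post - pre) / (max_score - pre)

# Hypothetical class averages giving the medium-category score reported above:
g = normalized_gain(pre=55.0, post=73.0)   # (73 - 55) / (100 - 55) = 0.4
category = "high" if g >= 0.7 else "medium" if g >= 0.3 else "low"
```

Dividing by the available headroom rather than the full scale keeps the measure comparable between groups that start from different pre-test levels.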

  5. Characterization of mammographic masses based on level set segmentation with new image features and patient information

    International Nuclear Information System (INIS)

    Shi Jiazheng; Sahiner, Berkman; Chan Heangping; Ge Jun; Hadjiiski, Lubomir; Helvie, Mark A.; Nees, Alexis; Wu Yita; Wei Jun; Zhou Chuan; Zhang Yiheng; Cui Jing

    2008-01-01

    Computer-aided diagnosis (CAD) for characterization of mammographic masses as malignant or benign has the potential to assist radiologists in reducing the biopsy rate without increasing false negatives. The purpose of this study was to develop an automated method for mammographic mass segmentation and explore new image based features in combination with patient information in order to improve the performance of mass characterization. The authors' previous CAD system, which used the active contour segmentation, and morphological, textural, and spiculation features, has achieved promising results in mass characterization. The new CAD system is based on the level set method and includes two new types of image features related to the presence of microcalcifications with the mass and abruptness of the mass margin, and patient age. A linear discriminant analysis (LDA) classifier with stepwise feature selection was used to merge the extracted features into a classification score. The classification accuracy was evaluated using the area under the receiver operating characteristic curve. The authors' primary data set consisted of 427 biopsy-proven masses (200 malignant and 227 benign) in 909 regions of interest (ROIs) (451 malignant and 458 benign) from multiple mammographic views. Leave-one-case-out resampling was used for training and testing. The new CAD system based on the level set segmentation and the new mammographic feature space achieved a view-based A_z value of 0.83±0.01. The improvement compared to the previous CAD system was statistically significant (p=0.02). When patient age was included in the new CAD system, view-based and case-based A_z values were 0.85±0.01 and 0.87±0.02, respectively. The study also demonstrated the consistency of the newly developed CAD system by evaluating the statistics of the weights of the LDA classifiers in leave-one-case-out classification. Finally, an independent test on the publicly available digital database for screening
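The A_z figure of merit is the area under the ROC curve; for a finite set of scores it equals the Mann-Whitney statistic, i.e. the probability that a randomly chosen malignant case scores higher than a randomly chosen benign one. A self-contained sketch of that computation (generic, not the authors' evaluation code):

```python
def roc_auc(scores, labels):
    """Area under the ROC curve via midranks (Mann-Whitney U).

    scores: classifier outputs; labels: 1 = malignant, 0 = benign.
    Tied scores are handled with midranks.
    """
    pairs = sorted(zip(scores, labels))
    n = len(pairs)
    ranks = [0.0] * n
    i = 0
    while i < n:                       # assign midranks to tied scores
        j = i
        while j + 1 < n and pairs[j + 1][0] == pairs[i][0]:
            j += 1
        mid = (i + j) / 2 + 1          # 1-based midrank of the tie group
        for k in range(i, j + 1):
            ranks[k] = mid
        i = j + 1
    pos = sum(lab for _, lab in pairs)
    neg = n - pos
    rank_sum = sum(r for r, (_, lab) in zip(ranks, pairs) if lab == 1)
    return (rank_sum - pos * (pos + 1) / 2) / (pos * neg)

auc = roc_auc([0.1, 0.4, 0.35, 0.8], [0, 0, 1, 1])   # 0.75
```

The ±0.01 uncertainties quoted in the abstract would come from the distribution of such AUC estimates over the leave-one-case-out resamples.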

  6. GeneTopics - interpretation of gene sets via literature-driven topic models

    Science.gov (United States)

    2013-01-01

    Background Annotation of a set of genes is often accomplished through comparison to a library of labelled gene sets such as biological processes or canonical pathways. However, this approach might fail if the employed libraries are not up to date with the latest research, don't capture relevant biological themes or are curated at a different level of granularity than is required to appropriately analyze the input gene set. At the same time, the vast biomedical literature offers an unstructured repository of the latest research findings that can be tapped to provide thematic sub-groupings for any input gene set. Methods Our proposed method relies on a gene-specific text corpus and extracts commonalities between documents in an unsupervised manner using a topic model approach. We automatically determine the number of topics summarizing the corpus and calculate a gene relevancy score for each topic allowing us to eliminate non-specific topics. As a result we obtain a set of literature topics in which each topic is associated with a subset of the input genes providing directly interpretable keywords and corresponding documents for literature research. Results We validate our method based on labelled gene sets from the KEGG metabolic pathway collection and the genetic association database (GAD) and show that the approach is able to detect topics consistent with the labelled annotation. Furthermore, we discuss the results on three different types of experimentally derived gene sets, (1) differentially expressed genes from a cardiac hypertrophy experiment in mice, (2) altered transcript abundance in human pancreatic beta cells, and (3) genes implicated by GWA studies to be associated with metabolite levels in a healthy population. In all three cases, we are able to replicate findings from the original papers in a quick and semi-automated manner. 
Conclusions Our approach provides a novel way of automatically generating meaningful annotations for gene sets that are directly

  7. A hybrid interface tracking - level set technique for multiphase flow with soluble surfactant

    Science.gov (United States)

    Shin, Seungwon; Chergui, Jalel; Juric, Damir; Kahouadji, Lyes; Matar, Omar K.; Craster, Richard V.

    2018-04-01

    A formulation for soluble surfactant transport in multiphase flows recently presented by Muradoglu and Tryggvason (JCP 274 (2014) 737-757) [17] is adapted to the context of the Level Contour Reconstruction Method, LCRM, (Shin et al. IJNMF 60 (2009) 753-778, [8]) which is a hybrid method that combines the advantages of the Front-tracking and Level Set methods. Particularly close attention is paid to the formulation and numerical implementation of the surface gradients of surfactant concentration and surface tension. Various benchmark tests are performed to demonstrate the accuracy of different elements of the algorithm. To verify surfactant mass conservation, values for surfactant diffusion along the interface are compared with the exact solution for the problem of uniform expansion of a sphere. The numerical implementation of the discontinuous boundary condition for the source term in the bulk concentration is compared with the approximate solution. Surface tension forces are tested for Marangoni drop translation. Our numerical results for drop deformation in simple shear are compared with experiments and results from previous simulations. All benchmarking tests compare well with existing data thus providing confidence that the adapted LCRM formulation for surfactant advection and diffusion is accurate and effective in three-dimensional multiphase flows with a structured mesh. We also demonstrate that this approach applies easily to massively parallel simulations.
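For orientation, the interfacial surfactant balance and the Langmuir-type equation of state that this class of methods solves (standard forms; the exact bulk-exchange source term $\dot{S}_\Gamma$ varies between formulations):

```latex
\frac{\partial \Gamma}{\partial t} + \nabla_s \cdot (\Gamma\, \mathbf{u}_s)
= D_s \nabla_s^2 \Gamma + \dot{S}_\Gamma,
\qquad
\sigma = \sigma_s \left[ 1 + \beta_s \ln\!\left( 1 - \frac{\Gamma}{\Gamma_\infty} \right) \right]
```

Here $\Gamma$ is the interfacial surfactant concentration, $\mathbf{u}_s$ the interfacial velocity, $D_s$ the surface diffusivity, and $\Gamma_\infty$ the maximum packing concentration; the surface-tension gradients this equation of state produces drive the Marangoni stresses exercised in the drop-translation benchmark.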

  8. Natural setting of Japanese islands and geologic disposal of high-level waste

    International Nuclear Information System (INIS)

    Koide, Hitoshi

    1991-01-01

    The Japanese islands are a combination of arcuate islands along boundaries between four major plates: Eurasia, North America, Pacific and Philippine Sea plates. The interaction among the four plates formed complex geological structures which are basically patchworks of small blocks of land and sea-floor sediments piled up by the subduction of oceanic plates along the margin of the Eurasia continent. Although frequent earthquakes and volcanic eruptions clearly indicate active crustal deformation, the distribution of active faults and volcanoes is localized regionally in the Japanese islands. Crustal displacement faster than 1 mm/year takes place only in restricted regions near plate boundaries or close to major active faults. Volcanic activity is absent in the region between the volcanic front and the subduction zone. The site selection is especially important in Japan. The scenarios for the long-term performance assessment of high-level waste disposal are discussed with special reference to the geological setting of Japan. The long-term prediction of tectonic disturbance, evaluation of faults and fractures in rocks and estimation of long-term water-rock interaction are key issues in the performance assessment of the high-level waste disposal in the Japanese islands. (author)

  9. Physical property parameter set for modeling ICPP aqueous wastes with ASPEN electrolyte NRTL model

    International Nuclear Information System (INIS)

    Schindler, R.E.

    1996-09-01

    The aqueous waste evaporators at the Idaho Chemical Processing Plant (ICPP) are being modeled using ASPEN software. The ASPEN software calculates chemical and vapor-liquid equilibria with activity coefficients calculated using the electrolyte Non-Random Two Liquid (NRTL) model for the local excess Gibbs free energies of interactions between ions and molecules in solution. The use of the electrolyte NRTL model requires the determination of empirical parameters for the excess Gibbs free energies of the interactions between species in solution. This report covers the development of a set of parameters, from literature data, for the use of the electrolyte NRTL model with the major solutes in the ICPP aqueous wastes.
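As background, the molecular NRTL expression that the electrolyte variant builds on (standard form; the binary interaction parameters $\tau_{ji}$ and non-randomness factors $\alpha_{ji}$ are exactly the kind of empirical quantities such a report fits from literature data):

```latex
\frac{g^E}{RT} = \sum_i x_i\,
\frac{\sum_j x_j\, \tau_{ji}\, G_{ji}}{\sum_k x_k\, G_{ki}},
\qquad
G_{ji} = \exp(-\alpha_{ji}\, \tau_{ji})
```

Activity coefficients follow by differentiating $g^E$ with respect to mole numbers; the electrolyte model adds long-range Pitzer-Debye-Hückel and Born contributions on top of this local-composition term.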

  10. Separated set-systems and their geometric models

    Energy Technology Data Exchange (ETDEWEB)

    Danilov, Vladimir I; Koshevoy, Gleb A [Central Economics and Mathematics Institute, RAS, Moscow (Russian Federation); Karzanov, Aleksander V [Institute of Systems Analysis, Russian Academy of Sciences, Moscow (Russian Federation)

    2010-11-16

    This paper discusses strongly and weakly separated set-systems as well as rhombus tilings and wiring diagrams which are used to produce such systems. In particular, the Leclerc-Zelevinsky conjectures concerning weakly separated systems are proved. Bibliography: 54 titles.

  11. Generalized cost-effectiveness analysis for national-level priority-setting in the health sector

    Directory of Open Access Journals (Sweden)

    Edejer Tessa

    2003-12-01

    Full Text Available Abstract Cost-effectiveness analysis (CEA) is potentially an important aid to public health decision-making but, with some notable exceptions, its use and impact at the level of individual countries is limited. A number of potential reasons may account for this, among them technical shortcomings associated with the generation of current economic evidence, political expediency, social preferences and systemic barriers to implementation. As a form of sectoral CEA, Generalized CEA sets out to overcome a number of these barriers to the appropriate use of cost-effectiveness information at the regional and country level. Its application via WHO-CHOICE provides a new economic evidence base, as well as underlying methodological developments, concerning the cost-effectiveness of a range of health interventions for leading causes of, and risk factors for, disease. The estimated sub-regional costs and effects of different interventions provided by WHO-CHOICE can readily be tailored to the specific context of individual countries, for example by adjustment to the quantity and unit prices of intervention inputs (costs) or the coverage, efficacy and adherence rates of interventions (effectiveness). The potential usefulness of this information for health policy and planning is in assessing whether current intervention strategies represent an efficient use of scarce resources, and which of the potential additional interventions that are not yet implemented, or not implemented fully, should be given priority on the grounds of cost-effectiveness. Health policy-makers and programme managers can use results from WHO-CHOICE as a valuable input into the planning and prioritization of services at national level, as well as a starting point for additional analyses of the trade-off between the efficiency of interventions in producing health and their impact on other key outcomes such as reducing inequalities and improving the health of the poor.
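
    The core arithmetic behind such comparisons can be sketched as a ratio of incremental cost to incremental health gain relative to a common null scenario, with interventions then ranked by that ratio. The intervention names and numbers below are hypothetical, not WHO-CHOICE outputs:

```python
def rank_by_cost_effectiveness(interventions, null_cost=0.0, null_effect=0.0):
    """Rank interventions by cost per unit of health gain relative to a
    common 'null' (do-nothing) scenario, as in generalized CEA."""
    ratios = {name: (cost - null_cost) / (effect - null_effect)
              for name, (cost, effect) in interventions.items()}
    return sorted(ratios.items(), key=lambda item: item[1])

# hypothetical costs (USD) and healthy life-years gained per 1000 people
ranking = rank_by_cost_effectiveness({
    "bed nets": (50_000, 400),
    "new vaccine": (90_000, 300),
    "screening": (20_000, 50),
})
```

    The most cost-effective option (lowest cost per life-year gained) comes first in the ranking, which is the ordering a priority-setting exercise would start from.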

  12. Agenda Setting for Health Promotion: Exploring an Adapted Model for the Social Media Era.

    Science.gov (United States)

    Albalawi, Yousef; Sixsmith, Jane

    2015-01-01

    The foundation of best practice in health promotion is a robust theoretical base that informs design, implementation, and evaluation of interventions that promote the public's health. This study provides a novel contribution to health promotion through the adaptation of the agenda-setting approach in response to the contribution of social media. This exploration and proposed adaptation is derived from a study that examined the effectiveness of Twitter in influencing agenda setting among users in relation to road traffic accidents in Saudi Arabia. The proposed adaptations to the agenda-setting model to be explored reflect two levels of engagement: agenda setting within the social media sphere and the position of social media within classic agenda setting. This exploratory research aims to assess the veracity of the proposed adaptations on the basis of the hypotheses developed to test these two levels of engagement. To validate the hypotheses, we collected and analyzed data from two primary sources: Twitter activities and Saudi national newspapers. Keyword mentions served as indicators of agenda promotion; for Twitter, interactions were used to measure the process of agenda setting within the platform. The Twitter final dataset comprised 59,046 tweets and 38,066 users who contributed by tweeting, replying, or retweeting. Variables were collected for each tweet and user. In addition, 518 keyword mentions were recorded from six popular Saudi national newspapers. The results showed significant ratification of the study hypotheses at both levels of engagement that framed the proposed adaptations. The results indicate that social media facilitates the contribution of individuals in influencing agendas (individual users accounted for 76.29%, 67.79%, and 96.16% of retweet impressions, total impressions, and amplification multipliers, respectively), a component missing from traditional constructions of agenda-setting models. 
The influence of organizations on agenda setting is

  13. Agenda Setting for Health Promotion: Exploring an Adapted Model for the Social Media Era

    Science.gov (United States)

    2015-01-01

    Background The foundation of best practice in health promotion is a robust theoretical base that informs design, implementation, and evaluation of interventions that promote the public’s health. This study provides a novel contribution to health promotion through the adaptation of the agenda-setting approach in response to the contribution of social media. This exploration and proposed adaptation is derived from a study that examined the effectiveness of Twitter in influencing agenda setting among users in relation to road traffic accidents in Saudi Arabia. Objective The proposed adaptations to the agenda-setting model to be explored reflect two levels of engagement: agenda setting within the social media sphere and the position of social media within classic agenda setting. This exploratory research aims to assess the veracity of the proposed adaptations on the basis of the hypotheses developed to test these two levels of engagement. Methods To validate the hypotheses, we collected and analyzed data from two primary sources: Twitter activities and Saudi national newspapers. Keyword mentions served as indicators of agenda promotion; for Twitter, interactions were used to measure the process of agenda setting within the platform. The Twitter final dataset comprised 59,046 tweets and 38,066 users who contributed by tweeting, replying, or retweeting. Variables were collected for each tweet and user. In addition, 518 keyword mentions were recorded from six popular Saudi national newspapers. Results The results showed significant ratification of the study hypotheses at both levels of engagement that framed the proposed adaptations. The results indicate that social media facilitates the contribution of individuals in influencing agendas (individual users accounted for 76.29%, 67.79%, and 96.16% of retweet impressions, total impressions, and amplification multipliers, respectively), a component missing from traditional constructions of agenda-setting models. 
The influence

  14. Patient- and population-level health consequences of discontinuing antiretroviral therapy in settings with inadequate HIV treatment availability

    Directory of Open Access Journals (Sweden)

    Kimmel April D

    2012-09-01

    Full Text Available Abstract Background In resource-limited settings, HIV budgets are flattening or decreasing. A policy of discontinuing antiretroviral therapy (ART) after HIV treatment failure was modeled to highlight trade-offs among competing policy goals of optimizing individual and population health outcomes. Methods In settings with two available ART regimens, we assessed two strategies: (1) continue ART after second-line failure (Status Quo) and (2) discontinue ART after second-line failure (Alternative). A computer model simulated outcomes for a single cohort of newly detected, HIV-infected individuals. Projections were fed into a population-level model allowing multiple cohorts to compete for ART with constraints on treatment capacity. In the Alternative strategy, discontinuation of second-line ART occurred upon detection of antiretroviral failure, specified by WHO guidelines. Those discontinuing failed ART experienced an increased risk of AIDS-related mortality compared to those continuing ART. Results At the population level, the Alternative strategy increased the mean number initiating ART annually by 1,100 individuals (+18.7%) to 6,980 compared to the Status Quo. More individuals initiating ART under the Alternative strategy increased total life-years by 15,000 (+2.8%) to 555,000, compared to the Status Quo. Although more individuals received treatment under the Alternative strategy, life expectancy for those treated decreased by 0.7 years (−8.0%) to 8.1 years compared to the Status Quo. In a cohort of treated patients only, 600 more individuals (+27.1%) died by 5 years under the Alternative strategy compared to the Status Quo. Results were sensitive to the timing of detection of ART failure, number of ART regimens, and treatment capacity. Although we believe the results robust in the short-term, this analysis reflects settings where HIV case detection occurs late in the disease course and treatment capacity and the incidence of newly detected patients are

  15. Multi-level decision making models, methods and applications

    CERN Document Server

    Zhang, Guangquan; Gao, Ya

    2015-01-01

    This monograph presents new developments in multi-level decision-making theory, technique and method in both modeling and solution issues. It especially presents how a decision support system can support managers in reaching a solution to a multi-level decision problem in practice. This monograph combines decision theories, methods, algorithms and applications effectively. It discusses in detail the models and solution algorithms of each issue of bi-level and tri-level decision-making, such as multi-leaders, multi-followers, multi-objectives, rule-set-based, and fuzzy parameters. Potential readers include organizational managers and practicing professionals, who can use the methods and software provided to solve their real decision problems; PhD students and researchers in the areas of bi-level and multi-level decision-making and decision support systems; and students at advanced undergraduate or master's level in information systems, business administration, or the application of computer science.

  16. Accurate prediction of complex free surface flow around a high speed craft using a single-phase level set method

    Science.gov (United States)

    Broglia, Riccardo; Durante, Danilo

    2017-11-01

    This paper focuses on the analysis of a challenging free surface flow problem involving a surface vessel moving at high speeds, or planing. The investigation is performed using a general purpose high Reynolds number free surface solver developed at CNR-INSEAN. The methodology is based on a second order finite volume discretization of the unsteady Reynolds-averaged Navier-Stokes equations (Di Mascio et al. in A second order Godunov-type scheme for naval hydrodynamics, Kluwer Academic/Plenum Publishers, Dordrecht, pp 253-261, 2001; Proceedings of 16th international offshore and polar engineering conference, San Francisco, CA, USA, 2006; J Mar Sci Technol 14:19-29, 2009); air/water interface dynamics is accurately modeled by a non-standard level set approach (Di Mascio et al. in Comput Fluids 36(5):868-886, 2007a), known as the single-phase level set method. In this algorithm the governing equations are solved only in the water phase, whereas the numerical domain in the air phase is used for a suitable extension of the fluid dynamic variables. The level set function is used to track the free surface evolution; dynamic boundary conditions are enforced directly on the interface. This approach allows accurate prediction of the evolution of the free surface even in the presence of violent breaking-wave phenomena, maintaining the interface sharp, without any need to smear out the fluid properties across the two phases. This paper is aimed at the prediction of the complex free-surface flow field generated by a deep-V planing boat at medium and high Froude numbers (from 0.6 up to 1.2). In the present work, the planing hull is treated as a two-degree-of-freedom rigid object. The flow field is characterized by the presence of thin water sheets, several energetic breaking waves and plunging events. The computational results include convergence of the trim angle, sinkage and resistance under grid refinement; high-quality experimental data are used for the purposes of validation, allowing to
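
    The interface-tracking idea at the heart of the level set approach can be sketched in one dimension: advect a signed-distance function with the flow and read the free surface off its zero crossing. The following is a minimal first-order upwind sketch under an assumed constant velocity, not the CNR-INSEAN solver:

```python
import numpy as np

def advect_level_set(phi, u, dx, dt, steps):
    """First-order upwind advection of a level-set function phi by a
    constant velocity u on a periodic grid; the interface is phi = 0."""
    phi = phi.copy()
    for _ in range(steps):
        if u >= 0:
            dphi = (phi - np.roll(phi, 1)) / dx   # backward difference
        else:
            dphi = (np.roll(phi, -1) - phi) / dx  # forward difference
        phi -= dt * u * dphi
    return phi

# signed distance to an interface initially at x = 0.5
x = np.linspace(0.0, 2.0, 201)
phi0 = x - 0.5
# after 100 steps of dt = 0.005 at u = 1, the interface should sit near x = 1.0
phi = advect_level_set(phi0, u=1.0, dx=x[1] - x[0], dt=0.005, steps=100)
```

    Because the initial profile is linear, first-order upwinding transports it essentially exactly here; for the curved, breaking interfaces discussed in the paper, higher-order schemes and the single-phase extension of variables are what keep the interface sharp.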

  17. A computational model for three-dimensional jointed media with a single joint set

    International Nuclear Information System (INIS)

    Koteras, J.R.

    1994-02-01

    This report describes a three-dimensional model for jointed rock or other media with a single set of joints. The joint set consists of evenly spaced joint planes. The normal joint response is nonlinear elastic and is based on a rational polynomial. Joint shear stress is treated as linear elastic in shear stress versus slip displacement until a critical stress level governed by a Mohr-Coulomb friction criterion is attained. The three-dimensional model represents an extension of a two-dimensional, multi-joint model that has been in use for several years. Although most of the concepts in the two-dimensional model translate in a straightforward manner to three dimensions, the concept of slip on the joint planes becomes more complex in three dimensions. While slip in two dimensions can be treated as a scalar quantity, it must be treated as a vector in the joint plane in three dimensions. For the three-dimensional model proposed here, the slip direction is assumed to be the direction of maximum principal strain in the joint plane. Five test problems are presented to verify the correctness of the computational implementation of the model.
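
    The slip criterion described above can be sketched as a simple Mohr-Coulomb check; the cohesion and friction-angle values below are hypothetical, and this is not the report's implementation:

```python
import math

def joint_slips(tau, sigma_n, cohesion, friction_angle_deg):
    """Mohr-Coulomb slip check for a joint plane: slip occurs once the
    shear stress magnitude exceeds cohesion + sigma_n * tan(phi).
    Normal stress sigma_n is taken positive in compression."""
    tau_crit = cohesion + sigma_n * math.tan(math.radians(friction_angle_deg))
    return abs(tau) > tau_crit
```

    With a cohesion of 1, a normal stress of 2 and a friction angle of 30 degrees, the critical shear stress is about 2.15, so a shear stress of 5 slips while a shear stress of 1 does not.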

  18. Paired fuzzy sets and other opposite-based models

    DEFF Research Database (Denmark)

    Montero, Javier; Gómez, Daniel; Tinguaro Rodríguez, J.

    2016-01-01

    In this paper we stress the relevance of those fuzzy models that impose a couple of simultaneous views in order to represent concepts. In particular, we point out that the basic model to start with should contain at least two somehow opposite valuations plus a number of neutral concepts that are ...

  19. Fluoroscopy-guided insertion of nasojejunal tubes in children - setting local diagnostic reference levels

    International Nuclear Information System (INIS)

    Vitta, Lavanya; Raghavan, Ashok; Sprigg, Alan; Morrell, Rachel

    2009-01-01

    Little is known about the radiation burden from fluoroscopy-guided insertions of nasojejunal tubes (NJTs) in children. There are no recommended or published standards of diagnostic reference levels (DRLs) available. To establish reference dose area product (DAP) levels for the fluoroscopy-guided insertion of nasojejunal tubes as a basis for setting DRLs for children. In addition, we wanted to assess our local practice and determine the success and complication rates associated with this procedure. Children who had NJT insertion procedures were identified retrospectively from the fluoroscopy database. The age of the child at the time of the procedure, DAP, screening time, outcome of the procedure, and any complications were recorded for each procedure. As the radiation dose depends on the size of the child, the children were assigned to three different age groups. The sample size, mean, median and third-quartile DAPs were calculated for each group. The third-quartile values were used to establish the DRLs. Of 186 procedures performed, 172 were successful on the first attempt. These were performed in a total of 43 children with 60% having multiple insertions over time. The third-quartile DAPs were as follows for each age group: 0-12 months, 2.6 cGy·cm²; 1-7 years, 2.45 cGy·cm²; >8 years, 14.6 cGy·cm². High DAP readings were obtained in the 0-12 months (n = 4) and >8 years (n = 2) age groups. No immediate complications were recorded. Fluoroscopy-guided insertion of NJTs is a highly successful procedure in a selected population of children and is associated with a low complication rate. The radiation dose per procedure is relatively low. (orig.)
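
    The third-quartile convention used above to set local DRLs amounts to taking the 75th percentile of the DAP readings within each age group. A minimal sketch, with made-up readings:

```python
import numpy as np

def local_drl(dap_readings):
    """Local diagnostic reference level: third quartile (75th percentile)
    of dose-area product readings for one age group."""
    return float(np.percentile(dap_readings, 75))

# hypothetical DAP readings in cGy*cm^2 for one age group
drl = local_drl([1.0, 2.0, 3.0, 4.0])
```

    With the default linear interpolation used by NumPy, the third quartile of these four readings is 3.25.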

  20. Evaluation of two-phase flow solvers using Level Set and Volume of Fluid methods

    Science.gov (United States)

    Bilger, C.; Aboukhedr, M.; Vogiatzaki, K.; Cant, R. S.

    2017-09-01

    Two principal methods have been used to simulate the evolution of two-phase immiscible flows of liquid and gas separated by an interface. These are the Level-Set (LS) method and the Volume of Fluid (VoF) method. Both methods attempt to represent the very sharp interface between the phases and to deal with the large jumps in physical properties associated with it. Both methods have their own strengths and weaknesses. For example, the VoF method is known to be prone to excessive numerical diffusion, while the basic LS method has some difficulty in conserving mass. Major progress has been made in remedying these deficiencies, and both methods have now reached a high level of physical accuracy. Nevertheless, there remains an issue, in that each of these methods has been developed by different research groups, using different codes, and most importantly the implementations have been fine-tuned to tackle different applications. Thus, it remains unclear what the remaining advantages and drawbacks of each method are relative to the other, and what might be the optimal way to unify them. In this paper, we address this gap by performing a direct comparison of two current state-of-the-art variations of these methods (LS: RCLSFoam and VoF: interPore) implemented in the same code (OpenFoam). We subject both methods to a pair of benchmark test cases while using the same numerical meshes to examine a) the accuracy of curvature representation, b) the effect of tuning parameters, c) the ability to minimise spurious velocities and d) the ability to tackle fluids with very different densities. For each method, one of the test cases is chosen to be fairly benign while the other test case is expected to present a greater challenge. The results indicate that both methods can be made to work well on both test cases, while displaying different sensitivity to the relevant parameters.

  1. Measurement of thermally ablated lesions in sonoelastographic images using level set methods

    Science.gov (United States)

    Castaneda, Benjamin; Tamez-Pena, Jose Gerardo; Zhang, Man; Hoyt, Kenneth; Bylund, Kevin; Christensen, Jared; Saad, Wael; Strang, John; Rubens, Deborah J.; Parker, Kevin J.

    2008-03-01

    The capability of sonoelastography to detect lesions based on elasticity contrast can be applied to monitor the creation of thermally ablated lesions. Currently, segmentation of lesions depicted in sonoelastographic images is performed manually, which can be a time-consuming process and prone to significant intra- and inter-observer variability. This work presents a semi-automated segmentation algorithm for sonoelastographic data. The user starts by planting a seed in the perceived center of the lesion. Fast marching methods use this information to create an initial estimate of the lesion. Subsequently, level set methods refine its final shape by attaching the segmented contour to edges in the image while maintaining smoothness. The algorithm is applied to in vivo sonoelastographic images from twenty-five thermally ablated lesions created in porcine livers. The estimated area is compared to results from manual segmentation and gross pathology images. Results show that the algorithm outperforms manual segmentation in accuracy and inter- and intra-observer variability. The processing time per image is significantly reduced.

  2. Adjusting for Confounding in Early Postlaunch Settings: Going Beyond Logistic Regression Models.

    Science.gov (United States)

    Schmidt, Amand F; Klungel, Olaf H; Groenwold, Rolf H H

    2016-01-01

    Postlaunch data on medical treatments can be analyzed to explore adverse events or relative effectiveness in real-life settings. These analyses are often complicated by the number of potential confounders and the possibility of model misspecification. We conducted a simulation study to compare the performance of logistic regression, propensity score, disease risk score, and stabilized inverse probability weighting methods to adjust for confounding. Model misspecification was induced in the independent derivation dataset. We evaluated performance using relative bias and confidence interval coverage of the true effect, among other metrics. At low events per coefficient (1.0 and 0.5), the logistic regression estimates had a large relative bias (greater than -100%). Bias of the disease risk score estimates was at most 13.48% and 18.83%. For the propensity score model, this was 8.74% and >100%, respectively. At events per coefficient of 1.0 and 0.5, inverse probability weighting frequently failed or reduced to a crude regression, resulting in biases of -8.49% and 24.55%. Coverage of logistic regression estimates became less than the nominal level at events per coefficient ≤5. For the disease risk score, inverse probability weighting, and propensity score, coverage became less than nominal at events per coefficient ≤2.5, ≤1.0, and ≤1.0, respectively. Bias of misspecified disease risk score models was 16.55%. In settings with low events/exposed subjects per coefficient, disease risk score methods can be useful alternatives to logistic regression models, especially when propensity score models cannot be used. Despite the better performance of disease risk score methods relative to logistic regression and propensity score models in settings with few events per coefficient, bias and coverage still deviated from nominal.
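
    Of the methods compared above, stabilized inverse probability weighting has the simplest closed form: each treated subject is weighted by the marginal treatment probability over their estimated propensity score, and each untreated subject by the corresponding complements. A sketch assuming the propensity scores have already been estimated elsewhere:

```python
import numpy as np

def stabilized_ipw(treated, propensity):
    """Stabilized inverse probability weights: P(treated) / e(x) for the
    treated and (1 - P(treated)) / (1 - e(x)) for the untreated, where
    e(x) is the estimated propensity score."""
    treated = np.asarray(treated, dtype=bool)
    p = np.asarray(propensity, dtype=float)
    p_treat = treated.mean()
    return np.where(treated, p_treat / p, (1 - p_treat) / (1 - p))
```

    When the estimated propensities equal the marginal treatment probability (no confounding), every weight is 1 and the weighted analysis reduces to the crude one, which is the behavior the abstract alludes to at very low events per coefficient.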

  3. Computational Fluid Dynamics Analysis of Cold Plasma Plume Mixing with Blood Using Level Set Method Coupled with Heat Transfer

    Directory of Open Access Journals (Sweden)

    Mehrdad Shahmohammadi Beni

    2017-06-01

    Full Text Available Cold plasmas were proposed for treatment of leukemia. In the present work, conceptual designs of mixing chambers that increased the contact between the two fluids (plasma and blood) through addition of obstacles within rectangular-block-shaped chambers were proposed, and the dynamic mixing between the plasma and blood was studied using the level set method coupled with heat transfer. Enhancement of mixing between blood and plasma in the presence of obstacles was demonstrated. Continuous tracking of fluid mixing with determination of temperature distributions was enabled by the present model, which would be a useful tool for future development of cold plasma devices for treatment of blood-related diseases such as leukemia.

  4. DESIRE FOR LEVELS. Background study for the policy document "Setting Environmental Quality Standards for Water and Soil"

    NARCIS (Netherlands)

    van de Meent D; Aldenberg T; Canton JH; van Gestel CAM; Slooff W

    1990-01-01

    The report provides scientific support for setting environmental quality objectives for water, sediment and soil. Quality criteria are not set in this report. Only options for decisions are given. The report is restricted to the derivation of the 'maximally acceptable risk' levels (MAR)

  5. Model based decision support system of operating settings for MMAT nozzles

    Directory of Open Access Journals (Sweden)

    Fritz Bradley Keith

    2016-04-01

    Full Text Available Droplet size, which is affected by nozzle type, nozzle setup and operation, and spray solution, is one of the most critical factors influencing spray performance, environmental pollution and food safety, and must be considered as part of any application scenario. Characterizing spray nozzles can be a time-consuming and expensive proposition if the entire operational space (all combinations of spray pressure and orifice size, which influence flow rate) is to be evaluated. This research proposes a structured experimental design that allows for the development of computational models for droplet size based on any combination of a nozzle's potential operational settings. The developed droplet size determination model can be used as a Decision Support System (DSS) for precise selection of sprayer working parameters to adapt to local field scenarios. Five nozzle types (designs) were evaluated across their complete range of orifice sizes (flow rates) and spray pressures using a response surface experimental design. Several of the models showed high-quality fits of the modeled to the measured data, while several did not as a result of the lack of a significant effect from either orifice size (flow rate) or spray pressure. The computational models were integrated into a spreadsheet-based user interface for ease of use. The proposed experimental design provides for efficient nozzle evaluations and the development of computational models that allow for the determination of the droplet size spectrum and spray classification for any combination of a given nozzle's operating settings. The proposed DSS will allow for the ready assessment and modification of a sprayer's performance based on the operational settings, to ensure the application is made following recommendations on plant protection product (PPP) labels.
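
    The response-surface idea described above can be sketched as an ordinary least-squares fit of a full quadratic in pressure and orifice size. The variable names and the quadratic form are assumptions for illustration, not the paper's actual model:

```python
import numpy as np

def fit_droplet_rsm(pressure, orifice, dv50):
    """Fit a second-order response surface
        dv50 ~ b0 + b1*P + b2*O + b3*P^2 + b4*O^2 + b5*P*O
    by least squares and return the coefficient vector."""
    P, O, y = map(np.asarray, (pressure, orifice, dv50))
    X = np.column_stack([np.ones_like(P), P, O, P**2, O**2, P * O])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

# a 3x3 factorial of pressure and orifice levels supports the full quadratic
P = np.repeat([1.0, 2.0, 3.0], 3)
O = np.tile([1.0, 2.0, 3.0], 3)
```

    Fitting on a factorial design like this, and then evaluating the polynomial at arbitrary pressure/orifice combinations, is what lets a spreadsheet DSS interpolate droplet size across the whole operational space.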

  6. Setting up measuring campaigns for integrated wastewater modelling

    NARCIS (Netherlands)

    Vanrolleghem, P.A.; Schilling, W.; Rauch, W.; Krebs, P.; Aalderink, R.H.

    1999-01-01

    The steps of calibration/confirmation of models in a suggested 11-step procedure for analysis, planning and implementation of integrated urban wastewater management systems are focused upon in this paper. Based on ample experience obtained in comprehensive investigations throughout Europe

  7. Increasing Free Throw Accuracy through Behavior Modeling and Goal Setting.

    Science.gov (United States)

    Erffmeyer, Elizabeth S.

    A two-year behavior-modeling training program focusing on attention processes, retention processes, motor reproduction, and motivation processes was implemented to increase the accuracy of free throw shooting for a varsity intercollegiate women's basketball team. The training included specific learning keys, progressive relaxation, mental…

  8. Multiatlas segmentation of thoracic and abdominal anatomy with level set-based local search.

    Science.gov (United States)

    Schreibmann, Eduard; Marcus, David M; Fox, Tim

    2014-07-08

    Segmentation of organs at risk (OARs) remains one of the most time-consuming tasks in radiotherapy treatment planning. Atlas-based segmentation methods using single templates have emerged as a practical approach to automate the process for brain or head and neck anatomy, but pose significant challenges in regions where large interpatient variations are present. We show that significant changes are needed to autosegment thoracic and abdominal datasets by combining multi-atlas deformable registration with a level set-based local search. Segmentation is hierarchical, with a first stage detecting bulk organ location, and a second step adapting the segmentation to fine details present in the patient scan. The first stage is based on warping multiple presegmented templates to the new patient anatomy using a multimodality deformable registration algorithm able to cope with changes in scanning conditions and artifacts. These segmentations are compacted into a probabilistic map of organ shape using the STAPLE algorithm. Final segmentation is obtained by adjusting the probability map for each organ type, using customized combinations of delineation filters exploiting prior knowledge of organ characteristics. Validation is performed by comparing automated and manual segmentation using the Dice coefficient, measured at an average of 0.971 for the aorta, 0.869 for the trachea, 0.958 for the lungs, 0.788 for the heart, 0.912 for the liver, 0.884 for the kidneys, 0.888 for the vertebrae, 0.863 for the spleen, and 0.740 for the spinal cord. Accurate atlas segmentation for abdominal and thoracic regions can be achieved with the use of a multi-atlas and per-structure refinement strategy. To improve clinical workflow and efficiency, the algorithm was embedded in a software service, applying the algorithm automatically on acquired scans without any user interaction.
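
    The Dice coefficient used above for validation is twice the overlap of two binary masks divided by the sum of their sizes. A minimal sketch:

```python
import numpy as np

def dice(a, b):
    """Dice overlap between two binary masks: 2|A n B| / (|A| + |B|).
    Two empty masks are defined here to overlap perfectly."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0
```

    Identical masks score 1.0 and disjoint masks score 0.0, so the per-organ averages quoted above (e.g. 0.971 for the aorta, 0.740 for the spinal cord) can be read directly as overlap quality.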

  9. CT liver volumetry using geodesic active contour segmentation with a level-set algorithm

    Science.gov (United States)

    Suzuki, Kenji; Epstein, Mark L.; Kohlbrenner, Ryan; Obajuluwa, Ademola; Xu, Jianwu; Hori, Masatoshi; Baron, Richard

    2010-03-01

    Automatic liver segmentation on CT images is challenging because the liver often abuts other organs of a similar density. Our purpose was to develop an accurate automated liver segmentation scheme for measuring liver volumes. We developed an automated volumetry scheme for the liver in CT based on a 5-step schema. First, an anisotropic smoothing filter was applied to portal-venous phase CT images to remove noise while preserving the liver structure, followed by an edge enhancer to enhance the liver boundary. By using the boundary-enhanced image as a speed function, a fast-marching algorithm generated an initial surface that roughly estimated the liver shape. A geodesic-active-contour segmentation algorithm coupled with level-set contour evolution refined the initial surface so as to more precisely fit the liver boundary. The liver volume was calculated based on the refined liver surface. Hepatic CT scans of eighteen prospective liver donors were obtained under a liver transplant protocol with a multi-detector CT system. Automated liver volumes obtained were compared with those manually traced by a radiologist, used as the "gold standard." The mean liver volume obtained with our scheme was 1,520 cc, whereas the mean manual volume was 1,486 cc, with a mean absolute difference of 104 cc (7.0%). CT liver volumetry based on an automated scheme agreed excellently with "gold standard" manual volumetry (intra-class correlation coefficient was 0.95) with no statistically significant difference (p(F<=f)=0.32), and required substantially less completion time. Our automated scheme provides an efficient and accurate way of measuring liver volumes.

  10. Shape Reconstruction of Thin Electromagnetic Inclusions via Boundary Measurements: Level-Set Method Combined with the Topological Derivative

    Directory of Open Access Journals (Sweden)

    Won-Kwang Park

    2013-01-01

    Full Text Available An inverse problem for reconstructing arbitrary-shaped thin penetrable electromagnetic inclusions concealed in a homogeneous material is considered in this paper. For this purpose, the level-set evolution method is adopted. The topological derivative concept is incorporated in order to evaluate the evolution speed of the level-set functions. The results of the corresponding numerical simulations with and without noise are presented in this paper.

  11. Improving district level health planning and priority setting in Tanzania through implementing accountability for reasonableness framework

    DEFF Research Database (Denmark)

    Maluka, Stephen; Kamuzora, Peter; Sebastián, Miguel San

    2010-01-01

    In 2006, researchers and decision-makers launched a five-year project - Response to Accountable Priority Setting for Trust in Health Systems (REACT) - to improve planning and priority-setting through implementing the Accountability for Reasonableness framework in Mbarali District, Tanzania...

  12. Setting up measuring campaigns for integrated wastewater modelling

    DEFF Research Database (Denmark)

    Vanrolleghem, P.A.; Schilling, W.; Rauch, Wolfgang

    1999-01-01

    The steps of calibration/confirmation of models in a suggested 11-step procedure for analysis, planning and implementation of integrated urban wastewater management systems are focused upon in this paper. Based on ample experience obtained in comprehensive investigations throughout Europe recommen...... problems related to suspended solids, specific contaminants, hygienic hazards and total pollutant loss illustrate the recommendations presented. (C) 1999 IAWQ Published by Elsevier Science Ltd. All rights reserved....

  13. The use of gravity models in setting and location analysis

    Directory of Open Access Journals (Sweden)

    Zbigniew Drewniak

    2014-12-01

    Full Text Available The article discusses gravity models as an example of a tool that helps to analyze location and market coverage. In particular, Reilly's law of retail gravitation is presented in detail as a milestone. The discussion is supported by calculations concerning two cities, Torun and Bydgoszcz, and their impact on the shopping preferences of inhabitants of neighboring places. These tools are mainly used in logistics, but also in marketing, advertising and sales.
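
    Reilly's law of retail gravitation discussed in this record has a simple closed form: the trade-area boundary between cities A and B lies at distance d_AB / (1 + sqrt(P_A / P_B)) from city B. A minimal sketch in Python; the populations and inter-city distance below are placeholder values, not the actual Torun/Bydgoszcz figures:

```python
import math

def reilly_breakpoint(distance_ab, pop_a, pop_b):
    """Distance of the trade-area boundary from city B along the A-B axis,
    per Reilly's law of retail gravitation (pop_a, pop_b are populations)."""
    return distance_ab / (1 + math.sqrt(pop_a / pop_b))

# Illustrative (not actual) figures for two cities ~45 km apart
d = reilly_breakpoint(45.0, 350_000, 200_000)
print(round(d, 1))   # boundary sits closer to the smaller city
```

With equal populations the boundary falls exactly midway, which is a quick sanity check on the formula.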

  14. Setting up a hydrological model based on global data for the Ayeyarwady basin in Myanmar

    Science.gov (United States)

    ten Velden, Corine; Sloff, Kees; Nauta, Tjitte

    2017-04-01

    The use of global datasets in local hydrological modelling can be of great value, as it opens up the possibility of including areas where local data are sparse or unavailable. In hydrological modelling the existence of both static physical data, such as elevation and land use, and dynamic meteorological data, such as precipitation and temperature, is essential, but such data are often difficult to obtain at the local level. For the Ayeyarwady catchment in Myanmar a distributed hydrological model (Wflow: https://github.com/openstreams/wflow) was set up with only global datasets, as part of a water resources study. Myanmar is an emerging economy, which has only recently become more receptive to foreign influences. It has a very limited hydrometeorological measurement network, with large spatial and temporal gaps, and data that are of uncertain quality and difficult to obtain. The hydrological model was thus set up based on resampled versions of the SRTM digital elevation model, the GlobCover land cover dataset and the HWSD soil dataset. Three global meteorological datasets were assessed and compared for use in the hydrological model: TRMM, WFDEI and MSWEP. The meteorological datasets were assessed based on their conformity with several precipitation station measurements, and the overall model performance was assessed by calculating the NSE and RVE based on discharge measurements of several gauging stations. The model was run for the period 1979-2012 on a daily time step, and the results show an acceptable applicability of the global datasets in the hydrological model. The WFDEI forcing dataset gave the best results, with an NSE of 0.55 at the outlet of the model and an RVE of 8.5%, calculated over the calibration period 2006-2012. As a general trend, the modelled discharge tends to be underestimated at the upstream stations and slightly overestimated at the downstream stations.
The quality of the discharge measurements
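
    The two skill scores used in this record, Nash-Sutcliffe efficiency (NSE) and relative volume error (RVE), are simple to compute from paired observed/simulated discharge series. A minimal sketch with made-up numbers, not the Ayeyarwady data:

```python
def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit, 0 means the model is
    no better than predicting the observed mean."""
    mean_obs = sum(obs) / len(obs)
    sse = sum((o - s) ** 2 for o, s in zip(obs, sim))
    var = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - sse / var

def rve(obs, sim):
    """Relative volume error in percent (positive = overestimated volume)."""
    return 100.0 * (sum(sim) - sum(obs)) / sum(obs)

obs = [10.0, 12.0, 15.0, 11.0, 9.0]   # invented discharge observations
sim = [9.5, 12.5, 14.0, 11.5, 10.0]   # invented model output
print(round(nse(obs, sim), 3), round(rve(obs, sim), 2))
```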

  15. Dynamic sensitivity analysis of long running landslide models through basis set expansion and meta-modelling

    Science.gov (United States)

    Rohmer, Jeremy

    2016-04-01

    Predicting the temporal evolution of landslides is typically supported by numerical modelling. Dynamic sensitivity analysis aims at assessing the influence of the landslide properties on the time-dependent predictions (e.g., time series of landslide displacements). Yet two major difficulties arise: 1. Global sensitivity analysis requires running the landslide model many times (> 1000), which may become impracticable when a single model run is computationally expensive (> several hours); 2. Landslide model outputs are not scalar but functions of time, i.e. they are n-dimensional vectors with n usually ranging from 100 to 1000. In this article, I explore the use of a basis set expansion, such as principal component analysis, to reduce the output dimensionality to a few components, each of them interpreted as a dominant mode of variation in the overall structure of the temporal evolution. The computationally intensive calculation of the Sobol' indices for each of these components is then achieved through meta-modelling, i.e. by replacing the landslide model by a "costless-to-evaluate" approximation (e.g., a projection pursuit regression model). The methodology combining "basis set expansion - meta-model - Sobol' indices" is then applied to the La Frasse landslide to investigate the dynamic sensitivity of the surface horizontal displacements to the slip surface properties during pore pressure changes. I show how to extract information on the sensitivity of each main mode of temporal behaviour using a limited number (a few tens) of long running simulations. In particular, I identify the parameters that trigger the occurrence of a turning point marking a shift between a regime of low landslide displacements and one of high displacements.
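
    The basis set expansion step in this record can be sketched with a plain SVD/PCA: stack the ensemble of simulated time series, centre them, and keep the leading modes; the Sobol' indices would then be computed per mode via a meta-model (not shown here). A toy example with a hypothetical two-parameter displacement model, not the La Frasse simulations:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy ensemble: 50 model runs, each a 200-step "displacement" time series
# generated by two scalar parameters (a trend weight and a cycle weight)
t = np.linspace(0, 1, 200)
runs = np.array([a * t + b * np.sin(2 * np.pi * t)
                 for a, b in rng.uniform(0.5, 1.5, size=(50, 2))])

# Basis set expansion via SVD (equivalent to PCA on the centred outputs)
mean_curve = runs.mean(axis=0)
U, S, Vt = np.linalg.svd(runs - mean_curve, full_matrices=False)
scores = U * S                       # per-run coordinates on each mode
var_explained = S**2 / np.sum(S**2)

# Two generating parameters -> two dominant modes capture nearly all variance,
# so downstream sensitivity analysis only needs two scalar outputs per run
print(round(float(var_explained[:2].sum()), 4))
```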

  16. Causal Inference and Model Selection in Complex Settings

    Science.gov (United States)

    Zhao, Shandong

    Propensity score methods have become part of the standard toolkit for applied researchers who wish to ascertain causal effects from observational data. While they were originally developed for binary treatments, several researchers have proposed generalizations of the propensity score methodology for non-binary treatment regimes. In this article, we first review three main methods that generalize propensity scores in this direction, namely, inverse propensity weighting (IPW), the propensity function (P-FUNCTION), and the generalized propensity score (GPS), along with recent extensions of the GPS that aim to improve its robustness. We compare the assumptions, theoretical properties, and empirical performance of these methods. We propose three new methods that provide robust causal estimation based on the P-FUNCTION and GPS. While our proposed P-FUNCTION-based estimator performs well, we generally advise caution in that all available methods can be biased by model misspecification and extrapolation. In a related line of research, we consider adjustment for posttreatment covariates in causal inference. Even in a randomized experiment, observations might have different compliance performance under treatment and control assignment. This posttreatment covariate cannot be adjusted for using standard statistical methods. We review the principal stratification framework, which allows for modeling this effect as part of its Bayesian hierarchical models. We generalize the current model to add the possibility of adjusting for pretreatment covariates. We also propose a new estimator of the average treatment effect over the entire population. In a third line of research, we discuss the spectral line detection problem in high energy astrophysics. We carefully review how this problem can be statistically formulated as a precise hypothesis test with a point null hypothesis, why a usual likelihood ratio test does not apply to a problem of this nature, and a doable fix to correctly
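
    The IPW idea this record reviews can be sketched for the binary-treatment case: weight each outcome by the inverse probability of the treatment actually received. In the toy simulation below the true propensity is known; in practice it would be estimated (e.g., by logistic regression), and all numbers are invented:

```python
import random

random.seed(1)

def simulate(n=20000):
    """Synthetic observational data: covariate x drives both the treatment
    probability and the outcome, so a naive mean difference is confounded."""
    data = []
    for _ in range(n):
        x = random.random()
        e = 0.2 + 0.6 * x                    # true propensity P(T=1 | x)
        t = 1 if random.random() < e else 0
        y = 2.0 * t + 3.0 * x + random.gauss(0, 0.5)   # true ATE = 2.0
        data.append((e, t, y))
    return data

def ipw_ate(data):
    """Horvitz-Thompson style IPW estimate of the average treatment effect."""
    n = len(data)
    treated = sum(t * y / e for e, t, y in data) / n
    control = sum((1 - t) * y / (1 - e) for e, t, y in data) / n
    return treated - control

est = ipw_ate(simulate())
print(round(est, 2))   # should land near the true ATE of 2.0
```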

  17. An evaluation of four crop:weed competition models using a common data set

    NARCIS (Netherlands)

    Deen, W.; Cousens, R.; Warringa, J.; Bastiaans, L.; Carberry, P.; Rebel, K.; Riha, S.; Murphy, C.; Benjamin, L.R.; Cloughley, C.; Cussans, J.; Forcella, F.

    2003-01-01

    To date, several crop : weed competition models have been developed. Developers of the various models were invited to compare model performance using a common data set. The data set consisted of wheat and Lolium rigidum grown in monoculture and mixtures under dryland and irrigated conditions.

  18. WHEN MODEL MEETS REALITY – A REVIEW OF SPAR LEVEL 2 MODEL AGAINST FUKUSHIMA ACCIDENT

    Energy Technology Data Exchange (ETDEWEB)

    Zhegang Ma

    2013-09-01

    The Standardized Plant Analysis Risk (SPAR) models are a set of probabilistic risk assessment (PRA) models used by the Nuclear Regulatory Commission (NRC) to evaluate the risk of operations at U.S. nuclear power plants and provide inputs to the risk-informed regulatory process. A small number of SPAR Level 2 models have been developed, mostly for feasibility study purposes. They extend the Level 1 models to include containment systems, group plant damage states, and model containment phenomenology and accident progression in containment event trees. A severe earthquake and tsunami hit the eastern coast of Japan in March 2011 and caused significant damage to the reactors at the Fukushima Daiichi site. Station blackout (SBO), core damage, containment damage, hydrogen explosion, and intensive radioactivity release, which had previously been analyzed and assumed as postulated accident progressions in PRA models, occurred to various degrees at the multi-unit Fukushima Daiichi site. This paper reviews and compares a typical BWR SPAR Level 2 model with the “real” accident progressions and sequences that occurred in Fukushima Daiichi Units 1, 2, and 3. It shows that the SPAR Level 2 model is a robust PRA model that can very reasonably describe the accident progression of a real and complicated nuclear accident. On the other hand, the comparison shows that the SPAR model could be enhanced by incorporating some accident characteristics for better representation of severe accident progression.

  19. Review and evaluation of performance measures for survival prediction models in external validation settings

    Directory of Open Access Journals (Sweden)

    M. Shafiqur Rahman

    2017-04-01

    Full Text Available Abstract Background When developing a prediction model for survival data it is essential to validate its performance in external validation settings using appropriate performance measures. Although a number of such measures have been proposed, there is only limited guidance regarding their use in the context of model validation. This paper reviewed and evaluated a wide range of performance measures to provide some guidelines for their use in practice. Methods An extensive simulation study based on two clinical datasets was conducted to investigate the performance of the measures in external validation settings. Measures were selected from categories that assess the overall performance, discrimination and calibration of a survival prediction model. Some of these have been modified to allow their use with validation data, and a case study is provided to describe how these measures can be estimated in practice. The measures were evaluated with respect to their robustness to censoring and ease of interpretation. All measures are implemented, or are straightforward to implement, in statistical software. Results Most of the performance measures were reasonably robust to moderate levels of censoring. One exception was Harrell’s concordance measure, which tended to increase as censoring increased. Conclusions We recommend that Uno’s concordance measure is used to quantify concordance when there are moderate levels of censoring. Alternatively, Gönen and Heller’s measure could be considered, especially if censoring is very high, but we suggest that the prediction model is re-calibrated first. We also recommend that Royston’s D is routinely reported to assess discrimination since it has an appealing interpretation. The calibration slope is useful for both internal and external validation settings and we recommend reporting it routinely. Our recommendation would be to use any of the predictive accuracy measures and provide the corresponding predictive
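
    Harrell's concordance measure, whose sensitivity to censoring this record discusses, can be written down directly for small data sets. A sketch with invented data, ignoring ties in observed time and the censoring corrections (such as Uno's weighting) recommended above:

```python
def harrell_c(times, events, risks):
    """Harrell's concordance index: the fraction of comparable pairs in which
    the higher predicted risk goes with the shorter survival time."""
    concordant = tied = comparable = 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            # A pair is usable only if subject i is known to fail before
            # subject j's observed time (so i must be an event, not censored)
            if events[i] == 1 and times[i] < times[j]:
                comparable += 1
                if risks[i] > risks[j]:
                    concordant += 1
                elif risks[i] == risks[j]:
                    tied += 1
    return (concordant + 0.5 * tied) / comparable

times  = [2.0, 4.0, 5.0, 7.0, 9.0]
events = [1,   1,   0,   1,   0]    # 0 = censored observation
risks  = [0.9, 0.7, 0.8, 0.3, 0.1]  # higher risk should mean earlier failure
print(harrell_c(times, events, risks))   # → 0.875
```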

  20. Comparative analysis of modified PMV models and SET models to predict human thermal sensation in naturally ventilated buildings

    DEFF Research Database (Denmark)

    Gao, Jie; Wang, Yi; Wargocki, Pawel

    2015-01-01

    In this paper, a comparative analysis was performed on the human thermal sensation estimated by modified predicted mean vote (PMV) models and modified standard effective temperature (SET) models in naturally ventilated buildings; the data were collected in a field study. These prediction models were....../s, the expectancy factors for the extended PMV model and the extended SET model were from 0.770 to 0.974 and from 1.330 to 1.363, and the adaptive coefficients for the adaptive PMV model and the adaptive SET model were from 0.029 to 0.167 and from -0.213 to -0.195. In addition, the difference in thermal sensation...... between the measured and predicted values using the modified PMV models exceeded 25%, while the difference between the measured thermal sensation and the predicted thermal sensation using modified SET models was approximately less than 25%. It is concluded that the modified SET models can predict human...

  1. MO-AB-BRA-01: A Global Level Set Based Formulation for Volumetric Modulated Arc Therapy

    Energy Technology Data Exchange (ETDEWEB)

    Nguyen, D; Lyu, Q; Ruan, D; O’Connor, D; Low, D; Sheng, K [Department of Radiation Oncology, University of California Los Angeles, Los Angeles, CA (United States)

    2016-06-15

    Purpose: The current clinical Volumetric Modulated Arc Therapy (VMAT) optimization is formulated as a non-convex problem and various greedy heuristics have been employed for an empirical solution, jeopardizing plan consistency and quality. We introduce a novel global direct aperture optimization method for VMAT to overcome these limitations. Methods: The global VMAT (gVMAT) planning was formulated as an optimization problem with an L2-norm fidelity term and an anisotropic total variation term. A level set function was used to describe the aperture shapes and adjacent aperture shapes were penalized to control MLC motion range. An alternating optimization strategy was implemented to solve the fluence intensity and aperture shapes simultaneously. Single arc gVMAT plans, utilizing 180 beams with 2° angular resolution, were generated for a glioblastoma multiforme (GBM), lung (LNG), and 2 head and neck cases—one with 3 PTVs (H&N3PTV) and one with 4 PTVs (H&N4PTV). The plans were compared against the clinical VMAT (cVMAT) plans utilizing two overlapping coplanar arcs. Results: The optimization of the gVMAT plans had converged within 600 iterations. gVMAT reduced the average max and mean OAR dose by 6.59% and 7.45% of the prescription dose. Reductions in max dose and mean dose were as high as 14.5 Gy in the LNG case and 15.3 Gy in the H&N3PTV case. PTV coverages (D95, D98, D99) were within 0.25% of the prescription dose. By globally considering all beams, the gVMAT optimizer allowed some beams to deliver higher intensities, yielding a dose distribution that resembles a static beam IMRT plan with beam orientation optimization. Conclusions: The novel VMAT approach allows for the search of an optimal plan in the global solution space and generates deliverable apertures directly. The single arc VMAT approach fully utilizes the digital linacs’ capability in dose rate and gantry rotation speed modulation. Varian Medical Systems, NIH grant R01CA188300, NIH grant R43CA183390.
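
    The core representational idea in this record, an MLC aperture encoded as the positive region of a level set function so that leaf edges sit on the zero contour, can be sketched independently of the optimizer. This is illustrative geometry only, not the gVMAT algorithm:

```python
import numpy as np

# A level set function on a 20x20 fluence grid: the aperture is the region
# where phi > 0, so the aperture boundary is the zero contour of phi.
y, x = np.mgrid[0:20, 0:20]
phi = 6.0 - np.sqrt((x - 10) ** 2 + (y - 10) ** 2)   # circular aperture, radius 6

aperture = phi > 0                 # binary open/closed beamlet map
print(int(aperture.sum()))         # number of open beamlets

# For a convex phi, each MLC leaf row opens over one contiguous interval of
# beamlets, which is what makes the shape directly deliverable
row = aperture[10]
left, right = np.argmax(row), len(row) - np.argmax(row[::-1])
print(int(left), int(right))       # half-open interval [left, right)
```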

  2. Strengthening fairness, transparency and accountability in health care priority setting at district level in Tanzania

    Directory of Open Access Journals (Sweden)

    Stephen Maluka

    2011-11-01

    Full Text Available Health care systems are faced with the challenge of resource scarcity and have insufficient resources to respond to all health problems and target groups simultaneously. Hence, priority setting is an inevitable aspect of every health system. However, priority setting is complex and difficult because the process is frequently influenced by political, institutional and managerial factors that are not considered by conventional priority-setting tools. In a five-year EU-supported project, which started in 2006, ways of strengthening fairness and accountability in priority setting in district health management were studied. This review is based on a PhD thesis that aimed to analyse health care organisation and management systems, and explore the potential and challenges of implementing the Accountability for Reasonableness (A4R) approach to priority setting in Tanzania. A qualitative case study in Mbarali district formed the basis of exploring the sociopolitical and institutional contexts within which health care decision making takes place. The study also explores how the A4R intervention was shaped, enabled and constrained by the contexts. Key informant interviews were conducted. Relevant documents were also gathered and group priority-setting processes in the district were observed. The study revealed that, despite the obvious national rhetoric on decentralisation, actual practice in the district involved little community participation. The assumption that devolution to local government promotes transparency, accountability and community participation, is far from reality. The study also found that while the A4R approach was perceived to be helpful in strengthening transparency, accountability and stakeholder engagement, integrating the innovation into the district health system was challenging. This study underscores the idea that greater involvement and accountability among local actors may increase the legitimacy and fairness of priority-setting

  3. Three essays on multi-level optimization models and applications

    Science.gov (United States)

    Rahdar, Mohammad

    The general form of a multi-level mathematical programming problem is a set of nested optimization problems, in which each level controls a series of decision variables independently. However, the values of the decision variables at one level may also impact the objective functions of other levels. A two-level model is called a bilevel model and can be considered a Stackelberg game with a leader and a follower. The leader anticipates the response of the follower and optimizes its objective function, and then the follower reacts to the leader's action. Multi-level decision-making models have many real-world applications, such as government decisions, energy policies, market economies and network design. However, capable algorithms for solving medium- and large-scale problems of these types are lacking. The dissertation is devoted to both theoretical research and applications of multi-level mathematical programming models, and consists of three parts, each in a paper format. The first part studies the renewable energy portfolio under two major renewable energy policies. The potential competition for biomass for the growth of the renewable energy portfolio in the United States and other interactions between the two policies over the next twenty years are investigated. This problem mainly has two levels of decision makers: the government/policy makers and biofuel producers/electricity generators/farmers. We focus on the lower-level problem to predict the amount of capacity expansions, fuel production, and power generation. In the second part, we address uncertainty over demand and lead time in a multi-stage mathematical programming problem. We propose a two-stage tri-level optimization model based on a rolling-horizon approach to reduce the dimensionality of the multi-stage problem. In the third part of the dissertation, we introduce a new branch and bound algorithm to solve bilevel linear programming problems. The total time is reduced by solving a smaller relaxation

  4. H2RM: A Hybrid Rough Set Reasoning Model for Prediction and Management of Diabetes Mellitus

    Directory of Open Access Journals (Sweden)

    Rahman Ali

    2015-07-01

    Full Text Available Diabetes is a chronic disease characterized by high blood glucose level that results either from a deficiency of insulin produced by the body, or the body's resistance to the effects of insulin. Accurate and precise reasoning and prediction models greatly help physicians to improve diagnosis, prognosis and treatment procedures of different diseases. Though numerous models have been proposed to solve issues of diagnosis and management of diabetes, they have the following drawbacks: (1) restricted to one type of diabetes; (2) lack of understandability and explanatory power of the techniques and decisions; (3) limited either to prediction or to management over structured contents; and (4) lack of competence for the dimensionality and vagueness of patients' data. To overcome these issues, this paper proposes a novel hybrid rough set reasoning model (H2RM) that resolves problems of inaccurate prediction and management of type-1 diabetes mellitus (T1DM) and type-2 diabetes mellitus (T2DM). For verification of the proposed model, experimental data from fifty patients, acquired from a local hospital in semi-structured format, is used. First, the data is transformed into structured format and then used for mining prediction rules. Rough set theory (RST)-based techniques and algorithms are used to mine the prediction rules. During the online execution phase of the model, these rules are used to predict T1DM and T2DM for new patients. Furthermore, the proposed model assists physicians to manage diabetes using knowledge extracted from online diabetes guidelines. Correlation-based trend analysis techniques are used to manage diabetic observations. Experimental results demonstrate that the proposed model outperforms the existing methods with 95.9% average and balanced accuracies.

  5. H2RM: A Hybrid Rough Set Reasoning Model for Prediction and Management of Diabetes Mellitus.

    Science.gov (United States)

    Ali, Rahman; Hussain, Jamil; Siddiqi, Muhammad Hameed; Hussain, Maqbool; Lee, Sungyoung

    2015-07-03

    Diabetes is a chronic disease characterized by high blood glucose level that results either from a deficiency of insulin produced by the body, or the body's resistance to the effects of insulin. Accurate and precise reasoning and prediction models greatly help physicians to improve diagnosis, prognosis and treatment procedures of different diseases. Though numerous models have been proposed to solve issues of diagnosis and management of diabetes, they have the following drawbacks: (1) restricted to one type of diabetes; (2) lack of understandability and explanatory power of the techniques and decisions; (3) limited either to prediction or to management over structured contents; and (4) lack of competence for the dimensionality and vagueness of patients' data. To overcome these issues, this paper proposes a novel hybrid rough set reasoning model (H2RM) that resolves problems of inaccurate prediction and management of type-1 diabetes mellitus (T1DM) and type-2 diabetes mellitus (T2DM). For verification of the proposed model, experimental data from fifty patients, acquired from a local hospital in semi-structured format, is used. First, the data is transformed into structured format and then used for mining prediction rules. Rough set theory (RST)-based techniques and algorithms are used to mine the prediction rules. During the online execution phase of the model, these rules are used to predict T1DM and T2DM for new patients. Furthermore, the proposed model assists physicians to manage diabetes using knowledge extracted from online diabetes guidelines. Correlation-based trend analysis techniques are used to manage diabetic observations. Experimental results demonstrate that the proposed model outperforms the existing methods with 95.9% average and balanced accuracies.
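
    The "average and balanced accuracies" reported in this record are standard classification summaries; balanced accuracy averages the per-class recalls so that an imbalanced class mix cannot inflate the score. A sketch with hypothetical confusion-matrix counts, not the paper's actual results:

```python
def balanced_accuracy(tp, fn, tn, fp):
    """Mean of sensitivity and specificity; robust when classes are imbalanced."""
    sensitivity = tp / (tp + fn)   # recall on the positive (diabetic) class
    specificity = tn / (tn + fp)   # recall on the negative class
    return (sensitivity + specificity) / 2

# Hypothetical confusion counts for a T2DM / no-T2DM rule set
print(round(balanced_accuracy(tp=45, fn=5, tn=38, fp=12), 2))
```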

  6. Modeling study of solute transport in the unsaturated zone. Information and data sets. Volume 1

    International Nuclear Information System (INIS)

    Polzer, W.L.; Fuentes, H.R.; Springer, E.P.; Nyhan, J.W.

    1986-05-01

    The Environmental Science Group (HSE-12) is conducting a study to compare various approaches to modeling water and solute transport in porous media. Various groups representing different approaches will model a common set of transport data so that the state of the art in modeling and field experimentation can be discussed in a positive framework with an assessment of current capabilities and future needs in this area of research. This paper provides information and sets of data that will be useful to the modelers in meeting the objectives of the modeling study. The information and data sets include: (1) a description of the experimental design and methods used in obtaining solute transport data, (2) supporting data that may be useful in modeling the data set of interest, and (3) the data set to be modeled.

  7. Modelling uncertainty with generalized credal sets: application to conjunction and decision

    Science.gov (United States)

    Bronevich, Andrey G.; Rozenberg, Igor N.

    2018-01-01

    To model conflict, non-specificity and contradiction in information, upper and lower generalized credal sets are introduced. Any upper generalized credal set is a convex subset of plausibility measures interpreted as lower probabilities whose bodies of evidence consist of singletons and a certain event. Analogously, contradiction is modelled in the theory of evidence by a belief function that is greater than zero at the empty set. Based on generalized credal sets, we extend the conjunctive rule to contradictory sources of information, introduce constructions like the natural extension in the theory of imprecise probabilities, and show that the model of generalized credal sets coincides with the model of imprecise probabilities if the profile of a generalized credal set consists of probability measures. Finally, we describe how the introduced model can be applied to decision problems.

  8. A Validated Set of MIDAS V5 Task Network Model Scenarios to Evaluate Nextgen Closely Spaced Parallel Operations Concepts

    Science.gov (United States)

    Gore, Brian Francis; Hooey, Becky Lee; Haan, Nancy; Socash, Connie; Mahlstedt, Eric; Foyle, David C.

    2013-01-01

    The Closely Spaced Parallel Operations (CSPO) scenario is a complex human performance model scenario that tested alternate operator roles and responsibilities in a series of off-nominal operations on approach and landing (see Gore, Hooey, Mahlstedt, & Foyle, 2013). The model links together the procedures, equipment, crewstation, and external environment to produce predictions of operator performance in response to Next Generation system designs, like those expected in the National Airspace's NextGen concepts. The task analysis contained in the present report comes from the task analysis window in the MIDAS software. These tasks link definitions and states for equipment components, environmental features, and operational contexts. The task analysis culminated in 3300 tasks that included over 1000 Subject Matter Expert (SME)-vetted, re-usable procedural sets for three critical phases of flight: the Descent, Approach, and Land procedural sets (see Gore et al., 2011 for a description of the development of the tasks included in the model; Gore, Hooey, Mahlstedt, & Foyle, 2013 for a description of the model and its results; Hooey, Gore, Mahlstedt, & Foyle, 2013 for a description of the guidelines generated from the model's results; and Gore, Hooey, & Foyle, 2012 for a description of the model's implementation and its settings). The rollout, after-landing checks, taxi-to-gate and arrive-at-gate networks illustrated in Figure 1 were not used in the approach and divert scenarios exercised; the other networks in Figure 1 set up appropriate context settings for the flight deck. The current report presents the model's task decomposition from the top (highest) level down to finer-grained levels. The first task completed by the model is to set all of the initial settings for the scenario runs included in the model (network 75 in Figure 1). This initialization process also resets the CAD graphic files contained within MIDAS, as well as the embedded

  9. SET overexpression in HEK293 cells regulates mitochondrial uncoupling proteins levels within a mitochondrial fission/reduced autophagic flux scenario

    Energy Technology Data Exchange (ETDEWEB)

    Almeida, Luciana O.; Goto, Renata N. [Department of Clinical Analyses, Toxicology and Food Sciences, School of Pharmaceutical Sciences of Ribeirão Preto, University of São Paulo, Ribeirão Preto, SP (Brazil); Neto, Marinaldo P.C. [Department of Physics and Chemistry, School of Pharmaceutical Sciences of Ribeirão Preto, University of São Paulo, Ribeirão Preto, SP (Brazil); Sousa, Lucas O. [Department of Clinical Analyses, Toxicology and Food Sciences, School of Pharmaceutical Sciences of Ribeirão Preto, University of São Paulo, Ribeirão Preto, SP (Brazil); Curti, Carlos [Department of Physics and Chemistry, School of Pharmaceutical Sciences of Ribeirão Preto, University of São Paulo, Ribeirão Preto, SP (Brazil); Leopoldino, Andréia M., E-mail: andreiaml@usp.br [Department of Clinical Analyses, Toxicology and Food Sciences, School of Pharmaceutical Sciences of Ribeirão Preto, University of São Paulo, Ribeirão Preto, SP (Brazil)

    2015-03-06

    We hypothesized that SET, a protein accumulated in some cancer types and Alzheimer disease, is involved in cell death through mitochondrial mechanisms. We addressed the mRNA and protein levels of the mitochondrial uncoupling proteins UCP1, UCP2 and UCP3 (S and L isoforms) by quantitative real-time PCR and immunofluorescence as well as other mitochondrial involvements, in HEK293 cells overexpressing the SET protein (HEK293/SET), either in the presence or absence of oxidative stress induced by the pro-oxidant t-butyl hydroperoxide (t-BHP). SET overexpression in HEK293 cells decreased UCP1 and increased UCP2 and UCP3 (S/L) mRNA and protein levels, whilst also preventing lipid peroxidation and decreasing the content of cellular ATP. SET overexpression also (i) decreased the area of mitochondria and increased the number of organelles and lysosomes, (ii) increased mitochondrial fission, as demonstrated by increased FIS1 mRNA and FIS-1 protein levels, an apparent accumulation of DRP-1 protein, and an increase in the VDAC protein level, and (iii) reduced autophagic flux, as demonstrated by a decrease in LC3B lipidation (LC3B-II) in the presence of chloroquine. Therefore, SET overexpression in HEK293 cells promotes mitochondrial fission and reduces autophagic flux in apparent association with up-regulation of UCP2 and UCP3; this implies a potential involvement in cellular processes that are deregulated such as in Alzheimer's disease and cancer. - Highlights: • SET, UCPs and autophagy prevention are correlated. • SET action has mitochondrial involvement. • UCP2/3 may reduce ROS and prevent autophagy. • SET protects cell from ROS via UCP2/3.

  10. A comparison of foetal SAR in three sets of pregnant female models

    International Nuclear Information System (INIS)

    Dimbylow, Peter J; Nagaoka, Tomoaki; Xu, X George

    2009-01-01

    This paper compares the foetal SAR in the HPA hybrid mathematical phantoms with the 26-week foetal model developed at the National Institute of Information and Communications Technology, Tokyo, and the set of 13-, 26- and 38-week boundary representation models produced at Rensselaer Polytechnic Institute. FDTD calculations are performed at a resolution of 2 mm for a plane wave with a vertically aligned electric field incident upon the body from the front, back and two sides from 20 MHz to 3 GHz under isolated conditions. The external electric field values required to produce the ICNIRP public exposure localized restriction of 2 W kg⁻¹ when averaged over 10 g of the foetus are compared with the ICNIRP reference levels.

  11. Computerized detection of multiple sclerosis candidate regions based on a level set method using an artificial neural network

    International Nuclear Information System (INIS)

    Kuwazuru, Junpei; Magome, Taiki; Arimura, Hidetaka; Yamashita, Yasuo; Oki, Masafumi; Toyofuku, Fukai; Kakeda, Shingo; Yamamoto, Daisuke

    2010-01-01

    Yamamoto et al. developed a system for computer-aided detection of multiple sclerosis (MS) candidate regions. In the level set method employed in their approach, a constant threshold value was used for the edge indicator function related to the speed function of the level set method. However, it would be more appropriate to adjust the threshold value for each MS candidate region, because the edge magnitudes of MS candidates differ from each other. The purpose of this study was to develop a computerized detection of MS candidate regions in MR images based on a level set method using an artificial neural network (ANN). To adjust the threshold value for the edge indicator function in the level set method to each true positive (TP) and false positive (FP) region, we constructed an ANN. The ANN could provide a suitable threshold value for each candidate region in the proposed level set method so that TP regions can be segmented and FP regions removed. Our proposed method detected MS regions at a sensitivity of 82.1% with 0.204 FPs per slice, and the similarity index of MS candidate regions was 0.717 on average. (author)
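
    An edge indicator function of the kind thresholded in this record can be sketched with the common choice g = 1 / (1 + |∇I|²), which is near 0 at strong edges and near 1 in flat regions; the record's contribution is letting an ANN pick the threshold per candidate, which is not reproduced here. A toy image stands in for the MR data:

```python
import numpy as np

def edge_indicator(image):
    """g = 1 / (1 + |grad I|^2): near 0 at strong edges, ~1 in flat regions,
    so a level set front whose speed is scaled by g slows at boundaries."""
    gy, gx = np.gradient(image.astype(float))
    return 1.0 / (1.0 + gx**2 + gy**2)

# Toy image: flat background with a bright square "lesion"
img = np.zeros((32, 32))
img[10:20, 10:20] = 10.0

g = edge_indicator(img)
# With a fixed threshold, the stopping region is wherever g falls below it;
# the record's point is that this threshold should adapt per candidate region.
threshold = 0.5
edges = g < threshold
print(bool(edges[10, 5]), bool(edges[10, 12]))   # background vs lesion border
```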

  12. Prediction of South China sea level using seasonal ARIMA models

    Science.gov (United States)

    Fernandez, Flerida Regine; Po, Rodolfo; Montero, Neil; Addawe, Rizavel

    2017-11-01

    Accelerating sea level rise is an indicator of global warming and poses a threat to low-lying places and coastal countries. This study aims to fit a Seasonal Autoregressive Integrated Moving Average (SARIMA) model to the time series obtained from the TOPEX and Jason series of satellite radar altimetries of the South China Sea from the year 2008 to 2015. With altimetric measurements taken in a 10-day repeat cycle, monthly averages of the satellite altimetry measurements were taken to compose the data set used in the study. SARIMA models were then tried and fitted to the time series in order to find the best-fit model. Results show that the SARIMA(1,0,0)(0,1,1)12 model best fits the time series and was used to forecast the values for January 2016 to December 2016. The 12-month forecast using SARIMA(1,0,0)(0,1,1)12 shows that the sea level gradually increases from January to September 2016, and decreases until December 2016.
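The structure of the selected model can be illustrated with a stripped-down numpy sketch: seasonally difference at lag 12, then fit the AR(1) part by least squares. The seasonal MA(1) term is omitted for brevity, the synthetic series is an assumption, and in practice a library such as statsmodels' SARIMAX would fit the full SARIMA(1,0,0)(0,1,1)₁₂ model:

```python
import numpy as np

# Simplified sketch of SARIMA(1,0,0)(0,1,1)[12] structure (seasonal MA term
# omitted): seasonal differencing at lag 12, AR(1) on the differences,
# then a 12-month forecast that undoes the differencing.
def fit_ar1_on_seasonal_diff(y, season=12):
    d = y[season:] - y[:-season]                           # seasonal differencing
    phi = np.dot(d[1:], d[:-1]) / np.dot(d[:-1], d[:-1])   # AR(1) coefficient
    return d, phi

def forecast(y, d, phi, steps=12, season=12):
    preds, hist = [], list(y)
    last_d = d[-1]
    for _ in range(steps):
        last_d = phi * last_d                 # AR(1) recursion on differences
        preds.append(hist[-season] + last_d)  # undo the seasonal difference
        hist.append(preds[-1])
    return np.array(preds)

# Synthetic monthly series (assumption): linear trend plus annual cycle
t = np.arange(96)
y = 0.3 * t + 5.0 * np.sin(2.0 * np.pi * t / 12.0)
d, phi = fit_ar1_on_seasonal_diff(y)
y_hat = forecast(y, d, phi)   # 12-month-ahead forecast
```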

  13. Instruction-level performance modeling and characterization of multimedia applications

    Energy Technology Data Exchange (ETDEWEB)

    Luo, Y. [Los Alamos National Lab., NM (United States). Scientific Computing Group; Cameron, K.W. [Louisiana State Univ., Baton Rouge, LA (United States). Dept. of Computer Science

    1999-06-01

    One of the challenges in characterizing and modeling realistic multimedia applications is the lack of access to source code. On-chip performance counters effectively resolve this problem by monitoring run-time behaviors at the instruction level. This paper presents a novel technique for characterizing and modeling workloads at the instruction level for realistic multimedia applications using hardware performance counters. A variety of instruction counts are collected from multimedia applications such as RealPlayer, GSM Vocoder, MPEG encoder/decoder, and a speech synthesizer. These instruction counts can be used to form a set of abstract characteristic parameters directly related to a processor's architectural features. Based on microprocessor architectural constraints and these calculated abstract parameters, the architectural performance bottleneck for a specific application can be estimated. Meanwhile, the bottleneck estimation can provide suggestions about viable architectural/functional improvements for certain workloads. The biggest advantage of this new characterization technique is a better understanding of processor utilization efficiency and the architectural bottleneck for each application. The technique also provides predictive insight into future architectural enhancements and their effect on current codes. In this paper the authors also attempt to model architectural effects on processor utilization without memory influence. They derive formulas for calculating CPI{sub 0}, the CPI without memory effect, and they quantify the utilization of architectural parameters. These equations are architecturally diagnostic and predictive in nature. Results show promise in code characterization and empirical/analytical modeling.
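The idea of a CPI with the memory effect removed can be sketched with textbook arithmetic; the stall-based decomposition and the counter values below are illustrative assumptions, not the paper's derived formulas:

```python
# Sketch of separating memory stalls from base processor utilization using
# hardware-counter-style event counts, in the spirit of the paper's CPI_0
# ("CPI without memory effect"). This simple decomposition is a textbook
# assumption, not the authors' exact model.
def cpi_breakdown(cycles, instructions, mem_stall_cycles):
    cpi = cycles / instructions                        # observed CPI
    cpi0 = (cycles - mem_stall_cycles) / instructions  # memory effect removed
    return cpi, cpi0

# Hypothetical counter readings for one multimedia workload
cpi, cpi0 = cpi_breakdown(cycles=2_000_000,
                          instructions=1_000_000,
                          mem_stall_cycles=500_000)
# cpi == 2.0, cpi0 == 1.5
```

The gap between the two values quantifies how much of the observed CPI is attributable to the memory hierarchy rather than the core's architectural limits.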

  14. A new method for fatigue life prediction based on the Thick Level Set approach

    NARCIS (Netherlands)

    Voormeeren, L.O.; van der Meer, F.P.; Maljaars, J.; Sluys, L.J.

    2017-01-01

    The last decade has seen a growing interest in cohesive zone models for fatigue applications. These cohesive zone models often suffer from a lack of generality and applying them typically requires calibrating a large number of model-specific parameters. To improve on these issues a new method has

  16. Comparing Panelists' Understanding of Standard Setting across Multiple Levels of an Alternate Science Assessment

    Science.gov (United States)

    Hansen, Mary A.; Lyon, Steven R.; Heh, Peter; Zigmond, Naomi

    2013-01-01

    Large-scale assessment programs, including alternate assessments based on alternate achievement standards (AA-AAS), must provide evidence of technical quality and validity. This study provides information about the technical quality of one AA-AAS by evaluating the standard setting for the science component. The assessment was designed to have…

  17. Learning Data Set Influence on Identification Accuracy of Gas Turbine Neural Network Model

    Science.gov (United States)

    Kuznetsov, A. V.; Makaryants, G. M.

    2018-01-01

    There are many studies on gas turbine engine identification via dynamic neural network models. The identification process should minimize errors between the model and the real object. Questions about the processing of neural network training data sets are usually overlooked. This article presents a study of the influence of data set type on the accuracy of a gas turbine neural network model. The identification object is a thermodynamic model of a micro gas turbine engine; its input signal is the fuel consumption and its output signal is the engine rotor rotation frequency. Four types of input signal were used to create training and testing data sets for the dynamic neural network models: step, fast, slow and mixed. Four dynamic neural networks were created, one from each type of training data set, and each network was tested against all four types of test data set. As a result, 16 transition processes from the four neural networks and four test data sets were compared with the corresponding solutions of the thermodynamic model. Errors were compared across all neural networks for each test data set, yielding the range of error values per test data set. The error ranges are small, and therefore the influence of data set type on identification accuracy is low.
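The four excitation types can be sketched as follows; the amplitudes, ramp rates and function names are illustrative assumptions, not the study's actual signals:

```python
import numpy as np

# Sketch of the four fuel-consumption excitation types the study mentions
# (step, fast, slow, mixed) for building training/testing data sets.
# Shapes and ramp rates are illustrative assumptions.
def make_signal(kind, n=200, lo=0.2, hi=1.0):
    t = np.linspace(0.0, 1.0, n)
    if kind == "step":
        return np.where(t < 0.5, lo, hi)                  # single step change
    if kind == "fast":
        return lo + (hi - lo) * np.clip(t / 0.1, 0, 1)    # fast ramp, then hold
    if kind == "slow":
        return lo + (hi - lo) * t                         # slow ramp, full window
    if kind == "mixed":
        half = n // 2
        return np.concatenate([make_signal("step", half, lo, hi),
                               make_signal("slow", n - half, lo, hi)])
    raise ValueError(kind)

signals = {k: make_signal(k) for k in ("step", "fast", "slow", "mixed")}
```

Feeding each signal type to the thermodynamic model produces one training data set per type; the rotor-frequency response would serve as the target output.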

  18. Supporting the Constructive Use of Existing Hydrological Models in Participatory Settings: a Set of "Rules of the Game"

    Directory of Open Access Journals (Sweden)

    Pieter W. G. Bots

    2011-06-01

    Full Text Available When hydrological models are used in support of water management decisions, stakeholders often contest these models because they perceive certain aspects to be inadequately addressed. A strongly contested model may be abandoned completely, even when stakeholders could potentially agree on the validity of part of the information it can produce. The development of a new model is costly, and the results may be contested again. We consider how existing hydrological models can be used in a policy process so as to benefit from both hydrological knowledge and the perspectives and local knowledge of stakeholders. We define a code of conduct as a set of "rules of the game" that we base on a case study of developing a water management plan for a Natura 2000 site in the Netherlands. We propose general rules for agenda management and information sharing, and more specific rules for model use and option development. These rules structure the interactions among actors, help them to explicitly acknowledge uncertainties, and prevent expertise from being neglected or overlooked. We designed the rules to favor openness, protection of core stakeholder values, the use of relevant substantive knowledge, and the momentum of the process. We expect that these rules, although developed on the basis of a water-management issue, can also be applied to support the use of existing computer models in other policy domains. As rules will shape actions only when they are constantly affirmed by actors, we expect that the rules will become less useful in an "unruly" social environment where stakeholders constantly challenge the proceedings.

  19. Heterogeneity in Wage Setting Behavior in a New-Keynesian Model

    NARCIS (Netherlands)

    Eijffinger, S.C.W.; Grajales Olarte, A.; Uras, R.B.

    2015-01-01

    In this paper we estimate a New-Keynesian DSGE model with heterogeneity in price and wage setting behavior. In a recent study, Coibion and Gorodnichenko (2011) develop a DSGE model, in which firms follow four different types of price setting schemes: sticky prices, sticky information, rule of thumb,

  20. Organizational factors related to low levels of sickness absence in a representative set of Swedish companies.

    Science.gov (United States)

    Stoetzer, Ulrich; Bergman, Peter; Aborg, Carl; Johansson, Gun; Ahlberg, Gunnel; Parmsund, Marianne; Svartengren, Magnus

    2014-01-01

    The aim of this qualitative study was to identify manageable organizational factors that could explain why some companies have low levels of sickness absence. There may be factors at company level that can be managed to influence levels of sickness absence and promote health and a prosperous organization. The study included a total of 204 semi-structured interviews at 38 representative Swedish companies. Qualitative thematic analysis was applied to the interviews, primarily with managers, to identify the organizational factors that characterize companies with low levels of sickness absence. The factors found to characterize such companies concerned strategies and procedures for managing leadership, employee development, communication, employee participation and involvement, corporate values and visions, and employee health. The results may be useful in finding strategies and procedures to reduce levels of sickness absence and promote health. Existing research addresses the reasons for sickness absence at the individual level; this study elevates the issue to the organizational level. The findings suggest that explicit strategies for managing certain organizational factors can reduce sickness absence and help companies to develop more health-promoting strategies.

  1. Modeling antecedents of electronic medical record system implementation success in low-resource setting hospitals.

    Science.gov (United States)

    Tilahun, Binyam; Fritz, Fleur

    2015-08-01

    With the increasing implementation of Electronic Medical Record Systems (EMR) in developing countries, there is a growing need to identify antecedents of EMR success in order to measure and predict the level of adoption before costly implementation. However, little evidence is available about EMR success in the context of low-resource setting implementations. Therefore, this study aims to fill this gap by examining the constructs and relationships of the widely used DeLone and McLean (D&M) information system success model to determine whether it can be applied to measure EMR success in those settings. A quantitative cross-sectional study design using self-administered questionnaires was used to collect data from 384 health professionals working in five governmental hospitals in Ethiopia. The hospitals have been using a comprehensive EMR system for three years. Descriptive and structural equation modeling methods were applied to describe and validate the extent of the relationships between constructs and mediating effects. The findings of the structural equation modeling show that system quality has a significant influence on EMR use (β = 0.32, P < […]), information quality has a significant influence on EMR use (β = 0.44, P < […]), and service quality has a strong significant influence on EMR use (β = 0.36, P < […]), while the effect of EMR use on user satisfaction was not significant. Both EMR use and user satisfaction have a significant influence on perceived net-benefit (β = 0.31, P < […]) […] mediating factor in the relationship between service quality and EMR use (P < […]) […] effect on perceived net-benefit of health professionals. EMR implementers and managers in developing countries are in urgent need of implementation models to design proper implementation strategies. In this study, the constructs and relationships depicted in the updated D&M model were found to be applicable to assess the success of EMR in low-resource settings. Additionally, computer literacy was found to be a mediating factor in EMR use and user satisfaction of

  2. Meta-analysis of choice set generation effects on route choice model estimates and predictions

    DEFF Research Database (Denmark)

    Prato, Carlo Giacomo

    2012-01-01

    Large scale applications of behaviorally realistic transport models pose several challenges to transport modelers on both the demand and the supply sides. On the supply side, path-based solutions to the user assignment equilibrium problem help modelers in enhancing route choice behavior modeling, but require them to generate choice sets by selecting a path generation technique and its parameters according to personal judgments. This paper proposes a methodology and an experimental setting to provide general indications about objective judgments for an effective route choice set generation… … are applied for model estimation and results are compared to the ‘true model estimates’. Last, predictions from the simulation of models estimated with objective choice sets are compared to the ‘postulated predicted routes’. A meta-analytical approach allows synthesizing the effect of judgments…

  3. An analysis of a joint shear model for jointed media with orthogonal joint sets

    International Nuclear Information System (INIS)

    Koteras, J.R.

    1991-10-01

    This report describes a joint shear model used in conjunction with a computational model for jointed media with orthogonal joint sets. The joint shear model allows nonlinear behavior for both joint sets. Because nonlinear behavior is allowed for both joint sets, a great many cases must be considered to fully describe the joint shear behavior of the jointed medium. An extensive set of equations is required to describe the joint shear stress and slip displacements that can occur for all the various cases. This report examines possible methods for simplifying this set of equations so that the model can be implemented efficiently from a computational standpoint. The shear model must be examined carefully to obtain a computationally efficient implementation that does not lead to numerical problems. The application to fractures in rock is discussed. 5 refs., 4 figs

  4. Effectiveness of reactive case detection for malaria elimination in three archetypical transmission settings: a modelling study.

    Science.gov (United States)

    Gerardin, Jaline; Bever, Caitlin A; Bridenbecker, Daniel; Hamainza, Busiku; Silumbe, Kafula; Miller, John M; Eisele, Thomas P; Eckhoff, Philip A; Wenger, Edward A

    2017-06-12

    Reactive case detection could be a powerful tool in malaria elimination, as it selectively targets transmission pockets. However, field operations have yet to demonstrate under which conditions, if any, reactive case detection is best poised to push a region to elimination. This study uses mathematical modelling to assess how baseline transmission intensity and local interconnectedness affect the impact of reactive activities in the context of other possible intervention packages. Communities in Southern Province, Zambia, where elimination operations are currently underway, were used as representatives of three archetypes of malaria transmission: low-transmission, high household density; high-transmission, low household density; and high-transmission, high household density. Transmission at the spatially-connected household level was simulated with a dynamical model of malaria transmission, and local variation in vectorial capacity and intervention coverage were parameterized according to data collected from the area. Various potential intervention packages were imposed on each of the archetypical settings and the resulting likelihoods of elimination by the end of 2020 were compared. Simulations predict that success of elimination campaigns in both low- and high-transmission areas is strongly dependent on stemming the flow of imported infections, underscoring the need for regional-scale strategies capable of reducing transmission concurrently across many connected areas. In historically low-transmission areas, treatment of clinical malaria should form the cornerstone of elimination operations, as most malaria infections in these areas are symptomatic and onward transmission would be mitigated through health system strengthening; reactive case detection has minimal impact in these settings. In historically high-transmission areas, vector control and case management are crucial for limiting outbreak size, and the asymptomatic reservoir must be addressed through

  5. An optimized process flow for rapid segmentation of cortical bones of the craniofacial skeleton using the level-set method.

    Science.gov (United States)

    Szwedowski, T D; Fialkov, J; Pakdel, A; Whyne, C M

    2013-01-01

    Accurate representation of skeletal structures is essential for quantifying structural integrity, for developing accurate models, for improving patient-specific implant design and in image-guided surgery applications. The complex morphology of the thin cortical structures of the craniofacial skeleton (CFS) represents a significant challenge with respect to accurate bony segmentation. This technical study presents optimized processing steps to segment the three-dimensional (3D) geometry of thin cortical bone structures from CT images. In this procedure, anisotropic filtering and a connected components scheme were utilized to isolate and enhance the internal boundaries between craniofacial cortical and trabecular bone. Subsequently, the shell-like nature of cortical bone was exploited using boundary-tracking level-set methods with optimized parameters determined from large-scale sensitivity analysis. The process was applied to clinical CT images acquired from two cadaveric CFSs. The accuracy of the automated segmentations was determined based on their volumetric concurrencies with visually optimized manual segmentations, without statistical appraisal. The full CFSs demonstrated volumetric concurrencies of 0.904 and 0.719; accuracy increased to concurrencies of 0.936 and 0.846 when considering only the maxillary region. The highly automated approach presented here is able to segment the cortical shell and trabecular boundaries of the CFS in clinical CT images. The results indicate that initial scan resolution and cortical-trabecular bone contrast may impact performance. Future application of these steps to larger data sets will enable the determination of the method's sensitivity to differences in image quality and CFS morphology.
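The volumetric concurrency scoring can be sketched as a binary overlap measure; the abstract does not define it precisely, so the Jaccard index used below is an assumption:

```python
import numpy as np

# Sketch of scoring an automated vs. manual segmentation by "volumetric
# concurrency". The exact definition is not given in the abstract; a common
# overlap measure for two binary volumes is the Jaccard index
# |A ∩ B| / |A ∪ B|, used here as an illustrative assumption.
def volumetric_concurrency(seg_a, seg_b):
    a = seg_a.astype(bool)
    b = seg_b.astype(bool)
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union if union else 1.0

# Toy 4x4x4 volumes: the automated mask covers 8 voxels, the manual mask
# 12 voxels, with 8 shared.
auto = np.zeros((4, 4, 4), dtype=bool)
manual = np.zeros((4, 4, 4), dtype=bool)
auto[1:3, 1:3, 1:3] = True
manual[1:3, 1:3, 1:4] = True
vc = volumetric_concurrency(auto, manual)   # 8/12 ≈ 0.667
```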

  6. Area-level risk factors for adverse birth outcomes: trends in urban and rural settings

    OpenAIRE

    Kent, Shia T; McClure, Leslie A; Zaitchik, Ben F; Gohlke, Julia M

    2013-01-01

    Background: Significant and persistent racial and income disparities in birth outcomes exist in the US. The analyses in this manuscript examine whether adverse birth outcome time trends and associations between area-level variables and adverse birth outcomes differ by urban–rural status. Methods: Alabama birth records were merged with ZIP code-level census measures of race, poverty, and rurality. B-splines were used to determine long-term preterm birth (PTB) and low birth weight (LBW) trends b...

  7. Representing the Fuzzy improved risk graph for determination of optimized safety integrity level in industrial setting

    Directory of Open Access Journals (Sweden)

    Z. Qorbali

    2013-12-01

    … Conclusion: As a result of applying the presented method, identical levels in the conventional risk graph table are replaced with different sublevels, which not only increases the accuracy in determining the SIL but also elucidates the effective factors in improving the safety level, and consequently saves significant time and cost. The proposed technique has been employed to develop the SIL of the Tehran Refinery ISOMAX Center. IRG and FIRG results have been compared to clarify the efficacy and importance of the proposed method.

  8. On Models with Uncountable Set of Spin Values on a Cayley Tree: Integral Equations

    International Nuclear Information System (INIS)

    Rozikov, Utkir A.; Eshkobilov, Yusup Kh.

    2010-01-01

    We consider models with nearest-neighbor interactions and with the set [0, 1] of spin values, on a Cayley tree of order k ≥ 1. We reduce the problem of describing the 'splitting Gibbs measures' of the model to the description of the solutions of a nonlinear integral equation. For k = 1 we show that the integral equation has a unique solution. In the case k ≥ 2, some models (with the set [0, 1] of spin values) which have a unique splitting Gibbs measure are constructed. Also, for the Potts model with an uncountable set of spin values it is proven that there is a unique splitting Gibbs measure.
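Solving such a nonlinear integral equation numerically usually proceeds by discretizing the spin space and iterating toward a fixed point; the kernel and operator below are arbitrary assumptions chosen to be contractive, not the paper's actual equation:

```python
import numpy as np

# Sketch: discretize [0, 1] and look for a fixed point f = T(f) of an
# integral operator (T f)(x) = ∫_0^1 K(x, t) f(t) dt + 1 by iteration.
# The kernel K is an arbitrary assumption with operator norm < 1, so a
# unique fixed point exists (Banach fixed-point theorem), mirroring the
# uniqueness statements in the paper's k = 1 case.
n = 101
x = np.linspace(0.0, 1.0, n)
w = np.full(n, 1.0 / n)                              # simple quadrature weights
K = 0.4 * np.exp(-np.abs(x[:, None] - x[None, :]))   # contractive kernel

f = np.zeros(n)
for _ in range(200):
    f_new = K @ (w * f) + 1.0                        # apply T by quadrature
    if np.max(np.abs(f_new - f)) < 1e-12:
        break                                        # converged to fixed point
    f = f_new
```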

  9. Multivariate modeling of complications with data driven variable selection: Guarding against overfitting and effects of data set size

    International Nuclear Information System (INIS)

    Schaaf, Arjen van der; Xu Chengjian; Luijk, Peter van; Veld, Aart A. van’t; Langendijk, Johannes A.; Schilstra, Cornelis

    2012-01-01

    data sets, in particular in data sets with a low number of events (median: 7, 95th percentile: 32). Recognizing overfitting from an inverted sign of the estimated model coefficients has a limited discriminative value. Conclusions: Despite considerable spread around the optimal number of selected variables, the bootstrapping method is efficient and accurate for sufficiently large data sets, and guards against overfitting for all simulated cases with the exception of some data sets with a particularly low number of events. An appropriate minimum data set size to obtain a model with high predictive power is approximately 200 patients and more than 32 events. With fewer data samples the true predictive power decreases rapidly, and for larger data set sizes the benefit levels off toward an asymptotic maximum predictive power.

  10. Using Models to Understand Sea Level Rise

    Science.gov (United States)

    Barth-Cohen, Lauren; Medina, Edwing

    2017-01-01

    Important science phenomena--such as atomic structure, evolution, and climate change--are often hard to observe directly. That's why an important scientific practice is to use scientific models to represent one's current understanding of a system. Using models has been included as an essential science and engineering practice in the "Next…

  11. Simulation of neuro-fuzzy model for optimization of combine header setting

    Directory of Open Access Journals (Sweden)

    S Zareei

    2016-09-01

    Full Text Available Introduction: A noticeable proportion of wheat losses occurs during the production and consumption stages, and loss during harvesting with a combine harvester is regarded as one of the main factors. A grain combine harvester consists of different sets of equipment, and one of the most important parts is the header, which accounts for more than 50% of the entire harvesting loss. Some researchers have presented regression equations to estimate the grain loss of a combine harvester. The results of their studies indicated that grain moisture content, reel index, cutter bar speed, service life of the cutter bar, tine spacing, tine clearance over the cutter bar and stem length were the major parameters affecting the losses. On the other hand, several studies have used a variety of artificial intelligence methods in different aspects of combine harvesters. In neuro-fuzzy control systems, membership functions and if-then rules are defined through neural networks. A Sugeno-type fuzzy inference model was applied to generate fuzzy rules from a given input-output data set, owing to its less time-consuming and mathematically tractable defuzzification operation for sample-data-based fuzzy modeling. In this study, a neuro-fuzzy model was applied to develop forecasting models which can predict the combine header loss for each set of header parameter adjustments related to site-specific information and therefore can minimize the header loss. Materials and Methods: The field experiment was conducted during the harvesting season of 2011 at the research station of the Faculty of Agriculture, Shiraz University, Shiraz, Iran. The wheat field (cv. Shiraz) was harvested with a Claas Lexion-510 combine harvester. The factors selected as the main factors influencing header performance were three levels of reel index (RI; forward speed of the combine harvester divided by the peripheral speed of the reel) (1, 1.2, 1.5), three levels of cutting height (CH) (25, 30, 35 cm), three

  12. On the Level Set of a Function with Degenerate Minimum Point

    Directory of Open Access Journals (Sweden)

    Yasuhiko Kamiyama

    2015-01-01

    Full Text Available For n ≥ 2, let M be an n-dimensional smooth closed manifold and f: M → R a smooth function. We set min f(M) = m and assume that m is attained by a unique point p ∈ M such that p is a nondegenerate critical point. Then the Morse lemma tells us that if a is slightly bigger than m, f⁻¹(a) is diffeomorphic to S^(n−1). In this paper, we relax the condition on p from being nondegenerate to being an isolated critical point and obtain the same consequence. Some application to the topology of polygon spaces is also included.
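The nondegenerate case rests on the Morse lemma, which (in a standard formulation, not quoted from the paper) gives local coordinates near the minimum in which the level set is visibly a sphere:

```latex
% Morse lemma: near a nondegenerate minimum p with f(p) = m, there exist
% local coordinates (x_1, \dots, x_n) centered at p such that
f(x) = m + x_1^2 + x_2^2 + \cdots + x_n^2 .
% Hence, for a slightly larger than m,
f^{-1}(a) = \bigl\{\, x : x_1^2 + \cdots + x_n^2 = a - m \,\bigr\}
\cong S^{n-1},
% a round sphere of radius \sqrt{a - m}.
```

The paper's contribution is that the same conclusion survives when p is merely an isolated critical point, where no such normal form is available.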

  13. CUDA based Level Set Method for 3D Reconstruction of Fishes from Large Acoustic Data

    DEFF Research Database (Denmark)

    Sharma, Ojaswa; Anton, François

    2009-01-01

    Acoustic images present views of underwater dynamics, even in high depths. With multi-beam echo sounders (SONARs), it is possible to capture series of 2D high resolution acoustic images. 3D reconstruction of the water column and subsequent estimation of fish abundance and fish species identificat...... of suppressing threshold and show its convergence as the evolution proceeds. We also present a GPU based streaming computation of the method using NVIDIA's CUDA framework to handle large volume data-sets. Our implementation is optimised for memory usage to handle large volumes....

  14. A suitable model plant for control of the set fuel cell-DC/DC converter

    Energy Technology Data Exchange (ETDEWEB)

    Andujar, J.M.; Segura, F.; Vasallo, M.J. [Departamento de Ingenieria Electronica, Sistemas Informaticos y Automatica, E.P.S. La Rabida, Universidad de Huelva, Ctra. Huelva - Palos de la Frontera, S/N, 21819 La Rabida - Palos de la Frontera Huelva (Spain)

    2008-04-15

    In this work a state and transfer function model of the set made up of a proton exchange membrane (PEM) fuel cell and a DC/DC converter is developed. The set is modelled as a plant controlled by the converter duty cycle. In addition to allowing the plant operating point to be set at any point of its characteristic curve (two points of interest are the maximum efficiency and maximum power points), this approach also allows the connection of the fuel cell to other energy generation and storage devices, given that, as they all usually share a single DC bus, thorough control of the interconnected devices is required. First, the state and transfer function models of the fuel cell and the converter are obtained. Then, both models are related in order to obtain the model of the fuel cell + DC/DC converter set (the plant). The results of the theoretical developments are validated by simulation on a real fuel cell model. (author)
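The idea of placing the operating point via the duty cycle can be sketched with textbook simplifications: a linear polarization curve and an ideal boost converter on a fixed DC bus. The topology and all parameter values are assumptions for illustration, not the paper's state-space model:

```python
# Sketch: how the converter duty cycle D selects the fuel cell's operating
# point on its characteristic curve. Assumptions (not the paper's model):
#   - linear polarization curve  V_fc = E0 - r * I_fc
#   - ideal boost converter on a fixed DC bus:  V_fc = V_bus * (1 - D)
def operating_point(duty, E0=40.0, r=0.05, v_bus=60.0):
    v_fc = v_bus * (1.0 - duty)     # cell voltage imposed via the duty cycle
    i_fc = (E0 - v_fc) / r          # current from the polarization curve
    return v_fc, i_fc, v_fc * i_fc  # voltage, current, power drawn

# For this linear curve, P = V (E0 - V) / r peaks at V = E0 / 2 = 20 V,
# reached here with D = 2/3 on a 60 V bus.
v, i, p = operating_point(duty=2.0 / 3.0)
```

Sweeping D moves the cell continuously along its curve, which is why the duty cycle is a natural control input for targeting the maximum-efficiency or maximum-power point.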

  15. Wave energy level and geographic setting correlate with Florida beach water quality.

    Science.gov (United States)

    Feng, Zhixuan; Reniers, Ad; Haus, Brian K; Solo-Gabriele, Helena M; Kelly, Elizabeth A

    2016-03-15

    Many recreational beaches suffer from elevated levels of microorganisms, resulting in beach advisories and closures due to lack of compliance with Environmental Protection Agency guidelines. We conducted the first statewide beach water quality assessment by analyzing decadal records of fecal indicator bacteria (enterococci and fecal coliform) levels at 262 Florida beaches. The objectives were to depict synoptic patterns of beach water quality exceedance along the entire Florida shoreline and to evaluate their relationships with wave condition and geographic location. Percent exceedances based on enterococci and fecal coliform were negatively correlated with both long-term mean wave energy and beach slope. Also, Gulf of Mexico beaches exceeded the thresholds significantly more than Atlantic Ocean ones, perhaps partially due to the lower wave energy. A possible linkage between wave energy level and water quality is beach sand, a pervasive nonpoint source that tends to harbor more bacteria in the low-wave-energy environment. Copyright © 2016 Elsevier Ltd. All rights reserved.
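The reported negative correlations correspond to a computation of this shape; the beach values below are made up for illustration, not the study's data:

```python
import numpy as np

# Sketch of the correlation analysis: percent exceedance of fecal-indicator
# thresholds vs. long-term mean wave energy across beaches. The study
# reports a negative association; the toy numbers here are assumptions.
wave_energy = np.array([1.0, 2.0, 3.0, 4.0, 5.0])     # relative wave energy
pct_exceed = np.array([30.0, 22.0, 18.0, 11.0, 6.0])  # % samples exceeding

r = np.corrcoef(wave_energy, pct_exceed)[0, 1]
# r < 0: in this toy data, higher wave energy goes with fewer exceedances,
# consistent with the hypothesis that low-energy beaches harbor more
# bacteria in sand.
```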

  16. Algebraic Specifications, Higher-order Types and Set-theoretic Models

    DEFF Research Database (Denmark)

    Kirchner, Hélène; Mosses, Peter David

    2001-01-01

    In most algebraic specification frameworks, the type system is restricted to sorts, subsorts, and first-order function types. This is in marked contrast to the so-called model-oriented frameworks, which provide higher-order types, interpreted set-theoretically as Cartesian products, function spaces, and power-sets. This paper presents a simple framework for algebraic specifications with higher-order types and set-theoretic models. It may be regarded as the basis for a Horn-clause approximation to the Z framework, and has the advantage of being amenable to prototyping and automated reasoning. Standard set-theoretic models are considered, and conditions are given for the existence of initial reducts of such models. Algebraic specifications for various set-theoretic concepts are considered.

  17. Analysis and Modeling of Urban Land Cover Change in Setúbal and Sesimbra, Portugal

    Directory of Open Access Journals (Sweden)

    Yikalo H. Araya

    2010-06-01

    Full Text Available The expansion of cities entails the abandonment of forest and agricultural lands, and these lands’ conversion into urban areas, which results in substantial impacts on ecosystems. Monitoring these changes and planning urban development can be successfully achieved using multitemporal remotely sensed data, spatial metrics, and modeling. In this paper, urban land use change analysis and modeling was carried out for the Concelhos of Setúbal and Sesimbra in Portugal. An existing land cover map for the year 1990, together with two derived land cover maps from multispectral satellite images for the years 2000 and 2006, were utilized using an object-oriented classification approach. Classification accuracy assessment revealed satisfactory results that fulfilled minimum standard accuracy levels. Urban land use dynamics, in terms of both patterns and quantities, were studied using selected landscape metrics and the Shannon Entropy index. Results show that urban areas increased by 91.11% between 1990 and 2006. In contrast, the change was only 6.34% between 2000 and 2006. The entropy value was 0.73 for both municipalities in 1990, indicating a high rate of urban sprawl in the area. In 2006, this value, for both Sesimbra and Setúbal, reached almost 0.90. This is demonstrative of a tendency toward intensive urban sprawl. Urban land use change for the year 2020 was modeled using a Cellular Automata based approach. The predictive power of the model was successfully validated using Kappa variations. Projected land cover changes show a growing tendency in urban land use, which might threaten areas that are currently reserved for natural parks and agricultural lands.
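The Shannon entropy index for sprawl is typically the relative entropy of zonal built-up shares; a sketch with illustrative shares follows (the normalization by ln n is a common convention and an assumption about the paper's exact formula):

```python
import math

# Sketch of the relative Shannon entropy index used to quantify urban
# sprawl: H = -sum(p_i * ln(p_i)) / ln(n) over n spatial zones, where p_i
# is zone i's share of built-up area. Values near 1 indicate built-up land
# spread evenly across zones (sprawl); values near 0 indicate compact
# concentration. The zone shares below are illustrative, not the paper's data.
def relative_entropy(shares):
    n = len(shares)
    h = -sum(p * math.log(p) for p in shares if p > 0)
    return h / math.log(n)

compact = [0.97, 0.01, 0.01, 0.01]     # almost all growth in one zone
dispersed = [0.25, 0.25, 0.25, 0.25]   # growth spread evenly
```

On this scale, the reported rise from 0.73 in 1990 to about 0.90 in 2006 for both municipalities reads as movement toward the dispersed end, i.e. intensifying sprawl.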

  18. Endogenous Currency of Price Setting in a Dynamic Open Economy Model

    OpenAIRE

    Michael B. Devereux; Charles Engel

    2001-01-01

    Many papers in the recent literature in open economy macroeconomics make different assumptions about the currency in which firms set their export prices when nominal prices must be pre-set. But to date, all of these studies take the currency of price setting as exogenous. This paper sets up a simple two-country general equilibrium model in which exporting firms can choose the currency in which they set prices for sales to foreign markets. We make two alternative assumptions about the structur...

  19. County-Level Poverty Is Equally Associated with Unmet Health Care Needs in Rural and Urban Settings

    Science.gov (United States)

    Peterson, Lars E.; Litaker, David G.

    2010-01-01

    Context: Regional poverty is associated with reduced access to health care. Whether this relationship is equally strong in both rural and urban settings or is affected by the contextual and individual-level characteristics that distinguish these areas, is unclear. Purpose: Compare the association between regional poverty with self-reported unmet…

  20. The Daily Events and Emotions of Master's-Level Family Therapy Trainees in Off-Campus Practicum Settings

    Science.gov (United States)

    Edwards, Todd M.; Patterson, Jo Ellen

    2012-01-01

    The Day Reconstruction Method (DRM) was used to assess the daily events and emotions of one program's master's-level family therapy trainees in off-campus practicum settings. This study examines the DRM reports of 35 family therapy trainees in the second year of their master's program in marriage and family therapy. Four themes emerged from the…

  1. Activity Sets in Multi-Organizational Ecologies : A Project-Level Perspective on Sustainable Energy Innovations

    NARCIS (Netherlands)

    Gerrit Willem Ziggers; Kristina Manser; Bas Hillebrand; Paul Driessen; Josée Bloemer

    2014-01-01

    Complex innovations involve multi-organizational ecologies consisting of a myriad of different actors. This study investigates how innovation activities can be interpreted in the context of multi-organizational ecologies. Taking a project-level perspective, this study proposes a typology of four

  2. Supporting Diverse Young Adolescents: Cooperative Grouping in Inclusive Middle-Level Settings

    Science.gov (United States)

    Miller, Nicole C.; McKissick, Bethany R.; Ivy, Jessica T.; Moser, Kelly

    2017-01-01

    The middle level classroom presents unique challenges to educators who strive to provide opportunities that acknowledge learner diversity in terms of social, cognitive, physical, and emotional development. This is confounded even further within inclusive middle-school classrooms where the responsibility to differentiate instruction is even more…

  3. Experimental Errors in QSAR Modeling Sets: What We Can Do and What We Cannot Do.

    Science.gov (United States)

    Zhao, Linlin; Wang, Wenyi; Sedykh, Alexander; Zhu, Hao

    2017-06-30

    Numerous chemical data sets have become available for quantitative structure-activity relationship (QSAR) modeling studies. However, the quality of different data sources may be different based on the nature of experimental protocols. Therefore, potential experimental errors in the modeling sets may lead to the development of poor QSAR models and further affect the predictions of new compounds. In this study, we explored the relationship between the ratio of questionable data in the modeling sets, which was obtained by simulating experimental errors, and the QSAR modeling performance. To this end, we used eight data sets (four continuous endpoints and four categorical endpoints) that have been extensively curated both in-house and by our collaborators to create over 1800 various QSAR models. Each data set was duplicated to create several new modeling sets with different ratios of simulated experimental errors (i.e., randomizing the activities of part of the compounds) in the modeling process. A fivefold cross-validation process was used to evaluate the modeling performance, which deteriorates when the ratio of experimental errors increases. All of the resulting models were also used to predict external sets of new compounds, which were excluded at the beginning of the modeling process. The modeling results showed that the compounds with relatively large prediction errors in cross-validation processes are likely to be those with simulated experimental errors. However, after removing a certain number of compounds with large prediction errors in the cross-validation process, the external predictions of new compounds did not show improvement. Our conclusion is that the QSAR predictions, especially consensus predictions, can identify compounds with potential experimental errors. But removing those compounds by the cross-validation procedure is not a reasonable means to improve model predictivity due to overfitting.
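
    The error-simulation design above (randomize the activities of a fraction of the compounds, then measure five-fold cross-validated performance) can be sketched as follows; the synthetic descriptors and ordinary least-squares model are hypothetical stand-ins for the curated QSAR sets and models used in the study:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical data set: 200 "compounds" with 5 descriptors and a
    # continuous activity depending linearly on them (plus small noise).
    X = rng.normal(size=(200, 5))
    y = X @ np.array([1.5, -2.0, 0.7, 3.0, -1.2]) + rng.normal(scale=0.3, size=200)

    def cv_r2(X, y, folds=5):
        """Five-fold cross-validated R^2 of an ordinary least-squares model."""
        idx = np.arange(len(y))
        ss_res = ss_tot = 0.0
        for k in range(folds):
            test = idx % folds == k
            coef, *_ = np.linalg.lstsq(X[~test], y[~test], rcond=None)
            pred = X[test] @ coef
            ss_res += np.sum((y[test] - pred) ** 2)
            ss_tot += np.sum((y[test] - y[test].mean()) ** 2)
        return 1 - ss_res / ss_tot

    def with_simulated_errors(y, ratio, rng):
        """Randomize the activities of a fraction `ratio` of the compounds."""
        y = y.copy()
        bad = rng.choice(len(y), size=int(ratio * len(y)), replace=False)
        y[bad] = rng.permutation(y[bad])
        return y

    for ratio in (0.0, 0.2, 0.4):
        print(ratio, round(cv_r2(X, with_simulated_errors(y, ratio, rng)), 3))
    ```

    As in the study, cross-validated performance deteriorates as the ratio of simulated experimental errors grows.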

  4. Automatic abdominal multi-organ segmentation using deep convolutional neural network and time-implicit level sets.

    Science.gov (United States)

    Hu, Peijun; Wu, Fa; Peng, Jialin; Bao, Yuanyuan; Chen, Feng; Kong, Dexing

    2017-03-01

    Multi-organ segmentation from CT images is an essential step for computer-aided diagnosis and surgery planning. However, manual delineation of the organs by radiologists is tedious, time-consuming and poorly reproducible. Therefore, we propose a fully automatic method for the segmentation of multiple organs from three-dimensional abdominal CT images. The proposed method employs deep fully convolutional neural networks (CNNs) for organ detection and segmentation, which is further refined by a time-implicit multi-phase evolution method. Firstly, a 3D CNN is trained to automatically localize and delineate the organs of interest with a probability prediction map. The learned probability map provides both subject-specific spatial priors and initialization for subsequent fine segmentation. Then, for the refinement of the multi-organ segmentation, image intensity models, probability priors as well as a disjoint region constraint are incorporated into a unified energy functional. Finally, a novel time-implicit multi-phase level-set algorithm is utilized to efficiently optimize the proposed energy functional model. Our method has been evaluated on 140 abdominal CT scans for the segmentation of four organs (liver, spleen and both kidneys). With respect to the ground truth, average Dice overlap ratios for the liver, spleen and both kidneys are 96.0, 94.2 and 95.4%, respectively, and average symmetric surface distance is less than 1.3 mm for all the segmented organs. The computation time for a CT volume is 125 s on average. The achieved accuracy compares well to state-of-the-art methods with much higher efficiency. A fully automatic method for multi-organ segmentation from abdominal CT images was developed and evaluated. The results demonstrated its potential in clinical usage with high effectiveness, robustness and efficiency.

  5. Stochastic Models for Low Level DNA Mixtures

    Czech Academy of Sciences Publication Activity Database

    Slovák, Dalibor; Zvárová, Jana

    2012-01-01

    Roč. 8, č. 5 (2012), s. 25-30 ISSN 1801-5603 Grant - others:GA UK(CZ) SVV-2012-264513 Institutional support: RVO:67985807 Keywords : forensic DNA interpretation * low level samples * allele peak areas * dropout probability Subject RIV: IN - Informatics, Computer Science http://www.ejbi.org/img/ejbi/2012/5/Slovak_en.pdf

  6. Stochastic Models for Low Level DNA Mixtures

    Czech Academy of Sciences Publication Activity Database

    Slovák, Dalibor; Zvárová, Jana

    2013-01-01

    Roč. 1, č. 1 (2013), s. 28-28 ISSN 1805-8698. [EFMI 2013 Special Topic Conference. 17.04.2013-19.04.2013, Prague] Institutional support: RVO:67985807 Keywords : forensic DNA interpretation * low level samples * allele peak heights * dropout probability Subject RIV: IN - Informatics, Computer Science

  7. Semiparametric Copula Models for Biometric Score Level

    NARCIS (Netherlands)

    Caselli, M.

    2016-01-01

    In biometric recognition systems, biometric samples (images of faces, finger- prints, voices, gaits, etc.) of people are compared and classifiers (matchers) indicate the level of similarity between any pair of samples by a score. If two samples of the same person are compared, a genuine score is

  8. Basic priority rating model 2.0: current applications for priority setting in health promotion practice.

    Science.gov (United States)

    Neiger, Brad L; Thackeray, Rosemary; Fagen, Michael C

    2011-03-01

    Priority setting is an important component of systematic planning in health promotion and also factors into the development of a comprehensive evaluation plan. The basic priority rating (BPR) model was introduced more than 50 years ago and includes criteria that should be considered in any priority setting approach (i.e., use of predetermined criteria, standardized comparisons, and a rubric that controls bias). Although the BPR model has provided basic direction in priority setting, it does not represent the broad array of data currently available to decision makers. Elements in the model also give more weight to the impact of communicable diseases compared with chronic diseases. For these reasons, several modifications are recommended to improve the BPR model and to better assist health promotion practitioners in the priority setting process. The authors also suggest a new name, BPR 2.0, to represent this revised model.

  9. Level set method for computational multi-fluid dynamics: A review on ...

    Indian Academy of Sciences (India)

    to multi fluid/phase as well as to various types of two-phase flow. In the second ...... simulated bubble generation in a quiescent and co-flowing fluid, for various liquid-to-gas mean injection velocity at ... in modelling of droplet impact behaviour.

  10. The GRENE-TEA model intercomparison project (GTMIP) Stage 1 forcing data set

    Science.gov (United States)

    Sueyoshi, T.; Saito, K.; Miyazaki, S.; Mori, J.; Ise, T.; Arakida, H.; Suzuki, R.; Sato, A.; Iijima, Y.; Yabuki, H.; Ikawa, H.; Ohta, T.; Kotani, A.; Hajima, T.; Sato, H.; Yamazaki, T.; Sugimoto, A.

    2016-01-01

    Here, the authors describe the construction of a forcing data set for land surface models (including both physical and biogeochemical models; LSMs) with eight meteorological variables for the 35-year period from 1979 to 2013. The data set is intended for use in a model intercomparison study, called GTMIP, which is a part of the Japanese-funded Arctic Climate Change Research Project. In order to prepare a set of site-fitted forcing data for LSMs with realistic yet continuous entries (i.e. without missing data), four observational sites across the pan-Arctic region (Fairbanks, Tiksi, Yakutsk, and Kevo) were selected to construct a blended data set using both global reanalysis and observational data. Marked improvements were found in the diurnal cycles of surface air temperature and humidity, wind speed, and precipitation. The data sets and participation in GTMIP are open to the scientific community (doi:10.17592/001.2015093001).

  11. Implementing and measuring the level of laboratory service integration in a program setting in Nigeria.

    Directory of Open Access Journals (Sweden)

    Henry Mbah

    Full Text Available The surge of donor funds to fight the HIV/AIDS epidemic inadvertently resulted in the setup of laboratories as parallel structures to rapidly respond to the identified need. However, these parallel structures are a threat to the existing fragile laboratory systems. Laboratory service integration is critical to remedy this situation. This paper describes an approach to quantitatively measure and track integration of HIV-related laboratory services into the mainstream laboratory services and highlights some key intervention steps taken to enhance service integration. A quantitative before-and-after study was conducted in 122 Family Health International (FHI360)-supported health facilities across Nigeria. A minimum service package was identified, covering management structure; trainings; equipment utilization and maintenance; and information, commodity and quality management for laboratory integration. A checklist was used to assess facilities at baseline and at 3 months follow-up. Level of integration was assessed on an ordinal scale (0 = no integration, 1 = partial integration, 2 = full integration) for each service package. A composite score grading, expressed as a percentage of the total obtainable score of 14, was defined and used to classify facilities (≥80% full, 25% to 79% partial, and <25% no integration). Weaknesses were noted and addressed. We analyzed 9 (7.4%) primary, 104 (85.2%) secondary and 9 (7.4%) tertiary level facilities. There were statistically significant differences in integration levels between baseline and the 3-month follow-up period (p<0.01). The baseline median total integration score was 4 (IQR 3 to 5), compared to 7 (IQR 4 to 9) at 3 months follow-up (p = 0.000). Partially and fully integrated laboratory systems numbered 64 (52.5%) and 0 (0.0%) at baseline, compared to 100 (82.0%) and 3 (2.4%), respectively, at 3 months follow-up (p = 0.000). This project showcases our novel approach to measure the status of each laboratory on the integration continuum.
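
    The composite grading can be sketched as below; the thresholds come from the abstract, while the assumption of seven service packages (each rated 0-2, giving the stated maximum of 14) and the function name are ours:

    ```python
    def integration_status(package_scores, max_total=14):
        """Classify a facility on the integration continuum.

        `package_scores` holds the ordinal rating (0 = none, 1 = partial,
        2 = full) for each service package; with seven packages the
        maximum obtainable composite score is 14.
        """
        pct = 100 * sum(package_scores) / max_total
        if pct >= 80:
            return "FULL"
        if pct >= 25:
            return "PARTIAL"
        return "NONE"

    # A facility rated partial on most packages (composite score 7 of 14):
    print(integration_status([1, 1, 1, 1, 0, 2, 1]))
    ```

    A composite of 7/14 (50%) falls in the 25-79% band, so this hypothetical facility is classed as partially integrated.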

  12. Job satisfaction in nurses working in tertiary level health care settings of Islamabad, Pakistan.

    Science.gov (United States)

    Bahalkani, Habib Akhtar; Kumar, Ramesh; Lakho, Abdul Rehman; Mahar, Benazir; Mazhar, Syeda Batool; Majeed, Abdul

    2011-01-01

    Job satisfaction greatly determines the productivity and efficiency of human resources for health. It literally means: 'the extent to which health professionals like or dislike their jobs'. Job satisfaction is said to be linked with an employee's work environment, job responsibilities and powers, and time pressure among various health professionals. As such it affects employees' organizational commitment and consequently the quality of health services. The objective of this study was to determine the level of job satisfaction and the factors influencing it among nurses in a public sector hospital of Islamabad. A cross-sectional study with a self-administered structured questionnaire was conducted in the federal capital of Pakistan, Islamabad. The sample included 56 qualified nurses working in a tertiary care hospital. Overall, 86% of respondents were dissatisfied with their job, with about 26% highly dissatisfied. The work environment, poor fringe benefits, lack of dignity and responsibility given at the workplace, and time pressure were reasons for dissatisfaction. Poor work environment, low salaries, lack of training opportunities and proper supervision, time pressure and poor financial rewards were reported by the respondents. Our findings indicate a low level of overall satisfaction among workers in a public sector tertiary care health organization in Islamabad. Most of this dissatisfaction is caused by poor salaries, not being given due respect, poor work environment, unbalanced responsibilities with little overall control, time pressure, patient care and lack of opportunities for professional development.

  13. Comparison of Co-Temporal Modeling Algorithms on Sparse Experimental Time Series Data Sets.

    Science.gov (United States)

    Allen, Edward E; Norris, James L; John, David J; Thomas, Stan J; Turkett, William H; Fetrow, Jacquelyn S

    2010-01-01

    Multiple approaches for reverse-engineering biological networks from time-series data have been proposed in the computational biology literature. These approaches can be classified by their underlying mathematical algorithms, such as Bayesian or algebraic techniques, as well as by their time paradigm, which includes next-state and co-temporal modeling. The types of biological relationships, such as parent-child or siblings, discovered by these algorithms are quite varied. It is important to understand the strengths and weaknesses of the various algorithms and time paradigms on actual experimental data. We assess how well the co-temporal implementations of three algorithms, continuous Bayesian, discrete Bayesian, and computational algebraic, can 1) identify two types of entity relationships, parent and sibling, between biological entities, 2) deal with experimental sparse time course data, and 3) handle experimental noise seen in replicate data sets. These algorithms are evaluated, using the shuffle index metric, for how well the resulting models match literature models in terms of siblings and parent relationships. Results indicate that all three co-temporal algorithms perform well, at a statistically significant level, at finding sibling relationships, but perform relatively poorly in finding parent relationships.

  14. Comparing of goal setting strategy with group education method to increase physical activity level: A randomized trial.

    Science.gov (United States)

    Jiryaee, Nasrin; Siadat, Zahra Dana; Zamani, Ahmadreza; Taleban, Roya

    2015-10-01

    An intervention to increase physical activity should be based on the health care setting's resources and be acceptable to the subject group. This study was designed to assess and compare the effect of the goal-setting strategy with a group education method on increasing the physical activity of mothers of children aged 1 to 5. Mothers who had at least one child of 1-5 years were randomized into two groups. The effect of 1) a goal-setting strategy and 2) a group education method on increasing physical activity was assessed and compared 1 month and 3 months after the intervention. Also, the weight, height, body mass index (BMI), waist and hip circumference, and well-being were compared between the two groups before and after the intervention. Physical activity level increased significantly after the intervention in the goal-setting group and was significantly different between the two groups after the intervention (P < 0.05). BMI, waist circumference, hip circumference, and well-being score were significantly different in the goal-setting group after the intervention. In the group education method, only the well-being score improved significantly (P < 0.05). Our study presented the effects of using the goal-setting strategy to boost physical activity, improving the state of well-being and decreasing BMI, waist, and hip circumference.

  15. Comparing of goal setting strategy with group education method to increase physical activity level: A randomized trial

    Directory of Open Access Journals (Sweden)

    Nasrin Jiryaee

    2015-01-01

    Full Text Available Background: An intervention to increase physical activity should be based on the health care setting's resources and be acceptable to the subject group. This study was designed to assess and compare the effect of the goal-setting strategy with a group education method on increasing the physical activity of mothers of children aged 1 to 5. Materials and Methods: Mothers who had at least one child of 1-5 years were randomized into two groups. The effect of 1) a goal-setting strategy and 2) a group education method on increasing physical activity was assessed and compared 1 month and 3 months after the intervention. Also, the weight, height, body mass index (BMI), waist and hip circumference, and well-being were compared between the two groups before and after the intervention. Results: Physical activity level increased significantly after the intervention in the goal-setting group and was significantly different between the two groups after the intervention (P < 0.05). BMI, waist circumference, hip circumference, and well-being score were significantly different in the goal-setting group after the intervention. In the group education method, only the well-being score improved significantly (P < 0.05). Conclusion: Our study presented the effects of using the goal-setting strategy to boost physical activity, improving the state of well-being and decreasing BMI, waist, and hip circumference.

  16. What Time Is Sunrise? Revisiting the Refraction Component of Sunrise/set Prediction Models

    Science.gov (United States)

    Wilson, Teresa; Bartlett, Jennifer L.; Hilton, James Lindsay

    2017-01-01

    Algorithms that predict sunrise and sunset times currently have an error of one to four minutes at mid-latitudes (0° - 55° N/S) due to limitations in the atmospheric models they incorporate. At higher latitudes, slight changes in refraction can cause significant discrepancies, even including difficulties determining when the Sun appears to rise or set. While different components of refraction are known, how they affect predictions of sunrise/set has not yet been quantified. A better understanding of the contributions from temperature profile, pressure, humidity, and aerosols could significantly improve the standard prediction. We present a sunrise/set calculator that interchanges the refraction component by varying the refraction model. We then compare these predictions with data sets of observed rise/set times to create a better model. Sunrise/set times and meteorological data from multiple locations will be necessary for a thorough investigation of the problem. While there are a few data sets available, we will also begin collecting this data using smartphones as part of a citizen science project. The mobile application for this project will be available in the Google Play store. Data analysis will lead to more complete models that will provide more accurate rise/set times for the benefit of astronomers, navigators, and outdoorsmen everywhere.
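
    The standard prediction that these algorithms refine can be sketched as follows: the Sun is taken to rise when its center sits 16 arcmin (semidiameter) plus a fixed refraction allowance of 34 arcmin below the geometric horizon, i.e. at standard altitude -0.833°. The function name and sample latitude are illustrative; the point is that a few arcminutes of refraction translate into minutes of clock time:

    ```python
    import math

    def sunrise_hour_angle(lat_deg, decl_deg, refraction_arcmin=34.0):
        """Hour angle of sunrise for a given latitude and solar declination.

        Uses the standard-altitude model: the Sun 'rises' when its center is
        16 arcmin (semidiameter) + `refraction_arcmin` below the geometric
        horizon. Returns the hour angle in degrees (15 degrees = 1 hour).
        """
        h0 = -(16.0 + refraction_arcmin) / 60.0  # standard altitude, degrees
        lat, decl, h0 = map(math.radians, (lat_deg, decl_deg, h0))
        cos_omega = (math.sin(h0) - math.sin(lat) * math.sin(decl)) / (
            math.cos(lat) * math.cos(decl))
        return math.degrees(math.acos(cos_omega))

    # At 45 N on an equinox (declination 0), varying the refraction term by
    # a few arcminutes shifts the predicted sunrise by minutes of time:
    for r in (28.0, 34.0, 40.0):
        omega = sunrise_hour_angle(45.0, 0.0, refraction_arcmin=r)
        print(r, round((omega - 90.0) * 4, 1))  # minutes before geometric sunrise
    ```

    At higher latitudes the horizon crossing is shallower, so the same refraction variation produces much larger time discrepancies, which is why the fixed 34-arcmin allowance breaks down there.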

  17. Optimal Interest-Rate Setting in a Dynamic IS/AS Model

    DEFF Research Database (Denmark)

    Jensen, Henrik

    2011-01-01

    This note deals with interest-rate setting in a simple dynamic macroeconomic setting. The purpose is to present some basic and central properties of an optimal interest-rate rule. The model framework predates the New-Keynesian paradigm of the late 1990s and onwards (it is accordingly dubbed “Old...

  18. USE OF ROUGH SETS AND SPECTRAL DATA FOR BUILDING PREDICTIVE MODELS OF REACTION RATE CONSTANTS

    Science.gov (United States)

    A model for predicting the log of the rate constants for alkaline hydrolysis of organic esters has been developed with the use of gas-phase mid-infrared library spectra and a rule-building software system based on the mathematical theory of rough sets. A diverse set of 41 esters ...

  19. Stabilizing model predictive control : on the enlargement of the terminal set

    NARCIS (Netherlands)

    Brunner, F.D.; Lazar, M.; Allgöwer, F.

    2015-01-01

    It is well known that a large terminal set leads to a large region where the model predictive control problem is feasible without the need for a long prediction horizon. This paper proposes a new method for the enlargement of the terminal set. Different from existing approaches, the method uses the

  20. Low-level HIV-1 replication and the dynamics of the resting CD4+ T cell reservoir for HIV-1 in the setting of HAART

    Directory of Open Access Journals (Sweden)

    Wilke Claus O

    2008-01-01

    Full Text Available Background In the setting of highly active antiretroviral therapy (HAART), plasma levels of human immunodeficiency virus type-1 (HIV-1) rapidly decay to below the limit of detection of standard clinical assays. However, reactivation of remaining latently infected memory CD4+ T cells is a source of continued virus production, forcing patients to remain on HAART despite clinically undetectable viral loads. Unfortunately, the latent reservoir decays slowly, with a half-life of up to 44 months, making it the major known obstacle to the eradication of HIV-1 infection. However, the mechanism underlying the long half-life of the latent reservoir is unknown. The most likely potential mechanisms are low-level viral replication and the intrinsic stability of latently infected cells. Methods Here we use a mathematical model of T cell dynamics in the setting of HIV-1 infection to probe the decay characteristics of the latent reservoir upon initiation of HAART. We compare the behavior of this model to patient-derived data in order to gain insight into the role of low-level viral replication in the setting of HAART. Results By comparing the behavior of our model to patient-derived data, we find that the viral dynamics observed in patients on HAART could be consistent with low-level viral replication but that this replication would not significantly affect the decay rate of the latent reservoir. Rather than low-level replication, the intrinsic stability of latently infected cells and the rate at which they are reactivated primarily determine the observed reservoir decay rate according to the predictions of our model. Conclusion The intrinsic stability of the latent reservoir has important implications for efforts to eradicate HIV-1 infection and suggests that intensified HAART would not accelerate the decay of the latent reservoir.
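
    The quoted 44-month half-life makes plain why the reservoir is the obstacle to eradication; a back-of-envelope exponential-decay calculation (the reservoir size of one million cells is an illustrative assumption, not a figure from the study):

    ```python
    import math

    HALF_LIFE_MONTHS = 44.0                      # reservoir half-life cited above
    decay_rate = math.log(2) / HALF_LIFE_MONTHS  # per-month exponential decay rate

    def time_to_clear(initial_cells, threshold=1.0):
        """Months for an exponentially decaying reservoir to fall below `threshold` cells."""
        return math.log(initial_cells / threshold) / decay_rate

    # Assuming an illustrative reservoir of one million latently infected cells:
    months = time_to_clear(1e6)
    print(round(months / 12, 1))  # years on HAART before the reservoir decays away
    ```

    Clearing a million-cell reservoir at this rate would take roughly log2(10^6) ≈ 20 half-lives, i.e. on the order of seven decades of uninterrupted HAART.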

  1. Low-level HIV-1 replication and the dynamics of the resting CD4+ T cell reservoir for HIV-1 in the setting of HAART

    Science.gov (United States)

    Sedaghat, Ahmad R; Siliciano, Robert F; Wilke, Claus O

    2008-01-01

    Background In the setting of highly active antiretroviral therapy (HAART), plasma levels of human immunodeficiency virus type-1 (HIV-1) rapidly decay to below the limit of detection of standard clinical assays. However, reactivation of remaining latently infected memory CD4+ T cells is a source of continued virus production, forcing patients to remain on HAART despite clinically undetectable viral loads. Unfortunately, the latent reservoir decays slowly, with a half-life of up to 44 months, making it the major known obstacle to the eradication of HIV-1 infection. However, the mechanism underlying the long half-life of the latent reservoir is unknown. The most likely potential mechanisms are low-level viral replication and the intrinsic stability of latently infected cells. Methods Here we use a mathematical model of T cell dynamics in the setting of HIV-1 infection to probe the decay characteristics of the latent reservoir upon initiation of HAART. We compare the behavior of this model to patient-derived data in order to gain insight into the role of low-level viral replication in the setting of HAART. Results By comparing the behavior of our model to patient-derived data, we find that the viral dynamics observed in patients on HAART could be consistent with low-level viral replication but that this replication would not significantly affect the decay rate of the latent reservoir. Rather than low-level replication, the intrinsic stability of latently infected cells and the rate at which they are reactivated primarily determine the observed reservoir decay rate according to the predictions of our model. Conclusion The intrinsic stability of the latent reservoir has important implications for efforts to eradicate HIV-1 infection and suggests that intensified HAART would not accelerate the decay of the latent reservoir. PMID:18171475

  2. An open-loop, physiologic model-based decision support system can provide appropriate ventilator settings

    DEFF Research Database (Denmark)

    Karbing, Dan Stieper; Spadaro, Savino; Dey, Nilanjan

    2018-01-01

    OBJECTIVES: To evaluate the physiologic effects of applying advice on mechanical ventilation by an open-loop, physiologic model-based clinical decision support system. DESIGN: Prospective, observational study. SETTING: University and Regional Hospitals' ICUs. PATIENTS: Varied adult ICU population...

  3. A possible methodological approach to setting up control level of radiation factors

    International Nuclear Information System (INIS)

    Devyatajkin, E.V.; Abramov, Yu.V.

    1986-01-01

    A mathematical formalization of the concept of control levels (CL) is described; it enables one to obtain numerical CL values of the controllable parameters required for rapid control purposes. The initial data for the assessment of environmental radioactivity are the controllable parameter values, i.e. practical characteristics of the controllable radiation factor that are technically measurable or calculable. The controllable parameters can be divided into two classes depending on the degree of radiation effect on humans: those possessing additivity properties (the dosimetric class) and those that do not (the radiation class, which comprises the results of monitoring the dynamics of changes in the medium, equipment operation safety, and the completeness of protective measures). CL calculation formulas that account for the requirements of the radiation safety standards (RSS-76) are presented.

  4. High Levels of Post-Abortion Complication in a Setting Where Abortion Service Is Not Legalized

    Science.gov (United States)

    Melese, Tadele; Habte, Dereje; Tsima, Billy M.; Mogobe, Keitshokile Dintle; Chabaesele, Kesegofetse; Rankgoane, Goabaone; Keakabetse, Tshiamo R.; Masweu, Mabole; Mokotedi, Mosidi; Motana, Mpho; Moreri-Ntshabele, Badani

    2017-01-01

    Background Maternal mortality due to abortion complications stands among the three leading causes of maternal death in Botswana, where there is a restrictive abortion law. This study aimed at assessing the patterns and determinants of post-abortion complications. Methods A retrospective institution-based cross-sectional study was conducted at four hospitals from January to August 2014. Data were extracted from patients’ records with regards to their socio-demographic variables, abortion complications and length of hospital stay. Descriptive statistics and bivariate analysis were employed. Results A total of 619 patients’ records were reviewed with a mean (SD) age of 27.12 (5.97) years. The majority of abortions (95.5%) were reported to be spontaneous and 3.9% of the abortions were induced by the patient. Two thirds of the patients were admitted at their first visit to the hospitals and one third were referrals from other health facilities. Two thirds of the patients were admitted as a result of incomplete abortion, followed by inevitable abortion (16.8%). Offensive vaginal discharge (17.9%), tender uterus (11.3%), septic shock (3.9%) and pelvic peritonitis (2.4%) were among the physical findings recorded on admission. Clinically detectable anaemia evidenced by pallor was found to be the leading major complication in 193 (31.2%) of the cases, followed by hypovolemic and septic shock in 65 (10.5%). There were a total of 9 abortion-related deaths with a case fatality rate of 1.5%. Self-induced abortion and delayed uterine evacuation of more than six hours were found to have significant association with post-abortion complications (p-values of 0.018 and 0.035, respectively). Conclusion Abortion-related complications and deaths are high in our setting where abortion is illegal. Mechanisms need to be devised in the health facilities to evacuate the uterus in good time whenever it is indicated and to be equipped to handle the fatal complications. There is an indication for clinical audit on post-abortion care.

  5. High Levels of Post-Abortion Complication in a Setting Where Abortion Service Is Not Legalized.

    Directory of Open Access Journals (Sweden)

    Tadele Melese

    Full Text Available Maternal mortality due to abortion complications stands among the three leading causes of maternal death in Botswana where there is a restrictive abortion law. This study aimed at assessing the patterns and determinants of post-abortion complications. A retrospective institution based cross-sectional study was conducted at four hospitals from January to August 2014. Data were extracted from patients' records with regards to their socio-demographic variables, abortion complications and length of hospital stay. Descriptive statistics and bivariate analysis were employed. A total of 619 patients' records were reviewed with a mean (SD) age of 27.12 (5.97) years. The majority of abortions (95.5%) were reported to be spontaneous and 3.9% of the abortions were induced by the patient. Two thirds of the patients were admitted at their first visit to the hospitals and one third were referrals from other health facilities. Two thirds of the patients were admitted as a result of incomplete abortion followed by inevitable abortion (16.8%). Offensive vaginal discharge (17.9%), tender uterus (11.3%), septic shock (3.9%) and pelvic peritonitis (2.4%) were among the physical findings recorded on admission. Clinically detectable anaemia evidenced by pallor was found to be the leading major complication in 193 (31.2%) of the cases followed by hypovolemic and septic shock in 65 (10.5%). There were a total of 9 abortion related deaths with a case fatality rate of 1.5%. Self-induced abortion and delayed uterine evacuation of more than six hours were found to have significant association with post-abortion complications (p-values of 0.018 and 0.035 respectively). Abortion related complications and deaths are high in our setting where abortion is illegal. Mechanisms need to be devised in the health facilities to evacuate the uterus in good time whenever it is indicated and to be equipped to handle the fatal complications. There is an indication for clinical audit on post-abortion care.

  6. Analysis model for forecasting extreme temperature using refined rank set pair

    Directory of Open Access Journals (Sweden)

    Qiao Ling-Xia

    2013-01-01

    Full Text Available In order to improve the precision of forecasting extreme temperature time series, a refined rank set pair analysis model with a refined rank transformation function is proposed. The measured values of the annual highest July temperature in two Chinese cities, Taiyuan and Shijiazhuang, are used to examine the performance of the refined rank set pair model.

  7. Using Mathematical Modeling and Set-Based Design Principles to Recommend an Existing CVL Design

    Science.gov (United States)

    2017-09-01

    Using Mathematical Modeling and Set-Based Design Principles to Recommend an Existing CVL Design, by William H. Ehlies. Master's thesis, September 2017.

  8. Evaluation and comparison of alternative fleet-level selective maintenance models

    International Nuclear Information System (INIS)

    Schneider, Kellie; Richard Cassady, C.

    2015-01-01

    Fleet-level selective maintenance refers to the process of identifying the subset of maintenance actions to perform on a fleet of repairable systems when the maintenance resources allocated to the fleet are insufficient for performing all desirable maintenance actions. The original fleet-level selective maintenance model is designed to maximize the probability that all missions in a future set are completed successfully. We extend this model in several ways. First, we consider a cost-based optimization model and show that a special case of this model maximizes the expected value of the number of successful missions in the future set. We also consider the situation in which one or more of the future missions may be canceled. These models and the original fleet-level selective maintenance optimization models are nonlinear. Therefore, we also consider an alternative model in which the objective function can be linearized. We show that the alternative model is a good approximation to the other models. - Highlights: • Investigate nonlinear fleet-level selective maintenance optimization models. • A cost based model is used to maximize the expected number of successful missions. • Another model is allowed to cancel missions if reliability is sufficiently low. • An alternative model has an objective function that can be linearized. • We show that the alternative model is a good approximation to the other models
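The contrast between the nonlinear objective (probability that all missions succeed) and its linearizable counterpart (expected number of successful missions) can be sketched with a brute-force toy selection; all probabilities, costs and the budget below are hypothetical, and the paper's models are far richer:

```python
from itertools import combinations

# Hypothetical four-system fleet (not from the paper): mission success
# probability without maintenance, probability if maintained, and cost.
base     = [0.70, 0.60, 0.80, 0.65]
improved = [0.95, 0.90, 0.92, 0.88]
cost     = [3, 4, 2, 5]
budget   = 7            # insufficient to maintain every system

def probs(chosen):
    return [improved[i] if i in chosen else base[i] for i in range(len(base))]

def all_missions_succeed(chosen):
    # original objective: probability that every mission succeeds (nonlinear)
    p = 1.0
    for q in probs(chosen):
        p *= q
    return p

def expected_successes(chosen):
    # cost-based special case: expected number of successful missions,
    # which is linear in the selection and hence easier to optimize
    return sum(probs(chosen))

feasible = [set(c) for r in range(len(base) + 1)
            for c in combinations(range(len(base)), r)
            if sum(cost[i] for i in c) <= budget]

best_by_prob   = max(feasible, key=all_missions_succeed)
best_by_expect = max(feasible, key=expected_successes)
print(best_by_prob, best_by_expect)
```

In this tiny instance both objectives select the same subset; the abstract's point is that the linear objective remains tractable for realistic fleets, where enumeration is impossible.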

  9. Modeling category-level purchase timing with brand-level marketing variables

    NARCIS (Netherlands)

    D. Fok (Dennis); R. Paap (Richard)

    2003-01-01

    Purchase timing of households is usually modeled at the category level. Marketing efforts are, however, only available at the brand level. Hence, to describe category-level interpurchase times using marketing efforts one has to construct a category-level measure of marketing efforts from

  10. Setting up experimental incineration system for low-level radioactive samples and combustion experiments

    International Nuclear Information System (INIS)

    Yumoto, Yasuhiro; Hanafusa, Tadashi; Nagamatsu, Tomohiro; Okada, Shigeru

    1997-01-01

    An incineration system was constructed, composed of a combustion furnace (AP-150R), a cyclone dust collector, radioisotope trapping and measurement apparatus, and a radioisotope storage room built in the first basement of the Radioisotope Center. Low-level radioactive samples (LLRS) used for the combustion experiments were composed of combustible or semi-combustible material containing 500 kBq of 99mTcO4 or 23.25 kBq of 131INa. The distribution of radioisotopes both inside and outside the combustion furnace was estimated. We measured the radioactivity of the flue gas at the terminal exit of the exhaust port. In the case of combustion of LLRS containing 99mTcO4 or 131INa, the concentration of radioisotopes at the exhaust port was below the legal concentration limit for these radioisotopes. In these cases, the decontamination factors of the incineration system were 120 and 1.1, respectively. (author)

  11. Implementing and measuring the level of laboratory service integration in a program setting in Nigeria.

    Science.gov (United States)

    Mbah, Henry; Negedu-Momoh, Olubunmi Ruth; Adedokun, Oluwasanmi; Ikani, Patrick Anibbe; Balogun, Oluseyi; Sanwo, Olusola; Ochei, Kingsley; Ekanem, Maurice; Torpey, Kwasi

    2014-01-01

    The surge of donor funds to fight the HIV/AIDS epidemic inadvertently resulted in the setup of laboratories as parallel structures to rapidly respond to the identified need. However, these parallel structures are a threat to the existing fragile laboratory systems. Laboratory service integration is critical to remedy this situation. This paper describes an approach to quantitatively measure and track integration of HIV-related laboratory services into the mainstream laboratory services, and highlights some key intervention steps taken to enhance service integration. A quantitative before-and-after study was conducted in 122 Family Health International (FHI360) supported health facilities across Nigeria. A minimum service package was identified, including management structure; trainings; equipment utilization and maintenance; and information, commodity and quality management for laboratory integration. A checklist was used to assess facilities at baseline and at 3 months follow-up. Level of integration was assessed on an ordinal scale (0 = no integration, 1 = partial integration, 2 = full integration) for each service package. A composite score grading, expressed as a percentage of the total obtainable score of 14, was defined and used to classify facilities (≥80% FULL, 25% to 79% PARTIAL, <25% NO integration). Facilities with partially and fully integrated laboratory systems were 64 (52.5%) and 0 (0.0%) at baseline, compared to 100 (82.0%) and 3 (2.4%) respectively at 3 months follow-up (p = 0.000). This project showcases our novel approach to measure the status of each laboratory on the integration continuum.

  12. Description of a practice model for pharmacist medication review in a general practice setting

    DEFF Research Database (Denmark)

    Brandt, Mette; Hallas, Jesper; Hansen, Trine Graabæk

    2014-01-01

    BACKGROUND: Practical descriptions of procedures used for pharmacists' medication reviews are sparse. OBJECTIVE: To describe a model for medication review by pharmacists tailored to a general practice setting. METHODS: A stepwise model is described. The model is based on data from the medical chart...... no indication (n=47, 23%). Most interventions were aimed at cardiovascular drugs. CONCLUSION: We have provided a detailed description of a practical approach to pharmacists' medication review in a GP setting. The model was tested and found to be usable, and to deliver a medication review with high acceptance...

  13. Joint Clustering and Component Analysis of Correspondenceless Point Sets: Application to Cardiac Statistical Modeling.

    Science.gov (United States)

    Gooya, Ali; Lekadir, Karim; Alba, Xenia; Swift, Andrew J; Wild, Jim M; Frangi, Alejandro F

    2015-01-01

    Construction of Statistical Shape Models (SSMs) from arbitrary point sets is a challenging problem due to significant shape variation and lack of explicit point correspondence across the training data set. In medical imaging, point sets can generally represent different shape classes that span healthy and pathological exemplars. In such cases, the constructed SSM may not generalize well, largely because the probability density function (pdf) of the point sets deviates from the underlying assumption of Gaussian statistics. To this end, we propose a generative model for unsupervised learning of the pdf of point sets as a mixture of distinctive classes. A Variational Bayesian (VB) method is proposed for making joint inferences on the labels of point sets and the principal modes of variation in each cluster. The method provides a flexible framework to handle point sets with no explicit point-to-point correspondences. We also show that by maximizing the marginalized likelihood of the model, the optimal number of clusters of point sets can be determined. We illustrate this work in the context of understanding the anatomical phenotype of the left and right ventricles of the heart. For this purpose, we use a database containing hearts of healthy subjects, patients with Pulmonary Hypertension (PH), and patients with Hypertrophic Cardiomyopathy (HCM). We demonstrate that our method can outperform traditional PCA in both generalization and specificity measures.
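The idea of letting the marginalized likelihood select the number of clusters can be illustrated with a much simpler stand-in: plain EM for a 1-D Gaussian mixture scored by BIC. The paper instead uses variational Bayes on point sets; all data below are synthetic.

```python
import math, random
random.seed(0)

# Synthetic 1-D shape scores drawn from two distinct classes,
# standing in for point-set features; values are illustrative only.
data = ([random.gauss(0.0, 1.0) for _ in range(150)] +
        [random.gauss(6.0, 1.0) for _ in range(150)])

def normal_pdf(x, m, s):
    return math.exp(-(x - m) ** 2 / (2 * s * s)) / (s * math.sqrt(2 * math.pi))

def fit_mixture(xs, k, iters=60):
    # plain EM for a k-component Gaussian mixture (a simple stand-in for the
    # paper's variational Bayes inference); returns the final log-likelihood
    lo, hi = min(xs), max(xs)
    mus = [lo + (i + 0.5) * (hi - lo) / k for i in range(k)]
    sigmas, weights = [1.0] * k, [1.0 / k] * k
    for _ in range(iters):
        resp = []
        for x in xs:
            dens = [w * normal_pdf(x, m, s) for w, m, s in zip(weights, mus, sigmas)]
            tot = sum(dens) or 1.0
            resp.append([d / tot for d in dens])
        for j in range(k):
            nj = max(sum(r[j] for r in resp), 1e-9)
            weights[j] = nj / len(xs)
            mus[j] = sum(r[j] * x for r, x in zip(resp, xs)) / nj
            var = sum(r[j] * (x - mus[j]) ** 2 for r, x in zip(resp, xs)) / nj
            sigmas[j] = max(math.sqrt(var), 1e-3)
    return sum(math.log(sum(w * normal_pdf(x, m, s)
                            for w, m, s in zip(weights, mus, sigmas)) + 1e-300)
               for x in xs)

def bic(xs, k):
    # 3k - 1 free parameters (weights, means, sigmas); lower BIC is better
    return (3 * k - 1) * math.log(len(xs)) - 2 * fit_mixture(xs, k)

best_k = min((1, 2, 3), key=lambda k: bic(data, k))
print(best_k)
```

BIC plays the role of the marginalized likelihood here: the penalty term stops the extra third component from paying for its additional parameters.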

  14. Simulation modelling as a tool for knowledge mobilisation in health policy settings: a case study protocol.

    Science.gov (United States)

    Freebairn, L; Atkinson, J; Kelly, P; McDonnell, G; Rychetnik, L

    2016-09-21

    Evidence-informed decision-making is essential to ensure that health programs and services are effective and offer value for money; however, barriers to the use of evidence persist. Emerging systems science approaches and advances in technology are providing new methods and tools to facilitate evidence-based decision-making. Simulation modelling offers a unique tool for synthesising and leveraging existing evidence, data and expert local knowledge to examine, in a robust, low risk and low cost way, the likely impact of alternative policy and service provision scenarios. This case study will evaluate participatory simulation modelling to inform the prevention and management of gestational diabetes mellitus (GDM). The risks associated with GDM are well recognised; however, debate remains regarding diagnostic thresholds and whether screening and treatment to reduce maternal glucose levels reduce the associated risks. A diagnosis of GDM may provide a leverage point for multidisciplinary lifestyle modification interventions. This research will apply and evaluate a simulation modelling approach to understand the complex interrelation of factors that drive GDM rates, test options for screening and interventions, and optimise the use of evidence to inform policy and program decision-making. The study design will use mixed methods to achieve the objectives. Policy, clinical practice and research experts will work collaboratively to develop, test and validate a simulation model of GDM in the Australian Capital Territory (ACT). The model will be applied to support evidence-informed policy dialogues with diverse stakeholders for the management of GDM in the ACT. Qualitative methods will be used to evaluate simulation modelling as an evidence synthesis tool to support evidence-based decision-making. Interviews and analysis of workshop recordings will focus on the participants' engagement in the modelling process; perceived value of the participatory process, perceived

  15. A topology optimization method based on the level set method for the design of negative permeability dielectric metamaterials

    DEFF Research Database (Denmark)

    Otomori, Masaki; Yamada, Takayuki; Izui, Kazuhiro

    2012-01-01

    This paper presents a level set-based topology optimization method for the design of negative permeability dielectric metamaterials. Metamaterials are artificial materials that display extraordinary physical properties that are unavailable with natural materials. The aim of the formulated...... optimization problem is to find optimized layouts of a dielectric material that achieve negative permeability. The presence of grayscale areas in the optimized configurations critically affects the performance of metamaterials, positively as well as negatively, but configurations that contain grayscale areas...... are highly impractical from an engineering and manufacturing point of view. Therefore, a topology optimization method that can obtain clear optimized configurations is desirable. Here, a level set-based topology optimization method incorporating a fictitious interface energy is applied to a negative...

  16. Wind-Induced Air-Flow Patterns in an Urban Setting: Observations and Numerical Modeling

    Science.gov (United States)

    Sattar, Ahmed M. A.; Elhakeem, Mohamed; Gerges, Bishoy N.; Gharabaghi, Bahram; Gultepe, Ismail

    2018-04-01

    City planning can have a significant effect on wind flow velocity patterns and thus natural ventilation. Buildings with different heights are roughness elements that can affect the near- and far-field wind flow velocity. This paper aims at investigating the impact of an increase in building height on the nearby velocity fields. A prototype urban setting of buildings with two different heights (25 and 62.5 cm) is built and placed in a wind tunnel. Wind flow velocity around the buildings is mapped at different heights. Wind tunnel measurements are used to validate a 3D numerical Reynolds-averaged Navier-Stokes model. The validated model is further used to calculate the wind flow velocity patterns for cases with different building heights. It was found that increasing the height of some buildings in an urban setting can lead to the formation of large horseshoe vortices and eddies around building corners. A separation area is formed at the leeward side of the building, and the recirculation of air behind the building leads to the formation of slowly rotating vortices. The opposite effect is observed in the wake (cavity) region of the buildings, where both the cavity length and width are significantly reduced, resulting in a pronounced increase in the wind flow velocity. A significant increase in the wind flow velocity in the wake region of tall buildings, with a value of up to 30%, is observed. The spatially averaged velocities around short buildings also increased by 25% compared to those around buildings with different heights. The increase in the height of some buildings is found to have a positive effect on wind ventilation at the pedestrian level.

  17. Numerical simulation of interface movement in gas-liquid two-phase flows with Level Set method

    International Nuclear Information System (INIS)

    Li Huixiong; Chinese Academy of Sciences, Beijing; Deng Sheng; Chen Tingkuan; Zhao Jianfu; Wang Fei

    2005-01-01

    Numerical simulation of gas-liquid two-phase flow and heat transfer has attracted interest for a long time, but it remains difficult owing to the inherent complexity of gas-liquid two-phase flows, which results from the existence of moving interfaces with topology changes. This paper reports the effort and the latest advances made by the authors, with special emphasis on methods for computing solutions to the advection equation of the level set function, which is used to capture the moving interfaces in gas-liquid two-phase flows. Three different schemes, i.e. a simple finite difference scheme, the Superbee-TVD scheme and the 5th-order WENO scheme, in combination with the Runge-Kutta method, are applied to solve the advection equation of the level set function. A numerical procedure based on the well-verified SIMPLER method is employed to solve the momentum equations of the two-phase flow. The three schemes are employed to simulate the movement of four typical interfaces under five typical flow conditions. Analysis of the numerical results shows that the 5th-order WENO scheme and the Superbee-TVD scheme are much better than the simple finite difference scheme, and that the 5th-order WENO scheme is the best for computing solutions to the advection equation of the level set function. The 5th-order WENO scheme will be employed as the main scheme for solving the advection equation of the level set function in future numerical studies of gas-liquid two-phase flows. (authors)
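A minimal 1-D illustration of the advection step the schemes compete on, using the simple first-order upwind difference (the weakest of the three schemes compared; Superbee-TVD and 5th-order WENO replace the one-sided difference with less diffusive reconstructions):

```python
# Transport a 1-D level set function phi under constant velocity u:
# phi_t + u * phi_x = 0, discretized with first-order upwind differences.
n, dx, u, dt = 200, 0.01, 1.0, 0.005      # CFL number u*dt/dx = 0.5
phi = [i * dx - 0.5 for i in range(n)]    # signed distance; interface at x = 0.5

for _ in range(100):                      # advance to t = 0.5
    new = phi[:]
    for i in range(1, n):
        new[i] = phi[i] - u * dt / dx * (phi[i] - phi[i - 1])  # upwind for u > 0
    phi = new

# the zero level set (the captured interface) should now sit near x = 1.0
cross = next(i * dx for i in range(n - 1) if phi[i] <= 0.0 <= phi[i + 1])
print(round(cross, 2))
```

On this linear profile upwinding is essentially exact; on curved interfaces its numerical diffusion smears the front, which is what motivates the TVD and WENO reconstructions in the abstract.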

  18. Moderation analysis using a two-level regression model.

    Science.gov (United States)

    Yuan, Ke-Hai; Cheng, Ying; Maxwell, Scott

    2014-10-01

    Moderation analysis is widely used in social and behavioral research. The most commonly used model for moderation analysis is moderated multiple regression (MMR) in which the explanatory variables of the regression model include product terms, and the model is typically estimated by least squares (LS). This paper argues for a two-level regression model in which the regression coefficients of a criterion variable on predictors are further regressed on moderator variables. An algorithm for estimating the parameters of the two-level model by normal-distribution-based maximum likelihood (NML) is developed. Formulas for the standard errors (SEs) of the parameter estimates are provided and studied. Results indicate that, when heteroscedasticity exists, NML with the two-level model gives more efficient and more accurate parameter estimates than the LS analysis of the MMR model. When error variances are homoscedastic, NML with the two-level model leads to essentially the same results as LS with the MMR model. Most importantly, the two-level regression model permits estimating the percentage of variance of each regression coefficient that is due to moderator variables. When applied to data from General Social Surveys 1991, NML with the two-level model identified a significant moderation effect of race on the regression of job prestige on years of education while LS with the MMR model did not. An R package is also developed and documented to facilitate the application of the two-level model.
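The two-level idea, regression coefficients that are themselves regressed on moderators, can be sketched with a two-step slopes-as-outcomes simulation. This is a didactic stand-in, not the paper's one-step normal-distribution-based maximum likelihood, and all numbers are synthetic:

```python
import random
random.seed(1)

def slope_intercept(xs, ys):
    # one-predictor ordinary least squares
    n = len(xs); mx = sum(xs) / n; my = sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return b, my - b * mx

# Level 1: within each group, y = 0.5 + b*x + noise.
# Level 2: the slope depends on a group-level moderator z, b = 1 + 2*z.
slopes, moderators = [], []
for _ in range(40):
    z = random.uniform(0, 1)
    xs = [random.uniform(-1, 1) for _ in range(30)]
    ys = [0.5 + (1.0 + 2.0 * z) * x + random.gauss(0, 0.2) for x in xs]
    moderators.append(z)
    slopes.append(slope_intercept(xs, ys)[0])

# level 2 regression of the estimated slopes on the moderator
gamma1, gamma0 = slope_intercept(moderators, slopes)
print(round(gamma0, 2), round(gamma1, 2))   # close to (1.0, 2.0)
```

The level-2 fit recovers how much of the slope is driven by the moderator, which is the quantity the two-level model makes explicit and MMR leaves implicit in the product term.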

  19. Classification of Normal and Apoptotic Cells from Fluorescence Microscopy Images Using Generalized Polynomial Chaos and Level Set Function.

    Science.gov (United States)

    Du, Yuncheng; Budman, Hector M; Duever, Thomas A

    2016-06-01

    Accurate automated quantitative analysis of living cells based on fluorescence microscopy images can be very useful for fast evaluation of experimental outcomes and cell culture protocols. In this work, an algorithm is developed for fast differentiation of normal and apoptotic viable Chinese hamster ovary (CHO) cells. For effective segmentation of cell images, a stochastic segmentation algorithm is developed by combining a generalized polynomial chaos expansion with a level set function-based segmentation algorithm. This approach provides a probabilistic description of the segmented cellular regions along the boundary, from which it is possible to calculate morphological changes related to apoptosis, i.e., the curvature and length of a cell's boundary. These features are then used as inputs to a support vector machine (SVM) classifier that is trained to distinguish between normal and apoptotic viable states of CHO cell images. The use of morphological features obtained from the stochastic level set segmentation of cell images in combination with the trained SVM classifier is more efficient in terms of differentiation accuracy as compared with the original deterministic level set method.
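A toy version of the classification stage: a linear SVM trained by hinge-loss subgradient descent on two synthetic morphological features. The paper extracts curvature and boundary-length features from the stochastic level set segmentation; nothing below is real cell data.

```python
import random
random.seed(2)

# Each "cell" is summarized by (mean boundary curvature, boundary length);
# the two classes are given well-separated synthetic feature distributions.
normal    = [(random.gauss(0.2, 0.05), random.gauss(1.0, 0.1)) for _ in range(100)]
apoptotic = [(random.gauss(0.6, 0.05), random.gauss(0.6, 0.1)) for _ in range(100)]
data = [(f, -1) for f in normal] + [(f, +1) for f in apoptotic]

w, b, lam = [0.0, 0.0], 0.0, 0.01
for epoch in range(200):
    random.shuffle(data)
    eta = 0.1 / (1 + 0.01 * epoch)
    for (x1, x2), y in data:
        margin = y * (w[0] * x1 + w[1] * x2 + b)
        g1, g2, gb = lam * w[0], lam * w[1], 0.0
        if margin < 1:                       # hinge-loss subgradient
            g1 -= y * x1; g2 -= y * x2; gb = -y
        w[0] -= eta * g1; w[1] -= eta * g2; b -= eta * gb

acc = sum(1 for (x1, x2), y in data
          if y * (w[0] * x1 + w[1] * x2 + b) > 0) / len(data)
print(acc)
```

A real evaluation would of course hold out a test set; the sketch only shows how morphological features feed a maximum-margin linear decision rule.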

  20. GPU accelerated edge-region based level set evolution constrained by 2D gray-scale histogram.

    Science.gov (United States)

    Balla-Arabé, Souleymane; Gao, Xinbo; Wang, Bin

    2013-07-01

    Due to its intrinsic nature, which allows it to easily handle complex shapes and topological changes, the level set method (LSM) has been widely used in image segmentation. Nevertheless, the LSM is computationally expensive, which limits its applications in real-time systems. For this purpose, we propose a new level set algorithm, which uses edge, region, and 2D histogram information simultaneously in order to efficiently segment objects of interest in a given scene. The computational complexity of the proposed LSM is greatly reduced by using the highly parallelizable lattice Boltzmann method (LBM) with a body force to solve the level set equation (LSE). The body force is the link with the image data and is defined from the proposed LSE. The proposed LSM is then implemented on an NVIDIA graphics processing unit to fully take advantage of the LBM's local nature. The new algorithm is effective, robust against noise, independent of the initial contour, fast, and highly parallelizable. The edge and region information enables the detection of objects with and without edges, and the 2D histogram information keeps the method effective in a noisy environment. Experimental results on synthetic and real images demonstrate subjectively and objectively the performance of the proposed method.

  1. Comparison of different statistical methods for estimation of extreme sea levels with wave set-up contribution

    Science.gov (United States)

    Kergadallan, Xavier; Bernardara, Pietro; Benoit, Michel; Andreewsky, Marc; Weiss, Jérôme

    2013-04-01

    Estimating the probability of occurrence of extreme sea levels is a central issue for the protection of the coast. Return periods of sea level with wave set-up contribution are estimated here at one site: Cherbourg, France, in the English Channel. The methodology follows two steps: the first is computation of the joint probability of simultaneous wave height and still sea level; the second is interpretation of that joint probability to assess the sea level for a given return period. Two different approaches were evaluated to compute the joint probability of simultaneous wave height and still sea level: the first uses multivariate extreme value distributions of logistic type, in which all components of the variables become large simultaneously; the second is a conditional approach for multivariate extreme values, in which only one component of the variables has to be large. Two different methods were applied to estimate the sea level with wave set-up contribution for a given return period: Monte-Carlo simulation, in which the estimation is more accurate but requires more computation time, and classical ocean engineering design contours of inverse-FORM type, in which the method is simpler and allows more complex estimation of the wave set-up part (wave propagation to the coast, for example). We compare results from the two approaches with the two methods. To be able to use both the Monte-Carlo simulation and design contours methods, wave set-up is estimated with a simple empirical formula. We show the advantages of the conditional approach compared to the multivariate extreme value approach when extreme sea levels occur when either the surge or the wave height is large. We discuss the validity of the ocean engineering design contours method, which is an alternative when the computation of sea levels is too complex to use the Monte-Carlo simulation method.
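A Monte-Carlo sketch of combining still water level and wave set-up via a simple empirical set-up formula. The distributions and coefficients below are invented, and surge and waves are drawn independently here, which is exactly the simplification the joint-probability analysis in the abstract is designed to avoid:

```python
import random
random.seed(3)

def yearly_extreme_level():
    # each draw stands for one year's extreme event (invented distributions)
    still = 5.0 + random.expovariate(1 / 0.3)   # tide + surge still water level (m)
    hs = 2.0 + random.expovariate(1 / 1.0)      # significant wave height (m)
    setup = 0.2 * hs                            # simple empirical wave set-up
    return still + setup

# empirical 100-year level = 99th percentile of the simulated yearly extremes
samples = sorted(yearly_extreme_level() for _ in range(200000))
level_100yr = samples[int(len(samples) * (1 - 1 / 100))]
print(round(level_100yr, 2))
```

Replacing the two independent draws with a fitted joint (logistic or conditional) extreme value model is the substantive step the paper studies; the Monte-Carlo machinery itself stays the same.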

  2. Robust non-rigid point set registration using student's-t mixture model.

    Directory of Open Access Journals (Sweden)

    Zhiyong Zhou

    Full Text Available The Student's-t mixture model, which is heavy-tailed and more robust than the Gaussian mixture model, has recently received great attention in image processing. In this paper, we propose a robust non-rigid point set registration algorithm using the Student's-t mixture model. Specifically, first, we consider the alignment of two point sets as a probability density estimation problem and treat one point set as the Student's-t mixture model centroids. Then, we fit the Student's-t mixture model centroids to the other point set, which is treated as data. Finally, we obtain the closed-form solutions of the registration parameters, leading to a computationally efficient registration algorithm. The proposed algorithm is especially effective for addressing the non-rigid point set registration problem when significant amounts of noise and outliers are present. Moreover, fewer registration parameters have to be set manually for our algorithm compared to the popular coherent point drift (CPD) algorithm. We have compared our algorithm with other state-of-the-art registration algorithms on both 2D and 3D data with noise and outliers, where our non-rigid registration algorithm showed accurate results and outperformed the other algorithms.

  3. Random Intercept and Random Slope 2-Level Multilevel Models

    Directory of Open Access Journals (Sweden)

    Rehan Ahmad Khan

    2012-11-01

    Full Text Available A random intercept model and a random intercept & random slope model carrying two levels of hierarchy in the population are presented and compared with the traditional regression approach. The impact of students' satisfaction on their grade point average (GPA) was explored with and without controlling for teacher influence. The variation at level 1 can be controlled by introducing higher levels of hierarchy in the model. The fanning pattern of the fitted lines demonstrates the variation of student grades across teachers.
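The value of the second level of hierarchy can be illustrated by the intraclass correlation: how much GPA variance lies between teachers rather than within classes. A minimal ANOVA-based sketch on synthetic data (a stand-in for fitting the full mixed model):

```python
import random
random.seed(4)

# Students (level 1) nested in teachers (level 2); each teacher shifts the
# intercept.  All numbers are synthetic: teacher sd 0.5, student sd 1.0.
k, n = 50, 20                        # teachers, students per teacher
groups = []
for _ in range(k):
    u = random.gauss(0, 0.5)         # random teacher intercept
    groups.append([3.0 + u + random.gauss(0, 1.0) for _ in range(n)])

grand = sum(sum(g) for g in groups) / (k * n)
ss_between = n * sum((sum(g) / n - grand) ** 2 for g in groups)
ss_within = sum((y - sum(g) / n) ** 2 for g in groups for y in g)
ms_between = ss_between / (k - 1)
ms_within = ss_within / (k * n - k)
var_between = (ms_between - ms_within) / n    # ANOVA estimator
icc = var_between / (var_between + ms_within)
print(round(icc, 2))    # true ICC = 0.25 / (0.25 + 1.0) = 0.2
```

A nonzero ICC is what makes the single-level regression misleading: observations within a teacher are correlated, which the random intercept absorbs.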

  4. Global combustion sources of organic aerosols: model comparison with 84 AMS factor-analysis data sets

    Science.gov (United States)

    Tsimpidi, Alexandra P.; Karydis, Vlassis A.; Pandis, Spyros N.; Lelieveld, Jos

    2016-07-01

    Emissions of organic compounds from biomass, biofuel, and fossil fuel combustion strongly influence the global atmospheric aerosol load. Some of the organics are directly released as primary organic aerosol (POA). Most are emitted in the gas phase and undergo chemical transformations (i.e., oxidation by hydroxyl radical) and form secondary organic aerosol (SOA). In this work we use the global chemistry climate model ECHAM/MESSy Atmospheric Chemistry (EMAC) with a computationally efficient module for the description of organic aerosol (OA) composition and evolution in the atmosphere (ORACLE). The tropospheric burden of open biomass and anthropogenic (fossil and biofuel) combustion particles is estimated to be 0.59 and 0.63 Tg, respectively, accounting for about 30 and 32 % of the total tropospheric OA load. About 30 % of the open biomass burning and 10 % of the anthropogenic combustion aerosols originate from direct particle emissions, whereas the rest is formed in the atmosphere. A comprehensive data set of aerosol mass spectrometer (AMS) measurements along with factor-analysis results from 84 field campaigns across the Northern Hemisphere are used to evaluate the model results. Both the AMS observations and the model results suggest that over urban areas both POA (25-40 %) and SOA (60-75 %) contribute substantially to the overall OA mass, whereas further downwind and in rural areas the POA concentrations decrease substantially and SOA dominates (80-85 %). EMAC does a reasonable job in reproducing POA and SOA levels during most of the year. However, it tends to underpredict POA and SOA concentrations during winter indicating that the model misses wintertime sources of OA (e.g., residential biofuel use) and SOA formation pathways (e.g., multiphase oxidation).

  5. Potential carbon sequestration of European arable soils estimated by modelling a comprehensive set of management practices.

    Science.gov (United States)

    Lugato, Emanuele; Bampa, Francesca; Panagos, Panos; Montanarella, Luca; Jones, Arwyn

    2014-11-01

    Bottom-up estimates from long-term field experiments and modelling are the most commonly used approaches to estimate the carbon (C) sequestration potential of the agricultural sector. However, when data are required at European level, important margins of uncertainty still exist due to the representativeness of local data at large scale or different assumptions and information utilized for running models. In this context, a pan-European (EU + Serbia, Bosnia and Herzegovina, Montenegro, Albania, Former Yugoslav Republic of Macedonia and Norway) simulation platform with high spatial resolution and harmonized data sets was developed to provide consistent scenarios in support of possible carbon sequestration policies. Using the CENTURY agroecosystem model, six alternative management practices (AMP) scenarios were assessed as alternatives to the business as usual situation (BAU). These consisted of the conversion of arable land to grassland (and vice versa), straw incorporation, reduced tillage, straw incorporation combined with reduced tillage, ley cropping system and cover crops. The conversion into grassland showed the highest soil organic carbon (SOC) sequestration rates, ranging between 0.4 and 0.8 t C ha⁻¹ yr⁻¹, while the opposite extreme scenario (100% of grassland conversion into arable) gave cumulated losses of up to 2 Gt of C by 2100. Among the other practices, ley cropping systems and cover crops gave better performances than straw incorporation and reduced tillage. The allocation of 12 to 28% of the European arable land to different AMP combinations resulted in a potential SOC sequestration of 101-336 Mt CO2 eq. by 2020 and 549-2141 Mt CO2 eq. by 2100. Modelled carbon sequestration rates compared with values from an ad hoc meta-analysis confirmed the robustness of these estimates. © 2014 John Wiley & Sons Ltd.

  6. "Economic microscope": The agent-based model set as an instrument in an economic system research

    Science.gov (United States)

    Berg, D. B.; Zvereva, O. M.; Akenov, Serik

    2017-07-01

    To create a valid model of a social or economic system one must consider many parameters, conditions and restrictions; such systems, and consequently the corresponding models, prove to be very complicated. The problem of engineering such system models cannot be solved with mathematical methods alone; a solution can be found in computer simulation. Simulation does not reject mathematical methods: mathematical expressions can become the foundation of a computer model. In this paper, a set of agent-based computer models is discussed. All models in the set simulate communications between productive agents, but each model is geared towards a specific goal and thus has its own algorithm and its own peculiarities. It is shown that computer simulation can discover new features of agent behaviour that cannot be obtained by analytical solution of mathematical equations, and thus plays the role of a kind of 'economic microscope'.
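A minimal example of the "economic microscope" effect: a simple exchange simulation among initially identical productive agents develops a wealth spread that is easier to observe in simulation than to derive analytically. This is a generic illustration, not one of the authors' models:

```python
import random
random.seed(5)

# 200 identical agents; each step a random pair trades one unit of product.
agents = [100.0] * 200
for _ in range(50000):
    i, j = random.randrange(200), random.randrange(200)
    if i != j and agents[i] >= 1.0:
        agents[i] -= 1.0
        agents[j] += 1.0

mean = sum(agents) / len(agents)
# Gini coefficient of the emergent wealth distribution
gini = (sum(abs(a - b) for a in agents for b in agents)
        / (2 * len(agents) ** 2 * mean))
print(round(mean, 1), round(gini, 2))
```

Total wealth is conserved (the mean stays at 100), yet inequality emerges from symmetric random exchange alone, the kind of result one reads off the simulation rather than the equations.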

  7. Multivariate Term Structure Models with Level and Heteroskedasticity Effects

    DEFF Research Database (Denmark)

    Christiansen, Charlotte

    2005-01-01

    The paper introduces and estimates a multivariate level-GARCH model for the long rate and the term-structure spread, where the conditional volatility is proportional to the γth power of the variable itself (level effects) and the conditional covariance matrix evolves according to a multivariate GA...... and the level model. GARCH effects are more important than level effects. The results are robust to the maturity of the interest rates. Publication date: May...

  8. Evolving Non-Dominated Parameter Sets for Computational Models from Multiple Experiments

    Science.gov (United States)

    Lane, Peter C. R.; Gobet, Fernand

    2013-03-01

    Creating robust, reproducible and optimal computational models is a key challenge for theorists in many sciences. Psychology and cognitive science face particular challenges as large amounts of data are collected and many models are not amenable to analytical techniques for calculating parameter sets. Particular problems are to locate the full range of acceptable model parameters for a given dataset, and to confirm the consistency of model parameters across different datasets. Resolving these problems will provide a better understanding of the behaviour of computational models, and so support the development of general and robust models. In this article, we address these problems using evolutionary algorithms to develop parameters for computational models against multiple sets of experimental data; in particular, we propose the `speciated non-dominated sorting genetic algorithm' for evolving models in several theories. We discuss the problem of developing a model of categorisation using twenty-nine sets of data and models drawn from four different theories. We find that the evolutionary algorithms generate high quality models, adapted to provide a good fit to all available data.
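The core of a non-dominated sorting approach is the Pareto filter: a parameter set survives only if no other set fits every data set at least as well and at least one strictly better. A minimal sketch with invented per-dataset errors (to be minimized):

```python
# Hypothetical candidates, each scored by (error on dataset A, error on dataset B).
candidates = {
    "p1": (0.10, 0.40),
    "p2": (0.20, 0.20),
    "p3": (0.40, 0.10),
    "p4": (0.30, 0.30),   # dominated by p2
    "p5": (0.15, 0.45),   # dominated by p1
}

def dominates(a, b):
    # a dominates b: no worse on every objective, strictly better on one
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

front = [name for name, f in candidates.items()
         if not any(dominates(g, f)
                    for other, g in candidates.items() if other != name)]
print(sorted(front))   # ['p1', 'p2', 'p3']
```

The surviving front is exactly the trade-off set the article's evolutionary algorithm explores; NSGA-style methods add ranking and diversity pressure on top of this test.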

  9. Predicting inpatient clinical order patterns with probabilistic topic models vs conventional order sets.

    Science.gov (United States)

    Chen, Jonathan H; Goldstein, Mary K; Asch, Steven M; Mackey, Lester; Altman, Russ B

    2017-05-01

Build probabilistic topic model representations of hospital admissions processes and compare the ability of such models to predict clinical order patterns as compared to preconstructed order sets. The authors evaluated the first 24 hours of structured electronic health record data for > 10 K inpatients. Drawing an analogy between structured items (e.g., clinical orders) and words in a text document, the authors performed latent Dirichlet allocation probabilistic topic modeling. These topic models use initial clinical information to predict clinical orders for a separate validation set of > 4 K patients. The authors evaluated these topic model-based predictions vs existing human-authored order sets by area under the receiver operating characteristic curve, precision, and recall for subsequent clinical orders. Existing order sets predict clinical orders used within 24 hours with area under the receiver operating characteristic curve 0.81, precision 16%, and recall 35%. This can be improved to 0.90, 24%, and 47% (P < …). Order sets tend to provide nonspecific, process-oriented aid, with usability limitations impairing more precise, patient-focused support. Algorithmic summarization has the potential to breach this usability barrier by automatically inferring patient context, but with potential tradeoffs in interpretability. Probabilistic topic modeling provides an automated approach to detect thematic trends in patient care and generate decision support content. A potential use case finds related clinical orders for decision support. © The Author 2016. Published by Oxford University Press on behalf of the American Medical Informatics Association.
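The "admissions as documents, orders as words" analogy can be made concrete with a deliberately crude stand-in for the LDA model: recommend orders by how often they co-occur with the orders already placed. The order names and admissions below are hypothetical toy data, and this co-occurrence ranking is a simplification, not the authors' method.

```python
from collections import Counter

# Each admission is a "document" whose "words" are clinical orders
# (hypothetical order names, toy data).
admissions = [
    {"cbc", "ecg", "troponin"},
    {"cbc", "ecg", "aspirin"},
    {"cbc", "chest_xray", "aspirin"},
    {"ecg", "troponin", "aspirin"},
]

def recommend(initial_orders, history, k=2):
    """Rank orders not yet placed by how often they co-occur with the
    initial orders across past admissions (a crude stand-in for the
    topic-model prediction)."""
    counts = Counter()
    for adm in history:
        if initial_orders & adm:            # admission shares an initial order
            counts.update(adm - initial_orders)
    return [order for order, _ in counts.most_common(k)]

suggestions = recommend({"ecg"}, admissions, k=3)
```

A topic model improves on this by compressing the co-occurrence structure into latent "themes" of care, which is where the reported gains in discrimination come from.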

  10. A comparison of simulation results from two terrestrial carbon cycle models using three climate data sets

    International Nuclear Information System (INIS)

    Ito, Akihiko; Sasai, Takahiro

    2006-01-01

This study addressed how different climate data sets influence simulations of the global terrestrial carbon cycle. For the period 1982-2001, we compared the results of simulations based on three climate data sets (NCEP/NCAR, NCEP/DOE AMIP-II and ERA40) employed in meteorological, ecological and biogeochemical studies and two different models (BEAMS and Sim-CYCLE). The models differed in their parameterizations of photosynthetic and phenological processes but used the same surface climate (e.g. shortwave radiation, temperature and precipitation), vegetation, soil and topography data. The three data sets give different climatic conditions, especially for shortwave radiation, in terms of long-term means, linear trends and interannual variability. Consequently, the simulation results for global net primary productivity varied by 16%-43% solely from differences in the climate data sets, especially in those regions where the shortwave radiation data differed markedly: differences in the climate data set can strongly influence simulation results. The differences among the climate data sets and between the two models resulted in slightly different spatial distributions and interannual variability in the net ecosystem carbon budget. To minimize uncertainty, we should pay attention to the specific climate data used. We recommend developing an accurate standard climate data set for simulation studies.

  11. The effects of climate downscaling technique and observational data set on modeled ecological responses

    Science.gov (United States)

    Afshin Pourmokhtarian; Charles T. Driscoll; John L. Campbell; Katharine Hayhoe; Anne M. K. Stoner

    2016-01-01

    Assessments of future climate change impacts on ecosystems typically rely on multiple climate model projections, but often utilize only one downscaling approach trained on one set of observations. Here, we explore the extent to which modeled biogeochemical responses to changing climate are affected by the selection of the climate downscaling method and training...

  12. General sets of coherent states and the Jaynes-Cummings model

    International Nuclear Information System (INIS)

    Daoud, M.; Hussin, V.

    2002-01-01

    General sets of coherent states are constructed for quantum systems admitting a nondegenerate infinite discrete energy spectrum. They are eigenstates of an annihilation operator and satisfy the usual properties of standard coherent states. The application of such a construction to the quantum optics Jaynes-Cummings model leads to a new understanding of the properties of this model. (author)

  13. Single-step reinitialization and extending algorithms for level-set based multi-phase flow simulations

    Science.gov (United States)

    Fu, Lin; Hu, Xiangyu Y.; Adams, Nikolaus A.

    2017-12-01

We propose efficient single-step formulations for reinitialization and extending algorithms, which are critical components of level-set based interface-tracking methods. The level-set field is reinitialized with a single-step (non-iterative) "forward tracing" algorithm. A minimum set of cells is defined that describes the interface, and reinitialization employs only data from these cells. Fluid states are extrapolated or extended across the interface by a single-step "backward tracing" algorithm. Both algorithms, which are motivated by analogy to ray-tracing, avoid multiple block-boundary data exchanges that are inevitable for iterative reinitialization and extending approaches within a parallel-computing environment. The single-step algorithms are combined with a multi-resolution conservative sharp-interface method and validated by a wide range of benchmark test cases. We demonstrate that the proposed reinitialization method achieves second-order accuracy in conserving the volume of each phase. The interface location is invariant to reapplication of the single-step reinitialization. Generally, we observe smaller absolute errors than for standard iterative reinitialization on the same grid. The computational efficiency is higher than for the standard and typical high-order iterative reinitialization methods. We observe a 2- to 6-times efficiency improvement over the standard method for serial execution. The proposed single-step extending algorithm, which is commonly employed for assigning data to ghost cells with ghost-fluid or conservative interface interaction methods, shows about 10-times efficiency improvement over the standard method while maintaining the same accuracy. Despite their simplicity, the proposed algorithms offer an efficient and robust alternative to iterative reinitialization and extending methods for level-set based multi-phase simulations.
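What reinitialization must achieve, restoring the signed-distance property without moving the zero contour, can be shown with a brute-force 1-D sketch. This is not the paper's single-step forward-tracing algorithm, only an illustration of the invariant it preserves: reapplying the operation leaves the field unchanged.

```python
def reinitialize(phi, dx=1.0):
    """Reset a 1-D level-set field to the signed distance from its
    zero crossings (brute force, for illustration only)."""
    # Locate interface positions by linear interpolation between
    # nodes where phi changes sign.
    crossings = []
    for i in range(len(phi) - 1):
        if phi[i] * phi[i + 1] < 0:
            crossings.append((i + phi[i] / (phi[i] - phi[i + 1])) * dx)
    # Rebuild phi as the distance to the nearest crossing, keeping sign.
    return [
        (1 if p > 0 else -1) * min(abs(i * dx - c) for c in crossings)
        for i, p in enumerate(phi)
    ]

# A badly scaled field with its zero crossing midway between nodes 2 and 3.
phi = [-30.0, -20.0, -10.0, 10.0, 20.0]
print(reinitialize(phi))  # → [-2.5, -1.5, -0.5, 0.5, 1.5]
```

Note that the zero crossing of the output is still at 2.5, so the interface has not moved, and running `reinitialize` on its own output returns it unchanged.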

  14. THE DEVELOPMENT AND USE OF A MODEL TO PREDICT SUSTAINABILITY OF CHANGE IN HEALTH CARE SETTINGS.

    Science.gov (United States)

    Molfenter, Todd; Ford, James H; Bhattacharya, Abhik

    2011-01-01

Innovations adopted through organizational change initiatives are often not sustained, leading to diminished quality, productivity, and consumer satisfaction. Research explaining variance in the use of adopted innovations in health care settings is sparse, suggesting the need for a theoretical model to guide research and practice. In this article, we describe the development of a hybrid conjoint decision theoretic model designed to predict the sustainability of organizational change in health care settings. An initial test of the model's predictive validity using expert-scored hypothetical profiles resulted in an r-squared value of .77. The test of this model offers a theoretical base for future research on the sustainability of change in health care settings.

  15. Search for the standard model Higgs Boson produced in association with top quarks using the full CDF data set.

    Science.gov (United States)

    Aaltonen, T; Álvarez González, B; Amerio, S; Amidei, D; Anastassov, A; Annovi, A; Antos, J; Apollinari, G; Appel, J A; Arisawa, T; Artikov, A; Asaadi, J; Ashmanskas, W; Auerbach, B; Aurisano, A; Azfar, F; Badgett, W; Bae, T; Barbaro-Galtieri, A; Barnes, V E; Barnett, B A; Barria, P; Bartos, P; Bauce, M; Bedeschi, F; Behari, S; Bellettini, G; Bellinger, J; Benjamin, D; Beretvas, A; Bhatti, A; Bisello, D; Bizjak, I; Bland, K R; Blumenfeld, B; Bocci, A; Bodek, A; Bortoletto, D; Boudreau, J; Boveia, A; Brigliadori, L; Bromberg, C; Brucken, E; Budagov, J; Budd, H S; Burkett, K; Busetto, G; Bussey, P; Buzatu, A; Calamba, A; Calancha, C; Camarda, S; Campanelli, M; Campbell, M; Canelli, F; Carls, B; Carlsmith, D; Carosi, R; Carrillo, S; Carron, S; Casal, B; Casarsa, M; Castro, A; Catastini, P; Cauz, D; Cavaliere, V; Cavalli-Sforza, M; Cerri, A; Cerrito, L; Chen, Y C; Chertok, M; Chiarelli, G; Chlachidze, G; Chlebana, F; Cho, K; Chokheli, D; Chung, W H; Chung, Y S; Ciocci, M A; Clark, A; Clarke, C; Compostella, G; Connors, J; Convery, M E; Conway, J; Corbo, M; Cordelli, M; Cox, C A; Cox, D J; Crescioli, F; Cuevas, J; Culbertson, R; Dagenhart, D; d'Ascenzo, N; Datta, M; de Barbaro, P; Dell'Orso, M; Demortier, L; Deninno, M; Devoto, F; d'Errico, M; Di Canto, A; Di Ruzza, B; Dittmann, J R; D'Onofrio, M; Donati, S; Dong, P; Dorigo, M; Dorigo, T; Ebina, K; Elagin, A; Eppig, A; Erbacher, R; Errede, S; Ershaidat, N; Eusebi, R; Farrington, S; Feindt, M; Fernandez, J P; Field, R; Flanagan, G; Forrest, R; Frank, M J; Franklin, M; Freeman, J C; Funakoshi, Y; Furic, I; Gallinaro, M; Garcia, J E; Garfinkel, A F; Garosi, P; Gerberich, H; Gerchtein, E; Giagu, S; Giakoumopoulou, V; Giannetti, P; Gibson, K; Ginsburg, C M; Giokaris, N; Giromini, P; Giurgiu, G; Glagolev, V; Glenzinski, D; Gold, M; Goldin, D; Goldschmidt, N; Golossanov, A; Gomez, G; Gomez-Ceballos, G; Goncharov, M; González, O; Gorelov, I; Goshaw, A T; Goulianos, K; Grinstein, S; Grosso-Pilcher, C; Group, R C; Guimaraes da 
Costa, J; Hahn, S R; Halkiadakis, E; Hamaguchi, A; Han, J Y; Happacher, F; Hara, K; Hare, D; Hare, M; Harr, R F; Hatakeyama, K; Hays, C; Heck, M; Heinrich, J; Herndon, M; Hewamanage, S; Hocker, A; Hopkins, W; Horn, D; Hou, S; Hughes, R E; Hurwitz, M; Husemann, U; Hussain, N; Hussein, M; Huston, J; Introzzi, G; Iori, M; Ivanov, A; James, E; Jang, D; Jayatilaka, B; Jeon, E J; Jindariani, S; Jones, M; Joo, K K; Jun, S Y; Junk, T R; Kamon, T; Karchin, P E; Kasmi, A; Kato, Y; Ketchum, W; Keung, J; Khotilovich, V; Kilminster, B; Kim, D H; Kim, H S; Kim, J E; Kim, M J; Kim, S B; Kim, S H; Kim, Y K; Kim, Y J; Kimura, N; Kirby, M; Klimenko, S; Knoepfel, K; Kondo, K; Kong, D J; Konigsberg, J; Kotwal, A V; Kreps, M; Kroll, J; Krop, D; Kruse, M; Krutelyov, V; Kuhr, T; Kurata, M; Kwang, S; Laasanen, A T; Lami, S; Lammel, S; Lancaster, M; Lander, R L; Lannon, K; Lath, A; Latino, G; Lecompte, T; Lee, E; Lee, H S; Lee, J S; Lee, S W; Leo, S; Leone, S; Lewis, J D; Limosani, A; Lin, C-J; Lindgren, M; Lipeles, E; Lister, A; Litvintsev, D O; Liu, C; Liu, H; Liu, Q; Liu, T; Lockwitz, S; Loginov, A; Lucchesi, D; Lueck, J; Lujan, P; Lukens, P; Lungu, G; Lys, J; Lysak, R; Madrak, R; Maeshima, K; Maestro, P; Malik, S; Manca, G; Manousakis-Katsikakis, A; Margaroli, F; Marino, C; Martínez, M; Mastrandrea, P; Matera, K; Mattson, M E; Mazzacane, A; Mazzanti, P; McFarland, K S; McIntyre, P; McNulty, R; Mehta, A; Mehtala, P; Mesropian, C; Miao, T; Mietlicki, D; Mitra, A; Miyake, H; Moed, S; Moggi, N; Mondragon, M N; Moon, C S; Moore, R; Morello, M J; Morlock, J; Movilla Fernandez, P; Mukherjee, A; Muller, Th; Murat, P; Mussini, M; Nachtman, J; Nagai, Y; Naganoma, J; Nakano, I; Napier, A; Nett, J; Neu, C; Neubauer, M S; Nielsen, J; Nodulman, L; Noh, S Y; Norniella, O; Oakes, L; Oh, S H; Oh, Y D; Oksuzian, I; Okusawa, T; Orava, R; Ortolan, L; Pagan Griso, S; Pagliarone, C; Palencia, E; Papadimitriou, V; Paramonov, A A; Patrick, J; Pauletta, G; Paulini, M; Paus, C; Pellett, D E; Penzo, A; Phillips, 
T J; Piacentino, G; Pianori, E; Pilot, J; Pitts, K; Plager, C; Pondrom, L; Poprocki, S; Potamianos, K; Prokoshin, F; Pranko, A; Ptohos, F; Punzi, G; Rahaman, A; Ramakrishnan, V; Ranjan, N; Redondo, I; Renton, P; Rescigno, M; Riddick, T; Rimondi, F; Ristori, L; Robson, A; Rodrigo, T; Rodriguez, T; Rogers, E; Rolli, S; Roser, R; Ruffini, F; Ruiz, A; Russ, J; Rusu, V; Safonov, A; Sakumoto, W K; Sakurai, Y; Santi, L; Sato, K; Saveliev, V; Savoy-Navarro, A; Schlabach, P; Schmidt, A; Schmidt, E E; Schwarz, T; Scodellaro, L; Scribano, A; Scuri, F; Seidel, S; Seiya, Y; Semenov, A; Sforza, F; Shalhout, S Z; Shears, T; Shepard, P F; Shimojima, M; Shochet, M; Shreyber-Tecker, I; Simonenko, A; Sinervo, P; Sliwa, K; Smith, J R; Snider, F D; Soha, A; Sorin, V; Song, H; Squillacioti, P; Stancari, M; St Denis, R; Stelzer, B; Stelzer-Chilton, O; Stentz, D; Strologas, J; Strycker, G L; Sudo, Y; Sukhanov, A; Suslov, I; Takemasa, K; Takeuchi, Y; Tang, J; Tecchio, M; Teng, P K; Thom, J; Thome, J; Thompson, G A; Thomson, E; Toback, D; Tokar, S; Tollefson, K; Tomura, T; Tonelli, D; Torre, S; Torretta, D; Totaro, P; Trovato, M; Ukegawa, F; Uozumi, S; Varganov, A; Vázquez, F; Velev, G; Vellidis, C; Vidal, M; Vila, I; Vilar, R; Vizán, J; Vogel, M; Volpi, G; Wagner, P; Wagner, R L; Wakisaka, T; Wallny, R; Wang, S M; Warburton, A; Waters, D; Wester, W C; Whiteson, D; Wicklund, A B; Wicklund, E; Wilbur, S; Wick, F; Williams, H H; Wilson, J S; Wilson, P; Winer, B L; Wittich, P; Wolbers, S; Wolfe, H; Wright, T; Wu, X; Wu, Z; Yamamoto, K; Yamato, D; Yang, T; Yang, U K; Yang, Y C; Yao, W-M; Yeh, G P; Yi, K; Yoh, J; Yorita, K; Yoshida, T; Yu, G B; Yu, I; Yu, S S; Yun, J C; Zanetti, A; Zeng, Y; Zhou, C; Zucchelli, S

    2012-11-02

A search is presented for the standard model Higgs boson produced in association with top quarks using the full Run II proton-antiproton collision data set, corresponding to 9.45 fb⁻¹, collected by the Collider Detector at Fermilab. No significant excess over the expected background is observed, and 95% credibility-level upper bounds are placed on the cross section σ(ttH → lepton + missing transverse energy + jets). For a Higgs boson mass of 125 GeV/c², we expect to set a limit of 12.6 and observe a limit of 20.5 times the standard model rate. This represents the most sensitive search for a standard model Higgs boson in this channel to date.

  16. A model for a national low level waste program

    International Nuclear Information System (INIS)

    Blankenhorn, James A.

    2009-01-01

A national program for the management of low level waste is essential to the success of environmental clean-up, decontamination and decommissioning, current operations and future missions. The value of a national program is recognized through procedural consistency and a shared set of resources. A national program requires a clear waste definition and an understanding of waste characteristics matched against available and proposed disposal options. A national program requires the development and implementation of standards and procedures for implementing the waste hierarchy, with a specific emphasis on waste avoidance, minimization and recycling. It requires a common set of objectives for waste characterization based on the disposal facility's waste acceptance criteria, regulatory and license requirements and performance assessments. Finally, a national waste certification program is required to ensure compliance. To facilitate and enhance the national program, a centralized generator services organization, tasked with providing technical services to the generators on behalf of the national program, is necessary. These subject matter experts are the interface between the generating sites and the disposal facility(s). They provide an invaluable service to the generating organizations through their involvement in waste planning prior to waste generation and through championing implementation of the waste hierarchy. Through their interface, national treatment and transportation services are optimized and new business opportunities are identified. This national model is based on extensive experience in the development and ongoing management of a national transuranic waste program and management of the national repository, the Waste Isolation Pilot Plant. The Low Level Program at the Savannah River Site also successfully developed and implemented the waste hierarchy, waste certification and waste generator services concepts presented below. The Savannah River Site

  17. Constructing set-valued fundamental diagrams from jamiton solutions in second order traffic models

    KAUST Repository

    Seibold, Benjamin; Flynn, Morris R.; Kasimov, Aslan R.; Rosales, Rodolfo Rubé n

    2013-01-01

Fundamental diagrams of vehicular traffic flow are generally multivalued in the congested flow regime. We show that such set-valued fundamental diagrams can be constructed systematically from simple second order macroscopic traffic models, such as the classical Payne-Whitham model or the inhomogeneous Aw-Rascle-Zhang model. These second order models possess nonlinear traveling wave solutions, called jamitons, and the multivalued parts in the fundamental diagram correspond precisely to jamiton-dominated solutions. This study shows that transitions from function-valued to set-valued parts in a fundamental diagram arise naturally in well-known second order models. As a particular consequence, these models intrinsically reproduce traffic phases. © American Institute of Mathematical Sciences.
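The function-valued part of a fundamental diagram is the equilibrium flow-density curve Q(ρ) = ρ·v(ρ); the paper's point is that jamiton solutions add a set-valued region around it. A minimal sketch of the equilibrium curve only, using an illustrative Greenshields-type velocity law with hypothetical parameter values (not taken from the paper):

```python
def greenshields_flow(rho, v_max=30.0, rho_max=0.12):
    """Equilibrium flow Q = rho * v(rho) with a Greenshields-type
    velocity law v = v_max * (1 - rho / rho_max).
    Units (illustrative): rho in veh/m, v_max in m/s."""
    return rho * v_max * (1.0 - rho / rho_max)

# Flow vanishes at zero density and at jam density, peaking in between.
q_capacity = greenshields_flow(0.06)   # maximum flow at rho_max / 2
```

In the congested branch (ρ above the capacity point), jamiton-dominated solutions scatter observed (ρ, Q) pairs around this single curve, producing the multivalued region the abstract describes.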

  19. Bud development, flowering and fruit set of Moringa oleifera Lam. (Horseradish Tree as affected by various irrigation levels

    Directory of Open Access Journals (Sweden)

    Quintin Ernst Muhl

    2013-12-01

Full Text Available Moringa oleifera is becoming increasingly popular as an industrial crop due to its multitude of useful attributes as water purifier, nutritional supplement and biofuel feedstock. Given its tolerance to sub-optimal growing conditions, most of the current and anticipated cultivation areas are in medium to low rainfall areas. This study aimed to assess the effect of various irrigation levels on floral initiation, flowering and fruit set. Three treatments, namely a 900 mm (900IT), 600 mm (600IT) and 300 mm (300IT) per annum irrigation treatment, were administered through drip irrigation, simulating three total annual rainfall amounts. Individual inflorescences from each treatment were tagged during floral initiation and monitored throughout until fruit set. Flower bud initiation was highest at the 300IT and lowest at the 900IT for two consecutive growing seasons. Fruit set, on the other hand, decreased with the decrease in irrigation treatment. Floral abortion, reduced pollen viability as well as moisture stress in the style were contributing factors to the reduction in fruiting/yield observed at the 300IT. Moderate water stress prior to floral initiation could stimulate flower initiation; however, this should be followed by sufficient irrigation to ensure good pollination, fruit set and yield.

  20. A Dual Hesitant Fuzzy Multigranulation Rough Set over Two-Universe Model for Medical Diagnoses

    Science.gov (United States)

    Zhang, Chao; Li, Deyu; Yan, Yan

    2015-01-01

    In medical science, disease diagnosis is one of the difficult tasks for medical experts who are confronted with challenges in dealing with a lot of uncertain medical information. And different medical experts might express their own thought about the medical knowledge base which slightly differs from other medical experts. Thus, to solve the problems of uncertain data analysis and group decision making in disease diagnoses, we propose a new rough set model called dual hesitant fuzzy multigranulation rough set over two universes by combining the dual hesitant fuzzy set and multigranulation rough set theories. In the framework of our study, both the definition and some basic properties of the proposed model are presented. Finally, we give a general approach which is applied to a decision making problem in disease diagnoses, and the effectiveness of the approach is demonstrated by a numerical example. PMID:26858772

  1. Bayesian Mixed Hidden Markov Models: A Multi-Level Approach to Modeling Categorical Outcomes with Differential Misclassification

    Science.gov (United States)

    Zhang, Yue; Berhane, Kiros

    2014-01-01

    Questionnaire-based health status outcomes are often prone to misclassification. When studying the effect of risk factors on such outcomes, ignoring any potential misclassification may lead to biased effect estimates. Analytical challenges posed by these misclassified outcomes are further complicated when simultaneously exploring factors for both the misclassification and health processes in a multi-level setting. To address these challenges, we propose a fully Bayesian Mixed Hidden Markov Model (BMHMM) for handling differential misclassification in categorical outcomes in a multi-level setting. The BMHMM generalizes the traditional Hidden Markov Model (HMM) by introducing random effects into three sets of HMM parameters for joint estimation of the prevalence, transition and misclassification probabilities. This formulation not only allows joint estimation of all three sets of parameters, but also accounts for cluster level heterogeneity based on a multi-level model structure. Using this novel approach, both the true health status prevalence and the transition probabilities between the health states during follow-up are modeled as functions of covariates. The observed, possibly misclassified, health states are related to the true, but unobserved, health states and covariates. Results from simulation studies are presented to validate the estimation procedure, to show the computational efficiency due to the Bayesian approach and also to illustrate the gains from the proposed method compared to existing methods that ignore outcome misclassification and cluster level heterogeneity. We apply the proposed method to examine the risk factors for both asthma transition and misclassification in the Southern California Children's Health Study (CHS). PMID:24254432
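The core mechanic of the BMHMM, a latent health-state chain observed through a misclassification matrix, can be illustrated with the standard HMM forward algorithm, where the emission matrix is P(observed state | true state). This is a toy sketch with made-up probabilities, without the random effects or Bayesian estimation of the paper.

```python
def forward(obs, prior, trans, misclass):
    """Likelihood of an observed (possibly misclassified) state
    sequence under a hidden Markov model."""
    n = len(prior)
    alpha = [prior[s] * misclass[s][obs[0]] for s in range(n)]
    for o in obs[1:]:
        alpha = [
            sum(alpha[s] * trans[s][t] for s in range(n)) * misclass[t][o]
            for t in range(n)
        ]
    return sum(alpha)

prior = [0.8, 0.2]                     # P(healthy), P(asthma) at baseline
trans = [[0.9, 0.1], [0.2, 0.8]]       # latent health-state transitions
misclass = [[0.95, 0.05], [0.3, 0.7]]  # rows: true state; cols: reported state
lik = forward([0, 0, 1], prior, trans, misclass)
```

In the full model, the prior, transition, and misclassification probabilities would each be regressed on covariates with cluster-level random effects; here they are fixed numbers purely for illustration.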

  2. On a Formalization of Cantor Set Theory for Natural Models of the Physical Phenomena

    Directory of Open Access Journals (Sweden)

    Nudel'man A. S.

    2010-01-01

Full Text Available This article presents a set theory which is an extension of ZFC. In contrast to ZFC, the new theory admits absolutely non-denumerable sets. It is feasible that a symbiosis of the proposed theory and Vdovin set theory will permit to formulate a (presumably non-contradictory) axiomatic set theory which will represent the core of Cantor set theory in a maximally full manner as to the essence and the contents of the latter. This is possible due to the fact that the generalized principle of choice and the generalized continuum hypothesis are proved in Vdovin theory. The theory, being more complete than ZF and more natural according to Cantor, will allow to construct and study (in its framework) only natural models of the real physical phenomena.

  4. Uniqueness of Gibbs Measure for Models with Uncountable Set of Spin Values on a Cayley Tree

    International Nuclear Information System (INIS)

    Eshkabilov, Yu. Kh.; Haydarov, F. H.; Rozikov, U. A.

    2013-01-01

We consider models with nearest-neighbor interactions and with the set [0, 1] of spin values, on a Cayley tree of order k ≥ 1. It is known that the ‘splitting Gibbs measures’ of the model can be described by solutions of a nonlinear integral equation. For arbitrary k ≥ 2 we find a sufficient condition under which the integral equation has a unique solution; hence, under this condition the corresponding model has a unique splitting Gibbs measure.
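Sufficient conditions for uniqueness in results of this kind are typically contraction estimates, under which fixed-point iteration converges to the single solution of the integral equation. A discretised sketch on a toy Hammerstein-type equation f(x) = 1 + λ ∫₀¹ K(x,t) f(t) dt, with a hypothetical kernel that is not the paper's equation:

```python
def solve_fixed_point(kernel, lam, n=100, tol=1e-12, max_iter=1000):
    """Fixed-point iteration for f(x) = 1 + lam * ∫_0^1 K(x,t) f(t) dt,
    discretised by the midpoint rule on n nodes."""
    xs = [(i + 0.5) / n for i in range(n)]      # midpoint quadrature nodes
    f = [1.0] * n
    for _ in range(max_iter):
        g = [1.0 + lam * sum(kernel(x, t) * ft for t, ft in zip(xs, f)) / n
             for x in xs]
        if max(abs(a - b) for a, b in zip(g, f)) < tol:
            return g
        f = g
    return f

# K(x,t) = x*t with lam = 0.5 is a contraction; the iteration converges
# to the unique solution f(x) = 1 + 0.3 x (solvable by hand).
f = solve_fixed_point(lambda x, t: x * t, lam=0.5)
```

When the contraction condition fails (large λ), the iteration can fail to converge or multiple solutions can coexist, which mirrors the regime where multiple splitting Gibbs measures appear.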

  5. Improving district level health planning and priority setting in Tanzania through implementing accountability for reasonableness framework: Perceptions of stakeholders.

    Science.gov (United States)

    Maluka, Stephen; Kamuzora, Peter; San Sebastián, Miguel; Byskov, Jens; Ndawi, Benedict; Hurtig, Anna-Karin

    2010-12-01

In 2006, researchers and decision-makers launched a five-year project - Response to Accountable Priority Setting for Trust in Health Systems (REACT) - to improve planning and priority-setting through implementing the Accountability for Reasonableness framework in Mbarali District, Tanzania. The objective of this paper is to explore the acceptability of Accountability for Reasonableness from the perspectives of the Council Health Management Team, local government officials, health workforce and members of user boards and committees. Individual interviews were carried out with different categories of actors and stakeholders in the district. The interview guide consisted of a series of questions, asking respondents to describe their perceptions regarding each condition of the Accountability for Reasonableness framework in terms of priority setting. Interviews were analysed using thematic framework analysis. Documentary data were used to support, verify and highlight the key issues that emerged. Almost all stakeholders viewed Accountability for Reasonableness as an important and feasible approach for improving priority-setting and health service delivery in their context. However, a few aspects of Accountability for Reasonableness were seen as too difficult to implement given the socio-political conditions and traditions in Tanzania. Respondents mentioned: budget ceilings and guidelines, low level of public awareness, unreliable and untimely funding, as well as the limited capacity of the district to generate local resources as the major contextual factors that hampered the full implementation of the framework in their context. This study was one of the first assessments of the applicability of Accountability for Reasonableness in health care priority-setting in Tanzania. The analysis, overall, suggests that the Accountability for Reasonableness framework could be an important tool for improving priority-setting processes in the contexts of resource-poor settings.

  6. Feature Set Evaluation for Offline Handwriting Recognition Systems: Application to the Recurrent Neural Network Model.

    Science.gov (United States)

    Chherawala, Youssouf; Roy, Partha Pratim; Cheriet, Mohamed

    2016-12-01

    The performance of handwriting recognition systems is dependent on the features extracted from the word image. A large body of features exists in the literature, but no method has yet been proposed to identify the most promising of these, other than a straightforward comparison based on the recognition rate. In this paper, we propose a framework for feature set evaluation based on a collaborative setting. We use a weighted vote combination of recurrent neural network (RNN) classifiers, each trained with a particular feature set. This combination is modeled in a probabilistic framework as a mixture model and two methods for weight estimation are described. The main contribution of this paper is to quantify the importance of feature sets through the combination weights, which reflect their strength and complementarity. We chose the RNN classifier because of its state-of-the-art performance. Also, we provide the first feature set benchmark for this classifier. We evaluated several feature sets on the IFN/ENIT and RIMES databases of Arabic and Latin script, respectively. The resulting combination model is competitive with state-of-the-art systems.
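The combination step, a weighted average of per-classifier class posteriors whose learned weights quantify each feature set's contribution, is simple to sketch. The posteriors and weights below are made-up numbers, not values from the benchmark.

```python
def combine(posteriors, weights):
    """Weighted mixture of per-classifier class posteriors."""
    n_classes = len(posteriors[0])
    return [sum(w * p[c] for w, p in zip(weights, posteriors))
            for c in range(n_classes)]

# Two classifiers (one per feature set) scoring three word classes.
posteriors = [[0.6, 0.3, 0.1],   # classifier trained on feature set A
              [0.2, 0.5, 0.3]]   # classifier trained on feature set B
weights = [0.7, 0.3]             # learned combination weights (sum to 1)
mixed = combine(posteriors, weights)
best = max(range(3), key=lambda c: mixed[c])
```

Because the weights are constrained to sum to one, the mixture is itself a posterior, and comparing weight magnitudes across feature sets is what lets the paper rank the sets by strength and complementarity.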

  7. Group theoretical construction of two-dimensional models with infinite sets of conservation laws

    International Nuclear Information System (INIS)

    D'Auria, R.; Regge, T.; Sciuto, S.

    1980-01-01

    We explicitly construct some classes of field theoretical 2-dimensional models associated with symmetric spaces G/H according to a general scheme proposed in an earlier paper. We treat the SO(n + 1)/SO(n) and SU(n + 1)/U(n) case, giving their relationship with the O(n) sigma-models and the CP(n) models. Moreover, we present a new class of models associated to the SU(n)/SO(n) case. All these models are shown to possess an infinite set of local conservation laws. (orig.)

  8. Selecting an interprofessional education model for a tertiary health care setting.

    Science.gov (United States)

    Menard, Prudy; Varpio, Lara

    2014-07-01

The World Health Organization describes interprofessional education (IPE) and collaboration as necessary components of all health professionals' education - in curriculum and in practice. However, no standard framework exists to guide healthcare settings in developing or selecting an IPE model that meets the learning needs of licensed practitioners in practice and that suits the unique needs of their setting. Initially, a broad review of the grey literature (organizational websites, government documents and published books) and healthcare databases was undertaken for existing IPE models. Subsequently, database searches of published papers using Scopus, Scholars Portal and Medline were undertaken. Through this search process five IPE models were identified in the literature. This paper attempts to: briefly outline the five different models of IPE that are presently offered in the literature; and illustrate how a healthcare setting can select the IPE model within their context using Reeves' seven key trends in developing IPE. In presenting these results, the paper contributes to the interprofessional literature by offering an overview of possible IPE models that can be used to inform the implementation or modification of interprofessional practices in a tertiary healthcare setting.

  9. Application of the Advance Organizer Learning Model with a SETS Vision to Improve Mastery of Chemistry Concepts

    Directory of Open Access Journals (Sweden)

    Ilam Pratitis

    2015-11-01

Full Text Available This study aims to determine the effect of applying the advance organizer learning model with a SETS vision on mastery of chemistry concepts for the buffer-solution material in a high school in Semarang. The design used in this research is the non-equivalent control group design. Sampling was conducted with a purposive sampling technique; class XI Science 6 was obtained as the experimental class and class XI Science 5 as the control class. The data collection methods used were documentation, testing, observation, and questionnaires. The results showed that the average cognitive achievement of the experimental class was 84, while that of the control class was 82. Data analysis showed that applying the advance organizer learning model with a SETS vision increased mastery of chemistry concepts by 4%, with a correlation coefficient of 0.2. Based on the results, it can be concluded that the learning model with a SETS-oriented advance organizer had a positive effect on mastery of chemistry concepts for the buffer-solution material. The recommendation is that this learning model should also be applied to other chemistry materials, adapted as needed to increase its effect on learning outcomes in the form of concept mastery. Keywords: Advance Organizer, Buffer Solution, Concept Mastery, SETS

  10. Modeling category-level purchase timing with brand-level marketing variables

    OpenAIRE

    Fok, D.; Paap, R.

    2003-01-01

    Purchase timing of households is usually modeled at the category level. Marketing efforts are however only available at the brand level. Hence, to describe category-level interpurchase times using marketing efforts one has to construct a category-level measure of marketing efforts from the marketing mix of individual brands. In this paper we discuss two standard approaches suggested in the literature to solve this problem, that is, using individual choice shares as weights to aver...

  11. Regional hydrogeological conceptual model of candidate Beishan area for high level radioactive waste disposal repository

    International Nuclear Information System (INIS)

    Wang Hailong; Guo Yonghai

    2014-01-01

    The numerical modeling of groundwater flow is an important aspect of hydrogeological assessment in siting a high level radioactive waste disposal repository. The hydrogeological conceptual model is the basis and premise of numerical modeling of groundwater flow. Based on hydrogeological analysis of the candidate Beishan area, the surface water system was delineated using DEM data and the modeling area was determined. A three-dimensional hydrogeological structure model was created with the GMS software. On the basis of the analysis and description of boundary conditions, the flow field, the groundwater budget and hydrogeological parameters, a hydrogeological conceptual model was set up for the Beishan area. (authors)

  12. What Time is Your Sunset? Accounting for Refraction in Sunrise/set Prediction Models

    Science.gov (United States)

    Wilson, Teresa; Bartlett, Jennifer Lynn; Chizek Frouard, Malynda; Hilton, James; Phlips, Alan; Edgar, Roman

    2018-01-01

    Algorithms that predict sunrise and sunset times currently have an uncertainty of one to four minutes at mid-latitudes (0° - 55° N/S) due to limitations in the atmospheric models they incorporate. At higher latitudes, slight changes in refraction can cause significant discrepancies, including difficulties determining whether the Sun appears to rise or set. While different components of refraction are known, how they affect predictions of sunrise/set has not yet been quantified. A better understanding of the contributions from temperature profile, pressure, humidity, and aerosols could significantly improve the standard prediction. We present a sunrise/set calculator that interchanges the refraction component by varying the refraction model. We then compared these predictions with data sets of observed rise/set times taken from Mount Wilson Observatory in California, University of Alberta in Edmonton, Alberta, and onboard the SS James Franco in the Atlantic. A thorough investigation of the problem requires a more substantial data set of observed rise/set times and corresponding meteorological data from around the world. We have developed a mobile application, Sunrise & Sunset Observer, so that anyone can capture this astronomical and meteorological data using their smartphone video recorder as part of a citizen science project. The Android app for this project is available in the Google Play store. Videos can also be submitted through the project website (riseset.phy.mtu.edu). Data analysis will lead to more complete models that will provide higher accuracy rise/set predictions to benefit astronomers, navigators, and outdoorsmen everywhere.
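    The refraction dependence described above can be made concrete with the classical sunrise/sunset hour-angle formula, in which the usually fixed 34-arcminute refraction term is exposed as a parameter. This is a minimal illustrative sketch, not the authors' calculator; the function name and structure are assumptions:

```python
import math

def sunrise_hour_angle(lat_deg, decl_deg, refraction_arcmin=34.0):
    """Hour angle (degrees) from local solar noon to sunrise/sunset.

    Uses the standard condition that the Sun's center sits at apparent
    altitude h0 = -(refraction + 16' solar semidiameter) at rise/set.
    The 34' refraction value is the conventional average; the point of
    the study above is that this constant actually varies with the
    temperature profile, pressure, humidity, and aerosols.
    """
    h0 = -(refraction_arcmin + 16.0) / 60.0   # apparent altitude at rise/set, degrees
    lat, dec, alt = map(math.radians, (lat_deg, decl_deg, h0))
    cos_h = (math.sin(alt) - math.sin(lat) * math.sin(dec)) / (
        math.cos(lat) * math.cos(dec))
    if abs(cos_h) > 1.0:
        raise ValueError("Sun does not rise/set (polar day or night)")
    return math.degrees(math.acos(cos_h))

# Sensitivity of the rise/set time to the assumed refraction,
# at 55 deg latitude on the equinox (declination 0):
t34 = sunrise_hour_angle(55.0, 0.0, refraction_arcmin=34.0)
t40 = sunrise_hour_angle(55.0, 0.0, refraction_arcmin=40.0)
minutes_shift = (t40 - t34) * 4.0  # 1 degree of hour angle = 4 minutes of time
```

    At 55° latitude and the equinox, raising the assumed refraction from 34' to 40' shifts the predicted rise/set time by under a minute; near the poles, where the Sun crosses the horizon at a shallow angle, the same change can be far larger, which is the high-latitude discrepancy the abstract describes.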

  13. Building more effective sea level rise models for coastal management

    Science.gov (United States)

    Kidwell, D.; Buckel, C.; Collini, R.; Meckley, T.

    2017-12-01

    For over a decade, increased attention on coastal resilience and adaptation to sea level rise has resulted in a proliferation of predictive models and tools. This proliferation has enhanced our understanding of our vulnerability to sea level rise, but has also led to stakeholder fatigue in trying to realize the value of each advancement. These models vary in type and complexity, ranging from GIS-based bathtub viewers to modeling systems that dynamically couple complex biophysical and geomorphic processes. These approaches and capabilities typically share the common purpose of using scenarios of global and regional sea level change to inform adaptation and mitigation. In addition, stakeholders are often presented a plethora of options to address sea level rise issues from a variety of agencies, academics, and consulting firms. All of this can result in confusion, misapplication of a specific model/tool, and stakeholder feedback of "no more new science or tools, just help me understand which one to use". These concerns from stakeholders have led to the question: how do we move forward with sea level rise modeling? This presentation will provide a synthesis of the experiences and feedback derived from NOAA's Ecological Effects of Sea Level Rise (EESLR) program to discuss the future of predictive sea level rise impact modeling. EESLR is an applied research program focused on the advancement of dynamic modeling capabilities in collaboration with local and regional stakeholders. Key concerns from stakeholder engagement include questions about model uncertainty, approaches for model validation, and a lack of cross-model comparisons. Effective communication of model/tool products, capabilities, and results is paramount to addressing these concerns. Looking forward, the most effective predictions of sea level rise impacts on our coast will be attained through a focus on coupled modeling systems, particularly those that connect natural processes and human response.

  14. Accident sequence precursor analysis level 2/3 model development

    International Nuclear Information System (INIS)

    Lui, C.H.; Galyean, W.J.; Brownson, D.A.

    1997-01-01

    The US Nuclear Regulatory Commission's Accident Sequence Precursor (ASP) program currently uses simple Level 1 models to assess the conditional core damage probability for operational events occurring in commercial nuclear power plants (NPP). Since not all accident sequences leading to core damage will result in the same radiological consequences, it is necessary to develop simple Level 2/3 models that can be used to analyze the response of the NPP containment structure in the context of a core damage accident, estimate the magnitude of the resulting radioactive releases to the environment, and calculate the consequences associated with these releases. The simple Level 2/3 model development work was initiated in 1995, and several prototype models have been completed. Once developed, these simple Level 2/3 models are linked to the simple Level 1 models to provide risk perspectives for operational events. This paper describes the methods implemented for the development of these simple Level 2/3 ASP models, and the linkage process to the existing Level 1 models.

  15. An Innovative Model of Integrated Behavioral Health: School Psychologists in Pediatric Primary Care Settings

    Science.gov (United States)

    Adams, Carolyn D.; Hinojosa, Sara; Armstrong, Kathleen; Takagishi, Jennifer; Dabrow, Sharon

    2016-01-01

    This article discusses an innovative example of integrated care in which doctoral level school psychology interns and residents worked alongside pediatric residents and pediatricians in the primary care settings to jointly provide services to patients. School psychologists specializing in pediatric health are uniquely trained to recognize and…

  16. Modelling the impact of antiretroviral use in resource-poor settings.

    Directory of Open Access Journals (Sweden)

    Rebecca F Baggaley

    2006-04-01

    Full Text Available BACKGROUND: The anticipated scale-up of antiretroviral therapy (ART) in high-prevalence, resource-constrained settings requires operational research to guide policy on the design of treatment programmes. Mathematical models can explore the potential impacts of various treatment strategies, including timing of treatment initiation and provision of laboratory monitoring facilities, to complement evidence from pilot programmes. METHODS AND FINDINGS: A deterministic model of HIV transmission incorporating ART and stratifying infection progression into stages was constructed. The impact of ART was evaluated for various scenarios and treatment strategies, with different levels of coverage, patient eligibility, and other parameter values. These strategies included the provision of laboratory facilities that perform CD4 counts and viral load testing, and the timing of the stage of infection at which treatment is initiated. In our analysis, unlimited ART provision initiated at late-stage infection (AIDS) increased prevalence of HIV infection. The effect of additionally treating pre-AIDS patients depended on the behaviour change of treated patients. Different coverage levels for ART do not affect benefits such as life-years gained per person-year of treatment and have minimal effect on infections averted when treating AIDS patients only. Scaling up treatment of pre-AIDS patients resulted in more infections being averted per person-year of treatment, but the absolute number of infections averted remained small. As coverage increased in the models, the emergence and risk of spread of drug resistance increased. Withdrawal of failing treatment, whether clinical (resurgence of symptoms), immunologic (CD4 count decline), or virologic (viral rebound), increased the number of infected individuals who could benefit from ART, but effectiveness per person is compromised. Only withdrawal at a very early stage of treatment failure, soon after viral rebound, would have a
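    The staged deterministic structure this abstract describes can be sketched compactly. The toy model below is an illustrative reconstruction of the general compartmental idea (susceptible, pre-AIDS, AIDS, on ART), not the authors' model; all rates are hypothetical, and treated individuals are assumed non-infectious for simplicity:

```python
def hiv_staged_model(beta, progression, art_rate, mu, t_end=50.0, dt=0.01):
    """Euler integration of a toy deterministic HIV transmission model with
    staged infection (pre-AIDS -> AIDS) and ART initiated at the AIDS stage.
    beta: transmission rate, progression: pre-AIDS -> AIDS rate,
    art_rate: ART initiation rate at AIDS, mu: background birth/death rate.
    Returns final compartment fractions (S, I1, I2, T)."""
    S, I1, I2, T = 0.99, 0.01, 0.0, 0.0   # susceptible, pre-AIDS, AIDS, on ART
    for _ in range(int(t_end / dt)):
        N = S + I1 + I2 + T
        incidence = beta * S * (I1 + I2) / N   # treated assumed non-infectious here
        dS = mu * N - incidence - mu * S        # births balance deaths, so N stays constant
        dI1 = incidence - (progression + mu) * I1
        dI2 = progression * I1 - (art_rate + mu) * I2
        dT = art_rate * I2 - mu * T             # keeping AIDS patients alive on ART
        S, I1, I2, T = (S + dt * dS, I1 + dt * dI1, I2 + dt * dI2, T + dt * dT)
    return S, I1, I2, T

final = hiv_staged_model(beta=0.5, progression=0.1, art_rate=0.2, mu=0.02)
```

    Even this toy version shows the mechanism behind the abstract's first finding: initiating ART at AIDS keeps infected individuals alive in the treated compartment, so prevalence (the sum of the infected compartments) can rise even as ART delivers per-person benefit.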

  17. Novel room-temperature-setting phosphate ceramics for stabilizing combustion products and low-level mixed wastes

    International Nuclear Information System (INIS)

    Wagh, A.S.; Singh, D.

    1994-01-01

    Argonne National Laboratory, with support from the Office of Technology in the US Department of Energy (DOE), has developed a new process employing novel, chemically bonded ceramic materials to stabilize secondary waste streams. Such waste streams result from the thermal processes used to stabilize low-level, mixed wastes. The process will help the electric power industry treat its combustion and low-level mixed wastes. The ceramic materials are strong, dense, leach-resistant, and inexpensive to fabricate. The room-temperature-setting process allows stabilization of volatile components containing lead, mercury, cadmium, chromium, and nickel. The process also provides effective stabilization of fossil fuel combustion products. It is most suitable for treating fly and bottom ashes.

  18. Modeling and low-level waste management: an interagency workshop

    Energy Technology Data Exchange (ETDEWEB)

    Little, C.A.; Stratton, L.E. (comps.)

    1980-01-01

    The interagency workshop on Modeling and Low-Level Waste Management was held on December 1-4, 1980 in Denver, Colorado. Twenty papers were presented at this meeting which consisted of three sessions. First, each agency presented its point of view concerning modeling and the need for models in low-level radioactive waste applications. Second, a larger group of more technical papers was presented by persons actively involved in model development or applications. Last of all, four workshops were held to attempt to reach a consensus among participants regarding numerous waste modeling topics. Abstracts are provided for the papers presented at this workshop.

  19. Modeling and low-level waste management: an interagency workshop

    International Nuclear Information System (INIS)

    Little, C.A.; Stratton, L.E.

    1980-01-01

    The interagency workshop on Modeling and Low-Level Waste Management was held on December 1-4, 1980 in Denver, Colorado. Twenty papers were presented at this meeting which consisted of three sessions. First, each agency presented its point of view concerning modeling and the need for models in low-level radioactive waste applications. Second, a larger group of more technical papers was presented by persons actively involved in model development or applications. Last of all, four workshops were held to attempt to reach a consensus among participants regarding numerous waste modeling topics. Abstracts are provided for the papers presented at this workshop.

  20. California Dental Hygiene Educators' Perceptions of an Application of the ADHA Advanced Dental Hygiene Practitioner (ADHP) Model in Medical Settings.

    Science.gov (United States)

    Smith, Lauren; Walsh, Margaret

    2015-12-01

    To assess California dental hygiene educators' perceptions of an application of the American Dental Hygienists' Association's (ADHA) advanced dental hygiene practitioner model (ADHP) in medical settings where the advanced dental hygiene practitioner collaborates in medical settings with other health professionals to meet clients' oral health needs. In 2014, 30 directors of California dental hygiene programs were contacted to participate in and distribute an online survey to their faculty. In order to capture non-respondents, 2 follow-up e-mails were sent. Descriptive analysis and cross-tabulations were analyzed using the online survey software program, Qualtrics™. The educator response rate was 18% (70/387). Nearly 90% of respondents supported the proposed application of the ADHA ADHP model and believed it would increase access to care and reduce oral health disparities. They also agreed with most of the proposed services, target populations and workplace settings. Slightly over half believed a master's degree was the appropriate educational level needed. Among California dental hygiene educators responding to this survey, there was strong support for the proposed application of the ADHA model in medical settings. More research is needed among a larger sample of dental hygiene educators and clinicians, as well as among other health professionals such as physicians, nurses and dentists. Copyright © 2015 The American Dental Hygienists’ Association.

  1. Income Distribution Over Educational Levels: A Simple Model.

    Science.gov (United States)

    Tinbergen, Jan

    An econometric model is formulated that explains income per person in various compartments of the labor market defined by three main levels of education and by education required. The model enables an estimation of the effect of increased access to education on that distribution. The model is based on a production for the economy as a whole; a…

  2. Automated volume analysis of head and neck lesions on CT scans using 3D level set segmentation

    International Nuclear Information System (INIS)

    Street, Ethan; Hadjiiski, Lubomir; Sahiner, Berkman; Gujar, Sachin; Ibrahim, Mohannad; Mukherji, Suresh K.; Chan, Heang-Ping

    2007-01-01

    The authors have developed a semiautomatic system for segmentation of a diverse set of lesions in head and neck CT scans. The system takes as input an approximate bounding box, and uses a multistage level set to perform the final segmentation. A data set consisting of 69 lesions marked on 33 scans from 23 patients was used to evaluate the performance of the system. The contours from automatic segmentation were compared to both 2D and 3D gold standard contours manually drawn by three experienced radiologists. Three performance metric measures were used for the comparison. In addition, a radiologist provided quality ratings on a 1 to 10 scale for all of the automatic segmentations. For this pilot study, the authors observed that the differences between the automatic and gold standard contours were larger than the interobserver differences. However, the system performed comparably to the radiologists, achieving an average area intersection ratio of 85.4% compared to an average of 91.2% between two radiologists. The average absolute area error was 21.1% compared to 10.8%, and the average 2D distance was 1.38 mm compared to 0.84 mm between the radiologists. In addition, the quality rating data showed that, despite the very lax assumptions made on the lesion characteristics in designing the system, the automatic contours approximated many of the lesions very well.

  3. Simulation to aid in interpreting biological relevance and setting of population-level protection goals for risk assessment of pesticides.

    Science.gov (United States)

    Topping, Christopher John; Luttik, Robert

    2017-10-01

    Specific protection goals (SPGs) comprise an explicit expression of the environmental components that need protection and the maximum impacts that can be tolerated. SPGs are set by risk managers and are typically based on protecting populations or functions. However, the measurable endpoints available to risk managers, at least for vertebrates, are typically laboratory tests. We demonstrate, using the example of eggshell thinning in skylarks, how simulation can be used to place laboratory endpoints in context of population-level effects as an aid to setting the SPGs. We develop explanatory scenarios investigating the impact of different assumptions of eggshell thinning on skylark population size, density and distribution in 10 Danish landscapes, chosen to represent the range of typical Danish agricultural conditions. Landscape and timing of application of the pesticide were found to be the most critical factors to consider in the impact assessment. Consequently, a regulatory scenario of monoculture spring barley with an early spray treatment eliciting the eggshell thinning effect was applied using concentrations eliciting effects of zero to 100% in steps of 5%. Setting the SPGs requires balancing scientific, social and political realities. However, the provision of clear and detailed options such as those from comprehensive simulation results can inform the decision process by improving transparency and by putting the more abstract testing data into the context of real-world impacts. Copyright © 2017 Elsevier Inc. All rights reserved.

  4. Imaging disturbance zones ahead of a tunnel by elastic full-waveform inversion: Adjoint gradient based inversion vs. parameter space reduction using a level-set method

    Directory of Open Access Journals (Sweden)

    Andre Lamert

    2018-03-01

    Full Text Available We present and compare two flexible and effective methodologies to predict disturbance zones ahead of underground tunnels by using elastic full-waveform inversion. One methodology uses a linearized, iterative approach based on misfit gradients computed with the adjoint method while the other uses iterative, gradient-free unscented Kalman filtering in conjunction with a level-set representation. Whereas the former does not involve a priori assumptions on the distribution of elastic properties ahead of the tunnel, the latter introduces a massive reduction in the number of explicit model parameters to be inverted for by focusing on the geometric form of potential disturbances and their average elastic properties. Both imaging methodologies are validated through successful reconstructions of simple disturbances. As an application, we consider an elastic multiple disturbance scenario. By using identical synthetic time-domain seismograms as test data, we obtain satisfactory, albeit different, reconstruction results from the two inversion methodologies. The computational costs of both approaches are of the same order of magnitude, with the gradient-based approach showing a slight advantage. The model parameter space reduction approach compensates for this by additionally providing a posteriori estimates of model parameter uncertainty. Keywords: Tunnel seismics, Full waveform inversion, Seismic waves, Level-set method, Adjoint method, Kalman filter

  5. Level-set reconstruction algorithm for ultrafast limited-angle X-ray computed tomography of two-phase flows.

    Science.gov (United States)

    Bieberle, M; Hampel, U

    2015-06-13

    Tomographic image reconstruction is based on recovering an object distribution from its projections, which have been acquired from all angular views around the object. If the angular range is limited to less than 180° of parallel projections, typical reconstruction artefacts arise when using standard algorithms. To compensate for this, specialized algorithms using a priori information about the object need to be applied. The application behind this work is ultrafast limited-angle X-ray computed tomography of two-phase flows. Here, only a binary distribution of the two phases needs to be reconstructed, which reduces the complexity of the inverse problem. To solve it, a new reconstruction algorithm (LSR) based on the level-set method is proposed. It includes one force function term accounting for matching the projection data and one incorporating a curvature-dependent smoothing of the phase boundary. The algorithm has been validated using simulated as well as measured projections of known structures, and its performance has been compared to the algebraic reconstruction technique and a binary derivative of it. The validation as well as the application of the level-set reconstruction on a dynamic two-phase flow demonstrated its applicability and its advantages over other reconstruction algorithms. © 2015 The Author(s) Published by the Royal Society. All rights reserved.
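    The two force terms the abstract names (projection matching plus curvature-dependent boundary smoothing) map onto a short level-set update rule. The sketch below is an illustrative reconstruction of that idea, not the published LSR code; the two-view row/column-sum projector is a toy stand-in for the real ultrafast limited-angle geometry, and no reinitialization of the level-set function is performed:

```python
import numpy as np

def curvature(phi):
    """Central-difference curvature kappa = div(grad phi / |grad phi|)."""
    gy, gx = np.gradient(phi)
    norm = np.sqrt(gx**2 + gy**2) + 1e-8
    return np.gradient(gx / norm, axis=1) + np.gradient(gy / norm, axis=0)

def lsr_step(phi, project, backproject, data, alpha=0.1, dt=0.1):
    """One level-set evolution step for binary tomographic reconstruction:
    a projection-mismatch force plus curvature-dependent smoothing."""
    binary = (phi > 0).astype(float)        # two-phase image implied by phi
    residual = project(binary) - data       # mismatch in projection space
    force = backproject(residual) - alpha * curvature(phi)
    gy, gx = np.gradient(phi)
    return phi - dt * force * np.sqrt(gx**2 + gy**2)

# Toy limited-angle geometry: only two orthogonal parallel projections
# (column sums and row sums), a hypothetical stand-in for the system matrix.
def project(img):
    return np.concatenate([img.sum(axis=0), img.sum(axis=1)])

def backproject(res):
    n = res.size // 2
    return np.tile(res[:n], (n, 1)) + np.tile(res[n:, None], (1, n))

yy, xx = np.mgrid[0:32, 0:32]
phi = 8.0 - np.sqrt((xx - 16) ** 2 + (yy - 16) ** 2)          # initial disk guess
truth = ((np.abs(xx - 16) < 6) & (np.abs(yy - 16) < 6)).astype(float)
data = project(truth)
for _ in range(100):
    phi = lsr_step(phi, project, backproject, data, alpha=0.05, dt=0.001)
```

    Restricting the unknowns to a binary phase distribution, as the paper argues, is exactly what makes such a severely limited-angle problem tractable: the evolution only has to place an interface, not recover a full grey-value image.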

  6. Effect of a uniform magnetic field on dielectric two-phase bubbly flows using the level set method

    International Nuclear Information System (INIS)

    Ansari, M.R.; Hadidi, A.; Nimvari, M.E.

    2012-01-01

    In this study, the behavior of a single bubble in a dielectric viscous fluid under a uniform magnetic field has been simulated numerically using the Level Set method in two-phase bubbly flow. The two-phase bubbly flow was considered to be laminar and homogeneous. Deformation of the bubble was considered to be due to buoyancy and magnetic forces induced from the external applied magnetic field. A computer code was developed to solve the problem using the flow field, the interface of two phases, and the magnetic field. The Finite Volume method was applied using the SIMPLE algorithm to discretize the governing equations. Using this algorithm enables us to calculate the pressure parameter, which has been eliminated by previous researchers because of the complexity of the two-phase flow. The finite difference method was used to solve the magnetic field equation. The results outlined in the present study agree well with the existing experimental data and numerical results. These results show that the magnetic field affects and controls the shape, size, velocity, and location of the bubble. - Highlights: ► Bubble behavior was simulated numerically. ► A single bubble was considered in a dielectric viscous fluid. ► A uniform magnetic field was used to study bubble behavior. ► Deformation of the bubble was captured using the Level Set method. ► The magnetic field affects the shape, size, velocity, and location of the bubble.

  7. Identification of Arbitrary Zonation in Groundwater Parameters using the Level Set Method and a Parallel Genetic Algorithm

    Science.gov (United States)

    Lei, H.; Lu, Z.; Vesselinov, V. V.; Ye, M.

    2017-12-01

    Simultaneous identification of both the zonation structure of aquifer heterogeneity and the hydrogeological parameters associated with these zones is challenging, especially for complex subsurface heterogeneity fields. In this study, a new approach, based on the combination of the level set method and a parallel genetic algorithm is proposed. Starting with an initial guess for the zonation field (including both zonation structure and the hydraulic properties of each zone), the level set method ensures that material interfaces are evolved through the inverse process such that the total residual between the simulated and observed state variables (hydraulic head) always decreases, which means that the inversion result depends on the initial guess field and the minimization process might fail if it encounters a local minimum. To find the global minimum, the genetic algorithm (GA) is utilized to explore the parameters that define initial guess fields, and the minimal total residual corresponding to each initial guess field is considered as the fitness function value in the GA. Due to the expensive evaluation of the fitness function, a parallel GA is adapted in combination with a simulated annealing algorithm. The new approach has been applied to several synthetic cases in both steady-state and transient flow fields, including a case with real flow conditions at the chromium contaminant site at the Los Alamos National Laboratory. The results show that this approach is capable of identifying the arbitrary zonation structures of aquifer heterogeneity and the hydrogeological parameters associated with these zones effectively.
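    The outer genetic-algorithm loop of such a hybrid scheme is straightforward to sketch. In the illustrative code below (not the authors' implementation), the fitness callable would wrap an entire level-set descent started from the candidate initial-guess field and return its final residual; here a toy quadratic stands in for that expensive evaluation, and the parallel/simulated-annealing machinery is omitted for brevity:

```python
import random

def genetic_minimize(fitness, bounds, pop_size=20, generations=40,
                     mutation=0.1, seed=0):
    """Minimal real-coded GA: keep the best half (selection), average pairs
    of survivors (crossover), and perturb coordinates at random (mutation)."""
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(generations):
        elite = sorted(pop, key=fitness)[: pop_size // 2]   # selection (elitist)
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            child = [(x + y) / 2 for x, y in zip(a, b)]     # crossover
            for i, (lo, hi) in enumerate(bounds):           # mutation
                if rng.random() < mutation:
                    child[i] = min(hi, max(lo, child[i] + rng.gauss(0, 0.1 * (hi - lo))))
            children.append(child)
        pop = elite + children
    return min(pop, key=fitness)

# Toy fitness with a known optimum at (1, -2), standing in for the residual
# left after a level-set inversion started from the candidate guess field.
best = genetic_minimize(lambda p: (p[0] - 1.0) ** 2 + (p[1] + 2.0) ** 2,
                        bounds=[(-5, 5), (-5, 5)])
```

    Because each fitness evaluation hides a full forward/inverse run, parallelizing the population evaluation, as the abstract describes, is what makes the approach practical.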

  8. System-level Modeling of Wireless Integrated Sensor Networks

    DEFF Research Database (Denmark)

    Virk, Kashif M.; Hansen, Knud; Madsen, Jan

    2005-01-01

    Wireless integrated sensor networks have emerged as a promising infrastructure for a new generation of monitoring and tracking applications. In order to efficiently utilize the extremely limited resources of wireless sensor nodes, accurate modeling of the key aspects of wireless sensor networks is necessary so that system-level design decisions can be made about the hardware and the software (applications and real-time operating system) architecture of sensor nodes. In this paper, we present a SystemC-based abstract modeling framework that enables system-level modeling of sensor network behavior by modeling the applications, real-time operating system, sensors, processor, and radio transceiver at the sensor node level and environmental phenomena, including radio signal propagation, at the sensor network level. We demonstrate the potential of our modeling framework by simulating and analyzing a small...

  9. Ferromagnetic interaction model of activity level in workplace communication

    Science.gov (United States)

    Akitomi, Tomoaki; Ara, Koji; Watanabe, Jun-ichiro; Yano, Kazuo

    2013-03-01

    The nature of human-human interaction, specifically, how people synchronize with each other in multiple-participant conversations, is described by a ferromagnetic interaction model of people’s activity levels. We found two microscopic human interaction characteristics from a real-environment face-to-face conversation. The first characteristic is that people quite regularly synchronize their activity level with that of the other participants in a conversation. The second characteristic is that the degree of synchronization increases as the number of participants increases. Based on these microscopic ferromagnetic characteristics, a “conversation activity level” was modeled according to the Ising model. The results of a simulation of activity level based on this model well reproduce macroscopic experimental measurements of activity level. This model will give a new insight into how people interact with each other in a conversation.
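    The ferromagnetic picture can be illustrated with a few lines of mean-field Glauber dynamics: each participant's activity level is a ±1 spin that tends to align with the average activity of the other participants. This is a hedged sketch of the modeling idea, not the authors' implementation, and all parameter values are illustrative:

```python
import math
import random

def simulate_activity(n_people, coupling=1.0, steps=2000, seed=1):
    """Mean-field Ising sketch of conversation activity: repeatedly pick a
    participant and resample their high/low activity spin with the heat-bath
    probability set by the mean activity of everyone else. Returns the final
    |magnetization| in [0, 1] as a crude degree-of-synchronization measure."""
    rng = random.Random(seed)
    s = [rng.choice((-1, 1)) for _ in range(n_people)]
    for _ in range(steps):
        i = rng.randrange(n_people)
        field = coupling * sum(s[j] for j in range(n_people) if j != i) / (n_people - 1)
        p_up = 1.0 / (1.0 + math.exp(-2.0 * field))   # heat-bath (Glauber) rule
        s[i] = 1 if rng.random() < p_up else -1
    return abs(sum(s)) / n_people

sync_small = simulate_activity(3)    # few participants
sync_large = simulate_activity(12)   # many participants
```

    In this mean-field form, each added participant contributes to the aligning field felt by every other, which is one way to read the paper's observation that the degree of synchronization grows with the number of participants.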

  10. GSHR, a Web-Based Platform Provides Gene Set-Level Analyses of Hormone Responses in Arabidopsis

    Directory of Open Access Journals (Sweden)

    Xiaojuan Ran

    2018-01-01

    Full Text Available Phytohormones regulate diverse aspects of plant growth and environmental responses. Recent high-throughput technologies have promoted a more comprehensive profiling of genes regulated by different hormones. However, these omics data generally result in large gene lists that make it challenging to interpret the data and extract insights into biological significance. With the rapid accumulation of these large-scale experiments, especially the transcriptomic data available in public databases, a means of using this information to explore the transcriptional networks is needed. Different platforms have different architectures and designs, and even similar studies using the same platform may obtain data with large variances because of the highly dynamic and flexible effects of plant hormones; this makes it difficult to make comparisons across different studies and platforms. Here, we present a web server providing gene set-level analyses of Arabidopsis thaliana hormone responses. GSHR collected 333 RNA-seq and 1,205 microarray datasets from the Gene Expression Omnibus, characterizing transcriptomic changes in Arabidopsis in response to phytohormones including abscisic acid, auxin, brassinosteroids, cytokinins, ethylene, gibberellins, jasmonic acid, salicylic acid, and strigolactones. These data were further processed and organized into 1,368 gene sets regulated by different hormones or hormone-related factors. By comparing input gene lists to these gene sets, GSHR helps to identify gene sets from the input gene list regulated by different phytohormones or related factors. Together, GSHR links prior information regarding transcriptomic changes induced by hormones and related factors to newly generated data and facilitates cross-study and cross-platform comparisons; this helps with the mining of biologically significant information from large-scale datasets. The GSHR is freely available at http://bioinfo.sibs.ac.cn/GSHR/.

  11. Development of a new model to engage patients and clinicians in setting research priorities.

    Science.gov (United States)

    Pollock, Alex; St George, Bridget; Fenton, Mark; Crowe, Sally; Firkins, Lester

    2014-01-01

    Equitable involvement of patients and clinicians in setting research and funding priorities is ethically desirable and can improve the quality, relevance and implementation of research. Survey methods used in previous priority setting projects to gather treatment uncertainties may not be sufficient to facilitate responses from patients and their lay carers for some health care topics. We aimed to develop a new model to engage patients and clinicians in setting research priorities relating to life after stroke, and to explore the use of this model within a James Lind Alliance (JLA) priority setting project. We developed a model to facilitate involvement through targeted engagement and assisted involvement (FREE TEA model). We implemented both standard surveys and the FREE TEA model to gather research priorities (treatment uncertainties) from people affected by stroke living in Scotland. We explored and configured the number of treatment uncertainties elicited from different groups by the two approaches. We gathered 516 treatment uncertainties from stroke survivors, carers and health professionals. We achieved approximately equal numbers of contributions: 281 (54%) from stroke survivors/carers and 235 (46%) from health professionals. For stroke survivors and carers, 98 (35%) treatment uncertainties were elicited from the standard survey and 183 (65%) at FREE TEA face-to-face visits. This contrasted with the health professionals, for whom 198 (84%) were elicited from the standard survey and only 37 (16%) from FREE TEA visits. The FREE TEA model has implications for future priority setting projects and user-involvement relating to populations of people with complex health needs. Our results imply that reliance on standard surveys may result in poor and unrepresentative involvement of patients, thereby favouring the views of health professionals.

  12. Model-based setting of inspiratory pressure and respiratory rate in pressure-controlled ventilation

    International Nuclear Information System (INIS)

    Schranz, C; Möller, K; Becher, T; Schädler, D; Weiler, N

    2014-01-01

    Mechanical ventilation carries the risk of ventilator-induced-lung-injury (VILI). To minimize the risk of VILI, ventilator settings should be adapted to the individual patient properties. Mathematical models of respiratory mechanics are able to capture the individual physiological condition and can be used to derive personalized ventilator settings. This paper presents model-based calculations of inspiration pressure (pI), inspiration and expiration time (tI, tE) in pressure-controlled ventilation (PCV) and a retrospective evaluation of its results in a group of mechanically ventilated patients. Incorporating the identified first order model of respiratory mechanics in the basic equation of alveolar ventilation yielded a nonlinear relation between ventilation parameters during PCV. Given this patient-specific relation, optimized settings in terms of minimal pI and adequate tE can be obtained. We then retrospectively analyzed data from 16 ICU patients with mixed pathologies, whose ventilation had been previously optimized by ICU physicians with the goal of minimization of inspiration pressure, and compared the algorithm's 'optimized' settings to the settings that had been chosen by the physicians. The presented algorithm visualizes the patient-specific relations between inspiration pressure and inspiration time. The algorithm's calculated results highly correlate to the physician's ventilation settings with r = 0.975 for the inspiration pressure, and r = 0.902 for the inspiration time. The nonlinear patient-specific relations of ventilation parameters become transparent and support the determination of individualized ventilator settings according to therapeutic goals. Thus, the algorithm is feasible for a variety of ventilated ICU patients and has the potential of improving lung-protective ventilation by minimizing inspiratory pressures and by helping to avoid the build-up of clinically significant intrinsic positive end-expiratory pressure.

  13. Model-based setting of inspiratory pressure and respiratory rate in pressure-controlled ventilation.

    Science.gov (United States)

    Schranz, C; Becher, T; Schädler, D; Weiler, N; Möller, K

    2014-03-01

    Mechanical ventilation carries the risk of ventilator-induced-lung-injury (VILI). To minimize the risk of VILI, ventilator settings should be adapted to the individual patient properties. Mathematical models of respiratory mechanics are able to capture the individual physiological condition and can be used to derive personalized ventilator settings. This paper presents model-based calculations of inspiration pressure (pI), inspiration and expiration time (tI, tE) in pressure-controlled ventilation (PCV) and a retrospective evaluation of its results in a group of mechanically ventilated patients. Incorporating the identified first order model of respiratory mechanics in the basic equation of alveolar ventilation yielded a nonlinear relation between ventilation parameters during PCV. Given this patient-specific relation, optimized settings in terms of minimal pI and adequate tE can be obtained. We then retrospectively analyzed data from 16 ICU patients with mixed pathologies, whose ventilation had been previously optimized by ICU physicians with the goal of minimization of inspiration pressure, and compared the algorithm's 'optimized' settings to the settings that had been chosen by the physicians. The presented algorithm visualizes the patient-specific relations between inspiration pressure and inspiration time. The algorithm's calculated results highly correlate to the physician's ventilation settings with r = 0.975 for the inspiration pressure, and r = 0.902 for the inspiration time. The nonlinear patient-specific relations of ventilation parameters become transparent and support the determination of individualized ventilator settings according to therapeutic goals. Thus, the algorithm is feasible for a variety of ventilated ICU patients and has the potential of improving lung-protective ventilation by minimizing inspiratory pressures and by helping to avoid the build-up of clinically significant intrinsic positive end-expiratory pressure.
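The pressure-time relation described above can be sketched numerically. In a one-compartment model with resistance R and compliance C (time constant τ = R·C), PCV delivers a tidal volume VT = C·pI·(1 − e^(−tI/τ)), and alveolar minute ventilation is RR·(VT − VD). The sketch below scans the inspiration time for the minimal driving pressure that still meets a ventilation target; the model form is the standard first-order one, but every parameter value here is illustrative, not taken from the study:

```python
import math

def min_pressure_settings(R=10.0, C=0.05, VA_target=5.0, VD=0.15):
    # R: resistance [cmH2O*s/L], C: compliance [L/cmH2O],
    # VA_target: alveolar minute ventilation [L/min], VD: dead space [L].
    # Scan inspiration time t_I for the minimal driving pressure p_I.
    tau = R * C                    # respiratory time constant [s]
    tE = 3.0 * tau                 # near-complete expiration, limits intrinsic PEEP
    best = None
    tI = 0.4
    while tI <= 3.0:
        RR = 60.0 / (tI + tE)                        # breaths per minute
        VT = VA_target / RR + VD                     # required tidal volume [L]
        pI = VT / (C * (1.0 - math.exp(-tI / tau)))  # driving pressure [cmH2O]
        if best is None or pI < best[0]:
            best = (pI, tI, tE, RR)
        tI += 0.05
    return best
```

Holding tE at three time constants keeps end-expiratory gas trapping small, which mirrors the "adequate tE" criterion in the abstract; the interior minimum over tI reflects the nonlinear trade-off the paper visualizes.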

  14. A 2D model of causal set quantum gravity: the emergence of the continuum

    International Nuclear Information System (INIS)

    Brightwell, Graham; Henson, Joe; Surya, Sumati

    2008-01-01

    Non-perturbative theories of quantum gravity inevitably include configurations that fail to resemble physically reasonable spacetimes at large scales. Often, these configurations are entropically dominant and pose an obstacle to obtaining the desired classical limit. We examine this 'entropy problem' in a model of causal set quantum gravity corresponding to a discretization of 2D spacetimes. Using results from the theory of partial orders we show that, in the large volume or continuum limit, its partition function is dominated by causal sets which approximate to a region of 2D Minkowski space. This model of causal set quantum gravity thus overcomes the entropy problem and predicts the emergence of a physically reasonable geometry.

  15. Mathematical model of the electronuclear set-up on the beam of the JINR synchrotron

    International Nuclear Information System (INIS)

    Barashenkov, V.S.; Kumawat, H.; Lobanova, V.A.; Kumar, V.

    2003-01-01

    On the basis of the Monte Carlo code CASCADE, developed at JINR, a mathematical model is created of the deep-subcritical set-up with a uranium blanket used in experiments under way at JINR with a 0.6-4 GeV proton beam. The neutron spectra, yields and energies of generated particles are calculated and compared for several modifications of the set-up. The influence of paraffin and graphite moderators on the characteristics of particles escaping the lead target is studied. The modelled set-up can be considered a first step towards experiments with the U-Pu ADS SAD designed at JINR, with a heat power of several tens of kW.

  16. Uniqueness of Gibbs measure for Potts model with countable set of spin values

    International Nuclear Information System (INIS)

    Ganikhodjaev, N.N.; Rozikov, U.A.

    2004-11-01

    We consider a nearest-neighbor Potts model with countable spin values 0,1,..., and nonzero external field, on a Cayley tree of order k (with k+1 neighbors). We study translation-invariant 'splitting' Gibbs measures. We reduce the problem to the description of the solutions of some infinite system of equations. For any k≥1 and any fixed probability measure ν with ν(i)>0 on the set of all nonnegative integers Φ={0,1,...}, we show that the set of translation-invariant splitting Gibbs measures contains at most one point, independently of the parameters of the Potts model with countable set of spin values on the Cayley tree. We also give a full description of the class of measures ν on Φ such that, with respect to each element of this class, our infinite system of equations has a unique solution {a_i, i=1,2,...}, where each a_i is an element of (0,1). (author)

  17. Refining Sunrise/set Prediction Models by Accounting for the Effects of Refraction

    Science.gov (United States)

    Wilson, Teresa; Bartlett, Jennifer L.

    2016-01-01

    Current atmospheric models used to predict the times of sunrise and sunset have an error of one to four minutes at mid-latitudes (0° - 55° N/S). At higher latitudes, slight changes in refraction may cause significant discrepancies, including determining even whether the Sun appears to rise or set. While different components of refraction are known, how they affect predictions of sunrise/set has not yet been quantified. A better understanding of the contributions from temperature profile, pressure, humidity, and aerosols could significantly improve the standard prediction. Because sunrise/set times and meteorological data from multiple locations will be necessary for a thorough investigation of the problem, we will collect this data using smartphones as part of a citizen science project. This analysis will lead to more complete models that will provide more accurate times for navigators and outdoorsmen alike.
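One well-known starting point for the refraction correction discussed above is Bennett's empirical formula, which gives astronomical refraction at apparent altitude h (degrees) for standard temperature and pressure. This sketch is illustrative only; the analysis described in the abstract would refine such formulas with measured temperature, pressure, humidity, and aerosol data:

```python
import math

def bennett_refraction_arcmin(h_deg):
    # Bennett's formula: refraction in arcminutes at apparent altitude
    # h_deg (degrees), assuming standard atmospheric conditions
    return 1.0 / math.tan(math.radians(h_deg + 7.31 / (h_deg + 4.4)))
```

At the horizon this gives roughly 34 arcminutes, more than the Sun's apparent radius of about 16 arcminutes, which is why refraction dominates the uncertainty in sunrise/sunset timing.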

  18. Mathematical Model of the Electronuclear Set-Up on the Beam of the JINR Synchrotron

    CERN Document Server

    Barashenkov, V S; Kumawat, H; Lobanova, V A

    2004-01-01

    On the basis of the Monte Carlo code CASCADE, developed at JINR, a mathematical model is created of the deep-subcritical set-up with a uranium blanket used in experiments under way at JINR with a 0.6-4 GeV proton beam. The neutron spectra, yields and energies of generated particles are calculated and compared for several modifications of the set-up. The influence of paraffin and graphite moderators on the characteristics of particles escaping the lead target is studied. The modelled set-up can be considered a first step towards experiments with the U-Pu ADS SAD designed at JINR, with a heat power of several tens of kW.

  19. Dimensional changes in plaster cast models due to the position of the impression tray during setting

    Directory of Open Access Journals (Sweden)

    Betina Grehs Porto

    2014-01-01

    Introduction: The objective of this study was to assess whether the positioning of the impression tray could cause distortion to plaster casts during gypsum setting time. Materials and Methods: Fifteen pairs of master models were cast with alginate impression material and immediately filled with gypsum. Impressions were allowed to set with the tray in the noninverted position (Group A) or in the inverted position (Group B). The plaster models were digitized using a laser scanner (3Shape R-700, 3Shape A/S, Copenhagen, Denmark). Measurements of tooth size and distance were obtained using the measurement tools of the O3d software (Widialabs, Brazil). Data were analyzed by paired t-test and linear regression at 5% significance. Results and Conclusion: Most of the measurements from both groups were similar, except for the lower intermolar distance. It was not possible to corroborate the presence of distortions due to the position of the impression tray during gypsum setting time.

  20. Mentoring for junior medical faculty: Existing models and suggestions for low-resource settings.

    Science.gov (United States)

    Menon, Vikas; Muraleedharan, Aparna; Bhat, Ballambhattu Vishnu

    2016-02-01

    Globally, there is increasing recognition of the positive benefits and impact of mentoring on faculty retention rates, career satisfaction and scholarly output. However, emphasis on the research and practice of mentoring is comparatively meagre in low- and middle-income countries. In this commentary, we critically examine two existing models of mentorship for medical faculty and offer a few suggestions for an integrated hybrid model that can be adapted for use in low-resource settings. Copyright © 2016 Elsevier B.V. All rights reserved.

  1. Index-based groundwater vulnerability mapping models using hydrogeological settings: A critical evaluation

    International Nuclear Information System (INIS)

    Kumar, Prashant; Bansod, Baban K.S.; Debnath, Sanjit K.; Thakur, Praveen Kumar; Ghanshyam, C.

    2015-01-01

    Groundwater vulnerability maps are useful for decision making in land use planning and water resource management. This paper reviews the various groundwater vulnerability assessment models developed across the world. Each model has been evaluated in terms of its pros and cons and the environmental conditions of its application. The paper further discusses the validation techniques used for the vulnerability maps generated by the various models. Implicit challenges associated with the development of groundwater vulnerability assessment models have also been identified, with scientific consideration given to the parameter relations and their selection. - Highlights: • Various index-based groundwater vulnerability assessment models have been discussed. • A comparative analysis of the models and their applicability in different hydrogeological settings has been discussed. • Research problems of the underlying vulnerability assessment models are also reported in this review paper.
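A typical index-based model of the kind reviewed here is DRASTIC, which combines seven hydrogeological parameters as a weighted sum of ratings. The weights below are the standard DRASTIC weights; the site ratings are hypothetical:

```python
# standard DRASTIC weights: Depth to water, net Recharge, Aquifer media,
# Soil media, Topography, Impact of vadose zone, hydraulic Conductivity
WEIGHTS = {"D": 5, "R": 4, "A": 3, "S": 2, "T": 1, "I": 5, "C": 3}

def drastic_index(ratings):
    # vulnerability index = weighted sum of the 1-10 site ratings;
    # higher index means higher intrinsic vulnerability
    return sum(WEIGHTS[k] * r for k, r in ratings.items())

site = {"D": 7, "R": 6, "A": 5, "S": 4, "T": 9, "I": 3, "C": 5}  # hypothetical
```

Because the index is a fixed linear combination, the parameter-weighting choices criticized in the review translate directly into the final vulnerability classes.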

  2. Index-based groundwater vulnerability mapping models using hydrogeological settings: A critical evaluation

    Energy Technology Data Exchange (ETDEWEB)

    Kumar, Prashant, E-mail: prashantkumar@csio.res.in [CSIR-Central Scientific Instruments Organisation, Chandigarh 160030 (India); Academy of Scientific and Innovative Research—CSIO, Chandigarh 160030 (India); Bansod, Baban K.S.; Debnath, Sanjit K. [CSIR-Central Scientific Instruments Organisation, Chandigarh 160030 (India); Academy of Scientific and Innovative Research—CSIO, Chandigarh 160030 (India); Thakur, Praveen Kumar [Indian Institute of Remote Sensing (ISRO), Dehradun 248001 (India); Ghanshyam, C. [CSIR-Central Scientific Instruments Organisation, Chandigarh 160030 (India); Academy of Scientific and Innovative Research—CSIO, Chandigarh 160030 (India)

    2015-02-15

    Groundwater vulnerability maps are useful for decision making in land use planning and water resource management. This paper reviews the various groundwater vulnerability assessment models developed across the world. Each model has been evaluated in terms of its pros and cons and the environmental conditions of its application. The paper further discusses the validation techniques used for the vulnerability maps generated by the various models. Implicit challenges associated with the development of groundwater vulnerability assessment models have also been identified, with scientific consideration given to the parameter relations and their selection. - Highlights: • Various index-based groundwater vulnerability assessment models have been discussed. • A comparative analysis of the models and their applicability in different hydrogeological settings has been discussed. • Research problems of the underlying vulnerability assessment models are also reported in this review paper.

  3. Admissible Estimators in the General Multivariate Linear Model with Respect to Inequality Restricted Parameter Set

    Directory of Open Access Journals (Sweden)

    Shangli Zhang

    2009-01-01

    By using the methods of linear algebra and matrix inequality theory, we obtain the characterization of admissible estimators in the general multivariate linear model with respect to an inequality-restricted parameter set. In the classes of homogeneous and general linear estimators, the necessary and sufficient conditions for the estimators of the regression coefficient function to be admissible are established.

  4. Brief Report: Predictors of Outcomes in the Early Start Denver Model Delivered in a Group Setting

    Science.gov (United States)

    Vivanti, Giacomo; Dissanayake, Cheryl; Zierhut, Cynthia; Rogers, Sally J.

    2013-01-01

    There is a paucity of studies that have looked at factors associated with responsiveness to interventions in preschoolers with autism spectrum disorder (ASD). We investigated learning profiles associated with response to the Early Start Denver Model delivered in a group setting. Our preliminary results from 21 preschool children with an ASD aged…

  5. Stabilizing model predictive control of a gantry crane based on flexible set-membership constraints

    NARCIS (Netherlands)

    Iles, Sandor; Lazar, M.; Kolonic, Fetah; Jadranko, Matusko

    2015-01-01

    This paper presents a stabilizing distributed model predictive control of a gantry crane taking into account the variation of cable length. The proposed algorithm is based on the off-line computation of a sequence of 1-step controllable sets and a condition that enables flexible convergence towards

  6. Breaking Bad News in Counseling: Applying the PEWTER Model in the School Setting

    Science.gov (United States)

    Keefe-Cooperman, Kathleen; Brady-Amoon, Peggy

    2013-01-01

    Breaking bad news is a stressful experience for counselors and clients. In this article, the PEWTER (Prepare, Evaluate, Warning, Telling, Emotional Response, Regrouping) model (Nardi & Keefe-Cooperman, 2006) is used as a guide to facilitate the process of a difficult conversation and promote client growth in a school setting. In this…

  7. Thermal comfort assessment in a Dutch hospital settingmodel applicability

    NARCIS (Netherlands)

    Ottenheijm, E.M.M.; Loomans, M.G.L.C.; Kort, H.S.M.; Trip, A.

    2016-01-01

    Limited information is available on thermal comfort performance of the indoor environment in health care facilities, both for staff and patients. Thermal comfort models such as Predicted Mean Vote (PMV) and Adaptive Thermal Comfort (ATC) have not been applied extensively for this setting. In

  8. A Model for Teaching Rational Behavior Therapy in a Public School Setting.

    Science.gov (United States)

    Patton, Patricia L.

    A training model for the use of rational behavior therapy (RBT) with emotionally disturbed adolescents in a school setting is presented, including a structured, didactic format consisting of five basic RBT training techniques. The training sessions, lasting 10 weeks each, are described. Also presented is the organization for the actual classroom…

  9. Electrical modeling of semiconductor bridge (SCB) BNCP detonators with electrochemical capacitor firing sets

    Energy Technology Data Exchange (ETDEWEB)

    Marx, K.D. [Sandia National Labs., Livermore, CA (United States); Ingersoll, D.; Bickes, R.W. Jr. [Sandia National Labs., Albuquerque, NM (United States)

    1998-11-01

    In this paper the authors describe computer models that simulate the electrical characteristics and hence, the firing characteristics and performance of a semiconductor bridge (SCB) detonator for the initiation of BNCP [tetraammine-cis-bis(5-nitro-2H-tetrazolato-N²) cobalt(III) perchlorate]. The electrical data and resultant models provide new insights into the fundamental behavior of SCB detonators, particularly with respect to the initiation mechanism and the interaction of the explosive powder with the SCB. One model developed, the Thermal Feedback Model, considers the total energy budget for the system, including the time evolution of the energy delivered to the powder by the electrical circuit, as well as that released by the ignition and subsequent chemical reaction of the powder. The authors also present data obtained using a new low-voltage firing set which employed an advanced electrochemical capacitor having a nominal capacitance of 350,000 µF at 9 V, the maximum voltage rating for this particular device. A model for this firing set and detonator was developed by making measurements of the intrinsic capacitance and equivalent series resistance (ESR < 10 mΩ) of a single device. This model was then used to predict the behavior of BNCP SCB detonators fired alone, as well as in a multishot, parallel-string configuration using a firing set composed of either a single 9 V electrochemical capacitor or two of the capacitors wired in series and charged to 18 V.
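The electrical side of such a firing-set model can be sketched as a capacitor discharging through its ESR in series with the bridge. The bridge resistance below is a hypothetical constant; a full model like the paper's Thermal Feedback Model would let it vary with temperature and add the chemical energy released by the powder:

```python
def fire_energy(C=0.35, V0=9.0, esr=0.010, r_bridge=0.050,
                dt=1e-6, t_end=0.005):
    # Explicit-Euler discharge of a capacitor (C farads, charged to V0 volts)
    # through its equivalent series resistance plus a constant, hypothetical
    # bridge resistance. Returns (energy deposited in bridge, final voltage).
    v, e_bridge, t = V0, 0.0, 0.0
    while t < t_end:
        i = v / (esr + r_bridge)           # series loop current [A]
        e_bridge += i * i * r_bridge * dt  # Joule heating in the bridge [J]
        v -= i / C * dt                    # capacitor voltage droop [V]
        t += dt
    return e_bridge, v
```

With these illustrative values the bridge receives the ESR-to-bridge resistance fraction (here 5/6) of the energy the capacitor releases, which is why a low ESR matters for a low-voltage firing set.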

  10. A neighborhood statistics model for predicting stream pathogen indicator levels.

    Science.gov (United States)

    Pandey, Pramod K; Pasternack, Gregory B; Majumder, Mahbubul; Soupir, Michelle L; Kaiser, Mark S

    2015-03-01

    Because elevated levels of water-borne Escherichia coli in streams are a leading cause of water quality impairments in the U.S., water-quality managers need tools for predicting aqueous E. coli levels. Presently, E. coli levels may be predicted using complex mechanistic models that have a high degree of unchecked uncertainty or simpler statistical models. To assess spatio-temporal patterns of in-stream E. coli levels, herein we measured E. coli, a pathogen indicator, at 16 sites (at four different times) within the Squaw Creek watershed, Iowa; subsequently, the Markov Random Field model was exploited to develop a neighborhood statistics model for predicting in-stream E. coli levels. Two observed covariates, local water temperature (degrees Celsius) and mean cross-sectional depth (meters), were used as inputs to the model. Predictions of E. coli levels in the water column were compared with independent observational data collected from 16 in-stream locations. The results revealed that the spatio-temporal averages of predicted and observed E. coli levels were extremely close. Approximately 66% of individual predicted E. coli concentrations were within a factor of 2 of the observed values. In only one event was the difference between prediction and observation beyond one order of magnitude. The mean of all predicted values at the 16 locations was approximately 1% higher than the mean of the observed values. The approach presented here will be useful for assessing in-stream contamination, such as pathogen/pathogen-indicator levels, at the watershed scale.
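In a Markov Random Field, the value at a site is conditioned on its neighbors, so a local prediction combines covariate terms with a neighbor-mean term. A minimal sketch of such a local conditional mean; the coefficients here are invented for illustration, whereas the study fits its model to the 16-site data:

```python
def predict_log_ecoli(temp_c, depth_m, neighbor_logs,
                      b0=0.5, b_temp=0.08, b_depth=-1.2, rho=0.6):
    # Local conditional mean of log E. coli at a site: intercept,
    # temperature and depth covariates, plus a spatial term proportional
    # to the mean of neighboring sites' values (as in an MRF's local
    # conditionals). All coefficients are hypothetical.
    nbr_mean = sum(neighbor_logs) / len(neighbor_logs)
    return b0 + b_temp * temp_c + b_depth * depth_m + rho * nbr_mean
```

The neighbor term is what distinguishes this family of models from an ordinary regression on temperature and depth alone: nearby sites pull the prediction toward their own levels.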

  11. Mathematical model for comparing multi-level economic systems

    Science.gov (United States)

    Brykalov, S. M.; Kryanev, A. V.

    2017-12-01

    A mathematical model (scheme) for the multi-level comparison of economic systems, each characterized by a system of indices, is worked out. In this model, indicators from peer review and from forecasting of the economic systems under consideration can be used. The model can take into account uncertainty in the estimated values of the parameters or in the expert estimations. The model uses a multi-criteria approach based on Pareto solutions.
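The Pareto-based multi-criteria comparison mentioned above can be sketched as a non-dominance filter over systems described by index vectors, with every indicator oriented so that higher is better (the index vectors below are invented):

```python
def pareto_front(systems):
    # Return indices of systems not dominated by any other system.
    # A system dominates another if it is at least as good on every
    # indicator and strictly better on at least one.
    def dominates(a, b):
        return (all(x >= y for x, y in zip(a, b))
                and any(x > y for x, y in zip(a, b)))
    return [i for i, s in enumerate(systems)
            if not any(dominates(t, s)
                       for j, t in enumerate(systems) if j != i)]
```

The Pareto set is exactly the set of systems that cannot be improved on one indicator without losing on another, which is why it is a natural output for a multi-level comparison with several incommensurable indices.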

  12. Model tracking system for low-level radioactive waste disposal facilities: License application interrogatories and responses

    Energy Technology Data Exchange (ETDEWEB)

    Benbennick, M.E.; Broton, M.S.; Fuoto, J.S.; Novgrod, R.L.

    1994-08-01

    This report describes a model tracking system for a low-level radioactive waste (LLW) disposal facility license application. In particular, the model tracks interrogatories (questions, requests for information, comments) and responses. A set of requirements and desired features for the model tracking system was developed, including required structure and computer screens. Nine tracking systems were then reviewed against the model system requirements and only two were found to meet all requirements. Using Kepner-Tregoe decision analysis, a model tracking system was selected.
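Kepner-Tregoe decision analysis, as used above to select among candidate tracking systems, scores each alternative against weighted criteria and picks the highest weighted sum. The criteria weights and 1-10 scores below are hypothetical, not the report's actual evaluation:

```python
# hypothetical criteria weights (importance) and 1-10 scores for
# three candidate tracking systems against those criteria
weights = [10, 8, 5]
candidates = {"A": [9, 6, 7], "B": [7, 9, 9], "C": [8, 8, 6]}

def kt_score(ws, scores):
    # weighted sum across criteria, the core of the KT "decision analysis"
    return sum(w * s for w, s in zip(ws, scores))

best = max(candidates, key=lambda k: kt_score(weights, candidates[k]))
```

In practice the KT method also separates mandatory ("must") requirements, which act as a pass/fail screen before any weighted scoring; the report's finding that only two of nine systems met all requirements corresponds to that screening step.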

  13. Model tracking system for low-level radioactive waste disposal facilities: License application interrogatories and responses

    International Nuclear Information System (INIS)

    Benbennick, M.E.; Broton, M.S.; Fuoto, J.S.; Novgrod, R.L.

    1994-08-01

    This report describes a model tracking system for a low-level radioactive waste (LLW) disposal facility license application. In particular, the model tracks interrogatories (questions, requests for information, comments) and responses. A set of requirements and desired features for the model tracking system was developed, including required structure and computer screens. Nine tracking systems were then reviewed against the model system requirements and only two were found to meet all requirements. Using Kepner-Tregoe decision analysis, a model tracking system was selected.

  14. Exhaustively characterizing feasible logic models of a signaling network using Answer Set Programming.

    Science.gov (United States)

    Guziolowski, Carito; Videla, Santiago; Eduati, Federica; Thiele, Sven; Cokelaer, Thomas; Siegel, Anne; Saez-Rodriguez, Julio

    2013-09-15

    Logic modeling is a useful tool to study signal transduction across multiple pathways. Logic models can be generated by training a network containing the prior knowledge to phospho-proteomics data. The training can be performed using stochastic optimization procedures, but these are unable to guarantee a global optimum or to report the complete family of feasible models. This, however, is essential to provide precise insight into the mechanisms underlying signal transduction and to generate reliable predictions. We propose the use of Answer Set Programming to explore exhaustively the space of feasible logic models. Toward this end, we have developed caspo, an open-source Python package that provides a powerful platform to learn and characterize logic models by leveraging the rich modeling language and solving technologies of Answer Set Programming. We illustrate the usefulness of caspo by revisiting a model of pro-growth and inflammatory pathways in liver cells. We show that, if experimental error is taken into account, there are thousands (11 700) of models compatible with the data. Despite the large number, we can extract structural features from the models, such as links that are always (or never) present or modules that appear in a mutually exclusive fashion. To further characterize this family of models, we investigate the input-output behavior of the models. We find 91 behaviors across the 11 700 models and we suggest new experiments to discriminate among them. Our results underscore the importance of characterizing in a global and exhaustive manner the family of feasible models, with important implications for experimental design. caspo is freely available for download (license GPLv3) and as a web service at http://caspo.genouest.org/. Supplementary materials are available at Bioinformatics online. santiago.videla@irisa.fr.
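The idea of exhaustively characterizing feasible logic models can be shown at toy scale by brute force (caspo itself delegates this enumeration to Answer Set Programming solvers; the gate set and the discretized data below are invented for illustration):

```python
# candidate logic gates for one node regulated by inputs (a, b)
GATES = {
    "AND": lambda a, b: a and b,
    "OR": lambda a, b: a or b,
    "A_ONLY": lambda a, b: a,
    "B_ONLY": lambda a, b: b,
}

# hypothetical discretized readouts: input state (a, b) -> observed output
DATA = {(0, 0): 0, (0, 1): 1, (1, 0): 0, (1, 1): 1}

# models that reproduce the data exactly
feasible = [name for name, f in GATES.items()
            if all(f(a, b) == out for (a, b), out in DATA.items())]

# tolerating one mismatch (experimental error) enlarges the feasible family
feasible_tol = [name for name, f in GATES.items()
                if sum(f(a, b) != out for (a, b), out in DATA.items()) <= 1]
```

This mirrors the paper's central observation: admitting experimental error turns a single best-fit model into a whole family of compatible models, which then has to be characterized as a set rather than by one representative.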

  15. A population-based model for priority setting across the care continuum and across modalities

    Directory of Open Access Journals (Sweden)

    Mortimer Duncan

    2006-03-01

    Abstract Background The Health-sector Wide (HsW) priority setting model is designed to shift the focus of priority setting away from 'program budgets' – that are typically defined by modality or disease-stage – and towards well-defined target populations with a particular disease/health problem. Methods The key features of the HsW model are (i) a disease/health problem framework, (ii) a sequential approach to covering the entire health sector, (iii) comprehensiveness of scope in identifying intervention options and (iv) the use of objective evidence. The HsW model redefines the unit of analysis over which priorities are set to include all mutually exclusive and complementary interventions for the prevention and treatment of each disease/health problem under consideration. The HsW model is therefore incompatible with the fragmented approach to priority setting across multiple program budgets that currently characterises allocation in many health systems. The HsW model employs standard cost-utility analyses and decision rules with the aim of maximising QALYs contingent upon the global budget constraint for the set of diseases/health problems under consideration. It is recognised that the objective function may include non-health arguments that would imply a departure from simple QALY maximisation and that political constraints frequently limit degrees of freedom. In addressing these broader considerations, the HsW model can be modified to maximise value-weighted QALYs contingent upon the global budget constraint and any political constraints bearing upon allocation decisions. Results The HsW model has been applied in several contexts, recently to osteoarthritis, which has demonstrated both its practical application and its capacity to derive clear evidence-based policy recommendations. Conclusion Comparisons with other approaches to priority setting, such as Programme Budgeting and Marginal Analysis (PBMA) and modality-based cost
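The decision rule described above, choosing one intervention per disease/health problem from a mutually exclusive set so as to maximize QALYs under a global budget, can be sketched as an exhaustive search. The disease names, costs, and QALY gains below are invented for illustration:

```python
from itertools import product

# hypothetical (cost, QALYs) options per disease; (0, 0.0) = do nothing
OPTIONS = {
    "osteoarthritis": [(0, 0.0), (200, 3.0), (500, 5.0)],
    "depression": [(0, 0.0), (300, 4.0), (700, 6.0)],
}

def best_allocation(options, budget):
    # Try every combination of one option per disease; keep the feasible
    # combination (total cost within budget) with the highest total QALYs.
    best = (float("-inf"), None)
    diseases = list(options)
    for choice in product(*(options[d] for d in diseases)):
        cost = sum(c for c, _ in choice)
        qalys = sum(q for _, q in choice)
        if cost <= budget and qalys > best[0]:
            best = (qalys, dict(zip(diseases, choice)))
    return best
```

With value weights on QALYs (the modification the abstract mentions), only the objective line changes; the feasible set and search are identical, which is what makes the HsW model adaptable to non-health arguments.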

  16. Mixture modeling of multi-component data sets with application to ion-probe zircon ages

    Science.gov (United States)

    Sambridge, M. S.; Compston, W.

    1994-12-01

    A method is presented for detecting multiple components in a population of analytical observations for zircon and other ages. The procedure uses an approach known as mixture modeling to estimate the most likely ages, proportions and number of distinct components in a given data set. Particular attention is paid to estimating errors in the estimated ages and proportions. At each stage of the procedure several alternative numerical approaches are suggested, each having its own advantages in terms of efficiency and accuracy. The methodology is tested on synthetic data sets simulating two or more mixed populations of zircon ages. In this case the true ages and proportions of each population are known and compare well with the results of the new procedure. Two examples are presented of its use with sets of SHRIMP ²³⁸U–²⁰⁶Pb zircon ages from Palaeozoic rocks. A published data set for altered zircons from bentonite at Meishucun, South China, previously treated as a single-component population after screening for gross alteration effects, can be resolved into two components by the new procedure and their ages, proportions and standard errors estimated. The older component, at 530 ± 5 Ma (2σ), is our best current estimate for the age of the bentonite. Mixture modeling of a data set for unaltered zircons from a tonalite elsewhere defines the magmatic ²³⁸U–²⁰⁶Pb age at high precision (2σ ± 1.5 Ma), but one-quarter of the 41 analyses detect hidden and significantly older cores.
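Mixture models of this kind are commonly fitted with an expectation-maximization (EM) loop. A minimal two-component sketch on invented ages with a fixed common analytical error; the paper's procedure additionally estimates the number of components and the standard errors of ages and proportions:

```python
import math

def em_two_normals(xs, mu=(500.0, 560.0), sigma=3.0, n_iter=200):
    # EM for a two-component normal mixture with a fixed, shared sigma
    # (a stand-in for a common analytical error). Returns the two
    # component means and the mixing proportion of component 1.
    m1, m2 = mu
    w = 0.5
    for _ in range(n_iter):
        # E-step: responsibility of component 1 for each observation
        r = []
        for x in xs:
            p1 = w * math.exp(-0.5 * ((x - m1) / sigma) ** 2)
            p2 = (1 - w) * math.exp(-0.5 * ((x - m2) / sigma) ** 2)
            r.append(p1 / (p1 + p2))
        # M-step: update means and mixing proportion
        n1 = sum(r)
        m1 = sum(ri * x for ri, x in zip(r, xs)) / n1
        m2 = sum((1 - ri) * x for ri, x in zip(r, xs)) / (len(xs) - n1)
        w = n1 / len(xs)
    return m1, m2, w

# invented ages (Ma) drawn near two hypothetical components
ages = [520.0, 522.0, 524.0, 526.0, 528.0,
        535.0, 537.0, 539.0, 541.0, 543.0]
```

EM is one of the "alternative numerical approaches" trade-offs the abstract alludes to: it is simple and stable but only finds a local optimum, so initialization and the assumed number of components matter.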

  17. Model ecosystem approach to estimate community level effects of radiation

    Energy Technology Data Exchange (ETDEWEB)

    Masahiro, Doi; Nobuyuki, Tanaka; Shoichi, Fuma; Nobuyoshi, Ishii; Hiroshi, Takeda; Zenichiro, Kawabata [National Institute of Radiological Sciences, Environmental and Toxicological Sciences Research Group, Chiba (Japan)

    2004-07-01

    A mathematical computer model is developed to simulate the population dynamics and dynamic mass budgets of a microbial community realized as a self-sustaining aquatic ecological system in a tube. Autotrophic algae, heterotrophic protozoa and saprotrophic bacteria live symbiotically, with inter-species interactions such as predator-prey relationships, competition for common resources, autolysis of detritus and a detritus-grazing food chain. The simulation is an individual-based parallel model that builds in demographic stochasticity and environmental stochasticity by dividing the aquatic environment into patches. The validity of the model is checked against the multifaceted data of the microcosm experiments. In the analysis, intrinsic parameters of umbrella endpoints (lethality, morbidity, reproductive growth, mutation) are manipulated at the individual level, in an attempt to find population-level, community-level and ecosystem-level disorders in ecologically crucial parameters (e.g. intrinsic growth rate, carrying capacity, variation) related to the probability of population extinction. (author)

  18. Model ecosystem approach to estimate community level effects of radiation

    International Nuclear Information System (INIS)

    Masahiro, Doi; Nobuyuki, Tanaka; Shoichi, Fuma; Nobuyoshi, Ishii; Hiroshi, Takeda; Zenichiro, Kawabata

    2004-01-01

    A mathematical computer model is developed to simulate the population dynamics and dynamic mass budgets of a microbial community realized as a self-sustaining aquatic ecological system in a tube. Autotrophic algae, heterotrophic protozoa and saprotrophic bacteria live symbiotically, with inter-species interactions such as predator-prey relationships, competition for common resources, autolysis of detritus and a detritus-grazing food chain. The simulation is an individual-based parallel model that builds in demographic stochasticity and environmental stochasticity by dividing the aquatic environment into patches. The validity of the model is checked against the multifaceted data of the microcosm experiments. In the analysis, intrinsic parameters of umbrella endpoints (lethality, morbidity, reproductive growth, mutation) are manipulated at the individual level, in an attempt to find population-level, community-level and ecosystem-level disorders in ecologically crucial parameters (e.g. intrinsic growth rate, carrying capacity, variation) related to the probability of population extinction. (author)

  19. Tree level constraints on conformal field theories and string models

    International Nuclear Information System (INIS)

    Lewellen, D.C.

    1989-05-01

    Simple tree-level constraints for conformal field theories, which follow from the requirement of crossing symmetry of four-point amplitudes, are presented, and their utility for probing general properties of string models is briefly illustrated and discussed. 9 refs

  20. Improving satellite-based PM2.5 estimates in China using Gaussian processes modeling in a Bayesian hierarchical setting.

    Science.gov (United States)

    Yu, Wenxi; Liu, Yang; Ma, Zongwei; Bi, Jun

    2017-08-01

    Using satellite-based aerosol optical depth (AOD) measurements and statistical models to estimate ground-level PM2.5 is a promising way to fill in the areas that are not covered by ground PM2.5 monitors. The statistical models used in previous studies are primarily Linear Mixed Effects (LME) and Geographically Weighted Regression (GWR) models. In this study, we developed a new regression model between PM2.5 and AOD using Gaussian processes in a Bayesian hierarchical setting. Gaussian processes model the stochastic nature of the spatial random effects, where the mean surface and the covariance function are specified. The spatial stochastic process is incorporated under the Bayesian hierarchical framework to explain the variation of PM2.5 concentrations together with other factors, such as AOD, spatial and non-spatial random effects. We evaluate the results of our model and compare them with those of other, conventional statistical models (GWR and LME) by within-sample model fitting and out-of-sample validation (cross validation, CV). The results show that our model possesses a CV result (R² = 0.81) that reflects higher accuracy than that of GWR and LME (0.74 and 0.48, respectively). Our results indicate that Gaussian process models have the potential to improve the accuracy of satellite-based PM2.5 estimates.
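The Gaussian-process core of such a model is interpolation governed by a covariance (kernel) function. A self-contained sketch with an RBF kernel and invented 1-D data; the paper's model is hierarchical, with AOD covariates and spatial/non-spatial random effects layered on top of this kernel machinery:

```python
import math

def rbf(x1, x2, ls=1.0):
    # squared-exponential (RBF) covariance between two 1-D inputs
    return math.exp(-0.5 * ((x1 - x2) / ls) ** 2)

def solve(A, b):
    # Gaussian elimination with partial pivoting (fine for tiny systems)
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k]
                              for k in range(r + 1, n))) / M[r][r]
    return x

def gp_predict(xs, ys, x_star, noise=1e-6):
    # GP posterior mean at x_star with zero prior mean:
    # k_*^T (K + noise*I)^-1 y
    K = [[rbf(xi, xj) + (noise if i == j else 0.0)
          for j, xj in enumerate(xs)] for i, xi in enumerate(xs)]
    alpha = solve(K, ys)
    return sum(a * rbf(xi, x_star) for a, xi in zip(alpha, xs))
```

Far from the data the posterior mean reverts to the prior mean (zero here), which is why the hierarchical setting matters: the covariate regression supplies the mean surface, and the GP explains spatially structured residuals.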

  1. A Decomposition Model for HPLC-DAD Data Set and Its Solution by Particle Swarm Optimization

    Directory of Open Access Journals (Sweden)

    Lizhi Cui

    2014-01-01

    This paper proposes a separation method for High Performance Liquid Chromatography with Diode Array Detection (HPLC-DAD) data sets, based on the model of Generalized Reference Curve Measurement and the algorithm of Particle Swarm Optimization (GRCM-PSO). Firstly, initial parameters are generated to construct reference curves for the chromatogram peaks of the compounds based on physical principles. Then, a Generalized Reference Curve Measurement (GRCM) model is designed to transform these parameters into scalar values, which indicate the fitness for all parameters. Thirdly, rough solutions are found by searching an individual target for every parameter, and reinitialization is executed only around these rough solutions. Then, the Particle Swarm Optimization (PSO) algorithm is adopted to obtain the optimal parameters by minimizing the fitness of these new parameters as given by the GRCM model. Finally, spectra for the compounds are estimated based on the optimal parameters and the HPLC-DAD data set. Through simulations and experiments, the following conclusions are drawn: (1) the GRCM-PSO method can separate the chromatogram peaks and spectra from the HPLC-DAD data set without knowing the number of compounds in advance, even when severe overlap and white noise exist; (2) the GRCM-PSO method is able to handle real HPLC-DAD data sets.
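    A minimal sketch of the PSO step used above, here minimizing a one-dimensional quadratic rather than the GRCM fitness (the inertia and acceleration coefficients are common textbook defaults, not the paper's settings):

```python
import random

def pso(f, lo, hi, n_particles=20, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Basic particle swarm optimization of f over the interval [lo, hi]."""
    rng = random.Random(seed)
    xs = [rng.uniform(lo, hi) for _ in range(n_particles)]
    vs = [0.0] * n_particles
    pbest = xs[:]                      # personal best positions
    pbest_f = [f(x) for x in xs]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g], pbest_f[g]   # global best
    for _ in range(iters):
        for i in range(n_particles):
            r1, r2 = rng.random(), rng.random()
            vs[i] = (w * vs[i] + c1 * r1 * (pbest[i] - xs[i])
                     + c2 * r2 * (gbest - xs[i]))
            xs[i] = min(hi, max(lo, xs[i] + vs[i]))  # clamp to the search box
            fx = f(xs[i])
            if fx < pbest_f[i]:
                pbest[i], pbest_f[i] = xs[i], fx
                if fx < gbest_f:
                    gbest, gbest_f = xs[i], fx
    return gbest, gbest_f
```

    In the GRCM-PSO setting, `f` would be the GRCM fitness over the peak parameters rather than this toy objective.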

  2. THE COOPERATIVE SCRIPT MODEL WITH A SCIENCE, ENVIRONMENT, TECHNOLOGY, AND SOCIETY (SETS) APPROACH AND ITS EFFECT ON LEARNING OUTCOMES

    Directory of Open Access Journals (Sweden)

    Amir Maksum

    2015-11-01

    This study aimed to determine the positive effects of applying the cooperative script learning model with the SETS approach on the chemistry learning outcomes of students in class X. The population in this study is class X students of a senior high school in Kendal. Sampling was done by the cluster purposive sampling technique: one class was obtained as the experimental class, which used cooperative script learning with the SETS approach, and another class as the control class, which used expository teaching with the SETS approach. Data were collected by documentation, testing, observation and questionnaires. Based on the analysis of affective domain data, the score percentages were 80% for the experimental class and 78% for the control class, while the score percentages for the psychomotor domain were 79% for the experimental class and 78% for the control class. Based on the analysis of the results, a correlation coefficient rb of 0.52 was obtained, with a contribution of 28%. The conclusion of this study is that the use of cooperative script learning with the SETS approach affects the chemistry learning outcomes of class X senior high school students in Kendal on the subject of the redox concept, with a contribution of 28%.

  3. Emergency residential care settings: A model for service assessment and design.

    Science.gov (United States)

    Graça, João; Calheiros, Maria Manuela; Patrício, Joana Nunes; Magalhães, Eunice Vieira

    2018-02-01

    There have been calls for uncovering the "black box" of residential care services, with a particular need for research focusing on emergency care settings for children and youth in danger. In fact, the strikingly scant empirical attention that these settings have received so far contrasts with the role that they often play as a gateway into the child welfare system. To answer these calls, this work presents and tests a framework for assessing a service model in residential emergency care. It comprises seven studies which address a set of different focal areas (e.g., service logic model; care experiences), informants (e.g., case records; staff; children/youth), and service components (e.g., case assessment/evaluation; intervention; placement/referral). Drawing on this process-consultation approach, the work proposes a set of key challenges for emergency residential care in terms of service improvement and development, and calls for further research targeting more care units and different types of residential care services. These findings offer a contribution to inform evidence-based practice and policy on service models of residential care. Copyright © 2017 Elsevier Ltd. All rights reserved.

  4. Towards deep inclusion for equity-oriented health research priority-setting: A working model.

    Science.gov (United States)

    Pratt, Bridget; Merritt, Maria; Hyder, Adnan A

    2016-02-01

    Growing consensus that health research funders should align their investments with national research priorities presupposes that such national priorities exist and are just. Arguably, justice requires national health research priority-setting to promote health equity. Such a position is consistent with recommendations made by the World Health Organization and at global ministerial summits that health research should serve to reduce health inequalities between and within countries. Thus far, no specific requirements for equity-oriented research priority-setting have been described to guide policymakers. As a step towards the explication and defence of such requirements, we propose that deep inclusion is a key procedural component of equity-oriented research priority-setting. We offer a model of deep inclusion that was developed by applying concepts from work on deliberative democracy and development ethics. This model consists of three dimensions--breadth, qualitative equality, and high-quality non-elite participation. Deep inclusion is captured not only by who is invited to join a decision-making process but also by how they are involved and by when non-elite stakeholders are involved. To clarify and illustrate the proposed dimensions, we use the sustained example of health systems research. We conclude by reviewing practical challenges to achieving deep inclusion. Despite the existence of barriers to implementation, our model can help policymakers and other stakeholders design more inclusive national health research priority-setting processes and assess these processes' depth of inclusion. Copyright © 2016 Elsevier Ltd. All rights reserved.

  5. Which currency to set price? A model of multiple countries and risk averse firm

    OpenAIRE

    Jian Wang

    2004-01-01

    A crucial question at the center of many recent debates in international macroeconomics is the currency in which prices are sticky. This paper provides a microfoundation for studying the firm's choice of price-setting currency in a sticky price model. I first prove that risk preference is a secondary consideration in the choice of the price-setting currency. This result questions the claim that the currency forward market can change the currency choice of risk-averse firms. Then I extend the...

  6. Integrated Berth Allocation and Quay Crane Assignment Problem: Set partitioning models and computational results

    DEFF Research Database (Denmark)

    Iris, Cagatay; Pacino, Dario; Røpke, Stefan

    2015-01-01

    Most of the operational problems in container terminals are strongly interconnected. In this paper, we study the integrated Berth Allocation and Quay Crane Assignment Problem in seaport container terminals. We will extend the current state-of-the-art by proposing novel set partitioning models....... To improve the performance of the set partitioning formulations, a number of variable reduction techniques are proposed. Furthermore, we analyze the effects of different discretization schemes and the impact of using a time-variant/invariant quay crane allocation policy. Computational experiments show...
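    The set partitioning idea behind such formulations can be illustrated with a tiny brute-force solver: each column is a feasible assignment for a subset of vessels, and every vessel must be covered exactly once. The columns and costs below are invented, and the berth/crane semantics are abstracted away:

```python
from itertools import combinations

def set_partition(universe, columns, costs):
    """Pick a minimum-cost subset of columns covering each element exactly once."""
    best = (float("inf"), None)
    n = len(columns)
    for r in range(1, n + 1):
        for idx in combinations(range(n), r):
            covered = [e for i in idx for e in columns[i]]
            # exact cover: every element appears exactly once, no overlaps
            if sorted(covered) == sorted(universe):
                cost = sum(costs[i] for i in idx)
                if cost < best[0]:
                    best = (cost, idx)
    return best

# vessels {1, 2, 3}; each column serves a subset of them at some cost
cols = [{1}, {2}, {3}, {1, 2}, {2, 3}]
cost = [4, 3, 5, 6, 7]
```

    Real instances are solved with column generation or variable-reduction techniques rather than enumeration, which is exponential in the number of columns.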

  7. Modeling decisions from experience: How models with a set of parameters for aggregate choices explain individual choices

    Directory of Open Access Journals (Sweden)

    Neha Sharma

    2017-10-01

    One of the paradigms in judgment and decision-making (called the “sampling paradigm”) involves decision-makers sampling information before making a final consequential choice. In the sampling paradigm, certain computational models have been proposed in which a set of single or distribution parameters is calibrated to the choice proportions of a group of participants (aggregate and hierarchical models). However, currently little is known about how aggregate and hierarchical models would account for choices made by individual participants in the sampling paradigm. In this paper, we test the ability of aggregate and hierarchical models to explain choices made by individual participants. Several models, Ensemble, Cumulative Prospect Theory (CPT), Best Estimation and Simulation Techniques (BEAST), Natural-Mean Heuristic (NMH), and Instance-Based Learning (IBL), had their parameters calibrated to individual choices in a large dataset involving the sampling paradigm. Later, these models were generalized to two large datasets in the sampling paradigm. Results revealed that the aggregate models (like CPT and IBL) accounted for individual choices better than hierarchical models (like Ensemble and BEAST) upon generalization to problems that were like those encountered during calibration. Furthermore, the CPT model, which relies on differential valuing of gains and losses, performed better than other models during calibration and generalization on datasets with a similar set of problems. The IBL model, relying on recency and frequency of sampled information, and the NMH model, relying on frequency of sampled information, performed better than other models during generalization to a challenging dataset. Sequential analyses of results from different models showed how these models accounted for transitions from the last sample to the final choice in human data. We highlight the implications of using aggregate and hierarchical models in explaining individual choices.
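    The simplest of the models compared here, the Natural-Mean Heuristic, can be sketched directly: choose the option whose sampled outcomes have the higher mean. The sample data below are made up for illustration:

```python
def natural_mean_choice(samples_a, samples_b):
    """Natural-Mean Heuristic: pick the option with the higher sampled mean.

    Ties go to option B in this sketch; tie-breaking is an assumption here.
    """
    mean = lambda xs: sum(xs) / len(xs)
    return "A" if mean(samples_a) > mean(samples_b) else "B"
```

    Models like IBL additionally weight recent samples more heavily, which is what distinguishes them on the challenging generalization dataset described above.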

  8. Glycated haemoglobin (HbA1c ) and fasting plasma glucose relationships in sea-level and high-altitude settings.

    Science.gov (United States)

    Bazo-Alvarez, J C; Quispe, R; Pillay, T D; Bernabé-Ortiz, A; Smeeth, L; Checkley, W; Gilman, R H; Málaga, G; Miranda, J J

    2017-06-01

    Higher haemoglobin levels and differences in glucose metabolism have been reported among high-altitude residents, which may influence the diagnostic performance of HbA1c. This study explores the relationship between HbA1c and fasting plasma glucose (FPG) in populations living at sea level and at an altitude of > 3000 m. Data from 3613 Peruvian adults without a known diagnosis of diabetes from sea-level and high-altitude settings were evaluated. Linear, quadratic and cubic regression models were performed adjusting for potential confounders. Receiver operating characteristic (ROC) curves were constructed and concordance between HbA1c and FPG was assessed using a Kappa index. At sea level and high altitude, means were 13.5 and 16.7 g/dl (P > 0.05) for haemoglobin level; 41 and 40 mmol/mol (5.9% and 5.8%; P < 0.01) for HbA1c; and 5.8 and 5.1 mmol/l (105 and 91.3 mg/dl; P < 0.001) for FPG, respectively. The adjusted relationship between HbA1c and FPG was quadratic at sea level and linear at high altitude. Adjusted models showed that, to predict an HbA1c value of 48 mmol/mol (6.5%), the corresponding mean FPG values at sea level and high altitude were 6.6 and 14.8 mmol/l (120 and 266 mg/dl), respectively. An HbA1c cut-off of 48 mmol/mol (6.5%) had a sensitivity for high FPG of 87.3% (95% confidence interval (95% CI) 76.5 to 94.4) at sea level and 40.9% (95% CI 20.7 to 63.6) at high altitude. The relationship between HbA1c and FPG is less clear at high altitude than at sea level. Caution is warranted when using HbA1c to diagnose diabetes mellitus in this setting. © 2017 The Authors. Diabetic Medicine published by John Wiley & Sons Ltd on behalf of Diabetes UK.

  9. Level-ARCH Short Rate Models with Regime Switching

    DEFF Research Database (Denmark)

    Christiansen, Charlotte

    This paper introduces regime switching volatility into level-ARCH models for the short rates of the US, the UK, and Germany. Once regime switching and level effects are included there are no gains from including ARCH effects. It is of secondary importance exactly how the regime switching is spec...

  10. Using of Structural Equation Modeling Techniques in Cognitive Levels Validation

    Directory of Open Access Journals (Sweden)

    Natalija Curkovic

    2012-10-01

    When constructing knowledge tests, cognitive level is usually one of the dimensions comprising the test specifications, with each item assigned to measure a particular level. Recently used taxonomies of cognitive levels most often represent some modification of the original Bloom’s taxonomy. There are many concerns in the current literature about the existence of predefined cognitive levels. The aim of this article is to investigate whether structural equation modeling techniques can confirm the existence of different cognitive levels. For the purpose of the research, a Croatian final high-school Mathematics exam was used (N = 9626). Confirmatory factor analysis and structural regression modeling were used to test three different models. Structural equation modeling techniques did not support the existence of different cognitive levels in this case. There is more than one possible explanation for that finding. Some other techniques that take into account the nonlinear behaviour of the items, as well as qualitative techniques, might be more useful for the purpose of cognitive level validation. Furthermore, it seems that cognitive levels were not efficient descriptors of the items, and so improvements are needed in describing the cognitive skills measured by items.

  11. Predictive modelling of noise level generated during sawing of rocks

    Indian Academy of Sciences (India)

    This paper presents an experimental and statistical study on the noise level generated during sawing of rocks by circular diamond sawblades. The influence of the operating variables and rock properties on the noise level is investigated and analysed. Statistical analyses are then employed and models are built for the prediction of ...

  12. A Compositional Knowledge Level Process Model of Requirements Engineering

    NARCIS (Netherlands)

    Herlea, D.E.; Jonker, C.M.; Treur, J.; Wijngaards, W.C.A.

    2002-01-01

    In current literature few detailed process models for Requirements Engineering are presented: usually high-level activities are distinguished, without a more precise specification of each activity. In this paper the process of Requirements Engineering has been analyzed using knowledge-level

  13. Intruder level and deformation in SD-pair shell model

    International Nuclear Information System (INIS)

    Luo Yan'an; Ning Pingzhi; Pan Feng

    2004-01-01

    The influence of intruder level on nuclear deformation is studied within the framework of the nucleon-pair shell model truncated to an SD-pair subspace. The results suggest that the intruder level has a tendency to reduce the deformation and plays an important role in determining the onset of rotational behavior. (authors)

  14. An approach for maximizing the smallest eigenfrequency of structure vibration based on piecewise constant level set method

    Science.gov (United States)

    Zhang, Zhengfang; Chen, Weifeng

    2018-05-01

    Maximization of the smallest eigenfrequency of the linearized elasticity system with an area constraint is investigated. The elasticity system is extended into a large background domain, but the void is vacuum and not filled with ersatz material. The piecewise constant level set (PCLS) method is applied to represent two regions, the original material region and the void region. A quadratic PCLS function is proposed to represent the characteristic function. Consequently, the functional derivative of the smallest eigenfrequency with respect to the PCLS function takes a nonzero value in the original material region and zero in the void region. A penalty gradient algorithm is proposed, which initializes the whole background domain with the original material and decreases the area of the original material region until the area constraint is satisfied. 2D and 3D numerical examples are presented, illustrating the validity of the proposed algorithm.
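    The role of a quadratic PCLS function can be shown in a minimal form: with a piecewise constant level set function taking the values 1 (material) and -1 (void), a quadratic polynomial in phi recovers the characteristic function of the material region. This two-valued encoding is an illustrative assumption, not necessarily the paper's exact parameterization:

```python
def chi_material(phi):
    """Quadratic polynomial in the PCLS value phi in {1, -1}:
    equals 1 where phi == 1 (material) and 0 where phi == -1 (void)."""
    return (phi + 1) ** 2 / 4

# a small 1-D design: material on the left, void on the right
phi_field = [1, 1, 1, -1, -1]
density = [chi_material(p) for p in phi_field]
```

    Because the characteristic function is a smooth polynomial in phi, its derivative with respect to phi is well defined, which is what lets the gradient of the eigenfrequency be taken with respect to the PCLS function.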

  15. Efficient globally optimal segmentation of cells in fluorescence microscopy images using level sets and convex energy functionals.

    Science.gov (United States)

    Bergeest, Jan-Philip; Rohr, Karl

    2012-10-01

    In high-throughput applications, accurate and efficient segmentation of cells in fluorescence microscopy images is of central importance for the quantification of protein expression and the understanding of cell function. We propose an approach for segmenting cell nuclei which is based on active contours using level sets and convex energy functionals. Compared to previous work, our approach determines the global solution. Thus, the approach does not suffer from local minima and the segmentation result does not depend on the initialization. We consider three different well-known energy functionals for active contour-based segmentation and introduce convex formulations of these functionals. We also suggest a numeric approach for efficiently computing the solution. The performance of our approach has been evaluated using fluorescence microscopy images from different experiments comprising different cell types. We have also performed a quantitative comparison with previous segmentation approaches. Copyright © 2012 Elsevier B.V. All rights reserved.

  16. Requirements for High Level Models Supporting Design Space Exploration in Model-based Systems Engineering

    OpenAIRE

    Haveman, Steven P.; Bonnema, G. Maarten

    2013-01-01

    Most formal models are used in detailed design and focus on a single domain. Few effective approaches exist that can effectively tie these lower level models to a high level system model during design space exploration. This complicates the validation of high level system requirements during detailed design. In this paper, we define requirements for a high level model that is firstly driven by key systems engineering challenges present in industry and secondly connects to several formal and d...

  17. Combining SKU-level sales forecasts from models and experts

    NARCIS (Netherlands)

    Ph.H.B.F. Franses (Philip Hans); R. Legerstee (Rianne)

    2009-01-01

    We study the performance of SKU-level sales forecasts which linearly combine statistical model forecasts and expert forecasts. Using a large and unique database containing model forecasts for monthly sales of various pharmaceutical products and forecasts given by about fifty experts, we
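    A linear combination of a model forecast and an expert forecast can be sketched with the combination weight estimated by least squares on past observations; all numbers below are invented for illustration:

```python
def fit_weight(model_fc, expert_fc, actual):
    """Least-squares weight w for the combination w*model + (1-w)*expert.

    Minimizing sum((actual - w*m - (1-w)*e)^2) over w gives the closed form
    w = sum((a-e)(m-e)) / sum((m-e)^2).
    """
    num = sum((a - e) * (m - e) for m, e, a in zip(model_fc, expert_fc, actual))
    den = sum((m - e) ** 2 for m, e in zip(model_fc, expert_fc))
    return num / den

def combine(w, m, e):
    """Combined forecast for one period."""
    return w * m + (1 - w) * e
```

    A weight near 1 means the expert adds little beyond the statistical model on the estimation sample; a weight near 0.5 means an equal-weight average would have done about as well.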

  18. Aspect-Oriented Model Development at Different Levels of Abstraction

    NARCIS (Netherlands)

    Alférez, Mauricio; Amálio, Nuno; Ciraci, S.; Fleurey, Franck; Kienzle, Jörg; Klein, Jacques; Kramer, Max; Mosser, Sebastien; Mussbacher, Gunter; Roubtsova, Ella; Zhang, Gefei; France, Robert B.; Kuester, Jochen M.; Bordbar, Behzad; Paige, Richard F.

    2011-01-01

    The last decade has seen the development of diverse aspect- oriented modeling (AOM) approaches. This paper presents eight different AOM approaches that produce models at different level of abstraction. The approaches are different with respect to the phases of the development lifecycle they target,

  19. Effect of culture levels, ultrafiltered retentate addition, total solid levels and heat treatments on quality improvement of buffalo milk plain set yoghurt.

    Science.gov (United States)

    Yadav, Vijesh; Gupta, Vijay Kumar; Meena, Ganga Sahay

    2018-05-01

    We studied the effect of culture level (2, 2.5 and 3%), ultrafiltered (UF) retentate addition (0, 11 and 18%), total milk solids (13, 13.50 and 14%) and heat treatment (80 and 85 °C/30 min) on the change in pH and titratable acidity (TA), sensory scores and rheological parameters of yoghurt. With a 3% culture level, the required TA (0.90% LA) was achieved in a minimum incubation of 6 h. With an increase in UF retentate addition, there was a highly significant decrease in overall acceptability, body and texture, and colour and appearance scores, but a highly significant increase in the rheological parameters of yoghurt samples. Yoghurt made from even 13.75% total solids containing no UF retentate was judged sufficiently firm by the sensory panel. Most of the sensory attributes of yoghurt made with 13.50% total solids were significantly better than those of yoghurt prepared with either 13 or 14% total solids. Standardised milk heated to 85 °C/30 min resulted in significantly better overall acceptability of the yoghurt. The overall acceptability of the optimised yoghurt was significantly better than that of a branded market sample. UF retentate addition adversely affected yoghurt quality, whereas optimisation of culture level, total milk solids and other process parameters noticeably improved the quality of plain set yoghurt, with a shelf life of 15 days at 4 °C.

  20. Requirements for high level models supporting design space exploration in model-based systems engineering

    NARCIS (Netherlands)

    Haveman, Steven; Bonnema, Gerrit Maarten

    2013-01-01

    Most formal models are used in detailed design and focus on a single domain. Few effective approaches exist that can effectively tie these lower level models to a high level system model during design space exploration. This complicates the validation of high level system requirements during

  1. Process-based interpretation of conceptual hydrological model performance using a multinational catchment set

    Science.gov (United States)

    Poncelet, Carine; Merz, Ralf; Merz, Bruno; Parajka, Juraj; Oudin, Ludovic; Andréassian, Vazken; Perrin, Charles

    2017-08-01

    Most previous assessments of hydrologic model performance are fragmented: they are based on small numbers of catchments, different methods or time periods, and do not link the results to landscape or climate characteristics. This study uses large-sample hydrology to identify major catchment controls on daily runoff simulations. It is based on a conceptual lumped hydrological model (GR6J), a collection of 29 catchment characteristics, a multinational set of 1103 catchments located in Austria, France, and Germany, and four runoff model efficiency criteria. Two analyses are conducted to assess how features and criteria are linked: (i) a one-dimensional analysis based on the Kruskal-Wallis test and (ii) a multidimensional analysis based on regression trees, investigating the interplay between features. The catchment features most affecting model performance are the flashiness of precipitation and streamflow (computed as the ratio of the summed absolute day-to-day fluctuations to the total amount in a year), the seasonality of evaporation, the catchment area, and the catchment aridity. Nonflashy, nonseasonal, large, and nonarid catchments show the best performance for all the tested criteria. We argue that this higher performance is due to fewer nonlinear responses (higher correlation between precipitation and streamflow) and lower input and output variability for such catchments. Finally, we show that, compared to national sets, multinational sets increase the transferability of results because they explore a wider range of hydroclimatic conditions.
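    The flashiness feature used above, the ratio of summed absolute day-to-day fluctuations to the total amount over the period (often called the Richards-Baker flashiness index), can be computed directly from a daily series:

```python
def flashiness(daily):
    """Ratio of summed absolute day-to-day changes to the total amount.

    daily: sequence of non-negative daily values (streamflow or precipitation).
    """
    changes = sum(abs(b - a) for a, b in zip(daily, daily[1:]))
    return changes / sum(daily)
```

    A perfectly steady series scores 0, while a series that oscillates strongly from day to day scores high, which is the regime the study finds hardest for the lumped model.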

  2. Capacitated set-covering model considering the distance objective and dependency of alternative facilities

    Science.gov (United States)

    Wayan Suletra, I.; Priyandari, Yusuf; Jauhari, Wakhid A.

    2018-03-01

    We propose a new facility location model to solve a kind of problem belonging to the class of set-covering problems, using an integer programming formulation. Our model contains a single objective function, but it represents two goals. The first is to minimize the number of facilities, and the other is to minimize the total distance from customers to facilities. The first goal is mandatory, and the second is an improvement goal that is very useful when alternative optimal solutions for the first goal exist. We use a big number as a weight on the first goal to force the solution algorithm to give it first priority. Besides considering capacity constraints, our model accommodates a kind of either-or constraint representing facility dependency. The either-or constraints prevent the solution algorithm from selecting two or more facilities from the same set of facilities with mutually exclusive properties. A real location selection problem, locating a set of wastewater treatment facilities (IPAL) in Surakarta city, Indonesia, illustrates the implementation of our model. A numerical example is given using the data of that real problem.
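    The single weighted objective described above can be sketched with a brute-force solver: a big weight M on the facility count makes minimizing the number of facilities the first priority, with total distance breaking ties among count-optimal solutions. The instance data are invented, and a real model would also carry the capacity and either-or constraints:

```python
from itertools import combinations

BIG_M = 10_000  # weight forcing priority on the facility-count goal

def best_opening(facilities, cover, dist, customers):
    """Enumerate facility subsets; keep the one minimizing M*count + distance."""
    best = (float("inf"), None)
    for r in range(1, len(facilities) + 1):
        for subset in combinations(facilities, r):
            if all(any(c in cover[f] for f in subset) for c in customers):
                # each customer travels to its nearest opened facility
                total = sum(min(dist[c][f] for f in subset if c in cover[f])
                            for c in customers)
                score = BIG_M * len(subset) + total
                if score < best[0]:
                    best = (score, subset)
    return best[1]

# which customers each candidate facility can serve, and the distances
cover = {"f1": {"c1", "c2"}, "f2": {"c2", "c3"}, "f3": {"c1", "c2", "c3"}}
dist = {"c1": {"f1": 2, "f3": 5},
        "c2": {"f1": 3, "f2": 4, "f3": 1},
        "c3": {"f2": 2, "f3": 6}}
```

    As long as M exceeds any possible total distance, no distance saving can justify opening an extra facility, which is exactly the lexicographic priority the paper encodes.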

  3. System-level modeling of acetone-butanol-ethanol fermentation.

    Science.gov (United States)

    Liao, Chen; Seo, Seung-Oh; Lu, Ting

    2016-05-01

    Acetone-butanol-ethanol (ABE) fermentation is a metabolic process of clostridia that produces bio-based solvents including butanol. It is enabled by an underlying metabolic reaction network and modulated by cellular gene regulation and environmental cues. Mathematical modeling has served as a valuable strategy to facilitate the understanding, characterization and optimization of this process. In this review, we highlight recent advances in system-level, quantitative modeling of ABE fermentation. We begin with an overview of integrative processes underlying the fermentation. Next we survey modeling efforts including early simple models, models with a systematic metabolic description, and those incorporating metabolism through simple gene regulation. Particular focus is given to a recent system-level model that integrates the metabolic reactions, gene regulation and environmental cues. We conclude by discussing the remaining challenges and future directions towards predictive understanding of ABE fermentation. © FEMS 2016. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  4. Multipurpose optimization models for high level waste vitrification

    International Nuclear Information System (INIS)

    Hoza, M.

    1994-08-01

    Optimal Waste Loading (OWL) models have been developed as multipurpose tools for high-level waste studies for the Tank Waste Remediation Program at Hanford. Using nonlinear programming techniques, these models maximize the waste loading of the vitrified waste and optimize the glass-former composition such that the glass produced has the appropriate properties within the melter and the resultant vitrified waste form meets the requirements for disposal. The OWL models can be used for a single waste stream or for blended streams, and can determine optimal continuous blends or optimal discrete blends of a number of different wastes. The OWL models have been used to identify the most restrictive constraints, to evaluate prospective waste pretreatment methods, to formulate and evaluate blending strategies, and to determine the impacts of variability in the wastes. The OWL models will also be used to aid in the design of frits and to maximize the waste loading of the glass for High-Level Waste (HLW) vitrification
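    The notion of a "most restrictive constraint" can be illustrated with a deliberately simplified sketch: if each glass property is approximated as linear in the waste mass fraction, every property limit caps the loading, and the achievable maximum is the tightest cap. The property names and numbers below are invented, and the real OWL models are nonlinear programs over the full glass-former composition:

```python
def loading_bound(p0, slope, limit):
    """For a property modeled as p(w) = p0 + slope*w with p(w) <= limit
    and slope > 0, the constraint caps the waste fraction at (limit - p0)/slope."""
    return (limit - p0) / slope

def max_waste_loading(constraints):
    """Tightest of the per-constraint caps, also capped at a fraction of 1."""
    return min([loading_bound(*c) for c in constraints] + [1.0])

# (baseline value, sensitivity to loading, allowed limit) -- invented numbers
constraints = [(2.0, 10.0, 6.0),   # a viscosity-like property: caps w at 0.4
               (1.0, 4.0, 2.4)]    # a durability-like property: caps w at 0.35
```

    The constraint attaining the minimum is the binding one; identifying it is what tells the analyst which pretreatment or frit change would raise the achievable loading.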

  5. Theoretical models for the muon spectrum at sea level

    International Nuclear Information System (INIS)

    Abdel-Monem, M.S.; Benbrook, J.R.; Osborne, A.R.; Sheldon, W.R.

    1975-01-01

    The absolute vertical cosmic ray muon spectrum is investigated theoretically. Models of high energy interactions (namely, Maeda-Cantrell (MC), Constant Energy (CE), Cocconi-Koester-Perkins (CKP) and Scaling Models) are used to calculate the spectrum of cosmic ray muons at sea level. A comparison is made between the measured spectrum and that predicted from each of the four theoretical models. It is concluded that the recently available measured muon differential intensities agree with the scaling model for energies less than 100 GeV and with the CKP model for energies greater than 200 GeV. The measured differential intensities (Abdel-Monem et al.) agree with scaling. (orig.) [de

  6. The Impact of Individual Differences, Types of Model and Social Settings on Block Building Performance among Chinese Preschoolers

    Directory of Open Access Journals (Sweden)

    Mi Tian

    2018-01-01

    Children’s block building performances are used as indicators of other abilities in multiple domains. In the current study, we examined individual differences, types of model and social settings as influences on children’s block building performance. Chinese preschoolers (N = 180) participated in a block building activity in a natural setting, and performance was assessed with multiple measures in order to identify a range of specific skills. Using scores generated across these measures, three dependent variables were analyzed: block building skills, structural balance and structural features. An overall MANOVA showed that there were significant main effects of gender and grade level across most measures. Type of model showed no significant effect on children’s block building. There was a significant main effect of social settings on structural features, with the best performance in the 5-member group, followed by individual and then 10-member block building. These findings suggest that boys performed better than girls in the block building activity. Block building performance increased significantly from the first to the second year of preschool, but not from the second to the third. The preschoolers created more representational constructions when presented with a wooden model rather than a picture. There was partial evidence that children performed better when working with peers in a small group than when working alone or in a large group. It is suggested that future studies should examine modalities other than the visual one, diversify the samples and adopt a longitudinal investigation.

  7. The Impact of Individual Differences, Types of Model and Social Settings on Block Building Performance among Chinese Preschoolers.

    Science.gov (United States)

    Tian, Mi; Deng, Zhu; Meng, Zhaokun; Li, Rui; Zhang, Zhiyi; Qi, Wenhui; Wang, Rui; Yin, Tingting; Ji, Menghui

    2018-01-01

    Children's block building performances are used as indicators of other abilities in multiple domains. In the current study, we examined individual differences, types of model and social settings as influences on children's block building performance. Chinese preschoolers (N = 180) participated in a block building activity in a natural setting, and performance was assessed with multiple measures in order to identify a range of specific skills. Using scores generated across these measures, three dependent variables were analyzed: block building skills, structural balance and structural features. An overall MANOVA showed that there were significant main effects of gender and grade level across most measures. Type of model showed no significant effect on children's block building. There was a significant main effect of social settings on structural features, with the best performance in the 5-member group, followed by individual and then 10-member block building. These findings suggest that boys performed better than girls in the block building activity. Block building performance increased significantly from the first to the second year of preschool, but not from the second to the third. The preschoolers created more representational constructions when presented with a wooden model rather than a picture. There was partial evidence that children performed better when working with peers in a small group than when working alone or in a large group. It is suggested that future studies should examine modalities other than the visual one, diversify the samples and adopt a longitudinal investigation.

  8. Fuzzy sets as extension of probabilistic models for evaluating human reliability

    International Nuclear Information System (INIS)

    Przybylski, F.

    1996-11-01

    On the basis of a survey of established quantification methodologies for evaluating human reliability, a new computerized methodology was developed in which user uncertainties are considered separately. In this quantification method FURTHER (FUzzy Sets Related To Human Error Rate Prediction), user uncertainties are quantified separately from model and data uncertainties. Fuzzy sets are applied as tools but remain hidden from the method's user, who in the quantification process only chooses an action pattern, performance shaping factors and natural-language expressions. The acknowledged method HEART (Human Error Assessment and Reduction Technique) serves as the foundation of the fuzzy set approach FURTHER. The following aspects were identified for fuzzification: the selection of a basic task together with its basic error probability, the decision how correct the basic task's selection is, the selection of a performance shaping factor, and the decision how correct that selection is and how important the performance shaping factor is. The fuzzification is based on collected data and information from the literature as well as on estimates by competent persons. To verify the amount of additional information gained by using fuzzy sets, a benchmark session was conducted in which twelve actions were assessed by five test persons. Given the same degree of detail in the action modelling process, the bandwidths of the interpersonal evaluations are narrower in FURTHER than in HEART. The uncertainties of the individual results could not yet be reduced. The benchmark sessions conducted so far showed plausible results. Further testing of the fuzzy set approach using better-confirmed fuzzy sets can only be achieved in future practical application. Adequate procedures, however, are provided. (orig.) [de
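
The linguistic inputs described above are internally mapped onto fuzzy numbers. As a rough illustration of that hidden machinery (the term names and triangle parameters below are invented for illustration and are not taken from FURTHER), a natural-language confidence expression can be represented by a triangular membership function:

```python
def triangular(a, b, c):
    """Membership function of a triangular fuzzy number with support [a, c] and peak at b."""
    def mu(x):
        if x < a or x > c:
            return 0.0
        if x == b:
            return 1.0
        if x < b:
            return (x - a) / (b - a)
        return (c - x) / (c - b)
    return mu

# Illustrative linguistic scale for "how correct is the selected basic task?"
CONFIDENCE_TERMS = {
    "low":    triangular(0.0, 0.0, 0.5),
    "medium": triangular(0.0, 0.5, 1.0),
    "high":   triangular(0.5, 1.0, 1.0),
}

mu_medium = CONFIDENCE_TERMS["medium"]
print(mu_medium(0.5))   # -> 1.0 (peak of the "medium" term)
print(mu_medium(0.25))  # -> 0.5 (halfway up the rising edge)
```

Overlapping terms like these let a single user judgment carry partial membership in two adjacent categories, which is the kind of user uncertainty FURTHER keeps separate from model and data uncertainties.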

  9. Gay-Straight Alliances vary on dimensions of youth socializing and advocacy: factors accounting for individual and setting-level differences.

    Science.gov (United States)

    Poteat, V Paul; Scheer, Jillian R; Marx, Robert A; Calzo, Jerel P; Yoshikawa, Hirokazu

    2015-06-01

    Gay-Straight Alliances (GSAs) are school-based youth settings that could promote health. Yet, GSAs have been treated as homogenous without attention to variability in how they operate or to how youth are involved in different capacities. Using a systems perspective, we considered two primary dimensions along which GSAs function to promote health: providing socializing and advocacy opportunities. Among 448 students in 48 GSAs who attended six regional conferences in Massachusetts (59.8 % LGBQ; 69.9 % White; 70.1 % cisgender female), we found substantial variation among GSAs and youth in levels of socializing and advocacy. GSAs were more distinct from one another on advocacy than socializing. Using multilevel modeling, we identified group and individual factors accounting for this variability. In the socializing model, youth and GSAs that did more socializing activities did more advocacy. In the advocacy model, youth who were more actively engaged in the GSA as well as GSAs whose youth collectively perceived greater school hostility and reported greater social justice efficacy did more advocacy. Findings suggest potential reasons why GSAs vary in how they function in ways ranging from internal provisions of support, to visibility raising, to collective social change. The findings are further relevant for settings supporting youth from other marginalized backgrounds and that include advocacy in their mission.

  10. Aeon: Synthesizing Scheduling Algorithms from High-Level Models

    Science.gov (United States)

    Monette, Jean-Noël; Deville, Yves; van Hentenryck, Pascal

    This paper describes the Aeon system, whose aim is to synthesize scheduling algorithms from high-level models. Aeon, which is entirely written in Comet, receives as input a high-level model for a scheduling application, which is then analyzed to generate a dedicated scheduling algorithm exploiting the structure of the model. Aeon provides a variety of synthesizers for generating complete or heuristic algorithms. Moreover, synthesizers are compositional, making it possible to generate complex hybrid algorithms naturally. Preliminary experimental results indicate that this approach may be competitive with state-of-the-art search algorithms.

  11. New Hybrid Multiple Attribute Decision-Making Model for Improving Competence Sets: Enhancing a Company’s Core Competitiveness

    Directory of Open Access Journals (Sweden)

    Kuan-Wei Huang

    2016-02-01

    Full Text Available A company’s core competitiveness depends on the strategic allocation of its human resources in alignment with employee capabilities. Competency models can identify the range of capabilities at a company’s disposal, and this information can be used to develop internal or external education and training policies for sustainable development. Such models can ensure a strategic orientation that reflects the growth of the employee competence set and enhances human resources sustainably. This approach ensures that the most appropriate people are assigned to the most appropriate positions. In this study, we proposed a new hybrid multiple attribute decision-making model that uses the Decision-Making Trial and Evaluation Laboratory technique (DEMATEL) to construct an influential network relation map (INRM) and determines the influential weights by using the basic concept of the analytic network process (called DEMATEL-based ANP, DANP); the influential weights were then adopted with a modified Vise Kriterijumska Optimizacija I Kompromisno Resenje (VIKOR) method. A simple forecasting technique as an iteration function was also proposed. The proposed model was effective. We expect that the proposed model can facilitate making timely revisions, reflecting the growth of employee competence sets, reducing the performance gap toward the aspiration level, and ensuring the sustainability of a company.
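
The DEMATEL step underlying the INRM can be sketched in a few lines: a direct-influence matrix among criteria is normalized, the total-relation matrix T = N(I - N)^(-1) is computed, and its row and column sums give each criterion's prominence and net relation. The 3x3 influence scores below are illustrative only, not data from the study:

```python
import numpy as np

# Illustrative direct-influence matrix among three criteria (0 = no influence).
A = np.array([[0, 3, 2],
              [1, 0, 3],
              [2, 1, 0]], dtype=float)

# Normalize by the largest row/column sum so the power series N + N^2 + ... converges.
s = max(A.sum(axis=1).max(), A.sum(axis=0).max())
N = A / s

# Total-relation matrix: T = N (I - N)^(-1), i.e. direct plus all indirect influence.
T = N @ np.linalg.inv(np.eye(len(A)) - N)

r = T.sum(axis=1)   # influence each criterion exerts
c = T.sum(axis=0)   # influence each criterion receives
prominence = r + c  # overall importance of each criterion
relation = r - c    # net cause (positive) or net effect (negative)
print(np.round(prominence, 3), np.round(relation, 3))
```

Plotting (prominence, relation) pairs for each criterion yields the INRM; DANP then reuses T to derive the influential weights.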

  12. Development of estrogen receptor beta binding prediction model using large sets of chemicals.

    Science.gov (United States)

    Sakkiah, Sugunadevi; Selvaraj, Chandrabose; Gong, Ping; Zhang, Chaoyang; Tong, Weida; Hong, Huixiao

    2017-11-03

    We developed an ERβ binding prediction model to facilitate identification of chemicals that specifically bind ERβ or ERα, together with our previously developed ERα binding model. Decision Forest was used to train the ERβ binding prediction model based on a large set of compounds obtained from EADB. Model performance was estimated through 1000 iterations of 5-fold cross validations. Prediction confidence was analyzed using predictions from the cross validations. Informative chemical features for ERβ binding were identified through analysis of the frequency data of chemical descriptors used in the models in the 5-fold cross validations. 1000 permutations were conducted to assess the chance correlation. The average accuracy of the 5-fold cross validations was 93.14% with a standard deviation of 0.64%. Prediction confidence analysis indicated that the higher the prediction confidence, the more accurate the predictions. Permutation testing revealed that the prediction model is unlikely to have been generated by chance. Eighteen informative descriptors were identified as important to ERβ binding prediction. Application of the prediction model to the data from the ToxCast project yielded a very high sensitivity of 90-92%. Our results demonstrated that ERβ binding of chemicals can be accurately predicted using the developed model. Coupled with our previously developed ERα prediction model, this model can be expected to facilitate drug development through identification of chemicals that specifically bind ERβ or ERα.
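
The evaluation protocol used here, repeated k-fold cross-validation reporting the mean and standard deviation of the fold accuracies, is easy to sketch. The snippet below uses synthetic two-class data and a simple nearest-centroid classifier as stand-ins for the EADB compounds and the Decision Forest algorithm (both substitutions are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic stand-in for the compound set: two Gaussian classes in 5 descriptors.
X = np.vstack([rng.normal(0.0, 1.0, (100, 5)), rng.normal(1.5, 1.0, (100, 5))])
y = np.array([0] * 100 + [1] * 100)

def nearest_centroid_accuracy(X_tr, y_tr, X_te, y_te):
    """Train a nearest-centroid classifier and return its test accuracy."""
    c0, c1 = X_tr[y_tr == 0].mean(axis=0), X_tr[y_tr == 1].mean(axis=0)
    pred = (np.linalg.norm(X_te - c1, axis=1) < np.linalg.norm(X_te - c0, axis=1)).astype(int)
    return (pred == y_te).mean()

accs = []
for it in range(10):  # the paper runs 1000 iterations; 10 keeps this sketch quick
    idx = rng.permutation(len(y))
    folds = np.array_split(idx, 5)
    for k in range(5):
        test = folds[k]
        train = np.concatenate([folds[j] for j in range(5) if j != k])
        accs.append(nearest_centroid_accuracy(X[train], y[train], X[test], y[test]))

print(f"mean accuracy {np.mean(accs):.3f}, std {np.std(accs):.3f}")
```

Repeating the 5-fold split many times, as in the paper, separates the variance due to the random partition from the variance due to the classifier itself.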

  13. Qualitative models and experimental investigation of chaotic NOR gates and set/reset flip-flops

    Science.gov (United States)

    Rahman, Aminur; Jordan, Ian; Blackmore, Denis

    2018-01-01

    It has been observed through experiments and SPICE simulations that logical circuits based upon Chua's circuit exhibit complex dynamical behaviour. This behaviour can be used to design analogues of more complex logic families and some properties can be exploited for electronics applications. Some of these circuits have been modelled as systems of ordinary differential equations. However, as the number of components in newer circuits increases so does the complexity. This renders continuous dynamical systems models impractical and necessitates new modelling techniques. In recent years, some discrete dynamical models have been developed using various simplifying assumptions. To create a robust modelling framework for chaotic logical circuits, we developed both deterministic and stochastic discrete dynamical models, which exploit the natural recurrence behaviour, for two chaotic NOR gates and a chaotic set/reset flip-flop. This work presents a complete applied mathematical investigation of logical circuits. Experiments on our own designs of the above circuits are modelled and the models are rigorously analysed and simulated showing surprisingly close qualitative agreement with the experiments. Furthermore, the models are designed to accommodate dynamics of similarly designed circuits. This will allow researchers to develop ever more complex chaotic logical circuits with a simple modelling framework.

  14. Set-up and first operation of a plasma oven for treatment of low level radioactive wastes

    Directory of Open Access Journals (Sweden)

    Nachtrodt Frederik

    2014-01-01

    Full Text Available An experimental device for plasma treatment of low- and intermediate-level radioactive waste was built and tested in several design variations. The laboratory device is designed with the intention to study the general effects and difficulties of a plasma incineration set-up for the future development of a larger-scale pilot plant. The key part of the device is a novel microwave plasma torch driven by 200 W of electric power and operating at atmospheric pressure. A specific design characteristic of the torch is that a high peak temperature can be reached with a low power input compared to other plasma torches. Experiments were carried out to analyze the effect of the plasma on materials typical of operational low-level wastes. In some preliminary cold tests, the behavior of stable volatile species, e.g. caesium, was investigated by TXRF measurements of material collected from the oven walls and the filtered off-gas. The results help in improving and scaling up the existing design and in understanding the effects relevant for a pilot plant, especially for off-gas collection and treatment.

  15. A Prediction Model of Firm Value through Managerial Ownership and the Investment Opportunity Set

    Directory of Open Access Journals (Sweden)

    Herry Laksito

    2017-03-01

    Full Text Available This study empirically examined the effect of managerial ownership on firm value with the investment opportunity set as a mediator. In this model, corporate governance, measured by managerial shareholding, is examined for its effect on firm value with the investment opportunity set as the mediating variable. The purpose of this study was to analyze the effect of corporate governance on firm value, mediated by the investment opportunity set, in manufacturing companies listed on the Indonesia Stock Exchange. The population comprised all manufacturing companies listed on the Indonesia Stock Exchange and reporting financial statements in the Indonesian capital market directory during the period 2005-2007. The sample was determined by purposive sampling, and 37 firms met the criteria. The statistical method used was path analysis. The results showed that managerial stock ownership (corporate governance) did not affect firm value, with a negative direction. Managerial stock ownership (corporate governance) affected the investment opportunity set (IOS). IOS did not affect firm value, and the investment opportunity set could not significantly mediate the effect of managerial ownership (corporate governance) on firm value.

  16. APPLICATION OF ROUGH SET THEORY TO MAINTENANCE LEVEL DECISION-MAKING FOR AERO-ENGINE MODULES BASED ON INCREMENTAL KNOWLEDGE LEARNING

    Institute of Scientific and Technical Information of China (English)

    陆晓华; 左洪福; 蔡景

    2013-01-01

    The maintenance of an aero-engine usually includes three levels, and the maintenance cost and period differ greatly depending on the maintenance level. To plan a reasonable maintenance budget program, airlines would like to predict the maintenance level of an aero-engine before repair in terms of performance parameters, which can provide greater economic benefits. The maintenance level decision rules are mined from the historical maintenance data of a civil aero-engine based on rough set theory, and a variety of possible models for updating the rules as newly added maintenance cases enter the historical maintenance case database are investigated by means of incremental machine learning. The continuously updated rules can provide reasonable guidance for engineers and decision support for planning a maintenance budget program before repair. The results of an example show that the decision rules become more typical and robust, and that they predict the maintenance level of an aero-engine module more accurately as the maintenance data increase, which illustrates the feasibility of the presented method.
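
Rule mining of this kind rests on the rough-set lower and upper approximations of a decision class under the indiscernibility relation induced by the condition attributes. A minimal sketch with an invented toy maintenance table (the attribute names and values are hypothetical, not the paper's data):

```python
# Toy decision table: (condition attributes) -> maintenance level.
cases = [
    (("high_EGT", "high_vib"), "overhaul"),
    (("high_EGT", "high_vib"), "overhaul"),
    (("high_EGT", "low_vib"),  "module"),
    (("high_EGT", "low_vib"),  "overhaul"),  # conflicts with the case above
    (("low_EGT",  "low_vib"),  "minor"),
]

# Target concept: cases whose decision is "overhaul".
target = {i for i, (_, level) in enumerate(cases) if level == "overhaul"}

# Equivalence classes of the indiscernibility relation on the condition attributes.
blocks = {}
for i, (cond, _) in enumerate(cases):
    blocks.setdefault(cond, set()).add(i)

# Lower approximation: blocks entirely inside the concept (certain rules);
# upper approximation: blocks intersecting the concept (possible rules).
lower = {i for b in blocks.values() if b <= target for i in b}
upper = {i for b in blocks.values() if b & target for i in b}

print(sorted(lower))  # -> [0, 1]       certainly "overhaul"
print(sorted(upper))  # -> [0, 1, 2, 3] possibly "overhaul"
```

Certain decision rules come from the lower approximation; incremental learning, as in the paper, amounts to updating these blocks and rules as new cases arrive instead of recomputing from scratch.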

  17. Analysis of the experimental data of air pollution using atmospheric dispersion modeling and rough set

    International Nuclear Information System (INIS)

    Halfa, I.K.I

    2008-01-01

    This thesis contains four chapters and a list of references. In chapter 1, we give a brief survey of atmospheric concepts and topological methods for data analysis. Section 1.1 gives a general introduction. We recall some atmospheric fundamentals in Section 1.2. Section 1.3 presents the concepts of modern topological methods for data analysis. In chapter 2, we study the properties of the atmosphere and focus on the concept of rough sets and its properties; this concept is applied to analyze the atmospheric data. Section 2.1 gives a general introduction to the concept of rough sets and the properties of the atmosphere. Section 2.2 focuses on the concept of rough sets, its properties, and the generalization of approximations in rough set theory using topological spaces. In Section 2.3 we study the stability of the atmosphere at the Inshas location for all seasons using different schemes and compare these schemes using statistical and rough set methods. In Section 2.4, we introduce the mixing height of the plume for all seasons. Section 2.5 introduces seasonal surface-layer turbulence processes for the Inshas location. Section 2.6 gives a comparison between the seasonal surface-layer turbulence processes for the Inshas location and for different locations using rough set theory. In chapter 3 we focus on the concept of variable precision rough sets (VPRS) and its properties, using it to compare the estimated and observed data of the concentration of air pollution for the Inshas location. In Section 3.1 we give a general introduction to VPRS and air pollution. In Section 3.2 we focus on the concept and properties of VPRS. In Section 3.3 we introduce a method to estimate the concentration of air pollution for the Inshas location using a Gaussian plume model. Section 3.4 presents the experimental data. The estimated data are compared with the observed data using statistical methods in Section 3.5. In Section 3

  18. A model set-up for an oxygen and nutrient flux model for Aarhus Bay (Denmark)

    DEFF Research Database (Denmark)

    Fossing, H.; Berg, P.; Thamdrup, B.

    . They also produce a number of waste products, such as hydrogen sulphide, that have a great impact on the marine environment. After many years of research, our knowledge of the processes going on in the seabed is substantial. This knowledge forms the basis of a new mathematical model linking the complex...

  19. Research of Strategic Alliance Stable Decision-making Model Based on Rough Set and DEA

    OpenAIRE

    Zhang Yi

    2013-01-01

    This article first applies rough set theory to the stability evaluation system of a strategic alliance, using a data analysis method for attribute reduction to eliminate redundant indexes. Six enterprises were selected as decision-making units, with four input and two output indexes; the DEA model was then used for the calculation, the reasons for the poor efficiency of some decision-making units were analyzed, and the direction and magnitude of improvement were identified, providing a reference for alliance stability.

  20. Finite-Control-Set Model Predictive Control (FCS-MPC) for Islanded Hybrid Microgrids

    OpenAIRE

    Yi, Zhehan; Babqi, Abdulrahman J.; Wang, Yishen; Shi, Di; Etemadi, Amir H.; Wang, Zhiwei; Huang, Bibin

    2018-01-01

    Microgrids consisting of multiple distributed energy resources (DERs) provide a promising solution to integrate renewable energies, e.g., solar photovoltaic (PV) systems. Hybrid AC/DC microgrids leverage the merits of both AC and DC power systems. In this paper, a control strategy for islanded multi-bus hybrid microgrids is proposed based on the Finite-Control-Set Model Predictive Control (FCS-MPC) technologies. The control loops are expedited by predicting the future states and determining t...
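
The essence of FCS-MPC is that a converter admits only a finite set of switching states, so the controller can enumerate them each sampling period, predict the next sample with a discrete-time model, and apply the state minimizing a tracking cost. The one-dimensional RL-load model and all parameter values below are illustrative assumptions, not the paper's hybrid-microgrid model:

```python
# Plant/model parameters (illustrative): series RL load fed by a three-level leg.
R, L, Ts, Vdc = 1.0, 10e-3, 25e-6, 200.0
states = (-1, 0, 1)  # finite control set: normalized inverter output levels

def predict(i_k, u):
    """One-step forward-Euler prediction of the load current for switching state u."""
    return i_k + (Ts / L) * (u * Vdc - R * i_k)

i_meas, i_ref = 0.0, 5.0
history = []
for k in range(40):
    # Enumerate the finite control set and apply the cost-minimizing state.
    u_opt = min(states, key=lambda u: abs(predict(i_meas, u) - i_ref))
    i_meas = predict(i_meas, u_opt)  # plant assumed identical to the model here
    history.append(i_meas)

print(f"current after 40 steps: {history[-1]:.2f} A (reference {i_ref} A)")
```

With these numbers each sampling period can move the current by at most Ts*Vdc/L = 0.5 A, so the current ramps to the reference in about ten steps and then chatters inside a narrow band around it — the characteristic ripple of finite-set MPC, traded against the absence of a modulator.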