WorldWideScience

Sample records for level set model

  1. A new level set model for multimaterial flows

    Energy Technology Data Exchange (ETDEWEB)

    Starinshak, David P. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)]; Karni, Smadar [Univ. of Michigan, Ann Arbor, MI (United States). Dept. of Mathematics]; Roe, Philip L. [Univ. of Michigan, Ann Arbor, MI (United States). Dept. of Aerospace Engineering]

    2014-01-08

    We present a new level set model for representing multimaterial flows in multiple space dimensions. Instead of associating a level set function with a specific fluid material, the function is associated with a pair of materials and the interface that separates them. A voting algorithm collects sign information from all level sets and determines material designations. M(M−1)/2 level set functions might be needed to represent a general M-material configuration; problems of practical interest use far fewer functions, since not all pairs of materials share an interface. The new model is less prone to producing indeterminate material states, i.e. regions claimed by more than one material (overlaps) or no material at all (vacuums). It outperforms existing material-based level set models without the need for reinitialization schemes, thereby avoiding additional computational costs and preventing excessive numerical diffusion.
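
    The voting step described above is easy to sketch. Below is a minimal, hypothetical Python illustration of how pairwise sign information might be collected into a material designation at a single grid point; the sign convention and tie handling are assumptions for illustration, not the authors' implementation.

```python
def assign_material(phi_pairs, materials):
    """Majority-vote material designation at one grid point.

    phi_pairs maps a material pair (i, j) to the value of the level set
    function associated with that pair at this point. Assumed sign
    convention: phi > 0 votes for material i, phi < 0 for material j.
    """
    votes = {m: 0 for m in materials}
    for (i, j), phi in phi_pairs.items():
        votes[i if phi > 0.0 else j] += 1
    # The material collecting the most votes claims the point; a tie
    # would correspond to the indeterminate states the model suppresses.
    return max(votes, key=votes.get)

# Three materials, all M(M-1)/2 = 3 pairwise functions present:
phi_pairs = {(0, 1): 0.4, (0, 2): 0.1, (1, 2): -0.3}
print(assign_material(phi_pairs, materials=[0, 1, 2]))  # -> material 0
```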

  2. A Variational Level Set Model Combined with FCMS for Image Clustering Segmentation

    Directory of Open Access Journals (Sweden)

    Liming Tang

    2014-01-01

    The fuzzy C-means clustering algorithm with spatial constraint (FCMS) is effective for image segmentation. However, it lacks smoothing constraints on the cluster boundaries and sufficient robustness to noise. Samson et al. proposed a variational level set model for image clustering segmentation, which obtains smooth cluster boundaries and closed cluster regions through the use of the level set scheme; however, it is very sensitive to noise, since it is in fact a hard C-means clustering model. In this paper, building on Samson's work, we propose a new variational level set model combined with FCMS for image clustering segmentation. Compared with FCMS clustering, the proposed model obtains smooth cluster boundaries and closed cluster regions through the use of the level set scheme. In addition, a block-based energy is incorporated into the energy functional, which makes the proposed model more robust to noise than both FCMS clustering and Samson's model. Experiments on synthetic and real images are performed to assess the performance of the proposed model. Compared with some classical image segmentation models, the proposed model performs better on images contaminated by different noise levels.

  3. Level-set techniques for facies identification in reservoir modeling

    Science.gov (United States)

    Iglesias, Marco A.; McLaughlin, Dennis

    2011-03-01

    In this paper we investigate the application of level-set techniques for facies identification in reservoir models. The identification of facies is a geometrical inverse ill-posed problem that we formulate in terms of shape optimization. The goal is to find a region (a geologic facies) that minimizes the misfit between predicted and measured data from an oil–water reservoir. In order to address the shape optimization problem, we present a novel application of the level-set iterative framework developed by Burger in (2002 Interfaces Free Bound. 5 301–29; 2004 Inverse Problems 20 259–82) for inverse obstacle problems. The optimization is constrained by the reservoir model, a nonlinear large-scale system of PDEs that describes the reservoir dynamics. We reformulate this reservoir model in a weak (integral) form whose shape derivative can be formally computed from standard results of shape calculus. At each iteration of the scheme, the current estimate of the shape derivative is utilized to define a velocity in the level-set equation. The proper selection of this velocity ensures that the new shape decreases the cost functional. We present results of facies identification where the velocity is computed with the gradient-based (GB) approach of Burger (2002) and the Levenberg–Marquardt (LM) technique of Burger (2004). While an adjoint formulation allows the straightforward application of the GB approach, the LM technique requires the computation of the large-scale Karush–Kuhn–Tucker system that arises at each iteration of the scheme. We efficiently solve this system by means of the representer method. We present some synthetic experiments to show and compare the capabilities and limitations of the proposed implementations of level-set techniques for the identification of geologic facies.

  4. Level-set techniques for facies identification in reservoir modeling

    International Nuclear Information System (INIS)

    Iglesias, Marco A; McLaughlin, Dennis

    2011-01-01

    In this paper we investigate the application of level-set techniques for facies identification in reservoir models. The identification of facies is a geometrical inverse ill-posed problem that we formulate in terms of shape optimization. The goal is to find a region (a geologic facies) that minimizes the misfit between predicted and measured data from an oil–water reservoir. In order to address the shape optimization problem, we present a novel application of the level-set iterative framework developed by Burger in (2002 Interfaces Free Bound. 5 301–29; 2004 Inverse Problems 20 259–82) for inverse obstacle problems. The optimization is constrained by the reservoir model, a nonlinear large-scale system of PDEs that describes the reservoir dynamics. We reformulate this reservoir model in a weak (integral) form whose shape derivative can be formally computed from standard results of shape calculus. At each iteration of the scheme, the current estimate of the shape derivative is utilized to define a velocity in the level-set equation. The proper selection of this velocity ensures that the new shape decreases the cost functional. We present results of facies identification where the velocity is computed with the gradient-based (GB) approach of Burger (2002) and the Levenberg–Marquardt (LM) technique of Burger (2004). While an adjoint formulation allows the straightforward application of the GB approach, the LM technique requires the computation of the large-scale Karush–Kuhn–Tucker system that arises at each iteration of the scheme. We efficiently solve this system by means of the representer method. We present some synthetic experiments to show and compare the capabilities and limitations of the proposed implementations of level-set techniques for the identification of geologic facies.

  5. Numerical Modelling of Three-Fluid Flow Using The Level-set Method

    Science.gov (United States)

    Li, Hongying; Lou, Jing; Shang, Zhi

    2014-11-01

    This work presents a numerical model for simulation of three-fluid flow involving two different moving interfaces. These interfaces are captured using the level-set method via two different level-set functions. A combined formulation with only one set of conservation equations for the whole physical domain, consisting of the three different immiscible fluids, is employed. The numerical solution is performed on a fixed mesh using the finite volume method. The surface tension effect is incorporated using the Continuum Surface Force model. The present model is validated against available results for stratified flow and for a rising bubble in a container with a free surface. Applications of the present model are demonstrated by a variety of three-fluid flow systems, including (1) three-fluid stratified flow, (2) two-fluid stratified flow carrying the third fluid in the form of drops, and (3) simultaneous rising and settling of two drops in a stationary third fluid. The work is supported by a Thematic and Strategic Research grant from A*STAR, Singapore (Ref. #: 1021640075).
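
    As a rough sketch of the combined "one set of conservation equations" idea, fluid properties can be blended from the two level-set functions with a smoothed Heaviside. The nesting convention below (phi1 marks fluid 1, phi2 splits the remainder) is an assumption for illustration, not necessarily the paper's formulation.

```python
import numpy as np

def smoothed_heaviside(phi, eps):
    """A standard smoothed Heaviside used in level set formulations."""
    h = 0.5 * (1.0 + phi / eps + np.sin(np.pi * phi / eps) / np.pi)
    return np.where(phi > eps, 1.0, np.where(phi < -eps, 0.0, h))

def blended_property(phi1, phi2, q1, q2, q3, eps):
    """Single property field (density, viscosity, ...) for three fluids.

    phi1 separates fluid 1 from fluids 2 and 3; phi2 separates fluid 2
    from fluid 3 inside the complement of fluid 1 (assumed convention).
    """
    h1 = smoothed_heaviside(phi1, eps)
    h2 = smoothed_heaviside(phi2, eps)
    return q1 * h1 + (1.0 - h1) * (q2 * h2 + q3 * (1.0 - h2))
```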

  6. A new level set model for cell image segmentation

    Science.gov (United States)

    Ma, Jing-Feng; Hou, Kai; Bao, Shang-Lian; Chen, Chun

    2011-02-01

    In this paper we first determine three phases of cell images: background, cytoplasm and nucleolus, according to the general physical characteristics of cell images, and then develop a variational model, based on these characteristics, to segment nucleolus and cytoplasm from their relatively complicated backgrounds. In the meantime, information about the cell images obtained in preprocessing using the Otsu algorithm is used to initialize the level set function in the model, which can speed up the segmentation and present satisfactory results in cell image processing.
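
    A minimal sketch of the initialization idea, assuming the Otsu threshold is used to build a foreground mask whose signed distance seeds the level set function; the paper's actual preprocessing may differ.

```python
import numpy as np
from scipy import ndimage

def otsu_threshold(image, nbins=256):
    """Plain-numpy Otsu: maximize between-class variance of the histogram."""
    hist, edges = np.histogram(image.ravel(), bins=nbins)
    hist = hist.astype(float) / hist.sum()
    centers = 0.5 * (edges[:-1] + edges[1:])
    w0 = np.cumsum(hist)                       # background class weight
    w1 = 1.0 - w0                              # foreground class weight
    cum_mean = np.cumsum(hist * centers)
    mu0 = cum_mean / np.maximum(w0, 1e-12)
    mu1 = (cum_mean[-1] - cum_mean) / np.maximum(w1, 1e-12)
    sigma_b = w0 * w1 * (mu0 - mu1) ** 2       # between-class variance
    return centers[np.argmax(sigma_b)]

def init_level_set(image):
    """Signed-distance initialization from the Otsu foreground mask."""
    mask = image > otsu_threshold(image)
    inside = ndimage.distance_transform_edt(mask)
    outside = ndimage.distance_transform_edt(~mask)
    return inside - outside   # phi > 0 inside cells, phi < 0 in background
```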

  7. A new level set model for cell image segmentation

    International Nuclear Information System (INIS)

    Ma Jing-Feng; Chen Chun; Hou Kai; Bao Shang-Lian

    2011-01-01

    In this paper we first determine three phases of cell images: background, cytoplasm and nucleolus, according to the general physical characteristics of cell images, and then develop a variational model, based on these characteristics, to segment nucleolus and cytoplasm from their relatively complicated backgrounds. In the meantime, information about the cell images obtained in preprocessing using the Otsu algorithm is used to initialize the level set function in the model, which can speed up the segmentation and present satisfactory results in cell image processing. (cross-disciplinary physics and related areas of science and technology)

  8. Aerostructural Level Set Topology Optimization for a Common Research Model Wing

    Science.gov (United States)

    Dunning, Peter D.; Stanford, Bret K.; Kim, H. Alicia

    2014-01-01

    The purpose of this work is to use level set topology optimization to improve the design of a representative wing box structure for the NASA common research model. The objective is to minimize the total compliance of the structure under aerodynamic and body force loading, where the aerodynamic loading is coupled to the structural deformation. A taxi bump case was also considered, where only body force loads were applied. The trim condition that aerodynamic lift must balance the total weight of the aircraft is enforced by allowing the root angle of attack to change. The level set optimization method is implemented on an unstructured three-dimensional grid, so that the method can optimize a wing box with arbitrary geometry. Fast marching and upwind schemes are developed for an unstructured grid, which make the level set method robust and efficient. The adjoint method is used to obtain the coupled shape sensitivities required to perform aerostructural optimization of the wing box structure.

  9. A thick level set interface model for simulating fatigue-driven delamination in composites

    NARCIS (Netherlands)

    Latifi, M.; Van der Meer, F.P.; Sluys, L.J.

    2015-01-01

    This paper presents a new damage model for simulating fatigue-driven delamination in composite laminates. This model is developed based on the Thick Level Set approach (TLS) and provides a favorable link between damage mechanics and fracture mechanics through the non-local evaluation of the energy release rate.

  10. Fast Sparse Level Sets on Graphics Hardware

    NARCIS (Netherlands)

    Jalba, Andrei C.; Laan, Wladimir J. van der; Roerdink, Jos B.T.M.

    The level-set method is one of the most popular techniques for capturing and tracking deformable interfaces. Although level sets have demonstrated great potential in visualization and computer graphics applications, such as surface editing and physically based modeling, their use for interactive

  11. Integrated SfM Techniques Using Data Set from Google Earth 3D Model and from Street Level

    Directory of Open Access Journals (Sweden)

    L. Inzerillo

    2017-08-01

    Structure from motion (SfM) is a widespread photogrammetric method that applies photogrammetric principles to build a 3D model from a collection of photographs. Some complex ancient buildings, such as cathedrals, theatres, or castles, require supplementing the data set acquired from street level with a UAV data set in order to reconstruct the roof in 3D. However, the use of UAVs is strongly limited by government regulations. In recent years, Google Earth (GE) has been enriched with 3D models of sites on the Earth. For this reason, it seemed worthwhile to test the potential offered by GE for extracting a data set that replaces the UAV imagery, completing the aerial building data set using screen images of its high-resolution 3D models. Users can take unlimited "aerial photos" of a scene while flying around in GE at any viewing angle and altitude. The challenge is to verify the metric reliability of the SfM model carried out with an integrated data set (the one from street level and the one from GE) aimed at replacing UAV use in an urban context. This model is called the integrated GE SfM model (i-GESfM). In this paper, a case study is presented: the Cathedral of Palermo.

  12. An Accurate Fire-Spread Algorithm in the Weather Research and Forecasting Model Using the Level-Set Method

    Science.gov (United States)

    Muñoz-Esparza, Domingo; Kosović, Branko; Jiménez, Pedro A.; Coen, Janice L.

    2018-04-01

    The level-set method is typically used to track and propagate the fire perimeter in wildland fire models. Herein, a high-order level-set method using a fifth-order WENO scheme for the discretization of spatial derivatives and third-order explicit Runge-Kutta temporal integration is implemented within the Weather Research and Forecasting model wildland fire physics package, WRF-Fire. The algorithm includes solution of an additional partial differential equation for level-set reinitialization. The accuracy of the fire-front shape and rate of spread in uncoupled simulations is systematically analyzed. It is demonstrated that the common implementation used by level-set-based wildfire models yields rate-of-spread errors in the range 10-35% for typical grid sizes (Δ = 12.5-100 m) and considerably underestimates fire area. Moreover, the amplitude of fire-front gradients in the presence of explicitly resolved turbulence features is systematically underestimated. In contrast, the new WRF-Fire algorithm results in rate-of-spread errors that are lower than 1% and that become nearly grid independent. Also, the underestimation of fire area at the sharp transition between the fire front and the lateral flanks is found to be reduced by a factor of ≈7. A hybrid-order level-set method with locally reduced artificial viscosity is proposed, which substantially alleviates the computational cost associated with high-order discretizations while preserving accuracy. Simulations of the Last Chance wildfire demonstrate additional benefits of high-order accurate level-set algorithms when dealing with complex fuel heterogeneities, enabling propagation across narrow fuel gaps and more accurate fire backing over the lee side of no-fuel clusters.
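
    For orientation, the core update solved by such fire-spread codes is the Hamilton-Jacobi equation φ_t + R|∇φ| = 0, with R the local rate of spread. The sketch below uses first-order Godunov upwinding and forward Euler, i.e. the low-order end of the spectrum the paper compares against its WENO5/RK3 scheme; it is not the WRF-Fire implementation.

```python
import numpy as np

def level_set_step(phi, rate, dx, dt):
    """One forward-Euler step of phi_t + R |grad phi| = 0 (R >= 0).

    First-order Godunov upwinding on a uniform 2D grid, a deliberately
    simple stand-in meant only to show the structure of the update.
    """
    # One-sided differences (zero-gradient boundaries via edge padding).
    p = np.pad(phi, 1, mode="edge")
    dxm = (phi - p[1:-1, :-2]) / dx   # backward difference, axis 1
    dxp = (p[1:-1, 2:] - phi) / dx    # forward difference, axis 1
    dym = (phi - p[:-2, 1:-1]) / dx   # backward difference, axis 0
    dyp = (p[2:, 1:-1] - phi) / dx    # forward difference, axis 0
    # Godunov Hamiltonian for an outward-propagating front (R >= 0).
    grad = np.sqrt(np.maximum(dxm, 0.0)**2 + np.minimum(dxp, 0.0)**2 +
                   np.maximum(dym, 0.0)**2 + np.minimum(dyp, 0.0)**2)
    return phi - dt * rate * grad
```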

  13. Two-phase electro-hydrodynamic flow modeling by a conservative level set model.

    Science.gov (United States)

    Lin, Yuan

    2013-03-01

    The principles of electro-hydrodynamic (EHD) flow have been known for more than a century and have been adopted for various industrial applications, for example, fluid mixing and demixing. Analytical solutions of such EHD flow exist only in a limited number of scenarios, for example, predicting a small deformation of a single droplet in a uniform electric field. Numerical modeling of such phenomena can provide significant insights into EHD multiphase flows. During the last decade, many numerical results have been reported that provide novel and useful tools for studying multiphase EHD flow. Based on a conservative level set method, the proposed model is able to simulate large deformations of a droplet by a steady electric field, which is beyond the regime of theoretical prediction. The model is validated for both leaky dielectrics and perfect dielectrics, and is found to be in excellent agreement with existing analytical solutions and numerical studies in the literature. Furthermore, simulations of the deformation of a water droplet in decyl alcohol in a steady electric field match published experimental data for large deformations better than the theoretical prediction does. Therefore the proposed model can serve as a practical and accurate tool for simulating two-phase EHD flow.

  14. The Model Confidence Set

    DEFF Research Database (Denmark)

    Hansen, Peter Reinhard; Lunde, Asger; Nason, James M.

    The paper introduces the model confidence set (MCS) and applies it to the selection of models. An MCS is a set of models that is constructed such that it will contain the best model with a given level of confidence. The MCS is in this sense analogous to a confidence interval for a parameter. The MCS … beyond the comparison of models. We apply the MCS procedure to two empirical problems. First, we revisit the inflation forecasting problem posed by Stock and Watson (1999), and compute the MCS for their set of inflation forecasts. Second, we compare a number of Taylor rule regressions and determine the MCS of the best in terms of in-sample likelihood criteria.
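
    A toy sketch of the MCS elimination loop, under simplifying assumptions (an iid bootstrap and one particular studentized max-t statistic); the published procedure involves block-bootstrap and studentization choices not reproduced here.

```python
import numpy as np

def model_confidence_set(losses, alpha=0.10, n_boot=1000, seed=None):
    """Toy MCS: losses is a (T, m) array of per-period losses for m models.

    Repeatedly test equal predictive ability with a bootstrapped max-t
    statistic; while rejected, eliminate the worst surviving model.
    """
    rng = np.random.default_rng(seed)
    T, m = losses.shape
    active = list(range(m))
    while len(active) > 1:
        sub = losses[:, active]
        d = sub - sub.mean(axis=1, keepdims=True)   # loss relative to set average
        dbar = d.mean(axis=0)
        se = d.std(axis=0, ddof=1) / np.sqrt(T)
        t_obs = np.max(dbar / se)
        t_boot = np.empty(n_boot)
        for b in range(n_boot):
            idx = rng.integers(0, T, T)             # iid bootstrap resample
            db = d[idx]
            dbar_b = db.mean(axis=0) - dbar         # recentre under H0
            se_b = db.std(axis=0, ddof=1) / np.sqrt(T)
            t_boot[b] = np.max(dbar_b / se_b)
        if np.mean(t_boot >= t_obs) >= alpha:       # cannot reject equal ability
            break
        active.pop(int(np.argmax(dbar)))            # eliminate worst model
    return active                                    # indices of the MCS
```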

  15. Structural level set inversion for microwave breast screening

    International Nuclear Information System (INIS)

    Irishina, Natalia; Álvarez, Diego; Dorn, Oliver; Moscoso, Miguel

    2010-01-01

    We present a new inversion strategy for the early detection of breast cancer from microwave data, based on a new multiphase level set technique. This novel structural inversion method uses a modification of the color level set technique adapted to the specific situation of structural breast imaging, taking into account the high complexity of the breast tissue. We use data at only a few microwave frequencies for detecting the tumors hidden in this complex structure. Three level set functions are employed for describing four different types of breast tissue, where each of these four regions is allowed to have a complicated topology and an interior structure which needs to be estimated from the data simultaneously with the region interfaces. The algorithm consists of several stages of increasing complexity. In each stage, more detail about the anatomical structure of the breast interior is incorporated into the inversion model. The synthetic breast models which are used for creating simulated data are based on real MRI images of the breast and are therefore quite realistic. Our results demonstrate the potential and feasibility of the proposed level set technique for detecting, locating and characterizing a small tumor in its early stage of development embedded in such a realistic breast model. Both the data acquisition simulation and the inversion are carried out in 2D.

  16. HPC in Basin Modeling: Simulating Mechanical Compaction through Vertical Effective Stress using Level Sets

    Science.gov (United States)

    McGovern, S.; Kollet, S. J.; Buerger, C. M.; Schwede, R. L.; Podlaha, O. G.

    2017-12-01

    In the context of sedimentary basins, we present a model for the simulation of the movement of a geological formation (layers) during the evolution of the basin through sedimentation and compaction processes. Assuming a single-phase saturated porous medium for the sedimentary layers, the model focuses on the tracking of the layer interfaces, through the use of the level set method, as sedimentation drives fluid flow and reduction of pore space by compaction. On the assumption of Terzaghi's effective stress concept, the coupling of the pore fluid pressure to the motion of interfaces in 1-D is presented in McGovern et al. (2017) [1]. The current work extends the spatial domain to 3-D, though we maintain the assumption of vertical effective stress to drive the compaction. The idealized geological evolution is conceptualized as the motion of interfaces between rock layers, whose paths are determined by the magnitude of a speed function in the direction normal to the evolving layer interface. The speeds normal to the interface are dependent on the change in porosity, determined through an effective stress-based compaction law, such as the exponential Athy's law. Provided with the speeds normal to the interface, the level set method uses an advection equation to evolve a potential function, whose zero level set defines the interface. Thus, the moving layer geometry influences the pore pressure distribution, which couples back to the interface speeds. The flexible construction of the speed function allows extension, in the future, to other terms to represent different physical processes, analogous to how the compaction rule represents material deformation. The 3-D model is implemented using the generic finite element method framework deal.II, which provides tools, building on p4est and interfacing to PETSc, for the massively parallel distributed solution of the model equations [2]. Experiments are being run on the Juelich Supercomputing Centre's Jureca cluster. [1] McGovern et al. (2017)
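
    In common notation (not necessarily the paper's symbols), the two ingredients named in the abstract are an Athy-type compaction law and level set advection of the layer interfaces:

```latex
% Athy-type compaction law: porosity phi decreases with vertical
% effective stress sigma'_v (phi_0 and b are fitted constants)
\phi(\sigma'_v) = \phi_0 \, e^{-b \sigma'_v}

% Level set advection of a layer interface Gamma(t) with normal speed F,
% here driven by the compaction-induced porosity change:
\frac{\partial \psi}{\partial t} + F \, \lvert \nabla \psi \rvert = 0,
\qquad \Gamma(t) = \{\, \mathbf{x} : \psi(\mathbf{x}, t) = 0 \,\}
```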

  17. Two Surface-Tension Formulations For The Level Set Interface-Tracking Method

    International Nuclear Information System (INIS)

    Shepel, S.V.; Smith, B.L.

    2005-01-01

    The paper describes a comparative study of two surface-tension models for the Level Set interface tracking method. In both models, the surface tension is represented as a body force, concentrated near the interface, but the technical implementation of the two options is different. The first is based on a traditional Level Set approach, in which the surface tension is distributed over a narrow band around the interface using a smoothed Delta function. In the second model, which is based on the integral form of the fluid-flow equations, the force is imposed only in those computational cells through which the interface passes. Both models have been incorporated into the Finite-Element/Finite-Volume Level Set method, previously implemented into the commercial Computational Fluid Dynamics (CFD) code CFX-4. A critical evaluation of the two models, undertaken in the context of four standard Level Set benchmark problems, shows that the first model, based on the smoothed Delta function approach, is the more general, and more robust, of the two.
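
    The first (smoothed Delta function) option has a standard form. Below is a brief sketch using the cosine-type delta often seen in Level Set implementations; the actual CFX-4 implementation details are not reproduced here.

```python
import numpy as np

def smoothed_delta(phi, eps):
    """Cosine-type smoothed Dirac delta, supported on the band |phi| < eps."""
    d = (1.0 + np.cos(np.pi * phi / eps)) / (2.0 * eps)
    return np.where(np.abs(phi) < eps, d, 0.0)

def csf_body_force(phi, sigma, curvature, grad_phi, eps):
    """Continuum-surface-force style body force f = sigma * kappa * delta * grad(phi).

    grad_phi holds the gradient components along its leading axis, so the
    returned array is a vector field concentrated near the interface.
    """
    return sigma * curvature * smoothed_delta(phi, eps) * grad_phi
```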

  18. Exploring the level sets of quantum control landscapes

    International Nuclear Information System (INIS)

    Rothman, Adam; Ho, Tak-San; Rabitz, Herschel

    2006-01-01

    A quantum control landscape is defined by the value of a physical observable as a functional of the time-dependent control field E(t) for a given quantum-mechanical system. Level sets through this landscape are prescribed by a particular value of the target observable at the final dynamical time T, regardless of the intervening dynamics. We present a technique for exploring a landscape level set, where a scalar variable s is introduced to characterize trajectories along these level sets. The control fields E(s,t) accomplishing this exploration (i.e., that produce the same value of the target observable for a given system) are determined by solving a differential equation over s in conjunction with the time-dependent Schroedinger equation. There is full freedom to traverse a level set, and a particular trajectory is realized by making an a priori choice for a continuous function f(s,t) that appears in the differential equation for the control field. The continuous function f(s,t) can assume an arbitrary form, and thus a level set generally contains a family of controls, where each control takes the quantum system to the same final target value, but produces a distinct control mechanism. In addition, although the observable value remains invariant over the level set, other dynamical properties (e.g., the degree of robustness to control noise) are not specifically preserved and can vary greatly. Examples are presented to illustrate the continuous nature of level-set controls and their associated induced dynamical features, including continuously morphing mechanisms for population control in model quantum systems.
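
    The defining property of a level set trajectory E(s,t) can be stated compactly: the observable must be stationary in s, which constrains ∂E/∂s to the null space of the functional gradient. In the notation of the abstract:

```latex
% E(s,t) traverses a level set iff the target expectation value is
% stationary in s at the final time T:
\frac{d}{ds} \langle O(T) \rangle
  = \int_0^T \frac{\delta \langle O(T) \rangle}{\delta E(s,t)}
    \, \frac{\partial E(s,t)}{\partial s} \, dt = 0
```

    One way to read the role of f(s,t) in the abstract is that ∂E/∂s is taken as the free function f(s,t) minus its projection onto the functional gradient, which automatically satisfies this constraint.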

  19. A level set approach for shock-induced α-γ phase transition of RDX

    Science.gov (United States)

    Josyula, Kartik; Rahul; De, Suvranu

    2018-02-01

    We present a thermodynamically consistent level sets approach based on a regularization energy functional which can be directly incorporated into a Galerkin finite element framework to model interface motion. The regularization energy leads to a diffusive form of flux that is embedded within the level sets evolution equation and maintains the signed distance property of the level set function. The scheme is shown to compare well with the velocity extension method in capturing the interface position. The proposed level sets approach is employed to study the α-γ phase transformation in RDX single crystal shocked along the (100) plane. Example problems in one and three dimensions are presented. We observe smooth evolution of the phase interface along the shock direction in both models. There is no diffusion of the interface during the zero level set evolution in the three dimensional model. The level sets approach is shown to capture the characteristics of the shock-induced α-γ phase transformation, such as stress relaxation behind the phase interface and the finite time required for the phase transformation to complete. The regularization energy based level sets approach is efficient, robust, and easy to implement.
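
    One common regularization energy of this type (a plausible reading of the abstract, not necessarily the authors' exact functional) penalizes deviation from the signed distance property |∇ψ| = 1 and contributes a diffusive term to the evolution:

```latex
% Signed-distance regularization energy (assumed form):
P(\psi) = \int_\Omega \tfrac{1}{2} \left( \lvert \nabla \psi \rvert - 1 \right)^2 d\mathbf{x}

% Its gradient descent contributes a diffusive flux to the level set
% evolution, which keeps psi close to a signed distance function:
\frac{\partial \psi}{\partial t}
  = \mu \, \nabla \cdot \left[ \left( 1 - \frac{1}{\lvert \nabla \psi \rvert} \right) \nabla \psi \right] + \cdots
```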

  20. Multi-phase flow monitoring with electrical impedance tomography using level set based method

    International Nuclear Information System (INIS)

    Liu, Dong; Khambampati, Anil Kumar; Kim, Sin; Kim, Kyung Youn

    2015-01-01

    Highlights: • LSM has been used for shape reconstruction to monitor multi-phase flow using EIT. • The multi-phase level set model for conductivity is represented by two level set functions. • LSM handles topological merging and breaking naturally during the evolution process. • To reduce the computational time, a narrowband technique was applied. • Use of the narrowband and optimization approach results in an efficient and fast method. - Abstract: In this paper, a level set-based reconstruction scheme is applied to multi-phase flow monitoring using electrical impedance tomography (EIT). The proposed scheme involves applying a narrowband level set method to solve the inverse problem of finding the interface between the regions having different conductivity values. The multi-phase level set model for the conductivity distribution inside the domain is represented by two level set functions. The key principle of the level set-based method is to implicitly represent the shape of the interface as the zero level set of a higher dimensional function and then solve a set of partial differential equations. The level set-based scheme handles topological merging and breaking naturally during the evolution process. It also offers several advantages compared to the traditional pixel-based approach. The level set-based method for multi-phase flow is tested with numerical and experimental data. It is found that the level set-based method has better reconstruction performance when compared to the pixel-based method.
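
    A standard way to let two level set functions encode several conductivity phases is the "color level set" convention below; the paper's exact parameterization may differ.

```latex
% Two level set functions phi_1, phi_2 encode up to four phases via
% products of Heaviside functions H:
\sigma(\mathbf{x}) =
    \sigma_1 H(\phi_1) H(\phi_2)
  + \sigma_2 H(\phi_1) \bigl(1 - H(\phi_2)\bigr)
  + \sigma_3 \bigl(1 - H(\phi_1)\bigr) H(\phi_2)
  + \sigma_4 \bigl(1 - H(\phi_1)\bigr) \bigl(1 - H(\phi_2)\bigr)
```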

  1. Level Set Approach to Anisotropic Wet Etching of Silicon

    Directory of Open Access Journals (Sweden)

    Branislav Radjenović

    2010-05-01

    In this paper a methodology for the three-dimensional (3D) modeling and simulation of the profile evolution during anisotropic wet etching of silicon based on the level set method is presented. Etching rate anisotropy in silicon is modeled taking into account full silicon symmetry properties, by means of an interpolation technique using experimentally obtained values of the etching rates along thirteen principal and high-index directions in KOH solutions. The resulting level set equations are solved using an open source implementation of the sparse field method (the ITK library, developed in the medical image processing community), extended to the case of non-convex Hamiltonians. Simulation results for some interesting initial 3D shapes are shown, as well as some more practical examples illustrating anisotropic etching simulation in the presence of masks (a simple square aperture mask, convex corner undercutting and convex corner compensation, formation of suspended structures). The obtained results show that the level set method can be used as an effective tool for wet etching process modeling, and that it is a viable alternative to the Cellular Automata method which now prevails in simulations of the wet etching process.

  2. Modeling Restrained Shrinkage Induced Cracking in Concrete Rings Using the Thick Level Set Approach

    Directory of Open Access Journals (Sweden)

    Rebecca Nakhoul

    2018-03-01

    Modeling restrained shrinkage-induced damage and cracking in concrete is addressed herein. The novel Thick Level Set (TLS) damage growth and crack propagation model is used and adapted by introducing the shrinkage contribution into the formulation. The capacity of the TLS to predict damage evolution, crack initiation and growth triggered by restrained shrinkage in the absence of external loads is evaluated. A study dealing with shrinkage-induced cracking in elliptical concrete rings is presented. Key results, such as the effect of ring oblateness on the stress distribution and on the critical shrinkage strain needed to initiate damage, are highlighted. In addition, crack positions are compared to those observed in experiments and are found to be satisfactory.

  3. Variational Level Set Method for Two-Stage Image Segmentation Based on Morphological Gradients

    Directory of Open Access Journals (Sweden)

    Zemin Ren

    2014-01-01

    We use a variational level set method and transition region extraction techniques to achieve the image segmentation task. The proposed scheme consists of two steps. We first develop a novel algorithm to extract the transition region based on the morphological gradient. After this, we integrate the transition region into a variational level set framework and develop a novel geometric active contour model, which includes an external energy based on the transition region and a fractional-order edge indicator function. The external energy is used to drive the zero level set toward the desired image features, such as object boundaries. Due to this external energy, the proposed model allows for more flexible initialization. The fractional-order edge indicator function is incorporated into the length regularization term to diminish the influence of noise. Moreover, an internal energy is added to the proposed model to penalize the deviation of the level set function from a signed distance function. The resulting evolution of the level set function is the gradient flow that minimizes the overall energy functional. The proposed model has been applied to both synthetic and real images with promising results.

  4. Presenting a Model for Setting in Narrative Fiction Illustration

    Directory of Open Access Journals (Sweden)

    Hajar Salimi Namin

    2017-12-01

    The present research aims at presenting a model for evaluating and enhancing the training of setting in illustration for narrative fictions, for undergraduate students of graphic design who are weak in setting. The research utilized experts' opinions through a survey. The designed model was submitted to eight experts, and their opinions were used to adjust and improve the model. The research instruments were notes, materials in textbooks, papers, and related websites, as well as questionnaires. Results indicated that, for evaluating and enhancing the level of training of setting in illustration for narrative fiction, one needs to extract sub-indexes of setting. Moreover, definition and recognition of the model of setting helps undergraduate students of graphic design enhance the level of setting skill in their works by recognizing the details of setting. Accordingly, it is recommended to design training packages to enhance these sub-indexes and hence improve the setting for narrative fiction illustration.

  5. Settings for Physical Activity – Developing a Site-specific Physical Activity Behavior Model based on Multi-level Intervention Studies

    DEFF Research Database (Denmark)

    Troelsen, Jens; Klinker, Charlotte Demant; Breum, Lars

    Settings for Physical Activity – Developing a Site-specific Physical Activity Behavior Model based on Multi-level Intervention Studies. Introduction: Ecological models of health behavior have potential as a theoretical framework to comprehend the multiple levels of factors influencing physical activity (PA). The potential is shown by the fact that there has been a dramatic increase in the application of ecological models in research and practice. One proposed core principle is that an ecological model is most powerful if the model is behavior-specific. However, based on multi-level interventions … to be taken into consideration. A theoretical implication of this finding is to develop a site-specific physical activity behavior model adding a layered structure to the ecological model, representing the determinants related to the specific site. Support: This study was supported by TrygFonden, Realdania …

  6. A deep level set method for image segmentation

    OpenAIRE

    Tang, Min; Valipour, Sepehr; Zhang, Zichen Vincent; Cobzas, Dana; Jagersand, Martin

    2017-01-01

    This paper proposes a novel image segmentation approach that integrates fully convolutional networks (FCNs) with a level set model. Compared with a FCN, the integrated method can incorporate smoothing and prior information to achieve an accurate segmentation. Furthermore, rather than using the level set model as a post-processing tool, we integrate it into the training phase to fine-tune the FCN. This allows the use of unlabeled data during training in a semi-supervised setting. Using two types o...

  7. Introduction to the level-set full field modeling of laths spheroidization phenomenon in α/β titanium alloys

    Directory of Open Access Journals (Sweden)

    Polychronopoulou D.

    2016-01-01

    Fragmentation of α lamellae and subsequent spheroidization of α laths in α/β titanium alloys occurring during and after deformation are well known phenomena. We illustrate the development of a new finite element methodology to model them. This new methodology is based on a level set framework to model the deformation and the ad hoc simultaneous and/or subsequent interface kinetics. We focus, as yet, on the modeling of surface diffusion at the α/β phase interfaces and of motion by mean curvature at the α/α grain interfaces.

  8. On the modeling of bubble evolution and transport using coupled level-set/CFD method

    International Nuclear Information System (INIS)

    Bartlomiej Wierzbicki; Steven P Antal; Michael Z Podowski

    2005-01-01

    The ability to predict the shape of gas/liquid/solid interfaces is important for various multiphase flow and heat transfer applications. Specific issues of interest to nuclear reactor thermal-hydraulics include the evolution of the shape of bubbles attached to solid surfaces during nucleation, bubble surface interactions in complex geometries, etc. Additional problems, making the overall task even more complicated, are associated with the effect of material properties that may be significantly altered by the addition of minute amounts of impurities, such as surfactants or nano-particles. The present paper is concerned with the development of an innovative approach to modeling the time-dependent shape of gas/liquid interfaces in the presence of solid walls. The proposed approach combines a modified level-set method with an advanced CFD code, NPHASE. The coupled numerical solver can be used to simulate the evolution of gas/liquid interfaces in two-phase flows for a variety of geometries and flow conditions, from individual bubbles to free surfaces (stratified flows). The issues discussed in the full paper will include: a description of the novel aspects of the proposed level-set-based method, an overview of the NPHASE code modeling framework, and a description of the coupling method between these two elements of the overall model. Particular attention will be given to the consistency and completeness of the model formulation for the interfacial phenomena near the liquid/gas/solid triple line, and to the impact of the proposed numerical approach on the accuracy and consistency of predictions. The accuracy will be measured in terms of both the calculated shape of the interfaces and the gas and liquid velocity fields around the interfaces and in the entire computational domain. The results of model testing and validation will also be shown in the full paper. The situations analyzed will include: bubbles of different sizes and varying

  9. A Level Set Discontinuous Galerkin Method for Free Surface Flows

    DEFF Research Database (Denmark)

    Grooss, Jesper; Hesthaven, Jan

    2006-01-01

    We present a discontinuous Galerkin method on a fully unstructured grid for the modeling of unsteady incompressible fluid flows with free surfaces. The surface is modeled by embedding and is represented by a level set. We discuss the discretization of the flow equations and the level set equation.

  10. Reconstruction of thin electromagnetic inclusions by a level-set method

    International Nuclear Information System (INIS)

    Park, Won-Kwang; Lesselier, Dominique

    2009-01-01

    In this contribution, we consider a technique of electromagnetic imaging (at a single, non-zero frequency) which uses the level-set evolution method for reconstructing a thin inclusion (possibly made of disconnected parts) with either dielectric or magnetic contrast with respect to the embedding homogeneous medium. Emphasis is on the proof of concept, the scattering problem at hand being so far based on a two-dimensional scalar model. To do so, two level-set functions are employed; the first describes location and shape, and the other describes connectivity and length. Speeds of evolution of the level-set functions are calculated via the introduction of Fréchet derivatives of a least-squares cost functional. Several numerical experiments, on noiseless as well as noisy data, illustrate how the proposed method behaves.

  11. Global and local level density models

    International Nuclear Information System (INIS)

    Koning, A.J.; Hilaire, S.; Goriely, S.

    2008-01-01

    Four different level density models, three phenomenological and one microscopic, are consistently parameterized using the same set of experimental observables. For each of the phenomenological models, the Constant Temperature Model, the Back-shifted Fermi gas Model and the Generalized Superfluid Model, a version without and with explicit collective enhancement is considered. Moreover, a recently published microscopic combinatorial model is compared with the phenomenological approaches and with the same set of experimental data. For each nuclide for which sufficient experimental data exist, a local level density parameterization is constructed for each model. Next, these local models have helped to construct global level density prescriptions, to be used for cases for which no experimental data exist. Altogether, this yields a collection of level density formulae and parameters that can be used with confidence in nuclear model calculations. To demonstrate this, a large-scale validation against experimental discrete level schemes and experimental cross sections and neutron emission spectra for various reaction channels has been performed.
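
    For reference, the textbook forms of the first two phenomenological models are given below, with parameters (T, E_0, a, Δ) fitted locally or supplied by global systematics as the abstract describes:

```latex
% Constant Temperature Model: exponential growth with excitation energy
\rho_{\mathrm{CTM}}(E) = \frac{1}{T} \exp\!\left( \frac{E - E_0}{T} \right)

% Back-shifted Fermi gas Model, with back-shifted energy U = E - \Delta
% and level density parameter a:
\rho_{\mathrm{BFM}}(E) \propto \frac{\exp\!\left( 2\sqrt{aU} \right)}{a^{1/4}\, U^{5/4}},
\qquad U = E - \Delta
```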

  12. Discretisation Schemes for Level Sets of Planar Gaussian Fields

    Science.gov (United States)

    Beliaev, D.; Muirhead, S.

    2018-01-01

    Smooth random Gaussian functions play an important role in mathematical physics, a main example being the random plane wave model conjectured by Berry to give a universal description of high-energy eigenfunctions of the Laplacian on generic compact manifolds. Our work is motivated by questions about the geometry of such random functions, in particular relating to the structure of their nodal and level sets. We study four discretisation schemes that extract information about level sets of planar Gaussian fields. Each scheme recovers information up to a different level of precision, and each requires a maximum mesh-size in order to be valid with high probability. The first two schemes are generalisations and enhancements of similar schemes that have appeared in the literature (Beffara and Gayet in Publ Math IHES, 2017. https://doi.org/10.1007/s10240-017-0093-0; Mischaikow and Wanner in Ann Appl Probab 17:980-1018, 2007); these give complete topological information about the level sets on either a local or global scale. As an application, we improve the results in Beffara and Gayet (2017) on Russo-Seymour-Welsh estimates for the nodal set of positively-correlated planar Gaussian fields. The third and fourth schemes are, to the best of our knowledge, completely new. The third scheme is specific to the nodal set of the random plane wave, and provides global topological information about the nodal set up to 'visible ambiguities'. The fourth scheme gives a way to approximate the mean number of excursion domains of planar Gaussian fields.

  13. Transport and diffusion of material quantities on propagating interfaces via level set methods

    CERN Document Server

    Adalsteinsson, D

    2003-01-01

    We develop theory and numerical algorithms to apply level set methods to problems involving the transport and diffusion of material quantities in a level set framework. Level set methods are computational techniques for tracking moving interfaces; they work by embedding the propagating interface as the zero level set of a higher dimensional function, and then approximate the solution of the resulting initial value partial differential equation using upwind finite difference schemes. The traditional level set method works in the trace space of the evolving interface, and hence disregards any parameterization in the interface description. Consequently, material quantities on the interface which themselves are transported under the interface motion are not easily handled in this framework. We develop model equations and algorithmic techniques to extend the level set method to include these problems. We demonstrate the accuracy of our approach through a series of test examples and convergence studies.

  14. Transport and diffusion of material quantities on propagating interfaces via level set methods

    International Nuclear Information System (INIS)

    Adalsteinsson, David; Sethian, J.A.

    2003-01-01

    We develop theory and numerical algorithms to apply level set methods to problems involving the transport and diffusion of material quantities in a level set framework. Level set methods are computational techniques for tracking moving interfaces; they work by embedding the propagating interface as the zero level set of a higher dimensional function, and then approximate the solution of the resulting initial value partial differential equation using upwind finite difference schemes. The traditional level set method works in the trace space of the evolving interface, and hence disregards any parameterization in the interface description. Consequently, material quantities on the interface which themselves are transported under the interface motion are not easily handled in this framework. We develop model equations and algorithmic techniques to extend the level set method to include these problems. We demonstrate the accuracy of our approach through a series of test examples and convergence studies.

  15. Novel gene sets improve set-level classification of prokaryotic gene expression data.

    Science.gov (United States)

    Holec, Matěj; Kuželka, Ondřej; Železný, Filip

    2015-10-28

    Set-level classification of gene expression data has received significant attention recently. In this setting, high-dimensional vectors of features corresponding to genes are converted into lower-dimensional vectors of features corresponding to biologically interpretable gene sets. The dimensionality reduction brings the promise of a decreased risk of overfitting, potentially resulting in improved accuracy of the learned classifiers. However, recent empirical research has not confirmed this expectation. Here we hypothesize that the reported unfavorable classification results in the set-level framework were due to the adoption of unsuitable gene sets defined typically on the basis of the Gene Ontology and the KEGG database of metabolic networks. We explore an alternative approach to defining gene sets, based on regulatory interactions, which we expect to collect genes with more correlated expression. We hypothesize that such more correlated gene sets will enable learning more accurate classifiers. We define two families of gene sets using information on regulatory interactions, and evaluate them on phenotype-classification tasks using public prokaryotic gene expression data sets. From each of the two gene-set families, we first select the best-performing subtype. The two selected subtypes are then evaluated on independent (testing) data sets against state-of-the-art gene sets and against the conventional gene-level approach. The novel gene sets are indeed more correlated than the conventional ones, and lead to significantly more accurate classifiers. Novel gene sets defined on the basis of regulatory interactions thus improve set-level classification of gene expression data. The experimental scripts and other material needed to reproduce the experiments are available at http://ida.felk.cvut.cz/novelgenesets.tar.gz.
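
    The set-level conversion itself is simple to sketch. The snippet below assumes mean aggregation over the genes of each set (other set-level statistics are possible; the paper's exact aggregation is not specified here).

```python
import numpy as np

def set_level_features(expr, gene_sets, gene_index):
    """Convert gene-level expression to set-level features by averaging.

    expr: (n_samples, n_genes) array; gene_sets: dict mapping a set name
    to a list of gene identifiers; gene_index: gene identifier -> column.
    """
    cols = {name: [gene_index[g] for g in genes if g in gene_index]
            for name, genes in gene_sets.items()}
    names = [n for n, c in cols.items() if c]          # drop empty sets
    feats = np.column_stack([expr[:, cols[n]].mean(axis=1) for n in names])
    return feats, names                                # (n_samples, n_sets)
```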

  16. Mapping topographic structure in white matter pathways with level set trees.

    Directory of Open Access Journals (Sweden)

    Brian P Kent

    Fiber tractography on diffusion imaging data offers rich potential for describing white matter pathways in the human brain, but characterizing the spatial organization in these large and complex data sets remains a challenge. We show that level set trees (which provide a concise representation of the hierarchical mode structure of probability density functions) offer a statistically principled framework for visualizing and analyzing topography in fiber streamlines. Using diffusion spectrum imaging data collected on neurologically healthy controls (N = 30), we mapped white matter pathways from the cortex into the striatum using a deterministic tractography algorithm that estimates fiber bundles as dimensionless streamlines. Level set trees were used for interactive exploration of patterns in the endpoint distributions of the mapped fiber pathways and for an efficient segmentation of the pathways that had empirical accuracy comparable to standard nonparametric clustering techniques. We show that level set trees can also be generalized to model pseudo-density functions in order to analyze a broader array of data types, including entire fiber streamlines. Finally, resampling methods show the reliability of the level set tree as a descriptive measure of topographic structure, illustrating its potential as a statistical descriptor in brain imaging analysis. These results highlight the broad applicability of level set trees for visualizing and analyzing high-dimensional data like fiber tractography output.
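
    For readers unfamiliar with the construction: the level set tree organizes the connected components of the upper level sets of a density estimate, nested by inclusion as the level varies.

```latex
% Upper level set of a density estimate \hat f at level \lambda:
L(\lambda) = \{\, x : \hat f(x) \ge \lambda \,\}

% The level set tree is the hierarchy of connected components of
% L(\lambda): for \lambda_1 < \lambda_2, each component of L(\lambda_2)
% lies inside exactly one component of L(\lambda_1), its parent node.
```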

  17. Volume Sculpting Using the Level-Set Method

    DEFF Research Database (Denmark)

    Bærentzen, Jakob Andreas; Christensen, Niels Jørgen

    2002-01-01

    In this paper, we propose the use of the Level-Set Method as the underlying technology of a volume sculpting system. The main motivation is that this leads to a very generic technique for deformation of volumetric solids. In addition, our method preserves a distance field volume representation. A scaling window is used to adapt the Level-Set Method to local deformations and to allow the user to control the intensity of the tool. Level-Set based tools have been implemented in an interactive sculpting system, and we show sculptures created using the system.

  18. Level set methods for detonation shock dynamics using high-order finite elements

    Energy Technology Data Exchange (ETDEWEB)

    Dobrev, V. A. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)]; Grogan, F. C. [Univ. of California, San Diego, CA (United States); Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)]; Kolev, T. V. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)]; Rieben, R. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)]; Tomov, V. Z. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)]

    2017-05-26

    Level set methods are a popular approach to modeling evolving interfaces. We present a level set advection solver in two and three dimensions using the discontinuous Galerkin method with high-order finite elements. During evolution, the level set function is reinitialized to a signed distance function to maintain accuracy. Our approach leads to stable front propagation and convergence on high-order, curved, unstructured meshes. The ability of the solver to implicitly track moving fronts lends itself to a number of applications; in particular, we highlight applications to high-explosive (HE) burn and detonation shock dynamics (DSD). We provide results for two- and three-dimensional benchmark problems as well as applications to DSD.
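
    The reinitialization step mentioned above is commonly posed as a pseudo-time Hamilton-Jacobi problem whose steady state is a signed distance function; this is a standard form, not the paper's specific DG discretization.

```latex
% Reinitialization: evolve in pseudo-time tau until |grad phi| = 1,
% while the zero level set of phi stays (approximately) fixed.
\frac{\partial \phi}{\partial \tau}
  + \operatorname{sgn}(\phi_0) \left( \lvert \nabla \phi \rvert - 1 \right) = 0,
\qquad \phi(\mathbf{x}, 0) = \phi_0(\mathbf{x})
```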

  19. Modeling of Two-Phase Flow in Rough-Walled Fracture Using Level Set Method

    Directory of Open Access Journals (Sweden)

    Yunfeng Dai

    2017-01-01

    To describe accurately the flow characteristics of fracture-scale displacements of immiscible fluids, an incompressible two-phase (crude oil and water) flow model incorporating interfacial forces and nonzero contact angles is developed. The roughness of the two-dimensional synthetic rough-walled fractures is controlled with different fractal dimension parameters. The flow is described by the Navier–Stokes equations, and the moving interface between crude oil and water is tracked using the level set method. The method accounts for differences in densities and viscosities of crude oil and water and includes the effect of interfacial force. The wettability of the rough fracture wall is taken into account by defining the contact angle and slip length. The curve of invasion pressure versus water volume fraction is generated by modeling two-phase flow during a sudden drainage. The volume fraction of water retained in the rough-walled fracture is calculated by integrating the water volume and dividing by the total cavity volume of the fracture while the two-phase flow is quasistatic. The effects of the invasion pressure of crude oil, the roughness of the fracture wall, and the wettability of the wall on two-phase flow in a rough-walled fracture are evaluated.

  20. A parametric level-set approach for topology optimization of flow domains

    DEFF Research Database (Denmark)

    Pingen, Georg; Waidmann, Matthias; Evgrafov, Anton

    2010-01-01

    … of the design variables in the traditional approaches is seen as a possible cause for the slow convergence. Non-smooth material distributions are suspected to trigger premature onset of instationary flows which cannot be treated by steady-state flow models. In the present work, we study whether the convergence and the versatility of topology optimization methods for fluidic systems can be improved by employing a parametric level-set description. In general, level-set methods allow controlling the smoothness of boundaries, yield a non-local influence of design variables, and decouple the material description from the flow field discretization. The parametric level-set method used in this study utilizes a material distribution approach to represent flow boundaries, resulting in a non-trivial mapping between design variables and local material properties. Using a hydrodynamic lattice Boltzmann method, we study …

  1. Level set methods for inverse scattering—some recent developments

    International Nuclear Information System (INIS)

    Dorn, Oliver; Lesselier, Dominique

    2009-01-01

    We give an update on recent techniques which use a level set representation of shapes for solving inverse scattering problems, complementing in this respect the exposition made in (Dorn and Lesselier 2006 Inverse Problems 22 R67) and (Dorn and Lesselier 2007 Deformable Models (New York: Springer) pp 61–90), and bringing it closer to the current state of the art.

  2. Economic communication model set

    Science.gov (United States)

    Zvereva, Olga M.; Berg, Dmitry B.

    2017-06-01

    This paper details findings from research targeted at the investigation of economic communications using agent-based models. The agent-based model set was engineered to simulate economic communications. Money in the form of internal and external currencies was introduced into the models to support exchanges in communications. Every model, being based on the general concept, has its own peculiarities in its algorithm and input data set, since it was engineered to solve a specific problem. Several data sets of different origin were used in the experiments: theoretical sets were estimated on the basis of the static Leontief equilibrium equation, and a real set was constructed on the basis of statistical data. During the simulation experiments, the communication process was observed in its dynamics, and system macroparameters were estimated. This research confirmed that the combination of an agent-based and a mathematical model can produce a synergetic effect.

  3. Skull defect reconstruction based on a new hybrid level set.

    Science.gov (United States)

    Zhang, Ziqun; Zhang, Ran; Song, Zhijian

    2014-01-01

    Skull defect reconstruction is an important aspect of surgical repair. Historically, a skull defect prosthesis was created by the mirroring technique, surface fitting, or formed templates. These methods are not based on the anatomy of the individual patient's skull, and therefore the prosthesis cannot precisely correct the defect. This study presents a new hybrid level set model, taking into account both global region information, for robustness, and local edge information, for accuracy, while avoiding re-initialization during the evolution of the level set function. Based on the new method, a skull defect was reconstructed, and the skull prosthesis was produced by rapid prototyping technology. This resulted in a skull defect prosthesis that matched the skull defect well, with excellent individual adaptation.

  4. Level-Set Topology Optimization with Aeroelastic Constraints

    Science.gov (United States)

    Dunning, Peter D.; Stanford, Bret K.; Kim, H. Alicia

    2015-01-01

    Level-set topology optimization is used to design a wing considering skin buckling under static aeroelastic trim loading, as well as dynamic aeroelastic stability (flutter). The level-set function is defined over the entire 3D volume of a transport aircraft wing box. Therefore, the approach is not limited by any predefined structure and can explore novel configurations. The Sequential Linear Programming (SLP) level-set method is used to solve the constrained optimization problems. The proposed method is demonstrated using three problems with mass, linear buckling and flutter objective and/or constraints. A constraint aggregation method is used to handle multiple buckling constraints in the wing skins. A continuous flutter constraint formulation is used to handle difficulties arising from discontinuities in the design space caused by a switching of the critical flutter mode.

  5. An experimental methodology for a fuzzy set preference model

    Science.gov (United States)

    Turksen, I. B.; Willson, Ian A.

    1992-01-01

    A flexible fuzzy set preference model first requires approximate methodologies for implementation. Fuzzy sets must be defined for each individual consumer using computer software, requiring a minimum of time and expertise on the part of the consumer. The amount of information needed in defining sets must also be established. The model itself must adapt fully to the subject's choice of attributes (vague or precise), attribute levels, and importance weights. The resulting individual-level model should be fully adapted to each consumer. The methodologies needed to develop this model will be equally useful in a new generation of intelligent systems which interact with ordinary consumers, controlling electronic devices through fuzzy expert systems or making recommendations based on a variety of inputs. The power of personal computers and their acceptance by consumers has yet to be fully utilized to create interactive knowledge systems that fully adapt their function to the user. Understanding individual consumer preferences is critical to the design of new products and the estimation of demand (market share) for existing products, which in turn is an input to management systems concerned with production and distribution. The question of what to make, for whom to make it and how much to make requires an understanding of the customer's preferences and the trade-offs that exist between alternatives. Conjoint analysis is a widely used methodology which decomposes an overall preference for an object into a combination of preferences for its constituent parts (attributes such as taste and price), which are combined using an appropriate combination function. Preferences are often expressed using linguistic terms which cannot be represented in conjoint models. Current models are also not implemented at an individual level, making it difficult to reach meaningful conclusions about the cause of an individual's behavior from an aggregate model. The combination of complex aggregate

  6. A finite element/level set model of polyurethane foam expansion and polymerization

    Energy Technology Data Exchange (ETDEWEB)

    Rao, Rekha R. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Long, Kevin Nicholas [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Roberts, Christine Cardinal [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Celina, Mathias C. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Brunini, Victor [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Soehnel, Melissa Marie [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Noble, David R. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Tinsley, James [Honeywell Federal Manufacturing & Technologies, Kansas City, MO (United States); Mondy, Lisa [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-12-01

    Polyurethane foams are used widely for encapsulation and structural purposes because they are inexpensive, straightforward to process, amenable to a wide range of density variations (1-50 lb/ft³), and able to fill complex molds quickly and effectively. Computational models of the filling and curing process are needed to reduce defects such as voids, out-of-specification density, density gradients, foam decomposition from high temperatures due to exotherms, and incomplete filling. This paper details the development of a computational fluid dynamics model of a moderate density PMDI structural foam, PMDI-10. PMDI is an isocyanate-based polyurethane foam, which is chemically blown with water. The polyol reacts with isocyanate to produce the polymer. PMDI-10 is catalyzed, giving it a short pot life: it foams and polymerizes to a solid within 5 minutes during normal processing. To achieve a higher density, the foam is over-packed to twice or more of its free rise density of 10 lb/ft³. The goal of the modeling is to represent the expansion, the filling of molds, and the polymerization of the foam. This will be used to reduce defects, optimize the mold design, troubleshoot the process, and predict the final foam properties. A homogenized continuum model of foaming and curing was developed based on reaction kinetics, documented in a recent paper; it uses a simplified mathematical formalism that decouples these two reactions. The chemo-rheology of PMDI is measured experimentally and fit to a generalized-Newtonian viscosity model that is dependent on the extent of cure, gas fraction, and temperature. The conservation equations, including the equations of motion, an energy balance, and three rate equations, are solved via a stabilized finite element method. The equations are combined with a level set method to determine the location of the foam-gas interface as it evolves to fill the mold. Understanding the thermal history and loads on the foam due to exothermicity and oven

  7. Reconstruction of incomplete cell paths through a 3D-2D level set segmentation

    Science.gov (United States)

    Hariri, Maia; Wan, Justin W. L.

    2012-02-01

    Segmentation of fluorescent cell images has been a popular technique for tracking live cells. One challenge in segmenting cells from fluorescence microscopy is that cells frequently disappear from the images. When the images are stacked together to form a 3D image volume, the disappearance of the cells leads to broken cell paths. In this paper, we present a segmentation method that can reconstruct incomplete cell paths. The key idea of this model is to perform 2D segmentation in a 3D framework. The 2D segmentation captures the cells that appear in the image slices, while the 3D segmentation connects the broken cell paths. The formulation is similar to Chan-Vese level set segmentation, which detects edges by comparing the intensity value at each voxel with the mean intensity values inside and outside of the level set surface. Our model, however, performs the comparison on each 2D slice with the means calculated from the 2D projected contour. The resulting effect is to segment the cells on each image slice. Unlike segmenting each image frame individually, these 2D contours together form the 3D level set function. By enforcing minimum mean curvature on the level set surface, our segmentation model is able to extend the cell contours right before (and after) the cell disappears (and reappears) into the gaps, eventually connecting the broken paths. We present segmentation results of C2C12 cells in fluorescent images to illustrate the effectiveness of our model qualitatively and quantitatively through different numerical examples.
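
    The slice-wise region statistics that distinguish this 3D-2D formulation from standard Chan-Vese can be sketched compactly. The following Python fragment is a hedged illustration of that single ingredient (per-slice means from the projected contour), not the authors' code; the 3D curvature term that bridges broken paths is omitted.

```python
import numpy as np

def slicewise_cv_forces(phi, vol):
    """Per-slice Chan-Vese region forces for a 3D level set function.

    Illustrative sketch only: `vol` is an image stack (z, y, x) and
    `phi` a 3D level set function over it. The inside/outside means are
    computed per 2D slice from the projected contour (phi[z] > 0), so
    the data term acts slice by slice while the 3D surface (curvature
    term omitted here) bridges slices where cells disappear.
    """
    force = np.zeros_like(vol, dtype=float)
    for z in range(vol.shape[0]):
        inside = phi[z] > 0
        if inside.any() and (~inside).any():
            c_in = vol[z][inside].mean()
            c_out = vol[z][~inside].mean()
            # Positive where the voxel matches the inside mean better
            force[z] = (vol[z] - c_out) ** 2 - (vol[z] - c_in) ** 2
    return force
```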

  8. Archaeological predictive model set.

    Science.gov (United States)

    2015-03-01

    This report is the documentation for Task 7 of the Statewide Archaeological Predictive Model Set. The goal of this project is to : develop a set of statewide predictive models to assist the planning of transportation projects. PennDOT is developing t...

  9. Some free boundary problems in potential flow regime using a level set based method

    Energy Technology Data Exchange (ETDEWEB)

    Garzon, M.; Bobillo-Ares, N.; Sethian, J.A.

    2008-12-09

    Recent advances in the field of fluid mechanics with moving fronts are linked to the use of Level Set Methods, a versatile mathematical technique to follow free boundaries which undergo topological changes. A challenging class of problems in this context is those related to the solution of a partial differential equation posed on a moving domain, in which the boundary condition for the PDE solver has to be obtained from a partial differential equation defined on the front. This is the case for potential flow models with moving boundaries. Moreover, the fluid front may carry some material substance which diffuses on the front and is advected by the front velocity, as for example when surfactants are used to lower surface tension. We present a Level Set based methodology to embed these partial differential equations defined on the front in a complete Eulerian framework, fully avoiding the tracking of fluid particles and its known limitations. To show the advantages of this approach in the field of fluid mechanics, we present in this work one particular application: the numerical approximation of a potential flow model to simulate the evolution and breaking of a solitary wave propagating over a sloping bottom, and we compare the level set based algorithm with previous front tracking models.
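
    A standard Eulerian ingredient for this kind of front-carried quantity is normal extension of front values into the bulk, so that no fluid particles need to be tracked. The Python sketch below relaxes the usual extension equation dq/dtau + sign(phi) n . grad(q) = 0 with first-order upwinding; it is an illustrative stand-in under assumed uniform grid spacing and periodic boundaries, not the algorithm of the paper.

```python
import numpy as np

def extend_off_front(q, phi, n_iter=50, dtau=0.5, h=1.0):
    """Extend a quantity q defined near the zero level set into the bulk.

    Front values of q are propagated along interface normals by relaxing
        dq/dtau + sign(phi) * (grad(phi)/|grad(phi)|) . grad(q) = 0
    to steady state, so q becomes constant along normal lines and is
    available on the whole grid. Periodic boundaries via np.roll, kept
    deliberately simple.
    """
    py, px = np.gradient(phi, h)
    mag = np.sqrt(px ** 2 + py ** 2) + 1e-12
    vx = np.sign(phi) * px / mag
    vy = np.sign(phi) * py / mag
    for _ in range(n_iter):
        # First-order upwind differences against the propagation direction
        qx_b = (q - np.roll(q, 1, axis=1)) / h
        qx_f = (np.roll(q, -1, axis=1) - q) / h
        qy_b = (q - np.roll(q, 1, axis=0)) / h
        qy_f = (np.roll(q, -1, axis=0) - q) / h
        adv = (np.maximum(vx, 0) * qx_b + np.minimum(vx, 0) * qx_f
               + np.maximum(vy, 0) * qy_b + np.minimum(vy, 0) * qy_f)
        q = q - dtau * adv
    return q
```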

  10. Application of physiologically based pharmacokinetic modeling in setting acute exposure guideline levels for methylene chloride.

    NARCIS (Netherlands)

    Bos, Peter Martinus Jozef; Zeilmaker, Marco Jacob; Eijkeren, Jan Cornelis Henri van

    2006-01-01

    Acute exposure guideline levels (AEGLs) are derived to protect the human population from adverse health effects in case of single exposure due to an accidental release of chemicals into the atmosphere. AEGLs are set at three different levels of increasing toxicity for exposure durations ranging from

  11. A highly efficient 3D level-set grain growth algorithm tailored for ccNUMA architecture

    Science.gov (United States)

    Mießen, C.; Velinov, N.; Gottstein, G.; Barrales-Mora, L. A.

    2017-12-01

    A highly efficient simulation model for 2D and 3D grain growth was developed based on the level-set method. The model introduces modern computational concepts to achieve excellent performance on parallel computer architectures. Strong scalability was measured on cache-coherent non-uniform memory access (ccNUMA) architectures. To achieve this, the proposed approach applies local level-set functions at the grain level. Ideal and non-ideal grain growth was simulated in 3D with the objective of studying the evolution of statistically representative volume elements in polycrystals. In addition, microstructure evolution in an anisotropic magnetic material affected by an external magnetic field was simulated.

  12. Level-set simulations of buoyancy-driven motion of single and multiple bubbles

    International Nuclear Information System (INIS)

    Balcázar, Néstor; Lehmkuhl, Oriol; Jofre, Lluís; Oliva, Assensi

    2015-01-01

    Highlights: • A conservative level-set method is validated and verified. • An extensive study of buoyancy-driven motion of single bubbles is performed. • The interactions of two spherical and ellipsoidal bubbles are studied. • The interaction of multiple bubbles is simulated in a vertical channel. - Abstract: This paper presents a numerical study of buoyancy-driven motion of single and multiple bubbles by means of the conservative level-set method. First, an extensive study of the hydrodynamics of single bubbles rising in a quiescent liquid is performed, including their shape, terminal velocity, drag coefficients and wake patterns. These results are validated against experimental and numerical data well established in the scientific literature. Then, a further study on the interaction of two spherical and ellipsoidal bubbles is performed for different orientation angles. Finally, the interaction of multiple bubbles is explored in a periodic vertical channel. The results show that the conservative level-set approach can be used for accurate modelling of bubble dynamics. Moreover, it is demonstrated that the present method is numerically stable for a wide range of Morton and Reynolds numbers.

  13. A fuzzy set preference model for market share analysis

    Science.gov (United States)

    Turksen, I. B.; Willson, Ian A.

    1992-01-01

    Consumer preference models are widely used in new product design, marketing management, pricing, and market segmentation. The success of new products depends on accurate market share prediction and design decisions based on consumer preferences. The vague linguistic nature of consumer preferences and product attributes, combined with the substantial differences between individuals, creates a formidable challenge to marketing models. The most widely used methodology is conjoint analysis. Conjoint models, as currently implemented, represent linguistic preferences as ratio or interval-scaled numbers, use only numeric product attributes, and require aggregation of individuals for estimation purposes. It is not surprising that these models are costly to implement, are inflexible, and have a predictive validity that is not substantially better than chance. This affects the accuracy of market share estimates. A fuzzy set preference model can easily represent linguistic variables either in consumer preferences or product attributes with minimal measurement requirements (ordinal scales), while still estimating overall preferences suitable for market share prediction. This approach results in flexible individual-level conjoint models which can provide more accurate market share estimates from a smaller number of more meaningful consumer ratings. Fuzzy sets can be incorporated within existing preference model structures, such as a linear combination, using the techniques developed for conjoint analysis and market share estimation. The purpose of this article is to develop and fully test a fuzzy set preference model which can represent linguistic variables in individual-level models implemented in parallel with existing conjoint models. The potential improvements in market share prediction and predictive validity can substantially improve management decisions about what to make (product design), for whom to make it (market segmentation), and how much to make (market share

  14. On multiple level-set regularization methods for inverse problems

    International Nuclear Information System (INIS)

    DeCezaro, A; Leitão, A; Tai, X-C

    2009-01-01

    We analyze a multiple level-set method for solving inverse problems with piecewise constant solutions. This method corresponds to an iterated Tikhonov method for a particular Tikhonov functional G_α based on TV–H¹ penalization. We define generalized minimizers for our Tikhonov functional and establish an existence result. Moreover, we prove convergence and stability results for the proposed Tikhonov method. A multiple level-set algorithm is derived from the first-order optimality conditions for the Tikhonov functional G_α, similarly to the iterated Tikhonov method. The proposed multiple level-set method is tested on an inverse potential problem. Numerical experiments show that the method is able to recover multiple objects as well as multiple contrast levels.
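
    The piecewise constant representation behind such multiple level-set methods can be made concrete in a few lines: with two level-set functions, the four sign combinations encode up to four regions, each with its own contrast level. The Python sketch below illustrates this representation only, not the regularization algorithm itself.

```python
import numpy as np

def piecewise_constant(phi1, phi2, c):
    """Piecewise constant image from two level-set functions.

    The signs of phi1 and phi2 split the domain into up to four
    regions; c is a sequence of four contrast levels, one per sign
    combination. Illustration of the representation only.
    """
    h1 = (phi1 > 0).astype(float)
    h2 = (phi2 > 0).astype(float)
    return (c[0] * h1 * h2 + c[1] * h1 * (1 - h2)
            + c[2] * (1 - h1) * h2 + c[3] * (1 - h1) * (1 - h2))
```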

  15. A level set method for multiple sclerosis lesion segmentation.

    Science.gov (United States)

    Zhao, Yue; Guo, Shuxu; Luo, Min; Shi, Xue; Bilello, Michel; Zhang, Shaoxiang; Li, Chunming

    2018-06-01

    In this paper, we present a level set method for multiple sclerosis (MS) lesion segmentation from FLAIR images in the presence of intensity inhomogeneities. We use a three-phase level set formulation of segmentation and bias field estimation to separate MS lesions, normal tissue regions (including GM and WM), CSF, and the background in FLAIR images. To reduce the computational load, we derive a two-phase formulation from the original multi-phase level set formulation to segment the MS lesions and normal tissue regions. The derived method inherits the desirable ability of the original level set method to precisely locate object boundaries, simultaneously performing segmentation and estimation of the bias field to deal with intensity inhomogeneity. Experimental results demonstrate the advantages of our method over other state-of-the-art methods in terms of segmentation accuracy.

  16. Dynamic-thresholding level set: a novel computer-aided volumetry method for liver tumors in hepatic CT images

    Science.gov (United States)

    Cai, Wenli; Yoshida, Hiroyuki; Harris, Gordon J.

    2007-03-01

    Measurement of the volume of focal liver tumors, called liver tumor volumetry, is indispensable for assessing the growth of tumors and for monitoring the response of tumors to oncology treatments. Traditional edge models, such as the maximum gradient and zero-crossing methods, often fail to detect the accurate boundary of a fuzzy object such as a liver tumor. As a result, computerized volumetry based on these edge models tends to differ from the manual segmentation results of physicians. In this study, we developed a novel computerized volumetry method for fuzzy objects, called dynamic-thresholding level set (DT level set). An optimal threshold value computed from a histogram tends to shift, relative to the theoretical threshold value obtained from a normal distribution model, toward a smaller region in the histogram. We thus designed a mobile shell structure, called a propagating shell, which is a thick region encompassing the level set front. The optimal threshold calculated from the histogram of the shell drives the level set front toward the boundary of a liver tumor. When the volume ratio between the object and the background in the shell approaches one, the optimal threshold value best fits the theoretical threshold value and the shell stops propagating. Application of the DT level set to 26 hepatic CT cases with 63 biopsy-confirmed hepatocellular carcinomas (HCCs) and metastases showed that the computer-measured volumes were highly correlated with those of tumors measured manually by physicians. Our preliminary results showed that the DT level set was effective and accurate in estimating the volumes of liver tumors detected in hepatic CT images.
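
    The driving quantity in this scheme is a threshold recomputed from the histogram of the propagating shell. The Python sketch below is a hedged illustration of that step, using Otsu's between-class-variance criterion on the band |phi| < width as a stand-in for the optimal threshold computation; the band width and bin count are assumptions, and a non-empty shell is assumed.

```python
import numpy as np

def shell_threshold(img, phi, width=3.0, bins=128):
    """Dynamic threshold computed inside a propagating shell.

    The shell is the band |phi| < width around the level set front;
    Otsu's method on the shell histogram supplies the threshold that
    drives the front. Assumes the shell contains voxels.
    """
    vals = img[np.abs(phi) < width].ravel()
    hist, edges = np.histogram(vals, bins=bins)
    centers = 0.5 * (edges[:-1] + edges[1:])
    p = hist.astype(float) / hist.sum()
    # Otsu: maximize between-class variance over candidate thresholds
    w0 = np.cumsum(p)                 # class-0 weight up to each bin
    m = np.cumsum(p * centers)        # class-0 first moment
    w1 = 1.0 - w0
    valid = (w0 > 0) & (w1 > 0)
    between = np.zeros_like(w0)
    between[valid] = (m[-1] * w0[valid] - m[valid]) ** 2 / (w0[valid] * w1[valid])
    return centers[np.argmax(between)]
```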

  17. A parametric level-set method for partially discrete tomography

    NARCIS (Netherlands)

    A. Kadu (Ajinkya); T. van Leeuwen (Tristan); K.J. Batenburg (Joost)

    2017-01-01

    This paper introduces a parametric level-set method for tomographic reconstruction of partially discrete images. Such images consist of a continuously varying background and an anomaly with a constant (known) grey-value. We express the geometry of the anomaly using a level-set function,

  18. On the Relationship between Variational Level Set-Based and SOM-Based Active Contours

    Science.gov (United States)

    Abdelsamea, Mohammed M.; Gnecco, Giorgio; Gaber, Mohamed Medhat; Elyan, Eyad

    2015-01-01

    Most Active Contour Models (ACMs) deal with the image segmentation problem as a functional optimization problem, as they work on dividing an image into several regions by optimizing a suitable functional. Among ACMs, variational level set methods have been used to build an active contour with the aim of modeling arbitrarily complex shapes. Moreover, they can also handle topological changes of the contours. Self-Organizing Maps (SOMs) have attracted the attention of many computer vision scientists, particularly for modeling an active contour based on the idea of utilizing the prototypes (weights) of a SOM to control the evolution of the contour. SOM-based models have been proposed in general with the aim of exploiting the specific ability of SOMs to learn the edge-map information via their topology preservation property, and of overcoming some drawbacks of other ACMs, such as trapping into local minima of the image energy functional to be minimized in such models. In this survey, we illustrate the main concepts of variational level set-based ACMs and SOM-based ACMs, along with their relationship, and review in a comprehensive fashion the development of their state-of-the-art models from a machine learning perspective, with a focus on their strengths and weaknesses. PMID:25960736

  19. Toward accurate tooth segmentation from computed tomography images using a hybrid level set model

    Energy Technology Data Exchange (ETDEWEB)

    Gan, Yangzhou; Zhao, Qunfei [Department of Automation, Shanghai Jiao Tong University, and Key Laboratory of System Control and Information Processing, Ministry of Education of China, Shanghai 200240 (China); Xia, Zeyang, E-mail: zy.xia@siat.ac.cn, E-mail: jing.xiong@siat.ac.cn; Hu, Ying [Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, and The Chinese University of Hong Kong, Shenzhen 518055 (China); Xiong, Jing, E-mail: zy.xia@siat.ac.cn, E-mail: jing.xiong@siat.ac.cn [Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 510855 (China); Zhang, Jianwei [TAMS, Department of Informatics, University of Hamburg, Hamburg 22527 (Germany)

    2015-01-15

    Purpose: A three-dimensional (3D) model of the teeth provides important information for orthodontic diagnosis and treatment planning. Tooth segmentation is an essential step in generating the 3D digital model from computed tomography (CT) images. The aim of this study is to develop an accurate and efficient tooth segmentation method from CT images. Methods: The 3D dental CT volumetric images are segmented slice by slice in a two-dimensional (2D) transverse plane. The 2D segmentation is composed of a manual initialization step and an automatic slice by slice segmentation step. In the manual initialization step, the user manually picks a starting slice and selects a seed point for each tooth in this slice. In the automatic slice segmentation step, a developed hybrid level set model is applied to segment tooth contours from each slice. Tooth contour propagation strategy is employed to initialize the level set function automatically. Cone beam CT (CBCT) images of two subjects were used to tune the parameters. Images of 16 additional subjects were used to validate the performance of the method. Volume overlap metrics and surface distance metrics were adopted to assess the segmentation accuracy quantitatively. The volume overlap metrics were volume difference (VD, mm³) and Dice similarity coefficient (DSC, %). The surface distance metrics were average symmetric surface distance (ASSD, mm), RMS (root mean square) symmetric surface distance (RMSSSD, mm), and maximum symmetric surface distance (MSSD, mm). Computation time was recorded to assess the efficiency. The performance of the proposed method has been compared with two state-of-the-art methods. Results: For the tested CBCT images, the VD, DSC, ASSD, RMSSSD, and MSSD for the incisor were 38.16 ± 12.94 mm³, 88.82 ± 2.14%, 0.29 ± 0.03 mm, 0.32 ± 0.08 mm, and 1.25 ± 0.58 mm, respectively; the VD, DSC, ASSD, RMSSSD, and MSSD for the canine were 49.12 ± 9.33 mm³, 91.57 ± 0.82%, 0.27 ± 0.02 mm, 0

  20. Toward accurate tooth segmentation from computed tomography images using a hybrid level set model

    International Nuclear Information System (INIS)

    Gan, Yangzhou; Zhao, Qunfei; Xia, Zeyang; Hu, Ying; Xiong, Jing; Zhang, Jianwei

    2015-01-01

    Purpose: A three-dimensional (3D) model of the teeth provides important information for orthodontic diagnosis and treatment planning. Tooth segmentation is an essential step in generating the 3D digital model from computed tomography (CT) images. The aim of this study is to develop an accurate and efficient tooth segmentation method from CT images. Methods: The 3D dental CT volumetric images are segmented slice by slice in a two-dimensional (2D) transverse plane. The 2D segmentation is composed of a manual initialization step and an automatic slice by slice segmentation step. In the manual initialization step, the user manually picks a starting slice and selects a seed point for each tooth in this slice. In the automatic slice segmentation step, a developed hybrid level set model is applied to segment tooth contours from each slice. Tooth contour propagation strategy is employed to initialize the level set function automatically. Cone beam CT (CBCT) images of two subjects were used to tune the parameters. Images of 16 additional subjects were used to validate the performance of the method. Volume overlap metrics and surface distance metrics were adopted to assess the segmentation accuracy quantitatively. The volume overlap metrics were volume difference (VD, mm³) and Dice similarity coefficient (DSC, %). The surface distance metrics were average symmetric surface distance (ASSD, mm), RMS (root mean square) symmetric surface distance (RMSSSD, mm), and maximum symmetric surface distance (MSSD, mm). Computation time was recorded to assess the efficiency. The performance of the proposed method has been compared with two state-of-the-art methods. Results: For the tested CBCT images, the VD, DSC, ASSD, RMSSSD, and MSSD for the incisor were 38.16 ± 12.94 mm³, 88.82 ± 2.14%, 0.29 ± 0.03 mm, 0.32 ± 0.08 mm, and 1.25 ± 0.58 mm, respectively; the VD, DSC, ASSD, RMSSSD, and MSSD for the canine were 49.12 ± 9.33 mm³, 91.57 ± 0.82%, 0.27 ± 0.02 mm, 0.28 ± 0.03 mm

  1. Individual- and Setting-Level Correlates of Secondary Traumatic Stress in Rape Crisis Center Staff.

    Science.gov (United States)

    Dworkin, Emily R; Sorell, Nicole R; Allen, Nicole E

    2016-02-01

    Secondary traumatic stress (STS) is an issue of significant concern among providers who work with survivors of sexual assault. Although STS has been studied in relation to individual-level characteristics of a variety of types of trauma responders, less research has focused specifically on rape crisis centers as environments that might convey risk or protection from STS, and no research to our knowledge has modeled setting-level variation in correlates of STS. The current study uses a sample of 164 staff members representing 40 rape crisis centers across a single Midwestern state to investigate the staff member- and agency-level correlates of STS. Results suggest that correlates exist at both levels of analysis. Younger age and greater severity of sexual assault history were statistically significant individual-level predictors of increased STS. Greater frequency of supervision was more strongly related to secondary stress for non-advocates than for advocates. At the setting level, lower levels of supervision and higher client loads agency-wide accounted for unique variance in staff members' STS. These findings suggest that characteristics of both providers and their settings are important to consider when understanding their STS.

  2. Robust boundary detection of left ventricles on ultrasound images using ASM-level set method.

    Science.gov (United States)

    Zhang, Yaonan; Gao, Yuan; Li, Hong; Teng, Yueyang; Kang, Yan

    2015-01-01

    The level set method has been widely used in medical image analysis, but it has difficulties when used to segment left ventricular (LV) boundaries on echocardiography images because the boundaries are not very distinct and the signal-to-noise ratio of echocardiography images is not very high. In this paper, we introduce the Active Shape Model (ASM) into the traditional level set method to enforce shape constraints. This improves the accuracy of boundary detection and makes the evolution more efficient. Experiments conducted on real cardiac ultrasound image sequences show positive and promising results.

  3. A combined single-multiphase flow formulation of the premixing phase using the level set method

    International Nuclear Information System (INIS)

    Leskovar, M.; Marn, J.

    1999-01-01

    The premixing phase of a steam explosion covers the interaction of the melt jet or droplets with the water prior to any steam explosion occurring. To gain better insight into the hydrodynamic processes during the premixing phase, cold isothermal premixing experiments are performed in addition to hot premixing experiments, in which water evaporation is significant. The special feature of isothermal premixing experiments is that three phases are involved: the water, the air and the spheres phase; only the spheres phase mixes with the other two, whereas the water and air phases do not mix and remain separated by a free surface. Our idea was therefore to treat the isothermal premixing process with a combined single-multiphase flow model. In this combined model, the water and air phases are treated as a single phase with discontinuous phase properties at the water-air interface, whereas the spheres are treated as usual with a multiphase flow model, in which the spheres represent the dispersed phase and the common water-air phase represents the continuous phase. The common water-air phase was described with a front capturing method based on the level set formulation. In the level set formulation, the boundary between the two fluids is modeled as the zero set of a smooth signed distance function defined on the entire physical domain. The boundary is then updated by solving a nonlinear equation of the Hamilton-Jacobi type on the whole domain. With this single-multiphase flow model, the QUEOS isothermal premixing experiment Q08 has been simulated. A numerical analysis using different treatments of the water-air interface (level set, high-resolution and upwind) has been performed for the incompressible and compressible cases, and the results were compared to experimental measurements. (author)
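
    The distinctive ingredient here is that water and air form one fluid whose properties jump across the zero set of the level set function. A minimal Python sketch of that ingredient, assuming phi > 0 in water and a standard smoothed Heaviside over a band of half-width eps cells, is given below; it illustrates the idea, not the authors' code.

```python
import numpy as np

def water_air_density(phi, rho_water=1000.0, rho_air=1.2, eps=1.5):
    """Single-field density for the combined water-air phase.

    Water and air are treated as one fluid whose density jumps across
    the free surface, located as the zero set of phi (phi > 0 in
    water). A smoothed Heaviside spreads the jump over a band of
    half-width eps grid cells for numerical stability.
    """
    h = np.where(phi > eps, 1.0,
        np.where(phi < -eps, 0.0,
                 0.5 * (1.0 + phi / eps + np.sin(np.pi * phi / eps) / np.pi)))
    return rho_air + (rho_water - rho_air) * h
```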

  4. Level Set Projection Method for Incompressible Navier-Stokes on Arbitrary Boundaries

    KAUST Repository

    Williams-Rioux, Bertrand

    2012-01-12

    A second-order level set projection method for the incompressible Navier-Stokes equations is proposed to solve flow around arbitrary geometries. We used a rectilinear grid with collocated cell-centered velocity and pressure. An explicit Godunov procedure is used to address the nonlinear advection terms, and an implicit Crank-Nicholson method to update viscous effects. An approximate pressure projection is implemented at the end of the time stepping using multigrid as a conventional fast iterative method. The level set method developed by Osher and Sethian [17] is implemented to address real momentum and pressure boundary conditions by the advection of a distance function, as proposed by Aslam [3]. Numerical results for the Strouhal number and drag coefficients validated the model with good accuracy for flow over a cylinder in the parallel shedding regime (47 < Re < 180). Simulations for an array of cylinders and an oscillating cylinder were performed, with the latter demonstrating our method's ability to handle dynamic boundary conditions.

  5. Identifying Heterogeneities in Subsurface Environment using the Level Set Method

    Energy Technology Data Exchange (ETDEWEB)

    Lei, Hongzhuan [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Lu, Zhiming [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Vesselinov, Velimir Valentinov [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-08-25

    These are slides from a presentation on identifying heterogeneities in subsurface environments using the level set method. The slides start with the motivation, then explain the Level Set Method (LSM) and its algorithms, give some examples, and finally outline future work.

  6. Setting-level influences on implementation of the responsive classroom approach.

    Science.gov (United States)

    Wanless, Shannon B; Patton, Christine L; Rimm-Kaufman, Sara E; Deutsch, Nancy L

    2013-02-01

    We used mixed methods to examine the association between setting-level factors and observed implementation of a social and emotional learning intervention (Responsive Classroom® approach; RC). In study 1 (N = 33 3rd grade teachers after the first year of RC implementation), we identified relevant setting-level factors and uncovered the mechanisms through which they related to implementation. In study 2 (N = 50 4th grade teachers after the second year of RC implementation), we validated our most salient Study 1 finding across multiple informants. Findings suggested that teachers perceived setting-level factors, particularly principal buy-in to the intervention and individualized coaching, as influential to their degree of implementation. Further, we found that intervention coaches' perspectives of principal buy-in were more related to implementation than principals' or teachers' perspectives. Findings extend the application of setting theory to the field of implementation science and suggest that interventionists may want to consider particular accounts of school setting factors before determining the likelihood of schools achieving high levels of implementation.

  7. Level set segmentation of medical images based on local region statistics and maximum a posteriori probability.

    Science.gov (United States)

    Cui, Wenchao; Wang, Yi; Lei, Tao; Fan, Yangyu; Feng, Yan

    2013-01-01

    This paper presents a variational level set method for simultaneous segmentation and bias field estimation of medical images with intensity inhomogeneity. In our model, the statistics of image intensities belonging to each different tissue in local regions are characterized by Gaussian distributions with different means and variances. According to maximum a posteriori probability (MAP) and Bayes' rule, we first derive a local objective function for image intensities in a neighborhood around each pixel. Then this local objective function is integrated with respect to the neighborhood center over the entire image domain to give a global criterion. In the level set framework, this global criterion defines an energy in terms of the level set functions that represent a partition of the image domain and a bias field that accounts for the intensity inhomogeneity of the image. Therefore, image segmentation and bias field estimation are simultaneously achieved via a level set evolution process. Experimental results for synthetic and real images show the desirable performance of our method.
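
    The local region statistics at the heart of this model can be sketched with Gaussian-window averages. The Python fragment below is an illustrative stand-in (not the paper's implementation): given a soft membership map for one tissue class, it returns per-pixel neighborhood means and variances of the kind that enter the local MAP criterion; the window scale is an assumption.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def local_gaussian_stats(img, member, sigma=4.0, eps=1e-8):
    """Local means/variances of one tissue class around each pixel.

    `member` is a soft membership map of one class (e.g. a smoothed
    Heaviside of the level set function). Gaussian-window averages give,
    at each pixel, the mean and variance of intensities of that class
    in its neighborhood. sigma is an illustrative window scale.
    """
    w = gaussian_filter(member, sigma) + eps
    mean = gaussian_filter(member * img, sigma) / w
    var = gaussian_filter(member * img ** 2, sigma) / w - mean ** 2
    return mean, np.maximum(var, eps)
```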

  8. Level Set Structure of an Integrable Cellular Automaton

    Directory of Open Access Journals (Sweden)

    Taichiro Takagi

    2010-03-01

    Based on a group-theoretical setting, a discrete dynamical system is constructed and applied to a combinatorial dynamical system defined on the set of certain Bethe ansatz related objects known as rigged configurations. This system is then used to study a one-dimensional periodic cellular automaton related to the discrete Toda lattice. It is shown for the first time that the level set of this cellular automaton is decomposed into connected components and that every such component is a torus.

  9. Relationships between college settings and student alcohol use before, during and after events: a multi-level study.

    Science.gov (United States)

    Paschall, Mallie J; Saltz, Robert F

    2007-11-01

    We examined how alcohol risk is distributed based on college students' drinking before, during and after they go to certain settings. Students attending 14 California public universities (N=10,152) completed a web-based or mailed survey in the fall 2003 semester, which included questions about how many drinks they consumed before, during and after the last time they went to six settings/events: fraternity or sorority party, residence hall party, campus event (e.g. football game), off-campus party, bar/restaurant and outdoor setting (referent). Multi-level analyses were conducted in hierarchical linear modeling (HLM) to examine relationships between type of setting and level of alcohol use before, during and after going to the setting, and possible age and gender differences in these relationships. Drinking episodes (N=24,207) were level 1 units, students were level 2 units and colleges were level 3 units. The highest drinking levels were observed during all settings/events except campus events, with the highest number of drinks being consumed at off-campus parties, followed by residence hall and fraternity/sorority parties. The number of drinks consumed before a fraternity/sorority party was higher than for other settings/events. Age group and gender differences in relationships between type of setting/event and 'before', 'during' and 'after' drinking levels were also observed. For example, going to a bar/restaurant (relative to an outdoor setting) was positively associated with 'during' drinks among students of legal drinking age, while no relationship was observed for underage students. Findings of this study indicate differences in the extent to which college settings are associated with student drinking levels before, during and after related events, and may have implications for intervention strategies targeting different types of settings.

  10. Benchmarking of protein descriptor sets in proteochemometric modeling (part 2): modeling performance of 13 amino acid descriptor sets

    Science.gov (United States)

    2013-01-01

    Background: While a large body of work exists on comparing and benchmarking descriptors of molecular structures, a similar comparison of protein descriptor sets is lacking. Hence, in the current work a total of 13 amino acid descriptor sets have been benchmarked with respect to their ability to establish bioactivity models. The descriptor sets included in the study are Z-scales (3 variants), VHSE, T-scales, ST-scales, MS-WHIM, FASGAI, BLOSUM, a novel protein descriptor set (termed ProtFP (4 variants)), and in addition we created and benchmarked three pairs of descriptor combinations. Prediction performance was evaluated in seven structure-activity benchmarks which comprise Angiotensin Converting Enzyme (ACE) dipeptidic inhibitor data and three proteochemometric data sets, namely (1) GPCR ligands modeled against a GPCR panel, (2) enzyme inhibitors (NNRTIs) with associated bioactivities against a set of HIV enzyme mutants, and (3) enzyme inhibitors (PIs) with associated bioactivities on a large set of HIV enzyme mutants. Results: The amino acid descriptor sets compared here show similar performance (set differences < 0.3 log units RMSE and < 0.7 difference in MCC). Combining different descriptor sets generally leads to better modeling performance than utilizing individual sets. The best performers were Z-scales (3) combined with ProtFP (Feature), or Z-scales (3) combined with an average Z-scale value for each target, while ProtFP (PCA8), ST-scales, and ProtFP (Feature) rank last. Conclusions: While amino acid descriptor sets capture different aspects of amino acids, their ability to be used for bioactivity modeling is still – on average – surprisingly similar. Still, combining sets describing complementary information consistently leads to small but consistent improvements in modeling performance (average MCC 0.01 better, average RMSE 0.01 log units lower). Finally, performance differences exist between the targets compared, thereby underlining that

  11. Setting development goals using stochastic dynamical system models.

    Science.gov (United States)

    Ranganathan, Shyam; Nicolis, Stamatios C; Bali Swain, Ranjula; Sumpter, David J T

    2017-01-01

    The Millennium Development Goals (MDG) programme was an ambitious attempt to encourage a globalised solution to important but often-overlooked development problems. The programme led to wide-ranging development, but it has also been criticised for unrealistic and arbitrary targets. In this paper, we show how country-specific development targets can be set using stochastic dynamical system models built from historical data. In particular, we show that the MDG target of a two-thirds reduction of child mortality from 1990 levels was infeasible for most countries, especially in sub-Saharan Africa. At the same time, the MDG targets were not ambitious enough for fast-developing countries such as Brazil and China. We suggest that model-based setting of country-specific targets is essential for the success of global development programmes such as the Sustainable Development Goals (SDG). This approach should provide clear, quantifiable targets for policymakers.

  12. Reevaluation of steam generator level trip set point

    Energy Technology Data Exchange (ETDEWEB)

    Shim, Yoon Sub; Soh, Dong Sub; Kim, Sung Oh; Jung, Se Won; Sung, Kang Sik; Lee, Joon [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of)

    1994-06-01

    The reactor trip on low steam generator water level accounts for a substantial portion of reactor scrams in a nuclear plant, and the feasibility of modifying the steam generator water level trip system of YGN 1/2 was evaluated in this study. The study revealed that removal of the reactor trip function from the SG water level trip system is not possible for plant safety reasons, but that relaxation of the trip set point by 9% is feasible. The set point relaxation requires drilling new holes for level measurement in operating steam generators. Characteristics of the negative neutron flux rate trip and reactor trip were also reviewed as additional work. Since the purpose of the trip system modification for reducing the reactor scram frequency is not to satisfy legal requirements but to improve plant performance, and since the modification has both positive and negative aspects, the decision on actual modification needs to be made based on the results of this study and also the policy of the plant owner. 37 figs, 6 tabs, 14 refs. (Author).

  13. Improved inhalation technology for setting safe exposure levels for workplace chemicals

    Science.gov (United States)

    Stuart, Bruce O.

    1993-01-01

    Threshold Limit Values recommended as allowable air concentrations of a chemical in the workplace are often based upon a no-observable-effect-level (NOEL) determined by experimental inhalation studies using rodents. A 'safe level' for human exposure must then be estimated by the use of generalized safety factors in attempts to extrapolate from experimental rodents to man. The recent development of chemical-specific physiologically-based toxicokinetics makes use of measured physiological, biochemical, and metabolic parameters to construct a validated model that is able to 'scale-up' rodent response data to predict the behavior of the chemical in man. This procedure is made possible by recent advances in personal computer software and the emergence of appropriate biological data, and provides an analytical tool for much more reliable risk evaluation and airborne chemical exposure level setting for humans.

  14. A variational approach to multi-phase motion of gas, liquid and solid based on the level set method

    Science.gov (United States)

    Yokoi, Kensuke

    2009-07-01

    We propose a simple and robust numerical algorithm to deal with multi-phase motion of gas, liquid and solid based on the level set method [S. Osher, J.A. Sethian, Fronts propagating with curvature-dependent speed: Algorithms based on Hamilton-Jacobi formulations, J. Comput. Phys. 79 (1988) 12; M. Sussman, P. Smereka, S. Osher, A level set approach for computing solutions to incompressible two-phase flow, J. Comput. Phys. 114 (1994) 146; J.A. Sethian, Level Set Methods and Fast Marching Methods, Cambridge University Press, 1999; S. Osher, R. Fedkiw, Level Set Methods and Dynamic Implicit Surfaces, Applied Mathematical Sciences, vol. 153, Springer, 2003]. In an Eulerian framework, to simulate interaction between a moving solid object and an interfacial flow, we need to define at least two functions (level set functions) to distinguish three materials. In such simulations, the two functions in general overlap and/or disagree due to numerical errors such as numerical diffusion. In this paper, we resolved the problem using the idea of the active contour model [M. Kass, A. Witkin, D. Terzopoulos, Snakes: active contour models, International Journal of Computer Vision 1 (1988) 321; V. Caselles, R. Kimmel, G. Sapiro, Geodesic active contours, International Journal of Computer Vision 22 (1997) 61; G. Sapiro, Geometric Partial Differential Equations and Image Analysis, Cambridge University Press, 2001; R. Kimmel, Numerical Geometry of Images: Theory, Algorithms, and Applications, Springer-Verlag, 2003] introduced in the field of image processing.
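
    The bookkeeping problem that motivates this paper, two level-set functions that should jointly encode three materials but drift apart numerically, can be made concrete as follows. The Python sketch below derives gas/liquid/solid masks from the signs of the two functions, flags overlap cells, and applies one crude repair rule; it illustrates the problem setup, not the active-contour correction proposed in the paper.

```python
import numpy as np

def phase_masks(phi_liquid, phi_solid):
    """Material masks for gas/liquid/solid from two level set functions.

    Convention (an assumption for this sketch): phi_liquid > 0 in the
    liquid and phi_solid > 0 in the solid object; gas is where both are
    negative. Numerical diffusion can make the functions disagree,
    producing overlap cells claimed by both liquid and solid.
    """
    liquid = phi_liquid > 0
    solid = phi_solid > 0
    overlap = liquid & solid            # claimed twice: needs correction
    gas = ~liquid & ~solid
    # One crude repair: give contested cells to the material whose
    # interface is farther away (larger level set value).
    liquid = liquid & (~overlap | (phi_liquid >= phi_solid))
    solid = solid & (~overlap | (phi_solid > phi_liquid))
    return gas, liquid, solid, overlap
```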

  15. Gradient augmented level set method for phase change simulations

    Science.gov (United States)

    Anumolu, Lakshman; Trujillo, Mario F.

    2018-01-01

    A numerical method for the simulation of two-phase flow with phase change based on the Gradient-Augmented-Level-set (GALS) strategy is presented. Sharp capturing of the vaporization process is enabled by: i) identification of the vapor-liquid interface, Γ(t), at the subgrid level, ii) discontinuous treatment of thermal physical properties (except for μ), and iii) enforcement of mass, momentum, and energy jump conditions, where the gradients of the dependent variables are obtained at Γ(t) and are consistent with their analytical expression, i.e. no local averaging is applied. Treatment of the jump in velocity and pressure at Γ(t) is achieved using the Ghost Fluid Method. The solution of the energy equation employs the sub-grid knowledge of Γ(t) to discretize the temperature Laplacian using second-order one-sided differences, i.e. the numerical stencil completely resides within each respective phase. To carefully evaluate the benefits or disadvantages of the GALS approach, the standard level set method is implemented and compared against the GALS predictions. The results show the expected trend that interface identification and transport are predicted noticeably better with GALS over the standard level set. This benefit carries over to the prediction of the Laplacian and temperature gradients in the neighborhood of the interface, which are directly linked to the calculation of the vaporization rate. However, when combining the calculation of interface transport and reinitialization with two-phase momentum and energy, the benefits of GALS are to some extent neutralized, and the causes for this behavior are identified and analyzed. Overall, the additional computational costs associated with GALS are almost the same as those using the standard level set technique.

  16. Time domain series system definition and gear set reliability modeling

    International Nuclear Information System (INIS)

    Xie, Liyang; Wu, Ningxiang; Qian, Wenxue

    2016-01-01

    Time-dependent multi-configuration is a typical feature of mechanical systems such as gear trains and chain drives. As a series system, a gear train is distinct from a traditional series system, such as a chain, in its load transmission path, system-component relationship, system functioning manner, as well as its time-dependent system configuration. The present paper first defines the time-domain series system, for which the traditional series system reliability model is not adequate. Then, a system-specific reliability modeling technique is proposed for gear sets, including component (tooth) and subsystem (tooth-pair) load history description, material prior/posterior strength expression, time-dependent and system-specific load-strength interference analysis, as well as treatment of statistically dependent failure events. Consequently, several system reliability models are developed for gear sets with different tooth numbers in the scenario of tooth root material ultimate tensile strength failure. The application of the models is discussed in the last part, and the differences between the system-specific reliability model and the traditional series system reliability model are illustrated by virtue of several numerical examples. - Highlights: • A new type of series system, i.e. the time-domain multi-configuration series system, is defined, which is of great significance to reliability modeling. • A multi-level statistical analysis based reliability modeling method is presented for gear transmission systems. • Several system-specific reliability models are established for gear set reliability estimation. • The differences between the traditional series system reliability model and the new model are illustrated.

  17. Automatic segmentation of Leishmania parasite in microscopic images using a modified CV level set method

    Science.gov (United States)

    Farahi, Maria; Rabbani, Hossein; Talebi, Ardeshir; Sarrafzadeh, Omid; Ensafi, Shahab

    2015-12-01

    Visceral Leishmaniasis is a parasitic disease that affects the liver, spleen and bone marrow. According to a World Health Organization report, definitive diagnosis is possible only by direct observation of the Leishman body in microscopic images taken from bone marrow samples. We utilize morphological operations and the Chan-Vese (CV) level set method to segment Leishman bodies in digital color microscopic images captured from bone marrow samples. A linear contrast stretching method is used for image enhancement, and the morphological method is applied to determine the parasite regions and remove unwanted objects. Modified global and local CV level set methods are proposed for segmentation, and a shape-based stopping factor is used to speed up the algorithm. Manual segmentation is considered as ground truth to evaluate the proposed method. The method was tested on 28 samples and achieved a mean segmentation error of 10.90% for the global model and 9.76% for the local model.

  18. Quasi-min-max Fuzzy MPC of UTSG Water Level Based on Off-Line Invariant Set

    Science.gov (United States)

    Liu, Xiangjie; Jiang, Di; Lee, Kwang Y.

    2015-10-01

    In a nuclear power plant, the water level of the U-tube steam generator (UTSG) must be maintained within a safe range. Traditional control methods encounter difficulties due to the complexity, strong nonlinearity and “swell and shrink” effects, especially at low power levels. A properly designed robust model predictive control can solve this problem well. In this paper, a quasi-min-max fuzzy model predictive controller is developed for controlling the constrained UTSG system. Since the online computational burden could be quite large for real-time control, a bank of ellipsoidal invariant sets together with the corresponding feedback control laws is obtained off-line by solving linear matrix inequalities (LMIs). Based on the UTSG states, the online optimization is simplified to a constrained optimization problem with a bisection search for the corresponding ellipsoidal invariant set. Simulation results are given to show the effectiveness of the proposed controller.

  19. A simple mass-conserved level set method for simulation of multiphase flows

    Science.gov (United States)

    Yuan, H.-Z.; Shu, C.; Wang, Y.; Shu, S.

    2018-04-01

    In this paper, a modified level set method is proposed for simulation of multiphase flows with large density ratio and high Reynolds number. The present method simply introduces a source or sink term into the level set equation to compensate for mass loss or offset mass increase. The source or sink term is derived analytically by applying the mass conservation principle to the level set equation and the continuity equation of the flow field. Since only a source term is introduced, the application of the present method is as simple as the original level set method, but it can guarantee overall mass conservation. To validate the present method, the vortex flow problem is first considered. The simulation results are compared with those from the original level set method, which demonstrates that the modified level set method is capable of accurately capturing the interface while conserving mass. Then, the proposed method is further validated by simulating the Laplace law, the merging of two bubbles, a bubble rising with high density ratio, and the Rayleigh-Taylor instability at high Reynolds number. Numerical results show that mass is well conserved by the present method.
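
    The paper derives its source term analytically from the mass conservation principle; as a hedged stand-in that conveys the same goal, the Python sketch below restores the enclosed area after each advection step by shifting phi with a constant found by bisection. This is an illustrative simplification, not the authors' correction.

```python
import numpy as np

def conserve_mass(phi, target_area, h=1.0, tol=1e-6):
    """Restore the area (2D mass) enclosed by the zero level set.

    Shifts phi by a constant c, found by bisection, so that the region
    {phi + c > 0} matches target_area on a grid with spacing h. A crude
    global correction that illustrates the goal of mass conservation
    with minimal machinery.
    """
    def area(c):
        return np.count_nonzero(phi + c > 0) * h * h

    lo, hi = -np.max(np.abs(phi)), np.max(np.abs(phi))
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if area(mid) < target_area:
            lo = mid   # need more enclosed area: raise phi further
        else:
            hi = mid
    return phi + 0.5 * (lo + hi)
```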

  20. Evaluation and comparison of alternative fleet-level selective maintenance models

    International Nuclear Information System (INIS)

    Schneider, Kellie; Richard Cassady, C.

    2015-01-01

    Fleet-level selective maintenance refers to the process of identifying the subset of maintenance actions to perform on a fleet of repairable systems when the maintenance resources allocated to the fleet are insufficient for performing all desirable maintenance actions. The original fleet-level selective maintenance model is designed to maximize the probability that all missions in a future set are completed successfully. We extend this model in several ways. First, we consider a cost-based optimization model and show that a special case of this model maximizes the expected value of the number of successful missions in the future set. We also consider the situation in which one or more of the future missions may be canceled. These models and the original fleet-level selective maintenance optimization models are nonlinear. Therefore, we also consider an alternative model in which the objective function can be linearized. We show that the alternative model is a good approximation to the other models. - Highlights: • Investigate nonlinear fleet-level selective maintenance optimization models. • A cost based model is used to maximize the expected number of successful missions. • Another model is allowed to cancel missions if reliability is sufficiently low. • An alternative model has an objective function that can be linearized. • We show that the alternative model is a good approximation to the other models

  1. Level-Set Methodology on Adaptive Octree Grids

    Science.gov (United States)

    Gibou, Frederic; Guittet, Arthur; Mirzadeh, Mohammad; Theillard, Maxime

    2017-11-01

    Numerical simulations of interfacial problems in fluids require a methodology capable of tracking surfaces that can undergo changes in topology and of imposing jump boundary conditions in a sharp manner. In this talk, we will discuss recent advances in the level-set framework, in particular one that is based on adaptive grids.

  2. Fast Streaming 3D Level set Segmentation on the GPU for Smooth Multi-phase Segmentation

    DEFF Research Database (Denmark)

    Sharma, Ojaswa; Zhang, Qin; Anton, François

    2011-01-01

    Level set method based segmentation provides an efficient tool for topological and geometrical shape handling, but it is slow due to high computational burden. In this work, we provide a framework for streaming computations on large volumetric images on the GPU. A streaming computational model...

  3. An accurate conservative level set/ghost fluid method for simulating turbulent atomization

    International Nuclear Information System (INIS)

    Desjardins, Olivier; Moureau, Vincent; Pitsch, Heinz

    2008-01-01

    This paper presents a novel methodology for simulating incompressible two-phase flows by combining an improved version of the conservative level set technique introduced in [E. Olsson, G. Kreiss, A conservative level set method for two phase flow, J. Comput. Phys. 210 (2005) 225-246] with a ghost fluid approach. By employing a hyperbolic tangent level set function that is transported and re-initialized using fully conservative numerical schemes, mass conservation issues that are known to affect level set methods are greatly reduced. In order to improve the accuracy of the conservative level set method, high order numerical schemes are used. The overall robustness of the numerical approach is increased by computing the interface normals from a signed distance function reconstructed from the hyperbolic tangent level set by a fast marching method. The convergence of the curvature calculation is ensured by using a least squares reconstruction. The ghost fluid technique provides a way of handling the interfacial forces and large density jumps associated with two-phase flows with good accuracy, while avoiding artificial spreading of the interface. Since the proposed approach relies on partial differential equations, its implementation is straightforward in all coordinate systems, and it benefits from high parallel efficiency. The robustness and efficiency of the approach is further improved by using implicit schemes for the interface transport and re-initialization equations, as well as for the momentum solver. The performance of the method is assessed through both classical level set transport tests and simple two-phase flow examples including topology changes. It is then applied to simulate turbulent atomization of a liquid Diesel jet at Re=3000. The conservation errors associated with the accurate conservative level set technique are shown to remain small even for this complex case
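
    The two ingredients named above, a hyperbolic tangent level set and conservative re-initialization, can be sketched briefly. The Python fragment below constructs the tanh profile from a signed distance field and performs one explicit re-initialization step with compression balanced against diffusion; the diffusion term is taken isotropic for brevity and the discretization is deliberately simple, so this is an illustration, not the paper's high-order scheme.

```python
import numpy as np

def tanh_level_set(dist, eps=1.0):
    """Map a signed distance field to the conservative level set form.

    The profile psi = 0.5 * (tanh(d / (2*eps)) + 1) varies smoothly from
    0 to 1 across an interface of thickness ~eps; its 0.5 isocontour
    marks the interface. Transporting psi with conservative schemes is
    what keeps mass loss small.
    """
    return 0.5 * (np.tanh(dist / (2.0 * eps)) + 1.0)

def reinit_step(psi, nx, ny, eps=1.0, dtau=0.1, h=1.0):
    """One explicit step of conservative re-initialization.

    Relaxes d(psi)/dtau = div(eps * grad(psi) - psi * (1 - psi) * n)
    with fixed normals (nx, ny): compression psi*(1-psi)*n balances
    diffusion so the profile returns toward the tanh shape. Central
    differences; diffusion taken isotropic for brevity.
    """
    def div(fx, fy):
        return np.gradient(fx, h, axis=1) + np.gradient(fy, h, axis=0)

    gy, gx = np.gradient(psi, h)
    flux_x = eps * gx - psi * (1.0 - psi) * nx
    flux_y = eps * gy - psi * (1.0 - psi) * ny
    return psi + dtau * div(flux_x, flux_y)
```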

  4. Joint level-set and spatio-temporal motion detection for cell segmentation.

    Science.gov (United States)

    Boukari, Fatima; Makrogiannis, Sokratis

    2016-08-10

    Cell segmentation is a critical step for quantification and monitoring of cell cycle progression, cell migration, and growth control to investigate cellular immune response, embryonic development, tumorigenesis, and drug effects on live cells in time-lapse microscopy images. In this study, we propose a joint spatio-temporal diffusion and region-based level-set optimization approach for moving cell segmentation. Moving regions are initially detected in each set of three consecutive sequence images by numerically solving a system of coupled spatio-temporal partial differential equations. In order to standardize intensities of each frame, we apply a histogram transformation approach to match the pixel intensities of each processed frame with an intensity distribution model learned from all frames of the sequence during the training stage. After the spatio-temporal diffusion stage is completed, we compute the edge map by nonparametric density estimation using Parzen kernels. This process is followed by watershed-based segmentation and moving cell detection. We use this result as an initial level-set function to evolve the cell boundaries, refine the delineation, and optimize the final segmentation result. We applied this method to several datasets of fluorescence microscopy images with varying levels of difficulty with respect to cell density, resolution, contrast, and signal-to-noise ratio. We compared the results with those produced by Chan and Vese segmentation, a temporally linked level-set technique, and nonlinear diffusion-based segmentation. We validated all segmentation techniques against reference masks provided by the international Cell Tracking Challenge consortium. The proposed approach delineated cells with an average Dice similarity coefficient of 89 % over a variety of simulated and real fluorescent image sequences. It yielded average improvements of 11 % in segmentation accuracy compared to both strictly spatial and temporally linked Chan

  5. Annotation-based feature extraction from sets of SBML models.

    Science.gov (United States)

    Alm, Rebekka; Waltemath, Dagmar; Wolfien, Markus; Wolkenhauer, Olaf; Henkel, Ron

    2015-01-01

    Model repositories such as BioModels Database provide computational models of biological systems for the scientific community. These models contain rich semantic annotations that link model entities to concepts in well-established bio-ontologies such as Gene Ontology. Consequently, thematically similar models are likely to share similar annotations. Based on this assumption, we argue that semantic annotations are a suitable tool to characterize sets of models. These characteristics improve model classification, allow the identification of additional features for model retrieval tasks, and enable the comparison of sets of models. In this paper we discuss four methods for annotation-based feature extraction from model sets. We tested all methods on sets of models in SBML format which were composed from BioModels Database. To characterize each of these sets, we analyzed and extracted concepts from three frequently used ontologies, namely Gene Ontology, ChEBI and SBO. We find that three out of the four methods are suitable to determine characteristic features for arbitrary sets of models: the selected features vary depending on the underlying model set, and they are also specific to the chosen model set. We show that the identified features map to concepts that are higher up in the hierarchy of the ontologies than the concepts used for model annotations. Our analysis also reveals that the information content of concepts in ontologies and their usage for model annotation do not correlate. Annotation-based feature extraction enables the comparison of model sets, as opposed to existing methods for model-to-keyword comparison, or model-to-model comparison.

  6. Bayesian Mixed Hidden Markov Models: A Multi-Level Approach to Modeling Categorical Outcomes with Differential Misclassification

    Science.gov (United States)

    Zhang, Yue; Berhane, Kiros

    2014-01-01

    Questionnaire-based health status outcomes are often prone to misclassification. When studying the effect of risk factors on such outcomes, ignoring any potential misclassification may lead to biased effect estimates. Analytical challenges posed by these misclassified outcomes are further complicated when simultaneously exploring factors for both the misclassification and health processes in a multi-level setting. To address these challenges, we propose a fully Bayesian Mixed Hidden Markov Model (BMHMM) for handling differential misclassification in categorical outcomes in a multi-level setting. The BMHMM generalizes the traditional Hidden Markov Model (HMM) by introducing random effects into three sets of HMM parameters for joint estimation of the prevalence, transition and misclassification probabilities. This formulation not only allows joint estimation of all three sets of parameters, but also accounts for cluster level heterogeneity based on a multi-level model structure. Using this novel approach, both the true health status prevalence and the transition probabilities between the health states during follow-up are modeled as functions of covariates. The observed, possibly misclassified, health states are related to the true, but unobserved, health states and covariates. Results from simulation studies are presented to validate the estimation procedure, to show the computational efficiency due to the Bayesian approach and also to illustrate the gains from the proposed method compared to existing methods that ignore outcome misclassification and cluster level heterogeneity. We apply the proposed method to examine the risk factors for both asthma transition and misclassification in the Southern California Children's Health Study (CHS). PMID:24254432

  7. Set-Theoretic Approach to Maturity Models

    DEFF Research Database (Denmark)

    Lasrado, Lester Allan

    Despite being widely accepted and applied, maturity models in Information Systems (IS) have been criticized for a lack of theoretical grounding, methodological rigor, empirical validation, and ignorance of multiple and non-linear paths to maturity. This PhD thesis focuses on addressing these criticisms by incorporating recent developments in configuration theory, in particular the application of set-theoretic approaches. The aim is to show the potential of employing a set-theoretic approach for maturity model research and to empirically demonstrate equifinal paths to maturity. Specifically, the thesis offers methodological guidelines consisting of detailed procedures to systematically apply set-theoretic approaches to maturity model research and provides demonstrations of their application on three datasets. The thesis is a collection of six research papers written in a sequential manner. The first paper…

  8. Stochastic level-set variational implicit-solvent approach to solute-solvent interfacial fluctuations

    Energy Technology Data Exchange (ETDEWEB)

    Zhou, Shenggao, E-mail: sgzhou@suda.edu.cn, E-mail: bli@math.ucsd.edu [Department of Mathematics and Mathematical Center for Interdiscipline Research, Soochow University, 1 Shizi Street, Jiangsu, Suzhou 215006 (China); Sun, Hui; Cheng, Li-Tien [Department of Mathematics, University of California, San Diego, La Jolla, California 92093-0112 (United States); Dzubiella, Joachim [Soft Matter and Functional Materials, Helmholtz-Zentrum Berlin, 14109 Berlin, Germany and Institut für Physik, Humboldt-Universität zu Berlin, 12489 Berlin (Germany); Li, Bo, E-mail: sgzhou@suda.edu.cn, E-mail: bli@math.ucsd.edu [Department of Mathematics and Quantitative Biology Graduate Program, University of California, San Diego, La Jolla, California 92093-0112 (United States); McCammon, J. Andrew [Department of Chemistry and Biochemistry, Department of Pharmacology, Howard Hughes Medical Institute, University of California, San Diego, La Jolla, California 92093-0365 (United States)

    2016-08-07

    Recent years have seen the initial success of a variational implicit-solvent model (VISM), implemented with a robust level-set method, in capturing efficiently different hydration states and providing quantitatively good estimates of solvation free energies of biomolecules. The level-set minimization of the VISM solvation free-energy functional of all possible solute-solvent interfaces or dielectric boundaries predicts an equilibrium biomolecular conformation that is often close to an initial guess. In this work, we develop a theory in the form of Langevin geometrical flow to incorporate solute-solvent interfacial fluctuations into the VISM. Such fluctuations are crucial to biomolecular conformational changes and binding processes. We also develop a stochastic level-set method to numerically implement such a theory. We describe the interfacial fluctuation through the “normal velocity” that is the solute-solvent interfacial force, derive the corresponding stochastic level-set equation in the sense of Stratonovich so that the surface representation is independent of the choice of implicit function, and develop numerical techniques for solving such an equation and processing the numerical data. We apply our computational method to study the dewetting transition in the system of two hydrophobic plates and in the hydrophobic cavity of a synthetic host molecule, cucurbit[7]uril. Numerical simulations demonstrate that our approach can describe an underlying system jumping out of a local minimum of the free-energy functional and can capture dewetting transitions of hydrophobic systems. In the case of two hydrophobic plates, we find that the wavelength of interfacial fluctuations has a strong influence on the dewetting transition. In addition, we find that the estimated energy barrier of the dewetting transition scales quadratically with the inter-plate distance, agreeing well with existing studies of molecular dynamics simulations. Our work is a first step toward the

  9. A LEVEL SET BASED SHAPE OPTIMIZATION METHOD FOR AN ELLIPTIC OBSTACLE PROBLEM

    KAUST Repository

    Burger, Martin

    2011-04-01

    In this paper, we construct a level set method for an elliptic obstacle problem, which can be reformulated as a shape optimization problem. We provide a detailed shape sensitivity analysis for this reformulation and a stability result for the shape Hessian at the optimal shape. Using the shape sensitivities, we construct a geometric gradient flow, which can be realized in the context of level set methods. We prove the convergence of the gradient flow to an optimal shape and provide a complete analysis of the level set method in terms of viscosity solutions. To our knowledge this is the first complete analysis of a level set method for a nonlocal shape optimization problem. Finally, we discuss the implementation of the methods and illustrate their behavior through several computational experiments. © 2011 World Scientific Publishing Company.

  10. Modelling fatigue and the use of fatigue models in work settings.

    Science.gov (United States)

    Dawson, Drew; Ian Noy, Y; Härmä, Mikko; Akerstedt, Torbjorn; Belenky, Gregory

    2011-03-01

    In recent years, theoretical models of the sleep and circadian system developed in laboratory settings have been adapted to predict fatigue and, by inference, performance. This is typically done using the timing of prior sleep and waking or working hours as the primary input and the time course of the predicted variables as the primary output. The aim of these models is to provide employers, unions and regulators with quantitative information on the likely average level of fatigue, or risk, associated with a given pattern of work and sleep with the goal of better managing the risk of fatigue-related errors and accidents/incidents. The first part of this review summarises the variables known to influence workplace fatigue and draws attention to the considerable variability attributable to individual and task variables not included in current models. The second part reviews the current fatigue models described in the scientific and technical literature and classifies them according to whether they predict fatigue directly by using the timing of prior sleep and wake (one-step models) or indirectly by using work schedules to infer an average sleep-wake pattern that is then used to predict fatigue (two-step models). The third part of the review looks at the current use of fatigue models in field settings by organizations and regulators. Given their limitations it is suggested that the current generation of models may be appropriate for use as one element in a fatigue risk management system. The final section of the review looks at the future of these models and recommends a standardised approach for their use as an element of the 'defenses-in-depth' approach to fatigue risk management. Copyright © 2010 Elsevier Ltd. All rights reserved.

  11. The level density parameters for fermi gas model

    International Nuclear Information System (INIS)

    Zuang Youxiang; Wang Cuilan; Zhou Chunmei; Su Zongdi

    1986-01-01

    Nuclear level densities are a crucial ingredient in statistical models, for instance in the calculation of widths, cross sections, and emitted particle spectra for the various reaction channels. In this work, 667 sets of newer and more reliable experimental data are adopted, which include the average level spacing D, the radiative capture width Γγ0 at the neutron binding energy, and the cumulative level number N0 at low excitation energy, published from 1973 to 1983. Based on the parameters given by Gilbert-Cameron and Cook, the physical quantities mentioned above are calculated. The calculated results deviate noticeably from the experimental values. In order to improve the fit, the parameters in the G-C formula are adjusted and a new set of level density parameters is obtained. The parameters in this work are more suitable for fitting the new measurements

  12. Settings in Social Networks : a Measurement Model

    NARCIS (Netherlands)

    Schweinberger, Michael; Snijders, Tom A.B.

    2003-01-01

    A class of statistical models is proposed that aims to recover latent settings structures in social networks. Settings may be regarded as clusters of vertices. The measurement model is based on two assumptions. (1) The observed network is generated by hierarchically nested latent transitive

  13. Diverse Data Sets Can Yield Reliable Information through Mechanistic Modeling: Salicylic Acid Clearance.

    Science.gov (United States)

    Raymond, G M; Bassingthwaighte, J B

    This is a practical example of a powerful research strategy: putting together data from studies covering a diversity of conditions can yield a scientifically sound grasp of the phenomenon when the individual observations failed to provide definitive understanding. The rationale is that defining a realistic, quantitative, explanatory hypothesis for the whole set of studies brings about a "consilience" of the often competing hypotheses considered for individual data sets. An internally consistent conjecture linking multiple data sets simultaneously provides stronger evidence on the characteristics of a system than does analysis of individual data sets limited to narrow ranges of conditions. Our example examines three very different data sets on the clearance of salicylic acid from humans: a high-concentration set from aspirin overdoses, a medium-concentration set from a research study on the influences of the route of administration and of sex on the clearance kinetics, and a set on low-dose aspirin for cardiovascular health. Three models were tested: (1) a first-order reaction, (2) a Michaelis-Menten (M-M) approach, and (3) an enzyme kinetic model with forward and backward reactions. The reaction rates found from model 1 were distinctly different for the three data sets, having no commonality. The M-M model 2 fitted each of the three data sets but gave a reliable estimate of the Michaelis constant only for the medium-level data (Km = 24 ± 5.4 mg/L); analyzing the three data sets together with model 2 gave Km = 18 ± 2.6 mg/L. (Estimating parameters using larger numbers of data points in an optimization increases the degrees of freedom, constraining the range of the estimates.) Using the enzyme kinetic model (3) increased the number of free parameters but nevertheless improved the goodness of fit to the combined data sets, giving tighter constraints and a lower estimated Km = 14.6 ± 2.9 mg/L, demonstrating that fitting diverse data sets with a single model
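
    As an aside, a Michaelis-Menten fit of the kind described above can be reproduced in outline with a standard nonlinear least-squares routine. The sketch below uses invented concentration/elimination-rate pairs purely for illustration; only the model form v = Vmax·C/(Km + C) is taken from the abstract:

        import numpy as np
        from scipy.optimize import curve_fit

        def mm_rate(c, vmax, km):
            # Michaelis-Menten elimination rate as a function of concentration
            return vmax * c / (km + c)

        # hypothetical pooled observations spanning low, medium and high doses
        conc = np.array([2.0, 5.0, 10.0, 20.0, 40.0, 80.0, 160.0, 320.0])  # mg/L
        rate = np.array([1.8, 3.9, 6.5, 9.8, 12.9, 15.1, 16.4, 17.0])      # mg/L/h

        (vmax, km), cov = curve_fit(mm_rate, conc, rate, p0=[20.0, 20.0])
        se = np.sqrt(np.diag(cov))
        print(f"Vmax = {vmax:.1f} +/- {se[0]:.1f} mg/L/h, Km = {km:.1f} +/- {se[1]:.1f} mg/L")

    Pooling all three data sets into one such fit is what tightens the confidence interval on Km, as the abstract reports.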

  15. A local level set method based on a finite element method for unstructured meshes

    International Nuclear Information System (INIS)

    Ngo, Long Cu; Choi, Hyoung Gwon

    2016-01-01

    A local level set method for unstructured meshes has been implemented by using a finite element method. A least-squares weighted residual method was employed for implicit discretization to solve the level set advection equation. In addition, a direct re-initialization method, which is directly applicable to the local level set method for unstructured meshes, was adopted to re-correct the level set function so that it becomes a signed distance function after advection. The proposed algorithm was constructed such that the advection and direct reinitialization steps were conducted only for nodes inside the narrow band around the interface. Therefore, in the advection step, the Gauss–Seidel method was used to update the level set function using a node-by-node solution method. Some benchmark problems were solved by using the present local level set method. Numerical results have shown that the proposed algorithm is accurate and efficient in terms of computational time.
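
    For intuition, direct re-initialization can be illustrated on a structured grid (the paper works on unstructured meshes with FEM, so this NumPy sketch is only an analogue, not the authors' algorithm):

        import numpy as np
        from scipy import ndimage

        def reinitialize(phi, dx=1.0):
            """Rebuild a signed distance function from the zero level set of phi."""
            inside = phi < 0
            d_out = ndimage.distance_transform_edt(~inside) * dx  # distances for outside cells
            d_in = ndimage.distance_transform_edt(inside) * dx    # distances for inside cells
            return d_out - d_in  # negative inside, positive outside

        # example: a badly scaled level-set function for a circle of radius 20
        y, x = np.mgrid[0:100, 0:100]
        phi = (x - 50.0)**2 + (y - 50.0)**2 - 20.0**2  # not a distance function
        phi_sdf = reinitialize(phi)
        print(phi_sdf[50, 50], phi_sdf[50, 70])  # about -20 at the centre, near 0 at the interface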

  17. Generalized algebra-valued models of set theory

    NARCIS (Netherlands)

    Löwe, B.; Tarafder, S.

    2015-01-01

    We generalize the construction of lattice-valued models of set theory due to Takeuti, Titani, Kozawa and Ozawa to a wider class of algebras and show that this yields a model of a paraconsistent logic that validates all axioms of the negation-free fragment of Zermelo-Fraenkel set theory.

  18. Robust space-time extraction of ventricular surface evolution using multiphase level sets

    Science.gov (United States)

    Drapaca, Corina S.; Cardenas, Valerie; Studholme, Colin

    2004-05-01

    This paper focuses on the problem of accurately extracting the CSF-tissue boundary, particularly around the ventricular surface, from serial structural MRI of the brain acquired in imaging studies of aging and dementia. This is a challenging problem because of the common occurrence of peri-ventricular lesions which locally alter the appearance of white matter. We examine a level set approach which evolves a four-dimensional description of the ventricular surface over time. This has the advantage of allowing constraints on the contour in the temporal dimension, improving the consistency of the extracted object over time. We follow the approach proposed by Chan and Vese, which is based on the Mumford and Shah model and implemented using the Osher and Sethian level set method. We have extended this to the four-dimensional case to propagate a 4D contour toward the tissue boundaries through the evolution of a 5D implicit function. For convergence we use region-based information provided by the image rather than the gradient of the image. This is adapted to allow for intensity contrast changes between time frames in the MRI sequence. Results on time sequences of 3D brain MR images are presented and discussed.
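
    For reference, the two-phase Chan and Vese energy that this work extends to four dimensions can be written, in our notation, as

        E(c_1, c_2, \phi) = \mu \int_\Omega \delta(\phi)\,|\nabla\phi|\,dx
                          + \lambda_1 \int_\Omega |I - c_1|^2\, H(\phi)\,dx
                          + \lambda_2 \int_\Omega |I - c_2|^2\,(1 - H(\phi))\,dx,

    where H is the Heaviside function, \delta its derivative, and c_1, c_2 the mean intensities inside and outside the contour; in the extension above, the domain includes the temporal dimension and \phi becomes a 5D implicit function.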

  19. Multi-level decision making models, methods and applications

    CERN Document Server

    Zhang, Guangquan; Gao, Ya

    2015-01-01

    This monograph presents new developments in multi-level decision-making theory, techniques and methods, covering both modeling and solution issues. In particular, it shows how a decision support system can help managers reach a solution to a multi-level decision problem in practice. The monograph combines decision theories, methods, algorithms and applications effectively. It discusses in detail the models and solution algorithms of each issue of bi-level and tri-level decision-making, such as multi-leaders, multi-followers, multi-objectives, rule-set-based, and fuzzy parameters. Potential readers include organizational managers and practicing professionals, who can use the methods and software provided to solve their real decision problems; PhD students and researchers in the areas of bi-level and multi-level decision-making and decision support systems; and students at advanced undergraduate or master's level in information systems, business administration, or the application of computer science.

  20. An efficient, scalable, and adaptable framework for solving generic systems of level-set PDEs

    Directory of Open Access Journals (Sweden)

    Kishore R. Mosaliganti

    2013-12-01

    In the last decade, level-set methods have been actively developed for applications in image registration, segmentation, tracking, and reconstruction. However, the development of a wide variety of level-set PDEs and their numerical discretization schemes, coupled with hybrid combinations of PDE terms, stopping criteria, and reinitialization strategies, has created a software logistics problem. In the absence of an integrative design, current toolkits support only specific types of level-set implementations, which restricts future algorithm development since extensions require significant code duplication and effort. In the new NIH/NLM Insight Toolkit (ITK v4) architecture, we implemented a level-set software design that is flexible to different numerical (continuous, discrete, and sparse) and grid representations (point, mesh, and image-based). Given that a generic PDE is a summation of different terms, we used a set of linked containers to which level-set terms can be added or deleted at any point in the evolution process. This container-based approach allows the user to explore and customize terms in the level-set equation at compile-time in a flexible manner. The framework is optimized so that repeated computation of common intensity functions (e.g., gradients and Hessians) across multiple terms is eliminated. The framework further enables the evolution of multiple level-sets for multi-object segmentation and processing of large datasets. To do so, we restrict level-set domains to subsets of the image domain and use multithreading strategies to process groups of subdomains or level-set functions. Users can also select from a variety of reinitialization policies and stopping criteria. Finally, we developed a visualization framework that shows the evolution of a level-set in real-time to help guide algorithm development and parameter optimization. We demonstrate the power of our new framework using confocal microscopy images of cells in a
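
    The container-of-terms idea can be sketched in a few lines. The following toy Python analogue (not the actual ITK v4 API) shows how keeping PDE terms in a container decouples them from the evolution loop:

        from typing import Callable, List
        import numpy as np

        Term = Callable[[np.ndarray], np.ndarray]

        class LevelSetEvolution:
            """d(phi)/dt is the sum of whatever terms are currently registered."""
            def __init__(self, dt=0.2):
                self.terms: List[Term] = []
                self.dt = dt

            def add_term(self, term: Term) -> None:
                self.terms.append(term)

            def step(self, phi: np.ndarray) -> np.ndarray:
                update = sum(term(phi) for term in self.terms)
                return phi + self.dt * update

        def smoothing_term(phi):
            # Laplacian as a crude stand-in for a curvature/regularization term
            return sum(np.gradient(np.gradient(phi, axis=i), axis=i)
                       for i in range(phi.ndim))

        evolver = LevelSetEvolution()
        evolver.add_term(smoothing_term)  # data/image terms would be added the same way
        phi = np.random.rand(32, 32) - 0.5
        for _ in range(10):
            phi = evolver.step(phi)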

  1. Unified model of nuclear mass and level density formulas

    International Nuclear Information System (INIS)

    Nakamura, Hisashi

    2001-01-01

    The objective of the present work is to obtain a unified description of nuclear shell, pairing and deformation effects for both ground state masses and level densities, and to find a new set of parameter systematics for both the mass and the level density formulas on the basis of a model for new single-particle state densities. In this model, an analytical expression is adopted for the anisotropic harmonic oscillator spectra, but the shell-pairing correlations are introduced in a new way. (author)

  2. County-level poverty is equally associated with unmet health care needs in rural and urban settings.

    Science.gov (United States)

    Peterson, Lars E; Litaker, David G

    2010-01-01

    Regional poverty is associated with reduced access to health care. Whether this relationship is equally strong in both rural and urban settings, or is affected by the contextual and individual-level characteristics that distinguish these areas, is unclear. We compared the association between regional poverty and self-reported unmet need, a marker of health care access, by rural/urban setting, using a multilevel, cross-sectional analysis of a state-representative sample of 39,953 adults stratified by rural/urban status and linked at the county level to data describing contextual characteristics. Weighted random intercept models examined the independent association of regional poverty with unmet needs, controlling for a range of contextual and individual-level characteristics. The unadjusted association between regional poverty levels and unmet needs was similar in both rural (OR = 1.06 [95% CI, 1.04-1.08]) and urban (OR = 1.03 [1.02-1.05]) settings. Adjusting for other contextual characteristics increased the size of the association in both rural (OR = 1.11 [1.04-1.19]) and urban (OR = 1.11 [1.05-1.18]) settings. Further adjustment for individual characteristics had little additional effect in rural (OR = 1.10 [1.00-1.20]) or urban (OR = 1.11 [1.01-1.22]) settings. To better meet the health care needs of all Americans, health care systems in areas with high regional poverty should acknowledge the relationship between poverty and unmet health care needs. Investments, or other interventions, that reduce regional poverty may be useful strategies for improving health through better access to health care. © 2010 National Rural Health Association.

  3. Economic comparison of food, non food crops, set-aside at a regional level with a linear programming model

    International Nuclear Information System (INIS)

    Sourie, J.C.; Hautcolas, J.C.; Blanchet, J.

    1992-01-01

    This paper is concerned with a regional linear programming model. Its purpose is to simulate the European Economic Community supply of non-food crops at the farm gate under different sets of European Common Agriculture Policy (CAP) measures. The methodology is first described, with special emphasis on the aggregation problem. The model allows the simultaneous calculation of the impact of non-food crops on the farmer's income and on the agricultural budget. The model is then applied to an intensive agricultural region (400 000 ha of arable land). In this region, sugar beet and rape appear to be the least costly resources, both for the farmers and for the CAP taxpayers. An improvement in the economic situation of the two previous agents can be obtained only if a tax exemption on ethanol and rape oil and a subsidy per hectare are allowed. This subsidy can be lower than the set-aside premium. (author)
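
    The structure of such a model is easy to illustrate with a toy linear program. All figures below are invented for illustration; only the ingredients (arable land, food and non-food crops, a set-aside premium) mirror the abstract:

        import numpy as np
        from scipy.optimize import linprog

        # gross margins in currency units per hectare: wheat, sugar beet, rapeseed, set-aside
        margins = np.array([600.0, 750.0, 520.0, 300.0])

        c = -margins                              # linprog minimizes, so negate income
        A_ub = [[1.0, 1.0, 1.0, 1.0],             # total arable land
                [0.0, 0.0, 0.0, -1.0]]            # set-aside area >= 15% of the land
        b_ub = [400_000.0, -0.15 * 400_000.0]

        res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 4)
        print(dict(zip(["wheat", "beet", "rape", "set_aside"], res.x.round())))

    Varying the premium or adding tax-exemption terms to the margins then simulates alternative CAP measures, which is the kind of policy experiment the abstract describes.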

  4. The effectiveness of flipped classroom learning model in secondary physics classroom setting

    Science.gov (United States)

    Prasetyo, B. D.; Suprapto, N.; Pudyastomo, R. N.

    2018-03-01

    The research aimed to describe the effectiveness of the flipped classroom learning model in a secondary physics classroom setting during the Fall semester of 2017. The research object was the Secondary 3 Physics group of Singapore School Kelapa Gading. The research began with a pre-test, followed by treatment with the flipped classroom learning model. At the end of the learning process, the pupils were given a post-test and a questionnaire to gauge their response to the flipped classroom learning model. Based on the data analysis, 89% of pupils passed the minimum criteria of standardization. The increase in the students' marks was analysed with the normalized n-gain formula, yielding a normalized n-gain score of 0.4, which falls in the medium category. The questionnaire distributed to the students shows that 93% of students became more motivated to study physics and 89% of students were very happy to carry out hands-on activities based on the flipped classroom learning model. These three aspects support the conclusion that applying the flipped classroom learning model in a secondary physics classroom setting is effective.
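
    The normalized n-gain used above is conventionally computed from the average pre- and post-test scores (Hake's formula, in our notation):

        \langle g \rangle = \frac{\%\,\text{post} - \%\,\text{pre}}{100 - \%\,\text{pre}}

    Under the usual thresholds (low < 0.3, medium 0.3-0.7, high >= 0.7), the reported score of 0.4 indeed falls in the medium band.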

  5. GeneTopics - interpretation of gene sets via literature-driven topic models

    Science.gov (United States)

    2013-01-01

    Background: Annotation of a set of genes is often accomplished through comparison to a library of labelled gene sets such as biological processes or canonical pathways. However, this approach might fail if the employed libraries are not up to date with the latest research, do not capture relevant biological themes, or are curated at a different level of granularity than is required to appropriately analyze the input gene set. At the same time, the vast biomedical literature offers an unstructured repository of the latest research findings that can be tapped to provide thematic sub-groupings for any input gene set. Methods: Our proposed method relies on a gene-specific text corpus and extracts commonalities between documents in an unsupervised manner using a topic model approach. We automatically determine the number of topics summarizing the corpus and calculate a gene relevancy score for each topic, allowing us to eliminate non-specific topics. As a result we obtain a set of literature topics in which each topic is associated with a subset of the input genes, providing directly interpretable keywords and corresponding documents for literature research. Results: We validate our method based on labelled gene sets from the KEGG metabolic pathway collection and the genetic association database (GAD) and show that the approach is able to detect topics consistent with the labelled annotation. Furthermore, we discuss the results on three different types of experimentally derived gene sets: (1) differentially expressed genes from a cardiac hypertrophy experiment in mice, (2) altered transcript abundance in human pancreatic beta cells, and (3) genes implicated by GWA studies to be associated with metabolite levels in a healthy population. In all three cases, we are able to replicate findings from the original papers in a quick and semi-automated manner. Conclusions: Our approach provides a novel way of automatically generating meaningful annotations for gene sets that are directly

  6. Fate modelling of chemical compounds with incomplete data sets

    DEFF Research Database (Denmark)

    Birkved, Morten; Heijungs, Reinout

    2011-01-01

    Impact assessment of chemical compounds in Life Cycle Impact Assessment (LCIA) and Environmental Risk Assessment (ERA) requires a vast amount of data on the properties of the chemical compounds being assessed. These data are used in multi-media fate and exposure models to calculate risk levels … and other indicators. ERA typically addresses one specific chemical, but in an LCIA, the number of chemicals encountered may be quite high, up to hundreds or thousands. This study explores the development of meta-models, which are supposed to reflect the “true” multi-media fate and exposure model … in an approximate way. The idea is that not all data needed in a multi-media fate and exposure model are completely independent and equally important, but that there are physical-chemical and biological relationships between sets of chemical properties. A statistical model is constructed to underpin this assumption …

  7. Multi person detection and tracking based on hierarchical level-set method

    Science.gov (United States)

    Khraief, Chadia; Benzarti, Faouzi; Amiri, Hamid

    2018-04-01

    In this paper, we propose an efficient unsupervised method for multi-person tracking based on a hierarchical level-set approach. The proposed method uses both edge and region information in order to effectively detect objects. The persons are tracked on each frame of the sequence by minimizing an energy functional that combines color, texture and shape information. These features are encoded in a covariance matrix as a region descriptor. The present method is fully automated, without the need to manually specify the initial contour of the level set. It is based on combined person detection and background subtraction methods. The edge-based term is employed to maintain a stable evolution, guide the segmentation towards apparent boundaries, and inhibit region fusion. The computational cost of the level set is reduced by using a narrow band technique. Experiments performed on challenging video sequences show the effectiveness of the proposed method.

  8. An investigation of children's levels of inquiry in an informal science setting

    Science.gov (United States)

    Clark-Thomas, Beth Anne

    Elementary school students' understanding of both science content and processes is enhanced by the higher-level thinking associated with inquiry-based science investigations. Informal science setting personnel, elementary school teachers, and curriculum specialists charged with designing inquiry-based investigations would be well served by an understanding of the varying influence of certain factors upon students' willingness and ability to delve into such higher-level inquiries. This study examined young children's use of inquiry-based materials and the factors which may influence the level of inquiry they engaged in during informal science activities. An informal science setting was selected as the context for the examination of student inquiry behaviors because of the rich inquiry-based environment present at the site and the benefits previously noted in the research regarding the impact of informal science settings upon the construction of knowledge in science. The study revealed several patterns of behavior among children when they are engaged in inquiry-based activities at informal science exhibits. These repeated behaviors varied in the children's apparent purposeful use of the materials at the exhibits. These levels of inquiry behavior were taxonomically defined as high/medium/low within this study utilizing a researcher-developed tool. Furthermore, in this study adult interventions, questions, or prompting were found to impact the level of inquiry engaged in by the children. This study revealed that higher levels of inquiry were preceded by task-directed and physical feature prompts. Moreover, the levels of inquiry behaviors were halted, even lowered, when preceded by a prompt that focused on a science content or concept question. Results of this study have implications for the enhancement of inquiry-based science activities in elementary schools as well as in informal science settings. These findings have significance for all science educators

  9. A Set Theoretical Approach to Maturity Models

    DEFF Research Database (Denmark)

    Lasrado, Lester; Vatrapu, Ravi; Andersen, Kim Normann

    2016-01-01

    Maturity Model research in IS has been criticized for the lack of theoretical grounding, methodological rigor, empirical validations, and ignorance of multiple and non-linear paths to maturity. To address these criticisms, this paper proposes a novel set-theoretical approach to maturity models … characterized by equifinality, multiple conjunctural causation, and case diversity. We prescribe methodological guidelines consisting of a six-step procedure to systematically apply set-theoretic methods to conceptualize, develop, and empirically derive maturity models, and provide a demonstration …

  10. Setting ozone critical levels for protecting horticultural Mediterranean crops: Case study of tomato

    International Nuclear Information System (INIS)

    González-Fernández, I.; Calvo, E.; Gerosa, G.; Bermejo, V.; Marzuoli, R.; Calatayud, V.; Alonso, R.

    2014-01-01

    Seven experiments carried out in Italy and Spain have been used to parameterise a stomatal conductance model and to establish exposure- and dose-response relationships for the yield and quality of tomato, with the main goal of setting O3 critical levels (CLe). CLe, with confidence intervals given in brackets, were set at an accumulated hourly O3 exposure over 40 nl l⁻¹ of AOT40 = 8.4 (1.2, 15.6) ppm h and a phytotoxic ozone dose above a threshold of 6 nmol m⁻² s⁻¹ of POD6 = 2.7 (0.8, 4.6) mmol m⁻² for yield, and at AOT40 = 18.7 (8.5, 28.8) ppm h and POD6 = 4.1 (2.0, 6.2) mmol m⁻² for quality, with both indices performing equally well. CLe confidence intervals provide information on the quality of the dataset and should be included in future calculations of O3 CLe to improve current methodologies. These CLe, derived for sensitive tomato cultivars, should not be applied for quantifying O3-induced losses, at the risk of making important overestimations of the economic losses associated with O3 pollution. Highlights: • Seven independent experiments from Italy and Spain were analysed. • O3 critical levels are proposed for the protection of summer horticultural crops. • Exposure- and flux-based O3 indices performed equally well. • Confidence intervals of the new O3 critical levels are calculated. • A new method to estimate the degree of risk of O3 damage is proposed.
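
    Both indices can be computed directly from hourly data. The sketch below uses synthetic values; the real protocols have additional conventions (daylight-hour selection, stomatal flux modelling) that are omitted here:

        import numpy as np

        def aot40(hourly_o3_ppb, daylight):
            """Accumulated hourly ozone exposure over 40 ppb, in ppm h (daylight hours only)."""
            excess = np.clip(hourly_o3_ppb - 40.0, 0.0, None) * daylight
            return excess.sum() / 1000.0  # ppb h -> ppm h

        def pod_y(stomatal_flux, y=6.0, dt_h=1.0):
            """Phytotoxic ozone dose above a threshold y (nmol m-2 s-1), in mmol m-2."""
            excess = np.clip(stomatal_flux - y, 0.0, None)  # nmol m-2 s-1
            return excess.sum() * dt_h * 3600.0 / 1e6       # -> mmol m-2

        # toy day: sinusoidal ozone peaking at 80 ppb, daylight from 08:00 to 19:00
        hours = np.arange(24)
        o3 = 40.0 + 40.0 * np.clip(np.sin((hours - 6) * np.pi / 12), 0.0, None)
        print(aot40(o3, (hours >= 8) & (hours <= 19)))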

  11. Metabolic level recognition of progesterone in dairy Holstein cows using probabilistic models

    Directory of Open Access Journals (Sweden)

    Ludmila N. Turino

    2014-05-01

    Administration of exogenous progesterone is widely used in hormonal protocols for estrous (re)synchronization of dairy cattle without regard to pharmacological issues for dose calculation. This happens because it is difficult to estimate the metabolic level of progesterone for each individual cow before administration. In the present contribution, progesterone pharmacokinetics have been determined in lactating Holstein cows with different milk production yields. A Bayesian approach has been implemented to build two probabilistic progesterone pharmacokinetic models for high- and low-yield dairy cows. Such models are based on a one-compartment Hill structure. Posterior probabilistic models have been structurally set up and parametric probability density functions have been empirically estimated. Moreover, a global sensitivity analysis has been done to determine the sensitivity profile of each model. Finally, the posterior probabilistic models adequately recognized the progesterone metabolic level of cows in a validation set when Kullback-Leibler based indices were used. These results suggest that milk yield may be a good index for estimating the pharmacokinetic level of progesterone.

  12. Towards Precise Metadata-set for Discovering 3D Geospatial Models in Geo-portals

    Science.gov (United States)

    Zamyadi, A.; Pouliot, J.; Bédard, Y.

    2013-09-01

    Accessing 3D geospatial models, ideally at no cost and for unrestricted use, is certainly an important issue as they become popular among participatory communities, consultants, and officials. Various geo-portals, mainly established for 2D resources, have tried to provide access to existing 3D resources such as digital elevation models, LIDAR, or classic topographic data. Describing the content of data, metadata is a key component of data discovery in geo-portals. An inventory of seven online geo-portals and commercial catalogues shows that the metadata referring to 3D information differ greatly from one geo-portal to another, and even for similar 3D resources within the same geo-portal. The inventory considered 971 data resources related to elevation. 51% of them were from three geo-portals running at the Canadian federal and municipal levels, whose metadata resources did not consider 3D models under any definition. The remaining 49%, which do refer to 3D models, used different definitions of terms and metadata, resulting in confusion and misinterpretation. The overall assessment of these geo-portals clearly shows that the provided metadata do not integrate specific and common information about 3D geospatial models. Accordingly, the main objective of this research is to improve 3D geospatial model discovery in geo-portals by adding a specific metadata-set. Based on the knowledge and current practices in 3D modeling and in 3D data acquisition and management, a set of metadata is proposed to increase its suitability for 3D geospatial models. This metadata-set enables the definition of genuine classes, fields, and code-lists for a 3D metadata profile. The main structure of the proposal contains 21 metadata classes. These classes are grouped into three packages: General and Complementary, on contextual and structural information, and Availability, on the transition from storage to delivery format. The proposed metadata set is compared with Canadian Geospatial

  13. Setting limits on supersymmetry using simplified models

    CERN Document Server

    Gutschow, C.

    2012-01-01

    Experimental limits on supersymmetry and similar theories are difficult to set because of the enormous available parameter space and difficult to generalize because of the complexity of single points. Therefore, more phenomenological, simplified models are becoming popular for setting experimental limits, as they have clearer physical implications. The use of these simplified model limits to set a real limit on a concrete theory has not, however, been demonstrated. This paper recasts simplified model limits into limits on a specific and complete supersymmetry model, minimal supergravity. Limits obtained under various physical assumptions are comparable to those produced by directed searches. A prescription is provided for calculating conservative and aggressive limits on additional theories. Using acceptance and efficiency tables along with the expected and observed numbers of events in various signal regions, LHC experimental results can be re-cast in this manner into almost any theoretical framework, includ...

  14. Fuzzy GML Modeling Based on Vague Soft Sets

    Directory of Open Access Journals (Sweden)

    Bo Wei

    2017-01-01

    The Open Geospatial Consortium (OGC) Geography Markup Language (GML) explicitly represents geographical spatial knowledge in text mode. All kinds of fuzzy problems will inevitably be encountered in spatial knowledge expression, and for expressions in text mode this fuzziness is even broader, so describing and representing fuzziness in GML seems necessary. Three kinds of fuzziness can be found in GML: element fuzziness, chain fuzziness, and attribute fuzziness. Both element fuzziness and chain fuzziness reflect fuzziness between GML elements, so the representation of chain fuzziness can be replaced by the representation of element fuzziness in GML. On the basis of vague soft set theory, two kinds of modeling, vague soft set GML Document Type Definition (DTD) modeling and vague soft set GML schema modeling, are proposed for fuzzy modeling in the GML DTD and GML schema, respectively. Five elements or pairs, associated with vague soft sets, are introduced. The DTDs and the schemas of the five elements are then correspondingly designed and presented according to their different chains and different fuzzy data types. While the introduction of the five elements or pairs is the basis of vague soft set GML modeling, the corresponding DTD and schema modifications are key to its implementation. The establishment of vague soft set GML enables GML to represent fuzziness and solves the problem of the lack of fuzzy information expression in GML.

  15. The null hypothesis of GSEA, and a novel statistical model for competitive gene set analysis

    DEFF Research Database (Denmark)

    Debrabant, Birgit

    2017-01-01

    MOTIVATION: Competitive gene set analysis intends to assess whether a specific set of genes is more associated with a trait than the remaining genes. However, the statistical models assumed to date to underlie these methods do not enable a clear-cut formulation of the competitive null hypothesis. This is a major handicap to the interpretation of results obtained from a gene set analysis. RESULTS: This work presents a hierarchical statistical model based on the notion of dependence measures, which overcomes this problem. The two levels of the model naturally reflect the modular structure of many gene set analysis methods. We apply the model to show that the popular GSEA method, which recently has been claimed to test the self-contained null hypothesis, actually tests the competitive null if the weight parameter is zero. However, for this result to hold strictly, the choice of the dependence measures

  16. Hybrid Compensatory-Noncompensatory Choice Sets in Semicompensatory Models

    DEFF Research Database (Denmark)

    Kaplan, Sigal; Bekhor, Shlomo; Shiftan, Yoram

    2013-01-01

    Semicompensatory models represent a choice process consisting of an elimination-based choice set formation on satisfaction of criterion thresholds and a utility-based choice. Current semicompensatory models assume a purely noncompensatory choice set formation and therefore do not support multinom...

  17. On piecewise constant level-set (PCLS) methods for the identification of discontinuous parameters in ill-posed problems

    International Nuclear Information System (INIS)

    De Cezaro, A; Leitão, A; Tai, X-C

    2013-01-01

    We investigate level-set-type methods for solving ill-posed problems with discontinuous (piecewise constant) coefficients. The goal is to identify the level sets as well as the level values of an unknown parameter function on a model described by a nonlinear ill-posed operator equation. The PCLS approach is used here to parametrize the solution of a given operator equation in terms of an L² level-set function, i.e. the level-set function itself is assumed to be a piecewise constant function. Two distinct methods are proposed for computing stable solutions of the resulting ill-posed problem: the first is based on Tikhonov regularization, while the second is based on the augmented Lagrangian approach with total variation penalization. Classical regularization results (Engl H W et al 1996 Mathematics and its Applications (Dordrecht: Kluwer)) are derived for the Tikhonov method. On the other hand, for the augmented Lagrangian method, we succeed in proving the existence of (generalized) Lagrangian multipliers in the sense of Rockafellar and Wets (1998 Grundlehren der Mathematischen Wissenschaften (Berlin: Springer)). Numerical experiments are performed for a 2D inverse potential problem (Hettlich F and Rundell W 1996 Inverse Problems 12 251–66), demonstrating the capabilities of both methods for solving this ill-posed problem in a stable way (complicated inclusions are recovered without any a priori geometrical information on the unknown parameter). (paper)
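
    Schematically (our notation, not the authors' exact formulation), the Tikhonov variant minimizes a functional of the piecewise constant level-set function of the form

        J_\alpha(\phi) = \| F(P(\phi)) - y^{\delta} \|^2 + \alpha \, \| \phi - \phi_0 \|_{L^2}^2,

    where P maps the level-set function to the piecewise constant parameter, F is the nonlinear forward operator, y^\delta the noisy data, and \alpha > 0 the regularization parameter.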

  18. Ultrasonic scalpel causes greater depth of soft tissue necrosis compared to monopolar electrocautery at standard power level settings in a pig model.

    Science.gov (United States)

    Homayounfar, Kia; Meis, Johanna; Jung, Klaus; Klosterhalfen, Bernd; Sprenger, Thilo; Conradi, Lena-Christin; Langer, Claus; Becker, Heinz

    2012-02-23

    Ultrasonic scalpel (UC) and monopolar electrocautery (ME) are common tools for soft tissue dissection. However, morphological data on the related tissue alteration are discordant. We developed an automatic device for standardized sample excision and compared the quality and depth of morphological changes caused by UC and ME in a pig model. 100 tissue samples (5 × 3 cm) of the abdominal wall were excised in 16 pigs. Excisions were randomly performed manually or by using the self-constructed automatic device at standard power levels (60 W cutting in ME, level 5 in UC) for abdominal surgery. The quality of tissue alteration and the depth of coagulation necrosis were examined histopathologically. Device (UC vs. ME) and mode (manual vs. automatic) effects were studied by two-way analysis of variance at a significance level of 5%. At the investigated power level settings, UC and ME induced qualitatively similar coagulation necroses. Mean depth of necrosis was 450.4 ± 457.8 μm for manual UC and 553.5 ± 326.9 μm for automatic UC, versus 149.0 ± 74.3 μm for manual ME and 257.6 ± 119.4 μm for automatic ME. Coagulation necrosis was significantly deeper for UC than for ME at the investigated power levels.

  19. Surface-to-surface registration using level sets

    DEFF Research Database (Denmark)

    Hansen, Mads Fogtmann; Erbou, Søren G.; Vester-Christensen, Martin

    2007-01-01

    This paper presents a general approach for surface-to-surface registration (S2SR) with the Euclidean metric using signed distance maps. In addition, the method is symmetric, such that the registration of a shape A to a shape B is identical to the registration of the shape B to the shape A. The S2SR problem can be approximated by the image registration (IR) problem of the signed distance maps (SDMs) of the surfaces confined to some narrow band. By shrinking the narrow bands around the zero level sets, the solution to the IR problem converges towards the S2SR problem. It is our hypothesis that this approach is more robust and less prone to falling into local minima than ordinary surface-to-surface registration. The IR problem is solved using the inverse compositional algorithm. In this paper, a set of 40 pelvic bones of Duroc pigs are registered to each other w.r.t. the Euclidean transformation...
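
    Two ingredients of this approach, the signed distance map and the narrow band, are simple to construct for binary shapes. The sketch below (ours, with a toy shape; not the authors' code) shows both:

        import numpy as np
        from scipy import ndimage

        def signed_distance_map(mask, spacing=1.0):
            """SDM of a binary shape: negative inside, positive outside."""
            outside = ndimage.distance_transform_edt(~mask) * spacing
            inside = ndimage.distance_transform_edt(mask) * spacing
            return outside - inside

        # toy "bone": the IR cost would compare two such SDMs, restricted to
        # the narrow band around the zero level set
        shape = np.zeros((64, 64), dtype=bool)
        shape[20:44, 20:44] = True
        sdm = signed_distance_map(shape)
        band = np.abs(sdm) <= 3.0  # narrow band of width 3 around the surface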

  20. Algebraic Specifications, Higher-order Types and Set-theoretic Models

    DEFF Research Database (Denmark)

    Kirchner, Hélène; Mosses, Peter David

    2001-01-01

    In most algebraic specification frameworks, the type system is restricted to sorts, subsorts, and first-order function types. This is in marked contrast to the so-called model-oriented frameworks, which provide higher-order types, interpreted set-theoretically as Cartesian products, function spaces, and power-sets. This paper presents a simple framework for algebraic specifications with higher-order types and set-theoretic models. It may be regarded as the basis for a Horn-clause approximation to the Z framework, and has the advantage of being amenable to prototyping and automated reasoning. Standard set-theoretic models are considered, and conditions are given for the existence of initial reducts of such models. Algebraic specifications for various set-theoretic concepts are considered.

  1. A bottleneck model of set-specific capture.

    Directory of Open Access Journals (Sweden)

    Katherine Sledge Moore

    Set-specific contingent attentional capture is a particularly strong form of capture that occurs when multiple attentional sets guide visual search (e.g., "search for green letters" and "search for orange letters"). In this type of capture, a potential target that matches one attentional set (e.g., a green stimulus) impairs the ability to identify a temporally proximal target that matches another attentional set (e.g., an orange stimulus). In the present study, we investigated whether set-specific capture stems from a bottleneck in working memory or from a depletion of limited resources that are distributed across multiple attentional sets. In each trial, participants searched a rapid serial visual presentation (RSVP) stream for up to three target letters (T1-T3) that could appear in any of three target colors (orange, green, or lavender). The most revealing findings came from trials in which T1 and T2 matched different attentional sets and were both identified. In these trials, T3 accuracy was lower when it did not match T1's set than when it did match, but only when participants failed to identify T2. These findings support a bottleneck model of set-specific capture in which a limited-capacity mechanism in working memory enhances only one attentional set at a time, rather than a resource model in which processing capacity is simultaneously distributed across multiple attentional sets.

  2. Home advantage in high-level volleyball varies according to set number.

    Science.gov (United States)

    Marcelino, Rui; Mesquita, Isabel; Palao Andrés, José Manuel; Sampaio, Jaime

    2009-01-01

    The aim of the present study was to identify the probability of winning each volleyball set according to game location (home, away). Archival data were obtained from 275 sets in the 2005 Men's Senior World League, and 65,949 actions were analysed. Set result (win, loss), game location (home, away), set number (first, second, third, fourth and fifth) and performance indicators (serve, reception, set, attack, dig and block) were the variables considered in this study. First, the performance indicators were used in a logistic model of set result, fitted by binary logistic regression analysis. After finding the adjusted logistic model, the log-odds of winning the set were analysed according to game location and set number. The results showed that winning a set is significantly related to the performance indicators (Chi-square(18) = 660.97) and that home teams have an advantage at the beginning of the game (first set) and in the two last sets of the game (fourth and fifth sets), probably due to familiarity with the facilities and crowd effects. Different game actions explain these advantages: to win the first set, it is more important to take risk, through a better performance in the attack and block, whereas to win the final set it is important to manage the risk through a better performance on the reception. These results suggest intra-game variation in home advantage and can be most useful to better prepare and direct the competition. Key points: • Home teams always have a higher probability of winning the game than away teams. • Home teams have higher performance in reception, set and attack over the total of the sets. • The advantage of home teams is more pronounced at the beginning of the game (first set) and in the two last sets of the game (fourth and fifth sets), suggesting intra-game variation in home advantage. • Analysis by sets showed that home teams have a better performance in the attack and block in the first set and in the reception in the third and fifth sets.
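
    The binary logistic model referred to above relates the log-odds of winning a set to the performance indicators; in our notation, with x_k the indicator variables,

        \log \frac{P(\text{win})}{1 - P(\text{win})} = \beta_0 + \sum_k \beta_k x_k,

    so a positive \beta_k means that better performance on indicator k raises the probability of winning the set, and the home/away and set-number comparisons are read off the fitted log-odds.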

  3. A Memory and Computation Efficient Sparse Level-Set Method

    NARCIS (Netherlands)

    Laan, Wladimir J. van der; Jalba, Andrei C.; Roerdink, Jos B.T.M.

    Since its introduction, the level set method has become the favorite technique for capturing and tracking moving interfaces, and found applications in a wide variety of scientific fields. In this paper we present efficient data structures and algorithms for tracking dynamic interfaces through the

  4. Translation of a High-Level Temporal Model into Lower Level Models: Impact of Modelling at Different Description Levels

    DEFF Research Database (Denmark)

    Kraft, Peter; Sørensen, Jens Otto

    2001-01-01

    given types of properties, and examine how descriptions on higher levels translate into descriptions on lower levels. Our example looks at temporal properties where the information is concerned with the existence in time. In a high level temporal model with information kept in a three-dimensional space...... the existences in time can be mapped precisely and consistently securing a consistent handling of the temporal properties. We translate the high level temporal model into an entity-relationship model, with the information in a two-dimensional graph, and finally we look at the translations into relational...... and other textual models. We also consider the aptness of models that include procedural mechanisms such as active and object databases...

  5. Comparing Fuzzy Sets and Random Sets to Model the Uncertainty of Fuzzy Shorelines

    NARCIS (Netherlands)

    Dewi, Ratna Sari; Bijker, Wietske; Stein, Alfred

    2017-01-01

    This paper addresses uncertainty modelling of shorelines by comparing fuzzy sets and random sets. Both methods quantify extensional uncertainty of shorelines extracted from remote sensing images. Two datasets were tested: pan-sharpened Pleiades with four bands (Pleiades) and pan-sharpened Pleiades

  6. Study on high-level waste geological disposal metadata model

    International Nuclear Information System (INIS)

    Ding Xiaobin; Wang Changhong; Zhu Hehua; Li Xiaojun

    2008-01-01

    This paper explains the concept of metadata and related research in China and abroad, and then describes the motivation for studying a metadata model for the high-level nuclear waste deep geological disposal project. With reference to GML, the authors first set up DML under the framework of digital underground space engineering. Based on DML, a standardized metadata model to be employed in the high-level nuclear waste deep geological disposal project is presented. Then, a metadata model making use of the internet is put forward. With standardized data and CSW services, this model may solve the problems in sharing and exchanging data of different formats. A metadata editor is built to search and maintain metadata based on this model. (authors)

  7. Level sets and extrema of random processes and fields

    CERN Document Server

    Azais, Jean-Marc

    2009-01-01

    A timely and comprehensive treatment of random field theory with applications across diverse areas of study Level Sets and Extrema of Random Processes and Fields discusses how to understand the properties of the level sets of paths as well as how to compute the probability distribution of its extremal values, which are two general classes of problems that arise in the study of random processes and fields and in related applications. This book provides a unified and accessible approach to these two topics and their relationship to classical theory and Gaussian processes and fields, and the most modern research findings are also discussed. The authors begin with an introduction to the basic concepts of stochastic processes, including a modern review of Gaussian fields and their classical inequalities. Subsequent chapters are devoted to Rice formulas, regularity properties, and recent results on the tails of the distribution of the maximum. Finally, applications of random fields to various areas of mathematics a...

  8. 76 FR 9004 - Public Comment on Setting Achievement Levels in Writing

    Science.gov (United States)

    2011-02-16

    ... DEPARTMENT OF EDUCATION Public Comment on Setting Achievement Levels in Writing AGENCY: U.S... Achievement Levels in Writing. SUMMARY: The National Assessment Governing Board (Governing Board) is... for NAEP in writing. This notice provides opportunity for public comment and submitting...

  9. Automatic optimal filament segmentation with sub-pixel accuracy using generalized linear models and B-spline level-sets.

    Science.gov (United States)

    Xiao, Xun; Geyer, Veikko F; Bowne-Anderson, Hugo; Howard, Jonathon; Sbalzarini, Ivo F

    2016-08-01

    Biological filaments, such as actin filaments, microtubules, and cilia, are often imaged using different light-microscopy techniques. Reconstructing the filament curve from the acquired images constitutes the filament segmentation problem. Since filaments have lower dimensionality than the image itself, there is an inherent trade-off between tracing the filament with sub-pixel accuracy and avoiding noise artifacts. Here, we present a globally optimal filament segmentation method based on B-spline vector level-sets and a generalized linear model for the pixel intensity statistics. We show that the resulting optimization problem is convex and can hence be solved with global optimality. We introduce a simple and efficient algorithm to compute such optimal filament segmentations, and provide an open-source implementation as an ImageJ/Fiji plugin. We further derive an information-theoretic lower bound on the filament segmentation error, quantifying how well an algorithm could possibly do given the information in the image. We show that our algorithm asymptotically reaches this bound in the spline coefficients. We validate our method in comprehensive benchmarks, compare with other methods, and show applications from fluorescence, phase-contrast, and dark-field microscopy. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.

  10. Appropriate criteria set for personnel promotion across organizational levels using analytic hierarchy process (AHP

    Directory of Open Access Journals (Sweden)

    Charles Noven Castillo

    2017-01-01

    Full Text Available Currently, there has been limited established specific set of criteria for personnel promotion to each level of the organization. This study is conducted in order to develop a personnel promotion strategy by identifying specific sets of criteria for each level of the organization. The complexity of identifying the criteria set along with the subjectivity of these criteria require the use of multi-criteria decision-making approach particularly the analytic hierarchy process (AHP. Results show different sets of criteria for each management level which are consistent with several frameworks in literature. These criteria sets would help avoid mismatch of employee skills and competencies and their job, and at the same time eliminate the issues in personnel promotion such as favouritism, glass ceiling, and gender and physical attractiveness preference. This work also shows that personality and traits, job satisfaction and experience and skills are more critical rather than social capital across different organizational levels. The contribution of this work is in identifying relevant criteria in developing a personnel promotion strategy across organizational levels.
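
    As a concrete illustration of the method this record describes, the sketch below computes AHP priority weights from a pairwise comparison matrix via the principal eigenvector, together with Saaty's consistency ratio. The criteria names and judgment values are hypothetical, not taken from the study.

```python
import numpy as np

# Hypothetical 3x3 pairwise comparison matrix over promotion criteria
# (personality & traits, job satisfaction, social capital); the values
# are illustrative judgments, not data from the paper.
A = np.array([
    [1.0,  3.0, 5.0],
    [1/3., 1.0, 2.0],
    [1/5., 1/2., 1.0],
])

# The principal right eigenvector gives the priority weights.
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w /= w.sum()

# Saaty's consistency ratio: CI = (lambda_max - n)/(n - 1), CR = CI/RI.
n = A.shape[0]
CI = (eigvals.real[k] - n) / (n - 1)
RI = 0.58  # Saaty's random index for n = 3
print("weights:", w.round(3), "consistency ratio:", round(CI / RI, 3))
```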

  11. Demons versus level-set motion registration for coronary 18F-sodium fluoride PET

    Science.gov (United States)

    Rubeaux, Mathieu; Joshi, Nikhil; Dweck, Marc R.; Fletcher, Alison; Motwani, Manish; Thomson, Louise E.; Germano, Guido; Dey, Damini; Berman, Daniel S.; Newby, David E.; Slomka, Piotr J.

    2016-03-01

    Ruptured coronary atherosclerotic plaques commonly cause acute myocardial infarction. It has been recently shown that active microcalcification in the coronary arteries, one of the features that characterizes vulnerable plaques at risk of rupture, can be imaged using cardiac gated 18F-sodium fluoride (18F-NaF) PET. We have shown in previous work that a motion correction technique applied to cardiac-gated 18F-NaF PET images can enhance image quality and improve uptake estimates. In this study, we further investigated the applicability of different algorithms for registration of the coronary artery PET images. In particular, we aimed to compare demons vs. level-set nonlinear registration techniques applied for the correction of cardiac motion in coronary 18F-NaF PET. To this end, fifteen patients underwent 18F-NaF PET and prospective coronary CT angiography (CCTA). PET data were reconstructed in 10 ECG gated bins; subsequently these gated bins were registered using demons and level-set methods guided by the extracted coronary arteries from CCTA, to eliminate the effect of cardiac motion on PET images. Noise levels, target-to-background ratios (TBR) and global motion were compared to assess image quality. Compared to the reference standard of using only diastolic PET image (25% of the counts from PET acquisition), cardiac motion registration using either level-set or demons techniques almost halved image noise due to the use of counts from the full PET acquisition and increased TBR difference between 18F-NaF positive and negative lesions. The demons method produces smoother deformation fields, exhibiting no singularities (which reflects how physically plausible the registration deformation is), as compared to the level-set method, which presents between 4 and 8% of singularities, depending on the coronary artery considered. In conclusion, the demons method produces smoother motion fields as compared to the level-set method, with a motion that is physiologically
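
    The demons registration step compared above can be sketched with an off-the-shelf toolkit. The snippet below uses SimpleITK's demons filter to estimate a displacement field between two gated bins and warp one onto the other; the file names, iteration count, and smoothing level are assumptions for illustration, not the study's actual pipeline.

```python
import SimpleITK as sitk

# Hypothetical gated-bin volumes; any two ECG gates would do.
fixed = sitk.ReadImage("gate_diastole.nii", sitk.sitkFloat32)
moving = sitk.ReadImage("gate_systole.nii", sitk.sitkFloat32)

# Demons registration: iteratively estimates a dense displacement field,
# smoothing it with a Gaussian at each step (one source of the smooth
# motion fields the abstract reports for this family of methods).
demons = sitk.DemonsRegistrationFilter()
demons.SetNumberOfIterations(100)
demons.SetStandardDeviations(1.5)
displacement = demons.Execute(fixed, moving)

# Warp the moving gate onto the fixed gate before summing counts.
transform = sitk.DisplacementFieldTransform(
    sitk.Cast(displacement, sitk.sitkVectorFloat64))
registered = sitk.Resample(moving, fixed, transform, sitk.sitkLinear, 0.0)
```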

  12. Mental models of audit and feedback in primary care settings.

    Science.gov (United States)

    Hysong, Sylvia J; Smitham, Kristen; SoRelle, Richard; Amspoker, Amber; Hughes, Ashley M; Haidet, Paul

    2018-05-30

    Audit and feedback has been shown to be instrumental in improving quality of care, particularly in outpatient settings. The mental model individuals and organizations hold regarding audit and feedback can moderate its effectiveness, yet this has received limited study in the quality improvement literature. In this study we sought to uncover patterns in mental models of current feedback practices within high- and low-performing healthcare facilities. We purposively sampled 16 geographically dispersed VA hospitals based on high and low performance on a set of chronic and preventive care measures. We interviewed up to 4 personnel from each location (n = 48) to determine the facility's receptivity to audit and feedback practices. Interview transcripts were analyzed via content and framework analysis to identify emergent themes. We found high variability in the mental models of audit and feedback, which we organized into positive and negative themes. We were unable to associate mental models of audit and feedback with clinical performance due to high variance in facility performance over time. Positive mental models exhibit perceived utility of audit and feedback practices in improving performance; whereas, negative mental models did not. Results speak to the variability of mental models of feedback, highlighting how facilities perceive current audit and feedback practices. Findings are consistent with prior research  in that variability in feedback mental models is associated with lower performance.; Future research should seek to empirically link mental models revealed in this paper to high and low levels of clinical performance.

  13. Out-of-Core Computations of High-Resolution Level Sets by Means of Code Transformation

    DEFF Research Database (Denmark)

    Christensen, Brian Bunch; Nielsen, Michael Bang; Museth, Ken

    2012-01-01

    We propose a storage efficient, fast and parallelizable out-of-core framework for streaming computations of high resolution level sets. The fundamental techniques are skewing and tiling transformations of streamed level set computations which allow for the combination of interface propagation, re...... computations are now CPU bound and consequently the overall performance is unaffected by disk latency and bandwidth limitations. We demonstrate this with several benchmark tests that show sustained out-of-core throughputs close to that of in-core level set simulations....

  14. Simulation of neuro-fuzzy model for optimization of combine header setting

    Directory of Open Access Journals (Sweden)

    S Zareei

    2016-09-01

    Full Text Available Introduction A noticeable proportion of wheat losses occurs during the production and consumption steps, and the loss due to harvesting with a combine harvester is regarded as one of the main factors. A grain combine harvester consists of different sets of equipment, and one of the most important parts is the header, which accounts for more than 50% of the entire harvesting losses. Some researchers have presented regression equations to estimate the grain loss of combine harvesters. The results of their studies indicated that grain moisture content, reel index, cutter bar speed, service life of the cutter bar, tine spacing, tine clearance over the cutter bar, and stem length were the major parameters affecting the losses. On the other hand, there are several research studies which have used a variety of artificial intelligence methods on different aspects of combine harvesters. In neuro-fuzzy control systems, membership functions and if-then rules are defined through neural networks. A Sugeno-type fuzzy inference model was applied to generate fuzzy rules from a given input-output data set due to its less time-consuming and mathematically tractable defuzzification operation for sample-data-based fuzzy modeling. In this study, a neuro-fuzzy model was applied to develop forecasting models which can predict the combine header loss for each set of header parameter adjustments related to site-specific information and therefore can minimize the header loss. Materials and Methods The field experiment was conducted during the harvesting season of 2011 at the research station of the Faculty of Agriculture, Shiraz University, Shiraz, Iran. The wheat field (cv. Shiraz) was harvested with a Claas Lexion-510 combine harvester. The factors selected as the main factors influencing the header performance were three levels of reel index (RI) (forward speed of the combine harvester divided by peripheral speed of the reel) (1, 1.2, 1.5), three levels of cutting height (CH) (25, 30, 35 cm), three
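
    A first-order Sugeno (TSK) model of the kind the abstract mentions can be written in a few lines. The sketch below hard-codes two illustrative rules mapping reel index and cutting height to header loss; in the study the membership functions and consequents would be learned from field data, so every number here is a placeholder.

```python
import numpy as np

def gauss(x, c, s):
    """Gaussian membership function."""
    return np.exp(-0.5 * ((x - c) / s) ** 2)

def sugeno_header_loss(reel_index, cutting_height):
    """Minimal first-order Sugeno (TSK) sketch with two hand-picked rules;
    centres, widths, and linear consequents are illustrative only."""
    # Rule firing strengths (product t-norm over the two inputs).
    w1 = gauss(reel_index, 1.0, 0.2) * gauss(cutting_height, 25.0, 5.0)
    w2 = gauss(reel_index, 1.5, 0.2) * gauss(cutting_height, 35.0, 5.0)
    # Linear consequents: loss = a*RI + b*CH + c for each rule.
    y1 = 0.8 * reel_index + 0.02 * cutting_height + 0.5
    y2 = 1.5 * reel_index + 0.05 * cutting_height + 0.2
    # Weighted-average defuzzification.
    return (w1 * y1 + w2 * y2) / (w1 + w2)

print(sugeno_header_loss(1.2, 30.0))
```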

  15. Basic set theory

    CERN Document Server

    Levy, Azriel

    2002-01-01

    An advanced-level treatment of the basics of set theory, this text offers students a firm foundation, stopping just short of the areas employing model-theoretic methods. Geared toward upper-level undergraduate and graduate students, it consists of two parts: the first covers pure set theory, including the basic notions, order and well-foundedness, cardinal numbers, the ordinals, and the axiom of choice and some of its consequences; the second deals with applications and advanced topics such as point set topology, real spaces, Boolean algebras, and infinite combinatorics and large cardinals. An

  16. Stabilized Conservative Level Set Method with Adaptive Wavelet-based Mesh Refinement

    Science.gov (United States)

    Shervani-Tabar, Navid; Vasilyev, Oleg V.

    2016-11-01

    This paper addresses one of the main challenges of the conservative level set method, namely the ill-conditioned behavior of the normal vector away from the interface. An alternative formulation for reconstruction of the interface is proposed. Unlike the commonly used methods which rely on the unit normal vector, the Stabilized Conservative Level Set (SCLS) method uses a modified renormalization vector with diminishing magnitude away from the interface. With the new formulation, in the vicinity of the interface the reinitialization procedure utilizes compressive flux and diffusive terms only in the normal direction to the interface, thus preserving the conservative level set properties, while away from the interfaces the directional diffusion mechanism automatically switches to homogeneous diffusion. The proposed formulation is robust and general. It is especially well suited for use with adaptive mesh refinement (AMR) approaches due to the need for a finer resolution in the vicinity of the interface in comparison with the rest of the domain. All of the results were obtained using the Adaptive Wavelet Collocation Method, a general AMR-type method, which utilizes wavelet decomposition to adapt to steep gradients in the solution while retaining a predetermined order of accuracy.
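
    For reference, a sketch of the standard conservative level set reinitialization (in the form introduced by Olsson and Kreiss) that SCLS modifies; per the abstract, SCLS replaces the unit normal below by a renormalization vector whose magnitude diminishes away from the interface:

```latex
\[
\frac{\partial \psi}{\partial \tau}
  + \nabla \cdot \bigl( \psi (1-\psi)\, \hat{n} \bigr)
  = \nabla \cdot \bigl( \varepsilon \, (\nabla \psi \cdot \hat{n})\, \hat{n} \bigr),
\qquad
\hat{n} = \frac{\nabla \psi}{\lvert \nabla \psi \rvert},
\]
```

    where ψ is the conservative (hyperbolic-tangent profile) level set function and ε controls the interface thickness.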

  17. Some numerical studies of interface advection properties of level set ...

    Indian Academy of Sciences (India)

    explicit computational elements moving through an Eulerian grid. ... location. The interface is implicitly defined (captured) as the location of the discontinuity in the ... This level set function is advected with the background flow field and thus ...

  18. Model based decision support system of operating settings for MMAT nozzles

    Directory of Open Access Journals (Sweden)

    Fritz Bradley Keith

    2016-04-01

    Full Text Available Droplet size, which is affected by nozzle type, nozzle setup and operation, and spray solution, is one of the most critical factors influencing spray performance, environmental pollution, and food safety, and must be considered as part of any application scenario. Characterizing spray nozzles can be a time-consuming and expensive proposition if the entire operational space (all combinations of spray pressure and orifice size, which influence flow rate) is to be evaluated. This research proposes a structured experimental design that allows for the development of computational models for droplet size based on any combination of a nozzle's potential operational settings. The developed droplet size determination model can be used as a Decision Support System (DSS) for precise selection of sprayer working parameters to adapt to local field scenarios. Five nozzle types (designs) were evaluated across their complete range of orifice sizes (flow rates) and spray pressures using a response surface experimental design. Several of the models showed high-level fits of the modeled to the measured data, while several did not, as a result of the lack of a significant effect from either orifice size (flow rate) or spray pressure. The computational models were integrated into a spreadsheet-based user interface for ease of use. The proposed experimental design provides for efficient nozzle evaluations and the development of computational models that allow for the determination of the droplet size spectrum and spraying classification for any combination of a given nozzle's operating settings. The proposed DSS will allow for the ready assessment and modification of a sprayer's performance based on the operational settings, to ensure the application is made following the recommendations on plant protection product (PPP) labels.
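
    The response-surface idea behind such a DSS can be sketched directly: fit a quadratic surface in pressure and orifice size to measured droplet sizes, then query it for any operating setting. All data values below are invented for illustration.

```python
import numpy as np

# Illustrative measurements: spray pressure (bar), orifice size
# (flow-rate class), and measured volume median diameter (um).
pressure = np.array([2.0, 2.0, 3.0, 3.0, 4.0, 4.0, 5.0, 5.0])
orifice  = np.array([1.0, 2.0, 1.0, 2.0, 1.0, 2.0, 1.0, 2.0])
dv50     = np.array([310, 390, 280, 350, 255, 330, 240, 310])

# Full quadratic response surface: 1, p, o, p^2, o^2, p*o.
X = np.column_stack([np.ones_like(pressure), pressure, orifice,
                     pressure**2, orifice**2, pressure * orifice])
coef, *_ = np.linalg.lstsq(X, dv50, rcond=None)

def predict_dv50(p, o):
    """DSS-style lookup: droplet size for any pressure/orifice setting."""
    return coef @ np.array([1.0, p, o, p*p, o*o, p*o])

print(round(predict_dv50(3.5, 1.5), 1))
```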

  19. Analysis and classification of data sets for calibration and validation of agro-ecosystem models

    DEFF Research Database (Denmark)

    Kersebaum, K C; Boote, K J; Jorgenson, J S

    2015-01-01

    Experimental field data are used at different levels of complexity to calibrate, validate and improve agro-ecosystem models to enhance their reliability for regional impact assessment. A methodological framework and software are presented to evaluate and classify data sets into four classes regar...

  20. Reservoir characterisation by a binary level set method and adaptive multiscale estimation

    Energy Technology Data Exchange (ETDEWEB)

    Nielsen, Lars Kristian

    2006-01-15

    The main focus of this work is on estimation of the absolute permeability as a solution of an inverse problem. We have considered both a single-phase and a two-phase flow model. Two novel approaches have been introduced and tested numerically for solving the inverse problems. The first approach is a multiscale zonation technique which is treated in Paper A. The purpose of the work in this paper is to find a coarse-scale solution based on production data from wells. In the suggested approach, the robustness of an already developed method, the adaptive multiscale estimation (AME), has been improved by utilising information from several candidate solutions generated by a stochastic optimizer. The new approach also suggests a way of combining a stochastic and a gradient search method, which in general is a problematic issue. The second approach is a piecewise constant level set approach and is applied in Papers B, C, D and E. Paper B considers the stationary single-phase problem, while Papers C, D and E use a two-phase flow model. In the two-phase flow problem we have utilised information from both production data in wells and spatially distributed data gathered from seismic surveys. Due to the higher content of information provided by the spatially distributed data, we search for solutions on a slightly finer scale than one typically does with only production data included. The applied level set method is suitable for reconstruction of fields with a supposedly known facies-type of solution. That is, the solution should be close to piecewise constant. This information is utilised through a strong restriction of the number of constant levels in the estimate. On the other hand, the flexibility in the geometries of the zones is much larger for this method than in a typical zonation approach, for example the multiscale approach applied in Paper A. In all these papers, the numerical studies are done on synthetic data sets. An advantage of synthetic data studies is that the true

  1. Analyzing ROC curves using the effective set-size model

    Science.gov (United States)

    Samuelson, Frank W.; Abbey, Craig K.; He, Xin

    2018-03-01

    The Effective Set-Size model has been used to describe uncertainty in various signal detection experiments. The model regards images as if they were an effective number (M*) of searchable locations, where the observer treats each location as a location-known-exactly detection task with signals having average detectability d'. The model assumes a rational observer behaves as if he searches an effective number of independent locations and follows signal detection theory at each location. Thus the location-known-exactly detectability (d') and the effective number of independent locations M* fully characterize search performance. In this model the image rating in a single-response task is assumed to be the maximum response that the observer would assign to these many locations. The model has been used by a number of other researchers, and is well corroborated. We examine this model as a way of differentiating imaging tasks that radiologists perform. Tasks involving more searching or location uncertainty may have higher estimated M* values. In this work we applied the Effective Set-Size model to a number of medical imaging data sets. The data sets include radiologists reading screening and diagnostic mammography with and without computer-aided diagnosis (CAD), and breast tomosynthesis. We developed an algorithm to fit the model parameters using two-sample maximum-likelihood ordinal regression, similar to the classic bi-normal model. The resulting model ROC curves are rational and fit the observed data well. We find that the distributions of M* and d' differ significantly among these data sets, and differ between pairs of imaging systems within studies. For example, on average tomosynthesis increased readers' d' values, while CAD reduced the M* parameters. We demonstrate that the model parameters M* and d' are correlated. We conclude that the Effective Set-Size model may be a useful way of differentiating location uncertainty from the diagnostic uncertainty in medical
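
    Under the model as described, a signal-absent image behaves like the maximum of M* independent standard-normal location responses, and a signal-present image like the maximum of M*-1 noise responses and one response shifted by d'. A minimal sketch of the implied ROC curve follows; the parameter values are illustrative, and the fitting procedure in the paper (maximum-likelihood ordinal regression) is not reproduced here.

```python
import numpy as np
from scipy.stats import norm

def ess_roc(d_prime, m_star, n=501):
    """ROC implied by the effective set-size model: the image rating is
    the maximum over M* independent locations, one of which contains a
    signal of detectability d' on signal-present images."""
    c = np.linspace(-6, 6, n)             # decision thresholds
    fpf = 1.0 - norm.cdf(c) ** m_star     # all M* noise responses below c
    tpf = 1.0 - norm.cdf(c) ** (m_star - 1) * norm.cdf(c - d_prime)
    return fpf, tpf

# Larger M* (more searchable locations) pushes the curve toward chance
# for the same d'.
fpf, tpf = ess_roc(d_prime=2.0, m_star=4)
auc = np.trapz(tpf[::-1], fpf[::-1])      # integrate with FPF increasing
print(round(auc, 3))
```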

  2. A Cartesian Adaptive Level Set Method for Two-Phase Flows

    Science.gov (United States)

    Ham, F.; Young, Y.-N.

    2003-01-01

    In the present contribution we develop a level set method based on local anisotropic Cartesian adaptation as described in Ham et al. (2002). Such an approach should allow for the smallest possible Cartesian grid capable of resolving a given flow. The remainder of the paper is organized as follows. In section 2 the level set formulation for free surface calculations is presented and its strengths and weaknesses relative to the other free surface methods reviewed. In section 3 the collocated numerical method is described. In section 4 the method is validated by solving the 2D and 3D drop oscillation problem. In section 5 we present some results from more complex cases, including 3D drop breakup in an impulsively accelerated free stream and the 3D immiscible Rayleigh-Taylor instability. Conclusions are given in section 6.

  3. Setting Healthcare Priorities at the Macro and Meso Levels: A Framework for Evaluation.

    Science.gov (United States)

    Barasa, Edwine W; Molyneux, Sassy; English, Mike; Cleary, Susan

    2015-09-16

    Priority setting in healthcare is a key determinant of health system performance. However, there is no widely accepted priority setting evaluation framework. We reviewed literature with the aim of developing and proposing a framework for the evaluation of macro and meso level healthcare priority setting practices. We systematically searched Econlit, PubMed, CINAHL, and EBSCOhost databases and supplemented this with searches in Google Scholar, relevant websites and reference lists of relevant papers. A total of 31 papers on evaluation of priority setting were identified. These were supplemented by broader theoretical literature related to evaluation of priority setting. A conceptual review of selected papers was undertaken. Based on a synthesis of the selected literature, we propose an evaluative framework that requires that priority setting practices at the macro and meso levels of the health system meet the following conditions: (1) Priority setting decisions should incorporate both efficiency and equity considerations as well as the following outcomes; (a) Stakeholder satisfaction, (b) Stakeholder understanding, (c) Shifted priorities (reallocation of resources), and (d) Implementation of decisions. (2) Priority setting processes should also meet the procedural conditions of (a) Stakeholder engagement, (b) Stakeholder empowerment, (c) Transparency, (d) Use of evidence, (e) Revisions, (f) Enforcement, and (g) Being grounded on community values. Available frameworks for the evaluation of priority setting are mostly grounded on procedural requirements, while few have included outcome requirements. There is, however, increasing recognition of the need to incorporate both consequential and procedural considerations in priority setting practices. In this review, we adapt an integrative approach to develop and propose a framework for the evaluation of priority setting practices at the macro and meso levels that draws from these complementary schools of thought. © 2015

  4. Setting Healthcare Priorities at the Macro and Meso Levels: A Framework for Evaluation

    Science.gov (United States)

    Barasa, Edwine W.; Molyneux, Sassy; English, Mike; Cleary, Susan

    2015-01-01

    Background: Priority setting in healthcare is a key determinant of health system performance. However, there is no widely accepted priority setting evaluation framework. We reviewed literature with the aim of developing and proposing a framework for the evaluation of macro and meso level healthcare priority setting practices. Methods: We systematically searched Econlit, PubMed, CINAHL, and EBSCOhost databases and supplemented this with searches in Google Scholar, relevant websites and reference lists of relevant papers. A total of 31 papers on evaluation of priority setting were identified. These were supplemented by broader theoretical literature related to evaluation of priority setting. A conceptual review of selected papers was undertaken. Results: Based on a synthesis of the selected literature, we propose an evaluative framework that requires that priority setting practices at the macro and meso levels of the health system meet the following conditions: (1) Priority setting decisions should incorporate both efficiency and equity considerations as well as the following outcomes; (a) Stakeholder satisfaction, (b) Stakeholder understanding, (c) Shifted priorities (reallocation of resources), and (d) Implementation of decisions. (2) Priority setting processes should also meet the procedural conditions of (a) Stakeholder engagement, (b) Stakeholder empowerment, (c) Transparency, (d) Use of evidence, (e) Revisions, (f) Enforcement, and (g) Being grounded on community values. Conclusion: Available frameworks for the evaluation of priority setting are mostly grounded on procedural requirements, while few have included outcome requirements. There is, however, increasing recognition of the need to incorporate both consequential and procedural considerations in priority setting practices. In this review, we adapt an integrative approach to develop and propose a framework for the evaluation of priority setting practices at the macro and meso levels that draws from these

  5. Setting Healthcare Priorities at the Macro and Meso Levels: A Framework for Evaluation

    Directory of Open Access Journals (Sweden)

    Edwine W. Barasa

    2015-11-01

    Full Text Available Background Priority setting in healthcare is a key determinant of health system performance. However, there is no widely accepted priority setting evaluation framework. We reviewed literature with the aim of developing and proposing a framework for the evaluation of macro and meso level healthcare priority setting practices. Methods We systematically searched Econlit, PubMed, CINAHL, and EBSCOhost databases and supplemented this with searches in Google Scholar, relevant websites and reference lists of relevant papers. A total of 31 papers on evaluation of priority setting were identified. These were supplemented by broader theoretical literature related to evaluation of priority setting. A conceptual review of selected papers was undertaken. Results Based on a synthesis of the selected literature, we propose an evaluative framework that requires that priority setting practices at the macro and meso levels of the health system meet the following conditions: (1) Priority setting decisions should incorporate both efficiency and equity considerations as well as the following outcomes; (a) Stakeholder satisfaction, (b) Stakeholder understanding, (c) Shifted priorities (reallocation of resources), and (d) Implementation of decisions. (2) Priority setting processes should also meet the procedural conditions of (a) Stakeholder engagement, (b) Stakeholder empowerment, (c) Transparency, (d) Use of evidence, (e) Revisions, (f) Enforcement, and (g) Being grounded on community values. Conclusion Available frameworks for the evaluation of priority setting are mostly grounded on procedural requirements, while few have included outcome requirements. There is, however, increasing recognition of the need to incorporate both consequential and procedural considerations in priority setting practices. In this review, we adapt an integrative approach to develop and propose a framework for the evaluation of priority setting practices at the macro and meso levels that draws from

  6. Review and evaluation of performance measures for survival prediction models in external validation settings

    Directory of Open Access Journals (Sweden)

    M. Shafiqur Rahman

    2017-04-01

    Full Text Available Abstract Background When developing a prediction model for survival data it is essential to validate its performance in external validation settings using appropriate performance measures. Although a number of such measures have been proposed, there is only limited guidance regarding their use in the context of model validation. This paper reviewed and evaluated a wide range of performance measures to provide some guidelines for their use in practice. Methods An extensive simulation study based on two clinical datasets was conducted to investigate the performance of the measures in external validation settings. Measures were selected from categories that assess the overall performance, discrimination and calibration of a survival prediction model. Some of these have been modified to allow their use with validation data, and a case study is provided to describe how these measures can be estimated in practice. The measures were evaluated with respect to their robustness to censoring and ease of interpretation. All measures are implemented, or are straightforward to implement, in statistical software. Results Most of the performance measures were reasonably robust to moderate levels of censoring. One exception was Harrell’s concordance measure which tended to increase as censoring increased. Conclusions We recommend that Uno’s concordance measure is used to quantify concordance when there are moderate levels of censoring. Alternatively, Gönen and Heller’s measure could be considered, especially if censoring is very high, but we suggest that the prediction model is re-calibrated first. We also recommend that Royston’s D is routinely reported to assess discrimination since it has an appealing interpretation. The calibration slope is useful for both internal and external validation settings and recommended to report routinely. Our recommendation would be to use any of the predictive accuracy measures and provide the corresponding predictive
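
    Two of the recommended quantities are easy to estimate with standard tooling. The sketch below computes a Harrell-type concordance index and the calibration slope on simulated external-validation data using the lifelines package; Uno's concordance (the abstract's recommendation under moderate censoring) additionally requires inverse-probability-of-censoring weights, which are omitted here. All data and variable names are placeholders.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter
from lifelines.utils import concordance_index

# Hypothetical external validation data with a pre-computed prognostic
# index `pi` from the development model.
rng = np.random.default_rng(0)
pi = rng.normal(size=200)
time = rng.exponential(scale=np.exp(-pi))   # higher pi -> earlier event
event = rng.uniform(size=200) < 0.7         # roughly 30% censoring
df = pd.DataFrame({"pi": pi, "time": time, "event": event})

# Harrell-type concordance on the validation data (higher pi should
# predict shorter survival, hence the minus sign).
c = concordance_index(df["time"], -df["pi"], df["event"])

# Calibration slope: coefficient of pi when refitted on validation data;
# a slope near 1 indicates a well-calibrated linear predictor.
cph = CoxPHFitter().fit(df, duration_col="time", event_col="event")
print(round(c, 3), round(cph.params_["pi"], 3))
```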

  7. Continuous soil maps - a fuzzy set approach to bridge the gap between aggregation levels of process and distribution models

    NARCIS (Netherlands)

    Gruijter, de J.J.; Walvoort, D.J.J.; Gaans, van P.F.M.

    1997-01-01

    Soil maps as multi-purpose models of spatial soil distribution have a much higher level of aggregation (map units) than the models of soil processes and land-use effects that need input from soil maps. This mismatch between aggregation levels is particularly detrimental in the context of precision

  8. Level Sets and Voronoi based Feature Extraction from any Imagery

    DEFF Research Database (Denmark)

    Sharma, O.; Anton, François; Mioc, Darka

    2012-01-01

    Polygon features are of interest in many GEOProcessing applications like shoreline mapping, boundary delineation, change detection, etc. This paper presents a unique new GPU-based methodology to automate feature extraction combining level sets, or mean shift based segmentation together with Voron...

  9. Evaluating healthcare priority setting at the meso level: A thematic review of empirical literature

    Science.gov (United States)

    Waithaka, Dennis; Tsofa, Benjamin; Barasa, Edwine

    2018-01-01

    Background: Decentralization of health systems has made sub-national/regional healthcare systems the backbone of healthcare delivery. These regions are tasked with the difficult responsibility of determining healthcare priorities and resource allocation amidst scarce resources. We aimed to review empirical literature that evaluated priority setting practice at the meso (sub-national) level of health systems. Methods: We systematically searched PubMed, ScienceDirect and Google scholar databases and supplemented these with manual searching for relevant studies, based on the reference list of selected papers. We only included empirical studies that described and evaluated, or those that only evaluated, priority setting practice at the meso-level. A total of 16 papers were identified from LMICs and HICs. We analyzed data from the selected papers by thematic review. Results: Few studies used systematic priority setting processes, and all but one were from HICs. Both formal and informal criteria are used in priority-setting; however, informal criteria appear to be more pervasive in LMICs compared to HICs. The priority setting process at the meso-level is a top-down approach with minimal involvement of the community. Accountability for reasonableness was the most common evaluative framework, as it was used in 12 of the 16 studies. Efficiency, reallocation of resources and options for service delivery redesign were the most common outcome measures used to evaluate priority setting. Limitations: Our study was limited by the fact that there are very few empirical studies that have evaluated priority setting at the meso-level and there is a likelihood that we did not capture all the studies. Conclusions: Improving priority setting practices at the meso level is crucial to strengthening health systems. This can be achieved through incorporating and adapting systematic priority setting processes and frameworks to the context where used, and making considerations of both process

  10. Modelling occupants’ heating set-point preferences

    DEFF Research Database (Denmark)

    Andersen, Rune Vinther; Olesen, Bjarne W.; Toftum, Jørn

    2011-01-01

    consumption. Simultaneous measurement of the set-point of thermostatic radiator valves (trv), and indoor and outdoor environment characteristics was carried out in 15 dwellings in Denmark in 2008. Linear regression was used to infer a model of occupants’ interactions with trvs. This model could easily...... be implemented in most simulation software packages to increase the validity of the simulation outcomes....
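
    A linear regression of this kind is straightforward to reproduce. The sketch below fits set-points to indoor and outdoor temperature; the choice of predictors and all values are assumptions for illustration, not the paper's fitted model.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Illustrative stand-in for the inferred occupant model: trv set-point
# as a linear function of indoor and outdoor temperature (degC).
X = np.array([[20.5, -2.0], [21.0, 5.0], [22.3, 10.0],
              [19.8, -8.0], [21.7, 3.0], [20.1, 0.0]])
y = np.array([22.0, 21.5, 21.0, 23.0, 21.5, 22.0])

model = LinearRegression().fit(X, y)

# Inside a building-simulation loop, the fitted model would predict the
# set-point an occupant chooses under the current conditions.
print(model.predict([[21.0, -5.0]]).round(2))
```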

  11. Online monitoring of oil film using electrical capacitance tomography and level set method

    International Nuclear Information System (INIS)

    Xue, Q.; Ma, M.; Sun, B. Y.; Cui, Z. Q.; Wang, H. X.

    2015-01-01

    In oil-air lubrication systems, electrical capacitance tomography (ECT) provides a promising way to monitor the oil film in pipelines by reconstructing cross-sectional oil distributions in real time. In the case of a small-diameter pipe and a thin oil film, however, the thickness of the oil film is hard to observe visually since the interface of oil and air is not obvious in the reconstructed images, and the existence of artifacts in the reconstructions seriously degrades the effectiveness of image segmentation techniques such as the level set method. Moreover, the level set method is unsuitable for online monitoring due to its low computation speed. To address these problems, a modified level set method is developed: a distance regularized level set evolution formulation is extended to image two-phase flow online using an ECT system, a narrowband image filter is defined to eliminate the influence of artifacts, and, considering the continuity of the oil distribution variation, the detected oil-air interface of a former image is used as the initial contour for the detection of the subsequent frame; thus, the propagation from the initial contour to the boundary can be greatly accelerated, making real-time tracking possible. To test the feasibility of the proposed method, an oil-air lubrication facility with a 4 mm inner diameter pipe is measured in normal operation using an 8-electrode ECT system. Both simulation and experimental results indicate that the modified level set method is capable of visualizing the oil-air interface accurately online

  12. A level-set method for two-phase flows with soluble surfactant

    Science.gov (United States)

    Xu, Jian-Jun; Shi, Weidong; Lai, Ming-Chih

    2018-01-01

    A level-set method is presented for solving two-phase flows with soluble surfactant. The Navier-Stokes equations are solved along with the bulk surfactant and the interfacial surfactant equations. In particular, the convection-diffusion equation for the bulk surfactant on the irregular moving domain is solved by using a level-set based diffusive-domain method. A conservation law for the total surfactant mass is derived, and a re-scaling procedure for the surfactant concentrations is proposed to compensate for the surfactant mass loss due to numerical diffusion. The whole numerical algorithm is easy for implementation. Several numerical simulations in 2D and 3D show the effects of surfactant solubility on drop dynamics under shear flow.
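
    A generic form of the coupled system the abstract describes, as a sketch (not the paper's exact formulation): level set advection, interfacial and bulk surfactant transport coupled by an adsorption/desorption flux S, and the derived conservation of total surfactant mass:

```latex
\begin{align*}
&\phi_t + \mathbf{u}\cdot\nabla\phi = 0
  && \text{(level set advection)} \\
&\Gamma_t + \nabla_s\cdot(\Gamma\,\mathbf{u})
  = D_s\,\Delta_s \Gamma + S(C,\Gamma)
  && \text{(interfacial surfactant)} \\
&C_t + \mathbf{u}\cdot\nabla C = D_b\,\Delta C
  && \text{(bulk surfactant)} \\
&\frac{d}{dt}\Bigl(\int_{\Sigma}\Gamma\,\mathrm{d}s
  + \int_{\Omega} C\,\mathrm{d}\mathbf{x}\Bigr) = 0
  && \text{(total mass conservation)}
\end{align*}
```

    Here Γ is the interfacial and C the bulk surfactant concentration; the re-scaling procedure mentioned in the abstract compensates for discrete violations of the last identity caused by numerical diffusion.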

  13. Three-Dimensional Simulation of DRIE Process Based on the Narrow Band Level Set and Monte Carlo Method

    Directory of Open Access Journals (Sweden)

    Jia-Cheng Yu

    2018-02-01

    Full Text Available A three-dimensional topography simulation of deep reactive ion etching (DRIE is developed based on the narrow band level set method for surface evolution and Monte Carlo method for flux distribution. The advanced level set method is implemented to simulate the time-related movements of etched surface. In the meanwhile, accelerated by ray tracing algorithm, the Monte Carlo method incorporates all dominant physical and chemical mechanisms such as ion-enhanced etching, ballistic transport, ion scattering, and sidewall passivation. The modified models of charged particles and neutral particles are epitomized to determine the contributions of etching rate. The effects such as scalloping effect and lag effect are investigated in simulations and experiments. Besides, the quantitative analyses are conducted to measure the simulation error. Finally, this simulator will be served as an accurate prediction tool for some MEMS fabrications.

  14. Translation of a High-Level Temporal Model into Lower Level Models: Impact of Modelling at Different Description Levels

    DEFF Research Database (Denmark)

    Kraft, Peter; Sørensen, Jens Otto

    2001-01-01

    The paper attempts theoretically to clarify the interrelation between various levels of descriptions used in the modelling and the programming of information systems. We suggest an analysis where we characterise the description levels with respect to how precisely they may handle information abou...... and other textual models. We also consider the aptness of models that include procedural mechanisms such as active and object databases...

  15. Multi-Level Model

    Directory of Open Access Journals (Sweden)

    Constanta Nicoleta BODEA

    2008-01-01

    Full Text Available This is an original paper, which contains a hierarchical model with three levels for determining the linearized non-homogeneous and homogeneous credibility premiums at company level, at sector level and at contract level, founded on the relevant covariance relations between the risk premium, the observations and the weighted averages. We give a rather explicit description of the input data for the multi-level hierarchical model used, only to show that in practical situations there will always be enough data to apply credibility theory to a real insurance portfolio.
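
    The contract-level building block of such a hierarchy is the classical Bühlmann credibility premium. A minimal one-level sketch with invented claims data follows; the paper's model adds company and sector levels on top of this structure.

```python
import numpy as np

# Illustrative claims data: rows are contracts, columns are years.
X = np.array([[3.0, 2.0, 4.0],
              [7.0, 8.0, 6.0],
              [1.0, 2.0, 1.0]])
n_contracts, n_years = X.shape

contract_means = X.mean(axis=1)
grand_mean = X.mean()

# Expected process variance and variance of hypothetical means.
s2 = X.var(axis=1, ddof=1).mean()
a = max(contract_means.var(ddof=1) - s2 / n_years, 0.0)

# Credibility factor Z = n / (n + k), with k = s2 / a; the premium
# blends each contract's own experience with the collective mean.
Z = n_years / (n_years + s2 / a) if a > 0 else 0.0
premiums = Z * contract_means + (1 - Z) * grand_mean
print(round(Z, 3), premiums.round(2))
```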

  16. Cooperative Fuzzy Games Approach to Setting Target Levels of ECs in Quality Function Deployment

    Directory of Open Access Journals (Sweden)

    Zhihui Yang

    2014-01-01

    Full Text Available Quality function deployment (QFD) can provide a means of translating customer requirements (CRs) into engineering characteristics (ECs) for each stage of product development and production. The main objective of QFD-based product planning is to determine the target levels of ECs for a new product or service. QFD is a breakthrough tool which can effectively reduce the gap between CRs and a new product/service. Even though there are conflicts among some ECs, the objective of developing a new product is to maximize the overall customer satisfaction. Therefore, there may be room for cooperation among ECs. A cooperative game framework combined with fuzzy set theory is developed to determine the target levels of the ECs in QFD. The key to developing the model is the formulation of the bargaining function. In the proposed methodology, the players are viewed as the membership functions of ECs to formulate the bargaining function. The solution for the proposed model is Pareto-optimal. An illustrative example is cited to demonstrate the application and performance of the proposed approach.

  17. Cooperative fuzzy games approach to setting target levels of ECs in quality function deployment.

    Science.gov (United States)

    Yang, Zhihui; Chen, Yizeng; Yin, Yunqiang

    2014-01-01

    Quality function deployment (QFD) can provide a means of translating customer requirements (CRs) into engineering characteristics (ECs) for each stage of product development and production. The main objective of QFD-based product planning is to determine the target levels of ECs for a new product or service. QFD is a breakthrough tool which can effectively reduce the gap between CRs and a new product/service. Even though there are conflicts among some ECs, the objective of developing a new product is to maximize the overall customer satisfaction. Therefore, there may be room for cooperation among ECs. A cooperative game framework combined with fuzzy set theory is developed to determine the target levels of the ECs in QFD. The key to developing the model is the formulation of the bargaining function. In the proposed methodology, the players are viewed as the membership functions of ECs to formulate the bargaining function. The solution for the proposed model is Pareto-optimal. An illustrative example is cited to demonstrate the application and performance of the proposed approach.

  18. Modelling uncertainty with generalized credal sets: application to conjunction and decision

    Science.gov (United States)

    Bronevich, Andrey G.; Rozenberg, Igor N.

    2018-01-01

    To model conflict, non-specificity and contradiction in information, upper and lower generalized credal sets are introduced. Any upper generalized credal set is a convex subset of plausibility measures interpreted as lower probabilities whose bodies of evidence consist of singletons and a certain event. Analogously, contradiction is modelled in the theory of evidence by a belief function that is greater than zero at the empty set. Based on generalized credal sets, we extend the conjunctive rule for contradictory sources of information, introduce constructions like the natural extension in the theory of imprecise probabilities, and show that the model of generalized credal sets coincides with the model of imprecise probabilities if the profile of a generalized credal set consists of probability measures. We show how the introduced model can be applied to decision problems.

  19. A level set method for cupping artifact correction in cone-beam CT

    International Nuclear Information System (INIS)

    Xie, Shipeng; Li, Haibo; Ge, Qi; Li, Chunming

    2015-01-01

    Purpose: To reduce cupping artifacts and improve the contrast-to-noise ratio in cone-beam computed tomography (CBCT). Methods: A level set method is proposed to reduce cupping artifacts in the reconstructed image of CBCT. The authors derive a local intensity clustering property of the CBCT image and define a local clustering criterion function of the image intensities in a neighborhood of each point. This criterion function defines an energy in terms of the level set functions, which represent a segmentation result and the cupping artifacts. The cupping artifacts are estimated as a result of minimizing this energy. Results: The cupping artifacts in CBCT are reduced by an average of 90%. The results indicate that the level set-based algorithm is practical and effective for reducing the cupping artifacts and preserving the quality of the reconstructed image. Conclusions: The proposed method focuses on the reconstructed image without requiring any additional physical equipment, is easily implemented, and provides cupping correction through a single-scan acquisition. The experimental results demonstrate that the proposed method successfully reduces the cupping artifacts

  20. Spatial occupancy models for large data sets

    Science.gov (United States)

    Johnson, Devin S.; Conn, Paul B.; Hooten, Mevin B.; Ray, Justina C.; Pond, Bruce A.

    2013-01-01

    Since its development, occupancy modeling has become a popular and useful tool for ecologists wishing to learn about the dynamics of species occurrence over time and space. Such models require presence–absence data to be collected at spatially indexed survey units. However, only recently have researchers recognized the need to correct for spatially induced overdispersion by explicitly accounting for spatial autocorrelation in occupancy probability. Previous efforts to incorporate such autocorrelation have largely focused on logit-normal formulations for occupancy, with spatial autocorrelation induced by a random effect within a hierarchical modeling framework. Although useful, computational time generally limits such an approach to relatively small data sets, and there are often problems with algorithm instability, yielding unsatisfactory results. Further, recent research has revealed a hidden form of multicollinearity in such applications, which may lead to parameter bias if not explicitly addressed. Combining several techniques, we present a unifying hierarchical spatial occupancy model specification that is particularly effective over large spatial extents. This approach employs a probit mixture framework for occupancy and can easily accommodate a reduced-dimensional spatial process to resolve issues with multicollinearity and spatial confounding while improving algorithm convergence. Using open-source software, we demonstrate this new model specification using a case study involving occupancy of caribou (Rangifer tarandus) over a set of 1080 survey units spanning a large contiguous region (108 000 km2) in northern Ontario, Canada. Overall, the combination of a more efficient specification and open-source software allows for a facile and stable implementation of spatial occupancy models for large data sets.

  1. More Precise Estimation of Lower-Level Interaction Effects in Multilevel Models.

    Science.gov (United States)

    Loeys, Tom; Josephy, Haeike; Dewitte, Marieke

    2018-01-01

    In hierarchical data, the effect of a lower-level predictor on a lower-level outcome may often be confounded by an (un)measured upper-level factor. When such confounding is left unaddressed, the effect of the lower-level predictor is estimated with bias. Separating this effect into a within- and between-component removes such bias in a linear random intercept model under a specific set of assumptions for the confounder. When the effect of the lower-level predictor is additionally moderated by another lower-level predictor, an interaction between both lower-level predictors is included into the model. To address unmeasured upper-level confounding, this interaction term ought to be decomposed into a within- and between-component as well. This can be achieved by first multiplying both predictors and centering that product term next, or vice versa. We show that while both approaches, on average, yield the same estimates of the interaction effect in linear models, the former decomposition is much more precise and robust against misspecification of the effects of cross-level and upper-level terms, compared to the latter.
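
    The two decomposition orders the abstract compares are easy to state in code. A sketch with toy repeated-measures data follows (all variable names are placeholders): multiply the predictors and then center the product within the upper-level unit, versus center each predictor first and then multiply.

```python
import pandas as pd

# Toy hierarchical data: persons (upper level) with repeated measures.
df = pd.DataFrame({
    "person": [1, 1, 1, 2, 2, 2],
    "x1": [2.0, 3.0, 4.0, 5.0, 6.0, 7.0],
    "x2": [1.0, 0.0, 2.0, 3.0, 2.0, 4.0],
})
g = df.groupby("person")

# Order 1: multiply first, then center the product within person.
df["x1x2"] = df["x1"] * df["x2"]
df["x1x2_within"] = df["x1x2"] - g["x1x2"].transform("mean")
df["x1x2_between"] = g["x1x2"].transform("mean")

# Order 2: center each predictor within person first, then multiply.
df["x1_w"] = df["x1"] - g["x1"].transform("mean")
df["x2_w"] = df["x2"] - g["x2"].transform("mean")
df["x1w_x2w"] = df["x1_w"] * df["x2_w"]

# Either decomposition can enter a linear mixed model; per the abstract,
# the multiply-then-center version is the more precise and robust.
print(df[["x1x2_within", "x1w_x2w"]])
```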

  2. Compositional models for credal sets

    Czech Academy of Sciences Publication Activity Database

    Vejnarová, Jiřina

    2017-01-01

    Roč. 90, č. 1 (2017), s. 359-373 ISSN 0888-613X R&D Projects: GA ČR(CZ) GA16-12010S Institutional support: RVO:67985556 Keywords : Imprecise probabilities * Credal sets * Multidimensional models * Conditional independence Subject RIV: BA - General Mathematics OBOR OECD: Pure mathematics Impact factor: 2.845, year: 2016 http://library.utia.cas.cz/separaty/2017/MTR/vejnarova-0483288.pdf

  3. Priority setting at the micro-, meso- and macro-levels in Canada, Norway and Uganda.

    Science.gov (United States)

    Kapiriri, Lydia; Norheim, Ole Frithjof; Martin, Douglas K

    2007-06-01

    The objectives of this study were (1) to describe the process of healthcare priority setting in Ontario-Canada, Norway and Uganda at the three levels of decision-making; (2) to evaluate the description using the framework for fair priority setting, accountability for reasonableness; so as to identify lessons of good practices. We carried out case studies involving key informant interviews, with 184 health practitioners and health planners from the macro-level, meso-level and micro-level from Canada-Ontario, Norway and Uganda (selected by virtue of their varying experiences in priority setting). Interviews were audio-recorded, transcribed and analyzed using a modified thematic approach. The descriptions were evaluated against the four conditions of "accountability for reasonableness", relevance, publicity, revisions and enforcement. Areas of adherence to these conditions were identified as lessons of good practices; areas of non-adherence were identified as opportunities for improvement. (i) at the macro-level, in all three countries, cabinet makes most of the macro-level resource allocation decisions and they are influenced by politics, public pressure, and advocacy. Decisions within the ministries of health are based on objective formulae and evidence. International priorities influenced decisions in Uganda. Some priority-setting reasons are publicized through circulars, printed documents and the Internet in Canada and Norway. At the meso-level, hospital priority-setting decisions were made by the hospital managers and were based on national priorities, guidelines, and evidence. Hospital departments that handle emergencies, such as surgery, were prioritized. Some of the reasons are available on the hospital intranet or presented at meetings. Micro-level practitioners considered medical and social worth criteria. These reasons are not publicized. Many practitioners lacked knowledge of the macro- and meso-level priority-setting processes. (ii) Evaluation

  4. Embedded Real-Time Architecture for Level-Set-Based Active Contours

    Directory of Open Access Journals (Sweden)

    Dejnožková Eva

    2005-01-01

    Full Text Available Methods described by partial differential equations have gained considerable interest because of undeniable advantages such as an easy mathematical description of the underlying physics phenomena, subpixel precision, isotropy, and direct extension to higher dimensions. Though their implementation within the level set framework offers other interesting advantages, their vast industrial deployment on embedded systems is slowed down by their considerable computational effort. This paper exploits the high parallelization potential of the operators from the level set framework and proposes a scalable, asynchronous, multiprocessor platform suitable for system-on-chip solutions. We concentrate on obtaining real-time execution capabilities. The performance is evaluated on a continuous watershed and an object-tracking application based on a simple gradient-based attraction force driving the active contour. The proposed architecture can be realized on commercially available FPGAs. It is built around general-purpose processor cores, and can run code developed with usual tools.

  5. KFUPM-KAUST Red Sea model: Digital viscoelastic depth model and synthetic seismic data set

    KAUST Repository

    Al-Shuhail, Abdullatif A.; Mousa, Wail A.; Alkhalifah, Tariq Ali

    2017-01-01

    The Red Sea is geologically interesting due to its unique structures and abundant mineral and petroleum resources, yet no digital geologic models or synthetic seismic data of the Red Sea are publicly available for testing algorithms to image and analyze the area's interesting features. This study compiles a 2D viscoelastic model of the Red Sea and calculates a corresponding multicomponent synthetic seismic data set. The models and data sets are made publicly available for download. We hope this effort will encourage interested researchers to test their processing algorithms on this data set and model and share their results publicly as well.

  6. KFUPM-KAUST Red Sea model: Digital viscoelastic depth model and synthetic seismic data set

    KAUST Repository

    Al-Shuhail, Abdullatif A.

    2017-06-01

    The Red Sea is geologically interesting due to its unique structures and abundant mineral and petroleum resources, yet no digital geologic models or synthetic seismic data of the Red Sea are publicly available for testing algorithms to image and analyze the area's interesting features. This study compiles a 2D viscoelastic model of the Red Sea and calculates a corresponding multicomponent synthetic seismic data set. The models and data sets are made publicly available for download. We hope this effort will encourage interested researchers to test their processing algorithms on this data set and model and share their results publicly as well.

  7. A computational model for three-dimensional jointed media with a single joint set

    International Nuclear Information System (INIS)

    Koteras, J.R.

    1994-02-01

    This report describes a three-dimensional model for jointed rock or other media with a single set of joints. The joint set consists of evenly spaced joint planes. The normal joint response is nonlinear elastic and is based on a rational polynomial. Joint shear stress is treated as linear elastic in shear stress versus slip displacement until a critical stress level, governed by a Mohr-Coulomb friction criterion, is attained. The three-dimensional model represents an extension of a two-dimensional, multi-joint model that has been in use for several years. Although most of the concepts in the two-dimensional model translate in a straightforward manner to three dimensions, the concept of slip on the joint planes becomes more complex in three dimensions. While slip in two dimensions can be treated as a scalar quantity, it must be treated as a vector in the joint plane in three dimensions. For the three-dimensional model proposed here, the slip direction is assumed to be the direction of maximum principal strain in the joint plane. Five test problems are presented to verify the correctness of the computational implementation of the model
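
    The slip condition sketched from the abstract, in the usual Mohr-Coulomb form (c is the joint cohesion, σn the normal stress on the joint plane, φ the friction angle; the model's rational-polynomial normal response is not reproduced here):

```latex
\[
\lvert \tau \rvert \;\le\; c + \sigma_n \tan\varphi ,
\]
```

    with linear elastic shear response below this limit and, once it is reached, slip in the direction of maximum principal strain in the joint plane.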

  8. Improving a Lecture-Size Molecular Model Set by Repurposing Used Whiteboard Markers

    Science.gov (United States)

    Dragojlovic, Veljko

    2015-01-01

    Preparation of an inexpensive model set from whiteboard markers and either HGS molecular model set or atoms made of wood is described. The model set is relatively easy to prepare and is sufficiently large to be suitable as an instructor set for use in lectures.

  9. Level set method for image segmentation based on moment competition

    Science.gov (United States)

    Min, Hai; Wang, Xiao-Feng; Huang, De-Shuang; Jin, Jing; Wang, Hong-Zhi; Li, Hai

    2015-05-01

    We propose a level set method for image segmentation which introduces the moment competition and weakly supervised information into the energy functional construction. Different from the region-based level set methods which use force competition, the moment competition is adopted to drive the contour evolution. Here, a so-called three-point labeling scheme is proposed to manually label three independent points (weakly supervised information) on the image. Then the intensity differences between the three points and the unlabeled pixels are used to construct the force arms for each image pixel. The corresponding force is generated from the global statistical information of a region-based method and weighted by the force arm. As a result, the moment can be constructed and incorporated into the energy functional to drive the evolving contour to approach the object boundary. In our method, the force arm can take full advantage of the three-point labeling scheme to constrain the moment competition. Additionally, the global statistical information and weakly supervised information are successfully integrated, which makes the proposed method more robust than traditional methods for initial contour placement and parameter setting. Experimental results with performance analysis also show the superiority of the proposed method on segmenting different types of complicated images, such as noisy images, three-phase images, images with intensity inhomogeneity, and texture images.

  10. Numerical simulation of overflow at vertical weirs using a hybrid level set/VOF method

    Science.gov (United States)

    Lv, Xin; Zou, Qingping; Reeve, Dominic

    2011-10-01

    This paper presents the applications of a newly developed free surface flow model to the practical, while challenging overflow problems for weirs. Since the model takes advantage of the strengths of both the level set and volume of fluid methods and solves the Navier-Stokes equations on an unstructured mesh, it is capable of resolving the time evolution of very complex vortical motions, air entrainment and pressure variations due to violent deformations following overflow of the weir crest. In the present study, two different types of vertical weir, namely broad-crested and sharp-crested, are considered for validation purposes. The calculated overflow parameters such as pressure head distributions, velocity distributions, and water surface profiles are compared against experimental data as well as numerical results available in literature. A very good quantitative agreement has been obtained. The numerical model, thus, offers a good alternative to traditional experimental methods in the study of weir problems.

  11. Topological Hausdorff dimension and level sets of generic continuous functions on fractals

    International Nuclear Information System (INIS)

    Balka, Richárd; Buczolich, Zoltán; Elekes, Márton

    2012-01-01

    Highlights: ► We examine a new fractal dimension, the so-called topological Hausdorff dimension. ► The generic continuous function has a level set of maximal Hausdorff dimension. ► This maximal dimension is the topological Hausdorff dimension minus one. ► Homogeneity implies that “most” level sets are of this dimension. ► We calculate the various dimensions of the graph of the generic function. - Abstract: In an earlier paper we introduced a new concept of dimension for metric spaces, the so-called topological Hausdorff dimension. For a compact metric space K let dim_H K and dim_tH K denote its Hausdorff and topological Hausdorff dimension, respectively. We proved that this new dimension describes the Hausdorff dimension of the level sets of the generic continuous function on K, namely sup{dim_H f^{-1}(y) : y ∈ R} = dim_tH K - 1 for the generic f ∈ C(K), provided that K is not totally disconnected; otherwise every non-empty level set is a singleton. We also proved that if K is not totally disconnected and sufficiently homogeneous then dim_H f^{-1}(y) = dim_tH K - 1 for the generic f ∈ C(K) and the generic y ∈ f(K). The most important goal of this paper is to make these theorems more precise. As for the first result, we prove that the supremum is actually attained on the left hand side of the first equation above, and also show that there may only be a unique level set of maximal Hausdorff dimension. As for the second result, we characterize those compact metric spaces for which for the generic f ∈ C(K) and the generic y ∈ f(K) we have dim_H f^{-1}(y) = dim_tH K - 1. We also generalize a result of B. Kirchheim by showing that if K is self-similar then for the generic f ∈ C(K), for every y ∈ int f(K), we have dim_H f^{-1}(y) = dim_tH K - 1. Finally, we prove that the graph of the generic f ∈ C(K) has the same Hausdorff and topological Hausdorff dimension as K.

  12. Model answers in pure mathematics for a-level students

    CERN Document Server

    Pratt, GA; Schofield, C W

    1967-01-01

    Model Answers in Pure Mathematics for A-Level Students provides a set of solutions that indicate what is required and expected in an Advanced Level examination in Pure Mathematics. This book serves as a guide to the length of answer required, the layout of the solution, and methods of selecting the best approach to any particular type of math problem. This compilation intends to supplement, not replace, the normal textbook and provides a varied selection of questions for practice in addition to the worked solutions. The subjects covered in this text include algebra, trigonometry, and coordinate geometry.

  13. Individual and setting level predictors of the implementation of a skin cancer prevention program: a multilevel analysis

    Directory of Open Access Journals (Sweden)

    Brownson Ross C

    2010-05-01

    Abstract Background To achieve widespread cancer control, a better understanding is needed of the factors that contribute to successful implementation of effective skin cancer prevention interventions. This study assessed the relative contributions of individual- and setting-level characteristics to implementation of a widely disseminated skin cancer prevention program. Methods A multilevel analysis was conducted using data from the Pool Cool Diffusion Trial from 2004 and replicated with data from 2005. Implementation of Pool Cool by lifeguards was measured using a composite score (implementation variable, range 0 to 10) that assessed whether the lifeguard performed different components of the intervention. Predictors included lifeguard background characteristics, lifeguard sun protection-related attitudes and behaviors, pool characteristics, and enhanced (i.e., more technical assistance, tailored materials, and incentives are provided) versus basic treatment group. Results The mean value of the implementation variable was 4 in both years (2004 and 2005; SD = 2 in 2004 and SD = 3 in 2005), indicating moderate implementation for most lifeguards. Several individual-level (lifeguard characteristics) and setting-level (pool characteristics and treatment group) factors were found to be significantly associated with implementation of Pool Cool by lifeguards. All three lifeguard-level domains (lifeguard background characteristics, lifeguard sun protection-related attitudes and behaviors) and six pool-level predictors (number of weekly pool visitors, intervention intensity, geographic latitude, pool location, sun safety and/or skin cancer prevention programs, and sun safety programs and policies) were included in the final model. The most important predictors of implementation were the number of weekly pool visitors (inverse association) and enhanced treatment group (positive association). That is, pools with fewer weekly visitors and pools in the enhanced treatment group implemented the program more fully.

  14. Process setting models for the minimization of costs of defectives

    African Journals Online (AJOL)

    Dr Obe

    … determine the mean setting so as to minimise the total loss through under-limit complaints and loss of sales and goodwill, as well as over-limit losses through excess materials and rework costs. Models are developed for the two types of setting of the mean so that the minimum costs of losses are achieved. Also, a model is ...

  15. Optimisation-Based Solution Methods for Set Partitioning Models

    DEFF Research Database (Denmark)

    Rasmussen, Matias Sevel

    The scheduling of crew, i.e. the construction of work schedules for crew members, is often not a trivial task, but a complex puzzle. The task is complicated by rules, restrictions, and preferences. Therefore, manual solutions as well as solutions from standard software packages are not always sufficient with respect to solution quality and solution time. Enhancement of the overall solution quality as well as the solution time can be of vital importance to many organisations. The fields of operations research and mathematical optimisation deal with mathematical modelling of difficult scheduling problems (among other topics). The fields also deal with the development of sophisticated solution methods for these mathematical models. This thesis describes the set partitioning model which has been widely used for modelling crew scheduling problems. Integer properties for the set partitioning model are shown...

  16. Adjusting for Confounding in Early Postlaunch Settings: Going Beyond Logistic Regression Models.

    Science.gov (United States)

    Schmidt, Amand F; Klungel, Olaf H; Groenwold, Rolf H H

    2016-01-01

    Postlaunch data on medical treatments can be analyzed to explore adverse events or relative effectiveness in real-life settings. These analyses are often complicated by the number of potential confounders and the possibility of model misspecification. We conducted a simulation study to compare the performance of logistic regression, propensity score, disease risk score, and stabilized inverse probability weighting methods to adjust for confounding. Model misspecification was induced in the independent derivation dataset. We evaluated performance using relative bias and confidence interval coverage of the true effect, among other metrics. At low events per coefficient (1.0 and 0.5), the logistic regression estimates had a large relative bias (greater than -100%). Bias of the disease risk score estimates was at most 13.48% and 18.83%, respectively; for the propensity score model, it was 8.74% and >100%. At events per coefficient of 1.0 and 0.5, inverse probability weighting frequently failed or reduced to a crude regression, resulting in biases of -8.49% and 24.55%. Coverage of logistic regression estimates became less than the nominal level at events per coefficient ≤5. For the disease risk score, inverse probability weighting, and propensity score, coverage became less than nominal at events per coefficient ≤2.5, ≤1.0, and ≤1.0, respectively. Bias of misspecified disease risk score models was 16.55%. In settings with low events/exposed subjects per coefficient, disease risk score methods can be useful alternatives to logistic regression models, especially when propensity score models cannot be used. Despite the better performance of disease risk score methods than logistic regression and propensity score models in small events-per-coefficient settings, bias and coverage still deviated from nominal values.
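
    As a concrete illustration of one of the compared estimators, the sketch below implements stabilized inverse probability weighting in Python with scikit-learn. The toy data, the function name, and the risk-difference target are illustrative assumptions, not the simulation design of the study.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def stabilized_ipw_risk_difference(X, t, y):
    """Stabilized IPW estimate of the treatment effect on a binary
    outcome. X: covariates, t: 0/1 treatment, y: 0/1 outcome."""
    # Propensity score model: P(T = 1 | X)
    ps = LogisticRegression(max_iter=1000).fit(X, t).predict_proba(X)[:, 1]
    p_t = t.mean()                                   # marginal P(T = 1)
    # Stabilized weights: marginal probability over propensity score
    w = np.where(t == 1, p_t / ps, (1 - p_t) / (1 - ps))
    mean1 = np.average(y[t == 1], weights=w[t == 1])
    mean0 = np.average(y[t == 0], weights=w[t == 0])
    return mean1 - mean0

# Toy data with confounding: X[:, 0] raises both treatment probability
# and outcome risk, so the crude difference is biased upward.
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 3))
t = (rng.random(2000) < 1 / (1 + np.exp(-X[:, 0]))).astype(int)
y = (rng.random(2000) < 1 / (1 + np.exp(-(0.5 * t + X[:, 0])))).astype(int)
print(stabilized_ipw_risk_difference(X, t, y))
```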

  17. Hybrid approach for detection of dental caries based on the methods FCM and level sets

    Science.gov (United States)

    Chaabene, Marwa; Ben Ali, Ramzi; Ejbali, Ridha; Zaied, Mourad

    2017-03-01

    This paper presents a new technique for the detection of dental caries, a bacterial disease that destroys tooth structure. Our approach uses a new segmentation method that combines the advantages of the fuzzy C-means (FCM) algorithm and the level set method. The results obtained by the FCM algorithm are used by the level set algorithm, which reduces the influence of noise on each of these algorithms, facilitates the manipulation of the level sets, and leads to more robust segmentation. The sensitivity and specificity confirm the effectiveness of the proposed method for caries detection.
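
    The abstract does not spell out how the FCM output seeds the level set, so the following Python sketch is only one plausible reading: a minimal fuzzy C-means over pixel intensities whose brighter-cluster membership initializes scikit-image's morphological Chan-Vese level set (a substitute for whatever level set formulation the authors used). `fcm` and `hybrid_segment` are hypothetical names.

```python
import numpy as np
from skimage.segmentation import morphological_chan_vese

def fcm(values, k=2, m=2.0, iters=50):
    """Minimal fuzzy C-means on a flat array of intensities.
    Returns the (k, n) membership matrix and the k cluster centers."""
    rng = np.random.default_rng(0)
    centers = rng.choice(values, k, replace=False)
    for _ in range(iters):
        d = np.abs(values[None, :] - centers[:, None]) + 1e-12
        u = d ** (-2.0 / (m - 1.0))
        u /= u.sum(axis=0)                  # memberships sum to 1 per pixel
        um = u ** m
        centers = (um @ values) / um.sum(axis=1)
    return u, centers

def hybrid_segment(image):
    """FCM output seeds the level set: the membership map of the
    brighter cluster becomes the initial contour."""
    img = image.astype(float)
    u, centers = fcm(img.ravel())
    init = (u[np.argmax(centers)] > 0.5).reshape(img.shape)
    # Second positional argument is the number of level set iterations.
    return morphological_chan_vese(img, 50, init_level_set=init)
```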

  18. Agenda Setting for Health Promotion: Exploring an Adapted Model for the Social Media Era.

    Science.gov (United States)

    Albalawi, Yousef; Sixsmith, Jane

    2015-01-01

    The foundation of best practice in health promotion is a robust theoretical base that informs the design, implementation, and evaluation of interventions that promote the public's health. This study provides a novel contribution to health promotion through the adaptation of the agenda-setting approach in response to the contribution of social media. This exploration and proposed adaptation is derived from a study that examined the effectiveness of Twitter in influencing agenda setting among users in relation to road traffic accidents in Saudi Arabia. The proposed adaptations to the agenda-setting model to be explored reflect two levels of engagement: agenda setting within the social media sphere and the position of social media within classic agenda setting. This exploratory research aims to assess the veracity of the proposed adaptations on the basis of the hypotheses developed to test these two levels of engagement. To validate the hypotheses, we collected and analyzed data from two primary sources: Twitter activities and Saudi national newspapers. Keyword mentions served as indicators of agenda promotion; for Twitter, interactions were used to measure the process of agenda setting within the platform. The final Twitter dataset comprised 59,046 tweets and 38,066 users who contributed by tweeting, replying, or retweeting. Variables were collected for each tweet and user. In addition, 518 keyword mentions were recorded from six popular Saudi national newspapers. The results showed significant ratification of the study hypotheses at both levels of engagement that framed the proposed adaptations. The results indicate that social media facilitates the contribution of individuals in influencing agendas (individual users accounted for 76.29%, 67.79%, and 96.16% of retweet impressions, total impressions, and amplification multipliers, respectively), a component missing from traditional constructions of agenda-setting models. The influence of organizations on agenda setting is

  20. Ultrasonic scalpel causes greater depth of soft tissue necrosis compared to monopolar electrocautery at standard power level settings in a pig model

    Science.gov (United States)

    2012-01-01

    Background Ultrasonic scalpel (UC) and monopolar electrocautery (ME) are common tools for soft tissue dissection. However, morphological data on the related tissue alterations are discordant. We developed an automatic device for standardized sample excision and compared the quality and depth of the morphological changes caused by UC and ME in a pig model. Methods 100 tissue samples (5 × 3 cm) of the abdominal wall were excised in 16 pigs. Excisions were randomly performed manually or with the self-constructed automatic device, at the power levels standard for abdominal surgery (60 W cutting in ME, level 5 in UC). The quality of tissue alteration and the depth of coagulation necrosis were examined histopathologically. Device (UC vs. ME) and mode (manual vs. automatic) effects were studied by two-way analysis of variance at a significance level of 5%. Results At the investigated power level settings, UC and ME induced qualitatively similar coagulation necroses. Mean depth of necrosis was 450.4 ± 457.8 μm for manual UC and 553.5 ± 326.9 μm for automatic UC, versus 149.0 ± 74.3 μm for manual ME and 257.6 ± 119.4 μm for automatic ME. Coagulation necrosis was significantly deeper (p < 0.01) when UC was used compared to ME. The mode of excision (manual versus automatic) did not influence the depth of necrosis (p = 0.85). There was no significant interaction between dissection tool and mode of excision (p = 0.93). Conclusions Thermal injury caused by UC and ME results in qualitatively similar coagulation necrosis. The depth of necrosis is significantly greater with UC than with ME at the investigated standard power levels. PMID:22361346
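
    The two-way ANOVA reported here is straightforward to reproduce. The sketch below simulates per-sample depths from the reported group means and standard deviations (assuming 25 samples per cell, consistent with the 100 excisions) and runs the device × mode analysis with statsmodels; since the data are synthetic, the printed F statistics only illustrate the procedure.

```python
import numpy as np
import pandas as pd
from statsmodels.formula.api import ols
from statsmodels.stats.anova import anova_lm

# Hypothetical per-sample data mimicking the reported group means/SDs.
rng = np.random.default_rng(1)
rows = []
for device, mode, mean, sd, n in [
    ("UC", "manual", 450.4, 457.8, 25), ("UC", "automatic", 553.5, 326.9, 25),
    ("ME", "manual", 149.0, 74.3, 25), ("ME", "automatic", 257.6, 119.4, 25),
]:
    for depth in rng.normal(mean, sd, n):
        rows.append({"necrosis_um": depth, "device": device, "mode": mode})
df = pd.DataFrame(rows)

# Two-way ANOVA: device effect, excision-mode effect, and interaction.
fit = ols("necrosis_um ~ C(device) * C(mode)", data=df).fit()
print(anova_lm(fit, typ=2))
```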

  1. WHEN MODEL MEETS REALITY – A REVIEW OF SPAR LEVEL 2 MODEL AGAINST FUKUSHIMA ACCIDENT

    Energy Technology Data Exchange (ETDEWEB)

    Zhegang Ma

    2013-09-01

    The Standardized Plant Analysis Risk (SPAR) models are a set of probabilistic risk assessment (PRA) models used by the Nuclear Regulatory Commission (NRC) to evaluate the risk of operations at U.S. nuclear power plants and provide inputs to the risk-informed regulatory process. A small number of SPAR Level 2 models have been developed, mostly for feasibility study purposes. They extend the Level 1 models to include containment systems, group plant damage states, and model containment phenomenology and accident progression in containment event trees. A severe earthquake and tsunami hit the eastern coast of Japan in March 2011 and caused significant damage to the reactors at the Fukushima Daiichi site. Station blackout (SBO), core damage, containment damage, hydrogen explosion, and intensive radioactivity release, which had previously been analyzed and assumed as postulated accident progressions in PRA models, occurred to varying degrees at the multi-unit Fukushima Daiichi site. This paper reviews and compares a typical BWR SPAR Level 2 model with the “real” accident progressions and sequences that occurred in Fukushima Daiichi Units 1, 2, and 3. It shows that the SPAR Level 2 model is a robust PRA model that describes the accident progression of a real and complicated nuclear accident quite reasonably. On the other hand, the comparison shows that the SPAR model could be enhanced by incorporating some accident characteristics for better representation of severe accident progression.

  2. MAC-Level Communication Time Modeling and Analysis for Real-Time WSNs

    Directory of Open Access Journals (Sweden)

    STANGACIU, V.

    2016-02-01

    Low-level communication protocols and their timing behavior are essential to developing wireless sensor networks (WSNs) able to provide the support and operating guarantees required by many current real-time applications. Nevertheless, this aspect remains an open issue in the state of the art. In this paper we provide a detailed analysis of a recently proposed MAC-level communication timing model and demonstrate its usability in designing real-time protocols. The results of a large set of measurements are also presented and discussed, in direct relation to the main time parameters of the analyzed model.

  3. Automatic Fontanel Extraction from Newborns' CT Images Using Variational Level Set

    Science.gov (United States)

    Kazemi, Kamran; Ghadimi, Sona; Lyaghat, Alireza; Tarighati, Alla; Golshaeyan, Narjes; Abrishami-Moghaddam, Hamid; Grebe, Reinhard; Gondary-Jouet, Catherine; Wallois, Fabrice

    A realistic head model is needed for the source localization methods used for the study of epilepsy in neonates based on electroencephalographic (EEG) measurements from the scalp. The earliest models consider the head as a series of concentric spheres, each layer corresponding to a different tissue whose conductivity is assumed to be homogeneous. The results of the source reconstruction depend strongly on the electric conductivities of the tissues forming the head. The most widely used model consists of three layers (scalp, skull, and intracranial). Most of the major bones of the neonate's skull are ossified at birth but can move slightly relative to each other, owing to the sutures, fibrous membranes that at this stage of development connect the already ossified flat bones of the neurocranium. These weak parts of the neurocranium are called fontanels. It is therefore important to include the exact geometry of the fontanels and flat bones in a source reconstruction, because they show pronounced differences in conductivity. Computed tomography (CT) imaging provides an excellent tool for non-invasive investigation of the skull, which appears in high contrast to all other tissues, while the fontanels can only be identified as an absence of bone, i.e. gaps in the skull between the ossified flat bones. The aim of this paper is therefore to extract the fontanels from CT images by applying a variational level set method. We applied the proposed method to CT images of five different subjects. The automatically extracted fontanels show good agreement with the manually extracted ones.

  4. A Validated Set of MIDAS V5 Task Network Model Scenarios to Evaluate Nextgen Closely Spaced Parallel Operations Concepts

    Science.gov (United States)

    Gore, Brian Francis; Hooey, Becky Lee; Haan, Nancy; Socash, Connie; Mahlstedt, Eric; Foyle, David C.

    2013-01-01

    The Closely Spaced Parallel Operations (CSPO) scenario is a complex human performance model scenario that tested alternate operator roles and responsibilities in a series of off-nominal operations on approach and landing (see Gore, Hooey, Mahlstedt, Foyle, 2013). The model links together the procedures, equipment, crewstation, and external environment to produce predictions of operator performance in response to next-generation system designs, like those expected in the National Airspace System's NextGen concepts. The task analysis contained in the present report comes from the task analysis window in the MIDAS software. These tasks link definitions and states for equipment components, environmental features, and operational contexts. The current task analysis culminated in 3300 tasks that included over 1000 Subject Matter Expert (SME)-vetted, re-usable procedural sets for three critical phases of flight: the Descent, Approach, and Land procedural sets (see Gore et al., 2011 for a description of the development of the tasks included in the model; Gore, Hooey, Mahlstedt, Foyle, 2013 for a description of the model and its results; Hooey, Gore, Mahlstedt, Foyle, 2013 for a description of the guidelines that were generated from the model's results; and Gore, Hooey, Foyle, 2012 for a description of the model's implementation and its settings). The rollout, after-landing checks, taxi-to-gate, and arrive-at-gate networks illustrated in Figure 1 were not used in the approach and divert scenarios exercised; the other networks in Figure 1 set up the appropriate context settings for the flight deck. The current report presents the model's task decomposition from the top (highest) level down to finer-grained levels. The first task completed by the model is to set all of the initial settings for the scenario runs included in the model (network 75 in Figure 1). This initialization process also resets the CAD graphic files contained within MIDAS, as well as the embedded

  5. Data sets for modeling: A retrospective collection of Bidirectional Reflectance and Forest Ecosystems Dynamics Multisensor Aircraft Campaign data sets

    Energy Technology Data Exchange (ETDEWEB)

    Walthall, C.L.; Kim, M. (Univ. of Maryland, College Park, MD (United States). Dept. of Geography); Williams, D.L.; Meeson, B.W.; Agbu, P.A.; Newcomer, J.A.; Levine, E.R.

    1993-12-01

    The Biospheric Sciences Branch, within the Laboratory for Terrestrial Physics at NASA's Goddard Space Flight Center, has assembled two data sets for free dissemination to the remote sensing research community. One data set, referred to as the Retrospective Bidirectional Reflectance Distribution Function (BRDF) Data Collection, is a collection of bidirectional reflectance and supporting biophysical measurements of surfaces ranging in diversity from bare soil to heavily forested canopies. The other data collection, resulting from measurements made in association with the Forest Ecosystems Dynamics Multisensor Aircraft Campaign (FED MAC), contains data that are relevant to ecosystem process models, particularly those which have been modified to incorporate remotely sensed data. Both of these collections are being made available to the science community at large in order to facilitate model development, validation, and usage. These data collections are subsets which have been compiled and consolidated from individual researchers or from several large data set collections including: the First International Satellite Land Surface Climatology Project (ISLSCP) Field Experiment (FIFE); FED MAC; the Superior National Forest Project (SNF); the Geologic Remote Sensing Field Experiment (GRSFE); and Agricultural Inventories through Space Applications of Remote Sensing (AgriStars). The complete, stand-alone FED MAC Data Collection contains atmospheric, vegetation, and soils data acquired during field measurement campaigns conducted at International Paper's Northern Experimental Forest, located approximately 40 km north of Bangor, Maine. Reflectance measurements at the canopy, branch, and needle level are available, along with detailed canopy architectural measurements.

  6. Robust non-rigid point set registration using student's-t mixture model.

    Directory of Open Access Journals (Sweden)

    Zhiyong Zhou

    The Student's-t mixture model, which is heavy-tailed and more robust than the Gaussian mixture model, has recently received great attention in image processing. In this paper, we propose a robust non-rigid point set registration algorithm using the Student's-t mixture model. Specifically, we first consider the alignment of two point sets as a probability density estimation problem and treat one point set as the Student's-t mixture model centroids. Then, we fit the Student's-t mixture model centroids to the other point set, which is treated as data. Finally, we obtain closed-form solutions for the registration parameters, leading to a computationally efficient registration algorithm. The proposed algorithm is especially effective for addressing the non-rigid point set registration problem when significant amounts of noise and outliers are present. Moreover, fewer registration parameters have to be set manually for our algorithm compared to the popular coherent point drift (CPD) algorithm. We have compared our algorithm with other state-of-the-art registration algorithms on both 2D and 3D data with noise and outliers, where our non-rigid registration algorithm showed accurate results and outperformed the other algorithms.

  7. Heterogeneity in Wage Setting Behavior in a New-Keynesian Model

    NARCIS (Netherlands)

    Eijffinger, S.C.W.; Grajales Olarte, A.; Uras, R.B.

    2015-01-01

    In this paper we estimate a New-Keynesian DSGE model with heterogeneity in price and wage setting behavior. In a recent study, Coibion and Gorodnichenko (2011) develop a DSGE model, in which firms follow four different types of price setting schemes: sticky prices, sticky information, rule of thumb,

  8. Glycated haemoglobin (HbA1c ) and fasting plasma glucose relationships in sea-level and high-altitude settings.

    Science.gov (United States)

    Bazo-Alvarez, J C; Quispe, R; Pillay, T D; Bernabé-Ortiz, A; Smeeth, L; Checkley, W; Gilman, R H; Málaga, G; Miranda, J J

    2017-06-01

    Higher haemoglobin levels and differences in glucose metabolism have been reported among high-altitude residents, which may influence the diagnostic performance of HbA1c. This study explores the relationship between HbA1c and fasting plasma glucose (FPG) in populations living at sea level and at an altitude of > 3000 m. Data from 3613 Peruvian adults without a known diagnosis of diabetes from sea-level and high-altitude settings were evaluated. Linear, quadratic and cubic regression models were fitted, adjusting for potential confounders. Receiver operating characteristic (ROC) curves were constructed and concordance between HbA1c and FPG was assessed using a Kappa index. At sea level and high altitude, means were 13.5 and 16.7 g/dl (P > 0.05) for haemoglobin level; 41 and 40 mmol/mol (5.9% and 5.8%; P < 0.01) for HbA1c; and 5.8 and 5.1 mmol/l (105 and 91.3 mg/dl; P < 0.001) for FPG, respectively. The adjusted relationship between HbA1c and FPG was quadratic at sea level and linear at high altitude. Adjusted models showed that, to predict an HbA1c value of 48 mmol/mol (6.5%), the corresponding mean FPG values at sea level and high altitude were 6.6 and 14.8 mmol/l (120 and 266 mg/dl), respectively. An HbA1c cut-off of 48 mmol/mol (6.5%) had a sensitivity for high FPG of 87.3% (95% CI 76.5 to 94.4) at sea level and 40.9% (95% CI 20.7 to 63.6) at high altitude. The relationship between HbA1c and FPG is less clear at high altitude than at sea level. Caution is warranted when using HbA1c to diagnose diabetes mellitus in this setting.
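
    The unadjusted core of the analysis, comparing linear, quadratic and cubic fits of HbA1c on FPG, can be sketched in a few lines of Python. The toy data below are synthetic and the study's covariate adjustment is omitted, so this only illustrates the model-comparison step.

```python
import numpy as np

def compare_polynomial_fits(fpg, hba1c):
    """Fit degree-1, -2, and -3 polynomials of HbA1c on fasting plasma
    glucose and report the residual sum of squares for each."""
    for degree in (1, 2, 3):
        coefs = np.polyfit(fpg, hba1c, degree)
        rss = np.sum((hba1c - np.polyval(coefs, fpg)) ** 2)
        print(f"degree {degree}: coefs={coefs.round(4)}, RSS={rss:.2f}")

# Hypothetical toy data (FPG in mmol/l, HbA1c in %), with a mildly
# quadratic true relationship as reported for the sea-level setting.
rng = np.random.default_rng(2)
fpg = rng.uniform(4.0, 10.0, 300)
hba1c = 4.0 + 0.35 * fpg + 0.02 * fpg**2 + rng.normal(0, 0.3, 300)
compare_polynomial_fits(fpg, hba1c)
```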

  9. Level-set-based reconstruction algorithm for EIT lung images: first clinical results.

    Science.gov (United States)

    Rahmati, Peyman; Soleimani, Manuchehr; Pulletz, Sven; Frerichs, Inéz; Adler, Andy

    2012-05-01

    We show the first clinical results using the level-set-based reconstruction algorithm for electrical impedance tomography (EIT) data. The level-set-based reconstruction method (LSRM) allows the reconstruction of non-smooth interfaces between image regions, which are typically smoothed by traditional voxel-based reconstruction methods (VBRMs). We develop a time difference formulation of the LSRM for 2D images. The proposed reconstruction method is applied to reconstruct clinical EIT data of a slow flow inflation pressure-volume manoeuvre in lung-healthy and adult lung-injury patients. Images from the LSRM and the VBRM are compared. The results show comparable reconstructed images, but with an improved ability to reconstruct sharp conductivity changes in the distribution of lung ventilation using the LSRM.

  11. Does rational selection of training and test sets improve the outcome of QSAR modeling?

    Science.gov (United States)

    Martin, Todd M; Harten, Paul; Young, Douglas M; Muratov, Eugene N; Golbraikh, Alexander; Zhu, Hao; Tropsha, Alexander

    2012-10-22

    Prior to using a quantitative structure-activity relationship (QSAR) model for external predictions, its predictive power should be established and validated. In the absence of a true external data set, the best way to validate the predictive ability of a model is to perform its statistical external validation. In statistical external validation, the overall data set is divided into training and test sets. Commonly, this splitting is performed using random division. Rational splitting methods can divide data sets into training and test sets in an intelligent fashion. The purpose of this study was to determine whether rational division methods lead to more predictive models compared to random division. A special data splitting procedure was used to facilitate the comparison between random and rational division methods. For each toxicity end point, the overall data set was divided into a modeling set (80% of the overall set) and an external evaluation set (20% of the overall set) using random division. The modeling set was then subdivided into a training set (80% of the modeling set) and a test set (20% of the modeling set) using rational division methods and by using random division. The Kennard-Stone, minimal test set dissimilarity, and sphere exclusion algorithms were used as the rational division methods. The hierarchical clustering, random forest, and k-nearest neighbor (kNN) methods were used to develop QSAR models based on the training sets. For kNN QSAR, multiple training and test sets were generated, and multiple QSAR models were built. The results of this study indicate that models based on rational division methods generate better statistical results for the test sets than models based on random division, but the predictive power of both types of models is comparable.
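
    Of the rational division methods named here, the Kennard-Stone algorithm is the easiest to sketch: it greedily picks the compounds that are farthest from those already selected, yielding a space-filling training set. A minimal NumPy version follows; the Euclidean descriptor space and the 80/20 split are illustrative choices, not the study's exact settings.

```python
import numpy as np

def kennard_stone(X, n_train):
    """Kennard-Stone selection on an (n_samples, n_features) matrix.
    Returns (training indices, test indices)."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    first = np.unravel_index(np.argmax(d), d.shape)   # two farthest points
    chosen = list(first)
    while len(chosen) < n_train:
        rest = [i for i in range(len(X)) if i not in chosen]
        # distance from each candidate to its nearest already-chosen point
        min_d = d[np.ix_(rest, chosen)].min(axis=1)
        chosen.append(rest[int(np.argmax(min_d))])    # farthest candidate
    test = [i for i in range(len(X)) if i not in chosen]
    return chosen, test

X = np.random.default_rng(3).normal(size=(50, 4))     # 50 hypothetical descriptors
train_idx, test_idx = kennard_stone(X, n_train=40)    # 80/20 split
```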

  12. Level set segmentation of bovine corpora lutea in ex situ ovarian ultrasound images

    Directory of Open Access Journals (Sweden)

    Adams Gregg P

    2008-08-01

    Abstract Background The objective of this study was to investigate the viability of level set image segmentation methods for the detection of corpus luteum (CL) boundaries in ultrasonographic ovarian images. It was hypothesized that bovine CL boundaries could be located within 1–2 mm by a level set image segmentation methodology. Methods Level set methods embed a 2D contour in a 3D surface and evolve that surface over time according to an image-dependent speed function. A speed function suitable for segmentation of CLs in ovarian ultrasound images was developed. An initial contour was manually placed and contour evolution was allowed to proceed until the rate of change of the area was sufficiently small. The method was tested on ovarian ultrasonographic images (n = 8) obtained ex situ. An expert in ovarian ultrasound interpretation delineated CL boundaries manually to serve as a "ground truth". Accuracy of the level set segmentation algorithm was determined by comparing semi-automatically determined contours with ground truth contours using the mean absolute difference (MAD), root mean squared difference (RMSD), Hausdorff distance (HD), sensitivity, and specificity metrics. Results and discussion The mean MAD was 0.87 mm (sigma = 0.36 mm), RMSD was 1.1 mm (sigma = 0.47 mm), and HD was 3.4 mm (sigma = 2.0 mm), indicating that, on average, boundaries were accurate within 1–2 mm; however, deviations in excess of 3 mm from the ground truth were observed, indicating under- or over-expansion of the contour. Mean sensitivity and specificity were 0.814 (sigma = 0.171) and 0.990 (sigma = 0.00786), respectively, indicating that CLs were consistently undersegmented but that the contour interior rarely included pixels judged by the human expert not to be part of the CL. It was observed that in localities where gradient magnitudes within the CL were strong due to high-contrast speckle, contour expansion stopped too early. Conclusion The
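
    The three boundary-accuracy metrics used here (MAD, RMSD, HD) are easy to compute once both contours are available as point lists. A NumPy/SciPy sketch follows; it assumes the coordinates are already scaled to millimetres, and the function name and toy circle are illustrative.

```python
import numpy as np
from scipy.spatial.distance import cdist, directed_hausdorff

def boundary_metrics(auto, truth):
    """MAD, RMSD, and Hausdorff distance between two contours given as
    (N, 2) arrays of boundary point coordinates in mm."""
    d = cdist(auto, truth)
    # nearest-neighbour distances in both directions (symmetrized)
    nearest = np.concatenate([d.min(axis=1), d.min(axis=0)])
    mad = nearest.mean()
    rmsd = np.sqrt((nearest ** 2).mean())
    hd = max(directed_hausdorff(auto, truth)[0],
             directed_hausdorff(truth, auto)[0])
    return mad, rmsd, hd

# Toy check: a circle against a slightly expanded copy of itself.
t = np.linspace(0, 2 * np.pi, 200, endpoint=False)
circle = np.c_[np.cos(t), np.sin(t)]
print(boundary_metrics(circle, 1.05 * circle))
```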

  13. Three essays on multi-level optimization models and applications

    Science.gov (United States)

    Rahdar, Mohammad

    The general form of a multi-level mathematical programming problem is a set of nested optimization problems, in which each level controls a series of decision variables independently. However, the values of those decision variables may also impact the objective functions of other levels. A two-level model is called a bilevel model and can be considered as a Stackelberg game with a leader and a follower: the leader anticipates the response of the follower and optimizes its objective function, and the follower then reacts to the leader's action. Multi-level decision-making models have many real-world applications, such as government decisions, energy policies, market economics, and network design. However, there is a lack of capable algorithms for solving medium- and large-scale problems of these types. The dissertation is devoted to both theoretical research and applications of multi-level mathematical programming models, and consists of three parts, each in paper format. The first part studies the renewable energy portfolio under two major renewable energy policies. The potential competition for biomass for the growth of the renewable energy portfolio in the United States, and other interactions between the two policies over the next twenty years, are investigated. This problem mainly has two levels of decision makers: the government/policy makers and the biofuel producers/electricity generators/farmers. We focus on the lower-level problem to predict the amount of capacity expansion, fuel production, and power generation. In the second part, we address uncertainty over demand and lead time in a multi-stage mathematical programming problem. We propose a two-stage tri-level optimization model within a rolling-horizon framework to reduce the dimensionality of the multi-stage problem. In the third part of the dissertation, we introduce a new branch-and-bound algorithm to solve bilevel linear programming problems. The total time is reduced by solving a smaller relaxation

  14. A comprehensive dwelling unit choice model accommodating psychological constructs within a search strategy for consideration set formation.

    Science.gov (United States)

    2015-12-01

    This study adopts a dwelling unit level of analysis and considers a probabilistic choice set generation approach for residential choice modeling. In doing so, we accommodate the fact that housing choices involve both characteristics of the dwelling u...

  15. Topology optimization of hyperelastic structures using a level set method

    Science.gov (United States)

    Chen, Feifei; Wang, Yiqiang; Wang, Michael Yu; Zhang, Y. F.

    2017-12-01

    Soft rubberlike materials, due to their inherent compliance, are finding widespread implementation in a variety of applications ranging from assistive wearable technologies to soft material robots. Structural design of such soft and rubbery materials necessitates the consideration of large nonlinear deformations and hyperelastic material models to accurately predict their mechanical behaviour. In this paper, we present an effective level set-based topology optimization method for the design of hyperelastic structures that undergo large deformations. The method incorporates both geometric and material nonlinearities where the strain and stress measures are defined within the total Lagrange framework and the hyperelasticity is characterized by the widely-adopted Mooney-Rivlin material model. A shape sensitivity analysis is carried out, in the strict sense of the material derivative, where the high-order terms involving the displacement gradient are retained to ensure the descent direction. As the design velocity enters into the shape derivative in terms of its gradient and divergence terms, we develop a discrete velocity selection strategy. The whole optimization implementation undergoes a two-step process, where the linear optimization is first performed and its optimized solution serves as the initial design for the subsequent nonlinear optimization. It turns out that this operation could efficiently alleviate the numerical instability and facilitate the optimization process. To demonstrate the validity and effectiveness of the proposed method, three compliance minimization problems are studied and their optimized solutions present significant mechanical benefits of incorporating the nonlinearities, in terms of remarkable enhancement in not only the structural stiffness but also the critical buckling load.

  17. Low-level HIV-1 replication and the dynamics of the resting CD4+ T cell reservoir for HIV-1 in the setting of HAART

    Science.gov (United States)

    Sedaghat, Ahmad R; Siliciano, Robert F; Wilke, Claus O

    2008-01-01

    Background In the setting of highly active antiretroviral therapy (HAART), plasma levels of human immunodeficiency virus type-1 (HIV-1) rapidly decay to below the limit of detection of standard clinical assays. However, reactivation of remaining latently infected memory CD4+ T cells is a source of continued virus production, forcing patients to remain on HAART despite clinically undetectable viral loads. Unfortunately, the latent reservoir decays slowly, with a half-life of up to 44 months, making it the major known obstacle to the eradication of HIV-1 infection. However, the mechanism underlying the long half-life of the latent reservoir is unknown. The most likely potential mechanisms are low-level viral replication and the intrinsic stability of latently infected cells. Methods Here we use a mathematical model of T cell dynamics in the setting of HIV-1 infection to probe the decay characteristics of the latent reservoir upon initiation of HAART. We compare the behavior of this model to patient derived data in order to gain insight into the role of low-level viral replication in the setting of HAART. Results By comparing the behavior of our model to patient derived data, we find that the viral dynamics observed in patients on HAART could be consistent with low-level viral replication but that this replication would not significantly affect the decay rate of the latent reservoir. Rather than low-level replication, the intrinsic stability of latently infected cells and the rate at which they are reactivated primarily determine the observed reservoir decay rate according to the predictions of our model. Conclusion The intrinsic stability of the latent reservoir has important implications for efforts to eradicate HIV-1 infection and suggests that intensified HAART would not accelerate the decay of the latent reservoir. PMID:18171475
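
    The reported half-life of up to 44 months implies a first-order decay rate that makes the eradication problem concrete. The short calculation below, with a purely hypothetical reservoir size of 10^6 cells, shows why the abstract calls the reservoir the major obstacle to eradication.

```python
import numpy as np

# Exponential decay implied by the reported half-life:
# N(t) = N0 * exp(-k * t), with k = ln(2) / t_half.
t_half_months = 44.0
k = np.log(2) / t_half_months          # decay rate per month
n0 = 1e6                               # hypothetical latently infected cells
years_to_one_cell = np.log(n0) / k / 12.0
print(f"k = {k:.4f}/month, ~{years_to_one_cell:.0f} years to decay to one cell")
# -> roughly 70+ years, hence the conclusion that intensified HAART
#    alone would not plausibly eradicate the reservoir.
```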

  18. Comparative analysis of modified PMV models and SET models to predict human thermal sensation in naturally ventilated buildings

    DEFF Research Database (Denmark)

    Gao, Jie; Wang, Yi; Wargocki, Pawel

    2015-01-01

    In this paper, a comparative analysis was performed on the human thermal sensation estimated by modified predicted mean vote (PMV) models and modified standard effective temperature (SET) models in naturally ventilated buildings; the data were collected in a field study. These prediction models were ... /s, the expectancy factors for the extended PMV model and the extended SET model were from 0.770 to 0.974 and from 1.330 to 1.363, and the adaptive coefficients for the adaptive PMV model and the adaptive SET model were from 0.029 to 0.167 and from −0.213 to −0.195. In addition, the difference in thermal sensation between the measured and predicted values using the modified PMV models exceeded 25%, while the difference between the measured thermal sensation and the predicted thermal sensation using the modified SET models was approximately less than 25%. It is concluded that the modified SET models can predict human...

  19. Combinatorial nuclear level-density model

    International Nuclear Information System (INIS)

    Uhrenholt, H.; Åberg, S.; Dobrowolski, A.; Døssing, Th.; Ichikawa, T.; Möller, P.

    2013-01-01

    A microscopic nuclear level-density model is presented. The model is a completely combinatorial (micro-canonical) model based on the folded-Yukawa single-particle potential and includes explicit treatment of pairing, rotational and vibrational states. The microscopic character of all states enables extraction of level-distribution functions with respect to pairing gaps, parity and angular momentum. The results of the model are compared to available experimental data: level spacings at neutron separation energy, data on total level-density functions from the Oslo method, cumulative level densities from low-lying discrete states, and data on parity ratios. Spherical and deformed nuclei follow basically different coupling schemes, and we focus on deformed nuclei

  20. Use of fuzzy sets in modeling of GIS objects

    Science.gov (United States)

    Mironova, Yu N.

    2018-05-01

    The paper discusses modeling and methods of data visualization in geographic information systems. Information processing in geoinformatics is based on the use of models; geoinformation modeling is therefore a key link in the chain of geodata processing. Solving problems with geographic information systems often requires storing approximate or insufficiently reliable information about map features in the GIS database. Heterogeneous data of different origin and accuracy carry some degree of uncertainty. In addition, not all information is accurate: already during the initial measurements, poorly defined terms and attributes (e.g., "soil, well-drained") are used. Methods are therefore needed for working with uncertain requirements, classes, and boundaries. The author proposes describing spatial information with fuzzy sets. In terms of its characteristic function, a fuzzy set is a natural generalization of an ordinary set, obtained by rejecting the binary nature of that function and allowing it to take any value in the interval [0, 1].
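
    The generalized characteristic function is simple to state in code. The sketch below defines a trapezoidal membership function, a common choice although the paper does not commit to one, for a hypothetical "well-drained soil" attribute.

```python
def trapezoidal_membership(x, a, b, c, d):
    """Degree of membership in [0, 1] for a fuzzy class such as
    'soil, well-drained', replacing the crisp 0/1 characteristic
    function of an ordinary set (full membership between b and c)."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    return (x - a) / (b - a) if x < b else (d - x) / (d - c)

# Hypothetical fuzzy boundary on a soil drainage rate attribute (cm/h):
for rate in (0.5, 1.5, 3.0, 5.5):
    print(rate, trapezoidal_membership(rate, a=1.0, b=2.0, c=4.0, d=6.0))
```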

  1. Learning Data Set Influence on Identification Accuracy of Gas Turbine Neural Network Model

    Science.gov (United States)

    Kuznetsov, A. V.; Makaryants, G. M.

    2018-01-01

    Many studies have identified gas turbine engines using dynamic neural network models. Identification should minimize the error between the model and the real object, yet questions about how the training data set is processed are usually overlooked. This article studies the influence of the type of training data set on the accuracy of a gas turbine neural network model. The identification object is a thermodynamic model of a micro gas turbine engine, whose input signal is the fuel consumption and whose output signal is the engine rotor rotation frequency. Four types of input signal — step, fast, slow and mixed — were used to create the training and testing data sets for the dynamic neural network models, and four dynamic neural networks were created from the corresponding training data sets. Each neural network was then tested against all four types of test data set. As a result, 16 transient responses from the four neural networks and four test data sets were compared with the corresponding solutions of the thermodynamic model. The errors of all neural networks were compared within each test data set, yielding the range of error values for each set. These ranges are small; the influence of training data set type on identification accuracy is therefore low.

  2. Probe-level linear model fitting and mixture modeling results in high accuracy detection of differential gene expression

    Directory of Open Access Journals (Sweden)

    Lemieux Sébastien

    2006-08-01

    Abstract Background The identification of differentially expressed genes (DEGs) from Affymetrix GeneChip arrays is currently done by first computing expression levels from the low-level probe intensities, then deriving significance by comparing these expression levels between conditions. The proposed PL-LM (Probe-Level Linear Model) method implements a linear model applied to the probe-level data to directly estimate the treatment effect. A finite mixture of Gaussian components is then used to identify DEGs using the coefficients estimated by the linear model. This approach can readily be applied to experimental designs with or without replication. Results On a wholly defined dataset, the PL-LM method was able to identify 75% of the differentially expressed genes within 10% of false positives. This accuracy was achieved both using the three replicates per condition available in the dataset and using only one replicate per condition. Conclusion The method achieves, on this dataset, a higher accuracy than the best set of tools identified by the authors of the dataset, and does so using only one replicate per condition.
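
    The exact PL-LM design matrix is not given in the abstract, so the sketch below is only a minimal analogue in Python: per gene, a least-squares fit of log probe intensity on probe-affinity indicators plus a condition term, followed by a two-component Gaussian mixture over the estimated coefficients to flag candidate DEGs. The data, probe counts, and effect sizes are synthetic.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def treatment_effect(probe_log_intensities, condition):
    """Per-gene probe-level linear model: log intensity explained by a
    per-probe affinity term plus a condition (treatment) term; returns
    the estimated treatment coefficient. Rows = probes, cols = arrays."""
    n_probes, n_arrays = probe_log_intensities.shape
    rows = n_probes * n_arrays
    design = np.zeros((rows, n_probes + 1))
    design[np.arange(rows), np.repeat(np.arange(n_probes), n_arrays)] = 1.0
    design[:, -1] = np.tile(condition, n_probes)       # treatment column
    coef, *_ = np.linalg.lstsq(design, probe_log_intensities.ravel(),
                               rcond=None)
    return coef[-1]

# Synthetic example: 200 genes, 11 probes each, 3 arrays per condition,
# with the first 20 genes truly changed by the treatment.
rng = np.random.default_rng(4)
condition = np.array([0, 0, 0, 1, 1, 1], dtype=float)
effects = []
for g in range(200):
    true_effect = 1.5 if g < 20 else 0.0
    affinities = rng.normal(8.0, 1.0, (11, 1))         # probe baselines
    data = affinities + true_effect * condition + rng.normal(0, 0.3, (11, 6))
    effects.append(treatment_effect(data, condition))

# A two-component mixture separates null genes from candidate DEGs.
effects = np.asarray(effects).reshape(-1, 1)
gm = GaussianMixture(n_components=2, random_state=0).fit(effects)
deg_component = int(np.argmax(gm.means_.ravel()))
print(np.sum(gm.predict(effects) == deg_component), "candidate DEGs")
```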

  3. Evolving Non-Dominated Parameter Sets for Computational Models from Multiple Experiments

    Science.gov (United States)

    Lane, Peter C. R.; Gobet, Fernand

    2013-03-01

    Creating robust, reproducible and optimal computational models is a key challenge for theorists in many sciences. Psychology and cognitive science face particular challenges as large amounts of data are collected and many models are not amenable to analytical techniques for calculating parameter sets. Particular problems are to locate the full range of acceptable model parameters for a given dataset, and to confirm the consistency of model parameters across different datasets. Resolving these problems will provide a better understanding of the behaviour of computational models, and so support the development of general and robust models. In this article, we address these problems using evolutionary algorithms to develop parameters for computational models against multiple sets of experimental data; in particular, we propose the 'speciated non-dominated sorting genetic algorithm' for evolving models in several theories. We discuss the problem of developing a model of categorisation using twenty-nine sets of data and models drawn from four different theories. We find that the evolutionary algorithms generate high quality models, adapted to provide a good fit to all available data.
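
    At the core of any non-dominated sorting GA is a Pareto-dominance test over the per-dataset fitness scores. A minimal NumPy version follows (maximization is assumed, and the speciation machinery of the authors' algorithm is omitted).

```python
import numpy as np

def non_dominated(fitnesses):
    """Indices of non-dominated rows, assuming every objective is to be
    maximized (e.g., fit of a parameter set to each of several
    experimental datasets)."""
    f = np.asarray(fitnesses, dtype=float)
    keep = []
    for i, fi in enumerate(f):
        # Another row dominates fi if it is >= on all objectives
        # and strictly > on at least one.
        dominated = np.any(np.all(f >= fi, axis=1) & np.any(f > fi, axis=1))
        if not dominated:
            keep.append(i)
    return keep

# Three candidate parameter sets scored against two datasets:
print(non_dominated([[0.9, 0.2], [0.5, 0.5], [0.4, 0.4]]))  # -> [0, 1]
```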

  4. A mathematical model for predicting glucose levels in critically-ill patients: the PIGnOLI model

    Directory of Open Access Journals (Sweden)

    Zhongheng Zhang

    2015-06-01

    Background and Objectives. Glycemic control is of paramount importance in the intensive care unit. Several blood glucose (BG) control algorithms have been developed for clinical trials, but they are mostly based on experts' opinion and consensus. There are no validated models predicting how glucose levels will change after initiation of insulin infusion in critically ill patients. The study aimed to develop an equation for initial insulin dose setting. Methods. A large critical care database was employed for the study. Linear regression model fitting was employed, with the retested blood glucose as the dependent variable; the insulin rate was forced into the model. Multivariable fractional polynomials and interaction terms were used to explore the complex relationships among covariates. The overall fit of the model was examined using residuals and adjusted R-squared values, and regression diagnostics were used to explore the influence of outliers on the model. Main Results. A total of 6,487 ICU admissions requiring insulin pump therapy were identified. The dataset was randomly split into two subsets at a 7:3 ratio. The initial model comprised fractional polynomials and interaction terms; however, this model was not stable when several outliers were excluded, so I fitted a simple linear model without interactions. The selected prediction model (Predicting Glucose Levels in ICU, PIGnOLI) included the variables initial blood glucose, insulin rate, PO volume, total parenteral nutrition, body mass index (BMI), lactate, congestive heart failure, renal failure, liver disease, time interval of BG recheck, and dextrose rate. Insulin rate was significantly associated with blood glucose reduction (coefficient: −0.52, 95% CI [−1.03, −0.01]). The parsimonious model was well validated with the validation subset, with an adjusted R-squared value of 0.8259. Conclusion. The study developed the PIGnOLI model for initial insulin dose setting. Furthermore, experimental study is
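
    A stripped-down analogue of the PIGnOLI fit is easy to sketch: below, synthetic data are generated with the reported insulin-rate coefficient of −0.52 as ground truth and refitted with scikit-learn. Only two of the model's eleven covariates are kept, and all numbers are hypothetical.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Minimal analogue of the PIGnOLI fit: predict the rechecked blood
# glucose from the initial glucose and the insulin infusion rate.
rng = np.random.default_rng(5)
initial_bg = rng.uniform(6, 20, 500)       # mmol/l, hypothetical
insulin_rate = rng.uniform(0, 10, 500)     # units/h, hypothetical
# Generative truth uses the reported insulin coefficient of -0.52.
retest_bg = initial_bg - 0.52 * insulin_rate + rng.normal(0, 1.5, 500)

X = np.column_stack([initial_bg, insulin_rate])
fit = LinearRegression().fit(X, retest_bg)
print(fit.coef_)   # the insulin-rate coefficient recovers roughly -0.52
```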

  5. Setting conservation management thresholds using a novel participatory modeling approach.

    Science.gov (United States)

    Addison, P F E; de Bie, K; Rumpff, L

    2015-10-01

    We devised a participatory modeling approach for setting management thresholds that show when management intervention is required to address undesirable ecosystem changes. This approach was designed to be used when management thresholds: must be set for environmental indicators in the face of multiple competing objectives; need to incorporate scientific understanding and value judgments; and will be set by participants with limited modeling experience. We applied our approach to a case study where management thresholds were set for a mat-forming brown alga, Hormosira banksii, in a protected area management context. Participants, including management staff and scientists, were involved in a workshop to test the approach, and set management thresholds to address the threat of trampling by visitors to an intertidal rocky reef. The approach involved trading off the environmental objective, to maintain the condition of intertidal reef communities, with social and economic objectives to ensure management intervention was cost-effective. Ecological scenarios, developed using scenario planning, were a key feature that provided the foundation for where to set management thresholds. The scenarios developed represented declines in percent cover of H. banksii that may occur under increased threatening processes. Participants defined 4 discrete management alternatives to address the threat of trampling and estimated the effect of these alternatives on the objectives under each ecological scenario. A weighted additive model was used to aggregate participants' consequence estimates. Model outputs (decision scores) clearly expressed uncertainty, which can be considered by decision makers and used to inform where to set management thresholds. This approach encourages a proactive form of conservation, where management thresholds and associated actions are defined a priori for ecological indicators, rather than reacting to unexpected ecosystem changes in the future.
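
    The weighted additive aggregation used to produce decision scores is a one-liner once consequence estimates and objective weights are tabulated; the numbers below are purely illustrative.

```python
import numpy as np

# Each row is a management alternative, each column an objective
# (environmental, social, economic) scored on [0, 1].
consequences = np.array([[0.9, 0.2, 0.4],     # hypothetical estimates
                         [0.6, 0.6, 0.7],
                         [0.3, 0.9, 0.8]])
weights = np.array([0.5, 0.3, 0.2])           # objective weights, sum to 1

decision_scores = consequences @ weights      # weighted additive model
print(decision_scores, "-> best alternative:", decision_scores.argmax())
```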

  6. A comparison of foetal SAR in three sets of pregnant female models

    International Nuclear Information System (INIS)

    Dimbylow, Peter J; Nagaoka, Tomoaki; Xu, X George

    2009-01-01

    This paper compares the foetal SAR in the HPA hybrid mathematical phantoms with the 26-week foetal model developed at the National Institute of Information and Communications Technology, Tokyo, and the set of 13-, 26- and 38-week boundary representation models produced at Rensselaer Polytechnic Institute. FDTD calculations are performed at a resolution of 2 mm for a plane wave with a vertically aligned electric field incident upon the body from the front, back and two sides, from 20 MHz to 3 GHz, under isolated conditions. The external electric field values required to produce the ICNIRP public exposure localized restriction of 2 W kg⁻¹ when averaged over 10 g of the foetus are compared with the ICNIRP reference levels.

  7. A level-set approach to modelling the hybrid arc/laser welding process: application to the joining of thick steel sheets

    Directory of Open Access Journals (Sweden)

    Desmaison Olivier

    2013-11-01

    The hybrid arc/laser welding process has been developed in order to overcome the difficulties encountered when joining thick steel sheets. This innovative process combines two heat sources: an electric arc produced by a MIG torch and a laser source located ahead of it. This coupling improves the efficiency of the process, the weld bead quality and the final deformations. Modelling the process with a level set approach enables prediction of the weld bead development and of the associated temperature field. The simulation of the multi-pass welding of an 18MnNiMo5 steel grade is presented, and the results are compared with experimental observations.

  8. Selecting an interprofessional education model for a tertiary health care setting.

    Science.gov (United States)

    Menard, Prudy; Varpio, Lara

    2014-07-01

    The World Health Organization describes interprofessional education (IPE) and collaboration as necessary components of all health professionals' education - in curriculum and in practice. However, no standard framework exists to guide healthcare settings in developing or selecting an IPE model that meets the learning needs of licensed practitioners in practice and that suits the unique needs of their setting. Initially, a broad review of the grey literature (organizational websites, government documents and published books) and healthcare databases was undertaken for existing IPE models. Subsequently, database searches of published papers using Scopus, Scholars Portal and Medline were undertaken. Through this search process five IPE models were identified in the literature. This paper attempts to briefly outline the five different models of IPE presently offered in the literature, and to illustrate how a healthcare setting can select the IPE model suited to its context using Reeves' seven key trends in developing IPE. In presenting these results, the paper contributes to the interprofessional literature by offering an overview of possible IPE models that can be used to inform the implementation or modification of interprofessional practices in a tertiary healthcare setting.

  9. Benchmark data set for wheat growth models

    DEFF Research Database (Denmark)

    Asseng, S; Ewert, F.; Martre, P

    2015-01-01

    The data set includes a current representative management treatment from detailed, quality-tested sentinel field experiments with wheat from four contrasting environments including Australia, The Netherlands, India and Argentina. Measurements include local daily climate data (solar radiation, max… …analysis with 26 models and 30 years (1981-2010) for each location, for elevated atmospheric CO2 and temperature changes, a heat stress sensitivity analysis at anthesis, and a sensitivity analysis with soil and crop management variations and a Global Climate Model end-century scenario…

  10. Modeling Multi-Level Systems

    CERN Document Server

    Iordache, Octavian

    2011-01-01

    This book is devoted to modeling of multi-level complex systems, a challenging domain for engineers, researchers and entrepreneurs, confronted with the transition from learning and adaptability to evolvability and autonomy for technologies, devices and problem solving methods. Chapter 1 introduces the multi-scale and multi-level systems and highlights their presence in different domains of science and technology. Methodologies as, random systems, non-Archimedean analysis, category theory and specific techniques as model categorification and integrative closure, are presented in chapter 2. Chapters 3 and 4 describe polystochastic models, PSM, and their developments. Categorical formulation of integrative closure offers the general PSM framework which serves as a flexible guideline for a large variety of multi-level modeling problems. Focusing on chemical engineering, pharmaceutical and environmental case studies, the chapters 5 to 8 analyze mixing, turbulent dispersion and entropy production for multi-scale sy...

  11. Patient- and population-level health consequences of discontinuing antiretroviral therapy in settings with inadequate HIV treatment availability

    Directory of Open Access Journals (Sweden)

    Kimmel April D

    2012-09-01

    Full Text Available Abstract Background In resource-limited settings, HIV budgets are flattening or decreasing. A policy of discontinuing antiretroviral therapy (ART) after HIV treatment failure was modeled to highlight trade-offs among competing policy goals of optimizing individual and population health outcomes. Methods In settings with two available ART regimens, we assessed two strategies: (1) continue ART after second-line failure (Status Quo) and (2) discontinue ART after second-line failure (Alternative). A computer model simulated outcomes for a single cohort of newly detected, HIV-infected individuals. Projections were fed into a population-level model allowing multiple cohorts to compete for ART with constraints on treatment capacity. In the Alternative strategy, discontinuation of second-line ART occurred upon detection of antiretroviral failure, as specified by WHO guidelines. Those discontinuing failed ART experienced an increased risk of AIDS-related mortality compared to those continuing ART. Results At the population level, the Alternative strategy increased the mean number initiating ART annually by 1,100 individuals (+18.7%) to 6,980 compared to the Status Quo. More individuals initiating ART under the Alternative strategy increased total life-years by 15,000 (+2.8%) to 555,000, compared to the Status Quo. Although more individuals received treatment under the Alternative strategy, life expectancy for those treated decreased by 0.7 years (−8.0%) to 8.1 years compared to the Status Quo. In a cohort of treated patients only, 600 more individuals (+27.1%) died by 5 years under the Alternative strategy compared to the Status Quo. Results were sensitive to the timing of detection of ART failure, the number of ART regimens, and treatment capacity. Although we believe the results robust in the short-term, this analysis reflects settings where HIV case detection occurs late in the disease course and treatment capacity and the incidence of newly detected patients are…

  12. Implications of sea-level rise in a modern carbonate ramp setting

    Science.gov (United States)

    Lokier, Stephen W.; Court, Wesley M.; Onuma, Takumi; Paul, Andreas

    2018-03-01

    This study addresses a gap in our understanding of the effects of sea-level rise on the sedimentary systems and morphological development of recent and ancient carbonate ramp settings. Many ancient carbonate sequences are interpreted as having been deposited in carbonate ramp settings. These settings are poorly represented in the Recent. The study documents the present-day transgressive flooding of the Abu Dhabi coastline at the southern shoreline of the Arabian/Persian Gulf, a carbonate ramp depositional system that is widely employed as a Recent analogue for numerous ancient carbonate systems. Fourteen years of field-based observations are integrated with historical and recent high-resolution satellite imagery in order to document and assess the onset of flooding. Predicted rates of transgression (i.e. landward movement of the shoreline) of 2.5 m yr⁻¹ (±0.2 m yr⁻¹) based on global sea-level rise alone were far exceeded by the flooding rate calculated from the back-stepping of coastal features (10-29 m yr⁻¹). This discrepancy results from the dynamic nature of the flooding, with increased water depth exposing the coastline to increased erosion and, thereby, enhancing back-stepping. A non-accretionary transgressive shoreline trajectory results from relatively rapid sea-level rise coupled with a low-angle ramp geometry and a paucity of sediments. The flooding is represented by the landward migration of facies belts, a range of erosive features and the onset of bioturbation. Employing Intergovernmental Panel on Climate Change (Church et al., 2013) predictions for 21st century sea-level rise, and allowing for the post-flooding lag time that is typical for the start-up of carbonate factories, it is calculated that the coastline will continue to retrograde for the foreseeable future. Total passive flooding (without considering feedback in the modification of the shoreline) by the year 2100 is calculated to likely be between 340 and 571 m, with a flooding rate of 3…

  13. Prediction of South China sea level using seasonal ARIMA models

    Science.gov (United States)

    Fernandez, Flerida Regine; Po, Rodolfo; Montero, Neil; Addawe, Rizavel

    2017-11-01

    Accelerating sea-level rise is an indicator of global warming and poses a threat to low-lying places and coastal countries. This study aims to fit a Seasonal Autoregressive Integrated Moving Average (SARIMA) model to the time series obtained from the TOPEX and Jason series of satellite radar altimetry missions over the South China Sea from 2008 to 2015. With altimetric measurements taken in a 10-day repeat cycle, monthly averages of the satellite altimetry measurements were computed to compose the data set used in the study. SARIMA models were then fitted to the time series in order to find the best-fit model. Results show that the SARIMA(1,0,0)(0,1,1)₁₂ model best fits the time series; it was used to forecast the values for January 2016 to December 2016. The 12-month forecast using SARIMA(1,0,0)(0,1,1)₁₂ shows the sea level gradually increasing from January to September 2016 and decreasing until December 2016.
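
    As a concrete illustration of fitting such a model, the sketch below uses the SARIMAX implementation in statsmodels with the orders reported above; the input series is a synthetic placeholder standing in for the monthly altimetry averages, which are not reproduced in this record.

      import numpy as np
      import statsmodels.api as sm

      # Placeholder for the 2008-2015 monthly mean sea-level series used in
      # the study (trend + annual cycle + noise); not the actual altimetry data.
      rng = np.random.default_rng(1)
      t = np.arange(96)  # 8 years of monthly values
      y = 0.003 * t + 0.02 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 0.01, 96)

      # SARIMA(1,0,0)(0,1,1) with a 12-month season, the best-fit orders above.
      fit = sm.tsa.SARIMAX(y, order=(1, 0, 0),
                           seasonal_order=(0, 1, 1, 12)).fit(disp=False)
      print(fit.forecast(steps=12))  # 12-month forecast, as in the study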

  14. A game on the universe of sets

    International Nuclear Information System (INIS)

    Saveliev, D I

    2008-01-01

    Working in set theory without the axiom of regularity, we consider a two-person game on the universe of sets. In this game, the players choose in turn an element of a given set, an element of this element and so on. A player wins if he leaves his opponent no possibility of making a move, that is, if he has chosen the empty set. Winning sets (those admitting a winning strategy for one of the players) form a natural hierarchy with levels indexed by ordinals (in the finite case, the ordinal indicates the shortest length of a winning strategy). We show that the class of hereditarily winning sets is an inner model containing all well-founded sets and that each of the four possible relations between the universe, the class of hereditarily winning sets, and the class of well-founded sets is consistent. As far as the class of winning sets is concerned, either it is equal to the whole universe, or many of the axioms of set theory cannot hold on this class. Somewhat surprisingly, this does not apply to the axiom of regularity: we show that the failure of this axiom is consistent with its relativization to winning sets. We then establish more subtle properties of winning non-well-founded sets. We describe all classes of ordinals for which the following is consistent: winning sets without minimal elements (in the sense of membership) occur exactly at the levels indexed by the ordinals of this class. In particular, we show that if an even level of the hierarchy of winning sets contains a set without minimal elements, then all higher levels contain such sets. We show that the failure of the axiom of regularity implies that all odd levels contain sets without minimal elements, but it is consistent with the absence of such sets at all even levels as well as with their appearance at an arbitrary even non-limit or countable-cofinal level. To obtain consistency results, we propose a new method for obtaining models with non-well-founded sets. Finally, we study how long this game can…

  15. Models for setting ATM parameter values

    DEFF Research Database (Denmark)

    Blaabjerg, Søren; Gravey, A.; Romæuf, L.

    1996-01-01

    In ATM networks, a user should negotiate at connection set-up a traffic contract which includes traffic characteristics and requested QoS. The traffic characteristics currently considered are the Peak Cell Rate, the Sustainable Cell Rate, the Intrinsic Burst Tolerance and the Cell Delay Variation (CDV) tolerance(s). The values taken by these traffic parameters characterize the so-called "Worst Case Traffic" that is used by CAC procedures for accepting a new connection and allocating resources to it. Conformance to the negotiated traffic characteristics is defined, at the ingress User… It is essential to set traffic characteristic values that are relevant to the considered cell stream, and that ensure that the amount of non-conforming traffic is small. Using a queueing model representation for the GCRA formalism, several methods are available for choosing the traffic characteristics. This paper…

  16. Topology optimization in acoustics and elasto-acoustics via a level-set method

    Science.gov (United States)

    Desai, J.; Faure, A.; Michailidis, G.; Parry, G.; Estevez, R.

    2018-04-01

    Optimizing the shape and topology (S&T) of structures to improve their acoustic performance is quite challenging. The exact position of the structural boundary is usually of critical importance, which dictates the use of geometric methods for topology optimization instead of standard density approaches. The goal of the present work is to investigate different possibilities for handling topology optimization problems in acoustics and elasto-acoustics via a level-set method. From a theoretical point of view, we detail two equivalent ways to perform the derivation of surface-dependent terms and propose a smoothing technique for treating problems of boundary conditions optimization. In the numerical part, we examine the importance of the surface-dependent term in the shape derivative, neglected in previous studies found in the literature, on the optimal designs. Moreover, we test different mesh adaptation choices, as well as technical details related to the implicit surface definition in the level-set approach. We present results in two and three-space dimensions.

  17. Setting analyst: A practical harvest planning technique

    Science.gov (United States)

    Olivier R.M. Halleux; W. Dale Greene

    2001-01-01

    Setting Analyst is an ArcView extension that facilitates practical harvest planning for ground-based systems. By modeling the travel patterns of ground-based machines, it compares different harvesting settings based on projected average skidding distance, logging costs, and site disturbance levels. Setting Analyst uses information commonly available to consulting...

  18. Particle filters for random set models

    CERN Document Server

    Ristic, Branko

    2013-01-01

    “Particle Filters for Random Set Models” presents coverage of state estimation of stochastic dynamic systems from noisy measurements, specifically sequential Bayesian estimation and nonlinear or stochastic filtering. The class of solutions presented in this book is based  on the Monte Carlo statistical method. The resulting  algorithms, known as particle filters, in the last decade have become one of the essential tools for stochastic filtering, with applications ranging from  navigation and autonomous vehicles to bio-informatics and finance. While particle filters have been around for more than a decade, the recent theoretical developments of sequential Bayesian estimation in the framework of random set theory have provided new opportunities which are not widely known and are covered in this book. These recent developments have dramatically widened the scope of applications, from single to multiple appearing/disappearing objects, from precise to imprecise measurements and measurement models. This book...

  19. Search for the standard model Higgs Boson produced in association with top quarks using the full CDF data set.

    Science.gov (United States)

    Aaltonen, T; Álvarez González, B; Amerio, S; Amidei, D; Anastassov, A; Annovi, A; Antos, J; Apollinari, G; Appel, J A; Arisawa, T; Artikov, A; Asaadi, J; Ashmanskas, W; Auerbach, B; Aurisano, A; Azfar, F; Badgett, W; Bae, T; Barbaro-Galtieri, A; Barnes, V E; Barnett, B A; Barria, P; Bartos, P; Bauce, M; Bedeschi, F; Behari, S; Bellettini, G; Bellinger, J; Benjamin, D; Beretvas, A; Bhatti, A; Bisello, D; Bizjak, I; Bland, K R; Blumenfeld, B; Bocci, A; Bodek, A; Bortoletto, D; Boudreau, J; Boveia, A; Brigliadori, L; Bromberg, C; Brucken, E; Budagov, J; Budd, H S; Burkett, K; Busetto, G; Bussey, P; Buzatu, A; Calamba, A; Calancha, C; Camarda, S; Campanelli, M; Campbell, M; Canelli, F; Carls, B; Carlsmith, D; Carosi, R; Carrillo, S; Carron, S; Casal, B; Casarsa, M; Castro, A; Catastini, P; Cauz, D; Cavaliere, V; Cavalli-Sforza, M; Cerri, A; Cerrito, L; Chen, Y C; Chertok, M; Chiarelli, G; Chlachidze, G; Chlebana, F; Cho, K; Chokheli, D; Chung, W H; Chung, Y S; Ciocci, M A; Clark, A; Clarke, C; Compostella, G; Connors, J; Convery, M E; Conway, J; Corbo, M; Cordelli, M; Cox, C A; Cox, D J; Crescioli, F; Cuevas, J; Culbertson, R; Dagenhart, D; d'Ascenzo, N; Datta, M; de Barbaro, P; Dell'Orso, M; Demortier, L; Deninno, M; Devoto, F; d'Errico, M; Di Canto, A; Di Ruzza, B; Dittmann, J R; D'Onofrio, M; Donati, S; Dong, P; Dorigo, M; Dorigo, T; Ebina, K; Elagin, A; Eppig, A; Erbacher, R; Errede, S; Ershaidat, N; Eusebi, R; Farrington, S; Feindt, M; Fernandez, J P; Field, R; Flanagan, G; Forrest, R; Frank, M J; Franklin, M; Freeman, J C; Funakoshi, Y; Furic, I; Gallinaro, M; Garcia, J E; Garfinkel, A F; Garosi, P; Gerberich, H; Gerchtein, E; Giagu, S; Giakoumopoulou, V; Giannetti, P; Gibson, K; Ginsburg, C M; Giokaris, N; Giromini, P; Giurgiu, G; Glagolev, V; Glenzinski, D; Gold, M; Goldin, D; Goldschmidt, N; Golossanov, A; Gomez, G; Gomez-Ceballos, G; Goncharov, M; González, O; Gorelov, I; Goshaw, A T; Goulianos, K; Grinstein, S; Grosso-Pilcher, C; Group, R C; Guimaraes da Costa, J; Hahn, S R; Halkiadakis, E; Hamaguchi, A; Han, J Y; Happacher, F; Hara, K; Hare, D; Hare, M; Harr, R F; Hatakeyama, K; Hays, C; Heck, M; Heinrich, J; Herndon, M; Hewamanage, S; Hocker, A; Hopkins, W; Horn, D; Hou, S; Hughes, R E; Hurwitz, M; Husemann, U; Hussain, N; Hussein, M; Huston, J; Introzzi, G; Iori, M; Ivanov, A; James, E; Jang, D; Jayatilaka, B; Jeon, E J; Jindariani, S; Jones, M; Joo, K K; Jun, S Y; Junk, T R; Kamon, T; Karchin, P E; Kasmi, A; Kato, Y; Ketchum, W; Keung, J; Khotilovich, V; Kilminster, B; Kim, D H; Kim, H S; Kim, J E; Kim, M J; Kim, S B; Kim, S H; Kim, Y K; Kim, Y J; Kimura, N; Kirby, M; Klimenko, S; Knoepfel, K; Kondo, K; Kong, D J; Konigsberg, J; Kotwal, A V; Kreps, M; Kroll, J; Krop, D; Kruse, M; Krutelyov, V; Kuhr, T; Kurata, M; Kwang, S; Laasanen, A T; Lami, S; Lammel, S; Lancaster, M; Lander, R L; Lannon, K; Lath, A; Latino, G; Lecompte, T; Lee, E; Lee, H S; Lee, J S; Lee, S W; Leo, S; Leone, S; Lewis, J D; Limosani, A; Lin, C-J; Lindgren, M; Lipeles, E; Lister, A; Litvintsev, D O; Liu, C; Liu, H; Liu, Q; Liu, T; Lockwitz, S; Loginov, A; Lucchesi, D; Lueck, J; Lujan, P; Lukens, P; Lungu, G; Lys, J; Lysak, R; Madrak, R; Maeshima, K; Maestro, P; Malik, S; Manca, G; Manousakis-Katsikakis, A; Margaroli, F; Marino, C; Martínez, M; Mastrandrea, P; Matera, K; Mattson, M E; Mazzacane, A; Mazzanti, P; McFarland, K S; McIntyre, P; McNulty, R; Mehta, A; Mehtala, P; Mesropian, C; Miao, T; Mietlicki, D; Mitra, A; Miyake, H; Moed, S; Moggi, N; Mondragon, M N; Moon, C S; Moore, R; Morello, M J; Morlock, 
J; Movilla Fernandez, P; Mukherjee, A; Muller, Th; Murat, P; Mussini, M; Nachtman, J; Nagai, Y; Naganoma, J; Nakano, I; Napier, A; Nett, J; Neu, C; Neubauer, M S; Nielsen, J; Nodulman, L; Noh, S Y; Norniella, O; Oakes, L; Oh, S H; Oh, Y D; Oksuzian, I; Okusawa, T; Orava, R; Ortolan, L; Pagan Griso, S; Pagliarone, C; Palencia, E; Papadimitriou, V; Paramonov, A A; Patrick, J; Pauletta, G; Paulini, M; Paus, C; Pellett, D E; Penzo, A; Phillips, T J; Piacentino, G; Pianori, E; Pilot, J; Pitts, K; Plager, C; Pondrom, L; Poprocki, S; Potamianos, K; Prokoshin, F; Pranko, A; Ptohos, F; Punzi, G; Rahaman, A; Ramakrishnan, V; Ranjan, N; Redondo, I; Renton, P; Rescigno, M; Riddick, T; Rimondi, F; Ristori, L; Robson, A; Rodrigo, T; Rodriguez, T; Rogers, E; Rolli, S; Roser, R; Ruffini, F; Ruiz, A; Russ, J; Rusu, V; Safonov, A; Sakumoto, W K; Sakurai, Y; Santi, L; Sato, K; Saveliev, V; Savoy-Navarro, A; Schlabach, P; Schmidt, A; Schmidt, E E; Schwarz, T; Scodellaro, L; Scribano, A; Scuri, F; Seidel, S; Seiya, Y; Semenov, A; Sforza, F; Shalhout, S Z; Shears, T; Shepard, P F; Shimojima, M; Shochet, M; Shreyber-Tecker, I; Simonenko, A; Sinervo, P; Sliwa, K; Smith, J R; Snider, F D; Soha, A; Sorin, V; Song, H; Squillacioti, P; Stancari, M; St Denis, R; Stelzer, B; Stelzer-Chilton, O; Stentz, D; Strologas, J; Strycker, G L; Sudo, Y; Sukhanov, A; Suslov, I; Takemasa, K; Takeuchi, Y; Tang, J; Tecchio, M; Teng, P K; Thom, J; Thome, J; Thompson, G A; Thomson, E; Toback, D; Tokar, S; Tollefson, K; Tomura, T; Tonelli, D; Torre, S; Torretta, D; Totaro, P; Trovato, M; Ukegawa, F; Uozumi, S; Varganov, A; Vázquez, F; Velev, G; Vellidis, C; Vidal, M; Vila, I; Vilar, R; Vizán, J; Vogel, M; Volpi, G; Wagner, P; Wagner, R L; Wakisaka, T; Wallny, R; Wang, S M; Warburton, A; Waters, D; Wester, W C; Whiteson, D; Wicklund, A B; Wicklund, E; Wilbur, S; Wick, F; Williams, H H; Wilson, J S; Wilson, P; Winer, B L; Wittich, P; Wolbers, S; Wolfe, H; Wright, T; Wu, X; Wu, Z; Yamamoto, K; Yamato, D; Yang, T; Yang, U K; Yang, Y C; Yao, W-M; Yeh, G P; Yi, K; Yoh, J; Yorita, K; Yoshida, T; Yu, G B; Yu, I; Yu, S S; Yun, J C; Zanetti, A; Zeng, Y; Zhou, C; Zucchelli, S

    2012-11-02

    A search is presented for the standard model Higgs boson produced in association with top quarks using the full Run II proton-antiproton collision data set, corresponding to 9.45 fb⁻¹, collected by the Collider Detector at Fermilab. No significant excess over the expected background is observed, and 95% credibility-level upper bounds are placed on the cross section σ(ttH → lepton + missing transverse energy + jets). For a Higgs boson mass of 125 GeV/c², we expect to set a limit of 12.6 and observe a limit of 20.5 times the standard model rate. This represents the most sensitive search for a standard model Higgs boson in this channel to date.

  20. Computerized detection of multiple sclerosis candidate regions based on a level set method using an artificial neural network

    International Nuclear Information System (INIS)

    Kuwazuru, Junpei; Magome, Taiki; Arimura, Hidetaka; Yamashita, Yasuo; Oki, Masafumi; Toyofuku, Fukai; Kakeda, Shingo; Yamamoto, Daisuke

    2010-01-01

    Yamamoto et al. developed a system for computer-aided detection of multiple sclerosis (MS) candidate regions. In the level set method used in their approach, they employed a constant threshold value for the edge indicator function related to the speed function of the level set method. However, it would be more appropriate to adjust the threshold value to each MS candidate region, because the edge magnitudes of MS candidates differ from each other. The purpose of this study was to develop a computerized detection of MS candidate regions in MR images based on a level set method using an artificial neural network (ANN). To adjust the threshold value of the edge indicator function in the level set method to each true positive (TP) and false positive (FP) region, we constructed an ANN. The ANN could provide a suitable threshold value for each candidate region in the proposed level set method so that TP regions can be segmented and FP regions can be removed. The proposed method detected MS regions at a sensitivity of 82.1% with 0.204 FPs per slice, and the similarity index of MS candidate regions was 0.717 on average. (author)

  1. Answer Sets in a Fuzzy Equilibrium Logic

    Science.gov (United States)

    Schockaert, Steven; Janssen, Jeroen; Vermeir, Dirk; de Cock, Martine

    Since its introduction, answer set programming has been generalized in many directions, to cater to the needs of real-world applications. As one of the most general “classical” approaches, answer sets of arbitrary propositional theories can be defined as models in the equilibrium logic of Pearce. Fuzzy answer set programming, on the other hand, extends answer set programming with the capability of modeling continuous systems. In this paper, we combine the expressiveness of both approaches, and define answer sets of arbitrary fuzzy propositional theories as models in a fuzzification of equilibrium logic. We show that the resulting notion of answer set is compatible with existing definitions, when the syntactic restrictions of the corresponding approaches are met. We furthermore locate the complexity of the main reasoning tasks at the second level of the polynomial hierarchy. Finally, as an illustration of its modeling power, we show how fuzzy equilibrium logic can be used to find strong Nash equilibria.

  2. A HIERARCHICAL SET OF MODELS FOR SPECIES RESPONSE ANALYSIS

    NARCIS (Netherlands)

    HUISMAN, J; OLFF, H; FRESCO, LFM

    Variation in the abundance of species in space and/or time can be caused by a wide range of underlying processes. Before such causes can be analysed we need simple mathematical models which can describe the observed response patterns. For this purpose a hierarchical set of models is presented. These…

  3. A hierarchical set of models for species response analysis

    NARCIS (Netherlands)

    Huisman, J.; Olff, H.; Fresco, L.F.M.

    1993-01-01

    Variation in the abundance of species in space and/or time can be caused by a wide range of underlying processes. Before such causes can be analysed we need simple mathematical models which can describe the observed response patterns. For this purpose a hierarchical set of models is presented. These…

  4. The GRENE-TEA model intercomparison project (GTMIP) Stage 1 forcing data set

    Science.gov (United States)

    Sueyoshi, T.; Saito, K.; Miyazaki, S.; Mori, J.; Ise, T.; Arakida, H.; Suzuki, R.; Sato, A.; Iijima, Y.; Yabuki, H.; Ikawa, H.; Ohta, T.; Kotani, A.; Hajima, T.; Sato, H.; Yamazaki, T.; Sugimoto, A.

    2016-01-01

    Here, the authors describe the construction of a forcing data set for land surface models (including both physical and biogeochemical models; LSMs) with eight meteorological variables for the 35-year period from 1979 to 2013. The data set is intended for use in a model intercomparison study, called GTMIP, which is a part of the Japanese-funded Arctic Climate Change Research Project. In order to prepare a set of site-fitted forcing data for LSMs with realistic yet continuous entries (i.e. without missing data), four observational sites across the pan-Arctic region (Fairbanks, Tiksi, Yakutsk, and Kevo) were selected to construct a blended data set using both global reanalysis and observational data. Marked improvements were found in the diurnal cycles of surface air temperature and humidity, wind speed, and precipitation. The data sets and participation in GTMIP are open to the scientific community (doi:10.17592/001.2015093001).

  5. An Analysis of Turkey's PISA 2015 Results Using Two-Level Hierarchical Linear Modelling

    Science.gov (United States)

    Atas, Dogu; Karadag, Özge

    2017-01-01

    In the field of education, most of the data collected are multi-level structured. Cities, city-based schools, school-based classes and finally students in the classrooms constitute a hierarchical structure. Hierarchical linear models give more accurate results compared to standard models when the data set has a nested structure going as far down as individuals…
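
    For readers unfamiliar with the technique, a generic two-level random-coefficient model of the kind used in such analyses can be written as follows; this is a textbook formulation with a hypothetical student-level predictor x_ij and school-level predictor w_j, not the authors' exact specification:

      y_{ij} = \beta_{0j} + \beta_{1j} x_{ij} + \varepsilon_{ij}    % level 1: student i in school j
      \beta_{0j} = \gamma_{00} + \gamma_{01} w_{j} + u_{0j}         % level 2: school intercept
      \beta_{1j} = \gamma_{10} + u_{1j}                             % level 2: school slope

    The level-2 residuals u_{0j} and u_{1j} capture between-school variation; setting them to zero recovers an ordinary single-level regression.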

  6. Kir2.1 channels set two levels of resting membrane potential with inward rectification.

    Science.gov (United States)

    Chen, Kuihao; Zuo, Dongchuan; Liu, Zheng; Chen, Haijun

    2018-04-01

    Strong inward rectifier K⁺ channels (Kir2.1) mediate background K⁺ currents primarily responsible for maintenance of the resting membrane potential. Multiple types of cells exhibit two levels of resting membrane potential. Kir2.1 and K2P1 currents counterbalance, partially accounting for the phenomenon in human cardiomyocytes in subphysiological extracellular K⁺ concentrations or pathological hypokalemic conditions. The mechanism of how Kir2.1 channels contribute to the two levels of resting membrane potential in different types of cells is not well understood. Here we test the hypothesis that Kir2.1 channels set two levels of resting membrane potential with inward rectification. Under hypokalemic conditions, Kir2.1 currents counterbalance HCN2 or HCN4 cation currents in CHO cells that heterologously express both channels, generating N-shaped current-voltage relationships that cross the voltage axis three times and reconstituting two levels of resting membrane potential. Blockade of HCN channels eliminated the phenomenon in K2P1-deficient Kir2.1-expressing human cardiomyocytes derived from induced pluripotent stem cells or in CHO cells expressing both Kir2.1 and HCN2 channels. Weakly inward rectifier Kir4.1 or inward rectification-deficient Kir2.1•E224G mutant channels do not set such two levels of resting membrane potential when co-expressed with HCN2 channels in CHO cells or when overexpressed in human cardiomyocytes derived from induced pluripotent stem cells. These findings demonstrate a common mechanism whereby Kir2.1 channels set two levels of resting membrane potential with inward rectification by balancing inward currents through different cation channels, such as hyperpolarization-activated HCN channels or hypokalemia-induced K2P1 leak channels.
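
    The N-shaped current-voltage relationship can be illustrated numerically: summing an inwardly rectifying K⁺ conductance and a hyperpolarization-activated cation conductance yields a net curve that crosses zero three times, and the two crossings with positive slope are the stable resting potentials. The sketch below uses simple Boltzmann gating with purely illustrative parameters, not values fitted from the paper.

      import numpy as np

      # Illustrative Kir-like current: conducts strongly near and below E_K,
      # rectifies steeply shut with depolarization (units: nS, mV, pA).
      def i_kir(v, g=30.0, e_k=-90.0, v_half=-78.0, k=4.0):
          return g * (v - e_k) / (1.0 + np.exp((v - v_half) / k))

      # Illustrative HCN-like current: activates on hyperpolarization,
      # reverses around -20 mV.
      def i_hcn(v, g=2.0, e_rev=-20.0, v_half=-70.0, k=12.0):
          return g * (v - e_rev) / (1.0 + np.exp((v - v_half) / k))

      v = np.linspace(-100.0, -10.0, 9001)
      i_net = i_kir(v) + i_hcn(v)

      # Zero crossings of the net I-V curve; positive slope = stable.
      for idx in np.where(np.diff(np.sign(i_net)) != 0)[0]:
          slope = i_net[idx + 1] - i_net[idx]
          print(f"V = {v[idx]:.1f} mV ({'stable' if slope > 0 else 'unstable'})")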

  7. Level of health care and services in a tertiary health setting in Nigeria

    African Journals Online (AJOL)

    Level of health care and services in a tertiary health setting in Nigeria. ... Background: There is a growing awareness and demand for quality health care across the world; hence the ... Doctors and nurses formed 64.3% of the study population.

  8. Analysis model for forecasting extreme temperature using refined rank set pair

    Directory of Open Access Journals (Sweden)

    Qiao Ling-Xia

    2013-01-01

    Full Text Available In order to improve the precision of forecasting extreme temperature time series, a refined rank set pair analysis model with a refined rank transformation function is proposed. The measured values of the annual highest temperature in July of two Chinese cities, Taiyuan and Shijiazhuang, are used to examine the performance of the refined rank set pair model.

  9. Multivariate modeling of complications with data driven variable selection: Guarding against overfitting and effects of data set size

    International Nuclear Information System (INIS)

    Schaaf, Arjen van der; Xu Chengjian; Luijk, Peter van; Veld, Aart A. van’t; Langendijk, Johannes A.; Schilstra, Cornelis

    2012-01-01

    …data sets, in particular in data sets with a low number of events (median: 7, 95th percentile: 32). Recognizing overfitting from an inverted sign of the estimated model coefficients has a limited discriminative value. Conclusions: Despite considerable spread around the optimal number of selected variables, the bootstrapping method is efficient and accurate for sufficiently large data sets, and guards against overfitting for all simulated cases with the exception of some data sets with a particularly low number of events. An appropriate minimum data set size to obtain a model with high predictive power is approximately 200 patients and more than 32 events. With fewer data samples the true predictive power decreases rapidly, and for larger data set sizes the benefit levels off toward an asymptotic maximum predictive power.

  10. Setting up a hydrological model based on global data for the Ayeyarwady basin in Myanmar

    Science.gov (United States)

    ten Velden, Corine; Sloff, Kees; Nauta, Tjitte

    2017-04-01

    The use of global datasets in local hydrological modelling can be of great value. It opens up the possibility to include data for areas where local data are not, or only sparsely, available. In hydrological modelling the existence of both static physical data, such as elevation and land use, and dynamic meteorological data, such as precipitation and temperature, is essential for setting up a hydrological model, but often such data are difficult to obtain at the local level. For the Ayeyarwady catchment in Myanmar a distributed hydrological model (Wflow: https://github.com/openstreams/wflow) was set up with only global datasets, as part of a water resources study. Myanmar is an emerging economy, which has only recently become more receptive to foreign influences. It has a very limited hydrometeorological measurement network, with large spatial and temporal gaps, and data that are of uncertain quality and difficult to obtain. The hydrological model was thus set up based on resampled versions of the SRTM digital elevation model, the GlobCover land cover dataset and the HWSD soil dataset. Three global meteorological datasets were assessed and compared for use in the hydrological model: TRMM, WFDEI and MSWEP. The meteorological datasets were assessed based on their conformity with several precipitation station measurements, and the overall model performance was assessed by calculating the NSE and RVE based on discharge measurements of several gauging stations. The model was run for the period 1979-2012 on a daily time step, and the results show an acceptable applicability of the used global datasets in the hydrological model. The WFDEI forcing dataset gave the best results, with an NSE of 0.55 at the outlet of the model and an RVE of 8.5%, calculated over the calibration period 2006-2012. As a general trend the modelled discharge at the upstream stations tends to be underestimated, and at the downstream stations slightly overestimated. The quality of the discharge measurements…
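
    For readers unfamiliar with the two scores quoted above, the Nash-Sutcliffe efficiency (NSE) and relative volume error (RVE) are typically computed from paired observed and simulated discharge series as sketched below; these are the standard definitions, not code from the study itself.

      import numpy as np

      def nse(obs, sim):
          # Nash-Sutcliffe efficiency: 1 is a perfect fit; 0 means the model
          # predicts no better than the mean of the observations.
          obs, sim = np.asarray(obs, float), np.asarray(sim, float)
          return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

      def rve(obs, sim):
          # Relative volume error in percent; positive values mean the
          # simulated total discharge volume exceeds the observed one.
          obs, sim = np.asarray(obs, float), np.asarray(sim, float)
          return 100.0 * (sim.sum() - obs.sum()) / obs.sum()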

  11. [Cardiac Synchronization Function Estimation Based on ASM Level Set Segmentation Method].

    Science.gov (United States)

    Zhang, Yaonan; Gao, Yuan; Tang, Liang; He, Ying; Zhang, Huie

    At present there are no accurate, quantitative methods for determining cardiac mechanical synchronism, and quantitative determination of the synchronization function of the four cardiac cavities from medical images has great clinical value. This paper uses whole-heart ultrasound image sequences and segments the left and right atria and left and right ventricles in each frame. After segmentation, the number of pixels in each cavity in each frame is recorded, and the areas of the four cavities across the image sequence are thereby obtained. The area change curves of the four cavities are then extracted, yielding the synchronization information of the four cavities. Because of the low SNR of ultrasound images, the boundary lines of the cardiac cavities are vague, so the extraction of cardiac contours remains a challenging problem. Therefore, ASM model information is added to the traditional level set method to guide the curve evolution process. According to the experimental results, the improved method increases the accuracy of the segmentation. Furthermore, based on the ventricular segmentation, the right and left ventricular systolic functions are evaluated, mainly according to the area changes. The synchronization of the four cavities of the heart is estimated based on the area and volume changes.

  12. Latent Variable Regression 4-Level Hierarchical Model Using Multisite Multiple-Cohorts Longitudinal Data. CRESST Report 801

    Science.gov (United States)

    Choi, Kilchan

    2011-01-01

    This report explores a new latent variable regression 4-level hierarchical model for monitoring school performance over time using multisite multiple-cohorts longitudinal data. This kind of data set has a 4-level hierarchical structure: time-series observation nested within students who are nested within different cohorts of students. These…

  13. Numerical simulation of interface movement in gas-liquid two-phase flows with Level Set method

    International Nuclear Information System (INIS)

    Li Huixiong; Chinese Academy of Sciences, Beijing; Deng Sheng; Chen Tingkuan; Zhao Jianfu; Wang Fei

    2005-01-01

    Numerical simulation of gas-liquid two-phase flow and heat transfer has attracted attention for a long time but remains difficult owing to the inherent complexities of gas-liquid two-phase flow, which result from the existence of moving interfaces with topology changes. This paper reports the effort and the latest advances made by the authors, with special emphasis on methods for computing solutions to the advection equation of the Level Set function, which is used to capture the moving interfaces in gas-liquid two-phase flows. Three different schemes, i.e. a simple finite difference scheme, the Superbee-TVD scheme and the fifth-order WENO scheme, in combination with the Runge-Kutta method, are applied to solve the advection equation of the Level Set. A numerical procedure based on the well-verified SIMPLER method is employed to solve the momentum equations of the two-phase flow. The three schemes are employed to simulate the movement of four typical interfaces under five typical flow conditions. Analysis of the numerical results shows that the fifth-order WENO scheme and the Superbee-TVD scheme are much better than the simple finite difference scheme, and that the fifth-order WENO scheme is the best for computing solutions to the advection equation of the Level Set. The fifth-order WENO scheme will be employed as the main scheme for solving the advection equations of the Level Set when gas-liquid two-phase flows are numerically studied in the future. (authors)
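
    As a point of reference for the schemes compared above, the sketch below shows a minimal first-order upwind discretization of the one-dimensional level-set advection equation phi_t + u*phi_x = 0; the Superbee-TVD and fifth-order WENO schemes discussed in the record are higher-order refinements of this same update. The profile and parameters are illustrative only.

      import numpy as np

      def advect_upwind(phi, u, dx, dt, steps):
          # First-order upwind update for phi_t + u*phi_x = 0, periodic grid.
          phi = phi.copy()
          for _ in range(steps):
              back = (phi - np.roll(phi, 1)) / dx     # backward difference
              fwd = (np.roll(phi, -1) - phi) / dx     # forward difference
              # Pick the one-sided difference lying upwind of the velocity.
              phi -= dt * np.where(u > 0, u * back, u * fwd)
          return phi

      # Transport a signed-distance-like profile to the right (CFL = 0.4).
      x = np.linspace(0.0, 1.0, 201)
      phi0 = np.abs(x - 0.3) - 0.1   # zero level set at x = 0.2 and x = 0.4
      phi = advect_upwind(phi0, u=1.0, dx=x[1] - x[0], dt=0.002, steps=100)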

  14. Endogenous Currency of Price Setting in a Dynamic Open Economy Model

    OpenAIRE

    Michael B. Devereux; Charles Engel

    2001-01-01

    Many papers in the recent literature in open economy macroeconomics make different assumptions about the currency in which firms set their export prices when nominal prices must be pre-set. But to date, all of these studies take the currency of price setting as exogenous. This paper sets up a simple two-country general equilibrium model in which exporting firms can choose the currency in which they set prices for sales to foreign markets. We make two alternative assumptions about the structur...

  15. Numerical simulations of natural or mixed convection in vertical channels: comparisons of level-set numerical schemes for the modeling of immiscible incompressible fluid flows

    International Nuclear Information System (INIS)

    Li, R.

    2012-01-01

    The aim of this research dissertation is to study natural and mixed convection of fluid flows, and to develop and validate numerical schemes for interface tracking so as to later treat incompressible and immiscible fluid flows. In a first step, an original numerical method, based on Finite Volume discretizations, is developed for modeling low Mach number flows with large temperature differences. Three physical applications of air flowing through vertical heated parallel plates were investigated. We showed that the optimum spacing corresponding to the peak heat flux transferred from an array of isothermal parallel plates cooled by mixed convection is smaller than that for natural or forced convection when the pressure drop at the outlet is held constant. We also proved that mixed convection flows resulting from an imposed flow rate may exhibit unexpected physical solutions; an alternative model based on a prescribed total pressure at the inlet and a fixed pressure at the outlet sections gives more realistic results. For channels heated by a heat flux on one wall only, surface radiation tends to suppress the onset of re-circulations at the outlet and to equalize the wall temperatures. In a second step, the mathematical model coupling the incompressible Navier-Stokes equations and the Level-Set method for interface tracking is derived. Improvements in fluid volume conservation obtained by using high-order discretization (ENO-WENO) schemes for the transport equation and variants of the signed distance equation are discussed. (author)

  16. An evaluation of four crop:weed competition models using a common data set

    NARCIS (Netherlands)

    Deen, W.; Cousens, R.; Warringa, J.; Bastiaans, L.; Carberry, P.; Rebel, K.; Riha, S.; Murphy, C.; Benjamin, L.R.; Cloughley, C.; Cussans, J.; Forcella, F.

    2003-01-01

    To date, several crop:weed competition models have been developed. Developers of the various models were invited to compare model performance using a common data set. The data set consisted of wheat and Lolium rigidum grown in monoculture and mixtures under dryland and irrigated conditions.

  17. Optimal Interest-Rate Setting in a Dynamic IS/AS Model

    DEFF Research Database (Denmark)

    Jensen, Henrik

    2011-01-01

    This note deals with interest-rate setting in a simple dynamic macroeconomic setting. The purpose is to present some basic and central properties of an optimal interest-rate rule. The model framework predates the New-Keynesian paradigm of the late 1990s and onwards (it is accordingly dubbed “Old-Keynesian”)…

  18. What Time Is Sunrise? Revisiting the Refraction Component of Sunrise/set Prediction Models

    Science.gov (United States)

    Wilson, Teresa; Bartlett, Jennifer L.; Hilton, James Lindsay

    2017-01-01

    Algorithms that predict sunrise and sunset times currently have an error of one to four minutes at mid-latitudes (0° - 55° N/S) due to limitations in the atmospheric models they incorporate. At higher latitudes, slight changes in refraction can cause significant discrepancies, even including difficulties determining when the Sun appears to rise or set. While different components of refraction are known, how they affect predictions of sunrise/set has not yet been quantified. A better understanding of the contributions from temperature profile, pressure, humidity, and aerosols could significantly improve the standard prediction. We present a sunrise/set calculator that interchanges the refraction component by varying the refraction model. We then compare these predictions with data sets of observed rise/set times to create a better model. Sunrise/set times and meteorological data from multiple locations will be necessary for a thorough investigation of the problem. While there are a few data sets available, we will also begin collecting this data using smartphones as part of a citizen science project. The mobile application for this project will be available in the Google Play store. Data analysis will lead to more complete models that will provide more accurate rise/set times for the benefit of astronomers, navigators, and outdoorsmen everywhere.
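
    For context, the baseline those algorithms share treats the Sun as rising or setting when its center sits about 50 arcminutes below the geometric horizon: 34′ for average atmospheric refraction plus 16′ for the solar semidiameter. A minimal sketch of the resulting hour-angle computation follows (the standard textbook formula, not the calculator described in this record):

      import numpy as np

      def sunrise_hour_angle(lat_deg, dec_deg, altitude_deg=-0.833):
          # Hour angle at which the Sun's center reaches the given altitude;
          # -0.833 deg = -50' (34' mean refraction + 16' solar semidiameter).
          lat, dec, h0 = np.radians([lat_deg, dec_deg, altitude_deg])
          cos_h = (np.sin(h0) - np.sin(lat) * np.sin(dec)) / (np.cos(lat) * np.cos(dec))
          if abs(cos_h) > 1.0:
              return None  # polar day or polar night: no rise/set event
          return np.degrees(np.arccos(cos_h))  # degrees from noon; /15 gives hours

      # At the equinox (dec = 0) at 45 N the hour angle slightly exceeds 90 deg,
      # i.e. the day is a little longer than 12 h because of refraction.
      print(sunrise_hour_angle(45.0, 0.0))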

  19. Scope of physician procedures independently billed by mid-level providers in the office setting.

    Science.gov (United States)

    Coldiron, Brett; Ratnarathorn, Mondhipa

    2014-11-01

    Mid-level providers (nurse practitioners and physician assistants) were originally envisioned to provide primary care services in underserved areas. This study details the current scope of independent procedural billing to Medicare of difficult, invasive, and surgical procedures by medical mid-level providers. Objective: to understand the scope of independent billing to Medicare for procedures performed by mid-level providers in an outpatient office setting for a calendar year. Design: analyses of the 2012 Medicare Physician/Supplier Procedure Summary Master File, which reflects fee-for-service claims that were paid by Medicare, for Current Procedural Terminology procedures independently billed by mid-level providers. Setting: outpatient office setting among health care providers. Main outcome: the scope of independent billing to Medicare for procedures performed by mid-level providers. In 2012, nurse practitioners and physician assistants billed independently for more than 4 million procedures at our cutoff of 5,000 paid claims per procedure. Most (54.8%) of these procedures were performed in the specialty area of dermatology. The findings of this study are relevant to safety and quality of care. Recently, the shortage of primary care clinicians has prompted discussion of widening the scope of practice for mid-level providers. It would be prudent to temper widening the scope of practice of mid-level providers by recognizing that mid-level providers are not solely limited to primary care, and may perform procedures for which they may not have formal training.

  20. The utility of imputed matched sets. Analyzing probabilistically linked databases in a low information setting.

    Science.gov (United States)

    Thomas, A M; Cook, L J; Dean, J M; Olson, L M

    2014-01-01

    To compare results from high probability matched sets versus imputed matched sets across differing levels of linkage information. A series of linkages with varying amounts of available information were performed on two simulated datasets derived from multiyear motor vehicle crash (MVC) and hospital databases, where true matches were known. Distributions of high probability and imputed matched sets were compared against the true match population for occupant age, MVC county, and MVC hour. Regression models were fit to simulated log hospital charges and hospitalization status. High probability and imputed matched sets did not differ significantly from the true match population for occupant age, MVC county, and MVC hour in high information settings (p > 0.999). In low information settings, high probability matched sets differed significantly for occupant age and MVC county (p < …), whereas imputed matched sets did not (p > 0.493). High information settings saw no significant differences in inference of simulated log hospital charges and hospitalization status between the two methods. High probability and imputed matched sets differed significantly from the outcomes in low information settings; however, imputed matched sets were more robust. The level of information available to a linkage is an important consideration. High probability matched sets are suitable for high to moderate information settings and for situations involving case-specific analysis. Conversely, imputed matched sets are preferable for low information settings when conducting population-based analyses.

  1. A Variational Level Set Approach Based on Local Entropy for Image Segmentation and Bias Field Correction.

    Science.gov (United States)

    Tang, Jian; Jiang, Xiaoliang

    2017-01-01

    Image segmentation has always been a considerable challenge in image analysis and understanding due to intensity inhomogeneity, which is also commonly known as bias field. In this paper, we present a novel region-based approach based on local entropy for segmenting images and estimating the bias field simultaneously. First, a local Gaussian distribution fitting (LGDF) energy function is defined as a weighted energy integral, where the weight is the local entropy derived from the grey-level distribution of the local image. The means of this objective function carry a multiplicative factor that estimates the bias field in the transformed domain, so the bias field prior is fully used and our model can estimate the bias field more accurately. Finally, by minimizing this energy function with a level set regularization term, image segmentation and bias field estimation are achieved. Experiments on images of various modalities demonstrated the superior performance of the proposed method when compared with other state-of-the-art approaches.
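
    To make the entropy weighting concrete, the sketch below computes a per-pixel local entropy map from grey-level histograms over a sliding window; this is a generic construction consistent with the description above, not the authors' exact formulation.

      import numpy as np

      def local_entropy(img, radius=4, bins=32):
          # Shannon entropy of the grey-level histogram in a (2r+1)^2 window,
          # per pixel: flat regions score low, textured/edge regions high.
          img = np.asarray(img, float)
          lev = np.minimum((img / (img.max() + 1e-12) * bins).astype(int), bins - 1)
          h, w = img.shape
          out = np.zeros((h, w))
          for i in range(h):
              for j in range(w):
                  win = lev[max(0, i - radius):i + radius + 1,
                            max(0, j - radius):j + radius + 1]
                  p = np.bincount(win.ravel(), minlength=bins) / win.size
                  p = p[p > 0]
                  out[i, j] = -np.sum(p * np.log(p))
          return out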

  2. Automatic Generation of Building Models with Levels of Detail 1-3

    Science.gov (United States)

    Nguatem, W.; Drauschke, M.; Mayer, H.

    2016-06-01

    We present a workflow for the automatic generation of building models with levels of detail (LOD) 1 to 3 according to the CityGML standard (Gröger et al., 2012). We start by orienting unsorted image sets (Mayer et al., 2012), compute depth maps using semi-global matching (SGM) (Hirschmüller, 2008), and fuse these depth maps to reconstruct dense 3D point clouds (Kuhn et al., 2014). Based on planes segmented from these point clouds, we have developed a stochastic method for roof model selection (Nguatem et al., 2013) and window model selection (Nguatem et al., 2014). We demonstrate our workflow up to the export into CityGML.

  3. Tokunaga and Horton self-similarity for level set trees of Markov chains

    International Nuclear Information System (INIS)

    Zaliapin, Ilia; Kovchegov, Yevgeniy

    2012-01-01

    Highlights: self-similar properties of the level set trees for Markov chains are studied; Tokunaga and Horton self-similarity are established for symmetric Markov chains and regular Brownian motion; strong, distributional self-similarity is established for symmetric Markov chains with exponential jumps; it is conjectured that fractional Brownian motions are Tokunaga self-similar. Abstract: The Horton and Tokunaga branching laws provide a convenient framework for studying self-similarity in random trees. The Horton self-similarity is a weaker property that addresses the principal branching in a tree; it is a counterpart of the power-law size distribution for elements of a branching system. The stronger Tokunaga self-similarity addresses so-called side branching. The Horton and Tokunaga self-similarity have been empirically established in numerous observed and modeled systems, and proven for two paradigmatic models: the critical Galton–Watson branching process with finite progeny and the finite-tree representation of a regular Brownian excursion. This study establishes the Tokunaga and Horton self-similarity for a tree representation of a finite symmetric homogeneous Markov chain. We also extend the concept of Horton and Tokunaga self-similarity to infinite trees and establish self-similarity for an infinite-tree representation of a regular Brownian motion. We conjecture that fractional Brownian motions are also Tokunaga and Horton self-similar, with self-similarity parameters depending on the Hurst exponent.
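
    For orientation, the two branching laws named above are commonly stated as follows, where N_k is the number of branches of Horton-Strahler order k and T_{i,j} is the mean number of side branches of order i joining a branch of order j (standard definitions from the self-similar trees literature):

      \frac{N_k}{N_{k+1}} \to R \quad (\text{Horton law, with Horton exponent } R)

      T_{i,j} = T_{j-i}, \qquad T_k = a\,c^{\,k-1} \quad (\text{Tokunaga law})

    Horton self-similarity constrains only the branch counts N_k, while the Tokunaga constraint on side-branch statistics is the stronger property, as the abstract notes.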

  4. Model tracking system for low-level radioactive waste disposal facilities: License application interrogatories and responses

    Energy Technology Data Exchange (ETDEWEB)

    Benbennick, M.E.; Broton, M.S.; Fuoto, J.S.; Novgrod, R.L.

    1994-08-01

    This report describes a model tracking system for a low-level radioactive waste (LLW) disposal facility license application. In particular, the model tracks interrogatories (questions, requests for information, comments) and responses. A set of requirements and desired features for the model tracking system was developed, including required structure and computer screens. Nine tracking systems were then reviewed against the model system requirements and only two were found to meet all requirements. Using Kepner-Tregoe decision analysis, a model tracking system was selected.

  5. Model tracking system for low-level radioactive waste disposal facilities: License application interrogatories and responses

    International Nuclear Information System (INIS)

    Benbennick, M.E.; Broton, M.S.; Fuoto, J.S.; Novgrod, R.L.

    1994-08-01

    This report describes a model tracking system for a low-level radioactive waste (LLW) disposal facility license application. In particular, the model tracks interrogatories (questions, requests for information, comments) and responses. A set of requirements and desired features for the model tracking system was developed, including required structure and computer screens. Nine tracking systems were then reviewed against the model system requirements and only two were found to meet all requirements. Using Kepner-Tregoe decision analysis, a model tracking system was selected

  6. Meta-analysis of choice set generation effects on route choice model estimates and predictions

    DEFF Research Database (Denmark)

    Prato, Carlo Giacomo

    2012-01-01

    are applied for model estimation and results are compared to the ‘true model estimates’. Last, predictions from the simulation of models estimated with objective choice sets are compared to the ‘postulated predicted routes’. A meta-analytical approach allows synthesizing the effect of judgments......Large scale applications of behaviorally realistic transport models pose several challenges to transport modelers on both the demand and the supply sides. On the supply side, path-based solutions to the user assignment equilibrium problem help modelers in enhancing the route choice behavior...... modeling, but require them to generate choice sets by selecting a path generation technique and its parameters according to personal judgments. This paper proposes a methodology and an experimental setting to provide general indications about objective judgments for an effective route choice set generation...

  7. A simulation study on Bayesian Ridge regression models for several collinearity levels

    Science.gov (United States)

    Efendi, Achmad; Effrihan

    2017-12-01

    When analyzing data with a multiple regression model, if collinearity is present, one or several predictor variables are usually omitted from the model. Sometimes, however, there are reasons, for instance medical or economic ones, why the predictors are all important and should be included in the model. Ridge regression is not uncommonly used in such research to cope with collinearity. In this model, weights for predictor variables are used in estimating parameters. The estimation process can follow the concept of likelihood; alternatively, a Bayesian version can nowadays be used. The Bayesian estimation method has not matched the likelihood one in popularity due to some difficulties, computation and so forth; nevertheless, with the recent improvement of computational methodology, this caveat should no longer be a problem. This paper discusses a simulation process for evaluating the characteristics of Bayesian Ridge regression parameter estimates. Several simulation settings are considered, based on a variety of collinearity levels and sample sizes. The results show that the Bayesian method gives better performance for relatively small sample sizes, and for the other settings it performs similarly to the likelihood method.
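
    A minimal experiment in the spirit of the study can be run with scikit-learn's BayesianRidge estimator; the data below are synthetic, with two nearly collinear predictors and a small sample, and the estimator choice is ours for illustration rather than the authors' implementation.

      import numpy as np
      from sklearn.linear_model import BayesianRidge, LinearRegression

      rng = np.random.default_rng(0)
      n = 30                                     # deliberately small sample
      x1 = rng.normal(size=n)
      x2 = x1 + rng.normal(scale=0.05, size=n)   # nearly collinear with x1
      X = np.column_stack([x1, x2])
      y = x1 + x2 + rng.normal(scale=0.5, size=n)

      # Under near-collinearity, OLS coefficients are unstable (inflated,
      # possibly opposite in sign); the Bayesian ridge prior shrinks them.
      print(LinearRegression().fit(X, y).coef_)
      print(BayesianRidge().fit(X, y).coef_)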

  8. An analysis of a joint shear model for jointed media with orthogonal joint sets

    International Nuclear Information System (INIS)

    Koteras, J.R.

    1991-10-01

    This report describes a joint shear model used in conjunction with a computational model for jointed media with orthogonal joint sets. The joint shear model allows nonlinear behavior for both joint sets. Because nonlinear behavior is allowed for both joint sets, a great many cases must be considered to fully describe the joint shear behavior of the jointed medium. An extensive set of equations is required to describe the joint shear stress and slip displacements that can occur for all the various cases. This report examines possible methods for simplifying this set of equations so that the model can be implemented efficiently from a computational standpoint. The shear model must be examined carefully to obtain a computationally efficient implementation that does not lead to numerical problems. The application to fractures in rock is discussed. 5 refs., 4 figs

  9. Single-step reinitialization and extending algorithms for level-set based multi-phase flow simulations

    Science.gov (United States)

    Fu, Lin; Hu, Xiangyu Y.; Adams, Nikolaus A.

    2017-12-01

    We propose efficient single-step formulations for reinitialization and extending algorithms, which are critical components of level-set based interface-tracking methods. The level-set field is reinitialized with a single-step (non iterative) "forward tracing" algorithm. A minimum set of cells is defined that describes the interface, and reinitialization employs only data from these cells. Fluid states are extrapolated or extended across the interface by a single-step "backward tracing" algorithm. Both algorithms, which are motivated by analogy to ray-tracing, avoid multiple block-boundary data exchanges that are inevitable for iterative reinitialization and extending approaches within a parallel-computing environment. The single-step algorithms are combined with a multi-resolution conservative sharp-interface method and validated by a wide range of benchmark test cases. We demonstrate that the proposed reinitialization method achieves second-order accuracy in conserving the volume of each phase. The interface location is invariant to reapplication of the single-step reinitialization. Generally, we observe smaller absolute errors than for standard iterative reinitialization on the same grid. The computational efficiency is higher than for the standard and typical high-order iterative reinitialization methods. We observe a 2- to 6-times efficiency improvement over the standard method for serial execution. The proposed single-step extending algorithm, which is commonly employed for assigning data to ghost cells with ghost-fluid or conservative interface interaction methods, shows about 10-times efficiency improvement over the standard method while maintaining same accuracy. Despite their simplicity, the proposed algorithms offer an efficient and robust alternative to iterative reinitialization and extending methods for level-set based multi-phase simulations.
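
    For contrast with the single-step formulation, the iterative reinitialization that such algorithms replace is classically posed as a pseudo-time evolution toward the signed-distance property |∇φ| = 1, iterated to steady state in a narrow band around the interface:

      \frac{\partial \phi}{\partial \tau} = \operatorname{sgn}(\phi_0)\,\bigl(1 - |\nabla \phi|\bigr), \qquad \phi(\mathbf{x}, 0) = \phi_0(\mathbf{x})

    Each pseudo-time step touches the whole band and, in a distributed setting, triggers the repeated block-boundary data exchanges that the single-step "forward tracing" algorithm above avoids.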

  10. An Optimized, Grid Independent, Narrow Band Data Structure for High Resolution Level Sets

    DEFF Research Database (Denmark)

    Nielsen, Michael Bang; Museth, Ken

    2004-01-01

    enforced by the convex boundaries of an underlying cartesian computational grid. Here we present a novel very memory efficient narrow band data structure, dubbed the Sparse Grid, that enables the representation of grid independent high resolution level sets. The key features our new data structure are...

  11. A mass conserving level set method for detailed numerical simulation of liquid atomization

    Energy Technology Data Exchange (ETDEWEB)

    Luo, Kun; Shao, Changxiao [State Key Laboratory of Clean Energy Utilization, Zhejiang University, Hangzhou 310027 (China); Yang, Yue [State Key Laboratory of Turbulence and Complex Systems, Peking University, Beijing 100871 (China); Fan, Jianren, E-mail: fanjr@zju.edu.cn [State Key Laboratory of Clean Energy Utilization, Zhejiang University, Hangzhou 310027 (China)

    2015-10-01

    An improved mass-conserving level set method for detailed numerical simulations of liquid atomization is developed to address the issue of mass loss in the existing level set method. This method introduces a mass remedy procedure based on the local curvature at the interface and, in principle, can ensure the absolute mass conservation of the liquid phase in the computational domain. Three benchmark cases, including Zalesak's disk, a drop deforming in a vortex field, and the binary drop head-on collision, are simulated to validate the present method, and excellent agreement with exact solutions or experimental results is achieved. It is shown that the present method is able to capture the complex interface with second-order accuracy and negligible additional computational cost. The present method is then applied to study more complex flows, such as a drop impacting on a liquid film and the swirling liquid sheet atomization, which again demonstrates the advantages of mass conservation and the capability to represent the interface accurately.
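
    For reference, the first benchmark mentioned, Zalesak's disk, advects a slotted disk in a solid-body rotation field: one full revolution should return the interface to its initial position, so any change in the enclosed area directly measures the mass loss of the scheme. A minimal set-up sketch follows (the standard benchmark definition, not the authors' code):

      import numpy as np

      # Solid-body rotation about the center of the unit square; one period
      # (t = 1) returns every material point to where it started.
      def velocity(x, y, x0=0.5, y0=0.5):
          return -2.0 * np.pi * (y - y0), 2.0 * np.pi * (x - x0)

      # Initial level set: disk of radius 0.15 at (0.5, 0.75) with a slot of
      # width 0.05 carved out up to y = 0.85 (negative inside the body).
      def phi0(x, y):
          disk = np.hypot(x - 0.5, y - 0.75) - 0.15
          slot = np.maximum(np.abs(x - 0.5) - 0.025, y - 0.85)  # < 0 in slot
          return np.maximum(disk, -slot)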

  12. Modeling study of solute transport in the unsaturated zone. Information and data sets. Volume 1

    International Nuclear Information System (INIS)

    Polzer, W.L.; Fuentes, H.R.; Springer, E.P.; Nyhan, J.W.

    1986-05-01

    The Environmental Science Group (HSE-12) is conducting a study to compare various approaches of modeling water and solute transport in porous media. Various groups representing different approaches will model a common set of transport data so that the state of the art in modeling and field experimentation can be discussed in a positive framework with an assessment of current capabilities and future needs in this area of research. This paper provides information and sets of data that will be useful to the modelers in meeting the objectives of the modeling study. The information and data sets include: (1) a description of the experimental design and methods used in obtaining solute transport data, (2) supporting data that may be useful in modeling the data set of interest, and (3) the data set to be modeled

  13. User interest level modeling based on scenarios and browsed content

    Science.gov (United States)

    Zhao, Yang

    2017-08-01

    User interest modeling is the core of personalized service, and situational information influences user preferences and browsing behavior. This paper proposes a method of user interest modeling based on scenario information: a set of scenarios approximating the user's current one is obtained by calculating similarities between situations, and the "user - interest items - scenarios" three-dimensional model is reduced in dimension using a situation pre-filtering method. The topics a user is interested in are identified from the content browsed, the page content is analyzed to obtain keywords for each topic of interest, and the user interest model is built on a hierarchical vector space model. The experimental results show that the prediction error of the scenario-based user interest model is within 9%, which demonstrates its effectiveness.

  14. Shape Reconstruction of Thin Electromagnetic Inclusions via Boundary Measurements: Level-Set Method Combined with the Topological Derivative

    Directory of Open Access Journals (Sweden)

    Won-Kwang Park

    2013-01-01

    An inverse problem for reconstructing arbitrary-shaped thin penetrable electromagnetic inclusions concealed in a homogeneous material is considered in this paper. For this purpose, the level-set evolution method is adopted. The topological derivative concept is incorporated in order to evaluate the evolution speed of the level-set functions. The results of the corresponding numerical simulations with and without noise are presented in this paper.

  15. A First Step Towards High-Level Cost Models for the Implementation of SDRs on Multiprocessing Reconfigurable Systems

    DEFF Research Database (Denmark)

    Le Moullec, Yannick

    2011-01-01

    In this Work-In-Progress paper we introduce our set of high-level estimation models for Area-Time costs of applications mapped onto FPGA-based multiprocessing reconfigurable architectures. In particular, we suggest models for static and dynamic implementations, taking various internal and external architectural elements into account. We believe that such models could be used for rapidly comparing implementation alternatives at a high level of abstraction and for guiding the designer during the (pre)analysis phase of the design flow for the implementation of e.g. SDR platforms.

  16. Image-guided regularization level set evolution for MR image segmentation and bias field correction.

    Science.gov (United States)

    Wang, Lingfeng; Pan, Chunhong

    2014-01-01

    Magnetic resonance (MR) image segmentation is a crucial step in surgical and treatment planning. In this paper, we propose a level-set-based segmentation method for MR images with the intensity inhomogeneity problem. To tackle the initialization sensitivity problem, we propose a new image-guided regularization to restrict the level set function. Maximum a posteriori inference is adopted to unify segmentation and bias field correction within a single framework. Under this framework, both the contour prior and the bias field prior are fully used. As a result, the image intensity inhomogeneity can be well handled. Extensive experiments are provided to evaluate the proposed method, showing significant improvements in both segmentation and bias field correction accuracies as compared with other state-of-the-art approaches. Copyright © 2014 Elsevier Inc. All rights reserved.

  17. Application of the level set method for multi-phase flow computation in fusion engineering

    International Nuclear Information System (INIS)

    Luo, X-Y.; Ni, M-J.; Ying, A.; Abdou, M.

    2006-01-01

    Numerical simulation of multi-phase flow is essential to evaluate the feasibility of a liquid protection scheme for the power plant chamber. The level set method is one of the best methods for computing and analyzing the motion of interfaces in multi-phase flow. This paper presents a general formulation of the second-order projection method combined with the level set method to simulate unsteady, incompressible multi-phase flow with/without phase change, as encountered in fusion science and engineering. A third-order ENO scheme and a second-order semi-implicit Crank-Nicolson scheme are used to update the convective and diffusion terms. The numerical results show that this method can handle complex deformation of the interface; the effect of liquid-vapor phase change will be included in future work.

  18. Gay-Straight Alliances vary on dimensions of youth socializing and advocacy: factors accounting for individual and setting-level differences.

    Science.gov (United States)

    Poteat, V Paul; Scheer, Jillian R; Marx, Robert A; Calzo, Jerel P; Yoshikawa, Hirokazu

    2015-06-01

    Gay-Straight Alliances (GSAs) are school-based youth settings that could promote health. Yet, GSAs have been treated as homogenous without attention to variability in how they operate or to how youth are involved in different capacities. Using a systems perspective, we considered two primary dimensions along which GSAs function to promote health: providing socializing and advocacy opportunities. Among 448 students in 48 GSAs who attended six regional conferences in Massachusetts (59.8 % LGBQ; 69.9 % White; 70.1 % cisgender female), we found substantial variation among GSAs and youth in levels of socializing and advocacy. GSAs were more distinct from one another on advocacy than socializing. Using multilevel modeling, we identified group and individual factors accounting for this variability. In the socializing model, youth and GSAs that did more socializing activities did more advocacy. In the advocacy model, youth who were more actively engaged in the GSA as well as GSAs whose youth collectively perceived greater school hostility and reported greater social justice efficacy did more advocacy. Findings suggest potential reasons why GSAs vary in how they function in ways ranging from internal provisions of support, to visibility raising, to collective social change. The findings are further relevant for settings supporting youth from other marginalized backgrounds and that include advocacy in their mission.

  19. Final project memorandum: sea-level rise modeling handbook: resource guide for resource managers, engineers, and scientists

    Science.gov (United States)

    Doyle, Thomas W.

    2015-01-01

    Coastal wetlands of the Southeastern United States are undergoing retreat and migration from increasing tidal inundation and saltwater intrusion attributed to climate variability and sea-level rise. Much of the literature describing potential sea-level rise projections and modeling predictions is found in peer-reviewed academic journals or government technical reports largely suited to reading by other Ph.D. scientists who are more familiar or engaged in the climate change debate. Various sea-level rise and coastal wetland models of different designs and scales of spatial and temporal complexity have been developed and applied for predicting habitat and environmental change, but they have not heretofore been synthesized to inform natural resource managers of their utility and limitations. Training sessions were conducted with Federal land managers with the U.S. Fish and Wildlife Service, National Park Service, and NOAA National Estuarine Research Reserves, as well as state partners and nongovernmental organizations, across the northern Gulf Coast from Florida to Texas to educate and to evaluate user needs and understanding of concepts, data, and modeling tools for projecting sea-level rise and its impact on coastal habitats and wildlife. As a result, this handbook was constructed from these training and feedback sessions with coastal managers and biologists of published decision-support tools and simulation models for sea-level rise and climate change assessments. A simplified tabular context was developed listing the various kinds of decision-support tools and ecological models along with criteria to distinguish the source, scale, and quality of information input and geographic data sets, physical and biological constraints and relationships, datum characteristics of water and land elevation components, utility options for setting sea-level rise and climate change scenarios, and ease or difficulty of storing, displaying, or interpreting model output. The handbook is designed

  20. Lithologic Effects on Landscape Response to Base Level Changes: A Modeling Study in the Context of the Eastern Jura Mountains, Switzerland

    Science.gov (United States)

    Yanites, Brian J.; Becker, Jens K.; Madritsch, Herfried; Schnellmann, Michael; Ehlers, Todd A.

    2017-11-01

    Landscape evolution is a product of the forces that drive geomorphic processes (e.g., tectonics and climate) and the resistance to those processes. The underlying lithology and structural setting in many landscapes set the resistance to erosion. This study uses a modified version of the Channel-Hillslope Integrated Landscape Development (CHILD) landscape evolution model to determine the effect of a spatially and temporally changing erodibility in a terrain with a complex base level history. Specifically, our focus is to quantify how the effects of variable lithology influence transient base level signals. We set up a series of numerical landscape evolution models with increasing levels of complexity based on the lithologic variability and base level history of the Jura Mountains of northern Switzerland. The models are consistent with lithology (and therewith erodibility) playing an important role in the transient evolution of the landscape. The results show that the erosion rate history at a location depends on the rock uplift and base level history, the range of erodibilities of the different lithologies, and the history of the surface geology downstream from the analyzed location. Near the model boundary, the history of erosion is dominated by the base level history. The transient wave of incision, however, is quite variable in the different model runs and depends on the geometric structure of lithology used. It is thus important to constrain the spatiotemporal erodibility patterns downstream of any given point of interest to understand the evolution of a landscape subject to variable base level in a quantitative framework.
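
    The interplay of base level and erodibility described above can be caricatured with a 1-D detachment-limited stream-power profile, dz/dt = U - K(x) A^m S^n, in which a low-K (hard) unit delays the upstream migration of the incision wave triggered by a base level fall. The geometry and parameter values below are arbitrary illustrations, not the CHILD configuration of the study.

        import numpy as np

        # 1-D detachment-limited river profile: dz/dt = U - K(x) A^m S^n
        nx, dx, dt = 200, 500.0, 50.0          # nodes, spacing (m), step (yr)
        x = np.arange(nx) * dx                 # distance from the divide
        A = 1.0e6 + 5.0 * x**1.7               # crude drainage-area growth
        K = np.where((x > 40e3) & (x < 60e3), 2e-6, 1e-5)  # hard unit mid-profile
        m, n, U = 0.5, 1.0, 1e-4               # exponents, rock uplift (m/yr)

        z = 1e-3 * (x.max() - x)               # gentle initial profile
        for step in range(20000):
            if step == 5000:
                z[-1] -= 50.0                  # sudden base level fall at outlet
            S = np.maximum((z[:-1] - z[1:]) / dx, 0.0)      # downstream slope
            z[:-1] += dt * (U - K[:-1] * A[:-1]**m * S**n)  # outlet stays pinned

    Tracking the knickpoint position through time for different K contrasts reproduces, in miniature, the dependence of the transient incision wave on the downstream surface geology.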

  1. RISCOM Applied to the Belgian Partnership Model: More and Deeper Levels

    International Nuclear Information System (INIS)

    Bombaerts, Gunter; Bovy, Michel; Laes, Erik

    2006-01-01

    Technology participation is not a new concept. It has been applied in different settings in different countries. In this article, we report a comparative analysis of the RISCOM model in Sweden and the Belgian partnership model for low- and intermediate-level short-lived nuclear waste. After a brief description of the partnerships and the RISCOM model, we apply the latter to the former and arrive at recommendations for the partnership model. The strength of the partnership approach is at the community level. In one of the villages, up to one percent of the population was motivated to discuss, at least once a month for four years, the nuts and bolts of the repository concept. The stress on the community level and the lack of a guardian involve a weakness as well. First of all, if communities come into competition, the inter-community discussions can start resembling local politics and can become less transparent. Local actors are concerned actors, but actors at the national level are concerned as well. The local decisions influence how the waste will be transported. The local decisions also determine an extra cost of electricity. We therefore recommend a broad (in terms of territory) public debate on the participation experiments preceding and concluding the local participation process, in which this local process maintains an important position. The conclusions of our comparative analysis are: (1) The guardian of the process at the national level is missing. Since the Belgian nuclear regulator plays a controlling role after the process, we recommend a technology assessment institute at the federal level. (2) We state that stretching in the partnership model can happen more profoundly and recommend a 'counter institute' at the European level. The role of non-participative actors should be valued. (3) Recursion levels can be taken as a point of departure for discussion about the problem framing. If people accept them, there is no problem. If people clearly mention issues that are

  3. Route constraints model based on polychromatic sets

    Science.gov (United States)

    Yin, Xianjun; Cai, Chao; Wang, Houjun; Li, Dongwu

    2018-03-01

    With the development of unmanned aerial vehicle (UAV) technology, its fields of application are constantly expanding. The mission planning of a UAV is especially important, as the planning result directly influences whether the UAV can accomplish its task. In order to make the results of mission planning for unmanned aerial vehicles more realistic, it is necessary to consider not only the physical properties of the aircraft, but also the constraints among the various equipment on the UAV. However, the constraints among the equipment of a UAV are complex, and the equipment has strong diversity and variability, which makes these constraints difficult to describe. In order to solve the above problem, this paper draws on polychromatic sets theory, used in the advanced manufacturing field to describe complex systems, and presents a mission constraint model of a UAV based on polychromatic sets.

  4. Modeling antecedents of electronic medical record system implementation success in low-resource setting hospitals.

    Science.gov (United States)

    Tilahun, Binyam; Fritz, Fleur

    2015-08-01

    With the increasing implementation of Electronic Medical Record Systems (EMR) in developing countries, there is a growing need to identify antecedents of EMR success to measure and predict the level of adoption before costly implementation. However, less evidence is available about EMR success in the context of low-resource setting implementations. Therefore, this study aims to fill this gap by examining the constructs and relationships of the widely used DeLone and McLean (D&M) information system success model to determine whether it can be applied to measure EMR success in those settings. A quantitative cross-sectional study design using self-administered questionnaires was used to collect data from 384 health professionals working in five governmental hospitals in Ethiopia. The hospitals have used a comprehensive EMR system for three years. Descriptive and structural equation modeling methods were applied to describe and validate the extent of relationship of constructs and mediating effects. The findings of the structural equation modeling show that system quality (β = 0.32), information quality (β = 0.44), and service quality (β = 0.36) each have a significant influence on EMR use, whereas the direct effect of EMR use on user satisfaction was not significant. Both EMR use and user satisfaction have a significant influence on perceived net-benefit (β = 0.31), and computer literacy was found to be a mediating factor in the relationship between service quality and EMR use. EMR implementers and managers in developing countries are in urgent need of implementation models to design proper implementation strategies. In this study, the constructs and relationships depicted in the updated D&M model were found to be applicable to assess the success of EMR in low-resource settings. Additionally, computer literacy was found to be a mediating factor in EMR use and user satisfaction of health professionals.

  5. Large-Ensemble modeling of past and future variations of the Antarctic Ice Sheet with a coupled ice-Earth-sea level model

    Science.gov (United States)

    Pollard, David; DeConto, Robert; Gomez, Natalya

    2016-04-01

    To date, most modeling of the Antarctic Ice Sheet's response to future warming has been calibrated using recent and modern observations. As an alternate approach, we apply a hybrid 3-D ice sheet-shelf model to the last deglacial retreat of Antarctica, making use of geologic data of the last ~20,000 years to test the model against the large-scale variations during this period. The ice model is coupled to a global Earth-sea level model to improve modeling of the bedrock response and to capture ocean-ice gravitational interactions. Following several recent ice-sheet studies, we use Large Ensemble (LE) statistical methods, performing sets of 625 runs from 30,000 years to present with systematically varying model parameters. Objective scores for each run are calculated using modern data and past reconstructed grounding lines, relative sea level records, cosmogenic elevation-age data and uplift rates. The LE results are analyzed to calibrate 4 particularly uncertain model parameters that concern marginal ice processes and interaction with the ocean. LEs are extended into the future with climates following RCP scenarios. An additional scoring criterion tests the model's ability to reproduce estimated sea-level high stands in the warm mid-Pliocene, for which drastic retreat mechanisms of hydrofracturing and ice-cliff failure are needed in the model. The LE analysis provides future sea-level-rise envelopes with well-defined parametric uncertainty bounds. Sensitivities of future LE results to Pliocene sea-level estimates, coupling to the Earth-sea level model, and vertical profiles of Earth properties will be presented.

  6. Emergency residential care settings: A model for service assessment and design.

    Science.gov (United States)

    Graça, João; Calheiros, Maria Manuela; Patrício, Joana Nunes; Magalhães, Eunice Vieira

    2018-02-01

    There have been calls for uncovering the "black box" of residential care services, with a particular need for research focusing on emergency care settings for children and youth in danger. In fact, the strikingly scant empirical attention that these settings have received so far contrasts with the role that they often play as gateway into the child welfare system. To answer these calls, this work presents and tests a framework for assessing a service model in residential emergency care. It comprises seven studies which address a set of different focal areas (e.g., service logic model; care experiences), informants (e.g., case records; staff; children/youth), and service components (e.g., case assessment/evaluation; intervention; placement/referral). Drawing on this process-consultation approach, the work proposes a set of key challenges for emergency residential care in terms of service improvement and development, and calls for further research targeting more care units and different types of residential care services. These findings offer a contribution to inform evidence-based practice and policy in service models of residential care. Copyright © 2017 Elsevier Ltd. All rights reserved.

  7. Prediction of forest fires occurrences with area-level Poisson mixed models.

    Science.gov (United States)

    Boubeta, Miguel; Lombardía, María José; Marey-Pérez, Manuel Francisco; Morales, Domingo

    2015-05-01

    The number of fires in forest areas of Galicia (north-west of Spain) during the summer period is quite high. Local authorities are interested in analyzing the factors that explain this phenomenon. Poisson regression models are good tools for describing and predicting the number of fires per forest areas. This work employs area-level Poisson mixed models for treating real data about fires in forest areas. A parametric bootstrap method is applied for estimating the mean squared errors of fires predictors. The developed methodology and software are applied to a real data set of fires in forest areas of Galicia. Copyright © 2015 Elsevier Ltd. All rights reserved.
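
    A hedged sketch of the area-level Poisson set-up and the parametric bootstrap idea, with a plain fixed-effects GLM standing in for the mixed-model fit and the random-effect variance treated as known; all names and values are illustrative, not the Galicia data or the authors' estimator.

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(0)
        D = 60                                           # forest areas (small domains)
        X = sm.add_constant(rng.normal(size=(D, 1)))     # one area-level covariate
        beta, sigma_u = np.array([0.5, 0.8]), 0.3
        u = rng.normal(scale=sigma_u, size=D)            # area random effects
        y = rng.poisson(np.exp(X @ beta + u))            # fire counts per area

        fit = sm.GLM(y, X, family=sm.families.Poisson()).fit()
        mu_hat = fit.mu                                  # predicted mean fire counts

        # parametric bootstrap for the MSE of the area predictors
        B, err = 200, np.zeros(D)
        for _ in range(B):
            u_b = rng.normal(scale=sigma_u, size=D)      # sigma_u assumed known here
            truth_b = np.exp(X @ fit.params + u_b)
            y_b = rng.poisson(truth_b)
            mu_b = sm.GLM(y_b, X, family=sm.families.Poisson()).fit().mu
            err += (mu_b - truth_b) ** 2
        mse_boot = err / B                               # bootstrap MSE per area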

  8. APPLICATION OF ROUGH SET THEORY TO MAINTENANCE LEVEL DECISION-MAKING FOR AERO-ENGINE MODULES BASED ON INCREMENTAL KNOWLEDGE LEARNING

    Institute of Scientific and Technical Information of China (English)

    陆晓华; 左洪福; 蔡景

    2013-01-01

    The maintenance of an aero-engine usually includes three levels ,and the maintenance cost and period greatly differ depending on the different maintenance levels .To plan a reasonable maintenance budget program , airlines would like to predict the maintenance level of aero-engine before repairing in terms of performance parame-ters ,which can provide more economic benefits .The maintenance level decision rules are mined using the histori-cal maintenance data of a civil aero-engine based on the rough set theory ,and a variety of possible models of upda-ting rules produced by newly increased maintenance cases added to the historical maintenance case database are in-vestigated by the means of incremental machine learning .The continuously updated rules can provide reasonable guidance suggestions for engineers and decision support for planning a maintenance budget program before repai-ring .The results of an example show that the decision rules become more typical and robust ,and they are more accurate to predict the maintenance level of an aero-engine module as the maintenance data increase ,which illus-trates the feasibility of the represented method .

  9. Application of a modified conceptual rainfall-runoff model to simulation of groundwater level in an undefined watershed.

    Science.gov (United States)

    Hong, Nian; Hama, Takehide; Suenaga, Yuichi; Aqili, Sayed Waliullah; Huang, Xiaowu; Wei, Qiaoyan; Kawagoshi, Yasunori

    2016-01-15

    Groundwater level simulation models can help ensure the proper management and use of urban and rural water supplies. In this paper, we propose a groundwater level tank model (GLTM) based on a conceptual rainfall-runoff model (the tank model) to simulate fluctuations in groundwater level. The variables used in the simulations consist of daily rainfall and daily groundwater level, recorded between April 2011 and March 2015 at two representative observation wells in Kumamoto City, Japan. We determined the best-fit model parameters by root-mean-square error through use of the Shuffled Complex Evolution-University of Arizona algorithm on a simulated data set. Calibration and validation results were evaluated by their coefficients of determination, Nash-Sutcliffe efficiency coefficients, and root-mean-square error values. The GLTM provided accurate results in both the calibration and validation of fluctuations in groundwater level, and the split-sample test results indicate good reliability. These results indicate that this model can provide a simple approach to the accurate simulation of groundwater levels. Copyright © 2015 Elsevier B.V. All rights reserved.
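
    A conceptual single-tank sketch of a rainfall-to-groundwater-level model of this kind, with a linear storage-to-level map and the Nash-Sutcliffe efficiency as the evaluation score; the one-tank structure and parameter names are assumptions, not the calibrated GLTM.

        import numpy as np

        def tank_model(rain, k_out=0.05, k_rech=0.6, s0=10.0, a=0.2, b=30.0):
            """Single-tank groundwater level model (illustrative).

            rain   : daily rainfall series (mm)
            k_out  : outflow coefficient of the tank (1/day)
            k_rech : fraction of rainfall recharging the tank
            a, b   : linear map from storage (mm) to water level (m)
            """
            s, levels = s0, []
            for r in rain:
                s += k_rech * r           # recharge from rainfall
                s -= k_out * s            # drainage through the outlet
                levels.append(a * s + b)  # storage-to-level conversion
            return np.array(levels)

        def nse(obs, sim):
            """Nash-Sutcliffe efficiency used to score the fit."""
            return 1 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

    An optimizer such as SCE-UA would then search (k_out, k_rech, a, b) to minimize the root-mean-square error between simulated and observed levels.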

  10. Sea-level and solid-Earth deformation feedbacks in ice sheet modelling

    Science.gov (United States)

    Konrad, Hannes; Sasgen, Ingo; Klemann, Volker; Thoma, Malte; Grosfeld, Klaus; Martinec, Zdeněk

    2014-05-01

    The interactions of ice sheets with the sea level and the solid Earth are important factors for the stability of the ice shelves and the tributary inland ice (e.g. Thomas and Bentley, 1978; Gomez et al, 2012). First, changes in ice extent and ice thickness induce viscoelastic deformation of the Earth surface and Earth's gravity field. In turn, global and local changes in sea level and bathymetry affect the grounding line and, subsequently, alter the ice dynamic behaviour. Here, we investigate these feedbacks for a synthetic ice sheet configuration as well as for the Antarctic ice sheet using a three-dimensional thermomechanical ice sheet and shelf model, coupled to a viscoelastic solid-Earth and gravitationally self-consistent sea-level model. The respective ice sheet undergoes a forcing from rising sea level, warming ocean, and/or changing surface mass balance. The coupling is realized by exchanging ice thickness, Earth surface deformation and sea level periodically. We apply several sets of viscoelastic Earth parameters to our coupled model, e.g. simulating a low-viscous upper mantle present at the Antarctic Peninsula (Ivins et al., 2011). Special focus of our study lies on the evolution of Earth surface deformation and local sea level changes, as well as on the accompanying grounding line evolution. N. Gomez, D. Pollard, J. X. Mitrovica, P. Huybers, and P. U. Clark 2012. Evolution of a coupled marine ice sheet-sea level model, J. Geophys. Res., 117, F01013, doi:10.1029/2011JF002128. E. R. Ivins, M. M. Watkins, D.-N. Yuan, R. Dietrich, G. Casassa, and A. Rülke 2011. On-land ice loss and glacial isostatic adjustment at the Drake Passage: 2003-2009, J. Geophys. Res. 116, B02403, doi: 10.1029/2010JB007607 R. H. Thomas and C. R. Bentley 1978. A model for Holocene retreat of the West Antarctic Ice Sheet, Quaternary Research, 10 (2), pages 150-170, doi: 10.1016/0033-5894(78)90098-4.

  11. Advanced methods for modeling water-levels and estimating drawdowns with SeriesSEE, an Excel add-in

    Science.gov (United States)

    Halford, Keith; Garcia, C. Amanda; Fenelon, Joe; Mirus, Benjamin B.

    2012-12-21

    Water-level modeling is used for multiple-well aquifer tests to reliably differentiate pumping responses from natural water-level changes in wells, or "environmental fluctuations." Synthetic water levels are created during water-level modeling and represent the summation of multiple component fluctuations, including those caused by environmental forcing and pumping. Pumping signals are modeled by transforming step-wise pumping records into water-level changes by using superimposed Theis functions. Water levels can be modeled robustly with this Theis-transform approach because environmental fluctuations and pumping signals are simulated simultaneously. Water-level modeling with Theis transforms has been implemented in the program SeriesSEE, which is a Microsoft® Excel add-in. Moving average, Theis, pneumatic-lag, and gamma functions transform time series of measured values into water-level model components in SeriesSEE. Earth tides and step transforms are additional computed water-level model components. Water-level models are calibrated by minimizing a sum-of-squares objective function, where singular value decomposition and Tikhonov regularization stabilize results. Drawdown estimates from a water-level model are the summation of all Theis transforms minus residual differences between synthetic and measured water levels. The accuracy of drawdown estimates is limited primarily by noise in the data sets, not by the Theis-transform approach. Drawdowns much smaller than environmental fluctuations have been detected across major fault structures, at distances of more than 1 mile from the pumping well, and with limited pre-pumping and recovery data at sites across the United States. In addition to water-level modeling, utilities exist in SeriesSEE for viewing, cleaning, manipulating, and analyzing time-series data.
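
    The Theis transform described above — converting a step-wise pumping record into drawdown by superposition of Theis well functions — can be sketched directly; the parameter values and helper name below are illustrative, not the SeriesSEE implementation.

        import numpy as np
        from scipy.special import exp1   # Theis well function W(u) = exp1(u)

        def theis_drawdown(t, times, rates, r, T, S):
            """Drawdown from a step-wise pumping record by superposition.

            times, rates : start time of each pumping step and its rate (m^3/d)
            r            : distance from pumping to observation well (m)
            T, S         : transmissivity (m^2/d) and storativity (-)
            """
            dQ = np.diff(np.concatenate(([0.0], rates)))   # rate changes
            s = np.zeros_like(t, dtype=float)
            for tk, dq in zip(times, dQ):
                dt = t - tk
                mask = dt > 0                              # step not yet active before tk
                u = r**2 * S / (4 * T * dt[mask])
                s[mask] += dq / (4 * np.pi * T) * exp1(u)
            return s

        # example: pump at 500 m^3/d for 2 days, then shut off
        t = np.linspace(0.01, 10, 500)
        s = theis_drawdown(t, times=np.array([0.0, 2.0]),
                           rates=np.array([500.0, 0.0]), r=50.0, T=100.0, S=1e-4)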

  12. A two-level strategy to realize life-cycle production optimization in an operational setting

    NARCIS (Netherlands)

    Essen, van G.M.; Hof, Van den P.M.J.; Jansen, J.D.

    2012-01-01

    We present a two-level strategy to improve robustness against uncertainty and model errors in life-cycle flooding optimization. At the upper level, a physics-based large-scale reservoir model is used to determine optimal life-cycle injection and production profiles. At the lower level these profiles

  14. Instruction-level performance modeling and characterization of multimedia applications

    Energy Technology Data Exchange (ETDEWEB)

    Luo, Y. [Los Alamos National Lab., NM (United States). Scientific Computing Group; Cameron, K.W. [Louisiana State Univ., Baton Rouge, LA (United States). Dept. of Computer Science

    1999-06-01

    One of the challenges for characterizing and modeling realistic multimedia applications is the lack of access to source codes. On-chip performance counters effectively resolve this problem by monitoring run-time behaviors at the instruction level. This paper presents a novel technique for characterizing and modeling workloads at the instruction level for realistic multimedia applications using hardware performance counters. A variety of instruction counts are collected from multimedia applications such as RealPlayer, GSM Vocoder, MPEG encoder/decoder, and a speech synthesizer. These instruction counts can be used to form a set of abstract characteristic parameters directly related to a processor's architectural features. Based on microprocessor architectural constraints and these calculated abstract parameters, the architectural performance bottleneck for a specific application can be estimated. Meanwhile, the bottleneck estimation can provide suggestions about viable architectural/functional improvements for certain workloads. The biggest advantage of this new characterization technique is a better understanding of processor utilization efficiency and the architectural bottleneck for each application. The technique also provides predictive insight into future architectural enhancements and their effect on current codes. In this paper the authors also attempt to model architectural effects on processor utilization without memory influence. They derive formulas for calculating CPI_0, the CPI without memory effect, and they quantify the utilization of architectural parameters. These equations are architecturally diagnostic and predictive in nature. The results show promise in code characterization and empirical/analytical modeling.

  15. GPU accelerated edge-region based level set evolution constrained by 2D gray-scale histogram.

    Science.gov (United States)

    Balla-Arabé, Souleymane; Gao, Xinbo; Wang, Bin

    2013-07-01

    Due to its intrinsic nature, which allows it to easily handle complex shapes and topological changes, the level set method (LSM) has been widely used in image segmentation. Nevertheless, the LSM is computationally expensive, which limits its applications in real-time systems. For this purpose, we propose a new level set algorithm, which uses edge, region, and 2D histogram information simultaneously in order to efficiently segment objects of interest in a given scene. The computational complexity of the proposed LSM is greatly reduced by using the highly parallelizable lattice Boltzmann method (LBM) with a body force to solve the level set equation (LSE). The body force is the link with the image data and is defined from the proposed LSE. The proposed LSM is then implemented on NVIDIA graphics processing units to fully take advantage of the LBM's local nature. The new algorithm is effective, robust against noise, independent of the initial contour, fast, and highly parallelizable. The edge and region information enable detection of objects with and without edges, and the 2D histogram information makes the method effective in a noisy environment. Experimental results on synthetic and real images demonstrate subjectively and objectively the performance of the proposed method.
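
    For orientation, the level set equation being solved, phi_t + F |grad phi| = 0, is conventionally advanced with an explicit Godunov upwind update like the sketch below (a standard serial baseline with assumed periodic boundaries, not the lattice Boltzmann formulation of the paper); its per-step cost over the full grid is what the LBM/GPU approach is designed to beat.

        import numpy as np

        def evolve(phi, F, dx, dt, steps):
            """Explicit upwind update of  phi_t + F |grad phi| = 0.

            F > 0 expands the contour, F < 0 shrinks it; in an
            edge/region-driven segmentation F would be built from
            image gradients and region statistics.
            """
            for _ in range(steps):
                # one-sided differences (np.roll gives periodic boundaries)
                dxm = (phi - np.roll(phi, 1, 1)) / dx
                dxp = (np.roll(phi, -1, 1) - phi) / dx
                dym = (phi - np.roll(phi, 1, 0)) / dx
                dyp = (np.roll(phi, -1, 0) - phi) / dx
                # Godunov upwinding for the two signs of F
                gp = np.sqrt(np.maximum(dxm, 0)**2 + np.minimum(dxp, 0)**2 +
                             np.maximum(dym, 0)**2 + np.minimum(dyp, 0)**2)
                gm = np.sqrt(np.minimum(dxm, 0)**2 + np.maximum(dxp, 0)**2 +
                             np.minimum(dym, 0)**2 + np.maximum(dyp, 0)**2)
                phi = phi - dt * (np.maximum(F, 0) * gp + np.minimum(F, 0) * gm)
            return phi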

  16. Modeling category-level purchase timing with brand-level marketing variables

    NARCIS (Netherlands)

    D. Fok (Dennis); R. Paap (Richard)

    2003-01-01

    Purchase timing of households is usually modeled at the category level. Marketing efforts are, however, only available at the brand level. Hence, to describe category-level interpurchase times using marketing efforts, one has to construct a category-level measure of marketing efforts from the brand-level variables.

  17. Interdisciplinary education - a predator-prey model for developing a skill set in mathematics, biology and technology

    Science.gov (United States)

    van der Hoff, Quay

    2017-08-01

    The science of biology has been transforming dramatically, and so the need for a stronger mathematical background for biology students has increased. Biology students reaching the senior or post-graduate level often come to realize that their mathematical background is insufficient. Similarly, students in a mathematics programme who are interested in biological phenomena find it difficult to master the complex systems encountered in biology. In short, the biologists do not have enough mathematics and the mathematicians are not being taught enough biology. The need for interdisciplinary curricula that include disciplines such as biology, physical science, and mathematics is widely recognized, but has not been widely implemented. In this paper, it is suggested that students develop a skill set of ecology, mathematics and technology to encourage working across disciplinary boundaries. To illustrate such a skill set, a predator-prey model that contains self-limiting factors for both predator and prey is suggested. The general idea of dynamics is introduced, and students are encouraged to discover the applicability of this approach to more complex biological systems. The level of mathematics and technology required is not advanced; therefore, it is ideal for inclusion in a senior-level or introductory graduate-level course for students interested in mathematical biology.
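
    One common way to write such a model is a Lotka-Volterra system with logistic self-limiting terms for both species; the sketch below uses arbitrary classroom coefficients and may differ from the exact formulation suggested in the paper.

        import numpy as np
        from scipy.integrate import solve_ivp

        # Lotka-Volterra with self-limitation for both predator and prey:
        #   x' = a x (1 - x/Kx) - b x y
        #   y' = c x y - d y (1 + y/Ky)
        a, b, c, d, Kx, Ky = 1.0, 0.1, 0.02, 0.4, 100.0, 50.0

        def rhs(t, u):
            x, y = u
            return [a * x * (1 - x / Kx) - b * x * y,
                    c * x * y - d * y * (1 + y / Ky)]

        sol = solve_ivp(rhs, (0, 100), [40.0, 9.0], dense_output=True)
        t = np.linspace(0, 100, 1000)
        prey, predator = sol.sol(t)   # trajectories for plotting/discussion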

  18. Accurate prediction of complex free surface flow around a high speed craft using a single-phase level set method

    Science.gov (United States)

    Broglia, Riccardo; Durante, Danilo

    2017-11-01

    This paper focuses on the analysis of a challenging free surface flow problem involving a surface vessel moving at high speeds, or planing. The investigation is performed using a general-purpose, high-Reynolds-number free surface solver developed at CNR-INSEAN. The methodology is based on a second-order finite volume discretization of the unsteady Reynolds-averaged Navier-Stokes equations (Di Mascio et al. in A second order Godunov-type scheme for naval hydrodynamics, Kluwer Academic/Plenum Publishers, Dordrecht, pp 253-261, 2001; Proceedings of 16th international offshore and polar engineering conference, San Francisco, CA, USA, 2006; J Mar Sci Technol 14:19-29, 2009); air/water interface dynamics is accurately modeled by a non-standard level set approach (Di Mascio et al. in Comput Fluids 36(5):868-886, 2007a), known as the single-phase level set method. In this algorithm the governing equations are solved only in the water phase, whereas the numerical domain in the air phase is used for a suitable extension of the fluid dynamic variables. The level set function is used to track the free surface evolution; dynamic boundary conditions are enforced directly on the interface. This approach allows accurate prediction of the evolution of the free surface even in the presence of violent breaking-wave phenomena, maintaining a sharp interface without any need to smear the fluid properties across the two phases. This paper is aimed at the prediction of the complex free-surface flow field generated by a deep-V planing boat at medium and high Froude numbers (from 0.6 up to 1.2). In the present work, the planing hull is treated as a two-degree-of-freedom rigid object. The flow field is characterized by the presence of thin water sheets, several energetic breaking waves, and plunging events. The computational results include convergence of the trim angle, sinkage and resistance under grid refinement; high-quality experimental data are used for validation purposes.

  19. Level-set dynamics and mixing efficiency of passive and active scalars in DNS and LES of turbulent mixing layers

    NARCIS (Netherlands)

    Geurts, Bernard J.; Vreman, Bert; Kuerten, Hans; Luo, Kai H.

    2001-01-01

    The mixing efficiency in a turbulent mixing layer is quantified by monitoring the surface area of level-sets of scalar fields. The Laplace transform is applied to numerically calculate integrals over arbitrary level-sets. The analysis includes both direct and large-eddy simulation and is used to compare the mixing of passive and active scalars in turbulent mixing layers.

  20. Virtual Systems Pharmacology (ViSP) software for mechanistic system-level model simulations

    Directory of Open Access Journals (Sweden)

    Sergey eErmakov

    2014-10-01

    Multiple software programs are available for designing and running large-scale system-level pharmacology models used in the drug development process. Depending on the problem, scientists may be forced to use several modeling tools, which can increase model development time, IT costs, and so on. Therefore it is desirable to have a single platform that allows setting up and running large-scale simulations for models that have been developed with different modeling tools. We developed a workflow and a software platform in which a model file is compiled into a self-contained executable that is no longer dependent on the software that was used to create the model. At the same time, the full model specification is preserved by presenting all model parameters as input parameters for the executable. This platform was implemented as a model-agnostic, therapeutic-area-agnostic, web-based application with a database back-end that can be used to configure, manage and execute large-scale simulations for multiple models by multiple users. The user interface is designed to be easily configurable to reflect the specifics of the model and the user's particular needs, and the back-end database stores and manages all aspects of the system, such as models, virtual patients, user interface settings, and results. The platform can be adapted and deployed on an existing cluster or cloud computing environment. Its use was demonstrated with a metabolic disease systems pharmacology model that simulates the effects of two antidiabetic drugs, metformin and fasiglifam, in type 2 diabetes mellitus patients.

  1. SET overexpression in HEK293 cells regulates mitochondrial uncoupling proteins levels within a mitochondrial fission/reduced autophagic flux scenario

    Energy Technology Data Exchange (ETDEWEB)

    Almeida, Luciana O.; Goto, Renata N. [Department of Clinical Analyses, Toxicology and Food Sciences, School of Pharmaceutical Sciences of Ribeirão Preto, University of São Paulo, Ribeirão Preto, SP (Brazil); Neto, Marinaldo P.C. [Department of Physics and Chemistry, School of Pharmaceutical Sciences of Ribeirão Preto, University of São Paulo, Ribeirão Preto, SP (Brazil); Sousa, Lucas O. [Department of Clinical Analyses, Toxicology and Food Sciences, School of Pharmaceutical Sciences of Ribeirão Preto, University of São Paulo, Ribeirão Preto, SP (Brazil); Curti, Carlos [Department of Physics and Chemistry, School of Pharmaceutical Sciences of Ribeirão Preto, University of São Paulo, Ribeirão Preto, SP (Brazil); Leopoldino, Andréia M., E-mail: andreiaml@usp.br [Department of Clinical Analyses, Toxicology and Food Sciences, School of Pharmaceutical Sciences of Ribeirão Preto, University of São Paulo, Ribeirão Preto, SP (Brazil)

    2015-03-06

    We hypothesized that SET, a protein accumulated in some cancer types and Alzheimer disease, is involved in cell death through mitochondrial mechanisms. We addressed the mRNA and protein levels of the mitochondrial uncoupling proteins UCP1, UCP2 and UCP3 (S and L isoforms) by quantitative real-time PCR and immunofluorescence as well as other mitochondrial involvements, in HEK293 cells overexpressing the SET protein (HEK293/SET), either in the presence or absence of oxidative stress induced by the pro-oxidant t-butyl hydroperoxide (t-BHP). SET overexpression in HEK293 cells decreased UCP1 and increased UCP2 and UCP3 (S/L) mRNA and protein levels, whilst also preventing lipid peroxidation and decreasing the content of cellular ATP. SET overexpression also (i) decreased the area of mitochondria and increased the number of organelles and lysosomes, (ii) increased mitochondrial fission, as demonstrated by increased FIS1 mRNA and FIS-1 protein levels, an apparent accumulation of DRP-1 protein, and an increase in the VDAC protein level, and (iii) reduced autophagic flux, as demonstrated by a decrease in LC3B lipidation (LC3B-II) in the presence of chloroquine. Therefore, SET overexpression in HEK293 cells promotes mitochondrial fission and reduces autophagic flux in apparent association with up-regulation of UCP2 and UCP3; this implies a potential involvement in cellular processes that are deregulated such as in Alzheimer's disease and cancer. - Highlights: • SET, UCPs and autophagy prevention are correlated. • SET action has mitochondrial involvement. • UCP2/3 may reduce ROS and prevent autophagy. • SET protects cell from ROS via UCP2/3.

  2. An improved level set method for brain MR images segmentation and bias correction.

    Science.gov (United States)

    Chen, Yunjie; Zhang, Jianwei; Macione, Jim

    2009-10-01

    Intensity inhomogeneities cause considerable difficulty in the quantitative analysis of magnetic resonance (MR) images. Thus, bias field estimation is a necessary step before quantitative analysis of MR data can be undertaken. This paper presents a variational level set approach to bias correction and segmentation for images with intensity inhomogeneities. Our method is based on the observation that intensities in a relatively small local region are separable, despite the inseparability of the intensities in the whole image caused by the overall intensity inhomogeneity. We first define a localized K-means-type clustering objective function for image intensities in a neighborhood around each point. The cluster centers in this objective function have a multiplicative factor that estimates the bias within the neighborhood. The objective function is then integrated over the entire domain to define the data term in the level set framework. Our method is able to capture bias of quite general profiles. Moreover, it is robust to initialization, and thereby allows fully automated applications. The proposed method has been used for images of various modalities with promising results.

  3. Regional hydrogeological conceptual model of candidate Beishan area for high level radioactive waste disposal repository

    International Nuclear Information System (INIS)

    Wang Hailong; Guo Yonghai

    2014-01-01

    The numerical modeling of groundwater flow is an important aspect of hydrogeological assessment in the siting of a high-level radioactive waste disposal repository. A hydrogeological conceptual model is the basis and premise of numerical modeling of groundwater flow. Based on the hydrogeological analysis of the candidate Beishan area, the surface water system was delineated using DEM data and the modeling area was determined. A three-dimensional hydrogeological structural model was created with the GMS software. On the basis of analysis and description of boundary conditions, the flow field, the groundwater budget and hydrogeological parameters, a hydrogeological conceptual model was set up for the Beishan area. (authors)

  4. A suitable model plant for control of the set fuel cell-DC/DC converter

    Energy Technology Data Exchange (ETDEWEB)

    Andujar, J.M.; Segura, F.; Vasallo, M.J. [Departamento de Ingenieria Electronica, Sistemas Informaticos y Automatica, E.P.S. La Rabida, Universidad de Huelva, Ctra. Huelva - Palos de la Frontera, S/N, 21819 La Rabida - Palos de la Frontera Huelva (Spain)

    2008-04-15

    In this work, a state-space and transfer function model of the set made up of a proton exchange membrane (PEM) fuel cell and a DC/DC converter is developed. The set is modelled as a plant controlled by the converter duty cycle. In addition to allowing the plant operating point to be set at any point of its characteristic curve (two interesting points are the maximum efficiency and maximum power points), this approach also allows the connection of the fuel cell to other energy generation and storage devices, given that, as they all usually share a single DC bus, thorough control of the interconnected devices is required. First, the state-space and transfer function models of the fuel cell and the converter are obtained. Then, both models are related in order to obtain the model of the fuel cell + DC/DC converter set (the plant). The results of the theoretical developments are validated by simulation on a real fuel cell model. (author)
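
    A hedged small-signal sketch of such a cascade: a first-order lag standing in for the fuel cell's voltage response, in series with a second-order converter stage; both transfer functions and their coefficients are invented placeholders, not the models derived in the paper.

        import numpy as np
        from scipy import signal

        # illustrative small-signal models (assumed, not identified):
        fc = signal.TransferFunction([1.0], [0.5, 1.0])           # 1/(0.5 s + 1)
        conv = signal.TransferFunction([400.0], [1e-4, 0.02, 1])  # LC filter + load

        # series connection: multiply numerator and denominator polynomials
        plant = signal.TransferFunction(np.polymul(fc.num, conv.num),
                                        np.polymul(fc.den, conv.den))
        t, y = signal.step(plant)   # step response of the cascaded set

    With the plant in hand, a duty-cycle controller can be designed against this transfer function, which is the role the model plays in the paper.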

  5. Stabilizing model predictive control : on the enlargement of the terminal set

    NARCIS (Netherlands)

    Brunner, F.D.; Lazar, M.; Allgöwer, F.

    2015-01-01

    It is well known that a large terminal set leads to a large region where the model predictive control problem is feasible without the need for a long prediction horizon. This paper proposes a new method for the enlargement of the terminal set. Different from existing approaches, the method uses the

  6. Predicting inpatient clinical order patterns with probabilistic topic models vs conventional order sets.

    Science.gov (United States)

    Chen, Jonathan H; Goldstein, Mary K; Asch, Steven M; Mackey, Lester; Altman, Russ B

    2017-05-01

    Build probabilistic topic model representations of hospital admissions processes and compare the ability of such models to predict clinical order patterns against preconstructed order sets. The authors evaluated the first 24 hours of structured electronic health record data for >10,000 inpatients. Drawing an analogy between structured items (e.g., clinical orders) and words in a text document, the authors performed latent Dirichlet allocation probabilistic topic modeling. These topic models use initial clinical information to predict clinical orders for a separate validation set of >4,000 patients. The authors evaluated these topic-model-based predictions against existing human-authored order sets by area under the receiver operating characteristic curve (AUROC), precision, and recall for subsequent clinical orders. Existing order sets predict clinical orders used within 24 hours with AUROC 0.81, precision 16%, and recall 35%; this can be improved to 0.90, 24%, and 47%, respectively. Existing order sets tend to provide nonspecific, process-oriented aid, with usability limitations impairing more precise, patient-focused support. Algorithmic summarization has the potential to breach this usability barrier by automatically inferring patient context, but with potential tradeoffs in interpretability. Probabilistic topic modeling provides an automated approach to detect thematic trends in patient care and generate decision support content. A potential use case finds related clinical orders for decision support. © The Author 2016. Published by Oxford University Press on behalf of the American Medical Informatics Association.
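
    The orders-as-words analogy maps directly onto standard latent Dirichlet allocation tooling. In this illustrative sketch (the order names and topic count are made up), each admission's first-24-hour structured items form one "document":

        from sklearn.feature_extraction.text import CountVectorizer
        from sklearn.decomposition import LatentDirichletAllocation

        # each "document" is one admission's first-24h structured items
        admissions = [
            "cbc basic_metabolic_panel troponin ecg aspirin heparin",
            "cbc blood_culture lactate ceftriaxone iv_fluids",
            "cbc basic_metabolic_panel insulin glucose_poc iv_fluids",
        ]
        X = CountVectorizer(token_pattern=r"\S+").fit_transform(admissions)
        lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)
        theta = lda.transform(X)   # per-admission topic mixtures

        # ranking lda.components_ within each topic yields the orders most
        # likely to co-occur, which is the basis for suggesting subsequent
        # orders from a new patient's initial clinical information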

  7. Experimental Errors in QSAR Modeling Sets: What We Can Do and What We Cannot Do.

    Science.gov (United States)

    Zhao, Linlin; Wang, Wenyi; Sedykh, Alexander; Zhu, Hao

    2017-06-30

    Numerous chemical data sets have become available for quantitative structure-activity relationship (QSAR) modeling studies. However, the quality of different data sources may be different based on the nature of experimental protocols. Therefore, potential experimental errors in the modeling sets may lead to the development of poor QSAR models and further affect the predictions of new compounds. In this study, we explored the relationship between the ratio of questionable data in the modeling sets, which was obtained by simulating experimental errors, and the QSAR modeling performance. To this end, we used eight data sets (four continuous endpoints and four categorical endpoints) that have been extensively curated both in-house and by our collaborators to create over 1800 various QSAR models. Each data set was duplicated to create several new modeling sets with different ratios of simulated experimental errors (i.e., randomizing the activities of part of the compounds) in the modeling process. A fivefold cross-validation process was used to evaluate the modeling performance, which deteriorates when the ratio of experimental errors increases. All of the resulting models were also used to predict external sets of new compounds, which were excluded at the beginning of the modeling process. The modeling results showed that the compounds with relatively large prediction errors in cross-validation processes are likely to be those with simulated experimental errors. However, after removing a certain number of compounds with large prediction errors in the cross-validation process, the external predictions of new compounds did not show improvement. Our conclusion is that the QSAR predictions, especially consensus predictions, can identify compounds with potential experimental errors. But removing those compounds by the cross-validation procedure is not a reasonable means to improve model predictivity due to overfitting.
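
    The error-simulation protocol, randomizing the activities of a chosen fraction of the modeling set and watching cross-validated performance deteriorate, can be reproduced in a few lines; synthetic descriptors and a random forest stand in here for the curated chemical data sets and QSAR workflows of the study.

        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(0)
        X, y = make_classification(n_samples=500, n_features=50, random_state=0)

        for error_ratio in (0.0, 0.1, 0.2, 0.3):
            y_noisy = y.copy()
            idx = rng.choice(len(y), size=int(error_ratio * len(y)), replace=False)
            y_noisy[idx] = rng.integers(0, 2, size=len(idx))  # randomized labels
            score = cross_val_score(RandomForestClassifier(random_state=0),
                                    X, y_noisy, cv=5).mean()
            print(f"simulated error ratio {error_ratio:.0%}: CV accuracy {score:.3f}")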

  8. Sets in Coq, Coq in Sets

    Directory of Open Access Journals (Sweden)

    Bruno Barras

    2010-01-01

    This work is about formalizing models of various type theories of the Calculus of Constructions family. Here we focus on set-theoretical models. The long-term goal is to build a formal set-theoretical model of the Calculus of Inductive Constructions, so we can be sure that Coq is consistent with the language used by most mathematicians. One aspect of this work is to axiomatize several set theories: ZF, possibly with inaccessible cardinals, and HF, the theory of hereditarily finite sets. On top of these theories we have developed a piece of the usual set-theoretical construction of functions, ordinals, and fixpoint theory. We then proved sound several models of the Calculus of Constructions, its extension with an infinite hierarchy of universes, and its extension with the inductive type of natural numbers where recursion follows the type-based termination approach. The other aspect is to try and discharge (most of) these assumptions. The goal here is rather to compare the theoretical strengths of all these formalisms. As already noticed by Werner, the replacement axiom of ZF in its general form seems to require a type-theoretical axiom of choice (TTAC).

  9. Application of the Advance Organizer Learning Model with SETS Vision to Improve Mastery of Chemistry Concepts

    Directory of Open Access Journals (Sweden)

    Ilam Pratitis

    2015-11-01

    This study aims to determine the effect of applying the advance organizer learning model with SETS vision on mastery of chemistry concepts at a high school in Semarang, using buffer solution material. The design used in this research is the non-equivalent control group design. Sampling was conducted with a purposive sampling technique; grade XI science class 6 was obtained as the experimental class and grade XI science class 5 as the control class. The data collection methods used were documentation, testing, observation, and questionnaires. The results showed that the average cognitive achievement of the experimental class was 84, while that of the control class was 82. The data analysis showed that applying the advance organizer learning model with SETS vision increased mastery of chemical concepts by 4%, with a correlation of 0.2. Based on the results, it can be concluded that the advance organizer learning model with SETS vision had a positive effect on mastery of chemistry concepts for the buffer solution material. The advice given is that this learning model should also be applied to other chemistry materials, adapted as needed so that its effect on concept mastery increases. Keywords: Advance Organizer, Buffer Solution, Concept Mastery, SETS

  10. The Influence of Study-Level Inference Models and Study Set Size on Coordinate-Based fMRI Meta-Analyses

    Directory of Open Access Journals (Sweden)

    Han Bossier

    2018-01-01

    Given the increasing amount of neuroimaging studies, there is a growing need to summarize published results. Coordinate-based meta-analyses use the locations of statistically significant local maxima, possibly with the associated effect sizes, to aggregate studies. In this paper, we investigate the influence of key characteristics of a coordinate-based meta-analysis on (1) the balance between false and true positives and (2) the activation reliability of the outcome of a coordinate-based meta-analysis. More particularly, we consider the influence of the chosen group-level model at the study level [fixed effects, ordinary least squares (OLS), or mixed effects models], the type of coordinate-based meta-analysis [Activation Likelihood Estimation (ALE), which only uses peak locations, versus fixed effects and random effects meta-analysis, which take into account both peak location and height], and the number of studies included in the analysis (from 10 to 35). To do this, we apply a resampling scheme on a large dataset (N = 1,400) to create a test condition and compare this with an independent evaluation condition. The test condition corresponds to subsampling participants into studies and combining these using meta-analyses. The evaluation condition corresponds to a high-powered group analysis. We observe the best performance when using mixed effects models in individual studies combined with a random effects meta-analysis. Moreover, the performance increases with the number of studies included in the meta-analysis. When peak height is not taken into consideration, we show that the popular ALE procedure is a good alternative in terms of the balance between type I and II errors. However, it requires more studies than other procedures to achieve the same activation reliability. Finally, we discuss the differences, interpretations, and limitations of our results.

  11. DESIRE FOR LEVELS. Background study for the policy document "Setting Environmental Quality Standards for Water and Soil"

    NARCIS (Netherlands)

    van de Meent D; Aldenberg T; Canton JH; van Gestel CAM; Slooff W

    1990-01-01

    The report provides scientific support for setting environmental quality objectives for water, sediment and soil. Quality criteria are not set in this report. Only options for decisions are given. The report is restricted to the derivation of the 'maximally acceptable risk' levels (MAR)

  12. A model for assessing the risk of human trafficking on a local level

    Science.gov (United States)

    Colegrove, Amanda

    Human trafficking is a human rights violation that is difficult to quantify. Models for estimating the number of victims of trafficking presented by previous researchers depend on inconsistent, poor-quality data. As an intermediate step to help current efforts by nonprofits to combat human trafficking, this project presents a model that is not dependent on quantitative data specific to human trafficking, but rather profiles the risk of human trafficking at the local level through causative factors. Businesses identified in the literature were weighted based on the presence of characteristics that increase the likelihood of trafficking in persons. The mean risk was calculated by census tract to reveal the multiplicity of risk levels in both rural and urban settings. Results indicate that labor trafficking may be a more diffuse problem in Missouri than sex trafficking. Additionally, spatial patterns of risk remained largely the same regardless of adjustments made to the model.

  13. Considering Actionability at the Participant's Research Setting Level for Anticipatable Incidental Findings from Clinical Research.

    Science.gov (United States)

    Ortiz-Osorno, Alberto Betto; Ehler, Linda A; Brooks, Judith

    2015-01-01

    Determining what constitutes an anticipatable incidental finding (IF) from clinical research and defining whether, and when, this IF should be returned to the participant have been topics of discussion in the field of human subject protections for the last 10 years. It has been debated that implementing a comprehensive IF-approach that addresses both the responsibility of researchers to return IFs and the expectation of participants to receive them can be logistically challenging. IFs have been debated at different levels, such as the ethical reasoning for considering their disclosure or the need for planning for them during the development of the research study. Some authors have discussed the methods for re-contacting participants for disclosing IFs, as well as the relevance of considering the clinical importance of the IFs. Similarly, other authors have debated about when IFs should be disclosed to participants. However, no author has addressed how the "actionability" of the IFs should be considered, evaluated, or characterized at the participant's research setting level. This paper defines the concept of "Actionability at the Participant's Research Setting Level" (APRSL) for anticipatable IFs from clinical research, discusses some related ethical concepts to justify the APRSL concept, proposes a strategy to incorporate APRSL into the planning and management of IFs, and suggests a strategy for integrating APRSL at each local research setting. © 2015 American Society of Law, Medicine & Ethics, Inc.

  14. General sets of coherent states and the Jaynes-Cummings model

    International Nuclear Information System (INIS)

    Daoud, M.; Hussin, V.

    2002-01-01

    General sets of coherent states are constructed for quantum systems admitting a nondegenerate infinite discrete energy spectrum. They are eigenstates of an annihilation operator and satisfy the usual properties of standard coherent states. The application of such a construction to the quantum optics Jaynes-Cummings model leads to a new understanding of the properties of this model. (author)

  15. Modelling high level system design and unit commitment for a microgrid

    Energy Technology Data Exchange (ETDEWEB)

    Hawkes, A.D. [Centre for Energy Policy and Technology, Imperial College London, London SW7 2AZ (United Kingdom); Leach, M.A. [Centre for Environmental Strategy, Faculty of Engineering, University of Surrey, Guildford GU2 7XH (United Kingdom)

    2009-07-15

    This article develops a linear programming cost minimisation model for the high level system design and corresponding unit commitment of generators and storage within a microgrid; a set of energy resources working co-operatively to create a cost effective, reliable and environmentally friendly energy provision system. Previous work in this area is used as a basis for formulation of a new approach to this problem, with particular emphasis on why a microgrid is different to centralised generation or other grid-connected decentralised energy resources. Specifically, the model explicitly defines the amount of time that the microgrid would be expected to operate autonomously, and restricts flow of heat between microgrid participants to defined cases. The model developed is applied to a set of United Kingdom commercial load profiles, under best current estimates of energy prices and technology capital costs, to determine investment attractiveness of the microgrid. Sensitivity analysis of results to variations in energy prices is performed. The results broadly indicate that a microgrid can offer an economic proposition, although it is necessarily slightly more expensive than regular grid-connected decentralised generation. The analysis results have raised important questions regarding a fair method for settlement between microgrid participants, and game theory has been identified as a suitable tool to analyse aspects of this situation. (author)
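
    The article itself gives no code; as a rough, hypothetical sketch of the kind of cost-minimising unit-commitment formulation it describes (two invented generators plus grid import, no storage or heat flows, all prices and the load profile made up for illustration), one could write:

    ```python
    # Minimal MILP unit-commitment sketch in the spirit of the microgrid study
    # above. Generator data, prices and the load profile are hypothetical; the
    # paper's model also covers storage, heat flows and islanded operation.
    import pulp

    hours = range(24)
    load = [30 + 20 * (8 <= h <= 20) for h in hours]  # kW, invented daytime peak

    gens = {
        "chp":    {"pmax": 40, "cost": 0.06},  # EUR/kWh, invented
        "diesel": {"pmax": 50, "cost": 0.12},
    }
    grid_price = 0.15  # EUR/kWh for grid imports, invented

    prob = pulp.LpProblem("microgrid_unit_commitment", pulp.LpMinimize)

    p = {(g, h): pulp.LpVariable(f"p_{g}_{h}", 0, gens[g]["pmax"])
         for g in gens for h in hours}
    on = {(g, h): pulp.LpVariable(f"on_{g}_{h}", cat=pulp.LpBinary)
          for g in gens for h in hours}
    imp = {h: pulp.LpVariable(f"import_{h}", 0) for h in hours}

    # Objective: total energy cost over the day.
    prob += (pulp.lpSum(gens[g]["cost"] * p[g, h] for g in gens for h in hours)
             + pulp.lpSum(grid_price * imp[h] for h in hours))

    for h in hours:
        # Supply must meet demand in every hour.
        prob += pulp.lpSum(p[g, h] for g in gens) + imp[h] == load[h]
        for g in gens:
            # A unit can only produce while committed.
            prob += p[g, h] <= gens[g]["pmax"] * on[g, h]

    prob.solve()
    print(pulp.LpStatus[prob.status], pulp.value(prob.objective))
    ```

    Adding storage state variables, heat-flow constraints between participants, and an autonomous-operation (islanding) requirement would move this toy model toward the formulation described in the article.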

  16. Modelling high level system design and unit commitment for a microgrid

    International Nuclear Information System (INIS)

    Hawkes, A.D.; Leach, M.A.

    2009-01-01

    This article develops a linear programming cost minimisation model for the high level system design and corresponding unit commitment of generators and storage within a microgrid; a set of energy resources working co-operatively to create a cost effective, reliable and environmentally friendly energy provision system. Previous work in this area is used as a basis for formulation of a new approach to this problem, with particular emphasis on why a microgrid is different to centralised generation or other grid-connected decentralised energy resources. Specifically, the model explicitly defines the amount of time that the microgrid would be expected to operate autonomously, and restricts flow of heat between microgrid participants to defined cases. The model developed is applied to a set of United Kingdom commercial load profiles, under best current estimates of energy prices and technology capital costs, to determine investment attractiveness of the microgrid. Sensitivity analysis of results to variations in energy prices is performed. The results broadly indicate that a microgrid can offer an economic proposition, although it is necessarily slightly more expensive than regular grid-connected decentralised generation. The analysis results have raised important questions regarding a fair method for settlement between microgrid participants, and game theory has been identified as a suitable tool to analyse aspects of this situation. (author)

  17. Using Set Covering with Item Sampling to Analyze the Infeasibility of Linear Programming Test Assembly Models

    Science.gov (United States)

    Huitzing, Hiddo A.

    2004-01-01

    This article shows how set covering with item sampling (SCIS) methods can be used in the analysis and preanalysis of linear programming models for test assembly (LPTA). LPTA models can construct tests, fulfilling a set of constraints set by the test assembler. Sometimes, no solution to the LPTA model exists. The model is then said to be…

  18. On Models with Uncountable Set of Spin Values on a Cayley Tree: Integral Equations

    International Nuclear Information System (INIS)

    Rozikov, Utkir A.; Eshkobilov, Yusup Kh.

    2010-01-01

    We consider models with nearest-neighbor interactions and with the set [0, 1] of spin values, on a Cayley tree of order k ≥ 1. We reduce the problem of describing the 'splitting Gibbs measures' of the model to the description of the solutions of some nonlinear integral equation. For k = 1 we show that the integral equation has a unique solution. In case k ≥ 2 some models (with the set [0, 1] of spin values) which have a unique splitting Gibbs measure are constructed. Also for the Potts model with uncountable set of spin values it is proven that there is unique splitting Gibbs measure.

  19. New Hybrid Multiple Attribute Decision-Making Model for Improving Competence Sets: Enhancing a Company’s Core Competitiveness

    Directory of Open Access Journals (Sweden)

    Kuan-Wei Huang

    2016-02-01

    Full Text Available A company’s core competitiveness depends on the strategic allocation of its human resources in alignment with employee capabilities. Competency models can identify the range of capabilities at a company’s disposal, and this information can be used to develop internal or external education training policies for sustainable development. Such models can ensure the adoption of a strategic orientation that reflects the growth of the employee competence set, enhancing human resources sustainably. This approach ensures that the most appropriate people are assigned to the most appropriate positions. In this study, we proposed a new hybrid multiple attribute decision-making model by using the Decision-Making Trial and Evaluation Laboratory (DEMATEL) technique to construct an influential network relation map (INRM) and determined the influential weights by using the basic concept of the analytic network process (called DEMATEL-based ANP, DANP); the influential weights were then adopted with a modified Vise Kriterijumska Optimizacija I Kompromisno Resenje (VIKOR) method. A simple forecasting technique serving as an iteration function was also proposed. The proposed model was effective. We expect that the proposed model can facilitate making timely revisions, reflecting the growth of employee competence sets, reducing the performance gap toward the aspiration level, and ensuring the sustainability of a company.

  20. Observations and a linear model of water level in an interconnected inlet-bay system

    Science.gov (United States)

    Aretxabaleta, Alfredo; Ganju, Neil K.; Butman, Bradford; Signell, Richard

    2017-01-01

    A system of barrier islands and back-barrier bays occurs along southern Long Island, New York, and in many coastal areas worldwide. Characterizing the bay physical response to water level fluctuations is needed to understand flooding during extreme events and evaluate their relation to geomorphological changes. Offshore sea level is one of the main drivers of water level fluctuations in semienclosed back-barrier bays. We analyzed observed water levels (October 2007 to November 2015) and developed analytical models to better understand bay water level along southern Long Island. An increase (∼0.02 m change in 0.17 m amplitude) in the dominant M2 tidal amplitude (containing the largest fraction of the variability) was observed in Great South Bay during mid-2014. The observed changes in both tidal amplitude and bay water level transfer from offshore were related to the dredging of nearby inlets and possibly the changing size of a breach across Fire Island caused by Hurricane Sandy (after December 2012). The bay response was independent of the magnitude of the fluctuations (e.g., storms) at a specific frequency. An analytical model that incorporates bay and inlet dimensions reproduced the observed transfer function in Great South Bay and surrounding areas. The model predicts the transfer function in Moriches and Shinnecock bays where long-term observations were not available. The model is a simplified tool to investigate changes in bay water level and enables the evaluation of future conditions and alternative geomorphological settings.
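
    The abstract does not reproduce the model equations; for orientation, a classical frictionless Helmholtz-mode idealization of an inlet-bay system (not necessarily the authors' exact formulation) reads:

    ```latex
    A_B \frac{d\eta_B}{dt} = A_c\, u, \qquad
    \frac{du}{dt} = \frac{g}{L}\,(\eta_0 - \eta_B)
    \quad\Longrightarrow\quad
    \frac{\hat{\eta}_B}{\hat{\eta}_0}(\omega) = \frac{\omega_H^2}{\omega_H^2 - \omega^2},
    \qquad \omega_H^2 = \frac{g A_c}{L A_B}
    ```

    Here \eta_0 and \eta_B are the offshore and bay water levels, u the inlet velocity, A_c and L the inlet cross-sectional area and length, and A_B the bay surface area. Enlarging an inlet (larger A_c, e.g., by dredging) raises \omega_H and hence the transfer of tidal amplitude into the bay, consistent with the M2 amplitude increase reported above.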

  1. Analysis of Forensic Autopsy in 120 Cases of Medical Disputes Among Different Levels of Institutional Settings.

    Science.gov (United States)

    Yu, Lin-Sheng; Ye, Guang-Hua; Fan, Yan-Yan; Li, Xing-Biao; Feng, Xiang-Ping; Han, Jun-Ge; Lin, Ke-Zhi; Deng, Miao-Wu; Li, Feng

    2015-09-01

    Despite advances in medical science, the causes of death can sometimes only be determined by pathologists after a complete autopsy. Few studies have investigated the importance of forensic autopsy in medically disputed cases among different levels of institutional settings. Our study aimed to analyze forensic autopsy in 120 cases of medical disputes among five levels of institutional settings between 2001 and 2012 in Wenzhou, China. The results showed an overall concordance rate of 55%. Of the 39% of clinically missed diagnoses, cardiovascular pathology comprises 55.32%, while respiratory pathology accounts for the remaining 44.68%. Factors that increase the likelihood of missed diagnoses were private clinics, community settings, and county hospitals. These results support that autopsy remains an important tool in establishing causes of death in medically disputed cases, which may directly determine or exclude the fault of medical care and therefore help in resolving these cases. © 2015 American Academy of Forensic Sciences.

  2. Sensitivity of Global Methane Bayesian Inversion to Surface Observation Data Sets and Chemical-Transport Model Resolution

    Science.gov (United States)

    Lew, E. J.; Butenhoff, C. L.; Karmakar, S.; Rice, A. L.; Khalil, A. K.

    2017-12-01

    Methane is the second most important greenhouse gas after carbon dioxide. In efforts to control emissions, a careful examination of the methane budget and source strengths is required. To determine methane surface fluxes, Bayesian methods are often used to provide top-down constraints. Inverse modeling derives unknown fluxes using observed methane concentrations, a chemical transport model (CTM) and prior information. The Bayesian inversion reduces prior flux uncertainties by exploiting information content in the data. While the Bayesian formalism produces internal error estimates of source fluxes, systematic or external errors that arise from user choices in the inversion scheme are often much larger. Here we examine the sensitivity and uncertainty of our inversion under different observation data sets and CTM grid resolutions. We compare posterior surface fluxes using the data product GLOBALVIEW-CH4 against the event-level molar mixing ratio data available from NOAA. GLOBALVIEW-CH4 is a collection of CH4 concentration estimates from 221 sites, collected by 12 laboratories, that have been interpolated and extracted to provide weekly records from 1984-2008. In contrast, the event-level NOAA data record methane mixing ratios from field measurements at 102 sites, with sampling frequency irregularities and gaps in time. Furthermore, the sampling platform types used by the data sets may influence the posterior flux estimates, namely fixed surface, tower, ship and aircraft sites. To explore the sensitivity of the posterior surface fluxes to the observation network geometry, inversions composed of all sites, only aircraft, only ship, only tower and only fixed surface sites are performed and compared. Also, we investigate the sensitivity of the error reduction associated with the resolution of the GEOS-Chem simulation (4°×5° vs 2°×2.5°) used to calculate the response matrix. Using a higher resolution grid decreased the model-data error at most sites, thereby
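
    The abstract presumes familiarity with the Bayesian inversion formalism; in the standard linear-Gaussian setting, with prior fluxes x_a and prior covariance B, observations y with error covariance R, and CTM response (Jacobian) matrix H, the posterior flux estimate and covariance are:

    ```latex
    \hat{x} = x_a + \left(H^{\top} R^{-1} H + B^{-1}\right)^{-1} H^{\top} R^{-1}\,(y - H x_a),
    \qquad
    \hat{A} = \left(H^{\top} R^{-1} H + B^{-1}\right)^{-1}
    ```

    The "error reduction" discussed above compares the posterior covariance \hat{A} with the prior covariance B; the observation network geometry and the CTM resolution enter through H and R.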

  3. Level densities

    International Nuclear Information System (INIS)

    Ignatyuk, A.V.

    1998-01-01

    For any applications of the statistical theory of nuclear reactions it is very important to obtain the parameters of the level density description from reliable experimental data. The cumulative numbers of low-lying levels and the average spacings between neutron resonances are usually used as such data. The level density parameters fitted to such data are compiled in the RIPL Starter File for the three models most frequently used in practical calculations: i) For the Gilbert-Cameron model the parameters of the Beijing group, based on rather recent compilations of the neutron resonance and low-lying level densities and included into the beijing-gc.dat file, are chosen as recommended. As alternative versions the parameters provided by other groups are given in the files: jaeri-gc.dat, bombay-gc.dat, obninsk-gc.dat. Additionally the iljinov-gc.dat and mengoni-gc.dat files include sets of the level density parameters that take into account the damping of shell effects at high energies. ii) For the back-shifted Fermi gas model the beijing-bs.dat file is selected as the recommended one. Alternative parameters of the Obninsk group are given in the obninsk-bs.dat file and those of Bombay in bombay-bs.dat. iii) For the generalized superfluid model the Obninsk group parameters included into the obninsk-bcs.dat file are chosen as recommended ones and the beijing-bcs.dat file is included as an alternative set of parameters. iv) For the microscopic approach to the level densities the files are: obninsk-micro.for - FORTRAN 77 source for the microscopical statistical level density code developed in Obninsk by Ignatyuk and coworkers, moller-levels.gz - Moeller single-particle level and ground state deformation data base, moller-levels.for - retrieval code for Moeller single-particle level scheme. (author)

  4. Constructing set-valued fundamental diagrams from jamiton solutions in second order traffic models

    KAUST Repository

    Seibold, Benjamin

    2013-09-01

    Fundamental diagrams of vehicular traffic flow are generally multivalued in the congested flow regime. We show that such set-valued fundamental diagrams can be constructed systematically from simple second order macroscopic traffic models, such as the classical Payne-Whitham model or the inhomogeneous Aw-Rascle-Zhang model. These second order models possess nonlinear traveling wave solutions, called jamitons, and the multi-valued parts in the fundamental diagram correspond precisely to jamiton-dominated solutions. This study shows that transitions from function-valued to set-valued parts in a fundamental diagram arise naturally in well-known second order models. As a particular consequence, these models intrinsically reproduce traffic phases. © American Institute of Mathematical Sciences.
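
    For reference, one common statement of the Payne-Whitham model named above is:

    ```latex
    \rho_t + (\rho u)_x = 0, \qquad
    u_t + u\,u_x + \frac{c_0^2}{\rho}\,\rho_x = \frac{U(\rho) - u}{\tau}
    ```

    where \rho is traffic density, u velocity, U(\rho) the equilibrium speed-density relation, \tau a relaxation time, and c_0 the traffic "sound speed"; jamitons are the self-sustained traveling-wave solutions of such systems, and sampling them traces out the set-valued part of the fundamental diagram.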

  5. Constructing set-valued fundamental diagrams from jamiton solutions in second order traffic models

    KAUST Repository

    Seibold, Benjamin; Flynn, Morris R.; Kasimov, Aslan R.; Rosales, Rodolfo Rubén

    2013-01-01

    Fundamental diagrams of vehicular traffic flow are generally multivalued in the congested flow regime. We show that such set-valued fundamental diagrams can be constructed systematically from simple second order macroscopic traffic models, such as the classical Payne-Whitham model or the inhomogeneous Aw-Rascle-Zhang model. These second order models possess nonlinear traveling wave solutions, called jamitons, and the multi-valued parts in the fundamental diagram correspond precisely to jamiton-dominated solutions. This study shows that transitions from function-valued to set-valued parts in a fundamental diagram arise naturally in well-known second order models. As a particular consequence, these models intrinsically reproduce traffic phases. © American Institute of Mathematical Sciences.

  6. Simulation of shallow groundwater levels: Comparison of a data-driven and a conceptual model

    Science.gov (United States)

    Fahle, Marcus; Dietrich, Ottfried; Lischeid, Gunnar

    2015-04-01

    Despite an abundance of models aimed at simulating shallow groundwater levels, application of such models is often hampered by a lack of appropriate input data. Difficulties especially arise with regard to soil data, which are typically hard to obtain and prone to spatial variability, eventually leading to uncertainties in the model results. Modelling approaches relying entirely on easily measured quantities are therefore an alternative to encourage the applicability of models. We present and compare two models for calculating 1-day-ahead predictions of the groundwater level that are based only on measurements of potential evapotranspiration, precipitation and groundwater levels. The first model is a newly developed conceptual model that is parametrized using the White method (which estimates the actual evapotranspiration on the basis of diurnal groundwater fluctuations) and a rainfall-response ratio. Inverted versions of the two latter approaches are then used to calculate the predictions of the groundwater level. Furthermore, as a completely data-driven alternative, a simple feed-forward multilayer perceptron neural network was trained based on the same inputs and outputs. Data from four growing periods (April to October) from a study site situated in the Spreewald wetland in North-east Germany were taken to set up the models and compare their performance. In addition, response surfaces that relate model outputs to combinations of different input variables are used to reveal those aspects in which the two approaches coincide and those in which they differ. Finally, it will be evaluated whether the conceptual approach can be enhanced by extracting knowledge from the neural network. This is done by replacing in the conceptual model the default function that relates groundwater recharge and groundwater level, which is assumed to be linear, by the non-linear function extracted from the neural network.
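
    As a hedged sketch of the data-driven variant described above (a feed-forward multilayer perceptron mapping today's precipitation, potential evapotranspiration and groundwater level to tomorrow's level), with invented placeholder data and hyperparameters rather than the study's own:

    ```python
    # Hypothetical sketch of a feed-forward network for 1-day-ahead groundwater
    # level prediction. Real inputs would be measured precipitation P, potential
    # evapotranspiration ETp and groundwater levels h; random placeholders here.
    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n = 800  # daily records over roughly four growing seasons
    P = rng.gamma(1.0, 2.0, n)                      # mm/day
    ETp = rng.uniform(0.0, 5.0, n)                  # mm/day
    h = np.cumsum(rng.normal(0.0, 0.01, n))         # groundwater level, m

    # Features at day t, target at day t+1.
    X = np.column_stack([P[:-1], ETp[:-1], h[:-1]])
    y = h[1:]
    X_train, X_test, y_train, y_test = train_test_split(X, y, shuffle=False)

    mlp = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0)
    mlp.fit(X_train, y_train)
    print("R^2 on held-out days:", mlp.score(X_test, y_test))
    ```

    Keeping the train/test split unshuffled respects the time ordering of the series, which matters when comparing against the conceptual model on the same periods.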

  7. Analysis of a Farquhar-von Caemmerer-Berry leaf-level photosynthetic rate model for Populus tremuloides in the context of modeling and measurement limitations

    International Nuclear Information System (INIS)

    Lenz, Kathryn E.; Host, George E.; Roskoski, Kyle; Noormets, Asko; Sober, Anu; Karnosky, David F.

    2010-01-01

    The balance of mechanistic detail with mathematical simplicity contributes to the broad use of the Farquhar, von Caemmerer and Berry (FvCB) photosynthetic rate model. Here the FvCB model was coupled with a stomatal conductance model to form an [A,gs] model, and parameterized for mature Populus tremuloides leaves under varying CO2 and temperature levels. Data were selected to be within typical forest light, CO2 and temperature ranges, reducing artifacts associated with data collected at extreme values. The error between model-predicted photosynthetic rate (A) and A data was measured in three ways and found to be up to three times greater for each of two independent data sets than for a base-line evaluation using parameterization data. The evaluation methods used here apply to comparisons of model validation results among data sets varying in number and distribution of data, as well as to performance comparisons of [A,gs] models differing in internal-process components. - A photosynthetic rate model is parameterized for Populus tremuloides and evaluated based on its ability to predict dependent as well as independent data.
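
    For context, the FvCB core (in its commonly cited form; the study's exact parameterization may differ) takes net assimilation as the minimum of the Rubisco-limited and RuBP-regeneration-limited rates less day respiration:

    ```latex
    A = \min(A_c, A_j) - R_d, \qquad
    A_c = \frac{V_{c\max}\,(C_i - \Gamma^{*})}{C_i + K_c\,(1 + O/K_o)}, \qquad
    A_j = \frac{J\,(C_i - \Gamma^{*})}{4\,C_i + 8\,\Gamma^{*}}
    ```

    Coupling with a stomatal conductance model closes the [A,gs] system, since the intercellular CO2 concentration C_i depends on both gs and A.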

  8. Using a micro-level model to generate a macro-level model of productive successful aging.

    Science.gov (United States)

    Johnson, Jessica K M; Sarkisian, Natalia; Williamson, John B

    2015-02-01

    Aging successfully entails good physical and cognitive health, as well as ongoing participation in social and productive activity. This study homes in on participation in productive activity, a factor that makes an important contribution to successful aging. One conceptual model of productive activity in later life specifies the antecedents and consequences of productivity. This study draws on that micro-level model to develop a corresponding macro-level model and assesses its utility for examining the predictors of and explaining the relationships between one form of productivity (labor force participation rates) and one aspect of well-being (average life expectancy) among males and females. Random effects regression models and path analysis were used to analyze cross-national longitudinal data for 24 high-income Organization for Economic Co-operation and Development (OECD) countries at seven time points (1980-2010; 168 observations total). OECD countries with higher labor force participation rates among older workers have higher life expectancies. Labor force participation mediates the effects of gross domestic product per capita on male and female life expectancy, and it mediates the effect of self-employment rate for men, but it acts as a suppressor with regard to the effect of public spending on male and female life expectancy. A well-known micro-level model of productive activity can be fruitfully adapted to account for macro-level cross-national variation in productivity and well-being. © The Author 2014. Published by Oxford University Press on behalf of The Gerontological Society of America. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  9. What Time is Your Sunset? Accounting for Refraction in Sunrise/set Prediction Models

    Science.gov (United States)

    Wilson, Teresa; Bartlett, Jennifer Lynn; Chizek Frouard, Malynda; Hilton, James; Phlips, Alan; Edgar, Roman

    2018-01-01

    Algorithms that predict sunrise and sunset times currently have an uncertainty of one to four minutes at mid-latitudes (0° - 55° N/S) due to limitations in the atmospheric models they incorporate. At higher latitudes, slight changes in refraction can cause significant discrepancies, including difficulties determining whether the Sun appears to rise or set. While different components of refraction are known, how they affect predictions of sunrise/set has not yet been quantified. A better understanding of the contributions from temperature profile, pressure, humidity, and aerosols could significantly improve the standard prediction. We present a sunrise/set calculator that interchanges the refraction component by varying the refraction model. We then compared these predictions with data sets of observed rise/set times taken at Mount Wilson Observatory in California, at the University of Alberta in Edmonton, Alberta, and onboard the SS James Franco in the Atlantic. A thorough investigation of the problem requires a more substantial data set of observed rise/set times and corresponding meteorological data from around the world. We have developed a mobile application, Sunrise & Sunset Observer, so that anyone can capture this astronomical and meteorological data using their smartphone video recorder as part of a citizen science project. The Android app for this project is available in the Google Play store. Videos can also be submitted through the project website (riseset.phy.mtu.edu). Data analysis will lead to more complete models that will provide higher accuracy rise/set predictions to benefit astronomers, navigators, and outdoorsmen everywhere.
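
    The textbook starting point that such calculators refine is the hour-angle formula with a fixed apparent altitude of -50 arcminutes (about 34' of standard refraction plus the solar semidiameter); a minimal sketch, with simplified inputs, of the part the refraction research above aims to improve:

    ```python
    # Sketch of the textbook sunrise/set hour-angle computation. The -0.833 deg
    # altitude bundles ~34' of standard refraction with the solar semidiameter;
    # the research above replaces this constant with explicit refraction models.
    import math

    def daylight_hours(lat_deg: float, decl_deg: float, h0_deg: float = -0.833) -> float:
        """Length of daylight (hours) for a latitude and solar declination."""
        lat, decl, h0 = map(math.radians, (lat_deg, decl_deg, h0_deg))
        cos_omega = (math.sin(h0) - math.sin(lat) * math.sin(decl)) / (
            math.cos(lat) * math.cos(decl))
        if cos_omega >= 1.0:   # Sun never rises (polar night)
            return 0.0
        if cos_omega <= -1.0:  # Sun never sets (midnight sun)
            return 24.0
        omega = math.acos(cos_omega)             # sunrise/set hour angle, radians
        return 2.0 * math.degrees(omega) / 15.0  # 15 deg of hour angle = 1 hour

    # Example: ~45 N at an equinox (declination ~0):
    print(round(daylight_hours(45.0, 0.0), 2))  # slightly over 12 h due to refraction
    ```

    The near-vanishing denominator at high latitudes is exactly why small refraction changes there can flip the prediction between rising and not rising.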

  10. California Dental Hygiene Educators' Perceptions of an Application of the ADHA Advanced Dental Hygiene Practitioner (ADHP) Model in Medical Settings.

    Science.gov (United States)

    Smith, Lauren; Walsh, Margaret

    2015-12-01

    To assess California dental hygiene educators' perceptions of an application of the American Dental Hygienists' Association's (ADHA) advanced dental hygiene practitioner model (ADHP) in medical settings where the advanced dental hygiene practitioner collaborates in medical settings with other health professionals to meet clients' oral health needs. In 2014, 30 directors of California dental hygiene programs were contacted to participate in and distribute an online survey to their faculty. In order to capture non-respondents, 2 follow-up e-mails were sent. Descriptive analysis and cross-tabulations were analyzed using the online survey software program, Qualtrics™. The educator response rate was 18% (70/387). Nearly 90% of respondents supported the proposed application of the ADHA ADHP model and believed it would increase access to care and reduce oral health disparities. They also agreed with most of the proposed services, target populations and workplace settings. Slightly over half believed a master's degree was the appropriate educational level needed. Among California dental hygiene educators responding to this survey, there was strong support for the proposed application of the ADHA model in medical settings. More research is needed among a larger sample of dental hygiene educators and clinicians, as well as among other health professionals such as physicians, nurses and dentists. Copyright © 2015 The American Dental Hygienists’ Association.

  11. Moderation analysis using a two-level regression model.

    Science.gov (United States)

    Yuan, Ke-Hai; Cheng, Ying; Maxwell, Scott

    2014-10-01

    Moderation analysis is widely used in social and behavioral research. The most commonly used model for moderation analysis is moderated multiple regression (MMR) in which the explanatory variables of the regression model include product terms, and the model is typically estimated by least squares (LS). This paper argues for a two-level regression model in which the regression coefficients of a criterion variable on predictors are further regressed on moderator variables. An algorithm for estimating the parameters of the two-level model by normal-distribution-based maximum likelihood (NML) is developed. Formulas for the standard errors (SEs) of the parameter estimates are provided and studied. Results indicate that, when heteroscedasticity exists, NML with the two-level model gives more efficient and more accurate parameter estimates than the LS analysis of the MMR model. When error variances are homoscedastic, NML with the two-level model leads to essentially the same results as LS with the MMR model. Most importantly, the two-level regression model permits estimating the percentage of variance of each regression coefficient that is due to moderator variables. When applied to data from General Social Surveys 1991, NML with the two-level model identified a significant moderation effect of race on the regression of job prestige on years of education while LS with the MMR model did not. An R package is also developed and documented to facilitate the application of the two-level model.
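
    In equation form (our notation, not necessarily the authors'), the contrast is between the usual MMR specification and the two-level formulation:

    ```latex
    \text{MMR:}\quad y = \beta_0 + \beta_1 x + \beta_2 z + \beta_3 xz + e, \qquad
    \text{two-level:}\quad y = b_0 + b_1 x + e, \quad b_1 = \gamma_0 + \gamma_1 z + u
    ```

    The share of \operatorname{Var}(b_1) explained by the moderator z is the "percentage of variance of each regression coefficient that is due to moderator variables" referred to above, and the residual u is what allows NML estimation to accommodate heteroscedasticity.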

  12. Imaging disturbance zones ahead of a tunnel by elastic full-waveform inversion: Adjoint gradient based inversion vs. parameter space reduction using a level-set method

    Directory of Open Access Journals (Sweden)

    Andre Lamert

    2018-03-01

    Full Text Available We present and compare two flexible and effective methodologies to predict disturbance zones ahead of underground tunnels by using elastic full-waveform inversion. One methodology uses a linearized, iterative approach based on misfit gradients computed with the adjoint method while the other uses iterative, gradient-free unscented Kalman filtering in conjunction with a level-set representation. Whereas the former does not involve a priori assumptions on the distribution of elastic properties ahead of the tunnel, the latter introduces a massive reduction in the number of explicit model parameters to be inverted for by focusing on the geometric form of potential disturbances and their average elastic properties. Both imaging methodologies are validated through successful reconstructions of simple disturbances. As an application, we consider an elastic multiple disturbance scenario. By using identical synthetic time-domain seismograms as test data, we obtain satisfactory, albeit different, reconstruction results from the two inversion methodologies. The computational costs of both approaches are of the same order of magnitude, with the gradient-based approach showing a slight advantage. The model parameter space reduction approach compensates for this by additionally providing a posteriori estimates of model parameter uncertainty. Keywords: Tunnel seismics, Full waveform inversion, Seismic waves, Level-set method, Adjoint method, Kalman filter

  13. Risk Decision Making Based on Decision-theoretic Rough Set: A Three-way View Decision Model

    OpenAIRE

    Huaxiong Li; Xianzhong Zhou

    2011-01-01

    Rough set theory has witnessed great success in data mining and knowledge discovery, providing good support for decision making on a given set of data. However, a practical decision problem always shows diversity under the same circumstances according to the different personalities of the decision makers. A single decision model cannot provide a full description of such diverse decisions. In this article, a review of Pawlak rough set models and probabilistic rough set models is presented, and a ...

  14. Segmentation of teeth in CT volumetric dataset by panoramic projection and variational level set

    Energy Technology Data Exchange (ETDEWEB)

    Hosntalab, Mohammad [Islamic Azad University, Faculty of Engineering, Science and Research Branch, Tehran (Iran); Aghaeizadeh Zoroofi, Reza [University of Tehran, Control and Intelligent Processing Center of Excellence, School of Electrical and Computer Engineering, College of Engineering, Tehran (Iran); Abbaspour Tehrani-Fard, Ali [Islamic Azad University, Faculty of Engineering, Science and Research Branch, Tehran (Iran); Sharif University of Technology, Department of Electrical Engineering, Tehran (Iran); Shirani, Gholamreza [Faculty of Dentistry Medical Science of Tehran University, Oral and Maxillofacial Surgery Department, Tehran (Iran)

    2008-09-15

    Quantification of teeth is of clinical importance for various computer assisted procedures such as dental implant, orthodontic planning, face, jaw and cosmetic surgeries. In this regard, segmentation is a major step. In this paper, we propose a method for segmentation of teeth in volumetric computed tomography (CT) data using panoramic re-sampling of the dataset in the coronal view and a variational level set. The proposed method consists of five steps as follows: first, we extract a mask in the CT images using Otsu thresholding. Second, the teeth are segmented from other bony tissues by utilizing anatomical knowledge of teeth in the jaws. Third, the proposed method is followed by estimating the arc of the upper and lower jaws and panoramic re-sampling of the dataset. Separation of the upper and lower jaws and initial segmentation of teeth are performed by employing the horizontal and vertical projections of the panoramic dataset, respectively. Based on the above-mentioned procedures an initial mask for each tooth is obtained. Finally, we utilize the initial mask of teeth and apply a variational level set to refine initial teeth boundaries to final contours. The proposed algorithm was evaluated in the presence of 30 multi-slice CT datasets including 3,600 images. Experimental results reveal the effectiveness of the proposed method. In the proposed algorithm, the variational level set technique was utilized to trace the contour of the teeth. Since this technique is based on the characteristics of the overall region of the tooth image, it is possible to extract a very smooth and accurate tooth contour. Given the available datasets, the proposed technique was successful in teeth segmentation compared to previous techniques. (orig.)

  15. Segmentation of teeth in CT volumetric dataset by panoramic projection and variational level set

    International Nuclear Information System (INIS)

    Hosntalab, Mohammad; Aghaeizadeh Zoroofi, Reza; Abbaspour Tehrani-Fard, Ali; Shirani, Gholamreza

    2008-01-01

    Quantification of teeth is of clinical importance for various computer assisted procedures such as dental implant, orthodontic planning, face, jaw and cosmetic surgeries. In this regard, segmentation is a major step. In this paper, we propose a method for segmentation of teeth in volumetric computed tomography (CT) data using panoramic re-sampling of the dataset in the coronal view and a variational level set. The proposed method consists of five steps as follows: first, we extract a mask in the CT images using Otsu thresholding. Second, the teeth are segmented from other bony tissues by utilizing anatomical knowledge of teeth in the jaws. Third, the proposed method is followed by estimating the arc of the upper and lower jaws and panoramic re-sampling of the dataset. Separation of the upper and lower jaws and initial segmentation of teeth are performed by employing the horizontal and vertical projections of the panoramic dataset, respectively. Based on the above-mentioned procedures an initial mask for each tooth is obtained. Finally, we utilize the initial mask of teeth and apply a variational level set to refine initial teeth boundaries to final contours. The proposed algorithm was evaluated in the presence of 30 multi-slice CT datasets including 3,600 images. Experimental results reveal the effectiveness of the proposed method. In the proposed algorithm, the variational level set technique was utilized to trace the contour of the teeth. Since this technique is based on the characteristics of the overall region of the tooth image, it is possible to extract a very smooth and accurate tooth contour. Given the available datasets, the proposed technique was successful in teeth segmentation compared to previous techniques. (orig.)
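
    The refinement step above is specific to the authors' panoramic CT pipeline; purely as a generic illustration of evolving an initial mask with a (morphological) Chan-Vese level set, using scikit-image and a placeholder image:

    ```python
    # Generic illustration of refining an initial binary mask with a morphological
    # Chan-Vese level-set evolution, loosely analogous to the refinement step in
    # the paper. This is not the authors' algorithm; it only shows the idea.
    import numpy as np
    from skimage import data, img_as_float
    from skimage.segmentation import morphological_chan_vese

    image = img_as_float(data.camera())          # placeholder grayscale image
    init = np.zeros(image.shape, dtype=np.int8)  # crude initial mask, standing in
    init[64:-64, 64:-64] = 1                     # for the projection-based mask above

    # 100 iterations of level-set evolution; `smoothing` regularizes the contour,
    # which is what yields the smooth boundaries the paper emphasizes.
    refined = morphological_chan_vese(image, 100, init_level_set=init, smoothing=3)
    print(refined.shape, refined.dtype, refined.sum())
    ```

    In the paper's setting the initial mask would come from the jaw-arc estimation and panoramic projections rather than a hand-placed rectangle.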

  16. Basic priority rating model 2.0: current applications for priority setting in health promotion practice.

    Science.gov (United States)

    Neiger, Brad L; Thackeray, Rosemary; Fagen, Michael C

    2011-03-01

    Priority setting is an important component of systematic planning in health promotion and also factors into the development of a comprehensive evaluation plan. The basic priority rating (BPR) model was introduced more than 50 years ago and includes criteria that should be considered in any priority setting approach (i.e., use of predetermined criteria, standardized comparisons, and a rubric that controls bias). Although the BPR model has provided basic direction in priority setting, it does not represent the broad array of data currently available to decision makers. Elements in the model also give more weight to the impact of communicable diseases compared with chronic diseases. For these reasons, several modifications are recommended to improve the BPR model and to better assist health promotion practitioners in the priority setting process. The authors also suggest a new name, BPR 2.0, to represent this revised model.
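
    As commonly presented in the literature (and subject to the revisions this article proposes), the original BPR combines problem size A (scored 0-10), seriousness B (0-20) and intervention effectiveness C (0-10), screened by a binary PEARL feasibility factor D:

    ```latex
    \text{BPR} = \frac{(A + B)\,C}{3} \times D \;\in\; [0, 100]
    ```

    BPR 2.0 keeps this rubric-based, standardized-comparison structure while updating the criteria and weights to better reflect chronic disease burden and the broader array of data now available to decision makers.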

  17. Joint Clustering and Component Analysis of Correspondenceless Point Sets: Application to Cardiac Statistical Modeling.

    Science.gov (United States)

    Gooya, Ali; Lekadir, Karim; Alba, Xenia; Swift, Andrew J; Wild, Jim M; Frangi, Alejandro F

    2015-01-01

    Construction of Statistical Shape Models (SSMs) from arbitrary point sets is a challenging problem due to significant shape variation and lack of explicit point correspondence across the training data set. In medical imaging, point sets can generally represent different shape classes that span healthy and pathological exemplars. In such cases, the constructed SSM may not generalize well, largely because the probability density function (pdf) of the point sets deviates from the underlying assumption of Gaussian statistics. To this end, we propose a generative model for unsupervised learning of the pdf of point sets as a mixture of distinctive classes. A Variational Bayesian (VB) method is proposed for making joint inferences on the labels of point sets, and the principal modes of variations in each cluster. The method provides a flexible framework to handle point sets with no explicit point-to-point correspondences. We also show that by maximizing the marginalized likelihood of the model, the optimal number of clusters of point sets can be determined. We illustrate this work in the context of understanding the anatomical phenotype of the left and right ventricles in heart. To this end, we use a database containing hearts of healthy subjects, patients with Pulmonary Hypertension (PH), and patients with Hypertrophic Cardiomyopathy (HCM). We demonstrate that our method can outperform traditional PCA in both generalization and specificity measures.

  18. Physical property parameter set for modeling ICPP aqueous wastes with ASPEN electrolyte NRTL model

    International Nuclear Information System (INIS)

    Schindler, R.E.

    1996-09-01

    The aqueous waste evaporators at the Idaho Chemical Processing Plant (ICPP) are being modeled using ASPEN software. The ASPEN software calculates chemical and vapor-liquid equilibria with activity coefficients calculated using the electrolyte Non-Random Two Liquid (NRTL) model for local excess Gibbs free energies of interactions between ions and molecules in solution. The use of the electrolyte NRTL model requires the determination of empirical parameters for the excess Gibbs free energies of the interactions between species in solution. This report covers the development of a set of parameters, from literature data, for the use of the electrolyte NRTL model with the major solutes in the ICPP aqueous wastes.
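
    For orientation, the molecular NRTL expression for the excess Gibbs energy that the electrolyte model builds on is (the electrolyte NRTL adds long-range Pitzer-Debye-Hückel terms for ion interactions):

    ```latex
    \frac{g^{E}}{RT} = \sum_{i} x_i\, \frac{\sum_{j} \tau_{ji} G_{ji} x_j}{\sum_{k} G_{ki} x_k},
    \qquad G_{ji} = \exp(-\alpha_{ji}\,\tau_{ji})
    ```

    The empirical parameters referred to in this report are essentially the binary interaction energies \tau_{ji} (and non-randomness factors \alpha_{ji}) fitted to literature data for the ICPP waste solutes.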

  19. Analysis and Modeling of Urban Land Cover Change in Setúbal and Sesimbra, Portugal

    Directory of Open Access Journals (Sweden)

    Yikalo H. Araya

    2010-06-01

    Full Text Available The expansion of cities entails the abandonment of forest and agricultural lands, and these lands’ conversion into urban areas, which results in substantial impacts on ecosystems. Monitoring these changes and planning urban development can be successfully achieved using multitemporal remotely sensed data, spatial metrics, and modeling. In this paper, urban land use change analysis and modeling was carried out for the Concelhos of Setúbal and Sesimbra in Portugal. An existing land cover map for the year 1990, together with two derived land cover maps from multispectral satellite images for the years 2000 and 2006, were utilized using an object-oriented classification approach. Classification accuracy assessment revealed satisfactory results that fulfilled minimum standard accuracy levels. Urban land use dynamics, in terms of both patterns and quantities, were studied using selected landscape metrics and the Shannon Entropy index. Results show that urban areas increased by 91.11% between 1990 and 2006. In contrast, the change was only 6.34% between 2000 and 2006. The entropy value was 0.73 for both municipalities in 1990, indicating a high rate of urban sprawl in the area. In 2006, this value, for both Sesimbra and Setúbal, reached almost 0.90. This is demonstrative of a tendency toward intensive urban sprawl. Urban land use change for the year 2020 was modeled using a Cellular Automata based approach. The predictive power of the model was successfully validated using Kappa variations. Projected land cover changes show a growing tendency in urban land use, which might threaten areas that are currently reserved for natural parks and agricultural lands.

  20. Adaptable Value-Set Analysis for Low-Level Code

    OpenAIRE

    Brauer, Jörg; Hansen, René Rydhof; Kowalewski, Stefan; Larsen, Kim G.; Olesen, Mads Chr.

    2012-01-01

    This paper presents a framework for binary code analysis that uses only SAT-based algorithms. Within the framework, incremental SAT solving is used to perform a form of weakly relational value-set analysis in a novel way, connecting the expressiveness of the value sets to computational complexity. Another key feature of our framework is that it translates the semantics of binary code into an intermediate representation. This allows for a straightforward translation of the program semantics in...

  1. A population-based model for priority setting across the care continuum and across modalities

    Directory of Open Access Journals (Sweden)

    Mortimer Duncan

    2006-03-01

    Full Text Available Abstract Background The Health-sector Wide (HsW) priority setting model is designed to shift the focus of priority setting away from 'program budgets' – that are typically defined by modality or disease-stage – and towards well-defined target populations with a particular disease/health problem. Methods The key features of the HsW model are (i) a disease/health problem framework, (ii) a sequential approach to covering the entire health sector, (iii) comprehensiveness of scope in identifying intervention options and (iv) the use of objective evidence. The HsW model redefines the unit of analysis over which priorities are set to include all mutually exclusive and complementary interventions for the prevention and treatment of each disease/health problem under consideration. The HsW model is therefore incompatible with the fragmented approach to priority setting across multiple program budgets that currently characterises allocation in many health systems. The HsW model employs standard cost-utility analyses and decision-rules with the aim of maximising QALYs contingent upon the global budget constraint for the set of diseases/health problems under consideration. It is recognised that the objective function may include non-health arguments that would imply a departure from simple QALY maximisation and that political constraints frequently limit degrees of freedom. In addressing these broader considerations, the HsW model can be modified to maximise value-weighted QALYs contingent upon the global budget constraint and any political constraints bearing upon allocation decisions. Results The HsW model has been applied in several contexts, recently to osteoarthritis, which has demonstrated both its practical application and its capacity to derive clear evidence-based policy recommendations. Conclusion Comparisons with other approaches to priority setting, such as Programme Budgeting and Marginal Analysis (PBMA) and modality-based cost
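
    In stylized form (our sketch, abstracting from the value weights and political constraints the authors discuss), the HsW decision rule is a budget-constrained QALY maximization over the mutually exclusive and complementary intervention options j:

    ```latex
    \max_{x}\ \sum_{j} q_j\, x_j
    \quad \text{s.t.} \quad \sum_{j} c_j\, x_j \le B, \qquad x_j \in \{0, 1\}
    ```

    where q_j and c_j are the QALYs gained and the cost of option j, and B is the global budget for the set of diseases/health problems under consideration; value-weighted QALYs replace q_j when non-health arguments enter the objective function.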

  2. Modeling category-level purchase timing with brand-level marketing variables

    OpenAIRE

    Fok, D.; Paap, R.

    2003-01-01

    Purchase timing of households is usually modeled at the category level. Marketing efforts are however only available at the brand level. Hence, to describe category-level interpurchase times using marketing efforts one has to construct a category-level measure of marketing efforts from the marketing mix of individual brands. In this paper we discuss two standard approaches suggested in the literature to solve this problem, that is, using individual choice shares as weights to aver...

  3. Building and calibrating a large-extent and high resolution coupled groundwater-land surface model using globally available data-sets

    Science.gov (United States)

    Sutanudjaja, E. H.; Van Beek, L. P.; de Jong, S. M.; van Geer, F.; Bierkens, M. F.

    2012-12-01

    The current generation of large-scale hydrological models generally lacks a groundwater model component simulating lateral groundwater flow. Large-scale groundwater models are rare due to a lack of hydro-geological data required for their parameterization and a lack of groundwater head data required for their calibration. In this study, we propose an approach to develop a large-extent fully-coupled land surface-groundwater model by using globally available datasets and calibrate it using a combination of discharge observations and remotely-sensed soil moisture data. The underlying objective is to devise a collection of methods that enables one to build and parameterize large-scale groundwater models in data-poor regions. The model used, PCR-GLOBWB-MOD, has a spatial resolution of 1 km x 1 km and operates on a daily basis. It consists of a single-layer MODFLOW groundwater model that is dynamically coupled to the PCR-GLOBWB land surface model. This fully-coupled model accommodates two-way interactions between surface water levels and groundwater head dynamics, as well as between upper soil moisture states and groundwater levels, including a capillary rise mechanism to sustain upper soil storage and thus to fulfill high evaporation demands (during dry conditions). As a test bed, we used the Rhine-Meuse basin, where more than 4000 groundwater head time series have been collected for validation purposes. The model was parameterized using globally available data-sets on surface elevation, drainage direction, land-cover, soil and lithology. Next, the model was calibrated using a brute force approach and massive parallel computing, i.e. by running the coupled groundwater-land surface model for more than 3000 different parameter sets. Here, we varied minimal soil moisture storage and saturated conductivities of the soil layers as well as aquifer transmissivities. Using different regularization strategies and calibration criteria we compared three calibration scenarios

  4. Comparison of Co-Temporal Modeling Algorithms on Sparse Experimental Time Series Data Sets.

    Science.gov (United States)

    Allen, Edward E; Norris, James L; John, David J; Thomas, Stan J; Turkett, William H; Fetrow, Jacquelyn S

    2010-01-01

    Multiple approaches for reverse-engineering biological networks from time-series data have been proposed in the computational biology literature. These approaches can be classified by their underlying mathematical algorithms, such as Bayesian or algebraic techniques, as well as by their time paradigm, which includes next-state and co-temporal modeling. The types of biological relationships, such as parent-child or siblings, discovered by these algorithms are quite varied. It is important to understand the strengths and weaknesses of the various algorithms and time paradigms on actual experimental data. We assess how well the co-temporal implementations of three algorithms, continuous Bayesian, discrete Bayesian, and computational algebraic, can 1) identify two types of entity relationships, parent and sibling, between biological entities, 2) deal with experimental sparse time course data, and 3) handle experimental noise seen in replicate data sets. These algorithms are evaluated, using the shuffle index metric, for how well the resulting models match literature models in terms of siblings and parent relationships. Results indicate that all three co-temporal algorithms perform well, at a statistically significant level, at finding sibling relationships, but perform relatively poorly in finding parent relationships.

  5. Sea-level rise modeling handbook: Resource guide for coastal land managers, engineers, and scientists

    Science.gov (United States)

    Doyle, Thomas W.; Chivoiu, Bogdan; Enwright, Nicholas M.

    2015-08-24

    ; utility options for setting sea-level rise and climate change scenarios; and ease or difficulty of storing, displaying, or interpreting model output. Coastal land managers, engineers, and scientists can benefit from this synthesis of tools and models that have been developed for projecting causes and consequences of sea-level change on the landscape and seascape.

  6. Social Set Analysis

    DEFF Research Database (Denmark)

    Vatrapu, Ravi; Mukkamala, Raghava Rao; Hussain, Abid

    2016-01-01

    , conceptual and formal models of social data, and an analytical framework for combining big social data sets with organizational and societal data sets. Three empirical studies of big social data are presented to illustrate and demonstrate social set analysis in terms of fuzzy set-theoretical sentiment...... automata and agent-based modeling). However, when it comes to organizational and societal units of analysis, there exists no approach to conceptualize, model, analyze, explain, and predict social media interactions as individuals' associations with ideas, values, identities, and so on. To address...... analysis, crisp set-theoretical interaction analysis, and event-studies-oriented set-theoretical visualizations. Implications for big data analytics, current limitations of the set-theoretical approach, and future directions are outlined....

  7. Description of a practice model for pharmacist medication review in a general practice setting

    DEFF Research Database (Denmark)

    Brandt, Mette; Hallas, Jesper; Hansen, Trine Graabæk

    2014-01-01

    BACKGROUND: Practical descriptions of procedures used for pharmacists' medication reviews are sparse. OBJECTIVE: To describe a model for medication review by pharmacists tailored to a general practice setting. METHODS: A stepwise model is described. The model is based on data from the medical chart...... no indication (n=47, 23%). Most interventions were aimed at cardiovascular drugs. CONCLUSION: We have provided a detailed description of a practical approach to pharmacists' medication review in a GP setting. The model was tested and found to be usable, and to deliver a medication review with high acceptance...

  8. H2RM: A Hybrid Rough Set Reasoning Model for Prediction and Management of Diabetes Mellitus

    Directory of Open Access Journals (Sweden)

    Rahman Ali

    2015-07-01

    Full Text Available Diabetes is a chronic disease characterized by high blood glucose level that results either from a deficiency of insulin produced by the body, or the body’s resistance to the effects of insulin. Accurate and precise reasoning and prediction models greatly help physicians to improve diagnosis, prognosis and treatment procedures of different diseases. Though numerous models have been proposed to solve issues of diagnosis and management of diabetes, they have the following drawbacks: (1) restricted one type of diabetes; (2) lack understandability and explanatory power of the techniques and decision; (3) limited either to prediction purpose or management over the structured contents; and (4) lack competence for dimensionality and vagueness of patient’s data. To overcome these issues, this paper proposes a novel hybrid rough set reasoning model (H2RM) that resolves problems of inaccurate prediction and management of type-1 diabetes mellitus (T1DM) and type-2 diabetes mellitus (T2DM). For verification of the proposed model, experimental data from fifty patients, acquired from a local hospital in semi-structured format, is used. First, the data is transformed into structured format and then used for mining prediction rules. Rough set theory (RST) based techniques and algorithms are used to mine the prediction rules. During the online execution phase of the model, these rules are used to predict T1DM and T2DM for new patients. Furthermore, the proposed model assists physicians to manage diabetes using knowledge extracted from online diabetes guidelines. Correlation-based trend analysis techniques are used to manage diabetic observations. Experimental results demonstrate that the proposed model outperforms the existing methods with 95.9% average and balanced accuracies.

  9. H2RM: A Hybrid Rough Set Reasoning Model for Prediction and Management of Diabetes Mellitus.

    Science.gov (United States)

    Ali, Rahman; Hussain, Jamil; Siddiqi, Muhammad Hameed; Hussain, Maqbool; Lee, Sungyoung

    2015-07-03

    Diabetes is a chronic disease characterized by high blood glucose level that results either from a deficiency of insulin produced by the body, or the body's resistance to the effects of insulin. Accurate and precise reasoning and prediction models greatly help physicians to improve diagnosis, prognosis and treatment procedures of different diseases. Though numerous models have been proposed to solve issues of diagnosis and management of diabetes, they have the following drawbacks: (1) restricted one type of diabetes; (2) lack understandability and explanatory power of the techniques and decision; (3) limited either to prediction purpose or management over the structured contents; and (4) lack competence for dimensionality and vagueness of patient's data. To overcome these issues, this paper proposes a novel hybrid rough set reasoning model (H2RM) that resolves problems of inaccurate prediction and management of type-1 diabetes mellitus (T1DM) and type-2 diabetes mellitus (T2DM). For verification of the proposed model, experimental data from fifty patients, acquired from a local hospital in semi-structured format, is used. First, the data is transformed into structured format and then used for mining prediction rules. Rough set theory (RST) based techniques and algorithms are used to mine the prediction rules. During the online execution phase of the model, these rules are used to predict T1DM and T2DM for new patients. Furthermore, the proposed model assists physicians to manage diabetes using knowledge extracted from online diabetes guidelines. Correlation-based trend analysis techniques are used to manage diabetic observations. Experimental results demonstrate that the proposed model outperforms the existing methods with 95.9% average and balanced accuracies.
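
    Both versions of this record lean on standard rough set constructs; for a concept X and an indiscernibility relation R over patient records, the lower and upper approximations and the boundary region are:

    ```latex
    \underline{R}X = \{\,x : [x]_R \subseteq X\,\}, \qquad
    \overline{R}X = \{\,x : [x]_R \cap X \neq \emptyset\,\}, \qquad
    \mathrm{BND}_R(X) = \overline{R}X \setminus \underline{R}X
    ```

    Rules mined from the lower approximation are certain, while those from the boundary region are only possible; this is what lets RST-based rule mining cope with the vagueness of patient data mentioned above.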

  10. Fixed versus mixed RSA: Explaining visual representations by fixed and mixed feature sets from shallow and deep computational models.

    Science.gov (United States)

    Khaligh-Razavi, Seyed-Mahdi; Henriksson, Linda; Kay, Kendrick; Kriegeskorte, Nikolaus

    2017-02-01

    in the training set did not match their frequencies in natural experience or their behavioural importance. The latter factors might determine the representational prominence of semantic dimensions in higher-level ventral-stream areas. Our results demonstrate the benefits of testing both the specific representational hypothesis expressed by a model's original feature space and the hypothesis space generated by linear transformations of that feature space.

  11. Refining Sunrise/set Prediction Models by Accounting for the Effects of Refraction

    Science.gov (United States)

    Wilson, Teresa; Bartlett, Jennifer L.

    2016-01-01

    Current atmospheric models used to predict the times of sunrise and sunset have an error of one to four minutes at mid-latitudes (0° - 55° N/S). At higher latitudes, slight changes in refraction may cause significant discrepancies, including determining even whether the Sun appears to rise or set. While different components of refraction are known, how they affect predictions of sunrise/set has not yet been quantified. A better understanding of the contributions from temperature profile, pressure, humidity, and aerosols could significantly improve the standard prediction. Because sunrise/set times and meteorological data from multiple locations will be necessary for a thorough investigation of the problem, we will collect this data using smartphones as part of a citizen science project. This analysis will lead to more complete models that will provide more accurate times for navigators and outdoorsmen alike.
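
    The refraction sensitivity described above can be illustrated with the standard sunrise equation, in which the Sun's centre is taken to rise at a standard altitude combining refraction (about 34') and the solar semidiameter (about 16'). The sketch below is an illustrative calculation under that convention, not the authors' model; perturbing the refraction term shows how minutes-level errors arise.

```python
import math

def sunrise_hour_angle(lat_deg, dec_deg, refraction_arcmin=34.0):
    """Hour angle of sunrise/set (degrees; 15 deg = 1 hour) for a given
    latitude and solar declination, using the standard-altitude convention:
    refraction (~34') plus the solar semidiameter (~16') below the horizon."""
    h0 = -(refraction_arcmin + 16.0) / 60.0                 # degrees
    lat, dec, alt = map(math.radians, (lat_deg, dec_deg, h0))
    cos_h = (math.sin(alt) - math.sin(lat) * math.sin(dec)) / (
        math.cos(lat) * math.cos(dec))
    if not -1.0 <= cos_h <= 1.0:
        raise ValueError("Sun does not rise/set at this latitude and date")
    return math.degrees(math.acos(cos_h))

# Sensitivity at 55 N at the equinox: perturb the refraction term by 1'.
h_std = sunrise_hour_angle(55.0, 0.0)
h_pert = sunrise_hour_angle(55.0, 0.0, refraction_arcmin=35.0)
print(f"sunrise shifts by {(h_pert - h_std) / 15.0 * 60.0:.2f} minutes")
```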

  12. Classification of Normal and Apoptotic Cells from Fluorescence Microscopy Images Using Generalized Polynomial Chaos and Level Set Function.

    Science.gov (United States)

    Du, Yuncheng; Budman, Hector M; Duever, Thomas A

    2016-06-01

    Accurate automated quantitative analysis of living cells based on fluorescence microscopy images can be very useful for fast evaluation of experimental outcomes and cell culture protocols. In this work, an algorithm is developed for fast differentiation of normal and apoptotic viable Chinese hamster ovary (CHO) cells. For effective segmentation of cell images, a stochastic segmentation algorithm is developed by combining a generalized polynomial chaos expansion with a level set function-based segmentation algorithm. This approach provides a probabilistic description of the segmented cellular regions along the boundary, from which it is possible to calculate morphological changes related to apoptosis, i.e., the curvature and length of a cell's boundary. These features are then used as inputs to a support vector machine (SVM) classifier that is trained to distinguish between normal and apoptotic viable states of CHO cell images. The use of morphological features obtained from the stochastic level set segmentation of cell images in combination with the trained SVM classifier is more efficient in terms of differentiation accuracy as compared with the original deterministic level set method.
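
    A hedged sketch of the classification step the abstract describes: morphological features of each segmented cell (boundary length and curvature) feed a support vector machine. The feature values below are synthetic stand-ins for what the stochastic level set segmentation would produce; the distributions are invented for illustration.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Hypothetical morphological features per segmented cell: boundary length
# and mean absolute curvature (apoptotic cells assumed shorter and more
# irregular). Real values would come from the stochastic level set step.
n = 200
normal    = np.column_stack([rng.normal(120, 10, n), rng.normal(0.05, 0.01, n)])
apoptotic = np.column_stack([rng.normal( 90, 10, n), rng.normal(0.12, 0.02, n)])
X = np.vstack([normal, apoptotic])
y = np.array([0] * n + [1] * n)   # 0 = normal, 1 = apoptotic

clf = SVC(kernel="rbf", gamma="scale")
print("cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```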

  13. Modelling the impact of antiretroviral use in resource-poor settings.

    Directory of Open Access Journals (Sweden)

    Rebecca F Baggaley

    2006-04-01

    Full Text Available BACKGROUND: The anticipated scale-up of antiretroviral therapy (ART) in high-prevalence, resource-constrained settings requires operational research to guide policy on the design of treatment programmes. Mathematical models can explore the potential impacts of various treatment strategies, including timing of treatment initiation and provision of laboratory monitoring facilities, to complement evidence from pilot programmes. METHODS AND FINDINGS: A deterministic model of HIV transmission incorporating ART and stratifying infection progression into stages was constructed. The impact of ART was evaluated for various scenarios and treatment strategies, with different levels of coverage, patient eligibility, and other parameter values. These strategies included the provision of laboratory facilities that perform CD4 counts and viral load testing, and the timing of the stage of infection at which treatment is initiated. In our analysis, unlimited ART provision initiated at late-stage infection (AIDS) increased prevalence of HIV infection. The effect of additionally treating pre-AIDS patients depended on the behaviour change of treated patients. Different coverage levels for ART do not affect benefits such as life-years gained per person-year of treatment and have minimal effect on infections averted when treating AIDS patients only. Scaling up treatment of pre-AIDS patients resulted in more infections being averted per person-year of treatment, but the absolute number of infections averted remained small. As coverage increased in the models, the emergence and risk of spread of drug resistance increased. Withdrawal of failing treatment (clinical (resurgence of symptoms), immunologic (CD4 count decline), or virologic (viral rebound) failure) increased the number of infected individuals who could benefit from ART, but effectiveness per person is compromised. Only withdrawal at a very early stage of treatment failure, soon after viral rebound, would have a
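
    A minimal sketch of the kind of staged, deterministic transmission model the abstract describes, with ART initiated at the AIDS stage. The compartments, rates and the assumption that treated patients are non-infectious are illustrative choices, not the paper's fitted model.

```python
import numpy as np
from scipy.integrate import odeint

def staged_hiv(y, t, beta1, beta2, sigma, tau, mu, d):
    """Toy staged-progression HIV model with ART: S susceptible,
    I1 pre-AIDS, I2 AIDS, T on treatment (assumed non-infectious here).
    All rates are illustrative, not the paper's values."""
    S, I1, I2, T = y
    N = S + I1 + I2 + T
    force = (beta1 * I1 + beta2 * I2) / N      # force of infection
    dS = mu * N - force * S - mu * S           # births balance natural deaths
    dI1 = force * S - (sigma + mu) * I1        # sigma: progression to AIDS
    dI2 = sigma * I1 - (tau + mu + d) * I2     # tau: ART initiation at AIDS
    dT = tau * I2 - mu * T
    return [dS, dI1, dI2, dT]

t = np.linspace(0.0, 50.0, 501)                # years
y0 = [0.99, 0.01, 0.0, 0.0]
for tau in (0.0, 0.5):                         # no ART vs late-stage ART
    sol = odeint(staged_hiv, y0, t, args=(0.3, 0.1, 0.125, tau, 0.02, 0.33))
    prevalence = sol[-1, 1:].sum() / sol[-1].sum()
    print(f"tau={tau}: HIV prevalence after 50 years ~ {prevalence:.3f}")
```

    Consistent with the abstract's observation, keeping treated patients alive raises measured prevalence even as AIDS deaths fall.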

  14. Computational Fluid Dynamics Analysis of Cold Plasma Plume Mixing with Blood Using Level Set Method Coupled with Heat Transfer

    Directory of Open Access Journals (Sweden)

    Mehrdad Shahmohammadi Beni

    2017-06-01

    Full Text Available Cold plasmas were proposed for treatment of leukemia. In the present work, conceptual designs of mixing chambers that increased the contact between the two fluids (plasma and blood) through the addition of obstacles within rectangular-block-shaped chambers were proposed, and the dynamic mixing between the plasma and blood was studied using the level set method coupled with heat transfer. Enhancement of mixing between blood and plasma in the presence of obstacles was demonstrated. Continuous tracking of fluid mixing with determination of temperature distributions was enabled by the present model, which would be a useful tool for future development of cold plasma devices for treatment of blood-related diseases such as leukemia.
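
    The core of the level set idea used above can be shown in one dimension: the plasma-blood interface is the zero crossing of a function phi that is advected by the flow. The sketch below uses a fixed toy velocity field and first-order upwinding; the real model couples phi to the CFD and heat-transfer equations.

```python
import numpy as np

nx, L = 200, 1.0
dx = L / nx
x = np.linspace(0.0, L, nx)
u = 0.4 + 0.2 * np.sin(2 * np.pi * x)      # hypothetical velocity field, u > 0
phi = x - 0.2                              # zero level set: interface at x = 0.2
dt = 0.5 * dx / u.max()                    # CFL-limited time step

for _ in range(200):
    dphi = np.empty_like(phi)
    dphi[1:] = (phi[1:] - phi[:-1]) / dx   # first-order upwind for u > 0
    dphi[0] = dphi[1]
    phi -= dt * u * dphi                   # advect: phi_t + u * phi_x = 0

print("interface now near x =", round(x[np.argmin(np.abs(phi))], 2))
```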

  15. Development of a multimedia radionuclide exposure model for low-level waste management

    International Nuclear Information System (INIS)

    Onishi, Y.; Whelan, G.; Skaggs, R.L.

    1982-03-01

    A method is being developed for assessing exposures of the air, water, and plants to low-level waste (LLW) as a part of an overall development effort of a LLW site evaluation methodology. The assessment methodology will predict LLW exposure levels in the environment by simulating dominant mechanisms of LLW migration and fate. The methodology consists of a series of physics-based models with proven histories of success; the models interact with each other to simulate LLW transport in the ecosystem. A scaled-down version of the methodology was developed first by combining the terrestrial ecological model, BIOTRAN; the overland transport model, ARM; the instream hydrodynamic model, DKWAV; and the instream sediment-contaminant transport model, TODAM (a one-dimensional version of SERATRA). The methodology was used to simulate the migration of ²³⁹Pu from a shallow-land disposal site (known as Area C) located near the head of South Mortandad Canyon on the LANL site in New Mexico. The scenario assumed that ²³⁹Pu would be deposited on the land surface through the natural processes of plant growth, LLW uptake, dryfall, and litter decomposition. Runoff events would then transport ²³⁹Pu to and in the canyon. The model provided sets of simulated LLW levels in soil, water and terrestrial plants in the region surrounding the site under a specified land use and a waste management option. Over a 100-yr simulation period, only an extremely small quantity (6 × 10⁻⁹ times the original concentration) of buried ²³⁹Pu was taken up by plants and deposited on the land surface. Only a small fraction (approximately 1%) of that contamination was further removed by soil erosion from the site and carried to the canyon, where it remained. Hence, the study reveals that the environment around Area C has integrity high enough to curtail LLW migration under recreational land use.

  16. Data Set for Empirical Validation of Double Skin Facade Model

    DEFF Research Database (Denmark)

    Kalyanova, Olena; Jensen, Rasmus Lund; Heiselberg, Per

    2008-01-01

    During the recent years the attention to the double skin facade (DSF) concept has greatly increased. Nevertheless, the application of the concept depends on whether a reliable model for simulation of the DSF performance will be developed or pointed out. This is, however, not possible to do, until...... the International Energy Agency (IEA) Task 34 Annex 43. This paper describes the full-scale outdoor experimental test facility ‘the Cube', where the experiments were conducted, the experimental set-up and the measurements procedure for the data sets. The empirical data is composed for the key-functioning modes...

  17. Development of System Model for Level 1 Probabilistic Safety Assessment of TRIGA PUSPATI Reactor

    International Nuclear Information System (INIS)

    Tom, P.P; Mazleha Maskin; Ahmad Hassan Sallehudin Mohd Sarif; Faizal Mohamed; Mohd Fazli Zakaria; Shaharum Ramli; Muhamad Puad Abu

    2014-01-01

    Nuclear safety is a major issue worldwide. As a consequence of the accident at Fukushima, Japan, the safety of most reactors in the world, including research reactors, has been reviewed. To develop the Level 1 Probabilistic Safety Assessment (PSA) of the TRIGA PUSPATI Reactor (RTP), three organizations are involved: Nuclear Malaysia, AELB and UKM. PSA methodology is a logical, deductive technique which specifies an undesired top event and uses fault trees and event trees to model the various parallel and sequential combinations of failures that might lead to that undesired event. Fault tree (FT) methodology is used in developing the system models. At the lowest level, the basic events (BE) of the fault trees (component failures and human errors) are assigned probability distributions. In this study, the Risk Spectrum software was used to construct the fault trees and analyze the system models. The results of the system model analysis, such as core damage frequency (CDF), minimal cut sets (MCS) and common cause failures (CCF), are used to support decision making for upgrading or modification of the RTP's safety system. (author)
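
    A small sketch of how fault-tree results of the kind listed above are combined: given basic-event probabilities and minimal cut sets, the top-event probability follows from inclusion-exclusion over unions of cut sets (with the rare-event sum as an upper bound), assuming independent basic events. The event names and probabilities are hypothetical, not RTP data.

```python
from itertools import combinations

# Hypothetical basic-event failure probabilities for a toy fault tree.
p = {"pump_A": 1e-3, "pump_B": 1e-3, "valve": 5e-4, "operator": 1e-2}

# Minimal cut sets: smallest event combinations that cause the top event.
cut_sets = [("pump_A", "pump_B"), ("valve",), ("pump_A", "operator")]

def joint_prob(events):
    """Probability that every basic event in `events` occurs (independence)."""
    prob = 1.0
    for e in events:
        prob *= p[e]
    return prob

# Rare-event approximation: simple sum of cut-set probabilities.
upper = sum(joint_prob(cs) for cs in cut_sets)

# Exact top-event probability via inclusion-exclusion.
exact = 0.0
for k in range(1, len(cut_sets) + 1):
    for combo in combinations(cut_sets, k):
        union = set().union(*combo)
        exact += (-1) ** (k + 1) * joint_prob(union)

print(f"top event probability ~ {exact:.3e} (rare-event bound {upper:.3e})")
```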

  18. System level modeling and component level control of fuel cells

    Science.gov (United States)

    Xue, Xingjian

    This dissertation investigates the fuel cell systems and the related technologies in three aspects: (1) system-level dynamic modeling of both PEM fuel cell (PEMFC) and solid oxide fuel cell (SOFC); (2) condition monitoring scheme development of PEM fuel cell system using model-based statistical method; and (3) strategy and algorithm development of precision control with potential application in energy systems. The dissertation first presents a system level dynamic modeling strategy for PEM fuel cells. It is well known that water plays a critical role in PEM fuel cell operations. It makes the membrane function appropriately and improves the durability. The low temperature operating conditions, however, impose modeling difficulties in characterizing the liquid-vapor two phase change phenomenon, which becomes even more complex under dynamic operating conditions. This dissertation proposes an innovative method to characterize this phenomenon, and builds a comprehensive model for PEM fuel cell at the system level. The model features the complete characterization of multi-physics dynamic coupling effects with the inclusion of dynamic phase change. The model is validated using Ballard stack experimental result from open literature. The system behavior and the internal coupling effects are also investigated using this model under various operating conditions. Anode-supported tubular SOFC is also investigated in the dissertation. While the Nernst potential plays a central role in characterizing the electrochemical performance, the traditional Nernst equation may lead to incorrect analysis results under dynamic operating conditions due to the current reverse flow phenomenon. This dissertation presents a systematic study in this regard to incorporate a modified Nernst potential expression and the heat/mass transfer into the analysis. The model is used to investigate the limitations and optimal results of various operating conditions; it can also be utilized to perform the

  19. Feature Set Evaluation for Offline Handwriting Recognition Systems: Application to the Recurrent Neural Network Model.

    Science.gov (United States)

    Chherawala, Youssouf; Roy, Partha Pratim; Cheriet, Mohamed

    2016-12-01

    The performance of handwriting recognition systems is dependent on the features extracted from the word image. A large body of features exists in the literature, but no method has yet been proposed to identify the most promising of these, other than a straightforward comparison based on the recognition rate. In this paper, we propose a framework for feature set evaluation based on a collaborative setting. We use a weighted vote combination of recurrent neural network (RNN) classifiers, each trained with a particular feature set. This combination is modeled in a probabilistic framework as a mixture model and two methods for weight estimation are described. The main contribution of this paper is to quantify the importance of feature sets through the combination weights, which reflect their strength and complementarity. We chose the RNN classifier because of its state-of-the-art performance. Also, we provide the first feature set benchmark for this classifier. We evaluated several feature sets on the IFN/ENIT and RIMES databases of Arabic and Latin script, respectively. The resulting combination model is competitive with state-of-the-art systems.
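
    At prediction time, the combination scheme the abstract describes reduces to a weighted mixture of the per-feature-set classifier posteriors. A minimal sketch, with hypothetical posteriors and weights standing in for the RNN outputs and the mixture-model weight estimates:

```python
import numpy as np

# Each row: one classifier's posterior over word classes, one classifier
# per feature set. Values are hypothetical; the paper learns the weights
# within a probabilistic mixture model.
posteriors = np.array([
    [0.60, 0.30, 0.10],   # classifier trained on feature set 1
    [0.20, 0.70, 0.10],   # feature set 2
    [0.50, 0.40, 0.10],   # feature set 3
])
weights = np.array([0.5, 0.3, 0.2])   # sum to 1; reflect feature-set strength

combined = weights @ posteriors       # weighted vote = mixture of posteriors
print("combined posterior:", combined, "-> class", combined.argmax())
```

    The learned weights then double as a ranking of the feature sets, which is the paper's main point.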

  20. Development of a new model to engage patients and clinicians in setting research priorities.

    Science.gov (United States)

    Pollock, Alex; St George, Bridget; Fenton, Mark; Crowe, Sally; Firkins, Lester

    2014-01-01

    Equitable involvement of patients and clinicians in setting research and funding priorities is ethically desirable and can improve the quality, relevance and implementation of research. Survey methods used in previous priority setting projects to gather treatment uncertainties may not be sufficient to facilitate responses from patients and their lay carers for some health care topics. We aimed to develop a new model to engage patients and clinicians in setting research priorities relating to life after stroke, and to explore the use of this model within a James Lind Alliance (JLA) priority setting project. We developed a model to facilitate involvement through targeted engagement and assisted involvement (FREE TEA model). We implemented both standard surveys and the FREE TEA model to gather research priorities (treatment uncertainties) from people affected by stroke living in Scotland. We explored and compared the number of treatment uncertainties elicited from different groups by the two approaches. We gathered 516 treatment uncertainties from stroke survivors, carers and health professionals. We achieved approximately equal numbers of contributions; 281 (54%) from stroke survivors/carers; 235 (46%) from health professionals. For stroke survivors and carers, 98 (35%) treatment uncertainties were elicited from the standard survey and 183 (65%) at FREE TEA face-to-face visits. This contrasted with the health professionals, for whom 198 (84%) were elicited from the standard survey and only 37 (16%) from FREE TEA visits. The FREE TEA model has implications for future priority setting projects and user-involvement relating to populations of people with complex health needs. Our results imply that reliance on standard surveys may result in poor and unrepresentative involvement of patients, thereby favouring the views of health professionals.

  1. The Use of BPMN for Modelling The MES Level in Information and Control Systems

    Directory of Open Access Journals (Sweden)

    Peter Michalik

    2013-07-01

    Full Text Available This paper describes how to use a process map to model the MES level of information and control systems. In the practical application, the BPMN standard was used for the design of the maintenance management function which, according to the MES organization, is one of the 11 basic MES functions. The application was implemented on the flexible production line located in the laboratory at the Department of Cybernetics and Artificial Intelligence. This article presents only one BPMN model of a MES function from the set of functions modelled in our laboratory. These functions were also modelled with the UML Activity Diagram. Although the UML model is not included in this article, there is a comparison of the two approaches based on selected criteria and our experiences with modelling in the laboratory at the university.

  2. Multi-domain, higher order level set scheme for 3D image segmentation on the GPU

    DEFF Research Database (Denmark)

    Sharma, Ojaswa; Zhang, Qin; Anton, François

    2010-01-01

    to evaluate level set surfaces that are $C^2$ continuous, but are slow due to high computational burden. In this paper, we provide a higher order GPU based solver for fast and efficient segmentation of large volumetric images. We also extend the higher order method to multi-domain segmentation. Our streaming...

  3. Fuzzy sets as extension of probabilistic models for evaluating human reliability

    International Nuclear Information System (INIS)

    Przybylski, F.

    1996-11-01

    On the basis of a survey of established quantification methodologies for evaluating human reliability, a new computerized methodology was developed in which user uncertainties are given differentiated consideration. In this quantification method FURTHER (FUzzy Sets Related To Human Error Rate Prediction), user uncertainties are quantified separately from model and data uncertainties. As tools, fuzzy sets are applied which, however, stay hidden from the method's user. In the quantification process, the user only chooses an action pattern, performance shaping factors and natural language expressions. The acknowledged method HEART (Human Error Assessment and Reduction Technique) serves as the foundation of the fuzzy set approach FURTHER. By means of this method, the selection of a basic task in connection with its basic error probability, the decision how correct the basic task's selection is, the selection of a performance shaping factor, and the decision how correct the selection and how important the performance shaping factor is, were identified as aspects of fuzzification. This fuzzification is made on the basis of data collection and information from the literature as well as estimation by competent persons. To verify the amount of additional information gained by the usage of fuzzy sets, a benchmark session was conducted in which twelve actions were assessed by five test persons. For the same degree of detail in the action modelling process, the bandwidths of the interpersonal evaluations decrease in FURTHER in comparison with HEART. The uncertainties of the single results could not be reduced up to now. The benchmark sessions conducted so far showed plausible results. Further testing of the fuzzy set approach using better confirmed fuzzy sets can only be achieved in future practical application. Adequate procedures, however, are provided. (orig.)

  4. Modelling the economic impact of three lameness causing diseases using herd and cow level evidence

    DEFF Research Database (Denmark)

    Ettema, Jehan Frans; Østergaard, Søren; Kristensen, Anders Ringgaard

    2010-01-01

    Diseases of the cow's hoof, interdigital skin and legs are highly prevalent and of large economic impact in modern dairy farming. In order to support the farmer's decisions on preventing and treating lameness and its underlying causes, decision support models can be used to predict the economic...... horn diseases. Secondly, the existing simulation model was set up in a way that uses hyper-distributions describing the disease risk of the three lameness-causing diseases. By combining information on herd-level risk factors with the prevalence of lameness or the prevalence of underlying diseases among cows...

  5. Identification of Determinants of Sports Skill Level in Badminton Players Using the Multiple Regression Model

    Directory of Open Access Journals (Sweden)

    Jaworski Janusz

    2016-03-01

    Full Text Available Purpose. The aim of the study was to evaluate somatic and functional determinants of sports skill level in badminton players at three consecutive stages of training. Methods. The study examined 96 badminton players aged 11 to 19 years. The scope of the study included somatic characteristics, physical abilities and neurosensory abilities. Thirty-nine variables were analysed in each athlete. Coefficients of multiple determination were used to evaluate the effect of structural and functional parameters on sports skill level in badminton players. Results. In the group of younger cadets, quality and effectiveness of playing were mostly determined by the level of physical abilities. In the group of cadets, the most important determinants were physical abilities, followed by somatic characteristics. In this group, coordination abilities were also important. In juniors, the most pronounced determinant was the set of variables reflecting physical abilities. Conclusions. Models of determination of sports skill level are most noticeable in the group of cadets. In all three groups of badminton players, the dominant effect on the quality of playing is due to the set of variables that determine physical abilities.

  6. Accident sequence precursor analysis level 2/3 model development

    International Nuclear Information System (INIS)

    Lui, C.H.; Galyean, W.J.; Brownson, D.A.

    1997-01-01

    The US Nuclear Regulatory Commission's Accident Sequence Precursor (ASP) program currently uses simple Level 1 models to assess the conditional core damage probability for operational events occurring in commercial nuclear power plants (NPP). Since not all accident sequences leading to core damage will result in the same radiological consequences, it is necessary to develop simple Level 2/3 models that can be used to analyze the response of the NPP containment structure in the context of a core damage accident, estimate the magnitude of the resulting radioactive releases to the environment, and calculate the consequences associated with these releases. The simple Level 2/3 model development work was initiated in 1995, and several prototype models have been completed. Once developed, these simple Level 2/3 models are linked to the simple Level 1 models to provide risk perspectives for operational events. This paper describes the methods implemented for the development of these simple Level 2/3 ASP models, and the linkage process to the existing Level 1 models

  7. Using Mathematical Modeling and Set-Based Design Principles to Recommend an Existing CVL Design

    Science.gov (United States)

    2017-09-01

    Master's thesis by William H. Ehlies, September 2017, applying mathematical modeling and set-based design principles to recommend an existing CVL design.

  8. Model-based setting of inspiratory pressure and respiratory rate in pressure-controlled ventilation

    International Nuclear Information System (INIS)

    Schranz, C; Möller, K; Becher, T; Schädler, D; Weiler, N

    2014-01-01

    Mechanical ventilation carries the risk of ventilator-induced lung injury (VILI). To minimize the risk of VILI, ventilator settings should be adapted to the individual patient properties. Mathematical models of respiratory mechanics are able to capture the individual physiological condition and can be used to derive personalized ventilator settings. This paper presents model-based calculations of inspiration pressure (pI), inspiration and expiration time (tI, tE) in pressure-controlled ventilation (PCV) and a retrospective evaluation of its results in a group of mechanically ventilated patients. Incorporating the identified first order model of respiratory mechanics in the basic equation of alveolar ventilation yielded a nonlinear relation between ventilation parameters during PCV. Given this patient-specific relation, optimized settings in terms of minimal pI and adequate tE can be obtained. We then retrospectively analyzed data from 16 ICU patients with mixed pathologies, whose ventilation had been previously optimized by ICU physicians with the goal of minimization of inspiration pressure, and compared the algorithm's 'optimized' settings to the settings that had been chosen by the physicians. The presented algorithm visualizes the patient-specific relations between inspiration pressure and inspiration time. The algorithm's calculated results highly correlate to the physician's ventilation settings with r = 0.975 for the inspiration pressure, and r = 0.902 for the inspiration time. The nonlinear patient-specific relations of ventilation parameters become transparent and support the determination of individualized ventilator settings according to therapeutic goals. Thus, the algorithm is feasible for a variety of ventilated ICU patients and has the potential of improving lung-protective ventilation by minimizing inspiratory pressures and by helping to avoid the build-up of clinically significant intrinsic positive end-expiratory pressure.

  9. Model-based setting of inspiratory pressure and respiratory rate in pressure-controlled ventilation.

    Science.gov (United States)

    Schranz, C; Becher, T; Schädler, D; Weiler, N; Möller, K

    2014-03-01

    Mechanical ventilation carries the risk of ventilator-induced lung injury (VILI). To minimize the risk of VILI, ventilator settings should be adapted to the individual patient properties. Mathematical models of respiratory mechanics are able to capture the individual physiological condition and can be used to derive personalized ventilator settings. This paper presents model-based calculations of inspiration pressure (pI), inspiration and expiration time (tI, tE) in pressure-controlled ventilation (PCV) and a retrospective evaluation of its results in a group of mechanically ventilated patients. Incorporating the identified first order model of respiratory mechanics in the basic equation of alveolar ventilation yielded a nonlinear relation between ventilation parameters during PCV. Given this patient-specific relation, optimized settings in terms of minimal pI and adequate tE can be obtained. We then retrospectively analyzed data from 16 ICU patients with mixed pathologies, whose ventilation had been previously optimized by ICU physicians with the goal of minimization of inspiration pressure, and compared the algorithm's 'optimized' settings to the settings that had been chosen by the physicians. The presented algorithm visualizes the patient-specific relations between inspiration pressure and inspiration time. The algorithm's calculated results highly correlate to the physician's ventilation settings with r = 0.975 for the inspiration pressure, and r = 0.902 for the inspiration time. The nonlinear patient-specific relations of ventilation parameters become transparent and support the determination of individualized ventilator settings according to therapeutic goals. Thus, the algorithm is feasible for a variety of ventilated ICU patients and has the potential of improving lung-protective ventilation by minimizing inspiratory pressures and by helping to avoid the build-up of clinically significant intrinsic positive end-expiratory pressure.
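
    A hedged sketch of the kind of calculation both records describe: with a first-order model (resistance R, compliance C, time constant tau = R*C), the tidal volume delivered by a constant pressure pI over tI is VT = C*pI*(1 - exp(-tI/tau)), and the alveolar ventilation equation VA = RR*(VT - VD) can be solved for the minimal pI. The parameter values, the fixed inspiratory fraction and the three-time-constant expiration check are illustrative assumptions, not the paper's algorithm.

```python
import math

def pcv_settings(R, C, VD, VA_target, RR, ti_fraction=0.4, n_tau_exp=3.0):
    """Solve for the minimal inspiratory pressure in PCV under a
    first-order respiratory mechanics model (sketch, not the paper's code).
    Units: R in cmH2O*s/L, C in L/cmH2O, volumes in L, VA_target in L/min."""
    tau = R * C                        # expiratory time constant, s
    T = 60.0 / RR                      # breath period, s
    tI, tE = ti_fraction * T, (1.0 - ti_fraction) * T
    VT = VA_target / RR + VD           # required tidal volume, L
    pI = VT / (C * (1.0 - math.exp(-tI / tau)))
    if tE < n_tau_exp * tau:           # incomplete exhalation -> intrinsic PEEP
        print(f"warning: tE={tE:.2f}s < {n_tau_exp}*tau, risk of intrinsic PEEP")
    return pI, tI, tE

pI, tI, tE = pcv_settings(R=10.0, C=0.05, VD=0.15, VA_target=5.0, RR=15)
print(f"pI = {pI:.1f} cmH2O, tI = {tI:.1f} s, tE = {tE:.1f} s")
```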

  10. Dynamic sensitivity analysis of long running landslide models through basis set expansion and meta-modelling

    Science.gov (United States)

    Rohmer, Jeremy

    2016-04-01

    Predicting the temporal evolution of landslides is typically supported by numerical modelling. Dynamic sensitivity analysis aims at assessing the influence of the landslide properties on the time-dependent predictions (e.g., time series of landslide displacements). Yet two major difficulties arise: 1. Global sensitivity analysis requires running the landslide model a large number of times (> 1000), which may become impracticable when the landslide model has a high computational cost (> several hours); 2. Landslide model outputs are not scalar, but functions of time, i.e. they are n-dimensional vectors with n usually ranging from 100 to 1000. In this article, I explore the use of a basis set expansion, such as principal component analysis, to reduce the output dimensionality to a few components, each of them being interpreted as a dominant mode of variation in the overall structure of the temporal evolution. The computationally intensive calculation of the Sobol' indices for each of these components is then achieved through meta-modelling, i.e. by replacing the landslide model with a "costless-to-evaluate" approximation (e.g., a projection pursuit regression model). The methodology combining "basis set expansion - meta-model - Sobol' indices" is then applied to the La Frasse landslide to investigate the dynamic sensitivity of the surface horizontal displacements to the slip surface properties during pore pressure changes. I show how to extract information on the sensitivity of each main mode of temporal behaviour using a limited number (a few tens) of long-running simulations. In particular, I identify the parameters which trigger the occurrence of a turning point marking a shift between a regime of low landslide displacements and one of high values.
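
    The "basis set expansion - meta-model - Sobol' indices" pipeline can be sketched end to end on a synthetic stand-in for the long-running model: PCA compresses the displacement time series to a few modes, a cheap regressor emulates each mode, and first-order Sobol' indices are estimated on the emulator with a Saltelli-style pick-freeze scheme. The toy model, sample sizes and the random-forest emulator are assumptions made for the sketch; the paper uses projection pursuit regression.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)

# Synthetic stand-in for the landslide model: a displacement time series
# driven by 3 parameters (the real model takes hours per run).
def slow_model(theta, t=np.linspace(0, 1, 200)):
    a, b, c = theta
    return a * t + b * np.sin(4 * np.pi * t) + c * (t > 0.6) * (t - 0.6)

# 1) small design of "long-running" simulations
X = rng.uniform(0, 1, size=(60, 3))
Y = np.array([slow_model(x) for x in X])       # 60 runs x 200 time steps

# 2) basis set expansion: keep the dominant modes of temporal variation
pca = PCA(n_components=2).fit(Y)
scores = pca.transform(Y)                      # 60 x 2 component scores

# 3) one cheap meta-model per retained component
metas = [RandomForestRegressor(random_state=0).fit(X, scores[:, k])
         for k in range(2)]

# 4) first-order Sobol' indices on the meta-models (pick-freeze estimator)
N = 4096
A, B = rng.uniform(0, 1, (N, 3)), rng.uniform(0, 1, (N, 3))
for k, m in enumerate(metas):
    fA, fB = m.predict(A), m.predict(B)
    V = np.var(np.concatenate([fA, fB]))
    S = []
    for i in range(3):
        ABi = A.copy(); ABi[:, i] = B[:, i]    # freeze all but parameter i
        S.append(np.mean(fB * (m.predict(ABi) - fA)) / V)
    print(f"mode {k}: first-order Sobol' indices ~", np.round(S, 2))
```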

  11. Integrated System-Level Optimization for Concurrent Engineering With Parametric Subsystem Modeling

    Science.gov (United States)

    Schuman, Todd; DeWeck, Oliver L.; Sobieski, Jaroslaw

    2005-01-01

    The introduction of concurrent design practices to the aerospace industry has greatly increased the productivity of engineers and teams during design sessions as demonstrated by JPL's Team X. Simultaneously, advances in computing power have given rise to a host of potent numerical optimization methods capable of solving complex multidisciplinary optimization problems containing hundreds of variables, constraints, and governing equations. Unfortunately, such methods are tedious to set up and require significant amounts of time and processor power to execute, thus making them unsuitable for rapid concurrent engineering use. This paper proposes a framework for Integration of System-Level Optimization with Concurrent Engineering (ISLOCE). It uses parametric neural-network approximations of the subsystem models. These approximations are then linked to a system-level optimizer that is capable of reaching a solution quickly due to the reduced complexity of the approximations. The integration structure is described in detail and applied to the multiobjective design of a simplified Space Shuttle external fuel tank model. Further, a comparison is made between the new framework and traditional concurrent engineering (without system optimization) through an experimental trial with two groups of engineers. Each method is evaluated in terms of optimizer accuracy, time to solution, and ease of use. The results suggest that system-level optimization, running as a background process during integrated concurrent engineering sessions, is potentially advantageous as long as it is judiciously implemented.
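
    A minimal sketch of the ISLOCE idea under stated assumptions: an expensive subsystem model is sampled offline, a small neural network learns the mapping, and the system-level optimizer then runs on the fast approximation during the design session. The subsystem function and design variables below are hypothetical, not the external-tank model from the paper.

```python
import numpy as np
from scipy.optimize import minimize
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(2)

# Hypothetical expensive subsystem model, e.g. a mass estimate as a
# function of two normalized design variables.
def subsystem(x):
    return (x[..., 0] - 0.3) ** 2 + 2.0 * (x[..., 1] - 0.7) ** 2

# 1) sample the subsystem offline and train a cheap parametric NN surrogate
X = rng.uniform(0, 1, size=(500, 2))
surrogate = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=3000,
                         random_state=0).fit(X, subsystem(X))

# 2) the system-level optimizer runs on the fast surrogate
res = minimize(lambda x: surrogate.predict(x.reshape(1, -1))[0],
               x0=[0.5, 0.5], bounds=[(0, 1), (0, 1)])
print("surrogate optimum:", res.x, "(true optimum: [0.3, 0.7])")
```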

  12. Effectiveness of reactive case detection for malaria elimination in three archetypical transmission settings: a modelling study.

    Science.gov (United States)

    Gerardin, Jaline; Bever, Caitlin A; Bridenbecker, Daniel; Hamainza, Busiku; Silumbe, Kafula; Miller, John M; Eisele, Thomas P; Eckhoff, Philip A; Wenger, Edward A

    2017-06-12

    Reactive case detection could be a powerful tool in malaria elimination, as it selectively targets transmission pockets. However, field operations have yet to demonstrate under which conditions, if any, reactive case detection is best poised to push a region to elimination. This study uses mathematical modelling to assess how baseline transmission intensity and local interconnectedness affect the impact of reactive activities in the context of other possible intervention packages. Communities in Southern Province, Zambia, where elimination operations are currently underway, were used as representatives of three archetypes of malaria transmission: low-transmission, high household density; high-transmission, low household density; and high-transmission, high household density. Transmission at the spatially-connected household level was simulated with a dynamical model of malaria transmission, and local variation in vectorial capacity and intervention coverage were parameterized according to data collected from the area. Various potential intervention packages were imposed on each of the archetypical settings and the resulting likelihoods of elimination by the end of 2020 were compared. Simulations predict that success of elimination campaigns in both low- and high-transmission areas is strongly dependent on stemming the flow of imported infections, underscoring the need for regional-scale strategies capable of reducing transmission concurrently across many connected areas. In historically low-transmission areas, treatment of clinical malaria should form the cornerstone of elimination operations, as most malaria infections in these areas are symptomatic and onward transmission would be mitigated through health system strengthening; reactive case detection has minimal impact in these settings. In historically high-transmission areas, vector control and case management are crucial for limiting outbreak size, and the asymptomatic reservoir must be addressed through

  13. System-level Modeling of Wireless Integrated Sensor Networks

    DEFF Research Database (Denmark)

    Virk, Kashif M.; Hansen, Knud; Madsen, Jan

    2005-01-01

    Wireless integrated sensor networks have emerged as a promising infrastructure for a new generation of monitoring and tracking applications. In order to efficiently utilize the extremely limited resources of wireless sensor nodes, accurate modeling of the key aspects of wireless sensor networks...... is necessary so that system-level design decisions can be made about the hardware and the software (applications and real-time operating system) architecture of sensor nodes. In this paper, we present a SystemC-based abstract modeling framework that enables system-level modeling of sensor network behavior...... by modeling the applications, real-time operating system, sensors, processor, and radio transceiver at the sensor node level and environmental phenomena, including radio signal propagation, at the sensor network level. We demonstrate the potential of our modeling framework by simulating and analyzing a small...

  14. Capacitated set-covering model considering the distance objective and dependency of alternative facilities

    Science.gov (United States)

    Wayan Suletra, I.; Priyandari, Yusuf; Jauhari, Wakhid A.

    2018-03-01

    We propose a new facility location model to solve a kind of problem that belongs to the class of set-covering problems, using an integer programming formulation. Our model contains a single objective function, but it represents two goals. The first is to minimize the number of facilities, and the other is to minimize the total distance from customers to facilities. The first goal is mandatory, and the second is an improvement goal that is very useful when alternative optimal solutions for the first goal exist. We use a big number as a weight on the first goal to force the solution algorithm to give it first priority. Besides considering capacity constraints, our model accommodates either-or constraints representing facility dependency. The either-or constraints prevent the solution algorithm from selecting two or more facilities from the same set of mutually exclusive facilities. A real location selection problem, locating a set of wastewater treatment facilities (IPAL) in Surakarta, Indonesia, illustrates the implementation of our model; a numerical example is given using the data of that real problem, and a sketch of the formulation follows below.
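
    A hedged sketch of the formulation just described, using the open-source PuLP/CBC stack: binary open/assign variables, a big-weight term that gives the facility-count goal first priority, capacity constraints, and an either-or constraint between mutually exclusive facilities. The instance data are invented, not the Surakarta IPAL data.

```python
from pulp import (LpProblem, LpMinimize, LpVariable, lpSum, LpBinary,
                  PULP_CBC_CMD)

# Toy instance (distances, demands, capacities are hypothetical).
customers, facilities = range(4), range(3)
d = [[2, 5, 9], [4, 3, 8], [7, 2, 4], [6, 6, 3]]   # distance customer->facility
demand, capacity = [10, 20, 15, 10], [35, 30, 25]
mutually_exclusive = [(0, 1)]   # facilities 0 and 1 cannot both be opened
BIG = 10_000                    # weight forcing "fewest facilities" first

prob = LpProblem("capacitated_set_covering", LpMinimize)
y = [LpVariable(f"open_{j}", cat=LpBinary) for j in facilities]
x = [[LpVariable(f"assign_{i}_{j}", cat=LpBinary) for j in facilities]
     for i in customers]

# single objective, two goals: facility count (weighted BIG), then distance
prob += BIG * lpSum(y) + lpSum(d[i][j] * x[i][j]
                               for i in customers for j in facilities)
for i in customers:             # every customer covered exactly once
    prob += lpSum(x[i][j] for j in facilities) == 1
for j in facilities:            # capacity applies only if facility is open
    prob += lpSum(demand[i] * x[i][j] for i in customers) <= capacity[j] * y[j]
for a, b in mutually_exclusive:  # either-or dependency constraint
    prob += y[a] + y[b] <= 1

prob.solve(PULP_CBC_CMD(msg=False))
print("open:", [j for j in facilities if y[j].value() == 1],
      "total distance:", sum(d[i][j] * x[i][j].value()
                             for i in customers for j in facilities))
```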

  15. IMPORTANCE OF PROBLEM SETTING BEFORE DEVELOPING A BUSINESS MODEL CANVAS

    OpenAIRE

    Bekhradi , Alborz; Yannou , Bernard; Cluzel , François

    2016-01-01

    International audience; In this paper, the importance of problem setting in the front end of innovation for achieving radical innovation is emphasized, prior to the use of the BMC. After discussing the context of Business Model Canvas usage, the reasons why a premature use (in early design stages) of the BMC tool fails are discussed through real examples of innovative startups in the Paris area. This paper ends with the proposition of three main rules to follow when one wants to use the Business Model C...

  16. The Impact of Individual Differences, Types of Model and Social Settings on Block Building Performance among Chinese Preschoolers.

    Science.gov (United States)

    Tian, Mi; Deng, Zhu; Meng, Zhaokun; Li, Rui; Zhang, Zhiyi; Qi, Wenhui; Wang, Rui; Yin, Tingting; Ji, Menghui

    2018-01-01

    Children's block building performances are used as indicators of other abilities in multiple domains. In the current study, we examined individual differences, types of model and social settings as influences on children's block building performance. Chinese preschoolers (N = 180) participated in a block building activity in a natural setting, and performance was assessed with multiple measures in order to identify a range of specific skills. Using scores generated across these measures, three dependent variables were analyzed: block building skills, structural balance and structural features. An overall MANOVA showed that there were significant main effects of gender and grade level across most measures. Type of model showed no significant effect on children's block building. There was a significant main effect of social settings on structural features, with the best performance in the 5-member group, followed by individual and then 10-member block building. These findings suggest that boys performed better than girls in the block building activity. Block building performance increased significantly from the first to the second year of preschool, but not from the second to the third. The preschoolers created more representational constructions when presented with a model made of wood rather than with a picture. There was partial evidence that children performed better when working with peers in a small group than when working alone or in a large group. It is suggested that future studies should examine modalities other than the visual one, diversify the samples and adopt a longitudinal investigation.

  17. Modeling pitting corrosion damage of high-level radioactive-waste containers, with emphasis on the stochastic approach

    Energy Technology Data Exchange (ETDEWEB)

    Henshall, G.A.; Halsey, W.G.; Clarke, W.L.; McCright, R.D.

    1993-01-01

    Recent efforts to identify methods of modeling pitting corrosion damage of high-level radioactive-waste containers are described. The need to develop models that can provide information useful to higher level system performance assessment models is emphasized, and examples of how this could be accomplished are described. Work to date has focused upon physically-based phenomenological stochastic models of pit initiation and growth. These models may provide a way to distill information from mechanistic theories in a way that provides the necessary information to the less detailed performance assessment models. Monte Carlo implementations of the stochastic theory have resulted in simulations that are, at least qualitatively, consistent with a wide variety of experimental data. The effects of environment on pitting corrosion have been included in the model using a set of simple phenomenological equations relating the parameters of the stochastic model to key environmental variables. The results suggest that stochastic models might be useful for extrapolating accelerated test data and for predicting the effects of changes in the environment on pit initiation and growth. Preliminary ideas for integrating pitting models with performance assessment models are discussed. These ideas include improving the concept of container "failure", and the use of "rules-of-thumb" to take information from the detailed process models and provide it to the higher level system and subsystem models. Finally, directions for future work are described, with emphasis on additional experimental work since it is an integral part of the modeling process.

  18. Modeling pitting corrosion damage of high-level radioactive-waste containers, with emphasis on the stochastic approach

    International Nuclear Information System (INIS)

    Henshall, G.A.; Halsey, W.G.; Clarke, W.L.; McCright, R.D.

    1993-01-01

    Recent efforts to identify methods of modeling pitting corrosion damage of high-level radioactive-waste containers are described. The need to develop models that can provide information useful to higher level system performance assessment models is emphasized, and examples of how this could be accomplished are described. Work to date has focused upon physically-based phenomenological stochastic models of pit initiation and growth. These models may provide a way to distill information from mechanistic theories in a way that provides the necessary information to the less detailed performance assessment models. Monte Carlo implementations of the stochastic theory have resulted in simulations that are, at least qualitatively, consistent with a wide variety of experimental data. The effects of environment on pitting corrosion have been included in the model using a set of simple phenomenological equations relating the parameters of the stochastic model to key environmental variables. The results suggest that stochastic models might be useful for extrapolating accelerated test data and for predicting the effects of changes in the environment on pit initiation and growth. Preliminary ideas for integrating pitting models with performance assessment models are discussed. These ideas include improving the concept of container "failure", and the use of "rules-of-thumb" to take information from the detailed process models and provide it to the higher level system and subsystem models. Finally, directions for future work are described, with emphasis on additional experimental work since it is an integral part of the modeling process
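
    A Monte Carlo sketch of the stochastic pitting approach both records describe: pits initiate as a Poisson process in time, each pit grows by a power law with a random rate, and container "failure" means the deepest pit exceeds the wall thickness. All rates, exponents and the wall thickness are illustrative assumptions, not fitted waste-container data.

```python
import numpy as np

rng = np.random.default_rng(3)

def failure_probability(t_years, wall_um=500.0, lam=2.0, n=0.4,
                        log_k_mean=np.log(40.0), log_k_sd=0.5,
                        n_sims=5_000):
    """P(deepest pit > wall) at time t: Poisson initiation (rate lam/yr),
    power-law growth depth = k * age**n with lognormal rate k (microns)."""
    failures = 0
    for _ in range(n_sims):
        n_pits = rng.poisson(lam * t_years)
        if n_pits == 0:
            continue
        age = t_years - rng.uniform(0.0, t_years, n_pits)   # pit ages
        k = rng.lognormal(log_k_mean, log_k_sd, n_pits)     # growth rates
        if (k * age ** n).max() > wall_um:
            failures += 1
    return failures / n_sims

for t in (50, 100, 300):
    print(f"t={t:>3} yr: P(deepest pit > wall) ~ {failure_probability(t):.3f}")
```

    Distilled curves of this kind (failure probability versus time, per environment) are exactly the sort of summary a higher-level performance assessment model could consume.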

  19. A PREDICTION MODEL OF FIRM VALUE THROUGH MANAGERIAL OWNERSHIP AND THE INVESTMENT OPPORTUNITY SET

    Directory of Open Access Journals (Sweden)

    Herry Laksito

    2017-03-01

    Full Text Available This study empirically examined the effect of managerial ownership on firm value, with the investment opportunity set as a mediating variable. In this model, corporate governance, measured by managerial shareholding, is related to firm value with the investment opportunity set as mediator. The purpose of this study was to analyze the effect of corporate governance on firm value, mediated by the investment opportunity set, in manufacturing companies listed on the Indonesia Stock Exchange. The population comprised all manufacturing companies listed on the Indonesia Stock Exchange and reporting financial statements in the Indonesian Capital Market Directory during the period 2005-2007. The sample was determined by purposive sampling; 37 firms met the required characteristics. The statistical method used was path analysis. The results showed that managerial stock ownership (corporate governance) did not affect firm value, with a negative direction. Managerial stock ownership (corporate governance) did affect the investment opportunity set (IOS). IOS did not affect firm value, and the investment opportunity set could not significantly mediate the effect of managerial ownership (corporate governance) on firm value.

  20. Targeting the right input data to improve crop modeling at global level

    Science.gov (United States)

    Adam, M.; Robertson, R.; Gbegbelegbe, S.; Jones, J. W.; Boote, K. J.; Asseng, S.

    2012-12-01

    Designed for location-specific simulations, the use of crop models at the global level raises important questions. Crop models are originally premised on small unit areas where environmental conditions and management practices are considered homogeneous. Specific information describing soils, climate, management, and crop characteristics is used in the calibration process. However, when scaling up for global application, we rely on information derived from geographical information systems and weather generators. To run crop models at a broad scale, we use a modeling platform that assumes a uniformly generated grid cell as the unit area. Specific weather, soil and management practices for each crop are represented for each grid cell. Studies on the impacts of the uncertainties of weather information and climate change on crop yield at the global level have been carried out (Osborne et al., 2007; Nelson et al., 2010; van Bussel et al., 2011). Detailed information on soils and management practices at the global level is very scarce but recognized to be of critical importance (Reidsma et al., 2009). Few attempts to assess the impact of their uncertainties on cropping system performance can be found. The objectives of this study are (i) to determine the sensitivities of a crop model to soil and management practices, the inputs most relevant to low-input rainfed cropping systems, and (ii) to define hotspots of sensitivity according to the input data. We ran DSSAT v4.5 globally (CERES-CROPSIM) to simulate wheat yields at 45 arc-minute resolution. Cultivar parameters were calibrated and validated for different mega-environments (results not shown). The model was run for nitrogen-limited production systems. This setting was chosen as the most representative for simulating actual yield (especially for low-input rainfed agricultural systems) and assumes crop growth to be free of pest and disease damage. We conducted a sensitivity analysis on contrasting management

  1. A multilevel, level-set method for optimizing eigenvalues in shape design problems

    International Nuclear Information System (INIS)

    Haber, E.

    2004-01-01

    In this paper, we consider optimal design problems that involve shape optimization. The goal is to determine the shape of a certain structure such that it is either as rigid or as soft as possible. To achieve this goal we combine two new ideas for an efficient solution of the problem. First, we replace the eigenvalue problem with an approximation by using inverse iteration. Second, we use a level set method but rather than propagating the front we use constrained optimization methods combined with multilevel continuation techniques. Combining these two ideas we obtain a robust and rapid method for the solution of the optimal design problem
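
    The first ingredient above, replacing the full eigenvalue solve by inverse iteration, can be sketched in a few lines: each step solves one shifted linear system and renormalizes, converging to the eigenpair nearest the shift. The 1D Laplacian test matrix is an assumption chosen for illustration, not the paper's design problem.

```python
import numpy as np

def inverse_iteration(A, shift=0.0, tol=1e-10, max_iter=200):
    """Approximate the eigenpair of A nearest `shift` by inverse iteration:
    one linear solve per step, with a Rayleigh-quotient eigenvalue estimate."""
    n = A.shape[0]
    M = A - shift * np.eye(n)
    v = np.random.default_rng(0).normal(size=n)
    v /= np.linalg.norm(v)
    lam = shift
    for _ in range(max_iter):
        w = np.linalg.solve(M, v)          # one linear solve per iteration
        v_new = w / np.linalg.norm(w)
        lam_new = v_new @ A @ v_new        # Rayleigh quotient
        if abs(lam_new - lam) < tol:
            break
        v, lam = v_new, lam_new
    return lam_new, v_new

# 1D Dirichlet Laplacian: smallest eigenvalue should approach pi^2 ~ 9.87
n = 100
h = 1.0 / (n + 1)
A = (np.diag(2 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h**2
lam, _ = inverse_iteration(A)
print(lam, np.pi**2)
```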

  2. Comparison of different statistical methods for estimation of extreme sea levels with wave set-up contribution

    Science.gov (United States)

    Kergadallan, Xavier; Bernardara, Pietro; Benoit, Michel; Andreewsky, Marc; Weiss, Jérôme

    2013-04-01

    Estimating the probability of occurrence of extreme sea levels is a central issue for the protection of the coast. Return periods of sea level with wave set-up contribution are estimated here at one site: Cherbourg, France, in the English Channel. The methodology follows two steps: the first is the computation of the joint probability of simultaneous wave height and still sea level; the second is the interpretation of those joint probabilities to assess a sea level for a given return period. Two different approaches were evaluated to compute the joint probability of simultaneous wave height and still sea level: the first uses multivariate extreme value distributions of logistic type, in which all components of the variables become large simultaneously; the second is a conditional approach for multivariate extreme values, in which only one component of the variables has to be large. Two different methods were applied to estimate the sea level with wave set-up contribution for a given return period: Monte Carlo simulation, in which the estimation is more accurate but needs more computation time, and classical ocean engineering design contours of inverse-FORM type, in which the method is simpler and allows more complex estimation of the wave set-up part (wave propagation to the coast, for example). We compare results from the two different approaches with the two different methods. To be able to use both the Monte Carlo simulation and the design contours method, wave set-up is estimated with a simple empirical formula. We show the advantages of the conditional approach compared to the multivariate extreme value approach when extreme sea levels occur when either the surge or the wave height is large. We discuss the validity of the ocean engineering design contours method, which is an alternative when the computation of sea levels is too complex to use the Monte Carlo simulation method.
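
    A hedged sketch of the Monte Carlo route described above: draw still water levels and wave heights, add a simple empirical set-up term, and read return levels off the empirical distribution. For brevity the two variables are sampled independently here, whereas the point of the paper is modelling their joint/conditional dependence; all distribution parameters and the set-up coefficient are invented, not Cherbourg fits.

```python
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(4)

# Annual maxima of still water level (m) and significant wave height (m),
# sampled from hypothetical GEV fits; independence is a simplification.
n_years = 100_000
swl = genextreme.rvs(c=-0.1, loc=4.0, scale=0.15, size=n_years, random_state=rng)
hs = genextreme.rvs(c=-0.1, loc=3.0, scale=0.60, size=n_years, random_state=rng)
setup = 0.2 * hs                 # toy empirical wave set-up formula
total = swl + setup              # sea level including wave set-up

for T in (10, 100, 1000):        # empirical T-year return levels
    level = np.quantile(total, 1.0 - 1.0 / T)
    print(f"{T:>4}-year sea level with set-up ~ {level:.2f} m")
```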

  3. Noise level in neonatal incubators: A comparative study of three models.

    Science.gov (United States)

    Fernández Zacarías, F; Beira Jiménez, J L; Bustillo Velázquez-Gaztelu, P J; Hernández Molina, R; Lubián López, Simón

    2018-04-01

    Preterm infants usually have to spend a long time in an incubator, and excessive noise inside it can have adverse physiological and psychological effects on neonates. In fact, incubator noise levels typically range from 45 to 70 dB, but differences in this respect depend largely on the noise measuring method used. The primary aim of this work was to assess the extent to which noise in an incubator comes from its own fan and how efficiently the incubator can isolate external noise. Three different incubator models were characterized for acoustic performance by measuring their internal noise levels in an anechoic chamber, and also for noise isolation efficiency by using a pink noise source in combination with an internal and an external microphone connected to an SVAN958 noise analyzer. The incubators studied produced continuous equivalent noise levels of 53.5-58 dB and reduced external noise by 5.2-10.4 dB. A preterm infant in an incubator is exposed to noise levels clearly exceeding international recommendations, even though such levels usually comply with the limit set in the standard IEC 60601-2-19:2009 (60 dBA) under normal conditions of use.

  4. LevelMerge: Collaborative Game Level Editing by Merging Labeled Graphs

    OpenAIRE

    Santoni, Christian; Salvati, Gabriele; Tibaldo, Valentina; Pellacini, Fabio

    2016-01-01

    Game level editing is the process of constructing a full game level starting from 3D asset libraries, e.g., 3D models, textures, shaders, scripts. In level editing, designers define the look and behavior of the whole level by placing objects, assigning materials and lighting parameters, setting animations and physics properties, and customizing the objects' AI and behavior by editing scripts. The heterogeneity of the task usually translates to a workflow where a team of people, experts on separa...

  5. Recalibrating disease parameters for increasing realism in modeling epidemics in closed settings

    Directory of Open Access Journals (Sweden)

    Livio Bioglio

    2016-11-01

    Full Text Available Abstract Background The homogeneous mixing assumption is widely adopted in epidemic modelling for its parsimony and represents the building block of more complex approaches, including very detailed agent-based models. The latter assume homogeneous mixing within schools, workplaces and households, mostly for the lack of detailed information on human contact behaviour within these settings. The recent data availability on high-resolution face-to-face interactions makes it now possible to assess the goodness of this simplified scheme in reproducing relevant aspects of the infection dynamics. Methods We consider empirical contact networks gathered in different contexts, as well as synthetic data obtained through realistic models of contacts in structured populations. We perform stochastic spreading simulations on these contact networks and in populations of the same size under a homogeneous mixing hypothesis. We adjust the epidemiological parameters of the latter in order to fit the prevalence curve of the contact epidemic model. We quantify the agreement by comparing epidemic peak times, peak values, and epidemic sizes. Results Good approximations of the peak times and peak values are obtained with the homogeneous mixing approach, with a median relative difference smaller than 20 % in all cases investigated. Accuracy in reproducing the peak time depends on the setting under study, while for the peak value it is independent of the setting. Recalibration is found to be linear in the epidemic parameters used in the contact data simulations, showing changes across empirical settings but robustness across groups and population sizes. Conclusions An adequate rescaling of the epidemiological parameters can yield a good agreement between the epidemic curves obtained with a real contact network and a homogeneous mixing approach in a population of the same size. The use of such recalibrated homogeneous mixing approximations would enhance the accuracy and

  6. Recalibrating disease parameters for increasing realism in modeling epidemics in closed settings.

    Science.gov (United States)

    Bioglio, Livio; Génois, Mathieu; Vestergaard, Christian L; Poletto, Chiara; Barrat, Alain; Colizza, Vittoria

    2016-11-14

    The homogeneous mixing assumption is widely adopted in epidemic modelling for its parsimony and represents the building block of more complex approaches, including very detailed agent-based models. The latter assume homogeneous mixing within schools, workplaces and households, mostly for the lack of detailed information on human contact behaviour within these settings. The recent data availability on high-resolution face-to-face interactions makes it now possible to assess the goodness of this simplified scheme in reproducing relevant aspects of the infection dynamics. We consider empirical contact networks gathered in different contexts, as well as synthetic data obtained through realistic models of contacts in structured populations. We perform stochastic spreading simulations on these contact networks and in populations of the same size under a homogeneous mixing hypothesis. We adjust the epidemiological parameters of the latter in order to fit the prevalence curve of the contact epidemic model. We quantify the agreement by comparing epidemic peak times, peak values, and epidemic sizes. Good approximations of the peak times and peak values are obtained with the homogeneous mixing approach, with a median relative difference smaller than 20 % in all cases investigated. Accuracy in reproducing the peak time depends on the setting under study, while for the peak value it is independent of the setting. Recalibration is found to be linear in the epidemic parameters used in the contact data simulations, showing changes across empirical settings but robustness across groups and population sizes. An adequate rescaling of the epidemiological parameters can yield a good agreement between the epidemic curves obtained with a real contact network and a homogeneous mixing approach in a population of the same size. The use of such recalibrated homogeneous mixing approximations would enhance the accuracy and realism of agent-based simulations and limit the intrinsic biases of

  7. Mathematical model of the electronuclear set-up on the beam of the JINR synchrotron

    International Nuclear Information System (INIS)

    Barashenkov, V.S.; Kumawat, H.; Lobanova, V.A.; Kumar, V.

    2003-01-01

    On the basis of the Monte Carlo code CASCADE, developed at JINR, a mathematical model is created of the deep-subcritical set-up with a uranium blanket used in experiments under way at JINR with a 0.6-4 GeV proton beam. The neutron spectra, yields and energies of generated particles are calculated and compared for several modifications of the set-up. The influence of paraffin and graphite moderators on the characteristics of particles escaping the lead target is studied. The modelled set-up can be considered a first step towards experiments with the U-Pu ADS SAD designed at JINR, with a heat power of several tens of kW.

  8. Mathematical Model of the Electronuclear Set-Up on the Beam of the JINR Synchrotron

    CERN Document Server

    Barashenkov, V S; Kumawat, H; Lobanova, V A

    2004-01-01

    On the basis of the Monte Carlo code CASCADE, developed at JINR, a mathematical model is created of the deep-subcritical set-up with a uranium blanket used in experiments under way at JINR with a 0.6-4 GeV proton beam. The neutron spectra, yields and energies of generated particles are calculated and compared for several modifications of the set-up. The influence of paraffin and graphite moderators on the characteristics of particles escaping the lead target is studied. The modelled set-up can be considered a first step towards experiments with the U-Pu ADS SAD designed at JINR, with a heat power of several tens of kW.

  9. Education level inequalities and transportation injury mortality in the middle aged and elderly in European settings

    NARCIS (Netherlands)

    Borrell, C.; Plasència, A.; Huisman, M.; Costa, G.; Kunst, A.; Andersen, O.; Bopp, M.; Borgan, J.-K.; Deboosere, P.; Glickman, M.; Gadeyne, S.; Minder, C.; Regidor, E.; Spadea, T.; Valkonen, T.; Mackenbach, J. P.

    2005-01-01

    OBJECTIVE: To study the differential distribution of transportation injury mortality by educational level in nine European settings, among people older than 30 years, during the 1990s. METHODS: Deaths of men and women older than 30 years from transportation injuries were studied. Rate differences

  10. Methods of mathematical modeling using polynomials of algebra of sets

    Science.gov (United States)

    Kazanskiy, Alexandr; Kochetkov, Ivan

    2018-03-01

    The article deals with the construction of discrete mathematical models for solving applied problems arising from the operation of building structures. Security issues in modern high-rise buildings are extremely serious and relevant, and there is no doubt that interest in them will only increase. The territory of the building is divided into zones that need to be kept under observation. Zones can overlap and have different priorities. Such situations can be described using formulas of the algebra of sets. The formulas can be programmed, which makes it possible to work with them in computer models, as the sketch below illustrates.
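
    As a concrete illustration of the set-algebra formulas the abstract mentions, the sketch below models overlapping observation zones as sets of grid cells; the zones, the building grid and the derived quantities are hypothetical.

    ```python
    # Hypothetical example: overlapping observation zones in a building modelled
    # as sets of grid cells, combined with set-algebra formulas.
    building = {(x, y) for x in range(10) for y in range(6)}      # whole floor
    zone_a = {(x, y) for x in range(0, 6) for y in range(0, 4)}   # camera A
    zone_b = {(x, y) for x in range(4, 10) for y in range(2, 6)}  # camera B

    coverage = zone_a | zone_b         # cells watched by at least one camera
    overlap = zone_a & zone_b          # cells watched by both (higher priority)
    blind_spots = building - coverage  # cells no camera observes

    print(len(coverage), len(overlap), len(blind_spots))
    ```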

  11. Microwave imaging of dielectric cylinder using level set method and conjugate gradient algorithm

    International Nuclear Information System (INIS)

    Grayaa, K.; Bouzidi, A.; Aguili, T.

    2011-01-01

    In this paper, we propose a computational method for microwave imaging of cylindrical dielectric objects, based on combining the level set technique with the conjugate gradient algorithm. From measurements of the scattered field, we retrieve the shape, location and permittivity of the object. The forward problem is solved by the method of moments, while the inverse problem is reformulated as an optimization problem and solved by the proposed scheme. The proposed method is found to give good reconstruction quality in terms of the reconstructed shape and permittivity.
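
    For readers unfamiliar with the level set ingredient, the sketch below shows a single evolution step of a level set function on a grid; the speed field V is a placeholder, whereas in the paper it would be derived from the conjugate-gradient step on the scattered-field misfit.

    ```python
    # One level-set evolution step, phi_t + V |grad phi| = 0. The constant speed
    # field V here is a placeholder for the misfit-derived speed of the paper.
    import numpy as np

    n, dt = 128, 0.5
    Y, X = np.mgrid[0:n, 0:n]
    phi = np.sqrt((X - n / 2) ** 2 + (Y - n / 2) ** 2) - 20.0  # signed distance to a circle

    V = 1.0 * np.ones_like(phi)        # positive speed moves the front outward

    gy, gx = np.gradient(phi)
    phi = phi - dt * V * np.sqrt(gx ** 2 + gy ** 2)            # explicit update

    inside = phi < 0                   # current estimate of the object's support
    print(inside.sum(), "pixels inside the reconstructed shape")
    ```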

  12. Electrical modeling of semiconductor bridge (SCB) BNCP detonators with electrochemical capacitor firing sets

    Energy Technology Data Exchange (ETDEWEB)

    Marx, K.D. [Sandia National Labs., Livermore, CA (United States); Ingersoll, D.; Bickes, R.W. Jr. [Sandia National Labs., Albuquerque, NM (United States)

    1998-11-01

    In this paper the authors describe computer models that simulate the electrical characteristics and hence the firing characteristics and performance of a semiconductor bridge (SCB) detonator for the initiation of BNCP [tetraammine-cis-bis(5-nitro-2H-tetrazolato-N²) cobalt(III) perchlorate]. The electrical data and resultant models provide new insights into the fundamental behavior of SCB detonators, particularly with respect to the initiation mechanism and the interaction of the explosive powder with the SCB. One model developed, the Thermal Feedback Model, considers the total energy budget for the system, including the time evolution of the energy delivered to the powder by the electrical circuit, as well as that released by the ignition and subsequent chemical reaction of the powder. The authors also present data obtained using a new low-voltage firing set which employed an advanced electrochemical capacitor having a nominal capacitance of 350,000 µF at 9 V, the maximum voltage rating for this particular device. A model for this firing set and detonator was developed by making measurements of the intrinsic capacitance and equivalent series resistance (ESR < 10 mΩ) of a single device. This model was then used to predict the behavior of BNCP SCB detonators fired alone, as well as in a multishot, parallel-string configuration using a firing set composed of either a single 9 V electrochemical capacitor or two of the capacitors wired in series and charged to 18 V.
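
    A back-of-envelope discharge calculation helps fix orders of magnitude for such a firing set. The sketch below assumes the nominal 350,000 µF / 9 V capacitor and the ESR upper bound quoted above; the bridge resistance is a hypothetical value, not from the paper.

    ```python
    # RC discharge of the electrochemical-capacitor firing set into an SCB bridge.
    C = 0.35           # farads (350,000 uF nominal)
    V0 = 9.0           # volts (maximum rating quoted above)
    esr = 0.010        # ohms, upper bound quoted in the abstract
    r_bridge = 0.050   # ohms, hypothetical bridge resistance for illustration

    R = esr + r_bridge
    tau = R * C                         # discharge time constant
    i_peak = V0 / R                     # initial discharge current
    e_total = 0.5 * C * V0 ** 2         # energy stored in the capacitor
    e_bridge = e_total * r_bridge / R   # share dissipated in the bridge
    print("tau = %.0f ms, peak current = %.0f A, bridge energy = %.1f J of %.1f J"
          % (tau * 1e3, i_peak, e_bridge, e_total))
    ```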

  13. An integrated extended Kalman filter–implicit level set algorithm for monitoring planar hydraulic fractures

    International Nuclear Information System (INIS)

    Peirce, A; Rochinha, F

    2012-01-01

    We describe a novel approach to the inversion of elasto-static tiltmeter measurements to monitor planar hydraulic fractures propagating within three-dimensional elastic media. The technique combines the extended Kalman filter (EKF), which predicts and updates state estimates using tiltmeter measurement time-series, with a novel implicit level set algorithm (ILSA), which solves the coupled elasto-hydrodynamic equations. The EKF and ILSA are integrated to produce an algorithm to locate the unknown fracture-free boundary. A scaling argument is used to derive a strategy to tune the algorithm parameters to enable measurement information to compensate for unmodeled dynamics. Synthetic tiltmeter data for three numerical experiments are generated by introducing significant changes to the fracture geometry by altering the confining geological stress field. Even though there is no confining stress field in the dynamic model used by the new EKF-ILSA scheme, it is able to use synthetic data to arrive at remarkably accurate predictions of the fracture widths and footprints. These experiments also explore the robustness of the algorithm to noise and to placement of tiltmeter arrays operating in the near-field and far-field regimes. In these experiments, the appropriate parameter choices and strategies to improve the robustness of the algorithm to significant measurement noise are explored. (paper)
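
    The EKF half of the scheme follows the standard predict/update recursion; the sketch below is a generic implementation with placeholder linear dynamics, not the elasto-hydrodynamic model or tiltmeter operator of the paper.

    ```python
    # Generic EKF step: F and H are the Jacobians of the dynamics f and the
    # measurement operator h, evaluated at the current state estimate.
    import numpy as np

    def ekf_step(x, P, z, f, h, F, H, Q, R):
        x_pred = f(x)                          # predict state
        P_pred = F @ P @ F.T + Q               # predict covariance
        y = z - h(x_pred)                      # innovation
        S = H @ P_pred @ H.T + R               # innovation covariance
        K = P_pred @ H.T @ np.linalg.inv(S)    # Kalman gain
        return x_pred + K @ y, (np.eye(len(x)) - K @ H) @ P_pred

    # Tiny demo with placeholder linear dynamics and a scalar measurement.
    F = np.array([[1.0, 1.0], [0.0, 1.0]]); H = np.array([[1.0, 0.0]])
    x, P = np.zeros(2), np.eye(2)
    x, P = ekf_step(x, P, z=np.array([0.8]),
                    f=lambda s: F @ s, h=lambda s: H @ s,
                    F=F, H=H, Q=0.01 * np.eye(2), R=np.array([[0.1]]))
    print(x)
    ```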

  14. Records for radioactive waste management up to repository closure: Managing the primary level information (PLI) set

    International Nuclear Information System (INIS)

    2004-07-01

    The objective of this publication is to highlight the importance of the early establishment of a comprehensive records system to manage primary level information (PLI) as an integrated set of information, not merely as a collection of information, throughout all the phases of radioactive waste management. In addition to the information described in the waste inventory record keeping system (WIRKS), the PLI of a radioactive waste repository consists of the entire universe of information, data and records related to any aspect of the repository's life cycle. It is essential to establish PLI requirements based on the integrated set of needs of the regulators and waste managers involved in the waste management chain, and to update these requirements as needs change over time. Information flow for radioactive waste management should be back-end driven. Identification of an authority to oversee the management of PLI throughout all phases of the radioactive waste management life cycle would guarantee the information flow to future generations. The long-term protection of information essential to future generations can only be assured by the timely establishment of a comprehensive and effective records management system (RMS) capable of capturing, indexing and evaluating all PLI. The loss of intellectual control over the PLI will make it very difficult to subsequently identify the ILI and HLI information sets. At all times prior to the closure of a radioactive waste repository, there should be an identifiable entity with a legally enforceable financial and management responsibility for the continued operation of a PLI records management system. The information presented in this publication will assist Member States in ensuring that waste and repository records, relevant for retention after repository closure

  15. AN AUTOMATIC OPTICAL AND SAR IMAGE REGISTRATION METHOD USING ITERATIVE MULTI-LEVEL AND REFINEMENT MODEL

    Directory of Open Access Journals (Sweden)

    C. Xu

    2016-06-01

    Full Text Available Automatic image registration is a vital yet challenging task, particularly for multi-sensor remote sensing images. Given the diversity of the data, it is unlikely that a single registration algorithm or a single image feature will work satisfactorily for all applications. Focusing on this issue, the main contribution of this paper is to propose an automatic optical-to-SAR image registration method using an iterative multi-level and refinement model. Firstly, a multi-level strategy of coarse-to-fine registration is presented: visual saliency features are used to acquire a coarse registration, specific area and line features are then used to refine the registration result, and after that sub-pixel matching is applied using a KNN graph. Secondly, an iterative strategy that involves adaptive parameter adjustment for re-extracting and re-matching features is presented. Considering the fact that almost all feature-based registration methods rely on feature extraction results, the iterative strategy improves the robustness of feature matching, and all parameters can be automatically and adaptively adjusted in the iterative procedure. Thirdly, a uniform level set segmentation model for optical and SAR images is presented to segment conjugate features, and a Voronoi diagram is introduced into Spectral Point Matching (VSPM) to further enhance the matching accuracy between the two sets of matching points. Experimental results show that the proposed method can effectively and robustly generate sufficient, reliable point pairs and provide accurate registration.

  16. Cultivating an Entrepreneurial Mind-Set through Transformational Leadership: A Focus on the Corporate Context

    Directory of Open Access Journals (Sweden)

    Boris Urban

    2017-06-01

    Full Text Available Corporate leaders are increasingly embracing entrepreneurial activity as a potential source of competitive advantage. Leaders adopting an entrepreneurial orientation (EO) at the firm level must foster an entrepreneurial mind-set among employees. This article aims to expand understanding of how an entrepreneurial mind-set as well as transformational leadership impact levels of EO at firms in an emerging market context, South Africa. Following a survey, partial least squares structural equation modelling (PLS-SEM) analysis is used to test the study hypotheses. Findings reveal positive and significant interrelationships between the study variables, where path analysis supports the study model and where both transformational leadership and an entrepreneurial mind-set amongst employees share a reciprocal causal relationship with higher levels of EO.

  17. Nurses' comfort level with spiritual assessment: a study among nurses working in diverse healthcare settings.

    Science.gov (United States)

    Cone, Pamela H; Giske, Tove

    2017-10-01

    To gain knowledge about nurses' comfort level in assessing spiritual matters and to learn what questions nurses use in practice related to spiritual assessment. Spirituality is important in holistic nursing care; however, nurses report feeling uncomfortable and ill-prepared to address this domain with patients. Education is reported to impact nurses' ability to engage in spiritual care. This cross-sectional exploratory survey reports on a mixed-method study examining how comfortable nurses are with spiritual assessment. In 2014, a 21-item survey with 10 demographic variables and three open-ended questions was distributed to Norwegian nurses working in diverse care settings, yielding 172 nurse responses (72% response rate). SPSS was used to analyse the quantitative data; thematic analysis examined the open-ended questions. Norwegian nurses reported a high level of comfort with most questions even though spirituality is seen as private. Nurses with some preparation or experience in spiritual care were most comfortable assessing spirituality. Statistically significant correlations were found between the nurses' comfort level with spiritual assessment and their preparedness and sense of the importance of spiritual assessment. How well-prepared nurses felt was related to years of experience, degree of spirituality and religiosity, and importance of spiritual assessment. Many nurses are poorly prepared for spiritual assessment and care among patients in diverse care settings; educational preparation increases their comfort level with facilitating such care. Nurses who feel well prepared with spirituality feel more comfortable with the spiritual domain. By fostering a culture where patients' spirituality is discussed and reflected upon in everyday practice and in continued education, nurses' sense of preparedness, and thus their level of comfort, can increase. Clinical supervision and interprofessional collaboration with hospital chaplains and/or other spiritual leaders can

  18. THE DEVELOPMENT AND USE OF A MODEL TO PREDICT SUSTAINABILITY OF CHANGE IN HEALTH CARE SETTINGS.

    Science.gov (United States)

    Molfenter, Todd; Ford, James H; Bhattacharya, Abhik

    2011-01-01

    Innovations adopted through organizational change initiatives are often not sustained, leading to diminished quality, productivity, and consumer satisfaction. Research explaining variance in the use of adopted innovations in health care settings is sparse, suggesting the need for a theoretical model to guide research and practice. In this article, we describe the development of a hybrid conjoint decision theoretic model designed to predict the sustainability of organizational change in health care settings. An initial test of the model's predictive validity using expert-scored hypothetical profiles resulted in an r-squared value of .77. The test of this model offers a theoretical base for future research on the sustainability of change in health care settings.

  19. A COOPERATIVE SCRIPT MODEL WITH A SCIENCE, ENVIRONMENT, TECHNOLOGY, AND SOCIETY (SETS) APPROACH TO LEARNING OUTCOMES

    Directory of Open Access Journals (Sweden)

    Amir Maksum

    2015-11-01

    Full Text Available This study aimed to determine the positive effect of applying the cooperative script learning model with a SETS (Science, Environment, Technology, and Society) approach on the chemistry learning outcomes of class X students. The population in this study was class X students of a high school in Kendal. Sampling was done by the cluster purposive sampling technique: one class was obtained as the experimental class, which used the cooperative script learning model with the SETS approach, and another as the control class, which received expository teaching with the SETS approach. Data were collected by documentation, testing, observation and questionnaires. The analysis of the affective domain data gave a score percentage of 80% for the experimental class and 78% for the control class, while the psychomotor domain data gave 79% for the experimental class and 78% for the control class. Based on the analysis of the results, a correlation coefficient rb of 0.52 was obtained, with a contribution of 28%. The conclusion of this study is that the use of cooperative script learning with the SETS approach affects the chemistry learning outcomes of class X high school students in Kendal on the subject of the redox concept, with a contribution of 28%.

  20. A 2D model of causal set quantum gravity: the emergence of the continuum

    International Nuclear Information System (INIS)

    Brightwell, Graham; Henson, Joe; Surya, Sumati

    2008-01-01

    Non-perturbative theories of quantum gravity inevitably include configurations that fail to resemble physically reasonable spacetimes at large scales. Often, these configurations are entropically dominant and pose an obstacle to obtaining the desired classical limit. We examine this 'entropy problem' in a model of causal set quantum gravity corresponding to a discretization of 2D spacetimes. Using results from the theory of partial orders we show that, in the large volume or continuum limit, its partition function is dominated by causal sets which approximate to a region of 2D Minkowski space. This model of causal set quantum gravity thus overcomes the entropy problem and predicts the emergence of a physically reasonable geometry.

  1. Wind-Induced Air-Flow Patterns in an Urban Setting: Observations and Numerical Modeling

    Science.gov (United States)

    Sattar, Ahmed M. A.; Elhakeem, Mohamed; Gerges, Bishoy N.; Gharabaghi, Bahram; Gultepe, Ismail

    2018-04-01

    City planning can have a significant effect on wind flow velocity patterns and thus natural ventilation. Buildings with different heights are roughness elements that can affect the near- and far-field wind flow velocity. This paper aims at investigating the impact of an increase in building height on the nearby velocity fields. A prototype urban setting of buildings with two different heights (25 and 62.5 cm) is built and placed in a wind tunnel. Wind flow velocity around the buildings is mapped at different heights. The wind tunnel measurements are used to validate a 3D numerical Reynolds-averaged Navier-Stokes model. The validated model is further used to calculate the wind flow velocity patterns for cases with different building heights. It was found that increasing the height of some buildings in an urban setting can lead to the formation of large horseshoe vortices and eddies around building corners. A separation area is formed at the leeward side of the building, and the recirculation of air behind the building leads to the formation of slowly rotating vortices. The opposite effect is observed in the wake (cavity) region of the buildings, where both the cavity length and width are significantly reduced, resulting in a pronounced increase in the wind flow velocity. A significant increase in the wind flow velocity, of up to 30%, is observed in the wake region of tall buildings. The spatially averaged velocities around short buildings also increased by 25% compared to those around buildings with different heights. The increase in the height of some buildings is found to have a positive effect on wind ventilation at the pedestrian level.

  2. Soft sets combined with interval valued intuitionistic fuzzy sets of type-2 and rough sets

    Directory of Open Access Journals (Sweden)

    Anjan Mukherjee

    2015-03-01

    Full Text Available Fuzzy set theory, rough set theory and soft set theory are all mathematical tools for dealing with uncertainties. The concept of type-2 fuzzy sets was introduced by Zadeh in 1975 and was extended to interval valued intuitionistic fuzzy sets of type-2 by the authors. This paper is devoted to the combinations of interval valued intuitionistic fuzzy sets of type-2, soft sets and rough sets. Three different types of new hybrid models, namely interval valued intuitionistic fuzzy soft sets of type-2, soft rough interval valued intuitionistic fuzzy sets of type-2 and soft interval valued intuitionistic fuzzy rough sets of type-2, are proposed and their properties are derived.

  3. Set points, settling points and some alternative models: theoretical options to understand how genes and environments combine to regulate body adiposity

    Directory of Open Access Journals (Sweden)

    John R. Speakman

    2011-11-01

    Full Text Available The close correspondence between energy intake and expenditure over prolonged time periods, coupled with an apparent protection of the level of body adiposity in the face of perturbations of energy balance, has led to the idea that body fatness is regulated via mechanisms that control intake and energy expenditure. Two models have dominated the discussion of how this regulation might take place. The set point model is rooted in physiology, genetics and molecular biology, and suggests that there is an active feedback mechanism linking adipose tissue (stored energy to intake and expenditure via a set point, presumably encoded in the brain. This model is consistent with many of the biological aspects of energy balance, but struggles to explain the many significant environmental and social influences on obesity, food intake and physical activity. More importantly, the set point model does not effectively explain the ‘obesity epidemic’ – the large increase in body weight and adiposity of a large proportion of individuals in many countries since the 1980s. An alternative model, called the settling point model, is based on the idea that there is passive feedback between the size of the body stores and aspects of expenditure. This model accommodates many of the social and environmental characteristics of energy balance, but struggles to explain some of the biological and genetic aspects. The shortcomings of these two models reflect their failure to address the gene-by-environment interactions that dominate the regulation of body weight. We discuss two additional models – the general intake model and the dual intervention point model – that address this issue and might offer better ways to understand how body fatness is controlled.
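
    The contrast between the two feedback structures can be made concrete with a toy simulation: active correction of intake toward a target mass (set point) versus expenditure that merely scales with mass (settling point). All parameters below are invented for illustration and carry no physiological authority.

    ```python
    # Toy set point vs settling point dynamics under a shift in the food
    # environment. Gains, energy densities and masses are purely illustrative.
    m_set = m_settle = 70.0     # body mass, kg
    target, intake0 = 70.0, 2500.0

    for day in range(2000):
        intake_env = intake0 + (300.0 if day > 1000 else 0.0)  # environment shift
        # Set point: intake actively corrected toward the target mass.
        intake_sp = intake_env - 200.0 * (m_set - target)
        m_set += (intake_sp - 33.0 * m_set) / 7700.0           # ~7700 kcal per kg
        # Settling point: passive feedback only; expenditure rises with mass.
        m_settle += (intake_env - 33.0 * m_settle) / 7700.0

    # The set point trajectory resists the environmental shift far more.
    print("set point: %.1f kg, settling point: %.1f kg" % (m_set, m_settle))
    ```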

  4. Requirements for High Level Models Supporting Design Space Exploration in Model-based Systems Engineering

    OpenAIRE

    Haveman, Steven P.; Bonnema, G. Maarten

    2013-01-01

    Most formal models are used in detailed design and focus on a single domain. Few approaches exist that can effectively tie these lower level models to a high level system model during design space exploration. This complicates the validation of high level system requirements during detailed design. In this paper, we define requirements for a high level model that is firstly driven by key systems engineering challenges present in industry and secondly connects to several formal and d...

  5. Modeling effects of traffic and landscape characteristics on ambient nitrogen dioxide levels in Connecticut

    Science.gov (United States)

    Skene, Katherine J.; Gent, Janneane F.; McKay, Lisa A.; Belanger, Kathleen; Leaderer, Brian P.; Holford, Theodore R.

    2010-12-01

    An integrated exposure model was developed that estimates nitrogen dioxide (NO2) concentration at residences using geographic information systems (GIS) and variables derived within residential buffers representing traffic volume and landscape characteristics, including land use, population density and elevation. Multiple measurements of NO2 taken outside of 985 residences in Connecticut were used to develop the model. A second set of 120 outdoor NO2 measurements, as well as cross-validation, was used to validate the model. The model suggests that approximately 67% of the variation in NO2 levels can be explained by: traffic and land use primarily within 2 km of a residence; population density; elevation; and time of year. Potential benefits of this model for health effects research include improved spatial estimation of traffic-related pollutant exposure and a reduced need for extensive pollutant measurements. The model, which could be calibrated and applied in areas other than Connecticut, has importance as a tool for exposure estimation in epidemiological studies of traffic-related air pollution.
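
    The sketch below illustrates the land-use regression structure described above, with synthetic data standing in for the buffer-derived covariates; the feature list and coefficients are illustrative, not the authors' fitted model.

    ```python
    # Land-use-regression-style model: NO2 at a residence regressed on
    # buffer-derived traffic and landscape covariates (all synthetic here).
    import numpy as np
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(0)
    n = 985
    X = np.column_stack([
        rng.gamma(2.0, 5000.0, n),   # traffic volume within a 2 km buffer
        rng.uniform(0.0, 1.0, n),    # developed land-use fraction in buffer
        rng.gamma(2.0, 300.0, n),    # population density
        rng.normal(100.0, 30.0, n),  # elevation, m
        rng.integers(0, 12, n),      # month of measurement (time of year)
    ])
    beta = np.array([4e-4, 8.0, 2e-3, -0.03, 0.2])    # invented coefficients
    y = 10.0 + X @ beta + rng.normal(0.0, 3.0, n)     # synthetic NO2, ppb

    model = LinearRegression().fit(X, y)
    print("R^2 = %.2f" % model.score(X, y))           # cf. ~0.67 in the study
    ```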

  6. SystemC Transaction-Level Modeling of an MPSoC Platform Based on an Open Source ISS by Using Interprocess Communication

    Directory of Open Access Journals (Sweden)

    Sami Boukhechem

    2008-01-01

    Full Text Available Transaction-level modeling (TLM) is a promising technique to deal with the increasing complexity of modern embedded systems. This model allows a system designer to model a complete application, composed of hardware and software parts, at several levels of abstraction. For this purpose, we use SystemC, which is proposed as a standardized modeling language. This paper presents a transaction-level modeling cosimulation methodology for modeling, validating, and verifying our embedded open architecture platform. The proposed platform is an open source multiprocessor system-on-chip (MPSoC) platform, integrated under the synthesis tool for adaptive and reconfigurable system-on-chip (STARSoC) environment. It relies on the integration between an open source instruction set simulator (ISS), the OR1Ksim platform, and the SystemC simulation environment, which contains other components (Wishbone bus, memories, etc.). The aim of this work is to provide designers with the possibility of faster and more efficient architecture exploration at higher levels of abstraction, starting from an algorithmic description and moving to implementation details.

  7. Thermal comfort assessment in a Dutch hospital setting: model applicability

    NARCIS (Netherlands)

    Ottenheijm, E.M.M.; Loomans, M.G.L.C.; Kort, H.S.M.; Trip, A.

    2016-01-01

    SUMMARY Limited information is available on the thermal comfort performance of the indoor environment in health care facilities, both for staff and patients. Thermal comfort models such as Predicted Mean Vote (PMV) and Adaptive Thermal Comfort (ATC) have not been applied extensively in this setting. In

  8. A Dual Hesitant Fuzzy Multigranulation Rough Set over Two-Universe Model for Medical Diagnoses

    Science.gov (United States)

    Zhang, Chao; Li, Deyu; Yan, Yan

    2015-01-01

    In medical science, disease diagnosis is one of the difficult tasks for medical experts, who are confronted with challenges in dealing with a lot of uncertain medical information. Moreover, different medical experts might each express their own thoughts about the medical knowledge base, which slightly differ from those of other experts. Thus, to solve the problems of uncertain data analysis and group decision making in disease diagnoses, we propose a new rough set model called the dual hesitant fuzzy multigranulation rough set over two universes, by combining the dual hesitant fuzzy set and multigranulation rough set theories. In the framework of our study, both the definition and some basic properties of the proposed model are presented. Finally, we give a general approach which is applied to a decision making problem in disease diagnoses, and the effectiveness of the approach is demonstrated by a numerical example. PMID:26858772

  9. Fluoroscopy in paediatric fractures - Setting a local diagnostic reference level

    International Nuclear Information System (INIS)

    Pillai, A.; McAuley, A.; McMurray, K.; Jain, M.

    2006-01-01

    Background: The Ionising Radiation (Medical Exposure) Regulations 2000 have made it mandatory to establish diagnostic reference levels (DRLs) for all typical radiological examinations. Objectives: We attempt to provide dose data for some common fluoroscopic procedures used in orthopaedic trauma that may be used as the basis for setting DRLs for paediatric patients. Materials and methods: The dose area product (DAP) in 865 paediatric trauma examinations was analysed. Median DAP values and screening times for each procedure type, along with quartile values for each range, are presented. Results: In the upper limb, elbow examinations had the highest exposure, with a median DAP value of 1.21 cGy·cm². Median DAP values for forearm and wrist examinations were 0.708 and 0.538 cGy·cm², respectively. In the lower limb, tibia and fibula examinations had a median DAP value of 3.23 cGy·cm², followed by ankle examinations with a median DAP of 3.10 cGy·cm². The rounded third-quartile DAP value for each distribution can be used as a provisional DRL for the specific procedure type. (authors)
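
    The DRL recipe stated above is simple enough to show directly: take the distribution of DAP values for a procedure and use the rounded third quartile as the provisional DRL. The sample values below are invented.

    ```python
    # Provisional DRL from a DAP distribution: rounded third-quartile value.
    import numpy as np

    dap_elbow = np.array([0.4, 0.9, 1.1, 1.21, 1.5, 2.0, 0.7, 1.3])  # cGy*cm^2, made up

    median = np.median(dap_elbow)
    q3 = np.percentile(dap_elbow, 75)
    print("median DAP = %.2f, provisional DRL = %.1f cGy*cm^2" % (median, round(q3, 1)))
    ```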

  10. Selection of low-level radioactive waste disposal sites using screening models versus more complex methodologies

    International Nuclear Information System (INIS)

    Uslu, I.; Fields, D.E.

    1993-01-01

    The task of choosing a waste-disposal site from a set of candidate sites requires an approach capable of objectively handling many environmental variables for each site. Several computer methodologies have been developed to assist in the process of choosing a site for the disposal of low-level radioactive waste; however, most of these models are costly to apply, in terms of computer resources and the time and effort required by professional modelers, geologists, and waste-disposal experts. The authors describe how the relatively simple DRASTIC methodology (a standardized system for evaluating groundwater pollution potential using hydrogeologic settings) may be used for "pre-screening" of sites to determine which subset of candidate sites is worthy of more detailed screening. Results of site comparisons made with DRASTIC are compared with results obtained using the PRESTO-II methodology, which is representative of the more complex release-transport-human exposure methodologies. 6 refs., 1 fig., 1 tab

  11. The Impact of Individual Differences, Types of Model and Social Settings on Block Building Performance among Chinese Preschoolers

    Directory of Open Access Journals (Sweden)

    Mi Tian

    2018-01-01

    Full Text Available Children's block building performances are used as indicators of other abilities in multiple domains. In the current study, we examined individual differences, types of model and social settings as influences on children's block building performance. Chinese preschoolers (N = 180) participated in a block building activity in a natural setting, and performance was assessed with multiple measures in order to identify a range of specific skills. Using scores generated across these measures, three dependent variables were analyzed: block building skills, structural balance and structural features. An overall MANOVA showed that there were significant main effects of gender and grade level across most measures. Type of model showed no significant effect on children's block building. There was a significant main effect of social setting on structural features, with the best performance in the 5-member group, followed by individual and then 10-member block building. These findings suggest that boys performed better than girls in the block building activity. Block building performance increased significantly from the first to the second year of preschool, but not from the second to the third. The preschoolers created more representational constructions when presented with a model made of wood rather than with a picture. There was partial evidence that children performed better when working with peers in a small group than when working alone or in a large group. It is suggested that future studies should examine modalities other than the visual one, diversify the samples and adopt a longitudinal design.

  12. MODELING AND STUDY OF HYDROELECTRIC GENERATING SETS OF SMALL HYDRO POWER PLANTS WITH FREQUENCY-CONTROLLED PERMANENT MAGNET SYNCHRONOUS GENERATORS

    Directory of Open Access Journals (Sweden)

    R. I. Mustafayev

    2016-01-01

    Full Text Available Currently, the hydroelectric generating sets of small HPPs with Pelton turbines employ conventional synchronous generators with electromagnetic excitation as their generating units. To deal with the pulsatile behaviour of the torque, a supplementary flywheel is generally installed on the system shaft to level the pulsations. The Pelton turbine power output is adjusted by a needle changing the water flow in the nozzle: its advancement modifies the nozzle area and eventually the flow. The full stroke time of the needle is limited to 20-40 s, since quickly shutting the nozzle for a swift water flow reduction may result in pressure surges. For quick power adjustment, so-called deflectors are employed, whose task is to retract the water jets from the Pelton turbine buckets. Thus, the mechanical method of power output regulation requires agreement between the needle stroke inside the turbine nozzles and the deflector. The paper proposes employing frequency-controlled synchronous machines with permanent magnets as the generating units of hydroelectric generating sets of small HPPs with Pelton turbines. The developed computer model reveals that this provides a higher level of adjustability towards rapidly changing loads in the grid. Furthermore, this replaces the mechanical power output control, involving the costly deflector drive and the turbine nozzle needles, with electrical regulation of the revolution rate and power output by a frequency converter located in the generator stator circuit. Via frequency start, the controllable synchronous machine ensures stable operation of the hydroelectric generating set with a negligibly small amount of water (the energy carrier). Finally, in the complete absence of water, the frequency-relay start facilitates shifting the generator operation to the synchronous capacitor mode, as the system operating parameter fluctograms obtained through computer modeling prove.

  13. County-Level Poverty Is Equally Associated with Unmet Health Care Needs in Rural and Urban Settings

    Science.gov (United States)

    Peterson, Lars E.; Litaker, David G.

    2010-01-01

    Context: Regional poverty is associated with reduced access to health care. Whether this relationship is equally strong in both rural and urban settings, or is affected by the contextual and individual-level characteristics that distinguish these areas, is unclear. Purpose: To compare the association of regional poverty with self-reported unmet…

  14. Requirements for high level models supporting design space exploration in model-based systems engineering

    NARCIS (Netherlands)

    Haveman, Steven; Bonnema, Gerrit Maarten

    2013-01-01

    Most formal models are used in detailed design and focus on a single domain. Few approaches exist that can effectively tie these lower level models to a high level system model during design space exploration. This complicates the validation of high level system requirements during

  15. A high-level power model for MPSoC on FPGA

    NARCIS (Netherlands)

    Piscitelli, R.; Pimentel, A.D.

    2011-01-01

    This paper presents a framework for high-level power estimation of multiprocessor systems-on-chip (MPSoC) architectures on FPGA. The technique is based on abstract execution profiles, called event signatures, and it operates at a higher level of abstraction than, e.g., commonly-used instruction-set

  16. Modeling decisions from experience: How models with a set of parameters for aggregate choices explain individual choices

    Directory of Open Access Journals (Sweden)

    Neha Sharma

    2017-10-01

    Full Text Available One of the paradigms (called the "sampling paradigm") in judgment and decision-making involves decision-makers sampling information before making a final consequential choice. In the sampling paradigm, certain computational models have been proposed in which a set of single or distribution parameters is calibrated to the choice proportions of a group of participants (aggregate and hierarchical models). However, currently little is known about how aggregate and hierarchical models would account for choices made by individual participants in the sampling paradigm. In this paper, we test the ability of aggregate and hierarchical models to explain choices made by individual participants. Several models, Ensemble, Cumulative Prospect Theory (CPT), Best Estimation and Simulation Techniques (BEAST), Natural-Mean Heuristic (NMH), and Instance-Based Learning (IBL), had their parameters calibrated to individual choices in a large dataset involving the sampling paradigm. Later, these models were generalized to two large datasets in the sampling paradigm. Results revealed that the aggregate models (like CPT and IBL) accounted for individual choices better than the hierarchical models (like Ensemble and BEAST) upon generalization to problems similar to those encountered during calibration. Furthermore, the CPT model, which relies on differential valuing of gains and losses, performed better than other models during calibration and generalization on datasets with a similar set of problems. The IBL model, relying on recency and frequency of sampled information, and the NMH model, relying on frequency of sampled information, performed better than other models during generalization to a challenging dataset. Sequential analyses of results from the different models showed how these models accounted for transitions from the last sample to the final choice in human data. We highlight the implications of using aggregate and hierarchical models in explaining individual choices
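
    Of the models listed, the Natural-Mean Heuristic is simple enough to sketch in full: draw free samples from each option and choose the option with the higher observed mean. The payoff distributions below are hypothetical.

    ```python
    # Natural-Mean Heuristic (NMH) for the sampling paradigm: choose the option
    # whose sampled outcomes have the higher mean. Options here are made up.
    import numpy as np

    rng = np.random.default_rng(1)

    def nmh_choice(opt_a, opt_b, n_samples=10):
        a = rng.choice(opt_a["outcomes"], size=n_samples, p=opt_a["probs"])
        b = rng.choice(opt_b["outcomes"], size=n_samples, p=opt_b["probs"])
        return "A" if a.mean() >= b.mean() else "B"

    risky = {"outcomes": [32, 0], "probs": [0.1, 0.9]}   # rare large gain
    safe = {"outcomes": [3], "probs": [1.0]}             # sure small gain

    choices = [nmh_choice(risky, safe) for _ in range(1000)]
    print("P(choose risky) =", choices.count("A") / 1000)
    ```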

  17. Comparing of goal setting strategy with group education method to increase physical activity level: A randomized trial.

    Science.gov (United States)

    Jiryaee, Nasrin; Siadat, Zahra Dana; Zamani, Ahmadreza; Taleban, Roya

    2015-10-01

    It is important that an intervention designed to increase physical activity be based on the health care setting's resources and be acceptable to the subject group. This study was designed to assess and compare the effect of a goal-setting strategy with a group education method on increasing the physical activity of mothers of children aged 1 to 5. Mothers who had at least one child of 1-5 years were randomized into two groups. The effects of 1) the goal-setting strategy and 2) the group education method on increasing physical activity were assessed and compared 1 month and 3 months after the intervention. Also, the weight, height, body mass index (BMI), waist and hip circumference, and well-being were compared between the two groups before and after the intervention. Physical activity level increased significantly after the intervention in the goal-setting group, and the difference between the two groups after the intervention was significant. BMI, waist and hip circumference decreased significantly in the goal-setting group after the intervention. In the group education method, only the well-being score improved significantly. Considering these results, the goal-setting strategy appears effective in boosting physical activity, improving the state of well-being and decreasing BMI, waist, and hip circumference.

  18. Multi-level emulation of complex climate model responses to boundary forcing data

    Science.gov (United States)

    Tran, Giang T.; Oliver, Kevin I. C.; Holden, Philip B.; Edwards, Neil R.; Sóbester, András; Challenor, Peter

    2018-04-01

    Climate model components involve both high-dimensional input and output fields. It is desirable to efficiently generate spatio-temporal outputs of these models for applications in integrated assessment modelling or to assess the statistical relationship between such sets of inputs and outputs, for example in uncertainty analysis. However, the need for efficiency often compromises the fidelity of output through the use of low complexity models. Here, we develop a technique which combines statistical emulation with a dimensionality reduction technique to emulate a wide range of outputs from an atmospheric general circulation model, PLASIM, as functions of the boundary forcing prescribed by the ocean component of a lower complexity climate model, GENIE-1. Although accurate and detailed spatial information on atmospheric variables such as precipitation and wind speed is well beyond the capability of GENIE-1's energy-moisture balance model of the atmosphere, this study demonstrates that the output of this model is useful in predicting PLASIM's spatio-temporal fields through multi-level emulation. Meaningful information from the fast model, GENIE-1, was extracted by utilising the correlation between variables of the same type in the two models and between variables of different types in PLASIM. We present here the construction and validation of several PLASIM variable emulators and discuss their potential use in developing a hybrid model with statistical components.
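
    The combination of emulation with dimensionality reduction can be sketched generically: project the high-dimensional output field onto a few principal components and fit one Gaussian-process emulator per component score. The data and dimensions below are synthetic; the paper's inputs would include fields from the fast model GENIE-1.

    ```python
    # Emulate a high-dimensional output field: PCA for dimensionality reduction,
    # then one Gaussian-process emulator per retained component (synthetic data).
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.gaussian_process import GaussianProcessRegressor

    rng = np.random.default_rng(0)
    X = rng.uniform(size=(60, 4))                    # boundary-forcing summaries
    fields = np.sin(X @ rng.normal(size=(4, 500)))   # stand-in output fields

    pca = PCA(n_components=5).fit(fields)
    scores = pca.transform(fields)                   # low-dimensional targets
    gps = [GaussianProcessRegressor().fit(X, scores[:, k]) for k in range(5)]

    def emulate(x_new):
        s = np.array([gp.predict(x_new.reshape(1, -1))[0] for gp in gps])
        return pca.inverse_transform(s.reshape(1, -1))[0]   # reconstructed field

    print(emulate(rng.uniform(size=4)).shape)        # (500,) emulated field
    ```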

  19. Goal setting with mothers in child development services.

    Science.gov (United States)

    Forsingdal, S; St John, W; Miller, V; Harvey, A; Wearne, P

    2014-07-01

    The aim of this grounded theory study was to explore mothers' perspectives on the processes of collaborative goal setting in multidisciplinary child development services involving follow-up home therapy. Semi-structured interviews were conducted in South East Queensland, Australia with 14 mothers of children aged 3-6 years who were accessing multidisciplinary child development services. Interviews focused on the process of goal setting. A grounded theory of Maternal Roles in Goal Setting (the M-RIGS Model) was developed from analysis of the data. Mothers assumed Dependent, Active Participator and Collaborator roles when engaging with the therapist in goal-setting processes. These roles were characterized by the mother's level of dependence on the therapist and insight into her child's needs and therapy processes. Goal Factors, Parent Factors and Therapist Factors influenced and added complexity to the goal-setting process. The M-RIGS Model highlights that mothers take on a range of roles in the goal-setting process. Although family-centred practice encourages negotiation and collaborative goal setting, parents may not always be ready to take on highly collaborative roles. Better understanding of parent roles, goal-setting processes and influencing factors will inform better engagement with families accessing multidisciplinary child development services. © 2013 John Wiley & Sons Ltd.

  20. A topology optimization method based on the level set method for the design of negative permeability dielectric metamaterials

    DEFF Research Database (Denmark)

    Otomori, Masaki; Yamada, Takayuki; Izui, Kazuhiro

    2012-01-01

    This paper presents a level set-based topology optimization method for the design of negative permeability dielectric metamaterials. Metamaterials are artificial materials that display extraordinary physical properties that are unavailable with natural materials. The aim of the formulated optimization problem is to find optimized layouts of a dielectric material that achieve negative permeability. The presence of grayscale areas in the optimized configurations critically affects the performance of metamaterials, positively as well as negatively, but configurations that contain grayscale areas are highly impractical from an engineering and manufacturing point of view. Therefore, a topology optimization method that can obtain clear optimized configurations is desirable. Here, a level set-based topology optimization method incorporating a fictitious interface energy is applied to a negative...

  1. Radon levels in dwellings in chalk terrain. Development and analysis of distributional and causal models

    International Nuclear Information System (INIS)

    Killip, Ian Richmond

    2002-01-01

    This thesis investigates the range, distribution and causes of high radon levels in dwellings in the Brighton area of Southeast England. Indoor radon levels were measured in more than 1000 homes. The results show that high radon levels can arise in an area previously considered to offer low radon potential from local geological sources. Climate and building-related factors were found to significantly affect the radon levels in dwellings. Multiple regression was used to determine the influence of the various factors on indoor radon levels, and an empirical model was developed to predict indoor radon levels. The radon hazard, independent of building-related effects, was determined for each surveyed location by adjusting the radon measurement to that expected on the ground floor of a 'model' dwelling. This standardised set of radon levels was entered into a geographical information system (GIS) and related to surface geology. The geometric mean radon level for each lithological unit was plotted to produce a radon hazard map for the area. The highest radon levels were found to be associated with the youngest Chalk Formations, particularly where they meet overlying Tertiary deposits, and with Clay-with-Flints Quaternary deposits in the area. The results were also converted to the radon activity equivalent to that expected from the NRPB's standard dual-detector dwelling survey method and analysed by lognormal modelling to estimate the proportion of dwellings likely to exceed the UK Action Level of 200 Bq/m³ for each lithological unit. The likely percentages of dwellings affected by radon thus obtained were mapped to lithological boundaries to produce a radon potential map. The radon hazard map and the empirical radon model facilitate the prediction of radon levels in dwellings of comparable construction above similar geology, and should further the understanding of the behaviour of radon gas in buildings, allowing indoor radon concentrations to be controlled. The radon

  2. A comparison of simulation results from two terrestrial carbon cycle models using three climate data sets

    International Nuclear Information System (INIS)

    Ito, Akihiko; Sasai, Takahiro

    2006-01-01

    This study addressed how different climate data sets influence simulations of the global terrestrial carbon cycle. For the period 1982-2001, we compared the results of simulations based on three climate data sets (NCEP/NCAR, NCEP/DOE AMIP-II and ERA40) employed in meteorological, ecological and biogeochemical studies, and two different models (BEAMS and Sim-CYCLE). The models differed in their parameterizations of photosynthetic and phenological processes but used the same surface climate (e.g. shortwave radiation, temperature and precipitation), vegetation, soil and topography data. The three data sets give different climatic conditions, especially for shortwave radiation, in terms of long-term means, linear trends and interannual variability. Consequently, the simulation results for global net primary productivity varied by 16%-43% solely from differences in the climate data sets, especially in those regions where the shortwave radiation data differed markedly: differences in the climate data set can strongly influence simulation results. The differences among the climate data sets and between the two models resulted in slightly different spatial distributions and interannual variability in the net ecosystem carbon budget. To minimize uncertainty, we should pay attention to the specific climate data used. We recommend developing an accurate standard climate data set for simulation studies

  3. Sensitivity Analysis of features in tolerancing based on constraint function level sets

    International Nuclear Information System (INIS)

    Ziegler, Philipp; Wartzack, Sandro

    2015-01-01

    Usually, the geometry of the manufactured product inherently varies from the nominal geometry. This may negatively affect the product functions and properties (such as quality and reliability), as well as the assemblability of the single components. In order to avoid this, the geometric variation of these component surfaces and associated geometry elements (like hole axes) is restricted by tolerances. Since tighter tolerances lead to significantly higher manufacturing costs, tolerances should be specified carefully. Therefore, the impact of deviating component surfaces on the functions, properties and assemblability of the product has to be analyzed. As physical experiments are expensive, statistical tolerance analysis methods are widely used in engineering design. Current tolerance simulation tools lack an appropriate indicator for the impact of deviating component surfaces. In adopting Sensitivity Analysis methods, there are several challenges which arise from the specific framework in tolerancing. This paper presents an approach to adapt Sensitivity Analysis methods to current tolerance simulations with an interface module, which is based on level sets of constraint functions for the parameters of the simulation model. The paper is an extension and generalization of Ziegler and Wartzack [1]. Mathematical properties of the constraint functions (convexity, homogeneity), which are important for the computational costs of the Sensitivity Analysis, are shown. The practical use of the method is illustrated in a case study of a plain bearing. - Highlights: • Alternative definition of Deviation Domains. • Proof of mathematical properties of the Deviation Domains. • Definition of the interface between Deviation Domains and Sensitivity Analysis. • Sensitivity analysis of a gearbox to show the method's practical use

  4. Process-based interpretation of conceptual hydrological model performance using a multinational catchment set

    Science.gov (United States)

    Poncelet, Carine; Merz, Ralf; Merz, Bruno; Parajka, Juraj; Oudin, Ludovic; Andréassian, Vazken; Perrin, Charles

    2017-08-01

    Most previous assessments of hydrologic model performance are fragmented, being based on small numbers of catchments and different methods or time periods, and they do not link the results to landscape or climate characteristics. This study uses large-sample hydrology to identify major catchment controls on daily runoff simulations. It is based on a conceptual lumped hydrological model (GR6J), a collection of 29 catchment characteristics, a multinational set of 1103 catchments located in Austria, France, and Germany, and four runoff model efficiency criteria. Two analyses are conducted to assess how features and criteria are linked: (i) a one-dimensional analysis based on the Kruskal-Wallis test and (ii) a multidimensional analysis based on regression trees, investigating the interplay between features. The catchment features most affecting model performance are the flashiness of precipitation and streamflow (computed as the ratio of absolute day-to-day fluctuations to the total amount in a year), the seasonality of evaporation, the catchment area, and the catchment aridity. Nonflashy, nonseasonal, large, and nonarid catchments show the best performance for all the tested criteria. We argue that this higher performance is due to fewer nonlinear responses (higher correlation between precipitation and streamflow) and lower input and output variability for such catchments. Finally, we show that, compared to national sets, multinational sets increase the transferability of results because they explore a wider range of hydroclimatic conditions.
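
    The flashiness measure defined parenthetically above is straightforward to compute; the sketch below applies the stated definition to a synthetic daily streamflow series.

    ```python
    # Flashiness as defined above: summed absolute day-to-day fluctuations
    # divided by the total amount over the year (synthetic daily flows).
    import numpy as np

    rng = np.random.default_rng(0)
    daily_flow = rng.gamma(shape=2.0, scale=5.0, size=365)   # m^3/s, illustrative

    flashiness = np.abs(np.diff(daily_flow)).sum() / daily_flow.sum()
    print("flashiness index = %.2f" % flashiness)
    ```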

  5. Towards deep inclusion for equity-oriented health research priority-setting: A working model.

    Science.gov (United States)

    Pratt, Bridget; Merritt, Maria; Hyder, Adnan A

    2016-02-01

    Growing consensus that health research funders should align their investments with national research priorities presupposes that such national priorities exist and are just. Arguably, justice requires national health research priority-setting to promote health equity. Such a position is consistent with recommendations made by the World Health Organization and at global ministerial summits that health research should serve to reduce health inequalities between and within countries. Thus far, no specific requirements for equity-oriented research priority-setting have been described to guide policymakers. As a step towards the explication and defence of such requirements, we propose that deep inclusion is a key procedural component of equity-oriented research priority-setting. We offer a model of deep inclusion that was developed by applying concepts from work on deliberative democracy and development ethics. This model consists of three dimensions--breadth, qualitative equality, and high-quality non-elite participation. Deep inclusion is captured not only by who is invited to join a decision-making process but also by how they are involved and by when non-elite stakeholders are involved. To clarify and illustrate the proposed dimensions, we use the sustained example of health systems research. We conclude by reviewing practical challenges to achieving deep inclusion. Despite the existence of barriers to implementation, our model can help policymakers and other stakeholders design more inclusive national health research priority-setting processes and assess these processes' depth of inclusion. Copyright © 2016 Elsevier Ltd. All rights reserved.

  6. Identification of candidate categories of the International Classification of Functioning Disability and Health (ICF) for a Generic ICF Core Set based on regression modelling

    Directory of Open Access Journals (Sweden)

    Üstün Bedirhan T

    2006-07-01

    Full Text Available Abstract. Background: The International Classification of Functioning, Disability and Health (ICF) is the framework developed by WHO to describe functioning and disability at both the individual and population levels. While condition-specific ICF Core Sets are useful, a Generic ICF Core Set is needed to describe and compare problems in functioning across health conditions. Methods: The aims of the multi-centre, cross-sectional study presented here were: (a) to propose a method to select ICF categories when a large amount of ICF-based data have to be handled, and (b) to identify candidate ICF categories for a Generic ICF Core Set by examining their explanatory power in relation to item one of the SF-36. The data were collected from 1039 patients using the ICF checklist, the SF-36 and a Comorbidity Questionnaire. ICF categories to be entered in an initial regression model were selected following systematic steps in accordance with the ICF structure. Based on an initial regression model, additional models were designed by systematically substituting the ICF categories included in it with ICF categories with which they were highly correlated. Results: Fourteen different regression models were performed. The variance the models account for ranged from 22.27% to 24.0%. The ICF category that explained the highest amount of variance in all the models was sensation of pain. In total, thirteen candidate ICF categories for a Generic ICF Core Set were proposed. Conclusion: The selection strategy based on the ICF structure and the examination of the best possible alternative models does not provide a final answer about which ICF categories must be considered, but leads to a selection of suitable candidates which need further consideration and comparison with the results of other selection strategies in developing a Generic ICF Core Set.

  7. Level set method for optimal shape design of MRAM core. Micromagnetic approach

    International Nuclear Information System (INIS)

    Melicher, Valdemar; Cimrak, Ivan; Keer, Roger van

    2008-01-01

    We aim at optimizing the shape of the magnetic core in MRAM memories. The evolution of the magnetization during the writing process is described by the Landau-Lifshitz equation (LLE). The actual shape of the core in one cell is characterized by the coefficient γ. A cost functional f = f(γ) expresses the quality of the writing process, having in mind the competition between the full-select and the half-select element. We derive an explicit form of the derivative F = ∂f/∂γ, which allows for the use of gradient-type methods for the actual computation of the optimized shape (e.g., the steepest descent method). The level set method (LSM) is employed for the representation of the piecewise constant coefficient γ
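
    With the derivative F = ∂f/∂γ available in explicit form, the optimization loop itself is the standard steepest descent shown below; the cost functional and its derivative are placeholders, since the paper's f(γ) is evaluated through the Landau-Lifshitz dynamics.

    ```python
    # Steepest descent on a piecewise-constant shape coefficient gamma, using an
    # explicit derivative. The cost and gradient below are illustrative stand-ins.
    import numpy as np

    def f(gamma):                      # placeholder cost functional f(gamma)
        return np.sum((gamma - 0.3) ** 2 * (gamma + 0.1))

    def grad_f(gamma):                 # placeholder derivative dF/dgamma
        return 2 * (gamma - 0.3) * (gamma + 0.1) + (gamma - 0.3) ** 2

    gamma = np.full(100, 0.8)          # cell-wise shape coefficient
    for _ in range(200):
        gamma -= 0.1 * grad_f(gamma)   # steepest descent update
    print("final cost = %.2e" % f(gamma))
    ```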

  8. Scientific Playworlds: a Model of Teaching Science in Play-Based Settings

    Science.gov (United States)

    Fleer, Marilyn

    2017-09-01

    Eminent scientists, like Einstein, worked with theoretical contradiction, thought experiments, mental models and visualisation—all characteristics of children's play. Supporting children's play is a strength of early childhood teachers. Promising research shows a link between imagination in science and imagination in play. A case study of 3 preschool teachers and 26 children (3.6-5.9 years; mean age of 4.6 years) over 6 weeks was undertaken, generating 59.6 h of digital observations and 788 photographs of play practices. The research sought to understand (1) how imaginative play promotes scientific learning and (2) how teachers engaged children in scientific play. Although play pedagogy is a strength of early childhood teachers, it was found that transforming imaginary situations into scientific narratives requires different pedagogical characteristics. The study found that the building of collective scientific narratives, alongside discourses of wondering, was a key determinant of science learning in play-based settings. Specifically, the pedagogical principles of using a cultural device that mirrors the science experiences, creating imaginary scientific situations, collectively building scientific problem situations, and imagining the relations between observable contexts and non-observable concepts, changed everyday practices into a scientific narrative and engagement. It is argued that these unique pedagogical characteristics promote scientific narratives in play-based settings. An approach, named Scientific Playworlds, is presented as a possible model for teaching science in play-based settings.

  9. Flipping for success: evaluating the effectiveness of a novel teaching approach in a graduate level setting.

    Science.gov (United States)

    Moraros, John; Islam, Adiba; Yu, Stan; Banow, Ryan; Schindelka, Barbara

    2015-02-28

    Flipped Classroom is a model that is quickly gaining recognition as a novel teaching approach among health science curricula. The purpose of this study was four-fold and aimed to compare Flipped Classroom effectiveness ratings with: 1) student socio-demographic characteristics, 2) student final grades, 3) student overall course satisfaction, and 4) course pre-Flipped Classroom effectiveness ratings. The participants in the study consisted of 67 Masters-level graduate students in an introductory epidemiology class. Data were collected from students who completed surveys during three time points (beginning, middle and end) in each term. The Flipped Classroom was employed for the academic year 2012-2013 (two terms) using both pre-class activities and in-class activities. Among the 67 Masters-level graduate students, 80% found the Flipped Classroom model to be either somewhat effective or very effective (M = 4.1/5.0). International students rated the Flipped Classroom to be significantly more effective when compared to North American students (X(2) = 11.35, p …). Students' perceived effectiveness of the Flipped Classroom had no significant association with their academic performance in the course as measured by their final grades (rs = 0.70). However, students who found the Flipped Classroom to be effective were also more likely to be satisfied with their course experience. Additionally, it was found that the SEEQ variable scores for students enrolled in the Flipped Classroom were significantly higher than the ones for students enrolled prior to the implementation of the Flipped Classroom (p = 0.003). Overall, the format of the Flipped Classroom provided more opportunities for students to engage in critical thinking, independently facilitate their own learning, and more effectively interact with and learn from their peers. Additionally, the instructor was given more flexibility to cover a wider range and depth of material, provide in-class applied learning…

  10. Uniqueness of Gibbs measure for Potts model with countable set of spin values

    International Nuclear Information System (INIS)

    Ganikhodjaev, N.N.; Rozikov, U.A.

    2004-11-01

    We consider a nearest-neighbor Potts model with countable spin values 0,1,..., and nonzero external field, on a Cayley tree of order k (with k+1 neighbors). We study translation-invariant 'splitting' Gibbs measures. We reduce the problem to the description of the solutions of some infinite system of equations. For any k≥1 and any fixed probability measure ν with ν(i)>0 on the set of all nonnegative integers Φ={0,1,...}, we show that the set of translation-invariant splitting Gibbs measures contains at most one point, independently of the parameters of the Potts model with countable set of spin values on the Cayley tree. We also give a full description of the class of measures ν on Φ such that, with respect to each element of this class, our infinite system of equations has a unique solution {a_i : i=1,2,...}, where each a_i is an element of (0,1). (author)

  11. Computing the dynamics of biomembranes by combining conservative level set and adaptive finite element methods

    OpenAIRE

    Laadhari, Aymen; Saramito, Pierre; Misbah, Chaouqi

    2014-01-01

    The numerical simulation of the deformation of vesicle membranes under simple shear external fluid flow is considered in this paper. A new saddle-point approach is proposed for the imposition of the fluid incompressibility and the membrane inextensibility constraints, through Lagrange multipliers defined in the fluid and on the membrane respectively. Using a level set formulation, the problem is approximated by mixed finite elements combined with an automatic adaptive ...

  12. Natural setting of Japanese islands and geologic disposal of high-level waste

    International Nuclear Information System (INIS)

    Koide, Hitoshi

    1991-01-01

    The Japanese islands are a combination of arcuate islands along boundaries between four major plates: Eurasia, North America, Pacific and Philippine Sea plates. The interaction among the four plates formed complex geological structures which are basically patchworks of small blocks of land and sea-floor sediments piled up by the subduction of oceanic plates along the margin of the Eurasia continent. Although frequent earthquakes and volcanic eruptions clearly indicate active crustal deformation, the distribution of active faults and volcanoes is localized regionally in the Japanese islands. Crustal displacement faster than 1 mm/year takes place only in restricted regions near plate boundaries or close to major active faults. Volcanic activity is absent in the region between the volcanic front and the subduction zone. The site selection is especially important in Japan. The scenarios for the long-term performance assessment of high-level waste disposal are discussed with special reference to the geological setting of Japan. The long-term prediction of tectonic disturbance, evaluation of faults and fractures in rocks and estimation of long-term water-rock interaction are key issues in the performance assessment of the high-level waste disposal in the Japanese islands. (author)

  13. Exercise Self-Efficacy as a Mediator between Goal-Setting and Physical Activity: Developing the Workplace as a Setting for Promoting Physical Activity.

    Science.gov (United States)

    Iwasaki, Yoshie; Honda, Sumihisa; Kaneko, Shuji; Kurishima, Kazuhiro; Honda, Ayumi; Kakinuma, Ayumu; Jahng, Doosub

    2017-03-01

    Physical activity (PA) is ranked as a leading health indicator and the workplace is a key setting to promote PA. The purpose of this study was to examine how goal-setting and exercise self-efficacy (SE) during a health promotion program influenced PA level among Japanese workers. Using a cross-sectional study design, we surveyed 281 employees. The short version of the International Physical Activity Questionnaire was used to assess PA level. Exercise SE was assessed using a partially modified version of Oka's exercise SE scale. Personal goals were assessed as the total number of "yes" responses to five items regarding "details of personal goals to perform PA". A mediational model was used to examine whether exercise SE mediates the relationship between the number of personal goals and PA level. The mean age of the participants was 46.3 years, 76.2% were men, and the most common occupational category was software engineer (30.6%). The average PA level per week exceeded the recommended level in 127 participants (45.2%). One hundred and eighty-four participants (65.5%) set some form of concrete personal goal to perform PA. The relationship between the number of personal goals and PA level was mediated by exercise SE. Our study showed that exercise SE mediates goal-setting and increases PA. The results suggest that the components of PA promotion programs should be tailored to enhance participants' confidence in performing PA.
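
    The mediation logic reported above can be made concrete with a small regression sketch in the Baron and Kenny style. The data below are simulated stand-ins, and the effect sizes are illustrative only, not the study's estimates.

```python
# Hedged mediation sketch: goals -> exercise self-efficacy (SE) -> activity (PA).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 281                                        # survey size from the abstract
goals = rng.integers(0, 6, n).astype(float)    # number of personal goals (0-5)
se = 0.6 * goals + rng.normal(size=n)          # mediator: exercise self-efficacy
pa = 0.5 * se + 0.1 * goals + rng.normal(size=n)  # outcome: physical activity

def ols(y, *X):
    """Ordinary least squares with an intercept."""
    return sm.OLS(y, sm.add_constant(np.column_stack(X))).fit()

total = ols(pa, goals)          # path c: total effect of goals on PA
a = ols(se, goals)              # path a: goals -> mediator
direct = ols(pa, goals, se)     # paths c' (direct) and b (mediator -> PA)
print("total effect   :", total.params[1])
print("direct effect  :", direct.params[1])    # should shrink if SE mediates
print("indirect (a*b) :", a.params[1] * direct.params[2])
```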

  14. Precision assessment of model-based RSA for a total knee prosthesis in a biplanar set-up.

    Science.gov (United States)

    Trozzi, C; Kaptein, B L; Garling, E H; Shelyakova, T; Russo, A; Bragonzoni, L; Martelli, S

    2008-10-01

    Model-based Roentgen Stereophotogrammetric Analysis (RSA) was recently developed for the measurement of prosthesis micromotion. Its main advantage is that markers do not need to be attached to the implants as traditional marker-based RSA requires. Model-based RSA has only been tested in uniplanar radiographic set-ups. A biplanar set-up would theoretically facilitate the pose estimation algorithm, since radiographic projections would show more distinct shape features of the implants than uniplanar images. We tested the precision of model-based RSA and compared it with that of the traditional marker-based method in a biplanar set-up. Micromotions of both tibial and femoral components were measured with both techniques from double examinations of patients participating in a clinical study. The results showed that in the biplanar set-up model-based RSA presents a homogeneous distribution of precision for all the translation directions, but an inhomogeneous error for rotations; in particular, internal-external rotation presented higher errors than rotations about the transverse and sagittal axes. Model-based RSA was less precise than the marker-based method, although the differences were not significant for the translations and rotations of the tibial component, with the exception of the internal-external rotations. For both prosthesis components the precisions of model-based RSA were below 0.2 mm for all the translations, and below 0.3 degrees for rotations about the transverse and sagittal axes. These values are still acceptable for clinical studies aimed at evaluating total knee prosthesis micromotion. In a biplanar set-up model-based RSA is a valid alternative to traditional marker-based RSA, where marking of the prosthesis is an enormous disadvantage.

  15. Model-based gene set analysis for Bioconductor.

    Science.gov (United States)

    Bauer, Sebastian; Robinson, Peter N; Gagneur, Julien

    2011-07-01

    Gene Ontology and other forms of gene-category analysis play a major role in the evaluation of high-throughput experiments in molecular biology. Single-category enrichment analysis procedures such as Fisher's exact test tend to flag large numbers of redundant categories as significant, which can complicate interpretation. We have recently developed an approach called model-based gene set analysis (MGSA) that substantially reduces the number of redundant categories returned by the gene-category analysis. In this work, we present the Bioconductor package mgsa, which makes the MGSA algorithm available to users of the R language. Our package provides a simple and flexible application programming interface for applying the approach. The mgsa package has been made available as part of Bioconductor 2.8. It is released under the conditions of the Artistic License 2.0. Contact: peter.robinson@charite.de; julien.gagneur@embl.de.

  16. Methodology for setting the reference levels in the measurements of the dose rate absorbed in air due to the environmental gamma radiation

    International Nuclear Information System (INIS)

    Dominguez Ley, Orlando; Capote Ferrera, Eduardo; Caveda Ramos, Celia; Alonso Abad, Dolores

    2008-01-01

    Full text: The methodology for setting the reference levels for measurements of the gamma dose rate absorbed in air is described. The registration level was obtained using statistical methods. To set the alarm levels, it was necessary to begin with a certain affectation level, which activates the investigation operation mode when it is reached. This affectation level had to be transformed into values of the indicators selected to signal an alarm in the network, allowing their direct comparison and, at the same time, greater operability. The affectation level was assumed to be an effective dose of 1 mSv/y, which is the international dose limit for the public. A conversion factor obtained in a practical way as a consequence of the Chernobyl accident was adopted to convert the annual effective dose into values of effective dose rate in air. These factors are the most important in our work, since the main task of the National Network of Environmental Radiological Surveillance of the Republic of Cuba is to detect accidents with regional-scale effects, and the Chernobyl accident is precisely an example of pollution at this scale. The alarm level setting was based on the results obtained in the first year after the Chernobyl accident. For this purpose, some transformations were performed. In the final results, a correction factor was introduced depending on the season of the year in which the measurement was made, taking into account the influence of different meteorological events on the measurement of this indicator. (author)
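
    As a rough orientation to the kind of conversion described, spreading the 1 mSv/y limit uniformly over a year gives the order of magnitude of the corresponding dose-rate increment. The uniform-exposure assumption is ours; the paper itself relies on empirical Chernobyl-derived factors and seasonal corrections rather than this naive division.

```python
# Back-of-envelope conversion: annual effective dose limit -> ambient dose rate.
HOURS_PER_YEAR = 365.25 * 24
limit_mSv_per_year = 1.0                       # public dose limit cited above

rate_uSv_per_hour = limit_mSv_per_year * 1000 / HOURS_PER_YEAR
print(f"{rate_uSv_per_hour:.3f} uSv/h")        # ~0.114 uSv/h above background
```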

  17. The Daily Events and Emotions of Master's-Level Family Therapy Trainees in Off-Campus Practicum Settings

    Science.gov (United States)

    Edwards, Todd M.; Patterson, Jo Ellen

    2012-01-01

    The Day Reconstruction Method (DRM) was used to assess the daily events and emotions of one program's master's-level family therapy trainees in off-campus practicum settings. This study examines the DRM reports of 35 family therapy trainees in the second year of their master's program in marriage and family therapy. Four themes emerged from the…

  18. Information behavior versus communication: application models in multidisciplinary settings

    Directory of Open Access Journals (Sweden)

    Cecília Morena Maria da Silva

    2015-05-01

    Full Text Available This paper deals with information behavior as support for models of communication design in the areas of Information Science, Library Science and Music. The communication models proposition is based on the models of Tubbs and Moss (2003), Garvey and Griffith (1972), adapted by Hurd (1996), and Wilson (1999). Therefore, the following questions arose: (i) what informational skills are required of librarians who act as mediators in the scholarly communication process, and what is the informational behavior of users in the educational environment?; (ii) what are the needs of music-related researchers, and how do they produce, seek, use and access the scientific knowledge of their area?; and (iii) how do the contexts involved in scientific collaboration processes influence the scientific production of the information science field in Brazil? The article includes a literature review on information behavior and its insertion in scientific communication, considering the influence of the context and/or situation of the objects involved in the motivating issues. The hypothesis is that user information behavior in different contexts and situations influences the definition of a scientific communication model. Finally, it is concluded that the same concept or a set of concepts can be used from different perspectives, thus reaching different results.

  19. The Impacts of Different Meteorology Data Sets on Nitrogen Fate and Transport in the SWAT Watershed Model

    Science.gov (United States)

    In this study, we investigated how different meteorology data sets impact nitrogen fate and transport responses in the Soil and Water Assessment Tool (SWAT) model. We used two meteorology data sets: National Climatic Data Center (observed) and Mesoscale Model 5/Weather Research ...

  20. Novel room-temperature-setting phosphate ceramics for stabilizing combustion products and low-level mixed wastes

    International Nuclear Information System (INIS)

    Wagh, A.S.; Singh, D.

    1994-01-01

    Argonne National Laboratory, with support from the Office of Technology in the US Department of Energy (DOE), has developed a new process employing novel, chemically bonded ceramic materials to stabilize secondary waste streams. Such waste streams result from the thermal processes used to stabilize low-level, mixed wastes. The process will help the electric power industry treat its combustion and low-level mixed wastes. The ceramic materials are strong, dense, leach-resistant, and inexpensive to fabricate. The room-temperature-setting process allows stabilization of volatile components containing lead, mercury, cadmium, chromium, and nickel. The process also provides effective stabilization of fossil fuel combustion products. It is most suitable for treating fly and bottom ashes.

  1. Group theoretical construction of two-dimensional models with infinite sets of conservation laws

    International Nuclear Information System (INIS)

    D'Auria, R.; Regge, T.; Sciuto, S.

    1980-01-01

    We explicitly construct some classes of field theoretical 2-dimensional models associated with symmetric spaces G/H according to a general scheme proposed in an earlier paper. We treat the SO(n + 1)/SO(n) and SU(n + 1)/U(n) cases, giving their relationship with the O(n) sigma-models and the CP(n) models. Moreover, we present a new class of models associated with the SU(n)/SO(n) case. All these models are shown to possess an infinite set of local conservation laws. (orig.)

  2. Dual deep modeling: multi-level modeling with dual potencies and its formalization in F-Logic.

    Science.gov (United States)

    Neumayr, Bernd; Schuetz, Christoph G; Jeusfeld, Manfred A; Schrefl, Michael

    2018-01-01

    An enterprise database contains a global, integrated, and consistent representation of a company's data. Multi-level modeling facilitates the definition and maintenance of such an integrated conceptual data model in a dynamic environment of changing data requirements of diverse applications. Multi-level models transcend the traditional separation of class and object with clabjects as the central modeling primitive, which allows for a more flexible and natural representation of many real-world use cases. In deep instantiation, the number of instantiation levels of a clabject or property is indicated by a single potency. Dual deep modeling (DDM) differentiates between source potency and target potency of a property or association and supports the flexible instantiation and refinement of the property by statements connecting clabjects at different modeling levels. DDM comes with multiple generalization of clabjects, subsetting/specialization of properties, and multi-level cardinality constraints. Examples are presented using a UML-style notation for DDM together with UML class and object diagrams for the representation of two-level user views derived from the multi-level model. Syntax and semantics of DDM are formalized and implemented in F-Logic, supporting the modeler with integrity checks and rich query facilities.
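
    The potency mechanism that DDM generalizes can be illustrated with a tiny sketch of plain deep instantiation (single potencies only; DDM's separate source and target potencies are not modelled here, and all names are invented): a property's potency says how many instantiation levels down it becomes a concrete value.

```python
# Toy deep instantiation: potency decreases by one per instantiation step.
class Clabject:
    def __init__(self, name, properties):
        self.name = name
        self.properties = properties          # {prop: (potency, value-or-None)}

    def instantiate(self, name, values):
        new_props = {}
        for prop, (potency, value) in self.properties.items():
            if potency == 0:
                new_props[prop] = (0, value)            # already concrete
            elif potency == 1:
                new_props[prop] = (0, values[prop])     # becomes a value now
            else:
                new_props[prop] = (potency - 1, None)   # defer to a lower level
        return Clabject(name, new_props)

ProductType = Clabject("ProductType", {"taxRate": (1, None), "price": (2, None)})
Book = ProductType.instantiate("Book", {"taxRate": 0.07})   # fixes taxRate
mobydick = Book.instantiate("MobyDick", {"price": 9.99})    # fixes price
print(mobydick.properties)   # both properties concrete at the bottom level
```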

  3. A multi-level model of emerging technology: An empirical study of the evolution of biotechnology from 1976 to 2003

    Science.gov (United States)

    van Witteloostuijn, Arjen

    2018-01-01

    In this paper, we develop an ecological, multi-level model that can be used to study the evolution of emerging technology. More specifically, by defining technology as a system composed of a set of interacting components, we can build upon the argument of multi-level density dependence from organizational ecology to develop a distribution-independent model of technological evolution. This allows us to distinguish between different stages of component development, which provides more insight into the emergence of stable component configurations, or dominant designs. We validate our hypotheses in the biotechnology industry by using patent data from the USPTO from 1976 to 2003.

  4. Olkiluoto hydrogeochemistry. A 3-D modelling approach for a sparse data set

    International Nuclear Information System (INIS)

    Luukkonen, A.; Partamies, S.; Pitkaenen, P.

    2003-07-01

    Olkiluoto at Eurajoki has been selected as a candidate site for the final disposal repository for the used nuclear waste produced in Finland. In the long-term safety assessment, one of the principal evaluation tools of safe disposal is hydrogeochemistry. For assessment purposes Posiva Oy excavates in the Olkiluoto bedrock an underground research laboratory (ONKALO). The complexity of the groundwater chemistry is characteristic of the Olkiluoto site and creates a demand to examine and visualise these hydrogeochemical features in 3-D together with the structural model. The need to study the hydrogeochemical features arises not only under the stable, undisturbed (pre-excavation) conditions but also in the disturbed system caused by the construction activities and open-tunnel conditions of the ONKALO. The present 3-D approach is based on integrating the independently and separately developed structural model with the results from the geochemical mixing calculations of the groundwater samples. For spatial geochemical regression purposes the study area is divided into four primary sectors on the basis of the occurrence of the samples. The geochemical information within the four primary sectors is summed up in four sector centroids that capture the depth distributions of the different water types within each primary sector area. The geographic locations of the centroids are used for a secondary division of the study area into secondary sectors. With the aid of the secondary sectors, spatial regressions between the centroids can be calculated, and interpolation of water type fractions within the centroid volume becomes possible. Similarly, extrapolations outside the centroid volume are possible as well. The mixing proportions of the five detected water types at an arbitrary point in the modelling volume can be estimated by applying the four centroids and using lateral linear regression. This study utilises two separate data sets: the older data set and the newer data set. The
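
    The interpolation idea, estimating water-type mixing fractions at an arbitrary point from the four sector centroids, can be caricatured as follows. The report uses lateral linear regression between centroids; this sketch substitutes simple inverse-distance weighting, and all coordinates and fractions are invented.

```python
# Toy spatial interpolation of a water-type mixing fraction from 4 centroids.
import numpy as np

centroids = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
fractions = np.array([0.10, 0.30, 0.20, 0.60])   # fraction of one water type
                                                 # at a common depth, per centroid

def interpolate(p, power=2):
    """Distance-weighted blend of the centroid values at point p."""
    d = np.linalg.norm(centroids - p, axis=1)
    if np.any(d < 1e-12):                        # exactly on a centroid
        return float(fractions[np.argmin(d)])
    w = 1.0 / d**power
    return float(np.sum(w * fractions) / np.sum(w))

print(interpolate(np.array([0.5, 0.5])))         # blended mixing fraction
```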

  5. Segmenting the Parotid Gland using Registration and Level Set Methods

    DEFF Research Database (Denmark)

    Hollensen, Christian; Hansen, Mads Fogtmann; Højgaard, Liselotte

    The method was evaluated on a test set consisting of 8 corresponding data sets. The attained total volume Dice coefficient and mean Hausdorff distance were 0.61 ± 0.20 and 15.6 ± 7.4 mm, respectively. The method has improvement potential which could be exploited prior to clinical introduction.
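
    For reference, the overlap metric quoted above is straightforward to compute; a minimal sketch with toy binary masks:

```python
# Dice coefficient between a binary segmentation and a reference mask.
import numpy as np

def dice(a, b):
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2 * np.logical_and(a, b).sum() / denom if denom else 1.0

seg = np.zeros((10, 10)); seg[2:7, 2:7] = 1     # toy segmentation
ref = np.zeros((10, 10)); ref[3:8, 3:8] = 1     # toy ground truth
print(dice(seg, ref))                           # 0.64 for these masks
```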

  6. The use of an integrated variable fuzzy sets in water resources management

    Science.gov (United States)

    Qiu, Qingtai; Liu, Jia; Li, Chuanzhe; Yu, Xinzhe; Wang, Yang

    2018-06-01

    Based on the evaluation of the present situation of water resources and the development of water conservancy projects and the social economy, the optimal allocation of regional water resources is an increasing need in water resources management. It is also the most effective way to promote a harmonious relationship between humans and water. In view of the limitations of traditional evaluations, which always choose a single-index model for the optimal allocation of regional water resources, an integrated variable fuzzy sets model (IVFS), based on the theory of variable fuzzy sets (VFS) and system dynamics (SD), is proposed in this paper to address dynamically complex problems in regional water resources management. The model is applied to evaluate the level of the optimal allocation of regional water resources of Zoucheng in China. Results show that the levels of the water resources allocation schemes range from 2.5 to 3.5, generally showing a trend toward the lower level. To achieve optimal regional management of water resources, this model provides a means of assessing water resources management, prominently improving the authenticity of the assessment by using the eigenvector of level H.

  7. Ferromagnetic interaction model of activity level in workplace communication

    Science.gov (United States)

    Akitomi, Tomoaki; Ara, Koji; Watanabe, Jun-ichiro; Yano, Kazuo

    2013-03-01

    The nature of human-human interaction, specifically, how people synchronize with each other in multiple-participant conversations, is described by a ferromagnetic interaction model of people's activity levels. We found two microscopic human interaction characteristics from a real-environment face-to-face conversation. The first characteristic is that people quite regularly synchronize their activity level with that of the other participants in a conversation. The second characteristic is that the degree of synchronization increases as the number of participants increases. Based on these microscopic ferromagnetic characteristics, a "conversation activity level" was modeled according to the Ising model. The results of a simulation of activity level based on this model reproduce macroscopic experimental measurements of activity level well. This model will give new insight into how people interact with each other in a conversation.
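
    The ferromagnetic analogy lends itself to a compact simulation. The toy Metropolis sketch below (coupling, temperature and group size are our choices, not the paper's fitted values) shows how individual activity "spins" in a fully connected group drift toward synchrony.

```python
# Toy Ising-style simulation of conversational synchrony.
import numpy as np

rng = np.random.default_rng(2)
n, J, beta, steps = 6, 1.0, 1.5, 2000   # participants, coupling, inverse temp.
spins = rng.choice([-1, 1], size=n)     # active (+1) vs quiet (-1)

for _ in range(steps):
    i = rng.integers(n)
    # fully connected group: energy change if participant i flips
    dE = 2 * J * spins[i] * (spins.sum() - spins[i]) / (n - 1)
    if dE <= 0 or rng.random() < np.exp(-beta * dE):
        spins[i] *= -1                  # Metropolis acceptance

print("synchrony |m| =", abs(spins.mean()))   # near 1 when well synchronized
```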

  8. "Economic microscope": The agent-based model set as an instrument in an economic system research

    Science.gov (United States)

    Berg, D. B.; Zvereva, O. M.; Akenov, Serik

    2017-07-01

    To create a valid model of a social or economic system, one must consider many parameters, conditions and restrictions. Systems, and consequently the corresponding models, prove to be very complicated. The problem of engineering such system models cannot be solved by mathematical methods alone; a solution can be found in computer simulation. Simulation does not reject mathematical methods: mathematical expressions can form the foundation of a computer model. In this paper a set of agent-based computer models is discussed. All models in the set simulate communications among productive agents, but each model is geared towards a specific goal and thus has its own algorithm and its own peculiarities. It is shown that computer simulation can discover new features of the agents' behavior that cannot be obtained by analytical solution of mathematical equations, and it thus plays the role of a kind of economic microscope.

  9. Transport equations, Level Set and Eulerian mechanics. Application to fluid-structure coupling

    International Nuclear Information System (INIS)

    Maitre, E.

    2008-11-01

    My work was devoted to the numerical analysis of non-linear elliptic-parabolic equations, to the neutron transport equation and to the simulation of fabric draping. More recently I developed an Eulerian method based on a level set formulation of the immersed boundary method to deal with fluid-structure coupling problems arising in bio-mechanics. Some of the more efficient algorithms for solving the neutron transport equation make use of the splitting of the transport operator, taking into account its characteristics. In the present work we introduce a new algorithm based on this splitting and an adaptation of minimal residual methods to the infinite-dimensional case. We present the case where the velocity space is of dimension 1 (slab geometry) and 2 (plane geometry) because the splitting is simpler in the former

  10. Benchmark Data Set for Wheat Growth Models: Field Experiments and AgMIP Multi-Model Simulations.

    Science.gov (United States)

    Asseng, S.; Ewert, F.; Martre, P.; Rosenzweig, C.; Jones, J. W.; Hatfield, J. L.; Ruane, A. C.; Boote, K. J.; Thorburn, P.J.; Rotter, R. P.

    2015-01-01

    The data set includes a current representative management treatment from detailed, quality-tested sentinel field experiments with wheat from four contrasting environments including Australia, The Netherlands, India and Argentina. Measurements include local daily climate data (solar radiation, maximum and minimum temperature, precipitation, surface wind, dew point temperature, relative humidity, and vapor pressure), soil characteristics, frequent measurements of growth, nitrogen in crop and soil, crop and soil water, and yield components. Simulations include results from 27 wheat models and a sensitivity analysis with 26 models and 30 years (1981-2010) for each location, for elevated atmospheric CO2 and temperature changes, a heat stress sensitivity analysis at anthesis, and a sensitivity analysis with soil and crop management variations and a Global Climate Model end-century scenario.

  11. Setting the stage for master's level success

    Science.gov (United States)

    Roberts, Donna

    Comprehensive reading, writing, research, and study skills play a critical role in a graduate student's success and ability to contribute to a field of study effectively. The literature indicated a need to support graduate student success in the areas of mentoring and navigation, as well as research and writing. The purpose of this two-phased mixed methods explanatory study was to examine factors that characterize student success at the Master's level in the fields of education, sociology and social work. The study was grounded in a transformational learning framework which focused on three levels of learning: technical knowledge, practical or communicative knowledge, and emancipatory knowledge. The study included two data collection points. Phase one consisted of a Master's Level Success questionnaire that was sent via Qualtrics to graduate level students at three colleges and universities in the Central Valley of California: a California State University campus, a University of California campus, and a private college campus. The results of the chi-square indicated that seven questionnaire items were significant with p values less than .05. Phase two of the data collection included semi-structured interview questions, from which three themes emerged using Dedoose software: (1) the need for more language and writing support at the Master's level, (2) the need for mentoring, especially for second-language learners, and (3) utilizing the strong influence of faculty in student success. It is recommended that institutions continually assess and strengthen their programs to meet the full range of learners and to support students to degree completion.

  12. Level Design as Model Transformation

    NARCIS (Netherlands)

    Dormans, Joris

    2011-01-01

    This paper frames the process of designing a level in a game as a series of model transformations. The transformations correspond to the application of particular design principles, such as the use of locks and keys to transform a linear mission into a branching space. It shows that by using rewrite
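
    A toy version of such a transformation (the graph encoding and rule are ours, for illustration only): rewriting a linear mission graph so that a lock guards a step and the matching key hangs on a new side branch, turning a line into a branching space.

```python
# Lock-and-key rewrite on a mission graph given as a list of directed edges.
def lock_and_key(edges, gate):
    """Guard the edge entering `gate` with a lock and add a key detour."""
    out = []
    for a, b in edges:
        if b == gate:
            out += [(a, "side room"), ("side room", "key"),   # detour branch
                    (a, "lock"), ("lock", b)]                 # guarded path
        else:
            out.append((a, b))
    return out

linear = [("start", "fight"), ("fight", "boss"), ("boss", "treasure")]
print(lock_and_key(linear, "boss"))   # the mission now branches at "fight"
```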

  13. Chemical Topic Modeling: Exploring Molecular Data Sets Using a Common Text-Mining Approach.

    Science.gov (United States)

    Schneider, Nadine; Fechner, Nikolas; Landrum, Gregory A; Stiefl, Nikolaus

    2017-08-28

    Big data is one of the key transformative factors which increasingly influences all aspects of modern life. Although this transformation brings vast opportunities, it also generates novel challenges, not the least of which is organizing and searching this data deluge. The field of medicinal chemistry is no different: more and more data are being generated, for instance, by technologies such as DNA encoded libraries, peptide libraries, text mining of large literature corpora, and new in silico enumeration methods. Handling those huge sets of molecules effectively is quite challenging and requires compromises that often come at the expense of the interpretability of the results. In order to find an intuitive and meaningful approach to organizing large molecular data sets, we adopted a probabilistic framework called "topic modeling" from the text-mining field. Here we present the first chemistry-related implementation of this method, which allows large molecule sets to be assigned to "chemical topics" and investigating the relationships between those. In this first study, we thoroughly evaluate this novel method in different experiments and discuss both its disadvantages and advantages. We show very promising results in reproducing human-assigned concepts using the approach to identify and retrieve chemical series from sets of molecules. We have also created an intuitive visualization of the chemical topics output by the algorithm. This is a huge benefit compared to other unsupervised machine-learning methods, like clustering, which are commonly used to group sets of molecules. Finally, we applied the new method to the 1.6 million molecules of the ChEMBL22 data set to test its robustness and efficiency. In about 1 h we built a 100-topic model of this large data set in which we could identify interesting topics like "proteins", "DNA", or "steroids". Along with this publication we provide our data sets and an open-source implementation of the new method (CheTo) which
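
    The analogy can be sketched with standard text-mining tools: treat each molecule as a "document" of substructure tokens and fit LDA. The fragment tokens below are invented for illustration; the actual CheTo implementation derives them from RDKit fragment fingerprints rather than hand-written strings.

```python
# Conceptual "chemical topic modeling" sketch with scikit-learn's LDA.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

molecule_docs = [
    "phenyl amide piperidine",     # pretend fragment tokens, one doc/molecule
    "phenyl amide morpholine",
    "steroid hydroxyl ketone",
    "steroid hydroxyl ester",
]
counts = CountVectorizer().fit_transform(molecule_docs)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(counts)
print(lda.transform(counts))       # per-molecule topic loadings: two series
```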

  14. Building more effective sea level rise models for coastal management

    Science.gov (United States)

    Kidwell, D.; Buckel, C.; Collini, R.; Meckley, T.

    2017-12-01

    For over a decade, increased attention on coastal resilience and adaptation to sea level rise has resulted in a proliferation of predictive models and tools. This proliferation has enhanced our understanding of our vulnerability to sea level rise, but has also led to stakeholder fatigue in trying to realize the value of each advancement. These models vary in type and complexity, ranging from GIS-based bathtub viewers to modeling systems that dynamically couple complex biophysical and geomorphic processes. These approaches and capabilities typically have the common purpose of using scenarios of global and regional sea level change to inform adaptation and mitigation. In addition, stakeholders are often presented with a plethora of options to address sea level rise issues from a variety of agencies, academics, and consulting firms. All of this can result in confusion, misapplication of a specific model/tool, and stakeholder feedback of "no more new science or tools, just help me understand which one to use". Concerns from stakeholders have led to the question: how do we move forward with sea level rise modeling? This presentation will provide a synthesis of the experiences and feedback derived from NOAA's Ecological Effects of Sea Level Rise (EESLR) program to discuss the future of predictive sea level rise impact modeling. EESLR is an applied research program focused on the advancement of dynamic modeling capabilities in collaboration with local and regional stakeholders. Key concerns from stakeholder engagement include questions about model uncertainty, approaches for model validation, and a lack of cross-model comparisons. Effective communication of model/tool products, capabilities, and results is paramount to address these concerns. Looking forward, the most effective predictions of sea level rise impacts on our coast will be attained through a focus on coupled modeling systems, particularly those that connect natural processes and human response.

  15. Axiomatic set theory

    CERN Document Server

    Suppes, Patrick

    1972-01-01

    This clear and well-developed approach to axiomatic set theory is geared toward upper-level undergraduates and graduate students. It examines the basic paradoxes and history of set theory and advanced topics such as relations and functions, equipollence, finite sets and cardinal numbers, rational and real numbers, and other subjects. 1960 edition.

  16. Evaluation of gravity field model EIGEN-6C4 by means of various functions of gravity potential, and by GNSS/levelling

    Directory of Open Access Journals (Sweden)

    Jan Kostelecký

    2015-06-01

    Full Text Available The combined gravity field model EIGEN-6C4 (Förste et al., 2014) is the latest combined global gravity field model of GFZ Potsdam and GRGS Toulouse. EIGEN-6C4 has been generated including the satellite gravity gradiometry data of the entire GOCE mission (November 2009 till October 2013) and is of maximum spherical degree and order 2190. In this study EIGEN-6C4 has been compared with EGM2008 to its maximum degree and order via gravity disturbances and the Tzz part of the Marussi tensor of the second derivatives of the disturbing potential. The emphasis is put on areas where GOCE data (the complete set of gradiometry measurements after reductions) in EIGEN-6C4 obviously contribute to an improvement of the gravity field description. GNSS/levelling geoid heights are an independent data source for the evaluation of gravity field models. Therefore, we use the GNSS/levelling data sets over the territories of Europe, the Czech Republic and Slovakia for the evaluation of EIGEN-6C4 w.r.t. EGM2008.

  17. Enriching the ECSI model using brand strength in the retail setting

    Directory of Open Access Journals (Sweden)

    Paraskevi Sarantidou

    2017-10-01

    Full Text Available Purpose - The purpose of this paper is to investigate the role of the retailer's brand strength as a potential predictor of loyalty. It also examines the role of customer satisfaction (CS) in the retailer's loyalty, as well as its impact on the retailer's brand strength. Design/methodology/approach - The study was conducted in the grocery context and in a market under recession using the European Customer Satisfaction Index (ECSI) model. Data were collected through a telephone survey from 2,000 participants responsible for the household grocery shopping, with a quota of 250 respondents from each of the leading grocery retailers in Greece. A formative measurement model was developed and the collected data were analyzed using partial least squares path modeling. Findings - The findings revealed that the strength of the retailer's brand and CS influence retail loyalty, and that brand strength mediates the effect of CS on loyalty. Results also suggested that the expectations and the perceptions toward the retailer's product offering are the most important drivers of CS and loyalty. Thus, the study has proved the importance of the functional store attributes for CS and loyalty in the grocery store setting. Originality/value - Research examining the suitability of the ECSI model in the grocery setting and in a market under economic crisis is scarce. This paper addresses these shortcomings by examining a customer loyalty model which incorporates the brand strength construct and investigates the role of brand strength as a potential predictor of loyalty, as well as the role of CS in brand strength and loyalty.

  18. Index-based groundwater vulnerability mapping models using hydrogeological settings: A critical evaluation

    International Nuclear Information System (INIS)

    Kumar, Prashant; Bansod, Baban K.S.; Debnath, Sanjit K.; Thakur, Praveen Kumar; Ghanshyam, C.

    2015-01-01

    Groundwater vulnerability maps are useful for decision making in land use planning and water resource management. This paper reviews the various groundwater vulnerability assessment models developed across the world. Each model has been evaluated in terms of its pros and cons and the environmental conditions of its application. The paper further discusses the validation techniques used for the vulnerability maps generated by the various models. Implicit challenges associated with the development of the groundwater vulnerability assessment models have also been identified, with scientific consideration given to the parameter relations and their selection. - Highlights: • Various index-based groundwater vulnerability assessment models have been discussed. • A comparative analysis of the models and their applicability in different hydrogeological settings has been discussed. • Research problems of the underlying vulnerability assessment models are also reported in this review paper.

  20. Random Intercept and Random Slope 2-Level Multilevel Models

    Directory of Open Access Journals (Sweden)

    Rehan Ahmad Khan

    2012-11-01

    Full Text Available Random intercept models and random intercept & random slope models carrying two levels of hierarchy in the population are presented and compared with the traditional regression approach. The impact of students' satisfaction on their grade point average (GPA) was explored with and without controlling for teachers' influence. The variation at level 1 can be controlled by introducing higher levels of hierarchy into the model. The fanning pattern of the fitted lines reflects the variation of student grades across teachers.
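
    Both specifications are easy to reproduce on simulated data with statsmodels; the grouping structure, effect sizes and variances below are invented stand-ins for the student-teacher data described above.

```python
# Hedged two-level sketch: students nested in teachers, GPA on satisfaction.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
teacher = np.repeat(np.arange(20), 15)          # 20 teachers x 15 students
u0 = rng.normal(0.0, 0.4, 20)[teacher]          # teacher-level intercept shifts
u1 = rng.normal(0.0, 0.2, 20)[teacher]          # teacher-level slope shifts
sat = rng.normal(size=teacher.size)             # student satisfaction
gpa = 2.8 + u0 + (0.3 + u1) * sat + rng.normal(0.0, 0.3, teacher.size)
df = pd.DataFrame({"gpa": gpa, "sat": sat, "teacher": teacher})

ri = smf.mixedlm("gpa ~ sat", df, groups="teacher").fit()    # random intercept
rs = smf.mixedlm("gpa ~ sat", df, groups="teacher",
                 re_formula="~sat").fit()                    # + random slope
print(ri.params)        # fixed effects and variance components
print(rs.params)        # the random slope produces the "fanning" lines
```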

  1. Comparing a goal-setting strategy with a group education method to increase physical activity level: A randomized trial

    Directory of Open Access Journals (Sweden)

    Nasrin Jiryaee

    2015-01-01

    Full Text Available Background: An intervention designed to increase physical activity should be based on the health care setting's resources and be acceptable to the subject group. This study was designed to assess and compare the effect of a goal-setting strategy with a group education method on increasing the physical activity of mothers of children aged 1 to 5. Materials and Methods: Mothers who had at least one child of 1-5 years were randomized into two groups. The effects of (1) a goal-setting strategy and (2) a group education method on increasing physical activity were assessed and compared 1 month and 3 months after the intervention. Also, the weight, height, body mass index (BMI), waist and hip circumference, and well-being were compared between the two groups before and after the intervention. Results: Physical activity level increased significantly after the intervention in the goal-setting group, and it was significantly different between the two groups after the intervention (P < 0.05). BMI, waist circumference, hip circumference, and well-being score were significantly different in the goal-setting group after the intervention. In the group education method, only the well-being score improved significantly (P < 0.05). Conclusion: Our study demonstrated the effects of using the goal-setting strategy to boost physical activity, improving the state of well-being and decreasing BMI, waist, and hip circumference.

  2. Formation of organic aerosol in the Paris region during the MEGAPOLI summer campaign: evaluation of the volatility-basis-set approach within the CHIMERE model

    Directory of Open Access Journals (Sweden)

    Q. J. Zhang

    2013-06-01

    Full Text Available Simulations with the chemistry transport model CHIMERE are compared to measurements performed during the MEGAPOLI (Megacities: Emissions, urban, regional and Global Atmospheric POLlution and climate effects, and Integrated tools for assessment and mitigation) summer campaign in the Greater Paris region in July 2009. The volatility-basis-set approach (VBS) is implemented in this model, taking into account the volatility of primary organic aerosol (POA) and the chemical aging of semi-volatile organic species. Organic aerosol is the main focus and is simulated with three different configurations, with a modified treatment of POA volatility and modified secondary organic aerosol (SOA) formation schemes. In addition, two types of emission inventories are used as model input in order to test the uncertainty related to the emissions. Predictions of basic meteorological parameters and primary and secondary pollutant concentrations are evaluated, and four pollution regimes are defined according to the air mass origin. Primary pollutants are generally overestimated, while ozone is consistent with observations. Sulfate is generally overestimated, while ammonium and nitrate levels are well simulated with the refined emission data set. As expected, the simulation with non-volatile POA and a single-step SOA formation mechanism largely overestimates POA and underestimates SOA. Simulation of organic aerosol with the VBS approach taking into account the aging of semi-volatile organic compounds (SVOC) shows the best correlation with measurements. High-concentration events observed mostly after long-range transport are well reproduced by the model. Depending on the emission inventory used, simulated POA levels are either reasonable or underestimated, while SOA levels tend to be overestimated. Several uncertainties related to the VBS scheme (POA volatility, SOA yields, the aging parameterization), to emission input data, and to simulated OH levels can be responsible for
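
    At the heart of any VBS scheme is absorptive gas-particle partitioning across volatility bins. A generic fixed-point sketch of that calculation follows; the bin saturation concentrations and loadings are illustrative choices, not CHIMERE's configuration.

```python
# Generic volatility-basis-set partitioning: iterate the standard absorptive
# fixed point for the condensed fraction in each volatility bin.
import numpy as np

c_star = np.array([1.0, 10.0, 100.0, 1000.0])   # saturation conc. per bin (ug/m3)
c_total = np.array([2.0, 4.0, 6.0, 8.0])        # total organic mass per bin

c_oa = 1.0                                       # initial guess for OA mass
for _ in range(100):
    xi = 1.0 / (1.0 + c_star / c_oa)             # condensed fraction per bin
    c_oa_new = np.sum(c_total * xi)
    if abs(c_oa_new - c_oa) < 1e-9:              # converged
        break
    c_oa = c_oa_new

print("OA mass:", round(c_oa, 3), "ug/m3; condensed fractions:", np.round(xi, 3))
```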

  3. Modelling the economic impact of three lameness causing diseases using herd and cow level evidence.

    Science.gov (United States)

    Ettema, Jehan; Østergaard, Søren; Kristensen, Anders Ringgaard

    2010-06-01

    Diseases of the cow's hoof, interdigital skin and legs are highly prevalent and of large economic impact in modern dairy farming. In order to support farmers' decisions on preventing and treating lameness and its underlying causes, decision support models can be used to predict the economic profitability of such actions. An existing approach of modelling lameness as one health disorder in a dynamic, stochastic and mechanistic simulation model has been improved in two ways. First, three underlying diseases causing lameness were modelled: digital dermatitis, interdigital hyperplasia and claw horn diseases. Second, the existing simulation model was set up in such a way that it uses hyper-distributions describing the disease risk of the three lameness-causing diseases. By combining information on herd-level risk factors with the prevalence of lameness or the prevalence of the underlying diseases among cows, marginal posterior probability distributions for disease prevalence in the specific herd are created in a Bayesian network. Random draws from these distributions are used by the simulation model to describe disease risk. Hereby field data on prevalence are used systematically and the uncertainty around herd-specific risk is represented. Besides the fact that the estimated profitability of halving disease risk depended on the hyper-distributions used, the estimates differed for herds with different levels of disease risk and reproductive efficiency.
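
    The hyper-distribution idea can be miniaturized: draw a herd-specific risk from a posterior distribution and propagate each draw through a (here trivial) herd simulation, so that uncertainty in risk carries through to the economic output. Everything below is a stand-in, not the paper's model.

```python
# Propagating posterior uncertainty in disease risk into an economic output.
import numpy as np

rng = np.random.default_rng(4)
posterior_risk = rng.beta(8, 92, size=1000)    # stand-in posterior, mean ~8%

def simulate_cost(risk, cows=150, loss_per_case=300.0):
    """Trivial one-year herd simulation: cases are binomial at the drawn risk."""
    cases = rng.binomial(cows, risk)
    return cases * loss_per_case

costs = [simulate_cost(r) for r in posterior_risk]
print(np.percentile(costs, [5, 50, 95]))       # uncertainty band on annual cost
```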

  4. Multivariate Term Structure Models with Level and Heteroskedasticity Effects

    DEFF Research Database (Denmark)

    Christiansen, Charlotte

    2005-01-01

    The paper introduces and estimates a multivariate level-GARCH model for the long rate and the term-structure spread where the conditional volatility is proportional to the γth power of the variable itself (level effects) and the conditional covariance matrix evolves according to a multivariate GARCH… and the level model. GARCH effects are more important than level effects. The results are robust to the maturity of the interest rates.
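
    Since the abstract is truncated, the exact multivariate specification is not recoverable here; a univariate special case of a level-GARCH model, written in our own notation, conveys the structure:

```latex
% Univariate level-GARCH sketch (our notation): a GARCH recursion scaled by a
% level effect; alpha = beta = 0 gives the pure level model, gamma = 0 pure GARCH.
\[
  r_t = \mu + \varepsilon_t, \qquad
  \varepsilon_t = \sqrt{h_t}\,\lvert r_{t-1}\rvert^{\gamma} z_t, \qquad
  z_t \sim \mathcal{N}(0,1),
\]
\[
  h_t = \omega + \alpha\,\varepsilon_{t-1}^{2} + \beta\,h_{t-1},
  \qquad\text{so that}\qquad
  \operatorname{Var}_{t-1}(r_t) = h_t\,\lvert r_{t-1}\rvert^{2\gamma}.
\]
```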

  5. ImSET 3.1: Impact of Sector Energy Technologies Model Description and User's Guide

    Energy Technology Data Exchange (ETDEWEB)

    Scott, Michael J.; Livingston, Olga V.; Balducci, Patrick J.; Roop, Joseph M.; Schultz, Robert W.

    2009-05-22

    This 3.1 version of the Impact of Sector Energy Technologies (ImSET) model represents the next generation of the previously built ImSET model (ImSET 2.0) that was developed in 2005 to estimate the macroeconomic impacts of energy-efficient technology in buildings. In particular, a special-purpose version of the Benchmark National Input-Output (I-O) model was designed specifically to estimate the national employment and income effects of the deployment of Office of Energy Efficiency and Renewable Energy (EERE)–developed energy-saving technologies. In comparison with the previous versions of the model, this version features the use of the U.S. Bureau of Economic Analysis 2002 national input-output table, and the central processing code has been moved from the legacy FORTRAN operating environment to modern C++ code. ImSET is also easier to use than extant macroeconomic simulation models and incorporates information developed by each of the EERE offices as part of the requirements of the Government Performance and Results Act. While it does not include the ability to model certain dynamic features of markets for labor and other factors of production featured in the more complex models, for most purposes these excluded features are not critical. The analysis is credible as long as the assumption is made that relative prices in the economy would not be substantially affected by energy efficiency investments. In most cases, the expected scale of these investments is small enough that neither labor markets nor production cost relationships should seriously affect national prices as the investments are made. The exact timing of impacts on gross product, employment, and national wage income from energy efficiency investments is not well-enough understood that much special insight can be gained from the additional dynamic sophistication of a macroeconomic simulation model. Thus, we believe that this version of ImSET is a cost-effective solution to estimating the economic
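
    The input-output core of such a model is compact: the total sectoral output needed to satisfy a final-demand change is the Leontief inverse applied to that change, and employment follows from per-output coefficients. All numbers below are invented for illustration.

```python
# Toy Leontief input-output calculation: x = (I - A)^{-1} d.
import numpy as np

A = np.array([[0.1, 0.2],        # inter-industry requirements matrix
              [0.3, 0.1]])
d = np.array([100.0, 50.0])      # final-demand change (e.g., from a program)

x = np.linalg.solve(np.eye(2) - A, d)      # total output by sector
jobs_per_output = np.array([0.01, 0.02])   # employment coefficients
print("output:", x, "employment impact:", jobs_per_output @ x)
```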

  6. Noise in restaurants: levels and mathematical model.

    Science.gov (United States)

    To, Wai Ming; Chung, Andy

    2014-01-01

    Noise affects the dining atmosphere and is an occupational hazard to restaurant service employees worldwide. This paper examines the levels of noise in dining areas during peak hours in different types of restaurants in Hong Kong SAR, China. A mathematical model that describes the noise level in a restaurant is presented. The 1-h equivalent continuous noise level (L(eq,1-h)) was measured using a Type-1 precision integral sound level meter while the occupancy density, the floor area of the dining area, and the ceiling height of each of the surveyed restaurants were recorded. It was found that the measured noise levels using L(eq,1-h) ranged from 67.6 to 79.3 dBA in Chinese restaurants, from 69.1 to 79.1 dBA in fast food restaurants, and from 66.7 to 82.6 dBA in Western restaurants. Results of the analysis of variance show that there were no significant differences between means of the measured noise levels among different types of restaurants. A stepwise multiple regression analysis was employed to determine the relationships between geometrical and operational parameters and the measured noise levels. Results of the regression analysis show that the measured noise levels depended on the levels of occupancy density only. By reconciling the measured noise levels and the mathematical model, it was found that people in restaurants increased their voice levels when the occupancy density increased. Nevertheless, the maximum measured hourly noise level indicated that the noise exposure experienced by restaurant service employees was below the regulated daily noise exposure limit of 85 dBA.

  8. Special set linear algebra and special set fuzzy linear algebra

    OpenAIRE

    Kandasamy, W. B. Vasantha; Smarandache, Florentin; Ilanthenral, K.

    2009-01-01

    The authors in this book introduce the notion of special set linear algebra and special set fuzzy linear algebra, which is an extension of the notions of set linear algebra and set fuzzy linear algebra. These concepts are best suited to applications in multi-expert models and cryptology. This book has five chapters. In chapter one the basic concepts of set linear algebra are given in order to make this book self-contained. The notion of special set linear algebra and their fuzzy analog...

  9. On a Formalization of Cantor Set Theory for Natural Models of the Physical Phenomena

    Directory of Open Access Journals (Sweden)

    Nudel'man A. S.

    2010-01-01

    Full Text Available This article presents a set theory which is an extension of ZFC. In contrast to ZFC, the new theory admits absolutely non-denumerable sets. It is feasible that a symbiosis of the proposed theory and Vdovin set theory will permit the formulation of a (presumably non-contradictory) axiomatic set theory which will represent the core of Cantor set theory in a maximally full manner as to the essence and the contents of the latter. This is possible due to the fact that the generalized principle of choice and the generalized continuum hypothesis are proved in Vdovin theory. The theory, being more complete than ZF and more natural according to Cantor, will allow the construction and study (in its framework) of only natural models of real physical phenomena.

  11. A fuzzy chance-constrained programming model with type 1 and type 2 fuzzy sets for solid waste management under uncertainty

    Science.gov (United States)

    Ma, Xiaolin; Ma, Chi; Wan, Zhifang; Wang, Kewei

    2017-06-01

    Effective management of municipal solid waste (MSW) is critical for urban planning and development. This study aims to develop an integrated type 1 and type 2 fuzzy sets chance-constrained programming (ITFCCP) model for tackling a regional MSW management problem under a fuzzy environment, where waste generation amounts are treated as type 2 fuzzy variables and the treatment capacities of facilities as type 1 fuzzy variables. This evaluation and expression of uncertainty overcomes the drawback of describing fuzzy possibility distributions in oversimplified forms. The fuzzy constraints are converted to their crisp equivalents through chance-constrained programming under the same or different confidence levels. Regional waste management of the City of Dalian, China, was used as a case study for demonstration. The solutions under various confidence levels reflect the trade-off between system economy and reliability. It is concluded that the ITFCCP model is capable of helping decision makers generate reasonable waste-allocation alternatives under uncertainty.

  12. Multi-Level Marketing as a business model

    Directory of Open Access Journals (Sweden)

    Bogdan Gregor

    2013-03-01

    Multi-Level Marketing is a very popular business model in Western countries. It is a hybrid between a method of distributing goods and a method of building a sales network, and it is one of the safest (very low risk) ways of conducting a business activity. In Poland, knowledge about how this business model functions, both among theoreticians (the literature on the subject is scanty) and among practitioners, is still insufficient. The presented paper was therefore prepared because, in the authors' opinion, it at least partly bridges the gap in the recognition of Multi-Level Marketing issues. The aim of the study was, first of all, to describe Multi-Level Marketing, to indicate the practical benefits of this business model, and to present the basic systems of calculating commission that are used in the marketing plans of companies. The discussion was based on a study of the literature and on knowledge gained in the course of free-form interviews with leaders of the sector.

  13. Analysis of the experimental data of air pollution using atmospheric dispersion modeling and rough set

    International Nuclear Information System (INIS)

    Halfa, I.K.I

    2008-01-01

    This thesis contains four chapters and a list of references. In chapter 1, we give a brief survey of atmospheric concepts and topological methods for data analysis. Section 1.1 gives a general introduction. We recall some atmospheric fundamentals in Section 1.2. Section 1.3 presents the concepts of modern topological methods for data analysis. In chapter 2, we study the properties of the atmosphere and focus on the concept of the rough set and its properties; this concept is applied to analyze the atmospheric data. Section 2.1 gives a general introduction to the rough set concept and the properties of the atmosphere. Section 2.2 focuses on the concept of the rough set, its properties, and the generalization of rough set approximations using topological spaces. In Section 2.3 we study the stability of the atmosphere at the Inshas site for all seasons using different schemes and compare these schemes using statistical and rough set methods. In Section 2.4 we introduce the mixing height of the plume for all seasons. Section 2.5 introduces seasonal surface-layer turbulence processes for the Inshas site. Section 2.6 compares the seasonal surface-layer turbulence processes at Inshas with those at other sites using rough set theory. Chapter 3 focuses on the concept of the variable precision rough set (VPRS) and its properties, using it to compare the estimated and observed concentrations of air pollution at Inshas. Section 3.1 gives a general introduction to VPRS and air pollution. Section 3.2 focuses on the concept and properties of VPRS. In Section 3.3 we introduce a method to estimate the concentration of air pollution at Inshas using a Gaussian plume model. Section 3.4 presents the experimental data. The estimated data are compared with the observed data using statistical methods in Section 3.5. In Section 3
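
    The rough-set machinery the thesis applies (lower and upper approximations over indiscernibility classes) can be sketched in a few lines of Python. The toy attribute table below is hypothetical, not the Inshas data.

        from collections import defaultdict

        def approximations(objects, attrs, target):
            """Pawlak lower/upper approximations of a target set of objects."""
            classes = defaultdict(set)  # indiscernibility classes
            for name, values in objects.items():
                classes[tuple(values[a] for a in attrs)].add(name)
            lower, upper = set(), set()
            for cls in classes.values():
                if cls <= target:
                    lower |= cls       # certainly in the concept
                if cls & target:
                    upper |= cls       # possibly in the concept
            return lower, upper

        # Hypothetical hourly stability table (not the thesis data)
        obs = {"h1": {"wind": "low", "sun": "strong"},
               "h2": {"wind": "low", "sun": "strong"},
               "h3": {"wind": "high", "sun": "weak"}}
        lower, upper = approximations(obs, ["wind", "sun"], {"h1"})
        print("lower:", lower, "upper:", upper)  # boundary = upper - lower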

  14. Aeon: Synthesizing Scheduling Algorithms from High-Level Models

    Science.gov (United States)

    Monette, Jean-Noël; Deville, Yves; van Hentenryck, Pascal

    This paper describes the Aeon system, whose aim is to synthesize scheduling algorithms from high-level models. Aeon, which is entirely written in Comet, receives as input a high-level model for a scheduling application, which is then analyzed to generate a dedicated scheduling algorithm exploiting the structure of the model. Aeon provides a variety of synthesizers for generating complete or heuristic algorithms. Moreover, synthesizers are compositional, making it possible to generate complex hybrid algorithms naturally. Preliminary experimental results indicate that this approach may be competitive with state-of-the-art search algorithms.

  15. Formalization of the General Model of the Green Economy at the Regional Level

    Directory of Open Access Journals (Sweden)

    Ivan Mikhaylovich Potravny

    2016-06-01

    The subject matter of the article is the economic and mathematical modelling of the green economy at the regional level. The purpose of the research is to substantiate and develop economic and mathematical tools for modelling economic and ecological systems at the regional level on the basis of green-economy principles. The hypothesis of the study is based on the thesis that, under conditions of resource exhaustion and depletion of natural capital, technogenic deposits as well as production and consumption waste can be considered a resource base for modern production, whose use leads to the elimination of accumulated environmental damage and the substitution of natural resources. The article considers approaches to system modelling for developing the green economy at the level of the country and its regions. The relevance of the transition to a green economy is confirmed by theoretical and practical research in the field of the cyclic development of socio-eco-economic systems. A number of formalized models and methods for solving current environmental-economic issues are proposed, including the economic valuation of accumulated environmental damage, the eco-economic assessment of the efficiency of substituting natural resources with substitute resources, and the choice of an optimal set of substitute resources taking into account financial and natural-resource constraints. Standard models of green growth are also considered. These models take into account the exhaustion of natural resources and the involvement of the resources of technogenic deposits in economic circulation through the implementation of investment projects on the elimination of accumulated environmental damage. The results of the study may be used in different regions of the country for the justification and implementation of investment projects in the framework of the Federal Target

  16. On the use of Locally Dense Basis Sets in the Calculation of EPR Hyperfine Couplings

    DEFF Research Database (Denmark)

    Hedegård, Erik D.; Sauer, Stephan P. A.; Milhøj, Birgitte O.

    2013-01-01

    The usage of locally dense basis sets in the calculation of Electron Paramagnetic Resonance (EPR) hyperfine coupling constants is investigated at the level of Density Functional Theory (DFT) for two model systems of biologically important transition metal complexes: one for the active site in the c…

  18. A model for a national low level waste program

    International Nuclear Information System (INIS)

    Blankenhorn, James A.

    2009-01-01

    A national program for the management of low level waste is essential to the success of environmental clean-up, decontamination and decommissioning, current operations and future missions. The value of a national program is recognized through procedural consistency and a shared set of resources. A national program requires a clear waste definition and an understanding of waste characteristics matched against available and proposed disposal options. A national program requires the development and implementation of standards and procedures for implementing the waste hierarchy, with a specific emphasis on waste avoidance, minimization and recycling. It requires a common set of objectives for waste characterization based on the disposal facility's waste acceptance criteria, regulatory and license requirements and performance assessments. Finally, a national waste certification program is required to ensure compliance. To facilitate and enhance the national program, a centralized generator services organization, tasked with providing technical services to the generators on behalf of the national program, is necessary. These subject matter experts are the interface between the generating sites and the disposal facility(s). They provide an invaluable service to the generating organizations through their involvement in waste planning prior to waste generation and through championing implementation of the waste hierarchy. Through their interface, national treatment and transportation services are optimized and new business opportunities are identified. This national model is based on extensive experience in the development and on-going management of a national transuranic waste program and management of the national repository, the Waste Isolation Pilot Plant. The Low Level Program at the Savannah River Site also successfully developed and implemented the waste hierarchy, waste certification and waste generator services concepts presented below. The Savannah River Site

  19. Development of a low-level radioactive waste shipper model. National Low-Level Radioactive Waste Management Program

    International Nuclear Information System (INIS)

    1983-03-01

    During 1982, Inter/Face Associates, Inc., conducted a low-level radioactive waste management survey of Nuclear Regulatory Commission (NRC) licensees in Massachusetts for the US Department of Energy's National Low-Level Waste Management Program. In the process of conducting the survey, a model was developed, based on existing NRC license classification systems, that would identify licensees who ship low-level waste for disposal. This report presents the model and documents the procedures used in developing and testing it. After the model was tested, several modifications were developed with the goal of determining the model's ability to identify waste shippers under different parameters. The report includes a discussion of the modifications

  20. USE OF ROUGH SETS AND SPECTRAL DATA FOR BUILDING PREDICTIVE MODELS OF REACTION RATE CONSTANTS

    Science.gov (United States)

    A model for predicting the log of the rate constants for alkaline hydrolysis of organic esters has been developed with the use of gas-phase mid-infrared library spectra and a rule-building software system based on the mathematical theory of rough sets. A diverse set of 41 esters ...

  1. Robust set-point regulation for ecological models with multiple management goals.

    Science.gov (United States)

    Guiver, Chris; Mueller, Markus; Hodgson, Dave; Townley, Stuart

    2016-05-01

    Population managers will often have to deal with problems of meeting multiple goals, for example, keeping at specific levels both the total population and population abundances in given stage-classes of a stratified population. In control engineering, such set-point regulation problems are commonly tackled using multi-input, multi-output proportional and integral (PI) feedback controllers. Building on our recent results for population management with single goals, we develop a PI control approach in a context of multi-objective population management. We show that robust set-point regulation is achieved by using a modified PI controller with saturation and anti-windup elements, both described in the paper, and illustrate the theory with examples. Our results apply more generally to linear control systems with positive state variables, including a class of infinite-dimensional systems, and thus have broader appeal.
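
    A minimal sketch of the paper's central idea, integral set-point control with saturation and anti-windup applied to a positive stage-structured model, is given below. The matrix, gains and bounds are invented for illustration and are not the authors' examples.

        import numpy as np

        # Invented two-stage population (juveniles, adults), declining on its own
        A = np.array([[0.2, 0.7],
                      [0.3, 0.6]])
        b = np.array([1.0, 0.0])    # input: juveniles released per step
        c = np.array([1.0, 1.0])    # output: total population
        r = 100.0                   # set-point for the total
        ki = 0.05                   # integral gain
        u_min, u_max = 0.0, 40.0    # releases non-negative and bounded

        x = np.array([10.0, 5.0])
        z = 0.0                     # integrator state
        for _ in range(300):
            e = r - c @ x           # tracking error
            u_raw = ki * z
            u = float(np.clip(u_raw, u_min, u_max))  # actuator saturation
            # anti-windup: stop integrating while pushing into saturation
            if not ((u_raw > u_max and e > 0) or (u_raw < u_min and e < 0)):
                z += e
            x = A @ x + b * u
        print(f"total population after 300 steps: {c @ x:.1f}")  # near r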

  2. An export coefficient based inexact fuzzy bi-level multi-objective programming model for the management of agricultural nonpoint source pollution under uncertainty

    Science.gov (United States)

    Cai, Yanpeng; Rong, Qiangqiang; Yang, Zhifeng; Yue, Wencong; Tan, Qian

    2018-02-01

    In this research, an export coefficient based inexact fuzzy bi-level multi-objective programming (EC-IFBLMOP) model was developed by integrating an export coefficient model (ECM), interval parameter programming (IPP) and fuzzy parameter programming (FPP) within a bi-level multi-objective programming framework. The proposed EC-IFBLMOP model can effectively deal with multiple uncertainties expressed as discrete intervals and fuzzy membership functions. Also, the complexities in agricultural systems, such as the cooperation and gaming relationship between decision makers at different levels, can be fully considered in the model. The developed model was then applied to identify the optimal land-use patterns and BMP implementation levels for agricultural nonpoint source (NPS) pollution management in a subcatchment in the upper stream watershed of the Miyun Reservoir in north China. The model yielded the desired optimal land-use patterns and implementation levels of best management practices (BMPs), representing the gaming result between the upper- and lower-level decision makers when the allowable discharge amounts of NPS pollutants were limited. Moreover, results corresponding to different decision scenarios provide a set of decision alternatives for the upper- and lower-level decision makers to identify the most appropriate management strategy. The model has good applicability and can be effectively utilized for agricultural NPS pollution management.

  3. A Decomposition Model for HPLC-DAD Data Set and Its Solution by Particle Swarm Optimization

    Directory of Open Access Journals (Sweden)

    Lizhi Cui

    2014-01-01

    This paper proposes a separation method, based on the model of Generalized Reference Curve Measurement and the algorithm of Particle Swarm Optimization (GRCM-PSO), for High Performance Liquid Chromatography with Diode Array Detection (HPLC-DAD) data sets. Firstly, initial parameters are generated to construct reference curves for the chromatogram peaks of the compounds based on its physical principle. Then, a Generalized Reference Curve Measurement (GRCM) model is designed to transform these parameters to scalar values, which indicate the fitness for all parameters. Thirdly, rough solutions are found by searching an individual target for every parameter, and reinitialization only around these rough solutions is executed. Then, the Particle Swarm Optimization (PSO) algorithm is adopted to obtain the optimal parameters by minimizing the fitness of these new parameters given by the GRCM model. Finally, spectra for the compounds are estimated based on the optimal parameters and the HPLC-DAD data set. Through simulations and experiments, the following conclusions are drawn: (1) the GRCM-PSO method can separate the chromatogram peaks and spectra from the HPLC-DAD data set without knowing the number of compounds in advance, even when severe overlap and white noise exist; (2) the GRCM-PSO method is able to handle real HPLC-DAD data sets.
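
    The PSO step of the method can be illustrated independently of the GRCM fitness. The sketch below runs a standard global-best PSO to fit two overlapping Gaussian peaks to a synthetic chromatogram; all peak parameters and swarm settings are invented.

        import numpy as np

        rng = np.random.default_rng(0)
        t = np.linspace(0.0, 10.0, 200)
        # Synthetic chromatogram: two overlapping Gaussian peaks plus noise
        signal = (np.exp(-(t - 4.0) ** 2 / 0.5)
                  + 0.6 * np.exp(-(t - 5.2) ** 2 / 0.8)
                  + rng.normal(0.0, 0.01, t.size))

        def fitness(p):  # sum of squared residuals of a two-peak model
            a1, m1, s1, a2, m2, s2 = p
            model = a1 * np.exp(-(t - m1) ** 2 / s1) + a2 * np.exp(-(t - m2) ** 2 / s2)
            return float(np.sum((model - signal) ** 2))

        n, dim = 30, 6
        lo = np.array([0.1, 0.0, 0.1, 0.1, 0.0, 0.1])
        hi = np.array([2.0, 10.0, 2.0, 2.0, 10.0, 2.0])
        pos = rng.uniform(lo, hi, (n, dim))
        vel = np.zeros((n, dim))
        pbest, pbest_f = pos.copy(), np.array([fitness(p) for p in pos])
        gbest = pbest[pbest_f.argmin()].copy()
        for _ in range(200):  # standard global-best PSO update
            r1, r2 = rng.random((2, n, dim))
            vel = 0.72 * vel + 1.49 * r1 * (pbest - pos) + 1.49 * r2 * (gbest - pos)
            pos = np.clip(pos + vel, lo, hi)
            f = np.array([fitness(p) for p in pos])
            better = f < pbest_f
            pbest[better], pbest_f[better] = pos[better], f[better]
            gbest = pbest[pbest_f.argmin()].copy()
        print("best parameters:", gbest.round(2), "residual:", round(pbest_f.min(), 4))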

  4. Elevated gamma glutamyl transferase levels are associated with the location of acute pulmonary embolism. Cross-sectional evaluation in hospital setting

    Directory of Open Access Journals (Sweden)

    Ozge Korkmaz

    CONTEXT AND OBJECTIVE: The location of an embolism is associated with clinical findings and disease severity in cases of acute pulmonary embolism. The level of gamma-glutamyl transferase increases under oxidative stress-related conditions. In this study, we investigated whether gamma-glutamyl transferase levels could predict the location of pulmonary embolism. DESIGN AND SETTING: Hospital-based cross-sectional study at Cumhuriyet University, Sivas, Turkey. METHODS: 120 patients who were diagnosed with acute pulmonary embolism through computed tomography-assisted pulmonary angiography were evaluated. They were divided into two main groups (proximally and distally located), and subsequently into subgroups according to thrombus localization as follows: first group (thrombus in main pulmonary artery; n = 9); second group (thrombus in main pulmonary artery branches; n = 71); third group (thrombus in pulmonary artery segmental branches; n = 34); and fourth group (thrombus in pulmonary artery subsegmental branches; n = 8). RESULTS: Gamma-glutamyl transferase levels on admission, heart rate, oxygen saturation, right ventricular dilatation/hypokinesia, pulmonary artery systolic pressure and cardiopulmonary resuscitation requirement showed prognostic significance in univariate analysis. The multivariate logistic regression model showed that gamma-glutamyl transferase level on admission (odds ratio, OR = 1.044; 95% confidence interval, CI: 1.011-1.079; P = 0.009) and pulmonary artery systolic pressure (OR = 1.063; 95% CI: 1.005-1.124; P = 0.033) remained independently associated with proximally localized thrombus in the pulmonary artery. CONCLUSIONS: The findings revealed a significant association between increased embolism load in the pulmonary artery and increased serum gamma-glutamyl transferase levels.

  5. Exhaustively characterizing feasible logic models of a signaling network using Answer Set Programming.

    Science.gov (United States)

    Guziolowski, Carito; Videla, Santiago; Eduati, Federica; Thiele, Sven; Cokelaer, Thomas; Siegel, Anne; Saez-Rodriguez, Julio

    2013-09-15

    Logic modeling is a useful tool for studying signal transduction across multiple pathways. Logic models can be generated by training a network containing prior knowledge to phospho-proteomics data. The training can be performed using stochastic optimization procedures, but these are unable to guarantee a global optimum or to report the complete family of feasible models. This, however, is essential for providing precise insight into the mechanisms underlying signal transduction and for generating reliable predictions. We propose the use of Answer Set Programming to exhaustively explore the space of feasible logic models. Toward this end, we have developed caspo, an open-source Python package that provides a powerful platform to learn and characterize logic models by leveraging the rich modeling language and solving technologies of Answer Set Programming. We illustrate the usefulness of caspo by revisiting a model of pro-growth and inflammatory pathways in liver cells. We show that, if experimental error is taken into account, there are thousands (11,700) of models compatible with the data. Despite the large number, we can extract structural features from the models, such as links that are always (or never) present or modules that appear in a mutually exclusive fashion. To further characterize this family of models, we investigate their input-output behavior. We find 91 behaviors across the 11,700 models and suggest new experiments to discriminate among them. Our results underscore the importance of characterizing the family of feasible models in a global and exhaustive manner, with important implications for experimental design. caspo is freely available for download (license GPLv3) and as a web service at http://caspo.genouest.org/.
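
    caspo itself builds on Answer Set Programming; a brute-force Python stand-in can still convey the idea of exhaustively enumerating every logic model compatible with the data. The toy network and experiments below are hypothetical.

        # Hypothetical question: which non-empty subset of inputs (A, B, C),
        # combined with AND or OR, reproduces every experiment? Keeping all
        # answers mirrors caspo's exhaustive "family of feasible models".
        experiments = [((0, 0, 0), 0), ((1, 0, 0), 1), ((0, 1, 0), 1), ((1, 1, 1), 1)]

        def evaluate(inputs, subset, gate):
            vals = [inputs[i] for i in subset]
            return int(all(vals)) if gate == "AND" else int(any(vals))

        feasible = []
        for mask in range(1, 8):  # non-empty subsets of three inputs
            subset = [i for i in range(3) if mask >> i & 1]
            for gate in ("AND", "OR"):
                if all(evaluate(x, subset, gate) == y for x, y in experiments):
                    feasible.append((subset, gate))
        print(feasible)  # two OR-models survive -> suggests a new experiment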

  6. District health manager and mid-level provider perceptions of practice environments in acute obstetric settings in Tanzania: a mixed-method study.

    Science.gov (United States)

    Ng'ang'a, Njoki; Byrne, Mary Woods; Kruk, Margaret E; Shemdoe, Aloisia; de Pinho, Helen

    2016-08-08

    In sub-Saharan Africa, the capacity of human resources for health (HRH) managers to create positive practice environments that enable motivated, productive, and high-performing HRH is weak. We implemented a unique approach to examining HRH management practices by comparing perspectives offered by mid-level providers (MLPs) of emergency obstetric care (EmOC) in Tanzania to those presented by local health authorities, known as council health management teams (CHMTs). This study was guided by the basic strategic human resources management (SHRM) component model. A convergent mixed-method design was utilized to assess qualitative and quantitative data from the Health Systems Strengthening for Equity: The Power and Potential of Mid-Level Providers project. Survey data was obtained from 837 mid-level providers, 83 of whom participated in a critical incident interview whose aim was to elicit negative events in the practice environment that induced intention to leave their job. HRH management practices were assessed quantitatively in 48 districts with 37 members of CHMTs participating in semi-structured interviews. The eight human resources management practices enumerated in the basic SHRM component model were implemented unevenly. On the one hand, members of CHMTs and mid-level providers agreed that there were severe shortages of health workers, deficient salaries, and an overwhelming workload. On the other hand, members of CHMTs and mid-level providers differed in their perspectives on rewards and allocation of opportunities for in-service training. Although written standards of performance and supervision requirements were available in most districts, they did not reflect actual duties. Members of CHMTs reported high levels of autonomy in key HRH management practices, but mid-level providers disputed the degree to which the real situation on the ground was factored into job-related decision-making by CHMTs. The incongruence in perspectives offered by members of CHMTs

  7. Musculoskeletal Simulation Model Generation from MRI Data Sets and Motion Capture Data

    Science.gov (United States)

    Schmid, Jérôme; Sandholm, Anders; Chung, François; Thalmann, Daniel; Delingette, Hervé; Magnenat-Thalmann, Nadia

    Today computer models and computer simulations of the musculoskeletal system are widely used to study the mechanisms behind human gait and its disorders. The common way of creating musculoskeletal models is to use a generic musculoskeletal model based on data derived from anatomical and biomechanical studies of cadaverous specimens. To adapt this generic model to a specific subject, the usual approach is to scale it. This scaling has been reported to introduce several errors because it does not always account for subject-specific anatomical differences. As a result, a novel semi-automatic workflow is proposed that creates subject-specific musculoskeletal models from magnetic resonance imaging (MRI) data sets and motion capture data. Based on subject-specific medical data and a model-based automatic segmentation approach, an accurate modeling of the anatomy can be produced while avoiding the scaling operation. This anatomical model coupled with motion capture data, joint kinematics information, and muscle-tendon actuators is finally used to create a subject-specific musculoskeletal model.

  8. Algorithms for detecting and analysing autocatalytic sets.

    Science.gov (United States)

    Hordijk, Wim; Smith, Joshua I; Steel, Mike

    2015-01-01

    Autocatalytic sets are considered to be fundamental to the origin of life. Prior theoretical and computational work on the existence and properties of these sets has relied on a fast algorithm for detecting self-sustaining autocatalytic sets in chemical reaction systems. Here, we introduce and apply a modified version and several extensions of the basic algorithm: (i) a modification aimed at reducing the number of calls to the computationally most expensive part of the algorithm, (ii) the application of a previously introduced extension of the basic algorithm to sample the smallest possible autocatalytic sets within a reaction network, together with a statistical test which provides a probable lower bound on the number of such smallest sets, (iii) the introduction and application of another extension of the basic algorithm to detect autocatalytic sets in a reaction system where molecules can also inhibit (as well as catalyse) reactions, and (iv) a further, more abstract, extension of the theory behind searching for autocatalytic sets. The modified algorithm outperforms the original one in the number of calls to the computationally most expensive procedure, which in some cases also leads to a significant improvement in overall running time; our statistical test provides strong support for the existence of very large numbers (even millions) of minimal autocatalytic sets in a well-studied polymer model, where these minimal sets share about half of their reactions on average; and "uninhibited" autocatalytic sets can be found in reaction systems that allow inhibition, but their number and sizes depend on the level of inhibition relative to the level of catalysis. Improvements in the overall running time when searching for autocatalytic sets can potentially be obtained by using a modified version of the algorithm, and the existence of large numbers of minimal autocatalytic sets can have important consequences for the possible evolvability of
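
    The core reduction behind detecting self-sustaining (RAF) sets is a fixed-point iteration: compute the molecules reachable from the food set, then discard reactions that are unsupported or uncatalysed, and repeat. A simplified Python sketch with an invented three-reaction system:

        def closure(food, reactions):
            """Molecules producible from the food set, ignoring catalysis."""
            mols = set(food)
            changed = True
            while changed:
                changed = False
                for reactants, products, _ in reactions:
                    if set(reactants) <= mols and not set(products) <= mols:
                        mols |= set(products)
                        changed = True
            return mols

        def max_raf(food, reactions):
            """Drop unsupported or uncatalysed reactions until stable."""
            current = list(reactions)
            while True:
                mols = closure(food, current)
                kept = [rx for rx in current
                        if set(rx[0]) <= mols               # reactants reachable
                        and any(c in mols for c in rx[2])]  # catalyst reachable
                if len(kept) == len(current):
                    return kept
                current = kept

        # Invented system: (reactants, products, catalysts)
        reactions = [(("f1", "f2"), ("p1",), ("p2",)),
                     (("f1", "p1"), ("p2",), ("p1",)),
                     (("f2", "x"), ("p3",), ("p1",))]  # needs unreachable 'x'
        print(max_raf({"f1", "f2"}, reactions))  # first two reactions survive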

  9. Model metadata report for the Somerset Levels 3D geological model

    OpenAIRE

    Gow, H.; Cripps, C.; Thorpe, S.; Horabin, C.; Lee, J.R.

    2014-01-01

    This report summarises the data, information and methodology used in a 3D geological model of the Somerset Levels. The model was constructed using the GSI3D software package and comprises superficial deposits at 1:50,000 scale and lower resolution bedrock units.

  10. Integer Set Compression and Statistical Modeling

    DEFF Research Database (Denmark)

    Larsson, N. Jesper

    2014-01-01

    Compression of integer sets and sequences has been extensively studied for settings where elements follow a uniform probability distribution. In addition, methods exist that exploit clustering of elements in order to achieve higher compression performance. In this work, we address the case where enumeration of elements may be arbitrary or random, but where statistics are kept in order to estimate probabilities of elements. We present a recursive subset-size encoding method that is able to benefit from statistics, and explore the effects of permuting the enumeration order based on element probabilities…
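
    The paper's recursive subset-size coder is not reproduced here; as a stand-in, the classical binary interpolative idea below shows how recursively splitting a sorted set yields shrinking ranges and hence shrinking code lengths. The cost returned is idealized (fractional bits, set size assumed known).

        from math import log2

        def code_cost(elems, lo, hi):
            """Idealized bit cost of a sorted set inside [lo, hi]."""
            n = len(elems)
            if n == 0 or n == hi - lo + 1:   # empty or saturated: free
                return 0.0
            mid = n // 2
            x = elems[mid]
            # The middle element is confined to [lo + mid, hi - (n - 1 - mid)]
            span = (hi - (n - 1 - mid)) - (lo + mid) + 1
            return (log2(span)
                    + code_cost(elems[:mid], lo, x - 1)
                    + code_cost(elems[mid + 1:], x + 1, hi))

        print(round(code_cost([2, 3, 5, 8, 13], 0, 15), 2))  # bits, size known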

  11. RESPONSIVE URBAN MODELS BY PROCESSING SETS OF HETEROGENEOUS DATA

    Directory of Open Access Journals (Sweden)

    M. Calvano

    2018-05-01

    This paper presents some steps in an experiment aimed at describing urban spaces in the aftermath of the series of earthquakes that affected a vast area of central Italy starting on 24 August 2016. More specifically, these spaces pertain to historical centres of limited size and to case studies that can be called “problematic” (due to complex morphological and settlement conditions, because they are difficult to access, or because they have been affected by calamitous events, etc.). The main objectives were to verify the use of sets of heterogeneous data that are already largely available, to define a workflow, and to develop procedures that allow some of the steps to be automated as much as possible. The most general goal was to use the experimentation to define a methodology for approaching the problem, aimed at developing responsive descriptive models of urban space, that is, morphological and computer-based models capable of being modified in relation to a constantly updated flow of input data.

  12. A min cut-set-wise truncation procedure for importance measures computation in probabilistic safety assessment

    Energy Technology Data Exchange (ETDEWEB)

    Duflot, Nicolas [Universite de technologie de Troyes, Institut Charles Delaunay/LM2S, FRE CNRS 2848, 12, rue Marie Curie, BP2060, F-10010 Troyes cedex (France)], E-mail: nicolas.duflot@areva.com; Berenguer, Christophe [Universite de technologie de Troyes, Institut Charles Delaunay/LM2S, FRE CNRS 2848, 12, rue Marie Curie, BP2060, F-10010 Troyes cedex (France)], E-mail: christophe.berenguer@utt.fr; Dieulle, Laurence [Universite de technologie de Troyes, Institut Charles Delaunay/LM2S, FRE CNRS 2848, 12, rue Marie Curie, BP2060, F-10010 Troyes cedex (France)], E-mail: laurence.dieulle@utt.fr; Vasseur, Dominique [EPSNA Group (Nuclear PSA and Application), EDF Research and Development, 1, avenue du Gal de Gaulle, 92141 Clamart cedex (France)], E-mail: dominique.vasseur@edf.fr

    2009-11-15

    A truncation process aims to determine, among the set of minimal cut-sets (MCS) produced by a probabilistic safety assessment (PSA) model, which of them are significant. Several truncation processes have been proposed for evaluating the probability of core damage while ensuring a fixed accuracy level. However, the evaluation of new risk indicators such as importance measures requires the truncation process to be re-examined in order to ensure that the produced estimates will be accurate enough. In this paper a new truncation process is developed that permits estimating, from a single set of MCS, the importance measure of any basic event with the desired accuracy level. The main contribution of this new method is an MCS-wise truncation criterion involving two thresholds: an absolute threshold in addition to a new relative threshold concerning the potential probability of the MCS of interest. The method has been tested on a complete level 1 PSA model of a 900 MWe NPP developed by Electricite de France (EDF), and the results presented in this paper indicate that, to reach the same accuracy level, the proposed method produces a set of MCS whose size is significantly reduced.
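
    The two-threshold idea is easy to state in code: keep a minimal cut-set only if its probability exceeds both an absolute floor and a fraction of the largest cut-set probability, then compute importance measures from the kept set. The probabilities, thresholds and Fussell-Vesely choice below are illustrative, not EDF's.

        # Invented basic-event probabilities and minimal cut-sets (MCS)
        p = {"A": 1e-3, "B": 5e-4, "C": 2e-3, "D": 1e-5}
        mcs = [frozenset(s) for s in ({"A", "B"}, {"C"}, {"A", "D"}, {"B", "C", "D"})]

        def prob(cs):  # rare-event approximation for one cut-set
            out = 1.0
            for e in cs:
                out *= p[e]
            return out

        abs_thr = 1e-12         # absolute threshold
        rel_thr = 1e-4          # relative to the most probable cut-set
        probs = {cs: prob(cs) for cs in mcs}
        pmax = max(probs.values())
        kept = [cs for cs in mcs
                if probs[cs] >= abs_thr and probs[cs] >= rel_thr * pmax]

        top = sum(probs[cs] for cs in kept)  # top-event probability estimate
        for e in p:  # Fussell-Vesely importance from the truncated set
            fv = sum(probs[cs] for cs in kept if e in cs) / top
            print(e, f"{fv:.5f}")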

  14. Mathematical model comparing of the multi-level economics systems

    Science.gov (United States)

    Brykalov, S. M.; Kryanev, A. V.

    2017-12-01

    A mathematical model (scheme) for the multi-level comparison of economic systems, each characterized by a system of indices, is worked out. In this model, indicators of peer review and forecasting of the economic system under consideration can be used. The model can take into account uncertainty in the estimated values of the parameters or in expert estimates. It uses a multi-criteria approach based on Pareto solutions.

  15. Development of a working set of waste package performance criteria for deepsea disposal of low-level radioactive waste. Final report

    International Nuclear Information System (INIS)

    Columbo, P.; Fuhrmann, M.; Neilson, R.M. Jr; Sailor, V.L.

    1982-11-01

    The United States ocean dumping regulations developed pursuant to PL92-532, the Marine Protection, Research, and Sanctuaries Act of 1972, as amended, provide for a general policy of isolation and containment of low-level radioactive waste after disposal into the ocean. In order to determine whether any particular waste packaging system is adequate to meet this general requirement, it is necessary to establish a set of performance criteria against which to evaluate a particular packaging system. These performance criteria must present requirements for the behavior of the waste in combination with its immobilization agent and outer container in a deepsea environment. This report presents a working set of waste package performance criteria, and includes a glossary of terms, characteristics of low-level radioactive waste, radioisotopes of importance in low-level radioactive waste, and a summary of domestic and international regulations which control the ocean disposal of these wastes.

  16. Level Set-Based Topology Optimization for the Design of an Electromagnetic Cloak With Ferrite Material

    DEFF Research Database (Denmark)

    Otomori, Masaki; Yamada, Takayuki; Andkjær, Jacob Anders

    2013-01-01

    This paper presents a structural optimization method for the design of an electromagnetic cloak made of ferrite material. Ferrite materials exhibit a frequency-dependent degree of permeability, due to a magnetic resonance phenomenon that can be altered by changing the magnitude of an externally… A level set-based topology optimization method incorporating a fictitious interface energy is used to find optimized configurations of the ferrite material. The numerical results demonstrate that the optimization successfully found an appropriate ferrite configuration that functions as an electromagnetic…

  17. A Model for Teaching Rational Behavior Therapy in a Public School Setting.

    Science.gov (United States)

    Patton, Patricia L.

    A training model for the use of rational behavior therapy (RBT) with emotionally disturbed adolescents in a school setting is presented, including a structured, didactic format consisting of five basic RBT training techniques. The training sessions, lasting 10 weeks each, are described. Also presented is the organization for the actual classroom…

  18. A web-based study of the relationship of duration of insulin pump infusion set use and fasting blood glucose level in adults with type 1 diabetes.

    Science.gov (United States)

    Sampson Perrin, Alysa J; Guzzetta, Russell C; Miller, Kellee M; Foster, Nicole C; Lee, Anna; Lee, Joyce M; Block, Jennifer M; Beck, Roy W

    2015-05-01

    To evaluate the impact of infusion set use duration on glycemic control, we conducted an Internet-based study using the T1D Exchange's online patient community, Glu (myGlu.org). For 14 days, 243 electronically consented adults with type 1 diabetes (T1D) entered online that day's fasting blood glucose (FBG) level, the prior day's total daily insulin (TDI) dose, and whether the infusion set was changed. Mean duration of infusion set use was 3.0 days. Mean FBG level was higher with each successive day of infusion set use, increasing from 126 mg/dL on Day 1 to 133 mg/dL on Day 3 to 147 mg/dL on Day 5 (P<0.001). TDI dose did not vary with increased duration of infusion set use. Internet-based data collection was used to rapidly conduct the study at low cost. The results indicate that FBG levels increase with each additional day of insulin pump infusion set use.

  19. Level-set reconstruction algorithm for ultrafast limited-angle X-ray computed tomography of two-phase flows.

    Science.gov (United States)

    Bieberle, M; Hampel, U

    2015-06-13

    Tomographic image reconstruction is based on recovering an object distribution from its projections, which have been acquired from all angular views around the object. If the angular range is limited to less than 180° of parallel projections, typical reconstruction artefacts arise when standard algorithms are used. To compensate for this, specialized algorithms using a priori information about the object need to be applied. The application behind this work is ultrafast limited-angle X-ray computed tomography of two-phase flows. Here, only a binary distribution of the two phases needs to be reconstructed, which reduces the complexity of the inverse problem. To solve it, a new reconstruction algorithm (LSR) based on the level-set method is proposed. It includes one force-function term accounting for matching the projection data and one incorporating a curvature-dependent smoothing of the phase boundary. The algorithm has been validated using simulated as well as measured projections of known structures, and its performance has been compared to the algebraic reconstruction technique and a binary derivative of it. The validation as well as the application of the level-set reconstruction to a dynamic two-phase flow demonstrated its applicability and its advantages over other reconstruction algorithms.
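
    A schematic of the level-set update, one force term pulling the binary phase map toward the projection data and one curvature term smoothing the boundary, is sketched below on a toy two-view, axis-aligned geometry (far simpler than the paper's limited-angle setup); step sizes and weights are invented.

        import numpy as np

        def curvature(phi):  # divergence of the normalized gradient
            dy, dx = np.gradient(phi)
            mag = np.sqrt(dx ** 2 + dy ** 2) + 1e-12
            return np.gradient(dx / mag, axis=1) + np.gradient(dy / mag, axis=0)

        n = 64
        yy, xx = np.mgrid[:n, :n]
        truth = ((xx - 40) ** 2 + (yy - 30) ** 2 < 100).astype(float)  # a disc
        proj = [truth.sum(axis=0), truth.sum(axis=1)]  # two projections only

        # Initial guess: signed distance to a large centred circle
        phi = np.sqrt((xx - n / 2) ** 2 + (yy - n / 2) ** 2) - n / 3

        for _ in range(300):
            inside = (phi < 0).astype(float)
            # Data force: backprojected residuals of both projections
            residual = ((proj[0] - inside.sum(axis=0))[None, :]
                        + (proj[1] - inside.sum(axis=1))[:, None])
            dy, dx = np.gradient(phi)
            grad_mag = np.sqrt(dx ** 2 + dy ** 2)
            phi += 0.05 * (0.5 * curvature(phi) * grad_mag - 0.1 * residual)

        mismatch = np.abs((phi < 0).astype(float) - truth).sum()
        print(f"mismatched pixels: {mismatch:.0f}")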

  20. The effects of climate downscaling technique and observational data set on modeled ecological responses

    Science.gov (United States)

    Afshin Pourmokhtarian; Charles T. Driscoll; John L. Campbell; Katharine Hayhoe; Anne M. K. Stoner

    2016-01-01

    Assessments of future climate change impacts on ecosystems typically rely on multiple climate model projections, but often utilize only one downscaling approach trained on one set of observations. Here, we explore the extent to which modeled biogeochemical responses to changing climate are affected by the selection of the climate downscaling method and training...

  1. Improving satellite-based PM2.5 estimates in China using Gaussian processes modeling in a Bayesian hierarchical setting.

    Science.gov (United States)

    Yu, Wenxi; Liu, Yang; Ma, Zongwei; Bi, Jun

    2017-08-01

    Using satellite-based aerosol optical depth (AOD) measurements and statistical models to estimate ground-level PM2.5 is a promising way to fill the areas that are not covered by ground PM2.5 monitors. The statistical models used in previous studies are primarily Linear Mixed Effects (LME) and Geographically Weighted Regression (GWR) models. In this study, we developed a new regression model between PM2.5 and AOD using Gaussian processes in a Bayesian hierarchical setting. Gaussian processes model the stochastic nature of the spatial random effects, where the mean surface and the covariance function are specified. The spatial stochastic process is incorporated under the Bayesian hierarchical framework to explain the variation of PM2.5 concentrations together with other factors, such as AOD, spatial and non-spatial random effects. We evaluate the results of our model and compare them with those of other, conventional statistical models (GWR and LME) by within-sample model fitting and out-of-sample validation (cross validation, CV). The results show that our model possesses a CV result (R² = 0.81) that reflects higher accuracy than that of GWR and LME (0.74 and 0.48, respectively). Our results indicate that Gaussian process models have the potential to improve the accuracy of satellite-based PM2.5 estimates.
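
    A much-reduced stand-in for the paper's model, plain Gaussian-process regression of PM2.5 on AOD and coordinates rather than a full Bayesian hierarchical treatment, can be sketched with scikit-learn; the monitor data below are simulated.

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF, WhiteKernel

        rng = np.random.default_rng(1)
        # Simulated monitors: columns are AOD, longitude, latitude
        X = rng.uniform([0.1, 115.0, 30.0], [1.5, 122.0, 37.0], (200, 3))
        pm25 = 20.0 + 45.0 * X[:, 0] + 3.0 * np.sin(X[:, 1]) + rng.normal(0, 3, 200)

        # RBF over (AOD, lon, lat) plays the role of the smooth spatial
        # random effect; WhiteKernel absorbs measurement noise
        kernel = RBF(length_scale=[0.5, 2.0, 2.0]) + WhiteKernel(noise_level=1.0)
        gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
        gp.fit(X, pm25)
        mean, sd = gp.predict(X[:5], return_std=True)  # prediction + uncertainty
        print(mean.round(1), sd.round(2))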

  2. Mathematical modelling of Bit-Level Architecture using Reciprocal Quantum Logic

    Science.gov (United States)

    Narendran, S.; Selvakumar, J.

    2018-04-01

    High-performance computing is in high demand for both speed and energy efficiency. Reciprocal Quantum Logic (RQL) is a technology that offers high speed and zero static power dissipation. RQL uses an AC power supply as input rather than a DC input, and it has three sets of basic gates. Series of reciprocal transmission lines are placed between each gate to avoid loss of power and to achieve high speed. An analytical model of a bit-level architecture is developed using RQL. The major drawback of Reciprocal Quantum Logic is area: achieving a proper power supply requires splitters, which occupy a large area. Distributed arithmetic uses vector-vector multiplication in which one vector is constant and the other is a signed variable; each word acts as a binary number, and the words are rearranged and mixed to form a distributed system. Distributed arithmetic is widely used in convolution and in high-performance computational devices.

  3. Automatic abdominal multi-organ segmentation using deep convolutional neural network and time-implicit level sets.

    Science.gov (United States)

    Hu, Peijun; Wu, Fa; Peng, Jialin; Bao, Yuanyuan; Chen, Feng; Kong, Dexing

    2017-03-01

    Multi-organ segmentation from CT images is an essential step for computer-aided diagnosis and surgery planning. However, manual delineation of the organs by radiologists is tedious, time-consuming and poorly reproducible. Therefore, we propose a fully automatic method for the segmentation of multiple organs from three-dimensional abdominal CT images. The proposed method employs deep fully convolutional neural networks (CNNs) for organ detection and segmentation, which is further refined by a time-implicit multi-phase evolution method. Firstly, a 3D CNN is trained to automatically localize and delineate the organs of interest with a probability prediction map. The learned probability map provides both subject-specific spatial priors and initialization for subsequent fine segmentation. Then, for the refinement of the multi-organ segmentation, image intensity models, probability priors as well as a disjoint region constraint are incorporated into a unified energy functional. Finally, a novel time-implicit multi-phase level-set algorithm is utilized to efficiently optimize the proposed energy functional model. Our method has been evaluated on 140 abdominal CT scans for the segmentation of four organs (liver, spleen and both kidneys). With respect to the ground truth, average Dice overlap ratios for the liver, spleen and both kidneys are 96.0, 94.2 and 95.4%, respectively, and average symmetric surface distance is less than 1.3 mm for all the segmented organs. The computation time for a CT volume is 125 s on average. The achieved accuracy compares well to state-of-the-art methods with much higher efficiency. A fully automatic method for multi-organ segmentation from abdominal CT images was developed and evaluated. The results demonstrated its potential in clinical usage with high effectiveness, robustness and efficiency.

  4. Modelling of Signal - Level Crossing System

    Directory of Open Access Journals (Sweden)

    Daniel Novak

    2006-01-01

    The author presents an object-oriented model of a railway level-crossing system created for the purpose of functional requirements specification. Unified Modelling Language (UML), version 1.4, which enables specification, visualisation, construction and documentation of software system artefacts, was used. The main attention was paid to the analysis and design phases. The former phase resulted in the creation of use case diagrams and sequential diagrams, the latter in the creation of class/object diagrams and statechart diagrams.

  5. Modeling and low-level waste management: an interagency workshop

    Energy Technology Data Exchange (ETDEWEB)

    Little, C.A.; Stratton, L.E. (comps.)

    1980-01-01

    The interagency workshop on Modeling and Low-Level Waste Management was held on December 1-4, 1980 in Denver, Colorado. Twenty papers were presented at this meeting which consisted of three sessions. First, each agency presented its point of view concerning modeling and the need for models in low-level radioactive waste applications. Second, a larger group of more technical papers was presented by persons actively involved in model development or applications. Last of all, four workshops were held to attempt to reach a consensus among participants regarding numerous waste modeling topics. Abstracts are provided for the papers presented at this workshop.

  7. Modeling the effects of diagenesis on carbonate clumped-isotope values in deep- and shallow-water settings

    Science.gov (United States)

    Stolper, Daniel A.; Eiler, John M.; Higgins, John A.

    2018-04-01

    The measurement of multiply isotopically substituted ('clumped isotope') carbonate groups provides a way to reconstruct past mineral formation temperatures. However, dissolution-reprecipitation (i.e., recrystallization) reactions, which commonly occur during sedimentary burial, can alter a sample's clumped-isotope composition such that it partially or wholly reflects deeper burial temperatures. Here we derive a quantitative model of diagenesis to explore how diagenesis alters carbonate clumped-isotope values. We apply the model to a new dataset from deep-sea sediments taken from Ocean Drilling Project site 807 in the equatorial Pacific. This dataset is used to ground truth the model. We demonstrate that the use of the model with accompanying carbonate clumped-isotope and carbonate δ18O values provides new constraints on both the diagenetic history of deep-sea settings as well as past equatorial sea-surface temperatures. Specifically, the combination of the diagenetic model and data support previous work that indicates equatorial sea-surface temperatures were warmer in the Paleogene as compared to today. We then explore whether the model is applicable to shallow-water settings commonly preserved in the rock record. Using a previously published dataset from the Bahamas, we demonstrate that the model captures the main trends of the data as a function of burial depth and thus appears applicable to a range of depositional settings.

  8. Data, models, and views: towards integration of diverse numerical model components and data sets for scientific and public dissemination

    Science.gov (United States)

    Hofmeister, Richard; Lemmen, Carsten; Nasermoaddeli, Hassan; Klingbeil, Knut; Wirtz, Kai

    2015-04-01

    Data and models for describing coastal systems span a diversity of disciplines, communities, ecosystems, regions and techniques. Previous attempts at unifying data exchange, coupling interfaces, or metadata information have not been successful. We introduce the new Modular System for Shelves and Coasts (MOSSCO, http://www.mossco.de), a novel coupling framework that enables the integration of a diverse array of models and data from different disciplines relating to coastal research. In the MOSSCO concept, the integrating framework imposes very few restrictions on contributed data or models; in fact, no distinction is made between data and models. The few requirements are: (1) coupleability in principle, i.e. access to I/O and timing information in submodels, which has recently been referred to as the Basic Model Interface (BMI), (2) open source/open data access and licencing, and (3) communication of metadata, such as spatiotemporal information, naming conventions, and physical units. These requirements suffice to integrate different models and data sets into the MOSSCO infrastructure and subsequently build a modular integrated modeling tool that can span a diversity of processes and domains. We demonstrate how diverse coastal system constituents were integrated into this modular framework and how we deal with the diverging development of constituent data sets and models at external institutions. Finally, we show results from simulations with the fully coupled system using OGC WebServices in the WiMo geoportal (http://kofserver3.hzg.de/wimo), from where stakeholders can view the simulation results for further dissemination.
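
    The coupleability requirement is essentially what the Basic Model Interface standardizes: initialize/update/finalize plus named getters and setters for I/O and timing. A minimal Python shape of such a component (variable names and dynamics invented) might look like this:

        class SimpleBMIComponent:
            """Minimal BMI-flavoured wrapper: timing plus named I/O access."""

            def initialize(self, config=None):
                self.time, self.dt = 0.0, 3600.0            # seconds
                self.state = {"water_temperature": 10.0}    # CF-style name

            def get_current_time(self):
                return self.time

            def get_value(self, name):
                return self.state[name]

            def set_value(self, name, value):
                self.state[name] = value

            def update(self):
                # Invented dynamics: relax towards 12 degrees C
                t = self.state["water_temperature"]
                self.state["water_temperature"] = t + 0.01 * (12.0 - t)
                self.time += self.dt

            def finalize(self):
                self.state.clear()

        # A coupler needs only this interface, never the model internals
        comp = SimpleBMIComponent()
        comp.initialize()
        while comp.get_current_time() < 86400.0:
            comp.update()
        print(round(comp.get_value("water_temperature"), 2))
        comp.finalize()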

  9. Paired fuzzy sets

    DEFF Research Database (Denmark)

    Rodríguez, J. Tinguaro; Franco de los Ríos, Camilo; Gómez, Daniel

    2015-01-01

    In this paper we want to stress the relevance of paired fuzzy sets, as already proposed in previous works of the authors, as a family of fuzzy sets that offers a unifying view for different models based upon the opposition of two fuzzy sets, simply allowing the existence of different types...

  10. System level modelling with open source tools

    DEFF Research Database (Denmark)

    Jakobsen, Mikkel Koefoed; Madsen, Jan; Niaki, Seyed Hosein Attarzadeh

    …, called ForSyDe. ForSyDe is available under an open-source approach, which allows small and medium enterprises (SME) to get easy access to advanced modeling capabilities and tools. We give an introduction to the design methodology through the system-level modeling of a simple industrial use case, and we…

  11. Delineating Facies Spatial Distribution by Integrating Ensemble Data Assimilation and Indicator Geostatistics with Level Set Transformation.

    Energy Technology Data Exchange (ETDEWEB)

    Hammond, Glenn Edward; Song, Xuehang; Ye, Ming; Dai, Zhenxue; Zachara, John; Chen, Xingyuan

    2017-03-01

    A new approach is developed to delineate the spatial distribution of discrete facies (geological units that have unique distributions of hydraulic, physical, and/or chemical properties) conditioned not only on direct data (measurements directly related to facies properties, e.g., grain size distribution obtained from borehole samples) but also on indirect data (observations indirectly related to facies distribution, e.g., hydraulic head and tracer concentration). Our method integrates for the first time ensemble data assimilation with traditional transition probability-based geostatistics. The concept of level set is introduced to build shape parameterization that allows transformation between discrete facies indicators and continuous random variables. The spatial structure of different facies is simulated by indicator models using conditioning points selected adaptively during the iterative process of data assimilation. To evaluate the new method, a two-dimensional semi-synthetic example is designed to estimate the spatial distribution and permeability of two distinct facies from transient head data induced by pumping tests. The example demonstrates that our new method adequately captures the spatial pattern of facies distribution by imposing spatial continuity through conditioning points. The new method also reproduces the overall response in hydraulic head field with better accuracy compared to data assimilation with no constraints on spatial continuity on facies.

  12. Effect of liner design, pulsator setting, and vacuum level on bovine teat tissue changes and milking characteristics as measured by ultrasonography

    Directory of Open Access Journals (Sweden)

    Gleeson David E

    2004-05-01

    Friesian-type dairy cows were milked with different machine settings to determine the effect of these settings on teat tissue reaction and on milking characteristics. Three teat-cup liner designs were used with varying upper barrel dimensions (wide-bore WB = 31.6 mm; narrow-bore NB = 21.0 mm; narrow-bore NB1 = 25.0 mm). These liners were tested with alternate and simultaneous pulsation patterns, pulsator ratios (60:40 and 67:33) and three system vacuum levels (40, 44 and 50 kPa). Teat tissue was measured using ultrasonography, before milking and directly after milking. The measurements recorded were teat canal length (TCL), teat diameter (TD), cistern diameter (CD) and teat wall thickness (TWT). Teat tissue changes were similar with a system vacuum level of either 50 kPa (mid-level) or 40 kPa (low-level). Widening the liner upper barrel bore dimension from 21.0 mm (P

  13. A book of set theory

    CERN Document Server

    Pinter, Charles C

    2014-01-01

    Suitable for upper-level undergraduates, this accessible approach to set theory poses rigorous but simple arguments. Each definition is accompanied by commentary that motivates and explains new concepts. Starting with a repetition of the familiar arguments of elementary set theory, the level of abstract thinking gradually rises for a progressive increase in complexity.A historical introduction presents a brief account of the growth of set theory, with special emphasis on problems that led to the development of the various systems of axiomatic set theory. Subsequent chapters explore classes and

  14. Clinical validation of the LKB model and parameter sets for predicting radiation-induced pneumonitis from breast cancer radiotherapy

    International Nuclear Information System (INIS)

    Tsougos, Ioannis; Mavroidis, Panayiotis; Theodorou, Kyriaki; Rajala, J; Pitkaenen, M A; Holli, K; Ojala, A T; Hyoedynmaa, S; Jaervenpaeae, Ritva; Lind, Bengt K; Kappas, Constantin

    2006-01-01

    The choice of an appropriate model and parameter set for determining the relation between the incidence of radiation pneumonitis and the dose distribution in the lung is of great importance, especially in the case of breast radiotherapy, where the observed incidence is fairly low. From our previous study based on 150 breast cancer patients, in which the fits of dose-volume models to clinical data were estimated (Tsougos et al 2005 Evaluation of dose-response models and parameters predicting radiation induced pneumonitis using clinical data from breast cancer radiotherapy Phys. Med. Biol. 50 3535-54), one could get the impression that the relative seriality model is significantly better than the LKB NTCP model. However, the estimation of the different NTCP models was based on their goodness-of-fit to clinical data using various sets of published parameters from other groups, and this fact may provisionally justify the results. Hence, we sought to investigate the LKB model further by applying different published parameter sets to the very same group of patients, in order to be able to compare the results. It was shown that, depending on the parameter set applied, the LKB model is able to predict the incidence of radiation pneumonitis with acceptable accuracy, especially when implemented on a sub-group of 120 patients receiving an equivalent uniform dose (EUD) higher than 8 Gy. In conclusion, the goodness-of-fit of a certain radiobiological model for a given clinical case is closely related to the selection of proper scoring criteria and parameter sets, as well as to the compatibility of the clinical case from which the data were derived. (letter to the editor)
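
    For reference, the LKB model itself is compact: a generalized equivalent uniform dose EUD = (sum_i v_i d_i^(1/n))^n fed through a normal CDF, NTCP = Phi((EUD - TD50)/(m TD50)). The Python sketch below uses placeholder DVH bins and placeholder-style parameters, not the parameter sets evaluated in the study.

        from math import erf, sqrt

        def lkb_ntcp(doses, volumes, n, m, td50):
            """LKB NTCP from a differential DVH (doses in Gy, volumes sum to 1)."""
            eud = sum(v * d ** (1.0 / n) for d, v in zip(doses, volumes)) ** n
            t = (eud - td50) / (m * td50)
            return 0.5 * (1.0 + erf(t / sqrt(2.0)))  # standard normal CDF

        # Placeholder lung DVH and placeholder-style parameters
        doses = [2.0, 8.0, 14.0, 20.0]
        volumes = [0.55, 0.25, 0.15, 0.05]
        print(f"NTCP = {lkb_ntcp(doses, volumes, n=0.87, m=0.18, td50=29.9):.4%}")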

  15. Set-Pi: Set Membership pi-Calculus

    DEFF Research Database (Denmark)

    Bruni, Alessandro; Mödersheim, Sebastian Alexander; Nielson, Flemming

    2015-01-01

    Communication protocols often rely on stateful mechanisms to ensure certain security properties. For example, counters and timestamps can be used to ensure authentication, or the security of communication can depend on whether a particular key is registered to a server or has been revoked. ProVerif, like other state-of-the-art tools for protocol analysis, achieves good performance by converting a formal protocol specification into a set of Horn clauses that represent a monotonically growing set of facts that a Dolev-Yao attacker can derive from the system. Since this set of facts is not state… … method with three examples: a simple authentication protocol based on counters, a key registration protocol, and a model of the Yubikey security device…
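
    The Horn-clause abstraction mentioned above can be illustrated with a toy fixpoint computation: the attacker's derivable facts only grow, so the analysis saturates to a least fixpoint. The sketch below is a deliberately simplified propositional version (no terms or unification, and none of Set-Pi's set-membership tracking); the clauses and fact names are invented for illustration.

```python
# Toy propositional Horn-clause saturation in the spirit of the
# ProVerif abstraction: the set of attacker-derivable facts only grows,
# so we can iterate to a least fixpoint. Facts are plain strings and
# the "protocol" is invented for illustration.
clauses = [
    (frozenset(), "knows(nonce_A)"),            # attacker observes nonce_A
    (frozenset(), "knows(enc(msg, k))"),        # ...and a ciphertext
    (frozenset(), "revoked(k)"),                # k has been revoked
    (frozenset({"revoked(k)"}), "knows(k)"),    # assumed: revoked keys leak
    (frozenset({"knows(k)", "knows(enc(msg, k))"}), "knows(msg)"),  # decrypt
]

facts = set()
changed = True
while changed:  # saturate: apply every clause until nothing new is derived
    changed = False
    for premises, conclusion in clauses:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print(sorted(facts))  # 'knows(msg)' is derivable, so secrecy of msg fails
```

    The monotonicity that makes this saturation terminate is exactly what the abstract points to as a limitation: a fact, once derived, can never be retracted, which is why non-monotonic state (revocation, counters) needs the set-membership extension.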

  16. System-level modeling of acetone-butanol-ethanol fermentation.

    Science.gov (United States)

    Liao, Chen; Seo, Seung-Oh; Lu, Ting

    2016-05-01

    Acetone-butanol-ethanol (ABE) fermentation is a metabolic process of clostridia that produces bio-based solvents, including butanol. It is enabled by an underlying metabolic reaction network and modulated by cellular gene regulation and environmental cues. Mathematical modeling has served as a valuable strategy for facilitating the understanding, characterization and optimization of this process. In this review, we highlight recent advances in system-level, quantitative modeling of ABE fermentation. We begin with an overview of the integrative processes underlying the fermentation. Next, we survey modeling efforts, including early simple models, models with a systematic metabolic description, and those incorporating metabolism through simple gene regulation. Particular focus is given to a recent system-level model that integrates the metabolic reactions, gene regulation and environmental cues. We conclude by discussing the remaining challenges and future directions towards a predictive understanding of ABE fermentation.
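
    As a flavor of what "system-level, quantitative modeling" means in practice, the sketch below integrates a toy two-phase kinetic model: Monod growth with acid production, followed by a crude acid-triggered switch to solvent formation. All species, rate laws and parameter values are invented for illustration and are far simpler than the reviewed models.

```python
from scipy.integrate import solve_ivp

# Toy two-phase ABE kinetics: Monod growth on glucose S with acid
# production, then a crude acid-triggered switch to solvent formation.
# All parameter values are invented for illustration.
mu_max, Ks, Yxs = 0.3, 2.0, 0.4        # 1/h, g/L, g biomass per g glucose
k_acid, k_solv, K_A = 0.5, 0.4, 3.0    # acid/solvent rates, switch constant

def rhs(t, y):
    X, S, A, B = y                     # biomass, glucose, acids, solvents
    mu = mu_max * S / (Ks + S)         # Monod growth rate
    switch = A / (K_A + A)             # solventogenesis rises with acid level
    dX = mu * X
    dS = -mu * X / Yxs
    dA = k_acid * mu * X - k_solv * switch * X
    dB = k_solv * switch * X
    return [dX, dS, dA, dB]

sol = solve_ivp(rhs, (0.0, 48.0), [0.05, 60.0, 0.0, 0.0])
print(sol.y[:, -1])  # final biomass, glucose, acids, solvents
```

    The models surveyed in the review replace the single "switch" term above with explicit gene-regulatory and environmental inputs, which is what makes them system-level rather than purely kinetic.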

  17. Multipurpose optimization models for high level waste vitrification

    International Nuclear Information System (INIS)

    Hoza, M.

    1994-08-01

    Optimal Waste Loading (OWL) models have been developed as multipurpose tools for high-level waste studies for the Tank Waste Remediation Program at Hanford. Using nonlinear programming techniques, these models maximize the waste loading of the vitrified waste and optimize the glass-former composition so that the glass produced has the appropriate properties within the melter and the resultant vitrified waste form meets the requirements for disposal. The OWL model can be used for a single waste stream or for blended streams. The models can determine optimal continuous blends or optimal discrete blends of a number of different wastes. The OWL models have been used to identify the most restrictive constraints, to evaluate prospective waste pretreatment methods, to formulate and evaluate blending strategies, and to determine the impacts of variability in the wastes. The OWL models will be used to aid in the design of frits and to maximize the waste loading of the glass for High-Level Waste (HLW) vitrification.
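
    The underlying optimization can be pictured as a small nonlinear program: maximize the waste fraction of the glass subject to composition bounds and property constraints. The sketch below is not the OWL formulation itself; the component names, the property model and all numbers are hypothetical, chosen only to show the structure of such a problem.

```python
import numpy as np
from scipy.optimize import minimize, LinearConstraint, NonlinearConstraint

# Toy waste-loading problem: choose mass fractions x = [waste, SiO2, B2O3]
# to maximize the waste fraction subject to a hypothetical nonlinear
# "glass property" constraint (a stand-in for the viscosity, conductivity
# and durability limits enforced by the real models).

def neg_waste_loading(x):
    return -x[0]  # minimize the negative => maximize waste fraction

def property_proxy(x):  # invented nonlinear property model
    waste, sio2, b2o3 = x
    return 4.0 * sio2 + 1.5 * b2o3 - 2.0 * waste + waste * sio2

constraints = [
    LinearConstraint(np.ones(3), 1.0, 1.0),           # fractions sum to 1
    NonlinearConstraint(property_proxy, 1.0, 3.0),    # keep property in range
]
bounds = [(0.0, 0.6), (0.2, 0.8), (0.0, 0.3)]         # per-component limits

res = minimize(neg_waste_loading, x0=[0.3, 0.5, 0.2],
               method="trust-constr", bounds=bounds, constraints=constraints)
print(res.x, -res.fun)  # optimal composition and achieved waste loading
```

    In this toy instance the property constraint, not the waste bound, is what limits the loading, which mirrors the abstract's use of the models to identify the most restrictive constraints.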

  18. Using of Structural Equation Modeling Techniques in Cognitive Levels Validation

    Directory of Open Access Journals (Sweden)

    Natalija Curkovic

    2012-10-01

    Full Text Available When constructing knowledge tests, cognitive level is usually one of the dimensions comprising the test specifications, with each item assigned to measure a particular level. Recently used taxonomies of cognitive levels most often represent some modification of the original Bloom's taxonomy. There are many concerns in the current literature about the existence of predefined cognitive levels. The aim of this article is to investigate whether structural equation modeling techniques can confirm the existence of different cognitive levels. For the purpose of the research, a Croatian final high-school Mathematics exam was used (N = 9626). Confirmatory factor analysis and structural regression modeling were used to test three different models. Structural equation modeling techniques did not support the existence of different cognitive levels in this case. There is more than one possible explanation for that finding. Other techniques that take into account the nonlinear behaviour of the items, as well as qualitative techniques, might be more useful for the purpose of cognitive level validation. Furthermore, it seems that cognitive levels were not efficient descriptors of the items, so improvements are needed in describing the cognitive skills measured by items.
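
    A confirmatory factor analysis of the kind described can be sketched with the third-party semopy package, assuming item scores in a flat table. The factor structure, item names and file name below are all hypothetical; real use would substitute the exam's item-score matrix and the taxonomy actually tested.

```python
import pandas as pd
import semopy  # third-party SEM package: pip install semopy

# Hypothetical CFA: one latent factor per intended cognitive level.
# Item names, factor names and the data file are invented.
desc = """
recall      =~ item1 + item2 + item3
application =~ item4 + item5 + item6
reasoning   =~ item7 + item8 + item9
"""

data = pd.read_csv("exam_item_scores.csv")  # one column per item
model = semopy.Model(desc)
model.fit(data)
print(semopy.calc_stats(model))  # fit indices: CFI, TLI, RMSEA, ...
```

    If such a multi-factor model fits no better than a single-factor alternative, the data do not support distinct cognitive levels; note also, as the abstract suggests, that linear SEM on dichotomous items can be misleading, so approaches that model the items' nonlinear behaviour may be preferable.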

  19. Evaluating climate model performance with various parameter sets using observations over the recent past

    Directory of Open Access Journals (Sweden)

    M. F. Loutre

    2011-05-01

    Full Text Available Many sources of uncertainty limit the accuracy of climate projections. Among them, we focus here on parameter uncertainty, i.e. the imperfect knowledge of the values of many physical parameters in a climate model. Therefore, we use LOVECLIM, a global three-dimensional Earth system model of intermediate complexity, and vary several parameters within a range based on the expert judgement of model developers. Nine climatic parameter sets and three carbon cycle parameter sets are selected because they yield present-day climate simulations coherent with observations and they cover a wide range of climate responses to doubled atmospheric CO2 concentration and freshwater flux perturbation in the North Atlantic. Moreover, they also lead to a large range of atmospheric CO2 concentrations in response to prescribed emissions. Consequently, we have at our disposal 27 alternative versions of LOVECLIM (each corresponding to one parameter set) that provide very different responses to some climate forcings. The 27 model versions are then used to illustrate the range of responses provided over the recent past, to compare the time evolution of climate variables over the time interval for which they are available (the last few decades up to more than one century) and to identify the outliers and the "best" versions over that particular time span. For example, between 1979 and 2005, the simulated global annual mean surface temperature increase ranges from 0.24 °C to 0.64 °C, while the simulated increase in atmospheric CO2 concentration varies between 40 and 50 ppmv. Measurements over the same period indicate an increase in global annual mean surface temperature of 0.45 °C (Brohan et al., 2006) and an increase in atmospheric CO2 concentration of 44 ppmv (Enting et al., 1994; GLOBALVIEW-CO2, 2006). Only a few parameter sets yield simulations that reproduce the observed key variables of the climate system over the last
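
    Identifying the outliers and the "best" versions amounts to scoring each parameter set's simulated changes against the observations. A minimal sketch of such a ranking is below; the simulated values are random stand-ins drawn from the ranges quoted in the abstract, since the 27 actual model trajectories are not reproduced here.

```python
import numpy as np

# Score each of the 27 model versions against two observed changes over
# 1979-2005: +0.45 degC global mean temperature and +44 ppmv CO2 (values
# quoted in the abstract). The simulated values below are random
# stand-ins; real use would take them from the 27 LOVECLIM runs.
rng = np.random.default_rng(0)
sim_dT = rng.uniform(0.24, 0.64, size=27)     # simulated warming (degC)
sim_dCO2 = rng.uniform(40.0, 50.0, size=27)   # simulated CO2 rise (ppmv)

obs_dT, obs_dCO2 = 0.45, 44.0
# Normalize each misfit by its observation so the two terms are comparable.
score = ((sim_dT - obs_dT) / obs_dT) ** 2 \
      + ((sim_dCO2 - obs_dCO2) / obs_dCO2) ** 2

order = np.argsort(score)
print("best versions:", order[:3], "worst (outliers):", order[-3:])
```

    A single scalar score like this hides trade-offs between variables, which is why the study compares full time evolutions rather than end-point changes alone.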

  20. How important is the choice of the nutrient profile model used to regulate broadcast advertising of foods to children? A comparison using a targeted data set.

    Science.gov (United States)

    Scarborough, P; Payne, C; Agu, C G; Kaur, A; Mizdrak, A; Rayner, M; Halford, J C G; Boyland, E

    2013-08-01

    The World Health Assembly recommends that children's exposure to marketing of unhealthy foods should be reduced. Nutrient profile models have been developed that define 'unhealthy' to support regulation of broadcast advertising of foods to children. The level of agreement between these models is not clear. The objective of this study was to measure the agreement between eight nutrient profile models that have been proposed for the regulation of marketing to children over (a) how many and (b) what kind of foods should be permitted to be advertised during television viewed by children. A representative data set of commercials for foods broadcast during television viewed by children in the UK was collected in 2008. The data set consisted of 11,763 commercials for 336 different products or brands. This data set was supplemented with nutrition data from company web sites, food packaging and a food composition table, and the nutrient profile models were applied. The percentage of commercials that would be permitted by the different nutrient profile models ranged from 2.1% (0.4%, 3.7%) to 47.4% (42.1%, 52.6%). Half of the pairwise comparisons between models yielded kappa statistics less than 0.2, indicating that there was little agreement between models. Policy makers considering the regulation of broadcast advertising to children should carefully consider the choice of nutrient profile model to support the regulation, as this choice will have considerable influence on the outcome of the regulation.
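
    The pairwise-agreement analysis can be reproduced in outline with Cohen's kappa over each model's permit/deny decisions. In the sketch below the decisions are random stand-ins for the real classifications of the 336 advertised products, and the model names are placeholders; kappa below about 0.2 corresponds to the "little agreement" reported above.

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score

# Pairwise Cohen's kappa over permit/deny decisions (1 = ad permitted).
# The decisions below are random stand-ins for the real classifications
# of the 336 advertised products by each nutrient profile model.
rng = np.random.default_rng(1)
decisions = {
    "model_A": rng.integers(0, 2, size=336),
    "model_B": rng.integers(0, 2, size=336),
    "model_C": rng.integers(0, 2, size=336),
}

names = list(decisions)
for i, a in enumerate(names):
    for b in names[i + 1:]:
        kappa = cohen_kappa_score(decisions[a], decisions[b])
        print(f"{a} vs {b}: kappa = {kappa:.2f}")  # < 0.2 ~ slight agreement
```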