WorldWideScience

Sample records for modeling SEM analyses

  1. Profilometric and SEM analyses of four different finishing methods

    Science.gov (United States)

    CHIODERA, G.; CERUTTI, F.; CERUTTI, A.; PUTIGNANO, A.; MANGANI, F.

    2013-01-01

    Summary Adhesion is the pivot of modern restorative dentistry. Inlays, onlays and veneers have become a valid alternative to traditional prosthetic treatments, even in the rehabilitation of extremely damaged teeth, allowing considerable saving of sound tooth tissue. Composite resins and dental adhesives are continuously investigated and improved; nevertheless, the optimization of the tooth-adhesive interface must be considered: the long-term stability of adhesion between tooth and composite material depends on the treatment of the amelo-dentinal surfaces. This study investigated the quality of the occlusal walls of a cavity prepared to receive an inlay and finished with four different systems: thin and extra-thin diamond-coated burs, a 12-blade carbide bur, and a diamond-coated tip driven by a sonic instrument. Profilometric and SEM analyses were then performed on the samples. The average roughness values recorded by the profilometer were expressed by the parameters Ra and Rz, and they correspond well with the SEM pictures. The results show a better surface quality (low roughness values) for the multi-blade burs, followed by the thin and extra-thin diamond-coated burs. The 25-micron diamond-coated tip of the sonic instrument produces the roughest surface and a considerably higher amount of smear layer than the other tested systems. PMID:23741601
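
The record above reports roughness via the parameters Ra and Rz. A minimal sketch of what those parameters measure (the profile values here are invented, not the study's data):

```python
# Hedged sketch: computing the profilometric parameters Ra and Rz from a
# sampled surface profile. The profile data are hypothetical.
def roughness_ra(profile):
    """Ra: arithmetic mean deviation of the profile from its mean line."""
    mean = sum(profile) / len(profile)
    return sum(abs(z - mean) for z in profile) / len(profile)

def roughness_rz(profile, segments=5):
    """Rz (ISO 4287 style): mean peak-to-valley height over n sampling lengths."""
    n = len(profile) // segments
    heights = [max(profile[i * n:(i + 1) * n]) - min(profile[i * n:(i + 1) * n])
               for i in range(segments)]
    return sum(heights) / segments

profile = [0.0, 1.2, -0.8, 0.5, -1.1, 0.9, -0.4, 1.0, -0.6, 0.3]
print(round(roughness_ra(profile), 3))  # 0.68
print(round(roughness_rz(profile), 3))  # 1.36
```

A rougher surface raises both numbers, which is why the sonic tip's high Ra/Rz values correspond to the coarser SEM appearance described above.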

  2. Testing inequality constrained hypotheses in SEM Models

    NARCIS (Netherlands)

    Van de Schoot, R.; Hoijtink, H.J.A.; Dekovic, M.

    2010-01-01

    Researchers often have expectations that can be expressed in the form of inequality constraints among the parameters of a structural equation model. It is currently not possible to test these so-called informative hypotheses in structural equation modeling software. We offer a solution to this problem…

  3. semPLS: Structural Equation Modeling Using Partial Least Squares

    Directory of Open Access Journals (Sweden)

    Armin Monecke

    2012-05-01

    Structural equation models (SEM) are very popular in many disciplines. The partial least squares (PLS) approach to SEM offers an alternative to covariance-based SEM that is especially suited to situations where the data are not normally distributed. PLS path modelling is referred to as a soft modelling technique with minimal demands regarding measurement scales, sample sizes and residual distributions. The semPLS package provides the capability to estimate PLS path models within the R programming environment. Different setups for the estimation of factor scores can be used. Furthermore, it contains modular methods for the computation of bootstrap confidence intervals, model parameters and several quality indices. Various plot functions help to evaluate the model. The well-known mobile phone dataset from marketing research is used to demonstrate the features of the package.
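
As a rough illustration of the percentile bootstrap that underlies such confidence intervals (semPLS itself resamples path coefficients; the statistic and data below are made up):

```python
# Hedged sketch of a percentile bootstrap confidence interval. The statistic
# here is a simple mean and the data are invented for illustration only.
import random

def bootstrap_ci(data, stat, n_boot=2000, alpha=0.05, seed=42):
    rng = random.Random(seed)
    # Resample with replacement, recompute the statistic, take percentiles.
    reps = sorted(stat([rng.choice(data) for _ in data]) for _ in range(n_boot))
    lo = reps[int(alpha / 2 * n_boot)]
    hi = reps[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

data = [2.1, 2.5, 1.9, 3.0, 2.7, 2.2, 2.8, 2.4]
mean = lambda xs: sum(xs) / len(xs)
lo, hi = bootstrap_ci(data, mean)
print(lo <= mean(data) <= hi)
```

In a PLS context, `stat` would be the path-coefficient estimator applied to a resampled dataset rather than a simple mean.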

  4. BEYOND SEM: GENERAL LATENT VARIABLE MODELING

    National Research Council Canada - National Science Library

    Muthén, Bengt O

    2002-01-01

    This article gives an overview of statistical analysis with latent variables. Using traditional structural equation modeling as a starting point, it shows how the idea of latent variables captures a wide variety of statistical concepts...

  5. metaSEM: an R package for meta-analysis using structural equation modeling.

    Science.gov (United States)

    Cheung, Mike W-L

    2014-01-01

    The metaSEM package provides functions to conduct univariate, multivariate, and three-level meta-analyses using a structural equation modeling (SEM) approach via the OpenMx package in the R statistical platform. It also implements the two-stage SEM approach to conducting fixed- and random-effects meta-analytic SEM on correlation or covariance matrices. This paper briefly outlines the theories and their implementations and provides a summary of how meta-analyses can be formulated as structural equation models. The paper closes with a discussion of several topics relevant to this SEM-based meta-analysis. Several examples are used to illustrate the procedures in the supplementary material.
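
The univariate fixed- and random-effects pooling that metaSEM generalizes can be sketched as follows; the effect sizes and sampling variances are invented, and the between-study variance uses the classical DerSimonian-Laird estimator rather than metaSEM's likelihood-based one:

```python
# Hedged sketch of univariate meta-analytic pooling (not the OpenMx-based
# implementation in metaSEM). y = effect sizes, v = sampling variances.
def fixed_effect(y, v):
    """Inverse-variance weighted pooled estimate and its variance."""
    w = [1 / vi for vi in v]
    est = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
    return est, 1 / sum(w)

def dersimonian_laird_tau2(y, v):
    """Method-of-moments estimate of the between-study variance tau^2."""
    w = [1 / vi for vi in v]
    est, _ = fixed_effect(y, v)
    q = sum(wi * (yi - est) ** 2 for wi, yi in zip(w, y))
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    return max(0.0, (q - (len(y) - 1)) / c)

def random_effects(y, v):
    """Pooled estimate with weights 1 / (v_i + tau^2)."""
    tau2 = dersimonian_laird_tau2(y, v)
    w = [1 / (vi + tau2) for vi in v]
    return sum(wi * yi for wi, yi in zip(w, y)) / sum(w)

y = [0.80, 0.10, 0.60, -0.20]   # invented study effect sizes
v = [0.04, 0.09, 0.05, 0.06]    # invented sampling variances
print(round(fixed_effect(y, v)[0], 4), round(random_effects(y, v), 4))
```

With heterogeneous effects, tau² > 0 and the random-effects estimate is pulled toward an unweighted average relative to the fixed-effect one.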

  6. Comparing SVARs and SEMs: Two models of the UK economy

    NARCIS (Netherlands)

    Jacobs, J.P.A.M.; Wallis, K.F.

    2005-01-01

    The structural vector autoregression (SVAR) and simultaneous equation macroeconometric model (SEM) styles of empirical macroeconomic modelling are compared and contrasted, with reference to two models of the UK economy, namely the long-run structural VAR model of Garratt, Lee, Pesaran and Shin and t…

  7. Fluid flow and reaction fronts: characterization of physical processes at the microscale using SEM analyses

    Science.gov (United States)

    Beaudoin, Nicolas; Koehn, Daniel; Toussaint, Renaud; Gomez-Rivas, Enrique; Bons, Paul; Chung, Peter; Martín-Martín, Juan Diego

    2014-05-01

    Fluid migration is the principal agent of mineral replacement in the upper crust, leading to dramatic changes in the porosity and permeability of rocks over several kilometers. Consequently, a better understanding of the physical parameters leading to mineral replacement is required to better understand and model fluid flow and rock reservoir properties. Large-scale dolostone bodies are among the best and most debated examples of such fluid-related mineral replacement. These formations have received a lot of attention lately, and although the genetic mechanisms and implications for fluid volume are understood, the mechanisms controlling the formation and propagation of the dolomitization reaction front remain unclear. This contribution aims to improve understanding of how this replacement front propagates over space and time. We study the sharpness of the front at hand-specimen and thin-section scale and the influence of advection versus diffusion of material on the front's development. In addition, we demonstrate how preexisting heterogeneities in the host rock affect the propagation of the reaction front. The rock is normally not homogeneous but contains grain boundaries, fractures and stylolites, and such structures are important on the scale of the front width. Using Scanning Electron Microscopy and Raman Spectroscopy we characterized the reaction front chemistry and morphology in different contexts. Specimens of dolomitization fronts collected from carbonate sequences of the southern Maestrat Basin, Spain, and the Southwestern Scottish Highlands suggest that the front is relatively sharp, with a thickness of several mm. Fluid infiltrated grain boundaries and fractures, forming a mm-scale transition zone. We study the structure of the reaction zone in detail and discuss implications for fluid diffusion-advection models and mineral replacement. In addition, we formulate a numerical model taking into account fluid flow, diffusion and advection of the mobile…
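
The competition between advection and diffusion that controls front sharpness can be sketched with a minimal 1-D finite-difference model; the velocity, diffusivity, grid and boundary conditions below are illustrative, not fitted to the dolostone data:

```python
# Hedged sketch: 1-D advection-diffusion of a reacting-fluid concentration.
# A sharp front is advected downstream and smeared by diffusion; parameters
# are invented for illustration and chosen to satisfy explicit stability.
def advect_diffuse(c, u, d, dx, dt, steps):
    for _ in range(steps):
        new = c[:]
        for i in range(1, len(c) - 1):
            adv = -u * (c[i] - c[i - 1]) / dx                  # upwind advection
            dif = d * (c[i + 1] - 2 * c[i] + c[i - 1]) / dx**2  # diffusion
            new[i] = c[i] + dt * (adv + dif)
        new[0], new[-1] = 1.0, new[-2]  # fixed inlet, zero-gradient outlet
        c = new
    return c

c0 = [1.0] + [0.0] * 99                     # fluid enters from the left
c = advect_diffuse(c0, u=1.0, d=0.01, dx=0.1, dt=0.01, steps=200)
```

Raising `d` relative to `u` widens the transition zone; in the limit of pure advection the front stays nearly as sharp as the mm-scale fronts described in the abstract.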

  8. Characterization of iodine particles with Volatilization-Humidification Tandem Differential Mobility Analyser (VH-TDMA), Raman and SEM techniques

    Directory of Open Access Journals (Sweden)

    Z. D. Ristovski

    2006-02-01

    Particles formed upon photo-oxidation of CH2I2, and particles of I2O5 and HIO3, have been studied using a Volatilisation and Humidification Tandem Differential Mobility Analyser (VH-TDMA) system. Volatilization and hygroscopic behaviour have been investigated as functions of temperature (from 25 to 400°C), humidity (RH from 80 to 98%), initial aerosol size (from 27 to 100 nm mobility diameter) and nitrogen or air as the sheath gas. The volatility behaviour of particles formed upon photo-oxidation of CH2I2 is more similar to that of HIO3 particles in filtered sheath air than in nitrogen, with the particle shrinkage occurring at 190°C and accompanied by hygroscopic growth. Despite its high solubility, HIO3 was found not to be hygroscopic at room temperature, with no significant growth displayed until the thermodenuder temperature reached 200°C or above, when the particles have transformed into I2O5. Diiodine pentoxide (I2O5) particles exhibit relatively low hygroscopic growth factors of 1.2–2 in the humidity range investigated. Scanning Electron Microscopy (SEM) of particles formed upon photo-oxidation of CH2I2 shows that their primary elemental components were iodine and oxygen in a stoichiometric ratio of approximately 1:2 with 10% error. Both Raman spectra and SEM show poor crystallinity for all the aerosols produced.

  9. SEM++: A particle model of cellular growth, signaling and migration

    Science.gov (United States)

    Milde, Florian; Tauriello, Gerardo; Haberkern, Hannah; Koumoutsakos, Petros

    2014-06-01

    We present a discrete particle method to model biological processes from the sub-cellular to the inter-cellular level. Particles interact through a parametrized force field to model cell mechanical properties, cytoskeleton remodeling, growth and proliferation as well as signaling between cells. We discuss the guiding design principles for the selection of the force field and the validation of the particle model using experimental data. The proposed method is integrated into a multiscale particle framework for the simulation of biological systems.

  10. An extensible analysable system model

    DEFF Research Database (Denmark)

    Probst, Christian W.; Hansen, Rene Rydhof

    2008-01-01

    …this does not hold for real physical systems. Approaches such as threat modelling try to target the formalisation of the real-world domain, but are still far from the rigorous techniques available in security research. Many currently available approaches to assurance of critical infrastructure security… allows for easy development of analyses for the abstracted systems. We briefly present one application of our approach, namely the analysis of systems for potential insider threats…

  11. From patterns to causal understanding: Structural equation modeling (SEM) in soil ecology

    Science.gov (United States)

    Eisenhauer, Nico; Powell, Jeff R; Grace, James B.; Bowker, Matthew A.

    2015-01-01

    In this perspectives paper we highlight a heretofore underused statistical method in soil ecological research, structural equation modeling (SEM). SEM is commonly used in the general ecological literature to develop causal understanding from observational data, but has been more slowly adopted by soil ecologists. We provide some basic information on the many advantages and possibilities associated with using SEM and provide some examples of how SEM can be used by soil ecologists to shift focus from describing patterns to developing causal understanding and inspiring new types of experimental tests. SEM is a promising tool to aid the growth of soil ecology as a discipline, particularly by supporting research that is increasingly hypothesis-driven and interdisciplinary, thus shining light into the black box of interactions belowground.
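
One of the simplest SEMs used to move from pattern to causal claim is a mediation path x → m → y, which can be fit as two regressions; the variable names, effect sizes and data below are synthetic, purely to illustrate the idea:

```python
# Hedged sketch: a minimal mediation SEM (x -> m -> y) estimated as two
# simple regressions on synthetic data. The indirect effect is the product
# of the two path coefficients.
import random

def slope(x, y):
    """OLS slope of y on x."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    num = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    den = sum((xi - mx) ** 2 for xi in x)
    return num / den

rng = random.Random(0)
x = [rng.gauss(0, 1) for _ in range(500)]
m = [0.6 * xi + rng.gauss(0, 0.5) for xi in x]   # true path a = 0.6
y = [0.5 * mi + rng.gauss(0, 0.5) for mi in m]   # true path b = 0.5
a, b = slope(x, m), slope(m, y)
print(round(a * b, 2))  # indirect effect, close to 0.6 * 0.5 = 0.30
```

In soil ecology, x, m and y might be, say, plant diversity, root biomass and microbial activity; full SEM software additionally handles latent variables and global fit testing.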

  12. Wild 2 grains characterized combining MIR/FIR/Raman micro-spectroscopy and FE-SEM/EDS analyses

    Science.gov (United States)

    Ferrari, M.; Rotundi, A.; Rietmeijer, F. J. M.; Della Corte, V.; Baratta, G. A.; Brunetto, R.; Dartois, E.; Djouadi, Z.; Merouane, S.; Borg, J.; Brucato, J. R.; Le Sergeant d'Hendecourt, L.; Mennella, V.; Palumbo, M. E.; Palumbo, P.

    We present the results of the analyses [Rotundi14] of two bulk terminal particles (TPs), C2112,7,171,0,0 (TP2) and C2112,9,171,0,0 (TP3), derived from the Jupiter-family comet 81P/Wild 2 and returned by the NASA Stardust mission [Brownlee06]. Each particle, embedded in a slab of silica aerogel, was pressed in a diamond cell. Aerogel usually causes problems when characterizing the minerals and organic materials present in embedded particles. We overcame this common issue by combining FE-SEM/EDS, IR and Raman µ-spectroscopy, three non-destructive analytical techniques, which provided bulk mineralogical and organic information on TP2 and TP3. This approach proved to be a practical solution for preliminary characterization, i.e. scanning particles for chemical and mineralogical heterogeneity. Using this type of bulk characterization prior to more detailed studies could be adopted as a standard procedure for selecting Stardust particles of interest. TP2 and TP3 are dominated by Ca-free and low-Ca, Mg-rich, Mg,Fe-olivine. The presence of melilite in both particles is supported by IR µ-spectroscopy and corroborated by FE-SEM/EDS analyses, but is not confirmed by Raman µ-spectroscopy, possibly because the amount of this mineral is too small to be detected. TP2 and TP3 show similar silicate mineral compositions, but Ni-free, low-Ni, sub-sulfur (Fe,Ni)S grains are present only in TP2. TP2 contains indigenous amorphous carbon hot spots, while no indigenous carbon was identified in TP3. These non-chondritic particles probably originated in a differentiated body. The presence of high-temperature melilite group minerals (incl. gehlenite) in TP2 and TP3 reinforces the notion that collisionally-ejected refractory debris from differentiated asteroids may be common in Jupiter-family comets. This raises the question whether similar debris and other clearly asteroidal particles could be present in Halley-type comets…

  13. Analog and numerical modeling on the propagation of seismic electromagnetic signals (SEMS)

    Science.gov (United States)

    Huang, Q.; Lin, Y.; Wang, Q.

    2010-12-01

    Study of the propagation of seismic electromagnetic signals (SEMS) plays an important role in understanding earthquake-related electromagnetic phenomena. Laboratory analog experiments based on a geographical scaling model and a waveguide model were developed to simulate the propagation of SEMS. These experimental results showed that geographical effects such as the distribution of ocean and land may lead to some aspects of the selectivity phenomenon. Analytical and numerical work based on a conductive channel model has also been presented as an explanation of the selectivity phenomenon. However, whether such conclusions hold for a more realistic 3D model deserves further investigation. In this paper, we simulate the propagation characteristics of SEMS in some typical 3D models using COMSOL Multiphysics, finite element method (FEM) software. After validation tests of the FEM software, we investigated the possible effects of the model parameters on the propagation characteristics of SEMS. We then considered a model with a conductive fault buried in a three-layered medium, with an electric dipole source located close to the center of the fault. The simulation results indicated that the amplification effect of a conductive channel, which has been adopted as a possible explanation of some SEMS observations, can be expected only at much lower frequencies. We also simulated the possible ocean effect on the propagation of SEMS. As a case study, we modeled the Greek archipelago, where numerous SEMS have been reported. The numerical results showed a decayed pattern of SEMS at frequencies lower than the cut-off frequency, and a rippled propagation pattern at frequencies higher than the cut-off frequency. These results are consistent with the previous analog experimental results. Further examples of analog and numerical simulations are investigated. The numerical simulations combined with the analog experiments may provide a possible explanation…
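
One back-of-the-envelope check on the frequency dependence discussed above is the electromagnetic skin depth, which shrinks as 1/√frequency in a conductor; the seawater conductivity below is a textbook value, not specific to the Greek archipelago model:

```python
# Hedged sketch: skin depth of an EM wave in a conductive medium,
# delta = sqrt(2 / (mu0 * sigma * omega)). Illustrative values only.
import math

MU0 = 4 * math.pi * 1e-7  # vacuum permeability (H/m)

def skin_depth(sigma, freq):
    """Skin depth in metres for conductivity sigma (S/m) and frequency (Hz)."""
    return math.sqrt(2 / (MU0 * sigma * 2 * math.pi * freq))

seawater = 4.0  # S/m, a typical ocean conductivity
for f in (0.01, 1.0, 100.0):
    print(f, round(skin_depth(seawater, f)))
```

At 1 Hz the skin depth in seawater is only a few hundred metres, which is one reason ultra-low frequencies are the relevant regime for SEMS crossing oceanic paths.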

  14. A Sandwich-Type Standard Error Estimator of SEM Models with Multivariate Time Series

    Science.gov (United States)

    Zhang, Guangjian; Chow, Sy-Miin; Ong, Anthony D.

    2011-01-01

    Structural equation models are increasingly used as a modeling tool for multivariate time series data in the social and behavioral sciences. Standard error estimators of SEM models, originally developed for independent data, require modifications to accommodate the fact that time series data are inherently dependent. In this article, we extend a…
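
The sandwich idea can be sketched for a plain regression with HC0 robust standard errors (bread = (X'X)⁻¹, meat = X'diag(e²)X); the SEM time-series version in the article is considerably more involved, and the data here are synthetic:

```python
# Hedged sketch of a sandwich (HC0 heteroscedasticity-robust) covariance
# estimator for OLS, on synthetic heteroscedastic data.
import numpy as np

rng = np.random.default_rng(0)
n = 200
x = np.column_stack([np.ones(n), rng.normal(size=n)])
beta_true = np.array([1.0, 2.0])
# Error variance depends on the regressor -> ordinary SEs would be wrong.
y = x @ beta_true + rng.normal(size=n) * (1 + np.abs(x[:, 1]))

beta = np.linalg.solve(x.T @ x, x.T @ y)      # OLS estimate
resid = y - x @ beta
bread = np.linalg.inv(x.T @ x)
meat = x.T @ (resid[:, None] ** 2 * x)        # X' diag(e^2) X
robust_cov = bread @ meat @ bread             # the "sandwich"
robust_se = np.sqrt(np.diag(robust_cov))
print(np.round(beta, 2), np.round(robust_se, 2))
```

For dependent (time-series) data the meat term must additionally account for autocorrelation, which is the modification the article develops for SEM.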

  16. Maximum Likelihood Dynamic Factor Modeling for Arbitrary "N" and "T" Using SEM

    Science.gov (United States)

    Voelkle, Manuel C.; Oud, Johan H. L.; von Oertzen, Timo; Lindenberger, Ulman

    2012-01-01

    This article has 3 objectives that build on each other. First, we demonstrate how to obtain maximum likelihood estimates for dynamic factor models (the direct autoregressive factor score model) with arbitrary "T" and "N" by means of structural equation modeling (SEM) and compare the approach to existing methods. Second, we go beyond standard time…

  17. Extended unified SEM approach for modeling event-related fMRI data.

    Science.gov (United States)

    Gates, Kathleen M; Molenaar, Peter C M; Hillary, Frank G; Slobounov, Semyon

    2011-01-15

    There has been increasing emphasis in fMRI research on examining how regions covary in a distributed neural network. Event-related designs present a unique challenge to modeling how couplings among regions change in the presence of experimental manipulations. The present paper presents the extended unified SEM (euSEM), a novel approach for acquiring effective connectivity maps with event-related data. The euSEM extends the unified SEM, which models both lagged and contemporaneous effects, by estimating the direct effects that experimental manipulations have on blood-oxygen-level-dependent activity as well as the modulating effects the manipulations have on couplings among regions. Monte Carlo simulations included in this paper support the model's ability to recover the covariance patterns used to generate the data. Next, we apply the model to empirical data to demonstrate feasibility. Finally, the results from the empirical data are compared to those found using dynamic causal modeling. The euSEM provides a flexible approach for modeling event-related data, as it may be employed in an exploratory, partially exploratory, or entirely confirmatory manner.

  18. Mathematical model of the seismic electromagnetic signals (SEMS) in non-crystalline substances

    Energy Technology Data Exchange (ETDEWEB)

    Dennis, L. C. C.; Yahya, N.; Daud, H.; Shafie, A. [Electromagnetic cluster, Universiti Teknologi Petronas, 31750 Tronoh, Perak (Malaysia)

    2012-09-26

    The mathematical model of seismic electromagnetic waves in non-crystalline substances is developed and the solutions are discussed to show the possibility of improving the electromagnetic waves, especially the electric field. The shear stress of the medium, as a fourth-order tensor, gives the equation of motion. Analytic methods are selected for the solutions, written in Hansen vector form. From the simulated SEMS, the frequency of the seismic waves has significant effects on the SEMS propagation characteristics. EM waves transform into SEMS, or energized seismic waves. The traveling distance increases as the frequency of the seismic waves increases from 100% to 1000%. SEMS with greater seismic frequency give seismic-like waves, but greater energy is embedded by the EM waves and hence the waves travel a greater distance.

  19. Morphological modelling of three-phase microstructures of anode layers using SEM images.

    Science.gov (United States)

    Abdallah, Bassam; Willot, François; Jeulin, Dominique

    2016-07-01

    A general method is proposed to model 3D microstructures representative of the three-phase anode layers used in fuel cells. The models are based on SEM images of cells with varying morphologies. The materials are first characterized using three morphological measurements: (cross-)covariances, granulometry and linear erosion, measured on segmented SEM images for each of the three phases. Second, a generic model for three-phase materials is proposed, based on two independent underlying random sets which are otherwise arbitrary. The validity of this model is verified using the cross-covariance functions of the various phases. In a third step, several types of Boolean random sets and plurigaussian models are considered for the unknown underlying random sets. Overall, good agreement is found between the SEM images and three-phase models based on plurigaussian random sets, for all morphological measurements considered in the present work: covariances, granulometry and linear erosion. The spatial distribution and shapes of the phases produced by the plurigaussian model are visually very close to the real material. Furthermore, the proposed models require no numerical optimization and are straightforward to generate using the covariance functions measured on the SEM images.
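
A minimal plurigaussian sketch: threshold two independent correlated Gaussian fields jointly to produce a three-phase image. The smoothing length and thresholds below are illustrative, not calibrated to the SEM covariances:

```python
# Hedged sketch of a plurigaussian three-phase model: two independent
# smoothed (correlated) Gaussian fields, thresholded by a simple phase rule.
import numpy as np

def smooth_field(shape, sigma, rng):
    """Correlated Gaussian field via FFT-based Gaussian filtering of noise."""
    noise = rng.normal(size=shape)
    kx = np.fft.fftfreq(shape[0])[:, None]
    ky = np.fft.fftfreq(shape[1])[None, :]
    kernel = np.exp(-2 * (np.pi * sigma) ** 2 * (kx ** 2 + ky ** 2))
    f = np.real(np.fft.ifft2(np.fft.fft2(noise) * kernel))
    return (f - f.mean()) / f.std()   # standardize to mean 0, sd 1

rng = np.random.default_rng(1)
z1 = smooth_field((128, 128), 4.0, rng)
z2 = smooth_field((128, 128), 4.0, rng)
# Phase rule: phase 0 where z1 < 0; otherwise split on z2.
phase = np.where(z1 < 0, 0, np.where(z2 < 0.3, 1, 2))
print([(phase == p).mean().round(2) for p in range(3)])
```

In a calibrated model the thresholds set the phase volume fractions and the field covariances are fitted to the covariances measured on the segmented SEM images.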

  20. Prescriptive Statements and Educational Practice: What Can Structural Equation Modeling (SEM) Offer?

    Science.gov (United States)

    Martin, Andrew J.

    2011-01-01

    Longitudinal structural equation modeling (SEM) can be a basis for making prescriptive statements about educational practice and offers advantages over "traditional" statistical techniques under the general linear model. The extent to which prescriptive statements can be made will rely on the appropriate accommodation of key elements of research design,…

  1. Subjective Values of Quality of Life Dimensions in Elderly People. A SEM Preference Model Approach

    Science.gov (United States)

    Elosua, Paula

    2011-01-01

    This article proposes a Thurstonian model in the framework of Structural Equation Modelling (SEM) to assess preferences among quality of life dimensions for the elderly. Data were gathered by a paired comparison design in a sample comprised of 323 people aged from 65 to 94 years old. Five dimensions of quality of life were evaluated: Health,…
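
The classical (non-SEM) counterpart of this Thurstonian model is Case V scaling, which can be sketched directly from a paired-comparison proportion matrix; the proportions below are invented, not the study's data:

```python
# Hedged sketch of Thurstone Case V scaling for paired-comparison data.
# p[i][j] = proportion of respondents preferring item i over item j.
from statistics import NormalDist

def thurstone_case_v(p):
    """Scale values (lowest item anchored at 0) from a proportion matrix."""
    nd = NormalDist()
    n = len(p)
    # Clamp proportions away from 0/1 so the inverse normal CDF is finite.
    z = [[nd.inv_cdf(min(max(p[i][j], 0.01), 0.99)) for j in range(n)]
         for i in range(n)]
    scale = [sum(row) / n for row in z]   # row means of the z-matrix
    base = min(scale)
    return [s - base for s in scale]

# Hypothetical proportions for three quality-of-life dimensions.
p = [[0.5, 0.7, 0.8],
     [0.3, 0.5, 0.6],
     [0.2, 0.4, 0.5]]
print([round(s, 2) for s in thurstone_case_v(p)])
```

The SEM formulation in the article estimates the same kind of latent preference scale, but within a framework that also yields standard errors and fit statistics.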

  2. Abstracting and reasoning over ship trajectories and web data with the Simple Event Model (SEM)

    NARCIS (Netherlands)

    W.R. van Hage; V. Malaisé; G.K.D. de Vries; A.Th. Schreiber; M.W. van Someren

    2012-01-01

    Bridging the gap between low-level features and semantics is a problem commonly acknowledged in the Multimedia community. Event modeling can fill this gap by representing knowledge about the data at different levels of abstraction. In this paper we present the Simple Event Model (SEM) and its application…

  3. Combining ship trajectories and semantics with the simple event model (SEM)

    NARCIS (Netherlands)

    W.R. van Hage; V. Malaisé; G. de Vries; G. Schreiber; M. van Someren

    2009-01-01

    Bridging the gap between low-level features and semantics is a problem commonly acknowledged in the Multimedia community. Event modeling can fill the gap. In this paper we present the Simple Event Model (SEM) and its application in a Maritime Safety and Security use case about Situational Awareness.

  4. A Model for Integrating Fixed-, Random-, and Mixed-Effects Meta-Analyses into Structural Equation Modeling

    Science.gov (United States)

    Cheung, Mike W.-L.

    2008-01-01

    Meta-analysis and structural equation modeling (SEM) are two important statistical methods in the behavioral, social, and medical sciences. They are generally treated as two unrelated topics in the literature. The present article proposes a model to integrate fixed-, random-, and mixed-effects meta-analyses into the SEM framework. By applying an…

  5. BIB-SEM of representative area clay structures paving towards an alternative model of porosity

    Science.gov (United States)

    Desbois, G.; Urai, J. L.; Houben, M.; Hemes, S.; Klaver, J.

    2012-04-01

    A major contribution to understanding the sealing capacity, coupled flow, capillary processes and associated deformation in clay-rich geomaterials is based on detailed investigation of the rock microstructures. However, the direct characterization of pores in a representative elementary area (REA) at below-µm-scale resolution remains challenging. To investigate the mm- to nm-scale porosity directly, SEM is certainly the most direct approach, but it is limited by the poor quality of the investigated surfaces. The recent development of ion milling tools (BIB and FIB; Desbois et al., 2009, 2011; Heath et al., 2011; Keller et al., 2011) and cryo-SEM allows, respectively, the production of exceptionally high-quality polished cross-sections suitable for high-resolution SEM imaging of porosity at the nm scale, and the investigation of samples under wet conditions by cryogenic stabilization. This contribution focuses mainly on the SEM description of pore microstructures in 2D BIB-polished cross-sections of Boom (Mol site, Belgium) and Opalinus (Mont Terri, Switzerland) clays down to the SEM resolution. Pores detected in images are statistically analyzed to quantify porosity in the REA. On the one hand, the BIB-SEM results allow recovery of MIP measurements obtained from larger sample volumes. On the other hand, the BIB-SEM approach allows characterization of porosity-homogeneous and -predictable islands, which form the elementary components of an alternative concept of a porosity/permeability model based on pore microstructures. Desbois G., Urai J.L. and Kukla P.A. (2009) Morphology of the pore space in claystones - evidence from BIB/FIB ion beam sectioning and cryo-SEM observations. E-Earth, 4, 15-22. Desbois G., Urai J.L., Kukla P.A., Konstanty J. and Baerle C. (2011). High-resolution 3D fabric and porosity model in a tight gas sandstone reservoir: a new approach to investigate microstructures from mm- to nm-scale combining argon beam cross-sectioning and SEM imaging. Journal of Petroleum Science…

  6. Growth profile and SEM analyses of Candida albicans and Escherichia coli with Hymenocallis littoralis (Jacq.) Salisb leaf extract.

    Science.gov (United States)

    Rosli, N; Sumathy, V; Vikneswaran, M; Sreeramanan, S

    2014-12-01

    Hymenocallis littoralis (Jacq.) Salisb (melong kecil), commonly known as 'spider lily', is an herbaceous plant from the family Amaryllidaceae. A study was carried out to determine the effect of H. littoralis leaf extract on the growth and morphogenesis of two pathogenic microbes, Candida albicans and Escherichia coli. The leaf extract displayed favourable anticandidal and antibacterial activity with a minimum inhibitory concentration (MIC) of 6.25 mg/mL. A time-kill study showed that both microbes were completely killed after treatment with the leaf extract for 20 h. Both microbes' cell walls were heavily ruptured based on scanning electron microscopy (SEM) analysis. The significant anticandidal and antibacterial activities shown by H. littoralis leaf extract suggest its potential as an antimicrobial agent against C. albicans and E. coli.

  7. SEM-EDS Analyses of Small Craters in Stardust Aluminum Foils: Implications for the Wild-2 Dust Distribution

    Science.gov (United States)

    Borg, J.; Horz, F.; Bridges, J. C.; Burchell, M. J.; Djouadi, Z.; Floss, C.; Graham, G. A.; Green, S. F.; Heck, P. R.; Hoppe, P.; Huth, J.; Kearsley, A; Leroux, H.; Marhas, K.; Stadermann, F. J.; Teslich, N.

    2007-01-01

    Aluminium foils were used on Stardust to stabilize the aerogel specimens in the modular collector tray. Parts of these foils were fully exposed to the flux of cometary grains emanating from Wild 2. Because the exposed part of these foils had to be harvested before extraction of the aerogel, numerous foil strips some 1.7 mm wide and 13 or 33 mm long were generated during Stardust's Preliminary Examination (PE). These strips are readily accommodated in their entirety in the sample chambers of modern SEMs, thus providing the opportunity to characterize in situ, employing EDS methods, the size distribution and residue composition of statistically more significant numbers of cometary dust particles than aerogel, the latter demanding extensive sample preparation. We describe here the analysis of nearly 300 impact craters and their implications for Wild 2 dust.

  8. Graphical models for genetic analyses

    DEFF Research Database (Denmark)

    Lauritzen, Steffen Lilholt; Sheehan, Nuala A.

    2003-01-01

    This paper introduces graphical models as a natural environment in which to formulate and solve problems in genetics and related areas. Particular emphasis is given to the relationships among the various local computation algorithms which have been developed within the hitherto mostly separate areas of graphical models and genetics. The potential of graphical models is explored and illustrated through a number of example applications where the genetic element is substantial or dominating.

  9. Hybrid OPC modeling with SEM contour technique for 10nm node process

    Science.gov (United States)

    Hitomi, Keiichiro; Halle, Scott; Miller, Marshal; Graur, Ioana; Saulnier, Nicole; Dunn, Derren; Okai, Nobuhiro; Hotta, Shoji; Yamaguchi, Atsuko; Komuro, Hitoshi; Ishimoto, Toru; Koshihara, Shunsuke; Hojo, Yutaka

    2014-03-01

    Hybrid OPC modeling is investigated using both CDs from 1D and simple 2D structures and contours extracted from complex 2D structures, obtained with a critical-dimension scanning electron microscope (CD-SEM). Recent studies have addressed some of the key issues needed for the implementation of contour extraction, including an edge detection algorithm consistent with conventional CD measurements, contour averaging and contour alignment. First, pattern contours obtained from CD-SEM images were used to complement traditional site-driven CD metrology for the calibration of OPC models for both the metal and contact layers of a 10 nm-node logic device developed at Albany Nano-Tech. The accuracy of the hybrid OPC model was compared with that of a conventional OPC model created with only CD data. Accuracy of the model, defined as total error root-mean-square (RMS), was improved by 23% with the use of hybrid OPC modeling for the contact layer and by 18% for the metal layer. The pattern-specific benefit of hybrid modeling was also examined. Resist shrink correction was applied to contours extracted from CD-SEM images in order to improve their accuracy, and the shrink-corrected contours were used for OPC modeling. The accuracy of the OPC model with shrink correction was compared with that without shrink correction, and total error RMS was decreased by 0.2 nm (12%) with the shrink correction technique. Variation of model accuracy among 8 modeling runs with different model calibration patterns was reduced by applying shrink correction. Shrink correction of contours can thus improve the accuracy and stability of the OPC model.

  11. Analysis of Balance Scorecards Model Performance and Perspective Strategy Synergized by SEM

    Directory of Open Access Journals (Sweden)

    Waluyo Minto

    2016-01-01

    Full Text Available The performance assessment analysis after the economic crisis using the Balanced Scorecard (BSC) method becomes a powerful and effective tool that can provide an integrated view of the performance of an organization. This strategy helped the Indonesian economy rebound positively after the economic crisis. Taking effective decisions requires combining the four BSC perspectives and strategies that focus on a system with different behaviors or steps. This paper combines the BSC method with structural equation modeling (SEM) because they share the same concept, a causal relationship, where the SEM research model uses the BSC variables. The purpose of this paper is to investigate the influence of the variables synergized between the balanced scorecard and SEM as a means of strategic planning in the future. This study used primary data with a sample large enough to meet the maximum likelihood estimation, rated on a seven-point semantic scale. The research model is a combination of one-step and two-step models. The next steps were to test the measurement model, the structural equation model, and modified models. The test results indicated that the model suffered from multicollinearity; therefore, the model was converted into a one-step model. After modification, the goodness-of-fit indices showed good scores. All BSC variables have a direct significant influence, including the perspective of strategic goals and sustainable competitive advantage. The goodness-of-fit results of the modified simulation model are DF = 227, Chi-square = 276.550, P = 0.058, CMIN/DF = 1.150, GFI = 0.831, AGFI = 0.791, CFI = 0.972, TLI = 0.965 and RMSEA = 0.039.
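The reported fit indices can be checked against the usual rules of thumb. A minimal sketch follows; the sample size n = 250 is a hypothetical stand-in (the abstract does not restate it), so the computed CMIN/DF and RMSEA will not exactly reproduce the paper's figures.

```python
import math

def fit_summary(chi_square, df, n):
    """Common SEM fit heuristics; the thresholds are conventional rules of thumb."""
    cmin_df = chi_square / df
    # RMSEA point estimate from the noncentrality parameter (chi2 - df).
    rmsea = math.sqrt(max(chi_square - df, 0.0) / (df * (n - 1)))
    return {"CMIN/DF": cmin_df, "RMSEA": rmsea,
            "acceptable": cmin_df < 2.0 and rmsea < 0.05}

summary = fit_summary(chi_square=276.550, df=227, n=250)
print({k: round(v, 3) if isinstance(v, float) else v for k, v in summary.items()})
```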

  12. Modelling and analysis of FMS productivity variables by ISM, SEM and GTMA approach

    Science.gov (United States)

    Jain, Vineet; Raj, Tilak

    2014-09-01

    Productivity has often been cited as a key factor in flexible manufacturing system (FMS) performance, and actions to increase it are said to improve profitability and the wage-earning capacity of employees. Improving productivity is seen as a key issue for the long-term survival and success of a manufacturing system. The purpose of this paper is to model and analyse the productivity variables of FMS. The study was performed with several approaches, viz. interpretive structural modelling (ISM), structural equation modelling (SEM), the graph theory and matrix approach (GTMA) and a cross-sectional survey within manufacturing firms in India. ISM has been used to develop a model of productivity variables, which was then analysed. Exploratory factor analysis (EFA) and confirmatory factor analysis (CFA) are powerful statistical techniques; CFA is carried out through SEM. EFA is applied to extract the factors in FMS using the Statistical Package for the Social Sciences (SPSS 20) software, and these factors are confirmed by CFA through the Analysis of Moment Structures (AMOS 20) software. Twenty productivity variables were identified from the literature and four factors extracted that bear on the productivity of FMS: people, quality, machine and flexibility. SEM using AMOS 20 was used to fit the first-order four-factor structure. GTMA is a multiple attribute decision making (MADM) methodology used to quantify the intensity of productivity variables in an organization. An FMS productivity index is proposed to quantify the factors which affect FMS.

  14. Automatic search for fMRI connectivity mapping: an alternative to Granger causality testing using formal equivalences among SEM path modeling, VAR, and unified SEM.

    Science.gov (United States)

    Gates, Kathleen M; Molenaar, Peter C M; Hillary, Frank G; Ram, Nilam; Rovine, Michael J

    2010-04-15

    Modeling the relationships among brain regions of interest (ROIs) carries unique potential to explicate how the brain orchestrates information processing. However, hurdles arise when using functional MRI data. Variation in ROI activity contains sequential dependencies and shared influences on synchronized activation. Consequently, both lagged and contemporaneous relationships must be considered for unbiased statistical parameter estimation. Identifying these relationships using a data-driven approach could guide theory-building regarding integrated processing. The present paper demonstrates how the unified SEM attends to both lagged and contemporaneous influences on ROI activity. Additionally, this paper offers an approach akin to Granger causality testing (Lagrange multiplier testing) for statistically identifying directional influence among ROIs, and employs this approach within an automatic search procedure to arrive at an optimal model. The rationale for this equivalence is offered by explicating the formal relationships among path modeling, vector autoregression, and unified SEM. When applied to simulated data, biases become apparent in estimates from models that do not consider both lagged and contemporaneous paths. Finally, the unified SEM with the automatic search procedure is applied to an empirical data example.
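The core point, that contemporaneous and lagged paths must be estimated jointly, can be sketched with a toy two-ROI simulation. The coefficients and noise levels below are invented, and plain least squares stands in for the SEM estimator.

```python
import random

random.seed(0)
T = 5000
a_contemp, phi_lag = 0.8, 0.5            # true contemporaneous and lagged effects
y1 = [random.gauss(0, 1) for _ in range(T)]
y2 = [0.0] * T
for t in range(1, T):
    y2[t] = a_contemp * y1[t] + phi_lag * y2[t - 1] + random.gauss(0, 0.1)

def ols2(x1, x2, y):
    """Least squares for y = b1*x1 + b2*x2 (no intercept), via 2x2 normal equations."""
    s11 = sum(a * a for a in x1)
    s12 = sum(a * b for a, b in zip(x1, x2))
    s22 = sum(b * b for b in x2)
    sy1 = sum(a * c for a, c in zip(x1, y))
    sy2 = sum(b * c for b, c in zip(x2, y))
    det = s11 * s22 - s12 * s12
    return (s22 * sy1 - s12 * sy2) / det, (s11 * sy2 - s12 * sy1) / det

# Fit both path types in one model: ROI2 on contemporaneous ROI1 and its own lag.
b_contemp, b_lag = ols2(y1[1:], y2[:-1], y2[1:])
print(f"recovered contemporaneous = {b_contemp:.2f}, lagged = {b_lag:.2f}")
```

Dropping either predictor from the regression biases the remaining estimate, which is the failure mode the unified SEM is designed to avoid.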

  15. High-resolution 3D analyses of the shape and internal constituents of small volcanic ash particles: The contribution of SEM micro-computed tomography (SEM micro-CT)

    Science.gov (United States)

    Vonlanthen, Pierre; Rausch, Juanita; Ketcham, Richard A.; Putlitz, Benita; Baumgartner, Lukas P.; Grobéty, Bernard

    2015-02-01

    The morphology of small volcanic ash particles is fundamental to our understanding of magma fragmentation, and in transport modeling of volcanic plumes and clouds. Until recently, the analysis of 3D features in small objects ( 20 μm3 (~ 3.5 μm in diameter) can be successfully reconstructed and quantified. In addition, new functionalities of the Blob3D software were developed to allow the particle shape factors frequently used as input parameters in ash transport and dispersion models to be calculated. This study indicates that SEM micro-CT is very well suited to quantify the various aspects of shape in fine volcanic ash, and potentially also to investigate the 3D morphology and internal structure of any object < 0.1 mm3.

  16. CUFE at SemEval-2016 Task 4: A Gated Recurrent Model for Sentiment Classification

    KAUST Repository

    Nabil, Mahmoud

    2016-06-16

    In this paper we describe a deep learning system built for SemEval 2016 Task 4 (Subtasks A and B). We trained a Gated Recurrent Unit (GRU) neural network model on top of two sets of word embeddings: (a) general word embeddings generated from an unsupervised neural language model; and (b) task-specific word embeddings generated from a supervised neural language model trained to classify tweets into positive and negative categories. We also added a method for analyzing and splitting multi-word hashtags and appending them to the tweet body before feeding it to our model. Our models achieved an F1-measure of 0.58 for Subtask A (ranked 12/34) and a Recall of 0.679 for Subtask B (ranked 12/19).
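The abstract does not detail the hashtag-splitting method, so the sketch below uses a common heuristic: split on camelCase, else greedy longest-match against a vocabulary. The tiny vocabulary is a stand-in, not the paper's resource.

```python
import re

def split_hashtag(tag, vocab):
    """Split a hashtag body into words: camelCase first, else greedy longest-match
    against a vocabulary (a stand-in for the paper's unspecified method)."""
    body = tag.lstrip("#")
    camel = re.findall(r"[A-Z]?[a-z]+|[A-Z]+(?![a-z])|\d+", body)
    if len(camel) > 1:
        return [w.lower() for w in camel]
    words, rest = [], body.lower()
    while rest:
        for end in range(len(rest), 0, -1):
            # Fall back to a single character so the loop always terminates.
            if rest[:end] in vocab or end == 1:
                words.append(rest[:end])
                rest = rest[end:]
                break
    return words

vocab = {"happy", "new", "year", "love"}
print(split_hashtag("#HappyNewYear", vocab))  # camelCase split
print(split_hashtag("#happynewyear", vocab))  # greedy dictionary split
```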

  17. Bayesian Evaluation of inequality-constrained Hypotheses in SEM Models using Mplus.

    Science.gov (United States)

    van de Schoot, Rens; Hoijtink, Herbert; Hallquist, Michael N; Boelen, Paul A

    2012-10-01

    Researchers in the behavioural and social sciences often have expectations that can be expressed in the form of inequality constraints among the parameters of a structural equation model, resulting in an informative hypothesis. The question they would like answered is "Is the hypothesis correct?" or "Is the hypothesis incorrect?". We demonstrate a Bayesian approach to comparing an inequality-constrained hypothesis with its complement in an SEM framework. The method is introduced and its utility is illustrated by means of an example. Furthermore, the influence of the specification of the prior distribution is examined. Finally, it is shown how the proposed approach can be implemented using Mplus.
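One standard way to evaluate such a hypothesis is the encompassing-prior Bayes factor: the ratio of posterior to prior mass satisfying the constraint. The sketch below applies it to a toy two-parameter problem; the prior and posterior draws are made up for illustration and are not the paper's example.

```python
import random

random.seed(1)
N = 100_000
# Encompassing-prior approach: BF(H1 vs unconstrained) = f / c, where f and c
# are the posterior and prior proportions of draws satisfying the constraint.
prior = [(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(N)]
post = [(random.gauss(0.5, 0.1), random.gauss(0.1, 0.1)) for _ in range(N)]

c = sum(b1 > b2 for b1, b2 in prior) / N   # prior mass of "beta1 > beta2" (~0.5)
f = sum(b1 > b2 for b1, b2 in post) / N    # posterior mass of the constraint
bf = f / c
print(f"prior mass c = {c:.3f}, posterior mass f = {f:.3f}, BF = {bf:.2f}")
```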

  18. Challenges and Opportunities in Analysing Students Modelling

    Science.gov (United States)

    Blanco-Anaya, Paloma; Justi, Rosária; Díaz de Bustamante, Joaquín

    2017-01-01

    Modelling-based teaching activities have been designed and analysed from distinct theoretical perspectives. In this paper, we use one of them--the model of modelling diagram (MMD)--as an analytical tool in a regular classroom context. This paper examines the challenges that arise when the MMD is used as an analytical tool to characterise the…

  19. U-Sem: Semantic Enrichment, User Modeling and Mining of Usage Data on the Social Web

    CERN Document Server

    Abel, Fabian; Hauff, Claudia; Hollink, Laura; Houben, Geert-Jan

    2011-01-01

    With the growing popularity of Social Web applications, more and more user data is published on the Web everyday. Our research focuses on investigating ways of mining data from such platforms that can be used for modeling users and for semantically augmenting user profiles. This process can enhance adaptation and personalization in various adaptive Web-based systems. In this paper, we present the U-Sem people modeling service, a framework for the semantic enrichment and mining of people's profiles from usage data on the Social Web. We explain the architecture of our people modeling service and describe its application in an adult e-learning context as an example.

  20. Pulse electrochemical machining on Invar alloy: Optical microscopic/SEM and non-contact 3D measurement study of surface analyses

    Energy Technology Data Exchange (ETDEWEB)

    Kim, S.H.; Choi, S.G.; Choi, W.K.; Yang, B.Y. [School of Mechanical Engineering, Inha University, Incheon 402-751 (Korea, Republic of); Lee, E.S., E-mail: leees@dreamwiz.com [Department of Mechanical Engineering, Inha University, Incheon 402-751 (Korea, Republic of)

    2014-09-30

    Highlights: • Invar alloy was electrochemically polished by PECM (Pulse Electrochemical Machining) in a mixture of NaCl, glycerin, and distilled water. • Optical microscopic/SEM and non-contact 3D measurement study of Invar surface analyses. • Analysis results show that the applied voltage and electrode shape are factors that affect the surface conditions. - Abstract: In this study, Invar alloy (Fe 63.5%, Ni 36.5%) was electrochemically polished by PECM (Pulse Electrochemical Machining) in a mixture of NaCl, glycerin, and distilled water. A series of PECM experiments was carried out with different voltages and different electrode shapes, and the surfaces of the polished Invar alloy were then investigated by optical microscope, scanning electron microscope (SEM), and non-contact 3D measurement (white light microscopes). It was found that different applied voltages produced different surface characteristics because the applied voltage concentrates locally on the Invar alloy surface. Moreover, the shape of the electrode also affects the surface characteristics by influencing the applied voltage. These experimental findings provide fundamental knowledge for PECM of Invar alloy by surface analysis.

  1. Incorporating Latent Variables into Discrete Choice Models - A Simultaneous Estimation Approach Using SEM Software

    Directory of Open Access Journals (Sweden)

    Dirk Temme

    2008-12-01

    Full Text Available Integrated choice and latent variable (ICLV) models represent a promising new class of models which merge classic choice models with the structural equation approach (SEM) for latent variables. Despite their conceptual appeal, applications of ICLV models in marketing remain rare. We extend previous ICLV applications by first estimating a multinomial choice model and, second, by estimating hierarchical relations between latent variables. An empirical study on travel mode choice clearly demonstrates the value of ICLV models in enhancing the understanding of choice processes. In addition to the usually studied directly observable variables such as travel time, we show how abstract motivations such as power and hedonism, as well as attitudes such as a desire for flexibility, impact travel mode choice. Furthermore, we show that it is possible to estimate such a complex ICLV model with the widely available structural equation modeling package Mplus. This finding is likely to encourage more widespread application of this appealing model class in the marketing field.

  2. Projects Delay Factors of Saudi Arabia Construction Industry Using PLS-SEM Path Modelling Approach

    Directory of Open Access Journals (Sweden)

    Abdul Rahman Ismail

    2016-01-01

    Full Text Available This paper presents the development of a PLS-SEM path model of delay factors in the Saudi Arabia construction industry, focussing on Mecca City. The model was developed and assessed using SmartPLS v3.0 software and consists of 37 factors/manifests in 7 groups/independent variables and one dependent variable, the delay of construction projects. The model was rigorously assessed at the measurement and structural levels, and the outcomes show that it achieved the required threshold values. At the structural level, among the seven groups, the client and consultant group has the highest impact on construction delay, with a path coefficient β of 0.452, while the project management and contract administration group has the least impact, with a β of 0.016. The overall model has moderate explanatory power, with an R2 value of 0.197, for the Saudi Arabia construction industry. This model will be able to assist practitioners in Mecca City to pay more attention to risk analysis for potential construction delays.
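For a single arrow of a structural model, the standardized path coefficient and the R² it implies can be computed directly. The sketch below is a minimal one-predictor illustration with invented data; real PLS-SEM estimation iterates over latent variable scores, which is omitted here.

```python
import math

def simple_path(x, y):
    """Standardized path coefficient and R^2 for a one-predictor path,
    a minimal stand-in for a single arrow in a PLS-SEM structural model."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    beta = sxy / math.sqrt(sxx * syy)   # standardized coefficient
    return beta, beta ** 2              # R^2 = beta^2 in the one-predictor case

# Invented construct scores (e.g., "client and consultant" vs. observed delay).
x = [1, 2, 3, 4, 5]
y = [1.2, 1.9, 3.2, 3.8, 5.1]
beta, r2 = simple_path(x, y)
print(f"beta = {beta:.3f}, R^2 = {r2:.3f}")
```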

  3. SEM-contour-based OPC model calibration through the process window

    Science.gov (United States)

    Vasek, Jim; Menedeva, Ovadya; Levitzky, Dan; Lindman, Ofer; Nemadi, Youval; Bailey, George E.; Sturtevant, John L.

    2007-03-01

    As design rules shrink, there is an unavoidable increase in the complexity of OPC/RET schemes required to enable design printability. These complex OPC/RET schemes have been facilitating unprecedented yield at k1 factors previously deemed "unmanufacturable", but they increase the mask complexity and production cost, and can introduce yield-detracting errors. The most common errors are found in OPC design itself, and in the resulting patterning robustness across the process window. Two factors in the OPC design process that contribute to these errors are a) that 2D structures used in the design are not sufficiently well-represented in the OPC model calibration test pattern suite, and b) that the OPC model calibration is done only at the nominal process settings and not across the entire focus-exposure window. This work compares two alternative methods for calibrating OPC models. The first method uses a traditional industry flow for making CD measurements on standard calibration target structures. The second method uses 2D contour profiles extracted automatically by the CD-SEM over varying focus and exposure conditions. OPC models were developed for aggressive quadrupole illumination conditions (k1 = 0.35) used in 65nm- and 45nm-node logic gate patterning. Model accuracy improvement using 2D contours for calibration through the process window is demonstrated. Additionally this work addresses the issues of automating the contour extraction and calibration process, reducing the data collection burden with improved calibration cycle time.

  4. A Structural Equation Model (SEM of Governing Factors Influencing the Implementation of T-Government

    Directory of Open Access Journals (Sweden)

    Sameer Alshetewi

    2015-11-01

    Full Text Available Governments around the world have invested significant sums of money in Information and Communication Technology (ICT) to improve the efficiency and effectiveness of the services provided to their citizens. However, they have not achieved the desired results because of the lack of interoperability between different government entities. Therefore, many governments have started shifting away from the original concept of e-Government towards a much more transformational approach that encompasses the entire relationship between different government departments and users of public services, termed transformational government (t-Government). In this paper, a model is proposed for the governing factors that impact the implementation of t-Government, such as strategy, leadership, stakeholders, citizen centricity and funding, in the context of Saudi Arabia. Five constructs are hypothesised to be related to the implementation of t-Government. To clarify the relationships among these constructs, a structural equation model (SEM) is utilised to examine the model fit with the five hypotheses. The results show that there are positive and significant relationships among the constructs, such as between strategy and t-Government, between stakeholders and t-Government, and between leadership and t-Government. The study also showed insignificant relationships between citizen centricity and t-Government and between funding and t-Government.

  5. Externalizing Behaviour for Analysing System Models

    DEFF Research Database (Denmark)

    Ivanova, Marieta Georgieva; Probst, Christian W.; Hansen, René Rydhof

    2013-01-01

    System models have recently been introduced to model organisations and evaluate their vulnerability to threats, especially insider threats. For the latter these models are very suitable, since insiders can be assumed to have more knowledge about the attacked organisation than outside attackers. Therefore, many attacks are considerably easier for insiders to perform than for outsiders. However, current models do not support explicit specification of different behaviours. Instead, behaviour is deeply embedded in the analyses supported by the models, meaning that changing behaviours is a complex, if not impossible, task. Especially when considering social engineering or the human factor in general, the ability to use different kinds of behaviours is essential. In this work we present an approach to make behaviour a separate component in system models, and explore how to integrate…

  6. Modelling and Analysing Socio-Technical Systems

    DEFF Research Database (Denmark)

    Aslanyan, Zaruhi; Ivanova, Marieta Georgieva; Nielson, Flemming

    2015-01-01

    …with social engineering. Due to this combination of attack steps on technical and social levels, risk assessment in socio-technical systems is complex. Therefore, established risk assessment methods often abstract away the internal structure of an organisation and ignore human factors when modelling and assessing attacks. In our work we model all relevant levels of socio-technical systems, and propose evaluation techniques for analysing the security properties of the model. Our approach simplifies the identification of possible attacks and provides qualified assessment and ranking of attacks based on the expected impact. We demonstrate our approach on a home-payment system. The system is specifically designed to help elderly or disabled people, who may have difficulties leaving their home, to pay for services, e.g., care-taking or rent. The payment is performed using the remote control of a television…

  7. The SEM Risk Behavior (SRB) Model: A New Conceptual Model of how Pornography Influences the Sexual Intentions and HIV Risk Behavior of MSM.

    Science.gov (United States)

    Wilkerson, J Michael; Iantaffi, Alex; Smolenski, Derek J; Brady, Sonya S; Horvath, Keith J; Grey, Jeremy A; Rosser, B R Simon

    2012-01-01

    While the effects of sexually explicit media (SEM) on heterosexuals' sexual intentions and behaviors have been studied, little is known about the consumption and possible influence of SEM among men who have sex with men (MSM). Importantly, conceptual models of how Internet-based SEM influences behavior are lacking. Seventy-nine MSM participated in online focus groups about their SEM viewing preferences and sexual behavior. Twenty-three participants reported recent exposure to a new behavior via SEM. Whether participants modified their sexual intentions and/or engaged in the new behavior depended on three factors: arousal when imagining the behavior, pleasure when attempting the behavior, and trust between sex partners. Based on MSM's experience, we advance a model of how viewing a new sexual behavior in SEM influences sexual intentions and behaviors. The model includes five paths. Three paths result in the maintenance of sexual intentions and behaviors. One path results in a modification of sexual intentions while maintaining previous sexual behaviors, and one path results in a modification of both sexual intentions and behaviors. With this model, researchers have a framework to test associations between SEM consumption and sexual intentions and behavior, and public health programs have a framework to conceptualize SEM-based HIV/STI prevention programs.

  8. A SEM Model in Assessing the Effect of Convergent, Divergent and Logical Thinking on Students' Understanding of Chemical Phenomena

    Science.gov (United States)

    Stamovlasis, D.; Kypraios, N.; Papageorgiou, G.

    2015-01-01

    In this study, structural equation modeling (SEM) is applied to an instrument assessing students' understanding of chemical change. The instrument comprised items on understanding the structure of substances, chemical changes and their interpretation. The structural relationships among particular groups of items are investigated and analyzed using…

  9. Application of the Structural Equation Model (SEM) in Determining Alternative Environmental Management for the Heavy Equipment Component Industry Based on Community Participation and Partnership

    Directory of Open Access Journals (Sweden)

    Budi Setyo Utomo

    2012-07-01

    Full Text Available As a company engaged in the industrial sector, producing certain components and located in an industrial area, there will be an impact on the environment. These impacts can be positive, in the form of employment, reduced dependence on imported heavy equipment, increased foreign exchange due to reduced imports and increased exports, increased government revenue from taxes, improved public facilities and supporting infrastructure, and opportunities for other related industries. These impacts can also be negative, in the form of environmental degradation such as noise, dust, and micro-climate change, and changes in the social and cultural conditions surrounding the industry. Data analysis was performed descriptively and with the Structural Equation Model (SEM). SEM is a multivariate statistical technique, a combination of factor analysis and regression (correlation) analysis, which aims to test the connections between the variables in a model, whether between an indicator and a construct or between constructs. An SEM model consists of two parts: the latent variable model and the observed variable model. In contrast to ordinary regression, which links causality between observed variables, SEM also makes it possible to identify causality between latent variables. The results of the SEM analysis showed that the developed model has a fairly high level of validity, as shown by the minimum fit chi-square value of 93.15 (P = 0.00029). The model shows that the company's performance in waste management is largely determined by employee integrity and the objectivity of new employees, followed by the independence of the employees in waste management. The most important factors determining employee integrity in waste management in the model are honesty, individual wisdom, and a sense of responsibility. The most important factor in employee objectivity

  10. Modeling the antecedents of internet banking service adoption (IBSA) in Jordan: A Structural Equation Modeling (SEM) approach

    Directory of Open Access Journals (Sweden)

    Malek AL-Majali

    2011-04-01

    Full Text Available Ten years after the introduction of Internet Banking Services (IBS) by Jordanian banks, the adoption of these services is still quite low. Hence, identifying success factors (SF) to improve the level of IBS adoption is crucial. This paper is concerned with an empirical investigation of success factors that could predict successful IBSA in Jordan through application of Innovation Diffusion Theory (IDT). The research model consists of six exogenous variables: perceived ease of use, perceived usefulness, compatibility, trialability, trust and awareness, and one endogenous variable: IBSA. A survey of 700 questionnaires among university staff was conducted and 532 data sets were collected, a 76% response rate. After a rigorous data screening process covering outliers, normality, reliability and validity, 517 cases were ready for structural equation modeling (SEM) analysis. Confirmatory Factor Analysis (CFA) was performed to examine the composite reliability, convergent validity and goodness of fit of the individual construct and measurement models. The revised structural model demonstrates significant and positive direct relationships between all six exogenous variables and IBSA.

  11. In Depth Analyses of LEDs by a Combination of X-ray Computed Tomography (CT) and Light Microscopy (LM) Correlated with Scanning Electron Microscopy (SEM).

    Science.gov (United States)

    Meyer, Jörg; Thomas, Christian; Tappe, Frank; Ogbazghi, Tekie

    2016-06-16

    In failure analysis, device characterization and reverse engineering of light-emitting diodes (LEDs) and similar electronic components, micro-characterization plays an important role. Commonly, different techniques like X-ray computed tomography (CT), light microscopy (LM) and scanning electron microscopy (SEM) are used separately, and the results of each technique are treated independently. Here a comprehensive study is shown which demonstrates the potential leveraged by linking CT, LM and SEM. In-depth characterization is performed on a white-emitting LED, which can be operated throughout all characterization steps. Major advantages are: planned preparation of defined cross sections, correlation of optical properties to structural and compositional information, as well as reliable identification of different functional regions. This results from the breadth of information available from identical regions of interest (ROIs): polarization contrast, bright- and dark-field LM images, as well as optical images of the LED cross section in operation. This is supplemented by SEM imaging techniques and micro-analysis using energy dispersive X-ray spectroscopy.

  12. Externalizing Behaviour for Analysing System Models

    NARCIS (Netherlands)

    Ivanova, Marieta Georgieva; Probst, Christian W.; Hansen, René Rydhof; Kammüller, Florian

    System models have recently been introduced to model organisations and evaluate their vulnerability to threats, especially insider threats. Especially for the latter these models are very suitable, since insiders can be assumed to have more knowledge about the attacked organisation than outside

  13. Photovoltaic System Modeling. Uncertainty and Sensitivity Analyses

    Energy Technology Data Exchange (ETDEWEB)

    Hansen, Clifford W. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Martin, Curtis E. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-08-01

    We report an uncertainty and sensitivity analysis for modeling AC energy from photovoltaic systems. Output from a PV system is predicted by a sequence of models. We quantify uncertainty in the output of each model using empirical distributions of each model's residuals. We propagate uncertainty through the sequence of models by sampling these distributions to obtain an empirical distribution of a PV system's output. We consider models that: (1) translate measured global horizontal, direct and global diffuse irradiance to plane-of-array irradiance; (2) estimate effective irradiance; (3) predict cell temperature; (4) estimate DC voltage, current and power; (5) reduce DC power for losses due to inefficient maximum power point tracking or mismatch among modules; and (6) convert DC to AC power. Our analysis considers a notional PV system comprising an array of FirstSolar FS-387 modules and a 250 kW AC inverter; we use measured irradiance and weather at Albuquerque, NM. We found the uncertainty in PV system output to be relatively small, on the order of 1% for daily energy. We found that uncertainty in the models for POA irradiance and effective irradiance are the dominant contributors to uncertainty in predicted daily energy. Our analysis indicates that efforts to reduce the uncertainty in PV system output predictions may yield the greatest improvements by focusing on the POA and effective irradiance models.
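The residual-sampling propagation scheme can be sketched as a short Monte Carlo chain. The model functions, residual spreads, and operating point below are illustrative stand-ins, not the report's actual models or data.

```python
import random

random.seed(2)

def poa_irradiance(ghi):          # step 1: plane-of-array irradiance (toy model)
    return 1.1 * ghi

def dc_power(poa, temp_c):        # steps 2-4 collapsed: irradiance/temperature -> DC
    return 0.2 * poa * (1 - 0.004 * (temp_c - 25))

def ac_power(dc):                 # step 6: inverter conversion
    return 0.96 * dc

N = 20_000
samples = []
for _ in range(N):
    # Sample each model's residual from an (assumed) empirical error distribution.
    poa = poa_irradiance(800) + random.gauss(0, 10)
    dc = dc_power(poa, 40) + random.gauss(0, 2)
    samples.append(ac_power(dc) + random.gauss(0, 1))

mean = sum(samples) / N
sd = (sum((s - mean) ** 2 for s in samples) / N) ** 0.5
print(f"AC power: {mean:.1f} +/- {sd:.1f} W  ({100 * sd / mean:.1f}% relative)")
```

The empirical spread of `samples` is the propagated output uncertainty; the per-stage residual spreads show which model dominates it.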

  14. Bayesian Uncertainty Analyses Via Deterministic Model

    Science.gov (United States)

    Krzysztofowicz, R.

    2001-05-01

    Rational decision-making requires that the total uncertainty about a variate of interest (a predictand) be quantified in terms of a probability distribution, conditional on all available information and knowledge. Suppose the state-of-knowledge is embodied in a deterministic model, which is imperfect and outputs only an estimate of the predictand. Fundamentals are presented of three Bayesian approaches to producing a probability distribution of the predictand via any deterministic model. The Bayesian Processor of Output (BPO) quantifies the total uncertainty in terms of a posterior distribution, conditional on model output. The Bayesian Processor of Ensemble (BPE) quantifies the total uncertainty in terms of a posterior distribution, conditional on an ensemble of model output. The Bayesian Forecasting System (BFS) decomposes the total uncertainty into input uncertainty and model uncertainty, which are characterized independently and then integrated into a predictive distribution.
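The BPO idea, conditioning the predictand on the deterministic model's output, reduces in the normal-normal case to a conjugate update. The sketch below is that textbook special case under invented numbers, not Krzysztofowicz's full formulation.

```python
def bpo_posterior(model_output, prior_mean, prior_var, error_var):
    """Normal-normal sketch of a Bayesian Processor of Output: treat the
    deterministic model's estimate as a noisy observation of the predictand."""
    k = prior_var / (prior_var + error_var)      # weight given to the model output
    post_mean = prior_mean + k * (model_output - prior_mean)
    post_var = (1 - k) * prior_var               # posterior variance shrinks
    return post_mean, post_var

# Hypothetical numbers: climatological prior N(10, 4), model error variance 1.
m, v = bpo_posterior(model_output=12.0, prior_mean=10.0, prior_var=4.0, error_var=1.0)
print(f"posterior mean = {m:.2f}, posterior variance = {v:.2f}")
```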

  15. Analysing Social Epidemics by Delayed Stochastic Models

    Directory of Open Access Journals (Sweden)

    Francisco-José Santonja

    2012-01-01

    Full Text Available We investigate the dynamics of a delayed stochastic mathematical model to understand the evolution of alcohol consumption in Spain. A sufficient condition for stability in probability of the equilibrium point of the dynamic model with aftereffect and stochastic perturbations is obtained via Kolmanovskii and Shaikhet's general method of Lyapunov functionals construction. We conclude that alcohol consumption in Spain will be constant (with stability) in time, with around 36.47% nonconsumers, 62.94% nonrisk consumers, and 0.59% risk consumers. This approach allows us to emphasize the possibilities of dynamical models in the study of human behaviour.

  16. Modelling, analyses and design of switching converters

    Science.gov (United States)

    Cuk, S. M.; Middlebrook, R. D.

    1978-01-01

    A state-space averaging method for modelling switching dc-to-dc converters for both continuous and discontinuous conduction mode is developed. In each case the starting point is the unified state-space representation, and the end result is a complete linear circuit model, for each conduction mode, which correctly represents all essential features, namely, the input, output, and transfer properties (static dc as well as dynamic ac small-signal). While the method is generally applicable to any switching converter, it is extensively illustrated for the three common power stages (buck, boost, and buck-boost). The results for these converters are then easily tabulated owing to the fixed equivalent circuit topology of their canonical circuit model. The insights that emerge from the general state-space modelling approach lead to the design of new converter topologies through the study of generic properties of the cascade connection of basic buck and boost converters.
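The averaging step can be sketched for an ideal buck converter in continuous conduction: the two switched linear systems are weighted by the duty cycle, and the steady state follows from the averaged model. Component values, duty cycle, and source voltage below are assumed for illustration:

```python
import numpy as np

# State-space averaging for an ideal buck converter (CCM).
# States x = [iL, vC]; input is the source voltage Vg.
L, C, R = 100e-6, 100e-6, 10.0   # assumed component values
D, Vg = 0.4, 12.0                # assumed duty cycle and source

A = np.array([[0.0, -1.0 / L],
              [1.0 / C, -1.0 / (R * C)]])  # same in both intervals for a buck
b_on = np.array([1.0 / L, 0.0])            # switch on: source connected
b_off = np.array([0.0, 0.0])               # switch off: freewheeling

b_avg = D * b_on + (1 - D) * b_off         # duty-cycle-weighted average
x_ss = np.linalg.solve(-A, b_avg * Vg)     # steady state: 0 = A x + b_avg Vg
print(x_ss)  # [iL, vC]; vC = D * Vg for the ideal buck
```

The same recipe with the appropriate `A`/`b` pairs per interval reproduces the dc conversion ratios of the boost and buck-boost stages.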

  17. Sample Size Limits for Estimating Upper Level Mediation Models Using Multilevel SEM

    Science.gov (United States)

    Li, Xin; Beretvas, S. Natasha

    2013-01-01

    This simulation study investigated use of the multilevel structural equation model (MLSEM) for handling measurement error in both mediator and outcome variables ("M" and "Y") in an upper level multilevel mediation model. Mediation and outcome variable indicators were generated with measurement error. Parameter and standard…

  18. Assessing Actual Visit Behavior through Antecedents of Tourists Satisfaction among International Tourists in Jordan: A Structural Equation Modeling (SEM) Approach

    Directory of Open Access Journals (Sweden)

    Ayed Moh’d Al Muala

    2011-06-01

    Full Text Available The Jordan tourism industry is facing fluctuating tourist visits provoked by dissatisfaction, high visit risk, low hotel service, or a negative image of Jordan. This study aims to examine the relationships between the antecedents of tourist satisfaction and actual visit behavior in Jordanian tourism, and the mediating effect of tourist satisfaction (SAT) in the relationship between Jordan image (JOM), service climate (SER), and actual visit behavior (ACT). A total of 850 international tourists completed a survey conducted at southern sites in Jordan. Using the structural equation modeling (SEM) technique, confirmatory factor analysis (CFA) was performed to examine the reliability and validity of the measurement, and structural equation modeling (Amos 6.0) was used to evaluate the causal model. Results demonstrate the model's strong predictive power in explaining international tourists' behavior in Jordan. The findings highlight that the effects of Jordan image and service climate on actual visit behavior are significant and positive.

  19. A bacterial spore model of pulsed electric fields on spore morphology change revealed by simulation and SEM.

    Science.gov (United States)

    Qiu, Xing; Lee, Yin Tung; Yung, Pun To

    2014-01-01

    A two-layered spore model was proposed to analyze the morphological change of bacterial spores subjected to pulsed electric fields. The outer layer, i.e. the spore coat, was defined by a Mooney-Rivlin hyper-elastic material model. The inner layer, i.e. the peptidoglycan and spore core, was modeled by applying additional adhesion forces. The effect of pulsed electric fields on surface displacement was simulated in COMSOL Multiphysics and verified by SEM. The electro-mechanical theory, which treats the spore coat as a capacitor, was used to explain concavity, and the thin viscoelastic film theory, which treats the membrane bilayer as fluctuating surfaces, was used to explain leakage formation. The mutual interaction of external electric fields, charged spores, adhesion forces, and ion movement was predicted to contribute to both concavity and leakage.

  20. SemInf: a burst-based semantic influence model for biomedical topic influence.

    Science.gov (United States)

    He, Dan; Parker, Douglas S

    2014-03-01

    In this study, we model how biomedical topics influence one another, given they are organized in a topic hierarchy, medical subject headings, in which the edges capture a parent-child/subsumption relationship among topics. This information enables studying influence of topics from a semantic perspective, which might be very important in analyzing topic evolution and is missing from the current literature. We first define a burst-based action for topics, which models upward momentum in popularity (or “elevated occurrences” of the topics), and use it to define two types of influence: accumulation influence and propagation influence. We then propose a model of influence between topics, and develop an efficient algorithm (TIPS) to identify influential topics. Experiments show that our model is successful at identifying influential topics and the algorithm is very efficient.

  1. Theory of planned behavior and smoking: meta-analysis and SEM model

    Directory of Open Access Journals (Sweden)

    Gabriela Topa

    2010-12-01

    Full Text Available To examine if the theory of planned behavior (TPB) predicts smoking behavior, 35 data sets (N = 267,977) have been synthesized, containing 219 effect sizes between the model variables, using a meta-analytic structural equation modeling approach (MASEM). Consistent with the TPB's predictions, 1) smoking behavior was related to smoking intentions (weighted mean r = 0.30), 2) intentions were based on attitudes (weighted mean r = 0.16) and subjective norms (weighted mean r = 0.20). Consistent with the TPB's hypotheses, perceived behavioral control was related to smoking intentions (weighted mean r = -0.24) and behaviors (weighted mean r = -0.20), and it contributes significantly to cigarette consumption. The strength of the associations, however, was influenced by the characteristics of the studies and participants. Keywords: theory of planned behavior, smoking, meta-analysis, structural equation modeling
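The weighted mean correlations reported above are sample-size-weighted averages across studies, the standard first stage before MASEM. A minimal sketch; the r values and sample sizes below are invented for illustration, not the study's 35 data sets:

```python
# Sample-size-weighted mean correlation (Hunter-Schmidt style),
# the per-relationship pooling step that precedes MASEM.

def weighted_mean_r(rs, ns):
    """Weight each study's r by its sample size n."""
    return sum(r * n for r, n in zip(rs, ns)) / sum(ns)

rs = [0.25, 0.32, 0.30]     # illustrative study-level correlations
ns = [1200, 800, 2000]      # illustrative sample sizes
print(round(weighted_mean_r(rs, ns), 3))
```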

  2. Determination of the distribution of copper and chromium in partly remediated CCA-treated pine wood using SEM and EDX analyses

    DEFF Research Database (Denmark)

    Christensen, Iben Vernegren; Ottosen, Lisbeth M.; Melcher, Eckhard;

    2005-01-01

    Soaking in different acids and electrodialytic remediation (EDR) were applied for removing copper and chromium from freshly Chromated Copper Arsenate (CCA) impregnated EN 113 pine wood samples. After remedial treatments, AAS analyses revealed that the concentration of copper (Cu) and chromium (Cr...... large amounts of Cu and no Cr. Cr was most effectively removed after soaking in oxalic acid and subsequent EDR treatment or dual soaking in phosphoric acid and oxalic acid with and without subsequent EDR....

  3. Modelling and Analyses of Embedded Systems Design

    DEFF Research Database (Denmark)

    Brekling, Aske Wiid

    We present the MoVES languages: a language with which embedded systems can be specified at a stage in the development process where an application is identified and should be mapped to an execution platform (potentially multi-core). We give a formal model for MoVES that captures and gives......-based verification is a promising approach for assisting developers of embedded systems. We provide examples of system verifications that, in size and complexity, point in the direction of industrially-interesting systems.

  4. SEM in applied marketing research

    DEFF Research Database (Denmark)

    Sørensen, Bjarne Taulo; Tudoran, Ana Alina

    In this paper we discuss two SEM approaches: an exploratory structural equation modelling based on a more liberalised and inductive philosophy versus the classical SEM based on the traditional hypothetical-deductive approach. We apply these two modelling techniques to data from a consumer survey ...

  5. Filipino Nursing Students' Behavioral Intentions toward Geriatric Care: A Structural Equation Model (SEM)

    Science.gov (United States)

    de Guzman, Allan B.; Jimenez, Benito Christian B.; Jocson, Kathlyn P.; Junio, Aileen R.; Junio, Drazen E.; Jurado, Jasper Benjamin N.; Justiniano, Angela Bianca F.

    2013-01-01

    Anchored on the key constructs of Ajzen's Theory of Planned Behavior (1985), this paper seeks to test a model that explores the influence of knowledge, attitude, and caring behavior on nursing students' behavioral intention toward geriatric care. A five-part survey-questionnaire was administered to 839 third and fourth year nursing students from a…

  7. An SEM Approach to Continuous Time Modeling of Panel Data: Relating Authoritarianism and Anomia

    Science.gov (United States)

    Voelkle, Manuel C.; Oud, Johan H. L.; Davidov, Eldad; Schmidt, Peter

    2012-01-01

    Panel studies, in which the same subjects are repeatedly observed at multiple time points, are among the most popular longitudinal designs in psychology. Meanwhile, there exists a wide range of different methods to analyze such data, with autoregressive and cross-lagged models being 2 of the most well known representatives. Unfortunately, in these…

  8. How Can E-Services Influence On Customers' Intentions toward Online Book Repurchasing (SEM Method and TPB Model

    Directory of Open Access Journals (Sweden)

    Hossein Rezaei Dolatabadi

    2012-06-01

    Full Text Available The efficiency and effectiveness of e-commerce have led organizations to apply it increasingly in all aspects of transactions. To be strong and stable enough in today's competitive world, organizations need to improve their technological and communicational infrastructure. Nowadays, superior electronic services can add high customer value and create competitive advantage for the organization. Therefore, providing electronic purchasing for consumers has become one of the key issues for organizations. The main aim of this study is to investigate attitude and intention toward electronic repurchasing of books, through the factor of e-service quality, using the TPB model and the structural equation modeling (SEM) method. The sample of this survey consists of electronic buyers of books (through online book stores) who have used the electronic services that bookseller companies offer. The results show that e-service quality strongly affects intentions and attitudes toward repurchasing.

  9. Theory of planned behavior and smoking: meta-analysis and SEM model.

    Science.gov (United States)

    Topa, Gabriela; Moriano, Juan Antonio

    2010-01-01

    To examine if the theory of planned behavior (TPB) predicts smoking behavior, 35 data sets (N = 267,977) have been synthesized, containing 219 effect sizes between the model variables, using a meta-analytic structural equation modeling approach (MASEM). Consistent with the TPB's predictions, 1) smoking behavior was related to smoking intentions (weighted mean r = 0.30), 2) intentions were based on attitudes (weighted mean r = 0.16), and subjective norms (weighted mean r = 0.20). Consistent with TPB's hypotheses, perceived behavioral control was related to smoking intentions (weighted mean r = -0.24) and behaviors (weighted mean r = -0.20) and it contributes significantly to cigarette consumption. The strength of the associations, however, was influenced by the characteristics of the studies and participants.

  10. Development of fuel-model interfaces: Investigations by XPS, TEM, SEM and AFM

    Science.gov (United States)

    Stumpf, S.; Seibert, A.; Gouder, T.; Huber, F.; Wiss, T.; Römer, J.

    2009-03-01

    The presented work aims to reproducibly prepare UO2-Pd thin film model systems for spent nuclear fuel in order to further investigate surface reactions of these films under relevant redox conditions. The sputter co-deposition of U and Pd (a fission product) in the presence of O2 results in a homogeneous distribution of Pd in a crystalline UO2 matrix. Heating the films causes diffusion of the film components; whether ε-particles form remains to be clarified. First electrochemical studies show the influence of the noble metal Pd on the redox behaviour of UO2. With increasing Pd concentration, matrix dissolution decreases. However, we could demonstrate that the blocked oxidation processes are of a temporary nature: passivation of the Pd reactive sites with an increasing number of cycles finally drives the mixed system toward the redox behaviour of the pure UO2 system.

  11. Investigation of fibre orientation using SEM micrograph and prediction of mechanical properties through micromechanical modelling

    Indian Academy of Sciences (India)

    SUCHHANDA SRABANEE SWAIN; SUSHANTA K SAMAL; SMITA MOHANTY; SANJAY K NAYAK

    2016-06-01

    The present study concerns the fabrication of short sisal fibre-reinforced polypropylene (PP/SF) composites by melt mixing with different fibre lengths (3, 6 and 10 mm), comprising 70% matrix PP and 30% SF, followed by injection moulding. The PP/MA-g-PP/SF composites were prepared in the ratio 65:5:30 with the optimized fibre length of 6 mm at different mould temperatures (25, 45 and 65°C) in a similar fashion. The work was further extended to study the effect of fibre orientation in the composites by numerical calculation of the second-order orientation tensor. To evaluate the fibre orientations of the PP/MA-g-PP/SF composites at different mould temperatures, scanning electron microscope micrographs were used to estimate the accurate principal directions and two-dimensional fibre orientation distributions through the centre coordinates of the elliptical fibre images. Finally, a mathematical model, the modified rule of mixtures, was adopted to compare the predicted tensile strength and modulus with the experimental findings.

  12. VIPRE modeling of VVER-1000 reactor core for DNB analyses

    Energy Technology Data Exchange (ETDEWEB)

    Sung, Y.; Nguyen, Q. [Westinghouse Electric Corporation, Pittsburgh, PA (United States); Cizek, J. [Nuclear Research Institute, Prague, (Czech Republic)

    1995-09-01

    Based on the one-pass modeling approach, the hot channels and the VVER-1000 reactor core can be modeled in 30 channels for DNB analyses using the VIPRE-01/MOD02 (VIPRE) code (VIPRE is owned by Electric Power Research Institute, Palo Alto, California). The VIPRE one-pass model does not compromise any accuracy in the hot channel local fluid conditions. Extensive qualifications include sensitivity studies of radial noding and crossflow parameters and comparisons with the results from THINC and CALOPEA subchannel codes. The qualifications confirm that the VIPRE code with the Westinghouse modeling method provides good computational performance and accuracy for VVER-1000 DNB analyses.

  13. Developing an Effective Model for Shale Gas Flow in Nano-scale Pore Clusters based on FIB-SEM Images

    Science.gov (United States)

    Jiang, W. B.; Lin, M.; Yi, Z. X.; Li, H. S.

    2016-12-01

    Nano-scale pores existing in the form of clusters are the controlling void space in shale gas reservoirs. Gas transport in nanopores, which has a significant influence on the recoverability of shale gas, displays multiple transport regimes, including viscous flow, slippage flow and Knudsen diffusion; it is also influenced by pore space characteristics. For convenience and efficiency, it is necessary to develop an upscaling model from the nano pore to the pore cluster scale. Existing models are more like framework functions that provide a format, because the parameters that represent pore space characteristics are underdetermined and may have multiple possibilities. Therefore, it is urgent to make them explicit and obtain a model that is closer to reality. FIB-SEM imaging technology is able to acquire three-dimensional images with nanometer resolution in which nano pores are visible. Based on the images of two shale samples, we used a high-precision pore network extraction algorithm to generate equivalent pore networks and simulate multiple-regime (non-Darcy) flow in them. Several structural parameters can be obtained through pore network modelling. It is found that although the throat-radius distributions are very close, the throat flux-radius distributions of different samples can be divided into two categories. The variation of tortuosity with pressure and the overall trend of the throat-flux distribution with pressure are disclosed, giving a deeper understanding of shale gas flow in nano-scale pore clusters. Finally, an upscaling model that connects absolute permeability, apparent permeability and other characteristic parameters is proposed, and the best parameter scheme for this model, considering the throat number-radius distribution and flowing porosity, is selected out of three schemes based on pore-scale results; it avoids the multiple-solution problem and is useful in reservoir modelling and experiment result analysis, etc. This work is supported by
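A generic sketch of the regime classification behind such multiple-regime flow models (not the authors' model): the Knudsen number selects the regime, and a first-order Klinkenberg-type slip factor corrects the permeability. The molecular diameter, pressure, temperature, and pore size are assumed values:

```python
import math

def mean_free_path(p_pa, t_k, d_molecule=0.38e-9):
    """Mean free path of a gas (m); d_molecule ~ methane, assumed."""
    k_b = 1.380649e-23  # Boltzmann constant, J/K
    return k_b * t_k / (math.sqrt(2) * math.pi * d_molecule**2 * p_pa)

def regime(kn):
    # Conventional Knudsen-number regime boundaries.
    if kn < 0.001:
        return "viscous"
    if kn < 0.1:
        return "slip"
    if kn < 10:
        return "transition"
    return "free molecular"

def apparent_permeability(k_abs, kn, alpha=4.0):
    """First-order slip correction: k_app = k_abs * (1 + alpha * Kn)."""
    return k_abs * (1 + alpha * kn)

lam = mean_free_path(p_pa=20e6, t_k=350.0)  # ~20 MPa reservoir pressure
kn = lam / 50e-9                            # 50 nm pore diameter
print(regime(kn), apparent_permeability(1.0, kn))
```

As pressure drops, the mean free path and hence Kn grow, which is why the apparent permeability of nanopore networks is pressure dependent.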

  14. Nutrition, Balance and Fear of Falling as Predictors of Risk for Falls among Filipino Elderly in Nursing Homes: A Structural Equation Model (SEM)

    Science.gov (United States)

    de Guzman, Allan B.; Ines, Joanna Louise C.; Inofinada, Nina Josefa A.; Ituralde, Nielson Louie J.; Janolo, John Robert E.; Jerezo, Jnyv L.; Jhun, Hyae Suk J.

    2013-01-01

    While a number of empirical studies have been conducted regarding risk for falls among the elderly, there is still a paucity of similar studies in a developing country like the Philippines. This study purports to test through Structural Equation Modeling (SEM) a model that shows the interaction between and among nutrition, balance, fear of…

  16. Ao leitor sem medo

    Directory of Open Access Journals (Sweden)

    José Eisenberg

    2000-05-01

    Full Text Available This text is a review of Ao leitor sem medo by Renato Janine Ribeiro (Belo Horizonte, UFMG, 1999).

  17. Modelling longevity bonds: Analysing the Swiss Re Kortis bond

    OpenAIRE

    2015-01-01

    A key contribution to the development of the traded market for longevity risk was the issuance of the Kortis bond, the world's first longevity trend bond, by Swiss Re in 2010. We analyse the design of the Kortis bond, develop suitable mortality models to analyse its payoff and discuss the key risk factors for the bond. We also investigate how the design of the Kortis bond can be adapted and extended to further develop the market for longevity risk.

  18. Analysing the temporal dynamics of model performance for hydrological models

    NARCIS (Netherlands)

    Reusser, D.E.; Blume, T.; Schaefli, B.; Zehe, E.

    2009-01-01

    The temporal dynamics of hydrological model performance gives insights into errors that cannot be obtained from global performance measures assigning a single number to the fit of a simulated time series to an observed reference series. These errors can include errors in data, model parameters, or model structure.

  19. A Calculus for Modelling, Simulating and Analysing Compartmentalized Biological Systems

    DEFF Research Database (Denmark)

    Mardare, Radu Iulian; Ihekwaba, Adoha

    2007-01-01

    A. Ihekwaba, R. Mardare. A Calculus for Modelling, Simulating and Analysing Compartmentalized Biological Systems. Case study: NFkB system. In Proc. of International Conference of Computational Methods in Sciences and Engineering (ICCMSE), American Institute of Physics, AIP Proceedings, N 2...

  1. The method of characteristics applied to analyse 2DH models

    NARCIS (Netherlands)

    Sloff, C.J.

    1992-01-01

    To gain insight into the physical behaviour of 2D hydraulic models (mathematically formulated as a system of partial differential equations), the method of characteristics is used to analyse the propagation of physically meaningful disturbances. These disturbances propagate as wave fronts along bicharacteristics.
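For the 2DH shallow-water equations, the speeds of such disturbances along a given direction n can be computed directly. A sketch under ideal (frictionless) assumptions, with illustrative flow values:

```python
import math

# Characteristic speeds of the 2DH shallow-water equations along a
# unit direction n = (nx, ny): u.n and u.n +/- sqrt(g*h).

def characteristic_speeds(u, v, h, nx, ny, g=9.81):
    un = u * nx + v * ny      # velocity component along n
    c = math.sqrt(g * h)      # celerity of gravity waves
    return un - c, un, un + c

# Flow of 1.0 m/s x, 0.5 m/s y, depth 2.0 m, probed along the x-axis.
speeds = characteristic_speeds(u=1.0, v=0.5, h=2.0, nx=1.0, ny=0.0)
print(speeds)
```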

  2. Analysing the temporal dynamics of model performance for hydrological models

    Directory of Open Access Journals (Sweden)

    D. E. Reusser

    2008-11-01

    Full Text Available The temporal dynamics of hydrological model performance gives insights into errors that cannot be obtained from global performance measures assigning a single number to the fit of a simulated time series to an observed reference series. These errors can include errors in data, model parameters, or model structure. Dealing with a set of performance measures evaluated at a high temporal resolution implies analyzing and interpreting a high dimensional data set. This paper presents a method for such a hydrological model performance assessment with a high temporal resolution and illustrates its application for two very different rainfall-runoff modeling case studies. The first is the Wilde Weisseritz case study, a headwater catchment in the eastern Ore Mountains, simulated with the conceptual model WaSiM-ETH. The second is the Malalcahuello case study, a headwater catchment in the Chilean Andes, simulated with the physics-based model Catflow. The proposed time-resolved performance assessment starts with the computation of a large set of classically used performance measures for a moving window. The key of the developed approach is a data-reduction method based on self-organizing maps (SOMs and cluster analysis to classify the high-dimensional performance matrix. Synthetic peak errors are used to interpret the resulting error classes. The final outcome of the proposed method is a time series of the occurrence of dominant error types. For the two case studies analyzed here, 6 such error types have been identified. They show clear temporal patterns which can lead to the identification of model structural errors.

  3. Analysing the temporal dynamics of model performance for hydrological models

    Directory of Open Access Journals (Sweden)

    E. Zehe

    2009-07-01

    Full Text Available The temporal dynamics of hydrological model performance gives insights into errors that cannot be obtained from global performance measures assigning a single number to the fit of a simulated time series to an observed reference series. These errors can include errors in data, model parameters, or model structure. Dealing with a set of performance measures evaluated at a high temporal resolution implies analyzing and interpreting a high dimensional data set. This paper presents a method for such a hydrological model performance assessment with a high temporal resolution and illustrates its application for two very different rainfall-runoff modeling case studies. The first is the Wilde Weisseritz case study, a headwater catchment in the eastern Ore Mountains, simulated with the conceptual model WaSiM-ETH. The second is the Malalcahuello case study, a headwater catchment in the Chilean Andes, simulated with the physics-based model Catflow. The proposed time-resolved performance assessment starts with the computation of a large set of classically used performance measures for a moving window. The key of the developed approach is a data-reduction method based on self-organizing maps (SOMs and cluster analysis to classify the high-dimensional performance matrix. Synthetic peak errors are used to interpret the resulting error classes. The final outcome of the proposed method is a time series of the occurrence of dominant error types. For the two case studies analyzed here, 6 such error types have been identified. They show clear temporal patterns, which can lead to the identification of model structural errors.
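The first step of the approach, computing a classical performance measure in a moving window, can be sketched as follows. The observed/simulated series are synthetic, and Nash-Sutcliffe efficiency (NSE) stands in for the full set of measures:

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe efficiency of sim against obs."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def moving_window_nse(obs, sim, window):
    return np.array([nse(obs[i:i + window], sim[i:i + window])
                     for i in range(len(obs) - window + 1)])

t = np.arange(200)
obs = np.sin(t / 10.0) + 1.5
sim = obs + np.where(t > 100, 0.5, 0.0)   # model error only after t = 100
scores = moving_window_nse(obs, sim, window=30)
print(scores[0], scores[-1])  # good fit early, poor fit late
```

In the paper, the resulting high-dimensional performance matrix is then reduced with self-organizing maps and cluster analysis rather than inspected score by score.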

  4. Social Network Analyses and Nutritional Behavior: An Integrated Modeling Approach

    Directory of Open Access Journals (Sweden)

    Alistair McNair Senior

    2016-01-01

    Full Text Available Animals have evolved complex foraging strategies to obtain a nutritionally balanced diet and associated fitness benefits. Recent advances in nutrition research, combining state-space models of nutritional geometry with agent-based models of systems biology, show how nutrient-targeted foraging behavior can also influence animal social interactions, ultimately affecting collective dynamics and group structures. Here we demonstrate how social network analyses can be integrated into such a modeling framework and provide a tangible and practical analytical tool to compare experimental results with theory. We illustrate our approach by examining the case of nutritionally mediated dominance hierarchies. First, we show how nutritionally explicit agent-based models that simulate the emergence of dominance hierarchies can be used to generate social networks. Importantly, the structural properties of our simulated networks bear similarities to dominance networks of real animals (where conflicts are not always directly related to nutrition). Finally, we demonstrate how metrics from social network analyses can be used to predict the fitness of agents in these simulated competitive environments. Our results highlight the potential importance of nutritional mechanisms in shaping dominance interactions in a wide range of social and ecological contexts. Nutrition likely influences social interaction in many species, and yet a theoretical framework for exploring these effects is currently lacking. Combining social network analyses with computational models from nutritional ecology may bridge this divide, representing a pragmatic approach for generating theoretical predictions for nutritional experiments.
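One common dominance metric of the kind such analyses use is David's score, computed from a win/loss matrix as might be extracted from simulated agent interactions. A sketch with invented interaction counts:

```python
import numpy as np

def davids_score(wins):
    """David's score from a matrix where wins[i, j] = wins of i over j."""
    wins = np.asarray(wins, float)
    total = wins + wins.T
    # Win proportions P_ij; dyads with no interactions contribute 0.
    p = np.divide(wins, total, out=np.zeros_like(wins), where=total > 0)
    w = p.sum(axis=1)       # summed win proportions
    l = p.sum(axis=0)       # summed loss proportions
    w2 = p @ w              # win proportions weighted by opponents' w
    l2 = p.T @ l            # loss proportions weighted by opponents' l
    return w + w2 - l - l2

# Illustrative win/loss counts for three individuals.
wins = np.array([[0, 8, 9],
                 [2, 0, 7],
                 [1, 3, 0]])
print(davids_score(wins))  # highest score = most dominant individual
```

David's scores sum to zero across the group, so the ranking is read off directly from the signed values.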

  5. Graphic-based musculoskeletal model for biomechanical analyses and animation.

    Science.gov (United States)

    Chao, Edmund Y S

    2003-04-01

    The ability to combine physiology and engineering analyses with computer sciences has opened the door to the possibility of creating the 'Virtual Human' reality. This paper presents a broad foundation for a full-featured biomechanical simulator for the human musculoskeletal system physiology. This simulation technology unites the expertise in biomechanical analysis and graphic modeling to investigate joint and connective tissue mechanics at the structural level and to visualize the results in both static and animated forms together with the model. Adaptable anatomical models including prosthetic implants and fracture fixation devices and a robust computational infrastructure for static, kinematic, kinetic, and stress analyses under varying boundary and loading conditions are incorporated on a common platform, the VIMS (Virtual Interactive Musculoskeletal System). Within this software system, a manageable database containing long bone dimensions, connective tissue material properties and a library of skeletal joint system functional activities and loading conditions are also available and they can easily be modified, updated and expanded. Application software is also available to allow end-users to perform biomechanical analyses interactively. This paper details the design, capabilities, and features of the VIMS development at Johns Hopkins University, an effort possible only through academic and commercial collaborations. Examples using these models and the computational algorithms in a virtual laboratory environment are used to demonstrate the utility of this unique database and simulation technology. This integrated system will impact on medical education, basic research, device development and application, and clinical patient care related to musculoskeletal diseases, trauma, and rehabilitation.

  6. Analysis of titanium surface irradiated with laser, with and without deposition of durapatite

    Directory of Open Access Journals (Sweden)

    Karin Ellen Sisti

    2006-01-01

    Full Text Available PURPOSE: The aim of this study was to analyse the surface of titanium implants using discs irradiated with lasers. METHODS: Titanium discs were irradiated with a high-intensity laser (Nd-YAG), coated with durapatite, and heat treated. The samples received a qualitative morphological analysis through micrographs at several magnifications under SEM (scanning electron microscopy). RESULTS: The laser-irradiated surface showed roughness and isomorphic characteristics. The durapatite, applied by a biomimetic method, quantitatively enlarged the titanium surface area. CONCLUSION: The surface treatment produced greater durapatite deposition, surface roughness, better isomorphic characteristics, and a quantitative increase in the titanium surface area; the samples showed a roughness and homogeneity not found in implants currently available on the market.

  7. Comparing modelling techniques for analysing urban pluvial flooding.

    Science.gov (United States)

    van Dijk, E; van der Meulen, J; Kluck, J; Straatman, J H M

    2014-01-01

    Short peak rainfall intensities cause sewer systems to overflow leading to flooding of streets and houses. Due to climate change and densification of urban areas, this is expected to occur more often in the future. Hence, next to their minor (i.e. sewer) system, municipalities have to analyse their major (i.e. surface) system in order to anticipate urban flooding during extreme rainfall. Urban flood modelling techniques are powerful tools in both public and internal communications and transparently support design processes. To provide more insight into the (im)possibilities of different urban flood modelling techniques, simulation results have been compared for an extreme rainfall event. The results show that, although modelling software is tending to evolve towards coupled one-dimensional (1D)-two-dimensional (2D) simulation models, surface flow models, using an accurate digital elevation model, prove to be an easy and fast alternative to identify vulnerable locations in hilly and flat areas. In areas at the transition between hilly and flat, however, coupled 1D-2D simulation models give better results since catchments of major and minor systems can differ strongly in these areas. During the decision making process, surface flow models can provide a first insight that can be complemented with complex simulation models for critical locations.
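As a rough illustration of why an accurate digital elevation model alone can already identify vulnerable locations, the sketch below fills closed depressions in a toy DEM with a priority-flood pass; the grid, function name and all values are hypothetical and not from the paper.

```python
import heapq

def fill_depressions(dem):
    """Priority-flood: raise every cell to the lowest spill level
    reachable from the grid edge, so closed depressions (potential
    ponding locations during extreme rainfall) become flat."""
    rows, cols = len(dem), len(dem[0])
    filled = [[None] * cols for _ in range(rows)]
    heap = []
    # Seed the queue with all edge cells at their own elevation.
    for r in range(rows):
        for c in range(cols):
            if r in (0, rows - 1) or c in (0, cols - 1):
                filled[r][c] = dem[r][c]
                heapq.heappush(heap, (dem[r][c], r, c))
    while heap:
        level, r, c = heapq.heappop(heap)
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and filled[nr][nc] is None:
                # A cell can never drain below the spill level of its path.
                filled[nr][nc] = max(dem[nr][nc], level)
                heapq.heappush(heap, (filled[nr][nc], nr, nc))
    return filled

# A 1 m pit in the centre of a 3x3 grid fills up to its 2 m rim.
dem = [[2, 2, 2],
       [2, 1, 2],
       [2, 2, 2]]
ponding = [[f - z for f, z in zip(fr, zr)]
           for fr, zr in zip(fill_depressions(dem), dem)]
```

The ponding depth (filled minus original elevation) marks where water would accumulate first, which is the kind of first insight the surface flow models above provide.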

  8. Mathematical and Numerical Analyses of Peridynamics for Multiscale Materials Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Du, Qiang [Pennsylvania State Univ., State College, PA (United States)

    2014-11-12

    The rational design of materials, the development of accurate and efficient material simulation algorithms, and the determination of the response of materials to environments and loads occurring in practice all require an understanding of mechanics at disparate spatial and temporal scales. The project addresses mathematical and numerical analyses for material problems for which relevant scales range from those usually treated by molecular dynamics all the way up to those most often treated by classical elasticity. The prevalent approach towards developing a multiscale material model couples two or more well known models, e.g., molecular dynamics and classical elasticity, each of which is useful at a different scale, creating a multiscale multi-model. However, the challenges behind such a coupling are formidable and largely arise because the atomistic and continuum models employ nonlocal and local models of force, respectively. The project focuses on a multiscale analysis of the peridynamics materials model. Peridynamics can be used as a transition between molecular dynamics and classical elasticity so that the difficulties encountered when directly coupling those two models are mitigated. In addition, in some situations, peridynamics can be used all by itself as a material model that accurately and efficiently captures the behavior of materials over a wide range of spatial and temporal scales. Peridynamics is well suited to these purposes because it employs a nonlocal model of force, analogous to that of molecular dynamics; furthermore, at sufficiently large length scales and assuming smooth deformation, peridynamics can be approximated by classical elasticity. The project will extend the emerging mathematical and numerical analysis of peridynamics. One goal is to develop a peridynamics-enabled multiscale multi-model that potentially provides a new and more extensive mathematical basis for coupling classical elasticity and molecular dynamics, thus enabling next
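The nonlocal force model that distinguishes peridynamics from classical elasticity can be sketched in a few lines. The following bond-based 1D example (micromodulus c, nodal volume dx, and all values hypothetical) sums pairwise bond forces over a finite horizon; under uniform stretch the interior bonds balance, mimicking equilibrium.

```python
def peridynamic_forces(x, u, horizon, c, dx):
    """Bond-based 1D peridynamics: the internal force at each node is
    a sum of bond forces over all neighbours within the horizon, each
    proportional to the bond stretch (a nonlocal analogue of strain)."""
    n = len(x)
    f = [0.0] * n
    for i in range(n):
        for j in range(n):
            if j == i:
                continue
            xi = x[j] - x[i]                      # reference bond
            if abs(xi) > horizon:
                continue                          # nonlocal, but finite range
            eta = u[j] - u[i]                     # relative displacement
            y = xi + eta                          # deformed bond
            stretch = (abs(y) - abs(xi)) / abs(xi)
            direction = 1.0 if y > 0 else -1.0
            f[i] += c * stretch * direction * dx  # dx acts as nodal volume
    return f

# Uniform 1% stretch of a bar: interior bonds cancel, ends are pulled inward.
x = [0.1 * k for k in range(11)]
u = [0.01 * xi for xi in x]
f = peridynamic_forces(x, u, horizon=0.25, c=1.0, dx=0.1)
```

At sufficiently small horizons this sum approaches the classical elastic response, which is the transition property the project exploits.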

  9. Modeling hard clinical end-point data in economic analyses.

    Science.gov (United States)

    Kansal, Anuraag R; Zheng, Ying; Palencia, Roberto; Ruffolo, Antonio; Hass, Bastian; Sorensen, Sonja V

    2013-11-01

    The availability of hard clinical end-point data, such as that on cardiovascular (CV) events among patients with type 2 diabetes mellitus, is increasing, and as a result there is growing interest in using hard end-point data of this type in economic analyses. This study investigated published approaches for modeling hard end-points from clinical trials and evaluated their applicability in health economic models with different disease features. A review of cost-effectiveness models of interventions in clinically significant therapeutic areas (CV diseases, cancer, and chronic lower respiratory diseases) was conducted in PubMed and Embase using a defined search strategy. Only studies integrating hard end-point data from randomized clinical trials were considered. For each study included, clinical input characteristics and modeling approach were summarized and evaluated. A total of 33 articles (23 CV, eight cancer, two respiratory) were accepted for detailed analysis. Decision trees, Markov models, discrete event simulations, and hybrids were used. Event rates were incorporated either as constant rates, time-dependent risks, or risk equations based on patient characteristics. Risks dependent on time and/or patient characteristics were used where major event rates were >1%/year in models with fewer health states. The detailed modeling information and terminology varied, sometimes requiring interpretation. Key considerations for cost-effectiveness models incorporating hard end-point data include the frequency and characteristics of the relevant clinical events and how the trial data are reported. When event risk is low, simplification of both the model structure and event rate modeling is recommended. When events are common, such as in high-risk populations, more detailed modeling approaches, including individual simulations or explicitly time-dependent event rates, are more appropriate to accurately reflect the trial data.
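One recurring detail behind the constant-rate versus time-dependent-risk distinction is converting a trial's reported event rate into a per-cycle transition probability. A minimal sketch, assuming a constant hazard (all values hypothetical):

```python
import math

def rate_to_probability(annual_rate, cycle_years):
    """Constant-hazard conversion of an event rate to a per-cycle
    transition probability: p = 1 - exp(-r * t)."""
    return 1.0 - math.exp(-annual_rate * cycle_years)

def probability_to_rate(p, cycle_years):
    """Inverse conversion, useful when trials report cumulative risk."""
    return -math.log(1.0 - p) / cycle_years

# A 2%/year CV event rate embedded in a 3-month Markov cycle.
p_cycle = rate_to_probability(0.02, 0.25)
```

Note that simply dividing an annual probability by four is only an approximation; the exponential form keeps probabilities consistent across cycle lengths, which matters more as event risk grows.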

  10. Analysing regenerative potential in zebrafish models of congenital muscular dystrophy.

    Science.gov (United States)

    Wood, A J; Currie, P D

    2014-11-01

    The congenital muscular dystrophies (CMDs) are a clinically and genetically heterogeneous group of muscle disorders. Clinically hypotonia is present from birth, with progressive muscle weakness and wasting through development. For the most part, CMDs can mechanistically be attributed to failure of basement membrane protein laminin-α2 sufficiently binding with correctly glycosylated α-dystroglycan. The majority of CMDs therefore arise as the result of either a deficiency of laminin-α2 (MDC1A) or hypoglycosylation of α-dystroglycan (dystroglycanopathy). Here we consider whether by filling a regenerative medicine niche, the zebrafish model can address the present challenge of delivering novel therapeutic solutions for CMD. In the first instance the readiness and appropriateness of the zebrafish as a model organism for pioneering regenerative medicine therapies in CMD is analysed, in particular for MDC1A and the dystroglycanopathies. Despite the recent rapid progress made in gene editing technology, these approaches have yet to yield any novel zebrafish models of CMD. Currently the most genetically relevant zebrafish models to the field of CMD, have all been created by N-ethyl-N-nitrosourea (ENU) mutagenesis. Once genetically relevant models have been established the zebrafish has several important facets for investigating the mechanistic cause of CMD, including rapid ex vivo development, optical transparency up to the larval stages of development and relative ease in creating transgenic reporter lines. Together, these tools are well suited for use in live-imaging studies such as in vivo modelling of muscle fibre detachment. Secondly, the zebrafish's contribution to progress in effective treatment of CMD was analysed. Two approaches were identified in which zebrafish could potentially contribute to effective therapies. The first hinges on the augmentation of functional redundancy within the system, such as upregulating alternative laminin chains in the candyfloss

  11. [Approach to depressogenic genes from genetic analyses of animal models].

    Science.gov (United States)

    Yoshikawa, Takeo

    2004-01-01

    Human depression or mood disorder is defined as a complex disease, making positional cloning of susceptibility genes a formidable task. We have undertaken genetic analyses of three different animal models for depression, comparing our results with advanced database resources. We first performed quantitative trait loci (QTL) analysis on two mouse models of "despair", namely, the forced swim test (FST) and tail suspension test (TST), and detected multiple chromosomal loci that control immobility time in these tests. Since one QTL detected on mouse chromosome 11 harbors the GABA A receptor subunit genes, we tested these genes for association in human mood disorder patients. We obtained significant associations of the alpha 1 and alpha 6 subunit genes with the disease, particularly in females. This result was striking, because we had previously detected an epistatic interaction between mouse chromosomes 11 and X that regulates immobility time in these animals. Next, we performed genome-wide expression analyses using a rat model of depression, learned helplessness (LH). We found that in the frontal cortex of LH rats, a disease implicated region, the LIM kinase 1 gene (Limk 1) showed greatest alteration, in this case down-regulation. By combining data from the QTL analysis of FST/TST and DNA microarray analysis of mouse frontal cortex, we identified adenylyl cyclase-associated CAP protein 1 (Cap 1) as another candidate gene for depression susceptibility. Both Limk 1 and Cap 1 are key players in the modulation of actin G-F conversion. In summary, our current study using animal models suggests disturbances of GABAergic neurotransmission and actin turnover as potential pathophysiologies for mood disorder.

  12. Structural model of risk factors for safety accidents in coal mines based on CA-SEM

    Institute of Scientific and Technical Information of China (English)

    汪刘凯; 孟祥瑞; 何叶荣; 王向前; 李慧宗

    2015-01-01

    Centering on the connotation of risk management for coal mine safety, factor analysis and hierarchical clustering analysis were used to identify the risks of safety accidents in coal mines, comprising 22 risk factors in 5 risk levels: human factor risk, management risk, information risk, environment risk and equipment risk. Taking the 6 risk factors in the human factor risk level as endogenous latent variables and the 16 risk factors in the other 4 levels as exogenous latent variables, a CA-SEM model of risk factors for safety accidents in coal mines was established. The comprehensive influence and effect mechanism of each risk level and each risk factor on the risk of safety accidents in coal mines were analysed using SPSS 17.0 and AMOS 7.0, so as to provide a decision basis for realizing the intrinsic safety of coal mines.

  13. Magnetic fabric analyses in analogue models of clays

    Science.gov (United States)

    García-Lasanta, Cristina; Román-Berdiel, Teresa; Izquierdo-Llavall, Esther; Casas-Sainz, Antonio

    2017-04-01

    Anisotropy of magnetic susceptibility (AMS) studies in sedimentary rocks subjected to deformation indicate that magnetic fabric orientation can be conditioned by multiple factors: sedimentary conditions, magnetic mineralogy, successive tectonic events, etc. All of these complicate the interpretation of AMS as a marker of deformation conditions. Analogue modeling makes it possible to isolate the variables that act in a geological process and to determine which factors influence the process, and to what extent. This study presents magnetic fabric analyses applied to several analogue models developed with common commercial red clays. This material resembles natural clays which, despite a greater degree of impurity and heterogeneity, have been shown to record a robust magnetic signal carried by a mixture of para- and ferromagnetic minerals. The magnetic behavior of the modeled clay has been characterized by temperature-dependent magnetic susceptibility curves (from 40 to 700°C). The measurements were performed combining a KLY-3S Kappabridge susceptometer with a CS3 furnace (AGICO Inc., Czech Republic). The results indicate the presence of a significant content of hematite as the ferromagnetic phase, as well as a remarkable paramagnetic fraction, probably constituted by phyllosilicates. This mineralogy is common in natural materials such as Permo-Triassic red facies, and magnetic fabric analyses of these natural examples have given consistent results in different tectonic contexts. In this study, sedimentary conditions and magnetic mineralogy are kept constant and the influence of the tectonic regime on the magnetic fabrics is analyzed. Our main objective is to reproduce several tectonic contexts (strike-slip and compression) in a sedimentary environment where the material is not yet compacted, in order to determine how tectonic conditions influence the magnetic fabric registered in each case.
By dispersing the clays in water and after allowing their
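AMS fabrics of this kind are conventionally summarized by scalar parameters computed from the principal susceptibilities k1 >= k2 >= k3. A minimal sketch of the Jelinek shape parameter T (oblate versus prolate) and the corrected degree of anisotropy Pj, assuming the eigenvalues are already known (the example values are hypothetical):

```python
import math

def ams_parameters(k1, k2, k3):
    """Jelinek (1981) fabric descriptors from principal
    susceptibilities k1 >= k2 >= k3:
      T in [-1, 1]: prolate (T < 0) versus oblate (T > 0) ellipsoids
      Pj >= 1: corrected degree of anisotropy."""
    n1, n2, n3 = math.log(k1), math.log(k2), math.log(k3)
    t_shape = (2 * n2 - n1 - n3) / (n1 - n3)
    nm = (n1 + n2 + n3) / 3.0
    pj = math.exp(math.sqrt(2 * ((n1 - nm) ** 2 +
                                 (n2 - nm) ** 2 +
                                 (n3 - nm) ** 2)))
    return t_shape, pj

# A sediment-like oblate fabric: k2 close to k1, k3 distinctly lower.
t_shape, pj = ams_parameters(1.02, 1.015, 0.965)
```

Tracking how T and Pj shift between the strike-slip and compression runs is one compact way to compare the fabrics the models register.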

  14. Modeling planetary seismic data for icy worlds and terrestrial planets with AxiSEM/Instaseis: Example data and a model for the Europa noise environment

    Science.gov (United States)

    Panning, Mark Paul; Stähler, Simon; Kedar, Sharon; van Driel, Martin; Nissen-Meyer, Tarje; Vance, Steve

    2016-10-01

    Seismology is one of our best tools for detailing the interior structure of planetary bodies, and seismometers are likely to be considered for future lander missions to other planetary bodies after the planned landing of InSight on Mars in 2018. In order to guide instrument design and mission requirements, however, it is essential to model likely seismic signals in advance to determine the most promising data needed to meet science goals. Seismic data for multiple planetary bodies can now be simulated rapidly for arbitrary source-receiver configurations to frequencies of 1 Hz and above using the numerical wave propagation codes AxiSEM and Instaseis (van Driel et al., 2015) with 1D models derived from thermodynamic constraints (e.g. Cammarano et al., 2006). We present simulations for terrestrial planets and icy worlds to demonstrate the types of seismic signals we may expect to retrieve. We also show an application that takes advantage of the computational strengths of this method to construct a model of the thermal cracking noise environment for Europa under a range of assumptions of activity levels and elastic and anelastic structure. M. van Driel, L. Krischer, S.C. Stähler, K. Hosseini, and T. Nissen-Meyer (2015), "Instaseis: instant global seismograms based on a broadband waveform database," Solid Earth, 6, 701-717, doi: 10.5194/se-6-701-2015. F. Cammarano, V. Lekic, M. Manga, M.P. Panning, and B.A. Romanowicz (2006), "Long-period seismology on Europa: 1. Physically consistent interior models," J. Geophys. Res., 111, E12009, doi: 10.1029/2006JE002710.

  15. Multi-state models: metapopulation and life history analyses

    Directory of Open Access Journals (Sweden)

    Arnason, A. N.

    2004-06-01

    Full Text Available Multi–state models are designed to describe populations that move among a fixed set of categorical states. The obvious application is to population interchange among geographic locations such as breeding sites or feeding areas (e.g., Hestbeck et al., 1991; Blums et al., 2003; Cam et al., 2004), but they are increasingly used to address important questions of evolutionary biology and life history strategies (Nichols & Kendall, 1995). In these applications, the states include life history stages such as breeding states. The multi–state models, by permitting estimation of stage–specific survival and transition rates, can help assess trade–offs between life history mechanisms (e.g. Yoccoz et al., 2000). These trade–offs are also important in meta–population analyses where, for example, the pre– and post–breeding rates of transfer among sub–populations can be analysed in terms of target colony distance, density, and other covariates (e.g., Lebreton et al. 2003; Breton et al., in review). Further examples of the use of multi–state models in analysing dispersal and life–history trade–offs can be found in the session on Migration and Dispersal. In this session, we concentrate on applications that did not involve dispersal. These applications fall in two main categories: those that address life history questions using stage categories, and a more technical use of multi–state models to address problems arising from the violation of mark–recapture assumptions leading to the potential for seriously biased predictions or misleading insights from the models. Our plenary paper, by William Kendall (Kendall, 2004), gives an overview of the use of Multi–state Mark–Recapture (MSMR) models to address two such violations. The first is the occurrence of unobservable states that can arise, for example, from temporary emigration or by incomplete sampling coverage of a target population. Such states can also occur for life history reasons, such
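The core object of a multi-state model is a matrix of stage- or site-specific survival and transition rates. A minimal sketch (a hypothetical two-colony example, not from any cited study) propagating a cohort through such a matrix:

```python
def propagate(state, matrix, steps):
    """Propagate a cohort distribution through a multi-state model.
    matrix[i][j] = probability of surviving and moving from state i to
    state j in one time step; rows may sum to less than 1, and the
    deficit is mortality or permanent emigration."""
    for _ in range(steps):
        state = [sum(state[i] * matrix[i][j] for i in range(len(state)))
                 for j in range(len(matrix[0]))]
    return state

# Hypothetical two-colony example: 80%/75% seasonal survival, with
# 10%/20% of survivors transferring to the other colony each season.
matrix = [[0.8 * 0.9, 0.8 * 0.1],
          [0.75 * 0.2, 0.75 * 0.8]]
cohort = propagate([1000.0, 0.0], matrix, steps=2)
```

MSMR estimation works in the opposite direction, recovering the survival and transition entries from encounter histories, but the forward propagation above is what the fitted parameters imply for the population.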

  16. Dipole model test with one superconducting coil; results analysed

    CERN Document Server

    Durante, M; Ferracin, P; Fessia, P; Gauthier, R; Giloux, C; Guinchard, M; Kircher, F; Manil, P; Milanese, A; Millot, J-F; Muñoz Garcia, J-E; Oberli, L; Perez, J-C; Pietrowicz, S; Rifflet, J-M; de Rijk, G; Rondeaux, F; Todesco, E; Viret, P; Ziemianski, D

    2013-01-01

    This report is the deliverable report 7.3.1 “Dipole model test with one superconducting coil; results analysed”. The report has four parts: “Design report for the dipole magnet”, “Dipole magnet structure tested in LN2”, “Nb3Sn strand procured for one dipole magnet” and “One test double pancake copper coil made”. The four report parts show that, although the magnet construction will be only completed by end 2014, all elements are present for a successful completion. Due to the importance of the project for the future of the participants and given the significant investments done by the participants, there is a full commitment to finish the project.

  17. Dipole model test with one superconducting coil: results analysed

    CERN Document Server

    Bajas, H; Benda, V; Berriaud, C; Bajko, M; Bottura, L; Caspi, S; Charrondiere, M; Clément, S; Datskov, V; Devaux, M; Durante, M; Fazilleau, P; Ferracin, P; Fessia, P; Gauthier, R; Giloux, C; Guinchard, M; Kircher, F; Manil, P; Milanese, A; Millot, J-F; Muñoz Garcia, J-E; Oberli, L; Perez, J-C; Pietrowicz, S; Rifflet, J-M; de Rijk, G; Rondeaux, F; Todesco, E; Viret, P; Ziemianski, D

    2013-01-01

    This report is the deliverable report 7.3.1 “Dipole model test with one superconducting coil; results analysed”. The report has four parts: “Design report for the dipole magnet”, “Dipole magnet structure tested in LN2”, “Nb3Sn strand procured for one dipole magnet” and “One test double pancake copper coil made”. The four report parts show that, although the magnet construction will be only completed by end 2014, all elements are present for a successful completion. Due to the importance of the project for the future of the participants and given the significant investments done by the participants, there is a full commitment to finish the project.

  18. Incorporating flood event analyses and catchment structures into model development

    Science.gov (United States)

    Oppel, Henning; Schumann, Andreas

    2016-04-01

    The space-time variability of catchment response results from several hydrological processes which differ in their relevance in an event-specific way. An approach to characterising this variability consists of comparisons between flood events in a catchment and between the flood responses of several sub-basins in such an event. In analytical frameworks, the impact of the space-time variability of rainfall on runoff generation due to rainfall excess can be characterised. Moreover, the effect of hillslope and channel network routing on runoff timing can be specified. Hence, a modelling approach is needed to specify runoff generation and formation. Knowing the space-time variability of rainfall and the (spatially averaged) response of a catchment, it seems worthwhile to develop new models based on event and catchment analyses. The consideration of spatial order and of the distribution of catchment characteristics, in their spatial variability and interaction with the space-time variability of rainfall, provides additional knowledge about hydrological processes at the basin scale. For this purpose, a new procedure was developed to characterise the spatial heterogeneity of catchment characteristics in their succession along the flow distance (differentiated between river network and hillslopes). It was applied in a study of flood responses in a set of nested catchments in a river basin in eastern Germany. In this study the largest observed rainfall-runoff events were analysed, beginning at the catchment outlet and moving upstream. With regard to the spatial heterogeneities of catchment characteristics, sub-basins were separated by new algorithms to attribute runoff-generation, hillslope and river network processes. With this procedure the cumulative runoff response at the outlet can be decomposed and individual runoff features can be assigned to individual aspects of the catchment. Through comparative analysis between the sub-catchments and the assigned effects on runoff dynamics new

  19. A semantic model for electronic publishing

    Directory of Open Access Journals (Sweden)

    Carlos Henrique Marcondes

    2011-03-01

    Full Text Available Electronic publications, despite advances in information technology, are still modelled on print. The textual format prevents programs from processing their content "semantically". A "semantic" model of electronic scientific publications is proposed, in which the conclusions contained in the text of an article are provided by the authors and represented in a format "intelligible" to programs, enabling semantic retrieval and the identification of evidence of new scientific discoveries and of inconsistencies in this knowledge. The model is based on the concepts of the deep, or semantic, structure of language (CHOMSKY, 1975), of microstructure, macrostructure and superstructure (KINTSH, VAN DIJK, 1972), on the rhetorical structure of scientific articles (HUTCHINS, 1977), (GROSS, 1990), and on the elements of scientific methodology, such as problem, question, objective, hypothesis, experiment and conclusion. It results from the analysis of 89 biomedical articles. A prototype system partially implementing the model was developed. Questionnaires to authors informed the development of the prototype, which was then tested with researcher-authors. Four patterns of reasoning and chaining of the semantic elements in scientific articles were identified. The content model was implemented as a computational ontology. A prototype web interface for article submission by authors to an electronic journal publishing system implementing the model was developed and evaluated. Keywords: electronic publishing; scientific methodology; scholarly communication; knowledge representation; ontologies; semantic content processing; e-Science
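The content model described above can be pictured as a machine-readable record of an article's methodological elements. The sketch below is purely illustrative: the class name, fields and triple representation are assumptions, not the paper's actual ontology.

```python
from dataclasses import dataclass, field

@dataclass
class SemanticArticle:
    """Hypothetical machine-readable record for the semantic elements
    (problem, question, objective, hypothesis, experiment, conclusion)
    that the model asks authors to supply at submission time."""
    title: str
    problem: str
    question: str
    objective: str
    hypothesis: str
    experiment: str
    conclusion: str
    keywords: list = field(default_factory=list)

    def as_triples(self):
        """Flatten to (article, element, text) triples, the kind of
        representation an ontology or RDF store could ingest."""
        return [(self.title, name, getattr(self, name))
                for name in ("problem", "question", "objective",
                             "hypothesis", "experiment", "conclusion")]

article = SemanticArticle(
    title="Example study",
    problem="Print-based formats block semantic processing",
    question="Can conclusions be represented formally?",
    objective="Define a machine-readable content model",
    hypothesis="Authors can supply structured conclusions",
    experiment="Prototype tested with researcher-authors",
    conclusion="Structured submission is feasible",
)
```

Once the elements are explicit like this, programs can compare conclusions across articles, which is the semantic retrieval and inconsistency detection the abstract describes.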

  20. SEM microcharacterization of semiconductors

    CERN Document Server

    Holt, D B

    1989-01-01

    Applications of SEM techniques of microcharacterization have proliferated to cover every type of material and virtually every branch of science and technology. This book emphasizes the fundamental physical principles. The first section deals with the foundation of microcharacterization in electron beam instruments and the second deals with the interpretation of the information obtained in the main operating modes of a scanning electron microscope.

  1. A theoretical model for analysing gender bias in medicine

    Directory of Open Access Journals (Sweden)

    Johansson Eva E

    2009-08-01

    Full Text Available Abstract During the last decades, research has reported unmotivated differences in the treatment of women and men in various areas of clinical and academic medicine. There is an ongoing discussion on how to avoid such gender bias. We developed a three-step theoretical model to understand how gender bias in medicine can occur and be understood. In this paper we present the model and discuss its usefulness in the efforts to avoid gender bias. In the model, gender bias is analysed in relation to assumptions concerning difference/sameness and equity/inequity between women and men. Our model illustrates that gender bias in medicine can arise from assuming sameness and/or equity between women and men when there are genuine differences to consider, in biology and disease as well as in life conditions and experiences. However, gender bias can also arise from assuming differences when there are none, when and if dichotomous stereotypes about women and men are understood as valid. This conceptual thinking can be useful for discussing and avoiding gender bias in clinical work, medical education, career opportunities and documents such as research programs and health care policies. To meet the various forms of gender bias, different facts and measures are needed. Knowledge about biological differences between women and men will not reduce bias caused by gendered stereotypes or by unawareness of health problems and discrimination associated with gender inequity. Such bias reflects unawareness of gendered attitudes and will not change by facts only. We suggest consciousness-raising activities and continuous reflection on gender attitudes among students, teachers, researchers and decision-makers.

  2. An Illumination Modeling System for Human Factors Analyses

    Science.gov (United States)

    Huynh, Thong; Maida, James C.; Bond, Robert L. (Technical Monitor)

    2002-01-01

    Seeing is critical to human performance. Lighting is critical for seeing. Therefore, lighting is critical to human performance. This is common sense, and here on earth, it is easily taken for granted. However, on orbit, because the sun will rise or set every 45 minutes on average, humans working in space must cope with extremely dynamic lighting conditions. Contrast conditions of harsh shadowing and glare are also severe. The prediction of lighting conditions for critical operations is essential. Crew training can factor lighting into lesson plans when necessary. Mission planners can determine whether low-light video cameras are required or whether additional luminaires need to be flown. The optimization of the quantity and quality of light is needed because of the effects on crew safety, on electrical power and on equipment maintainability. To address all of these issues, an illumination modeling system has been developed by the Graphics Research and Analyses Facility (GRAF) and the Lighting Environment Test Facility (LETF) in the Space Human Factors Laboratory at NASA Johnson Space Center. The system uses physically based ray tracing software (Radiance) developed at Lawrence Berkeley Laboratories, a human factors oriented geometric modeling system (PLAID) and an extensive database of humans and environments. Material reflectivity properties of major and critical surfaces are measured using a gonio-reflectometer. Luminaires (lights) are measured for beam spread distribution, color and intensity. Video camera performance is measured for color and light sensitivity. 3D geometric models of humans and the environment are combined with the material and light models to form a system capable of predicting lighting and visibility conditions in space.
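At the core of any such illumination model is the point-source inverse-square and cosine law. A minimal sketch (not the Radiance engine; the luminaire values are hypothetical):

```python
import math

def illuminance(intensity_cd, distance_m, incidence_deg):
    """Point-source illuminance on a surface, in lux:
    E = I * cos(theta) / d^2  (inverse-square and cosine laws),
    where I is luminous intensity in candela, d the distance in
    metres, and theta the angle of incidence from the surface normal."""
    return intensity_cd * math.cos(math.radians(incidence_deg)) / distance_m ** 2

# A 1000 cd luminaire, 2 m away, with light arriving 60 degrees off-normal.
e = illuminance(1000.0, 2.0, 60.0)
```

Ray tracers like Radiance add reflectance, occlusion and interreflection on top of this, but the rapid falloff with distance and grazing incidence already explains why orbital lighting conditions swing so sharply.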

  3. Comparison of two potato simulation models under climate change. I. Model calibration and sensitivity analyses

    NARCIS (Netherlands)

    Wolf, J.

    2002-01-01

    To analyse the effects of climate change on potato growth and production, both a simple growth model, POTATOS, and a comprehensive model, NPOTATO, were applied. Both models were calibrated and tested against results from experiments and variety trials in The Netherlands. The sensitivity of model

  4. Space Experiment Module (SEM)

    Science.gov (United States)

    Brodell, Charles L.

    1999-01-01

    The Space Experiment Module (SEM) Program is an education initiative sponsored by the National Aeronautics and Space Administration (NASA) Shuttle Small Payloads Project. The program provides nationwide educational access to space for Kindergarten through University level students. The SEM program focuses on the science of zero-gravity and microgravity. Within the program, NASA provides small containers or "modules" for students to fly experiments on the Space Shuttle. The experiments are created, designed, built, and implemented by students with teacher and/or mentor guidance. Student experiment modules are flown in a "carrier" which resides in the cargo bay of the Space Shuttle. The carrier supplies power to, and the means to control and collect data from each experiment.

  5. Micromechanical Failure Analyses for Finite Element Polymer Modeling

    Energy Technology Data Exchange (ETDEWEB)

    CHAMBERS,ROBERT S.; REEDY JR.,EARL DAVID; LO,CHI S.; ADOLF,DOUGLAS B.; GUESS,TOMMY R.

    2000-11-01

    Polymer stresses around sharp corners and in constrained geometries of encapsulated components can generate cracks leading to system failures. Often, analysts use maximum stresses as a qualitative indicator for evaluating the strength of encapsulated component designs. Although this approach has been useful for making relative comparisons when screening prospective design changes, it has not been tied quantitatively to failure. Accurate failure models are needed for analyses to predict whether encapsulated components meet life cycle requirements. With Sandia's recently developed nonlinear viscoelastic polymer models, it has been possible to examine more accurately the local stress-strain distributions in zones of likely failure initiation, looking for physically based failure mechanisms and continuum metrics that correlate with the cohesive failure event. This study has identified significant differences between rubbery and glassy failure mechanisms that suggest reasonable alternatives for cohesive failure criteria and metrics. Rubbery failure seems best characterized by the mechanism of finite extensibility and appears to correlate with maximum strain predictions. Glassy failure, however, seems driven by cavitation and correlates with the maximum hydrostatic tension. Using these metrics, two three-point bending geometries were tested and analyzed under variable loading rates, different temperatures and comparable mesh resolution (i.e., accuracy) to make quantitative failure predictions. The resulting predictions and observations agreed well, suggesting the need for additional research. In a separate, additional study, the asymptotically singular stress state found at the tip of a rigid, square inclusion embedded within a thin, linear elastic disk was determined for uniform cooling. The singular stress field is characterized by a single stress intensity factor K_a, and the applicable K_a calibration relationship has been determined for both fully bonded and
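The two candidate metrics can be computed directly from a stress tensor. A minimal sketch (values hypothetical) contrasting hydrostatic tension, the proposed glassy-failure driver, with the von Mises equivalent stress, which ignores the hydrostatic part:

```python
def hydrostatic_stress(sig):
    """Hydrostatic stress = trace(sigma)/3; positive values indicate
    tension, the state associated here with glassy cavitation."""
    return (sig[0][0] + sig[1][1] + sig[2][2]) / 3.0

def von_mises(sig):
    """Von Mises equivalent stress from a symmetric 3x3 tensor, shown
    for contrast: it depends only on the deviatoric part, so it is
    blind to pure hydrostatic loading."""
    p = hydrostatic_stress(sig)
    dev = [[sig[i][j] - (p if i == j else 0.0) for j in range(3)]
           for i in range(3)]
    j2 = 0.5 * sum(dev[i][j] ** 2 for i in range(3) for j in range(3))
    return (3.0 * j2) ** 0.5

# Pure hydrostatic tension: a strong cavitation driver, zero von Mises.
sig = [[100.0, 0.0, 0.0],
       [0.0, 100.0, 0.0],
       [0.0, 0.0, 100.0]]
p_h = hydrostatic_stress(sig)
```

This contrast is the practical point: a von Mises-based criterion would rank the hydrostatic state above as harmless, while the hydrostatic-tension metric flags it as the most dangerous case for glassy encapsulants.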

  6. Dual FIB-SEM 3D Imaging and Lattice Boltzmann Modeling of Porosimetry and Multiphase Flow in Chalk

    Science.gov (United States)

    Rinehart, A. J.; Yoon, H.; Dewers, T. A.; Heath, J. E.; Petrusak, R.

    2010-12-01

    Mercury intrusion porosimetry (MIP) is an often-applied technique for determining pore throat distributions and seal analysis of fine-grained rocks. Due to closure effects, potential pore collapse, and complex pore network topologies, MIP data interpretation can be ambiguous, and often biased toward smaller pores in the distribution. We apply 3D imaging techniques and lattice-Boltzmann modeling in interpreting MIP data for samples of the Cretaceous Selma Group Chalk. In the Mississippi Interior Salt Basin, the Selma Chalk is the apparent seal for oil and gas fields in the underlying Eutaw Fm., and, where unfractured, the Selma Chalk is one of the regional-scale seals identified by the Southeast Regional Carbon Sequestration Partnership for CO2 injection sites. Dual focused ion beam - scanning electron microscopy (FIB-SEM) and laser scanning confocal microscopy methods are used for 3D imaging of nanometer-to-micron scale microcrack and pore distributions in the Selma Chalk. A combination of image analysis software is used to obtain geometric pore body and throat distributions and other topological properties, which are compared to MIP results. 3D data sets of pore-microfracture networks are used in lattice Boltzmann simulations of drainage (wetting fluid displaced by non-wetting fluid via the Shan-Chen algorithm), which in turn are used to model MIP procedures. Results are used in interpreting MIP results, understanding microfracture-matrix interaction during multiphase flow, and seal analysis for underground CO2 storage. This work was supported by the US Department of Energy, Office of Basic Energy Sciences as part of an Energy Frontier Research Center. Sandia National Laboratories is a multi-program laboratory operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Company, for the U.S. Department of Energy’s National Nuclear Security Administration under contract DE-AC04-94AL85000.
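Before comparing MIP curves with image-derived pore geometry, each intrusion pressure is conventionally mapped to an equivalent pore-throat radius via the Washburn equation. A minimal sketch (the surface tension and contact angle defaults are typical textbook values for mercury, not parameters from this study):

```python
import math

def washburn_radius(pressure_pa, surface_tension=0.485, contact_angle_deg=140.0):
    """Pore-throat radius (m) intruded at a given mercury pressure (Pa),
    via the Washburn equation r = -2*gamma*cos(theta)/P.
    Default gamma (N/m) and contact angle (deg) are commonly assumed
    values for mercury; actual laboratory parameters may differ."""
    return -2.0 * surface_tension * math.cos(math.radians(contact_angle_deg)) / pressure_pa

# ~100 MPa of intrusion pressure probes pore throats of a few nanometres,
# the scale resolved here by FIB-SEM imaging.
r = washburn_radius(100e6)
print(f"{r * 1e9:.1f} nm")
```

Higher pressures probe ever smaller throats, which is one reason MIP distributions are biased toward the small end when pore collapse occurs.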

  7. Analyses on Four Models and Cases of Enterprise Informatization

    Institute of Scientific and Technical Information of China (English)

    Shi Chunsheng(石春生); Han Xinjuan; Yang Cuilan; Zhao Dongbai

    2003-01-01

    The basic conditions of enterprise informatization in Heilongjiang province are analyzed, and four models are designed to drive the informatization of industrial and commercial enterprises. The four models are the Resource Integration Informatization Model, the Flow Management Informatization Model, the Intranet E-commerce Informatization Model and the Network Enterprise Informatization Model. The conditions for using these four models, and the problems needing attention, are also analyzed.

  8. OPC model data collection for 45-nm technology node using automatic CD-SEM offline recipe creation

    Science.gov (United States)

    Fischer, Daniel; Talbi, Mohamed; Wei, Alex; Menadeva, Ovadya; Cornell, Roger

    2007-03-01

    Optical and Process Correction in the 45nm node is requiring an ever higher level of characterization. The greater complexity drives a need for automation of the metrology process, allowing more efficient, accurate and effective use of the engineering resources and metrology tool time in the fab, and helping to satisfy what seems an insatiable appetite for data by lithographers and modelers charged with development of 45nm and 32nm processes. The scope of the work referenced here is a 45nm design cycle "full-loop automation", starting with GDS-formatted target design layout and ending with the necessary feedback of one- and two-dimensional printed wafer metrology. In this paper the authors consider the key elements of software, algorithmic framework and Critical Dimension Scanning Electron Microscope (CD-SEM) functionality necessary to automate its recipe creation. We evaluate specific problems with the methodology of the former art, "on-tool, on-wafer" recipe construction, and discuss how the implementation of design-based recipe generation improves upon the overall metrology process. Individual target-by-target construction, a one-template-fits-all approach to pattern recognition, blind navigation to the desired measurement feature, lengthy on-tool sessions to construct recipes, and a limited ability to determine measurement quality in the resultant data set are each discussed in terms of how the state-of-the-art Design Based Metrology (DBM) approach improves upon them. The offline-created recipes have shown pattern recognition success rates of up to 100% and measurement success rates of up to 93% for line/space as well as for 2D minimum/maximum measurements, without manual assists during measurement.

  9. Testing Mediation Using Multiple Regression and Structural Equation Modeling Analyses in Secondary Data

    Science.gov (United States)

    Li, Spencer D.

    2011-01-01

    Mediation analysis in child and adolescent development research is possible using large secondary data sets. This article provides an overview of two statistical methods commonly used to test mediated effects in secondary analysis: multiple regression and structural equation modeling (SEM). Two empirical studies are presented to illustrate the…
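The multiple-regression route to a mediated effect is usually the product-of-coefficients approach: regress the mediator on the predictor (a-path), then the outcome on both (b-path plus direct effect), and take a*b as the indirect effect. A minimal sketch on synthetic data (the variable names and effect sizes are illustrative, not drawn from the studies in the article):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
x = rng.normal(size=n)                        # predictor
m = 0.5 * x + rng.normal(size=n)              # mediator: true a-path = 0.5
y = 0.4 * m + 0.2 * x + rng.normal(size=n)    # outcome: true b-path = 0.4, direct = 0.2

def ols_slopes(target, *regressors):
    """Least-squares slopes (intercept excluded) of target on the regressors."""
    X = np.column_stack([np.ones(len(target))] + list(regressors))
    beta, *_ = np.linalg.lstsq(X, target, rcond=None)
    return beta[1:]

a = ols_slopes(m, x)[0]           # X -> M
b, direct = ols_slopes(y, m, x)   # M -> Y controlling for X, plus direct effect
indirect = a * b                  # product-of-coefficients estimate of mediation
print(round(indirect, 2), round(direct, 2))
```

An SEM treatment estimates the same paths simultaneously and adds fit statistics and standard errors for the indirect effect (e.g. via the delta method or bootstrapping).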

  10. Comparing SVARs and SEMs : more shocking stories

    NARCIS (Netherlands)

    Jacobs, Jan; Wallis, Kenneth F.

    2002-01-01

    The structural vector autoregression (SVAR) and simultaneous equation macroeconometric model (SEM) styles of empirical macroeconomic modelling are compared and contrasted, with reference to two models of the UK economy, namely the Cambridge long-run structural VAR model and the COMPACT model.

  11. Mathematical and Numerical Analyses of Peridynamics for Multiscale Materials Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Gunzburger, Max [Florida State Univ., Tallahassee, FL (United States)

    2015-02-17

    We have treated the modeling, analysis, numerical analysis, and algorithmic development for nonlocal models of diffusion and mechanics. Variational formulations were developed and finite element methods were developed based on those formulations for both steady state and time dependent problems. Obstacle problems and optimization problems for the nonlocal models were also treated and connections made with fractional derivative models.

  12. Unmix 6.0 Model for environmental data analyses

    Science.gov (United States)

    Unmix Model is a mathematical receptor model developed by EPA scientists that provides scientific support for the development and review of the air and water quality standards, exposure research, and environmental forensics.

  13. Analysing Models as a Knowledge Technology in Transport Planning

    DEFF Research Database (Denmark)

    Gudmundsson, Henrik

    2011-01-01

    Models belong to a wider family of knowledge technologies, applied in the transport area. Models sometimes share with other such technologies the fate of not being used as intended, or not at all. The result may be ill-conceived plans as well as wasted resources. Frequently, the blame for such a ... critical analytic literature on knowledge utilization and policy influence. A simple scheme based in this literature is drawn up to provide a framework for discussing the interface between urban transport planning and model use. A successful example of model use in Stockholm, Sweden is used as a heuristic...

  14. Analyses of Tsunami Events using Simple Propagation Models

    Science.gov (United States)

    Chilvery, Ashwith Kumar; Tan, Arjun; Aggarwal, Mohan

    2012-03-01

    Tsunamis exhibit the characteristics of ``canal waves'' or ``gravity waves'' which belong to the class of ``long ocean waves on shallow water.'' The memorable tsunami events, including the 2004 Indian Ocean tsunami and the 2011 Pacific Ocean tsunami off the coast of Japan, are analyzed by constructing simple tsunami propagation models, including the following: (1) one-dimensional propagation model; (2) two-dimensional propagation model on a flat surface; (3) two-dimensional propagation model on a spherical surface; and (4) a finite line-source model on a two-dimensional surface. It is shown that Model 1 explains the basic features of the tsunami, including the propagation speed, depth of the ocean, dispersion-less propagation and bending of tsunamis around obstacles. Models 2 and 3 explain the observed amplitude variations for long-distance tsunami propagation across the Pacific Ocean, including the effect of the equatorial ocean current on the arrival times. Model 3 further explains the enhancement effect on the amplitude due to the curvature of the Earth past the equatorial distance. Finally, Model 4 explains the devastating effect of the superposition of tsunamis from two subduction events, which struck the Phuket region during the 2004 Indian Ocean tsunami.
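The defining property of such long waves on shallow water, shared by all four models above, is the dispersion-free phase speed c = sqrt(g*d). A quick sketch (the 4000 m depth and 4500 km path are round illustrative numbers, not values from the paper):

```python
import math

def tsunami_speed(depth_m, g=9.81):
    """Phase speed (m/s) of a long gravity wave on shallow water: c = sqrt(g*d)."""
    return math.sqrt(g * depth_m)

# Over a 4000 m deep ocean the wave travels at roughly 198 m/s (~700 km/h),
# so a 4500 km open-ocean path takes on the order of six hours.
c = tsunami_speed(4000.0)
hours = 4.5e6 / c / 3600.0
print(f"{c:.0f} m/s, {hours:.1f} h")
```

Because c depends only on depth, not wavelength, the wave propagates without dispersion, which is why arrival times map so directly onto ocean bathymetry.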

  15. Analysing the Linux kernel feature model changes using FMDiff

    NARCIS (Netherlands)

    Dintzner, N.J.R.; Van Deursen, A.; Pinzger, M.

    2015-01-01

    Evolving a large-scale, highly variable system is a challenging task. For such a system, evolution operations often require consistent updates to both the implementation and its feature model. In this context, the evolution of the feature model closely follows the evolution of the system. The pur

  16. Hyperelastic Modelling and Finite Element Analysing of Rubber Bushing

    Directory of Open Access Journals (Sweden)

    Merve Yavuz ERKEK

    2015-03-01

    Full Text Available The objective of this paper is to obtain stiffness curves of rubber bushings, which are used in the automotive industry, with a hyperelastic finite element model. Hyperelastic material models were obtained from different material tests. Stress and strain values and static stiffness curves were determined. It is shown that the static stiffness curves are nonlinear. The level of stiffness affects vehicle dynamics behaviour.
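The nonlinearity of the static stiffness curve follows directly from hyperelastic constitutive laws. As an illustration, here is a sketch of the simplest such law, the incompressible neo-Hookean model under uniaxial tension; this stands in for whichever hyperelastic models the authors fitted, and the shear modulus value is arbitrary:

```python
def neo_hookean_uniaxial(stretch, mu=1.0):
    """Nominal (engineering) stress for an incompressible neo-Hookean solid
    under uniaxial tension: P = mu * (lambda - lambda**-2).
    mu is the shear modulus (illustrative value, units arbitrary)."""
    return mu * (stretch - stretch ** -2)

def tangent_stiffness(stretch, mu=1.0, h=1e-6):
    """Numerical dP/dlambda, showing that stiffness varies with stretch."""
    return (neo_hookean_uniaxial(stretch + h, mu)
            - neo_hookean_uniaxial(stretch - h, mu)) / (2 * h)

# Tangent stiffness drops from 3*mu at zero strain toward mu at large stretch,
# i.e. the stiffness curve is inherently nonlinear.
for lam in (1.0, 1.5, 2.0):
    print(lam, round(neo_hookean_uniaxial(lam), 3), round(tangent_stiffness(lam), 3))
```

A finite element bushing model adds geometry and contact, but the material-level nonlinearity already guarantees a nonlinear force-deflection curve.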

  17. Modelling theoretical uncertainties in phenomenological analyses for particle physics

    CERN Document Server

    Charles, Jérôme; Niess, Valentin; Silva, Luiz Vale

    2016-01-01

    The determination of the fundamental parameters of the Standard Model (and its extensions) is often limited by the presence of statistical and theoretical uncertainties. We present several models for the latter uncertainties (random, nuisance, external) in the frequentist framework, and we derive the corresponding $p$-values. In the case of the nuisance approach where theoretical uncertainties are modeled as biases, we highlight the important, but arbitrary, issue of the range of variation chosen for the bias parameters. We introduce the concept of adaptive $p$-value, which is obtained by adjusting the range of variation for the bias according to the significance considered, and which allows us to tackle metrology and exclusion tests with a single and well-defined unified tool, which exhibits interesting frequentist properties. We discuss how the determination of fundamental parameters is impacted by the model chosen for theoretical uncertainties, illustrating several issues with examples from quark flavour p...

  18. Modeling theoretical uncertainties in phenomenological analyses for particle physics

    Energy Technology Data Exchange (ETDEWEB)

    Charles, Jerome [CNRS, Aix-Marseille Univ, Universite de Toulon, CPT UMR 7332, Marseille Cedex 9 (France); Descotes-Genon, Sebastien [CNRS, Univ. Paris-Sud, Universite Paris-Saclay, Laboratoire de Physique Theorique (UMR 8627), Orsay Cedex (France); Niess, Valentin [CNRS/IN2P3, UMR 6533, Laboratoire de Physique Corpusculaire, Aubiere Cedex (France); Silva, Luiz Vale [CNRS, Univ. Paris-Sud, Universite Paris-Saclay, Laboratoire de Physique Theorique (UMR 8627), Orsay Cedex (France); Univ. Paris-Sud, CNRS/IN2P3, Universite Paris-Saclay, Groupe de Physique Theorique, Institut de Physique Nucleaire, Orsay Cedex (France); J. Stefan Institute, Jamova 39, P. O. Box 3000, Ljubljana (Slovenia)

    2017-04-15

    The determination of the fundamental parameters of the Standard Model (and its extensions) is often limited by the presence of statistical and theoretical uncertainties. We present several models for the latter uncertainties (random, nuisance, external) in the frequentist framework, and we derive the corresponding p values. In the case of the nuisance approach where theoretical uncertainties are modeled as biases, we highlight the important, but arbitrary, issue of the range of variation chosen for the bias parameters. We introduce the concept of adaptive p value, which is obtained by adjusting the range of variation for the bias according to the significance considered, and which allows us to tackle metrology and exclusion tests with a single and well-defined unified tool, which exhibits interesting frequentist properties. We discuss how the determination of fundamental parameters is impacted by the model chosen for theoretical uncertainties, illustrating several issues with examples from quark flavor physics. (orig.)

  19. Analysing earthquake slip models with the spatial prediction comparison test

    KAUST Repository

    Zhang, L.

    2014-11-10

    Earthquake rupture models inferred from inversions of geophysical and/or geodetic data exhibit remarkable variability due to uncertainties in modelling assumptions, the use of different inversion algorithms, or variations in data selection and data processing. A robust statistical comparison of different rupture models obtained for a single earthquake is needed to quantify the intra-event variability, both for benchmark exercises and for real earthquakes. The same approach may be useful to characterize (dis-)similarities in events that are typically grouped into a common class of events (e.g. moderate-size crustal strike-slip earthquakes or tsunamigenic large subduction earthquakes). For this purpose, we examine the performance of the spatial prediction comparison test (SPCT), a statistical test developed to compare spatial (random) fields by means of a chosen loss function that describes an error relation between a 2-D field (‘model’) and a reference model. We implement and calibrate the SPCT approach for a suite of synthetic 2-D slip distributions, generated as spatial random fields with various characteristics, and then apply the method to results of a benchmark inversion exercise with known solution. We find the SPCT to be sensitive to different spatial correlation lengths, and different heterogeneity levels of the slip distributions. The SPCT approach proves to be a simple and effective tool for ranking the slip models with respect to a reference model.

  20. Assessment of a geological model by surface wave analyses

    Science.gov (United States)

    Martorana, R.; Capizzi, P.; Avellone, G.; D'Alessandro, A.; Siragusa, R.; Luzio, D.

    2017-02-01

    A set of horizontal to vertical spectral ratio (HVSR) and multichannel analysis of surface waves (MASW) measurements, carried out in the Altavilla Milicia (Sicily) area, is analyzed to test a geological model of the area. Statistical techniques have been used in different stages of the data analysis to optimize the reliability of the information extracted from geophysical measurements. In particular, cluster analysis algorithms have been implemented to select the time windows of the microseismic signal to be used for calculating the H/V spectral ratio and to identify sets of spectral ratio peaks likely caused by the same underground structures. Using results of reflection seismic lines, typical values of P-wave and S-wave velocity were estimated for each geological formation present in the area. These were used to narrow down the research space of parameters for the HVSR interpretation. MASW profiles have been carried out close to each HVSR measuring point, providing the parameters of the shallower layers for the HVSR models. MASW inversion has been constrained by extrapolating thicknesses from a known stratigraphic sequence. Preliminary 1D seismic models were obtained by adding deeper layers to models that resulted from MASW inversion. These justify the peaks of the HVSR curves due to layers deeper than the MASW investigation depth. Furthermore, much deeper layers were included in the HVSR model, as suggested by the geological setting and stratigraphic sequence. This choice was made considering that these latter layers do not generate other HVSR peaks and do not significantly affect the misfit. The starting models have been used to limit the starting research space for a more accurate interpretation, made considering the noise as a superposition of Rayleigh and Love waves. Results allowed us to recognize four main seismic layers and to associate them with the main stratigraphic successions. The lateral correlation of seismic velocity models, joined with tectonic evidence
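The core HVSR computation is the ratio of the horizontal to the vertical amplitude spectrum of ambient noise. A bare-bones sketch (real processing adds windowing, spectral smoothing, and the window selection by clustering described above; the synthetic 2 Hz resonance is purely illustrative):

```python
import numpy as np

def hvsr(north, east, vert, fs):
    """H/V ratio: geometric mean of the N and E amplitude spectra divided
    by the vertical amplitude spectrum. The DC bin is dropped."""
    freqs = np.fft.rfftfreq(len(vert), d=1.0 / fs)
    h = np.sqrt(np.abs(np.fft.rfft(north)) * np.abs(np.fft.rfft(east)))
    v = np.abs(np.fft.rfft(vert))
    return freqs[1:], h[1:] / v[1:]

# Synthetic check: a 2 Hz signal on the horizontal components only
# should produce an H/V peak near 2 Hz.
fs = 100.0
t = np.arange(0, 60, 1 / fs)
rng = np.random.default_rng(1)
north = np.sin(2 * np.pi * 2.0 * t) + rng.normal(scale=0.1, size=t.size)
east = np.sin(2 * np.pi * 2.0 * t) + rng.normal(scale=0.1, size=t.size)
vert = rng.normal(scale=0.1, size=t.size)
f, ratio = hvsr(north, east, vert, fs)
idx = int(np.argmin(np.abs(f - 2.0)))
print(f"H/V at 2 Hz: {ratio[idx]:.0f} (median elsewhere ~{np.median(ratio):.1f})")
```

In field practice the peak frequency of such a ratio is interpreted in terms of impedance contrasts at depth, which is what the 1D models above are fitted to.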

  1. Structural Equation Modeling (SEM) of Performance Evaluation Indices in General Directorate of Youth and Sport of Guilan Province with Partial Least Squares (PLS)

    Directory of Open Access Journals (Sweden)

    Hamidreza Goharrostami

    2016-10-01

    Full Text Available Purpose: to evaluate the performance evaluation indexes of the general directorate of youth and sport of Guilan province using the BSC approach. Material: This was a descriptive, field-based survey. The population included managers and experts from the general directorate of youth and sport of Guilan province. Purposive sampling was used. A questionnaire was used to collect data. Content validity was approved by experts, and reliability by Cronbach's alpha test (0.89). For data analysis and model fitting, structural equation modeling (SEM) with PLS software was used. Results: the performance evaluation model of the general directorate of youth and sport of Guilan province has four factors, 12 dimensions and 55 indicators: the learning and development factor has 4 dimensions and 13 indicators, internal processes have 4 dimensions and 23 indicators, the financial factor has 2 dimensions and 7 indicators, and customer and sport results have 2 dimensions and 12 indicators. Internal processes, customer and sporting results, learning and development, and financial factors had factor loading coefficients of 0.91, 0.83, 0.81 and 0.80 respectively. Conclusion: We concluded that, in evaluating the performance of the organization, special attention should be paid to the four studied factors and their confirmed dimensions and indicators. Based on the factor loadings, priority in activities and evaluation should be given to internal processes, customer and sporting results, learning and development, and financial factors. This index can thus be used to design a model to evaluate the performance of the general directorate of youth and sport of Guilan province.

  2. Compound dislocation models (CDMs) for volcano deformation analyses

    Science.gov (United States)

    Nikkhoo, Mehdi; Walter, Thomas R.; Lundgren, Paul R.; Prats-Iraola, Pau

    2017-02-01

    Volcanic crises are often preceded and accompanied by volcano deformation caused by magmatic and hydrothermal processes. Fast and efficient model identification and parameter estimation techniques for various sources of deformation are crucial for process understanding, volcano hazard assessment and early warning purposes. As a simple model that can be a basis for rapid inversion techniques, we present a compound dislocation model (CDM) that is composed of three mutually orthogonal rectangular dislocations (RDs). We present new RD solutions, which are free of artefact singularities and that also possess full rotational degrees of freedom. The CDM can represent both planar intrusions in the near field and volumetric sources of inflation and deflation in the far field. Therefore, this source model can be applied to shallow dikes and sills, as well as to deep planar and equidimensional sources of any geometry, including oblate, prolate and other triaxial ellipsoidal shapes. In either case the sources may possess any arbitrary orientation in space. After systematically evaluating the CDM, we apply it to the co-eruptive displacements of the 2015 Calbuco eruption observed by the Sentinel-1A satellite in both ascending and descending orbits. The results show that the deformation source is a deflating vertical lens-shaped source at an approximate depth of 8 km centred beneath Calbuco volcano. The parameters of the optimal source model clearly show that it is significantly different from an isotropic point source or a single dislocation model. The Calbuco example reflects the convenience of using the CDM for a rapid interpretation of deformation data.

  3. A Formal Model to Analyse the Firewall Configuration Errors

    Directory of Open Access Journals (Sweden)

    T. T. Myo

    2015-01-01

    Full Text Available The firewall is widely known as a brandmauer (security-edge gateway). To provide the demanded security, the firewall has to be appropriately adjusted, i.e. configured. Unfortunately, when configuring, even skilled administrators may make mistakes, which result in a decreased level of network security and in undesirable packets infiltrating the network. The network can be exposed to various threats and attacks. One of the mechanisms used to ensure network security is the firewall. The firewall is a network component which, using a security policy, controls packets passing through the borders of a secured network. The security policy represents a set of rules. Packet filters work in the stateless mode, without inspection of state: they investigate packets as independent objects. Rules take the following form: (condition, action). The firewall analyses the entering traffic based on the IP addresses of the sender and recipient, the port numbers of the sender and recipient, and the protocol used. When a packet meets a rule's conditions, the action specified in the rule is carried out: allow or deny. The aim of this article is to develop tools to analyse a firewall configuration with inspection of states. The input data are a file with the set of rules. It is required to present the analysis of a security policy in an informative graphic form as well as to reveal inconsistencies present in the rules. The article presents a security policy visualization algorithm and a program which shows how the firewall rules act on all possible packets. To represent the result in an intelligible form, the concept of an equivalence region is introduced. Our task is for the program to display the results of rules acting on packets in a convenient graphic form as well as to reveal contradictions between the rules. One of the problems is the large number of dimensions. As noted above, the following parameters are specified in a rule: source IP address, destination IP
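The stateless (condition, action) semantics described above can be sketched as a first-match-wins rule list; the rule fields and example addresses are illustrative, not taken from the article. Shadowed rules, such as anything placed after a catch-all deny, are exactly the kind of contradiction the visualization aims to reveal:

```python
from dataclasses import dataclass

@dataclass
class Rule:
    src: str      # source IP, or "*" wildcard
    dst: str      # destination IP, or "*"
    dport: int    # destination port, -1 for any
    proto: str    # "tcp", "udp", or "*"
    action: str   # "allow" or "deny"

def matches(rule, src, dst, dport, proto):
    """Does the packet satisfy the rule's condition?"""
    return (rule.src in ("*", src) and rule.dst in ("*", dst)
            and rule.dport in (-1, dport) and rule.proto in ("*", proto))

def filter_packet(rules, src, dst, dport, proto, default="deny"):
    """First-match-wins evaluation of a stateless (condition, action) rule list."""
    for rule in rules:
        if matches(rule, src, dst, dport, proto):
            return rule.action
    return default

rules = [
    Rule("*", "10.0.0.5", 80, "tcp", "allow"),
    Rule("*", "*", -1, "*", "deny"),  # catch-all: shadows any rule placed below it
]
print(filter_packet(rules, "192.168.1.7", "10.0.0.5", 80, "tcp"))  # allow
print(filter_packet(rules, "192.168.1.7", "10.0.0.5", 22, "tcp"))  # deny
```

Enumerating which regions of the (src, dst, port, proto) space each rule actually decides, rather than merely matches, is the equivalence-region idea in the article.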

  4. Analysing the Organizational Culture of Universities: Two Models

    Science.gov (United States)

    Folch, Marina Tomas; Ion, Georgeta

    2009-01-01

    This article presents the findings of two research projects, examining organizational culture by means of two different models of analysis--one at university level and one at department level--which were carried out over the last four years at Catalonian public universities (Spain). Theoretical and methodological approaches for the two…

  5. Enhancing Technology-Mediated Communication: Tools, Analyses, and Predictive Models

    Science.gov (United States)

    2007-09-01

    the home (see, for example, Nagel, Hudson, & Abowd, 2004), in social settings (see Kern, Antifakos, Schiele ...on Computer Supported Cooperative Work (CSCW 2006), pp. 525-528 ACM Press. Kern, N., Antifakos, S., Schiele, B., & Schwaninger, A. (2004). A model

  6. Gene Discovery and Functional Analyses in the Model Plant Arabidopsis

    Institute of Scientific and Technical Information of China (English)

    Cai-Ping Feng; John Mundy

    2006-01-01

    The present mini-review describes newer methods and strategies, including transposon and T-DNA insertions,TILLING, Deleteagene, and RNA interference, to functionally analyze genes of interest in the model plant Arabidopsis. The relative advantages and disadvantages of the systems are also discussed.

  7. Gene Discovery and Functional Analyses in the Model Plant Arabidopsis

    DEFF Research Database (Denmark)

    Feng, Cai-ping; Mundy, J.

    2006-01-01

    The present mini-review describes newer methods and strategies, including transposon and T-DNA insertions, TILLING, Deleteagene, and RNA interference, to functionally analyze genes of interest in the model plant Arabidopsis. The relative advantages and disadvantages of the systems are also...

  8. A new model for analysing thermal stress in granular composite

    Institute of Scientific and Technical Information of China (English)

    郑茂盛; 金志浩; 浩宏奇

    1995-01-01

    A double-embedding model, in which a reinforcement grain and a hollow matrix ball are embedded in the effective medium of a particulate-reinforced composite, is advanced. With this model, the distributions of thermal stress in the different phases of the composite during cooling are studied. Various expressions for predicting elastic and elastoplastic thermal stresses are derived. It is found that the reinforcement suffers compressive hydrostatic stress and the hydrostatic stress in the matrix zone is tensile when temperature decreases; when temperature decreases further, a yield area forms in the matrix; when the volume fraction of reinforcement is enlarged, the compressive stress on the grain and the tensile hydrostatic stress in the matrix zone decrease; the initial temperature difference for yielding at the reinforcement-matrix interface rises, while that for overall matrix yielding decreases.

  9. Analysing an Analytical Solution Model for Simultaneous Mobility

    Directory of Open Access Journals (Sweden)

    Md. Ibrahim Chowdhury

    2013-12-01

    Full Text Available Current mobility models for simultaneous mobility have their convolution in designing simultaneous movement, where mobile nodes (MNs) travel randomly from two adjacent cells at the same time, and also have their complexity in the measurement of the occurrences of simultaneous handover. The simultaneous mobility problem occurs when two MNs start handover at approximately the same time. As simultaneous mobility differs from other mobility patterns and generally occurs fewer times in real situations, we analyze that a simplified simultaneous mobility model can be considered by taking only symmetric positions of MNs with random steps. In addition, we simulated the model using mSCTP and compared the simulation results in different scenarios with customized cell ranges. The analytical results show that the bigger the cell sizes, the less frequent the occurrences of simultaneous handover with random steps become, and that for sequential mobility (where the initial positions of MNs are predetermined with random steps), simultaneous handover is more frequent.

  10. A simulation model for analysing brain structure deformations

    Energy Technology Data Exchange (ETDEWEB)

    Bona, Sergio Di [Institute for Information Science and Technologies, Italian National Research Council (ISTI-8211-CNR), Via G Moruzzi, 1-56124 Pisa (Italy); Lutzemberger, Ludovico [Department of Neuroscience, Institute of Neurosurgery, University of Pisa, Via Roma, 67-56100 Pisa (Italy); Salvetti, Ovidio [Institute for Information Science and Technologies, Italian National Research Council (ISTI-8211-CNR), Via G Moruzzi, 1-56124 Pisa (Italy)

    2003-12-21

    Recent developments of medical software applications, from the simulation to the planning of surgical operations, have revealed the need for modelling human tissues and organs, not only from a geometric point of view but also from a physical one, i.e. soft tissues, rigid body, viscoelasticity, etc. This has given rise to the term 'deformable objects', which refers to objects with a morphology and a physical and mechanical behaviour of their own that reflects their natural properties. In this paper, we propose a model, based upon physical laws, suitable for the realistic manipulation of geometric reconstructions of volumetric data taken from MR and CT scans. In particular, a physically based model of the brain is presented that is able to simulate the evolution of pathological intra-cranial phenomena of different natures, such as haemorrhages, neoplasms, haematomas, etc, and to describe the consequences that are caused by their volume expansions and the influences they have on the anatomical and neuro-functional structures of the brain.

  11. Analyses of Cometary Silicate Crystals: DDA Spectral Modeling of Forsterite

    Science.gov (United States)

    Wooden, Diane

    2012-01-01

    Comets are the Solar System's deep freezers of gases, ices, and particulates that were present in the outer protoplanetary disk. Where comet nuclei accreted was so cold that CO ice (approximately 50 K) and other supervolatile ices like ethane (C2H6) were preserved. However, comets also accreted high temperature minerals: silicate crystals that either condensed (greater than or equal to 1400 K) or that were annealed from amorphous (glassy) silicates (greater than 850-1000 K). By their rarity in the interstellar medium, cometary crystalline silicates are thought to be grains that formed in the inner disk and were then radially transported out to the cold and ice-rich regimes near Neptune. The questions that comets can potentially address are: How fast, how far, and over what duration were crystals that formed in the inner disk transported out to the comet-forming region(s)? In comets, the mass fractions of silicates that are crystalline, f_cryst, translate to benchmarks for protoplanetary disk radial transport models. The infamous comet Hale-Bopp has crystalline fractions of over 55%. The values for cometary crystalline mass fractions, however, are derived assuming that the mineralogy assessed for the submicron to micron-sized portion of the size distribution represents the compositional makeup of all larger grains in the coma. Models for fitting cometary SEDs make this assumption because models can only fit the observed features with submicron to micron-sized discrete crystals. On the other hand, larger (0.1-100 micrometer radii) porous grains composed of amorphous silicates and amorphous carbon can be easily computed with mixed medium theory wherein vacuum mixed into a spherical particle mimics a porous aggregate. If crystalline silicates are mixed in, the models completely fail to match the observations. 
Moreover, models for a size distribution of discrete crystalline forsterite grains commonly employ the CDE computational method for ellipsoidal platelets (c:a:b=8

  12. Temporal variations analyses and predictive modeling of microbiological seawater quality.

    Science.gov (United States)

    Lušić, Darija Vukić; Kranjčević, Lado; Maćešić, Senka; Lušić, Dražen; Jozić, Slaven; Linšak, Željko; Bilajac, Lovorka; Grbčić, Luka; Bilajac, Neiro

    2017-08-01

    Bathing water quality is a major public health issue, especially for tourism-oriented regions. Currently used methods within the EU allow at least a 2.2 day period for obtaining the analytical results, making the information forwarded to the public outdated. Obtained results and beach assessment are influenced by the temporal and spatial characteristics of sample collection and numerous environmental parameters, as well as by differences in official water standards. This paper examines the temporal variation of microbiological parameters during the day, as well as the influence of the sampling hour, on decision processes in the management of the beach. Apart from the fecal indicators stipulated by the EU Bathing Water Directive (E. coli and enterococci), additional fecal (C. perfringens) and non-fecal (S. aureus and P. aeruginosa) parameters were analyzed. Moreover, the effects of applying different evaluation criteria (national, EU and U.S. EPA) to beach ranking were studied, and the most common reasons for exceeding water-quality standards were investigated. In order to upgrade routine monitoring, a predictive statistical model was developed. The highest concentrations of fecal indicators were recorded early in the morning (6 AM) due to the lack of solar radiation during the night period. When compared to enterococci, the E. coli criteria appear to be more stringent for the detection of fecal pollution. In comparison to the EU and U.S. EPA criteria, the Croatian national evaluation criteria provide stricter public health standards. Solar radiation and precipitation were the predominant environmental parameters affecting beach water quality, and these parameters were included in the predictive model setup. Predictive models revealed great potential for the monitoring of recreational water bodies, and with further development can become a useful tool for the improvement of public health protection. Copyright © 2017 Elsevier Ltd. All rights reserved.

  13. Constructing a starting 3D shear velocity model with sharp interfaces for SEM-based upper mantle tomography in North America

    Science.gov (United States)

    Calo, M.; Bodin, T.; Yuan, H.; Romanowicz, B. A.; Larmat, C. S.; Maceira, M.

    2013-12-01

    Seismic tomography is currently evolving towards 3D earth models that satisfy full seismic waveforms at increasingly high frequencies. This evolution is possible thanks to the advent of powerful numerical methods such as the Spectral Element Method (SEM) that allow accurate computation of the seismic wavefield in complex media, and the drastic increase of computational resources. However, the production of such models requires handling complex misfit functions with more than one local minimum. Standard linearized inversion methods (such as gradient methods) have two main drawbacks: 1) they produce solution models highly dependent on the starting model; 2) they do not provide a means of estimating true model uncertainties. However, these issues can be addressed with stochastic methods that can sample the space of possible solutions efficiently. Such methods are prohibitively challenging computationally in 3D, but increasingly accessible in 1D. In previous work (Yuan and Romanowicz, 2010; Yuan et al., 2011) we developed a continental-scale anisotropic upper mantle model of North America based on a combination of long period seismic waveforms and SKS splitting measurements, showing the pervasive presence of layering of anisotropy in the cratonic lithosphere with significant variations in depth of the mid-lithospheric boundary. The radial anisotropy part of the model has been recently updated using the spectral element method for forward wavefield computations and waveform data from the latest deployments of USArray (Yuan and Romanowicz, 2013). However, the long period waveforms (periods > 40s) themselves only provide a relatively smooth view of the mantle if the starting model is smooth, and the mantle discontinuities necessary for geodynamical interpretation are not imaged. 
Increasing the frequency of the computations to constrain smaller scale features is possible, but challenging computationally, and at the risk of falling in local minima of the misfit function. In

  14. Analysing the Competency of Mathematical Modelling in Physics

    CERN Document Server

    Redish, Edward F

    2016-01-01

    A primary goal of physics is to create mathematical models that allow both predictions and explanations of physical phenomena. We weave maths extensively into our physics instruction beginning in high school, and the level and complexity of the maths we draw on grows as our students progress through a physics curriculum. Despite much research on the learning of both physics and math, the problem of how to successfully teach most of our students to use maths in physics effectively remains unsolved. A fundamental issue is that in physics, we don't just use maths, we think about the physical world with it. As a result, we make meaning with mathematical symbology in a different way than mathematicians do. In this talk we analyze how developing the competency of mathematical modeling is more than just "learning to do math" but requires learning to blend physical meaning into mathematical representations and use that physical meaning in solving problems. Examples are drawn from across the curriculum.

  15. Fluctuating selection models and McDonald-Kreitman type analyses.

    Directory of Open Access Journals (Sweden)

    Toni I Gossmann

    Full Text Available It is likely that the strength of selection acting upon a mutation varies through time due to changes in the environment. However, most population genetic theory assumes that the strength of selection remains constant. Here we investigate the consequences of fluctuating selection pressures on the quantification of adaptive evolution using McDonald-Kreitman (MK) style approaches. In agreement with previous work, we show that fluctuating selection can generate evidence of adaptive evolution even when the expected strength of selection on a mutation is zero. However, we also find that the mutations which contribute to both polymorphism and divergence tend, on average, to be positively selected during their lifetime under fluctuating selection models. This is because mutations that fluctuate, by chance, to positively selected values tend to reach higher frequencies in the population than those that fluctuate towards negative values. Hence the evidence of positive adaptive evolution detected by MK type approaches under a fluctuating selection model is genuine, since fixed mutations tend to be advantageous on average during their lifetime. Nevertheless, we show that the methods tend to underestimate the rate of adaptive evolution when selection fluctuates.
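
The quantity at the heart of MK-style estimates of adaptive evolution is α, the proportion of nonsynonymous substitutions attributable to positive selection, computed from four counts. The function follows the standard Smith and Eyre-Walker form; the example counts are hypothetical.

```python
def mk_alpha(dn, ds, pn, ps):
    """alpha = 1 - (Ds * Pn) / (Dn * Ps), from nonsynonymous/synonymous
    divergence counts (Dn, Ds) and polymorphism counts (Pn, Ps)."""
    return 1.0 - (ds * pn) / (dn * ps)

# Hypothetical gene: 40/60 nonsyn/syn divergences, 20/60 nonsyn/syn polymorphisms.
alpha = mk_alpha(dn=40, ds=60, pn=20, ps=60)  # -> 0.5
```

Under the abstract's fluctuating-selection scenario, this estimator remains meaningful but tends to understate the true rate of adaptive substitution.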

  16. A workflow model to analyse pediatric emergency overcrowding.

    Science.gov (United States)

    Zgaya, Hayfa; Ajmi, Ines; Gammoudi, Lotfi; Hammadi, Slim; Martinot, Alain; Beuscart, Régis; Renard, Jean-Marie

    2014-01-01

    The greatest source of delay in patient flow is the waiting time from the health care request, and especially from the bed request, to exit from the Pediatric Emergency Department (PED) for hospital admission. It represents 70% of the time that these patients spend in the PED waiting rooms. Our objective in this study is to identify tension indicators and bottlenecks that contribute to overcrowding. Patient flow through the PED was mapped over a continuous 2-year period from January 2011 to December 2012. Our method uses the real data collected from actual visits to the PED of the Regional University Hospital Center (CHRU) of Lille (France) to construct an accurate and complete representation of the PED processes. The result of this representation is a workflow model of the patient journey in the PED that represents the reality of the PED of the CHRU of Lille as faithfully as possible. This model allowed us to identify sources of delay in patient flow and aspects of the PED activity that could be improved. It must be detailed enough to produce an analysis that identifies the dysfunctions of the PED and to propose and estimate indicators for preventing tension. Our study is integrated into the French National Research Agency project titled "Hospital: optimization, simulation and avoidance of strain" (ANR HOST).

  17. Extracting porosity and modelling permeability from μCT and FIB-SEM data of fractured dolomites from a hydrocarbon reservoir

    Science.gov (United States)

    Voorn, M. H.; Rath, A.; Exner, U.

    2012-04-01

    Currently oil and gas in the Vienna Basin are produced partly from the Upper Triassic Hauptdolomit formation. Various drill-cores were retrieved from densely fractured dolomites at depths between 3000 and 5300 m. Porosity and permeability assessment in specimens from such fractured rocks proves to be difficult by common laboratory methods, and 2D sample analysis alone is also insufficient to this end. In our study, X-ray micro-Computed Tomography (µCT) is used to visualise the inside of core samples of fractured Hauptdolomit. The biggest advantage of µCT is that it provides a 3D view of the fractures and other porosity without destroying the sample. Core sample descriptions, 2D thin section analysis and standard laboratory measurements are used for extended analysis and cross-calibration of the results. In addition, 3D porosity visualisations at the micro- to nano-scale are obtained from Focussed Ion Beam - Scanning Electron Microscopy (FIB-SEM) on thin sections. The narrow fractures encountered in the Hauptdolomit samples require µCT scans of sufficient resolution (i.e. better than ca. 25 µm). Full 10 cm diameter cores prove to be too thick and dense, so the fracture network cannot be recorded properly; 3 cm plugs, on the other hand, do provide workable results. After obtaining good datasets, the fractures need to be segmented (separated) from the full dataset for further analysis. A large number of segmentation routines are available in the literature, but very few are applicable to segmenting narrow fractures, especially in the geological literature. Our current best results stem from applying the so-called "Frangi filter", used in the medical sciences for segmenting blood vessels. After this segmentation, the fracture patterns can be extracted, and quantitative analysis of the bulk porosity and porosity distribution, fracture aperture and length can be performed. The data obtained by FIB-SEM is treated in
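
The Frangi vesselness filter mentioned above is available in scikit-image. A minimal 2D sketch (the study works on 3D µCT volumes; the image, sigma range and threshold here are illustrative guesses, not the authors' settings) segments a narrow dark "fracture" in a brighter matrix:

```python
import numpy as np
from skimage.filters import frangi

# Toy stand-in for a CT slice: a 2-pixel-wide low-density fracture.
img = np.full((64, 64), 0.8)
img[:, 30:32] = 0.2

# black_ridges=True makes the filter respond to dark elongated structures,
# which is exactly the geometry of air-filled fractures in dense dolomite.
response = frangi(img, sigmas=range(1, 4), black_ridges=True)
mask = response > 0.5 * response.max()  # crude threshold to segment
```

The filter's multi-scale Hessian analysis is what lets it pick out thin planar/tubular features that simple grey-value thresholding misses.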

  18. Geographically Isolated Wetlands and Catchment Hydrology: A Modified Model Analyses

    Science.gov (United States)

    Evenson, G.; Golden, H. E.; Lane, C.; D'Amico, E.

    2014-12-01

    Geographically isolated wetlands (GIWs), typically defined as depressional wetlands surrounded by uplands, support an array of hydrological and ecological processes. However, key research questions concerning the hydrological connectivity of GIWs and their impacts on downgradient surface waters remain unanswered. This is particularly important for regulation and management of these systems. For example, in the past decade United States Supreme Court decisions suggest that GIWs can be afforded protection if significant connectivity exists between these waters and traditional navigable waters. Here we developed a simulation procedure to quantify the effects of various spatial distributions of GIWs across the landscape on the downgradient hydrograph using a refined version of the Soil and Water Assessment Tool (SWAT), a catchment-scale hydrological simulation model. We modified the SWAT FORTRAN source code and employed an alternative hydrologic response unit (HRU) definition to facilitate an improved representation of GIW hydrologic processes and connectivity relationships to other surface waters, and to quantify their downgradient hydrological effects. We applied the modified SWAT model to an ~ 202 km2 catchment in the Coastal Plain of North Carolina, USA, exhibiting a substantial population of mapped GIWs. Results from our series of GIW distribution scenarios suggest that: (1) Our representation of GIWs within SWAT conforms to field-based characterizations of regional GIWs in most respects; (2) GIWs exhibit substantial seasonally-dependent effects upon downgradient base flow; (3) GIWs mitigate peak flows, particularly following high rainfall events; and (4) The presence of GIWs on the landscape impacts the catchment water balance (e.g., by increasing groundwater outflows). Our outcomes support the hypothesis that GIWs have an important catchment-scale effect on downgradient streamflow.

  19. Reporting Results from Structural Equation Modeling Analyses in Archives of Scientific Psychology.

    Science.gov (United States)

    Hoyle, Rick H; Isherwood, Jennifer C

    2013-02-01

    Psychological research typically involves the analysis of data (e.g., questionnaire responses, records of behavior) using statistical methods. The description of how those methods are used and the results they produce is a key component of scholarly publications. Despite their importance, these descriptions are not always complete and clear. In order to ensure the completeness and clarity of these descriptions, the Archives of Scientific Psychology requires that authors of manuscripts to be considered for publication adhere to a set of publication standards. Although the current standards cover most of the statistical methods commonly used in psychological research, they do not cover them all. In this manuscript, we propose adjustments to the current standards and the addition of new standards for a statistical method not adequately covered at present: structural equation modeling (SEM). Adherence to the standards we propose would ensure that scholarly publications reporting results of data analyzed using SEM are complete and clear.

  20. Analysis of non-metallic inclusions in steel by SEM/EDS experiences with the technique as applied to the plain carbon steel Cf53 (1.1213); Analyse nichtmetallischer Einschluesse in Stahl mittels REM/EDS und Anwendung auf den unlegierten Kohlenstoffstahl Cf53 (1.1213)

    Energy Technology Data Exchange (ETDEWEB)

    Lietzau, Jens [GKN Driveline International GmbH, Lohmar (Germany). Research and Product Development Centre

    2010-12-15

    The SEM/EDS provides detailed information on the chemical nature of the non-metallic inclusions present in a steel. However, this information cannot be accepted uncritically. There can be positive and negative preparation artefacts and inclusions might be mis-classified because some elements are not or wrongly identified. Also, this detailed information comes at the cost of considerably longer measurement times than automated LM inclusion analyses require, since LM image acquisition is faster and no time is spent on EDS. (orig.)

  1. Fixed- and random-effects meta-analytic structural equation modeling: examples and analyses in R.

    Science.gov (United States)

    Cheung, Mike W-L

    2014-03-01

    Meta-analytic structural equation modeling (MASEM) combines the ideas of meta-analysis and structural equation modeling for the purpose of synthesizing correlation or covariance matrices and fitting structural equation models on the pooled correlation or covariance matrix. Cheung and Chan (Psychological Methods 10:40-64, 2005b, Structural Equation Modeling 16:28-53, 2009) proposed a two-stage structural equation modeling (TSSEM) approach to conducting MASEM that was based on a fixed-effects model by assuming that all studies have the same population correlation or covariance matrices. The main objective of this article is to extend the TSSEM approach to a random-effects model by the inclusion of study-specific random effects. Another objective is to demonstrate the procedures with two examples using the metaSEM package implemented in the R statistical environment. Issues related to and future directions for MASEM are discussed.
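
Stage 1 of the fixed-effects TSSEM idea can be shown in miniature: pool per-study correlation matrices into a single matrix before fitting the structural model. The metaSEM package (in R) does this by maximum likelihood with proper weighting; the sample-size-weighted average below is a simplified numpy stand-in, and the two study matrices are made up.

```python
import numpy as np

# (R_i, n_i): correlation matrix and sample size for each hypothetical study.
studies = [
    (np.array([[1.0, 0.30], [0.30, 1.0]]), 100),
    (np.array([[1.0, 0.50], [0.50, 1.0]]), 300),
]

total_n = sum(n for _, n in studies)
pooled_r = sum(r * n for r, n in studies) / total_n  # fixed-effects pooling
```

A random-effects extension, as in the article, would additionally estimate between-study variance in the correlations rather than assuming one common population matrix.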

  2. Using System Dynamic Model and Neural Network Model to Analyse Water Scarcity in Sudan

    Science.gov (United States)

    Li, Y.; Tang, C.; Xu, L.; Ye, S.

    2017-07-01

    Many parts of the world are facing the problem of water scarcity. Analysing water scarcity quantitatively is an important step towards solving the problem. Water scarcity in a region is gauged by the WSI (water scarcity index), which incorporates water supply and water demand. To obtain the WSI, a Neural Network Model and an SDM (System Dynamic Model) are developed to depict how environmental and social factors affect water supply and demand. The uneven distribution of water resources and water demand across a region leads to an uneven distribution of the WSI within that region. To predict the WSI for the future, a logistic model, Grey Prediction, and statistical methods are applied to predict the input variables. Sudan suffers from a severe water scarcity problem, with a WSI of 1 in 2014 and unevenly distributed water resources. According to the results of the modified model, Sudan's water situation will improve after the intervention.
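
The WSI used above is, in its common form, a demand-to-supply ratio (the paper's exact formulation may differ). A minimal sketch, with purely illustrative volumes rather than actual Sudanese figures:

```python
def water_scarcity_index(total_demand, renewable_supply):
    """WSI as the ratio of water demand to renewable supply.
    A value near 1, as reported for Sudan in 2014, means demand
    consumes essentially all of the available supply."""
    return total_demand / renewable_supply

# Hypothetical volumes in km^3/year.
wsi = water_scarcity_index(total_demand=27.0, renewable_supply=30.0)
```

In the paper's setup, the SDM and neural network supply the demand and supply terms, so the WSI can be projected forward as those drivers change.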

  3. Mode Choice Behavior Model of Urban Public Transport Based on SEM

    Institute of Scientific and Technical Information of China (English)

    陈坚; 杨亚璪; 李小兵; 穆礼彬

    2014-01-01

    Existing mode choice behavior models for urban public transport consider only directly observable variables and neglect the unobserved latent variables that also influence travel choices. To address this, a structural equation model (SEM) of urban public transport mode choice behavior is built from both latent and observable variables. The quantitative relationships among the influencing factors, and the size of their effects on choice results, are analyzed, and the behavior characteristics of different groups are studied according to travelers' personality traits. Finally, the model is applied to an example analysis of public transport travel in Chengdu. The results show that perceived value has a significant influence on choice results and explains 62% of the travel behavioral intention; service quality affects perceived value more strongly than fare rationality; and the choice behavior of different groups differs to some extent.

  4. Pan-European modelling of riverine nutrient concentrations - spatial patterns, source detection, trend analyses, scenario modelling

    Science.gov (United States)

    Bartosova, Alena; Arheimer, Berit; Capell, Rene; Donnelly, Chantal; Strömqvist, Johan

    2016-04-01

    Nutrient transport models are important tools for large-scale assessments of macro-nutrient fluxes (nitrogen, phosphorus) and can thus serve as a support tool for environmental assessment and management. Results from model applications over large areas, i.e. from major river basin to continental scales, can fill a gap where monitoring data are not available. Here, we present results from the pan-European rainfall-runoff and nutrient transfer model E-HYPE, which is based on open data sources. We investigate the ability of the E-HYPE model to replicate the spatial and temporal variations found in observed time-series of riverine N and P concentrations, and illustrate the model's usefulness for nutrient source detection, trend analyses, and scenario modelling. The results show spatial patterns in N concentration in rivers across Europe which can be used to further our understanding of nutrient issues across the European continent. E-HYPE results show hot spots with the highest concentrations of total nitrogen in Western Europe along the North Sea coast. Source apportionment was performed to rank sources of nutrient inflow from land to sea along the European coast. An integrated dynamic model such as E-HYPE also allows us to investigate the impacts of climate change and of programmes of measures, which we illustrate with a couple of scenarios for the Baltic Sea. Comparing model results with observations shows large uncertainty in many of the data sets and in the assumptions used in the model set-up, e.g. point source release estimates. However, evaluation of model performance at a number of measurement sites in Europe shows that mean N concentration levels are generally well simulated. P levels are less well predicted, which is expected, as the variability of P concentrations in both time and space is higher. Comparing model performance with model set-ups using local data for the Weaver River (UK) did not result in systematically better model performance, which highlights the complexity of model

  5. Quantitative approach on SEM images of microstructure of clay soils

    Institute of Scientific and Technical Information of China (English)

    施斌; 李生林; M.Tolkachev

    1995-01-01

    The working principles of the Videolab Image Processing System (VIPS), the methods for examining the orientation of microstructural units of clay soils, and the results of analysing SEM images of some typical clay-soil microstructures using the VIPS are introduced.

  6. Taxing CO2 and subsidising biomass: Analysed in a macroeconomic and sectoral model

    DEFF Research Database (Denmark)

    Klinge Jacobsen, Henrik

    2000-01-01

    This paper analyses the combination of taxes and subsidies as an instrument to enable a reduction in CO2 emissions. The objective of the study is to compare recycling of a CO2 tax revenue as a subsidy for biomass use as opposed to traditional recycling such as reduced income or corporate taxation... A model of Denmark's energy supply sector is used to analyse the effect of a CO2 tax combined with using the tax revenue for biomass subsidies. The energy supply model is linked to a macroeconomic model such that the macroeconomic consequences of tax policies can be analysed along with the consequences...

  7. Longitudinal data analyses using linear mixed models in SPSS: concepts, procedures and illustrations.

    Science.gov (United States)

    Shek, Daniel T L; Ma, Cecilia M S

    2011-01-05

    Although different methods are available for the analyses of longitudinal data, analyses based on generalized linear models (GLM) are criticized as violating the assumption of independence of observations. Alternatively, linear mixed models (LMM) are commonly used to understand changes in human behavior over time. In this paper, the basic concepts surrounding LMM (or hierarchical linear models) are outlined. Although SPSS is a statistical analysis package commonly used by researchers, documentation on LMM procedures in SPSS is neither thorough nor user friendly. With reference to this limitation, the related procedures for performing analyses based on LMM in SPSS are described. To demonstrate the application of LMM analyses in SPSS, findings based on six waves of data collected in the Project P.A.T.H.S. (Positive Adolescent Training through Holistic Social Programmes) in Hong Kong are presented.
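
The same random-intercept LMM the paper runs in SPSS can be sketched with statsmodels on synthetic six-wave data. Variable names (`id`, `wave`, `score`) and all effect sizes are invented for illustration.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic longitudinal data: 50 subjects, 6 waves, true growth 0.5/wave.
rng = np.random.default_rng(1)
subjects, waves = 50, 6
df = pd.DataFrame({
    "id": np.repeat(np.arange(subjects), waves),
    "wave": np.tile(np.arange(waves), subjects),
})
subject_effect = rng.normal(10.0, 2.0, subjects)   # between-subject variation
df["score"] = (subject_effect[df["id"]]
               + 0.5 * df["wave"]
               + rng.normal(0.0, 1.0, len(df)))

# Random-intercept growth model: fixed slope for wave, intercept varies by id.
fit = smf.mixedlm("score ~ wave", df, groups=df["id"]).fit()
slope = fit.params["wave"]   # estimated mean growth per wave, near 0.5
```

Grouping by subject is what restores the independence assumption that a naive GLM on repeated measures violates.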

  8. Pathway models for analysing and managing the introduction of alien plant pests - an overview and categorization

    NARCIS (Netherlands)

    Douma, J.C.; Pautasso, M.; Venette, R.C.; Robinet, C.; Hemerik, L.; Mourits, M.C.M.; Schans, J.; Werf, van der W.

    2016-01-01

    Alien plant pests are introduced into new areas at unprecedented rates through global trade, transport, tourism and travel, threatening biodiversity and agriculture. Increasingly, the movement and introduction of pests are analysed with pathway models to provide risk managers with quantitative

  9. FIB-SEM tomography in biology.

    Science.gov (United States)

    Kizilyaprak, Caroline; Bittermann, Anne Greet; Daraspe, Jean; Humbel, Bruno M

    2014-01-01

    Three-dimensional information is much easier to understand than a set of two-dimensional images. Therefore a layman is thrilled by the pseudo-3D image taken in a scanning electron microscope (SEM), while a transmission electron micrograph challenges his imagination. First approaches to gaining insight into the third dimension were to make serial microtome sections of a region of interest (ROI) and then build a model of the object. Serial microtome sectioning is tedious, skill-demanding work and is therefore seldom done. In the last two decades, with the increase in computer power, sophisticated display options, and the development of new instruments (an SEM with a built-in microtome, and the focused ion beam scanning electron microscope, FIB-SEM), serial sectioning and 3D analysis have become far easier and faster. Due to the relief-like topology of the microtome-trimmed block face of resin-embedded tissue, the ROI can be searched for in the secondary electron mode, and at the selected spot, the ROI is prepared with the ion beam for 3D analysis. For FIB-SEM tomography, a thin slice is removed with the ion beam and the newly exposed face is imaged with the electron beam, usually by recording the backscattered electrons. The process, also called "slice and view," is repeated until the desired volume is imaged. As FIB-SEM allows high-resolution 3D imaging of biological fine structure only for small volumes, it is crucial to perform slice and view at carefully selected spots. Finding the region of interest is therefore a prerequisite for meaningful imaging. Thin-layer plastification of biofilms offers direct access to the original sample surface and allows the selection of an ROI for site-specific FIB-SEM tomography just by its pronounced topographic features.

  10. METROLOGICAL PERFORMANCE OF SEM 3D TECHNIQUES

    DEFF Research Database (Denmark)

    Marinello, Francesco; Carmignato, Simone; Savio, Enrico;

    2008-01-01

    This paper addresses the metrological performance of three-dimensional measurements performed with Scanning Electron Microscopes (SEMs) using reconstruction of surface topography through stereo-photogrammetry. Reconstruction is based on the model function introduced by Piazzesi adapted for eucentric ... and the instrument set-up; the second concerns the quality of scanned images and represents the major criticality in the application of SEMs for 3D characterizations. In particular the critical role played by the tilting angle and its relative uncertainty, the magnification and the deviations from the eucentricity condition are studied, in order to define a strategy to optimise the measurements taking account of the critical factors in SEM 3D reconstruction. Investigations were performed on a novel sample, specifically developed and implemented for the tests.

  11. On the Nature of SEM Estimates of ARMA Parameters.

    Science.gov (United States)

    Hamaker, Ellen L.; Dolan, Conor V.; Molenaar, Peter C. M.

    2002-01-01

    Reexamined the nature of structural equation modeling (SEM) estimates of autoregressive moving average (ARMA) models, replicated the simulation experiments of P. Molenaar, and examined the behavior of the log-likelihood ratio test. Simulation studies indicate that estimates of ARMA parameters observed with SEM software are identical to those…

  12. Fungal-Induced Deterioration of Mural Paintings: In Situ and Mock-Model Microscopy Analyses.

    Science.gov (United States)

    Unković, Nikola; Grbić, Milica Ljaljević; Stupar, Miloš; Savković, Željko; Jelikić, Aleksa; Stanojević, Dragan; Vukojević, Jelena

    2016-04-01

    Fungal deterioration of frescoes was studied in situ in a selected Serbian church and on a laboratory model, utilizing standard and newly implemented microscopy techniques. Scanning electron microscopy (SEM) with energy-dispersive X-ray analysis confirmed the limestone components of the plaster. The pigments used were identified as carbon black, green earth, iron oxide, ocher, and an ocher/cinnabar mixture. In situ microscopy, applied via a portable ShuttlePix P-400R microscope, proved very useful for the detection of invisible micro-impairments and hidden, symptomless microbial growth. SEM and optical microscopy established that the observed deterioration symptoms, predominantly discoloration and pulverization of painted layers, were due to bacterial filaments and fungal hyphal penetration, and to the formation of a wide range of fungal structures (i.e., melanized hyphae, chlamydospores, microcolonial clusters, Cladosporium-like conidia, and Chaetomium perithecia and ascospores). The year-round monitoring of spontaneous and induced fungal colonization of a "mock painting" under controlled laboratory conditions confirmed the decisive role of humidity level (70.18±6.91% RH) in efficient colonization of painted surfaces, and demonstrated that painted surfaces are more bioreceptive to fungal colonization when plant-based adhesives (ilinocopie, murdent) rather than adhesives of animal origin (bone glue, egg white) are used for pigment sizing.

  13. An improved lake model for climate simulations: Model structure, evaluation, and sensitivity analyses in CESM1

    Directory of Open Access Journals (Sweden)

    Zachary Subin

    2012-02-01

    Full Text Available Lakes can influence regional climate, yet most general circulation models have, at best, simple and largely untested representations of lakes. We developed the Lake, Ice, Snow, and Sediment Simulator (LISSS) for inclusion in the land-surface component (CLM4) of an earth system model (CESM1). The existing CLM4 lake model performed poorly at all sites tested; for temperate lakes, summer surface water temperature predictions were 10–25°C lower than observations. CLM4-LISSS modifies the existing model by including (1) a treatment of snow; (2) freezing, melting, and ice physics; (3) a sediment thermal submodel; (4) spatially variable prescribed lake depth; (5) improved parameterizations of lake surface properties; (6) increased mixing under ice and in deep lakes; and (7) correction of previous errors. We evaluated the lake model predictions of water temperature and surface fluxes at three small temperate and boreal lakes where extensive observational data were available. We also evaluated the predicted water temperature and/or ice and snow thicknesses for ten other lakes where less comprehensive forcing observations were available. CLM4-LISSS performed very well compared to observations for shallow- to medium-depth small lakes. For large, deep lakes, the under-prediction of mixing was improved by increasing the lake eddy diffusivity by a factor of 10, consistent with previously published analyses. Surface temperature and surface flux predictions were improved when the aerodynamic roughness lengths were calculated as a function of friction velocity, rather than using a constant value of 1 mm or greater. We evaluated the sensitivity of surface energy fluxes to modeled lake processes and parameters. Large changes in monthly-averaged surface fluxes (up to 30 W m-2) were found when excluding snow insulation or phase-change physics and when varying the opacity, depth, albedo of melting lake ice, and mixing strength across ranges commonly found in real lakes. Typical

  14. Secondary emission monitor (SEM) grids.

    CERN Multimedia

    Patrice Loïez

    2002-01-01

    A great variety of Secondary Emission Monitors (SEM) are used all over the PS Complex. At other accelerators they are also called wire-grids, harps, etc. They are used to measure beam density profiles (from which beam size and emittance can be derived) in single-pass locations (not on circulating beams). Top left: two individual wire-planes. Top right: a combination of a horizontal and a vertical wire plane. Bottom left: a ribbon grid in its frame, with connecting wires. Bottom right: a SEM-grid with its insertion/retraction mechanism.

  15. A modified Lee-Carter model for analysing short-base-period data.

    Science.gov (United States)

    Zhao, Bojuan Barbara

    2012-03-01

    This paper introduces a new modified Lee-Carter model for analysing short-base-period mortality data, for which the original Lee-Carter model produces severely fluctuating predictions of age-specific mortality. By approximating the unknown parameters in the modified model with linearized cubic splines and other additive functions, the model can be simplified into a logistic regression when fitted to binomial data. The expected death rate estimated from the modified model is smooth, not only over ages but also over years. The analysis of mortality data in China (2000-08) demonstrates the advantages of the new model over existing models.
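
For context, the original Lee-Carter model that the paper modifies decomposes log death rates as log m(x,t) = a_x + b_x * k_t, classically fitted by SVD. A minimal sketch on a synthetic, deliberately short-base-period mortality surface (all rates invented; a real fit would use observed data):

```python
import numpy as np

rng = np.random.default_rng(2)
ages, years = 10, 5   # deliberately short base period, as in the paper
log_m = (np.linspace(-6.0, -2.0, ages)[:, None]                    # true a_x
         + (np.ones(ages) / ages)[:, None]                          # true b_x
         * np.linspace(2.0, -2.0, years)[None, :]                   # true k_t
         + rng.normal(0.0, 0.005, (ages, years)))                   # noise

a = log_m.mean(axis=1)                       # a_x: average log rate by age
U, s, Vt = np.linalg.svd(log_m - a[:, None], full_matrices=False)
b = U[:, 0] / U[:, 0].sum()                  # identifiability: sum(b_x) = 1
k = s[0] * Vt[0] * U[:, 0].sum()             # k_t: period mortality index
```

With so few years, the estimated k_t (and hence projected rates) inherits all of the sampling noise, which is precisely the fluctuation problem the paper's logistic-regression reformulation smooths out.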

  16. Comparison of linear measurements and analyses taken from plaster models and three-dimensional images.

    Science.gov (United States)

    Porto, Betina Grehs; Porto, Thiago Soares; Silva, Monica Barros; Grehs, Renésio Armindo; Pinto, Ary dos Santos; Bhandi, Shilpa H; Tonetto, Mateus Rodrigues; Bandéca, Matheus Coelho; dos Santos-Pinto, Lourdes Aparecida Martins

    2014-11-01

    Digital models are an alternative for carrying out analyses and devising treatment plans in orthodontics. The objective of this study was to evaluate the accuracy and the reproducibility of measurements of tooth sizes, interdental distances and analyses of occlusion using plaster models and their digital images. Thirty pairs of plaster models were chosen at random, and the digital images of each plaster model were obtained using a laser scanner (3Shape R-700, 3Shape A/S). With the plaster models, the measurements were taken using a caliper (Mitutoyo Digimatic(®), Mitutoyo (UK) Ltd) and the MicroScribe (MS) 3DX (Immersion, San Jose, Calif). For the digital images, the measurement tools used were those from the O3d software (Widialabs, Brazil). The data obtained were compared statistically using the Dahlberg formula, analysis of variance and the Tukey test (p < 0.05). ... plaster models using the caliper and from the digital models using O3d software were identical.

  17. Three-dimensional lake water quality modeling: sensitivity and uncertainty analyses.

    Science.gov (United States)

    Missaghi, Shahram; Hondzo, Miki; Melching, Charles

    2013-11-01

Two sensitivity and uncertainty analysis methods are applied to a three-dimensional coupled hydrodynamic-ecological model (ELCOM-CAEDYM) of a morphologically complex lake. The primary goals of the analyses are to increase confidence in the model predictions, identify influential model parameters, quantify the uncertainty of model prediction, and explore the spatial and temporal variability of model predictions. The influence of model parameters on four model-predicted variables (model output) and the contributions of each of the model-predicted variables to the total variations in model output are presented. Predicted water temperature, dissolved oxygen, total phosphorus, and algal biomass contributed 3, 13, 26, and 58% of the total model output variance, respectively. The fraction of variance resulting from model parameter uncertainty was calculated by two methods and used for evaluation and ranking of the most influential model parameters. Nine out of the top 10 parameters identified by each method agreed, but their ranks were different. Spatial and temporal changes of model uncertainty were investigated and visualized. Model uncertainty appeared to be concentrated around specific water depths and dates that corresponded to significant storm events. The results suggest that spatial and temporal variations in the predicted water quality variables are sensitive to the hydrodynamics of physical perturbations such as those caused by stream inflows generated by storm events. The sensitivity and uncertainty analyses identified the mineralization of dissolved organic carbon, the sediment phosphorus release rate, the algal metabolic loss rate, the internal phosphorus concentration, and the phosphorus uptake rate as the most influential model parameters.
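The apportionment of total output variance across predicted variables described above reduces to normalising per-variable variance contributions. A sketch with hypothetical variance values chosen to reproduce the 3/13/26/58% split:

```python
def variance_shares(variances):
    """Fraction of total output variance contributed by each predicted variable."""
    total = sum(variances.values())
    return {name: v / total for name, v in variances.items()}

# Hypothetical per-variable variances (illustrative, not the paper's raw numbers)
v = {"temperature": 0.3, "dissolved_oxygen": 1.3, "total_P": 2.6, "algae": 5.8}
for name, share in variance_shares(v).items():
    print(f"{name}: {100 * share:.0f}%")
```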

  18. Processes models, environmental analyses, and cognitive architectures: quo vadis quantum probability theory?

    Science.gov (United States)

    Marewski, Julian N; Hoffrage, Ulrich

    2013-06-01

    A lot of research in cognition and decision making suffers from a lack of formalism. The quantum probability program could help to improve this situation, but we wonder whether it would provide even more added value if its presumed focus on outcome models were complemented by process models that are, ideally, informed by ecological analyses and integrated into cognitive architectures.

  19. Analyses and simulations in income frame regulation model for the network sector from 2007; Analyser og simuleringer i inntektsrammereguleringsmodellen for nettbransjen fra 2007

    Energy Technology Data Exchange (ETDEWEB)

    Askeland, Thomas Haave; Fjellstad, Bjoern

    2007-07-01

Analyses of the income frame regulation model for the network sector in Norway, introduced 1 January 2007. The model's treatment of the norm cost is evaluated, especially the efficiency analyses carried out by a so-called Data Envelopment Analysis model. It is argued that there may be an age bias in the data set, and that this can and should be corrected for in the efficiency analyses by introducing an age parameter into the data set. Analyses have been made of how the calibration effects in the regulation model affect the industry's total income frame, as well as each network company's income frame. It is argued that the calibration, as presented, does not work according to its intention and should be adjusted in order to provide the sector with the reference rate of return.

  20. SEM: A Cultural Change Agent

    Science.gov (United States)

    Barnes, Bradley; Bourke, Brian

    2015-01-01

    The authors advance the concept that institutional culture is a purposeful framework by which to view SEM's utility, particularly as a cultural change agent. Through the connection of seemingly independent functions of performance and behavior, implications emerge that deepen the understanding of the influence of culture on performance outcomes…

  1. USE OF THE SIMPLE LINEAR REGRESSION MODEL IN MACRO-ECONOMICAL ANALYSES

    Directory of Open Access Journals (Sweden)

    Constantin ANGHELACHE

    2011-10-01

Full Text Available The article presents the fundamental aspects of linear regression as a toolbox that can be used in macroeconomic analyses. The article describes the estimation of the parameters, the statistical tests used, and homoscedasticity and heteroskedasticity. The use of econometric instruments in macroeconomics is an important factor that guarantees the quality of the models, analyses, results and the possible interpretations that can be drawn at this level.
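The parameter estimation the article describes can be illustrated with the closed-form OLS solution for a simple linear regression; the macroeconomic series below are hypothetical:

```python
def ols_simple(x, y):
    """Closed-form OLS estimates for y = a + b*x."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    sxx = sum((xi - mean_x) ** 2 for xi in x)
    sxy = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
    b = sxy / sxx            # slope
    a = mean_y - b * mean_x  # intercept
    return a, b

# Hypothetical GDP-growth (x) vs consumption-growth (y) observations, in percent
x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [2.1, 3.9, 6.2, 7.8, 10.1]
a, b = ols_simple(x, y)
```

Residual diagnostics (e.g., plotting residuals against fitted values) would then be the starting point for the homoscedasticity checks the article mentions.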

  2. Sensitivity analyses of spatial population viability analysis models for species at risk and habitat conservation planning.

    Science.gov (United States)

    Naujokaitis-Lewis, Ilona R; Curtis, Janelle M R; Arcese, Peter; Rosenfeld, Jordan

    2009-02-01

    Population viability analysis (PVA) is an effective framework for modeling species- and habitat-recovery efforts, but uncertainty in parameter estimates and model structure can lead to unreliable predictions. Integrating complex and often uncertain information into spatial PVA models requires that comprehensive sensitivity analyses be applied to explore the influence of spatial and nonspatial parameters on model predictions. We reviewed 87 analyses of spatial demographic PVA models of plants and animals to identify common approaches to sensitivity analysis in recent publications. In contrast to best practices recommended in the broader modeling community, sensitivity analyses of spatial PVAs were typically ad hoc, inconsistent, and difficult to compare. Most studies applied local approaches to sensitivity analyses, but few varied multiple parameters simultaneously. A lack of standards for sensitivity analysis and reporting in spatial PVAs has the potential to compromise the ability to learn collectively from PVA results, accurately interpret results in cases where model relationships include nonlinearities and interactions, prioritize monitoring and management actions, and ensure conservation-planning decisions are robust to uncertainties in spatial and nonspatial parameters. Our review underscores the need to develop tools for global sensitivity analysis and apply these to spatial PVA.
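In contrast to the local, one-at-a-time analyses the review criticises, a global approach varies multiple parameters simultaneously. A minimal sketch, using a toy two-parameter model (not a PVA package); ranking by absolute correlation is a crude stand-in for formal global sensitivity indices:

```python
import random

def growth_rate(survival, fecundity):
    # Toy model output (hypothetical), standing in for a PVA prediction
    return survival ** 2 + 0.5 * fecundity

def corr(xs, ys):
    """Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Vary both parameters simultaneously via simple random sampling
random.seed(42)
surv = [random.uniform(0.5, 0.9) for _ in range(2000)]
fec = [random.uniform(0.1, 0.5) for _ in range(2000)]
out = [growth_rate(s, f) for s, f in zip(surv, fec)]

# Crude sensitivity ranking: |correlation| of each sampled parameter with the output
sensitivity = {"survival": abs(corr(surv, out)), "fecundity": abs(corr(fec, out))}
most_influential = max(sensitivity, key=sensitivity.get)
```

Because both parameters are perturbed in every sample, interactions between them are reflected in the output spread, which one-at-a-time designs cannot capture.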

  3. Design evaluation and optimisation in crossover pharmacokinetic studies analysed by nonlinear mixed effects models

    OpenAIRE

    Nguyen, Thu Thuy; Bazzoli, Caroline; Mentré, France

    2012-01-01

International audience; Bioequivalence or interaction trials are commonly studied in crossover design and can be analysed by nonlinear mixed effects models as an alternative to the noncompartmental approach. We propose an extension of the population Fisher information matrix in nonlinear mixed effects models to design crossover pharmacokinetic trials, using a linearisation of the model around the random effect expectation, including within-subject variability and discrete covariates fixed or chan...

  4. Analysing outsourcing policies in an asset management context: a six-stage model

    OpenAIRE

    Schoenmaker, R.; Verlaan, J.G.

    2013-01-01

Asset managers of civil infrastructure are increasingly outsourcing their maintenance. Whereas maintenance is a cyclic process, decisions to outsource are often project-based, confusing the discussion on the degree of outsourcing. This paper presents a six-stage model that facilitates the top-down discussion of the degree of outsourcing maintenance. The model is based on the cyclic nature of maintenance. The six-stage model can: (1) give clear statements about the pre...

  5. Connecting SEM Analysis and Profile Analysis via MDS.

    Science.gov (United States)

    Kim, Se-Kang; Davison, Mark L.

    This study was designed to explain how Profile Analysis via Multidimensional Scaling (PAMS) could be viewed as a structural equations model (SEM). The study replicated the major profiles extracted from PAMS in the context of the latent variables in SEM. Data involved the Basic Theme Scales of the Strong Campbell Interest Inventory (Campbell and…

  6. Structural Equations and Causal Explanations: Some Challenges for Causal SEM

    Science.gov (United States)

    Markus, Keith A.

    2010-01-01

    One common application of structural equation modeling (SEM) involves expressing and empirically investigating causal explanations. Nonetheless, several aspects of causal explanation that have an impact on behavioral science methodology remain poorly understood. It remains unclear whether applications of SEM should attempt to provide complete…

  7. Geographical variation of sporadic Legionnaires' disease analysed in a grid model

    DEFF Research Database (Denmark)

    Rudbeck, M.; Jepsen, Martin Rudbeck; Sonne, I.B.;

    2010-01-01

The aim was to analyse variation in the incidence of sporadic Legionnaires' disease in a geographical information system in three time periods (1990-2005) by the application of a grid model, and to assess the model's validity by analysing variation according to grid position. … Four cells had excess incidence in all three time periods. The analysis in 25 different grid positions indicated a low risk of overlooking cells with excess incidence in a random grid. The coefficient of variation ranged from 0.08 to 0.11, independent of the threshold. …

  8. X-ray CT analyses, models and numerical simulations: a comparison with petrophysical analyses in an experimental CO2 study

    Science.gov (United States)

    Henkel, Steven; Pudlo, Dieter; Enzmann, Frieder; Reitenbach, Viktor; Albrecht, Daniel; Ganzer, Leonhard; Gaupp, Reinhard

    2016-06-01

An essential part of the collaborative research project H2STORE (hydrogen to store), which is funded by the German government, was a comparison of various analytical methods for characterizing reservoir sandstones from different stratigraphic units. In this context Permian, Triassic and Tertiary reservoir sandstones were analysed. Rock core materials, provided by RWE Gasspeicher GmbH (Dortmund, Germany), GDF Suez E&P Deutschland GmbH (Lingen, Germany), E.ON Gas Storage GmbH (Essen, Germany) and RAG Rohöl-Aufsuchungs Aktiengesellschaft (Vienna, Austria), were processed by different laboratory techniques: thin sections were prepared, rock fragments were crushed, and cubes of 1 cm edge length and plugs 3 to 5 cm in length with a diameter of about 2.5 cm were sawn from macroscopically homogeneous cores. With this prepared sample material, polarized light microscopy and scanning electron microscopy coupled with image analyses, specific surface area measurements (after Brunauer, Emmett and Teller, 1938; BET), He-porosity and N2-permeability measurements, and high-resolution microcomputer tomography (μ-CT), which was used for numerical simulations, were applied. All these methods were practised on most of the same sample material, before and, for selected Permian sandstones, also after static CO2 experiments under reservoir conditions. A major concern in comparing the results of these methods is an appraisal of the reliability of the given porosity, permeability and mineral-specific reactive (inner) surface area data. The CO2 experiments modified the petrophysical as well as the mineralogical/geochemical rock properties. These changes are detectable by all applied analytical methods. Nevertheless, a major outcome of the high-resolution μ-CT analyses and subsequent numerical data simulations was that quite similar data sets and data interpretations were obtained by the different petrophysical standard methods. Moreover, the μ-CT analyses are not only time saving, but also non-destructive.

  9. RooStatsCms: a tool for analyses modelling, combination and statistical studies

    Science.gov (United States)

    Piparo, D.; Schott, G.; Quast, G.

    2009-12-01

The RooStatsCms (RSC) software framework allows analysis modelling and combination, and statistical studies, together with access to sophisticated graphics routines for results visualisation. The goal of the project is to complement the existing analyses by means of their combination and accurate statistical studies.

  10. RooStatsCms: a tool for analyses modelling, combination and statistical studies

    CERN Document Server

    Piparo, D; Quast, Prof G

    2008-01-01

The RooStatsCms (RSC) software framework allows analysis modelling and combination, and statistical studies, together with access to sophisticated graphics routines for results visualisation. The goal of the project is to complement the existing analyses by means of their combination and accurate statistical studies.

  11. Combined Task and Physical Demands Analyses towards a Comprehensive Human Work Model

    Science.gov (United States)

    2014-09-01

Griffon Helicopter aircrew (Pilots and Flight Engineers) reported neck pain, particularly when wearing Night Vision Goggles (NVGs) (Forde et al., 2011). … velocities, and accelerations over time for each postural sequence. Neck strain measures derived from biomechanical analyses of these postural … and whole missions. The result is a comprehensive model of tasks and associated physical demands from which one can estimate the accumulative neck …

  12. Dutch AG-MEMOD model; A tool to analyse the agri-food sector

    NARCIS (Netherlands)

    Leeuwen, van M.G.A.; Tabeau, A.A.

    2005-01-01

    Agricultural policies in the European Union (EU) have a history of continuous reform. AG-MEMOD, acronym for Agricultural sector in the Member states and EU: econometric modelling for projections and analysis of EU policies on agriculture, forestry and the environment, provides a system for analysing

  13. Supply Chain Modeling for Fluorspar and Hydrofluoric Acid and Implications for Further Analyses

    Science.gov (United States)

    2015-04-01

SUBJECT TERMS: supply chain, model, fluorspar, hydrofluoric acid, shortfall, substitution, Defense Logistics Agency, National Defense … Distribution unlimited. IDA Document D-5379 (Log: H 15-000099), Institute for Defense Analyses, 4850 Mark Center Drive, Alexandria, Virginia 22311-1882. D. Sean Barnett and Jerome Bracken, Supply Chain Modeling for Fluorspar and Hydrofluoric Acid and Implications for Further Analyses.

  14. Wavelet-based spatial comparison technique for analysing and evaluating two-dimensional geophysical model fields

    Directory of Open Access Journals (Sweden)

    S. Saux Picart

    2011-11-01

Full Text Available Complex numerical models of the Earth's environment, based around 3-D or 4-D time and space domains, are routinely used for applications including climate predictions, weather forecasts, fishery management and environmental impact assessments. Quantitatively assessing the ability of these models to accurately reproduce geographical patterns at a range of spatial and temporal scales has always been a difficult problem to address. However, this is crucial if we are to rely on these models for decision making. Satellite data are potentially the only observational dataset able to cover the large spatial domains analysed by many types of geophysical models. Consequently, optical wavelength satellite data are beginning to be used to evaluate model hindcast fields of terrestrial and marine environments. However, these satellite data invariably contain regions of occluded or missing data due to clouds, further complicating or impacting on any comparisons with the model. A methodology has recently been developed to evaluate precipitation forecasts using radar observations. It allows model skill to be evaluated at a range of spatial scales and rain intensities. Here we extend the original method to allow its generic application to a range of continuous and discontinuous geophysical data fields, and therefore allow its use with optical satellite data. This is achieved through two major improvements to the original method: (i) all thresholds are determined based on the statistical distribution of the input data, so no a priori knowledge about the model fields being analysed is required, and (ii) occluded data can be analysed without impacting on the metric results. The method can be used to assess a model's ability to simulate geographical patterns over a range of spatial scales. We illustrate how the method provides a compact and concise way of visualising the degree of agreement between spatial features in two datasets. The application of the new method, its …

  15. Surface area and volume measurements of volcanic ash particles using micro-computed tomography (micro-CT): A comparison with scanning electron microscope (SEM) stereoscopic imaging and Brunauer-Emmett-Teller (BET) model

    Science.gov (United States)

Ersoy, Orkun; Şen, Erdal; Aydar, Erkan; Tatar, İlkan; Çelik, H. Hamdi

    2010-05-01

Volcanic ash particles are important components of explosive eruptions and their surface texture is the subject of intense research. Characterization of ash surfaces is crucial for understanding the physics of volcanic plumes, remote sensing measurements of ash and aerosols, interfacial processes, modelling transportation and deposition of tephra, and characterizing eruptive styles. A number of different methods have been used over the years to arrive at surface area estimates. The more common methods include estimates based on geometric considerations (geometric surface area) and on the physisorption of gas molecules on the surface of interest (physical surface area). In this study, micro-computed tomography (micro-CT), a non-destructive method providing three-dimensional data, enabled the measurement of surface areas and volumes of individual ash particles. Specific surface area estimates for ash particles were also obtained using nitrogen as the gas adsorbent and the BET (Brunauer-Emmett-Teller) model. Results were compared with the values obtained from SEM stereoscopic imaging and geometric considerations. Surface area estimates from micro-CT and SEM stereoscopic imaging overlap, with mean specific surface areas of 0.0167 and 0.0214 m2/g, respectively. However, ash particle surface textures deviate considerably from their idealised geometric forms, and approximations to a sphere and an ellipsoid both seemed inadequate for representing real ash surfaces. The higher surface area estimate (> 0.4 m2/g) obtained from the technique based on physical sorption of gases (the BET model here) was attributed to its capability to capture surface areas associated even with angstrom-sized pores. SEM stereoscopic and/or micro-CT imaging are suggested for characterization of textures in the macro-pore regions of ash particles.
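For comparison with the measured values above, the geometric surface area of an idealised spherical particle follows from A/(Vρ) = 3/(ρr); the radius and density below are illustrative assumptions, not values from the study:

```python
import math

def sphere_specific_surface_area(radius_m, density_kg_m3):
    """Geometric specific surface area (m^2/kg) of a sphere: A/(V*rho) = 3/(rho*r)."""
    area = 4.0 * math.pi * radius_m ** 2
    volume = (4.0 / 3.0) * math.pi * radius_m ** 3
    return area / (volume * density_kg_m3)

# Hypothetical ash particle: 50 micrometre radius, assumed density 2500 kg/m^3
ssa_m2_per_kg = sphere_specific_surface_area(50e-6, 2500.0)
ssa_m2_per_g = ssa_m2_per_kg / 1000.0
print(round(ssa_m2_per_g, 4))  # prints 0.024
```

The result is of the same order as the micro-CT and SEM estimates quoted above, while gas-adsorption (BET) values are far higher because angstrom-scale porosity contributes no appreciable geometric volume.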

  16. Stellar abundance analyses in the light of 3D hydrodynamical model atmospheres

    CERN Document Server

    Asplund, M

    2003-01-01

    I describe recent progress in terms of 3D hydrodynamical model atmospheres and 3D line formation and their applications to stellar abundance analyses of late-type stars. Such 3D studies remove the free parameters inherent in classical 1D investigations (mixing length parameters, macro- and microturbulence) yet are highly successful in reproducing a large arsenal of observational constraints such as detailed line shapes and asymmetries. Their potential for abundance analyses is illustrated by discussing the derived oxygen abundances in the Sun and in metal-poor stars, where they seem to resolve long-standing problems as well as significantly alter the inferred conclusions.

  17. WOMBAT: a tool for mixed model analyses in quantitative genetics by restricted maximum likelihood (REML).

    Science.gov (United States)

    Meyer, Karin

    2007-11-01

WOMBAT is a software package for quantitative genetic analyses of continuous traits, fitting a linear mixed model; estimates of covariance components and the resulting genetic parameters are obtained by restricted maximum likelihood. A wide range of models, comprising numerous traits, multiple fixed and random effects, selected genetic covariance structures, random regression models and reduced rank estimation, is accommodated. WOMBAT employs up-to-date numerical and computational methods. Together with the use of efficient compilers, this generates fast executable programs, suitable for large-scale analyses. Use of WOMBAT is illustrated for a bivariate analysis. The package consists of the executable program, available for LINUX and WINDOWS environments, a manual and a set of worked examples, and can be downloaded free of charge from (http://agbu.une.edu.au/~kmeyer/wombat.html).

  18. Application of an approximate vectorial diffraction model to analysing diffractive micro-optical elements

    Institute of Scientific and Technical Information of China (English)

    Niu Chun-Hui; Li Zhi-Yuan; Ye Jia-Sheng; Gu Ben-Yuan

    2005-01-01

Scalar diffraction theory, although simple and efficient, is too rough for analysing diffractive micro-optical elements. Rigorous vectorial diffraction theory requires extensive numerical efforts, and is not a convenient design tool. In this paper we employ a simple approximate vectorial diffraction model which combines the principle of the scalar diffraction theory with an approximate local field model to analyse the diffraction of optical waves by some typical two-dimensional diffractive micro-optical elements. The TE and TM polarization modes are both considered. We have found that the approximate vectorial diffraction model can agree much better with the rigorous electromagnetic simulation results than the scalar diffraction theory for these micro-optical elements.

  19. International Conference on SEMS 2012

    CERN Document Server

    Liu, Chuang; Scientific explanation and methodology of science; SEMS 2012

    2014-01-01

This volume contains the contributed papers of invitees to SEMS 2012 who also gave talks at the conference. The invitees are experts in philosophy of science and technology from Asia (besides China), Australia, Europe, Latin America and North America, as well as from within China. The papers in this volume represent the latest work of each researcher in his or her area of expertise, and as a result they give a good representation of cutting-edge research in diverse areas in different parts of the world.

  20. Analysing and combining atmospheric general circulation model simulations forced by prescribed SST. Tropical response

    Energy Technology Data Exchange (ETDEWEB)

Moron, V. [Université de Provence, UFR des sciences géographiques et de l'aménagement, Aix-en-Provence (France); Navarra, A. [Istituto Nazionale di Geofisica e Vulcanologia, Bologna (Italy); Ward, M. N. [University of Oklahoma, Cooperative Institute for Mesoscale Meteorological Studies, Norman OK (United States); Foland, C. K. [Hadley Centre for Climate Prediction and Research, Meteorological Office, Bracknell (United Kingdom); Friederichs, P. [Meteorologisches Institut der Universität Bonn, Bonn (Germany); Maynard, K.; Polcher, J. [Université Pierre et Marie Curie, Paris (France). Centre National de la Recherche Scientifique, Laboratoire de Météorologie Dynamique, Paris

    2001-08-01

The ECHAM 3.2 (T21), ECHAM (T30) and LMD (version 6, grid-point resolution with 96 longitudes x 72 latitudes) atmospheric general circulation models were integrated through the period 1961 to 1993, forced with the same observed Sea Surface Temperatures (SSTs) as compiled at the Hadley Centre. Three runs were made for each model, starting from different initial conditions. The large-scale tropical inter-annual variability is analysed to give a picture of the skill of each model and of some sort of combination of the three models. To analyse the similarity of model response averaged over the same key regions, several widely-used indices are calculated: the Southern Oscillation Index (SOI), large-scale wind shear indices of the boreal summer monsoon in Asia and West Africa, and rainfall indices for NE Brazil, the Sahel and India. Even for the indices where internal noise is large, some years are consistent amongst all the runs, suggesting inter-annual variability of the strength of SST forcing. Averaging the ensemble mean of the three models (the super-ensemble mean) yields improved skill. When each run is weighted according to its skill, taking three runs from different models instead of three runs of the same model improves the mean skill. There is also some indication that one run of a given model could be better than another, suggesting that persistent anomalies could change its sensitivity to SST. The index approach lacks the flexibility to assess whether a model's response to SST has been geographically displaced. It can focus on the first mode in the global tropics, found through singular value decomposition analysis, which is clearly related to El Niño/Southern Oscillation (ENSO) in all seasons. The Observed-Model and Model-Model analyses lead to almost the same patterns, suggesting that the dominant pattern of model response is also the most skilful mode. Seasonal modulation of both skill and spatial patterns (both model and observed) clearly exists, with highest skill …

  1. Analysing, Interpreting, and Testing the Invariance of the Actor-Partner Interdependence Model

    Directory of Open Access Journals (Sweden)

    Gareau, Alexandre

    2016-09-01

Full Text Available Although in recent years researchers have begun to utilize dyadic data analyses such as the actor-partner interdependence model (APIM), certain limitations to the applicability of these models still exist. Given the complexity of APIMs, most researchers will often use observed scores to estimate the model's parameters, which can significantly limit and underestimate statistical results. The aim of this article is to highlight the importance of conducting a confirmatory factor analysis (CFA) of equivalent constructs between dyad members (i.e. measurement equivalence/invariance; ME/I). Different steps for merging CFA and APIM procedures will be detailed in order to shed light on new and integrative methods.

  2. Distinguishing Mediational Models and Analyses in Clinical Psychology: Atemporal Associations Do Not Imply Causation.

    Science.gov (United States)

    Winer, E Samuel; Cervone, Daniel; Bryant, Jessica; McKinney, Cliff; Liu, Richard T; Nadorff, Michael R

    2016-09-01

    A popular way to attempt to discern causality in clinical psychology is through mediation analysis. However, mediation analysis is sometimes applied to research questions in clinical psychology when inferring causality is impossible. This practice may soon increase with new, readily available, and easy-to-use statistical advances. Thus, we here provide a heuristic to remind clinical psychological scientists of the assumptions of mediation analyses. We describe recent statistical advances and unpack assumptions of causality in mediation, underscoring the importance of time in understanding mediational hypotheses and analyses in clinical psychology. Example analyses demonstrate that statistical mediation can occur despite theoretical mediation being improbable. We propose a delineation of mediational effects derived from cross-sectional designs into the terms temporal and atemporal associations to emphasize time in conceptualizing process models in clinical psychology. The general implications for mediational hypotheses and the temporal frameworks from within which they may be drawn are discussed. © 2016 Wiley Periodicals, Inc.
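The product-of-coefficients logic underlying statistical mediation can be sketched as follows; the data are hypothetical and, as the authors stress, such an atemporal association implies nothing about causal mediation (a full model would also regress Y on X and M jointly):

```python
def ols_slope(x, y):
    """Simple OLS slope of y regressed on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    return sxy / sxx

# Hypothetical cross-sectional scores: predictor X, mediator M, outcome Y
X = [1, 2, 3, 4, 5]
M = [2, 4, 6, 8, 10]    # constructed so the a-path (X -> M) is exactly 2
Y = [3, 7, 11, 15, 19]  # constructed so the b-path (M -> Y) is exactly 2

a_path = ols_slope(X, M)       # X -> M
b_path = ols_slope(M, Y)       # M -> Y (simplified: X not partialled out here)
indirect = a_path * b_path     # product-of-coefficients indirect effect
```

A nonzero `indirect` here is purely a statistical decomposition of cross-sectional associations; the temporal/atemporal distinction the authors propose is about whether such a product can bear a causal interpretation at all.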

  3. FluxExplorer: A general platform for modeling and analyses of metabolic networks based on stoichiometry

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

Stoichiometry-based analyses of metabolic networks have aroused significant interest among systems biology researchers in recent years. It is necessary to develop a more convenient modeling platform on which users can reconstruct their network models using completely graphical operations, and explore them with powerful analysis modules to get a better understanding of the properties of metabolic systems. Herein, an in silico platform, FluxExplorer, for metabolic modeling and analyses based on stoichiometry has been developed as a publicly available tool for systems biology research. This platform integrates various analytic approaches, including flux balance analysis, minimization of metabolic adjustment, extreme pathways analysis, shadow price analysis, and singular value decomposition, providing a thorough characterization of the metabolic system. Using a graphic modeling process, metabolic networks can be reconstructed and modified intuitively and conveniently. The inconsistencies of a model with respect to the FBA principles can be proved automatically. In addition, this platform supports the systems biology markup language (SBML). FluxExplorer has been applied to rebuild a metabolic network in mammalian mitochondria, producing meaningful results. Generally, it is a powerful and very convenient tool for metabolic network modeling and analysis.

  4. Performance Assessment Modeling and Sensitivity Analyses of Generic Disposal System Concepts.

    Energy Technology Data Exchange (ETDEWEB)

    Sevougian, S. David; Freeze, Geoffrey A.; Gardner, William Payton; Hammond, Glenn Edward; Mariner, Paul

    2014-09-01

… directly, rather than through simplified abstractions. It also allows for complex representations of the source term, e.g., the explicit representation of many individual waste packages (i.e., meter-scale detail of an entire waste emplacement drift). This report fulfills the Generic Disposal System Analysis Work Package Level 3 Milestone - Performance Assessment Modeling and Sensitivity Analyses of Generic Disposal System Concepts (M3FT-14SN0808032).

  5. Calibration of back-analysed model parameters for landslides using classification statistics

    Science.gov (United States)

    Cepeda, Jose; Henderson, Laura

    2016-04-01

    Back-analyses are useful for characterizing the geomorphological and mechanical processes and parameters involved in the initiation and propagation of landslides. These processes and parameters can in turn be used for improving forecasts of scenarios and hazard assessments in areas or sites which have similar settings to the back-analysed cases. The selection of the modelled landslide that produces the best agreement with the actual observations requires running a number of simulations by varying the type of model and the sets of input parameters. The comparison of the simulated and observed parameters is normally performed by visual comparison of geomorphological or dynamic variables (e.g., geometry of scarp and final deposit, maximum velocities and depths). Over the past six years, a method developed by NGI has been used by some researchers for a more objective selection of back-analysed input model parameters. That method includes an adaptation of the equations for calculating classifiers, and a comparative evaluation of classifiers of the selected parameter sets in the Receiver Operating Characteristic (ROC) space. This contribution presents an update of the methodology. The proposed procedure allows comparisons between two or more "clouds" of classifiers. Each cloud represents the performance of a model over a range of input parameters (e.g., samples of probability distributions). Because each cloud does not necessarily produce a full ROC curve, two new normalised ROC-space parameters are introduced for characterizing the performance of each cloud. The first parameter is representative of the cloud position relative to the point of perfect classification. The second parameter characterizes the position of the cloud relative to the theoretically perfect ROC curve and the no-discrimination line. The methodology is illustrated with back-analyses of slope stability and landslide runout of selected case studies.
This research activity has been...
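
    The ROC-space comparison described above can be sketched in a few lines. This is a minimal illustration, not NGI's actual implementation: it assumes each simulation yields a confusion matrix (cell-by-cell agreement between simulated and observed landslide extent), places each parameter set as a point in ROC space, and scores it by its distance to the point of perfect classification (FPR = 0, TPR = 1).

```python
import math

def roc_point(tp, fn, fp, tn):
    """(FPR, TPR) for one simulation's confusion matrix."""
    return fp / (fp + tn), tp / (tp + fn)

def distance_to_perfect(fpr, tpr):
    """Distance from a classifier to perfect classification at (0, 1).

    0 is a perfect classifier; sqrt(2) is the worst corner (1, 0).
    """
    return math.hypot(fpr, 1.0 - tpr)

# One hypothetical "cloud": a confusion matrix per sampled parameter set
cloud = [(80, 20, 10, 90), (70, 30, 5, 95), (90, 10, 25, 75)]
points = [roc_point(*cm) for cm in cloud]
best = min(points, key=lambda p: distance_to_perfect(*p))
```

    The same distance can summarize a whole cloud (e.g. its centroid or its best member), which is in the spirit of the first normalised parameter described above.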

  6. Volvo Logistics Corporation Returnable Packaging System : a model for analysing cost savings when switching packaging system

    OpenAIRE

    2008-01-01

    This thesis is a study for analysing costs affected by packaging in a producing industry. The purpose is to develop a model that will calculate and present possible cost savings for the customer by using Volvo Logistics Corporations, VLC’s, returnable packaging instead of other packaging solutions. The thesis is based on qualitative data gained from both theoretical and empirical studies. The methodology for gaining information has been to study theoretical sources such as course literature a...

  7. Computational model for supporting SHM systems design: Damage identification via numerical analyses

    Science.gov (United States)

    Sartorato, Murilo; de Medeiros, Ricardo; Vandepitte, Dirk; Tita, Volnei

    2017-02-01

    This work presents a computational model to simulate thin structures monitored by piezoelectric sensors in order to support the design of SHM systems, which use vibration based methods. Thus, a new shell finite element model was proposed and implemented via a User Element subroutine (UEL) into the commercial package ABAQUS™. This model was based on a modified First Order Shear Theory (FOST) for piezoelectric composite laminates. After that, damaged cantilever beams with two piezoelectric sensors in different positions were investigated by using experimental analyses and the proposed computational model. A maximum difference in the magnitude of the FRFs between numerical and experimental analyses of 7.45% was found near the resonance regions. For damage identification, different levels of damage severity were evaluated by seven damage metrics, including one proposed by the present authors. Numerical and experimental damage metric values were compared, showing a good correlation in terms of tendency. Finally, based on comparisons of numerical and experimental results, the potential and limitations of the proposed computational model for supporting the design of SHM systems are discussed.

  8. Model error analyses of photochemistry mechanisms using the BEATBOX/BOXMOX data assimilation toy model

    Science.gov (United States)

    Knote, C. J.; Eckl, M.; Barré, J.; Emmons, L. K.

    2016-12-01

    Simplified descriptions of photochemistry in the atmosphere ('photochemical mechanisms'), necessary to reduce the computational burden of a model simulation, contribute significantly to the overall uncertainty of an air quality model. Understanding how the photochemical mechanism contributes to observed model errors through examination of results of the complete model system is next to impossible due to cancellation and amplification effects amongst the tightly interconnected model components. Here we present BEATBOX, a novel method to evaluate photochemical mechanisms using the underlying chemistry box model BOXMOX. With BOXMOX we can rapidly initialize various mechanisms (e.g. MOZART, RACM, CBMZ, MCM) with homogenized observations (e.g. from field campaigns) and conduct idealized 'chemistry in a jar' simulations under controlled conditions. BEATBOX is a data assimilation toy model built upon BOXMOX which makes it possible to simulate the effects of assimilating observations (e.g., CO, NO2, O3) into these simulations. In this presentation we show how we use the Master Chemical Mechanism (MCM, U Leeds) as a benchmark for more simplified mechanisms like MOZART, use BEATBOX to homogenize the chemical environment and diagnose errors within the more simplified mechanisms. We present BEATBOX as a new, freely available tool that allows researchers to rapidly evaluate their chemistry mechanism against a range of others under varying chemical conditions.
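
    A 'chemistry in a jar' run of the kind a box model performs can be illustrated with a toy mechanism. The two reactions, rate constants and Euler integration below are illustrative assumptions only; this is not the BOXMOX API nor any of the mechanisms named above.

```python
# Toy NO2 photolysis / O3 titration cycle, integrated with forward Euler.
J_NO2 = 8.0e-3       # NO2 + hv -> NO + O3 (lumped photolysis), s^-1
K_NO_O3 = 4.0e-4     # NO + O3 -> NO2 (titration), ppb^-1 s^-1

def run_box(no2, no, o3, dt=1.0, t_end=3600.0):
    """Integrate the toy mechanism with forward Euler; mixing ratios in ppb."""
    for _ in range(int(t_end / dt)):
        p = J_NO2 * no2          # production of NO and O3
        l = K_NO_O3 * no * o3    # loss of NO and O3, production of NO2
        no2 += (l - p) * dt
        no += (p - l) * dt
        o3 += (p - l) * dt
    return no2, no, o3

no2, no, o3 = run_box(no2=20.0, no=5.0, o3=30.0)
# After an hour the jar is near photostationary state: J*NO2 ~= k*NO*O3
psr = (J_NO2 * no2) / (K_NO_O3 * no * o3)
```

    Note that total NOx (NO + NO2) is conserved exactly by construction, a useful sanity check for any box-model integration.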

  9. Modeling and performance analyses of evaporators in frozen-food supermarket display cabinets at low temperatures

    Energy Technology Data Exchange (ETDEWEB)

    Getu, H.M.; Bansal, P.K. [Department of Mechanical Engineering, The University of Auckland, Private Bag 92019, Auckland (New Zealand)

    2007-11-15

    This paper presents modeling and experimental analyses of evaporators in 'in situ' frozen-food display cabinets at low temperatures in the supermarket industry. Extensive experiments were conducted to measure store and display cabinet relative humidities and temperatures, and pressures, temperatures and mass flow rates of the refrigerant. The mathematical model adopts various empirical correlations of heat transfer coefficients and frost properties in a fin-tube heat exchanger in order to investigate the influence of indoor conditions on the performance of the display cabinets. The model is validated with the experimental data of 'in situ' cabinets. The model would be a useful tool to guide design engineers in evaluating the performance of supermarket display cabinet heat exchangers under various store conditions. (author)

  10. Using Weather Data and Climate Model Output in Economic Analyses of Climate Change

    Energy Technology Data Exchange (ETDEWEB)

    Auffhammer, M.; Hsiang, S. M.; Schlenker, W.; Sobel, A.

    2013-06-28

    Economists are increasingly using weather data and climate model output in analyses of the economic impacts of climate change. This article introduces a set of weather data sets and climate models that are frequently used, discusses the most common mistakes economists make in using these products, and identifies ways to avoid these pitfalls. We first provide an introduction to weather data, including a summary of the types of datasets available, and then discuss five common pitfalls that empirical researchers should be aware of when using historical weather data as explanatory variables in econometric applications. We then provide a brief overview of climate models and discuss two common and significant errors often made by economists when climate model output is used to simulate the future impacts of climate change on an economic outcome of interest.

  11. A Structural Equation Model (SEM) of the Impact of Transformational, Visionary, Charismatic and Ethical Leadership Styles on the Development of Wise Leadership among Filipino Private Secondary School Principals

    Science.gov (United States)

    Parco-Tropicales, Marishirl; de Guzman, Allan B.

    2014-01-01

    In recent years, wisdom is seen as a key resource for school leaders in dealing with the dynamics of the changing school environments. This study aims to expand the growing interest in wisdom by testing a model that describes the impact of transformational, visionary, charismatic and ethical leadership styles on wise leadership development…

  12. Risk Factor Analyses for the Return of Spontaneous Circulation in the Asphyxiation Cardiac Arrest Porcine Model

    Directory of Open Access Journals (Sweden)

    Cai-Jun Wu

    2015-01-01

    Background: Animal models of asphyxiation cardiac arrest (ACA) are frequently used in basic research to mirror the clinical course of cardiac arrest (CA). The rates of the return of spontaneous circulation (ROSC) in ACA animal models are lower than those from studies that have utilized ventricular fibrillation (VF) animal models. The purpose of this study was to characterize the factors associated with ROSC in the ACA porcine model. Methods: Forty-eight healthy miniature pigs underwent endotracheal tube clamping to induce CA. Once induced, CA was maintained untreated for a period of 8 min. Two minutes following the initiation of cardiopulmonary resuscitation (CPR), defibrillation was attempted until ROSC was achieved or the animal died. To assess the factors associated with ROSC in this CA model, logistic regression analyses were performed to analyze gender, the time of preparation, the amplitude spectrum area (AMSA) at the beginning of CPR and the pH at the beginning of CPR. A receiver-operating characteristic (ROC) curve was used to evaluate the predictive value of AMSA for ROSC. Results: ROSC was achieved in only 52.1% of the animals in this ACA porcine model. The multivariate logistic regression analyses revealed that ROSC significantly depended on the time of preparation, the AMSA at the beginning of CPR and the pH at the beginning of CPR. The area under the ROC curve for AMSA at the beginning of CPR in predicting ROSC was 0.878 (95% confidence interval: 0.773-0.983), and the optimum cut-off value was 15.62 (specificity 95.7% and sensitivity 80.0%). Conclusions: The time of preparation, the AMSA and the pH at the beginning of CPR were associated with ROSC in this ACA porcine model. AMSA also predicted the likelihood of ROSC in this ACA animal model.
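
    The AMSA-based prediction in this record can be illustrated with a small empirical-ROC sketch. The values below are invented, not the study's data; the function computes TPR/FPR over candidate cutoffs and picks the one maximizing Youden's J, one common way to choose an "optimum cut-off value" like the 15.62 reported above.

```python
import numpy as np

def roc_with_youden(scores, labels):
    """Empirical ROC: returns (best cutoff by Youden's J, AUC).

    scores: continuous predictor (e.g. AMSA at the start of CPR);
    labels: 1 = outcome achieved (e.g. ROSC), 0 = not achieved.
    """
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels)
    thresholds = np.unique(scores)[::-1]            # descending
    tpr = np.array([(scores[labels == 1] >= t).mean() for t in thresholds])
    fpr = np.array([(scores[labels == 0] >= t).mean() for t in thresholds])
    best = thresholds[np.argmax(tpr - fpr)]         # Youden's J = TPR - FPR
    # Trapezoidal AUC, starting the curve at the (0, 0) origin
    f = np.concatenate([[0.0], fpr])
    s = np.concatenate([[0.0], tpr])
    auc = float(np.sum((f[1:] - f[:-1]) * (s[1:] + s[:-1]) / 2.0))
    return best, auc

# Hypothetical predictor values and resuscitation outcomes
scores = [18.1, 17.4, 16.3, 15.9, 14.8, 12.2, 11.5, 10.9]
labels = [1,    1,    1,    1,    0,    0,    0,    0]
cutoff, auc = roc_with_youden(scores, labels)
```

    With these perfectly separated toy data, the chosen cutoff is the lowest score among the positives and the AUC is 1; real data like the study's yield intermediate values such as 0.878.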

  13. Prediction Uncertainty Analyses for the Combined Physically-Based and Data-Driven Models

    Science.gov (United States)

    Demissie, Y. K.; Valocchi, A. J.; Minsker, B. S.; Bailey, B. A.

    2007-12-01

    The unavoidable simplification associated with physically-based mathematical models can result in biased parameter estimates and correlated model calibration errors, which in turn affect the accuracy of model predictions and the corresponding uncertainty analyses. In this work, a physically-based groundwater model (MODFLOW) together with error-correcting artificial neural networks (ANN) are used in a complementary fashion to obtain an improved prediction (i.e. prediction with reduced bias and error correlation). The associated prediction uncertainty of the coupled MODFLOW-ANN model is then assessed using three alternative methods. The first method estimates the combined model confidence and prediction intervals using first-order least-squares regression approximation theory. The second method uses Monte Carlo and bootstrap techniques for MODFLOW and ANN, respectively, to construct the combined model confidence and prediction intervals. The third method relies on a Bayesian approach that uses analytical or Monte Carlo methods to derive the intervals. The performance of these approaches is compared with Generalized Likelihood Uncertainty Estimation (GLUE) and Calibration-Constrained Monte Carlo (CCMC) intervals of the MODFLOW predictions alone. The results are demonstrated for a hypothetical case study developed based on a phytoremediation site at the Argonne National Laboratory. This case study comprises structural, parameter, and measurement uncertainties. The preliminary results indicate that the proposed three approaches yield comparable confidence and prediction intervals, thus making the computationally efficient first-order least-squares regression approach attractive for estimating the coupled model uncertainty. These results will be compared with GLUE and CCMC results.
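
    The complementary physical/data-driven idea can be sketched schematically. Below, simple analytic functions stand in for MODFLOW and for reality, and a quadratic fit plays the role of the error-correcting ANN; all names and numbers are illustrative assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def physical_model(x):
    """Stand-in for the simplified physically-based model (MODFLOW's role)."""
    return 2.0 * x

def true_system(x):
    """Hypothetical 'reality' with behaviour the simplified model misses."""
    return 2.0 * x + 0.5 * x ** 2

# Calibration data: noisy observations of the true system
x_train = np.linspace(0.0, 1.0, 20)
obs = true_system(x_train) + rng.normal(0.0, 0.01, x_train.size)

# Error-correcting component: learn the structured residual
# that the physical model cannot represent
residual_fit = np.polyfit(x_train, obs - physical_model(x_train), deg=2)

def corrected_model(x):
    return physical_model(x) + np.polyval(residual_fit, x)

x_test = np.linspace(0.0, 1.0, 50)
err_phys = float(np.max(np.abs(physical_model(x_test) - true_system(x_test))))
err_corr = float(np.max(np.abs(corrected_model(x_test) - true_system(x_test))))
```

    The corrected model's residuals are smaller and less structured, which is exactly the "reduced bias and error correlation" the coupled approach targets.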

  14. The importance of accurate muscle modelling for biomechanical analyses: a case study with a lizard skull

    Science.gov (United States)

    Gröning, Flora; Jones, Marc E. H.; Curtis, Neil; Herrel, Anthony; O'Higgins, Paul; Evans, Susan E.; Fagan, Michael J.

    2013-01-01

    Computer-based simulation techniques such as multi-body dynamics analysis are becoming increasingly popular in the field of skull mechanics. Multi-body models can be used for studying the relationships between skull architecture, muscle morphology and feeding performance. However, to be confident in the modelling results, models need to be validated against experimental data, and the effects of uncertainties or inaccuracies in the chosen model attributes need to be assessed with sensitivity analyses. Here, we compare the bite forces predicted by a multi-body model of a lizard (Tupinambis merianae) with in vivo measurements, using anatomical data collected from the same specimen. This subject-specific model predicts bite forces that are very close to the in vivo measurements and also shows a consistent increase in bite force as the bite position is moved posteriorly on the jaw. However, the model is very sensitive to changes in muscle attributes such as fibre length, intrinsic muscle strength and force orientation, with bite force predictions varying considerably when these three variables are altered. We conclude that accurate muscle measurements are crucial to building realistic multi-body models and that subject-specific data should be used whenever possible. PMID:23614944

  15. Multiple-Group Analysis Using the sem Package in the R System

    Science.gov (United States)

    Evermann, Joerg

    2010-01-01

    Multiple-group analysis in covariance-based structural equation modeling (SEM) is an important technique to ensure the invariance of latent construct measurements and the validity of theoretical models across different subpopulations. However, not all SEM software packages provide multiple-group analysis capabilities. The sem package for the R…

  16. Analysing adverse events by time-to-event models: the CLEOPATRA study.

    Science.gov (United States)

    Proctor, Tanja; Schumacher, Martin

    2016-07-01

    When analysing primary and secondary endpoints in a clinical trial with patients suffering from a chronic disease, statistical models for time-to-event data are commonly used and accepted. This is in contrast to the analysis of data on adverse events, where often only a table with observed frequencies and corresponding test statistics is reported. An example is the recently published CLEOPATRA study, where a three-drug regimen is compared with a two-drug regimen in patients with HER2-positive first-line metastatic breast cancer. Here, as described earlier, primary and secondary endpoints (progression-free and overall survival) are analysed using time-to-event models, whereas adverse events are summarized in a simple frequency table, although the duration of study treatment differs substantially. In this paper, we demonstrate the application of time-to-event models to first serious adverse events using the data of the CLEOPATRA study. This covers the broad range from a simple incidence rate approach through survival and competing risks models (with death as a competing event) to multi-state models. We illustrate all approaches by means of graphical displays highlighting the temporal dynamics and compare the obtained results. For the CLEOPATRA study, the resulting hazard ratios are all in the same order of magnitude. But the use of time-to-event models provides valuable additional information that would potentially be overlooked by only presenting incidence proportions. These models adequately address the temporal dynamics of serious adverse events as well as death of patients. Copyright © 2016 John Wiley & Sons, Ltd.
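
    The simplest of the approaches listed, the incidence rate per unit of person-time, already corrects for differing treatment duration. A hypothetical sketch (arm labels and numbers invented, not CLEOPATRA data):

```python
def incidence_rate(events, person_years):
    """First serious adverse events per person-year at risk."""
    return events / person_years

# Invented arm-level summaries: the raw frequency table (120 vs 100 events)
# suggests more SAEs in arm A, but exposure time differs substantially.
rate_a = incidence_rate(events=120, person_years=800.0)   # per person-year
rate_b = incidence_rate(events=100, person_years=500.0)
rate_ratio = rate_a / rate_b   # below 1: arm A has fewer SAEs per unit exposure
```

    This reversal between raw counts and exposure-adjusted rates is precisely why a frequency table alone can mislead when treatment durations differ.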

  17. SEM Investigation of Superheater Deposits from Biomass-Fired Boilers

    DEFF Research Database (Denmark)

    Jensen, Peter Arendt; Frandsen, Flemming; Hansen, Jørn

    2004-01-01

    Mature superheater deposit samples were extracted from two straw-fired boilers, Masnedø and Ensted, with fuel inputs of 33 MWth and 100 MWth, respectively. SEM (scanning electron microscopy) images and EDX (energy dispersive X-ray) analyses were performed on the deposit samples. Different strategies...

  18. Models and analyses for inertial-confinement fusion-reactor studies

    Energy Technology Data Exchange (ETDEWEB)

    Bohachevsky, I.O.

    1981-05-01

    This report describes models and analyses devised at Los Alamos National Laboratory to determine the technical characteristics of different inertial confinement fusion (ICF) reactor elements required for component integration into a functional unit. We emphasize the generic properties of the different elements rather than specific designs. The topics discussed are general ICF reactor design considerations; reactor cavity phenomena, including the restoration of interpulse ambient conditions; first-wall temperature increases and material losses; reactor neutronics and hydrodynamic blanket response to neutron energy deposition; and analyses of loads and stresses in the reactor vessel walls, including remarks about the generation and propagation of very short wavelength stress waves. A discussion of analytic approaches useful in integrations and optimizations of ICF reactor systems concludes the report.

  19. Moderators, mediators, and bidirectional relationships in the International Classification of Functioning, Disability and Health (ICF) framework: An empirical investigation using a longitudinal design and Structural Equation Modeling (SEM).

    Science.gov (United States)

    Rouquette, Alexandra; Badley, Elizabeth M; Falissard, Bruno; Dub, Timothée; Leplege, Alain; Coste, Joël

    2015-06-01

    The International Classification of Functioning, Disability and Health (ICF) published in 2001 describes the consequences of health conditions with three components of impairments in body structures or functions, activity limitations and participation restrictions. Two of the new features of the conceptual model were the possibility of feedback effects between each ICF component and the introduction of contextual factors conceptualized as moderators of the relationship between the components. The aim of this longitudinal study is to provide empirical evidence of these two kinds of effect. Structural equation modeling was used to analyze data from a French population-based cohort of 548 patients with knee osteoarthritis recruited between April 2007 and March 2009 and followed for three years. Indicators of the body structure and function, activity and participation components of the ICF were derived from self-administered standardized instruments. The measurement model revealed four separate factors for body structures impairments, body functions impairments, activity limitations and participation restrictions. The classic sequence from body impairments to participation restrictions through activity limitations was found at each assessment time. Longitudinal study of the ICF component relationships showed a feedback pathway indicating that the level of participation restrictions at baseline was predictive of activity limitations three years later. Finally, the moderating role of personal (age, sex, mental health, etc.) and environmental factors (family relationships, mobility device use, etc.) was investigated. Three contextual factors (sex, family relationships and walking stick use) were found to be moderators for the relationship between the body impairments and the activity limitations components. Mental health was found to be a mediating factor of the effect of activity limitations on participation restrictions.
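
    The moderator effects reported here are, in regression terms, interaction effects. A generic simulated sketch (variable names are invented placeholders, not the cohort's measures):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 400
impairment = rng.normal(size=n)            # e.g. body function impairment score
moderator = rng.integers(0, 2, size=n)     # e.g. walking stick use (0/1)

# Moderation means the impairment -> limitation slope depends on the moderator:
# slope 0.3 without the aid, 0.8 (= 0.3 + 0.5) with it, plus noise
limitation = (0.3 * impairment
              + 0.5 * moderator * impairment
              + rng.normal(0.0, 0.5, size=n))

X = np.column_stack([np.ones(n), impairment, moderator, impairment * moderator])
beta, *_ = np.linalg.lstsq(X, limitation, rcond=None)
interaction_effect = beta[3]               # recovers ~0.5 if moderation holds
```

    In an SEM this same idea appears as a multiple-group model or a latent interaction; mediation, by contrast, is tested through indirect (product-of-paths) effects.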

  20. Dynamics and spatial structure of ENSO from re-analyses versus CMIP5 models

    Science.gov (United States)

    Serykh, Ilya; Sonechkin, Dmitry

    2016-04-01

    Based on a mathematical idea about the so-called strange nonchaotic attractor (SNA) in quasi-periodically forced dynamical systems, the currently available re-analyses data are considered. It is found that the El Niño - Southern Oscillation (ENSO) is driven not only by the seasonal heating, but also by three further external periodicities (incommensurate with the annual period) associated with the ~18.6-year lunar-solar nutation of the Earth's rotation axis, the ~11-year sunspot activity cycle and the ~14-month Chandler wobble in the Earth's pole motion. Because their periods are incommensurable, the four forces affect the system at mutually unsynchronized moments. As a result, the ENSO time series appear very complex (strange in mathematical terms) but nonchaotic. The power spectra of ENSO indices reveal numerous peaks located at periods that are multiples of the above periodicities, as well as at their sub- and super-harmonics. Despite this complexity, a mutual order seems inherent in the ENSO time series and their spectra. This order reveals itself in a scaling of the power spectrum peaks, and of the corresponding rhythms in the ENSO dynamics, that resembles the power spectrum and dynamics of an SNA. This means that, in principle, there is no limit to forecasting ENSO; in practice, it opens the possibility of forecasting ENSO several years ahead. Global spatial structures of anomalies during El Niño and power spectra of ENSO indices from re-analyses are compared with the respective output quantities of the CMIP5 climate models (the Historical experiment). It is found that the models reproduce global spatial structures of the near-surface temperature and sea-level pressure anomalies during El Niño very similar to these fields in the re-analyses considered. But the power spectra of the ENSO indices from the CMIP5 models show no peaks at the same periods as the re-analyses' power spectra. We suppose that it is possible to improve modeled
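
    The kind of multi-peaked spectrum described can be reproduced with a synthetic index. The periods below are only the approximate forcing periods named in the abstract, expressed in months; the series itself is artificial, not an ENSO index.

```python
import numpy as np

# Approximate forcing periods from the abstract, in months: annual cycle,
# ~14-month Chandler wobble, ~11-yr solar cycle, ~18.6-yr lunar-solar nutation
periods = [12.0, 14.0, 11.0 * 12.0, 18.6 * 12.0]

n = 4096                                   # months of synthetic record
t = np.arange(n)
index = sum(np.sin(2.0 * np.pi * t / p) for p in periods)

power = np.abs(np.fft.rfft(index)) ** 2
freqs = np.fft.rfftfreq(n, d=1.0)          # cycles per month

def peak_power(period):
    """Spectral power in the bins nearest 1/period."""
    k = round(n / period)
    return power[k - 1:k + 2].max()
```

    Each forcing period produces a sharp peak far above the background, which is the fingerprint the abstract reports as missing from the CMIP5 spectra.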

  1. Design evaluation and optimisation in crossover pharmacokinetic studies analysed by nonlinear mixed effects models.

    Science.gov (United States)

    Nguyen, Thu Thuy; Bazzoli, Caroline; Mentré, France

    2012-05-20

    Bioequivalence or interaction trials are commonly studied in a crossover design and can be analysed by nonlinear mixed effects models as an alternative to the noncompartmental approach. We propose an extension of the population Fisher information matrix in nonlinear mixed effects models to design crossover pharmacokinetic trials, using a linearisation of the model around the random effect expectation, including within-subject variability and discrete covariates fixed or changing between periods. We use the expected standard errors of the treatment effect to compute the power of the Wald test of comparison or equivalence and the number of subjects needed for a given power. We perform various simulations mimicking two-period crossover trials to show the relevance of these developments. We then apply these developments to design a crossover pharmacokinetic study of amoxicillin in piglets and implement them in the new version 3.2 of the R function PFIM.
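
    Going from an expected standard error to power and sample size can be sketched generically. This uses the standard normal-approximation formula for a two-sided Wald test (ignoring the far tail), not the PFIM implementation; the effect size and per-subject SE are illustrative.

```python
import math

def norm_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def norm_ppf(p, lo=-10.0, hi=10.0):
    """Inverse normal CDF by bisection (avoids external dependencies)."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if norm_cdf(mid) < p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def wald_power(beta, se, alpha=0.05):
    """Approximate power of the two-sided Wald test of H0: beta = 0."""
    z = norm_ppf(1.0 - alpha / 2.0)
    return norm_cdf(abs(beta) / se - z)

def subjects_needed(beta, se_one_subject, target=0.9, alpha=0.05):
    """Smallest N with power >= target, assuming SE shrinks as 1/sqrt(N)."""
    n = 1
    while wald_power(beta, se_one_subject / math.sqrt(n), alpha) < target:
        n += 1
    return n

# Illustrative treatment effect of 0.1 with a per-subject SE of 0.5
n_needed = subjects_needed(beta=0.1, se_one_subject=0.5, target=0.9)
```

    In the paper's workflow, the expected SE comes from the population Fisher information matrix rather than the simple 1/sqrt(N) assumption used here.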

  2. An age-dependent model to analyse the evolutionary stability of bacterial quorum sensing.

    Science.gov (United States)

    Mund, A; Kuttler, C; Pérez-Velázquez, J; Hense, B A

    2016-09-21

    Bacterial communication is enabled through the collective release and sensing of signalling molecules in a process called quorum sensing. Cooperative processes can easily be destabilized by the appearance of cheaters, who contribute little or nothing at all to the production of common goods. This applies especially to planktonic cultures. In this study, we analyse the dynamics of bacterial quorum sensing and its evolutionary stability under two levels of cooperation, namely signal and enzyme production. The model accounts for mutation rates and switches between planktonic and biofilm states of growth. We present a mathematical approach to model these dynamics using age-dependent colony models. We explore the conditions under which cooperation is stable and find that spatial structuring can lead to long-term scenarios such as coexistence or bistability, depending on the non-linear combination of different parameters like death rates and production costs. Copyright © 2016 Elsevier Ltd. All rights reserved.

  3. Assessing Cognitive Processes with Diffusion Model Analyses: A Tutorial based on fast-dm-30

    Directory of Open Access Journals (Sweden)

    Andreas eVoss

    2015-03-01

    Diffusion models can be used to infer cognitive processes involved in fast binary decision tasks. The model assumes that information is accumulated continuously until one of two thresholds is hit. In the analysis, response time distributions from numerous trials of the decision task are used to estimate a set of parameters mapping distinct cognitive processes. In recent years, diffusion model analyses have become more and more popular in different fields of psychology. This increased popularity is based on the recent development of several software solutions for parameter estimation. Although these programs make the application of the model relatively easy, there is a shortage of knowledge about the different steps of a state-of-the-art diffusion model study. In this paper, we give a concise tutorial on diffusion modelling, and we present fast-dm-30, a thoroughly revised and extended version of the fast-dm software (Voss & Voss, 2007) for diffusion model data analysis. The most important improvement in the new version is the possibility to choose between different optimization criteria (i.e., Maximum Likelihood, Chi-Square, and Kolmogorov-Smirnov), which differ in their applicability to different data sets.
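
    The accumulation process the model assumes is easy to simulate. This is a plain Euler simulation of the basic Wiener diffusion process, not fast-dm's estimation machinery; the parameter values are arbitrary.

```python
import numpy as np

def simulate_diffusion(v, a, z, n_trials, dt=0.001, sigma=1.0, seed=1):
    """Simulate the basic Wiener diffusion process.

    v: drift rate; a: threshold separation; z: starting point (0 < z < a).
    Returns response times and choices (1 = upper, 0 = lower boundary).
    """
    rng = np.random.default_rng(seed)
    noise_sd = sigma * dt ** 0.5
    rts, choices = [], []
    for _ in range(n_trials):
        x, t = z, 0.0
        while 0.0 < x < a:                      # accumulate until a threshold
            x += v * dt + rng.normal(0.0, noise_sd)
            t += dt
        rts.append(t)
        choices.append(1 if x >= a else 0)
    return np.array(rts), np.array(choices)

# Unbiased start (z = a/2) with a positive drift toward the upper boundary
rts, choices = simulate_diffusion(v=1.5, a=1.0, z=0.5, n_trials=200)
upper_rate = float(choices.mean())
```

    Fitting reverses this direction: programs like fast-dm search for the (v, a, z, ...) that make simulated response-time distributions match the observed ones under a chosen optimization criterion.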

  4. Models of population-based analyses for data collected from large extended families.

    Science.gov (United States)

    Wang, Wenyu; Lee, Elisa T; Howard, Barbara V; Fabsitz, Richard R; Devereux, Richard B; MacCluer, Jean W; Laston, Sandra; Comuzzie, Anthony G; Shara, Nawar M; Welty, Thomas K

    2010-12-01

    Large studies of extended families usually collect valuable phenotypic data that may have scientific value for purposes other than testing genetic hypotheses if the families were not selected in a biased manner. These purposes include assessing population-based associations of diseases with risk factors/covariates and estimating population characteristics such as disease prevalence and incidence. Relatedness among participants, however, violates the traditional assumption of independent observations in these classic analyses. The commonly used adjustment method for relatedness in population-based analyses is to use marginal models, in which clusters (families) are assumed to be independent (unrelated) with a simple and identical covariance (family) structure such as the independent, exchangeable and unstructured covariance structures. However, using these simple covariance structures may not be optimally appropriate for outcomes collected from large extended families, and may under- or over-estimate the variances of estimators and thus lead to uncertainty in inferences. Moreover, the assumption that families are unrelated with an identical family structure in a marginal model may not be satisfied for family studies with large extended families. The aim of this paper is to propose models incorporating marginal-model approaches with a covariance structure for assessing population-based associations of diseases with their risk factors/covariates and estimating population characteristics for epidemiological studies, while adjusting for the complicated relatedness among outcomes (continuous/categorical, normally/non-normally distributed) collected from large extended families. We also discuss theoretical issues of the proposed models and show that the proposed models and covariance structure are appropriate for and capable of achieving the aim.

  5. A modeling approach to compare ΣPCB concentrations between congener-specific analyses

    Science.gov (United States)

    Gibson, Polly P.; Mills, Marc A.; Kraus, Johanna M.; Walters, David M.

    2017-01-01

    Changes in analytical methods over time pose problems for assessing long-term trends in environmental contamination by polychlorinated biphenyls (PCBs). Congener-specific analyses vary widely in the number and identity of the 209 distinct PCB chemical configurations (congeners) that are quantified, leading to inconsistencies among summed PCB concentrations (ΣPCB) reported by different studies. Here we present a modeling approach using linear regression to compare ΣPCB concentrations derived from different congener-specific analyses measuring different co-eluting groups. The approach can be used to develop a specific conversion model between any two sets of congener-specific analytical data from similar samples (similar matrix and geographic origin). We demonstrate the method by developing a conversion model for an example data set that includes data from two different analytical methods, a low resolution method quantifying 119 congeners and a high resolution method quantifying all 209 congeners. We used the model to show that the 119-congener set captured most (93%) of the total PCB concentration (i.e., Σ209PCB) in sediment and biological samples. ΣPCB concentrations estimated using the model closely matched measured values (mean relative percent difference = 9.6). General applications of the modeling approach include (a) generating comparable ΣPCB concentrations for samples that were analyzed for different congener sets; and (b) estimating the proportional contribution of different congener sets to ΣPCB. This approach may be especially valuable for enabling comparison of long-term remediation monitoring results even as analytical methods change over time. 
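
    The conversion model is ordinary least-squares regression on paired measurements. The concentrations below are invented for illustration; they are not the study's data.

```python
import numpy as np

# Hypothetical paired samples analysed by both methods:
# ΣPCB over the 119-congener set vs. the full 209-congener set (ng/g)
sum119 = np.array([10.2, 25.1, 4.4, 60.3, 33.8, 15.0])
sum209 = np.array([11.0, 27.0, 4.8, 64.9, 36.2, 16.3])

# Conversion model: Sigma209 ~ b0 + b1 * Sigma119
b1, b0 = np.polyfit(sum119, sum209, deg=1)

def convert(sigma119):
    """Estimate a full Σ209PCB value from a 119-congener ΣPCB measurement."""
    return b0 + b1 * sigma119

# Proportional contribution of the smaller congener set to ΣPCB
captured_fraction = float((sum119 / sum209).mean())
```

    In these toy numbers the 119-congener set captures roughly 93% of total PCB, mirroring the proportion reported in the abstract; a real application would also report the fit's uncertainty before pooling data across methods.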

  6. Sampling and sensitivity analyses tools (SaSAT) for computational modelling

    Directory of Open Access Journals (Sweden)

    Wilson David P

    2008-02-01

    SaSAT (Sampling and Sensitivity Analysis Tools) is a user-friendly software package for applying uncertainty and sensitivity analyses to mathematical and computational models of arbitrary complexity and context. The toolbox is built in Matlab®, a numerical mathematical software package, and utilises algorithms contained in the Matlab® Statistics Toolbox. However, Matlab® is not required to use SaSAT as the software package is provided as an executable file with all the necessary supplementary files. The SaSAT package is also designed to work seamlessly with Microsoft Excel but no functionality is forfeited if that software is not available. A comprehensive suite of tools is provided to enable the following tasks to be easily performed: efficient and equitable sampling of parameter space by various methodologies; calculation of correlation coefficients; regression analysis; factor prioritisation; and graphical output of results, including response surfaces, tornado plots, and scatterplots. Use of SaSAT is exemplified by application to a simple epidemic model. To our knowledge, a number of the methods available in SaSAT for performing sensitivity analyses have not previously been used in epidemiological modelling and their usefulness in this context is demonstrated.

  7. Sampling and sensitivity analyses tools (SaSAT) for computational modelling.

    Science.gov (United States)

    Hoare, Alexander; Regan, David G; Wilson, David P

    2008-02-27

    SaSAT (Sampling and Sensitivity Analysis Tools) is a user-friendly software package for applying uncertainty and sensitivity analyses to mathematical and computational models of arbitrary complexity and context. The toolbox is built in Matlab, a numerical mathematical software package, and utilises algorithms contained in the Matlab Statistics Toolbox. However, Matlab is not required to use SaSAT as the software package is provided as an executable file with all the necessary supplementary files. The SaSAT package is also designed to work seamlessly with Microsoft Excel but no functionality is forfeited if that software is not available. A comprehensive suite of tools is provided to enable the following tasks to be easily performed: efficient and equitable sampling of parameter space by various methodologies; calculation of correlation coefficients; regression analysis; factor prioritisation; and graphical output of results, including response surfaces, tornado plots, and scatterplots. Use of SaSAT is exemplified by application to a simple epidemic model. To our knowledge, a number of the methods available in SaSAT for performing sensitivity analyses have not previously been used in epidemiological modelling and their usefulness in this context is demonstrated.
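
    The core workflow described (stratified sampling of parameter space, running the model, then rank-correlating inputs with outputs) translates directly to any environment. A minimal NumPy sketch, with a toy two-parameter response standing in for a real epidemic model:

```python
import numpy as np

def latin_hypercube(n_samples, bounds, seed=42):
    """Equitable stratified sampling: one sample per stratum per parameter.

    bounds: list of (low, high) tuples, one per parameter.
    """
    rng = np.random.default_rng(seed)
    d = len(bounds)
    strata = np.tile(np.arange(n_samples), (d, 1))
    u = (rng.permuted(strata, axis=1).T + rng.random((n_samples, d))) / n_samples
    lo = np.array([b[0] for b in bounds])
    hi = np.array([b[1] for b in bounds])
    return lo + u * (hi - lo)

def rank_corr(a, b):
    """Spearman rank correlation, a simple sensitivity indicator."""
    ra = np.argsort(np.argsort(a))
    rb = np.argsort(np.argsort(b))
    return float(np.corrcoef(ra, rb)[0, 1])

# Toy response driven mainly by the first parameter, weakly by the second
X = latin_hypercube(200, [(0.1, 0.5), (0.05, 0.2)])
y = X[:, 0] ** 2 - 0.1 * X[:, 1]

s_first = rank_corr(X[:, 0], y)    # strong: dominant driver
s_second = rank_corr(X[:, 1], y)   # weak
```

    Ranking parameters by such coefficients is the essence of factor prioritisation; SaSAT wraps the same steps with richer methods and graphical output.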

  8. Analysing animal social network dynamics: the potential of stochastic actor-oriented models.

    Science.gov (United States)

    Fisher, David N; Ilany, Amiyaal; Silk, Matthew J; Tregenza, Tom

    2017-03-01

    Animals are embedded in dynamically changing networks of relationships with conspecifics. These dynamic networks are fundamental aspects of their environment, creating selection on behaviours and other traits. However, most social network-based approaches in ecology are constrained to considering networks as static, despite several calls for such analyses to become more dynamic. There are a number of statistical analyses developed in the social sciences that are increasingly being applied to animal networks, of which stochastic actor-oriented models (SAOMs) are a principal example. SAOMs are a class of individual-based models designed to model transitions in networks between discrete time points, as influenced by network structure and covariates. It is not clear, however, how useful such techniques are to ecologists, and whether they are suited to animal social networks. We review the recent applications of SAOMs to animal networks, outlining findings and assessing the strengths and weaknesses of SAOMs when applied to animal rather than human networks. We go on to highlight the types of ecological and evolutionary processes that SAOMs can be used to study. SAOMs can include effects and covariates for individuals, dyads and populations, which can be constant or variable. This allows for the examination of a wide range of questions of interest to ecologists. However, high-resolution data are required, meaning SAOMs will not be useable in all study systems. It remains unclear how robust SAOMs are to missing data and uncertainty around social relationships. Ultimately, we encourage the careful application of SAOMs in appropriate systems, with dynamic network analyses likely to prove highly informative. Researchers can then extend the basic method to tackle a range of existing questions in ecology and explore novel lines of questioning. © 2016 The Authors. Journal of Animal Ecology published by John Wiley & Sons Ltd on behalf of British Ecological Society.

  9. Power analyses for negative binomial models with application to multiple sclerosis clinical trials.

    Science.gov (United States)

    Rettiganti, Mallik; Nagaraja, H N

    2012-01-01

    We use negative binomial (NB) models for the magnetic resonance imaging (MRI)-based brain lesion count data from parallel group (PG) and baseline versus treatment (BVT) trials for relapsing remitting multiple sclerosis (RRMS) patients, and describe the associated likelihood ratio (LR), score, and Wald tests. We perform power analyses and sample size estimation using the simulated percentiles of the exact distribution of the test statistics for the PG and BVT trials. When compared to the corresponding nonparametric test, the LR test results in 30-45% reduction in sample sizes for the PG trials and 25-60% reduction for the BVT trials.
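
    This kind of power calculation can be illustrated with a Monte Carlo sketch (not the authors' code: the group size, lesion-count mean, dispersion and effect size below are invented for illustration). With the NB dispersion r held fixed, the maximum-likelihood estimate of an NB mean is simply the sample mean, which gives the likelihood-ratio (LR) statistic for a two-arm (PG-style) comparison a closed form:

```python
import numpy as np
from scipy.special import gammaln
from scipy.stats import chi2

def nb_loglik(x, mean, r):
    """Log-likelihood of counts x under NB with given mean and dispersion r."""
    p = r / (r + mean)
    return np.sum(gammaln(x + r) - gammaln(r) - gammaln(x + 1)
                  + r * np.log(p) + x * np.log1p(-p))

def lr_statistic(x0, x1, r):
    """LR test of equal NB means; with r fixed, each mean's MLE is the sample mean."""
    pooled = np.concatenate([x0, x1]).mean()
    l_alt = nb_loglik(x0, x0.mean(), r) + nb_loglik(x1, x1.mean(), r)
    l_null = nb_loglik(x0, pooled, r) + nb_loglik(x1, pooled, r)
    return 2.0 * (l_alt - l_null)

def simulated_power(n, mean0, effect, r, alpha=0.05, n_sim=2000, seed=0):
    """Monte Carlo power of the LR test for a two-arm trial with n patients per arm."""
    rng = np.random.default_rng(seed)
    crit = chi2.ppf(1.0 - alpha, df=1)
    mean1 = mean0 * (1.0 - effect)   # treatment reduces the lesion rate
    hits = 0
    for _ in range(n_sim):
        x0 = rng.negative_binomial(r, r / (r + mean0), size=n).astype(float)
        x1 = rng.negative_binomial(r, r / (r + mean1), size=n).astype(float)
        if lr_statistic(x0, x1, r) > crit:
            hits += 1
    return hits / n_sim
```

    Sample size estimation then amounts to increasing n until the simulated power reaches the target (e.g. 0.8 or 0.9); the paper's exact-distribution percentiles replace the chi-square critical value used here.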

  10. Analysing and modelling battery drain of 3G terminals due to port scan attacks

    OpenAIRE

    Pascual Trigos, Mar

    2010-01-01

    This thesis identifies a threat to 3G mobile phones: the draining of a terminal's battery by undesired data traffic. The objectives of the thesis are to analyse the battery drain of 3G mobile phones caused by uplink and downlink traffic and to model that drain. First, it is described how a mobile phone can be made to increase its consumption, and therefore to shorten its battery lifetime. Concretely, the focus is on data traffic. This traffic ca...

  11. Analysing the Effects of Flood-Resilience Technologies in Urban Areas Using a Synthetic Model Approach

    Directory of Open Access Journals (Sweden)

    Reinhard Schinke

    2016-11-01

    Full Text Available Flood protection systems with their spatial effects play an important role in managing and reducing flood risks. The planning and decision process as well as the technical implementation are well organized and often exercised. However, building-related flood-resilience technologies (FReT) are often neglected due to the absence of suitable approaches to analyse and integrate such measures in large-scale flood damage mitigation concepts. Against this backdrop, a synthetic model approach was extended by a few complementary methodological steps in order to calculate flood damage to buildings considering the effects of building-related FReT, and to analyse the area-related reduction of flood risks in geo-information systems (GIS) with high spatial resolution. It includes a civil-engineering-based investigation of characteristic building properties and construction types, including the selection and combination of appropriate FReT, as a basis for deriving synthetic depth-damage functions. Depending on the actual exposure and the implementation level of FReT, the functions can be used and allocated in spatial damage and risk analyses. The application of the extended approach is demonstrated in a case study in Valencia (Spain). In this way, the overall research findings improve the integration of FReT in flood risk management. They also provide useful information for advising individuals at risk, supporting the selection and implementation of FReT.
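
    The role of building-related FReT in such damage functions can be sketched as follows. This is a purely hypothetical depth-damage curve for illustration, not one of the synthetic functions derived in the study; the slope, protection levels and scenario probabilities are made up:

```python
def damage_fraction(depth_m, fret_protection_m=0.0):
    """Hypothetical synthetic depth-damage curve for one building type.

    FReT (e.g. barriers, sealed openings) are modelled as shifting the curve:
    no damage until the water depth exceeds the protection level they provide.
    """
    effective = max(0.0, depth_m - fret_protection_m)
    return min(1.0, 0.35 * effective)  # saturates at total loss

def expected_damage(depths_and_probs, building_value, fret_protection_m=0.0):
    """Flood risk as probability-weighted damage over flood scenarios."""
    return sum(p * damage_fraction(d, fret_protection_m) * building_value
               for d, p in depths_and_probs)
```

    Allocating such per-building functions in a GIS, according to exposure and FReT implementation level, then yields the area-related risk reduction discussed in the abstract.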

  12. Modeling of high homologous temperature deformation behavior for stress and life-time analyses

    Energy Technology Data Exchange (ETDEWEB)

    Krempl, E. [Rensselaer Polytechnic Institute, Troy, NY (United States)

    1997-12-31

    Stress and lifetime analyses need realistic and accurate constitutive models for the inelastic deformation behavior of engineering alloys at low and high temperatures. Conventional creep and plasticity models have fundamental difficulties in reproducing high homologous temperature behavior. To improve the modeling capabilities, "unified" state variable theories were conceived. They consider all inelastic deformation to be rate-dependent and do not have separate repositories for creep and plasticity. The viscoplasticity theory based on overstress (VBO), one of the unified theories, is introduced and its properties are delineated. At high homologous temperatures, where secondary and tertiary creep are observed, modeling is primarily accomplished by a static recovery term and a softening isotropic stress. At low temperatures creep is merely a manifestation of rate dependence. The primary creep modeled at low homologous temperature is due to the rate dependence of the flow law. The model is unaltered in the transition from low to high temperature, except that the softening of the isotropic stress and the influence of the static recovery term increase with temperature.

  13. Incorporating uncertainty of management costs in sensitivity analyses of matrix population models.

    Science.gov (United States)

    Salomon, Yacov; McCarthy, Michael A; Taylor, Peter; Wintle, Brendan A

    2013-02-01

    The importance of accounting for economic costs when making environmental-management decisions subject to resource constraints has been increasingly recognized in recent years. In contrast, uncertainty associated with such costs has often been ignored. We developed a method, on the basis of economic theory, that accounts for the uncertainty in population-management decisions. We considered the case where, rather than taking fixed values, model parameters are random variables that represent the situation when parameters are not precisely known. Hence, the outcome is not precisely known either. Instead of maximizing the expected outcome, we maximized the probability of obtaining an outcome above a threshold of acceptability. We derived explicit analytical expressions for the optimal allocation and its associated probability, as a function of the threshold of acceptability, where the model parameters were distributed according to normal and uniform distributions. To illustrate our approach we revisited a previous study that incorporated cost-efficiency analyses in management decisions that were based on perturbation analyses of matrix population models. Incorporating derivations from this study into our framework, we extended the model to address potential uncertainties. We then applied these results to 2 case studies: management of a Koala (Phascolarctos cinereus) population and conservation of an olive ridley sea turtle (Lepidochelys olivacea) population. For low aspirations, that is, when the threshold of acceptability is relatively low, the optimal strategy was obtained by diversifying the allocation of funds. Conversely, for high aspirations, the budget was directed toward management actions with the highest potential effect on the population. The exact optimal allocation was sensitive to the choice of uncertainty model. Our results highlight the importance of accounting for uncertainty when making decisions and suggest that more effort should be placed on
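
    The maximize-probability-of-an-acceptable-outcome idea can be sketched for two management actions with independent, normally distributed returns. This is a toy grid search rather than the authors' analytical solution, and the means, standard deviations and thresholds are invented:

```python
import math

def phi(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def best_allocation(mu1, sd1, mu2, sd2, threshold, steps=1000):
    """Fraction a of the budget to action 1 maximizing P(outcome > threshold),
    where outcome = a*X1 + (1-a)*X2 with independent normal X1, X2."""
    best_a, best_p = 0.0, -1.0
    for i in range(steps + 1):
        a = i / steps
        mean = a * mu1 + (1 - a) * mu2
        sd = math.sqrt((a * sd1) ** 2 + ((1 - a) * sd2) ** 2)
        p = phi((mean - threshold) / sd) if sd > 0 else float(mean > threshold)
        if p > best_p:
            best_a, best_p = a, p
    return best_a, best_p
```

    Low thresholds favour diversification (variance reduction dominates), while thresholds above the achievable means push the whole budget into the action with the largest potential effect, mirroring the qualitative result reported above.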

  14. Reading Ability Development from Kindergarten to Junior Secondary: Latent Transition Analyses with Growth Mixture Modeling

    Directory of Open Access Journals (Sweden)

    Yuan Liu

    2016-10-01

    Full Text Available The present study examined the reading ability development of children in the large-scale Early Childhood Longitudinal Study (Kindergarten Class of 1998-99; Tourangeau, Nord, Lê, Pollack, & Atkins-Burnett, 2006) data under a dynamic systems perspective. To depict children's growth pattern, we extended the measurement part of latent transition analysis to the growth mixture model and found that the new model fitted the data well. Results also revealed that most of the children stayed in the same ability group, with few cross-level changes in their classes. After adding the environmental factors as predictors, analyses showed that children receiving higher teachers' ratings, with higher socioeconomic status, and of above-average poverty status had a higher probability of transitioning into the higher ability group.

  15. Structural identifiability analyses of candidate models for in vitro Pitavastatin hepatic uptake.

    Science.gov (United States)

    Grandjean, Thomas R B; Chappell, Michael J; Yates, James W T; Evans, Neil D

    2014-05-01

    In this paper a review of the application of four different techniques (a version of the similarity transformation approach for autonomous uncontrolled systems, a non-differential input/output observable normal form approach, the characteristic set differential algebra and a recent algebraic input/output relationship approach) to determine the structural identifiability of certain in vitro nonlinear pharmacokinetic models is provided. The Organic Anion Transporting Polypeptide (OATP) substrate, Pitavastatin, is used as a probe on freshly isolated animal and human hepatocytes. Candidate pharmacokinetic non-linear compartmental models have been derived to characterise the uptake process of Pitavastatin. As a prerequisite to parameter estimation, structural identifiability analyses are performed to establish that all unknown parameters can be identified from the experimental observations available.

  16. A conceptual model for analysing informal learning in online social networks for health professionals.

    Science.gov (United States)

    Li, Xin; Gray, Kathleen; Chang, Shanton; Elliott, Kristine; Barnett, Stephen

    2014-01-01

    Online social networking (OSN) provides a new way for health professionals to communicate, collaborate and share ideas with each other for informal learning on a massive scale. It has important implications for ongoing efforts to support Continuing Professional Development (CPD) in the health professions. However, the challenge of analysing the data generated in OSNs makes it difficult to understand whether and how they are useful for CPD. This paper presents a conceptual model for using mixed methods to study data from OSNs to examine the efficacy of OSN in supporting informal learning of health professionals. It is expected that using this model with the dataset generated in OSNs for informal learning will produce new and important insights into how well this innovation in CPD is serving professionals and the healthcare system.

  17. Rockslide and Impulse Wave Modelling in the Vajont Reservoir by DEM-CFD Analyses

    Science.gov (United States)

    Zhao, T.; Utili, S.; Crosta, G. B.

    2016-06-01

    This paper investigates the generation of hydrodynamic water waves due to rockslides plunging into a water reservoir. Quasi-3D DEM analyses in plane strain by a coupled DEM-CFD code are adopted to simulate the rockslide from its onset to the impact with the still water and the subsequent generation of the wave. The employed numerical tools and upscaling of hydraulic properties allow predicting a physical response in broad agreement with the observations, notwithstanding the assumptions and characteristics of the adopted methods. The results obtained by the DEM-CFD coupled approach are compared to those published in the literature and those presented by Crosta et al. (Landslide spreading, impulse waves and modelling of the Vajont rockslide. Rock mechanics, 2014) in a companion paper obtained through an ALE-FEM method. Analyses performed along two cross sections are representative of the limit conditions of the eastern and western slope sectors. The maximum average rockslide velocity and the water wave velocity reach ca. 22 and 20 m/s, respectively. The maximum computed run-up amounts to ca. 120 and 170 m for the eastern and western lobe cross sections, respectively. These values are reasonably similar to those recorded during the event (i.e. ca. 130 and 190 m, respectively). Therefore, the overall study lays out a possible DEM-CFD framework for the modelling of the generation of the hydrodynamic wave due to the impact of a rapidly moving rockslide or rock-debris avalanche.

  18. Economic modeling of electricity production from hot dry rock geothermal reservoirs: methodology and analyses. Final report

    Energy Technology Data Exchange (ETDEWEB)

    Cummings, R.G.; Morris, G.E.

    1979-09-01

    An analytical methodology is developed for assessing alternative modes of generating electricity from hot dry rock (HDR) geothermal energy sources. The methodology is used in sensitivity analyses to explore relative system economics. The methodology used a computerized, intertemporal optimization model to determine the profit-maximizing design and management of a unified HDR electric power plant with a given set of geologic, engineering, and financial conditions. By iterating this model on price, a levelized busbar cost of electricity is established. By varying the conditions of development, the sensitivity of both optimal management and busbar cost to these conditions is explored. A plausible set of reference case parameters is established at the outset of the sensitivity analyses. This reference case links a multiple-fracture reservoir system to an organic, binary-fluid conversion cycle. A levelized busbar cost of 43.2 mills/kWh ($1978) was determined for the reference case, which had an assumed geothermal gradient of 40°C/km, a design well-flow rate of 75 kg/s, an effective heat transfer area per pair of wells of 1.7 × 10⁶ m², and a plant design temperature of 160°C. Variations in the presumed geothermal gradient, size of the reservoir, drilling costs, real rates of return, and other system parameters yield minimum busbar costs between -40% and +76% of the reference case busbar cost.
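
    The "iterating this model on price" step amounts to finding the electricity price at which the plant's discounted revenues just cover its discounted costs. A stripped-down sketch with a fixed (non-optimizing) plant model and invented cash-flow numbers; in the study itself, plant design and management are re-optimized at every trial price, which the simple npv() placeholder abstracts away:

```python
def npv(price, capex, annual_opex, annual_mwh, rate, years):
    """Net present value of the plant at a given electricity price (per MWh)."""
    pv = -capex
    for t in range(1, years + 1):
        pv += (price * annual_mwh - annual_opex) / (1.0 + rate) ** t
    return pv

def levelized_busbar_cost(capex, annual_opex, annual_mwh, rate, years,
                          lo=0.0, hi=1e4, tol=1e-8):
    """Bisection on price until NPV = 0 (NPV is increasing in price)."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if npv(mid, capex, annual_opex, annual_mwh, rate, years) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

    For this simplified cash-flow model the bisection reproduces the textbook levelized cost, (annualized capital + O&M) divided by annual energy.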

  19. Establishing a Numerical Modeling Framework for Hydrologic Engineering Analyses of Extreme Storm Events

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Xiaodong; Hossain, Faisal; Leung, L. Ruby

    2017-08-01

    In this study a numerical modeling framework for simulating extreme storm events was established using the Weather Research and Forecasting (WRF) model. Such a framework is necessary for the derivation of engineering parameters such as probable maximum precipitation that are the cornerstone of large water management infrastructure design. Here this framework was built based on a heavy storm that occurred in Nashville (USA) in 2010, and verified using two other extreme storms. To achieve the optimal setup, several combinations of model resolutions, initial/boundary conditions (IC/BC), cloud microphysics and cumulus parameterization schemes were evaluated using multiple metrics of precipitation characteristics. The evaluation suggests that WRF is most sensitive to IC/BC option. Simulation generally benefits from finer resolutions up to 5 km. At the 15km level, NCEP2 IC/BC produces better results, while NAM IC/BC performs best at the 5km level. Recommended model configuration from this study is: NAM or NCEP2 IC/BC (depending on data availability), 15km or 15km-5km nested grids, Morrison microphysics and Kain-Fritsch cumulus schemes. Validation of the optimal framework suggests that these options are good starting choices for modeling extreme events similar to the test cases. This optimal framework is proposed in response to emerging engineering demands of extreme storm events forecasting and analyses for design, operations and risk assessment of large water infrastructures.
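
    The recommended configuration might translate into a namelist.input fragment like the following. This is a sketch only: the option numbers follow the standard WRF physics tables (mp_physics = 10 for Morrison 2-moment, cu_physics = 1 for Kain-Fritsch), while domain sizes, IC/BC setup and all other settings are omitted:

```fortran
&domains
 max_dom           = 2,         ! 15 km parent with a 5 km nest
 dx                = 15000, 5000,
 parent_grid_ratio = 1, 3,
/

&physics
 mp_physics        = 10, 10,    ! Morrison 2-moment microphysics
 cu_physics        = 1,  1,     ! Kain-Fritsch cumulus scheme
/
```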

  20. Estimating required information size by quantifying diversity in random-effects model meta-analyses

    DEFF Research Database (Denmark)

    Wetterslev, Jørn; Thorlund, Kristian; Brok, Jesper;

    2009-01-01

    an intervention effect suggested by trials with low risk of bias. METHODS: Information size calculations need to consider the total model variance in a meta-analysis to control type I and type II errors. Here, we derive an adjusting factor for the required information size under any random-effects model meta-analysis. RESULTS: We devise a measure of diversity (D2) in a meta-analysis, which is the relative variance reduction when the meta-analysis model is changed from a random-effects into a fixed-effect model. D2 is the percentage that the between-trial variability constitutes of the sum of the between... and interpreted using several simulations and clinical examples. In addition we show mathematically that diversity is equal to or greater than inconsistency, that is D2 ≥ I2, for all meta-analyses. CONCLUSION: We conclude that D2 seems a better alternative than I2 to consider model variation in any random...
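
    Following the abstract's definition, D2 can be computed directly from the trial effect estimates and their variances as the relative reduction in the pooled estimate's variance when moving from a random-effects to a fixed-effect model. A sketch using the DerSimonian-Laird between-trial variance (the choice of tau-squared estimator is an assumption here); under this definition, the abstract's adjusting factor corresponds to inflating a fixed-effect information size by 1/(1 - D2):

```python
import numpy as np

def meta_diversity(y, v):
    """Return (D2, I2) for trial effect estimates y with within-trial variances v."""
    y, v = np.asarray(y, float), np.asarray(v, float)
    k = len(y)
    w = 1.0 / v                               # fixed-effect weights
    y_fe = np.sum(w * y) / np.sum(w)
    Q = np.sum(w * (y - y_fe) ** 2)           # Cochran's Q
    i2 = max(0.0, (Q - (k - 1)) / Q) if Q > 0 else 0.0
    # DerSimonian-Laird between-trial variance
    tau2 = max(0.0, (Q - (k - 1)) / (np.sum(w) - np.sum(w ** 2) / np.sum(w)))
    v_fe = 1.0 / np.sum(w)                    # variance of fixed-effect pooled estimate
    v_re = 1.0 / np.sum(1.0 / (v + tau2))     # variance of random-effects pooled estimate
    d2 = (v_re - v_fe) / v_re                 # relative variance reduction, RE -> FE
    return d2, i2
```

    When all within-trial variances are equal the two measures coincide; with unequal variances D2 exceeds I2, consistent with the inequality stated above.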

  1. Development of steady-state model for MSPT and detailed analyses of receiver

    Science.gov (United States)

    Yuasa, Minoru; Sonoda, Masanori; Hino, Koichi

    2016-05-01

    The molten salt parabolic trough system (MSPT) uses molten salt as heat transfer fluid (HTF) instead of synthetic oil. A demonstration plant of the MSPT was constructed by Chiyoda Corporation and Archimede Solar Energy in Italy in 2013. Chiyoda Corporation developed a steady-state model for predicting the theoretical behavior of the demonstration plant. The model was designed to calculate the concentrated solar power and heat loss using ray tracing of incident solar light and finite element modeling of the thermal energy transferred into the medium. This report describes the verification of the model using test data from the demonstration plant, along with detailed analyses of the relation between flow rate and temperature difference on the metal tube of the receiver and of the effect of defocus angle on the concentrated power rate, for solar collector assembly (SCA) development. The model is accurate to within a systematic error of 2.0% and a random error of 4.2%. The relationships between flow rate and temperature difference on the metal tube, and the effect of defocus angle on the concentrated power rate, are shown.

  2. A STRONGLY COUPLED REACTOR CORE ISOLATION COOLING SYSTEM MODEL FOR EXTENDED STATION BLACK-OUT ANALYSES

    Energy Technology Data Exchange (ETDEWEB)

    Zhao, Haihua [Idaho National Laboratory; Zhang, Hongbin [Idaho National Laboratory; Zou, Ling [Idaho National Laboratory; Martineau, Richard Charles [Idaho National Laboratory

    2015-03-01

    The reactor core isolation cooling (RCIC) system in a boiling water reactor (BWR) provides makeup cooling water to the reactor pressure vessel (RPV) when the main steam lines are isolated and the normal supply of water to the reactor vessel is lost. The RCIC system operates independently of AC power, service air, or external cooling water systems. The only required external energy source is the battery, which maintains the logic circuits controlling the opening and/or closure of valves in the RCIC system; these control the RPV water level by shutting down the RCIC pump to avoid overfilling the RPV and flooding the steam line to the RCIC turbine. It is generally assumed in almost all existing station blackout (SBO) accident analyses that loss of DC power would result in overfilling the steam line and allowing liquid water to flow into the RCIC turbine, which would then be disabled. This behavior, however, was not observed in the Fukushima Daiichi accidents, where the Unit 2 RCIC functioned without DC power for nearly three days. Therefore, more detailed mechanistic models for RCIC system components are needed to understand the extended SBO for BWRs. As part of the effort to develop the next generation reactor system safety analysis code RELAP-7, we have developed a strongly coupled RCIC system model, which consists of a turbine model, a pump model, a check valve model, a wet well model, and their coupling models. Unlike traditional SBO simulations, where mass flow rates are typically given in the input file through time-dependent functions, the real mass flow rates through the turbine and the pump loops in our model are dynamically calculated according to conservation laws and turbine/pump operation curves. A simplified SBO demonstration RELAP-7 model with this RCIC model has been successfully developed. The demonstration model includes the major components for the primary system of a BWR, as well as the safety

  3. Evaluation of hydrological models for scenario analyses: signal-to-noise-ratio between scenario effects and model uncertainty

    Directory of Open Access Journals (Sweden)

    H. Bormann

    2005-01-01

    Full Text Available Many model applications suffer from the fact that, although it is well known that model application implies different sources of uncertainty, there is no objective criterion to decide whether a model is suitable for a particular application or not. This paper introduces a comparative index between the uncertainty of a model and the change effects of scenario calculations, which enables the modeller to decide objectively whether a model is suitable to be applied in scenario analysis studies. The index is called the "signal-to-noise-ratio", and it is applied in an exemplary scenario study performed within the GLOWA-IMPETUS project in Benin. The conceptual UHP model was applied to the upper Ouémé basin. Although model calibration and validation were successful, uncertainties in model parameters and input data could be identified. Applying the "signal-to-noise-ratio" to regional-scale subcatchments of the upper Ouémé, comparing water availability indicators between uncertainty studies and scenario analyses, the UHP model turned out to be suitable to predict long-term water balances under the present poor data availability and changing environmental conditions in subhumid West Africa.
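
    The abstract does not reproduce the index's exact formula, but its spirit can be sketched as the ratio of the scenario-induced change in an indicator to the spread of that indicator across the model's uncertainty ensemble. This formalization is our own illustrative assumption, not the paper's definition:

```python
import statistics

def signal_to_noise_ratio(reference_runs, scenario_runs):
    """Scenario signal (shift in ensemble means) divided by model noise
    (pooled spread of the uncertainty ensembles).

    Each argument is a list of indicator values (e.g. annual discharge)
    from model runs sampling parameter and input-data uncertainty.
    """
    signal = abs(statistics.mean(scenario_runs) - statistics.mean(reference_runs))
    noise = 0.5 * (statistics.pstdev(reference_runs) + statistics.pstdev(scenario_runs))
    return signal / noise if noise > 0 else float("inf")
```

    A ratio well above 1 would suggest the scenario effect is distinguishable from model uncertainty, i.e. the model is suitable for that scenario question.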

  4. A model intercomparison analysing the link between column ozone and geopotential height anomalies in January

    Directory of Open Access Journals (Sweden)

    P. Braesicke

    2008-05-01

    Full Text Available A statistical framework to evaluate the performance of chemistry-climate models with respect to the interaction between meteorology and column ozone during northern hemisphere mid-winter, in particular January, is used. Different statistical diagnostics from four chemistry-climate models (E39C, ME4C, UMUCAM, ULAQ) are compared with the ERA-40 re-analysis. First, we analyse vertical coherence in geopotential height anomalies as described by linear correlations between two different pressure levels (30 and 200 hPa) of the atmosphere. In addition, linear correlations between column ozone and geopotential height anomalies at 200 hPa are discussed to motivate a simple picture of the meteorological impacts on column ozone on interannual timescales. Secondly, we discuss characteristic spatial structures in geopotential height and column ozone anomalies as given by their first two empirical orthogonal functions. Finally, we describe the covariance patterns between reconstructed anomalies of geopotential height and column ozone. In general we find good agreement between the models with higher horizontal resolution (E39C, ME4C, UMUCAM) and ERA-40. The Pacific-North American (PNA) pattern emerges as a useful qualitative benchmark for model performance. Models with higher horizontal resolution and a high upper boundary (ME4C and UMUCAM) show good agreement with the PNA tripole derived from ERA-40 data, including the column ozone modulation over the Pacific sector. The model with the lowest horizontal resolution (ULAQ) does not show a classic PNA pattern, and the model with the lowest upper boundary (E39C) does not capture the PNA-related column ozone variations over the Pacific sector. Those discrepancies have to be taken into account when providing confidence intervals for climate change integrations.

  5. PASMet: a web-based platform for prediction, modelling and analyses of metabolic systems.

    Science.gov (United States)

    Sriyudthsak, Kansuporn; Mejia, Ramon Francisco; Arita, Masanori; Hirai, Masami Yokota

    2016-07-01

    PASMet (Prediction, Analysis and Simulation of Metabolic networks) is a web-based platform for proposing and verifying mathematical models to understand the dynamics of metabolism. The advantages of PASMet include user-friendliness and accessibility, which enable biologists and biochemists to easily perform mathematical modelling. PASMet offers a series of user functions to handle the time-series data of metabolite concentrations. The functions are organised into four steps: (i) Prediction of a probable metabolic pathway and its regulation; (ii) Construction of mathematical models; (iii) Simulation of metabolic behaviours; and (iv) Analysis of metabolic system characteristics. Each function contains various statistical and mathematical methods that can be used independently. Users who may not have enough knowledge of computing or programming can easily and quickly analyse their local data without software downloads, updates or installations. Users only need to upload their files in comma-separated values (CSV) format or enter their model equations directly into the website. Once the time-series data or mathematical equations are uploaded, PASMet automatically performs the computation on the server side. Then, users can interactively view their results and directly download them to their local computers. PASMet is freely available with no login requirement at http://pasmet.riken.jp/ from major web browsers on Windows, Mac and Linux operating systems.

  6. Correlation of Klebsiella pneumoniae comparative genetic analyses with virulence profiles in a murine respiratory disease model.

    Directory of Open Access Journals (Sweden)

    Ramy A Fodah

    Full Text Available Klebsiella pneumoniae is a bacterial pathogen of worldwide importance and a significant contributor to multiple disease presentations associated with both nosocomial and community-acquired disease. ATCC 43816 is a well-studied K. pneumoniae strain which is capable of causing an acute respiratory disease in surrogate animal models. In this study, we performed sequencing of the ATCC 43816 genome to support future efforts characterizing genetic elements required for disease. Furthermore, we performed comparative genetic analyses to the previously sequenced genomes from NTUH-K2044 and MGH 78578 to gain an understanding of the conservation of known virulence determinants amongst the three strains. We found that ATCC 43816 and NTUH-K2044 both possess the known virulence determinant for yersiniabactin, as well as a Type 4 secretion system (T4SS), a CRISPR system, and an acetonin catabolism locus, all absent from MGH 78578. While both NTUH-K2044 and MGH 78578 are clinical isolates, little is known about the disease potential of these strains in cell culture and animal models. Thus, we also performed functional analyses in the murine macrophage cell lines RAW264.7 and J774A.1 and found that MGH 78578 (K52 serotype) was internalized at higher levels than ATCC 43816 (K2) and NTUH-K2044 (K1), consistent with previous characterization of the antiphagocytic properties of K1 and K2 serotype capsules. We also examined the three K. pneumoniae strains in a novel BALB/c respiratory disease model and found that ATCC 43816 and NTUH-K2044 are highly virulent (LD50 < 100 CFU) while MGH 78578 is relatively avirulent.

  7. Kinetic analyses and mathematical modeling of primary photochemical and photoelectrochemical processes in plant photosystems.

    Science.gov (United States)

    Vredenberg, Wim

    2011-02-01

    In this paper the model and simulation of primary photochemical and photo-electrochemical reactions in dark-adapted intact plant leaves are presented. A descriptive algorithm has been derived from analyses of variable chlorophyll a fluorescence and P700 oxidation kinetics upon excitation with multi-turnover pulses (MTFs) of variable intensity and duration. These analyses have led to the definition and formulation of rate equations that describe the sequence of primary linear electron transfer (LET) steps in photosystem II (PSII) and of cyclic electron transport (CET) in PSI. The model considers heterogeneity in PSII reaction centers (RCs) associated with the S-states of the OEC and incorporates in a dark-adapted state the presence of a 15-35% fraction of Q(B)-nonreducing RCs that is probably identical to the S₀ fraction. The fluorescence induction algorithm (FIA) in the 10 μs-1s excitation time range considers a photochemical O-J-D, a photo-electrochemical J-I and an I-P phase reflecting the response of the variable fluorescence to the electric trans-thylakoid potential generated by the proton pump fuelled by CET in PSI. The photochemical phase incorporates the kinetics associated with the double reduction of the acceptor pair of pheophytin (Phe) and plastoquinone Q(A) [PheQ(A)] in Q(B)-nonreducing RCs and the associated doubling of the variable fluorescence, in agreement with the three-state trapping model (TSTM) of PSII. The decline in fluorescence emission during the so-called SMT in the 1-100s excitation time range, known as the Kautsky curve, is shown to be associated with a substantial decrease of CET-powered proton efflux from the stroma into the chloroplast lumen through the ATPsynthase of the photosynthetic machinery.

  8. 3D Recording for 2D Delivering - the Employment of 3D Models for Studies and Analyses

    Science.gov (United States)

    Rizzi, A.; Baratti, G.; Jiménez, B.; Girardi, S.; Remondino, F.

    2011-09-01

    In the last years, thanks to the advances of surveying sensors and techniques, many heritage sites could be accurately replicated in digital form with very detailed and impressive results. The actual limits are mainly related to hardware capabilities, computation time and the low performance of personal computers. Often the produced models cannot be displayed on a normal computer, and the only way to visualize them easily is offline, using rendered videos. This kind of 3D representation is useful for digital conservation, divulgation purposes or virtual tourism, where people can visit places otherwise closed for preservation or security reasons. But many more potential applications are available using a 3D model. The problem is the ability to handle 3D data: without adequate knowledge, this information is reduced to standard 2D data. This article presents some surveying and 3D modeling experiences within the APSAT project ("Ambiente e Paesaggi dei Siti d'Altura Trentini", i.e. Environment and Landscapes of Upland Sites in Trentino). APSAT is a multidisciplinary project funded by the Autonomous Province of Trento (Italy) with the aim of documenting, surveying, studying, analysing and preserving mountainous and hill-top heritage sites located in the region. The project focuses on theoretical, methodological and technological aspects of the archaeological investigation of mountain landscape, considered as the product of sequences of settlements, parcelling-outs, communication networks, resources, and symbolic places. The mountain environment preserves better than others the traces of hunting and gathering, breeding, agricultural, metallurgical and symbolic activities characterised by different lengths and environmental impacts, from Prehistory to the Modern Period. Therefore the correct surveying and documentation of these heritage sites and materials is very important. 
Within the project, the 3DOM unit of FBK is delivering all the surveying and 3D material to

  9. Normalisation genes for expression analyses in the brown alga model Ectocarpus siliculosus

    Directory of Open Access Journals (Sweden)

    Rousvoal Sylvie

    2008-08-01

    Full Text Available Abstract Background Brown algae are plant multi-cellular organisms occupying most of the world coasts and are essential actors in the constitution of ecological niches at the shoreline. Ectocarpus siliculosus is an emerging model for brown algal research. Its genome has been sequenced, and several tools are being developed to perform analyses at different levels of cell organization, including transcriptomic expression analyses. Several topics, including physiological responses to osmotic stress and to exposure to contaminants and solvents, are being studied in order to better understand the adaptive capacity of brown algae to pollution and environmental changes. A series of genes that can be used to normalise expression analyses is required for these studies. Results We monitored the expression of 13 genes under 21 different culture conditions. These included genes encoding proteins and factors involved in protein translation (ribosomal protein 26S, EF1alpha, IF2A, IF4E), protein degradation (ubiquitin, ubiquitin-conjugating enzyme) or folding (cyclophilin), and proteins involved in both the structure of the cytoskeleton (tubulin alpha, actin, actin-related proteins) and its trafficking function (dynein), as well as a protein implicated in carbon metabolism (glucose 6-phosphate dehydrogenase). The stability of their expression level was assessed using the Ct range, and by applying both the geNorm and the Normfinder principles of calculation. Conclusion Comparisons of the data obtained with the three methods of calculation indicated that EF1alpha (EF1a) was the best reference gene for normalisation. The normalisation factor should be calculated with at least two genes, alpha tubulin, ubiquitin-conjugating enzyme or actin-related proteins being good partners of EF1a. Our results exclude actin as a good normalisation gene, and, in this, are in agreement with previous studies in other organisms.

  10. DESCRIPTION OF MODELING ANALYSES IN SUPPORT OF THE 200-ZP-1 REMEDIAL DESIGN/REMEDIAL ACTION

    Energy Technology Data Exchange (ETDEWEB)

    VONGARGEN BH

    2009-11-03

    The Feasibility Study for the 200-ZP-1 Groundwater Operable Unit (DOE/RL-2007-28) and the Proposed Plan for Remediation of the 200-ZP-1 Groundwater Operable Unit (DOE/RL-2007-33) describe the use of groundwater pump-and-treat technology for the 200-ZP-1 Groundwater Operable Unit (OU) as part of an expanded groundwater remedy. During fiscal year 2008 (FY08), a groundwater flow and contaminant transport (flow and transport) model was developed to support remedy design decisions at the 200-ZP-1 OU. This model was developed because the proposed 200-ZP-1 groundwater pump-and-treat remedy will have a larger areal extent than the current interim remedy, and modeling is required to provide estimates of influent concentrations and contaminant mass removal rates to support the design of the aboveground treatment train. The 200 West Area Pre-Conceptual Design for Final Extraction/Injection Well Network: Modeling Analyses (DOE/RL-2008-56) documents the development of the first version of the MODFLOW/MT3DMS model of the Hanford Site's Central Plateau, as well as the initial application of that model to simulate a potential well field for the 200-ZP-1 remedy (considering only the contaminants carbon tetrachloride and technetium-99). This document focuses on the use of the flow and transport model to identify suitable extraction and injection well locations as part of the 200 West Area 200-ZP-1 Pump-and-Treat Remedial Design/Remedial Action Work Plan (DOE/RL-2008-78). Currently, the model has been developed to the extent necessary to provide approximate results and to lay a foundation for the design basis concentrations that are required in support of the remedial design/remedial action (RD/RA) work plan. The discussion in this document includes the following: (1) Assignment of flow and transport parameters for the model; (2) Definition of initial conditions for the transport model for each simulated contaminant of concern (COC) (i.e., carbon

  11. Assessing the hydrodynamic boundary conditions for risk analyses in coastal areas: a stochastic storm surge model

    Directory of Open Access Journals (Sweden)

    T. Wahl

    2011-11-01

    This paper describes a methodology to stochastically simulate a large number of storm surge scenarios (here: 10 million). The applied model is computationally inexpensive and will contribute to improving the overall results of integrated risk analyses in coastal areas. Initially, the observed storm surge events from the tide gauges of Cuxhaven (located in the Elbe estuary) and Hörnum (located in the southeast of Sylt Island) are parameterised by taking into account 25 parameters (19 sea level parameters and 6 time parameters). Throughout the paper, total water levels are considered. The astronomical tides are semidiurnal in the investigation area, with a tidal range >2 m. The second step of the stochastic simulation consists in fitting parametric distribution functions to the data sets resulting from the parameterisation. The distribution functions are then used to run Monte Carlo simulations. Based on the simulation results, a large number of storm surge scenarios are reconstructed. Parameter interdependencies are considered and different filter functions are applied to avoid inconsistencies. Storm surge scenarios that are of interest for risk analyses can easily be extracted from the results.
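The two-step scheme, fitting parametric distributions to the parameterised events and then running Monte Carlo simulations, can be sketched for a single hypothetical parameter (the study fits 25 interdependent parameters per event; a normal fit stands in here for whichever distribution the data actually support):

```python
import random
from statistics import mean, stdev

random.seed(42)

# Step 1: fit a parametric distribution to observed peak water levels
# (hypothetical values in metres above datum).
observed_peaks = [2.1, 2.4, 2.2, 2.8, 2.5, 3.0, 2.3, 2.6, 2.9, 2.2]
mu, sigma = mean(observed_peaks), stdev(observed_peaks)

# Step 2: Monte Carlo simulation - draw a large number of synthetic events.
n = 100_000
simulated = [random.gauss(mu, sigma) for _ in range(n)]

# Scenarios of interest for risk analyses, e.g. the exceedance probability
# of a 3.5 m surge, can be extracted directly from the simulated sample.
p_exceed = sum(1 for s in simulated if s > 3.5) / n
```

The same extraction step generalises to joint criteria once parameter interdependencies and consistency filters are applied to the simulated events.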

  12. Models for regionalizing economic data and their applications within the scope of forensic disaster analyses

    Science.gov (United States)

    Schmidt, Hanns-Maximilian; Wiens, Marcus, Dr. rer. pol.; Schultmann, Frank, Prof. Dr. rer. pol.

    2015-04-01

    The impact of natural hazards on the economic system can be observed in many different regions all over the world. Once the local economic structure is hit by an event, direct costs instantly occur. However, a disturbance on the local level (e.g. parts of a city, or industries along a river bank) may also cause monetary damages in other, indirectly affected sectors. If the impact of an event is strong, these damages are likely to cascade and spread even to an international scale (e.g. the eruption of Eyjafjallajökull and its impact on the automotive sector in Europe). In order to determine these indirect impacts, one has to gain insight into the directly hit economic structure before being able to calculate the side effects. Especially for the development of a model used for near real-time forensic disaster analyses, any simulation needs to be based on data that is rapidly available or easily computed. We therefore investigated commonly used and recently discussed methodologies for regionalizing economic data. Surprisingly, even for German federal states there is no official input-output data available that can be used, although it would provide detailed figures on the economic interrelations between different industry sectors. For highly developed countries, such as Germany, we focus on models for regionalizing the nationwide input-output table, which is usually available from the national statistical office. However, when it comes to developing countries (e.g. in South-East Asia), data quality and availability are usually much poorer. In this case, other sources need to be found for a proper assessment of regional economic performance. We developed an indicator-based model that can fill this gap because of its flexibility regarding the level of aggregation and the composability of different input parameters. Our poster presentation brings up a literature review and a summary of potential models that seem useful for this specific task.

  13. Modifications in the AA5083 Johnson-Cook Material Model for Use in Friction Stir Welding Computational Analyses

    Science.gov (United States)

    2011-12-30

    Report by M. Grujicic, B. Pandurangan, C.-F. Yen, and B. A. Cheeseman (Clemson University). Subject terms: AA5083, friction stir welding, Johnson-Cook material model. Abstract: The Johnson-Cook strength material model is frequently used in finite-element

  14. A model for analysing factors which may influence quality management procedures in higher education

    Directory of Open Access Journals (Sweden)

    Cătălin MAICAN

    2015-12-01

    In all universities, the Office for Quality Assurance defines the procedure for assessing the performance of the teaching staff, with a view to establishing students' perception of the teachers' activity in terms of the quality of the teaching process, the relationship with the students, and the assistance provided for learning. The present paper aims at creating a combined model for evaluation based on Data Mining statistical methods: starting from the findings revealed by the teachers' evaluations of students, and using cluster analysis and discriminant analysis, we identified the subjects which produced significant differences between students' grades; these subjects were subsequently evaluated by the students. The results of these analyses allowed the formulation of measures for enhancing the quality of the evaluation process.

  15. Evaluation of Temperature and Humidity Profiles of Unified Model and ECMWF Analyses Using GRUAN Radiosonde Observations

    Directory of Open Access Journals (Sweden)

    Young-Chan Noh

    2016-07-01

    Temperature and water vapor profiles from the Korea Meteorological Administration (KMA) and the United Kingdom Met Office (UKMO) Unified Model (UM) data assimilation systems, and from reanalysis fields of the European Centre for Medium-Range Weather Forecasts (ECMWF), were assessed using collocated radiosonde observations from the Global Climate Observing System (GCOS) Reference Upper-Air Network (GRUAN) for January–December 2012. The motivation was to examine the overall performance of the data assimilation outputs. The difference statistics of the collocated model outputs versus the radiosonde observations indicated good agreement amongst the datasets for temperature, while less agreement was found for relative humidity. A comparison of the UM outputs from the UKMO and the KMA revealed that they are similar to each other. The introduction of the new version of the UM at the KMA in May 2012 resulted in improved analysis performance, particularly for the moisture field. On the other hand, the ECMWF reanalysis showed slightly reduced performance for relative humidity compared with the UM, with a significant humid bias in the upper troposphere. The ECMWF reanalysis temperature fields showed nearly the same performance as the two UM analyses. The root mean square differences (RMSDs) of relative humidity for the three models were larger under more humid conditions, suggesting that humidity forecasts are less reliable in such conditions.

  16. Analyses of Research Topics in the Field of Informetrics Based on the Method of Topic Modeling

    Directory of Open Access Journals (Sweden)

    Sung-Chien Lin

    2014-07-01

    In this study, we used topic modeling to uncover the possible structure of research topics in the field of Informetrics, to explore the distribution of the topics over the years, and to compare the core journals. In order to infer the structure of topics in the field, the papers published in the Journal of Informetrics and in Scientometrics from 2007 to 2013 were retrieved from the Web of Science database as input for the topic modeling. The results of this study show that when the number of topics was set to 10, the topic model had the smallest perplexity. Although the data scope and analysis methods differ from previous studies, the topics generated in this study are consistent with the results produced by expert analyses. Empirical case studies and measurements of bibliometric indicators were considered important in every year of the analytic period, and the field was increasingly stable. Both core journals broadly paid attention to all of the topics in the field of Informetrics; the Journal of Informetrics put particular emphasis on the construction and applications of bibliometric indicators, while Scientometrics focused on the evaluation and the factors of productivity of countries, institutions, domains, and journals.
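Selecting the number of topics by minimum perplexity rests on held-out likelihood: perplexity is the exponential of the negative mean log-probability per token, so a lower value means a better predictive fit. A toy illustration with hypothetical per-token probabilities (not an LDA implementation):

```python
import math

def perplexity(token_probs):
    """Perplexity of a model on held-out tokens: exp of the negative
    mean log-probability the model assigns to each token. Lower is better."""
    n = len(token_probs)
    return math.exp(-sum(math.log(p) for p in token_probs) / n)

# Hypothetical per-token probabilities from two candidate topic models;
# the one with the smaller perplexity (model_b) would be selected, as the
# study selects k = 10 topics by this criterion.
model_a = [0.01, 0.02, 0.01, 0.03]   # poorer fit
model_b = [0.05, 0.08, 0.06, 0.07]   # better fit
pp_a, pp_b = perplexity(model_a), perplexity(model_b)
```

As a sanity check, a uniform model over two equiprobable tokens has perplexity exactly 2.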

  17. Testing a dual-systems model of adolescent brain development using resting-state connectivity analyses.

    Science.gov (United States)

    van Duijvenvoorde, A C K; Achterberg, M; Braams, B R; Peters, S; Crone, E A

    2016-01-01

    The current study aimed to test a dual-systems model of adolescent brain development by studying changes in intrinsic functional connectivity within and across networks typically associated with cognitive-control and affective-motivational processes. To this end, resting-state and task-related fMRI data were collected from 269 participants (ages 8-25). Resting-state analyses focused on seeds derived from task-related neural activation in the same participants: the dorsolateral prefrontal cortex (dlPFC) from a cognitive rule-learning paradigm and the nucleus accumbens (NAcc) from a reward paradigm. Whole-brain seed-based resting-state analyses showed an age-related increase in dlPFC connectivity with the caudate and thalamus, and an age-related decrease in connectivity with the (pre)motor cortex. NAcc connectivity showed a strengthening of connectivity with the dorsal anterior cingulate cortex (ACC) and subcortical structures such as the hippocampus, and a specific age-related decrease in connectivity with the ventromedial PFC (vmPFC). Behavioral measures from both functional paradigms correlated with resting-state connectivity strength with their respective seed. That is, age-related change in learning performance was mediated by connectivity between the dlPFC and thalamus, and age-related change in winning pleasure was mediated by connectivity between the NAcc and vmPFC. These patterns indicate (i) strengthening of connectivity between regions that support control and learning, (ii) more independent functioning of regions that support motor and control networks, and (iii) more independent functioning of regions that support motivation and valuation networks with age. These results are interpreted vis-à-vis a dual-systems model of adolescent brain development.

  18. Comparative modeling analyses of Cs-137 fate in the rivers impacted by Chernobyl and Fukushima accidents

    Energy Technology Data Exchange (ETDEWEB)

    Zheleznyak, M.; Kivva, S. [Institute of Environmental Radioactivity, Fukushima University (Japan)

    2014-07-01

    The consequences of the two largest nuclear accidents of recent decades, at the Chernobyl Nuclear Power Plant (ChNPP) (1986) and at the Fukushima Daiichi NPP (FDNPP) (2011), clearly demonstrated that radioactive contamination of water bodies in the vicinity of an NPP and on the waterways from it (the river-reservoir system after the Chernobyl accident; rivers and coastal marine waters after the Fukushima accident) was in both cases one of the main sources of public concern about the accident consequences. The higher weight given to water contamination in public perception of the accidents, compared with the actual fraction of doses received via aquatic pathways relative to other dose components, is a specificity of public perception of environmental contamination. This psychological phenomenon, confirmed after both accidents, provides supplementary arguments that reliable simulation and prediction of radionuclide dynamics in water and sediments is an important part of post-accident radioecological research. The purpose of the research is to use the experience of the modeling activities conducted over more than 25 years within the Chernobyl-affected Pripyat River and Dnieper River watershed, together with data from new monitoring studies in Japan of the Abukuma River (the largest in the region; watershed area 5400 km{sup 2}), Kuchibuto River, Uta River, Niita River, Natsui River, and Same River, as well as studies on the specifics of 'water-sediment' {sup 137}Cs exchanges in this area, to refine the 1-D model RIVTOX and the 2-D model COASTOX and so increase the predictive power of the modeling technologies. The results of the modeling studies are applied for more accurate prediction of water/sediment radionuclide contamination of rivers and reservoirs in Fukushima Prefecture and for comparative analyses of the efficiency of post-accident measures to diminish the contamination of the water bodies.

  19. Development of microbial-enzyme-mediated decomposition model parameters through steady-state and dynamic analyses

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Gangsheng [ORNL; Post, Wilfred M [ORNL; Mayes, Melanie [ORNL

    2013-01-01

    We developed a Microbial-ENzyme-mediated Decomposition (MEND) model, based on Michaelis-Menten kinetics, that describes the dynamics of physically defined pools of soil organic carbon (SOC). These include particulate, mineral-associated, and dissolved organic matter (POC, MOC, and DOC, respectively), microbial biomass, and associated exoenzymes. The ranges and/or distributions of parameters were determined by both analytical steady-state and dynamic analyses with SOC data from the literature. We used an improved multi-objective parameter sensitivity analysis (MOPSA) to identify the most important parameters for the full model: maintenance of microbial biomass, turnover and synthesis of enzymes, and carbon use efficiency (CUE). The model predicted that an increase of 2 °C (baseline temperature = 12 °C) caused the pools of POC-Cellulose, MOC, and total SOC to increase with dynamic CUE and decrease with constant CUE, as indicated by the 50% confidence intervals. Regardless of dynamic or constant CUE, the pool sizes of POC, MOC, and total SOC varied from −8% to 8% under +2 °C. A scenario analysis using a single parameter set indicates that higher temperature with dynamic CUE might result in greater net increases in both the POC-Cellulose and MOC pools. The different dynamics of the various SOC pools reflected the catalytic functions of specific enzymes targeting specific substrates and the interactions between microbes, enzymes, and SOC. With the feasible parameter values estimated in this study, models incorporating fundamental principles of microbial-enzyme dynamics can lead to simulation results qualitatively different from those of traditional models with fast/slow/passive pools.
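The Michaelis-Menten core of such a model can be sketched as a two-pool system in which substrate uptake saturates with substrate and scales with microbial biomass; the parameter values below are hypothetical, and the real MEND model tracks several pools plus exoenzymes:

```python
def simulate(S0=100.0, B0=2.0, vmax=1.0, km=250.0, cue=0.4, mr=0.01,
             dt=0.1, steps=10_000):
    """Euler integration of a minimal substrate (S) / microbial biomass (B)
    system with Michaelis-Menten uptake:
        uptake = vmax * B * S / (km + S)
        dS/dt  = -uptake
        dB/dt  = cue * uptake - mr * B
    where cue is the carbon use efficiency and mr a maintenance/mortality
    rate (all values hypothetical)."""
    S, B = S0, B0
    for _ in range(steps):
        uptake = vmax * B * S / (km + S)
        S += -uptake * dt
        B += (cue * uptake - mr * B) * dt
    return S, B

S_end, B_end = simulate()
# Substrate is drawn down over time; a larger CUE routes more of the uptake
# into biomass, which is the lever behind the study's dynamic-vs-constant
# CUE scenarios.
```

Raising `cue` in this sketch shifts carbon from respiration losses into biomass, qualitatively reproducing why the dynamic- and constant-CUE scenarios diverge under warming.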

  20. Genomic analyses with biofilter 2.0: knowledge driven filtering, annotation, and model development.

    Science.gov (United States)

    Pendergrass, Sarah A; Frase, Alex; Wallace, John; Wolfe, Daniel; Katiyar, Neerja; Moore, Carrie; Ritchie, Marylyn D

    2013-12-30

    The ever-growing wealth of biological information available through multiple comprehensive database repositories can be leveraged for advanced analysis of data. We have extensively revised and updated the multi-purpose software tool Biofilter, which allows researchers to annotate and/or filter data as well as generate gene-gene interaction models based on existing biological knowledge. Biofilter now includes the Library of Knowledge Integration (LOKI) for accessing and integrating existing comprehensive database information, with more flexibility in how ambiguity of gene identifiers is handled. We have also updated the way importance scores for interaction models are generated. In addition, Biofilter 2.0 now works with a range of data types and formats, including single nucleotide polymorphism (SNP) identifiers, rare variant identifiers, base pair positions, gene symbols, genetic regions, and copy number variant (CNV) location information. Biofilter provides a convenient single interface for accessing multiple publicly available human genetic data sources that have been compiled in the supporting database of LOKI. Information within LOKI includes genomic locations of SNPs and genes, as well as known relationships among genes and proteins such as interaction pairs, pathways, and ontological categories. Via Biofilter 2.0 researchers can:
    • Annotate genomic location or region based data, such as results from association studies or CNV analyses, with relevant biological knowledge for deeper interpretation;
    • Filter genomic location or region based data on biological criteria, such as filtering a series of SNPs to retain only SNPs present in specific genes within specific pathways of interest;
    • Generate predictive models for gene-gene, SNP-SNP, or CNV-CNV interactions based on biological information, with priority for models to be tested based on biological relevance, thus narrowing the search space and reducing multiple hypothesis-testing. Biofilter is a software
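The filtering operation described above, retaining only SNPs that fall in genes belonging to pathways of interest, reduces to set lookups over the knowledge base. A minimal sketch with hypothetical in-memory stand-ins for LOKI's tables (the identifiers below are invented for illustration):

```python
# Hypothetical stand-ins for LOKI's compiled knowledge tables.
snp_to_gene = {
    "rs101": "TP53", "rs102": "BRCA1", "rs103": "ACTB", "rs104": "EGFR",
}
pathway_to_genes = {
    "apoptosis":  {"TP53", "EGFR"},
    "dna_repair": {"BRCA1", "TP53"},
}

def filter_snps(snps, pathways):
    """Retain only SNPs located in genes that belong to at least one
    of the requested pathways."""
    wanted = set().union(*(pathway_to_genes[p] for p in pathways))
    return [s for s in snps if snp_to_gene.get(s) in wanted]

hits = filter_snps(["rs101", "rs102", "rs103", "rs104"], ["apoptosis"])
# Only the SNPs in TP53 and EGFR survive the apoptosis-pathway filter.
```

The annotation and model-generation modes are the same lookups run in reverse: attach the knowledge to each position, or enumerate pairs of positions whose genes share a relationship.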

  1. Controls on Yardang Morphology: Insights from Field Measurements, Lidar Topographic Analyses, and Numerical Modeling

    Science.gov (United States)

    Pelletier, J. D.; Kapp, P. A.

    2014-12-01

    Yardangs are streamlined bedforms sculpted by the wind and wind-blown sand. They can form as relatively resistant exposed rocks erode more slowly than surrounding exposed rocks, thus causing the more resistant rocks to stand higher in the landscape and deflect the wind and wind-blown sand into adjacent troughs in a positive feedback. How this feedback gives rise to streamlined forms that locally have a consistent size is not well understood theoretically. In this study we combine field measurements in the yardangs of Ocotillo Wells SVRA with analyses of airborne and terrestrial lidar datasets and numerical modeling to quantify and understand the controls on yardang morphology. The classic model for yardang morphology is that they evolve to an ideal 4:1 length-to-width aspect ratio that minimizes aerodynamic drag. We show using computational fluid dynamics (CFD) modeling that this model is incorrect: the 4:1 aspect ratio is the value corresponding to minimum drag for free bodies, i.e. obstacles around which air flows on all sides. Yardangs, in contrast, are embedded in Earth's surface. For such rough streamlined half-bodies, the aspect ratio corresponding to minimum drag is larger than 20:1. As an alternative to the minimum-drag model, we propose that the aspect ratio of yardangs not significantly influenced by structural controls is controlled by the angle of dispersion of the aerodynamic jet created as deflected wind and wind-blown sand exits the troughs between incipient yardang noses. Aerodynamic jets have a universal dispersion angle of 11.8 degrees, thus predicting a yardang aspect ratio of ~5:1. We developed a landscape evolution model that combines the physics of boundary layer flow with aeolian saltation and bedrock erosion to form yardangs with a range of sizes and aspect ratios similar to those observed in nature. 
Yardangs with aspect ratios both larger and smaller than 5:1 occur in the model since the strike and dip of the resistant rock unit also exerts
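The ~5:1 aspect ratio quoted above follows from one line of trigonometry, if one assumes the jet erodes laterally at the slope set by its 11.8° dispersion angle (that geometric construction is our assumption for illustration):

```python
import math

# A jet dispersing at the universal angle of 11.8 degrees spreads laterally
# by tan(11.8°) per unit of downwind distance, so the length-to-width aspect
# ratio it imposes is roughly the reciprocal of that slope.
dispersion_deg = 11.8
aspect_ratio = 1.0 / math.tan(math.radians(dispersion_deg))
# aspect_ratio is close to 5, i.e. the ~5:1 value proposed for yardangs,
# versus the >20:1 minimum-drag ratio for rough streamlined half-bodies.
```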

  2. Sensitivity analyses of a colloid-facilitated contaminant transport model for unsaturated heterogeneous soil conditions.

    Science.gov (United States)

    Périard, Yann; José Gumiere, Silvio; Rousseau, Alain N.; Caron, Jean

    2013-04-01

    Certain contaminants may travel faster through soils when they are sorbed to subsurface colloidal particles. Indeed, subsurface colloids may act as carriers of some contaminants, accelerating their translocation through the soil into the water table. This phenomenon is known as colloid-facilitated contaminant transport. It plays a significant role in contaminant transport in soils and has been recognized as a source of groundwater contamination. From a mechanistic point of view, the attachment/detachment of colloidal particles from the soil matrix or from the air-water interface and the straining process may modify the hydraulic properties of the porous media. Šimůnek et al. (2006) developed a model that can simulate colloid-facilitated contaminant transport in variably saturated porous media. The model is based on the solution of a modified advection-dispersion equation that accounts for several processes, namely straining, exclusion, and attachment/detachment kinetics of colloids through the soil matrix. The solutions of these governing partial differential equations are obtained using a standard Galerkin-type linear finite element scheme, implemented in the HYDRUS-2D/3D software (Šimůnek et al., 2012). Modeling colloid transport through the soil and the interaction of colloids with the soil matrix and other contaminants is complex and requires the characterization of many model parameters. In practice, it is very difficult to assess actual transport parameter values, so they are often calibrated. However, before calibration, one needs to know which parameters have the greatest impact on the output variables. This kind of information can be obtained through a sensitivity analysis of the model. The main objective of this work is to perform local and global sensitivity analyses of the colloid-facilitated contaminant transport module of HYDRUS. The sensitivity analysis was performed in two steps: (i) we applied a screening method based on Morris' elementary
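The Morris screening step ranks parameters by elementary effects: perturb one factor at a time, record the normalised output change, and average the absolute effects (mu*) per factor. A simplified one-at-a-time sketch on a toy function standing in for the HYDRUS module (the full Morris design uses grid-based trajectories):

```python
import random

random.seed(0)

def model(x):
    # Toy stand-in: strongly sensitive to x[0], weakly to x[1], not to x[2].
    return 10.0 * x[0] + 0.5 * x[1] + 0.0 * x[2]

def morris_mu_star(f, k=3, trajectories=20, delta=0.1):
    """Mean absolute elementary effect (mu*) per input factor."""
    totals = [0.0] * k
    for _ in range(trajectories):
        x = [random.uniform(0.0, 1.0 - delta) for _ in range(k)]
        base = f(x)
        for i in range(k):
            xp = list(x)
            xp[i] += delta                      # one-at-a-time perturbation
            totals[i] += abs(f(xp) - base) / delta
    return [t / trajectories for t in totals]

mu_star = morris_mu_star(model)
# mu_star ranks the factors: x[0] dominates and x[2] can be fixed before
# calibration, which is exactly the information a screening step provides.
```

On a real simulator the same loop wraps each model run, with factors scaled to their physical ranges before perturbation.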

  3. A Hidden Markov model web application for analysing bacterial genomotyping DNA microarray experiments.

    Science.gov (United States)

    Newton, Richard; Hinds, Jason; Wernisch, Lorenz

    2006-01-01

    Whole genome DNA microarray genomotyping experiments compare the gene content of different species or strains of bacteria. A statistical approach to analysing the results of these experiments was developed, based on a Hidden Markov model (HMM), which takes adjacency of genes along the genome into account when calling genes present or absent. The model was implemented in the statistical language R and applied to three datasets. The method is numerically stable with good convergence properties. Error rates are reduced compared with approaches that ignore spatial information. Moreover, the HMM circumvents a problem encountered in a conventional analysis: determining the cut-off value to use to classify a gene as absent. An Apache Struts web interface for the R script was created for the benefit of users unfamiliar with R. The application may be found at http://hmmgd.cryst.bbk.ac.uk/hmmgd. The source code illustrating how to run R scripts from an Apache Struts-based web application is available from the corresponding author on request. The application is also available for local installation if required.
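For two states (present/absent) with Gaussian emissions on the log-ratios, the HMM call reduces to a short Viterbi decode in which the transition probabilities encode the adjacency information. A stdlib sketch with hypothetical parameters (not those fitted by the R implementation):

```python
import math

def viterbi_calls(log_ratios, stay=0.9, mu=(0.0, -2.0), sd=(0.5, 0.5)):
    """Two-state Viterbi decode: state 0 = gene present, 1 = absent.
    Adjacent genes tend to share a state (transition probability 'stay'),
    which is how spatial information along the genome enters the call."""
    def emit(state, x):  # Gaussian log-density (constants omitted per state)
        return -0.5 * ((x - mu[state]) / sd[state]) ** 2 - math.log(sd[state])

    log_stay, log_switch = math.log(stay), math.log(1.0 - stay)
    v = [emit(s, log_ratios[0]) + math.log(0.5) for s in (0, 1)]
    back = []
    for x in log_ratios[1:]:
        ptrs, nv = [], []
        for s in (0, 1):
            cands = [v[p] + (log_stay if p == s else log_switch) for p in (0, 1)]
            best = max((0, 1), key=lambda p: cands[p])
            ptrs.append(best)
            nv.append(cands[best] + emit(s, x))
        v, back = nv, back + [ptrs]
    state = max((0, 1), key=lambda s: v[s])
    path = [state]
    for ptrs in reversed(back):
        state = ptrs[state]
        path.append(state)
    return list(reversed(path))

# A lone dip at position 2 is smoothed over by its "present" neighbours,
# while the sustained low block at the end is called "absent" - the
# behaviour a per-gene cut-off cannot reproduce.
obs = [0.1, -0.2, -1.2, 0.0, -2.1, -1.9, -2.2]
calls = viterbi_calls(obs)
```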

  4. Segmented Foil SEM Grids at Fermilab

    CERN Document Server

    Kopp, Sacha E; Childress, Sam; Ford, R; Harris, Debbie; Indurthy, Dharmaraj; Kendziora, Cary; Moore, Craig D; Pavlovich, Zarko; Proga, Marek; Tassotto, Gianni; Zwaska, Robert M

    2005-01-01

    We present recent beam data from a new design of profile monitor for proton beams at Fermilab. The monitors, consisting of grids of segmented Ti foils 5 micrometers thick, are secondary-electron emission monitors (SEMs). We review data on the device's precision on beam centroid position and beam width, and on the beam loss associated with the SEM material placed in the beam.

  5. Global isoprene emissions estimated using MEGAN, ECMWF analyses and a detailed canopy environment model

    Directory of Open Access Journals (Sweden)

    J.-F. Müller

    2008-03-01

    The global emissions of isoprene are calculated at 0.5° resolution for each year between 1995 and 2006, based on the MEGAN (Model of Emissions of Gases and Aerosols from Nature) version 2 model (Guenther et al., 2006) and a detailed multi-layer canopy environment model for the calculation of leaf temperature and visible radiation fluxes. The calculation is driven by meteorological fields – air temperature, cloud cover, downward solar irradiance, wind speed, and volumetric soil moisture in 4 soil layers – provided by analyses of the European Centre for Medium-Range Weather Forecasts (ECMWF). The estimated annual global isoprene emission ranges between 374 Tg (in 1996) and 449 Tg (in 1998 and 2005), for an average of ca. 410 Tg/year over the whole period, i.e. about 30% less than the standard MEGAN estimate (Guenther et al., 2006). This difference is due, to a large extent, to the impact of the soil moisture stress factor, which is found here to decrease the global emissions by more than 20%. In qualitative agreement with past studies, high annual emissions are found to be generally associated with El Niño events. The emission inventory is evaluated against flux measurement campaigns at Harvard Forest (Massachusetts) and Tapajós in Amazonia, showing that the model can capture the short-term variability of emissions quite well, but that it fails to reproduce the observed seasonal variation at the tropical rainforest site, with largely overestimated wet-season fluxes. The comparison of the HCHO vertical columns calculated by a chemistry and transport model (CTM) with HCHO distributions retrieved from space provides useful insights into tropical isoprene emissions. For example, the relatively low emissions calculated over Western Amazonia (compared to the corresponding estimates in the inventory of Guenther et al., 1995) are validated by the excellent agreement found between the CTM and HCHO data over this region. The parameterized impact of the soil moisture

  6. Stream Tracer Integrity: Comparative Analyses of Rhodamine-WT and Sodium Chloride through Transient Storage Modeling

    Science.gov (United States)

    Smull, E. M.; Wlostowski, A. N.; Gooseff, M. N.; Bowden, W. B.; Wollheim, W. M.

    2013-12-01

    Solute transport in natural channels describes the movement of water and dissolved matter through a river reach of interest. Conservative tracers allow us to label a parcel of stream water, such that we can track its movement downstream through space and time. A transient storage model (TSM) can be fit to the breakthrough curve (BTC) following a stream tracer experiment, as a way to quantify advection, dispersion, and transient storage processes. Arctic streams and rivers, in particular, are continuously underlain by permafrost, which makes for a simplified surface water-groundwater exchange. Sodium chloride (NaCl) and Rhodamine-WT (RWT) are widely used tracers, and differences between the two in conservative behavior and detection limits have been noted in small-scale field and laboratory studies. This study seeks to further this understanding by applying the OTIS model to NaCl and RWT BTC data from a field study on the Kuparuk River, Alaska, at varying flow rates. There are two main questions to be answered: 1) Do differences between NaCl and RWT manifest in OTIS parameter values? 2) Are the OTIS model results reliable for NaCl, RWT, or both? Fieldwork was performed in the summer of 2012 on the Kuparuk River, and modeling was performed using a modified OTIS framework, which provided for parameter optimization and further global sensitivity analyses. The results of this study will contribute to the greater body of literature surrounding Arctic stream hydrology and will assist in methodology for future tracer field studies. Additionally, the modeling work will provide an analysis of OTIS parameter identifiability, and assess stream tracer integrity (i.e. how well the BTC data represent the system) and its relation to TSM performance (i.e. how well the TSM can find a unique fit to the BTC data).
The quantitative tools used can be applied to other solute transport studies, to better understand potential deviations in model outcome due to stream tracer choice and

  7. Neural Spike-Train Analyses of the Speech-Based Envelope Power Spectrum Model

    Directory of Open Access Journals (Sweden)

    Varsha H. Rallapalli

    2016-10-01

    Diagnosing and treating hearing impairment is challenging because people with similar degrees of sensorineural hearing loss (SNHL) often have different speech-recognition abilities. The speech-based envelope power spectrum model (sEPSM) has demonstrated that the signal-to-noise ratio (SNRENV) from a modulation filter bank provides a robust speech-intelligibility measure across a wider range of degraded conditions than many long-standing models. In the sEPSM, noise (N) is assumed to: (a) reduce S + N envelope power by filling in dips within clean speech (S), and (b) introduce an envelope noise floor from intrinsic fluctuations in the noise itself. While the promise of SNRENV has been demonstrated for normal-hearing listeners, it has not been thoroughly extended to hearing-impaired listeners because of limited physiological knowledge of how SNHL affects speech-in-noise envelope coding relative to noise alone. Here, envelope coding of speech-in-noise stimuli was quantified from auditory-nerve model spike trains using shuffled correlograms, which were analyzed in the modulation-frequency domain to compute modulation-band estimates of neural SNRENV. Preliminary spike-train analyses show strong similarities to the sEPSM, demonstrating the feasibility of neural SNRENV computations. Results suggest that individual differences can occur based on differential degrees of outer- and inner-hair-cell dysfunction in listeners currently diagnosed into the single audiological SNHL category. The predicted acoustic-SNR dependence of individual differences suggests that the SNR-dependent rate of susceptibility could be an important metric in diagnosing individual differences. Future measurements of neural SNRENV in animal studies with various forms of SNHL will provide valuable insight for understanding individual differences in speech-in-noise intelligibility.
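The final SNRenv arithmetic compares the normalised envelope power of speech-in-noise against that of the noise alone in each modulation band. A sketch that takes envelope samples as given (the study derives envelope power from shuffled correlograms of spike trains, which is not reproduced here; the normalisation below is one common EPSM-style choice):

```python
import math
from statistics import mean, pvariance

def env_power(envelope):
    """Normalised envelope power: AC power of the envelope divided by its
    squared mean (DC component), a common EPSM-style normalisation."""
    return pvariance(envelope) / mean(envelope) ** 2

def snr_env_db(env_sn, env_n, floor=1e-6):
    """SNRenv in dB: envelope power attributable to speech (S+N minus N)
    relative to the envelope power of the noise alone."""
    p_sn, p_n = env_power(env_sn), env_power(env_n)
    return 10.0 * math.log10(max(p_sn - p_n, floor) / p_n)

# Hypothetical envelope samples: clean-speech modulation dips are partly
# filled in by noise, and filled in further as the noise level rises.
env_speech_low_noise  = [1.0, 0.2, 1.0, 0.2, 1.0, 0.2]
env_speech_high_noise = [1.0, 0.7, 1.0, 0.7, 1.0, 0.7]
env_noise             = [0.6, 0.55, 0.6, 0.65, 0.6, 0.55]

low  = snr_env_db(env_speech_low_noise, env_noise)
high = snr_env_db(env_speech_high_noise, env_noise)
# Filling the dips flattens the envelope, so SNRenv drops (high < low),
# which is mechanism (a) in the abstract above.
```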

  8. 3DSEM++: Adaptive and intelligent 3D SEM surface reconstruction.

    Science.gov (United States)

    Tafti, Ahmad P; Holz, Jessica D; Baghaie, Ahmadreza; Owen, Heather A; He, Max M; Yu, Zeyun

    2016-08-01

    Structural analysis of microscopic objects is a longstanding topic in several scientific disciplines, such as the biological, mechanical, and materials sciences. The scanning electron microscope (SEM), a promising imaging instrument, has been used for decades to determine the surface properties (e.g., compositions or geometries) of specimens, achieving increased magnification and contrast and resolution finer than one nanometer. Whereas SEM micrographs still remain two-dimensional (2D), many research and educational questions truly require knowledge of the corresponding three-dimensional (3D) structures. 3D surface reconstruction from SEM images leads to a remarkable understanding of microscopic surfaces, allowing informative and qualitative visualization of the samples being investigated. In this contribution, we integrate several computational technologies, including machine learning, the a contrario methodology, and epipolar geometry, to design and develop a novel and efficient method called 3DSEM++ for multi-view 3D SEM surface reconstruction in an adaptive and intelligent fashion. Experiments performed on real and synthetic data show that the approach achieves significant precision in both SEM extrinsic calibration and 3D surface modeling.

  9. Recent advances in 3D SEM surface reconstruction.

    Science.gov (United States)

    Tafti, Ahmad P; Kirkpatrick, Andrew B; Alavi, Zahrasadat; Owen, Heather A; Yu, Zeyun

    2015-11-01

    The scanning electron microscope (SEM), as one of the most commonly used instruments in biology and material sciences, employs electrons instead of light to determine the surface properties of specimens. However, SEM micrographs still remain 2D images. To effectively measure and visualize the surface attributes, we need to restore the 3D shape model from the SEM images. 3D surface reconstruction is a longstanding topic in microscopy vision as it offers quantitative and visual information for a variety of applications including medicine, pharmacology, chemistry, and mechanics. In this paper, we attempt to explain the expanding body of work in this area, including a discussion of recent techniques and algorithms. With the present work, we also enhance the reliability, accuracy, and speed of 3D SEM surface reconstruction by designing and developing an optimized multi-view framework. We then consider several real-world experiments as well as synthetic data to examine the qualitative and quantitative attributes of our proposed framework. Furthermore, we present a taxonomy of 3D SEM surface reconstruction approaches and address several challenging issues as part of our future work.

  10. SEM probe of IC radiation sensitivity

    Science.gov (United States)

    Gauthier, M. K.; Stanley, A. G.

    1979-01-01

    A Scanning Electron Microscope (SEM) used to irradiate a single integrated circuit (IC) subcomponent to test for radiation sensitivity can localize an area of the IC smaller than 0.03 by 0.03 mm, allowing the exact location of a radiation-sensitive section to be determined.

  11. Does Sexually Explicit Media (SEM) Affect Me?

    DEFF Research Database (Denmark)

    Hald, Gert Martin; Træen, Bente; Noor, Syed W

    2015-01-01

    Using a self-selected online sample of 448 Norwegian men who have sex with men (MSM) and a cross-sectional design, the present study investigated first-person effects of sexually explicit media (SEM) consumption on sexual knowledge, enjoyment of and interest in sex, attitudes towards sex, and understanding of one's sexual orientation. First-person effects refer to self-perceived and self-reported effects of SEM consumption as experienced by the consumer. In addition, the study examined and provided a thorough validation of the psychometric properties of the seven-item Pornography Consumption Effect Scale (PCES). The study found that 93% of MSM reported small-to-large positive effects from their SEM consumption on their sexual knowledge, enjoyment of and interest in sex, attitudes towards sex and understanding of their sexual orientation. Only 7% reported any negative effects from their SEM consumption.

  12. Detection of Coaxial Backscattered Electrons in SEM

    Institute of Scientific and Technical Information of China (English)

    2000-01-01

    We present a coaxial detection scheme for backscattered electrons in the SEM. The lens aperture is used to energy-filter and focus the backscattered electrons. This particular geometry allows us to eliminate the low-energy backscattered electrons and to collect those backscattered close to the incident-beam orientation. Its main advantages are attenuation of topographic contrast and enhancement of atomic-number contrast; thus this new SEM is well suited to analyzing material composition.

  13. Usefulness of non-linear input-output models for economic impact analyses in tourism and recreation

    NARCIS (Netherlands)

    Klijs, J.; Peerlings, J.H.M.; Heijman, W.J.M.

    2015-01-01

    In tourism and recreation management it is still common practice to apply traditional input–output (IO) economic impact models, despite their well-known limitations. In this study the authors analyse the usefulness of applying a non-linear input–output (NLIO) model, in which price-induced input substitution is possible.
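
    For contrast with the NLIO approach, the traditional fixed-coefficient IO model the authors criticize can be sketched in a few lines; the two-sector coefficient matrix and demand vector below are hypothetical:

```python
import numpy as np

# Hypothetical two-sector technical-coefficient matrix A:
# A[i, j] = input from sector i needed per unit of output of sector j.
A = np.array([[0.20, 0.30],
              [0.10, 0.25]])
final_demand = np.array([100.0, 50.0])   # e.g. extra tourist spending per sector

# Leontief inverse: total (direct + indirect) output per unit of final demand.
leontief_inv = np.linalg.inv(np.eye(2) - A)
total_output = leontief_inv @ final_demand

# Fixed coefficients mean impacts scale linearly with demand -- exactly the
# limitation an NLIO model relaxes by allowing price-induced substitution.
print(total_output)
```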

  14. Molecular analyses of neurogenic defects in a human pluripotent stem cell model of fragile X syndrome.

    Science.gov (United States)

    Boland, Michael J; Nazor, Kristopher L; Tran, Ha T; Szücs, Attila; Lynch, Candace L; Paredes, Ryder; Tassone, Flora; Sanna, Pietro Paolo; Hagerman, Randi J; Loring, Jeanne F

    2017-01-29

    New research suggests that common pathways are altered in many neurodevelopmental disorders including autism spectrum disorder; however, little is known about early molecular events that contribute to the pathology of these diseases. The study of monogenic, neurodevelopmental disorders with a high incidence of autistic behaviours, such as fragile X syndrome, has the potential to identify genes and pathways that are dysregulated in autism spectrum disorder as well as fragile X syndrome. In vitro generation of human disease-relevant cell types provides the ability to investigate aspects of disease that are impossible to study in patients or animal models. Differentiation of human pluripotent stem cells recapitulates development of the neocortex, an area affected in both fragile X syndrome and autism spectrum disorder. We have generated induced human pluripotent stem cells from several individuals clinically diagnosed with fragile X syndrome and autism spectrum disorder. When differentiated to dorsal forebrain cell fates, our fragile X syndrome human pluripotent stem cell lines exhibited reproducible aberrant neurogenic phenotypes. Using global gene expression and DNA methylation profiling, we have analysed the early stages of neurogenesis in fragile X syndrome human pluripotent stem cells. We discovered aberrant DNA methylation patterns at specific genomic regions in fragile X syndrome cells, and identified dysregulated gene- and network-level correlates of fragile X syndrome that are associated with developmental signalling, cell migration, and neuronal maturation. Integration of our gene expression and epigenetic analysis identified altered epigenetic-mediated transcriptional regulation of a distinct set of genes in fragile X syndrome. These fragile X syndrome-aberrant networks are significantly enriched for genes associated with autism spectrum disorder, giving support to the idea that underlying similarities exist among these neurodevelopmental diseases.

  15. Expanding the Conversation about SEM: Advancing SEM Efforts to Improve Student Learning and Persistence--Part II

    Science.gov (United States)

    Yale, Amanda

    2010-01-01

    The first article in this two-part series focused on the need for enrollment management conceptual and organizational models to focus more intentionally and purposefully on efforts related to improving student learning, success, and persistence. Time and again, SEM is viewed from a conventional lens comprising marketing, recruitment and …

  16. A simple beam model to analyse the durability of adhesively bonded tile floorings in presence of shrinkage

    Directory of Open Access Journals (Sweden)

    S. de Miranda

    2014-07-01

    Full Text Available A simple beam model for the evaluation of tile debonding due to substrate shrinkage is presented. The tile-adhesive-substrate package is modeled as an Euler-Bernoulli beam lying on a two-layer elastic foundation. An effective discrete model for inter-tile grouting is introduced with the aim of modelling workmanship defects due to partially filled groutings. The model is validated using the results of a 2D FE model. Different defect configurations and adhesive typologies are analysed, focusing attention on the prediction of normal stresses in the adhesive layer under the assumption of Mode I failure of the adhesive.
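
    A rough numerical sketch of the kind of model the abstract describes, simplified to a single-layer Winkler foundation (the paper uses a two-layer foundation; all stiffness and load values here are illustrative, not calibrated to a real tile-adhesive system):

```python
import numpy as np

# Finite-difference solution of an Euler-Bernoulli beam on a Winkler
# elastic foundation: EI * w'''' + k * w = q, simply supported ends.
EI = 1.0e4      # bending stiffness [N*m^2] (illustrative)
k = 1.0e6       # foundation modulus [N/m^2] (illustrative)
q = 1.0e3       # uniform load [N/m] (illustrative)
L = 2.0         # beam length [m]
n = 201
h = L / (n - 1)
s = EI / h**4   # scale of the 4th-derivative stencil

K = np.zeros((n, n))
rhs = np.full(n, q)
for i in range(2, n - 2):
    K[i, i - 2:i + 3] += s * np.array([1.0, -4.0, 6.0, -4.0, 1.0])
    K[i, i] += k
# Boundary conditions w = 0 and w'' = 0 (rows scaled by s for conditioning).
K[0, 0] = K[n - 1, n - 1] = s
K[1, 0:3] = s * np.array([1.0, -2.0, 1.0])
K[n - 2, n - 3:n] = s * np.array([1.0, -2.0, 1.0])
rhs[[0, 1, n - 2, n - 1]] = 0.0

w = np.linalg.solve(K, rhs)
# Away from the ends the deflection approaches the foundation-only limit q/k.
print(w[n // 2])
```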

  17. Evaluation Model of Core Competencies in Scientific Research Organizations Based on SEM

    Institute of Scientific and Technical Information of China (English)

    霍国庆; 张晓东; 董帅; 肖建华; 谢晔

    2011-01-01

    Applying the resource-based view from strategic management, this paper analyzes the strategic factors and the formative processes of competitive advantage in scientific research organizations, and proposes that their core competencies consist of five components: strategic leadership, talent cohesion, research-and-teaching inspiration, research synergy, and cooperative competition. On this basis, a multi-indicator evaluation system is constructed. Taking institutes of the Chinese Academy of Sciences as study objects, confirmatory factor analysis within SEM was carried out using the LISREL software; the results show that both the indicator system and the evaluation model perform well.
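
    The confirmatory factor analysis at the heart of such an evaluation model can be illustrated with a minimal one-factor sketch (the loadings and sample size are hypothetical; real analyses, as in the paper, fit the full model in LISREL by maximum likelihood):

```python
import numpy as np

# One-factor CFA: x = lam * f + e implies Sigma = lam lam' + diag(theta).
lam = np.array([0.8, 0.7, 0.6, 0.5])          # hypothetical standardized loadings
theta = 1.0 - lam ** 2                        # unique variances (standardized)
sigma = np.outer(lam, lam) + np.diag(theta)   # model-implied covariance matrix

# Simulate data from the model and recover a loading via the classic
# "triad" identity lam_1^2 = s12 * s13 / s23 (holds for one-factor models).
rng = np.random.default_rng(1)
n = 20000
f = rng.normal(size=n)
x = f[:, None] * lam + rng.normal(size=(n, 4)) * np.sqrt(theta)
s = np.cov(x, rowvar=False)
lam1_hat = np.sqrt(s[0, 1] * s[0, 2] / s[1, 2])
print(lam1_hat)   # recovers lam[0] up to sampling error
```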

  18. Monte Carlo modeling and analyses of YALINA-booster subcritical assembly part 1: analytical models and main neutronics parameters.

    Energy Technology Data Exchange (ETDEWEB)

    Talamo, A.; Gohar, M. Y. A.; Nuclear Engineering Division

    2008-09-11

    This study was carried out to model and analyze the YALINA-Booster facility, of the Joint Institute for Power and Nuclear Research of Belarus, with the long term objective of advancing the utilization of accelerator driven systems for the incineration of nuclear waste. The YALINA-Booster facility is a subcritical assembly, driven by an external neutron source, which has been constructed to study the neutron physics and to develop and refine methodologies to control the operation of accelerator driven systems. The external neutron source consists of Californium-252 spontaneous fission neutrons, 2.45 MeV neutrons from Deuterium-Deuterium reactions, or 14.1 MeV neutrons from Deuterium-Tritium reactions. In the latter two cases a deuteron beam is used to generate the neutrons. This study is a part of the collaborative activity between Argonne National Laboratory (ANL) of USA and the Joint Institute for Power and Nuclear Research of Belarus. In addition, the International Atomic Energy Agency (IAEA) has a coordinated research project benchmarking and comparing the results of different numerical codes with the experimental data available from the YALINA-Booster facility and ANL has a leading role coordinating the IAEA activity. The YALINA-Booster facility has been modeled according to the benchmark specifications defined for the IAEA activity without any geometrical homogenization using the Monte Carlo codes MONK and MCNP/MCNPX/MCB. The MONK model perfectly matches the MCNP one. The computational analyses have been extended through the MCB code, which is an extension of the MCNP code with burnup capability because of its additional feature for analyzing source driven multiplying assemblies. The main neutronics parameters of the YALINA-Booster facility were calculated using these computer codes with different nuclear data libraries based on ENDF/B-VI-0, -6, JEF-2.2, and JEF-3.1.

  19. Three dimensional analysis of the pore space in fine-grained Boom Clay, using BIB-SEM (broad-ion beam scanning electron microscopy), combined with FIB (focused ion-beam) serial cross-sectioning, pore network modeling and Wood's metal injection

    Science.gov (United States)

    Hemes, Susanne; Klaver, Jop; Desbois, Guillaume; Urai, Janos

    2014-05-01

    The Boom Clay is, besides the Ypresian clays, one of the potential host rock materials for radioactive waste disposal in Belgium (Gens et al., 2003; Van Marcke & Laenen, 2005; Verhoef et al., 2011). To assess parameters relevant to the diffusion-controlled transport of radionuclides in the material, such as porosity, pore connectivity and permeability, it is crucial to characterize the pore space at high resolution (nm-scale) and in 3D. Focused-ion-beam (FIB) serial cross-sectioning in combination with high-resolution scanning electron microscopy (SEM), pore network modeling, Wood's metal injection and broad-ion-beam (BIB) milling constitute a superior set of methods to characterize the 3D pore space in fine-grained, clayey materials, down to nm-scale resolution. In the present study, we identified characteristic 3D pore space morphologies, determined the 3D volume porosity of the material and applied pore network extraction modeling (Dong and Blunt, 2009) to assess the connectivity of the pore space and to discriminate between pore bodies and pore throats. Moreover, we used Wood's metal injection (WMI) in combination with BIB-SEM imaging to assess the pore connectivity at a larger scale and even higher resolution. The FIB-SEM results show a highly (~ 90 %) interconnected pore space in Boom Clay, down to the resolution of ~ 3E+03 nm³ (voxel size), with a total volume porosity of ~ 20 %. Pore morphologies of large (> 5E+08 nm³), highly interconnected pores are complex, with high surface-area-to-volume ratios (shape factors G ~ 0.01), whereas small (BIB-SEM, down to a resolution of ~ 50 nm² pixel-size, indicates an interconnected porosity fraction of ~ 80 %, of a total measured 2D porosity of ~ 20 %. Determining and distinguishing between pore bodies and pore throats enables us to compare 3D FIB-SEM pore-size distributions to 2D BIB-SEM data, as well as MIP data. Results show a good agreement between the 2D BIB-SEM and 3D FIB-SEM inferred pore
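
    The shape factor G quoted above is commonly defined in pore-network modeling (the convention used by Dong and Blunt) as G = A/P² for a pore cross-section; a quick sketch shows why G ~ 0.01 indicates highly irregular pores, since even simple reference shapes score far higher:

```python
import math

def shape_factor(area, perimeter):
    """Dimensionless shape factor G = A / P^2 (Dong & Blunt convention).
    A circle maximizes G at 1/(4*pi) ~ 0.0796; lower G means a more
    irregular, higher-surface-area cross-section."""
    return area / perimeter ** 2

# Reference shapes (unit sizes):
r = 1.0
g_circle = shape_factor(math.pi * r ** 2, 2 * math.pi * r)    # ~0.0796
g_square = shape_factor(1.0, 4.0)                             # 0.0625
g_equilateral = shape_factor(math.sqrt(3) / 4, 3.0)           # ~0.0481

# A measured G of ~0.01 lies far below all of these, i.e. a highly
# convoluted pore morphology with a large surface-area-to-volume ratio.
```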

  20. Insights into the evolution of tectonically-active glaciated mountain ranges from digital elevation model analyses

    Science.gov (United States)

    Brocklehurst, S. H.; Whipple, K. X.

    2003-12-01

    Glaciers have played an important role in the development of most active mountain ranges around the world during the Quaternary, but the interaction between glacial erosion (as modulated by climate change) and tectonic processes is poorly understood. The so-called glacial buzzsaw hypothesis (Brozovic et al., 1997) proposes that glaciers can incise as rapidly as the most rapid rock uplift rates, such that glaciated landscapes experiencing different rock uplift rates but the same snowline elevation will look essentially the same, with mean elevations close to the snowline. Digital elevation model-based analyses of the glaciated landscapes of the Nanga Parbat region, Pakistan, and the Southern Alps, New Zealand, lend some support to this hypothesis, but also reveal considerably more variety to the landscapes of glaciated, tectonically-active mountain ranges. Larger glaciers in the Nanga Parbat region maintain a low downvalley gradient and valley floor elevations close to the snowline, even in the face of extremely rapid rock uplift. However, smaller glaciers steepen in response to rapid uplift, similar to the response of rivers. A strong correlation between the height of hillslopes rising from the cirque floors and rock uplift rates implies that erosion processes on hillslopes cannot initially keep up with more rapid glacial incision rates. It is these staggering hillslopes that permit mountain peaks to rise above 8000m. The glacial buzzsaw hypothesis does not describe the evolution of the Southern Alps as well, because here mean elevations rise in areas of more rapid rock uplift. The buzzsaw hypothesis may work well in the Nanga Parbat region because the zone of rapid rock uplift is structurally confined to a narrow region. Alternatively, the Southern Alps may not have been rising sufficiently rapidly or sufficiently long for the glacial buzzsaw to be imposed outside the most rapidly uplifting region, around Mount Cook. The challenge now is to understand in detail

  1. Soil carbon response to land-use change: evaluation of a global vegetation model using observational meta-analyses

    Science.gov (United States)

    Nyawira, Sylvia S.; Nabel, Julia E. M. S.; Don, Axel; Brovkin, Victor; Pongratz, Julia

    2016-10-01

    Global model estimates of soil carbon changes from past land-use changes remain uncertain. We develop an approach for evaluating dynamic global vegetation models (DGVMs) against existing observational meta-analyses of soil carbon changes following land-use change. Using the DGVM JSBACH, we perform idealized simulations where the entire globe is covered by one vegetation type, which then undergoes a land-use change to another vegetation type. We select the grid cells that represent the climatic conditions of the meta-analyses and compare the mean simulated soil carbon changes to the meta-analyses. Our simulated results show model agreement with the observational data on the direction of changes in soil carbon for some land-use changes, although the model simulated a generally smaller magnitude of changes. The conversion of crop to forest resulted in soil carbon gain of 10 % compared to a gain of 42 % in the data, whereas the forest-to-crop change resulted in a simulated loss of -15 % compared to -40 %. The model and the observational data disagreed for the conversion of crop to grasslands. The model estimated a small soil carbon loss (-4 %), while observational data indicate a 38 % gain in soil carbon for the same land-use change. These model deviations from the observations are substantially reduced by explicitly accounting for crop harvesting and ignoring burning in grasslands in the model. We conclude that our idealized simulation approach provides an appropriate framework for evaluating DGVMs against meta-analyses and that this evaluation helps to identify the causes of deviation of simulated soil carbon changes from the meta-analyses.

  2. Some Esoteric Aspects of SEM that Its Practitioners Should Want to Know

    Science.gov (United States)

    Rozeboom, William W.

    2009-01-01

    The topic of this article is the interpretation of structural equation modeling (SEM) solutions. Its purpose is to augment structural modeling's metatheoretic resources while enhancing awareness of how problematic is the causal significance of SEM-parameter solutions. Part I focuses on the nonuniqueness and consequent dubious interpretability of…

  3. A very simple dynamic soil acidification model for scenario analyses and target load calculations

    NARCIS (Netherlands)

    Posch, M.; Reinds, G.J.

    2009-01-01

    A very simple dynamic soil acidification model, VSD, is described, which has been developed as the simplest extension of steady-state models for critical load calculations and with an eye on regional applications. The model requires only a minimum set of inputs (compared to more detailed models) and

  4. A Conceptual Model for Analysing Management Development in the UK Hospitality Industry

    Science.gov (United States)

    Watson, Sandra

    2007-01-01

    This paper presents a conceptual, contingent model of management development (MD). It explains the nature of the UK hospitality industry and its potential influence on MD practices, prior to exploring dimensions and relationships in the model. The embryonic model is presented as one that can enhance our understanding of the complexities of the…

  5. Secondary Evaluations of MTA 36-Month Outcomes: Propensity Score and Growth Mixture Model Analyses

    Science.gov (United States)

    Swanson, James M.; Hinshaw, Stephen P.; Arnold, L. Eugene; Gibbons, Robert D.; Marcus, Sue; Hur, Kwan; Jensen, Peter S.; Vitiello, Benedetto; Abikoff, Howard B.; Greenhill, Laurence L.; Hechtman, Lily; Pelham, William E.; Wells, Karen C.; Conners, C. Keith; March, John S.; Elliott, Glen R.; Epstein, Jeffery N.; Hoagwood, Kimberly; Hoza, Betsy; Molina, Brooke S. G.; Newcorn, Jeffrey H.; Severe, Joanne B.; Wigal, Timothy

    2007-01-01

    Objective: To evaluate two hypotheses: that self-selection bias contributed to lack of medication advantage at the 36-month assessment of the Multimodal Treatment Study of Children With ADHD (MTA) and that overall improvement over time obscured treatment effects in subgroups with different outcome trajectories. Method: Propensity score analyses,…

  7. Partial Least Squares Strukturgleichungsmodellierung (PLS-SEM)

    DEFF Research Database (Denmark)

    Hair, Joe F.; Hult, G. Thomas M.; Ringle, Christian M.

    Partial least squares structural equation modeling (PLS-SEM) has established itself in business and social science research as a suitable method for estimating causal models. Thanks to the method's user-friendliness and the available software, it is now also established in practice. This book provides an application-oriented introduction to PLS-SEM. The focus is on the fundamentals of the method and their practical implementation with the SmartPLS software. The book's concept relies on simple explanations of statistical approaches and the vivid presentation of numerous application examples based on a single case study. Many graphics, tables and illustrations ease understanding of PLS-SEM. Readers are also offered downloadable data sets, exercises and further articles for deeper study. The book is thus ideally suited for students, researchers and…

  8. Using an operating cost model to analyse the selection of aircraft type on short-haul routes

    CSIR Research Space (South Africa)

    Ssamula, B

    2006-08-01

    Full Text Available and the effect of passenger volume analysed. The model was applied to a specific route within Africa, and thereafter to varying passenger numbers, to choose the least costly aircraft. The results showed that smaller-capacity aircraft, even though limited by maximum...

  9. Pathway models for analysing and managing the introduction of alien plant pests—an overview and categorization

    Science.gov (United States)

    J.C. Douma; M. Pautasso; R.C. Venette; C. Robinet; L. Hemerik; M.C.M. Mourits; J. Schans; W. van der Werf

    2016-01-01

    Alien plant pests are introduced into new areas at unprecedented rates through global trade, transport, tourism and travel, threatening biodiversity and agriculture. Increasingly, the movement and introduction of pests is analysed with pathway models to provide risk managers with quantitative estimates of introduction risks and effectiveness of management options....

  10. Developing computational model-based diagnostics to analyse clinical chemistry data

    NARCIS (Netherlands)

    Schalkwijk, D.B. van; Bochove, K. van; Ommen, B. van; Freidig, A.P.; Someren, E.P. van; Greef, J. van der; Graaf, A.A. de

    2010-01-01

    This article provides methodological and technical considerations for researchers starting to develop computational model-based diagnostics using clinical chemistry data. These models are of increasing importance, since novel metabolomics and proteomics measuring technologies are able to produce large

  11. Bio-economic farm modelling to analyse agricultural land productivity in Rwanda

    NARCIS (Netherlands)

    Bidogeza, J.C.

    2011-01-01

    Keywords: Rwanda; farm household typology; sustainable technology adoption; multivariate analysis; land degradation; food security; bioeconomic model; crop simulation models; organic fertiliser; inorganic fertiliser; policy incentives.

    In Rwanda, land degradation contributes to the low and

  12. Comparative study analysing women's childbirth satisfaction and obstetric outcomes across two different models of maternity care

    OpenAIRE

    Conesa Ferrer, Ma Belén; Canteras Jordana, Manuel; Ballesteros Meseguer, Carmen; Carrillo García, César; Martínez Roche, M Emilia

    2016-01-01

    Objectives To describe the differences in obstetrical results and women's childbirth satisfaction across 2 different models of maternity care (biomedical model and humanised birth). Setting 2 university hospitals in south-eastern Spain from April to October 2013. Design A correlational descriptive study. Participants A convenience sample of 406 women participated in the study, 204 of the biomedical model and 202 of the humanised model. Results The differences in obstetrical results were (biom...

  13. Quantifying and Analysing Neighbourhood Characteristics Supporting Urban Land-Use Modelling

    DEFF Research Database (Denmark)

    Hansen, Henning Sten

    2009-01-01

    Land-use modelling and spatial scenarios have gained increased attention as a means to meet the challenge of reducing uncertainty in the spatial planning and decision-making. Several organisations have developed software for land-use modelling. Many of the recent modelling efforts incorporate cel...

  14. Driver Model of a Powered Wheelchair Operation as a Tool of Theoretical Analyses

    Science.gov (United States)

    Ito, Takuma; Inoue, Takenobu; Shino, Motoki; Kamata, Minoru

    This paper describes the construction of a driver model of powered wheelchair operation for understanding the characteristics of the driver. The main targets of existing research on driver models are the operation of automobiles and motorcycles, not low-speed vehicles such as powered wheelchairs. Therefore, we started by verifying the possibility of modeling the turning operation at a corner of a corridor. First, we conducted an experiment with a daily powered-wheelchair user driving his own vehicle. High reproducibility of driving and the driving characteristics needed to construct a driver model were both confirmed from the results of this experiment. Next, experiments with driving simulators were conducted to collect quantitative driving data. The parameters of the proposed driver model were identified from the experimental results. From simulations with the proposed driver model and the identified parameters, the characteristics of the proposed driver model were analyzed.

  15. Search Engine Marketing (SEM): Financial & Competitive Advantages of an Effective Hotel SEM Strategy

    Directory of Open Access Journals (Sweden)

    Leora Halpern Lanz

    2015-05-01

    Full Text Available Search engine marketing (SEM) and search engine optimization (SEO) are keystones of a hotel's marketing strategy; in fact, research shows that 90% of travelers start their vacation planning with a Google search. Learn five strategies that can enhance a hotel's SEO and SEM efforts to boost bookings.

  16. Building a SEM Analytics Reporting Portfolio

    Science.gov (United States)

    Goff, Jay W.; Williams, Brian G.; Kilgore, Wendy

    2016-01-01

    Effective strategic enrollment management (SEM) efforts require vast amounts of internal and external data to ensure that meaningful reporting and analysis systems can assist managers in decision making. A wide range of information is integral for leading effective and efficient student recruitment and retention programs. This article is designed…

  18. Advanced Applications of Structural Equation Modeling in Counseling Psychology Research

    Science.gov (United States)

    Martens, Matthew P.; Haase, Richard F.

    2006-01-01

    Structural equation modeling (SEM) is a data-analytic technique that allows researchers to test complex theoretical models. Most published applications of SEM involve analyses of cross-sectional recursive (i.e., unidirectional) models, but it is possible for researchers to test more complex designs that involve variables observed at multiple…

  19. WOMBAT——A tool for mixed model analyses in quantitative genetics by restricted maximum likelihood (REML)

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    WOMBAT is a software package for quantitative genetic analyses of continuous traits, fitting a linear mixed model; estimates of covariance components and the resulting genetic parameters are obtained by restricted maximum likelihood (REML). A wide range of models, comprising numerous traits, multiple fixed and random effects, selected genetic covariance structures, random regression models and reduced-rank estimation, is accommodated. WOMBAT employs up-to-date numerical and computational methods. Together with the use of efficient compilers, this generates fast executable programs suitable for large-scale analyses. Use of WOMBAT is illustrated with a bivariate analysis. The package consists of the executable program, available for LINUX and WINDOWS environments, a manual and a set of worked examples, and can be downloaded free of charge from http://agbu.une.edu.au/~kmeyer/wombat.html
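
    A toy illustration of the variance-component estimation such software performs, reduced to a balanced one-way random-effects model, where the simple ANOVA estimators coincide with REML (simulated data with hypothetical variance components; WOMBAT itself handles far more general models):

```python
import numpy as np

rng = np.random.default_rng(42)
n_groups, n_per = 200, 10            # e.g. sires x progeny records
sigma2_a, sigma2_e = 4.0, 9.0        # true between/within variance components

# Simulate y_ij = mu + a_i + e_ij
a = rng.normal(0, np.sqrt(sigma2_a), n_groups)
y = 50.0 + a[:, None] + rng.normal(0, np.sqrt(sigma2_e), (n_groups, n_per))

# Balanced one-way ANOVA estimators (identical to REML in this balanced case):
group_means = y.mean(axis=1)
msb = n_per * group_means.var(ddof=1)                          # between-group MS
msw = ((y - group_means[:, None]) ** 2).sum() / (n_groups * (n_per - 1))
s2_e_hat = msw
s2_a_hat = (msb - msw) / n_per

# Intraclass correlation, analogous to the variance ratios such packages report:
t_hat = s2_a_hat / (s2_a_hat + s2_e_hat)
print(s2_a_hat, s2_e_hat, t_hat)
```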

  20. Statistics and Evaluation of Music Landscape Consumption Based on a Coupled TAM-SEM Model

    Institute of Scientific and Technical Information of China (English)

    张伶

    2015-01-01

    Based on the technology acceptance model (TAM) coupled with structural equation modeling (SEM), survey data from 704 music lovers (652 Chinese citizens and 52 foreign nationals) were analyzed statistically. The study examined the direct effect of participation in music training (exchange) on actual consumption, and assessed whether such participation mediates the relationships between the perceived ease and perceived usefulness of consumption and actual consumption, in order to identify the factors influencing music lovers' actual consumption. The results show that the perceived usefulness (PU) of music landscape consumption is vital: it directly affects participation in music training and promotes both the frequency and the level of consumption. Perceived ease of consumption (PEOC) has no direct effect on the actual consumption of music landscapes, but influences it indirectly through perceived usefulness and participation in music training (exchange). Participation in music training (exchange) raises consumption levels and affects both perceived usefulness and actual consumption.

  1. Kinetic models for analysing myocardial [¹¹C]palmitate data

    Energy Technology Data Exchange (ETDEWEB)

    Jong, Hugo W.A.M. de [University Medical Centre Utrecht, Department of Radiology and Nuclear Medicine, Utrecht (Netherlands); VU University Medical Centre, Department of Nuclear Medicine and PET Research, Amsterdam (Netherlands); Rijzewijk, Luuk J.; Diamant, Michaela [VU University Medical Centre, Diabetes Centre, Amsterdam (Netherlands); Lubberink, Mark; Lammertsma, Adriaan A. [VU University Medical Centre, Department of Nuclear Medicine and PET Research, Amsterdam (Netherlands); Meer, Rutger W. van der; Lamb, Hildo J. [Leiden University Medical Centre, Department of Radiology, Leiden (Netherlands); Smit, Jan W.A. [Leiden University Medical Centre, Department of Endocrinology, Leiden (Netherlands)

    2009-06-15

    [¹¹C]Palmitate PET can be used to study myocardial fatty acid metabolism in vivo. Several models have been applied to describe and quantify its kinetics, but to date no systematic analysis has been performed to define the most suitable model. In this study a total of 21 plasma input models comprising one to three compartments and up to six free rate constants were compared using statistical analysis of clinical data and simulations. To this end, 14 healthy volunteers were scanned using [¹¹C]palmitate, whilst myocardial blood flow was measured using H₂¹⁵O. Models including an oxidative pathway, representing production of ¹¹CO₂, provided significantly better fits to the data than other models. Model robustness was increased by fixing efflux of ¹¹CO₂ to the oxidation rate. Simulations showed that a three-tissue compartment model describing oxidation and esterification was feasible when no more than three free rate constants were included. Although further studies in patients are required to substantiate this choice, based on the accuracy of data description, the number of free parameters and generality, the three-tissue model with three free rate constants was the model of choice for describing [¹¹C]palmitate kinetics in terms of oxidation and fatty acid accumulation in the cell. (orig.)
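The plasma input compartment models compared here are systems of linear ODEs driven by an arterial input function. A generic one-tissue sketch (not the study's three-tissue model; the rate constants and input function below are invented):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical arterial input function (plasma concentration over time).
def Ca(t):
    return t * np.exp(-t / 2.0)  # simple gamma-like bolus shape

def one_tissue(t, C, K1, k2):
    # Generic single-tissue compartment model: dCt/dt = K1*Ca(t) - k2*Ct
    return [K1 * Ca(t) - k2 * C[0]]

K1, k2 = 0.6, 0.25  # assumed rate constants (1/min)
sol = solve_ivp(one_tissue, (0.0, 30.0), [0.0], args=(K1, k2),
                dense_output=True, rtol=1e-8, atol=1e-10)
t = np.linspace(0.0, 30.0, 7)
print(np.round(sol.sol(t)[0], 4))
```

The tissue curve rises while tracer is delivered and then washes out at rate k2, which is the qualitative behaviour kinetic fitting exploits.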

  2. A novel substance flow analysis model for analysing multi-year phosphorus flow at the regional scale.

    Science.gov (United States)

    Chowdhury, Rubel Biswas; Moore, Graham A; Weatherley, Anthony J; Arora, Meenakshi

    2016-12-01

    Achieving sustainable management of phosphorus (P) is crucial for both global food security and global environmental protection. In order to formulate informed policy measures to overcome existing barriers to achieving sustainable P management, there is a need for a sound understanding of the nature and magnitude of P flow through various systems at different geographical and temporal scales. So far, there is a limited understanding of the nature and magnitude of P flow over multiple years at the regional scale. In this study, we have developed a novel substance flow analysis (SFA) model in the MATLAB/Simulink® software platform that can be effectively utilized to analyse the nature and magnitude of multi-year P flow at the regional scale. The model is inclusive of all P flows and storage relating to all key systems, subsystems, processes or components, and the associated interactions of P flow, required to represent a typical P flow system at the regional scale. In an annual time step, this model can analyse P flow and storage over as many years as required at a time, and therefore can indicate the trends and changes in P flow and storage over many years, which is not offered by the existing regional-scale SFA models of P. The model is flexible enough to allow any modification or the inclusion of any degree of complexity, and therefore can be utilized for analysing P flow in any region around the world. The application of the model to the case of the Gippsland region, Australia, has revealed that the model generates essential information about the nature and magnitude of P flow at the regional scale which can be utilized for making improved management decisions towards attaining P sustainability. A systematic reliability check on the findings of the model application also indicates that the model produces reliable results.
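The annual time step of such an SFA model is essentially a mass balance iterated over years. A toy sketch for a single regional P stock (all flows and coefficients below are invented):

```python
# Toy annual phosphorus mass balance for a single regional soil stock, in the
# spirit of a multi-year SFA run (all flows and coefficients are invented).
def run_sfa(years, stock0, fertiliser, crop_offtake, runoff_frac):
    stock, history = stock0, []
    for _ in range(years):
        runoff = runoff_frac * stock       # loss proportional to stored P
        stock += fertiliser - crop_offtake - runoff
        history.append(round(stock, 2))
    return history

hist = run_sfa(5, stock0=1000.0, fertiliser=30.0,
               crop_offtake=20.0, runoff_frac=0.005)
print(hist)
```

A real SFA model tracks many such stocks and the flows between them, but each follows this same stock-update pattern per time step.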

  3. Comparative study analysing women's childbirth satisfaction and obstetric outcomes across two different models of maternity care.

    Science.gov (United States)

    Conesa Ferrer, Ma Belén; Canteras Jordana, Manuel; Ballesteros Meseguer, Carmen; Carrillo García, César; Martínez Roche, M Emilia

    2016-08-26

    To describe the differences in obstetrical results and women's childbirth satisfaction across 2 different models of maternity care (biomedical model and humanised birth). 2 university hospitals in south-eastern Spain from April to October 2013. A correlational descriptive study. A convenience sample of 406 women participated in the study, 204 of the biomedical model and 202 of the humanised model. The differences in obstetrical results were (biomedical model/humanised model): onset of labour (spontaneous 66/137, augmentation 70/1, p=0.0005), pain relief (epidural 172/132, no pain relief 9/40, p=0.0005), mode of delivery (normal vaginal 140/165, instrumental 48/23, p=0.004), length of labour (0-4 hours 69/93, >4 hours 133/108, p=0.011), condition of perineum (intact perineum or tear 94/178, episiotomy 100/24, p=0.0005). The total questionnaire score (100) gave a mean (M) of 78.33 and SD of 8.46 in the biomedical model of care and an M of 82.01 and SD of 7.97 in the humanised model of care (p=0.0005). In the analysis of the results per items, statistical differences were found in 8 of the 9 subscales. The highest scores were reached in the humanised model of maternity care. The humanised model of maternity care offers better obstetrical outcomes and women's satisfaction scores during the labour, birth and immediate postnatal period than does the biomedical model.
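The reported p-values can be reproduced from the raw counts given in the abstract; for instance, the onset-of-labour contrast with a chi-squared test of independence:

```python
from scipy.stats import chi2_contingency

# Re-checking one contrast from its raw counts: onset of labour,
# spontaneous 66 vs 137 and augmentation 70 vs 1
# (biomedical model vs humanised model).
table = [[66, 137],
         [70, 1]]
chi2, p, dof, expected = chi2_contingency(table)
print(dof, p < 0.0005)
```

With one degree of freedom and so lopsided a table, the association is highly significant, consistent with the reported p=0.0005.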

  5. A computationally efficient parallel Levenberg-Marquardt algorithm for highly parameterized inverse model analyses

    Science.gov (United States)

    Lin, Youzuo; O'Malley, Daniel; Vesselinov, Velimir V.

    2016-09-01

    Inverse modeling seeks model parameters given a set of observations. However, for practical problems, because the number of measurements is often large and the model parameters are also numerous, conventional methods for inverse modeling can be computationally expensive. We have developed a new, computationally efficient parallel Levenberg-Marquardt method for solving inverse modeling problems with a highly parameterized model space. Levenberg-Marquardt methods require the solution of a linear system of equations which can be prohibitively expensive to compute for moderate to large-scale problems. Our novel method projects the original linear problem down to a Krylov subspace such that the dimensionality of the problem can be significantly reduced. Furthermore, we store the Krylov subspace computed when using the first damping parameter and recycle the subspace for the subsequent damping parameters. The efficiency of our new inverse modeling algorithm is significantly improved by these computational techniques. We apply this new inverse modeling method to invert for random transmissivity fields in 2-D and a random hydraulic conductivity field in 3-D. Our algorithm is fast enough to solve for the distributed model parameters (transmissivity) in the model domain. The algorithm is coded in Julia and implemented in the MADS computational framework (http://mads.lanl.gov). By comparing with Levenberg-Marquardt methods using standard linear inversion techniques such as QR or SVD methods, our Levenberg-Marquardt method yields a speed-up ratio on the order of ~10¹ to ~10² in a multicore computational environment. Therefore, our new inverse modeling method is a powerful tool for characterizing subsurface heterogeneity for moderate to large-scale problems.
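The key computational trick, solving the damped least-squares subproblem in a Krylov subspace instead of factorizing the normal equations, can be sketched with scipy's LSQR solver on a toy linear problem (dimensions and data invented; the paper's implementation is in Julia/MADS):

```python
import numpy as np
from scipy.sparse.linalg import lsqr

# One damped Gauss-Newton (Levenberg-Marquardt) step computed with a Krylov
# solver: lsqr minimizes ||J d + r||^2 + damp^2 ||d||^2 without forming J^T J.
def lm_step(J, r, damp):
    return lsqr(J, -r, damp=damp)[0]

rng = np.random.default_rng(1)
n_obs, n_par = 500, 50
J = rng.normal(size=(n_obs, n_par))   # Jacobian of a linear toy problem
x_true = rng.normal(size=n_par)
r = -J @ x_true                       # residual evaluated at x = 0
step = lm_step(J, r, damp=1e-8)
print(np.allclose(step, x_true, atol=1e-3))
```

With negligible damping and a consistent linear problem, the step recovers the true parameters in one iteration; the Krylov solver never builds the dense normal-equations matrix.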

  6. Comparative Analyses of MIRT Models and Software (BMIRT and flexMIRT)

    Science.gov (United States)

    Yavuz, Guler; Hambleton, Ronald K.

    2017-01-01

    Application of MIRT modeling procedures is dependent on the quality of parameter estimates provided by the estimation software and techniques used. This study investigated model parameter recovery of two popular MIRT packages, BMIRT and flexMIRT, under some common measurement conditions. These packages were specifically selected to investigate the…

  7. The Cannon 2: A data-driven model of stellar spectra for detailed chemical abundance analyses

    CERN Document Server

    Casey, Andrew R; Ness, Melissa; Rix, Hans-Walter; Ho, Anna Q Y; Gilmore, Gerry

    2016-01-01

    We have shown that data-driven models are effective for inferring physical attributes of stars (labels; Teff, logg, [M/H]) from spectra, even when the signal-to-noise ratio is low. Here we explore whether this is possible when the dimensionality of the label space is large (Teff, logg, and 15 abundances: C, N, O, Na, Mg, Al, Si, S, K, Ca, Ti, V, Mn, Fe, Ni) and the model is non-linear in its response to abundance and parameter changes. We adopt ideas from compressed sensing to limit overall model complexity while retaining model freedom. The model is trained with a set of 12,681 red-giant stars with high signal-to-noise spectroscopic observations and stellar parameters and abundances taken from the APOGEE Survey. We find that we can successfully train and use a model with 17 stellar labels. Validation shows that the model does a good job of inferring all 17 labels (typical abundance precision is 0.04 dex), even when we degrade the signal-to-noise by discarding ~50% of the observing time. The model dependencie...

  8. Analysing empowerment-oriented email consultation for parents : Development of the Guiding the Empowerment Process model

    NARCIS (Netherlands)

    Nieuwboer, C.C.; Fukkink, R.G.; Hermanns, J.M.A.

    2017-01-01

    Online consultation is increasingly offered by parenting practitioners, but it is not clear whether it is feasible to provide empowerment-oriented support in a single-session email consultation. Based on empowerment theory, we developed the Guiding the Empowerment Process model (GEP model) to evaluate…

  9. Transport of nutrients from land to sea: Global modeling approaches and uncertainty analyses

    NARCIS (Netherlands)

    Beusen, A.H.W.

    2014-01-01

    This thesis presents four examples of global models developed as part of the Integrated Model to Assess the Global Environment (IMAGE). They describe different components of global biogeochemical cycles of the nutrients nitrogen (N), phosphorus (P) and silicon (Si), with a focus on approaches to

  10. Modelling and analysing track cycling Omnium performances using statistical and machine learning techniques.

    Science.gov (United States)

    Ofoghi, Bahadorreza; Zeleznikow, John; Dwyer, Dan; Macmahon, Clare

    2013-01-01

    This article describes the utilisation of an unsupervised machine learning technique and statistical approaches (e.g., the Kolmogorov-Smirnov test) that assist cycling experts in the crucial decision-making processes for athlete selection, training, and strategic planning in the track cycling Omnium. The Omnium is a multi-event competition that will be included in the summer Olympic Games for the first time in 2012. Presently, selectors and cycling coaches make decisions based on experience and intuition. They rarely have access to objective data. We analysed both the old five-event (first raced internationally in 2007) and new six-event (first raced internationally in 2011) Omniums and found that the addition of the elimination race component to the Omnium has, contrary to expectations, not favoured track endurance riders. We analysed the Omnium data and also determined the inter-relationships between different individual events as well as between those events and the final standings of riders. In further analysis, we found that there is no maximum ranking (poorest performance) in each individual event that riders can afford whilst still winning a medal. We also found the required times for riders to finish the timed components that are necessary for medal winning. The results of this study consider the scoring system of the Omnium and inform decision-making toward successful participation in future major Omnium competitions.
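The two-sample Kolmogorov-Smirnov test mentioned above compares two empirical distributions without distributional assumptions; a sketch on simulated rider rankings (not the study's data):

```python
import numpy as np
from scipy.stats import ks_2samp

# Two-sample Kolmogorov-Smirnov test of the kind used to compare performance
# distributions between rider groups (rankings simulated, not the study's data).
rng = np.random.default_rng(42)
sprinters = rng.normal(10.0, 3.0, 80)   # hypothetical event rankings
endurance = rng.normal(16.0, 3.0, 80)
stat, p = ks_2samp(sprinters, endurance)
print(p < 0.001)
```

A small p-value indicates the two groups' ranking distributions differ, the kind of evidence used above to conclude the elimination race did not favour endurance riders.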

  11. Mathematical modeling of materially nonlinear problems in structural analyses, Part II: Application in contemporary software

    Directory of Open Access Journals (Sweden)

    Bonić Zoran

    2010-01-01

    Full Text Available The paper presents the application of nonlinear material models in the software package Ansys. The development of the model theory was presented in Part I (theoretical foundations) of this paper on the mathematical modeling of materially nonlinear problems in structural analysis; here, the incremental-iterative procedure used by this package for solving materially nonlinear problems is described, and an example of modeling a spread footing using the bilinear kinematic and Drucker-Prager models is given. A comparative analysis of the results obtained by this modeling and the experimental research of the author was made. The load level corresponding to the onset of plastic deformation was noted, the development of deformations with increasing load was followed, and the distribution of dilatation in the footing was observed. Comparison of calculated and measured values of reinforcement dilatation shows very good agreement.

  12. Crowd-structure interaction in footbridges: Modelling, application to a real case-study and sensitivity analyses

    Science.gov (United States)

    Bruno, Luca; Venuti, Fiammetta

    2009-06-01

    A mathematical and computational model used to simulate crowd-structure interaction in lively footbridges is presented in this work. The model is based on the mathematical and numerical decomposition of the coupled multiphysical nonlinear system into two interacting subsystems. The model was conceived to simulate the synchronous lateral excitation phenomenon caused by pedestrians walking on footbridges. The model was first applied to simulate a crowd event on an actual footbridge, the T-bridge in Japan. Three sensitivity analyses were then performed on the same benchmark to evaluate the properties of the model. The simulation results show good agreement with the experimental data found in literature and the model could be considered a useful tool for designers and engineers in the different phases of footbridge design.

  13. Stochastic Spatio-Temporal Models for Analysing NDVI Distribution of GIMMS NDVI3g Images

    Directory of Open Access Journals (Sweden)

    Ana F. Militino

    2017-01-01

    Full Text Available The normalized difference vegetation index (NDVI) is an important indicator for evaluating vegetation change, monitoring land surface fluxes or predicting crop models. Due to the great availability of images provided by different satellites in recent years, much attention has been devoted to testing trend changes with time series of individual NDVI pixels. However, the spatial dependence inherent in these data is usually lost unless global scales are analyzed. In this paper, we propose incorporating both the spatial and the temporal dependence among pixels, using a stochastic spatio-temporal model to estimate the NDVI distribution thoroughly. The stochastic model is a state-space model that uses meteorological data from the Climatic Research Unit (CRU TS3.10) as auxiliary information. The model is estimated with the Expectation-Maximization (EM) algorithm. The result is a set of smoothed images providing an overall analysis of the NDVI distribution across space and time, where fluctuations generated by atmospheric disturbances, fire events, land-use/cover changes or engineering problems in image capture are treated as random fluctuations. The illustration is carried out with the third generation of NDVI images, termed NDVI3g, of the Global Inventory Modeling and Mapping Studies (GIMMS) in continental Spain. The data are taken in bimonthly periods from January 2011 to December 2013, but the model can be applied to many other variables, countries or regions with different resolutions.
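For reference, the NDVI underlying these images is a simple per-pixel band ratio, NDVI = (NIR - Red)/(NIR + Red); a small numpy example with made-up reflectances:

```python
import numpy as np

# NDVI is computed per pixel from near-infrared and red reflectance:
# NDVI = (NIR - Red) / (NIR + Red). Reflectance values below are made up.
nir = np.array([[0.50, 0.60],
                [0.40, 0.55]])
red = np.array([[0.10, 0.20],
                [0.30, 0.05]])
ndvi = (nir - red) / (nir + red)
print(np.round(ndvi, 3))
```

Values range from -1 to 1, with dense green vegetation near the upper end; the state-space model above smooths such per-pixel values across space and time.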

  14. Neural Network-Based Model for Landslide Susceptibility and Soil Longitudinal Profile Analyses

    DEFF Research Database (Denmark)

    Farrokhzad, F.; Barari, Amin; Choobbasti, A. J.

    2011-01-01

    The purpose of this study was to create an empirical model for assessing the landslide risk potential at Savadkouh Azad University, which is located in the rural surroundings of Savadkouh, about 5 km from the city of Pol-Sefid in northern Iran. The soil longitudinal profile of the city of Babol, located 25 km from the Caspian Sea, also was predicted with an artificial neural network (ANN). A multilayer perceptron neural network model was applied to the landslide area and was used to analyze specific elements in the study area that contributed to previous landsliding events. The ANN models were … studies in landslide susceptibility zonation.
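The multilayer perceptron itself is beyond a few lines, but its building block, a single sigmoid neuron trained by gradient descent on toy susceptibility features, can be sketched (all data synthetic; a real MLP adds hidden layers):

```python
import numpy as np

# A single sigmoid neuron (logistic unit) trained by batch gradient descent
# on toy slope/moisture features; labels are a synthetic "landslide" rule.
rng = np.random.default_rng(7)
X = rng.uniform(0.0, 1.0, (200, 2))                       # [slope, moisture]
y = (0.8 * X[:, 0] + 0.6 * X[:, 1] > 0.7).astype(float)   # toy label rule

w, b, lr = np.zeros(2), 0.0, 0.5
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # sigmoid activation
    w -= lr * (X.T @ (p - y)) / len(y)       # logistic-loss gradient steps
    b -= lr * np.mean(p - y)

acc = np.mean(((1.0 / (1.0 + np.exp(-(X @ w + b)))) > 0.5) == y)
print(acc > 0.85)
```

Stacking layers of such units and training by backpropagation yields the MLP used for susceptibility zonation.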

  15. Model-Based Fault Diagnosis: Performing Root Cause and Impact Analyses in Real Time

    Science.gov (United States)

    Figueroa, Jorge F.; Walker, Mark G.; Kapadia, Ravi; Morris, Jonathan

    2012-01-01

    Generic, object-oriented fault models, built according to causal-directed graph theory, have been integrated into an overall software architecture dedicated to monitoring and predicting the health of mission- critical systems. Processing over the generic fault models is triggered by event detection logic that is defined according to the specific functional requirements of the system and its components. Once triggered, the fault models provide an automated way for performing both upstream root cause analysis (RCA), and for predicting downstream effects or impact analysis. The methodology has been applied to integrated system health management (ISHM) implementations at NASA SSC's Rocket Engine Test Stands (RETS).

  16. Semantic Databases (Bases de Datos Semánticas)

    Directory of Open Access Journals (Sweden)

    Irving Caro Fierros

    2016-12-01

    Full Text Available In 1992, when Tim Berners-Lee released the first version of the Web, his vision for the future was to incorporate metadata with semantic information into Web pages. It was precisely at the beginning of this century that the sudden rise of the Semantic Web began in academia and on the Internet. The semantic data model is defined as a conceptual model that allows the meaning of data to be defined through its relationships with other data. In this sense, the format in which data are represented is fundamental for providing information of a semantic character. The technology focused on semantic databases is currently at an inflection point, moving from the academic and research sphere to being a complete commercial option. This article analyses the concept of a semantic database. A case study is also presented that illustrates basic operations involving the management of the information stored in this type of database.

  17. Statistical Modelling of Synaptic Vesicles Distribution and Analysing their Physical Characteristics

    DEFF Research Database (Denmark)

    Khanmohammadi, Mahdieh

    This Ph.D. thesis deals with mathematical and statistical modeling of synaptic vesicle distribution, shape, orientation and interactions. The first major part of the thesis treats the problem of determining the effect of stress on synaptic vesicle distribution and interactions. Serial section transmission electron microscopy is used to acquire images from two experimental groups of rats: 1) rats subjected to a behavioral model of stress and 2) rats subjected to sham stress as the control group. The synaptic vesicle distribution and interactions are modeled by employing a point process approach. The model is able to correctly separate the two experimental groups. Two different approaches to estimating the thickness of each section of specimen being imaged are introduced: the first uses the Darboux frame and Cartan matrix to measure the isophote curvature, and the second is based…

  18. Genomic analyses with biofilter 2.0: knowledge driven filtering, annotation, and model development

    National Research Council Canada - National Science Library

    Pendergrass, Sarah A; Frase, Alex; Wallace, John; Wolfe, Daniel; Katiyar, Neerja; Moore, Carrie; Ritchie, Marylyn D

    2013-01-01

    .... We have now extensively revised and updated the multi-purpose software tool Biofilter that allows researchers to annotate and/or filter data as well as generate gene-gene interaction models based...

  19. Wave modelling for the North Indian Ocean using MSMR analysed winds

    Digital Repository Service at National Institute of Oceanography (India)

    Vethamony, P.; Sudheesh, K.; Rupali, S.P.; Babu, M.T.; Jayakumar, S.; Saran, A.K.; Basu, S.K.; Kumar, R.; Sarkar, A.

    NCMRWF (National Centre for Medium Range Weather Forecast) winds assimilated with MSMR (Multi-channel Scanning Microwave Radiometer) winds are used as input to MIKE21 Offshore Spectral Wave model (OSW) which takes into account wind induced wave...

  20. The strut-and-tie models in reinforced concrete structures analysed by a numerical technique

    Directory of Open Access Journals (Sweden)

    V. S. Almeida

    Full Text Available The strut-and-tie models are appropriate for the design and detailing of certain types of structural elements in reinforced concrete and of regions with stress concentrations, called "D" regions. This model is a good representation of the structural behavior and mechanism. The numerical techniques presented herein are used to identify stress regions which represent the strut-and-tie elements and to quantify their respective internal forces. Elastic linear plane problems are analyzed using strut-and-tie models by coupling the classical evolutionary structural optimization, ESO, and a new variant called SESO (Smoothing ESO) in a finite element formulation. The SESO method is based on a procedure of gradual reduction of the stiffness contribution of inefficient elements at lower stress until they no longer have any influence. Optimal topologies of strut-and-tie models are presented for several instances, in good agreement with other pioneering works, allowing the design of reinforcement for structural elements.
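The SESO idea of gradually reducing, rather than deleting, the stiffness contribution of low-stress elements can be caricatured in a few lines (stress values invented; no finite element analysis is performed):

```python
import numpy as np

# Core idea of SESO (Smoothing ESO): low-stress elements are not removed
# outright, their stiffness contribution is reduced gradually until it no
# longer has any influence. Stress values are invented; no FE analysis here.
rng = np.random.default_rng(5)
stress = rng.uniform(0.0, 100.0, 20)   # element stress measure
weight = np.ones(20)                   # stiffness multipliers, 1 = intact

for step in range(10):
    threshold = 10.0 + 5.0 * step      # evolving rejection criterion
    weight[stress < threshold] *= 0.5  # smooth reduction, not hard removal
weight[weight < 1e-2] = 0.0            # negligible contribution: removed

print(bool(np.all(weight[stress >= 55.0] == 1.0)))
```

In the real method the weights scale element stiffness matrices inside a finite element loop, and the surviving high-stress regions trace out the strut-and-tie topology.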

  1. The Job Demands-Resources Model: A Motivational Analysis from the Perspective of Self-Determination Theory

    OpenAIRE

    2013-01-01

    This article details the doctoral dissertation of Anja Van Broeck (2010), which examines employee motivation from two recent perspectives: the job demands-resources model (JD-R model) and the self-determination theory (SDT). The article primarily highlights how the studies of this dissertation add to the JD-R model by relying on SDT. First, a distinction is made between two types of job demands: job hindrances and job challenges. Second, motivation is shown to represent the underlying mechanism ...

  2. On the emancipation of PLS-SEM : A commentary on Rigdon

    NARCIS (Netherlands)

    Sarstedt, Marko; Ringle, Christian M.; Henseler, Jörg; Hair, Joseph F.

    2014-01-01

    Rigdon's (2012) thoughtful article argues that PLS-SEM should free itself from CB-SEM. It should renounce all mechanisms, frameworks, and jargon associated with factor models entirely. In this comment, we shed further light on two subject areas on which Rigdon (2012) touches in his discussion of

  4. Optimization of extraction procedures for ecotoxicity analyses: Use of TNT contaminated soil as a model

    Energy Technology Data Exchange (ETDEWEB)

    Sunahara, G.I.; Renoux, A.Y.; Dodard, S.; Paquet, L.; Hawari, J. [BRI, Montreal, Quebec (Canada); Ampleman, G.; Lavigne, J.; Thiboutot, S. [DREV, Courcelette, Quebec (Canada)

    1995-12-31

    The environmental impact of energetic substances (TNT, RDX, GAP, NC) in soil is being examined using ecotoxicity bioassays. An extraction method was characterized to optimize bioassay assessment of TNT toxicity in different soil types. Using the Microtox™ (Photobacterium phosphoreum) assay and non-extracted samples, TNT was most acutely toxic (IC₅₀ = 1-9 ppm), followed by RDX and GAP; NC did not show obvious toxicity (probably due to solubility limitations). TNT (in 0.25% DMSO) yielded an IC₅₀ of 0.98 ± 0.10 (SD) ppm. The 96-h EC₅₀ (Selenastrum capricornutum growth inhibition) of TNT (1.1 ppm) was higher than that of GAP and RDX; NC was not apparently toxic (probably due to solubility limitations). Soil samples (sand or a silt-sand mix) were spiked with either 2,000 or 20,000 mg TNT/kg soil, and were adjusted to 20% moisture. Samples were later mixed with acetonitrile, sonicated, and then treated with CaCl₂ before filtration, HPLC and ecotoxicity analyses. Results indicated that the recovery of TNT from soil (97.51% ± 2.78) was independent of the type of soil or moisture content; CaCl₂ interfered with TNT toxicity and acetonitrile extracts could not be used directly for algal testing. When TNT extracts were diluted to fixed concentrations, similar TNT-induced ecotoxicities were generally observed, suggesting that, apart from the expected effects of TNT concentrations in the soil, the soil texture and moisture effects were minimal. The extraction procedure permits HPLC analyses as well as ecotoxicity testing and minimizes secondary soil matrix effects. Studies will be conducted to study the toxic effects of other energetic substances present in soil using this approach.

  5. Analysing stratified medicine business models and value systems: innovation-regulation interactions.

    Science.gov (United States)

    Mittra, James; Tait, Joyce

    2012-09-15

    Stratified medicine offers both opportunities and challenges to the conventional business models that drive pharmaceutical R&D. Given the increasingly unsustainable blockbuster model of drug development, due in part to maturing product pipelines, alongside increasing demands from regulators, healthcare providers and patients for higher standards of safety, efficacy and cost-effectiveness of new therapies, stratified medicine promises a range of benefits to pharmaceutical and diagnostic firms as well as healthcare providers and patients. However, the transition from 'blockbusters' to what might now be termed 'niche-busters' will require the adoption of new, innovative business models, the identification of different and perhaps novel types of value along the R&D pathway, and a smarter approach to regulation to facilitate innovation in this area. In this paper we apply the Innogen Centre's interdisciplinary ALSIS methodology, which we have developed for the analysis of life science innovation systems in contexts where the value creation process is lengthy, expensive and highly uncertain, to this emerging field of stratified medicine. In doing so, we consider the complex collaboration, timing, coordination and regulatory interactions that shape business models, value chains and value systems relevant to stratified medicine. More specifically, we explore in some depth two convergence models for co-development of a therapy and diagnostic before market authorisation, highlighting the regulatory requirements and policy initiatives within the broader value system environment that have a key role in determining the probable success and sustainability of these models.

  6. Analysing the Costs of Integrated Care: A Case on Model Selection for Chronic Care Purposes

    Directory of Open Access Journals (Sweden)

    Marc Carreras

    2016-08-01

    Full Text Available Background: The objective of this study is to investigate whether the algorithm proposed by Manning and Mullahy, a consolidated health economics procedure, can also be used to estimate individual costs for different groups of healthcare services in the context of integrated care. Methods: A cross-sectional study focused on the population of the Baix Empordà (Catalonia, Spain) for the year 2012 (N = 92,498 individuals). A set of individual cost models as a function of sex, age and morbidity burden were adjusted, and individual healthcare costs were calculated using a retrospective full-costing system. The individual morbidity burden was inferred using the Clinical Risk Groups (CRG) patient classification system. Results: Depending on the characteristics of the data, and according to the algorithm criteria, the choice of model was a linear model on the log of costs or a generalized linear model with a log link. We checked for goodness of fit, accuracy, linear structure and heteroscedasticity for the models obtained. Conclusion: The proposed algorithm identified a set of suitable cost models for the distinct groups of services integrated care entails. The individual morbidity burden was found to be indispensable when allocating appropriate resources to targeted individuals.
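The choice between a linear model on log costs and a GLM with a log link matters because naively exponentiating log-scale predictions under-predicts arithmetic-mean costs; Duan's smearing factor is the standard correction. A simulated illustration (lognormal "costs", not the study's data):

```python
import numpy as np

# Retransformation bias in log-cost models: exp(mean of log costs) understates
# the arithmetic mean of skewed costs; Duan's smearing factor corrects it.
rng = np.random.default_rng(3)
log_cost = rng.normal(7.0, 1.0, 100_000)   # simulated log-scale "costs"
cost = np.exp(log_cost)

naive = np.exp(log_cost.mean())            # exponentiated log-scale mean
smear = naive * np.mean(np.exp(log_cost - log_cost.mean()))  # Duan correction
true_mean = cost.mean()
print(naive < true_mean, abs(smear - true_mean) / true_mean < 1e-6)
```

By Jensen's inequality the naive estimate always falls below the true mean for skewed costs, which is one reason the algorithm above tests carefully before settling on log-OLS versus a log-link GLM.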

  7. A Study on Micro-morphology and Elemental Analyses of Particulate Matter Collected at Exhaust Pipe of Automobiles and Coal Burning Chimney by SEM-EDX

    Institute of Scientific and Technical Information of China (English)

    陈满荣; 张卫国; 俞立中

    2016-01-01

    Scanning electron microscopy with energy-dispersive X-ray spectroscopy (SEM-EDX) was used to analyse the micro-morphology and chemical elements of particulate samples collected at the ends of automobile exhaust pipes and the upper parts of coal-burning chimneys in the urban area of Shuozhou, Shanxi Province. The results show that individual exhaust-pipe particles are spherical or clustered, with aggregates appearing as cotton-flock-like or layered assemblies, whereas chimney particles occur as irregular flaky single grains and as tremella-like or layered aggregates. The EDX characteristic peaks show the main elements of both particle types to be O, Si, Al, Fe, Pb, Na, Mg, Ca, P, Ti and Cd. The study indicates that in vehicle-exhaust particles O, Si and Al have the highest average contents, together accounting for 76.19% of the measured elemental content, with Pb ranking fourth. In coal-chimney particles O has the highest average content, followed by S, Cl, Si and Pb, the average Pb content being higher than that in the vehicle-exhaust particles. Magnetic susceptibility measurements of the samples provide technical guidance for distinguishing these two types of particles and for the prevention and control of atmospheric particulate matter (PM) pollution.

  8. An LP-model to analyse economic and ecological sustainability on Dutch dairy farms: model presentation and application for experimental farm "de Marke"

    NARCIS (Netherlands)

    Calker, van K.J.; Berentsen, P.B.M.; Boer, de I.J.M.; Giesen, G.W.J.; Huirne, R.B.M.

    2004-01-01

    Farm level modelling can be used to determine how farm management adjustments and environmental policy affect different sustainability indicators. In this paper, indicators were included in a dairy farm LP (linear programming) model to analyse the effects of environmental policy and management

  9. Monte Carlo modeling of Standard Model multi-boson production processes for $\\sqrt{s} = 13$ TeV ATLAS analyses

    CERN Document Server

    Li, Shu; The ATLAS collaboration

    2017-01-01

    Proceeding for the poster presentation at LHCP2017, Shanghai, China on the topic of "Monte Carlo modeling of Standard Model multi-boson production processes for $\\sqrt{s} = 13$ TeV ATLAS analyses" (ATL-PHYS-SLIDE-2017-265 https://cds.cern.ch/record/2265389) Deadline: 01/09/2017

  10. Microstructure Analyses of NA-Nanodiamond Particles

    Science.gov (United States)

    2016-08-01

    Report excerpt (front matter and text fragments): Scanning Electron Microscope (SEM) Analysis (Objective, Experimental Procedure, Discussion of Results) and Transmission Electron Microscopy Results and Discussion. A ...420 electron microscope at 120 kV voltage was used for the TEM analyses. Objective of the SEM analysis: examine the morphology and elemental chemistry of detonated nanodiamonds (DND

  11. Seafloor earthquake measurement system, SEMS IV

    Energy Technology Data Exchange (ETDEWEB)

    Platzbecker, M.R.; Ehasz, J.P.; Franco, R.J.

    1997-07-01

    Staff of the Telemetry Technology Development Department (2664) have, in support of the U.S. Interior Department Minerals Management Service (MMS), developed and deployed the Seafloor Earthquake Measurement System IV (SEMS IV). The result of this development project is a series of three fully operational seafloor seismic monitoring systems located at offshore platforms: Eureka, Grace, and Irene. The instrument probes are embedded three to seven feet into the seafloor and hardwired to seismic data recorders installed topside at the offshore platforms. The probes and underwater cables were designed to survive the seafloor environment with an operational life of five years. The units have been operational for two years and have produced recordings of several minor earthquakes in that time. Sandia Labs will transfer operation of SEMS IV to MMS contractors in the coming months. 29 figs., 25 tabs.

  12. SEM investigation of heart tissue samples

    Energy Technology Data Exchange (ETDEWEB)

    Saunders, R; Amoroso, M [Physics Department, University of the West Indies, St. Augustine, Trinidad and Tobago, West Indies (Trinidad and Tobago)

    2010-07-01

    We used the scanning electron microscope to examine the cardiac tissue of a cow (Bos taurus), a pig (Sus scrofa), and a human (Homo sapiens). 1 mm{sup 3} blocks of left ventricular tissue were prepared for SEM scanning by fixing in 96% ethanol followed by critical point drying (cryofixation), then sputter-coating with gold. The typical ridged structure of the myofibrils was observed for all the species. In addition, crystal-like structures were found in one of the samples of pig heart tissue. These structures were investigated further using an EDVAC x-ray analysis attachment to the SEM. Elemental x-ray analysis showed that the highest peaks occurred for gold, followed by carbon, oxygen, magnesium and potassium. As the samples were coated with gold for conductivity, this highest peak is expected. The much lower peaks at carbon, oxygen, magnesium and potassium suggest that a crystallized salt such as a carbonate was present in the tissue before sacrifice.

  13. AMME: an Automatic Mental Model Evaluation to analyse user behaviour traced in a finite, discrete state space.

    Science.gov (United States)

    Rauterberg, M

    1993-11-01

    To support the human factors engineer in designing a good user interface, a method has been developed to analyse the empirical data of the interactive user behaviour traced in a finite discrete state space. The sequences of actions produced by the user contain valuable information about the mental model of this user, the individual problem solution strategies for a given task and the hierarchical structure of the task-subtasks relationships. The presented method, AMME, can analyse the action sequences and automatically generate (1) a net description of the task dependent model of the user, (2) a complete state transition matrix, and (3) various quantitative measures of the user's task solving process. The behavioural complexity of task-solving processes carried out by novices has been found to be significantly larger than the complexity of task-solving processes carried out by experts.
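The second of AMME's outputs, a state transition matrix, can be obtained from logged action sequences by counting consecutive state pairs. The following is a minimal illustrative sketch, not AMME itself; the state names and logs are hypothetical:

```python
def transition_matrix(sequences, states):
    """Count observed state-to-state transitions in user action logs."""
    idx = {s: i for i, s in enumerate(states)}
    counts = [[0] * len(states) for _ in states]
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):     # consecutive action pairs
            counts[idx[a]][idx[b]] += 1
    return counts

# two hypothetical user sessions in a finite, discrete state space
logs = [["menu", "open", "edit", "save"],
        ["menu", "open", "close"]]
states = ["menu", "open", "edit", "save", "close"]
M = transition_matrix(logs, states)
```

Row-normalising such a matrix gives empirical transition probabilities, from which quantitative measures of the task-solving process (such as behavioural complexity) can be derived.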

  14. Model-driven meta-analyses for informing health care: a diabetes meta-analysis as an exemplar.

    Science.gov (United States)

    Brown, Sharon A; Becker, Betsy Jane; García, Alexandra A; Brown, Adama; Ramírez, Gilbert

    2015-04-01

    A relatively novel type of meta-analysis, a model-driven meta-analysis, involves the quantitative synthesis of descriptive, correlational data and is useful for identifying key predictors of health outcomes and informing clinical guidelines. Few such meta-analyses have been conducted and thus, large bodies of research remain unsynthesized and uninterpreted for application in health care. We describe the unique challenges of conducting a model-driven meta-analysis, focusing primarily on issues related to locating a sample of published and unpublished primary studies, extracting and verifying descriptive and correlational data, and conducting analyses. A current meta-analysis of the research on predictors of key health outcomes in diabetes is used to illustrate our main points.

  15. Multicollinearity in prognostic factor analyses using the EORTC QLQ-C30: identification and impact on model selection.

    Science.gov (United States)

    Van Steen, Kristel; Curran, Desmond; Kramer, Jocelyn; Molenberghs, Geert; Van Vreckem, Ann; Bottomley, Andrew; Sylvester, Richard

    2002-12-30

    Clinical and quality of life (QL) variables from an EORTC clinical trial of first line chemotherapy in advanced breast cancer were used in a prognostic factor analysis of survival and response to chemotherapy. For response, different final multivariate models were obtained from forward and backward selection methods, suggesting a disconcerting instability. Quality of life was measured using the EORTC QLQ-C30 questionnaire completed by patients. Subscales on the questionnaire are known to be highly correlated, and therefore it was hypothesized that multicollinearity contributed to model instability. A correlation matrix indicated that global QL was highly correlated with 7 out of 11 variables. In a first attempt to explore multicollinearity, we used global QL as dependent variable in a regression model with other QL subscales as predictors. Afterwards, standard diagnostic tests for multicollinearity were performed. An exploratory principal components analysis and factor analysis of the QL subscales identified at most three important components and indicated that inclusion of global QL made minimal difference to the loadings on each component, suggesting that it is redundant in the model. In a second approach, we advocate a bootstrap technique to assess the stability of the models. Based on these analyses and since global QL exacerbates problems of multicollinearity, we therefore recommend that global QL be excluded from prognostic factor analyses using the QLQ-C30. The prognostic factor analysis was rerun without global QL in the model, and selected the same significant prognostic factors as before.
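Among the standard multicollinearity diagnostics mentioned is the variance inflation factor (VIF), obtained by regressing each predictor on all the others. The following is a minimal numpy sketch with simulated, nearly collinear predictors; it is illustrative, not the EORTC analysis itself:

```python
import numpy as np

def vif(X):
    """Variance inflation factor for each column of predictor matrix X."""
    X = np.asarray(X, float)
    out = []
    for j in range(X.shape[1]):
        y = X[:, j]
        Z = np.delete(X, j, axis=1)
        Z = np.column_stack([np.ones(len(y)), Z])     # add intercept
        beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
        r2 = 1.0 - ((y - Z @ beta) ** 2).sum() / ((y - y.mean()) ** 2).sum()
        out.append(1.0 / (1.0 - r2))                  # VIF_j = 1 / (1 - R_j^2)
    return np.array(out)

rng = np.random.default_rng(0)
a = rng.normal(size=500)
b = a + rng.normal(scale=0.1, size=500)   # nearly collinear with a
c = rng.normal(size=500)                  # independent predictor
vifs = vif(np.column_stack([a, b, c]))
```

VIFs well above 10 for the first two columns flag the same kind of redundancy that led the authors to drop global QL from the model.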

  16. Analysing improvements to on-street public transport systems: a mesoscopic model approach

    DEFF Research Database (Denmark)

    Ingvardson, Jesper Bláfoss; Kornerup Jensen, Jonas; Nielsen, Otto Anker

    2017-01-01

    Light rail transit and bus rapid transit have been shown to be efficient and cost-effective in improving public transport systems in cities around the world. As these systems comprise various elements which can be tailored to any given setting, e.g. pre-board fare collection, holding strategies and other advanced public transport systems (APTS), the attractiveness of such systems depends heavily on their implementation. In the early planning stage it is advantageous to deploy simple and transparent models to evaluate possible ways of implementation. For this purpose, the present study develops a mesoscopic model which makes it possible to evaluate public transport operations in detail, including dwell times, intelligent traffic signal timings and holding strategies, while modelling impacts from other traffic using statistical distributional data, thereby ensuring simplicity in use and fast...

  17. Latent Variable Modelling and Item Response Theory Analyses in Marketing Research

    Directory of Open Access Journals (Sweden)

    Brzezińska Justyna

    2016-12-01

    Full Text Available Item Response Theory (IRT) is a modern statistical method using latent variables designed to model the interaction between a subject’s ability and the item-level stimuli (difficulty, guessing). Item responses are treated as the outcome (dependent) variables, and the examinee’s ability and the items’ characteristics are the latent predictor (independent) variables. IRT models the relationship between a respondent’s trait (ability, attitude) and the pattern of item responses. Thus, the estimation of individual latent traits can differ even for two individuals with the same total scores. IRT scores can yield additional benefits, and this is discussed in detail. In this paper, theory and an application in R, using packages designed for IRT modelling, are presented.
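Under the Rasch model, the simplest IRT model, the probability of endorsing an item depends only on the difference between person ability θ and item difficulty b. The paper works in R; the following is an illustrative Python equivalent of the response function:

```python
import math

def rasch_p(theta, b):
    """Rasch model: probability of a correct/endorsed response for a
    person with ability theta on an item with difficulty b."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))
```

When θ = b the probability is exactly 0.5, and it increases monotonically with ability, which is why two respondents with the same total score can still receive different latent-trait estimates once item difficulties differ.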

  18. Viewing Integrated-Circuit Interconnections By SEM

    Science.gov (United States)

    Lawton, Russel A.; Gauldin, Robert E.; Ruiz, Ronald P.

    1990-01-01

    Back-scattering of energetic electrons reveals hidden metal layers. Experiment shows that with suitable operating adjustments, scanning electron microscopy (SEM) can be used to look for defects in aluminum interconnections in integrated circuits. Enables monitoring, in situ, of changes in defects caused by changes in temperature. Gives truer picture of defects, as etching can change stress field of metal-and-passivation pattern, causing changes in defects.

  19. Modeling human papillomavirus and cervical cancer in the United States for analyses of screening and vaccination

    Directory of Open Access Journals (Sweden)

    Ortendahl Jesse

    2007-10-01

    Full Text Available Abstract Background To provide quantitative insight into current U.S. policy choices for cervical cancer prevention, we developed a model of human papillomavirus (HPV) and cervical cancer, explicitly incorporating uncertainty about the natural history of disease. Methods We developed a stochastic microsimulation of cervical cancer that distinguishes different HPV types by their incidence, clearance, persistence, and progression. Input parameter sets were sampled randomly from uniform distributions, and simulations undertaken with each set. Through systematic reviews and formal data synthesis, we established multiple epidemiologic targets for model calibration, including age-specific prevalence of HPV by type, age-specific prevalence of cervical intraepithelial neoplasia (CIN), HPV type distribution within CIN and cancer, and age-specific cancer incidence. For each set of sampled input parameters, likelihood-based goodness-of-fit (GOF) scores were computed based on comparisons between model-predicted outcomes and calibration targets. Using 50 randomly resampled, good-fitting parameter sets, we assessed the external consistency and face validity of the model, comparing predicted screening outcomes to independent data. To illustrate the advantage of this approach in reflecting parameter uncertainty, we used the 50 sets to project the distribution of health outcomes in U.S. women under different cervical cancer prevention strategies. Results Approximately 200 good-fitting parameter sets were identified from 1,000,000 simulated sets. Modeled screening outcomes were externally consistent with results from multiple independent data sources. Based on 50 good-fitting parameter sets, the expected reductions in lifetime risk of cancer with annual or biennial screening were 76% (range across 50 sets: 69–82%) and 69% (60–77%), respectively. The reduction from vaccination alone was 75%, although it ranged from 60% to 88%, reflecting considerable parameter
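The calibration loop described, sampling parameter sets from uniform priors and ranking them by likelihood-based goodness of fit against epidemiologic targets, can be sketched in a few lines. The target value, its uncertainty, and the toy one-parameter "model" below are illustrative assumptions, not values from the study:

```python
import numpy as np

rng = np.random.default_rng(1)
target = 0.30        # e.g. an observed age-specific HPV prevalence (hypothetical)
sigma = 0.02         # assumed uncertainty around the target

def gof(pred):
    """Gaussian log-likelihood of the calibration target given a prediction."""
    return -0.5 * ((pred - target) / sigma) ** 2

# sample natural-history parameters from a uniform prior
samples = rng.uniform(0.0, 1.0, size=10_000)
preds = samples                         # stand-in model: prediction = parameter
scores = gof(preds)
good = samples[np.argsort(scores)[-50:]]   # retain the 50 best-fitting sets
```

Projecting outcomes with all retained sets, rather than a single best fit, is what lets the authors report ranges (e.g. 69–82%) instead of point estimates.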

  20. Analysing the distribution of synaptic vesicles using a spatial point process model

    DEFF Research Database (Denmark)

    Khanmohammadi, Mahdieh; Waagepetersen, Rasmus; Nava, Nicoletta

    2014-01-01

    Stress can affect brain functionality in many ways. As synaptic vesicles have a major role in nervous signal transportation in synapses, their distribution in relation to the active zone is very important in studying neuron responses. We study the effect of stress on brain functionality... in the two groups. The spatial distributions are modelled using spatial point process models with an inhomogeneous conditional intensity and repulsive pairwise interactions. Our results verify the hypothesis that the two groups have different spatial distributions.

  1. Analysing Amazonian forest productivity using a new individual and trait-based model (TFS v.1)

    Science.gov (United States)

    Fyllas, N. M.; Gloor, E.; Mercado, L. M.; Sitch, S.; Quesada, C. A.; Domingues, T. F.; Galbraith, D. R.; Torre-Lezama, A.; Vilanova, E.; Ramírez-Angulo, H.; Higuchi, N.; Neill, D. A.; Silveira, M.; Ferreira, L.; Aymard C., G. A.; Malhi, Y.; Phillips, O. L.; Lloyd, J.

    2014-07-01

    Repeated long-term censuses have revealed large-scale spatial patterns in Amazon basin forest structure and dynamism, with some forests in the west of the basin having rates of aboveground biomass production and tree recruitment up to twice as high as those of forests in the east. Possible causes for this variation could be the climatic and edaphic gradients across the basin and/or the spatial distribution of tree species composition. To help understand the causes of this variation, a new individual-based model of tropical forest growth, designed to take full advantage of the forest census data available from the Amazonian Forest Inventory Network (RAINFOR), has been developed. The model allows for within-stand variations in tree size distribution and key functional traits, and for between-stand differences in climate and soil physical and chemical properties. It runs at the stand level with four functional traits - leaf dry mass per area (Ma), leaf nitrogen (NL) and phosphorus (PL) content, and wood density (DW) - varying from tree to tree in a way that replicates the observed continua found within each stand. We first applied the model to validate canopy-level water fluxes at three eddy covariance flux measurement sites; for all three sites the canopy-level water fluxes were adequately simulated. We then applied the model at seven plots where intensive measurements of carbon allocation are available. Tree-by-tree multi-annual growth rates generally agreed well with observations for small trees, but deviations were identified for larger trees. At the stand level, simulations at 40 plots were used to explore the influence of climate and soil nutrient availability on the gross (ΠG) and net (ΠN) primary production rates as well as the carbon use efficiency (CU). Simulated ΠG, ΠN and CU were not associated with temperature. On the other hand, all three measures of stand-level productivity were positively related to both mean annual precipitation and soil nutrient status

  2. Critical factors in SEM 3D stereo microscopy

    DEFF Research Database (Denmark)

    Marinello, F.; Bariano, P.; Savio, E.;

    2008-01-01

    This work addresses dimensional measurements performed with the scanning electron microscope (SEM) using 3D reconstruction of surface topography through stereo-photogrammetry. The paper presents both theoretical and experimental investigations of the effects of instrumental variables and measurement parameters on reconstruction accuracy. Investigations were performed on a novel sample, specifically developed and implemented for the tests. The description is based on the model function introduced by Piazzesi and adapted for eucentrically tilted stereopairs. Two main classes of influencing factors are recognized: the first is related to the measurement operation and the instrument set-up; the second concerns the quality of the scanned images and represents the major criticality in the application of SEMs for 3D characterizations.

  3. Integrated modeling/analyses of thermal-shock effects in SNS targets

    Energy Technology Data Exchange (ETDEWEB)

    Taleyarkhan, R.P.; Haines, J. [Oak Ridge National Lab., TN (United States)

    1996-06-01

    In a spallation neutron source (SNS), extremely rapid energy pulses are introduced in target materials such as mercury, lead, tungsten, uranium, etc. Shock phenomena in such systems may possibly lead to structural material damage beyond the design basis. As expected, the progression of shock waves and interaction with surrounding materials for liquid targets can be quite different from that in solid targets. The purpose of this paper is to describe ORNL's modeling framework for 'integrated' assessment of thermal-shock issues in liquid and solid target designs. This modeling framework is being developed based upon expertise developed from past reactor safety studies, especially those related to the Advanced Neutron Source (ANS) Project. Unlike previous separate-effects modeling approaches employed (for evaluating target behavior when subjected to thermal shocks), the present approach treats the overall problem in a coupled manner using state-of-the-art equations of state for materials of interest (viz., mercury, tungsten and uranium). That is, the modeling framework simultaneously accounts for localized (and distributed) compression pressure pulse generation due to transient heat deposition, the transport of this shock wave outwards, interaction with surrounding boundaries, feedback to mercury from structures, multi-dimensional reflection patterns, and stress-induced (possible) breakup or fracture.

  4. Using Latent Trait Measurement Models to Analyse Attitudinal Data: A Synthesis of Viewpoints.

    Science.gov (United States)

    Andrich, David

    A Rasch model for ordered response categories is derived and it is shown that it retains the key features of both the Thurstone and Likert approaches to studying attitude. Key features of the latter approaches are reviewed. Characteristics in common with the Thurstone approach are: statements are scaled with respect to their affective values;…

  5. An anisotropic numerical model for thermal hydraulic analyses: application to liquid metal flow in fuel assemblies

    Science.gov (United States)

    Vitillo, F.; Vitale Di Maio, D.; Galati, C.; Caruso, G.

    2015-11-01

    A CFD analysis has been carried out to study the thermal-hydraulic behavior of liquid metal coolant in a fuel assembly of triangular lattice. In order to obtain fast and accurate results, the isotropic two-equation RANS approach is often used in nuclear engineering applications. A different approach is provided by Non-Linear Eddy Viscosity Models (NLEVM), which try to take into account anisotropic effects by a nonlinear formulation of the Reynolds stress tensor. This approach is very promising, as it results in a very good numerical behavior and in a potentially better fluid flow description than classical isotropic models. An Anisotropic Shear Stress Transport (ASST) model, implemented into a commercial software, has been applied in previous studies, showing very trustful results for a large variety of flows and applications. In the paper, the ASST model has been used to perform an analysis of the fluid flow inside the fuel assembly of the ALFRED lead cooled fast reactor. Then, a comparison between the results of wall-resolved conjugated heat transfer computations and the results of a decoupled analysis using a suitable thermal wall-function previously implemented into the solver has been performed and presented.

  6. Cyclodextrin--piroxicam inclusion complexes: analyses by mass spectrometry and molecular modelling

    Science.gov (United States)

    Gallagher, Richard T.; Ball, Christopher P.; Gatehouse, Deborah R.; Gates, Paul J.; Lobell, Mario; Derrick, Peter J.

    1997-11-01

    Mass spectrometry has been used to investigate the nature of the non-covalent complexes formed between the anti-inflammatory drug piroxicam and α-, β- and γ-cyclodextrins. The energies of these complexes have been calculated by means of molecular modelling. There is a correlation between peak intensities in the mass spectra and the calculated energies.

  7. Survival data analyses in ecotoxicology: critical effect concentrations, methods and models. What should we use?

    Science.gov (United States)

    Forfait-Dubuc, Carole; Charles, Sandrine; Billoir, Elise; Delignette-Muller, Marie Laure

    2012-05-01

    In ecotoxicology, critical effect concentrations are the most common indicators used to quantitatively assess risks for species exposed to contaminants. Three types of critical effect concentrations are classically used: the lowest/no observed effect concentration (LOEC/NOEC), the LCx (x% lethal concentration) and the NEC (no effect concentration). In this article, for each of these three types of critical effect concentration, we compared the methods or models used for their estimation and proposed one as the most appropriate. We then compared these critical effect concentrations to each other. For that, we used nine survival data sets corresponding to D. magna exposure to nine different contaminants, for which the time-course of the response was monitored. Our results showed that: (i) LOEC/NOEC values at day 21 were method-dependent, and the Cochran-Armitage test with a step-down procedure appeared to be the most protective for the environment; (ii) all the tested concentration-response models we compared gave close values of the LC50 at day 21; nevertheless, the Weibull model had the lowest global mean deviance; (iii) a simple threshold NEC model, both concentration- and time-dependent, more completely described the whole data set (i.e. all time points) and enabled a precise estimation of the NEC. We then compared the three critical effect concentrations and argued that the use of the NEC might be a good option for environmental risk assessment.
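Under a Weibull concentration-response model, an LCx can be obtained by inverting the fitted curve analytically rather than reading it off a plot. A minimal sketch follows; the coefficients are illustrative assumptions, not values fitted to the D. magna data:

```python
import math

def weibull_mortality(c, b0, b1):
    """Weibull concentration-response: P(death) at concentration c."""
    return 1.0 - math.exp(-math.exp(b0 + b1 * math.log(c)))

def lc(x, b0, b1):
    """Concentration expected to kill x% of individuals (inverse of the curve)."""
    return math.exp((math.log(-math.log(1.0 - x / 100.0)) - b0) / b1)

# hypothetical fitted coefficients
c50 = lc(50, b0=-2.0, b1=1.5)
```

The inversion follows directly from solving p = 1 - exp(-exp(b0 + b1·ln c)) for c, so lc() and weibull_mortality() are exact inverses of each other along the curve.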

  8. Transformation of Baumgarten's aesthetics into a tool for analysing works and for modelling

    DEFF Research Database (Denmark)

    Thomsen, Bente Dahl

    2006-01-01

      Abstract: Is this the best form, or does it need further work? The aesthetic object does not possess the perfect qualities; but how do I proceed with the form? These are questions that all modellers ask themselves at some point, and with which they can grapple for days - even weeks - before the...

  9. Modelling and analysing 3D buildings with a primal/dual data structure

    NARCIS (Netherlands)

    Boguslawski, P.; Gold, C.; Ledoux, H.

    2011-01-01

    While CityGML permits us to represent 3D city models, its use for applications where spatial analysis and/or real-time modifications are required is limited since at this moment the possibility to store topological relationships between the elements is rather limited and often not exploited. We pres

  11. A multi-scale modelling approach for analysing landscape service dynamics

    NARCIS (Netherlands)

    Willemen, L.; Veldkamp, A.; Verburg, P.H.; Hein, L.G.; Leemans, R.

    2012-01-01

    Shifting societal needs drive and shape landscapes and the provision of their services. This paper presents a modelling approach to visualize the regional spatial and temporal dynamics in landscape service supply as a function of changing landscapes and societal demand. This changing demand can resu

  12. GSEVM v.2: MCMC software to analyse genetically structured environmental variance models

    DEFF Research Database (Denmark)

    Ibáñez-Escriche, N; Garcia, M; Sorensen, D

    2010-01-01

    This note provides a description of software that allows the user to fit Bayesian genetically structured variance models using Markov chain Monte Carlo (MCMC). The gsevm v.2 program was written in Fortran 90. The DOS and Unix executable programs, the user's guide, and some example files are freely availab...

  13. Analysing outsourcing policies in an asset management context: a six-stage model

    NARCIS (Netherlands)

    Schoenmaker, R.; Verlaan, J.G.

    2013-01-01

    Asset managers of civil infrastructure are increasingly outsourcing their maintenance. Whereas maintenance is a cyclic process, decisions to outsource are often project-based, confusing the discussion on the degree of outsourcing. This paper presents a six-stage model that facilitates

  15. Analysing green supply chain management practices in Brazil's electrical/electronics industry using interpretive structural modelling

    DEFF Research Database (Denmark)

    Govindan, Kannan; Kannan, Devika; Mathiyazhagan, K.

    2013-01-01

    that exists between GSCM practices with regard to their adoption within Brazilian electrical/electronic industry with the help of interpretive structural modelling (ISM). From the results, we infer that cooperation with customers for eco-design practice is driving other practices, and this practice acts...

  16. Analysing the Severity and Frequency of Traffic Crashes in Riyadh City Using Statistical Models

    Directory of Open Access Journals (Sweden)

    Saleh Altwaijri

    2012-12-01

    Full Text Available Traffic crashes in Riyadh city cause losses in the form of deaths, injuries and property damage, in addition to the pain and social tragedy affecting the families of the victims. In 2005, a total of 47,341 injury traffic crashes occurred in Riyadh city (19% of all KSA crashes), and 9% of those crashes were severe. Road safety in Riyadh city may have been adversely affected by: high car ownership, migration of people to Riyadh city, about 6 million daily trips, a high rate of income, low-cost petrol, drivers of different nationalities, young drivers and tremendous population growth, which creates a high level of mobility and transport activity in the city. The primary objective of this paper is therefore to explore the factors affecting the severity and frequency of road crashes in Riyadh city using appropriate statistical models, aiming to establish effective safety policies ready to be implemented to reduce the severity and frequency of road crashes in Riyadh city. Crash data for Riyadh city were collected from the Higher Commission for the Development of Riyadh (HCDR) for a period of five years, from 1425H to 1429H (roughly corresponding to 2004-2008). Crash data were classified into three categories: fatal, serious-injury and slight-injury. Two nominal response models have been developed for the injury-related crash data: a standard multinomial logit model (MNL) and a mixed logit model. Due to a severe underreporting problem for slight-injury crashes, binary and mixed binary logistic regression models were also estimated for two categories of severity: fatal and serious crashes. For frequency, count models such as Negative Binomial (NB) models were employed, and the unit of analysis was the 168 HAIs (wards) in Riyadh city. Ward-level crash data are disaggregated by severity of the crash (such as fatal and serious-injury crashes).
The results from both multinomial and binary response models are found to be fairly consistent but
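In a multinomial logit model of crash severity, the probability of each outcome class is a softmax of class-specific linear utilities. The following is a minimal sketch of that probability calculation; the utilities are illustrative, not estimates from the Riyadh data:

```python
import math

def mnl_probs(utilities):
    """Multinomial logit: outcome probabilities from linear utilities."""
    m = max(utilities)                      # subtract max to stabilise exp()
    exps = [math.exp(u - m) for u in utilities]
    total = sum(exps)
    return [e / total for e in exps]

# hypothetical utilities for fatal, serious-injury, slight-injury outcomes
probs = mnl_probs([-2.0, -0.5, 0.0])
```

The mixed logit extends this by letting the utility coefficients vary randomly across observations, relaxing the MNL's independence-of-irrelevant-alternatives assumption.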

  17. Using species abundance distribution models and diversity indices for biogeographical analyses

    Science.gov (United States)

    Fattorini, Simone; Rigal, François; Cardoso, Pedro; Borges, Paulo A. V.

    2016-01-01

    We examine whether Species Abundance Distribution models (SADs) and diversity indices can describe how species colonization status influences species community assembly on oceanic islands. Our hypothesis is that, because of the lack of source-sink dynamics at the archipelago scale, Single Island Endemics (SIEs), i.e. endemic species restricted to only one island, should be represented by few rare species and consequently have abundance patterns that differ from those of more widespread species. To test our hypothesis, we used arthropod data from the Azorean archipelago (North Atlantic). We divided the species into three colonization categories: SIEs, archipelagic endemics (AZEs, present in at least two islands) and native non-endemics (NATs). For each category, we modelled rank-abundance plots using both the geometric series and the Gambin model, whose α parameter is a measure of distributional amplitude. We also calculated Shannon entropy and Buzas and Gibson's evenness. We show that the slopes of the regression lines modelling the SADs were significantly higher for SIEs, which indicates a relative predominance of a few highly abundant species and a lack of rare species, which also depresses the diversity indices. This may be a consequence of two factors: (i) some forest-specialist SIEs may be at an advantage over other, less adapted species; (ii) the entire populations of SIEs are by definition concentrated on a single island, without the possibility of inter-island source-sink dynamics; hence all populations must have a minimum number of individuals to survive natural, often unpredictable, fluctuations. These findings are supported by higher values of the α parameter of the Gambin model for SIEs. In contrast, AZEs and NATs had lower regression slopes and lower α but higher diversity indices, resulting from their widespread distribution over several islands. We conclude that these differences in the SAD models and diversity indices demonstrate that the study of these metrics is useful for
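The diversity indices used here, Shannon entropy H' and Buzas and Gibson's evenness (e^H'/S, where S is species richness), are straightforward to compute from abundance counts. A minimal sketch with hypothetical abundances:

```python
import math

def shannon_entropy(abundances):
    """Shannon entropy H' computed from species abundance counts."""
    n = sum(abundances)
    ps = [a / n for a in abundances if a > 0]
    return -sum(p * math.log(p) for p in ps)

def buzas_gibson_evenness(abundances):
    """Buzas and Gibson's evenness: e^H' divided by species richness S."""
    s = sum(1 for a in abundances if a > 0)
    return math.exp(shannon_entropy(abundances)) / s

counts = [50, 30, 15, 5]        # hypothetical community
e = buzas_gibson_evenness(counts)
```

Evenness equals 1 only when all species are equally abundant, so a community dominated by a few abundant species, as described for SIEs, shows depressed values of both indices.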

  18. Evaluation of a dentoalveolar model for testing mouthguards: stress and strain analyses.

    Science.gov (United States)

    Verissimo, Crisnicaw; Costa, Paulo Victor Moura; Santos-Filho, Paulo César Freitas; Fernandes-Neto, Alfredo Júlio; Tantbirojn, Daranee; Versluis, Antheunis; Soares, Carlos José

    2016-02-01

    Custom-fitted mouthguards are devices used to decrease the likelihood of dental trauma. The aim of this study was to develop an experimental bovine dentoalveolar model with periodontal ligament to evaluate mouthguard shock absorption, and impact strain and stress behavior. A pendulum impact device was developed to perform the impact tests with two different impact materials (steel ball and baseball). Five bovine jaws were selected with standard age and dimensions. Six-mm mouthguards were made for the impact tests. The jaws were fixed in a pendulum device and impacts were performed from 90, 60, and 45° angles, with and without mouthguard. Strain gauges were attached at the palatal surface of the impacted tooth. The strain and shock absorption of the mouthguards were calculated and data were analyzed with three-way ANOVA and Tukey's test (α = 0.05). Two-dimensional finite element models were created based on the cross-section of the bovine dentoalveolar model used in the experiment. A nonlinear dynamic impact analysis was performed to evaluate the strain and stress distributions. Without mouthguards, the increase in impact angulation significantly increased strains and stresses. Mouthguards reduced strain and stress values. Impact velocity, impact object (steel ball or baseball), and mouthguard presence affected the impact stresses and strains in a bovine dentoalveolar model. Experimental strain measurements and finite element models predicted similar behavior; therefore, both methodologies are suitable for evaluating the biomechanical performance of mouthguards. © 2015 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  19. Novel basophil- or eosinophil-depleted mouse models for functional analyses of allergic inflammation.

    Science.gov (United States)

    Matsuoka, Kunie; Shitara, Hiroshi; Taya, Choji; Kohno, Kenji; Kikkawa, Yoshiaki; Yonekawa, Hiromichi

    2013-01-01

    Basophils and eosinophils play important roles in various host defense mechanisms but also act as harmful effectors in allergic disorders. We generated novel basophil- and eosinophil-depletion mouse models by introducing the human diphtheria toxin (DT) receptor gene under the control of the mouse CD203c and the eosinophil peroxidase promoter, respectively, to study the critical roles of these cells in the immunological response. These mice exhibited selective depletion of the target cells upon DT administration. In the basophil-depletion model, DT administration attenuated a drop in body temperature in IgG-mediated systemic anaphylaxis in a dose-dependent manner and almost completely abolished the development of ear swelling in IgE-mediated chronic allergic inflammation (IgE-CAI), a typical skin swelling reaction with massive eosinophil infiltration. In contrast, in the eosinophil-depletion model, DT administration ameliorated the ear swelling in IgE-CAI whether DT was administered before, simultaneously with, or after antigen challenge, with significantly lower numbers of eosinophils infiltrating into the swelling site. These results confirm that basophils and eosinophils act as the initiator and the effector, respectively, in IgE-CAI. In addition, antibody array analysis suggested that eotaxin-2 is a principal chemokine that attracts proinflammatory cells, leading to chronic allergic inflammation. Thus, the two mouse models established in this study are potentially useful and powerful tools for studying the in vivo roles of basophils and eosinophils. The combination of basophil- and eosinophil-depletion mouse models provides a new approach to understanding the complicated mechanism of allergic inflammation in conditions such as atopic dermatitis and asthma.

  20. Novel basophil- or eosinophil-depleted mouse models for functional analyses of allergic inflammation.

    Directory of Open Access Journals (Sweden)

    Kunie Matsuoka

    Full Text Available Basophils and eosinophils play important roles in various host defense mechanisms but also act as harmful effectors in allergic disorders. We generated novel basophil- and eosinophil-depletion mouse models by introducing the human diphtheria toxin (DT) receptor gene under the control of the mouse CD203c and the eosinophil peroxidase promoter, respectively, to study the critical roles of these cells in the immunological response. These mice exhibited selective depletion of the target cells upon DT administration. In the basophil-depletion model, DT administration attenuated a drop in body temperature in IgG-mediated systemic anaphylaxis in a dose-dependent manner and almost completely abolished the development of ear swelling in IgE-mediated chronic allergic inflammation (IgE-CAI), a typical skin swelling reaction with massive eosinophil infiltration. In contrast, in the eosinophil-depletion model, DT administration ameliorated the ear swelling in IgE-CAI whether DT was administered before, simultaneously with, or after antigen challenge, with significantly lower numbers of eosinophils infiltrating into the swelling site. These results confirm that basophils and eosinophils act as the initiator and the effector, respectively, in IgE-CAI. In addition, antibody array analysis suggested that eotaxin-2 is a principal chemokine that attracts proinflammatory cells, leading to chronic allergic inflammation. Thus, the two mouse models established in this study are potentially useful and powerful tools for studying the in vivo roles of basophils and eosinophils. The combination of basophil- and eosinophil-depletion mouse models provides a new approach to understanding the complicated mechanism of allergic inflammation in conditions such as atopic dermatitis and asthma.

  1. Static simulation and analyses of mower's ROPS behavior in a finite element model.

    Science.gov (United States)

    Wang, X; Ayers, P; Womac, A R

    2009-10-01

    The goal of this research was to numerically predict the maximum lateral force acting on a mower rollover protective structure (ROPS) and the energy absorbed by the ROPS during a lateral continuous roll. A finite element (FE) model of the ROPS was developed using elastic and plastic theories including nonlinear relationships between stresses and strains in the plastic deformation range. Model validation was performed using field measurements of ROPS behavior in a lateral continuous roll on a purpose-designed test slope. Field tests determined the maximum deformation of the ROPS of a 900 kg John Deere F925 mower with a 183 cm (72 in.) mowing deck during an actual lateral roll on a pad and on soil. In the FE model, lateral force was gradually added to the ROPS until the field-measured maximum deformation was achieved. The results from the FE analysis indicated that the top corners of the ROPS enter slightly into the plastic deformation region. Maximum lateral forces acting on the ROPS during the simulated impact with the pad and soil were 19650 N and 22850 N, respectively. The FE model predicted that the energy absorbed by the ROPS (643 J) in the lateral roll test on the pad was less than the static test requirements (1575 J) of Organisation for Economic Co-operation and Development (OECD) Code 6. In addition, the energy absorbed by the ROPS (1813 J) in the test on the soil met the static test requirements (1575 J). Both the FE model and the field test results indicated that the deformed ROPS of the F925 mower with deck did not intrude into the occupant clearance zone during the lateral continuous or non-continuous roll.

  2. MONTE CARLO ANALYSES OF THE YALINA THERMAL FACILITY WITH SERPENT STEREOLITHOGRAPHY GEOMETRY MODEL

    Energy Technology Data Exchange (ETDEWEB)

    Talamo, A.; Gohar, Y.

    2015-01-01

    This paper analyzes the YALINA Thermal subcritical assembly of Belarus using two different Monte Carlo transport programs, SERPENT and MCNP. The MCNP model is based on combinatorial geometry and a universe hierarchy, while the SERPENT model is based on stereolithography geometry. The latter consists of unstructured triangulated surfaces defined by their normals and vertices. This geometry format is used by 3D printers; here it was created with the CUBIT software, MATLAB scripts, and C code. All the Monte Carlo simulations have been performed using the ENDF/B-VII.0 nuclear data library. Both MCNP and SERPENT share the same geometry specifications, which describe the facility details without using any material homogenization. Three different configurations, using 216, 245, or 280 fuel rods, respectively, have been studied. The numerical simulations show that the agreement between SERPENT and MCNP results is within a few tens of pcm.

  3. Statistical Modelling of Synaptic Vesicles Distribution and Analysing their Physical Characteristics

    DEFF Research Database (Denmark)

    Khanmohammadi, Mahdieh

    This Ph.D. thesis deals with mathematical and statistical modeling of synaptic vesicle distribution, shape, orientation and interactions. The first major part of this thesis treats the problem of determining the effect of stress on synaptic vesicle distribution and interactions. Serial section...... on differences of statistical measures in section and the same measures in between sections. Three-dimensional (3D) datasets are reconstructed by using image registration techniques and estimated thicknesses. We distinguish the effect of stress by estimating the synaptic vesicle densities and modeling......, which leads to more accurate results. Finally, we present a thorough statistical investigation of the shape, orientation and interactions of the synaptic vesicles during active time of the synapse. Focused ion beam-scanning electron microscopy images of a male mammalian brain are used for this study...

  4. A note on the Fourier series model for analysing line transect data.

    Science.gov (United States)

    Buckland, S T

    1982-06-01

    The Fourier series model offers a powerful procedure for the estimation of animal population density from line transect data. The estimate is reliable over a wide range of detection functions. In contrast, analytic confidence intervals yield, at best, 90% confidence for nominal 95% intervals. Three solutions, one using Monte Carlo techniques, another making direct use of replicate lines and the third based on the jackknife method, are discussed and compared.
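
    Of the three interval methods this record compares, the jackknife is the simplest to sketch. The delete-one recipe below is generic: any scalar estimator over replicate transect lines can be plugged in, and the per-line density values are invented for illustration:

```python
import math
import statistics

def jackknife(estimator, data):
    """Delete-one jackknife: returns (bias-corrected estimate, standard error).

    `estimator` maps a list of observations to a scalar, e.g. a density
    estimate computed from one set of replicate transect lines.
    """
    n = len(data)
    theta = estimator(data)
    leave_one_out = [estimator(data[:i] + data[i + 1:]) for i in range(n)]
    # Jackknife pseudo-values: n*theta - (n-1)*theta_(-i)
    pseudo = [n * theta - (n - 1) * t for t in leave_one_out]
    estimate = statistics.mean(pseudo)
    se = statistics.stdev(pseudo) / math.sqrt(n)
    return estimate, se

# Hypothetical per-line density estimates (animals per km^2):
densities = [12.1, 9.8, 11.4, 10.6, 13.0, 9.2]
est, se = jackknife(statistics.mean, densities)
lower, upper = est - 1.96 * se, est + 1.96 * se  # approximate 95% interval
```

    For the sample mean the pseudo-values reduce to the observations themselves; the machinery pays off when `estimator` is the full Fourier series density estimate recomputed on each reduced set of lines.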

  5. Analysing Amazonian forest productivity using a new individual and trait-based model (TFS v.1

    Directory of Open Access Journals (Sweden)

    N. M. Fyllas

    2014-02-01

    Full Text Available Repeated long-term censuses have revealed large-scale spatial patterns in Amazon Basin forest structure and dynamism, with some forests in the west of the Basin having rates of aboveground biomass production and tree recruitment up to twice as high as forests in the east. Possible causes for this variation could be the climatic and edaphic gradients across the Basin and/or the spatial distribution of tree species composition. To help understand causes of this variation a new individual-based model of tropical forest growth, designed to take full advantage of the forest census data available from the Amazonian Forest Inventory Network (RAINFOR), has been developed. The model incorporates variations in tree size distribution, functional traits and soil physical properties and runs at the stand level with four functional traits, leaf dry mass per area (Ma), leaf nitrogen (NL) and phosphorus (PL) content and wood density (DW), used to represent a continuum of plant strategies found in tropical forests. We first applied the model to validate canopy-level water fluxes at three Amazon eddy flux sites. For all three sites the canopy-level water fluxes were adequately simulated. We then applied the model at seven plots, where intensive measurements of carbon allocation are available. Tree-by-tree multi-annual growth rates generally agreed well with observations for small trees, but with deviations identified for large trees. At the stand level, simulations at 40 plots were used to explore the influence of climate and soil fertility on the gross (ΠG) and net (ΠN) primary production rates as well as the carbon use efficiency (CU). Simulated ΠG, ΠN and CU were not associated with temperature. However, all three measures of stand-level productivity were positively related to annual precipitation and soil fertility.

  6. Sensitivity to model geometry in finite element analyses of reconstructed skeletal structures: experience with a juvenile pelvis.

    Science.gov (United States)

    Watson, Peter J; Fagan, Michael J; Dobson, Catherine A

    2015-01-01

    Biomechanical analysis of juvenile pelvic growth can be used in the evaluation of medical devices and investigation of hip joint disorders. This requires access to scan data of healthy juveniles, which are not always freely available. This article analyses the application of a geometric morphometric technique, which facilitates the reconstruction of the articulated juvenile pelvis from cadaveric remains, in biomechanical modelling. The sensitivity of predicted stress/strain distributions to variation in the reconstructed morphologies is of particular interest. A series of finite element analyses of a 9-year-old hemi-pelvis were performed to examine differences in predicted strain distributions between a reconstructed model and the original fully articulated specimen. Only minor differences in the minimum principal strain distributions were observed between the two reconstructed hemi-pelvic morphologies and that of the original articulation. A Wilcoxon rank-sum test determined that there was no statistically significant difference between the nodal strains recorded at 60 locations throughout the hemi-pelvic structures. This example suggests that finite element models created by this geometric morphometric reconstruction technique can be used with confidence, and, as observed with this hemi-pelvis model, even a visible morphological difference does not significantly affect the predicted results. The validated use of this geometric morphometric reconstruction technique in biomechanical modelling reduces the dependency on clinical scan data.

  7. Systematic Selection of Key Logistic Regression Variables for Risk Prediction Analyses: A Five-Factor Maximum Model.

    Science.gov (United States)

    Hewett, Timothy E; Webster, Kate E; Hurd, Wendy J

    2017-08-16

    The evolution of clinical practice and medical technology has yielded an increasing number of clinical measures and tests to assess a patient's progression and return-to-sport readiness after injury. The plethora of available tests may be burdensome to clinicians in the absence of evidence that demonstrates the utility of a given measurement. Thus, there is a critical need to identify a discrete number of metrics to capture during clinical assessment to effectively and concisely guide patient care. The data sources were PubMed and PubMed Central articles on the topic. We present a systematic approach to injury risk analyses and how this concept may be used in algorithms for risk analyses for primary anterior cruciate ligament (ACL) injury in healthy athletes and patients after ACL reconstruction. In this article, we present the five-factor maximum model, which states that in any predictive model a maximum of five variables will contribute in a meaningful manner to any risk factor analysis. We demonstrate how this model already exists for prevention of primary ACL injury, how it may guide development of the second ACL injury risk analysis, and how it may be applied across the injury spectrum for the development of injury risk analyses.
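
    The screening step implied by a five-variable cap can be sketched as a rank-and-truncate procedure. Everything below is illustrative: the variable names and data are invented, and the authors' actual selection criteria may differ:

```python
from statistics import mean, pstdev

def corr(xs, ys):
    """Pearson correlation between a candidate predictor and a binary outcome."""
    mx, my = mean(xs), mean(ys)
    sx, sy = pstdev(xs), pstdev(ys)
    if sx == 0 or sy == 0:
        return 0.0
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (len(xs) * sx * sy)

def top_k_predictors(candidates, outcome, k=5):
    """Rank candidate variables by |correlation| with the outcome and keep
    at most k of them -- the 'five-factor maximum' cap on a risk model."""
    ranked = sorted(candidates,
                    key=lambda v: abs(corr(candidates[v], outcome)),
                    reverse=True)
    return ranked[:k]

outcome = [0, 0, 0, 0, 1, 1, 1, 1]  # 1 = injured (synthetic)
candidates = {
    "knee_abduction_moment": [10, 12, 11, 13, 25, 27, 24, 26],
    "tibia_length": [34, 36, 35, 37, 35, 34, 36, 35],
    "quad_hamstring_ratio": [1.2, 1.1, 1.3, 1.2, 1.1, 1.3, 1.2, 1.1],
    "trunk_sway": [2, 3, 2, 3, 4, 3, 2, 3],
    "age": [14, 15, 16, 15, 14, 16, 15, 14],
    "bmi": [20, 21, 22, 21, 20, 22, 21, 20],
    "hop_distance": [140, 150, 145, 155, 150, 140, 145, 150],
}
print(top_k_predictors(candidates, outcome))  # five names, strongest first
```

    The retained variables would then be entered into the logistic regression; the cap keeps the final model to at most five meaningful contributors.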

  8. Hydrogeologic analyses in support of the conceptual model for the LANL Area G LLRW performance assessment

    Energy Technology Data Exchange (ETDEWEB)

    Vold, E.L.; Birdsell, K.; Rogers, D.; Springer, E.; Krier, D.; Turin, H.J.

    1996-04-01

    The Los Alamos National Laboratory low level radioactive waste disposal facility at Area G is currently completing a draft of the site Performance Assessment. Results from previous field studies have estimated a range in recharge rate up to 1 cm/yr. Recent estimates of unsaturated hydraulic conductivity for each stratigraphic layer under a unit gradient assumption show a wide range in recharge rate of 10⁻⁴ to 1 cm/yr depending upon location. Numerical computations show that a single net infiltration rate at the mesa surface does not match the moisture profile in each stratigraphic layer simultaneously, suggesting local source or sink terms possibly due to surface-connected porous regions. The best fit to field data at deeper stratigraphic layers occurs for a net infiltration of about 0.1 cm/yr. A recent detailed analysis evaluated liquid-phase vertical moisture flux, based on moisture profiles in several boreholes and van Genuchten fits to the hydraulic properties for each of the stratigraphic units. Results show a near-surface infiltration region averaging 8 m deep, below which is a dry region of low moisture content and low flux, where liquid-phase recharge averages to zero. Analysis shows this low-flux region is dominated by vapor movement. Field data from tritium diffusion studies, from pressure fluctuation attenuation studies, and from comparisons of in-situ and core sample permeabilities indicate that the vapor diffusion is enhanced above that expected in the matrix, presumably due to enhanced flow through the fractures. Below this dry region within the mesa is a moisture spike which analyses show corresponds to a moisture source. The likely physical explanation is seasonal transient infiltration through surface-connected fractures. This anomalous region is being investigated in current field studies, because it is critical in understanding the moisture flux which continues to deeper regions through the unsaturated zone.
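
    The unit-gradient recharge estimates described above follow from a van Genuchten retention curve combined with the Mualem conductivity model: under a unit gradient, the downward flux simply equals the unsaturated conductivity K(h). A sketch with invented parameter values (these are not the fitted Area G tuff properties):

```python
def van_genuchten_Se(h, alpha, n):
    """Effective saturation Se = (1 + (alpha*h)^n)^(-m) for suction head h >= 0,
    with m = 1 - 1/n."""
    m = 1.0 - 1.0 / n
    return (1.0 + (alpha * h) ** n) ** (-m)

def mualem_K(h, Ks, alpha, n):
    """Mualem-van Genuchten unsaturated hydraulic conductivity K(h).
    Under a unit-gradient assumption, the recharge flux equals K(h)."""
    m = 1.0 - 1.0 / n
    Se = van_genuchten_Se(h, alpha, n)
    return Ks * Se ** 0.5 * (1.0 - (1.0 - Se ** (1.0 / m)) ** m) ** 2

# Illustrative parameters: Ks in cm/yr, alpha in 1/cm, n dimensionless
Ks, alpha, n = 1.0, 0.01, 1.8
for h in (0.0, 10.0, 100.0, 1000.0):     # suction heads in cm
    print(h, mualem_K(h, Ks, alpha, n))  # K falls steeply as the tuff dries
```

    Because K(h) is so nonlinear, small changes in the fitted alpha and n propagate into order-of-magnitude changes in the inferred recharge, consistent with the wide 10⁻⁴ to 1 cm/yr range reported above.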

  9. A new compact solid-state neutral particle analyser at ASDEX Upgrade: Setup and physics modeling

    Science.gov (United States)

    Schneider, P. A.; Blank, H.; Geiger, B.; Mank, K.; Martinov, S.; Ryter, F.; Weiland, M.; Weller, A.

    2015-07-01

    At ASDEX Upgrade (AUG), a new compact solid-state detector has been installed to measure the energy spectrum of fast neutrals based on the principle described by Shinohara et al. [Rev. Sci. Instrum. 75, 3640 (2004)]. The diagnostic relies on the usual charge exchange of supra-thermal fast-ions with neutrals in the plasma. Therefore, the measured energy spectra directly correspond to those of confined fast-ions with a pitch angle defined by the line of sight of the detector. Experiments in AUG showed the good signal-to-noise characteristics of the detector. It is energy calibrated and can measure energies of 40-200 keV with count rates of up to 140 kcps. The detector has an active view on one of the heating beams. The heating beam increases the neutral density locally; thereby, information about the central fast-ion velocity distribution is obtained. The measured fluxes are modeled with a newly developed module for the 3D Monte Carlo code F90FIDASIM [Geiger et al., Plasma Phys. Controlled Fusion 53, 65010 (2011)]. The modeling makes it possible to distinguish between the active (beam) and passive contributions to the signal. Thereby, the birth profile of the measured fast neutrals can be reconstructed. This model reproduces the measured energy spectra with good accuracy when the passive contribution is taken into account.

  10. A new compact solid-state neutral particle analyser at ASDEX Upgrade: Setup and physics modeling

    Energy Technology Data Exchange (ETDEWEB)

    Schneider, P. A.; Blank, H.; Geiger, B.; Mank, K.; Martinov, S.; Ryter, F.; Weiland, M.; Weller, A. [Max-Planck-Institut für Plasmaphysik, Garching (Germany)

    2015-07-15

    At ASDEX Upgrade (AUG), a new compact solid-state detector has been installed to measure the energy spectrum of fast neutrals based on the principle described by Shinohara et al. [Rev. Sci. Instrum. 75, 3640 (2004)]. The diagnostic relies on the usual charge exchange of supra-thermal fast-ions with neutrals in the plasma. Therefore, the measured energy spectra directly correspond to those of confined fast-ions with a pitch angle defined by the line of sight of the detector. Experiments in AUG showed the good signal-to-noise characteristics of the detector. It is energy calibrated and can measure energies of 40-200 keV with count rates of up to 140 kcps. The detector has an active view on one of the heating beams. The heating beam increases the neutral density locally; thereby, information about the central fast-ion velocity distribution is obtained. The measured fluxes are modeled with a newly developed module for the 3D Monte Carlo code F90FIDASIM [Geiger et al., Plasma Phys. Controlled Fusion 53, 65010 (2011)]. The modeling makes it possible to distinguish between the active (beam) and passive contributions to the signal. Thereby, the birth profile of the measured fast neutrals can be reconstructed. This model reproduces the measured energy spectra with good accuracy when the passive contribution is taken into account.

  11. High-temperature series analyses of the classical Heisenberg and XY model

    CERN Document Server

    Adler, J; Janke, W

    1993-01-01

    Although there is now a good measure of agreement between Monte Carlo and high-temperature series expansion estimates for Ising ($n=1$) models, published results for the critical temperature from series expansions up to 12th order for the three-dimensional classical Heisenberg ($n=3$) and XY ($n=2$) model do not agree very well with recent high-precision Monte Carlo estimates. In order to clarify this discrepancy we have analyzed extended high-temperature series expansions of the susceptibility, the second correlation moment, and the second field derivative of the susceptibility, which were derived a few years ago by Lüscher and Weisz for general $O(n)$ vector spin models on $D$-dimensional hypercubic lattices up to 14th order in $K \equiv J/k_B T$. By analyzing these series expansions in three dimensions with two different methods that allow for confluent correction terms, we obtain good agreement with the standard field theory exponent estimates and with the critical temperature estimates...

  12. Metabolic model for the filamentous ‘Candidatus Microthrix parvicella' based on genomic and metagenomic analyses

    Science.gov (United States)

    Jon McIlroy, Simon; Kristiansen, Rikke; Albertsen, Mads; Michael Karst, Søren; Rossetti, Simona; Lund Nielsen, Jeppe; Tandoi, Valter; James Seviour, Robert; Nielsen, Per Halkjær

    2013-01-01

    ‘Candidatus Microthrix parvicella' is a lipid-accumulating, filamentous bacterium so far found only in activated sludge wastewater treatment plants, where it is a common causative agent of sludge separation problems. Despite attracting considerable interest, its detailed physiology is still unclear. In this study, the genome of the RN1 strain was sequenced and annotated, which facilitated the construction of a theoretical metabolic model based on available in situ and axenic experimental data. This model proposes that under anaerobic conditions, this organism accumulates preferentially long-chain fatty acids as triacylglycerols. Utilisation of trehalose and/or polyphosphate stores or partial oxidation of long-chain fatty acids may supply the energy required for anaerobic lipid uptake and storage. Comparing the genome sequence of this isolate with metagenomes from two full-scale wastewater treatment plants with enhanced biological phosphorus removal reveals high similarity, with few metabolic differences between the axenic and the dominant community ‘Ca. M. parvicella' strains. Hence, the metabolic model presented in this paper could be considered generally applicable to strains in full-scale treatment systems. The genomic information obtained here will provide the basis for future research into in situ gene expression and regulation. Such information will give substantial insight into the ecophysiology of this unusual and biotechnologically important filamentous bacterium. PMID:23446830

  13. Consequence modeling for nuclear weapons probabilistic cost/benefit analyses of safety retrofits

    Energy Technology Data Exchange (ETDEWEB)

    Harvey, T.F.; Peters, L.; Serduke, F.J.D.; Hall, C.; Stephens, D.R.

    1998-01-01

    The consequence models used in former studies of costs and benefits of enhanced safety retrofits are considered for (1) fuel fires; (2) non-nuclear detonations; and (3) unintended nuclear detonations. Estimates of consequences were made using a representative accident location, i.e., an assumed mixed suburban-rural site. We have explicitly quantified land-use impacts and human-health effects (e.g., prompt fatalities, prompt injuries, latent cancer fatalities, low levels of radiation exposure, and clean-up areas). Uncertainty in the wind direction is quantified and used in a Monte Carlo calculation to estimate a range of results for a fuel fire with uncertain respirable amounts of released Pu. We define a nuclear source term and discuss damage levels of concern. Ranges of damages are estimated by quantifying health impacts and property damages. We discuss our dispersal and prompt effects models in some detail. The models used to loft the Pu and fission products and their particle sizes are emphasized.

  14. A new compact solid-state neutral particle analyser at ASDEX Upgrade: Setup and physics modeling.

    Science.gov (United States)

    Schneider, P A; Blank, H; Geiger, B; Mank, K; Martinov, S; Ryter, F; Weiland, M; Weller, A

    2015-07-01

    At ASDEX Upgrade (AUG), a new compact solid-state detector has been installed to measure the energy spectrum of fast neutrals based on the principle described by Shinohara et al. [Rev. Sci. Instrum. 75, 3640 (2004)]. The diagnostic relies on the usual charge exchange of supra-thermal fast-ions with neutrals in the plasma. Therefore, the measured energy spectra directly correspond to those of confined fast-ions with a pitch angle defined by the line of sight of the detector. Experiments in AUG showed the good signal-to-noise characteristics of the detector. It is energy calibrated and can measure energies of 40-200 keV with count rates of up to 140 kcps. The detector has an active view on one of the heating beams. The heating beam increases the neutral density locally; thereby, information about the central fast-ion velocity distribution is obtained. The measured fluxes are modeled with a newly developed module for the 3D Monte Carlo code F90FIDASIM [Geiger et al., Plasma Phys. Controlled Fusion 53, 65010 (2011)]. The modeling makes it possible to distinguish between the active (beam) and passive contributions to the signal. Thereby, the birth profile of the measured fast neutrals can be reconstructed. This model reproduces the measured energy spectra with good accuracy when the passive contribution is taken into account.

  15. Analysing the origin of long-range interactions in proteins using lattice models

    Directory of Open Access Journals (Sweden)

    Unger Ron

    2009-01-01

    Full Text Available Abstract. Background: Long-range communication is very common in proteins but the physical basis of this phenomenon remains unclear. In order to gain insight into this problem, we decided to explore whether long-range interactions exist in lattice models of proteins. Lattice models of proteins have proven to capture some of the basic properties of real proteins and, thus, can be used for elucidating general principles of protein stability and folding. Results: Using a computational version of double-mutant cycle analysis, we show that long-range interactions emerge in lattice models even though they are not an input feature of them. The coupling energy of both short- and long-range pairwise interactions is found to become more positive (destabilizing) in a linear fashion with increasing 'contact-frequency', an entropic term that corresponds to the fraction of states in the conformational ensemble of the sequence in which the pair of residues is in contact. A mathematical derivation of the linear dependence of the coupling energy on 'contact-frequency' is provided. Conclusion: Our work shows how 'contact-frequency' should be taken into account in attempts to stabilize proteins by introducing (or stabilizing) contacts in the native state and/or through 'negative design' of non-native contacts.
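
    The double-mutant cycle arithmetic underlying this analysis is compact enough to sketch. The free-energy numbers below are invented for illustration, not values from the lattice simulations:

```python
def coupling_energy(dG_wt, dG_mut1, dG_mut2, dG_double):
    """Double-mutant cycle coupling energy (e.g. kcal/mol):

        dG_int = (dG_double - dG_wt) - (dG_mut1 - dG_wt) - (dG_mut2 - dG_wt)
               = dG_double + dG_wt - dG_mut1 - dG_mut2

    A value near zero means the two mutated residues act independently;
    a non-zero value signals an energetic (possibly long-range) interaction.
    """
    return dG_double + dG_wt - dG_mut1 - dG_mut2

# Additive pair: each single mutation destabilises by 1.0, the double by 2.0
print(coupling_energy(0.0, 1.0, 1.0, 2.0))            # 0.0
# Coupled pair: the double mutant is less destabilised than the singles' sum
print(round(coupling_energy(0.0, 1.0, 1.0, 1.4), 2))  # -0.6
```

    In the computational version described here, the free energies come from the lattice-model conformational ensemble rather than from experiment, but the cycle itself is identical.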

  16. Analyses of the redistribution of work following cardiac resynchronisation therapy in a patient specific model.

    Directory of Open Access Journals (Sweden)

    Steven Alexander Niederer

    Full Text Available Regulation of regional work is essential for efficient cardiac function. In patients with heart failure and electrical dysfunction, such as left bundle branch block, regional work is often depressed in the septum. Following cardiac resynchronisation therapy (CRT) this heterogeneous distribution of work can be rebalanced by altering the pattern of electrical activation. To investigate the changes in regional work in these patients and the mechanisms underpinning the improved function following CRT, we have developed a personalised computational model. Simulations of electromechanical cardiac function in the model estimate the regional stress, strain and work pre- and post-CRT. These simulations predict that the increase in observed work performed by the septum following CRT is not due to an increase in the volume of myocardial tissue recruited during contraction; rather, the volume of recruited myocardium remains the same and the average peak work rate per unit volume increases. This increase in the peak average work rate is attributed to slower and more effective contraction in the septum, as opposed to a change in active tension. Model results predict that this improved septal work rate following CRT is a result of resistance to septal contraction provided by the LV free wall. This resistance results in septal shortening over a longer period which, in turn, allows the septum to contract while generating higher levels of active tension to produce a higher work rate.

  17. Marginal estimation for multi-stage models: waiting time distributions and competing risks analyses.

    Science.gov (United States)

    Satten, Glen A; Datta, Somnath

    2002-01-15

    We provide non-parametric estimates of the marginal cumulative distribution of stage occupation times (waiting times) and non-parametric estimates of marginal cumulative incidence function (proportion of persons who leave stage j for stage j' within time t of entering stage j) using right-censored data from a multi-stage model. We allow for stage and path dependent censoring where the censoring hazard for an individual may depend on his or her natural covariate history such as the collection of stages visited before the current stage and their occupation times. Additional external time dependent covariates that may induce dependent censoring can also be incorporated into our estimates, if available. Our approach requires modelling the censoring hazard so that an estimate of the integrated censoring hazard can be used in constructing the estimates of the waiting times distributions. For this purpose, we propose the use of an additive hazard model which results in very flexible (robust) estimates. Examples based on data from burn patients and simulated data with tracking are also provided to demonstrate the performance of our estimators.

  18. Promoting Social Inclusion through Sport for Refugee-Background Youth in Australia: Analysing Different Participation Models

    Directory of Open Access Journals (Sweden)

    Karen Block

    2017-06-01

    Full Text Available Sports participation can confer a range of physical and psychosocial benefits and, for refugee and migrant youth, may even act as a critical mediator for achieving positive settlement and engaging meaningfully in Australian society. This group has low participation rates however, with identified barriers including costs; discrimination and a lack of cultural sensitivity in sporting environments; lack of knowledge of mainstream sports services on the part of refugee-background settlers; inadequate access to transport; culturally determined gender norms; and family attitudes. Organisations in various sectors have devised programs and strategies for addressing these participation barriers. In many cases however, these responses appear to be ad hoc and under-theorised. This article reports findings from a qualitative exploratory study conducted in a range of settings to examine the benefits, challenges and shortcomings associated with different participation models. Interview participants were drawn from non-government organisations, local governments, schools, and sports clubs. Three distinct models of participation were identified, including short term programs for refugee-background children; ongoing programs for refugee-background children and youth; and integration into mainstream clubs. These models are discussed in terms of their relative challenges and benefits and their capacity to promote sustainable engagement and social inclusion for this population group.

  19. A biophysically-based finite state machine model for analysing gastric experimental entrainment and pacing recordings

    Science.gov (United States)

    Sathar, Shameer; Trew, Mark L.; Du, Peng; O'Grady, Greg; Cheng, Leo K.

    2014-01-01

    Gastrointestinal motility is coordinated by slow waves (SWs) generated by the interstitial cells of Cajal (ICC). Experimental studies have shown that SWs spontaneously activate at different intrinsic frequencies in isolated tissue, whereas in intact tissues they are entrained to a single frequency. Gastric pacing has been used in an attempt to improve motility in disorders such as gastroparesis by modulating entrainment, but the optimal methods of pacing are currently unknown. Computational models can aid in the interpretation of complex in-vivo recordings and help to determine optimal pacing strategies. However, previous computational models of SW entrainment are limited to the intrinsic pacing frequency as the primary determinant of the conduction velocity, and are not able to accurately represent the effects of external stimuli and electrical anisotropies. In this paper, we present a novel computationally efficient method for modelling SW propagation through the ICC network while accounting for conductivity parameters and fiber orientations. The method successfully reproduced experimental recordings of entrainment following gastric transection and the effects of gastric pacing on SW activity. It provides a reliable new tool for investigating gastric electrophysiology in normal and diseased states, and to guide and focus future experimental studies. PMID:24276722
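
    The finite-state-machine idea can be illustrated with a deliberately minimal 1D sketch. The cell count, state durations and entrainment rule below are all hypothetical; the paper's model additionally accounts for conductivities and fibre orientation:

```python
import numpy as np

RESTING, FIRING, REFRACTORY = 0, 1, 2

def simulate_chain(n_cells=20, steps=40, fire_len=2, refr_len=10, pace_cell=0):
    """Toy finite-state-machine model of slow-wave propagation along a
    1D chain of ICC cells. Each cell cycles RESTING -> FIRING ->
    REFRACTORY -> RESTING; a resting cell is entrained when a
    neighbouring cell is firing."""
    state = np.full(n_cells, RESTING)
    timer = np.zeros(n_cells, dtype=int)
    activation = np.full(n_cells, -1)      # step at which each cell first fires
    for step in range(steps):
        if step == 0:                      # pacemaker stimulus at one end
            state[pace_cell], timer[pace_cell] = FIRING, fire_len
            activation[pace_cell] = step
        firing_now = state == FIRING       # snapshot before entrainment
        for i in range(n_cells):
            neighbour_firing = (i > 0 and firing_now[i - 1]) or \
                               (i + 1 < n_cells and firing_now[i + 1])
            if state[i] == RESTING and neighbour_firing:
                state[i], timer[i] = FIRING, fire_len
                activation[i] = step
        timer -= 1                         # advance per-cell state timers
        for i in range(n_cells):
            if timer[i] == 0:
                if state[i] == FIRING:
                    state[i], timer[i] = REFRACTORY, refr_len
                elif state[i] == REFRACTORY:
                    state[i] = RESTING
    return activation

activation = simulate_chain()   # the wave front advances one cell per step
```

    The returned activation times show a wave front spreading from the pacemaker at a fixed conduction speed, which is the qualitative behaviour the entrainment experiments probe.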

  20. Study on dynamic response of embedded long span corrugated steel culverts using scaled model shaking table tests and numerical analyses

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    A series of scaled-model shaking table tests and their simulation analyses using the dynamic finite element method were performed to clarify the dynamic behaviors and the seismic stability of embedded corrugated steel culverts under strong earthquakes like the 1995 Hyogoken-nanbu earthquake. The dynamic strains of the embedded culvert models and the seismic soil pressure acting on the models due to sinusoidal and random strong motions were investigated. This study verified that the corrugated culvert model was subjected to dynamic horizontal forces (lateral seismic soil pressure) from the surrounding ground, which caused large bending strains on the structure, and that the structures do not exceed the allowable plastic deformation and do not collapse completely during strong earthquakes like the Hyogoken-nanbu earthquake. The results obtained are useful for the design and construction of embedded long span corrugated steel culverts in seismic regions.

  1. Model-independent analyses of non-Gaussianity in Planck CMB maps using Minkowski functionals

    Science.gov (United States)

    Buchert, Thomas; France, Martin J.; Steiner, Frank

    2017-05-01

    Despite the wealth of Planck results, there are difficulties in disentangling the primordial non-Gaussianity of the Cosmic Microwave Background (CMB) from the secondary and the foreground non-Gaussianity (NG). For each of these forms of NG the lack of complete data introduces model-dependences. Aiming at detecting the NGs of the CMB temperature anisotropy δT, while paying particular attention to a model-independent quantification of NGs, our analysis is based upon statistical and morphological univariate descriptors, respectively: the probability density function P(δT), related to v0, the first Minkowski Functional (MF), and the two other MFs, v1 and v2. From their analytical Gaussian predictions we build the discrepancy functions Δk (k = P, 0, 1, 2), which are applied to an ensemble of 10^5 CMB realization maps of the ΛCDM model and to the Planck CMB maps. In our analysis we use general Hermite expansions of the Δk up to the 12th order, where the coefficients are explicitly given in terms of cumulants. Assuming hierarchical ordering of the cumulants, we obtain the perturbative expansions generalizing the second order expansions of Matsubara to arbitrary order in the standard deviation σ0 for P(δT) and v0, where the perturbative expansion coefficients are explicitly given in terms of complete Bell polynomials. The comparison of the Hermite expansions and the perturbative expansions is performed for the ΛCDM map sample and the Planck data. We confirm the weak level of non-Gaussianity (1-2)σ of the foreground corrected masked Planck 2015 maps.
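
    A minimal sketch of the v0 descriptor and its discrepancy function, using a Gaussian white-noise map as a stand-in for a CMB realization (the paper additionally uses v1, v2 and high-order Hermite expansions of the discrepancies):

```python
import numpy as np
from math import erfc

def v0(field, thresholds):
    """First Minkowski functional v0 of a 2D field: the area fraction of
    the excursion set above each threshold (thresholds in sigma units)."""
    f = (field - field.mean()) / field.std()
    return np.array([(f >= nu).mean() for nu in thresholds])

def v0_gaussian(thresholds):
    """Analytical prediction of v0 for a Gaussian random field."""
    return np.array([0.5 * erfc(nu / np.sqrt(2.0)) for nu in thresholds])

# discrepancy function Delta_0 for a pure Gaussian map: fluctuates near 0
rng = np.random.default_rng(0)
nus = np.linspace(-3.0, 3.0, 13)
delta0 = v0(rng.standard_normal((256, 256)), nus) - v0_gaussian(nus)
```

    A non-Gaussian map would push delta0 systematically away from zero, which is the signal the Hermite expansion coefficients quantify.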

  2. Computational and Statistical Analyses of Insertional Polymorphic Endogenous Retroviruses in a Non-Model Organism

    Directory of Open Access Journals (Sweden)

    Le Bao

    2014-11-01

    Full Text Available Endogenous retroviruses (ERVs are a class of transposable elements found in all vertebrate genomes that contribute substantially to genomic functional and structural diversity. A host species acquires an ERV when an exogenous retrovirus infects a germ cell of an individual and becomes part of the genome inherited by viable progeny. ERVs that colonized ancestral lineages are fixed in contemporary species. However, in some extant species, ERV colonization is ongoing, which results in variation in ERV frequency in the population. To study the consequences of ERV colonization of a host genome, methods are needed to assign each ERV to a location in a species’ genome and determine which individuals have acquired each ERV by descent. Because well annotated reference genomes are not widely available for all species, de novo clustering approaches provide an alternative to reference mapping that are insensitive to differences between query and reference and that are amenable to mobile element studies in both model and non-model organisms. However, there is substantial uncertainty in both identifying ERV genomic position and assigning each unique ERV integration site to individuals in a population. We present an analysis suitable for detecting ERV integration sites in species without the need for a reference genome. Our approach is based on improved de novo clustering methods and statistical models that take the uncertainty of assignment into account and yield a probability matrix of shared ERV integration sites among individuals. We demonstrate that polymorphic integrations of a recently identified endogenous retrovirus in deer reflect contemporary relationships among individuals and populations.

  3. Analysing and modelling the impact of habitat fragmentation on species diversity: a macroecological perspective

    Directory of Open Access Journals (Sweden)

    Thomas Matthews

    2015-07-01

    Full Text Available My research aimed to examine a variety of macroecological and biogeographical patterns using a large number of purely habitat island datasets (i.e. isolated patches of natural habitat set within a matrix of human land uses) sourced from both the literature and my own sampling, with the objective of testing various macroecological and biogeographical patterns. These patterns can be grouped under four broad headings: 1) species–area relationships (SARs), 2) nestedness, 3) species abundance distributions (SADs) and 4) species incidence functions (function of area). Overall, I found that there were few hard macroecological generalities that hold in all cases across habitat island systems. This is because most habitat island systems are highly disturbed environments, with a variety of confounding variables and 'undesirable' species (e.g. species associated with human land uses) acting to modulate the patterns of interest. Nonetheless, some clear patterns did emerge. For example, the power model was by far the best general SAR model for habitat islands. The slope of the island species–area relationship (ISAR) was related to the matrix type surrounding archipelagos, such that habitat island ISARs were shallower than true island ISARs. Significant compositional and functional nestedness was rare in habitat island datasets, although island area was seemingly responsible for what nestedness was observed. Species abundance distribution models were found to provide useful information for conservation in fragmented landscapes, but the presence of undesirable species substantially affected the shape of the SAD. In conclusion, I found that the application of theory derived from the study of true islands to habitat island systems is inappropriate, as it fails to incorporate factors that are unique to habitat islands.
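
    The power-model SAR fit referred to above can be sketched as a least-squares regression on log-transformed values (synthetic data; nonlinear fitting on the untransformed scale is a common alternative):

```python
import numpy as np

def fit_power_sar(area, richness):
    """Fit the power-model species-area relationship S = c * A**z by
    ordinary least squares in log-log space: log S = log c + z * log A."""
    z, log_c = np.polyfit(np.log(area), np.log(richness), 1)
    return np.exp(log_c), z

# synthetic habitat-island data drawn exactly from S = 5 * A**0.25
area = np.array([1.0, 10.0, 100.0, 1000.0])
richness = 5.0 * area ** 0.25
c, z = fit_power_sar(area, richness)
```

    The fitted exponent z is the ISAR slope whose variation with matrix type is discussed in the abstract.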

  4. Using Rasch Modeling to Re-Evaluate Rapid Malaria Diagnosis Test Analyses

    Directory of Open Access Journals (Sweden)

    Dawit G. Ayele

    2014-06-01

    Full Text Available The objective of this study was to demonstrate the use of the Rasch model by assessing the appropriateness of demographic, socio-economic and geographic factors in providing a total score for malaria rapid diagnostic tests (RDTs) in accordance with the model's expectations. The baseline malaria indicator survey was conducted in the Amhara, Oromiya and Southern Nation Nationalities and People (SNNP) regions of Ethiopia by The Carter Center in 2007. The results show high reliability and little disordering of thresholds, with no evidence of differential item functioning.
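
    For reference, the dichotomous Rasch model underlying such an analysis assigns response probabilities from a person parameter θ and an item difficulty β. This is a textbook sketch of the model's probability function, not the survey's estimation code:

```python
import numpy as np

def rasch_prob(theta, beta):
    """Dichotomous Rasch model: probability that a respondent with
    latent trait `theta` scores positively on an item of difficulty
    `beta`: P = 1 / (1 + exp(-(theta - beta)))."""
    return 1.0 / (1.0 + np.exp(-(theta - beta)))

def expected_total_score(theta, betas):
    """Expected total score over a set of items; under the Rasch model
    the observed total score is a sufficient statistic for theta."""
    return sum(rasch_prob(theta, b) for b in betas)
```

    When ability equals difficulty the endorsement probability is exactly one half, which is the anchoring property the model's total-score checks rely on.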

  5. Analysing the distribution of synaptic vesicles using a spatial point process model

    DEFF Research Database (Denmark)

    Khanmohammadi, Mahdieh; Waagepetersen, Rasmus; Nava, Nicoletta

    2014-01-01

    Stress can affect the brain functionality in many ways. As the synaptic vesicles have a major role in nervous signal transportation in synapses, their distribution in relationship to the active zone is very important in studying the neuron responses. We study the effect of stress on brain...... functionality by statistically modelling the distribution of the synaptic vesicles in two groups of rats: a control group subjected to sham stress and a stressed group subjected to a single acute foot-shock (FS)-stress episode. We hypothesize that the synaptic vesicles have different spatial distributions...

  6. The influence of jet-grout constitutive modelling in excavation analyses

    OpenAIRE

    Ciantia, M.; Arroyo Alvarez de Toledo, Marcos; Castellanza, R; Gens Solé, Antonio

    2012-01-01

    A bonded elasto-plastic soil model is employed to characterize cement-treated clay in the finite element analysis of an excavation on soft clay supported with a soil-cement slab at the bottom. The soft clay is calibrated to represent the behaviour of Bangkok soft clay. A parametric study is run for a series of materials characterised by increasing cement content in the clay-cement mixture. The different mixtures are indirectly specified by means of their unconfined compressive strength. A ...

  7. Analyses of Methods and Algorithms for Modelling and Optimization of Biotechnological Processes

    Directory of Open Access Journals (Sweden)

    Stoyan Stoyanov

    2009-08-01

    Full Text Available A review of the problems in modeling, optimization and control of biotechnological processes and systems is given in this paper. An analysis of existing and some new practical optimization methods for searching for a global optimum based on various advanced strategies - heuristic, stochastic, genetic and combined - is presented. Methods based on sensitivity theory, stochastic and mixed strategies for optimization with partial knowledge about kinetic, technical and economic parameters in optimization problems are discussed. Several approaches for multi-criteria optimization tasks are analyzed. The problems concerning optimal control of biotechnological systems are also discussed.
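
    One of the stochastic global-search strategy families surveyed can be sketched as plain simulated annealing on a toy multimodal objective. All parameters below (cooling schedule, step scale) are illustrative choices, not recommendations from the paper:

```python
import math
import random

def simulated_annealing(f, x0, steps=20000, t0=2.0, scale=0.5, seed=1):
    """Minimal simulated-annealing sketch for a 1D objective: accept
    improvements always, and uphill moves with Boltzmann probability
    exp(-delta / T) under a linearly decreasing temperature."""
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    best_x, best_f = x, fx
    for k in range(steps):
        t = t0 * (1.0 - k / steps) + 1e-9       # linear cooling schedule
        cand = x + rng.gauss(0.0, scale)        # Gaussian proposal
        fc = f(cand)
        if fc < fx or rng.random() < math.exp(-(fc - fx) / t):
            x, fx = cand, fc
        if fx < best_f:
            best_x, best_f = x, fx
    return best_x, best_f

# multimodal toy objective with global minimum f(0) = 0
best_x, best_f = simulated_annealing(
    lambda x: x * x + 2.0 * math.sin(5.0 * x) ** 2, 3.0)
```

    Starting far from the optimum, the high early temperature lets the search hop between the local basins created by the sine term before settling near the global minimum.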

  8. Daniel K. Inouye Solar Telescope: computational fluid dynamic analyses and evaluation of the air knife model

    Science.gov (United States)

    McQuillen, Isaac; Phelps, LeEllen; Warner, Mark; Hubbard, Robert

    2016-08-01

    Implementation of an air curtain at the thermal boundary between conditioned and ambient spaces allows for observation over wavelength ranges not practical when using optical glass as a window. The air knife model of the Daniel K. Inouye Solar Telescope (DKIST) project, a 4-meter solar observatory that will be built on Haleakalā, Hawai'i, deploys such an air curtain while also supplying ventilation through the ceiling of the coudé laboratory. The findings of computational fluid dynamics (CFD) analysis and subsequent changes to the air knife model are presented. Major design constraints include adherence to the Interface Control Document (ICD), separation of ambient and conditioned air, unidirectional outflow into the coudé laboratory, integration of a deployable glass window, and maintenance and accessibility requirements. The optimized design of the air knife successfully holds the full 12 Pa backpressure under temperature gradients of up to 20°C while maintaining unidirectional outflow. This is a significant improvement upon the 0.25 Pa pressure differential that the initial configuration, tested by Linden and Phelps, indicated the curtain could hold. CFD post-processing, developed by Vogiatzis, is validated against interferometry results of the initial air knife seeing evaluation, performed by Hubbard and Schoening. This is done by developing a CFD simulation of the initial experiment and using Vogiatzis' method to calculate error introduced along the optical path. Seeing error, for both temperature differentials tested in the initial experiment, matches well with seeing results obtained from the CFD analysis and thus validates the post-processing model. Application of this model to the realizable air knife assembly yields seeing errors that are well within the error budget under which the air knife interface falls, even with a temperature differential of 20°C between laboratory and ambient spaces. With ambient temperature set to 0°C and conditioned temperature set to 20

  9. Subchannel and Computational Fluid Dynamic Analyses of a Model Pin Bundle

    Energy Technology Data Exchange (ETDEWEB)

    Gairola, A.; Arif, M.; Suh, K. Y. [Seoul National Univ., Seoul (Korea, Republic of)

    2014-05-15

    The current study showed that the simplistic approach of the subchannel analysis code MATRA was not able to capture the physical behavior of the coolant inside the rod bundle. With the incorporation of a more detailed geometry of the grid spacer in the CFX code it was possible to approach the experimental values. However, it is vital to incorporate more advanced turbulence mixing models to more realistically simulate the behavior of the liquid metal coolant inside the model pin bundle, in parallel with the incorporation of the bottom and top grid structures. In the framework of the 11th international meeting of the International Association for Hydraulic Research and Engineering (IAHR) working group on advanced reactor thermal hydraulics, a standard problem was conducted. The quintessence of the problem was to check the hydraulics and heat transfer in a novel pin bundle with different pitch-to-rod-diameter ratios and heat fluxes, cooled by liquid metal. The standard problem stems from the field of nuclear safety research, with the idea of validating and checking the performance of computer codes against experimental results. Comprehensive checks between the two will help in improving the dependability and exactness of the codes used for accident simulations.

  10. Integration of 3d Models and Diagnostic Analyses Through a Conservation-Oriented Information System

    Science.gov (United States)

    Mandelli, A.; Achille, C.; Tommasi, C.; Fassi, F.

    2017-08-01

    In recent years, mature technologies for producing high quality virtual 3D replicas of Cultural Heritage (CH) artefacts have emerged thanks to the progress of Information Technology (IT) tools. These methods are an efficient way to present digital models that can be used for several purposes: heritage management, support to conservation, virtual restoration, reconstruction and colouring, art cataloguing and visual communication. The work presented is an emblematic case study oriented to preventive conservation through monitoring activities, using different acquisition methods and instruments. It was developed inside a project funded by the Lombardy Region, Italy, called "Smart Culture", which aimed to realise a platform giving users easy access to CH artefacts, using a very famous statue as an example. The final product is a 3D reality-based model that contains a great deal of information, and that can be consulted through a common web browser. In the end, it was possible to define general strategies oriented to the maintenance and valorisation of CH artefacts, which, in this specific case, must consider the integration of different techniques and competencies to obtain a complete, accurate and continuous monitoring of the statue.

  11. Preliminary Thermal Hydraulic Analyses of the Conceptual Core Models with Tubular Type Fuel Assemblies

    Energy Technology Data Exchange (ETDEWEB)

    Chae, Hee Taek; Park, Jong Hark; Park, Cheol

    2006-11-15

    A new research reactor (AHR, Advanced HANARO Reactor) based on the HANARO has been conceptually developed for the future needs of research reactors. A tubular type fuel was considered as one of the fuel options of the AHR. A tubular type fuel assembly has several curved fuel plates arranged with a constant small gap to build up cooling channels, which is very similar to an annulus pipe with many layers. This report presents the preliminary analysis of thermal hydraulic characteristics and safety margins for three conceptual core models using tubular fuel assemblies. Four design criteria, which are the fuel temperature, ONB (Onset of Nucleate Boiling) margin, minimum DNBR (Departure from Nucleate Boiling Ratio) and OFIR (Onset of Flow Instability Ratio), were investigated along with various core flow velocities in the normal operating conditions. The primary coolant flow rate based on a conceptual core model was suggested as design information for the process design of the primary cooling system. A computational fluid dynamics analysis was also carried out to evaluate the coolant velocity distributions between tubular channels and the pressure drop characteristics of the tubular fuel assembly.

  12. A new non-randomized model for analysing sensitive questions with binary outcomes.

    Science.gov (United States)

    Tian, Guo-Liang; Yu, Jun-Wu; Tang, Man-Lai; Geng, Zhi

    2007-10-15

    We propose a new non-randomized model for assessing the association of two sensitive questions with binary outcomes. Under the new model, respondents only need to answer a non-sensitive question instead of the original two sensitive questions. As a result, it can protect a respondent's privacy, avoid the usage of any randomizing device, and be applied to both the face-to-face interview and mail questionnaire. We derive the constrained maximum likelihood estimates of the cell probabilities and the odds ratio for two binary variables associated with the sensitive questions via the EM algorithm. The corresponding standard error estimates are then obtained by bootstrap approach. A likelihood ratio test and a chi-squared test are developed for testing association between the two binary variables. We discuss the loss of information due to the introduction of the non-sensitive question, and the design of the co-operative parameters. Simulations are performed to evaluate the empirical type I error rates and powers for the two tests. In addition, a simulation is conducted to study the relationship between the probability of obtaining valid estimates and the sample size for any given cell probability vector. A real data set from an AIDS study is used to illustrate the proposed methodologies.
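
    The bootstrap step for the odds-ratio standard error can be sketched generically. The code below resamples a plain 2x2 multinomial table and is only an illustration; the paper's point estimates are constrained MLEs obtained via EM under its non-randomized design, and the function names are hypothetical:

```python
import numpy as np

def odds_ratio(table):
    """Odds ratio of a 2x2 table, with a 0.5 continuity correction to
    guard against zero cells in resampled tables."""
    a, b, c, d = (np.asarray(table, dtype=float) + 0.5).ravel()
    return (a * d) / (b * c)

def bootstrap_se_log_or(table, n_boot=2000, seed=0):
    """Parametric-bootstrap standard error of the log odds ratio:
    resample tables from the fitted multinomial, recompute log OR."""
    rng = np.random.default_rng(seed)
    table = np.asarray(table, dtype=float)
    n = int(table.sum())
    p = (table / n).ravel()
    boots = rng.multinomial(n, p, size=n_boot)
    log_ors = np.log([odds_ratio(t.reshape(2, 2)) for t in boots])
    return float(log_ors.std(ddof=1))

se = bootstrap_se_log_or([[30, 10], [10, 30]])
```

    For this table the bootstrap standard error lands close to the classical Woolf value sqrt(1/a + 1/b + 1/c + 1/d).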

  13. Coupled biophysical global ocean model and molecular genetic analyses identify multiple introductions of cryptogenic species.

    Science.gov (United States)

    Dawson, Michael N; Sen Gupta, Alex; England, Matthew H

    2005-08-23

    The anthropogenic introduction of exotic species is one of the greatest modern threats to marine biodiversity. Yet exotic species introductions remain difficult to predict and are easily misunderstood because knowledge of natural dispersal patterns, species diversity, and biogeography is often insufficient to distinguish between a broadly dispersed natural population and an exotic one. Here we compare a global molecular phylogeny of a representative marine meroplanktonic taxon, the moon-jellyfish Aurelia, with natural dispersion patterns predicted by a global biophysical ocean model. Despite assumed high dispersal ability, the phylogeny reveals many cryptic species and predominantly regional structure with one notable exception: the globally distributed Aurelia sp.1, which, molecular data suggest, may occasionally traverse the Pacific unaided. This possibility is refuted by the ocean model, which shows much more limited dispersion and patterns of distribution broadly consistent with modern biogeographic zones, thus identifying multiple introductions worldwide of this cryptogenic species. This approach also supports existing evidence that (i) the occurrence in Hawaii of Aurelia sp. 4 and other native Indo-West Pacific species with similar life histories is most likely due to anthropogenic translocation, and (ii) there may be a route for rare natural colonization of northeast North America by the European marine snail Littorina littorea, whose status as endemic or exotic is unclear.

  14. Cotton chromosome substitution lines crossed with cultivars: genetic model evaluation and seed trait analyses.

    Science.gov (United States)

    Wu, Jixiang; McCarty, Jack C; Jenkins, Johnie N

    2010-05-01

    Seed from upland cotton, Gossypium hirsutum L., provides a desirable and important nutrition profile. In this study, several seed traits (protein content, oil content, seed hull fiber content, seed index, seed volume, embryo percentage) for F3 hybrids of 13 cotton chromosome substitution lines crossed with five elite cultivars over four environments were evaluated. Oil and protein were expressed both as percentage of total seed weight and as an index which is the grams of product/100 seeds. An additive and dominance (AD) genetic model with cytoplasmic effects was designed, assessed by simulations, and employed to analyze these seed traits. Simulated results showed that this model was sufficient for analyzing the data structure with F3 and parents in multiple environments without replications. Significant cytoplasmic effects were detected for seed oil content, oil index, seed index, seed volume, and seed embryo percentage. Additive effects were significant for protein content, fiber content, protein index, oil index, fiber index, seed index, seed volume, and embryo percentage. Dominance effects were significant for oil content, oil index, seed index, and seed volume. Cytoplasmic and additive effects for parents and dominance effects in homozygous and heterozygous forms were predicted. Favorable genetic effects were predicted in this study and the results provided evidence that these seed traits can be genetically improved. In addition, chromosome associations with AD effects were detected and discussed in this study.

  15. Analysing and combining atmospheric general circulation model simulations forced by prescribed SST: northern extratropical response

    Directory of Open Access Journals (Sweden)

    K. Maynard

    2001-06-01

    Full Text Available The ECHAM 3.2 (T21), ECHAM 4 (T30) and LMD (version 6, grid-point resolution with 96 longitudes × 72 latitudes) atmospheric general circulation models were integrated through the period 1961 to 1993 forced with the same observed Sea Surface Temperatures (SSTs) as compiled at the Hadley Centre. Three runs were made for each model starting from different initial conditions. The mid-latitude circulation pattern which maximises the covariance between the simulation and the observations, i.e. the most skilful mode, and the one which maximises the covariance amongst the runs, i.e. the most reproducible mode, is calculated as the leading mode of a Singular Value Decomposition (SVD) analysis of observed and simulated Sea Level Pressure (SLP) and geopotential height at 500 hPa (Z500) seasonal anomalies. A common response amongst the different models, having different resolution and parametrization, should be considered as a more robust atmospheric response to SST than the same response obtained with only one model. A robust skilful mode is found mainly in December-February (DJF), and in June-August (JJA). In DJF, this mode is close to the SST-forced pattern found by Straus and Shukla (2000) over the North Pacific and North America with a wavy out-of-phase between the NE Pacific and the SE US on the one hand and the NE North America on the other. This pattern evolves into a NAO-like pattern over the North Atlantic and Europe (SLP) and into a more N-S tripole on the Atlantic and European sector with an out-of-phase between middle Europe on the one hand and the northern and southern parts on the other (Z500). There are almost no spatial shifts between either field around North America (just a slight eastward shift of the highest absolute heterogeneous correlations for SLP relative to the Z500 ones).
The time evolution of the SST-forced mode is moderately to strongly related to the ENSO/LNSO events but the spread amongst the ensemble of runs is not systematically related
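
    The most skilful mode computation described above, i.e. the leading SVD mode of the cross-covariance between observed and simulated anomaly fields, can be sketched on synthetic (time, space) data. The synthetic fields and noise level below are illustrative stand-ins for the SLP/Z500 anomalies:

```python
import numpy as np

def leading_svd_mode(obs, sim):
    """Leading mode of a maximum-covariance (SVD) analysis between two
    (time, space) anomaly fields: returns the paired spatial patterns
    and their expansion-coefficient time series."""
    obs_a = obs - obs.mean(axis=0)
    sim_a = sim - sim.mean(axis=0)
    cov = obs_a.T @ sim_a / (obs.shape[0] - 1)   # cross-covariance matrix
    u, s, vt = np.linalg.svd(cov, full_matrices=False)
    pat_obs, pat_sim = u[:, 0], vt[0]
    return pat_obs, pat_sim, obs_a @ pat_obs, sim_a @ pat_sim

# synthetic check: both fields share one SST-like forcing time series t
rng = np.random.default_rng(0)
t = rng.standard_normal(200)
p, q = rng.standard_normal(10), rng.standard_normal(10)
obs = np.outer(t, p) + 0.1 * rng.standard_normal((200, 10))
sim = np.outer(t, q) + 0.1 * rng.standard_normal((200, 10))
_, _, a_obs, a_sim = leading_svd_mode(obs, sim)
```

    The leading mode recovers the shared forcing: the two expansion-coefficient series track the common time series t (up to sign), which is what makes this mode "skilful".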

  16. Analysing and combining atmospheric general circulation model simulations forced by prescribed SST. Northern extra tropical response

    Energy Technology Data Exchange (ETDEWEB)

    Moron, V. [Université de Provence, UFR des sciences géographiques et de l'aménagement, Aix-en-Provence (France); Navarra, A. [Istituto Nazionale di Geofisica e Vulcanologia, Bologna (Italy); Ward, M. N. [University of Oklahoma, Cooperative Institute for Mesoscale Meteorological Studies, Norman OK (United States); Foland, C. K. [Hadley Centre for Climate Prediction and Research, Meteorological Office, Bracknell (United Kingdom); Friederichs, P. [Meteorologisches Institut der Universität Bonn, Bonn (Germany); Maynard, K.; Polcher, J. [Université Pierre et Marie Curie, Paris (France). Centre National de la Recherche Scientifique, Laboratoire de Météorologie Dynamique, Paris

    2001-08-01

    The ECHAM 3.2 (T21), ECHAM 4 (T30) and LMD (version 6, grid-point resolution with 96 longitudes x 72 latitudes) atmospheric general circulation models were integrated through the period 1961 to 1993 forced with the same observed Sea Surface Temperatures (SSTs) as compiled at the Hadley Centre. Three runs were made for each model starting from different initial conditions. The mid-latitude circulation pattern which maximises the covariance between the simulation and the observations, i.e. the most skilful mode, and the one which maximises the covariance amongst the runs, i.e. the most reproducible mode, is calculated as the leading mode of a Singular Value Decomposition (SVD) analysis of observed and simulated Sea Level Pressure (SLP) and geopotential height at 500 hPa (Z500) seasonal anomalies. A common response amongst the different models, having different resolutions and parametrizations, should be considered a more robust atmospheric response to SST than the same response obtained with only one model. A robust skilful mode is found mainly in December-February (DJF), and in June-August (JJA). In DJF, this mode is close to the SST-forced pattern found by Straus and Shukla (2000) over the North Pacific and North America with a wavy out-of-phase between the NE Pacific and the SE US on the one hand and the NE North America on the other. This pattern evolves into a NAO-like pattern over the North Atlantic and Europe (SLP) and into a more N-S tripole on the Atlantic and European sector with an out-of-phase between middle Europe on the one hand and the northern and southern parts on the other (Z500). There are almost no spatial shifts between either field around North America (just a slight eastward shift of the highest absolute heterogeneous correlations for SLP relative to the Z500 ones).
The time evolution of the SST-forced mode is moderately to strongly related to the ENSO/LNSO events but the spread amongst the ensemble of runs is not systematically related at all to

  17. A decentralized model of palliative care for patients with advanced incurable cancer (Descentralização do atendimento a pacientes com câncer avançado sem possibilidade de cura)

    Directory of Open Access Journals (Sweden)

    Luiz Carlos Zeferino

    2010-11-01

    Full Text Available Most cancers in Brazil are diagnosed at advanced stages, so patient survival is low, which means that a large contingent of patients needs palliative care. The palliative care models that have been tried are centred on the hospitals that treat cancer, and their main limitation is that their coverage is only local, while demand is predominantly regional. This study aimed to test a decentralized palliative care model, based on the services and health professionals working in each municipality, to assist patients with gynaecological and/or breast cancer without possibility of cure, in partnership with the Centro de Atenção Integral à Saúde da Mulher of the Universidade Estadual de Campinas (CAISM). This was a qualitative descriptive study that followed the guidelines of development research. The expectation was that the municipalities would acquire primary-level care capacity, with the Centro de Atenção Integral à Saúde da Mulher as the referral centre for conditions requiring more complex care. The municipalities that showed interest and accepted the proposal were Amparo, Atibaia, Indaiatuba, Mogi-Mirim, São João da Boa Vista and São José do Rio Pardo. The implementation strategy included prior training of the professionals and specific meetings in each municipality, in order to secure political and strategic support for implementing these activities. As the data were collected in the form of conversation, the analysis comprised: preparation and description of the raw material; data reduction; coding; and vertical and transversal analysis. The model was put into operation in the municipalities of Amparo, Atibaia, Indaiatuba and São José do Rio Pardo. There was an increase in care capacity and a positive perception of the biopsychosocial effects for patients and family members

  18. Possibilities for a sustainable development. Muligheter for en baerekraftig utvikling; Analyser paa ''World Model''

    Energy Technology Data Exchange (ETDEWEB)

    Bjerkholt, O.; Johnsen, T.; Thonstad, K.

    1993-01-01

    This report is the final report of a project carried out by the Central Bureau of Statistics of Norway. The report presents analyses of the relations between economic development, energy consumption and emission of pollutants to air in a global perspective. The analyses are based on the ''World Model'', which has been developed at the Institute for Economic Analysis at New York University. The analyses show that it will be very difficult to obtain a global stabilization of CO₂ emissions at the 1990 level. In the reference scenario of the United Nations report ''Our Common Future'', the increase of CO₂ emissions from 1990 to 2020 was 73%. Even in the scenario with the most drastic measures, the emissions in 2020 will be about 43% above the 1990 level, according to the present report. A stabilization of global emissions at the 1990 level will require strong measures beyond those assumed in the model calculations, or a considerable breakthrough in energy technology. 17 refs., 5 figs., 21 tabs.

  19. A model using marginal efficiency of investment to analyse carbon and nitrogen interactions in forested ecosystems

    Science.gov (United States)

    Thomas, R. Q.; Williams, M.

    2014-12-01

    Carbon (C) and nitrogen (N) cycles are coupled in terrestrial ecosystems through multiple processes including photosynthesis, tissue allocation, respiration, N fixation, N uptake, and decomposition of litter and soil organic matter. Capturing the constraint of N on terrestrial C uptake and storage has been a focus of the Earth System modelling community. Here we explore the trade-offs and sensitivities of allocating C and N to different tissues in order to optimize the productivity of plants using a new, simple model of ecosystem C-N cycling and interactions (ACONITE). ACONITE builds on theory related to plant economics in order to predict key ecosystem properties (leaf area index, leaf C:N, N fixation, and plant C use efficiency) based on the optimization of the marginal change in net C or N uptake associated with a change in allocation of C or N to plant tissues. We simulated and evaluated steady-state and transient ecosystem stocks and fluxes in three different forest ecosystems types (tropical evergreen, temperate deciduous, and temperate evergreen). Leaf C:N differed among the three ecosystem types (temperate deciduous traits. Gross primary productivity (GPP) and net primary productivity (NPP) estimates compared well to observed fluxes at the simulation sites. A sensitivity analysis revealed that parameterization of the relationship between leaf N and leaf respiration had the largest influence on leaf area index and leaf C:N. Also, a widely used linear leaf N-respiration relationship did not yield a realistic leaf C:N, while a more recently reported non-linear relationship simulated leaf C:N that compared better to the global trait database than the linear relationship. Overall, our ability to constrain leaf area index and allow spatially and temporally variable leaf C:N can help address challenges simulating these properties in ecosystem and Earth System models. Furthermore, the simple approach with emergent properties based on coupled C-N dynamics has

  20. Application of satellite precipitation data to analyse and model arbovirus activity in the tropics

    Directory of Open Access Journals (Sweden)

    Corner Robert J

    2011-01-01

    Full Text Available Abstract Background Murray Valley encephalitis virus (MVEV is a mosquito-borne Flavivirus (Flaviviridae: Flavivirus which is closely related to Japanese encephalitis virus, West Nile virus and St. Louis encephalitis virus. MVEV is enzootic in northern Australia and Papua New Guinea and epizootic in other parts of Australia. Activity of MVEV in Western Australia (WA is monitored by detection of seroconversions in flocks of sentinel chickens at selected sample sites throughout WA. Rainfall is a major environmental factor influencing MVEV activity. Utilising data on rainfall and seroconversions, statistical relationships between MVEV occurrence and rainfall can be determined. These relationships can be used to predict MVEV activity which, in turn, provides the general public with important information about disease transmission risk. Since ground measurements of rainfall are sparse and irregularly distributed, especially in north WA where rainfall is spatially and temporally highly variable, alternative data sources such as remote sensing (RS data represent an attractive alternative to ground measurements. However, a number of competing alternatives are available and careful evaluation is essential to determine the most appropriate product for a given problem. Results The Tropical Rainfall Measurement Mission (TRMM Multi-satellite Precipitation Analysis (TMPA 3B42 product was chosen from a range of RS rainfall products to develop rainfall-based predictor variables and build logistic regression models for the prediction of MVEV activity in the Kimberley and Pilbara regions of WA. Two models employing monthly time-lagged rainfall variables showed the strongest discriminatory ability of 0.74 and 0.80 as measured by the Receiver Operating Characteristics area under the curve (ROC AUC. Conclusions TMPA data provide a state-of-the-art data source for the development of rainfall-based predictive models for Flavivirus activity in tropical WA. 
Compared to
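The discriminatory ability reported above is the ROC area under the curve. As an illustrative aside, the ROC AUC of a set of predicted probabilities can be computed with the rank-based (Mann-Whitney) estimator; the labels and scores below are invented toy data, not the study's:

```python
# Hypothetical sketch: scoring a binary predictor (e.g. predicted MVEV
# seroconversion) against observed outcomes with the ROC AUC.
def roc_auc(labels, scores):
    """Rank-based (Mann-Whitney) estimate of the ROC area under the curve."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    # Fraction of positive/negative pairs ranked correctly (ties count half).
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy example: predicted probabilities vs. observed outcomes.
y = [1, 1, 0, 0, 1, 0]
p = [0.9, 0.7, 0.4, 0.2, 0.3, 0.5]
print(roc_auc(y, p))
```

An AUC of 0.5 corresponds to a model with no discriminatory ability, 1.0 to perfect separation; the 0.74 and 0.80 quoted above sit in the commonly cited "acceptable to good" range.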

  1. IMPROVEMENTS IN HANFORD TRANSURANIC (TRU) PROGRAM UTILIZING SYSTEMS MODELING AND ANALYSES

    Energy Technology Data Exchange (ETDEWEB)

    UYTIOCO EM

    2007-11-12

    Hanford's Transuranic (TRU) Program is responsible for certifying contact-handled (CH) TRU waste and shipping the certified waste to the Waste Isolation Pilot Plant (WIPP). Hanford's CH TRU waste includes material that is in retrievable storage as well as above ground storage, and newly generated waste. Certifying a typical container entails retrieving and then characterizing it (Real-Time Radiography, Non-Destructive Assay, and Head Space Gas Sampling), validating records (data review and reconciliation), and designating the container for a payload. The certified payload is then shipped to WIPP. Systems modeling and analysis techniques were applied to Hanford's TRU Program to help streamline the certification process and increase shipping rates.

  2. Analysing green supply chain management practices in Brazil's electrical/electronics industry using interpretive structural modelling

    DEFF Research Database (Denmark)

    Govindan, Kannan; Kannan, Devika; Mathiyazhagan, K.

    2013-01-01

    Industries need to adopt environmental management concepts in traditional supply chain management. Green supply chain management (GSCM) is an established concept for ensuring environment-friendly activities in industry. This paper identifies the driving and dependence relationships that exist between GSCM practices, with regard to their adoption within the Brazilian electrical/electronics industry, with the help of interpretive structural modelling (ISM). From the results, we infer that cooperation with customers for eco-design drives the other practices and plays a vital role among them. Commitment to GSCM from senior managers and cooperation with customers for cleaner production occupy the highest level. © 2013 Taylor & Francis.
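The ISM technique named above builds a reachability matrix from pairwise "practice i drives practice j" judgements by taking a transitive closure. A minimal sketch with a hypothetical three-practice adjacency matrix (not the paper's data), using Warshall's algorithm:

```python
# Illustrative ISM step: transitive closure of a driving-relationship matrix.
def reachability(adj):
    """Reflexive-transitive closure (reachability matrix) of a 0/1 adjacency matrix."""
    n = len(adj)
    r = [[bool(adj[i][j]) or i == j for j in range(n)] for i in range(n)]
    for k in range(n):           # Warshall's algorithm
        for i in range(n):
            for j in range(n):
                r[i][j] = r[i][j] or (r[i][k] and r[k][j])
    return [[int(v) for v in row] for row in r]

# Hypothetical practices: 0 drives 1, 1 drives 2 (so 0 indirectly drives 2).
adj = [[0, 1, 0],
       [0, 0, 1],
       [0, 0, 0]]
print(reachability(adj))  # → [[1, 1, 1], [0, 1, 1], [0, 0, 1]]
```

In ISM, this reachability matrix is then partitioned into levels to produce the hierarchy of driving and dependent practices the abstract refers to.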

  3. Statistical Analyses and Modeling of the Implementation of Agile Manufacturing Tactics in Industrial Firms

    Directory of Open Access Journals (Sweden)

    Mohammad D. AL-Tahat

    2012-01-01

    Full Text Available This paper provides a review of and introduction to agile manufacturing. Tactics of agile manufacturing are mapped into different production areas (eight latent constructs: manufacturing equipment and technology; process technology and know-how; quality and productivity improvement; production planning and control; shop floor management; product design and development; supplier relationship management; and customer relationship management). The implementation level of agile manufacturing tactics is investigated in each area. A structural equation model is proposed and hypotheses are formulated. Feedback from 456 firms was collected using a five-point Likert-scale questionnaire, and statistical analysis was carried out using IBM SPSS and AMOS. Multicollinearity, content validity, consistency, construct validity, ANOVA, and the relationships between agile components were tested. The results of this study show that agile manufacturing tactics have a positive effect on the overall agility level. Manufacturing firms can use this conclusion to manage the challenges of becoming agile.

  4. Personality change over 40 years of adulthood: hierarchical linear modeling analyses of two longitudinal samples.

    Science.gov (United States)

    Helson, Ravenna; Jones, Constance; Kwan, Virginia S Y

    2002-09-01

    Normative personality change over 40 years was shown in 2 longitudinal cohorts with hierarchical linear modeling of California Psychological Inventory data obtained at multiple times between ages 21-75. Although themes of change and the paucity of differences attributable to gender and cohort largely supported findings of multiethnic cross-sectional samples, the authors also found much quadratic change and much individual variability. The form of quadratic change supported predictions about the influence of period of life and social climate as factors in change over the adult years: Scores on Dominance and Independence peaked in the middle age of both cohorts, and scores on Responsibility were lowest during peak years of the culture of individualism. The idea that personality change is most pronounced before age 30 and then reaches a plateau received no support.

  5. Exploring prospective secondary mathematics teachers' interpretation of student thinking through analysing students' work in modelling

    Science.gov (United States)

    Didis, Makbule Gozde; Erbas, Ayhan Kursat; Cetinkaya, Bulent; Cakiroglu, Erdinc; Alacaci, Cengiz

    2016-09-01

    Researchers point out the importance of teachers' knowledge of student thinking and the role of examining student work in various contexts to develop a knowledge base regarding students' ways of thinking. This study investigated prospective secondary mathematics teachers' interpretations of students' thinking as manifested in students' work that embodied solutions of mathematical modelling tasks. The data were collected from 25 prospective mathematics teachers enrolled in an undergraduate course through four 2-week-long cycles. Analysis of data revealed that the prospective teachers interpreted students' thinking in four ways: describing, questioning, explaining, and comparing. Moreover, whereas some of the prospective teachers showed a tendency to increase their attention to the meaning of students' ways of thinking more while they engaged in students' work in depth over time and experience, some of them continued to focus on only judging the accuracy of students' thinking. The implications of the findings for understanding and developing prospective teachers' ways of interpreting students' thinking are discussed.

  6. The usefulness of optical analyses for detecting vulnerable plaques using rabbit models

    Science.gov (United States)

    Nakai, Kanji; Ishihara, Miya; Kawauchi, Satoko; Shiomi, Masashi; Kikuchi, Makoto; Kaji, Tatsumi

    2011-03-01

    Purpose: Carotid artery stenting (CAS) has become a widely used option for the treatment of carotid stenosis. Although technical improvements have led to a decrease in complications related to CAS, distal embolism continues to be a problem. The purpose of this research was to investigate the usefulness of optical methods (Time-Resolved Laser-Induced Fluorescence Spectroscopy [TR-LIFS] and Reflection Spectroscopy [RS]) as diagnostic tools for the assessment of vulnerable atherosclerotic lesions, using rabbit models of vulnerable plaque. Materials & Methods: Male Japanese white rabbits were divided into a high-cholesterol diet group and a normal diet group. In addition, we used a Watanabe heritable hyperlipidemic (WHHL) rabbit to confirm the reliability of our animal model for this study. Experiment 1: TR-LIFS. Fluorescence was induced using the third harmonic of a Q-switched Nd:YAG laser, and TR-LIFS was performed using a photonic multi-channel analyzer with an ICCD (wavelength range, 200-860 nm). Experiment 2: RS. Reflection spectra in the wavelength range of 900 to 1700 nm were acquired using a spectrometer. Results: In TR-LIFS, the peak wavelength was shifted to longer values by plaque formation, and the method revealed a difference in peak levels between a normal aorta and a lipid-rich aorta. The RS method showed increased absorption from 1450 to 1500 nm for lipid-rich plaques; absorption around 1200 nm due to lipid was observed only in the WHHL group. Conclusion: These optical analysis methods might be useful for the diagnosis of vulnerable plaques. Keywords: Carotid artery stenting, vulnerable plaque, Time-Resolved Laser-Induced Fluorescence

  7. Multiplicity Control in Structural Equation Modeling: Incorporating Parameter Dependencies

    Science.gov (United States)

    Smith, Carrie E.; Cribbie, Robert A.

    2013-01-01

    When structural equation modeling (SEM) analyses are conducted, significance tests for all important model relationships (parameters including factor loadings, covariances, etc.) are typically conducted at a specified nominal Type I error rate ([alpha]). Despite the fact that many significance tests are often conducted in SEM, rarely is…

  8. Semântica expressivista = Expressivist semantics

    Directory of Open Access Journals (Sweden)

    Mendonça, Wilson John Pessoa

    2016-01-01

    Full Text Available The semantic programme of expressivism arose as an attempt to ground the non-cognitivist view of ethical discourse, but it was soon generalised to cover normative language in general. It promises to develop a global alternative to the classical truth-conditional approach to semantics: a non-factualist, pragmatics-based theory of linguistic meaning. Expressivists see the content of normative sentences as determined by their primary, non-descriptive use. Traditional versions of expressivist semantics proceed by systematically associating normative sentences with the mental attitudes they conventionally express. They assume that, if simple sentences express attitudes, then applying the connectives of propositional logic or variable binding to those sentences yields complex sentences that also express attitudes. The core of the present paper assesses some influential attempts to develop the expressivist programme, focusing on a problem vigorously discussed in the literature: the "negation problem for expressivism". Some approaches proposed in recent years, based on rejecting the central assumption of traditional expressivism, are considered in detail. Although a definitive assessment of these innovative approaches as satisfactory explanations of the workings of normative language cannot yet be reached, the paper argues that there are reasons for optimism

  9. Comparison of statistical inferences from the DerSimonian-Laird and alternative random-effects model meta-analyses - an empirical assessment of 920 Cochrane primary outcome meta-analyses

    DEFF Research Database (Denmark)

    Thorlund, Kristian; Wetterslev, Jørn; Awad, Tahany;

    2011-01-01

    In random-effects model meta-analysis, the conventional DerSimonian-Laird (DL) estimator typically underestimates the between-trial variance. Alternative variance estimators have been proposed to address this bias. This study aims to empirically compare statistical inferences from random-effects model meta-analyses based on the DL estimator and four alternative estimators, as well as on distributional assumptions (normal distribution and t-distribution) about the pooled intervention effect. We evaluated the discrepancies in p-values, in 95% confidence intervals (CIs) in statistically significant meta-analyses, and in the degree (percentage) of statistical heterogeneity (e.g. I(2)) across 920 Cochrane primary outcome meta-analyses. In total, 414 of the 920 meta-analyses were statistically significant with the DL meta-analysis, and 506 were not. Compared with the DL estimator, the four...
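For readers unfamiliar with the estimator discussed above, a minimal sketch of the DerSimonian-Laird between-trial variance (tau²) and the resulting random-effects pooled estimate; the trial effect sizes and variances below are invented:

```python
# Hedged sketch of the DerSimonian-Laird random-effects estimator.
def dersimonian_laird(effects, variances):
    """Return (tau2, pooled_effect) under the DL random-effects model."""
    w = [1.0 / v for v in variances]                       # fixed-effect weights
    sw = sum(w)
    fixed = sum(wi * yi for wi, yi in zip(w, effects)) / sw
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, effects))  # Cochran's Q
    c = sw - sum(wi ** 2 for wi in w) / sw
    tau2 = max(0.0, (q - (len(effects) - 1)) / c)          # truncated at zero
    wr = [1.0 / (v + tau2) for v in variances]             # random-effects weights
    pooled = sum(wi * yi for wi, yi in zip(wr, effects)) / sum(wr)
    return tau2, pooled

# Invented per-trial effects (e.g. log odds ratios) and their variances.
tau2, pooled = dersimonian_laird([0.2, 0.5, 0.1, 0.4], [0.04, 0.09, 0.05, 0.08])
print(tau2, pooled)
```

The truncation of tau² at zero is one source of the underestimation the abstract mentions: whenever Cochran's Q falls below its degrees of freedom, the between-trial variance is forced to zero.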

  10. Principios sobre semáforos [Principles of traffic signals]

    OpenAIRE

    Valencia Alaix, Víctor Gabriel

    2000-01-01

    Owing to the absence of a publication bringing together the most relevant aspects of traffic signals, and within the academic framework of the undergraduate and graduate traffic engineering courses in Roads and Transport at the Universidad Nacional de Colombia - Sede Medellín, this document has been prepared as an introductory guide to the subject. Its preparation draws together the main topics of several international and national publications; in addition, its development has taken into account the experi...

  11. O ciberativismo sem bússola [Cyberactivism without a compass]

    Directory of Open Access Journals (Sweden)

    Francisco Rüdiger

    2014-07-01

    Full Text Available The text asks whether an approach that, in essence, recounts the trajectory of so-called cyberactivism on its own terms is academically justified or whether, instead, it remains captive to a mythology that the phenomenon itself has already constructed, thereby authorising its subjects to dismiss, without perceived loss, any contribution of university origin.

  12. In silico analyses of dystrophin Dp40 cellular distribution, nuclear export signals and structure modeling

    Directory of Open Access Journals (Sweden)

    Alejandro Martínez-Herrera

    2015-09-01

    Full Text Available Dystrophin Dp40 is the shortest protein encoded by the DMD (Duchenne muscular dystrophy) gene. This protein is unique since it lacks the C-terminal end of dystrophins. In this data article, we describe the subcellular localization, nuclear export signals and three-dimensional structure modeling of putative Dp40 proteins using bioinformatics tools. The Dp40 wild-type protein was predicted to be cytoplasmic, while Dp40n4 was predicted to be nuclear. The changes L93P and L170P are involved in the nuclear localization of the Dp40n4 protein. A close analysis of the Dp40 protein revealed that amino acids 93LEQEHNNLV101 and 168LLLHDSIQI176 could function as NES sequences, and these scores are lost in Dp40n4. In addition, the changes L93/170P modify the tertiary structure of the putative Dp40 mutants. The analysis showed that changing residues 93 and 170 from leucine to proline allows the nuclear localization of Dp40 proteins. The data described here are related to the research article entitled "EF-hand domains are involved in the differential cellular distribution of dystrophin Dp40" (J. Aragón et al., Neurosci. Lett. 600 (2015) 115-120) [1].

  13. Analysing hydro-mechanical behaviour of reinforced slopes through centrifuge modelling

    Science.gov (United States)

    Veenhof, Rick; Wu, Wei

    2017-04-01

    Every year, slope instability causes casualties and damage to property and the environment. The behaviour of slopes during and after such events is complex and depends on meteorological conditions, slope geometry, hydro-mechanical soil properties, boundary conditions and the initial state of the soils. This study describes the effects of adding reinforcement, consisting of randomly distributed polyolefin monofilament fibres or Ryegrass (Lolium), on the behaviour of medium-fine sand in loose and medium-dense conditions. Direct shear tests were performed on sand specimens with different void ratios, water contents and fibre or root densities. To simulate the stress state of full-scale field situations, centrifuge model tests were conducted on sand specimens with different slope angles, reinforced-layer thicknesses, fibre densities, void ratios and water contents. An increase in peak shear strength is observed in all reinforced cases, and the centrifuge tests show that reinforced slopes take longer to fail. The location of shear band formation and the patch displacement behaviour indicate that the design of slope reinforcement has a significant effect on the failure behaviour. Future research will focus on the effect of plant water uptake on soil cohesion.

  14. Biomechanical analyses of prosthetic mesh repair in a hiatal hernia model.

    Science.gov (United States)

    Alizai, Patrick Hamid; Schmid, Sofie; Otto, Jens; Klink, Christian Daniel; Roeth, Anjali; Nolting, Jochen; Neumann, Ulf Peter; Klinge, Uwe

    2014-10-01

    Recurrence rate of hiatal hernia can be reduced with prosthetic mesh repair; however, type and shape of the mesh are still a matter of controversy. The purpose of this study was to investigate the biomechanical properties of four conventional meshes: pure polypropylene mesh (PP-P), polypropylene/poliglecaprone mesh (PP-U), polyvinylidenefluoride/polypropylene mesh (PVDF-I), and pure polyvinylidenefluoride mesh (PVDF-S). Meshes were tested either in warp direction (parallel to production direction) or perpendicular to the warp direction. A Zwick testing machine was used to measure elasticity and effective porosity of the textile probes. Stretching of the meshes in warp direction required forces that were up to 85-fold higher than the same elongation in perpendicular direction. Stretch stress led to loss of effective porosity in most meshes, except for PVDF-S. Biomechanical impact of the mesh was additionally evaluated in a hiatal hernia model. The different meshes were used either as rectangular patches or as circular meshes. Circular meshes led to a significant reinforcement of the hiatus, largely unaffected by the orientation of the warp fibers. In contrast, rectangular meshes provided a significant reinforcement only when warp fibers ran perpendicular to the crura. Anisotropic elasticity of prosthetic meshes should therefore be considered in hiatal closure with rectangular patches.

  15. Alpins and Thibos vectorial astigmatism analyses: proposal of a linear regression model between methods

    Directory of Open Access Journals (Sweden)

    Giuliano de Oliveira Freitas

    2013-10-01

    Full Text Available PURPOSE: To determine linear regression models between Alpins descriptive indices and Thibos astigmatic power vectors (APV), assessing the validity and strength of such correlations. METHODS: This case series prospectively assessed 62 eyes of 31 consecutive cataract patients with preoperative corneal astigmatism between 0.75 and 2.50 diopters in both eyes. Patients were randomly assigned to two phacoemulsification groups: one received an AcrySof® Toric intraocular lens (IOL) in both eyes, and the other received an AcrySof® Natural IOL associated with limbal relaxing incisions, also in both eyes. All patients were re-evaluated postoperatively at 6 months, when refractive astigmatism analysis was performed using both the Alpins and Thibos methods. The ratio of postoperative to preoperative Thibos APV (APVratio) and its linear regression against the Alpins percentage of success of astigmatic surgery, percentage of astigmatism corrected and percentage of astigmatism reduction at the intended axis were assessed. RESULTS: A significant negative correlation was found between the APVratio and the Alpins percentage of success (%Success) (Spearman's ρ = -0.93); the linear regression is given by the following equation: %Success = (-APVratio + 1.00) × 100. CONCLUSION: The linear regression found between the APVratio and %Success permits a validated mathematical inference concerning the overall success of astigmatic surgery.
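The Thibos APV and the reported regression can be sketched as follows. The blur-strength formula is the standard Thibos power-vector definition rather than something quoted from this abstract, and the refraction values in the test are hypothetical:

```python
import math

# Sketch: Thibos power vectors and the paper's relation
# %Success = (-APVratio + 1.00) x 100.
def apv(sphere, cyl, axis_deg):
    """Thibos astigmatic power vector (blur strength) from S/C/axis refraction."""
    m = sphere + cyl / 2.0                                  # spherical equivalent
    j0 = -(cyl / 2.0) * math.cos(math.radians(2 * axis_deg))
    j45 = -(cyl / 2.0) * math.sin(math.radians(2 * axis_deg))
    return math.sqrt(m * m + j0 * j0 + j45 * j45)

def percent_success(apv_pre, apv_post):
    """Alpins %Success inferred from the study's regression equation."""
    return (-(apv_post / apv_pre) + 1.00) * 100.0

# Hypothetical example: astigmatism reduced to a quarter of its preoperative APV.
print(percent_success(1.0, 0.25))  # → 75.0
```

Note that the regression implies 100% success when the postoperative APV vanishes and 0% when it equals the preoperative value, which is what makes the mathematical inference in the conclusion possible.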

  16. Analysing movements in investor’s risk aversion using the Heston volatility model

    Directory of Open Access Journals (Sweden)

    Alexie ALUPOAIEI

    2013-03-01

    Full Text Available In this paper we aim to identify and analyse, where present, an "epidemiological" relationship between the forecasts of professional investors and short-term developments in the EUR/RON exchange rate. Although we do not employ a typical epidemiological model of the kind used in biological research, we investigated the hypothesis that, after the Lehman Brothers crash and, implicitly, the onset of the current financial crisis, the forecasts of professional investors have significant explanatory power over short-run movements in EUR/RON futures. How does this mechanism work? First, professional forecasters take account of current macroeconomic, financial and political conditions and then elaborate their forecasts. Second, based on those forecasts, they take positions in the Romanian exchange market for hedging and/or speculative purposes; these positions, however, incorporate differing degrees of uncertainty. In parallel, part of their expectations is disseminated to the public via media channels. When important movements are observed in the macroeconomic, financial or political spheres, the positions of professional investors in the FX derivatives market are activated. The current study represents a first step in this direction of analysis for the Romanian case. For the objectives formulated above, different measures of EUR/RON volatility were estimated and compared with implied volatilities. In a second stage, co-integration and dynamic-correlation tools were used to investigate the relationship between implied volatility and the daily returns of the EUR/RON exchange rate.
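The realised-volatility measures compared with implied volatility in studies of this kind are commonly computed as the annualised standard deviation of daily returns; a sketch with invented return data, assuming the conventional 252 trading days per year:

```python
import math

# Illustrative sketch: annualised realised volatility of daily returns,
# the sort of estimate set against implied volatility in the study above.
def realized_vol(returns, periods_per_year=252):
    """Annualised sample standard deviation of a series of daily returns."""
    n = len(returns)
    mean = sum(returns) / n
    var = sum((r - mean) ** 2 for r in returns) / (n - 1)   # sample variance
    return math.sqrt(var * periods_per_year)

# Invented daily EUR/RON-style returns (±1%).
print(realized_vol([0.01, -0.01, 0.01, -0.01]))
```

The gap between this backward-looking estimate and the option-implied volatility is one common proxy for shifts in investor risk aversion.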

  17. Model-based analyses of bioequivalence crossover trials using the stochastic approximation expectation maximisation algorithm.

    Science.gov (United States)

    Dubois, Anne; Lavielle, Marc; Gsteiger, Sandro; Pigeolet, Etienne; Mentré, France

    2011-09-20

    In this work, we develop a bioequivalence analysis using nonlinear mixed effects models (NLMEM) that mimics the standard noncompartmental analysis (NCA). We estimate NLMEM parameters, including between-subject and within-subject variability and treatment, period and sequence effects. We explain how to perform a Wald test on a secondary parameter, and we propose an extension of the likelihood ratio test for bioequivalence. We compare these NLMEM-based bioequivalence tests with standard NCA-based tests. We evaluate by simulation the NCA and NLMEM estimates and the type I error of the bioequivalence tests. For NLMEM, we use the stochastic approximation expectation maximisation (SAEM) algorithm implemented in monolix. We simulate crossover trials under H(0) using different numbers of subjects and of samples per subject. We simulate with different settings for between-subject and within-subject variability and for the residual error variance. The simulation study illustrates the accuracy of NLMEM-based geometric means estimated with the SAEM algorithm, whereas the NCA estimates are biased for sparse design. NCA-based bioequivalence tests show good type I error except for high variability. For a rich design, type I errors of NLMEM-based bioequivalence tests (Wald test and likelihood ratio test) do not differ from the nominal level of 5%. Type I errors are inflated for sparse design. We apply the bioequivalence Wald test based on NCA and NLMEM estimates to a three-way crossover trial, showing that Omnitrope® (Sandoz GmbH, Kundl, Austria) powder and solution are bioequivalent to Genotropin® (Pfizer Pharma GmbH, Karlsruhe, Germany). NLMEM-based bioequivalence tests are an alternative to standard NCA-based tests. However, caution is needed for small sample size and highly variable drugs.
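The NCA that the NLMEM approach above mimics rests chiefly on the area under the concentration-time curve (AUC). A minimal linear-trapezoidal sketch, with invented sampling times and concentrations rather than trial data:

```python
# Sketch of the core NCA exposure metric: AUC by the linear trapezoidal rule.
def auc_trapezoid(times, conc):
    """Area under the concentration-time curve over the sampled interval."""
    return sum((t1 - t0) * (c0 + c1) / 2.0
               for t0, t1, c0, c1 in zip(times, times[1:], conc, conc[1:]))

times = [0, 1, 2, 4, 8]           # hours (invented sampling schedule)
conc = [0.0, 4.0, 3.0, 1.5, 0.5]  # ng/mL (invented concentrations)
print(auc_trapezoid(times, conc))  # → 14.0
```

The bias the abstract reports for sparse designs arises here directly: with few samples per subject, the trapezoids approximate the true curve poorly, which is exactly the situation where the model-based NLMEM estimates retain their accuracy.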

  18. Study on the Appraising Model of Hotel Customer Satisfaction — Based on SEM Analysis (基于SEM的饭店顾客满意度测评模型研究)

    Institute of Scientific and Technical Information of China (English)

    黄燕玲; 黄震方; 袁林旺

    2006-01-01

    Using structural equation modelling (SEM) and improving on the prevailing international customer satisfaction index models, a hotel customer satisfaction (HCS) appraisal model was constructed from a multidisciplinary perspective and then tested empirically. The HCS appraisal model is a structural equation model with causal relationships; it was tested using the LISREL and SPSS statistical packages. The results show that the observed variables in the measurement model have a significant effect on customer satisfaction, and that the path coefficients among the latent variables in the structural model are largely consistent with the hypotheses. Conclusions and recommendations are given on the basis of the analysis.

  19. Construction of Consumers' Purchase Intention Model Based on SEM: with Seamless Underwear as an Example (基于SEM消费者购买意愿模型构建:以无缝内衣为例)

    Institute of Scientific and Technical Information of China (English)

    熊文

    2012-01-01

    Based on the relevant literature, a theoretical model is first proposed. SPSS 17.0 is then used to carry out factor analyses on the model's sub-dimensions and main dimensions; on this basis, the structural equation modelling (SEM) facility of AMOS 17.0 is used for model construction, parameter estimation and goodness-of-fit evaluation, followed by an overall assessment of the whole model. The resulting purchase-intention model for seamless underwear consumers can serve as a reference for seamless underwear firms in formulating brand marketing strategies.

  20. Tropical cyclones in a T159 resolution global climate model: comparison with observations and re-analyses

    Science.gov (United States)

    Bengtsson, L.; Hodges, K. I.; Esch, M.

    2007-08-01

    Tropical cyclones have been investigated in a T159 version of the MPI ECHAM5 climate model using a novel technique to diagnose the evolution of the three-dimensional vorticity structure of tropical cyclones, including their full life cycle from weak initial vortices to their possible extra-tropical transition. Results have been compared with re-analyses [the European Centre for Medium-Range Weather Forecasts (ECMWF) 40-yr Re-analysis (ERA40) and Japanese 25 yr re-analysis (JRA25)] and observed tropical storms during the period 1978-1999 for the Northern Hemisphere. There is no indication of any trend in the number or intensity of tropical storms during this period in ECHAM5 or in re-analyses but there are distinct inter-annual variations. The storms simulated by ECHAM5 are realistic both in space and time, but the model and even more so the re-analyses, underestimate the intensities of the most intense storms (in terms of their maximum wind speeds). There is an indication of a response to El Niño-Southern Oscillation (ENSO) with a smaller number of Atlantic storms during El Niño in agreement with previous studies. The global divergence circulation responds to El Niño by setting up a large-scale convergence flow, with the centre over the central Pacific with enhanced subsidence over the tropical Atlantic. At the same time there is an increase in the vertical wind shear in the region of the tropical Atlantic where tropical storms normally develop. There is a good correspondence between the model and ERA40 except that the divergence circulation is somewhat stronger in the model. The model underestimates storms in the Atlantic but tends to overestimate them in the Western Pacific and in the North Indian Ocean. It is suggested that the overestimation of storms in the Pacific by the model is related to an overly strong response to the tropical Pacific sea surface temperature (SST) anomalies. The overestimation in the North Indian Ocean is likely to be due to an over

  1. Pathophysiologic and transcriptomic analyses of viscerotropic yellow fever in a rhesus macaque model.

    Science.gov (United States)

    Engelmann, Flora; Josset, Laurence; Girke, Thomas; Park, Byung; Barron, Alex; Dewane, Jesse; Hammarlund, Erika; Lewis, Anne; Axthelm, Michael K; Slifka, Mark K; Messaoudi, Ilhem

    2014-01-01

    Infection with yellow fever virus (YFV), an explosively replicating flavivirus, results in viral hemorrhagic disease characterized by cardiovascular shock and multi-organ failure. Unvaccinated populations experience 20 to 50% fatality. Few studies have examined the pathophysiological changes that occur in humans during YFV infection due to the sporadic nature and remote locations of outbreaks. Rhesus macaques are highly susceptible to YFV infection, providing a robust animal model to investigate host-pathogen interactions. In this study, we characterized disease progression as well as alterations in immune system homeostasis, cytokine production and gene expression in rhesus macaques infected with the virulent YFV strain DakH1279 (YFV-DakH1279). Following infection, YFV-DakH1279 replicated to high titers resulting in viscerotropic disease with ∼72% mortality. Data presented in this manuscript demonstrate for the first time that lethal YFV infection results in profound lymphopenia that precedes the hallmark changes in liver enzymes and that although tissue damage was noted in liver, kidneys, and lymphoid tissues, viral antigen was only detected in the liver. These observations suggest that additional tissue damage could be due to indirect effects of viral replication. Indeed, circulating levels of several cytokines peaked shortly before euthanasia. Our study also includes the first description of YFV-DakH1279-induced changes in gene expression within peripheral blood mononuclear cells 3 days post-infection prior to any clinical signs. These data show that infection with wild type YFV-DakH1279 or live-attenuated vaccine strain YFV-17D, resulted in 765 and 46 differentially expressed genes (DEGs), respectively. DEGs detected after YFV-17D infection were mostly associated with innate immunity, whereas YFV-DakH1279 infection resulted in dysregulation of genes associated with the development of immune response, ion metabolism, and apoptosis. 
Therefore, WT-YFV infection…

  2. [Selection of a statistical model for the evaluation of the reliability of the results of toxicological analyses. II. Selection of our statistical model for the evaluation].

    Science.gov (United States)

    Antczak, K; Wilczyńska, U

    1980-01-01

    Part II presents a statistical model devised by the authors for evaluating the results of toxicological analyses. The model comprises: 1. Establishment of a reference value, based on the authors' own measurements taken by two independent analytical methods. 2. Selection of laboratories, based on the deviation of the obtained values from the reference ones. 3. Evaluation of subsequent quality controls and of the individual laboratories by means of analysis of variance, Student's t-test and the test of differences.
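    The scheme above lends itself to a small numerical illustration. The sketch below is not the authors' exact procedure; the acceptance threshold, function name, and data are hypothetical. It scores one laboratory against a reference value using the relative deviation and a one-sample t statistic:

```python
# Illustrative sketch of lab evaluation against a reference value.
# NOT the authors' exact model; threshold and data are hypothetical.
import math
import statistics

def evaluate_lab(measurements, reference, max_rel_dev=0.10):
    mean = statistics.mean(measurements)
    sd = statistics.stdev(measurements)
    n = len(measurements)
    rel_dev = abs(mean - reference) / reference
    # one-sample t statistic of the lab mean against the reference value
    t_stat = (mean - reference) / (sd / math.sqrt(n))
    return {"mean": mean, "rel_dev": rel_dev, "t": t_stat,
            "accepted": rel_dev <= max_rel_dev}

# hypothetical replicate measurements from one laboratory
result = evaluate_lab([4.8, 5.1, 4.9, 5.2, 5.0], reference=5.0)
```

    A laboratory would then be retained or excluded depending on whether its deviation stays within the agreed tolerance.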

  3. An assessment of the wind re-analyses in the modelling of an extreme sea state in the Black Sea

    Science.gov (United States)

    Akpinar, Adem; Ponce de León, S.

    2016-03-01

    This study aims at an assessment of wind re-analyses for modelling storms in the Black Sea. A wind-wave modelling system (Simulating WAves Nearshore, SWAN) is applied to the Black Sea basin and calibrated with buoy data for three recent re-analysis wind sources, namely the European Centre for Medium-Range Weather Forecasts Reanalysis-Interim (ERA-Interim), the Climate Forecast System Reanalysis (CFSR), and the Modern Era Retrospective Analysis for Research and Applications (MERRA), during an extreme wave event that occurred in the north-eastern part of the Black Sea. The SWAN model simulations are carried out with both default and tuned settings for the deep-water source terms, especially whitecapping. The performance of the best model configurations based on calibration against buoy data is discussed using data from the JASON2, TOPEX-Poseidon, ENVISAT and GFO satellites. The SWAN calibration shows that the best configuration is obtained with the Janssen and Komen formulations for wave generation by wind and whitecapping dissipation, with a whitecapping coefficient (Cds) equal to 1.8e-5, using ERA-Interim. In addition, from the collocated SWAN results against the satellite records, the best configuration is determined to be SWAN using the CFSR winds. The numerical results thus show that the accuracy of a wave forecast depends on the quality of the wind field and on the ability of the SWAN model to simulate waves under extreme, fetch-limited wind conditions.
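    Calibration and validation of this kind are usually summarized with standard error statistics. A minimal sketch of computing bias, RMSE, and scatter index; the wave heights below are hypothetical, not the buoy or satellite data of the study:

```python
# Standard wave-model skill metrics; the data arrays are invented.
import math

def skill(model, obs):
    """Bias, RMSE, and scatter index for modelled vs. observed wave heights."""
    n = len(obs)
    bias = sum(m - o for m, o in zip(model, obs)) / n
    rmse = math.sqrt(sum((m - o) ** 2 for m, o in zip(model, obs)) / n)
    si = rmse / (sum(obs) / n)  # scatter index: RMSE normalized by mean observation
    return bias, rmse, si

hs_model = [2.1, 3.4, 5.0, 4.2]  # hypothetical SWAN significant wave heights (m)
hs_buoy = [2.0, 3.6, 4.8, 4.3]   # hypothetical buoy observations (m)
bias, rmse, si = skill(hs_model, hs_buoy)
```

    Comparing such metrics across wind forcings (ERA-Interim, CFSR, MERRA) is what drives the choice of best configuration.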

  4. Modeling and stress analyses of a normal foot-ankle and a prosthetic foot-ankle complex.

    Science.gov (United States)

    Ozen, Mustafa; Sayman, Onur; Havitcioglu, Hasan

    2013-01-01

    Total ankle replacement (TAR) is a relatively new concept and is becoming more popular for the treatment of ankle arthritis and fractures. Because of the high costs and difficulties of experimental studies, the development of TAR prostheses is progressing very slowly. For this reason, medical imaging techniques such as CT and MRI have become more and more useful. The finite element method (FEM) is a widely used technique to estimate the mechanical behavior of materials and structures in engineering applications. FEM has also been increasingly applied to biomechanical analyses of human bones, tissues and organs, thanks to developments in both computing capabilities and medical imaging techniques. 3-D finite element models of the human foot and ankle reconstructed from MR and CT images have been investigated by several authors. In this study, the geometries used in modeling a normal and a prosthetic foot-ankle complex were obtained from 3D reconstruction of CT images. The segmentation software MIMICS was used to generate the 3D images of the bony structures, soft tissues and prosthesis components of the normal and prosthetic ankle-foot complexes. Except for the fused spaces between the adjacent surfaces of the phalanges, the metatarsal, cuneiform, cuboid, navicular, talus and calcaneus bones, the soft tissues and the prosthesis components were developed independently to form the foot-ankle complex. The SOLIDWORKS program was used to form the boundary surfaces of all model components, and the solid models were then obtained from these boundary surfaces. The finite element analysis software ABAQUS was used to perform the numerical stress analyses of these models for the balanced standing position. Plantar pressure and von Mises stress distributions of the normal and prosthetic ankles were compared with each other. There was a peak pressure increase at the fourth metatarsal, first metatarsal and talus bones and a decrease at the intermediate cuneiform and calcaneus bones, in…

  5. Experiments and sensitivity analyses for heat transfer in a meter-scale regularly fractured granite model with water flow

    Institute of Scientific and Technical Information of China (English)

    Wei LU; Yan-yong XIANG

    2012-01-01

    Experiments of saturated water flow and heat transfer were conducted for a meter-scale model of regularly fractured granite. The fractured rock model (height 1502.5 mm, width 904 mm, and thickness 300 mm), embedded with two vertical and two horizontal fractures of pre-set apertures, was constructed from 18 pieces of intact granite. The granite was taken from a site currently being investigated for a high-level nuclear waste repository in China. The experiments involved different heat source temperatures and vertical water fluxes, with the embedded fractures either open or filled with sand. A finite difference scheme and computer code for the calculation of water flow and heat transfer in regularly fractured rocks was developed, verified against both the experimental data and calculations from the TOUGH2 code, and employed for parametric sensitivity analyses. The experiments revealed that, among other things, the temperature distribution was influenced by water flow in the fractures, especially the water flow in the vertical fracture adjacent to the heat source, and that the heat conduction between neighboring rock blocks in the model with sand-filled fractures was enhanced by the sand, with a larger range of influence of the heat source and a longer time for approaching the asymptotic steady state than in the model with open fractures. The temperatures from the experiments were in general slightly lower than those from the numerical calculations, probably because a certain amount of outward heat transfer at the model perimeter was unavoidable in the experiments. The parametric sensitivity analyses indicated that the temperature distribution was highly sensitive to water flow in the fractures, and that the water temperature in the vertical fracture adjacent to the heat source was rather insensitive to water flow in the other fractures.
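    The study's scheme couples water flow in fractures with conduction in the rock blocks; as a much-reduced illustration of the finite-difference idea it verifies against TOUGH2, the sketch below marches a 1-D, conduction-only explicit scheme with a fixed-temperature heat source at one end (all parameter values are hypothetical):

```python
# Toy 1-D explicit finite-difference heat conduction; the study's actual
# scheme also couples fracture water flow and is multi-dimensional.
def conduct_1d(t_left, t_init, alpha, dx, dt, nx, nsteps):
    """March T_i^{n+1} = T_i^n + r*(T_{i+1} - 2*T_i + T_{i-1}), r = alpha*dt/dx^2."""
    r = alpha * dt / dx ** 2
    assert r <= 0.5, "explicit scheme stability limit"
    T = [t_init] * nx
    T[0] = t_left  # fixed-temperature heat source boundary
    for _ in range(nsteps):
        Tn = T[:]
        for i in range(1, nx - 1):
            T[i] = Tn[i] + r * (Tn[i + 1] - 2 * Tn[i] + Tn[i - 1])
    return T

# hypothetical granite-like diffusivity, 30 nodes at 1 cm spacing
T = conduct_1d(t_left=90.0, t_init=20.0, alpha=1.1e-6, dx=0.01, dt=40.0,
               nx=30, nsteps=2000)
```

    With r kept at or below 0.5 the scheme is monotone, so a monotone initial profile stays monotone as heat diffuses away from the source.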

  6. Spatial Modeling Techniques for Characterizing Geomaterials: Deterministic vs. Stochastic Modeling for Single-Variable and Multivariate Analyses

    Institute of Scientific and Technical Information of China (English)

    Katsuaki Koike

    2011-01-01

    Sample data in the Earth and environmental sciences are limited in quantity and in sampling location; therefore, sophisticated spatial modeling techniques are indispensable for accurate imaging of the complicated structures and properties of geomaterials. This paper presents several effective methods that are grouped into two categories depending on the nature of the regionalized data used. Type I data originate from multiple populations, while type II data satisfy the prerequisite of stationarity and have distinct spatial correlations. For type I data, three methods are shown to be effective and demonstrated to produce plausible results: (1) a spline-based method, (2) a combination of a spline-based method with stochastic simulation, and (3) a neural network method. Geostatistics proves to be a powerful tool for type II data. Three new geostatistical approaches are presented with case studies: an application to directional data such as fractures, multi-scale modeling that incorporates a scaling law, and space-time joint analysis for multivariate data. Methods for improving the contribution of such spatial modeling to the Earth and environmental sciences are also discussed, and important future problems to be solved are summarized.

  7. Assessing the utility of FIB-SEM images for shale digital rock physics

    Science.gov (United States)

    Kelly, Shaina; El-Sobky, Hesham; Torres-Verdín, Carlos; Balhoff, Matthew T.

    2016-09-01

    Shales and other unconventional or low-permeability (tight) reservoirs house vast quantities of hydrocarbons, often demonstrate considerable water uptake, and are potential repositories for fluid sequestration. The pore-scale topology and fluid transport mechanisms within these nanoporous sedimentary rocks remain to be fully understood. Image-informed pore-scale models are useful tools for studying porous media. A debated question in shale pore-scale petrophysics is whether there is a representative elementary volume (REV) for shale models and, if an REV exists, how it differs among petrophysical properties. We obtain three-dimensional (3D) models of the topology of microscale shale volumes from image analysis of focused ion beam-scanning electron microscope (FIB-SEM) image stacks and investigate the utility of these models as a potential REV for shale. The data used in this work include multiple local groups of neighboring FIB-SEM images of different microscale sizes, corresponding core-scale (milli- and centimeter) laboratory data, and, for comparison, series of two-dimensional (2D) cross sections from broad ion beam SEM (BIB-SEM) images, which capture a larger microscale field of view than the FIB-SEM images; this array of data is larger than that used in the majority of investigations with FIB-SEM-derived microscale models of shale. Properties such as porosity, organic matter content, and pore connectivity are extracted from each model. Assessments of permeability with single-phase, pressure-driven flow simulations are performed in the connected pore space of the models using the lattice-Boltzmann method. Calculated petrophysical properties are compared to those of neighboring FIB-SEM images and to core-scale measurements of the sample associated with the FIB-SEM sites. Results indicate that FIB-SEM images below ∼5000 μm³ volume (the largest volume analyzed) are not a suitable REV for shale permeability and pore-scale networks; i.e., field of view…
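    Two of the properties extracted from such image stacks, porosity and pore connectivity, can be sketched on a binary voxel model (1 = pore, 0 = solid). The 3x3x3 grid below is synthetic, not FIB-SEM data, and the function name is our own:

```python
# Hedged sketch of pore-scale image analysis: porosity and the fraction of
# pore voxels in the largest 6-connected cluster. Grid is synthetic.
from collections import deque

def porosity_and_connectivity(grid):
    nz, ny, nx = len(grid), len(grid[0]), len(grid[0][0])
    pores = [(z, y, x) for z in range(nz) for y in range(ny)
             for x in range(nx) if grid[z][y][x] == 1]
    por = len(pores) / (nz * ny * nx)
    seen, best = set(), 0
    for seed in pores:                      # breadth-first search per cluster
        if seed in seen:
            continue
        queue, size = deque([seed]), 0
        seen.add(seed)
        while queue:
            z, y, x = queue.popleft()
            size += 1
            for dz, dy, dx in ((1, 0, 0), (-1, 0, 0), (0, 1, 0),
                               (0, -1, 0), (0, 0, 1), (0, 0, -1)):
                n = (z + dz, y + dy, x + dx)
                if (0 <= n[0] < nz and 0 <= n[1] < ny and 0 <= n[2] < nx
                        and grid[n[0]][n[1]][n[2]] == 1 and n not in seen):
                    seen.add(n)
                    queue.append(n)
        best = max(best, size)
    return por, (best / len(pores) if pores else 0.0)

grid = [[[0] * 3 for _ in range(3)] for _ in range(3)]
grid[0][0][0] = grid[0][0][1] = grid[0][0][2] = 1  # a 3-voxel pore channel
grid[2][2][2] = 1                                  # an isolated pore
porosity, connected = porosity_and_connectivity(grid)
```

    Only the connected fraction of the pore space contributes to flow simulations such as the lattice-Boltzmann permeability runs described above, which is why the two measures can diverge between sub-volumes.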

  8. The use of SEM/EDS method in mineralogical analysis of ordinary chondritic meteorite

    Directory of Open Access Journals (Sweden)

    Breda Mirtič

    2009-12-01

    The aim of this study was to evaluate the potential of scanning electron microscopy coupled with energy dispersive X-ray spectroscopy (SEM/EDS) for the determination of mineral phases according to their stoichiometry and for the assessment of the mineral composition of an ordinary chondritic meteorite. For the purposes of this study, the H3-type ordinary chondritic meteorite Abbott was selected. SEM/EDS allows identification and characterisation of mineral phases whose size is below the resolution of an optical microscope. Mineral phases in chondrules and in the interstitial matrix were located in backscattered electron (BSE) mode and were assessed from the atomic proportions of their constituent elements, obtained by EDS analysis. SEM/EDS analyses of mineral phases showed that the Abbott meteorite is characterised by the Fe-rich (Fe, Ni) alloy kamacite, the Fe-sulphide troilite or pyrrhotite, chromite, Mg-rich olivine, the orthopyroxenes bronzite or hypersthene, the clinopyroxene Al-diopside, the acid plagioclase oligoclase, the accessory mineral chlorapatite, and secondary Fe-hydroxide minerals (goethite or lepidocrocite). The results of semi-quantitative analyses confirmed that most of the analysed mineral phases conform well to stoichiometric minerals, with minor deviations of oxygen from stoichiometric proportions. A comparison between mineral phases in chondrules and in the interstitial matrix was also performed; however, it showed no significant differences in elemental composition. Differences in chemical composition between minerals in the interstitial matrix and in chondrules are sometimes too small to be discerned by SEM/EDS; therefore, knowledge of SEM/EDS capabilities is important for correct interpretation of chondrite formation.

  9. Scanning Electron Microscopy (SEM) Procedure for HE Powders on a Zeiss Sigma HD VP SEM

    Energy Technology Data Exchange (ETDEWEB)

    Zaka, F. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2016-11-15

    This method describes the characterization of inert and HE materials by the Zeiss Sigma HD VP field emission Scanning Electron Microscope (SEM). The SEM uses an accelerated electron beam to generate high-magnification images of explosives and other materials. It is fitted with five detectors (SE, Inlens, STEM, VPSE, HDBSD) to enable imaging of the sample via different secondary electron signatures, angles, and energies. In addition to imaging through electron detection, the microscope is also fitted with two Oxford Instruments 80 mm Energy Dispersive Spectrometer (EDS) detectors to generate elemental constituent spectra and two-dimensional maps of the material being scanned.

  10. Genetic analyses using GGE model and a mixed linear model approach, and stability analyses using AMMI bi-plot for late-maturity alpha-amylase activity in bread wheat genotypes.

    Science.gov (United States)

    Rasul, Golam; Glover, Karl D; Krishnan, Padmanaban G; Wu, Jixiang; Berzonsky, William A; Fofana, Bourlaye

    2017-06-01

    A low falling number, and discounting of grain when it is downgraded in class, are the consequences of excessive late-maturity α-amylase activity (LMAA) in bread wheat (Triticum aestivum L.). Grain expressing high LMAA produces poorer-quality bread products. To effectively breed for low LMAA, it is necessary to understand which genes control it and how they are expressed, particularly when genotypes are grown in different environments. In this study, an International Collection (IC) of 18 spring wheat genotypes and another set of 15 spring wheat cultivars adapted to South Dakota (SD), USA, were assessed to characterize the genetic component of LMAA over 5 and 13 environments, respectively. The data were analysed using a GGE model with a mixed linear model approach, and stability analysis was presented using an AMMI bi-plot in R. All estimated variance components and their proportions of the total phenotypic variance were highly significant for both sets of genotypes, which was validated by the AMMI model analysis. Broad-sense heritability for LMAA was higher in the SD-adapted cultivars (53%) than in the IC (49%). Significant genetic effects and the stability analyses showed that some genotypes, e.g. 'Lancer', 'Chester' and 'LoSprout' from the IC, and 'Alsen', 'Traverse' and 'Forefront' from the SD cultivars, could be used as parents to develop new cultivars expressing low levels of LMAA. Stability analysis using the AMMI bi-plot revealed that 'Chester', 'Lancer' and 'Advance' were the most stable across environments, while 'Kinsman', 'Lerma52' and 'Traverse' exhibited the lowest stability for LMAA across environments.
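    Once variance components have been estimated (e.g. from a mixed model as above), broad-sense heritability on an entry-mean basis follows from a standard formula. The sketch below uses that textbook formula with invented numbers, not the study's estimates:

```python
# Broad-sense heritability on an entry-mean basis:
#   H2 = Vg / (Vg + Vge/e + Ve/(e*r))
# where Vg, Vge, Ve are genotype, genotype-by-environment and residual
# variance components, e = environments, r = replicates. Numbers are invented.
def broad_sense_h2(v_g, v_ge, v_e, n_env, n_rep):
    return v_g / (v_g + v_ge / n_env + v_e / (n_env * n_rep))

h2 = broad_sense_h2(v_g=1.2, v_ge=0.8, v_e=1.5, n_env=5, n_rep=2)
```

    Increasing the number of test environments shrinks the genotype-by-environment and residual terms in the denominator, which is one reason the two genotype sets, tested in 5 vs. 13 environments, need not yield the same heritability.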

  11. Data Assimilation Tools for CO2 Reservoir Model Development – A Review of Key Data Types, Analyses, and Selected Software

    Energy Technology Data Exchange (ETDEWEB)

    Rockhold, Mark L.; Sullivan, E. C.; Murray, Christopher J.; Last, George V.; Black, Gary D.

    2009-09-30

    Pacific Northwest National Laboratory (PNNL) has embarked on an initiative to develop world-class capabilities for performing experimental and computational analyses associated with geologic sequestration of carbon dioxide. The ultimate goal of this initiative is to provide science-based solutions for helping to mitigate the adverse effects of greenhouse gas emissions. This Laboratory-Directed Research and Development (LDRD) initiative currently has two primary focus areas: advanced experimental methods and computational analysis. The experimental methods focus area involves the development of new experimental capabilities, supported in part by the U.S. Department of Energy's (DOE) Environmental Molecular Sciences Laboratory (EMSL) housed at PNNL, for quantifying mineral reaction kinetics with CO2 under high-temperature and high-pressure (supercritical) conditions. The computational analysis focus area involves numerical simulation of coupled, multi-scale processes associated with CO2 sequestration in geologic media, and the development of software to facilitate building and parameterizing conceptual and numerical models of subsurface reservoirs that represent geologic repositories for injected CO2. This report describes work in support of the computational analysis focus area, which currently consists of several collaborative research projects, all geared towards the development and application of conceptual and numerical models for geologic sequestration of CO2. The software being developed for this focus area is referred to as the Geologic Sequestration Software Suite (GS3), and a wiki-based software framework is being developed to support it. This report summarizes work performed in FY09 on one of the LDRD projects in the computational analysis focus area, titled Data Assimilation Tools for CO2 Reservoir Model Development. Some key objectives of this project in FY09 were to assess the current state…

  12. Kvalitative analyser

    DEFF Research Database (Denmark)

    Boolsen, Merete Watt

    The book explains the fundamental steps of the research process and applies them to selected qualitative analyses: content analysis, Grounded Theory, argumentation analysis and discourse analysis.

  13. PartitionFinder 2: New Methods for Selecting Partitioned Models of Evolution for Molecular and Morphological Phylogenetic Analyses.

    Science.gov (United States)

    Lanfear, Robert; Frandsen, Paul B; Wright, April M; Senfeld, Tereza; Calcott, Brett

    2017-03-01

    PartitionFinder 2 is a program for automatically selecting best-fit partitioning schemes and models of evolution for phylogenetic analyses. PartitionFinder 2 is substantially faster and more efficient than version 1, and incorporates many new methods and features. These include the ability to analyze morphological datasets, new methods to analyze genome-scale datasets, new output formats to facilitate interoperability with downstream software, and many new models of molecular evolution. PartitionFinder 2 is freely available under an open source license and works on Windows, OSX, and Linux operating systems. It can be downloaded from www.robertlanfear.com/partitionfinder. The source code is available at https://github.com/brettc/partitionfinder.
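    "Best-fit" model selection of this kind typically scores candidate models with an information criterion such as AICc. The formula below is standard; the candidate models, log-likelihoods, and site count are hypothetical, and this is not PartitionFinder's implementation:

```python
# Corrected Akaike information criterion for comparing substitution models.
# Formula is standard; the log-likelihoods and alignment length are invented.
import math  # kept for clarity; only arithmetic is needed here

def aicc(lnl, k, n):
    """AICc = -2 lnL + 2k + 2k(k+1)/(n - k - 1)."""
    return -2 * lnl + 2 * k + (2 * k * (k + 1)) / (n - k - 1)

# hypothetical (log-likelihood, parameter count) per candidate model
candidates = {"JC": (-5210.3, 1), "HKY+G": (-5050.7, 5), "GTR+G": (-5048.9, 9)}
n_sites = 1200
scores = {m: aicc(lnl, k, n_sites) for m, (lnl, k) in candidates.items()}
best = min(scores, key=scores.get)
```

    Note that the richer GTR+G model can lose to HKY+G despite a higher likelihood, because the correction term penalizes its extra parameters.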

  14. ANALYSES ON NONLINEAR COUPLING OF MAGNETO-THERMO-ELASTICITY OF FERROMAGNETIC THIN SHELL-Ⅱ: FINITE ELEMENT MODELING AND APPLICATION

    Institute of Scientific and Technical Information of China (English)

    Xingzhe Wang; Xiaojing Zheng

    2009-01-01

    Based on the generalized variational principle of magneto-thermo-elasticity of a ferromagnetic thin shell established in Part I (see Analyses on nonlinear coupling of magneto-thermo-elasticity of ferromagnetic thin shell-Ⅰ), the present paper develops a finite element model for the mechanical-magneto-thermal multi-field coupling of a ferromagnetic thin shell. The numerical model comprises finite element equations for the three sub-systems of the magnetic, thermal and deformation fields, as well as iterative methods for the nonlinearities of the geometrical large deflection and the multi-field coupling of the ferromagnetic shell. As examples, numerical simulations of the magneto-elastic behavior of a ferromagnetic cylindrical shell in an applied magnetic field, and of the magneto-thermo-elastic behavior of the shell in applied magnetic and thermal fields, are carried out. The results are in good agreement with experimental ones.

  15. Advances in global sensitivity analyses of demographic-based species distribution models to address uncertainties in dynamic landscapes

    Directory of Open Access Journals (Sweden)

    Ilona Naujokaitis-Lewis

    2016-07-01

    Developing a rigorous understanding of multiple global threats to species persistence requires the use of integrated modeling methods that capture the processes which influence species distributions. Species distribution models (SDMs) coupled with population dynamics models can incorporate relationships between changing environments and demographics, and are increasingly used to quantify relative extinction risks associated with climate and land-use changes. Despite their appeal, uncertainties associated with complex models can undermine their usefulness for advancing predictive ecology and informing conservation management decisions. We developed a computationally efficient and freely available tool (GRIP 2.0) that implements and automates a global sensitivity analysis of coupled SDM-population dynamics models for comparing the relative influence of demographic parameters and habitat attributes on predicted extinction risk. Advances over previous global sensitivity analyses include the ability to vary habitat suitability across gradients, as well as the habitat amount and configuration of spatially explicit suitability maps of real and simulated landscapes. Using GRIP 2.0, we carried out a multi-model global sensitivity analysis of a coupled SDM-population dynamics model of whitebark pine (Pinus albicaulis) in Mount Rainier National Park as a case study and quantified the relative influence of input parameters and their interactions on model predictions. Our results differed from the one-at-a-time analyses used in the original study: we found that the most influential parameters included the total amount of suitable habitat within the landscape, survival rates, and the effects of a prevalent disease, white pine blister rust. Strong interactions between habitat amount and survival rates of older trees suggest the importance of habitat in mediating the negative influences of white pine blister rust. Our results underscore the importance of considering habitat…

  16. Advances in global sensitivity analyses of demographic-based species distribution models to address uncertainties in dynamic landscapes.

    Science.gov (United States)

    Naujokaitis-Lewis, Ilona; Curtis, Janelle M R

    2016-01-01

    Developing a rigorous understanding of multiple global threats to species persistence requires the use of integrated modeling methods that capture the processes which influence species distributions. Species distribution models (SDMs) coupled with population dynamics models can incorporate relationships between changing environments and demographics, and are increasingly used to quantify relative extinction risks associated with climate and land-use changes. Despite their appeal, uncertainties associated with complex models can undermine their usefulness for advancing predictive ecology and informing conservation management decisions. We developed a computationally efficient and freely available tool (GRIP 2.0) that implements and automates a global sensitivity analysis of coupled SDM-population dynamics models for comparing the relative influence of demographic parameters and habitat attributes on predicted extinction risk. Advances over previous global sensitivity analyses include the ability to vary habitat suitability across gradients, as well as the habitat amount and configuration of spatially explicit suitability maps of real and simulated landscapes. Using GRIP 2.0, we carried out a multi-model global sensitivity analysis of a coupled SDM-population dynamics model of whitebark pine (Pinus albicaulis) in Mount Rainier National Park as a case study and quantified the relative influence of input parameters and their interactions on model predictions. Our results differed from the one-at-a-time analyses used in the original study, and we found that the most influential parameters included the total amount of suitable habitat within the landscape, survival rates, and the effects of a prevalent disease, white pine blister rust. Strong interactions between habitat amount and survival rates of older trees suggest the importance of habitat in mediating the negative influences of white pine blister rust. Our results underscore the importance of considering habitat attributes along…

  17. ATOP - The Advanced Taiwan Ocean Prediction System Based on the mpiPOM. Part 1: Model Descriptions, Analyses and Results

    Directory of Open Access Journals (Sweden)

    Leo Oey

    2013-01-01

    A data-assimilated Taiwan Ocean Prediction (ATOP) system is being developed at the National Central University, Taiwan. The model simulates sea-surface height, three-dimensional currents, temperature and salinity, and turbulent mixing. The model has options for tracer and particle-tracking algorithms, as well as for wave-induced Stokes drift and wave-enhanced mixing and bottom drag. Two different forecast domains have been tested: a large-grid domain that encompasses the entire North Pacific Ocean at 0.1° × 0.1° horizontal resolution and 41 vertical sigma levels, and a smaller western North Pacific domain which at present has the same horizontal resolution. In both domains, 25-year spin-up runs from 1988 - 2011 were first conducted, forced by six-hourly Cross-Calibrated Multi-Platform (CCMP) winds and NCEP Global Forecast System (GFS) reanalysis winds. The results are then used as initial conditions to conduct ocean analyses from January 2012 through February 2012, when updated hindcasts and real-time forecasts begin using the GFS winds. This paper describes the ATOP system and compares the forecast results against satellite altimetry data for assessing model skill. The model results are also shown to compare well with observations of (i) the Kuroshio intrusion in the northern South China Sea, and (ii) the subtropical countercurrent. A review of and comparison with other models in the literature regarding (i) are also given.

  18. In-situ SEM electrochemistry and radiolysis

    DEFF Research Database (Denmark)

    Møller-Nilsen, Rolf Erling Robberstad; Norby, Poul

    Electron microscopy is a ubiquitous technique for seeing effects which are too small to see with traditional optical microscopes. Recently it has become possible to also image liquid samples, by sealing them off from the vacuum of the microscope, and a natural evolution from that has been to include microelectrodes on the windows to enable studies of electrochemical processes. In this way it is possible to perform in-situ electrochemical experiments such as electroplating and charge/discharge analysis of battery electrodes. In a typical liquid cell, electrons are accelerated to sufficiently high energies to traverse a thin window made from a silicon nitride membrane, and interact with the sample immersed in liquid. In transmission electron microscopy (TEM) the majority of the electrons continue through the sample to form an image. In scanning electron microscopy (SEM) a fraction of the electrons…

  19. SEMS: System for Environmental Monitoring and Sustainability

    Science.gov (United States)

    Arvidson, Raymond E.

    1998-01-01

    The goal of this project was to establish a computational and data management system, SEMS, building on our existing system and MTPE-related research. We proposed that the new system would help support Washington University's efforts in environmental sustainability through use in: (a) Problem-based environmental curriculum for freshmen and sophomores funded by the Hewlett Foundation that integrates scientific, cultural, and policy perspectives to understand the dynamics of wetland degradation, deforestation, and desertification and that will develop policies for sustainable environments and economies; (b) Higher-level undergraduate and graduate courses focused on monitoring the environment and developing policies that will lead to sustainable environmental and economic conditions; and (c) Interdisciplinary research focused on the dynamics of the Missouri River system and development of policies that lead to sustainable environmental and economic floodplain conditions.

  20. Water Quality Development in the Semíč Stream

    Directory of Open Access Journals (Sweden)

    Petra Oppeltová

    2015-01-01

    The aims of this work were to analyse selected quality indicators of a small water stream called Semíč and to evaluate the results in light of the valid legislation. Eight sampling profiles (SP) were selected and water was sampled four times a year in the period May 2013-April 2014. pH, conductivity, oxygen content and temperature were measured directly in the field. Subsequently, iron, nitrate nitrogen, ammoniacal nitrogen, sulphates, chlorides, chemical oxygen demand (dichromate method), total phosphorus, total nitrogen and manganese were analysed in the laboratory. Analyses of selected heavy metals (zinc, copper and aluminium) were carried out in spring 2014. The results were classified in compliance with Government Decree (GD) No. 61/2003 Coll., as amended, and the Czech standard ČSN 75 7221. The results from the period 2013-2014 were compared with results from 2002-2003 and 1992. The resulting concentrations show considerable variability during the year, which can most likely be attributed to large changes in flow rates in different seasons. When comparing the values with the older results, it can be concluded that the concentrations of a number of substances have decreased while, by contrast, others have increased. An extreme increase in copper was detected, with a concentration exceeding the environmental quality standard several times.
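    Classification against a standard such as ČSN 75 7221 amounts to comparing each measured concentration with ascending class limits. The sketch below shows that lookup only; the thresholds are invented for illustration and are NOT the standard's actual limits:

```python
# Hypothetical water-quality class lookup in the spirit of CSN 75 7221.
# The limit values below are invented, not the standard's real thresholds.
def quality_class(value, limits):
    """limits: ascending upper bounds for classes I..IV; above the last -> class V."""
    for cls, upper in zip(("I", "II", "III", "IV"), limits):
        if value <= upper:
            return cls
    return "V"

# e.g. total phosphorus in mg/l against invented class bounds
p_class = quality_class(0.18, limits=(0.05, 0.15, 0.4, 1.0))
```

    A stream's overall indicator class is then typically reported as the class of the worst-performing determinand at a profile.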

  1. Using FOSM-Based Data Worth Analyses to Design Geophysical Surveys to Reduce Uncertainty in a Regional Groundwater Model Update

    Science.gov (United States)

    Smith, B. D.; White, J.; Kress, W. H.; Clark, B. R.; Barlow, J.

    2016-12-01

    Hydrogeophysical surveys have become an integral part of understanding hydrogeological frameworks used in groundwater models. Regional models cover a large area where water well data is, at best, scattered and irregular. Since budgets are finite, priorities must be assigned to select optimal areas for geophysical surveys. For airborne electromagnetic (AEM) geophysical surveys, optimization of mapping depth and line spacing needs to take in account the objectives of the groundwater models. The approach discussed here uses a first-order, second-moment (FOSM) uncertainty analyses which assumes an approximate linear relation between model parameters and observations. This assumption allows FOSM analyses to be applied to estimate the value of increased parameter knowledge to reduce forecast uncertainty. FOSM is used to facilitate optimization of yet-to-be-completed geophysical surveying to reduce model forecast uncertainty. The main objective of geophysical surveying is assumed to estimate values and spatial variation in hydrologic parameters (i.e. hydraulic conductivity) as well as map lower permeability layers that influence the spatial distribution of recharge flux. The proposed data worth analysis was applied to Mississippi Embayment Regional Aquifer Study (MERAS) which is being updated. The objective of MERAS is to assess the ground-water availability (status and trends) of the Mississippi embayment aquifer system. The study area covers portions of eight states including Alabama, Arkansas, Illinois, Kentucky, Louisiana, Mississippi, Missouri, and Tennessee. The active model grid covers approximately 70,000 square miles, and incorporates some 6,000 miles of major rivers and over 100,000 water wells. In the FOSM analysis, a dense network of pilot points was used to capture uncertainty in hydraulic conductivity and recharge. 
To simulate the effect of AEM flight lines, the prior uncertainty for hydraulic conductivity and recharge pilots along potential flight lines was
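The FOSM data worth calculation described above reduces to a linear-Bayes covariance update: forecast uncertainty is propagated through a Jacobian, and the worth of a candidate survey line is the drop in posterior forecast variance when its observations are added. A minimal sketch, with hypothetical Jacobians and covariances standing in for the MERAS model:

```python
import numpy as np

def posterior_param_cov(J, C_p, C_e):
    """FOSM (linear-Bayes) posterior parameter covariance:
    C_post = C_p - C_p J^T (J C_p J^T + C_e)^-1 J C_p"""
    S = J @ C_p @ J.T + C_e
    return C_p - C_p @ J.T @ np.linalg.solve(S, J @ C_p)

def forecast_variance(y_sens, C_param):
    """Variance of a scalar forecast with sensitivity vector y_sens."""
    return float(y_sens @ C_param @ y_sens)

# Toy example: 3 parameters, 2 existing observations, plus 1 candidate
# AEM-style observation (all numbers hypothetical).
rng = np.random.default_rng(0)
J_base = rng.normal(size=(2, 3))                 # existing-data Jacobian
J_cand = np.vstack([J_base, rng.normal(size=(1, 3))])  # + candidate line
C_p = np.eye(3)                                  # prior parameter covariance
y = np.array([1.0, 0.5, -0.2])                   # forecast sensitivities

var_base = forecast_variance(y, posterior_param_cov(J_base, C_p, 0.1 * np.eye(2)))
var_cand = forecast_variance(y, posterior_param_cov(J_cand, C_p, 0.1 * np.eye(3)))
worth = var_base - var_cand   # data worth of the candidate survey line
```

In a linear-Gaussian setting the worth is always non-negative, which is what lets candidate flight lines be ranked before any field work is done.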

  2. Using plant growth modeling to analyse C source-sink relations under drought: inter- and intra-specific comparison

    Directory of Open Access Journals (Sweden)

    Benoit ePallas

    2013-11-01

    Full Text Available The ability to assimilate C and allocate NSC (non-structural carbohydrates) to the most appropriate organs is crucial to maximize plant ecological or agronomic performance. Such C source and sink activities are differentially affected by environmental constraints. Under drought, plant growth is generally more sink than source limited, as organ expansion or appearance rate is affected earlier and more strongly than C assimilation. This favors plant survival and recovery, but not always agronomic performance, as NSC are stored rather than used for growth owing to a modified metabolism in source and sink leaves. Such interactions between plant C and water balance are complex, and plant modeling can help analyze their impact on plant phenotype. This paper addresses the impact of trade-offs between C sink and source activities on plant production under drought, combining experimental and modeling approaches. Two contrasting monocotyledonous species (rice, oil palm) were studied. Experimentally, the sink limitation of plant growth under moderate drought was confirmed, as were the modifications in NSC metabolism in source and sink organs. Under severe stress, when the C source became limiting, plant NSC concentration decreased. Two plant models dedicated to oil palm and rice morphogenesis were used to perform a sensitivity analysis and further explore how to optimize C sink and source drought sensitivity to maximize plant growth. Modeling results highlighted that optimal drought sensitivity depends both on drought type and species, and that modeling is a great opportunity to analyse such complex processes. Further modeling needs, and more generally the challenge of using models to support complex trait breeding, are discussed.

  3. A multinomial logit model-Bayesian network hybrid approach for driver injury severity analyses in rear-end crashes.

    Science.gov (United States)

    Chen, Cong; Zhang, Guohui; Tarefder, Rafiqul; Ma, Jianming; Wei, Heng; Guan, Hongzhi

    2015-07-01

    Rear-end crashes are among the most common types of traffic crashes in the U.S. A good understanding of their characteristics and contributing factors is of practical importance. Previously, both multinomial logit models and Bayesian network methods have been used in crash modeling and analysis, although each has its own application restrictions and limitations. In this study, a hybrid approach is developed to combine multinomial logit models and Bayesian network methods for comprehensively analyzing driver injury severities in rear-end crashes, based on state-wide crash data collected in New Mexico from 2010 to 2011. A multinomial logit model is developed to investigate and identify significant contributing factors for rear-end crash driver injury severities, classified into three categories: no injury, injury, and fatality. Then, the identified significant factors are utilized to establish a Bayesian network to explicitly formulate statistical associations between injury severity outcomes and explanatory attributes, including driver behavior, demographic features, vehicle factors, geometric and environmental characteristics, etc. The test results demonstrate that the proposed hybrid approach performs reasonably well. The Bayesian network inference analyses indicate that factors including truck involvement, poor lighting conditions, windy weather conditions, the number of vehicles involved, etc. could significantly increase driver injury severities in rear-end crashes. The developed methodology and estimation results provide insights for developing effective countermeasures to reduce rear-end crash injury severities and improve traffic system safety performance.
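As a rough illustration of the first stage of such a hybrid, the sketch below fits a multinomial (softmax) logit to synthetic three-level severity data by gradient descent. The attribute names and coefficients are hypothetical inventions for illustration, not values from the New Mexico dataset:

```python
import numpy as np

def fit_multinomial_logit(X, y, n_classes, lr=0.1, steps=3000):
    """Minimal multinomial (softmax) logit fitted by gradient descent."""
    n, d = X.shape
    W = np.zeros((d, n_classes))
    Y = np.eye(n_classes)[y]                  # one-hot severity outcomes
    for _ in range(steps):
        P = np.exp(X @ W)
        P /= P.sum(axis=1, keepdims=True)     # class probabilities
        W -= lr * X.T @ (P - Y) / n           # negative log-likelihood gradient
    return W

rng = np.random.default_rng(42)
n = 500
# Hypothetical attributes: truck involvement, poor lighting, number of vehicles
X = np.column_stack([rng.integers(0, 2, n),
                     rng.integers(0, 2, n),
                     rng.integers(2, 5, n)]).astype(float)
B_true = np.array([[0.0, 0.8, 2.0],   # columns: no injury, injury, fatality
                   [0.0, 0.5, 1.0],
                   [0.0, 0.2, 0.4]])
P_true = np.exp(X @ B_true)
P_true /= P_true.sum(axis=1, keepdims=True)
y = np.array([rng.choice(3, p=p) for p in P_true])

W = fit_multinomial_logit(X, y, n_classes=3)
```

The fitted coefficient matrix plays the role of the severity model whose significant factors would then seed the Bayesian network stage.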

  4. Analyses of simulations of three-dimensional lattice proteins in comparison with a simplified statistical mechanical model of protein folding.

    Science.gov (United States)

    Abe, H; Wako, H

    2006-07-01

    Folding and unfolding simulations of three-dimensional lattice proteins were analyzed using a simplified statistical mechanical model in which their amino acid sequences and native conformations were incorporated explicitly. Using this statistical mechanical model, under the assumption that only interactions between amino acid residues within a local structure in a native state are considered, the partition function of the system can be calculated for a given native conformation without any adjustable parameter. The simulations were carried out for two different native conformations, for each of which two foldable amino acid sequences were considered. The native and non-native contacts between amino acid residues occurring in the simulations were examined in detail and compared with the results derived from the theoretical model. The equilibrium thermodynamic quantities (free energy, enthalpy, entropy, and the probability of each amino acid residue being in the native state) at various temperatures obtained from the simulations and the theoretical model were also examined in order to characterize the folding processes that depend on the native conformations and the amino acid sequences. Finally, the free energy landscapes were discussed based on these analyses.
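In the same spirit as the simplified statistical mechanical model described above, where native contacts contribute only within a locally native segment, a toy partition function for a short chain can be enumerated exhaustively. The contact map, energies, and entropy cost below are hypothetical, not taken from the paper:

```python
import itertools
import math

N = 8                                  # toy chain length
contacts = [(0, 3), (2, 5), (4, 7)]    # hypothetical native contact map
eps, sigma = -1.0, 0.2                 # contact energy; ordering cost per residue

def energy(state):
    """A native contact (i, j) contributes only if every residue
    between i and j is in its native conformation (state[k] == 1)."""
    E = sum(eps for i, j in contacts if all(state[i:j + 1]))
    return E + sigma * sum(state)

def partition_function(T):
    """Exact partition function by enumeration of all 2^N states."""
    return sum(math.exp(-energy(s) / T)
               for s in itertools.product((0, 1), repeat=N))

def p_native(residue, T):
    """Equilibrium probability that a residue is in its native state."""
    Z = partition_function(T)
    num = sum(math.exp(-energy(s) / T)
              for s in itertools.product((0, 1), repeat=N)
              if s[residue])
    return num / Z
```

With these parameters the fully native state is the energy minimum, so each residue's native probability approaches one at low temperature and decays toward one half as the temperature rises, mirroring the per-residue quantities compared against the lattice simulations.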

  5. Comparative analyses reveal potential uses of Brachypodium distachyon as a model for cold stress responses in temperate grasses

    Directory of Open Access Journals (Sweden)

    Li Chuan

    2012-05-01

    Full Text Available Abstract Background Little is known about the potential of Brachypodium distachyon as a model for low temperature stress responses in Pooideae. The ice recrystallization inhibition protein (IRIP genes, fructosyltransferase (FST genes, and many C-repeat binding factor (CBF genes are Pooideae specific and important in low temperature responses. Here we used comparative analyses to study conservation and evolution of these gene families in B. distachyon to better understand its potential as a model species for agriculturally important temperate grasses. Results Brachypodium distachyon contains cold responsive IRIP genes which have evolved through Brachypodium specific gene family expansions. A large cold responsive CBF3 subfamily was identified in B. distachyon, while CBF4 homologs are absent from the genome. No B. distachyon FST gene homologs encode typical core Pooideae FST-motifs and low temperature induced fructan accumulation was dramatically different in B. distachyon compared to core Pooideae species. Conclusions We conclude that B. distachyon can serve as an interesting model for specific molecular mechanisms involved in low temperature responses in core Pooideae species. However, the evolutionary history of key genes involved in low temperature responses has been different in Brachypodium and core Pooideae species. These differences limit the use of B. distachyon as a model for holistic studies relevant for agricultural core Pooideae species.

  6. 3D RECORDING FOR 2D DELIVERING – THE EMPLOYMENT OF 3D MODELS FOR STUDIES AND ANALYSES

    Directory of Open Access Journals (Sweden)

    A. Rizzi

    2012-09-01

    Full Text Available In the last years, thanks to the advances of surveying sensors and techniques, many heritage sites could be accurately replicated in digital form with very detailed and impressive results. The actual limits are mainly related to hardware capabilities, computation time and low performance of personal computer. Often, the produced models are not visible on a normal computer and the only solution to easily visualized them is offline using rendered videos. This kind of 3D representations is useful for digital conservation, divulgation purposes or virtual tourism where people can visit places otherwise closed for preservation or security reasons. But many more potentialities and possible applications are available using a 3D model. The problem is the ability to handle 3D data as without adequate knowledge this information is reduced to standard 2D data. This article presents some surveying and 3D modeling experiences within the APSAT project ("Ambiente e Paesaggi dei Siti d’Altura Trentini", i.e. Environment and Landscapes of Upland Sites in Trentino. APSAT is a multidisciplinary project funded by the Autonomous Province of Trento (Italy with the aim documenting, surveying, studying, analysing and preserving mountainous and hill-top heritage sites located in the region. The project focuses on theoretical, methodological and technological aspects of the archaeological investigation of mountain landscape, considered as the product of sequences of settlements, parcelling-outs, communication networks, resources, and symbolic places. The mountain environment preserves better than others the traces of hunting and gathering, breeding, agricultural, metallurgical, symbolic activities characterised by different lengths and environmental impacts, from Prehistory to the Modern Period. Therefore the correct surveying and documentation of this heritage sites and material is very important. 
Within the project, the 3DOM unit of FBK is delivering all the surveying

  7. Assessing models of speciation under different biogeographic scenarios: An empirical study using multi-locus and RNA-seq analyses

    Science.gov (United States)

    Edwards, Taylor; Tollis, Marc; Hsieh, PingHsun; Gutenkunst, Ryan N.; Liu, Zhen; Kusumi, Kenro; Culver, Melanie; Murphy, Robert W.

    2016-01-01

    Evolutionary biology often seeks to decipher the drivers of speciation, and much debate persists over the relative importance of isolation and gene flow in the formation of new species. Genetic studies of closely related species can assess if gene flow was present during speciation, because signatures of past introgression often persist in the genome. We test hypotheses on which mechanisms of speciation drove diversity among three distinct lineages of desert tortoise in the genus Gopherus. These lineages offer a powerful system to study speciation, because different biogeographic patterns (physical vs. ecological segregation) are observed at opposing ends of their distributions. We use 82 samples collected from 38 sites, representing the entire species' distribution, and generate sequence data for mtDNA and four nuclear loci. A multilocus phylogenetic analysis in *BEAST estimates the species tree. RNA-seq data yield 20,126 synonymous variants from 7665 contigs from two individuals of each of the three lineages. Analyses of these data using the demographic inference package ∂a∂i serve to test the null hypothesis of no gene flow during divergence. The best-fit demographic model for the three taxa is concordant with the *BEAST species tree, and the ∂a∂i analysis does not indicate gene flow among any of the three lineages during their divergence. These analyses suggest that divergence among the lineages occurred in the absence of gene flow, and in this scenario the genetic signature of ecological isolation (parapatric model) cannot be differentiated from that of geographic isolation (allopatric model).
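Competing demographic models fitted in this way are typically ranked by information criteria or likelihood-ratio tests, with the no-gene-flow model preferred unless added migration parameters pay for themselves in likelihood. A minimal AIC comparison with hypothetical log-likelihoods and parameter counts (not values from this study):

```python
# Hypothetical fitted log-likelihoods and free-parameter counts
ll_isolation, k_isolation = -1250.4, 3   # divergence without gene flow
ll_migration, k_migration = -1249.8, 5   # divergence with migration rates

def aic(loglik, k):
    """Akaike information criterion; lower is preferred."""
    return 2 * k - 2 * loglik

models = {"isolation": aic(ll_isolation, k_isolation),
          "migration": aic(ll_migration, k_migration)}
best = min(models, key=models.get)
```

Here the small likelihood gain from two extra migration parameters does not offset their complexity penalty, so the isolation model wins, the same qualitative outcome the abstract reports.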

  8. Virus-induced gene silencing as a tool for functional analyses in the emerging model plant Aquilegia (columbine, Ranunculaceae)

    Directory of Open Access Journals (Sweden)

    Kramer Elena M

    2007-04-01

    Full Text Available Abstract Background The lower eudicot genus Aquilegia, commonly known as columbine, is currently the subject of extensive genetic and genomic research aimed at developing this taxon as a new model for the study of ecology and evolution. The ability to perform functional genetic analyses is a critical component of this development process and ultimately has the potential to provide insight into the genetic basis for the evolution of a wide array of traits that differentiate flowering plants. Aquilegia is of particular interest due to both its recent evolutionary history, which involves a rapid adaptive radiation, and its intermediate phylogenetic position between core eudicot (e.g., Arabidopsis) and grass (e.g., Oryza) model species. Results Here we demonstrate the effective use of a reverse genetic technique, virus-induced gene silencing (VIGS), to study gene function in this emerging model plant. Using Agrobacterium-mediated transfer of tobacco rattle virus (TRV)-based vectors, we induce silencing of PHYTOENE DESATURASE (AqPDS) in Aquilegia vulgaris seedlings, and of ANTHOCYANIDIN SYNTHASE (AqANS) and the B-class floral organ identity gene PISTILLATA in A. vulgaris flowers. For all of these genes, silencing phenotypes are associated with a consistent reduction in endogenous transcript levels. In addition, we show that silencing of AqANS has no effect on overall floral morphology and is therefore a suitable marker for the identification of silenced flowers in dual-locus silencing experiments. Conclusion Our results show that TRV-VIGS in Aquilegia vulgaris allows data to be rapidly obtained and can be reproduced with effective survival and silencing rates. Furthermore, this method can successfully be used to evaluate the function of early-acting developmental genes. 
In the future, data derived from VIGS analyses will be combined with large-scale sequencing and microarray experiments already underway in order to address both recent and ancient evolutionary

  9. Nondestructive SEM for surface and subsurface wafer imaging

    Science.gov (United States)

    Propst, Roy H.; Bagnell, C. Robert; Cole, Edward I., Jr.; Davies, Brian G.; Dibianca, Frank A.; Johnson, Darryl G.; Oxford, William V.; Smith, Craig A.

    1987-01-01

    The scanning electron microscope (SEM) is considered as a tool for both failure analysis and device characterization. A survey is made of various operational SEM modes and their applicability to image-processing methods on semiconductor devices.

  10. The luminal surface of thyroid cysts in SEM

    DEFF Research Database (Denmark)

    Zelander, T; Kirkeby, S

    1978-01-01

    Four of the five kinds of cells constituting the walls of thyroid cysts can be identified in the SEM. These are cuboidal cells, mucous cells, cells with large granules and ciliated cells. A correlation between SEM and TEM observations is attempted.

  11. Systems genetics of obesity in an F2 pig model by genome-wide association, genetic network and pathway analyses

    Directory of Open Access Journals (Sweden)

    Lisette J. A. Kogelman

    2014-07-01

    Full Text Available Obesity is a complex condition with world-wide exponentially rising prevalence rates, linked with severe diseases like Type 2 Diabetes. Economic and welfare consequences have led to a raised interest in a better understanding of the biological and genetic background. To date, whole genome investigations focusing on single genetic variants have achieved limited success, and the importance of including genetic interactions is becoming evident. Here, the aim was to perform an integrative genomic analysis in an F2 pig resource population that was constructed with an aim to maximize genetic variation of obesity-related phenotypes and genotyped using the 60K SNP chip. Firstly, Genome Wide Association (GWA) analysis was performed on the Obesity Index to locate candidate genomic regions that were further validated using combined Linkage Disequilibrium Linkage Analysis and investigated by evaluation of haplotype blocks. We built Weighted Interaction SNP Hub (WISH) and differentially wired (DW) networks using genotypic correlations amongst obesity-associated SNPs resulting from GWA analysis. GWA results and SNP modules detected by WISH and DW analyses were further investigated by functional enrichment analyses. The functional annotation of SNPs revealed several genes associated with obesity, e.g. NPC2 and OR4D10. Moreover, gene enrichment analyses identified several significantly associated pathways, over and above the GWA study results, that may influence obesity and obesity related diseases, e.g. metabolic processes. WISH networks based on genotypic correlations allowed further identification of various gene ontology terms and pathways related to obesity and related traits, which were not identified by the GWA study. 
In conclusion, this is the first study to develop a (genetic) obesity index and employ systems genetics in a porcine model to provide important insights into the complex genetic architecture associated with obesity and many biological pathways

  12. Systems genetics of obesity in an F2 pig model by genome-wide association, genetic network, and pathway analyses.

    Science.gov (United States)

    Kogelman, Lisette J A; Pant, Sameer D; Fredholm, Merete; Kadarmideen, Haja N

    2014-01-01

    Obesity is a complex condition with world-wide exponentially rising prevalence rates, linked with severe diseases like Type 2 Diabetes. Economic and welfare consequences have led to a raised interest in a better understanding of the biological and genetic background. To date, whole genome investigations focusing on single genetic variants have achieved limited success, and the importance of including genetic interactions is becoming evident. Here, the aim was to perform an integrative genomic analysis in an F2 pig resource population that was constructed with an aim to maximize genetic variation of obesity-related phenotypes and genotyped using the 60K SNP chip. Firstly, Genome Wide Association (GWA) analysis was performed on the Obesity Index to locate candidate genomic regions that were further validated using combined Linkage Disequilibrium Linkage Analysis and investigated by evaluation of haplotype blocks. We built Weighted Interaction SNP Hub (WISH) and differentially wired (DW) networks using genotypic correlations amongst obesity-associated SNPs resulting from GWA analysis. GWA results and SNP modules detected by WISH and DW analyses were further investigated by functional enrichment analyses. The functional annotation of SNPs revealed several genes associated with obesity, e.g., NPC2 and OR4D10. Moreover, gene enrichment analyses identified several significantly associated pathways, over and above the GWA study results, that may influence obesity and obesity related diseases, e.g., metabolic processes. WISH networks based on genotypic correlations allowed further identification of various gene ontology terms and pathways related to obesity and related traits, which were not identified by the GWA study. 
In conclusion, this is the first study to develop a (genetic) obesity index and employ systems genetics in a porcine model to provide important insights into the complex genetic architecture associated with obesity and many biological pathways that underlie
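A WISH-style network stage, as described in these two records, builds a weighted network from genotypic correlations and looks for highly connected (hub) SNPs. The following is a minimal WGCNA-like sketch on synthetic genotypes; the soft-threshold power and the correlated block structure are illustrative assumptions, not the published WISH parameters:

```python
import numpy as np

rng = np.random.default_rng(1)
n_ind, n_snp = 200, 30
# Synthetic 0/1/2 genotypes; the first 10 SNPs share a latent factor,
# mimicking a module of correlated, trait-associated SNPs
G = rng.integers(0, 3, size=(n_ind, n_snp)).astype(float)
latent = rng.normal(size=n_ind) > 0
G[:, :10] += 2.0 * latent[:, None]

corr = np.corrcoef(G, rowvar=False)        # SNP-by-SNP genotypic correlations
beta = 5                                   # soft-threshold power (assumed)
adj = np.abs(corr) ** beta                 # weighted adjacency matrix
np.fill_diagonal(adj, 0.0)

connectivity = adj.sum(axis=0)             # weighted degree per SNP
hubs = np.argsort(connectivity)[::-1][:5]  # top candidate hub SNPs
```

Raising correlations to a power suppresses the weak background correlations while preserving the module, so the hub SNPs recovered by connectivity ranking fall inside the simulated block.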

  13. Analyses of the Classical Model for Porous Materials

    Institute of Scientific and Technical Information of China (English)

    刘培生; 夏凤金; 罗军

    2009-01-01

    New developments are continually being made in the preparation, application and property study of porous materials. Among theories of the structure and properties of porous materials, the famous classical model, the Gibson-Ashby model, has long been endorsed internationally in the porous materials field, and it remains the theoretical foundation widely applied by numerous investigators in their research. In the present paper, some supplementary thinking and analyses are offered concerning the shortcomings of this model, and it is found that some of these shortcomings can even break the completeness the model originally appeared to possess. Based on a summary of these problems, another model is introduced which can overcome or compensate for the shortcomings of the Gibson-Ashby model.

  14. Latent vs. Observed Variables : Analysis of Irrigation Water Efficiency Using SEM and SUR

    NARCIS (Netherlands)

    Tang, Jianjun; Folmer, Henk

    2016-01-01

    In this paper we compare conceptualising single factor technical and allocative efficiency as indicators of a single latent variable, or as separate observed variables. In the former case, the impacts on both efficiency types are analysed by means of structural equation modeling (SEM), in the latter

  15. 3-D Analysis of Graphite Nodules in Ductile Cast Iron Using FIB-SEM

    DEFF Research Database (Denmark)

    D'Angelo, Luca; Jespersen, Freja N.; MacDonald, A. Nicole;

    Ductile cast iron samples were analysed in a Focused Ion Beam Scanning Electron Microscope, FIB-SEM. The focused ion beam was used to carefully remove layers of the graphite nodules to reveal internal structures in the nodules. The sample preparation and milling procedure for sectioning graphite

  16. In-situ tensile testing of propellant samples within SEM

    NARCIS (Netherlands)

    Benedetto, G.L. di; Ramshorst, M.C.J. van; Duvalois, W.; Hooijmeijer, P.A.; Heijden, A.E.D.M. van der; Klerk, W.P.C. de

    2015-01-01

    A tensile module system placed within an FEI NovaNanoSEM 650 Scanning Electron Microscope (SEM) was utilized in this work to conduct in-situ tensile testing of propellant material samples. This tensile module system allows for real-time in-situ SEM analysis of the samples to determine the failure mec

  17. CREB3 subfamily transcription factors are not created equal: Recent insights from global analyses and animal models

    Directory of Open Access Journals (Sweden)

    Chan Chi-Ping

    2011-02-01

    Full Text Available Abstract The CREB3 subfamily of membrane-bound bZIP transcription factors has five members in mammals known as CREB3 and CREB3L1-L4. One current model suggests that CREB3 subfamily transcription factors are similar to ATF6 in regulated intramembrane proteolysis and transcriptional activation. Particularly, they were all thought to be proteolytically activated in response to endoplasmic reticulum (ER stress to stimulate genes that are involved in unfolded protein response (UPR. Although the physiological inducers of their proteolytic activation remain to be identified, recent findings from microarray analyses, RNAi screens and gene knockouts not only demonstrated their critical roles in regulating development, metabolism, secretion, survival and tumorigenesis, but also revealed cell type-specific patterns in the activation of their target genes. Members of the CREB3 subfamily show differential activity despite their structural similarity. The spectrum of their biological function expands beyond ER stress and UPR. Further analyses are required to elucidate the mechanism of their proteolytic activation and the molecular basis of their target recognition.

  18. Raman-in-SEM, a multimodal and multiscale analytical tool: performance for materials and expertise.

    Science.gov (United States)

    Wille, Guillaume; Bourrat, Xavier; Maubec, Nicolas; Lahfid, Abdeltif

    2014-12-01

    The availability of Raman spectroscopy in a powerful analytical scanning electron microscope (SEM) allows morphological, elemental, chemical, physical and electronic analysis without moving the sample between instruments. This paper documents the metrological performance of the SEMSCA commercial Raman interface operated in a low vacuum SEM. It provides multiscale and multimodal analyses as Raman/EDS, Raman/cathodoluminescence or Raman/STEM (STEM: scanning transmission electron microscopy) as well as Raman spectroscopy on nanomaterials. Since Raman spectroscopy in a SEM can be influenced by several SEM-related phenomena, this paper firstly presents a comparison of this new tool with a conventional micro-Raman spectrometer. Then, some possible artefacts are documented, which are due to the impact of electron beam-induced contamination or cathodoluminescence contribution to the Raman spectra, especially with geological samples. These effects are easily overcome by changing or adapting the Raman spectrometer and the SEM settings and methodology. The deletion of the adverse effect of cathodoluminescence is solved by using a SEM beam shutter during Raman acquisition. In contrast, this interface provides the ability to record the cathodoluminescence (CL) spectrum of a phase. In a second part, this study highlights the interest and efficiency of the coupling in characterizing micrometric phases at the same point. This multimodal approach is illustrated with various issues encountered in geosciences.

  19. Semântica e lexicografia

    Directory of Open Access Journals (Sweden)

    Julio Casares

    2001-01-01

    Full Text Available

    Semantics and lexicography interpenetrate each other, because lexicography is not limited to collecting the words of the lexicon but seeks to describe the meaning of words and their uses. The lexicographer is also concerned with the evolution of word senses, in order to establish the scale of acceptations of a lexical sign. Casares defines 'acceptation' and discusses the problem of discriminating between acceptations and of ordering them in the case of polysemous words. Another delicate question for the lexicographer is the recognition and correct identification of metaphorical values. As an illustrative example the author uses the entry Lat. ordo > Sp. orden (Port. ordem), a polysemous sign, tracing graphs of the web of meanings in the evolutionary semantics of this word, from the original Latin etymon to modern Spanish. Casares also treats the problem of lemmatization, that is, the technical decision of choosing one word form or another as the headword of a dictionary entry, which involves permanent controversy among lexicologists over complex lexical units and over how and when the lexical categorization of a multi-word expression takes place. This problem is amplified by the chaotic tradition surrounding many spellings, particularly in the case of 'phrasal locutions'. He advocates the advantages and virtues of a dictionary that would include an index of the frequency of use of each word, or of each acceptation of a word.

  20. Comprehensive simulation of SEM images taking into account local and global electromagnetic fields

    Science.gov (United States)

    Babin, Sergey; Borisov, Sergey S.; Ito, Hiroyuki; Ivanchikov, Andrei; Matison, Dmitri; Militsin, Vladimir; Suzuki, Makoto

    2010-06-01

    We report the development of a simulation tool with unique capabilities to comprehensively model an SEM signal. This includes electron scattering, charging, and detector settings, as well as modeling of the local and global electromagnetic fields and the electron trajectories in these fields. Experimental and simulated results were compared for SEM imaging of carbon nanofibers embedded in bulk material in the presence of significant charging, as well as for samples with potentials applied to metal electrodes. The effect of the applied electrode potentials on secondary emission was studied, and the resulting SEM images were simulated. The image contrast depends strongly on the sign and magnitude of the potential. SEM imaging of nanofibers embedded in silicon dioxide resulted in a considerable change in the apparent dimensions of the fibers, as well as tone reversal, when the beam voltage was varied. The results of the simulations are in agreement with the experimental results.
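The sensitivity of apparent dimensions to beam voltage can be rationalized with a back-of-envelope deflection estimate: an electron crossing a region of transverse field is displaced in proportion to the field and inversely to its accelerating voltage. This is only a uniform-field approximation, not the full trajectory model of the reported tool, and the numbers are hypothetical:

```python
def landing_shift(beam_voltage, field, length):
    """Lateral shift (m) of a non-relativistic electron accelerated through
    beam_voltage (V) while crossing a uniform transverse field (V/m) over a
    path of the given length (m): shift = field * length^2 / (4 * V_beam)."""
    return field * length**2 / (4.0 * beam_voltage)

# Same local charging field, two beam voltages (illustrative values):
shift_1kV = landing_shift(1.0e3, field=1.0e4, length=1.0e-3)
shift_10kV = landing_shift(10.0e3, field=1.0e4, length=1.0e-3)
```

A lower beam voltage yields a tenfold larger landing shift for the same local field, consistent with low-voltage imaging being more strongly distorted by specimen charging.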

  1. Robust surface reconstruction by design-guided SEM photometric stereo

    Science.gov (United States)

    Miyamoto, Atsushi; Matsuse, Hiroki; Koutaki, Gou

    2017-04-01

    We present a novel approach that addresses the blind reconstruction problem in scanning electron microscope (SEM) photometric stereo for complicated semiconductor patterns. In our previous work, we developed a bootstrapping de-shadowing and self-calibration (BDS) method, which automatically calibrates the parameters of the gradient measurement formulas and resolves shadowing errors, estimating an accurate three-dimensional (3D) shape together with the underlying shadowless images. Experimental results on 3D surface reconstruction demonstrated the significance of the BDS method for simple shapes, such as an isolated line pattern. However, we found that complicated shapes, such as line-and-space (L&S) and multilayered patterns, produce deformed and inaccurate measurement results. This problem is due to brightness fluctuations in the SEM images, which are mainly caused by energy fluctuations of the primary electron beam, variations in the electronic expanse inside a specimen, and electrical charging of specimens. Although these are essential difficulties in SEM photometric stereo, it is difficult to model accurately all the complicated physical phenomena of electron behavior. We therefore improved the robustness of the surface reconstruction to deal with these practical difficulties in complicated shapes. Here, design data are useful clues as to the pattern layout and layer information of integrated semiconductors. We used the design data as a guide for the measured shape and incorporated into the objective function of the BDS method a geometrical constraint term that evaluates the difference between the measured and designed shapes. Because the true shape does not necessarily correspond to the designed one, we use an iterative scheme to develop proper guide patterns and obtain a 3D surface that is both less distorted and more accurate after convergence. 
Extensive experiments on real image data demonstrate the robustness and effectiveness

  2. Hierarchical linear modeling analyses of the NEO-PI-R scales in the Baltimore Longitudinal Study of Aging.

    Science.gov (United States)

    Terracciano, Antonio; McCrae, Robert R; Brant, Larry J; Costa, Paul T

    2005-09-01

    The authors examined age trends in the 5 factors and 30 facets assessed by the Revised NEO Personality Inventory in Baltimore Longitudinal Study of Aging data (N=1,944; 5,027 assessments) collected between 1989 and 2004. Consistent with cross-sectional results, hierarchical linear modeling analyses showed gradual personality changes in adulthood: a decline in Neuroticism up to age 80, stability and then decline in Extraversion, decline in Openness, increase in Agreeableness, and increase in Conscientiousness up to age 70. Some facets showed different curves from the factor they define. Birth cohort effects were modest, and there were no consistent Gender x Age interactions. Significant nonnormative changes were found for all 5 factors; they were not explained by attrition but might be due to genetic factors, disease, or life experience. Copyright (c) 2005 APA, all rights reserved.
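The longitudinal design here, repeated assessments nested within persons, is what motivates hierarchical linear modeling. A minimal two-level illustration on synthetic data shows how within-person centering sweeps out person-specific intercepts and recovers a fixed age slope; the trait, slope, and variance values are invented, not BLSA estimates:

```python
import numpy as np

rng = np.random.default_rng(7)
n_subj, n_obs = 100, 4
subj = np.repeat(np.arange(n_subj), n_obs)   # person index per assessment
age = rng.uniform(30, 90, size=n_subj * n_obs)

# Synthetic trait score: fixed decline with age, plus person-level
# random intercepts and occasion-level noise
u = rng.normal(scale=3.0, size=n_subj)
score = 50.0 - 0.2 * age + u[subj] + rng.normal(scale=1.0, size=age.size)

# Within-person centering removes the random intercepts, so an ordinary
# least-squares slope on the centered data estimates the fixed age effect
age_c = age - np.bincount(subj, weights=age)[subj] / n_obs
score_c = score - np.bincount(subj, weights=score)[subj] / n_obs
slope = (age_c @ score_c) / (age_c @ age_c)
```

A full HLM additionally estimates the variance components and allows person-specific slopes, but the centered regression already separates within-person change from stable between-person differences, the key distinction in these analyses.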

  3. Hierarchical Linear Modeling Analyses of NEO-PI-R Scales In the Baltimore Longitudinal Study of Aging

    Science.gov (United States)

    Terracciano, Antonio; McCrae, Robert R.; Brant, Larry J.; Costa, Paul T.

    2009-01-01

    We examined age trends in the five factors and 30 facets assessed by the Revised NEO Personality Inventory in Baltimore Longitudinal Study of Aging data (N = 1,944; 5,027 assessments) collected between 1989 and 2004. Consistent with cross-sectional results, Hierarchical Linear Modeling analyses showed gradual personality changes in adulthood: a decline up to age 80 in Neuroticism, stability and then decline in Extraversion, decline in Openness, increase in Agreeableness, and increase up to age 70 in Conscientiousness. Some facets showed different curves from the factor they define. Birth cohort effects were modest, and there were no consistent Gender × Age interactions. Significant non-normative changes were found for all five factors; they were not explained by attrition but might be due to genetic factors, disease, or life experience. PMID:16248708

  4. COUPLING EFFECTS FOR CELL-TRUSS SPAR PLATFORM: COMPARISON OF FREQUENCY- AND TIME-DOMAIN ANALYSES WITH MODEL TESTS

    Institute of Scientific and Technical Information of China (English)

    ZHANG Fan; YANG Jian-min; LI Run-pei; CHEN Gang

    2008-01-01

    For floating structures in deep water, the coupling effects of the mooring lines and risers on the motion responses of the structures become increasingly significant. Viscous damping, inertial mass, current loading, and restoring forces from these slender structures should be carefully handled to accurately predict the motion responses and line tensions. For spar platforms, coupling the mooring system and risers with the vessel motion typically results in a reduction in extreme motion responses. This article presents numerical simulations and model tests on a new cell-truss spar platform at the State Key Laboratory of Ocean Engineering at Shanghai Jiaotong University. Results from three calculation methods, including frequency-domain analysis and time-domain semi-coupled and fully-coupled analyses, were compared with the experimental data to assess the applicability of the different approaches. Proposals for improving the numerical calculations and experimental technique were presented as well.

  5. Using niche-modelling and species-specific cost analyses to determine a multispecies corridor in a fragmented landscape

    Science.gov (United States)

    Zurano, Juan Pablo; Selleski, Nicole; Schneider, Rosio G.

    2017-01-01

    types independent of the degree of legal protection. These data used with multifocal GIS analyses balance the varying degree of overlap and unique properties among them allowing for comprehensive conservation strategies to be developed relatively rapidly. Our comprehensive approach serves as a model to other regions faced with habitat loss and lack of data. The five carnivores focused on in our study have wide ranges, so the results from this study can be expanded and combined with surrounding countries, with analyses at the species or community level. PMID:28841692

  6. EPA Region 2 SEMS_CERCLIS Sites All [R2] and SEMS_CERCLIS Sites NPL [R2] GIS Layers

    Data.gov (United States)

    U.S. Environmental Protection Agency — The Region 2 SEMS_CERCLIS Sites All [R2] GIS layer contains unique Superfund Enterprise Management System (SEMS) site records. These records have the following...

  7. From Global Climate Model Projections to Local Impacts Assessments: Analyses in Support of Planning for Climate Change

    Science.gov (United States)

    Snover, A. K.; Littell, J. S.; Mantua, N. J.; Salathe, E. P.; Hamlet, A. F.; McGuire Elsner, M.; Tohver, I.; Lee, S.

    2010-12-01

    Assessing and planning for the impacts of climate change require regionally-specific information. Information is required not only about projected changes in climate but also the resultant changes in natural and human systems at the temporal and spatial scales of management and decision making. Therefore, climate impacts assessment typically results in a series of analyses, in which relatively coarse-resolution global climate model projections of changes in regional climate are downscaled to provide appropriate input to local impacts models. This talk will describe recent examples in which coarse-resolution (~150 to 300km) GCM output was “translated” into information requested by decision makers at relatively small (watershed) and large (multi-state) scales using regional climate modeling, statistical downscaling, hydrologic modeling, and sector-specific impacts modeling. Projected changes in local air temperature, precipitation, streamflow, and stream temperature were developed to support Seattle City Light’s assessment of climate change impacts on hydroelectric operations, future electricity load, and resident fish populations. A state-wide assessment of climate impacts on eight sectors (agriculture, coasts, energy, forests, human health, hydrology and water resources, salmon, and urban stormwater infrastructure) was developed for Washington State to aid adaptation planning. Hydro-climate change scenarios for approximately 300 streamflow locations in the Columbia River basin and selected coastal drainages west of the Cascades were developed in partnership with major water management agencies in the Pacific Northwest to allow planners to consider how hydrologic changes may affect management objectives. Treatment of uncertainty in these assessments included: using “bracketing” scenarios to describe a range of impacts, using ensemble averages to characterize the central estimate of future conditions (given an emissions scenario), and explicitly assessing

  8. Civil engineering: EDF needs for concrete modelling; Genie civile: analyse des besoins EDF en modelisation du comportement des betons

    Energy Technology Data Exchange (ETDEWEB)

    Didry, O.; Gerard, B.; Bui, D. [Electricite de France (EDF), Direction des Etudes et Recherches, 92 - Clamart (France)]

    1997-12-31

    Concrete structures which are encountered at EDF, like all civil engineering structures, age. In order to adapt the maintenance conditions of these structures, particularly to extend their service life, and also to prepare constructions of future structures, tools for predicting the behaviour of these structures in their environment should be available. For EDF the technical risks are high and consequently very appropriate R and D actions are required. In this context the Direction des Etudes et Recherches (DER) has developed a methodology for analysing concrete structure behaviour modelling. This approach has several aims: - making a distinction between the problems which refer to the existing models and those which require R and D; - displaying disciplinary links between different problems encountered on EDF structures (non-linear mechanical, chemical - hydraulic - mechanical coupling, etc); - listing of the existing tools and positioning the DER `Aster` finite element code among them. This document is a state of the art of scientific knowledge intended to shed light on the fields in which one should be involved when there is, on one part a strong requirement on the side of structure operators, and on the other one, the present tools do not allow this requirement to be satisfactorily met. The analysis has been done on 12 scientific subjects: 1) Hydration of concrete at early ages: exothermicity, hardening, autogenous shrinkage; 2) Drying and drying shrinkage; 3) Alkali-silica reaction and bulky stage formation; 4) Long term deterioration by leaching; 5) Ionic diffusion and associated attacks: the chlorides case; 6) Permeability / tightness of concrete; 7) Concretes -nonlinear behaviour and cracking (I): contribution of the plasticity models; 8) Concretes - nonlinear behaviour and cracking (II): contribution of the damage models; 9) Concretes - nonlinear behaviour and cracking (III): the contribution of the probabilistic analysis model; 10) Delayed behaviour of

  9. Focused ion beam (FIB)/scanning electron microscopy (SEM) in tissue structural research.

    Science.gov (United States)

    Leser, Vladka; Milani, Marziale; Tatti, Francesco; Tkalec, Ziva Pipan; Strus, Jasna; Drobne, Damjana

    2010-10-01

    The focused ion beam (FIB) and scanning electron microscope (SEM) are commonly used in material sciences for imaging and analysis of materials. Over the last decade, the combined FIB/SEM system has proven to be also applicable in the life sciences. We have examined the potential of the focused ion beam/scanning electron microscope system for the investigation of biological tissues of the model organism Porcellio scaber (Crustacea: Isopoda). Tissue from digestive glands was prepared as for conventional SEM or as for transmission electron microscopy (TEM). The samples were transferred into FIB/SEM for FIB milling and an imaging operation. FIB-milled regions were secondary electron imaged, back-scattered electron imaged, or energy dispersive X-ray (EDX) analyzed. Our results demonstrated that FIB/SEM enables simultaneous investigation of sample gross morphology, cell surface characteristics, and subsurface structures. The same FIB-exposed regions were analyzed by EDX to provide basic compositional data. When samples were prepared as for TEM, the information obtained with FIB/SEM is comparable, though at limited magnification, to that obtained from TEM. A combination of imaging, micro-manipulation, and compositional analysis appears of particular interest in the investigation of epithelial tissues, which are subjected to various endogenous and exogenous conditions affecting their structure and function. The FIB/SEM is a promising tool for an overall examination of epithelial tissue under normal, stressed, or pathological conditions.

  10. Spatially quantitative models for vulnerability analyses and resilience measures in flood risk management: Case study Rafina, Greece

    Science.gov (United States)

    Karagiorgos, Konstantinos; Chiari, Michael; Hübl, Johannes; Maris, Fotis; Thaler, Thomas; Fuchs, Sven

    2013-04-01

    We will address spatially quantitative models for vulnerability analyses in flood risk management in the catchment of Rafina, 25 km east of Athens, Greece; and potential measures to reduce damage costs. The evaluation of flood damage losses is relatively advanced. Nevertheless, major problems arise since there are no market prices for the evaluation process available. Moreover, there is particular gap in quantifying the damages and necessary expenditures for the implementation of mitigation measures with respect to flash floods. The key issue is to develop prototypes for assessing flood losses and the impact of mitigation measures on flood resilience by adjusting a vulnerability model and to further develop the method in a Mediterranean region influenced by both, mountain and coastal characteristics of land development. The objective of this study is to create a spatial and temporal analysis of the vulnerability factors based on a method combining spatially explicit loss data, data on the value of exposed elements at risk, and data on flood intensities. In this contribution, a methodology for the development of a flood damage assessment as a function of the process intensity and the degree of loss is presented. It is shown that (1) such relationships for defined object categories are dependent on site-specific and process-specific characteristics, but there is a correlation between process types that have similar characteristics; (2) existing semi-quantitative approaches of vulnerability assessment for elements at risk can be improved based on the proposed quantitative method; and (3) the concept of risk can be enhanced with respect to a standardised and comprehensive implementation by applying the vulnerability functions to be developed within the proposed research. Therefore, loss data were collected from responsible administrative bodies and analysed on an object level. 
The model used is based on a basin-scale approach as well as data on elements at risk exposed

  11. Analysis list: sem-4 [Chip-atlas[Archive

    Lifescience Database Archive (English)

    Full Text Available sem-4 Larvae + ce10 http://dbarchive.biosciencedbc.jp/kyushu-u/ce10/target/sem-4.1....tsv http://dbarchive.biosciencedbc.jp/kyushu-u/ce10/target/sem-4.5.tsv http://dbarchive.biosciencedbc.jp/kyushu-u/ce10/target/sem...-4.10.tsv http://dbarchive.biosciencedbc.jp/kyushu-u/ce10/colo/sem-4.Larvae.tsv http://dbarchive.biosciencedbc.jp/kyushu-u/ce10/colo/Larvae.gml ...

  12. Statistical correlations and risk analyses techniques for a diving dual phase bubble model and data bank using massively parallel supercomputers.

    Science.gov (United States)

    Wienke, B R; O'Leary, T R

    2008-05-01

    Linking model and data, we detail the LANL diving reduced gradient bubble model (RGBM), dynamical principles, and correlation with data in the LANL Data Bank. Table, profile, and meter risks are obtained from likelihood analysis and quoted for air, nitrox, helitrox no-decompression time limits, repetitive dive tables, and selected mixed gas and repetitive profiles. Application analyses include the EXPLORER decompression meter algorithm, NAUI tables, University of Wisconsin Seafood Diver tables, comparative NAUI, PADI, Oceanic NDLs and repetitive dives, comparative nitrogen and helium mixed gas risks, USS Perry deep rebreather (RB) exploration dive, world record open circuit (OC) dive, and Woodville Karst Plain Project (WKPP) extreme cave exploration profiles. The algorithm has seen extensive and utilitarian application in mixed gas diving, both in recreational and technical sectors, and forms the basis for released tables and decompression meters used by scientific, commercial, and research divers. The LANL Data Bank is described, and the methods used to deduce risk are detailed. Risk functions for dissolved gas and bubbles are summarized. Parameters that can be used to estimate profile risk are tallied. To fit data, a modified Levenberg-Marquardt routine is employed with an L2 error norm. Appendices sketch the numerical methods, and list reports from field testing for (real) mixed gas diving. A Monte Carlo-like sampling scheme for fast numerical analysis of the data is also detailed, as a coupled variance reduction technique and additional check on the canonical approach to estimating diving risk. The method suggests alternatives to the canonical approach. This work represents a first-time correlation effort linking a dynamical bubble model with deep stop data. Supercomputing resources are requisite to connect model and data in application.
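    The likelihood analysis described above can be sketched as a maximum-likelihood fit of a parametric risk function to binomial dive outcomes. Everything below is an illustrative assumption, not LANL Data Bank material: the severity measure, the counts, and the complementary log-log risk form are hypothetical stand-ins for the model's actual risk functions.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical profile data: exposure severity x and observed DCS outcomes
# (hits out of trials). Values are illustrative only.
x = np.array([0.5, 1.0, 1.5, 2.0, 2.5])
trials = np.array([400, 350, 300, 200, 100])
hits = np.array([1, 3, 6, 9, 8])

def neg_log_likelihood(theta):
    a, b = theta
    # Complementary log-log risk function (an assumed form).
    p = 1.0 - np.exp(-np.exp(a + b * x))
    p = np.clip(p, 1e-12, 1.0 - 1e-12)
    return -(hits * np.log(p) + (trials - hits) * np.log(1.0 - p)).sum()

fit = minimize(neg_log_likelihood, x0=np.array([-5.0, 1.0]), method="Nelder-Mead")
a_hat, b_hat = fit.x
risk_at_2 = 1.0 - np.exp(-np.exp(a_hat + b_hat * 2.0))  # risk at severity 2
```

    A modified Levenberg-Marquardt routine, as the abstract describes, would replace the generic Nelder-Mead minimizer here; the binomial likelihood structure is the same.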

  13. A Meta-Meta-Analysis: Empirical Review of Statistical Power, Type I Error Rates, Effect Sizes, and Model Selection of Meta-Analyses Published in Psychology

    Science.gov (United States)

    Cafri, Guy; Kromrey, Jeffrey D.; Brannick, Michael T.

    2010-01-01

    This article uses meta-analyses published in "Psychological Bulletin" from 1995 to 2005 to describe meta-analyses in psychology, including examination of statistical power, Type I errors resulting from multiple comparisons, and model choice. Retrospective power estimates indicated that univariate categorical and continuous moderators, individual…

  14. Time Headway Modelling of Motorcycle-Dominated Traffic to Analyse Traffic Safety Performance and Road Link Capacity of Single Carriageways

    Directory of Open Access Journals (Sweden)

    D. M. Priyantha Wedagama

    2017-04-01

    Full Text Available This study aims to develop time headway distribution models to analyse traffic safety performance and road link capacities for motorcycle-dominated traffic in Denpasar, Bali. Three road links selected as the case study are Jl. Hayam Wuruk, Jl. Hang Tuah, and Jl. Padma. Data analysis showed that between 55% and 80% of motorists in Denpasar during morning and evening peak hours paid little attention to keeping a safe distance from the vehicles in front. The study found that Lognormal distribution models fit time headway data best during morning peak hours, while either the Weibull (3P) or the Pearson III distribution fits best during evening peak hours. Road link capacities for mixed traffic, predominantly motorcycles, are apparently affected by the behaviour of motorists in keeping a safe distance from the vehicles in front. Theoretical road link capacities for Jl. Hayam Wuruk, Jl. Hang Tuah and Jl. Padma are 3,186 vehicles/hour, 3,077 vehicles/hour and 1,935 vehicles/hour, respectively.
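    The distribution-fitting step described above can be sketched with SciPy: fit candidate headway distributions by maximum likelihood, rank them by log-likelihood, and tally the share of headways under a nominal safe-following threshold. The sample below is synthetic (drawn from an assumed lognormal), not the Denpasar field data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# Hypothetical headway sample in seconds; real inputs would be measured
# arrival gaps at the study sites.
headways = rng.lognormal(mean=0.5, sigma=0.6, size=500)

# Fit candidate distributions by maximum likelihood (location fixed at zero).
ln_params = stats.lognorm.fit(headways, floc=0)
wb_params = stats.weibull_min.fit(headways, floc=0)

# Rank the fits by total log-likelihood (higher is better).
ll_ln = stats.lognorm.logpdf(headways, *ln_params).sum()
ll_wb = stats.weibull_min.logpdf(headways, *wb_params).sum()

# Share of headways below a nominal 2 s safe-following threshold.
unsafe_share = float((headways < 2.0).mean())
```

    In practice a goodness-of-fit test (e.g. Kolmogorov-Smirnov) would supplement the log-likelihood ranking before selecting a distribution per peak period.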

  15. Bayesian salamanders: analysing the demography of an underground population of the European plethodontid Speleomantes strinatii with state-space modelling

    Directory of Open Access Journals (Sweden)

    Salvidio Sebastiano

    2010-02-01

    Full Text Available Abstract Background It has been suggested that Plethodontid salamanders are excellent candidates for indicating ecosystem health. However, detailed, long-term data sets of their populations are rare, limiting our understanding of the demographic processes underlying their population fluctuations. Here we present a demographic analysis based on a 1996-2008 data set on an underground population of Speleomantes strinatii (Aellen) in NW Italy. We utilised a Bayesian state-space approach allowing us to parameterise a stage-structured Lefkovitch model. We used all the available population data from annual temporary removal experiments to provide us with the baseline data on the numbers of juveniles, subadults and adult males and females present at any given time. Results Sampling the posterior chains of the converged state-space model gives us the likelihood distributions of the state-specific demographic rates and the associated uncertainty of these estimates. Analysing the resulting parameterised Lefkovitch matrices shows that the population growth is very close to 1, and that at population equilibrium we expect half of the individuals present to be adults of reproductive age, which is what we also observe in the data. Elasticity analysis shows that adult survival is the key determinant for population growth. Conclusion This analysis demonstrates how an understanding of population demography can be gained from structured population data even in a case where following marked individuals over their whole lifespan is not practical.
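    The Lefkovitch-matrix analysis described above (asymptotic growth rate plus elasticity) can be sketched with NumPy. The three-stage matrix below is a hypothetical illustration, not the paper's posterior estimates; the qualitative outcome (adult-survival elasticity dominating) mirrors the stated finding.

```python
import numpy as np

# Hypothetical 3-stage Lefkovitch matrix (juveniles, subadults, adults);
# entries are illustrative transition/fecundity rates.
A = np.array([
    [0.00, 0.00, 0.80],   # recruitment of juveniles from adults
    [0.50, 0.30, 0.00],   # juvenile->subadult growth, subadult retention
    [0.00, 0.40, 0.85],   # subadult maturation, adult survival
])

evals, evecs = np.linalg.eig(A)
lam = evals.real.max()                              # asymptotic growth rate
w = np.abs(evecs[:, np.argmax(evals.real)].real)    # stable stage structure

evals_l, evecs_l = np.linalg.eig(A.T)
v = np.abs(evecs_l[:, np.argmax(evals_l.real)].real)  # reproductive values

# Elasticities e_ij = (a_ij / lambda) * v_i * w_j / <v, w>; they sum to 1.
E = (A / lam) * np.outer(v, w) / (v @ w)
```

    With these illustrative rates, lam is slightly above 1 and the adult-survival elasticity E[2, 2] is the largest entry, i.e. adult survival is the key driver of growth.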

  16. A simple experimental procedure to quantify image noise in the context of strain measurements at the microscale using DIC and SEM images

    Directory of Open Access Journals (Sweden)

    Bornert M.

    2010-06-01

    Full Text Available Image noise is an important factor that influences the accuracy of strain field measurements by means of digital image correlation and scanning electron microscope (SEM imaging. We propose a new model to quantify the SEM image noise, which extends the classical photon noise model by taking into account the brightness setup in SEM imaging. Furthermore, we apply this model to investigate the impact of different SEM setting parameters on image noise, such as detector, dwell time, spot size, and pressure in the SEM chamber in the context of low vacuum imaging.
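    A common way to quantify such image noise empirically is to difference two nominally identical acquisitions and regress the local noise variance on the local mean gray level. The sketch below uses a synthetic image pair and an assumed affine (Poisson-like) noise model, var = gain * signal + offset; this is a generic stand-in for illustration, not the authors' exact brightness-extended formulation.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for two acquisitions of one field of view.
truth = rng.uniform(50.0, 200.0, size=(256, 256))   # underlying gray levels
gain_true, offset_true = 0.5, 10.0                  # assumed noise parameters

def acquire(t):
    return t + rng.normal(0.0, np.sqrt(gain_true * t + offset_true))

img1, img2 = acquire(truth), acquire(truth)

# The half-difference squared is an unbiased per-pixel variance estimate;
# bin it by the mean gray level to build a variance-vs-signal curve.
mean = 0.5 * (img1 + img2)
var_est = 0.5 * (img1 - img2) ** 2
bins = np.linspace(50.0, 200.0, 16)
idx = np.digitize(mean, bins)
m = np.array([mean[idx == i].mean() for i in range(1, len(bins))])
v = np.array([var_est[idx == i].mean() for i in range(1, len(bins))])

# A linear fit of variance on mean recovers the noise-model parameters.
gain_hat, offset_hat = np.polyfit(m, v, 1)
```

    Repeating this per detector, dwell time, spot size, and chamber pressure would give the kind of setting-by-setting noise comparison the study describes.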

  17. Comparison of statistical inferences from the DerSimonian-Laird and alternative random-effects model meta-analyses - an empirical assessment of 920 Cochrane primary outcome meta-analyses.

    Science.gov (United States)

    Thorlund, Kristian; Wetterslev, Jørn; Awad, Tahany; Thabane, Lehana; Gluud, Christian

    2011-12-01

    In random-effects model meta-analysis, the conventional DerSimonian-Laird (DL) estimator typically underestimates the between-trial variance. Alternative variance estimators have been proposed to address this bias. This study aims to empirically compare statistical inferences from random-effects model meta-analyses on the basis of the DL estimator and four alternative estimators, as well as distributional assumptions (normal distribution and t-distribution) about the pooled intervention effect. We evaluated the discrepancies in p-values and 95% confidence intervals (CIs) in statistically significant meta-analyses, and the degree (percentage) of statistical heterogeneity (e.g. I(2)) across 920 Cochrane primary outcome meta-analyses. In total, 414 of the 920 meta-analyses were statistically significant with the DL meta-analysis, and 506 were not. Compared with the DL estimator, the four alternative estimators yielded p-values and CIs that could be interpreted as discordant in up to 11.6% or 6% of the included meta-analyses, depending on whether a normal distribution or a t-distribution of the intervention effect estimates was assumed. Large discrepancies were observed for the measures of degree of heterogeneity when comparing DL with each of the four alternative estimators. Estimating the degree (percentage) of heterogeneity on the basis of less biased between-trial variance estimators seems preferable to current practice. Disclosing inferential sensitivity of p-values and CIs may also be necessary when borderline significant results have substantial impact on the conclusion. Copyright © 2012 John Wiley & Sons, Ltd.
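    The DL computation at the heart of the comparison is a closed-form moment estimator. The sketch below implements it for hypothetical trial data (the effect sizes and variances are invented for illustration) and derives the usual downstream quantities: I², DL weights, the random-effects pooled estimate, and its normal-approximation 95% CI.

```python
import numpy as np

def dl_tau2(y, v):
    """DerSimonian-Laird moment estimator of the between-trial variance.
    y: per-trial effect estimates; v: their within-trial variances."""
    w = 1.0 / v
    y_fixed = (w * y).sum() / w.sum()          # fixed-effect pooled estimate
    Q = (w * (y - y_fixed) ** 2).sum()         # Cochran's Q statistic
    c = w.sum() - (w ** 2).sum() / w.sum()
    return max(0.0, (Q - (len(y) - 1)) / c), Q

# Hypothetical log odds ratios and within-trial variances from five trials.
y = np.array([-0.4, -0.1, 0.2, -0.6, -0.3])
v = np.array([0.04, 0.09, 0.06, 0.16, 0.05])

tau2, Q = dl_tau2(y, v)
I2 = max(0.0, (Q - (len(y) - 1)) / Q) * 100.0   # percent heterogeneity

# Random-effects pooled estimate and 95% CI using DL weights.
w_star = 1.0 / (v + tau2)
pooled = (w_star * y).sum() / w_star.sum()
se = np.sqrt(1.0 / w_star.sum())
ci = (pooled - 1.96 * se, pooled + 1.96 * se)
```

    Substituting an alternative tau² estimator (e.g. restricted maximum likelihood) into `w_star`, or a t-distribution critical value for 1.96, reproduces the kinds of inferential sensitivity the study examines.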

  18. Mechanical and SEM analysis of artificial comet nucleus samples

    Science.gov (United States)

    Thiel, K.; Kochan, H.; Roessler, K.; Gruen, E.; Schwehm, G.; Hellmann, H.; Hsiung, P.; Koelzer, G.

    1989-01-01

    Since 1987 experiments dealing with comet nucleus phenomena have been carried out in the DFVLR space simulation chambers. The main objective of these experiments is a better understanding of thermal behavior, surface phenomena and especially the gas-dust interaction. Depending on sample composition and exposure to simulated solar irradiation (xenon bulbs), crusts of different hardness and thickness formed and were measured. The measuring device consists of a motor-driven pressure foot (5 mm diameter), which is pressed into the sample. The applied compressive force is electronically monitored. The microstructure of the crust and dust residuals is investigated by scanning electron microscopy (SEM) techniques. Stress-depth profiles of an unirradiated and an irradiated model comet are given.

  19. Filler segmentation of SEM paper images based on mathematical morphology.

    Science.gov (United States)

    Ait Kbir, M; Benslimane, Rachid; Princi, Elisabetta; Vicini, Silvia; Pedemonte, Enrico

    2007-07-01

    Recent developments in microscopy and image processing have made digital measurements on high-resolution images of fibrous materials possible. This helps to gain a better understanding of the structure and other properties of the material at the micro level. In this paper SEM image segmentation based on mathematical morphology is proposed. In fact, paper model images (Whatman, Murillo, Watercolor, Newsprint paper) selected in the context of the Euro Mediterranean PaperTech Project have different distributions of fibers and fillers, caused by the presence of SiAl and CaCO3 particles. It is a microscopy challenge to make filler particles in the sheet distinguishable from the other components of the paper surface. This objective is achieved here by using suitable structural elements and mathematical morphology operators.

  20. Canticum Novum: música sem palavras e palavras sem som no pensamento de Santo Agostinho

    Directory of Open Access Journals (Sweden)

    Lorenzo Mammì

    2000-04-01

    Full Text Available NO De Magistro, Santo Agostinho coloca a reza e o canto numa posição similar, à margem das funções imediatamente comunicativas da linguagem. A reflexão agostiniana sobre a reza se baseia nos hábitos cristãos da leitura, da oração e da meditação silenciosas. Há sobre o canto, na prática igualmente inovadora do jubilus, melodia sem palavra destinada aos momentos mais intensos e gaudiosos da liturgia. A oração silenciosa e o jubilus são temas recorrentes da literatura patrística, mas Agostinho os aborda de maneira original, desenhando, a partir das palavras sem som da oração e do som sem palavra do jubilus, o perfil de um discurso interior, que não se destina aos homens, mas a Deus.IN HIS De Magistro Saint Augustine places prayer and song on a similar level, alongside the language immediately communicative functions. His considerations on prayer are grounded on the Christian habits of silent reading, prayer and meditation; those on song, on the equally innovating practice called jubilus, which is melody without words designed for the intensest and most joyous liturgical moments. Silent prayer and jubilus are recurring topics in patristic literature, but Augustine deals with them in an original way, drawing from the soundless words of prayer and the wordless sound of jubilus an inner discourse, addressed not to men but to God.

  1. Using ecological niche models and niche analyses to understand speciation patterns: the case of sister neotropical orchid bees.

    Directory of Open Access Journals (Sweden)

    Daniel P Silva

    Full Text Available The role of past connections between the two major South American forested biomes on current species distribution has been recognized a long time ago. Climatic oscillations that further separated these biomes have promoted parapatric speciation, in which many species had their continuous distribution split, giving rise to different but related species (i.e., different potential distributions and realized niche features). The distribution of many sister species of orchid bees follow this pattern. Here, using ecological niche models and niche analyses, we (1) tested the role of ecological niche differentiation on the divergence between sister orchid-bees (genera Eulaema and Eufriesea) from the Amazon and Atlantic forests, and (2) highlighted interesting areas for new surveys. Amazonian species occupied different realized niches than their Atlantic sister species. Conversely, species of sympatric but distantly related Eulaema bees occupied similar realized niches. Amazonian species had a wide potential distribution in South America, whereas Atlantic Forest species were more limited to the eastern coast of the continent. Additionally, we identified several areas in need of future surveys. Our results show that the realized niche of Atlantic-Amazonian sister species of orchid bees, which have been previously treated as allopatric populations of three species, had limited niche overlap and similarity. These findings agree with their current taxonomy, which treats each of those populations as distinct valid species.

  2. Using ecological niche models and niche analyses to understand speciation patterns: the case of sister neotropical orchid bees.

    Science.gov (United States)

    Silva, Daniel P; Vilela, Bruno; De Marco, Paulo; Nemésio, André

    2014-01-01

    The role of past connections between the two major South American forested biomes on current species distribution has been recognized a long time ago. Climatic oscillations that further separated these biomes have promoted parapatric speciation, in which many species had their continuous distribution split, giving rise to different but related species (i.e., different potential distributions and realized niche features). The distribution of many sister species of orchid bees follow this pattern. Here, using ecological niche models and niche analyses, we (1) tested the role of ecological niche differentiation on the divergence between sister orchid-bees (genera Eulaema and Eufriesea) from the Amazon and Atlantic forests, and (2) highlighted interesting areas for new surveys. Amazonian species occupied different realized niches than their Atlantic sister species. Conversely, species of sympatric but distantly related Eulaema bees occupied similar realized niches. Amazonian species had a wide potential distribution in South America, whereas Atlantic Forest species were more limited to the eastern coast of the continent. Additionally, we identified several areas in need of future surveys. Our results show that the realized niche of Atlantic-Amazonian sister species of orchid bees, which have been previously treated as allopatric populations of three species, had limited niche overlap and similarity. These findings agree with their current taxonomy, which treats each of those populations as distinct valid species.

  3. Water flow experiments and analyses on the cross-flow type mercury target model with the flow guide plates

    CERN Document Server

    Haga, K; Kaminaga, M; Hino, R

    2001-01-01

    A mercury target is used in the spallation neutron source driven by a high-intensity proton accelerator. In this study, the effectiveness of the cross-flow type mercury target structure was evaluated experimentally and analytically. Prior to the experiment, the mercury flow field and the temperature distribution in the target container were analyzed assuming a proton beam energy and power of 1.5 GeV and 5 MW, respectively, and the feasibility of the cross-flow type target was evaluated. Then the average water flow velocity field in the target mock-up model, which was fabricated from Plexiglass for a water experiment, was measured at room temperature using the PIV technique. Water flow analyses were conducted and the analytical results were compared with the experimental results. The experimental results showed that the cross-flow could be realized in most of the proton beam path area and the analytical result of the water flow velocity field showed good correspondence to the experimental results in the case w...

  4. Rapid evaluation of particle properties using inverse SEM simulations

    Energy Technology Data Exchange (ETDEWEB)

    Bekar, Kursat B. [ORNL]; Miller, Thomas Martin [ORNL]; Patton, Bruce W. [ORNL]; Weber, Charles F. [ORNL]

    2017-01-01

    The characteristic X-rays produced by the interactions of the electron beam with the sample in a scanning electron microscope (SEM) are usually captured with a variable-energy detector, a process termed energy dispersive spectrometry (EDS). The purpose of this work is to exploit inverse simulations of SEM-EDS spectra to enable rapid determination of sample properties, particularly elemental composition. This is accomplished using penORNL, a modified version of PENELOPE, and a modified version of the traditional Levenberg-Marquardt nonlinear optimization algorithm, which together are referred to as MOZAIK-SEM. The overall conclusion of this work is that MOZAIK-SEM is a promising method for performing inverse analysis of X-ray spectra generated within a SEM. As this methodology exists now, MOZAIK-SEM has been shown to calculate the elemental composition of an unknown sample within a few percent of the actual composition.
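    The inverse-analysis loop described above can be sketched as nonlinear least squares: a forward model predicts a spectrum from a candidate composition, and a Levenberg-Marquardt optimizer adjusts the composition to match the measurement. The forward model below is a deliberately simple stand-in (a linear mix of Gaussian element lines at nominal K-alpha energies), not the penORNL Monte Carlo simulation, and the optimizer is SciPy's generic LM rather than the modified one.

```python
import numpy as np
from scipy.optimize import least_squares

energies = np.linspace(0.5, 10.0, 400)                  # keV grid

def line(center, width=0.08):
    # Gaussian stand-in for a characteristic X-ray line.
    return np.exp(-0.5 * ((energies - center) / width) ** 2)

# Reference "spectra" for Si, Fe, Cu at their K-alpha energies (keV).
refs = np.vstack([line(1.74), line(6.40), line(8.05)])

true_frac = np.array([0.2, 0.5, 0.3])
measured = true_frac @ refs                             # noise-free target

def residuals(frac):
    return frac @ refs - measured

# Levenberg-Marquardt inversion for the composition.
fit = least_squares(residuals, x0=np.array([0.34, 0.33, 0.33]), method="lm")
est = fit.x / fit.x.sum()                               # normalized fractions
```

    With a Monte Carlo forward model, each residual evaluation is an expensive simulation, which is why the abstract emphasizes supercomputing resources and a modified optimizer.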

  5. Modeling Acequia Irrigation Systems Using System Dynamics: Model Development, Evaluation, and Sensitivity Analyses to Investigate Effects of Socio-Economic and Biophysical Feedbacks

    Directory of Open Access Journals (Sweden)

    Benjamin L. Turner

    2016-10-01

    Full Text Available Agriculture-based irrigation communities of northern New Mexico have survived for centuries despite the arid environment in which they reside. These irrigation communities are threatened by regional population growth, urbanization, a changing demographic profile, economic development, climate change, and other factors. Within this context, we investigated the extent to which community resource management practices centering on shared resources (e.g., water for agriculture in the floodplains and grazing resources in the uplands) and mutualism (i.e., shared responsibility of local residents for maintaining traditional irrigation policies and upholding cultural and spiritual observances) embedded within the community structure influence acequia function. We used a system dynamics modeling approach as an interdisciplinary platform to integrate these systems, specifically the relationship between community structure and resource management. In this paper we describe the background and context of acequia communities in northern New Mexico and the challenges they face. We formulate a Dynamic Hypothesis capturing the endogenous feedbacks driving acequia community vitality. Development of the model centered on major stock-and-flow components, including linkages for hydrology, ecology, community, and economics. Calibration metrics were used for model evaluation, including statistical correlation of observed and predicted values and Theil inequality statistics. Results indicated that the model reproduced trends exhibited by the observed system. Sensitivity analyses of socio-cultural processes identified absentee decisions, cumulative income effect on time in agriculture, land use preference due to time allocation, community demographic effect, effect of employment on participation, and farm size effect as key determinants of system behavior and response. 
Sensitivity analyses of biophysical parameters revealed that several key parameters (e.g., acres per

  6. Transitividade dos verbos alternantes: uma proposta semântica [Transitivity of alternating verbs: a semantic proposal]

    Directory of Open Access Journals (Sweden)

    Larissa CIRÍACO

    2009-12-01

    Full Text Available This article presents a semantic proposal for classifying alternating verbs with respect to their transitivity. It starts from an analysis of the lexical-semantic properties entailed by causative verbs in Brazilian Portuguese, assuming transitivity to be an interface phenomenon between syntax and lexical semantics. The proposal reveals not only the semantic property relevant to transitivity, but also the general processes responsible for verbal alternations.

  7. Epidemiology of HPV 16 and cervical cancer in Finland and the potential impact of vaccination: mathematical modelling analyses.

    Directory of Open Access Journals (Sweden)

    Ruanne V Barnabas

    2006-05-01

    Full Text Available BACKGROUND: Candidate human papillomavirus (HPV) vaccines have demonstrated almost 90%-100% efficacy in preventing persistent, type-specific HPV infection over 18 mo in clinical trials. If these vaccines go on to demonstrate prevention of precancerous lesions in phase III clinical trials, they will be licensed for public use in the near future. How these vaccines will be used in countries with national cervical cancer screening programmes is an important question. METHODS AND FINDINGS: We developed a transmission model of HPV 16 infection and progression to cervical cancer and calibrated it to Finnish HPV 16 seroprevalence over time. The model was used to estimate the transmission probability of the virus, to look at the effect of changes in patterns of sexual behaviour and smoking on age-specific trends in cancer incidence, and to explore the impact of HPV 16 vaccination. We estimated a high per-partnership transmission probability of HPV 16, of 0.6. The modelling analyses showed that changes in sexual behaviour and smoking accounted, in part, for the increase seen in cervical cancer incidence in 35- to 39-y-old women from 1990 to 1999. At both low (10%, opportunistic immunisation) and high (90%, a national immunisation programme) coverage of the adolescent population, vaccinating women and men had little benefit over vaccinating women alone. We estimate that vaccinating 90% of young women before sexual debut has the potential to decrease HPV type-specific (e.g., type 16) cervical cancer incidence by 91%. If older women are more likely to have persistent infections and progress to cancer, then vaccination with a duration of protection of less than 15 y could result in an older susceptible cohort and no decrease in cancer incidence. While vaccination has the potential to significantly reduce type-specific cancer incidence, its combination with screening further improves cancer prevention. CONCLUSIONS: HPV vaccination has the potential to
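The coverage thresholds discussed in this abstract can be related to textbook herd-immunity arithmetic. The sketch below is a back-of-the-envelope calculation with an assumed basic reproduction number, not the authors' calibrated transmission model:

```python
# Herd-immunity arithmetic (hypothetical parameters, NOT from the paper):
# with coverage p and vaccine efficacy e, the effective reproduction number
# is R_eff = R0 * (1 - p * e); transmission is interrupted once R_eff < 1,
# i.e. when p exceeds (1 - 1/R0) / e.
R0 = 2.0         # assumed basic reproduction number for HPV 16
efficacy = 0.95  # assumed vaccine efficacy

def r_eff(coverage, R0=R0, e=efficacy):
    """Effective reproduction number under partial vaccination coverage."""
    return R0 * (1 - coverage * e)

critical = (1 - 1 / R0) / efficacy
print(f"R_eff at 10% coverage: {r_eff(0.10):.2f}")   # above 1: endemic
print(f"R_eff at 90% coverage: {r_eff(0.90):.2f}")   # below 1: elimination
print(f"critical coverage: {critical:.1%}")
```

This simple threshold logic is why the low (10%) and high (90%) coverage scenarios in the abstract behave so differently; the published result depends on the full age- and behaviour-structured model, not this formula.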

  8. Automated SEM Modal Analysis Applied to the Diogenites

    Science.gov (United States)

    Bowman, L. E.; Spilde, M. N.; Papike, James J.

    1996-01-01

    Analysis of volume proportions of minerals, or modal analysis, is routinely accomplished by point counting on an optical microscope, but the process, particularly on brecciated samples such as the diogenite meteorites, is tedious and prone to error by misidentification of very small fragments, which may make up a significant volume of the sample. Precise volume percentage data can be gathered on a scanning electron microscope (SEM) utilizing digital imaging and an energy dispersive spectrometer (EDS). This form of automated phase analysis reduces error, and at the same time provides more information than could be gathered using simple point counting alone, such as particle morphology statistics and chemical analyses. We have previously studied major, minor, and trace-element chemistry of orthopyroxene from a suite of diogenites. This abstract describes the method applied to determine the modes on this same suite of meteorites and the results of that research. The modal abundances thus determined add additional information on the petrogenesis of the diogenites. In addition, low-abundance phases such as spinels were located for further analysis by this method.

  9. Structural equation modeling: building and evaluating causal models: Chapter 8

    Science.gov (United States)

    Grace, James B.; Scheiner, Samuel M.; Schoolmaster, Donald R.

    2015-01-01

    Scientists frequently wish to study hypotheses about causal relationships, rather than just statistical associations. This chapter addresses the question of how scientists might approach this ambitious task. Here we describe structural equation modeling (SEM), a general modeling framework for the study of causal hypotheses. Our goals are to (a) concisely describe the methodology, (b) illustrate its utility for investigating ecological systems, and (c) provide guidance for its application. Throughout our presentation, we rely on a study of the effects of human activities on wetland ecosystems to make our description of methodology more tangible. We begin by presenting the fundamental principles of SEM, including both its distinguishing characteristics and the requirements for modeling hypotheses about causal networks. We then illustrate SEM procedures and offer guidelines for conducting SEM analyses. Our focus in this presentation is on basic modeling objectives and core techniques. Pointers to additional modeling options are also given.
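As a minimal illustration of the idea behind SEM, a hypothesized causal network expressed as a set of simultaneous structural equations, here is an observed-variable path analysis on simulated data. All coefficients are invented for illustration; dedicated SEM software adds latent variables, covariance-based fitting, and fit statistics:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Hypothesized causal network (coefficients invented):
# x --(a)--> m --(b)--> y, plus a direct path x --(c)--> y.
x = rng.normal(size=n)
m = 0.6 * x + rng.normal(scale=0.5, size=n)
y = 0.4 * m + 0.3 * x + rng.normal(scale=0.5, size=n)

# With only observed variables, each structural equation can be
# estimated by least squares.
a = np.linalg.lstsq(np.c_[x, np.ones(n)], m, rcond=None)[0][0]
b, c = np.linalg.lstsq(np.c_[m, x, np.ones(n)], y, rcond=None)[0][:2]

indirect = a * b  # effect of x on y routed through the mediator m
print(f"a~{a:.2f}, b~{b:.2f}, direct c~{c:.2f}, indirect a*b~{indirect:.2f}")
```

The estimated direct and indirect effects recover the generating coefficients; in a real SEM analysis the same decomposition is reported alongside overall model-fit measures.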

  10. Multiplicative models of analysis: a description and their use in analysing accident ratios as a function of hourly traffic volume and road-surface skidding resistance.

    NARCIS (Netherlands)

    Oppe, S.

    1977-01-01

    Accident ratios are analysed with regard to the variables road-surface skidding resistance and hourly traffic volume. It is concluded that the multiplicative model describes the data better than the additive model, and moreover that there is no interaction between skidding resistance and traffic volume.
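The additive-versus-multiplicative comparison can be sketched by fitting both forms to toy data, the multiplicative form via linear regression in log space. Variable names, ranges, and coefficients below are invented, not taken from the study:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 400
skid = rng.uniform(0.3, 0.9, n)      # hypothetical skidding resistance
volume = rng.uniform(100, 2000, n)   # hypothetical hourly traffic volume
# Data generated by a multiplicative law: ratio = k * skid^b1 * volume^b2
ratio = 2.0 * skid ** -1.5 * volume ** 0.4 * rng.lognormal(0.0, 0.1, n)

def r2(y, yhat):
    return 1 - np.sum((y - yhat) ** 2) / np.sum((y - y.mean()) ** 2)

# Additive model: ratio ~ a + b1*skid + b2*volume
Xa = np.c_[np.ones(n), skid, volume]
r2_add = r2(ratio, Xa @ np.linalg.lstsq(Xa, ratio, rcond=None)[0])

# Multiplicative model: linear in log space, back-transformed with exp
Xm = np.c_[np.ones(n), np.log(skid), np.log(volume)]
r2_mul = r2(ratio, np.exp(Xm @ np.linalg.lstsq(Xm, np.log(ratio), rcond=None)[0]))

print(f"additive R^2 = {r2_add:.3f}   multiplicative R^2 = {r2_mul:.3f}")
```

On data generated multiplicatively, the log-linear fit explains more variance than the additive plane, mirroring the study's conclusion in miniature.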

  11. A second-generation device for automated training and quantitative behavior analyses of molecularly-tractable model organisms.

    Directory of Open Access Journals (Sweden)

    Douglas Blackiston

    Full Text Available A deep understanding of cognitive processes requires functional, quantitative analyses of the steps leading from genetics and the development of nervous system structure to behavior. Molecularly-tractable model systems such as Xenopus laevis and planaria offer an unprecedented opportunity to dissect the mechanisms determining the complex structure of the brain and CNS. A standardized platform that facilitated quantitative analysis of behavior would make a significant impact on evolutionary ethology, neuropharmacology, and cognitive science. While some animal tracking systems exist, the available systems do not allow automated training (feedback to individual subjects in real time), which is necessary for operant conditioning assays. The lack of standardization in the field, and the numerous technical challenges that face the development of a versatile system with the necessary capabilities, comprise a significant barrier keeping molecular developmental biology labs from integrating behavior analysis endpoints into their pharmacological and genetic perturbations. Here we report the development of a second-generation system that is a highly flexible, powerful machine vision and environmental control platform. In order to enable multidisciplinary studies aimed at understanding the roles of genes in brain function and behavior, and to aid other laboratories that do not have the facilities to undertake complex engineering development, we describe the device and the problems that it overcomes. We also present sample data using frog tadpoles and flatworms to illustrate its use. Having solved significant engineering challenges in its construction, the resulting design is a relatively inexpensive instrument of wide relevance for several fields, and will accelerate interdisciplinary discovery in pharmacology, neurobiology, regenerative medicine, and cognitive science.

  12. Transcriptomics and proteomics analyses of the PACAP38 influenced ischemic brain in permanent middle cerebral artery occlusion model mice

    Directory of Open Access Journals (Sweden)

    Hori Motohide

    2012-11-01

    Full Text Available Abstract Introduction The neuropeptide pituitary adenylate cyclase-activating polypeptide (PACAP) is considered to be a potential therapeutic agent for prevention of cerebral ischemia. Ischemia is among the most common causes of death after heart attack and cancer, with major negative social and economic consequences. This study was designed to investigate the effect of intracerebroventricular injection of PACAP38 in a mouse model of permanent middle cerebral artery occlusion (PMCAO), along with a corresponding SHAM control that used 0.9% saline injection. Methods Ischemic and non-ischemic brain tissues were sampled at 6 and 24 hours post-treatment. Following behavioral analyses to confirm whether ischemia had occurred, we investigated the genome-wide changes in gene and protein expression using a DNA microarray chip (4x44K, Agilent) and two-dimensional gel electrophoresis (2-DGE) coupled with matrix-assisted laser desorption/ionization-time of flight-mass spectrometry (MALDI-TOF-MS), respectively. Western blotting and immunofluorescent staining were also used to further examine the identified protein factor. Results Our results revealed numerous changes in the transcriptome of the ischemic hemisphere (ipsilateral) treated with PACAP38 compared to the saline-injected SHAM control hemisphere (contralateral). Previously known (such as the interleukin family) and novel (Gabra6, Crtam) genes were identified under PACAP influence. In parallel, 2-DGE analysis revealed a highly expressed protein spot in the ischemic hemisphere that was identified as dihydropyrimidinase-related protein 2 (DPYL2). The DPYL2, also known as Crmp2, is a marker for axonal growth and nerve development. Interestingly, PACAP treatment slightly increased its abundance (by 2-DGE and immunostaining) at 6 h but not at 24 h in the ischemic hemisphere, suggesting PACAP activates a neuronal defense mechanism early on.
Conclusions This study provides a detailed inventory of PACAP influenced gene expressions

  13. Photomask Dimensional Metrology in the SEM: Has Anything Really Changed?

    Science.gov (United States)

    Postek, Michael T., Jr.; Vladar, Andras E.; Bennett, Marylyn H.

    2002-12-01

    Photomask dimensional metrology in the scanning electron microscope (SEM) has not evolved as rapidly as the metrology of resists and integrated circuit features on wafers. This has been due partly to the 4x (or 5x) reduction in the optical steppers and scanners used in the lithography process, and partly to the lesser need to account for the real three-dimensionality of the mask structures. So, where photomasks are concerned, many of the issues challenging wafer dimensional metrology at 1x are reduced by a factor of 4 or 5 and thus could be temporarily swept aside. This is rapidly changing with the introduction of advanced masks with optical proximity correction and phase shifting features used in 100 nm and smaller circuit generations. Fortunately, photomask metrology generally benefits from the advances made for wafer metrology, but there are still unique issues to be solved in this form of dimensional metrology. It is likely that no single metrology method or tool will ever provide all necessary answers. As with other types of metrology, resolution, sensitivity and linearity in the three-dimensional measurements of the shape of the lines and phase shifting features in general (width, height and wall angles) and the departures from the desired shape (surface and edge roughness, etc.) are the key parameters. Different methods and tools differ in their ability to collect averaged and localized signals with an acceptable speed, but in any case, application of thorough knowledge of the physics of the given metrology is essential to extract the needed information. This paper will discuss the topics of precision, accuracy and traceability in the SEM metrology of photomasks. Current and possible new techniques utilized in the measurements of photomasks, including charge suppression and highly accurate modeling for electron beam metrology, will also be explored to answer the question "Has anything really changed?"

  14. Aeroelastic Analyses of the SemiSpan SuperSonic Transport (S4T) Wind Tunnel Model at Mach 0.95

    Science.gov (United States)

    Hur, Jiyoung

    2014-01-01

    Detailed aeroelastic analyses of the SemiSpan SuperSonic Transport (S4T) wind tunnel model at Mach 0.95 with a 1.75-deg fixed angle of attack are presented. First, a numerical procedure using the Computational Fluids Laboratory 3-Dimensional (CFL3D) Version 6.4 flow solver is investigated. The mesh update method for structured multi-block grids was successfully applied to the Navier-Stokes simulations. Second, the steady aerodynamic analyses with a rigid structure of the S4T wind tunnel model are reviewed in transonic flow. Third, static analyses were performed for both the Euler and Navier-Stokes equations. Both the Euler and Navier-Stokes equations predicted a significant increase of lift forces, compared to the results from the rigid structure of the S4T wind-tunnel model, over various dynamic pressures. Finally, dynamic aeroelastic analyses were performed to investigate the flutter condition of the S4T wind tunnel model at the transonic Mach number. The condition of flutter was observed at a dynamic pressure of approximately 75.0-psf for the Navier-Stokes simulations. However, it was observed that the flutter condition occurred at a dynamic pressure of approximately 47.27-psf for the Euler simulations. Also, the computational efficiency of the aeroelastic analyses for the S4T wind tunnel model has been assessed.

  15. Web semántica y servicios web semánticos [Semantic Web and Semantic Web services]

    OpenAIRE

    Marquez Solis, Santiago

    2007-01-01

    From this Final Degree Project (TFC) we want to study the evolution of the current Web towards the Semantic Web.

  16. Scanning electron microscopy: preparation and imaging for SEM.

    Science.gov (United States)

    Jones, Chris G

    2012-01-01

    Scanning electron microscopy (SEM) has been almost universally applied for the surface examination and characterization of both natural and man-made objects. Although it is an invasive technique, developments in electron microscopy over the years have given the microscopist a much clearer choice in how invasive the technique will be. With the advent of low-vacuum SEM in the 1970s (The environmental cold stage, 1970) and environmental SEM in the late 1980s (J Microsc 160(pt. 1):9-19, 1989), it is now possible in some circumstances to examine samples without preparation. However, for the examination of biological tissue and cells it is still advisable to chemically fix, dehydrate, and coat samples for SEM imaging and analysis. This chapter aims to provide an overview of SEM as an imaging tool, and a general introduction to some of the methods applied for the preparation of samples.

  17. Mental model mapping as a new tool to analyse the use of information in decision-making in integrated water management

    Science.gov (United States)

    Kolkman, M. J.; Kok, M.; van der Veen, A.

    The solution of complex, unstructured problems is faced with policy controversy and dispute, unused and misused knowledge, project delay and failure, and decline of public trust in governmental decisions. Mental model mapping (also called concept mapping) is a technique to analyse these difficulties on a fundamental cognitive level, which can reveal experiences, perceptions, assumptions, knowledge and subjective beliefs of stakeholders, experts and other actors, and can stimulate communication and learning. This article presents the theoretical framework from which the use of mental model mapping techniques to analyse this type of problems emerges as a promising technique. The framework consists of the problem solving or policy design cycle, the knowledge production or modelling cycle, and the (computer) model as interface between the cycles. Literature attributes difficulties in the decision-making process to communication gaps between decision makers, stakeholders and scientists, and to the construction of knowledge within different paradigm groups that leads to different interpretation of the problem situation. Analysis of the decision-making process literature indicates that choices, which are made in all steps of the problem solving cycle, are based on an individual decision maker’s frame of perception. This frame, in turn, depends on the mental model residing in the mind of the individual. Thus we identify three levels of awareness on which the decision process can be analysed. This research focuses on the third level. Mental models can be elicited using mapping techniques. In this way, analysing an individual’s mental model can shed light on decision-making problems. The steps of the knowledge production cycle are, in the same manner, ultimately driven by the mental models of the scientist in a specific discipline. Remnants of this mental model can be found in the resulting computer model. The characteristics of unstructured problems (complexity

  18. Smart flexible microrobots for scanning electron microscope (SEM) applications

    Science.gov (United States)

    Schmoeckel, Ferdinand; Fatikow, Sergej

    2000-06-01

    In the scanning electron microscope (SEM), specially designed microrobots can act as a flexible assembly facility for hybrid microsystems, as probing devices for in-situ tests on IC structures, or simply as a helpful teleoperated tool for the SEM operator when examining samples. Several flexible microrobots of this kind have been developed and tested. Driven by piezoactuators, these mobile robots, only a few cubic centimeters in size, perform manipulations with a precision of up to 10 nm and transport the gripped objects at speeds of up to 3 cm/s. In accuracy, flexibility and price they are superior to conventional precision robots. A new SEM-suited microrobot prototype is described in this paper. The SEM's vacuum chamber has been equipped with various elements like flanges and CCD cameras to enable the robot to operate. In order to use the SEM image for the automatic real-time control of the robots, the SEM's electron beam is actively controlled by a PC. The latter submits the images to the robots' control computer system. For obtaining three-dimensional information in real time, especially for the closed-loop control of a robot endeffector, e.g. microgripper, a triangulation method with the luminescent spot of the SEM's electron beam is being investigated.

  19. Automated CD-SEM metrology for efficient TD and HVM

    Science.gov (United States)

    Starikov, Alexander; Mulapudi, Satya P.

    2008-03-01

    CD-SEM is the metrology tool of choice for patterning process development and production process control. We can make these applications more efficient by extracting more information from each CD-SEM image. This enables direct monitors of key process parameters, such as lithography dose and focus, or predicting the outcome of processing, such as etched dimensions or electrical parameters. Automating CD-SEM recipes at the early stages of process development can accelerate technology characterization, segmentation of variance and process improvements. This leverages the engineering effort, reduces development costs and helps to manage the risks inherent in new technology. Automating CD-SEM for manufacturing enables efficient operations. Novel SEM Alarm Time Indicator (SATI) makes this task manageable. SATI pulls together data mining, trend charting of the key recipe and Operations (OPS) indicators, Pareto of OPS losses and inputs for root cause analysis. This approach proved natural to our FAB personnel. After minimal initial training, we applied new methods in 65nm FLASH manufacture. This resulted in significant lasting improvements of CD-SEM recipe robustness, portability and automation, increased CD-SEM capacity and MT productivity.

  20. Generic linking of finite element models for non-linear static and global dynamic analyses for aircraft structures

    NARCIS (Netherlands)

    Wit, de A.J.; Akcay Perdahcioglu, D.; Brink, van den W.M.; Boer, de A.

    2011-01-01

    Depending on the type of analysis, Finite Element(FE) models of different fidelity are necessary. Creating these models manually is a labor intensive task. This paper discusses a generic approach for generating FE models of different fidelity from a single reference FE model. These different fidelit

  1. Generic Linking of Finite Element Models for non-linear static and global dynamic analyses of aircraft structures

    NARCIS (Netherlands)

    Wit, de A.J.; Akcay-Perdahcioglu, D.; Brink, van den W.M.; Boer, de A.

    2012-01-01

    Depending on the type of analysis, Finite Element(FE) models of different fidelity are necessary. Creating these models manually is a labor intensive task. This paper discusses a generic approach for generating FE models of different fidelity from a single reference FE model. These different fidelit

  2. Generic linking of finite element models for non-linear static and global dynamic analyses for aircraft structures

    NARCIS (Netherlands)

    de Wit, A.J.; Akcay-Perdahcioglu, Didem; van den Brink, W.M.; de Boer, Andries; Rolfes, R.; Jansen, E.L.

    2011-01-01

    Depending on the type of analysis, Finite Element(FE) models of different fidelity are necessary. Creating these models manually is a labor intensive task. This paper discusses a generic approach for generating FE models of different fidelity from a single reference FE model. These different

  3. Improvement of geometrical measurements from 3D-SEM reconstructions

    DEFF Research Database (Denmark)

    Carli, Lorenzo; De Chiffre, Leonardo; Horsewell, Andy

    2009-01-01

    The quantification of 3D geometry at the nanometric scale is a major metrological challenge. In this work geometrical measurements on cylindrical items obtained with a 3D-SEM were investigated. Two items were measured: a wire gauge having a 0.25 mm nominal diameter and a hypodermic needle having...... that the diameter estimation performed using the 3D-SEM leads to an overestimation of approx. 7% compared to the reference values obtained using a 1-D length measuring machine. Standard deviation of SEM measurements performed on the wire gauge is approx. 1.5 times lower than the one performed on the hypodermic...

  4. SEM-EBSP能知道些什么 [What can SEM-EBSP reveal?]

    Institute of Scientific and Technical Information of China (English)

    张唯敏

    2003-01-01

    1. What is SEM-EBSP? SEM-EBSP refers to crystal-orientation analysis using backscattered-electron Kikuchi-line diffraction inside the chamber of a scanning electron microscope (SEM). The diffraction patterns, known as Kikuchi patterns, shift markedly in position with even a slight tilt of the crystal; therefore, by indexing the Kikuchi pattern the crystal orientation can be determined accurately.

  5. Improvement of CD-SEM mark position measurement accuracy

    Science.gov (United States)

    Kasa, Kentaro; Fukuhara, Kazuya

    2014-04-01

    CD-SEM is now attracting attention as a tool that can accurately measure positional error of device patterns. However, the measurement accuracy can degrade due to pattern asymmetry, as in the case of image-based overlay (IBO) and diffraction-based overlay (DBO). For IBO and DBO, ways of correcting the inaccuracy arising from measurement patterns have been suggested. For CD-SEM, although a way of correcting CD bias was proposed, it has not been established how to correct the inaccuracy arising from pattern asymmetry using CD-SEM. In this study we propose how to quantify and correct the measurement inaccuracy caused by pattern asymmetry.

  6. Alternative SEM techniques for observing pyritised fossil material.

    Science.gov (United States)

    Poole; Lloyd

    2000-11-01

    Two scanning electron microscopy (SEM) electron-specimen interactions that provide images based on sample crystal structure, electron channelling and electron backscattered diffraction, are described. The SEM operating conditions and sample preparation are presented, followed by an example application of these techniques to the study of pyritised plant material. The two approaches provide an opportunity to examine simultaneously, at higher magnifications than those normally available optically, detailed specimen anatomy and preservation state. Our investigation suggests that whereas both techniques have their advantages, the electron channelling approach is generally more readily available to most SEM users. However, electron backscattered diffraction does afford the opportunity of automated examination and characterisation of pyritised fossil material.

  7. Modelling of the spallation reaction: analysis and testing of nuclear models; Simulation de la spallation: analyse et test des modeles nucleaires

    Energy Technology Data Exchange (ETDEWEB)

    Toccoli, C

    2000-04-03

    The spallation reaction is considered as a two-step process. First comes a very quick stage (10⁻²², 10⁻²⁹ s) corresponding to the individual interaction between the incident projectile and nucleons; this interaction is followed by a series of nucleon-nucleon collisions (intranuclear cascade) during which fast particles are emitted and the nucleus is left in a strongly excited state. Second comes a slower stage (10⁻¹⁸, 10⁻¹⁹ s) during which the nucleus is expected to de-excite completely. This de-excitation proceeds by evaporation of light particles (n, p, d, t, ³He, ⁴He) and/or fission and/or fragmentation. The HETC code has been designed to simulate spallation reactions; the simulation is based on this two-step process and on several models of the intranuclear cascade (Bertini model, Cugnon model, Helder Duarte model), while the evaporation model relies on the statistical theory of Weisskopf-Ewing. The purpose of this work is to evaluate the ability of the HETC code to predict experimental results. A methodology for comparing relevant experimental data with calculated results is presented, and a preliminary estimate of the systematic error of the HETC code is proposed. The main problem of cascade models originates in the difficulty of simulating inelastic nucleon-nucleon collisions: the emission of pions is over-estimated and the corresponding differential spectra are badly reproduced. The inaccuracy of cascade models has a great impact on the determination of the excitation level of the nucleus at the end of the first step and, indirectly, on the distribution of final residual nuclei. The test of the evaporation model has shown that the emission of high-energy light particles is under-estimated. (A.C.)

  8. Why Isn't Talent Development on the IEP? SEM and the Twice Exceptional Learner

    Science.gov (United States)

    Baum, Susan; Novak, Cynthia

    2010-01-01

    Why isn't talent development included on the Individual Educational Plan of 2E students? Twice exceptional students have unique issues that respond especially well to a talent development approach especially within the context of the Schoolwide Enrichment Model. Through case studies and a review of successful projects using SEM with at risk…

  9. Lessons Learned from My Students: The Impact of SEM Teaching and Learning on Affective Development

    Science.gov (United States)

    Hebert, Thomas P.

    2010-01-01

    Through reflection on his years as an enrichment teacher in Schoolwide Enrichment Model (SEM) programs, the author describes significant ways the social and emotional development of his students was shaped by their involvement in enriched teaching and learning. Through portraits of his students engaged in Type II and Type III enrichment, the…

  10. Mental model mapping as a new tool to analyse the use of information in decision-making in integrated water management

    NARCIS (Netherlands)

    Kolkman, M.J.; Kok, M.; Veen, van der A.

    2005-01-01

    The solution of complex, unstructured problems is faced with policy controversy and dispute, unused and misused knowledge, project delay and failure, and decline of public trust in governmental decisions. Mental model mapping (also called concept mapping) is a technique to analyse these difficulties

  11. Mental model mapping as a new tool to analyse the use of information in decision-making in integrated water management

    NARCIS (Netherlands)

    Kolkman, Rien; Kok, Matthijs; van der Veen, A.

    2005-01-01

    The solution of complex, unstructured problems is faced with policy controversy and dispute, unused and misused knowledge, project delay and failure, and decline of public trust in governmental decisions. Mental model mapping (also called concept mapping) is a technique to analyse these difficulties

  12. From global economic modelling to household level analyses of food security and sustainability: how big is the gap and can we bridge it?

    NARCIS (Netherlands)

    Wijk, van M.T.

    2014-01-01

    Policy and decision makers have to make difficult choices to improve the food security of local people against the background of drastic global and local changes. Ex-ante impact assessment using integrated models can help them with these decisions. This review analyses the state of affairs of the mu

  13. Direct and Indirect Effects of Parental Influence upon Adolescent Alcohol Use: A Structural Equation Modeling Analysis

    Science.gov (United States)

    Kim, Young-Mi; Neff, James Alan

    2010-01-01

    A model incorporating the direct and indirect effects of parental monitoring on adolescent alcohol use was evaluated by applying structural equation modeling (SEM) techniques to data on 4,765 tenth-graders in the 2001 Monitoring the Future Study. Analyses indicated good fit of hypothesized measurement and structural models. Analyses supported both…

  14. Parsimony and Model-Based Analyses of Indels in Avian Nuclear Genes Reveal Congruent and Incongruent Phylogenetic Signals

    Directory of Open Access Journals (Sweden)

    Frederick H. Sheldon

    2013-03-01

    Full Text Available Insertion/deletion (indel mutations, which are represented by gaps in multiple sequence alignments, have been used to examine phylogenetic hypotheses for some time. However, most analyses combine gap data with the nucleotide sequences in which they are embedded, probably because most phylogenetic datasets include few gap characters. Here, we report analyses of 12,030 gap characters from an alignment of avian nuclear genes using maximum parsimony (MP and a simple maximum likelihood (ML framework. Both trees were similar, and they exhibited almost all of the strongly supported relationships in the nucleotide tree, although neither gap tree supported many relationships that have proven difficult to recover in previous studies. Moreover, independent lines of evidence typically corroborated the nucleotide topology instead of the gap topology when they disagreed, although the number of conflicting nodes with high bootstrap support was limited. Filtering to remove short indels did not substantially reduce homoplasy or reduce conflict. Combined analyses of nucleotides and gaps resulted in the nucleotide topology, but with increased support, suggesting that gap data may prove most useful when analyzed in combination with nucleotide substitutions.

  15. Automated transmission-mode scanning electron microscopy (tSEM) for large volume analysis at nanoscale resolution.

    Directory of Open Access Journals (Sweden)

    Masaaki Kuwajima

    Full Text Available Transmission-mode scanning electron microscopy (tSEM) on a field emission SEM platform was developed for efficient and cost-effective imaging of circuit-scale volumes from brain at nanoscale resolution. Image area was maximized while optimizing the resolution and dynamic range necessary for discriminating key subcellular structures, such as small axonal, dendritic and glial processes, synapses, smooth endoplasmic reticulum, vesicles, microtubules, polyribosomes, and endosomes which are critical for neuronal function. Individual image fields from the tSEM system were up to 4,295 µm² (65.54 µm per side) at 2 nm pixel size, contrasting with image fields from a modern transmission electron microscope (TEM) system, which were only 66.59 µm² (8.160 µm per side) at the same pixel size. The tSEM produced outstanding images and had reduced distortion and drift relative to TEM. Automated stage and scan control in tSEM easily provided unattended serial section imaging and montaging. Lens and scan properties on both TEM and SEM platforms revealed no significant nonlinear distortions within a central field of ∼100 µm² and produced near-perfect image registration across serial sections using the computational elastic alignment tool in Fiji/TrakEM2 software, and reliable geometric measurements from RECONSTRUCT™ or Fiji/TrakEM2 software. Axial resolution limits the analysis of small structures contained within a section (∼45 nm). Since this new tSEM is non-destructive, objects within a section can be explored at finer axial resolution in TEM tomography with current methods. Future development of tSEM tomography promises thinner axial resolution producing nearly isotropic voxels and should provide within-section analyses of structures without changing platforms. Brain was the test system given our interest in synaptic connectivity and plasticity; however, the new tSEM system is readily applicable to other biological systems.

  16. DETECTION OF DELAMINATION IN A COMPOSITE PLATE BY SEM

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    A numerical method of integration of Green's functions of the strip element method (SEM) is proposed. The response of an ultrasonic source generated by a transducer on the surface of a multi-ply composite plate containing a delamination is analyzed by the use of SEM. The numerical results show that the scanning features of the ultrasonic waves may be used to identify the delamination inside the composite plate.

  17. Structured modelling and nonlinear analysis of PEM fuel cells

    Energy Technology Data Exchange (ETDEWEB)

    Hanke-Rauschenbach, R.

    2007-10-26

    In the first part of this work a model structuring concept for electrochemical systems is presented. The application of such a concept for the structuring of a process model allows it to combine different fuel cell models to form a whole model family, regardless of their level of detail. Beyond this the concept offers the opportunity to flexibly exchange model entities on different model levels. The second part of the work deals with the nonlinear behaviour of PEM fuel cells. With the help of a simple, spatially lumped and isothermal model, bistable current-voltage characteristics of PEM fuel cells operated with low humidified feed gases are predicted and discussed in detail. The cell is found to exhibit current-voltage curves with pronounced local extrema in a parameter range that is of practical interest when operated at constant feed gas flow rates. (orig.)
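The bistability described above can appear when membrane humidification (and hence membrane resistance) depends on the cell current, making the polarization curve non-monotonic. A minimal illustrative sketch with entirely hypothetical parameters (not the model from the thesis): product water lowers the membrane resistance at higher currents, and the resulting current-voltage curve is crossed three times at a fixed cell voltage.

```python
import numpy as np

# Illustrative lumped polarization curve: hypothetical parameters, chosen only
# to reproduce the qualitative feature (local extrema -> multiple steady states).
def cell_voltage(i, E0=0.9, b=0.05, i0=1e-3, R_dry=2.0, R_wet=0.1, i_k=0.2):
    # Membrane resistance falls with current: product water humidifies the membrane.
    R_mem = R_dry / (1.0 + (i / i_k) ** 4) + R_wet
    # Open-circuit voltage minus activation (Tafel) and ohmic losses.
    return E0 - b * np.log(i / i0) - R_mem * i

i = np.linspace(0.01, 2.0, 4000)          # current density grid (A/cm^2)
u = cell_voltage(i)

# At a fixed cell voltage of 0.46 V the non-monotonic curve is crossed
# three times, i.e. three steady states (two stable, one unstable).
crossings = np.flatnonzero(np.diff(np.sign(u - 0.46)))
print(f"steady states at U = 0.46 V: {len(crossings)}")
```

Under potentiostatic operation the two outer intersections are the stable low- and high-current branches; which one the cell settles on depends on its history, which is the hysteresis behavior the thesis analyzes.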

  18. Evaluation of habitat suitability index models by global sensitivity and uncertainty analyses: a case study for submerged aquatic vegetation.

    Science.gov (United States)

    Zajac, Zuzanna; Stith, Bradley; Bowling, Andrea C; Langtimm, Catherine A; Swain, Eric D

    2015-07-01

    Habitat suitability index (HSI) models are commonly used to predict habitat quality and species distributions and are used to develop biological surveys, assess reserve and management priorities, and anticipate possible change under different management or climate change scenarios. Important management decisions may be based on model results, often without a clear understanding of the level of uncertainty associated with model outputs. We present an integrated methodology to assess the propagation of uncertainty from both inputs and structure of the HSI models on model outputs (uncertainty analysis: UA) and relative importance of uncertain model inputs and their interactions on the model output uncertainty (global sensitivity analysis: GSA). We illustrate the GSA/UA framework using simulated hydrology input data from a hydrodynamic model representing sea level changes and HSI models for two species of submerged aquatic vegetation (SAV) in southwest Everglades National Park: Vallisneria americana (tape grass) and Halodule wrightii (shoal grass). We found considerable spatial variation in uncertainty for both species, but distributions of HSI scores still allowed discrimination of sites with good versus poor conditions. Ranking of input parameter sensitivities also varied spatially for both species, with high habitat quality sites showing higher sensitivity to different parameters than low-quality sites. HSI models may be especially useful when species distribution data are unavailable, providing means of exploiting widely available environmental datasets to model past, current, and future habitat conditions. The GSA/UA approach provides a general method for better understanding HSI model dynamics, the spatial and temporal variation in uncertainties, and the parameters that contribute most to model uncertainty. Including an uncertainty and sensitivity analysis in modeling efforts as part of the decision-making framework will result in better-informed, more robust decisions.
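The GSA/UA pattern can be sketched on a toy HSI model. Everything below is a hypothetical stand-in (Gaussian suitability curves, made-up parameter ranges), not the SAV models from the paper; the point is the mechanics: Monte Carlo sampling for the uncertainty analysis, and first-order Sobol indices (Saltelli's pick-freeze estimator) for the global sensitivity analysis.

```python
import numpy as np

rng = np.random.default_rng(42)

def hsi(x):
    """Toy HSI: geometric mean of Gaussian suitability curves for
    depth (m), salinity (ppt), and temperature (deg C). Hypothetical."""
    s_depth = np.exp(-((x[:, 0] - 1.0) / 0.3) ** 2)
    s_sal = np.exp(-((x[:, 1] - 20.0) / 10.0) ** 2)
    s_temp = np.exp(-((x[:, 2] - 25.0) / 5.0) ** 2)
    return (s_depth * s_sal * s_temp) ** (1.0 / 3.0)

# Uncertain inputs: uniform ranges (hypothetical).
lo, hi = np.array([0.0, 0.0, 15.0]), np.array([2.0, 40.0, 35.0])
n = 20000
A = rng.uniform(lo, hi, size=(n, 3))
B = rng.uniform(lo, hi, size=(n, 3))
fA, fB = hsi(A), hsi(B)

# Uncertainty analysis: spread of HSI scores under input uncertainty.
print(f"HSI mean {fA.mean():.3f}, 5-95% range "
      f"[{np.percentile(fA, 5):.3f}, {np.percentile(fA, 95):.3f}]")

# Global sensitivity analysis: first-order Sobol indices.
S1 = {}
for i, name in enumerate(["depth", "salinity", "temperature"]):
    AB = A.copy()
    AB[:, i] = B[:, i]          # "freeze" input i, resample the rest
    S1[name] = np.mean(fB * (hsi(AB) - fA)) / np.var(fA)
    print(f"S1[{name}] = {S1[name]:.2f}")
```

The input whose suitability curve is narrow relative to its uncertainty range dominates the output variance, which mirrors the paper's finding that sensitivity rankings depend on local conditions.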

  20. Basic Diagnosis and Prediction of Persistent Contrail Occurrence using High-resolution Numerical Weather Analyses/Forecasts and Logistic Regression. Part II: Evaluation of Sample Models

    Science.gov (United States)

    Duda, David P.; Minnis, Patrick

    2009-01-01

    Previous studies have shown that probabilistic forecasting may be a useful method for predicting persistent contrail formation. A probabilistic forecast of contrail formation over the contiguous United States (CONUS) is created from hourly meteorological analyses from the Advanced Regional Prediction System (ARPS) and the Rapid Update Cycle (RUC), together with GOES water vapor channel measurements and surface and satellite observations of contrails. Two groups of logistic models were created. The first group of models (SURFACE models) is based on surface-based contrail observations supplemented with satellite observations of contrail occurrence. The second group of models (OUTBREAK models) is derived from a selected subgroup of satellite-based observations of widespread persistent contrails. The mean accuracies for both the SURFACE and OUTBREAK models typically exceeded 75 percent when based on the RUC or ARPS analysis data, but decreased when the logistic models were derived from ARPS forecast data.
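The core idea can be sketched as a logistic model mapping meteorological predictors to a probability of persistent contrails. The predictors, coefficients, and data below are synthetic stand-ins (not the RUC/ARPS fields or the SURFACE/OUTBREAK models), fit with plain gradient descent on the log-loss:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
# Synthetic predictors: ice-relative humidity (%) and temperature (deg C)
# at flight level. Hypothetical ranges.
rhi = rng.uniform(50.0, 150.0, n)
temp = rng.uniform(-70.0, -30.0, n)

# Synthetic "truth": persistence is likely when RHi > 100% and the air is cold.
logit_true = 0.08 * (rhi - 100.0) - 0.10 * (temp + 50.0)
y = (rng.uniform(size=n) < 1.0 / (1.0 + np.exp(-logit_true))).astype(float)

# Design matrix with intercept and roughly unit-scaled predictors.
X = np.column_stack([np.ones(n), (rhi - 100.0) / 50.0, (temp + 50.0) / 20.0])
w = np.zeros(3)
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-X @ w))   # predicted probability of persistence
    w -= 0.1 * X.T @ (p - y) / n       # gradient step on mean log-loss

acc = np.mean((p > 0.5) == (y > 0.5))
print(f"training accuracy: {acc:.2f}")
```

As in the paper, the model outputs a probability rather than a yes/no forecast; thresholding at 0.5 gives a classification accuracy comparable in spirit to the 75-percent figures quoted for the analysis-driven models.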