WorldWideScience

Sample records for modeling sem analyses

  1. The use of Structural Equation Modelling (SEM) in Capital Structure ...

    African Journals Online (AJOL)

    analytic structural equation modelling (SEM) methodology. The SEM methodology allows the use of more than one indicator for a latent variable. It also estimates the latent variables and accommodates reciprocal causation and interdependencies ...

  2. Design and Use of the Simple Event Model (SEM)

    NARCIS (Netherlands)

    van Hage, W.R.; Malaisé, V.; Segers, R.H.; Hollink, L.

    2011-01-01

    Events have become central elements in the representation of data from domains such as history, cultural heritage, multimedia and geography. The Simple Event Model (SEM) was created to model events in these various domains without making assumptions about the domain-specific vocabularies used. SEM

  3. SEM Based CARMA Time Series Modeling for Arbitrary N.

    Science.gov (United States)

    Oud, Johan H L; Voelkle, Manuel C; Driver, Charles C

    2018-01-01

    This article explains in detail the state space specification and estimation of first and higher-order autoregressive moving-average models in continuous time (CARMA) in an extended structural equation modeling (SEM) context for N = 1 as well as N > 1. To illustrate the approach, simulations will be presented in which a single panel model (T = 41 time points) is estimated for a sample of N = 1,000 individuals as well as for samples of N = 100 and N = 50 individuals, followed by estimating 100 separate models for each of the one-hundred N = 1 cases in the N = 100 sample. Furthermore, we will demonstrate how to test the difference between the full panel model and each N = 1 model by means of a subject-group-reproducibility test. Finally, the proposed analyses will be applied in an empirical example, in which the relationships between mood at work and mood at home are studied in a sample of N = 55 women. All analyses are carried out by ctsem, an R-package for continuous time modeling, interfacing to OpenMx.
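
    The core of this approach is that the discrete-time parameters implied by a continuous-time model depend on the length of the observation interval. As a minimal illustration (a numpy sketch of a univariate CAR(1), i.e. Ornstein-Uhlenbeck, process; it is not the ctsem implementation, and all parameter values are arbitrary assumptions), the exact discrete solution can be used to simulate data at irregular intervals:

        # Simulate a univariate CAR(1) process dx(t) = a*(x(t) - mu) dt + g dW(t), a < 0,
        # at irregular time points, using its exact discrete-time solution.
        import numpy as np

        rng = np.random.default_rng(1)
        a, mu, g = -0.5, 0.0, 1.0                            # drift, level, diffusion (assumed values)
        times = np.cumsum(rng.uniform(0.5, 2.0, size=41))    # 41 irregularly spaced occasions

        x = np.empty(len(times))
        x[0] = mu + rng.normal(scale=np.sqrt(-g**2 / (2 * a)))    # start from the stationary distribution
        for i in range(1, len(times)):
            dt = times[i] - times[i - 1]
            phi = np.exp(a * dt)                                  # exact discrete autoregression
            q = -g**2 / (2 * a) * (1 - np.exp(2 * a * dt))        # exact discrete innovation variance
            x[i] = mu + phi * (x[i - 1] - mu) + rng.normal(scale=np.sqrt(q))

    Because phi = exp(a*dt) changes with the interval, fixed discrete-time autoregressive coefficients are only meaningful for one particular interval length, which is the motivation for estimating the continuous-time parameters directly.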

  4. semPLS: Structural Equation Modeling Using Partial Least Squares

    Directory of Open Access Journals (Sweden)

    Armin Monecke

    2012-05-01

    Structural equation models (SEM) are very popular in many disciplines. The partial least squares (PLS) approach to SEM offers an alternative to covariance-based SEM, which is especially suited for situations when data are not normally distributed. PLS path modelling is referred to as a soft modelling technique with minimum demands regarding measurement scales, sample sizes and residual distributions. The semPLS package provides the capability to estimate PLS path models within the R programming environment. Different setups for the estimation of factor scores can be used. Furthermore, it contains modular methods for the computation of bootstrap confidence intervals, model parameters and several quality indices. Various plot functions help to evaluate the model. The well-known mobile phone dataset from marketing research is used to demonstrate the features of the package.
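
    To make the estimation idea concrete, the following is a bare-bones numpy sketch of the classical PLS path-modelling iteration (mode A outer estimation with a centroid inner scheme) for a single structural path between two latent variables. It is not the semPLS implementation, and the two indicator blocks are synthetic stand-ins:

        import numpy as np

        rng = np.random.default_rng(0)
        n = 200
        lv1 = rng.normal(size=n)
        lv2 = 0.6 * lv1 + rng.normal(scale=0.8, size=n)
        X1 = np.column_stack([lv1 + rng.normal(scale=0.5, size=n) for _ in range(3)])  # block 1 indicators
        X2 = np.column_stack([lv2 + rng.normal(scale=0.5, size=n) for _ in range(3)])  # block 2 indicators

        def standardize(a):
            return (a - a.mean(axis=0)) / a.std(axis=0)

        X1, X2 = standardize(X1), standardize(X2)
        w1, w2 = np.ones(3), np.ones(3)
        for _ in range(100):
            y1, y2 = standardize(X1 @ w1), standardize(X2 @ w2)   # outer estimation of LV scores
            sign = np.sign(np.corrcoef(y1, y2)[0, 1])
            z1, z2 = sign * y2, sign * y1                          # centroid inner estimation
            w1, w2 = X1.T @ z1 / n, X2.T @ z2 / n                  # mode A weight update
        path = np.corrcoef(standardize(X1 @ w1), standardize(X2 @ w2))[0, 1]
        print("estimated path coefficient:", round(path, 3))

    Bootstrap confidence intervals of the kind the package offers can then be obtained by resampling rows of the data and repeating this iteration on each resample.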

  5. Continuous time modeling of panel data by means of SEM

    NARCIS (Netherlands)

    Oud, J.H.L.; Delsing, M.J.M.H.; Montfort, C.A.G.M.; Oud, J.H.L.; Satorra, A.

    2010-01-01

    After a brief history of continuous time modeling and its implementation in panel analysis by means of structural equation modeling (SEM), the problems of discrete time modeling are discussed in detail. This is done by means of the popular cross-lagged panel design. Next, the exact discrete model
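
    For reference, the mapping that underlies this approach (stated here in generic notation; a standard result in the continuous-time SEM literature rather than a detail specific to this chapter) links the continuous-time drift matrix A to the discrete-time parameters for an arbitrary interval \Delta t_i:

        dx(t) = [A\,x(t) + b]\,dt + G\,dW(t)
        x(t_i) = e^{A\,\Delta t_i}\,x(t_{i-1}) + A^{-1}\bigl[e^{A\,\Delta t_i} - I\bigr]\,b + w_i
        \operatorname{Cov}(w_i) = \int_0^{\Delta t_i} e^{A s}\,G G^{\top}\,e^{A^{\top} s}\,ds

    The discrete-time autoregression matrix e^{A\,\Delta t_i} therefore changes nonlinearly with the observation interval, which is exactly what fixed cross-lagged coefficients cannot capture when intervals vary.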

  6. SEM/EDS and optical microscopy analyses of microplastics in ocean trawl and fish guts.

    Science.gov (United States)

    Wang, Zhong-Min; Wagner, Jeff; Ghosal, Sutapa; Bedi, Gagandeep; Wall, Stephen

    2017-12-15

    Microplastic particles from Atlantic and Pacific Ocean trawls, lab-fed fish guts and ocean fish guts have been characterized using optical microscopy and SEM/EDS in terms of size, morphology, and chemistry. We assessed whether these measurements could serve as a rapid screening process for subsequent identification of the likely microplastic candidates by micro-spectroscopy. Optical microscopy enabled morphological classification of the types of particles or fibers present in the sample, as well as quantification of particle size ranges and fiber lengths. SEM/EDS analysis was used to rule out non-plastic particles and to screen the prepared samples for potential microplastics, based on their elemental signatures and surface characteristics. Chlorinated plastics such as polyvinyl chloride (PVC) could be easily identified with SEM/EDS due to their unique elemental signatures including chlorine, as could mineral species that are falsely identified as plastics by optical microscopy. Particle morphology determined by optical microscopy and SEM suggests the particles ingested by the fish included both degradation fragments from larger plastic pieces and manufactured microplastics. SEM images of microplastic particle surfaces revealed characteristic cracks consistent with environmental exposure, as well as pigment particles consistent with manufactured materials. Most of the microplastic surfaces in the fish guts and ocean trawls were covered with biofilms, radiolarians, and crustaceans. Many of the fish stomachs contained micro-shell pieces which visually resembled microplastics.

  7. GW-SEM: A Statistical Package to Conduct Genome-Wide Structural Equation Modeling.

    Science.gov (United States)

    Verhulst, Brad; Maes, Hermine H; Neale, Michael C

    2017-05-01

    Improving the accuracy of phenotyping through the use of advanced psychometric tools will increase the power to find significant associations with genetic variants and expand the range of possible hypotheses that can be tested on a genome-wide scale. Multivariate methods, such as structural equation modeling (SEM), are valuable in the phenotypic analysis of psychiatric and substance use phenotypes, but these methods have not been integrated into standard genome-wide association analyses because fitting an SEM at each single nucleotide polymorphism (SNP) along the genome was hitherto considered too computationally demanding. By developing a method that can efficiently fit SEMs, it is possible to expand the set of models that can be tested. This is particularly necessary in psychiatric and behavioral genetics, where the statistical methods are often handicapped by phenotypes with large components of stochastic variance. Due to the enormous amount of data that genome-wide scans produce, the statistical methods used to analyze the data are relatively elementary, do not directly correspond with the rich theoretical development, and lack the potential to test more complex hypotheses about the measurement of, and interaction between, comorbid traits. In this paper, we present a method to test the association of a SNP with multiple phenotypes or a latent construct on a genome-wide basis using a diagonally weighted least squares (DWLS) estimator for four common SEMs: a one-factor model, a one-factor residuals model, a two-factor model, and a latent growth model. We demonstrate that the DWLS parameters and p-values strongly correspond with the more traditional full information maximum likelihood parameters and p-values. We also present timing of simulations and power analyses, and a comparison with an existing multivariate GWAS software package.
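
    In generic form (not specific to GW-SEM's implementation), the diagonally weighted least squares fit function minimized at each SNP is

        F_{\mathrm{DWLS}}(\theta) = \bigl(s - \sigma(\theta)\bigr)^{\top} W_D^{-1} \bigl(s - \sigma(\theta)\bigr)

    where s stacks the observed summary statistics, \sigma(\theta) contains their model-implied counterparts, and W_D keeps only the diagonal of the full asymptotic covariance matrix of s; discarding the off-diagonal weights is what makes fitting the model at millions of SNPs computationally feasible.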

  8. Profilometric and SEM analyses of composite surfaces after excess cement removal

    Directory of Open Access Journals (Sweden)

    Jevremović Danimir P.

    2012-01-01

    Composite cements are widely used in dentistry due to their positive characteristics (bond strength, color, low solubility, etc.). However, removal of the excess cement presents one of the drawbacks of their use, since incomplete removal might cause bacterial adhesion, gingival irritation and subsequent inflammation. The aim of this study was to investigate the surface characteristics of composite cements after different ways of excess removal, by means of profilometric and SEM analysis. Thirty leucite-reinforced ceramic specimens were divided into three groups, based on the manner of excess cement removal: Group 1 (polished): excess was fully polymerized for 40 s, then removed; Group 2 (cleaned): excess was removed with a cotton roll, after which the cement was fully polymerized for 40 s; Group 3 (pre-polymerized): excess was light-cured for 5 s, after which the cement excess was broken off with an instrument and then fully polymerized for 40 s. Surface roughness was measured using a surface profilometer. Subsequently, specimens were inspected by a scanning electron microscope. The data were statistically analyzed. Analysis of variance of the average values showed a statistically significant difference among groups (p<0.0001); the values were significantly highest in the pre-polymerized group and significantly lowest in the polished group. The results of this study show that utmost attention has to be paid to the excess removal procedure, since surface roughness parameters directly depend on the choice of the applied technique.

  9. An extensible analysable system model

    DEFF Research Database (Denmark)

    Probst, Christian W.; Hansen, Rene Rydhof

    2008-01-01

    Analysing real-world systems for vulnerabilities with respect to security and safety threats is a difficult undertaking, not least due to a lack of availability of formalisations for those systems. While both formalisations and analyses can be found for artificial systems such as software......, this does not hold for real physical systems. Approaches such as threat modelling try to target the formalisation of the real-world domain, but still are far from the rigid techniques available in security research. Many currently available approaches to assurance of critical infrastructure security...... are based on (quite successful) ad-hoc techniques. We believe they can be significantly improved beyond the state-of-the-art by pairing them with static analyses techniques. In this paper we present an approach to both formalising those real-world systems, as well as providing an underlying semantics, which...

  10. From patterns to causal understanding: Structural equation modeling (SEM) in soil ecology

    Science.gov (United States)

    Eisenhauer, Nico; Powell, Jeff R; Grace, James B.; Bowker, Matthew A.

    2015-01-01

    In this perspectives paper we highlight a heretofore underused statistical method in soil ecological research, structural equation modeling (SEM). SEM is commonly used in the general ecological literature to develop causal understanding from observational data, but has been more slowly adopted by soil ecologists. We provide some basic information on the many advantages and possibilities associated with using SEM and provide some examples of how SEM can be used by soil ecologists to shift focus from describing patterns to developing causal understanding and inspiring new types of experimental tests. SEM is a promising tool to aid the growth of soil ecology as a discipline, particularly by supporting research that is increasingly hypothesis-driven and interdisciplinary, thus shining light into the black box of interactions belowground.

  11. Evaluating Neighborhoods Livability in Nigeria: A Structural Equation Modelling (SEM) Approach

    Directory of Open Access Journals (Sweden)

    Sule Abass Iyanda

    2018-01-01

    and housing unit characteristics first-order factors. The results show that economic vitality (income, mobility and mobility cost) most significantly measures neighborhood livability. Also, the model achieved good fit indices, such as a CFI of 0.907 and an RMSEA value of 0.096. Thus, the SEM analyses in this study offer a methodological guide on the efficacy of the CFA second-order factor model.
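
    For readers unfamiliar with the indices cited, the two are commonly defined as (one common convention; some software uses N rather than N-1)

        \mathrm{RMSEA} = \sqrt{\max\!\left(\frac{\chi^2_T - df_T}{df_T\,(N-1)},\,0\right)}, \qquad
        \mathrm{CFI} = 1 - \frac{\max(\chi^2_T - df_T,\,0)}{\max(\chi^2_B - df_B,\ \chi^2_T - df_T,\ 0)}

    where T denotes the target model, B the baseline (independence) model, and N the sample size; CFI values near 1 and RMSEA values near 0 indicate better fit.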

  12. SEM Model Medical Solid Waste Hospital Management In Medan City

    Science.gov (United States)

    Simarmata, Verawaty; Pandia, Setiaty; Mawengkang, Herman

    2018-01-01

    In daily activities, hospitals, as one of the important health care unit, generate both medical solid waste and non-medical solid waste. The occurrence of medical solid waste could be from the results of treatment activities, such as, in the treatment room for a hospital inpatient, general clinic, a dental clinic, a mother and child clinic, laboratories and pharmacies. Most of the medical solid waste contains infectious and hazardous materials. Therefore it should be managed properly, otherwise it could be a source of new infectious for the community around the hospital as well as for health workers themselves. Efforts surveillance of various environmental factors need to be applied in accordance with the principles of sanitation focuses on environmental cleanliness. One of the efforts that need to be done in improving the quality of the environment is to undertake waste management activities, because with proper waste management is the most important in order to achieve an optimal degree of human health. Health development in Indonesian aims to achieve a future in which the Indonesian people live in a healthy environment, its people behave clean and healthy, able to reach quality health services, fair and equitable, so as to have optimal health status, health development paradigm anchored to the healthy. The healthy condition of the individual and society can be influenced by the environment. Poor environmental quality is a cause of various health problems. Efforts surveillance of various environmental factors need to be applied in accordance with the principles of sanitation focuses on environmental cleanliness. This paper proposes a model for managing the medical solid waste in hospitals in Medan city, in order to create healthy environment around hospitals.

  13. A Sandwich-Type Standard Error Estimator of SEM Models with Multivariate Time Series

    Science.gov (United States)

    Zhang, Guangjian; Chow, Sy-Miin; Ong, Anthony D.

    2011-01-01

    Structural equation models are increasingly used as a modeling tool for multivariate time series data in the social and behavioral sciences. Standard error estimators of SEM models, originally developed for independent data, require modifications to accommodate the fact that time series data are inherently dependent. In this article, we extend a…
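
    In generic notation (the article's estimator is a time-series-specific version of this idea), a sandwich-type covariance estimate for the parameter vector takes the form

        \widehat{\operatorname{Cov}}(\hat\theta) = A(\hat\theta)^{-1}\, B(\hat\theta)\, A(\hat\theta)^{-1}

    where the "bread" A is built from second derivatives of the fit function and the "meat" B from (cross-)products of first-derivative score terms; serial dependence in the data is absorbed into B, which is what independence-based standard errors omit.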

  14. Maximum Likelihood Dynamic Factor Modeling for Arbitrary "N" and "T" Using SEM

    Science.gov (United States)

    Voelkle, Manuel C.; Oud, Johan H. L.; von Oertzen, Timo; Lindenberger, Ulman

    2012-01-01

    This article has 3 objectives that build on each other. First, we demonstrate how to obtain maximum likelihood estimates for dynamic factor models (the direct autoregressive factor score model) with arbitrary "T" and "N" by means of structural equation modeling (SEM) and compare the approach to existing methods. Second, we go beyond standard time…

  15. Mathematical model of the seismic electromagnetic signals (SEMS) in non-crystalline substances

    Energy Technology Data Exchange (ETDEWEB)

    Dennis, L. C. C.; Yahya, N.; Daud, H.; Shafie, A. [Electromagnetic cluster, Universiti Teknologi Petronas, 31750 Tronoh, Perak (Malaysia)

    2012-09-26

    The mathematical model of seismic electromagnetic waves in non-crystalline substances is developed and the solutions are discussed to show the possibility of improving the electromagnetic waves, especially the electric field. The shear stress of the medium in fourth-order tensor form gives the equation of motion. Analytic methods are selected for the solutions, written in Hansen vector form. From the simulated SEMS, the frequency of the seismic waves has significant effects on the SEMS propagation characteristics. EM waves transform into SEMS or energized seismic waves. The traveling distance increases once the frequency of the seismic waves increases from 100% to 1000%. SEMS with greater seismic frequency give seismic-like waves, but greater energy is embedded by the EM waves and hence the waves travel a further distance.

  16. Morphological modelling of three-phase microstructures of anode layers using SEM images.

    Science.gov (United States)

    Abdallah, Bassam; Willot, François; Jeulin, Dominique

    2016-07-01

    A general method is proposed to model 3D microstructures representative of three-phase anode layers used in fuel cells. The models are based on SEM images of cells with varying morphologies. The materials are first characterized using three morphological measurements: (cross-)covariances, granulometry and linear erosion. They are measured on segmented SEM images, for each of the three phases. Second, a generic model for three-phase materials is proposed. The model is based on two independent underlying random sets which are otherwise arbitrary. The validity of this model is verified using the cross-covariance functions of the various phases. In a third step, several types of Boolean random sets and plurigaussian models are considered for the unknown underlying random sets. Overall, good agreement is found between the SEM images and three-phase models based on plurigaussian random sets, for all morphological measurements considered in the present work: covariances, granulometry and linear erosion. The spatial distribution and shapes of the phases produced by the plurigaussian model are visually very close to the real material. Furthermore, the proposed models require no numerical optimization and are straightforward to generate using the covariance functions measured on the SEM images.
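
    As a small illustration of the first characterization step, the two-point covariance of one segmented phase, C(h) = P(x in phase, x + h in phase), can be estimated from a binary image by FFT-based autocorrelation. The sketch below uses a synthetic random mask as a stand-in for a segmented SEM image and assumes periodic boundaries:

        import numpy as np

        rng = np.random.default_rng(0)
        img = rng.random((256, 256)) < 0.35                  # binary mask of one phase (placeholder)

        f = np.fft.rfft2(img.astype(float))
        autocorr = np.fft.irfft2(f * np.conj(f), s=img.shape) / img.size   # C(h) on the torus

        volume_fraction = img.mean()                          # equals C(0)
        cov_along_x = autocorr[0, : img.shape[1] // 2]        # covariance along one axis
        print(volume_fraction, cov_along_x[:5])

    Cross-covariances between two phases follow the same pattern, with the FFT of one mask multiplied by the conjugate FFT of the other.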

  17. Subjective Values of Quality of Life Dimensions in Elderly People. A SEM Preference Model Approach

    Science.gov (United States)

    Elosua, Paula

    2011-01-01

    This article proposes a Thurstonian model in the framework of Structural Equation Modelling (SEM) to assess preferences among quality of life dimensions for the elderly. Data were gathered by a paired comparison design in a sample comprised of 323 people aged from 65 to 94 years old. Five dimensions of quality of life were evaluated: Health,…

  18. Prescriptive Statements and Educational Practice: What Can Structural Equation Modeling (SEM) Offer?

    Science.gov (United States)

    Martin, Andrew J.

    2011-01-01

    Longitudinal structural equation modeling (SEM) can be a basis for making prescriptive statements on educational practice and offers yields over "traditional" statistical techniques under the general linear model. The extent to which prescriptive statements can be made will rely on the appropriate accommodation of key elements of research design,…

  19. BIB-SEM of representative area clay structures paving towards an alternative model of porosity

    Science.gov (United States)

    Desbois, G.; Urai, J. L.; Houben, M.; Hemes, S.; Klaver, J.

    2012-04-01

    A major contribution to understanding the sealing capacity, coupled flow, capillary processes and associated deformation in clay-rich geomaterials is based on detailed investigation of the rock microstructures. However, the direct characterization of pores in a representative elementary area (REA) and below µm-scale resolution remains challenging. To investigate the mm- to nm-scale porosity directly, SEM is certainly the most direct approach, but it is limited by the poor quality of the investigated surfaces. The recent development of ion milling tools (BIB and FIB; Desbois et al., 2009, 2011; Heath et al., 2011; Keller et al., 2011) and cryo-SEM allows, respectively, the production of exceptionally high-quality polished cross-sections suitable for high-resolution porosity SEM imaging at the nm scale, and the investigation of samples under wet conditions by cryogenic stabilization. This contribution focuses mainly on the SEM description of pore microstructures in 2D BIB-polished cross-sections of Boom (Mol site, Belgium) and Opalinus (Mont Terri, Switzerland) clays down to the SEM resolution. Pores detected in images are statistically analyzed to perform porosity quantification in the REA. On the one hand, the BIB-SEM results allow MIP measurements obtained from larger sample volumes to be retrieved. On the other hand, the BIB-SEM approach allows the characterization of porosity-homogeneous and -predictable islands, which form the elementary components of an alternative concept of a porosity/permeability model based on pore microstructures. Desbois G., Urai J.L. and Kukla P.A. (2009) Morphology of the pore space in claystones - evidence from BIB/FIB ion beam sectioning and cryo-SEM observations. E-Earth, 4, 15-22. Desbois G., Urai J.L., Kukla P.A., Konstanty J. and Baerle C. (2011). High-resolution 3D fabric and porosity model in a tight gas sandstone reservoir: a new approach to investigate microstructures from mm- to nm-scale combining argon beam cross-sectioning and SEM imaging. Journal of Petroleum Science
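
    The kind of image-based porosity quantification described above reduces, at its simplest, to measuring the pore area fraction and the distribution of individual pore sizes in a segmented cross-section. A minimal sketch follows (with a thresholded random field as a placeholder for a real segmented BIB-SEM mosaic):

        import numpy as np
        from scipy import ndimage

        rng = np.random.default_rng(2)
        field = ndimage.gaussian_filter(rng.normal(size=(512, 512)), sigma=3)
        field = (field - field.mean()) / field.std()          # standardize the synthetic field
        pores = field > 1.0                                   # binary pore mask (placeholder segmentation)

        porosity = pores.mean()                               # areal porosity of the cross-section
        labels, n_pores = ndimage.label(pores)                # connected-component labelling of pores
        areas = ndimage.sum(pores, labels, index=np.arange(1, n_pores + 1))   # pixels per pore

        print(f"porosity = {porosity:.3f}, pores = {n_pores}, median pore area = {np.median(areas):.1f} px")

    Repeating the measurement over sub-windows of increasing size is one way to check whether the imaged area is large enough to qualify as an REA.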

  20. AxiSEM3D: broadband seismic wavefields in 3-D aspherical Earth models

    Science.gov (United States)

    Leng, K.; Nissen-Meyer, T.; Zad, K. H.; van Driel, M.; Al-Attar, D.

    2017-12-01

    Seismology is the primary tool for data-informed inference of Earth structure and dynamics. Simulating seismic wave propagation at a global scale is fundamental to seismology, but remains one of the most challenging problems in scientific computing, because of both the multiscale nature of Earth's interior and the observable frequency band of seismic data. We present a novel numerical method to simulate global seismic wave propagation in realistic 3-D Earth models. Our method, named AxiSEM3D, is a hybrid of the spectral element method and the pseudospectral method. It reduces the azimuthal dimension of wavefields by means of a global Fourier series parameterization, of which the number of terms can be locally adapted to the inherent azimuthal smoothness of the wavefields. AxiSEM3D allows not only for material heterogeneities, such as velocity, density, anisotropy and attenuation, but also for finite undulations on radial discontinuities, both solid-solid and solid-fluid, and thereby a variety of aspherical Earth features such as ellipticity, topography, variable crustal thickness, and core-mantle boundary topography. Such interface undulations are equivalently interpreted as material perturbations of the contiguous media, based on the "particle relabelling transformation". Efficiency comparisons show that AxiSEM3D can be 1 to 3 orders of magnitude faster than conventional 3-D methods, with the speedup increasing with simulation frequency and decreasing with model complexity, but for all realistic structures the speedup remains at least one order of magnitude. The observable frequency range of global seismic data (up to 1 Hz) has been covered for wavefield modelling upon a 3-D Earth model with reasonable computing resources. We show an application of surface wave modelling within a state-of-the-art global crustal model (Crust1.0), with the synthetics compared to real data. The high-performance C++ code is released at github.com/AxiSEM3D/AxiSEM3D.
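
    Schematically (in generic notation rather than the code's exact conventions), the azimuthal reduction expands each wavefield component in a Fourier series around the symmetry axis,

        u(s, z, \phi; t) \approx \sum_{m=0}^{M(s,z)} \bigl[ u_m^{c}(s, z; t)\cos(m\phi) + u_m^{s}(s, z; t)\sin(m\phi) \bigr]

    so that each azimuthal order m is propagated on a 2-D spectral-element mesh in the (s, z) plane, and the locally adapted truncation order M(s, z) is what controls the cost for a given model complexity.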

  1. Graphical models for genetic analyses

    DEFF Research Database (Denmark)

    Lauritzen, Steffen Lilholt; Sheehan, Nuala A.

    2003-01-01

    This paper introduces graphical models as a natural environment in which to formulate and solve problems in genetics and related areas. Particular emphasis is given to the relationships among various local computation algorithms which have been developed within the hitherto mostly separate areas...... of graphical models and genetics. The potential of graphical models is explored and illustrated through a number of example applications where the genetic element is substantial or dominating....

  2. SEM-EDS Analyses of Small Craters in Stardust Aluminum Foils: Implications for the Wild-2 Dust Distribution

    Science.gov (United States)

    Borg, J.; Horz, F.; Bridges, J. C.; Burchell, M. J.; Djouadi, Z.; Floss, C.; Graham, G. A.; Green, S. F.; Heck, P. R.; Hoppe, P.

    2007-01-01

    Aluminium foils were used on Stardust to stabilize the aerogel specimens in the modular collector tray. Parts of these foils were fully exposed to the flux of cometary grains emanating from Wild 2. Because the exposed part of these foils had to be harvested before extraction of the aerogel, numerous foil strips some 1.7 mm wide and 13 or 33 mm long were generated during Stardust's Preliminary Examination (PE). These strips are readily accommodated in their entirety in the sample chambers of modern SEMs, thus providing the opportunity to characterize in situ the size distribution and residue composition - employing EDS methods - of statistically more significant numbers of cometary dust particles compared to aerogel, the latter mandating extensive sample preparation. We describe here the analysis of nearly 300 impact craters and their implications for Wild 2 dust.

  3. Hybrid OPC modeling with SEM contour technique for 10nm node process

    Science.gov (United States)

    Hitomi, Keiichiro; Halle, Scott; Miller, Marshal; Graur, Ioana; Saulnier, Nicole; Dunn, Derren; Okai, Nobuhiro; Hotta, Shoji; Yamaguchi, Atsuko; Komuro, Hitoshi; Ishimoto, Toru; Koshihara, Shunsuke; Hojo, Yutaka

    2014-03-01

    Hybrid OPC modeling is investigated using both CDs from 1D and simple 2D structures and contours extracted from complex 2D structures, which are obtained by a critical-dimension scanning electron microscope (CD-SEM). Recent studies have addressed some of the key issues needed for the implementation of contour extraction, including an edge detection algorithm consistent with conventional CD measurements, contour averaging and contour alignment. First, pattern contours obtained from CD-SEM images were used to complement traditional site-driven CD metrology for the calibration of OPC models for both the metal and contact layers of a 10 nm-node logic device developed in Albany Nano-Tech. The accuracy of the hybrid OPC model was compared with that of the conventional OPC model, which was created with only CD data. Accuracy of the model, defined as total error root-mean-square (RMS), was improved by 23% with the use of hybrid OPC modeling for the contact layer and by 18% for the metal layer. The pattern-specific benefit of hybrid modeling was also examined. Resist shrink correction was applied to contours extracted from CD-SEM images in order to improve the accuracy of the contours, and the shrink-corrected contours were used for OPC modeling. The accuracy of the OPC model with shrink correction was compared with that without shrink correction, and total error RMS was decreased by 0.2 nm (12%) with the shrink correction technique. Variation of model accuracy among 8 modeling runs with different model calibration patterns was reduced by applying shrink correction. The shrink correction of contours can improve the accuracy and stability of the OPC model.

  4. Analysis of Balance Scorecards Model Performance and Perspective Strategy Synergized by SEM

    Directory of Open Access Journals (Sweden)

    Waluyo Minto

    2016-01-01

    Performance assessment after the economic crisis using the Balanced Scorecard (BSC) method is a powerful and effective tool that can provide an integrated view of the performance of an organization. This strategy helped the Indonesian economy recover positively after the economic crisis. Taking effective decisions requires combining the four BSC perspectives and strategies that focus on a system with different behaviors or steps. This paper combines the BSC method with structural equation modeling (SEM) because they share the same concept, a causal relationship, where the SEM research model uses the BSC variables. The purpose of this paper is to investigate the influence of variables synergized between the balanced scorecard and SEM as a means of strategic planning for the future. This study used primary data with a sample large enough to meet the requirements of maximum likelihood estimation, using a seven-point semantic scale. The research model is a combination of one-step and two-step models. The next steps are to test the measurement model, the structural equation model, and the modified models. The test results indicated that the model suffered from multicollinearity; therefore, the model was converted into a one-step model. After modification, the goodness-of-fit indices showed good values. All BSC variables have a direct significant influence, including the perspective of strategic goals and sustainable competitive advantage. The goodness-of-fit results of the modified simulation model are DF = 227, chi-square = 276.550, P = 0.058, CMIN/DF = 1.150, GFI = 0.831, AGFI = 0.791, CFI = 0.972, TLI = 0.965 and RMSEA = 0.039.

  5. Two refractory Wild 2 terminal particles from a carrot-shaped track characterized combining MIR/FIR/Raman microspectroscopy and FE-SEM/EDS analyses

    Science.gov (United States)

    Rotundi, A.; Rietmeijer, F. J. M.; Ferrari, M.; Della Corte, V.; Baratta, G. A.; Brunetto, R.; Dartois, E.; Djouadi, Z.; Merouane, S.; Borg, J.; Brucato, J. R.; Sergeant D'Hendecourt, L.; Mennella, V.; Palumbo, M. E.; Palumbo, P.

    2014-04-01

    We present the results of analyses of two bulk terminal particles, C2112,7,171,0,0 and C2112,9,171,0,0, derived from the Jupiter-family comet 81P/Wild 2 and returned by the Stardust mission. Each particle, embedded in a slab of silica aerogel, was pressed in a diamond cell. This preparation, as expected, made it difficult to identify the minerals and organic materials present in these particles. This problem was overcome using a combination of three different analytical techniques, viz. FE-SEM/EDS, IR, and Raman microspectroscopy, that allowed us to identify the minerals and small amounts of amorphous carbon present in both particles. TP2 and TP3 were dominated by Ca-free and low-Ca, Mg-rich, Mg,Fe-olivine. The presence of melilite in both particles is supported by IR microspectroscopy, but is not confirmed by Raman microspectroscopy, possibly because the amounts are too small to be detected. TP2 and TP3 show similar silicate mineral compositions, but Ni-free and low-Ni, subsulfur (Fe,Ni)S grains are present in TP2 only. TP2 contains indigenous amorphous carbon hot spots; no indigenous carbon was identified in TP3. These nonchondritic particles probably originated in a differentiated body. This work found an unanticipated carbon contamination following the FE-SEM/EDS analyses. It is suggested that organic materials in the embedding silica aerogel are irradiated during FE-SEM/EDS analyses, creating a carbon gas that develops a strong fluorescence continuum. The combination of the selected analytical techniques can be used to characterize bulk Wild 2 particles without the need for extraction and removal of the encapsulating aerogel. This approach offers a relatively fast sample preparation procedure, but compressing the samples can cause spurious artifacts, viz. silica contamination. Because of the combination of techniques, we account for these artifacts.

  6. CUFE at SemEval-2016 Task 4: A Gated Recurrent Model for Sentiment Classification

    KAUST Repository

    Nabil, Mahmoud

    2016-06-16

    In this paper we describe a deep learning system that has been built for SemEval 2016 Task 4 (Subtasks A and B). In this work we trained a Gated Recurrent Unit (GRU) neural network model on top of two sets of word embeddings: (a) general word embeddings generated from an unsupervised neural language model; and (b) task-specific word embeddings generated from a supervised neural language model that was trained to classify tweets into positive and negative categories. We also added a method for analyzing and splitting multi-word hashtags and appending them to the tweet body before feeding it to our model. Our models achieved an F1-measure of 0.58 for Subtask A (ranked 12/34) and a Recall of 0.679 for Subtask B (ranked 12/19).
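
    A minimal Keras sketch of this kind of architecture is shown below; the vocabulary size, dimensions and single GRU layer are illustrative assumptions rather than the authors' exact configuration, and the pretrained embeddings would in practice be loaded into the Embedding layer instead of being learned from scratch:

        import numpy as np
        import tensorflow as tf

        vocab_size, embed_dim, max_len = 20000, 100, 40        # assumed sizes

        model = tf.keras.Sequential([
            tf.keras.layers.Embedding(vocab_size, embed_dim),  # word embeddings (could be pretrained)
            tf.keras.layers.GRU(64),                           # gated recurrent unit over the tweet
            tf.keras.layers.Dense(1, activation="sigmoid"),    # positive vs. negative
        ])
        model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

        # Dummy integer-encoded tweets and labels, only to show the expected shapes.
        x = np.random.randint(0, vocab_size, size=(32, max_len))
        y = np.random.randint(0, 2, size=(32,))
        model.fit(x, y, epochs=1, verbose=0)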

  7. Structural Equations Model (SEM) of a questionnaire on the evaluation of intercultural secondary education classrooms

    Directory of Open Access Journals (Sweden)

    Eva María Olmedo Moreno

    2014-12-01

    This research includes the design of a questionnaire for evaluating cultural coexistence in secondary education classrooms (Berrocal, Olmedo & Olmos, 2014; Olmedo et al., 2014), as well as the comparison of its psychometric properties in a multicultural population of schools in southern Spain. An attempt is made to create a valid, reliable and useful tool for teachers to measure conflict situations in the classroom, as well as to understand the nature of the conflict from the point of view of all those involved. The metric analysis maximizes content and construct validity (Muñiz, 2010) using a Structural Equation Model (SEM) and Confirmatory Factor Analysis (CFA), checking and modifying the model by means of Wald and Lagrange indicators (Bentler, 2007) to obtain the model best adjusted to the theoretical and goodness-of-fit criteria.

  8. Incorporating Latent Variables into Discrete Choice Models - A Simultaneous Estimation Approach Using SEM Software

    Directory of Open Access Journals (Sweden)

    Dirk Temme

    2008-12-01

    Integrated choice and latent variable (ICLV) models represent a promising new class of models which merge classic choice models with the structural equation approach (SEM) for latent variables. Despite their conceptual appeal, applications of ICLV models in marketing remain rare. We extend previous ICLV applications by first estimating a multinomial choice model and, second, by estimating hierarchical relations between latent variables. An empirical study on travel mode choice clearly demonstrates the value of ICLV models in enhancing the understanding of choice processes. In addition to the usually studied directly observable variables such as travel time, we show how abstract motivations such as power and hedonism, as well as attitudes such as a desire for flexibility, impact travel mode choice. Furthermore, we show that it is possible to estimate such a complex ICLV model with the widely available structural equation modeling package Mplus. This finding is likely to encourage more widespread application of this appealing model class in the marketing field.
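
    In generic notation (not the paper's exact specification), an ICLV model combines a choice kernel with a structural and a measurement model for the latent variables \eta_n:

        U_{jn} = \beta^{\top} x_{jn} + \lambda_j^{\top} \eta_n + \varepsilon_{jn}, \qquad
        \eta_n = \Gamma w_n + \zeta_n, \qquad
        y_n = \Lambda \eta_n + \nu_n

    where alternative j is chosen by person n when U_{jn} exceeds the utilities of all other alternatives; simultaneous estimation integrates over the distribution of \eta_n rather than plugging in factor scores.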

  9. Modelling and analysing oriented fibrous structures

    International Nuclear Information System (INIS)

    Rantala, M; Lassas, M; Siltanen, S; Sampo, J; Takalo, J; Timonen, J

    2014-01-01

    A mathematical model for fibrous structures using a direction-dependent scaling law is presented. The orientation of fibrous nets (e.g. paper) is analysed with a method based on the curvelet transform. The curvelet-based orientation analysis has been tested successfully on real data from paper samples: the major directions of fibre orientation can apparently be recovered. Similar results are achieved in tests on data simulated by the new model, allowing a comparison with ground truth

  10. SEM-microphotogrammetry, a new take on an old method for generating high-resolution 3D models from SEM images.

    Science.gov (United States)

    Ball, A D; Job, P A; Walker, A E L

    2017-08-01

    The method we present here uses a scanning electron microscope programmed via macros to automatically capture dozens of images at suitable angles to generate accurate, detailed three-dimensional (3D) surface models with micron-scale resolution. We demonstrate that it is possible to use these scanning electron microscope (SEM) images in conjunction with commercially available software, originally developed for photogrammetry reconstructions from Digital Single Lens Reflex (DSLR) cameras, to reconstruct 3D models of the specimen. These 3D models can then be exported as polygon meshes and eventually 3D printed. This technique offers the potential to obtain data suitable for reconstructing very tiny features (e.g. diatoms, butterfly scales and mineral fabrics) at nanometre resolution. Ultimately, we foresee this as being a useful tool for better understanding spatial relationships at very high resolution. However, our motivation is also to use it to produce 3D models to be used in public outreach events and exhibitions, especially for the blind or partially sighted.

  11. Modelling and Analysing Socio-Technical Systems

    DEFF Research Database (Denmark)

    Aslanyan, Zaruhi; Ivanova, Marieta Georgieva; Nielson, Flemming

    2015-01-01

    Modern organisations are complex, socio-technical systems consisting of a mixture of physical infrastructure, human actors, policies and processes. An increasing number of attacks on these organisations exploits vulnerabilities on all different levels, for example combining a malware attack...... with social engineering. Due to this combination of attack steps on technical and social levels, risk assessment in socio-technical systems is complex. Therefore, established risk assessment methods often abstract away the internal structure of an organisation and ignore human factors when modelling...... and assessing attacks. In our work we model all relevant levels of socio-technical systems, and propose evaluation techniques for analysing the security properties of the model. Our approach simplifies the identification of possible attacks and provides qualified assessment and ranking of attacks based...

  12. Effects of a potassium nitrate mouthwash on dentinal tubules--a SEM analysis using the dentine disc model.

    Science.gov (United States)

    Pereira, Richard; Chava, Vijay K

    2002-04-01

    The concept of tubular occlusion as a method of dentine desensitisation is a logical conclusion from the hydrodynamic hypothesis put forth by Brannström. The aim of this study was therefore to investigate qualitatively by SEM whether a 3% potassium nitrate/0.2% sodium fluoride mouthwash occluded tubule orifices and, by X-ray microanalysis, to characterise the nature of the deposits, if any, following application. Following the 'dentine disc model' methodology, 1 mm thick tooth sections from unerupted molars were obtained. These were treated with the test and control mouthwashes and subjected to scanning electron microscopy. If any deposits were seen, they were to be subjected to elemental analysis using the energy-dispersive X-ray analyser. Examination of all the dentine disc surfaces treated with water (control) and the active and control mouthwashes demonstrated that none of the treatments, at any of the time intervals, had any visible effect on the dentinal tubule orifices, i.e. there was no dentinal tubular occlusion. The results suggest that potassium nitrate does not reduce dentinal hypersensitivity, at least by tubule occlusion. This could mean that there is a different mechanism of action, which could not be detected by this in vitro model.

  13. The SEM Risk Behavior (SRB) Model: A New Conceptual Model of how Pornography Influences the Sexual Intentions and HIV Risk Behavior of MSM.

    Science.gov (United States)

    Wilkerson, J Michael; Iantaffi, Alex; Smolenski, Derek J; Brady, Sonya S; Horvath, Keith J; Grey, Jeremy A; Rosser, B R Simon

    2012-01-01

    While the effects of sexually explicit media (SEM) on heterosexuals' sexual intentions and behaviors have been studied, little is known about the consumption and possible influence of SEM among men who have sex with men (MSM). Importantly, conceptual models of how Internet-based SEM influences behavior are lacking. Seventy-nine MSM participated in online focus groups about their SEM viewing preferences and sexual behavior. Twenty-three participants reported recent exposure to a new behavior via SEM. Whether participants modified their sexual intentions and/or engaged in the new behavior depended on three factors: arousal when imagining the behavior, pleasure when attempting the behavior, and trust between sex partners. Based on MSM's experience, we advance a model of how viewing a new sexual behavior in SEM influences sexual intentions and behaviors. The model includes five paths. Three paths result in the maintenance of sexual intentions and behaviors. One path results in a modification of sexual intentions while maintaining previous sexual behaviors, and one path results in a modification of both sexual intentions and behaviors. With this model, researchers have a framework to test associations between SEM consumption and sexual intentions and behavior, and public health programs have a framework to conceptualize SEM-based HIV/STI prevention programs.

  14. A SEM Model in Assessing the Effect of Convergent, Divergent and Logical Thinking on Students' Understanding of Chemical Phenomena

    Science.gov (United States)

    Stamovlasis, D.; Kypraios, N.; Papageorgiou, G.

    2015-01-01

    In this study, structural equation modeling (SEM) is applied to an instrument assessing students' understanding of chemical change. The instrument comprised items on understanding the structure of substances, chemical changes and their interpretation. The structural relationships among particular groups of items are investigated and analyzed using…

  15. Case Studies of Successful Schoolwide Enrichment Model-Reading (SEM-R) Classroom Implementations. Research Monograph Series. RM10204

    Science.gov (United States)

    Reis, Sally M.; Little, Catherine A.; Fogarty, Elizabeth; Housand, Angela M.; Housand, Brian C.; Sweeny, Sheelah M.; Eckert, Rebecca D.; Muller, Lisa M.

    2010-01-01

    The purpose of this qualitative study was to examine the scaling up of the Schoolwide Enrichment Model in Reading (SEM-R) in 11 elementary and middle schools in geographically diverse sites across the country. Qualitative comparative analysis was used in this study, with multiple data sources compiled into 11 in-depth school case studies…

  16. Application of the Structural Equation Model (SEM) in Determining Environmental Management Alternatives for the Heavy Equipment Component Industry Based on Community Participation and Partnership

    Directory of Open Access Journals (Sweden)

    Budi Setyo Utomo

    2012-07-01

    A company engaged in the industrial sector, producing certain components and located in an industrial area, will have an impact on the environment. These impacts can be positive in the form of employment, reduced dependence on imported heavy equipment, increased foreign exchange due to reduced imports and increased exports, increased government revenue from taxes, improved public facilities and supporting infrastructure, and opportunities for other related industries. These impacts can also be negative in the form of environmental degradation such as noise, dust, and micro-climate change, and changes in the social and cultural conditions surrounding the industry. Data analysis was performed descriptively and with the Structural Equation Model (SEM). SEM is a multivariate statistical technique combining factor analysis and regression (correlation) analysis, which aims to test the connections between the variables in a model, whether between an indicator and its construct or between constructs. An SEM model consists of two parts: the latent variable model and the observed variable model. In contrast to ordinary regression, which links causality between observed variables, SEM also makes it possible to identify causality between latent variables. The results of the SEM analysis showed that the developed model has a fairly high level of validity, as shown by the minimum-fit chi-square value of 93.15 (P = 0.00029). The model shows that the company's performance in waste management is largely determined by the integrity and objectivity of the employees, followed by the independence of the employees in waste management. The most important factors determining employee integrity in waste management in the model are honesty, individual wisdom, and a sense of responsibility. The most important factor in the employee objectivity

  17. Determination of the distribution of copper and chromium in partly remediated CCA-treated pine wood using SEM and EDX analyses

    DEFF Research Database (Denmark)

    Christensen, Iben Vernegren; Ottosen, Lisbeth M.; Melcher, Eckhard

    2005-01-01

    ) could be reduced to a large extent. Scanning electron microscopy with simultaneous electron dispersive X-ray analysis (SEM/EDX) clearly demonstrated a distinct difference in the distribution of Cu and Cr due to experimental conditions. Before soaking, the Cu and Cr was mainly located in the cell wall...

  18. Externalizing Behaviour for Analysing System Models

    DEFF Research Database (Denmark)

    Ivanova, Marieta Georgieva; Probst, Christian W.; Hansen, René Rydhof

    2013-01-01

    System models have recently been introduced to model organisations and evaluate their vulnerability to threats and especially insider threats. Especially for the latter these models are very suitable, since insiders can be assumed to have more knowledge about the attacked organisation than outside......, if not impossible task to change behaviours. Especially when considering social engineering or the human factor in general, the ability to use different kinds of behaviours is essential. In this work we present an approach to make the behaviour a separate component in system models, and explore how to integrate...

  19. Photovoltaic System Modeling. Uncertainty and Sensitivity Analyses

    Energy Technology Data Exchange (ETDEWEB)

    Hansen, Clifford W. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Martin, Curtis E. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-08-01

    We report an uncertainty and sensitivity analysis for modeling AC energy from photovoltaic systems. Output from a PV system is predicted by a sequence of models. We quantify uncertainty in the output of each model using empirical distributions of each model's residuals. We propagate uncertainty through the sequence of models by sampling these distributions to obtain an empirical distribution of a PV system's output. We consider models that: (1) translate measured global horizontal, direct and global diffuse irradiance to plane-of-array irradiance; (2) estimate effective irradiance; (3) predict cell temperature; (4) estimate DC voltage, current and power; (5) reduce DC power for losses due to inefficient maximum power point tracking or mismatch among modules; and (6) convert DC to AC power. Our analysis considers a notional PV system comprising an array of FirstSolar FS-387 modules and a 250 kW AC inverter; we use measured irradiance and weather at Albuquerque, NM. We found the uncertainty in PV system output to be relatively small, on the order of 1% for daily energy. We found the uncertainty in the models for POA irradiance and effective irradiance to be the dominant contributors to uncertainty in predicted daily energy. Our analysis indicates that efforts to reduce the uncertainty in PV system output predictions may yield the greatest improvements by focusing on the POA and effective irradiance models.
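
    A schematic numpy sketch of the propagation approach described above is given below: each model step adds an error drawn from an empirical residual distribution, and repeated sampling yields an empirical distribution of system output. The model forms and residual values are placeholders, not the report's actual models or data:

        import numpy as np

        rng = np.random.default_rng(0)

        poa_resid = rng.normal(0.0, 10.0, size=1000)   # stand-in empirical POA residuals (W/m^2)
        temp_resid = rng.normal(0.0, 1.5, size=1000)   # stand-in cell-temperature residuals (C)
        dc_resid = rng.normal(0.0, 0.01, size=1000)    # stand-in relative DC-power residuals

        def simulate_output(ghi, n_draws=5000):
            poa = ghi * 1.1 + rng.choice(poa_resid, n_draws)              # transposition model + error
            t_cell = 25.0 + 0.03 * poa + rng.choice(temp_resid, n_draws)  # cell temperature model + error
            dc = 0.2 * poa * (1 - 0.004 * (t_cell - 25.0))                # DC power model
            dc *= 1 + rng.choice(dc_resid, n_draws)                       # multiplicative model error
            ac = np.clip(dc * 0.97, 0, None)                              # inverter model
            return ac

        samples = simulate_output(ghi=600.0)
        print("relative spread:", samples.std() / samples.mean())

    Sensitivity follows from switching off one residual distribution at a time and observing how much the spread of the output distribution shrinks.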

  20. Pulse electrochemical machining on Invar alloy: Optical microscopic/SEM and non-contact 3D measurement study of surface analyses

    Science.gov (United States)

    Kim, S. H.; Choi, S. G.; Choi, W. K.; Yang, B. Y.; Lee, E. S.

    2014-09-01

    In this study, Invar alloy (Fe 63.5%, Ni 36.5%) was electrochemically polished by PECM (Pulse Electro Chemical Machining) in a mixture of NaCl, glycerin, and distilled water. A series of PECM experiments was carried out with different voltages and different electrode shapes, and the surfaces of the polished Invar alloy were then investigated by optical microscopy, scanning electron microscopy (SEM), and non-contact 3D measurement (white light microscopy). It was found that different applied voltages produced different surface characteristics because the applied voltage is locally concentrated on the Invar alloy surface. Moreover, we found that the shape of the electrode also affects the surface characteristics by influencing the applied voltage. These experimental findings provide fundamental knowledge for the PECM of Invar alloy based on surface analysis.

  1. Modelling and Analyses of Embedded Systems Design

    DEFF Research Database (Denmark)

    Brekling, Aske Wiid

    We present the MoVES language: a language with which embedded systems can be specified at a stage in the development process where an application is identified and should be mapped to an execution platform (potentially multi-core). We give a formal model for MoVES that captures and gives semantics to the elements of specifications in the MoVES language. We show that even for seemingly simple systems, the complexity of verifying real-time constraints can be overwhelming - but we give an upper limit to the size of the search-space that needs examining. Furthermore, the formal model exposes......-based verification is a promising approach for assisting developers of embedded systems. We provide examples of system verifications that, in size and complexity, point in the direction of industrially-interesting systems.

  2. Radiobiological analysis based on cell cluster models

    International Nuclear Information System (INIS)

    Lin Hui; Jing Jia; Meng Damin; Xu Yuanying; Xu Liangfeng

    2010-01-01

    The influence of cell cluster dimension on EUD and TCP for targeted radionuclide therapy was studied using radiobiological methods. The radiobiological features of a tumor with an activity-lacking core were evaluated and analyzed by associating EUD, TCP and SF. The results show that EUD increases with tumor dimension under a homogeneous activity distribution. If the extra-cellular activity is taken into consideration, the EUD increases by 47%. With activity lacking in the tumor center and the requirement of TCP = 0.90, the alpha cross-fire of 211At could compensate for at most a (48 μm)3 activity-lacking region for the Nucleus source, but (72 μm)3 for the Cytoplasm, Cell Surface, Cell and Voxel sources. In the clinic, the physician could prefer the suggested dose of the Cell Surface source to guard against loss of local tumor control due to under-dosing. Generally, TCP can exhibit the difference in effect between under-dose and due-dose, but not between due-dose and over-dose, which makes TCP more suitable for choosing the therapy plan. EUD can exhibit the differences between different models and activity distributions, which makes it more suitable for research work. When using EUD to study the influence of an inhomogeneous activity distribution, one should keep the configuration and volume of the former and latter models consistent. (authors)
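
    For orientation, the quantities related in the abstract are commonly defined as follows (generic textbook forms, not necessarily the paper's exact parameterization):

        \mathrm{EUD} = \Bigl(\sum_i v_i\, D_i^{\,a}\Bigr)^{1/a}, \qquad
        \mathrm{TCP} = \exp\!\bigl(-N_0\,\overline{SF}\bigr), \qquad
        SF(D) = \exp\!\bigl(-\alpha D - \beta D^2\bigr)

    where v_i is the fractional subvolume receiving dose D_i, a is a tissue-specific parameter, N_0 is the initial number of clonogenic cells, and SF is the surviving fraction, here written in the linear-quadratic form.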

  3. Assessing Actual Visit Behavior through Antecedents of Tourists' Satisfaction among International Tourists in Jordan: A Structural Equation Modeling (SEM) Approach

    Directory of Open Access Journals (Sweden)

    Ayed Moh’d Al Muala

    2011-06-01

    Jordan's tourism industry is facing fluctuating tourist visits provoked by dissatisfaction, high visit risk, low hotel service quality, or a negative image of Jordan. This study aims to examine the relationships between the antecedents of tourist satisfaction and actual visit behavior in Jordanian tourism, and the mediating effect of tourist satisfaction (SAT) in the relationship between Jordan image (JOM), service climate (SER) and actual visit behavior (ACT). A total of 850 international tourists completed a survey conducted at southern sites in Jordan. Using the structural equation modeling (SEM) technique, confirmatory factor analysis (CFA) was performed to examine the reliability and validity of the measurement, and structural equation modeling (Amos 6.0) was used to evaluate the causal model. Results of the study demonstrate the strong predictive and explanatory power of the model for international tourists' behavior in Jordan. The findings highlight that the relationships of Jordan image and service climate with actual visit behavior are significant and positive.

  4. An SEM Approach to Continuous Time Modeling of Panel Data: Relating Authoritarianism and Anomia

    Science.gov (United States)

    Voelkle, Manuel C.; Oud, Johan H. L.; Davidov, Eldad; Schmidt, Peter

    2012-01-01

    Panel studies, in which the same subjects are repeatedly observed at multiple time points, are among the most popular longitudinal designs in psychology. Meanwhile, there exists a wide range of different methods to analyze such data, with autoregressive and cross-lagged models being 2 of the most well known representatives. Unfortunately, in these…

  5. Filipino Nursing Students' Behavioral Intentions toward Geriatric Care: A Structural Equation Model (SEM)

    Science.gov (United States)

    de Guzman, Allan B.; Jimenez, Benito Christian B.; Jocson, Kathlyn P.; Junio, Aileen R.; Junio, Drazen E.; Jurado, Jasper Benjamin N.; Justiniano, Angela Bianca F.

    2013-01-01

    Anchored on the key constucts of Ajzen's Theory of Planned Behavior (1985), this paper seeks to test a model that explores the influence of knowledge, attitude, and caring behavior on nursing students' behavioral intention toward geriatric care. A five-part survey-questionnaire was administered to 839 third and fourth year nursing students from a…

  6. The Effect of Nonnormality on CB-SEM and PLS-SEM Path Estimates

    OpenAIRE

    Z. Jannoo; B. W. Yap; N. Auchoybur; M. A. Lazim

    2014-01-01

    The two common approaches to Structural Equation Modeling (SEM) are the Covariance-Based SEM (CB-SEM) and Partial Least Squares SEM (PLS-SEM). There is much debate on the performance of CB-SEM and PLS-SEM for small sample size and when distributions are nonnormal. This study evaluates the performance of CB-SEM and PLS-SEM under normality and nonnormality conditions via a simulation. Monte Carlo Simulation in R programming language was employed to generate data based on the theoretical model w...

  7. VIPRE modeling of VVER-1000 reactor core for DNB analyses

    Energy Technology Data Exchange (ETDEWEB)

    Sung, Y.; Nguyen, Q. [Westinghouse Electric Corporation, Pittsburgh, PA (United States); Cizek, J. [Nuclear Research Institute, Prague, (Czech Republic)

    1995-09-01

    Based on the one-pass modeling approach, the hot channels and the VVER-1000 reactor core can be modeled in 30 channels for DNB analyses using the VIPRE-01/MOD02 (VIPRE) code (VIPRE is owned by Electric Power Research Institute, Palo Alto, California). The VIPRE one-pass model does not compromise any accuracy in the hot channel local fluid conditions. Extensive qualifications include sensitivity studies of radial noding and crossflow parameters and comparisons with the results from THINC and CALOPEA subchannel codes. The qualifications confirm that the VIPRE code with the Westinghouse modeling method provides good computational performance and accuracy for VVER-1000 DNB analyses.

  8. Seismic Soil-Structure Interaction Analyses of a Deeply Embedded Model Reactor – SASSI Analyses

    Energy Technology Data Exchange (ETDEWEB)

    Nie J.; Braverman J.; Costantino, M.

    2013-10-31

    This report summarizes the SASSI analyses of a deeply embedded reactor model performed by BNL and CJC and Associates, as part of the seismic soil-structure interaction (SSI) simulation capability project for the NEAMS (Nuclear Energy Advanced Modeling and Simulation) Program of the Department of Energy. The SASSI analyses included three cases: 0.2 g, 0.5 g, and 0.9 g, all of which refer to nominal peak accelerations at the top of the bedrock. The analyses utilized the modified subtraction method (MSM) for performing the seismic SSI evaluations. Each case consisted of two analyses: input motion in one horizontal direction (X) and input motion in the vertical direction (Z), both of which utilized the same in-column input motion. Besides providing SASSI results for use in comparison with the time-domain SSI results obtained using the DIABLO computer code, this study also leads to the recognition that the frequency-domain method should be modernized so that it can better serve its mission-critical role for analysis and design of nuclear power plants.

  9. Nutrition, Balance and Fear of Falling as Predictors of Risk for Falls among Filipino Elderly in Nursing Homes: A Structural Equation Model (SEM)

    Science.gov (United States)

    de Guzman, Allan B.; Ines, Joanna Louise C.; Inofinada, Nina Josefa A.; Ituralde, Nielson Louie J.; Janolo, John Robert E.; Jerezo, Jnyv L.; Jhun, Hyae Suk J.

    2013-01-01

    While a number of empirical studies have been conducted regarding risk for falls among the elderly, there is still a paucity of similar studies in a developing country like the Philippines. This study purports to test through Structural Equation Modeling (SEM) a model that shows the interaction between and among nutrition, balance, fear of…

  10. Analysing the temporal dynamics of model performance for hydrological models

    NARCIS (Netherlands)

    Reusser, D.E.; Blume, T.; Schaefli, B.; Zehe, E.

    2009-01-01

    The temporal dynamics of hydrological model performance gives insights into errors that cannot be obtained from global performance measures assigning a single number to the fit of a simulated time series to an observed reference series. These errors can include errors in data, model parameters, or

  11. Ao leitor sem medo

    Directory of Open Access Journals (Sweden)

    José Eisenberg

    2000-05-01

    Full Text Available This text is a review of Ao leitor sem medo by Renato Janine Ribeiro (Belo Horizonte, UFMG, 1999).

  12. Modeling Citable Textual Analyses for the Homer Multitext

    Directory of Open Access Journals (Sweden)

    Christopher William Blackwell

    2016-12-01

    Full Text Available The 'Homer Multitext' project (hmt) is documenting the language and structure of Greek epic poetry, and the ancient tradition of commentary on it. The project’s primary data consist of editions of Greek texts; automated and manually created readings analyze the texts across historical and thematic axes. This paper describes an abstract model we follow in documenting an open-ended body of diverse analyses. The analyses apply to passages of texts at different levels of granularity; they may refer to overlapping or mutually exclusive passages of text; and they may apply to non-contiguous passages of text. All are recorded with explicit, concise, machine-actionable canonical citation of both text passage and analysis in a scheme aligning all analyses to a common notional text. We cite our texts with urns that capture a passage’s position in an 'Ordered Hierarchy of Citation Objects' (ohco2). Analyses are modeled as data-objects with five properties. We create collections of ‘analytical objects’, each uniquely identified by its own urn and each aligned to a particular edition of a text by an urn citation. We can view these analytical objects as an extension of the edition’s citation hierarchy; since they are explicitly ordered by their alignment with the edition they analyze, each collection of analyses satisfies the ohco2 model of a citable text. We call these texts that are derived from and aligned to an edition ‘analytical exemplars’.
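
    A minimal sketch of the 'analytical object' idea described above, assuming Python. The five property names, the class layout and the URN strings are illustrative placeholders; the abstract does not enumerate the actual properties used by the project.

```python
from dataclasses import dataclass

# Sketch of an 'analytical object' aligned to a citable text passage by URN.
# Field names are placeholders (the abstract does not list the five
# properties) and the URN values are invented for illustration only.
@dataclass(frozen=True)
class AnalyticalObject:
    urn: str           # identifier of this analysis
    passage_urn: str   # CTS-style URN of the text passage analysed
    analysis_type: str
    value: str
    sequence: int      # position in the exemplar's citation order

analyses = [
    AnalyticalObject("urn:example:analysis:1", "urn:cts:greekLit:tlg0012.tlg001:1.1",
                     "lexical", "wrath", 1),
    AnalyticalObject("urn:example:analysis:2", "urn:cts:greekLit:tlg0012.tlg001:1.2",
                     "lexical", "destructive", 2),
]

# Ordering by alignment with the edition makes the collection itself a
# citable 'analytical exemplar'.
exemplar = sorted(analyses, key=lambda a: a.sequence)
print(exemplar[0].passage_urn)
```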

  13. An SEM approach to continuous time modeling of panel data: Relating authoritarianism and anomia: Correction to Voelkle, Oud, Davidov, and Schmidt

    NARCIS (Netherlands)

    Voelkle, M.C.; Oud, J.H.L.; Davidov, E.; Schmidt, P.

    2012-01-01

    Reports an error in "An SEM approach to continuous time modeling of panel data: Relating authoritarianism and anomia" by Manuel C. Voelkle, Johan H. L. Oud, Eldad Davidov and Peter Schmidt (Psychological Methods, 2012[Jun], Vol 17[2], 176-192). The supplemental materials link was missing. All

  14. Use of flow models to analyse loss of coolant accidents

    International Nuclear Information System (INIS)

    Pinet, Bernard

    1978-01-01

    This article summarises current work on developing the use of flow models to analyse loss-of-coolant accidents in pressurized-water plants. This work is being done jointly, in the context of the LOCA Technical Committee, by the CEA, EDF and FRAMATOME. The construction of the flow model is very closely based on theoretical studies of the two-fluid model. The laws of transfer at the interface and at the wall are tested experimentally. The representativity of the model then has to be checked in experiments involving several elementary physical phenomena [fr]

  15. Social Network Analyses and Nutritional Behavior: An Integrated Modeling Approach

    Directory of Open Access Journals (Sweden)

    Alistair McNair Senior

    2016-01-01

    Full Text Available Animals have evolved complex foraging strategies to obtain a nutritionally balanced diet and associated fitness benefits. Recent advances in nutrition research, combining state-space models of nutritional geometry with agent-based models of systems biology, show how nutrient targeted foraging behavior can also influence animal social interactions, ultimately affecting collective dynamics and group structures. Here we demonstrate how social network analyses can be integrated into such a modeling framework and provide a tangible and practical analytical tool to compare experimental results with theory. We illustrate our approach by examining the case of nutritionally mediated dominance hierarchies. First we show how nutritionally explicit agent-based models that simulate the emergence of dominance hierarchies can be used to generate social networks. Importantly the structural properties of our simulated networks bear similarities to dominance networks of real animals (where conflicts are not always directly related to nutrition. Finally, we demonstrate how metrics from social network analyses can be used to predict the fitness of agents in these simulated competitive environments. Our results highlight the potential importance of nutritional mechanisms in shaping dominance interactions in a wide range of social and ecological contexts. Nutrition likely influences social interaction in many species, and yet a theoretical framework for exploring these effects is currently lacking. Combining social network analyses with computational models from nutritional ecology may bridge this divide, representing a pragmatic approach for generating theoretical predictions for nutritional experiments.

  16. SVM models for analysing the headstreams of mine water inrush

    Energy Technology Data Exchange (ETDEWEB)

    Yan Zhi-gang; Du Pei-jun; Guo Da-zhi [China University of Science and Technology, Xuzhou (China). School of Environmental Science and Spatial Informatics

    2007-08-15

    The support vector machine (SVM) model was introduced to analyse the headstreams of water inrush in a coal mine. The SVM model, based on a hydrogeochemical method, was constructed for recognising two kinds of headstreams, and the H-SVMs model was constructed for recognising multiple headstreams. The SVM method was applied to analyse the conditions of two mixed headstreams, and the value of the SVM decision function was investigated as a means of denoting the hydrogeochemical abnormality. The experimental results show that the SVM is based on a strict mathematical theory, has a simple structure and a good overall performance. Moreover, the parameter W in the decision function can describe the weights of the discrimination indices of the headstream of water inrush. The value of the decision function can denote hydrogeochemical abnormality, which is significant in the prevention of water inrush in a coal mine. 9 refs., 1 fig., 7 tabs.
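
    A hedged sketch of the classification step described above, assuming the scikit-learn library and synthetic hydrogeochemical indices; the paper's actual indices, data and H-SVM formulation are not reproduced here.

```python
import numpy as np
from sklearn.svm import SVC

# Toy sketch: classify water-inrush headstreams from hydrogeochemical
# indices (e.g. ion concentrations). Data are synthetic.
rng = np.random.default_rng(1)
X_a = rng.normal(loc=[2.0, 0.5, 1.0], scale=0.3, size=(40, 3))  # aquifer A
X_b = rng.normal(loc=[0.8, 1.8, 0.4], scale=0.3, size=(40, 3))  # aquifer B
X = np.vstack([X_a, X_b])
y = np.array([0] * 40 + [1] * 40)

clf = SVC(kernel="linear").fit(X, y)
print("weights w      :", clf.coef_[0])                          # weight of each index
print("decision value :", clf.decision_function([[1.4, 1.1, 0.7]]))  # a mixed sample
```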

  17. Performance of neutron kinetics models for ADS transient analyses

    International Nuclear Information System (INIS)

    Rineiski, A.; Maschek, W.; Rimpault, G.

    2002-01-01

    Within the framework of the SIMMER code development, neutron kinetics models for simulating transients and hypothetical accidents in advanced reactor systems, in particular in Accelerator Driven Systems (ADSs), have been developed at FZK/IKET in cooperation with CE Cadarache. SIMMER is a fluid-dynamics/thermal-hydraulics code, coupled with a structure model and a space-, time- and energy-dependent neutronics module for analyzing transients and accidents. The advanced kinetics models have also been implemented into KIN3D, a module of the VARIANT/TGV code (stand-alone neutron kinetics) for broadening application and for testing and benchmarking. In the paper, a short review of the SIMMER and KIN3D neutron kinetics models is given. Some typical transients related to ADS perturbations are analyzed. The general models of SIMMER and KIN3D are compared with more simple techniques developed in the context of this work to get a better understanding of the specifics of transients in subcritical systems and to estimate the performance of different kinetics options. These comparisons may also help in elaborating new kinetics models and extending existing computation tools for ADS transient analyses. The traditional point-kinetics model may give rather inaccurate transient reaction rate distributions in an ADS even if the material configuration does not change significantly. This inaccuracy is not related to the problem of choosing a 'right' weighting function: the point-kinetics model with any weighting function cannot take into account pronounced flux shape variations related to possible significant changes in the criticality level or to fast beam trips. To improve the accuracy of the point-kinetics option for slow transients, we have introduced a correction factor technique. The related analyses give a better understanding of 'long-timescale' kinetics phenomena in the subcritical domain and help to evaluate the performance of the quasi-static scheme in a particular case. One

  18. Mathematical and Numerical Analyses of Peridynamics for Multiscale Materials Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Du, Qiang [Pennsylvania State Univ., State College, PA (United States)

    2014-11-12

    The rational design of materials, the development of accurate and efficient material simulation algorithms, and the determination of the response of materials to environments and loads occurring in practice all require an understanding of mechanics at disparate spatial and temporal scales. The project addresses mathematical and numerical analyses for material problems for which relevant scales range from those usually treated by molecular dynamics all the way up to those most often treated by classical elasticity. The prevalent approach towards developing a multiscale material model couples two or more well known models, e.g., molecular dynamics and classical elasticity, each of which is useful at a different scale, creating a multiscale multi-model. However, the challenges behind such a coupling are formidable and largely arise because the atomistic and continuum models employ nonlocal and local models of force, respectively. The project focuses on a multiscale analysis of the peridynamics materials model. Peridynamics can be used as a transition between molecular dynamics and classical elasticity so that the difficulties encountered when directly coupling those two models are mitigated. In addition, in some situations, peridynamics can be used all by itself as a material model that accurately and efficiently captures the behavior of materials over a wide range of spatial and temporal scales. Peridynamics is well suited to these purposes because it employs a nonlocal model of force, analogous to that of molecular dynamics; furthermore, at sufficiently large length scales and assuming smooth deformation, peridynamics can be approximated by classical elasticity. The project will extend the emerging mathematical and numerical analysis of peridynamics. One goal is to develop a peridynamics-enabled multiscale multi-model that potentially provides a new and more extensive mathematical basis for coupling classical elasticity and molecular dynamics, thus enabling next

  19. GeoSemOLAP

    DEFF Research Database (Denmark)

    Gur, Nurefsan; Nielsen, Jacob; Hose, Katja

    2017-01-01

    very difficult for inexperienced users. Hence, we have developed GeoSemOLAP to enable users without detailed knowledge of RDF and SPARQL to query the SW with SOLAP. GeoSemOLAP generates SPARQL queries based on high-level SOLAP operators and allows the user to interactively formulate queries using...

  20. Integration efficiency for model reduction in micro-mechanical analyses

    Science.gov (United States)

    van Tuijl, Rody A.; Remmers, Joris J. C.; Geers, Marc G. D.

    2017-11-01

    Micro-structural analyses are an important tool to understand material behavior on a macroscopic scale. The analysis of a microstructure is usually computationally very demanding and there are several reduced order modeling techniques available in literature to limit the computational costs of repetitive analyses of a single representative volume element. These techniques to speed up the integration at the micro-scale can be roughly divided into two classes; methods interpolating the integrand and cubature methods. The empirical interpolation method (high-performance reduced order modeling) and the empirical cubature method are assessed in terms of their accuracy in approximating the full-order result. A micro-structural volume element is therefore considered, subjected to four load-cases, including cyclic and path-dependent loading. The differences in approximating the micro- and macroscopic quantities of interest are highlighted, e.g. micro-fluctuations and stresses. Algorithmic speed-ups for both methods with respect to the full-order micro-structural model are quantified. The pros and cons of both classes are thereby clearly identified.
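
    The empirical cubature idea mentioned above, selecting a sparse set of non-negative integration weights that reproduce full-order integrals of sampled integrands, can be sketched with a non-negative least-squares fit. This is a simplified surrogate under assumed random "snapshots", not the algorithm from the cited literature; it assumes NumPy and SciPy.

```python
import numpy as np
from scipy.optimize import nnls

# Find non-negative weights that reproduce the integrals of a set of
# sampled integrand snapshots using (ideally few) quadrature points.
rng = np.random.default_rng(2)
n_points, n_snapshots = 200, 20
full_weights = np.full(n_points, 1.0 / n_points)

snapshots = rng.normal(size=(n_snapshots, n_points))   # integrand values
targets = snapshots @ full_weights                      # full-order integrals

w_reduced, residual = nnls(snapshots, targets)
print("non-zero reduced weights:", np.count_nonzero(w_reduced))
print("integration residual    :", residual)
```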

  1. A 1024 channel analyser of model FH 465

    International Nuclear Information System (INIS)

    Tang Cunxun

    1988-01-01

    The FH 465 is a renewed version of the model FH 451 1024-channel analyser. Besides the simple operation and fine display featured by its predecessor, the core memory has been replaced by semiconductor memory and the level of integration has been improved; the use of widely available, low-power 74LS devices has not only greatly decreased the cost but also allows the analyser to be easily interfaced with Apple-II, Great Wall-0520-CH or IBM-PC/XT microcomputers. The operating principle, main specifications and test results are described

  2. Internet Marketing Adoption Factors for Micro, Small and Medium Enterprises (UMKM) in Kudus Regency Using SEM (Structural Equation Model) and the COBIT 4.1 Framework

    Directory of Open Access Journals (Sweden)

    Endang Supriyati

    2013-06-01

    Full Text Available ABSTRACT Internet marketing is a new strategy in the current era of information technology. Information technology is directed at supporting the core and supporting business processes of micro, small and medium enterprises (UMKM). This research was conducted on UMKM in Kudus Regency engaged in garment making and embroidery crafts. An analysis of IT governance identified the relevant COBIT domain as PO5 (measuring IT investment). The indicator analysed was the use of internet marketing. Based on this identification, questionnaires were distributed to the UMKM. A Structural Equation Modeling (SEM) approach was used to analyse empirically the factors related to the use of internet marketing in marketing UMKM products. The results show that the correlation between internet marketing and PO5 is fairly strong (-0.358) but negative in direction, so that the weaker the management of IT investment, the lower the use of internet marketing. Keywords: UMKM, internet marketing, COBIT, PO5, SEM

  3. Multi-state models: metapopulation and life history analyses

    Directory of Open Access Journals (Sweden)

    Arnason, A. N.

    2004-06-01

    Full Text Available Multi–state models are designed to describe populations that move among a fixed set of categorical states. The obvious application is to population interchange among geographic locations such as breeding sites or feeding areas (e.g., Hestbeck et al., 1991; Blums et al., 2003; Cam et al., 2004 but they are increasingly used to address important questions of evolutionary biology and life history strategies (Nichols & Kendall, 1995. In these applications, the states include life history stages such as breeding states. The multi–state models, by permitting estimation of stage–specific survival and transition rates, can help assess trade–offs between life history mechanisms (e.g. Yoccoz et al., 2000. These trade–offs are also important in meta–population analyses where, for example, the pre–and post–breeding rates of transfer among sub–populations can be analysed in terms of target colony distance, density, and other covariates (e.g., Lebreton et al. 2003; Breton et al., in review. Further examples of the use of multi–state models in analysing dispersal and life–history trade–offs can be found in the session on Migration and Dispersal. In this session, we concentrate on applications that did not involve dispersal. These applications fall in two main categories: those that address life history questions using stage categories, and a more technical use of multi–state models to address problems arising from the violation of mark–recapture assumptions leading to the potential for seriously biased predictions or misleading insights from the models. Our plenary paper, by William Kendall (Kendall, 2004, gives an overview of the use of Multi–state Mark–Recapture (MSMR models to address two such violations. The first is the occurrence of unobservable states that can arise, for example, from temporary emigration or by incomplete sampling coverage of a target population. Such states can also occur for life history reasons, such

  4. Comparison between tests and analyses for ground-foundation models

    International Nuclear Information System (INIS)

    Moriyama, Ken-ichi; Hibino, Hirosi; Izumi, Masanori; Kiya, Yukiharu.

    1991-01-01

    Laboratory tests were carried out on two ground models made of silicone rubber (hard and soft ground models) and a foundation model made of aluminum in order to confirm experimentally the embedment effects on the soil-structure interaction system. The details of the procedure and the results of the tests are described in the companion paper. Analytical studies on the embedment effect on the seismic response of buildings have been performed in recent years, and the analysis tools have been used in the seismic design of nuclear power plant facilities. In this paper, the embedment effects on the soil-structure interaction system are confirmed by simulation analysis, and the analysis tools are verified through the simulation analyses. The following conclusions can be drawn from the comparison between the laboratory test results and the analysis results. (1) The effects of embedment, such as the increase in the impedance functions and in the rotational component of the foundation input motions, were clarified by the simulation analyses and laboratory tests. (2) The analysis results of the axisymmetric FEM showed good agreement with the test results processed by means of the transient response to eliminate the reflected waves, and the analysis tools were thus confirmed experimentally. (3) The excavated portion of the soil affected the foundation input motion rather than the impedance function, since there was little difference between the impedance functions obtained by wave propagation theory and those obtained by the axisymmetric FEM, while the rotational component of the foundation input motions increased significantly. (J.P.N.)

  5. A theoretical model for analysing gender bias in medicine

    Directory of Open Access Journals (Sweden)

    Johansson Eva E

    2009-08-01

    Full Text Available Abstract During the last decades research has reported unmotivated differences in the treatment of women and men in various areas of clinical and academic medicine. There is an ongoing discussion on how to avoid such gender bias. We developed a three-step theoretical model to understand how gender bias in medicine can occur and be understood. In this paper we present the model and discuss its usefulness in the efforts to avoid gender bias. In the model, gender bias is analysed in relation to assumptions concerning difference/sameness and equity/inequity between women and men. Our model illustrates that gender bias in medicine can arise from assuming sameness and/or equity between women and men when there are genuine differences to consider in biology and disease, as well as in life conditions and experiences. However, gender bias can also arise from assuming differences when there are none, when and if dichotomous stereotypes about women and men are understood as valid. This conceptual thinking can be useful for discussing and avoiding gender bias in clinical work, medical education, career opportunities and documents such as research programs and health care policies. To meet the various forms of gender bias, different facts and measures are needed. Knowledge about biological differences between women and men will not reduce bias caused by gendered stereotypes or by unawareness of health problems and discrimination associated with gender inequity. Such bias reflects unawareness of gendered attitudes and will not change by facts only. We suggest consciousness-raising activities and continuous reflections on gender attitudes among students, teachers, researchers and decision-makers.

  6. A theoretical model for analysing gender bias in medicine.

    Science.gov (United States)

    Risberg, Gunilla; Johansson, Eva E; Hamberg, Katarina

    2009-08-03

    During the last decades research has reported unmotivated differences in the treatment of women and men in various areas of clinical and academic medicine. There is an ongoing discussion on how to avoid such gender bias. We developed a three-step theoretical model to understand how gender bias in medicine can occur and be understood. In this paper we present the model and discuss its usefulness in the efforts to avoid gender bias. In the model, gender bias is analysed in relation to assumptions concerning difference/sameness and equity/inequity between women and men. Our model illustrates that gender bias in medicine can arise from assuming sameness and/or equity between women and men when there are genuine differences to consider in biology and disease, as well as in life conditions and experiences. However, gender bias can also arise from assuming differences when there are none, when and if dichotomous stereotypes about women and men are understood as valid. This conceptual thinking can be useful for discussing and avoiding gender bias in clinical work, medical education, career opportunities and documents such as research programs and health care policies. To meet the various forms of gender bias, different facts and measures are needed. Knowledge about biological differences between women and men will not reduce bias caused by gendered stereotypes or by unawareness of health problems and discrimination associated with gender inequity. Such bias reflects unawareness of gendered attitudes and will not change by facts only. We suggest consciousness-raising activities and continuous reflections on gender attitudes among students, teachers, researchers and decision-makers.

  7. Um modelo semântico de publicações eletrônicas | A semantic model for electronic publishing

    Directory of Open Access Journals (Sweden)

    Carlos Henrique Marcondes

    2011-03-01

    Full Text Available Abstract Electronic publications, despite advances in information technology, are still modelled on the printed format. The textual format prevents programs from processing these contents "semantically". A "semantic" model of electronic scientific publications is proposed, in which the conclusions contained in the article text are provided by the authors and represented in a format "intelligible" to programs, enabling semantic retrieval, the identification of signs of new scientific discoveries, and the detection of inconsistencies in this knowledge. The model is based on the concepts of the deep, or semantic, structure of language (CHOMSKY, 1975), of microstructure, macrostructure and superstructure (KINTSCH, VAN DIJK, 1972), on the rhetorical structure of scientific articles (HUTCHINS, 1977; GROSS, 1990), and on elements of scientific methodology such as problem, question, objective, hypothesis, experiment and conclusion. It results from the analysis of 89 biomedical articles. A prototype system that partially implements the model was developed. Questionnaires were used with authors to inform the development of the prototype, which was then tested with researcher-authors. Four patterns of reasoning and chaining of the semantic elements in scientific articles were identified. The content model was implemented as a computational ontology. A prototype web interface for article submission by authors to an electronic journal publishing system implementing the model was developed and evaluated. Keywords: electronic publishing; scientific methodology; scholarly communication; knowledge representation; ontologies; semantic processing of content; e-Science

  8. Comparison of two potato simulation models under climate change. I. Model calibration and sensitivity analyses

    NARCIS (Netherlands)

    Wolf, J.

    2002-01-01

    To analyse the effects of climate change on potato growth and production, both a simple growth model, POTATOS, and a comprehensive model, NPOTATO, were applied. Both models were calibrated and tested against results from experiments and variety trials in The Netherlands. The sensitivity of model

  9. Energy renovation of Sems Have

    DEFF Research Database (Denmark)

    Jensen, Søren Østergaard; Rose, Jørgen; Mørck, Ove

    energy renovation of housing blocks. The guide covers the optimisation of economics, energy savings and CO2 reduction when renovating housing blocks to a low-energy level. The focus is on prefabricated-element construction from the 1960s-70s as well as brick construction. Two specific renovation cases serve as the starting point: Traneparken and Sems Have, where the renovation was carried out in two fundamentally different ways: Traneparken with external post-insulation to nearly low-energy class 2015 level (the current BR2015 requirement), and Sems Have with a completely new building envelope and new installations to building class 2020 level. Both estates have also received new ventilation systems and PV systems. This report describes the renovation of Sems Have.

  10. Impact of sophisticated fog spray models on accident analyses

    International Nuclear Information System (INIS)

    Roblyer, S.P.; Owzarski, P.C.

    1978-01-01

    The N-Reactor confinement system release dose to the public in a postulated accident is reduced by washing the confinement atmosphere with fog sprays. This allows a low pressure release of confinement atmosphere containing fission products through filters and out an elevated stack. The current accident analysis required revision of the CORRAL code and other codes such as CONTEMPT to properly model the N Reactor confinement as a system of multiple fog-sprayed compartments. In revising these codes, more sophisticated models for the fog sprays and iodine plateout were incorporated, removing some of the conservatism in the steam condensing rate, fission product washout and iodine plateout assumed in previous studies. The CORRAL code, which was used to describe the transport and deposition of airborne fission products in LWR containment systems for the Rasmussen Study, was revised to describe fog spray removal of molecular iodine (I2) and particulates in multiple compartments for sprays having individual characteristics of on-off times, flow rates, fall heights, and drop sizes in changing containment atmospheres. During postulated accidents, the code determined the fission product removal rates internally rather than from input decontamination factors. A discussion is given of how the calculated plateout and washout rates vary with time throughout the analysis. The results of the accident analyses indicated that more credit could be given to fission product washout and plateout. An important finding was that the release of fission products to the atmosphere and adsorption of fission products on the filters were significantly lower than previous studies had indicated

  11. SEM microcharacterization of semiconductors

    CERN Document Server

    Holt, D B

    1989-01-01

    Applications of SEM techniques of microcharacterization have proliferated to cover every type of material and virtually every branch of science and technology. This book emphasizes the fundamental physical principles. The first section deals with the foundation of microcharacterization in electron beam instruments and the second deals with the interpretation of the information obtained in the main operating modes of a scanning electron microscope.

  12. SEM-EDX

    African Journals Online (AJOL)

    aghomotsegin

    2015-03-11

    Abbreviations: SEM-EDX, scanning electron microscopy-energy dispersive X-ray spectrometer; As, arsenic; Cd, cadmium; ICP-MS, inductively coupled plasma-mass spectrometer; AAS, atomic absorption spectrometry.

  13. Testing Mediation Using Multiple Regression and Structural Equation Modeling Analyses in Secondary Data

    Science.gov (United States)

    Li, Spencer D.

    2011-01-01

    Mediation analysis in child and adolescent development research is possible using large secondary data sets. This article provides an overview of two statistical methods commonly used to test mediated effects in secondary analysis: multiple regression and structural equation modeling (SEM). Two empirical studies are presented to illustrate the…
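
    As a hedged sketch of the regression route to mediation mentioned above, the snippet below computes the product-of-coefficients indirect effect a*b on simulated data with plain OLS. The data-generating values are assumptions; the article's SEM approach and secondary data sets are not reproduced.

```python
import numpy as np

# Product-of-coefficients mediation sketch (X -> M -> Y) using plain OLS.
rng = np.random.default_rng(3)
n = 500
x = rng.normal(size=n)
m = 0.6 * x + rng.normal(size=n)             # path a
y = 0.4 * m + 0.2 * x + rng.normal(size=n)   # paths b and c'

def ols(design, response):
    beta, *_ = np.linalg.lstsq(design, response, rcond=None)
    return beta

X1 = np.column_stack([np.ones(n), x])
a = ols(X1, m)[1]                            # M ~ X
X2 = np.column_stack([np.ones(n), x, m])
c_prime, b = ols(X2, y)[1:]                  # Y ~ X + M

print(f"indirect effect a*b = {a * b:.3f}, direct effect c' = {c_prime:.3f}")
```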

  14. Space Experiment Module (SEM)

    Science.gov (United States)

    Brodell, Charles L.

    1999-01-01

    The Space Experiment Module (SEM) Program is an education initiative sponsored by the National Aeronautics and Space Administration (NASA) Shuttle Small Payloads Project. The program provides nationwide educational access to space for Kindergarten through University level students. The SEM program focuses on the science of zero-gravity and microgravity. Within the program, NASA provides small containers or "modules" for students to fly experiments on the Space Shuttle. The experiments are created, designed, built, and implemented by students with teacher and/or mentor guidance. Student experiment modules are flown in a "carrier" which resides in the cargo bay of the Space Shuttle. The carrier supplies power to, and the means to control and collect data from each experiment.

  15. Mathematical and Numerical Analyses of Peridynamics for Multiscale Materials Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Gunzburger, Max [Florida State Univ., Tallahassee, FL (United States)

    2015-02-17

    We have treated the modeling, analysis, numerical analysis, and algorithmic development for nonlocal models of diffusion and mechanics. Variational formulations were developed and finite element methods were developed based on those formulations for both steady state and time dependent problems. Obstacle problems and optimization problems for the nonlocal models were also treated and connections made with fractional derivative models.

  16. Identifying longitudinal growth trajectories of learning domains in problem-based learning: a latent growth curve modeling approach using SEM.

    Science.gov (United States)

    Wimmers, Paul F; Lee, Ming

    2015-05-01

    To determine the direction and extent to which medical student scores (as observed by small-group tutors) on four problem-based-learning-related domains change over nine consecutive blocks during a two-year period (Domains: Problem Solving/Use of Information/Group Process/Professionalism). Latent growth curve modeling is used to analyze performance trajectories in each domain of two cohorts of 1st and 2nd year students (n = 296). Slopes of the growth trajectories show similar linear increments in the first three domains. Further analysis revealed relative strong individual variability in initial scores but not in their later increments. Professionalism, on the other hand, shows low variability and has very small, insignificant slope increments. In this study, we showed that the learning domains (Problem Solving, Use of Information, and Group Process) observed during PBL tutorials are not only related to each other but also develop cumulatively over time. Professionalism, in contrast to the other domains studied, is less affected by the curriculum suggesting that this represents a stable characteristic. The observation that the PBL tutorial has an equal benefit to all students is noteworthy and needs further investigation.
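
    A crude two-stage approximation of the growth-trajectory idea described above: fit an intercept and slope per student over nine blocks, then summarise their distribution. A true latent growth curve model (as used in the study) estimates these as latent variables within SEM; the simulated scores and effect sizes below are illustrative assumptions only.

```python
import numpy as np

# Two-stage sketch of a linear growth model over 9 blocks.
rng = np.random.default_rng(4)
n_students, n_blocks = 296, 9
time = np.arange(n_blocks)

true_intercepts = rng.normal(3.0, 0.5, n_students)   # strong initial variability
true_slopes = rng.normal(0.10, 0.02, n_students)     # small spread in increments
scores = (true_intercepts[:, None] + true_slopes[:, None] * time
          + rng.normal(0, 0.3, (n_students, n_blocks)))

design = np.column_stack([np.ones(n_blocks), time])
coefs = np.linalg.lstsq(design, scores.T, rcond=None)[0]  # shape (2, n_students)

print("mean intercept, slope    :", coefs.mean(axis=1))
print("variance intercept, slope:", coefs.var(axis=1))
```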

  17. Improved analyses using function datasets and statistical modeling

    Science.gov (United States)

    John S. Hogland; Nathaniel M. Anderson

    2014-01-01

    Raster modeling is an integral component of spatial analysis. However, conventional raster modeling techniques can require a substantial amount of processing time and storage space and have limited statistical functionality and machine learning algorithms. To address this issue, we developed a new modeling framework using C# and ArcObjects and integrated that framework...

  18. An electrodynamic model to analyse field emission thrusters

    Energy Technology Data Exchange (ETDEWEB)

    Cardelli, E.; Del Zoppo, R.; Venturini, G.

    1987-12-01

    After a short description of the working principle of field emission thrusters, a surface emission electrodynamic model, capable of describing the required propulsive effects, is shown. The model, developed according to cylindrical geometry, provides one-dimensional differential relations and, therefore, easy resolution. The characteristic curves obtained are graphed. Comparison with experimental data confirms the validity of the proposed model.

  19. The relationship between cost estimates reliability and BIM adoption: SEM analysis

    Science.gov (United States)

    Ismail, N. A. A.; Idris, N. H.; Ramli, H.; Rooshdi, R. R. Raja Muhammad; Sahamir, S. R.

    2018-02-01

    This paper presents the usage of Structural Equation Modelling (SEM) in analysing the effects of Building Information Modelling (BIM) technology adoption on improving the reliability of cost estimates. Based on the questionnaire survey results, SEM analysis using the SPSS-AMOS application examined the relationships between BIM-improved information and cost estimates reliability factors, leading to BIM technology adoption. Six hypotheses were established prior to the SEM analysis, which employed two types of SEM models, namely the Confirmatory Factor Analysis (CFA) model and the full structural model. The SEM models were then validated through assessment of their uni-dimensionality, validity, reliability, and fitness index, in line with the hypotheses tested. The final SEM model fit measures are: P-value=0.000, RMSEA=0.079, TLI=0.956>0.90, NFI=0.935>0.90 and ChiSq/df=2.259, indicating that the overall index values achieved the required level of model fitness. The model supports all the hypotheses evaluated, confirming that all relationships amongst the constructs are positive and significant. Ultimately, the analysis verified that most of the respondents foresee a better understanding of project input information through BIM visualization, its reliable database and coordinated data, in developing more reliable cost estimates. They also expect BIM adoption to accelerate their cost estimating tasks.
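
    For readers unfamiliar with the fit measures quoted above, the sketch below computes two of the standard ones (chi-square/df and RMSEA) from a model chi-square. The formula is one common textbook RMSEA definition, and the input numbers are illustrative, not values from this study.

```python
import math

def fit_indices(chi_square: float, df: int, n: int) -> dict:
    """Chi-square/df ratio and RMSEA for a fitted SEM.

    One common form: RMSEA = sqrt( max(chi2 - df, 0) / (df * (N - 1)) ).
    Frequently quoted thresholds: chi2/df < 3, RMSEA < 0.08.
    """
    rmsea = math.sqrt(max(chi_square - df, 0.0) / (df * (n - 1)))
    return {"chisq_df": chi_square / df, "rmsea": rmsea}

# Illustrative numbers only: chi-square = 452, df = 200, N = 110 responses.
print(fit_indices(452.0, 200, 110))
```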

  20. Piecewise Structural Equation Model (SEM) Disentangles the Environmental Conditions Favoring Diatom Diazotroph Associations (DDAs) in the Western Tropical North Atlantic (WTNA).

    Science.gov (United States)

    Stenegren, Marcus; Berg, Carlo; Padilla, Cory C; David, Stefan-Sebastian; Montoya, Joseph P; Yager, Patricia L; Foster, Rachel A

    2017-01-01

    Diatom diazotroph associations (DDAs) are important components in the world's oceans, especially in the western tropical north Atlantic (WTNA), where blooms have a significant impact on carbon and nitrogen cycling. However, drivers of their abundances and distribution patterns remain unknown. Here, we examined abundance and distribution patterns for two DDA populations in relation to the Amazon River (AR) plume in the WTNA. Quantitative PCR assays, targeting two DDAs (het-1 and het-2) by their symbiont's nifH gene, served as input in a piecewise structural equation model (SEM). Collections were made during high (spring 2010) and low (fall 2011) flow discharges of the AR. The distributions of dissolved nutrients, chlorophyll-a, and DDAs showed coherent patterns indicative of areas influenced by the AR. A symbiotic Hemiaulus hauckii-Richelia (het-2) bloom (>10^6 cells L^-1) occurred during higher discharge of the AR and was coincident with mesohaline to oceanic (30-35) sea surface salinities (SSS), and regions devoid of dissolved inorganic nitrogen (DIN), with low concentrations of both DIP (<0.1 μmol L^-1) and Si (<1.0 μmol L^-1). The Richelia (het-1) associated with Rhizosolenia was only present in 2010 and at lower densities (10-1.76 × 10^5 nifH copies L^-1) than het-2 and limited to regions of oceanic SSS (>36). The het-2 symbiont detected in 2011 was associated with H. membranaceus (>10^3 nifH copies L^-1) and was restricted to regions with mesohaline SSS (31.8-34.3), immeasurable DIN, moderate DIP (0.1-0.60 μmol L^-1) and higher Si (4.19-22.1 μmol L^-1). The piecewise SEM identified a profound direct negative effect of turbidity on het-2 abundance in spring 2010, while DIP and water turbidity had a more positive influence in fall 2011, corroborating our observations of DDAs at subsurface maxima. We also found a striking difference in the influence of salinity on DDA symbionts suggesting a niche differentiation and preferences in oceanic and

  1. A didactic Input-Output model for territorial ecology analyses

    OpenAIRE

    Garry Mcdonald

    2010-01-01

    This report describes a didactic input-output modelling framework created jointly by the team at REEDS, Universite de Versailles, and Dr Garry McDonald, Director, Market Economics Ltd. There are three key outputs associated with this framework: (i) a suite of didactic input-output models developed in Microsoft Excel, (ii) a technical report (this report) which describes the framework and the suite of models, and (iii) a two-week intensive workshop dedicated to the training of REEDS researcher...
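
    A framework like the one described above rests on the standard Leontief input-output quantity model, x = (I - A)^-1 d. A minimal NumPy sketch with invented three-sector coefficients (not the REEDS models) is:

```python
import numpy as np

# Leontief quantity model: total output x satisfying x = A x + d,
# i.e. x = (I - A)^(-1) d. Coefficients below are invented for illustration.
A = np.array([[0.10, 0.20, 0.05],
              [0.15, 0.05, 0.10],
              [0.05, 0.10, 0.15]])   # inter-industry requirements per unit output
d = np.array([100.0, 50.0, 80.0])    # final demand by sector

x = np.linalg.solve(np.eye(3) - A, d)
print("gross output by sector:", np.round(x, 2))
```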

  2. Modelling, singular perturbation and bifurcation analyses of bitrophic food chains.

    Science.gov (United States)

    Kooi, B W; Poggiale, J C

    2018-04-20

    Two predator-prey model formulations are studied: the classical Rosenzweig-MacArthur (RM) model and the Mass Balance (MB) chemostat model. When the growth and loss rates of the predator are much smaller than those of the prey, these models are slow-fast systems, leading mathematically to a singular perturbation problem. In contrast to the RM model, the resource for the prey is modelled explicitly in the MB model, but this comes with additional parameters. These parameter values are chosen such that the two models become easy to compare. Both models exhibit a transcritical bifurcation, a threshold above which invasion of the predator into the prey-only system occurs, and a Hopf bifurcation where the interior equilibrium becomes unstable, leading to a stable limit cycle. The fast-slow limit cycles are called relaxation oscillations, which for increasing differences in time scales lead to the well-known degenerate trajectories consisting of concatenations of slow and fast parts of the trajectory. In the fast-slow version of the RM model a canard explosion of the stable limit cycles occurs in the oscillatory region of the parameter space. To our knowledge this type of dynamics has not been observed for the RM model, nor for more complex ecosystem models. When a bifurcation parameter crosses the Hopf bifurcation point the amplitude of the emerging stable limit cycles increases. However, depending on the perturbation parameter, the shape of this limit cycle changes abruptly from one consisting of two concatenated slow and fast episodes with small amplitude to one with large amplitude whose shape is similar to the relaxation oscillation, the well-known degenerate phase trajectory consisting of four episodes (a concatenation of two slow and two fast). The canard explosion point is accurately predicted by using an extended asymptotic expansion technique in the perturbation and bifurcation parameters simultaneously, where the small
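
    A hedged sketch of the slow-fast Rosenzweig-MacArthur formulation discussed above, assuming SciPy; the parameter values are illustrative choices that place the equilibrium in the oscillatory regime, not those analysed in the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Slow-fast Rosenzweig-MacArthur model: logistic prey with Holling type II
# predation; epsilon << 1 makes the predator the slow variable.
r, K, a, h, e, m, eps = 1.0, 1.0, 4.0, 1.0, 1.0, 0.5, 0.01

def rm(t, z):
    x, y = z
    f = a * x / (1.0 + a * h * x)            # functional response
    return [r * x * (1 - x / K) - f * y,     # fast prey dynamics
            eps * (e * f - m) * y]           # slow predator dynamics

sol = solve_ivp(rm, (0.0, 1000.0), [0.5, 0.2], method="LSODA", max_step=0.5)
print("final state (x, y):", sol.y[:, -1])
```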

  3. Analysing the Linux kernel feature model changes using FMDiff

    NARCIS (Netherlands)

    Dintzner, N.J.R.; van Deursen, A.; Pinzger, M.

    Evolving a large scale, highly variable system is a challenging task. For such a system, evolution operations often require to update consistently both their implementation and its feature model. In this context, the evolution of the feature model closely follows the evolution of the system. The

  4. Analysing the Linux kernel feature model changes using FMDiff

    NARCIS (Netherlands)

    Dintzner, N.J.R.; Van Deursen, A.; Pinzger, M.

    2015-01-01

    Evolving a large scale, highly variable system is a challenging task. For such a system, evolution operations often require to update consistently both their implementation and its feature model. In this context, the evolution of the feature model closely follows the evolution of the system. The

  5. Analysing Models as a Knowledge Technology in Transport Planning

    DEFF Research Database (Denmark)

    Gudmundsson, Henrik

    2011-01-01

    Models belong to a wider family of knowledge technologies, applied in the transport area. Models sometimes share with other such technologies the fate of not being used as intended, or not at all. The result may be ill-conceived plans as well as wasted resources. Frequently, the blame ... critical analytic literature on knowledge utilization and policy influence. A simple scheme based in this literature is drawn up to provide a framework for discussing the interface between urban transport planning and model use. A successful example of model use in Stockholm, Sweden is used as a heuristic device to illuminate how such an analytic scheme may allow patterns of insight about the use, influence and role of models in planning to emerge. The main contribution of the paper is to demonstrate that concepts and terminologies from knowledge use literature can provide interpretations of significance...

  6. GOTHIC MODEL OF BWR SECONDARY CONTAINMENT DRAWDOWN ANALYSES

    International Nuclear Information System (INIS)

    Hansen, P.N.

    2004-01-01

    This article introduces a GOTHIC version 7.1 model of the Secondary Containment Reactor Building post-LOCA drawdown analysis for a BWR. GOTHIC is an EPRI-sponsored thermal hydraulic code. This analysis is required by the Utility to demonstrate an ability to restore and maintain the Secondary Containment Reactor Building negative pressure condition. The technical and regulatory issues associated with this modeling are presented. The analysis includes the effect of wind, elevation and thermal impacts on pressure conditions. The model includes a multiple-volume representation which includes the spent fuel pool. In addition, heat sources and sinks are modeled as one-dimensional heat conductors. The leakage into the building is modeled to include both laminar and turbulent behavior as established by actual plant test data. The GOTHIC code provides components to model heat exchangers used to provide fuel pool cooling as well as area cooling via air coolers. The results of the evaluation are used to demonstrate the time that the Reactor Building is at a pressure that exceeds external conditions. This time period is established with the GOTHIC model based on the worst case pressure conditions on the building. For this time period the Utility must assume the primary containment leakage goes directly to the environment. Once the building pressure is restored below outside conditions the release to the environment can be credited as a filtered release

  7. Plasma-safety assessment model and safety analyses of ITER

    International Nuclear Information System (INIS)

    Honda, T.; Okazaki, T.; Bartels, H.-H.; Uckan, N.A.; Sugihara, M.; Seki, Y.

    2001-01-01

    A plasma-safety assessment model has been provided on the basis of the plasma physics database of the International Thermonuclear Experimental Reactor (ITER) to analyze events including plasma behavior. The model was implemented in a safety analysis code (SAFALY), which consists of a 0-D dynamic plasma model and a 1-D thermal behavior model of the in-vessel components. Unusual plasma events of ITER, e.g., overfueling, were calculated using the code and plasma burning is found to be self-bounded by operation limits or passively shut down due to impurity ingress from overheated divertor targets. Sudden transition of divertor plasma might lead to failure of the divertor target because of a sharp increase of the heat flux. However, the effects of the aggravating failure can be safely handled by the confinement boundaries. (author)

  8. Modeling theoretical uncertainties in phenomenological analyses for particle physics

    Energy Technology Data Exchange (ETDEWEB)

    Charles, Jerome [CNRS, Aix-Marseille Univ, Universite de Toulon, CPT UMR 7332, Marseille Cedex 9 (France); Descotes-Genon, Sebastien [CNRS, Univ. Paris-Sud, Universite Paris-Saclay, Laboratoire de Physique Theorique (UMR 8627), Orsay Cedex (France); Niess, Valentin [CNRS/IN2P3, UMR 6533, Laboratoire de Physique Corpusculaire, Aubiere Cedex (France); Silva, Luiz Vale [CNRS, Univ. Paris-Sud, Universite Paris-Saclay, Laboratoire de Physique Theorique (UMR 8627), Orsay Cedex (France); Univ. Paris-Sud, CNRS/IN2P3, Universite Paris-Saclay, Groupe de Physique Theorique, Institut de Physique Nucleaire, Orsay Cedex (France); J. Stefan Institute, Jamova 39, P. O. Box 3000, Ljubljana (Slovenia)

    2017-04-15

    The determination of the fundamental parameters of the Standard Model (and its extensions) is often limited by the presence of statistical and theoretical uncertainties. We present several models for the latter uncertainties (random, nuisance, external) in the frequentist framework, and we derive the corresponding p values. In the case of the nuisance approach where theoretical uncertainties are modeled as biases, we highlight the important, but arbitrary, issue of the range of variation chosen for the bias parameters. We introduce the concept of adaptive p value, which is obtained by adjusting the range of variation for the bias according to the significance considered, and which allows us to tackle metrology and exclusion tests with a single and well-defined unified tool, which exhibits interesting frequentist properties. We discuss how the determination of fundamental parameters is impacted by the model chosen for theoretical uncertainties, illustrating several issues with examples from quark flavor physics. (orig.)

  9. Analysing earthquake slip models with the spatial prediction comparison test

    KAUST Repository

    Zhang, L.

    2014-11-10

    Earthquake rupture models inferred from inversions of geophysical and/or geodetic data exhibit remarkable variability due to uncertainties in modelling assumptions, the use of different inversion algorithms, or variations in data selection and data processing. A robust statistical comparison of different rupture models obtained for a single earthquake is needed to quantify the intra-event variability, both for benchmark exercises and for real earthquakes. The same approach may be useful to characterize (dis-)similarities in events that are typically grouped into a common class of events (e.g. moderate-size crustal strike-slip earthquakes or tsunamigenic large subduction earthquakes). For this purpose, we examine the performance of the spatial prediction comparison test (SPCT), a statistical test developed to compare spatial (random) fields by means of a chosen loss function that describes an error relation between a 2-D field (‘model’) and a reference model. We implement and calibrate the SPCT approach for a suite of synthetic 2-D slip distributions, generated as spatial random fields with various characteristics, and then apply the method to results of a benchmark inversion exercise with known solution. We find the SPCT to be sensitive to different spatial correlations lengths, and different heterogeneity levels of the slip distributions. The SPCT approach proves to be a simple and effective tool for ranking the slip models with respect to a reference model.

  10. Compound dislocation models (CDMs) for volcano deformation analyses

    Science.gov (United States)

    Nikkhoo, Mehdi; Walter, Thomas R.; Lundgren, Paul R.; Prats-Iraola, Pau

    2017-02-01

    Volcanic crises are often preceded and accompanied by volcano deformation caused by magmatic and hydrothermal processes. Fast and efficient model identification and parameter estimation techniques for various sources of deformation are crucial for process understanding, volcano hazard assessment and early warning purposes. As a simple model that can be a basis for rapid inversion techniques, we present a compound dislocation model (CDM) that is composed of three mutually orthogonal rectangular dislocations (RDs). We present new RD solutions, which are free of artefact singularities and that also possess full rotational degrees of freedom. The CDM can represent both planar intrusions in the near field and volumetric sources of inflation and deflation in the far field. Therefore, this source model can be applied to shallow dikes and sills, as well as to deep planar and equidimensional sources of any geometry, including oblate, prolate and other triaxial ellipsoidal shapes. In either case the sources may possess any arbitrary orientation in space. After systematically evaluating the CDM, we apply it to the co-eruptive displacements of the 2015 Calbuco eruption observed by the Sentinel-1A satellite in both ascending and descending orbits. The results show that the deformation source is a deflating vertical lens-shaped source at an approximate depth of 8 km centred beneath Calbuco volcano. The parameters of the optimal source model clearly show that it is significantly different from an isotropic point source or a single dislocation model. The Calbuco example reflects the convenience of using the CDM for a rapid interpretation of deformation data.

  11. Model analyses for sustainable energy supply under CO2 restrictions

    International Nuclear Information System (INIS)

    Matsuhashi, Ryuji; Ishitani, Hisashi.

    1995-01-01

    This paper aims at clarifying key points for realizing sustainable energy supply under restrictions on CO 2 emissions. For this purpose, possibility of solar breeding system is investigated as a key technology for the sustainable energy supply. The authors describe their mathematical model simulating global energy supply and demand in ultra-long term. Depletion of non-renewable resources and constraints on CO 2 emissions are taken into consideration in the model. Computed results have shown that present energy system based on non-renewable resources shifts to a system based on renewable resources in the ultra-long term with appropriate incentives

  12. Vegetable parenting practices scale: Item response modeling analyses

    Science.gov (United States)

    Our objective was to evaluate the psychometric properties of a vegetable parenting practices scale using multidimensional polytomous item response modeling which enables assessing item fit to latent variables and the distributional characteristics of the items in comparison to the respondents. We al...
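
    As a hedged illustration of polytomous item response modeling, the sketch below evaluates category probabilities for a unidimensional partial credit item; the study itself used a multidimensional polytomous model, and the threshold values here are invented.

```python
import numpy as np

def partial_credit_probs(theta: float, deltas: np.ndarray) -> np.ndarray:
    """Category probabilities of a partial credit item.

    P(X = k | theta) is proportional to exp(sum_{j<=k} (theta - delta_j)),
    with an empty sum for k = 0. A simpler, unidimensional relative of the
    multidimensional polytomous model referenced above.
    """
    cumulative = np.concatenate([[0.0], np.cumsum(theta - deltas)])
    expd = np.exp(cumulative - cumulative.max())   # numerically stable
    return expd / expd.sum()

# Item with three thresholds (response categories 0..3); values illustrative.
print(partial_credit_probs(theta=0.5, deltas=np.array([-1.0, 0.0, 1.2])))
```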

  13. A Hamiltonian approach to model and analyse networks of ...

    Indian Academy of Sciences (India)

    2015-09-24

    Sep 24, 2015 ... Over the past twelve years, ideas and methods from nonlinear dynamics system theory, in particular, group theoretical methods in bifurcation theory, have been ... In this manuscript, a review of the most recent work on modelling and analysis of two seemingly different systems, an array of gyroscopes and an ...

  14. Gene Discovery and Functional Analyses in the Model Plant Arabidopsis

    DEFF Research Database (Denmark)

    Feng, Cai-ping; Mundy, J.

    2006-01-01

    The present mini-review describes newer methods and strategies, including transposon and T-DNA insertions, TILLING, Deleteagene, and RNA interference, to functionally analyze genes of interest in the model plant Arabidopsis. The relative advantages and disadvantages of the systems are also...

  15. Capacity allocation in wireless communication networks - models and analyses

    NARCIS (Netherlands)

    Litjens, Remco

    2003-01-01

    This monograph has concentrated on capacity allocation in cellular and Wireless Local Area Networks, primarily with a network operator’s perspective. In the introductory chapter, a reference model has been proposed for the extensive suite of capacity allocation mechanisms that can be applied at

  16. Theoretical modeling and experimental analyses of laminated wood composite poles

    Science.gov (United States)

    Cheng Piao; Todd F. Shupe; Vijaya Gopu; Chung Y. Hse

    2005-01-01

    Wood laminated composite poles consist of trapezoid-shaped wood strips bonded with synthetic resin. The thick-walled hollow poles had adequate strength and stiffness properties and were a promising substitute for solid wood poles. It was necessary to develop theoretical models to facilitate the manufacture and future installation and maintenance of this novel...

  17. Complex accident scenarios modelled and analysed by Stochastic Petri Nets

    International Nuclear Information System (INIS)

    Nývlt, Ondřej; Haugen, Stein; Ferkl, Lukáš

    2015-01-01

    This paper is focused on the use of Petri nets for effective modelling and simulation of complicated accident scenarios, where the order of events can vary and some events may occur anywhere in an event chain. Such cases are hardly manageable by traditional methods such as event trees – e.g. one pivotal event must often be inserted several times into one branch of the tree. Our approach is based on Stochastic Petri Nets with Predicates and Assertions and on an idea which comes from the area of Programmable Logic Controllers: an accident scenario is described as a net of interconnected blocks, which represent parts of the scenario. The scenario is first divided into parts, which are then modelled by Petri nets. Every block can be easily interconnected with other blocks by input/output variables to create complex ones. In the presented approach, every event or part of a scenario is modelled only once, independently of the number of its occurrences in the scenario. The final model is much more transparent than the corresponding event tree. The method is shown in two case studies, where the advanced one contains dynamic behavior. - Highlights: • Event & fault trees have problems with scenarios where the order of events can vary. • The paper presents a method for modelling and analysis of dynamic accident scenarios. • The presented method is based on Petri nets. • The proposed method solves the mentioned problems of traditional approaches. • The method is shown in two case studies: simple and advanced (with dynamic behavior)
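
    A minimal sketch of the stochastic-Petri-net mechanics described above: enabled transitions race with exponentially distributed firing times, and firing moves tokens between places. The toy two-transition "accident" net and its rates are assumptions for illustration, not the paper's model or tooling.

```python
import random

# Minimal stochastic Petri net simulator: enabled transitions race with
# exponential firing times; the winner consumes and produces tokens.
random.seed(0)

places = {"initiating_event": 1, "barrier_intact": 1, "escalation": 0, "mitigated": 0}
transitions = [
    {"name": "barrier_fails", "inputs": {"initiating_event": 1, "barrier_intact": 1},
     "outputs": {"escalation": 1}, "rate": 0.2},
    {"name": "mitigation", "inputs": {"initiating_event": 1},
     "outputs": {"mitigated": 1}, "rate": 1.0},
]

def enabled(t):
    return all(places[p] >= n for p, n in t["inputs"].items())

clock = 0.0
while True:
    race = [(random.expovariate(t["rate"]), t) for t in transitions if enabled(t)]
    if not race:
        break
    delay, chosen = min(race, key=lambda pair: pair[0])
    clock += delay
    for p, n in chosen["inputs"].items():   # consume tokens
        places[p] -= n
    for p, n in chosen["outputs"].items():  # produce tokens
        places[p] += n
    print(f"t={clock:.2f}: fired {chosen['name']}")

print("final marking:", places)
```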

  18. A Formal Model to Analyse the Firewall Configuration Errors

    Directory of Open Access Journals (Sweden)

    T. T. Myo

    2015-01-01

    Full Text Available The firewall is widely known as a brandmauer (security-edge gateway). To provide the required security, the firewall has to be appropriately adjusted, i.e. configured. Unfortunately, even skilled administrators may make mistakes when configuring, which result in a decreased level of network security and in undesirable packets infiltrating the network. The network can be exposed to various threats and attacks. One of the mechanisms used to ensure network security is the firewall. The firewall is a network component which, using a security policy, controls packets passing through the borders of a secured network. The security policy represents a set of rules. Packet filters work in stateless mode: they examine packets as independent objects. Rules take the following form: (condition, action). The firewall analyses the incoming traffic based on the IP addresses of the sender and recipient, the port numbers of the sender and recipient, and the protocol used. When a packet meets the rule's conditions, the action specified in the rule is carried out; it can be allow or deny. The aim of this article is to develop tools to analyse a firewall configuration with inspection of states. The input data are a file with the set of rules. It is required to present the analysis of the security policy in an informative graphic form as well as to reveal inconsistencies in the rules. The article presents a security policy visualization algorithm and a program which shows how the firewall rules act on all possible packets. To represent the result in an intelligible form, the concept of an equivalence region is introduced. Our task is for the program to display the effect of the rules on packets in a convenient graphic form as well as to reveal contradictions between the rules. One of the problems is the large number of dimensions. As noted above, the following parameters are specified in a rule: source IP address, destination IP
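
    A hedged sketch of the (condition, action) rule form described above: a first-match packet filter over a small rule list, using Python's standard ipaddress module. The rules and packets are invented, and only stateless matching is shown; the paper's state-inspection analysis and visualization are not reproduced.

```python
import ipaddress

# First-match filter over (condition, action) rules.
# Conditions are (src network, dst network, dst port, protocol); "any" wildcards.
RULES = [
    ({"src": "10.0.0.0/8", "dst": "192.168.1.10/32", "port": 80, "proto": "tcp"}, "allow"),
    ({"src": "any", "dst": "any", "port": None, "proto": "any"}, "deny"),   # default deny
]

def matches(cond, pkt):
    if cond["src"] != "any" and ipaddress.ip_address(pkt["src"]) not in ipaddress.ip_network(cond["src"]):
        return False
    if cond["dst"] != "any" and ipaddress.ip_address(pkt["dst"]) not in ipaddress.ip_network(cond["dst"]):
        return False
    if cond["port"] is not None and cond["port"] != pkt["port"]:
        return False
    return cond["proto"] in ("any", pkt["proto"])

def decide(pkt):
    for cond, action in RULES:
        if matches(cond, pkt):
            return action
    return "deny"

print(decide({"src": "10.1.2.3", "dst": "192.168.1.10", "port": 80, "proto": "tcp"}))  # allow
print(decide({"src": "8.8.8.8", "dst": "192.168.1.10", "port": 22, "proto": "tcp"}))   # deny
```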

  19. Analyses of homologous rotavirus infection in the mouse model.

    Science.gov (United States)

    Burns, J W; Krishnaney, A A; Vo, P T; Rouse, R V; Anderson, L J; Greenberg, H B

    1995-02-20

    The group A rotaviruses are significant human and veterinary pathogens in terms of morbidity, mortality, and economic loss. Despite its importance, an effective vaccine remains elusive due at least in part to our incomplete understanding of rotavirus immunity and protection. Both large and small animal model systems have been established to address these issues. One significant drawback of these models is the lack of well-characterized wild-type homologous viruses and their cell culture-adapted variants. We have characterized four strains of murine rotaviruses, EC, EHP, EL, and EW, in the infant and adult mouse model using wild-type isolates and cell culture-adapted variants of each strain. Wild-type murine rotaviruses appear to be equally infectious in infant and adult mice in terms of the intensity and duration of virus shedding following primary infection. Spread of infection to naive cagemates is seen in both age groups. Clearance of shedding following primary infection appears to correlate with the development of virus-specific intestinal IgA. Protective immunity is developed in both infant and adult mice following oral infection as demonstrated by a lack of shedding after subsequent wild-type virus challenge. Cell culture-adapted murine rotaviruses appear to be highly attenuated when administered to naive animals and do not spread efficiently to nonimmune cagemates. The availability of these wild-type and cell culture-adapted virus preparations should allow a more systematic evaluation of rotavirus infection and immunity. Furthermore, future vaccine strategies can be evaluated in the mouse model using several fully virulent homologous viruses for challenge.

  20. Analysing the Competency of Mathematical Modelling in Physics

    OpenAIRE

    Redish, Edward F.

    2016-01-01

    A primary goal of physics is to create mathematical models that allow both predictions and explanations of physical phenomena. We weave maths extensively into our physics instruction beginning in high school, and the level and complexity of the maths we draw on grows as our students progress through a physics curriculum. Despite much research on the learning of both physics and math, the problem of how to successfully teach most of our students to use maths in physics effectively remains unso...

  1. An SEM Analysis of Bearing Failure Due to Electrical Arcing (Analyse par Microscopie Electronique a Balayage de l’Endommagement d’un Roulement a Billes a la Suite de Decharges Electriques Internes).

    Science.gov (United States)

    1983-01-01

    SEM ANALYSIS OF BEARING FAILURE DUE TO ELECTRICAL ARCING (scanning electron microscope analysis of the damage to a ball bearing following internal electrical discharges) ... cause of subsequent fatigue and other mechanical damage to these components. ABSTRACT: A scanning electron microscopy analysis made it possible to ... evidence on the surface of the bearing balls and raceways by scanning electron microscopy. This suggests that discharges ...

  2. A workflow model to analyse pediatric emergency overcrowding.

    Science.gov (United States)

    Zgaya, Hayfa; Ajmi, Ines; Gammoudi, Lotfi; Hammadi, Slim; Martinot, Alain; Beuscart, Régis; Renard, Jean-Marie

    2014-01-01

    The greatest source of delay in patient flow is the waiting time from the health care request, and especially from the bed request, to exit from the Pediatric Emergency Department (PED) for hospital admission. It represents 70% of the time that these patients spend in the PED waiting rooms. Our objective in this study is to identify tension indicators and bottlenecks that contribute to overcrowding. Patient flow through the PED was mapped over a continuous two-year period from January 2011 to December 2012. Our method is to use the collected real data, based on actual visits made to the PED of the Regional University Hospital Center (CHRU) of Lille (France), in order to construct an accurate and complete representation of the PED processes. The result of this representation is a workflow model of the patient journey in the PED that represents as faithfully as possible the reality of the PED of the CHRU of Lille. This model allowed us to identify sources of delay in patient flow and aspects of the PED activity that could be improved. It must be detailed enough to produce an analysis that identifies the dysfunctions of the PED and also proposes and estimates indicators for preventing strain. Our study is part of the French National Research Agency project titled: "Hospital: optimization, simulation and avoidance of strain" (ANR HOST).

  3. Genomic, Biochemical, and Modeling Analyses of Asparagine Synthetases from Wheat

    Directory of Open Access Journals (Sweden)

    Hongwei Xu

    2018-01-01

    Full Text Available Asparagine synthetase activity in cereals has become an important issue with the discovery that free asparagine concentration determines the potential for formation of acrylamide, a probably carcinogenic processing contaminant, in baked cereal products. Asparagine synthetase catalyses the ATP-dependent transfer of the amino group of glutamine to a molecule of aspartate to generate glutamate and asparagine. Here, asparagine synthetase-encoding polymerase chain reaction (PCR products were amplified from wheat (Triticum aestivum cv. Spark cDNA. The encoded proteins were assigned the names TaASN1, TaASN2, and TaASN3 on the basis of comparisons with other wheat and cereal asparagine synthetases. Although very similar to each other they differed slightly in size, with molecular masses of 65.49, 65.06, and 66.24 kDa, respectively. Chromosomal positions and scaffold references were established for TaASN1, TaASN2, and TaASN3, and a fourth, more recently identified gene, TaASN4. TaASN1, TaASN2, and TaASN4 were all found to be single copy genes, located on chromosomes 5, 3, and 4, respectively, of each genome (A, B, and D, although variety Chinese Spring lacked a TaASN2 gene in the B genome. Two copies of TaASN3 were found on chromosome 1 of each genome, and these were given the names TaASN3.1 and TaASN3.2. The TaASN1, TaASN2, and TaASN3 PCR products were heterologously expressed in Escherichia coli (TaASN4 was not investigated in this part of the study. Western blot analysis identified two monoclonal antibodies that recognized the three proteins, but did not distinguish between them, despite being raised to epitopes SKKPRMIEVAAP and GGSNKPGVMNTV in the variable C-terminal regions of the proteins. The heterologously expressed TaASN1 and TaASN2 proteins were found to be active asparagine synthetases, producing asparagine and glutamate from glutamine and aspartate. The asparagine synthetase reaction was modeled using SNOOPY® software and information from

  4. Longitudinal Data Analyses Using Linear Mixed Models in SPSS: Concepts, Procedures and Illustrations

    OpenAIRE

    Shek, Daniel T. L.; Ma, Cecilia M. S.

    2011-01-01

    Although different methods are available for the analyses of longitudinal data, analyses based on generalized linear models (GLM) are criticized as violating the assumption of independence of observations. Alternatively, linear mixed models (LMM) are commonly used to understand changes in human behavior over time. In this paper, the basic concepts surrounding LMM (or hierarchical linear models) are outlined. Although SPSS is a statistical analyses package commonly used by researchers, documen...

  5. Using System Dynamic Model and Neural Network Model to Analyse Water Scarcity in Sudan

    Science.gov (United States)

    Li, Y.; Tang, C.; Xu, L.; Ye, S.

    2017-07-01

    Many parts of the world are facing the problem of water scarcity. Analysing water scarcity quantitatively is an important step towards solving the problem. Water scarcity in a region is gauged by the WSI (water scarcity index), which incorporates water supply and water demand. To obtain the WSI, a Neural Network Model and an SDM (System Dynamic Model) are developed to depict how environmental and social factors affect water supply and demand. The uneven distribution of water resources and water demand across a region leads to an uneven distribution of the WSI within this region. To predict the WSI for the future, a logistic model, Grey Prediction and statistical methods are applied to predict the input variables. Sudan suffers from a severe water scarcity problem, with a WSI of 1 in 2014 and unevenly distributed water resources. According to the results of the modified model, after the intervention Sudan's water situation will improve.

  6. Numerical analyses of interaction of steel-fibre reinforced concrete slab model with subsoil

    Directory of Open Access Journals (Sweden)

    Jana Labudkova

    2017-01-01

    Full Text Available Numerical analyses of a contact task were carried out with FEM. The test sample for the task was a steel-fibre reinforced concrete foundation slab model loaded during an experimental loading test. An inhomogeneous half-space was applied in the FEM analyses. The results of the FEM analyses were also compared with the values measured during the experiment.

  7. Analyses of tumor-suppressor genes in germline mouse models of cancer.

    Science.gov (United States)

    Wang, Jingqiang; Abate-Shen, Cory

    2014-08-01

    Tumor-suppressor genes are critical regulators of growth and functioning of cells, whose loss of function contributes to tumorigenesis. Accordingly, analyses of the consequences of their loss of function in genetically engineered mouse models have provided important insights into mechanisms of human cancer, as well as resources for preclinical analyses and biomarker discovery. Nowadays, most investigations of genetically engineered mouse models of tumor-suppressor function use conditional or inducible alleles, which enable analyses in specific cancer (tissue) types and overcome the consequences of embryonic lethality of germline loss of function of essential tumor-suppressor genes. However, historically, analyses of genetically engineered mouse models based on germline loss of function of tumor-suppressor genes were very important as these early studies established the principle that loss of function could be studied in mouse cancer models and also enabled analyses of these essential genes in an organismal context. Although the cancer phenotypes of these early germline models did not always recapitulate the expected phenotypes in human cancer, these models provided the essential foundation for the more sophisticated conditional and inducible models that are currently in use. Here, we describe these "first-generation" germline models of loss of function models, focusing on the important lessons learned from their analyses, which helped in the design and analyses of "next-generation" genetically engineered mouse models. © 2014 Cold Spring Harbor Laboratory Press.

  8. Provisional safety analyses for SGT stage 2 -- Models, codes and general modelling approach

    International Nuclear Information System (INIS)

    2014-12-01

    In the framework of the provisional safety analyses for Stage 2 of the Sectoral Plan for Deep Geological Repositories (SGT), deterministic modelling of radionuclide release from the barrier system along the groundwater pathway during the post-closure period of a deep geological repository is carried out. The calculated radionuclide release rates are interpreted as annual effective dose for an individual and assessed against the regulatory protection criterion 1 of 0.1 mSv per year. These steps are referred to as dose calculations. Furthermore, from the results of the dose calculations so-called characteristic dose intervals are determined, which provide input to the safety-related comparison of the geological siting regions in SGT Stage 2. Finally, the results of the dose calculations are also used to illustrate and to evaluate the post-closure performance of the barrier systems under consideration. The principal objective of this report is to describe comprehensively the technical aspects of the dose calculations. These aspects comprise:
    · the generic conceptual models of radionuclide release from the solid waste forms, of radionuclide transport through the system of engineered and geological barriers, of radionuclide transfer in the biosphere, as well as of the potential radiation exposure of the population,
    · the mathematical models for the explicitly considered release and transport processes, as well as for the radiation exposure pathways that are included,
    · the implementation of the mathematical models in numerical codes, including an overview of these codes and the most relevant verification steps,
    · the general modelling approach when using the codes, in particular the generic assumptions needed to model the near field and the geosphere, along with some numerical details,
    · a description of the work flow related to the execution of the calculations and of the software tools that are used to facilitate the modelling process, and
    · an overview of the

  9. Longitudinal data analyses using linear mixed models in SPSS: concepts, procedures and illustrations.

    Science.gov (United States)

    Shek, Daniel T L; Ma, Cecilia M S

    2011-01-05

    Although different methods are available for the analyses of longitudinal data, analyses based on generalized linear models (GLM) are criticized as violating the assumption of independence of observations. Alternatively, linear mixed models (LMM) are commonly used to understand changes in human behavior over time. In this paper, the basic concepts surrounding LMM (or hierarchical linear models) are outlined. Although SPSS is a statistical analyses package commonly used by researchers, documentation on LMM procedures in SPSS is not thorough or user friendly. With reference to this limitation, the related procedures for performing analyses based on LMM in SPSS are described. To demonstrate the application of LMM analyses in SPSS, findings based on six waves of data collected in the Project P.A.T.H.S. (Positive Adolescent Training through Holistic Social Programmes) in Hong Kong are presented.
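
    Although the paper targets SPSS, the same random-intercept growth model can be sketched with Python's statsmodels as a stand-in; the simulated data and variable names are hypothetical.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated longitudinal data: 6 waves per subject, subject-level random intercepts.
rng = np.random.default_rng(1)
n_subj, n_wave = 100, 6
subj = np.repeat(np.arange(n_subj), n_wave)
wave = np.tile(np.arange(n_wave), n_subj)
u = rng.normal(0, 1.0, n_subj)                       # random intercept per subject
y = 5 + 0.4 * wave + u[subj] + rng.normal(0, 0.5, subj.size)
data = pd.DataFrame({"id": subj, "wave": wave, "y": y})

# Linear mixed model: fixed linear growth over waves, random intercept per id
# (the same structure one would specify with the SPSS MIXED procedure).
model = smf.mixedlm("y ~ wave", data, groups=data["id"])
result = model.fit()
print(result.summary())
```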

  10. Characterizing the hydraulic properties of a paper coating layer using FIB-SEM tomography and 3D pore-scale modeling

    OpenAIRE

    Aslannejad, H.; Hassanizadeh, S.M.; Raoof, A.; de Winter, D.A.M.; Tomozeu, N.; van Genuchten, M.T.

    2017-01-01

    Paper used in the printing industry generally contains a relatively thin porous coating covering a thicker fibrous base layer. The three-dimensional pore structure of coatings has a major effect on fluid flow patterns inside the paper medium. Understanding and quantifying the flow properties of thin coating layers is hence crucial. Pore spaces within the coating have an average size of about 180 nm. We used scanning electron microscopy combined with focused ion beam (FIB-SEM) to visualize the...

  11. Fungal-Induced Deterioration of Mural Paintings: In Situ and Mock-Model Microscopy Analyses.

    Science.gov (United States)

    Unković, Nikola; Grbić, Milica Ljaljević; Stupar, Miloš; Savković, Željko; Jelikić, Aleksa; Stanojević, Dragan; Vukojević, Jelena

    2016-04-01

    Fungal deterioration of frescoes was studied in situ on a selected Serbian church, and on a laboratory model, utilizing standard and newly implemented microscopy techniques. Scanning electron microscopy (SEM) with energy-dispersive X-ray confirmed the limestone components of the plaster. The pigments used were identified as carbon black, green earth, iron oxide, ocher, and an ocher/cinnabar mixture. In situ microscopy, applied via a portable ShuttlePix P-400R microscope, proved very useful for detection of invisible micro-impairments and hidden, symptomless, microbial growth. SEM and optical microscopy established that the observed deterioration symptoms, predominantly discoloration and pulverization of painted layers, were due to bacterial filaments and fungal hyphal penetration, and to the formation of a wide range of fungal structures (i.e., melanized hyphae, chlamydospores, microcolonial clusters, Cladosporium-like conidia, and Chaetomium perithecia and ascospores). The year-round monitoring of spontaneous and induced fungal colonization of a "mock painting" under controlled laboratory conditions confirmed the decisive role of the humidity level (70.18±6.91% RH) in efficient colonization of painted surfaces, and also demonstrated the increased bioreceptivity of painted surfaces to fungal colonization when plant-based adhesives (ilinocopie, murdent) rather than organic adhesives of animal origin (bone glue, egg white) are used for pigment sizing.

  12. An improved lake model for climate simulations: Model structure, evaluation, and sensitivity analyses in CESM1

    Directory of Open Access Journals (Sweden)

    Zachary Subin

    2012-02-01

    Full Text Available Lakes can influence regional climate, yet most general circulation models have, at best, simple and largely untested representations of lakes. We developed the Lake, Ice, Snow, and Sediment Simulator (LISSS) for inclusion in the land-surface component (CLM4) of an earth system model (CESM1). The existing CLM4 lake model performed poorly at all sites tested; for temperate lakes, summer surface water temperature predictions were 10–25 °C lower than observations. CLM4-LISSS modifies the existing model by including (1) a treatment of snow; (2) freezing, melting, and ice physics; (3) a sediment thermal submodel; (4) spatially variable prescribed lake depth; (5) improved parameterizations of lake surface properties; (6) increased mixing under ice and in deep lakes; and (7) correction of previous errors. We evaluated the lake model predictions of water temperature and surface fluxes at three small temperate and boreal lakes where extensive observational data was available. We also evaluated the predicted water temperature and/or ice and snow thicknesses for ten other lakes where less comprehensive forcing observations were available. CLM4-LISSS performed very well compared to observations for shallow to medium-depth small lakes. For large, deep lakes, the under-prediction of mixing was improved by increasing the lake eddy diffusivity by a factor of 10, consistent with previously published analyses. Surface temperature and surface flux predictions were improved when the aerodynamic roughness lengths were calculated as a function of friction velocity, rather than using a constant value of 1 mm or greater. We evaluated the sensitivity of surface energy fluxes to modeled lake processes and parameters. Large changes in monthly-averaged surface fluxes (up to 30 W m−2) were found when excluding snow insulation or phase change physics and when varying the opacity, depth, albedo of melting lake ice, and mixing strength across ranges commonly found in real lakes. Typical

  13. FIB-SEM tomography in biology.

    Science.gov (United States)

    Kizilyaprak, Caroline; Bittermann, Anne Greet; Daraspe, Jean; Humbel, Bruno M

    2014-01-01

    Three-dimensional information is much easier to understand than a set of two-dimensional images. Therefore a layman is thrilled by the pseudo-3D image taken in a scanning electron microscope (SEM) while, when seeing a transmission electron micrograph, his imagination is challenged. First approaches to gain insight into the third dimension were to make serial microtome sections of a region of interest (ROI) and then build a model of the object. Serial microtome sectioning is tedious and skill-demanding work and is therefore seldom done. In the last two decades, with the increase in computer power, sophisticated display options, and the development of new instruments (an SEM with a built-in microtome as well as a focused ion beam scanning electron microscope, FIB-SEM), serial sectioning and 3D analysis have become far easier and faster. Due to the relief-like topography of the microtome-trimmed block face of resin-embedded tissue, the ROI can be searched in the secondary electron mode, and at the selected spot the ROI is prepared with the ion beam for 3D analysis. For FIB-SEM tomography, a thin slice is removed with the ion beam and the newly exposed face is imaged with the electron beam, usually by recording the backscattered electrons. The process, also called "slice and view," is repeated until the desired volume is imaged. As FIB-SEM allows 3D imaging of biological fine structure at high resolution of only small volumes, it is crucial to perform slice and view at carefully selected spots. Finding the region of interest is therefore a prerequisite for meaningful imaging. Thin-layer plastification of biofilms offers direct access to the original sample surface and allows the selection of an ROI for site-specific FIB-SEM tomography just by its pronounced topographic features.

  14. METROLOGICAL PERFORMANCE OF SEM 3D TECHNIQUES

    DEFF Research Database (Denmark)

    Marinello, Francesco; Carmignato, Simone; Savio, Enrico

    2008-01-01

    This paper addresses the metrological performance of three-dimensional measurements performed with Scanning Electron Microscopes (SEMs) using reconstruction of surface topography through stereo-photogrammetry. Reconstruction is based on the model function introduced by Piazzesi adapted for eucentric tilting. Two aspects determine the measurement performance: the first relates to ... and the instrument set-up; the second concerns the quality of scanned images and represents the major criticality in the application of SEMs for 3D characterizations. In particular, the critical role played by the tilting angle and its relative uncertainty, the magnification and the deviations from the eucentricity condition are studied, in order to define a strategy to optimise the measurements taking account of the critical factors in SEM 3D reconstruction. Investigations were performed on a novel sample, specifically developed and implemented for the tests.

  15. On the Nature of SEM Estimates of ARMA Parameters.

    Science.gov (United States)

    Hamaker, Ellen L.; Dolan, Conor V.; Molenaar, Peter C. M.

    2002-01-01

    Reexamined the nature of structural equation modeling (SEM) estimates of autoregressive moving average (ARMA) models, replicated the simulation experiments of P. Molenaar, and examined the behavior of the log-likelihood ratio test. Simulation studies indicate that estimates of ARMA parameters observed with SEM software are identical to those…

  16. Global analyses of historical masonry buildings: Equivalent frame vs. 3D solid models

    Science.gov (United States)

    Clementi, Francesco; Mezzapelle, Pardo Antonio; Cocchi, Gianmichele; Lenci, Stefano

    2017-07-01

    The paper analyses the seismic vulnerability of two different masonry buildings. It provides both an advanced 3D modelling with solid elements and an equivalent frame modelling. The global structural behaviour and the dynamic properties of the compound have been evaluated using the Finite Element Modelling (FEM) technique, where the nonlinear behaviour of masonry has been taken into account by proper constitutive assumptions. A sensitivity analysis is done to evaluate the effect of the choice of the structural models.

  17. Random regression analyses using B-splines to model growth of Australian Angus cattle

    Directory of Open Access Journals (Sweden)

    Meyer Karin

    2005-09-01

    Full Text Available Abstract Regression on the basis function of B-splines has been advocated as an alternative to orthogonal polynomials in random regression analyses. Basic theory of splines in mixed model analyses is reviewed, and estimates from analyses of weights of Australian Angus cattle from birth to 820 days of age are presented. Data comprised 84 533 records on 20 731 animals in 43 herds, with a high proportion of animals with 4 or more weights recorded. Changes in weights with age were modelled through B-splines of age at recording. A total of thirteen analyses, considering different combinations of linear, quadratic and cubic B-splines and up to six knots, were carried out. Results showed good agreement for all ages with many records, but fluctuated where data were sparse. On the whole, analyses using B-splines appeared more robust against "end-of-range" problems and yielded more consistent and accurate estimates of the first eigenfunctions than previous, polynomial analyses. A model fitting quadratic B-splines, with knots at 0, 200, 400, 600 and 821 days and a total of 91 covariance components, appeared to be a good compromise between detailedness of the model, number of parameters to be estimated, plausibility of results, and fit, measured as residual mean square error.
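
    A quadratic B-spline basis with the knot placement quoted in the abstract (0, 200, 400, 600 and 821 days) can be evaluated with SciPy as a rough sketch; this only illustrates the regression basis, not the mixed-model software used in the study.

```python
import numpy as np
from scipy.interpolate import splev

degree = 2
interior = [200.0, 400.0, 600.0]
# Clamped knot vector: boundary knots repeated degree+1 times.
knots = np.concatenate(([0.0] * (degree + 1), interior, [821.0] * (degree + 1)))
n_basis = len(knots) - degree - 1          # 6 basis functions here

ages = np.linspace(0.0, 800.0, 9)          # ages at recording (days)
# Evaluate each basis function by giving splev a unit coefficient vector;
# each column of X is one B-spline covariate of the random regression.
X = np.column_stack([
    splev(ages, (knots, np.eye(n_basis)[i], degree)) for i in range(n_basis)
])
print(np.round(X, 3))                      # rows sum to 1 (partition of unity)
```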

  18. USE OF THE SIMPLE LINEAR REGRESSION MODEL IN MACRO-ECONOMICAL ANALYSES

    Directory of Open Access Journals (Sweden)

    Constantin ANGHELACHE

    2011-10-01

    Full Text Available The article presents the fundamental aspects of linear regression as a toolbox that can be used in macroeconomic analyses. The article describes the estimation of the parameters, the statistical tests used, and homoscedasticity and heteroskedasticity. The use of econometric instruments in macroeconomics is an important factor that guarantees the quality of the models, analyses, results and the possible interpretations that can be drawn at this level.
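
    A minimal sketch of fitting and testing the simple linear regression model described above, using hypothetical macroeconomic data rather than anything from the article:

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.diagnostic import het_breuschpagan

# Hypothetical macroeconomic series: GDP growth explained by the investment rate.
rng = np.random.default_rng(0)
investment = rng.uniform(15, 35, 40)                  # percent of GDP
gdp_growth = 0.8 + 0.12 * investment + rng.normal(0, 0.5, 40)

X = sm.add_constant(investment)                       # intercept + slope
model = sm.OLS(gdp_growth, X).fit()
print(model.params)                                   # estimated intercept and slope
print(model.tvalues)                                  # t tests for the parameters

# A simple heteroskedasticity check: Breusch-Pagan test on the residuals.
lm_stat, lm_pvalue, f_stat, f_pvalue = het_breuschpagan(model.resid, X)
print(lm_pvalue)
```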

  19. Buscador semántico

    OpenAIRE

    Zamar, Esteban David

    2016-01-01

    84 p. il. This work consists of the construction of a prototype semantic search engine for the Rectoral Resolutions of the Universidad Católica de Salta. It forms part of a research project on text mining led by Dr. Alicia Pérez and Lic. Carolina Cardoso. The work in question thus attempts to support the idea that, starting from text mining, a semantic search engine can be developed with free software tools, fulfilling the characteri...

  20. Secondary emission monitor (SEM) grids.

    CERN Multimedia

    Patrice Loïez

    2002-01-01

    A great variety of Secondary Emission Monitors (SEM) are used all over the PS Complex. At other accelerators they are also called wire-grids, harps, etc. They are used to measure beam density profiles (from which beam size and emittance can be derived) in single-pass locations (not on circulating beams). Top left: two individual wire-planes. Top right: a combination of a horizontal and a vertical wire plane. Bottom left: a ribbon grid in its frame, with connecting wires. Bottom right: a SEM-grid with its insertion/retraction mechanism.

  1. Taxing CO2 and subsidising biomass: Analysed in a macroeconomic and sectoral model

    DEFF Research Database (Denmark)

    Klinge Jacobsen, Henrik

    2000-01-01

    This paper analyses the combination of taxes and subsidies as an instrument to enable a reduction in CO2 emission. The objective of the study is to compare recycling of a CO2 tax revenue as a subsidy for biomass use as opposed to traditional recycling such as reduced income or corporate taxation. A model of Denmark's energy supply sector is used to analyse the effect of a CO2 tax combined with using the tax revenue for biomass subsidies. The energy supply model is linked to a macroeconomic model such that the macroeconomic consequences of tax policies can be analysed along with the consequences for specific sectors such as agriculture. Electricity and heat are produced at heat and power plants utilising fuels which minimise total fuel cost, while the authorities regulate capacity expansion technologies. The effect of fuel taxes and subsidies on fuels is very sensitive to the fuel substitution

  2. Experimental and Computational Modal Analyses for Launch Vehicle Models considering Liquid Propellant and Flange Joints

    Directory of Open Access Journals (Sweden)

    Chang-Hoon Sim

    2018-01-01

    Full Text Available In this research, modal tests and analyses are performed for a simplified and scaled first-stage model of a space launch vehicle using liquid propellant. This study aims to establish finite element modeling techniques for computational modal analyses by considering the liquid propellant and flange joints of launch vehicles. The modal tests measure the natural frequencies and mode shapes in the first and second lateral bending modes. As the liquid filling ratio increases, the measured frequencies decrease. In addition, as the number of flange joints increases, the measured natural frequencies increase. Computational modal analyses using the finite element method are conducted. The liquid is modeled by the virtual mass method, and the flange joints are modeled using one-dimensional spring elements along with the node-to-node connection. Comparison of the modal test results and predicted natural frequencies shows good or moderate agreement. The correlation between the modal tests and analyses establishes finite element modeling techniques for modeling the liquid propellant and flange joints of space launch vehicles.

  3. Curvelet based offline analysis of SEM images.

    Directory of Open Access Journals (Sweden)

    Syed Hamad Shirazi

    Full Text Available Manual offline analysis of a scanning electron microscopy (SEM) image is a time-consuming process and requires continuous human intervention and effort. This paper presents an image-processing-based method for automated offline analysis of SEM images. To this end, our strategy relies on a two-stage process, viz. texture analysis and quantification. The method involves a preprocessing step aimed at noise removal in order to avoid false edges. For texture analysis, the proposed method employs a state-of-the-art Curvelet transform followed by segmentation through a combination of entropy filtering, thresholding and mathematical morphology (MM). The quantification is carried out by the application of a box-counting algorithm for fractal dimension (FD) calculations, with the ultimate goal of measuring parameters such as surface area and perimeter. The perimeter is estimated indirectly by counting the boundary boxes of the filled shapes. The proposed method, when applied to a representative set of SEM images, not only showed better results in image segmentation but also exhibited good accuracy in the calculation of surface area and perimeter. The proposed method outperforms the well-known watershed segmentation algorithm.
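
    The box-counting step can be sketched generically for a binary (segmented) image as follows; this is a textbook implementation of the algorithm named in the abstract, not the authors' code.

```python
import numpy as np

def box_counting_dimension(binary_image, box_sizes=(2, 4, 8, 16, 32)):
    """Estimate the fractal dimension of a binary image: for each box size s,
    count the s-by-s boxes containing at least one foreground pixel, then fit
    log N(s) against log(1/s); the slope is the dimension estimate."""
    counts = []
    for s in box_sizes:
        h, w = binary_image.shape
        trimmed = binary_image[: h - h % s, : w - w % s]
        # Reshape into a grid of s-by-s boxes and test each box for content.
        boxes = trimmed.reshape(trimmed.shape[0] // s, s, trimmed.shape[1] // s, s)
        counts.append(np.count_nonzero(boxes.any(axis=(1, 3))))
    slope, _ = np.polyfit(np.log(1.0 / np.array(box_sizes)), np.log(counts), 1)
    return slope

# Toy segmented "SEM" image: a filled disc (expected dimension close to 2).
y, x = np.ogrid[:256, :256]
disc = ((x - 128) ** 2 + (y - 128) ** 2) < 80 ** 2
print(round(box_counting_dimension(disc), 2))
```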

  4. Present status of theories and data analyses of mathematical models for carcinogenesis

    International Nuclear Information System (INIS)

    Kai, Michiaki; Kawaguchi, Isao

    2007-01-01

    This review covers the basic mathematical models (hazard functions), the present trend of model studies, and models for radiation carcinogenesis. Hazard functions of carcinogenesis are described for the multi-stage model and for the two-event model related to cell dynamics. At present, the age distribution of cancer mortality is analyzed, the relationship between mutation and carcinogenesis is discussed, and models for colorectal carcinogenesis are presented. As for radiation carcinogenesis, the Armitage-Doll model and the generalized MVK (Moolgavkar, Venzon, Knudson, 1971-1990) two-stage clonal expansion model have been applied to the analysis of carcinogenesis in A-bomb survivors, workers in uranium mines (Rn exposure), smoking doctors in the UK, and other cases, whose characteristics are discussed. In the analyses of A-bomb survivors, the models above are applied to solid tumors and leukemia to see the effect, if any, of stage, age at exposure, time progression, etc. For the miners and smokers, the stages of initiation, promotion and progression in carcinogenesis are discussed on the basis of the analyses. Other analyses concern workers at a Canadian nuclear power plant and patients who underwent radiation therapy. Model analysis can help to understand the carcinogenic process quantitatively rather than merely describe it. (R.T.)
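
    For orientation, the multi-stage (Armitage-Doll) hazard mentioned above takes, in its standard textbook approximation and not as quoted from the reviewed paper, the form

```latex
% Approximate k-stage Armitage-Doll hazard, with stage transition rates
% \lambda_1, ..., \lambda_k assumed small:
h(t) \;\approx\; \frac{\lambda_1 \lambda_2 \cdots \lambda_k}{(k-1)!}\; t^{\,k-1}
```

    where k is the number of stages; this power-law dependence on age is what is fitted to cancer mortality age distributions.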

  5. SEM: A Cultural Change Agent

    Science.gov (United States)

    Barnes, Bradley; Bourke, Brian

    2015-01-01

    The authors advance the concept that institutional culture is a purposeful framework by which to view SEM's utility, particularly as a cultural change agent. Through the connection of seemingly independent functions of performance and behavior, implications emerge that deepen the understanding of the influence of culture on performance outcomes…

  6. Structuring Consumer Preferences with the SEM Method

    OpenAIRE

    Rosa, Franco

    2002-01-01

    Structuring preferences has been developed with econometric models using flexible parametric functional forms and by exploring perceptions about expressed and latent needs using different multivariate approaches. The purpose of this research is to explore the demand for a new drink using the means-end chain (MEC) theory and a multivariate SEM procedure. The first part is dedicated to the description of specialty foods and their capacity to create new niche markets. The MEC theory is introduced to ex...

  7. Structural Equations and Causal Explanations: Some Challenges for Causal SEM

    Science.gov (United States)

    Markus, Keith A.

    2010-01-01

    One common application of structural equation modeling (SEM) involves expressing and empirically investigating causal explanations. Nonetheless, several aspects of causal explanation that have an impact on behavioral science methodology remain poorly understood. It remains unclear whether applications of SEM should attempt to provide complete…

  8. Comparison of linear measurements and analyses taken from plaster models and three-dimensional images.

    Science.gov (United States)

    Porto, Betina Grehs; Porto, Thiago Soares; Silva, Monica Barros; Grehs, Renésio Armindo; Pinto, Ary dos Santos; Bhandi, Shilpa H; Tonetto, Mateus Rodrigues; Bandéca, Matheus Coelho; dos Santos-Pinto, Lourdes Aparecida Martins

    2014-11-01

    Digital models are an alternative for carrying out analyses and devising treatment plans in orthodontics. The objective of this study was to evaluate the accuracy and the reproducibility of measurements of tooth sizes, interdental distances and analyses of occlusion using plaster models and their digital images. Thirty pairs of plaster models were chosen at random, and the digital images of each plaster model were obtained using a laser scanner (3Shape R-700, 3Shape A/S). With the plaster models, the measurements were taken using a caliper (Mitutoyo Digimatic(®), Mitutoyo (UK) Ltd) and the MicroScribe (MS) 3DX (Immersion, San Jose, Calif). For the digital images, the measurement tools used were those from the O3d software (Widialabs, Brazil). The data obtained were compared statistically using the Dahlberg formula, analysis of variance and the Tukey test (p < 0.05). The majority of the measurements, obtained using the caliper and O3d were identical, and both were significantly different from those obtained using the MS. Intra-examiner agreement was lowest when using the MS. The results demonstrated that the accuracy and reproducibility of the tooth measurements and analyses from the plaster models using the caliper and from the digital models using O3d software were identical.
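
    For reference, the Dahlberg error of the method used in such agreement studies is simply sqrt(sum(d_i^2)/(2n)) over paired measurements; a minimal sketch with made-up values:

```python
import numpy as np

def dahlberg_error(first, second):
    """Dahlberg's formula for the error of the method between two sets of
    paired measurements: sqrt(sum(d_i^2) / (2 n))."""
    d = np.asarray(first, float) - np.asarray(second, float)
    return np.sqrt(np.sum(d ** 2) / (2 * d.size))

# Hypothetical tooth-width measurements (mm): caliper vs. digital model.
caliper = [8.12, 7.95, 6.40, 9.01, 5.87]
digital = [8.10, 8.00, 6.35, 9.05, 5.90]
print(round(dahlberg_error(caliper, digital), 3))
```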

  9. Comparison of plasma input and reference tissue models for analysing [(11)C]flumazenil studies

    NARCIS (Netherlands)

    Klumpers, Ursula M. H.; Veltman, Dick J.; Boellaard, Ronald; Comans, Emile F.; Zuketto, Cassandra; Yaqub, Maqsood; Mourik, Jurgen E. M.; Lubberink, Mark; Hoogendijk, Witte J. G.; Lammertsma, Adriaan A.

    2008-01-01

    A single-tissue compartment model with plasma input is the established method for analysing [(11)C]flumazenil ([(11)C]FMZ) studies. However, arterial cannulation and measurement of metabolites are time-consuming. Therefore, a reference tissue approach is appealing, but this approach has not been

  10. Kinetic analyses and mathematical modeling of primary photochemical and photoelectrochemical processes in plant photosystems

    NARCIS (Netherlands)

    Vredenberg, W.J.

    2011-01-01

    In this paper the model and simulation of primary photochemical and photo-electrochemical reactions in dark-adapted intact plant leaves is presented. A descriptive algorithm has been derived from analyses of variable chlorophyll a fluorescence and P700 oxidation kinetics upon excitation with

  11. Analysing and controlling the tax evasion dynamics via majority-vote model

    International Nuclear Information System (INIS)

    Lima, F W S

    2010-01-01

    Within the context of agent-based Monte-Carlo simulations, we study the well-known majority-vote model (MVM) with noise applied to tax evasion on simple square lattices, Voronoi-Delaunay random lattices, Barabasi-Albert networks, and Erdoes-Renyi random graphs. In order to analyse and control the fluctuations of tax evasion in the economics model proposed by Zaklan, the MVM is applied in the neighborhood of the critical noise q_c to evolve the Zaklan model. The Zaklan model had recently been studied using the equilibrium Ising model. Here we show that the Zaklan model is robust, because it can be studied using the equilibrium dynamics of the Ising model as well as the nonequilibrium MVM, on the various topologies cited above, giving the same behavior regardless of the dynamics or topology used.
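
    The MVM dynamics on a periodic square lattice can be sketched as follows; this is a generic illustration of the update rule, not the Zaklan tax-evasion coupling or the exact simulations of the paper.

```python
import numpy as np

def mvm_sweep(spins, q, rng):
    """One Monte Carlo sweep of the majority-vote model with noise q on a
    periodic square lattice: a randomly chosen site adopts the sign of its
    neighbourhood majority with probability 1-q, the opposite with probability q."""
    L = spins.shape[0]
    for _ in range(L * L):
        i, j = rng.integers(0, L, size=2)
        neigh = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
                 + spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
        majority = np.sign(neigh)
        if majority == 0:                      # tie: choose at random
            majority = rng.choice([-1, 1])
        spins[i, j] = majority if rng.random() > q else -majority
    return spins

rng = np.random.default_rng(0)
L, q = 32, 0.05                                # q below q_c: ordered phase expected
spins = rng.choice([-1, 1], size=(L, L))
for _ in range(200):
    spins = mvm_sweep(spins, q, rng)
print(abs(spins.mean()))                       # order parameter |m|
```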

  12. Analysing and controlling the tax evasion dynamics via majority-vote model

    Energy Technology Data Exchange (ETDEWEB)

    Lima, F W S, E-mail: fwslima@gmail.co, E-mail: wel@ufpi.edu.b [Departamento de Fisica, Universidade Federal do PiauI, 64049-550, Teresina - PI (Brazil)

    2010-09-01

    Within the context of agent-based Monte-Carlo simulations, we study the well-known majority-vote model (MVM) with noise applied to tax evasion on simple square lattices, Voronoi-Delaunay random lattices, Barabasi-Albert networks, and Erdoes-Renyi random graphs. In order to analyse and control the fluctuations of tax evasion in the economics model proposed by Zaklan, the MVM is applied in the neighborhood of the critical noise q_c to evolve the Zaklan model. The Zaklan model had recently been studied using the equilibrium Ising model. Here we show that the Zaklan model is robust, because it can be studied using the equilibrium dynamics of the Ising model as well as the nonequilibrium MVM, on the various topologies cited above, giving the same behavior regardless of the dynamics or topology used.

  13. SEM Analysis of Tooth Enamel

    OpenAIRE

    Azinović, Zoran; Keros, Jadranka; Buković, Dino; Azinović, Ana

    2003-01-01

    The SEM analysis comprises research on the tooth enamel surfaces of two populations. The first group of samples is tooth enamel of prehistoric ancestors from Vučedol, and the second group of samples is enamel of modern Croatian citizens. Even with the small number of human tooth samples from the Copper Age site of Vučedol (3,000 BC) and from today's Croatian population, we can draw conclusions about the chewing biometry of our prehistoric ancestors and of today's modern Croatian people, comparing interspecifically the mor...

  14. Nurses' intention to leave: critically analyse the theory of reasoned action and organizational commitment model.

    Science.gov (United States)

    Liou, Shwu-Ru

    2009-01-01

    To systematically analyse the Organizational Commitment model and the Theory of Reasoned Action and determine concepts that can better explain nurses' intention to leave their job. The Organizational Commitment model and Theory of Reasoned Action have been proposed and applied to understand intention to leave and turnover behaviour, which are major contributors to the nursing shortage. However, the appropriateness of applying these two models in nursing had not been analysed. Three main criteria of a useful model were used for the analysis: consistency in the use of concepts, testability and predictability. Both theories use concepts consistently. Concepts in the Theory of Reasoned Action are defined broadly whereas they are operationally defined in the Organizational Commitment model. The predictability of the Theory of Reasoned Action is questionable, whereas the Organizational Commitment model can be applied to predict intention to leave. A model was proposed based on this analysis. Organizational commitment, intention to leave, work experiences, job characteristics and personal characteristics can be concepts for predicting nurses' intention to leave. Nursing managers may consider nurses' personal characteristics and experiences to increase their organizational commitment and enhance their intention to stay. Empirical studies are needed to test and cross-validate the re-synthesized model for nurses' intention to leave their job.

  15. A model finite-element to analyse the mechanical behavior of a PWR fuel rod

    International Nuclear Information System (INIS)

    Galeao, A.C.N.R.; Tanajura, C.A.S.

    1988-01-01

    A model to analyse the mechanical behavior of a PWR fuel rod is presented. We focus on the phenomena of pellet-pellet and pellet-cladding contact by taking advantage of an elastic model which includes the effects of thermal gradients, cladding internal and external pressures, swelling and initial relocation. The contact problem gives rise to a variational formulation which employs Lagrangian multipliers. An iterative scheme is constructed and the finite element method is applied to obtain the numerical solution. Some results and comments are presented to examine the performance of the model. (author) [pt

  16. Analysing, Interpreting, and Testing the Invariance of the Actor-Partner Interdependence Model

    Directory of Open Access Journals (Sweden)

    Gareau, Alexandre

    2016-09-01

    Full Text Available Although in recent years researchers have begun to utilize dyadic data analyses such as the actor-partner interdependence model (APIM, certain limitations to the applicability of these models still exist. Given the complexity of APIMs, most researchers will often use observed scores to estimate the model's parameters, which can significantly limit and underestimate statistical results. The aim of this article is to highlight the importance of conducting a confirmatory factor analysis (CFA of equivalent constructs between dyad members (i.e. measurement equivalence/invariance; ME/I. Different steps for merging CFA and APIM procedures will be detailed in order to shed light on new and integrative methods.

  17. Joint analyses model for total cholesterol and triglyceride in human serum with near-infrared spectroscopy

    Science.gov (United States)

    Yao, Lijun; Lyu, Ning; Chen, Jiemei; Pan, Tao; Yu, Jing

    2016-04-01

    The development of a small, dedicated near-infrared (NIR) spectrometer has promising potential applications, such as the joint analysis of total cholesterol (TC) and triglyceride (TG) in human serum for preventing and treating hyperlipidemia in a large population. Appropriate wavelength selection is a key technology for developing such a spectrometer. For this reason, a novel wavelength selection method, named equidistant combination partial least squares (EC-PLS), was applied to the wavelength selection for the NIR analyses of TC and TG in human serum. A rigorous process based on various divisions of the calibration and prediction sets was performed to achieve modeling optimization with stability. By applying EC-PLS, a model set was developed, which consists of various models that are equivalent to the optimal model. The joint analysis model for the two indicators was further selected with only 50 wavelengths. Random validation samples excluded from the modeling process were used to validate the selected model. The root-mean-square errors, correlation coefficients and ratios of performance to deviation for the prediction were 0.197 mmol L-1, 0.985 and 5.6 for TC, and 0.101 mmol L-1, 0.992 and 8.0 for TG, respectively. The sensitivity and specificity for hyperlipidemia were 96.2% and 98.0%. These findings indicate high prediction accuracy and low model complexity. The proposed wavelength selection provides valuable references for designing a small, dedicated spectrometer for hyperlipidemia. The methodological framework and optimization algorithm are universal, such that they can be applied to other fields.
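
    The quoted figures of merit (RMSEP, correlation coefficient and RPD) are straightforward to compute once predictions are available; the sketch below uses hypothetical values and the common definition RPD = SD of the reference values / RMSEP, which may differ in detail from the authors' convention.

```python
import numpy as np

def prediction_metrics(measured, predicted):
    """Root-mean-square error of prediction, correlation coefficient and
    ratio of performance to deviation (RPD = SD of reference / RMSEP)."""
    measured, predicted = np.asarray(measured), np.asarray(predicted)
    rmsep = np.sqrt(np.mean((measured - predicted) ** 2))
    r = np.corrcoef(measured, predicted)[0, 1]
    rpd = np.std(measured, ddof=1) / rmsep
    return rmsep, r, rpd

# Hypothetical TC reference values and NIR predictions (mmol/L).
tc_ref = [4.2, 5.1, 6.3, 3.9, 5.8, 7.0, 4.7, 5.5]
tc_nir = [4.3, 5.0, 6.1, 4.0, 5.9, 6.8, 4.8, 5.6]
print(prediction_metrics(tc_ref, tc_nir))
```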

  18. Distinguishing Mediational Models and Analyses in Clinical Psychology: Atemporal Associations Do Not Imply Causation.

    Science.gov (United States)

    Winer, E Samuel; Cervone, Daniel; Bryant, Jessica; McKinney, Cliff; Liu, Richard T; Nadorff, Michael R

    2016-09-01

    A popular way to attempt to discern causality in clinical psychology is through mediation analysis. However, mediation analysis is sometimes applied to research questions in clinical psychology when inferring causality is impossible. This practice may soon increase with new, readily available, and easy-to-use statistical advances. Thus, we here provide a heuristic to remind clinical psychological scientists of the assumptions of mediation analyses. We describe recent statistical advances and unpack assumptions of causality in mediation, underscoring the importance of time in understanding mediational hypotheses and analyses in clinical psychology. Example analyses demonstrate that statistical mediation can occur despite theoretical mediation being improbable. We propose a delineation of mediational effects derived from cross-sectional designs into the terms temporal and atemporal associations to emphasize time in conceptualizing process models in clinical psychology. The general implications for mediational hypotheses and the temporal frameworks from within which they may be drawn are discussed. © 2016 Wiley Periodicals, Inc.

  19. Groundwater flow analyses in preliminary site investigations. Modelling strategy and computer codes

    International Nuclear Information System (INIS)

    Taivassalo, V.; Koskinen, L.; Meling, K.

    1994-02-01

    The analyses of groundwater flow comprised a part of the preliminary site investigations which were carried out by Teollisuuden Voima Oy (TVO) for five areas in Finland during 1987 -1992. The main objective of the flow analyses was to characterize groundwater flow at the sites. The flow simulations were also used to identify and study uncertainties and inadequacies which are inherent in the results of earlier modelling phases. The flow analyses were performed for flow conditions similar to the present conditions. The modelling approach was based on the concept of an equivalent continuum. Each fracture zone and the rock matrix among the zones was, however, considered separately as a hydrogeologic unit. The numerical calculations were carried out with a computer code package, FEFLOW. The code is based upon the finite element method. With the code two- and one-dimensional elements can also be used by way of embedding them in a three-dimensional element mesh. A set of new algorithms was developed and employed to create element meshes for FEFLOW. The most useful program in the preliminary site investigations was PAAWI, which adds two-dimensional elements for fracture zones to an existing three-dimensional element mesh. The new algorithms reduced significantly the time required to create spatial discretization for complex geometries. Three element meshes were created for each site. The boundaries of the regional models coincide with those of the flow models. (55 refs., 40 figs., 1 tab.)

  20. To transform or not to transform: using generalized linear mixed models to analyse reaction time data

    Science.gov (United States)

    Lo, Steson; Andrews, Sally

    2015-01-01

    Linear mixed-effect models (LMMs) are being increasingly widely used in psychology to analyse multi-level research designs. This feature allows LMMs to address some of the problems identified by Speelman and McGann (2013) about the use of mean data, because they do not average across individual responses. However, recent guidelines for using LMM to analyse skewed reaction time (RT) data collected in many cognitive psychological studies recommend the application of non-linear transformations to satisfy assumptions of normality. Uncritical adoption of this recommendation has important theoretical implications which can yield misleading conclusions. For example, Balota et al. (2013) showed that analyses of raw RT produced additive effects of word frequency and stimulus quality on word identification, which conflicted with the interactive effects observed in analyses of transformed RT. Generalized linear mixed-effect models (GLMM) provide a solution to this problem by satisfying normality assumptions without the need for transformation. This allows differences between individuals to be properly assessed, using the metric most appropriate to the researcher's theoretical context. We outline the major theoretical decisions involved in specifying a GLMM, and illustrate them by reanalysing Balota et al.'s datasets. We then consider the broader benefits of using GLMM to investigate individual differences. PMID:26300841

  1. Evaluation of Uncertainties in hydrogeological modeling and groundwater flow analyses. Model calibration

    International Nuclear Information System (INIS)

    Ijiri, Yuji; Ono, Makoto; Sugihara, Yutaka; Shimo, Michito; Yamamoto, Hajime; Fumimura, Kenichi

    2003-03-01

    This study involves the evaluation of uncertainty in hydrogeological modeling and groundwater flow analysis. Three-dimensional groundwater flow at the Shobasama site in Tono was analyzed using two continuum models and one discontinuous model. The model domain covered an area of four kilometers in the east-west direction and six kilometers in the north-south direction. Moreover, in order to evaluate how the uncertainties in the hydrogeological structure models and in the groundwater simulation results decreased as the investigation progressed, the models were updated and calibrated for several hydrogeological structure modeling techniques and groundwater flow analysis techniques, based on newly acquired information and knowledge. The findings are as follows. As a result of setting parameters and structures when updating the models in line with the previous year's conditions, there is no large difference in handling between the modeling methods. Model calibration is performed by matching the numerical simulation to observations of the pressure response caused by opening and closing a packer in the MIU-2 borehole. Each analysis technique reduces the residual sum of squares between observations and simulation results by adjusting hydrogeological parameters. However, each model adjusts different parameters, such as hydraulic conductivity, effective porosity, specific storage, and anisotropy. When calibrating models, it is sometimes impossible to explain the phenomena only by adjusting parameters; in such cases, further investigation may be required to clarify the details of the hydrogeological structure. Comparing the research from the beginning up to this year, the following conclusions are obtained about the investigation. (1) Transient hydraulic data are an effective means of reducing the uncertainty of the hydrogeological structure. (2) Effective porosity for calculating pore water velocity of
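
    The calibration loop described above (adjusting hydraulic parameters to reduce the residual sum of squares between simulated and observed pressure responses) can be sketched generically with SciPy; the exponential recovery model and all parameter values below are placeholders, not the site model.

```python
import numpy as np
from scipy.optimize import least_squares

def simulated_pressure(params, t):
    """Placeholder forward model: pressure recovery after packer closure,
    controlled by two 'hydraulic' parameters (amplitude and time constant)."""
    amplitude, tau = params
    return amplitude * (1.0 - np.exp(-t / tau))

# Synthetic 'observed' response with noise, generated from known parameters.
t_obs = np.linspace(0.1, 10.0, 30)
rng = np.random.default_rng(42)
p_obs = simulated_pressure([2.5, 3.0], t_obs) + rng.normal(0, 0.05, t_obs.size)

# Calibration: minimise the residual sum of squares over the parameters.
result = least_squares(lambda p: simulated_pressure(p, t_obs) - p_obs,
                       x0=[1.0, 1.0], bounds=([0.0, 0.1], [10.0, 50.0]))
print(result.x, 0.5 * np.sum(result.fun ** 2))   # fitted parameters, final RSS/2
```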

  2. Performance Assessment Modeling and Sensitivity Analyses of Generic Disposal System Concepts.

    Energy Technology Data Exchange (ETDEWEB)

    Sevougian, S. David; Freeze, Geoffrey A.; Gardner, William Payton; Hammond, Glenn Edward; Mariner, Paul

    2014-09-01

    directly, rather than through simplified abstractions. It also allows for complex representations of the source term, e.g., the explicit representation of many individual waste packages (i.e., meter-scale detail of an entire waste emplacement drift). This report fulfills the Generic Disposal System Analysis Work Package Level 3 Milestone - Performance Assessment Modeling and Sensitivity Analyses of Generic Disposal System Concepts (M3FT-14SN0808032).

  3. Social Context, Self-Perceptions and Student Engagement: A SEM Investigation of the Self-System Model of Motivational Development (SSMMD)

    Science.gov (United States)

    Dupont, Serge; Galand, Benoit; Nils, Frédéric; Hospel, Virginie

    2014-01-01

    Introduction: The present study aimed to test a theoretically-based model (the self-system model of motivational development) including at the same time the extent to which the social context provides structure, warmth and autonomy support, the students' perceived autonomy, relatedness and competence, and behavioral, cognitive and emotional…

  4. International Conference on SEMS 2012

    CERN Document Server

    Liu, Chuang; Scientific explanation and methodology of science; SEMS 2012

    2014-01-01

    This volume contains the contributed papers of invitees to SEMS 2012 who have also given talks at the conference. The invitees are experts in philosophy of science and technology from Asia (besides China), Australia, Europe, Latin America, North America, as well as from within China. The papers in this volume represent the latest work of each researcher in his or her expertise; and as a result, they give a good representation of the cutting-edge researches in diverse areas in different parts of the world.

  5. Generic uncertainty model for DETRA for environmental consequence analyses. Application and sample outputs

    Energy Technology Data Exchange (ETDEWEB)

    Suolanen, V.; Ilvonen, M. [VTT Energy, Espoo (Finland). Nuclear Energy

    1998-10-01

    The computer model DETRA applies a dynamic compartment modelling approach. The compartment structure of each application considered can be tailored individually. This flexible modelling method makes it possible to consider the transfer of radionuclides in various cases: the aquatic environment and related food chains, the terrestrial environment, food chains in general and foodstuffs, body burden analyses of humans, etc. In a former study on this subject, modernization of the user interface of the DETRA code was carried out. The new interface works in the Windows environment and the usability of the code has been improved. The objective of this study has been to further develop and diversify the user interface so that probabilistic uncertainty analyses can also be performed with DETRA. The most common probability distributions are available: uniform, truncated Gaussian and triangular. The corresponding logarithmic distributions are also available. All input data related to a considered case can be varied, although this option is seldom needed. The calculated output values can be selected as monitored values at certain simulation time points defined by the user. The results of a sensitivity run are immediately available after simulation as graphical presentations. These outcomes are distributions generated for the varied parameters, density functions of the monitored parameters and complementary cumulative density functions (CCDF). An application considered in connection with this work was the estimation of the contamination of milk caused by radioactive deposition of Cs (10 kBq(Cs-137)/m²). The multi-sequence calculation model applied consisted of a pasture modelling part and a dormant season modelling part. These two sequences were linked periodically, simulating the realistic practice of care-taking of domestic animals in Finland. The most important parameters were varied in this exercise. The diversification of the user interface of the DETRA code performed here seems to provide an
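
    The probabilistic machinery described (sampling uniform, triangular and truncated Gaussian parameters and reporting complementary cumulative distributions of a monitored output) can be sketched as follows; the transfer chain and every numerical value are purely illustrative, not DETRA's models.

```python
import numpy as np
from scipy.stats import truncnorm

rng = np.random.default_rng(7)
n = 10_000

# The three distribution types mentioned in the abstract, with illustrative values
# for a simple deposition-to-milk transfer chain.
interception = rng.uniform(0.2, 0.6, n)                      # uniform
feed_to_milk = rng.triangular(0.002, 0.005, 0.010, n)        # triangular, d/L
intake = truncnorm.rvs(-2, 2, loc=55.0, scale=10.0,          # truncated Gaussian
                       size=n, random_state=rng)             # kg fresh grass/day

deposition = 10.0e3                                          # Bq/m^2 (10 kBq/m^2)
grass_yield = 1.5                                            # kg grass per m^2 (toy)
milk_bq_per_l = deposition * interception / grass_yield * intake * feed_to_milk

# Complementary cumulative distribution function (CCDF) of the monitored output.
sorted_vals = np.sort(milk_bq_per_l)
ccdf = 1.0 - np.arange(1, n + 1) / n
print(sorted_vals[ccdf <= 0.05][0])     # activity exceeded with 5 % probability
```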

  6. BWR Mark III containment analyses using a GOTHIC 8.0 3D model

    International Nuclear Information System (INIS)

    Jimenez, Gonzalo; Serrano, César; Lopez-Alonso, Emma; Molina, M del Carmen; Calvo, Daniel; García, Javier; Queral, César; Zuriaga, J. Vicente; González, Montserrat

    2015-01-01

    Highlights: • The development of a 3D GOTHIC code model of a BWR Mark III containment is described. • Suppression pool modelling is based on the POOLEX STB-20 and STB-16 experimental tests. • LOCA and SBO transients are simulated to verify the behaviour of the 3D GOTHIC model. • A comparison between the 3D GOTHIC model and the MAAP 4.07 model is conducted. • Pre-severe-accident conditions are accurately reproduced with the 3D GOTHIC model. - Abstract: The purpose of this study is to establish a detailed three-dimensional model of the Cofrentes NPP BWR/6 Mark III containment building using the containment code GOTHIC 8.0. This paper presents the model construction, the phenomenology tests conducted and the transients selected for the model evaluation. In order to study the proper settings for the model in the suppression pool, two experiments conducted with the experimental installation POOLEX have been simulated, allowing proper behaviour of the model to be obtained under different suppression pool phenomena. In the transient analyses, a Loss of Coolant Accident (LOCA) and a Station Blackout (SBO) transient have been simulated. The main results of the simulations of those transients were qualitatively compared with the results obtained from simulations with the MAAP 4.07 Cofrentes NPP model, used by the plant for simulating severe accidents. From this comparison, a verification of the model in terms of pressurization, asymmetric discharges and high-pressure release was obtained. The complete model has proved to adequately simulate the thermal-hydraulic phenomena which occur in the containment during accident sequences

  7. Computational model for supporting SHM systems design: Damage identification via numerical analyses

    Science.gov (United States)

    Sartorato, Murilo; de Medeiros, Ricardo; Vandepitte, Dirk; Tita, Volnei

    2017-02-01

    This work presents a computational model to simulate thin structures monitored by piezoelectric sensors in order to support the design of SHM systems that use vibration-based methods. A new shell finite element model was proposed and implemented via a User ELement subroutine (UEL) in the commercial package ABAQUS™. This model was based on a modified First Order Shear Theory (FOST) for piezoelectric composite laminates. After that, damaged cantilever beams with two piezoelectric sensors in different positions were investigated using experimental analyses and the proposed computational model. A maximum difference in the magnitude of the FRFs between numerical and experimental analyses of 7.45% was found near the resonance regions. For damage identification, different levels of damage severity were evaluated by seven damage metrics, including one proposed by the present authors. Numerical and experimental damage metric values were compared, showing good correlation in terms of tendency. Finally, based on comparisons of numerical and experimental results, a discussion is presented of the potential and limitations of the proposed computational model for supporting the design of SHM systems.
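
    For orientation, the sketch below computes one simple FRF-based indicator (a FRAC-type correlation between a pristine and a damaged frequency response). It is a generic example on synthetic single-mode FRFs, not the specific metric proposed by the authors.

```python
import numpy as np

def frac(h_ref: np.ndarray, h_dam: np.ndarray) -> float:
    """Frequency Response Assurance Criterion between two complex FRFs (1.0 = identical)."""
    num = np.abs(np.vdot(h_ref, h_dam)) ** 2
    den = np.vdot(h_ref, h_ref).real * np.vdot(h_dam, h_dam).real
    return float(num / den)

# Synthetic single-mode FRFs: damage is mimicked by a small resonance shift
f = np.linspace(10, 200, 1000)                       # Hz
def sdof_frf(fn, zeta=0.02):
    r = f / fn
    return 1.0 / (1 - r**2 + 2j * zeta * r)

h_healthy = sdof_frf(fn=80.0)
h_damaged = sdof_frf(fn=77.5)                        # stiffness loss lowers the resonance

metric = frac(h_healthy, h_damaged)
print(f"FRAC damage indicator: {metric:.3f} (drops below 1.0 as damage grows)")
```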

  8. Using Weather Data and Climate Model Output in Economic Analyses of Climate Change

    Energy Technology Data Exchange (ETDEWEB)

    Auffhammer, M.; Hsiang, S. M.; Schlenker, W.; Sobel, A.

    2013-06-28

    Economists are increasingly using weather data and climate model output in analyses of the economic impacts of climate change. This article introduces a set of weather data sets and climate models that are frequently used, discusses the most common mistakes economists make in using these products, and identifies ways to avoid these pitfalls. We first provide an introduction to weather data, including a summary of the types of datasets available, and then discuss five common pitfalls that empirical researchers should be aware of when using historical weather data as explanatory variables in econometric applications. We then provide a brief overview of climate models and discuss two common and significant errors often made by economists when climate model output is used to simulate the future impacts of climate change on an economic outcome of interest.

  9. NUMERICAL MODELLING AS NON-DESTRUCTIVE METHOD FOR THE ANALYSES AND DIAGNOSIS OF STONE STRUCTURES: MODELS AND POSSIBILITIES

    Directory of Open Access Journals (Sweden)

    Nataša Štambuk-Cvitanović

    1999-12-01

    Full Text Available Given the necessity of analysing, diagnosing and preserving existing valuable stone masonry structures and ancient monuments in today's European urban cores, numerical modelling has become an efficient tool for investigating structural behaviour. It should be supported by experimentally determined input data and taken as part of a general combined approach, particularly with non-destructive techniques applied to the structure/model within it. For structures or details which may require more complex analyses, three numerical models based upon the finite element technique are suggested: (1) a standard linear model; (2) a linear model with contact (interface) elements; and (3) a non-linear elasto-plastic and orthotropic model. The applicability of these models depends upon the accuracy of the approach or the type of problem, and is presented on some characteristic examples.

  10. Risk Factor Analyses for the Return of Spontaneous Circulation in the Asphyxiation Cardiac Arrest Porcine Model

    Directory of Open Access Journals (Sweden)

    Cai-Jun Wu

    2015-01-01

    Full Text Available Background: Animal models of asphyxiation cardiac arrest (ACA) are frequently used in basic research to mirror the clinical course of cardiac arrest (CA). The rates of the return of spontaneous circulation (ROSC) in ACA animal models are lower than those from studies that have utilized ventricular fibrillation (VF) animal models. The purpose of this study was to characterize the factors associated with ROSC in the ACA porcine model. Methods: Forty-eight healthy miniature pigs underwent endotracheal tube clamping to induce CA. Once induced, CA was maintained untreated for a period of 8 min. Two minutes following the initiation of cardiopulmonary resuscitation (CPR), defibrillation was attempted until ROSC was achieved or the animal died. To assess the factors associated with ROSC in this CA model, logistic regression analyses were performed to analyze gender, the time of preparation, the amplitude spectrum area (AMSA) from the beginning of CPR and the pH at the beginning of CPR. A receiver-operating characteristic (ROC) curve was used to evaluate the predictive value of AMSA for ROSC. Results: ROSC was achieved in only 52.1% of the animals in this ACA porcine model. The multivariate logistic regression analyses revealed that ROSC significantly depended on the time of preparation, AMSA at the beginning of CPR and pH at the beginning of CPR. The area under the ROC curve for AMSA at the beginning of CPR in predicting ROSC was 0.878 (95% confidence interval: 0.773-0.983), and the optimum cut-off value was 15.62 (specificity 95.7% and sensitivity 80.0%). Conclusions: The time of preparation, AMSA and the pH at the beginning of CPR were associated with ROSC in this ACA porcine model. AMSA also predicted the likelihood of ROSC in this ACA animal model.
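
    A minimal sketch of the reported analysis pipeline (multivariate logistic regression for ROSC, a ROC curve for AMSA, and an optimal cut-off via Youden's index), using synthetic data in place of the porcine measurements.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(0)
n = 48                                   # animals, as in the study
amsa = rng.normal(16, 5, n)              # synthetic AMSA at the start of CPR
ph = rng.normal(7.1, 0.1, n)
prep_time = rng.normal(90, 20, n)        # min, synthetic preparation time
logit = 0.4 * (amsa - 15) - 3 * (7.1 - ph) - 0.02 * (prep_time - 90)
rosc = rng.binomial(1, 1 / (1 + np.exp(-logit)))    # synthetic outcome

# Multivariate logistic regression for ROSC
X = np.column_stack([amsa, ph, prep_time])
model = LogisticRegression(max_iter=1000).fit(X, rosc)
print("logistic coefficients:", dict(zip(["AMSA", "pH", "prep_time"], model.coef_[0].round(3))))

# ROC analysis for AMSA alone and a Youden-index optimal cut-off
fpr, tpr, thr = roc_curve(rosc, amsa)
auc = roc_auc_score(rosc, amsa)
cutoff = thr[np.argmax(tpr - fpr)]
print(f"AUC for AMSA: {auc:.3f}, optimal cut-off: {cutoff:.2f}")
```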

  11. Integrated Process Model Development and Systems Analyses for the LIFE Power Plant

    Energy Technology Data Exchange (ETDEWEB)

    Meier, W R; Anklam, T; Abbott, R; Erlandson, A; Halsey, W; Miles, R; Simon, A J

    2009-07-15

    We have developed an integrated process model (IPM) for a Laser Inertial Fusion-Fission Energy (LIFE) power plant. The model includes cost and performance algorithms for the major subsystems of the plant, including the laser, fusion target fabrication and injection, fusion-fission chamber (including the tritium and fission fuel blankets), heat transfer and power conversion systems, and other balance of plant systems. The model has been developed in Visual Basic with an Excel spreadsheet user interface in order to allow experts in various aspects of the design to easily integrate their individual modules and provide a convenient, widely accessible platform for conducting the system studies. Subsystem modules vary in level of complexity; some are based on top-down scaling from fission power plant costs (for example, electric plant equipment), while others are bottom-up models based on conceptual designs being developed by LLNL (for example, the fusion-fission chamber and laser systems). The IPM is being used to evaluate design trade-offs, do design optimization, and conduct sensitivity analyses to identify high-leverage areas for R&D. We describe key aspects of the IPM and report on the results of our systems analyses. Designs are compared and evaluated as a function of key design variables such as fusion target yield and pulse repetition rate.

  12. Exploring the Association between Transformational Leadership and Teacher's Self-Efficacy in Greek Education System: A Multilevel SEM Model

    Science.gov (United States)

    Gkolia, Aikaterini; Koustelios, Athanasios; Belias, Dimitrios

    2018-01-01

    The main aim of this study is to examine the effect of principals' transformational leadership on teachers' self-efficacy across 77 different Greek elementary and secondary schools operating within a centralized education system. To investigate this effect, a multilevel structural equation modelling (SEM) analysis was conducted, recognizing the…

  13. Considerations when loading spinal finite element models with predicted muscle forces from inverse static analyses.

    Science.gov (United States)

    Zhu, Rui; Zander, Thomas; Dreischarf, Marcel; Duda, Georg N; Rohlmann, Antonius; Schmidt, Hendrik

    2013-04-26

    Simplified loads have mostly been used in biomechanical finite element (FE) studies of the spine because of a lack of data on physiological muscle loading. Inverse static (IS) models allow the prediction of muscle forces for predefined postures. A combination of both mechanical approaches - FE and IS - appears to allow more realistic modeling. However, it is unknown what deviations are to be expected when muscle forces calculated for models with rigid vertebrae and fixed centers of rotation, as generally found in IS models, are applied to an FE model with elastic vertebrae and discs. The aim of this study was to determine the effects of these disagreements. Muscle forces were estimated for 20° flexion and 10° extension in an IS model and transferred to an FE model. The effects of the elasticity of bony structures (rigid vs. elastic) and the definition of the center of rotation (fixed vs. non-fixed) were quantified using the deviation between the actual intervertebral rotation (IVR) of the FE model and the targeted IVR from the IS model. For extension, the elasticity of the vertebrae had only a minor effect on IVRs, whereas a non-fixed center of rotation increased the IVR deviation on average by 0.5° per segment. For flexion, a combination of the two parameters increased the IVR deviation on average by 1° per segment. When loading FE models with predicted muscle forces from IS analyses, the main limitations of the IS model - rigidity of the segments and the fixed centers of rotation - must be considered. Copyright © 2013 Elsevier Ltd. All rights reserved.

  14. Multiple-Group Analysis Using the sem Package in the R System

    Science.gov (United States)

    Evermann, Joerg

    2010-01-01

    Multiple-group analysis in covariance-based structural equation modeling (SEM) is an important technique to ensure the invariance of latent construct measurements and the validity of theoretical models across different subpopulations. However, not all SEM software packages provide multiple-group analysis capabilities. The sem package for the R…

  15. Moderators, mediators, and bidirectional relationships in the International Classification of Functioning, Disability and Health (ICF) framework: An empirical investigation using a longitudinal design and Structural Equation Modeling (SEM).

    Science.gov (United States)

    Rouquette, Alexandra; Badley, Elizabeth M; Falissard, Bruno; Dub, Timothée; Leplege, Alain; Coste, Joël

    2015-06-01

    The International Classification of Functioning, Disability and Health (ICF) published in 2001 describes the consequences of health conditions with three components of impairments in body structures or functions, activity limitations and participation restrictions. Two of the new features of the conceptual model were the possibility of feedback effects between each ICF component and the introduction of contextual factors conceptualized as moderators of the relationship between the components. The aim of this longitudinal study is to provide empirical evidence of these two kinds of effect. Structural equation modeling was used to analyze data from a French population-based cohort of 548 patients with knee osteoarthritis recruited between April 2007 and March 2009 and followed for three years. Indicators of the body structure and function, activity and participation components of the ICF were derived from self-administered standardized instruments. The measurement model revealed four separate factors for body structures impairments, body functions impairments, activity limitations and participation restrictions. The classic sequence from body impairments to participation restrictions through activity limitations was found at each assessment time. Longitudinal study of the ICF component relationships showed a feedback pathway indicating that the level of participation restrictions at baseline was predictive of activity limitations three years later. Finally, the moderating role of personal (age, sex, mental health, etc.) and environmental factors (family relationships, mobility device use, etc.) was investigated. Three contextual factors (sex, family relationships and walking stick use) were found to be moderators for the relationship between the body impairments and the activity limitations components. Mental health was found to be a mediating factor of the effect of activity limitations on participation restrictions. Copyright © 2015 Elsevier Ltd. All rights reserved.

  16. Multi-model finite element scheme for static and free vibration analyses of composite laminated beams

    Directory of Open Access Journals (Sweden)

    U.N. Band

    Full Text Available Abstract A transition element is developed for the local-global analysis of laminated composite beams. It bridges one part of the domain modelled with a higher-order theory and another part modelled with a 2D mixed layerwise theory (LWT) used at the critical zone of the domain. The use of the developed transition element makes the analysis of interlaminar stresses possible with significant accuracy. The mixed 2D model incorporates the transverse normal and shear stresses as nodal degrees of freedom (DOF), which inherently ensures continuity of these stresses. Non-critical zones are modelled with a higher-order equivalent single layer (ESL) theory, leading to a global mesh with multiple models applied simultaneously. Use of the higher-order ESL in non-critical zones reduces the total number of elements required to map the domain. A substantial reduction in DOF as compared to a complete 2D mixed model is obvious. This computationally economical multiple-modelling scheme using the transition element is applied to static and free vibration analyses of laminated composite beams. Results obtained are in good agreement with benchmarks available in the literature.

  17. A chip-level modeling approach for rail span collapse and survivability analyses

    International Nuclear Information System (INIS)

    Marvis, D.G.; Alexander, D.R.; Dinger, G.L.

    1989-01-01

    A general semiautomated analysis technique has been developed for analyzing rail span collapse and survivability of VLSI microcircuits in high ionizing dose rate radiation environments. Hierarchical macrocell modeling permits analyses at the chip level, and interactive graphical postprocessing provides a rapid visualization of voltage, current and power distributions over an entire VLSIC. The technique is demonstrated for a 16k CMOS/SOI SRAM and a CMOS/SOS 8-bit multiplier. The authors also present an efficient method to treat memory arrays, as well as a three-dimensional integration technique to compute sapphire photoconduction from the design layout.

  18. Analyses and testing of model prestressed concrete reactor vessels with built-in planes of weakness

    International Nuclear Information System (INIS)

    Dawson, P.; Paton, A.A.; Fleischer, C.C.

    1990-01-01

    This paper describes the design, construction, analyses and testing of two small scale, single cavity prestressed concrete reactor vessel models, one without planes of weakness and one with planes of weakness immediately behind the cavity liner. This work was carried out to extend a previous study which had suggested the likely feasibility of constructing regions of prestressed concrete reactor vessels and biological shields, which become activated, using easily removable blocks, separated by a suitable membrane. The paper describes the results obtained and concludes that the planes of weakness concept could offer a means of facilitating the dismantling of activated regions of prestressed concrete reactor vessels, biological shields and similar types of structure. (author)

  19. Incorporating uncertainty of management costs in sensitivity analyses of matrix population models.

    Science.gov (United States)

    Salomon, Yacov; McCarthy, Michael A; Taylor, Peter; Wintle, Brendan A

    2013-02-01

    The importance of accounting for economic costs when making environmental-management decisions subject to resource constraints has been increasingly recognized in recent years. In contrast, uncertainty associated with such costs has often been ignored. We developed a method, on the basis of economic theory, that accounts for the uncertainty in population-management decisions. We considered the case where, rather than taking fixed values, model parameters are random variables that represent the situation when parameters are not precisely known. Hence, the outcome is not precisely known either. Instead of maximizing the expected outcome, we maximized the probability of obtaining an outcome above a threshold of acceptability. We derived explicit analytical expressions for the optimal allocation and its associated probability, as a function of the threshold of acceptability, where the model parameters were distributed according to normal and uniform distributions. To illustrate our approach we revisited a previous study that incorporated cost-efficiency analyses in management decisions that were based on perturbation analyses of matrix population models. Incorporating derivations from this study into our framework, we extended the model to address potential uncertainties. We then applied these results to 2 case studies: management of a Koala (Phascolarctos cinereus) population and conservation of an olive ridley sea turtle (Lepidochelys olivacea) population. For low aspirations, that is, when the threshold of acceptability is relatively low, the optimal strategy was obtained by diversifying the allocation of funds. Conversely, for high aspirations, the budget was directed toward management actions with the highest potential effect on the population. The exact optimal allocation was sensitive to the choice of uncertainty model. Our results highlight the importance of accounting for uncertainty when making decisions and suggest that more effort should be placed on
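
    The decision rule described above (maximise the probability that the management outcome exceeds an acceptability threshold when effectiveness parameters are uncertain) can be sketched numerically. The two-action outcome model and its distributions below are hypothetical, and the allocation is brute-forced by Monte Carlo rather than derived analytically as in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
budget = 1.0
n_sim = 20_000

# Uncertain per-unit effectiveness of two management actions (illustrative normals)
eff_a = rng.normal(1.0, 0.4, n_sim)      # higher mean, higher uncertainty
eff_b = rng.normal(0.8, 0.1, n_sim)      # lower mean, lower uncertainty

def prob_above(threshold, shares=np.linspace(0, 1, 101)):
    """Probability that the total outcome exceeds the threshold, for each budget share given to action A."""
    probs = []
    for s in shares:
        outcome = s * budget * eff_a + (1 - s) * budget * eff_b
        probs.append(np.mean(outcome > threshold))
    return shares, np.asarray(probs)

for threshold in (0.7, 1.1):             # low vs high aspiration level
    shares, probs = prob_above(threshold)
    best = shares[np.argmax(probs)]
    print(f"threshold {threshold}: best share to A = {best:.2f}, P(success) = {probs.max():.2f}")
```

    With a low threshold the optimum sits with the low-variance option (or a mix), while a high threshold pushes the whole budget toward the action with the highest potential effect, mirroring the qualitative result reported above.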

  20. Sensitivity analyses of a global flood model in different geoclimatic regions

    Science.gov (United States)

    Moylan, C.; Neal, J. C.; Freer, J. E.; Pianosi, F.; Wagener, T.; Sampson, C. C.; Smith, A.

    2017-12-01

    Flood models producing global hazard maps now exist, although with significant variation in the modelled hazard extent. Beyond explicit structural differences, the reasons for this variation are unknown. Understanding the behaviour of these global flood models is necessary to determine how they can be further developed. A preliminary sensitivity analysis was performed using the Morris method on the Bristol global flood model, which has 37 parameters required to translate remotely sensed data into input for the underlying hydrodynamic model. This number of parameters implies an excess of complexity for flood modelling and should ideally be reduced. When comparing total flooded extent, the analysis showed an order-of-magnitude difference in parameter sensitivities. It also showed that the influence of the most important parameters is largely interactive rather than purely direct, and which parameters turned out to be most important was not always as expected. Despite these findings, conclusions about the model are limited due to the fixed geoclimatic features of the location analysed. Hence more locations with varied geoclimatic characteristics must be chosen, so that the consistencies and deviations of parameter sensitivities across these features become quantifiable. Locations are selected using a novel sampling technique, which aggregates the input data of a domain into representative metrics of the geoclimatic features, hypothesised to correlate with one or more parameters. Combinations of these metrics are sampled across a range of geoclimatic areas, and the sensitivities found are correlated with the sampled metrics. From this work, we identify the main influences on flood risk prediction at the global scale for the model structure used; as a methodology, this is transferable to other global flood models.
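
    A minimal sketch of an elementary-effects (Morris) screening of the kind described, assuming the SALib package and a toy four-parameter stand-in for the 37-parameter flood model; the real analysis would wrap the global model and compare total flooded extent.

```python
import numpy as np
from SALib.sample import morris as morris_sample
from SALib.analyze import morris as morris_analyze

# Toy problem standing in for the flood model's parameterisation (names and bounds invented)
problem = {
    "num_vars": 4,
    "names": ["channel_width_mult", "bank_height", "manning_n", "return_period_scale"],
    "bounds": [[0.5, 2.0], [0.0, 5.0], [0.02, 0.10], [0.8, 1.2]],
}

def flooded_extent(x):
    """Hypothetical response: flooded area (km2) as a nonlinear, interacting function of the inputs."""
    w, h, n, q = x
    return 100 * q / (w * (1 + h)) + 500 * n * q / w   # purely illustrative

X = morris_sample.sample(problem, N=100, num_levels=4)
Y = np.apply_along_axis(flooded_extent, 1, X)
Si = morris_analyze.analyze(problem, X, Y, num_levels=4, print_to_console=False)

for name, mu_star, sigma in zip(problem["names"], Si["mu_star"], Si["sigma"]):
    print(f"{name:22s} mu* = {mu_star:8.2f}  sigma = {sigma:8.2f}")
```

    Large mu* flags an influential parameter, while a large sigma relative to mu* indicates interaction or non-linearity, which is the behaviour reported for the most important parameters above.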

  1. Analysing the Effects of Flood-Resilience Technologies in Urban Areas Using a Synthetic Model Approach

    Directory of Open Access Journals (Sweden)

    Reinhard Schinke

    2016-11-01

    Full Text Available Flood protection systems, with their spatial effects, play an important role in managing and reducing flood risks. The planning and decision process as well as the technical implementation are well organized and frequently exercised. However, building-related flood-resilience technologies (FReT) are often neglected due to the absence of suitable approaches to analyse and integrate such measures in large-scale flood damage mitigation concepts. Against this backdrop, a synthetic model approach was extended by a few complementary methodical steps in order to calculate flood damage to buildings considering the effects of building-related FReT and to analyse the area-related reduction of flood risks using geo-information systems (GIS) with high spatial resolution. It includes a civil-engineering-based investigation of characteristic building properties and construction types, including a selection and combination of appropriate FReT, as a basis for the derivation of synthetic depth-damage functions. Depending on the real exposure and the implementation level of FReT, the functions can be used and allocated in spatial damage and risk analyses. The application of the extended approach is shown in a case study in Valencia (Spain). In this way, the overall research findings improve the integration of FReT in flood risk management. They also provide some useful information for advising individuals at risk, supporting the selection and implementation of FReT.
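
    A small sketch of how a synthetic depth-damage function with and without building-level flood-resilience technologies (FReT) might be applied; the curves, building value and reduction levels here are made up for illustration and are not the functions derived in the study.

```python
import numpy as np

# Illustrative synthetic depth-damage curves (water depth in m -> damage ratio 0..1)
depths = np.array([0.0, 0.5, 1.0, 2.0, 3.0])
damage_no_fret = np.array([0.05, 0.20, 0.40, 0.70, 0.85])
damage_with_fret = np.array([0.00, 0.05, 0.20, 0.55, 0.80])   # e.g. sealing + barriers effective up to ~1 m

def building_damage(depth_m, building_value, curve):
    """Interpolate the damage ratio at a given flood depth and scale by the building value."""
    return float(np.interp(depth_m, depths, curve)) * building_value

value = 250_000   # EUR, illustrative replacement value
for d in (0.3, 0.8, 2.0):
    loss0 = building_damage(d, value, damage_no_fret)
    loss1 = building_damage(d, value, damage_with_fret)
    print(f"depth {d:.1f} m: {loss0:,.0f} EUR without FReT, {loss1:,.0f} EUR with FReT "
          f"({100 * (1 - loss1 / max(loss0, 1e-9)):.0f}% reduction)")
```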

  2. A STRONGLY COUPLED REACTOR CORE ISOLATION COOLING SYSTEM MODEL FOR EXTENDED STATION BLACK-OUT ANALYSES

    Energy Technology Data Exchange (ETDEWEB)

    Zhao, Haihua [Idaho National Laboratory; Zhang, Hongbin [Idaho National Laboratory; Zou, Ling [Idaho National Laboratory; Martineau, Richard Charles [Idaho National Laboratory

    2015-03-01

    The reactor core isolation cooling (RCIC) system in a boiling water reactor (BWR) provides makeup cooling water to the reactor pressure vessel (RPV) when the main steam lines are isolated and the normal supply of water to the reactor vessel is lost. The RCIC system operates independently of AC power, service air, or external cooling water systems. The only required external energy source is the battery, which maintains the logic circuits that control the opening and/or closure of valves in the RCIC system in order to control the RPV water level, shutting down the RCIC pump to avoid overfilling the RPV and flooding the steam line to the RCIC turbine. It is generally assumed in almost all existing station blackout (SBO) analyses that loss of DC power would result in overfilling the steam line and allowing liquid water to flow into the RCIC turbine, which would then be disabled. This behavior, however, was not observed in the Fukushima Daiichi accidents, where the Unit 2 RCIC functioned without DC power for nearly three days. Therefore, more detailed mechanistic models for RCIC system components are needed to understand the extended SBO for BWRs. As part of the effort to develop the next generation reactor system safety analysis code RELAP-7, we have developed a strongly coupled RCIC system model, which consists of a turbine model, a pump model, a check valve model, a wet well model, and their coupling models. Unlike traditional SBO simulations, where mass flow rates are typically given in the input file through time-dependent functions, the real mass flow rates through the turbine and the pump loops in our model are dynamically calculated according to conservation laws and turbine/pump operation curves. A simplified SBO demonstration RELAP-7 model with this RCIC model has been successfully developed. The demonstration model includes the major components for the primary system of a BWR, as well as the safety

  3. Establishing a Numerical Modeling Framework for Hydrologic Engineering Analyses of Extreme Storm Events

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Xiaodong; Hossain, Faisal; Leung, L. Ruby

    2017-08-01

    In this study a numerical modeling framework for simulating extreme storm events was established using the Weather Research and Forecasting (WRF) model. Such a framework is necessary for the derivation of engineering parameters such as probable maximum precipitation that are the cornerstone of large water management infrastructure design. Here the framework was built based on a heavy storm that occurred in Nashville (USA) in 2010, and verified using two other extreme storms. To achieve the optimal setup, several combinations of model resolutions, initial/boundary conditions (IC/BC), cloud microphysics and cumulus parameterization schemes were evaluated using multiple metrics of precipitation characteristics. The evaluation suggests that WRF is most sensitive to the choice of IC/BC. Simulations generally benefit from finer resolutions, up to 5 km. At the 15 km level, NCEP2 IC/BC produces better results, while NAM IC/BC performs best at the 5 km level. The model configuration recommended by this study is: NAM or NCEP2 IC/BC (depending on data availability), 15 km or 15 km-5 km nested grids, Morrison microphysics and the Kain-Fritsch cumulus scheme. Validation of the optimal framework suggests that these options are good starting choices for modeling extreme events similar to the test cases. This optimal framework is proposed in response to emerging engineering demands for extreme storm event forecasting and analyses for the design, operation and risk assessment of large water infrastructure.

  4. A conceptual model for analysing informal learning in online social networks for health professionals.

    Science.gov (United States)

    Li, Xin; Gray, Kathleen; Chang, Shanton; Elliott, Kristine; Barnett, Stephen

    2014-01-01

    Online social networking (OSN) provides a new way for health professionals to communicate, collaborate and share ideas with each other for informal learning on a massive scale. It has important implications for ongoing efforts to support Continuing Professional Development (CPD) in the health professions. However, the challenge of analysing the data generated in OSNs makes it difficult to understand whether and how they are useful for CPD. This paper presents a conceptual model for using mixed methods to study data from OSNs to examine the efficacy of OSN in supporting informal learning of health professionals. It is expected that using this model with the dataset generated in OSNs for informal learning will produce new and important insights into how well this innovation in CPD is serving professionals and the healthcare system.

  5. Transformation of Baumgarten's aesthetics into a tool for analysing works and for modelling

    DEFF Research Database (Denmark)

    Thomsen, Bente Dahl

    2006-01-01

    Abstract: Is this the best form, or does it need further work? The aesthetic object does not possess perfect qualities; but how do I proceed with the form? These are questions that all modellers ask themselves at some point, and with which they can grapple for days - even weeks - before the inspiration to deliver the form finally presents itself. This was the starting point for our plan to devise a tool for analysing works and for the practical development of forms. The tool is a set of cards with suggestions for investigations that may assist modellers in identifying the weaknesses of a form, or in convincing themselves of its strengths. The cards also contain aesthetic reflections that may serve as inspiration in the development of the form.

  6. Reading Ability Development from Kindergarten to Junior Secondary: Latent Transition Analyses with Growth Mixture Modeling

    Directory of Open Access Journals (Sweden)

    Yuan Liu

    2016-10-01

    Full Text Available The present study examined the reading ability development of children in the large-scale Early Childhood Longitudinal Study (Kindergarten Class of 1998-99 data; Tourangeau, Nord, Lê, Pollack, & Atkins-Burnett, 2006) within a dynamic systems framework. To depict children's growth patterns, we extended the measurement part of latent transition analysis to a growth mixture model and found that the new model fitted the data well. Results also revealed that most of the children stayed in the same ability group, with few cross-level changes in their classes. After adding environmental factors as predictors, the analyses showed that children receiving higher teachers' ratings, with higher socioeconomic status, and of above-average poverty status would have a higher probability of transitioning into the higher ability group.

  7. Estimating required information size by quantifying diversity in random-effects model meta-analyses

    DEFF Research Database (Denmark)

    Wetterslev, Jørn; Thorlund, Kristian; Brok, Jesper

    2009-01-01

    BACKGROUND: There is increasing awareness that meta-analyses require a sufficiently large information size to detect or reject an anticipated intervention effect. The required information size in a meta-analysis may be calculated from an anticipated a priori intervention effect or from an intervention effect suggested by trials with low risk of bias. METHODS: Information size calculations need to consider the total model variance in a meta-analysis to control type I and type II errors. Here, we derive an adjusting factor for the required information size under any random-effects model meta-analysis. We devise a measure of diversity (D2), defined as the relative variance reduction when the meta-analysis model is changed from a random-effects into a fixed-effect model; D2 is the percentage that the between-trial variability constitutes of the sum of the between-trial variability and a sampling error estimate considering the required information size. D2 is different from the intuitively obvious adjusting factor based on the common quantification of heterogeneity, the inconsistency (I2), which may underestimate the required information size. Thus, D2 and I2 are compared
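
    A hedged numerical sketch of the diversity adjustment: assuming D2 is taken as the relative reduction in the variance of the pooled estimate when moving from a random-effects to a fixed-effect model, the required information size from a conventional two-group calculation is inflated by 1/(1 - D2). The trial effects and variances below are synthetic.

```python
import numpy as np
from scipy import stats

def d_squared(effects, variances):
    """Diversity D2: relative variance reduction going from a random-effects to a fixed-effect pooled estimate."""
    w_f = 1.0 / variances
    v_fixed = 1.0 / w_f.sum()
    q = np.sum(w_f * (effects - np.average(effects, weights=w_f)) ** 2)
    tau2 = max(0.0, (q - (len(effects) - 1)) / (w_f.sum() - (w_f ** 2).sum() / w_f.sum()))
    v_random = 1.0 / (1.0 / (variances + tau2)).sum()
    return 1.0 - v_fixed / v_random

def required_information_size(delta, sd, d2, alpha=0.05, beta=0.20):
    """Two-group information size for a mean difference delta, inflated by the diversity adjustment 1/(1 - D2)."""
    z = stats.norm.ppf(1 - alpha / 2) + stats.norm.ppf(1 - beta)
    n_fixed = 4 * z ** 2 * sd ** 2 / delta ** 2
    return n_fixed / (1.0 - d2)

# Synthetic meta-analysis of five trials (mean differences and their variances)
effects = np.array([0.30, 0.10, 0.45, 0.05, 0.25])
variances = np.array([0.02, 0.03, 0.05, 0.02, 0.04])
d2 = d_squared(effects, variances)
ris = required_information_size(delta=0.2, sd=1.0, d2=d2)
print(f"D2 = {d2:.2f}, diversity-adjusted required information size ≈ {ris:.0f} participants")
```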

  8. Biosphere Modeling and Analyses in Support of Total System Performance Assessment

    International Nuclear Information System (INIS)

    Tappen, J. J.; Wasiolek, M. A.; Wu, D. W.; Schmitt, J. F.; Smith, A. J.

    2002-01-01

    The Nuclear Waste Policy Act of 1982 established the obligations of and the relationship between the U.S. Environmental Protection Agency (EPA), the U.S. Nuclear Regulatory Commission (NRC), and the U.S. Department of Energy (DOE) for the management and disposal of high-level radioactive wastes. In 1985, the EPA promulgated regulations that included a definition of performance assessment that did not consider potential dose to a member of the general public. This definition would influence the scope of activities conducted by DOE in support of the total system performance assessment program until 1995. The release of a National Academy of Sciences (NAS) report on the technical basis for a Yucca Mountain-specific standard provided the impetus for the DOE to initiate activities that would consider the attributes of the biosphere, i.e. that portion of the earth where living things, including man, exist and interact with the environment around them. The evolution of NRC and EPA Yucca Mountain-specific regulations, originally proposed in 1999, was critical to the development and integration of biosphere modeling and analyses into the total system performance assessment program. These proposed regulations initially differed in the conceptual representation of the receptor of interest to be considered in assessing performance. The publication in 2001 of final regulations in which the NRC adopted the standard will permit the continued improvement and refinement of biosphere modeling and analysis activities in support of assessment activities.

  9. Biosphere Modeling and Analyses in Support of Total System Performance Assessment

    International Nuclear Information System (INIS)

    Jeff Tappen; M.A. Wasiolek; D.W. Wu; J.F. Schmitt

    2001-01-01

    The Nuclear Waste Policy Act of 1982 established the obligations of and the relationship between the U.S. Environmental Protection Agency (EPA), the U.S. Nuclear Regulatory Commission (NRC), and the U.S. Department of Energy (DOE) for the management and disposal of high-level radioactive wastes. In 1985, the EPA promulgated regulations that included a definition of performance assessment that did not consider potential dose to a member of the general public. This definition would influence the scope of activities conducted by DOE in support of the total system performance assessment program until 1995. The release of a National Academy of Sciences (NAS) report on the technical basis for a Yucca Mountain-specific standard provided the impetus for the DOE to initiate activities that would consider the attributes of the biosphere, i.e. that portion of the earth where living things, including man, exist and interact with the environment around them. The evolution of NRC and EPA Yucca Mountain-specific regulations, originally proposed in 1999, was critical to the development and integration of biosphere modeling and analyses into the total system performance assessment program. These proposed regulations initially differed in the conceptual representation of the receptor of interest to be considered in assessing performance. The publication in 2001 of final regulations in which the NRC adopted the standard will permit the continued improvement and refinement of biosphere modeling and analysis activities in support of assessment activities.

  10. Quantifying the Representation Error of Land Biosphere Models using High Resolution Footprint Analyses and UAS Observations

    Science.gov (United States)

    Hanson, C. V.; Schmidt, A.; Law, B. E.; Moore, W.

    2015-12-01

    The validity of land biosphere model outputs relies on accurate representations of ecosystem processes within the model. Typically, a vegetation or land cover type for a given area (several square kilometres or larger) is assumed to have uniform properties. The limited spatial and temporal resolution of models prevents resolving finer-scale heterogeneous flux patterns that arise from variations in vegetation. This representation error must be quantified carefully if models are informed through data assimilation, in order to assign appropriate weighting to model outputs and measurement data. The representation error is usually only estimated, or ignored entirely, due to the difficulty in determining reasonable values. UAS-based gas sensors allow measurements of atmospheric CO2 concentrations with unprecedented spatial resolution, providing a means of determining the representation error for CO2 fluxes empirically. In this study we use three-dimensional CO2 concentration data in combination with high-resolution footprint analyses in order to quantify the representation error for modelled CO2 fluxes at typical resolutions of regional land biosphere models. CO2 concentration data were collected using an Atlatl X6A hexacopter carrying a highly calibrated closed-path infrared gas analyzer based sampling system with an uncertainty of ≤ ±0.2 ppm CO2. Gas concentration data were mapped in three dimensions using the UAS on-board position data and compared to footprints generated using WRF 3.61.

  11. Sexual Arousal and Sexually Explicit Media (SEM)

    DEFF Research Database (Denmark)

    Hald, Gert Martin; Stulhofer, Aleksandar; Lange, Theis

    2018-01-01

    INTRODUCTION: Investigations of patterns of sexual arousal to certain groups of sexually explicit media (SEM) in the general population in non-laboratory settings are rare. Such knowledge could be important to understand more about the relative specificity of sexual arousal in different SEM users. AIMS: (i) To investigate whether sexual arousal to non-mainstream vs mainstream SEM contents could be categorized across gender and sexual orientation, (ii) to compare levels of SEM-induced sexual arousal, sexual satisfaction, and self-evaluated sexual interests and fantasies between non-mainstream and mainstream SEM groups, and (iii) to explore the validity and predictive accuracy of the Non-Mainstream Pornography Arousal Scale (NPAS). METHODS: Online cross-sectional survey of 2,035 regular SEM users in Croatia. MAIN OUTCOME MEASURES: Patterns of sexual arousal to 27 different SEM themes, sexual

  12. Challenges of Analysing Gene-Environment Interactions in Mouse Models of Schizophrenia

    Directory of Open Access Journals (Sweden)

    Peter L. Oliver

    2011-01-01

    Full Text Available The modelling of neuropsychiatric disease using the mouse has provided a wealth of information regarding the relationship between specific genetic lesions and behavioural endophenotypes. However, it is becoming increasingly apparent that synergy between genetic and nongenetic factors is a key feature of these disorders that must also be taken into account. With the inherent limitations of retrospective human studies, experiments in mice have begun to tackle this complex association, combining well-established behavioural paradigms and quantitative neuropathology with a range of environmental insults. The conclusions from this work have been varied, due in part to a lack of standardised methodology, although most have illustrated that phenotypes related to disorders such as schizophrenia are consistently modified. Far fewer studies, however, have attempted to generate a “two-hit” model, whereby the consequences of a pathogenic mutation are analysed in combination with environmental manipulation such as prenatal stress. This significant, yet relatively new, approach is beginning to produce valuable new models of neuropsychiatric disease. Focussing on prenatal and perinatal stress models of schizophrenia, this review discusses the current progress in this field, and highlights important issues regarding the interpretation and comparative analysis of such complex behavioural data.

  13. 3D Recording for 2D Delivering - The Employment of 3D Models for Studies and Analyses

    Science.gov (United States)

    Rizzi, A.; Baratti, G.; Jiménez, B.; Girardi, S.; Remondino, F.

    2011-09-01

    In recent years, thanks to advances in surveying sensors and techniques, many heritage sites have been accurately replicated in digital form with very detailed and impressive results. The current limits are mainly related to hardware capabilities, computation time and the low performance of personal computers. Often, the produced models cannot be displayed on a normal computer, and the only way to visualize them easily is offline, using rendered videos. This kind of 3D representation is useful for digital conservation, dissemination purposes or virtual tourism, where people can visit places otherwise closed for preservation or security reasons. But many more possibilities and applications become available when using a 3D model. The problem is the ability to handle 3D data, as without adequate knowledge this information is reduced to standard 2D data. This article presents some surveying and 3D modeling experiences within the APSAT project ("Ambiente e Paesaggi dei Siti d'Altura Trentini", i.e. Environment and Landscapes of Upland Sites in Trentino). APSAT is a multidisciplinary project funded by the Autonomous Province of Trento (Italy) with the aim of documenting, surveying, studying, analysing and preserving mountainous and hill-top heritage sites located in the region. The project focuses on theoretical, methodological and technological aspects of the archaeological investigation of mountain landscape, considered as the product of sequences of settlements, parcelling-outs, communication networks, resources, and symbolic places. The mountain environment preserves better than other environments the traces of hunting and gathering, herding, agricultural, metallurgical and symbolic activities of different durations and environmental impacts, from Prehistory to the Modern Period. Therefore the correct surveying and documentation of these heritage sites and materials is very important. Within the project, the 3DOM unit of FBK is delivering all the surveying and 3D material to

  14. Theoretical and experimental stress analyses of ORNL thin-shell cylinder-to-cylinder model 3

    International Nuclear Information System (INIS)

    Gwaltney, R.C.; Bolt, S.E.; Corum, J.M.; Bryson, J.W.

    1975-06-01

    The third in a series of four thin-shell cylinder-to-cylinder models was tested, and the experimentally determined elastic stress distributions were compared with theoretical predictions obtained from a thin-shell finite-element analysis. The models are idealized thin-shell structures consisting of two circular cylindrical shells that intersect at right angles. There are no transitions, reinforcements, or fillets in the junction region. This series of model tests serves two basic purposes: the experimental data provide design information directly applicable to nozzles in cylindrical vessels; and the idealized models provide test results for use in developing and evaluating theoretical analyses applicable to nozzles in cylindrical vessels and to thin piping tees. The cylinder of model 3 had a 10 in. OD and the nozzle had a 1.29 in. OD, giving a d0/D0 ratio of 0.129. The OD/thickness ratios for the cylinder and the nozzle were 50 and 7.68 respectively. Thirteen separate loading cases were analyzed. In each, one end of the cylinder was rigidly held. In addition to an internal pressure loading, three mutually perpendicular force components and three mutually perpendicular moment components were individually applied at the free end of the cylinder and at the end of the nozzle. The experimental stress distributions for all the loadings were obtained using 158 three-gage strain rosettes located on the inner and outer surfaces. The loading cases were also analyzed theoretically using a finite-element shell analysis developed at the University of California, Berkeley. The analysis used flat-plate elements and considered five degrees of freedom per node in the final assembled equations. The comparisons between theory and experiment show reasonably good agreement for this model. (U.S.)

  15. Theoretical and experimental stress analyses of ORNL thin-shell cylinder-to-cylinder model 4

    International Nuclear Information System (INIS)

    Gwaltney, R.C.; Bolt, S.E.; Bryson, J.W.

    1975-06-01

    The last in a series of four thin-shell cylinder-to-cylinder models was tested, and the experimentally determined elastic stress distributions were compared with theoretical predictions obtained from a thin-shell finite-element analysis. The models in the series are idealized thin-shell structures consisting of two circular cylindrical shells that intersect at right angles. There are no transitions, reinforcements, or fillets in the junction region. This series of model tests serves two basic purposes: (1) the experimental data provide design information directly applicable to nozzles in cylindrical vessels, and (2) the idealized models provide test results for use in developing and evaluating theoretical analyses applicable to nozzles in cylindrical vessels and to thin piping tees. The cylinder of model 4 had an outside diameter of 10 in., and the nozzle had an outside diameter of 1.29 in., giving a d0/D0 ratio of 0.129. The OD/thickness ratios were 50 and 20.2 for the cylinder and nozzle respectively. Thirteen separate loading cases were analyzed. For each loading condition one end of the cylinder was rigidly held. In addition to an internal pressure loading, three mutually perpendicular force components and three mutually perpendicular moment components were individually applied at the free end of the cylinder and at the end of the nozzle. The experimental stress distributions for each of the 13 loadings were obtained using 157 three-gage strain rosettes located on the inner and outer surfaces. Each of the 13 loading cases was also analyzed theoretically using a finite-element shell analysis developed at the University of California, Berkeley. The analysis used flat-plate elements and considered five degrees of freedom per node in the final assembled equations. The comparisons between theory and experiment show reasonably good agreement for this model. (U.S.)

  16. Genetic analyses of partial egg production in Japanese quail using multi-trait random regression models.

    Science.gov (United States)

    Karami, K; Zerehdaran, S; Barzanooni, B; Lotfi, E

    2017-12-01

    1. The aim of the present study was to estimate genetic parameters for average egg weight (EW) and egg number (EN) at different ages in Japanese quail using multi-trait random regression (MTRR) models. 2. A total of 8534 records from 900 quail, hatched between 2014 and 2015, were used in the study. Average weekly egg weights and egg numbers were measured from the second until the sixth week of egg production. 3. Nine random regression models were compared to identify the best order of the Legendre polynomials (LP). The optimal model was identified by the Bayesian Information Criterion. A model with second-order LP for fixed effects, second-order LP for additive genetic effects and third-order LP for permanent environmental effects (MTRR23) was found to be the best. 4. According to the MTRR23 model, direct heritability for EW increased from 0.26 in the second week to 0.53 in the sixth week of egg production, whereas the ratio of permanent environmental variance to phenotypic variance decreased from 0.48 to 0.1. Direct heritability for EN was low, whereas the ratio of permanent environmental variance to phenotypic variance decreased from 0.57 to 0.15 during the production period. 5. For each trait, estimated genetic correlations among weeks of egg production were high (from 0.85 to 0.98). Genetic correlations between EW and EN were low and negative for the first two weeks, but low and positive for the rest of the egg production period. 6. In conclusion, random regression models can be used effectively for analysing egg production traits in Japanese quail. Response to selection for increased egg weight would be higher at older ages because of its higher heritability, and such a breeding program would have no negative genetic impact on egg production.
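
    To illustrate the machinery behind such a random regression model, the sketch below builds Legendre polynomial covariates on standardised ages (weeks 2-6) and turns an assumed coefficient covariance matrix into genetic (co)variances, and hence heritabilities, across weeks. The covariance matrices and residual variance are invented for illustration; only the Legendre construction mirrors the approach described above.

```python
import numpy as np
from numpy.polynomial import legendre

weeks = np.arange(2, 7)                                            # weeks 2..6 of egg production
t = 2 * (weeks - weeks.min()) / (weeks.max() - weeks.min()) - 1    # standardise ages to [-1, 1]

order = 2                                                          # second-order Legendre polynomials
phi = np.column_stack([legendre.legval(t, np.eye(order + 1)[j]) for j in range(order + 1)])
# (A sqrt((2j+1)/2) normalisation is often applied to the polynomials; omitted here for brevity.)

# Invented covariance matrices of the random regression coefficients
K_add = np.array([[0.30, 0.05, 0.00],
                  [0.05, 0.10, 0.01],
                  [0.00, 0.01, 0.02]])     # additive genetic
K_pe = 0.5 * K_add                         # permanent environment (illustrative)
var_e = 0.4                                # residual variance

G = phi @ K_add @ phi.T                    # genetic (co)variances across weeks
P = G + phi @ K_pe @ phi.T + var_e * np.eye(len(weeks))
h2 = np.diag(G) / np.diag(P)
for w, h in zip(weeks, h2):
    print(f"week {w}: h2 = {h:.2f}")
```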

  17. DESCRIPTION OF MODELING ANALYSES IN SUPPORT OF THE 200-ZP-1 REMEDIAL DESIGN/REMEDIAL ACTION

    Energy Technology Data Exchange (ETDEWEB)

    VONGARGEN BH

    2009-11-03

    The Feasibility Study for the 200-ZP-1 Groundwater Operable Unit (DOE/RL-2007-28) and the Proposed Plan for Remediation of the 200-ZP-1 Groundwater Operable Unit (DOE/RL-2007-33) describe the use of groundwater pump-and-treat technology for the 200-ZP-1 Groundwater Operable Unit (OU) as part of an expanded groundwater remedy. During fiscal year 2008 (FY08), a groundwater flow and contaminant transport (flow and transport) model was developed to support remedy design decisions at the 200-ZP-1 OU. This model was developed because the size and influence of the proposed 200-ZP-1 groundwater pump-and-treat remedy will have a larger areal extent than the current interim remedy, and modeling is required to provide estimates of influent concentrations and contaminant mass removal rates to support the design of the aboveground treatment train. The 200 West Area Pre-Conceptual Design for Final Extraction/Injection Well Network: Modeling Analyses (DOE/RL-2008-56) documents the development of the first version of the MODFLOW/MT3DMS model of the Hanford Site's Central Plateau, as well as the initial application of that model to simulate a potential well field for the 200-ZP-1 remedy (considering only the contaminants carbon tetrachloride and technetium-99). This document focuses on the use of the flow and transport model to identify suitable extraction and injection well locations as part of the 200 West Area 200-ZP-1 Pump-and-Treat Remedial Design/Remedial Action Work Plan (DOE/RL-2008-78). Currently, the model has been developed to the extent necessary to provide approximate results and to lay a foundation for the design basis concentrations that are required in support of the remedial design/remedial action (RD/RA) work plan. The discussion in this document includes the following: (1) Assignment of flow and transport parameters for the model; (2) Definition of initial conditions for the transport model for each simulated contaminant of concern (COC) (i.e., carbon

  18. Noise exposure during pregnancy, birth outcomes and fetal development: meta-analyses using quality effects model.

    Science.gov (United States)

    Dzhambov, Angel M; Dimitrova, Donka D; Dimitrakova, Elena D

    2014-01-01

    Many women are exposed daily to high levels of occupational and residential noise, so the effect of noise exposure on pregnancy should be considered, because noise affects both the fetus and the mother herself. However, there is controversy in the literature regarding the adverse effects of occupational and residential noise on pregnant women and their fetuses. The aim of this study was to conduct a systematic review of previously analyzed studies, to add information omitted in previous reviews and to perform meta-analyses on the effects of noise exposure on pregnancy, birth outcomes and fetal development. Previous reviews and meta-analyses on the topic were consulted. Additionally, a systematic search in MEDLINE, EMBASE and the Internet was carried out. Twenty-nine studies were included in the meta-analyses. A quality effects meta-analytical model was applied. Women exposed to high noise levels (in most of the studies ≥ 80 dB) during pregnancy are at a significantly higher risk of having a small-for-gestational-age newborn (RR = 1.19, 95% CI: 1.03, 1.38), gestational hypertension (RR = 1.27, 95% CI: 1.03, 1.58) and an infant with congenital malformations (RR = 1.47, 95% CI: 1.21, 1.79). The effect was not significant for preeclampsia, perinatal death, spontaneous abortion and preterm birth. The results are consistent with previous findings regarding a higher risk for small-for-gestational-age births. They also highlight the significance of residential and occupational noise exposure for developing gestational hypertension and especially congenital malformations.
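
    For orientation, the sketch below pools study-level relative risks with a generic inverse-variance random-effects (DerSimonian-Laird) model. This is not the quality effects model used in the review, which additionally weights studies by a quality score, and the study data here are placeholders.

```python
import numpy as np

# Placeholder study-level relative risks and 95% CIs (not the reviewed studies)
rr = np.array([1.10, 1.35, 1.05, 1.25, 1.40])
ci_low = np.array([0.90, 1.05, 0.85, 0.95, 1.10])
ci_high = np.array([1.34, 1.74, 1.30, 1.64, 1.78])

y = np.log(rr)
se = (np.log(ci_high) - np.log(ci_low)) / (2 * 1.96)     # SE of log-RR from the CI width
w = 1 / se**2

# DerSimonian-Laird between-study variance, then random-effects weights
q = np.sum(w * (y - np.average(y, weights=w)) ** 2)
tau2 = max(0.0, (q - (len(y) - 1)) / (w.sum() - (w**2).sum() / w.sum()))
w_re = 1 / (se**2 + tau2)

pooled = np.average(y, weights=w_re)
se_pooled = np.sqrt(1 / w_re.sum())
lo, hi = np.exp(pooled - 1.96 * se_pooled), np.exp(pooled + 1.96 * se_pooled)
print(f"Pooled RR = {np.exp(pooled):.2f} (95% CI: {lo:.2f}, {hi:.2f})")
```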

  19. Normalisation genes for expression analyses in the brown alga model Ectocarpus siliculosus

    Directory of Open Access Journals (Sweden)

    Rousvoal Sylvie

    2008-08-01

    Full Text Available Abstract Background Brown algae are plant multi-cellular organisms occupying most of the world coasts and are essential actors in the constitution of ecological niches at the shoreline. Ectocarpus siliculosus is an emerging model for brown algal research. Its genome has been sequenced, and several tools are being developed to perform analyses at different levels of cell organization, including transcriptomic expression analyses. Several topics, including physiological responses to osmotic stress and to exposure to contaminants and solvents, are being studied in order to better understand the adaptive capacity of brown algae to pollution and environmental changes. A series of genes that can be used to normalise expression analyses is required for these studies. Results We monitored the expression of 13 genes under 21 different culture conditions. These included genes encoding proteins and factors involved in protein translation (ribosomal protein 26S, EF1alpha, IF2A, IF4E) and protein degradation (ubiquitin, ubiquitin conjugating enzyme) or folding (cyclophilin), and proteins involved in both the structure of the cytoskeleton (tubulin alpha, actin, actin-related proteins) and its trafficking function (dynein), as well as a protein implicated in carbon metabolism (glucose 6-phosphate dehydrogenase). The stability of their expression level was assessed using the Ct range, and by applying both the geNorm and the Normfinder principles of calculation. Conclusion Comparisons of the data obtained with the three methods of calculation indicated that EF1alpha (EF1a) was the best reference gene for normalisation. The normalisation factor should be calculated with at least two genes, alpha tubulin, ubiquitin-conjugating enzyme or actin-related proteins being good partners of EF1a. Our results exclude actin as a good normalisation gene, and, in this, are in agreement with previous studies in other organisms.

  20. Models for regionalizing economic data and their applications within the scope of forensic disaster analyses

    Science.gov (United States)

    Schmidt, Hanns-Maximilian; Wiens, Marcus, Dr. rer. pol.; Schultmann, Frank, Prof. Dr. rer. pol.

    2015-04-01

    The impact of natural hazards on the economic system can be observed in many different regions all over the world. Once the local economic structure is hit by an event, direct costs instantly occur. However, the disturbance on a local level (e.g. parts of a city or industries along a river bank) might also cause monetary damages in other, indirectly affected sectors. If the impact of an event is strong, these damages are likely to cascade and spread even on an international scale (e.g. the eruption of Eyjafjallajökull and its impact on the automotive sector in Europe). In order to determine these impacts, one has to gain insight into the directly hit economic structure before being able to calculate the side effects. Especially with regard to the development of a model used for near real-time forensic disaster analyses, any simulation needs to be based on data that is rapidly available or easily computed. Therefore, we investigated commonly used or recently discussed methodologies for regionalizing economic data. Surprisingly, even for German federal states there is no official input-output data available that can be used, although such data would provide detailed figures concerning economic interrelations between different industry sectors. In the case of highly developed countries, such as Germany, we focus on models for regionalizing the nationwide input-output table, which is usually available from the national statistical office. However, when it comes to developing countries (e.g. in South-East Asia) the data quality and availability are usually much poorer. In this case, other sources need to be found for the proper assessment of regional economic performance. We developed an indicator-based model that can fill this gap because of its flexibility regarding the level of aggregation and the composability of different input parameters. Our poster presentation provides a literature review and a summary of potential models that seem to be useful for this specific task.

  1. Evaluation of Temperature and Humidity Profiles of Unified Model and ECMWF Analyses Using GRUAN Radiosonde Observations

    Directory of Open Access Journals (Sweden)

    Young-Chan Noh

    2016-07-01

    Full Text Available Temperature and water vapor profiles from the Korea Meteorological Administration (KMA) and the United Kingdom Met Office (UKMO) Unified Model (UM) data assimilation systems and from reanalysis fields from the European Centre for Medium-Range Weather Forecasts (ECMWF) were assessed using collocated radiosonde observations from the Global Climate Observing System (GCOS) Reference Upper-Air Network (GRUAN) for January–December 2012. The motivation was to examine the overall performance of the data assimilation outputs. The difference statistics of the collocated model outputs versus the radiosonde observations indicated good agreement for temperature amongst the datasets, while less agreement was found for relative humidity. A comparison of the UM outputs from the UKMO and KMA revealed that they are similar to each other. The introduction of the new version of the UM into the KMA in May 2012 resulted in improved analysis performance, particularly for the moisture field. On the other hand, ECMWF reanalysis data showed slightly reduced performance for relative humidity compared with the UM, with a significant humid bias in the upper troposphere. ECMWF reanalysis temperature fields showed nearly the same performance as the two UM analyses. The root mean square differences (RMSDs) of the relative humidity for the three models were larger for more humid conditions, suggesting that humidity forecasts are less reliable under these conditions.

  2. Integrated optimization analyses of aerodynamic/stealth characteristics of helicopter rotor based on surrogate model

    Directory of Open Access Journals (Sweden)

    Jiang Xiangwen

    2015-06-01

    Full Text Available Based on a computational fluid dynamics (CFD) method, an electromagnetic high-frequency method and surrogate-model optimization techniques, an integrated aerodynamic/stealth design method has been established for helicopter rotors. The developed method is composed of three modules: integrated grid generation (the moving-embedded grids for the CFD solver and the blade grids for the radar cross section (RCS) solver are generated by solving Poisson equations and by a folding approach), an aerodynamic/stealth solver (the aerodynamic characteristics are simulated by a CFD method based upon the Navier–Stokes equations and the Spalart–Allmaras (S–A) turbulence model, and the stealth characteristics are calculated using a panel edge method combining the methods of physical optics (PO), equivalent currents (MEC) and quasi-stationary currents (MQS)), and integrated optimization analysis (based upon a surrogate-model optimization technique with full factorial design (FFD) and radial basis functions (RBF), integrated optimization analyses of the aerodynamic/stealth characteristics of the rotor are conducted). First, the scattering characteristics of the rotor with different blade-tip sweep and twist angles were calculated, and time–frequency domain grayscale maps with the strongly scattering regions of the rotor were obtained. Meanwhile, the effects of tip sweep and twist angles on the aerodynamic characteristics of the rotor were analysed. Finally, by choosing a suitable objective function and constraint conditions, a compromise design for the sweep and twist combination of a rotor with high aerodynamic performance and low scattering characteristics is given.
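
    The surrogate-model step of such an integrated optimization can be illustrated independently of the CFD and RCS solvers: evaluate a full factorial design, fit a radial basis function interpolant to the expensive responses, and optimize on the cheap surrogate. In the sketch below the design ranges and the placeholder objective (standing in for the aerodynamic/stealth simulations) are assumptions; SciPy's RBFInterpolator is used as one possible RBF implementation.

        import numpy as np
        from itertools import product
        from scipy.interpolate import RBFInterpolator

        # Full factorial design over blade-tip sweep and twist angles (hypothetical ranges)
        sweep = np.linspace(0.0, 40.0, 5)    # degrees
        twist = np.linspace(-12.0, 0.0, 5)   # degrees
        X = np.array(list(product(sweep, twist)))

        def expensive_objective(x):
            """Placeholder response standing in for a combined CFD/RCS evaluation."""
            s, t = x
            aero_penalty = 0.02 * (s - 20.0) ** 2 + 0.05 * (t + 6.0) ** 2
            rcs_penalty = 0.5 * np.cos(np.radians(s)) + 0.1 * abs(t)
            return aero_penalty + rcs_penalty

        y = np.array([expensive_objective(x) for x in X])

        # Radial basis function surrogate fitted to the design points
        surrogate = RBFInterpolator(X, y, kernel="thin_plate_spline")

        # Cheap optimisation on the surrogate: dense grid search for the compromise design
        grid = np.array(list(product(np.linspace(0, 40, 81), np.linspace(-12, 0, 49))))
        best = grid[np.argmin(surrogate(grid))]
        print("surrogate optimum (sweep, twist):", best)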

  3. Analyses of Research Topics in the Field of Informetrics Based on the Method of Topic Modeling

    Directory of Open Access Journals (Sweden)

    Sung-Chien Lin

    2014-07-01

    Full Text Available In this study, we used topic modeling to uncover the structure of research topics in the field of Informetrics, to explore the distribution of the topics over the years, and to compare the core journals. In order to infer the structure of the topics in the field, data on the papers published in the Journal of Informetrics and Scientometrics during 2007 to 2013 were retrieved from the Web of Science database as input to the topic modeling procedure. The results show that when the number of topics was set to 10, the topic model had the smallest perplexity. Although the data scope and analysis methods differ from previous studies, the topics generated in this study are consistent with results produced by expert analyses. Empirical case studies and measurements of bibliometric indicators were considered important in every year of the analysed period, and the field was increasing in stability. Both core journals paid broad attention to all of the topics in the field of Informetrics. The Journal of Informetrics put particular emphasis on the construction and application of bibliometric indicators, while Scientometrics focused on the evaluation and the factors of productivity of countries, institutions, domains, and journals.
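
    Selecting the number of topics by minimizing perplexity, as described above, can be sketched with scikit-learn's LDA implementation. The toy corpus below stands in for the retrieved bibliographic records, and evaluating perplexity on the fitting documents is for illustration only; the study's corpus, preprocessing and candidate topic numbers differ.

        from sklearn.feature_extraction.text import CountVectorizer
        from sklearn.decomposition import LatentDirichletAllocation

        # Toy stand-in for the bibliographic corpus (titles/abstracts of papers)
        docs = [
            "citation analysis of journal impact indicators",
            "h-index and productivity of research institutions",
            "coauthorship networks and collaboration patterns",
            "mapping science with topic models and text mining",
            "evaluation of countries using bibliometric indicators",
            "webometrics and altmetrics for scholarly communication",
        ]

        X = CountVectorizer(stop_words="english").fit_transform(docs)

        # Choose the number of topics by minimising perplexity over candidate values
        best_k, best_perp = None, float("inf")
        for k in range(2, 6):
            lda = LatentDirichletAllocation(n_components=k, random_state=0).fit(X)
            perp = lda.perplexity(X)
            if perp < best_perp:
                best_k, best_perp = k, perp
        print("selected number of topics:", best_k)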

  4. Application of a weighted spatial probability model in GIS to analyse landslides in Penang Island, Malaysia

    Directory of Open Access Journals (Sweden)

    Samy Ismail Elmahdy

    2016-01-01

    Full Text Available In the current study, Penang Island, one of several mountainous areas in Malaysia that are often subjected to landslide hazard, was chosen for further investigation. A multi-criteria evaluation and a spatially weighted probability approach, implemented with a GIS model builder, were applied to map and analyse landslides in Penang Island. A set of automated algorithms was used to construct new essential geological and morphometric thematic maps from remote sensing data. The maps were ranked using the weighted spatial probability model based on their contribution to the landslide hazard. The results show that sites at an elevation of 100–300 m, with steep slopes of 10°–37° and slope direction (aspect) towards the E and SE, were areas of very high and high probability of landslide occurrence; the total areas were 21.393 km² (11.84%) and 58.690 km² (32.48%), respectively. The obtained map was verified by comparing variogram models of the mapped and the observed landslide locations and showed a strong correlation with the locations of occurred landslides, indicating that the proposed method can successfully predict landslide hazard. The method is time and cost effective and can be used as a reference by geological and geotechnical engineers.
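
    The core of a weighted spatial probability model of this kind is a weighted overlay of ranked thematic rasters followed by reclassification into hazard classes. The NumPy sketch below shows that idea on synthetic rasters; the layers, weights and class thresholds are illustrative assumptions, not the values derived in the study.

        import numpy as np

        # Hypothetical ranked thematic layers on a common grid: elevation, slope and
        # aspect classes, each already scored 1 (low susceptibility) to 5 (high).
        rng = np.random.default_rng(1)
        elevation_rank = rng.integers(1, 6, size=(200, 200))
        slope_rank     = rng.integers(1, 6, size=(200, 200))
        aspect_rank    = rng.integers(1, 6, size=(200, 200))

        # Weights expressing each factor's assumed contribution to landslide hazard
        layers  = [elevation_rank, slope_rank, aspect_rank]
        weights = [0.4, 0.4, 0.2]                     # must sum to 1

        hazard = sum(w * lyr for w, lyr in zip(weights, layers))

        # Classify the weighted score into probability classes (thresholds illustrative)
        classes = np.digitize(hazard, bins=[2.0, 3.0, 4.0])  # 0 = low ... 3 = very high
        share_high = (classes >= 2).mean() * 100
        print(f"area in high/very-high classes: {share_high:.1f}%")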

  5. Comparative modeling analyses of Cs-137 fate in the rivers impacted by Chernobyl and Fukushima accidents

    Energy Technology Data Exchange (ETDEWEB)

    Zheleznyak, M.; Kivva, S. [Institute of Environmental Radioactivity, Fukushima University (Japan)

    2014-07-01

    The consequences of the two largest nuclear accidents of the last decades - at the Chernobyl Nuclear Power Plant (ChNPP) (1986) and at the Fukushima Daiichi NPP (FDNPP) (2011) - clearly demonstrated that radioactive contamination of water bodies in the vicinity of the NPP and on the waterways leading from it, e.g. the river-reservoir system after the Chernobyl accident and the rivers and coastal marine waters after the Fukushima accident, has in both cases been one of the main sources of public concern about the accident consequences. The higher weight of water contamination in the public perception of the accidents, compared with the actual fraction of doses received via aquatic pathways relative to other dose components, is a specific feature of the public perception of environmental contamination. This psychological phenomenon, confirmed after both accidents, provides supplementary arguments that reliable simulation and prediction of radionuclide dynamics in water and sediments is an important part of post-accidental radioecological research. The purpose of the research is to use the experience of the modeling activities conducted over more than 25 years within the Chernobyl-affected Pripyat River and Dnieper River watershed, as well as data from new monitoring studies in Japan of the Abukuma River (the largest in the region, with a watershed area of 5400 km²), Kuchibuto River, Uta River, Niita River, Natsui River and Same River, and of studies on the specifics of the 'water-sediment' ¹³⁷Cs exchanges in this area, to refine the 1-D model RIVTOX and the 2-D model COASTOX and to increase the predictive power of the modeling technologies. The results of the modeling studies are applied for more accurate prediction of the water/sediment radionuclide contamination of rivers and reservoirs in Fukushima Prefecture and for comparative analyses of the efficiency of the post-accidental measures to diminish the contamination of the water bodies.

  6. Continuous spatial modelling to analyse planning and economic consequences of offshore wind energy

    International Nuclear Information System (INIS)

    Moeller, Bernd

    2011-01-01

    Offshore wind resources appear abundant, but technological, economic and planning issues significantly reduce the theoretical potential. While massive investments are anticipated and planners and developers are scouting for viable locations, considering risk and impact, few studies simultaneously address potentials and costs together with the consequences of proposed planning in an analytical and continuous manner and for larger areas at once. The consequences may be investments short of efficiency and equity, and failed planning routines. A spatial resource economic model for the Danish offshore waters is presented, used to analyse area constraints, technological risks, priorities for development and the opportunity costs of maintaining competing area uses. The SCREAM-offshore wind model (Spatially Continuous Resource Economic Analysis Model) uses raster-based geographical information systems (GIS) and considers numerous geographical factors, technology and cost data as well as planning information. Novel elements are a weighted visibility analysis and geographically recorded shipping movements as variable constraints. A number of scenarios are described, which include restrictions on the use of offshore areas as well as alternative uses such as conservation and tourism. The results comprise maps, tables and cost-supply curves for further resource economic assessment and policy analysis. A discussion of parameter variations exposes the uncertainties of technology development, environmental protection and competing area uses, and illustrates how such models might assist in improving public planning while providing decision bases for the political process. The method can be adapted to different research questions and is largely applicable in other parts of the world. - Research Highlights: → A model for the spatially continuous evaluation of offshore wind resources. → Assessment of spatial constraints, costs and resources for each location. → Planning tool for

  7. Development and application of model RAIA uranium on-line analyser

    International Nuclear Information System (INIS)

    Dong Yanwu; Song Yufen; Zhu Yaokun; Cong Peiyuan; Cui Songru

    1999-01-01

    The working principle, structure, adjustment and application of the model RAIA on-line analyser are reported. The performance of the instrument is reliable: for an identical sample, the signal fluctuation during four months of continuous monitoring is less than ±1%. An appropriate sample cell length is chosen according to the required measurement range. The precision of the measurement process is better than 1% at 100 g/L U, and the detection limit is 50 mg/L. The uranium concentration in the process stream can be displayed automatically and printed at any time, and the analyser outputs a 4-20 mA current signal proportional to the uranium concentration. This is a significant step towards continuous process control and computer management.

  8. A model for analysing factors which may influence quality management procedures in higher education

    Directory of Open Access Journals (Sweden)

    Cătălin MAICAN

    2015-12-01

    Full Text Available In all universities, the Office for Quality Assurance defines the procedure for assessing the performance of the teaching staff, with a view to establishing students' perception of the teachers' activity in terms of the quality of the teaching process, the relationship with the students and the assistance provided for learning. The present paper aims at creating a combined evaluation model based on data mining and statistical methods: starting from the evaluations teachers gave to students, and using cluster analysis and discriminant analysis, we identified the subjects which produced significant differences between students' grades; these subjects were subsequently evaluated by the students. The results of these analyses allowed the formulation of measures for enhancing the quality of the evaluation process.

  9. Developing a system dynamics model to analyse environmental problem in construction site

    Science.gov (United States)

    Haron, Fatin Fasehah; Hawari, Nurul Nazihah

    2017-11-01

    This study aims to develop a system dynamics model of a construction site to analyse the impact of environmental problems. Construction sites may cause damage to the environment and interfere with the daily lives of residents. A proper environmental management system must be used to reduce pollution, enhance biodiversity, conserve water, respect people and their local environment, measure performance and set targets for the environment and sustainability. This study investigates the damaging impacts that normally occur during the construction stage. Environmental problems cause costly mistakes in project implementation, either because of the environmental damage that is likely to arise during project implementation, or because of modifications that may be required subsequently in order to make the action environmentally acceptable. The findings from this study thus help to significantly reduce the damaging impact on the environment and to improve the performance of the environmental management system at the construction site.

  10. Applying the Land Use Portfolio Model with Hazus to analyse risk from natural hazard events

    Science.gov (United States)

    Dinitz, Laura B.; Taketa, Richard A.

    2013-01-01

    This paper describes and demonstrates the integration of two geospatial decision-support systems for natural-hazard risk assessment and management. Hazus is a risk-assessment tool developed by the Federal Emergency Management Agency to identify risks and estimate the severity of risk from natural hazards. The Land Use Portfolio Model (LUPM) is a risk-management tool developed by the U.S. Geological Survey to evaluate plans or actions intended to reduce risk from natural hazards. We analysed three mitigation policies for one earthquake scenario in the San Francisco Bay area to demonstrate the added value of using Hazus and the LUPM together. The demonstration showed that Hazus loss estimates can be input to the LUPM to obtain estimates of losses avoided through mitigation, rates of return on mitigation investment, and measures of uncertainty. Together, they offer a more comprehensive approach to help with decisions for reducing risk from natural hazards.

  11. Testing a dual-systems model of adolescent brain development using resting-state connectivity analyses.

    Science.gov (United States)

    van Duijvenvoorde, A C K; Achterberg, M; Braams, B R; Peters, S; Crone, E A

    2016-01-01

    The current study aimed to test a dual-systems model of adolescent brain development by studying changes in intrinsic functional connectivity within and across networks typically associated with cognitive-control and affective-motivational processes. To this end, resting-state and task-related fMRI data were collected from 269 participants (ages 8-25). Resting-state analyses focused on seeds derived from task-related neural activation in the same participants: the dorsolateral prefrontal cortex (dlPFC) from a cognitive rule-learning paradigm and the nucleus accumbens (NAcc) from a reward paradigm. Whole-brain seed-based resting-state analyses showed an age-related increase in dlPFC connectivity with the caudate and thalamus, and an age-related decrease in connectivity with the (pre)motor cortex. NAcc connectivity showed a strengthening of connectivity with the dorsal anterior cingulate cortex (ACC) and subcortical structures such as the hippocampus, and a specific age-related decrease in connectivity with the ventromedial PFC (vmPFC). Behavioral measures from both functional paradigms correlated with resting-state connectivity strength with their respective seed. That is, age-related change in learning performance was mediated by connectivity between the dlPFC and thalamus, and age-related change in winning pleasure was mediated by connectivity between the NAcc and vmPFC. These patterns indicate (i) strengthening of connectivity between regions that support control and learning, (ii) more independent functioning of regions that support motor and control networks, and (iii) more independent functioning of regions that support motivation and valuation networks with age. These results are interpreted vis-à-vis a dual-systems model of adolescent brain development. Copyright © 2015. Published by Elsevier Inc.

  12. Individual-level space-time analyses of emergency department data using generalized additive modeling

    Directory of Open Access Journals (Sweden)

    Vieira Verónica M

    2012-08-01

    Full Text Available Abstract Background Although daily emergency department (ED data is a source of information that often includes residence, its potential for space-time analyses at the individual level has not been fully explored. We propose that ED data collected for surveillance purposes can also be used to inform spatial and temporal patterns of disease using generalized additive models (GAMs. This paper describes the methods for adapting GAMs so they can be applied to ED data. Methods GAMs are an effective approach for modeling spatial and temporal distributions of point-wise data, producing smoothed surfaces of continuous risk while adjusting for confounders. In addition to disease mapping, the method allows for global and pointwise hypothesis testing and selection of statistically optimum degree of smoothing using standard statistical software. We applied a two-dimensional GAM for location to ED data of overlapping calendar time using a locally-weighted regression smoother. To illustrate our methods, we investigated the association between participants’ address and the risk of gastrointestinal illness in Cape Cod, Massachusetts over time. Results The GAM space-time analyses simultaneously smooth in units of distance and time by using the optimum degree of smoothing to create data frames of overlapping time periods and then spatially analyzing each data frame. When resulting maps are viewed in series, each data frame contributes a movie frame, allowing us to visualize changes in magnitude, geographic size, and location of elevated risk smoothed over space and time. In our example data, we observed an underlying geographic pattern of gastrointestinal illness with risks consistently higher in the eastern part of our study area over time and intermittent variations of increased risk during brief periods. Conclusions Spatial-temporal analysis of emergency department data with GAMs can be used to map underlying disease risk at the individual-level and view
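
    A single "frame" of such a space-time GAM analysis amounts to fitting a two-dimensional smooth of location to case/control indicators and predicting risk on a map grid. The sketch below uses the pygam package and synthetic points as one possible illustration; the original work used locally weighted regression smoothers, overlapping time windows and confounder adjustment, none of which are reproduced here.

        import numpy as np
        from pygam import LogisticGAM, te

        # Synthetic case-control data standing in for geocoded ED visits:
        # columns are x (easting) and y (northing); label 1 = gastrointestinal case.
        rng = np.random.default_rng(2)
        n = 2000
        xy = rng.uniform(0, 10, size=(n, 2))
        risk = 1 / (1 + np.exp(-(xy[:, 0] - 7)))        # higher risk in the "east"
        y = rng.binomial(1, 0.1 + 0.2 * risk)

        # Two-dimensional smooth of location (one data frame / time window)
        gam = LogisticGAM(te(0, 1)).fit(xy, y)

        # Predict risk on a regular grid to map this time frame; repeating this for
        # overlapping windows yields the "movie" described in the abstract.
        gx, gy = np.meshgrid(np.linspace(0, 10, 50), np.linspace(0, 10, 50))
        grid = np.column_stack([gx.ravel(), gy.ravel()])
        risk_surface = gam.predict_mu(grid).reshape(gx.shape)
        print(risk_surface.min().round(3), risk_surface.max().round(3))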

  13. Parameterization and sensitivity analyses of a radiative transfer model for remote sensing plant canopies

    Science.gov (United States)

    Hall, Carlton Raden

    A major objective of remote sensing is the determination of biochemical and biophysical characteristics of plant canopies utilizing high spectral resolution sensors. Canopy reflectance signatures are dependent on absorption and scattering processes of the leaf, canopy properties, and the ground beneath the canopy. This research investigates, through field and laboratory data collection and computer model parameterization and simulations, the relationships between leaf optical properties, canopy biophysical features, and the nadir-viewed above-canopy reflectance signature. Emphasis is placed on the parameterization and application of an existing irradiance radiative transfer model developed for aquatic systems. Data and model analyses provide knowledge of the relative importance of leaves and canopy biophysical features in estimating the diffuse absorption a(λ, m⁻¹), diffuse backscatter b(λ, m⁻¹), beam attenuation α(λ, m⁻¹), and beam-to-diffuse conversion c(λ, m⁻¹) coefficients of the two-flow irradiance model. Data sets include field and laboratory measurements from three plant species, live oak (Quercus virginiana), Brazilian pepper (Schinus terebinthifolius) and grapefruit (Citrus paradisi), sampled at Cape Canaveral Air Force Station and Kennedy Space Center, Florida, in March and April of 1997. Features measured were depth h (m), projected foliage coverage PFC, leaf area index LAI, and zenith leaf angle. Optical measurements, collected with a Spectron SE 590 high-sensitivity narrow-bandwidth spectrograph, included above-canopy reflectance, internal canopy transmittance and reflectance, and bottom reflectance. Leaf samples were returned to the laboratory, where optical, physical and chemical measurements of leaf thickness, leaf area, leaf moisture and pigment content were made. A new term, the leaf volume correction index (LVCI), was developed and demonstrated in support of model coefficient parameterization. The LVCI is based on angle-adjusted leaf

  14. Sensitivity analyses of a colloid-facilitated contaminant transport model for unsaturated heterogeneous soil conditions.

    Science.gov (United States)

    Périard, Yann; José Gumiere, Silvio; Rousseau, Alain N.; Caron, Jean

    2013-04-01

    Certain contaminants may travel faster through soils when they are sorbed to subsurface colloidal particles. Indeed, subsurface colloids may act as carriers of some contaminants accelerating their translocation through the soil into the water table. This phenomenon is known as colloid-facilitated contaminant transport. It plays a significant role in contaminant transport in soils and has been recognized as a source of groundwater contamination. From a mechanistic point of view, the attachment/detachment of the colloidal particles from the soil matrix or from the air-water interface and the straining process may modify the hydraulic properties of the porous media. Šimůnek et al. (2006) developed a model that can simulate the colloid-facilitated contaminant transport in variably saturated porous media. The model is based on the solution of a modified advection-dispersion equation that accounts for several processes, namely: straining, exclusion and attachement/detachement kinetics of colloids through the soil matrix. The solutions of these governing, partial differential equations are obtained using a standard Galerkin-type, linear finite element scheme, implemented in the HYDRUS-2D/3D software (Šimůnek et al., 2012). Modeling colloid transport through the soil and the interaction of colloids with the soil matrix and other contaminants is complex and requires the characterization of many model parameters. In practice, it is very difficult to assess actual transport parameter values, so they are often calibrated. However, before calibration, one needs to know which parameters have the greatest impact on output variables. This kind of information can be obtained through a sensitivity analysis of the model. The main objective of this work is to perform local and global sensitivity analyses of the colloid-facilitated contaminant transport module of HYDRUS. Sensitivity analysis was performed in two steps: (i) we applied a screening method based on Morris' elementary
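
    A Morris elementary-effects screening of the kind mentioned above can be set up with the SALib package. In the sketch below the parameter names, bounds and the toy response function standing in for a HYDRUS-2D/3D run are assumptions for illustration only.

        import numpy as np
        from SALib.sample.morris import sample as morris_sample
        from SALib.analyze import morris

        # Three illustrative parameters of a colloid transport model (names and bounds
        # are placeholders, not the actual HYDRUS parameter set).
        problem = {
            "num_vars": 3,
            "names": ["attachment_rate", "detachment_rate", "straining_coeff"],
            "bounds": [[1e-4, 1e-1], [1e-5, 1e-2], [0.0, 2.0]],
        }

        X = morris_sample(problem, N=50, num_levels=4)

        def toy_model(x):
            """Stand-in for a model run: relative colloid mass reaching the outlet."""
            ka, kd, s = x
            return np.exp(-50 * ka) * (1 + 10 * kd) / (1 + s)

        Y = np.apply_along_axis(toy_model, 1, X)

        Si = morris.analyze(problem, X, Y, num_levels=4, print_to_console=False)
        for name, mu_star, sigma in zip(problem["names"], Si["mu_star"], Si["sigma"]):
            print(f"{name:18s} mu* = {mu_star:.3f}  sigma = {sigma:.3f}")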

  15. Modeling and analysing storage systems in agricultural biomass supply chain for cellulosic ethanol production

    International Nuclear Information System (INIS)

    Ebadian, Mahmood; Sowlati, Taraneh; Sokhansanj, Shahab; Townley-Smith, Lawrence; Stumborg, Mark

    2013-01-01

    Highlights: ► Studied the agricultural biomass supply chain for cellulosic ethanol production. ► Evaluated the impact of storage systems on different supply chain actors. ► Developed a combined simulation/optimization model to evaluate storage systems. ► Compared two satellite storage systems with roadside storage in terms of costs and emitted CO2. ► SS would lead to a more cost-efficient supply chain compared to roadside storage. -- Abstract: In this paper, a combined simulation/optimization model is developed to better understand and evaluate the impact of storage systems on the costs incurred by each actor in the agricultural biomass supply chain, including farmers, hauling contractors and the cellulosic ethanol plant. The optimization model prescribes the optimum number and location of farms and storages. It also determines the supply radius, the number of farms required to secure the annual supply of biomass, and the assignment of farms to storage locations. Given the specific design of the supply chain determined by the optimization model, the simulation model determines the number of machines required for each operation, their daily working schedules and utilization rates, along with the capacities of the storages. To evaluate the impact of the storage systems on the delivered costs, three storage systems are modeled and compared: a roadside storage (RS) system and two satellite storage (SS) systems, namely SS with fixed hauling distance (SF) and SS with variable hauling distance (SV). In all storage systems, it is assumed that the loading equipment is dedicated to storage locations. The results obtained from a real case study provide detailed cost figures for each storage system, since the developed model analyses the supply chain on an hourly basis and considers the time-dependence and stochasticity of the supply chain. Comparison of the storage systems shows that SV would outperform SF and RS by reducing the total delivered cost by 8% and 6%, respectively.

  16. Revolving SEM images visualising 3D taxonomic characters

    DEFF Research Database (Denmark)

    Akkari, Nesrine; Cheung, David Koon-Bong; Enghoff, Henrik

    2013-01-01

    A novel illustration technique based on scanning electron microscopy is used for the first time to enhance taxonomic descriptions. The male genitalia (gonopods) of six species of millipedes are used for the construction of interactive imaging models. Each model is a compilation of a number of SEM images.

  17. CERES model application for increasing preparedness to climate variability in agricultural planning—risk analyses

    Science.gov (United States)

    Popova, Zornitsa; Kercheva, Milena

    The role of soil, crop, climate and crop management in the year-to-year variation of yield and groundwater pollution was quantified by simulation analyses with the CERES-maize and CERES-wheat models over a 30-year period for four “soil-crop” combinations. It was established that the “Chromic Luvisol-maize-dry land” combination was associated with the greatest coefficient of variability of yields (Cv = 43%) and drought frequency (22 years with yield losses of more than 20%) over the analysed period. Average yield losses in dry vegetation seasons were 60% of the maize productivity potential under sufficient soil moisture. Traditional and drainage-controlling precise irrigation scheduling mitigated drought consequences by reducing the year-to-year variability of yield to Cv = 5.6-11.6% on the risky Chromic Luvisol. Long-term wheat yields were much more stable (Cv = 23-26%) than those of maize on Chromic Luvisol; in this case droughts covered 12 of the studied 30 years, in which yield losses were 25-30% on average. Soils of high water holding capacity (such as Vertisol) stored 50-150 mm of additional precipitation for crop evapotranspiration and thus reduced the frequency of drought under both crops to 6-7 cases in 30 years. Agriculture should be more sustainable on this soil, since the variability of yield dropped to Cv = 13% for wheat and Cv = 21% for maize. As a result, Vertisol mitigated yield losses during dry vegetation periods by 10-15% for wheat and 22% for maize compared with productivity under sufficient soil water. Thirty-year frequency analyses of seasonal nitrogen (N) leaching proved that ten of the wheat and only one of the maize vegetation seasons were susceptible to significant (10-45 kg N/ha/year) groundwater pollution on Chromic Luvisol. The simulated precise irrigation scenario did not influence drainage in the vegetation period. Other risky situations occurred under maize in the wettest fallow state after extremely dry vegetation (in one more of the studied years) when up

  18. Neural Spike-Train Analyses of the Speech-Based Envelope Power Spectrum Model

    Directory of Open Access Journals (Sweden)

    Varsha H. Rallapalli

    2016-10-01

    Full Text Available Diagnosing and treating hearing impairment is challenging because people with similar degrees of sensorineural hearing loss (SNHL) often have different speech-recognition abilities. The speech-based envelope power spectrum model (sEPSM) has demonstrated that the signal-to-noise ratio in the envelope domain (SNRenv), computed from a modulation filter bank, provides a robust speech-intelligibility measure across a wider range of degraded conditions than many long-standing models. In the sEPSM, noise (N) is assumed to: (a) reduce the S + N envelope power by filling in dips within clean speech (S), and (b) introduce an envelope noise floor from intrinsic fluctuations in the noise itself. While the promise of SNRenv has been demonstrated for normal-hearing listeners, it has not been thoroughly extended to hearing-impaired listeners because of limited physiological knowledge of how SNHL affects speech-in-noise envelope coding relative to noise alone. Here, envelope coding of speech-in-noise stimuli was quantified from auditory-nerve model spike trains using shuffled correlograms, which were analyzed in the modulation-frequency domain to compute modulation-band estimates of neural SNRenv. Preliminary spike-train analyses show strong similarities to the sEPSM, demonstrating the feasibility of neural SNRenv computations. Results suggest that individual differences can occur based on differential degrees of outer- and inner-hair-cell dysfunction in listeners currently diagnosed into the single audiological SNHL category. The predicted acoustic-SNR dependence of individual differences suggests that the SNR-dependent rate of susceptibility could be an important metric in diagnosing individual differences. Future measurements of the neural SNRenv in animal studies with various forms of SNHL will provide valuable insight for understanding individual differences in speech-in-noise intelligibility.

  19. Using an operating cost model to analyse the selection of aircraft type on short-haul routes

    CSIR Research Space (South Africa)

    Ssamula, B

    2006-08-01

    Full Text Available operating cost model to analyse suitable aircraft choices, for short haul routes, in terms of cost-related parameters, for aircraft commonly used within Africa. In this paper all the parameters that are crucial in analysing a transport service are addressed...

  20. Analysing model fit of psychometric process models: An overview, a new test and an application to the diffusion model.

    Science.gov (United States)

    Ranger, Jochen; Kuhn, Jörg-Tobias; Szardenings, Carsten

    2017-05-01

    Cognitive psychometric models embed cognitive process models into a latent trait framework in order to allow for individual differences. Due to their close relationship to the response process, the models allow profound conclusions about the test takers. However, before such a model can be used, its fit has to be checked carefully. In this manuscript we give an overview of existing tests of model fit and show their relation to the generalized moment test of Newey (Econometrica, 53, 1985, 1047) and Tauchen (J. Econometrics, 30, 1985, 415). We also present a new test, the Hausman test of misspecification (Hausman, Econometrica, 46, 1978, 1251). The Hausman test consists of a comparison of two estimates of the same item parameters, which should be similar if the model holds. The performance of the Hausman test is evaluated in a simulation study, in which we illustrate its application to two popular models in cognitive psychometrics, the Q-diffusion model and the D-diffusion model (van der Maas, Molenaar, Maris, Kievit, & Borsboom, Psychol. Rev., 118, 2011, 339; Molenaar, Tuerlinckx, & van der Maas, J. Stat. Softw., 66, 2015, 1). We also compare the performance of the test to alternative tests of model fit, namely the M2 test (Molenaar et al., J. Stat. Softw., 66, 2015, 1), the moment test (Ranger et al., Br. J. Math. Stat. Psychol., 2016) and the test for binned time (Ranger & Kuhn, Psychol. Test. Assess., 56, 2014b, 370). The simulation study indicates that the Hausman test is superior to the latter tests: it closely adheres to the nominal Type I error rate and has higher power in most simulation conditions. © 2017 The British Psychological Society.
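
    The generic Hausman comparison, two estimates of the same parameter vector contrasted through the difference of their covariance matrices, can be written in a few lines. The sketch below implements that textbook statistic with made-up numbers; it is not the specific psychometric implementation evaluated in the paper.

        import numpy as np
        from scipy.stats import chi2

        def hausman(b1, V1, b2, V2):
            """Generic Hausman misspecification statistic.

            b1, b2: two estimates of the same parameter vector (e.g. item parameters
            from two different estimators); V1, V2: their covariance matrices.
            H = (b1-b2)' (V1-V2)^{-1} (b1-b2) ~ chi2(k) under correct specification.
            A pseudo-inverse guards against a near-singular variance difference.
            """
            d = np.asarray(b1) - np.asarray(b2)
            Vd = np.asarray(V1) - np.asarray(V2)
            H = float(d @ np.linalg.pinv(Vd) @ d)
            return H, chi2.sf(H, df=d.size)

        # Illustrative numbers only (not diffusion-model output)
        b_efficient  = np.array([0.52, 1.10, -0.31])
        b_consistent = np.array([0.55, 1.18, -0.25])
        V_efficient  = np.diag([0.010, 0.020, 0.015])
        V_consistent = np.diag([0.014, 0.026, 0.019])
        H, p = hausman(b_consistent, V_consistent, b_efficient, V_efficient)
        print(f"H = {H:.2f}, p = {p:.3f}")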

  1. Molecular approaches for viable bacterial population and transcriptional analyses in a rodent model of dental caries

    Science.gov (United States)

    Klein, Marlise I.; Scott-Anne, Kathleen M.; Gregoire, Stacy; Rosalen, Pedro L.; Koo, Hyun

    2012-01-01

    SUMMARY Culturing methods are the primary approach for microbiological analysis of plaque-biofilms in rodent models of dental caries. In this study, we developed strategies for isolation of DNA and RNA from in vivo formed plaque-biofilms to analyze the viable bacterial population and gene expression. Plaque-biofilm samples from rats were treated with propidium monoazide to isolate DNA from viable cells, and the purified DNA was used to quantify total bacteria and S. mutans population via qPCR and specific primers; the same samples were also analyzed by colony forming unit (CFU) counting. In parallel, RNA was isolated from plaque-biofilm samples (from same animals) and used for transcriptional analyses via RT-qPCR. The viable population of both S. mutans and total bacteria assessed by qPCR were positively correlated with the CFU data (P0.8). However, the qPCR data showed higher bacterial cell counts, particularly for total bacteria (vs. CFU). Moreover, S. mutans proportion in the plaque-biofilm determined by qPCR analysis showed strong correlation with incidence of smooth-surface caries (P=0.0022, r=0.71). The purified RNAs presented high RNA integrity numbers (>7), which allowed measurement of the expression of genes that are critical for S. mutans virulence (e.g. gtfB and gtfC). Our data show that the viable microbial population and the gene expression can be analyzed simultaneously, providing a global assessment of the infectious aspect of the disease dental caries. Our approach could enhance the value of the current rodent model in further understanding the pathophysiology of this disease and facilitating the exploration of novel anti-caries therapies. PMID:22958384

  2. VOC composition of current motor vehicle fuels and vapors, and collinearity analyses for receptor modeling.

    Science.gov (United States)

    Chin, Jo-Yu; Batterman, Stuart A

    2012-03-01

    The formulation of motor vehicle fuels can alter the magnitude and composition of the evaporative and exhaust emissions occurring throughout the fuel cycle. Information regarding the volatile organic compound (VOC) composition of motor fuels other than gasoline is scarce, especially for bioethanol and biodiesel blends. This study examines the liquid and vapor (headspace) composition of four contemporary and commercially available fuels: gasoline, E85 (an 85% ethanol blend), ultra-low sulfur diesel (ULSD), and B20 (20% soy-biodiesel and 80% ULSD). The composition of gasoline and E85, in both the neat fuel and the headspace vapor, was dominated by aromatics and n-heptane. Despite its low gasoline content, E85 vapor contained higher concentrations of several VOCs than gasoline vapor, likely due to adjustments in its formulation. Temperature changes produced greater changes in the partial pressures of 17 VOCs in E85 than in gasoline, and large shifts in the VOC composition. B20 and ULSD were dominated by C9 to C16 n-alkanes and low levels of aromatics, and the two fuels had similar headspace vapor compositions and concentrations. While the headspace composition predicted using vapor-liquid equilibrium theory was closely correlated with measurements, E85 vapor concentrations were underpredicted. Based on variance decomposition analyses, gasoline and diesel fuels and their vapor VOCs were distinct, but B20 and ULSD fuels and vapors were highly collinear. These results can be used to estimate fuel-related emissions and exposures, particularly in receptor models that apportion emission sources, and the collinearity analysis suggests that gasoline- and diesel-related emissions can be distinguished. Copyright © 2011 Elsevier Ltd. All rights reserved.
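
    The vapor-liquid equilibrium prediction mentioned above can be illustrated, in its simplest idealized form, with Raoult's law and the ideal gas law. The compound, mole fraction and vapor pressure below are illustrative assumptions, and real fuel mixtures require activity coefficients that this sketch omits.

        # Idealised vapor-liquid equilibrium estimate of a headspace concentration
        # (Raoult's law with unit activity coefficients -- a simplification of the
        # theory referenced in the abstract; all input values are illustrative).
        R = 8.314  # J mol-1 K-1

        def headspace_mg_per_m3(mole_fraction, p_sat_pa, molar_mass_g, temp_k=298.15):
            """Equilibrium headspace concentration of one fuel component."""
            partial_pressure = mole_fraction * p_sat_pa      # Raoult's law
            mol_per_m3 = partial_pressure / (R * temp_k)     # ideal gas law
            return mol_per_m3 * molar_mass_g * 1000.0        # mg per m3

        # Toluene in a gasoline-like mixture (illustrative inputs)
        print(round(headspace_mg_per_m3(mole_fraction=0.05,
                                        p_sat_pa=3800,       # approx. vapor pressure at 25 C
                                        molar_mass_g=92.14), 1), "mg/m3")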

  3. Modeling ecological drivers in marine viral communities using comparative metagenomics and network analyses.

    Science.gov (United States)

    Hurwitz, Bonnie L; Westveld, Anton H; Brum, Jennifer R; Sullivan, Matthew B

    2014-07-22

    Long-standing questions in marine viral ecology are centered on understanding how viral assemblages change along gradients in space and time. However, investigating these fundamental ecological questions has been challenging due to incomplete representation of naturally occurring viral diversity in single gene- or morphology-based studies and an inability to identify up to 90% of reads in viral metagenomes (viromes). Although protein clustering techniques provide a significant advance by helping organize this unknown metagenomic sequence space, they typically use only ∼75% of the data and rely on assembly methods not yet tuned for naturally occurring sequence variation. Here, we introduce an annotation- and assembly-free strategy for comparative metagenomics that combines shared k-mer and social network analyses (regression modeling). This robust statistical framework enables visualization of complex sample networks and determination of ecological factors driving community structure. Application to 32 viromes from the Pacific Ocean Virome dataset identified clusters of samples broadly delineated by photic zone and revealed that geographic region, depth, and proximity to shore were significant predictors of community structure. Within subsets of this dataset, depth, season, and oxygen concentration were significant drivers of viral community structure at a single open ocean station, whereas variability along onshore-offshore transects was driven by oxygen concentration in an area with an oxygen minimum zone and not depth or proximity to shore, as might be expected. Together these results demonstrate that this highly scalable approach using complete metagenomic network-based comparisons can both test and generate hypotheses for ecological investigation of viral and microbial communities in nature.
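
    The shared k-mer part of such an annotation- and assembly-free comparison reduces to building a k-mer set per sample and computing pairwise similarities that feed the network/regression stage. The sketch below is a toy illustration with invented sequences and a short k; it is not the pipeline or the distance measure used in the study.

        from itertools import combinations

        def kmer_set(seq, k=21):
            """Set of k-mers in a sequence (toy stand-in for a full virome read set)."""
            return {seq[i:i + k] for i in range(len(seq) - k + 1)}

        def jaccard(a, b):
            return len(a & b) / len(a | b) if a | b else 0.0

        # Tiny illustrative "viromes"; real comparisons use all reads per sample
        samples = {
            "surface_A": "ATGCGTACGTTAGCATGCGTACGTTAGCATGCGTAC",
            "surface_B": "ATGCGTACGTTAGCATGCGTACCTTAGCATGCGTAC",
            "deep_C":    "TTACGGATCCGATTACGGATCCGATTACGGATCCGA",
        }
        kmers = {name: kmer_set(seq, k=8) for name, seq in samples.items()}

        # Pairwise shared-k-mer similarity matrix: the input to network/regression models
        for a, b in combinations(samples, 2):
            print(f"{a} vs {b}: Jaccard = {jaccard(kmers[a], kmers[b]):.2f}")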

  4. Inverse analyses of effective diffusion parameters relevant for a two-phase moisture model of cementitious materials

    DEFF Research Database (Denmark)

    Addassi, Mouadh; Johannesson, Björn; Wadsö, Lars

    2018-01-01

    Here we present an inverse analysis approach to determining the two-phase moisture transport properties relevant to concrete durability modeling. The proposed moisture transport model was based on a continuum approach with two truly separate equations for the liquid and gas phases being connected

  5. Usefulness of non-linear input-output models for economic impact analyses in tourism and recreation

    NARCIS (Netherlands)

    Klijs, J.; Peerlings, J.H.M.; Heijman, W.J.M.

    2015-01-01

    In tourism and recreation management it is still common practice to apply traditional input–output (IO) economic impact models, despite their well-known limitations. In this study the authors analyse the usefulness of applying a non-linear input–output (NLIO) model, in which price-induced input

  6. Experimental and modeling analyses for interactions between graphene oxide and quartz sand.

    Science.gov (United States)

    Kang, Jin-Kyu; Park, Jeong-Ann; Yi, In-Geol; Kim, Song-Bae

    2017-03-21

    The aim of this study was to quantify the interactions between graphene oxide (GO) and quartz sand by conducting experimental and modeling analyses. The results show that both GO and quartz sand were negatively charged in the presence of 0-50 mM NaCl and 5 mM CaCl₂ (GO = -43.10 to -17.60 mV, quartz sand = -40.97 to -8.44 mV). In the Derjaguin-Landau-Verwey-Overbeek (DLVO) energy profiles, the adhesion of GO to quartz sand becomes more favorable with increasing NaCl concentration from 0 to 10 mM because the interaction energy profile was compressed and the primary maximum energy barrier was lowered. At 50 mM NaCl and 5 mM CaCl₂, the primary maximum energy barrier even disappeared, resulting in highly favorable conditions for GO retention to quartz sand. In the Maxwell model analysis, the probability of GO adhesion to quartz sand (α_m) increased from 2.46 × 10⁻⁴ to 9.98 × 10⁻¹ at ionic strengths of 0-10 mM NaCl. In the column experiments (column length = 10 cm, inner diameter = 2.5 cm, flow rate = 0.5 mL min⁻¹), the mass removal (Mr) of GO in quartz sand increased from 5.4% to 97.8% as the NaCl concentration was increased from 0 to 50 mM, indicating that the mobility of GO was high in low ionic strength solutions and decreased with increasing ionic strength. The Mr value of GO at 5 mM CaCl₂ was 100%, demonstrating that Ca²⁺ had a much stronger effect than Na⁺ on the mobility of GO. In addition, the mobility of GO was lower than that of chloride (Mr = 1.4%) but far higher than that of multi-walled carbon nanotubes (Mr = 87.0%) in deionized water. In aluminum oxide-coated sand, the Mr value of GO was 98.1% at 0 mM NaCl, revealing that the mobility of GO was reduced in the presence of metal oxides. The transport model analysis indicates that the value of the dimensionless attachment rate coefficient (D_a) increased from 0.11 to 4.47 as the NaCl concentration was increased from 0 to 50 mM. In the colloid filtration model analysis, the
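
    The DLVO energy profiles described above can be sketched with the classical sphere-plate expressions (unretarded van der Waals attraction plus the Hogg-Healy-Fuerstenau constant-potential double-layer term). The Hamaker constant and particle radius below are assumed values, so the script only reproduces the qualitative behaviour (a large barrier at low ionic strength that vanishes around 50 mM), not the study's fitted numbers.

        import numpy as np

        # Sphere-plate DLVO interaction energy: unretarded van der Waals plus
        # HHF constant-potential electric double layer. Parameter values are
        # illustrative, not the values fitted in the cited study.
        kB, T = 1.381e-23, 298.15
        e, NA = 1.602e-19, 6.022e23
        eps = 78.5 * 8.854e-12          # permittivity of water (F/m)
        A_H = 1.0e-20                   # Hamaker constant GO-water-quartz (J, assumed)
        a = 0.5e-6                      # effective GO particle radius (m, assumed)

        def dlvo_energy_kT(h, ionic_strength_M, psi1_mV, psi2_mV):
            """Total interaction energy (in kT) at separation distance h (m)."""
            kappa = np.sqrt(2 * NA * e**2 * ionic_strength_M * 1000 / (eps * kB * T))
            psi1, psi2 = psi1_mV * 1e-3, psi2_mV * 1e-3
            v_vdw = -A_H * a / (6 * h)
            v_edl = np.pi * eps * a * (
                2 * psi1 * psi2 * np.log((1 + np.exp(-kappa * h)) / (1 - np.exp(-kappa * h)))
                + (psi1**2 + psi2**2) * np.log(1 - np.exp(-2 * kappa * h))
            )
            return (v_vdw + v_edl) / (kB * T)

        h = np.linspace(0.5e-9, 100e-9, 400)
        for I, z_go, z_sand in [(0.001, -43.1, -41.0), (0.05, -17.6, -8.4)]:
            profile = dlvo_energy_kT(h, I, z_go, z_sand)
            print(f"I = {I} M: maximum of the profile ~ {profile.max():.0f} kT")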

  7. Recent advances in 3D SEM surface reconstruction.

    Science.gov (United States)

    Tafti, Ahmad P; Kirkpatrick, Andrew B; Alavi, Zahrasadat; Owen, Heather A; Yu, Zeyun

    2015-11-01

    The scanning electron microscope (SEM), as one of the most commonly used instruments in biology and the material sciences, employs electrons instead of light to determine the surface properties of specimens. However, SEM micrographs still remain 2D images. To effectively measure and visualize the surface attributes, we need to restore the 3D shape model from the SEM images. 3D surface reconstruction is a longstanding topic in microscopy vision as it offers quantitative and visual information for a variety of applications including medicine, pharmacology, chemistry, and mechanics. In this paper, we review the expanding body of work in this area, including a discussion of recent techniques and algorithms. With the present work, we also enhance the reliability, accuracy, and speed of 3D SEM surface reconstruction by designing and developing an optimized multi-view framework. We then consider several real-world experiments as well as synthetic data to examine the qualitative and quantitative attributes of our proposed framework. Furthermore, we present a taxonomy of 3D SEM surface reconstruction approaches and address several challenging issues as part of our future work. Copyright © 2015 Elsevier Ltd. All rights reserved.

  8. 3DSEM++: Adaptive and intelligent 3D SEM surface reconstruction.

    Science.gov (United States)

    Tafti, Ahmad P; Holz, Jessica D; Baghaie, Ahmadreza; Owen, Heather A; He, Max M; Yu, Zeyun

    2016-08-01

    Structural analysis of microscopic objects is a longstanding topic in several scientific disciplines, such as the biological, mechanical, and materials sciences. The scanning electron microscope (SEM), as a promising imaging instrument, has been around for decades to determine the surface properties (e.g., compositions or geometries) of specimens, achieving increased magnification and contrast and resolution better than one nanometer. Whereas SEM micrographs remain two-dimensional (2D), many research and educational questions truly require knowledge of their three-dimensional (3D) structures. 3D surface reconstruction from SEM images leads to a remarkable understanding of microscopic surfaces, allowing informative and qualitative visualization of the samples being investigated. In this contribution, we integrate several computational technologies, including machine learning, the a-contrario methodology, and epipolar geometry, to design and develop a novel and efficient method called 3DSEM++ for multi-view 3D SEM surface reconstruction in an adaptive and intelligent fashion. The experiments performed on real and synthetic data show that the approach is able to reach significant precision in both SEM extrinsic calibration and 3D surface modeling. Copyright © 2016 Elsevier Ltd. All rights reserved.

  9. A simple beam model to analyse the durability of adhesively bonded tile floorings in presence of shrinkage

    Directory of Open Access Journals (Sweden)

    S. de Miranda

    2014-07-01

    Full Text Available A simple beam model for the evaluation of tile debonding due to substrate shrinkage is presented. The tile-adhesive-substrate package is modeled as an Euler-Bernoulli beam lying on a two-layer elastic foundation. An effective discrete model for inter-tile grouting is introduced with the aim of modelling workmanship defects due to partially filled groutings. The model is validated using the results of a 2D FE model. Different defect configurations and adhesive typologies are analysed, focusing attention on the prediction of normal stresses in the adhesive layer under the assumption of Mode I failure of the adhesive.
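
    The underlying beam-on-elastic-foundation idea can be illustrated with a minimal finite-difference solution of EI·w'''' + k·w = q for a single-layer (Winkler) foundation; the paper's model additionally uses a two-layer foundation and a discrete grouting model, which are not reproduced here, and all stiffness and load values below are assumed.

        import numpy as np

        EI = 6.0e3        # bending stiffness of the tile "beam" (N*m^2, assumed)
        k  = 5.0e7        # foundation (adhesive) modulus (N/m^2, assumed)
        q  = -1.0e3       # uniform load standing in for the shrinkage action (N/m)
        L, n = 1.0, 201   # beam length (m) and number of nodes
        h = L / (n - 1)
        c = k * h**4 / EI                       # non-dimensional foundation term

        A = np.zeros((n, n))
        b = np.full(n, q * h**4 / EI)

        # Boundary rows: simply supported ends, w = 0 (w'' = 0 handled via ghost nodes)
        A[0, 0] = A[-1, -1] = 1.0
        b[0] = b[-1] = 0.0

        # Rows next to the supports use the ghost-node reduction w[-1] = -w[1]
        A[1, 1:4]    = [5.0 + c, -4.0, 1.0]
        A[-2, -4:-1] = [1.0, -4.0, 5.0 + c]

        # Interior rows: standard 5-point fourth-difference stencil plus foundation term
        for i in range(2, n - 2):
            A[i, i - 2:i + 3] = [1.0, -4.0, 6.0 + c, -4.0, 1.0]

        w = np.linalg.solve(A, b)
        print(f"max deflection ~ {w.min():.2e} m at x = {w.argmin() * h:.2f} m")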

  10. Environmental regulation impacts on international trade: aggregate and sectoral analyses with a bilateral trade flow model

    NARCIS (Netherlands)

    van Beers, C.; van den Bergh, J.C.J.M.

    2003-01-01

    An important barrier to the implementation of strict environmental regulations is that they are perceived to negatively affect a country's competitiveness, visible through changes in international trade. Whereas theoretical analyses of trade and the environment indicate that relatively strict

  11. Calibração de um modelo de umidade para um solo aluvial sem cobertura vegetal (Calibration of a soil moisture model for an alluvial soil without vegetation cover)

    Directory of Open Access Journals (Sweden)

    Eduardo C. B. de Araújo

    2001-12-01

    Full Text Available This work was carried out in the semi-arid region, on an eutrophic alluvial soil, in an irrigated area of the Vale do Curu Experimental Farm of the Universidade Federal do Ceará, State of Ceará, Brazil. The experiment ran from October 4, 1999 to March 10, 2000, with the soil kept free of vegetation, in order to calibrate the soil moisture model for agricultural activities (MUSAG) under field conditions and to determine the parameters associated with the model functions (infiltration, percolation and evaporation). The calibration consisted of measuring soil moisture at a depth of 0-0.30 m with a neutron probe and comparing these measurements with the soil moisture estimated by the parameterized model. The MUSAG provided estimates of soil water storage that were not statistically different from the values determined by the neutron probe for the 0-0.30 m depth. The model was less satisfactory in estimating moisture during periods with higher precipitation frequency: it followed the tendency of the observed values but underestimated them.

  12. SEM probe of IC radiation sensitivity

    Science.gov (United States)

    Gauthier, M. K.; Stanley, A. G.

    1979-01-01

    A scanning electron microscope (SEM) used to irradiate a single integrated circuit (IC) subcomponent to test for radiation sensitivity can localize an area of the IC smaller than 0.03 by 0.03 mm, allowing determination of the exact location of the radiation-sensitive section.

  13. Three dimensional analysis of the pore space in fine-grained Boom Clay, using BIB-SEM (broad-ion beam scanning electron microscopy), combined with FIB (focused ion-beam) serial cross-sectioning, pore network modeling and Wood's metal injection

    Science.gov (United States)

    Hemes, Susanne; Klaver, Jop; Desbois, Guillaume; Urai, Janos

    2014-05-01

    The Boom Clay is, besides the Ypresian clays, one of the potential host rock materials for radioactive waste disposal in Belgium (Gens et al., 2003; Van Marcke & Laenen, 2005; Verhoef et al., 2011). To access parameters, which are relevant for the diffusion controlled transport of radionuclides in the material, such as porosity, pore connectivity and permeability, it is crucial to characterize the pore space at high resolution (nm-scale) and in 3D. Focused-ion-beam (FIB) serial cross-sectioning in combination with high resolution scanning electron microscopy (SEM), pore network modeling, Wood's metal injection and broad-ion-beam (BIB) milling, constitute a superior set of methods to characterize the 3D pore space in fine-grained, clayey materials, down to the nm-scale resolution. In the present study, we identified characteristic 3D pore space morphologies, determined the 3D volume porosity of the material and applied pore network extraction modeling (Dong and Blunt, 2009), to access the connectivity of the pore space and to discriminate between pore bodies and pore throats. Moreover, we used Wood's metal injection (WMI) in combination with BIB-SEM imaging to assess the pore connectivity at a larger scale and even higher resolution. The FIB-SEM results show a highly (~ 90 %) interconnected pore space in Boom Clay, down to the resolution of ~ 3E+03 nm³ (voxel-size), with a total volume porosity of ~ 20 %. Pore morphologies of large (> 5E+08 nm³), highly interconnected pores are complex, with high surface area to volume ratios (shape factors G ~ 0.01), whereas small (< 1E+06 nm³), often isolated pores are much more compact and show higher shape factors (G) up to 0.03. WMI in combination with BIB-SEM, down to a resolution of ~ 50 nm² pixel-size, indicates an interconnected porosity fraction of ~ 80 %, of a total measured 2D porosity of ~ 20 %. Determining and distinguishing between pore bodies and pore throats enables us to compare 3D FIB-SEM pore

  14. A Conceptual Model for Analysing Management Development in the UK Hospitality Industry

    Science.gov (United States)

    Watson, Sandra

    2007-01-01

    This paper presents a conceptual, contingent model of management development (MD). It explains the nature of the UK hospitality industry and its potential influence on MD practices, prior to exploring the dimensions and relationships in the model. The embryonic model is presented as one that can enhance our understanding of the complexities of the…

  15. From intermediate to final behavioral endpoints; Modeling cognitions in (cost-)effectiveness analyses in health promotion

    NARCIS (Netherlands)

    Prenger, Hendrikje Cornelia

    2012-01-01

    Cost-effectiveness analyses (CEAs) are considered an increasingly important tool in health promotion and psychology. In health promotion adequate effectiveness data of innovative interventions are often lacking. In case of many promising interventions the available data are inadequate for CEAs due

  16. Developing computational model-based diagnostics to analyse clinical chemistry data

    NARCIS (Netherlands)

    Schalkwijk, D.B. van; Bochove, K. van; Ommen, B. van; Freidig, A.P.; Someren, E.P. van; Greef, J. van der; Graaf, A.A. de

    2010-01-01

    This article provides methodological and technical considerations to researchers starting to develop computational model-based diagnostics using clinical chemistry data. These models are of increasing importance, since novel metabolomics and proteomics measuring technologies are able to produce large

  17. Bio-economic farm modelling to analyse agricultural land productivity in Rwanda

    NARCIS (Netherlands)

    Bidogeza, J.C.

    2011-01-01

    Keywords: Rwanda; farm household typology; sustainable technology adoption; multivariate analysis;
    land degradation; food security; bioeconomic model; crop simulation models; organic fertiliser; inorganic fertiliser; policy incentives

    In Rwanda, land degradation contributes to the

  18. Reference model for measuring and analysing costs – particularly in business informatics

    OpenAIRE

    Milos Maryska

    2010-01-01

    This paper is devoted to the management of the cost efficiency of business informatics with the assistance of Business Intelligence (BI). It defines the basic critical points that must be taken into account when creating models for managing the cost efficiency of business informatics. It proposes a new model for the management of cost efficiency; this model also includes definitions of the dimensions and indicators for measuring this cost efficiency. The model takes into account requirements that pose cl...

  19. Quantifying and Analysing Neighbourhood Characteristics Supporting Urban Land-Use Modelling

    DEFF Research Database (Denmark)

    Hansen, Henning Sten

    2009-01-01

    Land-use modelling and spatial scenarios have gained increased attention as a means to meet the challenge of reducing uncertainty in the spatial planning and decision-making. Several organisations have developed software for land-use modelling. Many of the recent modelling efforts incorporate cel...

  1. Phenology in Germany in the 20th century : methods, analyses and models

    Science.gov (United States)

    Schaber, Jörg

    2002-07-01

    locally combined time series, increasing the available data for model development. Apart from analysed recording errors, microclimatic site influences, genetic variation and the observers themselves were identified as sources of uncertainty in phenological observational data. It was concluded that 99% of all phenological observations at a certain site will vary within approximately 24 days around the parametric mean. This supports the proposed 30-day rule for detecting outliers. New phenology models that predict local BB from daily temperature time series were developed. These models were based on simple interactions between inhibitory and promotory agents that are assumed to control the developmental status of a plant. Apart from the fact that, in general, the new models fitted and predicted the observations better than classical models, the main modeling results were: - The bias of the classical models, i.e. overestimation of early observations and underestimation of late observations, could be reduced but not completely removed. - The different favored model structures for each species indicated that photoperiod played a more dominant role for the late spring phases than for the early spring phases. - Chilling plays only a subordinate role for spring BB compared with the temperatures directly preceding BB. The length of the vegetation period (VP) plays a central role in the interannual variation of carbon storage of terrestrial ecosystems. Analyses of observational data have shown that the VP has lengthened in the northern latitudes during recent decades. This phenomenon has often been discussed in connection with global warming, since phenology is influenced by temperature. The analysis of plant phenology in southern Germany in the 20th century showed that the strong advancement of the spring phases in the decade before 1999 was not a singular event in the 20th century; similar trends already occurred in earlier decades.

  2. Pathway models for analysing and managing the introduction of alien plant pests—an overview and categorization

    Science.gov (United States)

    J.C. Douma; M. Pautasso; R.C. Venette; C. Robinet; L. Hemerik; M.C.M. Mourits; J. Schans; W. van der Werf

    2016-01-01

    Alien plant pests are introduced into new areas at unprecedented rates through global trade, transport, tourism and travel, threatening biodiversity and agriculture. Increasingly, the movement and introduction of pests are analysed with pathway models to provide risk managers with quantitative estimates of introduction risks and of the effectiveness of management options....

  3. Economical analyses of build-operate-transfer model in establishing alternative power plants

    International Nuclear Information System (INIS)

    Yumurtaci, Zehra; Erdem, Hasan Hueseyin

    2007-01-01

    The most widely employed method to meet the increasing electricity demand is building new power plants. The most important issue in building new power plants is finding the financial funds. Various models are employed, especially in developing countries, in order to overcome this problem and to find a financial source. One of these models is the build-operate-transfer (BOT) model. In this model, the investor raises all the funds for mandatory expenses and provides financing, builds the plant and, after a certain plant operation period, transfers the plant to the national power organization. In this model, the objective is to decrease the burden of power plants on the state budget. The most important issue in the BOT model is the dependence of the unit electricity cost on the transfer period. In this study, a model giving the unit electricity cost as a function of the transfer period for plants established according to the BOT model is discussed. Unit electricity investment cost and unit electricity cost in relation to transfer period have been determined for the plant types. Furthermore, the change in unit electricity cost with load factor, which is one of the parameters affecting annual electricity production, has been determined, and the results have been analyzed. This method can be employed for comparing the production costs of different plants that are planned to be established according to the BOT model, or it can be employed to determine the appropriateness of the BOT model.
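    As a rough illustration of the kind of calculation described above (and not the authors' actual formulation), a levelized unit electricity cost for a BOT plant can be sketched by annualizing the investment over the transfer period and dividing the total annual cost by the annual generation; the function, parameter names and numbers below are assumptions for illustration only.

```python
def unit_electricity_cost(investment, discount_rate, transfer_years,
                          capacity_mw, load_factor, fuel_om_cost_per_mwh):
    """Illustrative levelized unit cost (currency/MWh) for a BOT plant.

    The investment is recovered over the transfer period with a capital
    recovery factor; fuel and O&M are lumped into a single per-MWh cost.
    This is a generic sketch, not the specific model of the cited study.
    """
    r, n = discount_rate, transfer_years
    crf = r * (1 + r) ** n / ((1 + r) ** n - 1)      # capital recovery factor
    annual_generation_mwh = capacity_mw * load_factor * 8760.0
    return investment * crf / annual_generation_mwh + fuel_om_cost_per_mwh

# Shorter transfer periods raise the unit cost, as the abstract discusses.
for years in (10, 15, 20):
    print(years, round(unit_electricity_cost(1.2e9, 0.08, years,
                                             600, 0.75, 35.0), 2))
```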

  4. A Shell/3D Modeling Technique for the Analyses of Delaminated Composite Laminates

    Science.gov (United States)

    Krueger, Ronald; OBrien, T. Kevin

    2001-01-01

    A shell/3D modeling technique was developed for which a local three-dimensional solid finite element model is used only in the immediate vicinity of the delamination front. The goal was to combine the accuracy of the full three-dimensional solution with the computational efficiency of a plate or shell finite element model. Multi-point constraints provided a kinematically compatible interface between the local three-dimensional model and the global structural model which has been meshed with plate or shell finite elements. Double Cantilever Beam (DCB), End Notched Flexure (ENF), and Single Leg Bending (SLB) specimens were modeled using the shell/3D technique to study the feasibility for pure mode I (DCB), mode II (ENF) and mixed mode I/II (SLB) cases. Mixed mode strain energy release rate distributions were computed across the width of the specimens using the virtual crack closure technique. Specimens with a unidirectional layup and with a multidirectional layup where the delamination is located between two non-zero degree plies were simulated. For a local three-dimensional model, extending to a minimum of about three specimen thicknesses on either side of the delamination front, the results were in good agreement with mixed mode strain energy release rates obtained from computations where the entire specimen had been modeled with solid elements. For large built-up composite structures modeled with plate elements, the shell/3D modeling technique offers a great potential for reducing the model size, since only a relatively small section in the vicinity of the delamination front needs to be modeled with solid elements.
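    For reference, the virtual crack closure technique mentioned above is commonly written, in its two-dimensional per-unit-width form, as follows; this is the standard textbook expression rather than a formula quoted from the paper.

```latex
% Standard 2D VCCT expressions (per unit width): nodal forces (X_i, Z_i) at
% the crack tip, relative displacements (\Delta u, \Delta w) of the node pair
% behind the tip, and crack-tip element length \Delta a.
G_{I}  = \frac{Z_i \, \Delta w}{2\,\Delta a}, \qquad
G_{II} = \frac{X_i \, \Delta u}{2\,\Delta a}, \qquad
G_T = G_I + G_{II} \; (+\, G_{III} \text{ in three dimensions}).
```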

  5. Comparative study analysing women's childbirth satisfaction and obstetric outcomes across two different models of maternity care

    Science.gov (United States)

    Conesa Ferrer, Ma Belén; Canteras Jordana, Manuel; Ballesteros Meseguer, Carmen; Carrillo García, César; Martínez Roche, M Emilia

    2016-01-01

    Objectives: To describe the differences in obstetrical results and women's childbirth satisfaction across 2 different models of maternity care (biomedical model and humanised birth). Setting: 2 university hospitals in south-eastern Spain from April to October 2013. Design: A correlational descriptive study. Participants: A convenience sample of 406 women participated in the study, 204 of the biomedical model and 202 of the humanised model. Results: The differences in obstetrical results were (biomedical model/humanised model): onset of labour (spontaneous 66/137, augmentation 70/1, p=0.0005), pain relief (epidural 172/132, no pain relief 9/40, p=0.0005), mode of delivery (normal vaginal 140/165, instrumental 48/23, p=0.004), length of labour (0–4 hours 69/93, >4 hours 133/108, p=0.011), condition of perineum (intact perineum or tear 94/178, episiotomy 100/24, p=0.0005). The total questionnaire score (100) gave a mean (M) of 78.33 and SD of 8.46 in the biomedical model of care and an M of 82.01 and SD of 7.97 in the humanised model of care (p=0.0005). In the analysis of the results per item, statistical differences were found in 8 of the 9 subscales. The highest scores were reached in the humanised model of maternity care. Conclusions: The humanised model of maternity care offers better obstetrical outcomes and women's satisfaction scores during the labour, birth and immediate postnatal period than does the biomedical model. PMID:27566632

  6. Oxford CyberSEM: remote microscopy

    International Nuclear Information System (INIS)

    Rahman, M; Kirkland, A; Cockayne, D; Meyer, R

    2008-01-01

    The Internet has enabled researchers to communicate over vast geographical distances, sharing ideas and documents. e-Science, underpinned by Grid and Web Services, has taken electronic communications to the next level where, in addition to document sharing, researchers can increasingly control high precision scientific instruments over the network. The Oxford CyberSEM project developed a simple Java applet via which samples placed in a JEOL 5510LV Scanning Electron Microscope (SEM) can be manipulated and examined collaboratively over the Internet. Designed with schoolchildren in mind, CyberSEM does not require any additional hardware or software other than a generic Java-enabled web browser. This paper reflects on both the technical and social challenges in designing real-time systems for controlling scientific equipment in collaborative environments. Furthermore, it proposes potential deployment beyond the classroom setting.

  7. Partial Least Squares Strukturgleichungsmodellierung (PLS-SEM)

    DEFF Research Database (Denmark)

    Hair, Joseph F.; Hult, G. Tomas M.; Ringle, Christian M.

    Partial least squares structural equation modelling (PLS-SEM) has established itself in business and social science research as a suitable method for estimating causal models. Thanks to the method's ease of use and the available software, it is now also established in practice. This book provides an application-oriented introduction to PLS-SEM. The focus is on the fundamentals of the method and their practical implementation with the SmartPLS software. The book relies on simple explanations of the statistical approaches and on the clear presentation of numerous application examples based on a single case study. Many graphics, tables and illustrations facilitate the understanding of PLS-SEM. In addition, readers are offered downloadable data sets, exercises and further articles for deeper study. The book is therefore ideally suited for students, researchers and...

  8. Integrated freight network model : a GIS-based platform for transportation analyses.

    Science.gov (United States)

    2015-01-01

    The models currently used to examine the behavior of transportation systems are usually mode-specific. That is, they focus on a single mode (i.e. railways, highways, or waterways). The lack of integration limits the usefulness of the models to analyze the...

  9. Evidence to Support the Componential Model of Creativity: Secondary Analyses of Three Studies.

    Science.gov (United States)

    Conti, Regina; And Others

    1996-01-01

    Three studies with overlapping participant populations evaluated Amabile's componential model of creativity, which postulates three major creativity components: (1) skills specific to the task domain, (2) general (cross-domain) creativity-relevant skills, and (3) task motivation. Findings of the three studies support Amabile's model. (DB)

  10. Analysing empowerment-oriented email consultation for parents : development of the Guiding the Empowerment Process model

    NARCIS (Netherlands)

    dr. Christa C.C. Nieuwboer

    2014-01-01

    Online consultation is increasingly offered by parenting practitioners, but it is not clear if it is feasible to provide empowerment-oriented support in a single session email consultation. Based on the empowerment theory, we developed the Guiding the Empowerment Process model (GEP model) to

  11. A laboratory-calibrated model of coho salmon growth with utility for ecological analyses

    Science.gov (United States)

    Manhard, Christopher V.; Som, Nicholas A.; Perry, Russell W.; Plumb, John M.

    2018-01-01

    We conducted a meta-analysis of laboratory- and hatchery-based growth data to estimate broadly applicable parameters of mass- and temperature-dependent growth of juvenile coho salmon (Oncorhynchus kisutch). Following studies of other salmonid species, we incorporated the Ratkowsky growth model into an allometric model and fit this model to growth observations from eight studies spanning ten different populations. To account for changes in growth patterns with food availability, we reparameterized the Ratkowsky model to scale several of its parameters relative to ration. The resulting model was robust across a wide range of ration allocations and experimental conditions, accounting for 99% of the variation in final body mass. We fit this model to growth data from coho salmon inhabiting tributaries and constructed ponds in the Klamath Basin by estimating habitat-specific indices of food availability. The model produced evidence that constructed ponds provided higher food availability than natural tributaries. Because of their simplicity (only mass and temperature are required as inputs) and robustness, ration-varying Ratkowsky models have utility as an ecological tool for capturing growth in freshwater fish populations.
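    A minimal sketch of the kind of mass- and temperature-dependent growth model described above is given below; the Ratkowsky temperature response and the allometric mass exponent are standard forms, but the parameter names and values, and the omission of the ration scaling, are assumptions rather than the calibrated model from the study.

```python
import numpy as np

def ratkowsky_rate(T, b=0.02, T_min=1.0, c=0.2, T_max=25.0):
    """Ratkowsky-type (square-root) temperature response, clipped at zero:
    sqrt(rate) = b * (T - T_min) * (1 - exp(c * (T - T_max)))."""
    root = b * (T - T_min) * (1.0 - np.exp(c * (T - T_max)))
    return np.clip(root, 0.0, None) ** 2

def grow(mass0, temps, dt=1.0, a=0.03, exponent=0.31):
    """Integrate an allometric growth model dM/dt = a * M**exponent * f(T)
    over a daily temperature series (placeholder parameter values)."""
    mass = mass0
    for T in temps:
        mass += a * mass ** exponent * ratkowsky_rate(T) * dt
    return mass

print(grow(1.0, np.full(60, 12.0)))  # 60 days at a constant 12 degrees C
```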

  12. Solving scheduling problems by untimed model checking. The clinical chemical analyser case study

    NARCIS (Netherlands)

    Margaria, T.; Wijs, Anton J.; Massink, M.; van de Pol, Jan Cornelis; Bortnik, Elena M.

    2009-01-01

    In this article, we show how scheduling problems can be modelled in untimed process algebra by using special tick actions. A minimal-cost trace leading to a particular action is one that minimises the number of tick steps. As a result, we can use any (timed or untimed) model checking tool to find
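    The idea of minimising the number of tick actions can be illustrated with a small search over a labelled transition system: ordinary actions cost nothing, tick costs one, and we look for a cheapest trace that reaches a goal action. This is only a toy illustration of the principle, not the process-algebra encoding or the tooling used in the article.

```python
import heapq

def cheapest_trace(transitions, start, goal_action):
    """Dijkstra-style search in which only 'tick' transitions add cost.

    transitions: dict mapping state -> list of (action, next_state).
    Returns (number_of_ticks, list_of_actions) or None if unreachable.
    """
    queue = [(0, start, [])]
    best = {start: 0}
    while queue:
        ticks, state, trace = heapq.heappop(queue)
        for action, nxt in transitions.get(state, []):
            new_ticks = ticks + (1 if action == "tick" else 0)
            if action == goal_action:
                return new_ticks, trace + [action]
            if new_ticks < best.get(nxt, float("inf")):
                best[nxt] = new_ticks
                heapq.heappush(queue, (new_ticks, nxt, trace + [action]))
    return None

# Toy system with two ways to reach 'done', one needing fewer ticks.
lts = {
    "s0": [("load", "s1"), ("tick", "s2")],
    "s1": [("tick", "s3")],
    "s2": [("tick", "s3")],
    "s3": [("done", "s4")],
}
print(cheapest_trace(lts, "s0", "done"))  # (1, ['load', 'tick', 'done'])
```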

  13. Stochastic Spatio-Temporal Models for Analysing NDVI Distribution of GIMMS NDVI3g Images

    Directory of Open Access Journals (Sweden)

    Ana F. Militino

    2017-01-01

    Full Text Available The normalized difference vegetation index (NDVI) is an important indicator for evaluating vegetation change, monitoring land surface fluxes or predicting crop models. Due to the great availability of images provided by different satellites in recent years, much attention has been devoted to testing trend changes with a time series of NDVI individual pixels. However, the spatial dependence inherent in these data is usually lost unless global scales are analyzed. In this paper, we propose incorporating both the spatial and the temporal dependence among pixels using a stochastic spatio-temporal model for estimating the NDVI distribution thoroughly. The stochastic model is a state-space model that uses meteorological data of the Climatic Research Unit (CRU TS3.10) as auxiliary information. The model will be estimated with the Expectation-Maximization (EM) algorithm. The result is a set of smoothed images providing an overall analysis of the NDVI distribution across space and time, where fluctuations generated by atmospheric disturbances, fire events, land-use/cover changes or engineering problems from image capture are treated as random fluctuations. The illustration is carried out with the third generation of NDVI images, termed NDVI3g, of the Global Inventory Modeling and Mapping Studies (GIMMS) in continental Spain. These data are taken in bimonthly periods from January 2011 to December 2013, but the model can be applied to many other variables, countries or regions with different resolutions.
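    In schematic form, the kind of state-space model described above can be written as a linear-Gaussian system whose parameters are estimated by the EM algorithm via Kalman filtering and smoothing; the notation below is generic and does not reproduce the paper's actual design matrices or covariance structures.

```latex
% Observation equation: NDVI observations y_t driven by a latent
% spatio-temporal state x_t; state equation with meteorological
% covariates z_t (e.g. the CRU data used as auxiliary information).
y_t = F_t\, x_t + \varepsilon_t, \qquad \varepsilon_t \sim N(0,\ \Sigma_\varepsilon),\\[4pt]
x_t = G\, x_{t-1} + B\, z_t + \eta_t, \qquad \eta_t \sim N(0,\ \Sigma_\eta).
```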

  14. Factor analyses of the Hospital Anxiety and Depression Scale: a Bayesian structural equation modeling approach.

    Science.gov (United States)

    Fong, Ted Chun Tat; Ho, Rainbow Tin Hung

    2013-12-01

    The latent structure of the Hospital Anxiety and Depression Scale (HADS) has caused inconsistent results in the literature. The HADS is frequently analyzed via maximum likelihood confirmatory factor analysis (ML-CFA). However, the overly restrictive assumption of exact zero cross-loadings and residual correlations in ML-CFA can lead to poor model fits and distorted factor structures. This study applied Bayesian structural equation modeling (BSEM) to evaluate the latent structure of the HADS. Three a priori models, the two-factor, three-factor, and bifactor models, were investigated in a Chinese community sample (N = 312) and clinical sample (N = 198) using ML-CFA and BSEM. BSEM specified approximate zero cross-loadings and residual correlations through the use of zero-mean, small-variance informative priors. The model comparison was based on the Bayesian information criterion (BIC). Using ML-CFA, none of the three models provided an adequate fit for either sample. The BSEM two-factor model with approximate zero cross-loadings and residual correlations fitted both samples well with the lowest BIC of the three models and displayed a simple and parsimonious factor-loading pattern. The study demonstrated that the two-factor structure fitted the HADS well, suggesting its usefulness in assessing the symptoms of anxiety and depression in clinical practice. BSEM is a sophisticated and flexible statistical technique that better reflects substantive theories and locates the source of model misfit. Future use of BSEM is recommended to evaluate the latent structure of other psychological instruments.
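    The key difference between the ML-CFA and BSEM specifications compared above lies in how the cross-loadings (and, analogously, the residual correlations) are treated; the prior variance shown is only an example of a "small-variance" choice, not a value taken from the study.

```latex
% ML-CFA fixes every cross-loading exactly to zero:
\lambda^{\text{cross}}_{jk} = 0 .
\\[4pt]
% BSEM replaces the exact zero with a zero-mean, small-variance prior, e.g.
\lambda^{\text{cross}}_{jk} \sim N\!\left(0,\ \sigma^2_{\text{small}}\right),
\qquad \sigma^2_{\text{small}} \approx 0.01,
% with residual correlations handled analogously through informative priors.
```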

  15. Analyses of precooling parameters for a bottom flooding ECCS rewetting velocity model

    International Nuclear Information System (INIS)

    Chun, M.H.

    1981-01-01

    An extension of the work of the previous paper on the rewetting velocity model is presented. Application of the rewetting velocity model presented elsewhere requires a priori values of phi. In the absence of phi values, film boiling heat transfer coefficient (hsub(df)) and fog-film length (l) data are needed. To provide this information, a modified Bromley's correlation is first derived and used to obtain hsub(df) values at higher pressure conditions. In addition, the analysis of the precooling parameters, such as phi and l, is further extended using much more expensive PWR FLECHT data. Thus, the applicable range of the rewetting velocity model is further expanded in this work. (author)

  16. Model-Based Fault Diagnosis: Performing Root Cause and Impact Analyses in Real Time

    Science.gov (United States)

    Figueroa, Jorge F.; Walker, Mark G.; Kapadia, Ravi; Morris, Jonathan

    2012-01-01

    Generic, object-oriented fault models, built according to causal-directed graph theory, have been integrated into an overall software architecture dedicated to monitoring and predicting the health of mission-critical systems. Processing over the generic fault models is triggered by event detection logic that is defined according to the specific functional requirements of the system and its components. Once triggered, the fault models provide an automated way for performing both upstream root cause analysis (RCA), and for predicting downstream effects or impact analysis. The methodology has been applied to integrated system health management (ISHM) implementations at NASA SSC's Rocket Engine Test Stands (RETS).

  17. Statistical Modelling of Synaptic Vesicles Distribution and Analysing their Physical Characteristics

    DEFF Research Database (Denmark)

    Khanmohammadi, Mahdieh

    This Ph.D. thesis deals with mathematical and statistical modeling of synaptic vesicle distribution, shape, orientation and interactions. The first major part of this thesis treats the problem of determining the effect of stress on synaptic vesicle distribution and interactions. Serial section...... transmission electron microscopy is used to acquire images from two experimental groups of rats: 1) rats subjected to a behavioral model of stress and 2) rats subjected to sham stress as the control group. The synaptic vesicle distribution and interactions are modeled by employing a point process approach...... on differences of statistical measures in section and the same measures in between sections. Three-dimensional (3D) datasets are reconstructed by using image registration techniques and estimated thicknesses. We distinguish the effect of stress by estimating the synaptic vesicle densities and modeling...

  18. Wave modelling for the North Indian Ocean using MSMR analysed winds

    Digital Repository Service at National Institute of Oceanography (India)

    Vethamony, P.; Sudheesh, K.; Rupali, S.P.; Babu, M.T.; Jayakumar, S.; Saran, A.K.; Basu, S.K.; Kumar, R.; Sarkar, A.

    NCMRWF (National Centre for Medium Range Weather Forecast) winds assimilated with MSMR (Multi-channel Scanning Microwave Radiometer) winds are used as input to MIKE21 Offshore Spectral Wave model (OSW) which takes into account wind induced wave...

  19. Reference model for measuring and analysing costs – particularly in business informatics

    Directory of Open Access Journals (Sweden)

    Milos Maryska

    2010-04-01

    Full Text Available This paper is devoted to problems of managing the cost efficiency of business informatics with Business Intelligence (BI) assistance. It defines the basic critical points that must be taken into account when creating models for managing the cost efficiency of business informatics. It proposes a new model for managing cost efficiency; this model also includes definitions of dimensions and indicators for measuring that cost efficiency. The model takes into account the requirements placed on cost-efficiency management by accounting, as well as requirements gathered in consultations with company managers. It also takes into account the requirements of methodologies for managing business informatics and of methods and processes for evaluating and measuring business informatics. These methodologies, methods and processes are transformed into procedures appropriate for measuring and evaluating business informatics. The model is intended for monitoring the current situation and the evolution of the cost efficiency of business informatics, and it can be used for making decisions about whether outsourcing business informatics is worthwhile. Some examples from the presentation level of the model for measuring the cost efficiency of business informatics are given; this presentation level takes the form of tables and dashboards.

  20. Analysing stratified medicine business models and value systems: innovation-regulation interactions.

    Science.gov (United States)

    Mittra, James; Tait, Joyce

    2012-09-15

    Stratified medicine offers both opportunities and challenges to the conventional business models that drive pharmaceutical R&D. Given the increasingly unsustainable blockbuster model of drug development, due in part to maturing product pipelines, alongside increasing demands from regulators, healthcare providers and patients for higher standards of safety, efficacy and cost-effectiveness of new therapies, stratified medicine promises a range of benefits to pharmaceutical and diagnostic firms as well as healthcare providers and patients. However, the transition from 'blockbusters' to what might now be termed 'niche-busters' will require the adoption of new, innovative business models, the identification of different and perhaps novel types of value along the R&D pathway, and a smarter approach to regulation to facilitate innovation in this area. In this paper we apply the Innogen Centre's interdisciplinary ALSIS methodology, which we have developed for the analysis of life science innovation systems in contexts where the value creation process is lengthy, expensive and highly uncertain, to this emerging field of stratified medicine. In doing so, we consider the complex collaboration, timing, coordination and regulatory interactions that shape business models, value chains and value systems relevant to stratified medicine. More specifically, we explore in some depth two convergence models for co-development of a therapy and diagnostic before market authorisation, highlighting the regulatory requirements and policy initiatives within the broader value system environment that have a key role in determining the probable success and sustainability of these models. Copyright © 2012 Elsevier B.V. All rights reserved.

  1. Analysing the Costs of Integrated Care: A Case on Model Selection for Chronic Care Purposes

    Directory of Open Access Journals (Sweden)

    Marc Carreras

    2016-08-01

    Full Text Available Background: The objective of this study is to investigate whether the algorithm proposed by Manning and Mullahy, a consolidated health economics procedure, can also be used to estimate individual costs for different groups of healthcare services in the context of integrated care. Methods: A cross-sectional study focused on the population of the Baix Empordà (Catalonia, Spain) for the year 2012 (N = 92,498 individuals). A set of individual cost models as a function of sex, age and morbidity burden were adjusted and individual healthcare costs were calculated using a retrospective full-costing system. The individual morbidity burden was inferred using the Clinical Risk Groups (CRG) patient classification system. Results: Depending on the characteristics of the data, and according to the algorithm criteria, the choice of model was a linear model on the log of costs or a generalized linear model with a log link. We checked for goodness of fit, accuracy, linear structure and heteroscedasticity for the models obtained. Conclusion: The proposed algorithm identified a set of suitable cost models for the distinct groups of services integrated care entails. The individual morbidity burden was found to be indispensable when allocating appropriate resources to targeted individuals.
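    A minimal sketch of the two candidate specifications the algorithm chooses between is shown below, using statsmodels on hypothetical data; the column names, the Gamma family choice and the omission of the algorithm's diagnostic steps (e.g. the Park test and retransformation checks) are simplifications of the full Manning and Mullahy procedure.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical individual-level data: positive costs plus two covariates.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "cost": rng.gamma(shape=2.0, scale=800.0, size=500),
    "age": rng.integers(18, 90, size=500),
    "female": rng.binomial(1, 0.5, size=500),
})
X = sm.add_constant(df[["age", "female"]].astype(float))

# Candidate 1: OLS on log costs (predictions on the cost scale would need a
# retransformation, e.g. a smearing estimator).
log_ols = sm.OLS(np.log(df["cost"]), X).fit()

# Candidate 2: GLM with a log link; Gamma is one common family choice, to be
# confirmed by the algorithm's diagnostic tests in a real analysis.
glm_log = sm.GLM(df["cost"], X,
                 family=sm.families.Gamma(link=sm.families.links.Log())).fit()

print(log_ols.params, glm_log.params, sep="\n\n")
```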

  2. Thermo-mechanical analyses and model validation in the HAW test field. Final report

    International Nuclear Information System (INIS)

    Heijdra, J.J.; Broerse, J.; Prij, J.

    1995-01-01

    An overview is given of the thermo-mechanical analysis work done for the design of the High Active Waste experiment and for the purpose of validation of the used models through comparison with experiments. A brief treatise is given on the problems of validation of models used for the prediction of physical behaviour which cannot be determined with experiments. The analysis work encompasses investigations into the initial state of stress in the field, the constitutive relations, the temperature rise, and the pressure on the liner tubes inserted in the field to guarantee the retrievability of the radioactive sources used for the experiment. The measurements of temperatures, deformations, and stresses are described and an evaluation is given of the comparison of measured and calculated data. An attempt has been made to qualify or even quantify the discrepancies, if any, between measurements and calculations. It was found that the model for the temperature calculations performed adequately. For the stresses the general tendency was good, however, large discrepancies exist mainly due to inaccuracies in the measurements. For the deformations again the general tendency of the model predictions was in accordance with the measurements. However, from the evaluation it appears that in spite of the efforts to estimate the correct initial rock pressure at the location of the experiment, this pressure has been underestimated. The evaluation has contributed to a considerable increase in confidence in the models and gives no reason to question the constitutive model for rock salt. However, due to the quality of the measurements of the stress and the relatively short period of the experiments no quantitatively firm support for the constitutive model is acquired. Collections of graphs giving the measured and calculated data are attached as appendices. (orig.)

  3. Comprehensive analyses of ventricular myocyte models identify targets exhibiting favorable rate dependence.

    Directory of Open Access Journals (Sweden)

    Megan A Cummins

    2014-03-01

    Full Text Available Reverse rate dependence is a problematic property of antiarrhythmic drugs that prolong the cardiac action potential (AP). The prolongation caused by reverse rate dependent agents is greater at slow heart rates, resulting in both reduced arrhythmia suppression at fast rates and increased arrhythmia risk at slow rates. The opposite property, forward rate dependence, would theoretically overcome these parallel problems, yet forward rate dependent (FRD) antiarrhythmics remain elusive. Moreover, there is evidence that reverse rate dependence is an intrinsic property of perturbations to the AP. We have addressed the possibility of forward rate dependence by performing a comprehensive analysis of 13 ventricular myocyte models. By simulating populations of myocytes with varying properties and analyzing population results statistically, we simultaneously predicted the rate-dependent effects of changes in multiple model parameters. An average of 40 parameters were tested in each model, and effects on AP duration were assessed at slow (0.2 Hz) and fast (2 Hz) rates. The analysis identified a variety of FRD ionic current perturbations and generated specific predictions regarding their mechanisms. For instance, an increase in L-type calcium current is FRD when this is accompanied by indirect, rate-dependent changes in slow delayed rectifier potassium current. A comparison of predictions across models identified inward rectifier potassium current and the sodium-potassium pump as the two targets most likely to produce FRD AP prolongation. Finally, a statistical analysis of results from the 13 models demonstrated that models displaying minimal rate-dependent changes in AP shape have little capacity for FRD perturbations, whereas models with large shape changes have considerable FRD potential. This can explain differences between species and between ventricular cell types. Overall, this study provides new insights, both specific and general, into the determinants of

  4. Scenario sensitivity analyses performed on the PRESTO-EPA LLW risk assessment models

    International Nuclear Information System (INIS)

    Bandrowski, M.S.

    1988-01-01

    The US Environmental Protection Agency (EPA) is currently developing standards for the land disposal of low-level radioactive waste. As part of the standard development, EPA has performed risk assessments using the PRESTO-EPA codes. A program of sensitivity analysis was conducted on the PRESTO-EPA codes, consisting of single parameter sensitivity analysis and scenario sensitivity analysis. The results of the single parameter sensitivity analysis were discussed at the 1987 DOE LLW Management Conference. Specific scenario sensitivity analyses have been completed and evaluated. Scenario assumptions that were analyzed include: site location, disposal method, form of waste, waste volume, analysis time horizon, critical radionuclides, use of buffer zones, and global health effects

  5. Search Engine Marketing (SEM: Financial & Competitive Advantages of an Effective Hotel SEM Strategy

    Directory of Open Access Journals (Sweden)

    Leora Halpern Lanz

    2015-05-01

    Full Text Available Search Engine Marketing and Optimization (SEO, SEM) are keystones of a hotel's marketing strategy; in fact, research shows that 90% of travelers start their vacation planning with a Google search. Learn five strategies that can enhance a hotel's SEO and SEM efforts to boost bookings.

  6. Human Atrial Cell Models to Analyse Haemodialysis-Related Effects on Cardiac Electrophysiology: Work in Progress

    Directory of Open Access Journals (Sweden)

    Elisa Passini

    2014-01-01

    Full Text Available During haemodialysis (HD) sessions, patients undergo alterations in the extracellular environment, mostly concerning plasma electrolyte concentrations, pH, and volume, together with a modification of sympathovagal balance. All these changes affect cardiac electrophysiology, possibly leading to an increased arrhythmic risk. Computational modeling may help to investigate the impact of HD-related changes on atrial electrophysiology. However, many different human atrial action potential (AP) models are currently available, all validated only with the standard electrolyte concentrations used in experiments. Therefore, they may respond in different ways to the same environmental changes. After an overview on how the computational approach has been used in the past to investigate the effect of HD therapy on cardiac electrophysiology, the aim of this work has been to assess the current state of the art in human atrial AP models, with respect to the HD context. All the published human atrial AP models have been considered and tested for electrolytes, volume changes, and different acetylcholine concentrations. Most of them proved to be reliable for single modifications, but all of them showed some drawbacks. Therefore, there is room for a new human atrial AP model, hopefully able to physiologically reproduce all the HD-related effects. At the moment, work is still in progress in this specific field.

  7. Modelling and simulation of complex sociotechnical systems: envisioning and analysing work environments

    Science.gov (United States)

    Hettinger, Lawrence J.; Kirlik, Alex; Goh, Yang Miang; Buckle, Peter

    2015-01-01

    Accurate comprehension and analysis of complex sociotechnical systems is a daunting task. Empirically examining, or simply envisioning the structure and behaviour of such systems challenges traditional analytic and experimental approaches as well as our everyday cognitive capabilities. Computer-based models and simulations afford potentially useful means of accomplishing sociotechnical system design and analysis objectives. From a design perspective, they can provide a basis for a common mental model among stakeholders, thereby facilitating accurate comprehension of factors impacting system performance and potential effects of system modifications. From a research perspective, models and simulations afford the means to study aspects of sociotechnical system design and operation, including the potential impact of modifications to structural and dynamic system properties, in ways not feasible with traditional experimental approaches. This paper describes issues involved in the design and use of such models and simulations and describes a proposed path forward to their development and implementation. Practitioner Summary: The size and complexity of real-world sociotechnical systems can present significant barriers to their design, comprehension and empirical analysis. This article describes the potential advantages of computer-based models and simulations for understanding factors that impact sociotechnical system design and operation, particularly with respect to process and occupational safety. PMID:25761227

  8. The Computational Fluid Dynamics Analyses on Hemodynamic Characteristics in Stenosed Arterial Models

    Directory of Open Access Journals (Sweden)

    Yue Zhou

    2018-01-01

    Full Text Available Arterial stenosis plays an important role in the progression of thrombosis and stroke. In the present study, a standard axisymmetric tube model of the stenotic artery is introduced and the degree of stenosis η is evaluated by the area ratio of the blockage to the normal vessel. A normal case (η=0) and four stenotic cases of η=0.25, 0.5, 0.625, and 0.75 with a constant Reynolds number of 300 are simulated by computational fluid dynamics (CFD), respectively, with the Newtonian and Carreau models for comparison. Results show that for both models, the poststenotic separation vortex length increases exponentially with the growth of stenosis degree. However, the vortex length of the Carreau model is shorter than that of the Newtonian model. The artery narrowing accelerates blood flow, which causes high blood pressure and wall shear stress (WSS). The pressure drop of the η=0.75 case is nearly 8 times that of the normal value, while the WSS peak at the stenosis region of the η=0.75 case even reaches up to 15 times that of the normal value. The present conclusions are of generality and contribute to the understanding of the dynamic mechanisms of artery stenosis diseases.
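    For reference, the Carreau viscosity model contrasted with the Newtonian model above is usually written in the following standard form (parameter values are not taken from the paper):

```latex
% Carreau model: apparent viscosity as a function of shear rate \dot{\gamma},
% with zero- and infinite-shear viscosities \mu_0 and \mu_\infty,
% relaxation time \lambda and power-law index n; a Newtonian fluid
% corresponds to a constant viscosity.
\mu(\dot{\gamma}) \;=\; \mu_\infty + \left(\mu_0 - \mu_\infty\right)
\left[\, 1 + (\lambda \dot{\gamma})^{2} \right]^{\frac{n-1}{2}} .
```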

  9. Monte Carlo modeling of Standard Model multi-boson production processes for $\\sqrt{s} = 13$ TeV ATLAS analyses

    CERN Document Server

    Li, Shu; The ATLAS collaboration

    2017-01-01

    Proceeding for the poster presentation at LHCP2017, Shanghai, China on the topic of "Monte Carlo modeling of Standard Model multi-boson production processes for $\\sqrt{s} = 13$ TeV ATLAS analyses" (ATL-PHYS-SLIDE-2017-265 https://cds.cern.ch/record/2265389) Deadline: 01/09/2017

  10. Latent Variable Modelling and Item Response Theory Analyses in Marketing Research

    Directory of Open Access Journals (Sweden)

    Brzezińska Justyna

    2016-12-01

    Full Text Available Item Response Theory (IRT) is a modern statistical method using latent variables designed to model the interaction between a subject's ability and the item-level stimuli (difficulty, guessing). Item responses are treated as the outcome (dependent) variables, and the examinee's ability and the items' characteristics are the latent predictor (independent) variables. IRT models the relationship between a respondent's trait (ability, attitude) and the pattern of item responses. Thus, the estimation of individual latent traits can differ even for two individuals with the same total scores. IRT scores can yield additional benefits, and this will be discussed in detail. In this paper, the theory and an application in R, using packages designed for IRT modelling, will be presented.
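    As a concrete example of the models the abstract refers to, the three-parameter logistic (3PL) IRT model relates the probability of a correct response to ability, discrimination, difficulty and guessing; this is the standard formulation rather than anything specific to a particular R package.

```latex
% 3PL model: \theta_i is the latent trait of respondent i; a_j, b_j and c_j
% are the discrimination, difficulty and guessing parameters of item j.
P\!\left(X_{ij} = 1 \mid \theta_i\right)
  \;=\; c_j + \left(1 - c_j\right)
  \frac{1}{1 + \exp\!\big(-a_j\,(\theta_i - b_j)\big)} .
```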

  11. Analysing improvements to on-street public transport systems: a mesoscopic model approach

    DEFF Research Database (Denmark)

    Ingvardson, Jesper Bláfoss; Kornerup Jensen, Jonas; Nielsen, Otto Anker

    2017-01-01

    Light rail transit and bus rapid transit have been shown to be efficient and cost-effective in improving public transport systems in cities around the world. As these systems comprise various elements, which can be tailored to any given setting, e.g. pre-board fare-collection, holding strategies...... a mesoscopic model which makes it possible to evaluate public transport operations in detail, including dwell times, intelligent traffic signal timings and holding strategies while modelling impacts from other traffic using statistical distributional data thereby ensuring simplicity in use and fast...... and other advanced public transport systems (APTS), the attractiveness of such systems depends heavily on their implementation. In the early planning stage it is advantageous to deploy simple and transparent models to evaluate possible ways of implementation. For this purpose, the present study develops

  12. Accounting for Heterogeneity in Relative Treatment Effects for Use in Cost-Effectiveness Models and Value-of-Information Analyses.

    Science.gov (United States)

    Welton, Nicky J; Soares, Marta O; Palmer, Stephen; Ades, Anthony E; Harrison, David; Shankar-Hari, Manu; Rowan, Kathy M

    2015-07-01

    Cost-effectiveness analysis (CEA) models are routinely used to inform health care policy. Key model inputs include relative effectiveness of competing treatments, typically informed by meta-analysis. Heterogeneity is ubiquitous in meta-analysis, and random effects models are usually used when there is variability in effects across studies. In the absence of observed treatment effect modifiers, various summaries from the random effects distribution (random effects mean, predictive distribution, random effects distribution, or study-specific estimate [shrunken or independent of other studies]) can be used depending on the relationship between the setting for the decision (population characteristics, treatment definitions, and other contextual factors) and the included studies. If covariates have been measured that could potentially explain the heterogeneity, then these can be included in a meta-regression model. We describe how covariates can be included in a network meta-analysis model and how the output from such an analysis can be used in a CEA model. We outline a model selection procedure to help choose between competing models and stress the importance of clinical input. We illustrate the approach with a health technology assessment of intravenous immunoglobulin for the management of adult patients with severe sepsis in an intensive care setting, which exemplifies how risk of bias information can be incorporated into CEA models. We show that the results of the CEA and value-of-information analyses are sensitive to the model and highlight the importance of sensitivity analyses when conducting CEA in the presence of heterogeneity. The methods presented extend naturally to heterogeneity in other model inputs, such as baseline risk. © The Author(s) 2015.
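    Schematically, the random-effects (network) meta-regression described above can be written as follows; the notation is generic and illustrative rather than quoted from the paper.

```latex
% Study k compares treatment t(k) with control c(k) and reports effect y_k
% with standard error s_k; x_k is a study-level covariate.
y_k \sim N\!\left(\delta_k,\ s_k^2\right), \qquad
\delta_k \sim N\!\left(d_{t(k)} - d_{c(k)} + \beta\, x_k,\ \tau^2\right),
\qquad d_1 = 0,
% where \tau^2 is the residual between-study variance.  Depending on the
% decision setting, the CEA model may use the random-effects mean, the
% predictive distribution N(d_t - d_c + \beta x_{new}, \tau^2), or a
% study-specific (shrunken) estimate.
```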

  13. Features and analyses of W7-X cryostat system FE model

    Energy Technology Data Exchange (ETDEWEB)

    Eeten, Paul van, E-mail: paul.van.eeten@ipp.mpg.de; Bräuer, Torsten; Bykov, Victor; Carls, Andre; Fellinger, Joris; Kallmeyer, J.P.

    2015-10-15

    The Wendelstein 7-X stellarator is presently under construction at the Max-Planck-Institute for Plasma Physics in Greifswald with the goal to verify that a stellarator magnetic confinement concept is a viable option for a fusion power plant. The main components of the W7-X cryostat system are the plasma vessel (PV), outer vessel (OV), ports, thermal insulation, vessel supports and the machine base (MB). The main task of the cryostat system is to provide an insulating vacuum for the cryogenic magnet system while allowing external access to the PV through ports for diagnostic, supply and heating systems. The cryostat is subjected to different types of loads during assembly, maintenance and operation. This ranges from basic weight loads from all installed components to mechanical, vacuum and thermal loads. To predict the behavior of the cryostat in terms of deformations, stresses and support load distribution a finite element (FE) global model has been created called the Global Model of the Cryostat System (GMCS). A complete refurbishment of the GMCS has been done in the last 2 years to prepare the model for future applications. This involved a complete mesh update of the model, an improvement of many model features, an update of the applied operational loads and boundary conditions as well as the creation of automatic post processing procedures. Currently the GMCS is used to support several significant assembly and commissioning steps of W7-X that involve the cryostat system, e.g. the removal of temporary supports beneath the MB, transfer of the PV from temporary to the final supports and evacuation of the cryostat. In the upcoming months the model will be used for further support of the commissioning of W7-X which includes the first evacuation of the PV.

  14. Features and analyses of W7-X cryostat system FE model

    International Nuclear Information System (INIS)

    Eeten, Paul van; Bräuer, Torsten; Bykov, Victor; Carls, Andre; Fellinger, Joris; Kallmeyer, J.P.

    2015-01-01

    The Wendelstein 7-X stellarator is presently under construction at the Max-Planck-Institute for Plasma Physics in Greifswald with the goal to verify that a stellarator magnetic confinement concept is a viable option for a fusion power plant. The main components of the W7-X cryostat system are the plasma vessel (PV), outer vessel (OV), ports, thermal insulation, vessel supports and the machine base (MB). The main task of the cryostat system is to provide an insulating vacuum for the cryogenic magnet system while allowing external access to the PV through ports for diagnostic, supply and heating systems. The cryostat is subjected to different types of loads during assembly, maintenance and operation. This ranges from basic weight loads from all installed components to mechanical, vacuum and thermal loads. To predict the behavior of the cryostat in terms of deformations, stresses and support load distribution a finite element (FE) global model has been created called the Global Model of the Cryostat System (GMCS). A complete refurbishment of the GMCS has been done in the last 2 years to prepare the model for future applications. This involved a complete mesh update of the model, an improvement of many model features, an update of the applied operational loads and boundary conditions as well as the creation of automatic post processing procedures. Currently the GMCS is used to support several significant assembly and commissioning steps of W7-X that involve the cryostat system, e.g. the removal of temporary supports beneath the MB, transfer of the PV from temporary to the final supports and evacuation of the cryostat. In the upcoming months the model will be used for further support of the commissioning of W7-X which includes the first evacuation of the PV.

  15. Building a SEM Analytics Reporting Portfolio

    Science.gov (United States)

    Goff, Jay W.; Williams, Brian G.; Kilgore, Wendy

    2016-01-01

    Effective strategic enrollment management (SEM) efforts require vast amounts of internal and external data to ensure that meaningful reporting and analysis systems can assist managers in decision making. A wide range of information is integral for leading effective and efficient student recruitment and retention programs. This article is designed…

  16. Does Sexually Explicit Media (SEM) Affect Me?

    DEFF Research Database (Denmark)

    Hald, Gert Martin; Træen, Bente; Noor, Syed W

    2015-01-01

    and understanding of one's sexual orientation. First-person effects refer to self-perceived and self-reported effects of SEM consumption as experienced by the consumer. In addition, the study examined and provided a thorough validation of the psychometric properties of the seven-item Pornography Consumption Effect...

  17. Modeling human papillomavirus and cervical cancer in the United States for analyses of screening and vaccination

    Directory of Open Access Journals (Sweden)

    Ortendahl Jesse

    2007-10-01

    Full Text Available Background: To provide quantitative insight into current U.S. policy choices for cervical cancer prevention, we developed a model of human papillomavirus (HPV) and cervical cancer, explicitly incorporating uncertainty about the natural history of disease. Methods: We developed a stochastic microsimulation of cervical cancer that distinguishes different HPV types by their incidence, clearance, persistence, and progression. Input parameter sets were sampled randomly from uniform distributions, and simulations undertaken with each set. Through systematic reviews and formal data synthesis, we established multiple epidemiologic targets for model calibration, including age-specific prevalence of HPV by type, age-specific prevalence of cervical intraepithelial neoplasia (CIN), HPV type distribution within CIN and cancer, and age-specific cancer incidence. For each set of sampled input parameters, likelihood-based goodness-of-fit (GOF) scores were computed based on comparisons between model-predicted outcomes and calibration targets. Using 50 randomly resampled, good-fitting parameter sets, we assessed the external consistency and face validity of the model, comparing predicted screening outcomes to independent data. To illustrate the advantage of this approach in reflecting parameter uncertainty, we used the 50 sets to project the distribution of health outcomes in U.S. women under different cervical cancer prevention strategies. Results: Approximately 200 good-fitting parameter sets were identified from 1,000,000 simulated sets. Modeled screening outcomes were externally consistent with results from multiple independent data sources. Based on 50 good-fitting parameter sets, the expected reductions in lifetime risk of cancer with annual or biennial screening were 76% (range across 50 sets: 69–82%) and 69% (60–77%), respectively. The reduction from vaccination alone was 75%, although it ranged from 60% to 88%, reflecting considerable parameter
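    The calibration loop the abstract describes (randomly sampling natural-history parameters, scoring each set against the epidemiologic targets, and retaining the good-fitting sets) can be sketched as below; simulate_outcomes, the target structure and the normal-likelihood score are hypothetical placeholders, not the actual microsimulation or its goodness-of-fit metric.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_parameters(bounds):
    """Draw one parameter set from independent uniform ranges."""
    return {name: rng.uniform(lo, hi) for name, (lo, hi) in bounds.items()}

def gof_score(predicted, targets):
    """Likelihood-based goodness of fit: a normal log-likelihood term for
    each calibration target, given as (mean, standard error)."""
    return sum(-0.5 * ((predicted[k] - m) / se) ** 2
               for k, (m, se) in targets.items())

def calibrate(bounds, targets, simulate_outcomes, n_draws=100_000, keep=200):
    """Return the `keep` best-scoring parameter sets out of `n_draws`."""
    scored = []
    for _ in range(n_draws):
        params = sample_parameters(bounds)
        scored.append((gof_score(simulate_outcomes(params), targets), params))
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return scored[:keep]
```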

  18. GSEVM v.2: MCMC software to analyse genetically structured environmental variance models

    DEFF Research Database (Denmark)

    Ibáñez-Escriche, N; Garcia, M; Sorensen, D

    2010-01-01

    This note provides a description of software that allows to fit Bayesian genetically structured variance models using Markov chain Monte Carlo (MCMC). The gsevm v.2 program was written in Fortran 90. The DOS and Unix executable programs, the user's guide, and some example files are freely available...... for research purposes at http://www.bdporc.irta.es/estudis.jsp. The main feature of the program is to compute Monte Carlo estimates of marginal posterior distributions of parameters of interest. The program is quite flexible, allowing the user to fit a variety of linear models at the level of the mean...

  19. Studies of the Earth Energy Budget and Water Cycle Using Satellite Observations and Model Analyses

    Science.gov (United States)

    Campbell, G. G.; VonderHarr, T. H.; Randel, D. L.; Kidder, S. Q.

    1997-01-01

    During this research period we have utilized the ERBE data set in comparisons with surface properties and water vapor observations in the atmosphere. A relationship between cloudiness and surface temperature anomalies was found. This same relationship was found in a general circulation model, verifying the model. The attempt to construct a homogeneous time series from Nimbus 6, Nimbus 7 and ERBE data is not complete because we are still waiting for the ERBE reanalysis to be completed. It will be difficult to merge the Nimbus 6 data in because its observations occurred when the average weather was different from that of the other periods, so regression adjustments are not effective.

  20. Empirical analyses of a choice model that captures ordering among attribute values

    DEFF Research Database (Denmark)

    Mabit, Stefan Lindhard

    2017-01-01

    an alternative additionally because it has the highest price. In this paper, we specify a discrete choice model that takes into account the ordering of attribute values across alternatives. This model is used to investigate the effect of attribute value ordering in three case studies related to alternative-fuel...... vehicles, mode choice, and route choice. In our application to choices among alternative-fuel vehicles, we see that especially the price coefficient is sensitive to changes in ordering. The ordering effect is also found in the applications to mode and route choice data where both travel time and cost...

  1. Survival data analyses in ecotoxicology: critical effect concentrations, methods and models. What should we use?

    Science.gov (United States)

    Forfait-Dubuc, Carole; Charles, Sandrine; Billoir, Elise; Delignette-Muller, Marie Laure

    2012-05-01

    In ecotoxicology, critical effect concentrations are the most common indicators to quantitatively assess risks for species exposed to contaminants. Three types of critical effect concentrations are classically used: lowest/no observed effect concentration (LOEC/NOEC), LC(x) (x% lethal concentration) and NEC (no effect concentration). In this article, for each of these three types of critical effect concentration, we compared methods or models used for their estimation and proposed one as the most appropriate. We then compared these critical effect concentrations to each other. For that, we used nine survival data sets corresponding to D. magna exposure to nine different contaminants, for which the time-course of the response was monitored. Our results showed that: (i) LOEC/NOEC values at day 21 were method-dependent, and that the Cochran-Armitage test with a step-down procedure appeared to be the most protective for the environment; (ii) all tested concentration-response models we compared gave close values of LC50 at day 21, nevertheless the Weibull model had the lowest global mean deviance; (iii) a simple threshold NEC model, both concentration and time dependent, more completely described the whole data set (i.e. all time points) and enabled a precise estimation of the NEC. We then compared the three critical effect concentrations and argued that the use of the NEC might be a good option for environmental risk assessment.
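    One common way to write a concentration- and time-dependent threshold (NEC) survival model of the kind compared above uses scaled internal-concentration kinetics and a hazard that switches on above the threshold; this is a generic DEBtox/GUTS-style formulation, not necessarily the exact model of the study.

```latex
% c: exposure concentration; q(t): scaled internal concentration;
% k_e: elimination rate; h_0: background hazard; k_k: killing rate.
\frac{dq(t)}{dt} = k_e\,\bigl(c - q(t)\bigr), \qquad
h(t) = h_0 + k_k\,\max\!\bigl(0,\ q(t) - \mathrm{NEC}\bigr), \qquad
S(t) = \exp\!\Bigl(-\int_0^{t} h(u)\,du\Bigr).
```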

  2. Cyclodextrin--piroxicam inclusion complexes: analyses by mass spectrometry and molecular modelling

    Science.gov (United States)

    Gallagher, Richard T.; Ball, Christopher P.; Gatehouse, Deborah R.; Gates, Paul J.; Lobell, Mario; Derrick, Peter J.

    1997-11-01

    Mass spectrometry has been used to investigate the nature of the non-covalent complexes formed between the anti-inflammatory drug piroxicam and α-, β- and γ-cyclodextrins. Energies of these complexes have been calculated by means of molecular modelling. There is a correlation between peak intensities in the mass spectra and the calculated energies.

  3. Quantitative analyses and modelling to support achievement of the 2020 goals for nine neglected tropical diseases

    NARCIS (Netherlands)

    T.D. Hollingsworth (T. Déirdre); E.R. Adams (Emily R.); R.M. Anderson (Roy); K. Atkins (Katherine); S. Bartsch (Sarah); M-G. Basáñez (María-Gloria); M. Behrend (Matthew); D.J. Blok (David); L.A.C. Chapman (Lloyd A. C.); L.E. Coffeng (Luc); O. Courtenay (Orin); R.E. Crump (Ron E.); S.J. de Vlas (Sake); A.P. Dobson (Andrew); L. Dyson (Louise); H. Farkas (Hajnal); A.P. Galvani (Alison P.); M. Gambhir (Manoj); D. Gurarie (David); M.A. Irvine (Michael A.); S. Jervis (Sarah); M.J. Keeling (Matt J.); L. Kelly-Hope (Louise); C. King (Charles); B.Y. Lee (Bruce Y.); E.A. le Rutte (Epke); T.M. Lietman (Thomas M.); M. Ndeffo-Mbah (Martial); G.F. Medley (Graham F.); E. Michael (Edwin); A. Pandey (Abhishek); J.K. Peterson (Jennifer K.); A. Pinsent (Amy); T.C. Porco (Travis C.); J.H. Richardus (Jan Hendrik); L. Reimer (Lisa); K.S. Rock (Kat S.); B.K. Singh (Brajendra K.); W.A. Stolk (Wilma); S. Swaminathan (Subramanian); S.J. Torr (Steve J.); J. Townsend (Jeffrey); J. Truscott (James); M. Walker (Martin); A. Zoueva (Alexandra)

    2015-01-01

    Quantitative analysis and mathematical models are useful tools in informing strategies to control or eliminate disease. Currently, there is an urgent need to develop these tools to inform policy to achieve the 2020 goals for neglected tropical diseases (NTDs). In this paper we give an

  4. Analyses of gust fronts by means of limited area NWP model outputs

    Czech Academy of Sciences Publication Activity Database

    Kašpar, Marek

    67-68, - (2003), s. 559-572 ISSN 0169-8095 R&D Projects: GA ČR GA205/00/1451 Institutional research plan: CEZ:AV0Z3042911 Keywords : gust front * limited area NWP model * output Subject RIV: DG - Athmosphere Sciences, Meteorology Impact factor: 1.012, year: 2003

  5. Using Latent Trait Measurement Models to Analyse Attitudinal Data: A Synthesis of Viewpoints.

    Science.gov (United States)

    Andrich, David

    A Rasch model for ordered response categories is derived and it is shown that it retains the key features of both the Thurstone and Likert approaches to studying attitude. Key features of the latter approaches are reviewed. Characteristics in common with the Thurstone approach are: statements are scaled with respect to their affective values;…

  6. Lightning NOx Statistics Derived by NASA Lightning Nitrogen Oxides Model (LNOM) Data Analyses

    Science.gov (United States)

    Koshak, William; Peterson, Harold

    2013-01-01

    What is the LNOM? The NASA Marshall Space Flight Center (MSFC) Lightning Nitrogen Oxides Model (LNOM) [Koshak et al., 2009, 2010, 2011; Koshak and Peterson 2011, 2013] analyzes VHF Lightning Mapping Array (LMA) and National Lightning Detection Network™ (NLDN) data to estimate the lightning nitrogen oxides (LNOx) produced by individual flashes. Figure 1 provides an overview of LNOM functionality. Benefits of LNOM: (1) Does away with unrealistic "vertical stick" lightning channel models for estimating LNOx; (2) Uses ground-based VHF data that maps out the true channel in space and time to < 100 m accuracy; (3) Therefore, true channel segment height (ambient air density) is used to compute LNOx; (4) True channel length is used! (typically tens of kilometers since channel has many branches and "wiggles"); (5) Distinction between ground and cloud flashes is made; (6) For ground flashes, actual peak current from NLDN used to compute NOx from lightning return stroke; (7) NOx computed for several other lightning discharge processes (based on Cooray et al., 2009 theory): (a) Hot core of stepped leaders and dart leaders, (b) Corona sheath of stepped leader, (c) K-change, (d) Continuing Currents, and (e) M-components; and (8) LNOM statistics (see later) can be used to parameterize LNOx production for regional air quality models (like CMAQ), and for global chemical transport models (like GEOS-Chem).

  7. Quantitative structure-activity relationship models of chemical transformations from matched pairs analyses.

    Science.gov (United States)

    Beck, Jeremy M; Springer, Clayton

    2014-04-28

    The concepts of activity cliffs and matched molecular pairs (MMP) are recent paradigms for analysis of data sets to identify structural changes that may be used to modify the potency of lead molecules in drug discovery projects. Analysis of MMPs was recently demonstrated as a feasible technique for quantitative structure-activity relationship (QSAR) modeling of prospective compounds. Although within a small data set, the lack of matched pairs, and the lack of knowledge about specific chemical transformations limit prospective applications. Here we present an alternative technique that determines pairwise descriptors for each matched pair and then uses a QSAR model to estimate the activity change associated with a chemical transformation. The descriptors effectively group similar transformations and incorporate information about the transformation and its local environment. Use of a transformation QSAR model allows one to estimate the activity change for novel transformations and therefore returns predictions for a larger fraction of test set compounds. Application of the proposed methodology to four public data sets results in increased model performance over a benchmark random forest and direct application of chemical transformations using QSAR-by-matched molecular pairs analysis (QSAR-by-MMPA).
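    A minimal sketch of the transformation-QSAR idea described above: represent each matched pair by a pairwise (difference) descriptor vector, optionally augmented with descriptors of the local environment, and train a regressor to predict the activity change. The random descriptors, the placeholder pairing and the random-forest choice are illustrative assumptions; the paper's actual descriptors and model are not reproduced here.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Hypothetical precomputed descriptors (rows: compounds) and activities.
descriptors = rng.random((300, 64))
activity = 5.0 + 3.0 * rng.random(300)           # e.g. pIC50 values

# Matched pairs (i, j): compound j derived from i by a small transformation.
pairs = [(i, i + 1) for i in range(0, 298, 2)]   # placeholder pairing
parents = [i for i, _ in pairs]
products = [j for _, j in pairs]

# Pairwise descriptor = difference of the two compounds' descriptors,
# concatenated with the parent's own descriptors as local context; the
# target is the activity change caused by the transformation.
X = np.hstack([descriptors[products] - descriptors[parents],
               descriptors[parents]])
y = activity[products] - activity[parents]

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
print(model.predict(X[:3]))  # estimated activity changes for three pairs
```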

  8. Neural Network-Based Model for Landslide Susceptibility and Soil Longitudinal Profile Analyses

    DEFF Research Database (Denmark)

    Farrokhzad, F.; Barari, Amin; Choobbasti, A. J.

    2011-01-01

    trained with geotechnical data obtained from an investigation of the study area. The quality of the modeling was improved further by the application of some controlling techniques involved in ANN. The observed >90% overall accuracy produced by the ANN technique in both cases is promising for future...

  9. Analysing outsourcing policies in an asset management context : A six-stage model

    NARCIS (Netherlands)

    Schoenmaker, R.; Verlaan, J.G.

    2013-01-01

    Asset managers of civil infrastructure are increasingly outsourcing their maintenance. Whereas maintenance is a cyclic process, decisions to outsource are often project-based, confusing the discussion on the degree of outsourcing. This paper presents a six-stage model that facilitates

  10. Supporting custom quality models to analyse and compare open-source software

    NARCIS (Netherlands)

    D. Di Ruscio (Davide); D.S. Kolovos (Dimitrios); I. Korkontzelos (Ioannis); N. Matragkas (Nicholas); J.J. Vinju (Jurgen)

    2017-01-01

    textabstractThe analysis and comparison of open source software can be improved by means of quality models supporting the evaluation of the software systems being compared and the final decision about which of them has to be adopted. Since software quality can mean different things in different

  11. Ultrasonic vocalizations in Shank mouse models for autism spectrum disorders: detailed spectrographic analyses and developmental profiles.

    Science.gov (United States)

    Wöhr, Markus

    2014-06-01

    Autism spectrum disorders (ASD) are a class of neurodevelopmental disorders characterized by persistent deficits in social behavior and communication across multiple contexts, together with repetitive patterns of behavior, interests, or activities. The high concordance rate between monozygotic twins supports a strong genetic component. Among the most promising candidate genes for ASD is the SHANK gene family, including SHANK1, SHANK2 (ProSAP1), and SHANK3 (ProSAP2). SHANK genes are therefore important candidates for modeling ASD in mice and various genetic models were generated within the last few years. As the diagnostic criteria for ASD are purely behaviorally defined, the validity of mouse models for ASD strongly depends on their behavioral phenotype. Behavioral phenotyping is therefore a key component of the current translational approach and requires sensitive behavioral test paradigms with high relevance to each diagnostic symptom category. While behavioral phenotyping assays for social deficits and repetitive patterns of behavior, interests, or activities are well-established, the development of sensitive behavioral test paradigms to assess communication deficits in mice is a daunting challenge. Measuring ultrasonic vocalizations (USV) appears to be a promising strategy. In the first part of the review, an overview on the different types of mouse USV and their communicative functions will be provided. The second part is devoted to studies on the emission of USV in Shank mouse models for ASD. Evidence for communication deficits was obtained in Shank1, Shank2, and Shank3 genetic mouse models for ASD, often paralleled by behavioral phenotypes relevant to social deficits seen in ASD. Copyright © 2014 Elsevier Ltd. All rights reserved.

  12. Animal models of bone cancer pain: systematic review and meta-analyses.

    Science.gov (United States)

    Currie, Gillian L; Delaney, Ada; Bennett, Michael I; Dickenson, Anthony H; Egan, Kieren J; Vesterinen, Hanna M; Sena, Emily S; Macleod, Malcolm R; Colvin, Lesley A; Fallon, Marie T

    2013-06-01

    Pain can significantly decrease the quality of life of patients with advanced cancer. Current treatment strategies often provide inadequate analgesia and unacceptable side effects. Animal models of bone cancer pain are used in the development of novel pharmacological approaches. Here we conducted a systematic review and meta-analysis of publications describing in vivo modelling of bone cancer pain in which behavioural, general health, macroscopic, histological, biochemical, or electrophysiological outcomes were reported and compared to appropriate controls. In all, 150 publications met our inclusion criteria, describing 38 different models of bone cancer pain. Reported methodological quality was low; only 31% of publications reported blinded assessment of outcome, and 11% reported random allocation to group. No publication reported a sample size calculation. Studies that reported measures to reduce bias reported smaller differences in behavioural outcomes between tumour-bearing and control animals, and studies that presented a statement regarding a conflict of interest reported larger differences in behavioural outcomes. Larger differences in behavioural outcomes were reported in female animals, when cancer cells were injected into either the tibia or femur, and when MatLyLu prostate or Lewis Lung cancer cells were used. Mechanical-evoked pain behaviours were most commonly reported; however, the largest difference was observed in spontaneous pain behaviours. In the spinal cord astrocyte activation and increased levels of Substance P receptor internalisation, c-Fos, dynorphin, tumor necrosis factor-α and interleukin-1β have been reported in bone cancer pain models, suggesting several potential therapeutic targets. However, the translational impact of animal models on clinical pain research could be enhanced by improving methodological quality. Copyright © 2013. Published by Elsevier B.V.

  13. Analyses of Spring Barley Evapotranspiration Rates Based on Gradient Measurements and Dual Crop Coefficient Model

    Directory of Open Access Journals (Sweden)

    Gabriela Pozníková

    2014-01-01

    Full Text Available The yield of agricultural crops depends to a great extent on water availability. According to some projections, the likelihood of stress caused by drought will increase in the future climates expected for Central Europe. Therefore, in order to manage agro-ecosystems properly, it is necessary to know the water demand of particular crops as precisely as possible. Evapotranspiration (ET) is the main part of the water balance that takes water away from agro-ecosystems. ET consists of evaporation from the soil (E) and transpiration (T) through the stomata of plants. In this study, we investigated the ET of a 1-ha spring barley field (Domanínek, Czech Republic) measured by the Bowen ratio/energy balance method during the 2013 growing period (May 8 to July 31). Special focus was given to the comparison of barley ET with the reference grass ETo calculated according to the FAO-56 model, i.e. the determination of the barley crop coefficient (Kc). This crop coefficient was subsequently separated into a soil evaporation (Ke) and a transpiration fraction (Kcb) by adjusting soil and phenological parameters of the dual crop coefficient model to minimize the root mean square error between measured and modelled ET. The resulting Kcb of barley was 0.98 during the mid-growing period and 0.05 during the initial and end periods. According to FAO-56, typical values are 1.10 and 0.15 for Kcb mid and Kcb end, respectively. Modelled and measured ET show satisfactory agreement, with a root mean square error of 0.41 mm. Based on the sums of ET and E for the whole growing season of the spring barley, ET partitioning by the FAO-56 dual crop coefficient model resulted in an E/ET ratio of 0.24.
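    A minimal numerical sketch of the dual-coefficient bookkeeping described above: modelled ET is (Kcb + Ke) × ETo, and the mid-season Kcb is chosen to minimize the RMSE against measured ET. The daily series and the grid-search calibration are made-up placeholders, not the Domanínek data or the authors' adjustment procedure.

```python
import numpy as np

# Hypothetical daily series (mm/day): FAO-56 reference ET and Bowen-ratio ET measurements.
eto     = np.array([3.1, 3.4, 4.0, 4.2, 3.8, 3.5])
et_meas = np.array([2.9, 3.3, 4.1, 4.3, 3.6, 3.2])
ke      = np.array([0.10, 0.08, 0.05, 0.05, 0.06, 0.08])   # soil-evaporation coefficient

def rmse(kcb):
    """Dual-coefficient estimate ETc = (Kcb + Ke) * ETo compared with measured ET."""
    et_mod = (kcb + ke) * eto
    return np.sqrt(np.mean((et_mod - et_meas) ** 2))

# Crude calibration of the mid-season basal coefficient by a grid search.
kcb_grid = np.arange(0.5, 1.3, 0.01)
kcb_best = kcb_grid[np.argmin([rmse(k) for k in kcb_grid])]
print(f"Calibrated mid-season Kcb = {kcb_best:.2f}, RMSE = {rmse(kcb_best):.2f} mm")
```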

  14. A CAD Approach to Developing Mass Distribution and Composition Models for Spaceflight Radiation Risk Analyses

    Science.gov (United States)

    Zapp, E.; Shelfer, T.; Semones, E.; Johnson, A.; Weyland, M.; Golightly, M.; Smith, G.; Dardano, C.

    For roughly the past three decades, combinatorial geometries have been the predominant mode for the development of mass distribution models associated with the estimation of radiological risk for manned space flight. Examples of these are the MEVDP (Modified Elemental Volume Dose Program) vehicle representation of Liley and Hamilton, and the quadratic functional representation of the CAM/CAF (Computerized Anatomical Male/Female) human body models as modified by Billings and Yucker. These geometries have the advantageous characteristics of being simple for a familiarized user to maintain, and because of the relative lack of any operating system or run-time library dependence, they are also easy to transfer from one computing platform to another. Unfortunately, they are also limited in the amount of modeling detail possible, owing to the abstract geometric representation. In addition, combinatorial representations are known to be error-prone in practice, since there is no convenient method for error identification (i.e. overlap, etc.), and extensive calculation and/or manual comparison is often necessary to demonstrate that the geometry is adequately represented. We present an alternate approach linking materials-specific, CAD-based mass models directly to geometric analysis tools, requiring no approximation with respect to materials, nor any meshing (i.e. tessellation) of the representative geometry. A new approach to ray tracing is presented which makes use of the fundamentals of the CAD representation to perform geometric analysis directly on the NURBS (Non-Uniform Rational B-Spline) surfaces themselves. In this way we achieve a framework for the rapid, precise development and analysis of materials-specific mass distribution models.

  15. Using species abundance distribution models and diversity indices for biogeographical analyses

    Science.gov (United States)

    Fattorini, Simone; Rigal, François; Cardoso, Pedro; Borges, Paulo A. V.

    2016-01-01

    We examine whether Species Abundance Distribution models (SADs) and diversity indices can describe how species colonization status influences species community assembly on oceanic islands. Our hypothesis is that, because of the lack of source-sink dynamics at the archipelago scale, Single Island Endemics (SIEs), i.e. endemic species restricted to only one island, should be represented by few rare species and consequently have abundance patterns that differ from those of more widespread species. To test our hypothesis, we used arthropod data from the Azorean archipelago (North Atlantic). We divided the species into three colonization categories: SIEs, archipelagic endemics (AZEs, present in at least two islands) and native non-endemics (NATs). For each category, we modelled rank-abundance plots using both the geometric series and the Gambin model, a measure of distributional amplitude. We also calculated Shannon entropy and Buzas and Gibson's evenness. We show that the slopes of the regression lines modelling SADs were significantly higher for SIEs, which indicates a relative predominance of a few highly abundant species and a lack of rare species, which also depresses diversity indices. This may be a consequence of two factors: (i) some forest specialist SIEs may be at an advantage over other, less adapted species; (ii) the entire populations of SIEs are by definition concentrated on a single island, without the possibility of inter-island source-sink dynamics; hence all populations must have a minimum number of individuals to survive natural, often unpredictable, fluctuations. These findings are supported by higher values of the α parameter of the Gambin model for SIEs. In contrast, AZEs and NATs had lower regression slopes, lower α but higher diversity indices, resulting from their widespread distribution over several islands. We conclude that these differences in the SAD models and diversity indices demonstrate that the study of these metrics is useful for
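    To make the quoted metrics concrete, the sketch below computes Shannon entropy, Buzas and Gibson's evenness, and a rank-abundance regression slope for two invented abundance vectors; the numbers are illustrative only and are not the Azorean arthropod data.

```python
import numpy as np

def shannon_entropy(abundances):
    p = np.asarray(abundances, dtype=float)
    p = p[p > 0] / p.sum()
    return -np.sum(p * np.log(p))

def buzas_gibson_evenness(abundances):
    s = np.count_nonzero(abundances)
    return np.exp(shannon_entropy(abundances)) / s

def rank_abundance_slope(abundances):
    """Slope of log(abundance) against rank; steeper (more negative) slopes indicate
    dominance by a few abundant species, as reported for SIEs."""
    a = np.sort(np.asarray(abundances, dtype=float))[::-1]
    ranks = np.arange(1, a.size + 1)
    slope, _ = np.polyfit(ranks, np.log(a), 1)
    return slope

# Hypothetical abundance vectors for two colonisation categories.
sie = [120, 40, 9, 3, 1]                       # few species, strong dominance
nat = [60, 45, 30, 22, 18, 12, 9, 7, 5, 4]     # more species, more even
for name, a in [("SIE", sie), ("NAT", nat)]:
    print(name, round(shannon_entropy(a), 2), round(buzas_gibson_evenness(a), 2),
          round(rank_abundance_slope(a), 2))
```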

  16. Evaluation of a dentoalveolar model for testing mouthguards: stress and strain analyses.

    Science.gov (United States)

    Verissimo, Crisnicaw; Costa, Paulo Victor Moura; Santos-Filho, Paulo César Freitas; Fernandes-Neto, Alfredo Júlio; Tantbirojn, Daranee; Versluis, Antheunis; Soares, Carlos José

    2016-02-01

    Custom-fitted mouthguards are devices used to decrease the likelihood of dental trauma. The aim of this study was to develop an experimental bovine dentoalveolar model with periodontal ligament to evaluate mouthguard shock absorption, and impact strain and stress behavior. A pendulum impact device was developed to perform the impact tests with two different impact materials (steel ball and baseball). Five bovine jaws were selected with standard age and dimensions. Six-mm mouthguards were made for the impact tests. The jaws were fixed in the pendulum device and impacts were performed from 90, 60, and 45° angles, with and without a mouthguard. Strain gauges were attached to the palatal surface of the impacted tooth. The strain and shock absorption of the mouthguards were calculated, and the data were analyzed with three-way ANOVA and Tukey's test (α = 0.05). Two-dimensional finite element models were created based on the cross-section of the bovine dentoalveolar model used in the experiment. A nonlinear dynamic impact analysis was performed to evaluate the strain and stress distributions. Without mouthguards, the increase in impact angulation significantly increased strains and stresses. Mouthguards reduced strain and stress values. Impact velocity, impact object (steel ball or baseball), and mouthguard presence affected the impact stresses and strains in the bovine dentoalveolar model. Experimental strain measurements and finite element models predicted similar behavior; therefore, both methodologies are suitable for evaluating the biomechanical performance of mouthguards. © 2015 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  17. On the emancipation of PLS-SEM : A commentary on Rigdon

    NARCIS (Netherlands)

    Sarstedt, Marko; Ringle, Christian M.; Henseler, Jörg; Hair, Joseph F.

    2014-01-01

    Rigdon's (2012) thoughtful article argues that PLS-SEM should free itself from CB-SEM. It should renounce all mechanisms, frameworks, and jargon associated with factor models entirely. In this comment, we shed further light on two subject areas on which Rigdon (2012) touches in his discussion of

  18. Evaluation and sensitivity analysis of the CERES-Maize model under Alsatian conditions (Evaluation et analyse de sensibilité du modèle CERES-Maize en conditions alsaciennes)

    OpenAIRE

    Plantureux, Sylvain; Girardin, Philippe; Fouquet, D.; Chapot, Jean Yves

    1991-01-01

    CERES-Maize is a simulation model of maize growth and development elaborated and validated in the United States. In order to assess the possibilities of transposing the model to European conditions, simulations were carried out for two maize varieties (LG11 and DEA) grown in Alsace in two consecutive years. For each variety, the model was calibrated on one year and validated on the following one. The sensitivity analysis of the variety- and soil-related parameters shows that the r...

  19. Analysing the strength of friction stir welded dissimilar aluminium alloys using Sugeno Fuzzy model

    Science.gov (United States)

    Barath, V. R.; Vaira Vignesh, R.; Padmanaban, R.

    2018-02-01

    Friction stir welding (FSW) is a promising solid-state joining technique for aluminium alloys. In this study, FSW trials were conducted on two dissimilar plates of aluminium alloys AA2024 and AA7075 by varying the tool rotation speed (TRS) and welding speed (WS). The tensile strength (TS) of the joints was measured, and a Sugeno fuzzy model was developed to relate the FSW process parameters to the tensile strength. From the developed model, it was observed that the optimum heat generation at a WS of 15 mm.min-1 and a TRS of 1050 rpm resulted in dynamic recovery and dynamic recrystallization of the material. This refined the grains in the FSW zone and resulted in the peak tensile strength among the tested specimens. A crest (inverted-parabola) trend in tensile strength was observed as the TRS varied from 900 rpm to 1200 rpm and the WS from 10 mm.min-1 to 20 mm.min-1.
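    The sketch below shows the shape of a zero-order Takagi-Sugeno (Sugeno-type) inference of tensile strength from TRS and WS: Gaussian memberships weight a set of rule consequents, and the prediction is their weighted average. The rule centres, spreads, and consequent strengths are invented for illustration and are not the membership functions identified in the study.

```python
import numpy as np

def gauss(x, c, s):
    """Gaussian membership function."""
    return np.exp(-0.5 * ((x - c) / s) ** 2)

# Hypothetical zero-order Sugeno rule base: (TRS centre, WS centre, consequent TS in MPa).
rules = [
    (900.0, 10.0, 310.0),
    (1050.0, 15.0, 405.0),   # assumed optimum region reported in the study
    (1200.0, 20.0, 330.0),
]
SIG_TRS, SIG_WS = 90.0, 3.0  # assumed membership spreads

def predict_ts(trs, ws):
    """Weighted average of rule consequents (zero-order Takagi-Sugeno inference)."""
    w = np.array([gauss(trs, c_trs, SIG_TRS) * gauss(ws, c_ws, SIG_WS)
                  for c_trs, c_ws, _ in rules])
    z = np.array([z_i for _, _, z_i in rules])
    return float(np.sum(w * z) / np.sum(w))

print(f"Predicted TS at 1050 rpm, 15 mm/min: {predict_ts(1050, 15):.1f} MPa")
print(f"Predicted TS at 950 rpm, 12 mm/min:  {predict_ts(950, 12):.1f} MPa")
```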

  20. Evaluation and Improvement of Cloud and Convective Parameterizations from Analyses of ARM Observations and Models

    Energy Technology Data Exchange (ETDEWEB)

    Del Genio, Anthony D. [NASA Goddard Inst. for Space Studies (GISS), New York, NY (United States)

    2016-03-11

    Over this period the PI and his team performed a broad range of data analysis, model evaluation, and model improvement studies using ARM data. These included cloud regimes in the TWP and their evolution over the MJO; M-PACE IOP SCM-CRM intercomparisons; simulations of convective updraft strength and depth during TWP-ICE; evaluation of convective entrainment parameterizations using TWP-ICE simulations; evaluation of GISS GCM cloud behavior vs. long-term SGP cloud statistics; classification of aerosol semi-direct effects on cloud cover; depolarization lidar constraints on cloud phase; preferred states of the winter Arctic atmosphere, surface, and sub-surface; sensitivity of convection to tropospheric humidity; constraints on the parameterization of mesoscale organization from TWP-ICE WRF simulations; updraft and downdraft properties in TWP-ICE simulated convection; and insights from long-term ARM records at Manus and Nauru.

  1. Modelling and Analysing Access Control Policies in XACML 3.0

    DEFF Research Database (Denmark)

    Ramli, Carroline Dewi Puspa Kencana

    and verification of properties of XACML policies. Overall, we focus on two different areas. The first part focuses on the access control language; more specifically, our focus is on understanding XACML 3.0. The second part focuses on how we use Logic Programming (LP) to model access control policies. We show...... semantics is described normatively using natural language. The use of English text in standardisation leads to the risk of misinterpretation and ambiguity. In order to avoid this drawback, we define an abstract syntax of XACML 3.0 and a formal XACML semantics. Second, we propose a logic-based XACML analysis framework using Answer Set Programming (ASP). With ASP we model an XACML PDP that loads XACML policies and evaluates XACML requests against these policies. The expressivity of ASP and the existence of efficient implementations of the answer set semantics provide the means for declarative specification...

  2. MONTE CARLO ANALYSES OF THE YALINA THERMAL FACILITY WITH SERPENT STEREOLITHOGRAPHY GEOMETRY MODEL

    Energy Technology Data Exchange (ETDEWEB)

    Talamo, A.; Gohar, Y.

    2015-01-01

    This paper analyzes the YALINA Thermal subcritical assembly of Belarus using two different Monte Carlo transport programs, SERPENT and MCNP. The MCNP model is based on combinatorial geometry and a universe hierarchy, while the SERPENT model is based on stereolithography geometry. The latter consists of unstructured triangulated surfaces defined by their normals and vertices. This geometry format is used by 3D printers, and it was created using the CUBIT software, MATLAB scripts, and C code. All the Monte Carlo simulations have been performed using the ENDF/B-VII.0 nuclear data library. Both MCNP and SERPENT share the same geometry specifications, which describe the facility details without using any material homogenization. Three different configurations have been studied with different numbers of fuel rods: the three fuel configurations use 216, 245, or 280 fuel rods, respectively. The numerical simulations show that the agreement between the SERPENT and MCNP results is within a few tens of pcm.
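    For reference, the snippet below converts a difference between two k-eff estimates into reactivity units of pcm; the k-eff values are invented placeholders, not the published YALINA results.

```python
def delta_rho_pcm(k_a, k_b):
    """Reactivity difference between two k-eff estimates, expressed in pcm."""
    return (1.0 / k_a - 1.0 / k_b) * 1.0e5

# Hypothetical k-eff values for one subcritical configuration from the two codes.
k_mcnp, k_serpent = 0.97465, 0.97492
print(f"MCNP vs SERPENT: {delta_rho_pcm(k_mcnp, k_serpent):.1f} pcm")
```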

  3. Modeling Freight Ocean Rail and Truck Transportation Flows to Support Policy Analyses

    Energy Technology Data Exchange (ETDEWEB)

    Gearhart, Jared Lee [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Wang, Hao [Cornell Univ., Ithaca, NY (United States); Nozick, Linda Karen [Cornell Univ., Ithaca, NY (United States); Xu, Ningxiong [Cornell Univ., Ithaca, NY (United States)

    2017-11-01

    Freight transportation represents about 9.5% of GDP, is responsible for about 8% of greenhouse gas emissions and supports the import and export of about 3.6 trillion in international trade; hence it is important that our national freight transportation system is designed and operated efficiently and embodies user fees and other policies that balance costs and environmental consequences. Hence, this paper develops a mathematical model to estimate international and domestic freight flows across ocean, rail and truck modes which can be used to study the impacts of changes in our infrastructure as well as the imposition of new user fees and changes in operating policies. This model is applied to two case studies: (1) a disruption of the maritime ports at Los Angeles/Long Beach similar to the impacts that would be felt in an earthquake; and (2) implementation of new user fees at the California ports.
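    A toy version of the kind of mode-choice optimisation such a model performs: minimise total transport cost across ocean, rail and truck subject to demand and capacity, here with scipy's linear-programming solver. The costs, capacities, demand, and single origin-destination market are invented; the actual model in the report is far richer (networks, multiple commodities, user fees, and disruption scenarios).

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical single origin-destination market served by three modes.
modes     = ["ocean", "rail", "truck"]
unit_cost = np.array([40.0, 65.0, 110.0])     # $ per ton (could include assumed user fees)
capacity  = np.array([500.0, 400.0, 1000.0])  # tons
demand    = 900.0                             # tons that must move

# Minimise total cost subject to meeting demand and respecting mode capacities.
res = linprog(c=unit_cost,
              A_eq=np.ones((1, 3)), b_eq=[demand],
              bounds=list(zip(np.zeros(3), capacity)),
              method="highs")

for m, x in zip(modes, res.x):
    print(f"{m:5s}: {x:7.1f} tons")
print(f"Total cost: ${res.fun:,.0f}")
# Re-running with a higher ocean unit cost (e.g. a port-disruption surcharge)
# shows how flows shift toward rail and truck.
```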

  4. Integrate urban‐scale seismic hazard analyses with the U.S. National Seismic Hazard Model

    Science.gov (United States)

    Moschetti, Morgan P.; Luco, Nicolas; Frankel, Arthur; Petersen, Mark D.; Aagaard, Brad T.; Baltay, Annemarie S.; Blanpied, Michael; Boyd, Oliver; Briggs, Richard; Gold, Ryan D.; Graves, Robert; Hartzell, Stephen; Rezaeian, Sanaz; Stephenson, William J.; Wald, David J.; Williams, Robert A.; Withers, Kyle

    2018-01-01

    For more than 20 yrs, damage patterns and instrumental recordings have highlighted the influence of the local 3D geologic structure on earthquake ground motions (e.g., M 6.7 Northridge, California, Gao et al., 1996; M 6.9 Kobe, Japan, Kawase, 1996; M 6.8 Nisqually, Washington, Frankel, Carver, and Williams, 2002). Although this and other local-scale features are critical to improving seismic hazard forecasts, historically they have not been explicitly incorporated into the U.S. National Seismic Hazard Model (NSHM, national model and maps), primarily because the necessary basin maps and methodologies were not available at the national scale. Instead,...

  5. Analyses of the energy-dependent single separable potential models for the NN scattering

    International Nuclear Information System (INIS)

    Ahmad, S.S.; Beghi, L.

    1981-08-01

    Starting from a systematic study of the salient features regarding the quantum-mechanical two-particle scattering off an energy-dependent (ED) single separable potential and its connection with the rank-2 energy-independent (EI) separable potential in the T- (K-) amplitude formulation, the present status of the ED single separable potential models due to Tabakin (M1), Garcilazo (M2) and Ahmad (M3) has been discussed. It turned out that the incorporation of a self-consistent optimization procedure improves considerably the results of the ^1S_0 and ^3S_1 scattering phase shifts for the models (M2) and (M3) up to the CM wave number q = 2.5 fm^-1, although the extrapolation of the results up to q = 10 fm^-1 reveals that the two models follow the typical behaviour of the well-known super-soft core potentials. It has been found that a variant of (M3) - i.e. (M4), involving one more parameter - gives phase shift results which are generally in excellent agreement with the data up to q = 2.5 fm^-1, and the extrapolation of the results for the ^1S_0 case in the higher wave number range not only follows the corresponding data qualitatively but also reflects a behaviour similar to the Reid soft core and Hamada-Johnston potentials, together with a good agreement with the recent [4/3] Padé fits. A brief discussion regarding the features resulting from the variations in the ED parts of all the four models under consideration and their correlations with the inverse scattering theory methodology concludes the paper. (author)

  6. Analyses of Spring Barley Evapotranspiration Rates Based on Gradient Measurements and Dual Crop Coefficient Model

    Czech Academy of Sciences Publication Activity Database

    Pozníková, Gabriela; Fischer, Milan; Pohanková, Eva; Trnka, Miroslav

    2014-01-01

    Vol. 62, No. 5 (2014), pp. 1079-1086, ISSN 1211-8516. R&D Projects: GA MŠk LH12037; GA MŠk(CZ) EE2.3.20.0248. Institutional support: RVO:67179843. Keywords: evapotranspiration * dual crop coefficient model * Bowen ratio/energy balance method * transpiration * soil evaporation * spring barley. Subject RIV: EH - Ecology, Behaviour. OBOR OECD: Environmental sciences (social aspects to be 5.7)

  7. Modeling and analyses of postulated UF6 release accidents in gaseous diffusion plant

    International Nuclear Information System (INIS)

    Kim, S.H.; Taleyarkhan, R.P.; Keith, K.D.; Schmidt, R.W.; Carter, J.C.; Dyer, R.H.

    1995-10-01

    Computer models have been developed to simulate the transient behavior of aerosols and vapors as a result of a postulated accident involving the release of uranium hexafluoride (UF6) into the process building of a gaseous diffusion plant. UF6 undergoes an exothermic chemical reaction with moisture (H2O) in the air to form hydrogen fluoride (HF) and radioactive uranyl fluoride (UO2F2). As part of a facility-wide safety evaluation, this study evaluated source terms consisting of UO2F2 as well as HF during a postulated UF6 release accident in a process building. In the postulated accident scenario, ~7900 kg (17,500 lb) of hot UF6 vapor is released over a 5 min period from the process piping into the atmosphere of a large process building. UO2F2 mainly remains as airborne solid particles (aerosols), and HF is in vapor form. Some UO2F2 aerosols are removed from the air flow by gravitational settling. The HF and the remaining UO2F2 are mixed with air and exhausted through the building ventilation system. The MELCOR computer code was selected for simulating aerosol and vapor transport in the process building. The MELCOR model was first used to develop a single-volume representation of a process building, and its results were compared with those from past lumped-parameter models specifically developed for studying UF6 release accidents. Preliminary results indicate that the MELCOR-predicted results (using a lumped formulation) are comparable with those from previously developed models

  8. Theoretical and experimental stress analyses of ORNL thin-shell cylinder-to-cylinder model 2

    International Nuclear Information System (INIS)

    Gwaltney, R.C.; Bolt, S.E.; Bryson, J.W.

    1975-10-01

    Model 2 in a series of four thin-shell cylinder-to-cylinder models was tested, and the experimentally determined elastic stress distributions were compared with theoretical predictions obtained from a thin-shell finite-element analysis. Both the cylinder and the nozzle of model 2 had outside diameters of 10 in., giving a d0/D0 ratio of 1.0, and both had outside diameter/thickness ratios of 100. Sixteen separate loading cases in which one end of the cylinder was rigidly held were analyzed. An internal pressure loading, three mutually perpendicular force components, and three mutually perpendicular moment components were individually applied at the free end of the cylinder and at the end of the nozzle. In addition to these 13 loadings, 3 additional loads were applied to the nozzle (in-plane bending moment, out-of-plane bending moment, and axial force) with the free end of the cylinder restrained. The experimental stress distributions for each of the 16 loadings were obtained using 152 three-gage strain rosettes located on the inner and outer surfaces. All the 16 loading cases were also analyzed theoretically using a finite-element shell analysis. The analysis used flat-plate elements and considered five degrees of freedom per node in the final assembled equations. The comparisons between theory and experiment show reasonably good general agreement, and it is felt that the analysis would be satisfactory for most engineering purposes. (auth)

  9. Consequence modeling for nuclear weapons probabilistic cost/benefit analyses of safety retrofits

    Energy Technology Data Exchange (ETDEWEB)

    Harvey, T.F.; Peters, L.; Serduke, F.J.D.; Hall, C.; Stephens, D.R.

    1998-01-01

    The consequence models used in former studies of costs and benefits of enhanced safety retrofits are considered for (1) fuel fires; (2) non-nuclear detonations; and (3) unintended nuclear detonations. Estimates of consequences were made using a representative accident location, i.e., an assumed mixed suburban-rural site. We have explicitly quantified land-use impacts and human-health effects (e.g., prompt fatalities, prompt injuries, latent cancer fatalities, low levels of radiation exposure, and clean-up areas). Uncertainty in the wind direction is quantified and used in a Monte Carlo calculation to estimate a range of results for a fuel fire with uncertain respirable amounts of released Pu. We define a nuclear source term and discuss damage levels of concern. Ranges of damages are estimated by quantifying health impacts and property damages. We discuss our dispersal and prompt effects models in some detail. The models used to loft the Pu and fission products and their particle sizes are emphasized.

  10. Analysing the origin of long-range interactions in proteins using lattice models

    Directory of Open Access Journals (Sweden)

    Unger Ron

    2009-01-01

    Full Text Available Abstract. Background: Long-range communication is very common in proteins but the physical basis of this phenomenon remains unclear. In order to gain insight into this problem, we decided to explore whether long-range interactions exist in lattice models of proteins. Lattice models of proteins have proven to capture some of the basic properties of real proteins and, thus, can be used for elucidating general principles of protein stability and folding. Results: Using a computational version of double-mutant cycle analysis, we show that long-range interactions emerge in lattice models even though they are not an input feature of them. The coupling energy of both short- and long-range pairwise interactions is found to become more positive (destabilizing) in a linear fashion with increasing 'contact-frequency', an entropic term that corresponds to the fraction of states in the conformational ensemble of the sequence in which the pair of residues is in contact. A mathematical derivation of the linear dependence of the coupling energy on 'contact-frequency' is provided. Conclusion: Our work shows how 'contact-frequency' should be taken into account in attempts to stabilize proteins by introducing (or stabilizing) contacts in the native state and/or through 'negative design' of non-native contacts.
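    For readers unfamiliar with double-mutant cycles, the snippet below computes the coupling (interaction) energy from the four free energies of a cycle; the numerical values are invented for illustration and are not taken from the lattice-model simulations.

```python
def coupling_energy(dG_wt, dG_a, dG_b, dG_ab):
    """Double-mutant-cycle coupling (interaction) energy.

    Positive values are destabilising couplings; the study reports that this
    quantity grows linearly with the pair's 'contact-frequency'.
    """
    return dG_ab - dG_a - dG_b + dG_wt

# Hypothetical folding free energies (kcal/mol) of wild type, single and double mutants.
print(coupling_energy(dG_wt=-5.0, dG_a=-3.8, dG_b=-4.1, dG_ab=-2.4))  # -> 0.5
```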

  11. Drying of mint leaves in a solar dryer and under open sun: Modelling, performance analyses

    International Nuclear Information System (INIS)

    Akpinar, E. Kavak

    2010-01-01

    This study investigated the thin-layer drying characteristics of mint leaves in a solar dryer with forced convection and under open sun with natural convection, and performed energy and exergy analyses of the solar drying process. An indirect forced-convection solar dryer consisting of a solar air collector and a drying cabinet was used in the experiments. The drying data were fitted to ten different mathematical models. Among the models, the Wang and Singh model was found to best explain the thin-layer drying behaviour of mint leaves for both forced solar drying and natural sun drying. Using the first law of thermodynamics, the energy analysis throughout the solar drying process was carried out. The exergy analysis during the solar drying process was performed by applying the second law of thermodynamics. Energy utilization ratio (EUR) values of the drying cabinet varied in the range between 7.826% and 46.285%. The values of exergetic efficiency were found to be in the range of 34.760-87.717%. The values of improvement potential varied between 0 and 0.017 kJ s^-1. The energy utilization ratio and improvement potential decreased with increasing drying time and ambient temperature, while the exergetic efficiency increased.
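    A small sketch of the curve-fitting step: the Wang and Singh thin-layer model, MR = 1 + a·t + b·t², is fitted to a made-up moisture-ratio series with scipy, and an energy utilization ratio is computed from assumed energy totals. None of the numbers are the study's measurements, and the EUR definition shown (evaporation energy over supplied energy) is the commonly used one, stated here as an assumption.

```python
import numpy as np
from scipy.optimize import curve_fit

def wang_singh(t, a, b):
    """Wang and Singh thin-layer model: moisture ratio MR = 1 + a*t + b*t^2."""
    return 1.0 + a * t + b * t ** 2

# Hypothetical drying run: time (min) and measured moisture ratio of mint leaves.
t  = np.array([0, 30, 60, 90, 120, 150, 180], dtype=float)
mr = np.array([1.00, 0.78, 0.58, 0.41, 0.27, 0.17, 0.10])

(a, b), _ = curve_fit(wang_singh, t, mr, p0=(-0.005, 1e-6))
rmse = np.sqrt(np.mean((wang_singh(t, a, b) - mr) ** 2))
print(f"a = {a:.4e}, b = {b:.4e}, RMSE = {rmse:.4f}")

# Energy utilization ratio of the drying cabinet (assumed definition and values).
q_evaporation_kj, q_supplied_kj = 185.0, 640.0
print(f"EUR = {100 * q_evaporation_kj / q_supplied_kj:.1f} %")
```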

  12. Bag-model analyses of proton-antiproton scattering and atomic bound states

    International Nuclear Information System (INIS)

    Alberg, M.A.; Freedman, R.A.; Henley, E.M.; Hwang, W.P.; Seckel, D.; Wilets, L.

    1983-01-01

    We study proton-antiproton (pp-bar) scattering using the static real potential of Bryan and Phillips outside a cutoff radius r_0 and two different shapes for the imaginary potential inside a radius R*. These forms, motivated by bag models, are a one-gluon-annihilation potential and a simple geometric-overlap form. In both cases there are three adjustable parameters: the effective bag radius R*, the effective strong coupling constant α_s*, and r_0. There is also a choice for the form of the real potential inside the cutoff radius r_0. Analysis of the pp-bar scattering data in the laboratory-momentum region 0.4-0.7 GeV/c yields an effective nucleon bag radius R* in the range 0.6-1.1 fm, with the best fit obtained for R* = 0.86 fm. Arguments are presented that the deduced value of R* is likely to be an upper bound on the isolated nucleon bag radius. The present results are consistent with the range of bag radii in current bag models. We have also used the resultant optical potential to calculate the shifts and widths of the ^3S_1 and ^1S_0 atomic bound states of the pp-bar system. For both states we find upward (repulsive) shifts and widths of about 1 keV. We find no evidence for narrow, strongly bound pp-bar states in our potential model

  13. Promoting Social Inclusion through Sport for Refugee-Background Youth in Australia: Analysing Different Participation Models

    Directory of Open Access Journals (Sweden)

    Karen Block

    2017-06-01

    Full Text Available Sports participation can confer a range of physical and psychosocial benefits and, for refugee and migrant youth, may even act as a critical mediator for achieving positive settlement and engaging meaningfully in Australian society. This group has low participation rates however, with identified barriers including costs; discrimination and a lack of cultural sensitivity in sporting environments; lack of knowledge of mainstream sports services on the part of refugee-background settlers; inadequate access to transport; culturally determined gender norms; and family attitudes. Organisations in various sectors have devised programs and strategies for addressing these participation barriers. In many cases however, these responses appear to be ad hoc and under-theorised. This article reports findings from a qualitative exploratory study conducted in a range of settings to examine the benefits, challenges and shortcomings associated with different participation models. Interview participants were drawn from non-government organisations, local governments, schools, and sports clubs. Three distinct models of participation were identified, including short term programs for refugee-background children; ongoing programs for refugee-background children and youth; and integration into mainstream clubs. These models are discussed in terms of their relative challenges and benefits and their capacity to promote sustainable engagement and social inclusion for this population group.

  14. Reproduction of the Yucca Mountain Project TSPA-LA Uncertainty and Sensitivity Analyses and Preliminary Upgrade of Models

    Energy Technology Data Exchange (ETDEWEB)

    Hadgu, Teklu [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Nuclear Waste Disposal Research and Analysis; Appel, Gordon John [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Nuclear Waste Disposal Research and Analysis

    2016-09-01

    Sandia National Laboratories (SNL) continued evaluation of total system performance assessment (TSPA) computing systems for the previously considered Yucca Mountain Project (YMP). This was done to maintain the operational readiness of the computing infrastructure (computer hardware and software) and knowledge capability for total system performance assessment (TSPA) type analysis, as directed by the National Nuclear Security Administration (NNSA), DOE 2010. This work is a continuation of the ongoing readiness evaluation reported in Lee and Hadgu (2014) and Hadgu et al. (2015). The TSPA computing hardware (CL2014) and storage system described in Hadgu et al. (2015) were used for the current analysis. One floating license of GoldSim with Versions 9.60.300, 10.5 and 11.1.6 was installed on the cluster head node, and its distributed processing capability was mapped on the cluster processors. Other supporting software were tested and installed to support the TSPA-type analysis on the server cluster. The current tasks included verification of the TSPA-LA uncertainty and sensitivity analyses, and preliminary upgrade of the TSPA-LA from Version 9.60.300 to the latest version 11.1. All the TSPA-LA uncertainty and sensitivity analyses modeling cases were successfully tested and verified for the model reproducibility on the upgraded 2014 server cluster (CL2014). The uncertainty and sensitivity analyses used TSPA-LA modeling cases output generated in FY15 based on GoldSim Version 9.60.300 documented in Hadgu et al. (2015). The model upgrade task successfully converted the Nominal Modeling case to GoldSim Version 11.1. Upgrade of the remaining of the modeling cases and distributed processing tasks will continue. The 2014 server cluster and supporting software systems are fully operational to support TSPA-LA type analysis.

  15. Towards an Industrial Application of Statistical Uncertainty Analysis Methods to Multi-physical Modelling and Safety Analyses

    International Nuclear Information System (INIS)

    Zhang, Jinzhao; Segurado, Jacobo; Schneidesch, Christophe

    2013-01-01

    Since the 1980s, Tractebel Engineering (TE) has been developing and applying a multi-physical modelling and safety analysis capability, based on a code package consisting of the best-estimate 3D neutronic (PANTHER), system thermal-hydraulic (RELAP5), core sub-channel thermal-hydraulic (COBRA-3C), and fuel thermal-mechanic (FRAPCON/FRAPTRAN) codes. A series of methodologies have been developed to perform and to license the reactor safety analysis and core reload design, based on the deterministic bounding approach. Following the recent trends in research and development as well as in industrial applications, TE has been working since 2010 towards the application of statistical sensitivity and uncertainty analysis methods to the multi-physical modelling and licensing safety analyses. In this paper, the TE multi-physical modelling and safety analysis capability is first described, followed by the proposed TE best estimate plus statistical uncertainty analysis method (BESUAM). The chosen statistical sensitivity and uncertainty analysis methods (non-parametric order statistic method or bootstrap) and tool (DAKOTA) are then presented, followed by some preliminary results of their application to FRAPCON/FRAPTRAN simulation of the OECD RIA fuel rod codes benchmark and RELAP5/MOD3.3 simulation of THTF tests. (authors)
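    The non-parametric order-statistic approach mentioned above is often implemented through Wilks' formula, which fixes the number of code runs needed for a one-sided tolerance bound. The sketch below searches for that sample size; the classic 95/95 first-order case gives the familiar 59 runs. This is a generic illustration of the method, not TE's BESUAM implementation.

```python
import math

def wilks_sample_size(coverage=0.95, confidence=0.95, order=1):
    """Smallest N such that the `order`-th largest of N runs bounds the `coverage`
    quantile with the requested one-sided confidence (non-parametric order statistics).

    For order=1 this solves 1 - coverage**N >= confidence (95/95 -> 59 runs).
    """
    n = order
    while True:
        # P(fewer than `order` runs fall above the coverage quantile)
        short = sum(math.comb(n, k) * coverage**k * (1 - coverage)**(n - k)
                    for k in range(n - order + 1, n + 1))
        if 1.0 - short >= confidence:
            return n
        n += 1

print(wilks_sample_size())          # 59 for first-order 95/95
print(wilks_sample_size(order=2))   # 93 for second-order 95/95
```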

  16. Analyses of Methods and Algorithms for Modelling and Optimization of Biotechnological Processes

    Directory of Open Access Journals (Sweden)

    Stoyan Stoyanov

    2009-08-01

    Full Text Available A review of the problems in modeling, optimization and control of biotechnological processes and systems is given in this paper. An analysis of existing and some new practical optimization methods for searching for the global optimum, based on various advanced strategies (heuristic, stochastic, genetic and combined), is presented. Methods based on sensitivity theory, and stochastic and mixed strategies for optimization with partial knowledge about the kinetic, technical and economic parameters of the optimization problems, are discussed. Several approaches to multi-criteria optimization tasks are analyzed. The problems concerning optimal control of biotechnological systems are also discussed.

  17. Using Rasch Modeling to Re-Evaluate Rapid Malaria Diagnosis Test Analyses

    Directory of Open Access Journals (Sweden)

    Dawit G. Ayele

    2014-06-01

    Full Text Available The objective of this study was to demonstrate the use of the Rasch model by assessing the appropriateness of demographic, socio-economic and geographic factors in providing a total score for malaria rapid diagnostic tests (RDT) in accordance with the model's expectations. The baseline malaria indicator survey was conducted in the Amhara, Oromiya and Southern Nations, Nationalities and Peoples (SNNP) regions of Ethiopia by The Carter Center in 2007. The results show high reliability and little disordering of thresholds, with no evidence of differential item functioning.
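    For orientation, the dichotomous Rasch model gives the probability of a positive response as a logistic function of person ability minus item difficulty. The sketch below simulates a small response matrix under assumed parameters and evaluates its log-likelihood; the item and person values are invented, and a real analysis would estimate them (e.g. by conditional or marginal maximum likelihood), not fix them.

```python
import numpy as np

def rasch_prob(theta, b):
    """Rasch model: P(positive response) for ability theta and item difficulty b."""
    return 1.0 / (1.0 + np.exp(-(theta - b)))

# Hypothetical dichotomous items and a small set of respondents (values in logits).
item_difficulty = np.array([-1.0, -0.2, 0.4, 1.1])   # assumed item parameters
abilities       = np.array([-0.5, 0.0, 0.8])         # assumed person parameters

rng = np.random.default_rng(1)
p = rasch_prob(abilities[:, None], item_difficulty[None, :])
responses = rng.binomial(1, p)                        # simulated 0/1 response matrix

# Log-likelihood of the simulated responses under the assumed parameters.
loglik = np.sum(responses * np.log(p) + (1 - responses) * np.log(1 - p))
print(responses)
print(f"log-likelihood = {loglik:.2f}")
```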

  18. Fish farming and valuation: an analysis of earnings management and of models for identifying such activity (Fiskeoppdrett og verdsettelse: en analyse av resultatjustering og modeller for identifikasjon av slik aktivitet)

    OpenAIRE

    Aaker, Harald

    2005-01-01

    Accounting information should be relevant and reliable, but there will always be judgment involved in valuation. Unwarranted judgment is referred to as earnings management and accounting manipulation. There are major methodological problems in earnings management research, since the active adjustment is largely hidden. In recent years, various models for estimating abnormal accruals ("discretionary accruals") have dominated. The problem lies in estimating the normal accru...

  19. Semantic Databases (Bases de Datos Semánticas)

    Directory of Open Access Journals (Sweden)

    Irving Caro Fierros

    2016-12-01

    Full Text Available In 1992, when Tim Berners-Lee presented the first version of the Web, his vision for the future was to incorporate metadata with semantic information into Web pages. It was precisely at the beginning of this century that the sudden rise of the Semantic Web began in academia and on the Internet. The semantic data model is defined as a conceptual model that allows the meaning of data to be defined through their relationships with other data. In this sense, the data representation format is fundamental for providing information of a semantic nature. The technology focused on semantic databases is currently at an inflection point, moving from the academic and research sphere to being a complete commercial option. This article analyses the concept of a semantic database. A case study is also presented, exemplifying basic operations involving the management of the information stored in this type of database.
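    As a rough illustration of such basic operations, the sketch below stores a few RDF triples and runs a SPARQL query over them using Python's rdflib package; the namespace, resources, and the choice of rdflib itself are assumptions for illustration and are not the tools or case study used in the article.

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, RDFS

EX = Namespace("http://example.org/")   # hypothetical namespace for the example

g = Graph()
g.add((EX.TimBernersLee, RDF.type, EX.Person))
g.add((EX.TimBernersLee, RDFS.label, Literal("Tim Berners-Lee")))
g.add((EX.SemanticWeb, EX.proposedBy, EX.TimBernersLee))

# Basic operation: a SPARQL query over the stored triples.
query = """
    SELECT ?thing ?name WHERE {
        ?thing ex:proposedBy ?person .
        ?person rdfs:label ?name .
    }
"""
for row in g.query(query, initNs={"ex": EX, "rdfs": RDFS}):
    print(f"{row.thing} was proposed by {row.name}")
```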

  20. Incorporating Measurement Error from Modeled Air Pollution Exposures into Epidemiological Analyses.

    Science.gov (United States)

    Samoli, Evangelia; Butland, Barbara K

    2017-12-01

    Outdoor air pollution exposures used in epidemiological studies are commonly predicted from spatiotemporal models incorporating limited measurements, temporal factors, geographic information system variables, and/or satellite data. Measurement error in these exposure estimates leads to imprecise estimation of health effects and their standard errors. We reviewed methods for measurement error correction that have been applied in epidemiological studies that use model-derived air pollution data. We identified seven cohort studies and one panel study that have employed measurement error correction methods. These methods included regression calibration, risk set regression calibration, regression calibration with instrumental variables, the simulation extrapolation approach (SIMEX), and methods based on the non-parametric or parametric bootstrap. Corrections resulted in small increases in the absolute magnitude of the health effect estimate and its standard error under most scenarios. The limited application of measurement error correction methods in air pollution studies may be attributed to the absence of exposure validation data and the methodological complexity of the proposed methods. Future epidemiological studies should consider in their design phase the requirements for the measurement error correction method to be applied later, while methodological advances are needed for the multi-pollutant setting.
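    A toy version of the simplest of these approaches, regression calibration: a validation subset with "true" exposures is used to calibrate the error-prone modelled exposure, which is then substituted into the health model. The simulated data, the classical error structure, and the linear outcome model are all illustrative assumptions, not any of the reviewed studies' designs.

```python
import numpy as np

rng = np.random.default_rng(42)
n, n_val = 2000, 300

# Simulated 'true' exposure, an error-prone modelled exposure, and a health outcome.
x_true = rng.normal(10, 3, n)
x_model = x_true + rng.normal(0, 2, n)            # classical-type measurement error
y = 0.15 * x_true + rng.normal(0, 1, n)           # true health effect = 0.15 per unit

# Naive analysis: regress the outcome directly on the modelled exposure (attenuated).
beta_naive = np.polyfit(x_model, y, 1)[0]

# Regression calibration: in a validation subset where x_true is known, model
# E[x_true | x_model], then substitute calibrated exposures in the health model.
val = rng.choice(n, n_val, replace=False)
slope, intercept = np.polyfit(x_model[val], x_true[val], 1)
x_calibrated = intercept + slope * x_model
beta_rc = np.polyfit(x_calibrated, y, 1)[0]

print(f"naive estimate:      {beta_naive:.3f}")
print(f"calibrated estimate: {beta_rc:.3f}   (true value 0.15)")
```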

  1. ANALYSING POST-SEISMIC DEFORMATION OF IZMIT EARTHQUAKE WITH INSAR, GNSS AND COULOMB STRESS MODELLING

    Directory of Open Access Journals (Sweden)

    R. A. Barut

    2016-06-01

    Full Text Available On August 17th 1999, a Mw 7.4 earthquake struck the city of Izmit in the north-west of Turkey. This event was one of the most devastating earthquakes of the twentieth century. The epicentre of the Izmit earthquake was on the North Anatolian Fault (NAF), which is one of the most active right-lateral strike-slip faults on Earth. However, this earthquake offers an opportunity to study how strain is accommodated in an inter-segment region of a large strike-slip fault. In order to determine the post-seismic effects of the Izmit earthquake, the authors modelled Coulomb stress changes of the aftershocks, as well as using the deformation measurement techniques of Interferometric Synthetic Aperture Radar (InSAR) and the Global Navigation Satellite System (GNSS). The authors have shown that InSAR and GNSS observations over a time period of three months after the earthquake, combined with Coulomb stress change modelling, can explain the fault zone expansion, as well as the deformation of the northern region of the NAF. It was also found that there is a strong agreement between the InSAR and GNSS results for the post-seismic phases of investigation, with differences of less than 2 mm and a standard deviation of the differences of less than 1 mm.

  2. Subchannel and Computational Fluid Dynamic Analyses of a Model Pin Bundle

    Energy Technology Data Exchange (ETDEWEB)

    Gairola, A.; Arif, M.; Suh, K. Y. [Seoul National Univ., Seoul (Korea, Republic of)

    2014-05-15

    The current study showed that the simplistic approach of the subchannel analysis code MATRA was not good at capturing the physical behavior of the coolant inside the rod bundle. With the incorporation of a more detailed geometry of the grid spacer in the CFX code, it was possible to approach the experimental values. However, it is vital to incorporate more advanced turbulence mixing models to more realistically simulate the behavior of the liquid metal coolant inside the model pin bundle, in parallel with the incorporation of the bottom and top grid structures. In the framework of the 11th international meeting of the International Association for Hydraulic Research and Engineering (IAHR) working group on advanced reactor thermal hydraulics, a standard problem was conducted. The essence of the problem was to examine the hydraulics and heat transfer in a novel pin bundle with different pitch-to-rod-diameter ratios and heat fluxes, cooled by liquid metal. The standard problem stems from the field of nuclear safety research, with the idea of validating and checking the performance of computer codes against the experimental results. Comprehensive checks between the two will help improve the reliability and accuracy of the codes used for accident simulations.

  3. Integration of 3d Models and Diagnostic Analyses Through a Conservation-Oriented Information System

    Science.gov (United States)

    Mandelli, A.; Achille, C.; Tommasi, C.; Fassi, F.

    2017-08-01

    In recent years, mature technologies for producing high-quality virtual 3D replicas of Cultural Heritage (CH) artefacts have grown thanks to the progress of Information Technology (IT) tools. These methods are an efficient way to present digital models that can be used for several purposes: heritage management, support to conservation, virtual restoration, reconstruction and colouring, art cataloguing and visual communication. The work presented is an emblematic case study oriented to preventive conservation through monitoring activities, using different acquisition methods and instruments. It was developed within a project funded by the Lombardy Region, Italy, called "Smart Culture", which aimed to realise a platform that gives users easy access to CH artefacts, using a very famous statue as an example. The final product is a 3D reality-based model that contains a great deal of information and that can be consulted through a common web browser. In the end, it was possible to define general strategies oriented to the maintenance and valorisation of CH artefacts, which, in this specific case, must consider the integration of different techniques and competencies to obtain complete, accurate and continuous monitoring of the statue.

  4. INTEGRATION OF 3D MODELS AND DIAGNOSTIC ANALYSES THROUGH A CONSERVATION-ORIENTED INFORMATION SYSTEM

    Directory of Open Access Journals (Sweden)

    A. Mandelli

    2017-08-01

    Full Text Available In recent years, mature technologies for producing high-quality virtual 3D replicas of Cultural Heritage (CH) artefacts have grown thanks to the progress of Information Technology (IT) tools. These methods are an efficient way to present digital models that can be used for several purposes: heritage management, support to conservation, virtual restoration, reconstruction and colouring, art cataloguing and visual communication. The work presented is an emblematic case study oriented to preventive conservation through monitoring activities, using different acquisition methods and instruments. It was developed within a project funded by the Lombardy Region, Italy, called “Smart Culture”, which aimed to realise a platform that gives users easy access to CH artefacts, using a very famous statue as an example. The final product is a 3D reality-based model that contains a great deal of information and that can be consulted through a common web browser. In the end, it was possible to define general strategies oriented to the maintenance and valorisation of CH artefacts, which, in this specific case, must consider the integration of different techniques and competencies to obtain complete, accurate and continuous monitoring of the statue.

  5. Analysing and combining atmospheric general circulation model simulations forced by prescribed SST: northern extratropical response

    Directory of Open Access Journals (Sweden)

    K. Maynard

    2001-06-01

    Full Text Available The ECHAM 3.2 (T21), ECHAM 4 (T30) and LMD (version 6, grid-point resolution with 96 longitudes × 72 latitudes) atmospheric general circulation models were integrated through the period 1961 to 1993, forced with the same observed Sea Surface Temperatures (SSTs) as compiled at the Hadley Centre. Three runs were made for each model, starting from different initial conditions. The mid-latitude circulation pattern which maximises the covariance between the simulation and the observations, i.e. the most skilful mode, and the one which maximises the covariance amongst the runs, i.e. the most reproducible mode, are calculated as the leading modes of a Singular Value Decomposition (SVD) analysis of observed and simulated Sea Level Pressure (SLP) and geopotential height at 500 hPa (Z500) seasonal anomalies. A common response amongst the different models, which have different resolutions and parametrizations, should be considered a more robust atmospheric response to SST than the same response obtained with only one model. A robust skilful mode is found mainly in December-February (DJF) and in June-August (JJA). In DJF, this mode is close to the SST-forced pattern found by Straus and Shukla (2000) over the North Pacific and North America, with a wavy out-of-phase relationship between the NE Pacific and the SE US on the one hand and NE North America on the other. This pattern evolves into a NAO-like pattern over the North Atlantic and Europe (SLP) and into a more N-S tripole over the Atlantic and European sector, with an out-of-phase relationship between middle Europe on the one hand and the northern and southern parts on the other (Z500). There are almost no spatial shifts between either field around North America (just a slight eastward shift of the highest absolute heterogeneous correlations for SLP relative to the Z500 ones). The time evolution of the SST-forced mode is moderately to strongly related to the ENSO/LNSO events but the spread amongst the ensemble of runs is not systematically related
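    The "most skilful mode" described above is essentially a maximum covariance (SVD) analysis between observed and simulated anomaly fields. The sketch below performs that decomposition on randomly generated stand-in matrices; the array shapes, noise levels, and diagnostics are illustrative assumptions, not the Hadley-Centre-forced model output.

```python
import numpy as np

rng = np.random.default_rng(0)
n_years, n_grid = 33, 500          # e.g. 1961-1993 DJF anomalies on a coarse grid

# Hypothetical observed and simulated SLP seasonal-anomaly matrices (time x space),
# assumed area-weighted and with the time mean removed.
obs = rng.normal(size=(n_years, n_grid))
sim = 0.4 * obs + rng.normal(scale=0.9, size=(n_years, n_grid))   # partly SST-forced

# Maximum covariance (SVD) analysis: leading pair of patterns maximising the
# covariance between observed and simulated anomalies -> the 'most skilful mode'.
C = obs.T @ sim / (n_years - 1)
U, s, Vt = np.linalg.svd(C, full_matrices=False)
obs_pattern, sim_pattern = U[:, 0], Vt[0, :]
obs_pc, sim_pc = obs @ obs_pattern, sim @ sim_pattern

frac_cov = s[0] ** 2 / np.sum(s ** 2)
corr = np.corrcoef(obs_pc, sim_pc)[0, 1]
print(f"Leading mode: {100 * frac_cov:.1f}% of squared covariance, "
      f"expansion-coefficient correlation r = {corr:.2f}")
```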

  6. Innovative three-dimensional neutronics analyses directly coupled with cad models of geometrically complex fusion systems

    International Nuclear Information System (INIS)

    Sawan, M.; Wilson, P.; El-Guebaly, L.; Henderson, D.; Sviatoslavsky, G.; Bohm, T.; Kiedrowski, B.; Ibrahim, A.; Smith, B.; Slaybaugh, R.; Tautges, T.

    2007-01-01

    Fusion systems are, in general, geometrically complex requiring detailed three-dimensional (3-D) nuclear analysis. This analysis is required to address tritium self-sufficiency, nuclear heating, radiation damage, shielding, and radiation streaming issues. To facilitate such calculations, we developed an innovative computational tool that is based on the continuous energy Monte Carlo code MCNP and permits the direct use of CAD-based solid models in the ray-tracing. This allows performing the neutronics calculations in a model that preserves the geometrical details without any simplification, eliminates possible human error in modeling the geometry for MCNP, and allows faster design iterations. In addition to improving the work flow for simulating complex 3- D geometries, it allows a richer representation of the geometry compared to the standard 2nd order polynomial representation. This newly developed tool has been successfully tested for a detailed 40 degree sector benchmark of the International Thermonuclear Experimental Reactor (ITER). The calculations included determining the poloidal variation of the neutron wall loading, flux and nuclear heating in the divertor components, nuclear heating in toroidal field coils, and radiation streaming in the mid-plane port. The tool has been applied to perform 3-D nuclear analysis for several fusion designs including the ARIES Compact Stellarator (ARIES-CS), the High Average Power Laser (HAPL) inertial fusion power plant, and ITER first wall/shield (FWS) modules. The ARIES-CS stellarator has a first wall shape and a plasma profile that varies toroidally within each field period compared to the uniform toroidal shape in tokamaks. Such variation cannot be modeled analytically in the standard MCNP code. The impact of the complex helical geometry and the non-uniform blanket and divertor on the overall tritium breeding ratio and total nuclear heating was determined. In addition, we calculated the neutron wall loading variation in

  7. A review on design of experiments and surrogate models in aircraft real-time and many-query aerodynamic analyses

    Science.gov (United States)

    Yondo, Raul; Andrés, Esther; Valero, Eusebio

    2018-01-01

    Full-scale aerodynamic wind tunnel testing, numerical simulation of high-dimensional (full-order) aerodynamic models and flight testing are some of the fundamental but complex steps in the various design phases of recent civil transport aircraft. Current aircraft aerodynamic designs have increased in complexity (multidisciplinary, multi-objective or multi-fidelity) and need to address the challenges posed by the nonlinearity of the objective functions and constraints, uncertainty quantification in aerodynamic problems, and restrained computational budgets. With the aim of reducing the computational burden and generating low-cost but accurate models that mimic those full-order models at different values of the design variables, recent progress has witnessed the introduction, in real-time and many-query analyses, of surrogate-based approaches as rapid and cheaper-to-simulate models. In this paper, a comprehensive and state-of-the-art survey of common surrogate modeling techniques and surrogate-based optimization methods is given, with an emphasis on model selection and validation, dimensionality reduction, sensitivity analyses, constraints handling, and infill and stopping criteria. Benefits, drawbacks and comparative discussions in applying those methods are described. Furthermore, the paper familiarizes readers with surrogate models that have been successfully applied to the general field of fluid dynamics, but not yet in the aerospace industry. Additionally, the review revisits the most popular sampling strategies used in conducting physical and simulation-based experiments in aircraft aerodynamic design. Attractive or smart designs infrequently used in the field and discussions on advanced sampling methodologies are presented, to give a glance at the various efficient possibilities for a priori sampling of the parameter space. Closing remarks focus on future perspectives, challenges and shortcomings associated with the use of surrogate models by aircraft industrial

  8. A model using marginal efficiency of investment to analyse carbon and nitrogen interactions in forested ecosystems

    Science.gov (United States)

    Thomas, R. Q.; Williams, M.

    2014-12-01

    Carbon (C) and nitrogen (N) cycles are coupled in terrestrial ecosystems through multiple processes including photosynthesis, tissue allocation, respiration, N fixation, N uptake, and decomposition of litter and soil organic matter. Capturing the constraint of N on terrestrial C uptake and storage has been a focus of the Earth System modelling community. Here we explore the trade-offs and sensitivities of allocating C and N to different tissues in order to optimize the productivity of plants using a new, simple model of ecosystem C-N cycling and interactions (ACONITE). ACONITE builds on theory related to plant economics in order to predict key ecosystem properties (leaf area index, leaf C:N, N fixation, and plant C use efficiency) based on the optimization of the marginal change in net C or N uptake associated with a change in allocation of C or N to plant tissues. We simulated and evaluated steady-state and transient ecosystem stocks and fluxes in three different forest ecosystems types (tropical evergreen, temperate deciduous, and temperate evergreen). Leaf C:N differed among the three ecosystem types (temperate deciduous plant traits. Gross primary productivity (GPP) and net primary productivity (NPP) estimates compared well to observed fluxes at the simulation sites. A sensitivity analysis revealed that parameterization of the relationship between leaf N and leaf respiration had the largest influence on leaf area index and leaf C:N. Also, a widely used linear leaf N-respiration relationship did not yield a realistic leaf C:N, while a more recently reported non-linear relationship simulated leaf C:N that compared better to the global trait database than the linear relationship. Overall, our ability to constrain leaf area index and allow spatially and temporally variable leaf C:N can help address challenges simulating these properties in ecosystem and Earth System models. Furthermore, the simple approach with emergent properties based on coupled C-N dynamics has

  9. Critical factors in SEM 3D stereo microscopy

    DEFF Research Database (Denmark)

    Marinello, F.; Bariano, P.; Savio, E.

    2008-01-01

    This work addresses dimensional measurements performed with the scanning electron microscope (SEM) using 3D reconstruction of surface topography through stereo-photogrammetry. The paper presents both theoretical and experimental investigations on the effects of instrumental variables and measurement parameters on reconstruction accuracy. Investigations were performed on a novel sample, specifically developed and implemented for the tests. The description is based on the model function introduced by Piazzesi and adapted for eucentrically tilted stereopairs. Two main classes of influencing factors are recognized: the first one is related to the measurement operation and the instrument set-up; the second concerns the quality of scanned images and represents the major criticality in the application of SEMs for 3D characterizations.

  10. Use of model analysis to analyse Thai students’ attitudes and approaches to physics problem solving

    Science.gov (United States)

    Rakkapao, S.; Prasitpong, S.

    2018-03-01

    This study applies the model analysis technique to explore the distribution of Thai students’ attitudes and approaches to physics problem solving and how those attitudes and approaches change as a result of different experiences in physics learning. We administered the Attitudes and Approaches to Problem Solving (AAPS) survey to over 700 Thai university students from five different levels, namely students entering science, first-year science students, and second-, third- and fourth-year physics students. We found that their inferred mental states were generally mixed. The largest gap between physics experts and all levels of the students was about the role of equations and formulas in physics problem solving, and in views towards difficult problems. Most participants of all levels believed that being able to handle the mathematics is the most important part of physics problem solving. Most students’ views did not change even though they gained experiences in physics learning.
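
    For readers unfamiliar with the model analysis technique used above, the sketch below shows one common form of its 'class density matrix' construction: each student's distribution over response categories becomes a state vector, the density matrix is averaged over the class, and its eigen-decomposition summarizes how mixed the class is. The category count and random responses are placeholders, not the AAPS data.

        # Schematic sketch of a model-analysis density matrix
        # (illustrative random data; the study uses categorized AAPS responses).
        import numpy as np

        rng = np.random.default_rng(1)
        n_students, n_models = 700, 3          # e.g. expert-like, novice-like, mixed

        # p[k, m]: fraction of student k's responses falling in model category m
        p = rng.dirichlet(alpha=[2.0, 2.0, 1.0], size=n_students)

        u = np.sqrt(p)                         # per-student model state vectors
        density = (u[:, :, None] * u[:, None, :]).mean(axis=0)   # class density matrix

        eigvals, eigvecs = np.linalg.eigh(density)
        order = np.argsort(eigvals)[::-1]
        print("eigenvalues:", np.round(eigvals[order], 3))
        print("primary class state:", np.round(eigvecs[:, order[0]], 3))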

  11. Statistical Analyses and Modeling of the Implementation of Agile Manufacturing Tactics in Industrial Firms

    Directory of Open Access Journals (Sweden)

    Mohammad D. AL-Tahat

    2012-01-01

    Full Text Available This paper provides a review of and introduction to agile manufacturing. Tactics of agile manufacturing are mapped into different production areas (eight latent constructs: manufacturing equipment and technology, processes technology and know-how, quality and productivity improvement, production planning and control, shop floor management, product design and development, supplier relationship management, and customer relationship management). The implementation level of agile manufacturing tactics is investigated in each area. A structural equation model is proposed. Hypotheses are formulated. Feedback from 456 firms is collected using a five-point Likert-scale questionnaire. Statistical analysis is carried out using IBM SPSS and AMOS. Multicollinearity, content validity, consistency, construct validity, ANOVA analysis, and relationships between agile components are tested. The results of this study show that agile manufacturing tactics have a positive effect on the overall agility level. This conclusion can be used by manufacturing firms to manage challenges when trying to be agile.
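
    A minimal sketch of how such a structural equation model could be specified and fitted in Python is given below, assuming the semopy package and its lavaan-style model syntax; the three latent constructs, indicator names and CSV file are hypothetical stand-ins for the study's eight constructs and questionnaire items, not the authors' AMOS model.

        # Hedged sketch of an SEM fit with semopy (assumed available);
        # constructs, items and the data file are hypothetical placeholders.
        import pandas as pd
        import semopy

        desc = """
        Quality =~ q1 + q2 + q3
        Planning =~ p1 + p2 + p3
        Agility =~ a1 + a2 + a3
        Agility ~ Quality + Planning
        """

        data = pd.read_csv("agility_survey.csv")   # hypothetical Likert-scale responses
        model = semopy.Model(desc)
        model.fit(data)
        print(model.inspect())                     # parameter estimates and p-values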

  12. Use of CFD modelling for analysing air parameters in auditorium halls

    Science.gov (United States)

    Cichowicz, Robert

    2017-11-01

    Modelling with the use of numerical methods is currently the most popular way of solving scientific as well as engineering problems. Thanks to computer methods it is possible, for example, to comprehensively describe the conditions in a given room and to determine thermal comfort, which is a complex issue including the subjective sensations of the persons in that room. The article presents the results of measurements and numerical computations that enabled an assessment of environmental parameters, taking into consideration microclimate, thermal comfort, air speeds in the zone of human presence and dustiness in auditorium halls. For this purpose, measurements of temperature, relative humidity and dustiness were made with the use of a digital microclimate meter and a laser dust particle counter. Numerical computations were then performed using the DesignBuilder application, and the obtained results enabled determination of the PMV comfort indicator in the selected rooms.

  13. Statistical modelling of measurement errors in gas chromatographic analyses of blood alcohol content.

    Science.gov (United States)

    Moroni, Rossana; Blomstedt, Paul; Wilhelm, Lars; Reinikainen, Tapani; Sippola, Erkki; Corander, Jukka

    2010-10-10

    Headspace gas chromatographic measurements of ethanol content in blood specimens from suspected drunk drivers are routinely carried out in forensic laboratories. In the widely established standard statistical framework, measurement errors in such data are represented by Gaussian distributions for the population of blood specimens at any given level of ethanol content. It is known that the variance of measurement errors increases as a function of the level of ethanol content, and the standard statistical approach addresses this issue by replacing the unknown population variances by estimates derived from a large sample using a linear regression model. Appropriate statistical analysis of the systematic and random components in the measurement errors is necessary in order to guarantee legally sound security corrections reported to the police authority. Here we address this issue by developing a novel statistical approach that takes into account any potential non-linearity in the relationship between the level of ethanol content and the variability of measurement errors. Our method is based on standard non-parametric kernel techniques for density estimation using a large database of laboratory measurements for blood specimens. Furthermore, we also address the issue of systematic errors in the measurement process by a statistical model that incorporates the sign of the error term in the security correction calculations. Analysis of a set of certified reference material (CRM) blood samples demonstrates the importance of explicitly handling the direction of the systematic errors in establishing the statistical uncertainty about the true level of ethanol content. Use of our statistical framework to aid quality control in the laboratory is also discussed.
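
    A generic sketch of the non-parametric idea, not the authors' exact estimator, is shown below: the spread of replicate measurement errors is smoothed with a Gaussian kernel as a function of ethanol level, using simulated data in place of the laboratory database.

        # Generic kernel smoothing of measurement-error spread vs. ethanol level
        # (simulated data; not the forensic database or the authors' exact method).
        import numpy as np

        rng = np.random.default_rng(2)
        level = rng.uniform(0.2, 3.5, size=2000)               # true ethanol (g/kg), assumed
        noise_sd = 0.01 + 0.02 * level**1.3                    # assumed non-linear spread
        error = rng.normal(0.0, noise_sd)                      # simulated measurement errors

        def kernel_sd(x0, x, e, bandwidth=0.2):
            w = np.exp(-0.5 * ((x - x0) / bandwidth) ** 2)     # Gaussian kernel weights
            return np.sqrt(np.sum(w * e**2) / np.sum(w))       # local RMS error

        for x0 in np.linspace(0.3, 3.3, 7):
            print(f"level {x0:.1f} g/kg -> estimated error SD {kernel_sd(x0, level, error):.4f}")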

  14. Comparative ultrastructural analyses of platelets and fibrin networks using the murine model of asthma.

    Science.gov (United States)

    Pretorius, E; Ekpo, O E; Smit, E

    2007-10-01

    The murine Balb/c asthma model has been used successfully for a number of in vivo immunological applications and for testing novel therapeutics, and it is a reliable, clinically relevant facsimile of the human disease. Here we investigate whether this model can be used to study other components of the human body, e.g. ultrastructure. In particular, we investigate the effect of the phytomedicine Euphorbia hirta (used to treat asthma) on the ultrastructure of fibrin as well as platelets, cellular structures that both play an important role in the coagulation process. Hydrocortisone is used as a positive control. The ultrastructure of the fibrin networks and platelets of control mice was compared to that of mice that were asthmatic, treated with two concentrations of hydrocortisone, or treated with one concentration of the plant material. Results indicate that control mice possess major, thick fibers and minor, thin fibers as well as tight, round platelet aggregates with typical pseudopodia formation. Minor fibers of asthmatic mice have a netlike appearance covering the major fibers, while the platelets seem to form loosely connected, granular aggregates. Both concentrations of hydrocortisone make the fibrin more fragile, and platelet morphology changes from a tight platelet aggregate to a more granular aggregate in which the platelets are not closely fused to each other. We conclude that E. hirta does not impact the fragility of the fibrin and that it prevents the minor fibers from forming the dense netlike layer over the major fibers that is seen in untreated asthmatic mice. This ultrastructural morphology might give us better insight into asthma and possible new treatment regimes.

  15. SEM investigation of heart tissue samples

    Energy Technology Data Exchange (ETDEWEB)

    Saunders, R; Amoroso, M [Physics Department, University of the West Indies, St. Augustine, Trinidad and Tobago, West Indies (Trinidad and Tobago)

    2010-07-01

    We used the scanning electron microscope to examine the cardiac tissue of a cow (Bos taurus), a pig (Sus scrofa), and a human (Homo sapiens). 1 mm³ blocks of left ventricular tissue were prepared for SEM scanning by fixing in 96% ethanol followed by critical point drying (cryofixation), then sputter-coating with gold. The typical ridged structure of the myofibrils was observed for all the species. In addition, crystal-like structures were found in one of the samples of the heart tissue of the pig. These structures were investigated further using an EDVAC x-ray analysis attachment to the SEM. Elemental x-ray analysis showed that the highest peaks occurred for gold, followed by carbon, oxygen, magnesium and potassium. As the samples were coated with gold for conductivity, this highest peak is expected. Much lower peaks at carbon, oxygen, magnesium and potassium suggest that a crystallized salt such as a carbonate was present in the tissue before sacrifice.

  17. Viewing Integrated-Circuit Interconnections By SEM

    Science.gov (United States)

    Lawton, Russel A.; Gauldin, Robert E.; Ruiz, Ronald P.

    1990-01-01

    Back-scattering of energetic electrons reveals hidden metal layers. Experiments show that, with suitable operating adjustments, scanning electron microscopy (SEM) can be used to look for defects in aluminum interconnections in integrated circuits. It enables monitoring, in situ, of changes in defects caused by changes in temperature. It also gives a truer picture of defects, as etching can change the stress field of the metal-and-passivation pattern, causing changes in defects.

  18. On Deriving Requirements for the Surface Mass Balance forcing of a Greenland Ice Sheet Model using Uncertainty Analyses

    Science.gov (United States)

    Schlegel, N.; Larour, E. Y.; Box, J. E.

    2015-12-01

    During July of 2012, the percentage of the Greenland surface exposed to melt was the largest in recorded history. And, even though evidence of increased melt rates had been captured by remote sensing observations throughout the last decade, this particular event took the community by surprise. How Greenland ice flow will respond to such an event or to increased frequencies of extreme melt events in the future is unclear, as it requires detailed comprehension of Greenland surface climate and the ice sheet's sensitivity to associated uncertainties. With established uncertainty quantification (UQ) tools embedded within the Ice Sheet System Model (ISSM), we conduct decadal-scale forward modeling experiments to 1) quantify the spatial resolution needed to effectively force surface mass balance (SMB) in various regions of the ice sheet and 2) determine the dynamic response of Greenland outlet glaciers to variations in SMB. First, we perform sensitivity analyses to determine how perturbations in SMB affect model output; results allow us to investigate the locations where variations most significantly affect ice flow, and on what spatial scales. Next, we apply Monte-Carlo style sampling analyses to determine how errors in SMB propagate through the model as uncertainties in estimates of Greenland ice discharge and regional mass balance. This work is performed at the California Institute of Technology's Jet Propulsion Laboratory under a contract with the National Aeronautics and Space Administration's Cryosphere Program.
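
    A schematic of the Monte-Carlo propagation step described above is sketched below: regional SMB values are perturbed within assumed error bounds and pushed through a stand-in 'ice response' function to obtain a spread in discharge. The response function and error magnitudes are invented placeholders, not ISSM.

        # Schematic Monte-Carlo propagation of SMB uncertainty (toy response
        # function, not ISSM; error magnitudes are illustrative).
        import numpy as np

        rng = np.random.default_rng(3)
        n_regions, n_samples = 6, 500
        smb_mean = np.array([0.8, 0.5, 0.2, -0.1, -0.4, -0.9])   # m/yr ice eq., invented
        smb_sigma = 0.15 * np.abs(smb_mean) + 0.05                # assumed regional errors

        def toy_discharge(smb):
            # stand-in for the ice-flow model: discharge rises as SMB becomes more negative
            return np.sum(np.maximum(0.0, -smb) * 12.0) + 5.0

        samples = rng.normal(smb_mean, smb_sigma, size=(n_samples, n_regions))
        discharge = np.array([toy_discharge(s) for s in samples])
        print(f"discharge: mean {discharge.mean():.1f}, 5-95% range "
              f"[{np.percentile(discharge, 5):.1f}, {np.percentile(discharge, 95):.1f}] (toy units)")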

  19. Analyses and optimization of Lee propagation model for LoRa 868 MHz network deployments in urban areas

    Directory of Open Access Journals (Sweden)

    Dobrilović Dalibor

    2017-01-01

    Full Text Available In the recent period, fast ICT expansion and the rapid appearance of new technologies have raised the importance of fast and accurate planning and deployment of emerging communication technologies, especially wireless ones. This paper analyses the possible usage of the Lee propagation model for planning, design and management of networks based on LoRa 868 MHz technology. LoRa is a wireless technology which can be deployed in various Internet of Things and Smart City scenarios in urban areas. The analyses are based on a comparison of field measurements with model calculations. Besides the analysis of the Lee propagation model's usability, possible optimization of the model is discussed as well. The research results can be used for accurate design and planning, and for the preparation of high-performance wireless resource management, of various Internet of Things and Smart City applications in urban areas based on LoRa or similar wireless technologies. The equipment used for measurements is based on open-source hardware.
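
    A simplified log-distance fit of the kind used when tuning Lee-type models against drive-test measurements is sketched below; the reference loss, slope and shadowing spread are invented values, not the paper's 868 MHz measurements.

        # Simplified log-distance path-loss fit (synthetic data, invented values),
        # of the kind used when calibrating Lee-type models to field measurements.
        import numpy as np

        rng = np.random.default_rng(4)
        d = rng.uniform(0.1, 5.0, size=80)                     # Tx-Rx distance, km
        true_L0, true_gamma = 127.0, 3.8                       # assumed reference loss / slope
        path_loss = true_L0 + 10.0 * true_gamma * np.log10(d / 1.0) + rng.normal(0, 6.0, d.size)

        # least-squares fit of L(d) = L0 + 10*gamma*log10(d/d0), with d0 = 1 km
        X = np.column_stack([np.ones_like(d), 10.0 * np.log10(d / 1.0)])
        (L0_hat, gamma_hat), *_ = np.linalg.lstsq(X, path_loss, rcond=None)
        rmse = np.sqrt(np.mean((X @ np.array([L0_hat, gamma_hat]) - path_loss) ** 2))
        print(f"fitted L0 = {L0_hat:.1f} dB, slope gamma = {gamma_hat:.2f}, RMSE = {rmse:.1f} dB")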

  20. Parametric analyses of DEMO Divertor using two dimensional transient thermal hydraulic modelling

    Science.gov (United States)

    Domalapally, Phani; Di Caro, Marco

    2017-11-01

    Among the options considered for cooling of the plasma-facing components of the DEMO reactor, water cooling is a conservative option because of its high heat removal capability. In this work a two-dimensional transient thermal hydraulic code is developed to support the design of the divertor for the projected DEMO reactor with water as a coolant. The mathematical model accounts for transient 2D heat conduction in the divertor section. Temperature-dependent properties are used for more accurate analysis. Correlations for single-phase forced convection, partially developed subcooled nucleate boiling, fully developed subcooled nucleate boiling and film boiling are used to calculate the heat transfer coefficients on the channel side considering the swirl flow, wherein different correlations found in the literature are compared against each other. A correlation for the critical heat flux is used to estimate its limit for given flow conditions. The paper then presents the results of a parametric analysis of how flow velocity, coolant channel diameter, coolant pipe thickness, armor material thickness, inlet temperature and operating pressure affect the behavior of the divertor under steady or transient heat fluxes. This code will help in understanding the effects of these basic parameters on the behavior of the divertor, in order to achieve a better design from a thermal hydraulic point of view.
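
    As one concrete example of the channel-side correlations mentioned above, the sketch below evaluates the classic Dittus-Boelter single-phase forced-convection correlation, Nu = 0.023 Re^0.8 Pr^0.4; the coolant properties and channel dimensions are rough illustrative values, and the swirl-flow and boiling correlations used in the paper are not reproduced here.

        # Single-phase forced-convection estimate via the Dittus-Boelter
        # correlation (illustrative water-like properties, not design values).
        def dittus_boelter_htc(velocity, diameter, rho, mu, cp, k):
            re = rho * velocity * diameter / mu        # Reynolds number
            pr = cp * mu / k                           # Prandtl number
            nu = 0.023 * re**0.8 * pr**0.4             # Nusselt number (heating)
            return nu * k / diameter                   # heat transfer coefficient, W/m^2K

        # approximate water properties near 150 C and 5 MPa, for illustration only
        h = dittus_boelter_htc(velocity=12.0, diameter=0.008, rho=920.0,
                               mu=1.9e-4, cp=4310.0, k=0.68)
        print(f"h ~ {h/1000:.0f} kW/m^2K")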

  1. Modelling and Analysing Deadlock in Flexible Manufacturing System using Timed Petri Net

    Directory of Open Access Journals (Sweden)

    Assem Hatem Taha

    2017-03-01

    Full Text Available A flexible manufacturing system (FMS) has several advantages compared to conventional systems, such as higher machine utilization, higher efficiency, less inventory, and less production time. On the other hand, FMS is expensive and complicated. One of the main problems that may occur is deadlock. Deadlock is a situation in which one or more operations are unable to complete their tasks because they are waiting for resources that are being used by other processes. This may occur due to inappropriate sharing of the resources or improper resource-allocation logic, given the complexity of assigning shared resources to different tasks in an efficient way. One of the most effective tools to model and detect deadlocks is the Petri net. In this research, Matlab has been used to detect the deadlock in two parallel lines with one shared machine. The analysis shows that deadlock exists at the transition with high utilization and the place with high waiting time.
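
    A minimal reachability-based deadlock check for a small Petri net is sketched below in Python (the paper itself uses Matlab); the example net and its markings are generic placeholders, not the two-line FMS model of the study.

        # Minimal reachability-based deadlock check for a small Petri net
        # (generic example net, not the paper's shared-machine FMS model).
        from collections import deque

        # places indexed 0..3; each transition: (consume vector, produce vector)
        transitions = {
            "t1": ([1, 0, 0, 0], [0, 1, 0, 0]),
            "t2": ([0, 1, 1, 0], [0, 0, 0, 1]),
            "t3": ([0, 0, 0, 1], [1, 0, 0, 0]),   # note: the token in place 2 is never returned
        }
        initial = (1, 0, 1, 0)

        def enabled(marking, consume):
            return all(m >= c for m, c in zip(marking, consume))

        def fire(marking, consume, produce):
            return tuple(m - c + p for m, c, p in zip(marking, consume, produce))

        seen, queue, deadlocks = {initial}, deque([initial]), []
        while queue:
            marking = queue.popleft()
            fired_any = False
            for name, (consume, produce) in transitions.items():
                if enabled(marking, consume):
                    fired_any = True
                    nxt = fire(marking, consume, produce)
                    if nxt not in seen:
                        seen.add(nxt)
                        queue.append(nxt)
            if not fired_any:
                deadlocks.append(marking)           # no transition enabled: deadlock

        print("reachable markings:", len(seen), "deadlock markings:", deadlocks)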

  2. Alpins and thibos vectorial astigmatism analyses: proposal of a linear regression model between methods

    Directory of Open Access Journals (Sweden)

    Giuliano de Oliveira Freitas

    2013-10-01

    Full Text Available PURPOSE: To determine linear regression models between Alpins descriptive indices and Thibos astigmatic power vectors (APV), assessing the validity and strength of such correlations. METHODS: This case series prospectively assessed 62 eyes of 31 consecutive cataract patients with preoperative corneal astigmatism between 0.75 and 2.50 diopters in both eyes. Patients were randomly assigned to two phacoemulsification groups: one assigned to receive an AcrySof® Toric intraocular lens (IOL) in both eyes and another assigned to have an AcrySof Natural IOL associated with limbal relaxing incisions, also in both eyes. All patients were reevaluated postoperatively at 6 months, when refractive astigmatism analysis was performed using both the Alpins and Thibos methods. The ratio between the Thibos postoperative APV and preoperative APV (APVratio) and its linear regression to the Alpins percentage of success of astigmatic surgery, percentage of astigmatism corrected and percentage of astigmatism reduction at the intended axis were assessed. RESULTS: A significant negative correlation between the post- to preoperative Thibos APVratio and the Alpins percentage of success (%Success) was found (Spearman's ρ = -0.93); the linear regression is given by the following equation: %Success = (-APVratio + 1.00) × 100. CONCLUSION: The linear regression we found between the APVratio and %Success permits a validated mathematical inference concerning the overall success of astigmatic surgery.
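
    The reported relation, %Success = (-APVratio + 1.00) × 100, can be checked numerically as in the hedged sketch below, which uses simulated eyes with a little added scatter rather than the 62-eye series.

        # Sketch of the APV-ratio vs. %Success relation on simulated eyes
        # (illustrative data and noise level, not the study's measurements).
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(5)
        apv_pre = rng.uniform(0.75, 2.50, size=62)                # preoperative APV (D)
        apv_post = apv_pre * rng.uniform(0.05, 0.55, size=62)     # assumed surgical effect
        apv_ratio = apv_post / apv_pre

        # Alpins-style percentage of success, plus a little scatter to mimic method differences
        success = (1.0 - apv_ratio) * 100.0 + rng.normal(0.0, 3.0, size=62)

        rho, pval = stats.spearmanr(apv_ratio, success)
        slope, intercept, r, p, se = stats.linregress(apv_ratio, success)
        print(f"Spearman rho = {rho:.2f}, fitted %Success ~ {intercept:.0f} + ({slope:.0f})*APVratio")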

  3. Modelling and optimization of combined cycle power plant based on exergoeconomic and environmental analyses

    International Nuclear Information System (INIS)

    Ganjehkaviri, A.; Mohd Jaafar, M.N.; Ahmadi, P.; Barzegaravval, H.

    2014-01-01

    This research paper presents a comprehensive thermodynamic modelling study of a combined cycle power plant (CCPP). The effects of economic strategies and design parameters on the plant optimization are also studied. An exergoeconomic analysis is conducted in order to determine the cost of electricity and the cost of exergy destruction. In addition, a comprehensive optimization study is performed to determine the optimal design parameters of the power plant. Next, the effects of variations in economic parameters on the sustainability, carbon dioxide emission and fuel consumption of the plant are investigated and presented for a typical combined cycle power plant. Changes in economic parameters shift the balance between cash flows and fixed costs of the plant at the optimum point. Moreover, economic strategies greatly limit the maximum reasonable reduction in carbon emission and fuel consumption. The results showed that by using the optimum values, the exergy efficiency increases by about 6%, while CO2 emission decreases by 5.63%. However, the variation in the cost was less than 1% due to the fact that a cost constraint was implemented. In addition, the sensitivity analysis for the optimization study had to be curtailed; therefore, the sensitivity of the optimization process and results to two important parameters is presented and discussed.

  4. Microarray and bioinformatic analyses suggest models for carbon metabolism in the autotroph Acidithiobacillus ferrooxidans

    Energy Technology Data Exchange (ETDEWEB)

    C. Appia-ayme; R. Quatrini; Y. Denis; F. Denizot; S. Silver; F. Roberto; F. Veloso; J. Valdes; J. P. Cardenas; M. Esparza; O. Orellana; E. Jedlicki; V. Bonnefoy; D. Holmes

    2006-09-01

    Acidithiobacillus ferrooxidans is a chemolithoautotrophic bacterium that uses iron or sulfur as an energy and electron source. Bioinformatic analysis was used to identify putative genes and potential metabolic pathways involved in CO2 fixation, 2P-glycolate detoxification, carboxysome formation and glycogen utilization in At. ferrooxidans. Microarray transcript profiling was carried out to compare the relative expression of the predicted genes of these pathways when the microorganism was grown in the presence of iron versus sulfur. Several gene expression patterns were confirmed by real-time PCR. Genes for each of the above predicted pathways were found to be organized into discrete clusters. Clusters exhibited differential gene expression depending on the presence of iron or sulfur in the medium. Concordance of gene expression within each cluster suggested that they are operons. Most notably, clusters of genes predicted to be involved in CO2 fixation, carboxysome formation, 2P-glycolate detoxification and glycogen biosynthesis were up-regulated in sulfur medium, whereas genes involved in glycogen utilization were preferentially expressed in iron medium. These results can be explained in terms of models of gene regulation that suggest how At. ferrooxidans can adjust its central carbon management to respond to changing environmental conditions.

  5. Analysing hydro-mechanical behaviour of reinforced slopes through centrifuge modelling

    Science.gov (United States)

    Veenhof, Rick; Wu, Wei

    2017-04-01

    Every year, slope instability causes casualties and damage to property and the environment. The behaviour of slopes during and after such events is complex and depends on meteorological conditions, slope geometry, hydro-mechanical soil properties, boundary conditions and the initial state of the soils. This study describes the effects of adding reinforcement, consisting of randomly distributed polyolefin monofilament fibres or Ryegrass (Lolium), on the behaviour of medium-fine sand in loose and medium dense conditions. Direct shear tests were performed on sand specimens with different void ratios, water contents and fibre or root densities. To simulate the stress state of real-scale field situations, centrifuge model tests were conducted on sand specimens with different slope angles, thicknesses of the reinforced layer, fibre densities, void ratios and water contents. An increase in peak shear strength is observed in all reinforced cases. Centrifuge tests show that, for reinforced slopes, the period until failure is extended. The location of shear band formation and the patch displacement behaviour indicate that the design of slope reinforcement has a significant effect on the failure behaviour. Future research will focus on the effect of plant water uptake on soil cohesion.

  6. Modeling and Analysing of Air Filter in Air Intake System in Automobile Engine

    Directory of Open Access Journals (Sweden)

    R. Manikantan

    2013-01-01

    Full Text Available As legislation on the emissions and performance of automobiles is being made more stringent, the expected performance of all the subsystems of an internal combustion engine is also becoming crucial. Nowadays engines are downsized and their power increased, and the demands on the air intake system have grown phenomenally. Hence, an analysis was carried out on a typical air filter fitted into the intake system to determine its flow characteristics. In the present investigation, a CAD model of an existing air filter was designed, and CFD analysis was done pertaining to various operating regimes of an internal combustion engine. The numerical results were validated with the experimental data. From the post-processed results, we can see that there is a deficiency in the design of the present filter, as the bottom portion of the filter is preventing the upward movement of air. Hence, the intake passage can be rearranged to provide an upward tangential motion, which can enhance the removal of larger dust and soot particles effectively by the inertial action of air alone.

  7. Analysing movements in investor’s risk aversion using the Heston volatility model

    Directory of Open Access Journals (Sweden)

    Alexie ALUPOAIEI

    2013-03-01

    Full Text Available In this paper we intend to identify and analyse, if present, an "epidemiological" relationship between the forecasts of professional investors and short-term developments in the EUR/RON exchange rate. Although we do not use a typical epidemiological model of the kind employed in biological research, we investigate the hypothesis that, after the Lehman Brothers crash and the implied onset of the current financial crisis, the forecasts of professional investors have significant explanatory power for subsequent short-run movements of EUR/RON. How does this mechanism work? Firstly, professional forecasters take account of the current macroeconomic, financial and political conditions and then produce forecasts. Secondly, based on those forecasts they take positions in the Romanian foreign exchange market for hedging and/or speculative purposes. Their positions, in addition, incorporate different degrees of uncertainty. In parallel, part of their expectations is disseminated to the public via media channels. When important movements are observed in the macroeconomic, financial or political spheres, the positions of professional investors in the FX derivatives market are activated. The current study represents a first step in this direction of analysis for the Romanian case. For the objectives formulated above, different measures of EUR/RON exchange rate volatility have been estimated and compared with implied volatilities. In a second stage, co-integration and dynamic correlation based tools were employed in order to investigate the relationship between implied volatility and daily returns of the EUR/RON exchange rate.
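
    Since the analysis compares implied with realised volatility, a standard Euler (full-truncation) simulation of the Heston stochastic-volatility model is sketched below; all parameter values are illustrative assumptions, not estimates for EUR/RON.

        # Euler (full-truncation) simulation of the Heston model; parameters are
        # illustrative placeholders, not EUR/RON estimates.
        import numpy as np

        rng = np.random.default_rng(6)
        s0, v0 = 4.40, 0.02                      # spot and initial variance (invented)
        kappa, theta, xi, rho = 2.0, 0.02, 0.3, -0.5
        mu, dt, n_steps, n_paths = 0.0, 1.0 / 252, 252, 10000

        s = np.full(n_paths, s0)
        v = np.full(n_paths, v0)
        for _ in range(n_steps):
            z1 = rng.standard_normal(n_paths)
            z2 = rho * z1 + np.sqrt(1.0 - rho**2) * rng.standard_normal(n_paths)
            v_pos = np.maximum(v, 0.0)                           # full truncation
            s *= np.exp((mu - 0.5 * v_pos) * dt + np.sqrt(v_pos * dt) * z1)
            v += kappa * (theta - v_pos) * dt + xi * np.sqrt(v_pos * dt) * z2

        ann_vol = np.sqrt(np.maximum(v, 0.0)).mean()
        print(f"mean terminal spot {s.mean():.3f}, mean instantaneous vol {ann_vol:.1%}")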

  8. Computational Modeling of Oxygen Transport in the Microcirculation: From an Experiment-Based Model to Theoretical Analyses

    OpenAIRE

    Lücker, Adrien

    2017-01-01

    Oxygen supply to cells by the cardiovascular system involves multiple physical and chemical processes that aim to satisfy fluctuating metabolic demand. Regulation mechanisms range from increased heart rate to minute adaptations in the microvasculature. The challenges and limitations of experimental studies in vivo make computational models an invaluable complement. In this thesis, oxygen transport from capillaries to tissue is investigated using a new numerical model that is tailored for vali...

  9. Characterization of Yeast Biofilm by Cryo-SEM and FIB-SEM

    Czech Academy of Sciences Publication Activity Database

    Hrubanová, Kamila; Nebesářová, Jana; Růžička, F.; Dluhoš, J.; Krzyžánek, Vladislav

    2013-01-01

    Roč. 19, S2 (2013), s. 226-227 ISSN 1431-9276 R&D Projects: GA MŠk EE.2.3.20.0103; GA TA ČR TE01020118; GA ČR GAP205/11/1687 Institutional support: RVO:68081731 ; RVO:60077344 Keywords : yeast biofilm * cryo-SEM * FIB-SEM Subject RIV: BH - Optics, Masers, Lasers Impact factor: 1.757, year: 2013

  10. Pathophysiologic and transcriptomic analyses of viscerotropic yellow fever in a rhesus macaque model.

    Science.gov (United States)

    Engelmann, Flora; Josset, Laurence; Girke, Thomas; Park, Byung; Barron, Alex; Dewane, Jesse; Hammarlund, Erika; Lewis, Anne; Axthelm, Michael K; Slifka, Mark K; Messaoudi, Ilhem

    2014-01-01

    Infection with yellow fever virus (YFV), an explosively replicating flavivirus, results in viral hemorrhagic disease characterized by cardiovascular shock and multi-organ failure. Unvaccinated populations experience 20 to 50% fatality. Few studies have examined the pathophysiological changes that occur in humans during YFV infection due to the sporadic nature and remote locations of outbreaks. Rhesus macaques are highly susceptible to YFV infection, providing a robust animal model to investigate host-pathogen interactions. In this study, we characterized disease progression as well as alterations in immune system homeostasis, cytokine production and gene expression in rhesus macaques infected with the virulent YFV strain DakH1279 (YFV-DakH1279). Following infection, YFV-DakH1279 replicated to high titers resulting in viscerotropic disease with ∼72% mortality. Data presented in this manuscript demonstrate for the first time that lethal YFV infection results in profound lymphopenia that precedes the hallmark changes in liver enzymes and that although tissue damage was noted in liver, kidneys, and lymphoid tissues, viral antigen was only detected in the liver. These observations suggest that additional tissue damage could be due to indirect effects of viral replication. Indeed, circulating levels of several cytokines peaked shortly before euthanasia. Our study also includes the first description of YFV-DakH1279-induced changes in gene expression within peripheral blood mononuclear cells 3 days post-infection prior to any clinical signs. These data show that infection with wild type YFV-DakH1279 or live-attenuated vaccine strain YFV-17D, resulted in 765 and 46 differentially expressed genes (DEGs), respectively. DEGs detected after YFV-17D infection were mostly associated with innate immunity, whereas YFV-DakH1279 infection resulted in dysregulation of genes associated with the development of immune response, ion metabolism, and apoptosis. Therefore, WT-YFV infection

  11. Model-based performance and energy analyses of reverse osmosis to reuse wastewater in a PVC production site.

    Science.gov (United States)

    Hu, Kang; Fiedler, Thorsten; Blanco, Laura; Geissen, Sven-Uwe; Zander, Simon; Prieto, David; Blanco, Angeles; Negro, Carlos; Swinnen, Nathalie

    2017-11-10

    A pilot-scale reverse osmosis (RO) unit downstream of a membrane bioreactor (MBR) was developed for desalination to reuse wastewater at a PVC production site. The solution-diffusion-film model (SDFM), based on the solution-diffusion model (SDM) and film theory, was proposed to describe rejections of electrolyte mixtures in the MBR effluent, which consist of dominant ions (Na+ and Cl-) and several trace ions (Ca2+, Mg2+, K+ and SO42-). A universal global optimisation method was used to estimate the ion permeability coefficients (B) and mass transfer coefficients (K) in the SDFM. The membrane performance was then evaluated based on the estimated parameters, which demonstrated that the theoretical simulations were in line with the experimental results for the dominant ions. Moreover, an energy analysis model that takes into account the limitation imposed by the thermodynamic restriction was proposed to analyse the specific energy consumption of the pilot-scale RO system in various scenarios.
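
    The solution-diffusion-film picture can be sketched for a single salt as below: solvent flux Jw = A(dP - dPi), solute flux Js = B(Cm - Cp), and film-theory polarisation Cm = Cb*exp(Jw/k), solved by fixed-point iteration. The coefficients are invented, not the values fitted to the pilot plant.

        # Sketch of the solution-diffusion-film model for a single salt
        # (all coefficients are invented, not the fitted pilot-plant values).
        import numpy as np

        A = 3.0e-12      # water permeability, m/(s*Pa)    (assumed)
        B = 2.0e-7       # salt permeability, m/s          (assumed)
        k = 3.0e-5       # mass-transfer coefficient, m/s  (assumed)
        dP = 15e5        # applied pressure difference, Pa
        c_b = 8.0        # bulk feed concentration, kg/m3

        def osmotic_pressure(c):
            # rough van 't Hoff style estimate, Pa per kg/m3 (illustrative)
            return 0.8e5 * c

        c_p, c_m = 0.1, c_b
        for _ in range(50):                                   # fixed-point iteration
            jw = A * (dP - (osmotic_pressure(c_m) - osmotic_pressure(c_p)))
            c_m = c_b * np.exp(jw / k)                        # concentration polarisation
            js = B * (c_m - c_p)
            c_p = js / jw                                     # permeate concentration

        print(f"Jw = {jw*3600*1000:.1f} L/m2/h, observed rejection = {1 - c_p/c_b:.3f}")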

  12. An assessment of the wind re-analyses in the modelling of an extreme sea state in the Black Sea

    Science.gov (United States)

    Akpinar, Adem; Ponce de León, S.

    2016-03-01

    This study aims at an assessment of wind re-analyses for modelling storms in the Black Sea. A wind-wave modelling system (Simulating WAves Nearshore, SWAN) is applied to the Black Sea basin and calibrated with buoy data for three recent re-analysis wind sources, namely the European Centre for Medium-Range Weather Forecasts Reanalysis-Interim (ERA-Interim), the Climate Forecast System Reanalysis (CFSR), and the Modern Era Retrospective Analysis for Research and Applications (MERRA), during an extreme wave condition that occurred in the north-eastern part of the Black Sea. The SWAN model simulations are carried out with default and tuned settings for the deep-water source terms, especially whitecapping. The performance of the best model configurations based on calibration with buoy data is discussed using data from the JASON2, TOPEX-Poseidon, ENVISAT and GFO satellites. The SWAN model calibration shows that the best configuration is obtained with the Janssen and Komen formulations for wave generation by wind and whitecapping dissipation, with the whitecapping coefficient (Cds) equal to 1.8e-5, using ERA-Interim. In addition, from collocation of the SWAN results against the satellite records, the best configuration is determined to be SWAN using the CFSR winds. Numerical results thus show that the accuracy of a wave forecast will depend on the quality of the wind field and the ability of the SWAN model to simulate the waves under extreme wind conditions in fetch-limited wave conditions.

  13. Gamma-ray pulsar physics: gap-model populations and light-curve analyses in the Fermi era

    International Nuclear Information System (INIS)

    Pierbattista, M.

    2010-01-01

    This thesis research focuses on the study of the young and energetic isolated ordinary pulsar population detected by the Fermi gamma-ray space telescope. We compared the expectations of four emission models with the LAT data. We found that all the models fail to reproduce the LAT detections, in particular the large number of high-Ė objects observed. This inconsistency is not model dependent. A discrepancy in the radio-loud/radio-quiet object ratio was also found between the observed and predicted samples. The Lγ ∝ Ė^0.5 relation is robustly confirmed by all the assumed models, with particular agreement in the slot gap (SG) case. On luminosity grounds, the intermediate-altitude emission of the two-pole caustic SG model is favoured. The beaming factor f_Ω shows an Ė dependency that is slightly visible in the SG case. Estimates of the pulsar orientations have been obtained to explain the simultaneous gamma-ray and radio light curves. By analysing the solutions we found a relation between the observed energy cutoff and the width of the emission slot gap. This relation has been theoretically predicted. A possible alignment of the magnetic obliquity α with time is rejected, for all the models, on timescales of the order of 10^6 years. The light-curve morphology study shows that outer-magnetosphere gap emission (OGs) is favoured to explain the observed radio-gamma lag. The light-curve moment studies (symmetry and sharpness), on the contrary, favour two-pole caustic SG emission. All the model predictions suggest a different magnetic field layout, with a hybrid two-pole caustic and intermediate-altitude emission, to explain both the pulsar luminosity and the light-curve morphology. The low-magnetosphere emission mechanism of the polar cap model is systematically rejected by all the tests performed. (author)

  14. Modeling and stress analyses of a normal foot-ankle and a prosthetic foot-ankle complex.

    Science.gov (United States)

    Ozen, Mustafa; Sayman, Onur; Havitcioglu, Hasan

    2013-01-01

    Total ankle replacement (TAR) is a relatively new concept and is becoming more popular for the treatment of ankle arthritis and fractures. Because of the high costs and difficulties of experimental studies, the development of TAR prostheses is progressing very slowly. For this reason, medical imaging techniques such as CT and MR have become more and more useful. The finite element method (FEM) is a widely used technique to estimate the mechanical behaviour of materials and structures in engineering applications. FEM has also been increasingly applied to biomechanical analyses of human bones, tissues and organs, thanks to the development of both computing capabilities and medical imaging techniques. 3-D finite element models of the human foot and ankle reconstructed from MR and CT images have been investigated by some authors. In this study, the geometries (used in modeling) of a normal and a prosthetic foot and ankle were obtained from a 3D reconstruction of CT images. The segmentation software MIMICS was used to generate the 3D images of the bony structures, soft tissues and prosthesis components of the normal and prosthetic ankle-foot complex. Except for the spaces between the adjacent surfaces of the phalanges, which were fused, the metatarsals, cuneiforms, cuboid, navicular, talus and calcaneus bones, soft tissues and prosthesis components were independently developed to form the foot and ankle complex. The SOLIDWORKS program was used to form the boundary surfaces of all model components, and then the solid models were obtained from these boundary surfaces. The finite element analysis software ABAQUS was used to perform the numerical stress analyses of these models for the balanced standing position. Plantar pressure and von Mises stress distributions of the normal and prosthetic ankles were compared with each other. There was a peak pressure increase at the fourth metatarsal, first metatarsal and talus bones and a decrease at the intermediate cuneiform and calcaneus bones.

  15. WEB Semântica - Semantic web

    Directory of Open Access Journals (Sweden)

    Gisele Vasconcelos Dziekaniak

    2004-01-01

    Full Text Available This paper addresses the Semantic Web: the new version of the web under development through projects such as Scorpion and Desire. These projects seek to organize the knowledge stored in their files and web pages, promising machine understanding of human language in information retrieval without requiring the user to master refined search strategies. The article presents the Dublin Core metadata standard as the standard currently most used by the communities developing projects in the Semantic Web area, discusses RDF as the structure indicated by the visionaries of this new web for developing semantic schemes to represent information made available over the network, and XML as a markup language for structured data. It also points to the need for improvements in the organization of information in the Brazilian electronic indexing scene, so that it can keep up with the new paradigm of information retrieval and knowledge organization.

  16. O ciberativismo sem bússola [Cyberactivism without a compass]

    Directory of Open Access Journals (Sweden)

    Francisco Rüdiger

    2014-07-01

    Full Text Available The text questions whether an approach that essentially recounts the trajectory of so-called cyberactivism on its own terms is academically justified or whether, instead, it remains captive to a mythology that the phenomenon itself has already constructed and which therefore authorizes its subjects to dispense, without any loss, with whatever contribution might come from the university.

  17. Genetic analyses using GGE model and a mixed linear model approach, and stability analyses using AMMI bi-plot for late-maturity alpha-amylase activity in bread wheat genotypes.

    Science.gov (United States)

    Rasul, Golam; Glover, Karl D; Krishnan, Padmanaban G; Wu, Jixiang; Berzonsky, William A; Fofana, Bourlaye

    2017-06-01

    A low falling number and the discounting of grain when it is downgraded in class are consequences of excessive late-maturity α-amylase activity (LMAA) in bread wheat (Triticum aestivum L.). Grain expressing high LMAA produces poorer-quality bread products. To effectively breed for low LMAA, it is necessary to understand what genes control it and how they are expressed, particularly when genotypes are grown in different environments. In this study, an International Collection (IC) of 18 spring wheat genotypes and another set of 15 spring wheat cultivars adapted to South Dakota (SD), USA, were assessed to characterize the genetic component of LMAA over 5 and 13 environments, respectively. The data were analysed using a GGE model with a mixed linear model approach, and stability analysis was presented using an AMMI bi-plot in R. All estimated variance components and their proportions of the total phenotypic variance were highly significant for both sets of genotypes, which was validated by the AMMI model analysis. Broad-sense heritability for LMAA was higher in SD-adapted cultivars (53%) than in the IC (49%). Significant genetic effects and stability analyses showed that some genotypes, e.g. 'Lancer', 'Chester' and 'LoSprout' from the IC, and 'Alsen', 'Traverse' and 'Forefront' from the SD cultivars, could be used as parents to develop new cultivars expressing low levels of LMAA. Stability analysis using an AMMI bi-plot revealed that 'Chester', 'Lancer' and 'Advance' were the most stable across environments, while in contrast, 'Kinsman', 'Lerma52' and 'Traverse' exhibited the lowest stability for LMAA across environments.
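
    The mechanics of the AMMI stability analysis can be sketched in a few lines: genotype and environment main effects are removed, the interaction residuals are decomposed by SVD, and genotypes with IPCA1 scores near zero are the most stable. The data below are random stand-ins, not the study's trials.

        # Bare-bones AMMI computation: remove main effects, SVD the GxE residuals.
        # (Random trait data; not the study's 18- or 15-genotype trials.)
        import numpy as np

        rng = np.random.default_rng(7)
        g, e = 10, 5                                   # genotypes x environments
        y = rng.normal(1.0, 0.3, size=(g, e))          # simulated trait matrix

        grand = y.mean()
        g_eff = y.mean(axis=1, keepdims=True) - grand
        e_eff = y.mean(axis=0, keepdims=True) - grand
        interaction = y - grand - g_eff - e_eff        # doubly-centred GxE residuals

        u, s, vt = np.linalg.svd(interaction, full_matrices=False)
        ipca1_geno = u[:, 0] * np.sqrt(s[0])           # genotype scores on IPCA1
        ipca1_env = vt[0, :] * np.sqrt(s[0])           # environment scores on IPCA1
        most_stable = int(np.argmin(np.abs(ipca1_geno)))
        print("IPCA1 genotype scores:", np.round(ipca1_geno, 2))
        print("most stable genotype index:", most_stable)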

  18. ATOP - The Advanced Taiwan Ocean Prediction System Based on the mpiPOM. Part 1: Model Descriptions, Analyses and Results

    Directory of Open Access Journals (Sweden)

    Leo Oey

    2013-01-01

    Full Text Available A data-assimilated Taiwan Ocean Prediction (ATOP) system is being developed at the National Central University, Taiwan. The model simulates sea-surface height, three-dimensional currents, temperature and salinity, and turbulent mixing. The model has options for tracer and particle-tracking algorithms, as well as for wave-induced Stokes drift and wave-enhanced mixing and bottom drag. Two different forecast domains have been tested: a large-grid domain that encompasses the entire North Pacific Ocean at 0.1° × 0.1° horizontal resolution and 41 vertical sigma levels, and a smaller western North Pacific domain which at present also has the same horizontal resolution. In both domains, 25-year spin-up runs from 1988 - 2011 were first conducted, forced by six-hourly Cross-Calibrated Multi-Platform (CCMP) and NCEP reanalysis Global Forecast System (GFS) winds. The results are then used as initial conditions to conduct ocean analyses from January 2012 through February 2012, when updated hindcasts and real-time forecasts begin using the GFS winds. This paper describes the ATOP system and compares the forecast results against satellite altimetry data for assessing model skill. The model results are also shown to compare well with observations of (i) the Kuroshio intrusion in the northern South China Sea, and (ii) the subtropical countercurrent. A review of, and comparison with, other models in the literature regarding (i) is also given.

  19. A regional tidal/subtidal circulation model of the southeastern Bering Sea: development, sensitivity analyses and hindcasting

    Science.gov (United States)

    Hermann, Albert J.; Stabeno, Phyllis J.; Haidvogel, Dale B.; Musgrave, David L.

    2002-12-01

    A regional eddy-resolving primitive equation circulation model was used to simulate circulation on the southeastern Bering Sea (SEBS) shelf and basin. This model resolves the dominant observed mean currents, eddies and meanders in the region, and simultaneously includes both tidal and subtidal dynamics. Circulation, temperature, and salinity fields for years 1995 and 1997 were hindcast, using daily wind and buoyancy flux estimates, and tidal forcing derived from a global model. This paper describes the development of the regional model, a comparison of model results with available Eulerian and Lagrangian data, a comparison of results between the two hindcast years, and a sensitivity analysis. Based on these hindcasts and sensitivity analyses, we suggest the following: (1) The Bering Slope Current is a primary source of large (~100 km diameter) eddies in the SEBS basin. Smaller meanders are also formed along the 100-m isobath on the southeastern shelf, and along the 200-m isobath near the shelf break. (2) There is substantial interannual variability in the statistics of eddies within the basin, driven by variability in the strength of the ANSC. (3) The mean flow on the shelf is not strongly sensitive to changes in the imposed strength of the ANSC; rather, it is strongly sensitive to the local wind forcing. (4) Vertical mixing in the SEBS is strongly affected by both tidal and subtidal dynamics. Strongest mixing in the SEBS may in fact occur between the 100- and 400-m isobaths, near the Pribilof Islands, and in Unimak Pass.

  20. One size does not fit all: On how Markov model order dictates performance of genomic sequence analyses

    Science.gov (United States)

    Narlikar, Leelavati; Mehta, Nidhi; Galande, Sanjeev; Arjunwadkar, Mihir

    2013-01-01

    The structural simplicity and ability to capture serial correlations make Markov models a popular modeling choice in several genomic analyses, such as identification of motifs, genes and regulatory elements. A critical, yet relatively unexplored, issue is the determination of the order of the Markov model. Most biological applications use a predetermined order for all data sets indiscriminately. Here, we show the vast variation in the performance of such applications with the order. To identify the 'optimal' order, we investigated two model selection criteria: the Akaike information criterion and the Bayesian information criterion (BIC). The BIC-optimal order delivers the best performance for mammalian phylogeny reconstruction and motif discovery. Importantly, this order is different from the orders typically used by many tools, suggesting that a simple additional step determining this order can significantly improve results. Further, we describe a novel classification approach based on BIC-optimal Markov models to predict the functionality of tissue-specific promoters. Our classifier discriminates between promoters active across 12 different tissues with remarkable accuracy, yielding 3 times the precision expected by chance. Application to the metagenomics problem of identifying the taxon from a short DNA fragment yields accuracies at least as high as the more complex mainstream methodologies, while retaining conceptual and computational simplicity. PMID:23267010
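
    A bare-bones version of BIC-based order selection for a nucleotide sequence is sketched below: k-th order models are fitted by counting (k+1)-mers and penalised by their 4^k x 3 free parameters; a random sequence is used here in place of genomic data.

        # BIC-based choice of Markov order for a nucleotide sequence
        # (random sequence here; real analyses use genomic data).
        import math
        import random
        from collections import Counter

        random.seed(8)
        seq = "".join(random.choice("ACGT") for _ in range(20000))

        def bic_markov(seq, k):
            ctx_counts = Counter(seq[i:i + k] for i in range(len(seq) - k))
            kmer_counts = Counter(seq[i:i + k + 1] for i in range(len(seq) - k))
            loglik = sum(n * math.log(n / ctx_counts[kmer[:k]])
                         for kmer, n in kmer_counts.items())
            n_params = (4 ** k) * 3                    # free transition probabilities
            return -2.0 * loglik + n_params * math.log(len(seq) - k)

        for k in range(0, 5):
            print(f"order {k}: BIC = {bic_markov(seq, k):.0f}")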

  1. Advances in global sensitivity analyses of demographic-based species distribution models to address uncertainties in dynamic landscapes

    Directory of Open Access Journals (Sweden)

    Ilona Naujokaitis-Lewis

    2016-07-01

    Full Text Available Developing a rigorous understanding of multiple global threats to species persistence requires the use of integrated modeling methods that capture processes which influence species distributions. Species distribution models (SDMs) coupled with population dynamics models can incorporate relationships between changing environments and demographics and are increasingly used to quantify relative extinction risks associated with climate and land-use changes. Despite their appeal, uncertainties associated with complex models can undermine their usefulness for advancing predictive ecology and informing conservation management decisions. We developed a computationally-efficient and freely available tool (GRIP 2.0) that implements and automates a global sensitivity analysis of coupled SDM-population dynamics models for comparing the relative influence of demographic parameters and habitat attributes on predicted extinction risk. Advances over previous global sensitivity analyses include the ability to vary habitat suitability across gradients, as well as habitat amount and configuration of spatially-explicit suitability maps of real and simulated landscapes. Using GRIP 2.0, we carried out a multi-model global sensitivity analysis of a coupled SDM-population dynamics model of whitebark pine (Pinus albicaulis) in Mount Rainier National Park as a case study and quantified the relative influence of input parameters and their interactions on model predictions. Our results differed from the one-at-a-time analyses used in the original study, and we found that the most influential parameters included the total amount of suitable habitat within the landscape, survival rates, and effects of a prevalent disease, white pine blister rust. Strong interactions between habitat amount and survival rates of older trees suggest the importance of habitat in mediating the negative influences of white pine blister rust. Our results underscore the importance of considering habitat

  2. Data Assimilation Tools for CO2 Reservoir Model Development – A Review of Key Data Types, Analyses, and Selected Software

    Energy Technology Data Exchange (ETDEWEB)

    Rockhold, Mark L.; Sullivan, E. C.; Murray, Christopher J.; Last, George V.; Black, Gary D.

    2009-09-30

    Pacific Northwest National Laboratory (PNNL) has embarked on an initiative to develop world-class capabilities for performing experimental and computational analyses associated with geologic sequestration of carbon dioxide. The ultimate goal of this initiative is to provide science-based solutions for helping to mitigate the adverse effects of greenhouse gas emissions. This Laboratory-Directed Research and Development (LDRD) initiative currently has two primary focus areas—advanced experimental methods and computational analysis. The experimental methods focus area involves the development of new experimental capabilities, supported in part by the U.S. Department of Energy’s (DOE) Environmental Molecular Science Laboratory (EMSL) housed at PNNL, for quantifying mineral reaction kinetics with CO2 under high temperature and pressure (supercritical) conditions. The computational analysis focus area involves numerical simulation of coupled, multi-scale processes associated with CO2 sequestration in geologic media, and the development of software to facilitate building and parameterizing conceptual and numerical models of subsurface reservoirs that represent geologic repositories for injected CO2. This report describes work in support of the computational analysis focus area. The computational analysis focus area currently consists of several collaborative research projects. These are all geared towards the development and application of conceptual and numerical models for geologic sequestration of CO2. The software being developed for this focus area is referred to as the Geologic Sequestration Software Suite or GS3. A wiki-based software framework is being developed to support GS3. This report summarizes work performed in FY09 on one of the LDRD projects in the computational analysis focus area. The title of this project is Data Assimilation Tools for CO2 Reservoir Model Development. Some key objectives of this project in FY09 were to assess the current state

  3. A systematic review of approaches to modelling lower limb muscle forces during gait: Applicability to clinical gait analyses.

    Science.gov (United States)

    Trinler, Ursula; Hollands, Kristen; Jones, Richard; Baker, Richard

    2018-03-01

    Computational methods to estimate muscle forces during walking are becoming more common in biomechanical research but not yet in clinical gait analysis. This systematic review aims to identify the current state-of-the-art, examine the differences between approaches, and consider the applicability of the current approaches in clinical gait analysis. A systematic database search identified studies including estimated muscle force profiles of the lower limb during healthy walking. These were rated for quality and the muscle force profiles were digitised for comparison. From 13,449 identified studies, 22 were finally included which used four modelling approaches: static optimisation, enhanced static optimisation, forward dynamics and EMG-driven. These used a range of different musculoskeletal models, muscle-tendon characteristics and cost functions. There is visually broad agreement between and within approaches about when muscles are active throughout the gait cycle. There remain, however, considerable differences (CV 7%-151%, range of timing of peak forces in gait cycle 1%-31%) in patterns and magnitudes of force between and within modelling approaches. The main source of this variability is not clear. Different musculoskeletal models, experimental protocols, and modelling approaches will clearly have an effect, as will the variability of joint kinetics between healthy individuals. Limited validation of modelling approaches, particularly at the level of individual participants, makes it difficult to conclude if any of the approaches give consistently better estimates than others. While muscle force modelling has clear potential to enhance clinical gait analyses, future research is needed to improve validation, accuracy and feasibility of implementation in clinical practice.

  4. Using plant growth modeling to analyse C source-sink relations under drought: inter and intra specific comparison

    Directory of Open Access Journals (Sweden)

    Benoit ePallas

    2013-11-01

    Full Text Available The ability to assimilate C and allocate NSC (non-structural carbohydrates) to the most appropriate organs is crucial to maximize plant ecological or agronomic performance. Such C source and sink activities are differentially affected by environmental constraints. Under drought, plant growth is generally more sink- than source-limited, as organ expansion or appearance rate is affected earlier and more strongly than C assimilation. This favors plant survival and recovery but not always agronomic performance, as NSC are stored rather than used for growth due to a modified metabolism in source and sink leaves. Such interactions between plant C and water balance are complex, and plant modeling can help analyse their impact on plant phenotype. This paper addresses the impact of trade-offs between C sink and source activities on plant production under drought, combining experimental and modeling approaches. Two contrasting monocotyledonous species (rice, oil palm) were studied. Experimentally, the sink limitation of plant growth under moderate drought was confirmed, as well as the modifications in NSC metabolism in source and sink organs. Under severe stress, when the C source became limiting, plant NSC concentration decreased. Two plant models dedicated to oil palm and rice morphogenesis were used to perform a sensitivity analysis and further explore how to optimize C sink and source drought sensitivity to maximize plant growth. Modeling results highlighted that optimal drought sensitivity depends both on drought type and species and that modeling is a great opportunity to analyse such complex processes. Further modeling needs, and more generally the challenge of using models to support complex trait breeding, are discussed.

  5. Comparative sequence and structural analyses of G-protein-coupled receptor crystal structures and implications for molecular models.

    Directory of Open Access Journals (Sweden)

    Catherine L Worth

    Full Text Available BACKGROUND: Up until recently the only available experimental (high resolution) structure of a G-protein-coupled receptor (GPCR) was that of bovine rhodopsin. In the past few years the determination of GPCR structures has accelerated with three new receptors, as well as squid rhodopsin, being successfully crystallized. All share a common molecular architecture of seven transmembrane helices and can therefore serve as templates for building molecular models of homologous GPCRs. However, despite the common general architecture of these structures key differences do exist between them. The choice of which experimental GPCR structure(s) to use for building a comparative model of a particular GPCR is unclear and without detailed structural and sequence analyses, could be arbitrary. The aim of this study is therefore to perform a systematic and detailed analysis of sequence-structure relationships of known GPCR structures. METHODOLOGY: We analyzed in detail conserved and unique sequence motifs and structural features in experimentally-determined GPCR structures. Deeper insight into specific and important structural features of GPCRs as well as valuable information for template selection has been gained. Using key features a workflow has been formulated for identifying the most appropriate template(s) for building homology models of GPCRs of unknown structure. This workflow was applied to a set of 14 human family A GPCRs suggesting for each the most appropriate template(s) for building a comparative molecular model. CONCLUSIONS: The available crystal structures represent only a subset of all possible structural variation in family A GPCRs. Some GPCRs have structural features that are distributed over different crystal structures or which are not present in the templates suggesting that homology models should be built using multiple templates. This study provides a systematic analysis of GPCR crystal structures and a consistent method for identifying

  6. Comparative sequence and structural analyses of G-protein-coupled receptor crystal structures and implications for molecular models.

    Science.gov (United States)

    Worth, Catherine L; Kleinau, Gunnar; Krause, Gerd

    2009-09-16

    Up until recently the only available experimental (high resolution) structure of a G-protein-coupled receptor (GPCR) was that of bovine rhodopsin. In the past few years the determination of GPCR structures has accelerated with three new receptors, as well as squid rhodopsin, being successfully crystallized. All share a common molecular architecture of seven transmembrane helices and can therefore serve as templates for building molecular models of homologous GPCRs. However, despite the common general architecture of these structures key differences do exist between them. The choice of which experimental GPCR structure(s) to use for building a comparative model of a particular GPCR is unclear and without detailed structural and sequence analyses, could be arbitrary. The aim of this study is therefore to perform a systematic and detailed analysis of sequence-structure relationships of known GPCR structures. We analyzed in detail conserved and unique sequence motifs and structural features in experimentally-determined GPCR structures. Deeper insight into specific and important structural features of GPCRs as well as valuable information for template selection has been gained. Using key features a workflow has been formulated for identifying the most appropriate template(s) for building homology models of GPCRs of unknown structure. This workflow was applied to a set of 14 human family A GPCRs suggesting for each the most appropriate template(s) for building a comparative molecular model. The available crystal structures represent only a subset of all possible structural variation in family A GPCRs. Some GPCRs have structural features that are distributed over different crystal structures or which are not present in the templates suggesting that homology models should be built using multiple templates. This study provides a systematic analysis of GPCR crystal structures and a consistent method for identifying suitable templates for GPCR homology modelling that will

  7. Static and free-vibration analyses of dental prosthesis and atherosclerotic human artery by refined finite element models.

    Science.gov (United States)

    Carrera, E; Guarnera, D; Pagani, A

    2018-04-01

    Static and modal responses of representative biomechanical structures are investigated in this paper by employing higher-order theories of structures and finite element approximations. Refined models are implemented in the domain of the Carrera unified formulation (CUF), according to which low- to high-order kinematics can be postulated as arbitrary and, eventually, hierarchical expansions of the generalized displacement unknowns. By using CUF along with the principle of virtual work, the governing equations are expressed in terms of fundamental nuclei of finite element arrays. The fundamental nuclei are invariant with respect to the theory approximation order and can be conveniently employed to implement variable kinematics theories of bio-structures. In this work, static and free-vibration analyses of an atherosclerotic plaque of a human artery and a dental prosthesis are discussed. The results from the proposed methodologies highlight a number of advantages of CUF models with respect to already established theories and commercial software tools. Namely, (i) CUF models can represent correctly the higher-order phenomena related to complex stress/strain field distributions and coupled mode shapes; (ii) bio-structures can be modeled in a component-wise sense by only employing the physical boundaries of the problem domain and without making any geometrical simplification. This latter aspect, in particular, can currently be accomplished only by using three-dimensional analysis, which may be computationally prohibitive when complex bio-systems are considered.

  8. A multinomial logit model-Bayesian network hybrid approach for driver injury severity analyses in rear-end crashes.

    Science.gov (United States)

    Chen, Cong; Zhang, Guohui; Tarefder, Rafiqul; Ma, Jianming; Wei, Heng; Guan, Hongzhi

    2015-07-01

    Rear-end crash is one of the most common types of traffic crashes in the U.S. A good understanding of its characteristics and contributing factors is of practical importance. Previously, both multinomial logit models and Bayesian network methods have been used in crash modeling and analysis, although each of them has its own application restrictions and limitations. In this study, a hybrid approach is developed to combine multinomial logit models and Bayesian network methods for comprehensively analyzing driver injury severities in rear-end crashes based on state-wide crash data collected in New Mexico from 2010 to 2011. A multinomial logit model is developed to investigate and identify significant contributing factors for rear-end crash driver injury severities classified into three categories: no injury, injury, and fatality. Then, the identified significant factors are utilized to establish a Bayesian network to explicitly formulate statistical associations between injury severity outcomes and explanatory attributes, including driver behavior, demographic features, vehicle factors, geometric and environmental characteristics, etc. The test results demonstrate that the proposed hybrid approach performs reasonably well. The Bayesian network inference analyses indicate that the factors including truck-involvement, inferior lighting conditions, windy weather conditions, the number of vehicles involved, etc. could significantly increase driver injury severities in rear-end crashes. The developed methodology and estimation results provide insights for developing effective countermeasures to reduce rear-end crash injury severities and improve traffic system safety performance. Copyright © 2015 Elsevier Ltd. All rights reserved.
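
    As an illustration of the first stage of such a hybrid approach, the minimal sketch below fits a multinomial logit for three injury-severity categories; the predictor names, coefficients and data are hypothetical and only the general modelling pattern (statsmodels' MNLogit) is standard practice, not the paper's actual specification.

```python
# Hedged sketch: multinomial logit for three injury-severity levels on synthetic data.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "truck_involved": rng.integers(0, 2, n),   # hypothetical predictors
    "dark_lighting":  rng.integers(0, 2, n),
    "num_vehicles":   rng.integers(2, 5, n),
})
# Synthetic severity: 0 = no injury, 1 = injury, 2 = fatality
logits = np.column_stack([
    np.zeros(n),
    -1.0 + 0.6 * df.truck_involved + 0.4 * df.dark_lighting,
    -3.0 + 1.0 * df.truck_involved + 0.3 * df.num_vehicles,
])
p = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
severity = np.array([rng.choice(3, p=row) for row in p])

X = sm.add_constant(df)
model = sm.MNLogit(severity, X).fit(disp=False)
print(model.summary())   # coefficients are relative to the "no injury" base category
```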

  9. Comparative analyses reveal potential uses of Brachypodium distachyon as a model for cold stress responses in temperate grasses

    Directory of Open Access Journals (Sweden)

    Li Chuan

    2012-05-01

    Full Text Available Abstract Background Little is known about the potential of Brachypodium distachyon as a model for low temperature stress responses in Pooideae. The ice recrystallization inhibition protein (IRIP) genes, fructosyltransferase (FST) genes, and many C-repeat binding factor (CBF) genes are Pooideae specific and important in low temperature responses. Here we used comparative analyses to study conservation and evolution of these gene families in B. distachyon to better understand its potential as a model species for agriculturally important temperate grasses. Results Brachypodium distachyon contains cold responsive IRIP genes which have evolved through Brachypodium specific gene family expansions. A large cold responsive CBF3 subfamily was identified in B. distachyon, while CBF4 homologs are absent from the genome. No B. distachyon FST gene homologs encode typical core Pooideae FST-motifs and low temperature induced fructan accumulation was dramatically different in B. distachyon compared to core Pooideae species. Conclusions We conclude that B. distachyon can serve as an interesting model for specific molecular mechanisms involved in low temperature responses in core Pooideae species. However, the evolutionary history of key genes involved in low temperature responses has been different in Brachypodium and core Pooideae species. These differences limit the use of B. distachyon as a model for holistic studies relevant for agricultural core Pooideae species.

  10. Comparative SEM analysis of nine F22 aligner cleaning strategies.

    Science.gov (United States)

    Lombardo, Luca; Martini, Marco; Cervinara, Francesca; Spedicato, Giorgio Alfredo; Oliverio, Teresa; Siciliani, Giuseppe

    2017-12-01

    The orthodontics industry has paid great attention to the aesthetics of orthodontic appliances, seeking to make them as invisible as possible. There are several advantages to clear aligner systems, including aesthetics, comfort, chairside time reduction, and the fact that they can be removed for meals and oral hygiene procedures. Five patients were each given a series of F22 aligners, each to be worn for 14 days and nights, with the exception of meal and brushing times. Patients were instructed to clean each aligner using a prescribed strategy, and sections of the used aligners were observed under SEM. One grey-scale SEM image was saved per aligner in JPEG format with an 8-bit colour depth, and a total of 45 measurements on the grey scale ("Value" variable) were made. This dataset was analysed statistically via repeated measures ANOVA to determine the effect of each of the nine cleaning strategies in each of the five patients. A statistically significant difference in the efficacy of the cleaning strategies was detected. Specifically, rinsing with water alone was significantly less efficacious, and a combination of cationic detergent solution and ultrasonication was significantly more efficacious than the other methods (p < 0.05). Of the nine cleaning strategies examined, only 5 min of ultrasonication at 42 kHz combined with a 0.3% germicidal cationic detergent was statistically effective at removing bacterial biofilm from the surface of F22 aligners.

  11. Assessing models of speciation under different biogeographic scenarios; An empirical study using multi-locus and RNA-seq analyses

    Science.gov (United States)

    Edwards, Taylor; Tollis, Marc; Hsieh, PingHsun; Gutenkunst, Ryan N.; Liu, Zhen; Kusumi, Kenro; Culver, Melanie; Murphy, Robert W.

    2016-01-01

    Evolutionary biology often seeks to decipher the drivers of speciation, and much debate persists over the relative importance of isolation and gene flow in the formation of new species. Genetic studies of closely related species can assess if gene flow was present during speciation, because signatures of past introgression often persist in the genome. We test hypotheses on which mechanisms of speciation drove diversity among three distinct lineages of desert tortoise in the genus Gopherus. These lineages offer a powerful system to study speciation, because different biogeographic patterns (physical vs. ecological segregation) are observed at opposing ends of their distributions. We use 82 samples collected from 38 sites, representing the entire species' distribution and generate sequence data for mtDNA and four nuclear loci. A multilocus phylogenetic analysis in *BEAST estimates the species tree. RNA-seq data yield 20,126 synonymous variants from 7665 contigs from two individuals of each of the three lineages. Analyses of these data using the demographic inference package ∂a∂i serve to test the null hypothesis of no gene flow during divergence. The best-fit demographic model for the three taxa is concordant with the *BEAST species tree, and the ∂a∂i analysis does not indicate gene flow among any of the three lineages during their divergence. These analyses suggest that divergence among the lineages occurred in the absence of gene flow and in this scenario the genetic signature of ecological isolation (parapatric model) cannot be differentiated from geographic isolation (allopatric model).

  12. Scanning Electron Microscopy (SEM) Procedure for HE Powders on a Zeiss Sigma HD VP SEM

    Energy Technology Data Exchange (ETDEWEB)

    Zaka, F. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2016-11-15

    This method describes the characterization of inert and HE materials by the Zeiss Sigma HD VP field emission Scanning Electron Microscope (SEM). The SEM uses an accelerated electron beam to generate high-magnification images of explosives and other materials. It is fitted with five detectors (SE, Inlens, STEM, VPSE, HDBSD) to enable imaging of the sample via different secondary electron signatures, angles, and energies. In addition to imaging through electron detection, the microscope is also fitted with two Oxford Instrument Energy Dispersive Spectrometer (EDS) 80 mm detectors to generate elemental constituent spectra and two-dimensional maps of the material being scanned.

  13. Three-Dimensional (3D) Nanometrology Based on Scanning Electron Microscope (SEM) Stereophotogrammetry.

    Science.gov (United States)

    Tondare, Vipin N; Villarrubia, John S; Vladár, András E

    2017-10-01

    Three-dimensional (3D) reconstruction of a sample surface from scanning electron microscope (SEM) images taken at two perspectives has been known for decades. Nowadays, there exist several commercially available stereophotogrammetry software packages. For testing these software packages, in this study we used Monte Carlo simulated SEM images of virtual samples. A virtual sample is a model in a computer, and its true dimensions are known exactly, which is impossible for real SEM samples due to measurement uncertainty. The simulated SEM images can be used for algorithm testing, development, and validation. We tested two stereophotogrammetry software packages and compared their reconstructed 3D models with the known geometry of the virtual samples used to create the simulated SEM images. Both packages performed relatively well with simulated SEM images of a sample with a rough surface. However, in a sample containing nearly uniform and therefore low-contrast zones, the height reconstruction error was ≈46%. The present stereophotogrammetry software packages need further improvement before they can be used reliably with SEM images with uniform zones.
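
    For orientation, the sketch below applies the classical symmetric eucentric-tilt parallax relation that underlies basic SEM stereophotogrammetry; the numbers are hypothetical, and the commercial packages evaluated in this record implement far more elaborate dense-matching reconstructions.

```python
# Hedged sketch of the two-view parallax-to-height relation (symmetric tilt assumption).
import numpy as np

def height_from_parallax(parallax_um: float, tilt_deg: float) -> float:
    """Relative height (same units as the parallax) from the lateral parallax measured
    between two images acquired at +/- tilt_deg/2 about the eucentric tilt axis."""
    alpha = np.radians(tilt_deg)          # total tilt angle between the two views
    return parallax_um / (2.0 * np.sin(alpha / 2.0))

# Example: a feature shifts by 0.12 um between images separated by a 10 degree tilt.
print(f"relative height ~ {height_from_parallax(0.12, 10.0):.3f} um")
```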

  14. Systems genetics of obesity in an F2 pig model by genome-wide association, genetic network and pathway analyses

    Directory of Open Access Journals (Sweden)

    Lisette J. A. Kogelman

    2014-07-01

    Full Text Available Obesity is a complex condition with world-wide exponentially rising prevalence rates, linked with severe diseases like Type 2 Diabetes. Economic and welfare consequences have led to a raised interest in a better understanding of the biological and genetic background. To date, whole genome investigations focusing on single genetic variants have achieved limited success, and the importance of including genetic interactions is becoming evident. Here, the aim was to perform an integrative genomic analysis in an F2 pig resource population that was constructed with an aim to maximize genetic variation of obesity-related phenotypes and genotyped using the 60K SNP chip. Firstly, Genome Wide Association (GWA) analysis was performed on the Obesity Index to locate candidate genomic regions that were further validated using combined Linkage Disequilibrium Linkage Analysis and investigated by evaluation of haplotype blocks. We built Weighted Interaction SNP Hub (WISH) and differentially wired (DW) networks using genotypic correlations amongst obesity-associated SNPs resulting from GWA analysis. GWA results and SNP modules detected by WISH and DW analyses were further investigated by functional enrichment analyses. The functional annotation of SNPs revealed several genes associated with obesity, e.g. NPC2 and OR4D10. Moreover, gene enrichment analyses identified several significantly associated pathways, over and above the GWA study results, that may influence obesity and obesity related diseases, e.g. metabolic processes. WISH networks based on genotypic correlations allowed further identification of various gene ontology terms and pathways related to obesity and related traits, which were not identified by the GWA study. In conclusion, this is the first study to develop a (genetic obesity index and employ systems genetics in a porcine model to provide important insights into the complex genetic architecture associated with obesity and many biological pathways

  15. Clustering structures of large proteins using multifractal analyses based on a 6-letter model and hydrophobicity scale of amino acids

    International Nuclear Information System (INIS)

    Yang Jianyi; Yu Zuguo; Anh, Vo

    2009-01-01

    The Schneider and Wrede hydrophobicity scale of amino acids and the 6-letter model of protein are proposed to study the relationship between the primary structure and the secondary structural classification of proteins. Two kinds of multifractal analyses are performed on the two measures obtained from these two kinds of data on large proteins. Nine parameters from the multifractal analyses are considered to construct the parameter spaces. Each protein is represented by one point in these spaces. A procedure is proposed to separate large proteins in the α, β, α + β and α/β structural classes in these parameter spaces. Fisher's linear discriminant algorithm is used to assess our clustering accuracy on the 49 selected large proteins. Numerical results indicate that the discriminant accuracies are satisfactory. In particular, they reach 100.00% and 84.21% in separating the α proteins from the {β, α + β, α/β} proteins in a parameter space; 92.86% and 86.96% in separating the β proteins from the {α + β, α/β} proteins in another parameter space; 91.67% and 83.33% in separating the α/β proteins from the α + β proteins in the last parameter space.
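
    The clustering-accuracy step described here can be illustrated with a minimal Fisher's-linear-discriminant sketch; the feature matrix below is random stand-in data, not the nine multifractal parameters of the study, and the leave-one-out scheme is an assumption made for the small sample size.

```python
# Hedged sketch: leave-one-out accuracy of Fisher's linear discriminant on a 9-parameter space.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import LeaveOneOut, cross_val_score

rng = np.random.default_rng(1)
X_alpha = rng.normal(loc=0.0, scale=1.0, size=(25, 9))   # hypothetical "alpha" proteins
X_other = rng.normal(loc=1.5, scale=1.0, size=(24, 9))   # hypothetical "beta/alpha+beta/alpha/beta"
X = np.vstack([X_alpha, X_other])
y = np.array([0] * 25 + [1] * 24)

lda = LinearDiscriminantAnalysis()
acc = cross_val_score(lda, X, y, cv=LeaveOneOut()).mean()
print(f"leave-one-out discriminant accuracy: {acc:.2%}")
```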

  16. Nonlinear Modeling and Dynamic Simulation Using Bifurcation and Stability Analyses of Regenerative Chatter of Ball-End Milling Process

    Directory of Open Access Journals (Sweden)

    Jeehyun Jung

    2016-01-01

    Full Text Available A dynamic model for a ball-end milling process that includes the consideration of cutting force nonlinearities and regenerative chatter effects is presented. The nonlinear cutting force is approximated using a Fourier series and then expanded into a Taylor series up to the third order. A series of nonlinear analyses was performed to investigate the nonlinear dynamic behavior of a ball-end milling system, and the differences between the nonlinear analysis approach and its linear counterpart were examined. A bifurcation analysis of points near the critical equilibrium points was performed using the method of multiple scales (MMS) and the method of harmonic balance (MHB) to analyse the local chatter behaviors of the system. The bifurcation analysis was conducted at two subcritical Hopf bifurcation points. It was also found that a ball-end milling system with nonlinear cutting forces near its critical equilibrium points is conditionally stable. The analysis and simulation results were compared with experimental data reported in the literature, and the physical significance of the results is discussed.

  17. SEM Investigation of Superheater Deposits from Biomass-Fired Boilers

    DEFF Research Database (Denmark)

    Jensen, Peter Arendt; Frandsen, Flemming; Hansen, Jørn

    2004-01-01

    Straw is used as fuel in relatively small-scale combined heat and power producing (CHP) grate boilers in Denmark. The large content of potassium and chlorine in straw greatly increases the deposit formation and corrosion of the superheater coils, compared to boilers firing coal. In this study, mature superheater deposit samples were extracted from two straw-fired boilers, Masnedø and Ensted, with fuel inputs of 33 MWth and 100 MWth, respectively. SEM (scanning electron microscopy) images and EDX (energy dispersive X-ray) analyses were performed on the deposit samples. Different strategies are adopted to minimize deposit problems at the two boilers. At Masnedø the final superheater steam temperature is 520 °C, no soot blowing of the superheaters is applied and a relatively large superheater area is used. At Ensted, an external wood-fired superheater is used in order to obtain a final steam

  18. SEMS: System for Environmental Monitoring and Sustainability

    Science.gov (United States)

    Arvidson, Raymond E.

    1998-01-01

    The goal of this project was to establish a computational and data management system, SEMS, building on our existing system and MTPE-related research. We proposed that the new system would help support Washington University's efforts in environmental sustainability through use in: (a) Problem-based environmental curriculum for freshmen and sophomores funded by the Hewlett Foundation that integrates scientific, cultural, and policy perspectives to understand the dynamics of wetland degradation, deforestation, and desertification and that will develop policies for sustainable environments and economies; (b) Higher-level undergraduate and graduate courses focused on monitoring the environment and developing policies that will lead to sustainable environmental and economic conditions; and (c) Interdisciplinary research focused on the dynamics of the Missouri River system and development of policies that lead to sustainable environmental and economic floodplain conditions.

  19. Civil engineering: EDF needs for concrete modelling; Genie civile: analyse des besoins EDF en modelisation du comportement des betons

    Energy Technology Data Exchange (ETDEWEB)

    Didry, O.; Gerard, B.; Bui, D. [Electricite de France (EDF), Direction des Etudes et Recherches, 92 - Clamart (France)

    1997-12-31

    Concrete structures encountered at EDF, like all civil engineering structures, age. In order to adapt the maintenance conditions of these structures, particularly to extend their service life, and also to prepare the construction of future structures, tools for predicting the behaviour of these structures in their environment should be available. For EDF the technical risks are high and consequently appropriate R and D actions are required. In this context the Direction des Etudes et Recherches (DER) has developed a methodology for analysing the modelling of concrete structure behaviour. This approach has several aims: - distinguishing between the problems addressed by existing models and those which require R and D; - displaying disciplinary links between different problems encountered on EDF structures (non-linear mechanics, chemical-hydraulic-mechanical coupling, etc.); - listing the existing tools and positioning the DER `Aster` finite element code among them. This document is a state-of-the-art review of scientific knowledge intended to shed light on the fields where, on the one hand, there is a strong requirement from structure operators and, on the other hand, the present tools do not allow this requirement to be satisfactorily met. The analysis has been done on 12 scientific subjects: 1) Hydration of concrete at early ages: exothermicity, hardening, autogenous shrinkage; 2) Drying and drying shrinkage; 3) Alkali-silica reaction and bulky stage formation; 4) Long term deterioration by leaching; 5) Ionic diffusion and associated attacks: the chlorides case; 6) Permeability/tightness of concrete; 7) Concretes - nonlinear behaviour and cracking (I): contribution of the plasticity models; 8) Concretes - nonlinear behaviour and cracking (II): contribution of the damage models; 9) Concretes - nonlinear behaviour and cracking (III): the contribution of the probabilistic analysis model; 10) Delayed behaviour of

  20. A permutation test to analyse systematic bias and random measurement errors of medical devices via boosting location and scale models.

    Science.gov (United States)

    Mayr, Andreas; Schmid, Matthias; Pfahlberg, Annette; Uter, Wolfgang; Gefeller, Olaf

    2017-06-01

    Measurement errors of medico-technical devices can be separated into systematic bias and random error. We propose a new method to address both simultaneously via generalized additive models for location, scale and shape (GAMLSS) in combination with permutation tests. More precisely, we extend a recently proposed boosting algorithm for GAMLSS to provide a test procedure to analyse potential device effects on the measurements. We carried out a large-scale simulation study to provide empirical evidence that our method is able to identify possible sources of systematic bias as well as random error under different conditions. Finally, we apply our approach to compare measurements of skin pigmentation from two different devices in an epidemiological study.
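
    A stripped-down analogue of the proposed procedure, testing for a device effect on the scale (random error) component by permutation, might look like the sketch below; the data, sample sizes and test statistic are assumptions, and the actual method uses boosted GAMLSS models rather than a raw standard-deviation contrast.

```python
# Hedged sketch: permutation test for a difference in random error (scale) between two devices.
import numpy as np

def perm_test_scale(a, b, n_perm=10_000, seed=0):
    """Two-sided permutation p-value for a difference in sample standard deviation."""
    rng = np.random.default_rng(seed)
    observed = abs(np.std(a, ddof=1) - np.std(b, ddof=1))
    pooled = np.concatenate([a, b])
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)                       # reassign measurements to the two devices
        stat = abs(np.std(pooled[:len(a)], ddof=1) - np.std(pooled[len(a):], ddof=1))
        count += stat >= observed
    return (count + 1) / (n_perm + 1)

device_a = np.random.default_rng(1).normal(50, 2.0, 80)   # hypothetical pigmentation readings
device_b = np.random.default_rng(2).normal(50, 3.5, 80)
print(f"p-value for a scale difference: {perm_test_scale(device_a, device_b):.4f}")
```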

  1. Sem analysis zirconia-ceramic adhesion interface

    Science.gov (United States)

    CARDELLI, P.; VERTUCCI, V.; MONTANI, M.; ARCURI, C.

    2015-01-01

    SUMMARY Objectives Modern dentistry increasingly tends to use aesthetically acceptable, biomimetic materials. Zirconia and ceramics have been among these for several years, a combination that has now become synonymous with aesthetics; however, the real bond between these two materials, and especially its nature, remains a controversial topic in the literature. The aim of our study was to characterize the type of bonding that could exist between these materials. Materials and methods To investigate the nature of this bond we used SEM microscopy (Zeiss SUPRA 25). Bilaminar specimens of "white" zirconia Zircodent® and Noritake® ceramic, after three-point bending tests and FEM analysis, were analyzed by SEM. Analysis of fragments close to the fracture point allowed us to "see", at high magnification and without the use of a liner, whether a lasting bond could exist between these two materials and which type of failure could occur. Results From our analysis of the specimen fragments after mechanical testing, it is difficult to identify a clear margin or non-adhesion zones between the two materials, even in the fragments adjacent to the fracture that occurred during the mechanical test. Conclusions According to our analysis, and with all due caveats, we can assume that a long-lasting bond between zirconia and ceramic can be obtained. In agreement with the data in the literature, we can say that the type of bond varies with the type of specimen and, of course, the type of failure. In samples where the ceramic superstructure envelops the zirconia framework, failure is cohesive; otherwise it is adhesive. PMID:27555905

  2. Primary enamel permeability: a SEM evaluation in vivo.

    Science.gov (United States)

    Lucchese, A; Bertacci, A; Chersoni, S; Portelli, M

    2012-09-01

    The aim of this study was to evaluate in vivo the occurrence of outward fluid flow on sound primary tooth enamel surfaces. Sixty primary upper canines from preadolescent patients (mean age 8.0±1.9) and 24 retained primary upper canines from adult subjects (mean age 35.0±1.8) were analysed. The enamel surface was gently polished and air dried for 10 s. An impression was immediately obtained with vinyl polysiloxane. Replicas were then obtained with polyether impression material, gold coated and inspected under SEM. The hydrophobic vinyl polysiloxane material enabled us to obtain in situ a morphological image of the presence of droplets, most likely resulting from outward fluid flow through the outer enamel. For each sample three different representative areas of 5 μm² in the cervical, middle and incisal third were examined and droplet presence values were recorded. All data were analysed by Fisher's exact test. Primary enamel showed substantial permeability expressed as droplet discharge on its surface. Droplet distribution covered, without any specific localisation, the entire enamel surface in all the samples. No signs of post-eruptive maturation with changes in droplet distribution were observed in samples from adult subjects. No statistically significant differences (P = 0.955) were noted in the percentage distribution of enamel area covered with droplets between the two groups studied. SEM evaluation of droplet distribution on the enamel surface indicated substantial enamel permeability in primary teeth, in accordance with histological features, without changes during aging. A relationship between enamel permeability, caries susceptibility and bonding procedure effectiveness could be hypothesised.

  3. Using niche-modelling and species-specific cost analyses to determine a multispecies corridor in a fragmented landscape

    Science.gov (United States)

    Zurano, Juan Pablo; Selleski, Nicole; Schneider, Rosio G.

    2017-01-01

    types independent of the degree of legal protection. These data, used with multifocal GIS analyses, balance the varying degrees of overlap and unique properties among them, allowing comprehensive conservation strategies to be developed relatively rapidly. Our comprehensive approach serves as a model for other regions faced with habitat loss and lack of data. The five carnivores focused on in our study have wide ranges, so the results from this study can be expanded and combined with those from surrounding countries, with analyses at the species or community level. PMID:28841692

  4. Nondestructive SEM for surface and subsurface wafer imaging

    Science.gov (United States)

    Propst, Roy H.; Bagnell, C. Robert; Cole, Edward I., Jr.; Davies, Brian G.; Dibianca, Frank A.; Johnson, Darryl G.; Oxford, William V.; Smith, Craig A.

    1987-01-01

    The scanning electron microscope (SEM) is considered as a tool for both failure analysis as well as device characterization. A survey is made of various operational SEM modes and their applicability to image processing methods on semiconductor devices.

  5. Spatially quantitative models for vulnerability analyses and resilience measures in flood risk management: Case study Rafina, Greece

    Science.gov (United States)

    Karagiorgos, Konstantinos; Chiari, Michael; Hübl, Johannes; Maris, Fotis; Thaler, Thomas; Fuchs, Sven

    2013-04-01

    We will address spatially quantitative models for vulnerability analyses in flood risk management in the catchment of Rafina, 25 km east of Athens, Greece, and potential measures to reduce damage costs. The evaluation of flood damage losses is relatively advanced. Nevertheless, major problems arise since there are no market prices available for the evaluation process. Moreover, there is a particular gap in quantifying the damages and the expenditures necessary for the implementation of mitigation measures with respect to flash floods. The key issue is to develop prototypes for assessing flood losses and the impact of mitigation measures on flood resilience by adjusting a vulnerability model, and to further develop the method in a Mediterranean region influenced by both mountain and coastal characteristics of land development. The objective of this study is to create a spatial and temporal analysis of the vulnerability factors based on a method combining spatially explicit loss data, data on the value of exposed elements at risk, and data on flood intensities. In this contribution, a methodology for the development of a flood damage assessment as a function of the process intensity and the degree of loss is presented. It is shown that (1) such relationships for defined object categories are dependent on site-specific and process-specific characteristics, but there is a correlation between process types that have similar characteristics; (2) existing semi-quantitative approaches of vulnerability assessment for elements at risk can be improved based on the proposed quantitative method; and (3) the concept of risk can be enhanced with respect to a standardised and comprehensive implementation by applying the vulnerability functions to be developed within the proposed research. Therefore, loss data were collected from responsible administrative bodies and analysed on an object level. The model used is based on a basin-scale approach as well as data on elements at risk exposed

  6. Water Quality Development in the Semíč Stream

    Directory of Open Access Journals (Sweden)

    Petra Oppeltová

    2015-01-01

    Full Text Available The aims of the work were to analyse selected quality indicators of a small water stream called Semíč and evaluate the results based on the valid legislation. Eight sampling profiles (SP) were selected and water was sampled four times a year in the period May 2013–April 2014. pH, conductivity, oxygen content and temperature were measured directly in the field. Subsequently, iron, nitrate nitrogen, ammoniacal nitrogen, sulphates, chlorides, chemical oxygen demand (dichromate method), total phosphorus, total nitrogen and manganese were analysed in the laboratory. Analyses of selected heavy metals – zinc, copper and aluminum – were carried out in spring 2014. The results were classified in compliance with Government Decree (GD) No. 61/2003 Coll., as amended, and Czech standard ČSN 75 7221. The results of the period 2013–2014 were compared with the results from 2002–2003 and 1992. The resulting concentrations of substances manifest considerable instability during the year, which can most likely be attributed to large changes in flow rates in different seasons. When comparing the values to older results, it can be concluded that the concentrations of a number of substances have decreased; by contrast, others have increased. An extreme increase in copper was detected, where the concentration exceeded the environmental quality standard several times.

  7. The luminal surface of thyroid cysts in SEM

    DEFF Research Database (Denmark)

    Zelander, T; Kirkeby, S

    1978-01-01

    Four of the five kinds of cells constituting the walls of thyroid cysts can be identified in the SEM. These are cuboidal cells, mucous cells, cells with large granules and ciliated cells. A correlation between SEM and TEM observations is attempted.

  8. Classification and printability of EUV mask defects from SEM images

    Science.gov (United States)

    Cho, Wonil; Price, Daniel; Morgan, Paul A.; Rost, Daniel; Satake, Masaki; Tolani, Vikram L.

    2017-10-01

    -to-Aerial printability) analysis of every defect. First, a defect-free or reference mask SEM is rendered from the post-OPC design, and the defective signature is detected from the defect-reference difference image. These signatures help assess the true nature of the defect as evident in e-beam imaging; for example, excess or missing absorber, line-edge roughness, contamination, etc. Next, defect and reference contours are extracted from the grayscale SEM images and fed into the simulation engine with an EUV scanner model to generate corresponding EUV defect and reference aerial images. These are then analyzed for printability and dispositioned using an Aerial Image Analyzer (AIA) application to automatically measure and determine the amount of CD errors. Thus by integrating EUV ADC and S2A applications together, every defect detection is characterized for its type and printability which is essential for not only determining which defects to repair, but also in monitoring the performance of EUV mask process tools. The accuracy of the S2A print modeling has been verified with other commercially-available simulators, and will also be verified with actual wafer print results. With EUV lithography progressing towards volume manufacturing at 5nm technology, and the likelihood of EBMI inspectors approaching the horizon, the EUV ADC-S2A system will continue serving an essential role of dispositioning defects off e-beam imaging.

  9. Raman-in-SEM, a multimodal and multiscale analytical tool: performance for materials and expertise.

    Science.gov (United States)

    Wille, Guillaume; Bourrat, Xavier; Maubec, Nicolas; Lahfid, Abdeltif

    2014-12-01

    The availability of Raman spectroscopy in a powerful analytical scanning electron microscope (SEM) allows morphological, elemental, chemical, physical and electronic analysis without moving the sample between instruments. This paper documents the metrological performance of the SEMSCA commercial Raman interface operated in a low vacuum SEM. It provides multiscale and multimodal analyses as Raman/EDS, Raman/cathodoluminescence or Raman/STEM (STEM: scanning transmission electron microscopy) as well as Raman spectroscopy on nanomaterials. Since Raman spectroscopy in a SEM can be influenced by several SEM-related phenomena, this paper firstly presents a comparison of this new tool with a conventional micro-Raman spectrometer. Then, some possible artefacts are documented, which are due to the impact of electron beam-induced contamination or cathodoluminescence contribution to the Raman spectra, especially with geological samples. These effects are easily overcome by changing or adapting the Raman spectrometer and the SEM settings and methodology. The deletion of the adverse effect of cathodoluminescence is solved by using a SEM beam shutter during Raman acquisition. In contrast, this interface provides the ability to record the cathodoluminescence (CL) spectrum of a phase. In a second part, this study highlights the interest and efficiency of the coupling in characterizing micrometric phases at the same point. This multimodal approach is illustrated with various issues encountered in geosciences. Copyright © 2014 Elsevier Ltd. All rights reserved.

  10. Robust surface reconstruction by design-guided SEM photometric stereo

    Science.gov (United States)

    Miyamoto, Atsushi; Matsuse, Hiroki; Koutaki, Gou

    2017-04-01

    We present a novel approach that addresses the blind reconstruction problem in scanning electron microscope (SEM) photometric stereo for complicated semiconductor patterns to be measured. In our previous work, we developed a bootstrapping de-shadowing and self-calibration (BDS) method, which automatically calibrates the parameter of the gradient measurement formulas and resolves shadowing errors for estimating an accurate three-dimensional (3D) shape and underlying shadowless images. Experimental results on 3D surface reconstruction demonstrated the significance of the BDS method for simple shapes, such as an isolated line pattern. However, we found that complicated shapes, such as line-and-space (L&S) and multilayered patterns, produce deformed and inaccurate measurement results. This problem is due to brightness fluctuations in the SEM images, which are mainly caused by the energy fluctuations of the primary electron beam, variations in the electronic expanse inside a specimen, and electrical charging of specimens. Despite these being essential difficulties encountered in SEM photometric stereo, it is difficult to model accurately all the complicated physical phenomena of electronic behavior. We improved the robustness of the surface reconstruction in order to deal with these practical difficulties with complicated shapes. Here, design data are useful clues as to the pattern layout and layer information of integrated semiconductors. We used the design data as a guide of the measured shape and incorporated a geometrical constraint term to evaluate the difference between the measured and designed shapes into the objective function of the BDS method. Because the true shape does not necessarily correspond to the designed one, we use an iterative scheme to develop proper guide patterns and a 3D surface that provides both a less distorted and more accurate 3D shape after convergence. Extensive experiments on real image data demonstrate the robustness and effectiveness

  11. Computational modeling and statistical analyses on individual contact rate and exposure to disease in complex and confined transportation hubs

    Science.gov (United States)

    Wang, W. L.; Tsui, K. L.; Lo, S. M.; Liu, S. B.

    2018-01-01

    Crowded transportation hubs such as metro stations are thought to be ideal places for the development and spread of epidemics. However, owing to their complex spatial layouts and confined environments with large numbers of highly mobile individuals, it is difficult to quantify human contacts in such environments, and disease spreading dynamics there have been less explored in previous studies. Given the heterogeneity and dynamic nature of human interactions, an increasing number of studies have demonstrated the importance of contact distance and contact duration for transmission probabilities. In this study, we show how detailed information on contact and exposure patterns can be obtained by statistical analyses of microscopic crowd simulation data. To be specific, a pedestrian simulation model, CityFlow, was employed to reproduce individuals' movements in a metro station based on site survey data, and the values and distributions of individual contact rate and exposure in different simulation cases were obtained and analyzed. Interestingly, the Weibull distribution fitted the histogram values of individual-based exposure very well in each case. Moreover, we found that both individual contact rate and exposure had a linear relationship with the average crowd density of the environment. The results obtained in this paper can provide a reference for epidemic studies in complex and confined transportation hubs and refine the existing disease spreading models.
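
    A minimal sketch of the distributional finding reported here, fitting a Weibull model to per-individual exposure values, is given below; the exposure values are synthetic and the scale of the numbers is arbitrary.

```python
# Hedged sketch: fit a Weibull distribution to simulated individual exposure values.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
exposure = rng.weibull(1.8, size=5000) * 12.0       # hypothetical exposure scores

shape, loc, scale = stats.weibull_min.fit(exposure, floc=0.0)
ks = stats.kstest(exposure, "weibull_min", args=(shape, loc, scale))
print(f"Weibull shape={shape:.2f}, scale={scale:.2f}, KS p-value={ks.pvalue:.3f}")
```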

  12. Control volume analyses of glottal flow using a fully-coupled numerical fluid-structure interaction model

    Science.gov (United States)

    Yang, Jubiao; Krane, Michael; Zhang, Lucy

    2013-11-01

    Vocal fold vibrations and the glottal jet are successfully simulated using the modified Immersed Finite Element method (mIFEM), a fully coupled dynamics approach to model fluid-structure interactions. A self-sustained and steady vocal fold vibration is captured given a constant pressure input at the glottal entrance. The flow rates at different axial locations in the glottis are calculated, showing small variations among them due to the vocal fold motion and deformation. To further facilitate the understanding of the phonation process, two control volume analyses, specifically with Bernoulli's equation and Newton's 2nd law, are carried out for the glottal flow based on the simulation results. A generalized Bernoulli's equation is derived to interpret the temporal and spatial correlations between velocity and pressure along the centerline (which is a streamline), using a half-space model with a symmetry boundary condition. A specialized Newton's 2nd law equation is developed and divided into terms to help understand the driving mechanism of the glottal flow.
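
    For reference, the textbook unsteady Bernoulli relation along a streamline between two points 1 and 2 of an incompressible flow reads as follows; the generalized equation derived in the paper presumably augments this form with terms for the moving vocal-fold walls and viscous losses, which are not shown here.

```latex
% Unsteady Bernoulli relation along a streamline (incompressible, inviscid, gravity neglected)
\rho \int_{1}^{2} \frac{\partial u}{\partial t}\,\mathrm{d}s
  + \left( p_2 + \tfrac{1}{2}\rho u_2^{2} \right)
  - \left( p_1 + \tfrac{1}{2}\rho u_1^{2} \right) = 0
```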

  13. Bayesian salamanders: analysing the demography of an underground population of the European plethodontid Speleomantes strinatii with state-space modelling

    Directory of Open Access Journals (Sweden)

    Salvidio Sebastiano

    2010-02-01

    Full Text Available Abstract Background It has been suggested that Plethodontid salamanders are excellent candidates for indicating ecosystem health. However, detailed, long-term data sets of their populations are rare, limiting our understanding of the demographic processes underlying their population fluctuations. Here we present a demographic analysis based on a 1996-2008 data set on an underground population of Speleomantes strinatii (Aellen) in NW Italy. We utilised a Bayesian state-space approach allowing us to parameterise a stage-structured Lefkovitch model. We used all the available population data from annual temporary removal experiments to provide us with the baseline data on the numbers of juveniles, subadults and adult males and females present at any given time. Results Sampling the posterior chains of the converged state-space model gives us the likelihood distributions of the state-specific demographic rates and the associated uncertainty of these estimates. Analysing the resulting parameterised Lefkovitch matrices shows that the population growth is very close to 1, and that at population equilibrium we expect half of the individuals present to be adults of reproductive age, which is what we also observe in the data. Elasticity analysis shows that adult survival is the key determinant for population growth. Conclusion This analysis demonstrates how an understanding of population demography can be gained from structured population data even in a case where following marked individuals over their whole lifespan is not practical.
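
    The quantities extracted from a parameterised Lefkovitch matrix, asymptotic growth rate, stable stage structure and elasticities, can be computed as in the short sketch below; the matrix entries are illustrative placeholders, not the posterior estimates of the study.

```python
# Hedged sketch: growth rate, stable stage structure and elasticities of a Lefkovitch matrix.
import numpy as np

# Stages: juvenile, subadult, adult; survival/transition/fecundity values are hypothetical.
A = np.array([
    [0.00, 0.00, 0.60],   # reproduction into the juvenile stage
    [0.45, 0.35, 0.00],   # juvenile growth into subadult, subadult stasis
    [0.00, 0.40, 0.80],   # subadult maturation, adult survival
])

eigvals, W = np.linalg.eig(A)
i = np.argmax(eigvals.real)
lam = eigvals.real[i]                            # asymptotic growth rate (lambda)
w = np.abs(W[:, i].real); w /= w.sum()           # stable stage distribution

eigvals_l, V = np.linalg.eig(A.T)
j = np.argmax(eigvals_l.real)
v = np.abs(V[:, j].real)                         # reproductive values (left eigenvector)

S = np.outer(v, w) / (v @ w)                     # sensitivities of lambda to each a_ij
E = (A / lam) * S                                # elasticities; entries sum to 1

print(f"lambda = {lam:.3f}; stable stage structure = {np.round(w, 2)}")
print("elasticity matrix:\n", np.round(E, 3))
```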

  14. Time Headway Modelling of Motorcycle-Dominated Traffic to Analyse Traffic Safety Performance and Road Link Capacity of Single Carriageways

    Directory of Open Access Journals (Sweden)

    D. M. Priyantha Wedagama

    2017-04-01

    Full Text Available This study aims to develop time headway distribution models to analyse traffic safety performance and road link capacities for motorcycle-dominated traffic in Denpasar, Bali. The three road links selected as the case study are Jl. Hayam Wuruk, Jl. Hang Tuah, and Jl. Padma. Data analysis showed that between 55% and 80% of motorists in Denpasar during morning and evening peak hours paid little attention to keeping a safe distance from the vehicles in front. The study found that lognormal distribution models fit the time headway data best during morning peak hours, while either the Weibull (3P) or the Pearson III distribution fits best during evening peak hours. Road link capacities for mixed traffic, predominantly motorcycles, are apparently affected by the behaviour of motorists in keeping a safe distance from the vehicles in front. Theoretical road link capacities for Jl. Hayam Wuruk, Jl. Hang Tuah and Jl. Padma are 3,186 vehicles/hour, 3,077 vehicles/hour and 1,935 vehicles/hour, respectively.
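
    A minimal sketch of the headway-fitting and capacity calculation described here is given below; the headway sample is synthetic, the candidate distributions are compared by AIC for illustration only, and the capacity conversion assumes the usual 3600/mean-headway relation.

```python
# Hedged sketch: fit lognormal and 3-parameter Weibull models to headways, then estimate capacity.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
headways = rng.lognormal(mean=0.1, sigma=0.6, size=1500)     # seconds, hypothetical sample

lognorm_params = stats.lognorm.fit(headways, floc=0.0)       # (shape, loc, scale)
weibull_params = stats.weibull_min.fit(headways)             # 3P: (shape, loc, scale)

def aic(dist, params):
    return 2 * len(params) - 2 * dist.logpdf(headways, *params).sum()

print("AIC lognormal :", round(aic(stats.lognorm, lognorm_params), 1))
print("AIC Weibull 3P:", round(aic(stats.weibull_min, weibull_params), 1))

capacity = 3600.0 / headways.mean()                          # vehicles per hour per lane
print(f"theoretical capacity ~ {capacity:.0f} veh/h")
```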

  15. Round-robin pretest analyses of a 1:6-scale reinforced concrete containment model subject to static internal pressurization

    International Nuclear Information System (INIS)

    Clauss, D.B.

    1987-05-01

    Analyses of a 1:6-scale reinforced concrete containment model that will be tested to failure at Sandia National Laboratories in the spring of 1987 were conducted by the following organizations in the United States and Europe: Sandia National Laboratories (USA), Argonne National Laboratory (USA), Electric Power Research Institute (USA), Commissariat a L'Energie Atomique (France), HM Nuclear Installations Inspectorate (UK), Comitato Nazionale per la ricerca e per lo sviluppo dell'Energia Nucleare e delle Energie Alternative (Italy), UK Atomic Energy Authority, Safety and Reliability Directorate (UK), Gesellschaft fuer Reaktorsicherheit (FRG), Brookhaven National Laboratory (USA), and Central Electricity Generating Board (UK). Each organization was supplied with a standard information package, which included construction drawings and actual material properties for most of the materials used in the model. Each organization worked independently using their own analytical methods. This report includes descriptions of the various analytical approaches and pretest predictions submitted by each organization. Significant milestones that occur with increasing pressure, such as damage to the concrete (cracking and crushing) and yielding of the steel components, and the failure pressure (capacity) and failure mechanism are described. Analytical predictions for pressure histories of strain in the liner and rebar and displacements are compared at locations where experimental results will be available after the test. Thus, these predictions can be compared to one another and to experimental results after the test

  16. Latent vs. Observed Variables : Analysis of Irrigation Water Efficiency Using SEM and SUR

    NARCIS (Netherlands)

    Tang, Jianjun; Folmer, Henk

    In this paper we compare conceptualising single factor technical and allocative efficiency as indicators of a single latent variable, or as separate observed variables. In the former case, the impacts on both efficiency types are analysed by means of structural equation modeling (SEM), in the latter

  17. Semântica e lexicografia

    Directory of Open Access Journals (Sweden)

    Julio Casares

    2001-01-01

    Full Text Available

    Semantics and Lexicography interpenetrate each other, because Lexicography does not limit itself to collecting the words of the lexicon but seeks to describe the meaning of words and their uses. The lexicographer is also concerned with the evolution of word senses in order to establish the scale of senses of a lexical sign. Casares defines the notion of sense and discusses the problem of discriminating senses and of ordering them in the case of polysemous words. Another delicate question for the lexicographer is the recognition and correct identification of metaphorical values. The author uses as an illustrative example the entry Lat. ordo > Sp. orden (Port. ordem), a polysemous sign. He traces diagrams of the network of meanings in the evolutionary semantics of this word, from the original Latin etymon to modern Spanish. Casares also deals with the problem of lemmatization, that is, the technical decision of choosing one word form or another as a dictionary entry, which involves ongoing controversies among lexicologists about lexias (complex words) and about how and when the lexical categorization of a multi-word expression takes place. This problem is amplified by the chaotic spelling tradition of many forms, particularly in the case of "phrasal locutions". He advocates the advantages and virtues of a dictionary that would include a frequency index for the use of each word, or of each sense of a word.

  18. Morphological characteristics of primary enamel surfaces versus permanent enamel surfaces: SEM digital analysis.

    Science.gov (United States)

    Lucchese, A; Storti, E

    2011-09-01

    The morphology of permanent and primary enamel surfaces merits further analysis. The objective of this study was to illustrate a method of SEM digital image processing able to quantify and discriminate between the morphological characteristics of primary and permanent tooth enamel. Sixteen extracted teeth, 8 primary teeth and 8 permanent teeth, kept in saline solution, were analysed. The teeth were observed under SEM. The SEM images were analysed by means of digital image-processing algorithms. The two algorithms used were: local standard deviation, to measure surface roughness via the roughness index (RI); and the Hough transform, to identify linear structures via the linear structure index (LSI). The SEM images of primary tooth enamel show smooth enamel with small areas of irregularity. No linear structures are apparent. The SEM images of permanent enamel show a surface that is not perfectly smooth; there are furrows and irregularities of variable depth and width. In clinical practice a number of different situations require the removal of a thin layer of enamel. Only a good morphological knowledge of both permanent and primary tooth enamel gives the opportunity to identify and exploit the effects of rotary tools on enamel, thus allowing for a correct finishing technique.
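
    The roughness index based on local standard deviation can be sketched as follows; the window size, the aggregation into a single mean value and the synthetic test image are assumptions, since the record does not fully specify them.

```python
# Hedged sketch: a local-standard-deviation roughness index over a grey-scale SEM image.
import numpy as np
from scipy import ndimage

def roughness_index(image: np.ndarray, window: int = 9) -> float:
    """Mean of the local standard deviation computed in a window x window neighbourhood."""
    img = image.astype(float)
    mean = ndimage.uniform_filter(img, size=window)
    mean_sq = ndimage.uniform_filter(img * img, size=window)
    local_var = np.clip(mean_sq - mean * mean, 0.0, None)   # guard against negative rounding
    return float(np.sqrt(local_var).mean())

# Example on a synthetic 8-bit-like image
rng = np.random.default_rng(5)
synthetic = rng.normal(128, 12, size=(256, 256)).clip(0, 255)
print(f"RI = {roughness_index(synthetic):.2f}")
```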

  19. Analysis list: sem-4 [Chip-atlas[Archive

    Lifescience Database Archive (English)

    Full Text Available sem-4 Larvae + ce10 http://dbarchive.biosciencedbc.jp/kyushu-u/ce10/target/sem-4.1....tsv http://dbarchive.biosciencedbc.jp/kyushu-u/ce10/target/sem-4.5.tsv http://dbarchive.biosciencedbc.jp/kyushu-u/ce10/target/sem...-4.10.tsv http://dbarchive.biosciencedbc.jp/kyushu-u/ce10/colo/sem-4.Larvae.tsv http://dbarchive.biosciencedbc.jp/kyushu-u/ce10/colo/Larvae.gml ...

  20. On groundwater flow modelling in safety analyses of spent fuel disposal. A comparative study with emphasis on boundary conditions

    Energy Technology Data Exchange (ETDEWEB)

    Jussila, P

    1999-11-01

    Modelling groundwater flow is an essential part of the safety assessment of spent fuel disposal because moving groundwater makes a physical connection between a geological repository and the biosphere. Some of the common approaches to model groundwater flow in bedrock are equivalent porous continuum (EC), stochastic continuum and various fracture network concepts. The actual flow system is complex and measurement data are limited. Multiple distinct approaches and models, alternative scenarios as well as calibration and sensitivity analyses are used to give confidence in the results of the calculations. The correctness and orders of magnitude of the results of such complex research can be assessed by comparing them to the results of simplified and robust approaches. The first part of this study is a survey of the objects, contents and methods of the groundwater flow modelling performed in the safety assessment of spent fuel disposal in Finland and Sweden. The most apparent difference of the Swedish studies compared to the Finnish ones is the use of a larger number of different models, which is enabled by the greater resources available in Sweden. The results of more comprehensive approaches provided by international co-operation are very useful to give perspective to the results obtained in Finland. In the second part of this study, the influence of boundary conditions on the flow fields of a simple 2D model is examined. The assumptions and simplifications in this approach include e.g. the following: (1) the EC model is used, in which the 2-dimensional domain is considered a continuum of equivalent properties without fractures present, (2) the calculations are done for stationary fields, without sources or sinks present in the domain and with a constant density of the groundwater, (3) the repository is represented by an isotropic plate, the hydraulic conductivity of which is given fictitious values, (4) the hydraulic conductivity of rock is supposed to have an exponential
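
    As a toy counterpart to the simplified EC approach examined in the second part of the study, the sketch below solves steady, constant-density, source-free 2D flow (div(K grad h) = 0) on a rectangular domain containing a low-conductivity plate, with fixed heads on two opposite boundaries; all dimensions, conductivities and boundary values are illustrative.

```python
# Hedged sketch: 2-D equivalent-porous-continuum steady-state head field by simple iteration.
import numpy as np

nx, ny = 60, 40
K = np.full((ny, nx), 1e-8)            # hydraulic conductivity of the rock [m/s]
K[15:25, 25:35] = 1e-10                # low-conductivity plate representing the repository

h = np.linspace(10.0, 0.0, nx) * np.ones((ny, 1))    # initial heads; 10 m left, 0 m right

def harmonic(a, b):                    # harmonic mean of face-adjacent conductivities
    return 2.0 * a * b / (a + b)

kW = harmonic(K[1:-1, :-2], K[1:-1, 1:-1])   # west faces of interior cells
kE = harmonic(K[1:-1, 2:],  K[1:-1, 1:-1])   # east faces
kN = harmonic(K[:-2, 1:-1], K[1:-1, 1:-1])   # north faces
kS = harmonic(K[2:, 1:-1],  K[1:-1, 1:-1])   # south faces

for _ in range(20_000):                # Jacobi-style iteration on the interior cells
    num = (kW * h[1:-1, :-2] + kE * h[1:-1, 2:] +
           kN * h[:-2, 1:-1] + kS * h[2:, 1:-1])
    h[1:-1, 1:-1] = num / (kW + kE + kN + kS)
    h[0, :], h[-1, :] = h[1, :], h[-2, :]    # no-flow top and bottom boundaries
    h[:, 0], h[:, -1] = 10.0, 0.0            # fixed heads on left/right boundaries

print("hydraulic head at the plate centre ~", round(float(h[20, 30]), 2), "m")
```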

  1. Preparation and characterization of analcime powders by x-ray and sem analyses

    Czech Academy of Sciences Publication Activity Database

    Kohoutková, Martina; Kloužková, A.; Maixner, J.; Mrázová, M.

    2007-01-01

    Roč. 51, č. 1 (2007), s. 9-14 ISSN 0862-5468 R&D Projects: GA MPO 2A-1TP1/063 Institutional research plan: CEZ:AV0Z40320502 Keywords : analcime * hydrothermal synthesis * X-ray diffraction Subject RIV: CA - Inorganic Chemistry Impact factor: 0.488, year: 2007

  2. Comparative SEM analysis of nine F22 aligner cleaning strategies

    Directory of Open Access Journals (Sweden)

    Luca Lombardo

    2017-09-01

    Full Text Available Abstract Background The orthodontics industry has paid great attention to the aesthetics of orthodontic appliances, seeking to make them as invisible as possible. There are several advantages to clear aligner systems, including aesthetics, comfort, chairside time reduction, and the fact that they can be removed for meals and oral hygiene procedures. Methods Five patients were each given a series of F22 aligners, each to be worn for 14 days and nights, with the exception of meal and brushing times. Patients were instructed to clean each aligner using a prescribed strategy, and sections of the used aligners were observed under SEM. One grey-scale SEM image was saved per aligner in JPEG format with an 8-bit colour depth, and a total of 45 measurements on the grey scale ("Value" variable) were made. This dataset was analysed statistically via repeated measures ANOVA to determine the effect of each of the nine cleaning strategies in each of the five patients. Results A statistically significant difference in the efficacy of the cleaning strategies was detected. Specifically, rinsing with water alone was significantly less efficacious, and a combination of cationic detergent solution and ultrasonication was significantly more efficacious than the other methods (p < 0.05). Conclusions Of the nine cleaning strategies examined, only that involving 5 min of ultrasonication at 42 kHz combined with a 0.3% germicidal cationic detergent was observed to be statistically effective at removing the bacterial biofilm from the surface of F22 aligners.
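
    The repeated-measures analysis described in this record can be sketched as follows; the data frame is synthetic, the column names are assumptions, and patient is treated as the repeated-measures subject with cleaning strategy as the within-subject factor.

```python
# Hedged sketch: repeated-measures ANOVA of grey-scale "Value" by cleaning strategy and patient.
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(6)
strategies = [f"S{i}" for i in range(1, 10)]
rows = []
for patient in range(1, 6):
    for j, strat in enumerate(strategies):
        value = 120 + 8 * j + rng.normal(0, 10)     # synthetic grey value (brighter = cleaner)
        rows.append({"patient": patient, "strategy": strat, "value": value})
df = pd.DataFrame(rows)

res = AnovaRM(df, depvar="value", subject="patient", within=["strategy"]).fit()
print(res)
```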

  3. Water flow experiments and analyses on the cross-flow type mercury target model with the flow guide plates

    CERN Document Server

    Haga, K; Kaminaga, M; Hino, R

    2001-01-01

    A mercury target is used in the spallation neutron source driven by a high-intensity proton accelerator. In this study, the effectiveness of the cross-flow type mercury target structure was evaluated experimentally and analytically. Prior to the experiment, the mercury flow field and the temperature distribution in the target container were analyzed assuming a proton beam energy and power of 1.5 GeV and 5 MW, respectively, and the feasibility of the cross-flow type target was evaluated. Then the average water flow velocity field in the target mock-up model, which was fabricated from Plexiglass for a water experiment, was measured at room temperature using the PIV technique. Water flow analyses were conducted and the analytical results were compared with the experimental results. The experimental results showed that the cross-flow could be realized in most of the proton beam path area and the analytical result of the water flow velocity field showed good correspondence to the experimental results in the case w...

  4. Surgery on spinal epidural metastases (SEM) in renal cell carcinoma: a plea for a new paradigm.

    Science.gov (United States)

    Bakker, Nicolaas A; Coppes, Maarten H; Vergeer, Rob A; Kuijlen, Jos M A; Groen, Rob J M

    2014-09-01

    Prediction models for the outcome of decompressive surgical resection of spinal epidural metastases (SEM) have in common that they have been developed for all types of SEM, irrespective of the type of primary tumor. In our clinical experience, however, these models often fail to accurately predict outcome in the individual patient. To investigate whether decision making could be optimized by applying tumor-specific prediction models, we analyzed, as a proof of concept, patients with SEM from renal cell carcinoma whom we have operated on. Retrospective chart analysis, 2006 to 2012. Twenty-one consecutive patients with symptomatic SEM of renal cell carcinoma. Predictive factors for survival. Next to established predictive factors for survival, we analyzed the predictive value of the Motzer criteria in these patients. The Motzer criteria comprise a specific and validated risk model for survival in patients with renal cell carcinoma. After multivariable analysis, only Motzer intermediate risk (hazard ratio [HR] 17.4, 95% confidence interval [CI] 1.82-166, p=.01) and high risk (HR 39.3, 95% CI 3.10-499, p=.005) were significantly associated with survival in the patients with renal cell carcinoma whom we operated on. In this study, we have demonstrated that decision making could have been optimized by implementing the Motzer criteria next to established prediction models. We therefore suggest that, in the future, the Motzer criteria also be taken into account in patients with SEM from renal cell carcinoma. Copyright © 2014 Elsevier Inc. All rights reserved.

  5. Modeling Acequia Irrigation Systems Using System Dynamics: Model Development, Evaluation, and Sensitivity Analyses to Investigate Effects of Socio-Economic and Biophysical Feedbacks

    Directory of Open Access Journals (Sweden)

    Benjamin L. Turner

    2016-10-01

    Full Text Available Agriculture-based irrigation communities of northern New Mexico have survived for centuries despite the arid environment in which they reside. These irrigation communities are threatened by regional population growth, urbanization, a changing demographic profile, economic development, climate change, and other factors. Within this context, we investigated the extent to which community resource management practices centering on shared resources (e.g., water for agriculture in the floodplains and grazing resources in the uplands) and mutualism (i.e., the shared responsibility of local residents for maintaining traditional irrigation policies and upholding cultural and spiritual observances) embedded within the community structure influence acequia function. We used a system dynamics modeling approach as an interdisciplinary platform to integrate these systems, specifically the relationship between community structure and resource management. In this paper we describe the background and context of acequia communities in northern New Mexico and the challenges they face. We formulate a Dynamic Hypothesis capturing the endogenous feedbacks driving acequia community vitality. Development of the model centered on major stock-and-flow components, including linkages for hydrology, ecology, community, and economics. Calibration metrics were used for model evaluation, including statistical correlation of observed and predicted values and Theil inequality statistics. Results indicated that the model reproduced trends exhibited by the observed system. Sensitivity analyses of socio-cultural processes identified absentee decisions, the cumulative income effect on time in agriculture, land use preference due to time allocation, the community demographic effect, the effect of employment on participation, and the farm size effect as key determinants of system behavior and response. Sensitivity analyses of biophysical parameters revealed that several key parameters (e.g., acres per

  6. Epidemiology of HPV 16 and cervical cancer in Finland and the potential impact of vaccination: mathematical modelling analyses.

    Directory of Open Access Journals (Sweden)

    Ruanne V Barnabas

    2006-05-01

    Full Text Available BACKGROUND: Candidate human papillomavirus (HPV) vaccines have demonstrated almost 90%-100% efficacy in preventing persistent, type-specific HPV infection over 18 mo in clinical trials. If these vaccines go on to demonstrate prevention of precancerous lesions in phase III clinical trials, they will be licensed for public use in the near future. How these vaccines will be used in countries with national cervical cancer screening programmes is an important question. METHODS AND FINDINGS: We developed a transmission model of HPV 16 infection and progression to cervical cancer and calibrated it to Finnish HPV 16 seroprevalence over time. The model was used to estimate the transmission probability of the virus, to look at the effect of changes in patterns of sexual behaviour and smoking on age-specific trends in cancer incidence, and to explore the impact of HPV 16 vaccination. We estimated a high per-partnership transmission probability of HPV 16, of 0.6. The modelling analyses showed that changes in sexual behaviour and smoking accounted, in part, for the increase seen in cervical cancer incidence in 35- to 39-y-old women from 1990 to 1999. At both low (10%, opportunistic immunisation) and high (90%, a national immunisation programme) coverage of the adolescent population, vaccinating women and men had little benefit over vaccinating women alone. We estimate that vaccinating 90% of young women before sexual debut has the potential to decrease HPV type-specific (e.g., type 16) cervical cancer incidence by 91%. If older women are more likely to have persistent infections and progress to cancer, then vaccination with a duration of protection of less than 15 y could result in an older susceptible cohort and no decrease in cancer incidence. While vaccination has the potential to significantly reduce type-specific cancer incidence, its combination with screening further improves cancer prevention. CONCLUSIONS: HPV vaccination has the potential to
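    The results above come from a detailed, calibrated transmission model. Purely to illustrate the compartmental machinery involved (and not the authors' model), the sketch below integrates a toy susceptible-infected-vaccinated system with demographic turnover; apart from the quoted 0.6 per-partnership transmission probability, every rate and the model structure itself are illustrative assumptions.

    ```python
    import numpy as np
    from scipy.integrate import odeint

    # Toy susceptible-infected-vaccinated (SIV) sketch of HPV 16 transmission with turnover.
    # All parameters except the 0.6 per-partnership transmission probability are assumptions.
    beta = 0.6 * 2.0     # per-partnership transmission probability x partner change rate [1/yr]
    gamma = 1.0          # infection clearance rate [1/yr]
    p_vacc = 0.9         # fraction vaccinated before sexual debut
    entry = 1.0 / 50.0   # population turnover [1/yr]

    def deriv(y, t):
        S, I, V = y
        foi = beta * I                                  # force of infection (N normalised to 1)
        dS = entry * (1 - p_vacc) - foi * S - entry * S
        dI = foi * S - gamma * I - entry * I
        dV = entry * p_vacc - entry * V
        return [dS, dI, dV]

    t = np.linspace(0, 100, 1001)
    sol = odeint(deriv, [0.99, 0.01, 0.0], t)
    print("HPV 16 prevalence after 100 years of 90%% vaccine coverage: %.4f" % sol[-1, 1])
    ```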

  7. Intercomparison and analyses of the climatology of the West African monsoon in the West African monsoon modeling and evaluation project (WAMME) first model intercomparison experiment

    Energy Technology Data Exchange (ETDEWEB)

    Xue, Yongkang; Sales, Fernando De [University of California, Los Angeles, CA (United States); Lau, W.K.M.; Schubert, Siegfried D.; Wu, Man-Li C. [NASA, Goddard Space Flight Center, Greenbelt, MD (United States); Boone, Aaron [Centre National de Recherches Meteorologiques, Meteo-France Toulouse, Toulouse (France); Feng, Jinming [University of California, Los Angeles, CA (United States); Chinese Academy of Sciences, Institute of Atmospheric Physics, Beijing (China); Dirmeyer, Paul; Guo, Zhichang [Center for Ocean-Land-Atmosphere Interactions, Calverton, MD (United States); Kim, Kyu-Myong [University of Maryland Baltimore County, Baltimore, MD (United States); Kitoh, Akio [Meteorological Research Institute, Tsukuba (Japan); Kumar, Vadlamani [National Center for Environmental Prediction, Camp Springs, MD (United States); Wyle Information Systems, Gaithersburg, MD (United States); Poccard-Leclercq, Isabelle [Universite de Bourgogne, Centre de Recherches de Climatologie UMR5210 CNRS, Dijon (France); Mahowald, Natalie [Cornell University, Ithaca, NY (United States); Moufouma-Okia, Wilfran; Rowell, David P. [Met Office Hadley Centre, Exeter (United Kingdom); Pegion, Phillip [NASA, Goddard Space Flight Center, Greenbelt, MD (United States); National Center for Environmental Prediction, Camp Springs, MD (United States); Schemm, Jae; Thiaw, Wassila M. [National Center for Environmental Prediction, Camp Springs, MD (United States); Sealy, Andrea [The Caribbean Institute for Meteorology and Hydrology, St. James (Barbados); Vintzileos, Augustin [National Center for Environmental Prediction, Camp Springs, MD (United States); Science Applications International Corporation, Camp Springs, MD (United States); Williams, Steven F. [National Center for Atmospheric Research, Boulder, CO (United States)

    2010-07-15

    This paper briefly presents the West African monsoon (WAM) modeling and evaluation project (WAMME) and evaluates WAMME general circulation models' (GCM) performances in simulating variability of WAM precipitation, surface temperature, and major circulation features at seasonal and intraseasonal scales in the first WAMME experiment. The analyses indicate that models with specified sea surface temperature generally have reasonable simulations of the pattern of spatial distribution of WAM seasonal mean precipitation and surface temperature as well as the averaged zonal wind in latitude-height cross-section and low level circulation. But there are large differences among models in simulating spatial correlation, intensity, and variance of precipitation compared with observations. Furthermore, the majority of models fail to produce proper intensities of the African Easterly Jet (AEJ) and the tropical easterly jet. AMMA Land Surface Model Intercomparison Project (ALMIP) data are used to analyze the association between simulated surface processes and the WAM and to investigate the WAM mechanism. It has been identified that the spatial distributions of surface sensible heat flux, surface temperature, and moisture convergence are closely associated with the simulated spatial distribution of precipitation; while surface latent heat flux is closely associated with the AEJ and contributes to divergence in AEJ simulation. Common empirical orthogonal functions (CEOF) analysis is applied to characterize the WAM precipitation evolution and has identified a major WAM precipitation mode and two temperature modes (Sahara mode and Sahel mode). Results indicate that the WAMME models produce reasonable temporal evolutions of major CEOF modes but have deficiencies/uncertainties in producing variances explained by major modes. Furthermore, the CEOF analysis shows that WAM precipitation evolution is closely related to the enhanced Sahara mode and the weakened Sahel mode, supporting

  8. Filler segmentation of SEM paper images based on mathematical morphology.

    Science.gov (United States)

    Ait Kbir, M; Benslimane, Rachid; Princi, Elisabetta; Vicini, Silvia; Pedemonte, Enrico

    2007-07-01

    Recent developments in microscopy and image processing have made digital measurements on high-resolution images of fibrous materials possible. This helps to gain a better understanding of the structure and other properties of the material at the micro level. In this paper, SEM image segmentation based on mathematical morphology is proposed. The paper model images (Whatman, Murillo, Watercolor, Newsprint paper) selected in the context of the Euro-Mediterranean PaperTech Project have different distributions of fibers and fillers, caused by the presence of SiAl and CaCO3 particles. It is a microscopy challenge to make filler particles in the sheet distinguishable from the other components of the paper surface. This objective is reached here by using suitable structuring elements and mathematical morphology operators.
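    A minimal sketch of this kind of morphology-based filler segmentation is given below using scikit-image rather than the authors' toolchain; the synthetic image, the structuring-element radius and the threshold choice are illustrative assumptions.

    ```python
    import numpy as np
    from skimage import filters, morphology

    # Sketch of morphology-based filler segmentation (illustrative, not the authors' pipeline).
    # Synthetic "paper surface": smooth fibrous background plus a few bright filler particles.
    rng = np.random.default_rng(0)
    img = filters.gaussian(rng.random((256, 256)), sigma=3)       # smooth fibrous background
    rr, cc = rng.integers(20, 236, (2, 15))                       # 15 filler particle centres
    for r, c in zip(rr, cc):
        y, x = np.ogrid[:256, :256]
        img[(y - r) ** 2 + (x - c) ** 2 <= 16] += 0.6             # bright CaCO3-like blobs

    # Threshold, then morphological opening with a disk structuring element removes thin
    # fibre fragments while keeping compact filler particles.
    binary = img > filters.threshold_otsu(img)
    fillers = morphology.opening(binary, morphology.disk(2))
    print("estimated filler area fraction: %.3f" % fillers.mean())
    ```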

  9. A second-generation device for automated training and quantitative behavior analyses of molecularly-tractable model organisms.

    Directory of Open Access Journals (Sweden)

    Douglas Blackiston

    2010-12-01

    Full Text Available A deep understanding of cognitive processes requires functional, quantitative analyses of the steps leading from genetics and the development of nervous system structure to behavior. Molecularly-tractable model systems such as Xenopus laevis and planaria offer an unprecedented opportunity to dissect the mechanisms determining the complex structure of the brain and CNS. A standardized platform that facilitated quantitative analysis of behavior would make a significant impact on evolutionary ethology, neuropharmacology, and cognitive science. While some animal tracking systems exist, the available systems do not allow automated training (feedback to individual subjects in real time, which is necessary for operant conditioning assays. The lack of standardization in the field, and the numerous technical challenges that face the development of a versatile system with the necessary capabilities, comprise a significant barrier keeping molecular developmental biology labs from integrating behavior analysis endpoints into their pharmacological and genetic perturbations. Here we report the development of a second-generation system that is a highly flexible, powerful machine vision and environmental control platform. In order to enable multidisciplinary studies aimed at understanding the roles of genes in brain function and behavior, and aid other laboratories that do not have the facilities to undergo complex engineering development, we describe the device and the problems that it overcomes. We also present sample data using frog tadpoles and flatworms to illustrate its use. Having solved significant engineering challenges in its construction, the resulting design is a relatively inexpensive instrument of wide relevance for several fields, and will accelerate interdisciplinary discovery in pharmacology, neurobiology, regenerative medicine, and cognitive science.

  10. Transcriptomics and proteomics analyses of the PACAP38 influenced ischemic brain in permanent middle cerebral artery occlusion model mice

    Directory of Open Access Journals (Sweden)

    Hori Motohide

    2012-11-01

    Full Text Available Abstract Introduction The neuropeptide pituitary adenylate cyclase-activating polypeptide (PACAP) is considered a potential therapeutic agent for the prevention of cerebral ischemia. Ischemia is one of the most common causes of death, after heart attack and cancer, and causes major negative social and economic consequences. This study was designed to investigate the effect of intracerebroventricular PACAP38 injection in a mouse model of permanent middle cerebral artery occlusion (PMCAO), along with a corresponding SHAM control that used 0.9% saline injection. Methods Ischemic and non-ischemic brain tissues were sampled at 6 and 24 hours post-treatment. Following behavioral analyses to confirm whether ischemia had occurred, we investigated the genome-wide changes in gene and protein expression using a DNA microarray chip (4x44K, Agilent) and two-dimensional gel electrophoresis (2-DGE) coupled with matrix-assisted laser desorption/ionization-time of flight-mass spectrometry (MALDI-TOF-MS), respectively. Western blotting and immunofluorescent staining were also used to further examine the identified protein factor. Results Our results revealed numerous changes in the transcriptome of the ischemic hemisphere (ipsilateral) treated with PACAP38 compared to the saline-injected SHAM control hemisphere (contralateral). Previously known (such as the interleukin family) and novel (Gabra6, Crtam) genes were identified under PACAP influence. In parallel, 2-DGE analysis revealed a highly expressed protein spot in the ischemic hemisphere that was identified as dihydropyrimidinase-related protein 2 (DPYL2). DPYL2, also known as Crmp2, is a marker of axonal growth and nerve development. Interestingly, PACAP treatment slightly increased its abundance (by 2-DGE and immunostaining) at 6 h but not at 24 h in the ischemic hemisphere, suggesting that PACAP activates a neuronal defense mechanism early on. Conclusions This study provides a detailed inventory of PACAP-influenced gene expressions

  11. SEM-based overlay measurement between via patterns and buried M1 patterns using high-voltage SEM

    Science.gov (United States)

    Hasumi, Kazuhisa; Inoue, Osamu; Okagawa, Yutaka; Shao, Chuanyu; Leray, Philippe; Halder, Sandip; Lorusso, Gian; Jehoul, Christiane

    2017-03-01

    As the miniaturization of semiconductors continues, the importance of overlay measurement is increasing. In 1998, we measured overlay with an analytical SEM called Miracle Eye, which can output an ultrahigh acceleration voltage. Meanwhile, since 2006, we have been working on SEM-based overlay measurement and developed an overlay measurement function for same-layer patterns using CD-SEM; we then evaluated the overlay of same-layer patterns after etching. This time, in order to measure overlay after lithography, we evaluated see-through overlay using the high-voltage SEM CV5000 released in October 2016. In collaboration between imec and Hitachi High-Technologies, we evaluated the repeatability and TIS of SEM-OVL as well as the correlation between SEM-OVL and Opt-OVL in the M1@ADI and V0@ADI processes. Repeatability and TIS results are reasonable, and SEM-OVL correlates well with Opt-OVL. From the overlay measurements using the CV5000, we draw the following conclusions. (1) The SEM-OVL results of both M1 and V0 at ADI show good correlation to Opt-OVL. (2) High-voltage SEM can prove the measurement capability of a small, device-like pattern (less than 1 2um) that can be placed in the in-die area. (3) "In-die SEM-based overlay" shows promise for high-order control of the scanner
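    For readers unfamiliar with the metrics quoted above: tool-induced shift (TIS) is conventionally half the sum of the overlay measured at 0 and 180 degree wafer orientations, and repeatability is usually reported as 3 sigma of repeated measurements. The sketch below only illustrates those two formulas; the arrays are made-up numbers, not CV5000 data.

    ```python
    import numpy as np

    # Illustrative overlay readings at 0 and 180 degree wafer loads [nm]; not measured data.
    ovl_0   = np.array([1.2, 0.8, 1.0, 1.1, 0.9])
    ovl_180 = np.array([-0.6, -0.2, -0.4, -0.5, -0.3])

    tis = 0.5 * (ovl_0 + ovl_180)                     # tool-induced shift per site
    repeatability = 3.0 * np.std(ovl_0, ddof=1)       # 3-sigma repeatability of repeated reads
    print("mean TIS %.2f nm, 3-sigma repeatability %.2f nm" % (tis.mean(), repeatability))
    ```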

  12. Canticum Novum: música sem palavras e palavras sem som no pensamento de Santo Agostinho

    Directory of Open Access Journals (Sweden)

    Lorenzo Mammì

    2000-04-01

    Full Text Available In the De Magistro, Saint Augustine places prayer and song on a similar level, at the margin of the immediately communicative functions of language. The Augustinian reflection on prayer is grounded in the Christian habits of silent reading, prayer and meditation; the reflection on song, in the equally innovative practice of the jubilus, a melody without words reserved for the most intense and joyous moments of the liturgy. Silent prayer and the jubilus are recurring topics in patristic literature, but Augustine approaches them in an original way, drawing from the soundless words of prayer and the wordless sound of the jubilus the profile of an inner discourse addressed not to men but to God.

  13. Minimal resin embedding of multicellular specimens for targeted FIB-SEM imaging.

    Science.gov (United States)

    Schieber, Nicole L; Machado, Pedro; Markert, Sebastian M; Stigloher, Christian; Schwab, Yannick; Steyer, Anna M

    2017-01-01

    Correlative light and electron microscopy (CLEM) is a powerful tool for performing ultrastructural analysis of targeted tissues or cells. The large field of view of the light microscope (LM) enables quick and efficient surveys of the whole specimen. It is also compatible with live imaging, giving access to functional assays. CLEM protocols take advantage of these features to efficiently retrace the position of targeted sites when switching from one modality to the other. They most often rely on anatomical cues that are visible by both light and electron microscopy. We present here a simple workflow in which multicellular specimens are embedded in minimal amounts of resin, exposing their surface topology so that it can be imaged by scanning electron microscopy (SEM). LM and SEM both benefit from a large field of view that can cover whole model organisms. As a result, targeting specific anatomical locations by focused ion beam-SEM (FIB-SEM) tomography becomes straightforward. We illustrate this application on three different model organisms used in our laboratory: the zebrafish embryo Danio rerio, the marine worm Platynereis dumerilii, and the dauer larva of the nematode Caenorhabditis elegans. Here we focus on the experimental steps to reduce the amount of resin covering the samples and to image the specimens inside an FIB-SEM. We expect this approach to have widespread applications for volume electron microscopy on multiple model organisms. Copyright © 2017 Elsevier Inc. All rights reserved.

  14. Experimental model to evaluate in vivo and in vitro cartilage MR imaging by means of histological analyses

    Energy Technology Data Exchange (ETDEWEB)

    Bittersohl, B. [Department of Orthopedic Surgery, University of Berne, Inselspital, 3010 Bern (Switzerland); Mamisch, T.C. [Department of Orthopedic Surgery, University of Berne, Inselspital, 3010 Bern (Switzerland)], E-mail: mamisch@bwh.harvard.edu; Welsch, G.H. [Department of Trauma Surgery, University of Erlangen (Germany); Stratmann, J.; Forst, R. [Department of Orthopedic Surgery, University of Erlangen (Germany); Swoboda, B. [Department of Orthopedic Rheumatology, University of Erlangen (Germany); Bautz, W. [Department of Diagnostic Radiology, University of Erlangen (Germany); Rechenberg, B. von [Musculoskeletal Research Unit (MSRU), University of Zurich (Switzerland); Cavallaro, A. [Department of Diagnostic Radiology, University of Erlangen (Germany)

    2009-06-15

    Objectives: Implementation of an experimental model to compare cartilage MR imaging by means of histological analyses. Material and methods: MRI was obtained at 1.5 and/or 3 T from 4 patients awaiting total knee replacement, prior to surgery. The timeframe between pre-op MRI and knee replacement was within two days. Resected cartilage-bone samples were tagged with Ethi-pins to reproduce the histological cutting course. Pre-operative scanning at 1.5 T included the following parameters for fast low angle shot (FLASH: TR/TE/FA = 33 ms/6 ms/30 deg., BW = 110 kHz, 120 mm x 120 mm FOV, 256 x 256 matrix, 0.65 mm slice thickness) and double echo steady state (DESS: TR/TE/FA = 23.7 ms/6.9 ms/40 deg., BW = 130 kHz, 120 x 120 mm FOV, 256 x 256 matrix, 0.65 mm slice thickness). At 3 T, scan parameters were: FLASH (TR/TE/FA = 12.2 ms/5.1 ms/10 deg., BW = 130 kHz, 170 x 170 mm FOV, 320 x 320 matrix, 0.5 mm slice thickness) and DESS (TR/TE/FA = 15.6 ms/4.5 ms/25 deg., BW = 200 kHz, 135 mm x 150 mm FOV, 288 x 320 matrix, 0.5 mm slice thickness). Imaging of the specimens was done the same day at 1.5 T. MRI (Noyes) and histological (Mankin) score scales were correlated using the paired t-test. Sensitivity and specificity for the detection of different grades of cartilage degeneration were assessed. Inter-reader and intra-reader reliability was determined using Kappa analysis. Results: Low correlation (sensitivity, specificity) was found for both sequences in normal to mild Mankin grades. Only moderate to severe changes were diagnosed with higher significance and specificity. The use of higher field strengths was advantageous for both protocols, with sensitivity values ranging from 13.6% to 93.3% (FLASH) and 20.5% to 96.2% (DESS). Kappa values ranged from 0.488 to 0.944. Conclusions: Correlating MR images with continuous histological slices was feasible by using three-dimensional imaging, multi-planar reformatting and marker pins. The capability of diagnosing early cartilage changes with high accuracy
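    The agreement statistics reported above can be reproduced generically: sensitivity and specificity from a confusion matrix of dichotomised MRI versus histology grades, and Cohen's kappa for inter-reader reliability. The gradings in the sketch below are illustrative assumptions, not the study data.

    ```python
    import numpy as np
    from sklearn.metrics import cohen_kappa_score, confusion_matrix

    # Illustrative (made-up) binned gradings for a handful of cartilage samples.
    histology = np.array([0, 0, 1, 1, 2, 2, 3, 3, 2, 1])   # Mankin-based reference
    mri       = np.array([0, 1, 1, 1, 2, 1, 3, 3, 2, 0])   # Noyes-based MRI grading

    # Dichotomise: "degenerated" = grade >= 2, then read sensitivity/specificity off the matrix.
    ref, test = histology >= 2, mri >= 2
    tn, fp, fn, tp = confusion_matrix(ref, test).ravel()
    print("sensitivity %.2f, specificity %.2f" % (tp / (tp + fn), tn / (tn + fp)))

    # Inter-reader reliability: Cohen's kappa between two readers grading the same images.
    reader2 = np.array([0, 1, 1, 2, 2, 1, 3, 3, 2, 0])
    print("inter-reader kappa %.2f" % cohen_kappa_score(mri, reader2))
    ```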

  15. Automated SEM Modal Analysis Applied to the Diogenites

    Science.gov (United States)

    Bowman, L. E.; Spilde, M. N.; Papike, James J.

    1996-01-01

    Analysis of volume proportions of minerals, or modal analysis, is routinely accomplished by point counting on an optical microscope, but the process, particularly on brecciated samples such as the diogenite meteorites, is tedious and prone to error by misidentification of very small fragments, which may make up a significant volume of the sample. Precise volume percentage data can be gathered on a scanning electron microscope (SEM) utilizing digital imaging and an energy dispersive spectrometer (EDS). This form of automated phase analysis reduces error, and at the same time provides more information than could be gathered using simple point counting alone, such as particle morphology statistics and chemical analyses. We have previously studied major, minor, and trace-element chemistry of orthopyroxene from a suite of diogenites. This abstract describes the method applied to determine the modes on this same suite of meteorites and the results of that research. The modal abundances thus determined add additional information on the petrogenesis of the diogenites. In addition, low-abundance phases such as spinels were located for further analysis by this method.
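    As a minimal illustration of the pixel-counting step behind automated modal analysis (not the instrument software used here): once each pixel of an EDS-classified phase map carries a phase label, modal abundances are just normalised pixel counts. The phase map below is randomly generated for illustration.

    ```python
    import numpy as np

    # Hypothetical phase map: each pixel already classified from its EDS spectrum.
    rng = np.random.default_rng(1)
    phase_map = rng.choice(["orthopyroxene", "plagioclase", "spinel"],
                           size=(512, 512), p=[0.90, 0.08, 0.02])

    # Modal (area) percentages follow directly from pixel counts per phase.
    labels, counts = np.unique(phase_map, return_counts=True)
    for name, n in zip(labels, counts):
        print("%-14s %5.2f vol.%%" % (name, 100.0 * n / phase_map.size))
    ```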

  16. Modelling of the spallation reaction: analysis and testing of nuclear models; Simulation de la spallation: analyse et test des modeles nucleaires

    Energy Technology Data Exchange (ETDEWEB)

    Toccoli, C

    2000-04-03

    The spallation reaction is considered as a 2-step process. First, a very quick stage (10^-22, 10^-29 s) which corresponds to the individual interaction between the incident projectile and nucleons; this interaction is followed by a series of nucleon-nucleon collisions (intranuclear cascade) during which fast particles are emitted and the nucleus is left in a strongly excited state. Secondly, a slower stage (10^-18, 10^-19 s) during which the nucleus is expected to de-excite completely. This de-excitation proceeds by evaporation of light particles (n, p, d, t, ³He, ⁴He) or/and fission or/and fragmentation. The HETC code has been designed to simulate spallation reactions; this simulation is based on the 2-step process and on several models of the intranuclear cascade (Bertini model, Cugnon model, Helder Duarte model), while the evaporation model relies on the statistical theory of Weisskopf-Ewing. The purpose of this work is to evaluate the ability of the HETC code to predict experimental results. A methodology for the comparison of relevant experimental data with results of calculation is presented and a preliminary estimation of the systematic error of the HETC code is proposed. The main problem of cascade models originates in the difficulty of simulating inelastic nucleon-nucleon collisions: the emission of pions is over-estimated and the corresponding differential spectra are badly reproduced. The inaccuracy of cascade models has a great impact on determining the excitation of the nucleus at the end of the first step and indirectly on the distribution of final residual nuclei. The test of the evaporation model has shown that the emission of high energy light particles is under-estimated. (A.C.)

  17. Generic Linking of Finite Element Models for non-linear static and global dynamic analyses of aircraft structures

    NARCIS (Netherlands)

    de Wit, A.J.; Akcay-Perdahcioglu, Didem; van den Brink, W.M.; de Boer, Andries

    2012-01-01

    Depending on the type of analysis, Finite Element(FE) models of different fidelity are necessary. Creating these models manually is a labor intensive task. This paper discusses a generic approach for generating FE models of different fidelity from a single reference FE model. These different

  18. Generic linking of finite element models for non-linear static and global dynamic analyses for aircraft structures

    NARCIS (Netherlands)

    de Wit, A.J.; Akcay-Perdahcioglu, Didem; van den Brink, W.M.; de Boer, Andries; Rolfes, R.; Jansen, E.L.

    2011-01-01

    Depending on the type of analysis, Finite Element(FE) models of different fidelity are necessary. Creating these models manually is a labor intensive task. This paper discusses a generic approach for generating FE models of different fidelity from a single reference FE model. These different

  19. The influence of environment temperature on SEM image quality

    International Nuclear Information System (INIS)

    Chen, Li; Liu, Junshan

    2015-01-01

    As structure dimensions go down to the nano-scale, a scanning electron microscope (SEM) is often required to provide image magnification up to 100 000×. However, SEM images at such a high magnification usually suffer from a poor resolution value and a low signal-to-noise ratio, which results in low SEM image quality. In this paper, the quality of the SEM image is improved by optimizing the environment temperature. The experimental results indicate that at 100 000× the quality of the SEM image is influenced by the environment temperature, whereas at 50 000× it is not. At 100 000× the best SEM image quality can be achieved with the environment temperature ranging from 292 to 294 K, and the SEM image quality evaluated by the double stimulus continuous quality scale method can increase from grade 1 to grade 5. It is expected that this image quality improving method can be used in routine measurements with ordinary SEMs to get high quality images by optimizing the environment temperature. (paper)

  20. Web semántica y servicios web semanticos

    OpenAIRE

    Marquez Solis, Santiago

    2007-01-01

    In this Final Degree Project we study the evolution of the current Web towards the Semantic Web.

  1. SEM metrology on bit patterned media nanoimprint template: issues and improvements

    Science.gov (United States)

    Hwu, Justin J.; Babin, Sergey; Yushmanov, Peter

    2012-03-01

    Critical dimension (CD) measurement is the most essential metrology needed in nanofabrication processes, and the practice is most commonly executed using SEMs for their flexibility in sampling, imaging, and data processing. In bit patterned media process development, nanoimprint lithography (NIL) is used for template replication and media fabrication. SEM imaging of templates provides not only individual dot size, but also information on dot size distribution, the location of dots, pitch and array alignment quality, etc. It is very important to know the SEM measurement limit, since the nominal feature size is less than 20 nm and the dot feature size and other metrics relate to the final media performance. In our work an analytical SEM was used. We performed and compared two image analysis approaches for metrology information. The SEM beam was characterized using the BEAMETR test sample and software for proper beam condition setup. A series of images obtained on 27 nm nominal pitch dot array patterns were analyzed by the conventional brightness-intensity threshold method and by physical-model-based analysis using myCD software. Through this comparison we identified the issues with the threshold method and the strength of model-based analysis in improving feature size and pitch measurement uncertainty and accuracy. TEM cross sections were performed as an accuracy reference for better understanding of the source of measurement accuracy deviation.
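    The conventional brightness-intensity threshold method mentioned above amounts to finding where an intensity profile across the feature crosses a fixed fraction of its range and reporting the distance between the outermost crossings. The sketch below demonstrates that idea on a synthetic bright-edge profile; the profile shape, noise level and 50% threshold are illustrative assumptions.

    ```python
    import numpy as np

    # Threshold-based CD measurement on a synthetic SEM line profile (illustrative assumptions:
    # bright-edge peaks at +/-9 nm, Gaussian edge width 3 nm, 5% noise, 50% threshold).
    rng = np.random.default_rng(7)
    x = np.linspace(-20.0, 20.0, 401)                     # position across the dot [nm]
    profile = np.exp(-((np.abs(x) - 9.0) / 3.0) ** 2)     # two bright edges of a small dot
    profile += 0.05 * rng.standard_normal(x.size)         # detector noise

    threshold = 0.5 * (profile.max() + profile.min())
    above = np.flatnonzero(profile > threshold)
    cd = x[above[-1]] - x[above[0]]                       # outermost threshold crossings
    print("threshold-based CD estimate: %.1f nm" % cd)
    ```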

  2. Item Response Theory Modeling and Categorical Regression Analyses of the Five-Factor Model Rating Form: A Study on Italian Community-Dwelling Adolescent Participants and Adult Participants.

    Science.gov (United States)

    Fossati, Andrea; Widiger, Thomas A; Borroni, Serena; Maffei, Cesare; Somma, Antonella

    2017-06-01

    To extend the evidence on the reliability and construct validity of the Five-Factor Model Rating Form (FFMRF) in its self-report version, two independent samples of Italian participants, which were composed of 510 adolescent high school students and 457 community-dwelling adults, respectively, were administered the FFMRF in its Italian translation. Adolescent participants were also administered the Italian translation of the Borderline Personality Features Scale for Children-11 (BPFSC-11), whereas adult participants were administered the Italian translation of the Triarchic Psychopathy Measure (TriPM). Cronbach α values were consistent with previous findings; in both samples, average interitem r values indicated acceptable internal consistency for all FFMRF scales. A multidimensional graded item response theory model indicated that the majority of FFMRF items had adequate discrimination parameters; information indices supported the reliability of the FFMRF scales. Both categorical (i.e., item-level) and scale-level regression analyses suggested that the FFMRF scores may predict a nonnegligible amount of variance in the BPFSC-11 total score in adolescent participants, and in the TriPM scale scores in adult participants.
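    The item-level analyses above rest on a graded item response theory model. As a minimal single-item illustration of Samejima's graded response model, the function below converts a latent trait value and item parameters into category probabilities; the discrimination and threshold values are illustrative, not FFMRF estimates.

    ```python
    import numpy as np

    def grm_category_probs(theta, a, b):
        """Category probabilities for one polytomous item under the graded response model.

        theta : latent trait value; a : discrimination; b : increasing category thresholds.
        """
        b = np.asarray(b, dtype=float)
        p_star = 1.0 / (1.0 + np.exp(-a * (theta - b)))   # P(X >= k), k = 1..K-1
        cum = np.concatenate(([1.0], p_star, [0.0]))
        return cum[:-1] - cum[1:]                         # P(X == k), k = 0..K-1

    # Illustrative parameters (not FFMRF estimates): 5 response categories.
    probs = grm_category_probs(theta=0.5, a=1.8, b=[-1.5, -0.2, 1.0, 2.2])
    print(probs, probs.sum())                             # probabilities sum to 1
    ```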

  3. Overview of fuel behaviour and core degradation, based on modelling analyses. Overview of fuel behaviour and core degradation, on the basis of modelling results

    International Nuclear Information System (INIS)

    Massara, Simone

    2013-01-01

    Since the very first hours after the accident at Fukushima-Daiichi, numerical simulations by means of severe accident codes have been carried out, aiming at highlighting the key physical phenomena needed for a correct understanding of the sequence of events and, on a long enough timeline, at improving models and methods in order to reduce the discrepancy between calculated and measured data. A final long-term objective is to support the future decommissioning phase. The presentation summarises some of the available elements on the role of the fuel/cladding-water interaction, elements which became accessible only through modelling because of the absence of measured data directly related to the cladding-steam interaction. This presentation also aims at drawing some conclusions on the status of the modelling capabilities of current tools, particularly for the purpose of the foreseen application to ATF fuels: - analyses with MELCOR, MAAP, THALES2 and RELAP5 are presented; - input data are taken from BWR Mark-I Fukushima-Daiichi Units 1, 2 and 3, completed with operational data published by TEPCO. In the case of missing or incomplete data or hypotheses, these are adjusted to reduce the calculation/measurement discrepancy. The progression of the accident is well understood at a qualitative level (the major trends in RPV pressure and water level, dry-wet conditions and PCV pressure are well represented), allowing a certain level of confidence in the results of the analysis of the zirconium-steam reaction, which is accessible only through numerical simulation. These show an extremely fast sequence of events (here for Unit 1): - the top of the fuel is uncovered in 3 hours (after the tsunami); - the steam line breaks at 6.5 hours. The vessel dries out at 10 hours, with a heat-up rate at first driven by the decay heat only (∼7 K/min) and afterwards by the chemical heat from Zr oxidation (over 30 K/min), associated with massive hydrogen production. It appears that the level of uncertainty increases with

  4. Analysing conflicts around small-scale gold mining in the Amazon : The contribution of a multi-temporal model

    NARCIS (Netherlands)

    Salman, Ton; de Theije, Marjo

    Conflict is small-scale gold mining's middle name. In only a very few situations do mining operations take place without some sort of conflict accompanying the activity, and often various conflicting stakeholders struggle for their interests simultaneously. Analyses of such conflicts are typically

  5. Sensitivity of the direct stop pair production analyses in phenomenological MSSM simplified models with the ATLAS detectors

    CERN Document Server

    Snyder, Ian Michael; The ATLAS collaboration

    2018-01-01

    The sensitivity of the searches for the direct pair production of stops has often been evaluated in simple SUSY scenarios, where only a limited set of supersymmetric particles take part in the stop decay. In this talk, the interpretations of the analyses requiring zero, one or two leptons in the final state in terms of simple but well-motivated MSSM scenarios will be discussed.

  6. State of the art in establishing computed models of adsorption processes to serve as a basis of radionuclide migration assessment for safety analyses

    International Nuclear Information System (INIS)

    Koss, V.

    1991-01-01

    An important point in the safety analysis of an underground repository is the adsorption of radionuclides in the overlying cover. Adsorption may be judged on the basis of experimental results or of model calculations. Because of the reliability required in safety analyses, it is necessary to support experimental results with theoretical calculations. At present there is no single agreed thermodynamic model of adsorption. Therefore, this work reviews existing equilibrium models of adsorption. Limitations of the Kd concept and of the adsorption isotherms according to Freundlich and Langmuir are mentioned. The surface ionisation and complexation EDL (electrical double layer) model is explained in full, as is the criticism of this model. The application of simple surface complexation models to adsorption experiments in natural systems is stressed, as is experimental and modelling work on systems from Gorleben. Hints are given on how to deal with the modelling of adsorption in Gorleben systems in the future. (orig.) [de
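    For orientation, the three equilibrium sorption descriptions contrasted above have simple closed forms: the linear Kd model s = Kd*c, the Freundlich isotherm s = Kf*c^n and the Langmuir isotherm s = smax*KL*c/(1 + KL*c). The sketch below evaluates them side by side; all parameter values are illustrative assumptions.

    ```python
    import numpy as np

    # Illustrative equilibrium sorption laws; all parameter values are assumptions.
    def kd_isotherm(c, kd=5.0):           # linear:      s = Kd * c
        return kd * c

    def freundlich(c, kf=2.0, n=0.7):     # Freundlich:  s = Kf * c**n
        return kf * c ** n

    def langmuir(c, smax=1e-3, kl=1e3):   # Langmuir:    s = smax * Kl*c / (1 + Kl*c)
        return smax * kl * c / (1.0 + kl * c)

    c = 1e-4                              # equilibrium solution concentration [mol/L]
    for model in (kd_isotherm, freundlich, langmuir):
        print("%-12s sorbed amount at c=1e-4: %.3e" % (model.__name__, model(c)))
    ```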

  7. Structured modelling and nonlinear analysis of PEM fuel cells; Strukturierte Modellierung und nichtlineare Analyse von PEM-Brennstoffzellen

    Energy Technology Data Exchange (ETDEWEB)

    Hanke-Rauschenbach, R.

    2007-10-26

    In the first part of this work a model structuring concept for electrochemical systems is presented. The application of such a concept for the structuring of a process model allows it to combine different fuel cell models to form a whole model family, regardless of their level of detail. Beyond this the concept offers the opportunity to flexibly exchange model entities on different model levels. The second part of the work deals with the nonlinear behaviour of PEM fuel cells. With the help of a simple, spatially lumped and isothermal model, bistable current-voltage characteristics of PEM fuel cells operated with low humidified feed gases are predicted and discussed in detail. The cell is found to exhibit current-voltage curves with pronounced local extrema in a parameter range that is of practical interest when operated at constant feed gas flow rates. (orig.)

  8. Evaluation of habitat suitability index models by global sensitivity and uncertainty analyses: a case study for submerged aquatic vegetation

    Science.gov (United States)

    Zajac, Zuzanna; Stith, Bradley M.; Bowling, Andrea C.; Langtimm, Catherine A.; Swain, Eric D.

    2015-01-01

    Habitat suitability index (HSI) models are commonly used to predict habitat quality and species distributions and are used to develop biological surveys, assess reserve and management priorities, and anticipate possible change under different management or climate change scenarios. Important management decisions may be based on model results, often without a clear understanding of the level of uncertainty associated with model outputs. We present an integrated methodology to assess the propagation of uncertainty from both inputs and structure of the HSI models on model outputs (uncertainty analysis: UA) and relative importance of uncertain model inputs and their interactions on the model output uncertainty (global sensitivity analysis: GSA). We illustrate the GSA/UA framework using simulated hydrology input data from a hydrodynamic model representing sea level changes and HSI models for two species of submerged aquatic vegetation (SAV) in southwest Everglades National Park: Vallisneria americana (tape grass) and Halodule wrightii (shoal grass). We found considerable spatial variation in uncertainty for both species, but distributions of HSI scores still allowed discrimination of sites with good versus poor conditions. Ranking of input parameter sensitivities also varied spatially for both species, with high habitat quality sites showing higher sensitivity to different parameters than low-quality sites. HSI models may be especially useful when species distribution data are unavailable, providing means of exploiting widely available environmental datasets to model past, current, and future habitat conditions. The GSA/UA approach provides a general method for better understanding HSI model dynamics, the spatial and temporal variation in uncertainties, and the parameters that contribute most to model uncertainty. Including an uncertainty and sensitivity analysis in modeling efforts as part of the decision-making framework will result in better-informed, more robust

  9. Evaluation of habitat suitability index models by global sensitivity and uncertainty analyses: a case study for submerged aquatic vegetation.

    Science.gov (United States)

    Zajac, Zuzanna; Stith, Bradley; Bowling, Andrea C; Langtimm, Catherine A; Swain, Eric D

    2015-07-01

    Habitat suitability index (HSI) models are commonly used to predict habitat quality and species distributions and are used to develop biological surveys, assess reserve and management priorities, and anticipate possible change under different management or climate change scenarios. Important management decisions may be based on model results, often without a clear understanding of the level of uncertainty associated with model outputs. We present an integrated methodology to assess the propagation of uncertainty from both inputs and structure of the HSI models on model outputs (uncertainty analysis: UA) and relative importance of uncertain model inputs and their interactions on the model output uncertainty (global sensitivity analysis: GSA). We illustrate the GSA/UA framework using simulated hydrology input data from a hydrodynamic model representing sea level changes and HSI models for two species of submerged aquatic vegetation (SAV) in southwest Everglades National Park: Vallisneria americana (tape grass) and Halodule wrightii (shoal grass). We found considerable spatial variation in uncertainty for both species, but distributions of HSI scores still allowed discrimination of sites with good versus poor conditions. Ranking of input parameter sensitivities also varied spatially for both species, with high habitat quality sites showing higher sensitivity to different parameters than low-quality sites. HSI models may be especially useful when species distribution data are unavailable, providing means of exploiting widely available environmental datasets to model past, current, and future habitat conditions. The GSA/UA approach provides a general method for better understanding HSI model dynamics, the spatial and temporal variation in uncertainties, and the parameters that contribute most to model uncertainty. Including an uncertainty and sensitivity analysis in modeling efforts as part of the decision-making framework will result in better-informed, more robust
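    As a schematic of the UA/GSA workflow described above, and not of the authors' Everglades application, the sketch below propagates three uncertain inputs through a toy habitat suitability index by Monte Carlo, summarises the output distribution, and ranks inputs with a simple rank-correlation measure; the HSI form, input ranges and distributions are illustrative assumptions.

    ```python
    import numpy as np

    # Monte Carlo uncertainty analysis (UA) and a crude global sensitivity analysis (GSA)
    # for a toy HSI; all input ranges and the index form are illustrative assumptions.
    rng = np.random.default_rng(1)
    n = 10000
    depth    = rng.uniform(0.2, 2.0, n)      # water depth [m]
    salinity = rng.uniform(5.0, 35.0, n)     # salinity [psu]
    light    = rng.uniform(0.1, 1.0, n)      # light suitability [0-1]

    si_depth = np.clip(1.0 - np.abs(depth - 1.0), 0, 1)
    si_sal   = np.clip(1.0 - np.abs(salinity - 15.0) / 20.0, 0, 1)
    hsi = (si_depth * si_sal * light) ** (1.0 / 3.0)   # geometric mean of suitability indices

    print("HSI mean %.2f, 90%% interval (%.2f, %.2f)" %
          (hsi.mean(), *np.percentile(hsi, [5, 95])))   # uncertainty analysis

    def rank_r2(x, y):                                  # squared Spearman-style rank correlation
        rx, ry = np.argsort(np.argsort(x)), np.argsort(np.argsort(y))
        return np.corrcoef(rx, ry)[0, 1] ** 2

    for name, x in [("depth", depth), ("salinity", salinity), ("light", light)]:
        print("%-8s rank-R2 %.2f" % (name, rank_r2(x, hsi)))
    ```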

  10. Automated CD-SEM metrology for efficient TD and HVM

    Science.gov (United States)

    Starikov, Alexander; Mulapudi, Satya P.

    2008-03-01

    CD-SEM is the metrology tool of choice for patterning process development and production process control. We can make these applications more efficient by extracting more information from each CD-SEM image. This enables direct monitors of key process parameters, such as lithography dose and focus, or predicting the outcome of processing, such as etched dimensions or electrical parameters. Automating CD-SEM recipes at the early stages of process development can accelerate technology characterization, segmentation of variance and process improvements. This leverages the engineering effort, reduces development costs and helps to manage the risks inherent in new technology. Automating CD-SEM for manufacturing enables efficient operations. Novel SEM Alarm Time Indicator (SATI) makes this task manageable. SATI pulls together data mining, trend charting of the key recipe and Operations (OPS) indicators, Pareto of OPS losses and inputs for root cause analysis. This approach proved natural to our FAB personnel. After minimal initial training, we applied new methods in 65nm FLASH manufacture. This resulted in significant lasting improvements of CD-SEM recipe robustness, portability and automation, increased CD-SEM capacity and MT productivity.

  11. Using Structural Equation Modelling (SEM) to predict use of ...

    African Journals Online (AJOL)

    mother to child transmission of HIV. It was found to be effective in changing behaviour and studies ... indirectly via intention (willingness) in order to influence the behaviour of coming for VCT. This is because VCT ... similar to those reported in hospital or satellite (stand-alone) VCT centres, we feel that our findings can also ...

  12. From global economic modelling to household level analyses of food security and sustainability: how big is the gap and can we bridge it?

    NARCIS (Netherlands)

    Wijk, van M.T.

    2014-01-01

    Policy and decision makers have to make difficult choices to improve the food security of local people against the background of drastic global and local changes. Ex-ante impact assessment using integrated models can help them with these decisions. This review analyses the state of affairs of the

  13. Understanding N2O formation mechanisms through sensitivity analyses using a plant-wide benchmark simulation model

    DEFF Research Database (Denmark)

    Boiocchi, Riccardo; Gernaey, Krist; Sin, Gürkan

    2017-01-01

    In the present work, sensitivity analyses are performed on a plant-wide model incorporating the typical treatment units of a full-scale wastewater treatment plant and N2O production and emission dynamics. The influence of operating temperature is investigated. The results are exploited to identify...

  14. Global sensitivity analysis of thermomechanical models in modelling of welding; Analyse de sensibilite globale de modeles thermomecanique de simulation numerique du soudage

    Energy Technology Data Exchange (ETDEWEB)

    Petelet, M

    2008-07-01

    The current approach of most welding modellers is to content themselves with the available material data and to choose a mechanical model that seems appropriate. Among the inputs, those controlling the material properties are one of the key problems of welding simulation: material data are never characterized over a sufficiently wide temperature range. This way of proceeding neglects the influence of the uncertainty of the input data on the result given by the computer code. In this case, how can the credibility of the prediction be assessed? This thesis represents a step towards implementing an innovative approach in welding simulation in order to answer this question, with an illustration on some concrete welding cases. Global sensitivity analysis is chosen to determine which material properties are the most sensitive in a numerical welding simulation and in which range of temperature. Using this methodology required some developments to sample and explore the input space covering the welding of different steel materials. Finally, the input data have been divided into two groups according to their influence on the output of the model (residual stress or distortion). In this work, the complete methodology of global sensitivity analysis has been successfully applied to welding simulation and led to reducing the input space to only the important variables. Sensitivity analysis has provided answers to what can be considered one of the most frequently asked questions regarding welding simulation: for a given material, which properties must be measured with good accuracy and which ones can simply be extrapolated or taken from a similar material? (author)

  15. Global sensitivity analysis of thermo-mechanical models in numerical weld modelling; Analyse de sensibilite globale de modeles thermomecaniques de simulation numerique du soudage

    Energy Technology Data Exchange (ETDEWEB)

    Petelet, M

    2007-10-15

    The current approach of most welding modellers is to content themselves with the available material data and to choose a mechanical model that seems appropriate. Among the inputs, those controlling the material properties are one of the key problems of welding simulation: material data are never characterized over a sufficiently wide temperature range! This way of proceeding neglects the influence of the uncertainty of the input data on the result given by the computer code. In this case, how can the credibility of the prediction be assessed? This thesis represents a step towards implementing an innovative approach in welding simulation in order to answer this question, with an illustration on some concrete welding cases. Global sensitivity analysis is chosen to determine which material properties are the most sensitive in a numerical welding simulation and in which range of temperature. Using this methodology required some developments to sample and explore the input space covering the welding of different steel materials. Finally, the input data have been divided into two groups according to their influence on the output of the model (residual stress or distortion). In this work, the complete methodology of global sensitivity analysis has been successfully applied to welding simulation and led to reducing the input space to only the important variables. Sensitivity analysis has provided answers to what can be considered one of the most frequently asked questions regarding welding simulation: for a given material, which properties must be measured with good accuracy and which ones can simply be extrapolated or taken from a similar material? (author)

  16. Comparison of SEM and Optical Analysis of DT Neutron Tracks in CR-39 Detectors

    Energy Technology Data Exchange (ETDEWEB)

    Mosier-Boss, P A; Carbonelle, P; Morey, M S; Tinsley, J R; Hurley, J P

    2012-01-01

    CR-39 detectors were exposed to DT neutrons generated by a Thermo Fisher model A290 neutron generator. Afterwards, the etched tracks were examined both optically and by SEM. The purpose of the analysis was to compare the two techniques and to determine whether additional information on track geometry could be obtained by SEM analysis. The use of these techniques to examine triple tracks, diagnostic of ≥9.6 MeV neutrons, observed in CR-39 used in Pd/D codeposition experiments will also be discussed.

  17. In Situ Characterization of Boehmite Particles in Water Using Liquid SEM.

    Science.gov (United States)

    Yao, Juan; Arey, Bruce W; Yang, Li; Zhang, Fei; Komorek, Rachel; Chun, Jaehun; Yu, Xiao-Ying

    2017-09-27

    In situ imaging and elemental analysis of boehmite (AlOOH) particles in water is realized using the System for Analysis at the Liquid Vacuum Interface (SALVI) and Scanning Electron Microscopy (SEM). This paper describes the method and key steps in integrating the vacuum-compatible SALVI with the SEM and obtaining secondary electron (SE) images of particles in liquid under high vacuum. Energy dispersive x-ray spectroscopy (EDX) is used to obtain elemental analysis of particles in liquid, as well as of control samples including deionized (DI) water only and an empty channel. Synthesized boehmite (AlOOH) particles suspended in liquid are used as a model in the liquid SEM illustration. The results demonstrate that the particles can be imaged in SE mode with good resolution (i.e., 400 nm). The AlOOH EDX spectrum shows a significant aluminum (Al) signal when compared with the DI water and empty channel controls. In situ liquid SEM is a powerful technique for studying particles in liquid, with many exciting applications. This procedure aims to provide the technical know-how to conduct liquid SEM imaging and EDX analysis using SALVI and to reduce potential pitfalls when using this approach.

  18. Direct and Indirect Effects of Parental Influence upon Adolescent Alcohol Use: A Structural Equation Modeling Analysis

    Science.gov (United States)

    Kim, Young-Mi; Neff, James Alan

    2010-01-01

    A model incorporating the direct and indirect effects of parental monitoring on adolescent alcohol use was evaluated by applying structural equation modeling (SEM) techniques to data on 4,765 tenth-graders in the 2001 Monitoring the Future Study. Analyses indicated good fit of hypothesized measurement and structural models. Analyses supported both…
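    The record above reports a full structural equation model; as a stripped-down stand-in, the direct-versus-indirect decomposition can be illustrated with the classic product-of-coefficients approach using two ordinary regressions. Everything in the sketch below (variable names, effect sizes, simulated data) is an illustrative assumption, not Monitoring the Future data.

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Simulated stand-in data: parental monitoring, a hypothetical mediator, and alcohol use.
    rng = np.random.default_rng(0)
    n = 4765
    monitoring = rng.normal(size=n)
    peers      = -0.5 * monitoring + rng.normal(size=n)          # hypothetical mediator
    alcohol    = 0.4 * peers - 0.2 * monitoring + rng.normal(size=n)
    df = pd.DataFrame(dict(monitoring=monitoring, peers=peers, alcohol=alcohol))

    m_med = smf.ols("peers ~ monitoring", df).fit()              # a-path
    m_out = smf.ols("alcohol ~ monitoring + peers", df).fit()    # b-path and direct effect

    a, b = m_med.params["monitoring"], m_out.params["peers"]
    direct = m_out.params["monitoring"]
    print("direct %.3f, indirect %.3f, total %.3f" % (direct, a * b, direct + a * b))
    ```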

  19. Analysis and modelling of the energy requirements of batch processes; Analyse und Modellierung des Energiebedarfes in Batch-Prozessen

    Energy Technology Data Exchange (ETDEWEB)

    Bieler, P.S.

    2002-07-01

    This intermediate report for the Swiss Federal Office of Energy (SFOE) presents the results of a project aiming to model the energy consumption of multi-product, multi-purpose batch production plants. The utilities investigated were electricity, brine and steam. Both top-down and bottom-up approaches are described, whereby top-down was used for the buildings where the batch process apparatus was installed. Modelling showed that for batch-plants at the building level, the product mix can be too variable and the diversity of products and processes too great for simple modelling. Further results obtained by comparing six different production plants that could be modelled are discussed. The several models developed are described and their wider applicability is discussed. Also, the results of comparisons made between modelled and actual values are presented. Recommendations for further work are made.

  20. A growth curve model with fractional polynomials for analysing incomplete time-course data in microarray gene expression studies

    DEFF Research Database (Denmark)

    Tan, Qihua; Thomassen, Mads; Hjelmborg, Jacob V B

    2011-01-01

    ...time-course pattern in a gene-by-gene manner. We introduce a growth curve model with fractional polynomials to automatically capture the various time-dependent expression patterns and meanwhile efficiently handle missing values due to incomplete observations. For each gene, our procedure compares the performances among fractional polynomial models with power terms from a set of fixed values that offer a wide range of curve shapes and suggests a best-fitting model. After a limited simulation study, the model has been applied to our human in vivo irritated epidermis data with missing observations to investigate time-dependent transcriptional responses to a chemical irritant. Our method was able to identify the various nonlinear time-course expression trajectories. The integration of growth curves with fractional polynomials provides a flexible way to model different time-course patterns together with model
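    The model selection described above can be sketched for a single gene as follows: fit y(t) = b0 + b1*t^p for every power p in a fixed candidate set, using only the observed time points, and keep the best-fitting power. The candidate powers and the toy expression profile below are illustrative assumptions.

    ```python
    import numpy as np

    # Fractional-polynomial growth-curve sketch for one gene with missing observations.
    powers = [-2, -1, -0.5, 0.5, 1, 2, 3]                      # p = 0 (log t) omitted for brevity
    t = np.array([1, 2, 4, 8, 16, 24, 48], dtype=float)        # sampling times [h]
    y = np.array([0.1, 0.4, np.nan, 1.1, 1.3, np.nan, 1.6])    # log-ratios with missing values

    ok = ~np.isnan(y)                                          # use only observed time points
    best = None
    for p in powers:
        X = np.column_stack([np.ones(ok.sum()), t[ok] ** p])
        beta, *_ = np.linalg.lstsq(X, y[ok], rcond=None)
        rss = float(np.sum((y[ok] - X @ beta) ** 2))           # residual sum of squares
        if best is None or rss < best[1]:
            best = (p, rss, beta)

    print("best power %.1f, RSS %.4f, coefficients %s" % best)
    ```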

  1. SPECIFICS OF THE APPLICATIONS OF MULTIPLE REGRESSION MODEL IN THE ANALYSES OF THE EFFECTS OF GLOBAL FINANCIAL CRISES

    Directory of Open Access Journals (Sweden)

    Željko V. Račić

    2010-12-01

    Full Text Available This paper aims to present the specifics of the application of the multiple linear regression model. The economic (financial) crisis is analyzed in terms of gross domestic product, which is modelled as a function of the foreign trade balance on one hand and of credit-card debt, i.e. indebtedness of the population on this basis, on the other hand, in the USA from 1999 to 2008. We used an extended application model which shows how the analyst should run the whole development process of a regression model. This process began with simple statistical features and the application of regression procedures, and ended with residual analysis, intended to study the compatibility of the data and the model settings. This paper also analyzes the values of some standard statistics used in the selection of an appropriate regression model. Testing of the model is carried out with the use of the Statistics PASW 17 program.
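    A minimal sketch of the regression set-up described above, with GDP modelled as a function of the foreign trade balance and household credit-card debt, is given below; the simulated annual series stand in for the 1999-2008 US data and are illustrative assumptions.

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Simulated stand-in for ten annual observations (1999-2008); values are illustrative.
    rng = np.random.default_rng(42)
    years = np.arange(1999, 2009)
    trade_balance = -380 - 30 * (years - 1999) + rng.normal(0, 20, 10)   # USD bn
    cc_debt = 600 + 35 * (years - 1999) + rng.normal(0, 15, 10)          # USD bn
    gdp = 9600 + 5.0 * cc_debt - 2.0 * trade_balance + rng.normal(0, 150, 10)
    df = pd.DataFrame(dict(gdp=gdp, trade_balance=trade_balance, cc_debt=cc_debt))

    # Multiple linear regression of GDP on the two explanatory variables, then residual analysis.
    model = smf.ols("gdp ~ trade_balance + cc_debt", data=df).fit()
    print(model.summary())
    print(model.resid)      # residuals, the starting point for checking model assumptions
    ```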

  2. Basic Diagnosis and Prediction of Persistent Contrail Occurrence using High-resolution Numerical Weather Analyses/Forecasts and Logistic Regression. Part II: Evaluation of Sample Models

    Science.gov (United States)

    Duda, David P.; Minnis, Patrick

    2009-01-01

    Previous studies have shown that probabilistic forecasting may be a useful method for predicting persistent contrail formation. A probabilistic forecast to accurately predict contrail formation over the contiguous United States (CONUS) is created by using meteorological data based on hourly meteorological analyses from the Advanced Regional Prediction System (ARPS) and from the Rapid Update Cycle (RUC) as well as GOES water vapor channel measurements, combined with surface and satellite observations of contrails. Two groups of logistic models were created. The first group of models (SURFACE models) is based on surface-based contrail observations supplemented with satellite observations of contrail occurrence. The second group of models (OUTBREAK models) is derived from a selected subgroup of satellite-based observations of widespread persistent contrails. The mean accuracies for both the SURFACE and OUTBREAK models typically exceeded 75 percent when based on the RUC or ARPS analysis data, but decreased when the logistic models were derived from ARPS forecast data.
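    A probabilistic contrail forecast of the kind described above reduces to a logistic regression from meteorological predictors to a probability of persistent contrail occurrence. The sketch below fits one on synthetic data; the predictors, the coefficients used to simulate labels and the humidity/temperature ranges are illustrative assumptions, not the SURFACE or OUTBREAK model specifications.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    # Synthetic predictors and contrail labels; all values are illustrative assumptions.
    rng = np.random.default_rng(0)
    n = 5000
    rhi  = rng.uniform(50, 130, n)        # relative humidity with respect to ice [%]
    temp = rng.uniform(-70, -30, n)       # upper-tropospheric temperature [C]
    X = np.column_stack([rhi, temp])
    logit = 0.08 * (rhi - 100) - 0.05 * (temp + 50)
    y = rng.random(n) < 1 / (1 + np.exp(-logit))   # 1 = persistent contrail observed

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
    clf = LogisticRegression().fit(X_tr, y_tr)
    print("hold-out accuracy: %.2f" % clf.score(X_te, y_te))
    print("P(contrail) for RHi=110%%, T=-55C: %.2f" % clf.predict_proba([[110, -55]])[0, 1])
    ```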

  3. Alternative SEM techniques for observing pyritised fossil material.

    Science.gov (United States)

    Poole; Lloyd

    2000-11-01

    Two scanning electron microscopy (SEM) electron-specimen interactions that provide images based on sample crystal structure, electron channelling and electron backscattered diffraction, are described. The SEM operating conditions and sample preparation are presented, followed by an example application of these techniques to the study of pyritised plant material. The two approaches provide an opportunity to examine simultaneously, at higher magnifications than are normally available optically, detailed specimen anatomy and preservation state. Our investigation suggests that whereas both techniques have their advantages, the electron channelling approach is generally more readily available to most SEM users. However, electron backscattered diffraction does afford the opportunity of automated examination and characterisation of pyritised fossil material.

  4. Improvement of geometrical measurements from 3D-SEM reconstructions

    DEFF Research Database (Denmark)

    Carli, Lorenzo; De Chiffre, Leonardo; Horsewell, Andy

    2009-01-01

    The quantification of 3D geometry at the nanometric scale is a major metrological challenge. In this work geometrical measurements on cylindrical items obtained with a 3D-SEM were investigated. Two items were measured: a wire gauge having a 0.25 mm nominal diameter and a hypodermic needle having...... that the diameter estimation performed using the 3D-SEM leads to an overestimation of approx. 7% compared to the reference values obtained using a 1-D length measuring machine. Standard deviation of SEM measurements performed on the wire gauge is approx. 1.5 times lower than the one performed on the hypodermic...

  5. Development of an Economical Interfacing Circuit for Upgrading of SEM Data Printing System

    International Nuclear Information System (INIS)

    Punnachaiya, S.; Thong-Aram, D.

    2002-01-01

    The operating conditions of a scanning electron microscope (SEM), i.e. magnification, accelerating voltage, micron mark and film identification labeling, are very important for the accurate interpretation of a micrograph. In older SEM models, the built-in data printing system for film identification accepts only numerical input, which causes confusion when various operating conditions are applied in routine work. An economical interfacing circuit was therefore developed to upgrade the data printing system so that it is capable of alphanumerical labeling. The circuit was tested on the data printing systems of both the JSM-T220 and the JSM-T330 (JEOL SEMs). The interfacing function worked properly and was easily installed

  6. Semantic web model for universities (Modelo de web semántica para universidades)

    Directory of Open Access Journals (Sweden)

    Karla Abad

    2015-12-01

    Full Text Available Following a study of the current state of microsites and repositories at the Universidad Estatal Península de Santa Elena, it was found that their information lacked optimal and appropriate semantics. Under these circumstances, the need arose to create a semantic web structure model for universities, which was subsequently applied to the UPSE microsites and digital repository as a test case. Part of this project included the installation of software modules with their respective configurations and the use of metadata standards such as DUBLIN CORE to improve SEO (search engine optimization); this led to the generation of standardized metadata and the creation of policies for uploading information. The use of metadata transforms simple data into well-organized structures that provide information and knowledge to generate results in web search engines. On completing the implementation of the semantic web model, it can be said that the university has improved its presence and visibility on the web through the indexing of its information in different search engines and improved positioning in the Webometrics categorization of universities and repositories (a ranking that classifies universities around the world).

  7. QTL analyses on genotype-specific component traits in a crop simulation model for capsicum annuum L.

    NARCIS (Netherlands)

    Wubs, A.M.; Heuvelink, E.; Dieleman, J.A.; Magan, J.J.; Palloix, A.; Eeuwijk, van F.A.

    2012-01-01

    Abstract: QTL for a complex trait like yield tend to be unstable across environments and show QTL by environment interaction. Direct improvement of complex traits by selecting on QTL is therefore difficult. For improvement of complex traits, crop growth models can be useful, as such models can

  8. A CFBPN Artificial Neural Network Model for Educational Qualitative Data Analyses: Example of Students' Attitudes Based on Kellerts' Typologies

    Science.gov (United States)

    Yorek, Nurettin; Ugulu, Ilker

    2015-01-01

    In this study, artificial neural networks are suggested as a model that can be "trained" to yield qualitative results out of a huge amount of categorical data. It can be said that this is a new approach applied in educational qualitative data analysis. In this direction, a cascade-forward back-propagation neural network (CFBPN) model was…

  9. Analyses of freshwater stress with a coupled ground and surface water model in the Pra Basin, Ghana

    Science.gov (United States)

    Owusu, George; Owusu, Alex B.; Amankwaa, Ebenezer Forkuo; Eshun, Fatima

    2017-03-01

    The optimal management of water resources requires that the collected hydrogeological, meteorological, and spatial data be simulated and analyzed with appropriate models. In this study, a catchment-scale distributed hydrological modeling approach is applied to simulate water stress for the years 2000 and 2050 in the data-scarce Pra Basin, Ghana. The model is divided into three parts: the first computes surface and groundwater availability as well as shallow and deep groundwater residence times using the POLFLOW model; the second extends the POLFLOW model with a water demand (domestic, industrial and agricultural) model; and the third part involves modeling water stress indices (the ratio of water demand to water availability) for every part of the basin. On water availability, the model estimated the long-term annual Pra river discharge at the outflow point of the basin, Deboase, to be 198 m3/s, against a long-term average measurement of 197 m3/s. Moreover, the relationship between simulated and measured discharge at 9 substations in the basin achieved a Nash-Sutcliffe model efficiency coefficient of 0.98, which indicates that the model estimates agree with the long-term measured discharge. The estimated total water demand increases significantly from 959,049,096 m3/year in 2000 to 3,749,559,019 m3/year in 2050 (p < 0.05). The number of districts experiencing water stress increases significantly (p = 0.00044) from 8 in 2000 to 21 out of 35 by the year 2050. This study will, among other things, help stakeholders in water resources management to identify and manage water stress areas in the basin.
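
    Two quantities named above, the Nash-Sutcliffe efficiency of simulated versus measured discharge and a water stress index defined as demand over availability, can be computed directly. The sketch below uses assumed numbers and a commonly used 0.4 stress threshold, not the Pra Basin data.

    ```python
    # Minimal sketch (assumed data) of NSE and a demand/availability water stress index.
    import numpy as np

    def nash_sutcliffe(simulated, observed):
        observed = np.asarray(observed, dtype=float)
        simulated = np.asarray(simulated, dtype=float)
        return 1.0 - np.sum((observed - simulated) ** 2) / np.sum((observed - observed.mean()) ** 2)

    # Hypothetical monthly discharges at one gauging station (m3/s)
    observed = np.array([180, 210, 250, 300, 220, 190, 160, 150, 170, 200, 230, 260])
    simulated = np.array([175, 205, 260, 290, 225, 195, 155, 148, 175, 205, 228, 265])
    print("NSE:", round(nash_sutcliffe(simulated, observed), 3))

    # Water stress index for one district: demand / availability (both m3/year, assumed)
    demand, availability = 3.0e7, 7.5e7
    stress = demand / availability
    # 0.4 is a commonly used "high stress" threshold, assumed here for illustration
    print("Water stress index:", stress, "-> stressed" if stress > 0.4 else "-> not stressed")
    ```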

  10. Automated transmission-mode scanning electron microscopy (tSEM) for large volume analysis at nanoscale resolution.

    Directory of Open Access Journals (Sweden)

    Masaaki Kuwajima

    Full Text Available Transmission-mode scanning electron microscopy (tSEM) on a field emission SEM platform was developed for efficient and cost-effective imaging of circuit-scale volumes from brain at nanoscale resolution. Image area was maximized while optimizing the resolution and dynamic range necessary for discriminating key subcellular structures, such as small axonal, dendritic and glial processes, synapses, smooth endoplasmic reticulum, vesicles, microtubules, polyribosomes, and endosomes which are critical for neuronal function. Individual image fields from the tSEM system were up to 4,295 µm² (65.54 µm per side) at 2 nm pixel size, contrasting with image fields from a modern transmission electron microscope (TEM) system, which were only 66.59 µm² (8.160 µm per side) at the same pixel size. The tSEM produced outstanding images and had reduced distortion and drift relative to TEM. Automated stage and scan control in tSEM easily provided unattended serial section imaging and montaging. Lens and scan properties on both TEM and SEM platforms revealed no significant nonlinear distortions within a central field of ∼100 µm² and produced near-perfect image registration across serial sections using the computational elastic alignment tool in Fiji/TrakEM2 software, and reliable geometric measurements from RECONSTRUCT™ or Fiji/TrakEM2 software. Axial resolution limits the analysis of small structures contained within a section (∼45 nm). Since this new tSEM is non-destructive, objects within a section can be explored at finer axial resolution in TEM tomography with current methods. Future development of tSEM tomography promises thinner axial resolution producing nearly isotropic voxels and should provide within-section analyses of structures without changing platforms. Brain was the test system given our interest in synaptic connectivity and plasticity; however, the new tSEM system is readily applicable to other biological systems.

  11. 3-D Analysis of Graphite Nodules in Ductile Cast Iron Using FIB-SEM

    DEFF Research Database (Denmark)

    D'Angelo, Luca; Jespersen, Freja N.; MacDonald, A. Nicole

    Ductile cast iron samples were analysed in a Focused Ion Beam Scanning Electron Microscope, FIB-SEM. The focussed ion beam was used to carefully remove layers of the graphite nodules to reveal internal structures in the nodules. The sample preparation and milling procedure for sectioning graphite...... nodules is described and effects of preparation methods discussed. It was found that nodules contain different types of inclusions. These were analysed for chemical composition and crystallography using energy dispersive spectrometry (EDS) and electron back-scatter patterns (EBSP). Location of inclusions...

  12. A Review On Accuracy and Uncertainty of Spatial Data and Analyses with special reference to Urban and Hydrological Modelling

    Science.gov (United States)

    Devendran, A. A.; Lakshmanan, G.

    2014-11-01

    Data quality for GIS processing and analysis is becoming an increased concern due to the accelerated application of GIS technology in problem-solving and decision-making roles. Uncertainty in the geographic representation of the real world arises because these representations are incomplete. Identifying the sources of these uncertainties and the ways in which they operate in GIS-based representations becomes crucial in any spatial data representation and in geospatial analysis applied to any field. This paper reviews articles on the various components of spatial data quality and the uncertainties inherent in them, with special focus on two fields of application, namely Urban Simulation and Hydrological Modelling. Urban growth is a complicated process involving the spatio-temporal changes of all socio-economic and physical components at different scales. The Cellular Automata (CA) model is one such simulation model; it randomly selects potential cells for urbanisation, and transition rules evaluate the properties of each cell and its neighbours. Uncertainty arising from CA modelling is assessed mainly using sensitivity analysis, including the Monte Carlo simulation method. Likewise, the importance of hydrological uncertainty analysis has been emphasized in recent years and there is an urgent need to incorporate uncertainty estimation into water resources assessment procedures. The Soil and Water Assessment Tool (SWAT) is a continuous-time watershed model to evaluate various impacts of land use management and climate on hydrology and water quality. Hydrological model uncertainties in the SWAT model are dealt with primarily by the Generalized Likelihood Uncertainty Estimation (GLUE) method.
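
    A minimal sketch of the Monte Carlo style of sensitivity analysis mentioned above: sample uncertain parameters, run a model, and rank influence by Spearman correlation. The toy response function and parameter ranges are assumptions; it is not the CA urban-growth or SWAT model.

    ```python
    # Monte Carlo sensitivity sketch: sample inputs, evaluate a stand-in model,
    # rank parameter influence on the output with Spearman rank correlations.
    import numpy as np
    from scipy.stats import spearmanr

    rng = np.random.default_rng(42)
    n_runs = 2000
    # Uncertain inputs (assumed ranges)
    neighbourhood_weight = rng.uniform(0.0, 1.0, n_runs)
    slope_coefficient = rng.uniform(0.0, 2.0, n_runs)
    random_perturbation = rng.normal(0.0, 0.3, n_runs)

    def toy_model(w, s, e):
        # Stand-in response surface for an urban-growth / hydrological output
        return 10 * w + 3 * s ** 2 + e

    output = toy_model(neighbourhood_weight, slope_coefficient, random_perturbation)

    for name, values in [("neighbourhood_weight", neighbourhood_weight),
                         ("slope_coefficient", slope_coefficient),
                         ("random_perturbation", random_perturbation)]:
        rho, _ = spearmanr(values, output)
        print(f"{name:22s} Spearman rho = {rho:+.2f}")
    print("Output 5th-95th percentile:", np.percentile(output, [5, 95]).round(2))
    ```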

  13. Analysing Possible Applications for Available Mathematical Models of Tracked Vehicle Movement Over the Rough Terrain to Examine Tracked Chain Dynamic Processes

    Directory of Open Access Journals (Sweden)

    M. E. Lupyan

    2014-01-01

    Full Text Available The article provides a survey of methods for studying tracked vehicle movement over unpaved ground and obstacles using various software systems. The relevant issue is optimizing the chassis elements of a tracked vehicle at the design stage. The challenges engineers face when using different methods to study tracked vehicle elements are outlined, and the advantages of using simulation to study the state of the various components of the loaded chassis are described. In addition, another important and relevant issue is raised, namely modeling vehicle movement in real time. Different methods of modeling the interaction between a tracked vehicle chassis and the underlying subgrade, used both in domestic and in foreign practice, are analysed. The analytical assumptions applied in creating these models and their basic elements are described. The ways of specifying the interaction between the track and the road wheels, the crawler belt itself, and the interaction between its elements are also analysed in detail. Special attention is paid to the various ways of specifying the subgrade, both in planar models and in models that allow all chassis elements of a tracked vehicle to be studied as a whole. In addition to the classical simulation of tracked vehicle movement, used to analyse the ride qualities of a tracked vehicle and the loaded state of various chassis elements, a model for simulating vehicle movement in real time is offered. The article presents the advantages and disadvantages of different movement models for the engineering analysis of tracked vehicle elements. The task of developing a simulation model of tracked vehicle movement is set, and requirements for such a model when used in the engineering analysis of chassis elements are defined. The lack of a single technique for the engineering analysis of a tracked vehicle chassis is noted.

  14. Combining hydraulic model, hydrogeomorphological observations and chemical analyses of surface waters to improve knowledge on karst flash floods genesis

    Directory of Open Access Journals (Sweden)

    F. Raynaud

    2015-06-01

    Full Text Available During a flood event over a karst watershed, the connections between surface and ground waters are complex. The karst may attenuate surface floods by absorbing water, or it may contribute to the surface flood both directly, through karst waters entering the rivers (perennial and overflowing springs), and by diffuse resurgence along the hillslopes. While it is possible to monitor each known outlet of a karst system, the diffuse contribution remains difficult to assess. Furthermore, all these connections vary over time according to several factors, such as the water content of the soil and subsurface, the rainfall characteristics and the runoff pathways. The contribution of each compartment is therefore generally difficult to assess, and flood dynamics are not fully understood. To address these difficulties, we analysed surface waters during six recent flood events in the Lirou watershed (a karst tributary of the Lez, in the south of France). Because of the specific chemical signature of karst waters, chemical analyses can supply information about water pathways and flood dynamics. We then used the dilution law to combine chemical results, flow data and field observations and so assess the dynamics of the karst component of the flood. Finally, we discuss the surface or karst origin of the waters responsible for the rise in the apparent runoff coefficient during karst flash floods.
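
    The dilution-law reasoning can be illustrated with a two-end-member mixing calculation: given tracer concentrations of karst water, surface runoff and the mixed river water, a mass balance yields the karst fraction of the discharge. All numbers below are assumed for illustration.

    ```python
    # Two-end-member mixing sketch: Q_total*C_mixed = Q_karst*C_karst + Q_surface*C_surface,
    # with Q_surface = Q_total - Q_karst, gives the karst fraction of total flow.
    def karst_fraction(c_mixed, c_karst, c_surface):
        """Fraction of total flow contributed by karst water (0..1)."""
        return (c_mixed - c_surface) / (c_karst - c_surface)

    # Hypothetical tracer (e.g. magnesium, mg/L) concentrations during a flood peak
    c_karst, c_surface, c_mixed = 12.0, 2.0, 5.5
    f = karst_fraction(c_mixed, c_karst, c_surface)

    total_discharge = 40.0  # m3/s at the gauging station (assumed)
    print(f"Karst fraction: {f:.2f}")
    print(f"Karst discharge: {f * total_discharge:.1f} m3/s of {total_discharge} m3/s")
    ```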

  15. Role of scanning electron microscope (SEM) in metal failure analysis

    International Nuclear Information System (INIS)

    Shaiful Rizam Shamsudin; Hafizal Yazid; Mohd Harun; Siti Selina Abd Hamid; Nadira Kamarudin; Zaiton Selamat; Mohd Shariff Sattar; Muhamad Jalil

    2005-01-01

    The scanning electron microscope (SEM) is a scientific instrument that uses a beam of highly energetic electrons to examine the surface and phase distribution of specimens at the micro scale through live imaging with secondary electron (SE) and back-scattered electron (BSE) images. One of the main activities of the SEM Laboratory at MINT is failure analysis of metal parts and components. SEM is an excellent tool for determining the root cause of metal failures such as ductile or brittle fracture, stress corrosion, fatigue and other types of failure. Most of the customers who request failure analysis are local petrochemical plants, manufacturers of automotive components, pipeline maintenance personnel and engineers involved in the development of metal parts and components. This paper discusses some of the technical concepts in failure analysis associated with SEM. (Author)

  16. 3D reconstruction of SEM images by use of optical photogrammetry software.

    Science.gov (United States)

    Eulitz, Mona; Reiss, Gebhard

    2015-08-01

    Reconstruction of the three-dimensional (3D) surface of an object to be examined is widely used for structure analysis in science, and many biological questions require information about their true 3D structure. For Scanning Electron Microscopy (SEM) there has been no efficient non-destructive solution for reconstruction of the surface morphology to date. The well-known method of recording stereo pair images generates a 3D stereoscopic reconstruction of a section, but not of the complete sample surface. We present a simple and non-destructive method of 3D surface reconstruction from SEM samples based on the principles of optical close range photogrammetry. In optical close range photogrammetry a series of overlapping photos is used to generate a 3D model of the surface of an object. We adapted this method to the special SEM requirements. Instead of moving a detector around the object, the object itself was rotated. A series of overlapping photos was stitched and converted into a 3D model using the software commonly used for optical photogrammetry. A rabbit kidney glomerulus was used to demonstrate the workflow of this adaptation. The reconstruction produced a realistic and high-resolution 3D mesh model of the glomerular surface. The study showed that SEM micrographs are suitable for 3D reconstruction by optical photogrammetry. This new approach is a simple and useful method of 3D surface reconstruction and suitable for various applications in research and teaching. Copyright © 2015 Elsevier Inc. All rights reserved.

  17. Mechanical analyses on the digital behaviour of the Tokay gecko (Gekko gecko) based on a multi-level directional adhesion model

    OpenAIRE

    Wu, Xuan; Wang, Xiaojie; Mei, Tao; Sun, Shaoming

    2015-01-01

    This paper proposes a multi-level hierarchical model for the Tokay gecko (Gekko gecko) adhesive system and analyses the digital behaviour of G. gecko at the macro/meso-level scale. The model describes the structures of G. gecko's adhesive system from the nano-level spatulae to the sub-millimetre-level lamella. The G. gecko seta is modelled as an inextensible fibril based on Euler's elastica theory. Considering the side contact of the spatular pads of the seta on the flat and rigid subst...

  18. Digital Destinations in the Tourist Sector: A Path Model for the Impact of e-Services on Tourist Expenditures in Amsterdam

    NARCIS (Netherlands)

    Neuts, B.; Romao, J.; Nijkamp, P.; van Leeuwen, E.S.

    2013-01-01

    Innovations in information and communication technologies (ICT) in recent decades have had profound implications for tourism services, promotion, and distribution. We apply a Structural Equation Model (SEM) to analyse the relationships between the characteristics of tourists visiting Amsterdam, the
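
    As a deliberately simplified stand-in for the SEM idea (tourist characteristics acting on expenditures partly through e-service use), the sketch below estimates a small path diagram with two sequential OLS regressions rather than a full structural equation model; all variable names and data are assumed.

    ```python
    # Simplified path-analysis sketch: characteristics -> e-service use -> expenditure,
    # estimated as two OLS regressions (not a full SEM). Data are synthetic.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(7)
    n = 300
    age = rng.uniform(18, 70, n)
    income = rng.normal(3000, 800, n)
    e_service_use = 0.02 * income - 0.05 * age + rng.normal(0, 10, n)   # mediator
    expenditure = 0.5 * e_service_use + 0.03 * income + rng.normal(0, 15, n)

    # Path a: characteristics -> e-service use
    Xa = sm.add_constant(np.column_stack([age, income]))
    path_a = sm.OLS(e_service_use, Xa).fit()

    # Path b: e-service use (plus characteristics) -> expenditure
    Xb = sm.add_constant(np.column_stack([e_service_use, age, income]))
    path_b = sm.OLS(expenditure, Xb).fit()

    print("a (income -> e-services):", round(path_a.params[2], 3))
    print("b (e-services -> spend) :", round(path_b.params[1], 3))
    print("indirect effect a*b     :", round(path_a.params[2] * path_b.params[1], 3))
    ```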

  19. Analysing the accuracy of pavement performance models in the short and long terms: GMDH and ANFIS methods

    NARCIS (Netherlands)

    Ziari, H.; Sobhani, J.; Ayoubinejad, J.; Hartmann, Timo

    2016-01-01

    The accuracy of pavement performance prediction is a critical part of pavement management and directly influences maintenance and rehabilitation strategies. Many models with various specifications have been proposed by researchers and used by agencies. This study presents nine variables affecting

  20. Quantitative voltage contrast method for electron irradiated insulators in SEM

    Energy Technology Data Exchange (ETDEWEB)

    Belhaj, M [UR MMA INSAT Centre Urbain Nord, BP 676-1080, Tunis (Tunisia); Jbara, O [LASSI/GRESPI, Faculte des Sciences, BP 1039, 51687 Reims Cedex 2 (France); Fakhfakh, S [LaMaCop, Faculte des sciences de SFAX, Route Soukra Km 3, BP 802, CP 3018 Sfax (Tunisia)], E-mail: mohamed.belhaj@free.fr

    2008-09-07

    A surface potential mapping method for electron irradiated insulators in the scanning electron microscope (SEM) is proposed. This method, based on the use of a highly compact electrostatic toroidal spectrometer specially adapted to SEM applications, is able to monitor the spatial variation of surface potentials of strongly negatively charged materials. The capabilities of this method are tested on a made-up heterogeneous sample. First results prove that the method is particularly appropriate for the reconstitution of the surface potential distribution.

  1. Quantitative voltage contrast method for electron irradiated insulators in SEM

    International Nuclear Information System (INIS)

    Belhaj, M; Jbara, O; Fakhfakh, S

    2008-01-01

    A surface potential mapping method for electron irradiated insulators in the scanning electron microscope (SEM) is proposed. This method, based on the use of a highly compact electrostatic toroidal spectrometer specially adapted to SEM applications, is able to monitor the spatial variation of surface potentials of strongly negatively charged materials. The capabilities of this method are tested on a made-up heterogeneous sample. First results prove that the method is particularly appropriate for the reconstitution of the surface potential distribution

  2. Landscaping analyses of the ROC predictions of discrete-slots and signal-detection models of visual working memory.

    Science.gov (United States)

    Donkin, Chris; Tran, Sophia Chi; Nosofsky, Robert

    2014-10-01

    A fundamental issue concerning visual working memory is whether its capacity limits are better characterized in terms of a limited number of discrete slots (DSs) or a limited amount of a shared continuous resource. Rouder et al. (2008) found that a mixed-attention, fixed-capacity, DS model provided the best explanation of behavior in a change detection task, outperforming alternative continuous signal detection theory (SDT) models. Here, we extend their analysis in two ways: first, with experiments aimed at better distinguishing between the predictions of the DS and SDT models, and second, using a model-based analysis technique called landscaping, in which the functional-form complexity of the models is taken into account. We find that the balance of evidence supports a DS account of behavior in change detection tasks but that the SDT model is best when the visual displays always consist of the same number of items. In our General Discussion section, we outline, but ultimately reject, a number of potential explanations for the observed pattern of results. We finish by describing future research that is needed to pinpoint the basis for this observed pattern of results.

  3. Assessment of the primary rotational stability of uncemented hip stems using an analytical model: comparison with finite element analyses.

    Science.gov (United States)

    Zeman, Maria E; Sauwen, Nicolas; Labey, Luc; Mulier, Michiel; Van der Perre, Georges; Jaecques, Siegfried V N

    2008-09-25

    Sufficient primary stability is a prerequisite for the clinical success of cementless implants. Therefore, it is important to have an estimation of the primary stability that can be achieved with new stem designs in a pre-clinical trial. Fast assessment of the primary stability is also useful in the preoperative planning of total hip replacements, and to an even larger extent in intraoperatively custom-made prosthesis systems, which result in a wide variety of stem geometries. An analytical model is proposed to numerically predict the relative primary stability of cementless hip stems. This analytical approach is based upon the principle of virtual work and a straightforward mechanical model. For five custom-made implant designs, the resistance against axial rotation was assessed through the analytical model as well as through finite element modelling (FEM). The analytical approach can be considered as a first attempt to theoretically evaluate the primary stability of hip stems without using FEM, which makes it fast and inexpensive compared to other methods. A reasonable agreement was found in the stability ranking of the stems obtained with both methods. However, due to the simplifying assumptions underlying the analytical model it predicts very rigid stability behaviour: estimated stem rotation was two to three orders of magnitude smaller, compared with the FEM results. Based on the results of this study, the analytical model might be useful as a comparative tool for the assessment of the primary stability of cementless hip stems.

  4. Assessment of the primary rotational stability of uncemented hip stems using an analytical model: Comparison with finite element analyses

    Directory of Open Access Journals (Sweden)

    Van der Perre Georges

    2008-09-01

    Full Text Available Abstract Background Sufficient primary stability is a prerequisite for the clinical success of cementless implants. Therefore, it is important to have an estimation of the primary stability that can be achieved with new stem designs in a pre-clinical trial. Fast assessment of the primary stability is also useful in the preoperative planning of total hip replacements, and to an even larger extent in intraoperatively custom-made prosthesis systems, which result in a wide variety of stem geometries. Methods An analytical model is proposed to numerically predict the relative primary stability of cementless hip stems. This analytical approach is based upon the principle of virtual work and a straightforward mechanical model. For five custom-made implant designs, the resistance against axial rotation was assessed through the analytical model as well as through finite element modelling (FEM). Results The analytical approach can be considered as a first attempt to theoretically evaluate the primary stability of hip stems without using FEM, which makes it fast and inexpensive compared to other methods. A reasonable agreement was found in the stability ranking of the stems obtained with both methods. However, due to the simplifying assumptions underlying the analytical model it predicts very rigid stability behaviour: estimated stem rotation was two to three orders of magnitude smaller, compared with the FEM results. Conclusion Based on the results of this study, the analytical model might be useful as a comparative tool for the assessment of the primary stability of cementless hip stems.

  5. A Growth Curve Model with Fractional Polynomials for Analysing Incomplete Time-Course Data in Microarray Gene Expression Studies

    Science.gov (United States)

    Tan, Qihua; Thomassen, Mads; Hjelmborg, Jacob v. B.; Clemmensen, Anders; Andersen, Klaus Ejner; Petersen, Thomas K.; McGue, Matthew; Christensen, Kaare; Kruse, Torben A.

    2011-01-01

    Identifying the various gene expression response patterns is a challenging issue in expression microarray time-course experiments. Due to heterogeneity in the regulatory reaction among thousands of genes tested, it is impossible to manually characterize a parametric form for each of the time-course pattern in a gene by gene manner. We introduce a growth curve model with fractional polynomials to automatically capture the various time-dependent expression patterns and meanwhile efficiently handle missing values due to incomplete observations. For each gene, our procedure compares the performances among fractional polynomial models with power terms from a set of fixed values that offer a wide range of curve shapes and suggests a best fitting model. After a limited simulation study, the model has been applied to our human in vivo irritated epidermis data with missing observations to investigate time-dependent transcriptional responses to a chemical irritant. Our method was able to identify the various nonlinear time-course expression trajectories. The integration of growth curves with fractional polynomials provides a flexible way to model different time-course patterns together with model selection and significant gene identification strategies that can be applied in microarray-based time-course gene expression experiments with missing observations. PMID:21966290
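
    A hedged sketch of the per-gene model-selection step described above: fit first-degree fractional polynomial curves y = b0 + b1*t^p over a fixed set of powers (p = 0 read as log t), drop missing observations, and keep the best power by residual sum of squares. The time grid and expression values below are synthetic, not the epidermis data.

    ```python
    # Fractional-polynomial growth-curve selection for one gene, tolerating missing values.
    import numpy as np

    powers = [-2, -1, -0.5, 0, 0.5, 1, 2, 3]   # conventional fractional-polynomial set

    def fp_basis(t, p):
        return np.log(t) if p == 0 else t ** p

    def best_fp_fit(t, y):
        mask = ~np.isnan(y)                     # handle incomplete observations
        t, y = t[mask], y[mask]
        best = None
        for p in powers:
            X = np.column_stack([np.ones_like(t), fp_basis(t, p)])
            coef, rss, *_ = np.linalg.lstsq(X, y, rcond=None)
            rss = float(rss[0]) if rss.size else float(np.sum((y - X @ coef) ** 2))
            if best is None or rss < best[1]:
                best = (p, rss, coef)
        return best

    time = np.array([1.0, 2, 4, 8, 16, 24, 48, 72])   # hours (assumed design)
    expression = 2 + 1.5 * np.log(time) + np.random.default_rng(3).normal(0, 0.2, time.size)
    expression[3] = np.nan                             # a missing observation
    p, rss, coef = best_fp_fit(time, expression)
    print(f"Best power p = {p}, RSS = {rss:.3f}, coefficients = {coef.round(2)}")
    ```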

  6. Modeling the potential risk factors of bovine viral diarrhea prevalence in Egypt using univariable and multivariable logistic regression analyses

    Directory of Open Access Journals (Sweden)

    Abdelfattah M. Selim

    2018-03-01

    Full Text Available Aim: The present cross-sectional study was conducted to determine the seroprevalence and potential risk factors associated with Bovine viral diarrhea virus (BVDV) disease in cattle and buffaloes in Egypt, to model the potential risk factors associated with the disease using logistic regression (LR) models, and to fit the best predictive model for the current data. Materials and Methods: A total of 740 blood samples were collected between November 2012 and March 2013 from animals aged between 6 months and 3 years. The potential risk factors studied were species, age, sex, and herd location. All serum samples were examined with an indirect ELISA test for antibody detection. Data were analyzed with different statistical approaches such as the Chi-square test, odds ratios (OR), and univariable and multivariable LR models. Results: Results revealed a non-significant association between BVDV seropositivity and all risk factors except animal species. Seroprevalence percentages were 40% and 23% for cattle and buffaloes, respectively. ORs for all categories were close to one, with the highest OR for cattle relative to buffaloes, which was 2.237. Likelihood ratio tests showed a significant drop in the -2LL from univariable to multivariable LR models. Conclusion: There was evidence of high seroprevalence of BVDV among cattle as compared with buffaloes, with the possibility of infection in different age groups of animals. In addition, the multivariable LR model was shown to provide more information for association and prediction purposes than univariable LR models and Chi-square tests when more than one predictor is available.
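
    The univariable-versus-multivariable comparison by likelihood ratio test can be sketched as follows; the synthetic data, the coding of species, and the effect sizes are assumptions, not the Egyptian survey data.

    ```python
    # Univariable vs. multivariable logistic regression compared by a likelihood-ratio test.
    import numpy as np
    import statsmodels.api as sm
    from scipy.stats import chi2

    rng = np.random.default_rng(5)
    n = 740
    species = rng.integers(0, 2, n)        # 1 = cattle, 0 = buffalo (assumed coding)
    age_group = rng.integers(0, 3, n)
    sex = rng.integers(0, 2, n)
    logit = -1.2 + 0.8 * species + 0.05 * age_group
    seropositive = (rng.uniform(size=n) < 1 / (1 + np.exp(-logit))).astype(int)

    X_uni = sm.add_constant(species.astype(float))
    X_multi = sm.add_constant(np.column_stack([species, age_group, sex]).astype(float))

    fit_uni = sm.Logit(seropositive, X_uni).fit(disp=False)
    fit_multi = sm.Logit(seropositive, X_multi).fit(disp=False)

    print("OR (cattle vs buffalo):", round(float(np.exp(fit_multi.params[1])), 3))
    lr_stat = 2 * (fit_multi.llf - fit_uni.llf)           # drop in -2LL
    df = X_multi.shape[1] - X_uni.shape[1]
    print("LR statistic:", round(lr_stat, 2), " p =", round(chi2.sf(lr_stat, df), 4))
    ```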

  7. Systems genetics of obesity in an F2 pig model by genome-wide association, genetic network and pathway analyses

    DEFF Research Database (Denmark)

    Kogelman, Lisette; Pant, Sameer Dinkar; Fredholm, Merete

    2014-01-01

    Obesity is a complex condition with world-wide exponentially rising prevalence rates, linked with severe diseases like Type 2 Diabetes. Economic and welfare consequences have led to a raised interest in a better understanding of the biological and genetic background. To date, whole genome...... of obesity-related phenotypes and genotyped using the 60K SNP chip. Firstly, Genome Wide Association (GWA) analysis was performed on the Obesity Index to locate candidate genomic regions that were further validated using combined Linkage Disequilibrium Linkage Analysis and investigated by evaluation...... of haplotype blocks. We built Weighted Interaction SNP Hub (WISH) and differentially wired (DW) networks using genotypic correlations amongst obesity-associated SNPs resulting from GWA analysis. GWA results and SNP modules detected by WISH and DW analyses were further investigated by functional enrichment...

  8. Complex patterns of divergence among green-sensitive (RH2a) African cichlid opsins revealed by Clade model analyses

    Directory of Open Access Journals (Sweden)

    Weadick Cameron J

    2012-10-01

    Full Text Available Abstract Background Gene duplications play an important role in the evolution of functional protein diversity. Some models of duplicate gene evolution predict complex forms of paralog divergence; orthologous proteins may diverge as well, further complicating patterns of divergence among and within gene families. Consequently, studying the link between protein sequence evolution and duplication requires the use of flexible substitution models that can accommodate multiple shifts in selection across a phylogeny. Here, we employed a variety of codon substitution models, primarily Clade models, to explore how selective constraint evolved following the duplication of a green-sensitive (RH2a) visual pigment protein (opsin) in African cichlids. Past studies have linked opsin divergence to ecological and sexual divergence within the African cichlid adaptive radiation. Furthermore, biochemical and regulatory differences between the RH2aα and RH2aβ paralogs have been documented. It thus seems likely that selection varies in complex ways throughout this gene family. Results Clade model analysis of African cichlid RH2a opsins revealed a large increase in the nonsynonymous-to-synonymous substitution rate ratio (ω) following the duplication, as well as an even larger increase, one consistent with positive selection, for Lake Tanganyikan cichlid RH2aβ opsins. Analysis using the popular Branch-site models, by contrast, revealed no such alteration of constraint. Several amino acid sites known to influence spectral and non-spectral aspects of opsin biochemistry were found to be evolving divergently, suggesting that orthologous RH2a opsins may vary in terms of spectral sensitivity and response kinetics. Divergence appears to be occurring despite intronic gene conversion among the tandemly-arranged duplicates. Conclusions Our findings indicate that variation in selective constraint is associated with both gene duplication and divergence among orthologs in African

  9. Review of models used in economic analyses of new oral treatments for type 2 diabetes mellitus.

    Science.gov (United States)

    Asche, Carl V; Hippler, Stephen E; Eurich, Dean T

    2014-01-01

    Economic models are considered to be important, as they help evaluate the long-term impact of diabetes treatment. To date, it appears that no article has reviewed and critically appraised the cost-effectiveness models developed to evaluate new oral treatments [glucagon-like peptide-1 (GLP-1) receptor agonists and dipeptidyl peptidase-4 (DPP-4) inhibitors] for type 2 diabetes mellitus (T2DM). This study aimed to provide insight into the utilization of cost-effectiveness modelling methods. The focus of our study was aimed at the applicability of these models, particularly around the major assumptions related to the clinical parameters (glycated haemoglobin [A1c], systolic blood pressure [SBP], lipids and weight) used in the models, and subsequent clinical outcomes. MEDLINE and EMBASE were searched from 1 January 2004 to 14 February 2013 in order to identify published cost-effectiveness evaluations for the treatment of T2DM by new oral treatments (GLP-1 receptor agonists and DPP-4 inhibitors). Once identified, the articles were reviewed and grouped together according to the type of model. The following data were captured for each study: comparators; country; evaluation and key cost drivers; time horizon; perspective; discounting rates; currency/year; cost-effectiveness threshold, sensitivity analysis; and cost-effectiveness analysis curves. A total of 15 studies were identified in our review. Nearly all of the models utilized a health care payer perspective and provided a lifetime horizon. The CORE Diabetes Model, UK Prospective Diabetes Study (UKPDS) Outcomes Model, Cardiff Diabetes Model, Centers for Disease Control and Prevention (CDC) Diabetes Cost-Effectiveness Group Model and Diabetes Mellitus Model were cited. With the exception of two studies, all of the studies made significant assumptions surrounding the impact of GLP-1 receptor agonists or DPP-4 inhibitors on clinical parameters and subsequent short- and long-term outcomes. Moreover, often the differences

  10. Assessment of engineered surfaces roughness by high-resolution 3D SEM photogrammetry

    Energy Technology Data Exchange (ETDEWEB)

    Gontard, L.C., E-mail: lionelcg@gmail.com [Departamento de Ciencia de los Materiales e Ingeniería Metalúrgica y Química Inorgánica, Universidad de Cádiz, Puerto Real 11510 (Spain); López-Castro, J.D.; González-Rovira, L. [Departamento de Ciencia de los Materiales e Ingeniería Metalúrgica y Química Inorgánica, Escuela Superior de Ingeniería, Laboratorio de Corrosión, Universidad de Cádiz, Puerto Real 11519 (Spain); Vázquez-Martínez, J.M. [Departamento de Ingeniería Mecánica y Diseño Industrial, Escuela Superior de Ingeniería, Universidad de Cádiz, Puerto Real 11519 (Spain); Varela-Feria, F.M. [Servicio de Microscopía Centro de Investigación, Tecnología e Innovación (CITIUS), Universidad de Sevilla, Av. Reina Mercedes 4b, 41012 Sevilla (Spain); Marcos, M. [Departamento de Ingeniería Mecánica y Diseño Industrial, Escuela Superior de Ingeniería, Universidad de Cádiz, Puerto Real 11519 (Spain); and others

    2017-06-15

    Highlights: • We describe a method to acquire a high-angle tilt series of SEM images that is symmetrical with respect to the zero tilt of the sample stage. The method can be applied in any SEM microscope. • Using the method, high-resolution 3D SEM photogrammetry can be applied to planar surfaces. • 3D models of three surfaces patterned with grooves are reconstructed with high resolution using multi-view freeware photogrammetry software as described in LC Gontard et al. Ultramicroscopy, 2016. • Roughness parameters are measured from the 3D models. • High-resolution 3D SEM photogrammetry is compared with two conventional methods used for roughness characterization: stereo-photogrammetry and contact profilometry. • It provides three-dimensional information with a resolution that is out of reach for any other metrological technique. - Abstract: We describe a methodology to obtain three-dimensional models of engineered surfaces using scanning electron microscopy and multi-view photogrammetry (3DSEM). For the reconstruction of the 3D models of the surfaces we used freeware available in the cloud. The method was applied to study the surface roughness of metallic samples patterned with parallel grooves by means of a laser. The results are compared with measurements obtained using stylus profilometry (PR) and SEM stereo-photogrammetry (SP). The application of 3DSEM is more time-demanding than PR or SP, but it provides a more accurate representation of the surfaces. The results obtained with the three techniques are compared by investigating the influence of the sampling step on roughness parameters.
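
    Deriving areal roughness parameters from a reconstructed surface reduces to simple statistics of the height map; the sketch below uses a synthetic grooved surface as a stand-in for heights sampled from a 3DSEM mesh and applies the standard Sa/Sq/Sz definitions.

    ```python
    # Areal roughness parameters from a (synthetic) height map of a grooved surface.
    import numpy as np

    # Synthetic height map (µm), 256 x 256 samples over 200 µm x 200 µm (assumed)
    x = np.linspace(0, 200, 256)                    # µm
    X, Y = np.meshgrid(x, x)
    heights = 1.5 * np.sin(2 * np.pi * X / 25)      # parallel grooves, 25 µm pitch

    z = heights - heights.mean()                    # reference to the mean plane
    Sa = np.mean(np.abs(z))                         # arithmetic mean height
    Sq = np.sqrt(np.mean(z ** 2))                   # root-mean-square height
    Sz = z.max() - z.min()                          # maximum height of the surface
    print(f"Sa = {Sa:.3f} µm, Sq = {Sq:.3f} µm, Sz = {Sz:.3f} µm")
    ```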

  11. Analysis and modelling of the European fuel market; Analyse et modelisation des prix des produits petroliers combustibles en europe

    Energy Technology Data Exchange (ETDEWEB)

    Simon, V

    1999-04-01

    The research focuses on European fuel market prices, referring to the Rotterdam and Genoa spot markets as well as the German, Italian and French domestic markets. The thesis also tries to explain the impact of the London IPE futures market on spot prices. Mainstream research has demonstrated that co-integration seems to be the best theoretical approach for investigating long-run equilibrium relations. Particular attention is devoted to structural change in the econometric modelling of these equilibria. A deep analysis of the main European petroleum product markets permits a better model specification for each of these markets. Further, we test whether any evidence of relations between spot and domestic prices can be confirmed. Finally, alternative scenarios are depicted to forecast prices in the petroleum product markets. The objective is to observe the model's reaction to changes in crude oil prices. (author)
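
    The co-integration approach referred to above can be sketched with an Engle-Granger test on a spot price and a domestic price; the series below are synthetic random walks, not the Rotterdam, Genoa or domestic market data.

    ```python
    # Engle-Granger co-integration test on two synthetic, related price series.
    import numpy as np
    from statsmodels.tsa.stattools import coint

    rng = np.random.default_rng(11)
    n = 300
    spot = np.cumsum(rng.normal(0, 1, n)) + 100            # random-walk spot price
    domestic = 0.9 * spot + 15 + rng.normal(0, 1, n)       # tracks spot with a margin

    t_stat, p_value, crit = coint(spot, domestic)
    print(f"Engle-Granger t-statistic = {t_stat:.2f}, p-value = {p_value:.3f}")
    print("Co-integrated at 5%" if p_value < 0.05 else "No evidence of co-integration")
    ```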

  12. Qualitative and quantitative analyses of the echolocation strategies of bats on the basis of mathematical modelling and laboratory experiments.

    Science.gov (United States)

    Aihara, Ikkyu; Fujioka, Emyo; Hiryu, Shizuko

    2013-01-01

    Prey pursuit by an echolocating bat was studied theoretically and experimentally. First, a mathematical model was proposed to describe the flight dynamics of a bat and a single prey. In this model, the flight angle of the bat was affected by [Formula: see text] angles related to the flight path of the single moving prey, that is, the angle from the bat to the prey and the flight angle of the prey. Numerical simulation showed that the success rate of prey capture was high, when the bat mainly used the angle to the prey to minimize the distance to the prey, and also used the flight angle of the prey to minimize the difference in flight directions of itself and the prey. Second, parameters in the model were estimated according to experimental data obtained from video recordings taken while a Japanese horseshoe bat (Rhinolophus ferrumequinum nippon) pursued a moving moth (Goniocraspidum pryeri) in a flight chamber. One of the estimated parameter values, which represents the ratio in the use of the [Formula: see text] angles, was consistent with the optimal value of the numerical simulation. This agreement between the numerical simulation and parameter estimation suggests that a bat chooses an effective flight path for successful prey capture by using the [Formula: see text] angles. Finally, the mathematical model was extended to include a bat and [Formula: see text] prey. Parameter estimation of the extended model based on laboratory experiments revealed the existence of the bat's dynamical attention towards [Formula: see text] prey, that is, simultaneous pursuit of [Formula: see text] prey and selective pursuit of respective prey. Thus, our mathematical model contributes not only to quantitative analysis of effective foraging, but also to qualitative evaluation of a bat's dynamical flight strategy during multiple prey pursuit.
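
    A simplified, discrete-time rendering of the pursuit rule described above (the bat's heading steered by a weighted combination of the bearing to the prey and the prey's own flight angle) can be simulated directly; the weights, speeds and geometry below are illustrative assumptions, not the paper's fitted parameters.

    ```python
    # Toy pursuit simulation: the bat steers toward a weighted mix of the bearing to the
    # prey and the prey's flight angle; capture is declared inside an assumed radius.
    import numpy as np

    dt, steps = 0.05, 200
    w_bearing, w_prey_angle = 0.8, 0.2            # assumed weighting of the two angles
    bat_pos, bat_angle, bat_speed = np.array([0.0, 0.0]), 0.0, 4.0
    prey_pos, prey_angle, prey_speed = np.array([3.0, 2.0]), np.pi, 2.0

    for step in range(steps):
        prey_angle += 0.3 * np.sin(0.1 * step)     # erratic prey turning (assumed)
        prey_pos = prey_pos + prey_speed * dt * np.array([np.cos(prey_angle), np.sin(prey_angle)])

        bearing = np.arctan2(*(prey_pos - bat_pos)[::-1])          # angle from bat to prey
        target = w_bearing * bearing + w_prey_angle * prey_angle   # combined steering angle
        # steer smoothly toward the target direction
        bat_angle += 0.5 * np.arctan2(np.sin(target - bat_angle), np.cos(target - bat_angle))
        bat_pos = bat_pos + bat_speed * dt * np.array([np.cos(bat_angle), np.sin(bat_angle)])

        if np.linalg.norm(prey_pos - bat_pos) < 0.1:               # capture radius (assumed)
            print(f"Prey captured at step {step}, t = {step * dt:.2f} s")
            break
    else:
        print("Prey not captured within the simulated interval")
    ```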

  13. Qualitative and quantitative analyses of the echolocation strategies of bats on the basis of mathematical modelling and laboratory experiments.

    Directory of Open Access Journals (Sweden)

    Ikkyu Aihara

    Full Text Available Prey pursuit by an echolocating bat was studied theoretically and experimentally. First, a mathematical model was proposed to describe the flight dynamics of a bat and a single prey. In this model, the flight angle of the bat was affected by [Formula: see text] angles related to the flight path of the single moving prey, that is, the angle from the bat to the prey and the flight angle of the prey. Numerical simulation showed that the success rate of prey capture was high, when the bat mainly used the angle to the prey to minimize the distance to the prey, and also used the flight angle of the prey to minimize the difference in flight directions of itself and the prey. Second, parameters in the model were estimated according to experimental data obtained from video recordings taken while a Japanese horseshoe bat (Rhinolophus ferrumequinum nippon) pursued a moving moth (Goniocraspidum pryeri) in a flight chamber. One of the estimated parameter values, which represents the ratio in the use of the [Formula: see text] angles, was consistent with the optimal value of the numerical simulation. This agreement between the numerical simulation and parameter estimation suggests that a bat chooses an effective flight path for successful prey capture by using the [Formula: see text] angles. Finally, the mathematical model was extended to include a bat and [Formula: see text] prey. Parameter estimation of the extended model based on laboratory experiments revealed the existence of the bat's dynamical attention towards [Formula: see text] prey, that is, simultaneous pursuit of [Formula: see text] prey and selective pursuit of respective prey. Thus, our mathematical model contributes not only to quantitative analysis of effective foraging, but also to qualitative evaluation of a bat's dynamical flight strategy during multiple prey pursuit.

  14. Methods and theory in bone modeling drift: comparing spatial analyses of primary bone distributions in the human humerus.

    Science.gov (United States)

    Maggiano, Corey M; Maggiano, Isabel S; Tiesler, Vera G; Chi-Keb, Julio R; Stout, Sam D

    2016-01-01

    This study compares two novel methods quantifying bone shaft tissue distributions, and relates observations on human humeral growth patterns for applications in anthropological and anatomical research. Microstructural variation in compact bone occurs due to developmental and mechanically adaptive circumstances that are 'recorded' by forming bone and are important for interpretations of growth, health, physical activity, adaptation, and identity in the past and present. Those interpretations hinge on a detailed understanding of the modeling process by which bones achieve their diametric shape, diaphyseal curvature, and general position relative to other elements. Bone modeling is a complex aspect of growth, potentially causing the shaft to drift transversely through formation and resorption on opposing cortices. Unfortunately, the specifics of modeling drift are largely unknown for most skeletal elements. Moreover, bone modeling has seen little quantitative methodological development compared with secondary bone processes, such as intracortical remodeling. The techniques proposed here, starburst point-count and 45° cross-polarization hand-drawn histomorphometry, permit the statistical and populational analysis of human primary tissue distributions and provide similar results despite being suitable for different applications. This analysis of a pooled archaeological and modern skeletal sample confirms the importance of extreme asymmetry in bone modeling as a major determinant of microstructural variation in diaphyses. Specifically, humeral drift is posteromedial in the human humerus, accompanied by a significant rotational trend. In general, results encourage the usage of endocortical primary bone distributions as an indicator and summary of bone modeling drift, enabling quantitative analysis by direction and proportion in other elements and populations. © 2015 Anatomical Society.

  15. Mind the gaps: a state-space model for analysing the dynamics of North Sea herring spawning components

    DEFF Research Database (Denmark)

    Payne, Mark

    2010-01-01

    , the sum of the fitted abundance indices across all components proves an excellent proxy for the biomass of the total stock, even though the model utilizes information at the individual-component level. The Orkney–Shetland component appears to have recovered faster from historic depletion events than...... the other components, whereas the Downs component has been the slowest. These differences give rise to changes in stock composition, which are shown to vary widely within a relatively short time. The modelling framework provides a valuable tool for studying and monitoring the dynamics of the individual...
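
    In the same spirit, a minimal local-level state-space model can be fitted to an abundance index with gaps, letting the Kalman filter handle the missing survey values; the series below is synthetic, and the model is a generic sketch rather than the paper's specification.

    ```python
    # Local-level state-space model fitted to a gappy (NaN-containing) abundance index.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(2)
    n = 40
    true_level = np.cumsum(rng.normal(0, 0.1, n)) + 2.0
    index = true_level + rng.normal(0, 0.2, n)
    index[[5, 6, 17, 28]] = np.nan                 # survey years with missing values

    model = sm.tsa.UnobservedComponents(index, level="local level")
    result = model.fit(disp=False)
    smoothed = result.smoothed_state[0]            # estimated index, gaps filled by smoothing
    print("Smoothed level, first 10 years:", np.round(smoothed[:10], 2))
    ```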

  16. FIB-SEM cathodoluminescence tomography: practical and theoretical considerations.

    Science.gov (United States)

    De Winter, D A M; Lebbink, M N; Wiggers De Vries, D F; Post, J A; Drury, M R

    2011-09-01

    Focused ion beam-scanning electron microscope (FIB-SEM) tomography is a powerful application in obtaining three-dimensional (3D) information. The FIB creates a cross section and subsequently removes thin slices. The SEM takes images using secondary or backscattered electrons, or maps every slice using X-rays and/or electron backscatter diffraction patterns. The objective of this study is to assess the possibilities of combining FIB-SEM tomography with cathodoluminescence (CL) imaging. The intensity of CL emission is related to variations in defect or impurity concentrations. A potential problem with FIB-SEM CL tomography is that ion milling may change the defect state of the material and the CL emission. In addition the conventional tilted sample geometry used in FIB-SEM tomography is not compatible with conventional CL detectors. Here we examine the influence of the FIB on CL emission in natural diamond and the feasibility of FIB-SEM CL tomography. A systematic investigation establishes that the ion beam influences CL emission of diamond, with a dependency on both the ion beam and electron beam acceleration voltage. CL emission in natural diamond is enhanced particularly at low ion beam and electron beam voltages. This enhancement of the CL emission can be partly explained by an increase in surface defects induced by ion milling. CL emission enhancement could be used to improve the CL image quality. To conduct FIB-SEM CL tomography, a recently developed novel specimen geometry is adopted to enable sequential ion milling and CL imaging on an untilted sample. We show that CL imaging can be manually combined with FIB-SEM tomography with a modified protocol for 3D microstructure reconstruction. In principle, automated FIB-SEM CL tomography should be feasible, provided that dedicated CL detectors are developed that allow subsequent milling and CL imaging without manual intervention, as the current CL detector needs to be manually retracted before a slice can be milled

  17. Greenhouse gas network design using backward Lagrangian particle dispersion modelling – Part 2: Sensitivity analyses and South African test case

    CSIR Research Space (South Africa)

    Nickless, A

    2014-05-01

    Full Text Available This is the second part of a two-part paper considering network design based on a Lagrangian stochastic particle dispersion model (LPDM), aimed at reducing the uncertainty of the flux estimates achievable for the region of interest by the continuous...

  18. The mental health care model in Brazil: analyses of the funding, governance processes, and mechanisms of assessment

    Directory of Open Access Journals (Sweden)

    Thiago Lavras Trapé

    Full Text Available ABSTRACT OBJECTIVE This study aims to analyze the current status of the mental health care model of the Brazilian Unified Health System, according to its funding, governance processes, and mechanisms of assessment. METHODS We have carried out a documentary analysis of the ordinances, technical reports, conference reports, normative resolutions, and decrees from 2009 to 2014. RESULTS This is a time of consolidation of the psychosocial model, with expansion of the health care network and inversion of the funding for community services with a strong emphasis on the area of crack cocaine and other drugs. Mental health is an underfunded area within the chronically underfunded Brazilian Unified Health System. The governance model constrains the progress of essential services, which creates the need for the incorporation of a process of regionalization of the management. The mechanisms of assessment are not incorporated into the health policy in the bureaucratic field. CONCLUSIONS There is a need to expand the global funding of the area of health, specifically mental health, which has been shown to be a successful policy. The current focus of the policy seems to be archaic in relation to the precepts of the psychosocial model. Mechanisms of assessment need to be expanded.

  19. Occupant-level injury severity analyses for taxis in Hong Kong: A Bayesian space-time logistic model.

    Science.gov (United States)

    Meng, Fanyu; Xu, Pengpeng; Wong, S C; Huang, Helai; Li, Y C

    2017-11-01

    This study aimed to identify the factors affecting the crash-related severity level of injuries in taxis and quantify the associations between these factors and taxi occupant injury severity. Casualties resulting from taxi crashes from 2004 to 2013 in Hong Kong were divided into four categories: taxi drivers, taxi passengers, private car drivers and private car passengers. To avoid any biased interpretation caused by unobserved spatial and temporal effects, a Bayesian hierarchical logistic modeling approach with conditional autoregressive priors was applied, and four different model forms were tested. For taxi drivers and passengers, the model with space-time interaction was proven to most properly address the unobserved heterogeneity effects. The results indicated that time of week, number of vehicles involved, weather, point of impact and driver age were closely associated with taxi drivers' injury severity level in a crash. For taxi passengers' injury severity an additional factor, taxi service area, was influential. To investigate the differences between taxis and other traffic, similar models were established for private car drivers and passengers. The results revealed that although location in the network and driver gender significantly influenced private car drivers' injury severity, they did not influence taxi drivers' injury severity. Compared with taxi passengers, the injury severity of private car passengers was more sensitive to average speed and whether seat belts were worn. Older drivers, urban taxis and fatigued driving were identified as factors that increased taxi occupant injury severity in Hong Kong. Copyright © 2017 Elsevier Ltd. All rights reserved.

  20. Radiation transport analyses for IFMIF design by the Attila software using a Monte-Carlo source model

    International Nuclear Information System (INIS)

    Arter, W.; Loughlin, M.J.

    2009-01-01

    Accurate calculation of the neutron transport through the shielding of the IFMIF test cell, defined by CAD, is a difficult task for several reasons. The ability of the powerful deterministic radiation transport code Attila to do this rapidly and reliably has been studied. Three models of increasing geometrical complexity were produced from the CAD using the CADfix software. A fourth model was produced to represent transport within the cell. The work also involved the conversion of the Vitenea-IEF database for high-energy neutrons into a format usable by Attila, and the conversion of a particle source specified in MCNP wssa format to a form usable by Attila. The final model encompassed the entire test cell environment, with only minor modifications. On a state-of-the-art PC, Attila took approximately 3 h to perform the calculations, as a consequence of a careful mesh 'layering'. The results strongly suggest that Attila will be a valuable tool for modelling radiation transport in IFMIF, and for similar problems

  1. A mathematical high bar-human body model for analysing and interpreting mechanical-energetic processes on the high bar.

    Science.gov (United States)

    Arampatzis, A; Brüggemann, G P

    1998-12-01

    The aims of this study were: 1. To study the transfer of energy between the high bar and the gymnast. 2. To develop criteria from the utilisation of high bar elasticity and the utilisation of muscle capacity to assess the effectiveness of a movement solution. 3. To study the influence of varying segment movement upon release parameters. For these purposes a model of the human body attached to the high bar (high bar-human body model) was developed. The human body was modelled using a 15-segment body system. The joint-beam element method (superelement) was employed for modelling the high bar. A superelement consists of four rigid segments connected by joints (two Cardan joints and one rotational-translational joint) and springs (seven rotation springs and one tension-compression spring). The high bar was modelled using three superelements. The input data required for the high bar-human body model were collected with video-kinematographic (50 Hz) and dynamometric (500 Hz) techniques. Masses and moments of inertia of the 15 segments were calculated using the data from the Zatsiorsky et al. (1984) model. There are two major phases characteristic of the giant swing prior to dismounts from the high bar. In the first phase the gymnast attempts to supply energy to the high bar-human body system through muscle activity and to store this energy in the high bar. The difference between the energy transferred to the high bar and the reduction in the total energy of the body could be adopted as a criterion for the utilisation of high bar elasticity. The energy previously transferred into the high bar is returned to the body during the second phase. An advantageous increase in total body energy at the end of the exercise could only be obtained through muscle energy supply. An index characterising the utilisation of muscle capacity was developed out of the difference between the increase in total body energy and the energy returned from the high bar. A delayed and initially slow but

  2. Effects of wintertime atmospheric river landfalls on surface air temperatures in the Western US: Analyses and model evaluation

    Science.gov (United States)

    Kim, J.; Guan, B.; Waliser, D. E.; Ferraro, R.

    2016-12-01

    Landfalling atmospheric rivers (ARs) affect the wintertime surface air temperatures as shown in earlier studies. The AR-related surface air temperatures can exert significant influence on the hydrology in the US Pacific coast region, especially through rainfall-snowfall partitioning and the snowpack in high elevation watersheds, as they are directly related to the freezing-level altitudes. These effects of temperature perturbations can in turn affect hydrologic events of various time scales such as flash flooding by the combined effects of rainfall and snowmelt, and the warm season runoff from melting snowpack, especially in conjunction with the AR effects on winter precipitation and rain-on-snow events in WUS. Thus, understanding the effects of AR landfalls on the surface temperatures and examining the capability of climate models in simulating these effects are important practical concerns for WUS. This study aims to understand the effects of AR landfalls on the characteristics of surface air temperatures in WUS, especially seasonal means and PDFs, and to evaluate the fidelity of model data produced in the NASA downscaling experiment for the 10 winters from Nov. 1999 to Mar. 2010, using an AR-landfall chronology based on the vertically-integrated water vapor flux calculated from the MERRA2 reanalysis. Model skill is measured using metrics including regional means, a skill score based on correlations and mean-square errors, the similarity between two PDF shapes, and Taylor diagrams. Results show that the AR landfalls are associated with higher surface air temperatures in WUS, especially in inland regions. The AR landfalls also reduce the range of surface air temperature PDF, largely by reducing the events in the lower temperature range. The shift in the surface air temperature PDF is consistent with the positive anomalies in the winter-mean temperature. Model data from the NASA downscaling experiment reproduce the AR effects on the temperature PDF, at least
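
    The abstract mentions a skill score for the similarity between two PDF shapes. One common choice for such a metric, used here purely as an illustration and not necessarily the one adopted in the study, is the shared area under the binned model and observed distributions:

```python
# Sketch of a PDF-overlap skill score (Perkins-type): the shared area under
# two binned temperature distributions. Inputs are synthetic stand-ins for
# model and reanalysis surface air temperatures on AR-landfall days.
import numpy as np

rng = np.random.default_rng(1)
t_model = rng.normal(6.0, 4.0, 5000)     # hypothetical model temperatures [degC]
t_obs = rng.normal(5.0, 4.5, 5000)       # hypothetical observed temperatures

bins = np.linspace(-15, 25, 41)
p_model, _ = np.histogram(t_model, bins=bins, density=True)
p_obs, _ = np.histogram(t_obs, bins=bins, density=True)
width = np.diff(bins)

skill = np.sum(np.minimum(p_model, p_obs) * width)   # 1 = identical PDFs
print(f"PDF overlap skill score: {skill:.2f}")
```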

  3. Updated model for radionuclide transport in the near-surface till at Forsmark - Implementation of decay chains and sensitivity analyses

    Energy Technology Data Exchange (ETDEWEB)

    Pique, Angels; Pekala, Marek; Molinero, Jorge; Duro, Lara; Trinchero, Paolo; Vries, Luis Manuel de [Amphos 21 Consulting S.L., Barcelona (Spain)

    2013-02-15

    The Forsmark area has been proposed for potential siting of a deep underground (geological) repository for radioactive waste in Sweden. Safety assessment of the repository requires radionuclide transport from the disposal depth to recipients at the surface to be studied quantitatively. The near-surface quaternary deposits at Forsmark are considered a pathway for potential discharge of radioactivity from the underground facility to the biosphere, thus radionuclide transport in this system has been extensively investigated over the last years. The most recent work of Pique and co-workers (reported in SKB report R-10-30) demonstrated that in case of release of radioactivity the near-surface sedimentary system at Forsmark would act as an important geochemical barrier, retarding the transport of reactive radionuclides through a combination of retention processes. In this report the conceptual model of radionuclide transport in the quaternary till at Forsmark has been updated, by considering recent revisions regarding the near-surface lithology. In addition, the impact of important conceptual assumptions made in the model has been evaluated through a series of deterministic and probabilistic (Monte Carlo) sensitivity calculations. The sensitivity study focused on the following effects: 1. Radioactive decay of {sup 135}Cs, {sup 59}Ni, {sup 230}Th and {sup 226}Ra and effects on their transport. 2. Variability in key geochemical parameters, such as the composition of the deep groundwater, availability of sorbing materials in the till, and mineral equilibria. 3. Variability in hydraulic parameters, such as the definition of hydraulic boundaries, and values of hydraulic conductivity, dispersivity and the deep groundwater inflow rate. The overarching conclusion from this study is that the current implementation of the model is robust (the model is largely insensitive to variations in the parameters within the studied ranges) and conservative (the Base Case calculations have a
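
    A probabilistic sensitivity study of the kind summarised above can be sketched as a Monte Carlo loop over uncertain hydraulic and sorption parameters propagated through a transport surrogate. The snippet below illustrates the mechanics only; the parameter ranges and the simple retardation-based travel-time surrogate are illustrative and are not taken from the SKB report.

```python
# Sketch of a Monte Carlo sensitivity run: sample hydraulic conductivity and
# a sorption coefficient, propagate them through a simple retardation-based
# travel-time surrogate, and summarise the spread. Ranges are illustrative.
import numpy as np

rng = np.random.default_rng(42)
n = 10_000
K = 10 ** rng.uniform(-7, -5, n)        # hydraulic conductivity [m/s]
Kd = 10 ** rng.uniform(-2, 1, n)        # sorption coefficient [m^3/kg]
porosity, rho_b = 0.25, 1600.0          # till porosity [-], bulk density [kg/m^3]
gradient, length = 0.01, 10.0           # head gradient [-], path length [m]

v = K * gradient / porosity             # pore-water velocity [m/s]
R = 1.0 + rho_b * Kd / porosity         # retardation factor
t_years = length * R / v / 3.15e7       # radionuclide travel time [years]

print("median travel time [y]:", np.median(t_years))
print("5th-95th percentile [y]:", np.percentile(t_years, [5, 95]))
```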

  4. Entropic potential field formed for a linear-motor protein near a filament: Statistical-mechanical analyses using simple models

    Science.gov (United States)

    Amano, Ken-ichi; Yoshidome, Takashi; Iwaki, Mitsuhiro; Suzuki, Makoto; Kinoshita, Masahiro

    2010-07-01

    We report new progress in elucidating the mechanism of the unidirectional movement of a linear-motor protein (e.g., myosin) along a filament (e.g., F-actin). The basic concept emphasized here is that a potential field is entropically formed for the protein on the filament immersed in solvent due to the effect of the translational displacement of solvent molecules. The entropic potential field is strongly dependent on geometric features of the protein and the filament, their overall shapes as well as details of the polyatomic structures. The features and the corresponding field are judiciously adjusted by the binding of adenosine triphosphate (ATP) to the protein, hydrolysis of ATP into adenosine diphosphate (ADP)+Pi, and release of Pi and ADP. As the first step, we propose the following physical picture: The potential field formed along the filament for the protein without the binding of ATP or ADP+Pi to it is largely different from that for the protein with the binding, and the directed movement is realized by repeated switches from one of the fields to the other. To illustrate the picture, we analyze the spatial distribution of the entropic potential between a large solute and a large body using the three-dimensional integral equation theory. The solute is modeled as a large hard sphere. Two model filaments are considered as the body: model 1 is a set of one-dimensionally connected large hard spheres and model 2 is a double helical structure formed by two sets of connected large hard spheres. The solute and the filament are immersed in small hard spheres forming the solvent. The major findings are as follows. The solute is strongly confined within a narrow space in contact with the filament. Within the space there are locations with sharply deep local potential minima along the filament, and the distance between two adjacent locations is equal to the diameter of the large spheres constituting the filament. The potential minima form a ringlike domain in model 1

  5. The generic MESSy submodel TENDENCY (v1.0) for process-based analyses in Earth system models

    Directory of Open Access Journals (Sweden)

    R. Eichinger

    2014-07-01

    Full Text Available The tendencies of prognostic variables in Earth system models are usually only accessible, e.g. for output, as a sum over all physical, dynamical and chemical processes at the end of one time integration step. Information about the contribution of individual processes to the total tendency is lost, if no special precautions are implemented. The knowledge on individual contributions, however, can be of importance to track down specific mechanisms in the model system. We present the new MESSy (Modular Earth Submodel System) infrastructure submodel TENDENCY and use it exemplarily within the EMAC (ECHAM/MESSy Atmospheric Chemistry) model to trace process-based tendencies of prognostic variables. The main idea is the outsourcing of the tendency accounting for the state variables from the process operators (submodels) to the TENDENCY submodel itself. In this way, a record of the tendencies of all process–prognostic variable pairs can be stored. The selection of these pairs can be specified by the user, tailor-made for the desired application, in order to minimise memory requirements. Moreover, a standard interface allows the access to the individual process tendencies by other submodels, e.g. for on-line diagnostics or for additional parameterisations, which depend on individual process tendencies. An optional closure test assures the correct treatment of tendency accounting in all submodels and thus serves to reduce the model's susceptibility to errors. TENDENCY is independent of the time integration scheme and therefore the concept is applicable to other model systems as well. Test simulations with TENDENCY show an increase of computing time for the EMAC model (in a setup without atmospheric chemistry) of 1.8 ± 1% due to the additional subroutine calls when using TENDENCY. Exemplary results reveal the dissolving mechanisms of the stratospheric tape recorder signal in height over time. The separation of the tendency of the specific humidity into the respective
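
    The core of the TENDENCY concept, recording each process's contribution to a prognostic variable's tendency and checking closure against the total applied tendency, can be illustrated with a few lines of bookkeeping. The Python sketch below is only a conceptual analogue; the actual submodel is a Fortran interface within MESSy, and the class and names here are invented for illustration.

```python
# Sketch of process-wise tendency accounting with a closure test: record each
# process's contribution, then check that the recorded contributions sum to
# the total tendency actually applied during the time step.
import numpy as np

class TendencyLedger:
    def __init__(self, state_shape):
        self.records = {}                 # (process, variable) -> tendency field
        self.total = np.zeros(state_shape)

    def add(self, process, variable, tendency):
        self.records[(process, variable)] = tendency.copy()
        self.total += tendency

    def closure_ok(self, model_total, tol=1e-12):
        # Closure test: recorded per-process tendencies must sum to the
        # total tendency applied by the model during the time step.
        return np.allclose(self.total, model_total, atol=tol)

q = np.zeros(10)                          # e.g. a specific-humidity column
ledger = TendencyLedger(q.shape)
ledger.add("convection", "q", np.full(10, 1e-6))
ledger.add("vdiff", "q", np.full(10, -4e-7))
applied = sum(t for t in ledger.records.values())
print("closure test passed:", ledger.closure_ok(applied))
```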

  6. Sparsity-Based Super Resolution for SEM Images.

    Science.gov (United States)

    Tsiper, Shahar; Dicker, Or; Kaizerman, Idan; Zohar, Zeev; Segev, Mordechai; Eldar, Yonina C

    2017-09-13

    The scanning electron microscope (SEM) is an electron microscope that produces an image of a sample by scanning it with a focused beam of electrons. The electrons interact with the atoms in the sample, which emit secondary electrons that contain information about the surface topography and composition. The sample is scanned by the electron beam point by point, until an image of the surface is formed. Since its invention in 1942, the capabilities of SEMs have become paramount in the discovery and understanding of the nanometer world, and today they are used extensively both in research and in industry. In principle, SEMs can achieve resolution better than one nanometer. However, for many applications, working at subnanometer resolution implies an exceedingly large number of scanning points. For exactly this reason, the SEM diagnostics of microelectronic chips is performed either at high resolution (HR) over a small area or at low resolution (LR) while capturing a larger portion of the chip. Here, we employ sparse coding and dictionary learning to algorithmically enhance low-resolution SEM images of microelectronic chips, up to the level of the HR images acquired by slow SEM scans, while considerably reducing the noise. Our methodology consists of two steps: an offline stage of learning a joint dictionary from a sequence of LR and HR images of the same region in the chip, followed by a fast online super-resolution step where the resolution of a new LR image is enhanced. We provide several examples with typical chips used in the microelectronics industry, as well as a statistical study on arbitrary images with characteristic structural features. Conceptually, our method works well when the images have similar characteristics, as microelectronics chips do. This work demonstrates that employing sparsity concepts can greatly improve the performance of SEM, thereby considerably increasing the scanning throughput without compromising on analysis quality and resolution.
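
    The two-stage scheme described above (offline joint dictionary learning on registered LR/HR patch pairs, then online sparse coding of new LR patches) can be sketched with standard tools. The snippet below uses scikit-learn on synthetic patch vectors to show the structure of the approach; it is not the authors' implementation, and the patch sizes and dictionary settings are arbitrary.

```python
# Sketch of the coupled-dictionary idea behind sparsity-based super resolution:
# learn one joint dictionary over stacked (LR, HR) patch pairs offline, then
# sparse-code a new LR patch against the LR half and reconstruct its HR
# counterpart from the HR half. Data here is synthetic.
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning, sparse_encode

rng = np.random.default_rng(0)
n_pairs, d_lr, d_hr = 2000, 36, 144          # e.g. 6x6 LR and 12x12 HR patches
lr = rng.normal(size=(n_pairs, d_lr))        # in practice: registered LR patches
hr = rng.normal(size=(n_pairs, d_hr))        # in practice: registered HR patches

# Offline stage: joint dictionary over concatenated LR|HR patch vectors.
joint = np.hstack([lr, hr])
dico = MiniBatchDictionaryLearning(n_components=128, alpha=1.0, random_state=0)
D = dico.fit(joint).components_              # shape (128, d_lr + d_hr)
D_lr, D_hr = D[:, :d_lr], D[:, d_lr:]

# Online stage: encode a new LR patch sparsely, synthesise the HR estimate.
new_lr = rng.normal(size=(1, d_lr))
code = sparse_encode(new_lr, D_lr, algorithm="omp", n_nonzero_coefs=5)
hr_estimate = code @ D_hr
print(hr_estimate.shape)                     # (1, 144)
```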

  7. Strategic Marketing for Indonesian Plywood Industry: An Analyse by using Porter Five Forces Model and Generic Strategy Framework

    OpenAIRE

    Makkarennu; Nakayasu, A.; Osozawa, K.; Ichikawa, M.

    2014-01-01

    The target for a marketing strategy is to find a way of achieving a sustainable competitive advantage over the other competing products and firms in a market. Good strategy serves as a road map for effective action. Porter's five forces model and three generic strategies were used to evaluate the structure and the strategy for positioning of plywood industry in South Sulawesi, Indonesia. Qualitative research was carried out by using in-depth interview method. Having expressed either agree...

  8. Defining the Transfer Functions of the PCAD Model in North Atlantic Right Whales (Eubalaena glacialis) - Retrospective Analyses of Existing Data

    Science.gov (United States)

    2012-09-30

    ... (health assessments), Peter Corkeron, National Marine Fisheries Service (statistics), Kathleen Hunt (endocrinology) ... each adult female right whale transitioning between three possible states – Pregnancy, Lactation and Resting. Only female whales with at least 20 ... whale (Eubalaena glacialis). General and Comparative Endocrinology 148:260-272. Kuhn, M. 2008. Building predictive models in R using the caret package.

  9. HCV kinetic and modeling analyses indicate similar time to cure among sofosbuvir combination regimens with daclatasvir, simeprevir or ledipasvir.

    Science.gov (United States)

    Dahari, Harel; Canini, Laetitia; Graw, Frederik; Uprichard, Susan L; Araújo, Evaldo S A; Penaranda, Guillaume; Coquet, Emilie; Chiche, Laurent; Riso, Aurelie; Renou, Christophe; Bourliere, Marc; Cotler, Scott J; Halfon, Philippe

    2016-06-01

    Recent clinical trials of direct-acting-antiviral agents (DAAs) against hepatitis C virus (HCV) achieved >90% sustained virological response (SVR) rates, suggesting that cure often took place before the end of treatment (EOT). We sought to evaluate retrospectively whether early response kinetics can provide the basis to individualize therapy to achieve optimal results while reducing duration and cost. 58 chronic HCV patients were treated with 12-week sofosbuvir+simeprevir (n=19), sofosbuvir+daclatasvir (n=19), or sofosbuvir+ledipasvir in three French referral centers. HCV was measured at baseline, day 2, every other week, EOT and 12 weeks post EOT. Mathematical modeling was used to predict the time to cure, i.e., <1 virus copy in the entire extracellular body fluid. All but one patient who relapsed achieved SVR. Mean age was 60 ± 11 years, 53% were male, 86% HCV genotype-1, 9% HIV coinfected, 43% advanced fibrosis (F3), and 57% had cirrhosis. At weeks 2, 4 and 6, 48%, 88% and 100% of patients had HCV <15 IU/ml, with 27%, 74% and 91% of observations having target not detected, respectively. Modeling results predicted that 23 (43%), 16 (30%), 7 (13%), 5 (9%) and 3 (5%) subjects would reach cure within 6, 8, 10, 12 and 13 weeks of therapy, respectively. The modeling suggested that the patient who relapsed would have benefitted from an additional week of sofosbuvir+ledipasvir. Adjusting duration of treatment according to the modeling predicts reduced medication costs of 43-45% and 17-30% in subjects who had HCV <15 IU/ml at weeks 2 and 4, respectively. The use of early viral kinetic analysis has the potential to individualize duration of DAA therapy with a projected average cost saving of 16-20% per 100 treated persons. Copyright © 2016 European Association for the Study of the Liver. Published by Elsevier B.V. All rights reserved.
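
    The "time to cure" quantity used in the modeling is the time at which the extrapolated viral decline implies fewer than one virion in the entire extracellular body fluid. A minimal sketch of that extrapolation, with a biphasic decay curve and illustrative parameter values rather than the study's fitted ones, is shown below.

```python
# Sketch of the "time to cure" extrapolation: assume a biphasic HCV decline
# V(t) = V0 * (A*exp(-l1*t) + (1-A)*exp(-l2*t)) and find when the total number
# of virions in extracellular body fluid drops below one. Parameter values are
# illustrative only.
import numpy as np
from scipy.optimize import brentq

V0 = 1e6                      # baseline viral load [IU/mL]
A, l1, l2 = 0.99, 5.0, 0.35   # fast/slow phase fractions and rates [1/day]
fluid_ml = 13_500.0           # assumed extracellular fluid volume [mL]

def total_virions(t_days):
    v_per_ml = V0 * (A * np.exp(-l1 * t_days) + (1 - A) * np.exp(-l2 * t_days))
    return v_per_ml * fluid_ml

# Time at which fewer than one virion remains (log scale for a stable root).
t_cure = brentq(lambda t: np.log10(total_virions(t)), 0.1, 200.0)
print(f"predicted time to cure: {t_cure:.1f} days (~{t_cure / 7:.1f} weeks)")
```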

  10. Modelling software failures of digital I and C in probabilistic safety analyses based on the TELEPERM registered XS operating experience

    International Nuclear Information System (INIS)

    Jockenhoevel-Barttfeld, Mariana; Taurines, Andre; Baeckstroem, Ola; Holmberg, Jan-Erik; Porthin, Markus; Tyrvaeinen, Tero

    2015-01-01

    Digital instrumentation and control (I and C) systems appear as upgrades in existing nuclear power plants (NPPs) and in new plant designs. In order to assess the impact of digital system failures, quantifiable reliability models are needed along with data for digital systems that are compatible with existing probabilistic safety assessments (PSA). The paper focuses on the modelling of software failures of digital I and C systems in probabilistic assessments. An analysis of software faults, failures and effects is presented to derive relevant failure modes of system and application software for the PSA. The estimations of software failure probabilities are based on an analysis of the operating experience of TELEPERM registered XS (TXS). For the assessment of application software failures, the analysis combines the use of the TXS operating experience at an application function level with conservative engineering judgments. Probabilities of failure to actuate on demand and of spurious actuation are estimated for a typical reactor protection application. Moreover, the paper gives guidelines for the modelling of software failures in the PSA. The strategy presented in this paper is generic and can be applied to different software platforms and their applications.

  11. Bridging Research and Policy in Energy Transition. Contributing to shape energy and climate policies through economic modelling and analyses

    International Nuclear Information System (INIS)

    Paugam, Anne; Giraud, Gael; Thauvin, Eric

    2015-11-01

    The growth model of the 20th century relied heavily on the exploitation of fossil energy and natural resources extracted at low cost. Yet, the depletion of these resources, the upward trend of their prices over the long term and the consequences of their use for the environment and climate are now challenging the sustainability of this model. The notion of energy transition is directed at rethinking the use of energy resources and natural capital to reach an economic growth that mitigates negative environmental effects, without sacrificing the well-being of populations. Turning this idea into action is a challenging task. AFD has designed and funded research and technical cooperation projects in order to inform decisions on the short-term cost and long-term impact of measures designed to accelerate the transition to low-carbon energy regimes. Using tools for empirical economic analysis (particularly 'economy-energy' models), these projects have been carried out in several intervention settings, including South Africa, China and Mexico, which are discussed in this paper.

  12. A finite-volume model of a parabolic trough photovoltaic/thermal collector: Energetic and exergetic analyses

    International Nuclear Information System (INIS)

    Calise, Francesco; Palombo, Adolfo; Vanoli, Laura

    2012-01-01

    This paper presents a detailed finite-volume model of a concentrating photovoltaic/thermal (PVT) solar collector. The PVT solar collector consists of a parabolic trough concentrator and a linear triangular receiver. The bottom surfaces of the triangular receiver are equipped with triple-junction cells whereas the top surface is covered by an absorbing surface. The cooling fluid (water) flows inside a channel along the longitudinal direction of the PVT collector. The system was discretized along its axis and, for each slice of the discretized computational domain, mass and energy balances were considered. The model allows one to evaluate both thermodynamic and electrical parameters along the axis of the PVT collector. Then, for each slice of the computational domain, exergy balances were also considered in order to evaluate the corresponding exergy destruction rate and exergetic efficiency. Therefore, the model also calculates the magnitude of the irreversibilities inside the collector and it allows one to detect where these irreversibilities occur. A sensitivity analysis is also performed with the aim of evaluating the effect of the variation of the main design/environmental parameters on the energetic and exergetic performance of the PVT collector. -- Highlights: ► The paper investigates an innovative concentrating photovoltaic thermal solar collector. ► The collector is equipped with triple-junction photovoltaic layers. ► A local exergetic analysis is performed in order to detect sources of irreversibilities. ► Irreversibilities are mainly due to the heat transfer between sun and PVT collector.
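
    The finite-volume idea described above, marching along the receiver axis and applying an energy balance to each slice, can be illustrated with a drastically simplified single-channel sketch. The geometry, optical and cell efficiencies, and flow conditions below are invented placeholders, and the paper's exergy analysis is not reproduced.

```python
# Sketch of slice-wise (finite-volume) energy bookkeeping along a PVT receiver:
# each axial slice splits the concentrated solar input into an electrical part
# (cell efficiency) and a thermal part that heats the coolant. Values are
# illustrative only.
import numpy as np

n_slices, length, width = 50, 6.0, 0.1        # receiver discretisation [m]
dni, conc = 800.0, 30.0                       # irradiance [W/m^2], concentration
eta_opt, eta_el = 0.80, 0.30                  # optical and cell efficiencies
m_dot, cp = 0.05, 4186.0                      # water flow [kg/s], cp [J/kg K]

dx = length / n_slices
t_fluid = np.empty(n_slices + 1)
t_fluid[0] = 25.0                             # inlet temperature [degC]

q_in = dni * conc * eta_opt * width * dx      # absorbed power per slice [W]
p_el = 0.0
for i in range(n_slices):
    q_th = q_in * (1.0 - eta_el)              # thermal share heats the coolant
    p_el += q_in * eta_el
    t_fluid[i + 1] = t_fluid[i] + q_th / (m_dot * cp)

print(f"outlet temperature: {t_fluid[-1]:.1f} degC, electrical power: {p_el:.0f} W")
```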

  13. Respiratory system model for quasistatic pulmonary pressure-volume (P-V) curve: inflation-deflation loop analyses.

    Science.gov (United States)

    Amini, R; Narusawa, U

    2008-06-01

    A respiratory system model (RSM) is developed for the deflation process of a quasistatic pressure-volume (P-V) curve, following the model for the inflation process reported earlier. In the RSM of both the inflation and the deflation limb, a respiratory system consists of a large population of basic alveolar elements, each consisting of a piston-spring-cylinder subsystem. A normal distribution of the basic elements is derived from a Boltzmann statistical model with the alveolar closing (opening) pressure as the distribution parameter for the deflation (inflation) process. An error minimization by the method of least squares applied to existing P-V loop data from two different data sources confirms that a simultaneous inflation-deflation analysis is required for an accurate determination of RSM parameters. Commonly used terms such as lower inflection point, upper inflection point, and compliance are examined based on the P-V equations, the distribution function, and the geometric and physical properties of the basic alveolar element.
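
    Because the RSM assumes normally distributed alveolar opening (or closing) pressures, the recruited volume along one limb follows a normal cumulative distribution in pressure. The sketch below fits such a curve to synthetic inflation data; it only illustrates the functional form and the least-squares step, not the full simultaneous inflation-deflation analysis of the paper.

```python
# Sketch: fit a quasistatic P-V limb assuming normally distributed alveolar
# opening (or closing) pressures, so recruited volume follows a normal CDF in
# pressure: V(P) = Vmin + (Vmax - Vmin) * Phi((P - mu)/sigma). Data are synthetic.
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def pv_curve(p, v_min, v_max, mu, sigma):
    return v_min + (v_max - v_min) * norm.cdf(p, loc=mu, scale=sigma)

rng = np.random.default_rng(3)
p = np.linspace(0, 40, 30)                       # airway pressure [cmH2O]
v_true = pv_curve(p, 0.2, 2.8, 18.0, 6.0)        # volume [L]
v_obs = v_true + rng.normal(0, 0.05, p.size)

params, _ = curve_fit(pv_curve, p, v_obs, p0=[0.0, 3.0, 15.0, 5.0])
print("fitted (Vmin, Vmax, mu, sigma):", np.round(params, 2))
# A simultaneous inflation-deflation fit, as in the paper, would share Vmin and
# Vmax between the two limbs while letting mu and sigma differ.
```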

  14. Model analyses of atmospheric mercury: present air quality and effects of transpacific transport on the United States

    Science.gov (United States)

    Lei, H.; Liang, X.-Z.; Wuebbles, D. J.; Tao, Z.

    2013-11-01

    Atmospheric mercury is a toxic air and water pollutant that is of significant concern because of its effects on human health and ecosystems. A mechanistic representation of the atmospheric mercury cycle is developed for the state-of-the-art global climate-chemistry model, CAM-Chem (Community Atmospheric Model with Chemistry). The model simulates the emission, transport, transformation and deposition of atmospheric mercury (Hg) in three forms: elemental mercury (Hg(0)), reactive mercury (Hg(II)), and particulate mercury (PHg). Emissions of mercury include those from human, land, ocean, biomass burning and volcano related sources. Land emissions are calculated based on surface solar radiation flux and skin temperature. A simplified air-sea mercury exchange scheme is used to calculate emissions from the oceans. The chemistry mechanism includes the oxidation of Hg(0) in gaseous phase by ozone with temperature dependence, OH, H2O2 and chlorine. Aqueous chemistry includes both oxidation and reduction of Hg(0). Transport and deposition of mercury species are calculated through adapting the original formulations in CAM-Chem. The CAM-Chem model with mercury is driven by present meteorology to simulate the present mercury air quality during the 1999-2001 period. The resulting surface concentrations of total gaseous mercury (TGM) are then compared with the observations from worldwide sites. Simulated wet depositions of mercury over the continental United States are compared to the observations from 26 Mercury Deposition Network stations to test the wet deposition simulations. The evaluations of gaseous concentrations and wet deposition confirm a strong capability for the CAM-Chem mercury mechanism to simulate the atmospheric mercury cycle. The general reproduction of global TGM concentrations and the overestimation on South Africa indicate that model simulations of TGM are seriously affected by emissions. The comparison to wet deposition indicates that wet deposition patterns

  15. A Miniaturized Variable Pressure Scanning Electron Microscope (MVP-SEM) for In-Situ Mars Surface Sample Analysis

    Science.gov (United States)

    Edmunson, J.; Gaskin, J. A.; Jerman, G. A.; Harvey, R. P.; Doloboff, I. J.; Neidholdt, E. L.

    2016-01-01

    The Miniaturized Variable Pressure Scanning Electron Microscope (MVP-SEM) project, funded by the NASA Planetary Instrument Concepts for the Advancement of Solar System Observations (PICASSO) Research Opportunities in Space and Earth Sciences (ROSES), will build upon previous miniaturized SEM designs and recent advancements in variable pressure SEMs to design and build an SEM to complete analyses of samples on the surface of Mars using the atmosphere as an imaging medium. This project is a collaboration between NASA Marshall Space Flight Center (MSFC), the Jet Propulsion Laboratory (JPL), electron gun and optics manufacturer Applied Physics Technologies, and small vacuum system manufacturer Creare. Dr. Ralph Harvey and environmental SEM (ESEM) inventor Dr. Gerry Danilatos serve as advisors to the team. Variable pressure SEMs allow for fine (nm-scale) resolution imaging and micron-scale chemical study of materials without sample preparation (e.g., carbon or gold coating). Charging of a sample is reduced or eliminated by the gas surrounding the sample. It is this property of ESEMs that makes them ideal for locations where sample preparation is not yet feasible, such as the surface of Mars. In addition, the lack of sample preparation needed here will simplify the sample acquisition process and allow caching of the samples for future complementary payload use.

  16. Quantifying Golgi structure using EM: combining volume-SEM and stereology for higher throughput.

    Science.gov (United States)

    Ferguson, Sophie; Steyer, Anna M; Mayhew, Terry M; Schwab, Yannick; Lucocq, John Milton

    2017-06-01

    Investigating organelles such as the Golgi complex depends increasingly on high-throughput quantitative morphological analyses from multiple experimental or genetic conditions. Light microscopy (LM) has been an effective tool for screening but fails to reveal fine details of Golgi structures such as vesicles, tubules and cisternae. Electron microscopy (EM) has sufficient resolution but traditional transmission EM (TEM) methods are slow and inefficient. Newer volume scanning EM (volume-SEM) methods now have the potential to speed up 3D analysis by automated sectioning and imaging. However, they produce large arrays of sections and/or images, which require labour-intensive 3D reconstruction for quantitation on limited cell numbers. Here, we show that the information storage, digital waste and workload involved in using volume-SEM can be reduced substantially using sampling-based stereology. Using the Golgi as an example, we describe how Golgi populations can be sensed quantitatively using single random slices and how accurate quantitative structural data on Golgi organelles of individual cells can be obtained using only 5-10 sections/images taken from a volume-SEM series (thereby sensing population parameters and cell-cell variability). The approach will be useful in techniques such as correlative LM and EM (CLEM) where small samples of cells are treated and where there may be variable responses. For Golgi study, we outline a series of stereological estimators that are suited to these analyses and suggest workflows, which have the potential to enhance the speed and relevance of data acquisition in volume-SEM.
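
    The kind of sampling-based estimate advocated above can be illustrated with a simple point-counting example: a volume fraction and an approximate coefficient of error computed from a handful of systematically sampled sections. The counts below are invented, and the estimator shown is a generic stereological one rather than a specific protocol from the paper.

```python
# Sketch of a point-counting stereology estimate: the Golgi volume fraction of
# a cell is estimated from a few systematically sampled volume-SEM sections by
# counting grid points hitting Golgi vs. points hitting the cell. Counts are
# synthetic, just to show the estimator and a rough error measure.
import numpy as np

# points hitting Golgi and cell on, e.g., 6 systematically sampled sections
p_golgi = np.array([3, 5, 2, 4, 6, 3])
p_cell = np.array([210, 190, 205, 220, 215, 198])

vv = p_golgi.sum() / p_cell.sum()                 # volume-fraction estimator
# A simple (approximate) coefficient of error from between-section variability:
ratio = p_golgi / p_cell
ce = ratio.std(ddof=1) / np.sqrt(ratio.size) / ratio.mean()

print(f"estimated Golgi volume fraction: {vv:.3%} (CE ~ {ce:.2f})")
# Multiplying the volume fraction by an independent estimate of cell volume
# yields absolute Golgi volume per cell; other estimators cover surface area
# and number.
```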

  17. Technical damage analysis of a mechanical seal based on thermal waves and correlated with EDX and SEM

    Science.gov (United States)

    Haj-Daoud, A.; Katscher, U.; Bein, B. K.; Pelzl, J.; Bach, H.; Oswald, W.

    1999-03-01

    A seal which had been in contact with sea water of high salt concentration, has been analysed, in order to characterize the erosion effects and throw light on the erosion mechanisms. The measured effective thermal depth profiles have been interpreted phenomenologically and have been correlated with energy-dispersive X-ray microanalysis (EDX) and scanning electron microscopy (SEM).

  18. General aspects of media reception? – A proposal for a multidimensional model for analysing qualitative reception interviews

    Directory of Open Access Journals (Sweden)

    Kim Schrøder

    2003-09-01

    Full Text Available Are there general aspects of the reception of media products that it can be analytically fruitful to orient oneself towards, and that should always be examined when analysing qualitative reception data – and perhaps already when planning the empirical fieldwork of an empirical reception project? This article builds on the premise that this question can be answered in the affirmative, and presents a proposal for what a multidimensional model for qualitative reception analysis could look like.

  19. Hydrogeochemical Processes of Groundwater Using Multivariate Statistical Analyses and Inverse Geochemical Modeling in Samrak Park of Nakdong River Basin, Korea

    Science.gov (United States)

    Chung, Sang Yong

    2015-04-01

    Multivariate statistical methods and inverse geochemical modelling were used to assess the hydrogeochemical processes of groundwater in the Nakdong River basin. The study area is located in a part of the Nakdong River basin, in Busan Metropolitan City, Korea. Quaternary deposits form the Samrak Park region and are underlain by intrusive rocks of the Bulkuksa group and sedimentary rocks of the Yucheon group of the Cretaceous Period. The Samrak Park region hosts two aquifer systems, an unconfined aquifer and a confined aquifer. The unconfined aquifer consists of upper sand, and the confined aquifer comprises clay, lower sand, gravel and weathered rock. Porosity and hydraulic conductivity of the area are 37 to 59% and 1.7 to 200 m/day, respectively. Depth of the wells ranges from 9 to 77 m. In Piper's trilinear diagram, the CaCl2 type was characteristic of the unconfined aquifer and the NaCl type was dominant for the confined aquifer. By hierarchical cluster analysis (HCA), Group 1 and Group 2 are composed entirely of unconfined aquifer and confined aquifer samples, respectively. In factor analysis (FA), Factor 1 is described by the strong loadings of EC, Na, K, Ca, Mg, Cl, HCO3, SO4 and Si, and Factor 2 represents the strong loadings of pH and Al. Based on the Gibbs diagram, the unconfined and confined aquifer samples are scattered discretely in the rock and evaporation areas. The principal hydrogeochemical processes occurring in the confined and unconfined aquifers are ion exchange due to freshening under natural recharge and water-rock interactions followed by evaporation and dissolution. The saturation indices of minerals such as Ca-montmorillonite, dolomite and calcite indicate oversaturation, while albite, gypsum and halite are undersaturated. Inverse geochemical modeling using the PHREEQC code demonstrated that relatively few phases were required to derive the differences in groundwater chemistry along the flow path in the area. It also suggested that dissolution of carbonate and ion exchange
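
    The multivariate part of this workflow, standardising the major-ion data, clustering the samples and extracting two factors, can be sketched with standard Python tools on synthetic data, as below. The inverse geochemical modelling step itself requires the PHREEQC code and is not reproduced here.

```python
# Sketch of the multivariate workflow: standardise major-ion data, run
# hierarchical clustering (Ward linkage) and a two-factor analysis. The data
# below are synthetic stand-ins for well samples.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(7)
ions = ["EC", "Na", "K", "Ca", "Mg", "Cl", "HCO3", "SO4", "Si", "pH", "Al"]
X = rng.lognormal(mean=1.0, sigma=0.6, size=(40, len(ions)))   # 40 wells

Z = StandardScaler().fit_transform(X)

groups = fcluster(linkage(Z, method="ward"), t=2, criterion="maxclust")
print("samples per cluster:", np.bincount(groups)[1:])          # e.g. the two aquifers

fa = FactorAnalysis(n_components=2, random_state=0).fit(Z)
loadings = fa.components_                                        # (2, n_ions)
for name, l1, l2 in zip(ions, loadings[0], loadings[1]):
    print(f"{name:>4s}  F1={l1:+.2f}  F2={l2:+.2f}")
```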

  20. A model to analyse the flow of an incompressible Newtonian fluid through a rigid, homogeneous, isotropic and infinite porous medium

    International Nuclear Information System (INIS)

    Gama, R.M.S. da; Sampaio, R.

    1985-01-01

    The flow of an incompressible Newtonian fluid through a rigid, homogeneous, isotropic and infinite porous medium, which has a given initial distribution of the fluid, is analyzed. A model is proposed that assumes that the motion is caused by a concentration gradient, but does not consider the friction between the porous medium and the fluid. We solve a one-dimensional case in which the mathematical problem reduces to the solution of a non-linear hyperbolic system of differential equations subject to an initial condition given by a step function, the so-called 'Riemann problem'. (Author) [pt

  1. A Simple Object-Oriented and Open Source Model for Scientific and Policy Analyses of the Global Carbon Cycle-Hector

    Science.gov (United States)

    Hartin, C.; Bond-Lamberty, B. P.; Patel, P.; Link, R. P.

    2014-12-01

    Simple climate models play an integral role in policy and scientific communities. They are used in climate mitigation scenarios within integrated assessment models, complex climate model emulation, and uncertainty analyses. Here we describe Hector, an open-source, object-oriented, simple global climate carbon-cycle model. This model runs essentially instantaneously while still representing the most critical global scale earth system processes, e.g., carbon fluxes between the ocean and atmosphere, and respiration and primary production on land. Hector has three main carbon pools: an atmosphere, land, and ocean. The terrestrial carbon cycle is represented by a simple design with respiration and primary production, accommodating arbitrary geographic divisions into, e.g., ecological biomes or political units. The ocean carbon cycle actively solves the inorganic carbon system in the surface ocean, directly calculating air-sea fluxes of carbon and ocean pH. Hector reproduces the large-scale global trends found in historical data of atmospheric [CO2] and surface temperature and simulates all four Representative Concentration Pathways. Hector's results compare well with current observations of critical climate variables, with MAGICC (a well-known simple climate model), and with model output from the Coupled Model Intercomparison Project phase 5. Hector has the ability to be a key analytical tool across many scientific and policy communities due to its modern software architecture and open-source, object-oriented structure. In particular, Hector can be used to emulate larger complex models to help fill gaps in scenario coverage for future scenario processes.
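
    Hector itself is written in C++ (with coupling interfaces), so the snippet below is not Hector code or its API; it is only a generic illustration of the few-box carbon-cycle structure the abstract describes, with invented pool sizes, exchange coefficients and emissions.

```python
# Generic sketch of a few-box global carbon cycle of the kind a simple climate
# model implements (atmosphere, land, ocean pools exchanging carbon annually).
# All parameters and fluxes are illustrative, not a calibration of Hector.
import numpy as np

years = np.arange(1850, 2101)
emissions = np.where(years < 2020, 10.0 * (years - 1850) / 170.0, 10.0)  # PgC/yr

atm, land, ocean = 589.0, 2300.0, 38000.0      # initial pools [PgC]
k_ao, k_oa = 0.002, 0.0000308                  # air->ocean, ocean->air [1/yr]
k_al, k_la = 0.003, 0.00077                    # air->land, land->air [1/yr]

atm_series = []
for e in emissions:
    f_ao = k_ao * atm - k_oa * ocean           # net air-to-ocean flux
    f_al = k_al * atm - k_la * land            # net air-to-land flux
    atm += e - f_ao - f_al
    ocean += f_ao
    land += f_al
    atm_series.append(atm)

ppm = np.array(atm_series) / 2.13              # PgC -> ppm conversion
print(f"atmospheric CO2 in {years[-1]}: {ppm[-1]:.0f} ppm (illustrative only)")
```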

  2. A methodology for eliciting, representing, and analysing stakeholder knowledge for decision making on complex socio-ecological systems: from cognitive maps to agent-based models.

    Science.gov (United States)

    Elsawah, Sondoss; Guillaume, Joseph H A; Filatova, Tatiana; Rook, Josefine; Jakeman, Anthony J

    2015-03-15

    This paper aims to contribute to developing better ways for incorporating essential human elements in decision making processes for modelling of complex socio-ecological systems. It presents a step-wise methodology for integrating perceptions of stakeholders (qualitative) into formal simulation models (quantitative) with the ultimate goal of improving understanding and communication about decision making in complex socio-ecological systems. The methodology integrates cognitive mapping and agent based modelling. It cascades through a sequence of qualitative/soft and numerical methods comprising: (1) Interviews to elicit mental models; (2) Cognitive maps to represent and analyse individual and group mental models; (3) Time-sequence diagrams to chronologically structure the decision making process; (4) All-encompassing conceptual model of decision making, and (5) computational (in this case agent-based) Model. We apply the proposed methodology (labelled ICTAM) in a case study of viticulture irrigation in South Australia. Finally, we use strengths-weakness-opportunities-threats (SWOT) analysis to reflect on the methodology. Results show that the methodology leverages the use of cognitive mapping to capture the richness of decision making and mental models, and provides a combination of divergent and convergent analysis methods leading to the construction of an Agent Based Model. Copyright © 2014 Elsevier Ltd. All rights reserved.

  3. 4Cin: A computational pipeline for 3D genome modeling and virtual Hi-C analyses from 4C data.

    Science.gov (United States)

    Irastorza-Azcarate, Ibai; Acemel, Rafael D; Tena, Juan J; Maeso, Ignacio; Gómez-Skarmeta, José Luis; Devos, Damien P

    2018-03-01

    The use of 3C-based methods has revealed the importance of the 3D organization of the chromatin for key aspects of genome biology. However, the different caveats of the variants of 3C techniques have limited their scope and the range of scientific fields that could benefit from these approaches. To address these limitations, we present 4Cin, a method to generate 3D models and derive virtual Hi-C (vHi-C) heat maps of genomic loci based on 4C-seq or any kind of 4C-seq-like data, such as those derived from NG Capture-C. 3D genome organization is determined by integrative consideration of the spatial distances derived from as few as four 4C-seq experiments. The 3D models obtained from 4C-seq data, together with their associated vHi-C maps, allow the inference of all chromosomal contacts within a given genomic region, facilitating the identification of Topological Associating Domains (TAD) boundaries. Thus, 4Cin offers a much cheaper, accessible and versatile alternative to other available techniques while providing a comprehensive 3D topological profiling. By studying TAD modifications in genomic structural variants associated to disease phenotypes and performing cross-species evolutionary comparisons of 3D chromatin structures in a quantitative manner, we demonstrate the broad potential and novel range of applications of our method.
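
    Once a 3D model of the locus is available as a chain of genomic bins with coordinates, a virtual Hi-C map follows from converting pairwise distances into contact frequencies. The sketch below illustrates that conversion with a toy random-walk chain and a simple inverse-distance kernel; it is not 4Cin's modelling or scoring procedure.

```python
# Sketch of deriving a virtual Hi-C (vHi-C) map from a 3D model of a locus:
# take the bin coordinates of the modelled chromatin chain, compute pairwise
# distances, and map distance to contact frequency with a decay kernel.
# The random-walk "model" and kernel are placeholders.
import numpy as np

rng = np.random.default_rng(5)
n_bins = 120                                   # genomic bins across the locus
coords = np.cumsum(rng.normal(size=(n_bins, 3)), axis=0)   # toy 3D chain

diff = coords[:, None, :] - coords[None, :, :]
dist = np.sqrt((diff ** 2).sum(axis=-1))

vhic = 1.0 / (1.0 + dist) ** 1.5               # contact frequency ~ distance decay
np.fill_diagonal(vhic, vhic.max())

# TAD-like blocks would appear as squares of elevated contact frequency along
# the diagonal of vhic; boundary calls can then be made on this matrix.
print(vhic.shape, vhic[:3, :3].round(2))
```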

  4. Whole Genome and Global Gene Expression Analyses of the Model Mushroom Flammulina velutipes Reveal a High Capacity for Lignocellulose Degradation

    Science.gov (United States)

    Park, Young-Jin; Baek, Jeong Hun; Lee, Seonwook; Kim, Changhoon; Rhee, Hwanseok; Kim, Hyungtae; Seo, Jeong-Sun; Park, Hae-Ran; Yoon, Dae-Eun; Nam, Jae-Young; Kim, Hong-Il; Kim, Jong-Guk; Yoon, Hyeokjun; Kang, Hee-Wan; Cho, Jae-Yong; Song, Eun-Sung; Sung, Gi-Ho; Yoo, Young-Bok; Lee, Chang-Soo; Lee, Byoung-Moo; Kong, Won-Sik

    2014-01-01

    Flammulina velutipes is a fungus with health and medicinal benefits that has been used for consumption and cultivation in East Asia. F. velutipes is also known to degrade lignocellulose and produce ethanol. The overlapping interests of mushroom production and wood bioconversion make F. velutipes an attractive new model for fungal wood related studies. Here, we present the complete sequence of the F. velutipes genome. This is the first sequenced genome for a commercially produced edible mushroom that also degrades wood. The 35.6-Mb genome contained 12,218 predicted protein-encoding genes and 287 tRNA genes assembled into 11 scaffolds corresponding with the 11 chromosomes of strain KACC42780. The 88.4-kb mitochondrial genome contained 35 genes. Well-developed wood degrading machinery with strong potential for lignin degradation (69 auxiliary activities, formerly FOLymes) and carbohydrate degradation (392 CAZymes), along with 58 alcohol dehydrogenase genes were highly expressed in the mycelium, demonstrating the potential application of this organism to bioethanol production. Thus, the newly uncovered wood degrading capacity and sequential nature of this process in F. velutipes, offer interesting possibilities for more detailed studies on either lignin or (hemi-) cellulose degradation in complex wood substrates. The mutual interest in wood degradation by the mushroom industry and (ligno-)cellulose biomass related industries further increase the significance of F. velutipes as a new model. PMID:24714189

  5. Scanning electron microscopy and micro-analyses

    International Nuclear Information System (INIS)

    Brisset, F.; Repoux, L.; Ruste, J.; Grillon, F.; Robaut, F.

    2008-01-01

    Scanning electron microscopy (SEM) and the related micro-analyses are involved in extremely varied domains, from academic environments to industrial ones. The overall theoretical bases, the main technical characteristics, and some complements of information about practical usage and maintenance are developed in this book. High-vacuum and controlled-vacuum electron microscopes are thoroughly presented, as well as the latest generation of EDS (energy dispersive spectrometer) and WDS (wavelength dispersive spectrometer) micro-analysers. Besides these main topics, other analysis or observation techniques are approached, such as EBSD (electron backscattering diffraction), 3-D imaging, FIB (focussed ion beams), Monte-Carlo simulations, in-situ tests, etc. This book, in French, is the only one that treats this subject in such an exhaustive way. It is a fully updated version of a previous edition from 1979. It gathers the lectures given in 2006 at the summer school of Saint Martin d'Heres (France). Content: 1 - electron-matter interactions; 2 - characteristic X-radiation, Bremsstrahlung; 3 - electron guns in SEM; 4 - elements of electronic optics; 5 - vacuum techniques; 6 - detectors used in SEM; 7 - image formation and optimization in SEM; 7a - SEM practical instructions for use; 8 - controlled pressure microscopy; 8a - applications; 9 - energy selection X-spectrometers (energy dispersive spectrometers - EDS); 9a - EDS analysis; 9b - X-EDS mapping; 10 - technological aspects of WDS; 11 - processing of EDS and WDS spectra; 12 - X-microanalysis quantifying methods; 12a - quantitative WDS microanalysis of very light elements; 13 - statistics: precision and detection limits in microanalysis; 14 - analysis of stratified samples; 15 - crystallography applied to EBSD; 16 - EBSD: history, principle and applications; 16a - EBSD analysis; 17 - Monte Carlo simulation; 18 - insulating samples in SEM and X-ray microanalysis; 18a - insulating

  6. Comparative analyses of hydrological responses of two adjacent watersheds to climate variability and change using the SWAT model

    Directory of Open Access Journals (Sweden)

    S. Lee

    2018-01-01

    Full Text Available Water quality problems in the Chesapeake Bay Watershed (CBW) are expected to be exacerbated by climate variability and change. However, climate impacts on agricultural lands and resultant nutrient loads into surface water resources are largely unknown. This study evaluated the impacts of climate variability and change on two adjacent watersheds in the Coastal Plain of the CBW, using the Soil and Water Assessment Tool (SWAT) model. We prepared six climate sensitivity scenarios to assess the individual impacts of variations in CO2 concentration (590 and 850 ppm), precipitation increase (11 and 21 %), and temperature increase (2.9 and 5.0 °C), based on regional general circulation model (GCM) projections. Further, we considered the ensemble of five GCM projections (2085–2098) under the Representative Concentration Pathway (RCP) 8.5 scenario to evaluate simultaneous changes in CO2, precipitation, and temperature. Using SWAT model simulations from 2001 to 2014 as a baseline scenario, predicted hydrologic outputs (water and nitrate budgets) and crop growth were analyzed. Compared to the baseline scenario, a precipitation increase of 21 % and elevated CO2 concentration of 850 ppm significantly increased streamflow and nitrate loads by 50 and 52 %, respectively, while a temperature increase of 5.0 °C reduced streamflow and nitrate loads by 12 and 13 %, respectively. Crop biomass increased with elevated CO2 concentrations due to enhanced radiation- and water-use efficiency, while it decreased with precipitation and temperature increases. Over the GCM ensemble mean, annual streamflow and nitrate loads showed an increase of ∼ 70 % relative to the baseline scenario, due to elevated CO2 concentrations and precipitation increase. Different hydrological responses to climate change were observed from the two watersheds, due to contrasting land use and soil characteristics. The watershed with a larger percent of croplands demonstrated a

  7. A model using marginal efficiency of investment to analyse carbon and nitrogen interactions in terrestrial ecosystems (ACONITE Version 1)

    Science.gov (United States)

    Thomas, R. Q.; Williams, M.

    2014-04-01

    Carbon (C) and nitrogen (N) cycles are coupled in terrestrial ecosystems through multiple processes including photosynthesis, tissue allocation, respiration, N fixation, N uptake, and decomposition of litter and soil organic matter. Capturing the constraint of N on terrestrial C uptake and storage has been a focus of the Earth System modelling community. However there is little understanding of the trade-offs and sensitivities of allocating C and N to different tissues in order to optimize the productivity of plants. Here we describe a new, simple model of ecosystem C-N cycling and interactions (ACONITE), that builds on theory related to plant economics in order to predict key ecosystem properties (leaf area index, leaf C : N, N fixation, and plant C use efficiency) using emergent constraints provided by marginal returns on investment for C and/or N allocation. We simulated and evaluated steady-state ecosystem stocks and fluxes in three different forest ecosystems types (tropical evergreen, temperate deciduous, and temperate evergreen). Leaf C : N differed among the three ecosystem types (temperate deciduous plant traits. Gross primary productivity (GPP) and net primary productivity (NPP) estimates compared well to observed fluxes at the simulation sites. Simulated N fixation at steady-state, calculated based on relative demand for N and the marginal return on C investment to acquire N, was an order of magnitude higher in the tropical forest than in the temperate forest, consistent with observations. A sensitivity analysis revealed that parameterization of the relationship between leaf N and leaf respiration had the largest influence on leaf area index and leaf C : N. Also, a widely used linear leaf N-respiration relationship did not yield a realistic leaf C : N, while a more recently reported non-linear relationship performed better. A parameter governing how photosynthesis scales with day length had the largest influence on total vegetation C, GPP, and NPP

  8. Comparative analyses of hydrological responses of two adjacent watersheds to climate variability and change using the SWAT model

    Science.gov (United States)

    Lee, Sangchul; Yeo, In-Young; Sadeghi, Ali M.; McCarty, Gregory W.; Hively, Wells D.; Lang, Megan W.; Sharifi, Amir

    2018-01-01

    Water quality problems in the Chesapeake Bay Watershed (CBW) are expected to be exacerbated by climate variability and change. However, climate impacts on agricultural lands and resultant nutrient loads into surface water resources are largely unknown. This study evaluated the impacts of climate variability and change on two adjacent watersheds in the Coastal Plain of the CBW, using the Soil and Water Assessment Tool (SWAT) model. We prepared six climate sensitivity scenarios to assess the individual impacts of variations in CO2 concentration (590 and 850 ppm), precipitation increase (11 and 21 %), and temperature increase (2.9 and 5.0 °C), based on regional general circulation model (GCM) projections. Further, we considered the ensemble of five GCM projections (2085-2098) under the Representative Concentration Pathway (RCP) 8.5 scenario to evaluate simultaneous changes in CO2, precipitation, and temperature. Using SWAT model simulations from 2001 to 2014 as a baseline scenario, predicted hydrologic outputs (water and nitrate budgets) and crop growth were analyzed. Compared to the baseline scenario, a precipitation increase of 21 % and elevated CO2 concentration of 850 ppm significantly increased streamflow and nitrate loads by 50 and 52 %, respectively, while a temperature increase of 5.0 °C reduced streamflow and nitrate loads by 12 and 13 %, respectively. Crop biomass increased with elevated CO2 concentrations due to enhanced radiation- and water-use efficiency, while it decreased with precipitation and temperature increases. Over the GCM ensemble mean, annual streamflow and nitrate loads showed an increase of ˜ 70 % relative to the baseline scenario, due to elevated CO2 concentrations and precipitation increase. Different hydrological responses to climate change were observed from the two watersheds, due to contrasting land use and soil characteristics. The watershed with a larger percent of croplands demonstrated a greater increased rate of 5.2 kg N ha-1 in

  9. Improved Analyses and Forecasts of Snowpack, Runoff and Drought through Remote Sensing and Land Surface Modeling in Southeastern Europe

    Science.gov (United States)

    Matthews, D.; Brilly, M.; Gregoric, G.; Polajnar, J.; Kobold, M.; Zagar, M.; Knoblauch, H.; Staudinger, M.; Mecklenburg, S.; Lehning, M.; Schweizer, J.; Balint, G.; Cacic, I.; Houser, P.; Pozzi, W.

    2008-12-01

    European hydrometeorological services and research centers are faced with increasing challenges from extremes of weather and climate that require significant investments in new technology and better utilization of existing human and natural resources to provide improved forecasts. Major advances in remote sensing, observation networks, data assimilation, numerical modeling, and communications continue to improve our ability to disseminate information to decision-makers and stakeholders. This paper identifies gaps in current technologies, key research and decision-maker teams, and recommends means for moving forward through focused applied research and integration of results into decision support tools. This paper reports on the WaterNet - NASA Water Cycle Solutions Network contacts in Europe and summarizes progress in improving water cycle related decision-making using NASA research results. Products from the Hydrologic Sciences Branch at NASA Goddard Space Flight Center, including the Land Information System (LIS) land surface models (LSMs), together with SPoRT, CREW, European Space Agency (ESA) and Joint Research Centre (JRC) natural hazards products, products from the Swiss Federal Institute for Snow and Avalanche Research (SLF), and others are discussed. They will be used in collaboration with the ESA and the European Commission to provide solutions for improved prediction of water supplies and stream flow, and droughts and floods, and snow avalanches in the major river basins serviced by EARS, ZAMG, SLF, Vituki Consult, and other European forecast centers. This region of Europe includes the Alps and Carpathian Mountains and is an area of extreme topography with abrupt 2000 m mountains adjacent to the Adriatic Sea. These extremes result in the highest precipitation (> 5000 mm) in Europe in Montenegro and low precipitation of 300-400 mm at the mouth of the Danube during droughts. The current flood and drought forecasting systems have a spatial resolution of 9 km, which is currently being

  10. Analysing recent socioeconomic trends in coronary heart disease mortality in England, 2000-2007: a population modelling study.

    Directory of Open Access Journals (Sweden)

    Madhavi Bajekal

    Full Text Available Coronary heart disease (CHD) mortality in England fell by approximately 6% every year between 2000 and 2007. However, rates fell differentially between social groups with inequalities actually widening. We sought to describe the extent to which this reduction in CHD mortality was attributable to changes in either levels of risk factors or treatment uptake, both across and within socioeconomic groups. A widely used and replicated epidemiological model was used to synthesise estimates stratified by age, gender, and area deprivation quintiles for the English population aged 25 and older between 2000 and 2007. Mortality rates fell, with approximately 38,000 fewer CHD deaths in 2007. The model explained about 86% (95% uncertainty interval: 65%-107%) of this mortality fall. Decreases in major cardiovascular risk factors contributed approximately 34% (21%-47%) to the overall decline in CHD mortality: ranging from about 44% (31%-61%) in the most deprived to 29% (16%-42%) in the most affluent quintile. The biggest contribution came from a substantial fall in systolic blood pressure in the population not on hypertension medication (29%; 18%-40%), more so in deprived (37%) than in affluent (25%) areas. Other risk factor contributions were relatively modest across all social groups: total cholesterol (6%), smoking (3%), and physical activity (2%). Furthermore, these benefits were partly negated by mortality increases attributable to rises in body mass index and diabetes (-9%; -17% to -3%), particularly in more deprived quintiles. Treatments accounted for approximately 52% (40%-70%) of the mortality decline, equitably distributed across all social groups. Lipid reduction (14%), chronic angina treatment (13%), and secondary prevention (11%) made the largest medical contributions. The model suggests that approximately half the recent CHD mortality fall in England was attributable to improved treatment uptake. This benefit occurred evenly across all social groups. However

  11. Comparative analyses of hydrological responses of two adjacent watersheds to climate variability and change using the SWAT model

    Science.gov (United States)

    Lee, Sangchul; Yeo, In-Young; Sadeghi, Ali M.; McCarty, Gregory W.; Hively, Wells; Lang, Megan W.; Sharifi, Amir

    2018-01-01

    Water quality problems in the Chesapeake Bay Watershed (CBW) are expected to be exacerbated by climate variability and change. However, climate impacts on agricultural lands and resultant nutrient loads into surface water resources are largely unknown. This study evaluated the impacts of climate variability and change on two adjacent watersheds in the Coastal Plain of the CBW, using the Soil and Water Assessment Tool (SWAT) model. We prepared six climate sensitivity scenarios to assess the individual impacts of variations in CO2 concentration (590 and 850 ppm), precipitation increase (11 and 21 %), and temperature increase (2.9 and 5.0 °C), based on regional general circulation model (GCM) projections. Further, we considered the ensemble of five GCM projections (2085–2098) under the Representative Concentration Pathway (RCP) 8.5 scenario to evaluate simultaneous changes in CO2, precipitation, and temperature. Using SWAT model simulations from 2001 to 2014 as a baseline scenario, predicted hydrologic outputs (water and nitrate budgets) and crop growth were analyzed. Compared to the baseline scenario, a precipitation increase of 21 % and elevated CO2 concentration of 850 ppm significantly increased streamflow and nitrate loads by 50 and 52 %, respectively, while a temperature increase of 5.0 °C reduced streamflow and nitrate loads by 12 and 13 %, respectively. Crop biomass increased with elevated CO2 concentrations due to enhanced radiation- and water-use efficiency, while it decreased with precipitation and temperature increases. Over the GCM ensemble mean, annual streamflow and nitrate loads showed an increase of ∼ 70 % relative to the baseline scenario, due to elevated CO2 concentrations and precipitation increase. Different hydrological responses to climate change were observed from the two watersheds, due to contrasting land use and soil characteristics. The watershed with a larger percent of croplands demonstrated a greater

  12. The mental health care model in Brazil: analyses of the funding, governance processes, and mechanisms of assessment.

    Science.gov (United States)

    Trapé, Thiago Lavras; Campos, Rosana Onocko

    2017-03-23

    This study aims to analyze the current status of the mental health care model of the Brazilian Unified Health System, according to its funding, governance processes, and mechanisms of assessment. We have carried out a documentary analysis of the ordinances, technical reports, conference reports, normative resolutions, and decrees from 2009 to 2014. This is a time of consolidation of the psychosocial model, with expansion of the health care network and inversion of the funding for community services with a strong emphasis on the area of crack cocaine and other drugs. Mental health is an underfunded area within the chronically underfunded Brazilian Unified Health System. The governance model constrains the progress of essential services, which creates the need for the incorporation of a process of regionalization of the management. The mechanisms of assessment are not incorporated into the health policy in the bureaucratic field. There is a need to expand the global funding of the area of health, specifically mental health, which has been shown to be a successful policy. The current focus of the policy seems to be archaic in relation to the precepts of the psychosocial model. Mechanisms of assessment need to be expanded.

  13. Studies and analyses of the space shuttle main engine. Failure information propagation model data base and software

    Science.gov (United States)

    Tischer, A. E.

    1987-01-01

    The failure information propagation model (FIPM) data base was developed to store and manipulate the large amount of information anticipated for the various Space Shuttle Main Engine (SSME) FIPMs. The organization and structure of the FIPM data base is described, including a summary of the data fields and key attributes associated with each FIPM data file. The menu-driven software developed to facilitate and control the entry, modification, and listing of data base records is also discussed. The transfer of the FIPM data base and software to the NASA Marshall Space Flight Center is described. Complete listings of all of the data base definition commands and software procedures are included in the appendixes.

  14. Construct validation of health-relevant personality traits: interpersonal circumplex and five-factor model analyses of the Aggression Questionnaire.

    Science.gov (United States)

    Gallo, L C; Smith, T W

    1998-01-01

    The general literature on personality traits as risk factors for physical illness--as well as the specific literature on health consequences of anger, hostility, and aggressive behavior--often suffers from incomplete or inconsistent construct validation of personality measures. This study illustrates the utility of two conceptual tools in this regard--the five-factor model and the interpersonal circumplex. The similarities and differences among anger, hostility, verbal aggressiveness, and physical aggressiveness as measured by the Buss and Perry (1992) Aggression Questionnaire were identified. Results support the interpretation of anger and hostility as primarily reflecting neurotic hostility and, to a lesser extent, antagonistic hostility. In contrast, verbal and physical aggressiveness can be seen as primarily reflecting antagonistic hostility and, to a lesser extent, neurotic hostility. Further, verbal aggressiveness was associated with hostile dominance, whereas hostility was associated with hostile submissiveness. These findings identify potentially important distinctions among these related constructs and illustrate the potential integrative value of standard validation procedures.
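
    Circumplex placements such as "hostile dominance" versus "hostile submissiveness" are conventionally obtained by projecting a scale onto the Dominance and Love (affiliation) axes and converting the projection to an angle. The sketch below shows that standard projection step; the correlations are invented for illustration and are not the study's data.

    # Sketch: angular location of a scale on the interpersonal circumplex from its
    # correlations with the Dominance (vertical) and Love (horizontal) axes.
    # The example correlations are made up, not taken from this study.
    import math

    def circumplex_angle(r_dominance: float, r_love: float) -> float:
        """Angle in degrees, measured counter-clockwise from the Love (friendly) axis."""
        return math.degrees(math.atan2(r_dominance, r_love)) % 360.0

    print(circumplex_angle(r_dominance=0.30, r_love=-0.40))   # ~143 deg: hostile-dominant octants
    print(circumplex_angle(r_dominance=-0.25, r_love=-0.35))  # ~216 deg: hostile-submissive octants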

  15. Analysing the influence of FSP process parameters on IGC susceptibility of AA5083 using Sugeno – Fuzzy model

    Science.gov (United States)

    Jayakarthick, C.; Povendhan, A. P.; Vaira Vignesh, R.; Padmanaban, R.

    2018-02-01

    Aluminium alloy AA5083 was friction stir processed to improve its intergranular corrosion (IGC) resistance. FSP trials were performed by varying the process parameters as per Taguchi's L18 orthogonal array. The IGC resistance of the friction stir processed specimens was determined by immersing them in concentrated nitric acid and measuring the mass loss per unit area. Results indicate that dispersion and partial dissolution of the secondary phase increased the IGC resistance of the processed specimens. A Sugeno fuzzy model was developed to study the effect of the FSP process parameters on IGC susceptibility. Tool rotation speed, tool traverse speed, and shoulder diameter all have a significant effect on the IGC susceptibility of the friction stir processed specimens.
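
    A Sugeno (Takagi-Sugeno-Kang) model fuzzifies the inputs, fires a small rule base, and returns the firing-strength-weighted average of crisp rule outputs. The sketch below shows that inference step for two of the three parameters named above (shoulder diameter is omitted to keep the rule base small); the membership functions, rule consequents, and example values are invented for illustration and are not the fitted model from the paper.

    # Sketch of zero-order Sugeno fuzzy inference for IGC mass loss per unit area
    # from FSP parameters. Membership functions, rule consequents, and numbers are
    # hypothetical; shoulder diameter is left out of this toy rule base.
    def tri(x, a, b, c):
        """Triangular membership function rising from a, peaking at b, falling to c."""
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

    def low_rpm(rpm):  return tri(rpm, 600, 800, 1000)
    def high_rpm(rpm): return tri(rpm, 900, 1100, 1300)
    def low_speed(v):  return tri(v, 20, 40, 60)
    def high_speed(v): return tri(v, 50, 70, 90)

    def predict_mass_loss(rpm, traverse):
        # Each rule: (antecedent firing strength, crisp consequent mass loss).
        rules = [
            (min(low_rpm(rpm),  low_speed(traverse)),  22.0),
            (min(high_rpm(rpm), low_speed(traverse)),  15.0),
            (min(low_rpm(rpm),  high_speed(traverse)), 28.0),
            (min(high_rpm(rpm), high_speed(traverse)), 18.0),
        ]
        total = sum(w for w, _ in rules)
        if total == 0.0:
            return None  # inputs fall outside the fuzzified ranges
        return sum(w * z for w, z in rules) / total  # weighted-average defuzzification

    print(predict_mass_loss(rpm=950, traverse=45))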

  16. Study of interactions between metal ions and protein model compounds by energy decomposition analyses and the AMOEBA force field

    Science.gov (United States)

    Jing, Zhifeng; Qi, Rui; Liu, Chengwen; Ren, Pengyu

    2017-10-01

    The interactions between metal ions and proteins are ubiquitous in biology. The selective binding of metal ions has a variety of regulatory functions. Therefore, there is a need to understand the mechanism of protein-ion binding. The interactions involving metal ions are complicated in nature, where short-range charge-penetration, charge transfer, polarization, and many-body effects all contribute significantly, and a quantitative description of all these interactions is lacking. In addition, it is unclear how well current polarizable force fields can capture these energy terms and whether these polarization models are good enough to describe the many-body effects. In this work, two energy decomposition methods, absolutely localized molecular orbitals and symmetry-adapted perturbation theory, were utilized to study the interactions between Mg2+/Ca2+ and model compounds for amino acids. Comparison of individual interaction components revealed that while there are significant charge-penetration and charge-transfer effects in Ca complexes, these effects can be captured by the van der Waals (vdW) term in the AMOEBA force field. The electrostatic interaction in Mg complexes is well described by AMOEBA since the charge penetration is small, but the distance-dependent polarization energy is problematic. Many-body effects were shown to be important for protein-ion binding. In the absence of many-body effects, highly charged binding pockets will be over-stabilized, and the pockets will always favor Mg and thus lose selectivity. Therefore, many-body effects must be incorporated in the force field in order to predict the structure and energetics of metalloproteins. Also, the many-body effects of charge transfer in Ca complexes were found to be non-negligible. The absorption of charge-transfer energy into the additive vdW term was a main source of error for the AMOEBA many-body interaction energies.
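
    For reference, the many-body effects discussed above are usually quantified with the standard many-body expansion of the cluster interaction energy; the decomposition below is the textbook definition, not a formula reproduced from this paper.

    E_{\mathrm{int}}(1,\dots,N) = E_{1\cdots N} - \sum_{i} E_{i}
                                = \sum_{i<j} \Delta E_{ij} + \sum_{i<j<k} \Delta E_{ijk} + \cdots
    \Delta E_{ij}  = E_{ij} - E_{i} - E_{j}
    \Delta E_{ijk} = E_{ijk} - E_{i} - E_{j} - E_{k} - \Delta E_{ij} - \Delta E_{ik} - \Delta E_{jk}

    Truncating the expansion after the pairwise terms is what a purely additive force field does implicitly; the three-body and higher terms are the "many-body effects" that a polarization model such as AMOEBA's is meant to recover.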

  17. Expression Analyses of ABCDE Model Genes and Changes in Levels of Endogenous Hormones in Chinese Cabbage Exhibiting Petal-Loss

    Directory of Open Access Journals (Sweden)

    Chuan MENG

    2017-07-01

    Abnormal formation of floral organs affects plant reproduction and can directly interfere with the progress of breeding programs. Using PCR amplification, the ABCDE model genes BraAP2, BraAP3, BraPI, BraAG, BraSHP, and BraSEP were isolated from Chinese cabbage (Brassica rapa L. ssp. pekinensis). We examined floral buds at six developmental stages, comparing a line with normal flowering (A-8) and two mutant lines exhibiting petal loss (A-16 and A-17). The expression of the ABCDE model genes was analyzed by qRT-PCR. Comparing flower buds of the petal-loss plants with those of normal plants, expression of the A-class gene BraAP2 was significantly decreased during the first to fourth stages, expression of the C-class gene BraAG was significantly decreased during the first to fifth stages, and expression of the D-class gene BraSHP was significantly decreased during the first to third stages. Furthermore, expression of the B-class genes BraAP3 and BraPI and the E-class gene BraSEP was significantly decreased during all six stages in petal-loss plants compared with normal plants. Enzyme-linked immunosorbent assays detected nine endogenous phytohormones during all stages examined. Except for the second-stage and third-stage buds, levels of the auxin IAA and the cytokinin dhZR were always higher in the petal-loss plants than in the normal plants at corresponding time points. Meanwhile, concentrations of GA1+3 at the first, fourth, and fifth stages were higher in the petal-loss plants than in the normal plants. Our results provide a theoretical basis for future exploration of the molecular mechanism that determines petal loss and the effects that hormones have on such development in Chinese cabbage plants.
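
    The qRT-PCR comparisons between the petal-loss and normal lines are typically reported as fold changes; the sketch below assumes the common 2^(-ΔΔCt) method, with a hypothetical reference gene and made-up Ct values, since the abstract does not state the quantification scheme actually used.

    # Sketch: relative expression by the 2^(-ΔΔCt) method (an assumed, standard scheme;
    # the abstract does not say how expression was quantified). All Ct values are made up.
    def fold_change(ct_target_mut, ct_ref_mut, ct_target_norm, ct_ref_norm):
        """Fold change of a target gene in the petal-loss line relative to the normal line."""
        delta_ct_mut = ct_target_mut - ct_ref_mut       # normalize to the reference gene
        delta_ct_norm = ct_target_norm - ct_ref_norm
        delta_delta_ct = delta_ct_mut - delta_ct_norm
        return 2.0 ** (-delta_delta_ct)

    # Hypothetical Ct values for BraAP3 at one bud stage, against a hypothetical
    # reference gene (e.g. an actin homolog):
    print(fold_change(ct_target_mut=27.4, ct_ref_mut=18.1,
                      ct_target_norm=24.9, ct_ref_norm=18.0))  # < 1 indicates reduced expression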

  18. Annual International DIC Society Conference and SEM Fall Conference

    CERN Document Server

    Reu, Phillip

    2017-01-01

    This collection represents a single volume of technical papers presented at the Annual International DIC Society Conference and SEM Fall Conference, organized by the Society for Experimental Mechanics and Sandia National Laboratories and held in Philadelphia, PA, November 7-10, 2016. The volume presents early findings from experimental, standards-development, and other investigations concerning digital image correlation, an important area within Experimental Mechanics. Digital Image Correlation has been an integral track within the SEM Annual Conference, spearheaded by Professor Michael Sutton from the University of South Carolina. In 2016, SEM and Sandia joined their collaborative strengths to launch a standing fall meeting focusing specifically on developments in the area of Digital Image Correlation. The contributed papers in this volume span numerous technical aspects of DIC, including standards development for the industry.

  19. A Data-Driven Approach to SEM Development at a Two-Year College

    Science.gov (United States)

    Pirius, Landon K.

    2014-01-01

    This article explores implementation of strategic enrollment management (SEM) at a two-year college and why SEM