WorldWideScience

Sample records for modeling sem method

  1. SEM-microphotogrammetry, a new take on an old method for generating high-resolution 3D models from SEM images.

    Science.gov (United States)

    Ball, A D; Job, P A; Walker, A E L

    2017-08-01

    The method we present here uses a scanning electron microscope programmed via macros to automatically capture dozens of images at suitable angles to generate accurate, detailed three-dimensional (3D) surface models with micron-scale resolution. We demonstrate that it is possible to use these Scanning Electron Microscope (SEM) images in conjunction with commercially available software originally developed for photogrammetry reconstructions from Digital Single Lens Reflex (DSLR) cameras and to reconstruct 3D models of the specimen. These 3D models can then be exported as polygon meshes and eventually 3D printed. This technique offers the potential to obtain data suitable to reconstruct very tiny features (e.g. diatoms, butterfly scales and mineral fabrics) at nanometre resolution. Ultimately, we foresee this as being a useful tool for better understanding spatial relationships at very high resolution. However, our motivation is also to use it to produce 3D models to be used in public outreach events and exhibitions, especially for the blind or partially sighted. © 2017 The Authors Journal of Microscopy © 2017 Royal Microscopical Society.
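
    As a rough illustration of the capture side, a macro of this kind only needs to enumerate enough overlapping stage poses for the photogrammetry software to work with. The sketch below is a hypothetical pose generator, assuming an arbitrary tilt range and rotation count rather than the paper's actual settings.

```python
# Hypothetical sketch of a capture sequence for SEM microphotogrammetry.
# This is NOT the authors' macro: the tilt range and number of rotation
# steps are arbitrary illustrative choices.
import itertools

def capture_poses(tilts_deg=(0, 15, 30), n_rotations=12):
    """Yield (tilt, rotation) stage poses that give overlapping views."""
    for tilt, k in itertools.product(tilts_deg, range(n_rotations)):
        yield tilt, k * 360.0 / n_rotations

for tilt, rot in capture_poses():
    # A real SEM macro would drive the stage and trigger an exposure here.
    print(f"tilt={tilt:5.1f} deg  rotation={rot:6.1f} deg")
```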

  2. Structuring Consumer Preferences with the SEM Method

    OpenAIRE

    Rosa, Franco

    2002-01-01

    Structuring preferences has been developed with econometric models using flexible functional parametric forms and by exploring perceptions about expressed and latent needs using different multivariate approaches. The purpose of this research is to explore the demand for a new drink using means-end chain (MEC) theory and a multivariate SEM procedure. The first part is dedicated to a description of specialty foods and their capacity to create new niche markets. The MEC theory is introduced to ex...

  3. semPLS: Structural Equation Modeling Using Partial Least Squares

    Directory of Open Access Journals (Sweden)

    Armin Monecke

    2012-05-01

    Full Text Available Structural equation models (SEM) are very popular in many disciplines. The partial least squares (PLS) approach to SEM offers an alternative to covariance-based SEM, which is especially suited for situations when data are not normally distributed. PLS path modelling is referred to as a soft modeling technique with minimum demands regarding measurement scales, sample sizes and residual distributions. The semPLS package provides the capability to estimate PLS path models within the R programming environment. Different setups for the estimation of factor scores can be used. Furthermore, it contains modular methods for the computation of bootstrap confidence intervals, model parameters and several quality indices. Various plot functions help to evaluate the model. The well-known mobile phone dataset from marketing research is used to demonstrate the features of the package.
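
    For readers unfamiliar with PLS path modelling, the sketch below implements the classic iteration (centroid inner scheme, Mode A outer weights) in plain NumPy. It illustrates the kind of estimation semPLS performs; it is not the package's code or API, and real analyses should use the package itself.

```python
# Generic PLS path-modeling iteration (centroid inner scheme, Mode A outer
# weights) in plain NumPy -- an illustration of the kind of estimation
# semPLS performs, not the package's actual code or API.
import numpy as np

def pls_pm(blocks, adjacency, tol=1e-6, max_iter=300):
    """blocks: list of (n x p_j) indicator matrices, one per latent variable.
    adjacency: symmetric 0/1 matrix marking connected latent variables."""
    X = [(b - b.mean(0)) / b.std(0) for b in blocks]            # standardize
    w = [np.ones(b.shape[1]) / np.sqrt(b.shape[1]) for b in X]  # start weights
    for _ in range(max_iter):
        Y = np.column_stack([x @ wj for x, wj in zip(X, w)])
        Y /= Y.std(0)                                  # unit-variance scores
        C = np.corrcoef(Y, rowvar=False)
        Z = Y @ (np.sign(C) * adjacency)               # inner (centroid) proxies
        w_new = [x.T @ Z[:, j] / len(Z) for j, x in enumerate(X)]  # Mode A
        w_new = [wj / np.linalg.norm(wj) for wj in w_new]
        if max(np.abs(a - b).max() for a, b in zip(w_new, w)) < tol:
            w = w_new
            break
        w = w_new
    scores = np.column_stack([x @ wj for x, wj in zip(X, w)])
    return scores, w
```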

  4. Quantitative voltage contrast method for electron irradiated insulators in SEM

    Energy Technology Data Exchange (ETDEWEB)

    Belhaj, M [UR MMA INSAT Centre Urbain Nord, BP 676-1080, Tunis (Tunisia); Jbara, O [LASSI/GRESPI, Faculte des Sciences, BP 1039, 51687 Reims Cedex 2 (France); Fakhfakh, S [LaMaCop, Faculte des sciences de SFAX, Route Soukra Km 3, BP 802, CP 3018 Sfax (Tunisia)], E-mail: mohamed.belhaj@free.fr

    2008-09-07

    A surface potential mapping method for electron irradiated insulators in the scanning electron microscope (SEM) is proposed. This method, based on the use of a highly compact electrostatic toroidal spectrometer specially adapted to SEM applications, is able to monitor the spatial variation of surface potentials of strongly negatively charged materials. The capabilities of this method are tested on a made-up heterogeneous sample. First results prove that the method is particularly appropriate for the reconstitution of the surface potential distribution.

  5. Quantitative voltage contrast method for electron irradiated insulators in SEM

    International Nuclear Information System (INIS)

    Belhaj, M; Jbara, O; Fakhfakh, S

    2008-01-01

    A surface potential mapping method for electron irradiated insulators in the scanning electron microscope (SEM) is proposed. This method, based on the use of a highly compact electrostatic toroidal spectrometer specially adapted to SEM applications, is able to monitor the spatial variation of surface potentials of strongly negatively charged materials. The capabilities of this method are tested on a made-up heterogeneous sample. First results prove that the method is particularly appropriate for the reconstitution of the surface potential distribution.

  6. The use of Structural Equation Modelling (SEM) in Capital Structure ...

    African Journals Online (AJOL)

    analytic structural equation modelling (SEM) methodology. The SEM methodology allows the use of more than one indicator for a latent variable. It also estimates the latent variables and accommodates reciprocal causation and interdependences ...

  7. Design and Use of the Simple Event Model (SEM)

    NARCIS (Netherlands)

    van Hage, W.R.; Malaisé, V.; Segers, R.H.; Hollink, L.

    2011-01-01

    Events have become central elements in the representation of data from domains such as history, cultural heritage, multimedia and geography. The Simple Event Model (SEM) is created to model events in these various domains, without making assumptions about the domain-specific vocabularies used. SEM

  8. Continuous time modeling of panel data by means of SEM

    NARCIS (Netherlands)

    Oud, J.H.L.; Delsing, M.J.M.H.; Montfort, C.A.G.M.; Oud, J.H.L.; Satorra, A.

    2010-01-01

    After a brief history of continuous time modeling and its implementation in panel analysis by means of structural equation modeling (SEM), the problems of discrete time modeling are discussed in detail. This is done by means of the popular cross-lagged panel design. Next, the exact discrete model

  9. From patterns to causal understanding: Structural equation modeling (SEM) in soil ecology

    Science.gov (United States)

    Eisenhauer, Nico; Powell, Jeff R; Grace, James B.; Bowker, Matthew A.

    2015-01-01

    In this perspectives paper we highlight a heretofore underused statistical method in soil ecological research, structural equation modeling (SEM). SEM is commonly used in the general ecological literature to develop causal understanding from observational data, but has been more slowly adopted by soil ecologists. We provide some basic information on the many advantages and possibilities associated with using SEM and provide some examples of how SEM can be used by soil ecologists to shift focus from describing patterns to developing causal understanding and inspiring new types of experimental tests. SEM is a promising tool to aid the growth of soil ecology as a discipline, particularly by supporting research that is increasingly hypothesis-driven and interdisciplinary, thus shining light into the black box of interactions belowground.

  10. Maximum Likelihood Dynamic Factor Modeling for Arbitrary "N" and "T" Using SEM

    Science.gov (United States)

    Voelkle, Manuel C.; Oud, Johan H. L.; von Oertzen, Timo; Lindenberger, Ulman

    2012-01-01

    This article has 3 objectives that build on each other. First, we demonstrate how to obtain maximum likelihood estimates for dynamic factor models (the direct autoregressive factor score model) with arbitrary "T" and "N" by means of structural equation modeling (SEM) and compare the approach to existing methods. Second, we go beyond standard time…

  11. Mathematical model of the seismic electromagnetic signals (SEMS) in non crystalline substances

    Energy Technology Data Exchange (ETDEWEB)

    Dennis, L. C. C.; Yahya, N.; Daud, H.; Shafie, A. [Electromagnetic cluster, Universiti Teknologi Petronas, 31750 Tronoh, Perak (Malaysia)

    2012-09-26

    The mathematical model of seismic electromagnetic waves in non-crystalline substances is developed and the solutions are discussed to show the possibility of improving the electromagnetic waves, especially the electric field. The shear stress of the medium, written as a fourth-order tensor, gives the equation of motion. Analytic methods are selected for the solutions, which are written in Hansen vector form. From the simulated SEMS, the frequency of the seismic waves has significant effects on the SEMS propagation characteristics. EM waves transform into SEMS, or energized seismic waves. Traveling distance increases once the frequency of the seismic waves increases from 100% to 1000%. SEMS with greater seismic frequency will give seismic-like waves, but greater energy is embedded by the EM waves and hence the waves travel a further distance.

  12. Evaluation of bone formation in calcium phosphate scaffolds with μCT-method validation using SEM.

    Science.gov (United States)

    Lewin, S; Barba, A; Persson, C; Franch, J; Ginebra, M-P; Öhman-Mägi, C

    2017-10-05

    There is a plethora of calcium phosphate (CaP) scaffolds used as synthetic substitutes for bone grafts. Scaffold performance is often evaluated from the quantity of bone formed within or in direct contact with the scaffold. Micro-computed tomography (μCT) allows three-dimensional evaluation of bone formation inside scaffolds. However, the almost identical x-ray attenuation of CaP and bone obstructs the separation of these phases in μCT images. Commonly, segmentation of bone in μCT images is based on gray-scale intensity, with manually determined global thresholds. However, image analysis methods, and methods for manual thresholding in particular, lack standardization and may consequently suffer from subjectivity. The aim of the present study was to provide a methodological framework for addressing these issues. Bone formation in two types of CaP scaffold architectures (foamed and robocast), obtained from a larger animal study (a 12-week canine model), was evaluated by μCT. In addition, cross-sectional scanning electron microscopy (SEM) images were acquired as references to determine thresholds and to validate the result. μCT datasets were registered to the corresponding SEM reference. Global thresholds were then determined by quantitatively correlating the area fractions in the μCT image with the area fractions in the corresponding SEM image. For comparison, area fractions were also quantified using global thresholds determined manually by two different approaches. In the validation, the manually determined thresholds resulted in large average errors in area fraction (up to 17%), whereas for the evaluation using SEM references, the errors were estimated to be less than 3%. Furthermore, it was found that basing the thresholds on one single SEM reference gave lower errors than determining them manually. This study provides an objective, robust and less error-prone method to determine global thresholds for the evaluation of bone formation in
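
    The threshold-calibration step lends itself to a compact illustration. The sketch below is a simplification assuming a single registered μCT slice and a bone area fraction already measured on the SEM reference; it scans candidate gray levels and keeps the one whose area fraction best matches the reference. It is not the study's code.

```python
# Sketch of the threshold-calibration idea: choose the global gray-level
# threshold whose bone area fraction in the registered uCT slice best
# matches the area fraction measured on the SEM reference. The inputs and
# single-slice setup are assumptions for illustration.
import numpy as np

def calibrate_threshold(uct_slice, sem_bone_fraction, candidates=range(256)):
    """uct_slice: 2-D uint8 gray-level image registered to the SEM reference.
    sem_bone_fraction: bone area fraction measured on the SEM image."""
    best_t, best_err = None, np.inf
    for t in candidates:
        frac = (uct_slice >= t).mean()     # area fraction above threshold
        err = abs(frac - sem_bone_fraction)
        if err < best_err:
            best_t, best_err = t, err
    return best_t
```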

  13. SEM Based CARMA Time Series Modeling for Arbitrary N.

    Science.gov (United States)

    Oud, Johan H L; Voelkle, Manuel C; Driver, Charles C

    2018-01-01

    This article explains in detail the state space specification and estimation of first and higher-order autoregressive moving-average models in continuous time (CARMA) in an extended structural equation modeling (SEM) context for N = 1 as well as N > 1. To illustrate the approach, simulations will be presented in which a single panel model (T = 41 time points) is estimated for a sample of N = 1,000 individuals as well as for samples of N = 100 and N = 50 individuals, followed by estimating 100 separate models for each of the one-hundred N = 1 cases in the N = 100 sample. Furthermore, we will demonstrate how to test the difference between the full panel model and each N = 1 model by means of a subject-group-reproducibility test. Finally, the proposed analyses will be applied in an empirical example, in which the relationships between mood at work and mood at home are studied in a sample of N = 55 women. All analyses are carried out by ctsem, an R-package for continuous time modeling, interfacing to OpenMx.
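
    The key link between such continuous-time models and discrete-time panel observations is the matrix exponential: the autoregressive effect implied over an observation interval dt is expm(A·dt) for drift matrix A. A minimal sketch with an invented drift matrix (not values from the article):

```python
# The continuous-to-discrete-time link underlying SEM-based CARMA modeling:
# for dx/dt = A*x + noise, the exact discrete-time autoregression over an
# interval dt is expm(A*dt). The drift matrix below is invented.
import numpy as np
from scipy.linalg import expm

A = np.array([[-0.5, 0.1],
              [0.2, -0.3]])        # continuous-time drift matrix (assumed)
for dt in (0.5, 1.0, 2.0):         # unequal observation intervals
    print(f"dt = {dt}:\n{expm(A * dt)}")   # implied discrete-time effects
```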

  14. Morphological modelling of three-phase microstructures of anode layers using SEM images.

    Science.gov (United States)

    Abdallah, Bassam; Willot, François; Jeulin, Dominique

    2016-07-01

    A general method is proposed to model 3D microstructures representative of the three-phase anode layers used in fuel cells. The models are based on SEM images of cells with varying morphologies. The materials are first characterized using three morphological measurements: (cross-)covariances, granulometry and linear erosion. They are measured on segmented SEM images, for each of the three phases. Second, a generic model for three-phase materials is proposed. The model is based on two independent underlying random sets, which are otherwise arbitrary. The validity of this model is verified using the cross-covariance functions of the various phases. In a third step, several types of Boolean random sets and plurigaussian models are considered for the unknown underlying random sets. Overall, good agreement is found between the SEM images and the three-phase models based on plurigaussian random sets, for all morphological measurements considered in the present work: covariances, granulometry and linear erosion. The spatial distribution and shapes of the phases produced by the plurigaussian model are visually very close to the real material. Furthermore, the proposed models require no numerical optimization and are straightforward to generate using the covariance functions measured on the SEM images. © 2016 The Authors Journal of Microscopy © 2016 Royal Microscopical Society.

  15. AxiSEM3D: broadband seismic wavefields in 3-D aspherical Earth models

    Science.gov (United States)

    Leng, K.; Nissen-Meyer, T.; Zad, K. H.; van Driel, M.; Al-Attar, D.

    2017-12-01

    Seismology is the primary tool for data-informed inference of Earth structure and dynamics. Simulating seismic wave propagation at a global scale is fundamental to seismology, but remains one of the most challenging problems in scientific computing, because of both the multiscale nature of Earth's interior and the observable frequency band of seismic data. We present a novel numerical method to simulate global seismic wave propagation in realistic 3-D Earth models. Our method, named AxiSEM3D, is a hybrid of the spectral element method and the pseudospectral method. It reduces the azimuthal dimension of wavefields by means of a global Fourier series parameterization, in which the number of terms can be locally adapted to the inherent azimuthal smoothness of the wavefields. AxiSEM3D allows not only for material heterogeneities, such as velocity, density, anisotropy and attenuation, but also for finite undulations on radial discontinuities, both solid-solid and solid-fluid, and thereby a variety of aspherical Earth features such as ellipticity, topography, variable crustal thickness, and core-mantle boundary topography. Such interface undulations are equivalently interpreted as material perturbations of the contiguous media, based on the "particle relabelling transformation". Efficiency comparisons show that AxiSEM3D can be 1 to 3 orders of magnitude faster than conventional 3-D methods, with the speedup increasing with simulation frequency and decreasing with model complexity, but for all realistic structures the speedup remains at least one order of magnitude. The observable frequency range of global seismic data (up to 1 Hz) has been covered for wavefield modelling on a 3-D Earth model with reasonable computing resources. We show an application of surface wave modelling within a state-of-the-art global crustal model (Crust1.0), with the synthetics compared to real data. The high-performance C++ code is released at github.com/AxiSEM3D/AxiSEM3D.
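
    The azimuthal reduction at the heart of AxiSEM3D can be illustrated in a few lines: a field that is smooth in azimuth needs only a few Fourier terms, and the truncation order can be chosen adaptively. The sketch below uses an invented field and tolerance and shows the parameterization idea only, not the solver.

```python
# Illustration of the azimuthal Fourier reduction behind AxiSEM3D: a field
# u(phi) at one (s, z) point is expanded in a Fourier series and only the
# orders its smoothness requires are kept. Field and tolerance are invented.
import numpy as np

nphi = 64
phi = np.linspace(0.0, 2 * np.pi, nphi, endpoint=False)
u = 1.0 + 0.5 * np.cos(phi) + 0.05 * np.sin(3 * phi)  # smooth azimuthal field

coeffs = np.fft.rfft(u) / nphi                        # Fourier coefficients
keep = np.abs(coeffs) > 1e-8 * np.abs(coeffs).max()   # adaptive truncation
print("retained azimuthal orders:", np.nonzero(keep)[0])   # -> [0 1 3]
```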

  16. GW-SEM: A Statistical Package to Conduct Genome-Wide Structural Equation Modeling.

    Science.gov (United States)

    Verhulst, Brad; Maes, Hermine H; Neale, Michael C

    2017-05-01

    Improving the accuracy of phenotyping through the use of advanced psychometric tools will increase the power to find significant associations with genetic variants and expand the range of possible hypotheses that can be tested on a genome-wide scale. Multivariate methods, such as structural equation modeling (SEM), are valuable in the phenotypic analysis of psychiatric and substance use phenotypes, but these methods have not been integrated into standard genome-wide association analyses because fitting a SEM at each single nucleotide polymorphism (SNP) along the genome was hitherto considered to be too computationally demanding. By developing a method that can efficiently fit SEMs, it is possible to expand the set of models that can be tested. This is particularly necessary in psychiatric and behavioral genetics, where the statistical methods are often handicapped by phenotypes with large components of stochastic variance. Due to the enormous amount of data that genome-wide scans produce, the statistical methods used to analyze the data are relatively elementary, do not directly correspond with the rich theoretical development, and lack the potential to test more complex hypotheses about the measurement of, and interaction between, comorbid traits. In this paper, we present a method to test the association of a SNP with multiple phenotypes or a latent construct on a genome-wide basis using a diagonally weighted least squares (DWLS) estimator for four common SEMs: a one-factor model, a one-factor residuals model, a two-factor model, and a latent growth model. We demonstrate that the DWLS parameters and p-values strongly correspond with the more traditional full information maximum likelihood parameters and p-values. We also present the timing of simulations and power analyses and a comparison with an existing multivariate GWAS software package.
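
    A minimal sketch of a DWLS discrepancy function of this kind is shown below; all inputs are assumed placeholders, and the actual GW-SEM implementation is more involved.

```python
# Minimal sketch of a diagonally weighted least squares (DWLS) discrepancy
# of the kind used to fit a SEM at each SNP: only the diagonal of the
# weight matrix enters, which keeps genome-wide fitting cheap.
import numpy as np

def dwls_discrepancy(s, sigma_theta, asy_var):
    """s: observed summary statistics; sigma_theta: model-implied values;
    asy_var: asymptotic variances of the statistics (diagonal weights)."""
    r = np.asarray(s) - np.asarray(sigma_theta)
    return float(r @ (r / np.asarray(asy_var)))   # r' diag(W)^-1 r
```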

  17. Tillandsia stricta Sol (Bromeliaceae) leaves as monitors of airborne particulate matter - A comparative SEM methods evaluation: Unveiling an accurate and odd HP-SEM method.

    Science.gov (United States)

    de Oliveira, Martha Lima; de Melo, Edésio José Tenório; Miguens, Flávio Costa

    2016-09-01

    Airborne particulate matter (PM) has been ranked among the most important air pollutants by governmental environmental agencies and academic researchers. The use of terrestrial plants for monitoring PM has been widely accepted, particularly when coupled with SEM/EDS. Herein, Tillandsia stricta leaves were used as monitors of PM, focusing on a comparative evaluation of Environmental SEM (ESEM) and High-Pressure SEM (HPSEM). In addition, specimens air-dried in a formaldehyde atmosphere (AD/FA) were introduced as an SEM procedure. Hydrated specimen observation by ESEM was the best way to get information from T. stricta leaves. If any artifacts were introduced by AD/FA, they were indiscernible from those caused by CPD. Leaf anatomy was always well preserved. PM density was determined on the adaxial and abaxial leaf epidermis for each of the SEM procedures. When compared with ESEM, particle extraction varied from 0 to 20% in air-dried leaves, while 23-78% of particles deposited on leaf surfaces were extracted by CPD procedures. ESEM was obviously the best choice among the methods, but morphological artifacts increased as a function of operation time, while HPSEM operation time was unlimited. AD/FA avoided the shrinkage observed in the air-dried leaves, and particle extraction was low compared with CPD. Structural and particle density results suggest AD/FA is an important methodological approach to air pollution biomonitoring that can be widely used in all electron microscopy labs. Otherwise, previous PM assessments using terrestrial plants as biomonitors and performed by conventional SEM could have underestimated airborne particulate matter concentrations. © 2016 Wiley Periodicals, Inc.

  18. Analysis of Balance Scorecards Model Performance and Perspective Strategy Synergized by SEM

    Directory of Open Access Journals (Sweden)

    Waluyo Minto

    2016-01-01

    Full Text Available After the economic crisis, performance assessment using the Balanced Scorecard (BSC) method became a powerful and effective tool that can provide an integrated view of the performance of an organization; this strategy helped the Indonesian economy rebound positively after the crisis. Taking effective decisions requires combining the four BSC perspectives and strategies that focus on a system with different behaviors or steps. This paper combines the BSC method with structural equation modeling (SEM) because the two share the same concept, a causal relationship, and the SEM research model uses the BSC variables. The purpose of this paper is to investigate the influence of the variables synergized between the balanced scorecard and SEM as a means of strategic planning for the future. This study used primary data with a sample large enough to meet the maximum likelihood estimation requirements, rated on a seven-point semantic scale. The research model is a combination of one-step and two-step models. The next steps are to test the measurement model, the structural equation model, and modified models. The test results indicated that the model suffered from multicollinearity; therefore, it was converted into a one-step model. After modification, the goodness-of-fit indices showed good scores. All BSC variables have a direct significant influence, including the perspective of strategic goals and sustainable competitive advantage. The goodness-of-fit results of the modified simulation model are DF = 227, Chi-square = 276.550, P = 0.058, CMIN/DF = 1.150, GFI = 0.831, AGFI = 0.791, CFI = 0.972, TLI = 0.965 and RMSEA = 0.039.

  19. A brief discussion about image quality and SEM methods for quantitative fractography of polymer composites.

    Science.gov (United States)

    Hein, L R O; Campos, K A; Caltabiano, P C R O; Kostov, K G

    2013-01-01

    The methodology for fracture analysis of polymeric composites with scanning electron microscopes (SEM) is still under discussion. Many authors prefer to use sputter coating with a conductive material instead of applying low-voltage (LV) or variable-pressure (VP) methods, which preserve the original surfaces. The present work examines the effects of sputter coating with 25 nm of gold on the topography of carbon-epoxy composite fracture surfaces, using an atomic force microscope. Also, the influence of SEM imaging parameters on fractal measurements is evaluated for the VP-SEM and LV-SEM methods. It was observed that topographic measurements were not significantly affected by the gold coating at the tested scale. Moreover, changes to the SEM setup lead to nonlinear outcomes in texture parameters, such as fractal dimension and entropy values. For VP-SEM or LV-SEM, fractal dimension and entropy values did not present any evident relation with image quality parameters, but the resolution must be optimized through the imaging setup, accompanied by charge neutralization. © Wiley Periodicals, Inc.
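
    As context for the texture parameters discussed here, a standard box-counting estimate of fractal dimension on a binarized image looks roughly as follows; this is a generic sketch, not the estimator used in the paper.

```python
# Generic box-counting estimate of the fractal dimension of a binarized
# fractograph -- one common way such texture parameters are computed.
import numpy as np

def box_counting_dimension(mask):
    """mask: square 2-D boolean array whose side is a power of two."""
    n = mask.shape[0]
    sizes, counts = [], []
    size = n
    while size >= 1:
        k = n // size
        # Tile the image into k x k boxes of side `size`; count non-empty ones.
        boxes = mask.reshape(k, size, k, size).any(axis=(1, 3))
        sizes.append(size)
        counts.append(boxes.sum())
        size //= 2
    # Dimension = slope of log(count) versus log(1 / box size).
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope

rng = np.random.default_rng(0)
print(box_counting_dimension(rng.random((256, 256)) < 0.5))  # close to 2.0
```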

  20. Environmental SEM and dye penetration observation on resin-tooth interface using different light curing method.

    Science.gov (United States)

    Yoshikawa, Takako; Morigami, Makoto; Sadr, Alireza; Tagami, Junji

    2016-01-01

    The aim of this study was to evaluate the effects of different light-curing methods on marginal sealing and resin composite adaptation to the cavity wall, using the dye penetration test and environmental scanning electron microscope (SEM) observations. Cylindrical cavities were prepared on cervical regions. The teeth were restored with Clearfil Liner Bond 2 V adhesive and filled with Clearfil Photo Bright or Palfique Estelite resin composites. These resins were cured with a conventional light-curing method or a slow-start curing method. After thermal cycling, the specimens were subjected to the dye penetration test to evaluate marginal sealing and adaptation of the resin composites to the cavity walls. The resin-tooth interfaces were then observed using environmental SEM. The light-cured resin composite that exhibited increased contrast ratios during polymerization suggests high compensation for polymerization stress with the slow-start curing method. There was a high correlation between the dye penetration test and the environmental SEM observations.

  1. CUFE at SemEval-2016 Task 4: A Gated Recurrent Model for Sentiment Classification

    KAUST Repository

    Nabil, Mahmoud

    2016-06-16

    In this paper we describe a deep learning system that was built for SemEval 2016 Task 4 (Subtasks A and B). In this work we trained a Gated Recurrent Unit (GRU) neural network model on top of two sets of word embeddings: (a) general word embeddings generated from an unsupervised neural language model; and (b) task-specific word embeddings generated from a supervised neural language model trained to classify tweets into positive and negative categories. We also added a method for analyzing and splitting multi-word hashtags and appending them to the tweet body before feeding it to our model. Our models achieved an F1-measure of 0.58 for Subtask A (ranked 12/34) and a Recall of 0.679 for Subtask B (ranked 12/19).

  2. Effects of Missing Data Methods in SEM under Conditions of Incomplete and Nonnormal Data

    Science.gov (United States)

    Li, Jian; Lomax, Richard G.

    2017-01-01

    Using Monte Carlo simulations, this research examined the performance of four missing data methods in SEM under different multivariate distributional conditions. The effects of four independent variables (sample size, missing proportion, distribution shape, and factor loading magnitude) were investigated on six outcome variables: convergence rate,…

  3. A hybrid 3D SEM reconstruction method optimized for complex geologic material surfaces.

    Science.gov (United States)

    Yan, Shang; Adegbule, Aderonke; Kibbey, Tohren C G

    2017-08-01

    Reconstruction methods are widely used to extract three-dimensional information from scanning electron microscope (SEM) images. This paper presents a new hybrid reconstruction method that combines stereoscopic reconstruction with shape-from-shading calculations to generate highly-detailed elevation maps from SEM image pairs. The method makes use of an imaged glass sphere to determine the quantitative relationship between observed intensity and angles between the beam and surface normal, and the detector and surface normal. Two specific equations are derived to make use of image intensity information in creating the final elevation map. The equations are used together, one making use of intensities in the two images, the other making use of intensities within a single image. The method is specifically designed for SEM images captured with a single secondary electron detector, and is optimized to capture maximum detail from complex natural surfaces. The method is illustrated with a complex structured abrasive material, and a rough natural sand grain. Results show that the method is capable of capturing details such as angular surface features, varying surface roughness, and surface striations. Copyright © 2017 Elsevier Ltd. All rights reserved.
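
    The sphere-calibration step can be sketched as follows: on an imaged sphere, the angle between the surface normal and the beam is known from geometry at every pixel, so intensities can be binned into an intensity-versus-angle lookup table that a shape-from-shading pass later inverts. Function and parameter names below are assumptions, not the authors' implementation.

```python
# Sketch of sphere calibration for shape-from-shading: bin observed sphere
# intensities by the geometrically known normal-to-beam angle. Names and
# the binning scheme are illustrative assumptions.
import numpy as np

def sphere_calibration(img, cx, cy, r, n_bins=32):
    """img: SEM image of a sphere; (cx, cy, r): center and radius in pixels.
    Returns (bin_center_angles, mean_intensity) over the visible hemisphere."""
    y, x = np.indices(img.shape)
    rho = np.hypot(x - cx, y - cy)
    inside = rho < r
    angle = np.arcsin(np.clip(rho[inside] / r, 0.0, 1.0))  # normal-to-beam angle
    bins = np.linspace(0.0, np.pi / 2, n_bins + 1)
    idx = np.digitize(angle, bins) - 1
    vals = img[inside]
    lut = np.array([vals[idx == b].mean() if np.any(idx == b) else np.nan
                    for b in range(n_bins)])
    return 0.5 * (bins[:-1] + bins[1:]), lut
```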

  4. SEM Model Medical Solid Waste Hospital Management In Medan City

    Science.gov (United States)

    Simarmata, Verawaty; Pandia, Setiaty; Mawengkang, Herman

    2018-01-01

    In daily activities, hospitals, as one of the important health care units, generate both medical solid waste and non-medical solid waste. Medical solid waste arises from treatment activities, such as in inpatient treatment rooms, general clinics, dental clinics, mother and child clinics, laboratories and pharmacies. Most of the medical solid waste contains infectious and hazardous materials. Therefore, it should be managed properly; otherwise it could become a source of new infections for the community around the hospital as well as for health workers themselves. Surveillance of various environmental factors needs to be applied in accordance with the principles of sanitation, focusing on environmental cleanliness. One of the efforts needed to improve the quality of the environment is to undertake waste management activities, because proper waste management is the most important factor in achieving an optimal degree of human health. Health development in Indonesia aims to achieve a future in which the Indonesian people live in a healthy environment, behave in a clean and healthy manner, and are able to reach quality health services, fairly and equitably, so as to have optimal health status. The healthy condition of individuals and society can be influenced by the environment, and poor environmental quality is a cause of various health problems. This paper proposes a model for managing medical solid waste in hospitals in Medan city, in order to create a healthy environment around hospitals.

  5. Evaluating Neighborhoods Livability in Nigeria: A Structural Equation Modelling (SEM) Approach

    Directory of Open Access Journals (Sweden)

    Sule Abass Iyanda

    2018-01-01

    Full Text Available There is a growing concern about city livability around the world, and of particular concern are the aspects of the person-environment relationship encompassing the many characteristics that make a place livable. Extant literature provides livability dimensions such as housing unit characteristics, neighborhood facilities, economic vitality and a safe environment. These livability dimensions, as well as their attributes found in the extant literature, have been reported to have a high level of measurement reliability. Although various methods have been applied to examine relationships among the variables, structural equation modeling (SEM) has been found to be a more holistic modeling technique for understanding and explaining the relationships that may exist among variable measurements. Structural equation modeling simultaneously performs multivariate analyses, including multiple regression, path and factor analysis, for the cause-effect relationships between latent constructs. Therefore, this study investigates the key factors of livability of planned residential neighborhoods in Minna, Nigeria, with the research objectives of (a) studying the livability level of the selected residential neighborhoods, (b) determining the dimensions and indicators which most influence the level of livability in the selected residential neighborhoods, and (c) reliably testing the efficacy of structural equation modeling (SEM) in the assessment of livability. The methodology adopted in this study includes data collection with the aid of a structured questionnaire survey administered to residents of the study area based on stratified random sampling. The data collected were analyzed with the aid of the Statistical Package for the Social Sciences (SPSS) 22.0 and AMOS 22.0 software for structural equation modeling (a second-order factor). The study revealed that livability as a second-order factor is indicated by economic vitality, safety environment, neighborhood facilities

  6. A Sandwich-Type Standard Error Estimator of SEM Models with Multivariate Time Series

    Science.gov (United States)

    Zhang, Guangjian; Chow, Sy-Miin; Ong, Anthony D.

    2011-01-01

    Structural equation models are increasingly used as a modeling tool for multivariate time series data in the social and behavioral sciences. Standard error estimators of SEM models, originally developed for independent data, require modifications to accommodate the fact that time series data are inherently dependent. In this article, we extend a…

  7. Subjective Values of Quality of Life Dimensions in Elderly People. A SEM Preference Model Approach

    Science.gov (United States)

    Elosua, Paula

    2011-01-01

    This article proposes a Thurstonian model in the framework of Structural Equation Modelling (SEM) to assess preferences among quality of life dimensions for the elderly. Data were gathered by a paired comparison design in a sample comprised of 323 people aged from 65 to 94 years old. Five dimensions of quality of life were evaluated: Health,…

  8. Prescriptive Statements and Educational Practice: What Can Structural Equation Modeling (SEM) Offer?

    Science.gov (United States)

    Martin, Andrew J.

    2011-01-01

    Longitudinal structural equation modeling (SEM) can be a basis for making prescriptive statements on educational practice and offers yields over "traditional" statistical techniques under the general linear model. The extent to which prescriptive statements can be made will rely on the appropriate accommodation of key elements of research design,…

  9. Adapted methods for scanning electron microscopy (SEM) in assessment of human sperm morphology

    Directory of Open Access Journals (Sweden)

    Petra Nussdorfer

    2018-02-01

    Full Text Available Infertility is a widespread problem, and in some cases, the routine basic semen analysis is not sufficient to detect the cause of male infertility. The use of the scanning electron microscope (SEM) could provide a detailed insight into spermatozoa morphology, but it requires specific sample preparation techniques. The purpose of this study was to select, adjust, and optimize a method for the preparation of spermatozoa samples prior to SEM analysis, and to establish the protocol required for its use in clinical practice. We examined sperm samples of 50 men. The samples were fixed with modified iso-osmolar aldehyde solution followed by osmium post-fixation. In the first method, dehydration of the cells and subsequent critical point drying (CPD) were performed on a coverslip. In the second method, the samples were dehydrated in centrifuge tubes; hexamethyldisilazane (HMDS) was used as a drying agent instead of CPD, and the samples were air-dried. The third procedure was based on a membrane filter. The samples were dehydrated and dried with HMDS in a Gooch crucible, continuously, without centrifugation or redispersion of the sample. Our results showed that the fixation with modified iso-osmolar aldehyde solution followed by osmium post-fixation, and combined with dehydration and CPD on a coverslip, is the most convenient procedure for SEM sample preparation. In the case of small-size samples or low sperm concentration, dehydration and drying with HMDS on the membrane filter enabled the best reliability, repeatability, and comparability of the results. The presented procedures are suitable for routine use, and they can be applied to confirm as well as to correct a diagnosis.

  10. SEM, EDS, PL and absorbance study of CdTe thin films grown by CSS method

    Energy Technology Data Exchange (ETDEWEB)

    Hernandez-Torres, M.E.; Silva-Gonzalez, R.; Gracia-Jimenez, J.M. [Instituto de Fisica, BUAP, Apdo. Postal J-48, San Manuel, 72570 Puebla, Pue. (Mexico); Casarrubias-Segura, G. [CIE- UNAM, 62580 Temixco, Morelos (Mexico)

    2006-09-22

    Oxygen-doped CdTe films were grown on conducting glass substrates by the close spaced sublimation (CSS) method and characterized using SEM, EDS, photoluminescence (PL) and absorbance. A significant change in the polycrystalline morphology is observed when the oxygen proportion is increased in the deposition atmosphere. The EDS analysis showed that all samples are nonstoichiometric with excess Te. The PL spectra show emission bands associated with Te vacancies (V{sub Te}), whose intensities decrease as the oxygen proportion in the CSS chamber is increased. The oxygen impurities occupy Te vacancies and modify the surface states, improving the nonradiative process. (author)

  11. Recent improvement of a FIB-SEM serial-sectioning method for precise 3D image reconstruction - application of the orthogonally-arranged FIB-SEM.

    Science.gov (United States)

    Hara, Toru

    2014-11-01

    plasma cleaner, many kinds of signals can be obtained simultaneously. [Fig. 1: Schematic illustration of (a) a standard type arrangement and (b) an orthogonal type arrangement.] Recent topics and future prospects: We have applied this instrument to a wide range of microstructure analyses: metals and alloys, semiconductor devices, battery electrodes, minerals, biomaterials, and so on. In my presentation, I would like to introduce some of our application results and discuss the future development of the FIB-SEM serial-sectioning methodology. As the applied research field becomes wider, various requests for the method have arisen; most can be summarized as follows: observation of larger areas, expansion of applicable samples, obtaining many kinds of information, and linkage with other instruments. Acknowledgments: The instrument introduced in this work was installed at NIMS as part of the "Low-carbon research network Japan" funded by MEXT, Japan. © The Author 2014. Published by Oxford University Press on behalf of The Japanese Society of Microscopy. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  12. BIB-SEM of representative area clay structures paving towards an alternative model of porosity

    Science.gov (United States)

    Desbois, G.; Urai, J. L.; Houben, M.; Hemes, S.; Klaver, J.

    2012-04-01

    A major contribution to understanding the sealing capacity, coupled flow, capillary processes and associated deformation in clay-rich geomaterials is based on detailed investigation of the rock microstructures. However, the direct characterization of pores over a representative elementary area (REA) and below µm-scale resolution remains challenging. To investigate the mm- to nm-scale porosity directly, SEM is certainly the most direct approach, but it is limited by the poor quality of the investigated surfaces. The recent development of ion milling tools (BIB and FIB; Desbois et al., 2009, 2011; Heath et al., 2011; Keller et al., 2011) and cryo-SEM makes it possible, respectively, to produce exceptionally high-quality polished cross-sections suitable for high-resolution porosity SEM imaging at the nm scale, and to investigate samples under wet conditions by cryogenic stabilization. This contribution focuses mainly on the SEM description of pore microstructures in 2D BIB-polished cross-sections of Boom (Mol site, Belgium) and Opalinus (Mont Terri, Switzerland) clays down to the SEM resolution. Pores detected in images are statistically analyzed to perform porosity quantification in the REA. On the one hand, the BIB-SEM results allow MIP measurements obtained from larger sample volumes to be retrieved. On the other hand, the BIB-SEM approach allows porosity-homogeneous and -predictable islands to be characterized, which form the elementary components of an alternative concept of porosity/permeability modelling based on pore microstructures. Desbois G., Urai J.L. and Kukla P.A. (2009) Morphology of the pore space in claystones - evidence from BIB/FIB ion beam sectioning and cryo-SEM observations. E-Earth, 4, 15-22. Desbois G., Urai J.L., Kukla P.A., Konstanty J. and Baerle C. (2011). High-resolution 3D fabric and porosity model in a tight gas sandstone reservoir: a new approach to investigate microstructures from mm- to nm-scale combining argon beam cross-sectioning and SEM imaging. Journal of Petroleum Science

  13. Structural equation modeling methods and applications

    CERN Document Server

    Wang, Jichuan

    2012-01-01

    A reference guide for applications of SEM using Mplus. Structural Equation Modeling: Applications Using Mplus is intended as both a teaching resource and a reference guide. Written in non-mathematical terms, this book focuses on the conceptual and practical aspects of Structural Equation Modeling (SEM). Basic concepts and examples of various SEM models are demonstrated along with recently developed advanced methods, such as mixture modeling and model-based power analysis and sample size estimation for SEM. The statistical modeling program, Mplus, is also featured and provides researchers with a

  14. Hybrid OPC modeling with SEM contour technique for 10nm node process

    Science.gov (United States)

    Hitomi, Keiichiro; Halle, Scott; Miller, Marshal; Graur, Ioana; Saulnier, Nicole; Dunn, Derren; Okai, Nobuhiro; Hotta, Shoji; Yamaguchi, Atsuko; Komuro, Hitoshi; Ishimoto, Toru; Koshihara, Shunsuke; Hojo, Yutaka

    2014-03-01

    Hybrid OPC modeling is investigated using both CDs from 1D and simple 2D structures and contours extracted from complex 2D structures, obtained with a critical-dimension scanning electron microscope (CD-SEM). Recent studies have addressed some of the key issues needed for the implementation of contour extraction, including an edge detection algorithm consistent with conventional CD measurements, contour averaging and contour alignment. First, pattern contours obtained from CD-SEM images were used to complement traditional site-driven CD metrology for the calibration of OPC models for both the metal and contact layers of a 10 nm-node logic device developed at Albany Nano-Tech. The accuracy of the hybrid OPC model was compared with that of a conventional OPC model created with only CD data. Accuracy of the model, defined as total error root-mean-square (RMS), was improved by 23% with the use of hybrid OPC modeling for the contact layer and by 18% for the metal layer. The pattern-specific benefit of hybrid modeling was also examined. Resist shrink correction was applied to contours extracted from CD-SEM images in order to improve the accuracy of the contours, and the shrink-corrected contours were used for OPC modeling. The accuracy of the OPC model with shrink correction was compared with that without shrink correction, and total error RMS was decreased by 0.2 nm (12%) with the shrink correction technique. Variation of model accuracy among 8 modeling runs with different model calibration patterns was reduced by applying shrink correction. Shrink correction of contours can thus improve the accuracy and stability of OPC models.

  15. Automated Nanofiber Diameter Measurement in SEM Images Using a Robust Image Analysis Method

    Directory of Open Access Journals (Sweden)

    Ertan Öznergiz

    2014-01-01

    Full Text Available Due to their high surface area, porosity, and rigidity, applications of nanofibers and nanosurfaces have developed in recent years. Nanofibers and nanosurfaces are typically produced by the electrospinning method. In the production process, determination of the average fiber diameter is crucial for quality assessment. The average fiber diameter is determined by manually measuring the diameters of randomly selected fibers on scanning electron microscopy (SEM) images. However, as the number of images increases, manual fiber diameter determination becomes a tedious and time-consuming task that is also sensitive to human error. Therefore, an automated fiber diameter measurement system is desired. In the literature, this task is achieved using image analysis algorithms. Typically, these methods first isolate each fiber in the image and measure the diameter of each isolated fiber. Fiber isolation is an error-prone process. In this study, automated calculation of nanofiber diameter is achieved without fiber isolation, using image processing and analysis algorithms. Performance of the proposed method was tested on real data. The effectiveness of the proposed method is shown by comparing automatically and manually measured nanofiber diameter values.
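
    One common isolation-free approach, consistent with the idea described above though not necessarily the paper's exact algorithm, estimates the mean diameter as twice the distance to the background sampled along the fiber skeleton:

```python
# Isolation-free mean fiber diameter from a binarized SEM image: twice the
# Euclidean distance to the background, sampled along the fiber skeleton.
# This is a generic sketch, not the paper's algorithm.
import numpy as np
from scipy.ndimage import distance_transform_edt
from skimage.morphology import skeletonize

def mean_fiber_diameter(mask, pixel_size_nm=1.0):
    """mask: 2-D boolean array, True on fiber pixels."""
    dist = distance_transform_edt(mask)   # radius to nearest background pixel
    skel = skeletonize(mask)              # 1-pixel-wide fiber centerlines
    return 2.0 * dist[skel].mean() * pixel_size_nm
```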

  16. PAT and SEM study of porous silicon formed by anodization methods

    International Nuclear Information System (INIS)

    Liu Jian; Wei Long; Wang Huiyao; Ma Chuangxin; Wang Baoyi

    2000-01-01

    Porous silicon formed by anodization of crystalline silicon was studied by positron annihilation technique (PAT) and scanning electron microscopy (SEM). The PAT experiments showed that the mean lifetime and the density of vacancy defects increased with increasing anodization time, while the intensity of the longest-lifetime component, several ns to tens of ns (ortho-positronium), dropped. Small single-crystal Si spheres with a mean radius of a few μm were observed by SEM after anodization. Pits with a mean radius of a few μm, left by the detachment of single-crystal spheres, were also observed after further anodization. The increase in vacancy defects might be due to the extension of the porous silicon structure toward the inner layer with anodization time, creating more vacancy defects in the inner layer. The SEM observations suggest another possibility: an increase in the density of vacancy defects in the surface layer induced by the change of structures

  17. Structural Equations Model (SEM) of a questionnaire on the evaluation of intercultural secondary education classrooms

    Directory of Open Access Journals (Sweden)

    Eva María Olmedo Moreno

    2014-12-01

    Full Text Available This research includes the design of a questionnaire for evaluating cultural coexistence in secondary education classrooms (Berrocal, Olmedo & Olmos, 2014; Olmedo et al., 2014), as well as the comparison of its psychometric properties in a multicultural population of schools in southern Spain. An attempt is made to create a valid, reliable and useful tool with which teachers can measure conflict situations in the classroom, as well as understand the nature of the conflict from the point of view of all those involved. The metric aspects show maximized content and construct validity (Muñiz, 2010) using Structural Equation Model (SEM) and Confirmatory Factor Analysis (CFA) analyses, checking and modifying the model with Wald and Lagrange indicators (Bentler, 2007) to obtain the model best adjusted to the theoretical and goodness-of-fit criteria.

  18. Incorporating Latent Variables into Discrete Choice Models - A Simultaneous Estimation Approach Using SEM Software

    Directory of Open Access Journals (Sweden)

    Dirk Temme

    2008-12-01

    Full Text Available Integrated choice and latent variable (ICLV) models represent a promising new class of models which merge classic choice models with the structural equation approach (SEM) for latent variables. Despite their conceptual appeal, applications of ICLV models in marketing remain rare. We extend previous ICLV applications by first estimating a multinomial choice model and, second, by estimating hierarchical relations between latent variables. An empirical study on travel mode choice clearly demonstrates the value of ICLV models to enhance the understanding of choice processes. In addition to the usually studied directly observable variables such as travel time, we show how abstract motivations such as power and hedonism as well as attitudes such as a desire for flexibility impact on travel mode choice. Furthermore, we show that it is possible to estimate such a complex ICLV model with the widely available structural equation modeling package Mplus. This finding is likely to encourage more widespread application of this appealing model class in the marketing field.

  19. A Data Matrix Method for Improving the Quantification of Element Percentages of SEM/EDX Analysis

    Science.gov (United States)

    Lane, John

    2009-01-01

    A simple 2D M × N matrix involving sample preparation enables the microanalyst to peer below the noise floor of element percentages reported by SEM/EDX (scanning electron microscopy/energy dispersive x-ray) analysis, thus yielding more meaningful data. Using the example of a 2 × 3 sample set, there are M = 2 concentration levels of the original mix under test: 10 percent ilmenite (90 percent silica) and 20 percent ilmenite (80 percent silica). For each of these M samples, N = 3 separate SEM/EDX samples were drawn. In this test, ilmenite is the element of interest. By plotting the linear trend of the M samples' known concentrations versus the average of the N samples, a much higher resolution of elemental analysis can be performed. The resulting trend also shows how the noise is affecting the data, and at what point (for smaller concentrations) it becomes impractical to try to extract any further useful data.
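
    A minimal sketch of the data-matrix calculation, following the 2 × 3 ilmenite example above; the replicate readings below are invented for illustration:

```python
# Sketch of the M x N data-matrix idea: fit a line through the M known mix
# concentrations versus the mean of the N replicate SEM/EDX readings. The
# EDX readings here are invented placeholder values.
import numpy as np

known = np.array([10.0, 20.0])              # % ilmenite actually mixed
readings = np.array([[8.7, 11.2, 9.5],      # N = 3 EDX readings at 10 %
                     [18.9, 21.4, 20.1]])   # N = 3 EDX readings at 20 %

means = readings.mean(axis=1)
slope, intercept = np.polyfit(known, means, 1)   # linear calibration trend
sd = readings.std(axis=1, ddof=1)                # how noise affects each level
print(f"measured = {slope:.3f} * known + {intercept:.3f}; per-level sd = {sd}")
```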

  20. Methods for Additive Hydration Allowing Observation of Fully Hydrated State of Wet Samples in Environmental SEM

    Czech Academy of Sciences Publication Activity Database

    Neděla, Vilém

    2007-01-01

    Roč. 70, č. 2 (2007), s. 95-100 ISSN 1059-910X R&D Projects: GA ČR(CZ) GA102/05/0886; GA AV ČR KJB200650602 Institutional research plan: CEZ:AV0Z20650511 Keywords : agar * natural structure * biological specimens * environmental SEM Subject RIV: JA - Electronics ; Optoelectronics, Electrical Engineering Impact factor: 1.644, year: 2007

  1. An SEM approach to continuous time modeling of panel data: Relating authoritarianism and anomia: Correction to Voelkle, Oud, Davidov, and Schmidt

    NARCIS (Netherlands)

    Voelkle, M.C.; Oud, J.H.L.; Davidov, E.; Schmidt, P.

    2012-01-01

    Reports an error in "An SEM approach to continuous time modeling of panel data: Relating authoritarianism and anomia" by Manuel C. Voelkle, Johan H. L. Oud, Eldad Davidov and Peter Schmidt (Psychological Methods, 2012[Jun], Vol 17[2], 176-192). The supplemental materials link was missing. All

  2. RegSEM: a versatile code based on the spectral element method to compute seismic wave propagation at the regional scale

    Science.gov (United States)

    Cupillard, Paul; Delavaud, Elise; Burgos, Gaël; Festa, Gaetano; Vilotte, Jean-Pierre; Capdeville, Yann; Montagner, Jean-Paul

    2012-03-01

    The spectral element method, which provides an accurate solution of the elastodynamic problem in heterogeneous media, is implemented in a code, called RegSEM, to compute seismic wave propagation at the regional scale. By regional scale we here mean distances ranging from about 1 km (local scale) to 90° (continental scale). The advantage of RegSEM resides in its ability to accurately take into account 3-D discontinuities such as the sediment-rock interface and the Moho. For this purpose, one version of the code handles local unstructured meshes and another version manages continental structured meshes. The wave equation can be solved in any velocity model, including anisotropy and intrinsic attenuation in the continental version. To validate the code, results from RegSEM are compared to analytical and semi-analytical solutions available in simple cases (e.g. explosion in PREM, plane wave in a hemispherical basin). In addition, realistic simulations of an earthquake in different tomographic models of Europe are performed. All these simulations show the great flexibility of the code and point out the large influence of the shallow layers on the propagation of seismic waves at the regional scale. RegSEM is written in Fortran 90 but it also contains a couple of C routines. It is an open-source software which runs on distributed memory architectures. It can give rise to interesting applications, such as testing regional tomographic models, developing tomography using either passive (i.e. noise correlations) or active (i.e. earthquakes) data, or improving our knowledge on effects linked with sedimentary basins.

  3. The SEM Risk Behavior (SRB) Model: A New Conceptual Model of how Pornography Influences the Sexual Intentions and HIV Risk Behavior of MSM.

    Science.gov (United States)

    Wilkerson, J Michael; Iantaffi, Alex; Smolenski, Derek J; Brady, Sonya S; Horvath, Keith J; Grey, Jeremy A; Rosser, B R Simon

    2012-01-01

    While the effects of sexually explicit media (SEM) on heterosexuals' sexual intentions and behaviors have been studied, little is known about the consumption and possible influence of SEM among men who have sex with men (MSM). Importantly, conceptual models of how Internet-based SEM influences behavior are lacking. Seventy-nine MSM participated in online focus groups about their SEM viewing preferences and sexual behavior. Twenty-three participants reported recent exposure to a new behavior via SEM. Whether participants modified their sexual intentions and/or engaged in the new behavior depended on three factors: arousal when imagining the behavior, pleasure when attempting the behavior, and trust between sex partners. Based on MSM's experience, we advance a model of how viewing a new sexual behavior in SEM influences sexual intentions and behaviors. The model includes five paths. Three paths result in the maintenance of sexual intentions and behaviors. One path results in a modification of sexual intentions while maintaining previous sexual behaviors, and one path results in a modification of both sexual intentions and behaviors. With this model, researchers have a framework to test associations between SEM consumption and sexual intentions and behavior, and public health programs have a framework to conceptualize SEM-based HIV/STI prevention programs.

  4. Characterization of a millefiori glass find from Aquincum by SEM-EDX and micro-PIXE methods

    Energy Technology Data Exchange (ETDEWEB)

    Uzonyi, I., E-mail: uzonyi@atomki.hu [Institute of Nuclear Research of the Hungarian Academy of Sciences, H-4026 Debrecen, Bem ter 18/C (Hungary); Csontos, K.; Verebes, A. [Budapest History Museum, Aquincum Museum H-8211 Budapest, Zahony u. 4. (Hungary); Cserhati, C. [Department of Solid State Physics, University of Debrecen H-4032 Debrecen, Bem ter 18/B (Hungary); Csedreki, L.; Kis-Varga, M.; Kiss, A.Z. [Institute of Nuclear Research of the Hungarian Academy of Sciences, H-4026 Debrecen, Bem ter 18/C (Hungary)

    2011-10-15

    Research has been focused on the analysis of archaeological glasses from the Roman age and medieval times. Study of ancient millefiori type glasses from the collection of Hungarian Museums has been started. A test measurement, carried out on a glass fragment supposedly part of a dish, was performed by SEM-EDX and micro-PIXE methods. Complementary analytical data were obtained for texture and composition. Results suggest that Roman and Mesopotamian techniques were used together. Our data contribute to data bases of millefiori glasses.

  5. A SEM Model in Assessing the Effect of Convergent, Divergent and Logical Thinking on Students' Understanding of Chemical Phenomena

    Science.gov (United States)

    Stamovlasis, D.; Kypraios, N.; Papageorgiou, G.

    2015-01-01

    In this study, structural equation modeling (SEM) is applied to an instrument assessing students' understanding of chemical change. The instrument comprised items on understanding the structure of substances, chemical changes and their interpretation. The structural relationships among particular groups of items are investigated and analyzed using…

  6. Case Studies of Successful Schoolwide Enrichment Model-Reading (SEM-R) Classroom Implementations. Research Monograph Series. RM10204

    Science.gov (United States)

    Reis, Sally M.; Little, Catherine A.; Fogarty, Elizabeth; Housand, Angela M.; Housand, Brian C.; Sweeny, Sheelah M.; Eckert, Rebecca D.; Muller, Lisa M.

    2010-01-01

    The purpose of this qualitative study was to examine the scaling up of the Schoolwide Enrichment Model in Reading (SEM-R) in 11 elementary and middle schools in geographically diverse sites across the country. Qualitative comparative analysis was used in this study, with multiple data sources compiled into 11 in-depth school case studies…

  7. Direct observation of unstained biological specimens in water by the frequency transmission electric-field method using SEM.

    Directory of Open Access Journals (Sweden)

    Toshihiko Ogura

    Full Text Available Scanning electron microscopy (SEM) is a powerful tool for the direct visualization of biological specimens at nanometre-scale resolution. However, images of unstained specimens in water using an atmospheric holder exhibit very poor contrast and heavy radiation damage. Here, we present a new form of microscopy, the frequency transmission electric-field (FTE) method using SEM, that offers low radiation damage and high-contrast observation of unstained biological samples in water. The wet biological specimens are enclosed between two silicon nitride (SiN) films. The metal-coated SiN film is irradiated using a focused, modulated electron beam (EB) at a low accelerating voltage. A measurement terminal under the sample holder detects the electric-field frequency signal, which contains structural information about the biological specimens. Our method results in very little radiation damage to the sample, and the observation image is similar to a transmission image, depending on the sample volume. The developed method can easily be utilized for the observation of various biological specimens in water.

  8. Encapsulation of a Decision-Making Model to Optimize Supplier Selection via Structural Equation Modeling (SEM)

    Science.gov (United States)

    Sahul Hameed, Ruzanna; Thiruchelvam, Sivadass; Nasharuddin Mustapha, Kamal; Che Muda, Zakaria; Mat Husin, Norhayati; Ezanee Rusli, Mohd; Yong, Lee Choon; Ghazali, Azrul; Itam, Zarina; Hakimie, Hazlinda; Beddu, Salmia; Liyana Mohd Kamal, Nur

    2016-03-01

    This paper proposes a conceptual framework to compare the criteria/factors that influence supplier selection. A mixed-methods approach comprising qualitative and quantitative surveys will be used. The study intends to identify and define the metrics that key stakeholders at the Public Works Department (PWD) believe should be used for supplier selection. The outcomes would foresee possible initiatives to bring procurement in PWD to a strategic level. The results will provide a deeper understanding of the drivers of supplier selection in the construction industry, and the obtained output will benefit the many parties involved in supplier-selection decision-making. The findings provide useful information and a greater understanding of the perceptions that PWD executives hold regarding supplier selection and the extent to which these perceptions are consistent with findings from prior studies. The findings from this paper can be utilized as input for policy makers to outline changes in the current procurement code of practice in order to enhance the degree of transparency and integrity in decision-making.

  9. Application of the Structural Equation Model (SEM) in Determining Alternatives for Environmental Management of the Heavy-Equipment Component Industry Based on Community Participation and Partnership

    Directory of Open Access Journals (Sweden)

    Budi Setyo Utomo

    2012-07-01

    For a company engaged in the industrial sector, producing certain components and localized in an industrial area, there will be an impact on the environment. These impacts can be positive, in the form of employment, reduced dependence on imported heavy equipment, increased foreign exchange due to reduced imports and increased exports, increased government revenue from taxes, improved public facilities and supporting infrastructure, and opportunities for other related industries. They can also be negative, in the form of environmental degradation such as noise, dust and micro-climate change, and changes in the social and cultural conditions surrounding the industry. Data analysis was performed descriptively and with the structural equation model (SEM). SEM is a multivariate statistical technique combining factor analysis and regression (correlation) analysis, which aims to test the connections between the variables in a model, whether between indicators and their construct, or between constructs. An SEM model consists of two parts: the latent variable model and the observed variable model. In contrast to ordinary regression, which links causality between observed variables, SEM also makes it possible to identify causality between latent variables. The results of the SEM analysis showed that the developed model has a fairly high level of validity, as shown by the minimum fit chi-square value of 93.15 (P = 0.00029). The model shows that the company's performance in waste management is largely determined by employee integrity and the objectivity of new employees, followed by the independence of the employees in waste management. The most important factors determining employee integrity in waste management in the model are honesty, individual wisdom, and a sense of responsibility. The most important factor in employee objectivity…
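
    The two-part structure described above (a measurement model defining latent constructs from observed indicators, plus structural paths between the constructs) maps directly onto the model syntax of SEM software. The following minimal sketch assumes the Python package semopy; the construct names, indicator names and data file are invented for illustration and are not the study's actual model.

    ```python
    # Minimal SEM sketch (hypothetical model, not the study's specification).
    # Assumes the semopy package and a CSV of observed indicator scores.
    import pandas as pd
    import semopy

    MODEL_DESC = """
    integrity =~ x1 + x2 + x3
    objectivity =~ x4 + x5
    performance =~ y1 + y2
    performance ~ integrity + objectivity
    """

    data = pd.read_csv("waste_management_survey.csv")  # hypothetical file
    model = semopy.Model(MODEL_DESC)
    model.fit(data)            # maximum-likelihood estimation by default
    print(model.inspect())     # parameter estimates and standard errors
    ```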

  10. An SEM Approach to Continuous Time Modeling of Panel Data: Relating Authoritarianism and Anomia

    Science.gov (United States)

    Voelkle, Manuel C.; Oud, Johan H. L.; Davidov, Eldad; Schmidt, Peter

    2012-01-01

    Panel studies, in which the same subjects are repeatedly observed at multiple time points, are among the most popular longitudinal designs in psychology. Meanwhile, there exists a wide range of different methods to analyze such data, with autoregressive and cross-lagged models being two of the most well-known representatives. Unfortunately, in these…

  11. Effects of a potassium nitrate mouthwash on dentinal tubules--a SEM analysis using the dentine disc model.

    Science.gov (United States)

    Pereira, Richard; Chava, Vijay K

    2002-04-01

    The concept of tubular occlusion as a method of dentine desensitisation is a logical conclusion from the hydrodynamic hypothesis put forth by Brannström. The aim of this study was therefore to investigate qualitatively by SEM whether a 3% potassium nitrate/0.2% sodium fluoride mouthwash occluded tubule orifices and, by X-ray microanalysis, to characterise the nature of any deposits following application. Following the 'dentine disc model' methodology, 1 mm-thick tooth sections from unerupted molars were obtained. These were treated with the test and control mouthwashes and subjected to scanning electron microscopy; any deposits seen were to be subjected to elemental analysis using an energy-dispersive X-ray analyser. Examination of all the dentine disc surfaces, treated with water (control) and with the active and control mouthwashes, demonstrated that none of the treatments, at any of the time intervals, had any visible effect on the dentinal tubule orifices, i.e. no dentinal tubular occlusion was seen. The results suggest that potassium nitrate does not reduce dentinal hypersensitivity, at least by tubule occlusion. This could mean that there is a different mechanism of action, which could not be detected by this in vitro model.

  12. Cryo-SEM method for the observation of entrapped bubbles and degree of water filling in large wet powder compacts.

    Science.gov (United States)

    Mouzon, J; Bhuiyan, I U; Forsmo, S P E; Hedlund, J

    2011-05-01

    There are generally two problems associated with cryogenic scanning electron microscopy (cryo-SEM) observations of large wet powder compacts. First, because water cannot be vitrified in such samples, the formation of artefacts is unavoidable. Second, large frozen samples are difficult to fracture and also to machine into regular pieces that fit in standard holders, especially if made of hard materials like ceramics. In this article, we first describe a simple method for planing hard cryo-samples and a low-cost technique for cryo-fracture and transfer of large specimens. Subsequently, after applying the entire procedure to green pellets of iron ore produced by balling, we compare the influence of plunge and unidirectional freezing on large entrapped bubbles throughout the samples, as well as the degree of water filling at the outer surface of the pellets. By carefully investigating the presence of artefacts in large areas of the samples and by controlling the orientation of the sample during freezing and preparation, we demonstrate that unidirectional freezing enables the observation of large entrapped bubbles with minimal formation of artefacts, whereas plunge freezing is preferable for characterizing the degree of water filling at the outer surface of wet powder compacts. The minimal formation of artefacts was due to the high packing density of the iron ore particles in the matrix. © 2010 The Authors Journal of Microscopy © 2010 Royal Microscopical Society.

  13. Development and application of loop-mediated isothermal amplification methods targeting the seM gene for detection of Streptococcus equi subsp. equi.

    Science.gov (United States)

    Hobo, Seiji; Niwa, Hidekazu; Oku, Kazuomi

    2012-03-01

    Loop-mediated isothermal amplification (LAMP) constitutes a potentially valuable diagnostic tool for rapid diagnosis of contagious diseases. In this study, we developed a novel LAMP method (seM-LAMP) to detect the seM gene of Streptococcus equi subsp. equi (S. equi), the causative agent of strangles in equids. The seM-LAMP successfully amplified the target sequence of the seM gene at 63°C within 60 min. The sensitivity of the seM-LAMP was slightly lower than the 2nd reaction of the seM semi-nested PCR. To evaluate the species specificity of the seM-LAMP, we tested 100 S. equi and 189 non-S. equi strains. Significant amplification of the DNA originating from S. equi was observed within 60 min incubation, but no amplification of non-S. equi DNA occurred. The results were identical to those of seM semi-nested PCR. To investigate the clinical usefulness of the methods, the seM-LAMP and the seM semi-nested PCR were used to screen 590 nasal swabs obtained during an outbreak of strangles. Both methods showed that 79 and 511 swabs were S. equi positive and negative, respectively, and the results were identical to those of the culture examination. These results indicate that the seM-LAMP is potentially useful for the reliable routine diagnosis of Streptococcus equi subsp. equi infections.

  14. Assessing Actual Visit Behavior through Antecedents of Tourists Satisfaction among International Tourists in Jordan: A Structural Equation Modeling (SEM Approach

    Directory of Open Access Journals (Sweden)

    Ayed Moh’d Al Muala

    2011-06-01

    Jordan's tourism industry faces fluctuating tourist visits provoked by dissatisfaction, high perceived visit risk, low hotel service quality, and a negative image of Jordan. This study aims to examine the relationships between the antecedents of tourist satisfaction and actual visit behavior in Jordanian tourism, and the mediating effect of tourist satisfaction (SAT) on the relationship between Jordan image (JOM), service climate (SER) and actual visit behavior (ACT). A total of 850 international tourists completed a survey conducted at southern sites in Jordan. Using the structural equation modeling (SEM) technique, confirmatory factor analysis (CFA) was performed to examine the reliability and validity of the measurement, and structural equation modeling (Amos 6.0) was used to evaluate the causal model. The results demonstrate strong predictive and explanatory power for international tourists' behavior in Jordan. The findings highlight that the relationships of Jordan image and service climate with actual visit behavior are significant and positive.

  15. COMMIT at SemEval-2017 Task 5: Ontology-based Method for Sentiment Analysis of Financial Headlines

    NARCIS (Netherlands)

    Schouten, Kim; Frasincar, Flavius; de Jong, F.M.G.

    2017-01-01

    This paper describes our submission to Task 5 of SemEval 2017, Fine-Grained Sentiment Analysis on Financial Microblogs and News, where we limit ourselves to performing sentiment analysis on news headlines only (track 2). The approach presented in this paper uses a Support Vector Machine to do the…
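
    An SVM headline classifier of the kind referenced above is conventionally built as a term-weighting vectorizer feeding a linear SVM. The sketch below uses scikit-learn with invented toy headlines; it illustrates only the generic pipeline and omits the submission's ontology-based features.

    ```python
    # Toy SVM sentiment pipeline for financial headlines (illustrative only;
    # the ontology features of the actual submission are not reproduced).
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.pipeline import make_pipeline
    from sklearn.svm import LinearSVC

    headlines = ["Shares surge after strong quarterly earnings",  # invented
                 "Company cuts forecast amid weak demand",
                 "Profits climb as new markets open",
                 "Stock slides on regulatory probe"]
    labels = [1, -1, 1, -1]  # 1 = positive, -1 = negative sentiment

    clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
    clf.fit(headlines, labels)
    print(clf.predict(["Earnings beat expectations"]))
    ```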

  16. Bottom-up nanoarchitecture of semiconductor nano-building blocks by controllable in situ SEM-FIB thermal soldering method

    KAUST Repository

    Zhang, Xuan

    2017-08-10

    Here we demonstrate that the building blocks of semiconductor WO3 nanowires can be controllably soldered together by a novel nano-soldering technique, in situ SEM-FIB thermal soldering, in which the soldering temperature can be kept precisely within an optimal range to avoid strong thermal diffusion.

  17. Filipino Nursing Students' Behavioral Intentions toward Geriatric Care: A Structural Equation Model (SEM)

    Science.gov (United States)

    de Guzman, Allan B.; Jimenez, Benito Christian B.; Jocson, Kathlyn P.; Junio, Aileen R.; Junio, Drazen E.; Jurado, Jasper Benjamin N.; Justiniano, Angela Bianca F.

    2013-01-01

    Anchored in the key constructs of Ajzen's Theory of Planned Behavior (1985), this paper seeks to test a model that explores the influence of knowledge, attitude, and caring behavior on nursing students' behavioral intention toward geriatric care. A five-part survey questionnaire was administered to 839 third- and fourth-year nursing students from a…

  18. An SEM approach to continuous time modeling of panel data: Relating authoritarianism and anomia

    NARCIS (Netherlands)

    Voelkle, M.C.; Oud, J.H.L.; Davidov, E.; Schmidt, P.

    2012-01-01

    [Correction Notice: An Erratum for this article was reported in Vol 17(3) of Psychological Methods (see record 2012-24038-005). The supplemental materials link was missing. All versions of this article have been corrected.] Panel studies, in which the same subjects are repeatedly observed at…

  19. The Effect of Nonnormality on CB-SEM and PLS-SEM Path Estimates

    OpenAIRE

    Z. Jannoo; B. W. Yap; N. Auchoybur; M. A. Lazim

    2014-01-01

    The two common approaches to structural equation modeling (SEM) are covariance-based SEM (CB-SEM) and partial least squares SEM (PLS-SEM). There is much debate on the performance of CB-SEM and PLS-SEM for small sample sizes and when distributions are nonnormal. This study evaluates the performance of CB-SEM and PLS-SEM under normality and nonnormality conditions via simulation. Monte Carlo simulation in the R programming language was employed to generate data based on the theoretical model w…
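
    The data-generating core of such a simulation can be sketched as follows: indicators are drawn from a known factor model, and a nonlinear transform induces nonnormality. The one-factor design, loadings and skewing transform below are illustrative assumptions, not the paper's actual settings.

    ```python
    # Monte Carlo generation of SEM indicator data (illustrative design).
    import numpy as np

    rng = np.random.default_rng(42)

    def simulate(n, loadings=(0.8, 0.7, 0.6), nonnormal=False):
        """Draw n cases of three indicators from a one-factor model."""
        lam = np.asarray(loadings)
        eta = rng.standard_normal(n)                  # latent factor scores
        eps = rng.standard_normal((n, lam.size)) * np.sqrt(1.0 - lam**2)
        x = eta[:, None] * lam + eps                  # unit-variance indicators
        if nonnormal:
            x = np.expm1(x)                           # induce skew and kurtosis
        return x

    normal_data = simulate(50)                  # small-sample, normal condition
    skewed_data = simulate(50, nonnormal=True)  # small-sample, nonnormal condition
    print(normal_data.std(axis=0), skewed_data.std(axis=0))
    ```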

  20. Palyno-morphological characteristics of gymnosperm flora of pakistan and its taxonomic implications with LM and SEM methods.

    Science.gov (United States)

    Khan, Raees; Ul Abidin, Sheikh Zain; Ahmad, Mushtaq; Zafar, Muhammad; Liu, Jie; Amina, Hafiza

    2018-01-01

    The present study assesses the gymnosperm pollen flora of Pakistan using light microscopy (LM) and scanning electron microscopy (SEM) for its taxonomic significance in the identification of gymnosperms. Pollen of 35 gymnosperm species (12 genera and five families) was collected from various distributional sites of gymnosperms in Pakistan, and LM and SEM were used to investigate different palyno-morphological characteristics. Five pollen types (inaperturate, monolete, monoporate, vesiculate-bisaccate and polyplicate) were observed. In equatorial view, seven pollen shapes were observed: ten species were sub-angular, nine triangular, six per-prolate, three rhomboidal, three semi-angular, two rectangular and two prolate. In polar view, five shapes were observed: ten species were spheroidal, nine angular, eight interlobate, six circular and two elliptic. Eighteen species had rugulate and 17 faveolate ornamentation; 18 species had verrucate and 17 gemmate-type sculpturing. The data were analysed through cluster analysis. The study showed that these palyno-morphological features have significant value in the classification and identification of gymnosperms, and on their basis a taxonomic key was proposed for the accurate and fast identification of gymnosperms from Pakistan. © 2017 Wiley Periodicals, Inc.
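
    The cluster analysis step mentioned above groups species by their coded pollen characters. A minimal sketch of such an analysis with SciPy, using an invented binary trait matrix rather than the study's data:

    ```python
    # Hierarchical clustering of species by binary palynological traits
    # (hypothetical matrix; rows = species, columns = coded characters).
    import numpy as np
    from scipy.cluster.hierarchy import fcluster, linkage
    from scipy.spatial.distance import pdist

    traits = np.array([[1, 0, 1, 0],  # e.g. rugulate, faveolate, verrucate, gemmate
                       [1, 0, 1, 0],
                       [0, 1, 0, 1],
                       [0, 1, 0, 1]])
    dist = pdist(traits, metric="jaccard")   # dissimilarity between species
    tree = linkage(dist, method="average")   # UPGMA-style agglomeration
    print(fcluster(tree, t=2, criterion="maxclust"))  # cut into two groups
    ```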

  1. Chapter 24: Strategic Energy Management (SEM) Evaluation Protocol. The Uniform Methods Project: Methods for Determining Energy Efficiency Savings for Specific Measures

    Energy Technology Data Exchange (ETDEWEB)

    Stewart, James [The Cadmus Group, Portland, OR (United States)

    2017-05-18

    Strategic energy management (SEM) focuses on achieving energy-efficiency improvements through systematic and planned changes in facility operations, maintenance, and behaviors (OM&B) and capital equipment upgrades in large energy-using facilities, including industrial buildings, commercial buildings, and multi-facility organizations such as campuses or communities. Facilities can institute a spectrum of SEM actions, ranging from a simple process for regularly identifying energy-savings actions, to establishing a formal, third-party recognized or certified SEM framework for continuous improvement of energy performance. In general, SEM programs that would be considered part of a utility program will contain a set of energy-reducing goals, principles, and practices emphasizing continuous improvements in energy performance or savings through energy management and an energy management system (EnMS).

  2. Application of the Matrix Pencil Method for Estimating the SEM (Singularity Expansion Method) Poles of Source Free Transient Responses From Multiple Look Directions

    National Research Council Canada - National Science Library

    Sarkar, Tapan

    2000-01-01

    The SEM poles are independent of the angle at which the transient response is recorded. The only difference between the various waveforms is that the residues at the various poles are of different magnitudes…
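
    As background, the matrix pencil method estimates the complex poles z_i of a sampled transient y[k] = sum_i r_i * z_i^k from the eigenvalues of a pencil built from two shifted Hankel matrices. The noise-free numpy sketch below is a generic textbook illustration, not the report's implementation; practical versions add an SVD-based noise-filtering step.

    ```python
    # Bare-bones matrix pencil pole estimation from a sampled transient.
    # Noise-free illustration; real data requires SVD-based filtering.
    import numpy as np

    def matrix_pencil_poles(y, M):
        """Estimate M poles z_i from samples y[k] = sum_i r_i * z_i**k."""
        N = len(y)
        L = N // 2                                    # pencil parameter
        Y = np.array([y[i:i + L + 1] for i in range(N - L)])  # Hankel matrix
        Y0, Y1 = Y[:, :-1], Y[:, 1:]                  # shifted sub-matrices
        A = np.linalg.pinv(Y0) @ Y1                   # pencil (Y1 - z*Y0)
        z = np.linalg.eigvals(A)
        return z[np.argsort(-np.abs(z))][:M]          # M dominant eigenvalues

    # Synthetic transient: two damped oscillations sampled at k = 0..99.
    k = np.arange(100)
    z_true = [0.95 * np.exp(1j * 0.3), 0.90 * np.exp(1j * 1.1)]
    y = sum(zi**k for zi in z_true)
    print(matrix_pencil_poles(y, M=2))  # should approximate z_true
    ```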

  3. A simple method for detection of gunshot residue particles from hands, hair, face, and clothing using scanning electron microscopy/wavelength dispersive X-ray (SEM/WDX).

    Science.gov (United States)

    Kage, S; Kudo, K; Kaizoji, A; Ryumoto, J; Ikeda, H; Ikeda, N

    2001-07-01

    We devised a simple and rapid method for the detection of gunshot residue (GSR) particles using scanning electron microscopy/wavelength-dispersive X-ray (SEM/WDX) analysis. Experiments were done on samples containing GSR particles obtained from hands, hair, face, and clothing, using double-sided adhesive-coated aluminum stubs (tape-lift method). SEM/WDX analyses for GSR were carried out in three steps: first, map analysis for barium (Ba) to search for GSR particles from lead styphnate-primed ammunition, or for tin (Sn) to search for GSR particles from mercury fulminate-primed ammunition; second, determination of the location of GSR particles by X-ray imaging of Ba or Sn at a magnification of ×1000-2000 in the SEM, using the map-analysis data; and third, identification of GSR particles using the WDX spectrometers. Analysis of the samples on each stub took about 3 h. Practical applications demonstrated the utility of this method.

  4. SEM method for direct visual tracking of nanoscale morphological changes of platinum based electrocatalysts on fixed locations upon electrochemical or thermal treatments.

    Science.gov (United States)

    Zorko, Milena; Jozinović, Barbara; Bele, Marjan; Hodnik, Nejc; Gaberšček, Miran

    2014-05-01

    A general method for tracking morphological surface changes on a nanometer scale with scanning electron microscopy (SEM) is introduced. We exemplify the usefulness of the method by showing consecutive SEM images of an identical location before and after electrochemical and thermal treatments of platinum-based nanoparticles deposited on a high-surface-area carbon. The observations give insight into platinum-based catalyst degradation occurring during potential-cycling treatment. The presence of chloride clearly increases the rate of degradation; under these conditions the dominant degradation mechanism appears to be platinum dissolution with some subsequent redeposition on top of the catalyst film. By contrast, at a temperature of 60°C under potentiostatic conditions, some carbon corrosion and particle aggregation were observed. Temperature treatment simulating the annealing step of the synthesis reveals sintering of small platinum-based composite aggregates into uniform spherical particles. The method provides direct proof of induced surface phenomena occurring at a chosen location, without the statistical uncertainty of conventional random SEM observations across relatively large surface areas. Copyright © 2014 Elsevier B.V. All rights reserved.

  5. Nutrition, Balance and Fear of Falling as Predictors of Risk for Falls among Filipino Elderly in Nursing Homes: A Structural Equation Model (SEM)

    Science.gov (United States)

    de Guzman, Allan B.; Ines, Joanna Louise C.; Inofinada, Nina Josefa A.; Ituralde, Nielson Louie J.; Janolo, John Robert E.; Jerezo, Jnyv L.; Jhun, Hyae Suk J.

    2013-01-01

    While a number of empirical studies have been conducted regarding risk for falls among the elderly, there is still a paucity of similar studies in a developing country like the Philippines. This study purports to test through Structural Equation Modeling (SEM) a model that shows the interaction between and among nutrition, balance, fear of…

  6. Characterization Studies of Cyclotron CS-30 Carbon Puller Material Using Powder X-Ray Diffraction and SEM, EDX Cross Section Method

    Science.gov (United States)

    Febriana, S.; Riyanto, E. S.; Dimyati, A.; Handayani, A.

    2018-01-01

    The carbon puller material of the CS-30 cyclotron has been characterized to investigate the structure and composition of the puller. Two puller samples were prepared in powder form, one from locally made carbon and one from the original cyclotron puller. X-ray diffraction (XRD) analysis of both powder samples identifies the structure and phase type of the carbon, while the morphology and elemental composition of both samples are shown by scanning electron microscopy (SEM) and energy-dispersive X-ray analysis (EDX). The XRD samples were prepared by grinding the puller surface, whereas the samples for SEM analysis were prepared by the cross-section method. The XRD results show an amorphous carbon structure; the typically broad peaks and the diffuse nature of the XRD profile indicate the presence of disorder in the samples. The first peak of the profile corresponds to the (002) peak of the hexagonal graphite structure, while the second corresponds to the (100) and (101) peaks, for both samples. SEM-EDX results showed that the carbon material of the original CS-30 cyclotron puller contains a significant mass percentage of lead, 9.15%.

  7. Model Correction Factor Method

    DEFF Research Database (Denmark)

    Christensen, Claus; Randrup-Thomsen, Søren; Morsing Johannesen, Johannes

    1997-01-01

    The model correction factor method is proposed as an alternative to traditional polynomial-based response surface techniques in structural reliability, treating a computationally time-consuming limit state procedure as a 'black box'. The class of polynomial functions is replaced by a limit state based on an idealized mechanical model, to be adapted to the original limit state by the model correction factor. Reliable approximations are obtained by iterative use of gradient information on the original limit state function, analogously to previous response surface approaches. However, the strength of the model correction factor method is that, in its simpler form, not using gradient information on the original limit state function or using this information only once, a drastic reduction of the number of limit state evaluations is obtained together with good approximations of the reliability. Methods…

  8. Ao leitor sem medo

    Directory of Open Access Journals (Sweden)

    José Eisenberg

    2000-05-01

    This text is a review of Ao leitor sem medo by Renato Janine Ribeiro (Belo Horizonte, UFMG, 1999).

  9. A new challenge: in-situ investigation of the elusive nanostructures in wet halite and clay using BIB/FIB-cryo-SEM methods

    Science.gov (United States)

    Desbois, G.; Urai, J. L.

    2009-04-01

    Mudrocks and saltrocks form seals for hydrocarbon accumulations, aquitards and chemical barriers. The sealing capacity is controlled either by the rock microstructure or by chemical interactions between minerals and the permeating fluid. Detailed knowledge of the sealing characteristics is of particular interest in petroleum sciences; other fields of interest are the storage of anthropogenic carbon dioxide and radioactive waste in geologic formations. A key factor in understanding sealing by mudstones and saltrocks is the study of their porosity. However, halite and clay are so fluid-sensitive that the investigations on dried samples required by traditional methods (metal injection methods [6],[3]; magnetic susceptibility measurement [4]; SEM imaging of broken surfaces [5]; and CT scanning [7]) are critical for robust interpretation. On the one hand, none of these methods can directly describe the in-situ porosity at the pore scale; on the other hand, most of them require dried samples, in which the natural structure of pores may be damaged by desiccation, dehydration and dissolution-recrystallisation of the fabric. SEM imaging is certainly the most direct approach to investigating porosity, but it is generally limited by the poor quality of mechanically prepared surfaces. This problem is solved by the recent development of ion milling tools (FIB: focused ion beam; BIB: broad ion beam), which allow producing in-situ high-quality polished cross-sections suitable for high-resolution SEM imaging of pores at the nano-scale. Moreover, new developments of the cryo-SEM approach in the geosciences allow investigating samples under wet, natural conditions. Thus, we are developing the combination of FIB/BIB-cryo-SEM methods ([1],[2]), which unite in one machine the vitrification of pore fluids by very rapid cooling, excavation of the sample by ion milling, and SEM imaging. By these, we…

  10. Application of a New Resampling Method to SEM: A Comparison of S-SMART with the Bootstrap

    Science.gov (United States)

    Bai, Haiyan; Sivo, Stephen A.; Pan, Wei; Fan, Xitao

    2016-01-01

    Among the resampling methods commonly used to deal with small-sample problems, the bootstrap enjoys the widest application because it often outperforms its counterparts. However, the bootstrap still has limitations in practice. Therefore, the purpose of this study is to examine an alternative, new resampling method…
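
    For reference, the bootstrap baseline against which such alternatives are compared resamples the observed data with replacement to approximate the sampling distribution of a statistic. A minimal numpy sketch with synthetic small-sample data:

    ```python
    # Nonparametric bootstrap of a sample mean (synthetic small sample).
    import numpy as np

    rng = np.random.default_rng(0)
    sample = rng.normal(loc=10.0, scale=2.0, size=25)  # small observed sample

    boot_means = np.array([
        rng.choice(sample, size=sample.size, replace=True).mean()
        for _ in range(5000)                           # bootstrap replications
    ])
    ci_low, ci_high = np.percentile(boot_means, [2.5, 97.5])
    print(f"95% percentile CI for the mean: ({ci_low:.2f}, {ci_high:.2f})")
    ```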

  11. Study of the magnetic microstructure of high-coercivity sintered SmCo5 permanent magnets with the conventional Bitter pattern technique and the colloid-SEM method

    International Nuclear Information System (INIS)

    Szmaja, Witold

    2007-01-01

    The magnetic microstructure of high-coercivity sintered SmCo5 permanent magnets was studied with the conventional Bitter pattern technique, and also for the first time with the colloid-scanning electron microscopy (colloid-SEM) method. Both techniques were supported by digital image acquisition, enhancement and analysis. Thanks to this, it was possible to obtain high-contrast and clear images of the magnetic microstructure and to analyze them in detail, and consequently also to achieve improvements over earlier results. In the thermally demagnetized state the grains were composed of magnetic domains. On the surface perpendicular to the alignment axis, the main domains forming a maze pattern and surface reverse spikes were observed. Investigations on the surface parallel to the alignment axis, especially by the colloid-SEM technique, provided a detailed insight into the orientation of grains. The alignment of grains was good, but certainly not perfect; there were also strongly misaligned grains, although generally very rare. In most cases the domain structures within grains were independent of their neighbors, but in some cases (not so rare) the domain walls were observed to continue through the grain boundaries, indicating significant magnetostatic interaction between neighboring grains. Studies of the behavior of the magnetic microstructure under the influence of an external magnetic field, performed for the first time on the surface parallel to the alignment axis (with the conventional Bitter pattern method), showed that the domain walls move easily within the grains and that the magnetization reversal mechanism is mainly related to the nucleation and growth of reverse domains, i.e. that sintered SmCo5 magnets are nucleation-dominated systems. Groupwise magnetization reversal of adjacent magnetically coupled grains was observed, an unfavorable effect for high-coercivity magnets. Images obtained by the colloid-SEM technique and the conventional Bitter pattern…

  12. Spectral-element Method for 3D Marine Controlled-source EM Modeling

    Science.gov (United States)

    Liu, L.; Yin, C.; Zhang, B., Sr.; Liu, Y.; Qiu, C.; Huang, X.; Zhu, J.

    2017-12-01

    As one of the predrill reservoir appraisal methods, marine controlled-source EM (MCSEM) has been widely used in mapping oil reservoirs to reduce the risk of deep-water exploration. With the technical development of MCSEM, the need for improved forward modeling tools has become evident. We introduce in this paper the spectral element method (SEM) for 3D MCSEM modeling. It combines the flexibility of the finite-element method with the high accuracy of the spectral method. We use the Galerkin weighted-residual method to discretize the vector Helmholtz equation, choosing the curl-conforming Gauss-Lobatto-Chebyshev (GLC) polynomials as vector basis functions. As high-order complete orthogonal polynomials, the GLC basis has the characteristic of exponential convergence; this helps derive the matrix elements analytically and improves the modeling accuracy. Numerical 1D models using SEM with different orders show that the method delivers accurate results, and the accuracy improves markedly with increasing SEM order. Further, we compare our SEM with the finite-difference (FD) method for a 3D reservoir model (Figure 1). The results show that the SEM is more effective than the FD method: only when the mesh is fine enough can FD achieve the same accuracy, so to obtain the same precision SEM greatly reduces the degrees of freedom and cost. Numerical experiments with different models (not shown here) demonstrate that SEM is an efficient and effective tool for MCSEM modeling, with significant advantages over traditional numerical methods. This research is supported by the Key Program of the National Natural Science Foundation of China (41530320), the China Natural Science Foundation for Young Scientists (41404093), and the Key National Research Project of China (2016YFC0303100, 2017YFC0601900).
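
    For context, a standard frequency-domain curl-curl (vector Helmholtz) formulation of the kind discretized here, written in the quasi-static approximation under an e^(-iωt) time convention (a textbook form, not quoted from the abstract), together with its Galerkin weak form over curl-conforming vector basis functions N_k, reads:

    ```latex
    % Curl-curl equation for the electric field E, with source current J_s,
    % conductivity sigma and permeability mu (e^{-i\omega t} convention).
    \nabla \times \left( \mu^{-1} \nabla \times \mathbf{E} \right)
      - i\omega\sigma\,\mathbf{E} = i\omega\,\mathbf{J}_s

    % Galerkin weak form: test against each vector basis function N_k.
    \int_\Omega \left[ \mu^{-1}\,(\nabla \times \mathbf{N}_k) \cdot
      (\nabla \times \mathbf{E})
      - i\omega\sigma\,\mathbf{N}_k \cdot \mathbf{E} \right] \mathrm{d}V
      = i\omega \int_\Omega \mathbf{N}_k \cdot \mathbf{J}_s \,\mathrm{d}V
    ```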

  13. GeoSemOLAP

    DEFF Research Database (Denmark)

    Gur, Nurefsan; Nielsen, Jacob; Hose, Katja

    2017-01-01

    …very difficult for inexperienced users. Hence, we have developed GeoSemOLAP to enable users without detailed knowledge of RDF and SPARQL to query the SW with SOLAP. GeoSemOLAP generates SPARQL queries based on high-level SOLAP operators and allows the user to interactively formulate queries using…

  14. Cheap non-toxic non-corrosive method of glass cleaning evaluated by contact angle, AFM, and SEM-EDX measurements.

    Science.gov (United States)

    Dey, Tania; Naughton, Daragh

    2017-05-01

    Glass surface cleaning is the very first step in advanced coating deposition, and it also finds use in conserving museum objects. However, most wet chemical methods of glass cleaning use toxic and corrosive chemicals such as concentrated sulfuric acid (H2SO4), piranha (a mixture of concentrated sulfuric acid and 30% hydrogen peroxide), and hydrogen fluoride (HF). On the other hand, most dry cleaning techniques, such as UV-ozone, plasma, and laser treatment, require costly instruments. In this report, five eco-friendly wet chemical methods of glass cleaning were evaluated in terms of contact angle (measured by optical tensiometer), nano-scale surface roughness (measured by atomic force microscopy, AFM), and elemental composition (measured by energy-dispersive X-ray spectroscopy, SEM-EDX). These glass cleaning methods are devoid of harsh chemicals and costly equipment, and hence can be applied in situ in close proximity to plantings, such as in a greenhouse, or on subtle objects such as museum artifacts. Three of the five methods are based on the chemical principle of chelation. It was found that the citric acid cleaning method gave the greatest change in contact angle within the hydrophilic regime (14.25° for new glass), indicating effective cleansing, and the least surface roughness (0.178 nm for new glass), indicating no corrosive effect. One of the glass samples showed unique features, which were traced back to the history of the glass usage.

  15. INTERNET MARKETING ADOPTION FACTORS FOR MICRO, SMALL AND MEDIUM ENTERPRISES (UMKM) IN KUDUS REGENCY USING SEM (STRUCTURAL EQUATION MODEL) AND THE COBIT 4.1 FRAMEWORK

    Directory of Open Access Journals (Sweden)

    Endang Supriyati

    2013-06-01

    Marketing via the internet is a new strategy in the current information technology era. Information technology is directed at supporting the core and supporting business processes of micro, small and medium enterprises (UMKM). This research was conducted on UMKM in Kudus Regency operating in garment making and embroidery crafts. Analysis of the IT governance identified the appropriate COBIT domain as PO5 (measuring IT investment). The indicator analysed was the use of internet marketing. Based on this identification, questionnaires were distributed to the UMKM. A structural equation modeling (SEM) approach was used to analyse empirically the factors related to the use of internet marketing in marketing UMKM products. The results show that the correlation between internet marketing and PO5 is fairly strong (-0.358) but negative in direction, so the weaker the management of IT investment, the lower the use of internet marketing. Keywords: UMKM, internet marketing, COBIT, PO5, SEM

  16. Sexual Arousal and Sexually Explicit Media (SEM)

    DEFF Research Database (Denmark)

    Hald, Gert Martin; Stulhofer, Aleksandar; Lange, Theis

    2018-01-01

    INTRODUCTION: Investigations of patterns of sexual arousal to certain groups of sexually explicit media (SEM) in the general population in non-laboratory settings are rare. Such knowledge could be important to understand more about the relative specificity of sexual arousal in different SEM users. AIMS: (i) To investigate whether sexual arousal to non-mainstream vs mainstream SEM contents could be categorized across gender and sexual orientation, (ii) to compare levels of SEM-induced sexual arousal, sexual satisfaction, and self-evaluated sexual interests and fantasies between non-mainstream and mainstream SEM groups, and (iii) to explore the validity and predictive accuracy of the Non-Mainstream Pornography Arousal Scale (NPAS). METHODS: Online cross-sectional survey of 2,035 regular SEM users in Croatia. MAIN OUTCOME MEASURES: Patterns of sexual arousal to 27 different SEM themes, sexual…

  17. Um modelo semântico de publicações eletrônicas | A semantic model for electronic publishing

    Directory of Open Access Journals (Sweden)

    Carlos Henrique Marcondes

    2011-03-01

    Electronic publications, despite advances in information technology, are still modeled on print. The textual format prevents programs from processing their content "semantically". A "semantic" model of electronic scientific publications is proposed, in which the conclusions contained in the article text are provided by the authors and represented in a format "intelligible" to programs, enabling semantic retrieval and the identification of indications of new scientific discoveries and of inconsistencies in this knowledge. The model is based on the concepts of the deep, or semantic, structure of language (CHOMSKY, 1975), of microstructure, macrostructure and superstructure (KINTSCH, VAN DIJK, 1972), on the rhetorical structure of scientific articles (HUTCHINS, 1977; GROSS, 1990), and on elements of scientific methodology such as problem, question, objective, hypothesis, experiment and conclusion. It results from the analysis of 89 biomedical articles. A prototype system partially implementing the model was developed. Questionnaires were used with authors to inform the development of the prototype, which was then tested with researcher-authors. Four patterns of reasoning and chaining of the semantic elements in scientific articles were identified. The content model was implemented as a computational ontology. A prototype web interface for article submission by authors to an electronic journal publishing system implementing the model was developed and evaluated. Keywords: electronic publishing; scientific methodology; scholarly communication; knowledge representation; ontologies; semantic content processing; e-Science

  18. Energy Renovation of Sems Have (Energirenovering af Sems Have)

    DEFF Research Database (Denmark)

    Jensen, Søren Østergaard; Rose, Jørgen; Mørck, Ove

    …energy renovation of housing blocks. The guide covers optimization of economics, energy savings and CO2 reduction when renovating housing blocks to low-energy level. The focus is on prefabricated-element construction from the 1960s and 70s as well as brick construction. Two concrete renovation cases serve as the starting point, Traneparken and Sems Have, where the renovations were carried out in two fundamentally different ways: Traneparken with external post-insulation to nearly low-energy class 2015 level (the current BR2015 requirement), and Sems Have with a completely new building envelope and new installations to building class 2020 level. Both estates have received new ventilation systems as well as PV systems. The present report describes the renovation of Sems Have.

  19. Explorative methods in linear models

    DEFF Research Database (Denmark)

    Høskuldsson, Agnar

    2004-01-01

    The author has developed the H-method of mathematical modeling, which builds up the model by parts, where each part is optimized with respect to prediction. Besides providing better predictions than traditional methods, these methods provide graphic procedures for analyzing different features in data. These graphic methods extend the well-known methods and results of Principal Component Analysis to any linear model. Here the graphic procedures are applied to linear regression and Ridge Regression.

  20. SEM microcharacterization of semiconductors

    CERN Document Server

    Holt, D B

    1989-01-01

    Applications of SEM techniques of microcharacterization have proliferated to cover every type of material and virtually every branch of science and technology. This book emphasizes the fundamental physical principles. The first section deals with the foundation of microcharacterization in electron beam instruments and the second deals with the interpretation of the information obtained in the main operating modes of a scanning electron microscope.

  1. SEM-EDX

    African Journals Online (AJOL)

    aghomotsegin

    2015-03-11

    Abbreviations: SEM-EDX, scanning electron microscopy-energy dispersive X-ray spectrometer; As, arsenic; Cd, cadmium; ICP-MS, inductively coupled plasma-mass spectrometer; AAS, atomic absorption spectrometry.

  2. An introduction to the partial least squares approach to structural equation modelling: a method for exploratory psychiatric research.

    Science.gov (United States)

    Riou, Julien; Guyon, Hervé; Falissard, Bruno

    2016-09-01

    In psychiatry and psychology, relationship patterns connecting disorders and risk factors are always complex and intricate. Advanced statistical methods have been developed to overcome this issue, the most common being structural equation modelling (SEM). The main approach to SEM (CB-SEM, for covariance-based SEM) has been widely used by psychiatry and psychology researchers to test whether a comprehensive theoretical model is compatible with observed data. While the validity of this approach has been demonstrated, its application is limited in some situations, such as early-stage exploratory studies using small sample sizes. The partial least squares approach to SEM (PLS-SEM) has risen in many scientific fields as an alternative method that is especially useful when sample size restricts the use of CB-SEM. In this article, we aim to provide a comprehensive introduction to PLS-SEM, intended for CB-SEM users in the psychiatric and psychological fields, with an illustration using data on suicidality among prisoners. Researchers in these fields could benefit from PLS-SEM, a promising exploratory technique well adapted to studies of infrequent diseases or specific population subsets. Copyright © 2015 John Wiley & Sons, Ltd.
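
    Full PLS path modelling is not part of scikit-learn, but the core partial least squares idea, extracting latent components that maximize covariance between blocks of variables, can be illustrated with its two-block PLS implementation. A minimal sketch on synthetic data (an illustration of the principle only, not the article's analysis):

    ```python
    # Two-block partial least squares as a minimal illustration of the
    # PLS principle behind PLS-SEM (not full path modelling).
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression

    rng = np.random.default_rng(1)
    n = 60                                           # deliberately small sample
    latent = rng.standard_normal(n)                  # unobserved construct
    X = np.outer(latent, [0.9, 0.8, 0.7]) + 0.3 * rng.standard_normal((n, 3))
    y = 0.6 * latent + 0.4 * rng.standard_normal(n)  # outcome block

    pls = PLSRegression(n_components=1)
    pls.fit(X, y)
    print("X-block weights:", pls.x_weights_.ravel())
    print("R^2:", pls.score(X, y))
    ```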

  3. Modes of Occurrence of Fluorine by Extraction and SEM Method in a Coal-Fired Power Plant from Inner Mongolia, China

    Directory of Open Access Journals (Sweden)

    Guangmeng Wang

    2015-12-01

    In this study, an extraction method and environmental scanning electron microscopy (SEM) are employed to reveal the changes in the occurrence mode of fluorine in a coal-fired power plant in Inner Mongolia, China. The different occurrence states of fluorine during coal combustion and emission show that fluorine in coal mainly assumes insoluble inorganic mineral forms; the three typical occurrence modes in coal are CaF2, MgF2 and AlF3. The fluorine in fly ash can be captured by an electrostatic precipitator (ESP) or a bag filter. In contrast, the gaseous fluorine content in flue gas is only in the range of several parts per million and thus could not be used in this study. The occurrence mode of fluorine in bottom ash and slag is inorganic villiaumite (e.g., soluble NaF and KF, and insoluble CaF2), which is difficult to break down even at high temperatures. The occurrence mode with the highest content in fly ash is physically adsorbed fluorine along the direction of the flue-gas flow. The insoluble inorganic mineral fluoride content in fly ash is also high, but the gradually increasing fluorine content in fly ash is mainly caused by physical adsorption. Fluorine in the coal-fired power plant discharges mostly in solid products; only a very small amount emitted into the environment as gaseous products (HF, SiF4) cannot be captured. The parameters used in this study may provide useful references in developing a monitoring and control system for fluorine in coal-fired power plants.

  4. Developing the Business Modelling Method

    NARCIS (Netherlands)

    Meertens, Lucas Onno; Iacob, Maria Eugenia; Nieuwenhuis, Lambertus Johannes Maria; Shishkov, B; Shishkov, Boris

    2011-01-01

    Currently, business modelling is an art, instead of a science, as no scientific method for business modelling exists. This, and the lack of using business models altogether, causes many projects to end after the pilot stage, unable to fulfil their apparent promise. We propose a structured method to

  5. Space Experiment Module (SEM)

    Science.gov (United States)

    Brodell, Charles L.

    1999-01-01

    The Space Experiment Module (SEM) Program is an education initiative sponsored by the National Aeronautics and Space Administration (NASA) Shuttle Small Payloads Project. The program provides nationwide educational access to space for Kindergarten through University level students. The SEM program focuses on the science of zero-gravity and microgravity. Within the program, NASA provides small containers or "modules" for students to fly experiments on the Space Shuttle. The experiments are created, designed, built, and implemented by students with teacher and/or mentor guidance. Student experiment modules are flown in a "carrier" which resides in the cargo bay of the Space Shuttle. The carrier supplies power to, and the means to control and collect data from each experiment.

  6. A comparative study of representative 2D microstructures in Shaly and Sandy facies of Opalinus Clay (Mont Terri, Switzerland) inferred form BIB-SEM and MIP methods

    NARCIS (Netherlands)

    Houben, M.E.; Desbois, G.; Urai, J.L.

    2014-01-01

    A combination of Broad-Ion-Beam (BIB) polishing and Scanning Electron Microscopy (SEM) has been used to study qualitatively and quantitatively the microstructure of Opalinus Clay in 2D. High-quality 2D cross-sections (ca. 1 mm²), belonging to the Shaly and Sandy facies of Opalinus Clay, were…

  7. Identifying longitudinal growth trajectories of learning domains in problem-based learning: a latent growth curve modeling approach using SEM.

    Science.gov (United States)

    Wimmers, Paul F; Lee, Ming

    2015-05-01

    The aim was to determine the direction and extent to which medical students' scores (as observed by small-group tutors) on four problem-based-learning-related domains (problem solving, use of information, group process, and professionalism) change over nine consecutive blocks during a two-year period. Latent growth curve modeling is used to analyze performance trajectories in each domain for two cohorts of 1st- and 2nd-year students (n = 296). Slopes of the growth trajectories show similar linear increments in the first three domains. Further analysis revealed relatively strong individual variability in initial scores but not in later increments. Professionalism, on the other hand, shows low variability and very small, insignificant slope increments. In this study, we showed that the learning domains (problem solving, use of information, and group process) observed during PBL tutorials are not only related to each other but also develop cumulatively over time. Professionalism, in contrast to the other domains studied, is less affected by the curriculum, suggesting that it represents a stable characteristic. The observation that the PBL tutorial benefits all students equally is noteworthy and needs further investigation.
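
    For readers unfamiliar with the technique, a linear latent growth curve model for the score y of student i at block t is commonly specified as below (a textbook formulation, not the study's exact model). The means α0 and α1 give the average starting level and per-block increment, while the variances of ζ0i and ζ1i carry the individual variability in initial scores and slopes discussed above.

    ```latex
    % Linear latent growth curve model: random intercept and random slope.
    y_{it} = \eta_{0i} + \eta_{1i}\,t + \varepsilon_{it}, \qquad
    \eta_{0i} = \alpha_0 + \zeta_{0i}, \qquad
    \eta_{1i} = \alpha_1 + \zeta_{1i}
    ```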

  8. Methods of statistical model estimation

    CERN Document Server

    Hilbe, Joseph

    2013-01-01

    Methods of Statistical Model Estimation examines the most important and popular methods used to estimate parameters for statistical models and provide informative model summary statistics. Designed for R users, the book is also ideal for anyone wanting to better understand the algorithms used for statistical model fitting. The text presents algorithms for the estimation of a variety of regression procedures using maximum likelihood estimation, iteratively reweighted least squares regression, the EM algorithm, and MCMC sampling. Fully developed, working R code is constructed for each method. Th…

  9. Piecewise Structural Equation Model (SEM) Disentangles the Environmental Conditions Favoring Diatom Diazotroph Associations (DDAs) in the Western Tropical North Atlantic (WTNA).

    Science.gov (United States)

    Stenegren, Marcus; Berg, Carlo; Padilla, Cory C; David, Stefan-Sebastian; Montoya, Joseph P; Yager, Patricia L; Foster, Rachel A

    2017-01-01

    Diatom diazotroph associations (DDAs) are important components of the world's oceans, especially in the western tropical North Atlantic (WTNA), where blooms have a significant impact on carbon and nitrogen cycling. However, the drivers of their abundances and distribution patterns remain unknown. Here, we examined abundance and distribution patterns for two DDA populations in relation to the Amazon River (AR) plume in the WTNA. Quantitative PCR assays, targeting two DDAs (het-1 and het-2) by their symbiont's nifH gene, served as input to a piecewise structural equation model (SEM). Collections were made during high (spring 2010) and low (fall 2011) flow discharges of the AR. The distributions of dissolved nutrients, chlorophyll-a, and DDAs showed coherent patterns indicative of areas influenced by the AR. A symbiotic Hemiaulus hauckii-Richelia (het-2) bloom (>10⁶ cells L⁻¹) occurred during higher discharge of the AR and was coincident with mesohaline to oceanic (30-35) sea surface salinities (SSS) and regions devoid of dissolved inorganic nitrogen (DIN), with low concentrations of both DIP (>0.1 μmol L⁻¹) and Si (>1.0 μmol L⁻¹). The Richelia (het-1) associated with Rhizosolenia was only present in 2010, at lower densities (10-1.76 × 10⁵ nifH copies L⁻¹) than het-2, and limited to regions of oceanic SSS (>36). The het-2 symbiont detected in 2011 was associated with H. membranaceus (>10³ nifH copies L⁻¹) and was restricted to regions with mesohaline SSS (31.8-34.3), immeasurable DIN, moderate DIP (0.1-0.60 μmol L⁻¹) and higher Si (4.19-22.1 μmol L⁻¹). The piecewise SEM identified a profound direct negative effect of turbidity on het-2 abundance in spring 2010, while DIP and water turbidity had a more positive influence in fall 2011, corroborating our observations of DDAs at subsurface maxima. We also found a striking difference in the influence of salinity on the DDA symbionts, suggesting niche differentiation and preferences in oceanic and…
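
    Piecewise SEM, as used above, fits the hypothesized causal network as a set of local regressions rather than one global covariance model. A minimal, hypothetical sketch of that idea with statsmodels (variable names, functional forms and data are invented, not the study's analysis):

    ```python
    # Piecewise SEM idea: fit each structural equation as its own regression.
    # Variables (turbidity, dip, abundance) and data are hypothetical.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(7)
    df = pd.DataFrame({"turbidity": rng.gamma(2.0, 1.0, 200),
                       "dip": rng.gamma(1.5, 0.2, 200)})
    df["abundance"] = np.exp(1.0 - 0.8 * df["turbidity"] + 2.0 * df["dip"]
                             + 0.3 * rng.standard_normal(200))

    # Component models of the hypothesized network, fitted separately.
    m1 = smf.ols("np.log(abundance) ~ turbidity + dip", data=df).fit()
    m2 = smf.ols("dip ~ turbidity", data=df).fit()
    print(m1.params, m2.params, sep="\n")
    ```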

  10. Typology, morphology and connectivity of pore space in claystones from reference site for research using BIB, FIB and cryo-SEM methods

    Science.gov (United States)

    Desbois, G.; Urai, J. L.; Houben, M. E.; Sholokhova, Y.

    2010-06-01

    Detailed investigation of the morphology of the pore space in clay is a key factor in understanding the sealing capacity, coupled flows, capillary processes and associated deformation present in mudstones. At present, the combination of ion milling tools (FIB and BIB), cryogenic techniques and SEM imaging offers a new alternative for studying elusive in-situ microstructures in wet geomaterials and has high potential to make a step change in our understanding of how fluids occur in pore space. By using this range of techniques, it is possible to quantify porosity, stabilize in-situ fluids in pore space, preserve the natural structures at the nm-scale, produce high-quality polished cross-sections for high-resolution SEM imaging, and accurately reconstruct microstructure networks in 3D by serial cross-sectioning.

  11. Typology, morphology and connectivity of pore space in claystones from reference site for research using BIB, FIB and cryo-SEM methods

    Directory of Open Access Journals (Sweden)

    Houben M.E.

    2010-06-01

    Detailed investigation of the morphology of the pore space in clay is a key factor in understanding the sealing capacity, coupled flows, capillary processes and associated deformation present in mudstones. At present, the combination of ion milling tools (FIB and BIB), cryogenic techniques and SEM imaging offers a new alternative for studying elusive in-situ microstructures in wet geomaterials and has high potential to make a step change in our understanding of how fluids occur in pore space. By using this range of techniques, it is possible to quantify porosity, stabilize in-situ fluids in pore space, preserve the natural structures at the nm-scale, produce high-quality polished cross-sections for high-resolution SEM imaging, and accurately reconstruct microstructure networks in 3D by serial cross-sectioning.

  12. Nondestructive SEM for surface and subsurface wafer imaging

    Science.gov (United States)

    Propst, Roy H.; Bagnell, C. Robert; Cole, Edward I., Jr.; Davies, Brian G.; Dibianca, Frank A.; Johnson, Darryl G.; Oxford, William V.; Smith, Craig A.

    1987-01-01

    The scanning electron microscope (SEM) is considered as a tool for both failure analysis as well as device characterization. A survey is made of various operational SEM modes and their applicability to image processing methods on semiconductor devices.

  13. Graph modeling systems and methods

    Science.gov (United States)

    Neergaard, Mike

    2015-10-13

    An apparatus and a method for vulnerability and reliability modeling are provided. The method generally includes constructing a graph model of a physical network using a computer, the graph model including a plurality of terminating vertices to represent nodes in the physical network, a plurality of edges to represent transmission paths in the physical network, and a non-terminating vertex to represent a non-nodal vulnerability along a transmission path in the physical network. The method additionally includes evaluating the vulnerability and reliability of the physical network using the constructed graph model, wherein the vulnerability and reliability evaluation includes a determination of whether each terminating and non-terminating vertex represents a critical point of failure. The method can be utilized to evaluate wide variety of networks, including power grid infrastructures, communication network topologies, and fluid distribution systems.
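
    In graph terms, the nodal part of that critical-point determination corresponds to finding articulation points, vertices whose removal disconnects the network. A minimal sketch with the networkx library (illustrative topology, not the patented implementation):

    ```python
    # Flag critical points of failure in a network graph as articulation
    # points (vertices whose removal disconnects the graph). Illustrative.
    import networkx as nx

    g = nx.Graph()
    # A-E are terminating vertices (nodes); "AB_link" stands in for a
    # non-terminating vertex modelling a vulnerability along one path.
    g.add_edges_from([("A", "AB_link"), ("AB_link", "B"),
                      ("B", "C"), ("C", "D"), ("D", "B"), ("C", "E")])

    print("Critical points of failure:", sorted(nx.articulation_points(g)))
    # "AB_link" appears too: a mid-path vulnerability can itself be critical.
    ```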

  14. A time-resolved current method and TSC under vacuum conditions of SEM: Trapping and detrapping processes in thermal aged XLPE insulation cables

    Science.gov (United States)

    Boukezzi, L.; Rondot, S.; Jbara, O.; Boubakeur, A.

    2017-03-01

    Thermal aging of cross-linked polyethylene (XLPE) can cause serious safety concerns in the operation of high-voltage systems. To obtain a more detailed picture of the effect of thermal aging on the trapping and detrapping processes in XLPE in the melting temperature range, thermally stimulated current (TSC) measurements have been implemented in a scanning electron microscope (SEM) with a specific arrangement. The XLPE specimens were molded and aged at two temperatures (120 °C and 140 °C) close to the melting temperature of the material. The use of the SEM allows us to measure both the leakage and displacement currents induced in samples under electron irradiation; the first represents the conduction process of XLPE and the second gives information on the trapping of charges in the bulk of the material. TSC combined with the SEM yields spectra of the XLPE discharge under thermal stimulation, using both currents measured after electron irradiation. It was found that the leakage current in the charging process may be related to physical defects resulting in crystallinity variation under thermal aging, whereas the trapped charge can be affected by the carbonyl groups resulting from thermo-oxidative degradation and by disorder in the material. The TSC spectra of unaged XLPE show no charge detrapping under heat stimulation, whereas the presence of peaks in the TSC spectra of thermally aged samples indicates that some trapped charge is released by heating. The detrapping behavior of aged XLPE is consistent with the existence of two trap levels, shallow traps and deep traps. Overall, physico-chemical reactions during thermal aging at high temperatures enhance the shallow-trap density and change the range of trap depths; these changes degrade the electrical properties of XLPE.

  15. 3DSEM++: Adaptive and intelligent 3D SEM surface reconstruction.

    Science.gov (United States)

    Tafti, Ahmad P; Holz, Jessica D; Baghaie, Ahmadreza; Owen, Heather A; He, Max M; Yu, Zeyun

    2016-08-01

    Structural analysis of microscopic objects is a longstanding topic in several scientific disciplines, such as the biological, mechanical, and materials sciences. The scanning electron microscope (SEM), as a promising imaging instrument, has been around for decades to determine the surface properties (e.g., compositions or geometries) of specimens, achieving increased magnification, contrast, and resolution greater than one nanometer. Whereas SEM micrographs remain two-dimensional (2D), many research and educational questions truly require knowledge of their three-dimensional (3D) structures. 3D surface reconstruction from SEM images leads to remarkable understanding of microscopic surfaces, allowing informative and qualitative visualization of the samples being investigated. In this contribution, we integrate several computational technologies, including machine learning, the a-contrario methodology, and epipolar geometry, to design and develop a novel and efficient method called 3DSEM++ for multi-view 3D SEM surface reconstruction in an adaptive and intelligent fashion. The experiments performed on real and synthetic data assert that the approach is able to reach significant precision in both SEM extrinsic calibration and 3D surface modeling. Copyright © 2016 Elsevier Ltd. All rights reserved.

  16. Investigation of the agglomeration and amorphous transformation effects of neutron irradiation on the nanocrystalline silicon carbide (3C-SiC) using TEM and SEM methods

    Energy Technology Data Exchange (ETDEWEB)

    Huseynov, Elchin M., E-mail: elchin.h@yahoo.com [Department of Nanotechnology and Radiation Material Science, National Nuclear Research Center, Inshaatchilar pr. 4, AZ 1073 Baku (Azerbaijan); Institute of Radiation Problems of Azerbaijan National Academy of Sciences, B.Vahabzade 9, AZ 1143 Baku (Azerbaijan)

    2017-04-01

    Nanocrystalline 3C-SiC particles were irradiated by a neutron flux for 20 h in the TRIGA Mark II light-water pool-type research reactor. The silicon carbide nanoparticles were analyzed by Scanning Electron Microscopy (SEM) and Transmission Electron Microscopy (TEM) before and after neutron irradiation. The agglomeration of the nanoparticles was studied comparatively before and after irradiation. After neutron irradiation, the amorphous layer surrounding the nanoparticles was analyzed by TEM. Neutron irradiation defects in the 3C-SiC nanoparticles and other effects were also investigated by TEM. The effect of irradiation on the crystal structure of the nanomaterial was studied by selected area electron diffraction (SAED) and electron diffraction pattern (EDP) analysis.

  17. Variational methods in molecular modeling

    CERN Document Server

    2017-01-01

    This book presents tutorial overviews for many applications of variational methods to molecular modeling. Topics discussed include the Gibbs-Bogoliubov-Feynman variational principle, square-gradient models, classical density functional theories, self-consistent-field theories, phase-field methods, Ginzburg-Landau and Helfrich-type phenomenological models, dynamical density functional theory, and variational Monte Carlo methods. Illustrative examples are given to facilitate understanding of the basic concepts and quantitative prediction of the properties and rich behavior of diverse many-body systems, ranging from inhomogeneous fluids, electrolytes and ionic liquids in micropores, colloidal dispersions, liquid crystals, polymer blends, lipid membranes and microemulsions to magnetic materials and high-temperature superconductors. All chapters are written by leading experts in the field and illustrated with tutorial examples for their practical applications to specific subjects. With emphasis placed on physical understanding…

  18. Three dimensional analysis of the pore space in fine-grained Boom Clay, using BIB-SEM (broad-ion beam scanning electron microscopy), combined with FIB (focused ion-beam) serial cross-sectioning, pore network modeling and Wood's metal injection

    Science.gov (United States)

    Hemes, Susanne; Klaver, Jop; Desbois, Guillaume; Urai, Janos

    2014-05-01

    The Boom Clay is, besides the Ypresian clays, one of the potential host rock materials for radioactive waste disposal in Belgium (Gens et al., 2003; Van Marcke & Laenen, 2005; Verhoef et al., 2011). To access parameters relevant to the diffusion-controlled transport of radionuclides in the material, such as porosity, pore connectivity and permeability, it is crucial to characterize the pore space at high resolution (nm-scale) and in 3D. Focused ion beam (FIB) serial cross-sectioning in combination with high-resolution scanning electron microscopy (SEM), pore network modeling, Wood's metal injection and broad ion beam (BIB) milling constitute a superior set of methods for characterizing the 3D pore space of fine-grained, clayey materials down to nm-scale resolution. In the present study, we identified characteristic 3D pore space morphologies, determined the 3D volume porosity of the material and applied pore network extraction modeling (Dong and Blunt, 2009) to assess the connectivity of the pore space and to discriminate between pore bodies and pore throats. Moreover, we used Wood's metal injection (WMI) in combination with BIB-SEM imaging to assess the pore connectivity at a larger scale and even higher resolution. The FIB-SEM results show a highly (~90%) interconnected pore space in Boom Clay, down to a resolution of ~3 × 10³ nm³ (voxel size), with a total volume porosity of ~20%. Pore morphologies of large (>5 × 10⁸ nm³), highly interconnected pores are complex, with high surface-area-to-volume ratios (shape factors G ~ 0.01), whereas small (<10⁶ nm³), often isolated pores are much more compact and show higher shape factors (G) up to 0.03. WMI in combination with BIB-SEM, down to a resolution of ~50 nm² pixel size, indicates an interconnected porosity fraction of ~80% of a total measured 2D porosity of ~20%. Determining and distinguishing between pore bodies and pore throats enables us to compare 3D FIB-SEM pore…
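
    [Note] The interconnectivity figures quoted above can in principle be reproduced from any segmented 3D volume. A minimal sketch (Python with NumPy/SciPy; the function name and the 26-connectivity choice are our own assumptions, not from the paper) computing total volume porosity and the fraction of pore voxels in the largest connected cluster:

        import numpy as np
        from scipy import ndimage

        def porosity_and_connectivity(pores):
            """pores: 3D boolean array from a segmented FIB-SEM stack (True = pore voxel)."""
            total_porosity = pores.mean()
            # cluster pore voxels using 26-connectivity
            labels, n = ndimage.label(pores, structure=np.ones((3, 3, 3)))
            if n == 0:
                return total_porosity, 0.0
            sizes = np.bincount(labels.ravel())[1:]          # cluster sizes, background dropped
            connected_fraction = sizes.max() / sizes.sum()   # share in the largest cluster
            return total_porosity, connected_fraction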

  19. Shell model Monte Carlo methods

    International Nuclear Information System (INIS)

    Koonin, S.E.

    1996-01-01

    We review quantum Monte Carlo methods for dealing with large shell model problems. These methods reduce the imaginary-time many-body evolution operator to a coherent superposition of one-body evolutions in fluctuating one-body fields; the resultant path integral is evaluated stochastically. We first discuss the motivation, formalism, and implementation of such Shell Model Monte Carlo methods. There then follows a sampler of results and insights obtained from a number of applications, including the ground-state and thermal properties of pf-shell nuclei, the thermal behavior of γ-soft nuclei, and the calculation of double beta-decay matrix elements. Finally, prospects for further progress in such calculations are discussed. 87 refs.

  20. The microscopic (optical and SEM) examination of dental calculus deposits (DCD). Potential interest in forensic anthropology of a bio-archaeological method.

    Science.gov (United States)

    Charlier, Philippe; Huynh-Charlier, Isabelle; Munoz, Olivia; Billard, Michel; Brun, Luc; de la Grandmaison, Geoffroy Lorin

    2010-07-01

    This article describes the potential interest in forensic anthropology of the microscopic analysis of dental calculus deposits (DCD), a calcified residue frequently found on the surface of teeth. Its sampling and analysis seem straightforward and relatively reproducible. Samples came from archaeological material (KHB-1 Ra's al-Khabbah and RH-5 Ra's al-Hamra, two Prehistoric graveyards located in the Sultanate of Oman, dated between the 5th and 4th millennium B.C.; Montenzio Vecchia, an Etruscan-Celtic necropolis in the north of Italy, dated between the 5th and 3rd century B.C.; the bodily remains of Agnès Sorel, a French royal mistress who died in 1450 A.D.; and the skeleton of Pierre Hazard, a French royal notary from the 15th century A.D.). Samples were studied by optical microscopy (OM) or scanning electron microscopy (SEM). Many cytological, histological and elemental analyses were possible, producing valuable data for the identification of these remains, the reconstitution of their diet and occupational habits, and propositions regarding the manner of death. Copyright 2010 Elsevier Ireland Ltd. All rights reserved.

  1. Model reduction methods for vector autoregressive processes

    CERN Document Server

    Brüggemann, Ralf

    2004-01-01

    1.1 Objective of the Study. Vector autoregressive (VAR) models have become one of the dominant research tools in the analysis of macroeconomic time series during the last two decades. The great success of this modeling class started with Sims' (1980) critique of the traditional simultaneous equation models (SEM). Sims criticized the use of 'too many incredible restrictions' based on 'supposed a priori knowledge' in the large-scale macroeconometric models that were popular at the time. He therefore advocated largely unrestricted reduced-form multivariate time series models, unrestricted VAR models in particular. Ever since his influential paper these models have been employed extensively to characterize the underlying dynamics in systems of time series. In particular, tools to summarize the dynamic interaction between the system variables, such as impulse response analysis or forecast error variance decompositions, have been developed over the years. The econometrics of VAR models and related quantities i…

  2. Methods for testing transport models

    International Nuclear Information System (INIS)

    Singer, C.; Cox, D.

    1991-01-01

    Substantial progress has been made over the past year on six aspects of the work supported by this grant. As a result, we have in hand for the first time a fairly complete set of transport models and improved statistical methods for testing them against large databases. We also have initial results from such tests. These results indicate that careful application of presently available transport theories can reproduce a remarkably wide variety of tokamak data reasonably well.

  3. Characterizing the hydraulic properties of a paper coating layer using FIB-SEM tomography and 3D pore-scale modeling

    OpenAIRE

    Aslannejad, H.; Hassanizadeh, S.M.; Raoof, A.; de Winter, D.A.M.; Tomozeu, N.; van Genuchten, M.T.

    2017-01-01

    Paper used in the printing industry generally contains a relatively thin porous coating covering a thicker fibrous base layer. The three-dimensional pore structure of coatings has a major effect on fluid flow patterns inside the paper medium. Understanding and quantifying the flow properties of thin coating layers is hence crucial. Pore spaces within the coating have an average size of about 180 nm. We used scanning electron microscopy combined with focused ion beam (FIB-SEM) to visualize the...

  4. FIB-SEM tomography in biology.

    Science.gov (United States)

    Kizilyaprak, Caroline; Bittermann, Anne Greet; Daraspe, Jean; Humbel, Bruno M

    2014-01-01

    Three-dimensional information is much easier to understand than a set of two-dimensional images. Therefore a layman is thrilled by the pseudo-3D image taken in a scanning electron microscope (SEM), while a transmission electron micrograph challenges the imagination. The first approaches to gaining insight into the third dimension were to make serial microtome sections of a region of interest (ROI) and then build a model of the object. Serial microtome sectioning is tedious, skill-demanding work and is therefore seldom done. In the last two decades, with the increase in computer power, sophisticated display options, and the development of new instruments — an SEM with a built-in microtome as well as a focused ion beam scanning electron microscope (FIB-SEM) — serial sectioning and 3D analysis have become far easier and faster. Due to the relief-like topography of the microtome-trimmed block face of resin-embedded tissue, the ROI can be searched in the secondary electron mode, and at the selected spot the ROI is prepared with the ion beam for 3D analysis. For FIB-SEM tomography, a thin slice is removed with the ion beam and the newly exposed face is imaged with the electron beam, usually by recording the backscattered electrons. The process, also called "slice and view," is repeated until the desired volume is imaged. As FIB-SEM allows 3D imaging of biological fine structure at high resolution of only small volumes, it is crucial to perform slice and view at carefully selected spots. Finding the region of interest is therefore a prerequisite for meaningful imaging. Thin-layer plastification of biofilms offers direct access to the original sample surface and allows the selection of an ROI for site-specific FIB-SEM tomography just by its pronounced topographic features.

  5. METROLOGICAL PERFORMANCE OF SEM 3D TECHNIQUES

    DEFF Research Database (Denmark)

    Marinello, Francesco; Carmignato, Simone; Savio, Enrico

    2008-01-01

    This paper addresses the metrological performance of three-dimensional measurements performed with Scanning Electron Microscopes (SEMs) using reconstruction of surface topography through stereo-photogrammetry. Reconstruction is based on the model function introduced by Piazzesi, adapted for eucentric tilting. … and the instrument set-up; the second concerns the quality of scanned images and represents the major criticality in the application of SEMs for 3D characterizations. In particular, the critical roles played by the tilting angle and its relative uncertainty, the magnification and the deviations from the eucentricity condition are studied, in order to define a strategy to optimise the measurements taking account of the critical factors in SEM 3D reconstruction. Investigations were performed on a novel sample, specifically developed and implemented for the tests.

  6. On the Nature of SEM Estimates of ARMA Parameters.

    Science.gov (United States)

    Hamaker, Ellen L.; Dolan, Conor V.; Molenaar, Peter C. M.

    2002-01-01

    Reexamined the nature of structural equation modeling (SEM) estimates of autoregressive moving average (ARMA) models, replicated the simulation experiments of P. Molenaar, and examined the behavior of the log-likelihood ratio test. Simulation studies indicate that estimates of ARMA parameters observed with SEM software are identical to those…

  7. Superconducting Performance, Microstructure and SEM by EDX Analysis of IG Processed YBa2Cu3Oy Bulk Superconductors by Top and Interior Seeding Methods

    Science.gov (United States)

    Ide, N.; Muralidhar, M.; Radusovska, M.; Diko, P.; Jirsa, M.; Murakami, M.

    2017-07-01

    The top-seeding and interior-seeding methods, together with the infiltration growth (IG) technique, were used to produce YBa2Cu3Oy (Y-123) samples with Y2BaCuO5 (Y-211) secondary-phase particles. Tc (onset) was around 91.5 K. When the interior-seeding process was used, complete growth of a Y-123 single grain starting at the lower part of the bulk was observed by optical microscopy. The dispersion of Y-211 particles was quite uniform in the lower and upper parts of the samples, in both the a- and c-axis growth sectors, at both the beginning and end of the grain growth. In the sample produced by infiltration growth and interior seeding, the critical current density at 77 K with H//c-axis was 44,000 A/cm2 in self-field and 7,750 A/cm2 at 2 T. This provides a good basis for optimizing the processing conditions, which can further improve the superconductor's performance and enable the growth of large-size Y-123 bulks.

  8. SEM-based methods for the analysis of basaltic ash from weak explosive activity at Etna in 2006 and the 2007 eruptive crisis at Stromboli

    Science.gov (United States)

    Lautze, Nicole C.; Taddeucci, Jacopo; Andronico, Daniele; Cannata, Chiara; Tornetta, Lauretta; Scarlato, Piergiorgio; Houghton, Bruce; Lo Castro, Maria Deborah

    2012-01-01

    We present results from a semi-automated field-emission scanning electron microscope investigation of basaltic ash from a variety of eruptive processes that occurred at Mount Etna volcano in 2006 and at Stromboli volcano in 2007. From a methodological perspective, the proposed techniques provide relatively fast (about 4 h per sample) information on the size distribution, morphology, and surface chemistry of several hundred ash particles. Particle morphology is characterized by compactness and elongation parameters, and surface chemistry data are shown using ternary plots of the relative abundance of several key elements. The obtained size distributions match well those obtained by an independent technique. The surface chemistry data efficiently characterize the chemical composition, type and abundance of crystals, and dominant alteration phases in the ash samples. From a volcanological perspective, the analyzed samples cover a wide spectrum of relatively minor ash-forming eruptive activity, including weak Hawaiian fountaining at Etna, and lava-sea water interaction, weak Strombolian explosions, vent clearing activity, and a paroxysm during the 2007 eruptive crisis at Stromboli. This study outlines subtle chemical and morphological differences in the ash deposited at different locations during the Etna event, and variable alteration patterns in the surface chemistry of the Stromboli samples specific to each eruptive activity. Overall, we show this method to be effective in quantifying the main features of volcanic ash particles from the relatively weak - and yet frequent - explosive activity occurring at basaltic volcanoes.

  9. Curvelet based offline analysis of SEM images.

    Directory of Open Access Journals (Sweden)

    Syed Hamad Shirazi

    Manual offline analysis of scanning electron microscopy (SEM) images is a time-consuming process that requires continuous human intervention and effort. This paper presents an image-processing-based method for automated offline analysis of SEM images. To this end, our strategy relies on a two-stage process, viz. texture analysis and quantification. The method involves a preprocessing step aimed at noise removal, in order to avoid false edges. For texture analysis, the proposed method employs a state-of-the-art Curvelet transform followed by segmentation through a combination of entropy filtering, thresholding and mathematical morphology (MM). The quantification is carried out by applying a box-counting algorithm for fractal dimension (FD) calculation, with the ultimate goal of measuring parameters such as surface area and perimeter. The perimeter is estimated indirectly by counting the boundary boxes of the filled shapes. The proposed method, when applied to a representative set of SEM images, not only showed better results in image segmentation but also exhibited good accuracy in the calculation of surface area and perimeter. The proposed method outperforms the well-known Watershed segmentation algorithm.
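
    [Note] The box-counting step is simple to state precisely. A minimal sketch (Python/NumPy; the function name and box sizes are our own choices, and the paper's Curvelet/entropy segmentation is assumed to have already produced the binary mask):

        import numpy as np

        def box_counting_dimension(mask, sizes=(2, 4, 8, 16, 32, 64)):
            """Fractal dimension of a binary 2D mask by box counting."""
            counts = []
            for s in sizes:
                h, w = (mask.shape[0] // s) * s, (mask.shape[1] // s) * s  # crop to tile exactly
                tiles = mask[:h, :w].reshape(h // s, s, w // s, s)
                counts.append(tiles.any(axis=(1, 3)).sum())  # boxes holding >= 1 feature pixel
            slope, _ = np.polyfit(np.log(sizes), np.log(counts), 1)
            return -slope  # FD = -d log(count) / d log(box size)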

  10. Buscador semántico

    OpenAIRE

    Zamar, Esteban David

    2016-01-01

    84 p., ill. This work consists of the construction of a prototype semantic search engine for the Rectoral Resolutions of the Universidad Católica de Salta. It forms part of a research project on text mining led by Dr. Alicia Pérez and Lic. Carolina Cardoso. The work in question thus pursues the idea that, starting from text mining, a semantic search engine can be developed with free software tools, fulfilling characteri…

  11. Secondary emission monitor (SEM) grids.

    CERN Multimedia

    Patrice Loïez

    2002-01-01

    A great variety of Secondary Emission Monitors (SEM) are used all over the PS Complex. At other accelerators they are also called wire-grids, harps, etc. They are used to measure beam density profiles (from which beam size and emittance can be derived) in single-pass locations (not on circulating beams). Top left: two individual wire-planes. Top right: a combination of a horizontal and a vertical wire plane. Bottom left: a ribbon grid in its frame, with connecting wires. Bottom right: a SEM-grid with its insertion/retraction mechanism.

  12. Analytical methods used at model facility

    International Nuclear Information System (INIS)

    Wing, N.S.

    1984-01-01

    A description of analytical methods used at the model LEU Fuel Fabrication Facility is presented. The methods include gravimetric uranium analysis, isotopic analysis, fluorimetric analysis, and emission spectroscopy

  13. SEM: A Cultural Change Agent

    Science.gov (United States)

    Barnes, Bradley; Bourke, Brian

    2015-01-01

    The authors advance the concept that institutional culture is a purposeful framework by which to view SEM's utility, particularly as a cultural change agent. Through the connection of seemingly independent functions of performance and behavior, implications emerge that deepen the understanding of the influence of culture on performance outcomes…

  14. Structural Equations and Causal Explanations: Some Challenges for Causal SEM

    Science.gov (United States)

    Markus, Keith A.

    2010-01-01

    One common application of structural equation modeling (SEM) involves expressing and empirically investigating causal explanations. Nonetheless, several aspects of causal explanation that have an impact on behavioral science methodology remain poorly understood. It remains unclear whether applications of SEM should attempt to provide complete…

  15. Energy models: methods and trends

    International Nuclear Information System (INIS)

    Reuter, A.; Kuehner, R.; Wohlgemuth, N.

    1996-01-01

    Energy, environmental and economic systems do not allow for experimentation, since this would be dangerous, too expensive or even impossible. Instead, mathematical models are applied for energy planning. Experimenting is replaced by varying the structure and some parameters of 'energy models', computing the values of dependent parameters, comparing variations, and interpreting their outcomes. Energy models are as old as computers. In this article the major new developments in energy modeling are pointed out. We distinguish three drivers of new developments: progress in computer technology, methodological progress, and novel tasks of energy system analysis and planning.

  16. Candidate Prediction Models and Methods

    DEFF Research Database (Denmark)

    Nielsen, Henrik Aalborg; Nielsen, Torben Skov; Madsen, Henrik

    2005-01-01

    This document lists candidate prediction models for Work Package 3 (WP3) of the PSO-project called ``Intelligent wind power prediction systems'' (FU4101). The main focus is on the models transforming numerical weather predictions into predictions of power production. The document also outlines...

  17. Scanning Electron Microscopy (SEM) Procedure for HE Powders on a Zeiss Sigma HD VP SEM

    Energy Technology Data Exchange (ETDEWEB)

    Zaka, F. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2016-11-15

    This method describes the characterization of inert and HE materials with the Zeiss Sigma HD VP field-emission Scanning Electron Microscope (SEM). The SEM uses an accelerated electron beam to generate high-magnification images of explosives and other materials. It is fitted with five detectors (SE, InLens, STEM, VPSE, HDBSD) to enable imaging of the sample via different secondary-electron signatures, angles, and energies. In addition to imaging through electron detection, the microscope is also fitted with two Oxford Instruments Energy Dispersive Spectrometer (EDS) 80 mm detectors to generate elemental constituent spectra and two-dimensional maps of the material being scanned.

  18. Model correction factor method for system analysis

    DEFF Research Database (Denmark)

    Ditlevsen, Ove Dalager; Johannesen, Johannes M.

    2000-01-01

    The Model Correction Factor Method (MCFM) is an intelligent response surface method based on simplified modeling. MCFM is aimed at reliability analysis in the case of a limit state defined by an elaborate model. Herein it is demonstrated that the method is applicable for elaborate limit state surfaces on which several locally most central points exist, without there being a simple geometric definition of the corresponding failure modes, such as is the case for collapse mechanisms in rigid plastic hinge models for frame structures. Taking as simplified idealized model a model of similarity with the elaborate model … surface than existing in the idealized model…

  19. SEM Analysis of Tooth Enamel

    OpenAIRE

    Azinović, Zoran; Keros, Jadranka; Buković, Dino; Azinović, Ana

    2003-01-01

    This SEM analysis comprises research on the tooth enamel surfaces of two populations. The first group of samples is tooth enamel of prehistoric ancestors from Vučedol, and the second group is enamel of modern Croatian citizens. Even with this small number of human teeth samples from the Copper Age site of Vučedol (3,000 BC) and from today's Croatian people, we can draw conclusions about the chewing biometry of prehistoric ancestors and of today's modern Croatian people, comparing interspecifically the mor…

  20. Robust surface reconstruction by design-guided SEM photometric stereo

    Science.gov (United States)

    Miyamoto, Atsushi; Matsuse, Hiroki; Koutaki, Gou

    2017-04-01

    We present a novel approach that addresses the blind reconstruction problem in scanning electron microscope (SEM) photometric stereo for complicated semiconductor patterns. In our previous work, we developed a bootstrapping de-shadowing and self-calibration (BDS) method, which automatically calibrates the parameters of the gradient measurement formulas and resolves shadowing errors to estimate an accurate three-dimensional (3D) shape and the underlying shadowless images. Experimental results on 3D surface reconstruction demonstrated the significance of the BDS method for simple shapes, such as an isolated line pattern. However, we found that complicated shapes, such as line-and-space (L&S) and multilayered patterns, produce deformed and inaccurate measurement results. This problem is due to brightness fluctuations in the SEM images, which are mainly caused by energy fluctuations of the primary electron beam, variations in the electronic expanse inside a specimen, and electrical charging of specimens. Although these are essential difficulties in SEM photometric stereo, it is difficult to model accurately all the complicated physical phenomena of electron behavior. We therefore improved the robustness of the surface reconstruction to deal with these practical difficulties in complicated shapes. Here, design data are useful clues to the pattern layout and layer information of integrated semiconductors. We used the design data as a guide for the measured shape and incorporated a geometrical constraint term, evaluating the difference between the measured and designed shapes, into the objective function of the BDS method. Because the true shape does not necessarily correspond to the designed one, we use an iterative scheme to develop proper guide patterns and a 3D surface, providing a less distorted and more accurate 3D shape after convergence. Extensive experiments on real image data demonstrate the robustness and effectiveness…
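
    [Note] For reference, the core of classical photometric stereo, before any of the de-shadowing and self-calibration machinery above, is a per-pixel least-squares solve. A minimal Lambertian sketch (Python/NumPy; SEM detectors are not ideal Lambertian sensors, so this is only the textbook baseline that methods like BDS build on):

        import numpy as np

        def photometric_stereo(images, lights):
            """images: (k, h, w) intensity stack; lights: (k, 3) unit illumination directions."""
            k, h, w = images.shape
            I = images.reshape(k, -1)
            # per-pixel least squares: lights @ g = I, with g = albedo * normal
            g, *_ = np.linalg.lstsq(lights, I, rcond=None)
            albedo = np.linalg.norm(g, axis=0)
            normals = g / np.maximum(albedo, 1e-12)
            return normals.reshape(3, h, w), albedo.reshape(h, w)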

  1. SEM metrology on bit patterned media nanoimprint template: issues and improvements

    Science.gov (United States)

    Hwu, Justin J.; Babin, Sergey; Yushmanov, Peter

    2012-03-01

    Critical dimension (CD) measurement is the most essential metrology in nanofabrication processes, and it is most commonly carried out with SEMs for their flexibility in sampling, imaging, and data processing. In bit patterned media process development, nanoimprint lithography (NIL) is used for template replication and media fabrication. SEM imaging of templates provides not only individual dot size but also information on dot size distribution, dot locations, pitch and array alignment quality, etc. It is very important to know the SEM measurement limit, since nominal feature sizes are below 20 nm and dot size and other metrics relate to the final media performance. In our work an analytical SEM was used. We performed and compared two image analysis approaches for metrology information. The SEM beam was characterized using a BEAMETR test sample and software for proper beam-condition setup. A series of images obtained on 27 nm nominal-pitch dot array patterns was analyzed by the conventional brightness-intensity threshold method and by physical-model-based analysis using the myCD software. Through this comparison we identified the issues with the threshold method and the strengths of model-based analysis, which improved feature size and pitch measurement uncertainty and accuracy. TEM cross-sections served as an accuracy reference for better understanding the sources of measurement deviation.
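
    [Note] The threshold method compared here amounts to locating sub-pixel intensity crossings on a line scan. A minimal sketch (Python/NumPy; the 50% level and the helper name are illustrative — this is the simple approach the record contrasts with model-based analysis):

        import numpy as np

        def threshold_cd(x, profile, level=0.5):
            """Feature width from a line scan: sub-pixel crossings of a relative threshold."""
            p = (profile - profile.min()) / (profile.max() - profile.min() + 1e-12)
            crossings = np.flatnonzero(np.diff((p >= level).astype(int)))
            if len(crossings) < 2:
                return None  # no feature found
            edges = []
            for i in (crossings[0], crossings[-1]):
                lo, hi = (i, i + 1) if p[i] < p[i + 1] else (i + 1, i)  # rising vs falling edge
                edges.append(np.interp(level, [p[lo], p[hi]], [x[lo], x[hi]]))
            return abs(edges[1] - edges[0])  # CD estimate in the units of x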

  2. The influence of environment temperature on SEM image quality

    International Nuclear Information System (INIS)

    Chen, Li; Liu, Junshan

    2015-01-01

    As structure dimensions go down to the nano-scale, a scanning electron microscope (SEM) is often required to provide image magnification up to 100 000 ×. However, SEM images at such high magnification usually suffer from degraded resolution and a low signal-to-noise ratio, which results in low image quality. In this paper, the quality of the SEM image is improved by optimizing the environment temperature. The experimental results indicate that at 100 000 × the quality of the SEM image is influenced by the environment temperature, whereas at 50 000 × it is not. At 100 000 × the best SEM image quality is achieved at environment temperatures ranging from 292 to 294 K, and the SEM image quality, evaluated by the double-stimulus continuous quality scale method, can increase from grade 1 to grade 5. It is expected that this image-quality improvement method can be used in routine measurements with ordinary SEMs to obtain high-quality images by optimizing the environment temperature. (paper)

  3. Modelling Method of Recursive Entity

    Science.gov (United States)

    Amal, Rifai; Messoussi, Rochdi

    2012-01-01

    With the development of Information and Communication Technologies, great masses of information are published on the Web. In order to reuse, share and organise them in distance learning and e-learning frameworks, several research projects have been carried out and various standards and modelling languages developed. In our previous…

  4. 3D reconstruction of SEM images by use of optical photogrammetry software.

    Science.gov (United States)

    Eulitz, Mona; Reiss, Gebhard

    2015-08-01

    Reconstruction of the three-dimensional (3D) surface of an object is widely used for structure analysis in science, and many biological questions require information about true 3D structure. For Scanning Electron Microscopy (SEM) there has been no efficient non-destructive solution for reconstruction of surface morphology to date. The well-known method of recording stereo-pair images generates a 3D stereoscopic reconstruction of a section, but not of the complete sample surface. We present a simple and non-destructive method of 3D surface reconstruction from SEM samples based on the principles of optical close-range photogrammetry, in which a series of overlapping photos is used to generate a 3D model of the surface of an object. We adapted this method to the special requirements of the SEM: instead of moving a detector around the object, the object itself was rotated. A series of overlapping photos was stitched and converted into a 3D model using software commonly used for optical photogrammetry. A rabbit kidney glomerulus was used to demonstrate the workflow of this adaptation. The reconstruction produced a realistic and high-resolution 3D mesh model of the glomerular surface. The study showed that SEM micrographs are suitable for 3D reconstruction by optical photogrammetry. This new approach is a simple and useful method of 3D surface reconstruction suitable for various applications in research and teaching. Copyright © 2015 Elsevier Inc. All rights reserved.

  5. Developing a TQM quality management method model

    OpenAIRE

    Zhang, Zhihai

    1997-01-01

    From an extensive review of total quality management literature, the external and internal environment affecting an organization's quality performance and the eleven primary elements of TQM are identified. Based on the primary TQM elements, a TQM quality management method model is developed. This model describes the primary quality management methods which may be used to assess an organization's present strengths and weaknesses with regard to its use of quality management methods. This model ...

  6. Spectral methods applied to Ising models

    International Nuclear Information System (INIS)

    DeFacio, B.; Hammer, C.L.; Shrauner, J.E.

    1980-01-01

    Several applications of Ising models are reviewed. A 2-d Ising model is studied, and the problem of describing an interface boundary in a 2-d Ising model is addressed. Spectral methods are used to formulate a soluble model for the surface tension of a many-Fermion system

  7. Social Context, Self-Perceptions and Student Engagement: A SEM Investigation of the Self-System Model of Motivational Development (SSMMD)

    Science.gov (United States)

    Dupont, Serge; Galand, Benoit; Nils, Frédéric; Hospel, Virginie

    2014-01-01

    Introduction: The present study aimed to test a theoretically-based model (the self-system model of motivational development) including at the same time the extent to which the social context provides structure, warmth and autonomy support, the students' perceived autonomy, relatedness and competence, and behavioral, cognitive and emotional…

  8. A business case method for business models

    OpenAIRE

    Meertens, Lucas Onno; Starreveld, E.; Iacob, Maria Eugenia; Nieuwenhuis, Lambertus Johannes Maria; Shishkov, Boris

    2013-01-01

    Intuitively, business cases and business models are closely connected. However, a thorough literature review revealed no research on their combination. Besides that, little has been written on the evaluation of business models at all. This makes it difficult to compare different business model alternatives and choose the best one. In this article, we develop a business case method to objectively compare business models. It is an eight-step method, starting with business drivers and ending wit…

  9. Assessment of engineered surfaces roughness by high-resolution 3D SEM photogrammetry

    Energy Technology Data Exchange (ETDEWEB)

    Gontard, L.C., E-mail: lionelcg@gmail.com [Departamento de Ciencia de los Materiales e Ingeniería Metalúrgica y Química Inorgánica, Universidad de Cádiz, Puerto Real 11510 (Spain); López-Castro, J.D.; González-Rovira, L. [Departamento de Ciencia de los Materiales e Ingeniería Metalúrgica y Química Inorgánica, Escuela Superior de Ingeniería, Laboratorio de Corrosión, Universidad de Cádiz, Puerto Real 11519 (Spain); Vázquez-Martínez, J.M. [Departamento de Ingeniería Mecánica y Diseño Industrial, Escuela Superior de Ingeniería, Universidad de Cádiz, Puerto Real 11519 (Spain); Varela-Feria, F.M. [Servicio de Microscopía Centro de Investigación, Tecnología e Innovación (CITIUS), Universidad de Sevilla, Av. Reina Mercedes 4b, 41012 Sevilla (Spain); Marcos, M. [Departamento de Ingeniería Mecánica y Diseño Industrial, Escuela Superior de Ingeniería, Universidad de Cádiz, Puerto Real 11519 (Spain); and others

    2017-06-15

    Highlights: • We describe a method to acquire a high-angle tilt series of SEM images that is symmetric with respect to the zero tilt of the sample stage. The method can be applied in any SEM microscope. • Using the method, high-resolution 3D SEM photogrammetry can be applied to planar surfaces. • 3D models of three surfaces patterned with grooves are reconstructed at high resolution using multi-view freeware photogrammetry software, as described in L.C. Gontard et al., Ultramicroscopy, 2016. • Roughness parameters are measured from the 3D models. • High-resolution 3D SEM photogrammetry is compared with two conventional methods used for roughness characterization: stereo-photogrammetry and contact profilometry. • It provides three-dimensional information with a resolution that is out of reach for any other metrological technique. - Abstract: We describe a methodology for obtaining three-dimensional models of engineered surfaces using scanning electron microscopy and multi-view photogrammetry (3DSEM). For the reconstruction of the 3D models of the surfaces we used freeware available in the cloud. The method was applied to study the surface roughness of metallic samples patterned with parallel grooves by laser. The results are compared with measurements obtained using stylus profilometry (PR) and SEM stereo-photogrammetry (SP). The application of 3DSEM is more time-demanding than PR or SP, but it provides a more accurate representation of the surfaces. The results obtained with the three techniques are compared by investigating the influence of sampling step on the roughness parameters.

  10. International Conference on SEMS 2012

    CERN Document Server

    Liu, Chuang; Scientific explanation and methodology of science; SEMS 2012

    2014-01-01

    This volume contains the contributed papers of invitees to SEMS 2012 who also gave talks at the conference. The invitees are experts in philosophy of science and technology from Asia (besides China), Australia, Europe, Latin America, and North America, as well as from within China. The papers in this volume represent the latest work of each researcher in his or her area of expertise and, as a result, give a good representation of cutting-edge research in diverse areas in different parts of the world.

  11. Residual-based model diagnosis methods for mixture cure models.

    Science.gov (United States)

    Peng, Yingwei; Taylor, Jeremy M G

    2017-06-01

    Model diagnosis, an important issue in statistical modeling, has not yet been addressed adequately for cure models. We focus on mixture cure models in this work and propose some residual-based methods to examine the fit of the mixture cure model, particularly the fit of the latency part of the mixture cure model. The new methods extend the classical residual-based methods to the mixture cure model. Numerical work shows that the proposed methods are capable of detecting lack-of-fit of a mixture cure model, particularly in the latency part, such as outliers, improper covariate functional form, or nonproportionality in hazards if the proportional hazards assumption is employed in the latency part. The methods are illustrated with two real data sets that were previously analyzed with mixture cure models. © 2016, The International Biometric Society.

  12. Exploring Several Methods of Groundwater Model Selection

    Science.gov (United States)

    Samani, Saeideh; Ye, Ming; Asghari Moghaddam, Asghar

    2017-04-01

    Selecting reliable models for simulating groundwater flow and solute transport is essential to groundwater resources management and protection. This work explores several model selection methods for avoiding over-complex and/or over-parameterized groundwater models. We consider six groundwater flow models with different numbers (6, 10, 10, 13, 13 and 15) of model parameters. These models represent alternative geological interpretations, recharge estimates, and boundary conditions at a study site in Iran. The models were developed with ModelMuse and calibrated against observations of hydraulic head using UCODE. Model selection was conducted using four approaches: (1) ranking the models by the root mean square error (RMSE) obtained after UCODE-based calibration; (2) calculating model probability using the GLUE method; (3) evaluating model probability using model selection criteria (AIC, AICc, BIC, and KIC); and (4) evaluating model weights using the fuzzy Multi-Criteria Decision-Making (MCDM) approach. MCDM is based on the fuzzy analytic hierarchy process (AHP) and the fuzzy technique for order performance, which identifies the ideal solution by a gradual expansion from the local to the global scale of model parameters. The KIC and MCDM methods are superior to the other methods, as they consider not only the fit between observed and simulated data and the number of parameters, but also uncertainty in the model parameters. Considering these factors can prevent over-complexity and over-parameterization when selecting groundwater flow models. These methods selected, as the best model, one with average complexity (10 parameters) and the best parameter estimation (model 3).
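
    [Note] The information criteria named above differ only in their complexity penalties. A minimal sketch (Python; Gaussian-error approximation of the log-likelihood from the residual sum of squares — KIC is omitted because it additionally requires the Fisher information term):

        import numpy as np

        def information_criteria(rss, n, k):
            """AIC / AICc / BIC from residual sum of squares, n data points, k parameters."""
            loglik = -0.5 * n * (np.log(2.0 * np.pi * rss / n) + 1.0)  # Gaussian MLE
            aic = 2 * k - 2 * loglik
            aicc = aic + 2 * k * (k + 1) / (n - k - 1)  # small-sample correction
            bic = k * np.log(n) - 2 * loglik
            return aic, aicc, bic

    Lower values favor a model; penalties of this kind are what steer selection away from over-parameterized variants such as the 15-parameter model.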

  13. Mechatronic Systems Design Methods, Models, Concepts

    CERN Document Server

    Janschek, Klaus

    2012-01-01

    In this textbook, fundamental methods for model-based design of mechatronic systems are presented in a systematic, comprehensive form. The method framework presented here comprises domain-neutral methods for modeling and performance analysis: multi-domain modeling (energy/port/signal-based), simulation (ODE/DAE/hybrid systems), robust control methods, stochastic dynamic analysis, and quantitative evaluation of designs using system budgets. The model framework is composed of analytical dynamic models for important physical and technical domains of realization of mechatronic functions, such as multibody dynamics, digital information processing and electromechanical transducers. Building on the modeling concept of a technology-independent generic mechatronic transducer, concrete formulations for electrostatic, piezoelectric, electromagnetic, and electrodynamic transducers are presented. More than 50 fully worked out design examples clearly illustrate these methods and concepts and enable independent study of th...

  14. Element-by-element parallel spectral-element methods for 3-D teleseismic wave modeling

    KAUST Repository

    Liu, Shaolin

    2017-09-28

    The development of an efficient algorithm for teleseismic wave field modeling is valuable for calculating the gradients of the misfit function (termed misfit gradients) or Fréchet derivatives when the teleseismic waveform is used for adjoint tomography. Here, we introduce an element-by-element parallel spectral-element method (EBE-SEM) for the efficient modeling of teleseismic wave field propagation in a reduced geology model. Under the plane-wave assumption, the frequency-wavenumber (FK) technique is implemented to compute the boundary wave field used to construct the boundary condition of the teleseismic wave incidence. To reduce the memory required for the storage of the boundary wave field for the incidence boundary condition, a strategy is introduced to efficiently store the boundary wave field on the model boundary. The perfectly matched layers absorbing boundary condition (PML ABC) is formulated using the EBE-SEM to absorb the scattered wave field from the model interior. The misfit gradient can easily be constructed in each time step during the calculation of the adjoint wave field. Three synthetic examples demonstrate the validity of the EBE-SEM for use in teleseismic wave field modeling and the misfit gradient calculation.

  15. Exploring the Association between Transformational Leadership and Teacher's Self-Efficacy in Greek Education System: A Multilevel SEM Model

    Science.gov (United States)

    Gkolia, Aikaterini; Koustelios, Athanasios; Belias, Dimitrios

    2018-01-01

    The main aim of this study is to examine the effect of principals' transformational leadership on teachers' self-efficacy across 77 different Greek elementary and secondary schools based on a centralized education system. For the investigation of the above effect multilevel Structural Equation Modelling analysis was conducted, recognizing the…

  16. Multiple-Group Analysis Using the sem Package in the R System

    Science.gov (United States)

    Evermann, Joerg

    2010-01-01

    Multiple-group analysis in covariance-based structural equation modeling (SEM) is an important technique to ensure the invariance of latent construct measurements and the validity of theoretical models across different subpopulations. However, not all SEM software packages provide multiple-group analysis capabilities. The sem package for the R…

  17. Twitter's tweet method modelling and simulation

    Science.gov (United States)

    Sarlis, Apostolos S.; Sakas, Damianos P.; Vlachos, D. S.

    2015-02-01

    This paper proposes a concept for Twitter marketing methods. The tools that Twitter provides are modelled and simulated using iThink in the context of a Twitter media-marketing agency. The paper leverages the system dynamics paradigm to model Twitter marketing tools and methods, using the iThink™ system to implement them, and uses the design science research methodology as proof of concept for the models and modelling processes. The following models were developed for a Twitter marketing agent/company and tested in real circumstances with real numbers; they were finalized through a number of revisions and iterations of design, development, simulation, testing and evaluation. The paper also addresses the methods best suited to organized, targeted promotion on the Twitter social media service. The validity and usefulness of these Twitter marketing method models for day-to-day decision making are confirmed by the management of the company. The paper implements system dynamics concepts of Twitter marketing method modelling and produces models of various Twitter marketing situations. The Tweet method that Twitter provides can be adjusted, depending on the situation, to maximize the profit of the company/agent.
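
    [Note] The stock-and-flow logic behind such iThink models can be mimicked in a few lines. A minimal sketch (plain Python rather than iThink™; all parameter values are hypothetical, not the company's figures) of a one-stock follower model with a tweet-driven inflow:

        def simulate_followers(days=90, tweets_per_day=5.0, reach=200.0,
                               conversion=0.01, churn=0.002, followers=1000.0):
            """One-stock system-dynamics model: tweet-driven inflow, churn-driven outflow."""
            history = []
            for _ in range(days):
                inflow = tweets_per_day * reach * conversion  # new followers per day
                outflow = churn * followers                   # daily unfollows
                followers += inflow - outflow                 # Euler step, dt = 1 day
                history.append(followers)
            return history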

  18. Moderators, mediators, and bidirectional relationships in the International Classification of Functioning, Disability and Health (ICF) framework: An empirical investigation using a longitudinal design and Structural Equation Modeling (SEM).

    Science.gov (United States)

    Rouquette, Alexandra; Badley, Elizabeth M; Falissard, Bruno; Dub, Timothée; Leplege, Alain; Coste, Joël

    2015-06-01

    The International Classification of Functioning, Disability and Health (ICF), published in 2001, describes the consequences of health conditions with three components: impairments in body structures or functions, activity limitations and participation restrictions. Two of the new features of the conceptual model were the possibility of feedback effects between each ICF component and the introduction of contextual factors conceptualized as moderators of the relationships between the components. The aim of this longitudinal study is to provide empirical evidence of these two kinds of effect. Structural equation modeling was used to analyze data from a French population-based cohort of 548 patients with knee osteoarthritis recruited between April 2007 and March 2009 and followed for three years. Indicators of the body structure and function, activity and participation components of the ICF were derived from self-administered standardized instruments. The measurement model revealed four separate factors for body structure impairments, body function impairments, activity limitations and participation restrictions. The classic sequence from body impairments to participation restrictions through activity limitations was found at each assessment time. Longitudinal study of the ICF component relationships showed a feedback pathway indicating that the level of participation restrictions at baseline was predictive of activity limitations three years later. Finally, the moderating role of personal (age, sex, mental health, etc.) and environmental factors (family relationships, mobility device use, etc.) was investigated. Three contextual factors (sex, family relationships and walking stick use) were found to be moderators of the relationship between the body impairments and activity limitations components. Mental health was found to be a mediating factor of the effect of activity limitations on participation restrictions. Copyright © 2015 Elsevier Ltd. All rights reserved.

  19. Model Uncertainty Quantification Methods In Data Assimilation

    Science.gov (United States)

    Pathiraja, S. D.; Marshall, L. A.; Sharma, A.; Moradkhani, H.

    2017-12-01

    Data Assimilation involves utilising observations to improve model predictions in a seamless and statistically optimal fashion. Its applications are wide-ranging, from improving weather forecasts to tracking targets such as in the Apollo 11 mission. The use of Data Assimilation methods in high-dimensional complex geophysical systems is an active area of research, where many opportunities exist to enhance existing methodologies. One of the central challenges is model uncertainty quantification; the outcome of any Data Assimilation study is strongly dependent on the uncertainties assigned to both observations and models. I focus on developing improved model uncertainty quantification methods that are applicable to challenging real-world scenarios. These include developing methods for cases where the system states are only partially observed, where there is little prior knowledge of the model errors, and where the model error statistics are likely to be highly non-Gaussian.

  20. In Situ Characterization of Boehmite Particles in Water Using Liquid SEM.

    Science.gov (United States)

    Yao, Juan; Arey, Bruce W; Yang, Li; Zhang, Fei; Komorek, Rachel; Chun, Jaehun; Yu, Xiao-Ying

    2017-09-27

    In situ imaging and elemental analysis of boehmite (AlOOH) particles in water are realized using the System for Analysis at the Liquid Vacuum Interface (SALVI) and Scanning Electron Microscopy (SEM). This paper describes the method and the key steps in integrating the vacuum-compatible SALVI with the SEM and obtaining secondary electron (SE) images of particles in liquid under high vacuum. Energy dispersive x-ray spectroscopy (EDX) is used to obtain elemental analysis of particles in liquid, as well as of control samples including deionized (DI) water only and an empty channel. Synthesized boehmite (AlOOH) particles suspended in liquid are used as a model in the liquid SEM illustration. The results demonstrate that the particles can be imaged in SE mode with good resolution (i.e., 400 nm). The AlOOH EDX spectrum shows a significant aluminum (Al) signal when compared with the DI water and empty-channel controls. In situ liquid SEM is a powerful technique for studying particles in liquid, with many exciting applications. This procedure aims to provide the technical know-how needed to conduct liquid SEM imaging and EDX analysis using SALVI and to reduce potential pitfalls when using this approach.

  1. A Method for Model Checking Feature Interactions

    DEFF Research Database (Denmark)

    Pedersen, Thomas; Le Guilly, Thibaut; Ravn, Anders Peter

    2015-01-01

    This paper presents a method to check for feature interactions in a system assembled from independently developed concurrent processes, as found in many reactive systems. The method combines and refines existing definitions and adds a set of activities. The activities describe how to populate the definitions with models to ensure that all interactions are captured. The method is illustrated on a home automation example with model checking as the analysis tool. In particular, the modelling formalism is timed automata and the analysis uses UPPAAL to find interactions.

  2. Automated CD-SEM metrology for efficient TD and HVM

    Science.gov (United States)

    Starikov, Alexander; Mulapudi, Satya P.

    2008-03-01

    The CD-SEM is the metrology tool of choice for patterning process development and production process control. We can make these applications more efficient by extracting more information from each CD-SEM image. This enables direct monitoring of key process parameters, such as lithography dose and focus, and prediction of processing outcomes, such as etched dimensions or electrical parameters. Automating CD-SEM recipes at the early stages of process development can accelerate technology characterization, segmentation of variance and process improvement. This leverages the engineering effort, reduces development costs and helps to manage the risks inherent in a new technology. Automating the CD-SEM for manufacturing enables efficient operations. The novel SEM Alarm Time Indicator (SATI) makes this task manageable. SATI pulls together data mining, trend charting of the key recipe and Operations (OPS) indicators, Pareto analysis of OPS losses, and inputs for root-cause analysis. This approach proved natural to our FAB personnel. After minimal initial training, we applied the new methods in 65 nm FLASH manufacturing. This resulted in significant and lasting improvements in CD-SEM recipe robustness, portability and automation, and increased CD-SEM capacity and MT productivity.

  3. Level Crossing Methods in Stochastic Models

    CERN Document Server

    Brill, Percy H

    2008-01-01

    Since its inception in 1974, the level crossing approach for analyzing a large class of stochastic models has become increasingly popular among researchers. This volume traces the evolution of level crossing theory for obtaining probability distributions of state variables and demonstrates solution methods in a variety of stochastic models including: queues, inventories, dams, renewal models, counter models, pharmacokinetics, and the natural sciences. Results for both steady-state and transient distributions are given, and numerous examples help the reader apply the method to solve problems fa

  4. Numerical methods and modelling for engineering

    CERN Document Server

    Khoury, Richard

    2016-01-01

    This textbook provides a step-by-step approach to numerical methods in engineering modelling. The authors provide a consistent treatment of the topic, from the ground up, to reinforce for students that numerical methods are a set of mathematical modelling tools which allow engineers to represent real-world systems and compute features of these systems with a predictable error rate. Each method presented addresses a specific type of problem, namely root-finding, optimization, integrals, derivatives, initial value problems, or boundary value problems, and each encompasses a set of algorithms to solve the problem given some information and to within a known error bound. The authors demonstrate that, after developing a proper model and understanding of the engineering situation they are working on, engineers can break a model down into a set of specific mathematical problems and then implement the appropriate numerical methods to solve them. Uses a "building-block" approach, starting with simpler mathemati…

  5. Modeling complex work systems - method meets reality

    NARCIS (Netherlands)

    van der Veer, Gerrit C.; Hoeve, Machteld; Lenting, Bert

    1996-01-01

    Modeling an existing task situation is often a first phase in the (re)design of information systems. For complex systems design, this model should consider both the people and the organization involved, the work, and situational aspects. Groupware Task Analysis (GTA) as part of a method for the

  6. The housing market: modeling and assessment methods

    Directory of Open Access Journals (Sweden)

    Zapadnjuk Evgenij Aleksandrovich

    2016-10-01

    This paper analyzes the theoretical foundations of an econometric simulation model that can be used to study the housing sector. It shows methods for the practical use of correlation and regression models in analyzing the status and development prospects of the housing market.

  7. Global minimization line-edge roughness analysis of top down SEM images

    Science.gov (United States)

    Lane, Barton; Mack, Chris; Eibagi, Nasim; Ventzek, Peter

    2017-03-01

    Line edge placement error is a limiting factor in the multipatterning schemes required for advanced nodes in high-volume semiconductor manufacturing. We therefore aim to develop an approach that provides both a quantitative estimate of whether a segment of a feature edge is in its ideal location and a quantitative estimate of the long-wavelength roughness. The method is described, numerical simulation models its application to distortion caused by SEM aberrations, and the method is applied to a sample data set of SEM images. We show that the method gives a robust estimate of a major component of feature edge placement error. Long-wavelength distortions, whether from SEM aberrations or from long-wavelength noise, have a clear statistical signature. Applied to a large, consistently acquired SEM data set, this methodology allows estimates of the elements required to assess line edge placement error and of whether there is underlying long-wavelength roughness arising from physical sources.
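
    [Note] The long-wavelength signature referred to above is usually read off a power spectral density of the detrended edge. A minimal sketch (Python/NumPy; the linear detrending choice and normalization are our own assumptions, not the paper's exact estimator):

        import numpy as np

        def edge_psd(edge_nm, pixel_nm):
            """One-sided PSD of edge deviations; low-frequency bins = long-wavelength roughness."""
            y = np.arange(len(edge_nm))
            detrended = edge_nm - np.polyval(np.polyfit(y, edge_nm, 1), y)  # remove placement/tilt
            n = len(detrended)
            psd = (np.abs(np.fft.rfft(detrended)) ** 2) * pixel_nm / n
            freq = np.fft.rfftfreq(n, d=pixel_nm)  # spatial frequency, 1/nm
            return freq, psd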

  8. Measurement error models, methods, and applications

    CERN Document Server

    Buonaccorsi, John P

    2010-01-01

    Over the last 20 years, comprehensive strategies for treating measurement error in complex models and accounting for the use of extra data to estimate measurement error parameters have emerged. Focusing on both established and novel approaches, ""Measurement Error: Models, Methods, and Applications"" provides an overview of the main techniques and illustrates their application in various models. It describes the impacts of measurement errors on naive analyses that ignore them and presents ways to correct for them across a variety of statistical models, from simple one-sample problems to regres

  9. Global Optimization Ensemble Model for Classification Methods

    Directory of Open Access Journals (Sweden)

    Hina Anwar

    2014-01-01

    Full Text Available Supervised learning is the process of data mining for deducing rules from training datasets. A broad array of supervised learning algorithms exists, each with its own advantages and drawbacks. There are some basic issues that affect the accuracy of a classifier while solving a supervised learning problem, such as the bias-variance tradeoff, the dimensionality of the input space, and noise in the input data. All these problems affect the accuracy of a classifier and are the reason that there is no globally optimal method for classification. There is no generalized improvement method that can increase the accuracy of any classifier while addressing all the problems stated above. This paper proposes a global optimization ensemble model for classification methods (GMC) that can improve the overall accuracy for supervised learning problems. Experimental results on various public datasets showed that the proposed model improved the accuracy of the classification models by 1% to 30%, depending upon the algorithm complexity.
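
    As a rough, hypothetical illustration of the ensemble idea (not the GMC algorithm itself, whose internals are not given in the abstract), scikit-learn's VotingClassifier combines heterogeneous base classifiers in a few lines:

        from sklearn.datasets import load_breast_cancer
        from sklearn.ensemble import RandomForestClassifier, VotingClassifier
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import cross_val_score
        from sklearn.naive_bayes import GaussianNB

        X, y = load_breast_cancer(return_X_y=True)

        # Three base learners with different biases; soft voting averages their
        # predicted probabilities, which often beats each model on its own.
        ensemble = VotingClassifier(
            estimators=[("lr", LogisticRegression(max_iter=5000)),
                        ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
                        ("nb", GaussianNB())],
            voting="soft")

        print("5-fold CV accuracy:", cross_val_score(ensemble, X, y, cv=5).mean())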

  10. Modelling methods for milk intake measurements

    International Nuclear Information System (INIS)

    Coward, W.A.

    1999-01-01

    One component of the first Research Coordination Programme was a tutorial session on modelling in in-vivo tracer kinetic methods. This section describes the principles that are involved and how these can be translated into spreadsheets using Microsoft Excel and the SOLVER function to fit the model to the data. The purpose of this section is to describe the system developed within the RCM, and how it is used
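
    The fitting that SOLVER performs in the spreadsheet can be reproduced outside Excel. A minimal sketch, assuming a simple mono-exponential tracer elimination model and made-up enrichment data (the actual model used within the RCM is not specified here), with scipy:

        import numpy as np
        from scipy.optimize import curve_fit

        # Hypothetical tracer data: time (days) vs. isotope enrichment.
        t = np.array([1.0, 2.0, 4.0, 6.0, 9.0, 12.0])
        e = np.array([95.0, 80.0, 57.0, 41.0, 25.0, 15.0])

        def mono_exp(t, e0, k):
            # Single-compartment elimination: e(t) = e0 * exp(-k * t)
            return e0 * np.exp(-k * t)

        (e0, k), _ = curve_fit(mono_exp, t, e, p0=(100.0, 0.1))
        print(f"fitted e0 = {e0:.1f}, elimination rate k = {k:.3f} per day")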

  11. Modelling asteroid brightness variations. I - Numerical methods

    Science.gov (United States)

    Karttunen, H.

    1989-01-01

    A method for generating lightcurves of asteroid models is presented. The effects of the shape of the asteroid and the scattering law of a surface element are distinctly separable, being described by chosen functions that can easily be changed. The shape is specified by means of two functions that yield the length of the radius vector and the normal vector of the surface at a given point. The general shape must be convex, but spherical concavities producing macroscopic shadowing can also be modeled.
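
    The separability described above can be made concrete with a toy generator: a convex triaxial ellipsoid as the shape function and a Lambertian scattering law, with sun and observer coincident. All geometry and parameters below are illustrative assumptions, not the paper's implementation.

        import numpy as np

        a, b, c = 1.0, 0.7, 0.5                        # ellipsoid semi-axes

        u = np.linspace(1e-3, np.pi - 1e-3, 90)        # colatitude grid
        v = np.linspace(0.0, 2 * np.pi, 180, endpoint=False)
        U, V = np.meshgrid(u, v, indexing="ij")
        du, dv = u[1] - u[0], v[1] - v[0]

        # Partial derivatives of the surface point P(u, v); |Pu x Pv| du dv is
        # the facet area, and Pu x Pv points along the outward normal.
        Pu = np.stack([a * np.cos(U) * np.cos(V), b * np.cos(U) * np.sin(V), -c * np.sin(U)], -1)
        Pv = np.stack([-a * np.sin(U) * np.sin(V), b * np.sin(U) * np.cos(V), np.zeros_like(U)], -1)
        cross = np.cross(Pu, Pv)
        area = np.linalg.norm(cross, axis=-1) * du * dv
        normal = cross / np.linalg.norm(cross, axis=-1, keepdims=True)

        flux = []
        for phi in np.linspace(0.0, 2 * np.pi, 100):       # rotation phase
            s = np.array([np.cos(phi), np.sin(phi), 0.0])  # sun = observer direction
            mu = np.clip(normal @ s, 0.0, None)            # lit, visible facets
            flux.append(np.sum(mu ** 2 * area))            # Lambert, zero phase angle
        print(np.round(flux[:5], 3))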

  12. Modeling Storm Surges Using Discontinuous Galerkin Methods

    Science.gov (United States)

    2016-06-01

    Storm surges have a...model. One of the governing systems of equations used to model storm surges’ effects is the Shallow Water Equations (SWE). In this thesis, we solve the...fundamental truth, we found the error norm of the implicit method to be minimal. This study focuses on the impacts of a simulated storm surge in La Push

  13. Revolving SEM images visualising 3D taxonomic characters

    DEFF Research Database (Denmark)

    Akkari, Nesrine; Cheung, David Koon-Bong; Enghoff, Henrik

    2013-01-01

    A novel illustration technique based on scanning electron microscopy is used for the first time to enhance taxonomic descriptions. The male genitalia (gonopods) of six species of millipedes are used for construction of interactive imaging models. Each model is a compilation of a number of SEM ima...

  14. Models and Methods for Free Material Optimization

    DEFF Research Database (Denmark)

    Weldeyesus, Alemseged Gebrehiwot

    conditions for physical attainability, in the sense that it has to be symmetric and positive semidefinite. FMO problems have been studied for the last two decades in many articles that have led to the development of a wide range of models, methods, and theories. As the design variables in FMO are the local...... programs. The method has successfully obtained solutions to large-scale classical FMO problems of simultaneous analysis and design, nested and dual formulations. The second goal is to extend the method and the FMO problem formulations to general laminated shell structures. The thesis additionally addresses...

  15. Model-Based Method for Sensor Validation

    Science.gov (United States)

    Vatan, Farrokh

    2012-01-01

    Fault detection, diagnosis, and prognosis are essential tasks in the operation of autonomous spacecraft, instruments, and in situ platforms. One of NASA's key mission requirements is robust state estimation. Sensing, using a wide range of sensors and sensor fusion approaches, plays a central role in robust state estimation, and there is a need to diagnose sensor failure as well as component failure. Sensor validation can be considered part of the larger effort of improving reliability and safety. The standard methods for solving the sensor validation problem are based on probabilistic analysis of the system, of which the method based on Bayesian networks is the most popular. These methods can therefore only predict the most probable faulty sensors, subject to the initial probabilities defined for the failures. The method developed in this work is based on a model-based approach and provides the faulty sensors (if any) that can be logically inferred from the model of the system and the sensor readings (observations). The method is also more suitable for systems where it is hard, or even impossible, to find the probability functions of the system. The method starts with a new mathematical description of the problem and develops a very efficient and systematic algorithm for its solution. The method builds on the concept of analytical redundant relations (ARRs).
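
    A toy version of the logical inference over analytical redundant relations, assuming three hypothetical redundant sensors that should report the same quantity (each pairwise consistency check is one ARR; no failure probabilities are needed):

        from itertools import combinations

        readings = {"s1": 10.02, "s2": 9.98, "s3": 12.90}   # s3 is drifting
        TOL = 0.5                                           # consistency tolerance

        # An ARR is violated when two supposedly redundant readings disagree.
        violated = [set(pair) for pair in combinations(readings, 2)
                    if abs(readings[pair[0]] - readings[pair[1]]) > TOL]

        # A sensor is faulty if it is implicated in every violated ARR: a purely
        # logical inference from the model and the observations.
        suspects = set.intersection(*violated) if violated else set()
        print("violated ARRs:", violated)
        print("faulty sensor(s):", suspects or "none")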

  16. Geostatistical methods applied to field model residuals

    DEFF Research Database (Denmark)

    Maule, Fox; Mosegaard, K.; Olsen, Nils

    The geomagnetic field varies on a variety of time- and length scales, which are only rudimentarily considered in most present field models. The part of the observed field that cannot be explained by a given model, the model residuals, is often considered an estimate of the data uncertainty (which consists of measurement errors and unmodelled signal), and is typically assumed to be uncorrelated and Gaussian distributed. We have applied geostatistical methods to analyse the residuals of the Oersted(09d/04) field model [http://www.dsri.dk/Oersted/Field_models/IGRF_2005_candidates/], which is based on 5 years of Ørsted and CHAMP data, and includes secular variation and acceleration, as well as low-degree external (magnetospheric) and induced fields. The analysis is done in order to find the statistical behaviour of the space-time structure of the residuals, as a proxy for the data covariances...

  17. Developing a TQM quality management method model

    NARCIS (Netherlands)

    Zhang, Zhihai

    1997-01-01

    From an extensive review of total quality management literature, the external and internal environment affecting an organization's quality performance and the eleven primary elements of TQM are identified. Based on the primary TQM elements, a TQM quality management method model is developed. This

  18. Railway Track Allocation: Models and Methods

    DEFF Research Database (Denmark)

    Lusby, Richard Martin; Larsen, Jesper; Ehrgott, Matthias

    2011-01-01

    Efficiently coordinating the movement of trains on a railway network is a central part of the planning process for a railway company. This paper reviews models and methods that have been proposed in the literature to assist planners in finding train routes. Since the problem of routing trains on ...

  20. Sem analysis zirconia-ceramic adhesion interface

    Science.gov (United States)

    CARDELLI, P.; VERTUCCI, V.; MONTANI, M.; ARCURI, C.

    2015-01-01

    SUMMARY Objectives Modern dentistry increasingly relies on aesthetically acceptable, biomimetic materials. Among these are zirconia and ceramics, used together for several years, a combination that has become synonymous with aesthetics; however, the real nature of the bond between these two materials remains a controversial topic in the literature. The aim of our study was to characterize the type of bond that can exist between these materials. Materials and methods To investigate the nature of this bond we used SEM microscopy (Zeiss SUPRA 25). Bilaminar specimens of "white" Zircodent® zirconia and Noritake® ceramic, after being subjected to three-point bending tests and FEM analysis, were analyzed by SEM. Analysis of fragments close to the fracture point allowed us to "see", at high magnification and without the use of a liner, whether a lasting bond exists between these two materials and which type of failure can occur. Results From our analysis of the specimen fragments after mechanical testing, it is difficult to identify a clear margin or zones of non-adhesion between the two materials, even in the fragments adjacent to the fracture produced during the mechanical test. Conclusions According to our analysis, and with all due caution, we can assume that a long-lasting bond between zirconia and ceramic can be obtained. In agreement with the data in the literature, the type of bond varies according to the type of specimen, and so does the type of failure. In samples where the ceramic superstructure envelops the zirconia framework we are in the presence of a cohesive failure; otherwise, an adhesive failure occurs. PMID:27555905

  1. Acceleration methods and models in Sn calculations

    International Nuclear Information System (INIS)

    Sbaffoni, M.M.; Abbate, M.J.

    1984-01-01

    In some neutron transport problems solved by the discrete ordinates method, it is relatively common to observe certain peculiarities such as the generation of negative fluxes, slow and unreliable convergence, and solution instabilities. The commonly used models for neutron flux calculation and the acceleration methods included in the most widely used codes were analyzed, in view of their use in problems characterized by a strong upscattering effect. Some special conclusions derived from this analysis are presented, as well as a new method to perform the upscattering scaling for solving the aforementioned problems in such cases. This method has been included in the DOT3.5 code (two-dimensional discrete ordinates radiation transport code), generating a new version of wider applicability. (Author) [es

  2. Alternative methods of modeling wind generation using production costing models

    International Nuclear Information System (INIS)

    Milligan, M.R.; Pang, C.K.

    1996-08-01

    This paper examines the methods of incorporating wind generation in two production costing models: one is a load duration curve (LDC) based model and the other is a chronological-based model. These two models were used to evaluate the impacts of wind generation on two utility systems using actual collected wind data at two locations with high potential for wind generation. The results are sensitive to the selected wind data and the level of benefits of wind generation is sensitive to the load forecast. The total production cost over a year obtained by the chronological approach does not differ significantly from that of the LDC approach, though the chronological commitment of units is more realistic and more accurate. Chronological models provide the capability of answering important questions about wind resources which are difficult or impossible to address with LDC models

  3. Mathematical methods and models in composites

    CERN Document Server

    Mantic, Vladislav

    2014-01-01

    This book provides a representative selection of the most relevant, innovative, and useful mathematical methods and models applied to the analysis and characterization of composites and their behaviour on micro-, meso-, and macroscale. It establishes the fundamentals for meaningful and accurate theoretical and computer modelling of these materials in the future. Although the book is primarily concerned with fibre-reinforced composites, which have ever-increasing applications in fields such as aerospace, many of the results presented can be applied to other kinds of composites. The topics cover

  4. Intelligent structural optimization: Concept, Model and Methods

    International Nuclear Information System (INIS)

    Lu, Dagang; Wang, Guangyuan; Peng, Zhang

    2002-01-01

    Structural optimization has many characteristics of Soft Design, and so it is necessary to apply the experience of human experts to solving the uncertain and multidisciplinary optimization problems in large-scale and complex engineering systems. With the development of artificial intelligence (AI) and computational intelligence (CI), the theory of structural optimization is now developing in the direction of intelligent optimization. In this paper, a concept of Intelligent Structural Optimization (ISO) is proposed. A design process model of ISO is then put forward, in which each design sub-process model is discussed. Finally, the design methods of ISO are presented.

  5. Sparsity-Based Super Resolution for SEM Images.

    Science.gov (United States)

    Tsiper, Shahar; Dicker, Or; Kaizerman, Idan; Zohar, Zeev; Segev, Mordechai; Eldar, Yonina C

    2017-09-13

    The scanning electron microscope (SEM) is an electron microscope that produces an image of a sample by scanning it with a focused beam of electrons. The electrons interact with the atoms in the sample, which emit secondary electrons that contain information about the surface topography and composition. The sample is scanned by the electron beam point by point, until an image of the surface is formed. Since its invention in 1942, the capabilities of SEMs have become paramount in the discovery and understanding of the nanometer world, and today it is extensively used in both research and industry. In principle, SEMs can achieve resolution better than one nanometer. However, for many applications, working at subnanometer resolution implies an exceedingly large number of scanning points. For exactly this reason, the SEM diagnostics of microelectronic chips is performed either at high resolution (HR) over a small area or at low resolution (LR) while capturing a larger portion of the chip. Here, we employ sparse coding and dictionary learning to algorithmically enhance low-resolution SEM images of microelectronic chips, up to the level of the HR images acquired by slow SEM scans, while considerably reducing the noise. Our methodology consists of two steps: an offline stage of learning a joint dictionary from a sequence of LR and HR images of the same region in the chip, followed by a fast online super-resolution step where the resolution of a new LR image is enhanced. We provide several examples with typical chips used in the microelectronics industry, as well as a statistical study on arbitrary images with characteristic structural features. Conceptually, our method works well when the images have similar characteristics, as microelectronics chips do. This work demonstrates that employing sparsity concepts can greatly improve the performance of SEM, thereby considerably increasing the scanning throughput without compromising on analysis quality and resolution.
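
    A compact sketch of the two-step scheme under strong simplifications: random arrays stand in for the LR/HR patch pairs, the joint dictionary is learned with scikit-learn, and a new LR patch is sparse-coded against the LR half of the dictionary so that its HR counterpart can be synthesized from the HR half. The authors' actual formulation may differ in detail.

        import numpy as np
        from sklearn.decomposition import MiniBatchDictionaryLearning, SparseCoder

        rng = np.random.default_rng(0)
        n_lr, n_hr = 8 * 8, 16 * 16                  # flattened LR / HR patch sizes

        # Offline stage: learn one joint dictionary over concatenated (LR, HR)
        # patch pairs from the same chip region (random stand-ins here).
        pairs = rng.normal(size=(5000, n_lr + n_hr))
        dico = MiniBatchDictionaryLearning(n_components=128, alpha=1.0,
                                           batch_size=256, random_state=0).fit(pairs)
        D = dico.components_
        D_lr, D_hr = D[:, :n_lr], D[:, n_lr:]

        # Online stage: sparse-code a new LR patch on the (normalized) LR half,
        # then reconstruct the HR patch from the HR half with the same code.
        norms = np.linalg.norm(D_lr, axis=1, keepdims=True)
        coder = SparseCoder(dictionary=D_lr / norms, transform_algorithm="omp",
                            transform_n_nonzero_coefs=5)
        codes = coder.transform(rng.normal(size=(1, n_lr)))
        hr_patch = (codes / norms.T) @ D_hr          # estimated HR patch
        print(hr_patch.shape)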

  6. Mathematical Models and Methods for Living Systems

    CERN Document Server

    Chaplain, Mark; Pugliese, Andrea

    2016-01-01

    The aim of these lecture notes is to give an introduction to several mathematical models and methods that can be used to describe the behaviour of living systems. This emerging field of application intrinsically requires the handling of phenomena occurring at different spatial scales and hence the use of multiscale methods. Modelling and simulating the mechanisms that cells use to move, self-organise and develop in tissues is not only fundamental to an understanding of embryonic development, but is also relevant in tissue engineering and in other environmental and industrial processes involving the growth and homeostasis of biological systems. Growth and organization processes are also important in many tissue degeneration and regeneration processes, such as tumour growth, tissue vascularization, heart and muscle functionality, and cardio-vascular diseases.

  7. The Schwarzschild Method for Building Galaxy Models

    Science.gov (United States)

    de Zeeuw, P. T.

    1998-09-01

    Martin Schwarzschild is most widely known as one of the towering figures of the theory of stellar evolution. However, from the early fifties onward he displayed a strong interest in dynamical astronomy, and in particular in its application to the structure of star clusters and galaxies. This resulted in a string of remarkable investigations, including the discovery of what became known as the Spitzer-Schwarzschild mechanism, the invention of the strip count method for mass determinations, the demonstration of the existence of dark matter on large scales, and the study of the nucleus of M31, based on his own Stratoscope II balloon observations. With his retirement approaching, he decided to leave the field of stellar evolution, to make his life-long hobby of stellar dynamics a full-time occupation, and to tackle the problem of self-consistent equilibria for elliptical galaxies, which by then were suspected to have a triaxial shape. Rather than following classical methods, which had trouble already in dealing with axisymmetric systems, he invented a simple numerical technique which seeks to populate individual stellar orbits in the galaxy potential so as to reproduce the associated model density. This is now known as Schwarzschild's method. He showed by numerical calculation that most stellar orbits in a triaxial potential relevant for elliptical galaxies have two effective integrals of motion in addition to the classical energy integral, and then constructed the first ever self-consistent equilibrium model for a realistic triaxial galaxy. This provided a very strong stimulus to research in the dynamics of flattened galaxies. This talk will review how Schwarzschild's method is used today, in problems ranging from the existence of equilibrium models as a function of shape, central cusp slope, tumbling rate, and presence of a central point mass, to modeling of individual galaxies to find stellar dynamical evidence for dark matter in extended halos, and/or massive
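
    In modern usage, the orbit-superposition step reduces to a constrained linear inverse problem: given a library of time-averaged orbital densities, find non-negative orbit weights that reproduce the model density. A schematic sketch with made-up numbers (real codes add kinematic constraints and regularization):

        import numpy as np
        from scipy.optimize import nnls

        rng = np.random.default_rng(1)

        # A[i, j]: mass that orbit j contributes to spatial cell i, obtained by
        # time-averaging each integrated orbit over the grid (random stand-ins).
        n_cells, n_orbits = 50, 200
        A = rng.random((n_cells, n_orbits))

        # Target density of the galaxy model in the same cells; built from a
        # known weight vector so a self-consistent solution exists.
        rho = A @ rng.random(n_orbits)

        weights, residual = nnls(A, rho)             # orbit weights must be >= 0
        print("residual:", residual, "| orbits used:", np.count_nonzero(weights))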

  8. Deep learning and shapes similarity for joint segmentation and tracing single neurons in SEM images

    Science.gov (United States)

    Rao, Qiang; Xiao, Chi; Han, Hua; Chen, Xi; Shen, Lijun; Xie, Qiwei

    2017-02-01

    Extracting the structure of single neurons is critical for understanding how they function within neural circuits. Recent developments in microscopy techniques, and the widely recognized need for openness and standardization, provide a community resource for automated reconstruction of the dendritic and axonal morphology of single neurons. In order to look into the fine structure of neurons, we use Automated Tape-collecting Ultra Microtome Scanning Electron Microscopy (ATUM-SEM) to acquire image sequences of serial sections of animal brain tissue densely packed with neurons. Different from other neuron reconstruction methods, we propose a method that enhances the SEM images by detecting the neuronal membranes with a deep convolutional neural network (DCNN) and segments single neurons by active contours with group shape similarity. We join the segmentation and tracing so that they interact with each other through alternating iterations: tracing aids the selection of candidate region patches for active contour segmentation, while segmentation provides the neuron's geometrical features, which improve the robustness of tracing. The tracing model relies mainly on the neuron's geometrical features and is updated after the neuron is segmented on each subsequent section. Our method enables the reconstruction of neurons of the drosophila mushroom body, which was cut into serial sections and imaged under SEM. Our method provides an elementary step towards the whole reconstruction of neuronal networks.

  9. Recent advances in 3D SEM surface reconstruction.

    Science.gov (United States)

    Tafti, Ahmad P; Kirkpatrick, Andrew B; Alavi, Zahrasadat; Owen, Heather A; Yu, Zeyun

    2015-11-01

    The scanning electron microscope (SEM), as one of the most commonly used instruments in biology and materials science, employs electrons instead of light to determine the surface properties of specimens. However, SEM micrographs remain 2D images. To effectively measure and visualize surface attributes, we need to restore the 3D shape model from the SEM images. 3D surface reconstruction is a longstanding topic in microscopy vision, as it offers quantitative and visual information for a variety of applications including medicine, pharmacology, chemistry, and mechanics. In this paper, we survey the expanding body of work in this area, including a discussion of recent techniques and algorithms. With the present work, we also enhance the reliability, accuracy, and speed of 3D SEM surface reconstruction by designing and developing an optimized multi-view framework. We then consider several real-world experiments as well as synthetic data to examine the qualitative and quantitative attributes of our proposed framework. Furthermore, we present a taxonomy of 3D SEM surface reconstruction approaches and address several challenging issues as part of our future work. Copyright © 2015 Elsevier Ltd. All rights reserved.

  10. Simulation of FIB-SEM images for analysis of porous microstructures.

    Science.gov (United States)

    Prill, Torben; Schladitz, Katja

    2013-01-01

    Focused ion beam-scanning electron microscopy (FIB-SEM) tomography yields high-quality three-dimensional images of material microstructures at the nanometer scale by combining serial sectioning with a focused ion beam and SEM imaging. However, FIB-SEM tomography of highly porous media leads to shine-through artifacts that prevent automatic segmentation of the solid component. We simulate the SEM process in order to generate synthetic FIB-SEM image data for developing and validating segmentation methods. Monte Carlo techniques yield accurate results but are too slow for the simulation of FIB-SEM tomography, which requires hundreds of SEM images for one dataset alone. Nevertheless, a quasi-analytic description of the specimen and various acceleration techniques, including a track compression algorithm and an acceleration for the simulation of secondary electrons, cut down the computing time by orders of magnitude, allowing FIB-SEM tomography to be simulated for the first time. © Wiley Periodicals, Inc.
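
    The flavor of the Monte Carlo part can be conveyed by a deliberately crude single-scattering random walk; every parameter below is illustrative, not a material constant, and real SEM simulators track energy loss and angular cross-sections far more carefully:

        import numpy as np

        rng = np.random.default_rng(1)

        def backscatter_fraction(n_electrons=2000, mfp=10.0, n_max=50):
            # Electrons enter a flat specimen along -z, travel exponential free
            # paths, re-scatter isotropically, and are abandoned after n_max events.
            escaped = 0
            for _ in range(n_electrons):
                pos = np.zeros(3)
                direction = np.array([0.0, 0.0, -1.0])   # into the sample (z < 0)
                for _ in range(n_max):
                    pos = pos + direction * rng.exponential(mfp)
                    if pos[2] > 0.0:                     # crossed the surface
                        escaped += 1
                        break
                    v = rng.normal(size=3)               # crude isotropic re-scatter
                    direction = v / np.linalg.norm(v)
            return escaped / n_electrons

        print("backscattered fraction:", backscatter_fraction())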

  11. An integrated modeling method for wind turbines

    Science.gov (United States)

    Fadaeinedjad, Roohollah

    Simulink environment to study the flicker contribution of the wind turbine in the wind-diesel system. By using a new wind power plant representation method, a large wind farm (consisting of 96 fixed-speed wind turbines) is modelled to study the power quality of the wind power system. The flicker contribution of the wind farm is also studied with different numbers of wind turbines, using the flickermeter model. Keywords: Simulink, FAST, TurbSim, AeroDyn, wind energy, doubly-fed induction generator, variable speed wind turbine, voltage sag, tower vibration, power quality, flicker, fixed speed wind turbine, wind shear, tower shadow, and yaw error.

  12. A Method to Test Model Calibration Techniques

    Energy Technology Data Exchange (ETDEWEB)

    Judkoff, Ron; Polly, Ben; Neymark, Joel

    2016-08-26

    This paper describes a method for testing model calibration techniques. Calibration is commonly used in conjunction with energy retrofit audit models. An audit is conducted to gather information about the building needed to assemble an input file for a building energy modeling tool. A calibration technique is used to reconcile model predictions with utility data, and then the 'calibrated model' is used to predict energy savings from a variety of retrofit measures and combinations thereof. Current standards and guidelines such as BPI-2400 and ASHRAE-14 set criteria for 'goodness of fit' and assume that if the criteria are met, then the calibration technique is acceptable. While it is logical to use the actual performance data of the building to tune the model, it is not certain that a good fit will result in a model that better predicts post-retrofit energy savings. Therefore, the basic idea here is that the simulation program (intended for use with the calibration technique) is used to generate surrogate utility bill data and retrofit energy savings data against which the calibration technique can be tested. This provides three figures of merit for testing a calibration technique, 1) accuracy of the post-retrofit energy savings prediction, 2) closure on the 'true' input parameter values, and 3) goodness of fit to the utility bill data. The paper will also discuss the pros and cons of using this synthetic surrogate data approach versus trying to use real data sets of actual buildings.

  13. Calibration of a soil moisture model for an alluvial soil without vegetation cover

    Directory of Open Access Journals (Sweden)

    Eduardo C. B. de Araújo

    2001-12-01

    Full Text Available This work was carried out at the Vale do Curu Experimental Farm of the Universidade Federal do Ceará, in the semi-arid region of the State of Ceará, Brazil, on an irrigated eutrophic alluvial soil. The experiment ran from October 4, 1999 to March 10, 2000, with the soil kept free of vegetation, in order to calibrate the soil moisture model for agricultural activities (MUSAG) and to determine the parameters associated with the functions that compose the model (infiltration, percolation and evaporation). The calibration consisted of measuring soil moisture at a depth of 0 to 0.30 m with a neutron probe and comparing these measurements with the soil moisture estimated by the parameterized model. The MUSAG provided estimates of water storage that were statistically not different from the values determined by the neutron probe for the 0 - 0.30 m depth. The model was less satisfactory for estimating moisture during periods with higher precipitation frequency; it followed the tendency of the observed values but underestimated them.
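
    A generic daily water-balance bucket illustrates the kind of components MUSAG combines (infiltration, percolation, evaporation). The functional forms and parameter values below are assumptions for illustration, not the calibrated MUSAG equations:

        import numpy as np

        def bucket(rain, et0, s0=30.0, s_max=60.0, k_perc=0.05):
            # Daily storage (mm) in a 0-0.30 m layer:
            # S' = S + infiltration - evaporation - percolation
            s, out = s0, []
            for p, e in zip(rain, et0):
                infiltration = min(p, s_max - s)   # rain above capacity runs off
                evaporation = e * (s / s_max)      # limited by soil wetness
                percolation = k_perc * s           # linear drainage
                s = max(s + infiltration - evaporation - percolation, 0.0)
                out.append(s)
            return np.array(out)

        rain = np.array([0, 12, 0, 0, 25, 0, 0, 0, 5, 0], dtype=float)  # mm/day
        et0 = np.full(10, 5.0)                                          # mm/day
        print(bucket(rain, et0).round(1))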

  14. SEM probe of IC radiation sensitivity

    Science.gov (United States)

    Gauthier, M. K.; Stanley, A. G.

    1979-01-01

    A Scanning Electron Microscope (SEM) used to irradiate a single integrated circuit (IC) subcomponent to test for radiation sensitivity can localize an IC area smaller than 0.03 by 0.03 mm, allowing determination of the exact location of a radiation-sensitive section.

  15. Fuzzy Control and Connected Region Marking Algorithm-Based SEM Nanomanipulation

    Directory of Open Access Journals (Sweden)

    Dongjie Li

    2012-01-01

    Full Text Available An interactive nanomanipulation platform is established in the SEM based on fuzzy control and a connected region marking (CRM) algorithm. A 3D virtual nanomanipulation model is developed to compensate for the insufficiency of the 2D SEM image information, providing the operator with depth and real-time visual feedback to guide the manipulation. The haptic device Omega3 is used as the master to control the 3D motion of the nanopositioner in master-slave mode and to provide force sensing to the operator, controlled with a fuzzy control algorithm. Aiming at force feedback during nanomanipulation, the collision detection method of the virtual nanomanipulation model and the force rendering model are studied to realize the force feedback of nanomanipulation. The CRM algorithm is introduced to process the SEM image, providing effective position data of the objects for updating the virtual environment (VE); relevant issues such as calibration and the update rate of the VE are also discussed. Finally, the performance of the platform is validated by ZnO nanowire manipulation experiments.
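
    Connected region marking is, in essence, connected-component labeling. A minimal sketch on a synthetic thresholded frame (scipy's label uses 4-connectivity by default) returns each object's label and centroid, the position data needed to update the virtual environment:

        import numpy as np
        from scipy import ndimage

        # Synthetic binary "SEM frame": two bright objects on a dark background.
        frame = np.zeros((8, 12), dtype=int)
        frame[1:3, 1:4] = 1                      # object 1
        frame[5:7, 7:11] = 1                     # object 2

        labels, n = ndimage.label(frame)         # mark connected regions
        centroids = ndimage.center_of_mass(frame, labels, range(1, n + 1))
        print(n, "objects at", [tuple(np.round(c, 1)) for c in centroids])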

  16. ACTIVE AND PARTICIPATORY METHODS IN BIOLOGY: MODELING

    Directory of Open Access Journals (Sweden)

    Brînduşa-Antonela SBÎRCEA

    2011-01-01

    Full Text Available By using active and participatory methods it is hoped that pupils will not only come to a deeper understanding of the issues involved, but also that their motivation will be heightened. Pupil involvement in their learning is essential. Moreover, by using a variety of teaching techniques, we can help students make sense of the world in different ways, increasing the likelihood that they will develop a conceptual understanding. The teacher must be a good facilitator, monitoring and supporting group dynamics. Modeling is an instructional strategy in which the teacher demonstrates a new concept or approach to learning and pupils learn by observing. In the teaching of biology the didactic materials are fundamental tools in the teaching-learning process. Reading about scientific concepts or having a teacher explain them is not enough. Research has shown that modeling can be used across disciplines and in all grade and ability level classrooms. Using this type of instruction, teachers encourage learning.

  17. Surface physics theoretical models and experimental methods

    CERN Document Server

    Mamonova, Marina V; Prudnikova, I A

    2016-01-01

    The demands of production, such as thin films in microelectronics, rely on consideration of factors influencing the interaction of dissimilar materials that make contact with their surfaces. Bond formation between surface layers of dissimilar condensed solids-termed adhesion-depends on the nature of the contacting bodies. Thus, it is necessary to determine the characteristics of adhesion interaction of different materials from both applied and fundamental perspectives of surface phenomena. Given the difficulty in obtaining reliable experimental values of the adhesion strength of coatings, the theoretical approach to determining adhesion characteristics becomes more important. Surface Physics: Theoretical Models and Experimental Methods presents straightforward and efficient approaches and methods developed by the authors that enable the calculation of surface and adhesion characteristics for a wide range of materials: metals, alloys, semiconductors, and complex compounds. The authors compare results from the ...

  18. Wind turbine noise modeling : a comparison of modeling methods

    International Nuclear Information System (INIS)

    Wang, L.; Strasser, A.

    2009-01-01

    All wind turbine arrays must undergo a noise impact assessment. DataKustik GmbH developed the Computer Aided Noise Abatement (Cadna/A) modeling software for calculating noise propagation to meet accepted protocols and international standards such as CONCAWE and ISO 9613. The developer of Cadna/A recommended the following 3 models for simulating wind turbine noise: a disk of point sources; a ring of point sources located at the tip of each blade; and a point source located at the top of the wind turbine tower hub. This paper presented an analytical comparison of the 3 models used for a typical wind turbine with a hub tower containing 3 propeller blades, a drive-train and top-mounted generator, as well as a representative wind farm, using Cadna/A. AUC, ISO and IEC criteria requirements for the meteorological input with Cadna/A for wind farm noise were also discussed. The noise prediction modelling approach was as follows: the simplest model, positioning a single point source at the top of the hub, can be used to predict sound levels for a typical wind turbine if receptors are located 250 m from the hub; A-weighted sound power levels of a wind turbine at cut-in and cut-off wind speed should be used in the models; 20 by 20 or 50 by 50 meter terrain parameters are suitable for large wind farm modeling; and ISO 9613-2 methods are recommended to predict wind farm noise with various meteorological inputs based on local conditions. The study showed that the predicted sound level differences of the 3 wind turbine models using Cadna/A are less than 0.2 dB at receptors located more than 250 m from the wind turbine hub, which falls within the accuracy range of the calculation method. All 3 models of wind turbine noise meet ISO 9613-2 standards for noise prediction using Cadna/A. However, the single point source model was found to be the most efficient in terms of modeling run-time among the 3 models. 7 refs., 3 tabs., 15 figs.

  19. Statistical Models and Methods for Lifetime Data

    CERN Document Server

    Lawless, Jerald F

    2011-01-01

    Praise for the First Edition: "An indispensable addition to any serious collection on lifetime data analysis and . . . a valuable contribution to the statistical literature. Highly recommended . . ." (Choice) "This is an important book, which will appeal to statisticians working on survival analysis problems." (Biometrics) "A thorough, unified treatment of statistical models and methods used in the analysis of lifetime data . . . this is a highly competent and agreeable statistical textbook." (Statistics in Medicine) The statistical analysis of lifetime or response time data is a key tool in engineering,

  20. Mechanics, Models and Methods in Civil Engineering

    CERN Document Server

    Maceri, Franco

    2012-01-01

    "Mechanics, Models and Methods in Civil Engineering" collects leading papers dealing with current Civil Engineering problems. The approach is in the line of the Italian-French school and therefore deeply couples mechanics and mathematics, creating new predictive theories, enhancing clarity in understanding, and improving effectiveness in applications. The authors of the contributions collected here belong to the Lagrange Laboratory, a European research network that has been active for many years. This book will be of major interest to readers engaged with modern Civil Engineering.

  1. The forward tracking, an optical model method

    CERN Document Server

    Benayoun, M

    2002-01-01

    This Note describes the so-called Forward Tracking, and the underlying optical model, developed in the context of LHCb-Light studies. Starting from Velo tracks, cheated or found by real pattern recognition, tracks are found in the ST1-3 chambers after the magnet. The main ingredient of the method is a parameterisation of the track in the ST1-3 region, based on the Velo track parameters and an X seed in one ST station. Performance with the LHCb-Minus and LHCb-Light setups is given.

  2. Experimental modeling methods in Industrial Engineering

    Directory of Open Access Journals (Sweden)

    Peter Trebuňa

    2009-03-01

    Full Text Available Dynamic approaches to management in present-day industrial practice force businesses to address the continuous in-house improvement of production and non-production processes. Experience has repeatedly demonstrated the need for a system approach, not only in analysis but also in the planning and actual implementation of these processes. This contribution therefore focuses on describing modeling in industrial practice from a system-approach perspective, in order to avoid erroneous decisions at the implementation phase and thus prevent prolonged reliance on "trial and error" methods.

  3. Finite element modeling methods for photonics

    CERN Document Server

    Rahman, B M Azizur

    2013-01-01

    The term photonics can be used loosely to refer to a vast array of components, devices, and technologies that in some way involve manipulation of light. One of the most powerful numerical approaches available to engineers developing photonic components and devices is the Finite Element Method (FEM), which can be used to model and simulate such components/devices and analyze how they will behave in response to various outside influences. This resource provides a comprehensive description of the formulation and applications of FEM in photonics applications ranging from telecommunications, astron

  4. Syngine: On-Demand Synthetic Seismograms from the IRIS DMC based on AxiSEM & Instaseis

    Science.gov (United States)

    van Driel, Martin; Hutko, Alex; Krischer, Lion; Trabant, Chad; Stähler, Simon; Nissen-Meyer, Tarje

    2016-04-01

    This presentation highlights the IRIS DMC's Synthetics Engine (Syngine), a new on-demand synthetic seismogram service (ds.iris.edu/ds/products/syngine/) that complements the time series data IRIS has traditionally distributed. The synthetics are accessible using a web service for user-specified source-receiver combinations and a variety of Earth models. Syngine is designed to be extremely fast, making it feasible to request large numbers of source-receiver combinations. This capability supports studying variations in source properties, Earth models, or temporal changes in instrument responses. We have computed a set of global-scale databases of Green's functions using the spectral-element method AxiSEM (www.axisem.info, see also abstract EGU2016-9008) for selected well-known spherically symmetric Earth models (PREM, IASP91, AK135f...) with anisotropy and attenuation. Fine-scale models have resolution from 1 to about 100 sec periods with durations of 60 minutes; lower-resolution models extend to a few hours duration. Behind the scenes, the web service runs Instaseis (www.instaseis.net), a system that rapidly calculates broadband synthetic seismograms from the pre-calculated Green's functions. Receivers may be specified at arbitrary coordinates or using real network and station codes, which are resolved using metadata at the DMC. The service also provides optional, on-demand processing methods, including convolution with a specified moment tensor (specified explicitly or by GCMT ID) and one of a few source-time functions with variable duration. The interface is designed to be callable by scripts and to support automated processing workflows. The DMC also provides a user-friendly command-line Fetch script to download selections of synthetics. This new resource provides a powerful tool in multiple research areas where synthetic seismograms are useful. Regarding the Instaseis/AxiSEM functionality, one only needs to perform two forward calculations with AxiSEM for a

  5. Modeling error distributions of growth curve models through Bayesian methods.

    Science.gov (United States)

    Zhang, Zhiyong

    2016-06-01

    Growth curve models are widely used in the social and behavioral sciences. However, typical growth curve models assume that the errors are normally distributed, although non-normal data may be even more common than normal data. In order to avoid possible statistical inference problems from blindly assuming normality, a general Bayesian framework is proposed to flexibly model normal and non-normal data through the explicit specification of the error distributions. A simulation study shows that when the distribution of the error is correctly specified, one can avoid the loss in the efficiency of standard error estimates. A real example on the analysis of mathematical ability growth data from the Early Childhood Longitudinal Study, Kindergarten Class of 1998-99 is used to show the application of the proposed methods. Instructions and code on how to conduct growth curve analysis with both normal and non-normal error distributions using the MCMC procedure of SAS are provided.
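
    The essential point, independent of the Bayesian machinery, is that the error distribution is specified explicitly rather than assumed normal. A maximum-likelihood sketch contrasting normal and Student-t errors for a linear growth curve on synthetic data (the paper itself uses the MCMC procedure of SAS):

        import numpy as np
        from scipy import stats
        from scipy.optimize import minimize

        t = np.tile(np.arange(5.0), 40)                    # 40 subjects, 5 waves
        y = 2.0 + 1.5 * t + stats.t.rvs(df=3, size=t.size, random_state=0)

        def negloglik(params, dist):
            b0, b1, log_s = params
            resid = (y - b0 - b1 * t) / np.exp(log_s)
            # log-likelihood of scaled residuals under the chosen error law
            return -(dist.logpdf(resid).sum() - t.size * log_s)

        for name, dist in [("normal", stats.norm), ("t(df=3)", stats.t(df=3))]:
            fit = minimize(negloglik, x0=(0.0, 1.0, 0.0), args=(dist,))
            print(name, "slope =", round(fit.x[1], 3))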

  6. Deposition of ZnSO4·3Zn(OH)2·4H2O films by the SILAR method and their study by XRD, SEM and µ-Raman

    Directory of Open Access Journals (Sweden)

    F N Jiménez García

    2012-06-01

    Full Text Available ZnSO4·3Zn(OH)2·4H2O (zinc sulfate hydroxide hydrate) films were obtained on glass substrates by the SILAR method. A precursor solution of ZnSO4 and MnSO4 was employed, with water near the boiling point complexed with 1 ml of NH4OH as the second solution. The films were treated in air at 300 °C for 1 hour. Both ZnSO4·3Zn(OH)2·4H2O and ZnO films are important protection against zinc corrosion, because they are passive films that give the material a longer lifetime; it is therefore relevant to study their response to temperature changes. For these reasons the films were analyzed before and after thermal treatment to study the structural and morphological changes by X-ray diffraction (XRD), scanning electron microscopy (SEM) and Raman microscopy. Before thermal treatment, XRD showed the presence of the triclinic ZnSO4·3Zn(OH)2·4H2O phase; after such treatment, the hexagonal ZnO phase was evident. The morphology identified by SEM before thermal treatment consisted of sheets formed by platelet-like structures of micrometric size, which changed after thermal treatment to a combination of those sheets with flower-like structures characteristic of hexagonal ZnO. µ-Raman confirmed the hexagonal ZnO phase before thermal treatment as well as the triclinic ZnSO4·3Zn(OH)2·4H2O phase after thermal treatment. One objective of this study was to obtain this corrosion-protective material in a controlled manner by techniques of low cost and great simplicity, such as the SILAR method. Even under temperature increases the films remain protective against corrosion, although they undergo phase changes, because the new phases also have corrosion-protective characteristics.

  7. Development of modelling method selection tool for health services management: from problem structuring methods to modelling and simulation methods.

    Science.gov (United States)

    Jun, Gyuchan T; Morris, Zoe; Eldabi, Tillal; Harper, Paul; Naseer, Aisha; Patel, Brijesh; Clarkson, John P

    2011-05-19

    There is an increasing recognition that modelling and simulation can assist in the process of designing health care policies, strategies and operations. However, the current use is limited and answers to questions such as what methods to use and when remain somewhat underdeveloped. The aim of this study is to provide a mechanism for decision makers in health services planning and management to compare a broad range of modelling and simulation methods so that they can better select and use them or better commission relevant modelling and simulation work. This paper proposes a modelling and simulation method comparison and selection tool developed from a comprehensive literature review, the research team's extensive expertise and inputs from potential users. Twenty-eight different methods were identified, characterised by their relevance to different application areas, project life cycle stages, types of output and levels of insight, and four input resources required (time, money, knowledge and data). The characterisation is presented in matrix forms to allow quick comparison and selection. This paper also highlights significant knowledge gaps in the existing literature when assessing the applicability of particular approaches to health services management, where modelling and simulation skills are scarce let alone money and time. A modelling and simulation method comparison and selection tool is developed to assist with the selection of methods appropriate to supporting specific decision making processes. In particular it addresses the issue of which method is most appropriate to which specific health services management problem, what the user might expect to be obtained from the method, and what is required to use the method. In summary, we believe the tool adds value to the scarce existing literature on methods comparison and selection.

  8. Study of inclusion complex between 2,6-dinitrobenzoic acid and β-cyclodextrin by 1H NMR, 2D 1H NMR (ROESY), FT-IR, XRD, SEM and photophysical methods.

    Science.gov (United States)

    Srinivasan, Krishnan; Stalin, Thambusamy

    2014-09-15

    The formation of a host-guest inclusion complex of 2,6-dinitrobenzoic acid (2,6-DNB) with the nano-hydrophobic cavity of β-cyclodextrin (β-CD) in the solution phase has been studied by UV-visible spectroscopy and electrochemical analysis (cyclic voltammetry, CV). The effect of acid-base concentrations of 2,6-DNB has been studied in the presence and absence of β-CD to determine the ground-state acidity constant (pKa). The binding constant of the inclusion complex at 303 K was calculated using a Benesi-Hildebrand plot, and the thermodynamic parameter (ΔG) was also calculated. The solid inclusion complex formation between β-CD and 2,6-DNB was confirmed by 1H NMR, 2D 1H NMR (ROESY), FT-IR, XRD and SEM analysis. A schematic representation of this inclusion process was proposed from molecular docking studies using the PatchDock server. Copyright © 2014 Elsevier B.V. All rights reserved.
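
    For a 1:1 complex, the Benesi-Hildebrand treatment is a straight-line fit of 1/ΔA against 1/[β-CD], with K = intercept/slope and ΔG = -RT ln K at the working temperature (303 K in this study). A sketch with hypothetical absorbance data:

        import numpy as np

        # Hypothetical data: β-CD concentrations (M) and absorbance changes ΔA.
        cd = np.array([1e-3, 2e-3, 4e-3, 6e-3, 8e-3, 10e-3])
        dA = np.array([0.021, 0.038, 0.063, 0.080, 0.092, 0.101])

        # Benesi-Hildebrand: 1/ΔA = 1/(ΔA_max K [CD]) + 1/ΔA_max, linear in 1/[CD].
        slope, intercept = np.polyfit(1.0 / cd, 1.0 / dA, 1)
        K = intercept / slope                    # binding constant (M^-1)

        R, T = 8.314, 303.0                      # J/(mol K); temperature of study
        dG = -R * T * np.log(K)
        print(f"K = {K:.0f} M^-1, dG = {dG / 1000:.1f} kJ/mol")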

  9. Functional methods in the generalized Dicke model

    International Nuclear Information System (INIS)

    Alcalde, M. Aparicio; Lemos, A.L.L. de; Svaiter, N.F.

    2007-01-01

    The Dicke model describes an ensemble of N identical two-level atoms (qubits) coupled to a single quantized mode of a bosonic field. The fermion Dicke model is obtained by replacing the atomic pseudo-spin operators with a linear combination of Fermi operators. The generalized fermion Dicke model is defined by introducing different coupling constants between the single mode of the bosonic field and the reservoir: g1 for the rotating terms and g2 for the counter-rotating terms. In the limit N -> ∞, the thermodynamics of the fermion Dicke model can be analyzed using the path-integral approach with functional methods. The system exhibits a second-order phase transition from the normal to the superradiant phase at some critical temperature, with the presence of a condensate. We evaluate the critical transition temperature and present the spectrum of the collective bosonic excitations for the general case (g1 ≠ 0 and g2 ≠ 0). There is quantum critical behavior when the coupling constants g1 and g2 satisfy g1 + g2 = (ω0 Ω)^(1/2), where ω0 is the frequency of the mode of the field and Ω is the energy gap between the energy eigenstates of the qubits. Two particular situations are analyzed. First, we present the spectrum of the collective bosonic excitations in the rotating-wave case (g1 ≠ 0, g2 = 0), recovering the well-known results. Second, the case g1 = 0 and g2 ≠ 0 is studied. In this last case it is possible to have a superradiant phase even when only virtual processes are introduced in the interaction Hamiltonian. Here too a quantum phase transition appears at the critical coupling g2 = (ω0 Ω)^(1/2), and for larger values of the coupling the system enters the superradiant phase with a Goldstone mode. (author)

  10. Mathematical models and methods for planet Earth

    CERN Document Server

    Locatelli, Ugo; Ruggeri, Tommaso; Strickland, Elisabetta

    2014-01-01

    In 2013 several scientific activities have been devoted to mathematical researches for the study of planet Earth. The current volume presents a selection of the highly topical issues presented at the workshop “Mathematical Models and Methods for Planet Earth”, held in Roma (Italy), in May 2013. The fields of interest span from impacts of dangerous asteroids to the safeguard from space debris, from climatic changes to monitoring geological events, from the study of tumor growth to sociological problems. In all these fields the mathematical studies play a relevant role as a tool for the analysis of specific topics and as an ingredient of multidisciplinary problems. To investigate these problems we will see many different mathematical tools at work: just to mention some, stochastic processes, PDE, normal forms, chaos theory.

  11. Gait variability: methods, modeling and meaning

    Directory of Open Access Journals (Sweden)

    Hausdorff Jeffrey M

    2005-07-01

    Full Text Available Abstract The study of gait variability, the stride-to-stride fluctuations in walking, offers a complementary way of quantifying locomotion and its changes with aging and disease, as well as a means of monitoring the effects of therapeutic interventions and rehabilitation. Previous work has suggested that measures of gait variability may be more closely related to falls, a serious consequence of many gait disorders, than are measures based on the mean values of other walking parameters. The current JNER series presents nine reports on the results of recent investigations into gait variability. One novel method for collecting unconstrained, ambulatory data is reviewed, and a primer on analysis methods is presented along with a heuristic approach to summarizing variability measures. In addition, the first studies of gait variability in animal models of neurodegenerative disease are described, as is a mathematical model of human walking that characterizes certain complex (multifractal) features of the motor control's pattern generator. Another investigation demonstrates that, whereas both healthy older controls and patients with a higher-level gait disorder walk more slowly in reduced lighting, only the latter's stride variability increases. Studies of the effects of dual tasks suggest that the regulation of the stride-to-stride fluctuations in stride width and stride time may be influenced by attention loading and may require cognitive input. Finally, a report of gait variability in over 500 subjects, probably the largest study of this kind, suggests how step width variability may relate to fall risk. Together, these studies provide new insights into the factors that regulate the stride-to-stride fluctuations in walking and pave the way for expanded research into the control of gait and the practical application of measures of gait variability in the clinical setting.

  12. Oxford CyberSEM: remote microscopy

    International Nuclear Information System (INIS)

    Rahman, M; Kirkland, A; Cockayne, D; Meyer, R

    2008-01-01

    The Internet has enabled researchers to communicate over vast geographical distances, sharing ideas and documents. e-Science, underpinned by Grid and Web Services, has taken electronic communication to the next level, where, in addition to document sharing, researchers can increasingly control high-precision scientific instruments over the network. The Oxford CyberSEM project developed a simple Java applet via which samples placed in a JEOL 5510LV Scanning Electron Microscope (SEM) can be manipulated and examined collaboratively over the Internet. Designed with schoolchildren in mind, CyberSEM does not require any additional hardware or software other than a generic Java-enabled web browser. This paper reflects on both the technical and social challenges in designing real-time systems for controlling scientific equipment in collaborative environments. Furthermore, it proposes potential deployment beyond the classroom setting.

  13. Partial Least Squares Strukturgleichungsmodellierung (PLS-SEM)

    DEFF Research Database (Denmark)

    Hair, Joseph F.; Hult, G. Tomas M.; Ringle, Christian M.

    (PLS-SEM) has established itself in business and social science research as a suitable method for estimating causal models. Thanks to the method's user-friendliness and the available software, it is now also established in practice. This book provides an...... application-oriented introduction to PLS-SEM. The focus is on the fundamentals of the method and their practical implementation with the SmartPLS software. The book's concept relies on simple explanations of statistical approaches and the clear presentation of numerous application examples based on...... a single case study. Many graphics, tables and illustrations make PLS-SEM easier to understand. Readers are also offered downloadable data sets, exercises and further articles for deeper study. The book is therefore ideally suited for students, researchers and...

  14. FDTD method and models in optical education

    Science.gov (United States)

    Lin, Xiaogang; Wan, Nan; Weng, Lingdong; Zhu, Hao; Du, Jihe

    2017-08-01

    In this paper, the finite-difference time-domain (FDTD) method is proposed as a pedagogical tool in optical education. In addition, FDTD Solutions, a simulation software package based on the FDTD algorithm, is presented as a new tool that helps beginners build optical models and analyze optical problems. The core of the FDTD algorithm is that the time-dependent Maxwell's equations are discretized in their space and time partial derivatives in order to simulate the response of the interaction between an electronic pulse and an ideal conductor or semiconductor. Because the electromagnetic field is solved in the time domain, memory usage is reduced and broadband simulation results can be obtained easily. Thus, promoting the FDTD algorithm in optical education is feasible and efficient. FDTD enables us to design, analyze and test modern passive and nonlinear photonic components (such as bio-particles, nanoparticles and so on) for wave propagation, scattering, reflection, diffraction, polarization and nonlinear phenomena. The different FDTD models can help teachers and students solve almost all of the optical problems encountered in optical education. Additionally, the GUI of FDTD Solutions is so friendly to beginners that learners can master it quickly.
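
    The discretization the abstract refers to is the Yee leapfrog scheme. A self-contained 1D vacuum example in normalized units (with a hypothetical Gaussian source), of the kind a beginner can run before turning to a package such as FDTD Solutions:

        import numpy as np

        # 1D FDTD in vacuum, normalized units (c = 1, dx = 1, dt = courant * dx).
        nx, nt, courant = 200, 400, 0.5
        ez = np.zeros(nx)        # electric field at integer grid points
        hy = np.zeros(nx - 1)    # magnetic field at half-grid points

        for n in range(nt):
            # Leapfrog update of the discretized curl equations (Yee scheme).
            hy += courant * np.diff(ez)
            ez[1:-1] += courant * np.diff(hy)
            ez[50] += np.exp(-((n - 30) / 10.0) ** 2)   # soft Gaussian source

        print("peak |Ez| =", np.abs(ez).max())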

  15. Correlative SEM SERS for quantitative analysis of dimer nanoparticles.

    Science.gov (United States)

    Timmermans, F J; Lenferink, A T M; van Wolferen, H A G M; Otto, C

    2016-11-14

    A Raman microscope integrated with a scanning electron microscope was used to investigate plasmonic structures by correlative SEM-SERS analysis. The integrated Raman-SEM microscope combines high-resolution electron microscopy information with SERS signal enhancement from selected nanostructures with adsorbed Raman reporter molecules. Correlative analysis is performed for dimers of two gold nanospheres. Dimers were selected on the basis of SEM images from multi-aggregate samples. The effect of the orientation of the dimer with respect to the polarization state of the laser light and the effect of the particle gap size on the Raman signal intensity are observed. Additionally, calculations are performed to simulate the electric near-field enhancement. These simulations are based on the morphologies observed by electron microscopy. In this way the experiments are compared with the enhancement factor calculated from near-field simulations and are subsequently used to quantify the SERS enhancement factor. Large differences between experimentally observed and calculated enhancement factors are regularly detected, a phenomenon caused by nanoscale differences between the real and 'simplified' simulated structures. Quantitative SERS experiments reveal the structure-induced enhancement factor, ranging from ∼200 to ∼20 000, averaged over the full nanostructure surface. The results demonstrate correlative Raman-SEM microscopy for the quantitative analysis of plasmonic particles and structures, thus enabling a new analytical method in the field of SERS and plasmonics.

  16. Application of SEM and EDX in studying biomineralization in plant tissues.

    Science.gov (United States)

    He, Honghua; Kirilak, Yaowanuj

    2014-01-01

    This chapter describes protocols using formalin-acetic acid-alcohol (FAA) to fix plant tissues for studying biomineralization by means of scanning electron microscopy (SEM) and qualitative energy-dispersive X-ray microanalysis (EDX). Specimen preparation protocols for SEM and EDX mainly include fixation, dehydration, critical point drying (CPD), mounting, and coating. Gold-coated specimens are used for SEM imaging, while gold- and carbon-coated specimens are prepared for qualitative X-ray microanalyses separately to obtain complementary information on the elemental compositions of biominerals. During the specimen preparation procedure for SEM, some biominerals may be dislodged or scattered, making it difficult to determine their accurate locations, and light microscopy is used to complement SEM studies. Specimen preparation protocols for light microscopy generally include fixation, dehydration, infiltration and embedding with resin, microtome sectioning, and staining. In addition, microwave processing methods are adopted here to speed up the specimen preparation process for both SEM and light microscopy.

  17. Free wake models for vortex methods

    Energy Technology Data Exchange (ETDEWEB)

    Kaiser, K. [Technical Univ. Berlin, Aerospace Inst. (Germany)]

    1997-08-01

    The blade element method is fast and works well. For some problems (rotor shapes or flow conditions) it can be better to use vortex methods. Different methods for calculating a wake geometry will be presented. (au)

  18. SEM and Raman studies of CNT films on porous Si

    Science.gov (United States)

    Belka, R.; Keczkowska, J.; Suchańska, M.; Firek, P.; Wronka, H.; Kozłowski, M.; Radomska, J.; Czerwosz, E.; Craciunoiu, F.

    2017-08-01

    Carbon nanotube (CNT) films deposited on different porous silica substrates were studied by Scanning Electron Microscopy (SEM) and Raman Spectroscopy (RS). The film samples were prepared by a two-step method consisting of PVD and CVD processes. In the first step the nanocomposite Ni-C film was obtained by evaporation in dynamic vacuum from two separate sources of fullerenes and nickel acetate. Those films were deposited on porous silica and DLC/porous silica substrates. Analysis of SEM imaging showed that the obtained films are composed of carbon nanotubes, the distribution, size and quality of which depend on the type of substrate. The CNT films were studied by the RS method to determine the influence of the substrate type on the disordering of the carbonaceous structure and the quality of CNTs in the deposited films.

  19. Search Engine Marketing (SEM): Financial & Competitive Advantages of an Effective Hotel SEM Strategy

    Directory of Open Access Journals (Sweden)

    Leora Halpern Lanz

    2015-05-01

    Full Text Available Search Engine Marketing and Optimization (SEO, SEM) are keystones of a hotel's marketing strategy; in fact, research shows that 90% of travelers start their vacation planning with a Google search. Learn five strategies that can enhance a hotel's SEO and SEM efforts to boost bookings.

  20. Histological structures of native and cooked yolks from duck egg observed by SEM and cryo-SEM.

    Science.gov (United States)

    Hsu, Kuo-Chiang; Chung, Wen-Hsin; Lai, Kung-Ming

    2009-05-27

    A method was used to fix duck egg yolk while retaining its original sol structure to elucidate the fine structure of native yolk by using fixation with liquid nitrogen and cryo-scanning electron microscopy (cryo-SEM). Native yolk spheres showed a polyhedron shape with a diameter at approximately 50 to 100 μm and packed closely together. Furthermore, the interior microstructure of the native yolk spheres showed that a great amount of round globules ranging from 0.5 to 1.5 μm were embedded in a continuous phase with a lot of voids. After cooking, the sizes of the spheres were almost unchanged, and the continuous phase became a fibrous network structure observed by SEM with chemical fixation probably constituted of low density lipoprotein (LDL). The fine structure of the native yolk can be observed by cryo-SEM; however, the microstructure of yolk granules and plasma from cooked shell eggs can be observed by SEM with chemical fixation.

  1. Building a SEM Analytics Reporting Portfolio

    Science.gov (United States)

    Goff, Jay W.; Williams, Brian G.; Kilgore, Wendy

    2016-01-01

    Effective strategic enrollment management (SEM) efforts require vast amounts of internal and external data to ensure that meaningful reporting and analysis systems can assist managers in decision making. A wide range of information is integral for leading effective and efficient student recruitment and retention programs. This article is designed…

  2. Does Sexually Explicit Media (SEM) Affect Me?

    DEFF Research Database (Denmark)

    Hald, Gert Martin; Træen, Bente; Noor, Syed W

    2015-01-01

    and understanding of one's sexual orientation. First-person effects refer to self-perceived and self-reported effects of SEM consumption as experienced by the consumer. In addition, the study examined and provided a thorough validation of the psychometric properties of the seven-item Pornography Consumption Effect......

  3. Computer-Aided Modelling Methods and Tools

    DEFF Research Database (Denmark)

    Cameron, Ian; Gani, Rafiqul

    2011-01-01

    To illustrate these concepts, a number of examples are used. These include models of polymer membranes, distillation and catalyst behaviour. Some detailed considerations within these models are stated and discussed. Model generation concepts are introduced and ideas of a reference model are given that show......

  4. GREENSCOPE: A Method for Modeling Chemical Process ...

    Science.gov (United States)

    Current work within the U.S. Environmental Protection Agency’s National Risk Management Research Laboratory is focused on the development of a method for modeling chemical process sustainability. The GREENSCOPE methodology, defined for the four bases of Environment, Economics, Efficiency, and Energy, can evaluate processes with over a hundred different indicators. These indicators provide a means for realizing the principles of green chemistry and green engineering in the context of sustainability. Development of the methodology has centered around three focal points. One is a taxonomy of impacts that describes the indicators and provides absolute scales for their evaluation. The setting of best and worst limits for the indicators allows the user to know the status of the process under study in relation to understood values. Thus, existing or imagined processes can be evaluated according to their relative indicator scores, and process modifications can strive towards realizable targets. A second area of focus is in advancing definitions of data needs for the many indicators of the taxonomy. Each of the indicators has specific data that are necessary for its calculation. Values needed and data sources have been identified. These needs can be mapped according to the information source (e.g., input stream, output stream, external data, etc.) for each of the bases. The user can visualize data-indicator relationships on the way to choosing selected ones for evaluation.

  5. On the emancipation of PLS-SEM: A commentary on Rigdon

    NARCIS (Netherlands)

    Sarstedt, Marko; Ringle, Christian M.; Henseler, Jörg; Hair, Joseph F.

    2014-01-01

    Rigdon's (2012) thoughtful article argues that PLS-SEM should free itself from CB-SEM. It should renounce all mechanisms, frameworks, and jargon associated with factor models entirely. In this comment, we shed further light on two subject areas on which Rigdon (2012) touches in his discussion of

  6. A business case method for business models

    NARCIS (Netherlands)

    Meertens, Lucas Onno; Starreveld, E.; Iacob, Maria Eugenia; Nieuwenhuis, Lambertus Johannes Maria; Shishkov, Boris

    2013-01-01

    Intuitively, business cases and business models are closely connected. However, a thorough literature review revealed no research on their combination. Besides that, little is written on the evaluation of business models at all. This makes it difficult to compare different business models

  7. Bases de Datos Semánticas

    Directory of Open Access Journals (Sweden)

    Irving Caro Fierros

    2016-12-01

    Full Text Available In 1992, when Tim Berners-Lee released the first version of the Web, his vision for the future was to incorporate metadata with semantic information into Web pages. It was precisely at the beginning of this century that the sudden rise of the Semantic Web began in academia and on the Internet. The semantic data model is defined as a conceptual model that allows the meaning of data to be defined through their relationships with other data. In this sense, the format in which data are represented is fundamental for providing semantic information. Technology focused on semantic databases is currently at an inflection point, moving from the academic and research sphere to becoming a complete commercial option. This article analyses the concept of a semantic database. It also presents a case study that illustrates basic operations involved in managing the information stored in this type of database.

  8. Does your SEM really tell the truth? How would you know? part 3: vibration and drift

    Science.gov (United States)

    Postek, Michael T.; Vladár, András E.; Cizmar, Petr

    2014-09-01

    This is the third of a series of papers discussing various causes of measurement uncertainty in scanned particle beam instruments, and some of the solutions researched and developed at NIST. Scanned particle beam instruments, especially the scanning electron microscope (SEM), have gone through tremendous evolution to become indispensable tools for many and diverse scientific and industrial applications. These improvements have significantly enhanced their performance and made them far easier to operate. But, ease of operation has also fostered operator complacency. In addition, the user-friendliness has reduced the need for extensive operator training. Unfortunately, this has led to the concept that the SEM is just another expensive digital camera or another peripheral device connected to a computer and that all of the issues related to obtaining quality data have been solved. Hence, a person (or company) using these instruments may be lulled into thinking that all of the potential pitfalls have been fully eliminated and they believe everything they see on the micrograph is always correct. But, as described in this and the earlier presentations, this may not be the case. The first paper in this series discussed some of the issues related to signal generation in the SEM, including instrument calibration, electron beam-sample interactions and the need for physics-based modelling to understand the actual image formation mechanisms to properly interpret SEM images. The second paper discussed another major issue confronting the microscopist: specimen contamination and methods of contamination elimination. This third paper discusses vibration and drift and some useful solutions.

  9. Nanomanufacturing concerns about measurements made in the SEM Part III: vibration and drift

    Science.gov (United States)

    Postek, Michael T.; Vladár, András E.; Cizmar, Petr

    2014-08-01

    Many advanced manufacturing processes employ scanning electron microscopes (SEM) for on-line critical measurements for process and quality control. This is the third of a series of papers discussing various causes of measurement uncertainty in scanned particle beam instruments, and some of the solutions researched and developed at NIST. Scanned particle beam instruments, especially the scanning electron microscope, have gone through tremendous evolution to become indispensable tools for many and diverse scientific and industrial applications. These improvements have significantly enhanced their performance and made them far easier to operate. But, ease of operation has also fostered operator complacency. In addition, the user-friendliness has reduced the need for extensive operator training. Unfortunately, this has led to the concept that the SEM is just another expensive digital camera or another peripheral device connected to a computer and that all of the issues related to obtaining quality data have been solved. Hence, a person (or company) using these instruments may be lulled into thinking that all of the potential pitfalls have been fully eliminated and they believe everything they see on the micrograph is always correct. But, as described in this and the earlier presentations, this may not be the case. The first paper in this series discussed some of the issues related to signal generation in the SEM, including instrument calibration, electron beam-sample interactions and the need for physics-based modelling to understand the actual image formation mechanisms to properly interpret SEM images. The second paper discussed another major issue confronting the microscopist: specimen contamination and methods of contamination elimination. This third paper discusses vibration and drift and some useful solutions.

  10. Nanomanufacturing concerns about measurements made in the SEM part IV: charging and its mitigation

    Science.gov (United States)

    Postek, Michael T.; Vladár, András E.

    2015-08-01

    This is the fourth part of a series of tutorial papers discussing various causes of measurement uncertainty in scanned particle beam instruments, and some of the solutions researched and developed at NIST and other research institutions. Scanned particle beam instruments, especially the scanning electron microscope (SEM), have gone through tremendous evolution to become indispensable tools for many and diverse scientific and industrial applications. These improvements have significantly enhanced their performance and made them far easier to operate. But, the ease of operation has also fostered operator complacency. In addition, the user-friendliness has reduced the apparent need for extensive operator training. Unfortunately, this has led to the idea that the SEM is just another expensive "digital camera" or another peripheral device connected to a computer and that all of the problems in obtaining good quality images and data have been solved. Hence, one using these instruments may be lulled into thinking that all of the potential pitfalls have been fully eliminated and believing that everything one sees on the micrograph is always correct. But, as described in this and the earlier papers, this may not be the case. Care must always be taken when reliable quantitative data are being sought. The first paper in this series discussed some of the issues related to signal generation in the SEM, including instrument calibration, electron beam-sample interactions and the need for physics-based modeling to understand the actual image formation mechanisms to properly interpret SEM images. The second paper discussed another major issue confronting the microscopist: specimen contamination and methods to eliminate it. The third paper discussed mechanical vibration and stage drift and some useful solutions to mitigate the problems caused by them, and here, in this the fourth contribution, the issues related to specimen "charging" and its mitigation are discussed relative

  11. Nanomanufacturing Concerns about Measurements Made in the SEM Part IV: Charging and its Mitigation.

    Science.gov (United States)

    Postek, Michael T; Vladár, András E

    2015-01-01

    This is the fourth part of a series of tutorial papers discussing various causes of measurement uncertainty in scanned particle beam instruments, and some of the solutions researched and developed at NIST and other research institutions. Scanned particle beam instruments especially the scanning electron microscope (SEM) have gone through tremendous evolution to become indispensable tools for many and diverse scientific and industrial applications. These improvements have significantly enhanced their performance and made them far easier to operate. But, the ease of operation has also fostered operator complacency. In addition, the user-friendliness has reduced the apparent need for extensive operator training. Unfortunately, this has led to the idea that the SEM is just another expensive "digital camera" or another peripheral device connected to a computer and that all of the problems in obtaining good quality images and data have been solved. Hence, one using these instruments may be lulled into thinking that all of the potential pitfalls have been fully eliminated and believing that, everything one sees on the micrograph is always correct. But, as described in this and the earlier papers, this may not be the case. Care must always be taken when reliable quantitative data are being sought. The first paper in this series discussed some of the issues related to signal generation in the SEM, including instrument calibration, electron beam-sample interactions and the need for physics-based modeling to understand the actual image formation mechanisms to properly interpret SEM images. The second paper has discussed another major issue confronting the microscopist: specimen contamination and methods to eliminate it. The third paper discussed mechanical vibration and stage drift and some useful solutions to mitigate the problems caused by them, and here, in this the fourth contribution, the issues related to specimen "charging" and its mitigation are discussed relative to

  12. Comparison of SEM and VPSEM imaging techniques with respect to Streptococcus mutans biofilm topography.

    Science.gov (United States)

    Weber, Kathryn; Delben, Juliana; Bromage, Timothy G; Duarte, Simone

    2014-01-01

    The study compared images of mature Streptococcus mutans biofilms captured at increasing magnification to determine which microscopy method is most acceptable for imaging the biofilm topography and the extracellular polymeric substance (EPS). In vitro S. mutans biofilms were imaged using (1) scanning electron microscopy (SEM), which requires a dehydration process; (2) SEM and ruthenium red (SEM-RR), which has been shown to support the EPS of biofilms during the SEM dehydration; and (3) variable pressure scanning electron microscopy (VPSEM), which does not require the intensive dehydration process of SEM. The dehydration process and high chamber vacuum of both SEM techniques devastated the biofilm EPS, removed supporting structures, and caused cracking on the biofilm surface. The VPSEM offered the most comprehensive representation of the S. mutans biofilm morphology. VPSEM provides similar contrast and focus as the SEM, but the procedure is far less time-consuming, and the use of hazardous chemicals associated with the SEM dehydration protocol is avoided with the VPSEM. The inaccurate representations of the biofilm EPS in SEM experimentation are a possible source of inaccurate data and an impediment to the study of S. mutans biofilms. © 2013 Federation of European Microbiological Societies. Published by John Wiley & Sons Ltd. All rights reserved.

  13. Critical factors in SEM 3D stereo microscopy

    DEFF Research Database (Denmark)

    Marinello, F.; Bariano, P.; Savio, E.

    2008-01-01

    This work addresses dimensional measurements performed with the scanning electron microscope (SEM) using 3D reconstruction of surface topography through stereo-photogrammetry. The paper presents both theoretical and experimental investigations on the effects of instrumental variables and measurement parameters on reconstruction accuracy. Investigations were performed on a novel sample, specifically developed and implemented for the tests. The description is based on the model function introduced by Piazzesi and adapted for eucentrically tilted stereopairs. Two main classes of influencing factors are recognized: the first one is related to the measurement operation and the instrument set-up; the second concerns the quality of scanned images and represents the major criticality in the application of SEMs for 3D characterizations.
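
    For eucentrically tilted stereopairs acquired at a total tilt angle of \(2\alpha\), a commonly quoted first-order height-from-parallax relation (a simplification for orientation, not Piazzesi's full model function) is

\[
z \;=\; \frac{p}{2\,M\,\sin\alpha},
\]

    where \(p\) is the parallax measured between corresponding points in the two images and \(M\) is the magnification; errors in \(\alpha\), \(M\) or the parallax measurement propagate directly into the reconstructed height.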

  14. Two novel approaches to study arthropod anatomy by using dualbeam FIB/SEM.

    Science.gov (United States)

    Di Giulio, Andrea; Muzzi, Maurizio

    2018-03-01

    Transmission Electron Microscopy (TEM) has always been the conventional method to study arthropod ultrastructure, while the use of Scanning Electron Microscopy (SEM) was mainly devoted to the examination of external cuticular structures by secondary electrons. The new generation field emission SEMs are capable of generating images at the sub-cellular level, comparable to TEM images, employing backscattered electrons. The potential of this kind of acquisition becomes very powerful in the dual-beam FIB/SEM, where the SEM column is combined with a Focused Ion Beam (FIB) column. FIB uses ions as a nano-scalpel to slice samples fixed and embedded in resin, replacing traditional ultramicrotomy. We present here two novel methods, which optimize the use of FIB/SEM for studying arthropod anatomy. Copyright © 2017 Elsevier Ltd. All rights reserved.

  15. SEM investigation of heart tissue samples

    Energy Technology Data Exchange (ETDEWEB)

    Saunders, R; Amoroso, M [Physics Department, University of the West Indies, St. Augustine, Trinidad and Tobago, West Indies (Trinidad and Tobago)]

    2010-07-01

    We used the scanning electron microscope to examine the cardiac tissue of a cow (Bos taurus), a pig (Sus scrofa), and a human (Homo sapiens). 1 mm³ blocks of left ventricular tissue were prepared for SEM scanning by fixing in 96% ethanol followed by critical point drying (cryofixation), then sputter-coating with gold. The typical ridged structure of the myofibrils was observed for all the species. In addition, crystal-like structures were found in one of the samples of the heart tissue of the pig. These structures were investigated further using an EDVAC x-ray analysis attachment to the SEM. Elemental x-ray analysis showed the highest peaks occurred for gold, followed by carbon, oxygen, magnesium and potassium. As the samples were coated with gold for conductivity, this highest peak is expected. Much lower peaks at carbon, oxygen, magnesium and potassium suggest that a crystallized salt such as a carbonate was present in the tissue before sacrifice.

  16. SEM investigation of heart tissue samples

    Science.gov (United States)

    Saunders, R.; Amoroso, M.

    2010-07-01

    We used the scanning electron microscope to examine the cardiac tissue of a cow (Bos taurus), a pig (Sus scrofa), and a human (Homo sapiens). 1 mm³ blocks of left ventricular tissue were prepared for SEM scanning by fixing in 96% ethanol followed by critical point drying (cryofixation), then sputter-coating with gold. The typical ridged structure of the myofibrils was observed for all the species. In addition, crystal-like structures were found in one of the samples of the heart tissue of the pig. These structures were investigated further using an EDVAC x-ray analysis attachment to the SEM. Elemental x-ray analysis showed the highest peaks occurred for gold, followed by carbon, oxygen, magnesium and potassium. As the samples were coated with gold for conductivity, this highest peak is expected. Much lower peaks at carbon, oxygen, magnesium and potassium suggest that a crystallized salt such as a carbonate was present in the tissue before sacrifice.

  17. Viewing Integrated-Circuit Interconnections By SEM

    Science.gov (United States)

    Lawton, Russel A.; Gauldin, Robert E.; Ruiz, Ronald P.

    1990-01-01

    Back-scattering of energetic electrons reveals hidden metal layers. Experiment shows that, with suitable operating adjustments, scanning electron microscopy (SEM) can be used to look for defects in aluminum interconnections in integrated circuits. It enables monitoring, in situ, of changes in defects caused by changes in temperature. It gives a truer picture of defects, as etching can change the stress field of the metal-and-passivation pattern, causing changes in defects.

  18. Numerical methods in Markov chain modeling

    Science.gov (United States)

    Philippe, Bernard; Saad, Youcef; Stewart, William J.

    1989-01-01

    Several methods for computing stationary probability distributions of Markov chains are described and compared. The main linear algebra problem consists of computing an eigenvector of a sparse, usually nonsymmetric, matrix associated with a known eigenvalue. It can also be cast as a problem of solving a homogeneous singular linear system. Several methods based on combinations of Krylov subspace techniques are presented. The performance of these methods on some realistic problems is compared.
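
    As a concrete instance of the core problem (computing the stationary vector \(\pi\) with \(\pi P = \pi\) for the known eigenvalue 1), here is a minimal power-iteration sketch in Python; the transition matrix is a toy example, and the Krylov subspace methods surveyed in the report are considerably more sophisticated:

```python
import numpy as np

# Power-iteration sketch for the stationary distribution pi of a
# row-stochastic transition matrix P, i.e. the left eigenvector
# with eigenvalue 1 (pi P = pi). P below is a toy 3-state chain.
P = np.array([[0.9, 0.1, 0.0],
              [0.2, 0.7, 0.1],
              [0.1, 0.3, 0.6]])

pi = np.full(P.shape[0], 1.0 / P.shape[0])   # start from uniform
for _ in range(10_000):
    new = pi @ P                             # one step of the chain
    if np.abs(new - pi).max() < 1e-12:       # converged
        break
    pi = new

print(pi)  # approximate stationary distribution
```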

  19. Dynamic spatial panels : models, methods, and inferences

    NARCIS (Netherlands)

    Elhorst, J. Paul

    This paper provides a survey of the existing literature on the specification and estimation of dynamic spatial panel data models, a collection of models for spatial panels extended to include one or more of the following variables and/or error terms: a dependent variable lagged in time, a dependent

  20. A Method for Model Checking Feature Interactions

    DEFF Research Database (Denmark)

    Pedersen, Thomas; Le Guilly, Thibaut; Ravn, Anders Peter

    2015-01-01

    This paper presents a method to check for feature interactions in a system assembled from independently developed concurrent processes as found in many reactive systems. The method combines and refines existing definitions and adds a set of activities. The activities describe how to populate the ...

  1. Characterization of Yeast Biofilm by Cryo-SEM and FIB-SEM

    Czech Academy of Sciences Publication Activity Database

    Hrubanová, Kamila; Nebesářová, Jana; Růžička, F.; Dluhoš, J.; Krzyžánek, Vladislav

    2013-01-01

    Roč. 19, S2 (2013), s. 226-227 ISSN 1431-9276 R&D Projects: GA MŠk EE.2.3.20.0103; GA TA ČR TE01020118; GA ČR GAP205/11/1687 Institutional support: RVO:68081731 ; RVO:60077344 Keywords : yeast biofilm * cryo-SEM * FIB-SEM Subject RIV: BH - Optics, Masers, Lasers Impact factor: 1.757, year: 2013

  2. The relationship between cost estimates reliability and BIM adoption: SEM analysis

    Science.gov (United States)

    Ismail, N. A. A.; Idris, N. H.; Ramli, H.; Rooshdi, R. R. Raja Muhammad; Sahamir, S. R.

    2018-02-01

    This paper presents the usage of the Structural Equation Modelling (SEM) approach in analysing the effects of Building Information Modelling (BIM) technology adoption on improving the reliability of cost estimates. Based on the questionnaire survey results, SEM analysis using the SPSS-AMOS application examined the relationships between BIM-improved information and cost estimates reliability factors, leading to BIM technology adoption. Six hypotheses were established prior to the SEM analysis, employing two types of SEM models, namely the Confirmatory Factor Analysis (CFA) model and the full structural model. The SEM models were then validated through assessment of their uni-dimensionality, validity, reliability, and fitness index, in line with the hypotheses tested. The final SEM model fit measures are: p-value = 0.000, RMSEA = 0.079, TLI = 0.956 > 0.90, NFI = 0.935 > 0.90 and ChiSq/df = 2.259, indicating that the overall index values achieved the required level of model fitness. The model supports all the hypotheses evaluated, confirming that all relationships among the constructs are positive and significant. Ultimately, the analysis verified that most of the respondents foresee better understanding of project input information through BIM visualization, its reliable database and coordinated data, in developing more reliable cost estimates. They also expect BIM adoption to accelerate their cost estimating tasks.
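
    The cut-offs cited in the abstract correspond to conventional SEM fit criteria; assuming the usual conventions (the authors' exact thresholds may differ slightly):

\[
\mathrm{RMSEA} \le 0.08, \qquad \mathrm{TLI},\,\mathrm{NFI} \ge 0.90, \qquad \chi^2/\mathrm{df} \le 3,
\]

    against which the reported values (RMSEA = 0.079, TLI = 0.956, NFI = 0.935, \(\chi^2/\mathrm{df}\) = 2.259) all pass.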

  3. Combining static and dynamic modelling methods: a comparison of four methods

    NARCIS (Netherlands)

    Wieringa, Roelf J.

    1995-01-01

    A conceptual model of a system is an explicit description of the behaviour required of the system. Methods for conceptual modelling include entity-relationship (ER) modelling, data flow modelling, Jackson System Development (JSD) and several object-oriented analysis methods. Given the current

  4. A Pattern-Oriented Approach to a Methodical Evaluation of Modeling Methods

    Directory of Open Access Journals (Sweden)

    Michael Amberg

    1996-11-01

    Full Text Available The paper describes a pattern-oriented approach to evaluate modeling methods and to compare various methods with each other from a methodical viewpoint. A specific set of principles (the patterns) is defined by investigating the notations and the documentation of comparable modeling methods. Each principle helps to examine some parts of the methods from a specific point of view. All principles together lead to an overall picture of the method under examination. First the core ("method-neutral") meaning of each principle is described. Then the methods are examined regarding the principle. Afterwards the method-specific interpretations are compared with each other and with the core meaning of the principle. By this procedure, the strengths and weaknesses of modeling methods regarding methodical aspects are identified. The principles are described uniformly using a principle description template according to descriptions of object-oriented design patterns. The approach is demonstrated by evaluating a business process modeling method.

  5. The SEM description of interaction of a transient electromagnetic wave with an object

    Science.gov (United States)

    Pearson, L. W.; Wilton, D. R.

    1980-01-01

    The singularity expansion method (SEM), proposed as a means for determining and representing the transient surface current density induced on a scatterer by a transient electromagnetic wave is described. The resulting mathematical description of the transient surface current on the object is discussed. The data required to represent the electromagnetic scattering properties of a given object are examined. Experimental methods which were developed for the determination of the SEM description are discussed. The feasibility of characterizing the surface current induced on aircraft flying in proximity to a lightning stroke by way of SEM is examined.

  6. Analysis of microtraces in invasive traumas using SEM/EDS.

    Science.gov (United States)

    Vermeij, E J; Zoon, P D; Chang, S B C G; Keereweer, I; Pieterman, R; Gerretsen, R R R

    2012-01-10

    Scanning electron microscopy in combination with energy-dispersive X-ray spectrometry (SEM/EDS) is a proven forensic tool and has been used to analyze several kinds of trace evidence. A forensic application of SEM/EDS is the examination of morphological characteristics of tool marks that tools and instruments leave on bone. The microtraces that are left behind by these tools and instruments on the bone are, however, often ignored or not noticed at all. In this paper we describe the use of SEM/EDS for the analysis of microtraces in invasive sharp-force, blunt-force and bone-hacking traumas in bone. This research is part of a larger multi-disciplinary approach in which pathologists, forensic anthropologists, toolmark and microtrace experts work together to link observed injuries to a suspected weapon or, in the case of an unknown weapon, to indicate a group of objects that could have been used as a weapon. Although there are a few difficulties one has to consider, the method itself is rather simple and straightforward to apply. A sample of dry and clean bone is placed into the SEM sample chamber and brightness and contrast are set such that bone appears grey, metal appears white and organic material appears black. The sample is then searched manually to find relevant features. Once features are found, their elemental composition is measured by an energy-dispersive X-ray spectrometer (EDS). This method is illustrated using several cases. It is shown that SEM/EDS analysis of microtraces in bone is a valuable tool for getting clues about an unknown weapon and can associate a specific weapon with injuries on the basis of appearance and elemental composition. In particular, the separate results from the various disciplines are complementary and may be combined to reach a conclusion with stronger probative value. This is useful not only in the courtroom but above all in criminal investigations, when one has to know what weapon or object to look for. Copyright © 2011

  7. Resampling methods for evaluating classification accuracy of wildlife habitat models

    Science.gov (United States)

    Verbyla, David L.; Litvaitis, John A.

    1989-11-01

    Predictive models of wildlife-habitat relationships often have been developed without being tested. The apparent classification accuracy of such models can be optimistically biased and misleading. Data resampling methods exist that yield a more realistic estimate of model classification accuracy. These methods are simple and require no new sample data. We illustrate these methods (cross-validation, jackknife resampling, and bootstrap resampling) with computer simulation to demonstrate the increase in precision of the estimate. The bootstrap method is then applied to field data as a technique for model comparison. We recommend that biologists use some resampling procedure to evaluate wildlife habitat models prior to field evaluation.
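
    Of the three resampling schemes named, the bootstrap is the easiest to sketch. A minimal Python illustration follows; the labels are made up, and a full evaluation would refit the habitat model inside each resample rather than reuse fixed predictions:

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_accuracy(y_true, y_pred, n_boot=1000):
    """Resample cases with replacement and recompute classification
    accuracy, giving a mean and spread for the accuracy estimate."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    n = len(y_true)
    accs = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n, n)           # bootstrap sample indices
        accs[b] = np.mean(y_true[idx] == y_pred[idx])
    return accs.mean(), accs.std(ddof=1)

# Toy usage with made-up observed and model-predicted habitat classes.
mean_acc, se_acc = bootstrap_accuracy([1, 0, 1, 1, 0, 1, 0, 0],
                                      [1, 0, 0, 1, 0, 1, 1, 0])
print(mean_acc, se_acc)
```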

  8. A catalog of automated analysis methods for enterprise models.

    Science.gov (United States)

    Florez, Hector; Sánchez, Mario; Villalobos, Jorge

    2016-01-01

    Enterprise models are created for documenting and communicating the structure and state of Business and Information Technologies elements of an enterprise. After models are completed, they are mainly used to support analysis. Model analysis is an activity typically based on human skills and due to the size and complexity of the models, this process can be complicated and omissions or miscalculations are very likely. This situation has fostered the research of automated analysis methods, for supporting analysts in enterprise analysis processes. By reviewing the literature, we found several analysis methods; nevertheless, they are based on specific situations and different metamodels; then, some analysis methods might not be applicable to all enterprise models. This paper presents the work of compilation (literature review), classification, structuring, and characterization of automated analysis methods for enterprise models, expressing them in a standardized modeling language. In addition, we have implemented the analysis methods in our modeling tool.

  9. WEB Semântica / Semantic web

    Directory of Open Access Journals (Sweden)

    Gisele Vasconcelos Dziekaniak

    2004-01-01

    Full Text Available The paper addresses the Semantic Web: the new version of the web under development through projects such as Scorpion and Desire. These projects seek to organize the knowledge stored in their files and web pages, promising machine understanding of human language in information retrieval, without requiring the user to master refined search strategies. The article presents the Dublin Core metadata standard as the one currently most used by the communities developing projects in the Semantic Web area, discusses RDF as the structure indicated by the visionaries of this new web for developing semantic schemas to represent information made available over the network, and covers XML as a markup language for structured data. It points to the need for improvements in the organization of information in the Brazilian electronic indexing scenario, so that it can keep up with the new paradigm of information retrieval and knowledge organization.

  10. O ciberativismo sem bússola

    Directory of Open Access Journals (Sweden)

    Francisco Rüdiger

    2014-07-01

    Full Text Available The text asks whether an approach that, in essence, recounts the trajectory of so-called cyberactivism on its own terms can be justified academically or whether, instead, it remains captive to a mythology that the phenomenon itself has already constructed, thereby authorizing its subjects to dismiss, without loss, any contribution of university origin.

  11. A Comparison of Two Balance Calibration Model Building Methods

    Science.gov (United States)

    DeLoach, Richard; Ulbrich, Norbert

    2007-01-01

    Simulated strain-gage balance calibration data is used to compare the accuracy of two balance calibration model building methods for different noise environments and calibration experiment designs. The first building method obtains a math model for the analysis of balance calibration data after applying a candidate math model search algorithm to the calibration data set. The second building method uses stepwise regression analysis in order to construct a model for the analysis. Four balance calibration data sets were simulated in order to compare the accuracy of the two math model building methods. The simulated data sets were prepared using the traditional One Factor At a Time (OFAT) technique and the Modern Design of Experiments (MDOE) approach. Random and systematic errors were introduced in the simulated calibration data sets in order to study their influence on the math model building methods. Residuals of the fitted calibration responses and other statistical metrics were compared in order to evaluate the calibration models developed with different combinations of noise environment, experiment design, and model building method. Overall, predicted math models and residuals of both math model building methods show very good agreement. Significant differences in model quality were attributable to noise environment, experiment design, and their interaction. Generally, the addition of systematic error significantly degraded the quality of calibration models developed from OFAT data by either method, but MDOE experiment designs were more robust with respect to the introduction of a systematic component of the unexplained variance.

  12. A Systematic Identification Method for Thermodynamic Property Modelling

    DEFF Research Database (Denmark)

    Perederic, Olivia Ana; Cunico, Larissa; Sarup, Bent

    2017-01-01

    In this work, a systematic identification method for thermodynamic property modelling is proposed. The aim of the method is to improve the quality of phase equilibria prediction by group contribution based property prediction models. The method is applied to lipid systems where the Original UNIFAC...

  13. Data mining concepts models methods and algorithms

    CERN Document Server

    Kantardzic, Mehmed

    2011-01-01

    This book reviews state-of-the-art methodologies and techniques for analyzing enormous quantities of raw data in high-dimensional data spaces, to extract new information for decision making. The goal of this book is to provide a single introductory source, organized in a systematic way, in which we could direct the readers in analysis of large data sets, through the explanation of basic concepts, models and methodologies developed in recent decades.

  14. Ensemble Learning Method for Hidden Markov Models

    Science.gov (United States)

    2014-12-01

    ...outputs using a decision-level fusion method such as an artificial neural network or a hierarchical mixture of experts. Our approach was evaluated on... Fusion techniques such as simple algebraic rules [63], artificial neural networks (ANN) [1], and hierarchical mixtures of experts (HME) [46] can be used.
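
    A minimal sketch of the decision-level fusion idea recoverable from these fragments, in Python: per-class posteriors from several independently trained HMMs are combined by a simple algebraic (weighted-average) rule. The posteriors and weights below are invented placeholders, and the thesis's ANN and HME fusers would replace the weighted average:

```python
import numpy as np

# Decision-level fusion sketch: class posteriors produced by several
# independently trained HMMs, combined by a weighted average
# (a "simple algebraic" rule). All numbers are invented.
posteriors = np.array([
    [0.7, 0.2, 0.1],   # HMM 1: P(class k | observation)
    [0.6, 0.3, 0.1],   # HMM 2
    [0.5, 0.4, 0.1],   # HMM 3
])
weights = np.array([0.5, 0.3, 0.2])   # e.g. validation performance

fused = weights @ posteriors          # combine the ensemble outputs
fused /= fused.sum()                  # renormalize to a distribution
label = int(np.argmax(fused))         # fused decision
print(fused, label)
```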

  15. A Versatile Nonlinear Method for Predictive Modeling

    Science.gov (United States)

    Liou, Meng-Sing; Yao, Weigang

    2015-01-01

    As computational fluid dynamics techniques and tools become widely accepted for real-world practice today, it is intriguing to ask in which areas they can be utilized to their full potential in the future. Some promising areas include design optimization and exploration of fluid dynamics phenomena (the concept of a numerical wind tunnel), both of which share the common feature that some parameters are varied repeatedly and the computation can be costly. We are especially interested in the need for an accurate and efficient approach for handling these applications: (1) capturing the complex nonlinear dynamics inherent in a system under consideration and (2) versatility (robustness) to encompass a range of parametric variations. In our previous paper, we proposed to use first-order Taylor expansions collected at numerous sampling points along a trajectory and assembled together via nonlinear weighting functions. The validity and performance of this approach was demonstrated for a number of problems with vastly different input functions. In this study, we are especially interested in enhancing the method's accuracy; we extend it to include the second-order Taylor expansion, which however requires a complicated evaluation of Hessian matrices for a system of equations, as in fluid dynamics. We propose a method to avoid these Hessian matrices while maintaining the accuracy. Results based on the method are presented to confirm its validity.
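
    In schematic form (an interpretation of the description above, not the authors' exact formulation), the surrogate blends local Taylor expansions taken at sampling points \(x_i\) using nonlinear weights \(w_i\):

\[
\hat f(x) \;=\; \sum_i w_i(x)\Big[f(x_i) + \nabla f(x_i)^{\mathsf T}(x-x_i) + \tfrac{1}{2}(x-x_i)^{\mathsf T} H(x_i)\,(x-x_i)\Big],
\qquad \sum_i w_i(x) = 1,
\]

    where the second-order term introduces the Hessians \(H(x_i)\) whose explicit evaluation the paper's method is designed to avoid.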

  16. Cultivo de células mesenquimais do sangue de cordão umbilical com e sem uso do gradiente de densidade Ficoll-Paque / Blood mesenchymal stem cell culture from the umbilical cord with and without Ficoll-Paque density gradient method

    Directory of Open Access Journals (Sweden)

    Rosa Sayoko Kawasaki-Oyama

    2008-03-01

    Full Text Available OBJECTIVES: To implement techniques for the isolation and culture of mesenchymal stem cells from human umbilical cord blood, with and without the use of a Ficoll-Paque density gradient (d=1.077 g/ml). METHODS: Ten samples of human umbilical cord blood from full-term pregnancies were subjected to two mesenchymal stem cell culture procedures: without the Ficoll-Paque density gradient and with the density gradient. Cells were seeded in 25 cm² flasks at a density of 1×10⁷ nucleated cells/cm² (without Ficoll) and 1.0×10⁶ mononuclear cells/cm² (with Ficoll). Adherent cells were subjected to cytochemical staining with acid phosphatase and Schiff's reagent. RESULTS: The procedure without the Ficoll density gradient yielded 2.0-13.0×10⁷ nucleated cells (median=2.35×10⁷), and the procedure with the Ficoll density gradient yielded 3.7-15.7×10⁶ mononuclear cells (median=7.2×10⁶). Adherent cells were observed in all cultures 24 hours after the start of cultivation. The cells showed fibroblastoid or epithelioid morphologies. Most cultures showed cell proliferation in the first weeks of culture, but after the second week only three cultures, derived from the method without the Ficoll-Paque density gradient, maintained cell growth, forming confluent foci of cells. These cultures underwent several trypsinization steps for spreading or subdivision and remained in culture for periods ranging from two to three months. CONCLUSION: In the samples studied, the isolation and culture of mesenchymal stem cells from human umbilical cord blood by the method without the Ficoll-Paque density gradient was more efficient than the method with the density gradient.

  17. Diffusion in condensed matter methods, materials, models

    CERN Document Server

    Kärger, Jörg

    2005-01-01

    Diffusion as the process of particle transport due to stochastic movement is a phenomenon of crucial relevance for a large variety of processes and materials. This comprehensive, handbook-style survey of diffusion in condensed matter gives detailed insight into diffusion as the process of particle transport due to stochastic movement. Leading experts in the field describe in 23 chapters the different aspects of diffusion, covering microscopic and macroscopic experimental techniques and exemplary results for various classes of solids, liquids and interfaces as well as several theoretical concepts and models. Students and scientists in physics, chemistry, materials science, and biology will benefit from this detailed compilation.

  18. Extrudate Expansion Modelling through Dimensional Analysis Method

    DEFF Research Database (Denmark)

    to describe the extrudate expansion. From the three dimensionless groups, an equation with three experimentally determined parameters is derived to express the extrudate expansion. The model is evaluated with whole wheat flour and aquatic feed extrusion experimental data. The average deviations...... of the correlation are respectively 5.9% and 9% for the whole wheat flour and the aquatic feed extrusion. An alternative 4-coefficient equation is also suggested from the 3 dimensionless groups. The average deviations of the alternative equation are respectively 5.8% and 2.5% in correlation with the same set...

  19. Reverberation Modelling Using a Parabolic Equation Method

    Science.gov (United States)

    2012-10-01

    results obtained by other authors and methods. Abstract ….. RDDC Atlantique has developed an acoustic echo clutter model based on … modes (PE, for parabolic equation), to determine the feasibility of computing the acoustic field and the reverberation of target echoes in different … 2012. Introduction or background: RDDC Atlantique has developed an acoustic echo clutter model based on adiabatic normal modes for …

  20. Current status of uncertainty analysis methods for computer models

    International Nuclear Information System (INIS)

    Ishigami, Tsutomu

    1989-11-01

    This report surveys several existing uncertainty analysis methods for estimating computer output uncertainty caused by input uncertainties, illustrating application examples of those methods to three computer models, MARCH/CORRAL II, TERFOC and SPARC. Merits and limitations of the methods are assessed in the application, and a recommendation for selecting uncertainty analysis methods is provided. (author)
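
    The simplest of the surveyed approaches to sketch is Monte Carlo propagation: sample the uncertain inputs, run the model, and summarize the induced output spread. A minimal Python illustration with a stand-in model and assumed input distributions (MARCH/CORRAL II, TERFOC and SPARC are far heavier codes):

```python
import numpy as np

rng = np.random.default_rng(1)

def model(x1, x2):
    """Cheap stand-in for a computer model (hypothetical)."""
    return x1 ** 2 + np.sin(x2)

# Monte Carlo propagation: sample assumed input distributions,
# run the model, and summarize the induced output uncertainty.
x1 = rng.normal(1.0, 0.1, size=10_000)
x2 = rng.uniform(0.0, np.pi, size=10_000)
y = model(x1, x2)

print(y.mean(), y.std(), np.percentile(y, [5, 95]))
```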

  1. Estimation methods for nonlinear state-space models in ecology

    DEFF Research Database (Denmark)

    Pedersen, Martin Wæver; Berg, Casper Willestofte; Thygesen, Uffe Høgsbro

    2011-01-01

    The use of nonlinear state-space models for analyzing ecological systems is increasing. A wide range of estimation methods for such models is available to ecologists; however, it is not always clear which method is appropriate to choose. To this end, three approaches to estimation in the theta...... Markov model (HMM). The second method uses the mixed effects modeling and fast numerical integration framework of the AD Model Builder (ADMB) open-source software. The third alternative is to use the popular Bayesian framework of BUGS. The study showed that state and parameter estimation performance...

  2. Laser filamentation mathematical methods and models

    CERN Document Server

    Lorin, Emmanuel; Moloney, Jerome

    2016-01-01

    This book is focused on the nonlinear theoretical and mathematical problems associated with ultrafast intense laser pulse propagation in gases and in particular, in air. With the aim of understanding the physics of filamentation in gases, solids, the atmosphere, and even biological tissue, specialists in nonlinear optics and filamentation from both physics and mathematics attempt to rigorously derive and analyze relevant non-perturbative models. Modern laser technology allows the generation of ultrafast (few cycle) laser pulses, with intensities exceeding the internal electric field in atoms and molecules (E = 5×10⁹ V/cm or intensity I = 3.5×10¹⁶ W/cm²). The interaction of such pulses with atoms and molecules leads to new, highly nonlinear nonperturbative regimes, where new physical phenomena, such as High Harmonic Generation (HHG), occur, and from which the shortest (attosecond - the natural time scale of the electron) pulses have been created. One of the major experimental discoveries in this nonlinear...

  3. Structural equation and log-linear modeling: a comparison of methods in the analysis of a study on caregivers' health

    Directory of Open Access Journals (Sweden)

    Rosenbaum Peter L

    2006-10-01

    Full Text Available Abstract Background In this paper we compare the results in an analysis of determinants of caregivers' health derived from two approaches, a structural equation model and a log-linear model, using the same data set. Methods The data were collected from a cross-sectional population-based sample of 468 families in Ontario, Canada who had a child with cerebral palsy (CP. The self-completed questionnaires and the home-based interviews used in this study included scales reflecting socio-economic status, child and caregiver characteristics, and the physical and psychological well-being of the caregivers. Both analytic models were used to evaluate the relationships between child behaviour, caregiving demands, coping factors, and the well-being of primary caregivers of children with CP. Results The results were compared, together with an assessment of the positive and negative aspects of each approach, including their practical and conceptual implications. Conclusion No important differences were found in the substantive conclusions of the two analyses. The broad confirmation of the Structural Equation Modeling (SEM results by the Log-linear Modeling (LLM provided some reassurance that the SEM had been adequately specified, and that it broadly fitted the data.

  4. Three-Dimensional (3D) Nanometrology Based on Scanning Electron Microscope (SEM) Stereophotogrammetry.

    Science.gov (United States)

    Tondare, Vipin N; Villarrubia, John S; Vladár, András E

    2017-10-01

    Three-dimensional (3D) reconstruction of a sample surface from scanning electron microscope (SEM) images taken at two perspectives has been known for decades. Nowadays, there exist several commercially available stereophotogrammetry software packages. For testing these software packages, in this study we used Monte Carlo simulated SEM images of virtual samples. A virtual sample is a model in a computer, and its true dimensions are known exactly, which is impossible for real SEM samples due to measurement uncertainty. The simulated SEM images can be used for algorithm testing, development, and validation. We tested two stereophotogrammetry software packages and compared their reconstructed 3D models with the known geometry of the virtual samples used to create the simulated SEM images. Both packages performed relatively well with simulated SEM images of a sample with a rough surface. However, in a sample containing nearly uniform and therefore low-contrast zones, the height reconstruction error was ≈46%. The present stereophotogrammetry software packages need further improvement before they can be used reliably with SEM images with uniform zones.

  5. Three-dimensional intracellular structure of a whole rice mesophyll cell observed with FIB-SEM.

    Science.gov (United States)

    Oi, Takao; Enomoto, Sakiko; Nakao, Tomoyo; Arai, Shigeo; Yamane, Koji; Taniguchi, Mitsutaka

    2017-07-01

    Ultrathin sections of rice leaf blades observed two-dimensionally using a transmission electron microscope (TEM) show that the chlorenchyma is composed of lobed mesophyll cells, with intricate cell boundaries, and lined with chloroplasts. The lobed cell shape and chloroplast positioning are believed to enhance the area available for the gas exchange surface for photosynthesis in rice leaves. However, a cell image revealing the three-dimensional (3-D) ultrastructure of rice mesophyll cells has not been visualized. In this study, a whole rice mesophyll cell was observed using a focused ion beam scanning electron microscope (FIB-SEM), which provides many serial sections automatically, rapidly and correctly, thereby enabling 3-D cell structure reconstruction. Rice leaf blades were fixed chemically using the method for conventional TEM observation, embedded in resin and subsequently set in the FIB-SEM chamber. Specimen blocks were sectioned transversely using the FIB, and block-face images were captured using the SEM. The sectioning and imaging were repeated overnight for 200-500 slices (each 50 nm thick). The resultant large-volume image stacks ( x = 25 μm, y = 25 μm, z = 10-25 μm) contained one or two whole mesophyll cells. The 3-D models of whole mesophyll cells were reconstructed using image processing software. The reconstructed cell models were discoid shaped with several lobes around the cell periphery. The cell shape increased the surface area, and the ratio of surface area to volume was twice that of a cylinder having the same volume. The chloroplasts occupied half the cell volume and spread as sheets along the cell lobes, covering most of the inner cell surface, with adjacent chloroplasts in close contact with each other. Cellular and sub-cellular ultrastructures of a whole mesophyll cell in a rice leaf blade are demonstrated three-dimensionally using a FIB-SEM. The 3-D models and numerical information support the hypothesis that rice mesophyll

  6. Enhancing nanoscale SEM image segmentation and reconstruction with crystallographic orientation data and machine learning

    International Nuclear Information System (INIS)

    Converse, Matthew I.; Fullwood, David T.

    2013-01-01

    Current methods of image segmentation and reconstruction from scanning electron micrographs can be inadequate for resolving nanoscale gaps in composite materials (1–20 nm). Such information is critical to both accurate material characterization and models of piezoresistive response. The current work proposes the use of crystallographic orientation data and machine learning for enhancing this process. It is first shown how a machine learning algorithm can be used to predict the connectivity of nanoscale grains in a nickel nanostrand/epoxy composite. This results in 71.9% accuracy for a 2D algorithm and 62.4% accuracy in 3D. Finally, it is demonstrated how these algorithms can be used to predict the location of gaps between distinct nanostrands — gaps which would otherwise not be detected with the sole use of a scanning electron microscope. - Highlights: • A method is proposed for enhancing the segmentation/reconstruction of SEM images. • 3D crystallographic orientation data from a nickel nanocomposite is collected. • A machine learning algorithm is used to detect trends in adjacent grains. • This algorithm is then applied to predict likely regions of nanoscale gaps. • These gaps would otherwise be unresolved with the sole use of an SEM
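
    The prediction step can be pictured as ordinary supervised classification over features of adjacent grain pairs. A hedged Python sketch follows; the feature set (misorientation angle, shared-boundary length), the training data and the choice of a random forest are all illustrative assumptions, not the authors' pipeline:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical features for adjacent grain pairs, e.g.
# [misorientation angle (deg), shared boundary length (px)].
X = np.array([[2.1, 40.0],
              [15.3, 12.0],
              [1.2, 55.0],
              [30.8, 5.0]])
y = np.array([1, 0, 1, 0])   # 1 = grains belong to the same nanostrand

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X, y)

# Predict connectivity for a new, unlabeled grain pair.
print(clf.predict([[3.0, 35.0]]))
```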

  7. A compositional method to model dependent failure behavior based on PoF models

    Directory of Open Access Journals (Sweden)

    Zhiguo ZENG

    2017-10-01

    Full Text Available In this paper, a new method is developed to model dependent failure behavior among failure mechanisms. Unlike the existing methods, the developed method models the root cause of the dependency explicitly, so that a deterministic model, rather than a probabilistic one, can be established. Three steps comprise the developed method. First, physics-of-failure (PoF models are utilized to model each failure mechanism. Then, interactions among failure mechanisms are modeled as a combination of three basic relations, competition, superposition and coupling. This is the reason why the method is referred to as “compositional method”. Finally, the PoF models and the interaction model are combined to develop a deterministic model of the dependent failure behavior. As a demonstration, the method is applied on an actual spool and the developed failure behavior model is validated by a wear test. The result demonstrates that the compositional method is an effective way to model dependent failure behavior.
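
    A toy Python rendering of the compositional idea under stated assumptions: linear wear and fatigue PoF models with invented rates, superposition as addition of damage rates, and competition as the minimum time to failure:

```python
# Toy composition of two PoF models. All rates and limits are invented.

def ttf_wear(rate, limit=1.0):
    """Linear wear model: time to reach the allowed wear depth."""
    return limit / rate

def ttf_fatigue(cycles_to_fail=8e3, cycles_per_hour=2.0):
    """Fatigue model: hours until the critical cycle count."""
    return cycles_to_fail / cycles_per_hour

# Superposition: two wear mechanisms act on the same surface, so
# their damage rates add (harmonic combination of the two TTFs).
t_wear = 1.0 / (1.0 / ttf_wear(2e-4) + 1.0 / ttf_wear(1e-4))

# Competition: independent mechanisms race; the earliest failure wins.
t_system = min(t_wear, ttf_fatigue())
print(t_wear, t_system)
```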

  8. METHODICAL MODEL FOR TEACHING BASIC SKI TURN

    Directory of Open Access Journals (Sweden)

    Danijela Kuna

    2013-07-01

    Full Text Available With the aim of forming an expert model of the most important operators for teaching the basic ski turn in ski schools, an experiment was conducted on a sample of 20 ski experts from different countries (Croatia, Bosnia and Herzegovina and Slovenia). From the group of the most commonly used operators for teaching the basic ski turn, the experts picked the 6 most important: uphill turn and jumping into snowplough, basic turn with hand sideways, basic turn with clapping, ski poles in front, ski poles on neck, and uphill turn with active ski guiding. Afterwards, ranking and selection of the most efficient operators was carried out. In line with the aim of the research, a chi-square test was used to examine the differences between frequencies of chosen operators, the differences between values of the most important operators, and the differences between experts by nationality. Statistically significant differences were noticed between frequencies of chosen operators (χ2 = 24.61; p = 0.01), while differences between values of the most important operators were not evident (χ2 = 1.94; p = 0.91). Meanwhile, differences between experts by nationality were only noticeable in the expert evaluation of the "ski poles on neck" operator (χ2 = 7.83; p = 0.02). The results of the current research provide useful information about the methodological principles of organizing basic ski turn learning in ski schools.
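
    The goodness-of-fit test reported above is a plain chi-square test on choice frequencies; the sketch below reproduces the computation with SciPy on assumed counts (the paper reports χ2 = 24.61, p = 0.01 for its own data).

```python
from scipy.stats import chisquare

# Assumed frequencies with which experts picked the six teaching operators
# (illustrative numbers only, not the study's data).
observed = [28, 9, 11, 6, 5, 13]

# Null hypothesis: all six operators are chosen equally often.
stat, p = chisquare(observed)
print(f"chi2 = {stat:.2f}, p = {p:.4f}")
```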

  9. A Parametric Modelling Method for Dexterous Finger Reachable Workspaces

    Directory of Open Access Journals (Sweden)

    Wenzhen Yang

    2016-03-01

    Full Text Available The well-known algorithms, such as the graphic method, analytical method or numerical method, have some defects when modelling the dexterous finger workspace, which is a significant kinematical feature of dexterous hands and valuable for grasp planning, motion control and mechanical design. A novel modelling method with convenient and parametric performance is introduced to generate the dexterous-finger reachable workspace. This method constructs the geometric topology of the dexterous-finger reachable workspace, and uses a joint feature recognition algorithm to extract the kinematical parameters of the dexterous finger. Compared with the graphic, analytical and numerical methods, this parametric modelling method can automatically and conveniently construct more vivid forms and contours of the dexterous finger's workspace. The main contribution of this paper is that a workspace-modelling tool with high interactive efficiency is developed for designers to precisely visualize the dexterous-finger reachable workspace, which is valuable for analysing the flexibility of the dexterous finger.
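
    A common numerical baseline for workspace generation, one of the approaches the abstract says it improves upon, is Monte Carlo sampling of the joint space through forward kinematics. The sketch below assumes a planar three-joint finger with illustrative link lengths and joint limits.

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed planar 3-joint finger: link lengths (mm) and joint limits (rad).
lengths = np.array([45.0, 25.0, 20.0])
q_lo = np.array([0.0, 0.0, 0.0])
q_hi = np.array([np.pi / 2, np.pi / 2, np.pi / 2])

# Sample joint configurations and push them through forward kinematics.
n = 100_000
q = rng.uniform(q_lo, q_hi, size=(n, 3))
phi = np.cumsum(q, axis=1)                # absolute link angles
x = (lengths * np.cos(phi)).sum(axis=1)   # fingertip x coordinates
y = (lengths * np.sin(phi)).sum(axis=1)   # fingertip y coordinates

print(f"reachable x in [{x.min():.1f}, {x.max():.1f}] mm")
print(f"reachable y in [{y.min():.1f}, {y.max():.1f}] mm")
```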

  10. Modeling shallow water flows using the discontinuous Galerkin method

    CERN Document Server

    Khan, Abdul A

    2014-01-01

    Replacing the Traditional Physical Model Approach Computational models offer promise in improving the modeling of shallow water flows. As new techniques are considered, the process continues to change and evolve. Modeling Shallow Water Flows Using the Discontinuous Galerkin Method examines a technique that focuses on hyperbolic conservation laws and includes one-dimensional and two-dimensional shallow water flows and pollutant transports. Combines the Advantages of Finite Volume and Finite Element Methods This book explores the discontinuous Galerkin (DG) method, also known as the discontinuous finite element method, in depth. It introduces the DG method and its application to shallow water flows, as well as background information for implementing and applying this method for natural rivers. It considers dam-break problems, shock wave problems, and flows in different regimes (subcritical, supercritical, and transcritical). Readily Adaptable to the Real World While the DG method has been widely used in the fie...

  12. SEMS: System for Environmental Monitoring and Sustainability

    Science.gov (United States)

    Arvidson, Raymond E.

    1998-01-01

    The goal of this project was to establish a computational and data management system, SEMS, building on our existing system and MTPE-related research. We proposed that the new system would help support Washington University's efforts in environmental sustainability through use in: (a) Problem-based environmental curriculum for freshmen and sophomores funded by the Hewlett Foundation that integrates scientific, cultural, and policy perspectives to understand the dynamics of wetland degradation, deforestation, and desertification and that will develop policies for sustainable environments and economies; (b) Higher-level undergraduate and graduate courses focused on monitoring the environment and developing policies that will lead to sustainable environmental and economic conditions; and (c) Interdisciplinary research focused on the dynamics of the Missouri River system and development of policies that lead to sustainable environmental and economic floodplain conditions.

  13. "Method, system and storage medium for generating virtual brick models"

    DEFF Research Database (Denmark)

    2009-01-01

    An exemplary embodiment is a method for generating a virtual brick model. The virtual brick models are generated by users and uploaded to a centralized host system. Users can build virtual models themselves or download and edit another user's virtual brick models while retaining the identity of the original virtual brick model. Routines are provided for both storing user created building steps in and generating automated building instructions for virtual brick models, generating a bill of materials for a virtual brick model and ordering physical bricks corresponding to a virtual brick model.

  14. Assessment of engineered surfaces roughness by high-resolution 3D SEM photogrammetry.

    Science.gov (United States)

    Gontard, L C; López-Castro, J D; González-Rovira, L; Vázquez-Martínez, J M; Varela-Feria, F M; Marcos, M; Calvino, J J

    2017-06-01

    We describe a methodology to obtain three-dimensional models of engineered surfaces using scanning electron microscopy and multi-view photogrammetry (3DSEM). For the reconstruction of the 3D models of the surfaces we used freeware available in the cloud. The method was applied to study the surface roughness of metallic samples patterned with parallel grooves by means of laser. The results are compared with measurements obtained using stylus profilometry (PR) and SEM stereo-photogrammetry (SP). The application of 3DSEM is more time demanding than PR or SP, but it provides a more accurate representation of the surfaces. The results obtained with the three techniques are compared by investigating the influence of sampling step on roughness parameters. Copyright © 2017 Elsevier B.V. All rights reserved.
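
    The roughness parameters whose sampling-step dependence is investigated above can be computed from any reconstructed height profile. The sketch below evaluates Ra and Rq on a synthetic grooved profile (an assumption standing in for the laser-patterned surfaces) at three sampling steps.

```python
import numpy as np

def roughness(z):
    """Arithmetic (Ra) and root-mean-square (Rq) roughness of a leveled profile."""
    z = z - z.mean()
    return np.abs(z).mean(), np.sqrt((z ** 2).mean())

# Synthetic grooved profile in um (assumed 50 um groove pitch plus noise).
rng = np.random.default_rng(2)
x = np.linspace(0.0, 500.0, 5001)          # 0.1 um base sampling step
z = 2.0 * np.sin(2.0 * np.pi * x / 50.0) + 0.2 * rng.normal(size=x.size)

for step in (1, 10, 100):                  # 0.1, 1 and 10 um sampling steps
    ra, rq = roughness(z[::step])
    print(f"step = {0.1 * step:5.1f} um   Ra = {ra:.3f} um   Rq = {rq:.3f} um")
```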

  15. Methods for model selection in applied science and engineering.

    Energy Technology Data Exchange (ETDEWEB)

    Field, Richard V., Jr.

    2004-10-01

    Mathematical models are developed and used to study the properties of complex systems and/or modify these systems to satisfy some performance requirements in just about every area of applied science and engineering. A particular reason for developing a model, e.g., performance assessment or design, is referred to as the model use. Our objective is the development of a methodology for selecting a model that is sufficiently accurate for an intended use. Information on the system being modeled is, in general, incomplete, so that there may be two or more models consistent with the available information. The collection of these models is called the class of candidate models. Methods are developed for selecting the optimal member from a class of candidate models for the system. The optimal model depends on the available information, the selected class of candidate models, and the model use. Classical methods for model selection, including the method of maximum likelihood and Bayesian methods, as well as a method employing a decision-theoretic approach, are formulated to select the optimal model for numerous applications. There is no requirement that the candidate models be random. Classical methods for model selection ignore model use and require data to be available. Examples are used to show that these methods can be unreliable when data is limited. The decision-theoretic approach to model selection does not have these limitations, and model use is included through an appropriate utility function. This is especially important when modeling high risk systems, where the consequences of using an inappropriate model for the system can be disastrous. The decision-theoretic method for model selection is developed and applied for a series of complex and diverse applications. These include the selection of the: (1) optimal order of the polynomial chaos approximation for non-Gaussian random variables and stationary stochastic processes, (2) optimal pressure load model to be

  16. IDEF method-based simulation model design and development framework

    Directory of Open Access Journals (Sweden)

    Ki-Young Jeong

    2009-09-01

    Full Text Available The purpose of this study is to provide an IDEF method-based integrated framework for a business process simulation model to reduce the model development time by increasing the communication and knowledge reusability during a simulation project. In this framework, simulation requirements are collected by a function modeling method (IDEF0) and a process modeling method (IDEF3). Based on these requirements, a common data model is constructed using the IDEF1X method. From this reusable data model, multiple simulation models are automatically generated using a database-driven simulation model development approach. The framework is claimed to help both the requirement collection and experimentation phases during a simulation project by improving system knowledge, model reusability, and maintainability through the systematic use of three descriptive IDEF methods and the features of the relational database technologies. A complex semiconductor fabrication case study was used as a testbed to evaluate and illustrate the concepts and the framework. Two different simulation software products were used to develop and control the semiconductor model from the same knowledge base. The case study empirically showed that this framework could help improve the simulation project processes by using IDEF-based descriptive models and the relational database technology. The authors also concluded that this framework could be easily applied to other analytical model generation by separating the logic from the data.

  17. Simulation of arc models with the block modelling method

    NARCIS (Netherlands)

    Thomas, R.; Lahaye, D.J.P.; Vuik, C.; Van der Sluis, L.

    2015-01-01

    Simulation of current interruption is currently performed with non-ideal switching devices for large power systems. Nevertheless, for small networks, non-ideal switching devices can be substituted by arc models. However, this substitution has a negative impact on the computation time. At the same

  18. The luminal surface of thyroid cysts in SEM

    DEFF Research Database (Denmark)

    Zelander, T; Kirkeby, S

    1978-01-01

    Four of the five kinds of cells constituting the walls of thyroid cysts can be identified in the SEM. These are cuboidal cells, mucous cells, cells with large granules and ciliated cells. A correlation between SEM and TEM observations is attempted.

  19. Advanced methods of solid oxide fuel cell modeling

    CERN Document Server

    Milewski, Jaroslaw; Santarelli, Massimo; Leone, Pierluigi

    2011-01-01

    Fuel cells are widely regarded as the future of the power and transportation industries. Intensive research in this area now requires new methods of fuel cell operation modeling and cell design. Typical mathematical models are based on the physical process description of fuel cells and require a detailed knowledge of the microscopic properties that govern both chemical and electrochemical reactions. "Advanced Methods of Solid Oxide Fuel Cell Modeling" proposes the alternative methodology of generalized artificial neural networks (ANN) solid oxide fuel cell (SOFC) modeling. "Advanced Methods

  20. Fuzzy Clustering Methods and their Application to Fuzzy Modeling

    DEFF Research Database (Denmark)

    Kroszynski, Uri; Zhou, Jianjun

    1999-01-01

    Fuzzy modeling techniques based upon the analysis of measured input/output data sets result in a set of rules that allow prediction of system outputs from given inputs. Fuzzy clustering methods for system modeling and identification result in relatively small rule-bases, allowing fast, yet accurate inference. A method to obtain an optimized number of clusters is outlined. Based upon the cluster's characteristics, a behavioural model is formulated in terms of a rule-base and an inference engine. The article reviews several variants for the model formulation. Some limitations of the methods are listed.

  1. [Model transfer method based on support vector machine].

    Science.gov (United States)

    Xiong, Yu-hong; Wen, Zhi-yu; Liang, Yu-qian; Chen, Qin; Zhang, Bo; Liu, Yu; Xiang, Xian-yi

    2007-01-01

    Model transfer is a basic method for making spectrometer data universal and comparable by seeking a mathematical transformation relation among different spectrometers. Because of evident nonlinear effects and small calibration sample sets in practice, it is important to solve the problem of model transfer under these conditions. This paper summarizes support vector machine theory, puts forward a method of model transfer based on support vector machines and piecewise direct standardization, and uses computer simulation, giving an example to explain the method and comparing it with an artificial neural network.
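
    A hedged sketch of the idea: learn, on a small transfer set, a per-channel mapping from the slave instrument's readings back to the master instrument's scale, here with support vector regression. The synthetic spectra and the channel-wise SVR are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(3)

# Synthetic "master" spectra and a nonlinearly distorted "slave" response.
n_samples, n_channels = 30, 50
master = rng.uniform(0.0, 1.0, (n_samples, n_channels))
slave = 0.8 * master + 0.1 * master ** 2 + rng.normal(0.0, 0.01, master.shape)

# One SVR per channel maps slave readings to the master scale (a simple
# stand-in for combining SVMs with piecewise direct standardization).
models = [SVR(kernel="rbf", C=10.0).fit(slave[:, [j]], master[:, j])
          for j in range(n_channels)]

corrected = np.column_stack([m.predict(slave[:, [j]])
                             for j, m in enumerate(models)])
print("mean abs error after transfer:", np.abs(corrected - master).mean())
```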

  2. Classification and printability of EUV mask defects from SEM images

    Science.gov (United States)

    Cho, Wonil; Price, Daniel; Morgan, Paul A.; Rost, Daniel; Satake, Masaki; Tolani, Vikram L.

    2017-10-01

    The system performs S2A (SEM-to-Aerial printability) analysis of every defect. First, a defect-free or reference mask SEM is rendered from the post-OPC design, and the defective signature is detected from the defect-reference difference image. These signatures help assess the true nature of the defect as evident in e-beam imaging; for example, excess or missing absorber, line-edge roughness, contamination, etc. Next, defect and reference contours are extracted from the grayscale SEM images and fed into the simulation engine with an EUV scanner model to generate corresponding EUV defect and reference aerial images. These are then analyzed for printability and dispositioned using an Aerial Image Analyzer (AIA) application to automatically measure and determine the amount of CD errors. Thus, by integrating the EUV ADC and S2A applications together, every defect detection is characterized for its type and printability, which is essential not only for determining which defects to repair, but also for monitoring the performance of EUV mask process tools. The accuracy of the S2A print modeling has been verified with other commercially-available simulators, and will also be verified with actual wafer print results. With EUV lithography progressing towards volume manufacturing at 5nm technology, and with EBMI inspectors likely on the horizon, the EUV ADC-S2A system will continue serving an essential role of dispositioning defects off e-beam imaging.

  3. Systematic Methods and Tools for Computer Aided Modelling

    DEFF Research Database (Denmark)

    Fedorova, Marina

    Models are playing important roles in design and analysis of chemicals/bio-chemicals based products and the processes that manufacture them. Model-based methods and tools have the potential to decrease the number of experiments, which can be expensive and time consuming, and point to candidates......, where the experimental effort could be focused. In this project a general modelling framework for systematic model building through modelling templates, which supports the reuse of existing models via its new model import and export capabilities, have been developed. The new feature for model transfer...... has been developed by establishing a connection with an external modelling environment for code generation. The main contribution of this thesis is a creation of modelling templates and their connection with other modelling tools within a modelling framework. The goal was to create a user...

  4. Inter-operator and inter-device agreement and reliability of the SEM Scanner.

    Science.gov (United States)

    Clendenin, Marta; Jaradeh, Kindah; Shamirian, Anasheh; Rhodes, Shannon L

    2015-02-01

    The SEM Scanner is a medical device designed for use by healthcare providers as part of pressure ulcer prevention programs. The objective of this study was to evaluate the inter-rater and inter-device agreement and reliability of the SEM Scanner. Thirty-one (31) volunteers free of pressure ulcers or broken skin at the sternum, sacrum, and heels were assessed with the SEM Scanner. Each of three operators utilized each of three devices to collect readings from four anatomical sites (sternum, sacrum, left and right heels) on each subject for a total of 108 readings per subject collected over approximately 30 min. For each combination of operator-device-anatomical site, three SEM readings were collected. Inter-operator and inter-device agreement and reliability were estimated. Over the course of this study, more than 3000 SEM Scanner readings were collected. Agreement between operators was good with mean differences ranging from -0.01 to 0.11. Inter-operator and inter-device reliability exceeded 0.80 at all anatomical sites assessed. The results of this study demonstrate the high reliability and good agreement of the SEM Scanner across different operators and different devices. Given the limitations of current methods to prevent and detect pressure ulcers, the SEM Scanner shows promise as an objective, reliable tool for assessing the presence or absence of pressure-induced tissue damage such as pressure ulcers. Copyright © 2015 Bruin Biometrics, LLC. Published by Elsevier Ltd. All rights reserved.

  5. Systems and methods for modeling and analyzing networks

    Science.gov (United States)

    Hill, Colin C; Church, Bruce W; McDonagh, Paul D; Khalil, Iya G; Neyarapally, Thomas A; Pitluk, Zachary W

    2013-10-29

    The systems and methods described herein utilize a probabilistic modeling framework for reverse engineering an ensemble of causal models from data, and then forward simulating the ensemble of models to analyze and predict the behavior of the network. In certain embodiments, the systems and methods described herein include data-driven techniques for developing causal models for biological networks. Causal network models include computational representations of the causal relationships between independent variables such as a compound of interest and dependent variables such as measured DNA alterations, changes in mRNA, protein, and metabolites to phenotypic readouts of efficacy and toxicity.

  6. Model-Based Methods in the Biopharmaceutical Process Lifecycle.

    Science.gov (United States)

    Kroll, Paul; Hofer, Alexandra; Ulonska, Sophia; Kager, Julian; Herwig, Christoph

    2017-12-01

    Model-based methods are increasingly used in all areas of biopharmaceutical process technology. They can be applied in the field of experimental design, process characterization, process design, monitoring and control. Benefits of these methods are lower experimental effort, process transparency, clear rationality behind decisions and increased process robustness. The possibility of applying methods adopted from different scientific domains accelerates this trend further. In addition, model-based methods can help to implement regulatory requirements as suggested by recent Quality by Design and validation initiatives. The aim of this review is to give an overview of the state of the art of model-based methods, their applications, further challenges and possible solutions in the biopharmaceutical process life cycle. Today, despite these advantages, the potential of model-based methods is still not fully exhausted in bioprocess technology. This is due to a lack of (i) acceptance of the users, (ii) user-friendly tools provided by existing methods, (iii) implementation in existing process control systems and (iv) clear workflows to set up specific process models. We propose that model-based methods be applied throughout the lifecycle of a biopharmaceutical process, starting with the set-up of a process model, which is used for monitoring and control of process parameters, and ending with continuous and iterative process improvement via data mining techniques.

  7. An iterative method for accelerated degradation testing data of smart electricity meter

    Science.gov (United States)

    Wang, Xiaoming; Xie, Jinzhe

    2017-01-01

    In order to evaluate the performance of a smart electricity meter (SEM), much time must be spent observing its status; for example, assessing the meter stability of a SEM takes several years at least, according to the standards. Accelerated degradation testing (ADT) is therefore a useful method to assess the performance of the SEM. As is well known, the Wiener process is a prevalent method for interpreting performance degradation. This paper proposes an iterative method for ADT data of SEM. The simulation study verifies the applicability and superiority of the proposed model over other ADT methods.
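
    A minimal sketch of the Wiener-process view of degradation used above: simulate drifted Brownian degradation increments for several units and recover the drift and diffusion parameters by the method of moments. All parameter values are assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)

mu, sigma, dt = 0.05, 0.2, 1.0   # assumed drift, diffusion and time step
n_units, n_steps = 20, 100

# Wiener degradation model: X(t+dt) - X(t) = mu*dt + sigma*sqrt(dt)*N(0,1).
increments = mu * dt + sigma * np.sqrt(dt) * rng.normal(size=(n_units, n_steps))
paths = increments.cumsum(axis=1)

# Moment estimates of the model parameters from the observed increments.
mu_hat = increments.mean() / dt
sigma_hat = increments.std(ddof=1) / np.sqrt(dt)
print(f"mu_hat = {mu_hat:.4f} (true {mu}), sigma_hat = {sigma_hat:.4f} (true {sigma})")
```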

  9. Modeling of Landslides with the Material Point Method

    DEFF Research Database (Denmark)

    Andersen, Søren Mikkel; Andersen, Lars

    2008-01-01

    A numerical model for studying the dynamic evolution of landslides is presented. The numerical model is based on the Generalized Interpolation Material Point Method. A simplified slope with a house placed on top is analysed. An elasto-plastic material model based on the Mohr-Coulomb yield criterion...

  10. Comparison of surrogate models with different methods in ...

    Indian Academy of Sciences (India)

    Surrogate modelling is an effective tool for reducing computational burden of simulation optimization. In this article, polynomial regression (PR), radial basis function artificial neural network (RBFANN), and kriging methods were compared for building surrogate models of a multiphase flow simulation model in a simplified ...

  11. Two Undergraduate Process Modeling Courses Taught Using Inductive Learning Methods

    Science.gov (United States)

    Soroush, Masoud; Weinberger, Charles B.

    2010-01-01

    This manuscript presents a successful application of inductive learning in process modeling. It describes two process modeling courses that use inductive learning methods such as inquiry learning and problem-based learning, among others. The courses include a novel collection of multi-disciplinary complementary process modeling examples. They were…

  13. Monte Carlo methods and models in finance and insurance

    CERN Document Server

    Korn, Ralf; Kroisandt, Gerald

    2010-01-01

    Offering a unique balance between applications and calculations, Monte Carlo Methods and Models in Finance and Insurance incorporates the application background of finance and insurance with the theory and applications of Monte Carlo methods. It presents recent methods and algorithms, including the multilevel Monte Carlo method, the statistical Romberg method, and the Heath-Platen estimator, as well as recent financial and actuarial models, such as the Cheyette and dynamic mortality models. The authors separately discuss Monte Carlo techniques, stochastic process basics, and the theoretical background and intuition behind financial and actuarial mathematics, before bringing the topics together to apply the Monte Carlo methods to areas of finance and insurance. This allows for the easy identification of standard Monte Carlo tools and for a detailed focus on the main principles of financial and insurance mathematics. The book describes high-level Monte Carlo methods for standard simulation and the simulation of...

  14. Method of moments estimation of GO-GARCH models

    NARCIS (Netherlands)

    Boswijk, H.P.; van der Weide, R.

    2009-01-01

    We propose a new estimation method for the factor loading matrix in generalized orthogonal GARCH (GO-GARCH) models. The method is based on the eigenvectors of a suitably defined sample autocorrelation matrix of squares and cross-products of the process. The method can therefore be easily applied to
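
    A simplified stand-in for the core computation: build a lag-1 autocovariance matrix from the squared components of a multivariate series and take its eigenvectors as candidate loading directions. The i.i.d. synthetic data carry no real factor structure; the snippet only illustrates the mechanics, not the estimator's exact definition.

```python
import numpy as np

rng = np.random.default_rng(5)
T, k = 4000, 3
y = rng.standard_normal((T, k))      # stand-in for de-meaned return series

# Lag-1 autocovariance of the squared series (a simplified version of the
# "squares and cross-products" matrix described in the abstract).
sq = y ** 2 - (y ** 2).mean(axis=0)
gamma1 = sq[1:].T @ sq[:-1] / (T - 1)
gamma1 = (gamma1 + gamma1.T) / 2.0   # symmetrize before eigendecomposition

eigvals, eigvecs = np.linalg.eigh(gamma1)
print("eigenvalues:", np.round(eigvals, 4))
print("candidate loading directions (columns):\n", np.round(eigvecs, 3))
```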

  15. Design of nuclear power generation plants adopting model engineering method

    International Nuclear Information System (INIS)

    Waki, Masato

    1983-01-01

    The utilization of model engineering as a design method began about ten years ago in nuclear power generation plants. With this method, the result of design can be confirmed three-dimensionally before actual production, and it is a quick and sure way to meet the various needs in design promptly. The adoption of models aims mainly at improving the quality of design, since high safety is required for nuclear power plants in spite of their complex structure. The layout of nuclear power plants and piping design require model engineering to arrange an enormous quantity of things rationally in a limited period. As methods of model engineering, there are the use of check models and of design models, and recently the latter method has mainly been taken. The procedure of manufacturing models and engineering is explained. After model engineering has been completed, the model information must be expressed in drawings, and the automation of this process has been attempted by various methods. The computer processing of design is in progress, and its role is explained (CAD system). (Kako, I.)

  16. Method of modeling the cognitive radio using Opnet Modeler

    OpenAIRE

    Yakovenko, I. V.; Poshtarenko, V. M.; Kostenko, R. V.

    2012-01-01

    This article is a review of the first wireless standard based on cognitive radio networks. It discusses the need for wireless networks based on cognitive radio technology. An example of the use of the IEEE 802.22 standard in a WiMAX network was implemented in the Opnet Modeler simulation environment. Graphs check the performance of the HTTP and FTP protocols in the CR network. The simulation results justify the use of the IEEE 802.22 standard in wireless networks.

  17. Semântica e lexicografia [Semantics and lexicography]

    Directory of Open Access Journals (Sweden)

    Julio Casares

    2001-01-01

    Full Text Available

    Semantics and lexicography interpenetrate each other because lexicography is not limited to collecting the words of the lexicon, but seeks to describe the meaning of words and their uses. The lexicographer is also concerned with the evolution of word senses in order to establish the scale of acceptations of a lexical sign. Casares defines the notion of acceptation and discusses the problem of discriminating between acceptations and of ordering them in the case of polysemous words. Another delicate question for the lexicographer is the recognition and correct identification of metaphorical values. As an illustrative example, the author uses the entry Lat. ordo > Sp. orden (Port. ordem), a polysemous sign, charting the web of meanings in the evolutionary semantics of this word from the original Latin etymon to modern Spanish. Casares also treats the problem of lemmatization, that is, the technical decision of choosing one word form or another as the entry of a dictionary, which involves permanent controversies among lexicologists over lexias (complex words) and over how and when the lexical categorization of a multi-word expression takes place. This problem is amplified by the chaotic tradition of many spellings, particularly in the case of "phrasal locutions". He advocates the advantages and virtues of a dictionary that would include an index of the frequency of use of each word, or of each acceptation of a word.

  18. Non-monotonic modelling from initial requirements: a proposal and comparison with monotonic modelling methods

    NARCIS (Netherlands)

    Marincic, J.; Mader, Angelika H.; Wupper, H.; Wieringa, Roelf J.

    2008-01-01

    Researchers make a significant effort to develop new modelling languages and tools. However, they spend less effort developing methods for constructing models using these languages and tools. We are developing a method for building an embedded system model for formal verification. Our method

  19. Analysis list: sem-4 [Chip-atlas[Archive

    Lifescience Database Archive (English)

    Full Text Available sem-4 Larvae + ce10 http://dbarchive.biosciencedbc.jp/kyushu-u/ce10/target/sem-4.1....tsv http://dbarchive.biosciencedbc.jp/kyushu-u/ce10/target/sem-4.5.tsv http://dbarchive.biosciencedbc.jp/kyushu-u/ce10/target/sem...-4.10.tsv http://dbarchive.biosciencedbc.jp/kyushu-u/ce10/colo/sem-4.Larvae.tsv http://dbarchive.biosciencedbc.jp/kyushu-u/ce10/colo/Larvae.gml ...

  20. Domain decomposition methods in FVM approach to gravity field modelling.

    Science.gov (United States)

    Macák, Marek

    2017-04-01

    The finite volume method (FVM) as a numerical method can be straightforwardly implemented for global or local gravity field modelling. This discretization method solves the geodetic boundary value problems in a space domain. In order to obtain precise numerical solutions, it usually requires a very refined discretization, leading to large-scale parallel computations. To optimize such computations, we present a special class of numerical techniques that are based on a physical decomposition of the global solution domain. Domain decomposition (DD) methods such as the Multiplicative Schwarz Method and the Additive Schwarz Method are very efficient for solving partial differential equations. We briefly present their mathematical formulations, and we test their efficiency. The numerical experiments presented deal with gravity field modelling. Since there is no need to solve special interface problems between neighbouring subdomains, in our applications we use the overlapping DD methods.
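
    A minimal sketch of the Schwarz idea on a 1D Poisson problem: alternate exact solves on two overlapping index blocks (multiplicative Schwarz). The discretization, subdomain split and overlap are illustrative choices, not the boundary-value problem treated above.

```python
import numpy as np

# 1D Poisson: -u'' = 1 on (0, 1), u(0) = u(1) = 0, 3-point stencil.
n = 99
h = 1.0 / (n + 1)
A = (np.diag(np.full(n, 2.0))
     - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h ** 2
f = np.ones(n)

# Two overlapping subdomains expressed as index blocks (assumed overlap).
d1 = np.arange(0, 60)
d2 = np.arange(40, n)

u = np.zeros(n)
for _ in range(30):                   # multiplicative Schwarz sweeps
    for d in (d1, d2):                # alternate the local solves
        r = f - A @ u                 # current global residual
        u[d] += np.linalg.solve(A[np.ix_(d, d)], r[d])

x = np.linspace(h, 1.0 - h, n)
print("max error vs exact x(1-x)/2:", np.abs(u - 0.5 * x * (1.0 - x)).max())
```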

  1. A RECREATION OPTIMIZATION MODEL BASED ON THE TRAVEL COST METHOD

    OpenAIRE

    Hof, John G.; Loomis, John B.

    1983-01-01

    A recreation allocation model is developed which efficiently selects recreation areas and degree of development from an array of proposed and existing sites. The model does this by maximizing the difference between gross recreation benefits and travel, investment, management, and site-opportunity costs. The model presented uses the Travel Cost Method for estimating recreation benefits within an operations research framework. The model is applied to selection of potential wilderness areas in C...

  2. Numerical methods for modeling photonic-crystal VCSELs

    DEFF Research Database (Denmark)

    Dems, Maciej; Chung, Il-Sug; Nyakas, Peter

    2010-01-01

    We present a comparison of four different numerical methods for simulating photonic-crystal (PC) VCSELs. We present the theoretical basis behind each method and analyze the differences by studying a benchmark VCSEL structure, where the PC structure penetrates all VCSEL layers, the entire top-mirror DBR... to the effective index method. The simulation results elucidate the strengths and weaknesses of the analyzed methods and outline the limits of applicability of the different models.

  3. A Model-Driven Development Method for Management Information Systems

    Science.gov (United States)

    Mizuno, Tomoki; Matsumoto, Keinosuke; Mori, Naoki

    Traditionally, a Management Information System (MIS) has been developed without using formal methods. With these informal methods, the MIS is developed over its lifecycle without any models, which causes many problems, such as unreliable system design specifications. In order to overcome these problems, a model theory approach was proposed, based on the idea that a system can be modeled by automata and set theory. However, it is very difficult to generate automata of the system to be developed right from the start. On the other hand, there is a model-driven development method that can flexibly accommodate changes of business logic or implementation technologies. In model-driven development, a system is modeled using a modeling language such as UML. This paper proposes a new development method for management information systems, applying the model-driven development method to a component of the model theory approach. The experiment has shown that the effort saved is more than 30% of the total effort.

  4. Extension of local front reconstruction method with controlled coalescence model

    Science.gov (United States)

    Rajkotwala, A. H.; Mirsandi, H.; Peters, E. A. J. F.; Baltussen, M. W.; van der Geld, C. W. M.; Kuerten, J. G. M.; Kuipers, J. A. M.

    2018-02-01

    The physics of droplet collisions involves a wide range of length scales. This poses a challenge to accurately simulate such flows with standard fixed grid methods due to their inability to resolve all relevant scales with an affordable number of computational grid cells. A solution is to couple a fixed grid method with subgrid models that account for microscale effects. In this paper, we improved and extended the Local Front Reconstruction Method (LFRM) with the film drainage model of Zhang and Law [Phys. Fluids 23, 042102 (2011)]. The new framework is first validated by (near) head-on collision of two equal tetradecane droplets using experimental film drainage times. When the experimental film drainage times are used, the LFRM method is better at predicting the droplet collisions, especially at high velocity, in comparison with other fixed grid methods (i.e., the front tracking method and the coupled level set and volume of fluid method). When the film drainage model is invoked, the method shows a good qualitative match with experiments, but a quantitative correspondence of the predicted film drainage time with the experimental drainage time is not obtained, indicating that further development of the film drainage model is required. However, it can be safely concluded that the LFRM coupled with film drainage models is much better at predicting the collision dynamics than the traditional methods.

  5. Prospective Mathematics Teachers' Opinions about Mathematical Modeling Method and Applicability of This Method

    Science.gov (United States)

    Akgün, Levent

    2015-01-01

    The aim of this study is to identify prospective secondary mathematics teachers' opinions about the mathematical modeling method and the applicability of this method in high schools. The case study design, which is among the qualitative research methods, was used in the study. The study was conducted with six prospective secondary mathematics…

  6. Comparative SEM analysis of nine F22 aligner cleaning strategies.

    Science.gov (United States)

    Lombardo, Luca; Martini, Marco; Cervinara, Francesca; Spedicato, Giorgio Alfredo; Oliverio, Teresa; Siciliani, Giuseppe

    2017-12-01

    The orthodontics industry has paid great attention to the aesthetics of orthodontic appliances, seeking to make them as invisible as possible. There are several advantages to clear aligner systems, including aesthetics, comfort, chairside time reduction, and the fact that they can be removed for meals and oral hygiene procedures. Five patients were each given a series of F22 aligners, each to be worn for 14 days and nights, with the exception of meal and brushing times. Patients were instructed to clean each aligner using a prescribed strategy, and sections of the used aligners were observed under SEM. One grey-scale SEM image was saved per aligner in JPEG format with an 8-bit colour depth, and a total of 45 measurements on the grey scale ("Value" variable) were made. This dataset was analysed statistically via repeated measures ANOVA to determine the effect of each of the nine cleaning strategies in each of the five patients. A statistically significant difference in the efficacy of the cleaning strategies was detected. Specifically, rinsing with water alone was significantly less efficacious, and a combination of cationic detergent solution and ultrasonication was significantly more efficacious than the other methods (p < 0.05) in cleaning the aligners.

  7. Stencil method: a Markov model for transport in porous media

    Science.gov (United States)

    Delgoshaie, A. H.; Tchelepi, H.; Jenny, P.

    2016-12-01

    In porous media, the transport of fluid is dominated by flow-field heterogeneity resulting from the underlying transmissibility field. Since the transmissibility is highly uncertain, many realizations of a geological model are used to describe the statistics of the transport phenomena in a Monte Carlo framework. One possible way to avoid the high computational cost of physics-based Monte Carlo simulations is to model the velocity field as a Markov process and use Markov chain Monte Carlo. In previous works, multiple Markov models for discrete velocity processes have been proposed. These models can be divided into two general classes: Markov models in time and Markov models in space. Both of these choices have been shown to be effective to some extent. However, some studies have suggested that the Markov property cannot be confirmed for a temporal Markov process; therefore, there is no consensus about the validity and value of Markov models in time. Moreover, previous spatial Markov models have only been used for modeling transport on structured networks and cannot be readily applied to model transport in unstructured networks. In this work, we propose a novel approach for constructing a Markov model in time (the stencil method) for a discrete velocity process. The results from the stencil method are compared to previously proposed spatial Markov models for structured networks. The stencil method is also applied to unstructured networks and can successfully describe the dispersion of particles in this setting. Our conclusion is that both temporal Markov models and spatial Markov models for discrete velocity processes can be valid for a range of model parameters. Moreover, we show that the stencil model can be more efficient in many practical settings and is suited to model dispersion both on structured and unstructured networks.
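
    A hedged sketch of a temporal Markov model for a discrete velocity process: estimate a transition matrix by counting state-to-state transitions along a particle path, which is the ingredient a stencil-type model organizes. The five velocity classes and the generating chain are synthetic assumptions.

```python
import numpy as np

rng = np.random.default_rng(6)
n_states = 5

# Assumed "true" chain over discretized velocity classes.
P_true = rng.dirichlet(np.ones(n_states), size=n_states)

def simulate(P, n_steps, s0=0):
    """Simulate a state sequence from transition matrix P."""
    s, out = s0, []
    for _ in range(n_steps):
        s = rng.choice(n_states, p=P[s])
        out.append(s)
    return np.array(out)

obs = simulate(P_true, 50_000)

# Estimate the transition matrix from observed transitions.
counts = np.zeros((n_states, n_states))
np.add.at(counts, (obs[:-1], obs[1:]), 1.0)
P_hat = counts / counts.sum(axis=1, keepdims=True)
print("max |P_hat - P_true| =", np.abs(P_hat - P_true).max())
```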

  8. SmartShadow models and methods for pervasive computing

    CERN Document Server

    Wu, Zhaohui

    2013-01-01

    SmartShadow: Models and Methods for Pervasive Computing offers a new perspective on pervasive computing with SmartShadow, which is designed to model a user as a personality "shadow" and to model pervasive computing environments as user-centric dynamic virtual personal spaces. Just like human beings' shadows in the physical world, it follows people wherever they go, providing them with pervasive services. The model, methods, and software infrastructure for SmartShadow are presented and an application for smart cars is also introduced. The book can serve as a valuable reference work for resea

  9. A verification system survival probability assessment model test methods

    International Nuclear Information System (INIS)

    Jia Rui; Wu Qiang; Fu Jiwei; Cao Leituan; Zhang Junnan

    2014-01-01

    Subject to the limitations of funding and test conditions, large complex systems are often tested with only a small number of sub-samples. Under single-sample conditions, making an accurate evaluation of performance is important for the reinforcement of complex systems, and the technical maturity of an assessment model can be significantly improved if the model can be experimentally validated. This paper presents a test method for verifying a system survival probability assessment model: sample test results from the test system are used to verify the correctness of the assessment model and of the a priori information. (authors)

  10. Models and estimation methods for clinical HIV-1 data

    Science.gov (United States)

    Verotta, Davide

    2005-12-01

    Clinical HIV-1 data include many individual factors, such as compliance to treatment, pharmacokinetics, variability in respect to viral dynamics, race, sex, income, etc., which might directly influence or be associated with clinical outcome. These factors need to be taken into account to achieve a better understanding of clinical outcome and mathematical models can provide a unifying framework to do so. The first objective of this paper is to demonstrate the development of comprehensive HIV-1 dynamics models that describe viral dynamics and also incorporate different factors influencing such dynamics. The second objective of this paper is to describe alternative estimation methods that can be applied to the analysis of data with such models. In particular, we consider: (i) simple but effective two-stage estimation methods, in which data from each patient are analyzed separately and summary statistics derived from the results, (ii) more complex nonlinear mixed effect models, used to pool all the patient data in a single analysis. Bayesian estimation methods are also considered, in particular: (iii) maximum posterior approximations, MAP, and (iv) Markov chain Monte Carlo, MCMC. Bayesian methods incorporate prior knowledge into the models, thus avoiding some of the model simplifications introduced when the data are analyzed using two-stage methods, or a nonlinear mixed effect framework. We demonstrate the development of the models and the different estimation methods using real AIDS clinical trial data involving patients receiving multiple drugs regimens.
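
    A minimal sketch of the two-stage approach in (i): fit a single-exponential viral-decay model to each patient separately, then summarize the per-patient estimates. The model form, sampling times and noise level are assumptions for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(7)

def decay(t, v0, delta):
    """Assumed single-exponential viral load decline after treatment start."""
    return v0 * np.exp(-delta * t)

t = np.linspace(0.0, 14.0, 8)                    # assumed sampling days
true_deltas = rng.normal(0.45, 0.08, size=12)    # assumed per-patient rates

# Stage 1: independent per-patient fits.
estimates = []
for delta in true_deltas:
    v = decay(t, 1e5, delta) * np.exp(rng.normal(0.0, 0.1, t.size))
    (v0_hat, delta_hat), _ = curve_fit(decay, t, v, p0=(1e5, 0.3))
    estimates.append(delta_hat)

# Stage 2: summary statistics across patients.
estimates = np.array(estimates)
print(f"mean delta = {estimates.mean():.3f}, sd = {estimates.std(ddof=1):.3f}")
```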

  11. Compositions and methods for modeling Saccharomyces cerevisiae metabolism

    DEFF Research Database (Denmark)

    2012-01-01

    The invention provides an in silico model for determining a S. cerevisiae physiological function. The model includes a data structure relating a plurality of S. cerevisiae reactants to a plurality of S. cerevisiae reactions, a constraint set for the plurality of S. cerevisiae reactions, and commands for determining a distribution of flux through the reactions that is predictive of a S. cerevisiae physiological function. A model of the invention can further include a gene database containing information characterizing the associated gene or genes. The invention further provides methods for making an in silico S. cerevisiae model and methods for determining a S. cerevisiae physiological function using a model of the invention.

  12. Models and methods for hot spot safety work

    DEFF Research Database (Denmark)

    Vistisen, Dorte

    2002-01-01

    Hot spot safety work is the task of improving road safety through alterations of the geometrical and environmental characteristics of the existing road network. The presently applied models and methods in hot spot safety work on the Danish road network were developed about two decades ago, when data was more limited and software and statistical methods less developed. The purpose of this thesis is to contribute to improving "State of the art" in Denmark. The basis for the systematic hot spot safety work are the models describing the variation in accident counts on the road network. In the thesis, hierarchical models disaggregated on time are derived. The proposed models are shown to describe variation in accident counts better than the models currently in use in Denmark. The parameters of the models are estimated for the national and regional road network using data from the Road Sector Information system, VIS. No specific accident models

  13. Mean photon number dependent variational method to the Rabi model

    International Nuclear Information System (INIS)

    Liu, Maoxin; Ying, Zu-Jian; Luo, Hong-Gang; An, Jun-Hong

    2015-01-01

    We present a mean photon number dependent variational method, which works well in the whole coupling regime if the photon energy is dominant over the spin-flipping, to evaluate the properties of the Rabi model for both the ground state and excited states. For the ground state, it is shown that the previous approximate methods, the generalized rotating-wave approximation (only working well in the strong coupling limit) and the generalized variational method (only working well in the weak coupling limit), can be recovered in the corresponding coupling limits. The key point of our method is to tailor the merits of these two existing methods by introducing a mean photon number dependent variational parameter. For the excited states, our method yields considerable improvements over the generalized rotating-wave approximation. The variational method proposed could be readily applied to more complex models, for which it is difficult to formulate an analytic formula. (paper)
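
    For reference, the quantum Rabi Hamiltonian these variational schemes approximate is conventionally written (in one common convention, with ħ = 1) as below, where ω is the photon frequency, Δ the level splitting and g the coupling; the regime treated above corresponds to ω dominant over Δ.

```latex
H \;=\; \omega\, a^{\dagger} a \;+\; \frac{\Delta}{2}\,\sigma_z \;+\; g\,\sigma_x \left(a^{\dagger} + a\right)
```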

  14. Rotational Scanning Electron Micrographs (rSEM): A novel and accessible tool to visualize and communicate complex morphology

    Directory of Open Access Journals (Sweden)

    David Koon-Bong Cheung

    2013-09-01

    Full Text Available An accessible workflow is presented to create interactive, rotational scanning electron micrographs (rSEM. These information-rich animations facilitate the study and communication of complex morphological structures exemplified here by male arthropod genitalia. Methods are outlined for the publication of rSEMs on the web or in journal articles as SWF files. Image components of rSEMs were archived in MorphBank to ensure future data access. rSEM represents a promising new addition to the toolkit of a new generation of digital taxonomy.

  15. Physical Model Method for Seismic Study of Concrete Dams

    Directory of Open Access Journals (Sweden)

    Bogdan Roşca

    2008-01-01

    Full Text Available The study of the dynamic behaviour of concrete dams by means of the physical model method is very useful for understanding the failure mechanism of these structures under strong earthquakes. The physical model method consists of two main processes. First, a study model must be designed through a physical modeling process using dynamic modeling theory; the result is a system of equations for dimensioning the physical model. After the construction and instrumentation of the scaled physical model, a structural analysis based on experimental means is performed, and the experimental results are gathered and made available for analysis. Depending on the aim of the research, either an elastic or a failure physical model may be designed. The requirements for constructing an elastic model are easier to fulfil than those for a failure model, but the results obtained provide only limited information. In order to study the behaviour of concrete dams under strong seismic action, failure physical models are required that can accurately simulate the possible opening of joints, sliding between concrete blocks and the cracking of concrete. The design relations for both elastic and failure physical models are based on dimensional analysis and consist of similitude relations among the physical quantities involved in the phenomenon. The use of physical models of large or medium dimensions, as well as their instrumentation, brings great advantages, but this operation involves a large amount of financial, logistic and time resources.
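
    As an illustration of the kind of similitude relations meant here, assume elastic (Cauchy) similitude with length, elasticity and density scale factors λ_L, λ_E and λ_ρ (a textbook case, not the paper's specific design system); the stress, time and acceleration scales then follow as:

```latex
\lambda_{\sigma} = \lambda_{E}, \qquad
\lambda_{t} = \lambda_{L}\sqrt{\lambda_{\rho}/\lambda_{E}}, \qquad
\lambda_{a} = \frac{\lambda_{E}}{\lambda_{\rho}\,\lambda_{L}}
```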

  16. Surgery on spinal epidural metastases (SEM) in renal cell carcinoma: a plea for a new paradigm.

    Science.gov (United States)

    Bakker, Nicolaas A; Coppes, Maarten H; Vergeer, Rob A; Kuijlen, Jos M A; Groen, Rob J M

    2014-09-01

    Prediction models for outcome of decompressive surgical resection of spinal epidural metastases (SEM) have in common that they have been developed for all types of SEM, irrespective of the type of primary tumor. It is our experience in clinical practice, however, that these models often fail to accurately predict outcome in the individual patient. To investigate whether decision making could be optimized by applying tumor-specific prediction models. For the proof of concept, we analyzed patients with SEM from renal cell carcinoma that we have operated on. Retrospective chart analysis 2006 to 2012. Twenty-one consecutive patients with symptomatic SEM of renal cell carcinoma. Predictive factors for survival. Next to established predictive factors for survival, we analyzed the predictive value of the Motzer criteria in these patients. The Motzer criteria comprise a specific and validated risk model for survival in patients with renal cell carcinoma. After multivariable analysis, only Motzer intermediate (hazard ratio [HR] 17.4, 95% confidence interval [CI] 1.82-166, p=.01) and high risk (HR 39.3, 95% CI 3.10-499, p=.005) turned out to be significantly associated with survival in patients with renal cell carcinoma that we have operated on. In this study, we have demonstrated that decision making could have been optimized by implementing the Motzer criteria next to established prediction models. We, therefore, suggest that in future, in patients with SEM from renal cell carcinoma, the Motzer criteria are also taken into account. Copyright © 2014 Elsevier Inc. All rights reserved.

  17. Extending product modeling methods for integrated product development

    DEFF Research Database (Denmark)

    Bonev, Martin; Wörösch, Michael; Hauksdóttir, Dagný

    2013-01-01

    Despite great efforts within the modeling domain, the majority of methods often address the uncommon design situation of an original product development. However, studies illustrate that development tasks are predominantly related to redesigning, improving, and extending already existing products...

  18. Gas/Aerosol partitioning: a simplified method for global modeling

    NARCIS (Netherlands)

    Metzger, S.M.

    2000-01-01

    The main focus of this thesis is the development of a simplified method to routinely calculate gas/aerosol partitioning of multicomponent aerosols and aerosol associated water within global atmospheric chemistry and climate models. Atmospheric aerosols are usually multicomponent mixtures,

  19. Advances in Applications of Hierarchical Bayesian Methods with Hydrological Models

    Science.gov (United States)

    Alexander, R. B.; Schwarz, G. E.; Boyer, E. W.

    2017-12-01

    Mechanistic and empirical watershed models are increasingly used to inform water resource decisions. Growing access to historical stream measurements and data from in-situ sensor technologies has increased the need for improved techniques for coupling models with hydrological measurements. Techniques that account for the intrinsic uncertainties of both models and measurements are especially needed. Hierarchical Bayesian methods provide an efficient modeling tool for quantifying model and prediction uncertainties, including those associated with measurements. Hierarchical methods can also be used to explore spatial and temporal variations in model parameters and uncertainties that are informed by hydrological measurements. We used hierarchical Bayesian methods to develop a hybrid (statistical-mechanistic) SPARROW (SPAtially Referenced Regression On Watershed attributes) model of long-term mean annual streamflow across diverse environmental and climatic drainages in 18 U.S. hydrological regions. Our application illustrates the use of a new generation of Bayesian methods that offer more advanced computational efficiencies than the prior generation. Evaluations of the effects of hierarchical (regional) variations in model coefficients and uncertainties on model accuracy indicate improved prediction accuracies (median of 10-50%), but primarily in humid eastern regions, where model uncertainties are one-third of those in arid western regions. Generally moderate regional variability is observed for most hierarchical coefficients. Accounting for measurement and structural uncertainties, using hierarchical state-space techniques, revealed the effects of spatially-heterogeneous, latent hydrological processes in the "localized" drainages between calibration sites; this improved model precision, with only minor changes in regional coefficients. Our study can inform advances in the use of hierarchical methods with hydrological models to improve their integration with stream

  20. Multifunctional Collaborative Modeling and Analysis Methods in Engineering Science

    Science.gov (United States)

    Ransom, Jonathan B.; Broduer, Steve (Technical Monitor)

    2001-01-01

    Engineers are challenged to produce better designs in less time and for less cost. Hence, to investigate novel and revolutionary design concepts, accurate, high-fidelity results must be assimilated rapidly into the design, analysis, and simulation process. This assimilation should consider diverse mathematical modeling and multi-discipline interactions necessitated by concepts exploiting advanced materials and structures. Integrated high-fidelity methods with diverse engineering applications provide the enabling technologies to assimilate these high-fidelity, multi-disciplinary results rapidly at an early stage in the design. These integrated methods must be multifunctional, collaborative, and applicable to the general field of engineering science and mechanics. Multifunctional methodologies and analysis procedures are formulated for interfacing diverse subdomain idealizations including multi-fidelity modeling methods and multi-discipline analysis methods. These methods, based on the method of weighted residuals, ensure accurate compatibility of primary and secondary variables across the subdomain interfaces. Methods are developed using diverse mathematical modeling (i.e., finite difference and finite element methods) and multi-fidelity modeling among the subdomains. Several benchmark scalar-field and vector-field problems in engineering science are presented with extensions to multidisciplinary problems. Results for all problems presented are in overall good agreement with the exact analytical solution or the reference numerical solution. Based on the results, the integrated modeling approach using the finite element method for multi-fidelity discretization among the subdomains is identified as most robust. The multiple-method approach is advantageous when interfacing diverse disciplines in which each of the method's strengths are utilized. The multifunctional methodology presented provides an effective mechanism by which domains with diverse idealizations are

  1. Turbulence modeling methods for the compressible Navier-Stokes equations

    Science.gov (United States)

    Coakley, T. J.

    1983-01-01

    Turbulence modeling methods for the compressible Navier-Stokes equations, including several zero- and two-equation eddy-viscosity models, are described and applied. Advantages and disadvantages of the models are discussed with respect to mathematical simplicity, conformity with physical theory, and numerical compatibility with methods. A new two-equation model is introduced which shows advantages over other two-equation models with regard to numerical compatibility and the ability to predict low-Reynolds-number transitional phenomena. Calculations of various transonic airfoil flows are compared with experimental results. A new implicit upwind-differencing method is used which enhances numerical stability and accuracy, and leads to rapidly convergent steady-state solutions.

  2. Nonstandard Finite Difference Method Applied to a Linear Pharmacokinetics Model

    Directory of Open Access Journals (Sweden)

    Oluwaseun Egbelowo

    2017-05-01

    Full Text Available We extend the nonstandard finite difference method of solution to the study of pharmacokinetic–pharmacodynamic models. Pharmacokinetic (PK) models are commonly used to predict drug concentrations that drive controlled intravenous (I.V.) transfers (or infusions) and oral transfers, while pharmacokinetic and pharmacodynamic (PD) interaction models are used to provide predictions of drug concentrations affecting the response of these clinical drugs. We structure a nonstandard finite difference (NSFD) scheme for the relevant system of equations which models this pharmacokinetic process. We compare the results obtained to standard methods. The scheme is dynamically consistent and reliable in replicating complex dynamic properties of the relevant continuous models for varying step sizes. This study provides assistance in understanding the long-term behavior of the drug in the system, and validation of the efficiency of the nonstandard finite difference scheme as the method of choice.
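
    As an illustration of the NSFD idea, here is a sketch for a linear one-compartment I.V. infusion model, dC/dt = k0/V - ke*C, using the denominator function phi(h) = (1 - exp(-ke*h))/ke in place of the raw step size; the parameter values are hypothetical and this is not the paper's scheme verbatim.

        # Sketch of a nonstandard finite difference (NSFD) step for a linear
        # one-compartment I.V. infusion model: dC/dt = k0/V - ke*C.
        # The denominator function phi(h) replaces the usual step size h;
        # for this linear model the resulting update is exact for any h.
        import numpy as np

        k0, V, ke = 5.0, 10.0, 0.3     # hypothetical infusion rate, volume, elim. rate
        h, n_steps = 2.0, 24           # deliberately large step size
        phi = (1.0 - np.exp(-ke * h)) / ke   # NSFD denominator function

        C = np.zeros(n_steps + 1)      # drug concentration, C(0) = 0
        for n in range(n_steps):
            C[n + 1] = C[n] + phi * (k0 / V - ke * C[n])

        exact = (k0 / (V * ke)) * (1.0 - np.exp(-ke * h * np.arange(n_steps + 1)))
        print(np.max(np.abs(C - exact)))   # ~1e-16: dynamically consistent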

  3. SELECTION MOMENTS AND GENERALIZED METHOD OF MOMENTS FOR HETEROSKEDASTIC MODELS

    Directory of Open Access Journals (Sweden)

    Constantin ANGHELACHE

    2016-06-01

    Full Text Available In this paper, the authors describe the selection methods for moments and the application of the generalized method of moments (GMM) for heteroskedastic models. The utility of GMM estimators is found in the study of financial market models. The selection criteria for moments are applied for the efficient estimation of GMM for univariate time series with martingale difference errors, similar to those studied so far by Kuersteiner.
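
    A toy sketch of the GMM machinery referred to above: two moment conditions identify the mean and variance of a series, and the estimator minimizes the quadratic form g'Wg; the data and the identity weighting matrix are illustrative only.

        # Toy generalized method of moments (GMM): estimate (mu, sigma2) of a
        # series from the moment conditions E[y - mu] = 0, E[(y - mu)^2 - sigma2] = 0.
        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(1)
        y = rng.normal(2.0, 1.5, size=500)

        def gbar(theta):
            mu, sigma2 = theta
            g = np.column_stack([y - mu, (y - mu) ** 2 - sigma2])
            return g.mean(axis=0)          # sample moment conditions

        def objective(theta, W=np.eye(2)):
            g = gbar(theta)
            return g @ W @ g               # quadratic form g'Wg

        res = minimize(objective, x0=np.array([0.0, 1.0]), method="Nelder-Mead")
        print(res.x)                       # close to (2.0, 2.25)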

  4. Thermal Efficiency Degradation Diagnosis Method Using Regression Model

    International Nuclear Information System (INIS)

    Jee, Chang Hyun; Heo, Gyun Young; Jang, Seok Won; Lee, In Cheol

    2011-01-01

    This paper proposes an idea for thermal efficiency degradation diagnosis in turbine cycles, which is based on turbine cycle simulation under abnormal conditions and a linear regression model. The correlation between the inputs representing degradation conditions (normally unmeasured but intrinsic states) and the simulation outputs (normally measured but superficial states) was analyzed with the linear regression model. The regression models can inversely infer the associated intrinsic state from a superficial state observed at a power plant. The diagnosis method proposed herein consists of three processes: 1) simulations of degradation conditions to obtain the measured states (referred to as the what-if method), 2) development of the linear model correlating intrinsic and superficial states, and 3) determination of an intrinsic state using the superficial states of the current plant and the linear regression model (referred to as the inverse what-if method). The what-if method generates the outputs for inputs including various root causes and/or boundary conditions, whereas the inverse what-if method calculates the inverse matrix given the superficial states, that is, the component degradation modes. The method suggested in this paper was validated using the turbine cycle model for an operating power plant.
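
    A minimal numerical sketch of the what-if / inverse what-if idea, assuming (purely for illustration) that the intrinsic-to-superficial map is linear and that the simulator can be run for prescribed degradation states; all dimensions and names are hypothetical.

        # Sketch of the "what-if" / "inverse what-if" idea: simulate superficial
        # (measured) states Y for prescribed intrinsic (degradation) states X,
        # fit a linear map Y ~ X A, then invert it for an observed measurement.
        import numpy as np

        rng = np.random.default_rng(2)
        A_true = rng.normal(size=(3, 5))          # 3 degradation modes -> 5 sensors
        X = rng.uniform(0, 1, size=(50, 3))       # what-if simulation inputs
        Y = X @ A_true + rng.normal(0, 0.01, (50, 5))   # simulated plant outputs

        A_hat, *_ = np.linalg.lstsq(X, Y, rcond=None)   # regression model (stage 2)

        y_obs = np.array([0.2, 0.5, 0.1]) @ A_true      # current plant measurement
        x_hat = y_obs @ np.linalg.pinv(A_hat)           # inverse what-if (stage 3)
        print(x_hat)   # recovered degradation state, ~ (0.2, 0.5, 0.1)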

  5. 3D Face modeling using the multi-deformable method.

    Science.gov (United States)

    Hwang, Jinkyu; Yu, Sunjin; Kim, Joongrock; Lee, Sangyoun

    2012-09-25

    In this paper, we focus on the problem of the accuracy performance of 3D face modeling techniques using corresponding features in multiple views, which is quite sensitive to feature extraction errors. To solve the problem, we adopt a statistical model-based 3D face modeling approach in a mirror system consisting of two mirrors and a camera. The overall procedure of our 3D facial modeling method has two primary steps: 3D facial shape estimation using a multiple 3D face deformable model, and texture mapping using seamless cloning, which is a type of gradient-domain blending. To evaluate our method's performance, we generate 3D faces of 30 individuals and then carry out two tests: an accuracy test and a robustness test. Our method shows not only highly accurate 3D face shape results when compared with the ground truth, but also robustness to feature extraction errors. Moreover, 3D face rendering results intuitively show that our method is more robust to feature extraction errors than other 3D face modeling methods. An additional contribution of our method is that a wide range of face textures can be acquired by the mirror system. By using this texture map, we generate realistic 3D faces for individuals at the end of the paper.

  6. Quality of life of colostomized persons with and without the use of methods of bowel control

    Directory of Open Access Journals (Sweden)

    Isabel Umbelina Ribeiro Cesaretti

    2010-02-01

    Full Text Available OBJECTIVE: To evaluate and compare the quality of life (QoL) of colostomized people who use and who do not use bowel control methods (BCM), namely colostomy irrigation and the plug system, considering the hypothesis that those who use them have better QoL. Method: The study was carried out in the Heliópolis Hospital Outpatient Clinic, after approval of the project by the Ethics Committee, using the WHOQoL-bref. The sample consisted of two groups: 50 colostomized people using the two BCM and 50 without the BCM. Results: The QoL of the group with BCM was significantly better, in all domains and in general QoL, than that of the group without BCM. Conclusion: The study confirmed the hypothesis that the QoL of the group with BCM is better than that of the group without BCM.

  7. SEM analysis of ionizing radiation effects in linear integrated circuits. [Scanning Electron Microscope

    Science.gov (United States)

    Stanley, A. G.; Gauthier, M. K.

    1977-01-01

    A successful diagnostic technique was developed using a scanning electron microscope (SEM) as a precision tool to determine ionization effects in integrated circuits. Previous SEM methods irradiated the entire semiconductor chip or major areas. The large-area exposure methods do not reveal the exact components which are sensitive to radiation. To locate these sensitive components, a new method was developed, which consisted of successively irradiating selected components on the device chip with equal doses of electrons (10^6 rad(Si)), while the whole device was subjected to representative bias conditions. A suitable device parameter was measured in situ after each successive irradiation with the beam off.

  8. Approximating methods for intractable probabilistic models: Applications in neuroscience

    DEFF Research Database (Denmark)

    Højen-Sørensen, Pedro

    2002-01-01

    This thesis investigates various methods for carrying out approximate inference in intractable probabilistic models. By capturing the relationships between random variables, the framework of graphical models hints at which sets of random variables pose a problem to the inferential step. The appro...

  9. Automated Model Fit Method for Diesel Engine Control Development

    NARCIS (Netherlands)

    Seykens, X.; Willems, F.P.T.; Kuijpers, B.; Rietjens, C.

    2014-01-01

    This paper presents an automated fit for a control-oriented physics-based diesel engine combustion model. This method is based on the combination of a dedicated measurement procedure and structured approach to fit the required combustion model parameters. Only a data set is required that is

  10. Attitude Research in Science Education: Contemporary Models and Methods.

    Science.gov (United States)

    Crawley, Frank E.; Kobala, Thomas R., Jr.

    1994-01-01

    Presents a summary of models and methods of attitude research which are embedded in the theoretical tenets of social psychology and in the broader framework of constructivism. Focuses on the construction of social reality rather than the construction of physical reality. Models include theory of reasoned action, theory of planned behavior, and…

  11. Introduction to Discrete Element Methods: Basics of Contact Force Models

    NARCIS (Netherlands)

    Luding, Stefan

    2008-01-01

    One challenge of today's research is the realistic simulation of granular materials, like sand or powders, consisting of millions of particles. In this article, the discrete element method (DEM), as based on molecular dynamics methods, is introduced. Contact models are at the physical basis of DEM.

  12. Hierarchical modelling for the environmental sciences statistical methods and applications

    CERN Document Server

    Clark, James S

    2006-01-01

    New statistical tools are changing the way in which scientists analyze and interpret data and models. Hierarchical Bayes and Markov Chain Monte Carlo methods for analysis provide a consistent framework for inference and prediction where information is heterogeneous and uncertain, processes are complicated, and responses depend on scale. Nowhere are these methods more promising than in the environmental sciences.

  13. Methods for teaching geometric modelling and computer graphics

    Energy Technology Data Exchange (ETDEWEB)

    Rotkov, S.I.; Faitel'son, Yu. Ts.

    1992-05-01

    This paper considers methods for teaching the methods and algorithms of geometric modelling and computer graphics to programmers, designers and users of CAD and computer-aided research systems. There is a bibliography that can be used to prepare lectures and practical classes. 37 refs., 1 tab.

  14. Vortex Tube Modeling Using the System Identification Method

    Energy Technology Data Exchange (ETDEWEB)

    Han, Jaeyoung; Jeong, Jiwoong; Yu, Sangseok [Chungnam Nat’l Univ., Daejeon (Korea, Republic of); Im, Seokyeon [Tongmyong Univ., Busan (Korea, Republic of)

    2017-05-15

    In this study, a vortex tube system model is developed to predict the temperatures of the hot and the cold sides. The vortex tube model is developed based on the system identification method, and the model utilized in this work to design the vortex tube is of the ARX type (Auto-Regressive with eXogenous inputs). The derived polynomial model is validated against experimental data to verify the overall model accuracy. It is also shown that the derived model passes the stability test. It is confirmed that the derived model closely mimics the physical behavior of the vortex tube in both static and dynamic numerical experiments in which the angle of the low-temperature-side throttle valve is changed, clearly showing temperature separation. These results imply that system-identification-based modeling can be a promising approach for the prediction of complex physical systems, including the vortex tube.
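
    A minimal sketch of a first-order ARX fit by ordinary least squares, standing in for the vortex-tube temperature model described above; the synthetic "plant" and its coefficients are invented for illustration.

        # Minimal least-squares fit of a first-order ARX model
        #   y[t] = a*y[t-1] + b*u[t-1] + e[t]
        import numpy as np

        rng = np.random.default_rng(3)
        n = 300
        u = rng.uniform(0, 1, n)                       # valve angle (input)
        y = np.zeros(n)                                # hot-side temperature (output)
        for t in range(1, n):                          # synthetic "plant"
            y[t] = 0.9 * y[t - 1] + 0.5 * u[t - 1] + rng.normal(0, 0.01)

        Phi = np.column_stack([y[:-1], u[:-1]])        # regressor matrix
        theta, *_ = np.linalg.lstsq(Phi, y[1:], rcond=None)
        print(theta)                                   # ~ [0.9, 0.5]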

  15. Modelling of Airship Flight Mechanics by the Projection Equivalent Method

    OpenAIRE

    Frantisek Jelenciak; Michael Gerke; Ulrich Borgolte

    2015-01-01

    This article describes the projection equivalent method (PEM) as a specific and relatively simple approach for the modelling of aircraft dynamics. By the PEM it is possible to obtain a mathematical model of the aerodynamic forces and moments acting on different kinds of aircraft during flight. A characteristic of the PEM is that, in principle, it provides an acceptable regression model of aerodynamic forces and moments which exhibits reasonable and plausible behaviour from a...

  16. A discontinuous Galerkin method on kinetic flocking models

    OpenAIRE

    Tan, Changhui

    2014-01-01

    We study kinetic representations of flocking models. They arise from agent-based models for self-organized dynamics, such as the Cucker-Smale and Motsch-Tadmor models. We prove flocking behavior for the kinetic descriptions of flocking systems, which indicates a concentration in the velocity variable in infinite time. We propose a discontinuous Galerkin method to treat the asymptotic $\delta$-singularity, and construct a high-order positivity-preserving scheme to solve kinetic flocking systems.

  17. Structural equation modeling in pediatric psychology: overview and review of applications.

    Science.gov (United States)

    Nelson, Timothy D; Aylward, Brandon S; Steele, Ric G

    2008-08-01

    To describe the use of structural equation modeling (SEM) in the Journal of Pediatric Psychology (JPP) and to discuss the usefulness of SEM applications in pediatric psychology research. The use of SEM in JPP between 1997 and 2006 was examined and compared to leading journals in clinical psychology, clinical child psychology, and child development. SEM techniques were used infrequently in pediatric psychology research, although investigations employing these methods are becoming more prevalent. Despite its infrequent use to date, SEM is a potentially useful tool for advancing pediatric psychology research, with a number of advantages over traditional statistical methods.

  18. Projection methods for the numerical solution of Markov chain models

    Science.gov (United States)

    Saad, Youcef

    1989-01-01

    Projection methods for computing stationary probability distributions for Markov chain models are presented. A general projection method is a method which seeks an approximation from a subspace of small dimension to the original problem. Thus, the original matrix problem of size N is approximated by one of dimension m, typically much smaller than N. A particularly successful class of methods based on this principle is that of Krylov subspace methods, which utilize subspaces of the form span(v, Av, ..., A^(m-1)v). These methods are effective in solving linear systems and eigenvalue problems (Lanczos, Arnoldi, ...) as well as nonlinear equations. They can be combined with more traditional iterative methods such as successive overrelaxation, symmetric successive overrelaxation, or with incomplete factorization methods to enhance convergence.
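
    The following sketch applies the Krylov projection idea to a small Markov chain: an Arnoldi basis of span(v, Av, ..., A^(m-1)v) with A = P^T is built, and the Ritz vector for the eigenvalue nearest 1 approximates the stationary distribution. The chain itself is random and the subspace size is an illustrative choice.

        # Sketch of a Krylov projection method: approximate the stationary
        # distribution pi (with pi P = pi) as the Ritz vector of A = P^T
        # associated with the eigenvalue nearest 1.
        import numpy as np

        rng = np.random.default_rng(4)
        N, m = 200, 20                       # chain size N, subspace dimension m << N
        P = rng.uniform(size=(N, N))
        P /= P.sum(axis=1, keepdims=True)    # row-stochastic transition matrix
        A = P.T

        V = np.zeros((N, m + 1))
        H = np.zeros((m + 1, m))
        V[:, 0] = np.ones(N) / np.sqrt(N)
        for j in range(m):                   # Arnoldi: orthonormal Krylov basis
            w = A @ V[:, j]
            for i in range(j + 1):
                H[i, j] = V[:, i] @ w
                w -= H[i, j] * V[:, i]
            H[j + 1, j] = np.linalg.norm(w)
            V[:, j + 1] = w / H[j + 1, j]

        evals, evecs = np.linalg.eig(H[:m, :m])
        k = np.argmin(np.abs(evals - 1.0))   # Ritz pair nearest eigenvalue 1
        pi = np.real(V[:, :m] @ evecs[:, k])
        pi /= pi.sum()                       # approximate stationary distribution
        print(np.linalg.norm(pi @ P - pi))   # small residual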

  19. Filler segmentation of SEM paper images based on mathematical morphology.

    Science.gov (United States)

    Ait Kbir, M; Benslimane, Rachid; Princi, Elisabetta; Vicini, Silvia; Pedemonte, Enrico

    2007-07-01

    Recent developments in microscopy and image processing have made digital measurements on high-resolution images of fibrous materials possible. This helps to gain a better understanding of the structure and other properties of the material at the micro level. In this paper, SEM image segmentation based on mathematical morphology is proposed. In fact, the paper model images (Whatman, Murillo, Watercolor, Newsprint paper) selected in the context of the Euro Mediterranean PaperTech Project have different distributions of fibers and fillers, caused by the presence of SiAl and CaCO3 particles. It is a microscopy challenge to make filler particles in the sheet distinguishable from the other components of the paper surface. This objective is reached here by using suitable structuring elements and mathematical morphology operators.
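
    A plausible sketch of such a morphological filler segmentation using scikit-image; the file name, structuring-element radius, and size threshold are hypothetical choices, not the paper's parameters.

        # Sketch of a morphological filler segmentation for an SEM paper image:
        # a white top-hat suppresses the fibrous background, then a threshold
        # keeps the small bright filler particles.
        import numpy as np
        from skimage import filters, io, morphology

        img = io.imread("sem_paper.tif", as_gray=True)      # hypothetical image file
        selem = morphology.disk(5)                          # structuring element
        tophat = morphology.white_tophat(img, selem)        # fibres suppressed
        mask = tophat > filters.threshold_otsu(tophat)      # filler particle mask
        mask = morphology.remove_small_objects(mask, min_size=10)
        print(mask.mean())                                  # filler area fraction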

  20. X-ray projection microscopy in the SEM

    International Nuclear Information System (INIS)

    Miller, P.

    2003-01-01

    Full text: The projection method for X-ray microscopy is very simple in principle: X-rays from the point-like source pass through the sample to form a projected image at the detector. Magnification is varied by moving the sample position between the source and the detector, with magnification given by (R1+R2)/R1, where R1 is the distance from the source to the sample and R1+R2 is the distance from the source to the detector. The projection X-ray microscope is capable of providing high magnification over a wide range of X-ray energies. Point projection X-ray microscopy was first used in the early 1930s. Resolution of the point projection X-ray microscope is limited in part by the size of the X-ray source. Performance was improved in the late 1950s when magnetic lenses were used to focus an electron beam to form a sub-micron X-ray source (see Cosslett and Nixon 1960). In 1978 Horn and Waltinger developed an X-ray microscope using a scanning electron microscope to produce a fine X-ray source. However, the low current density of electron sources at that time resulted in low X-ray intensities, and this, combined with poor detection efficiency, meant that very long exposure times were needed. The subsequent development of high-brightness field-emission gun-based SEMs, CCD X-ray detectors with much better detection efficiency, new phase retrieval algorithms, automation of SEM operation and the ready availability of powerful desktop computers has allowed the development of a very much more capable laboratory-based X-ray microscope. XRT Limited has produced the X-ray ultraMicroscope (XuM) based on original research and development undertaken by the X-ray Science and Instrumentation Group led by Dr Steve Wilkins at CSIRO Manufacturing and Infrastructure Technology. Figure 2 compares SEM and XuM images of a multi-layer fuel pellet. The SEM image shows only the surface while the XuM image reveals the internal structure of the pellet. The XuM allows X-ray images to be recorded with
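
    The stated geometry reduces to a one-line helper; the numbers below are illustrative only.

        # Projection magnification for a point source: M = (R1 + R2) / R1,
        # where R1 is source-to-sample and R1 + R2 is source-to-detector distance.
        def magnification(r1_mm: float, r2_mm: float) -> float:
            return (r1_mm + r2_mm) / r1_mm

        print(magnification(0.1, 200.0))   # sample 0.1 mm from source: ~2000x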

  1. SEM-EDX--a useful tool for forensic examinations

    International Nuclear Information System (INIS)

    Zadora, G.; Brozek-Mucha, Z.

    2003-01-01

    There are two main aims in the forensic examination of physical evidence. The first aim is comparison of the evidence with the reference material (called discrimination); the task is to find out whether they could have come from the same object. The second aim, when there is no comparative material available, is classification of the evidence sample into a group of objects, taking into account its specific chemical and physical properties. Scanning electron microscopy with energy dispersive X-ray spectrometry (SEM-EDX) is a powerful tool for forensic scientists to classify and discriminate evidence material, because they can simultaneously examine the morphology and the elemental composition of objects. Moreover, the obtained results can be enhanced using methods of chemometric analysis. A few examples of problems related to the classification and discrimination of selected types of microtraces are presented

  2. A method for model identification and parameter estimation

    International Nuclear Information System (INIS)

    Bambach, M; Heinkenschloss, M; Herty, M

    2013-01-01

    We propose and analyze a new method for the identification of a parameter-dependent model that best describes a given system. This problem arises, for example, in the mathematical modeling of material behavior where several competing constitutive equations are available to describe a given material. In this case, the models are differential equations that arise from the different constitutive equations, and the unknown parameters are coefficients in the constitutive equations. One has to determine the best-suited constitutive equations for a given material and application from experiments. We assume that the true model is one of the N possible parameter-dependent models. To identify the correct model and the corresponding parameters, we can perform experiments, where for each experiment we prescribe an input to the system and observe a part of the system state. Our approach consists of two stages. In the first stage, for each pair of models we determine the experiment, i.e. system input and observation, that best differentiates between the two models, and measure the distance between the two models. Then we conduct N(N − 1) or, depending on the approach taken, N(N − 1)/2 experiments and use the result of the experiments as well as the previously computed model distances to determine the true model. We provide sufficient conditions on the model distances and measurement errors which guarantee that our approach identifies the correct model. Given the model, we identify the corresponding model parameters in the second stage. The problem in the second stage is a standard parameter estimation problem and we use a method suitable for the given application. We illustrate our approach on three examples, including one where the models are elliptic partial differential equations with different parameterized right-hand sides and an example where we identify the constitutive equation in a problem from computational viscoplasticity. (paper)

  3. An image segmentation method based on network clustering model

    Science.gov (United States)

    Jiao, Yang; Wu, Jianshe; Jiao, Licheng

    2018-01-01

    Network clustering phenomena are ubiquitous in nature and human society. In this paper, a method involving a network clustering model is proposed for mass segmentation in mammograms. First, the watershed transform is used to divide an image into regions, and features of the image are computed. Then a graph is constructed from the obtained regions and features. The network clustering model is applied to realize clustering of nodes in the graph. Compared with two classic methods, the algorithm based on the network clustering model performs more effectively in experiments.
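
    A rough sketch of the pipeline, assuming scikit-image (>= 0.20; earlier versions expose the RAG under skimage.future.graph) and networkx; greedy modularity maximization stands in for the paper's network clustering model, and all parameters and file names are illustrative.

        # Sketch: watershed regions -> region adjacency graph -> network
        # clustering (greedy modularity communities) -> merged segmentation.
        import numpy as np
        import networkx as nx
        from skimage import io, filters, segmentation, graph

        img = io.imread("mammogram.png", as_gray=True)       # hypothetical image
        edges = filters.sobel(img)
        labels = segmentation.watershed(edges, markers=400, compactness=0.001)
        img3 = np.dstack([img] * 3)                          # RAG expects color
        rag = graph.rag_mean_color(img3, labels)             # nodes = regions

        communities = nx.algorithms.community.greedy_modularity_communities(rag)
        seg = np.zeros_like(labels)
        for cid, nodes in enumerate(communities):            # merge clustered regions
            for n in nodes:
                seg[labels == n] = cid
        print(len(communities), "clusters")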

  4. Alternative methods to model frictional contact surfaces using NASTRAN

    Science.gov (United States)

    Hoang, Joseph

    1992-01-01

    Elongated (slotted) holes have been used extensively for the integration of equipment into Spacelab racks. In the past, this type of interface has been modeled assuming that there is no slippage between contact surfaces, or that there is no load transfer in the direction of the slot. Since the contact surfaces are bolted together, the contact friction provides a load path determined by the normal applied force (bolt preload) and the coefficient of friction. Three alternative methods that utilize spring elements, externally applied couples, and stress-dependent elements are examined to model the contact surfaces. Results of these methods are compared with results obtained from methods that use GAP elements and rigid elements.

  5. Statistical models and methods for reliability and survival analysis

    CERN Document Server

    Couallier, Vincent; Huber-Carol, Catherine; Mesbah, Mounir; Huber -Carol, Catherine; Limnios, Nikolaos; Gerville-Reache, Leo

    2013-01-01

    Statistical Models and Methods for Reliability and Survival Analysis brings together contributions by specialists in statistical theory as they discuss their applications providing up-to-date developments in methods used in survival analysis, statistical goodness of fit, stochastic processes for system reliability, amongst others. Many of these are related to the work of Professor M. Nikulin in statistics over the past 30 years. The authors gather together various contributions with a broad array of techniques and results, divided into three parts - Statistical Models and Methods, Statistical

  6. Two-dimensional fully dynamic SEM simulations of the 2011 Tohoku earthquake cycle

    Science.gov (United States)

    Shimizu, H.; Hirahara, K.

    2014-12-01

    Earthquake cycle simulations have been performed to successfully reproduce the historical earthquake occurrences. Most of them are quasi-dynamic, where inertial effects are approximated using the radiation damping proposed by Rice [1993]. Lapusta et al. [2000, 2009] developed a methodology capable of the detailed description of seismic and aseismic slip and gradual process of earthquake nucleation in the entire earthquake cycle. Their fully dynamic simulations have produced earthquake cycles considerably different from quasi-dynamic ones. Those simulations have, however, never been performed for interplate earthquakes at subduction zones. Many studies showed that on dipping faults such as interplate earthquakes at subduction zones, normal stress is changed during faulting due to the interaction with Earth's free surface. This change in normal stress not only affects the earthquake rupture process, but also causes the residual stress variation that might affect the long-term histories of earthquake cycle. Accounting for such effects, we perform two-dimensional simulations of the 2011 Tohoku earthquake cycle. Our model is in-plane and a laboratory derived rate and state friction acts on a dipping fault embedded on an elastic half-space that reaches the free surface. We extended the spectral element method (SEM) code [Ampuero, 2002] to incorporate a conforming mesh of triangles and quadrangles introduced in Komatitsch et al. [2001], which enables us to analyze the complex geometry with ease. The problem is solved by the methodology almost the same as Kaneko et al. [2011], which is the combined scheme switching in turn a fully dynamic SEM and a quasi-static SEM. The difference is the dip-slip thrust fault in our study in contrast to the vertical strike slip fault. With this method, we can analyze how the dynamic rupture with surface breakout interacting with the free surface affects the long-term earthquake cycle. We discuss the fully dynamic earthquake cycle results

  7. Effects of Sample Size, Estimation Methods, and Model Specification on Structural Equation Modeling Fit Indexes.

    Science.gov (United States)

    Fan, Xitao; Wang, Lin; Thompson, Bruce

    1999-01-01

    A Monte Carlo simulation study investigated the effects on 10 structural equation modeling fit indexes of sample size, estimation method, and model specification. Some fit indexes did not appear to be comparable, and it was apparent that estimation method strongly influenced almost all fit indexes examined, especially for misspecified models. (SLD)

  8. Comparative SEM analysis of nine F22 aligner cleaning strategies

    Directory of Open Access Journals (Sweden)

    Luca Lombardo

    2017-09-01

    Full Text Available Abstract Background The orthodontics industry has paid great attention to the aesthetics of orthodontic appliances, seeking to make them as invisible as possible. There are several advantages to clear aligner systems, including aesthetics, comfort, chairside time reduction, and the fact that they can be removed for meals and oral hygiene procedures. Methods Five patients were each given a series of F22 aligners, each to be worn for 14 days and nights, with the exception of meal and brushing times. Patients were instructed to clean each aligner using a prescribed strategy, and sections of the used aligners were observed under SEM. One grey-scale SEM image was saved per aligner in JPEG format with an 8-bit colour depth, and a total of 45 measurements on the grey scale (“Value” variable) were made. This dataset was analysed statistically via repeated-measures ANOVA to determine the effect of each of the nine cleaning strategies in each of the five patients. Results A statistically significant difference in the efficacy of the cleaning strategies was detected. Specifically, rinsing with water alone was significantly less efficacious, and a combination of cationic detergent solution and ultrasonication was significantly more efficacious than the other methods (p < 0.05). Conclusions Of the nine cleaning strategies examined, only that involving 5 min of ultrasonication at 42 kHz combined with a 0.3% germicidal cationic detergent was observed to be statistically effective at removing the bacterial biofilm from the surface of F22 aligners.

  9. SEM-based overlay measurement between via patterns and buried M1 patterns using high-voltage SEM

    Science.gov (United States)

    Hasumi, Kazuhisa; Inoue, Osamu; Okagawa, Yutaka; Shao, Chuanyu; Leray, Philippe; Halder, Sandip; Lorusso, Gian; Jehoul, Christiane

    2017-03-01

    As the miniaturization of semiconductors continues, the importance of overlay measurement is increasing. In 1998, we measured overlay with an analytical SEM called Miracle Eye, which can output an ultrahigh acceleration voltage. Meanwhile, since 2006, we have been working on SEM-based overlay measurement and developed an overlay measurement function for same-layer patterns using CD-SEM, with which we evaluated the overlay of same-layer patterns after etching. This time, in order to measure overlay after lithography, we evaluated see-through overlay using the high-voltage SEM CV5000 released in October 2016. In a collaboration between imec and Hitachi High-Technologies, we evaluated the repeatability and TIS of SEM-OVL, as well as the correlation between SEM-OVL and Opt-OVL, in the M1@ADI and V0@ADI processes. Repeatability and TIS results are reasonable, and SEM-OVL correlates well with Opt-OVL. From the overlay measurements using the CV5000, we draw the following conclusions. (1) SEM-OVL results of both M1 and V0 at ADI show good correlation to Opt-OVL. (2) High-voltage SEM can prove the measurement capability of small, device-like patterns (less than 1-2 um) that can be placed in the in-die area. (3) "In-die SEM-based overlay" shows the possibility of high-order control of the scanner.
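
    As a simplified illustration of see-through overlay extraction (not Hitachi's algorithm), the offset between pattern centroids can be computed from two segmented masks of the same high-voltage SEM image; the file names and pixel size below are hypothetical.

        # Sketch of a SEM-based overlay estimate: the offset between the centroid
        # of the via pattern and the centroid of the buried M1 pattern, measured
        # from two thresholded masks of the same high-voltage SEM image.
        import numpy as np
        from scipy import ndimage

        via_mask = np.load("via_mask.npy")     # hypothetical binary masks
        m1_mask = np.load("m1_mask.npy")       # (e.g., from the see-through signal)

        cy_v, cx_v = ndimage.center_of_mass(via_mask)
        cy_m, cx_m = ndimage.center_of_mass(m1_mask)
        nm_per_px = 2.0                        # hypothetical pixel size
        print("overlay (x, y) [nm]:",
              (cx_v - cx_m) * nm_per_px, (cy_v - cy_m) * nm_per_px)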

  10. Quantitative sociodynamics stochastic methods and models of social interaction processes

    CERN Document Server

    Helbing, Dirk

    1995-01-01

    Quantitative Sociodynamics presents a general strategy for interdisciplinary model building and its application to a quantitative description of behavioural changes based on social interaction processes. Originally, the crucial methods for the modeling of complex systems (stochastic methods and nonlinear dynamics) were developed in physics but they have very often proved their explanatory power in chemistry, biology, economics and the social sciences. Quantitative Sociodynamics provides a unified and comprehensive overview of the different stochastic methods, their interrelations and properties. In addition, it introduces the most important concepts from nonlinear dynamics (synergetics, chaos theory). The applicability of these fascinating concepts to social phenomena is carefully discussed. By incorporating decision-theoretical approaches a very fundamental dynamic model is obtained which seems to open new perspectives in the social sciences. It includes many established models as special cases, e.g. the log...

  11. Coarse Analysis of Microscopic Models using Equation-Free Methods

    DEFF Research Database (Denmark)

    Marschler, Christian

    -dimensional models. The goal of this thesis is to investigate such high-dimensional multiscale models and extract relevant low-dimensional information from them. Recently developed mathematical tools allow one to reach this goal: a combination of so-called equation-free methods with numerical bifurcation analysis...... using short simulation bursts of computationally expensive complex models. This information is subsequently used to construct bifurcation diagrams that show the parameter dependence of solutions of the system. The methods developed for this thesis have been applied to a wide range of relevant problems.... Applications include the learning behavior in the barn owl’s auditory system, traffic jam formation in an optimal velocity model for circular car traffic, and oscillating behavior of pedestrian groups in a counter-flow through a corridor with a narrow door. The methods not only quantify interesting properties

  12. Quantitative Sociodynamics Stochastic Methods and Models of Social Interaction Processes

    CERN Document Server

    Helbing, Dirk

    2010-01-01

    This new edition of Quantitative Sociodynamics presents a general strategy for interdisciplinary model building and its application to a quantitative description of behavioral changes based on social interaction processes. Originally, the crucial methods for the modeling of complex systems (stochastic methods and nonlinear dynamics) were developed in physics and mathematics, but they have very often proven their explanatory power in chemistry, biology, economics and the social sciences as well. Quantitative Sociodynamics provides a unified and comprehensive overview of the different stochastic methods, their interrelations and properties. In addition, it introduces important concepts from nonlinear dynamics (e.g. synergetics, chaos theory). The applicability of these fascinating concepts to social phenomena is carefully discussed. By incorporating decision-theoretical approaches, a fundamental dynamic model is obtained, which opens new perspectives in the social sciences. It includes many established models a...

  13. Feature evaluation of complex hysteresis smoothing and its practical applications to noisy SEM images.

    Science.gov (United States)

    Suzuki, Kazuhiko; Oho, Eisaku

    2013-01-01

    The quality of a scanning electron microscopy (SEM) image is strongly influenced by noise, a fundamental drawback of the SEM instrument. Complex hysteresis smoothing (CHS) was previously developed for noise removal from SEM images; the noise removal is performed by monitoring and properly processing the amplitude of the SEM signal. As it stands now, CHS is not widely utilized, though it has several advantages for SEM. For example, the resolution of an image processed by CHS is essentially equal to that of the original image. In order to find wide application of the CHS method in microscopy, the characteristics of CHS, which have not been well clarified until now, are evaluated properly. Applying the results of this feature evaluation, the cursor width (CW), which is the sole processing parameter of CHS, is determined more properly using the standard deviation of the noise Nσ. In addition, the disadvantage that CHS cannot remove noise of excessively large amplitude is remedied by a postprocessing step. CHS is successfully applicable to SEM images with various noise amplitudes. © Wiley Periodicals, Inc.
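
    A minimal sketch of the hysteresis-smoothing principle: the output follows the input only when the signal leaves a dead band of cursor width CW, so noise excursions smaller than CW are suppressed. The CW-to-Nσ ratio used below is an illustrative choice, not the paper's rule.

        # Minimal hysteresis smoother: the output level tracks the input only
        # when the signal escapes a dead band of cursor width CW.
        import numpy as np

        def chs(signal: np.ndarray, cw: float) -> np.ndarray:
            out = np.empty_like(signal)
            level = signal[0]
            for i, s in enumerate(signal):
                if s > level + cw / 2:        # signal rose out of the band
                    level = s - cw / 2
                elif s < level - cw / 2:      # signal fell out of the band
                    level = s + cw / 2
                out[i] = level
            return out

        rng = np.random.default_rng(5)
        noise_sigma = 0.1
        line = np.sin(np.linspace(0, 4, 1000)) + rng.normal(0, noise_sigma, 1000)
        smoothed = chs(line, cw=4 * noise_sigma)   # CW chosen from the noise sigma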

  14. Model based methods and tools for process systems engineering

    DEFF Research Database (Denmark)

    Gani, Rafiqul

    Process systems engineering (PSE) provides means to solve a wide range of problems in a systematic and efficient manner. This presentation will give a perspective on model based methods and tools needed to solve a wide range of problems in product-process synthesis-design. These methods and tools...... need to be integrated with work-flows and data-flows for specific product-process synthesis-design problems within a computer-aided framework. The framework therefore should be able to manage knowledge-data, models and the associated methods and tools needed by specific synthesis-design work...... of model based methods and tools within a computer aided framework for product-process synthesis-design will be highlighted....

  15. Quantitative Methods in Supply Chain Management Models and Algorithms

    CERN Document Server

    Christou, Ioannis T

    2012-01-01

    Quantitative Methods in Supply Chain Management presents some of the most important methods and tools available for modeling and solving problems arising in the context of supply chain management. In the context of this book, “solving problems” usually means designing efficient algorithms for obtaining high-quality solutions. The first chapter is an extensive optimization review covering continuous unconstrained and constrained linear and nonlinear optimization algorithms, as well as dynamic programming and discrete optimization exact methods and heuristics. The second chapter presents time-series forecasting methods together with prediction market techniques for demand forecasting of new products and services. The third chapter details models and algorithms for planning and scheduling with an emphasis on production planning and personnel scheduling. The fourth chapter presents deterministic and stochastic models for inventory control with a detailed analysis on periodic review systems and algorithmic dev...

  16. Method and apparatus for modeling, visualization and analysis of materials

    KAUST Repository

    Aboulhassan, Amal

    2016-08-25

    A method, apparatus, and computer readable medium are provided for modeling of materials and visualization of properties of the materials. An example method includes receiving data describing a set of properties of a material, and computing, by a processor and based on the received data, geometric features of the material. The example method further includes extracting, by the processor, particle paths within the material based on the computed geometric features, and geometrically modeling, by the processor, the material using the geometric features and the extracted particle paths. The example method further includes generating, by the processor and based on the geometric modeling of the material, one or more visualizations regarding the material, and causing display, by a user interface, of the one or more visualizations.

  17. Dynamic systems models new methods of parameter and state estimation

    CERN Document Server

    2016-01-01

    This monograph is an exposition of a novel method for solving inverse problems, a method of parameter estimation for time series data collected from simulations of real experiments. These time series might be generated by measuring the dynamics of aircraft in flight, by the function of a hidden Markov model used in bioinformatics or speech recognition or when analyzing the dynamics of asset pricing provided by the nonlinear models of financial mathematics. Dynamic Systems Models demonstrates the use of algorithms based on polynomial approximation which have weaker requirements than already-popular iterative methods. Specifically, they do not require a first approximation of a root vector and they allow non-differentiable elements in the vector functions being approximated. The text covers all the points necessary for the understanding and use of polynomial approximation from the mathematical fundamentals, through algorithm development to the application of the method in, for instance, aeroplane flight dynamic...

  18. Canticum Novum: music without words and words without sound in the thought of Saint Augustine

    Directory of Open Access Journals (Sweden)

    Lorenzo Mammì

    2000-04-01

    Full Text Available In his De Magistro, Saint Augustine places prayer and song on a similar level, alongside the immediately communicative functions of language. His considerations on prayer are grounded in the Christian habits of silent reading, prayer and meditation; those on song, in the equally innovative practice called jubilus, a melody without words designed for the most intense and joyous liturgical moments. Silent prayer and the jubilus are recurring topics in patristic literature, but Augustine deals with them in an original way, drawing from the soundless words of prayer and the wordless sound of the jubilus an inner discourse, addressed not to men but to God.

  19. Higher-order triangular spectral element method with optimized cubature points for seismic wavefield modeling

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Youshan, E-mail: ysliu@mail.iggcas.ac.cn [State Key Laboratory of Lithospheric Evolution, Institute of Geology and Geophysics, Chinese Academy of Sciences, Beijing, 100029 (China); Teng, Jiwen, E-mail: jwteng@mail.iggcas.ac.cn [State Key Laboratory of Lithospheric Evolution, Institute of Geology and Geophysics, Chinese Academy of Sciences, Beijing, 100029 (China); Xu, Tao, E-mail: xutao@mail.iggcas.ac.cn [State Key Laboratory of Lithospheric Evolution, Institute of Geology and Geophysics, Chinese Academy of Sciences, Beijing, 100029 (China); CAS Center for Excellence in Tibetan Plateau Earth Sciences, Beijing, 100101 (China); Badal, José, E-mail: badal@unizar.es [Physics of the Earth, Sciences B, University of Zaragoza, Pedro Cerbuna 12, 50009 Zaragoza (Spain)

    2017-05-01

    The mass-lumped method avoids the cost of inverting the mass matrix and simultaneously maintains spatial accuracy by adopting additional interior integration points, known as cubature points. To date, such points are only known analytically in tensor domains, such as quadrilateral or hexahedral elements. Thus, the diagonal-mass-matrix spectral element method (SEM) in non-tensor domains always relies on numerically computed interpolation points or quadrature points. However, only the cubature points for degrees 1 to 6 are known, which is the reason that we have developed a p-norm-based optimization algorithm to obtain higher-order cubature points. In this way, we obtain and tabulate new cubature points with all positive integration weights for degrees 7 to 9. The dispersion analysis illustrates that the dispersion relation determined from the new optimized cubature points is comparable to that of the mass and stiffness matrices obtained by exact integration. Simultaneously, the Lebesgue constant for the new optimized cubature points indicates its surprisingly good interpolation properties. As a result, such points provide both good interpolation properties and integration accuracy. The Courant–Friedrichs–Lewy (CFL) numbers are tabulated for the conventional Fekete-based triangular spectral element (TSEM), the TSEM with exact integration, and the optimized cubature-based TSEM (OTSEM). A complementary study demonstrates the spectral convergence of the OTSEM. A numerical example conducted on a half-space model demonstrates that the OTSEM improves the accuracy by approximately one order of magnitude compared to the conventional Fekete-based TSEM. In particular, the accuracy of the 7th-order OTSEM is even higher than that of the 14th-order Fekete-based TSEM. Furthermore, the OTSEM produces a result that can compete in accuracy with the quadrilateral SEM (QSEM). The high accuracy of the OTSEM is also tested with a non-flat topography model. In terms of computational

  20. Higher-order triangular spectral element method with optimized cubature points for seismic wavefield modeling

    International Nuclear Information System (INIS)

    Liu, Youshan; Teng, Jiwen; Xu, Tao; Badal, José

    2017-01-01

    The mass-lumped method avoids the cost of inverting the mass matrix and simultaneously maintains spatial accuracy by adopting additional interior integration points, known as cubature points. To date, such points are only known analytically in tensor domains, such as quadrilateral or hexahedral elements. Thus, the diagonal-mass-matrix spectral element method (SEM) in non-tensor domains always relies on numerically computed interpolation points or quadrature points. However, only the cubature points for degrees 1 to 6 are known, which is the reason that we have developed a p-norm-based optimization algorithm to obtain higher-order cubature points. In this way, we obtain and tabulate new cubature points with all positive integration weights for degrees 7 to 9. The dispersion analysis illustrates that the dispersion relation determined from the new optimized cubature points is comparable to that of the mass and stiffness matrices obtained by exact integration. Simultaneously, the Lebesgue constant for the new optimized cubature points indicates its surprisingly good interpolation properties. As a result, such points provide both good interpolation properties and integration accuracy. The Courant–Friedrichs–Lewy (CFL) numbers are tabulated for the conventional Fekete-based triangular spectral element (TSEM), the TSEM with exact integration, and the optimized cubature-based TSEM (OTSEM). A complementary study demonstrates the spectral convergence of the OTSEM. A numerical example conducted on a half-space model demonstrates that the OTSEM improves the accuracy by approximately one order of magnitude compared to the conventional Fekete-based TSEM. In particular, the accuracy of the 7th-order OTSEM is even higher than that of the 14th-order Fekete-based TSEM. Furthermore, the OTSEM produces a result that can compete in accuracy with the quadrilateral SEM (QSEM). The high accuracy of the OTSEM is also tested with a non-flat topography model. In terms of computational

  1. Estimation of pump operational state with model-based methods

    International Nuclear Information System (INIS)

    Ahonen, Tero; Tamminen, Jussi; Ahola, Jero; Viholainen, Juha; Aranto, Niina; Kestilae, Juha

    2010-01-01

    Pumps are widely used in industry, and they account for 20% of the industrial electricity consumption. Since the speed variation is often the most energy-efficient method to control the head and flow rate of a centrifugal pump, frequency converters are used with induction motor-driven pumps. Although a frequency converter can estimate the operational state of an induction motor without external measurements, the state of a centrifugal pump or other load machine is not typically considered. The pump is, however, usually controlled on the basis of the required flow rate or output pressure. As the pump operational state can be estimated with a general model having adjustable parameters, external flow rate or pressure measurements are not necessary to determine the pump flow rate or output pressure. Hence, external measurements could be replaced with an adjustable model for the pump that uses estimates of the motor operational state. Besides control purposes, modelling the pump operation can provide useful information for energy auditing and optimization purposes. In this paper, two model-based methods for pump operation estimation are presented. Factors affecting the accuracy of the estimation methods are analyzed. The applicability of the methods is verified by laboratory measurements and tests in two pilot installations. Test results indicate that the estimation methods can be applied to the analysis and control of pump operation. The accuracy of the methods is sufficient for auditing purposes, and the methods can inform the user if the pump is driven inefficiently.
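
    A simplified sketch of such model-based estimation: the converter's shaft-power and rotational-speed estimates are referred to the nominal speed with the affinity laws, and the flow rate is interpolated from a characteristic QP curve; the curve and operating point below are invented.

        # Sketch of model-based pump flow estimation from the frequency
        # converter's power and speed estimates, using the affinity laws
        # (P ~ n^3, Q ~ n) and a fitted QP characteristic curve.
        import numpy as np

        n_nom = 1450.0                              # nominal speed [rpm]
        Q_curve = np.array([0, 10, 20, 30, 40])     # QP curve: flow [m3/h]
        P_curve = np.array([2.0, 2.6, 3.1, 3.5, 3.8])   # shaft power [kW]

        def estimate_flow(P_est: float, n_est: float) -> float:
            P_at_nom = P_est * (n_nom / n_est) ** 3     # refer power to n_nom
            Q_at_nom = np.interp(P_at_nom, P_curve, Q_curve)
            return Q_at_nom * (n_est / n_nom)           # refer flow back

        print(estimate_flow(P_est=2.9, n_est=1300.0))   # estimated flow [m3/h]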

  2. Quality Index Method (QIM) application on shelf life estimation of skinned fillets of Nile tilapia (Oreochromis niloticus) kept in ice

    Directory of Open Access Journals (Sweden)

    Karoline Mikaelle de Paiva Soares

    2012-12-01

    Full Text Available The objective of this study was to develop the Quality Index Method (QIM) for skinned fillets from farmed Nile tilapia (Oreochromis niloticus) and to apply it in establishing their shelf life. The skinned fillets (120 g on average) were kept in boxes with ice in the proportion of 1:1 (fillet:ice) at an average temperature of 0°C and stored in a refrigeration chamber (4°C) for 18 days. To evaluate freshness during storage, sensory (QIM) and physicochemical (pH and TVB-N) analyses were performed every 72 hours from time zero, in triplicate. The maximum shelf life of the Nile tilapia fillet in ice was estimated at 15 days. The QIM was considered effective in evaluating the freshness of Nile tilapia, since sensory rejection by QIM was determinant in establishing the shelf life.

  3. Applied systems ecology: models, data, and statistical methods

    Energy Technology Data Exchange (ETDEWEB)

    Eberhardt, L L

    1976-01-01

    In this report, systems ecology is largely equated to mathematical or computer simulation modelling. The need for models in ecology stems from the necessity to have an integrative device for the diversity of ecological data, much of which is observational, rather than experimental, as well as from the present lack of a theoretical structure for ecology. Different objectives in applied studies require specialized methods. The best predictive devices may be regression equations, often non-linear in form, extracted from much more detailed models. A variety of statistical aspects of modelling, including sampling, are discussed. Several aspects of population dynamics and food-chain kinetics are described, and it is suggested that the two presently separated approaches should be combined into a single theoretical framework. It is concluded that future efforts in systems ecology should emphasize actual data and statistical methods, as well as modelling.

  4. Modeling Music Emotion Judgments Using Machine Learning Methods.

    Science.gov (United States)

    Vempala, Naresh N; Russo, Frank A

    2017-01-01

    Emotion judgments and five channels of physiological data were obtained from 60 participants listening to 60 music excerpts. Various machine learning (ML) methods were used to model the emotion judgments, including neural networks, linear regression, and random forests. Input for models of perceived emotion consisted of audio features extracted from the music recordings. Input for models of felt emotion consisted of physiological features extracted from the physiological recordings. Models were trained and interpreted with consideration of the classic debate in music emotion research between cognitivists and emotivists. Our models supported a hybrid position wherein emotion judgments were influenced by a combination of perceived and felt emotions. In comparing the different ML approaches used for modeling, we conclude that neural networks were optimal, yielding models that were flexible as well as interpretable. Inspection of a committee machine, encompassing an ensemble of networks, revealed that arousal judgments were predominantly influenced by felt emotion, whereas valence judgments were predominantly influenced by perceived emotion.
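
    As a sketch of one of the ML models mentioned (a random forest rather than the committee of neural networks), using scikit-learn with synthetic stand-ins for the extracted features and ratings:

        # Sketch: a random forest mapping audio features to emotion ratings.
        import numpy as np
        from sklearn.ensemble import RandomForestRegressor
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(6)
        X = rng.normal(size=(60, 12))            # 60 excerpts x 12 audio features
        y = X[:, 0] * 0.7 + rng.normal(0, 0.3, 60)   # synthetic valence ratings

        model = RandomForestRegressor(n_estimators=200, random_state=0)
        print(cross_val_score(model, X, y, cv=5, scoring="r2").mean())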

  5. Does Your SEM Really Tell the Truth?-How Would You Know? Part 4: Charging and its Mitigation.

    Science.gov (United States)

    Postek, Michael T; Vladár, András E

    2015-01-01

    This is the fourth part of a series of tutorial papers discussing various causes of measurement uncertainty in scanned particle beam instruments, and some of the solutions researched and developed at NIST and other research institutions. Scanned particle beam instruments, especially the scanning electron microscope (SEM), have gone through a tremendous evolution to become indispensable tools for many and diverse scientific and industrial applications. These improvements have significantly enhanced their performance and made them far easier to operate. But the ease of operation has also fostered operator complacency. In addition, the user-friendliness has reduced the apparent need for extensive operator training. Unfortunately, this has led to the idea that the SEM is just another expensive "digital camera" or another peripheral device connected to a computer, and that all of the problems in obtaining good quality images and data have been solved. Hence, one using these instruments may be lulled into thinking that all of the potential pitfalls have been fully eliminated, and into believing that everything one sees on the micrograph is always correct. But, as described in this and the earlier papers, this may not be the case. Care must always be taken when reliable quantitative data are being sought. The first paper in this series discussed some of the issues related to signal generation in the SEM, including instrument calibration, electron beam-sample interactions and the need for physics-based modeling to understand the actual image formation mechanisms and to properly interpret SEM images. The second paper discussed another major issue confronting the microscopist: specimen contamination and methods to eliminate it. The third paper discussed mechanical vibration and stage drift and some useful solutions to mitigate the problems caused by them; here, in this fourth contribution, the issues related to specimen "charging" and its mitigation are discussed relative to

  6. FIB and MIP: understanding nanoscale porosity in molecularly imprinted polymers via 3D FIB/SEM tomography.

    Science.gov (United States)

    Neusser, G; Eppler, S; Bowen, J; Allender, C J; Walther, P; Mizaikoff, B; Kranz, C

    2017-10-05

    We present combined focused ion beam/scanning electron beam (FIB/SEM) tomography as an innovative method for differentiating and visualizing the distribution and connectivity of pores within molecularly imprinted polymers (MIPs) and non-imprinted control polymers (NIPs). FIB/SEM tomography is used in cell biology for elucidating three-dimensional structures such as organelles, but has not yet been extensively applied for visualizing the heterogeneity of nanoscopic pore networks, interconnectivity, and tortuosity in polymers. To the best of our knowledge, the present study is the first application of this strategy for analyzing the nanoscale porosity of MIPs. MIPs imprinted for propranolol - and the corresponding NIPs - were investigated, establishing FIB/SEM tomography as a viable future strategy complementing conventional isotherm studies. For visualizing and understanding the properties of pore networks in detail, polymer particles were stained with osmium tetroxide (OsO4) vapor and embedded in epoxy resin. Staining with OsO4 provides excellent contrast during high-resolution SEM imaging. After optimizing the threshold to discriminate between the stained polymer matrix and the pores filled with epoxy resin, a 3D model of the sampled volume may be established for deriving not only the pore volume and pore surface area, but also for visualizing the interconnectivity and tortuosity of the pores within the sampled polymer volume. Detailed studies using different types of cross-linkers and the effect of hydrolysis on the resulting polymer properties have been carried out. Comparing MIP and NIP, it could be unambiguously shown that the interconnectivity of the visualized pores in MIPs is significantly higher than in the non-imprinted polymer, and that the pore volume and pore area are 34% and approximately 35% higher within the MIP matrix. This confirms that the templating process not only induces selective binding sites, but indeed also affects the physical properties of such
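
    Once the stack is segmented, the quoted metrics can be sketched with standard tools; the input file, voxel size, and the use of marching cubes for the surface area are assumptions for illustration, not the authors' workflow.

        # Sketch of pore-network metrics from a segmented FIB/SEM stack
        # (True = pore): volume fraction, connectivity, and surface area.
        import numpy as np
        from scipy import ndimage
        from skimage import measure

        pores = np.load("fibsem_pores.npy")            # hypothetical 3D boolean array
        voxel = 10e-9                                  # hypothetical voxel size [m]

        porosity = pores.mean()                        # pore volume fraction
        labels, n_components = ndimage.label(pores)    # connected pore clusters
        verts, faces, *_ = measure.marching_cubes(pores.astype(float), level=0.5,
                                                  spacing=(voxel,) * 3)
        area = measure.mesh_surface_area(verts, faces) # total pore surface area [m2]
        print(porosity, n_components, area)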

  7. Improved Cell Culture Method for Growing Contracting Skeletal Muscle Models

    Science.gov (United States)

    Marquette, Michele L.; Sognier, Marguerite A.

    2013-01-01

    An improved method for culturing immature muscle cells (myoblasts) into a mature skeletal muscle overcomes some of the notable limitations of prior culture methods. The development of the method is a major advance in tissue engineering in that, for the first time, a cell-based model spontaneously fuses and differentiates into masses of highly aligned, contracting myotubes. This method enables (1) the construction of improved two-dimensional (monolayer) skeletal muscle test beds; (2) development of contracting three-dimensional tissue models; and (3) improved transplantable tissues for biomedical and regenerative medicine applications. With adaptation, this method also offers potential application for production of other tissue types (e.g., bone and cardiac) from corresponding precursor cells.

  8. Modeling of piezoelectric devices with the finite volume method.

    Science.gov (United States)

    Bolborici, Valentin; Dawson, Francis; Pugh, Mary

    2010-07-01

    A partial differential equation (PDE) model for the dynamics of a thin piezoelectric plate in an electric field is presented. This PDE model is discretized via the finite volume method (FVM), resulting in a system of coupled ordinary differential equations. A static analysis and an eigenfrequency analysis are performed, and the results are compared with those provided by a commercial finite element method (FEM) package. We find that fewer degrees of freedom are needed with the FVM model to reach a specified degree of accuracy. This suggests that the FVM model, which also has the advantage of an intuitive interpretation in terms of electrical circuits, may be a better choice in control situations.
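
    The FVM workflow described here, semi-discretizing a PDE into a coupled ODE system that can then be analysed or integrated in time, can be illustrated on a much simpler problem. A minimal sketch for a 1D diffusion equation (an illustrative stand-in, not the coupled electromechanical plate equations of the paper):

        import numpy as np
        from scipy.integrate import solve_ivp

        # u_t = D*u_xx on [0,1] with zero-flux boundaries, finite volume form:
        # du_i/dt = -(flux_{i+1/2} - flux_{i-1/2}) / dx
        N, D = 50, 1e-3
        dx = 1.0 / N
        x = (np.arange(N) + 0.5) * dx          # cell centres

        def rhs(t, u):
            flux = np.zeros(N + 1)             # fluxes at the cell faces
            flux[1:-1] = -D * np.diff(u) / dx  # interior faces; boundary flux = 0
            return -np.diff(flux) / dx         # per-cell conservation balance

        u0 = np.exp(-100 * (x - 0.5) ** 2)     # initial profile
        sol = solve_ivp(rhs, (0.0, 10.0), u0, method="BDF")
        print(sol.y[:, -1].sum() * dx)         # total "mass" is conserved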

  9. Methods and models in mathematical biology deterministic and stochastic approaches

    CERN Document Server

    Müller, Johannes

    2015-01-01

    This book developed from classes in mathematical biology taught by the authors over several years at the Technische Universität München. The main themes are modeling principles, mathematical principles for the analysis of these models, and model-based analysis of data. The key topics of modern biomathematics are covered: ecology, epidemiology, biochemistry, regulatory networks, neuronal networks, and population genetics. A variety of mathematical methods are introduced, ranging from ordinary and partial differential equations to stochastic graph theory and branching processes. A special emphasis is placed on the interplay between stochastic and deterministic models.

  10. Theory of model Hamiltonians and method of functional integration

    International Nuclear Information System (INIS)

    Popov, V.N.

    1990-01-01

    Results on the application of the functional integration method to statistical physics systems with the Dicke and Bardeen-Cooper-Schrieffer (BCS) model Hamiltonians are presented. Functional integral representations of the statistical sums of these models are obtained. Asymptotic formulae (in the N → ∞ thermodynamic limit) for the statistical sums of various modifications of the Dicke model, as well as for the Green functions and the collective spectrum of Bose excitations, are rigorously proved. Analogous results, without rigorous substantiation, are obtained for the statistical sums and the Bose-excitation spectrum of the BCS model. 21 refs

  11. Minimal resin embedding of multicellular specimens for targeted FIB-SEM imaging.

    Science.gov (United States)

    Schieber, Nicole L; Machado, Pedro; Markert, Sebastian M; Stigloher, Christian; Schwab, Yannick; Steyer, Anna M

    2017-01-01

    Correlative light and electron microscopy (CLEM) is a powerful tool to perform ultrastructural analysis of targeted tissues or cells. The large field of view of the light microscope (LM) enables quick and efficient surveys of the whole specimen. It is also compatible with live imaging, giving access to functional assays. CLEM protocols take advantage of these features to efficiently retrace the position of targeted sites when switching from one modality to the other. They most often rely on anatomical cues that are visible both by light and electron microscopy. We present here a simple workflow where multicellular specimens are embedded in minimal amounts of resin, exposing their surface topology so that it can be imaged by scanning electron microscopy (SEM). LM and SEM both benefit from a large field of view that can cover whole model organisms. As a result, targeting specific anatomic locations by focused ion beam-SEM (FIB-SEM) tomography becomes straightforward. We illustrate this application on three different model organisms used in our laboratory: the zebrafish embryo Danio rerio, the marine worm Platynereis dumerilii, and the dauer larva of the nematode Caenorhabditis elegans. Here we focus on the experimental steps to reduce the amount of resin covering the samples and to image the specimens inside an FIB-SEM. We expect this approach to have widespread applications for volume electron microscopy on multiple model organisms. Copyright © 2017 Elsevier Inc. All rights reserved.

  12. Interactive Modelling of Shapes Using the Level-Set Method

    DEFF Research Database (Denmark)

    Bærentzen, Jakob Andreas; Christensen, Niels Jørgen

    2002-01-01

    In this paper, we propose a technique for intuitive, interactive modelling of 3D shapes. The technique is based on the Level-Set Method (LSM), which has the virtue of easily handling changes to the topology of the represented solid. Furthermore, this method also leads to sculpting operations which are suitable for shape modelling. However, normally these would result in tools that would affect the entire model. To facilitate local changes to the model, we introduce a windowing scheme which constrains the LSM to affect only a small part of the model. The LSM-based sculpting tools have been incorporated in our sculpting system, which also includes facilities for volumetric CSG and several techniques for visualization.

  13. A statistical method for discriminating between alternative radiobiological models

    International Nuclear Information System (INIS)

    Kinsella, I.A.; Malone, J.F.

    1977-01-01

    Radiobiological models assist understanding of the development of radiation damage, and may provide a basis for extrapolating dose-effect curves from high to low dose regions. Many models have been proposed, such as multitarget and its modifications, enzymatic models, and those with a quadratic dose-response relationship (i.e., αD + βD² forms). It is difficult to distinguish between these because the statistical techniques used are almost always limited, in that one method can rarely be applied to the whole range of models. A general statistical procedure for parameter estimation (the maximum likelihood method) has been found applicable to a wide range of radiobiological models. The curve parameters are estimated using a computerised search that continues until the set of values most likely to fit the data is obtained. When the search is complete, two procedures are carried out. First, a goodness-of-fit test is applied which examines the applicability of an individual model to the data. Secondly, an index is derived which provides an indication of the adequacy of any model compared with alternative models. Thus the models may be ranked according to how well they fit the data. For example, with one set of data, multitarget types were found to be more suitable than quadratic types (αD + βD²). This method should be of assistance in evaluating various models. It may also be profitably applied to the selection of the most appropriate model to use when it is necessary to extrapolate from high to low doses
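
    A minimal sketch of the procedure on synthetic data, assuming a binomial likelihood for cell survival: the linear-quadratic model S(D) = exp(-(αD + βD²)) and a single-hit exponential model are both fitted by maximum likelihood and then ranked; AIC stands in for the paper's (unspecified) adequacy index:

        import numpy as np
        from scipy.optimize import minimize

        # Synthetic data: k cells surviving out of n plated at doses D (Gy)
        D = np.array([0.0, 1.0, 2.0, 4.0, 6.0, 8.0])
        n = np.full(D.shape, 1000)
        true_s = np.exp(-(0.2 * D + 0.03 * D ** 2))
        k = np.random.default_rng(1).binomial(n, true_s)

        def neg_log_lik(params, model):
            s = np.clip(model(D, *params), 1e-12, 1 - 1e-12)
            return -np.sum(k * np.log(s) + (n - k) * np.log1p(-s))

        lq = lambda D, a, b: np.exp(-(a * D + b * D ** 2))   # alpha*D + beta*D^2
        sh = lambda D, a: np.exp(-a * D)                     # single-hit model

        ranking = {}
        for name, model, x0 in [("linear-quadratic", lq, [0.1, 0.01]),
                                ("single-hit", sh, [0.1])]:
            res = minimize(neg_log_lik, x0, args=(model,), method="Nelder-Mead")
            ranking[name] = 2 * len(x0) + 2 * res.fun        # AIC
        print(sorted(ranking.items(), key=lambda kv: kv[1])) # lower ranks first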

  14. Modeling Water Quality Parameters Using Data-driven Methods

    Directory of Open Access Journals (Sweden)

    Shima Soleimani

    2017-02-01

    Full Text Available Introduction: Surface water bodies are the most easily available water resources. Increased use and wastewater discharge cause drastic changes in surface water quality. The importance of water quality in these most vulnerable and important water supply resources is absolutely clear. Unfortunately, in recent years, because of growing city populations, economic development, and increasing industrial production, the entry of pollutants into water bodies has increased. Water quality parameters express the physical, chemical, and biological features of water, so water quality monitoring is more important than ever. Each use of water, such as agriculture, drinking, industry, and aquaculture, requires water of a particular quality. On the other hand, exact estimation of the concentrations of water quality parameters is significant. Material and Methods: In this research, two input variable selection methods (namely, correlation coefficient and principal component analysis) were first applied to select the model inputs. Data processing consists of three steps: (1) examining the data, (2) identifying the input data that affect the output data, and (3) selecting the training and testing data. A Genetic Algorithm-Least Squares Support Vector Regression (GA-LSSVR) algorithm was developed to model the water quality parameters. In the LSSVR method, it is assumed that the relationship between input and output variables is nonlinear, but a nonlinear mapping can create a space, named the feature space, in which the relationship between input and output variables is linear. The developed algorithm is able to maximize the accuracy of the LSSVR method by automatically tuning the LSSVR parameters. The genetic algorithm (GA) is an evolutionary algorithm that can automatically find the optimal coefficients of the Least Squares Support Vector Regression (LSSVR). The GA-LSSVR algorithm was employed to
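
    A minimal sketch of the LSSVR core described above (RBF kernel, dual linear system), on synthetic stand-in data; a plain random search over the kernel width sigma and regularization gamma stands in for the genetic algorithm used in the paper:

        import numpy as np

        def rbf(A, B, sigma):
            d2 = np.sum((A[:, None, :] - B[None, :, :]) ** 2, axis=2)
            return np.exp(-d2 / (2 * sigma ** 2))

        def lssvr_fit(X, y, sigma, gamma):
            # LS-SVR dual system: [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y]
            n = len(y)
            K = rbf(X, X, sigma)
            A = np.block([[np.zeros((1, 1)), np.ones((1, n))],
                          [np.ones((n, 1)), K + np.eye(n) / gamma]])
            sol = np.linalg.solve(A, np.concatenate([[0.0], y]))
            return sol[0], sol[1:]                       # bias b, weights alpha

        def lssvr_predict(X_train, b, alpha, sigma, X_new):
            return rbf(X_new, X_train, sigma) @ alpha + b

        rng = np.random.default_rng(6)                    # synthetic data
        X = rng.uniform(0, 1, (80, 2))
        y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2 + rng.normal(0, 0.05, 80)

        best = (np.inf, None)                             # random search, not GA
        for _ in range(200):
            sigma, gamma = rng.uniform(0.05, 2.0), 10 ** rng.uniform(-1, 4)
            b, alpha = lssvr_fit(X[:60], y[:60], sigma, gamma)
            mse = np.mean((lssvr_predict(X[:60], b, alpha, sigma, X[60:]) - y[60:]) ** 2)
            if mse < best[0]:
                best = (mse, (sigma, gamma))
        print(best)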

  15. Hematite/silver nanoparticle bilayers on mica--AFM, SEM and streaming potential studies.

    Science.gov (United States)

    Morga, Maria; Adamczyk, Zbigniew; Oćwieja, Magdalena; Bielańska, Elżbieta

    2014-06-15

    Bilayers of hematite/silver nanoparticles were obtained in a self-assembly process and thoroughly characterized using scanning electron microscopy (SEM), atomic force microscopy (AFM), and in situ streaming potential measurements. The hematite nanoparticles, forming a supporting layer, were 22 nm in diameter, exhibiting an isoelectric point at pH 8.9. The silver nanoparticles, used to obtain an external layer, were 29 nm in diameter and remained negative within the pH range 3 to 11. In order to investigate the particle deposition, mica sheets were used as a model solid substrate. The coverage of the supporting layer was adjusted by changing the bulk concentration of the hematite suspension and the deposition time. Afterward, silver nanoparticle monolayers of controlled coverage were deposited under diffusion-controlled transport. The coverage of the bilayers was determined by direct enumeration of deposited particles from SEM micrographs and AFM images. Additionally, the formation of the hematite/silver bilayers was investigated by streaming potential measurements carried out under in situ conditions. The effects of the mica substrate and of the supporting-layer coverage on the zeta potential of the bilayers were systematically studied. It was established that for coverage exceeding 0.20, the zeta potential of the bilayers was independent of the substrate and the supporting-layer coverage. This behavior was theoretically interpreted in terms of the 3D electrokinetic model. Besides their significance for basic science, these measurements allowed the development of a robust method for preparing nanoparticle bilayers of controlled properties, having potential applications in catalytic processes. Copyright © 2014 Elsevier Inc. All rights reserved.

  16. Statistical learning modeling method for space debris photometric measurement

    Science.gov (United States)

    Sun, Wenjing; Sun, Jinqiu; Zhang, Yanning; Li, Haisen

    2016-03-01

    Photometric measurement is an important way to identify space debris, but present photometric measurement methods impose many constraints on the star image and require complex image processing. To address these problems, a statistical learning modeling method for space debris photometric measurement is proposed, based on the global consistency of the star image; the statistical information of star images is used to eliminate the measurement noise. First, the known stars in the star image are divided into training stars and testing stars. Then, the training stars are used to fit the parameters of the photometric measurement model by least squares, and the testing stars are used to calculate the measurement accuracy of the photometric measurement model. Experimental results show that the accuracy of the proposed photometric measurement model is about 0.1 magnitudes.
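
    A minimal sketch of the train/test scheme described in this record, assuming a simple linear photometric model between instrumental and catalogue magnitudes fitted by least squares; the star data are synthetic:

        import numpy as np

        rng = np.random.default_rng(2)
        catalog_mag = rng.uniform(8.0, 14.0, 60)                  # known stars
        instr_mag = catalog_mag + 2.3 + rng.normal(0, 0.05, 60)   # zero point + noise

        # Divide the known stars on the image into training and testing stars
        idx = rng.permutation(60)
        train, test = idx[:40], idx[40:]

        # Least squares fit of the measurement model: catalog ~ c1*instr + c0
        c1, c0 = np.polyfit(instr_mag[train], catalog_mag[train], 1)

        # Measurement accuracy evaluated on the testing stars
        pred = c1 * instr_mag[test] + c0
        rms = np.sqrt(np.mean((pred - catalog_mag[test]) ** 2))
        print(f"test RMS = {rms:.3f} mag")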

  17. Numerical modeling of spray combustion with an advanced VOF method

    Science.gov (United States)

    Chen, Yen-Sen; Shang, Huan-Min; Shih, Ming-Hsin; Liaw, Paul

    1995-01-01

    This paper summarizes the technical development and validation of a multiphase computational fluid dynamics (CFD) numerical method using the volume-of-fluid (VOF) model and a Lagrangian tracking model, which can be employed to analyze general multiphase flow problems with free surfaces. The gas-liquid interface mass, momentum and energy conservation relationships are modeled by continuum surface mechanisms. A new solution method is developed such that the present VOF model can be applied for all-speed flow regimes. The objectives of the present study are to develop and verify the fractional volume-of-fluid cell partitioning approach within a predictor-corrector algorithm and to demonstrate the effectiveness of the present approach by simulating benchmark problems including laminar impinging jets, shear coaxial jet atomization and shear coaxial spray combustion flows.

  18. Methods of mathematical modelling continuous systems and differential equations

    CERN Document Server

    Witelski, Thomas

    2015-01-01

    This book presents mathematical modelling and the integrated process of formulating sets of equations to describe real-world problems. It describes methods for obtaining solutions of challenging differential equations stemming from problems in areas such as chemical reactions, population dynamics, mechanical systems, and fluid mechanics. Chapters 1 to 4 cover essential topics in ordinary differential equations, transport equations and the calculus of variations that are important for formulating models. Chapters 5 to 11 then develop more advanced techniques including similarity solutions, matched asymptotic expansions, multiple scale analysis, long-wave models, and fast/slow dynamical systems. Methods of Mathematical Modelling will be useful for advanced undergraduate or beginning graduate students in applied mathematics, engineering and other applied sciences.

  19. Curve fitting methods for solar radiation data modeling

    Energy Technology Data Exchange (ETDEWEB)

    Karim, Samsul Ariffin Abdul, E-mail: samsul-ariffin@petronas.com.my; Singh, Balbir Singh Mahinder, E-mail: balbir@petronas.com.my [Department of Fundamental and Applied Sciences, Faculty of Sciences and Information Technology, Universiti Teknologi PETRONAS, Bandar Seri Iskandar, 31750 Tronoh, Perak Darul Ridzuan (Malaysia)

    2014-10-24

    This paper studies the use of several types of curve fitting methods to smooth global solar radiation data. After the data have been fitted using the curve fitting methods, a mathematical model of global solar radiation is developed. The error measurement was calculated using goodness-of-fit statistics such as the root mean square error (RMSE) and the value of R². The best fitting methods will be used as a starting point for the construction of a mathematical model of the solar radiation received at Universiti Teknologi PETRONAS (UTP), Malaysia. Numerical results indicate that Gaussian fitting and sine fitting (both with two terms) give better results compared with the other fitting methods.

  20. Curve fitting methods for solar radiation data modeling

    Science.gov (United States)

    Karim, Samsul Ariffin Abdul; Singh, Balbir Singh Mahinder

    2014-10-01

    This paper studies the use of several types of curve fitting methods to smooth global solar radiation data. After the data have been fitted using the curve fitting methods, a mathematical model of global solar radiation is developed. The error measurement was calculated using goodness-of-fit statistics such as the root mean square error (RMSE) and the value of R². The best fitting methods will be used as a starting point for the construction of a mathematical model of the solar radiation received at Universiti Teknologi PETRONAS (UTP), Malaysia. Numerical results indicate that Gaussian fitting and sine fitting (both with two terms) give better results compared with the other fitting methods.

  1. Curve fitting methods for solar radiation data modeling

    International Nuclear Information System (INIS)

    Karim, Samsul Ariffin Abdul; Singh, Balbir Singh Mahinder

    2014-01-01

    This paper studies the use of several types of curve fitting methods to smooth global solar radiation data. After the data have been fitted using the curve fitting methods, a mathematical model of global solar radiation is developed. The error measurement was calculated using goodness-of-fit statistics such as the root mean square error (RMSE) and the value of R². The best fitting methods will be used as a starting point for the construction of a mathematical model of the solar radiation received at Universiti Teknologi PETRONAS (UTP), Malaysia. Numerical results indicate that Gaussian fitting and sine fitting (both with two terms) give better results compared with the other fitting methods
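
    A minimal sketch of the fitting and error measurement used in this record, on synthetic daily irradiance data; the two-term Gaussian model mirrors the form reported as best-fitting, and all numbers are illustrative:

        import numpy as np
        from scipy.optimize import curve_fit

        def gauss2(t, a1, b1, c1, a2, b2, c2):     # two-term Gaussian model
            return (a1 * np.exp(-((t - b1) / c1) ** 2) +
                    a2 * np.exp(-((t - b2) / c2) ** 2))

        t = np.linspace(6, 19, 40)                 # hour of day
        y = (900 * np.exp(-((t - 13) / 3.5) ** 2)  # synthetic irradiance, W/m^2
             + np.random.default_rng(3).normal(0, 20, t.size))

        popt, _ = curve_fit(gauss2, t, y, p0=[800, 12, 3, 100, 15, 3], maxfev=20000)

        resid = y - gauss2(t, *popt)
        rmse = np.sqrt(np.mean(resid ** 2))        # goodness-of-fit statistics
        r2 = 1 - np.sum(resid ** 2) / np.sum((y - y.mean()) ** 2)
        print(f"RMSE = {rmse:.1f} W/m^2, R^2 = {r2:.3f}")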

  2. Modeling Enzymatic Transition States by Force Field Methods

    DEFF Research Database (Denmark)

    Hansen, Mikkel Bo; Jensen, Hans Jørgen Aagaard; Jensen, Frank

    2009-01-01

    The SEAM method, which models a transition structure as a minimum on the seam of two diabatic surfaces represented by force field functions, has been used to generate 20 transition structures for the decarboxylation of orotidine by the orotidine-5'-monophosphate decarboxylase enzyme. The dependence ... by various electronic structure methods, where part of the enzyme is represented by a force field description and the effects of the solvent are represented by a continuum model. The relative energies vary by several hundreds of kJ/mol between the transition structures, and tests showed that a large part

  3. Evaluation of radiological processes in the Ternopil region by the box model method

    Directory of Open Access Journals (Sweden)

    І.В. Матвєєва

    2006-02-01

    Full Text Available Flows of the radionuclide Sr-90 in the ecosystem of Kotsubinchiky village, Ternopil oblast, were analyzed. A block scheme of the ecosystem and its mathematical model were constructed using the box model method. This allowed us to evaluate how internal irradiation dose loads form for various population groups (working people, retirees, children) and to forecast the dynamics of these loads over the years following the Chernobyl accident.

  4. Deterministic operations research models and methods in linear optimization

    CERN Document Server

    Rader, David J

    2013-01-01

    Uniquely blends mathematical theory and algorithm design for understanding and modeling real-world problems. Optimization modeling and algorithms are key components of problem-solving across various fields of research, from operations research and mathematics to computer science and engineering. Addressing the importance of the algorithm design process, Deterministic Operations Research focuses on the design of solution methods for both continuous and discrete linear optimization problems. The result is a clear-cut resource for understanding three cornerstones of deterministic operations research

  5. Regression modeling methods, theory, and computation with SAS

    CERN Document Server

    Panik, Michael

    2009-01-01

    Regression Modeling: Methods, Theory, and Computation with SAS provides an introduction to a diverse assortment of regression techniques using SAS to solve a wide variety of regression problems. The author fully documents the SAS programs and thoroughly explains the output produced by the programs.The text presents the popular ordinary least squares (OLS) approach before introducing many alternative regression methods. It covers nonparametric regression, logistic regression (including Poisson regression), Bayesian regression, robust regression, fuzzy regression, random coefficients regression,

  6. Semantic Web and Semantic Web services

    OpenAIRE

    Marquez Solis, Santiago

    2007-01-01

    From this Final Degree Project we want to study the evolution of the current Web to the Semantic Web.

  7. Benefits from bremsstrahlung distribution evaluation to get unknown information from specimen in SEM and TEM

    Science.gov (United States)

    Eggert, F.; Camus, P. P.; Schleifer, M.; Reinauer, F.

    2018-01-01

    The energy-dispersive X-ray spectrometer (EDS or EDX) is a commonly used device to characterise the composition of investigated material in scanning and transmission electron microscopes (SEM and TEM). One major benefit compared to wavelength-dispersive X-ray spectrometers (WDS) is that EDS systems collect the entire spectrum simultaneously. Therefore, not only are all emitted characteristic X-ray lines in the spectrum, but the complete bremsstrahlung distribution is included as well. It is possible to get information about the specimen even from this radiation, which is usually perceived more as a disturbing background. This is done by comparing theoretical model knowledge about bremsstrahlung excitation and absorption in the specimen with the actually measured spectrum. The core aim of this investigation is to present a method for better bremsstrahlung fitting in cases of unknown geometry, by variation of the geometry parameters, and to utilise this knowledge for the evaluation of the characteristic radiation as well. A method is described which allows the parameterisation of the true X-ray absorption conditions during spectrum acquisition. An 'effective tilt' angle parameter is determined by evaluating the bremsstrahlung shape of the measured SEM spectra. It is useful for bremsstrahlung background approximation, with exact calculations of the absorption edges below the characteristic peaks, as required for P/B-ZAF model based quantification methods. It can even be used as a variable input parameter for ZAF based quantification models. The analytical results are then much more reliable for the different absorption effects from irregular specimen surfaces, because the unknown absorption dependency is considered. Finally, the method is also applied to the evaluation of TEM spectra. In this case, the physical parameter optimised is the sample thickness (mass thickness), which influences the emitted and measured spectrum through different absorption with TEM

  8. Analytical models approximating individual processes: a validation method.

    Science.gov (United States)

    Favier, C; Degallier, N; Menkès, C E

    2010-12-01

    Upscaling population models from fine to coarse resolutions, in space, time and/or level of description, allows the derivation of fast and tractable models based on a thorough knowledge of individual processes. The validity of such approximations is generally tested only on a limited range of parameter sets. A more general validation test, over a range of parameters, is proposed; this would estimate the error induced by the approximation, using the original model's stochastic variability as a reference. The method is illustrated by three examples taken from the field of epidemics transmitted by vectors that bite in a temporally cyclical pattern, showing how it can be used: to estimate whether an approximation over- or under-fits the original model; to invalidate an approximation; and to rank possible approximations by their quality. As a result, the application of the validation method to this field emphasizes the need to account for the vectors' biology in epidemic prediction models and to validate these against finer-scale models. Copyright © 2010 Elsevier Inc. All rights reserved.

  9. Multilevel method for modeling large-scale networks.

    Energy Technology Data Exchange (ETDEWEB)

    Safro, I. M. (Mathematics and Computer Science)

    2012-02-24

    Understanding the behavior of real complex networks is of great theoretical and practical significance. It includes developing accurate artificial models whose topological properties are similar to those of real networks, generating artificial networks at different scales under special conditions, investigating network dynamics, reconstructing missing data, predicting network response, detecting anomalies, and other tasks. Network generation, reconstruction, and prediction of future topology are central issues of this field. In this project, we address questions related to understanding network modeling, investigating its structure and properties, and generating artificial networks. Most modern network generation methods are based either on various random graph models (reinforced by a set of properties such as a power-law distribution of node degrees, graph diameter, and number of triangles) or on the principle of replicating an existing model with elements of randomization, such as the R-MAT generator and Kronecker product modeling. Hierarchical models operate at different levels of the network hierarchy but with the same finest elements of the network. However, in many cases methods that include randomization and replication elements at the finest level of relationships between network nodes, and modeling that addresses the problem of preserving a set of simplified properties, do not fit the real networks accurately enough. Among the unsatisfactory features are numerically inadequate results, instability of algorithms on real (artificial) data that have been tested on artificial (real) data, and incorrect behavior at different scales. One reason is that randomization and replication of existing structures can create conflicts between fine and coarse scales of the real network geometry. Moreover, randomization and the satisfying of some attribute at the same time can abolish those topological attributes that have been undefined or hidden from

  10. Methods and models used in comparative risk studies

    International Nuclear Information System (INIS)

    Devooght, J.

    1983-01-01

    Comparative risk studies make use of a large number of methods and models based upon sets of assumptions that are incompletely formulated, or upon value judgements. Owing to the multidimensionality of risks and benefits, the economic and social context may notably influence the final result. Five classes of models are briefly reviewed: accounting of fluxes of effluents, radiation and energy; transport models and health effects; systems reliability and Bayesian analysis; economic analysis of reliability and cost-risk-benefit analysis; and decision theory in the presence of uncertainty and multiple objectives. The purpose and prospects of comparative studies are assessed in view of probably diminishing returns for large generic comparisons [fr]

  11. A delay financial model with stochastic volatility; martingale method

    Science.gov (United States)

    Lee, Min-Ku; Kim, Jeong-Hoon; Kim, Joocheol

    2011-08-01

    In this paper, we extend a delayed geometric Brownian motion model by adding a stochastic volatility term, driven by a hidden process of fast mean-reverting diffusion, to the delayed model. Combining a martingale approach and an asymptotic method, we develop a theory for option pricing under this hybrid model. The core result of our work is a proof that a discounted approximate option price can be decomposed as a martingale part plus a small term. Subsequently, a correction effect on the European option price is demonstrated both theoretically and numerically, in good agreement with practical results.

  12. Finite analytic method for modeling variably saturated flows.

    Science.gov (United States)

    Zhang, Zaiyong; Wang, Wenke; Gong, Chengcheng; Yeh, Tian-Chyi Jim; Wang, Zhoufeng; Wang, Yu-Li; Chen, Li

    2018-04-15

    This paper develops a finite analytic method (FAM) for solving the two-dimensional Richards' equation. The FAM incorporates the analytic solution in local elements to formulate the algebraic representation of the partial differential equation of unsaturated flow so as to effectively control both numerical oscillation and dispersion. The FAM model is then verified using four examples, in which the numerical solutions are compared with analytical solutions, solutions from VSAFT2, and observational data from a field experiment. These numerical experiments show that the method is not only accurate but also efficient, when compared with other numerical methods. Copyright © 2017 Elsevier B.V. All rights reserved.

  13. Finite element method for incompressible two-fluid model using a fractional step method

    International Nuclear Information System (INIS)

    Uchiyama, Tomomi

    1997-01-01

    This paper presents a finite element method for an incompressible two-fluid model. The solution algorithm is based on the fractional step method, which is frequently used in the finite element calculation for single-phase flows. The calculating domain is divided into quadrilateral elements with four nodes. The Galerkin method is applied to derive the finite element equations. Air-water two-phase flows around a square cylinder are calculated by the finite element method. The calculation demonstrates the close relation between the volumetric fraction of the gas-phase and the vortices shed from the cylinder, which is favorably compared with the existing data. It is also confirmed that the present method allows the calculation with less CPU time than the SMAC finite element method proposed in my previous paper. (author)

  14. Comparison of methods for modelling geomagnetically induced currents

    Science.gov (United States)

    Boteler, D. H.; Pirjola, R. J.

    2014-09-01

    Assessing the geomagnetic hazard to power systems requires reliable modelling of the geomagnetically induced currents (GIC) produced in the power network. This paper compares the Nodal Admittance Matrix method with the Lehtinen-Pirjola method and shows them to be mathematically equivalent. GIC calculation using the Nodal Admittance Matrix method involves three steps: (1) using the voltage sources in the lines representing the induced geoelectric field to calculate equivalent current sources and summing these to obtain the nodal current sources, (2) performing the inversion of the admittance matrix and multiplying by the nodal current sources to obtain the nodal voltages, (3) using the nodal voltages to determine the currents in the lines and in the ground connections. In the Lehtinen-Pirjola method, steps 2 and 3 of the Nodal Admittance Matrix calculation are combined into one matrix expression. This involves inversion of a more complicated matrix but yields the currents to ground directly from the nodal current sources. To calculate GIC in multiple voltage levels of a power system, it is necessary to model the connections between voltage levels, not just the transmission lines and ground connections considered in traditional GIC modelling. Where GIC flow to ground through both the high-voltage and low-voltage windings of a transformer, they share a common path through the substation grounding resistance. This has been modelled previously by including non-zero, off-diagonal elements in the earthing impedance matrix of the Lehtinen-Pirjola method. However, this situation is more easily handled in both the Nodal Admittance Matrix method and the Lehtinen-Pirjola method by introducing a node at the neutral point.

  15. Comparison of methods for modelling geomagnetically induced currents

    Directory of Open Access Journals (Sweden)

    D. H. Boteler

    2014-09-01

    Full Text Available Assessing the geomagnetic hazard to power systems requires reliable modelling of the geomagnetically induced currents (GIC) produced in the power network. This paper compares the Nodal Admittance Matrix method with the Lehtinen–Pirjola method and shows them to be mathematically equivalent. GIC calculation using the Nodal Admittance Matrix method involves three steps: (1) using the voltage sources in the lines representing the induced geoelectric field to calculate equivalent current sources and summing these to obtain the nodal current sources, (2) performing the inversion of the admittance matrix and multiplying by the nodal current sources to obtain the nodal voltages, (3) using the nodal voltages to determine the currents in the lines and in the ground connections. In the Lehtinen–Pirjola method, steps 2 and 3 of the Nodal Admittance Matrix calculation are combined into one matrix expression. This involves inversion of a more complicated matrix but yields the currents to ground directly from the nodal current sources. To calculate GIC in multiple voltage levels of a power system, it is necessary to model the connections between voltage levels, not just the transmission lines and ground connections considered in traditional GIC modelling. Where GIC flow to ground through both the high-voltage and low-voltage windings of a transformer, they share a common path through the substation grounding resistance. This has been modelled previously by including non-zero, off-diagonal elements in the earthing impedance matrix of the Lehtinen–Pirjola method. However, this situation is more easily handled in both the Nodal Admittance Matrix method and the Lehtinen–Pirjola method by introducing a node at the neutral point.
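
    The three steps quoted above map directly onto a few lines of linear algebra. A minimal sketch for a toy network of two grounded substations joined by one line; all resistances and the geoelectric voltage are illustrative:

        import numpy as np

        R_line = 3.0                      # line resistance (ohm)
        R_ground = np.array([0.5, 0.7])   # substation grounding resistances (ohm)
        E_line = 100.0                    # induced voltage source along the line (V)

        # Step 1: equivalent current source of the line, summed per node
        # (the source drives current from node 0 toward node 1)
        J = np.array([-E_line / R_line, +E_line / R_line])

        # Nodal admittance matrix: line plus ground admittances
        y = 1.0 / R_line
        Y = np.array([[y + 1 / R_ground[0], -y],
                      [-y, y + 1 / R_ground[1]]])

        V = np.linalg.solve(Y, J)                 # step 2: nodal voltages

        I_line = (V[0] - V[1] + E_line) / R_line  # step 3: line current...
        I_ground = V / R_ground                   # ...and the GIC to ground
        print(V, I_line, I_ground)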

  16. Involving stakeholders in building integrated fisheries models using Bayesian methods

    DEFF Research Database (Denmark)

    Haapasaari, Päivi Elisabet; Mäntyniemi, Samu; Kuikka, Sakari

    2013-01-01

    A participatory Bayesian approach was used to investigate how the views of stakeholders could be utilized to develop models to help understand the Central Baltic herring fishery. In task one, we applied the Bayesian belief network methodology to elicit the causal assumptions of six stakeholders ... on factors that influence natural mortality, growth, and egg survival of the herring stock, in probabilistic terms. We also integrated the expressed views into a meta-model using the Bayesian model averaging (BMA) method. In task two, we used influence diagrams to study qualitatively how the stakeholders frame ... the potential of the study to contribute to the development of participatory modeling practices. It is concluded that the subjective perspective on knowledge, which is fundamental in Bayesian theory, suits participatory modeling better than a positivist paradigm that seeks the objective truth. The methodology

  17. Optimisation-Based Solution Methods for Set Partitioning Models

    DEFF Research Database (Denmark)

    Rasmussen, Matias Sevel

    The scheduling of crew, i.e. the construction of work schedules for crew members, is often not a trivial task, but a complex puzzle. The task is complicated by rules, restrictions, and preferences. Therefore, manual solutions as well as solutions from standard software packages are not always sufficient with respect to solution quality and solution time. Enhancement of the overall solution quality as well as the solution time can be of vital importance to many organisations. The fields of operations research and mathematical optimisation deal with mathematical modelling of difficult scheduling problems (among other topics). The fields also deal with the development of sophisticated solution methods for these mathematical models. This thesis describes the set partitioning model, which has been widely used for modelling crew scheduling problems. Integer properties for the set partitioning model are shown
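
    For reference, the set partitioning model referred to in this record has the standard integer-programming form below (LaTeX notation): each column j is a candidate work schedule with cost c_j, and a_{ij} = 1 when schedule j covers task i, so every task is covered exactly once.

        \min_{x} \sum_{j \in J} c_j x_j
        \quad \text{subject to} \quad
        \sum_{j \in J} a_{ij} x_j = 1 \quad \forall i \in I,
        \qquad x_j \in \{0, 1\} \quad \forall j \in J.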

  18. Sample preparation for SEM of plant surfaces

    OpenAIRE

    A.K. Pathan; J. Bond; R.E. Gaskin

    2010-01-01

    Plant tissues must be dehydrated for observation in most electron microscopes. Although a number of sample processing techniques have been developed for preserving plant tissues in their original form and structure, none of them are guaranteed artefact-free. The current paper reviews common scanning electron microscopy techniques and the sample preparation methods employed for visualisation of leaves under specific types of electron microscopes. Common artefacts introduced by specific techniq...

  19. Modeling of Information Security Strategic Planning Methods and Expert Assessments

    Directory of Open Access Journals (Sweden)

    Alexander Panteleevich Batsula

    2014-09-01

    Full Text Available The paper addresses the problem of increasing the level of information security. As a result, a method for increasing the level of information security is developed through modeling of strategic planning SWOT analysis using expert assessments.

  20. Ethnographic Decision Tree Modeling: A Research Method for Counseling Psychology.

    Science.gov (United States)

    Beck, Kirk A.

    2005-01-01

    This article describes ethnographic decision tree modeling (EDTM; C. H. Gladwin, 1989) as a mixed method design appropriate for counseling psychology research. EDTM is introduced and located within a postpositivist research paradigm. Decision theory that informs EDTM is reviewed, and the 2 phases of EDTM are highlighted. The 1st phase, model…

  1. Site Structure and User Navigation: Models, Measures and Methods

    NARCIS (Netherlands)

    Herder, E.; van Dijk, Elisabeth M.A.G.; Chen, S.Y; Magoulas, G.D.

    2004-01-01

    The analysis of the structure of Web sites and patterns of user navigation through these sites is gaining attention from different disciplines, as it enables unobtrusive discovery of user needs. In this chapter we give an overview of models, measures, and methods that can be used for analysis

  2. Unsteady panel method for complex configurations including wake modeling

    CSIR Research Space (South Africa)

    Van Zyl, Lourens H

    2008-01-01

    Full Text Available The calculation of unsteady air loads is an essential step in any aeroelastic analysis. The subsonic doublet lattice method (DLM) is used extensively for this purpose due to its simplicity and reliability. The body models available with the popular...

  3. Application of the simplex method of linear programming model to ...

    African Journals Online (AJOL)

    This work discussed how the simplex method of linear programming could be used to maximize the profit of any business firm, using Saclux Paint Company as a case study. It equally elucidated the effect that variation in the optimal result obtained from the linear programming model will have on any given firm. It was demonstrated ...
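
    A minimal sketch of such a profit-maximization LP solved with SciPy; the two-product, two-resource numbers are hypothetical (a classic paint-mix example), not Saclux's actual figures, and `linprog` minimizes, so the profit vector is negated:

        from scipy.optimize import linprog

        # Maximize profit 5*x1 + 4*x2 subject to resource constraints
        c = [-5.0, -4.0]                 # negated profits per batch
        A_ub = [[6.0, 4.0],              # resin:  6*x1 + 4*x2 <= 24
                [1.0, 2.0]]              # labour: 1*x1 + 2*x2 <= 6
        b_ub = [24.0, 6.0]

        res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                      bounds=[(0, None), (0, None)], method="highs")
        print(res.x, -res.fun)           # optimal batches (3, 1.5), profit 21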

  4. A comparison of two methods for fitting the INDCLUS model

    NARCIS (Netherlands)

    Ten Berge, Jos M.F.; Kiers, Henk A.L.

    2005-01-01

    Chaturvedi and Carroll have proposed the SINDCLUS method for fitting the INDCLUS model. It is based on splitting the two appearances of the cluster matrix in the least squares fit function and relying on convergence to a solution where both cluster matrices coincide. Kiers has proposed an

  5. Computational Methods for Modeling Aptamers and Designing Riboswitches

    Directory of Open Access Journals (Sweden)

    Sha Gong

    2017-11-01

    Full Text Available Riboswitches, which are located within certain noncoding RNA regions, function as genetic "switches", regulating when and where genes are expressed in response to certain ligands. Understanding the numerous functions of riboswitches requires computational models to predict the structures and structural changes of the aptamer domains. Although aptamers often form complex structures, computational approaches, such as RNAComposer and Rosetta, have already been applied to model the tertiary (three-dimensional, 3D) structure of several aptamers. As structural changes in aptamers must be achieved within a certain time window for effective regulation, kinetics is another key point for understanding aptamer function in riboswitch-mediated gene regulation. The coarse-grained self-organized polymer (SOP) model using Langevin dynamics simulation has been successfully developed to investigate the folding kinetics of aptamers, while their co-transcriptional folding kinetics can be modeled by the helix-based computational method and the BarMap approach. Based on the known aptamers, the web server Riboswitch Calculator and other theoretical methods provide a new tool to design synthetic riboswitches. This review presents an overview of these computational methods for modeling the structure and kinetics of riboswitch aptamers and for designing riboswitches.

  6. Modelling of Airship Flight Mechanics by the Projection Equivalent Method

    Directory of Open Access Journals (Sweden)

    Frantisek Jelenciak

    2015-12-01

    Full Text Available This article describes the projection equivalent method (PEM) as a specific and relatively simple approach for the modelling of aircraft dynamics. With the PEM it is possible to obtain a mathematical model of the aerodynamic forces and moments acting on different kinds of aircraft during flight. A characteristic of the PEM is that, in principle, it provides an acceptable regression model of aerodynamic forces and moments which exhibits reasonable and plausible behaviour from a dynamics viewpoint. The principle of this method is based on applying Newtonian mechanics, combined with a specific form of the finite element method to cover additional effects. The main advantage of the PEM is that it is not necessary to carry out measurements in a wind tunnel to identify the model's parameters. Plausible dynamical behaviour of the model can be achieved by specific correction parameters, which can be determined on the basis of experimental data obtained during flight of the aircraft. In this article, we present the PEM as applied to an airship, as well as a comparison of the data calculated by the PEM with experimental flight data.

  7. Acoustic 3D modeling by the method of integral equations

    Science.gov (United States)

    Malovichko, M.; Khokhlov, N.; Yavich, N.; Zhdanov, M.

    2018-02-01

    This paper presents a parallel algorithm for frequency-domain acoustic modeling by the method of integral equations (IE). The algorithm is applied to seismic simulation. The IE method reduces the size of the problem but leads to a dense system matrix. Tolerable memory consumption and numerical complexity were achieved by applying an iterative solver, accompanied by an effective matrix-vector multiplication operation based on the fast Fourier transform (FFT). We demonstrate that the IE system matrix is better conditioned than that of the finite-difference (FD) method, and discuss its relation to a specially preconditioned FD matrix. We considered several methods of matrix-vector multiplication for the free-space and layered host models. The developed algorithm and computer code were benchmarked against the FD time-domain solution. It was demonstrated that the method can accurately calculate the seismic field for models with sharp material boundaries and a point source and receiver located close to the free surface. We used OpenMP to speed up the matrix-vector multiplication, while MPI was used to speed up the solution of the system equations and to parallelize across multiple sources. Practical examples and efficiency tests are presented as well.
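
    The fast matrix-vector product mentioned above exploits the translational invariance of the free-space Green's function: the IE matrix is (block-)Toeplitz, so its action is a convolution computable by FFT after circulant embedding. A one-dimensional sketch with an illustrative kernel (not the acoustic Green's function):

        import numpy as np

        n = 512
        i = np.arange(n)
        kernel = 1.0 / (1.0 + np.arange(-n + 1, n) ** 2)  # translation-invariant
        x = np.random.default_rng(4).standard_normal(n)

        A = kernel[i[:, None] - i[None, :] + n - 1]       # dense Toeplitz, O(n^2)
        y_dense = A @ x

        pad = 2 * n - 1                                   # circulant embedding
        y_fft = np.fft.irfft(np.fft.rfft(kernel, pad) * np.fft.rfft(x, pad),
                             pad)[n - 1:2 * n - 1]        # O(n log n)

        print(np.max(np.abs(y_dense - y_fft)))            # agree to round-off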

  8. Cognitive psychology and self-reports: models and methods.

    Science.gov (United States)

    Jobe, Jared B

    2003-05-01

    This article describes the models and methods that cognitive psychologists and survey researchers use to evaluate and experimentally test cognitive issues in questionnaire design and subsequently improve self-report instruments. These models and methods assess the cognitive processes underlying how respondents comprehend and generate answers to self-report questions. Cognitive processing models are briefly described. Non-experimental methods--expert cognitive review, cognitive task analysis, focus groups, and cognitive interviews--are described. Examples are provided of how these methods were effectively used to identify cognitive self-report issues. Experimental methods--cognitive laboratory experiments, field tests, and experiments embedded in field surveys--are described. Examples are provided of: (a) how laboratory experiments were designed to test the capability and accuracy of respondents in performing the cognitive tasks required to answer self-report questions, (b) how a field experiment was conducted in which a cognitively designed questionnaire was effectively tested against the original questionnaire, and (c) how a cognitive experiment embedded in a field survey was conducted to test cognitive predictions.

  9. A new method to determine the number of experimental data using statistical modeling methods

    Energy Technology Data Exchange (ETDEWEB)

    Jung, Jung-Ho; Kang, Young-Jin; Lim, O-Kaung; Noh, Yoojeong [Pusan National University, Busan (Korea, Republic of)

    2017-06-15

    For analyzing the statistical performance of physical systems, statistical characteristics of physical parameters such as material properties need to be estimated by collecting experimental data. For accurate statistical modeling, many such experiments may be required, but data are usually quite limited owing to the cost and time constraints of experiments. In this study, a new method for determining a reasonable number of experimental data is proposed using an area metric, after obtaining statistical models using information on the underlying distribution, the sequential statistical modeling (SSM) approach, and the kernel density estimation (KDE) approach. The area metric is used as a convergence criterion to determine the necessary and sufficient number of experimental data to be acquired. The proposed method is validated in simulations, using different statistical modeling methods, different true models, and different convergence criteria. An example data set with 29 data points describing the fatigue strength coefficient of SAE 950X is used to demonstrate the performance of the obtained statistical models, which use a pre-determined number of experimental data in predicting the probability of failure for a target fatigue life.
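
    A minimal sketch of an area-metric convergence check, assuming the metric is taken as the area between the empirical CDFs before and after adding a batch of experiments (the paper's exact criterion may differ); the "true" property distribution is hypothetical:

        import numpy as np
        from scipy import stats

        def area_metric(a, b, grid):
            # Area between the empirical CDFs of samples a and b over grid
            Fa = np.searchsorted(np.sort(a), grid, side="right") / len(a)
            Fb = np.searchsorted(np.sort(b), grid, side="right") / len(b)
            return np.trapz(np.abs(Fa - Fb), grid)

        rng = np.random.default_rng(5)
        population = stats.norm(100.0, 10.0)      # hypothetical material property
        grid = np.linspace(50, 150, 1001)

        tol, batch = 1.0, 5                       # stop when the model stabilizes
        data = population.rvs(batch, random_state=rng)
        while True:
            new = np.concatenate([data, population.rvs(batch, random_state=rng)])
            if area_metric(data, new, grid) < tol:
                break
            data = new
        print(f"converged with {len(new)} experimental data")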

  10. A constructive model potential method for atomic interactions

    Science.gov (United States)

    Bottcher, C.; Dalgarno, A.

    1974-01-01

    A model potential method is presented that can be applied to many electron single centre and two centre systems. The development leads to a Hamiltonian with terms arising from core polarization that depend parametrically upon the positions of the valence electrons. Some of the terms have been introduced empirically in previous studies. Their significance is clarified by an analysis of a similar model in classical electrostatics. The explicit forms of the expectation values of operators at large separations of two atoms given by the model potential method are shown to be equivalent to the exact forms when the assumption is made that the energy level differences of one atom are negligible compared to those of the other.

  11. Unicriterion Model: A Qualitative Decision Making Method That Promotes Ethics

    Directory of Open Access Journals (Sweden)

    Fernando Guilherme Silvano Lobo Pimentel

    2011-06-01

    Full Text Available Management decision making methods frequently adopt quantitative models of several criteria that bypass the question of why some criteria are considered more important than others, which makes more difficult the task of delivering a transparent view of preference structure priorities that might promote ethics and learning and serve as a basis for future decisions. To tackle this particular shortcoming of usual methods, an alternative qualitative methodology of aggregating preferences based on the ranking of criteria is proposed. Such an approach delivers a simple and transparent model for the solution of each preference conflict faced during the management decision making process. The method proceeds by breaking the decision problem into 'two criteria – two alternatives' scenarios, and translating the problem of choice between alternatives to a problem of choice between criteria whenever appropriate. The unicriterion model method is illustrated by its application in a car purchase and a house purchase decision problem.

  12. Dynamic modeling method for infrared smoke based on enhanced discrete phase model

    Science.gov (United States)

    Zhang, Zhendong; Yang, Chunling; Zhang, Yan; Zhu, Hongbo

    2018-03-01

    The dynamic modeling of infrared (IR) smoke plays an important role in IR scene simulation systems, and its accuracy directly influences the veracity of the system. However, current IR smoke models cannot provide high veracity, because certain physical characteristics are frequently ignored in the fluid simulation: the discrete phase is simplified as a continuous phase, and the spinning of the IR decoy missile body is ignored. To address this defect, this paper proposes a dynamic modeling method for IR smoke based on an enhanced discrete phase model (DPM). A mathematical simulation model based on an enhanced DPM is built and a dynamic computing fluid mesh is generated. The dynamic model of IR smoke is then established using an extended equivalent-blackbody-molecule model. Experiments demonstrate that this model realizes a dynamic method for modeling IR smoke with higher veracity.

  13. MFAM: Multiple Frequency Adaptive Model-Based Indoor Localization Method.

    Science.gov (United States)

    Tuta, Jure; Juric, Matjaz B

    2018-03-24

    This paper presents MFAM (Multiple Frequency Adaptive Model-based localization method), a novel model-based indoor localization method that is capable of using multiple wireless signal frequencies simultaneously. It utilizes an indoor architectural model and the physical properties of wireless signal propagation through objects and space. The motivation for developing a multiple frequency localization method lies in the future Wi-Fi standards (e.g., 802.11ah) and the growing number of various wireless signals present in buildings (e.g., Wi-Fi, Bluetooth, ZigBee, etc.). Current indoor localization methods mostly rely on a single wireless signal type and often require many devices to achieve the necessary accuracy. MFAM utilizes multiple wireless signal types and improves the localization accuracy over the usage of a single frequency. It continuously monitors signal propagation through space and adapts the model according to changes indoors. Using multiple signal sources lowers the required number of access points for a specific signal type while utilizing signals already present indoors. Due to the unavailability of 802.11ah hardware, we have evaluated the proposed method with similar signals; we have used 2.4 GHz Wi-Fi and 868 MHz HomeMatic home automation signals. We have performed the evaluation in a modern two-bedroom apartment and measured a mean localization error of 2.0 to 2.3 m and a median error of 2.0 to 2.2 m. Based on our evaluation results, using two different signals improves the localization accuracy by 18% in comparison to the 2.4 GHz Wi-Fi-only approach. Additional signals would improve the accuracy even further. We have shown that MFAM provides better accuracy than competing methods, while having several advantages for real-world usage.

  14. CAD-based automatic modeling method for Geant4 geometry model through MCAM

    International Nuclear Information System (INIS)

    Wang, D.; Nie, F.; Wang, G.; Long, P.; LV, Z.

    2013-01-01

    The full text of publication follows. Geant4 is a widely used Monte Carlo transport simulation package. Before calculating with Geant4, the calculation model must be established; it can be described using the Geometry Description Markup Language (GDML) or C++. However, it is time-consuming and error-prone to describe the models manually in GDML. Automatic modeling methods have been developed recently, but problems exist in most present modeling programs; in particular, some of them are not accurate or are adapted only to specific CAD formats. To convert CAD geometry models into GDML accurately, a Geant4 Computer Aided Design (CAD) based modeling method was developed for automatically converting complex CAD geometry models into GDML geometry models. The essence of this method is dealing with CAD models represented by boundary representation (B-REP) and GDML models represented by constructive solid geometry (CSG). First, the CAD model is decomposed into several simple solids, each having only one closed shell. Each simple solid is then decomposed into a set of convex shells. Corresponding GDML convex basic solids are then generated from the boundary surfaces obtained from the topological characteristics of each convex shell. After the generation of these solids, the GDML model is assembled with a series of Boolean operations. This method was adopted in the CAD/Image-based Automatic Modeling Program for Neutronics and Radiation Transport (MCAM), and tested with several models including the examples in the Geant4 installation package. The results showed that this method can convert standard CAD models accurately, and can be used for Geant4 automatic modeling. (authors)

  15. Model parameterization as method for data analysis in dendroecology

    Science.gov (United States)

    Tychkov, Ivan; Shishov, Vladimir; Popkova, Margarita

    2017-04-01

    There is no argument about the usefulness of process-based models in ecological studies; the only limitations are how well the model's algorithm is developed and how it is applied in research. Simulation of tree-ring growth based on climate provides valuable information on the tree-ring growth response to different environmental conditions, and also sheds light on species-specific features of the tree-ring growth process. Visual parameterization of the Vaganov-Shashkin model allows estimation of the nonlinear response of tree-ring growth from daily climate data: daily temperature, estimated daylight and soil moisture. Previous use of the VS-Oscilloscope (a software tool for visual parameterization) has shown a good ability to recreate unique patterns of tree-ring growth for coniferous species in Siberian Russia, the USA, China, Mediterranean Spain and Tunisia. However, such models are mostly used one-sidedly, to better understand different tree growth processes, in contrast to statistical methods of analysis (e.g., generalized linear models, mixed models, structural equations), which can be used for reconstruction and forecasting. Usually the models are used either for checking new hypotheses or for quantitative assessment of physiological tree growth data to reveal growth process mechanisms, while statistical methods are used for data-mining assessment and as study tools in themselves. The high sensitivity of the model's VS-parameters reflects the ability of the model to simulate tree-ring growth and to evaluate the value of climate factors limiting growth. Precise parameterization with the VS-Oscilloscope provides valuable information about the growth processes of trees and the conditions under which these processes occur (e.g., day of growing season onset, length of the season, minimum/maximum temperatures for tree-ring growth, formation of wide or narrow rings, etc.). The work was supported by the Russian Science Foundation (RSF # 14-14-00219)

  16. Modeling of radionuclide migration through porous material with meshless method

    International Nuclear Information System (INIS)

    Vrankar, L.; Turk, G.; Runovc, F.

    2005-01-01

    To assess the long term safety of a radioactive waste disposal system, mathematical models are used to describe groundwater flow, chemistry and potential radionuclide migration through geological formations. A number of processes need to be considered when predicting the movement of radionuclides through the geosphere. The most important input data are obtained from field measurements, which are not completely available for all regions of interest. For example, the hydraulic conductivity as an input parameter varies from place to place. In such cases geostatistical science offers a variety of spatial estimation procedures. Methods for solving the solute transport equation can also be classified as Eulerian, Lagrangian and mixed. The numerical solution of partial differential equations (PDE) is usually obtained by finite difference methods (FDM), finite element methods (FEM), or finite volume methods (FVM). Kansa introduced the concept of solving partial differential equations using radial basis functions (RBF) for hyperbolic, parabolic and elliptic PDEs. Our goal was to present a relatively new approach to the modelling of radionuclide migration through the geosphere using radial basis function methods in Eulerian and Lagrangian coordinates. Radionuclide concentrations will also be calculated in heterogeneous and partly heterogeneous 2D porous media. We compared the meshless method with the traditional finite difference scheme. (author)
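
    As a hedged illustration of the Kansa approach mentioned above, the following sketch solves a one-dimensional Poisson problem by multiquadric RBF collocation; the node count and shape parameter are arbitrary choices for the example.

        import numpy as np

        # Kansa collocation with multiquadric RBFs for u''(x) = f(x) on [0, 1],
        # u(0) = u(1) = 0; exact solution u = sin(pi x), f = -pi^2 sin(pi x).
        N, c = 25, 0.2                          # nodes and shape parameter
        x = np.linspace(0.0, 1.0, N)
        r2 = (x[:, None] - x[None, :])**2
        phi = np.sqrt(r2 + c**2)                # multiquadric basis
        d2phi = c**2 / phi**3                   # its 1-D second derivative

        A = d2phi.copy()                        # interior rows enforce the PDE
        A[0], A[-1] = phi[0], phi[-1]           # boundary rows enforce u = 0
        b = -np.pi**2 * np.sin(np.pi * x)
        b[0] = b[-1] = 0.0

        w = np.linalg.solve(A, b)               # RBF expansion coefficients
        u = phi @ w
        print(np.max(np.abs(u - np.sin(np.pi * x))))   # small collocation error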

  17. Automated SEM Modal Analysis Applied to the Diogenites

    Science.gov (United States)

    Bowman, L. E.; Spilde, M. N.; Papike, James J.

    1996-01-01

    Analysis of volume proportions of minerals, or modal analysis, is routinely accomplished by point counting on an optical microscope, but the process, particularly on brecciated samples such as the diogenite meteorites, is tedious and prone to error by misidentification of very small fragments, which may make up a significant volume of the sample. Precise volume percentage data can be gathered on a scanning electron microscope (SEM) utilizing digital imaging and an energy dispersive spectrometer (EDS). This form of automated phase analysis reduces error, and at the same time provides more information than could be gathered using simple point counting alone, such as particle morphology statistics and chemical analyses. We have previously studied major, minor, and trace-element chemistry of orthopyroxene from a suite of diogenites. This abstract describes the method applied to determine the modes on this same suite of meteorites and the results of that research. The modal abundances thus determined add additional information on the petrogenesis of the diogenites. In addition, low-abundance phases such as spinels were located for further analysis by this method.
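
    The core of such automated modal analysis reduces to counting classified pixels; a minimal sketch, assuming a pre-classified phase map with invented phase labels (area percent serves as the proxy for volume percent):

        import numpy as np

        # Each pixel holds an integer phase ID from the EDS classification:
        # 0 = mounting medium/background, 1 = orthopyroxene, 2 = spinel, ...
        phase_map = np.random.default_rng(0).choice([0, 1, 1, 1, 2],
                                                    size=(512, 512))
        names = {0: "background", 1: "orthopyroxene", 2: "spinel"}

        grains = phase_map[phase_map != 0]          # exclude mounting medium
        ids, counts = np.unique(grains, return_counts=True)
        for i, n in zip(ids, counts):
            print(f"{names[i]:>14}: {100.0 * n / grains.size:5.1f} area%")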

  18. A review of distributed parameter groundwater management modeling methods

    Science.gov (United States)

    Gorelick, Steven M.

    1983-01-01

    Models which solve the governing groundwater flow or solute transport equations in conjunction with optimization techniques, such as linear and quadratic programming, are powerful aquifer management tools. Groundwater management models fall into two general categories: hydraulics or policy evaluation and water allocation. Groundwater hydraulic management models enable the determination of optimal locations and pumping rates of numerous wells under a variety of restrictions placed upon local drawdown, hydraulic gradients, and water production targets. Groundwater policy evaluation and allocation models can be used to study the influence upon regional groundwater use of institutional policies such as taxes and quotas. Furthermore, fairly complex groundwater-surface water allocation problems can be handled using system decomposition and multilevel optimization. Experience from the few real world applications of groundwater optimization-management techniques is summarized. Classified separately are methods for groundwater quality management aimed at optimal waste disposal in the subsurface. This classification is composed of steady state and transient management models that determine disposal patterns in such a way that water quality is protected at supply locations. Classes of research missing from the literature are groundwater quality management models involving nonlinear constraints, models which join groundwater hydraulic and quality simulations with political-economic management considerations, and management models that include parameter uncertainty.
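
    A hedged toy of the hydraulic management class of models described above, using linear programming over an invented drawdown response matrix (all numbers are placeholders):

        import numpy as np
        from scipy.optimize import linprog

        # Choose pumping rates q (m^3/d) at 3 wells to maximize total withdrawal
        # while keeping drawdown at 2 control points below limits. R[i, j] is the
        # drawdown at point i per unit pumping at well j, as would come from a
        # linear groundwater flow model (values invented).
        R = np.array([[0.004, 0.002, 0.001],
                      [0.001, 0.003, 0.004]])
        s_max = np.array([2.0, 2.5])             # allowable drawdowns (m)

        res = linprog(c=-np.ones(3),             # minimize -sum(q) = max pumping
                      A_ub=R, b_ub=s_max,
                      bounds=[(0.0, 800.0)] * 3, method="highs")
        print(res.x, -res.fun)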

  19. Semantic Model Driven Architecture Based Method for Enterprise Application Development

    Science.gov (United States)

    Wu, Minghui; Ying, Jing; Yan, Hui

    Enterprise applications must meet the requirements of dynamic business processes and adopt the latest technologies flexibly, while solving the problems caused by their inherently heterogeneous nature. Service-Oriented Architecture (SOA) is becoming a leading paradigm for business process integration. This research work focuses on business process modeling and proposes a semantic model-driven development method named SMDA, combining Ontology and Model-Driven Architecture (MDA) technologies. The architecture of SMDA is presented in three orthogonal perspectives. (1) The vertical axis is the four MDA layers; the focus is UML profiles in M2 (the meta-model layer) for ontology modeling, and the three abstraction levels: CIM, PIM and PSM modeling respectively. (2) The horizontal axis covers the different concerns involved in development: Process, Application, Information, Organization, and Technology. (3) The traversal axis refers to aspects that influence the other models across the cutting axis: Architecture, Semantics, Aspect, and Pattern. The paper also introduces the modeling and transformation process in SMDA, and briefly describes support for dynamic service composition.

  20. Modeling of uncertainty in atmospheric transport system using hybrid method

    International Nuclear Information System (INIS)

    Pandey, M.; Ranade, Ashok; Brij Kumar; Datta, D.

    2012-01-01

    Atmospheric dispersion models are routinely used at nuclear and chemical plants to estimate exposure to members of the public and occupational workers due to release of hazardous contaminants into the atmosphere. Atmospheric dispersion is a stochastic phenomenon and, in general, the concentration of the contaminant estimated at a given time and at a predetermined location downwind of a source cannot be predicted precisely. Uncertainty in atmospheric dispersion model predictions is associated with: 'data' or 'parameter' uncertainty resulting from errors in the data used to execute and evaluate the model, uncertainties in empirical model parameters, and initial and boundary conditions; 'model' or 'structural' uncertainty arising from inaccurate treatment of dynamical and chemical processes, approximate numerical solutions, and internal model errors; and 'stochastic' uncertainty, which results from the turbulent nature of the atmosphere as well as from the unpredictability of human activities related to emissions. The possibility theory based on fuzzy measure has been proposed in recent years as an alternative approach to address knowledge uncertainty of a model in situations where the available information is too vague to represent the parameters statistically. The paper presents a novel approach (called the Hybrid Method) to model knowledge uncertainty in a physical system by a combination of probabilistic and possibilistic representation of parametric uncertainties. As a case study, the proposed approach is applied to estimating the ground level concentration of a hazardous contaminant in air due to atmospheric releases through the stack (chimney) of a nuclear plant. The application illustrates the potential of the proposed approach. (author)
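
    A minimal sketch of the hybrid idea, assuming an invented ground-level Gaussian-plume model, a probabilistic (lognormal/normal) description of release rate and wind speed, and a possibilistic (triangular fuzzy) vertical dispersion parameter explored through alpha-cuts:

        import numpy as np

        def plume(Q, u, sy, sz, H):
            # Ground-level, centreline Gaussian plume with full reflection
            # (stand-in model; Q in g/s, u in m/s, sigmas and H in m).
            return Q / (np.pi * u * sy * sz) * np.exp(-H**2 / (2.0 * sz**2))

        rng = np.random.default_rng(1)
        for a in (0.0, 0.5, 1.0):
            # Possibilistic sigma_z: triangular fuzzy number (40, 60, 80) m;
            # the alpha-cut is the interval [40 + 20a, 80 - 20a].
            lo, hi = 40 + 20 * a, 80 - 20 * a
            c_lo, c_hi = [], []
            for _ in range(2000):                        # probabilistic part
                Q = rng.lognormal(np.log(100.0), 0.3)    # release rate (g/s)
                u = rng.normal(4.0, 0.5)                 # wind speed (m/s)
                cs = plume(Q, u, 50.0, np.linspace(lo, hi, 21), 60.0)
                c_lo.append(cs.min()); c_hi.append(cs.max())
            print(f"alpha={a}: 95th pct in [{np.percentile(c_lo, 95):.2e}, "
                  f"{np.percentile(c_hi, 95):.2e}] g/m^3")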

  1. Evaluation of internal noise methods for Hotelling observer models

    International Nuclear Information System (INIS)

    Zhang Yani; Pham, Binh T.; Eckstein, Miguel P.

    2007-01-01

    The inclusion of internal noise in model observers is a common method to allow for quantitative comparisons between human and model observer performance in visual detection tasks. In this article, we studied two different strategies for inserting internal noise into Hotelling model observers. In the first strategy, internal noise was added to the output of individual channels: (a) Independent nonuniform channel noise, (b) independent uniform channel noise. In the second strategy, internal noise was added to the decision variable arising from the combination of channel responses. The standard deviation of the zero mean internal noise was either constant or proportional to: (a) the decision variable's standard deviation due to the external noise, (b) the decision variable's variance caused by the external noise, (c) the decision variable magnitude on a trial to trial basis. We tested three model observers: square window Hotelling observer (HO), channelized Hotelling observer (CHO), and Laguerre-Gauss Hotelling observer (LGHO) using a four alternative forced choice (4AFC) signal known exactly but variable task with a simulated signal embedded in real x-ray coronary angiogram backgrounds. The results showed that the internal noise method that led to the best prediction of human performance differed across the studied model observers. The CHO model best predicted human observer performance with the channel internal noise. The HO and LGHO best predicted human observer performance with the decision variable internal noise. The present results might guide researchers with the choice of methods to include internal noise into Hotelling model observers when evaluating and optimizing medical image quality
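
    A hedged numerical sketch of one of the studied strategies, decision-variable internal noise with standard deviation proportional to the external-noise standard deviation, in a toy 4AFC setting (the decision-variable statistics are invented, not derived from angiogram backgrounds):

        import numpy as np

        rng = np.random.default_rng(7)
        n_trials, alpha = 20000, 0.8        # alpha: internal/external noise ratio

        # Toy decision variables from a linear (Hotelling-like) observer:
        # external noise only; the signal adds a fixed template response.
        lam = rng.normal(0.0, 1.0, (n_trials, 4))        # 4AFC: 4 locations
        lam[:, 0] += 1.2                                  # signal in location 0

        # Decision-variable internal noise, std proportional to the
        # external-noise std of the decision variable.
        sigma_ext = lam[:, 1:].std()
        lam += rng.normal(0.0, alpha * sigma_ext, lam.shape)

        pc = np.mean(np.argmax(lam, axis=1) == 0)         # proportion correct
        print(f"Pc = {pc:.3f}  (chance = 0.25)")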

  2. Using Structural Equation Modelling (SEM) to predict use of ...

    African Journals Online (AJOL)

    mother to child transmission of HIV. It was found to be effective in changing behaviour and studies ... indirectly via intention (willingness) in order to influence the behaviour of coming for VCT. This is because VCT ... similar to those reported in hospital or satellite (stand-alone) VCT centres, we feel that our findings can also ...

  3. Comparison of SEM and Optical Analysis of DT Neutron Tracks in CR-39 Detectors

    Energy Technology Data Exchange (ETDEWEB)

    Mosier-Boss, P A; Carbonelle, P; Morey, M S; Tinsley, J R; Hurley, J P

    2012-01-01

    CR-39 detectors were exposed to DT neutrons generated by a Thermo Fisher model A290 neutron generator. Afterwards, the etched tracks were examined both optically and by SEM. The purpose of the analysis was to compare the two techniques and to determine whether additional information on track geometry could be obtained by SEM analysis. The use of these techniques to examine triple tracks, diagnostic of ≥9.6 MeV neutrons, observed in CR-39 used in Pd/D codeposition experiments will also be discussed.

  4. Storm surge model based on variational data assimilation method

    Directory of Open Access Journals (Sweden)

    Shi-li Huang

    2010-06-01

    Full Text Available By combining computation and observation information, the variational data assimilation method has the ability to eliminate errors caused by the uncertainty of parameters in practical forecasting. It was applied to a storm surge model based on unstructured grids with high spatial resolution meant for improving the forecasting accuracy of the storm surge. By controlling the wind stress drag coefficient, the variation-based model was developed and validated through data assimilation tests in an actual storm surge induced by a typhoon. In the data assimilation tests, the model accurately identified the wind stress drag coefficient and obtained results close to the true state. Then, the actual storm surge induced by Typhoon 0515 was forecast by the developed model, and the results demonstrate its efficiency in practical application.
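
    A hedged twin-experiment sketch of the variational idea, identifying a drag coefficient by minimizing a least-squares cost; the one-line "surge model" below is a deliberately crude stand-in for the unstructured-grid model.

        import numpy as np
        from scipy.optimize import minimize_scalar

        t = np.linspace(0.0, 24.0, 49)                    # hours
        wind = 20.0 * np.exp(-((t - 12.0) / 4.0)**2)      # synthetic storm (m/s)

        def surge(cd):
            # Toy steady wind-setup relation eta ~ k * Cd * W^2, with the
            # fetch/depth factor folded into k (purely illustrative).
            return 0.4 * cd * wind**2

        cd_true = 2.5e-3
        obs = surge(cd_true) + np.random.default_rng(3).normal(0.0, 0.05, t.size)

        cost = lambda cd: np.sum((surge(cd) - obs)**2)
        res = minimize_scalar(cost, bounds=(1e-4, 1e-2), method="bounded")
        print(f"identified Cd = {res.x:.2e} (true {cd_true:.2e})")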

  5. Morphological characteristics of primary enamel surfaces versus permanent enamel surfaces: SEM digital analysis.

    Science.gov (United States)

    Lucchese, A; Storti, E

    2011-09-01

    The morphology of permanent and primary enamel surfaces merits further analysis. The objective of this study was to illustrate a method of SEM digital image processing able to quantify and discriminate between the morphological characteristics of primary and permanent tooth enamel. Sixteen extracted teeth, 8 primary and 8 permanent, kept in saline solution, were analysed. The teeth were observed under SEM, and the SEM images were analysed by means of digital image-processing algorithms. The two algorithms used were: local standard deviation, to measure surface roughness with the roughness index (RI); and the Hough transform, to identify linear structures with the linear structure index (LSI). The SEM images of primary tooth enamel show smooth enamel with small areas of irregularity and no apparent linear structures. The SEM images of permanent enamel show a surface that is not perfectly smooth; there are furrows and irregularities of variable depth and width. In clinical practice a number of different situations require the removal of a thin layer of enamel. Only a good morphological knowledge of both permanent and primary tooth enamel makes it possible to identify and exploit the effects of rotary tools on enamel, thus allowing for a correct finishing technique.
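
    A hedged sketch of the two image algorithms on a stand-in image; the LSI definition below is one simple choice, and the paper's exact formula is not reproduced.

        import numpy as np
        from scipy.ndimage import uniform_filter
        from skimage.feature import canny
        from skimage.transform import hough_line

        img = np.random.default_rng(5).random((256, 256))  # SEM image stand-in

        # Roughness index: mean local standard deviation in a k x k window,
        # computed from E[x^2] - E[x]^2.
        k = 9
        mu = uniform_filter(img, k)
        var = np.clip(uniform_filter(img**2, k) - mu**2, 0.0, None)
        RI = np.sqrt(var).mean()

        # Linear structure index: strength of straight features via the Hough
        # transform of an edge map (one possible definition).
        h, theta, d = hough_line(canny(img))
        LSI = h.max() / max(h.sum(), 1)

        print(f"RI = {RI:.4f}, LSI = {LSI:.2e}")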

  6. Soft tissue digestion of Paradiplozoon vaalense for SEM of sclerites and simultaneous molecular analysis.

    Science.gov (United States)

    Dos Santos, Q M; Avenant-Oldewage, A

    2015-02-01

    Classification of most monogeneans is primarily based on size, shape, and arrangement of haptoral sclerites. These structures are often obscured or misinterpreted when studied using light microscopy, leading to confusion regarding defining characters. Scanning electron microscopy (SEM) has predominantly been used to study haptoral sclerites in smaller monogeneans, focusing on hooks and anchors. In the Diplozoidae, SEM has not been used to study haptoral sclerites. Using new and modified techniques, the sclerites of diplozoids collected in South Africa were successfully studied using SEM. The digestion buffer from a DNA extraction kit was used to digest the surrounding tissue, and Poly-L-lysine-coated and concavity slides were employed to limit the movement and loss of sclerites, with the latter being more user-friendly. In addition to the success of visualizing the sclerites using SEM, the digested tissue from as little as half of the haptor provided viable genetic material for molecular characterization. From the results presented here, the study of the sclerites of larger monogeneans using SEM, including those bearing clamps, is a viable possibility for future research. Also, this method may be beneficial for the study of other, non-haptoral sclerites, such as cirri in other families of monogeneans. During this study, Labeo capensis was noted as a valid host of Paradiplozoon vaalense in a region of the Vaal River where the type host, Labeo umbratus, appears to be absent.

  7. Modeling crime events by d-separation method

    Science.gov (United States)

    Aarthee, R.; Ezhilmaran, D.

    2017-11-01

    Problematic legal cases have recently called for a scientifically founded method of dealing with the qualitative and quantitative roles of evidence in a case [1]. To deal with the quantitative role, we propose a d-separation method for modeling crime events. d-separation is a graphical criterion for identifying independence in a directed acyclic graph. By developing a d-separation method, we aim to lay the foundations for the development of a software support tool that can deal with evidential reasoning in legal cases. Such a tool is meant to be used by a judge or juror, in alliance with various experts who can provide information about the details. This will hopefully improve the communication between judges or jurors and experts. The proposed method uncovers more valid independencies than other graphical criteria.
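
    A minimal d-separation sketch using networkx on an invented crime-evidence DAG (variable names are illustrative only):

        import networkx as nx

        # Toy DAG: Motive -> Crime <- Opportunity; Crime -> DNA; Crime -> Witness.
        G = nx.DiGraph([("Motive", "Crime"), ("Opportunity", "Crime"),
                        ("Crime", "DNA"), ("Crime", "Witness")])

        # d-separation queries (networkx >= 2.4; renamed is_d_separator in 3.3+).
        print(nx.d_separated(G, {"Motive"}, {"Opportunity"}, set()))      # True
        print(nx.d_separated(G, {"Motive"}, {"Opportunity"}, {"Crime"}))  # False:
        # conditioning on the collider "Crime" opens the path.
        print(nx.d_separated(G, {"DNA"}, {"Witness"}, {"Crime"}))         # True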

  8. Numerical methods for the Lévy LIBOR model

    DEFF Research Database (Denmark)

    Papapantoleon, Antonis; Skovmand, David

    2010-01-01

    The aim of this work is to provide fast and accurate approximation schemes for the Monte Carlo pricing of derivatives in the Lévy LIBOR model of Eberlein and Özkan (2005). Standard methods can be applied to solve the stochastic differential equations of the successive LIBOR rates, but the methods are generally slow. We propose an alternative approximation scheme based on Picard iterations. Our approach is similar in accuracy to the full numerical solution, but with the feature that each rate is, unlike the standard method, evolved independently of the other rates in the term structure. This enables simultaneous calculation of derivative prices of different maturities using parallel computing. We include numerical illustrations of the accuracy and speed of our method pricing caplets.

  10. Dimensional verification of high aspect micro structures using FIB-SEM

    DEFF Research Database (Denmark)

    Zhang, Yang; Hansen, Hans Nørgaard

    2013-01-01

    …(FIB-SEM) assisted by Spip®. The micro features are circular holes 10 μm in diameter and 20 μm deep, with a 20 μm pitch. Various inspection methods were attempted to obtain dimensional information. Due to these dimensions, neither optical instruments nor the atomic force microscope (AFM) was capable of performing the measurement…

  11. Comparison of SEM and Optical Analysis of DT Neutron Tracks in CR-39 Detectors

    Energy Technology Data Exchange (ETDEWEB)

    P.A. Mosier-Boss, L.P.G. Forsley, P. Carbonnelle, M.S. Morey, J.R. Tinsley, J. P. Hurley, F.E. Gordon

    2012-01-01

    A solid state nuclear track detector, CR-39, was exposed to DT neutrons. After etching, the resultant tracks were analyzed using both an optical microscope and a scanning electron microscope (SEM). In this communication, both methods of analyzing DT neutron tracks are discussed.

  12. Soybean yield modeling using bootstrap methods for small samples

    Energy Technology Data Exchange (ETDEWEB)

    Dalposso, G.A.; Uribe-Opazo, M.A.; Johann, J.A.

    2016-11-01

    One of the problems that occurs when working with regression models concerns sample size: the statistical methods used in inferential analyses are asymptotic, so if the sample is small the analysis may be compromised because the estimates will be biased. An alternative is to use the bootstrap methodology, which in its non-parametric version does not need to guess or know the probability distribution that generated the original sample. In this work we used a small-sample data set of soybean yield and physical and chemical soil properties to determine a multiple linear regression model. Bootstrap methods were used for variable selection, identification of influential points and determination of confidence intervals for the model parameters. The results showed that the bootstrap methods enabled us to select the physical and chemical soil properties that were significant in the construction of the soybean yield regression model, to construct the confidence intervals of the parameters and to identify the points that had great influence on the estimated parameters. (Author)
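
    A hedged sketch of the non-parametric (pairs) bootstrap for regression coefficients, on synthetic small-sample data standing in for the soybean/soil variables:

        import numpy as np

        rng = np.random.default_rng(11)
        n = 30                                    # deliberately small sample
        X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])  # covariates
        y = X @ np.array([2.0, 0.8, -0.5]) + rng.normal(0.0, 0.4, n)

        def ols(Xb, yb):
            return np.linalg.lstsq(Xb, yb, rcond=None)[0]

        # Pairs bootstrap: resample whole cases with replacement and refit.
        B = 2000
        boot = np.empty((B, 3))
        for b in range(B):
            idx = rng.integers(0, n, n)
            boot[b] = ols(X[idx], y[idx])

        b_hat = ols(X, y)
        lo, hi = np.percentile(boot, [2.5, 97.5], axis=0)
        for j in range(3):
            print(f"beta[{j}]: {b_hat[j]: .3f}  95% CI [{lo[j]: .3f}, {hi[j]: .3f}]")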

  13. A hierarchical network modeling method for railway tunnels safety assessment

    Science.gov (United States)

    Zhou, Jin; Xu, Weixiang; Guo, Xin; Liu, Xumin

    2017-02-01

    Using network theory to model risk-related knowledge on accidents is regarded as potentially very helpful in risk management. A large amount of defect detection data for railway tunnels is collected every autumn in China, and it is extremely important to discover the regularities hidden in this database. In this paper, based on network theories and using data mining techniques, a new method is proposed for mining risk-related regularities to support risk management in railway tunnel projects. A hierarchical network (HN) model which takes into account the tunnel structures, tunnel defects, potential failures and accidents is established. An improved Apriori algorithm is designed to rapidly and effectively mine correlations between tunnel structures and tunnel defects, as sketched below. Then an algorithm is presented to mine the risk-related regularities table (RRT) from the frequent patterns. Finally, a safety assessment method is proposed that considers the actual defects and the possible risks of defects obtained from the RRT. This method can not only generate quantitative risk results but also reveal the key defects and critical risks of defects. This paper further develops accident-causation network modeling methods and can provide guidance for specific maintenance measures.
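
    A minimal Apriori-style sketch on invented tunnel-inspection transactions, mining frequent structure-defect pairs and the rules between them (thresholds and labels are placeholders; the paper's improved algorithm is not reproduced):

        from itertools import combinations

        # Each transaction: attributes observed on one inspected tunnel section.
        T = [{"lining:plain", "defect:crack"},
             {"lining:plain", "defect:crack", "defect:leakage"},
             {"lining:reinforced", "defect:leakage"},
             {"lining:plain", "defect:crack"},
             {"lining:reinforced"}]
        min_sup, min_conf = 0.4, 0.7

        def support(itemset):
            return sum(itemset <= t for t in T) / len(T)

        items = {i for t in T for i in t}
        L1 = {frozenset([i]) for i in items if support({i}) >= min_sup}
        L2 = {a | b for a, b in combinations(L1, 2)
              if support(a | b) >= min_sup}

        for pair in L2:                      # rules a -> b from frequent pairs
            for a in pair:
                (b,) = pair - {a}
                conf = support(pair) / support({a})
                if conf >= min_conf:
                    print(f"{a} -> {b}  sup={support(pair):.2f} conf={conf:.2f}")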

  14. Alternative SEM techniques for observing pyritised fossil material.

    Science.gov (United States)

    Poole; Lloyd

    2000-11-01

    Two scanning electron microscopy (SEM) electron-specimen interactions that provide images based on sample crystal structure, electron channelling and electron backscattered diffraction, are described. The SEM operating conditions and sample preparation are presented, followed by an example application of these techniques to the study of pyritised plant material. The two approaches provide an opportunity to examine simultaneously, at higher magnifications than normally available optically, detailed specimen anatomy and preservation state. Our investigation suggests that whereas both techniques have their advantages, the electron channelling approach is generally more readily available to most SEM users. However, electron backscattered diffraction does afford the opportunity of automated examination and characterisation of pyritised fossil material.

  15. Improvement of geometrical measurements from 3D-SEM reconstructions

    DEFF Research Database (Denmark)

    Carli, Lorenzo; De Chiffre, Leonardo; Horsewell, Andy

    2009-01-01

    The quantification of 3D geometry at the nanometric scale is a major metrological challenge. In this work geometrical measurements on cylindrical items obtained with a 3D-SEM were investigated. Two items were measured: a wire gauge having a 0.25 mm nominal diameter and a hypodermic needle… It was found that the diameter estimation performed using the 3D-SEM leads to an overestimation of approx. 7% compared to the reference values obtained using a 1-D length measuring machine. The standard deviation of SEM measurements performed on the wire gauge is approx. 1.5 times lower than that performed on the hypodermic needle…

  16. A Kriging Model Based Finite Element Model Updating Method for Damage Detection

    Directory of Open Access Journals (Sweden)

    Xiuming Yang

    2017-10-01

    Full Text Available Model updating is an effective means of damage identification and surrogate modeling has attracted considerable attention for saving computational cost in finite element (FE model updating, especially for large-scale structures. In this context, a surrogate model of frequency is normally constructed for damage identification, while the frequency response function (FRF is rarely used as it usually changes dramatically with updating parameters. This paper presents a new surrogate model based model updating method taking advantage of the measured FRFs. The Frequency Domain Assurance Criterion (FDAC is used to build the objective function, whose nonlinear response surface is constructed by the Kriging model. Then, the efficient global optimization (EGO algorithm is introduced to get the model updating results. The proposed method has good accuracy and robustness, which have been verified by a numerical simulation of a cantilever and experimental test data of a laboratory three-story structure.
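
    A hedged sketch of the surrogate idea, using a Gaussian-process (Kriging) model of a stand-in objective; for brevity it minimizes the surrogate mean on a grid, whereas full EGO would iterate on expected improvement using the predictive standard deviation.

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF, ConstantKernel

        def objective(k):
            # Stand-in for the expensive FE-vs-measurement discrepancy
            # (e.g. 1 - FDAC); k is a stiffness scaling parameter.
            return (np.cos(3 * k) + (k - 0.6)**2).ravel()

        X = np.linspace(0.0, 1.5, 8)[:, None]     # initial design-of-experiments
        y = objective(X)

        gp = GaussianProcessRegressor(ConstantKernel() * RBF(0.3),
                                      normalize_y=True).fit(X, y)

        grid = np.linspace(0.0, 1.5, 500)[:, None]
        mu, sd = gp.predict(grid, return_std=True)  # sd would drive EI in EGO
        i = int(np.argmin(mu))
        print(f"surrogate minimizer k = {grid[i, 0]:.3f} "
              f"(predicted {mu[i]:.3f} +/- {sd[i]:.3f})")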

  17. Development of an Economical Interfacing Circuit for Upgrading of SEM Data Printing System

    International Nuclear Information System (INIS)

    Punnachaiya, S.; Thong-Aram, D.

    2002-01-01

    The operating conditions of a Scanning Electron Microscope (SEM), i.e. magnification, accelerating voltage, micron mark and film identification labeling, are very important for the accurate interpretation of a micrograph. In older SEM models, the built-in data printing system for film identification could only be given numerical input, which caused confusion when various operating conditions were applied in routine work. An economical interfacing circuit was therefore developed to upgrade the data printing system so that it is capable of alphanumeric labeling. The developed circuit was tested on the data printing systems of both the JSM-T220 and the JSM-T330 (JEOL SEMs). It was found that the interfacing function worked properly and was easily installed

  18. Review of Methods for Buildings Energy Performance Modelling

    Science.gov (United States)

    Krstić, Hrvoje; Teni, Mihaela

    2017-10-01

    The research presented in this paper gives a brief review of the methods used for modelling the energy performance of buildings, together with a comprehensive review of the advantages and disadvantages of the available methods and of the input parameters used. The European EPBD Directive obliges the implementation of an energy certification procedure, which gives insight into buildings' energy performance via existing energy certificate databases. Some of the methods for modelling buildings' energy performance mentioned in this paper were developed using data sets of buildings which have already undergone an energy certification procedure. Such a database is used in this paper; the majority of buildings in the database have already undergone some form of partial retrofitting – replacement of windows or installation of thermal insulation – but still have poor energy performance. The case study presented in this paper utilizes an energy certificate database obtained from residential units in Croatia (over 400 buildings) to determine the dependence between buildings' energy performance and the variables in the database by using statistical dependence tests. Building energy performance in the database is expressed as the building energy efficiency rate (from A+ to G), which is based on the specific annual energy needed for heating under referential climatic data [kWh/(m2a)]. Independent variables in the database are the surfaces and volume of the conditioned part of the building, building shape factor, energy used for heating, CO2 emission, building age and year of reconstruction. The research results presented in this paper give insight into the possibilities of the methods used for modelling buildings' energy performance, together with an analysis of the dependencies between buildings' energy performance as the dependent variable and the independent variables from the database. The presented results could be used for the development of new building energy performance…
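
    A hedged sketch of the kind of dependence test described, applied to synthetic stand-ins for the certificate-database variables (Spearman rank correlation; all data invented):

        import numpy as np
        from scipy.stats import spearmanr

        rng = np.random.default_rng(2)
        n = 400                                    # roughly the database size cited
        shape_factor = rng.uniform(0.2, 1.2, n)    # invented building descriptors
        age = rng.integers(1900, 2010, n)
        # Synthetic specific heating need, loosely driven by both variables.
        q_heat = 80 + 120 * shape_factor - 0.5 * (age - 1900) + rng.normal(0, 15, n)

        for name, v in [("shape factor", shape_factor), ("building age", age)]:
            rho, p = spearmanr(q_heat, v)
            print(f"{name}: Spearman rho = {rho:+.2f} (p = {p:.1e})")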

  19. Modelo de web semántica para universidades

    Directory of Open Access Journals (Sweden)

    Karla Abad

    2015-12-01

    Full Text Available Following a study of the current state of microsites and repositories at the Universidad Estatal Península de Santa Elena (UPSE), it was found that their information lacked optimal and adequate semantics. Under these circumstances, the need arose to create a semantic web structure model for universities, which was subsequently applied to the microsites and digital repository of UPSE as a test case. Part of this project included the installation of software modules with their respective configurations and the use of metadata standards such as DUBLIN CORE to improve SEO (search engine optimization); this made it possible to generate standardized metadata and to create policies for uploading information. The use of metadata transforms simple data into well-organized structures that provide information and knowledge to generate results in web search engines. With the implementation of the semantic web model completed, it can be said that the university has improved its presence and visibility on the web through the indexing of its information in different search engines and through its position in the Webometrics categorization of universities and repositories (a ranking that classifies universities worldwide).

  20. A MODEL AND CONTROLLER REDUCTION METHOD FOR ROBUST CONTROL DESIGN.

    Energy Technology Data Exchange (ETDEWEB)

    YUE,M.; SCHLUETER,R.

    2003-10-20

    A bifurcation subsystem based model and controller reduction approach is presented. Using this approach, a robust μ-synthesis SVC control is designed for interarea oscillation and voltage control based on a small reduced-order bifurcation subsystem model of the full system. The control synthesis problem is posed by structured uncertainty modeling and control configuration formulation using the bifurcation subsystem knowledge of the nature of the interarea oscillation caused by a specific uncertainty parameter. The bifurcation subsystem method plays a key role in this paper because it provides (1) a bifurcation parameter for uncertainty modeling; (2) a criterion to reduce the order of the resulting MSVC control; and (3) a low-order model for a bifurcation subsystem based SVC (BMSVC) design. The use of the model of the bifurcation subsystem to produce a low-order controller simplifies the control design and reduces the computational effort so significantly that the robust μ-synthesis control can be applied to large systems where the computation otherwise makes robust control design impractical. The RGA analysis and time simulation show that the reduced BMSVC control design captures the center manifold dynamics and uncertainty structure of the full system model and is capable of stabilizing the full system and achieving satisfactory control performance.

  1. A Parsimonious Bootstrap Method to Model Natural Inflow Energy Series

    Directory of Open Access Journals (Sweden)

    Fernando Luiz Cyrino Oliveira

    2014-01-01

    Full Text Available The Brazilian energy generation and transmission system is quite peculiar in its dimension and characteristics. As such, it can be considered unique in the world. It is a high-dimension hydrothermal system with a huge share of hydro plants. Such strong dependency on hydrological regimes implies uncertainties in energy planning, requiring adequate modeling of the hydrological time series. This is carried out via stochastic simulation of monthly inflow series using the family of Periodic Autoregressive models, PAR(p), one for each period (month) of the year. This paper shows the problems in fitting these models with the current system, particularly the identification of the autoregressive order "p" and the corresponding parameter estimation. It then proposes a new approach to set both the model order and the parameter estimates of the PAR(p) models, using a nonparametric computational technique known as the bootstrap. This technique allows the estimation of reliable confidence intervals for the model parameters. The results obtained using the Parsimonious Bootstrap Method of Moments (PBMOM) produced not only more parsimonious model orders but also adherent stochastic scenarios and, in the long range, lead to a better use of water resources in energy operation planning.
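
    A hedged sketch of a residual bootstrap for an autoregressive inflow model, reduced to AR(1) for brevity (the paper's PAR(p) fits one such model per calendar month, and its PBMOM variant is not reproduced here):

        import numpy as np

        rng = np.random.default_rng(8)
        n, phi_true = 240, 0.6                   # 20 "years" of monthly inflows
        x = np.zeros(n)
        for t in range(1, n):                    # synthetic AR(1) inflow series
            x[t] = phi_true * x[t - 1] + rng.normal()

        def fit_ar1(z):
            return np.dot(z[:-1], z[1:]) / np.dot(z[:-1], z[:-1])

        phi_hat = fit_ar1(x)
        resid = x[1:] - phi_hat * x[:-1]

        # Residual bootstrap: rebuild the series from resampled residuals, refit.
        boot = []
        for _ in range(1000):
            e = rng.choice(resid, size=n - 1, replace=True)
            z = np.zeros(n)
            for t in range(1, n):
                z[t] = phi_hat * z[t - 1] + e[t - 1]
            boot.append(fit_ar1(z))
        ci = np.round(np.percentile(boot, [2.5, 97.5]), 3)
        print(f"phi = {phi_hat:.3f}, 95% bootstrap CI = {ci}")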

  2. Modeling Music Emotion Judgments Using Machine Learning Methods

    Directory of Open Access Journals (Sweden)

    Naresh N. Vempala

    2018-01-01

    Full Text Available Emotion judgments and five channels of physiological data were obtained from 60 participants listening to 60 music excerpts. Various machine learning (ML) methods were used to model the emotion judgments, including neural networks, linear regression, and random forests. Input for models of perceived emotion consisted of audio features extracted from the music recordings. Input for models of felt emotion consisted of physiological features extracted from the physiological recordings. Models were trained and interpreted with consideration of the classic debate in music emotion between cognitivists and emotivists. Our models supported a hybrid position wherein emotion judgments were influenced by a combination of perceived and felt emotions. In comparing the different ML approaches used for modeling, we conclude that neural networks were optimal, yielding models that were flexible as well as interpretable. Inspection of a committee machine, encompassing an ensemble of networks, revealed that arousal judgments were predominantly influenced by felt emotion, whereas valence judgments were predominantly influenced by perceived emotion.
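
    A hedged sketch of the modeling setup with a small neural network on synthetic feature/judgment data (the feature names and generating relations are invented, not the study's data):

        import numpy as np
        from sklearn.model_selection import train_test_split
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(4)
        n = 600                                   # pseudo "excerpt" feature vectors
        X = rng.normal(size=(n, 8))               # e.g. tempo, mode, RMS, HR, SCR...
        Y = np.column_stack([np.tanh(X[:, 0] + 0.5 * X[:, 3]),   # "valence"
                             np.tanh(X[:, 1] - 0.3 * X[:, 5])])  # "arousal"
        Y += rng.normal(0.0, 0.1, Y.shape)

        Xtr, Xte, Ytr, Yte = train_test_split(X, Y, random_state=0)
        net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=3000,
                           random_state=0).fit(Xtr, Ytr)
        print(f"held-out R^2 = {net.score(Xte, Yte):.2f}")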

  3. New Models and Methods for the Electroweak Scale

    Energy Technology Data Exchange (ETDEWEB)

    Carpenter, Linda [The Ohio State Univ., Columbus, OH (United States). Dept. of Physics

    2017-09-26

    This is the Final Technical Report to the US Department of Energy for grant DE-SC0013529, New Models and Methods for the Electroweak Scale, covering the time period April 1, 2015 to March 31, 2017. The goal of this project was to maximize the understanding of fundamental weak-scale physics in light of current experiments, mainly the ongoing run of the Large Hadron Collider and the space-based satellite experiments searching for signals of Dark Matter annihilation or decay. This research program focused on the phenomenology of supersymmetry, Higgs physics, and Dark Matter. The properties of the Higgs boson are currently being measured by the Large Hadron Collider, and could be a sensitive window into new physics at the weak scale. Supersymmetry is the leading theoretical candidate to explain the naturalness of the electroweak theory; however, new model space must be explored, as the Large Hadron Collider has disfavored much of the minimal model parameter space. In addition, the nature of Dark Matter, the mysterious particle that makes up 25% of the mass of the universe, is still unknown. This project sought to address measurements of the Higgs boson couplings to the Standard Model particles, new LHC discovery scenarios for supersymmetric particles, and new measurements of Dark Matter interactions with the Standard Model, both in collider production and annihilation in space. Accomplishments include creating new tools for analyses of models in which Dark Matter annihilates into multiple Standard Model particles, including new visualizations of bounds for models with various Dark Matter branching ratios; benchmark studies for new Dark Matter discovery scenarios at the Large Hadron Collider for Higgs-Dark Matter and gauge boson-Dark Matter interactions; new target analyses to detect direct decays of the Higgs boson into challenging final states like pairs of light jets; and new phenomenological analysis of non-minimal supersymmetric models, namely the set of Dirac…

  4. Impacts modeling using the SPH particulate method. Case study

    International Nuclear Information System (INIS)

    Debord, R.

    1999-01-01

    The aim of this study is the modeling of the impact of melted metal on the reactor vessel head in the case of a core-meltdown accident. Modeling using the classical finite-element method alone is not sufficient; it requires coupling with particulate methods in order to take into account the behaviour of the corium. After a general introduction to particulate methods, the Nabor and SPH (smoothed particle hydrodynamics) methods are described. Then, the theoretical and numerical reliability of the SPH method is assessed using simple cases. In particular, the number of neighbours significantly influences the precision of the calculations. Also, the mesh of the structure must be adapted to the mesh of the fluid in order to reduce edge effects. Finally, this study has shown that the values of the artificial viscosity coefficients used in the simulation of the BERDA test performed by FZK Karlsruhe (Germany) are not correct; the domain of use of these coefficients was determined for a low-speed impact. (J.S.)
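
    A minimal SPH building block, assuming the standard 2-D cubic spline kernel: the density summation over a particle block, which also exposes the edge deficiency that particle methods must handle.

        import numpy as np

        def w_cubic(r, h):
            # Standard 2-D cubic spline kernel (Monaghan); support radius 2h.
            q = r / h
            s = 10.0 / (7.0 * np.pi * h**2)
            w = np.where(q < 1.0, 1.0 - 1.5 * q**2 + 0.75 * q**3,
                np.where(q < 2.0, 0.25 * (2.0 - q)**3, 0.0))
            return s * w

        # Density summation rho_i = sum_j m_j W(|r_i - r_j|, h) on a 10x10 block.
        h, m = 0.1, 1.0
        xy = np.stack(np.meshgrid(np.arange(10) * h, np.arange(10) * h),
                      -1).reshape(-1, 2)
        r = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
        rho = (m * w_cubic(r, h)).sum(axis=1)
        print(f"interior density ~ {rho.max():.1f}, edge density ~ {rho.min():.1f}")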

  5. Seamless Method- and Model-based Software and Systems Engineering

    Science.gov (United States)

    Broy, Manfred

    Today engineering software intensive systems is still more or less handicraft or at most at the level of manufacturing. Many steps are done ad-hoc and not in a fully systematic way. Applied methods, if any, are not scientifically justified, not justified by empirical data and as a result carrying out large software projects still is an adventure. However, there is no reason why the development of software intensive systems cannot be done in the future with the same precision and scientific rigor as in established engineering disciplines. To do that, however, a number of scientific and engineering challenges have to be mastered. The first one aims at a deep understanding of the essentials of carrying out such projects, which includes appropriate models and effective management methods. What is needed is a portfolio of models and methods coming together with a comprehensive support by tools as well as deep insights into the obstacles of developing software intensive systems and a portfolio of established and proven techniques and methods with clear profiles and rules that indicate when which method is ready for application. In the following we argue that there is scientific evidence and enough research results so far to be confident that solid engineering of software intensive systems can be achieved in the future. However, yet quite a number of scientific research problems have to be solved.

  6. Finite-element method modeling of hyper-frequency structures

    International Nuclear Information System (INIS)

    Zhang, Min

    1990-01-01

    The modeling of microwave propagation problems, including eigenvalue problems and scattering problems, is accomplished by the finite element method with vector and scalar functionals. For the eigenvalue problem, propagation modes in waveguides and resonant modes in cavities can be calculated for an arbitrarily shaped structure with inhomogeneous material. Several microwave structures were solved in order to verify the program. One drawback associated with the vector functional is the appearance of spurious, non-physical solutions; a penalty function method has been introduced to reduce the spurious solutions. The adaptive charge method is proposed in this thesis to solve the waveguide scattering problem. This method, similar to the VSWR measuring technique, obtains the reflection coefficient more efficiently than the matrix method. Two waveguide discontinuity structures were calculated by the two methods and their results compared. The adaptive charge method was also applied to a microwave plasma excitor, allowing us to understand the role of the excitor's different physical parameters in the coupling of microwave energy to the plasma mode and the mode without plasma. (author)
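
    A hedged 1-D analogue of the finite-element eigenvalue problem described (linear elements, Dirichlet walls), yielding the generalized system K v = k² M v whose eigenvalues approximate the exact cutoff wavenumbers mπ/a:

        import numpy as np
        from scipy.linalg import eigh

        # -u'' = k^2 u on (0, a) with u = 0 at the walls.
        a, n = 1.0, 60                          # width and number of elements
        h = a / n
        K = (np.diag(2 * np.ones(n - 1)) - np.diag(np.ones(n - 2), 1)
             - np.diag(np.ones(n - 2), -1)) / h          # stiffness matrix
        M = (np.diag(4 * np.ones(n - 1)) + np.diag(np.ones(n - 2), 1)
             + np.diag(np.ones(n - 2), -1)) * h / 6.0    # consistent mass matrix

        k2 = eigh(K, M, eigvals_only=True)[:3]
        print(np.sqrt(k2))                      # ~ [pi, 2pi, 3pi]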

  7. Fractal Image Informatics: from SEM to DEM

    Science.gov (United States)

    Oleschko, K.; Parrot, J.-F.; Korvin, G.; Esteves, M.; Vauclin, M.; Torres-Argüelles, V.; Salado, C. Gaona; Cherkasov, S.

    2008-05-01

    In this paper, we introduce a new branch of fractal geometry: Fractal Image Informatics, devoted to the systematic and standardized fractal analysis of images of natural systems. The methods of this discipline are based on the properties of multiscale images of self-affine fractal surfaces. As proved in the paper, the image inherits the scaling and lacunarity of the surface and of its reflectance distribution [Korvin, 2005]. We claim that the fractal analysis of these images must be done without any smoothing, thresholding or binarization. Two new tools of Fractal Image Informatics, firmagram analysis (FA) and generalized lacunarity (GL), are presented and discussed in detail. These techniques are applicable to any kind of image or to any observed positive-valued physical field, and can be used to correlate between images. It is shown, by a modified Grassberger-Hentschel-Procaccia approach [Phys. Lett. 97A, 227 (1983); Physica 8D, 435 (1983)], that GL obeys the same scaling law as the Allain-Cloitre lacunarity [Phys. Rev. A 44, 3552 (1991)] but is free of the problems associated with gliding boxes. Several applications are shown from soil physics, surface science, and other fields.
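
    For reference, a minimal gliding-box (Allain-Cloitre) lacunarity sketch of the kind GL is compared against, on a random stand-in image:

        import numpy as np
        from scipy.ndimage import uniform_filter

        def lacunarity(img, r):
            # Allain-Cloitre gliding-box lacunarity: Lambda(r) = <M^2> / <M>^2,
            # where M is the mass inside each r x r box gliding over the image.
            mass = uniform_filter(img.astype(float), r, mode="constant") * r * r
            k = r // 2                                    # keep interior boxes
            M = mass[k:img.shape[0] - k, k:img.shape[1] - k]
            return (M**2).mean() / M.mean()**2

        rng = np.random.default_rng(9)
        binary = (rng.random((256, 256)) < 0.2).astype(float)  # stand-in image
        for r in (2, 4, 8, 16, 32):
            print(f"r={r:2d}  Lambda={lacunarity(binary, r):.3f}")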

  8. Modeling of Methods to Control Heat-Consumption Efficiency

    Science.gov (United States)

    Tsynaeva, E. A.; Tsynaeva, A. A.

    2016-11-01

    In this work, consideration has been given to thermophysical processes in automated heat consumption control systems (AHCCSs) of buildings, flow diagrams of these systems, and mathematical models describing the thermophysical processes during the systems' operation; an analysis of adequacy of the mathematical models has been presented. A comparison has been made of the operating efficiency of the systems and the methods to control the efficiency. It has been determined that the operating efficiency of an AHCCS depends on its diagram and the temperature chart of central quality control (CQC) and also on the temperature of a low-grade heat source for the system with a heat pump.

  9. A Method for Modeling of Floating Vertical Axis Wind Turbine

    DEFF Research Database (Denmark)

    Wang, Kai; Hansen, Martin Otto Laver; Moan, Torgeir

    2013-01-01

    It is of interest to investigate the potential advantages of the floating vertical axis wind turbine (FVAWT) due to its economical installation and maintenance. A novel 5 MW vertical axis wind turbine concept with a Darrieus rotor mounted on a semi-submersible support structure is proposed in this paper. In order to assess the technical and economic feasibility of this novel concept, a comprehensive simulation tool for modeling the floating vertical axis wind turbine is needed. This work presents the development of a coupled method for modeling the dynamics of a floating vertical axis wind turbine…

  10. (Environmental and geophysical modeling, fracture mechanics, and boundary element methods)

    Energy Technology Data Exchange (ETDEWEB)

    Gray, L.J.

    1990-11-09

    Technical discussions at the various sites visited centered on the application of boundary integral methods for environmental modeling, seismic analysis, and computational fracture mechanics in composite and "smart" materials. The traveler also attended the International Association for Boundary Element Methods Conference at Rome, Italy. While many aspects of boundary element theory and applications were discussed in the papers, the dominant topic was the analysis and application of hypersingular equations. This has been the focus of recent work by the author, and thus the conference was highly relevant to research at ORNL.

  11. Markov chain Monte Carlo methods in directed graphical models

    DEFF Research Database (Denmark)

    Højbjerre, Malene

    Inference in directed graphical models has primarily been based on a Bayesian paradigm, i.e. prior information on the parameters is a prerequisite, but questions about undesirable side effects from the priors are raised. We present a method, based on MCMC methods, that approximates profile log-likelihood functions in directed graphical models. The method is applied to data on whether a tendency to foetal loss is heritable. The data possess a complicated dependence structure due to replicate pregnancies for the same woman, and a given family pattern. We conclude that a tendency to foetal loss is heritable. The model is of great interest in genetic epidemiology, because it considers both…
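
    A hedged random-walk Metropolis sketch of the MCMC machinery involved, on an invented binomial toy problem (not the foetal-loss model itself):

        import numpy as np

        rng = np.random.default_rng(12)
        k, n = 27, 60                          # affected / total (invented data)

        def log_post(th):
            # Binomial log-likelihood with a flat prior on (0, 1).
            if not 0.0 < th < 1.0:
                return -np.inf
            return k * np.log(th) + (n - k) * np.log(1.0 - th)

        theta, chain = 0.5, []
        for _ in range(20000):
            prop = theta + rng.normal(0.0, 0.08)          # random-walk proposal
            if np.log(rng.random()) < log_post(prop) - log_post(theta):
                theta = prop                              # accept
            chain.append(theta)
        chain = np.array(chain[5000:])                    # drop burn-in
        print(f"posterior mean {chain.mean():.3f}, 95% CI "
              f"{np.round(np.percentile(chain, [2.5, 97.5]), 3)}")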

  12. Complex Data Modeling and Computationally Intensive Statistical Methods

    CERN Document Server

    Mantovan, Pietro

    2010-01-01

    The last years have seen the advent and development of many devices able to record and store an always increasing amount of complex and high dimensional data; 3D images generated by medical scanners or satellite remote sensing, DNA microarrays, real time financial data, system control datasets. The analysis of this data poses new challenging problems and requires the development of novel statistical models and computational methods, fueling many fascinating and fast growing research areas of modern statistics. The book offers a wide variety of statistical methods and is addressed to statistici

  13. RF tunable devices and subsystems: methods of modeling, analysis, and applications

    CERN Document Server

    Gu, Qizheng

    2015-01-01

    This book serves as a hands-on guide to RF tunable devices, circuits and subsystems. An innovative method of modeling for tunable devices and networks is described, along with a new tuning algorithm, adaptive matching network control approach, and novel filter frequency automatic control loop. The author provides readers with the necessary background and methods for designing and developing tunable RF networks/circuits and tunable RF front-ends, with an emphasis on applications to cellular communications. The book: discusses the methods of characterizing, modeling, analyzing, and applying RF tunable devices and subsystems; explains the necessary methods of utilizing RF tunable devices and subsystems, rather than discussing the RF tunable devices themselves; presents and applies methods for MEMS tunable capacitors, which can be used for any RF tunable device; uses analytic methods wherever possible and provides numerous closed-form solutions; includ…

  14. Alternative wind power modeling methods using chronological and load duration curve production cost models

    Energy Technology Data Exchange (ETDEWEB)

    Milligan, M R

    1996-04-01

    As an intermittent resource, capturing the temporal variation in wind power is an important issue in the context of utility production cost modeling. Many of the production cost models use a method that creates a cumulative probability distribution that is outside the time domain. The purpose of this report is to examine two production cost models that represent the two major model types: chronological and load duration curve models. This report is part of the ongoing research undertaken by the Wind Technology Division of the National Renewable Energy Laboratory in utility modeling and wind system integration.
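
    A hedged sketch of the two treatments' key difference: chronological modeling subtracts wind from load hour by hour before any duration-curve statistics are formed, preserving the wind-load correlation that a pure load-duration-curve model loses (all series below are synthetic):

        import numpy as np

        rng = np.random.default_rng(6)
        hours = 8760
        load = (600 + 150 * np.sin(2 * np.pi * np.arange(hours) / 24)
                + rng.normal(0, 30, hours))                       # MW
        wind = np.clip(rng.gamma(2.0, 40.0, hours), 0, 200)       # MW

        # Chronological treatment: net load in the time domain, then sort.
        net_ldc = np.sort(load - wind)[::-1]
        ldc = np.sort(load)[::-1]
        print(f"peak shaving by wind (chronological): {ldc[0] - net_ldc[0]:.0f} MW")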

  15. Evaluation of depth of field in SEM images in terms of the information-passing capacity (IPC) and contrast gradient in SEM image

    International Nuclear Information System (INIS)

    Sato, Mitsugu; Ishitani, Tohru; Watanabe, Shunya; Nakagawa, Mine

    2004-01-01

    The depth of field (DoF) in scanning electron microscope (SEM) images has been determined by estimating the change of image sharpness or resolution near the exact focus position. The image sharpness or resolution along the optical axis is determined by calculating the information-passing capacity (IPC) of an optical system taking into account the effect of pixel size of the image. The change of image sharpness near the exact focus position is determined by measuring the slope gradient of the line profile in SEM images obtained at various focal positions of the beam. The change of image sharpness along the optical axis determined by the IPC agrees well with that determined by the slope gradient of line profiles in SEM images when a Gaussian distribution having radius 0.86Lp (Lp: pixel size in the image), at which the intensity has fallen to 1/e of the maximum, is applied to the IPC calculation for each pixel intensity. The change of image sharpness near the exact focus position has also been compared with that determined by the CG (Contrast-to-Gradient) method. The CG method slightly underestimates the change of image sharpness compared with that determined by the IPC method

  16. Procedures and Methods of Digital Modeling in Representation Didactics

    Science.gov (United States)

    La Mantia, M.

    2011-09-01

    In the Bachelor degree course in Engineering/Architecture at the University "La Sapienza" of Rome, the courses of Design and Survey, besides teaching methods of representation and the application of descriptive geometry and survey so as to expand the student's vision and spatial conception, pay particular attention to the use of information technology for the preparation of design and survey drawings, achieving their goals through an educational path of "learning techniques, procedures and methods of modeling architectural structures." The fields of application involved two different educational areas, analysis and survey, ranging from the acquisition of the metric data (design or survey) to the development of the three-dimensional virtual model.

  17. Optimization Method of Fusing Model Tree into Partial Least Squares

    Directory of Open Access Journals (Sweden)

    Yu Fang

    2017-01-01

    Full Text Available Partial Least Squares (PLS) cannot adapt to the characteristics of data in many fields that feature multiple independent variables, multiple dependent variables and nonlinearity. However, the Model Tree (MT), which is made up of many multiple linear segments, adapts well to nonlinear functions. Based on this, a new method combining PLS and MT to analyse and predict data is proposed, which builds an MT from the components and explanatory variables extracted by PLS, and repeatedly extracts residual information to build further model trees until a satisfactory accuracy condition is met. Using data on the maxingshigan decoction (the monarch drug, used to treat asthma and cough) and two sample sets from the UCI Machine Learning Repository, the experimental results show that the explanatory and predictive ability of the new method is improved.
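
    A hedged sketch of the PLS-plus-tree idea on synthetic data; note that scikit-learn's DecisionTreeRegressor is a piecewise-constant regression tree used here as a stand-in for a true piecewise-linear model tree, and a single residual round replaces the paper's iterative scheme.

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression
        from sklearn.tree import DecisionTreeRegressor

        rng = np.random.default_rng(10)
        n = 300
        X = rng.normal(size=(n, 6))
        y = (X[:, 0] - 0.5 * X[:, 1]                       # linear part
             + np.where(X[:, 2] > 0, 2.0, -2.0)            # nonlinear part
             + rng.normal(0.0, 0.2, n))

        pls = PLSRegression(n_components=3).fit(X, y)
        resid = y - pls.predict(X).ravel()                 # what PLS misses

        tree = DecisionTreeRegressor(max_depth=3, random_state=0).fit(X, resid)
        y_hat = pls.predict(X).ravel() + tree.predict(X)

        def r2(y, f):
            return 1 - np.sum((y - f)**2) / np.sum((y - y.mean())**2)
        print(f"R2 PLS alone {r2(y, pls.predict(X).ravel()):.2f}, "
              f"PLS + tree {r2(y, y_hat):.2f}")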

  18. A Method of Upgrading a Hydrostatic Model to a Nonhydrostatic Model

    Directory of Open Access Journals (Sweden)

    Chi-Sann Liou

    2009-01-01

    Full Text Available As the sigma-p coordinate under the hydrostatic approximation can be interpreted as the mass coordinate without the hydrostatic approximation, we propose a method that upgrades a hydrostatic model to a nonhydrostatic model with relatively little effort. The method adds to the primitive equations the extra terms omitted by the hydrostatic approximation and two prognostic equations for vertical speed w and nonhydrostatic part pressure p'. With properly formulated governing equations, at each time step, the dynamic part of the model is first integrated as for the original hydrostatic model and then nonhydrostatic contributions are added as corrections to the hydrostatic solutions. In applying physical parameterizations after the dynamic part integration, all physics packages of the original hydrostatic model can be directly used in the nonhydrostatic model, since the upgraded nonhydrostatic model shares the same vertical coordinates with the original hydrostatic model. In this way, the majority of the code of the nonhydrostatic model comes from the original hydrostatic model; extra code is only needed for the calculations additional to the primitive equations. In order to handle sound waves, we use smaller time steps in the nonhydrostatic part dynamic time integration, with a split-explicit scheme for horizontal momentum and temperature and a semi-implicit scheme for w and p'. Simulations of 2-dimensional mountain waves and density flows associated with a cold bubble have been used to test the method. The idealized case tests demonstrate that the proposed method realistically simulates the nonhydrostatic effects on different atmospheric circulations that are revealed in theoretical solutions and simulations from other nonhydrostatic models. This method can be used to upgrade any global or mesoscale model from a hydrostatic to a nonhydrostatic model.

  19. Multigrid Methods for A Mixed Finite Element Method of The Darcy-Forchheimer Model.

    Science.gov (United States)

    Huang, Jian; Chen, Long; Rui, Hongxing

    2018-01-01

    An efficient nonlinear multigrid method for a mixed finite element method of the Darcy-Forchheimer model is constructed in this paper. A Peaceman-Rachford type iteration is used as a smoother to decouple the nonlinearity from the divergence constraint. The nonlinear equation can be solved element-wise with a closed formula. The linear saddle point system for the constraint is reduced to a symmetric positive definite system of Poisson type. Furthermore, an empirical choice of the parameter used in the splitting is proposed, and the resulting multigrid method is robust to the so-called Forchheimer number, which controls the strength of the nonlinearity. By comparing the number of iterations and CPU time of different solvers in several numerical experiments, our multigrid method is shown to converge at a rate independent of the mesh size and the Forchheimer number, and with a nearly linear computational cost.

  20. Role of scanning electron microscope (SEM) in metal failure analysis

    International Nuclear Information System (INIS)

    Shaiful Rizam Shamsudin; Hafizal Yazid; Mohd Harun; Siti Selina Abd Hamid; Nadira Kamarudin; Zaiton Selamat; Mohd Shariff Sattar; Muhamad Jalil

    2005-01-01

    The scanning electron microscope (SEM) is a scientific instrument that uses a beam of highly energetic electrons to examine the surface and phase distribution of specimens on a micro scale through live imaging of secondary electron (SE) and back-scattered electron (BSE) images. One of the main activities of the SEM Laboratory at MINT is failure analysis of metal parts and components. The capability of SEM is excellent for determining the root cause of metal failures such as ductile or brittle failure, stress corrosion, fatigue and other types of failure. Most of our customers who request failure analysis are from local petrochemical plants, manufacturers of automotive components, pipeline maintenance personnel and engineers involved in the development of metal parts and components. This paper intends to discuss some of the technical concepts in failure analysis associated with SEM. (Author)

  1. HyPEP FY06 Report: Models and Methods

    Energy Technology Data Exchange (ETDEWEB)

    DOE report

    2006-09-01

    The Department of Energy envisions the next generation very high-temperature gas-cooled reactor (VHTR) as a single-purpose or dual-purpose facility that produces hydrogen and electricity. The Ministry of Science and Technology (MOST) of the Republic of Korea also selected the VHTR for the Nuclear Hydrogen Development and Demonstration (NHDD) Project. This research project aims at developing a user-friendly program for evaluating and optimizing cycle efficiencies of producing hydrogen and electricity in a Very-High-Temperature Reactor (VHTR). Systems for producing electricity and hydrogen are complex, and the calculations associated with optimizing these systems are intensive, involving a large number of operating parameter variations and many different system configurations. This research project will produce the HyPEP computer model, which is specifically designed to be an easy-to-use and fast-running tool for evaluating nuclear hydrogen and electricity production facilities. The model accommodates flexible system layouts, and its cost models will make HyPEP well-suited for system optimization. Specific activities of this research are designed to develop the HyPEP model into a working tool, including (a) identifying major systems and components for modeling, (b) establishing system operating parameters and calculation scope, (c) establishing the overall calculation scheme, (d) developing component models, (e) developing cost and optimization models, and (f) verifying and validating the program. Once the HyPEP model is fully developed and validated, it will be used to execute calculations on candidate system configurations. The FY-06 report includes a description of reference designs, methods used in this study, and models and computational strategies developed for the first year's effort. Results from computer codes such as HYSYS and GASS/PASS-H, used by Idaho National Laboratory and Argonne National Laboratory respectively, will be benchmarked against HyPEP results in the…

  2. Chebyshev super spectral viscosity method for a fluidized bed model

    International Nuclear Information System (INIS)

    Sarra, Scott A.

    2003-01-01

    A Chebyshev super spectral viscosity method and operator splitting are used to solve a hyperbolic system of conservation laws with a source term modeling a fluidized bed. The fluidized bed displays a slugging behavior which corresponds to shocks in the solution. A modified Gegenbauer postprocessing procedure is used to obtain a solution which is free of oscillations caused by the Gibbs-Wilbraham phenomenon in the spectral viscosity solution. Conservation is maintained by working with unphysical negative particle concentrations.
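
    For context, the building block of Chebyshev collocation schemes is the differentiation matrix on Gauss-Lobatto points; a standard construction (following Trefethen) is sketched below. The super spectral viscosity operator itself, and the Gegenbauer postprocessing, are not reproduced here.

```python
import numpy as np

def cheb(N):
    """Chebyshev differentiation matrix D and Gauss-Lobatto nodes x on
    [-1, 1], following Trefethen's classic cheb construction."""
    if N == 0:
        return np.zeros((1, 1)), np.ones(1)
    x = np.cos(np.pi * np.arange(N + 1) / N)
    c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
    dX = x[:, None] - x[None, :]
    D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))
    D -= np.diag(D.sum(axis=1))
    return D, x

D, x = cheb(16)
print(np.abs(D @ np.sin(x) - np.cos(x)).max())   # spectrally small error
```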

  3. A Model Based Security Testing Method for Protocol Implementation

    Directory of Open Access Journals (Sweden)

    Yu Long Fu

    2014-01-01

    Full Text Available The security of protocol implementations is important and hard to verify. Since penetration testing is usually based on the experience of the security tester and the specific protocol specifications, a formal and automatic verification method is always required. In this paper, we propose an extended model of IOLTS to describe the legal roles and intruders of security protocol implementations, and then combine them together to generate suitable test cases to verify the security of a protocol implementation.

  4. Chebyshev super spectral viscosity method for a fluidized bed model

    CERN Document Server

    Sarra, S A

    2003-01-01

    A Chebyshev super spectral viscosity method and operator splitting are used to solve a hyperbolic system of conservation laws with a source term modeling a fluidized bed. The fluidized bed displays a slugging behavior which corresponds to shocks in the solution. A modified Gegenbauer postprocessing procedure is used to obtain a solution which is free of oscillations caused by the Gibbs-Wilbraham phenomenon in the spectral viscosity solution. Conservation is maintained by working with unphysical negative particle concentrations.

  5. Methods for landslide susceptibility modelling in Lower Austria

    Science.gov (United States)

    Bell, Rainer; Petschko, Helene; Glade, Thomas; Leopold, Philip; Heiss, Gerhard; Proske, Herwig; Granica, Klaus; Schweigl, Joachim; Pomaroli, Gilbert

    2010-05-01

    Landslide susceptibility modelling and implementation of the resulting maps is still a challenge for geoscientists, spatial and infrastructure planners. Particularly on a regional scale landslide processes and their dynamics are poorly understood. Furthermore, the availability of appropriate spatial data in high resolution is often a limiting factor for modelling high quality landslide susceptibility maps for large study areas. However, these maps form an important basis for preventive spatial planning measures. Thus, new methods have to be developed, especially focussing on the implementation of final maps into spatial planning processes. The main objective of the project "MoNOE" (Method development for landslide susceptibility modelling in Lower Austria) is to design a method for landslide susceptibility modelling for a large study area (about 10,200 km²) and to produce landslide susceptibility maps which are finally implemented in the spatial planning strategies of the Federal state of Lower Austria. The project focuses primarily on the landslide types fall and slide. To enable susceptibility modelling, landslide inventories for the respective landslide types must be compiled and relevant data has to be gathered, prepared and homogenized. Based on this data new methods must be developed to tackle the needs of the spatial planning strategies. Considerable efforts will also be spent on the validation of the resulting maps for each landslide type. A great challenge will be the combination of the susceptibility maps for slides and falls in just one single susceptibility map (which is requested by the government) and the definition of the final visualisation. Since numerous landslides have been favoured or even triggered by human impact, the human influence on landslides will also have to be investigated. Furthermore possibilities to integrate respective findings in regional susceptibility modelling will be explored. According to these objectives the project is

  6. Semi-Lagrangian methods in air pollution models

    Directory of Open Access Journals (Sweden)

    A. B. Hansen

    2011-06-01

    Full Text Available Various semi-Lagrangian methods are tested with respect to advection in air pollution modeling. The aim is to find a method fulfilling as many of the desirable properties by Rasch and Williamson (1990) and Machenhauer et al. (2008) as possible. The focus in this study is on accuracy and local mass conservation.

    The methods tested are, first, classical semi-Lagrangian cubic interpolation, see e.g. Durran (1999); second, semi-Lagrangian cubic cascade interpolation, by Nair et al. (2002); third, semi-Lagrangian cubic interpolation with the modified interpolation weights, Locally Mass Conserving Semi-Lagrangian (LMCSL), by Kaas (2008); and last, semi-Lagrangian cubic interpolation with a locally mass conserving monotonic filter by Kaas and Nielsen (2010).

    Semi-Lagrangian (SL interpolation is a classical method for atmospheric modeling, cascade interpolation is more efficient computationally, modified interpolation weights assure mass conservation and the locally mass conserving monotonic filter imposes monotonicity.

    All schemes are tested with advection alone or with advection and chemistry together under both typical rural and urban conditions using different temporal and spatial resolutions. The methods are compared with a current state-of-the-art scheme, Accurate Space Derivatives (ASD), see Frohn et al. (2002), presently used at the National Environmental Research Institute (NERI) in Denmark. To enable a consistent comparison only non-divergent flow configurations are tested.

    The test cases are based either on the traditional slotted cylinder or the rotating cone, where the schemes' ability to model both steep gradients and slopes is challenged.

    The tests showed that the locally mass conserving monotonic filter improved the results significantly for some of the test cases, though not for all. It was found that the semi-Lagrangian schemes, in almost every case, were not able to outperform the current ASD scheme.
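
    The baseline scheme in the comparison, classical semi-Lagrangian cubic interpolation, can be sketched in a few lines for 1-D periodic advection with a constant wind (no mass fixer or monotonic filter; the grid, wind and time step below are illustrative):

```python
import numpy as np

def sl_advect(c, u, dt, dx):
    """One classical semi-Lagrangian step for dc/dt + u*dc/dx = 0 on a
    periodic grid: trace departure points back with the (constant) wind
    and cubically interpolate the field there."""
    n = c.size
    x = np.arange(n) * dx
    xd = (x - u * dt) % (n * dx)           # departure points
    j = np.floor(xd / dx).astype(int)      # left-neighbour index
    a = xd / dx - j                        # fractional offset in [0, 1)
    idx = lambda k: (j + k) % n            # periodic stencil indexing
    # Cubic Lagrange interpolation on the stencil j-1, j, j+1, j+2.
    return (-a * (a - 1) * (a - 2) / 6 * c[idx(-1)]
            + (a ** 2 - 1) * (a - 2) / 2 * c[idx(0)]
            - a * (a + 1) * (a - 2) / 2 * c[idx(1)]
            + a * (a ** 2 - 1) / 6 * c[idx(2)])

# Advect a Gaussian puff approximately once around the periodic domain
# at Courant number 0.7.
n, dx, u, dt = 200, 1.0, 0.7, 1.0
c = np.exp(-0.5 * ((np.arange(n) * dx - 50.0) / 5.0) ** 2)
for _ in range(int(n * dx / (u * dt))):
    c = sl_advect(c, u, dt, dx)
```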

  7. Simulation Methods and Validation Criteria for Modeling Cardiac Ventricular Electrophysiology.

    Science.gov (United States)

    Krishnamoorthi, Shankarjee; Perotti, Luigi E; Borgstrom, Nils P; Ajijola, Olujimi A; Frid, Anna; Ponnaluri, Aditya V; Weiss, James N; Qu, Zhilin; Klug, William S; Ennis, Daniel B; Garfinkel, Alan

    2014-01-01

    We describe a sequence of methods to produce a partial differential equation model of the electrical activation of the ventricles. In our framework, we incorporate the anatomy and cardiac microstructure obtained from magnetic resonance imaging and diffusion tensor imaging of a New Zealand White rabbit, the Purkinje structure and the Purkinje-muscle junctions, and an electrophysiologically accurate model of the ventricular myocytes and tissue, which includes transmural and apex-to-base gradients of action potential characteristics. We solve the electrophysiology governing equations using the finite element method and compute both a 6-lead precordial electrocardiogram (ECG) and the activation wavefronts over time. We are particularly concerned with the validation of the various methods used in our model and, in this regard, propose a series of validation criteria that we consider essential. These include producing a physiologically accurate ECG, a correct ventricular activation sequence, and the inducibility of ventricular fibrillation. Among other components, we conclude that a Purkinje geometry with a high density of Purkinje muscle junctions covering the right and left ventricular endocardial surfaces as well as transmural and apex-to-base gradients in action potential characteristics are necessary to produce ECGs and time activation plots that agree with physiological observations.

  8. Simulation Methods and Validation Criteria for Modeling Cardiac Ventricular Electrophysiology.

    Directory of Open Access Journals (Sweden)

    Shankarjee Krishnamoorthi

    Full Text Available We describe a sequence of methods to produce a partial differential equation model of the electrical activation of the ventricles. In our framework, we incorporate the anatomy and cardiac microstructure obtained from magnetic resonance imaging and diffusion tensor imaging of a New Zealand White rabbit, the Purkinje structure and the Purkinje-muscle junctions, and an electrophysiologically accurate model of the ventricular myocytes and tissue, which includes transmural and apex-to-base gradients of action potential characteristics. We solve the electrophysiology governing equations using the finite element method and compute both a 6-lead precordial electrocardiogram (ECG) and the activation wavefronts over time. We are particularly concerned with the validation of the various methods used in our model and, in this regard, propose a series of validation criteria that we consider essential. These include producing a physiologically accurate ECG, a correct ventricular activation sequence, and the inducibility of ventricular fibrillation. Among other components, we conclude that a Purkinje geometry with a high density of Purkinje muscle junctions covering the right and left ventricular endocardial surfaces as well as transmural and apex-to-base gradients in action potential characteristics are necessary to produce ECGs and time activation plots that agree with physiological observations.

  9. Sparse aerosol models beyond the quadrature method of moments

    Science.gov (United States)

    McGraw, Robert

    2013-05-01

    This study examines a class of sparse aerosol models (SAMs) derived from linear programming (LP). The widely used quadrature method of moments (QMOM) is shown to fall into this class. Here it is shown how other sparse aerosol models can be constructed, which are not based on moments of the particle size distribution. The new methods enable one to bound atmospheric aerosol physical and optical properties using arbitrary combinations of model parameters and measurements. Rigorous upper and lower bounds, e.g. on the number of aerosol particles that can activate to form cloud droplets, can be obtained this way from measurement constraints that may include total particle number concentration and size distribution moments. The new LP-based methods allow a much wider range of aerosol properties, such as light backscatter or extinction coefficient, which are not easily connected to particle size moments, to also be assimilated into a list of constraints. Finally, it is shown that many of these more general aerosol properties can be tracked directly in an aerosol dynamics simulation, using SAMs, in much the same way that moments are tracked directly in the QMOM.

  10. The Quadrotor Dynamic Modeling and Indoor Target Tracking Control Method

    Directory of Open Access Journals (Sweden)

    Dewei Zhang

    2014-01-01

    Full Text Available A reliable nonlinear dynamic model of the quadrotor is presented. The nonlinear dynamic model includes actuator dynamics and aerodynamic effects. Since the rotors run near a constant hovering speed, the dynamic model is simplified at the hovering operating point. Based on the simplified nonlinear dynamic model, PID controllers with feedback linearization and feedforward control are proposed using the backstepping method. These controllers are used to control both the attitude and position of the quadrotor. A fully custom quadrotor is developed to verify the correctness of the dynamic model and control algorithms. The attitude of the quadrotor is measured by an inertial measurement unit (IMU). The position of the quadrotor in a GPS-denied environment, especially an indoor environment, is estimated from the downward camera and ultrasonic sensor measurements. The validity and effectiveness of the proposed dynamic model and control algorithms are demonstrated by experimental results. It is shown that the vehicle achieves robust vision-based hovering and moving target tracking control.
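
    A minimal sketch of one attitude loop of the kind described, here a textbook PID regulating a toy roll-axis double integrator; the gains, inertia and simplified dynamics are illustrative assumptions, and the feedback linearization, feedforward and backstepping parts are not shown:

```python
class PID:
    """Textbook PID loop; the paper's controllers add feedback
    linearization and feedforward terms on top (not shown here)."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def update(self, setpoint, measured):
        err = setpoint - measured
        self.integral += err * self.dt
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

# Toy roll axis as a double integrator: inertia * phi_ddot = torque.
dt, inertia = 0.002, 0.01
phi, rate = 0.3, 0.0                      # initial roll angle (rad) and rate
ctrl = PID(kp=2.0, ki=0.1, kd=0.4, dt=dt)
for _ in range(5000):                     # 10 s of simulated flight
    torque = ctrl.update(0.0, phi)        # regulate roll angle to zero
    rate += torque / inertia * dt
    phi += rate * dt
print(phi)                                # -> approximately 0
```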

  11. Quality control of clinker products by SEM and XRF analysis

    International Nuclear Information System (INIS)

    Ziad Abu Kaddourah; Khairun Azizi

    1996-01-01

    The microstructure and chemical properties of industrial Portland cement clinkers have been examined by SEM and XRF methods to establish the nature of the clinkers and how variations in clinker characteristics can be used to control clinker quality. The clinker nodules were found to show differences in chemical composition and microstructure between their inner and outer parts. Microstructure studies of industrial Portland cement clinker have shown that the outer part of the nodules is enriched in silicate more than the inner part. There is better crystallization and a larger alite crystal size in the outer part than in the inner part. The alite crystal size varied between 16.2 and 46.12 μm. The clinker chemical composition was found to affect the residual >45 μm fraction: a higher belite content causes an increase in the residual >45 μm fraction in the cement product and will cause a decrease in the concrete strength of the cement product. The aluminate and ferrite crystals and the microcracks within the alite crystals are clear in some clinkers only. The quality of the raw material preparation, burning and cooling stages can be controlled using the microstructure of the clinker product

  12. Thermal Modeling Method Improvements for SAGE III on ISS

    Science.gov (United States)

    Liles, Kaitlin; Amundsen, Ruth; Davis, Warren; McLeod, Shawn

    2015-01-01

    The Stratospheric Aerosol and Gas Experiment III (SAGE III) instrument is the fifth in a series of instruments developed for monitoring aerosols and gaseous constituents in the stratosphere and troposphere. SAGE III will be delivered to the International Space Station (ISS) via the SpaceX Dragon vehicle. A detailed thermal model of the SAGE III payload, which consists of multiple subsystems, has been developed in Thermal Desktop (TD). Many innovative analysis methods have been used in developing this model; these will be described in the paper. This paper builds on a paper presented at TFAWS 2013, which described some of the initial development of efficient methods for SAGE III. The current paper describes additional improvements that have been made since that time. To expedite the correlation of the model to thermal vacuum (TVAC) testing, the chambers and ground support equipment (GSE) for both TVAC chambers at Langley used to test the payload were incorporated within the thermal model. This allowed TVAC predictions and correlations to be run within the flight model, thus eliminating the need for separate models for TVAC. In one TVAC test, radiant lamps were used, which necessitated shooting rays from the lamps and running in both solar and IR wavebands. A new Dragon model was incorporated, which entailed a change in orientation; that change was made using an assembly, so that any potential new Dragon orbits could be added in the future without modification of the model. The Earth orbit parameters such as albedo and Earth infrared flux were incorporated as time-varying values that change over the course of the orbit; despite being required in one of the ISS documents, this had not been done before by any previous payload. All parameters such as initial temperature, heater voltage, and location of the payload are defined based on the case definition. For one component, testing was performed in both air and vacuum; incorporating the air convection in a submodel that was

  13. Item bias detection in the Hospital Anxiety and Depression Scale using structural equation modeling: comparison with other item bias detection methods

    NARCIS (Netherlands)

    Verdam, M.G.E.; Oort, F.J.; Sprangers, M.A.G.

    Purpose Comparison of patient-reported outcomes may be invalidated by the occurrence of item bias, also known as differential item functioning. We show two ways of using structural equation modeling (SEM) to detect item bias: (1) multigroup SEM, which enables the detection of both uniform and

  14. Modelling of packet traffic with matrix analytic methods

    DEFF Research Database (Denmark)

    Andersen, Allan T.

    1995-01-01

    not reveal any adverse behaviour. In fact the observed traffic seemed very close to what would be expected from Poisson traffic. The Changeover/Changeback procedure in SS7, which is used to redirect traffic in case of link failure, has been analyzed. The transient behaviour during a Changeover...... scenario was modelled using Markovian models. The Ordinary Differential Equations arising from these models were solved numerically. The results obtained seemed very similar to those obtained using a different method in previous work by Akinpelu & Skoog 1985. Recent measurement studies of packet traffic...... is found by noting the close relationship with the expressions for the corresponding infinite queue. For the special case of a batch Poisson arrival process this observation makes it possible to express the queue length at an arbitrary epoch in terms of the corresponding queue lengths for the infinite case.

  15. Methods to model-check parallel systems software

    International Nuclear Information System (INIS)

    Matlin, O. S.; McCune, W.; Lusk, E.

    2003-01-01

    We report on an effort to develop methodologies for formal verification of parts of the Multi-Purpose Daemon (MPD) parallel process management system. MPD is a distributed collection of communicating processes. While the individual components of the collection execute simple algorithms, their interaction leads to unexpected errors that are difficult to uncover by conventional means. Two verification approaches are discussed here: the standard model checking approach using the software model checker SPIN and the nonstandard use of a general-purpose first-order resolution-style theorem prover OTTER to conduct the traditional state space exploration. We compare modeling methodology and analyze performance and scalability of the two methods with respect to verification of MPD.

  16. Reduced order methods for modeling and computational reduction

    CERN Document Server

    Rozza, Gianluigi

    2014-01-01

    This monograph addresses the state of the art of reduced order methods for modeling and computational reduction of complex parametrized systems, governed by ordinary and/or partial differential equations, with a special emphasis on real time computing techniques and applications in computational mechanics, bioengineering and computer graphics.  Several topics are covered, including: design, optimization, and control theory in real-time with applications in engineering; data assimilation, geometry registration, and parameter estimation with special attention to real-time computing in biomedical engineering and computational physics; real-time visualization of physics-based simulations in computer science; the treatment of high-dimensional problems in state space, physical space, or parameter space; the interactions between different model reduction and dimensionality reduction approaches; the development of general error estimation frameworks which take into account both model and discretization effects. This...

  17. Quantum Monte Carlo method for models of molecular nanodevices

    Science.gov (United States)

    Arrachea, Liliana; Rozenberg, Marcelo J.

    2005-07-01

    We introduce a quantum Monte Carlo technique to calculate exactly at finite temperatures the Green function of a fermionic quantum impurity coupled to a bosonic field. While the algorithm is general, we focus on the single impurity Anderson model coupled to a Holstein phonon as a schematic model for a molecular transistor. We compute the density of states at the impurity in a large range of parameters, to demonstrate the accuracy and efficiency of the method. We also obtain the conductance of the impurity model and analyze different regimes. The results show that even in the case when the effective attractive phonon interaction is larger than the Coulomb repulsion, a Kondo-like conductance behavior might be observed.

  18. Image to Point Cloud Method of 3D-MODELING

    Science.gov (United States)

    Chibunichev, A. G.; Galakhov, V. P.

    2012-07-01

    This article describes a method of constructing 3D models of objects (buildings, monuments) based on digital images and a point cloud obtained by a terrestrial laser scanner. The first step is the automated determination of the exterior orientation parameters of the digital image. This requires finding corresponding points between the image and the point cloud. Before searching for corresponding points, a quasi-image of the point cloud is generated. The SIFT algorithm is then applied to the quasi-image and the real image; SIFT allows the corresponding points to be found. The exterior orientation parameters of the image are calculated from these corresponding points. The second step is the construction of the vector object model. Vectorization is performed by an operator in an interactive mode using a single image. The spatial coordinates of the model are calculated automatically from the point cloud. In addition, automatic edge detection with interactive editing is available. Edge detection is performed on the point cloud and on the image, with subsequent identification of the correct edges. Experimental studies of the method have demonstrated its efficiency in the case of building facade modeling.
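
    A hedged sketch of the matching step using OpenCV's SIFT implementation with Lowe's ratio test; the quasi-image rendering and the exterior orientation solve are not shown, and `img_a`/`img_b` are assumed to be 8-bit grayscale arrays:

```python
import cv2

def match_sift(img_a, img_b, ratio=0.75):
    """SIFT correspondences between a real image and a quasi-image
    rendered from the point cloud, filtered by Lowe's ratio test."""
    sift = cv2.SIFT_create()
    kp_a, des_a = sift.detectAndCompute(img_a, None)
    kp_b, des_b = sift.detectAndCompute(img_b, None)
    pairs = cv2.BFMatcher(cv2.NORM_L2).knnMatch(des_a, des_b, k=2)
    good = [m for m, n in pairs if m.distance < ratio * n.distance]
    return kp_a, kp_b, good
```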

  19. A novel duplicate images detection method based on PLSA model

    Science.gov (United States)

    Liao, Xiaofeng; Wang, Yongji; Ding, Liping; Gu, Jian

    2012-01-01

    Web image search results usually contain duplicate copies. This paper considers the problem of detecting and clustering duplicate images contained in web image search results. Detecting and clustering the duplicate images together facilitates users' viewing. A novel method is presented in this paper to detect and cluster duplicate images by measuring the similarity between their topics. More specifically, images are viewed as documents consisting of visual words formed by vector-quantizing the affine-invariant visual features. Then a statistical model widely used in the text domain, the PLSA (Probabilistic Latent Semantic Analysis) model, is utilized to map images into a probabilistic latent semantic space. Because the main content remains unchanged despite small digital alterations, duplicate images will be close to each other in the derived semantic space. Based on this, a simple clustering process can successfully detect duplicate images and cluster them together. Compared with methods based on comparing hash values of visual words, this method is more robust to alterations at the visual feature level. Experiments demonstrate the effectiveness of this method.
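
    A toy EM implementation of PLSA over visual-word counts is sketched below (the corpus and vocabulary sizes are made up); duplicate candidates are then pairs of images whose topic vectors P(z|d) are nearly identical:

```python
import numpy as np

def plsa(counts, k, iters=100, seed=0):
    """Tiny EM for PLSA on a (documents x words) count matrix; returns
    P(z|d) and P(w|z). Images play the role of documents of visual words."""
    rng = np.random.default_rng(seed)
    n_d, n_w = counts.shape
    p_z_d = rng.random((n_d, k)); p_z_d /= p_z_d.sum(1, keepdims=True)
    p_w_z = rng.random((k, n_w)); p_w_z /= p_w_z.sum(1, keepdims=True)
    for _ in range(iters):
        # E-step: responsibilities P(z|d,w), shape (d, z, w).
        joint = p_z_d[:, :, None] * p_w_z[None, :, :]
        resp = joint / (joint.sum(1, keepdims=True) + 1e-12)
        # M-step: re-estimate both distributions from expected counts.
        nz = counts[:, None, :] * resp
        p_z_d = nz.sum(2); p_z_d /= p_z_d.sum(1, keepdims=True)
        p_w_z = nz.sum(0); p_w_z /= p_w_z.sum(1, keepdims=True)
    return p_z_d, p_w_z

counts = np.random.default_rng(0).poisson(1.0, (8, 50))  # 8 images x 50 visual words
topics, _ = plsa(counts, k=3)
unit = topics / np.linalg.norm(topics, axis=1, keepdims=True)
similarity = unit @ unit.T        # near-1 entries flag duplicate candidates
```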

  20. Unemployment estimation: Spatial point referenced methods and models

    KAUST Repository

    Pereira, Soraia

    2017-06-26

    The Portuguese Labor Force Survey, from the 4th quarter of 2014 onwards, started geo-referencing the sampling units, namely the dwellings in which the surveys are carried out. This opens new possibilities in analysing and estimating unemployment and its spatial distribution across any region. The labor force survey chooses, according to pre-established sampling criteria, a certain number of dwellings across the nation and surveys the number of unemployed in these dwellings. Based on this survey, the National Statistical Institute of Portugal presently uses direct estimation methods to estimate the national unemployment figures. Recently, there has been increased interest in estimating these figures in smaller areas. Direct estimation methods, due to reduced sampling sizes in small areas, tend to produce fairly large sampling variations, therefore model-based methods, which tend to

  1. Reflexion on linear regression trip production modelling method for ensuring good model quality

    Science.gov (United States)

    Suprayitno, Hitapriya; Ratnasari, Vita

    2017-11-01

    Transport modelling is important. For certain cases the conventional model still has to be used, for which having a good trip production model is essential. A good model can only be obtained from a good sample. Two of the basic principles of good sampling are having a sample capable of representing the population characteristics and capable of producing an acceptable error at a certain confidence level. It seems that these principles are not yet well understood and used in trip production modelling. Therefore, investigating the trip production modelling practice in Indonesia and trying to formulate a better modelling method for ensuring model quality is necessary. The research results are presented as follows. Statistics provides a method to calculate the span of predicted values at a certain confidence level for linear regression, called the Confidence Interval of the Predicted Value. The common modelling practice uses R2 as the principal quality measure, while the sampling practice varies and does not always conform to the sampling principles. An experiment indicates that a small sample is already capable of giving an excellent R2 value and that the sample composition can significantly change the model. Hence, a good R2 value does not, in fact, always mean good model quality. This leads to three basic ideas for ensuring good model quality, i.e. reformulating the quality measure, the calculation procedure, and the sampling method. A quality measure is defined as having a good R2 value and a good Confidence Interval of the Predicted Value. The calculation procedure must incorporate statistical calculation methods and the appropriate statistical tests needed. A good sampling method must incorporate random, well-distributed, stratified sampling with a certain minimum number of samples. These three ideas need to be further developed and tested.
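
    The quantity proposed here as a quality measure, the confidence interval of the predicted value, is directly available in standard statistical software; a small sketch with statsmodels on synthetic trip-production data (the variable names and numbers are made up):

```python
import numpy as np
import statsmodels.api as sm

# Toy trip-production data: trips per household vs. household size.
rng = np.random.default_rng(1)
hh_size = rng.uniform(1.0, 6.0, 40)
trips = 1.5 + 2.0 * hh_size + rng.normal(0.0, 1.0, 40)

X = sm.add_constant(hh_size)
fit = sm.OLS(trips, X).fit()
pred = fit.get_prediction(X)
ci = pred.conf_int(alpha=0.05)        # 95% confidence interval of predicted value
print(fit.rsquared)                   # a high R2 alone...
print((ci[:, 1] - ci[:, 0]).max())    # ...says nothing about interval width
```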

  2. Multicomponent gas mixture air bearing modeling via lattice Boltzmann method

    Science.gov (United States)

    Tae Kim, Woo; Kim, Dehee; Hari Vemuri, Sesha; Kang, Soo-Choon; Seung Chung, Pil; Jhon, Myung S.

    2011-04-01

    As the demand for ultrahigh recording density increases, development of an integrated head disk interface (HDI) modeling tool, which considers the air bearing and lubricant film morphology simultaneously, is of paramount importance. To overcome the shortcomings of the existing models based on the modified Reynolds equation (MRE), the lattice Boltzmann method (LBM) is a natural choice in modeling high Knudsen number (Kn) flows owing to its advantages over conventional methods. Its transient and parallel nature makes the LBM an attractive tool for next-generation air bearing design. Although the LBM has been successfully applied to single-component systems, multicomponent system analysis has been thwarted by the complexity of coupling the terms for each component. Previous studies have shown good results in modeling immiscible component mixtures by use of an interparticle potential. In this paper, we extend our LBM model to predict the flow rate of high-Kn pressure-driven flows in multicomponent gas mixture air bearings, such as the air-helium system. For accurate modeling of slip conditions near the wall, we adopt our LBM scheme with spatially dependent relaxation times for air bearings in HDIs. To verify the accuracy of our code, we tested our scheme via simple two-dimensional benchmark flows. In the pressure-driven flow of an air-helium mixture, we found that a simple linear combination of pure helium and pure air flow rates, based on the helium and air mole fractions, gives considerable error when compared to our LBM calculation. Hybridization with the existing MRE database can be adopted with the procedure reported here to develop state-of-the-art slider design software.

  3. Comparison of Predictive Modeling Methods of Aircraft Landing Speed

    Science.gov (United States)

    Diallo, Ousmane H.

    2012-01-01

    Expected increases in air traffic demand have stimulated the development of air traffic control tools intended to assist the air traffic controller in accurately and precisely spacing aircraft landing at congested airports. Such tools will require an accurate landing-speed prediction to increase throughput while decreasing the controller interventions necessary for avoiding separation violations. There are many practical challenges to developing an accurate landing-speed model that has acceptable prediction errors. This paper discusses the development of a near-term implementation, using readily available information, to estimate/model the final approach speed from the top of the descent phase of flight to the landing runway. As a first approach, all variables found to contribute directly to the landing-speed prediction are used to build a multi-regression response surface equation (RSE) model. Data obtained from a major airline's operations of a passenger transport aircraft type into Dallas/Fort Worth International Airport are used to predict the landing speed. The approach was promising because it decreased the standard deviation of the landing-speed prediction error by at least 18% from the standard deviation of the baseline error, depending on the gust condition at the airport. However, when the number of variables is reduced to those most likely obtainable at other major airports, the RSE model shows little improvement over the existing methods. Consequently, a neural network that relies on a nonlinear regression technique is utilized as an alternative modeling approach. For the reduced-variable cases, the standard deviation of the neural network model's errors represents over a 5% reduction compared to the RSE model errors, and at least a 10% reduction over the baseline predicted landing-speed error standard deviation. Overall, the constructed models predict the landing speed more accurately and precisely than the current state of the art.
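
    The two modeling approaches compared, a polynomial response surface and a neural network, can be sketched with scikit-learn on synthetic stand-ins for the flight variables (the real predictors and airline data are not reproduced here):

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor

# Synthetic stand-ins for flight variables (e.g. weight, headwind, flap setting).
rng = np.random.default_rng(0)
X = rng.uniform(size=(500, 3))
y = 130.0 + 20.0 * X[:, 0] - 8.0 * X[:, 1] + 5.0 * X[:, 0] * X[:, 2] \
    + rng.normal(0.0, 2.0, 500)

rse = make_pipeline(PolynomialFeatures(degree=2), LinearRegression()).fit(X, y)
nn = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000,
                  random_state=0).fit(X, y)
print(np.std(y - rse.predict(X)), np.std(y - nn.predict(X)))  # error spread
```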

  4. Development on a critical assembly of modelling methods for fast neutron power reactors

    International Nuclear Information System (INIS)

    Zhukov, A.V.; Kazanskij, Y.A.; Kochetkov, A.L.; Matveev, V.I.; Mironovich, Y.N.

    1986-01-01

    In this report the authors examine two modelling methods. In the first method the model faithfully reproduces the flux distribution. In the second method the reactor model is built from a central mixed-oxide fuel zone surrounded by uranium [fr]

  5. IMPROVED NUMERICAL METHODS FOR MODELING RIVER-AQUIFER INTERACTION.

    Energy Technology Data Exchange (ETDEWEB)

    Tidwell, Vincent Carroll; Sue Tillery; Phillip King

    2008-09-01

    A new option for Local Time-Stepping (LTS) was developed for use in conjunction with the multiple-refined-area grid capability of the U.S. Geological Survey's (USGS) groundwater modeling program, MODFLOW-LGR (MF-LGR). The LTS option allows each local, refined-area grid to simulate multiple stress periods within each stress period of a coarser, regional grid. This option is an alternative to the current method of MF-LGR whereby the refined grids are required to have the same stress period and time-step structure as the coarse grid. The MF-LGR method for simulating multiple-refined grids essentially defines each grid as a complete model, then for each coarse grid time-step, iteratively runs each model until the head and flux changes at the interfacing boundaries of the models are less than some specified tolerances. Use of the LTS option is illustrated in two hypothetical test cases consisting of a dual well pumping system and a hydraulically connected stream-aquifer system, and one field application. Each of the hypothetical test cases was simulated with multiple scenarios including an LTS scenario, which combined a monthly stress period for a coarse grid model with a daily stress period for a refined grid model. The other scenarios simulated various combinations of grid spacing and temporal refinement using standard MODFLOW model constructs. The field application simulated an irrigated corridor along the Lower Rio Grande River in New Mexico, with refinement of a small agricultural area in the irrigated corridor. The results from the LTS scenarios for the hypothetical test cases closely replicated the results from the true scenarios in the refined areas of interest. The head errors of the LTS scenarios were much smaller than those of the other scenarios in relation to the true solution, and the run times for the LTS models were three to six times faster than the true models for the dual well and stream-aquifer test cases, respectively. The results of the field

  6. High resolution SEM imaging of gold nanoparticles in cells and tissues.

    Science.gov (United States)

    Goldstein, A; Soroka, Y; Frušić-Zlotkin, M; Popov, I; Kohen, R

    2014-12-01

    The growing demand for gold nanoparticles in medical applications increases the need for simple and efficient methods of characterizing the interaction between the nanoparticles and biological systems. Due to its nanometre resolution, modern scanning electron microscopy (SEM) offers straightforward visualization of metallic nanoparticles down to a few nanometres in size, almost without any special preparation step. However, visualization of biological materials in SEM requires a complicated preparation procedure, which is typically finished by the metal coating needed to decrease charging artefacts and the quick radiation damage of biomaterials in the course of SEM imaging. The finest conductive metal coating available is usually composed of clusters a few nanometres in size, which are almost identical to the metal nanoparticles employed in medical applications. Therefore, SEM monitoring of metal nanoparticles within cells and tissues is incompatible with the conventional preparation methods. In this work, we show that charging artefacts related to a non-conductive biological specimen can be successfully eliminated by placing the uncoated biological sample on a conductive substrate. By growing the cells on glass pre-coated with a chromium layer, we were able to observe the uptake of 10 nm gold nanoparticles inside uncoated and unstained macrophage and keratinocyte cells. Imaging in back-scattered electrons allowed observation of gold nanoparticles located inside the cells, while imaging in secondary electrons gave information on gold nanoparticles located on the surface of the cells. By mounting a skin cross-section on an improved conductive holder, consisting of a silicon substrate coated with copper, we were able to observe penetration of gold nanoparticles of only 5 nm size through the skin barrier in an uncoated skin tissue. The described method offers a convenient modification of the preparation procedure for biological samples to be analyzed in SEM. The method provides high

  7. Modeling of Unsteady Flow through the Canals by Semiexact Method

    Directory of Open Access Journals (Sweden)

    Farshad Ehsani

    2014-01-01

    Full Text Available The study of free-surface and pressurized water flows in channels has many interesting applications, one of the most important being the modeling of phenomena in natural water systems (rivers, estuaries) as well as in man-made systems (canals, pipes). For the development of major river engineering projects, such as flood prevention and flood control, there is an essential need for an instrument that is able to model and predict the consequences of any possible phenomenon on the environment and in particular the new hydraulic characteristics of the system. The basic equations expressing hydraulic principles were formulated in the 19th century by Barre de Saint Venant and Valentin Joseph Boussinesq. The original hydraulic model of the Saint Venant equations is written in the form of a system of two partial differential equations and is derived under the assumptions that the flow is one-dimensional, the cross-sectional velocity is uniform, the streamline curvature is small and the pressure distribution is hydrostatic. The Saint Venant equations must be solved together with the continuity equation. Until now no analytical solution of the Saint Venant equations has been presented. In this paper the Saint Venant equations and the continuity equation are solved with the homotopy perturbation method (HPM) and compared with an explicit forward finite difference method (FDM). To decrease the error between HPM and FDM, the Saint Venant equations and the continuity equation are solved by HAM. The homotopy analysis method (HAM) contains the auxiliary parameter ħ that allows us to adjust and control the convergence region of the solution series. The study has highlighted the efficiency and capability of HAM in solving the Saint Venant equations and in modeling unsteady flow through a rectangular canal, which is the goal of this paper, as well as other kinds of canals.
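
    The FDM reference solution can be sketched with a Lax-Friedrichs step for the 1-D Saint Venant system (rectangular frictionless channel, periodic boundaries; these simplifications and the scheme choice are assumptions of this sketch, and the HPM/HAM series solutions are not reproduced):

```python
import numpy as np

def lax_friedrichs_step(h, q, dx, dt, g=9.81):
    """One explicit Lax-Friedrichs step for the 1-D Saint Venant system
    h_t + q_x = 0,  q_t + (q^2/h + g*h^2/2)_x = 0
    (rectangular channel, no friction or bed-slope source terms,
    periodic boundaries; simplifying assumptions of this sketch)."""
    f_h = q
    f_q = q ** 2 / h + 0.5 * g * h ** 2

    def lf(u, f):  # Lax-Friedrichs update with periodic neighbours
        return (0.5 * (np.roll(u, -1) + np.roll(u, 1))
                - dt / (2.0 * dx) * (np.roll(f, -1) - np.roll(f, 1)))

    return lf(h, f_h), lf(q, f_q)

# Small water hump relaxing in a channel; CFL number is about 0.2 here.
n, dx, dt = 400, 1.0, 0.05
h = 1.0 + 0.2 * np.exp(-(((np.arange(n) - 200.0) * dx / 20.0) ** 2))
q = np.zeros(n)
for _ in range(200):
    h, q = lax_friedrichs_step(h, q, dx, dt)
```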

  8. FIB-SEM cathodoluminescence tomography: practical and theoretical considerations.

    Science.gov (United States)

    De Winter, D A M; Lebbink, M N; Wiggers De Vries, D F; Post, J A; Drury, M R

    2011-09-01

    Focused ion beam-scanning electron microscope (FIB-SEM) tomography is a powerful application in obtaining three-dimensional (3D) information. The FIB creates a cross section and subsequently removes thin slices. The SEM takes images using secondary or backscattered electrons, or maps every slice using X-rays and/or electron backscatter diffraction patterns. The objective of this study is to assess the possibilities of combining FIB-SEM tomography with cathodoluminescence (CL) imaging. The intensity of CL emission is related to variations in defect or impurity concentrations. A potential problem with FIB-SEM CL tomography is that ion milling may change the defect state of the material and the CL emission. In addition the conventional tilted sample geometry used in FIB-SEM tomography is not compatible with conventional CL detectors. Here we examine the influence of the FIB on CL emission in natural diamond and the feasibility of FIB-SEM CL tomography. A systematic investigation establishes that the ion beam influences CL emission of diamond, with a dependency on both the ion beam and electron beam acceleration voltage. CL emission in natural diamond is enhanced particularly at low ion beam and electron beam voltages. This enhancement of the CL emission can be partly explained by an increase in surface defects induced by ion milling. CL emission enhancement could be used to improve the CL image quality. To conduct FIB-SEM CL tomography, a recently developed novel specimen geometry is adopted to enable sequential ion milling and CL imaging on an untilted sample. We show that CL imaging can be manually combined with FIB-SEM tomography with a modified protocol for 3D microstructure reconstruction. In principle, automated FIB-SEM CL tomography should be feasible, provided that dedicated CL detectors are developed that allow subsequent milling and CL imaging without manual intervention, as the current CL detector needs to be manually retracted before a slice can be milled

  9. Modeling methods for merging computational and experimental aerodynamic pressure data

    Science.gov (United States)

    Haderlie, Jacob C.

    This research describes a process to model surface pressure data sets as a function of wing geometry from computational and wind tunnel sources and then merge them into a single predicted value. The described merging process will enable engineers to integrate these data sets with the goal of utilizing the advantages of each data source while overcoming the limitations of both; this provides a single, combined data set to support analysis and design. The main challenge with this process is accurately representing each data source everywhere on the wing. Additionally, this effort demonstrates methods to model wind tunnel pressure data as a function of angle of attack as an initial step towards a merging process that uses both location on the wing and flow conditions (e.g., angle of attack, flow velocity or Reynolds number) as independent variables. This surrogate model of pressure as a function of angle of attack can be useful for engineers who need to predict the location of zero-order discontinuities, e.g., flow separation or normal shocks. Because, to the author's best knowledge, there is no published, well-established merging method for aerodynamic pressure data (here, the coefficient of pressure Cp), this work identifies promising modeling and merging methods and then makes a critical comparison of them. Surrogate models represent the pressure data for both data sets. Cubic B-spline surrogate models represent the computational simulation results. Machine learning and multi-fidelity surrogate models represent the experimental data. This research compares three surrogates for the experimental data (sequential, a.k.a. online, Gaussian processes; batch Gaussian processes; and a multi-fidelity additive corrector) on the merits of accuracy and computational cost. The Gaussian process (GP) methods employ cubic B-spline CFD surrogates as a model basis function to build a surrogate model of the WT data, and this usage of the CFD surrogate in building the WT
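
    The multi-fidelity additive-corrector idea, fit a surrogate to the wind-tunnel-minus-CFD discrepancy and add it back to the CFD surrogate, can be sketched with a scikit-learn Gaussian process on synthetic pressure curves (all functions and numbers below are stand-ins, not the study's data):

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Dense "CFD" pressure curve and sparse "wind tunnel" taps (synthetic stand-ins).
cfd = lambda s: -1.2 * np.sin(3.0 * s) * np.exp(-s)      # pretend Cp(x) from CFD
x = np.linspace(0.0, 1.0, 200)[:, None]                  # chordwise stations
xt = np.linspace(0.05, 0.95, 12)[:, None]                # pressure-tap locations
rng = np.random.default_rng(2)
wt = cfd(xt[:, 0]) + 0.15 * np.cos(6.0 * xt[:, 0]) + rng.normal(0.0, 0.02, 12)

# Additive corrector: fit a GP to the WT-minus-CFD discrepancy at the taps,
# then merge as CFD + corrector everywhere on the wing.
gp = GaussianProcessRegressor(kernel=RBF(0.2) + WhiteKernel(1e-4),
                              normalize_y=True)
gp.fit(xt, wt - cfd(xt[:, 0]))
corr, sd = gp.predict(x, return_std=True)
merged = cfd(x[:, 0]) + corr        # sd gives a local error bar for the merge
```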

  10. Methods for the development of in silico GPCR models

    Science.gov (United States)

    Morales, Paula; Hurst, Dow P.; Reggio, Patricia H.

    2018-01-01

    The Reggio group has constructed computer models of the inactive and G-protein-activated states of the cannabinoid CB1 and CB2 receptors, as well as several orphan receptors that recognize a sub-set of cannabinoid compounds, including GPR55 and GPR18. These models have been used to design ligands, mutations and covalent labeling studies. The resultant second-generation models have been used to design ligands with improved affinity, efficacy and sub-type selectivity. Herein, we provide a guide for the development of GPCR models using the most recent orphan receptor studied in our lab, GPR3. GPR3 is an orphan receptor that belongs to the Class A family of G-Protein Coupled Receptors. It shares high sequence similarity with GPR6, GPR12, the lysophospholipid receptors, and the cannabinoid receptors. GPR3 is predominantly expressed in mammalian brain and oocytes and is known as a Gαs-coupled receptor that is constitutively activated in cells. GPR3 represents a possible target for the treatment of different pathological conditions such as Alzheimer's disease, oocyte maturation or neuropathic pain. However, the lack of potent and selective GPR3 ligands is delaying the exploitation of this promising therapeutic target. In this context, we aim to develop a homology model that helps us to elucidate the structural determinants governing ligand-receptor interactions at GPR3. In this chapter, we detail the methods and rationale behind the construction of the GPR3 active- and inactive-state models. These homology models will enable the rational design of novel ligands, which may serve as research tools for further understanding of the biological role of GPR3. PMID:28750813

  11. A Method to Test Model Calibration Techniques: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Judkoff, Ron; Polly, Ben; Neymark, Joel

    2016-09-01

    This paper describes a method for testing model calibration techniques. Calibration is commonly used in conjunction with energy retrofit audit models. An audit is conducted to gather information about the building needed to assemble an input file for a building energy modeling tool. A calibration technique is used to reconcile model predictions with utility data, and then the 'calibrated model' is used to predict energy savings from a variety of retrofit measures and combinations thereof. Current standards and guidelines such as BPI-2400 and ASHRAE-14 set criteria for 'goodness of fit' and assume that if the criteria are met, then the calibration technique is acceptable. While it is logical to use the actual performance data of the building to tune the model, it is not certain that a good fit will result in a model that better predicts post-retrofit energy savings. Therefore, the basic idea here is that the simulation program (intended for use with the calibration technique) is used to generate surrogate utility bill data and retrofit energy savings data against which the calibration technique can be tested. This provides three figures of merit for testing a calibration technique, 1) accuracy of the post-retrofit energy savings prediction, 2) closure on the 'true' input parameter values, and 3) goodness of fit to the utility bill data. The paper will also discuss the pros and cons of using this synthetic surrogate data approach versus trying to use real data sets of actual buildings.

  12. Pursuing the method of multiple working hypotheses for hydrological modeling

    Science.gov (United States)

    Clark, M. P.; Kavetski, D.; Fenicia, F.

    2012-12-01

    Ambiguities in the representation of environmental processes have manifested themselves in a plethora of hydrological models, differing in almost every aspect of their conceptualization and implementation. The current overabundance of models is symptomatic of an insufficient scientific understanding of environmental dynamics at the catchment scale, which can be attributed to difficulties in measuring and representing the heterogeneity encountered in natural systems. This presentation advocates using the method of multiple working hypotheses for systematic and stringent testing of model alternatives in hydrology. We discuss how the multiple hypothesis approach provides the flexibility to formulate alternative representations (hypotheses) describing both individual processes and the overall system. When combined with incisive diagnostics to scrutinize multiple model representations against observed data, this provides hydrologists with a powerful and systematic approach for model development and improvement. Multiple hypothesis frameworks also support a broader coverage of the model hypothesis space and hence improve the quantification of predictive uncertainty arising from system and component non-identifiabilities. As part of discussing the advantages and limitations of multiple hypothesis frameworks, we critically review major contemporary challenges in hydrological hypothesis-testing, including exploiting different types of data to investigate the fidelity of alternative process representations, accounting for model structure ambiguities arising from major uncertainties in environmental data, quantifying regional differences in dominant hydrological processes, and the grander challenge of understanding the self-organization and optimality principles that may functionally explain and describe the heterogeneities evident in most environmental systems. We assess recent progress in these research directions, and how new advances are possible using multiple hypothesis

  13. Regularized Structural Equation Modeling

    Science.gov (United States)

    Jacobucci, Ross; Grimm, Kevin J.; McArdle, John J.

    2016-01-01

    A new method is proposed that extends the use of regularization in both lasso and ridge regression to structural equation models. The method is termed regularized structural equation modeling (RegSEM). RegSEM penalizes specific parameters in structural equation models, with the goal of creating easier to understand and simpler models. Although regularization has gained wide adoption in regression, very little has transferred to models with latent variables. By adding penalties to specific parameters in a structural equation model, researchers have a high level of flexibility in reducing model complexity, overcoming poor fitting models, and the creation of models that are more likely to generalize to new samples. The proposed method was evaluated through a simulation study, two illustrative examples involving a measurement model, and one empirical example involving the structural part of the model to demonstrate RegSEM’s utility. PMID:27398019

  14. Investigating the performance of directional boundary layer model through staged modeling method

    Science.gov (United States)

    Jeong, Moon-Gyu; Lee, Won-Chan; Yang, Seung-Hune; Jang, Sung-Hoon; Shim, Seong-Bo; Kim, Young-Chang; Suh, Chun-Suk; Choi, Seong-Woon; Kim, Young-Hee

    2011-04-01

    Generally speaking, the models used in optical proximity effect correction (OPC) can be divided into three parts: the mask part, the optic part, and the resist part. For excellent OPC model quality, each part should be described from first principles. However, an OPC model cannot incorporate all of the first principles, since it must cover full-chip calculations during the correction. Moreover, the calculation has to be done iteratively during the correction until the cost function we want to minimize converges. Normally the optic part of an OPC model is described with the sum of coherent systems (SOCS[1]) method. Thanks to this method we can calculate the aerial image very fast without significant loss of accuracy. As for the resist part, the first principles are too complex to implement in detail, so it is normally expressed in a simple way, such as an approximation of the first principles, or linear combinations of factors highly correlated with the chemistry in the resist. The quality of this kind of resist model depends on how well we train the model by fitting it to empirical data. The most popular way of making the mask function is based on the Kirchhoff thin-mask approximation. This method works well when the feature size on the mask is sufficiently large, but as the line width of the semiconductor circuit becomes smaller, it causes significant error due to the mask topography effect. To consider the mask topography effect accurately, we have to use rigorous methods of calculating the mask function, such as finite difference time domain (FDTD[2]) and rigorous coupled-wave analysis (RCWA[3]). But these methods are too time-consuming to be used as part of the OPC model. Until now many alternatives have been suggested as efficient ways of considering the mask topography effect. Among them we focus on the boundary layer model (BLM) in this paper. We mainly investigated the way of optimizing the parameters for the
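
    The SOCS decomposition the optic part relies on computes the aerial image as a weighted sum of coherent convolutions, I = Σ_k w_k |φ_k ⊛ m|²; a sketch with FFT-based circular convolution follows (the kernels below are made-up Gaussians, not eigenfunctions of a real imaging system):

```python
import numpy as np

def socs_aerial_image(mask, kernels, weights):
    """Aerial image via sum of coherent systems: I = sum_k w_k |phi_k (*) m|^2,
    with (*) a circular convolution done by FFT. `kernels` stand in for the
    optical eigenfunctions; this structure is why SOCS is fast enough for
    full-chip OPC."""
    M = np.fft.fft2(mask)
    image = np.zeros(mask.shape)
    for phi, w in zip(kernels, weights):
        field = np.fft.ifft2(M * np.fft.fft2(phi))
        image += w * np.abs(field) ** 2
    return image

# Made-up Gaussian kernels, shifted so they are centred at the origin
# for the FFT convolution.
y, x = np.mgrid[0:128, 0:128]
kernels = [np.fft.ifftshift(np.exp(-((x - 64) ** 2 + (y - 64) ** 2)
                                   / (2.0 * s ** 2)))
           for s in (2.0, 4.0)]
mask = np.zeros((128, 128))
mask[48:80, 60:68] = 1.0                     # a single line feature
image = socs_aerial_image(mask, kernels, weights=[1.0, 0.3])
```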

  15. Comparative examination of two methods for modeling autoimmune uveitis

    Directory of Open Access Journals (Sweden)

    Svetlana V. Aksenova

    2017-09-01

    Full Text Available Introduction: Uveitis is a disease of the uveal tract, characterized by a variety of causes and clinical manifestations. Internal antigens often prevail in the pathogenesis of the disease, giving rise to so-called autoimmune reactions. The treatment of uveitis has important medico-social significance because of its high prevalence, the significant rate of the disease in young people, and the high disability it causes. This article compares the efficiency of two methods for modeling autoimmune uveitis. Materials and Methods: The research was conducted on 6 rabbits of the Chinchilla breed (12 eyes). Two models of experimental uveitis were reproduced in the rabbits using normal horse serum. A clinical examination of the course of the inflammatory process in the eyes was carried out by biomicroscopy using a slit lamp and a direct ophthalmoscope. Histological and immunological examinations were conducted by the authors of the article. Results: A faster-developing and more vivid clinical picture of the disease was observed in the second group. Obvious changes in the immunological status of the animals were also noted: an increase in the number of leukocytes, neutrophils and HCT-active neutrophils, and activation of phagocytosis. Discussion and Conclusions: The research showed that the second model of uveitis is the more convenient working variant, characterized by high activity and long duration of the inflammatory process in the eye.

  16. Dynamic airspace configuration method based on a weighted graph model

    Directory of Open Access Journals (Sweden)

    Chen Yangzhou

    2014-08-01

    Full Text Available This paper proposes a new method for dynamic airspace configuration based on a weighted graph model. The method begins with the construction of an undirected graph for the given airspace, where the vertices represent key points such as airports and waypoints, and the edges represent air routes. These vertices are used as the sites of a Voronoi diagram, which divides the airspace into units called cells. Then, aircraft counts for each cell and each air route are computed. By assigning both the vertices and the edges these aircraft counts, a weighted graph model comes into being, and the airspace configuration problem is described as a weighted graph partitioning problem. The problem is then solved by a graph partitioning algorithm, which is a mixture of a general weighted graph-cut algorithm, an optimal dynamic load-balancing algorithm and a heuristic algorithm. After the cut algorithm partitions the model into sub-graphs, the load-balancing algorithm together with the heuristic algorithm transfers aircraft counts to balance the workload among sub-graphs. Lastly, airspace configuration is completed by determining the sector boundaries. The simulation results show that the designed sectors satisfy not only the workload-balancing condition but also constraints such as convexity, connectivity and minimum distance.
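
    As a stand-in for the paper's mixed partitioning algorithm, a generic weighted graph cut on a toy route graph can be sketched with networkx (the vertex names and edge weights are made up; note that Kernighan-Lin balances vertex counts, not weighted workload):

```python
import networkx as nx
from networkx.algorithms.community import kernighan_lin_bisection

# Toy airspace: vertices are key points, edge weights are route aircraft
# counts (all numbers are illustrative).
G = nx.Graph()
G.add_weighted_edges_from([("A", "B", 12), ("B", "C", 7), ("C", "D", 9),
                           ("A", "C", 4), ("B", "D", 5), ("D", "E", 11),
                           ("C", "E", 3)])
sector_1, sector_2 = kernighan_lin_bisection(G, weight="weight")
print(sector_1, sector_2)   # two candidate sectors minimizing the weighted cut
```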

  17. Optimization methods and silicon solar cell numerical models

    Science.gov (United States)

    Girardini, K.; Jacobsen, S. E.

    1986-01-01

    An optimization algorithm for use with numerical silicon solar cell models was developed. By coupling an optimization algorithm with a solar cell model, it is possible to simultaneously vary design variables such as impurity concentrations, front junction depth, back junction depth, and cell thickness to maximize the predicted cell efficiency. An optimization algorithm was developed and interfaced with the Solar Cell Analysis Program in 1 Dimension (SCAP1D). SCAP1D uses finite difference methods to solve the differential equations which, along with several relations from the physics of semiconductors, describe mathematically the performance of a solar cell. A major obstacle is that the numerical methods used in SCAP1D require a significant amount of computer time, and during an optimization the model is called iteratively until the design variables converge to the values associated with the maximum efficiency. This problem was alleviated by designing an optimization code specifically for use with numerically intensive simulations, to reduce the number of times the efficiency has to be calculated to achieve convergence to the optimal solution.
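
    The coupling pattern, an optimizer repeatedly calling an expensive cell simulation, can be sketched with a derivative-free SciPy optimizer wrapping a stand-in efficiency function (the quadratic toy below merely imitates a simulator such as SCAP1D):

```python
import numpy as np
from scipy.optimize import minimize

def cell_efficiency(design):
    """Stand-in for an expensive numerical solar-cell simulation: maps
    (doping, junction depth, thickness) to efficiency. The quadratic
    form below is synthetic, purely for illustration."""
    doping, depth, thickness = design
    return (0.18 - 0.02 * (doping - 1.0) ** 2
                 - 0.01 * (depth - 0.3) ** 2
                 - 0.005 * (thickness - 2.0) ** 2)

# Derivative-free search keeps the number of simulator calls explicit.
result = minimize(lambda d: -cell_efficiency(d), x0=[0.5, 0.5, 1.0],
                  method="Nelder-Mead")
print(result.x, -result.fun, result.nfev)  # optimum, efficiency, simulator calls
```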

  18. Dimensionality reduction method based on a tensor model

    Science.gov (United States)

    Yan, Ronghua; Peng, Jinye; Ma, Dongmei; Wen, Desheng

    2017-04-01

    Dimensionality reduction is a preprocessing step for hyperspectral image (HSI) classification. Principal component analysis reduces the spectral dimension but does not utilize the spatial information of an HSI. Both spatial and spectral information are used when an HSI is modeled as a tensor; that is, noise in the spatial dimensions is decreased and the spectral dimension is reduced simultaneously. However, this model does not consider the factors affecting the spectral signatures of ground objects, which makes further improving classification very difficult. The authors propose that the spectral signatures of ground objects are the composite result of multiple factors, such as illumination, mixture, and atmospheric scattering and radiation. In addition, these factors are very difficult to distinguish, so they are synthesized as within-class factors. Within-class factors, class factors, and pixels are selected to model a third-order tensor. Experimental results indicate that the classification accuracy of the new method is higher than that of previous methods.
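
    A generic stand-in for such a tensor model is a truncated higher-order SVD of the third-order data tensor; a NumPy sketch follows (the mode sizes and ranks are made up, and the paper's specific factor construction is not reproduced):

```python
import numpy as np

def hosvd(T, ranks):
    """Truncated higher-order SVD of a 3rd-order tensor, e.g.
    pixels x bands x within-class factors. Returns a small core tensor
    and one factor matrix per mode."""
    factors = []
    for mode, r in enumerate(ranks):
        unfolding = np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)
        U, _, _ = np.linalg.svd(unfolding, full_matrices=False)
        factors.append(U[:, :r])
    core = T
    for U in factors:                              # contract each mode in turn;
        core = np.tensordot(core, U, axes=(0, 0))  # the contracted mode moves last
    return core, factors

T = np.random.default_rng(0).random((50, 40, 6))   # made-up HSI-like tensor
core, factors = hosvd(T, ranks=(10, 10, 3))
print(core.shape)                                  # (10, 10, 3)
```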

  19. Outcome modelling strategies in epidemiology: traditional methods and basic alternatives.

    Science.gov (United States)

    Greenland, Sander; Daniel, Rhian; Pearce, Neil

    2016-04-01

    Controlling for too many potential confounders can lead to or aggravate problems of data sparsity or multicollinearity, particularly when the number of covariates is large in relation to the study size. As a result, methods to reduce the number of modelled covariates are often deployed. We review several traditional modelling strategies, including stepwise regression and the 'change-in-estimate' (CIE) approach to deciding which potential confounders to include in an outcome-regression model for estimating effects of a targeted exposure. We discuss their shortcomings, and then provide some basic alternatives and refinements that do not require special macros or programming. Throughout, we assume the main goal is to derive the most accurate effect estimates obtainable from the data and commercial software. Allowing that most users must stay within standard software packages, this goal can be roughly approximated using basic methods to assess, and thereby minimize, mean squared error (MSE). © The Author 2016. Published by Oxford University Press on behalf of the International Epidemiological Association.
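
    One of the traditional strategies reviewed, the change-in-estimate (CIE) screen, is compact enough to sketch. The data below are simulated and the 10% cutoff is the conventional choice, not the paper's recommendation (the paper in fact discusses this strategy's shortcomings):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 500
z1 = rng.normal(size=n)                   # true confounder
z2 = rng.normal(size=n)                   # irrelevant covariate
x = 0.8 * z1 + rng.normal(size=n)         # exposure
y = 1.0 * x + 1.5 * z1 + rng.normal(size=n)

def exposure_coef(covariates):
    X = sm.add_constant(np.column_stack([x] + covariates))
    return sm.OLS(y, X).fit().params[1]   # coefficient on the exposure

base = exposure_coef([z1, z2])            # fully adjusted estimate
keep = []
for name, z in [("z1", z1), ("z2", z2)]:
    others = [w for w in (z1, z2) if w is not z]
    reduced = exposure_coef(others)       # estimate with this covariate dropped
    change = abs(reduced - base) / abs(base)
    print(name, f"change-in-estimate: {change:.1%}")
    if change > 0.10:                     # conventional 10% CIE cutoff
        keep.append(name)
print("retained confounders:", keep)
```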

  20. Multi-level decision making models, methods and applications

    CERN Document Server

    Zhang, Guangquan; Gao, Ya

    2015-01-01

    This monograph presents new developments in multi-level decision-making theory, techniques and methods, covering both modeling and solution issues. In particular, it presents how a decision support system can support managers in reaching a solution to a multi-level decision problem in practice. The monograph combines decision theories, methods, algorithms and applications effectively, and discusses in detail the models and solution algorithms of each issue of bi-level and tri-level decision-making, such as multi-leaders, multi-followers, multi-objectives, rule-set-based, and fuzzy parameters. Potential readers include organizational managers and practicing professionals, who can use the methods and software provided to solve their real decision problems; PhD students and researchers in the areas of bi-level and multi-level decision-making and decision support systems; and students at advanced undergraduate or master's level in information systems, business administration, or the application of computer science.

  1. High dimensional model representation method for fuzzy structural dynamics

    Science.gov (United States)

    Adhikari, S.; Chowdhury, R.; Friswell, M. I.

    2011-03-01

    Uncertainty propagation in multi-parameter complex structures poses significant computational challenges. This paper investigates the possibility of using the High Dimensional Model Representation (HDMR) approach when uncertain system parameters are modeled using fuzzy variables. In particular, the application of HDMR is proposed for fuzzy finite element analysis of linear dynamical systems. The HDMR expansion is an efficient formulation for high-dimensional mapping in complex systems if the higher-order variable correlations are weak, thereby permitting the input-output relationship to be captured by low-order terms. The computational effort to determine the expansion functions using the α-cut method scales polynomially with the number of variables rather than exponentially. This logic is based on the fundamental assumption underlying the HDMR representation that only low-order correlations among the input variables are likely to have significant impacts upon the outputs for most high-dimensional complex systems. The proposed method is first illustrated for multi-parameter nonlinear mathematical test functions with fuzzy variables. The method is then integrated with a commercial finite element software (ADINA). Modal analysis of a simplified aircraft wing with fuzzy parameters is used to illustrate the generality of the proposed approach. In the numerical examples, triangular membership functions have been used and the results have been validated against direct Monte Carlo simulations. It is shown that using the proposed HDMR approach, the number of finite element function calls can be reduced without significantly compromising the accuracy.
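
    A minimal sketch of a first-order cut-HDMR expansion under the stated assumption of weak higher-order correlations; the test function and cut point are illustrative, not the paper's wing model. Each component function costs one sweep per variable, so the number of function calls grows linearly with dimension:

```python
import numpy as np

def f(x):
    """Illustrative multi-parameter response function (with a weak interaction)."""
    return np.sin(x[0]) + 0.5 * x[1] ** 2 + 0.2 * x[0] * x[2]

x0 = np.array([0.5, 1.0, -0.5])           # cut point (e.g. membership-function peaks)
f0 = f(x0)

# First-order component functions f_i(x_i) = f(x0 with x_i varied) - f0,
# tabulated on a per-variable grid.
grids, tables = [], []
for i in range(3):
    g = np.linspace(x0[i] - 1.0, x0[i] + 1.0, 21)
    t = []
    for xi in g:
        x = x0.copy()
        x[i] = xi
        t.append(f(x) - f0)
    grids.append(g)
    tables.append(np.array(t))

def f_hdmr(x):
    """First-order HDMR surrogate: f0 plus the sum of interpolated f_i."""
    return f0 + sum(np.interp(x[i], grids[i], tables[i]) for i in range(3))

x_test = np.array([0.9, 0.4, -0.1])
print("exact:", f(x_test), "HDMR surrogate:", f_hdmr(x_test))
```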

  2. Revisiting a model-independent dark energy reconstruction method

    Energy Technology Data Exchange (ETDEWEB)

    Lazkoz, Ruth; Salzano, Vincenzo; Sendra, Irene [Euskal Herriko Unibertsitatea, Fisika Teorikoaren eta Zientziaren Historia Saila, Zientzia eta Teknologia Fakultatea, Bilbao (Spain)

    2012-09-15

    In this work we offer new insights into the model-independent dark energy reconstruction method developed by Daly and Djorgovski (Astrophys. J. 597:9, 2003; Astrophys. J. 612:652, 2004; Astrophys. J. 677:1, 2008). Our results, using updated SNeIa and GRBs, allow us to highlight some of the intrinsic weaknesses of the method. Conclusions on the main dark energy features drawn from this method are intimately related to the features of the samples themselves, particularly for GRBs, which are poor performers in this context and cannot be used for cosmological purposes; that is, the state of the art does not allow us to regard them as being of the same quality as SNeIa. We find considerable sensitivity to some parameters (window width, overlap, selection criteria) affecting the results. We then try to establish the current redshift range for which one can make solid predictions on dark energy evolution. Finally, we strengthen the former view that this model is modest in the sense that it provides only a picture of the global trend and has to be managed very carefully. On the other hand, we believe it offers an interesting complement to other approaches, given that it works on minimal assumptions. (orig.)

  3. Attempt of correlative observation of morphological synaptic connectivity by combining confocal laser-scanning microscope and FIB-SEM for immunohistochemical staining technique.

    Science.gov (United States)

    Sonomura, Takahiro; Furuta, Takahiro; Nakatani, Ikuko; Yamamoto, Yo; Honma, Satoru; Kaneko, Takeshi

    2014-11-01

    Ten years have passed since the serial block-face scanning electron microscopy (SBF-SEM) method was developed [1]. In this innovative method, samples are automatically sectioned with an ultramicrotome placed inside the scanning electron microscope column, and the block surfaces are imaged one after another by SEM to capture back-scattered electrons. The contrast-inverted images obtained by SBF-SEM are very similar to those acquired using conventional transmission electron microscopy (TEM). SBF-SEM has made it easy to acquire TEM-like image stacks at the mesoscale, the scale otherwise covered by confocal laser-scanning microscopy (CF-LSM). Furthermore, serial-section SEM has been combined with the focused ion beam (FIB) milling method [2]. FIB-incorporated SEM (FIB-SEM) has enabled the acquisition of three-dimensional images with a higher z-axis resolution compared to ultramicrotome-equipped SEM. We applied immunocytochemistry to FIB-SEM and correlated this immunoreactivity with that in CF-LSM. Dendrites of neurons in the rat neostriatum were visualized using a recombinant viral vector, and the thalamostriatal afferent terminals were immunolabeled with Cy5 fluorescence for vesicular glutamate transporter 2 (VGluT2). After detection of the sites of terminals apposed to the dendrites by CF-LSM, GFP and VGluT2 immunoreactivities were further developed for EM by using immunogold/silver enhancement and immunoperoxidase/diaminobenzidine (DAB) methods, respectively. We showed that conventional immunocytochemical staining for TEM is applicable to FIB-SEM. Furthermore, several synaptic contacts, whose existence was suggested by the CF-LSM findings, were confirmed with FIB-SEM, demonstrating the usefulness of the combined CF-LSM and FIB-SEM method. © The Author 2014. Published by Oxford University Press on behalf of The Japanese Society of Microscopy. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  4. 'Dem DEMs: Comparing Methods of Digital Elevation Model Creation

    Science.gov (United States)

    Rezza, C.; Phillips, C. B.; Cable, M. L.

    2017-12-01

    Topographic details of Europa's surface yield implications for large-scale processes that occur on the moon, including surface strength, modification, composition, and formation mechanisms for geologic features. In addition, small-scale details in these data are imperative for future exploration of Europa's surface, such as by a potential Europa Lander mission. A comparison of different methods of Digital Elevation Model (DEM) creation, and of the variations between them, can help us quantify the relative accuracy of each model and improve our understanding of Europa's surface. In this work, we used data provided by Phillips et al. (2013, AGU Fall meeting, abs. P34A-1846) and Schenk and Nimmo (2017, in prep.) to compare DEMs that were created using Ames Stereo Pipeline (ASP), SOCET SET, and Paul Schenk's own method. We began by locating areas of the surface with multiple overlapping DEMs, and our initial comparisons were performed near the craters Manannan, Pwyll, and Cilix. For each region, we used ArcGIS to draw profile lines across matching features to determine elevation. Some of the DEMs had vertical or skewed offsets and had to be corrected. The vertical corrections were applied by adding or subtracting the global minimum of the data set to create a common zero point. The skewed data sets were corrected by rotating the plot so that it had a global slope of zero and then subtracting a zero-point vertical offset. Once corrections were made, we plotted the three methods on one graph for each profile of each region. Upon analysis, we found relatively good feature correlation between the three methods. The smoothness of a DEM depends on both the input set of images and the stereo processing methods used. In our comparison, the DEMs produced by SOCET SET were less smoothed than those from ASP or Schenk. Height comparisons show that ASP and Schenk's model appear similar, alternating in maximum height. SOCET SET has more topographic variability due to its
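
    The two profile corrections described above are straightforward to reproduce. A hedged numpy sketch, with synthetic profiles standing in for the ArcGIS elevation extractions:

```python
import numpy as np

x = np.linspace(0.0, 10.0, 200)                   # distance along profile (km)
true = 100.0 * np.exp(-((x - 5.0) ** 2))          # a rim-like topographic feature

profile_a = true + 35.0                           # vertical offset only
profile_b = true + 4.0 * x - 12.0                 # skewed (tilted) DEM

def zero_point(p):
    """Vertical correction: subtract the global minimum."""
    return p - p.min()

def deskew(p, x):
    """Skew correction: remove the global linear slope, then re-zero."""
    slope, intercept = np.polyfit(x, p, 1)
    return zero_point(p - (slope * x + intercept))

a = zero_point(profile_a)
b = deskew(profile_b, x)
print("max residual between corrected profiles:", np.abs(a - b).max())
```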

  5. Morphology of the pore space in claystones - evidence from BIB/FIB ion beam sectioning and cryo-SEM observations

    Science.gov (United States)

    Desbois, G.; Urai, J. L.; Kukla, P. A.

    2009-12-01

    Mudrocks and clay-rich fault gouges are important mechanical elements in the Earth's crust and form seals for crustal fluids such as groundwater and hydrocarbons. Other fields of interest are the storage of anthropogenic carbon dioxide and radioactive waste in geologic formations. In addition, coupled flows, capillary processes, and associated deformation are of importance in many applied fields. A key factor in understanding these processes is a detailed understanding of the morphology of the pore space. Classic studies of porosity in fine-grained materials are performed on dried or freeze-dried samples and include metal injection methods, magnetic susceptibility measurement, SEM and TEM imaging, neutron scattering, NMR spectroscopy, and ESEM. Confocal microscopy and X-ray tomography are used to image porosity in coarse-grained sediments, but the resolution of these techniques is not at present sufficient for applications to mudrocks or clay-rich fault gouges. Therefore, observations and interpretations remain difficult because none of these approaches is able to directly describe the in-situ porosity at the pore scale. In addition, some methods require dried samples in which the natural structure of pores may have been damaged to some extent by desiccation and dehydration of the clay minerals. A recently developed alternative is to study wet samples using a cryo-SEM, which allows stabilization of wet media at cryo-temperatures, in-situ sample preparation by ion beam cross-sectioning (BIB, FIB) and observation of the stabilized microstructure at high resolution. We report on a study of Boom clay from a proposed disposal site of radioactive waste (Mol site, Belgium) using cryo-SEM, with ion beam cross-sectioning to prepare smooth, damage-free surfaces. Pores commonly have crack-like tips, preferred orientation parallel to bedding and a power-law size distribution. We define a number of pore types depending on shape and location in the

  6. Numerical simulations of multicomponent ecological models with adaptive methods.

    Science.gov (United States)

    Owolabi, Kolade M; Patidar, Kailash C

    2016-01-08

    The study of the dynamic relationships in multi-species models has gained a huge amount of scientific interest over the years and will continue to maintain its dominance in both ecology and mathematical ecology in the years to come, due to its practical relevance and universal existence. Emergent phenomena include spatiotemporal patterns, oscillating solutions, multiple steady states and spatial pattern formation. Many time-dependent partial differential equations combine low-order nonlinear terms with higher-order linear terms. To obtain reliable results for such problems, it is desirable to use higher-order methods in both space and time. Most computations heretofore have been restricted to second order in time due to difficulties introduced by the combination of stiffness and nonlinearity. The dynamics of the reaction-diffusion models considered in this paper permit the use of two classic mathematical ideas. As a result, we introduce higher-order finite difference approximations for the spatial discretization, and advance the resulting system of ODEs with a family of exponential time differencing schemes. We present the stability properties of these methods along with extensive numerical simulations for a number of multi-species models. When the diffusivity is small, many of the models considered in this paper are found to exhibit a form of localized spatiotemporal pattern. Such patterns are correctly captured in the local analysis of the model equations. Extended 2D results are presented that are in agreement with typical Turing patterns such as stripes and spots, as well as irregular snake-like structures. We finally show that the designed schemes are dynamically consistent. The dynamic complexities of some ecological models are studied by considering their linear stability analysis. Based on the choices of parameters in transforming the system into a dimensionless form, we were able to obtain a well-balanced system that
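
    A minimal sketch of the time-stepping idea on a single-species example (the 1-D Fisher equation, far simpler than the paper's multi-species systems): the stiff linear diffusion term is treated exactly in Fourier space, and the nonlinear reaction term is handled with a first-order exponential time differencing (ETD1) update.

```python
import numpy as np

# Fisher equation u_t = D u_xx + u(1 - u) on a periodic domain.
N, Lx, D, dt, steps = 256, 50.0, 1.0, 0.05, 2000
x = np.linspace(0.0, Lx, N, endpoint=False)
k = 2.0 * np.pi * np.fft.fftfreq(N, d=Lx / N)
c = -D * k ** 2                               # linear (stiff) operator in Fourier space

E = np.exp(c * dt)
# ETD1 coefficient (e^{c dt} - 1)/c, with the c -> 0 limit handled explicitly.
phi = np.where(c == 0.0, dt, (E - 1.0) / np.where(c == 0.0, 1.0, c))

u = 0.01 * np.exp(-((x - Lx / 2) ** 2))       # small initial seed
u_hat = np.fft.fft(u)
for _ in range(steps):
    nonlin = np.fft.fft(u * (1.0 - u))        # reaction term, evaluated in real space
    u_hat = E * u_hat + phi * nonlin          # ETD1: exact linear part, explicit reaction
    u = np.real(np.fft.ifft(u_hat))

print("u range after integration:", float(u.min()), float(u.max()))
```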

  7. Modeling cometary photopolarimetric characteristics with Sh-matrix method

    Science.gov (United States)

    Kolokolova, L.; Petrov, D.

    2017-12-01

    Cometary dust is dominated by particles of complex shape and structure, which are often considered as fractal aggregates. Rigorous modeling of light scattering by such particles, even using parallelized codes and NASA supercomputer resources, is very consuming of computer time and memory. We present a new approach to modeling cometary dust that is based on the Sh-matrix technique (e.g., Petrov et al., JQSRT, 112, 2012). This method builds on the T-matrix technique (e.g., Mishchenko et al., JQSRT, 55, 1996) and was developed after it was found that the shape-dependent factors can be separated from the size- and refractive-index-dependent factors and presented as a shape matrix, or Sh-matrix. Size and refractive-index dependences are incorporated through analytical operations on the Sh-matrix to produce the elements of the T-matrix. The Sh-matrix method keeps all the advantages of the T-matrix method, including analytical averaging over particle orientation. Moreover, the surface integrals describing the Sh-matrix elements themselves can be solved analytically for particles of any shape. This makes the Sh-matrix approach an effective technique for simulating light scattering by particles of complex shape and surface structure. In this paper, we represent cometary dust as an ensemble of Gaussian random particles, whose shape is described by a log-normal distribution of radius length and direction (Muinonen, EMP, 72, 1996). By changing one of the parameters of this distribution, the correlation angle, from 0 to 90 degrees, we can model a variety of particles, from spheres to particles of random complex shape. We survey the angular and spectral dependencies of intensity and polarization resulting from light scattering by such particles, studying how they depend on particle shape, size, and composition (including porous particles to simulate aggregates) to find the best fit to the cometary observations.
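
    The Gaussian-random-particle shape model itself is easy to sketch. A 2-D illustrative version (the papers use the full 3-D formulation): the log-radius is a correlated Gaussian process around the circle, so small correlation angles give rough, complex outlines and large ones give nearly spherical particles.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 360
theta = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)

def gaussian_random_particle(corr_angle_deg, sigma=0.2):
    """2-D Gaussian random particle: the log-radius is a correlated
    Gaussian process on the circle (lognormal radius statistics)."""
    gamma = np.radians(corr_angle_deg)
    d = np.abs(theta[:, None] - theta[None, :])
    d = np.minimum(d, 2.0 * np.pi - d)          # angular distance on the circle
    C = np.exp(-0.5 * (d / gamma) ** 2)         # Gaussian correlation function
    # Matrix square root via eigendecomposition (robust to tiny negative modes).
    vals, vecs = np.linalg.eigh(C)
    root = vecs @ np.diag(np.sqrt(np.clip(vals, 0.0, None)))
    log_r = sigma * (root @ rng.normal(size=n))
    return np.exp(log_r)                        # radius as a function of theta

for corr in (10.0, 30.0, 80.0):
    r = gaussian_random_particle(corr)
    print(f"correlation angle {corr:4.0f} deg -> radius std {r.std():.3f}, "
          f"max/min {r.max() / r.min():.2f}")
```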

  8. Statistical Models and Methods for Network Meta-Analysis.

    Science.gov (United States)

    Madden, L V; Piepho, H-P; Paul, P A

    2016-08-01

    Meta-analysis, the methodology for analyzing the results from multiple independent studies, has grown tremendously in popularity over the last four decades. Although most meta-analyses involve a single effect size (summary result, such as a treatment difference) from each study, there are often multiple treatments of interest across the network of studies in the analysis. Multi-treatment (or network) meta-analysis can be used for simultaneously analyzing the results from all the treatments. However, the methodology is considerably more complicated than for the analysis of a single effect size, and there have not been adequate explanations of the approach for agricultural investigations. We review the methods and models for conducting a network meta-analysis based on frequentist statistical principles, and demonstrate the procedures using a published multi-treatment plant pathology data set. A major advantage of network meta-analysis is that correlations of estimated treatment effects are automatically taken into account when an appropriate model is used. Moreover, treatment comparisons may be possible in a network meta-analysis that are not possible in a single study because all treatments of interest may not be included in any given study. We review several models that consider the study effect as either fixed or random, and show how to interpret model-fitting output. We further show how to model the effect of moderator variables (study-level characteristics) on treatment effects, and present one approach to test for the consistency of treatment effects across the network. Online supplemental files give explanations on fitting the network meta-analytical models using SAS.
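
    A hedged numpy sketch of the contrast-based fixed-effect model (the paper develops richer fixed- and random-effects models in SAS; the study data here are invented): each study contributes an effect-size contrast between two treatments, and weighted least squares on a design matrix of treatment indicators recovers all effects against a reference, with the correlations of the estimates handled automatically.

```python
import numpy as np

# Invented network: treatments A (reference), B, C.
# Each row: (treatment, comparator, observed difference, variance).
studies = [("B", "A", 0.50, 0.04),
           ("C", "A", 0.80, 0.05),
           ("C", "B", 0.35, 0.06),
           ("B", "A", 0.45, 0.03)]

treatments = ["B", "C"]                      # effects relative to reference A
col = {t: i for i, t in enumerate(treatments)}

X = np.zeros((len(studies), len(treatments)))
y = np.zeros(len(studies))
w = np.zeros(len(studies))
for r, (t, c, diff, var) in enumerate(studies):
    if t in col:
        X[r, col[t]] += 1.0
    if c in col:
        X[r, col[c]] -= 1.0
    y[r], w[r] = diff, 1.0 / var             # inverse-variance weights

Wm = np.diag(w)
beta = np.linalg.solve(X.T @ Wm @ X, X.T @ Wm @ y)
cov = np.linalg.inv(X.T @ Wm @ X)            # estimate correlations come for free
for t in treatments:
    print(f"effect of {t} vs A: {beta[col[t]]:.3f} "
          f"(SE {np.sqrt(cov[col[t], col[t]]):.3f})")
```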

  9. Tail modeling in a stretched magnetosphere. I - Methods and transformations

    Science.gov (United States)

    Stern, David P.

    1987-01-01

    A new method is developed for representing the magnetospheric field B as a distorted dipole field. Because ∇·B = 0 must be maintained, such a distortion may be viewed as a transformation of the vector potential A. The simplest form is a one-dimensional 'stretch transformation' along the x axis, concisely represented by the 'stretch function' f(x), which is also a convenient tool for representing features of the substorm cycle. One-dimensional stretch transformations are extended to spherical, cylindrical, and parabolic coordinates and then to arbitrary coordinates. It is shown that distortion transformations can be viewed as mappings of field lines from one pattern to another; the final result only requires knowledge of the field and not of the potentials. General transformations in Cartesian and arbitrary coordinates are derived, and applications to field modeling, field line motion, MHD modeling, and incompressible fluid dynamics are considered.
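
    In two dimensions the construction is compact enough to sketch. Assuming a field derived from a scalar flux function A(x, z) (a simplified stand-in for the full vector-potential formalism), stretching A along x with a stretch function f(x) yields a distorted field that remains divergence-free by construction:

```python
import numpy as np

# 2-D dipole-like flux function A(x, z); field lines are contours of A,
# and B = curl(A y_hat) is divergence-free for any smooth A.
def A(x, z):
    r2 = x ** 2 + z ** 2 + 1.0      # softened core keeps the grid fields smooth
    return x / r2

def f(x, stretch=2.0):
    """Illustrative stretch function: tailward stretching for x < 0 only."""
    return np.where(x < 0.0, x * stretch, x)

nx, nz = 200, 200
x = np.linspace(-10.0, 10.0, nx)
z = np.linspace(-5.0, 5.0, nz)
X, Z = np.meshgrid(x, z, indexing="ij")

A_stretched = A(f(X), Z)            # distorted flux function A'(x, z) = A(f(x), z)

# B'_x = -dA'/dz, B'_z = dA'/dx  (numerical gradients on the grid).
dAdx = np.gradient(A_stretched, x, axis=0)
dAdz = np.gradient(A_stretched, z, axis=1)
Bx, Bz = -dAdz, dAdx

# The divergence vanishes analytically; check it is small numerically.
div = np.gradient(Bx, x, axis=0) + np.gradient(Bz, z, axis=1)
print("max |div B| (numerical):", np.abs(div).max())
```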

  10. Engineering models and methods for industrial cell control

    DEFF Research Database (Denmark)

    Lynggaard, Hans Jørgen Birk; Alting, Leo

    1997-01-01

    This paper is concerned with the engineering, i.e. the designing and making, of industrial cell control systems. The focus is on automated robot welding cells in the shipbuilding industry. The industrial research project defines models and methods for design and implementation of computer-based control and monitoring systems for production cells. The project participants are The Danish Academy of Technical Sciences, the Institute of Manufacturing Engineering at the Technical University of Denmark and ODENSE STEEL SHIPYARD Ltd. The manufacturing environment and the current practice for engineering of cell control systems have been analysed, as well as automation software enablers. A number of problems related to these issues are identified. In order to support engineering of cell control systems by the use of enablers, a generic cell control data model and an architecture have been defined…

  11. Computational methods of the Advanced Fluid Dynamics Model

    International Nuclear Information System (INIS)

    Bohl, W.R.; Wilhelm, D.; Parker, F.R.

    1987-01-01

    To more accurately treat severe accidents in fast reactors, a program has been set up to investigate new computational models and approaches. The product of this effort is a computer code, the Advanced Fluid Dynamics Model (AFDM). This paper describes some of the basic features of the numerical algorithm used in AFDM. Aspects receiving particular emphasis are the fractional-step method of time integration, the semi-implicit pressure iteration, the virtual mass inertial terms, the use of three velocity fields, higher order differencing, convection of interfacial area with source and sink terms, multicomponent diffusion processes in heat and mass transfer, the SESAME equation of state, and vectorized programming. A calculated comparison with an isothermal tetralin/ammonia experiment is performed. We conclude that significant improvements are possible in reliably calculating the progression of severe accidents with further development

  12. Scattering of surface waves modelled by the integral equation method

    Science.gov (United States)

    Lu, Laiyu; Maupin, Valerie; Zeng, Rongsheng; Ding, Zhifeng

    2008-09-01

    The integral equation method is used to model the propagation of surface waves in 3-D structures. The wavefield is represented by a Fredholm integral equation, and the scattered surface waves are calculated by solving the integral equation numerically. The integration of the Green's function elements is given analytically by treating the singularity of the Hankel function at R = 0, based on a proper expression of the Green's function and the addition theorem of the Hankel function. No far-field or Born approximation is made. We investigate the scattering of surface waves propagating in layered reference models embedding a heterogeneity with density as well as Lamé-constant contrasts, in both the frequency and time domains, for incident plane waves and point sources.
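
    A hedged sketch of the numerical core in one dimension (the paper's kernel involves the Hankel-function Green's function on a 3-D heterogeneity; here a smooth kernel on an interval stands in): discretize the Fredholm integral equation of the second kind with quadrature weights and solve the resulting linear system (the Nyström method).

```python
import numpy as np

# Solve u(x) = g(x) + lam * int_0^1 K(x, t) u(t) dt  (Fredholm, 2nd kind).
lam = 0.5
K = lambda x, t: np.exp(-np.abs(x - t))        # smooth stand-in kernel
g = lambda x: np.sin(np.pi * x)                # incident ("source") term

n = 200
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]
w = np.full(n, h)                              # trapezoid quadrature weights
w[0] = w[-1] = 0.5 * h

# Nystrom discretization: (I - lam * K * W) u = g at the quadrature nodes.
Amat = np.eye(n) - lam * K(x[:, None], x[None, :]) * w[None, :]
u = np.linalg.solve(Amat, g(x))

# Sanity check: plug u back into the discretized integral equation.
resid = u - (g(x) + lam * (K(x[:, None], x[None, :]) * w) @ u)
print("max residual:", np.abs(resid).max())
```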

  13. A model for explaining fusion suppression using classical trajectory method

    Directory of Open Access Journals (Sweden)

    Phookan C. K.

    2015-01-01

    Full Text Available We adopt a semi-classical approach to explain projectile breakup and above-barrier fusion suppression for the reactions 6Li+152Sm and 6Li+144Sm. The cut-off impact parameter for fusion is determined by employing quantum mechanical ideas. Within this cut-off impact parameter for fusion, the fraction of projectiles undergoing breakup is determined using the method of classical trajectories in two dimensions. To obtain the initial conditions for the equations of motion, a simplified model of the 6Li nucleus is proposed. We introduce a simple formula for explaining fusion suppression, and find excellent agreement between the experimental and calculated fusion cross sections. A slight modification of the formula for fusion suppression is also proposed for a three-dimensional model.
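
    A schematic sketch of the classical-trajectory ingredient, not the authors' 6Li model (the potential strength, projectile velocity and fusion radius are invented): integrate a projectile's planar motion in a repulsive Coulomb field for a range of impact parameters and classify each trajectory by its distance of closest approach, which yields a cut-off impact parameter.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Schematic point-projectile trajectory in a repulsive Coulomb field
# (unit mass, illustrative units); the target sits at the origin.
Z_STRENGTH = 50.0        # stands in for Z1*Z2*e^2
R_FUSION = 20.0          # hypothetical barrier radius: inside it, fusion occurs

def rhs(t, y):
    x, z, vx, vz = y
    r = np.hypot(x, z)
    ax, az = Z_STRENGTH * x / r ** 3, Z_STRENGTH * z / r ** 3
    return [vx, vz, ax, az]

def closest_approach(b, v0=3.0):
    # Start far away at x = -100 with offset b (the impact parameter).
    sol = solve_ivp(rhs, (0.0, 200.0), [-100.0, b, v0, 0.0],
                    max_step=0.1, rtol=1e-9)
    return float(np.min(np.hypot(sol.y[0], sol.y[1])))

# Scan impact parameters to locate the fusion cut-off.
for b in np.linspace(0.0, 30.0, 7):
    d = closest_approach(b)
    print(f"b = {b:5.1f}  closest approach = {d:6.2f}  "
          f"{'fusion' if d < R_FUSION else 'scattering'}")
```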

  14. Modeling of Cracked Beams by the Experimental Design Method

    Directory of Open Access Journals (Sweden)

    M. Serier

    Full Text Available Abstract The understanding of phenomena, whatever their nature, rests on the experimental results obtained. In most cases this requires a large number of tests in order to establish reliable and useful observations for subsequently solving technical problems. This paper casts the independent variables and their combinations resulting from experimentation into a mathematical formulation. Indeed, mathematical modeling gives us the advantage of optimizing and predicting the right choices without putting each case through experiment. In this work we apply the experimental design method to the experimental results found by Deokar (2011) concerning the effect of the size and position of a crack on the measured frequency of a cantilever beam, and validate the mathematical model to predict other frequencies.
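
    A hedged sketch of the kind of model the experimental design method produces (the design points and frequencies below are invented, not Deokar's measurements): fit the measured frequency as a second-order response surface in crack size and position by least squares, then use the fitted surface to predict untested combinations.

```python
import numpy as np

# Invented two-factor design: crack size ratio (a/h) and relative
# position (x/L), with measured first natural frequencies (Hz).
size = np.array([0.1, 0.1, 0.1, 0.3, 0.3, 0.3, 0.5, 0.5, 0.5])
pos  = np.array([0.1, 0.5, 0.9, 0.1, 0.5, 0.9, 0.1, 0.5, 0.9])
freq = np.array([49.8, 49.9, 50.0, 48.1, 49.0, 49.8, 44.0, 47.2, 49.5])

# Second-order response-surface model with an interaction term:
# f = b0 + b1*s + b2*p + b3*s^2 + b4*p^2 + b5*s*p
X = np.column_stack([np.ones_like(size), size, pos,
                     size ** 2, pos ** 2, size * pos])
beta, *_ = np.linalg.lstsq(X, freq, rcond=None)

def predict(s, p):
    """Evaluate the fitted response surface at an untested point."""
    return np.array([1.0, s, p, s * s, p * p, s * p]) @ beta

print("coefficients:", np.round(beta, 3))
print("predicted frequency at size 0.4, position 0.3:",
      round(float(predict(0.4, 0.3)), 2))
```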

  15. Genomic Selection in Plant Breeding: Methods, Models, and Perspectives.

    Science.gov (United States)

    Crossa, José; Pérez-Rodríguez, Paulino; Cuevas, Jaime; Montesinos-López, Osval; Jarquín, Diego; de Los Campos, Gustavo; Burgueño, Juan; González-Camacho, Juan M; Pérez-Elizalde, Sergio; Beyene, Yoseph; Dreisigacker, Susanne; Singh, Ravi; Zhang, Xuecai; Gowda, Manje; Roorkiwal, Manish; Rutkoski, Jessica; Varshney, Rajeev K

    2017-11-01

    Genomic selection (GS) facilitates the rapid selection of superior genotypes and accelerates the breeding cycle. In this review, we discuss the history, principles, and basis of GS and genomic-enabled prediction (GP) as well as the genetics and statistical complexities of GP models, including genomic genotype×environment (G×E) interactions. We also examine the accuracy of GP models and methods for two cereal crops and two legume crops based on random cross-validation. GS applied to maize breeding has shown tangible genetic gains. Based on GP results, we speculate how GS in germplasm enhancement (i.e., prebreeding) programs could accelerate the flow of genes from gene bank accessions to elite lines. Recent advances in hyperspectral image technology could be combined with GS and pedigree-assisted breeding. Copyright © 2017 Elsevier Ltd. All rights reserved.
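
    A hedged sketch of the core GP computation on simulated data (the review covers far richer models, including G×E kernels): ridge regression of phenotype on centered marker genotypes, with prediction accuracy estimated by random cross-validation.

```python
import numpy as np

rng = np.random.default_rng(7)
n_lines, n_markers = 300, 1000

Z = rng.integers(0, 3, size=(n_lines, n_markers)).astype(float)  # 0/1/2 genotypes
Z -= Z.mean(axis=0)                                   # center marker columns
true_effects = rng.normal(0.0, 0.05, n_markers)
y = Z @ true_effects + rng.normal(0.0, 1.0, n_lines)  # phenotype = genetic + noise

def ridge_predict(Z_train, y_train, Z_test, lam=500.0):
    """RR-BLUP-style marker-effect estimates: (Z'Z + lam I) b = Z'y."""
    A = Z_train.T @ Z_train + lam * np.eye(Z_train.shape[1])
    b = np.linalg.solve(A, Z_train.T @ (y_train - y_train.mean()))
    return y_train.mean() + Z_test @ b

# Random cross-validation: hold out 20% of lines, predict, correlate.
test = rng.choice(n_lines, n_lines // 5, replace=False)
train = np.setdiff1d(np.arange(n_lines), test)
pred = ridge_predict(Z[train], y[train], Z[test])
accuracy = np.corrcoef(pred, y[test])[0, 1]
print(f"prediction accuracy (r): {accuracy:.2f}")
```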

  16. A mathematical model and numerical method for thermoelectric DNA sequencing

    Science.gov (United States)

    Shi, Liwei; Guilbeau, Eric J.; Nestorova, Gergana; Dai, Weizhong

    2014-05-01

    Single nucleotide polymorphisms (SNPs) are single base pair variations within the genome that are important indicators of genetic predisposition towards specific diseases. This study explores the feasibility of SNP detection using a thermoelectric sequencing method that measures the heat released when DNA polymerase inserts a deoxyribonucleoside triphosphate into a DNA strand. We propose a three-dimensional mathematical model that governs the DNA sequencing device, with a reaction zone that contains the DNA template/primer complex immobilized on the surface of the lower channel wall. The model is then solved numerically, yielding the concentrations of the reactants and the temperature distribution. Results indicate that when the nucleoside is complementary to the next base in the DNA template, polymerization occurs, lengthening the complementary polymer and releasing thermal energy with a measurable temperature change. This implies that the proposed thermoelectric device for sequencing DNA may be feasible for identifying specific genes in individuals.
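
    A hedged 1-D analogue of the thermal part of the model (the paper's model is three-dimensional and coupled to reactant transport; all constants here are illustrative): an explicit finite-difference solution of the heat equation with a transient heat source localized in a reaction zone at the channel wall.

```python
import numpy as np

# 1-D heat equation T_t = alpha * T_xx + q(x, t) across the channel.
L, nx = 1.0e-3, 101                     # 1 mm channel, grid points
alpha = 1.4e-7                          # thermal diffusivity (water-like), m^2/s
dx = L / (nx - 1)
dt = 0.4 * dx ** 2 / alpha              # explicit stability: dt <= dx^2 / (2 alpha)

x = np.linspace(0.0, L, nx)
T = np.zeros(nx)                        # temperature rise above ambient (K)

# Heat released by polymerization, localized near the lower wall (x ~ 0)
# and decaying in time as the nucleotide is consumed.
def q(t):
    zone = np.exp(-x / 5.0e-5)          # reaction-zone thickness ~50 um
    return 5.0 * zone * np.exp(-t / 0.2)

t, t_end = 0.0, 0.5
while t < t_end:
    T[1:-1] += dt * (alpha * (T[2:] - 2 * T[1:-1] + T[:-2]) / dx ** 2
                     + q(t)[1:-1])
    T[0] = T[1]                         # insulated wall on the sensor side
    T[-1] = 0.0                         # far wall held at ambient
    t += dt

print(f"peak temperature rise: {T.max() * 1e3:.2f} mK "
      f"at x = {x[T.argmax()] * 1e6:.0f} um")
```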

  17. Modeling of electromigration salt removal methods in building materials

    DEFF Research Database (Denmark)

    Johannesson, Björn; Ottosen, Lisbeth M.

    2008-01-01

    and the effect of the composition of the ionic constituents on the overall behavior of the salt removal process. The model is obtained by assigning a Fick's-law type of assumption to each ionic species considered, and by assuming that all ions are affected by the applied external electrical field in accordance with their ionic mobility properties. It is further assumed that Gauss's law can be used to calculate the internal electrical field induced by the diffusion itself. In this manner the applied external electrical field can be modeled simply by assigning proper boundary conditions to the equation calculating the electrical field. A tailor-made finite element code is written, capable of solving the transient non-linear coupled set of differential equations numerically. A truly implicit time integration scheme is used together with a modified Newton-Raphson method to tackle the non
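
    A hedged 1-D finite-difference sketch of the transport law described above (the paper uses a tailor-made finite element code with an implicit scheme and a field computed from Gauss's law; here the field is a fixed applied value and all constants are illustrative):

```python
import numpy as np

# 1-D Nernst-Planck transport of a single ionic species under an
# applied field: c_t = D c_xx - v c_x, with drift velocity v = z * u * E.
L, nx = 0.1, 201                        # 10 cm specimen
dx = L / (nx - 1)
D = 1.0e-9                              # diffusion coefficient, m^2/s
mobility, z, E = 5.0e-8, 1.0, 100.0     # m^2/(V s), valence, V/m
v = z * mobility * E                    # drift velocity, m/s

dt = min(0.4 * dx ** 2 / D, 0.4 * dx / v)   # explicit stability limits

c = np.ones(nx)                         # initial salt profile (normalized)
t, t_end = 0.0, 1.0e4                   # ~3 hours of treatment
while t < t_end:
    diff = D * (c[2:] - 2 * c[1:-1] + c[:-2]) / dx ** 2
    adv = -v * (c[1:-1] - c[:-2]) / dx  # upwind: v > 0, ions drift toward x = L
    c[1:-1] += dt * (diff + adv)
    c[0] = 0.0                          # depleted at the upstream electrode
    c[-1] = c[-2]                       # ions extracted at the far side
    t += dt

print(f"salt remaining: {np.trapz(c, dx=dx) / L:.1%}")
```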

  18. Methods for Developing Emissions Scenarios for Integrated Assessment Models

    Energy Technology Data Exchange (ETDEWEB)

    Prinn, Ronald [MIT; Webster, Mort [MIT

    2007-08-20

    The overall objective of this research was to contribute data and methods to support the future development of new emissions scenarios for integrated assessment of climate change. Specifically, this research had two main objectives: 1. Use historical data on economic growth and energy efficiency changes, and develop probability density functions (PDFs) for the appropriate parameters for two or three commonly used integrated assessment models. 2. Using the parameter distributions developed through the first task and previous work, we will develop methods of designing multi-gas emission scenarios that usefully span the joint uncertainty space in a small number of scenarios. Results on the autonomous energy efficiency improvement (AEEI) parameter are summarized, an uncertainty analysis of elasticities of substitution is described, and the probabilistic emissions scenario approach is presented.
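
    A hedged sketch of how the two tasks connect (the distributions and the growth model are invented placeholders, not the project's fitted PDFs): sample growth and AEEI parameters from their PDFs, project emissions forward, and read off the spread that a small scenario set would need to span.

```python
import numpy as np

rng = np.random.default_rng(42)
n_draws, years = 10000, 100

# Placeholder PDFs for two key uncertain parameters: GDP growth rate
# and the autonomous energy efficiency improvement (AEEI) rate.
gdp_growth = rng.normal(0.025, 0.01, n_draws)      # per year
aeei = rng.normal(0.010, 0.005, n_draws)           # per year

# Simple Kaya-style projection: emissions grow with GDP and shrink
# with energy-efficiency improvement, compounded over a century.
e0 = 10.0                                          # GtC/yr today (illustrative)
emissions_2100 = e0 * np.exp((gdp_growth - aeei) * years)

# A small scenario set should span the joint uncertainty: report the
# quantiles a low/central/high trio would need to cover.
for q in (0.05, 0.50, 0.95):
    print(f"{int(q * 100):2d}th percentile of 2100 emissions: "
          f"{np.quantile(emissions_2100, q):6.1f} GtC/yr")
```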

  19. Bayesian statistical methods and their application in probabilistic simulation models

    Directory of Open Access Journals (Sweden)

    Sergio Iannazzo

    2007-03-01

    Full Text Available Bayesian statistical methods are facing a rapidly growing level of interest and acceptance in the field of health economics. The reasons for this success are probably to be found in the theoretical foundations of the discipline, which make these techniques more appealing to decision analysis. To this should be added the modern progress in information technology, which has produced several flexible and powerful statistical software frameworks. Among them, probably the most notable is the BUGS language project and its standalone application for MS Windows, WinBUGS. The scope of this paper is to introduce the subject and to show some interesting applications of WinBUGS in developing complex economic models based on Markov chains. The advantages of this approach reside in the elegance of the code produced and in its capability to easily develop probabilistic simulations. Moreover, an example of the integration of Bayesian inference models into a Markov model is shown. This last feature lets the analyst conduct statistical analyses on the available sources of evidence and exploit them directly as inputs to the economic model.
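
    A hedged Python sketch of the kind of probabilistic Markov model the paper builds in WinBUGS (states, probabilities and costs are invented; in the paper the transition probabilities would come from Bayesian inference on the evidence sources rather than from the fixed Beta distributions drawn here):

```python
import numpy as np

rng = np.random.default_rng(3)
n_sims, n_cycles = 5000, 40                  # PSA draws, yearly cycles

cost = np.array([500.0, 4000.0, 0.0])        # annual cost per state
utility = np.array([0.85, 0.55, 0.0])        # annual QALYs per state

results = np.zeros((n_sims, 2))              # (total cost, total QALYs)
for s in range(n_sims):
    # Draw uncertain transition probabilities from Beta distributions
    # (standing in for posterior distributions estimated in WinBUGS).
    p_well_sick = rng.beta(20, 180)          # ~0.10
    p_sick_dead = rng.beta(15, 85)           # ~0.15
    P = np.array([[1 - p_well_sick, p_well_sick, 0.0],
                  [0.0, 1 - p_sick_dead, p_sick_dead],
                  [0.0, 0.0, 1.0]])          # states: well, sick, dead

    cohort = np.array([1.0, 0.0, 0.0])       # everyone starts well
    for _ in range(n_cycles):
        cohort = cohort @ P                  # one Markov cycle
        results[s] += (cohort @ cost, cohort @ utility)

mean_cost, mean_qaly = results.mean(axis=0)
print(f"mean cost {mean_cost:,.0f}, mean QALYs {mean_qaly:.2f}")
```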

  20. Engineering models and methods for industrial cell control

    DEFF Research Database (Denmark)

    Lynggaard, Hans Jørgen Birk; Alting, Leo

    1997-01-01

    This paper is concerned with the engineering, i.e. the designing and making, of industrial cell control systems. The focus is on automated robot welding cells in the shipbuilding industry. The industrial research project defines models and methods for design and implementation of computer-based … Further, an engineering methodology is defined. The three elements enablers, architecture and methodology constitute the Cell Control Engineering concept, which has been defined and evaluated through the implementation of two cell control systems for robot welding cells in production at ODENSE STEEL